Williams, Malcolm; Sloan, Luke; Cheung, Sin Yi; Sutton, Carole; Stevens, Sebastian; Runham, Libby
2015-01-01
This paper reports on a quasi-experiment in which quantitative methods (QM) are embedded within a substantive sociology module. Through measuring student attitudes before and after the intervention alongside control group comparisons, we illustrate the impact that embedding has on the student experience. Our findings are complex and even contradictory. Whilst the experimental group were less likely to be distrustful of statistics and more likely to appreciate how QM inform social research, they were also less confident about their statistical abilities, suggesting that through ‘doing’ quantitative sociology the experimental group are exposed to the intricacies of method and their optimism about their own abilities is challenged. We conclude that embedding QM in a single substantive module is not a ‘magic bullet’ and that a wider programme of content and assessment diversification across the curriculum is preferable. PMID:27330225
Steganographic embedding in containers-images
NASA Astrophysics Data System (ADS)
Nikishova, A. V.; Omelchenko, T. A.; Makedonskij, S. A.
2018-05-01
Steganography is one of the approaches to protecting information transmitted over a network, but the steganographic method should vary depending on the container used. According to statistics, the most widely used containers are images, and the most common image format is JPEG. The authors propose a method of data embedding into the frequency domain of images in JPEG 2000 format. It is proposed to use the method of Benham-Memon-Yeo-Yeung, in which the discrete cosine transform is replaced by the discrete wavelet transform. Two requirements for images are formulated. Structural similarity is chosen to assess the quality of data embedding. Experiments confirm that satisfying these requirements yields a high quality assessment of the data embedding.
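A rough sketch of the kind of scheme described (not the authors' implementation, and without a true JPEG 2000 codec): bits are hidden by perturbing detail-subband wavelet coefficients and the result is scored with structural similarity. It assumes PyWavelets and scikit-image; the test image, subband choice and embedding strength are arbitrary.

```python
# Hedged sketch: embed bits in the strongest diagonal-detail DWT coefficients
# and score cover vs. stego with structural similarity (SSIM).
import numpy as np
import pywt
from skimage import data
from skimage.metrics import structural_similarity

def embed_bits_dwt(image, bits, strength=4.0):
    """Embed a bit sequence by perturbing the strongest HH-subband coefficients."""
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), "haar")
    cD = cD.copy()
    flat = cD.ravel()                                     # view into the copied subband
    idx = np.argsort(np.abs(flat))[::-1][: len(bits)]     # strongest detail coefficients
    flat[idx] += strength * (2.0 * np.asarray(bits) - 1.0)
    return pywt.idwt2((cA, (cH, cV, cD)), "haar")

cover = data.camera().astype(float)
message = np.random.randint(0, 2, 256)
stego = embed_bits_dwt(cover, message)[: cover.shape[0], : cover.shape[1]]
quality = structural_similarity(cover, stego, data_range=255)
print(f"SSIM between cover and stego: {quality:.4f}")
```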
Generalised Category Attack—Improving Histogram-Based Attack on JPEG LSB Embedding
NASA Astrophysics Data System (ADS)
Lee, Kwangsoo; Westfeld, Andreas; Lee, Sangjin
We present a generalised and improved version of the category attack on LSB steganography in JPEG images with a straddled embedding path. It detects low embedding rates more reliably and is also less disturbed by double-compressed images. The proposed methods are evaluated on several thousand images, and the results are compared to both recent blind and specific attacks for JPEG embedding. The proposed attack permits more reliable detection, although it is based on first-order statistics only. Its simple structure makes it very fast.
Dimension from covariance matrices.
Carroll, T L; Byers, J M
2017-02-01
We describe a method to estimate embedding dimension from a time series. This method includes an estimate of the probability that the dimension estimate is valid. Such validity estimates are not common in algorithms for calculating the properties of dynamical systems. The algorithm described here compares the eigenvalues of covariance matrices created from an embedded signal to the eigenvalues for a covariance matrix of a Gaussian random process with the same dimension and number of points. A statistical test gives the probability that the eigenvalues for the embedded signal did not come from the Gaussian random process.
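A toy sketch of the underlying idea, not the authors' statistical test: delay-embed a signal, compute the eigenvalues of its covariance matrix, and compare them with those of a Gaussian random process of the same dimension and length. The signal, embedding dimension and number of surrogate draws are arbitrary.

```python
# Compare the covariance eigen-spectrum of a delay-embedded signal with that
# of white Gaussian noise embedded the same way (illustrative only).
import numpy as np

def delay_embed(x, dim, tau=1):
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def cov_eigenvalues(vectors):
    return np.sort(np.linalg.eigvalsh(np.cov(vectors, rowvar=False)))[::-1]

rng = np.random.default_rng(0)
t = np.arange(5000) * 0.05
signal = np.sin(t) + 0.5 * np.sin(2.1 * t)          # low-dimensional deterministic signal
dim = 8

emb_eigs = cov_eigenvalues(delay_embed(signal, dim))
noise_eigs = np.mean(
    [cov_eigenvalues(delay_embed(rng.standard_normal(len(signal)), dim)) for _ in range(20)],
    axis=0,
)
print("signal eigenvalues :", np.round(emb_eigs, 3))
print("noise  eigenvalues :", np.round(noise_eigs, 3))
# The deterministic signal's spectrum collapses after a few components, while
# the Gaussian surrogate's spectrum stays nearly flat.
```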
Gear Fatigue Crack Diagnosis by Vibration Analysis Using Embedded Modeling
2001-04-05
gave references on the Wigner-Ville Distribution (WVD) and some statistics-based methods including FM4, NA4 and NB4. There are limitations for vibration...
On Some Assumptions of the Null Hypothesis Statistical Testing
ERIC Educational Resources Information Center
Patriota, Alexandre Galvão
2017-01-01
Bayesian and classical statistical approaches are based on different types of logical principles. In order to avoid mistaken inferences and misguided interpretations, the practitioner must respect the inference rules embedded into each statistical method. Ignoring these principles leads to the paradoxical conclusions that the hypothesis…
Properties of nanocrystalline Si layers embedded in structure of solar cell
NASA Astrophysics Data System (ADS)
Jurečka, Stanislav; Imamura, Kentaro; Matsumoto, Taketoshi; Kobayashi, Hikaru
2017-12-01
Suppression of spectral reflectance from the surface of a solar cell is necessary for achieving high energy conversion efficiency. We developed a simple method for forming nanocrystalline layers with ultralow reflectance over a broad range of wavelengths. The method is based on metal-assisted etching of the silicon surface. In this work, we prepared Si solar cell structures with embedded nanocrystalline layers. The microstructure of the embedded layer depends on the etching conditions. We examined the microstructure of the etched layers with a transmission electron microscope and analysed the experimental images by statistical and Fourier methods. The results provide information on the applied treatment operations and can be used to optimize the solar cell forming procedure.
A novel sample preparation method to avoid influence of embedding medium during nano-indentation
NASA Astrophysics Data System (ADS)
Meng, Yujie; Wang, Siqun; Cai, Zhiyong; Young, Timothy M.; Du, Guanben; Li, Yanjun
2013-02-01
The effect of the embedding medium on the nano-indentation measurements of lignocellulosic materials was investigated experimentally using nano-indentation. Both the reduced elastic modulus and the hardness of non-embedded cell walls were found to be lower than those of the embedded samples, proving that the embedding medium used for specimen preparation on cellulosic material during nano-indentation can modify cell-wall properties. This leads to structural and chemical changes in the cell-wall constituents, changes that may significantly alter the material properties. Further investigation was carried out to detect the influence of different vacuum times on the cell-wall mechanical properties during the embedding procedure. Interpretation of the statistical analysis revealed no linear relationships between vacuum time and the mechanical properties of cell walls. The quantitative measurements confirm that low-viscosity resin has a rapid penetration rate early in the curing process. Finally, a novel sample preparation method aimed at preventing resin diffusion into lignocellulosic cell walls was developed using a plastic film to wrap the sample before embedding. This method proved to be accessible and straightforward for many kinds of lignocellulosic material, but is especially suitable for small, soft samples.
Practical steganalysis of digital images: state of the art
NASA Astrophysics Data System (ADS)
Fridrich, Jessica; Goljan, Miroslav
2002-04-01
Steganography is the art of hiding the very presence of communication by embedding secret messages into innocuous looking cover documents, such as digital images. Detection of steganography, estimation of message length, and its extraction belong to the field of steganalysis. Steganalysis has recently received a great deal of attention both from law enforcement and the media. In our paper, we classify and review current stego-detection algorithms that can be used to trace popular steganographic products. We recognize several qualitatively different approaches to practical steganalysis - visual detection, detection based on first order statistics (histogram analysis), dual statistics methods that use spatial correlations in images and higher-order statistics (RS steganalysis), universal blind detection schemes, and special cases, such as JPEG compatibility steganalysis. We also present some new results regarding our previously proposed detection of LSB embedding using sensitive dual statistics. The recent steganalytic methods indicate that the most common paradigm in image steganography - the bit-replacement or bit substitution - is inherently insecure with safe capacities far smaller than previously thought.
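As a concrete example of the "first order statistics (histogram analysis)" family mentioned above, the sketch below implements the textbook chi-square (pairs-of-values) attack on LSB replacement; it is not the authors' RS dual-statistics method, and the p-value reading is only indicative.

```python
# Chi-square pairs-of-values attack on LSB replacement; a p-value near 1
# suggests histogram pairs have been equalised by full LSB embedding.
import numpy as np
from scipy.stats import chi2
from skimage import data

def chi_square_lsb_pvalue(pixels):
    """Probability that pairs-of-values frequencies are as equalised as observed."""
    hist = np.bincount(pixels.ravel(), minlength=256).astype(float)
    even, odd = hist[0::2], hist[1::2]
    expected = (even + odd) / 2.0
    mask = expected > 0
    stat = np.sum((even[mask] - expected[mask]) ** 2 / expected[mask])
    return chi2.sf(stat, mask.sum() - 1)

rng = np.random.default_rng(1)
cover = data.camera()
stego = (cover & 0xFE) | rng.integers(0, 2, cover.shape, dtype=np.uint8)  # full LSB replacement
print("cover p-value:", chi_square_lsb_pvalue(cover))
print("stego p-value:", chi_square_lsb_pvalue(stego))
```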
NASA Astrophysics Data System (ADS)
Sandford, Maxwell T., II; Bradley, Jonathan N.; Handel, Theodore G.
1996-01-01
Data embedding is a new steganographic method for combining digital information sets. This paper describes the data embedding method and gives examples of its application using software written in the C programming language. Sandford and Handel produced a computer program (BMPEMBED, Ver. 1.51 written for IBM PC/AT or compatible, MS/DOS Ver. 3.3 or later) that implements data embedding in an application for digital imagery. Information is embedded into, and extracted from, Truecolor or color-palette images in Microsoft bitmap (.BMP) format. Hiding data in the noise component of a host, by means of an algorithm that modifies or replaces the noise bits, is termed 'steganography.' Data embedding differs markedly from conventional steganography, because it uses the noise component of the host to insert information with few or no modifications to the host data values or their statistical properties. Consequently, the entropy of the host data is affected little by using data embedding to add information. The data embedding method applies to host data compressed with transform, or 'lossy', compression algorithms, as for example ones based on the discrete cosine transform and wavelet functions. Analysis of the host noise generates a key required for embedding and extracting the auxiliary data from the combined data. The key is stored easily in the combined data. Images without the key cannot be processed to extract the embedded information. To provide security for the embedded data, one can remove the key from the combined data and manage it separately. The image key can be encrypted and stored in the combined data or transmitted separately as a ciphertext much smaller in size than the embedded data. The key size is typically ten to one-hundred bytes, and it is derived from the original host data by an analysis algorithm.
Rackauckas, Christopher; Nie, Qing
2017-01-01
Adaptive time-stepping with high-order embedded Runge-Kutta pairs and rejection sampling provides efficient approaches for solving differential equations. While many such methods exist for solving deterministic systems, little progress has been made for stochastic variants. One challenge in developing adaptive methods for stochastic differential equations (SDEs) is the construction of embedded schemes with direct error estimates. We present a new class of embedded stochastic Runge-Kutta (SRK) methods with strong order 1.5 which have a natural embedding of strong order 1.0 methods. This allows for the derivation of an error estimate which requires no additional function evaluations. Next we derive a general method to reject the time steps without losing information about the future Brownian path termed Rejection Sampling with Memory (RSwM). This method utilizes a stack data structure to do rejection sampling, costing only a few floating point calculations. We show numerically that the methods generate statistically-correct and tolerance-controlled solutions. Lastly, we show that this form of adaptivity can be applied to systems of equations, and demonstrate that it solves a stiff biological model 12.28x faster than common fixed timestep algorithms. Our approach only requires the solution to a bridging problem and thus lends itself to natural generalizations beyond SDEs.
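A simplified, deterministic illustration of the embedded-pair control loop described above: an Euler step embedded in a Heun step gives a free local error estimate, and steps are rejected and retried when the estimate exceeds the tolerance. The paper's stochastic Runge-Kutta pairs and the RSwM Brownian-bridge stack are not reproduced; only the accept/reject skeleton is shown.

```python
# Adaptive stepping with an embedded Euler/Heun pair for an ODE (sketch only).
import numpy as np

def adaptive_heun(f, t0, y0, t_end, tol=1e-6, h=0.1):
    t, y = t0, np.atleast_1d(np.asarray(y0, dtype=float))
    ts, ys = [t], [y.copy()]
    while t < t_end:
        h = min(h, t_end - t)
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y_low = y + h * k1                    # embedded order-1 (Euler) solution
        y_high = y + 0.5 * h * (k1 + k2)      # order-2 (Heun) solution
        err = np.max(np.abs(y_high - y_low)) + 1e-15   # free error estimate
        if err <= tol:                        # accept the step
            t, y = t + h, y_high
            ts.append(t)
            ys.append(y.copy())
        # shrink on rejection, grow cautiously on acceptance
        h *= min(2.0, max(0.2, 0.9 * (tol / err) ** 0.5))
    return np.array(ts), np.array(ys)

# Usage: logistic growth dy/dt = y(1 - y), compared with the known solution.
ts, ys = adaptive_heun(lambda t, y: y * (1 - y), 0.0, [0.1], 10.0)
exact = 1.0 / (1.0 + 9.0 * np.exp(-ts))
print("steps taken:", len(ts), " max error:", np.max(np.abs(ys[:, 0] - exact)))
```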
Low-dimensional approximation searching strategy for transfer entropy from non-uniform embedding
2018-01-01
Transfer entropy from non-uniform embedding is a popular tool for the inference of causal relationships among dynamical subsystems. In this study we present an approach that makes use of low-dimensional conditional mutual information quantities to decompose the original high-dimensional conditional mutual information in the searching procedure of non-uniform embedding for significant variables at different lags. We perform a series of simulation experiments to assess the sensitivity and specificity of our proposed method to demonstrate its advantage compared to previous algorithms. The results provide concrete evidence that low-dimensional approximations can help to improve the statistical accuracy of transfer entropy in multivariate causality analysis and yield a better performance over other methods. The proposed method is especially efficient as the data length grows. PMID:29547669
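A rough sketch of the greedy non-uniform-embedding search, using a crude equal-width binned estimator of (conditional) mutual information. The paper's low-dimensional decomposition and its significance testing are not reproduced; the toy system, candidate lags and bin counts are arbitrary.

```python
# Greedy selection of lagged variables by conditional mutual information (CMI),
# estimated with simple histograms: I(X;Y|Z) = H(X,Z)+H(Y,Z)-H(Z)-H(X,Y,Z).
import numpy as np

def _entropy(*cols, bins=6):
    counts, _ = np.histogramdd(np.column_stack(cols), bins=bins)
    p = counts.ravel() / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def cmi(x, y, z_list, bins=6):
    if not z_list:
        return _entropy(x, bins=bins) + _entropy(y, bins=bins) - _entropy(x, y, bins=bins)
    return (_entropy(x, *z_list, bins=bins) + _entropy(y, *z_list, bins=bins)
            - _entropy(*z_list, bins=bins) - _entropy(x, y, *z_list, bins=bins))

# Toy coupled system: y depends on x lagged by 2 samples.
rng = np.random.default_rng(0)
n, max_lag = 2000, 3
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.6 * x[t - 2] + 0.3 * y[t - 1] + 0.3 * rng.standard_normal()

target = y[max_lag:]
candidates = {(name, lag): series[max_lag - lag : n - lag]
              for name, series in (("x", x), ("y", y)) for lag in range(1, max_lag + 1)}

selected = []
for _ in range(2):                            # pick two terms for the embedding
    gains = {k: cmi(v, target, [candidates[s] for s in selected])
             for k, v in candidates.items() if k not in selected}
    best = max(gains, key=gains.get)
    selected.append(best)
    print(f"selected {best}, CMI gain {gains[best]:.3f}")
```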
New Developments in the Embedded Statistical Coupling Method: Atomistic/Continuum Crack Propagation
NASA Technical Reports Server (NTRS)
Saether, E.; Yamakov, V.; Glaessgen, E.
2008-01-01
A concurrent multiscale modeling methodology that embeds a molecular dynamics (MD) region within a finite element (FEM) domain has been enhanced. The concurrent MD-FEM coupling methodology uses statistical averaging of the deformation of the atomistic MD domain to provide interface displacement boundary conditions to the surrounding continuum FEM region, which, in turn, generates interface reaction forces that are applied as piecewise constant traction boundary conditions to the MD domain. The enhancement is based on the addition of molecular dynamics-based cohesive zone model (CZM) elements near the MD-FEM interface. The CZM elements are a continuum interpretation of the traction-displacement relationships taken from MD simulations using Cohesive Zone Volume Elements (CZVE). The addition of CZM elements to the concurrent MD-FEM analysis provides a consistent set of atomistically-based cohesive properties within the finite element region near the growing crack. Another set of CZVEs are then used to extract revised CZM relationships from the enhanced embedded statistical coupling method (ESCM) simulation of an edge crack under uniaxial loading.
USDA-ARS?s Scientific Manuscript database
Small, coded, pill-sized tracers embedded in grain are proposed as a method for grain traceability. A sampling process for a grain traceability system was designed and investigated by applying probability statistics using a science-based sampling approach to collect an adequate number of tracers fo...
Classifying the embedded young stellar population in Perseus and Taurus and the LOMASS database
NASA Astrophysics Data System (ADS)
Carney, M. T.; Yıldız, U. A.; Mottram, J. C.; van Dishoeck, E. F.; Ramchandani, J.; Jørgensen, J. K.
2016-02-01
Context. The classification of young stellar objects (YSOs) is typically done using the infrared spectral slope or bolometric temperature, but either can result in contamination of samples. More accurate methods to determine the evolutionary stage of YSOs will improve the reliability of statistics for the embedded YSO population and provide more robust stage lifetimes. Aims: We aim to separate the truly embedded YSOs from more evolved sources. Methods: Maps of HCO+ J = 4-3 and C18O J = 3-2 were observed with HARP on the James Clerk Maxwell Telescope (JCMT) for a sample of 56 candidate YSOs in Perseus and Taurus in order to characterize the presence and morphology of emission from high density (n_crit > 10^6 cm^-3) and high column density gas, respectively. These are supplemented with archival dust continuum maps observed with SCUBA on the JCMT and Herschel PACS to compare the morphology of the gas and dust in the protostellar envelopes. The spatial concentration of HCO+ J = 4-3 and 850 μm dust emission are used to classify the embedded nature of YSOs. Results: Approximately 30% of Class 0+I sources in Perseus and Taurus are not Stage I, but are likely to be more evolved Stage II pre-main sequence (PMS) stars with disks. An additional 16% are confused sources with an uncertain evolutionary stage. Outflows are found to make a negligible contribution to the integrated HCO+ intensity for the majority of sources in this study. Conclusions: Separating classifications by cloud reveals that a high percentage of the Class 0+I sources in the Perseus star forming region are truly embedded Stage I sources (71%), while the Taurus cloud hosts a majority of evolved PMS stars with disks (68%). The concentration factor method is useful to correct misidentified embedded YSOs, yielding higher accuracy for YSO population statistics and Stage timescales. Current estimates (0.54 Myr) may overpredict the Stage I lifetime on the order of 30%, resulting in timescales down to 0.38 Myr for the embedded phase.
Jointly learning word embeddings using a corpus and a knowledge base
Bollegala, Danushka; Maehara, Takanori; Kawarabayashi, Ken-ichi
2018-01-01
Methods for representing the meaning of words in vector spaces purely using the information distributed in text corpora have proved to be very valuable in various text mining and natural language processing (NLP) tasks. However, these methods still disregard the valuable semantic relational structure between words in co-occurring contexts. These beneficial semantic relational structures are contained in manually created knowledge bases (KBs) such as ontologies and semantic lexicons, where the meanings of words are represented by defining the various relationships that exist among those words. We combine the knowledge in both a corpus and a KB to learn better word embeddings. Specifically, we propose a joint word representation learning method that uses the knowledge in the KBs, and simultaneously predicts the co-occurrences of two words in a corpus context. In particular, we use the corpus to define our objective function subject to the relational constraints derived from the KB. We further utilise the corpus co-occurrence statistics to propose two novel approaches, Nearest Neighbour Expansion (NNE) and Hedged Nearest Neighbour Expansion (HNE), that dynamically expand the KB and therefore derive more constraints that guide the optimisation process. Our experimental results over a wide range of benchmark tasks demonstrate that the proposed method statistically significantly improves the accuracy of the word embeddings learnt. It outperforms a corpus-only baseline and improves on a number of previously proposed methods that incorporate corpora and KBs in both semantic similarity prediction and word analogy detection tasks. PMID:29529052
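A toy sketch of the joint corpus-plus-KB idea: a GloVe-style least-squares fit to log co-occurrence counts with an added penalty that pulls KB-related words together. The objective, vocabulary, counts and hyperparameters are invented for illustration and do not reproduce the paper's formulation or its NNE/HNE expansions.

```python
# Joint objective sketch: corpus term (fit log co-occurrence counts) plus a
# KB term (pull related word vectors together), trained by gradient descent.
import numpy as np

vocab = ["cat", "dog", "car", "truck"]
V, d = len(vocab), 8
X = np.array([[0, 20, 2, 1],          # synthetic symmetric co-occurrence counts
              [20, 0, 1, 2],
              [2, 1, 0, 25],
              [1, 2, 25, 0]], dtype=float)
kb_pairs = [(0, 1), (2, 3)]           # e.g. synonym/sibling relations from a lexicon

rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((V, d))
lam, lr = 0.5, 0.02

for epoch in range(1000):
    grad = np.zeros_like(W)
    for i in range(V):
        for j in range(V):
            if X[i, j] > 0:
                err = W[i] @ W[j] - np.log(X[i, j])      # corpus term
                grad[i] += err * W[j]
                grad[j] += err * W[i]
    for i, j in kb_pairs:                                # KB constraint term
        grad[i] += lam * (W[i] - W[j])
        grad[j] += lam * (W[j] - W[i])
    W -= lr * grad

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print("cat~dog :", round(cosine(W[0], W[1]), 3))
print("cat~car :", round(cosine(W[0], W[2]), 3))
```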
Bahlmann, Claus; Burkhardt, Hans
2004-03-01
In this paper, we give a comprehensive description of our writer-independent online handwriting recognition system frog on hand. The focus of this work concerns the presentation of the classification/training approach, which we call cluster generative statistical dynamic time warping (CSDTW). CSDTW is a general, scalable, HMM-based method for variable-sized, sequential data that holistically combines cluster analysis and statistical sequence modeling. It can handle general classification problems that rely on this sequential type of data, e.g., speech recognition, genome processing, robotics, etc. Contrary to previous attempts, clustering and statistical sequence modeling are embedded in a single feature space and use a closely related distance measure. We show character recognition experiments of frog on hand using CSDTW on the UNIPEN online handwriting database. The recognition accuracy is significantly higher than reported results of other handwriting recognition systems. Finally, we describe the real-time implementation of frog on hand on a Linux Compaq iPAQ embedded device.
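CSDTW builds cluster analysis and statistical sequence modelling on top of a DTW-style alignment; the sketch below shows only that core dynamic-programming alignment between two variable-length trajectories, not the clustering or the HMM layer, and the pen-stroke data are synthetic.

```python
# Classic dynamic time warping (DTW) distance between two feature sequences.
import numpy as np

def dtw_distance(a, b):
    """DTW distance between two (length, features) sequences."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Two loop-like strokes traced at different speeds, plus a straight line.
t = np.linspace(0.0, 1.0, 60)
stroke_a = np.column_stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
stroke_b = np.column_stack([np.sin(2 * np.pi * t**1.3), np.cos(2 * np.pi * t**1.3)])
stroke_c = np.column_stack([t, t])
print("same shape, warped timing:", round(dtw_distance(stroke_a, stroke_b), 3))
print("different shapes:         ", round(dtw_distance(stroke_a, stroke_c), 3))
```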
Recurrence plot statistics and the effect of embedding
NASA Astrophysics Data System (ADS)
March, T. K.; Chapman, S. C.; Dendy, R. O.
2005-01-01
Recurrence plots provide a graphical representation of the recurrent patterns in a timeseries, the quantification of which is a relatively new field. Here we derive analytical expressions which relate the values of key statistics, notably determinism and entropy of line length distribution, to the correlation sum as a function of embedding dimension. These expressions are obtained by deriving the transformation which generates an embedded recurrence plot from an unembedded plot. A single unembedded recurrence plot thus provides the statistics of all possible embedded recurrence plots. If the correlation sum scales exponentially with embedding dimension, we show that these statistics are determined entirely by the exponent of the exponential. This explains the results of Iwanski and Bradley [J.S. Iwanski, E. Bradley, Recurrence plots of experimental data: to embed or not to embed? Chaos 8 (1998) 861-871] who found that certain recurrence plot statistics are apparently invariant to embedding dimension for certain low-dimensional systems. We also examine the relationship between the mutual information content of two timeseries and the common recurrent structure seen in their recurrence plots. This allows time-localized contributions to mutual information to be visualized. This technique is demonstrated using geomagnetic index data; we show that the AU and AL geomagnetic indices share half their information, and find the timescale on which mutual features appear.
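A minimal sketch of the objects discussed above: build a recurrence plot from a delay-embedded series and compute two standard statistics, recurrence rate and determinism. The test signal, threshold and embedding parameters are arbitrary.

```python
# Recurrence plot of a delay-embedded signal plus two quantification statistics.
import numpy as np

def recurrence_plot(x, dim=1, tau=1, eps=0.2):
    n = len(x) - (dim - 1) * tau
    emb = np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    return dists < eps

def determinism(rp, lmin=2):
    """Fraction of recurrent points that lie on diagonal lines of length >= lmin."""
    n = rp.shape[0]
    diag_hist = np.zeros(n + 1, dtype=int)
    for k in range(-(n - 1), n):
        padded = np.concatenate(([0], np.diagonal(rp, offset=k).astype(int), [0]))
        starts = np.where(np.diff(padded) == 1)[0]
        ends = np.where(np.diff(padded) == -1)[0]
        for length in ends - starts:
            diag_hist[length] += 1
    lengths = np.arange(n + 1)
    total = np.sum(lengths * diag_hist)
    return np.sum(lengths[lmin:] * diag_hist[lmin:]) / total if total else 0.0

t = np.arange(0, 60, 0.1)
rp = recurrence_plot(np.sin(t), dim=2, tau=10, eps=0.3)
print("recurrence rate:", round(rp.mean(), 3))
print("determinism    :", round(determinism(rp), 3))
```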
Concurrent Movement Impairs Incidental but Not Intentional Statistical Learning
ERIC Educational Resources Information Center
Stevens, David J.; Arciuli, Joanne; Anderson, David I.
2015-01-01
The effect of concurrent movement on incidental versus intentional statistical learning was examined in two experiments. In Experiment 1, participants learned the statistical regularities embedded within familiarization stimuli implicitly, whereas in Experiment 2 they were made aware of the embedded regularities and were instructed explicitly to…
On estimation of secret message length in LSB steganography in spatial domain
NASA Astrophysics Data System (ADS)
Fridrich, Jessica; Goljan, Miroslav
2004-06-01
In this paper, we present a new method for estimating the secret message length of bit-streams embedded using the Least Significant Bit embedding (LSB) at random pixel positions. We introduce the concept of a weighted stego image and then formulate the problem of determining the unknown message length as a simple optimization problem. The methodology is further refined to obtain more stable and accurate results for a wide spectrum of natural images. One of the advantages of the new method is its modular structure and a clean mathematical derivation that enables elegant estimator accuracy analysis using statistical image models.
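A simplified, unweighted sketch in the spirit of the weighted-stego idea: predict each cover pixel from its four neighbours, correlate the residual with the LSB-flipped image, and double the result to estimate the embedding rate. The paper's per-pixel weights, refinements and accuracy analysis are omitted, and the test image is a stand-in.

```python
# Unweighted "weighted stego"-style estimate of the LSB-replacement rate.
import numpy as np
from skimage import data

def ws_estimate(stego):
    s = stego.astype(float)
    flipped = s + 1.0 - 2.0 * (s % 2)                    # every pixel with its LSB flipped
    pred = np.zeros_like(s)                              # 4-neighbour average as cover estimate
    pred[1:-1, 1:-1] = (s[:-2, 1:-1] + s[2:, 1:-1] + s[1:-1, :-2] + s[1:-1, 2:]) / 4.0
    core = (slice(1, -1), slice(1, -1))
    lam = np.mean((s[core] - pred[core]) * (s[core] - flipped[core]))
    return 2.0 * lam                                     # estimated fraction of pixels used

rng = np.random.default_rng(2)
cover = data.camera()
rate = 0.4                                               # embed in 40% of the pixels
mask = rng.random(cover.shape) < rate
stego = cover.copy()
stego[mask] = (cover[mask] & 0xFE) | rng.integers(0, 2, int(mask.sum()), dtype=np.uint8)
print("true rate 0.40, estimated:", round(ws_estimate(stego), 3))
print("clean image, estimated   :", round(ws_estimate(cover), 3))
```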
NASA Astrophysics Data System (ADS)
Parekh, Vishwa S.; Jacobs, Jeremy R.; Jacobs, Michael A.
2014-03-01
The evaluation and treatment of acute cerebral ischemia requires a technique that can determine the total area of tissue at risk for infarction using diagnostic magnetic resonance imaging (MRI) sequences. Typical MRI data sets consist of T1- and T2-weighted imaging (T1WI, T2WI) along with advanced MRI parameters of diffusion-weighted imaging (DWI) and perfusion-weighted imaging (PWI) methods. Each of these parameters has distinct radiological-pathological meaning. For example, DWI interrogates the movement of water in the tissue and PWI gives an estimate of the blood flow, both of which are critical measures during the evolution of stroke. In order to integrate these data and give an estimate of the tissue at risk or damaged, we have developed advanced machine learning methods based on unsupervised non-linear dimensionality reduction (NLDR) techniques. NLDR methods are a class of algorithms that use mathematically defined manifolds for statistical sampling of multidimensional classes to generate a discrimination rule of guaranteed statistical accuracy, and they can generate a two- or three-dimensional map which represents the prominent structures of the data and provides an embedded image of meaningful low-dimensional structures hidden in their high-dimensional observations. In this manuscript, we develop NLDR methods on high-dimensional MRI data sets of preclinical animals and clinical patients with stroke. On analyzing the performance of these methods, we observed that there was a high degree of similarity between the multiparametric embedded images from NLDR methods and the ADC map and perfusion map. It was also observed that the embedded scattergram of abnormal (infarcted or at-risk) tissue can be visualized and provides a mechanism for automatic methods to delineate potential stroke volumes and early tissue at risk.
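A sketch of the NLDR step on synthetic stand-ins for T1WI/T2WI/ADC/perfusion maps, using scikit-learn's Isomap; the lesion model, map names and parameter values are invented for illustration, and this is not the authors' pipeline.

```python
# Stack per-voxel multiparametric features and embed them with Isomap.
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(0)
h = w = 48
yy, xx = np.mgrid[0:h, 0:w]
lesion = np.exp(-((yy - 24) ** 2 + (xx - 30) ** 2) / 100.0)   # smooth "infarct" blob, 0..1

def synth_map(base, lesion_shift):
    return base + lesion_shift * lesion + 0.05 * rng.standard_normal((h, w))

maps = np.stack([synth_map(1.0, -0.4),     # T1-like
                 synth_map(0.8, +0.5),     # T2-like
                 synth_map(1.2, -0.6),     # ADC-like (restricted diffusion)
                 synth_map(1.0, -0.5)],    # perfusion-like (hypoperfusion)
                axis=-1)
voxels = maps.reshape(-1, maps.shape[-1])                     # (n_voxels, n_parameters)

embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(voxels)
emb_image = embedding[:, 0].reshape(h, w)                     # first embedding coordinate as an image
core = lesion > 0.5
print("embedded-image contrast, lesion vs normal tissue:",
      round(float(emb_image[core].mean() - emb_image[~core].mean()), 3))
```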
Data series embedding and scale invariant statistics.
Michieli, I; Medved, B; Ristov, S
2010-06-01
Data sequences acquired from bio-systems such as human gait data, heart rate interbeat data, or DNA sequences exhibit complex dynamics that is frequently described by a long-memory or power-law decay of the autocorrelation function. One way of characterizing that dynamics is through scale invariant statistics or "fractal-like" behavior. For quantifying scale invariant parameters of physiological signals several methods have been proposed. Among them the most common are detrended fluctuation analysis, sample mean variance analyses, power spectral density analysis, R/S analysis, and recently, in the realm of the multifractal approach, wavelet analysis. In this paper it is demonstrated that embedding the time series data in a high-dimensional pseudo-phase space reveals scale invariant statistics in a simple fashion. The procedure is applied to different stride interval data sets from human gait measurement time series (PhysioBank data library). Results show that the introduced mapping adequately separates long-memory from random behavior. Smaller gait data sets were analyzed and scale-free trends for limited scale intervals were successfully detected. The method was verified on artificially produced time series with known scaling behavior and with varying noise content. The possibility for the method to falsely detect long-range dependence in artificially generated short-range-dependence series was investigated. © 2009 Elsevier B.V. All rights reserved.
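As a concrete example of one of the standard analyses listed above (the aggregated sample-mean variance method, not the authors' embedding-based procedure), the sketch below fits the variance-versus-scale power law for white noise and for a crude long-memory surrogate.

```python
# Aggregated-variance scaling check: slope ~ -1 for white noise, shallower for
# persistent (long-memory-like) series; slope = 2H - 2 for Hurst exponent H.
import numpy as np

def variance_scaling_exponent(x, scales):
    var = []
    for m in scales:
        n_blocks = len(x) // m
        means = x[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        var.append(means.var())
    slope, _ = np.polyfit(np.log(scales), np.log(var), 1)
    return slope

rng = np.random.default_rng(0)
n = 2 ** 15
white = rng.standard_normal(n)
# Crude long-memory surrogate: moving sum of innovations over a long window.
walk = np.cumsum(rng.standard_normal(n + 256))
long_memory = walk[256:] - walk[:-256]

scales = np.array([4, 8, 16, 32, 64, 128])
print("white-noise scaling slope :", round(variance_scaling_exponent(white, scales), 2))
print("long-memory scaling slope :", round(variance_scaling_exponent(long_memory, scales), 2))
```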
An Embedded Statistical Method for Coupling Molecular Dynamics and Finite Element Analyses
NASA Technical Reports Server (NTRS)
Saether, E.; Glaessgen, E.H.; Yamakov, V.
2008-01-01
The coupling of molecular dynamics (MD) simulations with finite element methods (FEM) yields computationally efficient models that link fundamental material processes at the atomistic level with continuum field responses at higher length scales. The theoretical challenge involves developing a seamless connection along an interface between two inherently different simulation frameworks. Various specialized methods have been developed to solve particular classes of problems. Many of these methods link the kinematics of individual MD atoms with FEM nodes at their common interface, necessarily requiring that the finite element mesh be refined to atomic resolution. Some of these coupling approaches also require simulations to be carried out at 0 K and restrict modeling to two-dimensional material domains due to difficulties in simulating full three-dimensional material processes. In the present work, a new approach to MD-FEM coupling is developed based on a restatement of the standard boundary value problem used to define a coupled domain. The method replaces a direct linkage of individual MD atoms and finite element (FE) nodes with a statistical averaging of atomistic displacements in local atomic volumes associated with each FE node in an interface region. The FEM and MD computational systems are effectively independent and communicate only through an iterative update of their boundary conditions. With the use of statistical averages of the atomistic quantities to couple the two computational schemes, the developed approach is referred to as an embedded statistical coupling method (ESCM). ESCM provides an enhanced coupling methodology that is inherently applicable to three-dimensional domains, avoids discretization of the continuum model to atomic scale resolution, and permits finite temperature states to be applied.
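A toy one-dimensional illustration of the coupling exchange described above: interface-node displacement boundary conditions are statistical averages of the atoms in each node's interface volume, and nodal reaction forces go back to the atoms as piecewise-constant tractions. Neither an MD nor an FEM solver is modelled; the "reaction forces" are placeholders.

```python
# Statistical averaging of atomic displacements into FE nodal boundary
# conditions, and redistribution of nodal forces back onto the atoms.
import numpy as np

rng = np.random.default_rng(0)
n_atoms, n_nodes = 4000, 8
atom_x = rng.uniform(0.0, 1.0, n_atoms)                 # atom positions along the interface

def true_field(x):                                      # smooth deformation field
    return 0.02 * x + 0.005 * np.sin(2 * np.pi * x)

atom_disp = true_field(atom_x) + 0.002 * rng.standard_normal(n_atoms)   # thermal fluctuation

edges = np.linspace(0.0, 1.0, n_nodes + 1)              # one interface volume per FE node
volume_of_atom = np.digitize(atom_x, edges[1:-1])       # which nodal volume each atom sits in
atoms_per_volume = np.bincount(volume_of_atom, minlength=n_nodes)

# MD -> FEM: averaged atomic displacement becomes the node's displacement BC.
node_disp_bc = np.array([atom_disp[volume_of_atom == k].mean() for k in range(n_nodes)])

# FEM -> MD: each nodal reaction force is spread as a constant traction over
# the atoms of that volume (equal share per atom in this toy version).
node_reaction = rng.normal(0.0, 1.0, n_nodes)           # placeholder FEM output
atom_force = node_reaction[volume_of_atom] / atoms_per_volume[volume_of_atom]

print("node displacement BCs:", np.round(node_disp_bc, 4))
print("forces conserved per node:",
      np.allclose(np.bincount(volume_of_atom, weights=atom_force), node_reaction))
```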
Noncontact power/interrogation system for smart structures
NASA Astrophysics Data System (ADS)
Spillman, William B., Jr.; Durkee, S.
1994-05-01
The field of smart structures has been largely driven by the development of new high performance designed materials. Use of these materials has been generally limited due to the fact that they have not been in use long enough for statistical data bases to be developed on their failure modes. Real time health monitoring is therefore required for the benefits of structures using these materials to be realized. In this paper a non-contact method of powering and interrogating embedded electronic and opto-electronic systems is described. The technique utilizes inductive coupling between external and embedded coils etched on thin electronic circuit cards. The technique can be utilized to interrogate embedded sensors and to provide > 250 mW for embedded electronics. The system has been successfully demonstrated with a number of composite and plastic materials through material thicknesses up to 1 cm. An analytical description of the system is provided along with experimental results.
Shedge, Sapana V; Zhou, Xiuwen; Wesolowski, Tomasz A
2014-09-01
Recent application of the Frozen-Density Embedding Theory based continuum model of the solvent, which is used for calculating solvatochromic shifts in the UV/Vis range, are reviewed. In this model, the solvent is represented as a non-uniform continuum taking into account both the statistical nature of the solvent and specific solute-solvent interactions. It offers, therefore, a computationally attractive alternative to methods in which the solvent is described at atomistic level. The evaluation of the solvatochromic shift involves only two calculations of excitation energy instead of at least hundreds needed to account for inhomogeneous broadening. The present review provides a detailed graphical analysis of the key quantities of this model: the average charge density of the solvent (<ρB>) and the corresponding Frozen-Density Embedding Theory derived embedding potential for coumarin 153.
ERIC Educational Resources Information Center
Jones, Julie Scott; Goldring, John E.
2017-01-01
The issue of poor statistical literacy amongst undergraduates in the United Kingdom is well documented. At university level, where poor statistics skills impact particularly on social science programmes, embedding is often used as a remedy. However, embedding represents a surface approach to the problem. It ignores the barriers to learning that…
Quasi-Static Probabilistic Structural Analyses Process and Criteria
NASA Technical Reports Server (NTRS)
Goldberg, B.; Verderaime, V.
1999-01-01
Current deterministic structural methods are easily applied to substructures and components, and analysts have built great design insights and confidence in them over the years. However, deterministic methods cannot support systems risk analyses, and it was recently reported that deterministic treatment of statistical data is inconsistent with error propagation laws, which can result in unevenly conservative structural predictions. Assuming normal distributions and using statistical data formats throughout the prevailing deterministic stress processes leads to a safety factor in statistical format which, integrated into the safety index, provides a safety factor and first-order reliability relationship. The safety factor embedded in the safety index expression allows a historically based risk to be determined and verified over a variety of quasi-static metallic substructures, consistent with traditional safety factor methods and NASA Std. 5001 criteria.
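For a hedged illustration of the kind of relationship the abstract refers to (not necessarily the report's own formulation), the standard first-order result for independent, normally distributed strength R and stress S links the safety index to a central safety factor expressed in statistical format:

```latex
% Generic first-order reliability relation; symbols are illustrative, not the report's.
\[
  \beta \;=\; \frac{\mu_R - \mu_S}{\sqrt{\sigma_R^{2} + \sigma_S^{2}}}
       \;=\; \frac{SF - 1}{\sqrt{SF^{2} V_R^{2} + V_S^{2}}},
  \qquad
  SF = \frac{\mu_R}{\mu_S},\quad V_R = \frac{\sigma_R}{\mu_R},\quad V_S = \frac{\sigma_S}{\mu_S}.
\]
```

Here β is the safety index, SF the central safety factor, V_R and V_S the coefficients of variation of strength and stress, and the corresponding first-order failure probability is P_f = Φ(-β).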
Greyson, Devon; Surette, Soleil; Dennett, Liz; Chatterley, Trish
2013-10-01
Embedded librarianship has received much attention in recent years. A model of embeddedness rarely discussed to date is that of research-embedded health librarians (REHLs). This study explores the characteristics of Canadian REHLs and the situations in which they are employed. The authors employed a sequential, mixed-method design. An online survey provided descriptive statistics about REHLs' positions and work experiences. This informed a series of focus group interviews that expanded upon the survey. Through constant comparison, we conducted qualitative descriptive analysis of the interviews. Based on twenty-nine survey responses and four group interviews, we created a portrait of a "typical" REHL and discovered themes relevant to REHL work. REHLs may identify more strongly as researchers than as librarians, with corresponding professional needs and rewards. REHLs value "belonging" to the research team, involvement in full project lifecycles, and in-depth relationships with nonlibrarian colleagues. Despite widely expressed job satisfaction, many REHLs struggle with isolation from library and information science peers and relative lack of job security. REHLs differ from non-embedded health librarians, as well as from other types of embedded librarians. REHLs' work also differs from just a decade or two ago, prior to widespread Internet access to digital resources. Given that research-embedded librarianship appears to be a distinct and growing subset of health librarianship, libraries, master's of library and information science programs, and professional associations will need to respond to the support and education needs of REHLs or risk losing them to the health research field.
Effect of using different cover image quality to obtain robust selective embedding in steganography
NASA Astrophysics Data System (ADS)
Abdullah, Karwan Asaad; Al-Jawad, Naseer; Abdulla, Alan Anwer
2014-05-01
One of the common types of steganography is to conceal an image as a secret message in another image, normally called a cover image; the resulting image is called a stego image. The aim of this paper is to investigate the effect of using different cover image qualities, and also to analyse the use of different bit-planes in terms of robustness against well-known active attacks such as gamma, statistical filters, and linear spatial filters. The secret messages are embedded in a higher bit-plane, i.e. other than the Least Significant Bit (LSB), in order to resist active attacks. The embedding process is performed in three major steps: first, the embedding algorithm selectively identifies useful areas (blocks) for embedding based on their lighting conditions; second, it nominates the most useful blocks for embedding based on their entropy and average; third, it selects the right bit-plane for embedding. This kind of block selection makes the embedding process scatter the secret message(s) randomly around the cover image. Different tests have been performed to select a proper block size, which is related to the nature of the cover image used. Our proposed method suggests a suitable embedding bit-plane as well as the right blocks for the embedding. Experimental results demonstrate that the image quality used for the cover images has an effect when the stego image is subjected to different active attacks. Although the secret messages are embedded in a higher bit-plane, they cannot be recognised visually within the stego image.
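An illustrative sketch of the block-selection idea: score 8x8 blocks by entropy and mean brightness, keep busy, well-lit blocks, and write one bit per selected block into a higher bit-plane. The block size, thresholds and chosen bit-plane are arbitrary, not the paper's values.

```python
# Entropy/brightness-based block selection and higher-bit-plane embedding.
import numpy as np
from skimage import data

def block_view(img, bs=8):
    h, w = img.shape[0] // bs * bs, img.shape[1] // bs * bs
    return img[:h, :w].reshape(h // bs, bs, w // bs, bs).swapaxes(1, 2)

def block_entropy(block):
    p = np.bincount(block.ravel(), minlength=256) / block.size
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

cover = data.camera()
blocks = block_view(cover)
ny, nx = blocks.shape[:2]

entropies = np.array([[block_entropy(blocks[i, j]) for j in range(nx)] for i in range(ny)])
means = blocks.mean(axis=(2, 3))
usable = (entropies > 4.0) & (means > 40) & (means < 215)   # busy, neither dark nor saturated

rng = np.random.default_rng(0)
message = rng.integers(0, 2, int(usable.sum()))
bitplane = 3                                                # embed above the LSB for robustness
keep_mask = np.uint8(0xFF ^ (1 << bitplane))
stego_blocks = blocks.copy()
for bit, (i, j) in zip(message, np.argwhere(usable)):
    b = stego_blocks[i, j]
    b[0, 0] = (b[0, 0] & keep_mask) | np.uint8(int(bit) << bitplane)   # one pixel per block

print(f"{int(usable.sum())} of {ny * nx} blocks selected; {len(message)} bits embedded")
```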
Switching theory-based steganographic system for JPEG images
NASA Astrophysics Data System (ADS)
Cherukuri, Ravindranath C.; Agaian, Sos S.
2007-04-01
Cellular communications constitute a significant portion of the global telecommunications market. Therefore, the need for secured communication over a mobile platform has increased exponentially. Steganography is an art of hiding critical data into an innocuous signal, which provide answers to the above needs. The JPEG is one of commonly used format for storing and transmitting images on the web. In addition, the pictures captured using mobile cameras are in mostly in JPEG format. In this article, we introduce a switching theory based steganographic system for JPEG images which is applicable for mobile and computer platforms. The proposed algorithm uses the fact that energy distribution among the quantized AC coefficients varies from block to block and coefficient to coefficient. Existing approaches are effective with a part of these coefficients but when employed over all the coefficients they show there ineffectiveness. Therefore, we propose an approach that works each set of AC coefficients with different frame work thus enhancing the performance of the approach. The proposed system offers a high capacity and embedding efficiency simultaneously withstanding to simple statistical attacks. In addition, the embedded information could be retrieved without prior knowledge of the cover image. Based on simulation results, the proposed method demonstrates an improved embedding capacity over existing algorithms while maintaining a high embedding efficiency and preserving the statistics of the JPEG image after hiding information.
Ross, Joseph S; Bates, Jonathan; Parzynski, Craig S; Akar, Joseph G; Curtis, Jeptha P; Desai, Nihar R; Freeman, James V; Gamble, Ginger M; Kuntz, Richard; Li, Shu-Xia; Marinac-Dabic, Danica; Masoudi, Frederick A; Normand, Sharon-Lise T; Ranasinghe, Isuru; Shaw, Richard E; Krumholz, Harlan M
2017-01-01
Machine learning methods may complement traditional analytic methods for medical device surveillance. Using data from the National Cardiovascular Data Registry for implantable cardioverter-defibrillators (ICDs) linked to Medicare administrative claims for longitudinal follow-up, we applied three statistical approaches to safety-signal detection for commonly used dual-chamber ICDs that used two propensity score (PS) models: one specified by subject-matter experts (PS-SME), and the other one by machine learning-based selection (PS-ML). The first approach used PS-SME and cumulative incidence (time-to-event), the second approach used PS-SME and cumulative risk (Data Extraction and Longitudinal Trend Analysis [DELTA]), and the third approach used PS-ML and cumulative risk (embedded feature selection). Safety-signal surveillance was conducted for eleven dual-chamber ICD models implanted at least 2,000 times over 3 years. Between 2006 and 2010, there were 71,948 Medicare fee-for-service beneficiaries who received dual-chamber ICDs. Cumulative device-specific unadjusted 3-year event rates varied for three surveyed safety signals: death from any cause, 12.8%-20.9%; nonfatal ICD-related adverse events, 19.3%-26.3%; and death from any cause or nonfatal ICD-related adverse event, 27.1%-37.6%. Agreement among safety signals detected/not detected between the time-to-event and DELTA approaches was 90.9% (360 of 396, k =0.068), between the time-to-event and embedded feature-selection approaches was 91.7% (363 of 396, k =-0.028), and between the DELTA and embedded feature selection approaches was 88.1% (349 of 396, k =-0.042). Three statistical approaches, including one machine learning method, identified important safety signals, but without exact agreement. Ensemble methods may be needed to detect all safety signals for further evaluation during medical device surveillance.
Bearing Fault Diagnosis Based on Statistical Locally Linear Embedding
Wang, Xiang; Zheng, Yuan; Zhao, Zhenzhou; Wang, Jinping
2015-01-01
Fault diagnosis is essentially a kind of pattern recognition. The measured signal samples usually lie on nonlinear low-dimensional manifolds embedded in the high-dimensional signal space, so implementing feature extraction and dimensionality reduction while improving recognition performance is a crucial task. In this paper a novel machinery fault diagnosis approach is proposed, based on a statistical locally linear embedding (S-LLE) algorithm, which extends LLE by exploiting the fault class label information. The fault diagnosis approach first extracts the intrinsic manifold features from the high-dimensional feature vectors obtained from vibration signals by feature extraction in the time domain, the frequency domain and via empirical mode decomposition (EMD), and then translates the complex mode space into a salient low-dimensional feature space using the manifold learning algorithm S-LLE, which outperforms other feature reduction methods such as PCA, LDA and LLE. Finally, in the reduced feature space, pattern classification and fault diagnosis by a classifier are carried out easily and rapidly. Rolling bearing fault signals are used to validate the proposed fault diagnosis approach. The results indicate that the proposed approach clearly improves the classification performance of fault pattern recognition and outperforms the other traditional approaches. PMID:26153771
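A pipeline sketch with two substitutions: synthetic statistical features stand in for the time/frequency/EMD features, and scikit-learn's standard LLE replaces the supervised S-LLE step (S-LLE additionally uses class labels when forming neighbourhoods). Class structure and sizes are invented.

```python
# Feature extraction (synthetic) -> manifold embedding (LLE) -> classifier.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

def fake_vibration_features(fault_class, n):
    """Per-signal feature vectors (stand-ins for RMS, kurtosis, band energies, ...)."""
    centre = np.concatenate([np.full(8, 0.8 * fault_class), rng.normal(0.0, 0.1, 12)])
    return centre + 0.4 * rng.standard_normal((n, 20))

X = np.vstack([fake_vibration_features(c, 120) for c in range(4)])   # 4 bearing conditions
y = np.repeat(np.arange(4), 120)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

lle = LocallyLinearEmbedding(n_neighbors=12, n_components=3, eigen_solver="dense")
Z_train = lle.fit_transform(X_train)
Z_test = lle.transform(X_test)

clf = KNeighborsClassifier(n_neighbors=5).fit(Z_train, y_train)
print("classification accuracy in the reduced space:", round(clf.score(Z_test, y_test), 3))
```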
A framework for optimal kernel-based manifold embedding of medical image data.
Zimmer, Veronika A; Lekadir, Karim; Hoogendoorn, Corné; Frangi, Alejandro F; Piella, Gemma
2015-04-01
Kernel-based dimensionality reduction is a widely used technique in medical image analysis. To fully unravel the underlying nonlinear manifold the selection of an adequate kernel function and of its free parameters is critical. In practice, however, the kernel function is generally chosen as Gaussian or polynomial and such standard kernels might not always be optimal for a given image dataset or application. In this paper, we present a study on the effect of the kernel functions in nonlinear manifold embedding of medical image data. To this end, we first carry out a literature review on existing advanced kernels developed in the statistics, machine learning, and signal processing communities. In addition, we implement kernel-based formulations of well-known nonlinear dimensional reduction techniques such as Isomap and Locally Linear Embedding, thus obtaining a unified framework for manifold embedding using kernels. Subsequently, we present a method to automatically choose a kernel function and its associated parameters from a pool of kernel candidates, with the aim to generate the most optimal manifold embeddings. Furthermore, we show how the calculated selection measures can be extended to take into account the spatial relationships in images, or used to combine several kernels to further improve the embedding results. Experiments are then carried out on various synthetic and phantom datasets for numerical assessment of the methods. Furthermore, the workflow is applied to real data that include brain manifolds and multispectral images to demonstrate the importance of the kernel selection in the analysis of high-dimensional medical images. Copyright © 2014 Elsevier Ltd. All rights reserved.
New methods in iris recognition.
Daugman, John
2007-10-01
This paper presents the following four advances in iris recognition: 1) more disciplined methods for detecting and faithfully modeling the iris inner and outer boundaries with active contours, leading to more flexible embedded coordinate systems; 2) Fourier-based methods for solving problems in iris trigonometry and projective geometry, allowing off-axis gaze to be handled by detecting it and "rotating" the eye into orthographic perspective; 3) statistical inference methods for detecting and excluding eyelashes; and 4) exploration of score normalizations, depending on the amount of iris data that is available in images and the required scale of database search. Statistical results are presented based on 200 billion iris cross-comparisons that were generated from 632500 irises in the United Arab Emirates database to analyze the normalization issues raised in different regions of receiver operating characteristic curves.
Multilinear Graph Embedding: Representation and Regularization for Images.
Chen, Yi-Lei; Hsu, Chiou-Ting
2014-02-01
Given a set of images, finding a compact and discriminative representation is still a big challenge especially when multiple latent factors are hidden in the way of data generation. To represent multifactor images, although multilinear models are widely used to parameterize the data, most methods are based on high-order singular value decomposition (HOSVD), which preserves global statistics but interprets local variations inadequately. To this end, we propose a novel method, called multilinear graph embedding (MGE), as well as its kernelization MKGE to leverage the manifold learning techniques into multilinear models. Our method theoretically links the linear, nonlinear, and multilinear dimensionality reduction. We also show that the supervised MGE encodes informative image priors for image regularization, provided that an image is represented as a high-order tensor. From our experiments on face and gait recognition, the superior performance demonstrates that MGE better represents multifactor images than classic methods, including HOSVD and its variants. In addition, the significant improvement in image (or tensor) completion validates the potential of MGE for image regularization.
NASA Astrophysics Data System (ADS)
Mazzitello, Karina I.; Candia, Julián
2012-12-01
In every country, public and private agencies allocate extensive funding to collect large-scale statistical data, which in turn are studied and analyzed in order to determine local, regional, national, and international policies regarding all aspects relevant to the welfare of society. One important aspect of that process is the visualization of statistical data with embedded geographical information, which most often relies on archaic methods such as maps colored according to graded scales. In this work, we apply nonstandard visualization techniques based on physical principles. We illustrate the method with recent statistics on homicide rates in Brazil and their correlation to other publicly available data. This physics-based approach provides a novel tool that can be used by interdisciplinary teams investigating statistics and model projections in a variety of fields such as economics and gross domestic product research, public health and epidemiology, sociodemographics, political science, business and marketing, and many others.
Various Effects of Embedded Intrapulse Communications on Pulsed Radar
2017-06-01
specific type of interference that may be encountered by radar; however, this introductory information should suffice to illustrate to the reader why... In this chapter we seek to not merely understand the overall statistical performance of the radar with embedded intrapulse communications but rather to evaluate... Probability of detection, discussed in Chapter 4, assesses the statistical probability of a radar accurately identifying a target given a...
A robust embedded vision system feasible white balance algorithm
NASA Astrophysics Data System (ADS)
Wang, Yuan; Yu, Feihong
2018-01-01
White balance is a very important part of the color image processing pipeline. In order to meet the need for efficiency and accuracy in embedded machine vision processing systems, an efficient and robust white balance algorithm combining several classical ones is proposed. The proposed algorithm has three main parts. Firstly, in order to guarantee higher efficiency, an initial parameter calculated from the statistics of the R, G and B components of the raw data is used to initialize the following iterative method. After that, the bilinear interpolation algorithm is utilized to implement the demosaicing procedure. Finally, an adaptive step adjustment scheme is introduced to ensure the controllability and robustness of the algorithm. In order to verify the proposed algorithm's performance on an embedded vision system, a smart camera based on the IMX6 DualLite, IMX291 and XC6130 is designed. Extensive experiments on a large number of images under different color temperatures and exposure conditions illustrate that the proposed white balance algorithm effectively avoids the color deviation problem, achieves a good balance between efficiency and quality, and is suitable for embedded machine vision processing systems.
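A minimal sketch of the control loop described (gray-world statistics for initialization, then adaptive-step refinement); the paper's exact combination of classical methods and its demosaicing stage are not reproduced, and it assumes an already-demosaiced RGB array with a synthetic warm colour cast.

```python
# Gray-world initialization followed by iterative gain refinement with a
# shrinking step size (illustrative only).
import numpy as np

def auto_white_balance(rgb, iterations=20):
    img = rgb.astype(float)
    g_mean = img[..., 1].mean()
    gains = np.array([g_mean / img[..., 0].mean(), 1.0, g_mean / img[..., 2].mean()])  # gray-world init
    step = 0.5
    for _ in range(iterations):
        balanced = np.clip(img * gains, 0, 255)
        r_err = balanced[..., 0].mean() - balanced[..., 1].mean()
        b_err = balanced[..., 2].mean() - balanced[..., 1].mean()
        gains[0] -= step * r_err / 255.0        # adaptive, shrinking corrections
        gains[2] -= step * b_err / 255.0
        step *= 0.8
    return np.clip(img * gains, 0, 255).astype(np.uint8), gains

rng = np.random.default_rng(0)
scene = rng.uniform(40, 200, (120, 160, 3))                  # neutral synthetic scene
warm = scene * np.array([1.3, 1.0, 0.75])                    # simulated warm colour cast
corrected, gains = auto_white_balance(warm)
print("estimated gains:", np.round(gains, 3))
print("channel means after correction:", np.round(corrected.reshape(-1, 3).mean(axis=0), 1))
```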
Montgomery, Eric; Gao, Chen; de Luca, Julie; Bower, Jessie; Attwood, Kristropher; Ylagan, Lourdes
2014-12-01
The Cellient® cell block system has become available as an alternative, partially automated method to create cell blocks in cytology. We sought to show a validation method for immunohistochemical (IHC) staining on the Cellient cell block system (CCB) in comparison with the formalin-fixed, paraffin-embedded traditional cell block (TCB). Immunohistochemical staining was performed using 31 antibodies on 38 patient samples for a total of 326 slides. Split samples were processed using both methods by following the Cellient® manufacturer's recommendations for the Cellient cell block (CCB) and the Histogel method for preparing the traditional cell block (TCB). Interpretation was performed by three pathologists and two cytotechnologists. Immunohistochemical stains were scored as: 0/1+ (negative) and 2/3+ (positive). Inter-rater agreement for each antibody was evaluated for CCB and TCB, as well as the intra-rater agreement between TCB and CCB between observers. Interobserver staining concordance for the TCB was obtained with statistical significance (P < 0.05) in 24 of 31 antibodies. Interobserver staining concordance for the CCB was obtained with statistical significance in 27 of 31 antibodies. Intra-observer staining concordance between TCB and CCB was obtained with statistical significance in 24 of 31 antibodies tested. In conclusion, immunohistochemical stains on cytologic specimens processed by the Cellient system are reliable and concordant with stains performed on the same split samples processed via a formalin-fixed, paraffin-embedded (FFPE) block. The Cellient system is a welcome adjunct to cytology workflow by producing cell block material of sufficient quality to allow the use of routine IHC. © 2014 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Chen, Cheng-ping; Wang, Chang-Hwa
2015-12-01
Studies have proven that merging hands-on and online learning can result in an enhanced experience in learning science. In contrast to traditional online learning, multiple in-classroom activities may be involved in an augmented-reality (AR)-embedded e-learning process and thus could reduce the effects of individual differences. Using a three-stage AR-embedded instructional process, we conducted an experiment to investigate the influences of individual differences on learning the earth science phenomena of "day, night, and seasons" for junior high school students. A mixed-methods sequential explanatory design was employed. In the quantitative phase, the factors of learning style and ICT competence were examined alongside overall learning achievement. Independent t tests and ANCOVAs were employed for the inferential analyses. The results showed that the effect of the AR-embedded instruction on overall learning achievement was significant. Nevertheless, neither of the two learner factors exhibited a significant effect on learning achievement. In the qualitative phase, we analyzed student interview records, and a wide variation in students' preferred instructional stages was revealed. These findings could provide an alternative rationale for developing ICT-supported instruction, as our three-stage AR-embedded comprehensive e-learning scheme could enhance instructional adaptiveness and thus reduce the disparities arising from individual differences between learners.
Sungjun Lim; Nowak, Michael R; Yoonsuck Choe
2016-08-01
We present a novel, parallelizable algorithm capable of automatically reconstructing and calculating anatomical statistics of cerebral vascular networks embedded in large volumes of Rat Nissl-stained data. In this paper, we report the results of our method using Rattus somatosensory cortical data acquired using Knife-Edge Scanning Microscopy. Our algorithm performs the reconstruction task with averaged precision, recall, and F2-score of 0.978, 0.892, and 0.902 respectively. Calculated anatomical statistics show some conformance to values previously reported. The results that can be obtained from our method are expected to help explicate the relationship between the structural organization of the microcirculation and normal (and abnormal) cerebral functioning.
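For readers unfamiliar with the F2-score quoted above, it is the F-beta measure with beta = 2, which weights recall more heavily than precision; a small sketch of how such figures are computed from match counts (the counts below are illustrative, not the paper's data):

```python
def precision_recall_fbeta(tp, fp, fn, beta=2.0):
    """Precision, recall and F-beta from true-positive, false-positive and
    false-negative counts; beta > 1 emphasises recall."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    b2 = beta ** 2
    fbeta = ((1 + b2) * precision * recall / (b2 * precision + recall)
             if (precision + recall) else 0.0)
    return precision, recall, fbeta

# Illustrative counts from comparing a reconstruction against ground truth:
print(precision_recall_fbeta(tp=90, fp=2, fn=11))
```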
Inlet Flow Control and Prediction Technologies for Embedded Propulsion Systems
NASA Technical Reports Server (NTRS)
McMillan, Michelle L.; Mackie, Scott A.; Gissen, Abe; Vukasinovic, Bojan; Lakebrink, Matthew T.; Glezer, Ari; Mani, Mori; Mace, James L.
2011-01-01
Fail-safe, hybrid flow control (HFC) is a promising technology for meeting high-speed cruise efficiency, low-noise signature, and reduced fuel-burn goals for future Hybrid-Wing-Body (HWB) aircraft with embedded engines. This report details the development of HFC technology that enables improved inlet performance in HWB vehicles with highly integrated inlets and embedded engines without adversely affecting vehicle performance. In addition, new test techniques for evaluating Boundary-Layer-Ingesting (BLI)-inlet flow-control technologies developed and demonstrated through this program are documented, including the ability to generate a BLI-like inlet-entrance flow in a direct-connect wind-tunnel facility, as well as the use of D-optimal, statistically designed experiments to optimize test efficiency and enable interpretation of results. Validated improvements in numerical analysis tools and methods accomplished through this program are also documented, including Reynolds-Averaged Navier-Stokes CFD simulations of steady-state flow physics for baseline BLI-inlet diffuser flow, as well as that created by flow-control devices. Finally, numerical methods were employed in a ground-breaking attempt to directly simulate dynamic distortion. The advances in inlet technologies and prediction tools will help to meet and exceed "N+2" project goals for future HWB aircraft.
NASA Astrophysics Data System (ADS)
Suhana; Srilestari, A.; Marbun, M. B. H.; Mihardja, H.
2017-08-01
Hypertension is a common health problem, and its prevalence in Indonesia is quite high (31.7%). Catgut embedding—an acupuncture technique—is known to reduce blood pressure; however, no study has confirmed the underlying mechanism. This study examines the effect of catgut embedding on serum nitric oxide (NO) concentration and blood pressure in patients with essential hypertension. Forty hypertension patients were randomly assigned to two groups: the control group received anti-hypertensive drugs, whereas the case group received anti-hypertensive drugs and catgut embedding. Results showed a statistically significant mean difference in NO concentration (p < 0.05) and a statistically and clinically significant mean difference in systolic and diastolic blood pressure between the two groups (p < 0.05). The results confirm that catgut embedding can influence serum NO concentration and blood pressure in essential hypertension patients.
Graph embedding and extensions: a general framework for dimensionality reduction.
Yan, Shuicheng; Xu, Dong; Zhang, Benyu; Zhang, Hong-Jiang; Yang, Qiang; Lin, Stephen
2007-01-01
Over the past few decades, a large family of algorithms (supervised or unsupervised, stemming from statistics or geometry theory) has been designed to provide different solutions to the problem of dimensionality reduction. Despite the different motivations of these algorithms, we present in this paper a general formulation known as graph embedding to unify them within a common framework. In graph embedding, each algorithm can be considered as the direct graph embedding, or its linear/kernel/tensor extension, of a specific intrinsic graph that describes certain desired statistical or geometric properties of a data set, with constraints from scale normalization or from a penalty graph that characterizes a statistical or geometric property that should be avoided. Furthermore, the graph embedding framework can be used as a general platform for developing new dimensionality reduction algorithms. By utilizing this framework as a tool, we propose a new supervised dimensionality reduction algorithm called Marginal Fisher Analysis (MFA), in which the intrinsic graph characterizes the intraclass compactness and connects each data point with its neighboring points of the same class, while the penalty graph connects the marginal points and characterizes the interclass separability. We show that MFA effectively overcomes the limitations of the traditional Linear Discriminant Analysis (LDA) algorithm that stem from its data distribution assumptions and available projection directions. Real face recognition experiments show the superiority of our proposed MFA in comparison with LDA, also for the corresponding kernel and tensor extensions.
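In this framework each linear method reduces to a generalized eigenvalue problem between an intrinsic graph matrix and a penalty (or scale-normalization) matrix; a minimal sketch of that common computational core, with the construction of the two Laplacians for a specific algorithm such as MFA left out (names and the regularization are illustrative):

```python
import numpy as np
from scipy.linalg import eigh

def linear_graph_embedding(X, L_intrinsic, L_penalty, dim):
    """X: d x n data matrix; L_intrinsic, L_penalty: n x n graph Laplacians.
    Solves X L X^T w = lambda X L_p X^T w and keeps the smallest eigenvalues."""
    A = X @ L_intrinsic @ X.T
    B = X @ L_penalty @ X.T + 1e-8 * np.eye(X.shape[0])  # small ridge for stability
    _, eigvecs = eigh(A, B)          # generalized symmetric eigenproblem, ascending
    W = eigvecs[:, :dim]             # projection directions
    return W, W.T @ X                # projection and low-dimensional embedding
```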
NASA Astrophysics Data System (ADS)
Wang, H.; Jing, X. J.
2017-07-01
This paper presents a virtual beam based approach suitable for conducting diagnosis of multiple faults in complex structures with limited prior knowledge of the faults involved. The "virtual beam", a recently-proposed concept for fault detection in complex structures, is applied, which consists of a chain of sensors representing a vibration energy transmission path embedded in the complex structure. Statistical tests and adaptive threshold are particularly adopted for fault detection due to limited prior knowledge of normal operational conditions and fault conditions. To isolate the multiple faults within a specific structure or substructure of a more complex one, a 'biased running' strategy is developed and embedded within the bacterial-based optimization method to construct effective virtual beams and thus to improve the accuracy of localization. The proposed method is easy and efficient to implement for multiple fault localization with limited prior knowledge of normal conditions and faults. With extensive experimental results, it is validated that the proposed method can localize both single fault and multiple faults more effectively than the classical trust index subtract on negative add on positive (TI-SNAP) method.
Imaging of neural oscillations with embedded inferential and group prevalence statistics.
Donhauser, Peter W; Florin, Esther; Baillet, Sylvain
2018-02-01
Magnetoencephalography and electroencephalography (MEG, EEG) are essential techniques for studying distributed signal dynamics in the human brain. In particular, the functional role of neural oscillations remains to be clarified. For that reason, imaging methods need to identify distinct brain regions that concurrently generate oscillatory activity, with adequate separation in space and time. Yet, spatial smearing and inhomogeneous signal-to-noise are challenging factors for source reconstruction from external sensor data. The detection of weak sources in the presence of stronger regional activity nearby is a typical complication of MEG/EEG source imaging. We propose a novel, hypothesis-driven source reconstruction approach to address these methodological challenges. The imaging with embedded statistics (iES) method is a subspace scanning technique that constrains the mapping problem to the actual experimental design. A major benefit is that, regardless of signal strength, the contributions from all oscillatory sources whose activity is consistent with the tested hypothesis are equalized in the statistical maps produced. We present extensive evaluations of iES on group MEG data, for mapping 1) induced oscillations using experimental contrasts, 2) ongoing narrow-band oscillations in the resting-state, 3) co-modulation of brain-wide oscillatory power with a seed region, and 4) co-modulation of oscillatory power with peripheral signals (pupil dilation). Along the way, we demonstrate several advantages of iES over standard source imaging approaches. These include the detection of oscillatory coupling without rejection of zero-phase coupling, and detection of ongoing oscillations in deeper brain regions, where signal-to-noise conditions are unfavorable. We also show that iES provides a separate evaluation of oscillatory synchronization and desynchronization in experimental contrasts, which has important statistical advantages. The flexibility of iES allows it to be adjusted to many experimental questions in systems neuroscience.
NASA Astrophysics Data System (ADS)
Tsukanov, A. A.; Gorbatnikov, A. V.
2018-01-01
Study of the statistical parameters of the Earth's random microseismic field makes it possible to obtain estimates of the properties and structure of the Earth's crust and upper mantle. Different approaches are used to observe and process the microseismic records, which are divided into several groups of passive seismology methods. Among them are the well-known methods of surface-wave tomography, the spectral H/V ratio of the components in the surface wave, and microseismic sounding, currently under development, which uses the spectral ratio V/V0 of the vertical components between pairs of spatially separated stations. In the course of previous experiments, it became clear that these ratios are stable statistical parameters of the random field that do not depend on the properties of microseism sources. This paper proposes to expand the mentioned approach and study the possibilities for using the ratio of the horizontal components H1/H2 of the microseismic field. Numerical simulation was used to study the influence of an embedded velocity inhomogeneity on the spectral ratio of the horizontal components of the random field of fundamental Rayleigh modes, based on the concept that the Earth's microseismic field is represented by these waves in a significant part of the frequency spectrum.
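A minimal sketch of how a spectral ratio between two horizontal components could be estimated from station records (the full processing chain of the microseismic sounding method is more involved; function names and parameters are illustrative):

```python
import numpy as np
from scipy.signal import welch

def spectral_ratio(h1, h2, fs, nperseg=4096):
    """Amplitude-spectrum ratio H1/H2 of two horizontal components,
    estimated from Welch power spectral densities."""
    f, p1 = welch(h1, fs=fs, nperseg=nperseg)
    _, p2 = welch(h2, fs=fs, nperseg=nperseg)
    return f, np.sqrt(p1 / p2)   # amplitude (not power) ratio
```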
Where are compact groups in the local Universe?
NASA Astrophysics Data System (ADS)
Díaz-Giménez, Eugenia; Zandivarez, Ariel
2015-06-01
Aims: The purpose of this work is to perform a statistical analysis of the location of compact groups in the Universe from observational and semi-analytical points of view. Methods: We used the velocity-filtered compact group sample extracted from the Two Micron All Sky Survey for our analysis. We also used a new sample of galaxy groups identified in the 2M++ galaxy redshift catalogue as tracers of the large-scale structure. We defined a procedure to search in redshift space for compact groups that can be considered embedded in other overdense systems and applied this criterion to several possible combinations of different compact and galaxy group subsamples. We also performed similar analyses for simulated compact and galaxy groups identified in a 2M++ mock galaxy catalogue constructed from the Millennium Run Simulation I plus a semi-analytical model of galaxy formation. Results: We observed that only ~27% of the compact groups can be considered to be embedded in larger overdense systems, that is, most of the compact groups are more likely to be isolated systems. The embedded compact groups show statistically smaller sizes and brighter surface brightnesses than non-embedded systems. No evidence was found that embedded compact groups are more likely to inhabit galaxy groups with a given virial mass or with a particular dynamical state. We found very similar results when the analysis was performed using mock compact and galaxy groups. Based on the semi-analytical studies, we predict that 70% of the embedded compact groups probably are 3D physically dense systems. Finally, real space information allowed us to reveal the bimodal behaviour of the distribution of 3D minimum distances between compact and galaxy groups. Conclusions: The location of compact groups should be carefully taken into account when comparing properties of galaxies in environments that are a priori different.
A new method to unveil embedded stellar clusters
NASA Astrophysics Data System (ADS)
Lombardi, Marco; Lada, Charles J.; Alves, João
2017-11-01
In this paper we present a novel method to identify and characterize stellar clusters deeply embedded in a dark molecular cloud. The method is based on measuring stellar surface density in wide-field infrared images using star counting techniques. It takes advantage of the differing H-band luminosity functions (HLFs) of field stars and young stellar populations and is able to statistically associate each star in an image as a member of either the background stellar population or a young stellar population projected on or near the cloud. Moreover, the technique corrects for the effects of differential extinction toward each individual star. We have tested this method against simulations as well as observations. In particular, we have applied the method to 2MASS point sources observed in the Orion A and B complexes, and the results compare very well with those obtained from deep Spitzer and Chandra observations, where the presence of infrared excess or X-ray emission directly determines membership status for every star. Additionally, our method also identifies unobscured clusters, and a low-resolution version of the Orion stellar surface density map clearly shows the relatively unobscured and diffuse OB 1a and 1b sub-groups and provides useful insights into their spatial distribution.
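The statistical association step can be pictured as a two-component mixture in de-reddened H magnitude: given the field and young-population H-band luminosity functions and their expected surface densities, each star receives a membership probability. A hedged sketch of that idea only (the published method's weighting and extinction handling are not reproduced here; all names are illustrative):

```python
def membership_probability(h_mag, a_h, hlf_young, hlf_field, n_young, n_field):
    """Probability that a star belongs to the young embedded population.
    h_mag: observed H magnitude; a_h: extinction toward that star (mag);
    hlf_young, hlf_field: callables returning normalised H-band luminosity
    functions; n_young, n_field: expected surface densities of each population."""
    h0 = h_mag - a_h                      # extinction-corrected magnitude
    young = n_young * hlf_young(h0)
    field = n_field * hlf_field(h0)
    return young / (young + field)
```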
Time-variant random interval natural frequency analysis of structures
NASA Astrophysics Data System (ADS)
Wu, Binhua; Wu, Di; Gao, Wei; Song, Chongmin
2018-02-01
This paper presents a new robust method, namely the unified interval Chebyshev-based random perturbation method, to tackle the hybrid random interval structural natural frequency problem. In the proposed approach, the random perturbation method is implemented to furnish the statistical features (i.e., mean and standard deviation), and a Chebyshev surrogate model strategy is incorporated to formulate the statistical information of the natural frequency with regard to the interval inputs. The comprehensive analysis framework combines the strengths of both methods in a way that dramatically reduces the computational cost. The presented method is thus capable of investigating the day-to-day time-variant natural frequency of structures accurately and efficiently under concrete intrinsic creep effects with probabilistic and interval uncertain variables. The extreme bounds of the mean and standard deviation of the natural frequency are captured through the optimization strategy embedded within the analysis procedure. Three numerical examples with a progressive relationship in terms of both structure type and uncertainty variables are presented to demonstrate the applicability, accuracy and efficiency of the proposed method.
Learning linear transformations between counting-based and prediction-based word embeddings
Hayashi, Kohei; Kawarabayashi, Ken-ichi
2017-01-01
Despite the growing interest in prediction-based word embedding learning methods, it remains unclear how the vector spaces learnt by the prediction-based methods differ from those of the counting-based methods, or whether one can be transformed into the other. To study the relationship between counting-based and prediction-based embeddings, we propose a method for learning a linear transformation between two given sets of word embeddings. Our proposal contributes to the word embedding learning research in three ways: (a) we propose an efficient method to learn a linear transformation between two sets of word embeddings, (b) using the transformation learnt in (a), we empirically show that it is possible to predict distributed word embeddings for novel unseen words, and (c) empirically it is possible to linearly transform counting-based embeddings to prediction-based embeddings, for frequent words, different POS categories, and varying degrees of ambiguity. PMID:28926629
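A common way to fit such a linear transformation for a shared vocabulary is ordinary least squares between the two embedding matrices; a minimal sketch under that assumption (the paper's actual objective and any regularisation may differ):

```python
import numpy as np

def learn_linear_map(counting_vecs, prediction_vecs):
    """counting_vecs, prediction_vecs: n x d arrays for the same n words.
    Returns W minimising ||counting_vecs @ W - prediction_vecs||_F."""
    W, *_ = np.linalg.lstsq(counting_vecs, prediction_vecs, rcond=None)
    return W

# A prediction-style vector for a novel word seen only in the counting space:
# new_prediction_vec = new_counting_vec @ W
```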
Jalouli, Miranda; Jalouli, Jamshid; Ibrahim, Salah O; Hirsch, Jan-Michaél; Sand, Lars
2015-01-01
Infection with human papilloma virus (HPV) has been implicated as one of the risk factors for the development of oropharyngeal cancer. Many different HPV tests exist, and information regarding their specific technical, analytical, and clinical properties is increasing. This study aimed to compare the level of detection of HPV using two reliable polymerase chain reaction (PCR) methods, nested PCR (NPCR) and single PCR (SPCR), in archival paraffin-embedded oral squamous cell carcinoma (OSCC) samples and fresh oral mucosa specimens. The presence of HPV genome in two groups of tissue samples was analyzed: (i) 57 paraffin-embedded OSCC samples from Sudan and (ii) eight healthy fresh oral mucosal samples from Swedish volunteers. The specimens were tested by SPCR with primer pair MY9/MY11 and NPCR using GP5+/GP6+ primer sets. Eighteen (32%) out of the 57 paraffin-embedded OSCC samples, and five (62%) out of the eight fresh clinically healthy samples were found to be HPV-positive with NPCR. With SPCR, four (7%) out of the paraffin-embedded OSCC samples were HPV-positive. A statistically significant difference between HPV-positive and -negative samples was found when comparing NPCR and SPCR in OSCC and fresh oral mucosa (p<0.0001). The comparative test between SPCR and NPCR showed 100% sensitivity and 69% specificity for OSCC. The use of the GP5+/GP6+ nested PCR increased the positivity rate, efficiency rate and sensitivity of HPV detection in oral samples significantly and should be considered as the method of choice. Copyright © 2015 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.
IMPROVEMENTS IN EPOXY RESIN EMBEDDING METHODS
Luft, John H.
1961-01-01
Epoxy embedding methods of Glauert and Kushida have been modified so as to yield rapid, reproducible, and convenient embedding methods for electron microscopy. The sections are robust and tissue damage is less than with methacrylate embedding. PMID:13764136
Permutation entropy with vector embedding delays
NASA Astrophysics Data System (ADS)
Little, Douglas J.; Kane, Deb M.
2017-12-01
Permutation entropy (PE) is a statistic used widely for the detection of structure within a time series. Embedding delay times at which the PE is reduced are characteristic timescales for which such structure exists. Here, a generalized scheme is investigated where embedding delays are represented by vectors rather than scalars, permitting PE to be calculated over a (D-1)-dimensional space, where D is the embedding dimension. This scheme is applied to numerically generated noise, sine wave and logistic map series, and experimental data sets taken from a vertical-cavity surface emitting laser exhibiting temporally localized pulse structures within the round-trip time of the laser cavity. Results are visualized as PE maps as a function of embedding delay, with low PE values indicating combinations of embedding delays where correlation structure is present. It is demonstrated that vector embedding delays enable identification of structure that is ambiguous or masked, when the embedding delay is constrained to scalar form.
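A minimal sketch of permutation entropy computed for a vector of embedding delays (one delay per step between successive pattern elements), normalised by log D! so that a value of 1 corresponds to white noise; variable names are illustrative:

```python
import numpy as np
from math import factorial, log

def permutation_entropy(x, delays):
    """x: 1-D series; delays: length D-1 sequence of embedding delays.
    Returns normalised permutation entropy in [0, 1]."""
    offsets = np.concatenate(([0], np.cumsum(delays)))   # sample offsets of the D points
    D = len(offsets)
    n = len(x) - offsets[-1]                             # number of valid windows
    windows = np.stack([x[o:o + n] for o in offsets], axis=1)
    patterns = np.argsort(windows, axis=1)               # ordinal pattern of each window
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log(p)).sum() / log(factorial(D))

# The usual scalar-delay PE is the special case delays = [tau] * (D - 1).
```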
SU-E-J-261: Statistical Analysis and Chaotic Dynamics of Respiratory Signal of Patients in BodyFix
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michalski, D; Huq, M; Bednarz, G
Purpose: To quantify the respiratory signal of patients in BodyFix undergoing 4DCT scans with and without the immobilization cover. Methods: 20 pairs of respiratory tracks recorded with the RPM system during 4DCT scans were analyzed. Descriptive statistics were applied to selected parameters of the exhale-inhale decomposition. Standardized signals were used with the delay method to build orbits in embedded space. Nonlinear behavior was tested with surrogate data. Sample entropy (SE), Lempel-Ziv complexity (LZC) and the largest Lyapunov exponents (LLE) were compared. Results: Statistical tests show a difference between scans for inspiration time and its variability, which is larger for scans without the cover. The same is true for the variability of the end of exhalation and inhalation. Other parameters fail to show a difference. For both scans the respiratory signals show determinism and nonlinear stationarity. Statistical tests on surrogate data reveal their nonlinearity. LLEs show the signals' chaotic nature and its correlation with the breathing period and its embedding delay time. SE, LZC and LLE measure respiratory signal complexity. The nonlinear characteristics do not differ between scans. Conclusion: Contrary to expectation, the cover applied to patients in BodyFix appears to have a limited effect on signal parameters. Analysis based on trajectories of delay vectors shows the respiratory system's nonlinear character and its sensitive dependence on initial conditions. Reproducibility of the respiratory signal can be evaluated with measures of signal complexity and its predictability window. A longer respiratory period is conducive to signal reproducibility, as shown by these gauges. Statistical independence of the exhale and inhale times is also supported by the magnitude of the LLE. The nonlinear parameters seem more appropriate for gauging respiratory signal complexity, given its deterministic chaotic nature. This contrasts with measures based on harmonic analysis, which are blind to nonlinear features. The dynamics of breathing, so crucial for 4D-based clinical technologies, can be better controlled if a nonlinear methodology, which reflects the character of respiration, is applied. Funding provided by Varian Medical Systems via an Investigator Initiated Research Project.
2012-01-01
Background: Because of the large volume of data and the intrinsic variation of data intensity observed in microarray experiments, different statistical methods have been used to systematically extract biological information and to quantify the associated uncertainty. The simplest method to identify differentially expressed genes is to evaluate the ratio of average intensities in two different conditions and consider all genes that differ by more than an arbitrary cut-off value to be differentially expressed. This filtering approach is not a statistical test, and there is no associated value that can indicate the level of confidence in designating genes as differentially expressed or not. At the same time, the fold change by itself provides valuable information, and it is important to find unambiguous ways of using this information in expression data analysis. Results: A new method of finding differentially expressed genes, called the distributional fold change (DFC) test, is introduced. The method is based on an analysis of the intensity distribution of all microarray probe sets mapped to a three-dimensional feature space composed of average expression level, average difference of gene expression, and total variance. The proposed method allows one to rank each feature based on the signal-to-noise ratio and to ascertain for each feature the confidence level and power for being differentially expressed. The performance of the new method was evaluated using the total and partial area under receiver operating curves, tested on 11 data sets from the Gene Omnibus Database with independently verified differentially expressed genes, and compared with the t-test and shrinkage t-test. Overall the DFC test performed the best: on average it had higher sensitivity and partial AUC, and its advantage was most prominent in the low range of differentially expressed features, typical for formalin-fixed paraffin-embedded sample sets. Conclusions: The distributional fold change test is an effective method for finding and ranking differentially expressed probesets on microarrays. The application of this test is advantageous for data sets using formalin-fixed paraffin-embedded samples or other systems where degradation effects diminish the applicability of correlation-adjusted methods to the whole feature set. PMID:23122055
Wavelet based mobile video watermarking: spread spectrum vs. informed embedding
NASA Astrophysics Data System (ADS)
Mitrea, M.; Prêteux, F.; Duţă, S.; Petrescu, M.
2005-11-01
The expansion of cell phones provides an additional channel for digital video content distribution: music clips, news and sports events are increasingly transmitted to mobile users. Consequently, from the watermarking point of view, a new challenge must be addressed: very low bitrate content (e.g., as low as 64 kbit/s) now has to be protected. Within this framework, the paper approaches for the first time the mathematical models of two random processes, namely the original video to be protected and a very harmful attack that any watermarking method should face, the StirMark attack. By applying an advanced statistical investigation (combining the Chi square, Ro, Fisher and Student tests) in the discrete wavelet domain, it is established that the popular Gaussian assumption can be used only very restrictively when describing the former process and does not apply at all to the latter. As these results can a priori determine the performance of several watermarking methods, both of the spread spectrum and informed embedding types, they should be considered in the design stage.
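A hedged sketch of the kind of check involved: testing whether the detail-subband wavelet coefficients of a frame are plausibly Gaussian, here with a single standard normality test rather than the specific battery of tests listed above (assumes the PyWavelets and SciPy libraries):

```python
import pywt
from scipy import stats

def subband_normality(frame, wavelet="db4", level=2):
    """Return normality-test p-values for each detail subband of a 2-D frame;
    a small p-value argues against the Gaussian assumption for that subband."""
    coeffs = pywt.wavedec2(frame.astype(float), wavelet, level=level)
    pvalues = {}
    for lev, details in enumerate(coeffs[1:], start=1):
        for name, band in zip(("H", "V", "D"), details):
            _, p = stats.normaltest(band.ravel())
            pvalues[f"level{lev}_{name}"] = p
    return pvalues
```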
Experience, Challenges, and Opportunities of Being Fully Embedded in a User Group.
Wu, Lin; Thornton, Joel
2017-01-01
Embedded librarian models can assume different forms and levels, depending on patron needs and a library's choice of delivery services. An academic health sciences library decided to enhance its service delivery model by integrating a librarian into the College of Pharmacy, approximately 250 miles away from the main library. This article describes the embedded librarian's first-year experience, challenges, and opportunities while working as library faculty in the college. A comparison of one year of recorded statistics on pre-embedded and post-embedded activities demonstrated the effectiveness and impact of this embedded librarian model.
Perraton, L; Machotka, Z; Grimmer, K; Gibbs, C; Mahar, C; Kennedy, K
2017-04-01
Little has been published about the effectiveness of training postgraduate physiotherapy coursework students in research methods and evidence-based practice (EBP) theory. Graduate qualities in most universities include lifelong learning. Inclusion of EBP in postgraduate coursework students' training is one way for students to develop the knowledge and skills needed to implement current best evidence in their clinical practice after graduation, thereby facilitating lifelong learning. This paper reports on changes in confidence and anxiety regarding knowledge of statistical terminology and concepts related to research design and EBP in eight consecutive years of postgraduate physiotherapy students at one Australian university. Pre-survey/post-survey instruments were administered to students in an intensive 3-week postgraduate course, which taught health research methods, biostatistics and EBP. This course was embedded into a postgraduate physiotherapy programme from 2007 to 2014. The organization and delivery of the course were based on the best pedagogical evidence for effectively teaching adult physiotherapists. The course was delivered first each year in the programme, and no other course was delivered concurrently. There were significant improvements in confidence, significantly decreased anxiety and improvements in knowledge of statistical terminology and concepts related to research design and EBP at course completion. Age, gender and country of origin were not confounders of learning outcomes, although there was a (non-significant) trend for years of practice to negatively impact learning outcomes (p = 0.09). There was a greater improvement in confidence in statistical terminology than in concepts related to research design and EBP. An intensive teaching programme in health research methods, biostatistics and EBP, based on best-practice adult physiotherapy learning principles, is effective immediately post-course in decreasing anxiety and increasing confidence in the terminology used in research methods and EBP. Copyright © 2016 John Wiley & Sons, Ltd.
Steganalysis based on reducing the differences of image statistical characteristics
NASA Astrophysics Data System (ADS)
Wang, Ran; Niu, Shaozhang; Ping, Xijian; Zhang, Tao
2018-04-01
Compared with the process of embedding, the image content makes a more significant impact on the differences in image statistical characteristics. This makes image steganalysis a classification problem with larger within-class scatter distances and smaller between-class scatter distances. As a result, the steganalysis features become inseparable because of the differences in image statistical characteristics. In this paper, a new steganalysis framework which can reduce the differences in image statistical characteristics caused by various contents and processing methods is proposed. The given images are segmented into several sub-images according to texture complexity. Steganalysis features are extracted separately from each subset with the same or similar texture complexity to build a classifier. The final steganalysis result is obtained through a weighted fusion process. The theoretical analysis and experimental results demonstrate the validity of the framework.
Geodesic Monte Carlo on Embedded Manifolds
Byrne, Simon; Girolami, Mark
2013-01-01
Markov chain Monte Carlo methods explicitly defined on the manifold of probability distributions have recently been established. These methods are constructed from diffusions across the manifold and the solution of the equations describing geodesic flows in the Hamilton–Jacobi representation. This paper takes the differential geometric basis of Markov chain Monte Carlo further by considering methods to simulate from probability distributions that themselves are defined on a manifold, with common examples being classes of distributions describing directional statistics. Proposal mechanisms are developed based on the geodesic flows over the manifolds of support for the distributions, and illustrative examples are provided for the hypersphere and Stiefel manifold of orthonormal matrices. PMID:25309024
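On the hypersphere the geodesic flow has a closed form, which is what makes this sampler practical there; a minimal sketch of a single geodesic move of the kind used inside such an update (step size and momentum refresh are illustrative and omitted):

```python
import numpy as np

def sphere_geodesic_step(x, v, t):
    """Move along the great circle from unit vector x with tangent velocity v
    (x @ v == 0) for time t; returns the new position and velocity."""
    speed = np.linalg.norm(v)
    if speed == 0.0:
        return x, v
    u = v / speed
    x_new = x * np.cos(speed * t) + u * np.sin(speed * t)
    v_new = speed * (u * np.cos(speed * t) - x * np.sin(speed * t))
    return x_new, v_new
```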
Statistical Learning and Language: An Individual Differences Study
ERIC Educational Resources Information Center
Misyak, Jennifer B.; Christiansen, Morten H.
2012-01-01
Although statistical learning and language have been assumed to be intertwined, this theoretical presupposition has rarely been tested empirically. The present study investigates the relationship between statistical learning and language using a within-subject design embedded in an individual-differences framework. Participants were administered…
NASA Technical Reports Server (NTRS)
Duffy, S. F.; Hu, J.; Hopkins, D. A.
1995-01-01
The article begins by examining the fundamentals of traditional deterministic design philosophy. The initial section outlines the concepts of failure criteria and limit state functions, two traditional notions that are embedded in deterministic design philosophy. This is followed by a discussion regarding safety factors (a possible limit state function) and the common utilization of statistical concepts in deterministic engineering design approaches. Next, the fundamental aspects of a probabilistic failure analysis are explored, and it is shown that the deterministic design concepts mentioned in the initial portion of the article are embedded in probabilistic design methods. For components fabricated from ceramic materials (and other similarly brittle materials) the probabilistic design approach yields the widely used Weibull analysis after suitable assumptions are incorporated. The authors point out that Weibull analysis provides the rare instance where closed-form solutions are available for a probabilistic failure analysis. Since numerical methods are usually required to evaluate component reliabilities, a section on Monte Carlo methods is included to introduce the concept. The article concludes with a presentation of the technical aspects that support the numerical method known as fast probability integration (FPI). This includes a discussion of the Hasofer-Lind and Rackwitz-Fiessler approximations.
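As a concrete illustration of the closed-form case mentioned above, the two-parameter Weibull model gives the failure probability of a uniformly stressed component directly, and a simple Monte Carlo estimate can be used to check it; a sketch with arbitrary parameter values (not tied to any material in the article):

```python
import numpy as np

def weibull_failure_probability(sigma, sigma0, m):
    """Two-parameter Weibull: P_f = 1 - exp(-(sigma/sigma0)**m)."""
    return 1.0 - np.exp(-(sigma / sigma0) ** m)

# Monte Carlo check: sample component strengths, count failures at stress sigma.
rng = np.random.default_rng(0)
sigma, sigma0, m = 250.0, 400.0, 10.0
strengths = sigma0 * rng.weibull(m, size=200_000)   # Weibull-distributed strengths
print(weibull_failure_probability(sigma, sigma0, m), (strengths < sigma).mean())
```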
Time Delay Embedding Increases Estimation Precision of Models of Intraindividual Variability
ERIC Educational Resources Information Center
von Oertzen, Timo; Boker, Steven M.
2010-01-01
This paper investigates the precision of parameters estimated from local samples of time dependent functions. We find that "time delay embedding," i.e., structuring data prior to analysis by constructing a data matrix of overlapping samples, increases the precision of parameter estimates and in turn statistical power compared to standard…
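A minimal sketch of the data-structuring step itself, building the matrix of overlapping delayed samples from a univariate series (names are illustrative):

```python
import numpy as np

def time_delay_embed(x, dim, lag=1):
    """Return an (n_rows x dim) matrix whose row i is
    [x[i], x[i+lag], ..., x[i+(dim-1)*lag]]."""
    x = np.asarray(x)
    n_rows = len(x) - (dim - 1) * lag
    return np.stack([x[i * lag: i * lag + n_rows] for i in range(dim)], axis=1)

# Each row is then treated as one multivariate observation when fitting the model.
```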
NASA Astrophysics Data System (ADS)
Saber, Sámar; Macías, David; Ortiz de Urbina, Josetxu; Kjesbu, Olav Sigurd
2015-01-01
Traditional histological protocols in marine fish reproductive laboratories using paraffin as the embedding medium are now increasingly being replaced with protocols using resin instead. These procedures entail different degrees of tissue shrinkage, complicating direct comparisons of measurement results across laboratories or articles. In this work we selected ovaries of spawning Mediterranean albacore (Thunnus alalunga) as the subject of our study to address the issue of structural changes, by contrasting values of oocyte recruitment and final batch fecundity obtained from the same tissue samples in both paraffin and resin. A modern stereological method, the oocyte packing density (OPD) theory, was used, supported by initial studies on ovarian tissue sampling and measurement design. Examples of differences in the volume fraction of oocyte stages, free space and connective tissue were found between the embedding media. Mean oocyte diameters were smaller in paraffin than in resin, with differences ranging between 0.5% in primary growth and 24.3% in hydration (HYD) stage oocytes. Fresh oocyte measurements showed that oocytes shrank as a consequence of the embedding process, reaching the maximal degree of shrinkage for oocytes in the HYD stage (45.8% in paraffin and 26.5% in resin). In order to assess the effect of oocyte shrinkage on the OPD result, and thereby on relative batch fecundity (Fr), oocyte diameters corrected and uncorrected for shrinkage were used for the estimations. Statistically significant differences were found (P < 0.05) between these two approaches in both embedding media. The average Fr was numerically smaller in paraffin compared to resin (86 ± 61 vs. 106 ± 54 oocytes per gram of body mass (mean ± SD)). For both embedding media, statistically significant differences (P < 0.05) were seen between Fr results based on either oocytes in the germinal vesicle migration stage or the HYD stage. As a valuable adjunct, the present use of the OPD theory made it possible to document that the oocyte recruitment of spawning ovaries of Mediterranean albacore followed the typical pattern of asynchronous oocyte development and indeterminate fecundity.
Embedded arrays of vertically aligned carbon nanotube carpets and methods for making them
Kim, Myung Jong; Nicholas, Nolan Walker; Kittrell, W. Carter; Schmidt, Howard K.
2015-06-30
According to some embodiments, the present invention provides a system and method for supporting a carbon nanotube array that involve an entangled carbon nanotube mat integral with the array, where the mat is embedded in an embedding material. The embedding material may be depositable on a carbon nanotube. A depositable material may be metallic or nonmetallic. The embedding material may be an adhesive material. The adhesive material may optionally be mixed with a metal powder. The embedding material may be supported by a substrate or self-supportive. The embedding material may be conductive or nonconductive. The system and method provide superior mechanical and, when applicable, electrical, contact between the carbon nanotubes in the array and the embedding material. The optional use of a conductive material for the embedding material provides a mechanism useful for integration of carbon nanotube arrays into electronic devices.
2017-01-01
Simulation Methodology: Modified Embedded-Atom Method Theory (covering the embedding energy function and the screening factor). In the EAM and MEAM formalisms, the total energy of a system of atoms (Etot) is... An interatomic potential for saturated hydrocarbons using the modified embedded-atom method (MEAM), a semiempirical many-body potential based on
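The truncated sentence above refers to the standard (M)EAM energy expression; for reference, its generic form (not the report's specific parameterisation) is

```latex
E_{\mathrm{tot}} = \sum_i F_i\!\left(\bar{\rho}_i\right)
  + \tfrac{1}{2}\sum_{i \neq j} \phi_{ij}\!\left(r_{ij}\right),
```

where F_i is the embedding energy as a function of the background electron density at atom i and phi_ij is a pair interaction; the MEAM additionally applies angular screening when constructing the background density.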
Quantum mechanical fragment methods based on partitioning atoms or partitioning coordinates.
Wang, Bo; Yang, Ke R; Xu, Xuefei; Isegawa, Miho; Leverentz, Hannah R; Truhlar, Donald G
2014-09-16
Conspectus The development of more efficient and more accurate ways to represent reactive potential energy surfaces is a requirement for extending the simulation of large systems to more complex systems, longer-time dynamical processes, and more complete statistical mechanical sampling. One way to treat large systems is by direct dynamics fragment methods. Another way is by fitting system-specific analytic potential energy functions with methods adapted to large systems. Here we consider both approaches. First we consider three fragment methods that allow a given monomer to appear in more than one fragment. The first two approaches are the electrostatically embedded many-body (EE-MB) expansion and the electrostatically embedded many-body expansion of the correlation energy (EE-MB-CE), which we have shown to yield quite accurate results even when one restricts the calculations to include only electrostatically embedded dimers. The third fragment method is the electrostatically embedded molecular tailoring approach (EE-MTA), which is more flexible than EE-MB and EE-MB-CE. We show that electrostatic embedding greatly improves the accuracy of these approaches compared with the original unembedded approaches. Quantum mechanical fragment methods share with combined quantum mechanical/molecular mechanical (QM/MM) methods the need to treat a quantum mechanical fragment in the presence of the rest of the system, which is especially challenging for those parts of the rest of the system that are close to the boundary of the quantum mechanical fragment. This is a delicate matter even for fragments that are not covalently bonded to the rest of the system, but it becomes even more difficult when the boundary of the quantum mechanical fragment cuts a bond. We have developed a suite of methods for more realistically treating interactions across such boundaries. These methods include redistributing and balancing the external partial atomic charges and the use of tuned fluorine atoms for capping dangling bonds, and we have shown that they can greatly improve the accuracy. Finally we present a new approach that goes beyond QM/MM by combining the convenience of molecular mechanics with the accuracy of fitting a potential function to electronic structure calculations on a specific system. To make the latter practical for systems with a large number of degrees of freedom, we developed a method to interpolate between local internal-coordinate fits to the potential energy. A key issue for the application to large systems is that rather than assigning the atoms or monomers to fragments, we assign the internal coordinates to reaction, secondary, and tertiary sets. Thus, we make a partition in coordinate space rather than atom space. Fits to the local dependence of the potential energy on tertiary coordinates are arrayed along a preselected reaction coordinate at a sequence of geometries called anchor points; the potential energy function is called an anchor points reactive potential. Electrostatically embedded fragment methods and the anchor points reactive potential, because they are based on treating an entire system by quantum mechanical electronic structure methods but are affordable for large and complex systems, have the potential to open new areas for accurate simulations where combined QM/MM methods are inadequate.
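For orientation, the many-body expansion underlying EE-MB, truncated at electrostatically embedded dimers, has the form

```latex
E \approx \sum_i E_i + \sum_{i<j} \left( E_{ij} - E_i - E_j \right),
```

where each monomer energy E_i and dimer energy E_ij is computed quantum mechanically in the field of point charges representing the remaining monomers; EE-MB-CE applies the same expansion to the correlation energy only.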
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kiefer, Ryan M., E-mail: rkiefer11@gmail.com; Pandey, Nirnimesh; Trerotola, Scott O.
Purpose: Accurately detecting inferior vena cava (IVC) filter complications is important for safe and successful retrieval, as tip-embedded filters require removal with non-standard techniques. Venography prior to IVC filter retrieval has traditionally used a single anterior–posterior (AP) projection. This study compares the utility of rotational venography to AP venography prior to IVC filter removal. Materials and Methods: The rotational venograms from 100 consecutive IVC filter retrievals over a 35-month period were evaluated retrospectively. The AP view of the rotational venogram was examined separately from the full series by a radiologist blinded to alternative imaging and operative findings. The venograms were evaluated for tip embedding, filter fracture, filter thrombus, and IVC thrombus. Statistical analysis was performed. Results: Using operative findings and peri-procedural imaging as the reference standard, tip embedding occurred in 59 of the 100 filters (59%). AP venography was used to correctly identify 31 tip-embedded filters (53% sensitivity) with two false positives (95% specificity) for an accuracy of 70%. Rotational venography was used to correctly identify 58 tip-embedded filters (98% sensitivity) with one false positive (98% specificity) for an accuracy of 98%. A significant difference was found in the sensitivities of the two diagnostic approaches (P < .01). Other findings of thrombus and filter fracture were not significantly different between the two groups. Conclusion: Rotational venograms allow for more accurate detection of tip-embedded IVC filters compared to AP views alone. As this determines the approach taken, rotational venograms are helpful if obtained prior to IVC filter retrieval.
The Path Resistance Method for Bounding the Smallest Nontrivial Eigenvalue of a Laplacian
NASA Technical Reports Server (NTRS)
Guattery, Stephen; Leighton, Tom; Miller, Gary L.
1997-01-01
We introduce the path resistance method for lower bounds on the smallest nontrivial eigenvalue of the Laplacian matrix of a graph. The method is based on viewing the graph in terms of electrical circuits; it uses clique embeddings to produce lower bounds on λ2 and star embeddings to produce lower bounds on the smallest Rayleigh quotient when there is a zero Dirichlet boundary condition. The method assigns priorities to the paths in the embedding; we show that, for an unweighted tree T, using uniform priorities for a clique embedding produces a lower bound on λ2 that is off by at most an O(log diameter(T)) factor. We show that the best bounds this method can produce for clique embeddings are the same as for a related method that uses clique embeddings and edge lengths to produce bounds.
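For reference, the quantity being bounded is the second-smallest eigenvalue of the graph Laplacian, which has the standard variational characterisation

```latex
\lambda_2 = \min_{x \perp \mathbf{1},\, x \neq 0} \frac{x^{\mathsf{T}} L x}{x^{\mathsf{T}} x},
```

and the path resistance method derives lower bounds on this Rayleigh quotient from properties of the paths used in the clique or star embedding.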
A Statistical Approach for the Concurrent Coupling of Molecular Dynamics and Finite Element Methods
NASA Technical Reports Server (NTRS)
Saether, E.; Yamakov, V.; Glaessgen, E.
2007-01-01
Molecular dynamics (MD) methods are opening new opportunities for simulating the fundamental processes of material behavior at the atomistic level. However, increasing the size of the MD domain quickly presents intractable computational demands. A robust approach to surmount this computational limitation has been to unite continuum modeling procedures such as the finite element method (FEM) with MD analyses thereby reducing the region of atomic scale refinement. The challenging problem is to seamlessly connect the two inherently different simulation techniques at their interface. In the present work, a new approach to MD-FEM coupling is developed based on a restatement of the typical boundary value problem used to define a coupled domain. The method uses statistical averaging of the atomistic MD domain to provide displacement interface boundary conditions to the surrounding continuum FEM region, which, in return, generates interface reaction forces applied as piecewise constant traction boundary conditions to the MD domain. The two systems are computationally disconnected and communicate only through a continuous update of their boundary conditions. With the use of statistical averages of the atomistic quantities to couple the two computational schemes, the developed approach is referred to as an embedded statistical coupling method (ESCM) as opposed to a direct coupling method where interface atoms and FEM nodes are individually related. The methodology is inherently applicable to three-dimensional domains, avoids discretization of the continuum model down to atomic scales, and permits arbitrary temperatures to be applied.
Exact density functional and wave function embedding schemes based on orbital localization
NASA Astrophysics Data System (ADS)
Hégely, Bence; Nagy, Péter R.; Ferenczy, György G.; Kállay, Mihály
2016-08-01
Exact schemes for the embedding of density functional theory (DFT) and wave function theory (WFT) methods into lower-level DFT or WFT approaches are introduced utilizing orbital localization. First, a simple modification of the projector-based embedding scheme of Manby and co-workers [J. Chem. Phys. 140, 18A507 (2014)] is proposed. We also use localized orbitals to partition the system, but instead of augmenting the Fock operator with a somewhat arbitrary level-shift projector we solve the Huzinaga-equation, which strictly enforces the Pauli exclusion principle. Second, the embedding of WFT methods in local correlation approaches is studied. Since the latter methods split up the system into local domains, very simple embedding theories can be defined if the domains of the active subsystem and the environment are treated at a different level. The considered embedding schemes are benchmarked for reaction energies and compared to quantum mechanics (QM)/molecular mechanics (MM) and vacuum embedding. We conclude that for DFT-in-DFT embedding, the Huzinaga-equation-based scheme is more efficient than the other approaches, but QM/MM or even simple vacuum embedding is still competitive in particular cases. Concerning the embedding of wave function methods, the clear winner is the embedding of WFT into low-level local correlation approaches, and WFT-in-DFT embedding can only be more advantageous if a non-hybrid density functional is employed.
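For context, the projector-based scheme being modified augments the subsystem Fock matrix with a level-shifted projector onto the environment orbitals, commonly written in the embedding literature as

```latex
\mathbf{f}^{A\text{-in-}B} = \mathbf{h} + \mathbf{g}\!\left[\boldsymbol{\gamma}^{A} + \boldsymbol{\gamma}^{B}\right] + \mu\, \mathbf{S}\boldsymbol{\gamma}^{B}\mathbf{S},
```

with S the overlap matrix, gamma^A and gamma^B the subsystem densities, and mu a large level-shift parameter (this notation is schematic and not taken from the paper); the Huzinaga-equation-based variant discussed here replaces the finite level shift with a condition that enforces the Pauli exclusion principle exactly.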
Generalized watermarking attack based on watermark estimation and perceptual remodulation
NASA Astrophysics Data System (ADS)
Voloshynovskiy, Sviatoslav V.; Pereira, Shelby; Herrigel, Alexander; Baumgartner, Nazanin; Pun, Thierry
2000-05-01
Digital image watermarking has become a popular technique for authentication and copyright protection. For verifying the security and robustness of watermarking algorithms, specific attacks have to be applied to test them. In contrast to the known Stirmark attack, which degrades the quality of the image while destroying the watermark, this paper presents a new approach which is based on the estimation of a watermark and the exploitation of the properties of Human Visual System (HVS). The new attack satisfies two important requirements. First, image quality after the attack as perceived by the HVS is not worse than the quality of the stego image. Secondly, the attack uses all available prior information about the watermark and cover image statistics to perform the best watermark removal or damage. The proposed attack is based on a stochastic formulation of the watermark removal problem, considering the embedded watermark as additive noise with some probability distribution. The attack scheme consists of two main stages: (1) watermark estimation and partial removal by a filtering based on a Maximum a Posteriori (MAP) approach; (2) watermark alteration and hiding through addition of noise to the filtered image, taking into account the statistics of the embedded watermark and exploiting HVS characteristics. Experiments on a number of real world and computer generated images show the high efficiency of the proposed attack against known academic and commercial methods: the watermark is completely destroyed in all tested images without altering the image quality. The approach can be used against watermark embedding schemes that operate either in coordinate domain, or transform domains like Fourier, DCT or wavelet.
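A much-simplified sketch of the two stages (denoising-based watermark estimation followed by noise addition weighted by local activity as a crude HVS proxy), standing in for the MAP filtering and perceptual model described above; filter choices and parameters are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, generic_filter

def estimation_remodulation_attack(stego, sigma=1.5, strength=2.0):
    """Estimate the watermark as stego minus a denoised version, then re-add
    altered noise concentrated in textured regions where it is less visible."""
    stego = stego.astype(float)
    denoised = gaussian_filter(stego, sigma)           # rough estimate of the cover
    wm_estimate = stego - denoised                     # estimated additive watermark
    activity = generic_filter(stego, np.std, size=5)   # local std as a texture measure
    activity /= activity.max() + 1e-12
    noise = np.random.default_rng(0).normal(scale=wm_estimate.std(), size=stego.shape)
    return np.clip(denoised + strength * activity * noise, 0, 255)
```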
Embedded WENO: A design strategy to improve existing WENO schemes
NASA Astrophysics Data System (ADS)
van Lith, Bart S.; ten Thije Boonkkamp, Jan H. M.; IJzerman, Wilbert L.
2017-02-01
Embedded WENO methods utilise all adjacent smooth substencils to construct a desirable interpolation. Conventional WENO schemes under-use this possibility close to large gradients or discontinuities. We develop a general approach for constructing embedded versions of existing WENO schemes. Embedded methods based on the WENO schemes of Jiang and Shu [1] and on the WENO-Z scheme of Borges et al. [2] are explicitly constructed. Several possible choices are presented that result in either better spectral properties or a higher order of convergence for sufficiently smooth solutions. However, these improvements carry over to discontinuous solutions. The embedded methods are demonstrated to be indeed improvements over their standard counterparts by several numerical examples. All the embedded methods presented have no added computational effort compared to their standard counterparts.
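For reference, the nonlinear weights that the embedded construction modifies are, in the Jiang-Shu scheme and its WENO-Z variant,

```latex
\omega_k = \frac{\alpha_k}{\sum_l \alpha_l}, \qquad
\alpha_k^{\mathrm{JS}} = \frac{d_k}{(\beta_k + \varepsilon)^2}, \qquad
\alpha_k^{\mathrm{Z}} = d_k \left( 1 + \left(\frac{\tau}{\beta_k + \varepsilon}\right)^{q} \right),
```

where d_k are the ideal linear weights, beta_k the smoothness indicators of the substencils, tau a global smoothness measure (for the fifth-order scheme typically tau_5 = |beta_0 - beta_2|), and q a small integer exponent.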
RESLanjut: The learning media for improve students understanding in embedded systems
NASA Astrophysics Data System (ADS)
Indrianto, Susanti, Meilia Nur Indah; Karina, Djunaidi
2017-08-01
Networking in embedded systems can be implemented with many kinds of technology, including mobile phones, Bluetooth, modems, Ethernet cards, wireless links and so on. Using a network in an embedded system allows remote control. In previous research, the researchers found that many students are able to comprehend the basic concepts of embedded systems and can also build embedded system tools, but without network integration. For that reason, the embedded systems module needs further development. The design of the embedded systems practicum module requires a prototyping method in order to achieve the desired goal. The prototyping method is often used in the real world; a prototype can even be part of a product, consisting of logic expressions or an external physical interface. The embedded systems practicum module is meant to increase student comprehension of the embedded systems course and to encourage students to innovate with technology-based tools. It is also meant to help teachers teach embedded systems concepts in the course. Student comprehension is expected to increase with the use of the practicum course.
The architecture of the management system of complex steganographic information
NASA Astrophysics Data System (ADS)
Evsutin, O. O.; Meshcheryakov, R. V.; Kozlova, A. S.; Solovyev, T. M.
2017-01-01
The aim of the study is to create a wide area information system that allows one to control processes of generation, embedding, extraction, and detection of steganographic information. In this paper, the following problems are considered: the definition of the system scope and the development of its architecture. For the algorithmic support of the system, classic steganographic methods are used to embed information. Methods of mathematical statistics and computational intelligence are used to identify the embedded information. The main result of the paper is the development of the architecture of the management system of complex steganographic information. The suggested architecture utilizes cloud technology in order to provide its services as a web service via the Internet. It is designed to process multimedia data streams originating from many sources of different types. The information system, built in accordance with the proposed architecture, will be used in the following areas: hidden transfer of documents protected by medical secrecy in telemedicine systems; copyright protection of online content in public networks; prevention of information leakage caused by insiders.
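The abstract mentions classic steganographic embedding methods as the algorithmic basis of the system; as a point of reference only (not the system described above), the most basic of these, least-significant-bit (LSB) replacement in a grayscale image, can be sketched in a few lines of Python:

```python
import numpy as np

def lsb_embed(cover, bits):
    """Replace the least significant bit of the first len(bits) pixels with the payload."""
    stego = cover.copy().ravel()
    if len(bits) > stego.size:
        raise ValueError("payload does not fit into the cover image")
    stego[:len(bits)] = (stego[:len(bits)] & 0xFE) | bits
    return stego.reshape(cover.shape)

def lsb_extract(stego, n_bits):
    """Read back the first n_bits least significant bits."""
    return stego.ravel()[:n_bits] & 1

cover = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
payload = np.random.randint(0, 2, size=256, dtype=np.uint8)
stego = lsb_embed(cover, payload)
assert np.array_equal(lsb_extract(stego, payload.size), payload)
print("max pixel change:", int(np.max(np.abs(stego.astype(int) - cover.astype(int)))))  # at most 1
```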
Load-embedded inertial measurement unit reveals lifting performance.
Tammana, Aditya; McKay, Cody; Cain, Stephen M; Davidson, Steven P; Vitali, Rachel V; Ojeda, Lauro; Stirling, Leia; Perkins, Noel C
2018-07-01
Manual lifting of loads arises in many occupations as well as in activities of daily living. Prior studies explore lifting biomechanics and conditions implicated in lifting-induced injuries through laboratory-based experimental methods. This study introduces a new measurement method using load-embedded inertial measurement units (IMUs) to evaluate lifting tasks in varied environments outside of the laboratory. An example vertical load lifting task is considered that is included in an outdoor obstacle course. The IMU data, in the form of the load acceleration and angular velocity, is used to estimate load vertical velocity and three lifting performance metrics: the lifting time (speed), power, and motion smoothness. Large qualitative differences in these parameters distinguish exemplar high- and low-performance trials. These differences are further supported by subsequent statistical analyses of twenty-three trials (including a total of 115 lift/lower cycles) from fourteen healthy participants. Results reveal that lifting time is strongly correlated with lifting power (as expected) but also correlated with motion smoothness. Thus, participants who lift rapidly do so with significantly greater power using motions that minimize motion jerk. Copyright © 2018 Elsevier Ltd. All rights reserved.
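As an illustration of the kind of processing described (and only that; the metrics and thresholds below are assumptions of this sketch, not the authors' definitions), load vertical velocity, specific power and a jerk-based smoothness score might be estimated from gravity-compensated vertical acceleration as follows:

```python
import numpy as np

def lift_metrics(a_vert, dt):
    """Toy estimates of lift duration, peak specific power, and jerk-based smoothness.

    a_vert : vertical load acceleration in m/s^2, gravity already removed and
             expressed in a world frame (an assumption of this sketch; real IMU
             processing needs orientation estimation and drift correction).
    """
    v = np.cumsum(a_vert) * dt                 # crude integration of acceleration
    v -= np.linspace(v[0], v[-1], v.size)      # naive detrend, assumes rest at start and end
    power_per_kg = a_vert * v                  # instantaneous mechanical power per unit mass
    jerk = np.gradient(a_vert, dt)
    lift_time = dt * np.count_nonzero(np.abs(v) > 0.05)   # time spent moving (threshold assumed)
    smoothness = -np.sqrt(np.mean(jerk**2))    # less negative = smoother motion
    return lift_time, np.max(power_per_kg), smoothness

# Synthetic 2 s lift sampled at 100 Hz: accelerate up, then decelerate.
dt = 0.01
t = np.arange(0, 2, dt)
a = 2.0 * np.sin(np.pi * t)
print(lift_metrics(a, dt))
```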
The New Maia Detector System: Methods For High Definition Trace Element Imaging Of Natural Material
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryan, C. G.; School of Physics, University of Melbourne, Parkville VIC; CODES Centre of Excellence, University of Tasmania, Hobart TAS
2010-04-06
Motivated by the need for megapixel high definition trace element imaging to capture intricate detail in natural material, together with faster acquisition and improved counting statistics in elemental imaging, a large energy-dispersive detector array called Maia has been developed by CSIRO and BNL for SXRF imaging on the XFM beamline at the Australian Synchrotron. A 96 detector prototype demonstrated the capacity of the system for real-time deconvolution of complex spectral data using an embedded implementation of the Dynamic Analysis method and acquiring highly detailed images up to 77 M pixels spanning large areas of complex mineral sample sections.
The New Maia Detector System: Methods For High Definition Trace Element Imaging Of Natural Material
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryan, C.G.; Siddons, D.P.; Kirkham, R.
2010-05-25
Motivated by the need for megapixel high definition trace element imaging to capture intricate detail in natural material, together with faster acquisition and improved counting statistics in elemental imaging, a large energy-dispersive detector array called Maia has been developed by CSIRO and BNL for SXRF imaging on the XFM beamline at the Australian Synchrotron. A 96 detector prototype demonstrated the capacity of the system for real-time deconvolution of complex spectral data using an embedded implementation of the Dynamic Analysis method and acquiring highly detailed images up to 77 M pixels spanning large areas of complex mineral sample sections.
NASA Astrophysics Data System (ADS)
Sangueza, Cheryl Ramirez
This mixed-method, dual-phase, embedded-case study employed the Social Cognitive Theory and the construct of self-efficacy to examine the contributors to science teaching self-efficacy and science teaching practices across different levels of efficacy in six pre-service elementary teachers during their science methods course and student teaching experiences. Data sources included the Science Teaching Efficacy Belief Instrument (STEBI-B) for pre-service teachers, questionnaires, journals, reflections, student teaching lesson observations, and lesson debriefing notes. Results from the STEBI-B show that all participants showed an increase in efficacy throughout the study. The ANOVA analysis of the STEBI-B revealed a statistically significant increase in level of efficacy during the methods course, during student teaching, and from the beginning of the study to the end. Of interest in this study was the examination of the participants' science teaching practices across different levels of efficacy. Results of this analysis revealed how the pre-service elementary teachers in this study contextualized their experiences in learning to teach science and its influences on their science teaching practices. A key implication involves the value of exploring how pre-service teachers interpret their learning-to-teach experiences and how their interpretations influence the development of their science teaching practices.
An extinction/reignition dynamic method for turbulent combustion
NASA Astrophysics Data System (ADS)
Knaus, Robert; Pantano, Carlos
2011-11-01
Quasi-randomly distributed locations of high strain in turbulent combustion can cause a nonpremixed or partially premixed flame to develop local regions of extinction called "flame holes". The presence and extent of these holes can increase certain pollutants and reduce the amount of fuel burned. Accurately modeling the dynamics of these interacting regions can improve the accuracy of combustion simulations by effectively incorporating finite-rate chemistry effects. In the proposed method, the flame hole state is characterized by a progress variable that nominally exists on the stoichiometric surface. The evolution of this field is governed by a partial-differential equation embedded in the time-dependent two-manifold of the flame surface. This equation includes advection, propagation, and flame hole formation (flame hole healing or collapse is naturally accounted for by propagation). We present a computational algorithm that solves this equation by embedding it in the usual three-dimensional space. A piece-wise parabolic WENO scheme combined with a compression algorithm is used to evolve the flame hole progress variable. A key aspect of the method is the extension of the surface data to the three-dimensional space in an efficient manner. We present results of this method applied to canonical turbulent combusting flows where the flame holes interact and describe their statistics.
A comparative analysis of the statistical properties of large mobile phone calling networks.
Li, Ming-Xia; Jiang, Zhi-Qiang; Xie, Wen-Jie; Miccichè, Salvatore; Tumminello, Michele; Zhou, Wei-Xing; Mantegna, Rosario N
2014-05-30
Mobile phone calling is one of the most widely used communication methods in modern society. The records of calls among mobile phone users provide us with a valuable proxy for the understanding of human communication patterns embedded in social networks. Mobile phone users call each other, forming a directed calling network. If only reciprocal calls are considered, we obtain an undirected mutual calling network. The preferential communication behavior between two connected users can be statistically tested, and it results in two Bonferroni networks with statistically validated edges. We perform a comparative analysis of the statistical properties of these four networks, which are constructed from the calling records of more than nine million individuals in Shanghai over a period of 110 days. We find that these networks share many common structural properties and also exhibit idiosyncratic features when compared with previously studied large mobile calling networks. The empirical findings provide us with an intriguing picture of a representative large social network that might shed new light on the modelling of large social networks.
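A minimal sketch of how the first two of the four networks (the directed calling network and the undirected mutual calling network) can be built from call records, using networkx on toy data; the statistically validated Bonferroni networks require an additional edge-wise significance test that is only indicated in a comment:

```python
import networkx as nx

# Toy call records: (caller, callee) pairs standing in for the real data set.
calls = [("a", "b"), ("b", "a"), ("a", "c"), ("c", "d"), ("d", "c"), ("a", "b")]

# Directed calling network, with edge weights counting calls.
G = nx.DiGraph()
for u, v in calls:
    if G.has_edge(u, v):
        G[u][v]["weight"] += 1
    else:
        G.add_edge(u, v, weight=1)

# Undirected mutual calling network: keep only reciprocated pairs.
M = nx.Graph()
M.add_edges_from((u, v) for u, v in G.edges() if G.has_edge(v, u))

print("directed:", G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")
print("mutual:  ", M.number_of_nodes(), "nodes,", M.number_of_edges(), "edges")
# The Bonferroni networks would additionally test each edge's call count against
# a null model and keep only statistically validated edges (not shown here).
```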
NASA Astrophysics Data System (ADS)
He, Xiulan; Sonnenborg, Torben O.; Jørgensen, Flemming; Jensen, Karsten H.
2017-03-01
Stationarity has traditionally been a requirement of geostatistical simulations. A common way to deal with non-stationarity is to divide the system into stationary sub-regions and subsequently merge the realizations for each region. Recently, the so-called partition approach that has the flexibility to model non-stationary systems directly was developed for multiple-point statistics simulation (MPS). The objective of this study is to apply the MPS partition method with conventional borehole logs and high-resolution airborne electromagnetic (AEM) data, for simulation of a real-world non-stationary geological system characterized by a network of connected buried valleys that incise deeply into layered Miocene sediments (case study in Denmark). The results show that, based on fragmented information of the formation boundaries, the MPS partition method is able to simulate a non-stationary system including valley structures embedded in a layered Miocene sequence in a single run. Besides, statistical information retrieved from the AEM data improved the simulation of the geology significantly, especially for the deep-seated buried valley sediments where borehole information is sparse.
ERIC Educational Resources Information Center
Frank, Stefan L.; Trompenaars, Thijs; Vasishth, Shravan
2016-01-01
An English double-embedded relative clause from which the middle verb is omitted can often be processed more easily than its grammatical counterpart, a phenomenon known as the grammaticality illusion. This effect has been found to be reversed in German, suggesting that the illusion is language specific rather than a consequence of universal…
Lexical statistics of competition in L2 versus L1 listening
NASA Astrophysics Data System (ADS)
Cutler, Anne
2005-09-01
Spoken-word recognition involves multiple activation of alternative word candidates and competition between these alternatives. Phonemic confusions in L2 listening increase the number of potentially active words, thus slowing word recognition by adding competitors. This study used a 70,000-word English lexicon backed by frequency statistics from a 17,900,000-word corpus to assess the competition increase resulting from two representative phonemic confusions, one vocalic (ae/E) and one consonantal (r/l), in L2 versus L1 listening. The first analysis involved word embedding. Embedded words (cat in cattle, rib in ribbon) cause competition, which phonemic confusion can increase (cat in kettle, rib in liberty). The average increase in number of embedded words was 59.6% and 48.3%. The second analysis concerned temporary ambiguity. Even when no embeddings are present, multiple alternatives are possible: para- can become parrot, paradise, etc., but also pallet, palace given /r/-/l/ confusion. Phoneme confusions (vowel or consonant) in first or second position in the word approximately doubled the number of activated candidates; confusions later in the word increased activation by 53% and 42% on average. Phonemic confusions thus significantly increase competition for L2 compared with L1 listeners.
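A toy illustration of the embedding counts discussed above, using orthographic strings as a stand-in for phonemic transcriptions (so the lexicon and the resulting numbers are purely illustrative):

```python
# Toy illustration of embedding counts with and without an /r/-/l/ confusion.
lexicon = ["cat", "cattle", "kettle", "rib", "ribbon", "liberty",
           "parrot", "pallet", "palace", "paradise"]

def embedded_words(carrier, lexicon, confusable=None):
    """Words from the lexicon embedded in `carrier`, optionally after collapsing
    a confusable letter pair such as ('r', 'l')."""
    def collapse(s):
        if confusable:
            a, b = confusable
            s = s.replace(b, a)
        return s
    c = collapse(carrier)
    return [w for w in lexicon if w != carrier and collapse(w) in c]

for carrier in ["ribbon", "liberty"]:
    print(carrier,
          "| L1 competitors:", embedded_words(carrier, lexicon),
          "| with r/l confusion:", embedded_words(carrier, lexicon, ("r", "l")))
```

With the /r/-/l/ confusion collapsed, rib becomes a competitor embedded in liberty, mirroring the example in the abstract.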
Spectral embedding-based registration (SERg) for multimodal fusion of prostate histology and MRI
NASA Astrophysics Data System (ADS)
Hwuang, Eileen; Rusu, Mirabela; Karthigeyan, Sudha; Agner, Shannon C.; Sparks, Rachel; Shih, Natalie; Tomaszewski, John E.; Rosen, Mark; Feldman, Michael; Madabhushi, Anant
2014-03-01
Multi-modal image registration is needed to align medical images collected from different protocols or imaging sources, thereby allowing the mapping of complementary information between images. One challenge of multimodal image registration is that typical similarity measures rely on statistical correlations between image intensities to determine anatomical alignment. The use of alternate image representations could allow for mapping of intensities into a space or representation such that the multimodal images appear more similar, thus facilitating their co-registration. In this work, we present a spectral embedding based registration (SERg) method that uses non-linearly embedded representations obtained from independent components of statistical texture maps of the original images to facilitate multimodal image registration. Our methodology comprises the following main steps: 1) image-derived textural representation of the original images, 2) dimensionality reduction using independent component analysis (ICA), 3) spectral embedding to generate the alternate representations, and 4) image registration. The rationale behind our approach is that SERg yields embedded representations that can allow for very different looking images to appear more similar, thereby facilitating improved co-registration. Statistical texture features are derived from the image intensities and then reduced to a smaller set by using independent component analysis to remove redundant information. Spectral embedding generates a new representation by eigendecomposition from which only the most important eigenvectors are selected. This helps to accentuate areas of salience based on modality-invariant structural information and therefore better identifies corresponding regions in both the template and target images. The spirit behind SERg is that image registration driven by these areas of salience and correspondence should improve alignment accuracy. In this work, SERg is implemented using Demons to allow the algorithm to more effectively register multimodal images. SERg is also tested within the free-form deformation framework driven by mutual information. Nine pairs of synthetic T1-weighted to T2-weighted brain MRI were registered under the following conditions: five levels of noise (0%, 1%, 3%, 5%, and 7%) and two levels of bias field (20% and 40%) each with and without noise. We demonstrate that across all of these conditions, SERg yields a mean squared error that is 81.51% lower than that of Demons driven by MRI intensity alone. We also spatially align twenty-six ex vivo histology sections and in vivo prostate MRI in order to map the spatial extent of prostate cancer onto corresponding radiologic imaging. SERg performs better than intensity registration by decreasing the root mean squared distance of annotated landmarks in the prostate gland via both Demons algorithm and mutual information-driven free-form deformation. In both synthetic and clinical experiments, the observed improvement in alignment of the template and target images suggest the utility of parametric eigenvector representations and hence SERg for multimodal image registration.
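A schematic of the representation pipeline described above (statistical texture maps, ICA, then spectral embedding), sketched in Python with scikit-learn; the registration step itself (Demons or mutual-information-driven free-form deformation) and all parameter choices are omitted or assumed, so this is not the authors' implementation:

```python
import numpy as np
from scipy import ndimage
from sklearn.decomposition import FastICA
from sklearn.manifold import SpectralEmbedding

def alternate_representation(img, n_ics=3, n_dims=2, seed=0):
    """Per-pixel texture features -> ICA -> spectral embedding (schematic only)."""
    # 1) simple statistical texture maps: local mean, local std, gradient magnitude
    mean = ndimage.uniform_filter(img, size=5)
    sq_mean = ndimage.uniform_filter(img**2, size=5)
    std = np.sqrt(np.maximum(sq_mean - mean**2, 0.0))
    gx, gy = np.gradient(img)
    feats = np.stack([img, mean, std, np.hypot(gx, gy)], axis=-1).reshape(-1, 4)
    # 2) independent component analysis to strip redundant feature channels
    ics = FastICA(n_components=n_ics, random_state=seed).fit_transform(feats)
    # 3) spectral embedding of (a subsample of) the pixels into a low-dimensional space
    idx = np.random.default_rng(seed).choice(len(ics), size=min(2000, len(ics)), replace=False)
    return SpectralEmbedding(n_components=n_dims, random_state=seed).fit_transform(ics[idx])

img = np.random.rand(64, 64)
print(alternate_representation(img).shape)   # (2000, 2): embedded pixel representation
```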
Using DEWIS and R for Multi-Staged Statistics e-Assessments
ERIC Educational Resources Information Center
Gwynllyw, D. Rhys; Weir, Iain S.; Henderson, Karen L.
2016-01-01
We demonstrate how the DEWIS e-Assessment system may use embedded R code to facilitate the assessment of students' ability to perform involved statistical analyses. The R code has been written to emulate SPSS output and thus the statistical results for each bespoke data set can be generated efficiently and accurately using standard R routines.…
An embedded formula of the Chebyshev collocation method for stiff problems
NASA Astrophysics Data System (ADS)
Piao, Xiangfan; Bu, Sunyoung; Kim, Dojin; Kim, Philsu
2017-12-01
In this study, we have developed an embedded formula of the Chebyshev collocation method for stiff problems, based on the zeros of the generalized Chebyshev polynomials. A new strategy for the embedded formula, using a pair of methods to estimate the local truncation error, as in traditional embedded Runge-Kutta schemes, is proposed. The method is designed in such a way that not only can the stability region of the embedded formula be widened, but, by allowing larger time step sizes, the total computational cost can also be reduced. In terms of convergence and stability analysis, the constructed algorithm turns out to have 8th-order convergence and to be A-stable. Through several numerical experiments, we demonstrate that the proposed method is numerically more efficient than several existing implicit methods.
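The idea of an embedded pair, i.e. two methods sharing the same stages so that their difference estimates the local truncation error and drives the step size, can be illustrated with the simplest such pair, Heun(2)/Euler(1); this is only a generic Runge-Kutta illustration, not the Chebyshev collocation formula of the paper:

```python
import numpy as np

def heun_euler_step(f, t, y, h):
    """One step of the embedded Heun(2)/Euler(1) pair: returns the 2nd-order
    solution and an error estimate obtained from the pair."""
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    y_high = y + h * (k1 + k2) / 2          # 2nd-order (Heun) solution
    y_low = y + h * k1                      # 1st-order (Euler) solution, same stages
    return y_high, np.abs(y_high - y_low)   # estimate of the local truncation error

def integrate(f, t0, y0, t_end, tol=1e-4, h=0.1):
    """Adaptive integration: shrink or grow the step based on the embedded error estimate."""
    t, y = t0, y0
    while t < t_end:
        h = min(h, t_end - t)
        y_new, err = heun_euler_step(f, t, y, h)
        if err <= tol:                      # accept the step
            t, y = t + h, y_new
        h *= min(2.0, max(0.2, 0.9 * np.sqrt(tol / max(err, 1e-14))))
    return y

# y' = -5 y, exact solution exp(-5 t); compare at t = 2.
print(integrate(lambda t, y: -5.0 * y, 0.0, 1.0, 2.0), np.exp(-10.0))
```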
Embedding of multidimensional time-dependent observations.
Barnard, J P; Aldrich, C; Gerber, M
2001-10-01
A method is proposed to reconstruct dynamic attractors by embedding of multivariate observations of dynamic nonlinear processes. The Takens embedding theory is combined with independent component analysis to transform the embedding into a vector space of linearly independent vectors (phase variables). The method is successfully tested against prediction of the unembedded state vector in two case studies of simulated chaotic processes.
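The two ingredients named in the abstract, a Takens delay embedding of a scalar observation and an ICA rotation of the embedded vectors into linearly independent phase variables, can be sketched as follows (the embedding dimension, lag and test signal are illustrative assumptions):

```python
import numpy as np
from sklearn.decomposition import FastICA

def delay_embed(x, dim, tau):
    """Takens delay embedding of a scalar time series into `dim` dimensions with lag `tau`."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[k * tau:k * tau + n] for k in range(dim)])

# Scalar observation of a simple chaotic signal (logistic map) as a stand-in
# for a measured process variable.
x = np.empty(3000); x[0] = 0.4
for i in range(1, len(x)):
    x[i] = 3.9 * x[i - 1] * (1 - x[i - 1])

E = delay_embed(x, dim=3, tau=1)                       # embedded state vectors
phase_vars = FastICA(n_components=3, random_state=0).fit_transform(E)
print(E.shape, phase_vars.shape)                       # (2998, 3) (2998, 3)
```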
Embedding of multidimensional time-dependent observations
NASA Astrophysics Data System (ADS)
Barnard, Jakobus P.; Aldrich, Chris; Gerber, Marius
2001-10-01
A method is proposed to reconstruct dynamic attractors by embedding of multivariate observations of dynamic nonlinear processes. The Takens embedding theory is combined with independent component analysis to transform the embedding into a vector space of linearly independent vectors (phase variables). The method is successfully tested against prediction of the unembedded state vector in two case studies of simulated chaotic processes.
NASA Astrophysics Data System (ADS)
Kambe, Hidetoshi; Mitsui, Hiroyasu; Endo, Satoshi; Koizumi, Hisao
The applications of embedded system technologies have spread widely in various products, such as home appliances, cellular phones, automobiles, industrial machines and so on. Due to intensified competition, embedded software has expanded its role in realizing sophisticated functions, and new development methods such as hardware/software (HW/SW) co-design, which unites HW and SW development, have been researched. The shortfall of embedded SW engineers in Japan was estimated to be approximately 99,000 in 2006. Embedded SW engineers should understand HW technologies and system architecture design as well as SW technologies. However, few universities offer this kind of education systematically. We propose a student experiment method for learning the basics of embedded system development, which includes a set of experiments for developing embedded SW, developing embedded HW and experiencing HW/SW co-design. The co-design experiment helps students learn the basics of embedded system architecture design and the flow of designing actual HW and SW modules. We developed these experiments and evaluated them.
Saadati, Farzaneh; Ahmad Tarmizi, Rohani; Mohd Ayub, Ahmad Fauzi; Abu Bakar, Kamariah
2015-01-01
Because students' ability to use statistics, which is mathematical in nature, is one of the concerns of educators, embedding within an e-learning system the pedagogical characteristics of learning is 'value added' because it facilitates the conventional method of learning mathematics. Many researchers emphasize the effectiveness of cognitive apprenticeship in learning and problem solving in the workplace. In a cognitive apprenticeship learning model, skills are learned within a community of practitioners through observation of modelling and then practice plus coaching. This study utilized an internet-based Cognitive Apprenticeship Model (i-CAM) in three phases and evaluated its effectiveness for improving statistics problem-solving performance among postgraduate students. The results showed that, when compared to the conventional mathematics learning model, the i-CAM could significantly promote students' problem-solving performance at the end of each phase. In addition, the combination of the differences in students' test scores was considered to be statistically significant after controlling for the pre-test scores. The findings conveyed in this paper confirmed the considerable value of i-CAM in the improvement of statistics learning for non-specialized postgraduate students.
NASA Astrophysics Data System (ADS)
Wiedermann, Marc; Donges, Jonathan F.; Kurths, Jürgen; Donner, Reik V.
2016-04-01
Networks with nodes embedded in a metric space have gained increasing interest in recent years. The effects of spatial embedding on the networks' structural characteristics, however, are rarely taken into account when studying their macroscopic properties. Here, we propose a hierarchy of null models to generate random surrogates from a given spatially embedded network that can preserve certain global and local statistics associated with the nodes' embedding in a metric space. Comparing the original network's and the resulting surrogates' global characteristics allows one to quantify to what extent these characteristics are already predetermined by the spatial embedding of the nodes and links. We apply our framework to various real-world spatial networks and show that the proposed models capture macroscopic properties of the networks under study much better than standard random network models that do not account for the nodes' spatial embedding. Depending on the actual performance of the proposed null models, the networks are categorized into different classes. Since many real-world complex networks are in fact spatial networks, the proposed approach is relevant for disentangling the underlying complex system structure from spatial embedding of nodes in many fields, ranging from social systems over infrastructure and neurophysiology to climatology.
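In the spirit of the comparison described above, the sketch below contrasts a spatially embedded toy network with a standard degree-preserving surrogate that ignores the nodes' positions; the authors' hierarchy of spatially constrained null models is not reproduced:

```python
import networkx as nx

# Toy spatially embedded network: random geometric graph (nodes in the unit square,
# links only between nearby nodes).
G = nx.random_geometric_graph(300, radius=0.12, seed=1)

def stats(H):
    return {"clustering": nx.average_clustering(H),
            "assortativity": nx.degree_assortativity_coefficient(H)}

print("original           ", stats(G))

# Standard null model ignoring space: degree-preserving edge rewiring.
S = G.copy()
nx.double_edge_swap(S, nswap=10 * S.number_of_edges(), max_tries=10**6, seed=1)
print("degree-preserving  ", stats(S))
# The large drop in clustering indicates how much of the original structure comes
# from the spatial embedding rather than from the degree sequence alone.
```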
NASA Astrophysics Data System (ADS)
Giri, P. B. S. W.; Srilestari, A.; Abdurrohim, K.; Yunus, F.
2017-08-01
Chronic Obstructive Pulmonary Disease (COPD) is now the fourth leading cause of death in the world. As COPD medications are associated with high mortality levels, continuous research into the improvement of treatment modalities is being conducted. This study aimed to identify the effects of acupoint-catgut embedment combined with medical treatment on the Body mass index, airflow Obstruction, Dyspnea and Exercise capacity (BODE) index scores of COPD patients. A single-blind randomized controlled trial was conducted on 48 patients; participants were allocated into either the acupoint-catgut embedment with medication group (case group) or the sham acupuncture with medication group (control group). Acupoint-catgut embedment was conducted at the BL13 Feishu, BL43 Gaohuangshu, BL20 Pishu, BL23 Shenshu, and ST40 Fenglong points two times at an interval of 15 days. The BODE index, a primary outcome indicator, was assessed on Day 1 and Day 30. The results showed statistically and clinically significant differences between the two groups—in fact, BODE index scores were reduced by 1.83 points in the case group (p = 0.000). Ultimately, BODE index scores were lower in the intervention group than in the control group, thus indicating a statistically significant and clinically important improvement of COPD-related symptoms. According to these results, acupoint-catgut embedment combined with medical treatment is concluded to be more effective than medical treatment alone in reducing BODE index scores.
NASA Astrophysics Data System (ADS)
Penven, Pierrick; Debreu, Laurent; Marchesiello, Patrick; McWilliams, James C.
What most clearly distinguishes near-shore and off-shore currents is their dominant spatial scale, O(1-30) km near-shore and O(30-1000) km off-shore. In practice, these phenomena are usually both measured and modeled with separate methods. In particular, it is infeasible for any regular computational grid to be large enough to simultaneously resolve well both types of currents. In order to obtain local solutions at high resolution while preserving the regional-scale circulation at an affordable computational cost, a 1-way grid embedding capability has been integrated into the Regional Oceanic Modeling System (ROMS). It takes advantage of the AGRIF (Adaptive Grid Refinement in Fortran) Fortran 90 package based on the use of pointers. After a first evaluation in a baroclinic vortex test case, the embedding procedure has been applied to a domain that covers the central upwelling region off California, around Monterey Bay, embedded in a domain that spans the continental U.S. Pacific Coast. Long-term simulations (10 years) have been conducted to obtain mean-seasonal statistical equilibria. The final solution shows few discontinuities at the parent-child domain boundary and a valid representation of the local upwelling structure, at a CPU cost only slightly greater than for the inner region alone. The solution is assessed by comparison with solutions for the whole US Pacific Coast at both low and high resolutions and to solutions for only the inner region at high resolution with mean-seasonal boundary conditions.
NASA Astrophysics Data System (ADS)
Soares, J. B.; Bica, E.; Ahumada, A. V.; Clariá, J. J.
2008-02-01
Aims: Among the star clusters in the Galaxy, those embedded in nebulae represent the youngest group, which has only recently been explored. The analysis of a sample of 22 candidate embedded stellar systems in reflection nebulae and/or HII environments is presented. Methods: We employed optical spectroscopic observations of stars in the directions of the clusters carried out at CASLEO (Argentina) together with near infrared photometry from the 2MASS catalogue. Our analysis is based on source surface density, colour-colour diagrams and on theoretical pre-main sequence isochrones. We take into account the field star contamination by carrying out a statistical subtraction. Results: The studied objects have the characteristics of low mass systems. We derive their fundamental parameters. Most of the cluster ages are younger than 2 Myr. The studied embedded stellar systems in reflection nebulae and/or HII region complexes do not have stars of spectral types earlier than B. The total stellar masses locked in the clusters are in the range 20-220 M⊙. They are found to be gravitationally unstable and are expected to dissolve in a timescale of a few Myr. Based on observations made at Complejo Astronómico El Leoncito, which is operated under agreement between the Consejo Nacional de Investigaciones Científicas y Técnicas de la República Argentina and the National Universities of La Plata, Córdoba and San Juan, Argentina.
NASA Astrophysics Data System (ADS)
Wisniewski, Nicholas Andrew
This dissertation is divided into two parts. First, we present an exact solution to a generalization of the Behrens-Fisher problem by embedding the problem in the Riemannian manifold of Normal distributions. From this we construct a geometric hypothesis testing scheme. Second, we investigate the most commonly used geometric methods employed in tensor field interpolation for DT-MRI analysis and cardiac computer modeling. We computationally investigate a class of physiologically motivated orthogonal tensor invariants, both at the full tensor field scale and at the scale of a single interpolation, by means of a decimation/interpolation experiment. We show that Riemannian-based methods give the best results in preserving desirable physiological features.
Discriminative graph embedding for label propagation.
Nguyen, Canh Hao; Mamitsuka, Hiroshi
2011-09-01
In many applications, the available information is encoded in graph structures. This is a common problem in biological networks, social networks, web communities and document citations. We investigate the problem of classifying nodes' labels on a similarity graph given only a graph structure on the nodes. Conventional machine learning methods usually require data to reside in some Euclidean spaces or to have a kernel representation. Applying these methods to nodes on graphs would require embedding the graphs into these spaces. By embedding and then learning the nodes on graphs, most methods are either flexible with different learning objectives or efficient enough for large-scale applications, but rarely both. We propose a method to embed a graph into a feature space for a discriminative purpose. Our idea is to include label information into the embedding process, making the space representation tailored to the task. We design embedding objective functions such that the subsequent learning formulations become spectral transforms. We then reformulate these spectral transforms into multiple kernel learning problems. Our method, while being tailored to discriminative tasks, is efficient and can scale to massive data sets. We show the need for discriminative embedding in simulations. Applied to biological network problems, our method is shown to outperform baselines.
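For contrast with the discriminative approach proposed above, the generic embed-then-learn pipeline can be sketched as an unsupervised spectral embedding of the graph followed by a classifier on the embedded coordinates (the label-aware embedding objectives and the multiple kernel learning reformulation are not reproduced):

```python
import numpy as np
import networkx as nx
from sklearn.linear_model import LogisticRegression

# Toy similarity graph with two planted communities; nodes are numbered so that
# the first 50 belong to block 0 and the next 50 to block 1.
G = nx.planted_partition_graph(2, 50, p_in=0.2, p_out=0.02, seed=0)
y = np.repeat([0, 1], 50)

# Unsupervised spectral embedding: low eigenvectors of the normalized graph Laplacian.
L = nx.normalized_laplacian_matrix(G).toarray()
vals, vecs = np.linalg.eigh(L)
X = vecs[:, 1:4]                      # skip the trivial leading eigenvector

# Learn labels from a few annotated nodes and predict (propagate) to the rest.
labelled = np.arange(0, 100, 5)       # 20 labelled nodes
clf = LogisticRegression().fit(X[labelled], y[labelled])
unlabelled = np.setdiff1d(np.arange(100), labelled)
print("accuracy on unlabelled nodes:", clf.score(X[unlabelled], y[unlabelled]))
```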
Defraene, Bruno; van Waterschoot, Toon; Diehl, Moritz; Moonen, Marc
2016-07-01
Subjective audio quality evaluation experiments have been conducted to assess the performance of embedded-optimization-based precompensation algorithms for mitigating perceptible linear and nonlinear distortion in audio signals. It is concluded with statistical significance that the perceived audio quality is improved by applying an embedded-optimization-based precompensation algorithm, both in case (i) nonlinear distortion and (ii) a combination of linear and nonlinear distortion is present. Moreover, a significant positive correlation is reported between the collected subjective and objective PEAQ audio quality scores, supporting the validity of using PEAQ to predict the impact of linear and nonlinear distortion on the perceived audio quality.
Synaptic dynamics contribute to long-term single neuron response fluctuations.
Reinartz, Sebastian; Biro, Istvan; Gal, Asaf; Giugliano, Michele; Marom, Shimon
2014-01-01
Firing rate variability at the single neuron level is characterized by long-memory processes and complex statistics over a wide range of time scales (from milliseconds up to several hours). Here, we focus on the contribution of the non-stationary efficacy of the ensemble of synapses activated in response to a given stimulus to single neuron response variability. We present and validate a method tailored for controlled and specific long-term activation of a single cortical neuron in vitro via synaptic or antidromic stimulation, enabling a clear separation between two determinants of neuronal response variability: membrane excitability dynamics vs. synaptic dynamics. Applying this method we show that, within the range of physiological activation frequencies, the synaptic ensemble of a given neuron is a key contributor to the neuronal response variability, long-memory processes and complex statistics observed over extended time scales. Synaptic transmission dynamics affect response variability at stimulation rates substantially lower than those that drive excitability resources to fluctuate. Implications for network-embedded neurons are discussed.
Johnson, Jason K.; Oyen, Diane Adele; Chertkov, Michael; ...
2016-12-01
Inference and learning of graphical models are both well-studied problems in statistics and machine learning that have found many applications in science and engineering. However, exact inference is intractable in general graphical models, which suggests the problem of seeking the best approximation to a collection of random variables within some tractable family of graphical models. In this paper, we focus on the class of planar Ising models, for which exact inference is tractable using techniques of statistical physics. Based on these techniques and recent methods for planarity testing and planar embedding, we propose a greedy algorithm for learning the best planar Ising model to approximate an arbitrary collection of binary random variables (possibly from sample data). Given the set of all pairwise correlations among variables, we select a planar graph and optimal planar Ising model defined on this graph to best approximate that set of correlations. Finally, we demonstrate our method in simulations and for two applications: modeling senate voting records and identifying geo-chemical depth trends from Mars rover data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Jason K.; Oyen, Diane Adele; Chertkov, Michael
Inference and learning of graphical models are both well-studied problems in statistics and machine learning that have found many applications in science and engineering. However, exact inference is intractable in general graphical models, which suggests the problem of seeking the best approximation to a collection of random variables within some tractable family of graphical models. In this paper, we focus on the class of planar Ising models, for which exact inference is tractable using techniques of statistical physics. Based on these techniques and recent methods for planarity testing and planar embedding, we propose a greedy algorithm for learning the best planar Ising model to approximate an arbitrary collection of binary random variables (possibly from sample data). Given the set of all pairwise correlations among variables, we select a planar graph and optimal planar Ising model defined on this graph to best approximate that set of correlations. Finally, we demonstrate our method in simulations and for two applications: modeling senate voting records and identifying geo-chemical depth trends from Mars rover data.
When will Low-Contrast Features be Visible in a STEM X-Ray Spectrum Image?
Parish, Chad M
2015-06-01
When will a small or low-contrast feature, such as an embedded second-phase particle, be visible in a scanning transmission electron microscopy (STEM) X-ray map? This work illustrates a computationally inexpensive method to simulate X-ray maps and spectrum images (SIs), based upon the equations of X-ray generation and detection. To particularize the general procedure, the example of a nanostructured ferritic alloy (NFA) containing nm-sized Y2Ti2O7 precipitates embedded in a ferritic stainless steel matrix is chosen. The proposed model produces physically plausible simulated SI data sets, which can either be reduced to X-ray dot maps or analyzed via multivariate statistical analysis. The simulations match NFA X-ray maps acquired using three different STEM instruments quite well, despite the large number of simplifying assumptions used. A figure of merit, electron dose multiplied by X-ray collection solid angle, is proposed to compare feature detectability from one data set (simulated or experimental) to another. The proposed method can scope experiments that are feasible under specific analysis conditions on a given microscope. Future applications, such as spallation proton-neutron irradiations, core-shell nanoparticles, or dopants in polycrystalline photovoltaic solar cells, are proposed.
Suemitsu, Yoshikazu; Nara, Shigetoshi
2004-09-01
Chaotic dynamics introduced into a neural network model is applied to solving two-dimensional mazes, which are ill-posed problems. A moving object moves from the position at t to t + 1 by a simply defined motion function calculated from firing patterns of the neural network model at each time step t. We have embedded several prototype attractors that correspond to the simple motion of the object orienting toward several directions in two-dimensional space in our neural network model. Introducing chaotic dynamics into the network gives outputs sampled from intermediate state points between embedded attractors in a state space, and these dynamics enable the object to move in various directions. System parameter switching between a chaotic and an attractor regime in the state space of the neural network enables the object to move to a set target in a two-dimensional maze. Results of computer simulations show that the success rate for this method over 300 trials is higher than that of a random walk. To investigate why the proposed method gives better performance, we calculate and discuss statistical data with respect to dynamical structure.
Prediction of tautomer ratios by embedded-cluster integral equation theory
NASA Astrophysics Data System (ADS)
Kast, Stefan M.; Heil, Jochen; Güssregen, Stefan; Schmidt, K. Friedemann
2010-04-01
The "embedded cluster reference interaction site model" (EC-RISM) approach combines statistical-mechanical integral equation theory and quantum-chemical calculations for predicting thermodynamic data for chemical reactions in solution. The electronic structure of the solute is determined self-consistently with the structure of the solvent that is described by 3D RISM integral equation theory. The continuous solvent-site distribution is mapped onto a set of discrete background charges ("embedded cluster") that represent an additional contribution to the molecular Hamiltonian. The EC-RISM analysis of the SAMPL2 challenge set of tautomers proceeds in three stages. Firstly, the group of compounds for which quantitative experimental free energy data was provided was taken to determine appropriate levels of quantum-chemical theory for geometry optimization and free energy prediction. Secondly, the resulting workflow was applied to the full set, allowing for chemical interpretations of the results. Thirdly, disclosure of experimental data for parts of the compounds facilitated a detailed analysis of methodical issues and suggestions for future improvements of the model. Without specifically adjusting parameters, the EC-RISM model yields the smallest value of the root mean square error for the first set (0.6 kcal mol-1) as well as for the full set of quantitative reaction data (2.0 kcal mol-1) among the SAMPL2 participants.
Accuracy of Protein Embedding Potentials: An Analysis in Terms of Electrostatic Potentials.
Olsen, Jógvan Magnus Haugaard; List, Nanna Holmgaard; Kristensen, Kasper; Kongsted, Jacob
2015-04-14
Quantum-mechanical embedding methods have in recent years gained significant interest and may now be applied to predict a wide range of molecular properties calculated at different levels of theory. To reach a high level of accuracy in embedding methods, both the electronic structure model of the active region and the embedding potential need to be of sufficiently high quality. In fact, failures in quantum mechanics/molecular mechanics (QM/MM)-based embedding methods have often been associated with the QM/MM methodology itself; however, in many cases the reason for such failures is due to the use of an inaccurate embedding potential. In this paper, we investigate in detail the quality of the electronic component of embedding potentials designed for calculations on protein biostructures. We show that very accurate explicitly polarizable embedding potentials may be efficiently designed using fragmentation strategies combined with single-fragment ab initio calculations. In fact, due to the self-interaction error in Kohn-Sham density functional theory (KS-DFT), use of large full-structure quantum-mechanical calculations based on conventional (hybrid) functionals leads to less accurate embedding potentials than fragment-based approaches. We also find that standard protein force fields yield poor embedding potentials, and it is therefore not advisable to use such force fields in general QM/MM-type calculations of molecular properties other than energies and structures.
Improved Secret Image Sharing Scheme in Embedding Capacity without Underflow and Overflow.
Pang, Liaojun; Miao, Deyu; Li, Huixian; Wang, Qiong
2015-01-01
Computational secret image sharing (CSIS) is an effective way to protect a secret image during its transmission and storage, and it has therefore attracted a lot of attention since its appearance. It has become a hot topic for researchers to improve the embedding capacity and to eliminate the underflow and overflow situations, which are awkward and difficult to deal with. The scheme with the highest embedding capacity among the existing schemes suffers from the underflow and overflow problems. Although the underflow and overflow situations have been dealt with by different methods, the embedding capacities of these methods are reduced to some extent. Motivated by these concerns, we propose a novel scheme in which differential coding, Huffman coding, and data conversion are used to compress the secret image before embedding it, to further improve the embedding capacity, and a pixel mapping matrix embedding method with a newly designed matrix is used to embed the secret image data into the cover image, avoiding the underflow and overflow situations. Experimental results show that our scheme improves the embedding capacity and eliminates the underflow and overflow situations at the same time.
Improved Secret Image Sharing Scheme in Embedding Capacity without Underflow and Overflow
Pang, Liaojun; Miao, Deyu; Li, Huixian; Wang, Qiong
2015-01-01
Computational secret image sharing (CSIS) is an effective way to protect a secret image during its transmission and storage, and it has therefore attracted a lot of attention since its appearance. It has become a hot topic for researchers to improve the embedding capacity and to eliminate the underflow and overflow situations, which are awkward and difficult to deal with. The scheme with the highest embedding capacity among the existing schemes suffers from the underflow and overflow problems. Although the underflow and overflow situations have been dealt with by different methods, the embedding capacities of these methods are reduced to some extent. Motivated by these concerns, we propose a novel scheme in which differential coding, Huffman coding, and data conversion are used to compress the secret image before embedding it, to further improve the embedding capacity, and a pixel mapping matrix embedding method with a newly designed matrix is used to embed the secret image data into the cover image, avoiding the underflow and overflow situations. Experimental results show that our scheme improves the embedding capacity and eliminates the underflow and overflow situations at the same time. PMID:26351657
ERIC Educational Resources Information Center
Cascio, Ted V.
2017-01-01
This study assesses the effectiveness of critical thinking drills (CTDs), a repetitious classroom activity designed to improve methodological and statistical thinking in relation to psychological claims embedded in popular press articles. In each of four separate CTDs, students critically analyzed a brief article reporting a recent psychological…
Saadati, Farzaneh; Ahmad Tarmizi, Rohani
2015-01-01
Because students’ ability to use statistics, which is mathematical in nature, is one of the concerns of educators, embedding within an e-learning system the pedagogical characteristics of learning is ‘value added’ because it facilitates the conventional method of learning mathematics. Many researchers emphasize the effectiveness of cognitive apprenticeship in learning and problem solving in the workplace. In a cognitive apprenticeship learning model, skills are learned within a community of practitioners through observation of modelling and then practice plus coaching. This study utilized an internet-based Cognitive Apprenticeship Model (i-CAM) in three phases and evaluated its effectiveness for improving statistics problem-solving performance among postgraduate students. The results showed that, when compared to the conventional mathematics learning model, the i-CAM could significantly promote students’ problem-solving performance at the end of each phase. In addition, the combination of the differences in students' test scores was considered to be statistically significant after controlling for the pre-test scores. The findings conveyed in this paper confirmed the considerable value of i-CAM in the improvement of statistics learning for non-specialized postgraduate students. PMID:26132553
Generation of Controlled Analog Emissions from Embedded Devices using Software Stress Methods
2017-03-01
Sternberg, Oren; Nelson, Jonathan H.; Perez, Israel; ...
Abstract: In this paper, we present a new method that uses software diagnostic tools to study the generation of induced spurious physical emissions from...types of attacks warrants an understanding of unwanted signal generation. We examine this connection by observing the emission profile of an embedded
Nonuniform continuum model for solvatochromism based on frozen-density embedding theory.
Shedge, Sapana Vitthal; Wesolowski, Tomasz A
2014-10-20
Frozen-density embedding theory (FDET) provides the formal framework for multilevel numerical simulations, such that a selected subsystem is described at the quantum mechanical level, whereas its environment is described by means of the electron density (frozen density; $\rho_{\rm B}(\vec{r})$). The frozen density $\rho_{\rm B}(\vec{r})$ is usually obtained from some lower-level quantum mechanical method applied to the environment, but FDET is not limited to such choices for $\rho_{\rm B}(\vec{r})$. The present work concerns the application of FDET in which $\rho_{\rm B}(\vec{r})$ is the statistically averaged electron density of the solvent, $\langle\rho_{\rm B}(\vec{r})\rangle$. The specific solute-solvent interactions are represented in a statistical manner in $\langle\rho_{\rm B}(\vec{r})\rangle$. A full self-consistent treatment of a solvated chromophore thus involves a single geometry of the chromophore in a given state and the corresponding $\langle\rho_{\rm B}(\vec{r})\rangle$. We show that the coupling between the two descriptors may be made in an approximate manner that is applicable to both absorption and emission. The proposed protocol leads to accurate (errors in the range of 0.05 eV) descriptions of the solvatochromic shifts in both absorption and emission. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Hyperlink-Embedded Journal Articles Improve Statistical Knowledge and Reader Satisfaction
Saxon, David; Pearson, Alexander T.; Wu, Peter
2015-01-01
Background To practice evidence-based medicine, physicians should have a solid understanding of fundamental epidemiological and biostatistical concepts. Research suggests that only a minority of physicians have such an understanding of biostatistics. Objective To collect pilot data on a novel biostatistical educational tool, a hyperlink-embedded journal article, which is aimed at improving knowledge in biostatistics. Methods Forty-four physicians-in-training participated in this pilot study. Participants completed a pretest consisting of 5 questions about biostatistical terms that would be encountered in the article. They were randomized to either an unmodified journal article or to the same article with hyperlinked biostatistical terms. All participants then completed a posttest that was identical to the pretest. Results Having access to hyperlinked information had a positive association with the number of improved test answers (P = .05). Use of hyperlinks varied, and were seemingly dependent on user comfort with terms; well-understood definitions (“average”) were clicked on a few times (5.5% of participants), whereas more obscure method terms (“Lexis diagram”) were clicked on by 94% of participants. While only 42% of participants stated they would have looked up definitions of the biostatistical terms if they had not been provided in the hyperlinked article, 94% of participants identified the hyperlink tool as something they would use if readily available to them when reading journal articles. Conclusions Results of this pilot study of a novel educational intervention suggest that embedded hyperlinks within journal articles may be a useful tool to teach biostatistical terms to physicians. PMID:26692981
Sandford, II, Maxwell T.; Handel, Theodore G.; Ettinger, J. Mark
1999-01-01
A method of embedding auxiliary information into the digital representation of host data containing noise in the low-order bits. The method applies to digital data representing analog signals, for example digital images. The method reduces the error introduced by other methods that replace the low-order bits with auxiliary information. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user through use of a digital key. The modular error embedding method includes a process to permute the order in which the host data values are processed. The method doubles the amount of auxiliary information that can be added to host data values, in comparison with bit-replacement methods for high bit-rate coding. The invention preserves human perception of the meaning and content of the host data, permitting the addition of auxiliary data in the amount of 50% or greater of the original host data.
NASA Astrophysics Data System (ADS)
Modegi, Toshio
We are developing audio watermarking techniques that enable extraction of embedded data by cell phones. For this we have to embed data in frequency ranges where the auditory response is prominent, and data embedding therefore causes considerable audible noise. We previously proposed exploiting two-channel stereo playback, in which the noise generated by a data-embedded left-channel signal is reduced by the other, right-channel signal. However, this proposal has the practical problem of restricting the location of the extracting terminal. In this paper, we propose synthesizing the noise-reducing right-channel signal with the left-channel signal, cancelling the noise completely by inducing an auditory stream segregation phenomenon in the listener. This new proposal makes a separate noise-reducing right-channel signal unnecessary and supports monaural playback. Moreover, we propose a wide-band embedding method that induces dual auditory stream segregation phenomena, which enables data embedding over the whole public telephone frequency range and stable extraction with 3G mobile phones. With these proposals, extraction precision becomes higher than with the previously proposed method, whereas the quality degradation of the embedded signal becomes smaller. In this paper we present an overview of the newly proposed method and experimental results compared with those of the previously proposed method.
Embedding beyond electrostatics-The role of wave function confinement.
Nåbo, Lina J; Olsen, Jógvan Magnus Haugaard; Holmgaard List, Nanna; Solanko, Lukasz M; Wüstner, Daniel; Kongsted, Jacob
2016-09-14
We study excited states of cholesterol in solution and show that, in this specific case, solute wave-function confinement is the main effect of the solvent. This is rationalized on the basis of the polarizable density embedding scheme, which in addition to polarizable embedding includes non-electrostatic repulsion that effectively confines the solute wave function to its cavity. We illustrate how the inclusion of non-electrostatic repulsion results in a successful identification of the intense π → π* transition, which was not possible using an embedding method that only includes electrostatics. This underlines the importance of non-electrostatic repulsion in quantum-mechanical embedding-based methods.
Lossless Data Embedding—New Paradigm in Digital Watermarking
NASA Astrophysics Data System (ADS)
Fridrich, Jessica; Goljan, Miroslav; Du, Rui
2002-12-01
One common drawback of virtually all current data embedding methods is the fact that the original image is inevitably distorted due to data embedding itself. This distortion typically cannot be removed completely due to quantization, bit-replacement, or truncation at the grayscales 0 and 255. Although the distortion is often quite small and perceptual models are used to minimize its visibility, the distortion may not be acceptable for medical imagery (for legal reasons) or for military images inspected under nonstandard viewing conditions (after enhancement or extreme zoom). In this paper, we introduce a new paradigm for data embedding in images (lossless data embedding) that has the property that the distortion due to embedding can be completely removed from the watermarked image after the embedded data has been extracted. We present lossless embedding methods for the uncompressed formats (BMP, TIFF) and for the JPEG format. We also show how the concept of lossless data embedding can be used as a powerful tool to achieve a variety of nontrivial tasks, including lossless authentication using fragile watermarks, steganalysis of LSB embedding, and distortion-free robust watermarking.
Non-integer expansion embedding techniques for reversible image watermarking
NASA Astrophysics Data System (ADS)
Xiang, Shijun; Wang, Yi
2015-12-01
This work aims at reducing the embedding distortion of prediction-error expansion (PE)-based reversible watermarking. In the classical PE embedding method proposed by Thodi and Rodriguez, the predicted value is rounded to an integer for integer prediction-error expansion (IPE) embedding. The rounding operation constrains the predictor's performance. In this paper, we propose a non-integer PE (NIPE) embedding approach, which can process non-integer prediction errors for embedding data into an audio or image file by expanding only the integer element of a prediction error while keeping its fractional element unchanged. The advantage of the NIPE embedding technique is that it can bring a predictor fully into play by estimating a sample/pixel in a noncausal way in a single pass, since there is no rounding operation. A new noncausal image prediction method that estimates a pixel from its four immediate neighbours in a single pass is included in the proposed scheme. The proposed noncausal image predictor provides better performance than Sachnev et al.'s noncausal double-set prediction method (where prediction in two passes introduces a distortion problem, because half of the pixels are predicted from watermarked pixels). In comparison with several existing state-of-the-art works, experimental results show that the NIPE technique with the new noncausal prediction strategy reduces the embedding distortion for the same embedding payload.
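For reference, the classical integer prediction-error expansion step that NIPE generalizes can be sketched on a 1D toy signal with a fixed neighbour-average predictor; overflow handling and location maps, which a practical scheme needs, are omitted:

```python
import numpy as np

def predict(x, i):
    """Integer prediction of x[i] from its (unmodified) even-indexed neighbours."""
    return (int(x[i - 1]) + int(x[i + 1])) // 2

def ipe_embed(signal, bits):
    """Classic integer prediction-error expansion: e' = 2*e + b at odd positions."""
    x = signal.astype(np.int64).copy()
    for k, b in enumerate(bits):
        i = 2 * k + 1                       # embed only at odd indices; even ones stay intact
        e = x[i] - predict(x, i)
        x[i] = predict(x, i) + 2 * e + int(b)
    return x

def ipe_extract(x, n_bits):
    """Recover the payload and restore the original signal exactly (reversible)."""
    x = x.copy()
    bits = []
    for k in range(n_bits):
        i = 2 * k + 1
        e2 = x[i] - predict(x, i)
        bits.append(int(e2 % 2))
        x[i] = predict(x, i) + (e2 - bits[-1]) // 2
    return np.array(bits), x

host = np.array([100, 103, 101, 99, 104, 102, 98, 100, 101], dtype=np.int64)
payload = [1, 0, 1, 1]
marked = ipe_embed(host, payload)
recovered_bits, restored = ipe_extract(marked, len(payload))
assert list(recovered_bits) == payload and np.array_equal(restored, host)
print(marked)
```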
Scanning capacitance microscopy of ErAs nanoparticles embedded in GaAs pn junctions
NASA Astrophysics Data System (ADS)
Park, K. W.; Nair, H. P.; Crook, A. M.; Bank, S. R.; Yu, E. T.
2011-09-01
Scanning capacitance microscopy is used to characterize the electronic properties of ErAs nanoparticles embedded in GaAs pn junctions grown by molecular beam epitaxy. Voltage-dependent capacitance images reveal localized variations in subsurface electronic structure near buried ErAs nanoparticles at lateral length scales of 20-30 nm. Numerical modeling indicates that these variations arise from inhomogeneities in charge modulation due to Fermi level pinning behavior associated with the embedded ErAs nanoparticles. Statistical analysis of image data yields an average particle radius of 6-8 nm—well below the direct resolution limit in scanning capacitance microscopy but discernible via analysis of patterns in nanoscale capacitance images.
ERIC Educational Resources Information Center
Jing,Lei; Cheng, Zixue; Wang, Junbo; Zhou, Yinghui
2011-01-01
Embedded system technologies are undergoing dramatic change. Competent embedded system engineers are becoming a scarce resource in the industry. Given this, universities should revise their specialist education to meet industry demands. In this paper, a spirally tight-coupled step-by-step educational method, based on an analysis of industry…
Reading color barcodes using visual snakes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schaub, Hanspeter
2004-05-01
Statistical pressure snakes are used to track a mono-color target in an unstructured environment using a video camera. The report discusses an algorithm to extract a bar code signal that is embedded within the target. The target is assumed to be rectangular in shape, with the bar code printed in a slightly different saturation and value in HSV color space. Thus, the visual snake, which primarily weighs hue tracking errors, will not be deterred by the presence of the color bar codes in the target. The bar code is generated with the standard 3 of 9 method. Using this method, the numeric bar codes reveal whether the target is right-side-up or upside-down.
Grain-Boundary Resistance in Copper Interconnects: From an Atomistic Model to a Neural Network
NASA Astrophysics Data System (ADS)
Valencia, Daniel; Wilson, Evan; Jiang, Zhengping; Valencia-Zapata, Gustavo A.; Wang, Kuang-Chung; Klimeck, Gerhard; Povolotskyi, Michael
2018-04-01
Orientation effects on the specific resistance of copper grain boundaries are studied systematically with two different atomistic tight-binding methods. A methodology is developed to model the specific resistance of grain boundaries in the ballistic limit using the embedded atom model, tight-binding methods, and nonequilibrium Green's functions. The methodology is validated against first-principles calculations for thin films with a single coincident grain boundary, with 6.4% deviation in the specific resistance. A statistical ensemble of 600 large, random structures with grains is studied. For structures with three grains, it is found that the distribution of specific resistances is close to normal. Finally, a compact model for grain-boundary-specific resistance is constructed based on a neural network.
A statistical learning strategy for closed-loop control of fluid flows
NASA Astrophysics Data System (ADS)
Guéniat, Florimond; Mathelin, Lionel; Hussaini, M. Yousuff
2016-12-01
This work discusses a closed-loop control strategy for complex systems utilizing scarce and streaming data. A discrete embedding space is first built using hash functions applied to the sensor measurements, from which a Markov process model is derived, approximating the complex system's dynamics. A control strategy is then learned using reinforcement learning once rewards relevant to the control objective are identified. This method is designed for experimental configurations, requiring neither computations nor prior knowledge of the system, and enjoys intrinsic robustness. It is illustrated on two systems: the control of the transitions of a Lorenz'63 dynamical system, and the control of the drag of a cylinder flow. The method is shown to perform well.
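As a hedged sketch of the pipeline the abstract outlines (discretize sensor measurements with a hash, count Markov transitions, then learn a policy), the Python below uses placeholder choices; the hash, bucket count, reward signal, episode format and tabular Q-learning are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from collections import defaultdict

def discretize(measurement, n_buckets=64):
    """Hash a sensor measurement vector into a discrete state index (illustrative hash)."""
    signs = tuple((measurement > np.median(measurement)).astype(int))
    return hash(signs) % n_buckets

# Markov model estimated from observed transitions (state, action) -> next state.
counts = defaultdict(lambda: defaultdict(int))
def update_model(s, a, s_next):
    counts[(s, a)][s_next] += 1

def q_learning(episodes, n_states=64, n_actions=3, alpha=0.1, gamma=0.95):
    """Tabular Q-learning on the hashed states; each episode is a list of (s, a, r, s_next)."""
    Q = np.zeros((n_states, n_actions))
    for episode in episodes:
        for s, a, r, s_next in episode:
            td = r + gamma * Q[s_next].max() - Q[s, a]
            Q[s, a] += alpha * td
    return Q
```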
Development of a Mandarin-English Bilingual Speech Recognition System for Real World Music Retrieval
NASA Astrophysics Data System (ADS)
Zhang, Qingqing; Pan, Jielin; Lin, Yang; Shao, Jian; Yan, Yonghong
In recent decades, there has been a great deal of research into the problem of bilingual speech recognition-to develop a recognizer that can handle inter- and intra-sentential language switching between two languages. This paper presents our recent work on the development of a grammar-constrained, Mandarin-English bilingual Speech Recognition System (MESRS) for real world music retrieval. Two of the main difficult issues in handling the bilingual speech recognition systems for real world applications are tackled in this paper. One is to balance the performance and the complexity of the bilingual speech recognition system; the other is to effectively deal with the matrix language accents in embedded language**. In order to process the intra-sentential language switching and reduce the amount of data required to robustly estimate statistical models, a compact single set of bilingual acoustic models derived by phone set merging and clustering is developed instead of using two separate monolingual models for each language. In our study, a novel Two-pass phone clustering method based on Confusion Matrix (TCM) is presented and compared with the log-likelihood measure method. Experiments testify that TCM can achieve better performance. Since potential system users' native language is Mandarin which is regarded as a matrix language in our application, their pronunciations of English as the embedded language usually contain Mandarin accents. In order to deal with the matrix language accents in embedded language, different non-native adaptation approaches are investigated. Experiments show that model retraining method outperforms the other common adaptation methods such as Maximum A Posteriori (MAP). With the effective incorporation of approaches on phone clustering and non-native adaptation, the Phrase Error Rate (PER) of MESRS for English utterances was reduced by 24.47% relatively compared to the baseline monolingual English system while the PER on Mandarin utterances was comparable to that of the baseline monolingual Mandarin system. The performance for bilingual utterances achieved 22.37% relative PER reduction.
Parandekar, Priya V; Hratchian, Hrant P; Raghavachari, Krishnan
2008-10-14
Hybrid QM:QM (quantum mechanics:quantum mechanics) and QM:MM (quantum mechanics:molecular mechanics) methods are widely used to calculate the electronic structure of large systems where a full quantum mechanical treatment at a desired high level of theory is computationally prohibitive. The ONIOM (our own N-layer integrated molecular orbital molecular mechanics) approximation is one of the more popular hybrid methods, where the total molecular system is divided into multiple layers, each treated at a different level of theory. In a previous publication, we developed a novel QM:QM electronic embedding scheme within the ONIOM framework, where the model system is embedded in the external Mulliken point charges of the surrounding low-level region to account for the polarization of the model system wave function. Therein, we derived and implemented a rigorous expression for the embedding energy as well as analytic gradients that depend on the derivatives of the external Mulliken point charges. In this work, we demonstrate the applicability of our QM:QM method with point charge embedding and assess its accuracy. We study two challenging systems--zinc metalloenzymes and silicon oxide cages--and demonstrate that electronic embedding shows significant improvement over mechanical embedding. We also develop a modified technique for the energy and analytic gradients using a generalized asymmetric Mulliken embedding method involving an unequal splitting of the Mulliken overlap populations to offer improvement in situations where the Mulliken charges may be deficient.
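For orientation, the standard two-layer ONIOM extrapolation and a generic electronically embedded model Hamiltonian (atomic units) are written out below; the paper's specific Mulliken-charge derivative terms and asymmetric overlap splitting are not reproduced, so treat this as background rather than the authors' working equations.

```latex
% Standard two-layer ONIOM extrapolation:
E_{\mathrm{ONIOM}} = E_{\mathrm{high}}(\mathrm{model}) + E_{\mathrm{low}}(\mathrm{real})
                   - E_{\mathrm{low}}(\mathrm{model}).

% Generic electronic embedding: the model-system Hamiltonian gains the external point
% charges q_k (here, Mulliken charges of the low-level environment), acting on the
% model electrons i and model nuclei A:
\hat{H}^{\mathrm{EE}}_{\mathrm{model}} = \hat{H}_{\mathrm{model}}
  - \sum_{k \in \mathrm{env}} \sum_{i} \frac{q_k}{\lvert \mathbf{r}_i - \mathbf{R}_k \rvert}
  + \sum_{k \in \mathrm{env}} \sum_{A} \frac{q_k Z_A}{\lvert \mathbf{R}_A - \mathbf{R}_k \rvert}.
```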
Application of RT-PCR in formalin-fixed and paraffin-embedded lung cancer tissues.
Zhang, Fan; Wang, Zhuo-min; Liu, Hong-yu; Bai, Yun; Wei, Sen; Li, Ying; Wang, Min; Chen, Jun; Zhou, Qing-hua
2010-01-01
To analyze gene expression in formalin-fixed, paraffin-embedded lung cancer tissues using a modified method. Total RNA from frozen tissues was extracted using TRIZOL reagent. RNA was extracted from formalin-fixed, paraffin-embedded tissues by digestion with proteinase K before the acid-phenol:chloroform extraction and carrier precipitation. We modified this method by using a higher concentration of proteinase K and a longer digestion time, optimized to 16 hours. RT-PCR and real-time RT-PCR were used to check reproducibility and the concordance between frozen and paraffin-embedded samples. The results showed that the RNA extracted from the paraffin-embedded lung tissues was of high quality, with most fragment lengths between the 28S and 18S bands (about 1000 to 2000 bases). The housekeeping gene GUSB exhibited low variation of expression in frozen and paraffin-embedded lung tissues, whereas PGK1 had the lowest variation in lymphoma tissues. Furthermore, real-time PCR analysis of the expression of known prognostic genes in non-small cell lung carcinoma (NSCLC) demonstrated an extremely high correlation (r>0.880) between the paired frozen and formalin-fixed, paraffin-embedded specimens. This improved method of RNA extraction is suitable for real-time quantitative RT-PCR, and may be used for global gene expression profiling of paraffin-embedded tissues.
ERIC Educational Resources Information Center
Thompson, Bruce; Melancon, Janet G.
Effect sizes have been increasingly emphasized in research as more researchers have recognized that: (1) all parametric analyses (t-tests, analyses of variance, etc.) are correlational; (2) effect sizes have played an important role in meta-analytic work; and (3) statistical significance testing is limited in its capacity to inform scientific…
Jiang, JingLe; Marathe, Amar R.; Keene, Jennifer C.; Taylor, Dawn M.
2016-01-01
Background Custom-fitted skull replacement pieces are often used after a head injury or surgery to replace damaged bone. Chronic brain recordings are beneficial after injury/surgery for monitoring brain health and seizure development. Embedding electrodes directly in these artificial skull replacement pieces would be a novel, low-risk way to perform chronic brain monitoring in these patients. Similarly, embedding electrodes directly in healthy skull would be a viable minimally-invasive option for many other neuroscience and neurotechnology applications requiring chronic brain recordings. New Method We demonstrate a preclinical testbed that can be used for refining electrode designs embedded in artificial skull replacement pieces or for embedding directly into the skull itself. Options are explored to increase the surface area of the contacts without increasing recording contact diameter to maximize recording resolution. Results Embedding electrodes in real or artificial skull allows one to lower electrode impedance without increasing the recording contact diameter by making use of conductive channels that extend into the skull. The higher density of small contacts embedded in the artificial skull in this testbed enables one to optimize electrode spacing for use in real bone. Comparison with Existing Methods For brain monitoring applications, skull-embedded electrodes fill a gap between electroencephalograms recorded on the scalp surface and the more invasive epidural or subdural electrode sheets. Conclusions Embedding electrodes into the skull or in skull replacement pieces may provide a safe, convenient, minimally-invasive alternative for chronic brain monitoring. The manufacturing methods described here will facilitate further testing of skull-embedded electrodes in animal models. PMID:27979758
Welvaert, Marijke; Caley, Peter
2016-01-01
Citizen science and crowdsourcing have been emerging as methods to collect data for surveillance and/or monitoring activities. They could be gathered under the overarching term 'citizen surveillance'. The discipline, however, still struggles to be widely accepted in the scientific community, mainly because these activities are not embedded in a quantitative framework. This results in an ongoing discussion on how to analyze and make useful inference from these data. When considering the data collection process, we illustrate how citizen surveillance can be classified according to the nature of the underlying observation process, measured in two dimensions: the degree of observer reporting intention and the control of observer detection effort. By classifying the observation process in these dimensions we distinguish between crowdsourcing, unstructured citizen science and structured citizen science. This classification helps determine the data processing and statistical treatment of these data for making inference. Using our framework, it is apparent that published studies are overwhelmingly associated with structured citizen science, and there are well developed statistical methods for the resulting data. In contrast, methods for making useful inference from purely crowd-sourced data remain under development, with the challenges of accounting for the unknown observation process considerable. Our quantitative framework for citizen surveillance calls for an integration of citizen science and crowdsourcing and provides a way forward to solve the statistical challenges inherent to citizen-sourced data.
An Embedded Librarian Program: Eight Years On.
Freiburger, Gary; Martin, Jennifer R; Nuñez, Annabelle V
2016-01-01
This article examines an embedded librarian program eight years after implementation in a large academic health center. Librarians were physically moved into the colleges of pharmacy, public health, and nursing. Statistics are reported as well as comments from the participating librarians and faculty members. Strong relationships have been built between librarians, faculty members, and students. Locating the librarians among faculty and students led to a better understanding of client needs and an increased awareness of librarian competencies and services resulting in partnerships and greater utilization of library services.
Farthouat, Juliane; Franco, Ana; Mary, Alison; Delpouve, Julie; Wens, Vincent; Op de Beeck, Marc; De Tiège, Xavier; Peigneux, Philippe
2017-03-01
Humans are highly sensitive to statistical regularities in their environment. This phenomenon, usually referred to as statistical learning, is most often assessed using post-learning behavioural measures that are limited by a lack of sensitivity and do not monitor the temporal dynamics of learning. In the present study, we used magnetoencephalographic frequency-tagged responses to investigate the neural sources and temporal development of the ongoing brain activity that supports the detection of regularities embedded in auditory streams. Participants passively listened to statistical streams in which tones were grouped as triplets, and to random streams in which tones were randomly presented. Results show that during exposure to statistical (vs. random) streams, tritone frequency-related responses reflecting the learning of regularities embedded in the stream increased in the left supplementary motor area and left posterior superior temporal sulcus (pSTS), whereas tone frequency-related responses decreased in the right angular gyrus and right pSTS. Tritone frequency-related responses rapidly developed to reach significance after 3 min of exposure. These results suggest that the incidental extraction of novel regularities is subtended by a gradual shift from rhythmic activity reflecting individual tone succession toward rhythmic activity synchronised with triplet presentation, and that these rhythmic processes are subtended by distinct neural sources.
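A minimal sketch of the frequency-tagging idea (spectral power at the tone rate versus at the triplet rate) is given below; the sampling rate and stimulation rates are illustrative assumptions, and the meg array is a stand-in for a source-space time series rather than the study's data.

```python
import numpy as np

def tagged_power(signal, fs, stim_rate, n_harmonics=1):
    """Power at a stimulation frequency (and its harmonics) from the FFT spectrum."""
    spec = np.abs(np.fft.rfft(signal))**2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = 0.0
    for h in range(1, n_harmonics + 1):
        k = np.argmin(np.abs(freqs - h * stim_rate))  # nearest frequency bin
        power += spec[k]
    return power

# Illustrative values: tones at 3.3 Hz grouped into triplets -> triplet rate 1.1 Hz.
fs = 200.0
t = np.arange(0, 180, 1 / fs)
meg = np.random.randn(t.size)                 # placeholder for a source time series
tone_power    = tagged_power(meg, fs, 3.3)
triplet_power = tagged_power(meg, fs, 1.1)
```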
NASA Astrophysics Data System (ADS)
Simos, T. E.
2017-11-01
A family of four-stage, high algebraic order, embedded explicit six-step methods for the numerical solution of second-order initial- or boundary-value problems with periodic and/or oscillating solutions is studied in this paper. The free parameters of the newly proposed methods are calculated by solving the linear system of equations produced by requiring the vanishing of the phase-lag of the methods and of the phase-lag's derivatives. For the newly obtained methods we investigate: • the local truncation error (LTE) of the methods; • the asymptotic form of the LTE, obtained using the radial Schrödinger equation as the model problem; • the comparison of the asymptotic forms of the LTEs for several methods of the same family, which leads to conclusions on the efficiency of each method of the family; • the stability and the interval of periodicity of the obtained methods of the new family of embedded finite difference pairs; • the application of the new family of embedded finite difference pairs to the numerical solution of several second-order problems such as the radial Schrödinger equation, astronomical problems, etc. These applications lead to conclusions on the efficiency of the methods of the new family of embedded finite difference pairs.
Gang, Yadong; Zhou, Hongfu; Jia, Yao; Liu, Ling; Liu, Xiuli; Rao, Gong; Li, Longhui; Wang, Xiaojun; Lv, Xiaohua; Xiong, Hanqing; Yang, Zhongqin; Luo, Qingming; Gong, Hui; Zeng, Shaoqun
2017-01-01
Resin embedding has been widely applied to fixing biological tissues for sectioning and imaging, but has long been regarded as incompatible with green fluorescent protein (GFP)-labeled samples because it reduces fluorescence. Recently, it has been reported that resin-embedded GFP-labeled brain tissue can be imaged with high resolution. Here we describe an optimized protocol for resin embedding and chemical reactivation of fluorescent protein labeled mouse brain; we used mice as the experimental model, but the protocol should be applicable to other species. This method involves whole-brain embedding and chemical reactivation of the fluorescent signal in resin-embedded tissue. The whole-brain embedding process takes a total of 7 days. The duration of chemical reactivation is ~2 min for penetrating 4 μm below the surface of the resin-embedded brain. This protocol provides an efficient way to prepare fluorescent protein labeled samples for high-resolution optical imaging. Samples prepared in this way have been imaged with various optical micro-imaging methods. Fine structures labeled with GFP across a whole brain can be detected. PMID:28352214
Equality of Educational Opportunity, Merit and the New Zealand Education System
ERIC Educational Resources Information Center
Seve-Williams, Nuhisifa
2013-01-01
Pacific students in New Zealand (NZ) quickly learn that they are not very smart. The statistics tell them this. They also come to believe that they do not try very hard. The talk of equal opportunities tells them this, especially when it is coupled with negative statistics. This is not surprising. Education in NZ has been embedded in notions of…
Diverse Power Iteration Embeddings and Its Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang H.; Yoo S.; Yu, D.
2014-12-14
Spectral embedding is one of the most effective dimension reduction algorithms in data mining. However, its computational complexity has to be mitigated in order to apply it to real-world large-scale data analysis. Much research has focused on developing approximate spectral embeddings which are more efficient but far less effective. This paper proposes Diverse Power Iteration Embeddings (DPIE), which not only retains the efficiency of power iteration methods but also produces a series of diverse and more effective embedding vectors. We test this novel method by applying it to various data mining applications (e.g. clustering, anomaly detection and feature selection) and evaluating the resulting performance improvements. The experimental results show that the proposed DPIE is more effective than popular spectral approximation methods and obtains quality similar to classic spectral embeddings derived from eigen-decompositions. Moreover, it is extremely fast on big data applications: in terms of clustering results, for example, DPIE achieves as much as 95% of the quality of classic spectral clustering on complex datasets while being over 4000 times faster in a limited-memory environment.
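A rough sketch of a power-iteration embedding is shown below, with diversity obtained simply from different random starting vectors; this is an assumption-laden stand-in for DPIE, not the published algorithm.

```python
import numpy as np

def power_iteration_embedding(W, n_iter=50, seed=0):
    """One pseudo-eigenvector embedding from a nonnegative similarity matrix W,
    obtained by power iteration on the row-normalized affinity matrix."""
    rng = np.random.default_rng(seed)
    P = W / W.sum(axis=1, keepdims=True)       # row-stochastic transition matrix
    v = rng.random(W.shape[0])
    v /= np.abs(v).sum()
    for _ in range(n_iter):
        v = P @ v
        v /= np.abs(v).sum()                   # keep the iterate normalized
    return v

def diverse_embeddings(W, k=5):
    """k embedding vectors from different random starts; a rough stand-in for the
    diversity idea, since the paper's actual construction may differ."""
    return np.column_stack([power_iteration_embedding(W, seed=s) for s in range(k)])
```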
Bellomo-Brandao, Maria Angela; Andrade, Paula D; Costa, Sandra CB; Escanhoela, Cecilia AF; Vassallo, Jose; Porta, Gilda; De Tommaso, Adriana MA; Hessel, Gabriel
2009-01-01
AIM: To determine cytomegalovirus (CMV) frequency in neonatal intrahepatic cholestasis by serology, histological revision (searching for cytomegalic cells), immunohistochemistry, and polymerase chain reaction (PCR), and to verify the relationships among these methods. METHODS: The study comprised 101 non-consecutive infants submitted for hepatic biopsy between March 1982 and December 2005. Serological results were obtained from the patients' files and the other methods were performed on paraffin-embedded liver samples from hepatic biopsies. The following statistical measures were calculated: frequency, sensitivity, specificity, positive predictive value, negative predictive value, and accuracy. RESULTS: The frequencies of positive results were as follows: serology, 7/64 (11%); histological revision, 0/84; immunohistochemistry, 1/44 (2%); and PCR, 6/77 (8%). Only one patient had positive immunohistochemical findings and a positive PCR. The following statistical measures were calculated between PCR and serology: sensitivity, 33.3%; specificity, 88.89%; positive predictive value, 28.57%; negative predictive value, 90.91%; and accuracy, 82.35%. CONCLUSION: The frequency of positive CMV varied among the tests. Serology presented the highest positive frequency. When compared to PCR, the sensitivity and positive predictive value of serology were low. PMID:19610143
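For illustration only, the 2x2 counts below (TP=2, FP=5, FN=4, TN=40) are back-calculated from the reported percentages and are not stated in the abstract; the snippet simply shows how the five diagnostic measures are computed from such a table.

```python
# Hypothetical counts, inferred from the reported percentages (not given in the abstract).
TP, FP, FN, TN = 2, 5, 4, 40

sensitivity = TP / (TP + FN)                    # 2/6  = 0.333
specificity = TN / (TN + FP)                    # 40/45 = 0.889
ppv         = TP / (TP + FP)                    # 2/7  = 0.286
npv         = TN / (TN + FN)                    # 40/44 = 0.909
accuracy    = (TP + TN) / (TP + FP + FN + TN)   # 42/51 = 0.824
```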
NASA Astrophysics Data System (ADS)
Viswanath, Satish; Bloch, B. Nicholas; Chappelow, Jonathan; Patel, Pratik; Rofsky, Neil; Lenkinski, Robert; Genega, Elizabeth; Madabhushi, Anant
2011-03-01
Currently, there is significant interest in developing methods for quantitative integration of multi-parametric (structural, functional) imaging data with the objective of building automated meta-classifiers to improve disease detection, diagnosis, and prognosis. Such techniques are required to address the differences in dimensionalities and scales of individual protocols, while deriving an integrated multi-parametric data representation which best captures all disease-pertinent information available. In this paper, we present a scheme called Enhanced Multi-Protocol Analysis via Intelligent Supervised Embedding (EMPrAvISE), a powerful, generalizable framework applicable to a variety of domains for multi-parametric data representation and fusion. Our scheme utilizes an ensemble of embeddings (via dimensionality reduction, DR), thereby exploiting the variance amongst multiple uncorrelated embeddings in a manner similar to ensemble classifier schemes (e.g. Bagging, Boosting). We apply this framework to the problem of prostate cancer (CaP) detection on twelve 3 Tesla pre-operative in vivo multi-parametric (T2-weighted, Dynamic Contrast Enhanced, and Diffusion-weighted) magnetic resonance imaging (MRI) studies, together comprising a total of 39 2D planar MR images. We first align the different imaging protocols via automated image registration, followed by quantification of image attributes from individual protocols. Multiple embeddings are generated from the resultant high-dimensional feature space and are then combined intelligently to yield a single stable solution. Our scheme is employed in conjunction with graph embedding (for DR) and probabilistic boosting trees (PBTs) to detect CaP on multi-parametric MRI. Finally, a probabilistic pairwise Markov Random Field algorithm is used to apply spatial constraints to the result of the PBT classifier, yielding a per-voxel classification of CaP presence. Per-voxel evaluation of detection results against ground truth for CaP extent on MRI (obtained by spatially registering pre-operative MRI with available whole-mount histological specimens) reveals that EMPrAvISE yields a statistically significant improvement (AUC=0.77) over classifiers constructed from individual protocols (AUC=0.62, 0.62, 0.65, for T2w, DCE, DWI respectively) as well as one trained using multi-parametric feature concatenation (AUC=0.67).
Wavelet-Bayesian inference of cosmic strings embedded in the cosmic microwave background
NASA Astrophysics Data System (ADS)
McEwen, J. D.; Feeney, S. M.; Peiris, H. V.; Wiaux, Y.; Ringeval, C.; Bouchet, F. R.
2017-12-01
Cosmic strings are a well-motivated extension to the standard cosmological model and could induce a subdominant component in the anisotropies of the cosmic microwave background (CMB), in addition to the standard inflationary component. The detection of strings, while observationally challenging, would provide a direct probe of physics at very high-energy scales. We develop a framework for cosmic string inference from observations of the CMB made over the celestial sphere, performing a Bayesian analysis in wavelet space where the string-induced CMB component has distinct statistical properties to the standard inflationary component. Our wavelet-Bayesian framework provides a principled approach to compute the posterior distribution of the string tension Gμ and the Bayesian evidence ratio comparing the string model to the standard inflationary model. Furthermore, we present a technique to recover an estimate of any string-induced CMB map embedded in observational data. Using Planck-like simulations, we demonstrate the application of our framework and evaluate its performance. The method is sensitive to Gμ ∼ 5 × 10⁻⁷ for Nambu-Goto string simulations that include an integrated Sachs-Wolfe contribution only and do not include any recombination effects, before any parameters of the analysis are optimized. The sensitivity of the method compares favourably with other techniques applied to the same simulations.
When will low-contrast features be visible in a STEM X-ray spectrum image?
Parish, Chad M.
2015-04-01
When will a small or low-contrast feature, such as an embedded second-phase particle, be visible in a scanning transmission electron microscopy (STEM) X-ray map? This work illustrates a computationally inexpensive method to simulate X-ray maps and spectrum images (SIs), based upon the equations of X-ray generation and detection. To particularize the general procedure, an example of a nanostructured ferritic alloy (NFA) containing nm-sized Y2Ti2O7 precipitates embedded in a ferritic stainless steel matrix is chosen. The proposed model produces physically plausible simulated SI data sets, which can either be reduced to X-ray dot maps or analyzed via multivariate statistical analysis. NFA X-ray maps acquired using three different STEM instruments match the generated simulations quite well, despite the large number of simplifying assumptions used. A figure of merit of electron dose multiplied by X-ray collection solid angle is proposed to compare feature detectability from one data set (simulated or experimental) to another. The proposed method can scope experiments that are feasible under specific analysis conditions on a given microscope. As a result, future applications, such as spallation proton–neutron irradiations, core-shell nanoparticles, or dopants in polycrystalline photovoltaic solar cells, are proposed.
High-Dimensional Intrinsic Interpolation Using Gaussian Process Regression and Diffusion Maps
Thimmisetty, Charanraj A.; Ghanem, Roger G.; White, Joshua A.; ...
2017-10-10
This article considers the challenging task of estimating geologic properties of interest using a suite of proxy measurements. The current work recasts this task as a manifold learning problem. In this process, this article introduces a novel regression procedure for intrinsic variables constrained onto a manifold embedded in an ambient space. The procedure is meant to sharpen high-dimensional interpolation by inferring non-linear correlations from the data being interpolated. The proposed approach augments manifold learning procedures with a Gaussian process regression. It first identifies, using diffusion maps, a low-dimensional manifold embedded in an ambient high-dimensional space associated with the data. It relies on the diffusion distance associated with this construction to define a distance function with which the data model is equipped. This distance metric function is then used to compute the correlation structure of a Gaussian process that describes the statistical dependence of quantities of interest in the high-dimensional ambient space. The proposed method is applicable to arbitrarily high-dimensional data sets. Here, it is applied to subsurface characterization using a suite of well log measurements. The predictions obtained in original, principal component, and diffusion space are compared using both qualitative and quantitative metrics. Considerable improvement in the prediction of the geological structural properties is observed with the proposed method.
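A compact sketch of the two ingredients (diffusion-map coordinates, then a Gaussian process whose kernel acts on those coordinates) is given below; the kernel widths, the eigen-solver details and the use of real parts are simplifying assumptions, not the authors' implementation.

```python
import numpy as np

def diffusion_map(X, eps, n_coords=2, t=1):
    """Basic diffusion-map coordinates from a Gaussian kernel on the rows of X."""
    D2 = ((X[:, None, :] - X[None, :, :])**2).sum(-1)
    K = np.exp(-D2 / eps)
    P = K / K.sum(axis=1, keepdims=True)              # Markov normalization
    w, V = np.linalg.eig(P)
    order = np.argsort(-w.real)
    w, V = w.real[order], V.real[:, order]            # keep real parts for simplicity
    return (w[1:n_coords + 1]**t) * V[:, 1:n_coords + 1]  # skip the trivial eigenvector

def gp_predict(coords, y, coords_star, length=1.0, noise=1e-3):
    """GP regression whose correlation is a function of distance in diffusion space."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :])**2).sum(-1)
        return np.exp(-d2 / (2 * length**2))
    K = k(coords, coords) + noise * np.eye(len(coords))
    return k(coords_star, coords) @ np.linalg.solve(K, y)
```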
Theoretical approaches to the steady-state statistical physics of interacting dissipative units
NASA Astrophysics Data System (ADS)
Bertin, Eric
2017-02-01
The aim of this review is to provide a concise overview of some of the generic approaches that have been developed to deal with the statistical description of large systems of interacting dissipative ‘units’. The latter notion includes, e.g. inelastic grains, active or self-propelled particles, bubbles in a foam, low-dimensional dynamical systems like driven oscillators, or even spatially extended modes like Fourier modes of the velocity field in a fluid. We first review methods based on the statistical properties of a single unit, starting with elementary mean-field approximations, either static or dynamic, that describe a unit embedded in a ‘self-consistent’ environment. We then discuss how this basic mean-field approach can be extended to account for spatial dependences, in the form of space-dependent mean-field Fokker-Planck equations, for example. We also briefly review the use of kinetic theory in the framework of the Boltzmann equation, which is an appropriate description for dilute systems. We then turn to descriptions in terms of the full N-body distribution, starting from exact solutions of one-dimensional models, using a matrix-product ansatz method when correlations are present. Since exactly solvable models are scarce, we also present some approximation methods which can be used to determine the N-body distribution in a large system of dissipative units. These methods include the Edwards approach for dense granular matter and the approximate treatment of multiparticle Langevin equations with colored noise, which models systems of self-propelled particles. Throughout this review, emphasis is put on methodological aspects of the statistical modeling and on formal similarities between different physical problems, rather than on the specific behavior of a given system.
NASA Astrophysics Data System (ADS)
Chen, Tao; Ye, Meng-li; Liu, Shu-liang; Deng, Yan
2018-03-01
Based on the principle underlying the occurrence of cross-sensitivity, a series of calibration experiments is carried out to solve the cross-sensitivity problem of embedded fiber Bragg gratings (FBGs) using the reference grating method. Moreover, an ultrasonic-vibration-assisted grinding (UVAG) model is established, and finite element analysis (FEA) is carried out under the monitoring environment of the embedded temperature measurement system. In addition, the related temperature acquisition tests are set up in accordance with the requirements of the reference grating method. Finally, comparative analyses of the simulation and experimental results are performed, and it is concluded that the reference grating method can be used to effectively resolve the cross-sensitivity of embedded FBGs.
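For context, the usual FBG strain-temperature sensitivity relation and the reference-grating correction are sketched below, assuming both gratings share the same temperature and material coefficients; the paper's calibration constants are not reproduced.

```latex
% Bragg-wavelength shift of an FBG under strain and temperature (the cross-sensitivity):
\frac{\Delta\lambda_B}{\lambda_B} = (1 - p_e)\,\varepsilon + (\alpha_\Lambda + \alpha_n)\,\Delta T

% Reference-grating correction: a strain-isolated FBG at the same temperature sees only
% the \Delta T term, so subtracting its normalized shift isolates the strain:
\varepsilon = \frac{1}{1 - p_e}\left(
  \frac{\Delta\lambda_{B,\mathrm{sensing}}}{\lambda_{B,\mathrm{sensing}}}
  - \frac{\Delta\lambda_{B,\mathrm{ref}}}{\lambda_{B,\mathrm{ref}}}\right)
```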
Wang, Yun; Huang, Fangzhou
2018-01-01
The selection of feature genes with high recognition ability from the gene expression profiles has gained great significance in biology. However, most of the existing methods have a high time complexity and poor classification performance. Motivated by this, an effective feature selection method, called supervised locally linear embedding and Spearman's rank correlation coefficient (SLLE-SC2), is proposed which is based on the concept of locally linear embedding and correlation coefficient algorithms. Supervised locally linear embedding takes into account class label information and improves the classification performance. Furthermore, Spearman's rank correlation coefficient is used to remove the coexpression genes. The experiment results obtained on four public tumor microarray datasets illustrate that our method is valid and feasible. PMID:29666661
Xu, Jiucheng; Mu, Huiyu; Wang, Yun; Huang, Fangzhou
2018-01-01
The selection of feature genes with high recognition ability from the gene expression profiles has gained great significance in biology. However, most of the existing methods have a high time complexity and poor classification performance. Motivated by this, an effective feature selection method, called supervised locally linear embedding and Spearman's rank correlation coefficient (SLLE-SC2), is proposed which is based on the concept of locally linear embedding and correlation coefficient algorithms. Supervised locally linear embedding takes into account class label information and improves the classification performance. Furthermore, Spearman's rank correlation coefficient is used to remove the coexpression genes. The experiment results obtained on four public tumor microarray datasets illustrate that our method is valid and feasible.
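A minimal sketch of the Spearman-based co-expression filtering step is given below; the greedy strategy and the 0.9 threshold are assumptions for illustration, not the SLLE-SC2 procedure as published.

```python
import numpy as np
from scipy.stats import spearmanr

def drop_coexpressed(X, gene_names, threshold=0.9):
    """Greedy removal of co-expressed genes: keep a gene only if its Spearman
    correlation with every already-kept gene stays below the threshold.
    X has shape (n_samples, n_genes); the threshold is an illustrative choice."""
    rho, _ = spearmanr(X)                 # (n_genes, n_genes) rank-correlation matrix
    kept = []
    for j in range(X.shape[1]):
        if all(abs(rho[j, k]) < threshold for k in kept):
            kept.append(j)
    return [gene_names[j] for j in kept]
```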
Sandford, II, Maxwell T.; Handel, Theodore G.
1997-01-01
A method of embedding auxiliary information into a set of host data, such as a photograph, television signal, facsimile transmission, or identification card. All such host data contain intrinsic noise, allowing pixels in the host data which are nearly identical and which have values differing by less than the noise value to be manipulated and replaced with auxiliary data. As the embedding method does not change the elemental values of the host data, the auxiliary data do not noticeably affect the appearance or interpretation of the host data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user.
Sandford, M.T. II; Handel, T.G.
1997-08-19
A method is disclosed for embedding auxiliary information into a set of host data, such as a photograph, television signal, facsimile transmission, or identification card. All such host data contain intrinsic noise, allowing pixels in the host data which are nearly identical and which have values differing by less than the noise value to be manipulated and replaced with auxiliary data. As the embedding method does not change the elemental values of the host data, the auxiliary data do not noticeably affect the appearance or interpretation of the host data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. 19 figs.
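As a toy illustration of the patented idea (auxiliary bits hidden among host values that differ by less than the intrinsic noise), the sketch below reorders near-equal pixel pairs; the pairing rule and threshold are assumptions, not the claimed method.

```python
import numpy as np

def embed_in_noise(host, payload_bits, noise_threshold=2):
    """Toy version of the concept: adjacent pixel pairs whose values differ by less
    than the noise threshold are treated as interchangeable, so the order of each
    such pair can carry one payload bit without leaving the noise floor."""
    img = host.astype(np.int32).copy().ravel()
    bit = iter(payload_bits)
    for i in range(0, img.size - 1, 2):
        a, b = img[i], img[i + 1]
        if 0 < abs(a - b) < noise_threshold:
            try:
                want_ascending = next(bit)
            except StopIteration:
                break                      # payload exhausted
            lo, hi = min(a, b), max(a, b)
            img[i], img[i + 1] = (lo, hi) if want_ascending else (hi, lo)
    return img.reshape(host.shape).astype(host.dtype)
```

Extraction in this sketch simply revisits the same qualifying pairs and reads their order; because only near-identical values are swapped, the host's appearance is essentially unchanged, mirroring the property the patent describes.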
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jha, Sumit Kumar; Pullum, Laura L; Ramanathan, Arvind
Embedded intelligent systems ranging from tiny implantable biomedical devices to large swarms of autonomous unmanned aerial systems are becoming pervasive in our daily lives. While we depend on the flawless functioning of such intelligent systems, and often take their behavioral correctness and safety for granted, it is notoriously difficult to generate test cases that expose subtle errors in the implementations of machine learning algorithms. Hence, the validation of intelligent systems is usually achieved by studying their behavior on representative data sets, using methods such as cross-validation and bootstrapping. In this paper, we present a new testing methodology for studying the correctness of intelligent systems. Our approach uses symbolic decision procedures coupled with statistical hypothesis testing to. We also use our algorithm to analyze the robustness of a human detection algorithm built using the OpenCV open-source computer vision library. We show that the human detection implementation can fail to detect humans in perturbed video frames even when the perturbations are so small that the corresponding frames look identical to the naked eye.
Daikoku, Tatsuya
2018-06-19
Statistical learning (SL) is a method of learning based on the transitional probabilities embedded in sequential phenomena such as music and language. It has been considered an implicit and domain-general mechanism that is innate in the human brain and that functions independently of intention to learn and awareness of what has been learned. SL is an interdisciplinary notion that incorporates information technology, artificial intelligence, musicology, and linguistics, as well as psychology and neuroscience. A body of recent studies has suggested that SL can be reflected in neurophysiological responses based on the framework of information theory. This paper reviews a range of work on SL in adults and children that suggests overlapping and independent neural correlates in music and language, and that indicates impairments of SL. Furthermore, this article discusses the relationships between the order of transitional probabilities (TPs) (i.e., the hierarchy of local statistics) and entropy (i.e., global statistics) regarding SL strategies in the human brain; argues for the importance of information-theoretic approaches to understanding domain-general, higher-order, and global SL covering both real-world music and language; and proposes promising approaches for the application of therapy and pedagogy from various perspectives of psychology, neuroscience, computational studies, musicology, and linguistics.
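A small sketch of the quantities this review builds on, first-order transitional probabilities and the entropy of the symbol distribution, is given below; the toy tone labels are illustrative, not the stimuli of any particular study.

```python
import numpy as np
from collections import Counter

def transition_probabilities(seq, order=1):
    """Conditional probabilities P(next symbol | previous `order` symbols)."""
    ctx_counts, pair_counts = Counter(), Counter()
    for i in range(len(seq) - order):
        ctx, nxt = tuple(seq[i:i + order]), seq[i + order]
        ctx_counts[ctx] += 1
        pair_counts[(ctx, nxt)] += 1
    return {k: v / ctx_counts[k[0]] for k, v in pair_counts.items()}

def symbol_entropy(seq):
    """Shannon entropy (bits) of the marginal symbol distribution (global statistics)."""
    counts = np.array(list(Counter(seq).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Toy tone stream built from triplets ABC / DEF (illustrative labels).
stream = list("ABCDEFABCABCDEF")
tps = transition_probabilities(stream, order=1)
h = symbol_entropy(stream)
```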
Ehdaie, Beeta; Rento, Chloe T.; Son, Veronica; Turner, Sydney S.; Samie, Amidou; Dillingham, Rebecca A.
2017-01-01
The World Health Organization (WHO) recognizes point-of-use water treatment (PoUWT) technologies as effective means to improve water quality. This paper investigates long-term performance and social acceptance of a novel PoUWT technology, a silver-infused ceramic tablet, in Limpopo Province, South Africa. When placed in a water storage container, the silver-embedded ceramic tablet releases silver ions into water, thereby disinfecting microbial pathogens and leaving the water safe for human consumption. As a result of its simplicity and efficiency, the silver-embedded ceramic tablet can serve as a stand-alone PoUWT method and as a secondary PoUWT to improve existing PoUWT methods, such as ceramic water filters. In this paper, three PoUWT interventions were conducted to evaluate the silver-embedded ceramic tablet: (1) the silver-embedded ceramic tablet as a stand-alone PoUWT method, (2) ceramic water filters stand-alone, and (3) a filter-tablet combination. The filter-tablet combination evaluates the silver-embedded ceramic tablet as a secondary PoUWT method when placed in the lower reservoir of the ceramic water filter system to provide residual disinfection post-filtration. Samples were collected from 79 households over one year and analyzed for turbidity, total silver levels and coliform bacteria. Results show that the silver-embedded ceramic tablet effectively reduced total coliform bacteria (TC) and E. coli when used as a stand-alone PoUWT method and when used in combination with ceramic water filters. The silver-embedded ceramic tablet’s performance as a stand-alone PoUWT method was comparable to current inexpensive, single-use PoUWT methods, demonstrating 100% and 75% median reduction in E. coli and TC, respectively, after two months of use. Overall, the filter-tablet combination performed the best of the three interventions, providing a 100% average percent reduction in E. coli over one year. User surveys were also conducted and indicated that the silver-embedded ceramic tablet was simple to use and culturally appropriate. Also, silver levels in all treated water samples remained below 20 μg/L, significantly lower than the drinking water standard of 100 μg/L, making it safe for consumption. Long-term data demonstrates that the silver-embedded ceramic tablet has beneficial effects even after one year of use. This study demonstrates that the silver-embedded ceramic tablet can effectively improve water quality when used alone, or with ceramic water filters, to reduce rates of recontamination. Therefore, the tablet has the potential to provide a low-cost means to purify water in resource-limited settings. PMID:28095435
Ehdaie, Beeta; Rento, Chloe T; Son, Veronica; Turner, Sydney S; Samie, Amidou; Dillingham, Rebecca A; Smith, James A
2017-01-01
The World Health Organization (WHO) recognizes point-of-use water treatment (PoUWT) technologies as effective means to improve water quality. This paper investigates long-term performance and social acceptance of a novel PoUWT technology, a silver-infused ceramic tablet, in Limpopo Province, South Africa. When placed in a water storage container, the silver-embedded ceramic tablet releases silver ions into water, thereby disinfecting microbial pathogens and leaving the water safe for human consumption. As a result of its simplicity and efficiency, the silver-embedded ceramic tablet can serve as a stand-alone PoUWT method and as a secondary PoUWT to improve existing PoUWT methods, such as ceramic water filters. In this paper, three PoUWT interventions were conducted to evaluate the silver-embedded ceramic tablet: (1) the silver-embedded ceramic tablet as a stand-alone PoUWT method, (2) ceramic water filters stand-alone, and (3) a filter-tablet combination. The filter-tablet combination evaluates the silver-embedded ceramic tablet as a secondary PoUWT method when placed in the lower reservoir of the ceramic water filter system to provide residual disinfection post-filtration. Samples were collected from 79 households over one year and analyzed for turbidity, total silver levels and coliform bacteria. Results show that the silver-embedded ceramic tablet effectively reduced total coliform bacteria (TC) and E. coli when used as a stand-alone PoUWT method and when used in combination with ceramic water filters. The silver-embedded ceramic tablet's performance as a stand-alone PoUWT method was comparable to current inexpensive, single-use PoUWT methods, demonstrating 100% and 75% median reduction in E. coli and TC, respectively, after two months of use. Overall, the filter-tablet combination performed the best of the three interventions, providing a 100% average percent reduction in E. coli over one year. User surveys were also conducted and indicated that the silver-embedded ceramic tablet was simple to use and culturally appropriate. Also, silver levels in all treated water samples remained below 20 μg/L, significantly lower than the drinking water standard of 100 μg/L, making it safe for consumption. Long-term data demonstrates that the silver-embedded ceramic tablet has beneficial effects even after one year of use. This study demonstrates that the silver-embedded ceramic tablet can effectively improve water quality when used alone, or with ceramic water filters, to reduce rates of recontamination. Therefore, the tablet has the potential to provide a low-cost means to purify water in resource-limited settings.
Embedded Model Error Representation and Propagation in Climate Models
NASA Astrophysics Data System (ADS)
Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Thornton, P. E.
2017-12-01
Over the last decade, parametric uncertainty quantification (UQ) methods have reached a level of maturity, while the same cannot be said about the representation and quantification of structural or model errors. Lack of characterization of model errors, induced by physical assumptions, phenomenological parameterizations or constitutive laws, is a major handicap in predictive science. In climate models, for example, significant computational resources are dedicated to model calibration without gaining improvement in predictive skill. Neglecting model errors during calibration/tuning will lead to overconfident and biased model parameters. At the same time, the most advanced methods accounting for model error merely correct output biases, augmenting model outputs with statistical error terms that can potentially violate physical laws, or make the calibrated model ineffective for extrapolative scenarios. This work will overview a principled path for representing and quantifying model errors, as well as propagating them together with the rest of the predictive uncertainty budget, including data noise, parametric uncertainties and surrogate-related errors. Namely, the model error terms will be embedded in select model components rather than added as external corrections. Such embedding ensures consistency with physical constraints on model predictions, and renders calibrated model predictions meaningful and robust with respect to model errors. Moreover, in the presence of observational data, the approach can effectively differentiate model structural deficiencies from those of data acquisition. The methodology is implemented in the UQ Toolkit (www.sandia.gov/uqtoolkit), relying on a host of available forward and inverse UQ tools. We will demonstrate the application of the technique on a few applications of interest, including ACME Land Model calibration using a wide range of measurements obtained at select sites.
Yousefsani, Seyed Abdolmajid; Shamloo, Amir; Farahmand, Farzam
2018-04-01
A transverse-plane hyperelastic micromechanical model of brain white matter tissue was developed using the embedded element technique (EET). The model consisted of a histology-informed probabilistic distribution of axonal fibers embedded within an extracellular matrix, both described using the generalized Ogden hyperelastic material model. A correction method, based on the strain energy density function, was formulated to resolve the stiffness redundancy problem of the EET in the large-deformation regime. The model was then used to predict the homogenized tissue behavior and the associated localized responses of the axonal fibers under quasi-static, transverse, large deformations. Results indicated that with a sufficiently large representative volume element (RVE) and fine mesh, the statistically randomized microstructure implemented in the RVE exhibits directional independence in the transverse plane, and the model predictions for the overall and local tissue responses, characterized by the normalized strain energy density and the Cauchy and von Mises stresses, are independent of the modeling parameters. Comparison of the responses of the probabilistic model with those of a simple uniform RVE revealed that only the former is capable of representing the localized behavior of the tissue constituents. A validity test of the model predictions for the corona radiata against experimental data from the literature indicated very close agreement. In comparison with the conventional direct meshing method, the model provided almost the same results after correcting the stiffness redundancy, but at much lower computational cost and with simpler geometric modeling, meshing, and imposition of boundary conditions. It was concluded that the EET can be used effectively for detailed probabilistic micromechanical modeling of the white matter in order to provide more accurate predictions of the axonal responses, which are of great importance when simulating brain trauma or tumor growth.
Resin embedded multicycle imaging (REMI): a tool to evaluate protein domains.
Busse, B L; Bezrukov, L; Blank, P S; Zimmerberg, J
2016-08-08
Protein complexes associated with cellular processes comprise a significant fraction of all biology, but our understanding of their heterogeneous organization remains inadequate, particularly for physiological densities of multiple protein species. Towards resolving this limitation, we here present a new technique based on resin-embedded multicycle imaging (REMI) of proteins in-situ. By stabilizing protein structure and antigenicity in acrylic resins, affinity labels were repeatedly applied, imaged, removed, and replaced. In principle, an arbitrarily large number of proteins of interest may be imaged on the same specimen with subsequent digital overlay. A series of novel preparative methods were developed to address the problem of imaging multiple protein species in areas of the plasma membrane or volumes of cytoplasm of individual cells. For multiplexed examination of antibody staining we used straightforward computational techniques to align sequential images, and super-resolution microscopy was used to further define membrane protein colocalization. We give one example of a fibroblast membrane with eight multiplexed proteins. A simple statistical analysis of this limited membrane proteomic dataset is sufficient to demonstrate the analytical power contributed by additional imaged proteins when studying membrane protein domains.
Wavelet-based compression of M-FISH images.
Hua, Jianping; Xiong, Zixiang; Wu, Qiang; Castleman, Kenneth R
2005-05-01
Multiplex fluorescence in situ hybridization (M-FISH) is a recently developed technology that enables multi-color chromosome karyotyping for molecular cytogenetic analysis. Each M-FISH image set consists of a number of aligned images of the same chromosome specimen captured at different optical wavelengths. This paper presents embedded M-FISH image coding (EMIC), where the foreground objects/chromosomes and the background objects/images are coded separately. We first apply critically sampled integer wavelet transforms to both the foreground and the background. We then use object-based bit-plane coding to compress each object and generate separate embedded bitstreams that allow continuous lossy-to-lossless compression of the foreground and the background. For efficient arithmetic coding of bit planes, we propose a method of designing an optimal context model that specifically exploits the statistical characteristics of M-FISH images in the wavelet domain. Our experiments show that EMIC achieves nearly twice as much compression as Lempel-Ziv-Welch coding. EMIC also performs much better than JPEG-LS and JPEG-2000 for lossless coding. The lossy performance of EMIC is significantly better than that of coding each M-FISH image with JPEG-2000.
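For reference, one level of an integer (reversible) wavelet transform of the kind used for lossless coding, the Haar/S transform, can be sketched as below; EMIC's actual filters, context model and bit-plane coder are not reproduced here.

```python
import numpy as np

def s_transform_forward(x):
    """One level of the integer Haar (S) transform on a 1-D signal of even length."""
    a, b = x[0::2].astype(np.int64), x[1::2].astype(np.int64)
    low  = (a + b) // 2          # integer average (floor)
    high = a - b                 # integer difference
    return low, high

def s_transform_inverse(low, high):
    """Exact inverse: the floor rounding in the forward step is undone losslessly."""
    b = low - high // 2
    a = b + high
    x = np.empty(low.size * 2, dtype=np.int64)
    x[0::2], x[1::2] = a, b
    return x
```

Because both steps use only integer arithmetic, the inverse reproduces the input exactly, which is the property that makes continuous lossy-to-lossless coding possible.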
Linguistic steganography on Twitter: hierarchical language modeling with manual interaction
NASA Astrophysics Data System (ADS)
Wilson, Alex; Blunsom, Phil; Ker, Andrew D.
2014-02-01
This work proposes a natural language stegosystem for Twitter, modifying tweets as they are written to hide 4 bits of payload per tweet, which is a greater payload than previous systems have achieved. The system, CoverTweet, includes novel components, as well as some already developed in the literature. We believe that the task of transforming covers during embedding is equivalent to unilingual machine translation (paraphrasing), and we use this equivalence to define a distortion measure based on statistical machine translation methods. The system incorporates this measure of distortion to rank possible tweet paraphrases, using a hierarchical language model; we use human interaction as a second distortion measure to pick the best. The hierarchical language model is designed to model the specific language of the covers, which in this setting is the language of the Twitter user who is embedding. This is a change from previous work, where general-purpose language models have been used. We evaluate our system by testing the output against human judges, and show that humans are unable to distinguish stego tweets from cover tweets any better than random guessing.
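A generic selection-channel sketch is shown below: among acceptable paraphrases of a tweet, pick one whose hash begins with the payload bits. This is an assumption-level illustration of embedding by paraphrase selection, not CoverTweet's language-model ranking or its human-in-the-loop step.

```python
import hashlib

def select_paraphrase(candidates, payload_bits):
    """Return a candidate paraphrase whose SHA-256 digest starts with the payload bits,
    or None if no candidate matches (in which case fewer bits must be embedded)."""
    target = ''.join(str(b) for b in payload_bits)
    for tweet in candidates:
        digest = hashlib.sha256(tweet.encode('utf-8')).hexdigest()
        bits = bin(int(digest, 16))[2:].zfill(256)[:len(payload_bits)]
        if bits == target:
            return tweet
    return None

chosen = select_paraphrase(
    ["I can't wait for the game tonight",
     "so excited for tonight's game",
     "really looking forward to the game tonight",
     "cannot wait for the match this evening"],
    payload_bits=[1, 0])
```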
Resin embedded multicycle imaging (REMI): a tool to evaluate protein domains
Busse, B. L.; Bezrukov, L.; Blank, P. S.; Zimmerberg, J.
2016-01-01
Protein complexes associated with cellular processes comprise a significant fraction of all biology, but our understanding of their heterogeneous organization remains inadequate, particularly for physiological densities of multiple protein species. Towards resolving this limitation, we here present a new technique based on resin-embedded multicycle imaging (REMI) of proteins in-situ. By stabilizing protein structure and antigenicity in acrylic resins, affinity labels were repeatedly applied, imaged, removed, and replaced. In principle, an arbitrarily large number of proteins of interest may be imaged on the same specimen with subsequent digital overlay. A series of novel preparative methods were developed to address the problem of imaging multiple protein species in areas of the plasma membrane or volumes of cytoplasm of individual cells. For multiplexed examination of antibody staining we used straightforward computational techniques to align sequential images, and super-resolution microscopy was used to further define membrane protein colocalization. We give one example of a fibroblast membrane with eight multiplexed proteins. A simple statistical analysis of this limited membrane proteomic dataset is sufficient to demonstrate the analytical power contributed by additional imaged proteins when studying membrane protein domains. PMID:27499335
Modeling for Ultrasonic Health Monitoring of Foams with Embedded Sensors
NASA Technical Reports Server (NTRS)
Wang, L.; Rokhlin, S. I.; Rokhlin, Stanislav, I.
2005-01-01
In this report analytical and numerical methods are proposed to estimate the effective elastic properties of regular and random open-cell foams. The methods are based on the principle of minimum energy and on structural beam models. The analytical solutions are obtained using symbolic processing software. The microstructure of the random foam is simulated using Voronoi tessellation together with a rate-dependent random close-packing algorithm. The statistics of the geometrical properties of random foams corresponding to different packing fractions have been studied. The effects of the packing fraction on elastic properties of the foams have been investigated by decomposing the compliance into bending and axial compliance components. It is shown that the bending compliance increases and the axial compliance decreases when the packing fraction increases. Keywords: Foam; Elastic properties; Finite element; Randomness
Affective context interferes with cognitive control in unipolar depression: An fMRI investigation
Dichter, Gabriel S.; Felder, Jennifer N.; Smoski, Moria J.
2009-01-01
Background Unipolar major depressive disorder (MDD) is characterized by aberrant amygdala responses to sad stimuli and poor cognitive control, but the interactive effects of these impairments are poorly understood. Aim To evaluate brain activation in MDD in response to cognitive control stimuli embedded within sad and neutral contexts. Method Fourteen adults with MDD and fifteen matched controls participated in a mixed block/event-related functional magnetic resonance imaging (fMRI) task that presented oddball target stimuli embedded within blocks of sad or neutral images. Results Target events activated similar prefrontal brain regions in both groups. However, responses to target events embedded within blocks of emotional images revealed a clear group dissociation. During neutral blocks, the control group demonstrated greater activation to targets in the midfrontal gyrus and anterior cingulate relative to the MDD group, replicating previous findings of prefrontal hypo-activation in MDD samples to cognitive control stimuli. However, during sad blocks, the MDD group demonstrated greater activation in a number of prefrontal regions, including the mid-, inferior, and orbito-frontal gyri and the anterior cingulate, suggesting that relatively more prefrontal brain activation was required to disengage from the sad images to respond to the target events. Limitations A larger sample size would have provided greater statistical power, and more standardized stimuli would have increased external validity. Conclusions This double dissociation of prefrontal responses to target events embedded within neutral and sad context suggests that MDD impacts not only responses to affective events, but extends to other cognitive processes carried out in the context of affective engagement. This implies that emotional reactivity to sad events in MDD may impact functioning more broadly than previously understood. PMID:18706701
A novel sample preparation method to avoid influence of embedding medium during nano-indentation
Yujie Meng; Siqun Wang; Zhiyong Cai; Timothy M. Young; Guanben Du; Yanjun Li
2012-01-01
The effect of the embedding medium on the nano-indentation measurements of lignocellulosic materials was investigated experimentally using nano-indentation. Both the reduced elastic modulus and the hardness of nonembedded cell walls were found to be lower than those of the embedded samples, proving that the embedding medium used for specimen preparation on cellulosic...
Robust kernel representation with statistical local features for face recognition.
Yang, Meng; Zhang, Lei; Shiu, Simon Chi-Keung; Zhang, David
2013-06-01
Factors such as misalignment, pose variation, and occlusion make robust face recognition a difficult problem. It is known that statistical features such as local binary pattern are effective for local feature extraction, whereas the recently proposed sparse or collaborative representation-based classification has shown interesting results in robust face recognition. In this paper, we propose a novel robust kernel representation model with statistical local features (SLF) for robust face recognition. Initially, multipartition max pooling is used to enhance the invariance of SLF to image registration error. Then, a kernel-based representation model is proposed to fully exploit the discrimination information embedded in the SLF, and robust regression is adopted to effectively handle the occlusion in face images. Extensive experiments are conducted on benchmark face databases, including extended Yale B, AR (A. Martinez and R. Benavente), multiple pose, illumination, and expression (multi-PIE), facial recognition technology (FERET), face recognition grand challenge (FRGC), and labeled faces in the wild (LFW), which have different variations of lighting, expression, pose, and occlusions, demonstrating the promising performance of the proposed method.
Study of Composite Plate Damages Using Embedded PZT Sensors with Various Center Frequency
NASA Astrophysics Data System (ADS)
Kang, Kyoung-Tak; Chun, Heoung-Jae; Son, Ju-Hyun; Byun, Joon-Hyung; Um, Moon-Kwang; Lee, Sang-Kwan
This study presents part of an experimental and analytical survey of candidate methods for damage detection in composite structures. Embedded piezoceramic (PZT) sensors were excited with a high-power ultrasonic wave generator, producing stress waves that propagate along the composite plate. The same embedded piezoceramic (PZT) sensors were used as receivers to acquire the stress-wave signals. The effect of the center frequency of the embedded sensors on damage identification capability was evaluated using known localized defects. The study assessed damage in the composite plate by fusing information from multiple sensing paths of the embedded network, based on the Hilbert transform, signal correlation, and probabilistic searching. The obtained results show that satisfactory detection of defects could be achieved by the proposed method.
A Steganographic Embedding Undetectable by JPEG Compatibility Steganalysis
2002-01-01
Steganography and steganalysis of digital images is a cat-and-mouse game. In recent work, Fridrich, Goljan and Du introduced a method... proposed embedding method. Ever since Kurak and McHugh's seminal paper on LSB embeddings in images [10], various researchers have published work on either increasing the payload, improving the resistance to...
Research about Memory Detection Based on the Embedded Platform
NASA Astrophysics Data System (ADS)
Sun, Hao; Chu, Jian
As is well known, the memory resources of embedded systems are very limited. Taking a Linux-based embedded ARM board as the platform, this article puts forward two efficient memory detection technologies suited to the characteristics of embedded software. In particular, for programs that require specific libraries, the article puts forward portable memory detection methods that help program designers reduce human errors, improve programming quality, and thereby make better use of the valuable embedded memory resource.
Data embedding employing degenerate clusters of data having differences less than noise value
Sanford, II, Maxwell T.; Handel, Theodore G.
1998-01-01
A method of embedding auxiliary information into a set of host data, such as a photograph, television signal, facsimile transmission, or identification card. All such host data contain intrinsic noise, allowing pixels in the host data which are nearly identical and which have values differing by less than the noise value to be manipulated and replaced with auxiliary data. As the embedding method does not change the elemental values of the host data, the auxiliary data do not noticeably affect the appearance or interpretation of the host data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user.
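The patent abstract above describes the general idea of hiding auxiliary bits in host values whose differences fall below the intrinsic noise level. The sketch below (Python) is an illustrative scheme in that spirit, not the patented algorithm: pairs of adjacent pixels that differ by no more than an assumed noise threshold have their ordering rearranged to carry one bit each, and extraction re-reads the ordering of the qualifying pairs. The pairing rule and threshold are assumptions made for the example.

```python
import numpy as np

def embed_bits(host, bits, noise=2):
    """Hide bits in pairs of adjacent values that differ by <= noise.

    For each qualifying pair, the order (ascending/descending) encodes one bit.
    Pairs with equal values are skipped because their order is not recoverable.
    """
    data = host.astype(np.int64).ravel().copy()
    k = 0
    for i in range(0, len(data) - 1, 2):
        if k >= len(bits):
            break
        a, b = data[i], data[i + 1]
        if a != b and abs(int(a) - int(b)) <= noise:
            lo, hi = min(a, b), max(a, b)
            data[i], data[i + 1] = (lo, hi) if bits[k] == 0 else (hi, lo)
            k += 1
    return data.reshape(host.shape), k          # stego data and number of bits embedded

def extract_bits(stego, n_bits, noise=2):
    data = stego.astype(np.int64).ravel()
    bits = []
    for i in range(0, len(data) - 1, 2):
        if len(bits) >= n_bits:
            break
        a, b = data[i], data[i + 1]
        if a != b and abs(int(a) - int(b)) <= noise:
            bits.append(0 if a < b else 1)
    return bits

rng = np.random.default_rng(0)
host = rng.integers(0, 256, size=(64, 64))      # toy 8-bit "image"
payload = [1, 0, 1, 1, 0, 0, 1, 0]
stego, n = embed_bits(host, payload)
assert extract_bits(stego, n) == payload[:n]
```

Because the embedding only reorders values within a pair, the pixel histogram is unchanged, which is consistent with the claim that elemental values of the host data are not altered.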
Untangling Brain-Wide Dynamics in Consciousness by Cross-Embedding
Tajima, Satohiro; Yanagawa, Toru; Fujii, Naotaka; Toyoizumi, Taro
2015-01-01
Brain-wide interactions generating complex neural dynamics are considered crucial for emergent cognitive functions. However, the irreducible nature of nonlinear and high-dimensional dynamical interactions challenges conventional reductionist approaches. We introduce a model-free method, based on embedding theorems in nonlinear state-space reconstruction, that permits a simultaneous characterization of complexity in local dynamics, directed interactions between brain areas, and how the complexity is produced by the interactions. We demonstrate this method in large-scale electrophysiological recordings from awake and anesthetized monkeys. The cross-embedding method captures structured interaction underlying cortex-wide dynamics that may be missed by conventional correlation-based analysis, demonstrating a critical role of time-series analysis in characterizing brain state. The method reveals a consciousness-related hierarchy of cortical areas, where dynamical complexity increases along with cross-area information flow. These findings demonstrate the advantages of the cross-embedding method in deciphering large-scale and heterogeneous neuronal systems, suggesting a crucial contribution by sensory-frontoparietal interactions to the emergence of complex brain dynamics during consciousness. PMID:26584045
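The cross-embedding idea above rests on nonlinear state-space reconstruction: if information about area X is embedded in area Y's dynamics, a delay embedding of Y should allow X to be predicted. The sketch below is a minimal, generic delay-embedding cross-prediction (in the spirit of convergent cross mapping), not the authors' pipeline; the embedding dimension, delay, neighbor count, and toy signals are all assumptions.

```python
import numpy as np

def delay_embed(x, dim=3, tau=2):
    """Delay embedding: row t is [x(t), x(t-tau), ..., x(t-(dim-1)*tau)]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[(dim - 1 - i) * tau:(dim - 1 - i) * tau + n]
                            for i in range(dim)])

def cross_map_skill(x, y, dim=3, tau=2, k=4):
    """Predict x(t) from the k nearest neighbours in y's delay embedding.
    High correlation suggests information about x is embedded in y's dynamics."""
    Y = delay_embed(y, dim, tau)
    target = x[(dim - 1) * tau:]
    preds = np.empty(len(Y))
    for i, point in enumerate(Y):
        d = np.linalg.norm(Y - point, axis=1)
        d[i] = np.inf                                # exclude the point itself
        nn = np.argsort(d)[:k]
        w = np.exp(-d[nn] / (d[nn].min() + 1e-12))   # simplex-style weights
        preds[i] = np.sum(w * target[nn]) / w.sum()
    return float(np.corrcoef(preds, target)[0, 1])

# Toy pair: y is a lagged, scaled copy of x plus noise, so x should be
# recoverable from y's reconstructed state space.
rng = np.random.default_rng(1)
x = np.sin(np.linspace(0, 40 * np.pi, 2000)) + 0.05 * rng.standard_normal(2000)
y = 0.6 * np.roll(x, 3) + 0.05 * rng.standard_normal(2000)
print(f"cross-map skill (x from y): {cross_map_skill(x, y):.2f}")
```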
Hartman, Joshua D; Balaji, Ashwin; Beran, Gregory J O
2017-12-12
Fragment-based methods predict nuclear magnetic resonance (NMR) chemical shielding tensors in molecular crystals with high accuracy and computational efficiency. Such methods typically employ electrostatic embedding to mimic the crystalline environment, and the quality of the results can be sensitive to the embedding treatment. To improve the quality of this embedding environment for fragment-based molecular crystal property calculations, we borrow ideas from the embedded ion method to incorporate self-consistently polarized Madelung field effects. The self-consistent reproduction of the Madelung potential (SCRMP) model developed here constructs an array of point charges that incorporates self-consistent lattice polarization and which reproduces the Madelung potential at all atomic sites involved in the quantum mechanical region of the system. The performance of fragment- and cluster-based 1H, 13C, 14N, and 17O chemical shift predictions using SCRMP and density functionals like PBE and PBE0 is assessed. The improved embedding model results in substantial improvements in the predicted 17O chemical shifts and modest improvements in the 15N ones. Finally, the performance of the model is demonstrated by examining the assignment of the two oxygen chemical shifts in the challenging γ-polymorph of glycine. Overall, the SCRMP-embedded NMR chemical shift predictions are on par with or more accurate than those obtained with the widely used gauge-including projector augmented wave (GIPAW) model.
Embedding Quantitative Methods by Stealth in Political Science: Developing a Pedagogy for Psephology
ERIC Educational Resources Information Center
Gunn, Andrew
2017-01-01
Student evaluations of quantitative methods courses in political science often reveal they are characterised by aversion, alienation and anxiety. As a solution to this problem, this paper describes a pedagogic research project with the aim of embedding quantitative methods by stealth into the first-year undergraduate curriculum. This paper…
NASA Astrophysics Data System (ADS)
Ensign, Todd I.; Rye, James A.; Luna, Melissa J.
2017-12-01
Research indicates that preservice teacher (PT) education programs can positively impact perceptions of scientific probeware use in K-8 environments. Despite the potential of probeware to improve science instruction and student engagement, its use in elementary education has been limited. Sixty-seven PT enrolled across three sections of an elementary science methods course participated in a mixed-methods study through which they utilized probeware in a thematic experience on ocean acidification. One-way repeated measures ANOVA of pre- and post-survey data measuring subscales of utility, ability, and intent to use probeware demonstrated a statistically significant increase with medium to large effect sizes for all subscales across all sections (p < 0.01, η_p² = 0.384; p < 0.001, η_p² = 0.517; p < 0.001, η_p² = 0.214). Analysis of reflective journals revealed over 60% felt the multiple capabilities (notably graphing) of probeware make it a useful classroom tool, and almost one-half believed that its use makes science more enjoyable and engaging. Mapping of the unitized data from the journals on the Next Generation Science Standards suggested that probeware use especially engages learners in planning and carrying out investigations and in analyzing and interpreting data. Journals also revealed that despite PT having prior experience with probeware in science courses, its use in their future elementary classroom is conditional on having a positive experience with probeware in a science methods course. Further, embedding a probeware experience in a unit on ocean acidification provides PT with strategies for addressing climate change and engaging in argument from evidence.
Teodoro, Douglas; Lovis, Christian
2013-01-01
Background Antibiotic resistance is a major worldwide public health concern. In clinical settings, timely antibiotic resistance information is key for care providers as it allows appropriate targeted treatment or improved empirical treatment when the specific results of the patient are not yet available. Objective To improve antibiotic resistance trend analysis algorithms by building a novel, fully data-driven forecasting method from the combination of trend extraction and machine learning models for enhanced biosurveillance systems. Methods We investigate a robust model for extraction and forecasting of antibiotic resistance trends using a decade of microbiology data. Our method consists of breaking down the resistance time series into independent oscillatory components via the empirical mode decomposition technique. The resulting waveforms describing intrinsic resistance trends serve as the input for the forecasting algorithm. The algorithm applies the delay coordinate embedding theorem together with the k-nearest neighbor framework to project mappings from past events into the future dimension and estimate the resistance levels. Results The algorithms that decompose the resistance time series and filter out high frequency components showed statistically significant performance improvements in comparison with a benchmark random walk model. We present further qualitative use-cases of antibiotic resistance trend extraction, where empirical mode decomposition was applied to highlight the specificities of the resistance trends. Conclusion The decomposition of the raw signal was found not only to yield valuable insight into the resistance evolution, but also to produce novel models of resistance forecasters with boosted prediction performance, which could be utilized as a complementary method in the analysis of antibiotic resistance trends. PMID:23637796
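The forecasting study above combines trend extraction with delay coordinate embedding and a k-nearest-neighbor projection into the future. The sketch below shows only the embedding-plus-kNN step on a pre-smoothed toy series; a simple moving average stands in for the empirical mode decomposition, and all parameters and data are illustrative assumptions rather than the published configuration.

```python
import numpy as np

def knn_forecast(series, dim=4, tau=1, k=3, horizon=1):
    """Forecast via delay-coordinate embedding plus k-nearest neighbours: the
    prediction is a distance-weighted average of what followed the k historical
    states most similar to the current one."""
    span = (dim - 1) * tau
    states, futures = [], []
    for t in range(span, len(series) - horizon):
        states.append(series[t - span:t + 1:tau])
        futures.append(series[t + horizon])
    states, futures = np.array(states), np.array(futures)
    query = series[len(series) - 1 - span::tau]          # the most recent state
    d = np.linalg.norm(states - query, axis=1)
    nn = np.argsort(d)[:k]
    w = 1.0 / (d[nn] + 1e-12)
    return float(np.sum(w * futures[nn]) / w.sum())

# Toy "resistance trend": a slow seasonal component plus noise, smoothed with a
# moving average as a stand-in for the decomposition step described in the paper.
rng = np.random.default_rng(2)
raw = 0.3 + 0.1 * np.sin(np.arange(120) * 2 * np.pi / 12) + 0.03 * rng.standard_normal(120)
trend = np.convolve(raw, np.ones(5) / 5, mode="valid")
print(f"next-step forecast: {knn_forecast(trend):.3f}")
```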
Mixed methods research in mental health nursing.
Kettles, A M; Creswell, J W; Zhang, W
2011-08-01
Mixed methods research is becoming more widely used in order to answer research questions and to investigate research problems in mental health and psychiatric nursing. However, two separate literature searches, one in Scotland and one in the USA, revealed that few mental health nursing studies identified mixed methods research in their titles. Many studies used the term 'embedded' but few studies identified in the literature were mixed methods embedded studies. The history, philosophical underpinnings, definition, types of mixed methods research and associated pragmatism are discussed, as well as the need for mixed methods research. Examples of mental health nursing mixed methods research are used to illustrate the different types of mixed methods: convergent parallel, embedded, explanatory and exploratory in their sequential and concurrent combinations. Implementing mixed methods research is also discussed briefly, and the problem of identifying mixed methods research in mental health and psychiatric nursing is discussed with some possible solutions proposed. © 2011 Blackwell Publishing.
Zhang, Wenli; Li, Caibin; Baguley, Bruce C; Zhou, Fang; Zhou, Weisai; Shaw, John P; Wang, Zhen; Wu, Zimei; Liu, Jianping
2016-12-15
To obtain a multicellular MCF-7 spheroid model that mimics the three-dimensional (3D) structure of tumors, the microwell liquid overlay (A) and hanging-drop/agar (B) methods were first compared for their technical parameters. Then a method for embedding spheroids within collagen was optimized. For method A, centrifugation helped cells form irregular aggregates but not spheroids. For method B, an extended sedimentation period of over 24 h for the cell suspensions and increased viscosity of the culture medium using methylcellulose were necessary to harvest dense and regular cell spheroids. When the number was less than 5000 cells/drop, embedded spheroids showed no tight cores and higher viability than the unembedded ones. However, above 5000 cells/drop, the cellular viability of embedded spheroids was not significantly different from that of unembedded spheroids, and cells invading through the collagen formed a sun-burst pattern with tight cores. Propidium iodide staining indicated that spheroids had necrotic cores. Doxorubicin (DOX) cytotoxicity testing demonstrated that spheroids were less susceptible to DOX than their monolayer counterparts. A reliable and reproducible method for embedding spheroids produced by the hanging-drop/agarose method within collagen is described herein. The cell culture model can be used to guide experimental manipulation of 3D cell cultures and to evaluate anticancer drug efficacy. Copyright © 2016 Elsevier Inc. All rights reserved.
Dai, Hanjun; Umarov, Ramzan; Kuwahara, Hiroyuki; Li, Yu; Song, Le; Gao, Xin
2017-11-15
An accurate characterization of transcription factor (TF)-DNA affinity landscape is crucial to a quantitative understanding of the molecular mechanisms underpinning endogenous gene regulation. While recent advances in biotechnology have brought the opportunity for building binding affinity prediction methods, the accurate characterization of TF-DNA binding affinity landscape still remains a challenging problem. Here we propose a novel sequence embedding approach for modeling the transcription factor binding affinity landscape. Our method represents DNA binding sequences as a hidden Markov model which captures both position specific information and long-range dependency in the sequence. A cornerstone of our method is a novel message passing-like embedding algorithm, called Sequence2Vec, which maps these hidden Markov models into a common nonlinear feature space and uses these embedded features to build a predictive model. Our method is a novel combination of the strength of probabilistic graphical models, feature space embedding and deep learning. We conducted comprehensive experiments on over 90 large-scale TF-DNA datasets which were measured by different high-throughput experimental technologies. Sequence2Vec outperforms alternative machine learning methods as well as the state-of-the-art binding affinity prediction methods. Our program is freely available at https://github.com/ramzan1990/sequence2vec. xin.gao@kaust.edu.sa or lsong@cc.gatech.edu. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
Embedding objects during 3D printing to add new functionalities.
Yuen, Po Ki
2016-07-01
A novel method for integrating and embedding objects to add new functionalities during 3D printing based on fused deposition modeling (FDM) (also known as fused filament fabrication or molten polymer deposition) is presented. Unlike typical 3D printing, FDM-based 3D printing could allow objects to be integrated and embedded during 3D printing and the FDM-based 3D printed devices do not typically require any post-processing and finishing. Thus, various fluidic devices with integrated glass cover slips or polystyrene films with and without an embedded porous membrane, and optical devices with embedded Corning(®) Fibrance™ Light-Diffusing Fiber were 3D printed to demonstrate the versatility of the FDM-based 3D printing and embedding method. Fluid perfusion flow experiments with a blue colored food dye solution were used to visually confirm fluid flow and/or fluid perfusion through the embedded porous membrane in the 3D printed fluidic devices. Similar to typical 3D printed devices, FDM-based 3D printed devices are translucent at best unless post-polishing is performed and optical transparency is highly desirable in any fluidic devices; integrated glass cover slips or polystyrene films would provide a perfect optical transparent window for observation and visualization. In addition, they also provide a compatible flat smooth surface for biological or biomolecular applications. The 3D printed fluidic devices with an embedded porous membrane are applicable to biological or chemical applications such as continuous perfusion cell culture or biocatalytic synthesis but without the need for any post-device assembly and finishing. The 3D printed devices with embedded Corning(®) Fibrance™ Light-Diffusing Fiber would have applications in display, illumination, or optical applications. Furthermore, the FDM-based 3D printing and embedding method could also be utilized to print casting molds with an integrated glass bottom for polydimethylsiloxane (PDMS) device replication. These 3D printed glass bottom casting molds would result in PDMS replicas with a flat smooth bottom surface for better bonding and adhesion.
Efficient sidelobe ASK based dual-function radar-communications
NASA Astrophysics Data System (ADS)
Hassanien, Aboulnasr; Amin, Moeness G.; Zhang, Yimin D.; Ahmad, Fauzia
2016-05-01
Recently, dual-function radar-communications (DFRC) has been proposed as a means to mitigate the spectrum congestion problem. Existing amplitude-shift keying (ASK) methods for information embedding do not take full advantage of the highest permissible sidelobe level. In this paper, a new ASK-based signaling strategy for enhancing the signal-to-noise ratio (SNR) at the communication receiver is proposed. The proposed method employs one reference waveform and simultaneously transmits a number of orthogonal waveforms equal to the number of 1's in the binary sequence being embedded. A 3 dB SNR gain is achieved using the proposed method as compared to existing sidelobe ASK methods. The effectiveness of the proposed information embedding strategy is verified using simulation examples.
NASA Astrophysics Data System (ADS)
Kim, Sungho; Ahn, Jae-Hyuk; Park, Tae Jung; Lee, Sang Yup; Choi, Yang-Kyu
2009-06-01
A unique direct electrical detection method for biomolecules, charge pumping, was demonstrated using a nanogap-embedded field-effect transistor (FET). With the aid of the charge pumping method, sensitivity can reach below the 1 ng/ml concentration regime for antigen-antibody binding in an avian influenza case. Biomolecules immobilized in the nanogap are mainly responsible for the acute changes in the interface trap density due to modulation of the trap energy level. This finding is supported by a numerical simulation. The proposed detection method for biomolecules using a nanogap-embedded FET represents a foundation for a chip-based biosensor capable of high sensitivity.
Data embedding employing degenerate clusters of data having differences less than noise value
Sanford, M.T. II; Handel, T.G.
1998-10-06
A method of embedding auxiliary information into a set of host data, such as a photograph, television signal, facsimile transmission, or identification card. All such host data contain intrinsic noise, allowing pixels in the host data which are nearly identical and which have values differing by less than the noise value to be manipulated and replaced with auxiliary data. As the embedding method does not change the elemental values of the host data, the auxiliary data do not noticeably affect the appearance or interpretation of the host data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. 35 figs.
Framing Electronic Medical Records as Polylingual Documents in Query Expansion
Huang, Edward W; Wang, Sheng; Lee, Doris Jung-Lin; Zhang, Runshun; Liu, Baoyan; Zhou, Xuezhong; Zhai, ChengXiang
2017-01-01
We present a study of electronic medical record (EMR) retrieval that emulates situations in which a doctor treats a new patient. Given a query consisting of a new patient’s symptoms, the retrieval system returns the set of most relevant records of previously treated patients. However, due to semantic, functional, and treatment synonyms in medical terminology, queries are often incomplete and thus require enhancement. In this paper, we present a topic model that frames symptoms and treatments as separate languages. Our experimental results show that this method improves retrieval performance over several baselines with statistical significance. These baselines include methods used in prior studies as well as state-of-the-art embedding techniques. Finally, we show that our proposed topic model discovers all three types of synonyms to improve medical record retrieval. PMID:29854161
A New Concurrent Multiscale Methodology for Coupling Molecular Dynamics and Finite Element Analyses
NASA Technical Reports Server (NTRS)
Yamakov, Vesselin; Saether, Erik; Glaessgen, Edward H.
2008-01-01
The coupling of molecular dynamics (MD) simulations with finite element methods (FEM) yields computationally efficient models that link fundamental material processes at the atomistic level with continuum field responses at higher length scales. The theoretical challenge involves developing a seamless connection along an interface between two inherently different simulation frameworks. Various specialized methods have been developed to solve particular classes of problems. Many of these methods link the kinematics of individual MD atoms with FEM nodes at their common interface, necessarily requiring that the finite element mesh be refined to atomic resolution. Some of these coupling approaches also require simulations to be carried out at 0 K and restrict modeling to two-dimensional material domains due to difficulties in simulating full three-dimensional material processes. In the present work, a new approach to MD-FEM coupling is developed based on a restatement of the standard boundary value problem used to define a coupled domain. The method replaces a direct linkage of individual MD atoms and finite element (FE) nodes with a statistical averaging of atomistic displacements in local atomic volumes associated with each FE node in an interface region. The FEM and MD computational systems are effectively independent and communicate only through an iterative update of their boundary conditions. With the use of statistical averages of the atomistic quantities to couple the two computational schemes, the developed approach is referred to as an embedded statistical coupling method (ESCM). ESCM provides an enhanced coupling methodology that is inherently applicable to three-dimensional domains, avoids discretization of the continuum model to atomic scale resolution, and permits finite temperature states to be applied.
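The key ESCM idea described above is that FE interface nodes receive boundary conditions from statistical averages of atomistic displacements in local atomic volumes, rather than from a one-to-one atom-node linkage. The sketch below shows that averaging step alone, using spherical averaging volumes; the radius, the toy displacement field, and the handling of empty volumes are assumptions for illustration, not details of the published method.

```python
import numpy as np

def nodal_displacements(atom_pos, atom_disp, node_pos, radius):
    """Average atomistic displacements inside a sphere around each FE interface
    node.  Nodes whose sphere contains no atoms get a zero vector (a choice made
    here for the sketch, not part of the published scheme)."""
    u_nodes = np.zeros_like(node_pos, dtype=float)
    for i, xn in enumerate(node_pos):
        mask = np.linalg.norm(atom_pos - xn, axis=1) <= radius
        if mask.any():
            u_nodes[i] = atom_disp[mask].mean(axis=0)
    return u_nodes

rng = np.random.default_rng(3)
atoms = rng.uniform(0.0, 10.0, size=(5000, 3))                         # toy atom positions
disps = 0.01 * atoms[:, :1] + 0.001 * rng.standard_normal((5000, 3))   # smooth field + thermal noise
nodes = np.array([[2.0, 5.0, 5.0], [5.0, 5.0, 5.0], [8.0, 5.0, 5.0]])  # interface node positions
print(nodal_displacements(atoms, disps, nodes, radius=1.5))
```

The averaged nodal values would then serve as boundary conditions for the FE solve, with the two solvers exchanging updated boundary conditions iteratively as the abstract describes.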
McDonald, Kent L
2014-02-01
A variety of specimens including bacteria, ciliates, choanoflagellates (Salpingoeca rosetta), zebrafish (Danio rerio) embryos, nematode worms (Caenorhabditis elegans), and leaves of white clover (Trifolium repens) plants were high pressure frozen, freeze-substituted, infiltrated with either Epon, Epon-Araldite, or LR White resins, and polymerized. Total processing time from freezing to blocks ready to section was about 6 h. For epoxy embedding the specimens were freeze-substituted in 1% osmium tetroxide plus 0.1% uranyl acetate in acetone. For embedding in LR White the freeze-substitution medium was 0.2% uranyl acetate in acetone. Rapid infiltration was achieved by centrifugation through increasing concentrations of resin followed by polymerization at 100°C for 1.5-2 h. The preservation of ultrastructure was comparable to standard freeze substitution and resin embedding methods that take days to complete. On-section immunolabeling results for actin and tubulin molecules were positive with very low background labeling. The LR White methods offer a safer, quicker, and less-expensive alternative to Lowicryl embedding of specimens processed for on-section immunolabeling without traditional aldehyde fixatives.
GPU surface extraction using the closest point embedding
NASA Astrophysics Data System (ADS)
Kim, Mark; Hansen, Charles
2015-01-01
Isosurface extraction is a fundamental technique used for both surface reconstruction and mesh generation. One method to extract well-formed isosurfaces is a particle system; unfortunately, particle systems can be slow. In this paper, we introduce an enhanced parallel particle system that uses the closest point embedding as the surface representation to speed up the particle system for isosurface extraction. The closest point embedding is used in the Closest Point Method (CPM), a technique that uses a standard three-dimensional numerical PDE solver on two-dimensional embedded surfaces. To fully take advantage of the closest point embedding, it is coupled with a Barnes-Hut tree code on the GPU. This new technique produces well-formed, conformal unstructured triangular and tetrahedral meshes from labeled multi-material volume datasets. Further, this new parallel implementation of the particle system is faster than any known methods for conformal multi-material mesh extraction. The resulting speed-ups gained in this implementation can reduce the time from labeled data to mesh from hours to minutes and benefit users, such as bioengineers, who employ triangular and tetrahedral meshes.
Mesquita, R A; Anzai, E K; Oliveira, R N; Nunes, F D
2001-01-01
There are several protocols reported in the literature for the extraction of genomic DNA from formalin-fixed paraffin-embedded samples. Genomic DNA is utilized in molecular analyses, including PCR. This study compares three different methods for the extraction of genomic DNA from formalin-fixed paraffin-embedded (inflammatory fibrous hyperplasia) and non-formalin-fixed (normal oral mucosa) samples: phenol with enzymatic digestion, and silica with and without enzymatic digestion. The amplification of DNA by means of the PCR technique was carried out with primers for the exon 7 of human keratin type 14. Amplicons were analyzed by means of electrophoresis in an 8% polyacrylamide gel with 5% glycerol, followed by silver-staining visualization. The phenol/enzymatic digestion and the silica/enzymatic digestion methods provided amplicons from both tissue samples. The method described is a potential aid in the establishment of the histopathologic diagnosis and in retrospective studies with archival paraffin-embedded samples.
Design method of ARM based embedded iris recognition system
NASA Astrophysics Data System (ADS)
Wang, Yuanbo; He, Yuqing; Hou, Yushi; Liu, Ting
2008-03-01
With the advantages of non-invasiveness, uniqueness, stability and a low false recognition rate, iris recognition has been successfully applied in many fields. Up to now, most iris recognition systems have been based on PCs. However, a PC is not portable and consumes more power. In this paper, we propose an embedded iris recognition system based on ARM. Considering the requirements of iris image acquisition and the recognition algorithm, we analyzed the design of the iris image acquisition module, designed the ARM processing module and its peripherals, studied the Linux platform and the recognition algorithm running on it, and finally realized the design of an ARM-based iris imaging and recognition system. Experimental results show that the ARM platform we used is fast enough to run the iris recognition algorithm, and that the data stream flows smoothly between the camera and the ARM chip under the embedded Linux system. This is an effective way to use ARM to realize a portable embedded iris recognition system.
Parametric embedding for class visualization.
Iwata, Tomoharu; Saito, Kazumi; Ueda, Naonori; Stromsten, Sean; Griffiths, Thomas L; Tenenbaum, Joshua B
2007-09-01
We propose a new method, parametric embedding (PE), that embeds objects with the class structure into a low-dimensional visualization space. PE takes as input a set of class conditional probabilities for given data points and tries to preserve the structure in an embedding space by minimizing a sum of Kullback-Leibler divergences, under the assumption that samples are generated by a gaussian mixture with equal covariances in the embedding space. PE has many potential uses depending on the source of the input data, providing insight into the classifier's behavior in supervised, semisupervised, and unsupervised settings. The PE algorithm has a computational advantage over conventional embedding methods based on pairwise object relations since its complexity scales with the product of the number of objects and the number of classes. We demonstrate PE by visualizing supervised categorization of Web pages, semisupervised categorization of digits, and the relations of words and latent topics found by an unsupervised algorithm, latent Dirichlet allocation.
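Parametric embedding as described above minimizes a sum of KL divergences between the given class posteriors and the responsibilities of a unit-covariance Gaussian mixture in the embedding space. The sketch below implements that objective with plain gradient descent on both the object coordinates and the class centers; the uniform class priors, learning rate, iteration count, and toy Dirichlet-generated posteriors are assumptions for illustration, not the authors' optimization procedure.

```python
import numpy as np

def parametric_embedding(P, dim=2, lr=0.01, iters=2000, seed=0):
    """Gradient-descent sketch of a PE-style objective (uniform class priors).

    P[n, c] is the given class posterior for object n.  Objects X and class
    centres Phi live in a `dim`-dimensional space, q(c|x_n) is the unit-variance
    Gaussian-mixture responsibility, and we minimise sum_n KL(P[n] || q(.|x_n)).
    """
    n, c = P.shape
    rng = np.random.default_rng(seed)
    X = 0.1 * rng.standard_normal((n, dim))
    Phi = 0.1 * rng.standard_normal((c, dim))
    for _ in range(iters):
        diff = X[:, None, :] - Phi[None, :, :]                 # (n, c, dim)
        logq = -0.5 * (diff ** 2).sum(-1)                      # (n, c)
        logq -= logq.max(axis=1, keepdims=True)                # numerical stability
        Q = np.exp(logq)
        Q /= Q.sum(axis=1, keepdims=True)
        X -= lr * ((P - Q)[:, :, None] * diff).sum(axis=1)     # dE/dX
        Phi -= lr * ((Q - P)[:, :, None] * diff).sum(axis=0)   # dE/dPhi
    return X, Phi

# Toy posteriors: three groups of objects, each leaning toward one class.
rng = np.random.default_rng(4)
P = np.vstack([rng.dirichlet(alpha, size=30)
               for alpha in ([5, 1, 1], [1, 5, 1], [1, 1, 5])])
X, Phi = parametric_embedding(P)
print(X.shape, Phi.shape)   # (90, 2) (3, 2)
```

As the abstract notes, the cost of each iteration scales with the number of objects times the number of classes, rather than with the number of object pairs.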
Method for preparing hydrous zirconium oxide gels and spherules
Collins, Jack L.
2003-08-05
Methods for preparing hydrous zirconium oxide spherules, hydrous zirconium oxide gels such as gel slabs, films, capillary and electrophoresis gels, zirconium monohydrogen phosphate spherules, hydrous zirconium oxide spherules having suspendable particles homogeneously embedded within to form a composite sorbent, zirconium monohydrogen phosphate spherules having suspendable particles of at least one different sorbent homogeneously embedded within to form a composite sorbent having a desired crystallinity, zirconium oxide spherules having suspendable particles homogeneously embedded within to form a composite, hydrous zirconium oxide fiber materials, zirconium oxide fiber materials, hydrous zirconium oxide fiber materials having suspendable particles homogeneously embedded within to form a composite, zirconium oxide fiber materials having suspendable particles homogeneously embedded within to form a composite and spherules of barium zirconate. The hydrous zirconium oxide spherules and gel forms prepared by the gel-sphere, internal gelation process are useful as inorganic ion exchangers, catalysts, getters and ceramics.
Delay differential analysis of time series.
Lainscsek, Claudia; Sejnowski, Terrence J
2015-03-01
Nonlinear dynamical system analysis based on embedding theory has been used for modeling and prediction, but it also has applications to signal detection and classification of time series. An embedding creates a multidimensional geometrical object from a single time series. Traditionally either delay or derivative embeddings have been used. The delay embedding is composed of delayed versions of the signal, and the derivative embedding is composed of successive derivatives of the signal. The delay embedding has been extended to nonuniform embeddings to take multiple timescales into account. Both embeddings provide information on the underlying dynamical system without having direct access to all the system variables. Delay differential analysis is based on functional embeddings, a combination of the derivative embedding with nonuniform delay embeddings. Small delay differential equation (DDE) models that best represent relevant dynamic features of time series data are selected from a pool of candidate models for detection or classification. We show that the properties of DDEs support spectral analysis in the time domain where nonlinear correlation functions are used to detect frequencies, frequency and phase couplings, and bispectra. These can be efficiently computed with short time windows and are robust to noise. For frequency analysis, this framework is a multivariate extension of discrete Fourier transform (DFT), and for higher-order spectra, it is a linear and multivariate alternative to multidimensional fast Fourier transform of multidimensional correlations. This method can be applied to short or sparse time series and can be extended to cross-trial and cross-channel spectra if multiple short data segments of the same experiment are available. Together, this time-domain toolbox provides higher temporal resolution, increased frequency and phase coupling information, and it allows an easy and straightforward implementation of higher-order spectra across time compared with frequency-based methods such as the DFT and cross-spectral analysis.
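Delay differential analysis fits small delay differential models to a time series and uses the fitted coefficients and residual error as features for detection or classification. The sketch below fits one such toy model by least squares; the particular three-term model form, the delays, and the centered-difference derivative estimate are illustrative choices, not the models selected in the paper.

```python
import numpy as np

def dda_features(x, tau1=5, tau2=12, dt=1.0):
    """Least-squares fit of a small delay differential model (a sketch):
        dx/dt ~ a1*x(t-tau1) + a2*x(t-tau2) + a3*x(t-tau1)*x(t-tau2)
    Returns the coefficients and the residual error, usable as features."""
    dxdt = (x[2:] - x[:-2]) / (2 * dt)          # centred derivative estimate
    start = max(tau1, tau2)
    t = np.arange(start + 1, len(x) - 1)        # indices where all terms are defined
    X1, X2 = x[t - tau1], x[t - tau2]
    A = np.column_stack([X1, X2, X1 * X2])
    b = dxdt[t - 1]                             # dxdt[i-1] is the derivative at sample i
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    rms = np.sqrt(np.mean((A @ coef - b) ** 2))
    return coef, rms

rng = np.random.default_rng(5)
signal = np.sin(0.2 * np.arange(1000)) + 0.05 * rng.standard_normal(1000)
coef, err = dda_features(signal)
print("coefficients:", np.round(coef, 3), "rms error:", round(err, 4))
```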
Zhang, Chenbo; Li, Ajian; Li, Huaguang; Peng, Kangsheng; Wei, Qing; Lin, Moubin; Liu, Zhanju; Yin, Lu; Li, Jianwen
2015-01-01
Aim. To investigate the correlation between PPP1R12A gene copy number and clinical outcomes of oxaliplatin-based regimen in stage III colorectal cancer (CRC). Methods. A total of 139 paraffin-embedded tissue samples of stage III CRC patients who received oxaliplatin-based treatment after radical surgery were recruited. Genomic DNA was extracted and purified from paraffin-embedded sections. Quantitative PCR methods were used to detect the relative copy number (RCN) of PPP1R12A. Results. Statistical analysis demonstrated that low PPP1R12A RCN was associated with poor RFS (HR = 2.186, 95% CI: 1.293–3.696; P = 0.003) and OS (HR = 2.782, 95% CI: 1.531–5.052; P < 0.001). Additionally, when patients were stratified according to subgroups of stage III and tumor location, poor RFS and OS were also observed in the low PPP1R12A RCN group with significance (RFS: IIIB HR = 2.870, P < 0.001; colon HR = 1.910, P = 0.037; OS: IIIB HR = 3.527, P < 0.001; IIIC HR = 2.662, P = 0.049; rectum HR = 4.229, P = 0.002). Conclusion. Our findings suggest the copy number of PPP1R12A can independently predict recurrence and overall survival of stage III colorectal cancer patients receiving oxaliplatin-based chemotherapy. PMID:26113782
Closed-form solution of temperature and heat flux in embedded cooling channels
NASA Astrophysics Data System (ADS)
Griggs, Steven Craig
1997-11-01
An analytical method is discussed for predicting temperature in a layered composite material with embedded cooling channels. The cooling channels are embedded in the material to maintain its temperature at acceptable levels. Problems of this type are encountered in the aerospace industry and include high-temperature or high-heat-flux protection for advanced composite-material skins of high-speed air vehicles; thermal boundary-layer flow control on supersonic transports; or infrared signature suppression on military vehicles. A Green's function solution of the diffusion equation is used to simultaneously predict the global and localized effects of temperature in the material and in the embedded cooling channels. The integral method is used to solve the energy equation with fluid flow to find the solution of temperature and heat flux in the cooling fluid and material simultaneously. This method of calculation preserves the three-dimensional nature of this problem.
Public accountants' field dependence: Canadian evidence.
Hicks, Elizabeth; Bagg, Robert; Doyle, Wendy; Young, Jeffrey D
2007-12-01
The cognitive styles of 113 practicing, professional accountants from Nova Scotia, Canada were examined using the Group Embedded Figures Test. They completed a demographic survey for descriptive information as well as their rank in the firm and preferred area of professional practice. Analysis suggested professional accountants tend to be more analytical than intuitive in cognitive style and, consistent with recent findings in other fields, men and women in accounting do not appear to be different in cognitive style. No statistically significant differences were found on the embedded figures scores across ranks of trainee, manager, and partner or across select, preferred areas of professional practice.
Embedded high-contrast distributed grating structures
Zubrzycki, Walter J.; Vawter, Gregory A.; Allerman, Andrew A.
2002-01-01
A new class of fabrication methods for embedded distributed grating structures is claimed, together with optical devices which include such structures. These new methods are the only known approach to making defect-free high-dielectric contrast grating structures, which are smaller and more efficient than are conventional grating structures.
Embedded correlated wavefunction schemes: theory and applications.
Libisch, Florian; Huang, Chen; Carter, Emily A
2014-09-16
Conspectus Ab initio modeling of matter has become a pillar of chemical research: with ever-increasing computational power, simulations can be used to accurately predict, for example, chemical reaction rates, electronic and mechanical properties of materials, and dynamical properties of liquids. Many competing quantum mechanical methods have been developed over the years that vary in computational cost, accuracy, and scalability: density functional theory (DFT), the workhorse of solid-state electronic structure calculations, features a good compromise between accuracy and speed. However, approximate exchange-correlation functionals limit DFT's ability to treat certain phenomena or states of matter, such as charge-transfer processes or strongly correlated materials. Furthermore, conventional DFT is purely a ground-state theory: electronic excitations are beyond its scope. Excitations in molecules are routinely calculated using time-dependent DFT linear response; however applications to condensed matter are still limited. By contrast, many-electron wavefunction methods aim for a very accurate treatment of electronic exchange and correlation. Unfortunately, the associated computational cost renders treatment of more than a handful of heavy atoms challenging. On the other side of the accuracy spectrum, parametrized approaches like tight-binding can treat millions of atoms. In view of the different (dis-)advantages of each method, the simulation of complex systems seems to force a compromise: one is limited to the most accurate method that can still handle the problem size. For many interesting problems, however, compromise proves insufficient. A possible solution is to break up the system into manageable subsystems that may be treated by different computational methods. The interaction between subsystems may be handled by an embedding formalism. In this Account, we review embedded correlated wavefunction (CW) approaches and some applications. We first discuss our density functional embedding theory, which is formally exact. We show how to determine the embedding potential, which replaces the interaction between subsystems, at the DFT level. CW calculations are performed using a fixed embedding potential, that is, a non-self-consistent embedding scheme. We demonstrate this embedding theory for two challenging electron transfer phenomena: (1) initial oxidation of an aluminum surface and (2) hot-electron-mediated dissociation of hydrogen molecules on a gold surface. In both cases, the interaction between gas molecules and metal surfaces were treated by sophisticated CW techniques, with the remainder of the extended metal surface being treated by DFT. Our embedding approach overcomes the limitations of conventional Kohn-Sham DFT in describing charge transfer, multiconfigurational character, and excited states. From these embedding simulations, we gained important insights into fundamental processes that are crucial aspects of fuel cell catalysis (i.e., O2 reduction at metal surfaces) and plasmon-mediated photocatalysis by metal nanoparticles. Moreover, our findings agree very well with experimental observations, while offering new views into the chemistry. We finally discuss our recently formulated potential-functional embedding theory that provides a seamless, first-principles way to include back-action onto the environment from the embedded region.
Smith, Ashlee L.; Sun, Mai; Bhargava, Rohit; Stewart, Nicolas A.; Flint, Melanie S.; Bigbee, William L.; Krivak, Thomas C.; Strange, Mary A.; Cooper, Kristine L.; Zorn, Kristin K.
2013-01-01
Objective: The biology of high grade serous ovarian carcinoma (HGSOC) is poorly understood. Little has been reported on intratumoral homogeneity or heterogeneity of primary HGSOC tumors and their metastases. We evaluated the global protein expression profiles of paired primary and metastatic HGSOC from formalin-fixed, paraffin-embedded (FFPE) tissue samples. Methods: After IRB approval, six patients with advanced HGSOC were identified with tumor in both ovaries at initial surgery. Laser capture microdissection (LCM) was used to extract tumor for protein digestion. Peptides were extracted and analyzed by reversed-phase liquid chromatography coupled to a linear ion trap mass spectrometer. Tandem mass spectra were searched against the UniProt human protein database. Differences in protein abundance between samples were assessed and analyzed by Ingenuity Pathway Analysis software. Immunohistochemistry (IHC) for select proteins from the original and an additional validation set of five patients was performed. Results: Unsupervised clustering of the abundance profiles placed the paired specimens adjacent to each other. IHC H-score analysis of the validation set revealed a strong correlation between paired samples for all proteins. For the similarly expressed proteins, the estimated correlation coefficients in two of three experimental samples and all validation samples were statistically significant (p < 0.05). The estimated correlation coefficients in the experimental sample proteins classified as differentially expressed were not statistically significant. Conclusion: A global proteomic screen of primary HGSOC tumors and their metastatic lesions identifies tumoral homogeneity and heterogeneity and provides preliminary insight into these protein profiles and the cellular pathways they constitute. PMID:28250404
Method for preparing hydrous iron oxide gels and spherules
Collins, Jack L.; Lauf, Robert J.; Anderson, Kimberly K.
2003-07-29
The present invention is directed to methods for preparing hydrous iron oxide spherules, hydrous iron oxide gels such as gel slabs, films, capillary and electrophoresis gels, iron monohydrogen phosphate spherules, hydrous iron oxide spherules having suspendable particles homogeneously embedded within to form composite sorbents and catalysts, iron monohydrogen phosphate spherules having suspendable particles of at least one different sorbent homogeneously embedded within to form a composite sorbent, iron oxide spherules having suspendable particles homogeneously embedded within to form a composite of hydrous iron oxide fiber materials, iron oxide fiber materials, hydrous iron oxide fiber materials having suspendable particles homogeneously embedded within to form a composite, iron oxide fiber materials having suspendable particles homogeneously embedded within to form a composite, dielectric spherules of barium, strontium, and lead ferrites and mixtures thereof, and composite catalytic spherules of barium or strontium ferrite embedded with oxides of Mg, Zn, Pb, Ce and mixtures thereof. These variations of hydrous iron oxide spherules and gel forms prepared by the gel-sphere, internal gelation process offer more useful forms of inorganic ion exchangers, catalysts, getters, dielectrics, and ceramics.
Metric Optimization for Surface Analysis in the Laplace-Beltrami Embedding Space
Lai, Rongjie; Wang, Danny J.J.; Pelletier, Daniel; Mohr, David; Sicotte, Nancy; Toga, Arthur W.
2014-01-01
In this paper we present a novel approach for the intrinsic mapping of anatomical surfaces and its application in brain mapping research. Using the Laplace-Beltrami eigen-system, we represent each surface with an isometry invariant embedding in a high dimensional space. The key idea in our system is that we realize surface deformation in the embedding space via the iterative optimization of a conformal metric without explicitly perturbing the surface or its embedding. By minimizing a distance measure in the embedding space with metric optimization, our method generates a conformal map directly between surfaces with highly uniform metric distortion and the ability of aligning salient geometric features. Besides pairwise surface maps, we also extend the metric optimization approach for group-wise atlas construction and multi-atlas cortical label fusion. In experimental results, we demonstrate the robustness and generality of our method by applying it to map both cortical and hippocampal surfaces in population studies. For cortical labeling, our method achieves excellent performance in a cross-validation experiment with 40 manually labeled surfaces, and successfully models localized brain development in a pediatric study of 80 subjects. For hippocampal mapping, our method produces much more significant results than two popular tools on a multiple sclerosis study of 109 subjects. PMID:24686245
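The approach above represents each surface by its Laplace-Beltrami eigen-embedding before optimizing a conformal metric in that space. The sketch below shows only a simplified first step: a k-nearest-neighbor graph Laplacian acts as a discrete stand-in for the Laplace-Beltrami operator, and its low eigenvectors embed a point-sampled surface. The graph construction and parameters are assumptions; the paper's pipeline additionally optimizes the metric and aligns surfaces in the embedding space.

```python
import numpy as np

def laplacian_eigen_embedding(points, k_neighbors=8, n_components=3):
    """Embed a point-sampled surface with the low eigenvectors of a k-NN graph
    Laplacian (a discrete stand-in for the Laplace-Beltrami eigen-system)."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    W = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(d[i])[1:k_neighbors + 1]:   # skip the point itself
            W[i, j] = W[j, i] = 1.0                     # symmetric 0/1 adjacency
    L = np.diag(W.sum(axis=1)) - W                      # combinatorial Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)
    return eigvecs[:, 1:n_components + 1]               # drop the constant mode

rng = np.random.default_rng(6)
v = rng.standard_normal((400, 3))
sphere = v / np.linalg.norm(v, axis=1, keepdims=True)   # points sampled on a unit sphere
print(laplacian_eigen_embedding(sphere).shape)          # (400, 3)
```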
Toward automated denoising of single molecular Förster resonance energy transfer data
NASA Astrophysics Data System (ADS)
Lee, Hao-Chih; Lin, Bo-Lin; Chang, Wei-Hau; Tu, I.-Ping
2012-01-01
A wide-field two-channel fluorescence microscope is a powerful tool, as it allows for the study of conformation dynamics of hundreds to thousands of immobilized single molecules by Förster resonance energy transfer (FRET) signals. To date, the data reduction from a movie to a final set containing meaningful single-molecule FRET (smFRET) traces involves human inspection and intervention at several critical steps, greatly hampering efficiency at the post-imaging stage. To facilitate the data reduction from smFRET movies to smFRET traces and to address the noise-limited issues, we developed a statistical denoising system toward fully automated processing. This data reduction system embeds several novel approaches. First, for background subtraction, the high-order singular value decomposition (HOSVD) method is employed to extract spatial and temporal features. Second, to register and map the two color channels, the spots representing bleed-through from the donor channel to the acceptor channel are used. Finally, correlation analysis and a likelihood ratio statistic for change point detection (CPD) are developed to study the two channels simultaneously, resolve FRET states, and report the dwelling time of each state. The performance of our method has been checked using both simulation and real data.
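One ingredient of the pipeline above is change point detection with a likelihood ratio statistic to resolve FRET states and their dwell times. The sketch below shows a single-change-point version of such a statistic for a piecewise-constant trace with Gaussian noise; it is a textbook-style illustration, not the paper's full two-channel procedure, and the toy trace parameters are assumptions.

```python
import numpy as np

def best_change_point(x):
    """Single change-point search for a piecewise-constant signal with Gaussian
    noise: compare the one-mean fit against the best two-mean fit using a
    likelihood-ratio statistic."""
    n = len(x)
    total_sse = np.sum((x - x.mean()) ** 2)
    best_k, best_stat = None, 0.0
    for k in range(2, n - 2):
        left, right = x[:k], x[k:]
        sse = np.sum((left - left.mean()) ** 2) + np.sum((right - right.mean()) ** 2)
        stat = n * np.log(total_sse / sse)      # ~ twice the log likelihood ratio
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k, best_stat

# Toy FRET efficiency trace with one state transition at frame 300.
rng = np.random.default_rng(7)
trace = np.concatenate([0.35 + 0.05 * rng.standard_normal(300),
                        0.70 + 0.05 * rng.standard_normal(200)])
k, stat = best_change_point(trace)
print(f"change point near frame {k}, statistic {stat:.1f}")
```

Recursing on the segments to the left and right of each detected change point would extend this to multiple state transitions.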
Research and Design of Embedded Wireless Meal Ordering System Based on SQLite
NASA Astrophysics Data System (ADS)
Zhang, Jihong; Chen, Xiaoquan
The paper describes the features, internal architecture, and development methods of SQLite, and then gives the design and implementation of a meal ordering system. The system realizes information interaction among users and embedded devices, with SQLite as the database system. The embedded SQLite database manages the data, and wireless communication is achieved using Bluetooth. A system program based on Qt/Embedded and Linux drivers realizes the local management of the environmental data.
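Since the ordering system above stores its data in an embedded SQLite database, a minimal data-layer sketch is easy to give. The example below uses Python's built-in sqlite3 module for brevity (the described implementation runs on Qt/Embedded and Linux); the `orders` table and its columns are hypothetical, chosen only to illustrate how an order could be recorded and queried.

```python
import sqlite3

# Hypothetical schema: one `orders` table; names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE IF NOT EXISTS orders (
                    id INTEGER PRIMARY KEY AUTOINCREMENT,
                    table_no INTEGER NOT NULL,
                    dish TEXT NOT NULL,
                    quantity INTEGER NOT NULL DEFAULT 1,
                    status TEXT NOT NULL DEFAULT 'pending')""")
with conn:  # implicit transaction, committed on success
    conn.execute("INSERT INTO orders (table_no, dish, quantity) VALUES (?, ?, ?)",
                 (7, "fried rice", 2))
for row in conn.execute("SELECT id, table_no, dish, quantity, status FROM orders "
                        "WHERE status = 'pending' ORDER BY id"):
    print(row)
conn.close()
```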
LC Circuits for Diagnosing Embedded Piezoelectric Devices
NASA Technical Reports Server (NTRS)
Chattin, Richard L.; Fox, Robert Lee; Moses, Robert W.; Shams, Qamar A.
2005-01-01
A recently invented method of nonintrusively detecting faults in piezoelectric devices involves measurement of the resonance frequencies of inductor capacitor (LC) resonant circuits. The method is intended especially to enable diagnosis of piezoelectric sensors, actuators, and sensor/actuators that are embedded in structures and/or are components of multilayer composite material structures.
High-speed event detector for embedded nanopore bio-systems.
Huang, Yiyun; Magierowski, Sebastian; Ghafar-Zadeh, Ebrahim; Wang, Chengjie
2015-08-01
Biological measurements of microscopic phenomena often deal with discrete-event signals. The ability to automatically carry out such measurements at high-speed in a miniature embedded system is desirable but compromised by high-frequency noise along with practical constraints on filter quality and sampler resolution. This paper presents a real-time event-detection method in the context of nanopore sensing that helps to mitigate these drawbacks and allows accurate signal processing in an embedded system. Simulations show at least a 10× improvement over existing on-line detection methods.
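The detector above targets discrete blockade events in a noisy nanopore current trace. The sketch below is a deliberately simple offline threshold detector that flags excursions more than a few robust standard deviations below the open-pore baseline; the thresholds, minimum event length, and synthetic trace are assumptions, and the paper's real-time embedded design is considerably more involved.

```python
import numpy as np

def detect_events(current, k_sigma=5.0, min_len=3):
    """Flag blockade events as runs of samples more than k_sigma robust standard
    deviations below the open-pore baseline (a simple offline threshold detector,
    not the paper's embedded design)."""
    baseline = np.median(current)
    sigma = 1.4826 * np.median(np.abs(current - baseline))   # robust (MAD) scale
    below = current < baseline - k_sigma * sigma
    events, start = [], None
    for i, flag in enumerate(below):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_len:
                events.append((start, i, float(current[start:i].mean())))
            start = None
    if start is not None and len(current) - start >= min_len:
        events.append((start, len(current), float(current[start:].mean())))
    return events   # (start index, end index, mean blockade current)

rng = np.random.default_rng(8)
trace = 100 + rng.standard_normal(5000)     # synthetic open-pore current (pA)
trace[1200:1260] -= 40                      # two synthetic translocation events
trace[3000:3015] -= 35
print(detect_events(trace))
```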
Statistical and sampling issues when using multiple particle tracking
NASA Astrophysics Data System (ADS)
Savin, Thierry; Doyle, Patrick S.
2007-08-01
Video microscopy can be used to simultaneously track several microparticles embedded in a complex material. The trajectories are used to extract a sample of displacements at random locations in the material. From this sample, averaged quantities characterizing the dynamics of the probes are calculated to evaluate structural and/or mechanical properties of the assessed material. However, the sampling of measured displacements in heterogeneous systems is singular because the volume of observation with video microscopy is finite. By carefully characterizing the sampling design in the experimental output of the multiple particle tracking technique, we derive estimators for the mean and variance of the probes’ dynamics that are independent of the peculiar statistical characteristics. We expose stringent tests of these estimators using simulated and experimental complex systems with a known heterogeneous structure. Up to a certain fundamental limitation, which we characterize through a material degree of sampling by the embedded probe tracking, these estimators can be applied to quantify the heterogeneity of a material, providing an original and intelligible kind of information on complex fluid properties. More generally, we show that the precise assessment of the statistics in the multiple particle tracking output sample of observations is essential in order to provide accurate unbiased measurements.
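The statistical issues discussed above concern how displacement samples from finitely many tracked probes should be pooled into estimates of mean dynamics and heterogeneity. The sketch below computes the naive per-trajectory mean-squared displacement at one lag and the between-probe variance, without the finite-observation-volume corrections the paper derives; trajectory lengths, lag, and diffusivities are toy assumptions.

```python
import numpy as np

def ensemble_msd(trajectories, lag):
    """Mean-squared displacement at a given lag, pooled over trajectories.
    Returns the ensemble mean and the variance between trajectory means (the
    naive estimators, without finite-field-of-view corrections)."""
    per_track = []
    for xy in trajectories:                      # xy: (n_frames, 2) positions
        if len(xy) > lag:
            disp = xy[lag:] - xy[:-lag]
            per_track.append(np.mean(np.sum(disp ** 2, axis=1)))
    per_track = np.array(per_track)
    return per_track.mean(), per_track.var(ddof=1)

# Toy heterogeneous sample: half the probes diffuse 4x faster than the rest.
rng = np.random.default_rng(9)
tracks = [np.cumsum(np.sqrt(D) * rng.standard_normal((200, 2)), axis=0)
          for D in [0.1] * 50 + [0.4] * 50]
mean_msd, var_msd = ensemble_msd(tracks, lag=10)
print(f"mean MSD = {mean_msd:.2f}, between-probe variance = {var_msd:.2f}")
```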
Fibrinogen Demonstration in Oral Lichen Planus: An Immunofluorescence Study on Archival Tissues.
Shirol, Pallavi D; Naik, Veena; Kale, Alka
2015-10-01
Lichen planus is a premalignant condition with minimal diagnostic aids. This study is an attempt to use paraffin-embedded sections of lichen planus with an immunofluorescent stain and to evaluate the immunofluorescent sections to establish the pattern of fibrinogen deposition. Thirty-five paraffin-embedded sections from old and new cases of oral lichen planus (study group) and five of normal oral mucosa (control group) were chosen. Two sections of each case were taken: one was stained with hematoxylin and eosin (H & E) and the other with fluorescein isothiocyanate (FITC)-conjugated polyclonal rabbit antibody against fibrinogen. Fluorescent findings were examined with a fluorescent microscope. A highly statistically significant correlation was found with respect to fluorescence positivity, intensity of fluorescence and distribution of fluorescence, each with p < 0.0001, and fluorescence at blood vessel walls (p = 0.0003). This study suggested that paraffin-embedded sections can be successfully used for direct immunofluorescence staining in a routine setup where only formalin-fixed tissues are received. Paraffin-embedded sections can be successfully used in direct immunofluorescence staining when only formalin-fixed tissues are received.
Cell Kinetic and Histomorphometric Analysis of Microgravitational Osteopenia: PARE.03B
NASA Technical Reports Server (NTRS)
Roberts, W. Eugene; Garetto, Lawrence P.
1998-01-01
Previous methods of identifying cells undergoing DNA synthesis (S-phase) utilized H-3 thymidine (3HT) autoradiography. 5-Bromo-2'-deoxyuridine (BrdU) immunohistochemistry is a nonradioactive alternative method. This experiment compared the two methods using the nuclear volume model for osteoblast histogenesis in two different embedding media. Twenty Sprague-Dawley rats were used, with half receiving 3HT (1 microCi/g) and the other half BrdU (50 microgram/g). Condyles were embedded (one side in paraffin, the other in plastic) and S-phase nuclei were identified using either autoradiography or immunohistochemistry. The fractional distribution of preosteoblast cell types and the percentage of labeled cells (within each cell fraction and label index) were calculated and expressed as mean ± standard error. Chi-square analysis showed only a minor difference in the fractional distribution of cell types. However, there were significant differences (p < 0.05) by ANOVA in the nuclear labeling of specific cell types. With the exception of the less-differentiated A+A' cells, more BrdU label was consistently detected in paraffin- than in plastic-embedded sections. In general, more nuclei were labeled with 3H-thymidine than with BrdU in both types of embedding media. Labeling index data (labeled cells/total cells sampled x 100) indicated that BrdU in paraffin, but not plastic, gave the same results as 3HT in either embedding method. Thus, we conclude that the two labeling methods do not yield the same results for the nuclear volume model and that the embedding medium is an important factor when using BrdU. As a result of this work, 3HT was chosen for use in the PARE.03 flight experiments.
A telepresence robot system realized by embedded object concept
NASA Astrophysics Data System (ADS)
Vallius, Tero; Röning, Juha
2006-10-01
This paper presents the Embedded Object Concept (EOC) and a telepresence robot system that serves as a test case for the EOC. The EOC applies common object-oriented methods from software to combined Lego-like software-hardware entities. These entities represent objects in object-oriented design methods, and they are the building blocks of embedded systems. The goal of the EOC is to make designing embedded systems faster and easier. The concept enables people without comprehensive knowledge of electronics design to create new embedded systems, and for experts it shortens the design time of new embedded systems. We present the current status of a telepresence robot created with second-generation Atomi-objects, which is the name of our implementation of the embedded objects. The telepresence robot is a relatively complex test case for the EOC. The robot has been constructed using incremental device development, which is made possible by the architecture of the EOC. The robot provides video and audio exchange capability and a control system for driving on two wheels. The robot is built in two versions, the first consisting of a PC device and Atomi-objects, and the second consisting of only Atomi-objects. The robot is currently incomplete, but most of it has been successfully tested.
Extracting similar terms from multiple EMR-based semantic embeddings to support chart reviews.
Ye, Cheng; Fabbri, Daniel
2018-05-21
Word embeddings project semantically similar terms into nearby points in a vector space. When trained on clinical text, these embeddings can be leveraged to improve keyword search and text highlighting. In this paper, we present methods to refine the selection process of similar terms from multiple EMR-based word embeddings, and evaluate their performance quantitatively and qualitatively across multiple chart review tasks. Word embeddings were trained on each clinical note type in an EMR. These embeddings were then combined, weighted, and truncated to select a refined set of similar terms to be used in keyword search and text highlighting. To evaluate their quality, we measured the similar terms' information retrieval (IR) performance using precision-at-K (P@5, P@10). Additionally, a user study evaluated users' search term preferences, while a timing study measured the time to answer a question from a clinical chart. The refined terms outperformed the baseline method's information retrieval performance (e.g., increasing the average P@5 from 0.48 to 0.60). Additionally, the refined terms were preferred by most users, and reduced the average time to answer a question. Clinical information can be more quickly retrieved and synthesized when using semantically similar terms from multiple embeddings. Copyright © 2018. Published by Elsevier Inc.
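The workflow above scores candidate terms by combining similarity across several note-type-specific embeddings and evaluates the result with precision-at-K. The sketch below shows a generic weighted cosine-similarity combination and a P@K helper on tiny made-up embeddings; the vocabulary, vectors, and weights are hypothetical, and the real system trains its embeddings on each clinical note type.

```python
import numpy as np

def top_similar(query, embeddings, weights, k=5):
    """Weighted combination of cosine similarities to `query` across several
    embeddings (e.g., one per note type); returns the top-k candidate terms."""
    scores = {}
    for emb, w in zip(embeddings, weights):
        if query not in emb:
            continue
        q = emb[query]
        for term, vec in emb.items():
            if term == query:
                continue
            cos = float(q @ vec / (np.linalg.norm(q) * np.linalg.norm(vec)))
            scores[term] = scores.get(term, 0.0) + w * cos
    return [t for t, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:k]]

def precision_at_k(retrieved, relevant, k):
    return len(set(retrieved[:k]) & set(relevant)) / k

# Hypothetical 3-d vectors standing in for two note-type embeddings.
emb_notes = {"mi": np.array([1.0, 0.2, 0.0]), "infarction": np.array([0.9, 0.3, 0.1]),
             "stemi": np.array([0.8, 0.1, 0.0]), "troponin": np.array([0.5, 0.8, 0.0]),
             "ankle": np.array([0.0, 0.1, 1.0]), "rash": np.array([0.1, 0.0, 0.9])}
emb_labs = {t: v + 0.05 for t, v in emb_notes.items()}
terms = top_similar("mi", [emb_notes, emb_labs], weights=[0.7, 0.3], k=3)
print(terms, precision_at_k(terms, relevant={"infarction", "stemi"}, k=3))
```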
Embedding objects during 3D printing to add new functionalities
2016-01-01
A novel method for integrating and embedding objects to add new functionalities during 3D printing based on fused deposition modeling (FDM) (also known as fused filament fabrication or molten polymer deposition) is presented. Unlike typical 3D printing, FDM-based 3D printing could allow objects to be integrated and embedded during 3D printing, and the FDM-based 3D printed devices do not typically require any post-processing and finishing. Thus, various fluidic devices with integrated glass cover slips or polystyrene films, with and without an embedded porous membrane, and optical devices with embedded Corning® Fibrance™ Light-Diffusing Fiber were 3D printed to demonstrate the versatility of the FDM-based 3D printing and embedding method. Fluid perfusion flow experiments with a blue colored food dye solution were used to visually confirm fluid flow and/or fluid perfusion through the embedded porous membrane in the 3D printed fluidic devices. Like typical 3D printed devices, FDM-based 3D printed devices are translucent at best unless post-polishing is performed, yet optical transparency is highly desirable in any fluidic device; integrated glass cover slips or polystyrene films would provide a perfectly transparent optical window for observation and visualization. In addition, they also provide a compatible flat smooth surface for biological or biomolecular applications. The 3D printed fluidic devices with an embedded porous membrane are applicable to biological or chemical applications such as continuous perfusion cell culture or biocatalytic synthesis, without the need for any post-device assembly and finishing. The 3D printed devices with embedded Corning® Fibrance™ Light-Diffusing Fiber would have applications in display, illumination, or optics. Furthermore, the FDM-based 3D printing and embedding method could also be utilized to print casting molds with an integrated glass bottom for polydimethylsiloxane (PDMS) device replication. These 3D printed glass bottom casting molds would result in PDMS replicas with a flat smooth bottom surface for better bonding and adhesion. PMID:27478528
Sandford, M.T. II; Handel, T.G.; Bradley, J.N.
1998-03-10
A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique is disclosed. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use lossless compression, also known as entropy coding, to reduce the intermediate representation as indices to the final size. The efficiency of the entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method. 11 figs.
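The core idea, manipulating indices that are adjacent in value to carry auxiliary bits, can be illustrated with a minimal parity-based sketch. This is not the patented algorithm; it only shows how a bit can be hidden in, and recovered from, a quantized index by changing it by at most one unit.

```python
# Illustrative sketch (not the patented method): hide one auxiliary bit per
# quantized index by nudging the index by at most one unit so that its parity
# matches the bit, then recover the bits from the parities.
def embed_bits(indices, bits):
    out = []
    for idx, bit in zip(indices, bits):
        if idx % 2 != bit:                 # adjust adjacent-valued index by one unit
            idx += 1 if idx >= 0 else -1
        out.append(idx)
    return out

def extract_bits(indices, n):
    return [idx % 2 for idx in indices[:n]]

quantized = [12, -7, 3, 0, 25, -14]        # e.g. quantized transform coefficients
payload = [1, 0, 0, 1, 1, 0]
stego = embed_bits(quantized, payload)
assert extract_bits(stego, len(payload)) == payload
```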
Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.
1998-01-01
A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use lossless compression, also known as entropy coding, to reduce the intermediate representation as indices to the final size. The efficiency of the entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method.
Light Weight MP3 Watermarking Method for Mobile Terminals
NASA Astrophysics Data System (ADS)
Takagi, Koichi; Sakazawa, Shigeyuki; Takishima, Yasuhiro
This paper proposes a novel MP3 watermarking method which is applicable to a mobile terminal with limited computational resources. Considering that in most cases the embedded information is copyright information or metadata, which should be extracted before playing back audio contents, the watermark detection process should be executed at high speed. However, when conventional methods are used with a mobile terminal, it takes a considerable amount of time to detect a digital watermark. This paper focuses on scalefactor manipulation to enable high speed watermark embedding/detection for MP3 audio and also proposes the manipulation method which minimizes audio quality degradation adaptively. Evaluation tests showed that the proposed method is capable of embedding 3 bits/frame information without degrading audio quality and detecting it at very high speed. Finally, this paper describes application examples for authentication with a digital signature.
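A hedged sketch of scalefactor-style embedding follows. It does not parse real MP3 side information or perform the adaptive, quality-preserving manipulation the paper proposes; it only illustrates hiding and detecting 3 bits per frame in the low-order bits of synthetic per-frame scalefactor lists.

```python
# Hedged sketch: hide 3 bits per frame in the parities of the first three
# scalefactors of each (synthetic) frame, and read them back at detection time.
BITS_PER_FRAME = 3

def embed(frames, bits):
    bit_iter = iter(bits)
    for sf in frames:                      # sf: list of scalefactor integers
        for i in range(BITS_PER_FRAME):
            b = next(bit_iter, None)
            if b is None:
                return frames
            sf[i] = (sf[i] & ~1) | b       # set the LSB to the payload bit
    return frames

def detect(frames, n_bits):
    bits = []
    for sf in frames:
        for i in range(BITS_PER_FRAME):
            if len(bits) == n_bits:
                return bits
            bits.append(sf[i] & 1)
    return bits

frames = [[10, 11, 12, 13], [7, 8, 9, 10]]   # toy scalefactors for two frames
payload = [1, 0, 1, 0, 0, 1]
assert detect(embed(frames, payload), len(payload)) == payload
```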
NASA Astrophysics Data System (ADS)
Sablik, Thomas; Velten, Jörg; Kummert, Anton
2015-03-01
A novel system for automatic privacy protection in digital media based on spectral domain watermarking and JPEG compression is described in the present paper. In a first step, private areas are detected; a detection method for this purpose is presented. The implemented method uses Haar cascades to detect faces. Integral images are used to speed up calculations and the detection. Multiple detections of one face are combined. Succeeding steps comprise embedding the data into the image as part of JPEG compression using spectral domain methods and protecting the area of privacy. The embedding process is integrated into and adapted to JPEG compression. A Spread Spectrum Watermarking method is used to embed the size and position of the private areas into the cover image. Different embedding methods are compared with regard to their robustness. Moreover, the performance of the method on tampered images is presented.
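The face-detection step can be reproduced with OpenCV's stock Haar cascade, as in the sketch below. The returned rectangles correspond to the private areas whose size and position would then be embedded as a watermark during JPEG compression; that embedding step is not shown, and the synthetic test frame is only there so the sketch runs.

```python
# Haar-cascade face detection with OpenCV; the rectangles are the private
# areas to be protected and embedded. The random frame is a placeholder input.
import cv2
import numpy as np

def detect_private_areas(img_bgr):
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    # detectMultiScale computes integral images internally and merges multiple
    # overlapping detections of the same face via the minNeighbors parameter.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [(int(x), int(y), int(w), int(h)) for (x, y, w, h) in faces]

if __name__ == "__main__":
    # A real photo would be loaded with cv2.imread("photo.jpg") instead.
    frame = np.random.default_rng(0).integers(0, 256, (240, 320, 3), dtype=np.uint8)
    print(detect_private_areas(frame))     # likely [] for random noise
```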
Tensor Train Neighborhood Preserving Embedding
NASA Astrophysics Data System (ADS)
Wang, Wenqi; Aggarwal, Vaneet; Aeron, Shuchin
2018-05-01
In this paper, we propose a Tensor Train Neighborhood Preserving Embedding (TTNPE) to embed multi-dimensional tensor data into a low-dimensional tensor subspace. Novel approaches to solve the optimization problem in TTNPE are proposed. For this embedding, we evaluate the novel trade-off gained among classification, computation, and dimensionality reduction (storage) for supervised learning. It is shown that, compared to state-of-the-art tensor embedding methods, TTNPE achieves a superior trade-off among classification, computation, and dimensionality reduction on the MNIST handwritten digits and Weizmann face datasets.
Santos, Fábio Pinheiro; Santos, Estevão Antero; dos Santos, Ramon Silva; dos Anjos, Marcelino José; de Miranda, Mauro Sayão
2017-01-01
Purpose The purpose of this study was to evaluate changes in calcium and phosphorus content in dental enamel when subjected to “in-office” whitening for an extended time by using a 35% hydrogen peroxide solution, with and without calcium. Materials and Methods 10 human teeth, from which the roots had been removed, were embedded in epoxy resin, and their surfaces were smoothed. The specimens were divided into two groups; in group 1, a whitening solution without calcium was used, while in group 2, the solution included calcium. Each specimen was evaluated at 6 different points before the bleaching treatment, and these points were reassessed after each session. A total of five sessions were carried out. Concentrations of calcium and phosphorus were measured by using the technique of X-ray fluorescence. Results After performing a statistical analysis, it was found that there was no statistically significant loss of calcium and phosphorus during the whitening treatment, and the groups showed no statistical differences. Conclusion Excessive use of hydrogen peroxide, with or without calcium, causes no loss of calcium and phosphorus. PMID:28932242
Connectopic mapping with resting-state fMRI.
Haak, Koen V; Marquand, Andre F; Beckmann, Christian F
2018-04-15
Brain regions are often topographically connected: nearby locations within one brain area connect with nearby locations in another area. Mapping these connection topographies, or 'connectopies' in short, is crucial for understanding how information is processed in the brain. Here, we propose principled, fully data-driven methods for mapping connectopies using functional magnetic resonance imaging (fMRI) data acquired at rest by combining spectral embedding of voxel-wise connectivity 'fingerprints' with a novel approach to spatial statistical inference. We apply the approach in human primary motor and visual cortex, and show that it can trace biologically plausible, overlapping connectopies in individual subjects that follow these regions' somatotopic and retinotopic maps. As a generic mechanism to perform inference over connectopies, the new spatial statistics approach enables rigorous statistical testing of hypotheses regarding the fine-grained spatial profile of functional connectivity and whether that profile is different between subjects or between experimental conditions. The combined framework offers a fundamental alternative to existing approaches to investigating functional connectivity in the brain, from voxel- or seed-pair wise characterizations of functional association, towards a full, multivariate characterization of spatial topography. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
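A much-simplified sketch of the embedding step is shown below: connectivity fingerprints are computed from synthetic time series, their similarity is turned into an affinity matrix, and the dominant spectral-embedding component serves as a connectopy-like gradient. The data, affinity choice, and single-component output are illustrative assumptions, not the authors' full pipeline or their spatial inference method.

```python
# Simplified connectopic-mapping idea: fingerprints -> affinity -> spectral
# embedding. All time series are synthetic placeholders.
import numpy as np
from sklearn.manifold import SpectralEmbedding

rng = np.random.default_rng(1)
n_roi_voxels, n_target_voxels, n_timepoints = 50, 200, 120
roi_ts = rng.normal(size=(n_roi_voxels, n_timepoints))
target_ts = rng.normal(size=(n_target_voxels, n_timepoints))

# Fingerprints: correlation of each ROI voxel with every target voxel.
fingerprints = np.corrcoef(np.vstack([roi_ts, target_ts]))[:n_roi_voxels, n_roi_voxels:]

# Affinity between ROI voxels (correlation shifted to be non-negative).
affinity = (np.corrcoef(fingerprints) + 1.0) / 2.0

embedding = SpectralEmbedding(n_components=1, affinity="precomputed")
gradient = embedding.fit_transform(affinity).ravel()   # one value per ROI voxel
print(gradient.shape)
```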
Systematic Model-in-the-Loop Test of Embedded Control Systems
NASA Astrophysics Data System (ADS)
Krupp, Alexander; Müller, Wolfgang
Current model-based development processes offer new opportunities for verification automation, e.g., in automotive development. The duty of functional verification is the detection of design flaws. Current functional verification approaches exhibit a major gap between requirement definition and formal property definition, especially when analog signals are involved. Besides the lack of methodical support for natural language formalization, there does not exist a standardized and accepted means for formal property definition as a target for verification planning. This article addresses several shortcomings of embedded system verification. An Enhanced Classification Tree Method is developed based on the established Classification Tree Method for Embedded Systems (CTM/ES), which applies a hardware verification language to define a verification environment.
Teixeira, Cleonice Silveira; Alfredo, Edson; Thomé, Luis Henrique de Camargo; Gariba-Silva, Ricardo; Silva-Sousa, Yara T. Correa; Sousa, Manoel Damião
2009-01-01
The use of an adequate method for evaluation of the adhesion of root canal filling materials provides more reliable results to allow comparison of the materials and substantiate their clinical choice. The aims of this study were to compare the shear bond strength (SBS) test and push-out test for evaluation of the adhesion of an epoxy-based endodontic sealer (AH Plus) to dentin and gutta-percha, and to assess the failure modes on the debonded surfaces by means of scanning electron microscopy (SEM). Three groups were established (n=7): in group 1, root cylinders obtained from human canines were embedded in acrylic resin and had their canals prepared and filled with sealer; in group 2, longitudinal sections of dentin cylinders were embedded in resin with the canal surface smoothed and turned upwards; in group 3, gutta-percha cylinders were embedded in resin. Polyethylene tubes filled with sealer were positioned on the polished surface of the specimens (groups 2 and 3). The push-out test (group 1) and the SBS test (groups 2 and 3) were performed in an Instron universal testing machine running at a crosshead speed of 1 mm/min. Means (±SD) in MPa were: G1 (8.8±1.13), G2 (5.9±1.05) and G3 (3.8±0.55). Statistical analysis by ANOVA and Student's t-test (α=0.05) revealed statistically significant differences (p<0.01) among the groups. SEM analysis showed a predominance of adhesive and mixed failures of the AH Plus sealer. The tested surface significantly affected the results, with the sealer reaching higher bond strength to dentin than to gutta-percha with the SBS test. The comparison of the employed methodologies showed that the SBS test produced significantly lower bond strength values than the push-out test, was capable of determining the adhesion of AH Plus sealer to dentin and gutta-percha, and required specimens that could be easily prepared for SEM, presenting as a viable alternative for further experiments. PMID:19274399
Streamflow Prediction based on Chaos Theory
NASA Astrophysics Data System (ADS)
Li, X.; Wang, X.; Babovic, V. M.
2015-12-01
Chaos theory is a popular method in hydrologic time series prediction. The local model (LM) based on this theory utilizes time-delay embedding to reconstruct the phase-space diagram. The efficacy of this method depends on the embedding parameters, i.e., the embedding dimension, time lag, and number of nearest neighbors. The optimal estimation of these parameters is thus critical to the application of the local model. However, these embedding parameters are conventionally estimated using Average Mutual Information (AMI) and False Nearest Neighbors (FNN) separately. This may lead to locally optimal parameter choices and thus limits prediction accuracy. To address this limitation, this paper applies a local model combined with simulated annealing (SA) to find a global optimum of the embedding parameters. It is also compared with another global optimization approach, the Genetic Algorithm (GA). These proposed hybrid methods are applied to daily and monthly streamflow time series for examination. The results show that global optimization helps the local model provide more accurate predictions than local optimization. The LM combined with SA also shows advantages in terms of computational efficiency. The proposed scheme can also be applied to other fields such as prediction of hydro-climatic time series, error correction, etc.
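The time-delay embedding at the heart of the local model can be written in a few lines; the sketch below builds delay vectors from a scalar series for a given embedding dimension m and lag tau. The parameter values and the sine series are placeholders; in the paper they would come from AMI/FNN or from the SA/GA global search.

```python
# Minimal time-delay (phase-space) embedding: build the delay vectors
# [x(t), x(t-tau), ..., x(t-(m-1)*tau)] from a scalar series x.
import numpy as np

def delay_embed(x, m, tau):
    x = np.asarray(x)
    n = len(x) - (m - 1) * tau
    if n <= 0:
        raise ValueError("series too short for this (m, tau)")
    return np.column_stack([x[i * tau: i * tau + n] for i in range(m)])

series = np.sin(0.1 * np.arange(500))      # stand-in for a streamflow record
vectors = delay_embed(series, m=3, tau=5)
print(vectors.shape)                        # (490, 3)
```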
Biological Embedding: Evaluation and Analysis of an Emerging Concept for Nursing Scholarship
Nist, Marliese Dion
2016-01-01
Aim The purpose of this paper is to report the analysis of the concept of biological embedding. Background Research that incorporates a life course perspective is becoming increasingly prominent in the health sciences. Biological embedding is a central concept in life course theory and may be important for nursing theories to enhance our understanding of health states in individuals and populations. Before the concept of biological embedding can be used in nursing theory and research, an analysis of the concept is required to advance it toward full maturity. Design Concept analysis. Data Sources PubMed, CINAHL and PsycINFO were searched for publications using the term ‘biological embedding’ or ‘biological programming’ and published through 2015. Methods An evaluation of the concept was first conducted to determine the concept’s level of maturity and was followed by a concept comparison, using the methods for concept evaluation and comparison described by Morse. Results A consistent definition of biological embedding – the process by which early life experience alters biological processes to affect adult health outcomes – was found throughout the literature. The concept has been used in several theories that describe the mechanisms through which biological embedding might occur and highlight its role in the development of health trajectories. Biological embedding is a partially mature concept, requiring concept comparison with an overlapping concept – biological programming – to more clearly establish the boundaries of biological embedding. Conclusions Biological embedding has significant potential for theory development and application in multiple academic disciplines, including nursing. PMID:27682606
NASA Astrophysics Data System (ADS)
Xu, Xianjin; Yan, Chengfei; Zou, Xiaoqin
2017-08-01
The growing number of protein-ligand complex structures, particularly structures of proteins co-bound with different ligands, in the Protein Data Bank helps us tackle two major challenges in molecular docking studies: protein flexibility and the scoring function. Here, we introduced a systematic strategy that uses the information embedded in known protein-ligand complex structures to improve both binding mode and binding affinity predictions. Specifically, a ligand similarity calculation method was employed to search for a receptor structure whose bound ligand shares high similarity with the query ligand, for use in docking. The strategy was applied to the two datasets (HSP90 and MAP4K4) in the recent D3R Grand Challenge 2015. In addition, for the HSP90 dataset, a system-specific scoring function (ITScore2_hsp90) was generated by recalibrating our statistical potential-based scoring function (ITScore2) using the known protein-ligand complex structures and the statistical mechanics-based iterative method. For the HSP90 dataset, better performances were achieved for both binding mode and binding affinity predictions compared with the original ITScore2 and with ensemble docking. For the MAP4K4 dataset, although there were only eight known protein-ligand complex structures, our docking strategy achieved a performance comparable to ensemble docking. Our method for receptor conformational selection and iterative method for the development of system-specific statistical potential-based scoring functions can be easily applied to other protein targets that have a number of protein-ligand complex structures available to improve predictions on binding.
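A hedged sketch of the receptor-selection idea follows, using RDKit Morgan fingerprints and Tanimoto similarity as stand-ins for the paper's unspecified ligand similarity measure. The SMILES strings and structure identifiers are invented for illustration.

```python
# Pick the receptor structure whose co-bound ligand is most similar to the
# query ligand. Fingerprint type, molecules, and structure ids are assumptions.
from rdkit import Chem
from rdkit.Chem import AllChem, DataStructs

def fingerprint(smiles):
    return AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smiles), 2, nBits=2048)

def pick_receptor(query_smiles, cobound):
    """cobound: dict mapping receptor-structure id -> SMILES of its bound ligand."""
    qfp = fingerprint(query_smiles)
    scores = {pdb_id: DataStructs.TanimotoSimilarity(qfp, fingerprint(smi))
              for pdb_id, smi in cobound.items()}
    return max(scores, key=scores.get), scores

best, scores = pick_receptor("c1ccccc1O",                  # query ligand (phenol)
                             {"structA": "c1ccccc1N",       # toy co-bound ligands
                              "structB": "CCO"})
print(best, scores)
```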
Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing.
Li, Shuang; Liu, Bing; Zhang, Chen
2016-01-01
Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and the manifold assumption. But such an assumption might be invalid for some high-dimensional or sparse data due to the curse of dimensionality, which has a negative influence on the performance of multiple kernel learning. In addition, some models might be ill-posed if the rank of the matrices in their objective functions is not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternative optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method for supervised, unsupervised, and semisupervised scenarios.
Glycogen in the Nervous System. I; Methods for Light and Electron Microscopy
NASA Technical Reports Server (NTRS)
Estable, Rosita F. De; Estable-Puig, J. F.; Miquel, J.
1964-01-01
The relative value of different methods for combined light and electron microscopical studies of glycogen in the nervous tissue was investigated. Picroalcoholic fixatives preserve glycogen in a considerable amount but give an inadequate morphological image of glycogen distribution and are unsuitable for ultrastructural studies. Fixation by perfusion with Dalton's chrome-osmic fluid seems adequate for ultrastructural cytochemistry of glycogen. Furthermore, it permits routine paraffin embedding of brain slices adjacent to those used for electron microscopy. Dimedone blocking is a necessary step for a selective staining of glycogen with PAS after osmic fixation. Enzymatic removal of glycogen in osmic-fixed nervous tissue can be done in paraffin-embedded tissue. It can also be performed in glycol methacrylate-embedded tissue without removal of the embedding medium. Paraphenylenediamine stains glycogen following periodic acid oxidation.
Stavropoulos, S William; Ge, Benjamin H; Mondschein, Jeffrey I; Shlansky-Goldberg, Richard D; Sudheendra, Deepak; Trerotola, Scott O
2015-06-01
To evaluate the use of endobronchial forceps to retrieve tip-embedded inferior vena cava (IVC) filters. This institutional review board-approved, HIPAA-compliant retrospective study included 114 patients who presented with tip-embedded IVC filters for removal from January 2005 to April 2014. The included patients consisted of 77 women and 37 men with a mean age of 43 years (range, 18-79 years). Filters were identified as tip embedded by using rotational venography. Rigid bronchoscopy forceps were used to dissect the tip or hook of the filter from the wall of the IVC. The filter was then removed through the sheath by using the endobronchial forceps. Statistical analysis entailed calculating percentages, ranges, and means. The endobronchial forceps technique was used to successfully retrieve 109 of 114 (96%) tip-embedded IVC filters on an intention-to-treat basis. Five failures occurred in four patients in whom the technique was attempted but failed and one patient in whom retrieval was not attempted. Filters were in place for a mean of 465 days (range, 31-2976 days). The filters in this study included 10 Recovery, 33 G2, eight G2X, 11 Eclipse, one OptEase, six Option, 13 Günther Tulip, one ALN, and 31 Celect filters. Three minor complications and one major complication occurred, with no permanent sequelae. The endobronchial forceps technique can be safely used to remove tip-embedded IVC filters. © RSNA, 2014.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Verhertbruggen, Yves; Walker, Jesse L.; Guillon, Fabienne
Staining and immunodetection by light microscopy are methods widely used to investigate plant cell walls. The two techniques have been crucial to study the cell wall architecture in planta, its deconstruction by chemicals or cell wall-degrading enzymes. They have been instrumental in detecting the presence of cell types, in deciphering plant cell wall evolution and in characterizing plant mutants and transformants. The success of immunolabeling relies on how plant materials are embedded and sectioned. Agarose coating, wax and resin embedding are, respectively, associated with vibratome, microtome and ultramicrotome sectioning. Here, we have systematically carried out a comparative analysis of these three methods of sample preparation when they are applied for cell wall staining and cell wall immunomicroscopy. In order to help the plant community in understanding and selecting adequate methods of embedding and sectioning for cell wall immunodetection, we review in this article the advantages and limitations of these three methods. Moreover, we offer detailed protocols of embedding for studying plant materials through microscopy.
Verhertbruggen, Yves; Walker, Jesse L.; Guillon, Fabienne; ...
2017-08-29
Staining and immunodetection by light microscopy are methods widely used to investigate plant cell walls. The two techniques have been crucial to study the cell wall architecture in planta, its deconstruction by chemicals or cell wall-degrading enzymes. They have been instrumental in detecting the presence of cell types, in deciphering plant cell wall evolution and in characterizing plant mutants and transformants. The success of immunolabeling relies on how plant materials are embedded and sectioned. Agarose coating, wax and resin embedding are, respectively, associated with vibratome, microtome and ultramicrotome sectioning. Here, we have systematically carried out a comparative analysis of these three methods of sample preparation when they are applied for cell wall staining and cell wall immunomicroscopy. In order to help the plant community in understanding and selecting adequate methods of embedding and sectioning for cell wall immunodetection, we review in this article the advantages and limitations of these three methods. Moreover, we offer detailed protocols of embedding for studying plant materials through microscopy.
Verhertbruggen, Yves; Walker, Jesse L.; Guillon, Fabienne; Scheller, Henrik V.
2017-01-01
Staining and immunodetection by light microscopy are methods widely used to investigate plant cell walls. The two techniques have been crucial to study the cell wall architecture in planta, its deconstruction by chemicals or cell wall-degrading enzymes. They have been instrumental in detecting the presence of cell types, in deciphering plant cell wall evolution and in characterizing plant mutants and transformants. The success of immunolabeling relies on how plant materials are embedded and sectioned. Agarose coating, wax and resin embedding are, respectively, associated with vibratome, microtome and ultramicrotome sectioning. Here, we have systematically carried out a comparative analysis of these three methods of sample preparation when they are applied for cell wall staining and cell wall immunomicroscopy. In order to help the plant community in understanding and selecting adequate methods of embedding and sectioning for cell wall immunodetection, we review in this article the advantages and limitations of these three methods. Moreover, we offer detailed protocols of embedding for studying plant materials through microscopy. PMID:28900439
Data quantile-quantile plots: quantifying the time evolution of space climatology
NASA Astrophysics Data System (ADS)
Tindale, Elizabeth; Chapman, Sandra
2017-04-01
The solar wind is inherently variable across a wide range of spatio-temporal scales; embedded in the flow are the signatures of distinct non-linear physical processes from evolving turbulence to the dynamical solar corona. In-situ satellite observations of solar wind magnetic field and velocity are at minute and below time resolution and now extend over several solar cycles. Each solar cycle is unique, and the space climatology challenge is to quantify how solar wind variability changes within, and across, each distinct solar cycle, and how this in turn drives space weather at earth. We will demonstrate a novel statistical method, that of data-data quantile-quantile (DQQ) plots, which quantifies how the underlying statistical distribution of a given observable is changing in time. Importantly this method does not require any assumptions concerning the underlying functional form of the distribution and can identify multi-component behaviour that is changing in time. This can be used to determine when a sub-range of a given observable is undergoing a change in statistical distribution, or where the moments of the distribution only are changing and the functional form of the underlying distribution is not changing in time. The method is quite general; for this application we use data from the WIND satellite to compare the solar wind across the minima and maxima of solar cycles 23 and 24 [1], and how these changes are manifest in parameters that quantify coupling to the earth's magnetosphere. [1] Tindale, E., and S.C. Chapman (2016), Geophys. Res. Lett., 43(11), doi: 10.1002/2016GL068920.
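A minimal numerical version of a data-data quantile-quantile comparison is sketched below: empirical quantiles of two observation windows are evaluated at the same probability levels and compared, with no assumption about the underlying distribution. The lognormal samples stand in for, e.g., a solar wind observable from two different cycles.

```python
# Data-data quantile-quantile comparison of two observation windows.
import numpy as np

rng = np.random.default_rng(2)
window_a = rng.lognormal(mean=6.0, sigma=0.30, size=5000)
window_b = rng.lognormal(mean=6.1, sigma=0.35, size=5000)

probs = np.linspace(0.01, 0.99, 99)
qa = np.quantile(window_a, probs)
qb = np.quantile(window_b, probs)

# Departures from the identity line reveal which sub-range of the observable
# has changed; a pure shift/scale change keeps the DQQ curve a straight line.
for p, x, y in zip(probs[::24], qa[::24], qb[::24]):
    print(f"p={p:.2f}  q_a={x:7.1f}  q_b={y:7.1f}  ratio={y/x:.3f}")
```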
Embedded DCT and wavelet methods for fine granular scalable video: analysis and comparison
NASA Astrophysics Data System (ADS)
van der Schaar-Mitrea, Mihaela; Chen, Yingwei; Radha, Hayder
2000-04-01
Video transmission over bandwidth-varying networks is becoming increasingly important due to emerging applications such as streaming of video over the Internet. The fundamental obstacle in designing such systems resides in the varying characteristics of the Internet (i.e. bandwidth variations and packet-loss patterns). In MPEG-4, a new SNR scalability scheme, called Fine-Granular-Scalability (FGS), is currently under standardization, which is able to adapt in real-time (i.e. at transmission time) to Internet bandwidth variations. The FGS framework consists of a non-scalable motion-predicted base-layer and an intra-coded fine-granular scalable enhancement layer. For example, the base layer can be coded using a DCT-based MPEG-4 compliant, highly efficient video compression scheme. Subsequently, the difference between the original and decoded base-layer is computed, and the resulting FGS-residual signal is intra-frame coded with an embedded scalable coder. In order to achieve high coding efficiency when compressing the FGS enhancement layer, it is crucial to analyze the nature and characteristics of residual signals common to the SNR scalability framework (including FGS). In this paper, we present a thorough analysis of SNR residual signals by evaluating their statistical properties, compaction efficiency and frequency characteristics. The signal analysis revealed that the energy compaction of the DCT and wavelet transforms is limited and the frequency characteristics of SNR residual signals decay rather slowly. Moreover, the blockiness artifacts of the low bit-rate coded base-layer result in artificial high frequencies in the residual signal. Subsequently, a variety of wavelet and embedded DCT coding techniques applicable to the FGS framework are evaluated and their results are interpreted based on the identified signal properties. As expected from the theoretical signal analysis, the rate-distortion performances of the embedded wavelet and DCT-based coders are very similar. However, improved results can be obtained for the wavelet coder by deblocking the base-layer prior to the FGS residual computation. Based on the theoretical analysis and our measurements, we can conclude that for an optimal complexity versus coding-efficiency trade-off, only limited wavelet decomposition (e.g. 2 stages) needs to be performed for the FGS-residual signal. Also, it was observed that the good rate-distortion performance of a coding technique for a certain image type (e.g. natural still-images) does not necessarily translate into similarly good performance for signals with different visual characteristics and statistical properties.
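The enhancement-layer signal path described above can be summarized in a short sketch: subtract the decoded base layer from the original frame and apply a shallow two-stage 2-D wavelet decomposition to the residual. The synthetic frame and the coarse-quantization stand-in for the base layer are assumptions; the embedded bit-plane coder itself is not shown.

```python
# FGS-residual computation followed by a limited (2-stage) wavelet decomposition.
import numpy as np
import pywt

rng = np.random.default_rng(3)
original = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
decoded_base = np.round(original / 16.0) * 16.0      # stand-in for a lossy base layer

residual = original - decoded_base                   # FGS residual signal
coeffs = pywt.wavedec2(residual, wavelet="db2", level=2)

approx, details = coeffs[0], coeffs[1:]
print("approximation:", approx.shape,
      "detail bands per level:", [tuple(d.shape for d in lvl) for lvl in details])
```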
Fracture Mechanics Method for Word Embedding Generation of Neural Probabilistic Linguistic Model.
Bi, Size; Liang, Xiao; Huang, Ting-Lei
2016-01-01
Word embedding, a lexical vector representation generated via the neural linguistic model (NLM), is empirically demonstrated to be appropriate for improving the performance of traditional language models. However, the high dimensionality inherent in NLMs contributes to problems with hyperparameters and long training times. Here, we propose a force-directed method to alleviate these problems and simplify the generation of word embeddings. In this framework, each word is treated as a point in the real world; thus it can approximately simulate physical movement following certain mechanics. To simulate the variation of meaning in phrases, we use fracture mechanics to model the formation and breakdown of the meaning combined by a 2-gram word group. In experiments on the natural language tasks of part-of-speech tagging, named entity recognition and semantic role labeling, the results demonstrated that the 2-dimensional word embedding can rival the word embeddings generated by classic NLMs in terms of accuracy, recall, and text visualization.
Method for preparing hydrous titanium oxide spherules and other gel forms thereof
Collins, J.L.
1998-10-13
The present invention provides methods for preparing hydrous titanium oxide spherules, hydrous titanium oxide gels such as gel slabs, films, capillary and electrophoresis gels, titanium monohydrogen phosphate spherules, hydrous titanium oxide spherules having suspendible particles homogeneously embedded within to form a composite sorbent, titanium monohydrogen phosphate spherules having suspendible particles of at least one different sorbent homogeneously embedded within to form a composite sorbent having a desired crystallinity, titanium oxide spherules in the form of anatase, brookite or rutile, titanium oxide spherules having suspendible particles homogeneously embedded within to form a composite, hydrous titanium oxide fiber materials, titanium oxide fiber materials, hydrous titanium oxide fiber materials having suspendible particles homogeneously embedded within to form a composite, titanium oxide fiber materials having suspendible particles homogeneously embedded within to form a composite and spherules of barium titanate. These variations of hydrous titanium oxide spherules and gel forms prepared by the gel-sphere, internal gelation process offer more useful forms of inorganic ion exchangers, catalysts, getters and ceramics. 6 figs.
Nåbo, Lina J; Olsen, Jógvan Magnus Haugaard; Martínez, Todd J; Kongsted, Jacob
2017-12-12
The calculation of spectral properties for photoactive proteins is challenging because of the large cost of electronic structure calculations on large systems. Mixed quantum mechanical (QM) and molecular mechanical (MM) methods are typically employed to make such calculations computationally tractable. This study addresses the connection between the minimal QM region size and the method used to model the MM region in the calculation of absorption properties, here exemplified for calculations on the green fluorescent protein. We find that polarizable embedding is necessary for a qualitatively correct description of the MM region, and that this enables the use of much smaller QM regions compared to fixed-charge electrostatic embedding. Furthermore, absorption intensities converge very slowly with system size, and inclusion of effective external field effects in the MM region through polarizabilities is therefore very important. Thus, this embedding scheme enables accurate prediction of intensities for systems that are too large to be treated fully quantum mechanically.
Method for preparing hydrous titanium oxide spherules and other gel forms thereof
Collins, Jack L.
1998-01-01
The present invention provides methods for preparing hydrous titanium oxide spherules, hydrous titanium oxide gels such as gel slabs, films, capillary and electrophoresis gels, titanium monohydrogen phosphate spherules, hydrous titanium oxide spherules having suspendible particles homogeneously embedded within to form a composite sorbent, titanium monohydrogen phosphate spherules having suspendible particles of at least one different sorbent homogeneously embedded within to form a composite sorbent having a desired crystallinity, titanium oxide spherules in the form of anatase, brookite or rutile, titanium oxide spherules having suspendible particles homogeneously embedded within to form a composite, hydrous titanium oxide fiber materials, titanium oxide fiber materials, hydrous titanium oxide fiber materials having suspendible particles homogeneously embedded within to form a composite, titanium oxide fiber materials having suspendible particles homogeneously embedded within to form a composite and spherules of barium titanate. These variations of hydrous titanium oxide spherules and gel forms prepared by the gel-sphere, internal gelation process offer more useful forms of inorganic ion exchangers, catalysts, getters and ceramics.
The ATLASGAL survey: a catalog of dust condensations in the Galactic plane
NASA Astrophysics Data System (ADS)
Csengeri, T.; Urquhart, J. S.; Schuller, F.; Motte, F.; Bontemps, S.; Wyrowski, F.; Menten, K. M.; Bronfman, L.; Beuther, H.; Henning, Th.; Testi, L.; Zavagno, A.; Walmsley, M.
2014-05-01
Context. The formation processes and the evolutionary stages of high-mass stars are poorly understood compared to low-mass stars. Large-scale surveys are needed to provide an unbiased census of high column density sites that can potentially host precursors to high-mass stars. Aims: The ATLASGAL survey covers 420 sq. degrees of the Galactic plane, between -80° < ℓ < +60°, at 870 μm. Here we identify the population of embedded sources throughout the inner Galaxy. With this catalog we first investigate the general statistical properties of dust condensations in terms of their observed parameters, such as flux density and angular size. Then, using mid-infrared surveys, we aim to investigate their star formation activity and the Galactic distribution of star-forming and quiescent clumps. Our ultimate goal is to determine the statistical properties of quiescent and star-forming clumps within the Galaxy and to constrain the star formation processes. Methods: We optimized the source extraction method, referred to as MRE-GCL, for the ATLASGAL maps in order to generate a catalog of compact sources. This technique is based on multiscale filtering to remove extended emission from clouds to better determine the parameters corresponding to the embedded compact sources. In a second step we extracted the sources by fitting 2D Gaussians with the Gaussclumps algorithm. Results: We have identified in total 10861 compact submillimeter sources with fluxes above 5σ. Completeness tests show that this catalog is 97% complete above 5σ and >99% complete above 7σ. Correlating this sample of clumps with mid-infrared point source catalogs (MSX at 21.3 μm and WISE at 22 μm), we have determined that a lower limit of 33% of the clumps is associated with embedded protostellar objects. We note that the proportion of clumps associated with mid-infrared sources increases with increasing flux density, reaching a rather constant fraction of ~75% of all clumps with fluxes over 5 Jy/beam being associated with star formation. Examining the source counts as a function of Galactic longitude, we are able to identify the most prominent star-forming regions in the Galaxy. Conclusions: We present here the compact source catalog of the full ATLASGAL survey and investigate their characteristic properties. From the fraction of likely massive quiescent clumps (~25%), we estimate a formation time scale of ~7.5 ± 2.5 × 10^4 yr for the deeply embedded phase before the emergence of luminous young stellar objects. Such a short duration for the formation of high-mass stars in massive clumps clearly proves that the earliest phases have to be dynamic, with supersonic motions. Full Table 1 is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/565/A75
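For the Gaussian-fitting step, a minimal illustration is given below: a 2-D Gaussian is fitted to a synthetic compact source with scipy. This is not the Gaussclumps algorithm or the MRE-GCL multiscale filtering, only the generic per-clump fit they build on.

```python
# Fit a 2-D Gaussian to a synthetic compact source (illustrative only).
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, sx, sy):
    x, y = coords
    return amp * np.exp(-((x - x0) ** 2 / (2 * sx ** 2) + (y - y0) ** 2 / (2 * sy ** 2)))

y, x = np.mgrid[0:64, 0:64]
rng = np.random.default_rng(4)
image = gauss2d((x, y), 5.0, 30.0, 34.0, 3.0, 4.0) + 0.1 * rng.normal(size=x.shape)

popt, _ = curve_fit(gauss2d, (x.ravel(), y.ravel()), image.ravel(),
                    p0=(1.0, 32.0, 32.0, 2.0, 2.0))
print("amp, x0, y0, sigma_x, sigma_y =", np.round(popt, 2))
```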
Video watermarking for mobile phone applications
NASA Astrophysics Data System (ADS)
Mitrea, M.; Duta, S.; Petrescu, M.; Preteux, F.
2005-08-01
Nowadays, alongside the traditional voice signal, music, video, and 3D characters tend to become common data to be run, stored and/or processed on mobile phones. Hence, protecting their related intellectual property rights also becomes a crucial issue. The video sequences involved in such applications are generally coded at very low bit rates. The present paper starts by presenting an accurate statistical investigation of such video as well as of a very dangerous attack (the StirMark attack). The obtained results are turned into practice when adapting a spread spectrum watermarking method to such applications. The informed watermarking approach was also considered: an outstanding method belonging to this paradigm has been adapted and re-evaluated under the low-rate video constraint. The experiments were conducted in collaboration with the SFR mobile services provider in France. They also allow a comparison between the spread spectrum and informed embedding techniques.
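A generic spread-spectrum sketch in the spirit of the adapted method is given below: a keyed pseudo-random sequence is added, with strength alpha, to mid-frequency 2-D DCT coefficients, and non-blind detection correlates the coefficient difference with that sequence. The band, strength, and random frame are illustrative choices, not the paper's low-bit-rate configuration.

```python
# Generic spread-spectrum watermark sketch on 2-D DCT coefficients.
import numpy as np
from scipy.fft import dctn, idctn

def embed(frame, key, alpha=2.0, band=slice(8, 24)):
    """Add a keyed +/-1 sequence to a mid-frequency DCT block of the frame."""
    rng = np.random.default_rng(key)
    coeffs = dctn(frame, norm="ortho")
    wm = rng.choice([-1.0, 1.0], size=coeffs[band, band].shape)
    coeffs[band, band] += alpha * wm
    return idctn(coeffs, norm="ortho"), wm

def detect(marked, original, wm, alpha=2.0, band=slice(8, 24)):
    """Non-blind detection: correlate the coefficient difference with the sequence."""
    diff = dctn(marked, norm="ortho") - dctn(original, norm="ortho")
    return float(np.corrcoef((diff[band, band] / alpha).ravel(), wm.ravel())[0, 1])

frame = np.random.default_rng(5).integers(0, 256, size=(64, 64)).astype(float)
marked, wm = embed(frame, key=42)
print("correlation with the correct key:", round(detect(marked, frame, wm), 3))  # ~1.0
```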
Cell Kinetic and Histomorphometric Analysis of Microgravitational Osteopenia: PARE.03B
NASA Technical Reports Server (NTRS)
Roberts, W. Eugene; Garetto, Lawrence P.
1998-01-01
Previous methods of identifying cells undergoing DNA synthesis (S-phase) utilized 3H-thymidine (3HT) autoradiography. 5-Bromo-2'-deoxyuridine (BrdU) immunohistochemistry is a nonradioactive alternative method. This experiment compared the two methods using the nuclear volume model for osteoblast histogenesis in two different embedding media. Twenty Sprague-Dawley rats were used, with half receiving 3HT (1 micro-Ci/g) and the other half BrdU (50 micro-g/g). Condyles were embedded (one side in paraffin, the other in plastic) and S-phase nuclei were identified using either autoradiography or immunohistochemistry. The fractional distribution of preosteoblast cell types and the percentage of labeled cells (within each cell fraction and label index) were calculated and expressed as mean +/- standard error. Chi-Square analysis showed only a minor difference in the fractional distribution of cell types. However, there were significant differences (p less than 0.05) by ANOVA in the nuclear labeling of specific cell types. With the exception of the less-differentiated A+A' cells, more BrdU label was consistently detected in paraffin than in plastic-embedded sections. In general, more nuclei were labeled with 3H-thymidine than with BrdU in both types of embedding media (Fig. 2). Labeling index data (labeled cells/total cells sampled x 100) indicated that BrdU in paraffin, but not plastic, gave the same results as 3HT in either embedding method. Thus, we conclude that the two labeling methods do not yield the same results.
The use of advanced web-based survey design in Delphi research.
Helms, Christopher; Gardner, Anne; McInnes, Elizabeth
2017-12-01
A discussion of the application of metadata, paradata and embedded data in web-based survey research, using two completed Delphi surveys as examples. Metadata, paradata and embedded data use in web-based Delphi surveys has not been described in the literature. The rapid evolution and widespread use of online survey methods imply that paper-based Delphi methods will likely become obsolete. Commercially available web-based survey tools offer a convenient and affordable means of conducting Delphi research. Researchers and ethics committees may be unaware of the benefits and risks of using metadata in web-based surveys. Discussion paper. Two web-based, three-round Delphi surveys were conducted sequentially between August 2014 - January 2015 and April - May 2016. Their aims were to validate the Australian nurse practitioner metaspecialties and their respective clinical practice standards. Our discussion paper is supported by researcher experience and data obtained from conducting both web-based Delphi surveys. Researchers and ethics committees should consider the benefits and risks of metadata use in web-based survey methods. Web-based Delphi research using paradata and embedded data may introduce efficiencies that improve individual participant survey experiences and reduce attrition across iterations. Use of embedded data allows the efficient conduct of multiple simultaneous Delphi surveys across a shorter timeframe than traditional survey methods. The use of metadata, paradata and embedded data appears to improve response rates, identify bias and give possible explanation for apparent outlier responses, providing an efficient method of conducting web-based Delphi surveys. © 2017 John Wiley & Sons Ltd.
Outcomes of Embedded Care Management in a Family Medicine Residency Patient-Centered Medical Home.
Newman, Robert J; Bikowski, Richard; Nakayama, Kristy; Cunningham, Tina; Acker, Pam; Bradshaw, Dana
2017-01-01
Much attention is devoted nationally to preventing hospital readmissions and emergency department (ED) use, given the high cost of this care. There is a growing body of evidence from the Patient Centered Primary Care Collaborative that a patient-centered medical home (PCMH) model successfully lowers these costs. Our study evaluates a specific intervention in a family medicine residency PCMH to decrease readmissions and ED utilization using an embedded care manager. The Department of Family and Community Medicine at Eastern Virginia Medical School in Norfolk, VA, hired an RN care manager in May of 2013 with a well-defined job description focused on decreasing hospital readmissions and ED usage. Our primary outcomes for the study were number of monthly hospital admissions and readmissions over 23 months and monthly ED visits over 20 months. Readmission rates averaged 22.2% per month in the first year of the intervention and 18.3% in the second year, a statistically significant 3.9% decrease. ED visits averaged 176 per month in the first year and 146 per month in the second year, a statistically significant 17% reduction. Our study adds to the evidence that a PCMH model of care with an embedded RN care manager can favorably lower readmission rates and ED utilization in a family medicine residency practice. Developing a viable business model to support this important work remains a challenge.
Huang, Chen; Muñoz-García, Ana Belén; Pavone, Michele
2016-12-28
Density-functional embedding theory provides a general way to perform multi-physics quantum mechanics simulations of large-scale materials by dividing the total system's electron density into a cluster's density and its environment's density. It is then possible to compute the accurate local electronic structures and energetics of the embedded cluster with high-level methods, meanwhile retaining a low-level description of the environment. The prerequisite step in the density-functional embedding theory is the cluster definition. In covalent systems, cutting across the covalent bonds that connect the cluster and its environment leads to dangling bonds (unpaired electrons). These represent a major obstacle for the application of density-functional embedding theory to study extended covalent systems. In this work, we developed a simple scheme to define the cluster in covalent systems. Instead of cutting covalent bonds, we directly split the boundary atoms for maintaining the valency of the cluster. With this new covalent embedding scheme, we compute the dehydrogenation energies of several different molecules, as well as the binding energy of a cobalt atom on graphene. Well localized cluster densities are observed, which can facilitate the use of localized basis sets in high-level calculations. The results are found to converge faster with the embedding method than the other multi-physics approach ONIOM. This work paves the way to perform the density-functional embedding simulations of heterogeneous systems in which different types of chemical bonds are present.
NASA Astrophysics Data System (ADS)
Iyer, Ajai; Etula, Jarkko; Ge, Yanling; Liu, Xuwen; Koskinen, Jari
2016-11-01
Detonation nanodiamonds (DNDs) are known to have an sp3 core, an sp2 shell, and a small size (a few nm), and are gaining importance as multi-functional nanoparticles. Diverse methods have been used to form composites containing detonation nanodiamonds (DNDs) embedded in conductive and dielectric matrices for various applications. Here we show a method wherein DND-ta-C composite films, consisting of DNDs embedded in a ta-C matrix, are co-deposited from the same cathode by the pulsed filtered cathodic vacuum arc method. Transmission electron microscope analysis of these films reveals the presence of DNDs embedded in the matrix of amorphous carbon. Raman spectroscopy indicates that the presence of DNDs does not adversely affect the sp3 content of the DND-ta-C composite film compared to a ta-C film of the same thickness. Nanoindentation and nanowear tests indicate that DND-ta-C composite films possess improved mechanical properties in comparison to ta-C films of similar thickness.
Extending density functional embedding theory for covalently bonded systems.
Yu, Kuang; Carter, Emily A
2017-12-19
Quantum embedding theory aims to provide an efficient solution to obtain accurate electronic energies for systems too large for full-scale, high-level quantum calculations. It adopts a hierarchical approach that divides the total system into a small embedded region and a larger environment, using different levels of theory to describe each part. Previously, we developed a density-based quantum embedding theory called density functional embedding theory (DFET), which achieved considerable success in metals and semiconductors. In this work, we extend DFET into a density-matrix-based nonlocal form, enabling DFET to study the stronger quantum couplings between covalently bonded subsystems. We name this theory density-matrix functional embedding theory (DMFET), and we demonstrate its performance in several test examples that resemble various real applications in both chemistry and biochemistry. DMFET gives excellent results in all cases tested thus far, including predicting isomerization energies, proton transfer energies, and highest occupied molecular orbital-lowest unoccupied molecular orbital gaps for local chromophores. Here, we show that DMFET systematically improves the quality of the results compared with the widely used state-of-the-art methods, such as the simple capped cluster model or the widely used ONIOM method.
Imperial College near infrared spectroscopy neuroimaging analysis framework.
Orihuela-Espina, Felipe; Leff, Daniel R; James, David R C; Darzi, Ara W; Yang, Guang-Zhong
2018-01-01
This paper describes the Imperial College near infrared spectroscopy neuroimaging analysis (ICNNA) software tool for functional near infrared spectroscopy neuroimaging data. ICNNA is a MATLAB-based object-oriented framework encompassing an application programming interface and a graphical user interface. ICNNA incorporates reconstruction based on the modified Beer-Lambert law and basic processing and data validation capabilities. Emphasis is placed on the full experiment rather than individual neuroimages as the central element of analysis. The software offers three types of analyses including classical statistical methods based on comparison of changes in relative concentrations of hemoglobin between the task and baseline periods, graph theory-based metrics of connectivity and, distinctively, an analysis approach based on manifold embedding. This paper presents the different capabilities of ICNNA in its current version.
Curvature and temperature of complex networks.
Krioukov, Dmitri; Papadopoulos, Fragkiskos; Vahdat, Amin; Boguñá, Marián
2009-09-01
We show that heterogeneous degree distributions in observed scale-free topologies of complex networks can emerge as a consequence of the exponential expansion of hidden hyperbolic space. Fermi-Dirac statistics provides a physical interpretation of hyperbolic distances as energies of links. The hidden space curvature affects the heterogeneity of the degree distribution, while clustering is a function of temperature. We embed the internet into the hyperbolic plane and find a remarkable congruency between the embedding and our hyperbolic model. Besides proving our model realistic, this embedding may be used for routing with only local information, which holds significant promise for improving the performance of internet routing.
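The two quantities referenced above can be written down directly: the hyperbolic distance between two nodes given in polar coordinates, and a Fermi-Dirac connection probability in which that distance acts as the link energy. The values of R and T below are illustrative, not fitted to the internet embedding.

```python
# Hyperbolic distance (law of cosines, curvature -1) and a Fermi-Dirac
# connection probability with the distance playing the role of a link energy.
import math

def hyperbolic_distance(r1, t1, r2, t2):
    dtheta = math.pi - abs(math.pi - abs(t1 - t2))   # angular separation
    arg = math.cosh(r1) * math.cosh(r2) - math.sinh(r1) * math.sinh(r2) * math.cos(dtheta)
    return math.acosh(max(arg, 1.0))

def connection_probability(d, R=12.0, T=0.5):
    # R acts like a chemical potential, T like a temperature controlling clustering.
    return 1.0 / (1.0 + math.exp((d - R) / (2.0 * T)))

d = hyperbolic_distance(10.0, 0.3, 11.0, 2.0)
print(round(d, 3), round(connection_probability(d), 3))
```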
Talreja, Hari; Ryan, Stephen Edward; Graham, Janet; Sood, Manish M.; Hadziomerovic, Adnan; Clark, Edward
2017-01-01
Background With the increasing frequency of tunneled hemodialysis catheter use there is a parallel increase in the need for removal and/or exchange. A small but significant minority of catheters become embedded or ‘stuck’ and cannot be removed by traditional means. Management of embedded catheters involves cutting the catheter, burying the retained fragment with a subsequent increased risk of infections and thrombosis. Endoluminal dilatation may provide a potential safe and effective technique for removing embedded catheters, however, to date, there is a paucity of data. Objectives 1) To determine factors associated with catheters becoming embedded and 2) to determine outcomes associated with endoluminal dilatation Methods All patients with endoluminal dilatation for embedded catheters at our institution since Jan. 2010 were included. Patients who had an embedded catheter were matched 1:3 with patients with uncomplicated catheter removal. Baseline patient and catheter characteristics were compared. Outcomes included procedural success and procedure-related infection. Logistic regression models were used to determine factors associated with embedded catheters. Results We matched 15 cases of embedded tunneled catheters with 45 controls. Among patients with embedded catheters, there were no complications with endoluminal dilatation. Factors independently associated with embedded catheters included catheter dwell time (> 2 years) and history of central venous stenosis. Conclusion Embedded catheters can be successfully managed by endoluminal dilatation with minimal complications and factors associated with embedding include dwell times > 2 years and/or with a history of central venous stenosis. PMID:28346468
Orthogonality of embedded wave functions for different states in frozen-density embedding theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zech, Alexander; Wesolowski, Tomasz A.; Aquilante, Francesco
2015-10-28
Stationary embedded wave functions other than the lowest-energy one obtained in Frozen-Density Embedding Theory (FDET) [T. A. Wesolowski, Phys. Rev. A 77, 012504 (2008)] can be associated with electronic excited states, but they can be mutually non-orthogonal. Although this does not violate any physical principles (embedded wave functions are only auxiliary objects used to obtain stationary densities), working with orthogonal functions has many practical advantages. In the present work, we show numerically that excitation energies obtained using conventional FDET calculations (allowing for non-orthogonality) can be obtained using embedded wave functions which are strictly orthogonal. The method used preserves the mathematical structure of FDET and the self-consistency between the energy, the embedded wave function, and the embedding potential (they are connected through the Euler-Lagrange equations). The orthogonality is built in through the linearization, in the embedded density, of the relevant components of the total energy functional. Moreover, we show formally that the differences between the expectation values of the embedded Hamiltonian are equal to the excitation energies, which is the exact result within linearized FDET. Linearized FDET is shown to be a robust approximation for a large class of reference densities.
Greening the Engineering and Technology Curriculum via Real Life Hands-on Projects
USDA-ARS?s Scientific Manuscript database
This paper aims at demonstrating how greening efforts can be embedded into science and engineering courses without major curricular changes. In this regard, examples of final projects assigned in a statistical quality control, a 500-level, graduate engineering course, focusing on campus sustainabili...
Mathematical Modeling and Pure Mathematics
ERIC Educational Resources Information Center
Usiskin, Zalman
2015-01-01
Common situations, like planning air travel, can become grist for mathematical modeling and can promote the mathematical ideas of variables, formulas, algebraic expressions, functions, and statistics. The purpose of this article is to illustrate how the mathematical modeling that is present in everyday situations can be naturally embedded in…
Spear Phishing Attack Detection
2011-03-24
the insider amongst senior leaders of an organization [Mes08], the undercover detective within a drug cartel, or the classic secret agent planted in...to a mimicry attack that shapes the embedded malware to have a statistical distribution similar to "normal" or benign behavior.
Integrated narrowband optical filter based on embedded subwavelength resonant grating structures
Grann, Eric B.; Sitter, Jr., David N.
2000-01-01
A resonant grating structure in a waveguide and methods of tuning the performance of the grating structure are described. An apparatus includes a waveguide; and a subwavelength resonant grating structure embedded in the waveguide. The systems and methods provide advantages including narrowband filtering capabilities, minimal sideband reflections, spatial control, high packing density, and tunability.
Roncero, Octavio; Aguado, Alfredo; Batista-Romero, Fidel A; Bernal-Uruchurtu, Margarita I; Hernández-Lamoneda, Ramón
2015-03-10
A variant of the density difference driven optimized embedding potential (DDD-OEP) method, proposed by Roncero et al. (J. Chem. Phys. 2009, 131, 234110), has been applied to the calculation of excited states of Br2 within small water clusters. It is found that the strong interaction of Br2 with the lone electronic pair of the water molecules makes it necessary to optimize specific embedding potentials for the ground and excited electronic states separately, using the corresponding densities. Diagnosis and convergence studies are presented with the aim of providing methods to be applied to the study of chromophores in solution. Some preliminary results obtained for the study of the electronic states of Br2 in clathrate cages are also presented.
Piezoresistive effect of the carbon nanotube yarn embedded axially into the 3D braided composite
NASA Astrophysics Data System (ADS)
Ma, Xin; Cao, Xiaona
2018-06-01
A new method for monitoring the structural health of 3D braided composites in real time, based on the piezoresistivity of a carbon nanotube yarn embedded axially in the composite, has been designed. An experimental system for detecting the piezoresistive effect of the carbon nanotube yarn in the 3D braided composite was built, and its sensing characteristics have been analyzed for further research. Compared with other structural health monitoring methods, the monitoring technique using carbon nanotube yarns is better suited to immediate detection of internal damage; in addition, the strength of the composite can be increased by embedding the yarns. This method can also be used for strain sensing and for the development of intelligent materials and structure systems.
Autofocus method for automated microscopy using embedded GPUs.
Castillo-Secilla, J M; Saval-Calvo, M; Medina-Valdès, L; Cuenca-Asensi, S; Martínez-Álvarez, A; Sánchez, C; Cristóbal, G
2017-03-01
In this paper we present a method for autofocusing images of sputum smears taken with a microscope, which combines finding the optimal focus distance with an algorithm for extending the depth of field (EDoF). Our multifocus fusion method produces a single image in which all the relevant objects of the analyzed scene are well focused, independently of their distance to the sensor. This process is computationally expensive, which makes its automation unfeasible on traditional embedded processors. For this purpose, a low-cost optimized implementation is proposed using a resource-limited embedded GPU integrated on a cutting-edge NVIDIA system on chip. Extensive tests performed on different sputum smear image sets show the real-time capabilities of our implementation while maintaining the quality of the output image.
Enhanced thermal conductance of polymer composites through embedding aligned carbon nanofibers
Nicholas, Roberts; Hensley, Dale K.; Wood, David
2016-07-08
The focus of this work is to find a more efficient method of enhancing the thermal conductance of polymer thin films. This work compares polymer thin films embedded with randomly oriented carbon nanotubes to those with vertically aligned carbon nanofibers. Thin films embedded with carbon nanofibers demonstrated a similar thermal conductance between 40–60 μm and a higher thermal conductance between 25–40 μm than films embedded with carbon nanotubes with similar volume fractions, even though carbon nanotubes have a higher thermal conductivity than carbon nanofibers.
Embedded object concept: case balancing two-wheeled robot
NASA Astrophysics Data System (ADS)
Vallius, Tero; Röning, Juha
2007-09-01
This paper presents the Embedded Object Concept (EOC) and a telepresence robot system which is a test case for the EOC. The EOC utilizes common object-oriented methods used in software by applying them to combined Lego-like software-hardware entities. These entities represent objects in object-oriented design methods, and they are the building blocks of embedded systems. The goal of the EOC is to make the designing of embedded systems faster and easier. This concept enables people without comprehensive knowledge in electronics design to create new embedded systems, and for experts it shortens the design time of new embedded systems. We present the current status of a telepresence robot created with Atomi-objects, which is the name for our implementation of the embedded objects. The telepresence robot is a relatively complex test case for the EOC. The robot has been constructed using incremental device development, which is made possible by the architecture of the EOC. The robot contains video and audio exchange capability and a controlling system for driving with two wheels. The robot consists of Atomi-objects, demonstrating the suitability of the EOC for prototyping and easy modifications, and proving the capabilities of the EOC by realizing a function that normally requires a computer. The computer counterpart is a regular PC with audio and video capabilities running with a robot control application. The robot is functional and successfully tested.
Epoxy Resins in Electron Microscopy
Finck, Henry
1960-01-01
A method of embedding biological specimens in araldite 502 (Ciba) has been developed for materials available in the United States. Araldite-embedded tissues are suitable for electron microscopy, but the cutting qualities of the resin necessitate more than routine attention during microtomy. The rather high viscosity of araldite 502 also seems to be an unnecessary handicap. The less viscous epoxy epon 812 (Shell) produces specimens with improved cutting qualities, and has several features—low shrinkage and absence of specimen damage during cure, minimal compression of sections, relative absence of electron beam-induced section damage, etc.—which recommend it as a routine embedding material. The hardness of the cured resin can be easily adjusted by several methods to suit the materials embedded in it. Several problems and advantages of working with sections of epoxy resins are also discussed. PMID:13822825
Virtual network embedding in cross-domain network based on topology and resource attributes
NASA Astrophysics Data System (ADS)
Zhu, Lei; Zhang, Zhizhong; Feng, Linlin; Liu, Lilan
2018-03-01
Aiming at the issues of network architecture ossification and the diversity of access technologies, this paper studies cross-domain virtual network embedding. By analysing the topological attributes of nodes in the virtual and physical networks from local and global perspectives, combined with local network resource properties, we rank the embedding priority of the nodes with PCA and TOPSIS methods. The link load distribution is also considered. Based on this, we propose a cross-domain virtual network embedding algorithm based on topology and resource attributes. The simulation results show that our algorithm increases the acceptance rate of multi-domain virtual network requests compared with existing virtual network embedding algorithms.
2012-01-01
Background Dimensionality reduction (DR) enables the construction of a lower dimensional space (embedding) from a higher dimensional feature space while preserving object-class discriminability. However several popular DR approaches suffer from sensitivity to choice of parameters and/or presence of noise in the data. In this paper, we present a novel DR technique known as consensus embedding that aims to overcome these problems by generating and combining multiple low-dimensional embeddings, hence exploiting the variance among them in a manner similar to ensemble classifier schemes such as Bagging. We demonstrate theoretical properties of consensus embedding which show that it will result in a single stable embedding solution that preserves information more accurately as compared to any individual embedding (generated via DR schemes such as Principal Component Analysis, Graph Embedding, or Locally Linear Embedding). Intelligent sub-sampling (via mean-shift) and code parallelization are utilized to provide for an efficient implementation of the scheme. Results Applications of consensus embedding are shown in the context of classification and clustering as applied to: (1) image partitioning of white matter and gray matter on 10 different synthetic brain MRI images corrupted with 18 different combinations of noise and bias field inhomogeneity, (2) classification of 4 high-dimensional gene-expression datasets, (3) cancer detection (at a pixel-level) on 16 image slices obtained from 2 different high-resolution prostate MRI datasets. In over 200 different experiments concerning classification and segmentation of biomedical data, consensus embedding was found to consistently outperform both linear and non-linear DR methods within all applications considered. Conclusions We have presented a novel framework termed consensus embedding which leverages ensemble classification theory within dimensionality reduction, allowing for application to a wide range of high-dimensional biomedical data classification and segmentation problems. Our generalizable framework allows for improved representation and classification in the context of both imaging and non-imaging data. The algorithm offers a promising solution to problems that currently plague DR methods, and may allow for extension to other areas of biomedical data analysis. PMID:22316103
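As a rough illustration of the consensus-embedding idea described above (not the authors' implementation), the sketch below generates several PCA embeddings from random feature subsets, averages their pairwise-distance matrices, and recovers a single embedding with metric MDS. All function and parameter names here are my own.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import MDS

def consensus_embedding(X, n_components=2, n_runs=20, subset_frac=0.7, seed=0):
    """Fuse multiple weak PCA embeddings into one consensus embedding.

    Each run uses a random subset of the features (a crude stand-in for the
    perturbation strategies in the paper); runs are combined by averaging
    their pairwise-distance matrices and re-embedding with metric MDS.
    """
    rng = np.random.default_rng(seed)
    n_samples, n_features = X.shape
    dist_sum = np.zeros((n_samples, n_samples))
    for _ in range(n_runs):
        cols = rng.choice(n_features, size=max(2, int(subset_frac * n_features)),
                          replace=False)
        emb = PCA(n_components=n_components).fit_transform(X[:, cols])
        diff = emb[:, None, :] - emb[None, :, :]        # pairwise differences
        dist_sum += np.sqrt((diff ** 2).sum(-1))        # Euclidean distances
    consensus_dist = dist_sum / n_runs
    mds = MDS(n_components=n_components, dissimilarity="precomputed", random_state=seed)
    return mds.fit_transform(consensus_dist)

# Example: 200 noisy samples with 50 features
X = np.random.default_rng(1).normal(size=(200, 50))
print(consensus_embedding(X).shape)   # (200, 2)
```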
Deep learning with word embeddings improves biomedical named entity recognition.
Habibi, Maryam; Weber, Leon; Neves, Mariana; Wiegandt, David Luis; Leser, Ulf
2017-07-15
Text mining has become an important tool for biomedical research. The most fundamental text-mining task is the recognition of biomedical named entities (NER), such as genes, chemicals and diseases. Current NER methods rely on pre-defined features which try to capture the specific surface properties of entity types, properties of the typical local context, background knowledge, and linguistic information. State-of-the-art tools are entity-specific, as dictionaries and empirically optimal feature sets differ between entity types, which makes their development costly. Furthermore, features are often optimized for a specific gold standard corpus, which makes extrapolation of quality measures difficult. We show that a completely generic method based on deep learning and statistical word embeddings [called long short-term memory network-conditional random field (LSTM-CRF)] outperforms state-of-the-art entity-specific NER tools, and often by a large margin. To this end, we compared the performance of LSTM-CRF on 33 data sets covering five different entity classes with that of best-of-class NER tools and an entity-agnostic CRF implementation. On average, the F1-score of LSTM-CRF is 5% above that of the baselines, mostly due to a sharp increase in recall. The source code for LSTM-CRF is available at https://github.com/glample/tagger and the links to the corpora are available at https://corposaurus.github.io/corpora/. Contact: habibima@informatik.hu-berlin.de
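A minimal PyTorch sketch of the core idea (pretrained word embeddings feeding a bidirectional LSTM tagger); the CRF output layer used by LSTM-CRF is omitted here for brevity, and all dimensions, names, and the toy tag set are illustrative rather than taken from the published tool.

```python
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    """Bidirectional LSTM over pretrained word embeddings, emitting per-token tag scores."""
    def __init__(self, pretrained_embeddings, num_tags, hidden_dim=128):
        super().__init__()
        # pretrained_embeddings: (vocab_size, emb_dim) float tensor, e.g. word2vec vectors
        self.embed = nn.Embedding.from_pretrained(pretrained_embeddings, freeze=False)
        emb_dim = pretrained_embeddings.shape[1]
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden_dim, num_tags)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer word indices
        h, _ = self.lstm(self.embed(token_ids))
        return self.proj(h)            # (batch, seq_len, num_tags) emission scores

# Toy usage with random "pretrained" vectors and IOB-style tags
vocab_size, emb_dim, num_tags = 1000, 50, 3   # e.g. O / B-Gene / I-Gene
model = BiLSTMTagger(torch.randn(vocab_size, emb_dim), num_tags)
scores = model(torch.randint(0, vocab_size, (4, 20)))
loss = nn.CrossEntropyLoss()(scores.reshape(-1, num_tags),
                             torch.randint(0, num_tags, (4 * 20,)))
loss.backward()
```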
Computational Efficiency of the Simplex Embedding Method in Convex Nondifferentiable Optimization
NASA Astrophysics Data System (ADS)
Kolosnitsyn, A. V.
2018-02-01
The simplex embedding method for solving convex nondifferentiable optimization problems is considered. A description of modifications of this method, based on a shift of the cutting plane intended to cut off the maximum number of simplex vertices, is given. These modifications speed up the solution of the problem. A numerical comparison of the efficiency of the proposed modifications, based on the numerical solution of benchmark convex nondifferentiable optimization problems, is presented.
Thomas, L H; French, B; Burton, C R; Sutton, C; Forshaw, D; Dickinson, H; Leathley, M J; Britt, D; Roe, B; Cheater, F M; Booth, J; Watkins, C L
2014-10-01
Urinary incontinence (UI) affects between 40 and 60% of people in hospital after stroke, but is often poorly managed in stroke units. To inform an exploratory trial by three methods: identifying the organisational context for embedding the SVP; exploring health professionals' views around embedding the SVP and measuring presence/absence of UI and frequency of UI episodes at baseline and six weeks post-stroke. A mixed methods single case study included analysis of organisational context using interviews with clinical leaders analysed with soft systems methodology, a process evaluation using interviews with staff delivering the intervention and analysed with Normalisation Process Theory, and outcome evaluation using data from patients receiving the SVP and analysed using descriptive statistics. An 18 bed acute stroke unit in a large Foundation Trust (a 'not for profit' privately controlled entity not accountable to the UK Department of Health) serving a population of 370,000. Health professionals and clinical leaders with a role in either delivering the SVP or linking with it in any capacity were recruited following informed consent. Patients were recruited meeting the following inclusion criteria: aged 18 or over with a diagnosis of stroke; urinary incontinence (UI) as defined by the International Continence Society; conscious; medically stable as judged by the clinical team and with incontinence classified as stress, urge, mixed or 'functional'. All patients admitted to the unit during the intervention period were screened for eligibility; informed consent to collect baseline and outcome data was sought from all eligible patients. Organisational context: 18 health professionals took part in four group interviews. Findings suggest an environment not conducive to therapeutic continence management and a focus on containment of UI. Embedding the SVP into practice: 21 nursing staff took part in six group interviews. Initial confusion gave way to embedding of processes facilitated by new routines and procedures. Patient outcome: 43 patients were recruited; 28 of these commenced the SVP. Of these, 6/28 (21%) were continent at six weeks post-stroke or discharge. It was possible to embed the SVP into practice despite an organisational context not conducive to therapeutic continence care. Recommendations are made for introducing the SVP in a trial context. Copyright © 2014. Published by Elsevier Ltd.
Force Field for Water Based on Neural Network.
Wang, Hao; Yang, Weitao
2018-05-18
We developed a novel neural-network-based force field for water, trained against high-level ab initio theory. The force field was built on the electrostatically embedded many-body expansion method truncated at binary interactions. The many-body expansion method is a common strategy for partitioning the total Hamiltonian of large systems into a hierarchy of few-body terms. Neural networks were trained to represent electrostatically embedded one-body and two-body interactions, which require as input only one- and two-water-molecule calculations at the level of the ab initio electronic structure method CCSD/aug-cc-pVDZ embedded in the molecular mechanics water environment, making it efficient as a general force field construction approach. Structural and dynamic properties of liquid water calculated with our force field show good agreement with experimental results. We constructed two sets of neural-network-based force fields: non-polarizable and polarizable. Simulation results show that the non-polarizable force field using fixed TIP3P charges already behaves well, since polarization effects and many-body effects are implicitly included through the electrostatic embedding scheme. Our results demonstrate that the electrostatically embedded many-body expansion combined with neural networks provides a promising and systematic way to build next-generation force fields at high accuracy and low computational cost, especially for large systems.
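For orientation, the generic form of a many-body expansion truncated at two-body terms (standard notation, not copied from the paper) is

```latex
E_{\text{tot}} \;\approx\; \sum_{i=1}^{N} E_i \;+\; \sum_{i<j}^{N} \Delta E_{ij},
\qquad \Delta E_{ij} = E_{ij} - E_i - E_j ,
```

where E_i and E_ij are monomer and dimer energies. In the electrostatically embedded variant, each fragment calculation is carried out in the field of point charges representing the surrounding molecules, which is why only one- and two-molecule ab initio calculations are needed to train the networks.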
Dimensionality reduction of collective motion by principal manifolds
NASA Astrophysics Data System (ADS)
Gajamannage, Kelum; Butail, Sachit; Porfiri, Maurizio; Bollt, Erik M.
2015-01-01
While the existence of low-dimensional embedding manifolds has been shown in patterns of collective motion, the current battery of nonlinear dimensionality reduction methods is not amenable to the analysis of such manifolds. This is mainly due to the necessary spectral decomposition step, which limits control over the mapping from the original high-dimensional space to the embedding space. Here, we propose an alternative approach that demands a two-dimensional embedding which topologically summarizes the high-dimensional data. In this sense, our approach is closely related to the construction of one-dimensional principal curves that minimize orthogonal error to data points subject to smoothness constraints. Specifically, we construct a two-dimensional principal manifold directly in the high-dimensional space using cubic smoothing splines, and define the embedding coordinates in terms of geodesic distances. Thus, the mapping from the high-dimensional data to the manifold is defined in terms of local coordinates. Through representative examples, we show that compared to existing nonlinear dimensionality reduction methods, the principal manifold retains the original structure even in noisy and sparse datasets. The principal manifold finding algorithm is applied to configurations obtained from a dynamical system of multiple agents simulating a complex maneuver called predator mobbing, and the resulting two-dimensional embedding is compared with that of a well-established nonlinear dimensionality reduction method.
Yamamoto, Takeshi
2008-12-28
Conventional quantum chemical solvation theories are based on the mean-field embedding approximation. That is, the electronic wavefunction is calculated in the presence of the mean field of the environment. In this paper a direct quantum mechanical/molecular mechanical (QM/MM) analog of such a mean-field theory is formulated based on variational and perturbative frameworks. In the variational framework, an appropriate QM/MM free energy functional is defined and is minimized in terms of the trial wavefunction that best approximates the true QM wavefunction in a statistically averaged sense. An analytical free energy gradient is obtained, which takes the form of the gradient of the effective QM energy calculated in the averaged MM potential. In the perturbative framework, the above variational procedure is shown to be equivalent to the first-order expansion of the QM energy (in the exact free energy expression) about the self-consistent reference field. This helps clarify the relation between the variational procedure and the exact QM/MM free energy as well as existing QM/MM theories. Based on this, several ways are discussed for evaluating non-mean-field effects (i.e., statistical fluctuations of the QM wavefunction) that are neglected in the mean-field calculation. As an illustration, the method is applied to an S(N)2 Menshutkin reaction in water, NH3 + CH3Cl → NH3CH3+ + Cl-, for which free energy profiles are obtained at the Hartree-Fock, MP2, B3LYP, and BHHLYP levels by integrating the free energy gradient. Non-mean-field effects are evaluated to be <0.5 kcal/mol using a Gaussian fluctuation model for the environment, which suggests that those effects are rather small for the present reaction in water.
78 FR 11661 - Request for Information: Main Study Design for the National Children's Study
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-19
....nationalchildrensstudy.gov/research/workshops/Pages/nationalacademyofsciencesworkshop.aspx . DATES: RFI Release Date is...-response relationships, substudies embedded in the Vanguard Study or the Main Study, and formative research... fall of 2012, the NCS held a series of meetings with federal and non-federal statistical sampling...
Deconstructing multivariate decoding for the study of brain function.
Hebart, Martin N; Baker, Chris I
2017-08-04
Multivariate decoding methods were developed originally as tools to enable accurate predictions in real-world applications. The realization that these methods can also be employed to study brain function has led to their widespread adoption in the neurosciences. However, prior to the rise of multivariate decoding, the study of brain function was firmly embedded in a statistical philosophy grounded on univariate methods of data analysis. In this way, multivariate decoding for brain interpretation grew out of two established frameworks: multivariate decoding for predictions in real-world applications, and classical univariate analysis based on the study and interpretation of brain activation. We argue that this led to two confusions, one reflecting a mixture of multivariate decoding for prediction or interpretation, and the other a mixture of the conceptual and statistical philosophies underlying multivariate decoding and classical univariate analysis. Here we attempt to systematically disambiguate multivariate decoding for the study of brain function from the frameworks it grew out of. After elaborating these confusions and their consequences, we describe six, often unappreciated, differences between classical univariate analysis and multivariate decoding. We then focus on how the common interpretation of what is signal and noise changes in multivariate decoding. Finally, we use four examples to illustrate where these confusions may impact the interpretation of neuroimaging data. We conclude with a discussion of potential strategies to help resolve these confusions in interpreting multivariate decoding results, including the potential departure from multivariate decoding methods for the study of brain function. Copyright © 2017. Published by Elsevier Inc.
ERIC Educational Resources Information Center
Anderson, J. M.
1978-01-01
A method is described for preparing large gelatine-embedded soil sections for ecological studies. Sampling methods reduce structural disturbance of the samples to a minimum and include freezing the samples in the field to kill soil invertebrates in their natural microhabitats. Projects are suggested for upper secondary school students. (Author/BB)
Reliability Validation and Improvement Framework
2012-11-01
systems. Steps in that direction include the use of the Architecture Tradeoff Analysis Method® (ATAM®) developed at the Carnegie Mellon...embedded software • cyber-physical systems (CPSs) to indicate that the embedded software interacts with, manages, and controls a physical system [Lee...the use of formal static analysis methods to increase our confidence in system operation beyond testing. However, analysis results
Incremental isometric embedding of high-dimensional data using connected neighborhood graphs.
Zhao, Dongfang; Yang, Li
2009-01-01
Most nonlinear data embedding methods use bottom-up approaches for capturing the underlying structure of data distributed on a manifold in high dimensional space. These methods often share the first step, which defines neighbor points of every data point by building a connected neighborhood graph so that all data points can be embedded into a single coordinate system. These methods are required to work incrementally for dimensionality reduction in many applications. Because the input data stream may be under-sampled or skewed from time to time, building a connected neighborhood graph is crucial to the success of incremental data embedding using these methods. This paper presents algorithms for updating k-edge-connected and k-connected neighborhood graphs after a new data point is added or an old data point is deleted. It further utilizes a simple algorithm for updating all-pair shortest distances on the neighborhood graph. Together with incremental classical multidimensional scaling using iterative subspace approximation, this paper devises an incremental version of Isomap with enhancements to deal with under-sampled or unevenly distributed data. Experiments on both synthetic and real-world data sets show that the algorithm is efficient and maintains low dimensional configurations of high dimensional data under various data distributions.
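A compact sketch of the non-incremental core that such methods build on (k-nearest-neighbor graph, all-pairs shortest-path distances, classical MDS); the incremental graph-connectivity updates described in the paper are not reproduced here, and the helper names are my own.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path

def isomap_like_embedding(X, n_neighbors=8, n_components=2):
    """Embed X by (1) building a kNN graph, (2) taking graph geodesics,
    (3) applying classical MDS to the geodesic distance matrix."""
    G = kneighbors_graph(X, n_neighbors=n_neighbors, mode="distance")
    D = shortest_path(G, method="D", directed=False)     # geodesic distances
    if np.isinf(D).any():
        raise ValueError("neighborhood graph is disconnected; increase n_neighbors")
    # classical MDS: double-center the squared distances and eigendecompose
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:n_components]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Example: a noisy 3-D spiral-like cloud
rng = np.random.default_rng(0)
t = 3 * np.pi * rng.random(400)
X = np.column_stack([t * np.cos(t), 10 * rng.random(400), t * np.sin(t)])
print(isomap_like_embedding(X).shape)   # (400, 2)
```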
Modeling the mechanics of axonal fiber tracts using the embedded finite element method.
Garimella, Harsha T; Kraft, Reuben H
2017-05-01
A subject-specific human head finite element model with embedded axonal fiber tractography obtained from diffusion tensor imaging was developed. The axonal fiber tractography finite element model was coupled with the volumetric elements in the head model using the embedded element method. This technique enables the calculation of axonal strains and real-time tracking of the mechanical response of the axonal fiber tracts. The coupled model was then verified using pressure and relative displacement-based (between skull and brain) experimental studies and was employed to analyze a head impact, demonstrating the applicability of this method in studying axonal injury. Following this, a comparison study of different injury criteria was performed. This model was used to determine the influence of impact direction on the extent of the axonal injury. The results suggested that the lateral impact loading is more dangerous compared to loading in the sagittal plane, a finding in agreement with previous studies. Through this analysis, we demonstrated the viability of the embedded element method as an alternative numerical approach for studying axonal injury in patient-specific human head models. Copyright © 2016 John Wiley & Sons, Ltd.
Accelerating wavefunction in density-functional-theory embedding by truncating the active basis set
NASA Astrophysics Data System (ADS)
Bennie, Simon J.; Stella, Martina; Miller, Thomas F.; Manby, Frederick R.
2015-07-01
Methods where an accurate wavefunction is embedded in a density-functional description of the surrounding environment have recently been simplified through the use of a projection operator to ensure orthogonality of orbital subspaces. Projector embedding already offers significant performance gains over conventional post-Hartree-Fock methods by reducing the number of correlated occupied orbitals. However, in our first applications of the method, we used the atomic-orbital basis for the full system, even for the correlated wavefunction calculation in a small, active subsystem. Here, we further develop our method for truncating the atomic-orbital basis to include only functions within or close to the active subsystem. The number of atomic orbitals in a calculation on a fixed active subsystem becomes asymptotically independent of the size of the environment, producing the required O(N^0) scaling of the cost of the calculation in the active subsystem, and accuracy is controlled by a single parameter. The applicability of this approach is demonstrated for the embedded many-body expansion of binding energies of water hexamers and calculation of reaction barriers of SN2 substitution of fluorine by chlorine in α-fluoroalkanes.
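As background on projector-based embedding generally (standard notation; the specific truncation scheme of the paper is not reproduced), the embedded Fock operator for a subsystem A in an environment B is often written with a level-shift projector that keeps the A orbitals orthogonal to the occupied B orbitals:

```latex
\hat{f}^{A\text{-in-}B} \;=\; \hat{h} + \hat{g}\!\left[\gamma^{A} + \gamma^{B}\right] + \mu\, \hat{P}^{B},
\qquad \hat{P}^{B} = \mathbf{S}\,\gamma^{B}\,\mathbf{S},
```

where γ^A and γ^B are the subsystem density matrices, S is the atomic-orbital overlap matrix, and μ is a large level-shift parameter; basis-set truncation then restricts the atomic-orbital basis used for A to functions on or near the active region.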
NASA Astrophysics Data System (ADS)
Laha, Ranjit; Malar, P.; Osipowicz, Thomas; Kasiviswanathan, S.
2017-09-01
Tailoring the plasmonic properties of metal nanoparticle-embedded dielectric thin films is crucial for many thin-film-based applications. We investigate, herein, various ways of tuning the plasmon resonance positions of gold nanoparticle (AuNP)-embedded indium oxide thin films (Au:IO) through a sequence-specific sandwich method. The sandwich method is a four-step process involving deposition of an In2O3 film by magnetron sputtering in the first and fourth steps, thermal evaporation of Au onto the In2O3 film in the second step, and annealing of the Au/In2O3 film in the third step. The Au:IO films were characterized by x-ray diffraction, spectrophotometry and transmission electron microscopy. The size and shape of the embedded nanoparticles were found from Rutherford back-scattering spectrometry. Based on dynamic Maxwell Garnett theory, the observed plasmon resonance position was ascribed to the oblate shape of the AuNPs formed in the sandwich method. Finally, through experimental data, it was shown that the plasmon resonance position of Au:IO thin films can be tuned by 125 nm. The method shown here can be used to tune the plasmon resonance position over the entire visible region for thin films made from other combinations of metal-dielectric pairs.
The application of polyethylene glycol (PEG) to electron microscopy.
Wolosewick, J J
1980-08-01
The cytoplasm of cells from a variety of tissues has been viewed in sections (0.25-1 micrometers) devoid of any embedding resin. Glutaraldehyde- and osmium tetroxide-fixed tissues were infiltrated and embedded in a water-miscible wax, polyethylene glycol (PEG), and subsequently sectioned on dry glass or diamond knives. The PEG matrix was removed and the sections were placed on Formvar-carbon-polylysine-coated grids, dehydrated, dried by the critical-point method, and observed in either the high- or low-voltage electron microscope. Stereoscopic views of cells devoid of embedding resin present an image of cell ultrastructure unobscured by electron-scattering resins, similar to the image of whole, unembedded critical-point-dried or freeze-dried cultured cells observed by transmission electron microscopy. All organelles, including the cytoskeletal structures, are identified and appear not to have been damaged during processing, although membrane components appear somewhat less distinct. The absence of an embedding matrix eliminates the need for additional staining to increase contrast, unlike the situation with specimens embedded in standard electron-scattering resins. The PEG technique thus appears to be a valuable adjunct to conventional methods for ultrastructural analysis.
NASA Astrophysics Data System (ADS)
Chibani, Wael; Ren, Xinguo; Scheffler, Matthias; Rinke, Patrick
2016-04-01
We present an embedding scheme for periodic systems that facilitates the treatment of the physically important part (here a unit cell or a supercell) with advanced electronic structure methods, that are computationally too expensive for periodic systems. The rest of the periodic system is treated with computationally less demanding approaches, e.g., Kohn-Sham density-functional theory, in a self-consistent manner. Our scheme is based on the concept of dynamical mean-field theory formulated in terms of Green's functions. Our real-space dynamical mean-field embedding scheme features two nested Dyson equations, one for the embedded cluster and another for the periodic surrounding. The total energy is computed from the resulting Green's functions. The performance of our scheme is demonstrated by treating the embedded region with hybrid functionals and many-body perturbation theory in the GW approach for simple bulk systems. The total energy and the density of states converge rapidly with respect to the computational parameters and approach their bulk limit with increasing cluster (i.e., computational supercell) size.
A general method for handling missing binary outcome data in randomized controlled trials
Jackson, Dan; White, Ian R; Mason, Dan; Sutton, Stephen
2014-01-01
Aims The analysis of randomized controlled trials with incomplete binary outcome data is challenging. We develop a general method for exploring the impact of missing data in such trials, with a focus on abstinence outcomes. Design We propose a sensitivity analysis where standard analyses, which could include ‘missing = smoking’ and ‘last observation carried forward’, are embedded in a wider class of models. Setting We apply our general method to data from two smoking cessation trials. Participants A total of 489 and 1758 participants from two smoking cessation trials. Measurements The abstinence outcomes were obtained using telephone interviews. Findings The estimated intervention effects from both trials depend on the sensitivity parameters used. The findings differ considerably in magnitude and statistical significance under quite extreme assumptions about the missing data, but are reasonably consistent under more moderate assumptions. Conclusions A new method for undertaking sensitivity analyses when handling missing data in trials with binary outcomes allows a wide range of assumptions about the missing data to be assessed. In two smoking cessation trials the results were insensitive to all but extreme assumptions. PMID:25171441
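A simplified numerical sketch of this kind of sensitivity analysis (not the authors' model): abstinence rates are recomputed while the assumed abstinence probability among participants with missing outcomes is varied between the 'missing = smoking' extreme and more optimistic assumptions. All numbers below are made up for illustration.

```python
import numpy as np

def risk_difference(succ_t, obs_t, n_t, succ_c, obs_c, n_c, p_miss_t, p_miss_c):
    """Intervention-minus-control difference in abstinence rates when missing
    outcomes are imputed as abstinent with probability p_miss_* in each arm."""
    rate_t = (succ_t + p_miss_t * (n_t - obs_t)) / n_t
    rate_c = (succ_c + p_miss_c * (n_c - obs_c)) / n_c
    return rate_t - rate_c

# Hypothetical trial: 250 participants per arm, some outcomes missing
succ_t, obs_t, n_t = 60, 200, 250     # intervention arm: abstinent, observed, randomized
succ_c, obs_c, n_c = 45, 210, 250     # control arm

for p in np.linspace(0.0, 0.5, 6):    # p = 0.0 reproduces 'missing = smoking'
    rd = risk_difference(succ_t, obs_t, n_t, succ_c, obs_c, n_c, p, p)
    print(f"assumed abstinence among missing = {p:.1f} -> risk difference = {rd:.3f}")
```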
Multiview Locally Linear Embedding for Effective Medical Image Retrieval
Shen, Hualei; Tao, Dacheng; Ma, Dianfu
2013-01-01
Content-based medical image retrieval continues to gain attention for its potential to assist radiological image interpretation and decision making. Many approaches have been proposed to improve the performance of medical image retrieval system, among which visual features such as SIFT, LBP, and intensity histogram play a critical role. Typically, these features are concatenated into a long vector to represent medical images, and thus traditional dimension reduction techniques such as locally linear embedding (LLE), principal component analysis (PCA), or laplacian eigenmaps (LE) can be employed to reduce the “curse of dimensionality”. Though these approaches show promising performance for medical image retrieval, the feature-concatenating method ignores the fact that different features have distinct physical meanings. In this paper, we propose a new method called multiview locally linear embedding (MLLE) for medical image retrieval. Following the patch alignment framework, MLLE preserves the geometric structure of the local patch in each feature space according to the LLE criterion. To explore complementary properties among a range of features, MLLE assigns different weights to local patches from different feature spaces. Finally, MLLE employs global coordinate alignment and alternating optimization techniques to learn a smooth low-dimensional embedding from different features. To justify the effectiveness of MLLE for medical image retrieval, we compare it with conventional spectral embedding methods. We conduct experiments on a subset of the IRMA medical image data set. Evaluation results show that MLLE outperforms state-of-the-art dimension reduction methods. PMID:24349277
DOE Office of Scientific and Technical Information (OSTI.GOV)
Govind, Niranjan; Sushko, Petr V.; Hess, Wayne P.
2009-03-05
We present a study of the electronic excitations in insulating materials using an embedded- cluster method. The excited states of the embedded cluster are studied systematically using time-dependent density functional theory (TDDFT) and high-level equation-of-motion coupled cluster (EOMCC) methods. In particular, we have used EOMCC models with singles and doubles (EOMCCSD) and two approaches which account for the e®ect of triply excited con¯gurations in non-iterative and iterative fashions. We present calculations of the lowest surface excitations of the well-studied potassium bromide (KBr) system and compare our results with experiment. The bulk-surface exciton shift is also calculated at the TDDFT levelmore » and compared with experiment.« less
Integrated Micro-Optics for Microfluidic Detection.
Kazama, Yuto; Hibara, Akihide
2016-01-01
A method of embedding micro-optics into a microfluidic device was proposed and demonstrated. First, the usefulness of embedded right-angle prisms was demonstrated in microscope observation. Lateral-view microscopic observation of an aqueous dye flow in a 100-μm-sized microchannel was demonstrated. Then, the embedded right-angle prisms were utilized for multi-beam laser spectroscopy. Here, crossed-beam thermal lens detection of a liquid sample was applied to glucose detection.
NASA Astrophysics Data System (ADS)
Tamura, Yoshinobu; Yamada, Shigeru
OSS (open source software) systems, which serve as key components of critical infrastructures in our social lives, are still expanding. In particular, embedded OSS systems have been gaining a lot of attention in the embedded system area, e.g., Android, BusyBox, TRON, etc. However, poor handling of quality problems and customer support hinders the progress of embedded OSS. Also, it is difficult for developers to assess the reliability and portability of embedded OSS on a single-board computer. In this paper, we propose a method of software reliability assessment based on flexible hazard rates for embedded OSS. We also analyze actual data of software failure-occurrence time intervals to show numerical examples of software reliability assessment for embedded OSS. Moreover, we compare the proposed hazard rate model for embedded OSS with typical conventional hazard rate models using goodness-of-fit comparison criteria. Furthermore, we discuss the optimal software release problem for the porting phase based on the total expected software maintenance cost.
NASA Astrophysics Data System (ADS)
Zhang, D. P.; Lei, Y.; Shen, Z. B.
2017-12-01
The effect of a longitudinal magnetic field on the vibration response of a single-walled carbon nanotube (SWCNT) embedded in a viscoelastic medium is investigated. Based on nonlocal Euler-Bernoulli beam theory, Maxwell's relations, and the Kelvin viscoelastic foundation model, the governing equations of motion for vibration analysis are established. The complex natural frequencies and corresponding mode shapes in closed form for the embedded SWCNT with arbitrary boundary conditions are obtained using the transfer function method (TFM). New analytical expressions for the complex natural frequencies are also derived for certain typical boundary conditions and the Kelvin-Voigt model. Numerical results from the model are presented to show the effects of the nonlocal parameter, viscoelastic parameter, boundary conditions, aspect ratio, and strength of the magnetic field on the vibration characteristics of the embedded SWCNT in a longitudinal magnetic field. The results demonstrate the efficiency of the proposed methods for vibration analysis of embedded SWCNTs under magnetic field.
Analysis of S-box in Image Encryption Using Root Mean Square Error Method
NASA Astrophysics Data System (ADS)
Hussain, Iqtadar; Shah, Tariq; Gondal, Muhammad Asif; Mahmood, Hasan
2012-07-01
The use of substitution boxes (S-boxes) in encryption applications has proven to be an effective nonlinear component in creating confusion and randomness. The S-box is evolving and many variants appear in the literature, including the advanced encryption standard (AES) S-box, affine power affine (APA) S-box, Skipjack S-box, Gray S-box, Lui J S-box, residue prime number S-box, Xyi S-box, and S8 S-box. These S-boxes have algebraic and statistical properties which distinguish them from each other in terms of encryption strength. In some circumstances, the parameters from algebraic and statistical analysis yield results which do not provide clear evidence for distinguishing an S-box for application to a particular set of data. In image encryption applications, the use of S-boxes needs special care because the visual analysis and perception of a viewer can sometimes identify artifacts embedded in the image. In addition to the existing algebraic and statistical analyses already used for image encryption applications, we propose an application of the root mean square error technique, which further elaborates the results and enables the analyst to clearly distinguish between the performances of various S-boxes. While the use of root mean square error analysis in statistics has proven effective in determining the difference between original data and processed data, its use in image encryption has shown promising results in estimating the strength of the encryption method. In this paper, we show the application of root mean square error analysis to S-box image encryption. The parameters from this analysis are used in determining the strength of S-boxes.
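A minimal sketch of the root mean square error computation between a plain image and its encrypted version (generic NumPy code; the S-box encryption itself is represented here only by a placeholder byte substitution, not any of the S-boxes named above).

```python
import numpy as np

def rmse(original, processed):
    """Root mean square error between two images of equal shape."""
    a = original.astype(np.float64)
    b = processed.astype(np.float64)
    return float(np.sqrt(np.mean((a - b) ** 2)))

# Toy example: a random 8-bit "image" and a stand-in for S-box substitution
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
sbox = rng.permutation(256).astype(np.uint8)   # placeholder byte substitution table
enc = sbox[img]                                # apply the substitution per pixel

print(f"RMSE(original, encrypted) = {rmse(img, enc):.2f}")
```

A larger RMSE between the plain and encrypted images indicates that the substitution has changed pixel values more strongly, which is how the measure is used to compare S-boxes.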
A secure steganography for privacy protection in healthcare system.
Liu, Jing; Tang, Guangming; Sun, Yifeng
2013-04-01
Private data in healthcare systems require confidentiality protection during transmission. Steganography is the art of concealing data in a cover medium to convey messages confidentially. In this paper, we propose a steganographic method which can provide private data in medical systems with very secure protection. In our method, a cover image is first mapped into a 1D pixel sequence by a Hilbert filling curve and then divided into non-overlapping embedding units of three consecutive pixels. We use the adaptive pixel pair match (APPM) method to embed digits in the pixel value differences (PVD) of the three pixels, and the base of the embedded digits depends on the differences among the three pixels. By solving an optimization problem, minimal distortion of the pixel triplets caused by data embedding can be obtained. The experimental results show our method is better suited to privacy protection in healthcare systems than prior steganographic works.
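The sketch below illustrates the general flavour of difference-driven embedding into triplets of consecutive pixels; it uses a plain row-major pixel ordering and a simple modulo adjustment rather than the Hilbert-curve mapping and the APPM scheme of the paper, so it should be read as a toy stand-in, not a reimplementation. The threshold values and helper names are invented.

```python
import numpy as np

def base_from_difference(d):
    """Use a larger digit base where the local pixel difference is larger
    (rougher regions tolerate more distortion). Thresholds are arbitrary."""
    return 2 if d < 8 else 4 if d < 32 else 8

def embed(cover, digits):
    """Hide one digit per non-overlapping pixel triplet by moving the middle
    pixel to the nearest value congruent to the digit modulo the local base."""
    flat = cover.reshape(-1).astype(np.int32).copy()   # row-major stand-in for a Hilbert scan
    k = 0
    for i in range(0, len(flat) - 2, 3):
        if k >= len(digits):
            break
        p0, p1, p2 = flat[i], flat[i + 1], flat[i + 2]
        b = base_from_difference(abs(p0 - p2))          # depends only on untouched pixels
        d = digits[k] % b
        candidates = np.arange(d, 256, b)               # admissible middle-pixel values
        flat[i + 1] = candidates[np.argmin(np.abs(candidates - p1))]
        k += 1
    return flat.reshape(cover.shape).astype(np.uint8)

def extract(stego, n_digits):
    flat = stego.reshape(-1).astype(np.int32)
    out = []
    for i in range(0, len(flat) - 2, 3):
        if len(out) >= n_digits:
            break
        b = base_from_difference(abs(flat[i] - flat[i + 2]))
        out.append(int(flat[i + 1] % b))
    return out

cover = np.random.default_rng(0).integers(0, 256, size=(64, 64), dtype=np.uint8)
msg = [1, 0, 1, 1, 0, 1, 0, 0]          # binary digits fit any base >= 2
assert extract(embed(cover, msg), len(msg)) == msg
```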
ProMotE: an efficient algorithm for counting independent motifs in uncertain network topologies.
Ren, Yuanfang; Sarkar, Aisharjya; Kahveci, Tamer
2018-06-26
Identifying motifs in biological networks is essential in uncovering key functions served by these networks. Finding non-overlapping motif instances is however a computationally challenging task. The fact that biological interactions are uncertain events further complicates the problem, as it makes the existence of an embedding of a given motif an uncertain event as well. In this paper, we develop a novel method, ProMotE (Probabilistic Motif Embedding), to count non-overlapping embeddings of a given motif in probabilistic networks. We utilize a polynomial model to capture the uncertainty. We develop three strategies to scale our algorithm to large networks. Our experiments demonstrate that our method scales to large networks in practical time with high accuracy where existing methods fail. Moreover, our experiments on cancer and degenerative disease networks show that our method helps in uncovering key functional characteristics of biological networks.
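As a small illustration of why uncertainty complicates motif counting (this shows only the expected-count calculation for one motif shape, not the non-overlapping counting or the polynomial model of ProMotE): in a probabilistic network where each edge exists independently with probability p(e), the expected number of triangle embeddings is the sum, over vertex triples, of the product of the three edge probabilities.

```python
from itertools import combinations

def expected_triangle_count(edge_prob):
    """edge_prob: dict mapping frozenset({u, v}) -> probability that edge (u, v) exists."""
    nodes = set()
    for e in edge_prob:
        nodes |= e
    total = 0.0
    for u, v, w in combinations(sorted(nodes), 3):
        p_uv = edge_prob.get(frozenset((u, v)), 0.0)
        p_vw = edge_prob.get(frozenset((v, w)), 0.0)
        p_uw = edge_prob.get(frozenset((u, w)), 0.0)
        total += p_uv * p_vw * p_uw      # independence assumption
    return total

# Toy probabilistic interaction network
edges = {frozenset(e): p for e, p in [(("a", "b"), 0.9), (("b", "c"), 0.8),
                                      (("a", "c"), 0.5), (("c", "d"), 0.7),
                                      (("b", "d"), 0.4)]}
print(expected_triangle_count(edges))   # 0.9*0.8*0.5 + 0.8*0.7*0.4 = 0.584
```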
Embedding global and collective in a torus network with message class map based tree path selection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Dong; Coteus, Paul W.; Eisley, Noel A.
Embodiments of the invention provide a method, system and computer program product for embedding a global barrier and global interrupt network in a parallel computer system organized as a torus network. The computer system includes a multitude of nodes. In one embodiment, the method comprises taking inputs from a set of receivers of the nodes, dividing the inputs from the receivers into a plurality of classes, combining the inputs of each of the classes to obtain a result, and sending said result to a set of senders of the nodes. Embodiments of the invention provide a method, system and computer program product for embedding a collective network in a parallel computer system organized as a torus network. In one embodiment, the method comprises adding to a torus network a central collective logic to route messages among at least a group of nodes in a tree structure.
Multiscale Methods for Nuclear Reactor Analysis
NASA Astrophysics Data System (ADS)
Collins, Benjamin S.
The ability to accurately predict local pin powers in nuclear reactors is necessary to understand the mechanisms that cause fuel pin failure during steady state and transient operation. In the research presented here, methods are developed to improve the local solution using high order methods with boundary conditions from a low order global solution. Several different core configurations were tested to determine the improvement in the local pin powers compared to the standard techniques, that use diffusion theory and pin power reconstruction (PPR). Two different multiscale methods were developed and analyzed; the post-refinement multiscale method and the embedded multiscale method. The post-refinement multiscale methods use the global solution to determine boundary conditions for the local solution. The local solution is solved using either a fixed boundary source or an albedo boundary condition; this solution is "post-refinement" and thus has no impact on the global solution. The embedded multiscale method allows the local solver to change the global solution to provide an improved global and local solution. The post-refinement multiscale method is assessed using three core designs. When the local solution has more energy groups, the fixed source method has some difficulties near the interface: however the albedo method works well for all cases. In order to remedy the issue with boundary condition errors for the fixed source method, a buffer region is used to act as a filter, which decreases the sensitivity of the solution to the boundary condition. Both the albedo and fixed source methods benefit from the use of a buffer region. Unlike the post-refinement method, the embedded multiscale method alters the global solution. The ability to change the global solution allows for refinement in areas where the errors in the few group nodal diffusion are typically large. The embedded method is shown to improve the global solution when it is applied to a MOX/LEU assembly interface, the fuel/reflector interface, and assemblies where control rods are inserted. The embedded method also allows for multiple solution levels to be applied in a single calculation. The addition of intermediate levels to the solution improves the accuracy of the method. Both multiscale methods considered here have benefits and drawbacks, but both can provide improvements over the current PPR methodology.
NASA Astrophysics Data System (ADS)
Lee, Kyu J.; Kunii, T. L.; Noma, T.
1993-01-01
In this paper, we propose a syntactic pattern recognition method for non-schematic drawings, based on a new attributed graph grammar with flexible embedding. In our graph grammar, the embedding rule permits the nodes of a guest graph to be arbitrarily connected with the nodes of a host graph. The ambiguity caused by this flexible embedding is controlled with the evaluation of synthesized attributes and the check of context sensitivity. To integrate parsing with the synthesized attribute evaluation and the context sensitivity check, we also develop a bottom up parsing algorithm.
Embedding methods for the steady Euler equations
NASA Technical Reports Server (NTRS)
Chang, S. H.; Johnson, G. M.
1983-01-01
An approach to the numerical solution of the steady Euler equations is to embed the first-order Euler system in a second-order system and then to recapture the original solution by imposing additional boundary conditions. Initial development of this approach and computational experimentation with it were previously based on heuristic physical reasoning. This has led to the construction of a relaxation procedure for the solution of two-dimensional steady flow problems. The theoretical justification for the embedding approach is addressed. It is proven that, with the appropriate choice of embedding operator and additional boundary conditions, the solution to the embedded system is exactly the one to the original Euler equations. Hence, solving the embedded version of the Euler equations will not produce extraneous solutions.
Jimeno Yepes, Antonio
2017-09-01
Word sense disambiguation helps identify the proper sense of ambiguous words in text. With large terminologies such as the UMLS Metathesaurus, ambiguities appear and highly effective disambiguation methods are required. Supervised learning algorithms are one approach used to perform disambiguation. Features extracted from the context of an ambiguous word are used to identify the proper sense of such a word. The type of features has an impact on machine learning methods and thus affects disambiguation performance. In this work, we have evaluated several types of features derived from the context of the ambiguous word, and we have also explored more global features derived from MEDLINE using word embeddings. Results show that word embeddings improve the performance of more traditional features and also allow the use of recurrent neural network classifiers based on Long Short-Term Memory (LSTM) nodes. The combination of unigrams and word embeddings with an SVM sets a new state-of-the-art performance with a macro accuracy of 95.97 on the MSH WSD data set. Copyright © 2017 Elsevier Inc. All rights reserved.
A real-time spike sorting method based on the embedded GPU.
Zelan Yang; Kedi Xu; Xiang Tian; Shaomin Zhang; Xiaoxiang Zheng
2017-07-01
Microelectrode arrays with hundreds of channels have been widely used to acquire neuron population signals in neuroscience studies. Online spike sorting is becoming one of the most important challenges for high-throughput neural signal acquisition systems. Graphics processing units (GPUs) with high parallel computing capability might provide an alternative solution for increasing real-time computational demands on spike sorting. This study reports a method of real-time spike sorting through the compute unified device architecture (CUDA), implemented on an embedded GPU (NVIDIA JETSON Tegra K1, TK1). The sorting approach is based on principal component analysis (PCA) and K-means. By analyzing the parallelism of each process, the method was further optimized in the thread memory model of the GPU. Our results showed that the GPU-based classifier on the TK1 is 37.92 times faster than the MATLAB-based classifier on a PC, while their accuracies were the same. The high-performance computing features of the embedded GPU demonstrated in our studies suggest that embedded GPUs provide a promising platform for real-time neural signal processing.
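A CPU-only sketch of the sorting pipeline the abstract describes (PCA for feature extraction followed by K-means clustering); the CUDA kernels and Tegra K1-specific optimizations are of course not shown, and the waveform data here are synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def sort_spikes(waveforms, n_components=3, n_units=3, seed=0):
    """waveforms: (n_spikes, n_samples) array of aligned spike snippets.
    Returns (labels, features): a cluster label per spike and its PCA features."""
    features = PCA(n_components=n_components).fit_transform(waveforms)
    labels = KMeans(n_clusters=n_units, n_init=10, random_state=seed).fit_predict(features)
    return labels, features

# Synthetic data: three spike shapes plus noise, 48 samples per snippet
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 48)
templates = np.stack([np.sin(2 * np.pi * f * t) * np.exp(-4 * t) for f in (3, 5, 8)])
waveforms = np.vstack([templates[i] + 0.1 * rng.normal(size=(200, 48))
                       for i in range(3)])
labels, _ = sort_spikes(waveforms)
print(np.bincount(labels))   # roughly 200 spikes per cluster
```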
NASA Astrophysics Data System (ADS)
Yu, Lingyu; Bao, Jingjing; Giurgiutiu, Victor
2004-07-01
The embedded ultrasonic structural radar (EUSR) algorithm is developed for using a piezoelectric wafer active sensor (PWAS) array to detect defects within a large area of a thin-plate specimen. Signal processing techniques are used to extract the time of flight of the wave packets, and thereby to determine the location of the defects with the EUSR algorithm. In our research, the transient tone-burst wave propagation signals are generated and collected by the embedded PWAS. Then, with signal processing, the frequency content of the signals and the time of flight of individual frequencies are determined. This paper starts with an introduction to the embedded ultrasonic structural radar algorithm. We then describe the signal processing methods used to extract the time of flight of the wave packets, which include wavelet denoising, cross-correlation, and the Hilbert transform. Though the hardware can provide an averaging function to eliminate noise arising during signal collection, wavelet denoising is included to ensure better signal quality for applications in severe real-world environments. For better recognition of the time of flight, the cross-correlation method is used. The Hilbert transform is applied to the signals after cross-correlation in order to extract their envelope. Signal processing and EUSR are both implemented in a user-friendly graphical interface program in LabView. We conclude with a description of our vision for applying EUSR signal analysis to structural health monitoring and embedded nondestructive evaluation. To this end, we envisage an automatic damage detection application utilizing embedded PWAS, EUSR, and advanced signal processing.
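A small sketch of two of the signal-processing steps named above, applied to synthetic tone-burst signals: cross-correlation to line up the received wave packet with the excitation, and the Hilbert transform to extract its envelope for a cleaner time-of-flight pick. The sampling rate, frequencies, and delay are invented for illustration.

```python
import numpy as np
from scipy.signal import hilbert

fs = 1e6                                    # 1 MHz sampling (illustrative)
t = np.arange(0, 200e-6, 1 / fs)

# 100 kHz Hanning-windowed tone burst as the excitation
f0, cycles = 100e3, 5
burst_len = int(cycles / f0 * fs)
excitation = np.zeros_like(t)
excitation[:burst_len] = np.sin(2 * np.pi * f0 * t[:burst_len]) * np.hanning(burst_len)

# Received signal: the same burst delayed by 60 us, attenuated, plus noise
true_delay = 60e-6
received = 0.3 * np.roll(excitation, int(true_delay * fs))
received += 0.02 * np.random.default_rng(0).normal(size=t.size)

# Cross-correlate, then take the envelope of the correlation via the Hilbert transform
xcorr = np.correlate(received, excitation, mode="full")
lags = np.arange(-t.size + 1, t.size) / fs
envelope = np.abs(hilbert(xcorr))
tof = lags[np.argmax(envelope)]
print(f"estimated time of flight: {tof * 1e6:.1f} us (true: {true_delay * 1e6:.0f} us)")
```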
Semantic Annotation of Complex Text Structures in Problem Reports
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Throop, David R.; Fleming, Land D.
2011-01-01
Text analysis is important for effective information retrieval from databases where the critical information is embedded in text fields. Aerospace safety depends on effective retrieval of relevant and related problem reports for the purpose of trend analysis. The complex text syntax in problem descriptions has limited statistical text mining of problem reports. The presentation describes an intelligent tagging approach that applies syntactic and then semantic analysis to overcome this problem. The tags identify types of problems and equipment that are embedded in the text descriptions. The power of these tags is illustrated in a faceted searching and browsing interface for problem report trending that combines automatically generated tags with database code fields and temporal information.
Stellrecht, Elizabeth; Chiarella, Deborah
2015-01-01
The University at Buffalo Health Sciences Library provides reference and instructional services to support research, curricular, and clinical programs of the University at Buffalo. With funding from an NN/LM MAR Technology Improvement Award, the University at Buffalo Health Sciences Library (UBHSL) purchased iPads to develop embedded reference and educational services. Usage statistics were collected over a ten-month period to measure the frequency of iPad use for mobile services. While this experiment demonstrates that the iPad can be used to meet the library user's needs outside of the physical library space, this article will also offer advice for others who are considering implementing their own program.
Assessment of changing interdependencies between human electroencephalograms using nonlinear methods
NASA Astrophysics Data System (ADS)
Pereda, E.; Rial, R.; Gamundi, A.; González, J.
2001-01-01
We investigate the problems that might arise when two recently developed methods for detecting interdependencies between time series using state space embedding are applied to signals of different complexity. With this aim, these methods were used to assess the interdependencies between two electroencephalographic channels from 10 adult human subjects during different vigilance states. The significance and nature of the measured interdependencies were checked by comparing the results of the original data with those of different types of surrogates. We found that even with proper reconstructions of the dynamics of the time series, both methods may give wrong statistical evidence of decreasing interdependencies during deep sleep due to changes in the complexity of each individual channel. The main factor responsible for this result was the use of an insufficient number of neighbors in the calculations. Once this problem was surmounted, both methods showed the existence of a significant relationship between the channels which was mostly of linear type and increased from awake to slow wave sleep. We conclude that the significance of the qualitative results provided by both methods must be carefully tested before drawing any conclusion about the implications of such results.
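For readers unfamiliar with the machinery behind such interdependence measures, the sketch below shows a plain time-delay (state space) embedding of a scalar signal and a k-nearest-neighbour query on the reconstructed trajectory; the embedding dimension, delay and neighbour count are illustrative assumptions, and the specific interdependence statistics of the paper are not reproduced.

```python
# Minimal sketch of state-space (time-delay) embedding of a scalar signal and a
# nearest-neighbour query, the building blocks of the interdependence measures
# discussed above. Parameters (dimension, delay, k) are illustrative.
import numpy as np
from scipy.spatial import cKDTree

def delay_embed(x, dim=10, tau=2):
    """Return the delay-embedded trajectory matrix, one state vector per row."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

# Example: embed a noisy sine and find each state's k nearest neighbours.
x = np.sin(np.linspace(0, 40 * np.pi, 2000)) + 0.05 * np.random.randn(2000)
traj = delay_embed(x)
k = 8
dist, idx = cKDTree(traj).query(traj, k=k + 1)   # first neighbour is the point itself
neighbours = idx[:, 1:]
print(traj.shape, neighbours.shape)
```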
Recursive feature selection with significant variables of support vectors.
Tsai, Chen-An; Huang, Chien-Hsun; Chang, Ching-Wei; Chen, Chun-Houh
2012-01-01
The development of DNA microarrays allows researchers to screen thousands of genes simultaneously and helps determine high- and low-expression genes in normal and disease tissues. Selecting relevant genes for cancer classification is an important issue. Most gene selection methods use univariate ranking criteria and arbitrarily choose a threshold to select genes. However, the parameter setting may not be compatible with the selected classification algorithms. In this paper, we propose a new gene selection method (SVM-t) based on the use of t-statistics embedded in a support vector machine. We compared its performance to two similar SVM-based methods: SVM recursive feature elimination (SVMRFE) and recursive support vector machine (RSVM). The three methods were compared based on extensive simulation experiments and analyses of two published microarray datasets. In the simulation experiments, we found that the proposed method is more robust in selecting informative genes than SVMRFE and RSVM and capable of attaining good classification performance when the variations of informative and noninformative genes differ. In the analysis of the two microarray datasets, the proposed method yields better performance in identifying fewer genes with good prediction accuracy, compared to SVMRFE and RSVM.
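The following sketch loosely mirrors the SVM-t idea rather than reproducing the authors' algorithm: features are recursively eliminated according to the absolute t-statistic computed on the support vectors of a linear SVM, using scikit-learn and SciPy on synthetic data; the elimination schedule and all parameters are assumptions for illustration.

```python
# Rough sketch (not the authors' exact SVM-t algorithm): recursively eliminate
# features ranked by the absolute t-statistic computed on the support vectors
# of a linear SVM. Data and parameters are illustrative.
import numpy as np
from scipy.stats import ttest_ind
from sklearn.svm import SVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=80, n_features=200, n_informative=10,
                           random_state=0)
features = np.arange(X.shape[1])

while len(features) > 10:
    clf = SVC(kernel="linear", C=1.0).fit(X[:, features], y)
    sv = clf.support_                                  # indices of support vectors
    t, _ = ttest_ind(X[sv][y[sv] == 0][:, features],
                     X[sv][y[sv] == 1][:, features], equal_var=False)
    t = np.nan_to_num(t)
    # Drop the half of the remaining features with the smallest |t|.
    keep = np.argsort(np.abs(t))[len(features) // 2:]
    features = features[keep]

print("selected features:", np.sort(features))
```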
Mechanical properties of metastatic breast cancer cells invading into collagen I matrices
NASA Astrophysics Data System (ADS)
Ros, Robert
2014-03-01
Mechanical interactions between cells and the extracellular matrix (ECM) are critical to the metastasis of cancer cells. To investigate the mechanical interplay between the cells and ECM during invasion, we created thin bovine collagen I hydrogels ranging from 0.1-5 kPa in Young's modulus that were seeded with highly metastatic MDA-MB-231 breast cancer cells. Significant population fractions invaded the matrices either partially or fully within 24 h. We then combined confocal fluorescence microscopy and indentation with an atomic force microscope to determine the Young's moduli of individual embedded cells and the pericellular matrix using novel analysis methods for heterogeneous samples. In partially embedded cells, we observe a statistically significant correlation between the degree of invasion and the Young's modulus, which was up to an order of magnitude greater than that of the same cells measured in 2D. ROCK inhibition returned the cells' Young's moduli to values similar to 2D and diminished but did not abrogate invasion. This provides evidence that Rho/ROCK-dependent acto-myosin contractility is employed for matrix reorganization during initial invasion, and suggests the observed cell stiffening is due to an attendant increase in actin stress fibers. This work was supported by the National Cancer Institute under the grant U54 CA143862.
Ghanbari, Yasser; Smith, Alex R.; Schultz, Robert T.; Verma, Ragini
2014-01-01
Diffusion tensor imaging (DTI) offers rich insights into the physical characteristics of white matter (WM) fiber tracts and their development in the brain, facilitating a network representation of the brain's traffic pathways. Such a network representation of brain connectivity has provided a novel means of investigating brain changes arising from pathology, development or aging. The high dimensionality of these connectivity networks necessitates the development of methods that identify the connectivity building blocks, or sub-network components, that characterize the underlying variation in the population. In addition, the projection of the subject networks onto the basis set provides a low-dimensional representation that teases apart different sources of variation in the sample, facilitating variation-specific statistical analysis. We propose a unified framework of non-negative matrix factorization and graph embedding for learning sub-network patterns of connectivity by their projective non-negative decomposition into a reconstructive basis set, as well as additional basis sets representing variational sources in the population such as age and pathology. The proposed framework is applied to a study of diffusion-based connectivity in subjects with autism and shows localized sparse sub-networks that mostly capture the changes related to pathology and developmental variation. PMID:25037933
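A plain non-negative matrix factorization of vectorized connectivity matrices, as sketched below with scikit-learn, conveys the basic idea of decomposing subject networks into non-negative sub-network components and per-subject loadings; it does not implement the authors' projective NMF with graph embedding, and the random connectivity data are placeholders.

```python
# Minimal sketch (plain NMF, not the authors' projective NMF with graph
# embedding): factor vectorized connectivity matrices into a small set of
# non-negative sub-network components and per-subject loadings.
import numpy as np
from sklearn.decomposition import NMF

n_subjects, n_nodes, n_components = 40, 30, 5
rng = np.random.default_rng(0)

# Stack the upper triangle of each subject's (non-negative) connectivity matrix.
conn = rng.random((n_subjects, n_nodes, n_nodes))
conn = (conn + conn.transpose(0, 2, 1)) / 2          # symmetrise
iu = np.triu_indices(n_nodes, k=1)
X = conn[:, iu[0], iu[1]]                            # shape: subjects x edges

model = NMF(n_components=n_components, init="nndsvda", max_iter=500, random_state=0)
loadings = model.fit_transform(X)        # subject-wise weights on each component
components = model.components_          # sub-network (edge) patterns
print(loadings.shape, components.shape)
```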
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Yijie; Lim, Hyun-Kyung; de Almeida, Valmor F
2012-06-01
This progress report describes the development of a front tracking method for the solution of the governing equations of motion for two-phase micromixing of incompressible, viscous, liquid-liquid solvent extraction processes. The ability to compute the detailed local interfacial structure of the mixture allows characterization of the statistical properties of the two-phase mixture in terms of droplets, filaments, and other structures which emerge as a dispersed phase embedded into a continuous phase. Such a statistical picture provides the information needed for building a consistent coarsened model applicable to the entire mixing device. Coarsening is an undertaking for a future mathematical development and is outside the scope of the present work. We present here a method for accurate simulation of the micromixing dynamics of an aqueous and an organic phase exposed to intense centrifugal force and shearing stress. The onset of mixing is the result of the combination of the classical Rayleigh-Taylor and Kelvin-Helmholtz instabilities. A mixing environment that emulates a sector of the annular mixing zone of a centrifugal contactor is used for the mathematical domain. The domain is small enough to allow for resolution of the individual interfacial structures and large enough to allow for an analysis of their statistical distribution of sizes and shapes. A set of accurate algorithms for this application requires an advanced front tracking approach constrained by the incompressibility condition. This research is aimed at designing and implementing these algorithms. We demonstrate verification and convergence results for one-phase and unmixed, two-phase flows. In addition we report on preliminary results for mixed, two-phase flow for realistic operating flow parameters.
Pandey, Pinki; Dixit, Alok; Tanwar, Aparna; Sharma, Anuradha; Mittal, Sanjeev
2014-01-01
Introduction: Our study presents a new deparaffinizing and hematoxylin and eosin (H and E) staining method that uses easily available, nontoxic and eco-friendly diluted liquid dish washing soap (DWS), completely eliminating expensive and hazardous xylene and alcohol from deparaffinization and rehydration prior to staining, from staining itself, and from dehydration prior to mounting. The aim was to evaluate and compare the quality of liquid DWS treated xylene and alcohol free (XAF) sections with that of the conventional H and E sections. Materials and Methods: A total of 100 paraffin embedded tissue blocks from different tissues were included. From each tissue block, one section was stained with the conventional H and E method (normal sections) and the other with the XAF H and E method (soapy sections). Slides were scored using five parameters: nuclear staining, cytoplasmic staining, clarity, uniformity, and crispness of staining. The Z-test was used for statistical analysis. Results: Soapy sections scored better for cytoplasmic (90%) and crisp (95%) staining, with a statistically significant difference, whereas for uniformity of staining normal sections (88%) scored better than soapy sections (72%) (Z = 2.82, P < 0.05). For nuclear staining (90%) and clarity of staining (90%), total scores favored soapy sections, but the difference was not statistically significant. About 84% of normal sections stained adequately for diagnosis compared with 86% of soapy sections (Z = 0.396, P > 0.05). Conclusion: Liquid DWS is a safe and efficient alternative to xylene and alcohol in deparaffinization and the routine H and E staining procedure. We are documenting this project so that it can be used as a model for other histology laboratories. PMID:25328332
Method for hygromechanical characterization of graphite/epoxy composite
NASA Technical Reports Server (NTRS)
Yaniv, Gershon; Peimanidis, Gus; Daniel, Isaac M.
1987-01-01
An experimental method is described for measuring hygroscopic swelling strains and mechanical strains of moisture-conditioned composite specimens. The method consists of embedding encapsulated strain gages in the midplane of the composite laminate; thus it does not interfere with normal moisture diffusion. It is particularly suited for measuring moisture swelling coefficients and for mechanical testing of moisture-conditioned specimens at high strain rates. Results obtained by the embedded gage method were shown to be more reliable and reproducible than those obtained by surface gages, dial gages, or extensometers.
Time delayed Ensemble Nudging Method
NASA Astrophysics Data System (ADS)
An, Zhe; Abarbanel, Henry
Optimal nudging methods based on time-delay embedding theory have shown potential for analysis and data assimilation in the previous literature. To extend their application and promote practical implementation, a new nudging assimilation method based on the time-delay embedding space is presented and its connection with other standard assimilation methods is studied. Results show that incorporating information from the time series of the data can reduce the number of observations needed to preserve the quality of the numerical prediction, making the method a potential alternative for data assimilation in large geophysical models.
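A minimal sense of how nudging assimilation works is given by the sketch below, which nudges a Lorenz-63 model toward noisy observations of a single state variable; this is plain nudging rather than the time-delayed ensemble scheme of the abstract, and the integrator, gain and noise level are illustrative assumptions.

```python
# Minimal illustration of nudging data assimilation on the Lorenz-63 system
# (plain nudging of a partially observed state, not the time-delayed ensemble
# scheme described above). All parameters are illustrative.
import numpy as np

def lorenz63(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

dt, steps, gain = 0.005, 8000, 30.0
truth = np.array([1.0, 1.0, 1.0])
model = np.array([-5.0, 3.0, 20.0])              # wrong initial condition

for n in range(steps):
    truth = truth + dt * lorenz63(truth)         # forward Euler for brevity
    obs_x = truth[0] + 0.1 * np.random.randn()   # observe only x, with noise
    nudge = np.array([gain * (obs_x - model[0]), 0.0, 0.0])
    model = model + dt * (lorenz63(model) + nudge)

print("final state error:", np.linalg.norm(truth - model))
```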
Subsystem real-time time dependent density functional theory.
Krishtal, Alisa; Ceresoli, Davide; Pavanello, Michele
2015-04-21
We present the extension of the Frozen Density Embedding (FDE) formulation of subsystem Density Functional Theory (DFT) to real-time Time Dependent Density Functional Theory (rt-TDDFT). FDE is a DFT-in-DFT embedding method that allows a larger Kohn-Sham system to be partitioned into a set of smaller, coupled Kohn-Sham systems. In addition to the computational advantage, FDE provides physical insight into the properties of embedded systems and the coupling interactions between them. The extension to rt-TDDFT is done straightforwardly by evolving the Kohn-Sham subsystems in time simultaneously, while updating the embedding potential between the systems at every time step. Two main applications are presented: the explicit excitation energy transfer in real time between subsystems, demonstrated for the case of the Na4 cluster, and the effect of the embedding on the optical spectra of coupled chromophores. In particular, the importance of including the full dynamic response in the embedding potential is demonstrated.
2013-01-01
Background BRAF mutation is an important diagnostic and prognostic marker in patients with papillary thyroid carcinoma (PTC). To be applicable in clinical laboratories with limited equipment, diverse testing methods are required to detect BRAF mutation. Methods A shifted termination assay (STA) fragment analysis was used to detect common V600 BRAF mutations in 159 PTCs with DNAs extracted from formalin-fixed paraffin-embedded tumor tissue. The results of STA fragment analysis were compared to those of direct sequencing. Serial dilutions of BRAF mutant cell line (SNU-790) were used to calculate limit of detection (LOD). Results BRAF mutations were detected in 119 (74.8%) PTCs by STA fragment analysis. In direct sequencing, BRAF mutations were observed in 118 (74.2%) cases. The results of STA fragment analysis had high correlation with those of direct sequencing (p < 0.00001, κ = 0.98). The LOD of STA fragment analysis and direct sequencing was 6% and 12.5%, respectively. In PTCs with pT3/T4 stages, BRAF mutation was observed in 83.8% of cases. In pT1/T2 carcinomas, BRAF mutation was detected in 65.9% and this difference was statistically significant (p = 0.007). Moreover, BRAF mutation was more frequent in PTCs with extrathyroidal invasion than tumors without extrathyroidal invasion (84.7% versus 62.2%, p = 0.001). To prepare and run the reactions, direct sequencing required 450 minutes while STA fragment analysis needed 290 minutes. Conclusions STA fragment analysis is a simple and sensitive method to detect BRAF V600 mutations in formalin-fixed paraffin-embedded clinical samples. Virtual Slides The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/5684057089135749 PMID:23883275
A minimization method on the basis of embedding the feasible set and the epigraph
NASA Astrophysics Data System (ADS)
Zabotin, I. Ya; Shulgina, O. N.; Yarullin, R. S.
2016-11-01
We propose a conditional minimization method for convex nonsmooth functions that belongs to the class of cutting-plane methods. While constructing the iteration points, the feasible set and the epigraph of the objective function are approximated by polyhedral sets, so the auxiliary problems that yield the iteration points are linear programming problems. During the optimization process, the sets that approximate the epigraph can be updated; these updates are performed by periodically dropping the cutting planes that form the embedding sets. Convergence of the proposed method is proved, and several realizations of the method are discussed.
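The sketch below illustrates the general cutting-plane idea in the spirit of Kelley's method: the epigraph of a convex nonsmooth objective is approximated by accumulated linear cuts and each iterate solves a small linear program via scipy.optimize.linprog; a box constraint stands in for the polyhedral approximation of the feasible set, and the paper's rule for periodically dropping cuts is not reproduced.

```python
# Minimal Kelley-style cutting-plane sketch: the epigraph of a convex nonsmooth
# objective is approximated by accumulated linear cuts and each iterate is the
# solution of a small LP. The box constraint stands in for the polyhedral
# approximation of the feasible set; cut dropping is not implemented here.
import numpy as np
from scipy.optimize import linprog

f = lambda x: np.abs(x[0] - 1.0) + np.abs(x[1] + 0.5)        # convex, nonsmooth
subgrad = lambda x: np.array([np.sign(x[0] - 1.0), np.sign(x[1] + 0.5)])

lo, hi = -5.0, 5.0                    # feasible set: the box [lo, hi]^2
x = np.array([4.0, -4.0])
cuts_A, cuts_b = [], []               # each cut: f(xk) + g^T (x - xk) <= t

for _ in range(30):
    g = subgrad(x)
    cuts_A.append(np.append(g, -1.0))            # g^T x - t <= g^T xk - f(xk)
    cuts_b.append(g @ x - f(x))
    # Minimize t over (x1, x2, t) subject to all cuts and the box constraints.
    res = linprog(c=[0.0, 0.0, 1.0], A_ub=np.array(cuts_A), b_ub=np.array(cuts_b),
                  bounds=[(lo, hi), (lo, hi), (None, None)], method="highs")
    x = res.x[:2]

print("approximate minimizer:", x, "objective:", f(x))
```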
Gupta, Vishal; Pandey, Pulak M
2016-11-01
Thermal necrosis is one of the major problems associated with the bone drilling process in orthopedic/trauma surgical operations. To overcome this problem, a new bone drilling method has been introduced recently. Studies were carried out with rotary ultrasonic drilling (RUD) on pig bones using diamond-coated abrasive hollow tools. In the present work, the influence of process parameters (rotational speed, feed rate, drill diameter and vibrational amplitude) on the change in temperature was studied using a design-of-experiments technique, response surface methodology (RSM), and data analysis was carried out using analysis of variance (ANOVA). Temperature was recorded using an embedded thermocouple technique at distances of 0.5 mm, 1.0 mm, 1.5 mm and 2.0 mm from the drill site. A statistical model was developed to predict the maximum temperature at the drill tool-bone interface. It was observed that temperature increased with increasing rotational speed, feed rate and drill diameter, and decreased with increasing vibrational amplitude. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
Nelson, Douglas L; Dyrdal, Gunvor M; Goodmon, Leilani B
2005-08-01
Measuring lexical knowledge poses a challenge to the study of the influence of preexisting knowledge on the retrieval of new memories. Many tasks focus on word pairs, but words are embedded in associative networks, so how should preexisting pair strength be measured? It has been measured by free association, similarity ratings, and co-occurrence statistics. Researchers interpret free association response probabilities as unbiased estimates of forward cue-to-target strength. In Study 1, analyses of large free association and extralist cued recall databases indicate that this interpretation is incorrect. Competitor and backward strengths bias free association probabilities, and as with other recall tasks, preexisting strength is described by a ratio rule. In Study 2, associative similarity ratings are predicted by forward and backward, but not by competitor, strength. Preexisting strength is not a unitary construct, because its measurement varies with method. Furthermore, free association probabilities predict extralist cued recall better than do ratings and co-occurrence statistics. The measure that most closely matches the criterion task may provide the best estimate of the identity of preexisting strength.
2017-03-20
Keywords: computation, prime implicates, Boolean abstraction, real-time embedded software, software synthesis, correct-by-construction software design, model… Cited work: "…types for time-dependent data-flow networks", J.-P. Talpin, P. Jouvelot, S. Shukla, ACM-IEEE Conference on Methods and Models for System Design.
A new JPEG-based steganographic algorithm for mobile devices
NASA Astrophysics Data System (ADS)
Agaian, Sos S.; Cherukuri, Ravindranath C.; Schneider, Erik C.; White, Gregory B.
2006-05-01
Currently, cellular phones constitute a significant portion of the global telecommunications market. Modern cellular phones offer sophisticated features such as Internet access, on-board cameras, and expandable memory, which provide these devices with excellent multimedia capabilities. Because of the high volume of cellular traffic and the ability of these devices to transmit nearly all forms of data, the need for an increased level of security in wireless communications is a growing concern. Steganography could provide a solution to this important problem. In this article, we present a new algorithm for JPEG-compressed images which is applicable to mobile platforms. This algorithm embeds sensitive information into quantized discrete cosine transform coefficients obtained from the cover JPEG. These coefficients are rearranged based on certain statistical properties and the inherent processing and memory constraints of mobile devices. Based on the energy variation and block characteristics of the cover image, the sensitive data is hidden by using a switching embedding technique proposed in this article. The proposed system offers high capacity while simultaneously withstanding visual and statistical attacks. Based on simulation results, the proposed method demonstrates an improved retention of first-order statistics when compared to existing JPEG-based steganographic algorithms, while maintaining a capacity which is comparable to F5 for certain cover images.
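For orientation, the toy sketch below hides a few bits in the least significant bits of quantized 8×8 block-DCT coefficients; it is not the switching embedding technique proposed in the article, omits entropy coding and zero-coefficient (shrinkage) handling, and uses an illustrative single quantization step.

```python
# Toy illustration of hiding bits in quantized 8x8 block-DCT coefficients
# (simple LSB replacement of mid-frequency coefficients; this is NOT the
# switching embedding technique proposed in the article, and entropy coding,
# zero-coefficient handling and full JPEG processing are omitted).
import numpy as np
from scipy.fft import dctn, idctn

Q = 16                                      # single illustrative quantization step
rng = np.random.default_rng(1)
cover = rng.integers(0, 256, (8, 8)).astype(float)   # one 8x8 "image" block
bits = [1, 0, 1]                                     # payload for this block

coeffs = np.round(dctn(cover - 128, norm="ortho") / Q).astype(int)
positions = [(2, 1), (1, 2), (2, 2)]                 # mid-frequency positions
for (r, c), b in zip(positions, bits):
    coeffs[r, c] = (coeffs[r, c] & ~1) | b           # set the LSB to the payload bit

stego = idctn(coeffs * Q, norm="ortho") + 128        # decompressed stego block
recovered = [coeffs[r, c] & 1 for (r, c) in positions]
print("embedded:", bits, "recovered:", recovered)
print("max pixel change:", np.max(np.abs(stego - cover)))
```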
Women's work and symptoms during midlife: Korean immigrant women.
Im, E O; Meleis, A I
2001-01-01
To describe how Korean immigrant women tend to describe their work experiences within their daily lives and how they relate their work to the symptoms experienced during midlife. Cross-sectional study using methodological triangulation. Using a convenience sampling method, 119 Korean immigrant women were recruited for the quantitative phase, and 21 among the 119 women were recruited for the qualitative phase. Data were collected using both questionnaires and in-depth interviews. The data were analyzed using descriptive and inferential statistics and thematic analysis. FINDINGS AND DISCUSSIONS: The symptoms that the women experienced during midlife were influenced by their work experience, which was complicated by their cultural heritage, gender issues embedded in their daily lives, and immigration transition. Complexities and diversities in women's work need to be incorporated in menopausal studies.
Decentralized safety concept for closed-loop controlled intensive care.
Kühn, Jan; Brendle, Christian; Stollenwerk, André; Schweigler, Martin; Kowalewski, Stefan; Janisch, Thorsten; Rossaint, Rolf; Leonhardt, Steffen; Walter, Marian; Kopp, Rüdger
2017-04-01
This paper presents a decentralized safety concept for networked intensive care setups, for which a decentralized network of sensors and actuators is realized by embedded microcontroller nodes. It is evaluated for up to eleven medical devices in a setup for automated acute respiratory distress syndrome (ARDS) therapy. In this contribution we highlight a blood pump supervision as exemplary safety measure, which allows a reliable bubble detection in an extracorporeal blood circulation. The approach is validated with data of animal experiments including 35 bubbles with a size between 0.05 and 0.3 ml. All 18 bubbles with a size down to 0.15 ml are successfully detected. By using hidden Markov models (HMMs) as statistical method the number of necessary sensors can be reduced by two pressure sensors.
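The sketch below conveys how a two-state Gaussian HMM can flag bubble-induced transients in a pressure signal; it uses the third-party hmmlearn package purely for illustration, the synthetic signal is a stand-in for the extracorporeal-circuit data, and it is not the published detector.

```python
# Illustrative sketch (not the published detector): fit a two-state Gaussian
# HMM to a pressure signal so that one hidden state captures normal flow and
# the other the transient disturbance caused by a passing bubble.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
pressure = 100 + rng.normal(0, 1, 2000)          # baseline pressure (a.u.)
pressure[800:830] -= 15                          # synthetic bubble-induced dip

model = GaussianHMM(n_components=2, covariance_type="diag", n_iter=50, random_state=0)
model.fit(pressure.reshape(-1, 1))
states = model.predict(pressure.reshape(-1, 1))

bubble_state = np.argmin(model.means_.ravel())   # the state with the lower mean
print("samples flagged as bubble:", np.flatnonzero(states == bubble_state)[:5], "...")
```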
Research on Generating Method of Embedded Software Test Document Based on Dynamic Model
NASA Astrophysics Data System (ADS)
Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Liu, Ying
2018-03-01
This paper presents a dynamic model-based test document generation method for embedded software that automatically generates two documents: the test requirements specification and the configuration item test documentation. The method allows dynamic test requirements to be captured in dynamic models, so that dynamic test requirement tracking can be generated easily. It automatically produces standardized test requirements and test documentation, addresses inconsistencies and incompleteness in document-related content, and improves efficiency.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bryant, M.A.; Braue Jr, E.H.
1992-12-31
Ten anesthetized hairless guinea pigs Crl:IAF(HA)BR were exposed to 10 μl of neat sulfur mustard (HD) in a vapor cup on their skin for 7 min. At 24 h postexposure, the guinea pigs were euthanatized and skin sections taken for histologic evaluation. The skin was fixed using either 10% neutral buffered formalin (NBF), McDowell Trump fixative (4CF-IG), Zenker's formol-saline (Helly's fluid), or Zenker's fluid. Fixed skin sections were cut in half: one half was embedded in paraffin and the other half in plastic (glycol methacrylate). Paraffin-embedded tissue was stained with hematoxylin and eosin; plastic-embedded tissue was stained with Lee's methylene blue basic fuchsin. Skin was also frozen unfixed, sectioned by cryostat, and stained with pinacyanole. HD-exposed skin was evaluated histologically for the presence of epidermal and follicular necrosis, microblister formation, epidermitis, and intracellular edema to determine the optimal fixation and embedding method for lesion preservation. The percentage of histologic sections with lesions varied little between fixatives and was similar for both paraffin and plastic embedding material. Plastic-embedded sections were thinner, allowing better histologic evaluation, but were more difficult to stain. Plastic embedding material did not infiltrate tissue fixed in Zenker's fluid or Zenker's formol-saline. Frozen tissue sections were prepared in the least processing time and lesion preservation was comparable to fixed tissue. It was concluded that standard histologic processing using formalin fixation and paraffin embedding is adequate for routine histopathological evaluation of HD skin lesions in the hairless guinea pig. Keywords: sulfur mustard, vesicating agents, pathology, hairless guinea pig model, fixation.
ERIC Educational Resources Information Center
Tong, Xiuli; McBride, Catherine
2014-01-01
This study examined how Chinese children acquire the untaught positional constraints of stroke patterns that are embedded in left-right structured and top-bottom structured characters. Using an orthographic regularity pattern elicitation paradigm, 536 Hong Kong Chinese children at different levels of reading (kindergarten, 2nd, and 5th grades)…
Effectiveness of eLearning in Statistics: Pictures and Stories
ERIC Educational Resources Information Center
Blackburn, Greg
2015-01-01
The study investigates (1) the effectiveness of using eLearning-embedded stories and pictures in order to improve learning outcomes for students and (2) how universities can adopt innovative approaches to the creation of Problem-Based Learning (PBL) resources and embed them in educational technology for teaching domain-specific content, such as…
Australian Curriculum Linked Lessons: The Language of Chance
ERIC Educational Resources Information Center
Hurrell, Derek
2015-01-01
In providing a continued focus on tasks and activities that help to illustrate key ideas embedded in the "Australian Curriculum," this issue focuses on the Statistics and probability strand and the sub-strand of Chance. In the Australian Curriculum (ACARA, 2015), students are not asked to list outcomes of chance experiments and represent…
Australian Curriculum Linked Lessons
ERIC Educational Resources Information Center
Hurrell, Derek; O'Neil, Jennifer
2011-01-01
In providing a continued focus on tasks and activities that help to illustrate key ideas embedded in the new Australian Curriculum, in this issue the authors focus on Geometry in the Measurement and Geometry strand, with strong links to an integrated focus on the Statistics and Probability strand. The small unit of work on the sorting and…
ERIC Educational Resources Information Center
Abla, Dilshat; Okanoya, Kazuo
2008-01-01
Word segmentation, that is, discovering the boundaries between words that are embedded in a continuous speech stream, is an important faculty for language learners; humans solve this task partly by calculating transitional probabilities between sounds. Behavioral and ERP studies suggest that detection of sequential probabilities (statistical…
Fiber Optic Sensor Embedment Study for Multi-Parameter Strain Sensing
Drissi-Habti, Monssef; Raman, Venkadesh; Khadour, Aghiad; Timorian, Safiullah
2017-01-01
Fiber optic sensors (FOSs) are commonly used in large-scale structure monitoring systems because of their small size, noise-free operation and low electrical risk. Embedded FOSs can, however, lead to micro-damage in composite structures; the damage generation threshold depends on the coating material of the FOSs and their diameter. In addition, embedded FOSs are usually aligned parallel to the reinforcement fibers to avoid creating micro-damage, but this linear positioning of distributed FOSs fails to provide all strain parameters. We suggest a novel sinusoidal sensor positioning to overcome this issue, which provides multi-parameter strain measurements over a large surface area. The effectiveness of sinusoidal FOS positioning over linear FOS positioning is studied by both numerical and experimental methods. This study demonstrates the advantages of the sinusoidal positioning method for FOSs in composite material bonding. PMID:28333117
Realization of Chinese word segmentation based on deep learning method
NASA Astrophysics Data System (ADS)
Wang, Xuefei; Wang, Mingjiang; Zhang, Qiquan
2017-08-01
In recent years, with its rapid development, deep learning has been widely used in the field of natural language processing. In this paper, deep learning is used to perform Chinese word segmentation on a large-scale corpus, eliminating the need to construct additional handcrafted features. The first step is to preprocess the corpus and use word2vec to obtain character embeddings, with each character represented by a 50-dimensional vector. The embedded characters are then fed to a bidirectional LSTM, a linear layer is applied to the hidden-layer output, and a CRF layer is added to obtain the model implemented in this paper. Experimental results on the 2014 People's Daily corpus show that the method achieves satisfactory accuracy.
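A skeletal version of the described architecture is sketched below in PyTorch: 50-dimensional character embeddings feed a bidirectional LSTM followed by a linear layer producing per-character scores over the B/M/E/S segmentation tags; the CRF layer used in the paper is omitted for brevity, and the vocabulary size and hyper-parameters are illustrative assumptions.

```python
# Skeletal PyTorch sketch of the described architecture: 50-dimensional character
# embeddings -> bidirectional LSTM -> linear layer producing per-character scores
# over the B/M/E/S segmentation tags. The CRF layer used in the paper is omitted
# here for brevity; vocabulary size and hyper-parameters are illustrative.
import torch
import torch.nn as nn

class BiLSTMSegmenter(nn.Module):
    def __init__(self, vocab_size=5000, emb_dim=50, hidden=128, n_tags=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)   # could be initialized from word2vec
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_tags)         # B, M, E, S scores

    def forward(self, char_ids):                         # (batch, seq_len) int tensor
        h, _ = self.lstm(self.embed(char_ids))           # (batch, seq_len, 2*hidden)
        return self.out(h)                               # (batch, seq_len, n_tags)

model = BiLSTMSegmenter()
dummy = torch.randint(0, 5000, (2, 30))                  # two sentences of 30 characters
print(model(dummy).shape)                                # torch.Size([2, 30, 4])
```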
Nitsche’s Method For Helmholtz Problems with Embedded Interfaces
Zou, Zilong; Aquino, Wilkins; Harari, Isaac
2016-01-01
SUMMARY In this work, we use Nitsche’s formulation to weakly enforce kinematic constraints at an embedded interface in Helmholtz problems. Allowing embedded interfaces in a mesh provides significant ease for discretization, especially when material interfaces have complex geometries. We provide analytical results that establish the well-posedness of Helmholtz variational problems and convergence of the corresponding finite element discretizations when Nitsche’s method is used to enforce kinematic constraints. As in the analysis of conventional Helmholtz problems, we show that the inf-sup constant remains positive provided that the Nitsche’s stabilization parameter is judiciously chosen. We then apply our formulation to several 2D plane-wave examples that confirm our analytical findings. Doing so, we demonstrate the asymptotic convergence of the proposed method and show that numerical results are in accordance with the theoretical analysis. PMID:28713177
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rogers, John A.; Nuzzo, Ralph; Kim, Hoon-sik
Described herein are printable structures and methods for making, assembling and arranging electronic devices. A number of the methods described herein are useful for assembling electronic devices where one or more device components are embedded in a polymer which is patterned during the embedding process with trenches for electrical interconnects between device components. Some methods described herein are useful for assembling electronic devices by printing methods, such as by dry transfer contact printing methods. Also described herein are GaN light emitting diodes and methods for making and arranging GaN light emitting diodes, for example for display or lighting systems.
Rogers, John A; Nuzzo, Ralph; Kim, Hoon-sik; Brueckner, Eric; Park, Sang Il; Kim, Rak Hwan
2014-10-21
Described herein are printable structures and methods for making, assembling and arranging electronic devices. A number of the methods described herein are useful for assembling electronic devices where one or more device components are embedded in a polymer which is patterned during the embedding process with trenches for electrical interconnects between device components. Some methods described herein are useful for assembling electronic devices by printing methods, such as by dry transfer contact printing methods. Also described herein are GaN light emitting diodes and methods for making and arranging GaN light emitting diodes, for example for display or lighting systems.
Visual content highlighting via automatic extraction of embedded captions on MPEG compressed video
NASA Astrophysics Data System (ADS)
Yeo, Boon-Lock; Liu, Bede
1996-03-01
Embedded captions in TV programs such as news broadcasts, documentaries and coverage of sports events provide important information on the underlying events. In digital video libraries, such captions represent a highly condensed form of key information on the contents of the video. In this paper we propose a scheme to automatically detect the presence of captions embedded in video frames. The proposed method operates on reduced image sequences which are efficiently reconstructed from compressed MPEG video and thus does not require full frame decompression. The detection, extraction and analysis of embedded captions help to capture the highlights of visual contents in video documents for better organization of video, to present succinctly the important messages embedded in the images, and to facilitate browsing, searching and retrieval of relevant clips.
Random matrix theory for transition strengths: Applications and open questions
NASA Astrophysics Data System (ADS)
Kota, V. K. B.
2017-12-01
Embedded random matrix ensembles are generic models for describing statistical properties of finite isolated interacting quantum many-particle systems. A finite quantum system, induced by a transition operator, makes transitions from its states to the states of the same system or to those of another system. Examples are electromagnetic transitions (then the initial and final systems are same), nuclear beta and double beta decay (then the initial and final systems are different) and so on. Using embedded ensembles (EE), there are efforts to derive a good statistical theory for transition strengths. With m fermions (or bosons) in N mean-field single particle levels and interacting via two-body forces, we have with GOE embedding, the so called EGOE(1+2). Now, the transition strength density (transition strength multiplied by the density of states at the initial and final energies) is a convolution of the density generated by the mean-field one-body part with a bivariate spreading function due to the two-body interaction. Using the embedding U(N) algebra, it is established, for a variety of transition operators, that the spreading function, for sufficiently strong interactions, is close to a bivariate Gaussian. Also, as the interaction strength increases, the spreading function exhibits a transition from bivariate Breit-Wigner to bivariate Gaussian form. In appropriate limits, this EE theory reduces to the polynomial theory of Draayer, French and Wong on one hand and to the theory due to Flambaum and Izrailev for one-body transition operators on the other. Using spin-cutoff factors for projecting angular momentum, the theory is applied to nuclear matrix elements for neutrinoless double beta decay (NDBD). In this paper we will describe: (i) various developments in the EE theory for transition strengths; (ii) results for nuclear matrix elements for 130Te and 136Xe NDBD; (iii) important open questions in the current form of the EE theory.
Leverentz, Hannah R; Truhlar, Donald G
2009-06-09
This work tests the capability of the electrostatically embedded many-body (EE-MB) method to calculate accurate (relative to conventional calculations carried out at the same level of electronic structure theory and with the same basis set) binding energies of mixed clusters (as large as 9-mers) consisting of water, ammonia, sulfuric acid, and ammonium and bisulfate ions. This work also investigates the dependence of the accuracy of the EE-MB approximation on the type and origin of the charges used for electrostatically embedding these clusters. The conclusions reached are that for all of the clusters and sets of embedding charges studied in this work, the electrostatically embedded three-body (EE-3B) approximation is capable of consistently yielding relative errors of less than 1% and an average relative absolute error of only 0.3%, and that the performance of the EE-MB approximation does not depend strongly on the specific set of embedding charges used. The electrostatically embedded pairwise approximation has errors about an order of magnitude larger than EE-3B. This study also explores the question of why the accuracy of the EE-MB approximation shows such little dependence on the types of embedding charges employed.
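The bookkeeping behind a truncated many-body expansion such as EE-3B is simple arithmetic, as the sketch below shows; the monomer, dimer and trimer energies would in practice come from electronic-structure calculations performed in a field of embedding charges, and the numbers used here are placeholders only.

```python
# Sketch of assembling a three-body many-body expansion, the bookkeeping behind
# EE-3B. The monomer/dimer/trimer energies would come from electronic-structure
# calculations performed in a field of embedding charges; the dictionary below
# holds placeholder numbers only.
from itertools import combinations

def mbe3_energy(energies, n_monomers):
    """Truncated many-body expansion: E ~ sum E_i + sum dE_ij + sum dE_ijk."""
    E1 = {(i,): energies[(i,)] for i in range(n_monomers)}
    total = sum(E1.values())
    d2 = {}                                               # two-body corrections
    for i, j in combinations(range(n_monomers), 2):
        d2[(i, j)] = energies[(i, j)] - E1[(i,)] - E1[(j,)]
        total += d2[(i, j)]
    for i, j, k in combinations(range(n_monomers), 3):    # three-body corrections
        total += (energies[(i, j, k)] - E1[(i,)] - E1[(j,)] - E1[(k,)]
                  - d2[(i, j)] - d2[(i, k)] - d2[(j, k)])
    return total

# Placeholder subsystem energies (hartree) for a 3-monomer cluster.
energies = {(0,): -76.4, (1,): -76.4, (2,): -56.5,
            (0, 1): -152.81, (0, 2): -132.91, (1, 2): -132.90,
            (0, 1, 2): -209.33}
print("EE-3B total energy estimate:", mbe3_energy(energies, 3))
```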
One improved LSB steganography algorithm
NASA Astrophysics Data System (ADS)
Song, Bing; Zhang, Zhi-hong
2013-03-01
Information hidden in digital images with the LSB algorithm is easily detected with high accuracy by the X2 and RS steganalysis attacks. Starting from a reselection of the embedding locations and a modification of the embedding method, combined with a sub-affine transformation and matrix coding, we improved the LSB algorithm and propose a new LSB algorithm. Experimental results show that the improved algorithm can resist the X2 and RS steganalysis effectively.
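For context, the sketch below shows the baseline spatial-domain LSB replacement whose detectability by the X2 and RS attacks motivates the paper; the sub-affine transformation and matrix coding of the improved algorithm are not reproduced.

```python
# Baseline spatial-domain LSB embedding for context (this is the scheme whose
# detectability by the X2/RS attacks motivates the paper; the sub-affine
# location selection and matrix coding of the improved algorithm are not
# reproduced here).
import numpy as np

def lsb_embed(cover, bits):
    """Replace the least significant bit of the first len(bits) pixels."""
    stego = cover.flatten().copy()
    stego[:len(bits)] = (stego[:len(bits)] & ~np.uint8(1)) | np.array(bits, dtype=np.uint8)
    return stego.reshape(cover.shape)

def lsb_extract(stego, n_bits):
    return (stego.flatten()[:n_bits] & 1).tolist()

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, (64, 64), dtype=np.uint8)
message = [1, 0, 1, 1, 0, 0, 1, 0]
stego = lsb_embed(cover, message)
print(lsb_extract(stego, len(message)) == message)
```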
Cannistraci, Carlo Vittorio; Alanis-Lobato, Gregorio; Ravasi, Timothy
2013-01-01
Motivation: Most functions within the cell emerge thanks to protein–protein interactions (PPIs), yet experimental determination of PPIs is both expensive and time-consuming. PPI networks present significant levels of noise and incompleteness. Predicting interactions using only PPI-network topology (topological prediction) is difficult but essential when prior biological knowledge is absent or unreliable. Methods: Network embedding emphasizes the relations between network proteins embedded in a low-dimensional space, in which protein pairs that are closer to each other represent good candidate interactions. To achieve network denoising, which boosts prediction performance, we first applied minimum curvilinear embedding (MCE), and then adopted shortest path (SP) in the reduced space to assign likelihood scores to candidate interactions. Furthermore, we introduce (i) a new valid variation of MCE, named non-centred MCE (ncMCE); (ii) two automatic strategies for selecting the appropriate embedding dimension; and (iii) two new randomized procedures for evaluating predictions. Results: We compared our method against several unsupervised and supervisedly tuned embedding approaches and node neighbourhood techniques. Despite its computational simplicity, ncMCE-SP was the overall leader, outperforming the current methods in topological link prediction. Conclusion: Minimum curvilinearity is a valuable non-linear framework that we successfully applied to the embedding of protein networks for the unsupervised prediction of novel PPIs. The rationale for our approach is that biological and evolutionary information is imprinted in the non-linear patterns hidden behind the protein network topology, and can be exploited for predicting new protein links. The predicted PPIs represent good candidates for testing in high-throughput experiments or for exploitation in systems biology tools such as those used for network-based inference and prediction of disease-related functional modules. Availability: https://sites.google.com/site/carlovittoriocannistraci/home Contact: kalokagathos.agon@gmail.com or timothy.ravasi@kaust.edu.sa Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23812985
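A generic flavour of embedding-based topological link prediction is given below: the graph is embedded via a spectral decomposition of its Laplacian and non-adjacent node pairs are ranked by their distance in the embedding space; this is not the authors' minimum curvilinear embedding, and the karate-club graph is a toy stand-in for a PPI network.

```python
# Generic sketch of embedding-based topological link prediction (not the
# authors' minimum curvilinear embedding): embed the graph with a spectral
# decomposition of its Laplacian and rank candidate (non-adjacent) node pairs
# by their distance in the embedding space.
import numpy as np
import networkx as nx

G = nx.karate_club_graph()                      # toy stand-in for a PPI network
n, dim = G.number_of_nodes(), 8
L = nx.laplacian_matrix(G).toarray().astype(float)
vals, vecs = np.linalg.eigh(L)
coords = vecs[:, 1:dim + 1]                     # skip the trivial constant eigenvector

candidates = [(u, v) for u in range(n) for v in range(u + 1, n) if not G.has_edge(u, v)]
scores = {(u, v): -np.linalg.norm(coords[u] - coords[v]) for u, v in candidates}
top = sorted(scores, key=scores.get, reverse=True)[:5]
print("top candidate interactions:", top)
```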
Kokkat, Theresa J.; McGarvey, Diane; Patel, Miral S.; Tieniber, Andrew D.; LiVolsi, Virginia A.; Baloch, Zubair W.
2013-01-01
Background: Methanol fixed and paraffin embedded (MFPE) cellblocks are an essential cytology preparation. However, MFPE cellblocks often contain limited material and their relatively small size has caused them to be overlooked in biomarker discovery. Advances in the field of molecular biotechnology have made it possible to extract proteins from formalin fixed and paraffin embedded (FFPE) tissue blocks. In contrast, there are no established methods for extracting proteins from MFPE cellblocks. We investigated commonly available CHAPS (3-[(3-cholamidopropyl) dimethylammonio]-1-propanesulfonate) buffer, as well as two commercially available Qiagen® kits and compared their effectiveness on MFPE tissue for protein yields. Materials and Methods: MFPE blocks were made by Cellient™ automated system using human tissue specimens from normal and malignant specimens collected in ThinPrep™ Vials. Protein was extracted from Cellient-methanol fixed and paraffin embedded blocks with CHAPS buffer method as well as FFPE and Mammalian Qiagen® kits. Results: Comparison of protein yields demonstrated the effectiveness of various protein extraction methods on MFPE cellblocks. Conclusion: In the current era of minimally invasive techniques to obtain minimal amount of tissue for diagnostic and prognostic purposes, the use of commercial and lab made buffer on low weight MFPE scrapings obtained by Cellient® processor opens new possibilities for protein biomarker research. PMID:24403950
Correlative Imaging of Fluorescent Proteins in Resin-Embedded Plant Material
Bell, Karen; Mitchell, Steve; Paultre, Danae; Posch, Markus; Oparka, Karl
2013-01-01
Fluorescent proteins (FPs) were developed for live-cell imaging and have revolutionized cell biology. However, not all plant tissues are accessible to live imaging using confocal microscopy, necessitating alternative approaches for protein localization. An example is the phloem, a tissue embedded deep within plant organs and sensitive to damage. To facilitate accurate localization of FPs within recalcitrant tissues, we developed a simple method for retaining FPs after resin embedding. This method is based on low-temperature fixation and dehydration, followed by embedding in London Resin White, and avoids the need for cryosections. We show that a palette of FPs can be localized in plant tissues while retaining good structural cell preservation, and that the polymerized block face can be counterstained with cell wall probes. Using this method we have been able to image green fluorescent protein-labeled plasmodesmata to a depth of more than 40 μm beneath the resin surface. Using correlative light and electron microscopy of the phloem, we were able to locate the same FP-labeled sieve elements in semithin and ultrathin sections. Sections were amenable to antibody labeling, and allowed a combination of confocal and superresolution imaging (three-dimensional-structured illumination microscopy) on the same cells. These correlative imaging methods should find several uses in plant cell biology. PMID:23457228
DNS and Embedded DNS as Tools for Investigating Unsteady Heat Transfer Phenomena in Turbines
NASA Technical Reports Server (NTRS)
vonTerzi, Dominic; Bauer, H.-J.
2010-01-01
DNS is a powerful tool with high potential for investigating unsteady heat transfer and fluid flow phenomena, in particular for cases involving transition to turbulence and/or large coherent structures.
- DNS of idealized configurations related to turbomachinery components is already possible.
- For more realistic configurations and the inclusion of more effects, reduction of computational cost is the key issue (e.g., hybrid methods).
- The approach pursued here is embedded DNS (segregated coupling of DNS with LES and/or RANS).
- Embedded DNS is an enabling technology for many studies.
- Pre-transitional heat transfer and trailing-edge cutback film-cooling are good candidates for (embedded) DNS studies.
Sandford, M.T. II; Handel, T.G.; Bradley, J.N.
1998-07-07
A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique are disclosed. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use loss-less compression to reduce to the final size the intermediate representation as indices. The efficiency of the loss-less compression, known also as entropy coding compression, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%. 21 figs.
Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.
1998-01-01
A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use loss-less compression to reduce to the final size the intermediate representation as indices. The efficiency of the loss-less compression, known also as entropy coding compression, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%.
Adapting Word Embeddings from Multiple Domains to Symptom Recognition from Psychiatric Notes
Zhang, Yaoyun; Li, Hee-Jin; Wang, Jingqi; Cohen, Trevor; Roberts, Kirk; Xu, Hua
2018-01-01
Mental health is increasingly recognized an important topic in healthcare. Information concerning psychiatric symptoms is critical for the timely diagnosis of mental disorders, as well as for the personalization of interventions. However, the diversity and sparsity of psychiatric symptoms make it challenging for conventional natural language processing techniques to automatically extract such information from clinical text. To address this problem, this study takes the initiative to use and adapt word embeddings from four source domains – intensive care, biomedical literature, Wikipedia and Psychiatric Forum – to recognize symptoms in the target domain of psychiatry. We investigated four different approaches including 1) only using word embeddings of the source domain, 2) directly combining data of the source and target to generate word embeddings, 3) assigning different weights to word embeddings, and 4) retraining the word embedding model of the source domain using a corpus of the target domain. To the best of our knowledge, this is the first work of adapting multiple word embeddings of external domains to improve psychiatric symptom recognition in clinical text. Experimental results showed that the last two approaches outperformed the baseline methods, indicating the effectiveness of our new strategies to leverage embeddings from other domains. PMID:29888086
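The fourth strategy, retraining a source-domain model on a target-domain corpus, can be sketched with gensim's word2vec as below (gensim 4.x API assumed); the two toy token lists stand in for the source and psychiatric target corpora, and all hyper-parameters are illustrative.

```python
# Minimal sketch of the "retrain on the target domain" strategy (approach 4)
# using gensim's word2vec (gensim 4.x API assumed). The two corpora below are
# toy token lists standing in for the source- and target-domain text.
from gensim.models import Word2Vec

source_corpus = [["patient", "blood", "pressure", "stable"],
                 ["sepsis", "treated", "with", "antibiotics"]]
target_corpus = [["patient", "reports", "anhedonia", "and", "insomnia"],
                 ["worsening", "anxiety", "with", "panic", "attacks"]]

# 1) Train embeddings on the source domain.
model = Word2Vec(sentences=source_corpus, vector_size=100, min_count=1, epochs=20)

# 2) Extend the vocabulary with target-domain words and continue training,
#    adapting the source embeddings toward the psychiatric target domain.
model.build_vocab(target_corpus, update=True)
model.train(target_corpus, total_examples=len(target_corpus), epochs=20)

print(model.wv["anhedonia"][:5])          # adapted vector for a target-domain term
```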
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kolesnikov, S. V., E-mail: kolesnikov@physics.msu.ru; Klavsyuk, A. L.; Saletsky, A. M.
The self-organization and magnetic properties of small iron and cobalt nanostructures embedded into the first layer of a Cu(100) surface are investigated using the self-learning kinetic Monte Carlo method and density functional theory. The similarities and differences between the Fe/Cu(100) and the Co/Cu(100) are underlined. The time evolution of magnetic properties of a copper monolayer with embedded magnetic atoms at 380 K is discussed.
Botoman, Lester; Shukla, Elvis; Johan, Erni; Mitsunobu, Satoshi; Matsue, Naoto
2018-02-01
Although many kinds of materials for water purification are known, easy-to-use methods that ensure the safety of drinking water for rural populations are not sufficiently available. Sorbent-embedded sheets provide methods for the easy removal of contaminants from drinking water in the home. As an example of such a sorbent-embedded sheet, we prepared a Linde type A (LTA) zeolite-embedded sheet (ZES) and examined its Pb(II) removal behaviour. Different amounts of LTA were added either as powder or as ZES to 0.3 mM Pb(NO3)2 solutions containing 2.5 mM Ca(NO3)2, in which the ratio of the negative charges in LTA to the positive charges in Pb(II) (LTA/Pb ratio) ranged from 1 to 20. After shaking, the mixtures were centrifuged to remove the powder, while the ZES was simply removed from the mixture by hand. The LTA powder removed more than 99% of the Pb(II) from the solution at all LTA/Pb ratios within 1 h, while the ZES removed >99% of the Pb(II) at LTA/Pb ratios of 2 and higher; at the highest LTA/Pb ratio of 20, the ZES removed >99% of the Pb(II) in 30 s. Therefore, the use of appropriate sorbent-embedded sheets enables the facile removal of contaminants from water.
Small Private Key PKS on an Embedded Microprocessor
Seo, Hwajeong; Kim, Jihyun; Choi, Jongseok; Park, Taehwan; Liu, Zhe; Kim, Howon
2014-01-01
Multivariate quadratic (MQ) cryptography requires the use of long public and private keys to ensure a sufficient security level, but this is not favorable to embedded systems, which have limited system resources. Recently, various approaches to MQ cryptography using reduced public keys have been studied. As a result, at CHES2011 (Cryptographic Hardware and Embedded Systems, 2011) a small public key MQ scheme was proposed, and its feasible implementation on an embedded microprocessor was reported at CHES2012. However, the implementation of a small private key MQ scheme was not reported. For efficient implementation, random number generators can contribute to reducing the key size, but the cost of using a random number generator is much higher than that of computing MQ on modern microprocessors. Therefore, no feasible results have been reported on embedded microprocessors. In this paper, we propose a feasible implementation on embedded microprocessors for a small private key MQ scheme using a pseudo-random number generator and a hash function based on a block cipher exploiting a hardware Advanced Encryption Standard (AES) accelerator. To speed up the performance, we apply various implementation methods, including parallel computation, on-the-fly computation, optimized logarithm representation, vinegar monomials and assembly programming. The proposed method reduces the private key size by about 99.9% and speeds up signature generation and verification by 5.78% and 12.19%, respectively, compared with previous results in CHES2012. PMID:24651722
NASA Astrophysics Data System (ADS)
Sabet Divsholi, Bahador; Yang, Yaowen
2011-04-01
Piezoelectric lead zirconate titanate (PZT) transducers have been used for health monitoring of various structures over the last two decades. There are three methods to install the PZT transducers to structures, namely, surface bonded, reusable setup and embedded PZTs. The embedded PZTs and reusable PZT setups can be used for concrete structures during construction. On the other hand, the surface bonded PZTs can be installed on the existing structures. In this study, the applicability and limitations of each installation method are experimentally studied. A real size concrete structure is cast, where the surface bonded, reusable setup and embedded PZTs are installed. Monitoring of concrete hydration and structural damage is conducted by the electromechanical impedance (EMI), wave propagation and wave transmission techniques. It is observed that embedded PZTs are suitable for monitoring the hydration of concrete by using both the EMI and the wave transmission techniques. For damage detection in concrete structures, the embedded PZTs can be employed using the wave transmission technique, but they are not suitable for the EMI technique. It is also found that the surface bonded PZTs are sensitive to damage when using both the EMI and wave propagation techniques. The reusable PZT setups are able to monitor the hydration of concrete. However they are less sensitive in damage detection in comparison to the surface bonded PZTs.
Small private key MQPKS on an embedded microprocessor.
Seo, Hwajeong; Kim, Jihyun; Choi, Jongseok; Park, Taehwan; Liu, Zhe; Kim, Howon
2014-03-19
Multivariate quadratic (MQ) cryptography requires the use of long public and private keys to ensure a sufficient security level, but this is not favorable to embedded systems, which have limited system resources. Recently, various approaches to MQ cryptography using reduced public keys have been studied. As a result, at CHES2011 (Cryptographic Hardware and Embedded Systems, 2011) a small public key MQ scheme was proposed, and its feasible implementation on an embedded microprocessor was reported at CHES2012. However, the implementation of a small private key MQ scheme was not reported. For efficient implementation, random number generators can contribute to reducing the key size, but the cost of using a random number generator is much higher than that of computing MQ on modern microprocessors. Therefore, no feasible results have been reported on embedded microprocessors. In this paper, we propose a feasible implementation on embedded microprocessors for a small private key MQ scheme using a pseudo-random number generator and a hash function based on a block cipher exploiting a hardware Advanced Encryption Standard (AES) accelerator. To speed up the performance, we apply various implementation methods, including parallel computation, on-the-fly computation, optimized logarithm representation, vinegar monomials and assembly programming. The proposed method reduces the private key size by about 99.9% and speeds up signature generation and verification by 5.78% and 12.19%, respectively, compared with previous results in CHES2012.
NASA Astrophysics Data System (ADS)
Raghavan, Ajay; Kiesel, Peter; Sommer, Lars Wilko; Schwartz, Julian; Lochbaum, Alexander; Hegyi, Alex; Schuh, Andreas; Arakaki, Kyle; Saha, Bhaskar; Ganguli, Anurag; Kim, Kyung Ho; Kim, ChaeAh; Hah, Hoe Jin; Kim, SeokKoo; Hwang, Gyu-Ok; Chung, Geun-Chang; Choi, Bokkyu; Alamgir, Mohamed
2017-02-01
A key challenge hindering the mass adoption of Lithium-ion and other next-gen chemistries in advanced battery applications such as hybrid/electric vehicles (xEVs) has been management of their functional performance for more effective battery utilization and control over their life. Contemporary battery management systems (BMS) reliant on monitoring external parameters such as voltage and current to ensure safe battery operation with the required performance usually result in overdesign and inefficient use of capacity. More informative embedded sensors are desirable for internal cell state monitoring, which could provide accurate state-of-charge (SOC) and state-of-health (SOH) estimates and early failure indicators. Here we present a promising new embedded sensing option developed by our team for cell monitoring, fiber-optic sensors. High-performance large-format pouch cells with embedded fiber-optic sensors were fabricated. The first of this two-part paper focuses on the embedding method details and performance of these cells. The seal integrity, capacity retention, cycle life, compatibility with existing module designs, and mass-volume cost estimates indicate their suitability for xEV and other advanced battery applications. The second part of the paper focuses on the internal strain and temperature signals obtained from these sensors under various conditions and their utility for high-accuracy cell state estimation algorithms.
Comparison of Feature Selection Techniques in Machine Learning for Anatomical Brain MRI in Dementia.
Tohka, Jussi; Moradi, Elaheh; Huttunen, Heikki
2016-07-01
We present a comparative split-half resampling analysis of various data driven feature selection and classification methods for the whole brain voxel-based classification analysis of anatomical magnetic resonance images. We compared support vector machines (SVMs), with or without filter based feature selection, several embedded feature selection methods and stability selection. While comparisons of the accuracy of various classification methods have been reported previously, the variability of the out-of-training sample classification accuracy and the set of selected features due to independent training and test sets have not been previously addressed in a brain imaging context. We studied two classification problems: 1) Alzheimer's disease (AD) vs. normal control (NC) and 2) mild cognitive impairment (MCI) vs. NC classification. In AD vs. NC classification, the variability in the test accuracy due to the subject sample did not vary between different methods and exceeded the variability due to different classifiers. In MCI vs. NC classification, particularly with a large training set, embedded feature selection methods outperformed SVM-based ones with the difference in the test accuracy exceeding the test accuracy variability due to the subject sample. The filter and embedded methods produced divergent feature patterns for MCI vs. NC classification that suggests the utility of the embedded feature selection for this problem when linked with the good generalization performance. The stability of the feature sets was strongly correlated with the number of features selected, weakly correlated with the stability of classification accuracy, and uncorrelated with the average classification accuracy.
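The contrast between the two families of methods can be illustrated compactly with scikit-learn, as below: a filter method (univariate F-test) feeding a linear SVM versus an embedded L1-penalized logistic regression; synthetic data stand in for the voxel-wise features, and the parameters are illustrative rather than those of the study.

```python
# Compact illustration of the two feature-selection families compared above:
# a filter method (univariate F-test) feeding a linear SVM versus an embedded
# method (L1-penalized logistic regression). Synthetic data stand in for the
# voxel-wise MRI features; parameters are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

X, y = make_classification(n_samples=120, n_features=2000, n_informative=20,
                           random_state=0)

filter_svm = make_pipeline(SelectKBest(f_classif, k=100), SVC(kernel="linear"))
embedded_l1 = LogisticRegression(penalty="l1", solver="liblinear", C=0.1,
                                 max_iter=2000)

print("filter + SVM :", cross_val_score(filter_svm, X, y, cv=5).mean())
print("embedded L1  :", cross_val_score(embedded_l1, X, y, cv=5).mean())
```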
Matenaers, Cyrill; Popper, Bastian; Rieger, Alexandra; Wanke, Rüdiger; Blutke, Andreas
2018-01-01
The accuracy of quantitative stereological analysis tools such as the (physical) disector method substantially depends on the precise determination of the thickness of the analyzed histological sections. One conventional method for measurement of histological section thickness is to re-embed the section of interest vertically to its original section plane. The section thickness is then measured in a subsequently prepared histological section of this orthogonally re-embedded sample. However, the orthogonal re-embedding (ORE) technique is quite work- and time-intensive and may produce inaccurate section thickness measurement values due to unintentional slightly oblique (non-orthogonal) positioning of the re-embedded sample-section. Here, an improved ORE method is presented, allowing for determination of the factual section plane angle of the re-embedded section, and correction of measured section thickness values for oblique (non-orthogonal) sectioning. For this, the analyzed section is mounted flat on a foil of known thickness (calibration foil) and both the section and the calibration foil are then vertically (re-)embedded. The section angle of the re-embedded section is then calculated from the deviation of the measured section thickness of the calibration foil and its factual thickness, using basic geometry. To find a practicable, fast, and accurate alternative to ORE, the suitability of spectral reflectance (SR) measurement for determination of plastic section thicknesses was evaluated. Using a commercially available optical reflectometer (F20, Filmetrics®, USA), the thicknesses of 0.5 μm thick semi-thin Epon (glycid ether)-sections and of 1-3 μm thick plastic sections (glycolmethacrylate/ methylmethacrylate, GMA/MMA), as regularly used in physical disector analyses, could precisely be measured within few seconds. Compared to the measured section thicknesses determined by ORE, SR measures displayed less than 1% deviation. Our results prove the applicability of SR to efficiently provide accurate section thickness measurements as a prerequisite for reliable estimates of dependent quantitative stereological parameters.
Matenaers, Cyrill; Popper, Bastian; Rieger, Alexandra; Wanke, Rüdiger
2018-01-01
The accuracy of quantitative stereological analysis tools such as the (physical) disector method substantially depends on the precise determination of the thickness of the analyzed histological sections. One conventional method for measurement of histological section thickness is to re-embed the section of interest vertically to its original section plane. The section thickness is then measured in a subsequently prepared histological section of this orthogonally re-embedded sample. However, the orthogonal re-embedding (ORE) technique is quite work- and time-intensive and may produce inaccurate section thickness measurement values due to unintentional slightly oblique (non-orthogonal) positioning of the re-embedded sample-section. Here, an improved ORE method is presented, allowing for determination of the factual section plane angle of the re-embedded section, and correction of measured section thickness values for oblique (non-orthogonal) sectioning. For this, the analyzed section is mounted flat on a foil of known thickness (calibration foil) and both the section and the calibration foil are then vertically (re-)embedded. The section angle of the re-embedded section is then calculated from the deviation of the measured section thickness of the calibration foil and its factual thickness, using basic geometry. To find a practicable, fast, and accurate alternative to ORE, the suitability of spectral reflectance (SR) measurement for determination of plastic section thicknesses was evaluated. Using a commercially available optical reflectometer (F20, Filmetrics®, USA), the thicknesses of 0.5 μm thick semi-thin Epon (glycid ether)-sections and of 1–3 μm thick plastic sections (glycolmethacrylate/ methylmethacrylate, GMA/MMA), as regularly used in physical disector analyses, could precisely be measured within few seconds. Compared to the measured section thicknesses determined by ORE, SR measures displayed less than 1% deviation. Our results prove the applicability of SR to efficiently provide accurate section thickness measurements as a prerequisite for reliable estimates of dependent quantitative stereological parameters. PMID:29444158
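A minimal sketch of the basic-geometry correction described in the two preceding records: the apparent thickness of the co-embedded calibration foil yields the sectioning angle, which is then used to deflate the measured section thickness. Function and variable names are hypothetical, and the numbers in the example are made up.

import math

def corrected_section_thickness(measured_section_um, measured_foil_um, true_foil_um):
    # If the re-embedded sample is cut at an angle theta away from the orthogonal
    # plane, every apparent thickness is inflated by 1/cos(theta). The calibration
    # foil of known thickness gives cos(theta) = true thickness / measured thickness.
    cos_theta = true_foil_um / measured_foil_um
    if not 0.0 < cos_theta <= 1.0:
        raise ValueError("the foil cannot appear thinner than its true thickness")
    theta_deg = math.degrees(math.acos(cos_theta))
    return measured_section_um * cos_theta, theta_deg

# Hypothetical numbers: a 5.0 um foil appears 5.2 um thick in the re-embedded section.
thickness, angle = corrected_section_thickness(1.10, 5.2, 5.0)
print(f"corrected section thickness: {thickness:.3f} um (section angle {angle:.1f} deg)")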
Min, Xu; Zeng, Wanwen; Chen, Ning; Chen, Ting; Jiang, Rui
2017-07-15
Experimental techniques for measuring chromatin accessibility are expensive and time consuming, appealing for the development of computational approaches to predict open chromatin regions from DNA sequences. Along this direction, existing methods fall into two classes: one based on handcrafted k-mer features and the other based on convolutional neural networks. Although both categories have shown good performance in specific applications thus far, there still lacks a comprehensive framework to integrate useful k-mer co-occurrence information with recent advances in deep learning. We fill this gap by addressing the problem of chromatin accessibility prediction with a convolutional Long Short-Term Memory (LSTM) network with k-mer embedding. We first split DNA sequences into k-mers and pre-train k-mer embedding vectors based on the co-occurrence matrix of k-mers by using an unsupervised representation learning approach. We then construct a supervised deep learning architecture comprised of an embedding layer, three convolutional layers and a Bidirectional LSTM (BLSTM) layer for feature learning and classification. We demonstrate that our method gains high-quality fixed-length features from variable-length sequences and consistently outperforms baseline methods. We show that k-mer embedding can effectively enhance model performance by exploring different embedding strategies. We also prove the efficacy of both the convolution and the BLSTM layers by comparing two variations of the network architecture. We confirm the robustness of our model to hyper-parameters by performing sensitivity analysis. We hope our method can eventually reinforce our understanding of employing deep learning in genomic studies and shed light on research regarding mechanisms of chromatin accessibility. The source code can be downloaded from https://github.com/minxueric/ismb2017_lstm. tingchen@tsinghua.edu.cn or ruijiang@tsinghua.edu.cn. Supplementary materials are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
Min, Xu; Zeng, Wanwen; Chen, Ning; Chen, Ting; Jiang, Rui
2017-01-01
Abstract Motivation: Experimental techniques for measuring chromatin accessibility are expensive and time consuming, appealing for the development of computational approaches to predict open chromatin regions from DNA sequences. Along this direction, existing methods fall into two classes: one based on handcrafted k-mer features and the other based on convolutional neural networks. Although both categories have shown good performance in specific applications thus far, there still lacks a comprehensive framework to integrate useful k-mer co-occurrence information with recent advances in deep learning. Results: We fill this gap by addressing the problem of chromatin accessibility prediction with a convolutional Long Short-Term Memory (LSTM) network with k-mer embedding. We first split DNA sequences into k-mers and pre-train k-mer embedding vectors based on the co-occurrence matrix of k-mers by using an unsupervised representation learning approach. We then construct a supervised deep learning architecture comprised of an embedding layer, three convolutional layers and a Bidirectional LSTM (BLSTM) layer for feature learning and classification. We demonstrate that our method gains high-quality fixed-length features from variable-length sequences and consistently outperforms baseline methods. We show that k-mer embedding can effectively enhance model performance by exploring different embedding strategies. We also prove the efficacy of both the convolution and the BLSTM layers by comparing two variations of the network architecture. We confirm the robustness of our model to hyper-parameters by performing sensitivity analysis. We hope our method can eventually reinforce our understanding of employing deep learning in genomic studies and shed light on research regarding mechanisms of chromatin accessibility. Availability and implementation: The source code can be downloaded from https://github.com/minxueric/ismb2017_lstm. Contact: tingchen@tsinghua.edu.cn or ruijiang@tsinghua.edu.cn Supplementary information: Supplementary materials are available at Bioinformatics online. PMID:28881969
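The following Keras sketch shows the general architecture class described above (an embedding layer over k-mer indices, three convolutional layers, a bidirectional LSTM, and a sigmoid output). Layer sizes, k, and the training data are illustrative assumptions rather than the authors' hyperparameters; pre-trained k-mer vectors could be loaded into the Embedding layer's weights.

import numpy as np
import tensorflow as tf

K, VOCAB, SEQ_LEN, EMB_DIM = 6, 4 ** 6, 300, 100  # illustrative k-mer settings

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=VOCAB, output_dim=EMB_DIM),  # k-mer embedding layer
    tf.keras.layers.Conv1D(128, 8, activation="relu"),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Conv1D(64, 8, activation="relu"),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Conv1D(64, 8, activation="relu"),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # open vs. closed chromatin
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Toy data: each sequence is a list of k-mer indices, labels mark accessible regions.
x = np.random.randint(0, VOCAB, size=(32, SEQ_LEN))
y = np.random.randint(0, 2, size=(32, 1))
model.fit(x, y, epochs=1, batch_size=8, verbose=0)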
Power Performance Verification of a Wind Farm Using the Friedman's Test.
Hernandez, Wilmar; López-Presa, José Luis; Maldonado-Correa, Jorge L
2016-06-03
In this paper, a method of verification of the power performance of a wind farm is presented. This method is based on the Friedman's test, which is a nonparametric statistical inference technique, and it uses the information that is collected by the SCADA system from the sensors embedded in the wind turbines in order to carry out the power performance verification of a wind farm. Here, the guaranteed power curve of the wind turbines is used as one more wind turbine of the wind farm under assessment, and a multiple comparison method is used to investigate differences between pairs of wind turbines with respect to their power performance. The proposed method says whether the power performance of the specific wind farm under assessment differs significantly from what would be expected, and it also allows wind farm owners to know whether their wind farm has either a perfect power performance or an acceptable power performance. Finally, the power performance verification of an actual wind farm is carried out. The results of the application of the proposed method showed that the power performance of the specific wind farm under assessment was acceptable.
Power Performance Verification of a Wind Farm Using the Friedman’s Test
Hernandez, Wilmar; López-Presa, José Luis; Maldonado-Correa, Jorge L.
2016-01-01
In this paper, a method of verification of the power performance of a wind farm is presented. This method is based on the Friedman’s test, which is a nonparametric statistical inference technique, and it uses the information that is collected by the SCADA system from the sensors embedded in the wind turbines in order to carry out the power performance verification of a wind farm. Here, the guaranteed power curve of the wind turbines is used as one more wind turbine of the wind farm under assessment, and a multiple comparison method is used to investigate differences between pairs of wind turbines with respect to their power performance. The proposed method says whether the power performance of the specific wind farm under assessment differs significantly from what would be expected, and it also allows wind farm owners to know whether their wind farm has either a perfect power performance or an acceptable power performance. Finally, the power performance verification of an actual wind farm is carried out. The results of the application of the proposed method showed that the power performance of the specific wind farm under assessment was acceptable. PMID:27271628
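A minimal scipy sketch of the core statistical step described above: the guaranteed power curve is treated as one more "turbine", measurements are blocked (here by wind-speed bin), and the Friedman test checks whether any turbine's power performance deviates. The data are hypothetical; a post-hoc multiple comparison step, as in the paper, would follow a significant result.

import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(0)
n_bins = 20  # wind-speed bins act as blocks

# Hypothetical mean power per bin for three turbines plus the guaranteed power curve.
guaranteed = np.linspace(100, 2000, n_bins)
turbine_a = guaranteed * rng.normal(1.00, 0.02, n_bins)
turbine_b = guaranteed * rng.normal(0.98, 0.02, n_bins)
turbine_c = guaranteed * rng.normal(0.95, 0.02, n_bins)

stat, p = friedmanchisquare(guaranteed, turbine_a, turbine_b, turbine_c)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")
# A small p-value means at least one "turbine" (possibly the guaranteed curve itself)
# differs in power performance; pairwise post-hoc comparisons would identify which.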
Embedded object concept with a telepresence robot system
NASA Astrophysics Data System (ADS)
Vallius, Tero; Röning, Juha
2005-10-01
This paper presents the Embedded Object Concept (EOC) and a telepresence robot system which is a test case for the EOC. The EOC utilizes common object-oriented methods used in software by applying them to combined Lego-like software-hardware entities. These entities represent objects in object-oriented design methods, and they are the building blocks of embedded systems. The goal of the EOC is to make the designing of embedded systems faster and easier. This concept enables people without comprehensive knowledge of electronics design to create new embedded systems, and for experts it shortens the design time of new embedded systems. We present the current status of the EOC, including two generations of embedded objects named Atomi objects. The first generation of the Atomi objects has been tested with different applications and found to be functional, but not optimal. The second generation aims to correct the issues found with the first generation, and it is being tested in a relatively complex test case. The test case is a telepresence robot consisting of a two-wheeled, human-height robot and its computer counterpart. The robot has been constructed using incremental device development, which is made possible by the architecture of the EOC. The robot contains video and audio exchange capability, and a controlling and balancing system for driving with two wheels. The robot is built in two versions, the first consisting of a PDA device and Atomi objects, and the second consisting of only Atomi objects. The robot is currently incomplete, but for the most part it has been successfully tested.
2010-01-01
Background: Comparisons of Human Papillomavirus (HPV) detection between paraffin embedded cervical tissue and other cervical specimens have shown varying degrees of agreement. However, studies comparing freshly frozen specimens and paraffin embedded specimens of invasive cervical carcinomas are lacking. The aim of the study was to compare HPV detection using SPF10 broad-spectrum primers PCR followed by DEIA and genotyping by LiPA25 (version 1) between freshly frozen cervical tissue samples and paraffin embedded blocks of cervical tissue from the same patient. There were 171 pairs of paraffin embedded and freshly frozen samples analyzed from cervical carcinoma cases from Kampala, Uganda. Results: 88.9% (95% CI: 83.2%-93.2%) of paraffin embedded samples were HPV positive compared with 90.1% (95% CI: 84.6%-94.1%) of freshly frozen samples, giving an overall agreement in HPV detection between fresh tissue and paraffin embedded tissue of 86.0% (95% CI: 79.8%-90.8%). Although the proportion of HPV positive cases in freshly frozen tissue was higher than that in paraffin blocks, the difference was not statistically significant (p > 0.05). In both types of tissue, single HPV infections were predominant, with HPV16 accounting for 47% of positive cases. Comparison of the overall agreement, taking into account not only positivity in general but also HPV types, showed 65% agreement (complete agreement of 59.7%, partial agreement of 5.3%) and complete disagreement of 35.0%. HPV detection in squamous cell carcinomas (SCC) and adenocarcinomas (ADC) was similar in fresh tissue or paraffin blocks (p ≥ 0.05). p16 immunostaining in samples that had at least one HPV negative result showed that 24 out of 25 cases had an over-expressed pattern. Conclusions: HPV DNA detection was lower among ADC as compared to SCC. However, such differences were minimized when additional p16 testing was added, suggesting that technical issues may largely explain the HPV negative cases. PMID:20846370
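For paired positivity data of this kind, a McNemar test on the discordant pairs is one natural way to compare detection rates between the two sample types; the sketch below, using statsmodels, is only illustrative and the 2x2 counts are invented, not the study's data.

import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Rows: frozen tissue (HPV+, HPV-); columns: paraffin embedded tissue (HPV+, HPV-).
# Invented counts for 171 paired samples, for illustration only.
table = np.array([[142, 12],
                  [10, 7]])

result = mcnemar(table, exact=True)  # exact binomial test on the discordant pairs
print(f"McNemar statistic = {result.statistic}, p = {result.pvalue:.3f}")

overall_agreement = (table[0, 0] + table[1, 1]) / table.sum()
print(f"overall agreement on HPV positivity: {overall_agreement:.1%}")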
Developing a multimodal biometric authentication system using soft computing methods.
Malcangi, Mario
2015-01-01
Robust personal authentication is becoming ever more important in computer-based applications. Among a variety of methods, biometrics offers several advantages, mainly in embedded system applications. Hard and soft multi-biometrics, combined with hard and soft computing methods, can be applied to improve the personal authentication process and to generalize its applicability. This chapter describes the embedded implementation of a multi-biometric (voiceprint and fingerprint) multimodal identification system based on hard computing methods (DSP) for feature extraction and matching, an artificial neural network (ANN) for soft feature pattern matching, and a fuzzy logic engine (FLE) for data fusion and decision.
Wang, Mengmeng; Corpuz, Christine Carole C; Huseynova, Tukezban; Tomita, Minoru
2016-02-01
To evaluate the influences of preoperative pupil parameters on the visual outcomes of a new-generation multifocal toric intraocular lens (IOL) model with a surface-embedded near segment. In this prospective study, patients with cataract had phacoemulsification and implantation of Lentis Mplus toric LU-313 30TY IOLs (Oculentis GmbH, Berlin, Germany). The visual and optical outcomes were measured and compared preoperatively and postoperatively. The correlations between preoperative pupil parameters (diameter and decentration) and 3-month postoperative visual outcomes were evaluated using the Spearman's rank-order correlation coefficient (Rs) for the nonparametric data. A total of 27 eyes (16 patients) were enrolled into the current study. Statistically significant improvements in visual and refractive performances were found after the implantation of Lentis Mplus toric LU-313 30TY IOLs (P < .05). Statistically significant correlations were present between preoperative pupil diameters and postoperative visual acuities (Rs > 0; P < .05). Patients with a larger pupil always have better postoperative visual acuities. Meanwhile, there was no statistically significant correlation between pupil decentration and visual acuities (P > .05). Lentis Mplus toric LU-313 30TY IOLs provided excellent visual and optical performances during the 3-month follow-up. The preoperative pupil size is an important parameter when this toric multifocal IOL model is contemplated for surgery. Copyright 2016, SLACK Incorporated.
Statistical evaluation of synchronous spike patterns extracted by frequent item set mining
Torre, Emiliano; Picado-Muiño, David; Denker, Michael; Borgelt, Christian; Grün, Sonja
2013-01-01
We recently proposed frequent itemset mining (FIM) as a method to perform an optimized search for patterns of synchronous spikes (item sets) in massively parallel spike trains. This search outputs the occurrence count (support) of individual patterns that are not trivially explained by the counts of any superset (closed frequent item sets). The number of patterns found by FIM makes direct statistical tests infeasible due to severe multiple testing. To overcome this issue, we proposed to test the significance not of individual patterns, but instead of their signatures, defined as the pairs of pattern size z and support c. Here, we derive in detail a statistical test for the significance of the signatures under the null hypothesis of full independence (pattern spectrum filtering, PSF) by means of surrogate data. As a result, injected spike patterns that mimic assembly activity are well detected, yielding a low false negative rate. However, this approach is prone to additionally classifying patterns that result from chance overlap of real assembly activity and background spiking as significant. These patterns represent false positives with respect to the null hypothesis of having one assembly of given signature embedded in otherwise independent spiking activity. We propose the additional method of pattern set reduction (PSR) to remove these false positives by conditional filtering. By employing stochastic simulations of parallel spike trains with correlated activity in the form of injected spike synchrony in subsets of the neurons, we demonstrate for a range of parameter settings that the analysis scheme composed of FIM, PSF and PSR allows reliable detection of active assemblies in massively parallel spike trains. PMID:24167487
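A toy sketch of the pattern-spectrum bookkeeping described above: binned parallel spike trains are scanned for synchronous patterns, and each pattern's signature (size z, support c) is tabulated. Brute-force enumeration stands in for optimized FIM, the surrogate-based PSF/PSR steps are only indicated in a comment, and neuron counts and rates are arbitrary.

import itertools
from collections import Counter
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_bins = 10, 500
spikes = rng.random((n_neurons, n_bins)) < 0.05                 # binned parallel spike trains
sync_bins = rng.choice(n_bins, 20, replace=False)
spikes[np.ix_([2, 5, 7], sync_bins)] = True                     # inject a synchronous "assembly"

def pattern_spectrum(spikes, max_size=4, min_support=3):
    # Count the signatures (pattern size z, support c) of synchronous spike patterns.
    counts = Counter()
    for size in range(2, max_size + 1):
        for subset in itertools.combinations(range(spikes.shape[0]), size):
            support = int(np.all(spikes[list(subset)], axis=0).sum())
            if support >= min_support:
                counts[(size, support)] += 1
    return counts

for (z, c), n in sorted(pattern_spectrum(spikes).items()):
    print(f"size z={z}, support c={c}: {n} pattern(s)")
# In the full scheme, the same spectrum computed on surrogate data (e.g. dithered
# spike trains) gives the null distribution for pattern spectrum filtering (PSF),
# and pattern set reduction (PSR) then removes patterns explained by larger ones.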
Qi, Helena W; Leverentz, Hannah R; Truhlar, Donald G
2013-05-30
This work presents a new fragment method, the electrostatically embedded many-body expansion of the nonlocal energy (EE-MB-NE), and shows that it, along with the previously proposed electrostatically embedded many-body expansion of the correlation energy (EE-MB-CE), produces accurate results for large systems at the level of CCSD(T) coupled cluster theory. We primarily study water 16-mers, but we also test the EE-MB-CE method on water hexamers. We analyze the distributions of two-body and three-body terms to show why the many-body expansion of the electrostatically embedded correlation energy converges faster than the many-body expansion of the entire electrostatically embedded interaction potential. The average magnitude of the dimer contributions to the pairwise additive (PA) term of the correlation energy (which neglects cooperative effects) is only one-half of that of the average dimer contribution to the PA term of the expansion of the total energy; this explains why the mean unsigned error (MUE) of the EE-PA-CE approximation is only one-half of that of the EE-PA approximation. Similarly, the average magnitude of the trimer contributions to the three-body (3B) term of the EE-3B-CE approximation is only one-fourth of that of the EE-3B approximation, and the MUE of the EE-3B-CE approximation is one-fourth that of the EE-3B approximation. Finally, we test the efficacy of two- and three-body density functional corrections. One such density functional correction method, the new EE-PA-NE method, with the OLYP or the OHLYP density functional (where the OHLYP functional is the OptX exchange functional combined with the LYP correlation functional multiplied by 0.5), has the best performance-to-price ratio of any method whose computational cost scales as the third power of the number of monomers and is competitive in accuracy in the tests presented here with even the electrostatically embedded three-body approximation.
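A minimal sketch of the generic pairwise-additive (two-body) bookkeeping that underlies expansions of this kind: the total energy is approximated from monomer energies plus dimer corrections, where each subsystem energy would, in the electrostatically embedded schemes, be evaluated in the field of point charges representing the remaining fragments. The energy function here is a toy placeholder, not an electronic-structure call.

from itertools import combinations

def pairwise_additive_energy(fragments, energy):
    # Two-body (pairwise additive) many-body expansion: monomer energies plus
    # dimer corrections. In an electrostatically embedded scheme, energy(subset)
    # would be evaluated in the field of point charges for the other fragments.
    e1 = {i: energy((i,)) for i in fragments}
    total = sum(e1.values())
    for i, j in combinations(fragments, 2):
        total += energy((i, j)) - e1[i] - e1[j]
    return total

# Toy placeholder "energy": each monomer contributes -76.0, each dimer a small extra term.
def toy_energy(subset):
    pair_term = -0.008 * abs(subset[0] - subset[1]) if len(subset) == 2 else 0.0
    return -76.0 * len(subset) + pair_term

print(pairwise_additive_energy(range(16), toy_energy))  # a 16-mer, as in the water clusters above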
Immunogold Staining of Ultrathin Thawed Cryosections for Transmission Electron Microscopy (TEM).
Skepper, Jeremy N; Powell, Janet M
2008-06-01
INTRODUCTION: A pre-embedding method of immunochemical staining is used if antigens are damaged by resin embedding, or if the best preservation of membranes is required. Applying immunogold reagents to sections of lightly fixed tissue, free of embedding medium, can be a very sensitive method of immunochemical staining. Cells or tissues are fixed as strongly as possible and then treated with a cryoprotectant, which is usually a mixture of sucrose and polyvinylpyrrolidone (PVP). They are frozen onto pins in liquid nitrogen and sectioned at approximately -100°C. The frozen sections are thaw-mounted onto Formvar/nickel film grids and the cryoprotectant is removed by floating the grids on drops of phosphate-buffered saline (PBS). The immunogold staining is performed on the unembedded sections, which are subsequently contrast counterstained and infiltrated with a mixture of methylcellulose and uranyl acetate. In this protocol, samples are sectioned at low temperature, thaw-mounted onto film grids, immunochemically stained, contrast counterstained, and embedded/encapsulated in situ on the grid before viewing by transmission electron microscopy (TEM).
Embedded function methods for supersonic turbulent boundary layers
NASA Technical Reports Server (NTRS)
He, J.; Kazakia, J. Y.; Walker, J. D. A.
1990-01-01
The development of embedded functions to represent the mean velocity and total enthalpy distributions in the wall layer of a supersonic turbulent boundary layer is considered. The asymptotic scaling laws (in the limit of large Reynolds number) for high speed compressible flows are obtained to facilitate eventual implementation of the embedded functions in a general prediction method. A self-consistent asymptotic structure is derived, as well as a compressible law of the wall in which the velocity and total enthalpy are logarithmic within the overlap zone, but in the Howarth-Dorodnitsyn variable. Simple outer region turbulence models are proposed (some of which are modifications of existing incompressible models) to reflect the effects of compressibility. As a test of the methodology and the new turbulence models, a set of self-similar outer region profiles is obtained for constant pressure flow; these are then coupled with embedded functions in the wall layer. The composite profiles thus obtained are compared directly with experimental data and good agreement is obtained for flows with Mach numbers up to 10.
Embedding Dimension Selection for Adaptive Singular Spectrum Analysis of EEG Signal.
Xu, Shanzhi; Hu, Hai; Ji, Linhong; Wang, Peng
2018-02-26
The recorded electroencephalography (EEG) signal is often contaminated with different kinds of artifacts and noise. Singular spectrum analysis (SSA) is a powerful tool for extracting the brain rhythm from a noisy EEG signal. By analyzing the frequency characteristics of the reconstructed component (RC) and the change rate in the trace of the Toeplitz matrix, it is demonstrated that the embedding dimension is related to the frequency bandwidth of each reconstructed component, consistent with the component mixing in the singular value decomposition step. A method for selecting the embedding dimension is thereby proposed and verified with a simulated EEG signal based on the Markov Process Amplitude (MPA) EEG model. Real EEG signals were also collected from experimental subjects under both eyes-open and eyes-closed conditions. The experimental results show that, based on the embedding dimension selection method, the alpha rhythm can be extracted from the real EEG signal by the adaptive SSA, which can be effectively utilized to distinguish between the eyes-open and eyes-closed states.
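A bare-bones numpy sketch of the SSA machinery referred to above, applied to a noisy sinusoid: build the trajectory (Hankel) matrix for a chosen embedding dimension, take its SVD, and reconstruct a rhythm from the leading pair of components by diagonal averaging. The embedding dimension is simply fixed here; the paper's selection rule based on RC bandwidth and the trace of the Toeplitz matrix is not reproduced.

import numpy as np
from scipy.linalg import hankel

rng = np.random.default_rng(0)
fs = 256
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.8 * rng.standard_normal(t.size)  # noisy 10 Hz "alpha" rhythm

L = 32                                    # embedding dimension (window length), fixed here
K = x.size - L + 1
X = hankel(x[:L], x[L - 1:])              # L x K trajectory matrix
U, s, Vt = np.linalg.svd(X, full_matrices=False)

def reconstruct(indices):
    # Diagonal averaging (Hankelization) of the selected elementary matrices.
    Y = (U[:, indices] * s[indices]) @ Vt[indices]
    rc = np.zeros(x.size)
    counts = np.zeros(x.size)
    for i in range(L):
        rc[i:i + K] += Y[i]
        counts[i:i + K] += 1
    return rc / counts

alpha = reconstruct([0, 1])  # the leading pair of components carries the oscillation
print(f"residual power fraction after extraction: {np.var(x - alpha) / np.var(x):.2f}")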
Poojan, Shiv; Kim, Han-Seong; Yoon, Ji-Woon; Sim, Hye Won; Hong, Kyeong-Man
2018-05-20
Immunofluorescent staining is currently the method of choice for determination of protein expression levels in cell-culture systems when morphological information is also necessary. The protocol of immunocytochemical staining on paraffin-embedded cell blocks, presented herein, is an excellent alternative to immunofluorescent staining on non-paraffin-embedded fixed cells. In this protocol, a paraffin cell block from HeLa cells was prepared using the thromboplastin-plasma method, and immunocytochemistry was performed for the evaluation of two proliferation markers, CKAP2 and Ki-67. The nuclei and cytoplasmic morphology of the HeLa cells were well preserved in the cell-block slides. At the same time, the CKAP2 and Ki-67 staining patterns in the immunocytochemistry were quite similar to those in immunohistochemical staining in paraffin cancer tissues. With modified cell-culture conditions, including pre-incubation of HeLa cells under serum-free conditions, the effect could be evaluated while preserving architectural information. In conclusion, immunocytochemistry on paraffin-embedded cell blocks is an excellent alternative to immunofluorescent staining.
Electron microscopy of the amphibian model systems Xenopus laevis and Ambystoma mexicanum.
Kurth, Thomas; Berger, Jürgen; Wilsch-Bräuninger, Michaela; Kretschmar, Susanne; Cerny, Robert; Schwarz, Heinz; Löfberg, Jan; Piendl, Thomas; Epperlein, Hans H
2010-01-01
In this chapter we provide a set of different protocols for the ultrastructural analysis of amphibian (Xenopus, axolotl) tissues, mostly of embryonic origin. For Xenopus these methods include: (1) embedding gastrulae and tailbud embryos into Spurr's resin for TEM, (2) post-embedding labeling of methacrylate (K4M) and cryosections through adult and embryonic epithelia for correlative LM and TEM, and (3) pre-embedding labeling of embryonic tissues with silver-enhanced nanogold. For the axolotl (Ambystoma mexicanum) we present the following methods: (1) SEM of migrating neural crest (NC) cells; (2) SEM and TEM of extracellular matrix (ECM) material; (3) Cryo-SEM of extracellular matrix (ECM) material after cryoimmobilization; and (4) TEM analysis of hyaluronan using high-pressure freezing and HABP labeling. These methods provide exemplary approaches for a variety of questions in the field of amphibian development and regeneration, and focus on cell biological issues that can only be answered with fine structural imaging methods, such as electron microscopy. Copyright © 2010 Elsevier Inc. All rights reserved.
Oriented nanofibers embedded in a polymer matrix
NASA Technical Reports Server (NTRS)
Barrera, Enrique V. (Inventor); Lozano, Karen (Inventor); Rodriguez-Macias, Fernando J. (Inventor); Chibante, Luis Paulo Felipe (Inventor); Stewart, David Harris (Inventor)
2011-01-01
A method of forming a composite of embedded nanofibers in a polymer matrix is disclosed. The method includes incorporating nanofibers in a plastic matrix forming agglomerates, and uniformly distributing the nanofibers by exposing the agglomerates to hydrodynamic stresses. These hydrodynamic stresses force the agglomerates to break apart. In combination, or additionally, elongational flow is used to achieve small diameters and alignment. A nanofiber reinforced polymer composite system is disclosed. The system includes a plurality of nanofibers that are embedded in polymer matrices in micron size fibers. A method for producing nanotube continuous fibers is disclosed. Nanofibers are fibrils with diameters of 100 nm, multiwall nanotubes, single wall nanotubes and their various functionalized and derivatized forms. The method includes mixing a nanofiber in a polymer and inducing an orientation of the nanofibers that enables the nanofibers to be used to enhance mechanical, thermal and electrical properties. Orientation is induced by high shear mixing and elongational flow, singly or in combination. The polymer may be removed from said nanofibers, leaving micron size fibers of aligned nanofibers.
A novel attack method about double-random-phase-encoding-based image hiding method
NASA Astrophysics Data System (ADS)
Xu, Hongsheng; Xiao, Zhijun; Zhu, Xianchen
2018-03-01
Using optical image processing techniques, this paper proposes a novel text encryption and hiding method based on the double-random-phase-encoding technique. First, the secret message is transformed into a two-dimensional array: the higher bits of the array elements are filled with the bit stream of the secret text, while the lower bits store specific values. The transformed array is then encoded with the double-random-phase-encoding technique. Finally, the encoded array is superimposed on a public host image to obtain an image carrying the hidden text. The performance of the proposed technique is tested via analytical modeling and a test data stream. Experimental results show that, by properly selecting the method of transforming the secret text into an array and the superimposition coefficient, the secret text can be recovered accurately or almost accurately while maintaining the quality of the host image carrying the hidden data.
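A small numpy sketch of classical double-random-phase encoding of a 2-D array (the array that would carry the secret text in its higher bits): multiply by a random phase mask in the input plane, Fourier transform, multiply by a second random phase mask, and inverse transform; decryption reverses the steps with the conjugate keys. The bit-plane packing and host-image superposition stages of the proposed method are omitted.

import numpy as np

rng = np.random.default_rng(0)
N = 64
plain = rng.integers(0, 256, size=(N, N)).astype(float)  # stands in for the text-carrying array

# Two statistically independent random phase masks act as the encryption keys.
phase1 = np.exp(2j * np.pi * rng.random((N, N)))
phase2 = np.exp(2j * np.pi * rng.random((N, N)))

def drpe_encrypt(img):
    return np.fft.ifft2(np.fft.fft2(img * phase1) * phase2)

def drpe_decrypt(cipher):
    field = np.fft.ifft2(np.fft.fft2(cipher) * np.conj(phase2))
    return np.abs(field * np.conj(phase1))

cipher = drpe_encrypt(plain)
recovered = drpe_decrypt(cipher)
print(f"max reconstruction error: {np.max(np.abs(recovered - plain)):.2e}")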
Chen, Dong; Coteus, Paul W; Eisley, Noel A; Gara, Alan; Heidelberger, Philip; Senger, Robert M; Salapura, Valentina; Steinmacher-Burow, Burkhard; Sugawara, Yutaka; Takken, Todd E
2013-08-27
Embodiments of the invention provide a method, system and computer program product for embedding a global barrier and global interrupt network in a parallel computer system organized as a torus network. The computer system includes a multitude of nodes. In one embodiment, the method comprises taking inputs from a set of receivers of the nodes, dividing the inputs from the receivers into a plurality of classes, combining the inputs of each of the classes to obtain a result, and sending said result to a set of senders of the nodes. Embodiments of the invention provide a method, system and computer program product for embedding a collective network in a parallel computer system organized as a torus network. In one embodiment, the method comprises adding to a torus network a central collective logic to route messages among at least a group of nodes in a tree structure.
Local alignment vectors reveal cancer cell-induced ECM fiber remodeling dynamics
Lee, Byoungkoo; Konen, Jessica; Wilkinson, Scott; Marcus, Adam I.; Jiang, Yi
2017-01-01
Invasive cancer cells interact with the surrounding extracellular matrix (ECM), remodeling ECM fiber network structure by condensing, degrading, and aligning these fibers. We developed a novel local alignment vector analysis method to quantitatively measure collagen fiber alignment as a vector field using Circular Statistics. This method was applied to human non-small cell lung carcinoma (NSCLC) cell lines, embedded as spheroids in a collagen gel. Collagen remodeling was monitored using second harmonic generation imaging under normal conditions and when the LKB1-MARK1 pathway was disrupted through RNAi-based approaches. The results showed that inhibiting LKB1 or MARK1 in NSCLC increases the collagen fiber alignment and captures outward alignment vectors from the tumor spheroid, corresponding to high invasiveness of LKB1 mutant cancer cells. With time-lapse imaging of ECM micro-fiber morphology, the local alignment vector can measure the dynamic signature of invasive cancer cell activity and cell-migration-induced ECM and collagen remodeling and realigning dynamics. PMID:28045069
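A simplified sketch of a local alignment vector field in the spirit of the method above: per-pixel fiber orientations are estimated from image gradients, and each window is summarized by the circular mean resultant vector of the doubled angles (orientations are axial data), whose length measures local alignment. This is an illustrative stand-in, not the authors' exact pipeline.

import numpy as np

def local_alignment_vectors(image, window=16):
    # Per-window mean orientation and alignment strength estimated from image gradients.
    gy, gx = np.gradient(image.astype(float))
    theta = np.arctan2(gy, gx) + np.pi / 2            # fiber direction is normal to the gradient
    h, w = image.shape
    vectors = []
    for r in range(0, h - window + 1, window):
        for c in range(0, w - window + 1, window):
            ang = 2 * theta[r:r + window, c:c + window]   # double the angles: orientations are axial
            C, S = np.cos(ang).mean(), np.sin(ang).mean()
            mean_dir = 0.5 * np.arctan2(S, C)             # circular mean orientation
            strength = np.hypot(C, S)                     # 1 = perfectly aligned, 0 = isotropic
            vectors.append((r + window // 2, c + window // 2, mean_dir, strength))
    return vectors

# Toy fiber image: diagonal stripes should give alignment vectors near 45 degrees.
yy, xx = np.mgrid[0:128, 0:128]
stripes = np.sin(0.4 * (xx - yy))
for r, c, d, s in local_alignment_vectors(stripes, window=32)[:3]:
    print(f"window at ({r},{c}): orientation {np.degrees(d):.1f} deg, alignment {s:.2f}")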
Audio Classification in Speech and Music: A Comparison between a Statistical and a Neural Approach
NASA Astrophysics Data System (ADS)
Bugatti, Alessandro; Flammini, Alessandra; Migliorati, Pierangelo
2002-12-01
We focus on the problem of audio classification into speech and music for multimedia applications. In particular, we present a comparison between two different techniques for speech/music discrimination. The first method is based on zero-crossing rate and Bayesian classification. It is very simple from a computational point of view and gives good results in the case of pure music or speech. The simulation results show that some performance degradation arises when the music segment also contains speech superimposed on the music, or strong rhythmic components. To overcome these problems, we propose a second method that uses more features and is based on neural networks (specifically a multi-layer Perceptron). In this case we obtain better performance, at the expense of a limited growth in computational complexity. In practice, the proposed neural network is simple to implement if a suitable polynomial is used as the activation function, and a real-time implementation is possible even on low-cost embedded systems.
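A hedged sketch of the first, low-complexity stage: per-frame zero-crossing rate statistics feed a simple one-dimensional Gaussian (Bayesian) decision rule, which tends to separate noise-like speech frames from sustained harmonic music. The class statistics, signals and thresholds below are invented for illustration; the MLP stage would consume a richer feature set.

import numpy as np

def zcr_features(signal, frame_len=512, hop=256):
    # Mean and standard deviation of the per-frame zero-crossing rate.
    frames = [signal[i:i + frame_len] for i in range(0, len(signal) - frame_len, hop)]
    zcr = np.array([np.mean(np.abs(np.diff(np.sign(f))) > 0) for f in frames])
    return zcr.mean(), zcr.std()

def gaussian_bayes_decision(x, class_stats):
    # Pick the class whose 1-D Gaussian on the mean ZCR gives the higher log-likelihood.
    best, best_ll = None, -np.inf
    for label, (mu, sigma) in class_stats.items():
        ll = -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma)
        if ll > best_ll:
            best, best_ll = label, ll
    return best

# Toy signals at 16 kHz: gated noise bursts ("speech-like") vs. a sustained harmonic tone.
fs = 16000
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)
speech_like = rng.standard_normal(t.size) * (rng.random(t.size) < 0.3)
music_like = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)

class_stats = {"speech": (0.45, 0.15), "music": (0.05, 0.03)}  # assumed training statistics
for name, sig in [("speech-like", speech_like), ("music-like", music_like)]:
    mean_zcr, _ = zcr_features(sig)
    print(name, "->", gaussian_bayes_decision(mean_zcr, class_stats))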
KISS for STRAP: user extensions for a protein alignment editor.
Gille, Christoph; Lorenzen, Stephan; Michalsky, Elke; Frömmel, Cornelius
2003-12-12
The Structural Alignment Program STRAP is a convenient, comprehensive editor and analysis tool for protein alignments. A wide range of functions related to protein sequences and protein structures is accessible through an intuitive graphical interface. Recent features include mapping of mutations and polymorphisms onto structures and production of high-quality figures for publication. Here we address the general problem of keeping multi-purpose program packages up to date with the rapid development of bioinformatical methods and the demand for specific program functions. STRAP was remade with a novel design which aims at Keeping Interfaces in STRAP Simple (KISS). KISS renders STRAP extendable by bio-scientists as well as by bio-informaticians. Scientists with basic computer skills are capable of implementing statistical methods or embedding existing bioinformatical tools in STRAP themselves. For bio-informaticians, STRAP may serve as an environment for rapid prototyping and testing of complex algorithms such as automatic alignment algorithms or phylogenetic methods. Further, STRAP can be applied as an interactive web applet to present data related to a particular protein family and as a teaching tool. Requires JAVA 1.4 or higher. http://www.charite.de/bioinf/strap/
NASA Astrophysics Data System (ADS)
Park, K. W.; Dasika, V. D.; Nair, H. P.; Crook, A. M.; Bank, S. R.; Yu, E. T.
2012-06-01
We have used conductive atomic force microscopy to investigate the influence of growth temperature on local current flow in GaAs pn junctions with embedded ErAs nanoparticles grown by molecular beam epitaxy. Three sets of samples, one with 1 ML ErAs deposited at different growth temperatures and two grown at 530 °C and 575 °C with varying ErAs depositions, were characterized. Statistical analysis of local current images suggests that the structures grown at 575 °C have about 3 times thicker ErAs nanoparticles than structures grown at 530 °C, resulting in degradation of conductivity due to reduced ErAs coverage. These findings explain previous studies of macroscopic tunnel junctions.
A local crack-tracking strategy to model three-dimensional crack propagation with embedded methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Annavarapu, Chandrasekhar; Settgast, Randolph R.; Vitali, Efrem
We develop a local, implicit crack tracking approach to propagate embedded failure surfaces in three dimensions. We build on the global crack-tracking strategy of Oliver et al. (Int J. Numer. Anal. Meth. Geomech., 2004; 28:609–632) that tracks all potential failure surfaces in a problem at once by solving a Laplace equation with anisotropic conductivity. We discuss important modifications to this algorithm with a particular emphasis on the effect of the Dirichlet boundary conditions for the Laplace equation on the resultant crack path. Algorithmic and implementational details of the proposed method are provided. Finally, several three-dimensional benchmark problems are studied and results are compared with available literature. The results indicate that the proposed method addresses pathological cases, exhibits better behavior in the presence of closely interacting fractures, and provides a viable strategy to robustly evolve embedded failure surfaces in 3D.
A local crack-tracking strategy to model three-dimensional crack propagation with embedded methods
Annavarapu, Chandrasekhar; Settgast, Randolph R.; Vitali, Efrem; ...
2016-09-29
We develop a local, implicit crack tracking approach to propagate embedded failure surfaces in three dimensions. We build on the global crack-tracking strategy of Oliver et al. (Int J. Numer. Anal. Meth. Geomech., 2004; 28:609–632) that tracks all potential failure surfaces in a problem at once by solving a Laplace equation with anisotropic conductivity. We discuss important modifications to this algorithm with a particular emphasis on the effect of the Dirichlet boundary conditions for the Laplace equation on the resultant crack path. Algorithmic and implementational details of the proposed method are provided. Finally, several three-dimensional benchmark problems are studied and results are compared with available literature. The results indicate that the proposed method addresses pathological cases, exhibits better behavior in the presence of closely interacting fractures, and provides a viable strategy to robustly evolve embedded failure surfaces in 3D.
NASA Astrophysics Data System (ADS)
Yao, Jie; Li, Qian; Zhou, Bo; Wang, Dan; Wu, Rie
2018-04-01
Fourier-transform infrared (FTIR) micro-spectroscopy is an excellent method for biological analyses. In this paper, a series of metal coating films on ITO glass was prepared by an electrochemical method, and paraffin-embedded rat brain tissue of different thicknesses on these substrates was studied by IR micro-spectroscopy in attenuated total reflection (ATR) mode and transflection mode, respectively. The low-cost Co-Ni-Cu alloy coating film is a good reflection substrate for IR analysis. The infrared microscopic transflection mode does not need to touch the sample at all and yields IR spectra with higher signal-to-noise ratios. The paraffin-embedding method allows tissues to be stored for a long time for re-analysis, ensuring the traceability of the sample. It also isolates the sample from the metal and avoids interactions of the biological tissue with the metals. The best tissue thickness is 4 μm.
von Hansen, Yann; Mehlich, Alexander; Pelz, Benjamin; Rief, Matthias; Netz, Roland R
2012-09-01
The thermal fluctuations of micron-sized beads in dual trap optical tweezer experiments contain complete dynamic information about the viscoelastic properties of the embedding medium and, if present, macromolecular constructs connecting the two beads. To quantitatively interpret the spectral properties of the measured signals, a detailed understanding of the instrumental characteristics is required. To this end, we present a theoretical description of the signal processing in a typical dual trap optical tweezer experiment accounting for polarization crosstalk and instrumental noise and discuss the effect of finite statistics. To infer the unknown parameters from experimental data, a maximum likelihood method based on the statistical properties of the stochastic signals is derived. In a first step, the method can be used for calibration purposes: We propose a scheme involving three consecutive measurements (both traps empty, first one occupied and second empty, and vice versa), by which all instrumental and physical parameters of the setup are determined. We test our approach for a simple model system, namely a pair of unconnected, but hydrodynamically interacting spheres. The comparison to theoretical predictions based on instantaneous as well as retarded hydrodynamics emphasizes the importance of hydrodynamic retardation effects due to vorticity diffusion in the fluid. For more complex experimental scenarios, where macromolecular constructs are tethered between the two beads, the same maximum likelihood method in conjunction with dynamic deconvolution theory will in a second step allow one to determine the viscoelastic properties of the tethered element connecting the two beads.
X-ray absorption spectroscopy characterization of embedded and extracted nano-oxides
Stan, Tiberiu; Sprouster, David J.; Ofan, Avishai; ...
2016-12-29
Here, the chemistries and structures of both embedded and extracted Y-Ti-O nanometer-scale oxides in a nanostructured ferritic alloy (NFA) were probed by x-ray absorption spectroscopy (XAS). Y2Ti2O7 is the primary embedded phase, while the slightly larger extracted oxides are primarily Y2TiO5. Analysis of the embedded nano-oxides is difficult partly due to the multiple Ti environments associated with different oxides and those still residing in matrix lattice sites. Thus, bulk extraction followed by selective filtration was used to isolate the larger Y2TiO5 oxides for XAS, while the smaller, predominant embedded-phase Y2Ti2O7 oxides passed through the filters and were analyzed using the log-ratio method.
Characteristics of Teachers Nominated for an Accelerated Principal Preparation Program
ERIC Educational Resources Information Center
Rios, Steve J.; Reyes-Guerra, Daniel
2012-01-01
This article reports the initial evaluation results of a new accelerated, job-embedded principal preparation program funded by a Race to the Top Grant (U.S. Department of Education, 2012a) in Florida. Descriptive statistics, t-tests, and chi-square analyses were used to describe the characteristics of a group of potential applicants nominated to…
ERIC Educational Resources Information Center
Todorinova, Lily; Huse, Andy; Lewis, Barbara; Torrence, Matt
2011-01-01
Declining reference statistics, diminishing human resources, and the desire to be more proactive and embedded in academic departments, prompted the University of South Florida Library to create a taskforce for re-envisioning reference services. The taskforce was charged with examining the staffing patterns at the desk and developing…
NASA Astrophysics Data System (ADS)
Aytaç Korkmaz, Sevcan; Binol, Hamidullah
2018-03-01
Stomach cancer still causes many deaths, and early diagnosis is crucial for reducing the mortality rate of cancer patients. Computer-aided methods for early detection are therefore developed in this article. Stomach cancer images were obtained from the Fırat University Medical Faculty Pathology Department. Local Binary Patterns (LBP) and Histogram of Oriented Gradients (HOG) features of these images are calculated. Sammon mapping, Stochastic Neighbor Embedding (SNE), Isomap, classical multidimensional scaling (MDS), Local Linear Embedding (LLE), Linear Discriminant Analysis (LDA), t-Distributed Stochastic Neighbor Embedding (t-SNE), and Laplacian Eigenmaps are then used for dimensionality reduction, reducing the high-dimensional features to lower dimensions. Artificial neural network (ANN) and Random Forest (RF) classifiers were used to classify the stomach cancer images with these reduced feature sets. In this way, systems were developed that measure the effect of feature dimensionality by obtaining features at different dimensions with the dimensionality reduction methods. When all the developed methods are compared, the best accuracy results are obtained with the LBP_MDS_ANN and LBP_LLE_ANN combinations.
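One branch of the comparison above can be sketched with scikit-image and scikit-learn: LBP histograms as texture features, a manifold-style dimensionality reduction (LLE is used here as the example), and a small neural network classifier. The images are synthetic stand-ins, since the pathology data are not available, and the parameters are illustrative.

import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import local_binary_pattern
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def to_uint8(im):
    im = im - im.min()
    return np.uint8(255 * im / im.max())

def lbp_histogram(image, P=8, R=1):
    # Uniform LBP histogram as a texture feature vector.
    lbp = local_binary_pattern(image, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# Synthetic stand-ins for two tissue classes with different texture statistics.
images = ([to_uint8(gaussian_filter(rng.normal(size=(64, 64)), 2)) for _ in range(40)] +
          [to_uint8(rng.normal(size=(64, 64))) for _ in range(40)])
labels = np.repeat([0, 1], 40)
features = np.array([lbp_histogram(im) for im in images])

# Reduce the feature dimension (LLE here, one of the compared methods), then classify
# with a small artificial neural network.
embedded = LocallyLinearEmbedding(n_neighbors=10, n_components=3).fit_transform(features)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, embedded, labels, cv=5).mean())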
Biological embedding: evaluation and analysis of an emerging concept for nursing scholarship.
Nist, Marliese Dion
2017-02-01
The purpose of this paper was to report the analysis of the concept of biological embedding. Research that incorporates a life course perspective is becoming increasingly prominent in the health sciences. Biological embedding is a central concept in life course theory and may be important for nursing theories to enhance our understanding of health states in individuals and populations. Before the concept of biological embedding can be used in nursing theory and research, an analysis of the concept is required to advance it towards full maturity. Concept analysis. PubMed, CINAHL and PsycINFO were searched for publications using the term 'biological embedding' or 'biological programming' and published through 2015. An evaluation of the concept was first conducted to determine the concept's level of maturity and was followed by a concept comparison, using the methods for concept evaluation and comparison described by Morse. A consistent definition of biological embedding - the process by which early life experience alters biological processes to affect adult health outcomes - was found throughout the literature. The concept has been used in several theories that describe the mechanisms through which biological embedding might occur and highlight its role in the development of health trajectories. Biological embedding is a partially mature concept, requiring concept comparison with an overlapping concept - biological programming - to more clearly establish the boundaries of biological embedding. Biological embedding has significant potential for theory development and application in multiple academic disciplines, including nursing. © 2016 John Wiley & Sons Ltd.
Video Game Learning Dynamics: Actionable Measures of Multidimensional Learning Trajectories
ERIC Educational Resources Information Center
Reese, Debbie Denise; Tabachnick, Barbara G.; Kosko, Robert E.
2015-01-01
Valid, accessible, reusable methods for instructional video game design and embedded assessment can provide actionable information enhancing individual and collective achievement. Cyberlearning through game-based, metaphor-enhanced learning objects (CyGaMEs) design and embedded assessment quantify player behavior to study knowledge discovery and…
Embedded electronics for intelligent structures
NASA Astrophysics Data System (ADS)
Warkentin, David J.; Crawley, Edward F.
The signal, power, and communications provisions for the distributed control processing, sensing, and actuation of an intelligent structure could benefit from a method of physically embedding some electronic components. The preliminary feasibility of embedding electronic components in load-bearing intelligent composite structures is addressed. A technique for embedding integrated circuits on silicon chips within graphite/epoxy composite structures is presented which addresses the problems of electrical, mechanical, and chemical isolation. The mechanical and chemical isolation of test articles manufactured by this technique are tested by subjecting them to static and cyclic mechanical loads and a temperature/humidity/bias environment. The likely failure modes under these conditions are identified, and suggestions for further improvements in the technique are discussed.
Embedded fiber optic ultrasonic sensors and generators
NASA Astrophysics Data System (ADS)
Dorighi, John F.; Krishnaswamy, Sridhar; Achenbach, Jan D.
1995-04-01
Ultrasonic sensors and generators based on fiber-optic systems are described. It is shown that intrinsic fiber optic Fabry-Perot ultrasound sensors that are embedded in a structure can be stabilized by actively tuning the laser frequency. The need for this method of stabilization is demonstrated by detecting piezoelectric transducer-generated ultrasonic pulses in the presence of low frequency dynamic strains that are intentionally induced to cause sensor drift. The actively stabilized embedded fiber optic Fabry-Perot sensor is also shown to have sufficient sensitivity to detect ultrasound that is generated in the interior of a structure by means of a high-power optical fiber that pipes energy from a pulsed laser to an embedded generator of ultrasound.
Steganalysis of recorded speech
NASA Astrophysics Data System (ADS)
Johnson, Micah K.; Lyu, Siwei; Farid, Hany
2005-03-01
Digital audio provides a suitable cover for high-throughput steganography. At 16 bits per sample and sampled at a rate of 44,100 Hz, digital audio has the bit-rate to support large messages. In addition, audio is often transient and unpredictable, facilitating the hiding of messages. Using an approach similar to our universal image steganalysis, we show that hidden messages alter the underlying statistics of audio signals. Our statistical model begins by building a linear basis that captures certain statistical properties of audio signals. A low-dimensional statistical feature vector is extracted from this basis representation and used by a non-linear support vector machine for classification. We show the efficacy of this approach on LSB embedding and Hide4PGP. While no explicit assumptions about the content of the audio are made, our technique has been developed and tested on high-quality recorded speech.
Daikoku, Tatsuya
2018-01-01
Learning and knowledge of transitional probabilities in sequences like music, called statistical learning and statistical knowledge, are considered implicit processes that occur without the intention to learn or awareness of what one knows. This implicit statistical knowledge can alternatively be expressed via an abstract medium such as musical melody, which suggests that this knowledge is reflected in melodies written by a composer. This study investigates how statistics in music vary over a composer's lifetime. Transitional probabilities of highest-pitch sequences in Ludwig van Beethoven's piano sonatas were calculated based on different hierarchical Markov models. Each interval pattern was ordered based on the sonata opus number. The transitional probabilities of sequential patterns that are universal in music gradually decreased, suggesting that time-course variations of statistics in music reflect time-course variations of a composer's statistical knowledge. This study sheds new light on novel methodologies that may be able to evaluate the time-course variation of a composer's implicit knowledge using musical scores.
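A minimal sketch of the underlying computation: a first-order (or higher-order) Markov model over a pitch or interval sequence is just a table of transitional probabilities P(next | context) obtained by counting and normalizing. The example sequence is arbitrary.

from collections import Counter, defaultdict

def transition_probabilities(sequence, order=1):
    # First- or higher-order Markov transitional probabilities P(next | context).
    counts = defaultdict(Counter)
    for i in range(len(sequence) - order):
        context = tuple(sequence[i:i + order])
        counts[context][sequence[i + order]] += 1
    return {ctx: {sym: n / sum(c.values()) for sym, n in c.items()}
            for ctx, c in counts.items()}

# Arbitrary example: a sequence of melodic intervals in semitones.
intervals = [2, 2, 1, 2, 2, 2, 1, -1, -2, -2, 2, 2, 1, 2]
for ctx, dist in transition_probabilities(intervals, order=1).items():
    print(ctx, dist)
# Tracking the probability of particular (context -> next) patterns against opus number
# would then give the kind of time-course described above.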
Evaluating Quality of Aged Archival Formalin-Fixed Paraffin-Embedded Samples for RNA-Sequencing
Archival formalin-fixed paraffin-embedded (FFPE) samples offer a vast, untapped source of genomic data for biomarker discovery. However, the quality of FFPE samples is often highly variable, and conventional methods to assess RNA quality for RNA-sequencing (RNA-seq) are not infor...
Use of archival resources has been limited to date by inconsistent methods for genomic profiling of degraded RNA from formalin-fixed paraffin-embedded (FFPE) samples. RNA-sequencing offers a promising way to address this problem. Here we evaluated transcriptomic dose responses us...
Using Embedded Visual Coding to Support Contextualization of Historical Texts
ERIC Educational Resources Information Center
Baron, Christine
2016-01-01
This mixed-method study examines the think-aloud protocols of 48 randomly assigned undergraduate students to understand what effect embedding a visual coding system, based on reliable visual cues for establishing historical time period, would have on novice history students' ability to contextualize historic documents. Results indicate that using…
Mayhew, Terry M; Mühlfeld, Christian; Vanhecke, Dimitri; Ochs, Matthias
2009-04-01
Detecting, localising and counting ultrasmall particles and nanoparticles in sub- and supra-cellular compartments are of considerable current interest in basic and applied research in biomedicine, bioscience and environmental science. For particles with sufficient contrast (e.g. colloidal gold, ferritin, heavy metal-based nanoparticles), visualization requires the high resolutions achievable by transmission electron microscopy (TEM). Moreover, if particles can be counted, their spatial distributions can be subjected to statistical evaluation. Whatever the level of structural organisation, particle distributions can be compared between different compartments within a given structure (cell, tissue and organ) or between different sets of structures (in, say, control and experimental groups). Here, a portfolio of stereology-based methods for drawing such comparisons is presented. We recognise two main scenarios: (1) section surface localisation, in which particles, exemplified by antibody-conjugated colloidal gold particles or quantum dots, are distributed at the section surface during post-embedding immunolabelling, and (2) section volume localisation (or full section penetration), in which particles are contained within the cell or tissue prior to TEM fixation and embedding procedures. Whatever the study aim or hypothesis, the methods for quantifying particles rely on the same basic principles: (i) unbiased selection of specimens by multistage random sampling, (ii) unbiased estimation of particle number and compartment size using stereological test probes (points, lines, areas and volumes), and (iii) statistical testing of an appropriate null hypothesis. To compare different groups of cells or organs, a simple and efficient approach is to compare the observed distributions of raw particle counts by a combined contingency table and chi-squared analysis. Compartmental chi-squared values making substantial contributions to total chi-squared values help identify where the main differences between distributions reside. Distributions between compartments in, say, a given cell type, can be compared using a relative labelling index (RLI) or relative deposition index (RDI) combined with a chi-squared analysis to test whether or not particles preferentially locate in certain compartments. This approach is ideally suited to analysing particles located in volume-occupying compartments (organelles or tissue spaces) or surface-occupying compartments (membranes) and expected distributions can be generated by the stereological devices of point, intersection and particle counting. Labelling efficiencies (number of gold particles per antigen molecule) in immunocytochemical studies can be determined if suitable calibration methods (e.g. biochemical assays of golds per membrane surface or per cell) are available. In addition to relative quantification for between-group and between-compartment comparisons, stereological methods also permit absolute quantification, e.g. total volumes, surfaces and numbers of structures per cell. Here, the utility, limitations and recent applications of these methods are reviewed.
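A small worked example of the relative labelling index (RLI) and partial chi-squared analysis described above: observed gold counts per compartment are compared with counts expected from compartment size (here proxied by stereological point counts), and each compartment's contribution to the total chi-squared flags where labelling is preferential. All counts are invented for illustration.

import numpy as np
from scipy.stats import chi2

# Invented data: observed gold particle counts and stereological point counts
# (a proxy for compartment volume) for four compartments of one cell type.
compartments = ["nucleus", "mitochondria", "cytosol", "plasma membrane"]
observed_gold = np.array([120, 30, 40, 10])
point_counts = np.array([300, 150, 500, 50])

expected = observed_gold.sum() * point_counts / point_counts.sum()
rli = observed_gold / expected                         # RLI > 1 suggests preferential labelling
partial_chi2 = (observed_gold - expected) ** 2 / expected
total_chi2 = partial_chi2.sum()
p_value = chi2.sf(total_chi2, df=len(compartments) - 1)

for name, r, c in zip(compartments, rli, partial_chi2):
    print(f"{name:16s} RLI = {r:.2f}  partial chi2 = {c:.1f} ({100 * c / total_chi2:.0f}% of total)")
print(f"total chi2 = {total_chi2:.1f}, p = {p_value:.2g}")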
[Reproduction of the fish Brotula clarkae (Pisces: Ophidiidae) in the Colombian Pacific].
Acevedo, Jenny; Angulo, Wilberto; Ramírez, Manuel; Zapata, Luis A
2007-01-01
We studied the reproductive ecology of the fish Brotula clarkae based on 754 individuals obtained from the industrial and artisanal fisheries in the Colombian Pacific (March 1994 to December 1996). Histological sections (paraplast embedding and Harris's hematoxylin-eosin stain) were made from 151 females (26-100 cm total length). Additionally, mean diameter and particular features are described for oocyte phases and ovarian development stages. The mean size at initial sexual maturity of females was determined with two methods (graphical and statistical) as 61.3 cm and 62.3 cm, respectively. The general sex ratio was 1:1. There are several spawning pulses between May and October. The mean fecundity per spawning pulse was 1,005,657 eggs. We recommend a fishing ban during the reproduction period and a minimum catch size above 62.3 cm.
Toward the excited isoscalar meson spectrum from lattice QCD
Dudek, Jozef J.; Edwards, Robert G.; Guo, Peng; ...
2013-11-18
We report on the extraction of an excited spectrum of isoscalar mesons using lattice QCD. Calculations on several lattice volumes are performed with a range of light quark masses corresponding to pion masses down to about 400 MeV. The distillation method enables us to evaluate the required disconnected contributions with high statistical precision for a large number of meson interpolating fields. We find relatively little mixing between light and strange in most J^PC channels; one notable exception is the pseudoscalar sector where the approximate SU(3)_F octet-singlet structure of the η, η' is reproduced. We extract exotic J^PC states, identified as hybrid mesons in which an excited gluonic field is coupled to a color-octet qq̄ pair, along with non-exotic hybrid mesons embedded in a qq̄-like spectrum.
A general method for handling missing binary outcome data in randomized controlled trials.
Jackson, Dan; White, Ian R; Mason, Dan; Sutton, Stephen
2014-12-01
The analysis of randomized controlled trials with incomplete binary outcome data is challenging. We develop a general method for exploring the impact of missing data in such trials, with a focus on abstinence outcomes. We propose a sensitivity analysis where standard analyses, which could include 'missing = smoking' and 'last observation carried forward', are embedded in a wider class of models. We apply our general method to data from two smoking cessation trials, comprising 489 and 1758 participants; the abstinence outcomes were obtained using telephone interviews. The estimated intervention effects from both trials depend on the sensitivity parameters used. The findings differ considerably in magnitude and statistical significance under quite extreme assumptions about the missing data, but are reasonably consistent under more moderate assumptions. A new method for undertaking sensitivity analyses when handling missing data in trials with binary outcomes allows a wide range of assumptions about the missing data to be assessed. In two smoking cessation trials the results were insensitive to all but extreme assumptions. © 2014 The Authors. Addiction published by John Wiley & Sons Ltd on behalf of Society for the Study of Addiction.
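The following Python sketch is not the authors' model, but a minimal sensitivity analysis in the same spirit: the assumed quit probability among participants with missing outcomes is varied from the 'missing = smoking' extreme upward, and the between-arm risk difference is recomputed. All counts are hypothetical.

```python
# Hedged sketch (not the authors' exact model): vary the assumed quit
# probability among participants with missing outcomes and recompute the
# risk difference between trial arms. All counts below are hypothetical.
import numpy as np

def arm_quit_rate(n_quit_obs, n_smoke_obs, n_missing, p_quit_if_missing):
    """Overall quit rate if missing participants quit with the given probability."""
    n = n_quit_obs + n_smoke_obs + n_missing
    return (n_quit_obs + p_quit_if_missing * n_missing) / n

# Hypothetical arm summaries: (observed quitters, observed smokers, missing)
control      = (40, 150, 60)
intervention = (70, 130, 50)

for p_miss in np.linspace(0.0, 0.3, 7):        # sensitivity parameter
    rd = (arm_quit_rate(*intervention, p_miss)
          - arm_quit_rate(*control, p_miss))
    print(f"P(quit | missing) = {p_miss:.2f}  ->  risk difference = {rd:+.3f}")
```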
Agner, Shannon C; Xu, Jun; Madabhushi, Anant
2013-03-01
Segmentation of breast lesions on dynamic contrast enhanced (DCE) magnetic resonance imaging (MRI) is the first step in lesion diagnosis in a computer-aided diagnosis framework. Because manual segmentation of such lesions is both time consuming and highly susceptible to human error and issues of reproducibility, an automated lesion segmentation method is highly desirable. Traditional automated image segmentation methods such as boundary-based active contour (AC) models require a strong gradient at the lesion boundary. Even when region-based terms are introduced to an AC model, grayscale image intensities often do not allow for clear definition of foreground and background region statistics. Thus, there is a need to find alternative image representations that might provide (1) strong gradients at the margin of the object of interest (OOI); and (2) larger separation between intensity distributions and region statistics for the foreground and background, which are necessary to halt evolution of the AC model upon reaching the border of the OOI. In this paper, the authors introduce a spectral embedding (SE) based AC (SEAC) for lesion segmentation on breast DCE-MRI. SE, a nonlinear dimensionality reduction scheme, is applied to the DCE time series in a voxelwise fashion to reduce several time point images to a single parametric image where every voxel is characterized by the three dominant eigenvectors. This parametric eigenvector image (PrEIm) representation allows for better capture of image region statistics and stronger gradients for use with a hybrid AC model, which is driven by both boundary and region information. They compare SEAC to ACs that employ fuzzy c-means (FCM) and principal component analysis (PCA) as alternative image representations. Segmentation performance was evaluated by boundary and region metrics as well as comparing lesion classification using morphological features from SEAC, PCA+AC, and FCM+AC. On a cohort of 50 breast DCE-MRI studies, PrEIm yielded overall better region and boundary-based statistics compared to the original DCE-MR image, FCM, and PCA based image representations. Additionally, SEAC outperformed a hybrid AC applied to both PCA and FCM image representations. Mean dice similarity coefficient (DSC) for SEAC was significantly better (DSC = 0.74 ± 0.21) than FCM+AC (DSC = 0.50 ± 0.32) and similar to PCA+AC (DSC = 0.73 ± 0.22). Boundary-based metrics of mean absolute difference and Hausdorff distance followed the same trends. Of the automated segmentation methods, breast lesion classification based on morphologic features derived from SEAC segmentation using a support vector machine classifier also performed better (AUC = 0.67 ± 0.05; p < 0.05) than FCM+AC (AUC = 0.50 ± 0.07), and PCA+AC (AUC = 0.49 ± 0.07). In this work, we presented SEAC, an accurate, general purpose AC segmentation tool that could be applied to any imaging domain that employs time series data. SE allows for projection of time series data into a PrEIm representation so that every voxel is characterized by the dominant eigenvectors, capturing the global and local time-intensity curve similarities in the data. This PrEIm allows for the calculation of strong tensor gradients and better region statistics than the original image intensities or alternative image representations such as PCA and FCM. The PrEIm also allows for building a more accurate hybrid AC scheme.
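A minimal sketch of the voxelwise spectral-embedding step only (not the full SEAC pipeline or its hybrid active contour), assuming scikit-learn and a synthetic DCE time series:

```python
# Hedged sketch: reduce a DCE time series to a 3-component parametric image
# with spectral embedding, in the spirit of the PrEIm described above.
import numpy as np
from sklearn.manifold import SpectralEmbedding

rng = np.random.default_rng(0)
H, W, T = 32, 32, 12                        # synthetic image: 12 time points
timeseries = rng.random((H, W, T))

X = timeseries.reshape(-1, T)               # one row per voxel
embedder = SpectralEmbedding(n_components=3, n_neighbors=10, random_state=0)
preim = embedder.fit_transform(X).reshape(H, W, 3)   # parametric eigenvector image
print("PrEIm-like representation:", preim.shape)
```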
Gravitational energy in the framework of embedding and splitting theories
NASA Astrophysics Data System (ADS)
Grad, D. A.; Ilin, R. V.; Paston, S. A.; Sheykin, A. A.
We study various definitions of the gravitational field energy based on the use of isometric embeddings in the Regge-Teitelboim approach. For the embedding theory, we consider coordinate translations on the surface as well as coordinate translations in the flat bulk. In the latter case, an independent definition of the gravitational energy-momentum tensor appears as a Noether current corresponding to a global inner symmetry. In the field-theoretic form of this approach (splitting theory), we consider the Noether procedure and the alternative method of defining the energy-momentum tensor by varying the action of the theory with respect to the flat bulk metric. As a result, we obtain an energy definition in the field-theoretic form of embedding theory which, among other features, gives a nontrivial result for solutions of embedding theory that are also solutions of the Einstein equations. The question of energy localization is also discussed.
Internship Abstract and Final Reflection
NASA Technical Reports Server (NTRS)
Sandor, Edward
2016-01-01
The primary objective of this internship is the evaluation of an embedded natural language processor (NLP) as a way to introduce voice control into future space suits. An embedded natural language processor would provide an astronaut with hands-free control for making adjustments to the environment of the space suit and checking the status of consumables, procedures, and navigation. Additionally, the use of an embedded NLP could potentially reduce crew fatigue, increase the crewmember's situational awareness during extravehicular activity (EVA), and improve the ability to focus on mission-critical details. The use of an embedded NLP may also be valuable for other human spaceflight applications requiring hands-free control. An embedded NLP is unique because it is a small device that performs language tasks, including speech recognition, which normally require powerful processors. The dedicated device could perform speech recognition locally with a smaller form factor and lower power consumption than traditional methods.
Embedded performance validity testing in neuropsychological assessment: Potential clinical tools.
Rickards, Tyler A; Cranston, Christopher C; Touradji, Pegah; Bechtold, Kathleen T
2018-01-01
The article aims to suggest clinically useful tools in neuropsychological assessment for efficient use of embedded measures of performance validity. To accomplish this, we integrated available validity-related and statistical research from the literature, consensus statements, and survey-based data from practicing neuropsychologists. We provide recommendations for 1) cutoffs for embedded performance validity tests including Reliable Digit Span, California Verbal Learning Test (Second Edition) Forced Choice Recognition, Rey-Osterrieth Complex Figure Test Combination Score, Wisconsin Card Sorting Test Failure to Maintain Set, and the Finger Tapping Test; 2) selecting the number of performance validity measures to administer in an assessment; and 3) hypothetical clinical decision-making models for the use of performance validity testing in a neuropsychological assessment, collectively considering behavior, patient reporting, and data indicating invalid or noncredible performance. Performance validity testing helps inform the clinician about an individual's general approach to tasks: response to failure, task engagement and persistence, and compliance with task demands. These data-driven clinical suggestions provide a resource to clinicians, instigate conversation within the field toward more uniform, testable decisions, and guide future research in this area.
Detecting rare, abnormally large grains by x-ray diffraction
Boyce, Brad L.; Furnish, Timothy Allen; Padilla, H. A.; ...
2015-07-16
Bimodal grain structures are common in many alloys, arising from a number of different causes including incomplete recrystallization and abnormal grain growth. These bimodal grain structures have important technological implications, such as the well-known Goss texture which is now a cornerstone for electrical steels. Yet our ability to detect bimodal grain distributions is largely confined to brute force cross-sectional metallography. The present study presents a new method for rapid detection of unusually large grains embedded in a sea of much finer grains. Traditional X-ray diffraction-based grain size measurement techniques such as Scherrer, Williamson–Hall, or Warren–Averbach rely on peak breadth and shape to extract information regarding the average crystallite size. However, these line broadening techniques are not well suited to identify a very small fraction of abnormally large grains. The present method utilizes statistically anomalous intensity spikes in the Bragg peak to identify regions where abnormally large grains are contributing to diffraction. This needle-in-a-haystack technique is demonstrated on a nanocrystalline Ni–Fe alloy which has undergone fatigue-induced abnormal grain growth. In this demonstration, the technique readily identifies a few large grains that occupy <0.00001 % of the interrogation volume. Finally, while the technique is demonstrated in the current study on nanocrystalline metal, it would likely apply to any bimodal polycrystal including ultrafine grained and fine microcrystalline materials with sufficiently distinct bimodal grain statistics.
Pashaei, Elnaz; Pashaei, Elham; Aydin, Nizamettin
2018-04-14
In cancer classification, gene selection is an important data preprocessing technique, but it is a difficult task due to the large search space. Accordingly, the objective of this study is to develop a hybrid meta-heuristic Binary Black Hole Algorithm (BBHA) and Binary Particle Swarm Optimization (BPSO) (4-2) model that emphasizes gene selection. In this model, the BBHA is embedded in the BPSO (4-2) algorithm to make the BPSO (4-2) more effective and to facilitate the exploration and exploitation of the BPSO (4-2) algorithm to further improve the performance. This model has been combined with the Random Forest Recursive Feature Elimination (RF-RFE) pre-filtering technique. The classifiers evaluated in the proposed framework are Sparse Partial Least Squares Discriminant Analysis (SPLSDA), k-nearest neighbor, and Naive Bayes. The performance of the proposed method was evaluated on two benchmark and three clinical microarray datasets. The experimental results and statistical analysis confirm the better performance of the BPSO (4-2)-BBHA compared with the BBHA, the BPSO (4-2) and several state-of-the-art methods in terms of avoiding local minima, convergence rate, accuracy and number of selected genes. The results also show that the BPSO (4-2)-BBHA model can successfully identify known biologically and statistically significant genes from the clinical datasets. Copyright © 2018 Elsevier Inc. All rights reserved.
Hamatani, Kiyohiro; Eguchi, Hidetaka; Mukai, Mayumi; Koyama, Kazuaki; Taga, Masataka; Ito, Reiko; Hayashi, Yuzo; Nakachi, Kei
2010-01-01
Since many thyroid cancer tissue samples from atomic bomb (A-bomb) survivors have been preserved for several decades as unbuffered formalin-fixed, paraffin-embedded specimens, molecular oncological analysis of such archival specimens is indispensable for clarifying the mechanisms of thyroid carcinogenesis in A-bomb survivors. Although RET gene rearrangements are the most important targets, it is a difficult task to examine all of the 13 known types of RET gene rearrangements with the use of the limited quantity of RNA that has been extracted from invaluable paraffin-embedded tissue specimens of A-bomb survivors. In this study, we established an improved 5' rapid amplification of cDNA ends (RACE) method using a small amount of RNA extracted from archival thyroid cancer tissue specimens. Three archival thyroid cancer tissue specimens from three different patients were used as in-house controls to determine the conditions for the improved switching mechanism at the 5' end of RNA transcript (SMART) RACE method; one tissue specimen with RET/PTC1 rearrangement and one with RET/PTC3 rearrangement were used as positive samples. One other specimen, used as a negative sample, revealed no detectable expression of the RET gene tyrosine kinase domain. We established a 5' RACE method using an amount of RNA as small as 10 ng extracted from long-term preserved, unbuffered formalin-fixed, paraffin-embedded thyroid cancer tissue by application of SMART technology. This improved SMART RACE method not only identified common RET gene rearrangements, but also isolated a clone containing a 93-bp insert of the rare RET/PTC8 rearrangement in RNA extracted from formalin-fixed, paraffin-embedded thyroid cancer specimens from one A-bomb survivor who had been exposed to a high radiation dose. In addition, in the papillary thyroid cancer of another high-dose A-bomb survivor, this method detected one novel type of RET gene rearrangement whose partner gene is acyl coenzyme A binding domain 5, located on chromosome 10p. We conclude that our improved SMART RACE method is expected to prove useful in molecular analyses using archival formalin-fixed, paraffin-embedded tissue samples of limited quantity.
Correlative Stochastic Optical Reconstruction Microscopy and Electron Microscopy
Kim, Doory; Deerinck, Thomas J.; Sigal, Yaron M.; Babcock, Hazen P.; Ellisman, Mark H.; Zhuang, Xiaowei
2015-01-01
Correlative fluorescence light microscopy and electron microscopy allows the imaging of spatial distributions of specific biomolecules in the context of cellular ultrastructure. Recent development of super-resolution fluorescence microscopy allows the location of molecules to be determined with nanometer-scale spatial resolution. However, correlative super-resolution fluorescence microscopy and electron microscopy (EM) still remains challenging because the optimal specimen preparation and imaging conditions for super-resolution fluorescence microscopy and EM are often not compatible. Here, we have developed several experiment protocols for correlative stochastic optical reconstruction microscopy (STORM) and EM methods, both for un-embedded samples by applying EM-specific sample preparations after STORM imaging and for embedded and sectioned samples by optimizing the fluorescence under EM fixation, staining and embedding conditions. We demonstrated these methods using a variety of cellular targets. PMID:25874453
Method for making a microporous membrane
NASA Technical Reports Server (NTRS)
Gavalas, Lillian Susan (Inventor)
2013-01-01
A method for making a microporous membrane comprises the steps of: providing a plurality of carbon nanotubes having a hollow interior diameter of 20 Angstroms or less; sonicating the plurality of carbon nanotubes utilizing a solution comprising deionized, distilled water and a surfactant that coats at least one of the plurality of carbon nanotubes; collecting the coated carbon nanotubes; forming a matrix that supports the plurality of carbon nanotubes; embedding the coated carbon nanotubes into the matrix; rinsing the coated nanotubes to remove at least a portion of the surfactant; curing the nanotube-matrix assembly; and cutting the nanotube-matrix assembly to a particular thickness so as to open the ends of the embedded nanotubes. The hollow interiors of the plurality of embedded carbon nanotubes comprise the pores of the microporous membrane.
Incorporation of prefabricated screw, pneumatic, and solenoid valves into microfluidic devices
Hulme, S. Elizabeth; Shevkoplyas, Sergey S.
2011-01-01
This paper describes a method for prefabricating screw, pneumatic, and solenoid valves and embedding them in microfluidic devices. This method of prefabrication and embedding is simple, requires no advanced fabrication, and is compatible with soft lithography. Because prefabrication allows many identical valves to be made at one time, the performance across different valves made in the same manner is reproducible. In addition, the performance of a single valve is reproducible over many cycles of opening and closing: an embedded solenoid valve opened and closed a microfluidic channel more than 100,000 times with no apparent deterioration in its function. It was possible to combine all three types of prefabricated valves in a single microfluidic device to control chemical gradients in a microfluidic channel temporally and spatially. PMID:19209338
Incorporation of prefabricated screw, pneumatic, and solenoid valves into microfluidic devices.
Hulme, S Elizabeth; Shevkoplyas, Sergey S; Whitesides, George M
2009-01-07
This paper describes a method for prefabricating screw, pneumatic, and solenoid valves and embedding them in microfluidic devices. This method of prefabrication and embedding is simple, requires no advanced fabrication, and is compatible with soft lithography. Because prefabrication allows many identical valves to be made at one time, the performance across different valves made in the same manner is reproducible. In addition, the performance of a single valve is reproducible over many cycles of opening and closing: an embedded solenoid valve opened and closed a microfluidic channel more than 100,000 times with no apparent deterioration in its function. It was possible to combine all three types of prefabricated valves in a single microfluidic device to control chemical gradients in a microfluidic channel temporally and spatially.
Jiang, JingLe; Marathe, Amar R; Keene, Jennifer C; Taylor, Dawn M
2017-02-01
Custom-fitted skull replacement pieces are often used after a head injury or surgery to replace damaged bone. Chronic brain recordings are beneficial after injury/surgery for monitoring brain health and seizure development. Embedding electrodes directly in these artificial skull replacement pieces would be a novel, low-risk way to perform chronic brain monitoring in these patients. Similarly, embedding electrodes directly in healthy skull would be a viable minimally-invasive option for many other neuroscience and neurotechnology applications requiring chronic brain recordings. We demonstrate a preclinical testbed that can be used for refining electrode designs embedded in artificial skull replacement pieces or for embedding directly into the skull itself. Options are explored to increase the surface area of the contacts without increasing recording contact diameter to maximize recording resolution. Embedding electrodes in real or artificial skull allows one to lower electrode impedance without increasing the recording contact diameter by making use of conductive channels that extend into the skull. The higher density of small contacts embedded in the artificial skull in this testbed enables one to optimize electrode spacing for use in real bone. For brain monitoring applications, skull-embedded electrodes fill a gap between electroencephalograms recorded on the scalp surface and the more invasive epidural or subdural electrode sheets. Embedding electrodes into the skull or in skull replacement pieces may provide a safe, convenient, minimally-invasive alternative for chronic brain monitoring. The manufacturing methods described here will facilitate further testing of skull-embedded electrodes in animal models. Published by Elsevier B.V.
ERIC Educational Resources Information Center
Alfano, Candice A.
2012-01-01
Despite the approach of the "Diagnostic and Statistical Manual of Mental Disorders" (5th ed.), generalized anxiety disorder (GAD) of childhood continues to face questions as to whether it should be considered a distinct clinical disorder. A potentially critical issue embedded in this debate involves the role of functional impairment which has yet…
Early seizure detection in an animal model of temporal lobe epilepsy
NASA Astrophysics Data System (ADS)
Talathi, Sachin S.; Hwang, Dong-Uk; Ditto, William; Carney, Paul R.
2007-11-01
The performance of five seizure detection schemes, i.e., nonlinear embedding delay, Hurst scaling, wavelet scale, autocorrelation and gradient of accumulated energy, in their ability to detect EEG seizures close to the seizure onset time was evaluated to determine the feasibility of their application in the development of a real-time closed-loop seizure intervention program (RCLSIP). The criteria chosen for the performance evaluation were high statistical robustness as determined through the predictability index, the sensitivity and the specificity of a given measure to detect an EEG seizure, the lag in seizure detection with respect to the EEG seizure onset time as determined through visual inspection, and the computational efficiency of each detection measure. An optimality function was designed to evaluate the overall performance of each measure depending on the criteria chosen. While each of the above measures analyzed for seizure detection performed very well in terms of the statistical parameters, the nonlinear embedding delay measure was found to have the highest optimality index due to its ability to detect seizures very close to the EEG seizure onset time, thereby making it the most suitable dynamical measure for the development of RCLSIP in a rat model of chronic limbic epilepsy.
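A minimal sketch of a time-delay (nonlinear) embedding with a crude sliding-window statistic, assuming a synthetic single-channel EEG signal; the paper's actual detection measure and thresholds are not reproduced:

```python
# Hedged sketch: basic time-delay embedding of a 1-D signal plus a simple
# windowed statistic whose change might accompany a dynamical transition.
import numpy as np

def delay_embed(x, dim=3, tau=10):
    """Return the delay-embedded trajectory of a 1-D signal."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

rng = np.random.default_rng(1)
eeg = np.sin(np.linspace(0, 60 * np.pi, 6000)) + 0.3 * rng.standard_normal(6000)

emb = delay_embed(eeg, dim=3, tau=10)
# Crude statistic: mean distance of embedded points from their window centroid.
win = 500
stat = [np.linalg.norm(emb[i:i + win] - emb[i:i + win].mean(0), axis=1).mean()
        for i in range(0, len(emb) - win, win)]
print(np.round(stat, 3))
```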
Pole pulling apparatus and method
McIntire, Gary L.
1989-01-01
An apparatus for the removal of embedded utility-type poles removes the poles quickly and efficiently from their embedded position without damage to the pole or surrounding structures. The apparatus includes at least two piston/cylinder members equally spaced about the pole, and a head member affixed to the top of each piston. Elongation of the piston induces rotation of the head into the pole to increase the gripping action and reduce slippage. Repeated actuation and retraction of the piston and head member will "jack" the pole from its embedded position.
Vibrational properties of TaW alloy using modified embedded atom method potential
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chand, Manesh, E-mail: maneshchand@gmail.com; Uniyal, Shweta; Joshi, Subodh
2016-05-06
Force-constants up to second neighbours of pure transition metal Ta and TaW alloy are determined using the modified embedded atom method (MEAM) potential. The obtained force-constants are used to calculate the phonon dispersion of pure Ta and TaW alloy. As a further application of MEAM potential, the force-constants are used to calculate the local vibrational density of states and mean square thermal displacements of pure Ta and W impurity atoms with Green’s function method. The calculated results are found to be in agreement with the experimental measurements.
H.264/AVC digital fingerprinting based on spatio-temporal just noticeable distortion
NASA Astrophysics Data System (ADS)
Ait Saadi, Karima; Bouridane, Ahmed; Guessoum, Abderrezak
2014-01-01
This paper presents a robust adaptive embedding scheme using a modified spatio-temporal just noticeable distortion (JND) model that is designed for tracing the distribution of H.264/AVC video content and protecting it from unauthorized redistribution. The embedding process is performed during encoding in selected Intra 4x4 macroblocks within I-frames. The method uses a spread-spectrum technique to obtain robustness against collusion attacks and the JND model to dynamically adjust the embedding strength and control the energy of the embedded fingerprints so as to ensure their imperceptibility. Linear and nonlinear collusion attacks are performed to show the robustness of the proposed technique against collusion while keeping the visual quality unchanged.
Cyclin d1 expression in odontogenic cysts.
Taghavi, Nasim; Modabbernia, Shirin; Akbarzadeh, Alireza; Sajjadi, Samad
2013-01-01
In the present study, the expression of cyclin D1 in the epithelial lining of odontogenic keratocysts, radicular cysts, dentigerous cysts and glandular odontogenic cysts was investigated to compare proliferative activity in these lesions. Immunohistochemical staining of cyclin D1 on formalin-fixed, paraffin-embedded tissue sections of odontogenic keratocysts (n=23), dentigerous cysts (n=20), radicular cysts (n=20) and glandular odontogenic cysts (n=5) was performed by the standard EnVision method. Slides were then studied to evaluate the following parameters in the epithelial lining of the cysts: expression, expression pattern, staining intensity and localization of expression. The data analysis showed a statistically significant difference in cyclin D1 expression among the studied groups (p < 0.001). Assessment of staining intensity and staining pattern showed stronger intensity and a more focal pattern in odontogenic keratocysts, but these differences were not statistically significant among groups (p=0.204 and p=0.469, respectively). Regarding the localization of expression, cyclin D1-positive cells in odontogenic keratocysts and dentigerous cysts were frequently confined to the parabasal layer, in contrast to radicular cysts and glandular odontogenic cysts; this difference was statistically significant (p < 0.01). The findings showed higher expression of cyclin D1 in the parabasal layer of odontogenic keratocysts and in the entire cystic epithelium of glandular odontogenic cysts compared to dentigerous cysts and radicular cysts, implying a possible role of G1-S cell cycle phase disturbances in the aggressiveness of odontogenic keratocysts and glandular odontogenic cysts.
NASA Astrophysics Data System (ADS)
Ayral, Thomas; Lee, Tsung-Han; Kotliar, Gabriel
2017-12-01
We present a unified perspective on dynamical mean-field theory (DMFT), density-matrix embedding theory (DMET), and rotationally invariant slave bosons (RISB). We show that DMET can be regarded as a simplification of the RISB method where the quasiparticle weight is set to unity. This relation makes it easy to transpose extensions of a given method to another: For instance, a temperature-dependent version of RISB can be used to derive a temperature-dependent free-energy formula for DMET.
Reconstructing latent dynamical noise for better forecasting observables
NASA Astrophysics Data System (ADS)
Hirata, Yoshito
2018-03-01
I propose a method for reconstructing multi-dimensional dynamical noise inspired by the embedding theorem of Muldoon et al. [Dyn. Stab. Syst. 13, 175 (1998)] by regarding multiple predictions as different observables. Then, applying the embedding theorem by Stark et al. [J. Nonlinear Sci. 13, 519 (2003)] for a forced system, I produce time series forecasts by supplying the reconstructed past dynamical noise as auxiliary information. I demonstrate the proposed method on toy models driven by auto-regressive models or independent Gaussian noise.
Geminal embedding scheme for optimal atomic basis set construction in correlated calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sorella, S., E-mail: sorella@sissa.it; Devaux, N.; Dagrada, M., E-mail: mario.dagrada@impmc.upmc.fr
2015-12-28
We introduce an efficient method to construct optimal and system adaptive basis sets for use in electronic structure and quantum Monte Carlo calculations. The method is based on an embedding scheme in which a reference atom is singled out from its environment, while the entire system (atom and environment) is described by a Slater determinant or its antisymmetrized geminal power (AGP) extension. The embedding procedure described here allows for the systematic and consistent contraction of the primitive basis set into geminal embedded orbitals (GEOs), with a dramatic reduction of the number of variational parameters necessary to represent the many-body wave function, for a chosen target accuracy. Within the variational Monte Carlo method, the Slater or AGP part is determined by a variational minimization of the energy of the whole system in the presence of a flexible and accurate Jastrow factor, representing most of the dynamical electronic correlation. The resulting GEO basis set opens the way for a fully controlled optimization of many-body wave functions in electronic structure calculations of bulk materials, namely, those containing a large number of electrons and atoms. We present applications to the water molecule, the volume collapse transition in cerium, and high-pressure liquid hydrogen.
Prediction of enhancer-promoter interactions via natural language processing.
Zeng, Wanwen; Wu, Mengmeng; Jiang, Rui
2018-05-09
Precise identification of three-dimensional genome organization, especially enhancer-promoter interactions (EPIs), is important for deciphering gene regulation, cell differentiation and disease mechanisms. Currently, it is a challenging task to distinguish true interactions from other nearby non-interacting ones since the power of traditional experimental methods is limited by low resolution or low throughput. We propose a novel computational framework, EP2vec, to assay three-dimensional genomic interactions. We first extract sequence embedding features, defined as fixed-length vector representations learned from variable-length sequences using an unsupervised deep learning method from natural language processing. Then, we train a classifier to predict EPIs using the learned representations in a supervised way. Experimental results demonstrate that EP2vec obtains F1 scores ranging from 0.841 to 0.933 on different datasets, which outperforms existing methods. We prove the robustness of sequence embedding features by carrying out sensitivity analysis. Besides, we identify motifs that represent cell line-specific information through analysis of the learned sequence embedding features by adopting an attention mechanism. Last, we show that even better performance, with F1 scores of 0.889 to 0.940, can be achieved by combining sequence embedding features and experimental features. EP2vec sheds light on feature extraction for DNA sequences of arbitrary lengths and provides a powerful approach for EPI identification.
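The sketch below is a simplified stand-in for EP2vec rather than the authors' method: instead of the unsupervised doc2vec-style embedding used in the paper, it hashes k-mer "words" and compresses them with truncated SVD before training a supervised classifier on synthetic sequences and labels.

```python
# Hedged sketch: k-mer "sentences" -> hashed features -> SVD "embedding"
# -> supervised classifier. A simplified substitute for the EP2vec pipeline.
import numpy as np
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def kmer_sentence(seq, k=6):
    return " ".join(seq[i:i + k] for i in range(len(seq) - k + 1))

rng = np.random.default_rng(2)
seqs = ["".join(rng.choice(list("ACGT"), size=rng.integers(200, 400)))
        for _ in range(200)]
labels = rng.integers(0, 2, size=200)          # 1 = interacting pair (synthetic)

model = make_pipeline(
    HashingVectorizer(n_features=2 ** 12, alternate_sign=False),
    TruncatedSVD(n_components=50, random_state=0),   # fixed-length representation
    LogisticRegression(max_iter=1000),
)
model.fit([kmer_sentence(s) for s in seqs], labels)
print("training accuracy:", model.score([kmer_sentence(s) for s in seqs], labels))
```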
Real-Time Distributed Embedded Oscillator Operating Frequency Monitoring
NASA Technical Reports Server (NTRS)
Pollock, Julie; Oliver, Brett; Brickner, Christopher
2012-01-01
A document discusses the utilization of embedded clocks inside operating network data links as an auxiliary clock source to satisfy local oscillator monitoring requirements. Modern network interfaces, typically serial network links, often contain embedded clocking information of very tight precision used to recover data from the link. This embedded clocking data can be utilized by the receiving device to monitor the local oscillator for tolerance to required specifications, which is often important in high-integrity fault-tolerant applications. A device can utilize a received embedded clock to determine whether the local or the remote device is out of tolerance by using a single link. The local device can determine if it is failing, assuming a single-fault model, with two or more active links. Network fabric components, containing many operational links, can potentially determine faulty remote or local devices in the presence of multiple faults. Two methods of implementation are described. In one method, a recovered clock can be directly used to monitor the local clock as a direct replacement of an external local oscillator. This scheme is consistent with a general clock monitoring function whereby two clock sources drive two counters that are compared over a fixed interval of time. In another method, overflow/underflow conditions can be used to detect clock relationships for monitoring. These network interfaces often provide clock compensation circuitry to allow data to be transferred from the received (network) clock domain to the internal clock domain. This circuitry could be modified to detect overflow/underflow conditions of the required buffering and report a fast or slow receive clock, respectively.
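A minimal sketch of the first scheme (comparing counters driven by the local and the recovered clock over a fixed interval), with simulated tick counts and an arbitrary tolerance:

```python
# Hedged sketch: compare two tick counts accumulated over one comparison
# window; the counts are simulated here, not read from hardware counters.
def clock_within_tolerance(local_ticks, recovered_ticks, tolerance_ppm=100):
    """Return True if the local count agrees with the recovered-clock count."""
    deviation_ppm = abs(local_ticks - recovered_ticks) / recovered_ticks * 1e6
    return deviation_ppm <= tolerance_ppm

# Example: local oscillator accumulated 1_000_150 ticks while the recovered
# network clock accumulated 1_000_000 over the same window (150 ppm -> fail).
print(clock_within_tolerance(1_000_150, 1_000_000))
```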
Identifying synonymy between relational phrases using word embeddings.
Nguyen, Nhung T H; Miwa, Makoto; Tsuruoka, Yoshimasa; Tojo, Satoshi
2015-08-01
Many text mining applications in the biomedical domain benefit from automatic clustering of relational phrases into synonymous groups, since it alleviates the problem of spurious mismatches caused by the diversity of natural language expressions. Most of the previous work that has addressed this task of synonymy resolution uses similarity metrics between relational phrases based on textual strings or dependency paths, which, for the most part, ignore the context around the relations. To overcome this shortcoming, we employ a word embedding technique to encode relational phrases. We then apply the k-means algorithm on top of the distributional representations to cluster the phrases. Our experimental results show that this approach outperforms state-of-the-art statistical models including latent Dirichlet allocation and Markov logic networks. Copyright © 2015 Elsevier Inc. All rights reserved.
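A minimal sketch of the clustering step, assuming averaged word vectors and scikit-learn's k-means; the random embedding table below stands in for real pre-trained vectors, so the resulting clusters are illustrative only:

```python
# Hedged sketch: represent each relational phrase by the mean of its word
# vectors, then cluster with k-means. Vectors here are random placeholders.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
phrases = ["binds to", "interacts with", "inhibits", "suppresses",
           "is expressed in", "is found in"]
vocab = {w: rng.standard_normal(50) for p in phrases for w in p.split()}

def phrase_vector(phrase):
    return np.mean([vocab[w] for w in phrase.split()], axis=0)

X = np.vstack([phrase_vector(p) for p in phrases])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
for phrase, label in zip(phrases, labels):
    print(label, phrase)
```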
Web Service Architecture Framework for Embedded Devices
ERIC Educational Resources Information Center
Yanzick, Paul David
2009-01-01
The use of Service Oriented Architectures, namely web services, has become a widely adopted method for transfer of data between systems across the Internet as well as the Enterprise. Adopting a similar approach to embedded devices is also starting to emerge as personal devices and sensor networks are becoming more common in the industry. This…
Site-occupation embedding theory using Bethe ansatz local density approximations
NASA Astrophysics Data System (ADS)
Senjean, Bruno; Nakatani, Naoki; Tsuchiizu, Masahisa; Fromager, Emmanuel
2018-06-01
Site-occupation embedding theory (SOET) is an alternative formulation of density functional theory (DFT) for model Hamiltonians where the fully interacting Hubbard problem is mapped, in principle exactly, onto an impurity-interacting (rather than a noninteracting) one. It provides a rigorous framework for combining wave-function (or Green function)-based methods with DFT. In this work, exact expressions for the per-site energy and double occupation of the uniform Hubbard model are derived in the context of SOET. As readily seen from these derivations, the so-called bath contribution to the per-site correlation energy is, in addition to the latter, the key density functional quantity to model in SOET. Various approximations based on Bethe ansatz and perturbative solutions to the Hubbard and single-impurity Anderson models are constructed and tested on a one-dimensional ring. The self-consistent calculation of the embedded impurity wave function has been performed with the density-matrix renormalization group method. It has been shown that promising results are obtained in specific regimes of correlation and density. Possible further developments have been proposed in order to provide reliable embedding functionals and potentials.
NASA Astrophysics Data System (ADS)
Stam, Christina N.; Bruckner, James; Spry, J. Andy; Venkateswaran, Kasthuri; La Duc, Myron T.
2012-07-01
Current assessments of bioburden embedded in spacecraft materials are based on work performed in the Viking era (1970s) and on the ability to culture organisms extracted from such materials. To circumvent the limitations of such approaches, DNA-based techniques were evaluated alongside established culturing techniques to determine the recovery and survival of bacterial spores encapsulated in spacecraft-qualified polymer materials. Varying concentrations of Bacillus pumilus SAFR-032 spores were completely embedded in silicone epoxy. An organic dimethylacetamide-based solvent was used to digest the epoxy, and spore recovery was evaluated via gyrB-targeted qPCR, direct agar plating, most probable number analysis, and microscopy. Although full-strength solvent was shown to inhibit the germination and/or outgrowth of spores, dilution in excess of 100-fold allowed recovery with no significant decrease in cultivability. Similarly, qPCR (quantitative PCR) detection sensitivities as low as ~10^3 CFU ml^-1 were achieved upon removal of inhibitory substances associated with the epoxy and/or solvent. These detection and enumeration methods show promise for use in assessing the embedded bioburden of spacecraft hardware.
Embedded Hyperchaotic Generators: A Comparative Analysis
NASA Astrophysics Data System (ADS)
Sadoudi, Said; Tanougast, Camel; Azzaz, Mohamad Salah; Dandache, Abbas
In this paper, we present a comparative analysis of FPGA implementation performances, in terms of throughput and resource cost, of five well-known autonomous continuous hyperchaotic systems. The goal of this analysis is to identify the embedded hyperchaotic generator which leads to designs with small logic area cost, satisfactory throughput rates, low power consumption and the low latency required for embedded applications such as secure digital communications between embedded systems. To implement the four-dimensional (4D) chaotic systems, we use a new structural hardware architecture based on a direct VHDL description of the fourth-order Runge-Kutta method (RK-4). The comparative analysis shows that the hyperchaotic Lorenz generator provides attractive performance compared to that of the others. In fact, its hardware implementation requires only 2067 CLB-slices, 36 multipliers and no block RAMs, and achieves a throughput rate of 101.6 Mbps at the output of the FPGA circuit, at a clock frequency of 25.315 MHz, with a low latency of 316 ns. Consequently, these implementation results make the embedded hyperchaotic Lorenz generator the best candidate for embedded communication applications.
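A software sketch of the RK-4 scheme applied to one commonly quoted variant of the hyperchaotic Lorenz equations; the parameter values and step size are illustrative, not taken from the paper or its VHDL design:

```python
# Hedged sketch: fourth-order Runge-Kutta integration of a 4D hyperchaotic
# Lorenz-type system (one common variant; parameters are illustrative).
import numpy as np

def hyper_lorenz(s, a=10.0, b=8.0 / 3.0, c=28.0, r=-1.0):
    x, y, z, w = s
    return np.array([a * (y - x) + w,
                     c * x - y - x * z,
                     x * y - b * z,
                     -y * z + r * w])

def rk4_step(f, s, h):
    k1 = f(s)
    k2 = f(s + 0.5 * h * k1)
    k3 = f(s + 0.5 * h * k2)
    k4 = f(s + h * k3)
    return s + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

state = np.array([1.0, 1.0, 1.0, 1.0])
for _ in range(10_000):
    state = rk4_step(hyper_lorenz, state, h=0.001)
print("state after 10 000 RK-4 steps:", state.round(4))
```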
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Kuang; Libisch, Florian; Carter, Emily A., E-mail: eac@princeton.edu
We report a new implementation of the density functional embedding theory (DFET) in the VASP code, using the projector-augmented-wave (PAW) formalism. Newly developed algorithms allow us to efficiently perform optimized effective potential optimizations within PAW. The new algorithm generates robust and physically correct embedding potentials, as we verified using several test systems including a covalently bound molecule, a metal surface, and bulk semiconductors. We show that with the resulting embedding potential, embedded cluster models can reproduce the electronic structure of point defects in bulk semiconductors, thereby demonstrating the validity of DFET in semiconductors for the first time. Compared to our previous version, the new implementation of DFET within VASP affords use of all features of VASP (e.g., a systematic PAW library, a wide selection of functionals, a more flexible choice of U correction formalisms, and faster computational speed) with DFET. Furthermore, our results are fairly robust with respect to both plane-wave and Gaussian type orbital basis sets in the embedded cluster calculations. This suggests that the density functional embedding method is potentially an accurate and efficient way to study properties of isolated defects in semiconductors.
Lossless data embedding for all image formats
NASA Astrophysics Data System (ADS)
Fridrich, Jessica; Goljan, Miroslav; Du, Rui
2002-04-01
Lossless data embedding has the property that the distortion due to embedding can be completely removed from the watermarked image without accessing any side channel. This can be a very important property whenever serious concerns over image quality and artifact visibility arise, such as for medical images, due to legal reasons, for military images, or for images used as evidence in court that may be viewed after enhancement and zooming. We formulate two general methodologies for lossless embedding that can be applied to images as well as any other digital objects, including video, audio, and other structures with redundancy. We use the general principles as guidelines for designing efficient, simple, and high-capacity lossless embedding methods for the three most common image format paradigms: raw, uncompressed formats (BMP); lossy or transform formats (JPEG); and palette formats (GIF, PNG). We close the paper with examples of how the concept of lossless data embedding can be used as a powerful tool to achieve a variety of non-trivial tasks, including elegant lossless authentication using fragile watermarks. Note on terminology: some authors coined the terms erasable, removable, reversible, invertible, and distortion-free for the same concept.
Cheah, Pike See; Mohidin, Norhani; Mohd Ali, Bariah; Maung, Myint; Latif, Azian Abdul
2008-01-01
This study illustrates and quantifies the differences in corneal tissue between paraffin-embedded and resin-embedded blocks and thus identifies the better preparation for investigational ophthalmology and optometry via light microscopy. Corneas of two cynomolgus monkeys (Macaca fascicularis) were used in this study. The formalin-fixed cornea was prepared in a paraffin block via the conventional tissue processing protocol (4-day protocol) and stained with haematoxylin and eosin. The glutaraldehyde-fixed cornea was prepared in a resin block via a rapid, modified tissue processing procedure (1.2-day protocol) and stained with toluidine blue. The paraffin-embedded sample exhibits various forms of undesired tissue damage and artifact, such as a thinner epithelium (due to substantial volumetric extraction from the tissue), a thicker stromal layer (due to separation of lamellae and the presence of voids) and a distorted endothelium. In contrast, the resin-embedded corneal tissue demonstrated satisfactory preservation of the corneal ultrastructure. The rapid, modified tissue processing method for preparing the resin-embedded blocks is particularly beneficial for accelerating microscopic evaluation in ophthalmology and optometry. PMID:22570589
Staining Methods for Normal and Regenerative Myelin in the Nervous System.
Carriel, Víctor; Campos, Antonio; Alaminos, Miguel; Raimondo, Stefania; Geuna, Stefano
2017-01-01
Histochemical techniques enable the specific identification of myelin by light microscopy. Here we describe three histochemical methods for the staining of myelin suitable for formalin-fixed and paraffin-embedded materials. The first method is the conventional luxol fast blue (LFB) method, which stains myelin in blue and Nissl bodies and mast cells in purple. The second method is an LFB-based method called MCOLL, which specifically stains myelin as well as collagen fibers and cells, giving an integrated overview of the histology and myelin content of the tissue. Finally, we describe the osmium tetroxide method, which consists of the osmication of previously fixed tissues. Osmication is performed prior to the embedding of tissues in paraffin, giving a permanent positive reaction for myelin as well as other lipids present in the tissue.
A versatile embedded boundary adaptive mesh method for compressible flow in complex geometry
NASA Astrophysics Data System (ADS)
Al-Marouf, M.; Samtaney, R.
2017-05-01
We present an embedded ghost fluid method for numerical solutions of the compressible Navier-Stokes (CNS) equations in arbitrary complex domains. A multidimensional PDE extrapolation approach is used to reconstruct the solution in the ghost fluid regions and impose boundary conditions on the fluid-solid interface, coupled with a multi-dimensional algebraic interpolation for freshly cleared cells. The CNS equations are numerically solved by a second-order multidimensional upwind method. Block-structured adaptive mesh refinement, implemented with the Chombo framework, is utilized to reduce the computational cost while keeping a high-resolution mesh around the embedded boundary and in regions of high solution gradients. The versatility of the method is demonstrated via several numerical examples, in both static and moving geometry, ranging from low Mach number nearly incompressible flows to supersonic flows. Our simulation results are extensively verified against other numerical results and validated against available experimental results where applicable. The significance and advantages of our implementation, which revolve around balancing solution accuracy against implementation difficulty, are briefly discussed as well.
Embedded 3D shape measurement system based on a novel spatio-temporal coding method
NASA Astrophysics Data System (ADS)
Xu, Bin; Tian, Jindong; Tian, Yong; Li, Dong
2016-11-01
Structured light measurement has been widely used since the 1970s in industrial component inspection, reverse engineering, 3D modeling, robot navigation, medicine and many other fields. In order to satisfy the demand for high-speed, high-precision and high-resolution 3-D measurement on embedded systems, new patterns combining binary and Gray coding principles in space are designed and projected onto the object surface in sequence. Each pixel corresponds to a designed sequence of gray values in the time domain, which is treated as a feature vector. The unique gray vector is then reduced to a scalar that serves as characteristic information for binocular matching. In this method, the number of projected structured light patterns is reduced, and the time-consuming phase unwrapping of traditional phase-shift methods is avoided. The algorithm is implemented on a DM3730 embedded system for 3-D measurement, which consists of an ARM and a DSP core and has strong digital signal processing capability. Experimental results demonstrate the feasibility of the proposed method.
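A minimal sketch of generating Gray-code bit-plane patterns for temporal projection, with arbitrary resolution and bit depth; the paper's combined binary/Gray design and the binocular matching step are not reproduced here:

```python
# Hedged sketch: binary-reflected Gray-code stripe patterns, one pattern per
# bit plane, so each projector column decodes to a unique temporal code word.
import numpy as np

def gray_code_patterns(n_columns, n_bits):
    cols = np.arange(n_columns)
    gray = cols ^ (cols >> 1)                   # binary-reflected Gray code
    return np.array([(gray >> b) & 1 for b in reversed(range(n_bits))])

patterns = gray_code_patterns(n_columns=16, n_bits=4)
print(patterns)        # shape (4, 16): 4 projected patterns over 16 columns
# Decoding: stack the per-pixel sequence of observed bits and invert the Gray code.
```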
Embedding intensity image into a binary hologram with strong noise resistant capability
NASA Astrophysics Data System (ADS)
Zhuang, Zhaoyong; Jiao, Shuming; Zou, Wenbin; Li, Xia
2017-11-01
A digital hologram can be employed as a host image in image watermarking applications to protect information security. Past research demonstrates that a gray-level intensity image can be embedded into a binary Fresnel hologram by the error diffusion method or the bit truncation coding method. However, the fidelity of the watermark image retrieved from the binary hologram is generally not satisfactory, especially when the binary hologram is contaminated with noise. To address this problem, we propose a JPEG-BCH encoding method in this paper. First, we employ the JPEG standard to compress the intensity image into a binary bit stream. Next, we encode the binary bit stream with a BCH code to obtain error correction capability. Finally, the JPEG-BCH code is embedded into the binary hologram. In this way, the intensity image can be retrieved with high fidelity by a BCH-JPEG decoder even if the binary hologram suffers from serious noise contamination. Numerical simulation results show that the image quality of the intensity image retrieved with our proposed method is superior to that of previously reported state-of-the-art work.
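A minimal sketch of the embedding pipeline only, with a simple repetition code standing in for BCH and random bits standing in for the JPEG stream; the hologram generation and JPEG coding steps are outside this sketch:

```python
# Hedged sketch: error-protect a payload bit stream, scatter it into a binary
# host array, flip some bits as channel noise, then decode. Repetition coding
# is used here purely as a stand-in for the BCH code described above.
import numpy as np

def repeat_encode(bits, r=3):
    return np.repeat(bits, r)

def repeat_decode(bits, r=3):
    return (bits.reshape(-1, r).sum(axis=1) > r // 2).astype(np.uint8)

rng = np.random.default_rng(4)
payload = rng.integers(0, 2, size=64, dtype=np.uint8)    # e.g. compressed bit stream
code = repeat_encode(payload)

hologram = rng.integers(0, 2, size=(64, 64), dtype=np.uint8)   # binary host
positions = rng.choice(hologram.size, size=code.size, replace=False)
hologram.flat[positions] = code                                 # embed

noisy = hologram.copy()
flips = rng.choice(hologram.size, size=50, replace=False)       # channel noise
noisy.flat[flips] ^= 1

recovered = repeat_decode(noisy.flat[positions])
print("bit errors after decoding:", int(np.sum(recovered != payload)))
```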
A study of the effects of strong magnetic fields on the image resolution of PET scanners
NASA Astrophysics Data System (ADS)
Burdette, Don J.
Very high resolution images can be achieved in small animal PET systems utilizing solid state silicon pad detectors. In such systems using detectors with sub-millimeter intrinsic resolutions, the range of the positron is the largest contribution to the image blur. The size of the positron range effect depends on the initial positron energy and hence the radioactive tracer used. For higher energy positron emitters, such as 68Ga and 94mTc, the variation of the annihilation point dominates the spatial resolution. In this study two techniques are investigated to improve the image resolution of PET scanners limited by the range of the positron. One, the positron range can be reduced by embedding the PET field of view in a strong magnetic field. We have developed a silicon pad detector based PET instrument that can operate in strong magnetic fields with an image resolution of 0.7 mm FWHM to study this effect. Two, iterative reconstruction methods can be used to statistically correct for the range of the positron. Both strong magnetic fields and iterative reconstruction algorithms that statistically account for the positron range distribution are investigated in this work.
Unsupervised universal steganalyzer for high-dimensional steganalytic features
NASA Astrophysics Data System (ADS)
Hou, Xiaodan; Zhang, Tao
2016-11-01
Research in developing steganalytic features has been highly successful. These features are extremely powerful when applied to supervised binary classification problems. However, they are incompatible with unsupervised universal steganalysis because the unsupervised method cannot distinguish embedding distortion from the varying levels of noise caused by cover variation. This study attempts to alleviate the problem by introducing similarity retrieval of image statistical properties (SRISP), with the specific aim of mitigating the effect of cover variation on existing steganalytic features. First, cover images with statistical properties similar to those of a given test image are retrieved from a cover database to establish an aided sample set. Then, unsupervised outlier detection is performed on a test set composed of the given test image and its aided sample set to determine the type (cover or stego) of the given test image. Our proposed framework, called SRISP-aided unsupervised outlier detection, requires no training and thus does not suffer from model mismatch. Compared with prior unsupervised outlier detectors that do not consider SRISP, the proposed framework not only retains universality but also exhibits superior performance when applied to high-dimensional steganalytic features.
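A minimal sketch of the final detection step, assuming the SRISP-retrieved covers are already available: a scikit-learn novelty detector is fitted on the aided cover set and then scores the test image's feature vector (random placeholders stand in for real steganalytic features).

```python
# Hedged sketch: fit a one-class/novelty detector on the aided cover set and
# flag the test image as stego if its feature vector is an outlier.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(5)
aided_cover_features = rng.standard_normal((200, 300))   # retrieved covers
test_feature = rng.standard_normal(300) + 0.8            # suspect image

detector = LocalOutlierFactor(n_neighbors=20, novelty=True)
detector.fit(aided_cover_features)
label = detector.predict(test_feature.reshape(1, -1))[0]  # +1 cover-like, -1 outlier
print("verdict:", "stego (outlier)" if label == -1 else "cover")
```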
Visualization of Spatio-Temporal Relations in Movement Event Using Multi-View
NASA Astrophysics Data System (ADS)
Zheng, K.; Gu, D.; Fang, F.; Wang, Y.; Liu, H.; Zhao, W.; Zhang, M.; Li, Q.
2017-09-01
Spatio-temporal relations among movement events extracted from temporally varying trajectory data can provide useful information about the evolution of individual or collective movers, as well as their interactions with their spatial and temporal contexts. However, the pure statistical tools commonly used by analysts pose many difficulties, due to the large number of attributes embedded in multi-scale and multi-semantic trajectory data. The need for models that operate at multiple scales to search for relations at different locations within time and space, as well as intuitively interpret what these relations mean, also presents challenges. Since analysts do not know where or when these relevant spatio-temporal relations might emerge, these models must compute statistical summaries of multiple attributes at different granularities. In this paper, we propose a multi-view approach to visualize the spatio-temporal relations among movement events. We describe a method for visualizing movement events and spatio-temporal relations that uses multiple displays. A visual interface is presented, and the user can interactively select or filter spatial and temporal extents to guide the knowledge discovery process. We also demonstrate how this approach can help analysts to derive and explain the spatio-temporal relations of movement events from taxi trajectory data.
Wei, Zuojun; Lou, Jiongtao; Su, Chuanmin; Guo, Dechao; Liu, Yingxin; Deng, Shuguang
2017-04-22
To achieve higher activity and reusability of a Ru-based catalyst, Ru nanoparticles were embedded in N-doped mesoporous carbon through a hard-template method. The catalyst showed excellent catalytic performance (turnover frequency of 314 h^-1) and recyclability (reusable five times with 3% activity loss) for the hydrogenolysis of levulinic acid to γ-valerolactone. Compared with mesoporous carbon without N-doping and conventional activated carbon, the introduction of the N-dopant effectively improved the dispersion of the Ru nanoparticles, decreased their average size to as small as 1.32 nm, and improved the adsorption of levulinic acid, which contributed to the increase in the activity of the catalyst. Additionally, the embedding method increased the interaction between the Ru nanoparticles and the carbon support in contrast with the conventional impregnation method, thus preventing the Ru nanoparticles from migrating, aggregating, and leaching from the carbon surface and therefore increasing the reusability of the catalyst. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Electrostatically Embedded Many-Body Expansion for Neutral and Charged Metalloenzyme Model Systems.
Kurbanov, Elbek K; Leverentz, Hannah R; Truhlar, Donald G; Amin, Elizabeth A
2012-01-10
The electrostatically embedded many-body (EE-MB) method has proven accurate for calculating cohesive and conformational energies in clusters, and it has recently been extended to obtain bond dissociation energies for metal-ligand bonds in positively charged inorganic coordination complexes. In the present paper, we present four key guidelines that maximize the accuracy and efficiency of EE-MB calculations for metal centers. Then, following these guidelines, we show that the EE-MB method can also perform well for bond dissociation energies in a variety of neutral and negatively charged inorganic coordination systems representing metalloenzyme active sites, including a model of the catalytic site of the zinc-bearing anthrax toxin lethal factor, a popular target for drug development. In particular, we find that the electrostatically embedded three-body (EE-3B) method is able to reproduce conventionally calculated bond-breaking energies in a series of pentacoordinate and hexacoordinate zinc-containing systems with an average absolute error (averaged over 25 cases) of only 0.98 kcal/mol.
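A minimal sketch of a two-body truncation of the many-body expansion, E ≈ Σ_i E_i + Σ_{i<j} (E_ij − E_i − E_j); the fragment "energy" function is a placeholder and the electrostatic embedding charges are not modelled here:

```python
# Hedged sketch: pairwise (two-body) many-body expansion over fragments.
# fragment_energy() is a hypothetical stand-in for a QM calculation.
from itertools import combinations

def fragment_energy(fragments):
    """Placeholder for a quantum-chemical energy of a fragment set (hypothetical)."""
    return -10.0 * len(fragments) - 0.1 * len(list(combinations(fragments, 2)))

def two_body_energy(fragments):
    monomers = {f: fragment_energy([f]) for f in fragments}
    total = sum(monomers.values())
    for i, j in combinations(fragments, 2):
        total += fragment_energy([i, j]) - monomers[i] - monomers[j]
    return total

frags = ["Zn", "His1", "His2", "H2O"]       # hypothetical active-site fragments
print("two-body expansion estimate:", two_body_energy(frags))
```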
Höfener, Sebastian; Gomes, André Severo Pereira; Visscher, Lucas
2012-01-28
In this article, we present a consistent derivation of a density functional theory (DFT) based embedding method which encompasses wave-function theory-in-DFT (WFT-in-DFT) and the DFT-based subsystem formulation of response theory (DFT-in-DFT) by Neugebauer [J. Neugebauer, J. Chem. Phys. 131, 084104 (2009)] as special cases. This formulation, which is based on the time-averaged quasi-energy formalism, makes use of variational Lagrangian techniques to allow the use of non-variational (in particular, coupled cluster) wave-function-based methods. We show how, in the time-independent limit, we naturally obtain expressions for ground-state DFT-in-DFT and WFT-in-DFT embedding via a local potential. We furthermore provide working equations for the special case in which coupled cluster theory is used to obtain the density and excitation energies of the active subsystem. A sample application is given to demonstrate the method. © 2012 American Institute of Physics
On the behavior of isolated and embedded carbon nano-tubes in a polymeric matrix
NASA Astrophysics Data System (ADS)
Rahimian-Koloor, Seyed Mostafa; Moshrefzadeh-Sani, Hadi; Mehrdad Shokrieh, Mahmood; Majid Hashemianzadeh, Seyed
2018-02-01
In the classical micro-mechanical method, the moduli of the reinforcement and the matrix are used to predict the stiffness of composites. However, using the classical micro-mechanical method to predict the stiffness of CNT/epoxy nanocomposites leads to overestimated results. One of the main reasons for this overestimation is that the method uses the stiffness of the isolated CNT and ignores nanoscale effects. In the present study, non-equilibrium molecular dynamics simulation in the isothermal-isobaric ensemble was used to consider the influence of CNT length on the stiffness of the nanocomposites. The results indicated that, due to nanoscale effects, the reinforcing efficiency of the embedded CNT is not constant and decreases as its length decreases. Based on the results, a relationship was derived that predicts the effective stiffness of an embedded CNT in terms of its length. It was shown that using this relationship yields a more accurate prediction of the elastic modulus of the nanocomposite, which was validated against experimental results.
Nonlinear dimensionality reduction of data lying on the multicluster manifold.
Meng, Deyu; Leung, Yee; Fung, Tung; Xu, Zongben
2008-08-01
A new method, which is called decomposition-composition (D-C) method, is proposed for the nonlinear dimensionality reduction (NLDR) of data lying on the multicluster manifold. The main idea is first to decompose a given data set into clusters and independently calculate the low-dimensional embeddings of each cluster by the decomposition procedure. Based on the intercluster connections, the embeddings of all clusters are then composed into their proper positions and orientations by the composition procedure. Different from other NLDR methods for multicluster data, which consider associatively the intracluster and intercluster information, the D-C method capitalizes on the separate employment of the intracluster neighborhood structures and the intercluster topologies for effective dimensionality reduction. This, on one hand, isometrically preserves the rigid-body shapes of the clusters in the embedding process and, on the other hand, guarantees the proper locations and orientations of all clusters. The theoretical arguments are supported by a series of experiments performed on the synthetic and real-life data sets. In addition, the computational complexity of the proposed method is analyzed, and its efficiency is theoretically analyzed and experimentally demonstrated. Related strategies for automatic parameter selection are also examined.
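The following is a minimal, hedged sketch of the decompose-then-compose idea using off-the-shelf scikit-learn components; it is not the authors' D-C algorithm. The decomposition step embeds each KMeans cluster independently with Isomap, and the composition step is simplified here to placing each cluster's centred embedding at its centroid's position in a global PCA, so cluster orientations are not recovered as they are in the published method.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap

def decompose_compose(X, n_clusters=3, n_components=2, n_neighbors=10):
    """Simplified decomposition-composition embedding (illustrative only)."""
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
    global_pos = PCA(n_components=n_components).fit(X)
    Y = np.zeros((X.shape[0], n_components))
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        # Decomposition: independent low-dimensional embedding of each cluster
        local = Isomap(n_neighbors=min(n_neighbors, len(idx) - 1),
                       n_components=n_components).fit_transform(X[idx])
        local -= local.mean(axis=0)                       # keep the rigid-body shape, centred
        # Composition (simplified): place the cluster at its centroid's global PCA position
        centre = global_pos.transform(X[idx].mean(axis=0, keepdims=True))
        Y[idx] = local + centre
    return Y, labels
```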
Farnum, C E; Wilsman, N J
1984-06-01
A postembedment method for the localization of lectin-binding glycoconjugates was developed using Epon-embedded growth plate cartilage from Yucatan miniature swine. By testing a variety of etching, blocking, and incubation procedures, a standard protocol was developed for 1 micron thick sections that allowed visualization of both intracellular and extracellular glycoconjugates with affinity for wheat germ agglutinin and concanavalin A. Both fluorescent and peroxidase techniques were used, and comparisons were made between direct methods and indirect methods using the biotin-avidin bridging system. Differential extracellular lectin binding allowed visualization of interterritorial, territorial, and pericellular matrices. Double labeling experiments showed the precision with which intracellular binding could be localized to specific cytoplasmic compartments, with resolution of binding to the Golgi apparatus, endoplasmic reticulum, and nuclear membrane at the light microscopic level. This method allows the localization of both intracellular and extracellular lectin-binding glycoconjugates using fixation and embedment procedures that are compatible with simultaneous ultrastructural analysis. As such, it should be applicable both to the morphological analysis of growth plate organization during normal endochondral ossification and to the diagnostic pathology of matrix abnormalities in disease states of growing cartilage.
Detection of LSB+/-1 steganography based on co-occurrence matrix and bit plane clipping
NASA Astrophysics Data System (ADS)
Abolghasemi, Mojtaba; Aghaeinia, Hassan; Faez, Karim; Mehrabi, Mohammad Ali
2010-01-01
Spatial LSB+/-1 steganography alters the smooth characteristics between adjoining pixels of the raw image. We present a novel steganalysis method for LSB+/-1 steganography based on feature vectors derived from the co-occurrence matrix in the spatial domain. We investigate how LSB+/-1 steganography affects the bit planes of an image and show that it mostly changes the lower (least significant) bit planes. The co-occurrence matrix is derived from an image in which some of its most significant bit planes are clipped. This preprocessing reduces the dimensionality of the feature vector while preserving the effects of embedding. We compute the co-occurrence matrix in different directions and with different dependencies and use the elements of the resulting co-occurrence matrix as features. This method is sensitive to the data embedding process. We use a Fisher linear discriminant (FLD) classifier and test our algorithm on different databases and embedding rates. We compare our scheme with the current LSB+/-1 steganalysis methods. It is shown that the proposed scheme outperforms the state-of-the-art methods in detecting the LSB+/-1 steganographic method for grayscale images.
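A rough sketch of this kind of feature extraction, assuming grayscale uint8 images: clip all but a few least significant bit planes, build normalized co-occurrence matrices for several pixel offsets, and feed the concatenated entries to a Fisher linear discriminant. The parameter choices here (3 retained bit planes, 3 offsets) are illustrative, not those of the paper.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def cooccurrence_features(img, keep_bits=3, offsets=((0, 1), (1, 0), (1, 1))):
    """Co-occurrence features after clipping the most significant bit planes.

    img: 2-D uint8 grayscale array. Keeping only `keep_bits` LSB planes
    shrinks the gray-level range (and hence the matrix) while retaining
    the traces left by LSB+/-1 embedding.
    """
    clipped = (img & ((1 << keep_bits) - 1)).astype(np.int64)
    levels = 1 << keep_bits
    feats = []
    for dy, dx in offsets:
        a = clipped[:clipped.shape[0] - dy, :clipped.shape[1] - dx]
        b = clipped[dy:, dx:]
        C = np.zeros((levels, levels))
        np.add.at(C, (a.ravel(), b.ravel()), 1)      # accumulate pair counts
        feats.append((C / C.sum()).ravel())          # normalized co-occurrence matrix
    return np.concatenate(feats)

# Cover-vs-stego classification with a Fisher linear discriminant,
# assuming `covers` and `stegos` are lists of grayscale images:
# X = np.array([cooccurrence_features(im) for im in covers + stegos])
# y = np.array([0] * len(covers) + [1] * len(stegos))
# clf = LinearDiscriminantAnalysis().fit(X, y)
```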
Donczo, Boglarka; Guttman, Andras
2018-06-05
More than a century ago, in 1893, a revolutionary idea about fixing biological tissue specimens was introduced by Ferdinand Blum, a German physician. Since then, a plethora of fixation methods have been investigated and used. Formalin fixation with paraffin embedment became the most widely used type of fixation and preservation method, due to its proper architectural conservation of tissue structures and cellular shape. The huge collection of formalin-fixed, paraffin-embedded (FFPE) sample archives worldwide holds a large amount of unearthed information about diseases that could be the Holy Grail in contemporary biomarker research utilizing analytical omics-based molecular diagnostics. The aim of this review is to critically evaluate the omics options for FFPE tissue sample analysis in the molecular diagnostics field. Copyright © 2018. Published by Elsevier B.V.
An Augmented Lagrangian Filter Method for Real-Time Embedded Optimization
Chiang, Nai -Yuan; Huang, Rui; Zavala, Victor M.
2017-04-17
We present a filter line-search algorithm for nonconvex continuous optimization that combines an augmented Lagrangian function and a constraint violation metric to accept and reject steps. The approach is motivated by real-time optimization applications that need to be executed on embedded computing platforms with limited memory and processor speeds. The proposed method enables primal–dual regularization of the linear algebra system that in turn permits the use of solution strategies with lower computing overheads. We prove that the proposed algorithm is globally convergent and we demonstrate the developments using a nonconvex real-time optimization application for a building heating, ventilation, and air conditioning system. Our numerical tests are performed on a standard processor and on an embedded platform. Lastly, we demonstrate that the approach reduces solution times by a factor of over 1000.
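The two ingredients named in the abstract, an augmented Lagrangian merit function and a filter acceptance test, can be sketched as follows; this is a generic illustration under an equality-constraint assumption, not the authors' full line-search algorithm.

```python
import numpy as np

def augmented_lagrangian(f, c, x, lam, rho):
    """L_A(x) = f(x) + lam^T c(x) + (rho/2) ||c(x)||^2 for equality constraints c(x) = 0."""
    cx = np.asarray(c(x), dtype=float)
    return f(x) + lam @ cx + 0.5 * rho * cx @ cx

def filter_accepts(filter_set, theta_new, phi_new, gamma=1e-5):
    """Classic filter test: accept a trial point if it sufficiently reduces
    either the constraint violation theta or the merit value phi relative
    to every (theta, phi) pair already stored in the filter."""
    return all(theta_new <= (1 - gamma) * th or phi_new <= ph - gamma * th
               for th, ph in filter_set)
```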
Canron, Marie-Hélène; Bouillot, Sandrine; Favereaux, Alexandre; Petry, Klaus G; Vital, Anne
2003-03-01
Ultrastructural immunolabeling of peripheral nervous system components is an important tool to study the relation between structure and function. Owing to the scarcity of certain antigens and the dense structure of the peripheral nerve, a pre-embedding technique is likely appropriate. After several investigations on procedures for pre-embedding immunolabeling, we propose a method that offers a good compromise between detection of antigenic sites and preservation of morphology at the ultrastructural level, and that is easy to use and suitable for investigations on peripheral nerve biopsies from humans. Pre-fixation by immersion in paraformaldehyde/glutaraldehyde is necessary to stabilize the ultrastructure. Then, ultrasmall gold particles with silver enhancement are advised. Antibodies against myelin protein zero and myelin basic protein were chosen for demonstration. The same technique was applied to localize a 35 kDa myelin protein.
Pandey, Pinki; Dixit, Alok; Tanwar, Aparna; Sharma, Anuradha; Mittal, Sanjeev
2014-07-01
Our study presents a new deparaffinizing and hematoxylin and eosin (H and E) staining method that involves the use of easily available, nontoxic and eco-friendly liquid diluted dish washing soap (DWS) by completely eliminating expensive and hazardous xylene and alcohol from deparaffinizing and rehydration prior to staining, staining, and from dehydration prior to mounting. The aim was to evaluate and compare the quality of liquid DWS treated xylene and alcohol free (XAF) sections with that of the conventional H and E sections. A total of 100 paraffin embedded tissue blocks from different tissues were included. From each tissue block, one section was stained with the conventional H and E method (normal sections) and the other with the XAF H and E method (soapy sections). Slides were scored using five parameters: nuclear, cytoplasmic, clarity, uniformity, and crispness of staining. The Z-test was used for statistical analysis. Soapy sections scored better for cytoplasmic (90%) and crisp staining (95%), with a statistically significant difference, whereas for uniformity of staining, normal sections (88%) scored over soapy sections (72%) (Z = 2.82, P < 0.05). For nuclear (90%) and clarity of staining (90%), total scores favored soapy sections, but the difference was not statistically significant. About 84% of normal sections stained adequately for diagnosis compared with 86% of soapy sections (Z = 0.396, P > 0.05). Liquid DWS is a safe and efficient alternative to xylene and alcohol in deparaffinization and the routine H and E staining procedure. We document this procedure so that it can be used as a model by other histology laboratories.
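The reported comparisons are standard two-proportion z-tests; a short check against the numbers quoted above (88/100 vs. 72/100 for uniformity of staining) reproduces the reported Z value.

```python
import numpy as np

def two_proportion_z(x1, n1, x2, n2):
    """Pooled two-proportion z-test statistic."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                       # pooled proportion
    se = np.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Uniformity of staining: 88/100 normal vs. 72/100 soapy sections
print(round(two_proportion_z(88, 100, 72, 100), 2))   # ~2.83, matching the reported Z = 2.82
# Diagnostic adequacy: 84/100 vs. 86/100
print(round(abs(two_proportion_z(84, 100, 86, 100)), 3))   # ~0.396, matching the reported Z
```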
Supervised linear dimensionality reduction with robust margins for object recognition
NASA Astrophysics Data System (ADS)
Dornaika, F.; Assoum, A.
2013-01-01
Linear Dimensionality Reduction (LDR) techniques have been increasingly important in computer vision and pattern recognition since they permit a relatively simple mapping of data onto a lower dimensional subspace, leading to simple and computationally efficient classification strategies. Recently, many linear discriminant methods have been developed in order to reduce the dimensionality of visual data and to enhance the discrimination between different groups or classes. Many existing linear embedding techniques have relied on the use of local margins in order to get a good discrimination performance. However, dealing with outliers and within-class diversity has not been addressed by margin-based embedding methods. In this paper, we explore the use of different margin-based linear embedding methods. More precisely, we propose to use the concepts of Median miss and Median hit for building robust margin-based criteria. Based on such margins, we seek the projection directions (linear embedding) such that the sum of local margins is maximized. Our proposed approach has been applied to the problem of appearance-based face recognition. Experiments performed on four public face databases show that the proposed approach can give better generalization performance than the classic Average Neighborhood Margin Maximization (ANMM). Moreover, thanks to the use of robust margins, the proposed method degrades gracefully when label outliers contaminate the training data set. In particular, we show that the concept of Median hit was crucial in order to get robust performance in the presence of outliers.
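A hedged sketch of the robust margin idea, assuming a feature matrix X and labels y: for each sample, the margin is the median distance to other-class samples (a "Median miss") minus the median distance to same-class samples (a "Median hit"). The projection-seeking step that maximizes the sum of such margins is omitted here.

```python
import numpy as np
from scipy.spatial.distance import cdist

def median_margins(X, y):
    """Per-sample robust margin: median miss distance minus median hit distance.
    Medians damp the influence of outliers compared with nearest-neighbor margins."""
    y = np.asarray(y)
    D = cdist(X, X)
    margins = np.empty(len(X))
    for i in range(len(X)):
        same = (y == y[i])
        same[i] = False                    # exclude the sample itself
        other = (y != y[i])
        margins[i] = np.median(D[i, other]) - np.median(D[i, same])
    return margins
```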
State-space model with deep learning for functional dynamics estimation in resting-state fMRI.
Suk, Heung-Il; Wee, Chong-Yaw; Lee, Seong-Whan; Shen, Dinggang
2016-04-01
Studies on resting-state functional Magnetic Resonance Imaging (rs-fMRI) have shown that different brain regions still actively interact with each other while a subject is at rest, and such functional interaction is not stationary but changes over time. In terms of a large-scale brain network, in this paper, we focus on time-varying patterns of functional networks, i.e., functional dynamics, inherent in rs-fMRI, which is one of the emerging issues along with the network modelling. Specifically, we propose a novel methodological architecture that combines deep learning and state-space modelling, and apply it to rs-fMRI based Mild Cognitive Impairment (MCI) diagnosis. We first devise a Deep Auto-Encoder (DAE) to discover hierarchical non-linear functional relations among regions, by which we transform the regional features into an embedding space, whose bases are complex functional networks. Given the embedded functional features, we then use a Hidden Markov Model (HMM) to estimate dynamic characteristics of functional networks inherent in rs-fMRI via internal states, which are unobservable but can be inferred from observations statistically. By building a generative model with an HMM, we estimate the likelihood of the input features of rs-fMRI as belonging to the corresponding status, i.e., MCI or normal healthy control, based on which we identify the clinical label of a testing subject. In order to validate the effectiveness of the proposed method, we performed experiments on two different datasets and compared with state-of-the-art methods in the literature. We also analyzed the functional networks learned by DAE, estimated the functional connectivities by decoding hidden states in HMM, and investigated the estimated functional connectivities by means of a graph-theoretic approach. Copyright © 2016 Elsevier Inc. All rights reserved.
State-space model with deep learning for functional dynamics estimation in resting-state fMRI
Suk, Heung-Il; Wee, Chong-Yaw; Lee, Seong-Whan; Shen, Dinggang
2017-01-01
Studies on resting-state functional Magnetic Resonance Imaging (rs-fMRI) have shown that different brain regions still actively interact with each other while a subject is at rest, and such functional interaction is not stationary but changes over time. In terms of a large-scale brain network, in this paper, we focus on time-varying patterns of functional networks, i.e., functional dynamics, inherent in rs-fMRI, which is one of the emerging issues along with the network modelling. Specifically, we propose a novel methodological architecture that combines deep learning and state-space modelling, and apply it to rs-fMRI based Mild Cognitive Impairment (MCI) diagnosis. We first devise a Deep Auto-Encoder (DAE) to discover hierarchical non-linear functional relations among regions, by which we transform the regional features into an embedding space, whose bases are complex functional networks. Given the embedded functional features, we then use a Hidden Markov Model (HMM) to estimate dynamic characteristics of functional networks inherent in rs-fMRI via internal states, which are unobservable but can be inferred from observations statistically. By building a generative model with an HMM, we estimate the likelihood of the input features of rs-fMRI as belonging to the corresponding status, i.e., MCI or normal healthy control, based on which we identify the clinical label of a testing subject. In order to validate the effectiveness of the proposed method, we performed experiments on two different datasets and compared with state-of-the-art methods in the literature. We also analyzed the functional networks learned by DAE, estimated the functional connectivities by decoding hidden states in HMM, and investigated the estimated functional connectivities by means of a graph-theoretic approach. PMID:26774612
Hatamleh, Muhanad M; Watts, David C
2010-07-01
The purpose of this study was to test the effect of different periods of accelerated artificial daylight aging on bond strength of glass fiber bundles embedded into maxillofacial silicone elastomer and on bending strength of the glass fiber bundles. Forty specimens were fabricated by embedding resin-impregnated fiber bundles (1.5-mm diameter, 20-mm long) into maxillofacial silicone elastomer. Specimens were randomly allocated into four groups, and each group was subjected to different periods of accelerated daylight aging as follows (in hours): 0, 200, 400, and 600. The aging cycle included continuous exposure to quartz-filtered visible daylight (irradiance 760 W/m²) under an alternating weathering cycle (wet for 18 minutes, dry for 102 minutes). Pull-out tests were performed to evaluate bond strength between fiber bundles and silicone using a universal testing machine at 1 mm/min crosshead speed. A three-point bending test was also performed to evaluate bending strength of the fiber bundles. One-way ANOVA and Bonferroni post hoc tests were carried out to detect statistical significance (p < 0.05). Mean (SD) values of maximum pull-out forces (in N) for groups 1 to 4 were: 13.63 (7.45), 19.67 (1.37), 13.58 (2.61), and 10.37 (2.52). Group 2 exhibited the highest pull-out force, and the difference was statistically significant compared to the other groups. Maximum bending strengths of fiber bundles were in the range of 917.72 MPa to 1124.06 MPa. Bending strength significantly increased after 200 and 400 hours of aging only. After 200 hours of exposure to artificial daylight and moisture conditions, bond strength between glass fibers and heat-cured silicones is optimal, and the bending strength of the glass fiber bundles is enhanced.
Senter, Leigha; O'Malley, David M; Backes, Floor J; Copeland, Larry J; Fowler, Jeffery M; Salani, Ritu; Cohn, David E
2017-10-01
To analyze the impact of embedding genetic counseling services in gynecologic oncology on clinician referral and patient uptake of cancer genetics services. Data were reviewed for a total of 737 newly diagnosed epithelial ovarian cancer patients seen in gynecologic oncology at a large academic medical center, including 401 from 11/2011-7/2014 (a time when cancer genetics services were provided as an off-site consultation). These data were compared to data from 8/2014-9/2016 (n=336), when the model changed to the genetics embedded model (GEM), incorporating a cancer genetic counselor on-site in the gynecologic oncology clinic. A statistically significant difference in proportion of patients referred pre- and post-GEM was observed (21% vs. 44%, p<0.0001). Pre-GEM, only 38% of referred patients were actually scheduled for genetics consultation and post-GEM 82% were scheduled (p<0.00001). The difference in the time from referral to scheduling in genetics was also statistically significant (3.92 months pre-GEM vs. 0.79 months post-GEM, p<0.00001) as was the time from referral to completion of genetics consultation (2.52 months pre-GEM vs. 1.67 months post-GEM, p<0.01). Twenty-five percent of patients referred post-GEM were seen by the genetic counselor on the same day as the referral. Providing cancer genetics services on-site in gynecologic oncology and modifying the process by which patients are referred and scheduled significantly increases referral to cancer genetics and timely completion of genetics consultation, improving compliance with guideline-based care. Practice changes are critical given the impact of genetic test results on treatment and familial cancer risks. Copyright © 2017 Elsevier Inc. All rights reserved.
Vu, Trung N; Valkenborg, Dirk; Smets, Koen; Verwaest, Kim A; Dommisse, Roger; Lemière, Filip; Verschoren, Alain; Goethals, Bart; Laukens, Kris
2011-10-20
Nuclear magnetic resonance spectroscopy (NMR) is a powerful technique to reveal and compare quantitative metabolic profiles of biological tissues. However, chemical and physical sample variations make the analysis of the data challenging, and typically require the application of a number of preprocessing steps prior to data interpretation. For example, noise reduction, normalization, baseline correction, peak picking, spectrum alignment and statistical analysis are indispensable components in any NMR analysis pipeline. We introduce a novel suite of informatics tools for the quantitative analysis of NMR metabolomic profile data. The core of the processing cascade is a novel peak alignment algorithm, called hierarchical Cluster-based Peak Alignment (CluPA). The algorithm aligns a target spectrum to the reference spectrum in a top-down fashion by building a hierarchical cluster tree from peak lists of reference and target spectra and then dividing the spectra into smaller segments based on the most distant clusters of the tree. To reduce the computational time to estimate the spectral misalignment, the method makes use of Fast Fourier Transformation (FFT) cross-correlation. Since the method returns a high-quality alignment, we can propose a simple methodology to study the variability of the NMR spectra. For each aligned NMR data point the ratio of the between-group and within-group sum of squares (BW-ratio) is calculated to quantify the difference in variability between and within predefined groups of NMR spectra. This differential analysis is related to the calculation of the F-statistic or a one-way ANOVA, but without distributional assumptions. Statistical inference based on the BW-ratio is achieved by bootstrapping the null distribution from the experimental data. The workflow performance was evaluated using a previously published dataset. Correlation maps, spectral and grey scale plots show clear improvements in comparison to other methods, and the down-to-earth quantitative analysis works well for the CluPA-aligned spectra. The whole workflow is embedded into a modular and statistically sound framework that is implemented as an R package called "speaq" ("spectrum alignment and quantitation"), which is freely available from http://code.google.com/p/speaq/.
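Two of the computational pieces described above can be sketched compactly: a circular FFT cross-correlation to estimate the misalignment between a target spectrum segment and a reference, and the BW-ratio computed per data point across groups of aligned spectra. This is an illustrative reimplementation, not the speaq/CluPA code itself.

```python
import numpy as np

def fft_shift_estimate(reference, target):
    """Location of the circular cross-correlation peak, mapped to a signed shift,
    as an estimate of the misalignment between two spectrum segments."""
    n = len(reference)
    corr = np.fft.ifft(np.fft.fft(reference) * np.conj(np.fft.fft(target))).real
    lag = int(np.argmax(corr))
    return lag if lag <= n // 2 else lag - n

def bw_ratio(spectra, groups):
    """Between-group / within-group sum of squares for every data point.

    spectra: (n_samples, n_points) array of aligned spectra
    groups:  length-n_samples array of group labels
    """
    groups = np.asarray(groups)
    grand = spectra.mean(axis=0)
    bss = np.zeros(spectra.shape[1])
    wss = np.zeros(spectra.shape[1])
    for g in np.unique(groups):
        block = spectra[groups == g]
        mean_g = block.mean(axis=0)
        bss += len(block) * (mean_g - grand) ** 2
        wss += ((block - mean_g) ** 2).sum(axis=0)
    return bss / np.maximum(wss, 1e-12)
```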
Glancy, Brian; Hsu, Li-Yueh; Dao, Lam; Bakalar, Matthew; French, Stephanie; Chess, David J.; Taylor, Joni L.; Picard, Martin; Aponte, Angel; Daniels, Mathew P.; Esfahani, Shervin; Cushman, Samuel; Balaban, Robert S.
2013-01-01
Objective: To provide insight into mitochondrial function in vivo, we evaluated the 3D spatial relationship between capillaries, mitochondria, and muscle fibers in live mice. Methods: 3D volumes of in vivo murine Tibialis anterior muscles were imaged by multi-photon microscopy (MPM). Muscle fiber type, mitochondrial distribution, number of capillaries, and capillary-to-fiber contact were assessed. The role of myoglobin-facilitated diffusion was examined in myoglobin knockout mice. Distribution of GLUT4 was also evaluated in the context of the capillary and mitochondrial network. Results: MPM revealed that 43.6 ± 3.3% of oxidative fiber capillaries had ≥ 50% of their circumference embedded in a groove in the sarcolemma, in vivo. Embedded capillaries were tightly associated with dense mitochondrial populations lateral to capillary grooves and nearly absent below the groove. Mitochondrial distribution, number of embedded capillaries, and capillary-to-fiber contact were proportional to fiber oxidative capacity and unaffected by myoglobin knockout. GLUT4 did not preferentially localize to embedded capillaries. Conclusions: Embedding capillaries in the sarcolemma may provide a regulatory mechanism to optimize delivery of oxygen to heterogeneous groups of muscle fibers. We hypothesize that mitochondria locate to paravascular regions due to myofibril voids created by embedded capillaries, not to enhance the delivery of oxygen to the mitochondria. PMID:25279425
NASA Astrophysics Data System (ADS)
Metusala, D.
2017-07-01
This alternative method provides a simpler and faster procedure for preparing cross-sections of leaves and roots in herbaceous plants, especially for living specimens of orchids (Orchidaceae). The method uses a clamp-on hand sliding microtome to make cross-sections of leaves and roots, with sections preserved inside microtubes containing a preservation liquid. This preservation technique allows the sections to be restained and reused in the future. The method is more practical than the paraffin embedding method because it does not require the additional steps of paraffin embedding and deparaffinization. It may also provide better cross-section results than the free-hand sectioning method. The procedure is very feasible and is recommended for use in plant anatomy observation.
NASA Astrophysics Data System (ADS)
Huang, Daniel Z.; De Santis, Dante; Farhat, Charbel
2018-07-01
The Finite Volume method with Exact two-material Riemann Problems (FIVER) is both a computational framework for multi-material flows characterized by large density jumps, and an Embedded Boundary Method (EBM) for computational fluid dynamics and highly nonlinear Fluid-Structure Interaction (FSI) problems. This paper deals with the EBM aspect of FIVER. For FSI problems, this EBM has already demonstrated the ability to address viscous effects along wall boundaries, and large deformations and topological changes of such boundaries. However, like for most EBMs - also known as immersed boundary methods - the performance of FIVER in the vicinity of a wall boundary can be sensitive with respect to the position and orientation of this boundary relative to the embedding mesh. This is mainly due to ill-conditioning issues that arise when an embedded interface becomes too close to a node of the embedding mesh, which may lead to spurious oscillations in the computed solution gradients at the wall boundary. This paper resolves these issues by introducing an alternative definition of the active/inactive status of a mesh node that leads to the removal of all sources of potential ill-conditioning from all spatial approximations performed by FIVER in the vicinity of a fluid-structure interface. It also makes two additional contributions. The first one is a new procedure for constructing the fluid-structure half Riemann problem underlying the semi-discretization by FIVER of the convective fluxes. This procedure eliminates one extrapolation from the conventional treatment of the wall boundary conditions and replaces it by an interpolation, which improves robustness. The second contribution is a post-processing algorithm for computing quantities of interest at the wall that achieves smoothness in the computed solution and its gradients. Lessons learned from these enhancements and contributions that are triggered by the new definition of the status of a mesh node are then generalized and exploited to eliminate from the original version of the FIVER method its sensitivities with respect to both of the position and orientation of the wall boundary relative to the embedding mesh, while maintaining the original definition of the status of a mesh node. This leads to a family of second-generation FIVER methods whose performance is illustrated in this paper for several flow and FSI problems. These include a challenging flow problem over a bird wing characterized by a feather-induced surface roughness, and a complex flexible flapping wing problem for which experimental data is available.
Effect of temperature on the spectrum of fiber Bragg grating sensors embedded in polymer composite
NASA Astrophysics Data System (ADS)
Anoshkin, A. N.; Shipunov, G. S.; Voronkov, A. A.; Shardakov, I. N.
2017-12-01
This work presents experimental results on the effect of temperature on the spectrum of fiber Bragg grating (FBG) sensors embedded in a polymer composite material manufactured by the prepreg method. The tests are carried out for flat bar specimens made of fiberglass with five embedded FBG sensors. For measuring the reflected wave power, the ASTRO X322 Interrogator is used. It is shown that embedding leads to the occurrence of an additional power peak and decreases the reflection spectrum signal by 10-12 dB. This is due to the effect of transverse compression force and the anisotropic character of the thermal expansion coefficient of the material. Upon heating, the reflected spectrum returns close to its initial state, but with lower power.
D-Move: A Mobile Communication Based Delphi for Digital Natives to Support Embedded Research
ERIC Educational Resources Information Center
Petrovic, Otto
2017-01-01
Digital Natives are raised with computers and the Internet, which are a familiar part of their daily life. To gain insights into their attitude and behavior, methods and media for empirical research face new challenges like gamification, context oriented embedded research, integration of multiple data sources, and the increased importance of…
ERIC Educational Resources Information Center
Marcks, Melissa A.
2017-01-01
Instructional coaching is one job-embedded professional development approach that provides teachers an opportunity to build teacher expertise, raise student achievement, and advance school reform. The problem that was addressed in this qualitative case study was that few principals understand the process of instructional coaching as…
ERIC Educational Resources Information Center
Edwards, Mary E.; Black, Erik W.
2012-01-01
This paper reports the results of a case study evaluation of an embedded librarian project at a large, land-grant, research institution. The case is comprised of learners who are full-time academic health care professionals enrolled in an online graduate educational technology program. The mixed methods methodology focused on assessing the…
ERIC Educational Resources Information Center
Amendum, Steven J.
2014-01-01
The purpose of the current mixed-methods study was to investigate a model of professional development and classroom-based early reading intervention implemented by the 1st-grade teaching team in a large urban/suburban school district in the southeastern United States. The intervention provided teachers with ongoing embedded professional…
No Child Left Behind: Values and Research Issues in High-Stakes Assessments
ERIC Educational Resources Information Center
Duffy, Maureen; Giordano, Victoria A.; Farrell, Jill B.; Paneque, Oneyda M.; Crump, Genae B.
2008-01-01
High-stakes testing and mandated assessments, which are major outcomes of the No Child Left Behind Act of 2001 (NCLB), contain multiple embedded values that affect the lives of students, their families, teachers, and counselors. A primary embedded value within the NCLB is the privileging of quantitative science over other methods of inquiry and…
Embedded Librarianship Is Job One: Building on Instructional Synergies
ERIC Educational Resources Information Center
Tumbleson, Beth E.; Burke, John J.
2010-01-01
Information literacy instruction is provided in five formats: reference, one-shot sessions, credit courses, library Web sites, and embedded librarians. Each method offers distinct merits as well as limitations. Much can be gained by considering the swirl or interplay of all five and how working with one approach informs the others and results in a…
ICT Teachers' Acceptance of "Scratch" as Algorithm Visualization Software
ERIC Educational Resources Information Center
Saltan, Fatih; Kara, Mehmet
2016-01-01
This study aims to investigate the acceptance of ICT teachers pertaining to the use of Scratch as an Algorithm Visualization (AV) software in terms of perceived ease of use and perceived usefulness. An embedded mixed method research design was used in the study, in which qualitative data were embedded in quantitative ones and used to explain the…
The Curvature-Augmented Closest Point method with vesicle inextensibility application
Vogl, Christopher J.
2017-06-06
Here, the Closest Point method, initially developed by Ruuth and Merriman, allows for the numerical solution of surface partial differential equations without the need for a parameterization of the surface itself. Surface quantities are embedded into the surrounding domain by assigning each value at a given spatial location to the corresponding value at the closest point on the surface. This embedding allows for surface derivatives to be replaced by their Cartesian counterparts (e.g., ∇_s = ∇). This equivalence is only valid on the surface, and thus, interpolation is used to enforce what is known as the side condition away from the surface. To improve upon the method, this work derives an operator embedding that incorporates curvature information, making it valid in a neighborhood of the surface. With this, direct enforcement of the side condition is no longer needed. Comparisons in R² and R³ show that the resulting Curvature-Augmented Closest Point method has better accuracy and requires less memory, through increased matrix sparsity, than the Closest Point method, while maintaining similar matrix condition numbers. To demonstrate the utility of the method in a physical application, simulations of inextensible, bi-lipid vesicles evolving toward equilibrium shapes are also included.
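A minimal sketch of the basic Ruuth-Merriman Closest Point construction (not the curvature-augmented variant): solve the surface heat equation on the unit circle by extending data off the surface via the closest-point map, applying the ordinary Cartesian Laplacian, and re-extending after each step to enforce the side condition. The grid spacing, time step, and initial data are illustrative choices.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Heat equation u_t = Laplace_s u on the unit circle, embedded in a 2-D Cartesian grid.
h = 0.05
x = np.arange(-2, 2 + h, h)
X, Y = np.meshgrid(x, x, indexing="ij")
R = np.maximum(np.hypot(X, Y), 1e-12)
CPx, CPy = X / R, Y / R                      # closest point on the circle for every grid node

theta = np.arctan2(CPy, CPx)
u = np.cos(3 * theta)                        # surface data, already constant along normals

dt = 0.1 * h ** 2
for _ in range(200):
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u) / h ** 2
    u = u + dt * lap                         # Cartesian Laplacian replaces Laplace-Beltrami
    interp = RegularGridInterpolator((x, x), u)
    pts = np.column_stack([CPx.ravel(), CPy.ravel()])
    u = interp(pts).reshape(u.shape)         # re-extension enforces the side condition

i1, i0 = np.argmin(np.abs(x - 1.0)), np.argmin(np.abs(x))
print(u[i1, i0], np.exp(-9 * 200 * dt))      # numerical value at theta = 0 vs. exact decay
```

For the initial data cos(3θ) the exact surface solution decays as e^(-9t), which the printed numerical value should approximate.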
Sub-Poissonian phonon statistics in an acoustical resonator coupled to a pumped two-level emitter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ceban, V., E-mail: victor.ceban@phys.asm.md; Macovei, M. A., E-mail: macovei@phys.asm.md
2015-11-15
The concept of an acoustical analog of the optical laser has been developed recently in both theoretical and experimental works. We here discuss a model of a coherent phonon generator with a direct signature of the quantum properties of sound vibrations. The considered setup is made of a laser-driven quantum dot embedded in an acoustical nanocavity. The system dynamics is solved for a single phonon mode in the steady-state and in the strong quantum dot—phonon coupling regime beyond the secular approximation. We demonstrate that the phonon statistics exhibits quantum features, i.e., is sub-Poissonian.
Detection of weak signals in memory thermal baths.
Jiménez-Aquino, J I; Velasco, R M; Romero-Bastida, M
2014-11-01
The nonlinear relaxation time and the statistics of the first passage time distribution in connection with the quasideterministic approach are used to detect weak signals in the decay process of the unstable state of a Brownian particle embedded in memory thermal baths. The study is performed in the overdamped approximation of a generalized Langevin equation characterized by an exponential decay in the friction memory kernel. A detection criterion for each time scale is studied: The first one is referred to as the receiver output, which is given as a function of the nonlinear relaxation time, and the second one is related to the statistics of the first passage time distribution.
Embedded academic writing support for nursing students with English as a second language.
Salamonson, Yenna; Koch, Jane; Weaver, Roslyn; Everett, Bronwyn; Jackson, Debra
2010-02-01
This paper reports a study which evaluated a brief, embedded academic support workshop as a strategy for improving academic writing skills in first-year nursing students with low-to-medium English language proficiency. Nursing students who speak English as a second language have lower academic success compared with their native English-speaking counterparts. The development of academic writing skills is known to be most effective when embedded into discipline-specific curricula. Using a randomized controlled design, in 2008, 106 students pre-enrolled in an introductory bioscience subject were randomized to receive either the intervention, a 4-day embedded academic learning support workshop facilitated by two bioscience (content) nursing academics and a writing and editing professional, or to act as the control group. The primary focus of the workshop was to support students to work through a mock assignment by providing progressive feedback and written suggestions on how to improve their answers. Of the 59 students randomized to the intervention, only 28 attended the workshop. Bioscience assignment results were analysed for those who attended (attendees), those randomized to the intervention but who did not attend (non-attendees), and the control group. Using ANOVA, the results indicated that attendees achieved statistically significantly higher mean scores (70.8, sd: 6.1) compared to both the control group (58.4, sd: 3.4, P = 0.002) and non-attendees (48.5, sd: 5.5, P = 0.001). A brief, intensive, embedded academic support workshop was effective in improving the academic writing ability of nursing students with low-to-medium English language proficiency, although reaching all students who are likely to benefit from this intervention remains a challenge.
Embedding and partial resolution of complex cones over Fano threefolds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dwivedi, Siddharth, E-mail: sdwivedi@iitk.ac.in
2016-12-15
This work deals with the study of embeddings of toric Calabi–Yau fourfolds which are complex cones over smooth Fano threefolds. In particular, we focus on finding various embeddings of Fano threefolds inside other Fano threefolds and study the partial resolution of the latter in the hope of finding new toric dualities. We find that many diagrams are possible for many of these Fano threefolds, but unfortunately, none of them are consistent quiver theories. We also obtain a quiver Chern–Simons theory which matches a theory known in the literature, thus providing an alternate method of obtaining it.
Particle Size Influence on the Effective Permeability of Composite Materials
NASA Astrophysics Data System (ADS)
Xiang, Tai; Zhong, Ru-Neng; Yao, Bin; Qin, Shao-Jing; Zheng, Qin-Hong
2018-05-01
An energy method for estimating the effective permeability of composite materials is proposed. We approximate the effective static magnetic permeability by the energy method and the Maxwell-Garnett method for a system of dispersed spherical particles. Considering the effect of the interface layer between the medium and the particles, we treat nanoparticles embedded in a medium exactly. The properties of the interface layer are a significant factor in the effective permeability of composite materials in which nano-sized particles are embedded. Supported by the National Natural Science Foundation of Yunnan Province under Grant No. 2014FB141 and the National Natural Science Foundation of China under Grant No. 1121403.
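For reference, the classical Maxwell-Garnett mixing rule that the proposed energy method is compared against can be written in a few lines; this is the standard textbook formula for spherical inclusions, not the authors' energy method, and it ignores any interface layer.

```python
def maxwell_garnett(mu_host, mu_incl, f):
    """Maxwell-Garnett effective permeability for spherical inclusions
    of permeability mu_incl at volume fraction f in a host of permeability mu_host.
    Solves (mu_eff - mu_h)/(mu_eff + 2 mu_h) = f (mu_i - mu_h)/(mu_i + 2 mu_h)."""
    num = mu_incl - mu_host
    den = mu_incl + 2 * mu_host
    return mu_host * (1 + 3 * f * num / (den - f * num))

# Example: 10 % volume fraction of particles with mu = 5 in a host with mu = 1
print(maxwell_garnett(1.0, 5.0, 0.10))
```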
Dissimilarity measure based on ordinal pattern for physiological signals
NASA Astrophysics Data System (ADS)
Wang, Jing; Shang, Pengjian; Shi, Wenbin; Cui, Xingran
2016-08-01
Complex physiologic signals may carry information about their underlying mechanisms. In this paper, we introduce a dissimilarity measure to capture the features of underlying dynamics from various types of physiologic signals based on rank order statistics of ordinal patterns. Simulated 1/f noise and white noise are used to evaluate the effect of data length, embedding dimension and time delay on this measure. We then apply this measure to different physiologic signals. The method can successfully characterize the unique underlying patterns of subjects at similar physiologic states. It can also serve as a good discriminative tool for the healthy young, healthy elderly, congestive heart failure, atrial fibrillation and white noise groups. Furthermore, when the details of the underlying ordinal patterns are examined for each group, it is found that the distributions of ordinal patterns vary significantly with healthy and pathologic states, as well as with aging.
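A hedged sketch of a rank-order dissimilarity of this kind: estimate the distribution of ordinal (permutation) patterns for each series at a chosen embedding dimension and delay, then compare the two distributions. The specific distance used here (Euclidean) is an illustrative choice, not necessarily the one in the paper.

```python
import numpy as np
from itertools import permutations

def ordinal_distribution(x, dim=4, delay=1):
    """Relative frequencies of ordinal (permutation) patterns of a 1-D series."""
    counts = {p: 0 for p in permutations(range(dim))}
    n = len(x) - (dim - 1) * delay
    for i in range(n):
        window = x[i:i + dim * delay:delay]
        counts[tuple(int(k) for k in np.argsort(window))] += 1
    freqs = np.array(list(counts.values()), dtype=float)
    return freqs / freqs.sum()

def ordinal_dissimilarity(x, y, dim=4, delay=1):
    """Euclidean distance between the two ordinal-pattern distributions."""
    return np.linalg.norm(ordinal_distribution(x, dim, delay) -
                          ordinal_distribution(y, dim, delay))

# White noise and a smooth oscillation have very different pattern statistics
rng = np.random.default_rng(0)
print(ordinal_dissimilarity(rng.standard_normal(5000),
                            np.sin(np.linspace(0, 200, 5000))))
```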
Nonlinear filtering properties of detrended fluctuation analysis
NASA Astrophysics Data System (ADS)
Kiyono, Ken; Tsujimoto, Yutaka
2016-11-01
Detrended fluctuation analysis (DFA) has been widely used for quantifying long-range correlation and fractal scaling behavior. In DFA, to avoid spurious detection of scaling behavior caused by a nonstationary trend embedded in the analyzed time series, a detrending procedure using piecewise least-squares fitting has been applied. However, it has been pointed out that the nonlinear filtering properties involved with detrending may induce instabilities in the scaling exponent estimation. To understand this issue, we investigate the adverse effects of the DFA detrending procedure on the statistical estimation. We show that the detrending procedure using piecewise least-squares fitting results in the nonuniformly weighted estimation of the root-mean-square deviation and that this property could induce an increase in the estimation error. In addition, for comparison purposes, we investigate the performance of a centered detrending moving average analysis with a linear detrending filter and sliding window DFA and show that these methods have better performance than the standard DFA.
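For concreteness, a compact implementation of standard DFA with piecewise polynomial detrending, the procedure whose filtering properties are analyzed above; the window sizes and detrending order are illustrative defaults.

```python
import numpy as np

def dfa(x, scales=None, order=1):
    """Standard DFA: integrate the series, remove a piecewise least-squares
    polynomial trend in each non-overlapping window, and return the RMS
    fluctuation F(n) per scale plus the scaling exponent (slope of
    log F(n) vs. log n)."""
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())                          # integrated (profile) series
    if scales is None:
        scales = np.unique(np.logspace(1, np.log10(len(x) // 4), 20).astype(int))
    F = []
    for n in scales:
        n_seg = len(y) // n
        segs = y[:n_seg * n].reshape(n_seg, n)
        t = np.arange(n)
        ms = []
        for seg in segs:
            coef = np.polyfit(t, seg, order)             # piecewise least-squares trend
            ms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(ms)))
    alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
    return np.array(scales), np.array(F), alpha

# White noise should give an exponent close to 0.5
print(dfa(np.random.default_rng(1).standard_normal(2 ** 14))[2])
```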
DOE Office of Scientific and Technical Information (OSTI.GOV)
Begoli, Edmon; Dunning, Ted; Charlie, Frasure
We present a service platform for schema-less exploration of data and discovery of patient-related statistics from healthcare data sets. The architecture of this platform is motivated by the need for fast, schema-less, and flexible approaches to SQL-based exploration and discovery of information embedded in the common, heterogeneously structured healthcare data sets and supporting components (electronic health records, practice management systems, etc.). The motivating use cases described in the paper are clinical trial candidate discovery and treatment effectiveness analysis. Following the use cases, we discuss the key features and software architecture of the platform, the underlying core components (Apache Parquet, Drill, the web services server), and the runtime profiles and performance characteristics of the platform. We conclude by showing dramatic speedup with some approaches, and the performance tradeoffs and limitations of others.
NASA Astrophysics Data System (ADS)
Dori, Yehudit J.
2003-01-01
Matriculation 2000 was a 5-year project aimed at moving from the nationwide traditional examination system in Israel to a school-based alternative embedded assessment. Encompassing 22 high schools from various communities in the country, the Project aimed at fostering deep understanding, higher-order thinking skills, and students' engagement in learning through alternative teaching and embedded assessment methods. This article describes research conducted during the fifth year of the Project at 2 experimental and 2 control schools. The research objective was to investigate students' learning outcomes in chemistry and biology in the Matriculation 2000 Project. The assumption was that alternative embedded assessment has some effect on students' performance. The experimental students scored significantly higher than their control group peers on low-level assignments and more so on assignments that required higher-order thinking skills. The findings indicate that given adequate support and teachers' consent and collaboration, schools can transfer from nationwide or statewide standardized testing to school-based alternative embedded assessment.
On-chip self-assembly of cell embedded microstructures to vascular-like microtubes.
Yue, Tao; Nakajima, Masahiro; Takeuchi, Masaru; Hu, Chengzhi; Huang, Qiang; Fukuda, Toshio
2014-03-21
Currently, research on the construction of vascular-like tubular structures is a hot area of tissue engineering, since it has potential applications in the building of artificial blood vessels. In this paper, we report a fluidic self-assembly method using cell embedded microstructures to construct vascular-like microtubes. A novel 4-layer microfluidic device was fabricated using polydimethylsiloxane (PDMS), which contains fabrication, self-assembly and extraction areas inside one channel. Cell embedded microstructures were directly fabricated using poly(ethylene glycol) diacrylate (PEGDA) in the fabrication area, namely on-chip fabrication. Self-assembly of the fabricated microstructures was performed in the assembly area which has a micro well. Assembled tubular structures (microtubes) were extracted outside the channel into culture dishes using a normally closed (NC) micro valve in the extraction area. The self-assembly mechanism was experimentally demonstrated. The performance of the NC micro valve and embedded cell concentration were both evaluated. Fibroblast (NIH/3T3) embedded vascular-like microtubes were constructed inside this reusable microfluidic device.
Faruki, Hawazin; Mayhew, Gregory M; Fan, Cheng; Wilkerson, Matthew D; Parker, Scott; Kam-Morgan, Lauren; Eisenberg, Marcia; Horten, Bruce; Hayes, D Neil; Perou, Charles M; Lai-Goldman, Myla
2016-06-01
Context: A histologic classification of lung cancer subtypes is essential in guiding therapeutic management. Objective: To complement morphology-based classification of lung tumors, a previously developed lung subtyping panel (LSP) of 57 genes was tested using multiple public fresh-frozen gene-expression data sets and a prospectively collected set of formalin-fixed, paraffin-embedded lung tumor samples. Design: The LSP gene-expression signature was evaluated in multiple lung cancer gene-expression data sets totaling 2177 patients collected from 4 platforms: Illumina RNAseq (San Diego, California), Agilent (Santa Clara, California) and Affymetrix (Santa Clara) microarrays, and quantitative reverse transcription-polymerase chain reaction. Gene centroids were calculated for each of 3 genomic-defined subtypes: adenocarcinoma, squamous cell carcinoma, and neuroendocrine, the latter of which encompassed both small cell carcinoma and carcinoid. Classification by LSP into 3 subtypes was evaluated in both fresh-frozen and formalin-fixed, paraffin-embedded tumor samples, and agreement with the original morphology-based diagnosis was determined. Results: The LSP-based classifications demonstrated overall agreement with the original clinical diagnosis ranging from 78% (251 of 322) to 91% (492 of 538 and 869 of 951) in the fresh-frozen public data sets and 84% (65 of 77) in the formalin-fixed, paraffin-embedded data set. The LSP performance was independent of tissue-preservation method and gene-expression platform. Secondary, blinded pathology review of formalin-fixed, paraffin-embedded samples demonstrated concordance of 82% (63 of 77) with the original morphology diagnosis. Conclusions: The LSP gene-expression signature is a reproducible and objective method for classifying lung tumors and demonstrates good concordance with morphology-based classification across multiple data sets. The LSP panel can supplement morphologic assessment of lung cancers, particularly when classification by standard methods is challenging.
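Centroid-based classification of the kind described can be sketched as follows, assuming an expression matrix restricted to the signature genes; correlating each sample to subtype centroids is a common implementation choice and is not necessarily identical to the LSP classifier.

```python
import numpy as np

def make_centroids(X, labels):
    """Average expression profile (gene centroid) per subtype.
    X: (n_samples, n_genes) expression matrix over the signature genes."""
    labels = np.asarray(labels)
    return {c: X[labels == c].mean(axis=0) for c in np.unique(labels)}

def classify(sample, centroids):
    """Assign a sample to the subtype whose centroid it correlates with most
    (Pearson correlation to each centroid)."""
    scores = {c: np.corrcoef(sample, mu)[0, 1] for c, mu in centroids.items()}
    return max(scores, key=scores.get)

# Usage sketch: centroids = make_centroids(X_train, y_train)
#               predicted = [classify(s, centroids) for s in X_test]
```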
Singularity embedding method in potential flow calculations
NASA Technical Reports Server (NTRS)
Jou, W. H.; Huynh, H.
1982-01-01
The so-called H-type mesh is used in a finite-element (or finite-volume) calculation of the potential flow past an airfoil. Due to coordinate singularity at the leading edge, a special singular trial function is used for the elements neighboring the leading edge. The results using the special singular elements are compared to those using the regular elements. It is found that the unreasonable pressure distribution obtained by the latter is removed by the embedding of the singular element. Suggestions to extend the present method to transonic cases are given.
Ayral, Thomas; Lee, Tsung-Han; Kotliar, Gabriel
2017-12-26
In this paper, we present a unified perspective on dynamical mean-field theory (DMFT), density-matrix embedding theory (DMET), and rotationally invariant slave bosons (RISB). We show that DMET can be regarded as a simplification of the RISB method where the quasiparticle weight is set to unity. Finally, this relation makes it easy to transpose extensions of a given method to another: For instance, a temperature-dependent version of RISB can be used to derive a temperature-dependent free-energy formula for DMET.
A 3-D chimera grid embedding technique
NASA Technical Reports Server (NTRS)
Benek, J. A.; Buning, P. G.; Steger, J. L.
1985-01-01
A three-dimensional (3-D) chimera grid-embedding technique is described. The technique simplifies the construction of computational grids about complex geometries. The method subdivides the physical domain into regions which can accommodate easily generated grids. Communication among the grids is accomplished by interpolation of the dependent variables at grid boundaries. The procedures for constructing the composite mesh and the associated data structures are described. The method is demonstrated by solution of the Euler equations for the transonic flow about a wing/body, wing/body/tail, and a configuration of three ellipsoidal bodies.
A new collage steganographic algorithm using cartoon design
NASA Astrophysics Data System (ADS)
Yi, Shuang; Zhou, Yicong; Pun, Chi-Man; Chen, C. L. Philip
2014-02-01
Existing collage steganographic methods suffer from low payload of embedding messages. To improve the payload while providing a high level of security protection to messages, this paper introduces a new collage steganographic algorithm using cartoon design. It embeds messages into the least significant bits (LSBs) of color cartoon objects, applies different permutations to each object, and adds objects to a cartoon cover image to obtain the stego image. Computer simulations and comparisons demonstrate that the proposed algorithm shows significantly higher capacity of embedding messages compared with existing collage steganographic methods.
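The LSB-writing step common to such schemes can be illustrated in a few lines (object segmentation, per-object permutation, and collage composition are omitted); the array shapes and message length below are arbitrary examples.

```python
import numpy as np

def embed_lsb(carrier, bits):
    """Write a bit string into the least significant bits of a uint8 array."""
    out = carrier.copy().ravel()
    n = len(bits)
    if n > out.size:
        raise ValueError("message too long for this carrier")
    out[:n] = (out[:n] & np.uint8(0xFE)) | np.asarray(bits, dtype=np.uint8)
    return out.reshape(carrier.shape)

def extract_lsb(stego, n_bits):
    """Read the first n_bits least significant bits back out."""
    return stego.ravel()[:n_bits] & 1

rng = np.random.default_rng(0)
cartoon_object = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)  # stand-in object
message = rng.integers(0, 2, size=500, dtype=np.uint8)
stego = embed_lsb(cartoon_object, message)
assert np.array_equal(extract_lsb(stego, 500), message)
```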
NASA Astrophysics Data System (ADS)
Viswanath, Satish; Rosen, Mark; Madabhushi, Anant
2008-03-01
Current techniques for localization of prostatic adenocarcinoma (CaP) via blinded trans-rectal ultrasound biopsy are associated with a high false negative detection rate. While high resolution endorectal in vivo Magnetic Resonance (MR) prostate imaging has been shown to have improved contrast and resolution for CaP detection over ultrasound, similarity in intensity characteristics between benign and cancerous regions on MR images contribute to a high false positive detection rate. In this paper, we present a novel unsupervised segmentation method that employs manifold learning via consensus schemes for detection of cancerous regions from high resolution 1.5 Tesla (T) endorectal in vivo prostate MRI. A significant contribution of this paper is a method to combine multiple weak, lower-dimensional representations of high dimensional feature data in a way analogous to classifier ensemble schemes, and hence create a stable and accurate reduced dimensional representation. After correcting for MR image intensity artifacts, such as bias field inhomogeneity and intensity non-standardness, our algorithm extracts over 350 3D texture features at every spatial location in the MR scene at multiple scales and orientations. Non-linear dimensionality reduction schemes such as Locally Linear Embedding (LLE) and Graph Embedding (GE) are employed to create multiple low dimensional data representations of this high dimensional texture feature space. Our novel consensus embedding method is used to average object adjacencies from within the multiple low dimensional projections so that class relationships are preserved. Unsupervised consensus clustering is then used to partition the objects in this consensus embedding space into distinct classes. Quantitative evaluation on 18 1.5 T prostate MR data against corresponding histology obtained from the multi-site ACRIN trials show a sensitivity of 92.65% and a specificity of 82.06%, which suggests that our method is successfully able to detect suspicious regions in the prostate.
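A simplified stand-in for the consensus idea, not the authors' exact scheme: generate several weak low-dimensional embeddings (here, LLE on random feature subsets), average their pairwise-distance matrices so that object adjacencies are combined, and re-embed the consensus distances with metric MDS. The neighborhood size, number of runs, and choice of LLE are illustrative assumptions.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding, MDS
from scipy.spatial.distance import pdist, squareform

def consensus_embedding(X, n_components=2, n_runs=5, seed=0):
    """Average pairwise distances over several weak low-dimensional projections,
    then re-embed the consensus distance matrix with metric MDS."""
    rng = np.random.default_rng(seed)
    D = np.zeros((len(X), len(X)))
    for _ in range(n_runs):
        cols = rng.choice(X.shape[1], size=max(2, X.shape[1] // 2), replace=False)
        emb = LocallyLinearEmbedding(n_neighbors=10,
                                     n_components=n_components).fit_transform(X[:, cols])
        D += squareform(pdist(emb))          # accumulate object adjacencies
    D /= n_runs
    return MDS(n_components=n_components, dissimilarity="precomputed",
               random_state=seed).fit_transform(D)
```

The consensus coordinates can then be handed to any unsupervised clustering step to partition objects into classes, analogous to the final stage described above.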
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dréan, Gaël; Acosta, Oscar, E-mail: Oscar.Acosta@univ-rennes1.fr; Simon, Antoine
2016-06-15
Purpose: Recent studies revealed a trend toward voxelwise population analysis in order to understand the local dose/toxicity relationships in prostate cancer radiotherapy. Such approaches require, however, an accurate interindividual mapping of the anatomies and 3D dose distributions toward a common coordinate system. This step is challenging due to the high interindividual variability. In this paper, the authors propose a method designed for interindividual nonrigid registration of the rectum and dose mapping for population analysis. Methods: The method is based on the computation of a normalized structural description of the rectum using a Laplacian-based model. This description takes advantage of the tubular structure of the rectum and its centerline to be embedded in a nonrigid registration-based scheme. The performance of the method was evaluated on 30 individuals treated for prostate cancer in a leave-one-out cross validation. Results: Performance was measured using classical metrics (Dice score and Hausdorff distance), along with new metrics devised to better assess dose mapping in relation to structural deformation (dose-organ overlap). Considering these scores, the proposed method outperforms intensity-based and distance maps-based registration methods. Conclusions: The proposed method allows for accurately mapping interindividual 3D dose distributions toward a single anatomical template, opening the way for further voxelwise statistical analysis.
Scanning electron microscopy of bone.
Boyde, Alan
2012-01-01
This chapter describes methods for scanning electron microscopical imaging of bone and bone cells. Backscattered electron (BSE) imaging is by far the most useful in the bone field, followed by secondary electrons (SE) and the energy dispersive X-ray (EDX) analytical modes. This chapter considers preparing and imaging samples of unembedded bone having 3D detail in a 3D surface, topography-free, polished or micromilled, resin-embedded block surfaces, and resin casts of space in bone matrix. The chapter considers methods for fixation, drying, looking at undersides of bone cells, and coating. Maceration with alkaline bacterial pronase, hypochlorite, hydrogen peroxide, and sodium or potassium hydroxide to remove cells and unmineralised matrix is described in detail. Attention is given especially to methods for 3D BSE SEM imaging of bone samples, and recommendations are given for the types of resin embedding of bone suited to BSE imaging. Correlated confocal and SEM imaging of PMMA-embedded bone requires the use of glycerol to coverslip. Cathodoluminescence (CL) mode SEM imaging is an alternative for visualising fluorescent mineralising front labels such as calcein and tetracyclines. Making spatial casts from PMMA or other resin embedded samples is an important use of this material. Correlation with other imaging means, including microradiography and microtomography, is important. Shipping wet bone samples between labs is best done in glycerol. Environmental SEM (ESEM, controlled vacuum mode) is valuable in eliminating "charging" problems, which are common with complex, cancellous bone samples.
NASA Astrophysics Data System (ADS)
Chen, Gui-zhen; Zhang, Sha-sha; Xu, Yun-xiang; Wang, Xiao-yun
2011-11-01
Nuclear Magnetic Resonance (NMR) is a diagnostic method that is non-invasive and involves no ionizing radiation. It is suited not only to structural but also to functional imaging. NMR techniques are developing rapidly in life-science applications, which have become a research hotspot in recent years. Menopausal panic disorder (MPD) is a typical psychosomatic disease of the climacteric period, which may affect physical and mental health. Finding a convenient, effective, and safe method, free of toxic side effects, to control the disease is a pressing medical issue. After reviewing the etiology and pathogenesis of MPD from both traditional Chinese medicine (TCM) and western medicine perspectives, we further analyze the advantages of, and principles for, selecting an acupoint prescription for kidney-tonifying and marrow-benefiting acupoint catgut-embedding therapy for this disease. The application of Nuclear Magnetic Resonance Spectroscopy (NMRS) and Magnetic Resonance Imaging (MRI) technologies in mechanism research on acupoint catgut embedding for the treatment of MPD is discussed. We argue that this intervention is a safe and effective treatment for MPD. Breakthroughs are expected from research on acupoint prescription selection and on the therapeutic mechanism of acupoint catgut embedding for the treatment of menopausal panic disorder, using functional Magnetic Resonance Imaging (fMRI) and metabonomics technologies.
NASA Astrophysics Data System (ADS)
Hibbard-Lubow, David Luke
The demands of digital memory have increased exponentially in recent history, requiring faster, smaller and more accurate storage methods. Two promising solutions to this ever-present problem are Bit Patterned Media (BPM) and Spin-Transfer Torque Magnetic Random Access Memory (STT-MRAM). Producing these technologies requires difficult and expensive fabrication techniques. Thus, the production processes must be optimized to allow these storage methods to compete commercially while continuing to increase their information storage density and reliability. I developed a process for the production of nanomagnetic devices (which can take the form of several types of digital memory) embedded in thin silicon nitride films. My focus was on optimizing the reactive ion etching recipe required to embed the device in the film. Ultimately, I found that recipe 37 (power: 250 W; CF4 nominal/actual flow rate: 25/25.4 sccm; O2 nominal/actual flow rate: 3.1/5.2 sccm; maximum pressure around 400 mTorr) gave the most repeatable and anisotropic results. I successfully used processes described in this thesis to make embedded nanomagnets, which could be used as bit patterned media. Another promising application of this work is to make embedded magnetic tunneling junctions, which are the storage medium used in MRAM. Doing so will still require some tweaks to the fabrication methods. Techniques for making these changes and their potential effects are discussed.
3D printing of highly elastic strain sensors using polyurethane/multiwall carbon nanotube composites
NASA Astrophysics Data System (ADS)
Christ, Josef F.; Hohimer, Cameron J.; Aliheidari, Nahal; Ameli, Amir; Mo, Changki; Pötschke, Petra
2017-04-01
As the desire for wearable electronics increases and the soft robotics industry advances, the need for novel sensing materials has also increased. Recently, there have been many attempts at producing novel materials which exhibit piezoresistive behavior. However, one of the major shortcomings in strain sensing technologies is in the fabrication of such sensors. While there is significant research and literature covering the various methods for developing piezoresistive materials, fabricating complex sensor platforms is still a manufacturing challenge. Here, we report a facile method to fabricate multidirectional embedded strain sensors using additive manufacturing technology. Pure thermoplastic polyurethane (TPU) and TPU/multiwall carbon nanotube (MWCNT) nanocomposites were 3D printed in tandem using a low-cost multi-material FDM printer to fabricate uniaxial and biaxial strain sensors with conductive paths embedded within the insulative TPU platform. The sensors were then subjected to a series of cyclic strain loads. The results revealed excellent piezoresistive responses of the sensors, with cyclic repeatability in both the axial and transverse directions and in response to strains as high as 50%. Further, while strain-softening did occur in the embedded printed strain sensors, it was predictable and similar to the results found in the literature for bulk polymer nanocomposites. This work demonstrates the possibility of manufacturing embedded and multidirectional flexible strain sensors using an inexpensive and versatile method, with potential applications in soft robotics, flexible electronics, and health monitoring.
NASA Astrophysics Data System (ADS)
Chen, Gui-zhen; Zhang, Sha-sha; Xu, Yun-xiang; Wang, Xiao-yun
2012-03-01
Nuclear Magnetic Resonance (NMR) is a diagnostic method that is non-invasive and involves no ionizing radiation. It is suited not only to structural but also to functional imaging. NMR techniques are developing rapidly in life-science applications, which have become a research hotspot in recent years. Menopausal panic disorder (MPD) is a typical psychosomatic disease of the climacteric period, which may affect physical and mental health. Finding a convenient, effective, and safe method, free of toxic side effects, to control the disease is a pressing medical issue. After reviewing the etiology and pathogenesis of MPD from both traditional Chinese medicine (TCM) and western medicine perspectives, we further analyze the advantages of, and principles for, selecting an acupoint prescription for kidney-tonifying and marrow-benefiting acupoint catgut-embedding therapy for this disease. The application of Nuclear Magnetic Resonance Spectroscopy (NMRS) and Magnetic Resonance Imaging (MRI) technologies in mechanism research on acupoint catgut embedding for the treatment of MPD is discussed. We argue that this intervention is a safe and effective treatment for MPD. Breakthroughs are expected from research on acupoint prescription selection and on the therapeutic mechanism of acupoint catgut embedding for the treatment of menopausal panic disorder, using functional Magnetic Resonance Imaging (fMRI) and metabonomics technologies.
Park, Jee Won; Seo, Eun Ji; You, Mi-Ae; Song, Ju-Eun
2016-03-01
Program outcome evaluation is important because it is an indicator of good quality of education. Course-embedded assessment is one of the program outcome evaluation methods. However, it is rarely used in Korean nursing education. The study purpose was to develop and preliminarily apply a course-embedded assessment system to evaluate one program outcome and to share our experiences. This was a methodological study to develop and apply the course-embedded assessment system based on the theoretical framework in one nursing program in South Korea. Scores for 77 students generated from the three practicum courses were used. The course-embedded assessment system was developed following the six steps suggested by Han's model as follows. 1) One program outcome in the undergraduate program, "nursing process application ability", was selected and 2) the three clinical practicum courses related to the selected program outcome were identified. 3) Evaluation tools including a rubric and items were selected for outcome measurement and 4) a performance criterion, the educational goal level for the program, was established. 5) The program outcome was actually evaluated using the rubric and evaluation items in the three practicum courses and 6) the obtained scores were analyzed to identify the achievement rate, which was compared with the performance criterion. Achievement rates for the selected program outcome in the adult, maternity, and pediatric nursing practicums were 98.7%, 100%, and 66.2% for the case report, 100% for all three for the clinical practice, and 100%, 100%, and 87% respectively for the conference. These are considered satisfactory levels when compared with the performance criterion of "at least 60% or more". Course-embedded assessment can be used as an effective and economical method to evaluate a program outcome without running an additional integrative course. Further studies to develop course-embedded assessment systems for other program outcomes in nursing education are needed.
Forman, Dawn; Nicol, Pam; Nicol, Paul
2015-01-01
Adapting to interprofessional education and practice requires a change of perspective for many health professionals. We aimed to explore the potential of scenario planning to bridge the understanding gap and to frame strategic planning for interprofessional education (IPE) and practice (IPP), as well as to implement innovative techniques and technology for large-group scenario planning. A full-day scenario planning workshop incorporating innovative methodology was designed and offered to participants. The 71 participants included academics from nine universities, as well as service providers, government, students and consumer organisations. The outcomes were evaluated by statistical and thematic analysis of a mixed method survey questionnaire. The scenario planning method resulted in a positive response as a means of collaboratively exploring current knowledge and broadening entrenched attitudes. It was perceived to be an effective instrument for framing strategy for the implementation of IPE/IPP, with 81 percent of respondents to a post-workshop survey indicating they would consider using scenario planning in their own organisations. The scenario planning method can be used by tertiary academic institutions as a strategy for developing, implementing and embedding IPE, and for the enculturation of IPP in practice settings.
Evaluation of scaling invariance embedded in short time series.
Pan, Xue; Hou, Lei; Stephen, Mutua; Yang, Huijie; Zhu, Chenping
2014-01-01
Scaling invariance of time series has been making great contributions in diverse research fields. But how to evaluate the scaling exponent from a real-world series is still an open problem. The finite length of a time series may induce unacceptable fluctuation and bias in statistical quantities and the consequent invalidation of currently used standard methods. In this paper a new concept called correlation-dependent balanced estimation of diffusion entropy is developed to evaluate scale invariance in very short time series with length ~10^2. Calculations with specified Hurst exponent values of 0.2, 0.3, ..., 0.9 show that, by using the standard central moving average de-trending procedure, this method can evaluate the scaling exponents for short time series with negligible bias (≤0.03) and a narrow confidence interval (standard deviation ≤0.05). Considering the stride series from ten volunteers along an approximate oval path of a specified length, we observe that though the averages and deviations of the scaling exponents are close, their evolutionary behaviors display rich patterns. The method has potential use in analyzing physiological signals, detecting early warning signals, and so on. We emphasize that our core contribution is that, by means of the proposed method, one can precisely estimate the Shannon entropy from limited records.
Evaluation of Scaling Invariance Embedded in Short Time Series
Pan, Xue; Hou, Lei; Stephen, Mutua; Yang, Huijie; Zhu, Chenping
2014-01-01
Scaling invariance of time series has been making great contributions in diverse research fields. But how to evaluate the scaling exponent from a real-world series is still an open problem. The finite length of a time series may induce unacceptable fluctuation and bias in statistical quantities and the consequent invalidation of currently used standard methods. In this paper a new concept called correlation-dependent balanced estimation of diffusion entropy is developed to evaluate scale invariance in very short time series with length ~10^2. Calculations with specified Hurst exponent values of 0.2, 0.3, ..., 0.9 show that, by using the standard central moving average de-trending procedure, this method can evaluate the scaling exponents for short time series with negligible bias (≤0.03) and a narrow confidence interval (standard deviation ≤0.05). Considering the stride series from ten volunteers along an approximate oval path of a specified length, we observe that though the averages and deviations of the scaling exponents are close, their evolutionary behaviors display rich patterns. The method has potential use in analyzing physiological signals, detecting early warning signals, and so on. We emphasize that our core contribution is that, by means of the proposed method, one can precisely estimate the Shannon entropy from limited records. PMID:25549356
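For illustration, a minimal sketch of standard diffusion entropy analysis, in which the scaling exponent is read off as the slope of the entropy S(t) against ln t; the correlation-dependent balanced estimation proposed above adds bias corrections that are not reproduced here. The bin count and window lengths are arbitrary choices.

```python
import numpy as np

def diffusion_entropy(x, window_lengths, n_bins=40):
    """Standard diffusion entropy analysis (DEA): for each window length t,
    form overlapping-window sums, estimate the Shannon (differential) entropy
    of their distribution, and return S(t).  The scaling exponent delta is the
    slope of S(t) against ln(t)."""
    x = np.asarray(x, dtype=float)
    entropies = []
    for t in window_lengths:
        # Displacements of the diffusion process generated by the series.
        sums = np.array([x[i:i + t].sum() for i in range(len(x) - t + 1)])
        hist, edges = np.histogram(sums, bins=n_bins, density=True)
        width = edges[1] - edges[0]
        p = hist[hist > 0] * width                           # probability per bin
        entropies.append(-(p * np.log(p)).sum() + np.log(width))
    return np.array(entropies)

# Example: the slope of S(t) vs ln(t) for uncorrelated noise should be near 0.5.
rng = np.random.default_rng(0)
x = rng.normal(size=5000)
ts = np.arange(2, 60)
S = diffusion_entropy(x, ts)
delta = np.polyfit(np.log(ts), S, 1)[0]
print(round(delta, 2))
```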
Acidity in DMSO from the embedded cluster integral equation quantum solvation model.
Heil, Jochen; Tomazic, Daniel; Egbers, Simon; Kast, Stefan M
2014-04-01
The embedded cluster reference interaction site model (EC-RISM) is applied to the prediction of acidity constants of organic molecules in dimethyl sulfoxide (DMSO) solution. EC-RISM is based on a self-consistent treatment of the solute's electronic structure and the solvent's structure by coupling quantum-chemical calculations with three-dimensional (3D) RISM integral equation theory. We compare available DMSO force fields with reference calculations obtained using the polarizable continuum model (PCM). The results are evaluated statistically using two different approaches to eliminating the proton contribution: a linear regression model and an analysis of pKa shifts for compound pairs. Suitable levels of theory for the integral equation methodology are benchmarked. The results are further analyzed and illustrated by visualizing solvent site distribution functions and comparing them with an aqueous environment.
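To illustrate the linear-regression evaluation mentioned above, computed deprotonation free energies can be regressed against experimental pKa values so that the constant proton contribution is absorbed into the intercept; all numbers below are made up for demonstration and are not from the paper.

```python
import numpy as np

# Hypothetical computed deprotonation free energies (kcal/mol) and
# experimental pKa values in DMSO; the regression absorbs the constant
# proton contribution into the intercept.
dG_calc = np.array([305.2, 312.8, 318.1, 300.5, 322.4])   # made-up values
pka_exp = np.array([10.3, 15.1, 18.0, 7.9, 20.6])         # made-up values

slope, intercept = np.polyfit(dG_calc, pka_exp, 1)
pka_pred = slope * dG_calc + intercept
rmse = np.sqrt(np.mean((pka_pred - pka_exp) ** 2))
print(f"pKa ~ {slope:.3f}*dG + {intercept:.1f}, RMSE = {rmse:.2f}")
```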
2014-01-01
Background: Determination of fetal aneuploidy is central to the evaluation of recurrent pregnancy loss (RPL). However, obtaining this information at the time of a miscarriage is not always possible or may not have been ordered. Here we report on “rescue karyotyping”, wherein DNA extracted from archived paraffin-embedded pregnancy loss tissue from a prior dilation and curettage (D&C) is evaluated by array-based comparative genomic hybridization (aCGH). Methods: A retrospective case series was conducted at an academic medical center. Patients included had unexplained RPL and a prior pregnancy loss for which karyotype information would be clinically informative but was unavailable. After extracting DNA from slides of archived tissue, aCGH with a reduced stringency approach was performed, allowing for analysis of partially degraded DNA. Statistics were computed using STATA v12.1 (College Station, TX). Results: Rescue karyotyping was attempted on 20 specimens from 17 women. DNA was successfully extracted in 16 samples (80.0%), enabling analysis at either high or low resolution. The longest interval from tissue collection to DNA extraction was 4.2 years. Specimen sufficiency for analysis did not differ significantly by collection-to-extraction interval (p = 0.14) or by gestational age at pregnancy loss (p = 0.32). Eight specimens showed copy number variants: 3 trisomies, 2 partial chromosomal deletions, 1 mosaic abnormality and 2 unclassified variants. Conclusions: Rescue karyotyping using aCGH on DNA extracted from paraffin-embedded tissue provides the opportunity to obtain critical fetal cytogenetic information from a prior loss, even if it occurred years earlier. Given the ubiquitous archiving of paraffin-embedded tissue obtained during a D&C and the ease of obtaining results despite long loss-to-testing intervals or early gestational age at the time of fetal demise, this may provide a useful technique in the evaluation of couples with recurrent pregnancy loss. PMID:24589081
Santhi, B; Dheeptha, B
2016-01-01
The field of telemedicine has gained immense momentum, owing to the need for transmitting patients' information securely. This paper puts forth a unique method for embedding data in medical images, based on edge-based embedding and XOR coding. The algorithm proposes a novel key generation technique that utilizes the design of a sudoku puzzle to enhance the security of the transmitted message. Only the edge blocks of the cover image are utilized to embed the payloads. The least significant bits of the pixel values are changed by XOR coding, depending on the data to be embedded and the key generated. Hence the distortion in the stego image is minimized and the information is retrieved accurately. Data are embedded in the RGB planes of the cover image, thus increasing its embedding capacity. Several image quality measures, including peak signal-to-noise ratio (PSNR), mean square error (MSE), universal image quality index (UIQI) and correlation coefficient (R), have been used to analyze the quality of the stego image. It is evident from the results that the proposed technique outperforms earlier methodologies.
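A minimal sketch of the edge-pixel LSB/XOR idea described above: a gradient-magnitude threshold stands in for the paper's edge-block selection, a seeded pseudo-random bit stream stands in for the sudoku-derived key, and a single grayscale plane stands in for the RGB planes. None of these substitutions reproduce the paper's actual key generation.

```python
import numpy as np

def _edge_pixels(img, edge_thresh):
    # Edge map computed on LSB-zeroed pixels, so that embedding itself
    # cannot change which pixels count as edge pixels.
    base = (img.astype(np.int32) & ~1).astype(float)
    gy, gx = np.gradient(base)
    return np.argwhere(np.hypot(gx, gy) > edge_thresh)

def embed_bits(gray, bits, key_seed=42, edge_thresh=30):
    """Set the LSB of each selected edge pixel to (data bit XOR key bit)."""
    img = gray.astype(np.int32).copy()
    edges = _edge_pixels(img, edge_thresh)
    if len(bits) > len(edges):
        raise ValueError("cover image has too few edge pixels for this payload")
    key = np.random.default_rng(key_seed).integers(0, 2, size=len(bits))
    for (r, c), b, k in zip(edges[:len(bits)], bits, key):
        img[r, c] = (img[r, c] & ~1) | (int(b) ^ int(k))
    return img.astype(gray.dtype)

def extract_bits(stego, n_bits, key_seed=42, edge_thresh=30):
    """Recover the data bits as (stego LSB XOR key bit) along the same edge pixels."""
    edges = _edge_pixels(stego.astype(np.int32), edge_thresh)
    key = np.random.default_rng(key_seed).integers(0, 2, size=n_bits)
    lsbs = np.array([int(stego[r, c]) & 1 for r, c in edges[:n_bits]])
    return lsbs ^ key

# Round trip on a toy grayscale image:
rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
payload = rng.integers(0, 2, size=200)
stego = embed_bits(cover, payload, key_seed=7)
assert np.array_equal(extract_bits(stego, len(payload), key_seed=7), payload)
```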
Embedded optical interconnect technology in data storage systems
NASA Astrophysics Data System (ADS)
Pitwon, Richard C. A.; Hopkins, Ken; Milward, Dave; Muggeridge, Malcolm
2010-05-01
As both data storage interconnect speeds increase and form factors in hard disk drive technologies continue to shrink, the density of printed channels on the storage array midplane goes up. The dominant interconnect protocol on storage array midplanes is expected to increase to 12 Gb/s by 2012 thereby exacerbating the performance bottleneck in future digital data storage systems. The design challenges inherent to modern data storage systems are discussed and an embedded optical infrastructure proposed to mitigate this bottleneck. The proposed solution is based on the deployment of an electro-optical printed circuit board and active interconnect technology. The connection architecture adopted would allow for electronic line cards with active optical edge connectors to be plugged into and unplugged from a passive electro-optical midplane with embedded polymeric waveguides. A demonstration platform has been developed to assess the viability of embedded electro-optical midplane technology in dense data storage systems and successfully demonstrated at 10.3 Gb/s. Active connectors incorporate optical transceiver interfaces operating at 850 nm and are connected in an in-plane coupling configuration to the embedded waveguides in the midplane. In addition a novel method of passively aligning and assembling passive optical devices to embedded polymer waveguide arrays has also been demonstrated.
NMR measurements of gaseous sulfur hexafluoride (SF6) to probe the cross-linking of EPDM rubber.
Terekhov, M; Neutzler, S; Aluas, M; Hoepfel, D; Oellrich, L R
2005-11-01
The effects of embedding gaseous SF6 into EPDM rubber were investigated using NMR methods. It was found that observed sorption and desorption processes follow the behavior of the dual mode sorption model. A strong correlation was found between EPDM cross-linking and transversal relaxation time of embedded SF6. EPDM samples with different cross-link densities, preliminarily determined by 1H transversal relaxation using the Gotlib model and Litvinov's method, were investigated using embedded SF6. The sensitivity of the 19F transversal relaxation rate of SF6 to the EPDM cross-link density variation was found to be at least 10 times higher than for 1H in the polymer chain. First experiments on probing the swelling effects in EPDM due to its contact with polar liquids have been performed.
Wang, Yuhao; Li, Xin; Xu, Kai; Ren, Fengbo; Yu, Hao
2017-04-01
Compressive sensing is widely used in biomedical applications, and the sampling matrix plays a critical role on both quality and power consumption of signal acquisition. It projects a high-dimensional vector of data into a low-dimensional subspace by matrix-vector multiplication. An optimal sampling matrix can ensure accurate data reconstruction and/or high compression ratio. Most existing optimization methods can only produce real-valued embedding matrices that result in large energy consumption during data acquisition. In this paper, we propose an efficient method that finds an optimal Boolean sampling matrix in order to reduce the energy consumption. Compared to random Boolean embedding, our data-driven Boolean sampling matrix can improve the image recovery quality by 9 dB. Moreover, in terms of sampling hardware complexity, it reduces the energy consumption by 4.6× and the silicon area by 1.9× over the data-driven real-valued embedding.
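A minimal sketch of compressive sampling with a Boolean measurement matrix as discussed above: a random {0,1} matrix compresses a sparse signal and a standard sparse solver reconstructs it. The random matrix stands in for the paper's data-driven optimized matrix, so the recovery quality here only illustrates the mechanics.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 256, 96, 8                      # signal length, measurements, sparsity

# Sparse test signal.
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.normal(size=k)

# Random Boolean sampling matrix (the paper optimizes this matrix in a
# data-driven way; a random {0,1} matrix is used here only for illustration).
Phi = rng.integers(0, 2, size=(m, n)).astype(float)
y = Phi @ x                               # low-dimensional measurements

# Recover the sparse signal with orthogonal matching pursuit.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k).fit(Phi, y)
x_hat = omp.coef_
print("relative recovery error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```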
NASA Astrophysics Data System (ADS)
Ohnuma, Hidetoshi; Kawahira, Hiroichi
1998-09-01
An automatic alternative phase shift mask (PSM) pattern layout tool has been newly developed. The tool is dedicated to embedded DRAM in logic devices, shrinking gate line width while improving line-width controllability in lithography processes with design rules below 0.18 micrometers using KrF excimer laser exposure. The tool creates Levenson-type PSMs that are used in combination with a binary mask in a double-exposure method for positive photoresist. Using graphs, the tool automatically creates alternative PSM patterns and, moreover, does not produce any phase conflicts. By applying it to actual embedded DRAM in logic cells, we obtained 0.16 micrometer gate resist patterns in both the random logic and DRAM areas. The patterns were fabricated using two masks with the double-exposure method. Gate line width was well controlled within a practical exposure-focus window.
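The graph-based step can be illustrated with a textbook sketch, not the tool's actual algorithm: features that must receive opposite phases form the edges of a conflict graph, and a conflict-free 0°/180° assignment exists exactly when that graph is bipartite (2-colorable).

```python
from collections import deque

def assign_phases(features, conflicts):
    """Assign 0/180 degree phases so that every conflicting pair of features
    gets opposite phases.  Returns a dict feature -> phase, or None if the
    conflict graph is not bipartite (i.e., a phase conflict is unavoidable)."""
    adj = {f: [] for f in features}
    for a, b in conflicts:
        adj[a].append(b)
        adj[b].append(a)
    phase = {}
    for start in features:
        if start in phase:
            continue
        phase[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in phase:
                    phase[v] = 180 - phase[u]   # opposite phase
                    queue.append(v)
                elif phase[v] == phase[u]:
                    return None                  # odd cycle: unresolvable conflict
    return phase

# Example: three gate lines where g1-g2 and g2-g3 are closely spaced.
print(assign_phases(["g1", "g2", "g3"], [("g1", "g2"), ("g2", "g3")]))
# {'g1': 0, 'g2': 180, 'g3': 0}
```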
Reversible integer wavelet transform for blind image hiding method
Bibi, Nargis; Mahmood, Zahid; Akram, Tallha; Naqvi, Syed Rameez
2017-01-01
In this article, a blind, reversible data-hiding method for embedding secret data into a cover image is proposed. The key advantage of this work is that it addresses the privacy and secrecy issues raised during data transmission over the internet. First, the data are decomposed into sub-bands using integer wavelets. For the decomposition, the Fresnelet transform is utilized, which encrypts the secret data by choosing a unique key parameter to construct a dummy pattern. The dummy pattern is then embedded into an approximation sub-band of the cover image. The proposed method achieves high capacity and strong imperceptibility of the embedded secret data. With the use of the family of integer wavelets, the proposed approach becomes more efficient for the hiding and retrieval process. It retrieves the hidden secret data blindly, without requiring the original cover image. PMID:28498855
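For reference, the simplest reversible integer wavelet, the Haar/S-transform implemented by lifting, shows why such decompositions allow exact recovery of the cover image; the Fresnelet-based encryption step described above is not reproduced here.

```python
def int_haar_forward(a, b):
    """Integer Haar (S-transform) lifting step: maps an integer pair to an
    integer approximation s and detail d, with exact reversibility."""
    d = a - b
    s = b + (d // 2)        # equals floor((a + b) / 2)
    return s, d

def int_haar_inverse(s, d):
    b = s - (d // 2)
    a = b + d
    return a, b

# The round trip is exact for any integers, which is what makes the hiding
# scheme reversible at the transform level.
for a, b in [(7, 3), (-5, 12), (255, 0)]:
    s, d = int_haar_forward(a, b)
    assert int_haar_inverse(s, d) == (a, b)
```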
Recovering an elastic obstacle containing embedded objects by the acoustic far-field measurements
NASA Astrophysics Data System (ADS)
Qu, Fenglong; Yang, Jiaqing; Zhang, Bo
2018-01-01
Consider the inverse scattering problem of time-harmonic acoustic waves by a 3D bounded elastic obstacle which may contain embedded impenetrable obstacles inside. We propose a novel and simple technique to show that the elastic obstacle can be uniquely recovered from the acoustic far-field pattern at a fixed frequency, regardless of its contents. Our method is based on constructing a well-posed modified interior transmission problem on a small domain and makes use of an a priori estimate for both the acoustic and elastic wave fields in the usual H^1-norm. In the case when there is no obstacle embedded inside the elastic body, our method gives a much simpler proof of the uniqueness result obtained previously in the literature (Natroshvili et al 2000 Rend. Mat. Serie VII 20 57-92; Monk and Selgas 2009 Inverse Problems Imaging 3 173-98).
Forensic steganalysis: determining the stego key in spatial domain steganography
NASA Astrophysics Data System (ADS)
Fridrich, Jessica; Goljan, Miroslav; Soukal, David; Holotyak, Taras
2005-03-01
This paper is an extension of our work on stego key search for JPEG images published at EI SPIE in 2004. We provide a more general theoretical description of the methodology, apply our approach to the spatial domain, and add a method that determines the stego key from multiple images. We show that in the spatial domain the stego key search can be made significantly more efficient by working with the noise component of the image obtained using a denoising filter. The technique is tested on the LSB embedding paradigm and on a special case of embedding by noise adding (the ±1 embedding). The stego key search can be performed for a wide class of steganographic techniques, even for secret message sizes well below those detectable using known methods. The proposed strategy may prove useful to forensic analysts and law enforcement.
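A minimal sketch of the key-dependent embedding path such attacks target: the stego key seeds a pseudo-random permutation of pixel indices and message bits are written to the LSBs along that path, so only the correct key reproduces a non-random bit sequence on extraction. The PRNG and function names are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def embedding_path(n_pixels, stego_key):
    """Pseudo-random visiting order of pixel indices derived from the key."""
    rng = np.random.default_rng(stego_key)
    return rng.permutation(n_pixels)

def lsb_embed(pixels, message_bits, stego_key):
    out = pixels.copy()
    path = embedding_path(len(pixels), stego_key)
    for idx, bit in zip(path, message_bits):
        out[idx] = (out[idx] & ~1) | int(bit)
    return out

def lsb_extract(pixels, n_bits, candidate_key):
    path = embedding_path(len(pixels), candidate_key)
    return np.array([int(pixels[i]) & 1 for i in path[:n_bits]])

# With the correct key the extracted bits reproduce the message; with a wrong
# key they look like random LSB noise -- the statistical handle for key search.
rng = np.random.default_rng(1)
cover = rng.integers(0, 256, size=10_000).astype(np.int64)
msg = rng.integers(0, 2, size=2_000)
stego = lsb_embed(cover, msg, stego_key=1234)
assert np.array_equal(lsb_extract(stego, len(msg), 1234), msg)
```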
Embedded diagnostic, prognostic, and health management system and method for a humanoid robot
NASA Technical Reports Server (NTRS)
Barajas, Leandro G. (Inventor); Strawser, Philip A (Inventor); Sanders, Adam M (Inventor); Reiland, Matthew J (Inventor)
2013-01-01
A robotic system includes a humanoid robot with multiple compliant joints, each moveable using one or more of the actuators, and having sensors for measuring control and feedback data. A distributed controller controls the joints and other integrated system components over multiple high-speed communication networks. Diagnostic, prognostic, and health management (DPHM) modules are embedded within the robot at the various control levels. Each DPHM module measures, controls, and records DPHM data for the respective control level/connected device in a location that is accessible over the networks or via an external device. A method of controlling the robot includes embedding a plurality of the DPHM modules within multiple control levels of the distributed controller, using the DPHM modules to measure DPHM data within each of the control levels, and recording the DPHM data in a location that is accessible over at least one of the high-speed communication networks.
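A conceptual sketch, not the patented implementation, of what a DPHM module at one control level might record: timestamped health metrics kept locally and exposed for retrieval over a network or an external device. All names and metrics are hypothetical.

```python
import time

class DPHMModule:
    """Conceptual diagnostic/prognostic/health-management record keeper for one
    control level (e.g., a joint controller).  Illustrative only."""
    def __init__(self, level_name):
        self.level_name = level_name
        self.records = []                      # stored locally, readable remotely

    def log(self, metric, value):
        self.records.append({"t": time.time(), "level": self.level_name,
                             "metric": metric, "value": value})

    def snapshot(self):
        """Data made accessible to the networks or an external test device."""
        return list(self.records)

joint_dphm = DPHMModule("left_elbow_joint")
joint_dphm.log("motor_temperature_C", 41.7)
joint_dphm.log("torque_error_Nm", 0.03)
print(joint_dphm.snapshot())
```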
Non-standard analysis and embedded software
NASA Technical Reports Server (NTRS)
Platek, Richard
1995-01-01
One model for computing in the future is ubiquitous, embedded computational devices analogous to embedded electrical motors. Many of these computers will control physical objects and processes. Such hidden computerized environments introduce new safety and correctness concerns whose treatment goes beyond present Formal Methods. In particular, one has to begin to speak about Real Space software in analogy with Real Time software. By this we mean computerized systems which have to meet requirements expressed in the real geometry of space. How to translate such requirements into ordinary software specifications and how to carry out proofs is a major challenge. In this talk we propose a research program based on the use of non-standard analysis. Much detail remains to be worked out. The purpose of the talk is to inform the Formal Methods community that Non-Standard Analysis provides a possible avenue of attack which we believe will be fruitful.
Gear fatigue crack prognosis using embedded model, gear dynamic model and fracture mechanics
NASA Astrophysics Data System (ADS)
Li, C. James; Lee, Hyungdae
2005-07-01
This paper presents a model-based method that predicts the remaining useful life of a gear with a fatigue crack. The method consists of an embedded model to identify gear meshing stiffness from measured gear torsional vibration; an inverse method to estimate crack size from the estimated meshing stiffness; a gear dynamic model to simulate gear meshing dynamics and determine the dynamic load on the cracked tooth; and a fast crack propagation model to forecast the remaining useful life based on the estimated crack size and dynamic load. The fast crack propagation model was established to avoid repeated finite-element (FEM) calculations and to facilitate field deployment of the proposed method. Experimental studies were conducted to validate and demonstrate the feasibility of the proposed method for prognosis of a cracked gear.
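For illustration, a generic Paris-law crack-growth integration of the kind such a prognosis step relies on: starting from the estimated crack size under the estimated dynamic load, cycles are accumulated until the crack reaches a critical size. The material constants and geometry factor are placeholders, and this is not the paper's fast crack propagation model.

```python
import math

def remaining_useful_life(a0, a_crit, delta_sigma, C=1e-11, m=3.0, Y=1.1, da=1e-5):
    """Integrate the Paris law da/dN = C * (dK)^m from the estimated crack size
    a0 (m) to the critical size a_crit (m) under stress range delta_sigma (MPa).
    Returns the predicted number of remaining load cycles.  All constants here
    are illustrative placeholders."""
    a, cycles = a0, 0.0
    while a < a_crit:
        delta_K = Y * delta_sigma * math.sqrt(math.pi * a)   # MPa*sqrt(m)
        cycles += da / (C * delta_K ** m)                    # dN = da / (C dK^m)
        a += da
    return cycles

# Example: a 0.5 mm crack growing to a 5 mm critical size under a 120 MPa range.
print(f"{remaining_useful_life(0.5e-3, 5e-3, 120.0):.2e} cycles")
```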
New technologies for supporting real-time on-board software development
NASA Astrophysics Data System (ADS)
Kerridge, D.
1995-03-01
The next generation of on-board data management systems will be significantly more complex than current designs, and will be required to perform more complex and demanding tasks in software. Improved hardware technology, in the form of the MA31750 radiation hard processor, is one key component in addressing the needs of future embedded systems. However, to complement these hardware advances, improved support for the design and implementation of real-time data management software is now needed. This will help to control the cost and risk associated with developing data management software as it becomes an increasingly significant element within embedded systems. One particular problem with developing embedded software is managing the non-functional requirements in a systematic way. This paper identifies how Logica has exploited recent developments in hard real-time theory to address this problem through the use of new hard real-time analysis and design methods which can be supported by specialized tools. The first stage in transferring this technology from the research domain to industrial application has already been completed. The MA31750 Hard Real-Time Embedded Software Support Environment (HESSE) is a loosely integrated set of hardware and software tools which directly support the process of hard real-time analysis for software targeting the MA31750 processor. With further development, HESSE promises to provide embedded system developers with software tools which can reduce the risks associated with developing complex hard real-time software. Supported in this way by more sophisticated software methods and tools, it is foreseen that MA31750-based embedded systems can meet the processing needs of the next generation of on-board data management systems.
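As a generic illustration of the kind of hard real-time analysis such tools support (not HESSE's actual method), the classic fixed-priority response-time test iterates R_i = C_i + sum over higher-priority tasks of ceil(R_i/T_j)*C_j to a fixed point and compares the result with each task's deadline.

```python
import math

def response_times(tasks):
    """Classic fixed-priority response-time analysis.
    tasks: list of (C, T, D) = (worst-case execution time, period, deadline),
    ordered from highest to lowest priority.  Returns the worst-case response
    time per task, or None where the task is unschedulable."""
    results = []
    for i, (C_i, T_i, D_i) in enumerate(tasks):
        R = C_i
        while True:
            interference = sum(math.ceil(R / T_j) * C_j for C_j, T_j, _ in tasks[:i])
            R_next = C_i + interference
            if R_next > D_i:          # misses its deadline: unschedulable
                results.append(None)
                break
            if R_next == R:           # fixed point reached
                results.append(R)
                break
            R = R_next
    return results

# Example task set (times in ms): all response times stay within deadlines.
print(response_times([(1, 5, 5), (2, 10, 10), (4, 20, 20)]))  # [1, 3, 8]
```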
Embedded Piezoresistive Microcantilever Sensors for Chemical and Biological Sensing
NASA Astrophysics Data System (ADS)
Porter, Timothy; Eastman, Michael; Kooser, Ara; Manygoats, Kevin; Zhine, Rosalie
2003-03-01
Microcantilever sensors based on embedded piezoresistive technology offer a promising, low-cost method of sensing chemical and biological species. Here, we present data on the detection of various gaseous analytes, including volatile organic compounds (VOCs) and carbon monoxide. We have also used these sensors to detect bovine serum albumin (BSA), a protein important in the study of human childhood diabetes.