Sample records for set embedding scheme

  1. A new approach for embedding causal sets into Minkowski space

    NASA Astrophysics Data System (ADS)

    Liu, He; Reid, David D.

    2018-06-01

    This paper reports on recent work toward an approach for embedding causal sets into two-dimensional Minkowski space. The main new feature of the present scheme is its use of the spacelike distance measure to construct an ordering of causal set elements within anti-chains of a causal set as an aid to the embedding procedure.

  2. Communication: Density functional theory embedding with the orthogonality constrained basis set expansion procedure

    NASA Astrophysics Data System (ADS)

    Culpitt, Tanner; Brorsen, Kurt R.; Hammes-Schiffer, Sharon

    2017-06-01

    Density functional theory (DFT) embedding approaches have generated considerable interest in the field of computational chemistry because they enable calculations on larger systems by treating subsystems at different levels of theory. To circumvent the calculation of the non-additive kinetic potential, various projector methods have been developed to ensure the orthogonality of molecular orbitals between subsystems. Herein the orthogonality constrained basis set expansion (OCBSE) procedure is implemented to enforce this subsystem orbital orthogonality without requiring a level shifting parameter. This scheme is a simple alternative to existing parameter-free projector-based schemes, such as the Huzinaga equation. The main advantage of the OCBSE procedure is that excellent convergence behavior is attained for DFT-in-DFT embedding without freezing any of the subsystem densities. For the three chemical systems studied, the level of accuracy is comparable to or higher than that obtained with the Huzinaga scheme with frozen subsystem densities. Allowing both the high-level and low-level DFT densities to respond to each other during DFT-in-DFT embedding calculations provides more flexibility and renders this approach more generally applicable to chemical systems. It could also be useful for future extensions to embedding approaches combining wavefunction theories and DFT.

  3. Permutation entropy with vector embedding delays

    NASA Astrophysics Data System (ADS)

    Little, Douglas J.; Kane, Deb M.

    2017-12-01

    Permutation entropy (PE) is a statistic used widely for the detection of structure within a time series. Embedding delay times at which the PE is reduced are characteristic timescales for which such structure exists. Here, a generalized scheme is investigated where embedding delays are represented by vectors rather than scalars, permitting PE to be calculated over a (D-1)-dimensional space, where D is the embedding dimension. This scheme is applied to numerically generated noise, sine wave and logistic map series, and experimental data sets taken from a vertical-cavity surface emitting laser exhibiting temporally localized pulse structures within the round-trip time of the laser cavity. Results are visualized as PE maps as a function of embedding delay, with low PE values indicating combinations of embedding delays where correlation structure is present. It is demonstrated that vector embedding delays enable identification of structure that is ambiguous or masked when the embedding delay is constrained to scalar form.
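
    A minimal sketch of permutation entropy with a vector of embedding delays. The delay vector is assumed here to give the gaps between successive coordinates of each embedding vector; the paper's exact convention may differ, and the delay ranges below are illustrative only.

    ```python
    import numpy as np
    from math import factorial, log

    def permutation_entropy(x, delays):
        # `delays` holds D-1 positive integer lags, taken here as the gaps
        # between successive coordinates of each embedding vector (assumption);
        # D = len(delays) + 1 is the embedding dimension.
        x = np.asarray(x, dtype=float)
        offsets = np.concatenate(([0], np.cumsum(delays)))
        n_vec = len(x) - int(offsets[-1])
        if n_vec <= 0:
            raise ValueError("time series too short for these delays")
        vectors = np.column_stack([x[o:o + n_vec] for o in offsets])
        patterns = np.argsort(vectors, axis=1)              # ordinal pattern per vector
        _, counts = np.unique(patterns, axis=0, return_counts=True)
        p = counts / counts.sum()
        D = len(offsets)
        return -np.sum(p * np.log(p)) / log(factorial(D))   # normalised to [0, 1]

    # A "PE map" for D = 3: scan a grid of delay pairs and look for low-PE regions.
    rng = np.random.default_rng(0)
    series = np.sin(0.2 * np.arange(5000)) + 0.1 * rng.standard_normal(5000)
    pe_map = np.array([[permutation_entropy(series, (d1, d2))
                        for d2 in range(1, 16)] for d1 in range(1, 16)])
    ```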

  4. Geminal embedding scheme for optimal atomic basis set construction in correlated calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sorella, S., E-mail: sorella@sissa.it; Devaux, N.; Dagrada, M., E-mail: mario.dagrada@impmc.upmc.fr

    2015-12-28

    We introduce an efficient method to construct optimal and system adaptive basis sets for use in electronic structure and quantum Monte Carlo calculations. The method is based on an embedding scheme in which a reference atom is singled out from its environment, while the entire system (atom and environment) is described by a Slater determinant or its antisymmetrized geminal power (AGP) extension. The embedding procedure described here allows for the systematic and consistent contraction of the primitive basis set into geminal embedded orbitals (GEOs), with a dramatic reduction of the number of variational parameters necessary to represent the many-body wave function, for a chosen target accuracy. Within the variational Monte Carlo method, the Slater or AGP part is determined by a variational minimization of the energy of the whole system in the presence of a flexible and accurate Jastrow factor, representing most of the dynamical electronic correlation. The resulting GEO basis set opens the way for a fully controlled optimization of many-body wave functions in electronic structure calculations of bulk materials, namely, containing a large number of electrons and atoms. We present applications on the water molecule, the volume collapse transition in cerium, and the high-pressure liquid hydrogen.

  5. Embedding multiple watermarks in the DFT domain using low- and high-frequency bands

    NASA Astrophysics Data System (ADS)

    Ganic, Emir; Dexter, Scott D.; Eskicioglu, Ahmet M.

    2005-03-01

    Although semi-blind and blind watermarking schemes based on Discrete Cosine Transform (DCT) or Discrete Wavelet Transform (DWT) are robust to a number of attacks, they fail in the presence of geometric attacks such as rotation, scaling, and translation. The Discrete Fourier Transform (DFT) of a real image is conjugate symmetric, resulting in a symmetric DFT spectrum. Because of this property, the popularity of DFT-based watermarking has increased in the last few years. In a recent paper, we generalized a circular watermarking idea to embed multiple watermarks in lower and higher frequencies. Nevertheless, a circular watermark is visible in the DFT domain, providing a potential hacker with valuable information about the location of the watermark. In this paper, our focus is on embedding multiple watermarks that are not visible in the DFT domain. Using several frequency bands increases the overall robustness of the proposed watermarking scheme. Specifically, our experiments show that the watermark embedded in lower frequencies is robust to one set of attacks, and the watermark embedded in higher frequencies is robust to a different set of attacks.
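
    A hedged sketch of the general idea of embedding two watermarks in separate DFT frequency bands: the magnitude is modulated inside a low-frequency and a high-frequency ring while the phase is preserved. The ring radii, strength `alpha`, and multiplicative rule are illustrative assumptions, not the paper's exact circular-watermark construction.

    ```python
    import numpy as np

    def embed_dft_bands(img, wm_low, wm_high, band_low=(20, 30), band_high=(80, 90), alpha=0.1):
        # Modulate the centred DFT magnitude inside two annular bands.
        F = np.fft.fftshift(np.fft.fft2(img))
        mag, phase = np.abs(F), np.angle(F)
        cy, cx = img.shape[0] // 2, img.shape[1] // 2
        yy, xx = np.indices(img.shape)
        rad = np.hypot(yy - cy, xx - cx)
        for (r0, r1), wm in ((band_low, wm_low), (band_high, wm_high)):
            ring = (rad >= r0) & (rad < r1)
            mag[ring] *= 1.0 + alpha * np.resize(wm, ring.sum())
        marked = np.fft.ifft2(np.fft.ifftshift(mag * np.exp(1j * phase)))
        # The modified spectrum is no longer exactly conjugate-symmetric,
        # so keep the real part of the inverse transform.
        return marked.real

    # Detection (non-blind sketch) would correlate the magnitude change in each
    # ring against the corresponding pseudo-random watermark sequence.
    ```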

  6. Effective scheme for partitioning covalent bonds in density-functional embedding theory: From molecules to extended covalent systems.

    PubMed

    Huang, Chen; Muñoz-García, Ana Belén; Pavone, Michele

    2016-12-28

    Density-functional embedding theory provides a general way to perform multi-physics quantum mechanics simulations of large-scale materials by dividing the total system's electron density into a cluster's density and its environment's density. It is then possible to compute the accurate local electronic structures and energetics of the embedded cluster with high-level methods, meanwhile retaining a low-level description of the environment. The prerequisite step in the density-functional embedding theory is the cluster definition. In covalent systems, cutting across the covalent bonds that connect the cluster and its environment leads to dangling bonds (unpaired electrons). These represent a major obstacle for the application of density-functional embedding theory to study extended covalent systems. In this work, we developed a simple scheme to define the cluster in covalent systems. Instead of cutting covalent bonds, we directly split the boundary atoms for maintaining the valency of the cluster. With this new covalent embedding scheme, we compute the dehydrogenation energies of several different molecules, as well as the binding energy of a cobalt atom on graphene. Well localized cluster densities are observed, which can facilitate the use of localized basis sets in high-level calculations. The results are found to converge faster with the embedding method than the other multi-physics approach ONIOM. This work paves the way to perform the density-functional embedding simulations of heterogeneous systems in which different types of chemical bonds are present.

  7. Quantum annealing correction with minor embedding

    NASA Astrophysics Data System (ADS)

    Vinci, Walter; Albash, Tameem; Paz-Silva, Gerardo; Hen, Itay; Lidar, Daniel A.

    2015-10-01

    Quantum annealing provides a promising route for the development of quantum optimization devices, but the usefulness of such devices will be limited in part by the range of implementable problems as dictated by hardware constraints. To overcome constraints imposed by restricted connectivity between qubits, a larger set of interactions can be approximated using minor embedding techniques whereby several physical qubits are used to represent a single logical qubit. However, minor embedding introduces new types of errors due to its approximate nature. We introduce and study quantum annealing correction schemes designed to improve the performance of quantum annealers in conjunction with minor embedding, thus leading to a hybrid scheme defined over an encoded graph. We argue that this scheme can be efficiently decoded using an energy minimization technique provided the density of errors does not exceed the per-site percolation threshold of the encoded graph. We test the hybrid scheme using a D-Wave Two processor on problems for which the encoded graph is a two-level grid and the Ising model is known to be NP-hard. The problems we consider are frustrated Ising model problem instances with "planted" (a priori known) solutions. Applied in conjunction with optimized energy penalties and decoding techniques, we find that this approach enables the quantum annealer to solve minor embedded instances with significantly higher success probability than it would without error correction. Our work demonstrates that quantum annealing correction can and should be used to improve the robustness of quantum annealing not only for natively embeddable problems but also when minor embedding is used to extend the connectivity of physical devices.

  8. Diversification of Processors Based on Redundancy in Instruction Set

    NASA Astrophysics Data System (ADS)

    Ichikawa, Shuichi; Sawada, Takashi; Hata, Hisashi

    By diversifying processor architecture, computer software is expected to be more resistant to plagiarism, analysis, and attacks. This study presents a new method to diversify instruction set architecture (ISA) by utilizing the redundancy in the instruction set. Our method is particularly suited for embedded systems implemented with FPGA technology, and realizes a genuine instruction set randomization, which has not been provided by the preceding studies. The evaluation results on four typical ISAs indicate that our scheme can provide a far larger degree of freedom than the preceding studies. Diversified processors based on MIPS architecture were actually implemented and evaluated with Xilinx Spartan-3 FPGA. The increase of logic scale was modest: 5.1% in Specialized design and 3.6% in RAM-mapped design. The performance overhead was also modest: 3.4% in Specialized design and 11.6% in RAM-mapped design. From these results, our scheme is regarded as a practical and promising way to secure FPGA-based embedded systems.

  9. An RGB colour image steganography scheme using overlapping block-based pixel-value differencing

    PubMed Central

    Pal, Arup Kumar

    2017-01-01

    This paper presents a steganographic scheme based on the RGB colour cover image. The secret message bits are embedded into each colour pixel sequentially by the pixel-value differencing (PVD) technique. PVD basically works on two consecutive non-overlapping components; as a result, the straightforward conventional PVD technique is not applicable for embedding the secret message bits into a colour pixel, since a colour pixel consists of three colour components, i.e. red, green and blue. Hence, in the proposed scheme, the three colour components are first arranged into two overlapping blocks: one combining the red and green components, and the other combining the green and blue components. Later, the PVD technique is employed on each block independently to embed the secret data. The two overlapping blocks are readjusted to attain the modified three colour components. The notion of overlapping blocks has improved the embedding capacity of the cover image. The scheme has been tested on a set of colour images and satisfactory results have been achieved in terms of embedding capacity and upholding the acceptable visual quality of the stego-image. PMID:28484623
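
    A simplified sketch of the single-pair PVD step (Wu-Tsai style) that such a scheme builds on, assuming a fixed range table and omitting the fall-off-boundary check; per the abstract, each colour pixel would be processed as two overlapping pairs, (R, G) and (G, B), with the shared green value readjusted afterwards.

    ```python
    import math

    RANGES = [(0, 7), (8, 15), (16, 31), (32, 63), (64, 127), (128, 255)]

    def pvd_embed_pair(p1, p2, bits):
        # Embed leading bits of the string `bits` into one pair of 8-bit values.
        # Simplified: results may need clamping to [0, 255] in a full scheme.
        d = abs(p2 - p1)
        lower, upper = next(r for r in RANGES if r[0] <= d <= r[1])
        n = int(math.log2(upper - lower + 1))        # bits this pair can carry
        chunk, rest = bits[:n], bits[n:]
        d_new = lower + (int(chunk, 2) if chunk else 0)
        m = d_new - d                                # required change in difference
        if p2 >= p1:
            p1_new, p2_new = p1 - math.ceil(m / 2), p2 + math.floor(m / 2)
        else:
            p1_new, p2_new = p1 + math.ceil(m / 2), p2 - math.floor(m / 2)
        return p1_new, p2_new, rest

    # Overlapping-block usage on one pixel (values and payload are illustrative):
    remaining = "1011001"
    r, g, remaining = pvd_embed_pair(120, 135, remaining)   # (R, G) block
    g, b, remaining = pvd_embed_pair(g, 60, remaining)      # (G, B) block
    ```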

  10. Embedded WENO: A design strategy to improve existing WENO schemes

    NASA Astrophysics Data System (ADS)

    van Lith, Bart S.; ten Thije Boonkkamp, Jan H. M.; IJzerman, Wilbert L.

    2017-02-01

    Embedded WENO methods utilise all adjacent smooth substencils to construct a desirable interpolation. Conventional WENO schemes under-use this possibility close to large gradients or discontinuities. We develop a general approach for constructing embedded versions of existing WENO schemes. Embedded methods based on the WENO schemes of Jiang and Shu [1] and on the WENO-Z scheme of Borges et al. [2] are explicitly constructed. Several possible choices are presented that result in either better spectral properties or a higher order of convergence for sufficiently smooth solutions. Moreover, these improvements carry over to discontinuous solutions. The embedded methods are demonstrated to be indeed improvements over their standard counterparts by several numerical examples. All the embedded methods presented incur no added computational effort compared to their standard counterparts.
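
    For reference, a sketch of the classical Jiang-Shu WENO5 reconstruction that embedded variants build on: the fifth-order left-biased interface value with the standard smoothness indicators and linear weights. The embedded weight definitions themselves are in the paper and are not reproduced here.

    ```python
    import numpy as np

    def weno5_reconstruct(f, eps=1e-6):
        # Left-biased WENO5-JS value at x_{i+1/2} from the five-point stencil
        # f = (f_{i-2}, f_{i-1}, f_i, f_{i+1}, f_{i+2}).
        fm2, fm1, f0, fp1, fp2 = f
        # candidate third-order reconstructions on the three substencils
        q0 = (2 * fm2 - 7 * fm1 + 11 * f0) / 6.0
        q1 = (-fm1 + 5 * f0 + 2 * fp1) / 6.0
        q2 = (2 * f0 + 5 * fp1 - fp2) / 6.0
        # Jiang-Shu smoothness indicators
        b0 = 13.0 / 12.0 * (fm2 - 2 * fm1 + f0) ** 2 + 0.25 * (fm2 - 4 * fm1 + 3 * f0) ** 2
        b1 = 13.0 / 12.0 * (fm1 - 2 * f0 + fp1) ** 2 + 0.25 * (fm1 - fp1) ** 2
        b2 = 13.0 / 12.0 * (f0 - 2 * fp1 + fp2) ** 2 + 0.25 * (3 * f0 - 4 * fp1 + fp2) ** 2
        d = np.array([0.1, 0.6, 0.3])                  # ideal (linear) weights
        alpha = d / (eps + np.array([b0, b1, b2])) ** 2
        w = alpha / alpha.sum()                        # nonlinear weights
        return w[0] * q0 + w[1] * q1 + w[2] * q2
    ```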

  11. Content-independent embedding scheme for multi-modal medical image watermarking.

    PubMed

    Nyeem, Hussain; Boles, Wageeh; Boyd, Colin

    2015-02-04

    As the increasing adoption of information technology continues to offer better distant medical services, the distribution of, and remote access to, digital medical images over public networks continue to grow significantly. Such use of medical images raises serious concerns for their continuous security protection, which digital watermarking has shown great potential to address. We present a content-independent embedding scheme for medical image watermarking. We observe that the perceptual content of medical images varies widely with their modalities. Recent medical image watermarking schemes are image-content dependent and thus they may suffer from inconsistent embedding capacity and visual artefacts. To attain the image content-independent embedding property, we generalise the RONI (region of non-interest to the medical professionals) selection process and use it for embedding by utilising RONI's least significant bit-planes. The proposed scheme thus avoids the need for RONI segmentation that incurs capacity and computational overheads. Our experimental results demonstrate that the proposed embedding scheme performs consistently over a dataset of 370 medical images including their 7 different modalities. Experimental results also verify how the state-of-the-art reversible schemes can have an inconsistent performance for different modalities of medical images. Our scheme has MSSIM (Mean Structural SIMilarity) larger than 0.999 with a deterministically adaptable embedding capacity. Our proposed image-content independent embedding scheme is modality-wise consistent, and maintains a good image quality of RONI while keeping all other pixels in the image untouched. Thus, with an appropriate watermarking framework (i.e., with the considerations of watermark generation, embedding and detection functions), our proposed scheme can be viable for the multi-modality medical image applications and distant medical services such as teleradiology and eHealth.
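
    A hedged sketch of RONI bit-plane embedding: a fixed image border stands in for the RONI (the paper generalises RONI selection across modalities), and the payload is written into the RONI's least-significant bit-planes while all other pixels stay untouched. Function and parameter names are illustrative.

    ```python
    import numpy as np

    def embed_in_roni(image, payload_bits, border=16, n_planes=2):
        img = image.copy()                      # expects a uint8 grayscale array
        mask = np.zeros(img.shape, dtype=bool)  # RONI mask: a plain border here
        mask[:border, :] = True
        mask[-border:, :] = True
        mask[:, :border] = True
        mask[:, -border:] = True
        n_roni = int(mask.sum())
        # Tile the payload to fill the available capacity (n_roni * n_planes bits).
        bits = np.resize(np.asarray(payload_bits, dtype=np.uint8), n_roni * n_planes)
        roni = img[mask]
        for plane in range(n_planes):
            chunk = bits[plane * n_roni:(plane + 1) * n_roni]
            roni = (roni & np.uint8(~(1 << plane) & 0xFF)) | (chunk << np.uint8(plane))
        img[mask] = roni
        return img
    ```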

  12. A novel quantum steganography scheme for color images

    NASA Astrophysics Data System (ADS)

    Li, Panchi; Liu, Xiande

    In quantum image steganography, embedding capacity and security are two important issues. This paper presents a novel quantum steganography scheme using color images as cover images. First, the secret information is divided into 3-bit segments, and then each 3-bit segment is embedded into the LSB of one color pixel in the cover image according to its own value and using Gray code mapping rules. Extraction is the inverse of embedding. We designed the quantum circuits that implement the embedding and extracting process. The simulation results on a classical computer show that the proposed scheme outperforms several other existing schemes in terms of embedding capacity and security.

  13. Exact density functional and wave function embedding schemes based on orbital localization

    NASA Astrophysics Data System (ADS)

    Hégely, Bence; Nagy, Péter R.; Ferenczy, György G.; Kállay, Mihály

    2016-08-01

    Exact schemes for the embedding of density functional theory (DFT) and wave function theory (WFT) methods into lower-level DFT or WFT approaches are introduced utilizing orbital localization. First, a simple modification of the projector-based embedding scheme of Manby and co-workers [J. Chem. Phys. 140, 18A507 (2014)] is proposed. We also use localized orbitals to partition the system, but instead of augmenting the Fock operator with a somewhat arbitrary level-shift projector we solve the Huzinaga-equation, which strictly enforces the Pauli exclusion principle. Second, the embedding of WFT methods in local correlation approaches is studied. Since the latter methods split up the system into local domains, very simple embedding theories can be defined if the domains of the active subsystem and the environment are treated at a different level. The considered embedding schemes are benchmarked for reaction energies and compared to quantum mechanics (QM)/molecular mechanics (MM) and vacuum embedding. We conclude that for DFT-in-DFT embedding, the Huzinaga-equation-based scheme is more efficient than the other approaches, but QM/MM or even simple vacuum embedding is still competitive in particular cases. Concerning the embedding of wave function methods, the clear winner is the embedding of WFT into low-level local correlation approaches, and WFT-in-DFT embedding can only be more advantageous if a non-hybrid density functional is employed.

  14. Quantum Error Correction for Minor Embedded Quantum Annealing

    NASA Astrophysics Data System (ADS)

    Vinci, Walter; Paz Silva, Gerardo; Mishra, Anurag; Albash, Tameem; Lidar, Daniel

    2015-03-01

    While quantum annealing can take advantage of the intrinsic robustness of adiabatic dynamics, some form of quantum error correction (QEC) is necessary in order to preserve its advantages over classical computation. Moreover, realistic quantum annealers are subject to a restricted connectivity between qubits. Minor embedding techniques use several physical qubits to represent a single logical qubit with a larger set of interactions, but necessarily introduce new types of errors (whenever the physical qubits corresponding to the same logical qubit disagree). We present a QEC scheme where a minor embedding is used to generate an 8 × 8 × 2 cubic connectivity out of the native one and perform experiments on a D-Wave quantum annealer. Using a combination of optimized encoding and decoding techniques, our scheme enables the D-Wave device to solve minor embedded hard instances at least as well as it would on a native implementation. Our work is a proof-of-concept that minor embedding can be advantageously implemented in order to increase both the robustness and the connectivity of a programmable quantum annealer. Applied in conjunction with decoding techniques, this paves the way toward scalable quantum annealing with applications to hard optimization problems.

  15. Improved Secret Image Sharing Scheme in Embedding Capacity without Underflow and Overflow.

    PubMed

    Pang, Liaojun; Miao, Deyu; Li, Huixian; Wang, Qiong

    2015-01-01

    Computational secret image sharing (CSIS) is an effective way to protect a secret image during its transmission and storage, and thus it has attracted a lot of attention since its appearance. Nowadays, it has become a hot topic for researchers to improve the embedding capacity and eliminate the underflow and overflow situations, which are awkward and difficult to deal with. The scheme with the highest embedding capacity among the existing schemes suffers from the underflow and overflow problems. Although the underflow and overflow situations have been well dealt with by different methods, the embedding capacities of these methods are reduced to some degree. Motivated by these concerns, we propose a novel scheme in which differential coding, Huffman coding, and data conversion are used to compress the secret image before embedding to further improve the embedding capacity, and a pixel mapping matrix embedding method with a newly designed matrix is used to embed the secret image data into the cover image to avoid the underflow and overflow situations. Experimental results show that our scheme improves the embedding capacity further and eliminates the underflow and overflow situations at the same time.

  16. Improved Secret Image Sharing Scheme in Embedding Capacity without Underflow and Overflow

    PubMed Central

    Pang, Liaojun; Miao, Deyu; Li, Huixian; Wang, Qiong

    2015-01-01

    Computational secret image sharing (CSIS) is an effective way to protect a secret image during its transmission and storage, and thus it has attracted a lot of attention since its appearance. Nowadays, it has become a hot topic for researchers to improve the embedding capacity and eliminate the underflow and overflow situations, which are awkward and difficult to deal with. The scheme with the highest embedding capacity among the existing schemes suffers from the underflow and overflow problems. Although the underflow and overflow situations have been well dealt with by different methods, the embedding capacities of these methods are reduced to some degree. Motivated by these concerns, we propose a novel scheme in which differential coding, Huffman coding, and data conversion are used to compress the secret image before embedding to further improve the embedding capacity, and a pixel mapping matrix embedding method with a newly designed matrix is used to embed the secret image data into the cover image to avoid the underflow and overflow situations. Experimental results show that our scheme improves the embedding capacity further and eliminates the underflow and overflow situations at the same time. PMID:26351657

  17. A self-consistent density based embedding scheme applied to the adsorption of CO on Pd(111)

    NASA Astrophysics Data System (ADS)

    Lahav, D.; Klüner, T.

    2007-06-01

    We derive a variant of a density based embedded cluster approach as an improvement to a recently proposed embedding theory for metallic substrates (Govind et al 1999 J. Chem. Phys. 110 7677; Klüner et al 2001 Phys. Rev. Lett. 86 5954). In this scheme, a local region in space is represented by a small cluster which is treated by accurate quantum chemical methodology. The interaction of the cluster with the infinite solid is taken into account by an effective one-electron embedding operator representing the surrounding region. We propose a self-consistent embedding scheme which resolves intrinsic problems of the former theory, in particular a violation of strict density conservation. The proposed scheme is applied to the well-known benchmark system CO/Pd(111).

  18. Solvatochromic shifts from coupled-cluster theory embedded in density functional theory

    NASA Astrophysics Data System (ADS)

    Höfener, Sebastian; Gomes, André Severo Pereira; Visscher, Lucas

    2013-09-01

    Building on the framework recently reported for determining general response properties for frozen-density embedding [S. Höfener, A. S. P. Gomes, and L. Visscher, J. Chem. Phys. 136, 044104 (2012)], 10.1063/1.3675845, in this work we report a first implementation of an embedded coupled-cluster in density-functional theory (CC-in-DFT) scheme for electronic excitations, where only the response of the active subsystem is taken into account. The formalism is applied to the calculation of coupled-cluster excitation energies of water and uracil in aqueous solution. We find that the CC-in-DFT results are in good agreement with reference calculations and experimental results. The accuracy of calculations is mainly sensitive to factors influencing the correlation treatment (basis set quality, truncation of the cluster operator) and to the embedding treatment of the ground-state (choice of density functionals). This allows for efficient approximations at the excited state calculation step without compromising the accuracy. This approximate scheme makes it possible to use a first principles approach to investigate environment effects with specific interactions at coupled-cluster level of theory at a cost comparable to that of calculations of the individual subsystems in vacuum.

  19. Third-order 2N-storage Runge-Kutta schemes with error control

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Kennedy, Christopher A.

    1994-01-01

    A family of four-stage third-order explicit Runge-Kutta schemes is derived that requires only two storage locations and has desirable stability characteristics. Error control is achieved by embedding a second-order scheme within the four-stage procedure. Certain schemes are identified that are as efficient and accurate as conventional embedded schemes of comparable order and require fewer storage locations.
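
    To illustrate the 2N-storage update pattern, here is a minimal sketch using Williamson's classical three-stage, third-order coefficient set; the report's four-stage scheme with an embedded second-order error estimate uses its own coefficients, which are not reproduced here.

    ```python
    import numpy as np

    # Williamson-style 2N-storage coefficients (classical 3-stage, 3rd-order set)
    A = (0.0, -5.0 / 9.0, -153.0 / 128.0)
    B = (1.0 / 3.0, 15.0 / 16.0, 8.0 / 15.0)

    def low_storage_rk_step(f, u, dt):
        # Advance du/dt = f(u) by one step using only two storage registers.
        s = np.zeros_like(u)          # register 2: accumulated stage vector
        for a, b in zip(A, B):
            s = a * s + dt * f(u)     # overwrite register 2
            u = u + b * s             # overwrite register 1
        return u

    # Usage: exponential decay du/dt = -u, exact solution exp(-t)
    u = np.array([1.0])
    dt, nsteps = 0.01, 1000
    for _ in range(nsteps):
        u = low_storage_rk_step(lambda v: -v, u, dt)
    print(u, np.exp(-dt * nsteps))    # should agree to roughly 3rd-order accuracy
    ```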

  20. Exact density functional and wave function embedding schemes based on orbital localization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hégely, Bence; Nagy, Péter R.; Kállay, Mihály, E-mail: kallay@mail.bme.hu

    Exact schemes for the embedding of density functional theory (DFT) and wave function theory (WFT) methods into lower-level DFT or WFT approaches are introduced utilizing orbital localization. First, a simple modification of the projector-based embedding scheme of Manby and co-workers [J. Chem. Phys. 140, 18A507 (2014)] is proposed. We also use localized orbitals to partition the system, but instead of augmenting the Fock operator with a somewhat arbitrary level-shift projector we solve the Huzinaga-equation, which strictly enforces the Pauli exclusion principle. Second, the embedding of WFT methods in local correlation approaches is studied. Since the latter methods split up the system into local domains, very simple embedding theories can be defined if the domains of the active subsystem and the environment are treated at a different level. The considered embedding schemes are benchmarked for reaction energies and compared to quantum mechanics (QM)/molecular mechanics (MM) and vacuum embedding. We conclude that for DFT-in-DFT embedding, the Huzinaga-equation-based scheme is more efficient than the other approaches, but QM/MM or even simple vacuum embedding is still competitive in particular cases. Concerning the embedding of wave function methods, the clear winner is the embedding of WFT into low-level local correlation approaches, and WFT-in-DFT embedding can only be more advantageous if a non-hybrid density functional is employed.

  1. Quantum Watermarking Scheme Based on INEQR

    NASA Astrophysics Data System (ADS)

    Zhou, Ri-Gui; Zhou, Yang; Zhu, Changming; Wei, Lai; Zhang, Xiafen; Ian, Hou

    2018-04-01

    Quantum watermarking technology protects copyright by embedding an invisible quantum signal in quantum multimedia data. In this paper, a watermarking scheme based on INEQR is presented. First, the watermark image is extended to match the size of the carrier image. Second, swap and XOR operations are applied to the processed pixels; since there is only one bit per pixel, the XOR operation achieves the effect of simple encryption. Third, both the watermark embedding and extraction operations are described, using the key image, the swap operation and the LSB algorithm. When the embedding is performed, the binary key image is changed, indicating that the watermark has been embedded. To extract the watermark image, the key's state must first be detected; the extraction is carried out only when the key's state is |1>. Finally, to validate the proposed scheme, both the peak signal-to-noise ratio (PSNR) and the security of the scheme are analyzed.

  2. A Study on the Security Levels of Spread-Spectrum Embedding Schemes in the WOA Framework.

    PubMed

    Wang, Yuan-Gen; Zhu, Guopu; Kwong, Sam; Shi, Yun-Qing

    2017-08-23

    Security analysis is a very important issue for digital watermarking. Several years ago, according to Kerckhoffs' principle, the famous four security levels, namely insecurity, key security, subspace security, and stego-security, were defined for spread-spectrum (SS) embedding schemes in the framework of watermarked-only attack. However, up to now there has been little application of the definition of these security levels to the theoretical analysis of the security of SS embedding schemes, due to the difficulty of the theoretical analysis. In this paper, based on the security definition, we present a theoretical analysis to evaluate the security levels of five typical SS embedding schemes, namely the classical SS, the improved SS (ISS), the circular extension of ISS, and the nonrobust and robust natural watermarking schemes. The theoretical analysis of these typical SS schemes is successfully performed by taking advantage of the convolution of probability distributions to derive the probabilistic models of watermarked signals. Moreover, simulations are conducted to illustrate and validate our theoretical analysis. We believe that the theoretical and practical analysis presented in this paper can bridge the gap between the definition of the four security levels and its application to the theoretical analysis of SS embedding schemes.
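
    A minimal sketch of two of the embedding rules analyzed here: classical SS and improved SS (ISS), where ISS partially removes the host projection along the carrier before embedding. The circular-ISS and natural-watermarking variants are not shown; the carrier generation from a key is an illustrative assumption.

    ```python
    import numpy as np

    def ss_embed(x, bit, key, alpha=1.0, lam=0.0):
        # lam = 0: classical SS;  0 < lam <= 1: improved SS (ISS).
        rng = np.random.default_rng(key)
        u = rng.choice([-1.0, 1.0], size=np.shape(x))    # secret carrier
        m = 1.0 if bit else -1.0
        host_proj = float(x @ u) / float(u @ u)          # host interference term
        return x + (alpha * m - lam * host_proj) * u

    def ss_detect(y, key):
        rng = np.random.default_rng(key)
        u = rng.choice([-1.0, 1.0], size=np.shape(y))
        return int(y @ u > 0)                            # correlation detector
    ```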

  3. LSB-based Steganography Using Reflected Gray Code for Color Quantum Images

    NASA Astrophysics Data System (ADS)

    Li, Panchi; Lu, Aiping

    2018-02-01

    At present, the classical least-significant-bit (LSB) based image steganography has been extended to quantum image processing. For the existing LSB-based quantum image steganography schemes, the embedding capacity is no more than 3 bits per pixel. Therefore, it is meaningful to study how to improve the embedding capacity of quantum image steganography. This work presents a novel LSB-based steganography using reflected Gray code for colored quantum images, and the embedding capacity of this scheme is up to 4 bits per pixel. In the proposed scheme, the secret qubit sequence is considered as a sequence of 4-bit segments. For the four bits in each segment, the first bit is embedded in the second LSB of the B channel of the cover image, and the remaining three bits are embedded in the LSBs of the RGB channels of each color pixel simultaneously, using reflected Gray code to determine the embedded bits from the secret information. Following this transforming rule, the LSBs of the stego-image are not always the same as the secret bits, and the differences can reach almost 50%. Experimental results confirm that the proposed scheme shows good performance and outperforms the previous ones currently found in the literature in terms of embedding capacity.
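
    A classical-bit sketch of the reflected Gray code mapping and a 4-bit-per-pixel placement. The abstract does not fully specify the Gray-code decision rule, so the exact bit placement below is an illustrative assumption, not the paper's circuit-level scheme.

    ```python
    def to_gray(n):
        # binary -> reflected Gray code
        return n ^ (n >> 1)

    def from_gray(g):
        # reflected Gray code -> binary
        n = 0
        while g:
            n ^= g
            g >>= 1
        return n

    def embed_segment(r, g, b, segment):
        # Illustrative placement of one 4-bit secret segment into an (R, G, B)
        # pixel: first bit -> 2nd LSB of B; remaining three bits Gray-coded and
        # written to the LSBs of R, G and B.
        b = (b & ~0b10) | (((segment >> 3) & 1) << 1)
        gray3 = to_gray(segment & 0b111)
        r = (r & ~1) | ((gray3 >> 2) & 1)
        g = (g & ~1) | ((gray3 >> 1) & 1)
        b = (b & ~1) | (gray3 & 1)
        return r, g, b
    ```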

  4. Optimal Embedding for Shape Indexing in Medical Image Databases

    PubMed Central

    Qian, Xiaoning; Tagare, Hemant D.; Fulbright, Robert K.; Long, Rodney; Antani, Sameer

    2010-01-01

    This paper addresses the problem of indexing shapes in medical image databases. Shapes of organs are often indicative of disease, making shape similarity queries important in medical image databases. Mathematically, shapes with landmarks belong to shape spaces which are curved manifolds with a well defined metric. The challenge in shape indexing is to index data in such curved spaces. One natural indexing scheme is to use metric trees, but metric trees are prone to inefficiency. This paper proposes a more efficient alternative. We show that it is possible to optimally embed finite sets of shapes in shape space into a Euclidean space. After embedding, classical coordinate-based trees can be used for efficient shape retrieval. The embedding proposed in the paper is optimal in the sense that it least distorts the partial Procrustes shape distance. The proposed indexing technique is used to retrieve images by vertebral shape from the NHANES II database of cervical and lumbar spine x-ray images maintained at the National Library of Medicine. Vertebral shape strongly correlates with the presence of osteophytes, and shape similarity retrieval is proposed as a tool for retrieval by osteophyte presence and severity. Experimental results included in the paper evaluate (1) the usefulness of shape-similarity as a proxy for osteophytes, (2) the computational and disk access efficiency of the new indexing scheme, (3) the relative performance of indexing with embedding to the performance of indexing without embedding, and (4) the computational cost of indexing using the proposed embedding versus the cost of an alternate embedding. The experimental results clearly show the relevance of shape indexing and the advantage of using the proposed embedding. PMID:20163981
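
    As a generic stand-in for the optimal embedding step, the sketch below uses classical multidimensional scaling on a precomputed distance matrix (which, in the paper's setting, would be the partial Procrustes shape distances), followed by a coordinate-based index for retrieval. The toy data are illustrative.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def classical_mds_embedding(D, dim=2):
        # Classical MDS: double-centre the squared distance matrix and keep the
        # leading eigenvectors as Euclidean coordinates.
        n = D.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n
        B = -0.5 * J @ (D ** 2) @ J
        w, V = np.linalg.eigh(B)                     # ascending eigenvalues
        idx = np.argsort(w)[::-1][:dim]
        w_top = np.clip(w[idx], 0.0, None)           # guard small negative values
        return V[:, idx] * np.sqrt(w_top)            # n x dim coordinates

    # Toy usage: distances between random points, then a k-d tree index.
    rng = np.random.default_rng(1)
    pts = rng.random((30, 5))
    D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    coords = classical_mds_embedding(D, dim=2)
    tree = cKDTree(coords)
    dists, neighbours = tree.query(coords[0], k=3)   # retrieve the most similar shapes
    ```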

  5. Optimal embedding for shape indexing in medical image databases.

    PubMed

    Qian, Xiaoning; Tagare, Hemant D; Fulbright, Robert K; Long, Rodney; Antani, Sameer

    2010-06-01

    This paper addresses the problem of indexing shapes in medical image databases. Shapes of organs are often indicative of disease, making shape similarity queries important in medical image databases. Mathematically, shapes with landmarks belong to shape spaces which are curved manifolds with a well defined metric. The challenge in shape indexing is to index data in such curved spaces. One natural indexing scheme is to use metric trees, but metric trees are prone to inefficiency. This paper proposes a more efficient alternative. We show that it is possible to optimally embed finite sets of shapes in shape space into a Euclidean space. After embedding, classical coordinate-based trees can be used for efficient shape retrieval. The embedding proposed in the paper is optimal in the sense that it least distorts the partial Procrustes shape distance. The proposed indexing technique is used to retrieve images by vertebral shape from the NHANES II database of cervical and lumbar spine X-ray images maintained at the National Library of Medicine. Vertebral shape strongly correlates with the presence of osteophytes, and shape similarity retrieval is proposed as a tool for retrieval by osteophyte presence and severity. Experimental results included in the paper evaluate (1) the usefulness of shape similarity as a proxy for osteophytes, (2) the computational and disk access efficiency of the new indexing scheme, (3) the relative performance of indexing with embedding to the performance of indexing without embedding, and (4) the computational cost of indexing using the proposed embedding versus the cost of an alternate embedding. The experimental results clearly show the relevance of shape indexing and the advantage of using the proposed embedding. Copyright (c) 2010 Elsevier B.V. All rights reserved.

  6. Hardware/software codesign for embedded RISC core

    NASA Astrophysics Data System (ADS)

    Liu, Peng

    2001-12-01

    This paper describes a hardware/software codesign method for the extensible embedded RISC core VIRGO, which is based on the MIPS-I instruction set architecture. VIRGO is described in the Verilog hardware description language, has a five-stage pipeline with a shared 32-bit cache/memory interface, and is controlled by a distributed control scheme. Every pipeline stage has a small controller that manages the stage's status and the cooperation among pipeline phases. Since the description uses a high-level language and the control structure is distributed, the VIRGO core is highly extensible and can meet application requirements. Taking the high-definition television MPEG2 MPHL decoder chip as a case study, we constructed a hardware/software codesign virtual prototyping machine that supports research on the VIRGO core instruction set architecture, system-on-chip memory size requirements, system-on-chip software, and so on. The virtual prototyping platform also allows evaluation of the system-on-chip design and the RISC instruction set.

  7. Self-consistent Green's function embedding for advanced electronic structure methods based on a dynamical mean-field concept

    NASA Astrophysics Data System (ADS)

    Chibani, Wael; Ren, Xinguo; Scheffler, Matthias; Rinke, Patrick

    2016-04-01

    We present an embedding scheme for periodic systems that facilitates the treatment of the physically important part (here a unit cell or a supercell) with advanced electronic structure methods, that are computationally too expensive for periodic systems. The rest of the periodic system is treated with computationally less demanding approaches, e.g., Kohn-Sham density-functional theory, in a self-consistent manner. Our scheme is based on the concept of dynamical mean-field theory formulated in terms of Green's functions. Our real-space dynamical mean-field embedding scheme features two nested Dyson equations, one for the embedded cluster and another for the periodic surrounding. The total energy is computed from the resulting Green's functions. The performance of our scheme is demonstrated by treating the embedded region with hybrid functionals and many-body perturbation theory in the GW approach for simple bulk systems. The total energy and the density of states converge rapidly with respect to the computational parameters and approach their bulk limit with increasing cluster (i.e., computational supercell) size.

  8. High capacity reversible watermarking for audio by histogram shifting and predicted error expansion.

    PubMed

    Wang, Fei; Xie, Zhaoxin; Chen, Zuo

    2014-01-01

    Being reversible, the watermarking information embedded in audio signals can be extracted while the original audio data achieve lossless recovery. Currently, the few existing reversible audio watermarking algorithms are confronted with the following problems: relatively low SNR (signal-to-noise ratio) of the embedded audio, a large amount of auxiliary location information to be embedded, and the absence of accurate capacity control. In this paper, we present a novel reversible audio watermarking scheme based on improved prediction error expansion and histogram shifting. First, we use a differential evolution algorithm to optimize the prediction coefficients and then apply prediction error expansion to output the stego data. Second, in order to reduce the length of the location map, we introduce a histogram shifting scheme. Meanwhile, the prediction error modification threshold for a given embedding capacity can be computed by the proposed scheme. Experiments show that this algorithm improves the SNR of the embedded audio signals and the embedding capacity, drastically reduces the location map length, and enhances capacity control.
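
    A minimal sketch of the per-sample core of prediction-error expansion with histogram shifting, assuming integer samples and integer (rounded) predictions; the paper's differential-evolution-optimized predictor and location-map handling are not shown.

    ```python
    def pee_embed_sample(x, pred, bit, T):
        # Expandable bin: embed one bit; otherwise shift the histogram tail.
        e = x - pred
        if -T <= e < T:
            return pred + 2 * e + bit
        elif e >= T:
            return x + T
        else:
            return x - T

    def pee_extract_sample(x_marked, pred, T):
        # Returns (original sample, extracted bit or None if the sample was shifted).
        e = x_marked - pred
        if -2 * T <= e <= 2 * T - 1:       # this sample carried a bit
            bit = e & 1
            return pred + (e >> 1), bit    # floor division recovers the original error
        elif e >= 2 * T:
            return x_marked - T, None
        else:
            return x_marked + T, None

    # Round-trip check with illustrative values
    marked = pee_embed_sample(100, 97, 1, T=4)
    assert pee_extract_sample(marked, 97, T=4) == (100, 1)
    ```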

  9. Computing the multifractal spectrum from time series: an algorithmic approach.

    PubMed

    Harikrishnan, K P; Misra, R; Ambika, G; Amritkar, R E

    2009-12-01

    We show that the existing methods for computing the f(alpha) spectrum from a time series can be improved by using a new algorithmic scheme. The scheme relies on the basic idea that the smooth convex profile of a typical f(alpha) spectrum can be fitted with an analytic function involving a set of four independent parameters. While the standard existing schemes [P. Grassberger et al., J. Stat. Phys. 51, 135 (1988); A. Chhabra and R. V. Jensen, Phys. Rev. Lett. 62, 1327 (1989)] generally compute only an incomplete f(alpha) spectrum (usually the top portion), we show that this can be overcome by an algorithmic approach, which is automated to compute the D(q) and f(alpha) spectra from a time series for any embedding dimension. The scheme is first tested with the logistic attractor with known f(alpha) curve and subsequently applied to higher-dimensional cases. We also show that the scheme can be effectively adapted for analyzing practical time series involving noise, with examples from two widely different real world systems. Moreover, some preliminary results indicating that the set of four independent parameters may be used as diagnostic measures are also included.

  10. Progressive transmission of images over fading channels using rate-compatible LDPC codes.

    PubMed

    Pan, Xiang; Banihashemi, Amir H; Cuhadar, Aysegul

    2006-12-01

    In this paper, we propose a combined source/channel coding scheme for transmission of images over fading channels. The proposed scheme employs rate-compatible low-density parity-check codes along with embedded image coders such as JPEG2000 and set partitioning in hierarchical trees (SPIHT). The assignment of channel coding rates to source packets is performed by a fast trellis-based algorithm. We examine the performance of the proposed scheme over correlated and uncorrelated Rayleigh flat-fading channels with and without side information. Simulation results for the expected peak signal-to-noise ratio of reconstructed images, which are within 1 dB of the capacity upper bound over a wide range of channel signal-to-noise ratios, show considerable improvement compared to existing results under similar conditions. We also study the sensitivity of the proposed scheme in the presence of channel estimation error at the transmitter and demonstrate that under most conditions our scheme is more robust compared to existing schemes.

  11. A Blind Reversible Robust Watermarking Scheme for Relational Databases

    PubMed Central

    Chang, Chin-Chen; Nguyen, Thai-Son; Lin, Chia-Chen

    2013-01-01

    Protecting the ownership and controlling the copies of digital data have become very important issues in Internet-based applications. Reversible watermark technology allows the distortion-free recovery of relational databases after the embedded watermark data are detected or verified. In this paper, we propose a new, blind, reversible, robust watermarking scheme that can be used to provide proof of ownership for the owner of a relational database. In the proposed scheme, a reversible data-embedding algorithm, which is referred to as “histogram shifting of adjacent pixel difference” (APD), is used to obtain reversibility. The proposed scheme can detect successfully 100% of the embedded watermark data, even if as much as 80% of the watermarked relational database is altered. Our extensive analysis and experimental results show that the proposed scheme is robust against a variety of data attacks, for example, alteration attacks, deletion attacks, mix-match attacks, and sorting attacks. PMID:24223033

  12. A blind reversible robust watermarking scheme for relational databases.

    PubMed

    Chang, Chin-Chen; Nguyen, Thai-Son; Lin, Chia-Chen

    2013-01-01

    Protecting the ownership and controlling the copies of digital data have become very important issues in Internet-based applications. Reversible watermark technology allows the distortion-free recovery of relational databases after the embedded watermark data are detected or verified. In this paper, we propose a new, blind, reversible, robust watermarking scheme that can be used to provide proof of ownership for the owner of a relational database. In the proposed scheme, a reversible data-embedding algorithm, which is referred to as "histogram shifting of adjacent pixel difference" (APD), is used to obtain reversibility. The proposed scheme can detect successfully 100% of the embedded watermark data, even if as much as 80% of the watermarked relational database is altered. Our extensive analysis and experimental results show that the proposed scheme is robust against a variety of data attacks, for example, alteration attacks, deletion attacks, mix-match attacks, and sorting attacks.

  13. Steganography based on pixel intensity value decomposition

    NASA Astrophysics Data System (ADS)

    Abdulla, Alan Anwar; Sellahewa, Harin; Jassim, Sabah A.

    2014-05-01

    This paper focuses on steganography based on pixel intensity value decomposition. A number of existing schemes such as binary, Fibonacci, Prime, Natural, Lucas, and Catalan-Fibonacci (CF) are evaluated in terms of payload capacity and stego quality. A new technique based on a specific representation is proposed to decompose pixel intensity values into 16 (virtual) bit-planes suitable for embedding purposes. The proposed decomposition has a desirable property whereby the sum of all bit-planes does not exceed the maximum pixel intensity value, i.e. 255. Experimental results demonstrate that the proposed technique offers an effective compromise between payload capacity and stego quality of existing embedding techniques based on pixel intensity value decomposition. Its capacity is equal to that of binary and Lucas, while it offers a higher capacity than Fibonacci, Prime, Natural, and CF when the secret bits are embedded in 1st Least Significant Bit (LSB). When the secret bits are embedded in higher bit-planes, i.e., 2nd LSB to 8th Most Significant Bit (MSB), the proposed scheme has more capacity than Natural numbers based embedding. However, from the 6th bit-plane onwards, the proposed scheme offers better stego quality. In general, the proposed decomposition scheme has less effect in terms of quality on pixel value when compared to most existing pixel intensity value decomposition techniques when embedding messages in higher bit-planes.
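
    For concreteness, a sketch of one of the existing decompositions evaluated here, the Fibonacci (Zeckendorf) representation into virtual bit-planes; the paper's own 16-plane representation is different and not reproduced.

    ```python
    def fib_bitplanes(value, n_planes=12):
        # Greedy Zeckendorf decomposition: unique, with no two consecutive planes set.
        fibs = [1, 2]
        while len(fibs) < n_planes:
            fibs.append(fibs[-1] + fibs[-2])
        planes = [0] * n_planes
        for i in range(n_planes - 1, -1, -1):
            if fibs[i] <= value:
                planes[i] = 1
                value -= fibs[i]
        return planes                      # planes[0] is the least-significant plane

    def from_bitplanes(planes):
        fibs = [1, 2]
        while len(fibs) < len(planes):
            fibs.append(fibs[-1] + fibs[-2])
        return sum(f for f, bit in zip(fibs, planes) if bit)

    # Embedding overwrites a chosen virtual plane; a naive overwrite can break the
    # Zeckendorf validity rule, which practical schemes handle by re-encoding.
    assert from_bitplanes(fib_bitplanes(200)) == 200
    ```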

  14. Nonlinear secret image sharing scheme.

    PubMed

    Shin, Sang-Ho; Lee, Gil-Je; Yoo, Kee-Young

    2014-01-01

    Over the past decade, most secret image sharing schemes have been proposed using Shamir's technique, which is based on linear combination polynomial arithmetic. Although Shamir's technique based secret image sharing schemes are efficient and scalable for various environments, there exists a security threat such as the Tompa-Woll attack. Renvall and Ding proposed a new secret sharing technique based on nonlinear combination polynomial arithmetic in order to address this threat, but it is hard to apply to secret image sharing. In this paper, we propose a (t, n)-threshold nonlinear secret image sharing scheme with a steganography concept. In order to achieve a suitable and secure secret image sharing scheme, we adapt a modified LSB embedding technique with the XOR Boolean algebra operation, define a new variable m, and change the range of the prime p in the sharing procedure. In order to evaluate the efficiency and security of the proposed scheme, we use the embedding capacity and PSNR. As a result, the average PSNR and embedding capacity are 44.78 dB and 1.74t⌈log2 m⌉ bits per pixel (bpp), respectively.

  15. Nonlinear Secret Image Sharing Scheme

    PubMed Central

    Shin, Sang-Ho; Yoo, Kee-Young

    2014-01-01

    Over the past decade, most secret image sharing schemes have been proposed using Shamir's technique, which is based on linear combination polynomial arithmetic. Although Shamir's technique based secret image sharing schemes are efficient and scalable for various environments, there exists a security threat such as the Tompa-Woll attack. Renvall and Ding proposed a new secret sharing technique based on nonlinear combination polynomial arithmetic in order to address this threat, but it is hard to apply to secret image sharing. In this paper, we propose a (t, n)-threshold nonlinear secret image sharing scheme with a steganography concept. In order to achieve a suitable and secure secret image sharing scheme, we adapt a modified LSB embedding technique with the XOR Boolean algebra operation, define a new variable m, and change the range of the prime p in the sharing procedure. In order to evaluate the efficiency and security of the proposed scheme, we use the embedding capacity and PSNR. As a result, the average PSNR and embedding capacity are 44.78 dB and 1.74t⌈log2 m⌉ bits per pixel (bpp), respectively. PMID:25140334

  16. Embedded wavelet packet transform technique for texture compression

    NASA Astrophysics Data System (ADS)

    Li, Jin; Cheng, Po-Yuen; Kuo, C.-C. Jay

    1995-09-01

    A highly efficient texture compression scheme is proposed in this research. With this scheme, energy compaction of texture images is first achieved by the wavelet packet transform, and an embedding approach is then adopted for the coding of the wavelet packet transform coefficients. By comparing the proposed algorithm with the JPEG standard, FBI wavelet/scalar quantization standard and the EZW scheme with extensive experimental results, we observe a significant improvement in the rate-distortion performance and visual quality.

  17. Digital watermarking for color images in hue-saturation-value color space

    NASA Astrophysics Data System (ADS)

    Tachaphetpiboon, Suwat; Thongkor, Kharittha; Amornraksa, Thumrongrat; Delp, Edward J.

    2014-05-01

    This paper proposes a new watermarking scheme for color images, in which all pixels of the image are used for embedding watermark bits in order to achieve the highest amount of embedding. For watermark embedding, the S component in the hue-saturation-value (HSV) color space is used to carry the watermark bits, while the V component is used in accordance with a human visual system model to determine the proper watermark strength. In the proposed scheme, the number of watermark bits equals the number of pixels in the host image. Watermark extraction is accomplished blindly based on the use of a 3×3 spatial domain Wiener filter. The efficiency of our proposed image watermarking scheme depends mainly on the accuracy of the estimate of the original S component. The experimental results show that the performance of the proposed scheme, under no attacks and against various types of attacks, was superior to the previous existing watermarking schemes.
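
    A hedged sketch of the S-channel embedding idea: a bipolar watermark is added to the saturation channel with a gain that grows with the value channel. The paper uses a proper human-visual-system model, so the linear V-dependent gain below is only a placeholder; `hsv` is assumed to be a float array in [0, 1].

    ```python
    import numpy as np

    def embed_hsv_watermark(hsv, wm_bits, base_strength=0.02):
        # hsv has shape (H, W, 3); wm_bits has one bit per pixel.
        s, v = hsv[..., 1], hsv[..., 2]
        w = 2.0 * np.asarray(wm_bits, dtype=float).reshape(s.shape) - 1.0  # {0,1} -> {-1,+1}
        gain = base_strength * (0.5 + v)          # stronger where the pixel is brighter
        out = hsv.copy()
        out[..., 1] = np.clip(s + gain * w, 0.0, 1.0)
        return out

    # Blind extraction in the paper estimates the original S channel with a 3x3
    # Wiener filter and decides each bit from the sign of the residual.
    ```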

  18. Toward Optimal Manifold Hashing via Discrete Locally Linear Embedding.

    PubMed

    Rongrong Ji; Hong Liu; Liujuan Cao; Di Liu; Yongjian Wu; Feiyue Huang

    2017-11-01

    Binary code learning, also known as hashing, has received increasing attention in large-scale visual search. By transforming high-dimensional features to binary codes, the original Euclidean distance is approximated via Hamming distance. More recently, it has been advocated that it is the manifold distance, rather than the Euclidean distance, that should be preserved in the Hamming space. However, it remains an open problem to directly preserve the manifold structure by hashing. In particular, it first needs to build the local linear embedding in the original feature space, and then quantize such embedding to binary codes. Such a two-step coding is problematic and less optimized. Moreover, the off-line learning is extremely time- and memory-consuming, since it needs to calculate the similarity matrix of the original data. In this paper, we propose a novel hashing algorithm, termed discrete locality linear embedding hashing (DLLH), which well addresses the above challenges. The DLLH directly reconstructs the manifold structure in the Hamming space, learning optimal hash codes that maintain the local linear relationship of data points. To learn the discrete locally linear embedding codes, we further propose a discrete optimization algorithm with an iterative parameter-updating scheme. Moreover, an anchor-based acceleration scheme, termed Anchor-DLLH, is further introduced, which approximates the large similarity matrix by the product of two low-rank matrices. Experimental results on three widely used benchmark data sets, i.e., CIFAR10, NUS-WIDE, and YouTube Face, have shown superior performance of the proposed DLLH over the state-of-the-art approaches.

  19. Non-integer expansion embedding techniques for reversible image watermarking

    NASA Astrophysics Data System (ADS)

    Xiang, Shijun; Wang, Yi

    2015-12-01

    This work aims at reducing the embedding distortion of prediction-error expansion (PE)-based reversible watermarking. In the classical PE embedding method proposed by Thodi and Rodriguez, the predicted value is rounded to an integer for integer prediction-error expansion (IPE) embedding. The rounding operation places a constraint on a predictor's performance. In this paper, we propose a non-integer PE (NIPE) embedding approach, which can process non-integer prediction errors for embedding data into an audio or image file by expanding only the integer part of a prediction error while keeping its fractional part unchanged. The advantage of the NIPE technique is that it can bring a predictor into full play by estimating a sample/pixel in a noncausal way in a single pass, since there is no rounding operation. A new noncausal image prediction method to estimate a pixel with four immediate pixels in a single pass is included in the proposed scheme. The proposed noncausal image predictor can provide better performance than Sachnev et al.'s noncausal double-set prediction method (where data prediction in two passes brings a distortion problem due to the fact that half of the pixels were predicted with the watermarked pixels). In comparison with several existing state-of-the-art works, experimental results have shown that the NIPE technique with the new noncausal prediction strategy can reduce the embedding distortion for the same embedding payload.
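
    A minimal per-sample sketch of the NIPE idea as described above: split the possibly fractional prediction error into integer and fractional parts, expand only the integer part, and leave the fractional part untouched. Threshold and overflow handling are omitted, and the function names are illustrative.

    ```python
    import math

    def nipe_embed(x, pred, bit):
        # x is an integer sample; pred may be fractional (no rounding applied).
        e = x - pred
        e_int = math.floor(e)
        # x' = pred + (2*e_int + bit) + (e - e_int)  ==  x + e_int + bit
        return x + e_int + bit

    def nipe_extract(x_marked, pred):
        e = x_marked - pred
        expanded = math.floor(e)            # equals 2*e_int + bit
        bit = expanded & 1
        e_int = expanded >> 1
        return x_marked - e_int - bit, bit  # (original sample, extracted bit)

    # Round-trip check with an illustrative fractional prediction
    marked = nipe_embed(100, 97.3, 1)
    assert nipe_extract(marked, 97.3) == (100, 1)
    ```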

  20. Optical image hiding based on chaotic vibration of deformable moiré grating

    NASA Astrophysics Data System (ADS)

    Lu, Guangqing; Saunoriene, Loreta; Aleksiene, Sandra; Ragulskis, Minvydas

    2018-03-01

    Image hiding technique based on chaotic vibration of deformable moiré grating is presented in this paper. The embedded secret digital image is leaked in a form of a pattern of time-averaged moiré fringes when the deformable cover grating vibrates according to a chaotic law of motion with a predefined set of parameters. Computational experiments are used to demonstrate the features and the applicability of the proposed scheme.

  1. Local matrix learning in clustering and applications for manifold visualization.

    PubMed

    Arnonkijpanich, Banchar; Hasenfuss, Alexander; Hammer, Barbara

    2010-05-01

    Electronic data sets are increasing rapidly with respect to both the size of the data sets and the data resolution, i.e., dimensionality, such that adequate data inspection and data visualization have become central issues of data mining. In this article, we present an extension of classical clustering schemes by local matrix adaptation, which allows a better representation of data by means of clusters with an arbitrary spherical shape. Unlike previous proposals, the method is derived from a global cost function. The focus of this article is to demonstrate the applicability of this matrix clustering scheme to low-dimensional data embedding for data inspection. The proposed method is based on matrix learning for neural gas and manifold charting. This provides an explicit mapping of a given high-dimensional data space to low dimensionality. We demonstrate the usefulness of this method for data inspection and manifold visualization. Copyright © 2009 Elsevier Ltd. All rights reserved.

  2. Embedded turbulence model in numerical methods for hyperbolic conservation laws

    NASA Astrophysics Data System (ADS)

    Drikakis, D.

    2002-07-01

    The paper describes the use of numerical methods for hyperbolic conservation laws as an embedded turbulence modelling approach. Different Godunov-type schemes are utilized in computations of Burgers' turbulence and a two-dimensional mixing layer. The schemes include a total variation diminishing, characteristic-based scheme which is developed in this paper using the flux limiter approach. The embedded turbulence modelling property of the above methods is demonstrated through coarsely resolved large eddy simulations with and without subgrid scale models.

  3. Gait Characteristic Analysis and Identification Based on the iPhone's Accelerometer and Gyrometer

    PubMed Central

    Sun, Bing; Wang, Yang; Banda, Jacob

    2014-01-01

    Gait identification is a valuable approach to identify humans at a distance. In this paper, gait characteristics are analyzed based on an iPhone's accelerometer and gyrometer, and a new approach is proposed for gait identification. Specifically, gait datasets are collected by the triaxial accelerometer and gyrometer embedded in an iPhone. Then, the datasets are processed to extract gait characteristic parameters which include gait frequency, symmetry coefficient, dynamic range and similarity coefficient of characteristic curves. Finally, a weighted voting scheme dependent upon the gait characteristic parameters is proposed for gait identification. Four experiments are implemented to validate the proposed scheme. The attitude and acceleration solutions are verified by simulation. Then the gait characteristics are analyzed by comparing two sets of actual data, and the performance of the weighted voting identification scheme is verified by 40 datasets of 10 subjects. PMID:25222034
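
    A minimal sketch of a weighted voting identification step of the kind described above; the gallery, parameter names and weights are hypothetical stand-ins, not the paper's actual parameters or similarity measures.

```python
# Hypothetical gallery of enrolled subjects: per-subject gait parameters
# (step frequency in Hz, symmetry coefficient, acceleration dynamic range).
GALLERY = {
    "subject_A": {"frequency": 1.9, "symmetry": 0.92, "dyn_range": 14.2},
    "subject_B": {"frequency": 1.6, "symmetry": 0.85, "dyn_range": 11.7},
}
# Hypothetical weights expressing how discriminative each parameter is assumed to be.
WEIGHTS = {"frequency": 0.4, "symmetry": 0.3, "dyn_range": 0.3}

def identify(probe, gallery=GALLERY, weights=WEIGHTS):
    """Weighted vote: each parameter votes for the enrolled subject whose stored
    value is closest to the probe; the votes are weighted and summed."""
    scores = {name: 0.0 for name in gallery}
    for param, w in weights.items():
        closest = min(gallery, key=lambda s: abs(gallery[s][param] - probe[param]))
        scores[closest] += w
    return max(scores, key=scores.get), scores

probe = {"frequency": 1.85, "symmetry": 0.90, "dyn_range": 13.9}
print(identify(probe))   # expected to favour "subject_A" for this probe
```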

  4. Small Private Key PKS on an Embedded Microprocessor

    PubMed Central

    Seo, Hwajeong; Kim, Jihyun; Choi, Jongseok; Park, Taehwan; Liu, Zhe; Kim, Howon

    2014-01-01

    Multivariate quadratic (MQ) cryptography requires the use of long public and private keys to ensure a sufficient security level, but this is not favorable to embedded systems, which have limited system resources. Recently, various approaches to MQ cryptography using reduced public keys have been studied. As a result of this, at CHES2011 (Cryptographic Hardware and Embedded Systems, 2011), a small public key MQ scheme was proposed, and its feasible implementation on an embedded microprocessor was reported at CHES2012. However, the implementation of a small private key MQ scheme was not reported. For efficient implementation, random number generators can contribute to reduce the key size, but the cost of using a random number generator is much more complex than computing MQ on modern microprocessors. Therefore, no feasible results have been reported on embedded microprocessors. In this paper, we propose a feasible implementation on embedded microprocessors for a small private key MQ scheme using a pseudo-random number generator and hash function based on a block-cipher exploiting a hardware Advanced Encryption Standard (AES) accelerator. To speed up the performance, we apply various implementation methods, including parallel computation, on-the-fly computation, optimized logarithm representation, vinegar monomials and assembly programming. The proposed method reduces the private key size by about 99.9% and improves signature generation and verification by 5.78% and 12.19%, respectively, compared with the previous results in CHES2012. PMID:24651722

  5. Small private key MQPKS on an embedded microprocessor.

    PubMed

    Seo, Hwajeong; Kim, Jihyun; Choi, Jongseok; Park, Taehwan; Liu, Zhe; Kim, Howon

    2014-03-19

    Multivariate quadratic (MQ) cryptography requires the use of long public and private keys to ensure a sufficient security level, but this is not favorable to embedded systems, which have limited system resources. Recently, various approaches to MQ cryptography using reduced public keys have been studied. As a result of this, at CHES2011 (Cryptographic Hardware and Embedded Systems, 2011), a small public key MQ scheme, was proposed, and its feasible implementation on an embedded microprocessor was reported at CHES2012. However, the implementation of a small private key MQ scheme was not reported. For efficient implementation, random number generators can contribute to reduce the key size, but the cost of using a random number generator is much more complex than computing MQ on modern microprocessors. Therefore, no feasible results have been reported on embedded microprocessors. In this paper, we propose a feasible implementation on embedded microprocessors for a small private key MQ scheme using a pseudo-random number generator and hash function based on a block-cipher exploiting a hardware Advanced Encryption Standard (AES) accelerator. To speed up the performance, we apply various implementation methods, including parallel computation, on-the-fly computation, optimized logarithm representation, vinegar monomials and assembly programming. The proposed method reduces the private key size by about 99.9% and boosts signature generation and verification by 5.78% and 12.19% than previous results in CHES2012.

  6. Simplifying the representation of complex free-energy landscapes using sketch-map

    PubMed Central

    Ceriotti, Michele; Tribello, Gareth A.; Parrinello, Michele

    2011-01-01

    A new scheme, sketch-map, for obtaining a low-dimensional representation of the region of phase space explored during an enhanced dynamics simulation is proposed. We show evidence, from an examination of the distribution of pairwise distances between frames, that some features of the free-energy surface are inherently high-dimensional. This makes dimensionality reduction problematic because the data does not satisfy the assumptions made in conventional manifold learning algorithms. We therefore propose that when dimensionality reduction is performed on trajectory data one should think of the resultant embedding as a quickly sketched set of directions rather than a road map. In other words, the embedding tells one about the connectivity between states but does not provide the vectors that correspond to the slow degrees of freedom. This realization informs the development of sketch-map, which endeavors to reproduce the proximity information from the high-dimensionality description in a space of lower dimensionality even when a faithful embedding is not possible. PMID:21730167

  7. Watermarking scheme for authentication of compressed image

    NASA Astrophysics Data System (ADS)

    Hsieh, Tsung-Han; Li, Chang-Tsun; Wang, Shuo

    2003-11-01

    As images are commonly transmitted or stored in compressed form such as JPEG, to extend the applicability of our previous work, a new scheme for embedding watermark in compressed domain without resorting to cryptography is proposed. In this work, a target image is first DCT transformed and quantised. Then, all the coefficients are implicitly watermarked in order to minimize the risk of being attacked on the unwatermarked coefficients. The watermarking is done through registering/blending the zero-valued coefficients with a binary sequence to create the watermark and involving the unembedded coefficients during the process of embedding the selected coefficients. The second-order neighbors and the block itself are considered in the process of the watermark embedding in order to thwart different attacks such as cover-up, vector quantisation, and transplantation. The experiments demonstrate the capability of the proposed scheme in thwarting local tampering, geometric transformation such as cropping, and common signal operations such as lowpass filtering.

  8. Quantum decimation in Hilbert space: Coarse graining without structure

    NASA Astrophysics Data System (ADS)

    Singh, Ashmeet; Carroll, Sean M.

    2018-03-01

    We present a technique to coarse grain quantum states in a finite-dimensional Hilbert space. Our method is distinguished from other approaches by not relying on structures such as a preferred factorization of Hilbert space or a preferred set of operators (local or otherwise) in an associated algebra. Rather, we use the data corresponding to a given set of states, either specified independently or constructed from a single state evolving in time. Our technique is based on principal component analysis (PCA), and the resulting coarse-grained quantum states live in a lower-dimensional Hilbert space whose basis is defined using the underlying (isometric embedding) transformation of the set of fine-grained states we wish to coarse grain. Physically, the transformation can be interpreted to be an "entanglement coarse-graining" scheme that retains most of the global, useful entanglement structure of each state, while needing fewer degrees of freedom for its reconstruction. This scheme could be useful for efficiently describing collections of states whose number is much smaller than the dimension of Hilbert space, or a single state evolving over time.
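
    A minimal numerical sketch of the PCA idea described above, assuming random stand-in states: the dominant left singular vectors of the stacked state vectors give an isometric embedding into a lower-dimensional space; the dimensions and tolerance are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

dim_fine = 64      # dimension of the fine-grained Hilbert space (illustrative)
n_states = 5       # number of states to coarse grain

# Random normalized complex state vectors, stacked as columns of a matrix.
states = rng.normal(size=(dim_fine, n_states)) + 1j * rng.normal(size=(dim_fine, n_states))
states /= np.linalg.norm(states, axis=0)

# PCA via SVD: the left singular vectors with non-negligible singular values
# span the subspace actually explored by the given set of states.
U, s, _ = np.linalg.svd(states, full_matrices=False)
keep = s > 1e-12 * s[0]
V = U[:, keep]                       # isometry: coarse space -> fine space

coarse_states = V.conj().T @ states  # coarse-grained representation (dim <= n_states)
reconstructed = V @ coarse_states

print(coarse_states.shape)           # (k, n_states), with k <= 5
print(np.allclose(reconstructed, states))   # exact here, since k equals the rank
```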

  9. Etch Profile Simulation Using Level Set Methods

    NASA Technical Reports Server (NTRS)

    Hwang, Helen H.; Meyyappan, Meyya; Arnold, James O. (Technical Monitor)

    1997-01-01

    Etching and deposition of materials are critical steps in semiconductor processing for device manufacturing. Both etching and deposition may have isotropic and anisotropic components, due to directional sputtering and redeposition of materials, for example. Previous attempts at modeling profile evolution have used so-called "string theory" to simulate the moving solid-gas interface between the semiconductor and the plasma. One complication of this method is that extensive de-looping schemes are required at the profile corners. We will present a 2D profile evolution simulation using level set theory to model the surface. By embedding the location of the interface in a field variable, the need for de-looping schemes is eliminated and profile corners are more accurately modeled. This level set profile evolution model will calculate both isotropic and anisotropic etch and deposition rates of a substrate in low pressure (tens of mTorr) plasmas, considering the incident ion energy angular distribution functions and neutral fluxes. We will present etching profiles of Si substrates in Ar/Cl2 discharges for various incident ion energies and trench geometries.
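
    For readers unfamiliar with the level-set idea referenced above, a minimal sketch (not the authors' simulator): a 2-D field phi is advanced with a uniform positive normal speed using a first-order upwind scheme, so the zero contour moves without explicit front tracking or de-looping; the grid size, speed, time step and periodic boundaries are illustrative assumptions.

```python
import numpy as np

n, h, dt, speed, steps = 100, 1.0, 0.4, 1.0, 50   # illustrative parameters

# Zero level set of phi is a circle of radius 20 (the initial "etch front").
y, x = np.mgrid[0:n, 0:n]
phi = np.sqrt((x - n / 2) ** 2 + (y - n / 2) ** 2) - 20.0

def upwind_grad_mag(phi, h):
    """Rouy-Tourin upwind |grad(phi)| for a front moving with positive normal speed."""
    dxm = (phi - np.roll(phi, 1, axis=1)) / h     # backward difference in x
    dxp = (np.roll(phi, -1, axis=1) - phi) / h    # forward difference in x
    dym = (phi - np.roll(phi, 1, axis=0)) / h
    dyp = (np.roll(phi, -1, axis=0) - phi) / h
    gx = np.maximum(np.maximum(dxm, 0.0), -np.minimum(dxp, 0.0)) ** 2
    gy = np.maximum(np.maximum(dym, 0.0), -np.minimum(dyp, 0.0)) ** 2
    return np.sqrt(gx + gy)

for _ in range(steps):                 # phi_t + F |grad(phi)| = 0, forward Euler
    phi -= dt * speed * upwind_grad_mag(phi, h)

# No explicit front tracking or de-looping: the moved front is still just {phi = 0}.
print(np.count_nonzero(phi < 0))       # enclosed area grows as the front expands
```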

  10. A Robust Blind Quantum Copyright Protection Method for Colored Images Based on Owner's Signature

    NASA Astrophysics Data System (ADS)

    Heidari, Shahrokh; Gheibi, Reza; Houshmand, Monireh; Nagata, Koji

    2017-08-01

    Watermarking is the imperceptible embedding of watermark bits into multimedia data for use in different applications. Among all its applications, copyright protection is the most prominent usage, which conceals information about the owner in the carrier so as to prohibit others from asserting copyright. This application requires a high level of robustness. In this paper, a new blind quantum copyright protection method based on the owner's signature in RGB images is proposed. The method utilizes one of the RGB channels as an indicator, and the two remaining channels are used for embedding information about the owner. In our contribution, the owner's signature is considered as text. Therefore, in order to embed it in a colored image as a watermark, a new quantum representation of text based on the ASCII character set is offered. Experimental results, analyzed in the MATLAB environment, show that the presented scheme performs well against attacks and can be used to find out who the real owner is. Finally, the discussed quantum copyright protection method is compared with a related work, and our analysis confirms that the presented scheme is more secure and applicable than the previous ones found in the literature.

  11. Force Field for Water Based on Neural Network.

    PubMed

    Wang, Hao; Yang, Weitao

    2018-05-18

    We developed a novel neural network based force field for water, trained on high-level ab initio theory. The force field was built on the electrostatically embedded many-body expansion method truncated at binary interactions. The many-body expansion method is a common strategy to partition the total Hamiltonian of large systems into a hierarchy of few-body terms. Neural networks were trained to represent electrostatically embedded one-body and two-body interactions, which require as input only one- and two-water-molecule calculations at the level of the ab initio electronic structure method CCSD/aug-cc-pVDZ embedded in the molecular mechanics water environment, making this an efficient general approach to force field construction. Structural and dynamic properties of liquid water calculated with our force field show good agreement with experimental results. We constructed two sets of neural network based force fields: non-polarizable and polarizable force fields. Simulation results show that the non-polarizable force field using fixed TIP3P charges already behaves well, since polarization effects and many-body effects are implicitly included due to the electrostatic embedding scheme. Our results demonstrate that the electrostatically embedded many-body expansion combined with neural networks provides a promising and systematic way to build the next generation of force fields at high accuracy and low computational cost, especially for large systems.
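
    A minimal sketch of the two-body-truncated many-body expansion that the force field described above is built on; the per-fragment energy functions are placeholders standing in for the trained neural networks and the embedding field, which are assumptions for illustration only.

```python
from itertools import combinations

def one_body_energy(monomer):
    """Placeholder for the trained one-body model E_i (embedded monomer energy)."""
    return -76.3 + 0.01 * sum(abs(c) for atom in monomer for c in atom)

def two_body_energy(mono_i, mono_j):
    """Placeholder for the trained two-body model E_ij (embedded dimer energy)."""
    return one_body_energy(mono_i) + one_body_energy(mono_j) - 0.005

def mbe2_total_energy(monomers):
    """Many-body expansion truncated at pairwise (binary) interactions:
       E = sum_i E_i + sum_{i<j} [ E_ij - E_i - E_j ]."""
    e1 = [one_body_energy(m) for m in monomers]
    total = sum(e1)
    for i, j in combinations(range(len(monomers)), 2):
        total += two_body_energy(monomers[i], monomers[j]) - e1[i] - e1[j]
    return total

# Three "water molecules" given as lists of atomic coordinates (toy numbers).
waters = [
    [(0.0, 0.0, 0.0), (0.96, 0.0, 0.0), (-0.24, 0.93, 0.0)],
    [(3.0, 0.0, 0.0), (3.96, 0.0, 0.0), (2.76, 0.93, 0.0)],
    [(0.0, 3.0, 0.0), (0.96, 3.0, 0.0), (-0.24, 3.93, 0.0)],
]
print(mbe2_total_energy(waters))
```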

  12. Consensus embedding: theory, algorithms and application to segmentation and classification of biomedical data

    PubMed Central

    2012-01-01

    Background Dimensionality reduction (DR) enables the construction of a lower dimensional space (embedding) from a higher dimensional feature space while preserving object-class discriminability. However several popular DR approaches suffer from sensitivity to choice of parameters and/or presence of noise in the data. In this paper, we present a novel DR technique known as consensus embedding that aims to overcome these problems by generating and combining multiple low-dimensional embeddings, hence exploiting the variance among them in a manner similar to ensemble classifier schemes such as Bagging. We demonstrate theoretical properties of consensus embedding which show that it will result in a single stable embedding solution that preserves information more accurately as compared to any individual embedding (generated via DR schemes such as Principal Component Analysis, Graph Embedding, or Locally Linear Embedding). Intelligent sub-sampling (via mean-shift) and code parallelization are utilized to provide for an efficient implementation of the scheme. Results Applications of consensus embedding are shown in the context of classification and clustering as applied to: (1) image partitioning of white matter and gray matter on 10 different synthetic brain MRI images corrupted with 18 different combinations of noise and bias field inhomogeneity, (2) classification of 4 high-dimensional gene-expression datasets, (3) cancer detection (at a pixel-level) on 16 image slices obtained from 2 different high-resolution prostate MRI datasets. In over 200 different experiments concerning classification and segmentation of biomedical data, consensus embedding was found to consistently outperform both linear and non-linear DR methods within all applications considered. Conclusions We have presented a novel framework termed consensus embedding which leverages ensemble classification theory within dimensionality reduction, allowing for application to a wide range of high-dimensional biomedical data classification and segmentation problems. Our generalizable framework allows for improved representation and classification in the context of both imaging and non-imaging data. The algorithm offers a promising solution to problems that currently plague DR methods, and may allow for extension to other areas of biomedical data analysis. PMID:22316103

  13. A meta-classifier for detecting prostate cancer by quantitative integration of in vivo magnetic resonance spectroscopy and magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Viswanath, Satish; Tiwari, Pallavi; Rosen, Mark; Madabhushi, Anant

    2008-03-01

    Recently, in vivo Magnetic Resonance Imaging (MRI) and Magnetic Resonance Spectroscopy (MRS) have emerged as promising new modalities to aid in prostate cancer (CaP) detection. MRI provides anatomic and structural information of the prostate while MRS provides functional data pertaining to biochemical concentrations of metabolites such as creatine, choline and citrate. We have previously presented a hierarchical clustering scheme for CaP detection on in vivo prostate MRS and have recently developed a computer-aided method for CaP detection on in vivo prostate MRI. In this paper we present a novel scheme to develop a meta-classifier to detect CaP in vivo via quantitative integration of multimodal prostate MRS and MRI by use of non-linear dimensionality reduction (NLDR) methods including spectral clustering and locally linear embedding (LLE). Quantitative integration of multimodal image data (MRI and PET) involves the concatenation of image intensities following image registration. However multimodal data integration is non-trivial when the individual modalities include spectral and image intensity data. We propose a data combination solution wherein we project the feature spaces (image intensities and spectral data) associated with each of the modalities into a lower dimensional embedding space via NLDR. NLDR methods preserve the relationships between the objects in the original high dimensional space when projecting them into the reduced low dimensional space. Since the original spectral and image intensity data are divorced from their original physical meaning in the reduced dimensional space, data at the same spatial location can be integrated by concatenating the respective embedding vectors. Unsupervised consensus clustering is then used to partition objects into different classes in the combined MRS and MRI embedding space. Quantitative results of our multimodal computer-aided diagnosis scheme on 16 sets of patient data obtained from the ACRIN trial, for which corresponding histological ground truth for spatial extent of CaP is known, show a marginally higher sensitivity, specificity, and positive predictive value compared to corresponding CAD results with the individual modalities.

  14. Introducing Convective Cloud Microphysics to a Deep Convection Parameterization Facilitating Aerosol Indirect Effects

    NASA Astrophysics Data System (ADS)

    Alapaty, K.; Zhang, G. J.; Song, X.; Kain, J. S.; Herwehe, J. A.

    2012-12-01

    Short lived pollutants such as aerosols play an important role in modulating not only the radiative balance but also cloud microphysical properties and precipitation rates. In the past, to understand the interactions of aerosols with clouds, several cloud-resolving modeling studies were conducted. These studies indicated that in the presence of anthropogenic aerosols, single-phase deep convection precipitation is reduced or suppressed. On the other hand, anthropogenic aerosol pollution led to enhanced precipitation for mixed-phase deep convective clouds. To date, there have not been many efforts to incorporate such aerosol indirect effects (AIE) in mesoscale models or global models that use parameterization schemes for deep convection. Thus, the objective of this work is to implement a diagnostic cloud microphysical scheme directly into a deep convection parameterization facilitating aerosol indirect effects in the WRF-CMAQ integrated modeling systems. Major research issues addressed in this study are: What is the sensitivity of a deep convection scheme to cloud microphysical processes represented by a bulk double-moment scheme? How close are the simulated cloud water paths as compared to observations? Does increased aerosol pollution lead to increased precipitation for mixed-phase clouds? These research questions are addressed by performing several WRF simulations using the Kain-Fritsch convection parameterization and a diagnostic cloud microphysical scheme. In the first set of simulations (control simulations) the WRF model is used to simulate two scenarios of deep convection over the continental U.S. during two summer periods at 36 km grid resolution. In the second set, these simulations are repeated after incorporating a diagnostic cloud microphysical scheme to study the impacts of inclusion of cloud microphysical processes. Finally, in the third set, aerosol concentrations simulated by the CMAQ modeling system are supplied to the embedded cloud microphysical scheme to study impacts of aerosol concentrations on precipitation and radiation fields. Observations available from the ARM microbase data, the SURFRAD network, GOES imagery, and other reanalysis and measurements will be used to analyze the impacts of a cloud microphysical scheme and aerosol concentrations on parameterized convection.

  15. Watermarking protocols for authentication and ownership protection based on timestamps and holograms

    NASA Astrophysics Data System (ADS)

    Dittmann, Jana; Steinebach, Martin; Croce Ferri, Lucilla

    2002-04-01

    Digital watermarking has become an accepted technology for enabling multimedia protection schemes. One problem here is the security of these schemes. Without a suitable framework, watermarks can be replaced and manipulated. We discuss different protocols providing security against rightful-ownership attacks and other fraud attempts. We compare the characteristics of existing protocols for different media, such as direct embedding or seed-based embedding, and the required attributes of the watermarking technology, such as robustness or payload. We introduce two new media-independent protocol schemes for rightful ownership authentication. With the first scheme we ensure the security of digital watermarks used for ownership protection with a combination of two watermarks: a first watermark of the copyright holder and a second watermark from a Trusted Third Party (TTP). It is based on hologram embedding and the watermark consists of, e.g., a company logo. As an example we use digital images and specify the properties of the embedded additional security information. We identify components necessary for the security protocol, such as a timestamp, PKI and cryptographic algorithms. The second scheme is used for authentication. It is designed for invertible watermarking applications which require high data integrity. We combine digital signature schemes and digital watermarking to provide publicly verifiable integrity. The original data can only be reproduced with a secret key. Both approaches provide solutions for copyright and authentication watermarking and are introduced for image data but can be easily adopted for video and audio data as well.

  16. Principled design for an integrated computational environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Disessa, A.A.

    Boxer is a computer language designed to be the base of an integrated computational environment providing a broad array of functionality -- from text editing to programming -- for naive and novice users. It stands in the line of Lisp inspired languages (Lisp, Logo, Scheme), but differs from these in achieving much of its understandability from pervasive use of a spatial metaphor reinforced through suitable graphics. This paper describes a set of learnability and understandability issues first and then uses them to motivate design decisions made concerning Boxer and the environment in which it is embedded.

  17. Polarization-Analyzing CMOS Image Sensor With Monolithically Embedded Polarizer for Microchemistry Systems.

    PubMed

    Tokuda, T; Yamada, H; Sasagawa, K; Ohta, J

    2009-10-01

    This paper proposes and demonstrates a polarization-analyzing CMOS sensor based on image sensor architecture. The sensor was designed targeting applications for chiral analysis in a microchemistry system. The sensor features a monolithically embedded polarizer. Embedded polarizers with different angles were implemented to realize a real-time absolute measurement of the incident polarization angle. Although the pixel-level performance was confirmed to be limited, estimation schemes based on the variation of the polarizer angle provided a promising performance for real-time polarization measurements. An estimation scheme using 180 pixels in 1° steps provided an estimation accuracy of 0.04°. Polarimetric measurements of chiral solutions were also successfully performed to demonstrate the applicability of the sensor to optical chiral analysis.
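
    A minimal sketch of how an absolute polarization angle could be estimated from many pixels whose embedded polarizer angles differ (here 180 angles in 1° steps): a Malus-law response is linear in cos 2θ and sin 2θ, so a least-squares fit recovers the incident angle. The response model and noise level are illustrative assumptions, not the sensor's measured characteristics.

```python
import numpy as np

rng = np.random.default_rng(1)

true_angle_deg = 37.25                       # incident polarization angle to recover
theta = np.deg2rad(np.arange(180.0))         # 180 pixel polarizer angles, 1 degree apart

# Assumed Malus-law response per pixel: I = A * cos^2(theta - phi) + offset, plus noise.
A, offset = 1.0, 0.2
intensity = A * np.cos(theta - np.deg2rad(true_angle_deg)) ** 2 + offset
intensity += rng.normal(scale=0.01, size=theta.size)

# cos^2(theta - phi) = 1/2 + 1/2 cos(2 theta) cos(2 phi) + 1/2 sin(2 theta) sin(2 phi),
# so the model is linear in [1, cos 2theta, sin 2theta]; solve by least squares.
design = np.column_stack([np.ones_like(theta), np.cos(2 * theta), np.sin(2 * theta)])
coeffs, *_ = np.linalg.lstsq(design, intensity, rcond=None)
estimate_deg = 0.5 * np.degrees(np.arctan2(coeffs[2], coeffs[1])) % 180.0

print(round(float(estimate_deg), 3))         # close to 37.25 at this noise level
```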

  18. Spatial-frequency composite watermarking for digital image copyright protection

    NASA Astrophysics Data System (ADS)

    Su, Po-Chyi; Kuo, C.-C. Jay

    2000-05-01

    Digital watermarks can be classified into two categories according to the embedding and retrieval domain, i.e. spatial- and frequency-domain watermarks. Because the two watermarks have different characteristics and limitations, combination of them can have various interesting properties when applied to different applications. In this research, we examine two spatial-frequency composite watermarking schemes. In both cases, a frequency-domain watermarking technique is applied as a baseline structure in the system. The embedded frequency- domain watermark is robust against filtering and compression. A spatial-domain watermarking scheme is then built to compensate some deficiency of the frequency-domain scheme. The first composite scheme is to embed a robust watermark in images to convey copyright or author information. The frequency-domain watermark contains owner's identification number while the spatial-domain watermark is embedded for image registration to resist cropping attack. The second composite scheme is to embed fragile watermark for image authentication. The spatial-domain watermark helps in locating the tampered part of the image while the frequency-domain watermark indicates the source of the image and prevents double watermarking attack. Experimental results show that the two watermarks do not interfere with each other and different functionalities can be achieved. Watermarks in both domains are detected without resorting to the original image. Furthermore, the resulting watermarked image can still preserve high fidelity without serious visual degradation.

  19. Optical colour image watermarking based on phase-truncated linear canonical transform and image decomposition

    NASA Astrophysics Data System (ADS)

    Su, Yonggang; Tang, Chen; Li, Biyuan; Lei, Zhenkun

    2018-05-01

    This paper presents a novel optical colour image watermarking scheme based on phase-truncated linear canonical transform (PT-LCT) and image decomposition (ID). In this proposed scheme, a PT-LCT-based asymmetric cryptography is designed to encode the colour watermark into a noise-like pattern, and an ID-based multilevel embedding method is constructed to embed the encoded colour watermark into a colour host image. The PT-LCT-based asymmetric cryptography, which can be optically implemented by double random phase encoding with a quadratic phase system, can provide a higher security to resist various common cryptographic attacks. And the ID-based multilevel embedding method, which can be digitally implemented by a computer, can make the information of the colour watermark disperse better in the colour host image. The proposed colour image watermarking scheme possesses high security and can achieve a higher robustness while preserving the watermark’s invisibility. The good performance of the proposed scheme has been demonstrated by extensive experiments and comparison with other relevant schemes.

  20. Implementation of a web-based, interactive polytrauma tutorial in computed tomography for radiology residents: how we do it.

    PubMed

    Schlorhaufer, C; Behrends, M; Diekhaus, G; Keberle, M; Weidemann, J

    2012-12-01

    Due to the time factor in polytraumatized patients, all relevant pathologies in a polytrauma computed tomography (CT) scan have to be read and communicated very quickly. During radiology residency, the acquisition of effective reading schemes based on typical polytrauma pathologies is very important. Thus, an online tutorial for the structured diagnosis of polytrauma CT was developed. A didactic concept was developed based on current multimedia theories such as cognitive load theory. The learning management system ILIAS was chosen as the web environment. CT data sets were converted into online scrollable QuickTime movies. Audiovisual tutorial movies with guided image analyses by a consultant radiologist were recorded. The polytrauma tutorial consists of chapterized text content and embedded interactive scrollable CT data sets. Selected trauma pathologies are demonstrated to the user by guiding tutor movies. Basic reading schemes are communicated with the help of detailed commented movies of normal data sets. Common and important pathologies could be explored in a self-directed manner. Ambitious didactic concepts can be supported by a web-based application on the basis of cognitive load theory and currently available software tools. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  1. No need for external orthogonality in subsystem density-functional theory.

    PubMed

    Unsleber, Jan P; Neugebauer, Johannes; Jacob, Christoph R

    2016-08-03

    Recent reports on the necessity of using externally orthogonal orbitals in subsystem density-functional theory (SDFT) [Annu. Rep. Comput. Chem., 8, 2012, 53; J. Phys. Chem. A, 118, 2014, 9182] are re-investigated. We show that in the basis-set limit, supermolecular Kohn-Sham-DFT (KS-DFT) densities can exactly be represented as a sum of subsystem densities, even if the subsystem orbitals are not externally orthogonal. This is illustrated using both an analytical example and in basis-set free numerical calculations for an atomic test case. We further show that even with finite basis sets, SDFT calculations using accurate reconstructed potentials can closely approach the supermolecular KS-DFT density, and that the deviations between SDFT and KS-DFT decrease as the basis-set limit is approached. Our results demonstrate that formally, there is no need to enforce external orthogonality in SDFT, even though this might be a useful strategy when developing projection-based DFT embedding schemes.

  2. Realisation and robustness evaluation of a blind spatial domain watermarking technique

    NASA Astrophysics Data System (ADS)

    Parah, Shabir A.; Sheikh, Javaid A.; Assad, Umer I.; Bhat, Ghulam M.

    2017-04-01

    A blind digital image watermarking scheme based on the spatial domain is presented and investigated in this paper. The watermark has been embedded in intermediate significant bit planes, besides the least significant bit plane, at the address locations determined by a pseudorandom address vector (PAV). The watermark embedding using the PAV makes it difficult for an adversary to locate the watermark and hence adds to the security of the system. The scheme has been evaluated to ascertain the spatial locations that are robust to various image processing and geometric attacks: JPEG compression, additive white Gaussian noise, salt and pepper noise, filtering and rotation. The experimental results obtained reveal an interesting fact: for all the above-mentioned attacks other than rotation, the higher the bit plane in which the watermark is embedded, the more robust the system. Further, the perceptual quality of the watermarked images obtained in the proposed system has been compared with some state-of-the-art watermarking techniques. The proposed technique outperforms the techniques under comparison, even when compared with the worst-case peak signal-to-noise ratio obtained in our scheme.
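
    A minimal sketch of the general idea described above: watermark bits are written into a chosen (intermediate or least) significant bit plane at pixel addresses drawn from a keyed pseudorandom sequence. The PRNG, key and bit plane below are illustrative assumptions, not the paper's exact pseudorandom address vector construction.

```python
import numpy as np

def embed_bits(image, bits, key, plane):
    """Write watermark bits into bit plane `plane` (0 = LSB) at keyed pseudorandom
    pixel addresses; returns the watermarked image."""
    flat = image.astype(np.uint8).flatten()
    rng = np.random.default_rng(key)
    addresses = rng.choice(flat.size, size=len(bits), replace=False)
    for addr, b in zip(addresses, bits):
        flat[addr] = (flat[addr] & ~np.uint8(1 << plane)) | np.uint8(b << plane)
    return flat.reshape(image.shape)

def extract_bits(marked, n_bits, key, plane):
    """Regenerate the same address sequence from the key and read the bits back."""
    flat = marked.astype(np.uint8).flatten()
    rng = np.random.default_rng(key)
    addresses = rng.choice(flat.size, size=n_bits, replace=False)
    return [int((flat[addr] >> plane) & 1) for addr in addresses]

image = np.random.default_rng(0).integers(0, 256, size=(64, 64), dtype=np.uint8)
watermark = [1, 0, 1, 1, 0, 0, 1, 0]
marked = embed_bits(image, watermark, key=2024, plane=2)   # an intermediate bit plane
assert extract_bits(marked, len(watermark), key=2024, plane=2) == watermark
```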

  3. Enhanced multi-protocol analysis via intelligent supervised embedding (EMPrAvISE): detecting prostate cancer on multi-parametric MRI

    NASA Astrophysics Data System (ADS)

    Viswanath, Satish; Bloch, B. Nicholas; Chappelow, Jonathan; Patel, Pratik; Rofsky, Neil; Lenkinski, Robert; Genega, Elizabeth; Madabhushi, Anant

    2011-03-01

    Currently, there is significant interest in developing methods for quantitative integration of multi-parametric (structural, functional) imaging data with the objective of building automated meta-classifiers to improve disease detection, diagnosis, and prognosis. Such techniques are required to address the differences in dimensionalities and scales of individual protocols, while deriving an integrated multi-parametric data representation which best captures all disease-pertinent information available. In this paper, we present a scheme called Enhanced Multi-Protocol Analysis via Intelligent Supervised Embedding (EMPrAvISE); a powerful, generalizable framework applicable to a variety of domains for multi-parametric data representation and fusion. Our scheme utilizes an ensemble of embeddings (via dimensionality reduction, DR); thereby exploiting the variance amongst multiple uncorrelated embeddings in a manner similar to ensemble classifier schemes (e.g. Bagging, Boosting). We apply this framework to the problem of prostate cancer (CaP) detection on 12 3 Tesla pre-operative in vivo multi-parametric (T2-weighted, Dynamic Contrast Enhanced, and Diffusion-weighted) magnetic resonance imaging (MRI) studies, in turn comprising a total of 39 2D planar MR images. We first align the different imaging protocols via automated image registration, followed by quantification of image attributes from individual protocols. Multiple embeddings are generated from the resultant high-dimensional feature space which are then combined intelligently to yield a single stable solution. Our scheme is employed in conjunction with graph embedding (for DR) and probabilistic boosting trees (PBTs) to detect CaP on multi-parametric MRI. Finally, a probabilistic pairwise Markov Random Field algorithm is used to apply spatial constraints to the result of the PBT classifier, yielding a per-voxel classification of CaP presence. Per-voxel evaluation of detection results against ground truth for CaP extent on MRI (obtained by spatially registering pre-operative MRI with available whole-mount histological specimens) reveals that EMPrAvISE yields a statistically significant improvement (AUC=0.77) over classifiers constructed from individual protocols (AUC=0.62, 0.62, 0.65, for T2w, DCE, DWI respectively) as well as one trained using multi-parametric feature concatenation (AUC=0.67).

  4. An Efficient Semi-fragile Watermarking Scheme for Tamper Localization and Recovery

    NASA Astrophysics Data System (ADS)

    Hou, Xiang; Yang, Hui; Min, Lianquan

    2018-03-01

    To address the problem that remote sensing images are vulnerable to tampering, a semi-fragile watermarking scheme was proposed. A binary random matrix was used as the authentication watermark, which was embedded by quantizing the maximum absolute value of the directional sub-band coefficients. The average gray level of every non-overlapping 4×4 block was adopted as the recovery watermark, which was embedded in the least significant bit. Watermark detection can be done directly without resorting to the original images. Experimental results showed our method was robust against rational distortions to a certain extent. At the same time, it was fragile to malicious manipulation, and realized accurate localization and approximate recovery of the tampered regions. Therefore, this scheme can protect the security of remote sensing images effectively.
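
    A minimal sketch of the recovery-watermark part described above: the average gray level of each non-overlapping 4×4 block is stored in least significant bits so that a tampered block can later be approximated from its stored mean. Where the bits are placed (here, simply the first LSBs in raster order) is an illustrative assumption, not the paper's block mapping.

```python
import numpy as np

def block_means(image, bs=4):
    """Average gray level of every non-overlapping bs x bs block (the recovery data)."""
    h, w = image.shape
    blocks = image[:h - h % bs, :w - w % bs].reshape(h // bs, bs, w // bs, bs)
    return blocks.mean(axis=(1, 3)).astype(np.uint8)

def embed_recovery_lsb(image, bs=4):
    """Serialize the 8-bit block means and hide them in the image's LSB plane."""
    means = block_means(image, bs)
    bits = np.unpackbits(means.flatten())               # 8 bits per block mean
    flat = image.flatten().copy()
    if bits.size > flat.size:
        raise ValueError("image too small to carry its own recovery watermark")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # overwrite LSBs
    return flat.reshape(image.shape)

def approximate_tampered_block(marked, block_row, block_col, bs=4):
    """Read back the stored mean of one block to approximately recover it."""
    n_blocks_w = marked.shape[1] // bs
    index = block_row * n_blocks_w + block_col
    lsbs = marked.flatten()[index * 8:(index + 1) * 8] & 1
    return int(np.packbits(lsbs)[0])                     # stored average gray level

img = np.random.default_rng(3).integers(0, 256, size=(64, 64), dtype=np.uint8)
marked = embed_recovery_lsb(img)
print(approximate_tampered_block(marked, 2, 5), block_means(img)[2, 5])  # equal values
```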

  5. Distributed Parallel Processing and Dynamic Load Balancing Techniques for Multidisciplinary High Speed Aircraft Design

    NASA Technical Reports Server (NTRS)

    Krasteva, Denitza T.

    1998-01-01

    Multidisciplinary design optimization (MDO) for large-scale engineering problems poses many challenges (e.g., the design of an efficient concurrent paradigm for global optimization based on disciplinary analyses, expensive computations over vast data sets, etc.). This work focuses on the application of distributed schemes for massively parallel architectures to MDO problems, as a tool for reducing computation time and solving larger problems. The specific problem considered here is configuration optimization of a high speed civil transport (HSCT), and the efficient parallelization of the embedded paradigm for reasonable design space identification. Two distributed dynamic load balancing techniques (random polling and global round robin with message combining) and two necessary termination detection schemes (global task count and token passing) were implemented and evaluated in terms of effectiveness and scalability to large problem sizes and a thousand processors. The effect of certain parameters on execution time was also inspected. Empirical results demonstrated stable performance and effectiveness for all schemes, and the parametric study showed that the selected algorithmic parameters have a negligible effect on performance.

  6. Computer-aided diagnosis of lung cancer: the effect of training data sets on classification accuracy of lung nodules.

    PubMed

    Gong, Jing; Liu, Ji-Yu; Sun, Xi-Wen; Zheng, Bin; Nie, Sheng-Dong

    2018-02-05

    This study aims to develop a computer-aided diagnosis (CADx) scheme for classification between malignant and benign lung nodules, and also assess whether CADx performance changes in detecting nodules associated with early and advanced stage lung cancer. The study involves 243 biopsy-confirmed pulmonary nodules. Among them, 76 are benign, 81 are stage I and 86 are stage III malignant nodules. The cases are separated into three data sets involving: (1) all nodules, (2) benign and stage I malignant nodules, and (3) benign and stage III malignant nodules. A CADx scheme is applied to segment lung nodules depicted on computed tomography images and we initially computed 66 3D image features. Then, three machine learning models namely, a support vector machine, naïve Bayes classifier and linear discriminant analysis, are separately trained and tested by using three data sets and a leave-one-case-out cross-validation method embedded with a Relief-F feature selection algorithm. When separately using three data sets to train and test three classifiers, the average areas under receiver operating characteristic curves (AUC) are 0.94, 0.90 and 0.99, respectively. When using the classifiers trained using data sets with all nodules, average AUC values are 0.88 and 0.99 for detecting early and advanced stage nodules, respectively. AUC values computed from three classifiers trained using the same data set are consistent without statistically significant difference (p  >  0.05). This study demonstrates (1) the feasibility of applying a CADx scheme to accurately distinguish between benign and malignant lung nodules, and (2) a positive trend between CADx performance and cancer progression stage. Thus, in order to increase CADx performance in detecting subtle and early cancer, training data sets should include more diverse early stage cancer cases.
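
    A minimal sketch of the evaluation protocol described above, using scikit-learn's leave-one-out split, a simple univariate filter in place of Relief-F (which is not part of scikit-learn), an SVM classifier, and synthetic stand-in data for the 66 nodule features.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the 66 features per nodule and the benign/malignant labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(157, 66))
y = rng.integers(0, 2, size=157)
X[y == 1, :5] += 0.8          # make a few features weakly informative

scores = np.zeros(len(y))
for train_idx, test_idx in LeaveOneOut().split(X):
    # Feature selection is refit inside every fold so the held-out case never
    # influences which features are kept (leave-one-case-out protocol).
    model = make_pipeline(
        StandardScaler(),
        SelectKBest(f_classif, k=10),
        SVC(kernel="rbf", probability=True),
    )
    model.fit(X[train_idx], y[train_idx])
    scores[test_idx] = model.predict_proba(X[test_idx])[:, 1]

print("AUC:", round(roc_auc_score(y, scores), 3))
```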

  7. Computer-aided diagnosis of lung cancer: the effect of training data sets on classification accuracy of lung nodules

    NASA Astrophysics Data System (ADS)

    Gong, Jing; Liu, Ji-Yu; Sun, Xi-Wen; Zheng, Bin; Nie, Sheng-Dong

    2018-02-01

    This study aims to develop a computer-aided diagnosis (CADx) scheme for classification between malignant and benign lung nodules, and also assess whether CADx performance changes in detecting nodules associated with early and advanced stage lung cancer. The study involves 243 biopsy-confirmed pulmonary nodules. Among them, 76 are benign, 81 are stage I and 86 are stage III malignant nodules. The cases are separated into three data sets involving: (1) all nodules, (2) benign and stage I malignant nodules, and (3) benign and stage III malignant nodules. A CADx scheme is applied to segment lung nodules depicted on computed tomography images and we initially computed 66 3D image features. Then, three machine learning models namely, a support vector machine, naïve Bayes classifier and linear discriminant analysis, are separately trained and tested by using three data sets and a leave-one-case-out cross-validation method embedded with a Relief-F feature selection algorithm. When separately using three data sets to train and test three classifiers, the average areas under receiver operating characteristic curves (AUC) are 0.94, 0.90 and 0.99, respectively. When using the classifiers trained using data sets with all nodules, average AUC values are 0.88 and 0.99 for detecting early and advanced stage nodules, respectively. AUC values computed from three classifiers trained using the same data set are consistent without statistically significant difference (p  >  0.05). This study demonstrates (1) the feasibility of applying a CADx scheme to accurately distinguish between benign and malignant lung nodules, and (2) a positive trend between CADx performance and cancer progression stage. Thus, in order to increase CADx performance in detecting subtle and early cancer, training data sets should include more diverse early stage cancer cases.

  8. A novel lost packets recovery scheme based on visual secret sharing

    NASA Astrophysics Data System (ADS)

    Lu, Kun; Shan, Hong; Li, Zhi; Niu, Zhao

    2017-08-01

    In this paper, a novel lost packet recovery scheme is proposed which encrypts the effective parts of an original packet into two shadow packets based on (2, 2)-threshold XOR-based visual secret sharing (VSS). The two shadow packets, used as watermarks, are embedded into two normal data packets with digital watermark embedding technology and then sent from one sensor node to another. Each shadow packet reveals no information about the original packet, which greatly improves the security of original packet delivery. The two shadow packets extracted from the two received normal data packets delivered from a sensor node can recover the original packet losslessly based on XOR-based VSS. The performance analysis shows that the proposed scheme provides essential services for as long as possible in the presence of a selective forwarding attack. The proposed scheme does not increase the amount of additional traffic, namely, it has lower energy consumption, which is suitable for Wireless Sensor Networks (WSNs).
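
    A minimal sketch of the (2, 2)-threshold XOR-based sharing step described above (the watermark embedding into carrier packets and the WSN transport are omitted): one shadow is uniformly random, the other is the payload XORed with it, so either shadow alone reveals nothing and the two together recover the payload losslessly.

```python
import os

def make_shadows(payload: bytes):
    """(2,2)-threshold XOR secret-sharing style split of a packet payload."""
    shadow1 = os.urandom(len(payload))                       # uniformly random share
    shadow2 = bytes(a ^ b for a, b in zip(payload, shadow1)) # payload XOR shadow1
    return shadow1, shadow2

def recover(shadow1: bytes, shadow2: bytes) -> bytes:
    """Lossless recovery: XOR of the two shadows reproduces the original payload."""
    return bytes(a ^ b for a, b in zip(shadow1, shadow2))

payload = b"effective part of the original packet"
s1, s2 = make_shadows(payload)
assert recover(s1, s2) == payload
```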

  9. Secure Communication Based on a Hybrid of Chaos and Ica Encryptions

    NASA Astrophysics Data System (ADS)

    Chen, Wei Ching; Yuan, John

    Chaos and independent component analysis (ICA) encryptions are two novel schemes for secure communications. In this paper, a new scheme combining chaos and ICA techniques is proposed to enhance the security level during communication. In this scheme, a master chaotic system is embedded at the transmitter. The message signal is mixed with a chaotic signal and a Gaussian white noise into two mixed signals and then transmitted to the receiver through the public channels. A signal for synchronization is transmitted through another public channel to the receiver where a slave chaotic system is embedded to reproduce the chaotic signal. A modified ICA is used to recover the message signal at the receiver. Since only two of the three transmitted signals contain the information of message signal, a hacker would not be able to retrieve the message signal by using ICA even though all the transmitted signals are intercepted. Spectrum analyses are used to prove that the message signal can be securely hidden under this scheme.

  10. Numerical Methods Using B-Splines

    NASA Technical Reports Server (NTRS)

    Shariff, Karim; Merriam, Marshal (Technical Monitor)

    1997-01-01

    The seminar will discuss (1) The current range of applications for which B-spline schemes may be appropriate (2) The property of high-resolution and the relationship between B-spline and compact schemes (3) Comparison between finite-element, Hermite finite element and B-spline schemes (4) Mesh embedding using B-splines (5) A method for the incompressible Navier-Stokes equations in curvilinear coordinates using divergence-free expansions.

  11. Watermarking textures in video games

    NASA Astrophysics Data System (ADS)

    Liu, Huajian; Berchtold, Waldemar; Schäfer, Marcel; Lieb, Patrick; Steinebach, Martin

    2014-02-01

    Digital watermarking is a promising solution to video game piracy. In this paper, based on the analysis of special challenges and requirements in terms of watermarking textures in video games, a novel watermarking scheme for DDS textures in video games is proposed. To meet the performance requirements in video game applications, the proposed algorithm embeds the watermark message directly in the compressed stream in DDS files and can be straightforwardly applied in watermark container technique for real-time embedding. Furthermore, the embedding approach achieves high watermark payload to handle collusion secure fingerprinting codes with extreme length. Hence, the scheme is resistant to collusion attacks, which is indispensable in video game applications. The proposed scheme is evaluated in aspects of transparency, robustness, security and performance. Especially, in addition to classical objective evaluation, the visual quality and playing experience of watermarked games is assessed subjectively in game playing.

  12. Embedding beyond electrostatics-The role of wave function confinement.

    PubMed

    Nåbo, Lina J; Olsen, Jógvan Magnus Haugaard; Holmgaard List, Nanna; Solanko, Lukasz M; Wüstner, Daniel; Kongsted, Jacob

    2016-09-14

    We study excited states of cholesterol in solution and show that, in this specific case, solute wave-function confinement is the main effect of the solvent. This is rationalized on the basis of the polarizable density embedding scheme, which in addition to polarizable embedding includes non-electrostatic repulsion that effectively confines the solute wave function to its cavity. We illustrate how the inclusion of non-electrostatic repulsion results in a successful identification of the intense π → π* transition, which was not possible using an embedding method that only includes electrostatics. This underlines the importance of non-electrostatic repulsion in quantum-mechanical embedding-based methods.

  13. Creating Culturally Sustainable Agri-Environmental Schemes

    ERIC Educational Resources Information Center

    Burton, Rob J. F.; Paragahawewa, Upananda Herath

    2011-01-01

    Evidence is emerging from across Europe that contemporary agri-environmental schemes are having only limited, if any, influence on farmers' long-term attitudes towards the environment. In this theoretical paper we argue that these approaches are not "culturally sustainable," i.e. the actions are not becoming embedded within farming…

  14. The Use of Color as a Third Dimension on Maps

    NASA Astrophysics Data System (ADS)

    Cid, X.; Lopez, R.; Lazarus, S.

    2007-12-01

    As experts, we are trained to understand color schemes used in visualizations in our respective scientific fields. As experts we also forget how complicated graphics can be when viewed for the first time. Previous studies have shown that three-dimensional diagrams can produce a cognitive overload when rendered on a two-dimensional surface, so the same might apply to graphics that use color as a third dimension. This study was conducted to investigate the use of color as a third dimension. We looked at the use of color as a scale height on a basic topographic map, as well as the use of color as temperature. Fifty-four undergraduates from two different physics courses and REU programs during the spring and summer semesters in 2007 were given surveys regarding the use of color. Of these 54 students, eight students were chosen to participate in interviews designed to investigate, in more detail, the responses provided by the students in the hopes to discover where confusions occur. It was found that students have an embedded color scheme for temperatures of red representing hot and blue representing cold as a product of societal influences, which was expected, but there was no embedded color scheme when color was applied to height. We found that students did not have a preference when viewing a topographic map with different color schemes, but did prefer the color scheme of the figure that they viewed first. We observed that the students did have an embedded notion of what the topographic figure was representing, and tried to fit the color scheme shown to match their idea. During the interviews we also found that even the slightest deviations from a specific color scheme gives rise to confusion. These results, therefore, show the importance of detail consistency when using visualizations in a lecture where the population is composed of novices.

  15. Motives of Log Schemes

    NASA Astrophysics Data System (ADS)

    Howell, Nicholas L.

    This thesis introduces two notions of motive associated to a log scheme. We introduce a category of log motives a la Voevodsky, and prove that the embedding of Voevodsky motives is an equivalence, in particular proving that any homotopy-invariant cohomology theory of schemes extends uniquely to log schemes. In the case of a log smooth degeneration, we give an explicit construction of the motivic Albanese of the degeneration, and show that the Hodge realization of this construction gives the Albanese of the limit Hodge structure.

  16. Image encryption based on a delayed fractional-order chaotic logistic system

    NASA Astrophysics Data System (ADS)

    Wang, Zhen; Huang, Xia; Li, Ning; Song, Xiao-Na

    2012-05-01

    A new image encryption scheme is proposed based on a delayed fractional-order chaotic logistic system. In the process of generating a key stream, the time-varying delay and fractional derivative are embedded in the proposed scheme to improve the security. Such a scheme is described in detail with security analyses including correlation analysis, information entropy analysis, run statistic analysis, mean-variance gray value analysis, and key sensitivity analysis. Experimental results show that the newly proposed image encryption scheme possesses high security.
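
    A minimal sketch of the key-stream idea, using the ordinary (integer-order, undelayed) logistic map for clarity; the paper's delayed fractional-order system and its security analyses are not reproduced, and the parameters below are illustrative assumptions.

```python
import numpy as np

def logistic_keystream(x0, r, n, burn_in=500):
    """Generate n key-stream bytes by iterating the logistic map x <- r*x*(1-x)."""
    x = x0
    for _ in range(burn_in):                  # discard the transient iterations
        x = r * x * (1.0 - x)
    stream = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        stream[i] = int(x * 256) % 256        # quantize the chaotic state to a byte
    return stream

def xor_image(image, key=(0.3456, 3.99)):
    """Encrypt (or decrypt) an 8-bit image by XOR with the chaotic key stream."""
    ks = logistic_keystream(key[0], key[1], image.size)
    return (image.flatten() ^ ks).reshape(image.shape)

img = np.random.default_rng(7).integers(0, 256, size=(32, 32), dtype=np.uint8)
cipher = xor_image(img)
assert np.array_equal(xor_image(cipher), img)     # XOR with the same stream decrypts
```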

  17. Wavelet-based scalable L-infinity-oriented compression.

    PubMed

    Alecu, Alin; Munteanu, Adrian; Cornelis, Jan P H; Schelkens, Peter

    2006-09-01

    Among the different classes of coding techniques proposed in literature, predictive schemes have proven their outstanding performance in near-lossless compression. However, these schemes are incapable of providing embedded L(infinity)-oriented compression, or, at most, provide a very limited number of potential L(infinity) bit-stream truncation points. We propose a new multidimensional wavelet-based L(infinity)-constrained scalable coding framework that generates a fully embedded L(infinity)-oriented bit stream and that retains the coding performance and all the scalability options of state-of-the-art L2-oriented wavelet codecs. Moreover, our codec instantiation of the proposed framework clearly outperforms JPEG2000 in L(infinity) coding sense.

  18. First time combination of frozen density embedding theory with the algebraic diagrammatic construction scheme for the polarization propagator of second order

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prager, Stefan, E-mail: stefan.prager@iwr.uni-heidelberg.de; Dreuw, Andreas, E-mail: dreuw@uni-heidelberg.de; Zech, Alexander, E-mail: alexander.zech@unige.ch

    The combination of Frozen Density Embedding Theory (FDET) and the Algebraic Diagrammatic Construction (ADC) scheme for the polarization propagator for describing environmental effects on electronically excited states is presented. Two different ways of interfacing and expressing the so-called embedding operator are introduced. The resulting excited states are compared with supermolecular calculations of the total system at the ADC(2) level of theory. Molecular test systems were chosen to investigate molecule–environment interactions of varying strength, from dispersion interaction up to multiple hydrogen bonds. The overall difference between the supermolecular and the FDE-ADC calculations in excitation energies is lower than 0.09 eV (max) and 0.032 eV on average, which is well below the intrinsic error of the ADC(2) method itself.

  19. An improved quantum watermarking scheme using small-scale quantum circuits and color scrambling

    NASA Astrophysics Data System (ADS)

    Li, Panchi; Zhao, Ya; Xiao, Hong; Cao, Maojun

    2017-05-01

    In order to solve the problem of embedding a watermark into a quantum color image, an improved scheme using small-scale quantum circuits and color scrambling is proposed in this paper. Both the color carrier image and the color watermark image are represented using the novel enhanced quantum representation. The image sizes for the carrier and the watermark are assumed to be 2^{n+1} × 2^{n+2} and 2^n × 2^n, respectively. At first, the color of the pixels in the watermark image is scrambled using controlled rotation gates, and then the scrambled watermark with 2^n × 2^n image size and 24-qubit gray scale is expanded to an image with 2^{n+1} × 2^{n+2} image size and 3-qubit gray scale. Finally, the expanded watermark image is embedded into the carrier image by controlled-NOT gates. The extraction of the watermark is the reverse process of embedding it into the carrier image, which is achieved by applying the operations in the reverse order. Simulation-based experimental results show that the proposed scheme is superior to other similar algorithms in terms of three criteria: visual quality, the scrambling effect of the watermark image, and noise resistibility.

  20. Hiding Electronic Patient Record (EPR) in medical images: A high capacity and computationally efficient technique for e-healthcare applications.

    PubMed

    Loan, Nazir A; Parah, Shabir A; Sheikh, Javaid A; Akhoon, Jahangir A; Bhat, Ghulam M

    2017-09-01

    A high capacity and semi-reversible data hiding scheme based on the Pixel Repetition Method (PRM) and hybrid edge detection for scalable medical images has been proposed in this paper. PRM has been used to scale up the small sized image (seed image), and hybrid edge detection ensures that no important edge information is missed. The scaled up version of the seed image has been divided into 2×2 non-overlapping blocks. In each block there is one seed pixel whose status decides the number of bits to be embedded in the remaining three pixels of that block. The Electronic Patient Record (EPR)/data have been embedded by using Least Significant and Intermediate Significant Bit Substitution (ISBS). RC4 encryption has been used to add an additional security layer for the embedded EPR/data. The proposed scheme has been tested for various medical and general images and compared with some state-of-the-art techniques in the field. The experimental results reveal that the proposed scheme, besides being semi-reversible and computationally efficient, is capable of handling a high payload and as such can be used effectively for electronic healthcare applications. Copyright © 2017. Published by Elsevier Inc.
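
    A minimal sketch of the Pixel Repetition Method scale-up described above: each seed pixel is copied into a 2×2 block, after which the three non-seed pixels of every block can carry EPR bits (plain LSB substitution is used here; the seed-dependent capacity rule, hybrid edge detection, intermediate-bit substitution and RC4 layer of the full scheme are omitted).

```python
import numpy as np

def pixel_repetition_upscale(seed_image):
    """Scale the seed image up by 2 in each dimension by repeating every pixel
    into a 2x2 block (the top-left pixel of each block is the seed pixel)."""
    return np.kron(seed_image, np.ones((2, 2), dtype=seed_image.dtype))

def embed_epr_bits(scaled, bits):
    """Hide one bit in the LSB of each non-seed pixel of every 2x2 block."""
    out = scaled.copy()
    payload = list(bits)
    for r in range(0, out.shape[0], 2):
        for c in range(0, out.shape[1], 2):
            for dr, dc in ((0, 1), (1, 0), (1, 1)):     # the three non-seed pixels
                if not payload:
                    return out
                out[r + dr, c + dc] = (out[r + dr, c + dc] & 0xFE) | payload.pop(0)
    return out

seed = np.random.default_rng(5).integers(0, 256, size=(8, 8), dtype=np.uint8)
scaled = pixel_repetition_upscale(seed)          # 16 x 16 cover image
epr_bits = [1, 0, 1, 1, 0, 1, 0, 0]
stego = embed_epr_bits(scaled, epr_bits)
print(stego.shape, int(np.abs(stego.astype(int) - scaled.astype(int)).max()))  # at most 1
```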

  1. Functions, Use and Effects of Embedded Support Devices in Printed Distance Learning Materials.

    ERIC Educational Resources Information Center

    Martens, Rob; And Others

    1996-01-01

    To support distance learning, printed materials for the course are enriched with embedded support devices (ESD) such as schemes, illustrations, examples, questions, or margin texts. Results of 3 studies involving 900 Dutch university students indicated that students used and appreciated ESD, and that they led to better study results. (SLD)

  2. A Laboratory Testbed for Embedded Fuzzy Control

    ERIC Educational Resources Information Center

    Srivastava, S.; Sukumar, V.; Bhasin, P. S.; Arun Kumar, D.

    2011-01-01

    This paper presents a novel scheme called "Laboratory Testbed for Embedded Fuzzy Control of a Real Time Nonlinear System." The idea is based upon the fact that project-based learning motivates students to learn actively and to use their engineering skills acquired in their previous years of study. It also fosters initiative and focuses…

  3. Fingerprint multicast in secure video streaming.

    PubMed

    Zhao, H Vicky; Liu, K J Ray

    2006-01-01

    Digital fingerprinting is an emerging technology to protect multimedia content from illegal redistribution, where each distributed copy is labeled with unique identification information. In video streaming, huge amount of data have to be transmitted to a large number of users under stringent latency constraints, so the bandwidth-efficient distribution of uniquely fingerprinted copies is crucial. This paper investigates the secure multicast of anticollusion fingerprinted video in streaming applications and analyzes their performance. We first propose a general fingerprint multicast scheme that can be used with most spread spectrum embedding-based multimedia fingerprinting systems. To further improve the bandwidth efficiency, we explore the special structure of the fingerprint design and propose a joint fingerprint design and distribution scheme. From our simulations, the two proposed schemes can reduce the bandwidth requirement by 48% to 87%, depending on the number of users, the characteristics of video sequences, and the network and computation constraints. We also show that under the constraint that all colluders have the same probability of detection, the embedded fingerprints in the two schemes have approximately the same collusion resistance. Finally, we propose a fingerprint drift compensation scheme to improve the quality of the reconstructed sequences at the decoder's side without introducing extra communication overhead.
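
    A minimal sketch of the spread-spectrum fingerprinting that the multicast schemes above build on (the multicast distribution, video coding and anti-collusion code design are omitted): each user's copy carries a distinct pseudorandom sequence that is later detected by correlation; the host model and embedding strength are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

n_users, n_samples, strength = 8, 4096, 1.5
host = rng.normal(scale=10.0, size=n_samples)                       # host signal features
fingerprints = rng.choice([-1.0, 1.0], size=(n_users, n_samples))   # one PN sequence per user

def fingerprint_copy(user):
    """Distribute a uniquely marked copy: y_u = x + alpha * w_u."""
    return host + strength * fingerprints[user]

def detect(suspect_copy):
    """Correlation detector (non-blind: the original host is subtracted first);
    the embedded user's sequence yields the largest correlation score."""
    residual = suspect_copy - host
    scores = fingerprints @ residual / n_samples
    return int(np.argmax(scores)), scores

leaked = fingerprint_copy(5)
who, scores = detect(leaked)
print(who)          # 5
```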

  4. Adaptive grid embedding for the two-dimensional flux-split Euler equations. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Warren, Gary Patrick

    1990-01-01

    A numerical algorithm is presented for solving the 2-D flux-split Euler equations using a multigrid method with adaptive grid embedding. The method uses an unstructured data set along with a system of pointers for communication on the irregularly shaped grid topologies. An explicit two-stage time advancement scheme is implemented. A multigrid algorithm is used to provide grid level communication and to accelerate the convergence of the solution to steady state. Results are presented for a subcritical airfoil and a transonic airfoil with 3 levels of adaptation. Comparisons are made with a structured upwind Euler code which uses the same flux integration techniques as the present algorithm. Good agreement is obtained with converged surface pressure coefficients. The lift coefficients of the adaptive code are within 2 1/2 percent of the structured code for the subcritical case and within 4 1/2 percent of the structured code for the transonic case using approximately one-third the number of grid points.

  5. A Real-Time High Performance Data Compression Technique For Space Applications

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Venbrux, Jack; Bhatia, Prakash; Miller, Warner H.

    2000-01-01

    A high performance lossy data compression technique is currently being developed for space science applications under the requirement of high-speed push-broom scanning. The technique is also error-resilient in that error propagation is contained within a few scan lines. The algorithm is based on a block transform combined with bit-plane encoding; this combination results in an embedded bit string with exactly the desired compression rate. The lossy coder is described. The compression scheme performs well on a suite of test images typical of images from spacecraft instruments. Hardware implementations are in development; a functional chip set is expected by the end of 2001.

  6. A fast algorithm to compute precise type-2 centroids for real-time control applications.

    PubMed

    Chakraborty, Sumantra; Konar, Amit; Ralescu, Anca; Pal, Nikhil R

    2015-02-01

    An interval type-2 fuzzy set (IT2 FS) is characterized by its upper and lower membership functions containing all possible embedded fuzzy sets, which together are referred to as the footprint of uncertainty (FOU). The FOU results in a span of uncertainty measured in the defuzzified space and is determined by the positional difference of the centroids of all the embedded fuzzy sets taken together. This paper provides a closed-form formula to evaluate the span of uncertainty of an IT2 FS. The closed-form formula offers a precise measurement of the degree of uncertainty in an IT2 FS with a runtime complexity less than that of the classical iterative Karnik-Mendel algorithm and other formulations employing the iterative Newton-Raphson algorithm. This paper also demonstrates a real-time control application using the proposed closed-form formula of centroids, with lower root mean square error and computational overhead than the existing methods. Computer simulations for this real-time control application indicate that parallel realization of the IT2 defuzzification outperforms its competitors with respect to maximum overshoot even at high sampling rates. Furthermore, in the presence of measurement noise in system (plant) states, the proposed IT2 FS based scheme outperforms its type-1 counterpart with respect to peak overshoot and root mean square error in plant response.
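
    The span of uncertainty discussed above is the width of the centroid interval [c_l, c_r] of the IT2 FS. The paper's closed-form expression is not reproduced here; as a simple baseline, the sketch below (Python, illustrative names, assuming discretized membership grades) computes the same interval by brute-force search over the Karnik-Mendel switch point.

    import numpy as np

    def it2_centroid_interval(x, lmf, umf):
        # Centroid interval [c_l, c_r] of an interval type-2 fuzzy set, found by
        # brute-force search over the Karnik-Mendel switch point.  The span of
        # uncertainty is then simply c_r - c_l.
        order = np.argsort(x)
        x, lmf, umf = x[order], lmf[order], umf[order]
        cl, cr = np.inf, -np.inf
        for k in range(len(x) + 1):
            w_min = np.concatenate([umf[:k], lmf[k:]])  # heavy weights on small x
            w_max = np.concatenate([lmf[:k], umf[k:]])  # heavy weights on large x
            cl = min(cl, np.dot(x, w_min) / w_min.sum())
            cr = max(cr, np.dot(x, w_max) / w_max.sum())
        return cl, cr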

  7. LDFT-based watermarking resilient to local desynchronization attacks.

    PubMed

    Tian, Huawei; Zhao, Yao; Ni, Rongrong; Qin, Lunming; Li, Xuelong

    2013-12-01

    Up to now, designing a watermarking scheme that is robust against desynchronization attacks (DAs) has remained a grand challenge. Most image watermarking resynchronization schemes in the literature can survive individual global DAs (e.g., rotation, scaling, translation, and other affine transforms), but few are resilient to challenging cropping and local DAs. The main reason is that robust features for watermark synchronization are only globally invariant rather than locally invariant. In this paper, we present a blind image watermarking resynchronization scheme against local transform attacks. First, we propose a new feature transform named the local daisy feature transform (LDFT), which is not only globally but also locally invariant. Then, the binary space partitioning (BSP) tree is used to partition the geometrically invariant LDFT space. In the BSP tree, the location of each pixel is fixed under global transforms, local transforms, and cropping. Lastly, the watermarking sequence is embedded bit by bit into each leaf node of the BSP tree using the logarithmic quantization index modulation embedding method. Simulation results show that the proposed watermarking scheme can survive numerous kinds of distortions, including common image-processing attacks, local and global DAs, and noninvertible cropping.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Learn, Mark Walter

    Sandia National Laboratories is currently developing new processing and data communication architectures for use in future satellite payloads. These architectures will leverage the flexibility and performance of state-of-the-art static-random-access-memory-based Field Programmable Gate Arrays (FPGAs). One such FPGA is the radiation-hardened version of the Virtex-5 being developed by Xilinx. However, not all features of this FPGA are being radiation-hardened by design and could still be susceptible to on-orbit upsets. One such feature is the embedded hard-core PPC440 processor. Since this processor is implemented in the FPGA as a hard-core, traditional mitigation approaches such as Triple Modular Redundancy (TMR) are not available to improve the processor's on-orbit reliability. The goal of this work is to investigate techniques that can help mitigate the embedded hard-core PPC440 processor within the Virtex-5 FPGA other than TMR. Implementing various mitigation schemes reliably within the PPC440 offers a powerful reconfigurable computing resource to these node-based processing architectures. This document summarizes the work done on the cache mitigation scheme for the embedded hard-core PPC440 processor within the Virtex-5 FPGAs, and describes in detail the design of the cache mitigation scheme and the testing conducted at the radiation effects facility on the Texas A&M campus.

  9. An intercomparison of multidecadal observational and reanalysis data sets for global total ozone trends and variability analysis

    NASA Astrophysics Data System (ADS)

    Bai, Kaixu; Chang, Ni-Bin; Shi, Runhe; Yu, Huijia; Gao, Wei

    2017-07-01

    A four-step adaptive ozone trend estimation scheme is proposed by integrating multivariate linear regression (MLR) and ensemble empirical mode decomposition (EEMD) to analyze the long-term variability of total column ozone from a set of four observational and reanalysis total ozone data sets, including the rarely explored ERA-Interim total ozone reanalysis, from 1979 to 2009. Consistency among the four data sets was first assessed, indicating a mean relative difference of 1% and root-mean-square error around 2% on average, with respect to collocated ground-based total ozone observations. Nevertheless, large drifts with significant spatiotemporal inhomogeneity were diagnosed in ERA-Interim after 1995. To emphasize long-term trends, natural ozone variations associated with the solar cycle, quasi-biennial oscillation, volcanic aerosols, and El Niño-Southern Oscillation were modeled with MLR and then removed from each total ozone record, respectively, before performing EEMD analyses. The resulting rates of change estimated from the proposed scheme captured the long-term ozone variability well, with an inflection time of 2000 clearly detected. The positive rates of change after 2000 suggest that the ozone layer seems to be on a healing path, but the results are still inadequate to conclude an actual recovery of the ozone layer, and more observational evidence is needed. Further investigations suggest that biases embedded in total ozone records may significantly impact ozone trend estimations by resulting in large uncertainty or even negative rates of change after 2000.
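
    A minimal sketch of the regression step of the scheme above is given below (Python; the proxy series are placeholders, and the EEMD stage is replaced by a crude moving average because the paper's EEMD settings are not reproduced here).

    import numpy as np

    def remove_natural_variability(ozone, proxies):
        # Multivariate linear regression of the total-ozone series on natural
        # variability proxies (solar flux, QBO, volcanic aerosol, ENSO index);
        # the residual is what the trend analysis then operates on.
        X = np.column_stack([np.ones(len(ozone))] + list(proxies))
        coef, *_ = np.linalg.lstsq(X, ozone, rcond=None)
        return ozone - X @ coef

    def slow_component(residual, window=121):
        # Crude stand-in for the low-frequency EEMD modes: a centered moving
        # average over roughly ten years of monthly data.
        kernel = np.ones(window) / window
        return np.convolve(residual, kernel, mode="same")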

  10. Human health improvement in Sub-Saharan Africa through integrated management of arthropod transmitted diseases and natural resources.

    PubMed

    Baumgärtner, J; Bieri, M; Buffoni, G; Gilioli, G; Gopalan, H; Greiling, J; Tikubet, G; Van Schayk, I

    2001-01-01

    A concept of an ecosystem approach to human health improvement in Sub-Saharan Africa is presented here. Three factors mainly affect the physical condition of the human body: the abiotic environment, vector-transmitted diseases, and natural resources. Our concept relies on ecological principles embedded in a social context and identifies three sets of subsystems for study and management: human disease subsystems, natural resource subsystems, and decision-support subsystems. To control human diseases and to secure food from resource subsystems including livestock or crops, integrated preventive approaches are preferred over exclusively curative and sectorial approaches. Environmental sustainability - the basis for managing matter and water flows - contributes to a healthy human environment and constitutes the basis for social sustainability. For planning and implementation of the human health improvement scheme, participatory decision-support subsystems adapted to the local conditions need to be designed through institutional arrangements. The applicability of this scheme is demonstrated in urban and rural Ethiopia.

  11. Experience with 3-D composite grids

    NASA Technical Reports Server (NTRS)

    Benek, J. A.; Donegan, T. L.; Suhs, N. E.

    1987-01-01

    Experience with the three-dimensional (3-D) chimera grid embedding scheme is described. Applications of the inviscid version to a multiple-body configuration, a wing/body/tail configuration, and an estimate of wind tunnel wall interference are described. Applications to viscous flows include a 3-D cavity and another multi-body configuration. A variety of grid generators is used, and several embedding strategies are described.

  12. Optical 3D watermark based digital image watermarking for telemedicine

    NASA Astrophysics Data System (ADS)

    Li, Xiao Wei; Kim, Seok Tae

    2013-12-01

    The region of interest (ROI) of a medical image is an area containing important diagnostic information and must be stored without any distortion. This algorithm applies a watermarking technique to the non-ROI of the medical image while preserving the ROI. The paper presents a 3D-watermark-based medical image watermarking scheme. A 3D watermark object is first decomposed into a 2D elemental image array (EIA) by a lenslet array, and the 2D elemental image array data are then embedded into the host image. The watermark extraction process is the inverse of the embedding process. From the extracted EIA, the 3D watermark can be reconstructed through the computational integral imaging reconstruction (CIIR) technique. Because the EIA is composed of a number of elemental images, each possessing its own perspective of the 3D watermark object, the 3D virtual watermark can be successfully reconstructed even when the embedded watermark data are badly damaged. Furthermore, using CAT with various rule number parameters, it is possible to obtain many channels for embedding, so our method overcomes the weak point of having only one transform plane in traditional watermarking methods. The effectiveness of the proposed watermarking scheme is demonstrated with the aid of experimental results.

  13. Multiple grid problems on concurrent-processing computers

    NASA Technical Reports Server (NTRS)

    Eberhardt, D. S.; Baganoff, D.

    1986-01-01

    Three computer codes were studied which make use of concurrent processing computer architectures in computational fluid dynamics (CFD). The three parallel codes were tested on a two processor multiple-instruction/multiple-data (MIMD) facility at NASA Ames Research Center, and are suggested for efficient parallel computations. The first code is a well-known program which makes use of the Beam and Warming, implicit, approximate factored algorithm. This study demonstrates the parallelism found in a well-known scheme and it achieved speedups exceeding 1.9 on the two processor MIMD test facility. The second code studied made use of an embedded grid scheme which is used to solve problems having complex geometries. The particular application for this study considered an airfoil/flap geometry in an incompressible flow. The scheme eliminates some of the inherent difficulties found in adapting approximate factorization techniques onto MIMD machines and allows the use of chaotic relaxation and asynchronous iteration techniques. The third code studied is an application of overset grids to a supersonic blunt body problem. The code addresses the difficulties encountered when using embedded grids on a compressible, and therefore nonlinear, problem. The complex numerical boundary system associated with overset grids is discussed and several boundary schemes are suggested. A boundary scheme based on the method of characteristics achieved the best results.

  14. A pseudospectra-based approach to non-normal stability of embedded boundary methods

    NASA Astrophysics Data System (ADS)

    Rapaka, Narsimha; Samtaney, Ravi

    2017-11-01

    We present non-normal linear stability of embedded boundary (EB) methods employing pseudospectra and resolvent norms. Stability of the discrete linear wave equation is characterized in terms of the normalized distance of the EB to the nearest ghost node (α) in one and two dimensions. An important objective is that the CFL condition based on the Cartesian grid spacing remains unaffected by the EB. We consider various discretization methods including both central and upwind-biased schemes. Stability is guaranteed when α <= αmax, where αmax ranges between 0.5 and 0.77 depending on the discretization scheme. Also, the stability characteristics remain the same in both one and two dimensions. Sharper limits on the sufficient conditions for stability are obtained based on the pseudospectral radius (the Kreiss constant) than the restrictive limits based on the usual singular value decomposition analysis. We present a simple and robust reclassification scheme for the ghost cells ("hybrid ghost cells") to ensure Lax stability of the discrete systems. This has been tested successfully for both low and high order discretization schemes with transient growth of at most O(1). Moreover, we present a stable, fourth order EB reconstruction scheme. Supported by the KAUST Office of Competitive Research Funds under Award No. URF/1/1394-01.

  15. A level set approach for shock-induced α-γ phase transition of RDX

    NASA Astrophysics Data System (ADS)

    Josyula, Kartik; Rahul; De, Suvranu

    2018-02-01

    We present a thermodynamically consistent level sets approach based on a regularization energy functional which can be directly incorporated into a Galerkin finite element framework to model interface motion. The regularization energy leads to a diffusive form of flux that is embedded within the level sets evolution equation and maintains the signed distance property of the level set function. The scheme is shown to compare well with the velocity extension method in capturing the interface position. The proposed level sets approach is employed to study the α-γ phase transformation in an RDX single crystal shocked along the (100) plane. Example problems in one and three dimensions are presented. We observe smooth evolution of the phase interface along the shock direction in both models. There is no diffusion of the interface during the zero level set evolution in the three-dimensional model. The level sets approach is shown to capture the characteristics of the shock-induced α-γ phase transformation, such as stress relaxation behind the phase interface and the finite time required for the phase transformation to complete. The regularization-energy-based level sets approach is efficient, robust, and easy to implement.

  16. Generic picture of the emission properties of III-nitride polariton laser diodes: Steady state and current modulation response

    NASA Astrophysics Data System (ADS)

    Iorsh, Ivan; Glauser, Marlene; Rossbach, Georg; Levrat, Jacques; Cobet, Munise; Butté, Raphaël; Grandjean, Nicolas; Kaliteevski, Mikhail A.; Abram, Richard A.; Kavokin, Alexey V.

    2012-09-01

    The main emission characteristics of electrically driven polariton lasers based on planar GaN microcavities with embedded InGaN quantum wells are studied theoretically. The polariton emission dependence on pump current density is first modeled using a set of semiclassical Boltzmann equations for the exciton polaritons that are coupled to the rate equation describing the electron-hole plasma population. Two experimentally relevant pumping geometries are considered, namely the direct injection of electrons and holes into the strongly coupled microcavity region and intracavity optical pumping via an embedded light-emitting diode. The theoretical framework allows the determination of the minimum threshold current density Jthr,min as a function of lattice temperature and exciton-cavity photon detuning for the two pumping schemes. A Jthr,min value of 5 and 6 A cm-2 is derived for the direct injection scheme and for the intracavity optical pumping one, respectively, at room temperature at the optimum detuning. Then an approximate quasianalytical model is introduced to derive solutions for both the steady-state and high-speed current modulation. This analysis makes it possible to show that the exciton population, which acts as a reservoir for the stimulated relaxation process, gets clamped once the condensation threshold is crossed, a behavior analogous to what happens in conventional laser diodes with the carrier density above threshold. Finally, the modulation transfer function is calculated for both pumping geometries and the corresponding cutoff frequency is determined.

  17. On the Difference Between Additive and Subtractive QM/MM Calculations

    PubMed Central

    Cao, Lili; Ryde, Ulf

    2018-01-01

    The combined quantum mechanical (QM) and molecular mechanical (MM) approach (QM/MM) is a popular method to study reactions in biochemical macromolecules. Even if the general procedure of using QM for a small, but interesting part of the system and MM for the rest is common to all approaches, the details of the implementations vary extensively, especially the treatment of the interface between the two systems. For example, QM/MM can use either additive or subtractive schemes, of which the former is often said to be preferable, although the two schemes are often mixed up with mechanical and electrostatic embedding. In this article, we clarify the similarities and differences of the two approaches. We show that inherently, the two approaches should be identical and in practice require the same sets of parameters. However, the subtractive scheme provides an opportunity to correct errors introduced by the truncation of the QM system, i.e., the link atoms, but such corrections require additional MM parameters for the QM system. We describe and test three types of link-atom correction, viz. for van der Waals, electrostatic, and bonded interactions. The calculations show that electrostatic and bonded link-atom corrections often give rise to problems in the geometries and energies. The van der Waals link-atom corrections are quite small and give results similar to a pure additive QM/MM scheme. Therefore, both approaches can be recommended. PMID:29666794

  18. On the difference between additive and subtractive QM/MM calculations

    NASA Astrophysics Data System (ADS)

    Cao, Lili; Ryde, Ulf

    2018-04-01

    The combined quantum mechanical (QM) and molecular mechanical (MM) approach (QM/MM) is a popular method to study reactions in biochemical macromolecules. Even if the general procedure of using QM for a small, but interesting part of the system and MM for the rest is common to all approaches, the details of the implementations vary extensively, especially the treatment of the interface between the two systems. For example, QM/MM can use either additive or subtractive schemes, of which the former is often said to be preferable, although the two schemes are often mixed up with mechanical and electrostatic embedding. In this article, we clarify the similarities and differences of the two approaches. We show that inherently, the two approaches should be identical and in practice require the same sets of parameters. However, the subtractive scheme provides an opportunity to correct errors introduced by the truncation of the QM system, i.e. the link atoms, but such corrections require additional MM parameters for the QM system. We describe and test three types of link-atom correction, viz. for van der Waals, electrostatic and bonded interactions. The calculations show that electrostatic and bonded link-atom corrections often give rise to problems in the geometries and energies. The van der Waals link-atom corrections are quite small and give results similar to a pure additive QM/MM scheme. Therefore, both approaches can be recommended.
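
    For reference, the two energy expressions being contrasted take the following standard textbook form (a generic formulation, not transcribed from the article), with S the whole system, Q the QM region, and L the link atoms:

    E^{\mathrm{add}}_{\mathrm{QM/MM}} = E_{\mathrm{QM}}(\mathbf{Q}+\mathbf{L}) + E_{\mathrm{MM}}(\mathbf{S}\setminus\mathbf{Q}) + E^{\mathrm{coupling}}_{\mathrm{QM\text{-}MM}},
    \qquad
    E^{\mathrm{sub}}_{\mathrm{QM/MM}} = E_{\mathrm{MM}}(\mathbf{S}) + E_{\mathrm{QM}}(\mathbf{Q}+\mathbf{L}) - E_{\mathrm{MM}}(\mathbf{Q}+\mathbf{L}).

    The subtractive form cancels the MM description of the QM region against the full-system MM energy, which is why it requires MM parameters for the QM region and provides a natural place to apply the link-atom corrections tested in the article.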

  19. Verifying genuine high-order entanglement.

    PubMed

    Li, Che-Ming; Chen, Kai; Reingruber, Andreas; Chen, Yueh-Nan; Pan, Jian-Wei

    2010-11-19

    High-order entanglement embedded in multipartite multilevel quantum systems (qudits) with many degrees of freedom (DOFs) plays an important role in quantum foundation and quantum engineering. Verifying high-order entanglement without the restriction of system complexity is a critical need in any experiments on general entanglement. Here, we introduce a scheme to efficiently detect genuine high-order entanglement, such as states close to genuine qudit Bell, Greenberger-Horne-Zeilinger, and cluster states as well as multilevel multi-DOF hyperentanglement. All of them can be identified with two local measurement settings per DOF regardless of the qudit or DOF number. The proposed verifications together with further utilities such as fidelity estimation could pave the way for experiments by reducing dramatically the measurement overhead.

  20. JP3D compressed-domain watermarking of volumetric medical data sets

    NASA Astrophysics Data System (ADS)

    Ouled Zaid, Azza; Makhloufi, Achraf; Olivier, Christian

    2010-01-01

    Increasing transmission of medical data across multiple user systems raises concerns for medical image watermarking. Additionally, the use of volumetric images triggers the need for efficient compression techniques in picture archiving and communication systems (PACS) or telemedicine applications. This paper describes a hybrid data hiding/compression system adapted to volumetric medical imaging. The central contribution is to integrate blind watermarking, based on turbo trellis-coded quantization (TCQ), into the JP3D encoder. Results of our method applied to Magnetic Resonance (MR) and Computed Tomography (CT) medical images have shown that our watermarking scheme is robust to JP3D compression attacks and can provide a relatively high data embedding rate while keeping distortion relatively low.

  1. Visual content highlighting via automatic extraction of embedded captions on MPEG compressed video

    NASA Astrophysics Data System (ADS)

    Yeo, Boon-Lock; Liu, Bede

    1996-03-01

    Embedded captions in TV programs such as news broadcasts, documentaries and coverage of sports events provide important information on the underlying events. In digital video libraries, such captions represent a highly condensed form of key information on the contents of the video. In this paper we propose a scheme to automatically detect the presence of captions embedded in video frames. The proposed method operates on reduced image sequences which are efficiently reconstructed from compressed MPEG video and thus does not require full frame decompression. The detection, extraction and analysis of embedded captions help to capture the highlights of visual contents in video documents for better organization of video, to present succinctly the important messages embedded in the images, and to facilitate browsing, searching and retrieval of relevant clips.

  2. A consensus embedding approach for segmentation of high resolution in vivo prostate magnetic resonance imagery

    NASA Astrophysics Data System (ADS)

    Viswanath, Satish; Rosen, Mark; Madabhushi, Anant

    2008-03-01

    Current techniques for localization of prostatic adenocarcinoma (CaP) via blinded trans-rectal ultrasound biopsy are associated with a high false negative detection rate. While high resolution endorectal in vivo Magnetic Resonance (MR) prostate imaging has been shown to have improved contrast and resolution for CaP detection over ultrasound, similarity in intensity characteristics between benign and cancerous regions on MR images contributes to a high false positive detection rate. In this paper, we present a novel unsupervised segmentation method that employs manifold learning via consensus schemes for detection of cancerous regions from high resolution 1.5 Tesla (T) endorectal in vivo prostate MRI. A significant contribution of this paper is a method to combine multiple weak, lower-dimensional representations of high dimensional feature data in a way analogous to classifier ensemble schemes, and hence create a stable and accurate reduced dimensional representation. After correcting for MR image intensity artifacts, such as bias field inhomogeneity and intensity non-standardness, our algorithm extracts over 350 3D texture features at every spatial location in the MR scene at multiple scales and orientations. Non-linear dimensionality reduction schemes such as Locally Linear Embedding (LLE) and Graph Embedding (GE) are employed to create multiple low dimensional data representations of this high dimensional texture feature space. Our novel consensus embedding method is used to average object adjacencies from within the multiple low dimensional projections so that class relationships are preserved. Unsupervised consensus clustering is then used to partition the objects in this consensus embedding space into distinct classes. Quantitative evaluation on 18 1.5 T prostate MR data sets against corresponding histology obtained from the multi-site ACRIN trials shows a sensitivity of 92.65% and a specificity of 82.06%, which suggests that our method is successfully able to detect suspicious regions in the prostate.
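
    A toy version of the consensus-embedding idea is sketched below (Python with scikit-learn; the embeddings, feature set, and clustering used in the study differ, and all parameter values are illustrative): several weak low-dimensional projections are computed, their pairwise-distance (adjacency) structure is averaged, and the objects are clustered in that consensus space.

    import numpy as np
    from sklearn.manifold import LocallyLinearEmbedding, SpectralEmbedding
    from sklearn.metrics import pairwise_distances
    from sklearn.cluster import KMeans

    def consensus_embedding_labels(features, n_dim=3, n_classes=2):
        # Build several weak low-dimensional projections, average their pairwise
        # distance (adjacency) structure, and cluster in that consensus space.
        embedders = [
            LocallyLinearEmbedding(n_components=n_dim, n_neighbors=10),
            SpectralEmbedding(n_components=n_dim, n_neighbors=10),  # graph-embedding stand-in
        ]
        consensus = np.zeros((len(features), len(features)))
        for emb in embedders:
            consensus += pairwise_distances(emb.fit_transform(features))
        consensus /= len(embedders)
        # Unsupervised partition of the objects using the averaged adjacencies.
        return KMeans(n_clusters=n_classes, n_init=10).fit_predict(consensus)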

  3. Cryptanalysis of Chatterjee-Sarkar Hierarchical Identity-Based Encryption Scheme at PKC 06

    NASA Astrophysics Data System (ADS)

    Park, Jong Hwan; Lee, Dong Hoon

    In 2006, Chatterjee and Sarkar proposed a hierarchical identity-based encryption (HIBE) scheme which can support an unbounded number of identity levels. This property is particularly useful in providing forward secrecy by embedding time components within hierarchical identities. In this paper we show that their scheme does not provide the claimed property. Our analysis shows that if the number of identity levels becomes larger than the value of a fixed public parameter, an unintended receiver can reconstruct a new valid ciphertext and decrypt the ciphertext using his or her own private key. The analysis is similarly applied to a multi-receiver identity-based encryption scheme presented as an application of Chatterjee and Sarkar's HIBE scheme.

  4. Performance of the Goddard Multiscale Modeling Framework with Goddard Ice Microphysical Schemes

    NASA Technical Reports Server (NTRS)

    Chern, Jiun-Dar; Tao, Wei-Kuo; Lang, Stephen E.; Matsui, Toshihisa; Li, J.-L.; Mohr, Karen I.; Skofronick-Jackson, Gail M.; Peters-Lidard, Christa D.

    2016-01-01

    The multiscale modeling framework (MMF), which replaces traditional cloud parameterizations with cloud-resolving models (CRMs) within a host atmospheric general circulation model (GCM), has become a new approach for climate modeling. The embedded CRMs make it possible to apply CRM-based cloud microphysics directly within a GCM. However, most such schemes have never been tested in a global environment for long-term climate simulation. The benefits of using an MMF to evaluate rigorously and improve microphysics schemes are here demonstrated. Four one-moment microphysical schemes are implemented into the Goddard MMF and their results validated against three CloudSat/CALIPSO cloud ice products and other satellite data. The new four-class (cloud ice, snow, graupel, and frozen drops/hail) ice scheme produces a better overall spatial distribution of cloud ice amount, total cloud fractions, net radiation, and total cloud radiative forcing than earlier three-class ice schemes, with biases within the observational uncertainties. Sensitivity experiments are conducted to examine the impact of recently upgraded microphysical processes on global hydrometeor distributions. Five processes dominate the global distributions of cloud ice and snow amount in long-term simulations: (1) allowing for ice supersaturation in the saturation adjustment, (2) three additional correction terms in the depositional growth of cloud ice to snow, (3) accounting for cloud ice fall speeds, (4) limiting cloud ice particle size, and (5) new size-mapping schemes for snow and graupel. Despite the cloud microphysics improvements, systematic errors associated with subgrid processes, cyclic lateral boundaries in the embedded CRMs, and momentum transport remain and will require future improvement.

  5. Asymmetric distances for binary embeddings.

    PubMed

    Gordo, Albert; Perronnin, Florent; Gong, Yunchao; Lazebnik, Svetlana

    2014-01-01

    In large-scale query-by-example retrieval, embedding image signatures in a binary space offers two benefits: data compression and search efficiency. While most embedding algorithms binarize both query and database signatures, it has been noted that this is not strictly a requirement. Indeed, asymmetric schemes that binarize the database signatures but not the query still enjoy the same two benefits but may provide superior accuracy. In this work, we propose two general asymmetric distances that are applicable to a wide variety of embedding techniques including locality sensitive hashing (LSH), locality sensitive binary codes (LSBC), spectral hashing (SH), PCA embedding (PCAE), PCAE with random rotations (PCAE-RR), and PCAE with iterative quantization (PCAE-ITQ). We experiment on four public benchmarks containing up to 1M images and show that the proposed asymmetric distances consistently lead to large improvements over the symmetric Hamming distance for all binary embedding techniques.
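
    A minimal sketch of one such asymmetric distance is given below (Python, sign-LSH binarization as an example; names and the choice of reconstruction values are illustrative): the database signatures are binarized, while the query stays real-valued and is compared with the codes mapped back to {-1, +1}.

    import numpy as np

    def sign_lsh_codes(X, W):
        # Binarize database signatures with random sign projections (LSH-style).
        return X @ W > 0

    def asymmetric_distances(query, W, codes):
        # Keep the query real-valued: compare its projections with the binary
        # database codes mapped back to {-1, +1}.  Only the database side is
        # compressed, which is the asymmetry exploited above.
        q = query @ W
        recon = np.where(codes, 1.0, -1.0)
        return np.linalg.norm(recon - q, axis=1)

    # Example usage: W = np.random.randn(d, n_bits); codes = sign_lsh_codes(database, W)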

  6. Self-Similar Subsets of Symmetric Cantor Sets

    NASA Astrophysics Data System (ADS)

    Zeng, Ying

    This paper concerns the affine embeddings of general symmetric Cantor sets. Under certain condition, we show that if a self-similar set F can be affinely embedded into a symmetric Cantor set E, then their contractions are rationally commensurable. Our result supports Conjecture 1.2 in [D. J. Feng, W. Huang and H. Rao, Affine embeddings and intersections of Cantor sets, J. Math. Pures Appl. 102 (2014) 1062-1079].

  7. Palmprint Based Multidimensional Fuzzy Vault Scheme

    PubMed Central

    Liu, Hailun; Sun, Dongmei; Xiong, Ke; Qiu, Zhengding

    2014-01-01

    The fuzzy vault scheme (FVS) is one of the most popular biometric cryptosystems for biometric template protection. However, the error correcting code (ECC) proposed in FVS is not appropriate for dealing with real-valued biometric intraclass variances. In this paper, we propose a multidimensional fuzzy vault scheme (MDFVS) in which a general subspace error-tolerant mechanism is designed and embedded into FVS to handle intraclass variances. Palmprint is one of the most important biometrics; to protect palmprint templates, a palmprint-based MDFVS implementation is also presented. Experimental results show that the proposed scheme not only deals with intraclass variances effectively but also maintains accuracy while enhancing security. PMID:24892094

  8. An embedded formula of the Chebyshev collocation method for stiff problems

    NASA Astrophysics Data System (ADS)

    Piao, Xiangfan; Bu, Sunyoung; Kim, Dojin; Kim, Philsu

    2017-12-01

    In this study, we have developed an embedded formula of the Chebyshev collocation method for stiff problems, based on the zeros of the generalized Chebyshev polynomials. A new strategy for the embedded formula, using a pair of methods to estimate the local truncation error as in traditional embedded Runge-Kutta schemes, is proposed. The method is constructed so that the stability region of the embedded formula is widened and, by allowing the use of larger time step sizes, the total computational cost is also reduced. In terms of concrete convergence and stability analysis, the constructed algorithm turns out to have 8th-order convergence and exhibits A-stability. Through several numerical experiments, we demonstrate that the proposed method is numerically more efficient compared to several existing implicit methods.

  9. A joint asymmetric watermarking and image encryption scheme

    NASA Astrophysics Data System (ADS)

    Boato, G.; Conotter, V.; De Natale, F. G. B.; Fontanari, C.

    2008-02-01

    Here we introduce a novel watermarking paradigm designed to be both asymmetric, i.e., involving a private key for embedding and a public key for detection, and commutative with a suitable encryption scheme, allowing both to cipher watermarked data and to mark encrypted data without interfering with the detection process. In order to demonstrate the effectiveness of the above principles, we present an explicit example where the watermarking part, based on elementary linear algebra, and the encryption part, exploiting a secret random permutation, are integrated in a commutative scheme.

  10. H.264/AVC digital fingerprinting based on spatio-temporal just noticeable distortion

    NASA Astrophysics Data System (ADS)

    Ait Saadi, Karima; Bouridane, Ahmed; Guessoum, Abderrezak

    2014-01-01

    This paper presents a robust adaptive embedding scheme using a modified spatio-temporal just noticeable distortion (JND) model that is designed for tracing the distribution of H.264/AVC video content and protecting it from unauthorized redistribution. The embedding is performed during the coding process in selected Intra 4x4 macroblocks within I-frames. The method uses the spread-spectrum technique to obtain robustness against collusion attacks and the JND model to dynamically adjust the embedding strength and control the energy of the embedded fingerprints so as to ensure their imperceptibility. Linear and nonlinear collusion attacks are performed to show the robustness of the proposed technique against collusion attacks while leaving visual quality unchanged.

  11. Protection of Health Imagery by Region Based Lossless Reversible Watermarking Scheme

    PubMed Central

    Priya, R. Lakshmi; Sadasivam, V.

    2015-01-01

    Providing authentication and integrity in medical images is a problem, and this work proposes a new blind, fragile, region-based lossless reversible watermarking technique to improve the trustworthiness of medical images. The proposed technique embeds the watermark using a reversible least significant bit embedding scheme. The scheme combines hashing, compression, and digital signature techniques to create a content-dependent watermark, making use of the compressed region of interest (ROI) for recovery of the ROI as reported in the literature. Experiments were carried out to evaluate the performance of the scheme, and their assessment reveals that the ROI is extracted intact and that the PSNR values obtained indicate the presented scheme offers greater protection for health imagery. PMID:26649328

  12. Comparative analysis of nonlinear dimensionality reduction techniques for breast MRI segmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akhbardeh, Alireza; Jacobs, Michael A.; Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, Maryland 21205

    2012-04-15

    Purpose: Visualization of anatomical structures using radiological imaging methods is an important tool in medicine to differentiate normal from pathological tissue and can generate large amounts of data for a radiologist to read. Integrating these large data sets is difficult and time-consuming. A new approach uses both supervised and unsupervised advanced machine learning techniques to visualize and segment radiological data. This study describes the application of a novel hybrid scheme, based on combining wavelet transform and nonlinear dimensionality reduction (NLDR) methods, to breast magnetic resonance imaging (MRI) data using three well-established NLDR techniques, namely, ISOMAP, local linear embedding (LLE), and diffusion maps (DfM), to perform a comparative performance analysis. Methods: Twenty-five breast lesion subjects were scanned using a 3T scanner. MRI sequences used were T1-weighted, T2-weighted, diffusion-weighted imaging (DWI), and dynamic contrast-enhanced (DCE) imaging. The hybrid scheme consisted of two steps: preprocessing and postprocessing of the data. The preprocessing step was applied for B1 inhomogeneity correction, image registration, and wavelet-based image compression to match and denoise the data. In the postprocessing step, MRI parameters were considered data dimensions and the NLDR-based hybrid approach was applied to integrate the MRI parameters into a single image, termed the embedded image. This was achieved by mapping all pixel intensities from the higher dimension to a lower dimensional (embedded) space. For validation, the authors compared the hybrid NLDR with linear methods of principal component analysis (PCA) and multidimensional scaling (MDS) using synthetic data. For the clinical application, the authors used breast MRI data; comparison was performed using the postcontrast DCE MRI image and evaluating the congruence of the segmented lesions. Results: The NLDR-based hybrid approach was able to define and segment both synthetic and clinical data. In the synthetic data, the authors demonstrated the performance of the NLDR method compared with conventional linear DR methods. The NLDR approach enabled successful segmentation of the structures, whereas, in most cases, PCA and MDS failed. The NLDR approach was able to segment different breast tissue types with a high accuracy and the embedded image of the breast MRI data demonstrated fuzzy boundaries between the different types of breast tissue, i.e., fatty, glandular, and tissue with lesions (>86%). Conclusions: The proposed hybrid NLDR methods were able to segment clinical breast data with a high accuracy and construct an embedded image that visualized the contribution of different radiological parameters.
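
    The core "embedded image" step can be sketched as follows (Python with scikit-learn, Isomap as one of the NLDR choices; the study's preprocessing, feature set, and parameter values are not reproduced, and the function is meant for a small region of interest rather than a full-resolution volume).

    import numpy as np
    from sklearn.manifold import Isomap

    def embedded_image(parameter_maps, n_neighbors=8):
        # Stack the co-registered MRI parameters of every pixel into one feature
        # vector and map it to a single intensity with a nonlinear dimensionality
        # reduction, yielding the "embedded image".
        h, w, n_params = parameter_maps.shape
        pixels = parameter_maps.reshape(-1, n_params)
        low = Isomap(n_components=1, n_neighbors=n_neighbors).fit_transform(pixels)
        return low.reshape(h, w)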

  13. Cross diffusion and exponential space dependent heat source impacts in radiated three-dimensional (3D) flow of Casson fluid by heated surface

    NASA Astrophysics Data System (ADS)

    Zaigham Zia, Q. M.; Ullah, Ikram; Waqas, M.; Alsaedi, A.; Hayat, T.

    2018-03-01

    This research intends to elaborate the Soret-Dufour characteristics in mixed convective radiated Casson liquid flow over an exponentially heated surface. Novel features of an exponential space-dependent heat source are introduced. Appropriate variables are implemented for conversion of the partial differential framework into sets of ordinary differential equations. A homotopic scheme is employed for construction of analytic solutions. The behavior of various embedding variables on the velocity, temperature and concentration distributions is plotted graphically and analyzed in detail. Besides, skin friction coefficients and heat and mass transfer rates are also computed and interpreted. The results signify the pronounced characteristics of temperature corresponding to the convective and radiation variables. Concentration bears an opposite response for the Soret and Dufour variables.

  14. ROSA: Resource-Oriented Service Management Schemes for Web of Things in a Smart Home.

    PubMed

    Liao, Chun-Feng; Chen, Peng-Yu

    2017-09-21

    A pervasive-computing-enriched smart home environment, which contains many embedded and tiny intelligent devices and sensors coordinated by service management mechanisms, is capable of anticipating the intentions of occupants and providing appropriate services accordingly. Although there has been a wealth of research achievements in recent years, the degree of market acceptance is still low. The main reason is that most of the devices and services in such environments depend on a particular platform or technology, making it hard to develop an application by composing the devices or services. Meanwhile, the concept of the Web of Things (WoT) has recently become popular. Based on WoT, developers can build applications using popular web tools or technologies. Consequently, the objective of this paper is to propose a set of novel WoT-driven plug-and-play service management schemes for a smart home, called Resource-Oriented Service Administration (ROSA). We have implemented an application prototype, and experiments were performed to show the effectiveness of the proposed approach. The results of this research can be a foundation for realizing the vision of "end user programmable smart environments".

  15. Four-body correlation embedded in antisymmetrized geminal power wave function.

    PubMed

    Kawasaki, Airi; Sugino, Osamu

    2016-12-28

    We extend Coleman's antisymmetrized geminal power (AGP) to develop a wave function theory that can incorporate up to four-body correlation in a region of strong correlation. To facilitate the variational determination of the wave function, the total energy is rewritten in terms of the traces of geminals. This novel trace formula is applied to a simple model system consisting of a one-dimensional Hubbard ring with a site of strong correlation. Our scheme significantly improves the result obtained by the AGP-configuration interaction scheme of Uemura et al. and also achieves more efficient compression of the degrees of freedom of the wave function. We regard the result as a step toward a first-principles wave function theory for a strongly correlated point defect or adsorbate embedded in an AGP-based mean-field medium.

  16. Two-dimensional mesh embedding for Galerkin B-spline methods

    NASA Technical Reports Server (NTRS)

    Shariff, Karim; Moser, Robert D.

    1995-01-01

    A number of advantages result from using B-splines as basis functions in a Galerkin method for solving partial differential equations. Among them are arbitrary order of accuracy and high resolution similar to that of compact schemes but without the aliasing error. This work develops another property, namely, the ability to treat semi-structured embedded or zonal meshes for two-dimensional geometries. This can drastically reduce the number of grid points in many applications. Both integer and non-integer refinement ratios are allowed. The report begins by developing an algorithm for choosing basis functions that yield the desired mesh resolution. These functions are suitable products of one-dimensional B-splines. Finally, test cases for linear scalar equations such as the Poisson and advection equation are presented. The scheme is conservative and has uniformly high order of accuracy throughout the domain.

  17. Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing.

    PubMed

    Li, Shuang; Liu, Bing; Zhang, Chen

    2016-01-01

    Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and the manifold assumption. But such an assumption might be invalid for some high-dimensional or sparse data due to the curse of dimensionality, which has a negative influence on the performance of multiple kernel learning. In addition, some models might be ill-posed if the rank of the matrices in their objective functions is not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternating optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method for supervised, unsupervised, and semisupervised scenarios.

  18. Authenticity preservation with histogram-based reversible data hiding and quadtree concepts.

    PubMed

    Huang, Hsiang-Cheh; Fang, Wai-Chi

    2011-01-01

    With the widespread use of identification systems, establishing authenticity with sensors has become an important research issue. Among the schemes for making authenticity verification based on information security possible, reversible data hiding has attracted much attention during the past few years. With its characteristics of reversibility, the scheme is required to fulfill the goals from two aspects. On the one hand, at the encoder, the secret information needs to be embedded into the original image by some algorithms, such that the output image will resemble the input one as much as possible. On the other hand, at the decoder, both the secret information and the original image must be correctly extracted and recovered, and they should be identical to their embedding counterparts. Under the requirement of reversibility, for evaluating the performance of the data hiding algorithm, the output image quality, named imperceptibility, and the number of bits for embedding, called capacity, are the two key factors to assess the effectiveness of the algorithm. Besides, the size of side information for making decoding possible should also be evaluated. Here we consider using the characteristics of original images for developing our method with better performance. In this paper, we propose an algorithm that has the ability to provide more capacity than conventional algorithms, with similar output image quality after embedding, and comparable side information produced. Simulation results demonstrate the applicability and better performance of our algorithm.
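
    The classic histogram-shifting core of such histogram-based reversible data hiding can be sketched as follows (Python; a single whole-image pass with an assumed genuinely empty bin, whereas the paper applies the idea blockwise with quadtree concepts):

    import numpy as np

    def histogram_shift_embed(img, bits):
        # Embed bits at pixels equal to the histogram peak after shifting the bins
        # between the peak and an empty (zero) bin by one to make room.  Returns
        # the marked image and the (peak, zero) pair needed for exact recovery.
        hist = np.bincount(img.ravel(), minlength=256)
        peak, zero = int(np.argmax(hist)), int(np.argmin(hist))
        assert hist[zero] == 0, "assumes a genuinely empty bin exists"
        out = img.astype(np.int16)
        flat = out.ravel()
        carriers = np.flatnonzero(img.ravel() == peak)[:len(bits)]
        if peak < zero:
            flat[(flat > peak) & (flat < zero)] += 1
            for i, b in zip(carriers, bits):
                flat[i] = peak + int(b)          # bit 1 -> peak+1, bit 0 -> peak
        else:
            flat[(flat > zero) & (flat < peak)] -= 1
            for i, b in zip(carriers, bits):
                flat[i] = peak - int(b)
        return flat.reshape(img.shape).astype(np.uint8), (peak, zero)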

  19. Simultaneous Control of Multispecies Particle Transport and Segregation in Driven Lattices

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Aritra K.; Liebchen, Benno; Schmelcher, Peter

    2018-05-01

    We provide a generic scheme to separate the particles of a mixture by their physical properties like mass, friction, or size. The scheme employs a periodically shaken two-dimensional dissipative lattice and hinges on a simultaneous transport of particles in species-specific directions. This selective transport is achieved by controlling the late-time nonlinear particle dynamics, via the attractors embedded in the phase space and their bifurcations. To illustrate the spectrum of possible applications of the scheme, we exemplarily demonstrate the separation of polydisperse colloids and mixtures of cold thermal alkali atoms in optical lattices.

  20. Oblivious image watermarking combined with JPEG compression

    NASA Astrophysics Data System (ADS)

    Chen, Qing; Maitre, Henri; Pesquet-Popescu, Beatrice

    2003-06-01

    For most data hiding applications, the main source of concern is the effect of lossy compression on hidden information. The objective of watermarking is fundamentally in conflict with lossy compression. The latter attempts to remove all irrelevant and redundant information from a signal, while the former uses the irrelevant information to mask the presence of hidden data. Compression on a watermarked image can significantly affect the retrieval of the watermark. Past investigations of this problem have heavily relied on simulation. It is desirable not only to measure the effect of compression on embedded watermark, but also to control the embedding process to survive lossy compression. In this paper, we focus on oblivious watermarking by assuming that the watermarked image inevitably undergoes JPEG compression prior to watermark extraction. We propose an image-adaptive watermarking scheme where the watermarking algorithm and the JPEG compression standard are jointly considered. Watermark embedding takes into consideration the JPEG compression quality factor and exploits an HVS model to adaptively attain a proper trade-off among transparency, hiding data rate, and robustness to JPEG compression. The scheme estimates the image-dependent payload under JPEG compression to achieve the watermarking bit allocation in a determinate way, while maintaining consistent watermark retrieval performance.

  1. Detection of LSB+/-1 steganography based on co-occurrence matrix and bit plane clipping

    NASA Astrophysics Data System (ADS)

    Abolghasemi, Mojtaba; Aghaeinia, Hassan; Faez, Karim; Mehrabi, Mohammad Ali

    2010-01-01

    Spatial LSB+/-1 steganography changes the smooth characteristics between adjoining pixels of the raw image. We present a novel steganalysis method for LSB+/-1 steganography based on feature vectors derived from the co-occurrence matrix in the spatial domain. We investigate how LSB+/-1 steganography affects the bit planes of an image and show that it mainly changes the least significant bit (LSB) planes. The co-occurrence matrix is derived from an image in which some of its most significant bit planes are clipped. By this preprocessing, in addition to reducing the dimensions of the feature vector, the effects of embedding are also preserved. We compute the co-occurrence matrix in different directions and with different dependencies and use the elements of the resulting co-occurrence matrix as features. This method is sensitive to the data embedding process. We use a Fisher linear discrimination (FLD) classifier and test our algorithm on different databases and embedding rates. We compare our scheme with the current LSB+/-1 steganalysis methods. It is shown that the proposed scheme outperforms the state-of-the-art methods in detecting the LSB+/-1 steganographic method for grayscale images.
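
    A minimal sketch of the feature extraction described above is given below (Python; the bit depth and pixel-pair offset are illustrative choices), clipping the most significant bit planes and collecting a normalized co-occurrence matrix as the feature vector.

    import numpy as np

    def cooccurrence_features(img, keep_bits=3, dr=0, dc=1):
        # Clip the most significant bit planes (keep the lowest `keep_bits`
        # planes), then build a normalized co-occurrence matrix of pixel pairs at
        # offset (dr, dc) and return its entries as the feature vector.
        clipped = (img & ((1 << keep_bits) - 1)).astype(np.intp)
        levels = 1 << keep_bits
        h, w = clipped.shape
        a = clipped[:h - dr, :w - dc].ravel()
        b = clipped[dr:, dc:].ravel()
        co = np.zeros((levels, levels))
        np.add.at(co, (a, b), 1)
        return (co / co.sum()).ravel()

    # Features from several offsets/dependencies can be concatenated and fed to a
    # Fisher linear discriminant, e.g. sklearn's LinearDiscriminantAnalysis.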

  2. Prediction of breast cancer risk using a machine learning approach embedded with a locality preserving projection algorithm.

    PubMed

    Heidari, Morteza; Khuzani, Abolfazl Zargari; Hollingsworth, Alan B; Danala, Gopichandh; Mirniaharikandehei, Seyedehnafiseh; Qiu, Yuchen; Liu, Hong; Zheng, Bin

    2018-01-30

    In order to automatically identify a set of effective mammographic image features and build an optimal breast cancer risk stratification model, this study aims to investigate advantages of applying a machine learning approach embedded with a locally preserving projection (LPP) based feature combination and regeneration algorithm to predict short-term breast cancer risk. A dataset involving negative mammograms acquired from 500 women was assembled. This dataset was divided into two age-matched classes of 250 high risk cases in which cancer was detected in the next subsequent mammography screening and 250 low risk cases, which remained negative. First, a computer-aided image processing scheme was applied to segment fibro-glandular tissue depicted on mammograms and initially compute 44 features related to the bilateral asymmetry of mammographic tissue density distribution between left and right breasts. Next, a multi-feature fusion based machine learning classifier was built to predict the risk of cancer detection in the next mammography screening. A leave-one-case-out (LOCO) cross-validation method was applied to train and test the machine learning classifier embedded with a LLP algorithm, which generated a new operational vector with 4 features using a maximal variance approach in each LOCO process. Results showed a 9.7% increase in risk prediction accuracy when using this LPP-embedded machine learning approach. An increased trend of adjusted odds ratios was also detected in which odds ratios increased from 1.0 to 11.2. This study demonstrated that applying the LPP algorithm effectively reduced feature dimensionality, and yielded higher and potentially more robust performance in predicting short-term breast cancer risk.

  3. Prediction of breast cancer risk using a machine learning approach embedded with a locality preserving projection algorithm

    NASA Astrophysics Data System (ADS)

    Heidari, Morteza; Zargari Khuzani, Abolfazl; Hollingsworth, Alan B.; Danala, Gopichandh; Mirniaharikandehei, Seyedehnafiseh; Qiu, Yuchen; Liu, Hong; Zheng, Bin

    2018-02-01

    In order to automatically identify a set of effective mammographic image features and build an optimal breast cancer risk stratification model, this study aims to investigate advantages of applying a machine learning approach embedded with a locally preserving projection (LPP) based feature combination and regeneration algorithm to predict short-term breast cancer risk. A dataset involving negative mammograms acquired from 500 women was assembled. This dataset was divided into two age-matched classes of 250 high risk cases in which cancer was detected in the next subsequent mammography screening and 250 low risk cases, which remained negative. First, a computer-aided image processing scheme was applied to segment fibro-glandular tissue depicted on mammograms and initially compute 44 features related to the bilateral asymmetry of mammographic tissue density distribution between left and right breasts. Next, a multi-feature fusion based machine learning classifier was built to predict the risk of cancer detection in the next mammography screening. A leave-one-case-out (LOCO) cross-validation method was applied to train and test the machine learning classifier embedded with a LLP algorithm, which generated a new operational vector with 4 features using a maximal variance approach in each LOCO process. Results showed a 9.7% increase in risk prediction accuracy when using this LPP-embedded machine learning approach. An increased trend of adjusted odds ratios was also detected in which odds ratios increased from 1.0 to 11.2. This study demonstrated that applying the LPP algorithm effectively reduced feature dimensionality, and yielded higher and potentially more robust performance in predicting short-term breast cancer risk.
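
    The evaluation loop described in the two records above can be sketched as follows (Python with scikit-learn; PCA and an SVM are stand-ins for the study's LPP feature regeneration and fusion classifier, which are not reproduced here).

    import numpy as np
    from sklearn.decomposition import PCA            # stand-in for LPP
    from sklearn.model_selection import LeaveOneOut
    from sklearn.svm import SVC

    def loco_risk_scores(features, labels, n_components=4):
        # Leave-one-case-out loop: inside each fold the 44 asymmetry features are
        # regenerated as a 4-feature operational vector and a classifier is
        # trained on the remaining cases.
        scores = np.zeros(len(labels), dtype=float)
        for train_idx, test_idx in LeaveOneOut().split(features):
            reducer = PCA(n_components=n_components).fit(features[train_idx])
            clf = SVC(probability=True).fit(
                reducer.transform(features[train_idx]), labels[train_idx])
            scores[test_idx] = clf.predict_proba(
                reducer.transform(features[test_idx]))[:, 1]
        return scores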

  4. Frameworks and Tools for High-Confidence Design of Adaptive, Distributed Embedded Control Systems. Multi-University Research Initiative on High-Confidence Design for Distributed Embedded Systems

    DTIC Science & Technology

    2009-01-01

    controllers (currently using the Robostix+Gumstix pair). The interface between the plant simulator and the controller is 'hard real-time', and the xPC box... simulation) on aerobatic maneuver design for the STARMAC quadrotor helicopter testbed. In related work, we have developed a new optimization scheme... for scheduling hybrid systems, and have demonstrated the results on an autonomous car simulation testbed. We are focusing efforts this summer for

  5. Experiences as an embedded librarian in online courses.

    PubMed

    Konieczny, Alison

    2010-01-01

    Embedded librarianship gives librarians a prime opportunity to have a direct, positive impact in a clinical setting, classroom setting, or within a working group by providing integrated services that cater to the group's needs. Extending embedded librarian services beyond the various physical settings and into online classrooms is an exceptional way for librarians to engage online learners. This group of students is growing rapidly in numbers and could benefit greatly from having library services and resources incorporated into their classes. The author's services as an embedded librarian in fully online courses at a medium-sized university will be discussed, as will strategies, lessons learned, and opportunities for engaging in this realm. To develop a foundation of knowledge on embedded librarianship, an overview of this topic is provided.

  6. Optimal wavelets for biomedical signal compression.

    PubMed

    Nielsen, Mogens; Kamavuako, Ernest Nlandu; Andersen, Michael Midtgaard; Lucas, Marie-Françoise; Farina, Dario

    2006-07-01

    Signal compression is gaining importance in biomedical engineering due to the potential applications in telemedicine. In this work, we propose a novel scheme of signal compression based on signal-dependent wavelets. To adapt the mother wavelet to the signal for the purpose of compression, it is necessary to define (1) a family of wavelets that depend on a set of parameters and (2) a quality criterion for wavelet selection (i.e., wavelet parameter optimization). We propose the use of an unconstrained parameterization of the wavelet for wavelet optimization. A natural performance criterion for compression is the minimization of the signal distortion rate given the desired compression rate. For coding the wavelet coefficients, we adopted the embedded zerotree wavelet coding algorithm, although any coding scheme may be used with the proposed wavelet optimization. As a representative example of application, the coding/encoding scheme was applied to surface electromyographic signals recorded from ten subjects. The distortion rate strongly depended on the mother wavelet (for example, at a 50% compression rate: optimal wavelet, mean ± SD, 5.46 ± 1.01%; worst wavelet, 12.76 ± 2.73%). Thus, optimization significantly improved performance with respect to previous approaches based on classic wavelets. The algorithm can be applied to any signal type since the optimal wavelet is selected on a signal-by-signal basis. Examples of application to ECG and EEG signals are also reported.
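
    The distortion-rate criterion above lends itself to a compact illustration. The sketch below is not the authors' EZW coder: it approximates the rate constraint by keeping only the largest wavelet coefficients for a candidate mother wavelet and reports the reconstruction error as a percentage of signal energy; the signal is a synthetic placeholder and the wavelet list contains classic, non-optimized wavelets.

    ```python
    # Hedged sketch of a distortion-rate comparison across mother wavelets:
    # keep the largest coefficients consistent with a target compression rate
    # (a proxy for the embedded coder) and measure the reconstruction error.
    import numpy as np
    import pywt

    def distortion_rate(signal, wavelet, compression=0.5, level=4):
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        flat = np.concatenate(coeffs)
        k = max(1, int(len(flat) * (1.0 - compression)))     # coefficients kept
        thresh = np.sort(np.abs(flat))[-k]                   # magnitude threshold
        kept = [c * (np.abs(c) >= thresh) for c in coeffs]   # zero the small ones
        rec = pywt.waverec(kept, wavelet)[:len(signal)]
        return 100.0 * np.linalg.norm(signal - rec) / np.linalg.norm(signal)

    # Placeholder EMG-like test signal; rank a few classic wavelets.
    rng = np.random.default_rng(0)
    emg = rng.standard_normal(4096) * np.sin(np.linspace(0, 8 * np.pi, 4096))
    for w in ["db2", "db4", "sym5", "coif3"]:
        print(w, round(distortion_rate(emg, w), 2), "% distortion at 50% compression")
    ```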

  7. Visually Lossless Data Compression for Real-Time Frame/Pushbroom Space Science Imagers

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Venbrux, Jack; Bhatia, Prakash; Miller, Warner H.

    2000-01-01

    A visually lossless data compression technique is currently being developed for space science applications under the requirement of high-speed push-broom scanning. The technique is also applicable to frame based imaging and is error-resilient in that error propagation is contained within a few scan lines. The algorithm is based on a block transform of a hybrid of modulated lapped transform (MLT) and discrete cosine transform (DCT), or a 2-dimensional lapped transform, followed by bit-plane encoding; this combination results in an embedded bit string with exactly the compression rate desired by the user. The approach requires no unique table to maximize its performance. The compression scheme performs well on a suite of test images typical of images from spacecraft instruments. Flight qualified hardware implementations are in development; a functional chip set is expected by the end of 2001. The chip set is being designed to compress data in excess of 20 Msamples/sec and support quantizations from 2 to 16 bits.
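
    The "embedded bit string" property can be illustrated with a toy bit-plane coder: coefficient bits are emitted most-significant plane first, so the stream can be truncated at any bit budget and still decode to a coarser approximation. This is only a schematic stand-in; the MLT/DCT hybrid transform and the flight coder are not reproduced, and the coefficient values below are arbitrary.

    ```python
    # Schematic bit-plane encoding of integer transform coefficients: sign bits
    # first, then planes from the most significant bit downward, so cutting the
    # stream at any budget yields a graceful approximation.
    import numpy as np

    def bitplane_encode(coeffs, bit_budget):
        c = np.asarray(coeffs, dtype=np.int64)
        signs = (c < 0).astype(np.uint8)
        mags = np.abs(c)
        top = int(mags.max()).bit_length() - 1 if mags.max() > 0 else 0
        stream = list(signs)                      # one sign bit per coefficient
        for plane in range(top, -1, -1):          # MSB plane -> LSB plane
            stream.extend(((mags >> plane) & 1).tolist())
        return stream[:bit_budget], top

    def bitplane_decode(stream, n_coeffs, top):
        signs = np.array(stream[:n_coeffs], dtype=np.int64)
        mags = np.zeros(n_coeffs, dtype=np.int64)
        pos = n_coeffs
        for plane in range(top, -1, -1):
            chunk = stream[pos:pos + n_coeffs]
            mags[:len(chunk)] |= np.array(chunk, dtype=np.int64) << plane
            pos += n_coeffs
        return np.where(signs == 1, -mags, mags)

    coeffs = [37, -5, 12, 0, -1, 9, 3, -22]       # toy quantized transform block
    bits, top = bitplane_encode(coeffs, bit_budget=40)
    print(bitplane_decode(bits, len(coeffs), top))
    ```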

  8. Modeling and Inverse Controller Design for an Unmanned Aerial Vehicle Based on the Self-Organizing Map

    NASA Technical Reports Server (NTRS)

    Cho, Jeongho; Principe, Jose C.; Erdogmus, Deniz; Motter, Mark A.

    2005-01-01

    The next generation of aircraft will have dynamics that vary considerably over the operating regime. A single controller will have difficulty meeting the design specifications. In this paper, an SOM-based local linear modeling scheme of an unmanned aerial vehicle (UAV) is developed to design a set of inverse controllers. The SOM selects the operating regime depending only on the embedded output space information and avoids normalization of the input data. Each local linear model is associated with a linear controller, which is easy to design. Switching of the controllers is done synchronously with the active local linear model that tracks the different operating conditions. The proposed multiple modeling and control strategy has been successfully tested in a simulator that models the LoFLYTE UAV.
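
    As an illustration of the switching idea only, the sketch below dispatches to the controller paired with the winning SOM unit; the codebook vectors, the embedded output vector, and the two gain values are hypothetical placeholders, and the simple error-feedback law is not the paper's inverse controller design.

    ```python
    # Hedged sketch: select the active local model by nearest SOM codebook
    # vector and apply the controller associated with that model.
    import numpy as np

    def winner(prototypes, y_embed):
        """Index of the SOM codebook vector closest to the embedded output vector."""
        return int(np.argmin(np.linalg.norm(prototypes - y_embed, axis=1)))

    def switched_control(prototypes, controllers, y_embed, reference):
        """Dispatch to the controller paired with the active local linear model."""
        return controllers[winner(prototypes, y_embed)](y_embed, reference)

    # Toy usage: two operating regimes with different (hypothetical) gains.
    prototypes = np.array([[0.0, 0.0], [5.0, 5.0]])
    controllers = [lambda y, r, k=k: k * (r - y[0]) for k in (0.8, 0.3)]
    u = switched_control(prototypes, controllers, np.array([4.2, 5.1]), reference=6.0)
    print(u)
    ```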

  9. Measurement and reactive burn modeling of the shock to detonation transition for the HMX based explosive LX-14

    NASA Astrophysics Data System (ADS)

    Jones, J. D.; Ma, Xia; Clements, B. E.; Gibson, L. L.; Gustavsen, R. L.

    2017-06-01

    Gas-gun driven plate-impact techniques were used to study the shock to detonation transition in LX-14 (95.5 weight % HMX, 4.5 weight % Estane binder). The transition was recorded using embedded electromagnetic particle velocity gauges. Initial shock pressures, P, ranged from 2.5 to 8 GPa and the resulting distances to detonation, xD, were in the range 1.9 to 14 mm. Numerical simulations using the SURF reactive burn scheme coupled with a linear Us-up / Mie-Grüneisen equation of state for the reactant and a JWL equation of state for the products, match the experimental data well. Comparison of simulation with experiment as well as the "best fit" parameter set for the simulations is presented.

  10. Preserving privacy of online digital physiological signals using blind and reversible steganography.

    PubMed

    Shiu, Hung-Jr; Lin, Bor-Sing; Huang, Chien-Hung; Chiang, Pei-Ying; Lei, Chin-Laung

    2017-11-01

    Physiological signals such as electrocardiograms (ECG) and electromyograms (EMG) are widely used to diagnose diseases. Presently, the Internet offers numerous cloud storage services which enable digital physiological signals to be uploaded for convenient access and use. Numerous online databases of medical signals have been built. The data in them must be processed in a manner that preserves patients' confidentiality. A reversible error-correcting-coding strategy will be adopted to transform digital physiological signals into a new bit-stream that uses a matrix in which the Hamming code is embedded to pass secret messages or private information. The shared keys are the matrix and the version of the Hamming code. An online open database, the MIT-BIH arrhythmia database, was used to test the proposed algorithms. The time-complexity, capacity and robustness are evaluated. Comparisons of these evaluations with related work are also presented. This work proposes a reversible, low-payload steganographic scheme for preserving the privacy of physiological signals. An (n, m)-Hamming code is used to insert (n - m) secret bits into n bits of a cover signal. The number of embedded bits per modification is higher than in comparable methods, the computation is efficient and the scheme is secure. Unlike other Hamming-code based schemes, the proposed scheme is both reversible and blind. Copyright © 2017 Elsevier B.V. All rights reserved.
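
    To make the (n, m)-Hamming idea concrete, the sketch below shows plain matrix embedding with the (7, 4) code: three secret bits are carried by seven cover bits while flipping at most one of them, with the parity-check matrix playing the role of the shared key. The reversibility and error-correction aspects of the scheme above are not reproduced here, and the bit values are arbitrary.

    ```python
    # Hedged sketch of Hamming(7,4) matrix embedding: the receiver reads the
    # secret as the syndrome of the block, so the sender flips at most one bit
    # to make the syndrome equal the secret.
    import numpy as np

    H = np.array([[1, 0, 1, 0, 1, 0, 1],     # parity-check matrix of Hamming(7,4)
                  [0, 1, 1, 0, 0, 1, 1],     # column j is the binary code of j
                  [0, 0, 0, 1, 1, 1, 1]])

    def embed(cover, secret):
        """Flip at most one of the 7 cover bits so the syndrome equals the secret."""
        cover = np.array(cover) % 2
        diff = (H @ cover + np.array(secret)) % 2           # syndrome mismatch
        if diff.any():
            pos = int(diff[0] + 2 * diff[1] + 4 * diff[2])  # 1-based column index
            cover[pos - 1] ^= 1
        return cover

    def extract(stego):
        """Recover the 3 secret bits as the syndrome of the stego block."""
        return (H @ np.array(stego)) % 2

    stego = embed([1, 0, 1, 1, 0, 0, 1], secret=[1, 0, 1])
    print(stego, extract(stego))
    ```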

  11. Satisfaction rates with the current Special Type Consultation (STC) reimbursement scheme among General Practitioners - A Mixed Methods Study.

    PubMed

    Kiely, A; O'Meara, S; Fitzgerald, N; Regan, A M; Durcan, P; McGuire, G; Kelly, M E

    2017-03-10

    The Special Type Consultation (STC) scheme is a fee-for-service reimbursement scheme for General Practitioners (GPs) in Ireland. Introduced in 1989, the scheme includes specified patient services involving the application of a learned skill, e.g. suturing. This study aims to establish the extent to which GPs believe this scheme is appropriate for current General Practice. This is an embedded mixed-methods study combining quantitative data on GPs' working experience of and qualitative data on GPs' attitudes towards the scheme. Data were collected by means of an anonymous postal questionnaire. The response rate was 60.4% (n=159). Twenty-nine percent (n=46) disagreed and 65% (n=104) strongly disagreed that the current list of special items is satisfactory. Two overriding themes were identified: economics and advancement of the STC process. This study demonstrates an overwhelming consensus among GPs that the current STC scheme is outdated and in urgent need of revision to reflect modern General Practice.

  12. Learning linear transformations between counting-based and prediction-based word embeddings

    PubMed Central

    Hayashi, Kohei; Kawarabayashi, Ken-ichi

    2017-01-01

    Despite the growing interest in prediction-based word embedding learning methods, it remains unclear how the vector spaces learnt by the prediction-based methods differ from those of the counting-based methods, or whether one can be transformed into the other. To study the relationship between counting-based and prediction-based embeddings, we propose a method for learning a linear transformation between two given sets of word embeddings. Our proposal contributes to the word embedding learning research in three ways: (a) we propose an efficient method to learn a linear transformation between two sets of word embeddings, (b) using the transformation learnt in (a), we empirically show that it is possible to predict distributed word embeddings for novel unseen words, and (c) empirically it is possible to linearly transform counting-based embeddings to prediction-based embeddings, for frequent words, different POS categories, and varying degrees of ambiguities. PMID:28926629
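
    A minimal sketch of the core operation, under the assumption that the two embedding matrices are row-aligned over a shared vocabulary: the linear map is obtained by ordinary least squares and then applied to a new vector. Regularization, vocabulary alignment, and the paper's evaluation protocol are omitted, and the matrices below are random placeholders.

    ```python
    # Hedged sketch: learn W minimizing ||X W - Y||_F between two embedding sets.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((5000, 300))                 # placeholder counting-based vectors
    W_true = rng.standard_normal((300, 300))             # hidden "ground-truth" map (toy setup)
    Y = X @ W_true + 0.01 * rng.standard_normal((5000, 300))   # placeholder prediction-based vectors

    W, *_ = np.linalg.lstsq(X, Y, rcond=None)            # learnt linear transformation
    x_new = rng.standard_normal(300)                     # embedding of an unseen word
    y_pred = x_new @ W                                   # predicted prediction-based embedding
    print("relative fit error:", np.linalg.norm(X @ W - Y) / np.linalg.norm(Y))
    ```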

  13. Direct Replacement of Arbitrary Grid-Overlapping by Non-Structured Grid

    NASA Technical Reports Server (NTRS)

    Kao, Kai-Hsiung; Liou, Meng-Sing

    1994-01-01

    A new approach that uses nonstructured mesh to replace the arbitrarily overlapped structured regions of embedded grids is presented. The present methodology uses the Chimera composite overlapping mesh system so that the physical domain of the flowfield is subdivided into regions which can accommodate easily generated grids for complex configurations. In addition, a Delaunay triangulation technique generates nonstructured triangular mesh which wraps over the interconnecting region of embedded grids. The present approach, termed the DRAGON grid, is designed to have three important advantages: eliminating some difficulties of the Chimera scheme, such as the orphan points and/or bad quality of interpolation stencils; handling grid communication in a fully conservative way; and allowing straightforward extension to three dimensions. A computer code based on a time accurate, finite volume, high resolution scheme for solving the compressible Navier-Stokes equations has been further developed to include both the Chimera overset grid and the nonstructured mesh schemes. For steady state problems, the local time stepping accelerates convergence based on a Courant-Friedrichs-Lewy (CFL) number near the local stability limit. Numerical tests on representative steady and unsteady supersonic inviscid flows with strong shock waves are demonstrated.

  14. Power impact of loop buffer schemes for biomedical wireless sensor nodes.

    PubMed

    Artes, Antonio; Ayala, Jose L; Catthoor, Francky

    2012-11-06

    Instruction memory organisations are pointed out as one of the major sources of energy consumption in embedded systems. As these systems are characterised by restrictive resources and a low-energy budget, any enhancement in this component allows not only to decrease the energy consumption but also to have a better distribution of the energy budget throughout the system. Loop buffering is an effective scheme to reduce energy consumption in instruction memory organisations. In this paper, the loop buffer concept is applied in real-life embedded applications that are widely used in biomedical Wireless Sensor Nodes, to show which scheme of loop buffer is more suitable for applications with certain behaviour. Post-layout simulations demonstrate that a trade-off exists between the complexity of the loop buffer architecture and the energy savings of utilising it. Therefore, the use of loop buffer architectures in order to optimise the instruction memory organisation from the energy efficiency point of view should be evaluated carefully, taking into account two factors: (1) the percentage of the execution time of the application that is related to the execution of the loops, and (2) the distribution of the execution time percentage over each one of the loops that form the application.

  15. Variable importance in nonlinear kernels (VINK): classification of digitized histopathology.

    PubMed

    Ginsburg, Shoshana; Ali, Sahirzeeshan; Lee, George; Basavanhally, Ajay; Madabhushi, Anant

    2013-01-01

    Quantitative histomorphometry is the process of modeling appearance of disease morphology on digitized histopathology images via image-based features (e.g., texture, graphs). Due to the curse of dimensionality, building classifiers with large numbers of features requires feature selection (which may require a large training set) or dimensionality reduction (DR). DR methods map the original high-dimensional features in terms of eigenvectors and eigenvalues, which limits the potential for feature transparency or interpretability. Although methods exist for variable selection and ranking on embeddings obtained via linear DR schemes (e.g., principal components analysis (PCA)), similar methods do not yet exist for nonlinear DR (NLDR) methods. In this work we present a simple yet elegant method for approximating the mapping between the data in the original feature space and the transformed data in the kernel PCA (KPCA) embedding space; this mapping provides the basis for quantification of variable importance in nonlinear kernels (VINK). We show how VINK can be implemented in conjunction with the popular Isomap and Laplacian eigenmap algorithms. VINK is evaluated in the contexts of three different problems in digital pathology: (1) predicting five year PSA failure following radical prostatectomy, (2) predicting Oncotype DX recurrence risk scores for ER+ breast cancers, and (3) distinguishing good and poor outcome p16+ oropharyngeal tumors. We demonstrate that subsets of features identified by VINK provide similar or better classification or regression performance compared to the original high dimensional feature sets.

  16. Counterfactual entanglement distribution without transmitting any particles.

    PubMed

    Guo, Qi; Cheng, Liu-Yong; Chen, Li; Wang, Hong-Fu; Zhang, Shou

    2014-04-21

    To date, all schemes for entanglement distribution have needed to send entangled particles or a separable mediating particle among distant participants. Here, we propose a counterfactual protocol for entanglement distribution, in contrast to the traditional forms; that is, two distant particles can be entangled with no physical particle traveling between the two remote participants. We also present an alternative scheme for realizing the counterfactual photonic entangled state distribution using a Michelson-type interferometer and a self-assembled GaAs/InAs quantum dot embedded in an optical microcavity. The numerical analysis of the effect of experimental imperfections on the performance of the scheme shows that the entanglement distribution may be implementable with high fidelity.

  17. Coherence and visibility for vectorial light.

    PubMed

    Luis, Alfredo

    2010-08-01

    Two-path interference of transversal vectorial waves is embedded within a larger scheme: this is four-path interference between four scalar waves. This comprises previous approaches to coherence between vectorial waves and restores the equivalence between correlation-based coherence and visibility.

  18. The Quality of the Embedding Potential Is Decisive for Minimal Quantum Region Size in Embedding Calculations: The Case of the Green Fluorescent Protein.

    PubMed

    Nåbo, Lina J; Olsen, Jógvan Magnus Haugaard; Martínez, Todd J; Kongsted, Jacob

    2017-12-12

    The calculation of spectral properties for photoactive proteins is challenging because of the large cost of electronic structure calculations on large systems. Mixed quantum mechanical (QM) and molecular mechanical (MM) methods are typically employed to make such calculations computationally tractable. This study addresses the connection between the minimal QM region size and the method used to model the MM region in the calculation of absorption properties-here exemplified for calculations on the green fluorescent protein. We find that polarizable embedding is necessary for a qualitatively correct description of the MM region, and that this enables the use of much smaller QM regions compared to fixed charge electrostatic embedding. Furthermore, absorption intensities converge very slowly with system size and inclusion of effective external field effects in the MM region through polarizabilities is therefore very important. Thus, this embedding scheme enables accurate prediction of intensities for systems that are too large to be treated fully quantum mechanically.

  19. Private Yet Abuse Resistant Open Publishing

    NASA Astrophysics Data System (ADS)

    Danezis, George; Laurie, Ben

    We present the problem of abusive, off-topic or repetitive postings on open publishing websites, and the difficulties associated with filtering them out. We propose a scheme that extracts enough information to allow for filtering, based on users being embedded in a social network. Our system maintains the privacy of the poster, and does not require full identification to work well. We present a concrete realization using constructions based on discrete logarithms, and a sketch of how our scheme could be implemented in a centralized fashion.

  20. Electrostatically Embedded Many-Body Approximation for Systems of Water, Ammonia, and Sulfuric Acid and the Dependence of Its Performance on Embedding Charges.

    PubMed

    Leverentz, Hannah R; Truhlar, Donald G

    2009-06-09

    This work tests the capability of the electrostatically embedded many-body (EE-MB) method to calculate accurate (relative to conventional calculations carried out at the same level of electronic structure theory and with the same basis set) binding energies of mixed clusters (as large as 9-mers) consisting of water, ammonia, sulfuric acid, and ammonium and bisulfate ions. This work also investigates the dependence of the accuracy of the EE-MB approximation on the type and origin of the charges used for electrostatically embedding these clusters. The conclusions reached are that for all of the clusters and sets of embedding charges studied in this work, the electrostatically embedded three-body (EE-3B) approximation is capable of consistently yielding relative errors of less than 1% and an average relative absolute error of only 0.3%, and that the performance of the EE-MB approximation does not depend strongly on the specific set of embedding charges used. The electrostatically embedded pairwise approximation has errors about an order of magnitude larger than EE-3B. This study also explores the question of why the accuracy of the EE-MB approximation shows such little dependence on the types of embedding charges employed.
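
    For orientation, the underlying many-body expansion truncated at the two- and three-body levels takes the standard form below (a textbook expression, not quoted from the paper); in the EE-MB variants each monomer, dimer, and trimer energy is evaluated in the field of the embedding charges representing the remaining fragments.

    ```latex
    % Many-body expansion of the total energy of N monomers, truncated at the
    % 2-body (EE-PA) and 3-body (EE-3B) levels; each fragment energy E is
    % computed in the presence of the embedding point charges.
    \begin{align}
    E_{\mathrm{2B}} &= \sum_{i} E_i + \sum_{i<j} \bigl(E_{ij} - E_i - E_j\bigr),\\
    E_{\mathrm{3B}} &= E_{\mathrm{2B}} + \sum_{i<j<k} \bigl(E_{ijk} - E_{ij} - E_{ik} - E_{jk} + E_i + E_j + E_k\bigr).
    \end{align}
    ```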

  1. Deriving evaluation indicators for knowledge transfer and dialogue processes in the context of climate research

    NASA Astrophysics Data System (ADS)

    Treffeisen, Renate; Grosfeld, Klaus; Kuhlmann, Franziska

    2017-12-01

    Knowledge transfer and dialogue processes in the field of climate science have captured intensive attention in recent years as being an important part of research activities. Therefore, the demand and pressure to develop a set of indicators for the evaluation of different activities in this field have increased, too. Research institutes are being asked more and more to build up structures in order to map these activities and, thus, are obliged to demonstrate the success of these efforts. This paper aims to serve as an input to stimulate further reflection on the field of evaluation of knowledge transfer and dialogue processes in the context of climate sciences. The work performed in this paper is embedded in the efforts of the German Helmholtz Association in the research field of earth and environment and is driven by the need to apply suitable indicators for knowledge transfer and dialogue processes in climate research center evaluations. We carry out a comparative analysis of three long-term activities and derive a set of indicators for measuring their output and outcome by balancing the wide diversity and range of activity contents as well as the different tools to realize them. The case examples are based on activities which are part of the regional Helmholtz Climate Initiative Regional Climate Change (REKLIM) and the Climate Office for Polar Regions and Sea Level Rise at the Alfred Wegener Institute Helmholtz Centre for Polar and Marine Research. Both institutional units have been working on a wide range of different knowledge transfer and dialogue processes since 2008/2009. We demonstrate that indicators for the evaluation must be based on the unique objectives of the individual activities and the framework they are embedded in (e.g., research foci which provide the background for the performed knowledge transfer and dialogue processes) but can partly be classified in a principle two-dimensional scheme. This scheme might serve as a usable basis for climate research center evaluation in the future. It, furthermore, underlines the need for further development of proper mechanisms to evaluate scientific centers, in particular with regard to knowledge transfer and dialogue processes.

  2. Robust Audio Watermarking Scheme Based on Deterministic Plus Stochastic Model

    NASA Astrophysics Data System (ADS)

    Dhar, Pranab Kumar; Kim, Cheol Hong; Kim, Jong-Myon

    Digital watermarking has been widely used for protecting digital contents from unauthorized duplication. This paper proposes a new watermarking scheme based on spectral modeling synthesis (SMS) for copyright protection of digital contents. SMS defines a sound as a combination of deterministic events plus a stochastic component that makes it possible for a synthesized sound to attain all of the perceptual characteristics of the original sound. In our proposed scheme, watermarks are embedded into the highest prominent peak of the magnitude spectrum of each non-overlapping frame in peak trajectories. Simulation results indicate that the proposed watermarking scheme is highly robust against various kinds of attacks such as noise addition, cropping, re-sampling, re-quantization, and MP3 compression and achieves similarity values ranging from 17 to 22. In addition, our proposed scheme achieves signal-to-noise ratio (SNR) values ranging from 29 dB to 30 dB.
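
    A hedged sketch of the per-frame peak embedding idea: in each non-overlapping frame the largest spectral peak is located and its magnitude is quantized to an even or odd multiple of a step to carry one watermark bit (a simple quantization-index stand-in; the deterministic-plus-stochastic SMS modeling and the robustness machinery of the paper are not reproduced, and the host signal and step size are placeholders).

    ```python
    # Embed one bit per frame into the magnitude of the most prominent spectral
    # peak, then read the bits back from the peak-magnitude parity.
    import numpy as np

    def embed_bits(audio, bits, frame=1024, step=0.05):
        out = audio.astype(float)
        for k, bit in enumerate(bits):
            seg = out[k * frame:(k + 1) * frame]
            spec = np.fft.rfft(seg)
            p = int(np.argmax(np.abs(spec[1:])) + 1)          # highest non-DC peak
            mag, phase = np.abs(spec[p]), np.angle(spec[p])
            q = np.floor(mag / step)
            q = q + ((q + bit) % 2 == 1)                      # force parity = bit
            spec[p] = (q * step + step / 2) * np.exp(1j * phase)
            out[k * frame:(k + 1) * frame] = np.fft.irfft(spec, n=frame)
        return out

    def extract_bits(audio, n_bits, frame=1024, step=0.05):
        bits = []
        for k in range(n_bits):
            spec = np.fft.rfft(audio[k * frame:(k + 1) * frame])
            p = int(np.argmax(np.abs(spec[1:])) + 1)
            bits.append(int(np.floor(np.abs(spec[p]) / step)) % 2)
        return bits

    t = np.linspace(0, 1, 8 * 1024, endpoint=False)
    host = 0.6 * np.sin(2 * np.pi * 440 * t)                  # placeholder host signal
    marked = embed_bits(host, [1, 0, 1, 1, 0, 0, 1, 0])
    print(extract_bits(marked, 8))
    ```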

  3. Teleportation of a Toffoli gate among distant solid-state qubits with quantum dots embedded in optical microcavities

    PubMed Central

    Hu, Shi; Cui, Wen-Xue; Wang, Dong-Yang; Bai, Cheng-Hua; Guo, Qi; Wang, Hong-Fu; Zhu, Ai-Dong; Zhang, Shou

    2015-01-01

    Teleportation of unitary operations can be viewed as a quantum remote control. The remote realization of robust multiqubit logic gates among distant long-lived qubit registers is a key challenge for quantum computation and quantum information processing. Here we propose a simple and deterministic scheme for teleportation of a Toffoli gate among three spatially separated electron spin qubits in optical microcavities by using local linear optical operations, an auxiliary electron spin, two circularly-polarized entangled photon pairs, photon measurements, and classical communication. We assess the feasibility of the scheme and show that the scheme can be achieved with high average fidelity under the current technology. The scheme opens promising perspectives for constructing long-distance quantum communication and quantum computation networks with solid-state qubits. PMID:26225781

  4. Teleportation of a Toffoli gate among distant solid-state qubits with quantum dots embedded in optical microcavities.

    PubMed

    Hu, Shi; Cui, Wen-Xue; Wang, Dong-Yang; Bai, Cheng-Hua; Guo, Qi; Wang, Hong-Fu; Zhu, Ai-Dong; Zhang, Shou

    2015-07-30

    Teleportation of unitary operations can be viewed as a quantum remote control. The remote realization of robust multiqubit logic gates among distant long-lived qubit registers is a key challenge for quantum computation and quantum information processing. Here we propose a simple and deterministic scheme for teleportation of a Toffoli gate among three spatially separated electron spin qubits in optical microcavities by using local linear optical operations, an auxiliary electron spin, two circularly-polarized entangled photon pairs, photon measurements, and classical communication. We assess the feasibility of the scheme and show that the scheme can be achieved with high average fidelity under the current technology. The scheme opens promising perspectives for constructing long-distance quantum communication and quantum computation networks with solid-state qubits.

  5. Multiple Embedded Processors for Fault-Tolerant Computing

    NASA Technical Reports Server (NTRS)

    Bolotin, Gary; Watson, Robert; Katanyoutanant, Sunant; Burke, Gary; Wang, Mandy

    2005-01-01

    A fault-tolerant computer architecture has been conceived in an effort to reduce vulnerability to single-event upsets (spurious bit flips caused by impingement of energetic ionizing particles or photons). As in some prior fault-tolerant architectures, the redundancy needed for fault tolerance is obtained by use of multiple processors in one computer. Unlike prior architectures, the multiple processors are embedded in a single field-programmable gate array (FPGA). What makes this new approach practical is the recent commercial availability of FPGAs that are capable of having multiple embedded processors. A working prototype (see figure) consists of two embedded IBM PowerPC 405 processor cores and a comparator built on a Xilinx Virtex-II Pro FPGA. This relatively simple instantiation of the architecture implements an error-detection scheme. A planned future version, incorporating four processors and two comparators, would correct some errors in addition to detecting them.

  6. ROSA: Resource-Oriented Service Management Schemes for Web of Things in a Smart Home

    PubMed Central

    Chen, Peng-Yu

    2017-01-01

    A pervasive-computing-enriched smart home environment, which contains many embedded and tiny intelligent devices and sensors coordinated by service management mechanisms, is capable of anticipating intentions of occupants and providing appropriate services accordingly. Although a wealth of research achievements has been reported in recent years, the degree of market acceptance is still low. The main reason is that most of the devices and services in such environments depend on a particular platform or technology, making it hard to develop an application by composing the devices or services. Meanwhile, the concept of the Web of Things (WoT) has recently become popular. Based on WoT, developers can build applications based on popular web tools or technologies. Consequently, the objective of this paper is to propose a set of novel WoT-driven plug-and-play service management schemes for a smart home called Resource-Oriented Service Administration (ROSA). We have implemented an application prototype, and experiments are performed to show the effectiveness of the proposed approach. The results of this research can be a foundation for realizing the vision of “end user programmable smart environments”. PMID:28934159

  7. Generalized centripetal force law and quantization of motion constrained on 2D surfaces

    NASA Astrophysics Data System (ADS)

    Liu, Q. H.; Zhang, J.; Lian, D. K.; Hu, L. D.; Li, Z.

    2017-03-01

    For a particle of mass μ moving on a 2D surface f(x) = 0 embedded in 3D Euclidean space with coordinates x, it is an open and controversial problem whether Dirac's canonical quantization scheme for the constrained motion allows for the geometric potential that has been experimentally confirmed. We note that Dirac's scheme hypothesizes that the symmetries indicated by the classical brackets among the positions x, the momenta p, and the Hamiltonian H_c remain valid in quantum mechanics, i.e., the Dirac brackets [x, H_c]_D and [p, H_c]_D hold true after quantization, in addition to the fundamental ones [x, x]_D, [x, p]_D and [p, p]_D. This set of hypotheses implies that the Hamiltonian operator is determined simultaneously during quantization. The quantum mechanical relations corresponding to the classical relation p/μ = [x, H_c]_D directly give the geometric momenta. The time derivative of the momenta, ṗ = [p, H_c]_D, is in classical mechanics the generalized centripetal force law for a particle on the 2D surface, which in quantum mechanics permits both the geometric momenta and the geometric potential.

  8. Robust optical signal-to-noise ratio monitoring scheme using a phase-modulator-embedded fiber loop mirror.

    PubMed

    Ku, Yuen-Ching; Chan, Chun-Kit; Chen, Lian-Kuan

    2007-06-15

    We propose and experimentally demonstrate a novel in-band optical signal-to-noise ratio (OSNR) monitoring technique using a phase-modulator-embedded fiber loop mirror. This technique measures the in-band OSNR accurately by observing the output power of a fiber loop mirror filter, where the transmittance is adjusted by an embedded phase modulator driven by a low-frequency periodic signal. The measurement errors are less than 0.5 dB for an OSNR between 0 and 40 dB in a 10 Gbit/s non-return-to-zero system. This technique was also shown experimentally to have high robustness against various system impairments and high feasibility to be deployed in practical implementation.

  9. Analysis of supersonic combustion flow fields with embedded subsonic regions

    NASA Technical Reports Server (NTRS)

    Dash, S.; Delguidice, P.

    1972-01-01

    The viscous characteristic analysis for supersonic chemically reacting flows was extended to include provisions for analyzing embedded subsonic regions. The numerical method developed to analyze these mixed subsonic-supersonic flow fields is described. The boundary conditions related to the supersonic-subsonic and subsonic-supersonic transitions are discussed, along with a heuristic description of several other numerical schemes for analyzing this problem. An analysis of shock waves generated either by pressure mismatch between the injected fluid and surrounding flow or by chemical heat release is also described.

  10. Design and Smartphone-Based Implementation of a Chaotic Video Communication Scheme via WAN Remote Transmission

    NASA Astrophysics Data System (ADS)

    Lin, Zhuosheng; Yu, Simin; Li, Chengqing; Lü, Jinhu; Wang, Qianxue

    This paper proposes a chaotic secure video remote communication scheme that operates over real WAN networks, and implements it on a smartphone hardware platform. First, a joint encryption and compression scheme is designed by embedding a chaotic encryption scheme into the MJPG-Streamer source code. Then, multiuser smartphone communications between the sender and the receiver are implemented via WAN remote transmission. Finally, the transmitted video data are received with the given IP address and port in an Android smartphone. It should be noted that this is the first time that chaotic video encryption schemes are implemented on such a hardware platform. The experimental results demonstrate that the technical challenges of hardware implementation of secure video communication are successfully solved, reaching a balance amongst sufficient security level, real-time processing of massive video data, and utilization of available resources in the hardware environment. The proposed scheme can serve as a good application example of chaotic secure communications for smartphones and other mobile devices in the future.

  11. Watermarking scheme based on singular value decomposition and homomorphic transform

    NASA Astrophysics Data System (ADS)

    Verma, Deval; Aggarwal, A. K.; Agarwal, Himanshu

    2017-10-01

    A semi-blind watermarking scheme based on singular-value-decomposition (SVD) and homomorphic transform is proposed. This scheme ensures the digital security of an eight bit gray scale image by inserting an invisible eight bit gray scale watermark into it. The key approach of the scheme is to apply the homomorphic transform on the host image to obtain its reflectance component. The watermark is embedded into the singular values that are obtained by applying the singular value decomposition on the reflectance component. Peak-signal-to-noise-ratio (PSNR), normalized-correlation-coefficient (NCC) and mean-structural-similarity-index-measure (MSSIM) are used to evaluate the performance of the scheme. Invisibility of watermark is ensured by visual inspection and high value of PSNR of watermarked images. Presence of watermark is ensured by visual inspection and high values of NCC and MSSIM of extracted watermarks. Robustness of the scheme is verified by high values of NCC and MSSIM for attacked watermarked images.
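
    A rough sketch of the embedding path (illustrative only, not the authors' implementation): the host is taken to the log (homomorphic) domain, a reflectance-like high-frequency component is isolated with a Gaussian low-pass filter, and its singular values are perturbed by the normalized singular values of the watermark with a strength factor alpha. The images are random placeholders, and the logo is assumed to be no larger than the host.

    ```python
    # Hedged sketch of SVD-domain embedding on a homomorphic reflectance component.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def embed_watermark(host, mark, alpha=0.05):
        log_img = np.log1p(host.astype(float))                 # homomorphic (log) domain
        illumination = gaussian_filter(log_img, sigma=5)       # low-frequency component
        reflectance = log_img - illumination                   # reflectance-like component
        U, S, Vt = np.linalg.svd(reflectance, full_matrices=False)
        Sw = np.linalg.svd(mark.astype(float), compute_uv=False)
        Sw = np.pad(Sw / Sw.max(), (0, len(S) - len(Sw)))      # normalized watermark SVs
        marked = (U * (S * (1.0 + alpha * Sw))) @ Vt           # perturb singular values
        return np.expm1(illumination + marked)

    rng = np.random.default_rng(0)
    host = rng.integers(0, 256, size=(256, 256))               # placeholder 8-bit image
    mark = rng.integers(0, 256, size=(64, 64))                 # placeholder 8-bit logo
    watermarked = embed_watermark(host, mark)
    print(watermarked.shape, float(np.abs(watermarked - host).max()))
    ```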

  12. Spherical hashing: binary code embedding with hyperspheres.

    PubMed

    Heo, Jae-Pil; Lee, Youngwoon; He, Junfeng; Chang, Shih-Fu; Yoon, Sung-Eui

    2015-11-01

    Many binary code embedding schemes have been actively studied recently, since they can provide efficient similarity search, and compact data representations suitable for handling large scale image databases. Existing binary code embedding techniques encode high-dimensional data by using hyperplane-based hashing functions. In this paper we propose a novel hypersphere-based hashing function, spherical hashing, to map more spatially coherent data points into a binary code compared to hyperplane-based hashing functions. We also propose a new binary code distance function, spherical Hamming distance, tailored for our hypersphere-based binary coding scheme, and design an efficient iterative optimization process to achieve both balanced partitioning for each hash function and independence between hashing functions. Furthermore, we generalize spherical hashing to support various similarity measures defined by kernel functions. Our extensive experiments show that our spherical hashing technique significantly outperforms state-of-the-art techniques based on hyperplanes across various benchmarks with sizes ranging from one million to 75 million GIST, BoW and VLAD descriptors. The performance gains are consistent and large, up to 100 percent improvements over the second best method among the tested methods. These results confirm the unique merits of using hyperspheres to encode proximity regions in high-dimensional spaces. Finally, our method is intuitive and easy to implement.
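
    The encoding and the spherical Hamming distance can be sketched in a few lines. Below, the pivot centres are random placeholders rather than being learnt by the paper's iterative optimization, and each radius is simply set to the median distance so that every sphere holds roughly half of the (synthetic) data, mimicking the balanced-partitioning goal.

    ```python
    # Hedged sketch of hypersphere-based binary coding: bit i of a point is 1
    # iff the point lies inside the i-th hypersphere; the distance between two
    # codes is |b1 XOR b2| / |b1 AND b2|.
    import numpy as np

    def spherical_hamming(b1, b2):
        common = np.logical_and(b1, b2).sum()       # spheres containing both points
        differ = np.logical_xor(b1, b2).sum()
        return differ / common if common else np.inf

    rng = np.random.default_rng(0)
    X = rng.standard_normal((1000, 128))            # placeholder image descriptors
    centers = rng.standard_normal((64, 128))        # 64 pivot centres (one per bit)
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    radii = np.median(dists, axis=0)                # each sphere holds ~half the data
    codes = (dists <= radii).astype(np.uint8)       # (n_points, 64) binary codes
    print(codes.shape, spherical_hamming(codes[0], codes[1]))
    ```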

  13. Stratified charge rotary engine critical technology enablement. Volume 2: Appendixes

    NASA Technical Reports Server (NTRS)

    Irion, C. E.; Mount, R. E.

    1992-01-01

    This second volume of appendixes is a companion to Volume 1 of this report which summarizes results of a critical technology enablement effort with the stratified charge rotary engine (SCRE) focusing on a power section of 0.67 liters (40 cu. in.) per rotor in single and two rotor versions. The work is a continuation of prior NASA Contracts NAS3-23056 and NAS3-24628. Technical objectives are multi-fuel capability, including civil and military jet fuel and DF-2, fuel efficiency of 0.355 Lbs/BHP-Hr. at best cruise condition above 50 percent power, altitude capability of up to 10 km (33,000 ft.) cruise, 2000 hour TBO and reduced coolant heat rejection. Critical technologies for SCRE's that have the potential for competitive performance and cost in a representative light-aircraft environment were examined. Objectives were: the development and utilization of advanced analytical tools, i.e. higher speed and enhanced three dimensional combustion modeling; identification of critical technologies; development of improved instrumentation; and the isolation and quantitative identification of the contribution to performance and efficiency of critical components or subsystems. A family of four-stage third-order explicit Runge-Kutta schemes is derived that requires only two storage locations and has desirable stability characteristics. Error control is achieved by embedding a second-order scheme within the four-stage procedure. Certain schemes are identified that are as efficient and accurate as conventional embedded schemes of comparable order and require fewer storage locations.
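
    The error-control mechanism in such embedded pairs is easy to illustrate. The sketch below uses the well-known Bogacki-Shampine 3(2) pair, which is also a four-stage, third-order method with an embedded second-order result, as a generic stand-in; it is not the low-storage family derived in the report, and the test equation and tolerance are placeholders.

    ```python
    # One adaptive step of an embedded RK 3(2) pair: the difference between the
    # third-order and embedded second-order results estimates the local error.
    import numpy as np

    def bs32_step(f, t, y, h):
        k1 = f(t, y)
        k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
        k3 = f(t + 0.75 * h, y + 0.75 * h * k2)
        y3 = y + h * (2 * k1 + 3 * k2 + 4 * k3) / 9             # third-order solution
        k4 = f(t + h, y3)
        y2 = y + h * (7 * k1 / 24 + k2 / 4 + k3 / 3 + k4 / 8)   # embedded second-order
        return y3, np.abs(y3 - y2)                              # solution + error estimate

    # Example: integrate y' = -2y on [0, 1] while keeping the local error below tol.
    f = lambda t, y: -2.0 * y
    t, y, h, tol = 0.0, np.array([1.0]), 0.2, 1e-6
    while t < 1.0:
        y_new, err = bs32_step(f, t, y, h)
        if err.max() < tol:
            t, y = t + h, y_new                                  # accept the step
        h = min(0.9 * h * (tol / max(err.max(), 1e-16)) ** (1 / 3), 1.0 - t + 1e-12)
    print(t, y, np.exp(-2.0))
    ```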

  14. Doubly stratified MHD tangent hyperbolic nanofluid flow due to permeable stretched cylinder

    NASA Astrophysics Data System (ADS)

    Nagendramma, V.; Leelarathnam, A.; Raju, C. S. K.; Shehzad, S. A.; Hussain, T.

    2018-06-01

    An investigation is presented to analyze the presence of heat source and sink effects in a doubly stratified MHD incompressible tangent hyperbolic fluid due to the stretching of a cylinder embedded in a porous space in the presence of nanoparticles. To develop the mathematical model of the tangent hyperbolic nanofluid, Brownian motion and thermophoresis effects are accounted for. The established equations of continuity, momentum, thermal and solutal boundary layers are reassembled into sets of non-linear expressions. These assembled expressions are solved with the help of a Runge-Kutta scheme in MATLAB. The impacts of sundry parameters are illustrated graphically, and physical quantities of engineering interest such as the skin friction, Nusselt number and Sherwood number are examined by computing numerical values. It is clear that the power-law index parameter and curvature parameter show a favorable effect on the momentum boundary layer thickness, whereas the Weissenberg number reveals an inimical influence.

  15. A Novel Certificateless Signature Scheme for Smart Objects in the Internet-of-Things.

    PubMed

    Yeh, Kuo-Hui; Su, Chunhua; Choo, Kim-Kwang Raymond; Chiu, Wayne

    2017-05-01

    Rapid advances in wireless communications and pervasive computing technologies have resulted in increasing interest and popularity of Internet-of-Things (IoT) architecture, ubiquitously providing intelligence and convenience to our daily life. In IoT-based network environments, smart objects are embedded everywhere as ubiquitous things connected in a pervasive manner. Ensuring security for interactions between these smart things is significantly more important, and a topic of ongoing interest. In this paper, we present a certificateless signature scheme for smart objects in IoT-based pervasive computing environments. We evaluate the utility of the proposed scheme in IoT-oriented testbeds, i.e., Arduino Uno and Raspberry PI 2. Experimental results demonstrate the practicability of the proposed scheme. Moreover, we revisit the scheme of Wang et al. (2015) and reveal that a malicious super type I adversary can easily forge a legitimate signature to cheat any receiver as he/she wishes in the scheme. The superiority of the proposed certificateless signature scheme over relevant studies is demonstrated in terms of the summarized security and performance comparisons.

  16. A Novel Certificateless Signature Scheme for Smart Objects in the Internet-of-Things

    PubMed Central

    Yeh, Kuo-Hui; Su, Chunhua; Choo, Kim-Kwang Raymond; Chiu, Wayne

    2017-01-01

    Rapid advances in wireless communications and pervasive computing technologies have resulted in increasing interest and popularity of Internet-of-Things (IoT) architecture, ubiquitously providing intelligence and convenience to our daily life. In IoT-based network environments, smart objects are embedded everywhere as ubiquitous things connected in a pervasive manner. Ensuring security for interactions between these smart things is significantly more important, and a topic of ongoing interest. In this paper, we present a certificateless signature scheme for smart objects in IoT-based pervasive computing environments. We evaluate the utility of the proposed scheme in IoT-oriented testbeds, i.e., Arduino Uno and Raspberry PI 2. Experimental results demonstrate the practicability of the proposed scheme. Moreover, we revisit the scheme of Wang et al. (2015) and reveal that a malicious super type I adversary can easily forge a legitimate signature to cheat any receiver as he/she wishes in the scheme. The superiority of the proposed certificateless signature scheme over relevant studies is demonstrated in terms of the summarized security and performance comparisons. PMID:28468313

  17. Simultaneous transmission of wired and wireless signals based on double sideband carrier suppression

    NASA Astrophysics Data System (ADS)

    Bitew, Mekuanint Agegnehu; Shiu, Run-Kai; Peng, Peng-Chun; Wang, Cheng-Hao; Chen, Yan-Ming

    2017-11-01

    In this paper, we proposed and experimentally demonstrated simultaneous transmission of wired and wireless signals based on double sideband optical carrier suppression. By properly adjusting the bias point of the dual-output Mach-Zehnder modulator (MZM), a central carrier in one output port and a pair of first-order sidebands in another output port are generated. The pair of first-order sidebands are fed into a second MZM to generate second-order sidebands. A wired signal is embedded on the central carrier while a wireless signal is embedded on the second-order sidebands. Unlike other schemes, we did not use an optical filter to separate the carrier from the optical sidebands. The measured bit error rate (BER) and eye-diagrams after a 25 km single-mode-fiber (SMF) transmission proved that the proposed scheme is successful for both wired and wireless signal transmission. Moreover, the power penalty at a BER of 10^-9 is 0.3 and 0.7 dB for wired and wireless signals, respectively.

  18. A semi-blind logo watermarking scheme for color images by comparison and modification of DFT coefficients

    NASA Astrophysics Data System (ADS)

    Kusyk, Janusz; Eskicioglu, Ahmet M.

    2005-10-01

    Digital watermarking is considered to be a major technology for the protection of multimedia data. Some of the important applications are broadcast monitoring, copyright protection, and access control. In this paper, we present a semi-blind watermarking scheme for embedding a logo in color images using the DFT domain. After computing the DFT of the luminance layer of the cover image, the magnitudes of DFT coefficients are compared, and modified. A given watermark is embedded in three frequency bands: Low, middle, and high. Our experiments show that the watermarks extracted from the lower frequencies have the best visual quality for low pass filtering, adding Gaussian noise, JPEG compression, resizing, rotation, and scaling, and the watermarks extracted from the higher frequencies have the best visual quality for cropping, intensity adjustment, histogram equalization, and gamma correction. Extractions from the fragmented and translated image are identical to extractions from the unattacked watermarked image. The collusion and rewatermarking attacks do not provide the hacker with useful tools.

  19. Quantifying Information Gain from Dynamic Downscaling Experiments

    NASA Astrophysics Data System (ADS)

    Tian, Y.; Peters-Lidard, C. D.

    2015-12-01

    Dynamic climate downscaling experiments are designed to produce information at higher spatial and temporal resolutions. Such additional information is generated from the low-resolution initial and boundary conditions via the predictive power of the physical laws. However, errors and uncertainties in the initial and boundary conditions can be propagated and even amplified in the downscaled simulations. Additionally, the limit of predictability in nonlinear dynamical systems will also dampen the information gain, even if the initial and boundary conditions were error-free. Thus it is critical to quantitatively define and measure the amount of information increase from dynamic downscaling experiments, to better understand and appreciate their potentials and limitations. We present a scheme to objectively measure the information gain from such experiments. The scheme is based on information theory, and we argue that if a downscaling experiment is to exhibit value, it has to produce more information than what can be simply inferred from information sources already available. These information sources include the initial and boundary conditions, the coarse resolution model in which the higher-resolution models are embedded, and the same set of physical laws. These existing information sources define an "information threshold" as a function of the spatial and temporal resolution, and this threshold serves as a benchmark to quantify the information gain from the downscaling experiments, or any other approaches. For a downscaling experiment to show any value, the information has to be above this threshold. A recent NASA-supported downscaling experiment is used as an example to illustrate the application of this scheme.

  20. A New Quantum Watermarking Based on Quantum Wavelet Transforms

    NASA Astrophysics Data System (ADS)

    Heidari, Shahrokh; Naseri, Mosayeb; Gheibi, Reza; Baghfalaki, Masoud; Rasoul Pourarian, Mohammad; Farouk, Ahmed

    2017-06-01

    Quantum watermarking is a technique to embed specific information, usually the owner’s identification, into quantum cover data, typically for copyright protection purposes. In this paper, a new scheme for quantum watermarking based on quantum wavelet transforms is proposed which includes scrambling, embedding and extracting procedures. The invisibility and robustness performance of the proposed watermarking method is confirmed by simulation. The invisibility of the scheme is examined by the peak-signal-to-noise ratio (PSNR) and the histogram calculation. Furthermore, the robustness of the scheme is analyzed by the Bit Error Rate (BER) and the Correlation Two-Dimensional (Corr 2-D) calculation. The simulation results indicate that the proposed watermarking scheme provides not only acceptable visual quality but also good resistance against different types of attack. Supported by Kermanshah Branch, Islamic Azad University, Kermanshah, Iran

  1. A new scheme of the time-domain fluorescence tomography for a semi-infinite turbid medium

    NASA Astrophysics Data System (ADS)

    Prieto, Kernel; Nishimura, Goro

    2017-04-01

    A new scheme for reconstruction of a fluorophore target embedded in a semi-infinite medium was proposed and evaluated. In this scheme, we neglected the presence of the fluorophore target for the excitation light and used an analytical solution of the time-dependent radiative transfer equation (RTE) for the excitation light in a homogeneous semi-infinite medium instead of solving the RTE numerically in the forward calculation. The inverse problem for imaging the fluorophore target was solved using the Landweber-Kaczmarz method with the concept of the adjoint fields. Numerical experiments show that the proposed scheme provides acceptable reconstructions of the shape and location of the target. The computation times of the solution of the forward problem and the whole reconstruction process were reduced by about 40% and 15%, respectively.

  2. Power Impact of Loop Buffer Schemes for Biomedical Wireless Sensor Nodes

    PubMed Central

    Artes, Antonio; Ayala, Jose L.; Catthoor, Francky

    2012-01-01

    Instruction memory organisations are pointed out as one of the major sources of energy consumption in embedded systems. As these systems are characterised by restrictive resources and a low-energy budget, any enhancement in this component allows not only to decrease the energy consumption but also to have a better distribution of the energy budget throughout the system. Loop buffering is an effective scheme to reduce energy consumption in instruction memory organisations. In this paper, the loop buffer concept is applied in real-life embedded applications that are widely used in biomedical Wireless Sensor Nodes, to show which scheme of loop buffer is more suitable for applications with certain behaviour. Post-layout simulations demonstrate that a trade-off exists between the complexity of the loop buffer architecture and the energy savings of utilising it. Therefore, the use of loop buffer architectures in order to optimise the instruction memory organisation from the energy efficiency point of view should be evaluated carefully, taking into account two factors: (1) the percentage of the execution time of the application that is related to the execution of the loops, and (2) the distribution of the execution time percentage over each one of the loops that form the application. PMID:23202202

  3. An efficient quantum scheme for Private Set Intersection

    NASA Astrophysics Data System (ADS)

    Shi, Run-hua; Mu, Yi; Zhong, Hong; Cui, Jie; Zhang, Shun

    2016-01-01

    Private Set Intersection allows a client to privately compute the set intersection with the collaboration of the server; it is one of the most fundamental and key problems in privacy-preserving multiparty collaborative computation. In this paper, we first present a cheat-sensitive quantum scheme for Private Set Intersection. Compared with classical schemes, our scheme has lower communication complexity, which is independent of the size of the server's set. Therefore, it is very suitable for big data services in the Cloud or large-scale client-server networks.

  4. A channel differential EZW coding scheme for EEG data compression.

    PubMed

    Dehkordi, Vahid R; Daou, Hoda; Labeau, Fabrice

    2011-11-01

    In this paper, a method is proposed to compress multichannel electroencephalographic (EEG) signals in a scalable fashion. Correlation between EEG channels is exploited through clustering using a k-means method. Representative channels for each of the clusters are encoded individually while other channels are encoded differentially, i.e., with respect to their respective cluster representatives. The compression is performed using the embedded zero-tree wavelet encoding adapted to 1-D signals. Simulations show that the scalable features of the scheme lead to a flexible quality/rate tradeoff, without requiring detailed EEG signal modeling.
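
    A compact sketch of the channel-grouping step, assuming the multichannel record is available as an (n_channels x n_samples) array: channels are clustered with k-means, the channel nearest each centroid serves as the cluster representative, and all other channels are stored as residuals against their representative. The EZW coding stage and the scalable bit allocation are not reproduced, and the data below are synthetic placeholders.

    ```python
    # Hedged sketch of clustering EEG channels and forming differential residuals.
    import numpy as np
    from sklearn.cluster import KMeans

    def cluster_channels(eeg, n_clusters=4):
        """eeg: (n_channels, n_samples). Returns labels, representative indices, residuals."""
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(eeg)
        reps = []
        for c in range(n_clusters):
            members = np.where(km.labels_ == c)[0]
            d = np.linalg.norm(eeg[members] - km.cluster_centers_[c], axis=1)
            reps.append(members[int(np.argmin(d))])           # channel nearest the centroid
        residuals = eeg - eeg[np.array(reps)[km.labels_]]     # zero for representatives
        return km.labels_, reps, residuals

    # Correlated placeholder data: 4 underlying sources, 8 noisy channels each.
    rng = np.random.default_rng(0)
    source = rng.standard_normal((4, 2048)).cumsum(axis=1)
    eeg = np.repeat(source, 8, axis=0) + 0.1 * rng.standard_normal((32, 2048))
    labels, reps, residuals = cluster_channels(eeg)
    print(reps, float(np.abs(residuals).mean()) < float(np.abs(eeg).mean()))
    ```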

  5. Accuracy of a teleported trapped field state inside a single bimodal cavity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Queiros, Iara P. de; Cardoso, W. B.; Souza, Simone

    2007-09-15

    We propose a simplified scheme to teleport a superposition of coherent states from one mode to another of the same bimodal lossy cavity. Based on current experimental capabilities, we present a calculation of the fidelity that can be achieved, demonstrating accurate teleportation if the mean photon number of each mode is at most 1.5. Our scheme applies as well for teleportation of coherent states from one mode of a cavity to another mode of a second cavity, when both cavities are embedded in a common reservoir.

  6. A flow-control mechanism for distributed systems

    NASA Technical Reports Server (NTRS)

    Maitan, J.

    1991-01-01

    A new approach to the rate-based flow control in store-and-forward networks is evaluated. Existing methods display oscillations in the presence of transport delays. The proposed scheme is based on the explicit use of an embedded dynamic model of a store-and-forward buffer in a controller's feedback loop. It is shown that the use of the model eliminates the oscillations caused by the transport delays. The paper presents simulation examples and assesses the applicability of the scheme in the new generation of high-speed photonic networks where transport delays must be considered.

  7. An image adaptive, wavelet-based watermarking of digital images

    NASA Astrophysics Data System (ADS)

    Agreste, Santa; Andaloro, Guido; Prestipino, Daniela; Puccio, Luigia

    2007-12-01

    In digital management, multimedia content and data can easily be used in an illegal way--being copied, modified and distributed again. Copyright protection, intellectual and material rights protection for authors, owners, buyers, distributors and the authenticity of content are crucial factors in solving an urgent and real problem. In such a scenario digital watermark techniques are emerging as a valid solution. In this paper, we describe an algorithm--called WM2.0--for an invisible watermark: private, strong, wavelet-based and developed for digital image protection and authenticity. The use of the discrete wavelet transform (DWT) is motivated by its good time-frequency features and its good match with human visual system directives. These two combined elements are important in building an invisible and robust watermark. WM2.0 works on a dual scheme: watermark embedding and watermark detection. The watermark is embedded into high frequency DWT components of a specific sub-image and it is calculated in correlation with the image features and statistic properties. Watermark detection applies a re-synchronization between the original and watermarked image. The correlation between the watermarked DWT coefficients and the watermark signal is calculated according to the Neyman-Pearson statistic criterion. Experimentation on a large set of different images has shown the watermark to be resistant against geometric, filtering and StirMark attacks with a low rate of false alarm.

  8. Embedded correlated wavefunction schemes: theory and applications.

    PubMed

    Libisch, Florian; Huang, Chen; Carter, Emily A

    2014-09-16

    Conspectus Ab initio modeling of matter has become a pillar of chemical research: with ever-increasing computational power, simulations can be used to accurately predict, for example, chemical reaction rates, electronic and mechanical properties of materials, and dynamical properties of liquids. Many competing quantum mechanical methods have been developed over the years that vary in computational cost, accuracy, and scalability: density functional theory (DFT), the workhorse of solid-state electronic structure calculations, features a good compromise between accuracy and speed. However, approximate exchange-correlation functionals limit DFT's ability to treat certain phenomena or states of matter, such as charge-transfer processes or strongly correlated materials. Furthermore, conventional DFT is purely a ground-state theory: electronic excitations are beyond its scope. Excitations in molecules are routinely calculated using time-dependent DFT linear response; however applications to condensed matter are still limited. By contrast, many-electron wavefunction methods aim for a very accurate treatment of electronic exchange and correlation. Unfortunately, the associated computational cost renders treatment of more than a handful of heavy atoms challenging. On the other side of the accuracy spectrum, parametrized approaches like tight-binding can treat millions of atoms. In view of the different (dis-)advantages of each method, the simulation of complex systems seems to force a compromise: one is limited to the most accurate method that can still handle the problem size. For many interesting problems, however, compromise proves insufficient. A possible solution is to break up the system into manageable subsystems that may be treated by different computational methods. The interaction between subsystems may be handled by an embedding formalism. In this Account, we review embedded correlated wavefunction (CW) approaches and some applications. We first discuss our density functional embedding theory, which is formally exact. We show how to determine the embedding potential, which replaces the interaction between subsystems, at the DFT level. CW calculations are performed using a fixed embedding potential, that is, a non-self-consistent embedding scheme. We demonstrate this embedding theory for two challenging electron transfer phenomena: (1) initial oxidation of an aluminum surface and (2) hot-electron-mediated dissociation of hydrogen molecules on a gold surface. In both cases, the interaction between gas molecules and metal surfaces were treated by sophisticated CW techniques, with the remainder of the extended metal surface being treated by DFT. Our embedding approach overcomes the limitations of conventional Kohn-Sham DFT in describing charge transfer, multiconfigurational character, and excited states. From these embedding simulations, we gained important insights into fundamental processes that are crucial aspects of fuel cell catalysis (i.e., O2 reduction at metal surfaces) and plasmon-mediated photocatalysis by metal nanoparticles. Moreover, our findings agree very well with experimental observations, while offering new views into the chemistry. We finally discuss our recently formulated potential-functional embedding theory that provides a seamless, first-principles way to include back-action onto the environment from the embedded region.

  9. Salivary hormone and immune responses to three resistance exercise schemes in elite female athletes.

    PubMed

    Nunes, João A; Crewther, Blair T; Ugrinowitsch, Carlos; Tricoli, Valmor; Viveiros, Luís; de Rose, Dante; Aoki, Marcelo S

    2011-08-01

    This study examined the salivary hormone and immune responses of elite female athletes to 3 different resistance exercise schemes. Fourteen female basketball players each performed an endurance scheme (ES-4 sets of 12 reps, 60% of 1 repetition maximum (1RM) load, 1-minute rest periods), a strength-hypertrophy scheme (SHS-1 set of 5RM, 1 set of 4RM, 1 set of 3RM, 1 set of 2RM, and 1 set of 1RM with 3-minute rest periods, followed by 3 sets of 10RM with 2-minute rest periods) and a power scheme (PS-3 sets of 10 reps, 50% 1RM load, 3-minute rest periods) using the same exercises (bench press, squat, and biceps curl). Saliva samples were collected at 07:30 hours, pre-exercise (Pre) at 09:30 hours, postexercise (Post), and at 17:30 hours. Matching samples were also taken on a nonexercising control day. The samples were analyzed for testosterone, cortisol (C), and immunoglobulin A concentrations. The total volume of load lifted differed among the 3 schemes (SHS > ES > PS, p < 0.05). Postexercise C concentrations increased after all schemes, compared to control values (p < 0.05). In the SHS, the postexercise C response was also greater than pre-exercise data (p < 0.05). The current findings confirm that high-volume resistance exercise schemes can stimulate greater C secretion because of higher metabolic demand. In terms of practical applications, acute changes in C may be used to evaluate the metabolic demands of different resistance exercise schemes, or as a tool for monitoring training strain.

  10. Development of an embedded thin-film strain-gauge-based SHM network into 3D-woven composite structure for wind turbine blades

    NASA Astrophysics Data System (ADS)

    Zhao, Dongning; Rasool, Shafqat; Forde, Micheal; Weafer, Bryan; Archer, Edward; McIlhagger, Alistair; McLaughlin, James

    2017-04-01

    Recently, there has been increasing demand for a low-cost, effective structural health monitoring (SHM) system that can be embedded into 3D-woven composite wind turbine blades to determine structural integrity and the presence of defects. By measuring the strain and temperature inside the composites both during in-situ blade resin curing and in service, we are developing a novel scheme to embed a resistive-strain-based thin-metal-film sensor into the blade spar-cap, which is made of composite laminates, to determine structural integrity and the presence of defects. Thus, with fiberglass, epoxy, and a thin-metal-film sensing element, a three-part, low-cost, smart composite laminate is developed. The embedded strain sensor inside the composite laminate prototype survived the laminate curing process, and the internal strain readings from the embedded sensor under a standard three-point-bending test are comparable. This shows that the proposed method can provide another SHM alternative for reducing sensing costs in renewable green energy generation.

  11. LiveInventor: An Interactive Development Environment for Robot Autonomy

    NASA Technical Reports Server (NTRS)

    Neveu, Charles; Shirley, Mark

    2003-01-01

    LiveInventor is an interactive development environment for robot autonomy developed at NASA Ames Research Center. It extends the industry-standard OpenInventor graphics library and scenegraph file format to include kinetic and kinematic information, a physics-simulation library, an embedded Scheme interpreter, and a distributed communication system.

  12. Test Information Targeting Strategies for Adaptive Multistage Testing Designs.

    ERIC Educational Resources Information Center

    Luecht, Richard M.; Burgin, William

    Adaptive multistage testlet (MST) designs appear to be gaining popularity for many large-scale computer-based testing programs. These adaptive MST designs use a modularized configuration of preconstructed testlets and embedded score-routing schemes to prepackage different forms of an adaptive test. The conditional information targeting (CIT)…

  13. Improvements to embedded shock wave calculations for transonic flow-applications to wave drag and pressure rise predictions

    NASA Technical Reports Server (NTRS)

    Seebass, A. R.

    1974-01-01

    The numerical solution of a single, mixed, nonlinear equation with prescribed boundary data is discussed. A second-order numerical procedure for solving the nonlinear equation and a shock-fitting scheme were developed to treat the discontinuities that appear in the solution.

  14. A Novel Quantum Image Steganography Scheme Based on LSB

    NASA Astrophysics Data System (ADS)

    Zhou, Ri-Gui; Luo, Jia; Liu, XingAo; Zhu, Changming; Wei, Lai; Zhang, Xiafen

    2018-06-01

    Based on the NEQR representation of quantum images and the least significant bit (LSB) scheme, a novel quantum image steganography scheme is proposed. The sizes of the cover image and the original information image are assumed to be 4n × 4n and n × n, respectively. Firstly, the bit-plane scrambling method is used to scramble the original information image. Then the scrambled information image is expanded to the same size as the cover image by using a key known only to the operator. The expanded image is scrambled into a meaningless image with the Arnold scrambling. The embedding procedure and extracting procedure are carried out by the keys K1 and K2, which are under the control of the operator. For validation of the presented scheme, the peak signal-to-noise ratio (PSNR), the capacity, the security of the images, and the circuit complexity are analyzed.
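
    The quantum circuits themselves cannot be reproduced here, but the two classical building blocks the scheme leans on, Arnold scrambling of the n × n information image and LSB substitution into the cover image, can be sketched as follows. The array shapes, the single scrambling iteration, and the function names are our own illustrative choices, not the paper's construction.

    ```python
    import numpy as np

    def arnold_scramble(img, iterations=1):
        """Arnold cat map scrambling of a square image: (x, y) -> ((x + y) mod N, (x + 2y) mod N)."""
        n = img.shape[0]
        assert img.shape[0] == img.shape[1], "Arnold scrambling needs a square image"
        out = img.copy()
        for _ in range(iterations):
            nxt = np.empty_like(out)
            for x in range(n):
                for y in range(n):
                    nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]
            out = nxt
        return out

    def lsb_embed(cover, secret_bits):
        """Replace the least significant bits of the first len(secret_bits) cover pixels."""
        flat = cover.flatten()                      # flatten() returns a copy
        flat[:len(secret_bits)] = (flat[:len(secret_bits)] & 0xFE) | secret_bits
        return flat.reshape(cover.shape)

    # Toy usage: scramble a small binary information image, then hide its bits in a larger cover.
    info = np.random.randint(0, 2, (4, 4), dtype=np.uint8)
    cover = np.random.randint(0, 256, (16, 16), dtype=np.uint8)
    stego = lsb_embed(cover, arnold_scramble(info).flatten())
    ```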

  15. Security-enhanced chaos communication with time-delay signature suppression and phase encryption.

    PubMed

    Xue, Chenpeng; Jiang, Ning; Lv, Yunxin; Wang, Chao; Li, Guilan; Lin, Shuqing; Qiu, Kun

    2016-08-15

    A security-enhanced chaos communication scheme with time delay signature (TDS) suppression and phase-encrypted feedback light is proposed, by virtue of dual-loop feedback with independent high-speed phase modulation. We numerically investigate the property of TDS suppression in the intensity and phase space and quantitatively discuss the security of the proposed system by calculating the bit error rate of eavesdroppers who try to crack the system by directly filtering the detected signal or by using a similar semiconductor laser to synchronize the link signal and extract the data. The results show that the TDS embedded in the chaotic carrier can be well suppressed by properly setting the modulation frequency, which can keep the time delay a secret from the eavesdropper. Moreover, because the feedback light is encrypted, without the accurate time delay and key, the eavesdropper cannot reconstruct the symmetric operation conditions and decode the correct data.

  16. Locality preserving non-negative basis learning with graph embedding.

    PubMed

    Ghanbari, Yasser; Herrington, John; Gur, Ruben C; Schultz, Robert T; Verma, Ragini

    2013-01-01

    The high dimensionality of connectivity networks necessitates the development of methods identifying the connectivity building blocks that not only characterize the patterns of brain pathology but also reveal representative population patterns. In this paper, we present a non-negative component analysis framework for learning localized and sparse sub-network patterns of connectivity matrices by decomposing them into two sets of discriminative and reconstructive bases. In order to obtain components that are designed towards extracting population differences, we exploit the geometry of the population by using a graph-theoretical scheme that imposes locality-preserving properties while maintaining the underlying distance between distant nodes in the original and the projected space. The effectiveness of the proposed framework is demonstrated by applying it to two clinical studies using connectivity matrices derived from DTI to study a population of subjects with ASD, as well as a developmental study of structural brain connectivity that extracts gender differences.

  17. Mølmer-Sørensen entangling gate for cavity QED systems

    NASA Astrophysics Data System (ADS)

    Takahashi, Hiroki; Nevado, Pedro; Keller, Matthias

    2017-10-01

    The Mølmer-Sørensen gate is a state-of-the-art entangling gate in ion trap quantum computing where the gate fidelity can exceed 99%. Here we propose an analogous implementation in the setting of cavity QED. The cavity photon mode acts as the bosonic degree of freedom in the gate in contrast to that played by the phonon mode in ion traps. This is made possible by utilising cavity assisted Raman transitions interconnecting the logical qubit states embedded in a four-level energy structure, making the ‘anti-Jaynes-Cummings’ term available under the rotating-wave approximation. We identify practical sources of infidelity and discuss their effects on the gate performance. Our proposal not only demonstrates an alternative entangling gate scheme but also sheds new light on the relationship between ion traps and cavity QED, in the sense that many techniques developed in the former are transferable to the latter through our framework.

  18. Classification-Based Spatial Error Concealment for Visual Communications

    NASA Astrophysics Data System (ADS)

    Chen, Meng; Zheng, Yefeng; Wu, Min

    2006-12-01

    In an error-prone transmission environment, error concealment is an effective technique to reconstruct the damaged visual content. Due to large variations of image characteristics, different concealment approaches are necessary to accommodate the different nature of the lost image content. In this paper, we address this issue and propose using classification to integrate the state-of-the-art error concealment techniques. The proposed approach takes advantage of multiple concealment algorithms and adaptively selects the suitable algorithm for each damaged image area. With growing awareness that the design of sender and receiver systems should be jointly considered for efficient and reliable multimedia communications, we propose a set of classification-based block concealment schemes, including receiver-side classification, sender-side attachment, and sender-side embedding. Our experimental results provide extensive performance comparisons and demonstrate that the proposed classification-based error concealment approaches outperform the conventional approaches.

  19. Applications and assessment of QM:QM electronic embedding using generalized asymmetric Mulliken atomic charges.

    PubMed

    Parandekar, Priya V; Hratchian, Hrant P; Raghavachari, Krishnan

    2008-10-14

    Hybrid QM:QM (quantum mechanics:quantum mechanics) and QM:MM (quantum mechanics:molecular mechanics) methods are widely used to calculate the electronic structure of large systems where a full quantum mechanical treatment at a desired high level of theory is computationally prohibitive. The ONIOM (our own N-layer integrated molecular orbital molecular mechanics) approximation is one of the more popular hybrid methods, where the total molecular system is divided into multiple layers, each treated at a different level of theory. In a previous publication, we developed a novel QM:QM electronic embedding scheme within the ONIOM framework, where the model system is embedded in the external Mulliken point charges of the surrounding low-level region to account for the polarization of the model system wave function. Therein, we derived and implemented a rigorous expression for the embedding energy as well as analytic gradients that depend on the derivatives of the external Mulliken point charges. In this work, we demonstrate the applicability of our QM:QM method with point charge embedding and assess its accuracy. We study two challenging systems--zinc metalloenzymes and silicon oxide cages--and demonstrate that electronic embedding shows significant improvement over mechanical embedding. We also develop a modified technique for the energy and analytic gradients using a generalized asymmetric Mulliken embedding method involving an unequal splitting of the Mulliken overlap populations to offer improvement in situations where the Mulliken charges may be deficient.
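
    As background, the two-layer ONIOM extrapolation and the point-charge (electronic) embedding it builds on can be summarized as follows. These are the generic textbook forms in our own notation; the asymmetric splitting of Mulliken overlap populations introduced in the paper is not shown.

    ```latex
    % Two-layer ONIOM energy (generic form):
    E_{\mathrm{ONIOM}} = E_{\mathrm{high}}(\mathrm{model}) + E_{\mathrm{low}}(\mathrm{real}) - E_{\mathrm{low}}(\mathrm{model}).
    % With electronic embedding, the model-system Hamiltonian is augmented by the external
    % point charges q_k (here, Mulliken charges of the surrounding low-level region):
    \hat{H}^{\mathrm{EE}}_{\mathrm{model}} = \hat{H}_{\mathrm{model}}
      - \sum_{i}\sum_{k}\frac{q_k}{\lvert \mathbf{r}_i-\mathbf{R}_k\rvert}
      + \sum_{A}\sum_{k}\frac{Z_A\,q_k}{\lvert \mathbf{R}_A-\mathbf{R}_k\rvert}.
    ```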

  20. Digital interface of electronic transformers based on embedded system

    NASA Astrophysics Data System (ADS)

    Shang, Qiufeng; Qi, Yincheng

    2008-10-01

    With a digital interface for electronic transformers, information sharing and system integration in the substation can be realized. An embedded-system-based digital output scheme for electronic transformers is proposed. The digital interface is designed with the S3C44B0X 32-bit RISC microprocessor as the hardware platform, and the μClinux operating system (OS) is ported to the ARM7 (S3C44B0X). Applying Ethernet technology as the communication mode in the substation automation system is a new trend, so the network interface chip RTL8019AS is adopted. Data transmission is realized through the built-in TCP/IP stack of the μClinux embedded OS. The application results and performance analysis show that the design can meet the real-time and reliability requirements of the IEC 60044-7/8 electronic voltage/current instrument transformer standards.

  1. Discretisation Schemes for Level Sets of Planar Gaussian Fields

    NASA Astrophysics Data System (ADS)

    Beliaev, D.; Muirhead, S.

    2018-01-01

    Smooth random Gaussian functions play an important role in mathematical physics, a main example being the random plane wave model conjectured by Berry to give a universal description of high-energy eigenfunctions of the Laplacian on generic compact manifolds. Our work is motivated by questions about the geometry of such random functions, in particular relating to the structure of their nodal and level sets. We study four discretisation schemes that extract information about level sets of planar Gaussian fields. Each scheme recovers information up to a different level of precision, and each requires a maximum mesh-size in order to be valid with high probability. The first two schemes are generalisations and enhancements of similar schemes that have appeared in the literature (Beffara and Gayet in Publ Math IHES, 2017. https://doi.org/10.1007/s10240-017-0093-0; Mischaikow and Wanner in Ann Appl Probab 17:980-1018, 2007); these give complete topological information about the level sets on either a local or global scale. As an application, we improve the results in Beffara and Gayet (2017) on Russo-Seymour-Welsh estimates for the nodal set of positively-correlated planar Gaussian fields. The third and fourth schemes are, to the best of our knowledge, completely new. The third scheme is specific to the nodal set of the random plane wave, and provides global topological information about the nodal set up to `visible ambiguities'. The fourth scheme gives a way to approximate the mean number of excursion domains of planar Gaussian fields.

  2. Embedding research to improve program implementation in Latin America and the Caribbean.

    PubMed

    Tran, Nhan; Langlois, Etienne V; Reveiz, Ludovic; Varallyay, Ilona; Elias, Vanessa; Mancuso, Arielle; Becerra-Posada, Francisco; Ghaffar, Abdul

    2017-06-08

    In the last 10 years, implementation research has come to play a critical role in improving the implementation of already-proven health interventions by promoting the systematic uptake of research findings and other evidence-based strategies into routine practice. The Alliance for Health Policy and Systems Research and the Pan American Health Organization implemented a program of embedded implementation research to support health programs in Latin America and the Caribbean (LAC) in 2014-2015. A total of 234 applications were received from 28 countries in the Americas. The Improving Program Implementation through Embedded Research (iPIER) scheme supported 12 implementation research projects led by health program implementers from nine LAC countries: Argentina, Bolivia, Brazil, Chile, Colombia, Mexico, Panama, Peru, and Saint Lucia. Through this experience, we learned that the "insider" perspective, which implementers bring to the research proposal, is particularly important in identifying research questions that focus on the systems failures that often manifest in barriers to implementation. This paper documents the experience of and highlights key conclusions about the conduct of embedded implementation research. The iPIER experience has shown great promise for embedded research models that place implementers at the helm of implementation research initiatives.

  3. Streamflow Prediction based on Chaos Theory

    NASA Astrophysics Data System (ADS)

    Li, X.; Wang, X.; Babovic, V. M.

    2015-12-01

    Chaos theory is a popular approach to hydrologic time series prediction. The local model (LM) based on this theory utilizes time-delay embedding to reconstruct the phase-space diagram. Its efficacy depends on the embedding parameters, i.e., the embedding dimension, time lag, and number of nearest neighbors, so the optimal estimation of these parameters is critical to the application of the local model. However, these embedding parameters are conventionally estimated separately using Average Mutual Information (AMI) and False Nearest Neighbors (FNN). This may lead to locally optimal choices and thus limits prediction accuracy. Considering these limitations, this paper applies a local model combined with simulated annealing (SA) to find globally optimal embedding parameters; it is also compared with another global optimization approach, the Genetic Algorithm (GA). These hybrid methods are applied to daily and monthly streamflow time series for examination. The results show that global optimization enables the local model to provide more accurate predictions than local optimization, and the LM combined with SA has an additional advantage in computational efficiency. The proposed scheme can also be applied to other fields such as prediction of hydro-climatic time series, error correction, etc.
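
    For concreteness, the core of such a local model, time-delay embedding followed by nearest-neighbour averaging, can be sketched in a few lines. The SA/GA search over (dimension, lag, neighbour count) is omitted, and the function names below are our own.

    ```python
    import numpy as np

    def delay_embed(series, dim, lag):
        """Phase-space reconstruction: row j is [x(j), x(j+lag), ..., x(j+(dim-1)*lag)]."""
        series = np.asarray(series, dtype=float)
        n = len(series) - (dim - 1) * lag
        return np.column_stack([series[i * lag: i * lag + n] for i in range(dim)])

    def local_model_predict(series, dim, lag, k):
        """One-step-ahead prediction: average the successors of the k nearest delay vectors."""
        series = np.asarray(series, dtype=float)
        emb = delay_embed(series, dim, lag)
        query, history = emb[-1], emb[:-1]           # last vector predicts the next value
        targets = series[(dim - 1) * lag + 1:]       # value following each historical vector
        dists = np.linalg.norm(history - query, axis=1)
        nearest = np.argsort(dists)[:k]
        return targets[nearest].mean()
    ```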

  4. Mechanical and Vibration Testing of Carbon Fiber Composite Material with Embedded Piezoelectric Sensors

    NASA Technical Reports Server (NTRS)

    Duffy, Kirsten P.; Lerch, Bradley A.; Wilmoth, Nathan G.; Kray, Nicholas; Gemeinhardt, Gregory

    2012-01-01

    Piezoelectric materials have been proposed as a means of decreasing turbomachinery blade vibration either through a passive damping scheme, or as part of an active vibration control system. For polymer matrix fiber composite (PMFC) blades, the piezoelectric elements could be embedded within the blade material, protecting the brittle piezoceramic material from the airflow and from debris. Before implementation of a piezoelectric element within a PMFC blade, the effect on PMFC mechanical properties needs to be understood. This study attempts to determine how the inclusion of a packaged piezoelectric patch affects the material properties of the PMFC. Composite specimens with embedded piezoelectric patches were tested in four-point bending, short beam shear, and flatwise tension configurations. Results show that the embedded piezoelectric material does decrease the strength of the composite material, especially in flatwise tension, attributable to failure at the interface or within the piezoelectric element itself. In addition, the sensing properties of the post-cured embedded piezoelectric materials were tested, and performed as expected. The piezoelectric materials include a non-flexible patch incorporating solid piezoceramic material, and two flexible patch types incorporating piezoelectric fibers. The piezoceramic material used in these patches was Navy Type-II PZT.

  5. Introducing Large-Scale Innovation in Schools

    NASA Astrophysics Data System (ADS)

    Sotiriou, Sofoklis; Riviou, Katherina; Cherouvis, Stephanos; Chelioti, Eleni; Bogner, Franz X.

    2016-08-01

    Education reform initiatives tend to promise higher effectiveness in classrooms especially when emphasis is given to e-learning and digital resources. Practical changes in classroom realities or school organization, however, are lacking. A major European initiative entitled Open Discovery Space (ODS) examined the challenge of modernizing school education via a large-scale implementation of an open-scale methodology in using technology-supported innovation. The present paper describes this innovation scheme which involved schools and teachers all over Europe, embedded technology-enhanced learning into wider school environments and provided training to teachers. Our implementation scheme consisted of three phases: (1) stimulating interest, (2) incorporating the innovation into school settings and (3) accelerating the implementation of the innovation. The scheme's impact was monitored for a school year using five indicators: leadership and vision building, ICT in the curriculum, development of ICT culture, professional development support, and school resources and infrastructure. Based on about 400 schools, our study produced four results: (1) The growth in digital maturity was substantial, even for previously high-scoring schools. This was even more important for indicators such as "vision and leadership" and "professional development." (2) The evolution of networking is presented graphically, showing the gradual growth of connections achieved. (3) These communities became core nodes, involving numerous teachers in sharing educational content and experiences: One out of three registered users (36 %) has shared his/her educational resources in at least one community. (4) Satisfaction scores ranged from 76 % (offer of useful support through teacher academies) to 87 % (good environment to exchange best practices). Initiatives such as ODS add substantial value to schools on a large scale.

  6. Embedded feature ranking for ensemble MLP classifiers.

    PubMed

    Windeatt, Terry; Duangsoithong, Rakkrit; Smith, Raymond

    2011-06-01

    A feature ranking scheme for multilayer perceptron (MLP) ensembles is proposed, along with a stopping criterion based upon the out-of-bootstrap estimate. To solve multi-class problems feature ranking is combined with modified error-correcting output coding. Experimental results on benchmark data demonstrate the versatility of the MLP base classifier in removing irrelevant features.

  7. Linking Science and Design and Technology

    ERIC Educational Resources Information Center

    Lunt, Julie; Lawrence, Liz

    2010-01-01

    Making connections between subjects to enhance learning and demonstrate relevance and application is not new, especially in primary education. Teaching through topics or themes has swung in and out of fashion and the use of literacy, numeracy and ICT skills across the curriculum is embedded in many schemes of work. However, the proposed New…

  8. Watermarking of ultrasound medical images in teleradiology using compressed watermark

    PubMed Central

    Badshah, Gran; Liew, Siau-Chuin; Zain, Jasni Mohamad; Ali, Mushtaq

    2016-01-01

    Abstract. The open accessibility of Internet-based medical images in teleradiology faces security threats due to nonsecured communication media. This paper discusses the spatial-domain watermarking of ultrasound medical images for content authentication, tamper detection, and lossless recovery. For this purpose, the image is divided into two main parts, the region of interest (ROI) and the region of noninterest (RONI). The defined ROI and its hash value are combined as the watermark, losslessly compressed, and embedded into the RONI part of the image at the pixels' least significant bits (LSBs). The lossless compression of the watermark and its embedding at the pixels' LSBs preserve the image's diagnostic and perceptual qualities. Different lossless compression techniques, including Lempel-Ziv-Welch (LZW), were tested for watermark compression, and their performance was compared in terms of bit reduction and compression ratio. LZW was found to perform better than the others and was used to develop the tamper detection and recovery watermarking of medical images (TDARWMI) scheme for ROI authentication, tamper detection, localization, and lossless recovery. TDARWMI was compared with and found to be better than other watermarking schemes. PMID:26839914
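
    A minimal sketch of the ROI/RONI embedding step is given below, assuming an 8-bit image, a rectangular ROI slice, zlib in place of the LZW coder used in the paper, and SHA-256 as the hash. These substitutions and the function name are ours, not the TDARWMI implementation.

    ```python
    import hashlib
    import zlib
    import numpy as np

    def embed_roi_watermark(image, roi_slice):
        """Embed the compressed ROI bytes plus their hash into the LSBs of the RONI pixels (uint8 image)."""
        roi_bytes = image[roi_slice].tobytes()
        payload = zlib.compress(roi_bytes) + hashlib.sha256(roi_bytes).digest()
        bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))

        stego = image.copy()
        roni_mask = np.ones(image.shape, dtype=bool)
        roni_mask[roi_slice] = False                    # never touch the diagnostic region
        roni_idx = np.flatnonzero(roni_mask)
        if len(bits) > len(roni_idx):
            raise ValueError("RONI too small for the compressed watermark")
        flat = stego.reshape(-1)                        # view onto stego, C order
        flat[roni_idx[:len(bits)]] = (flat[roni_idx[:len(bits)]] & 0xFE) | bits
        return stego

    # Example: watermark = compressed central 100x100 ROI of a 512x512 ultrasound frame.
    frame = np.random.randint(0, 256, (512, 512), dtype=np.uint8)
    marked = embed_roi_watermark(frame, (slice(200, 300), slice(200, 300)))
    ```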

  9. Context adaptive binary arithmetic coding-based data hiding in partially encrypted H.264/AVC videos

    NASA Astrophysics Data System (ADS)

    Xu, Dawen; Wang, Rangding

    2015-05-01

    A scheme of data hiding directly in a partially encrypted version of H.264/AVC videos is proposed which includes three parts, i.e., selective encryption, data embedding and data extraction. Selective encryption is performed on context adaptive binary arithmetic coding (CABAC) bin-strings via stream ciphers. By careful selection of CABAC entropy coder syntax elements for selective encryption, the encrypted bitstream is format-compliant and has exactly the same bit rate. Then a data-hider embeds the additional data into partially encrypted H.264/AVC videos using a CABAC bin-string substitution technique without accessing the plaintext of the video content. Since bin-string substitution is carried out on those residual coefficients with approximately the same magnitude, the quality of the decrypted video is satisfactory. Video file size is strictly preserved even after data embedding. In order to adapt to different application scenarios, data extraction can be done either in the encrypted domain or in the decrypted domain. Experimental results have demonstrated the feasibility and efficiency of the proposed scheme.

  10. Flatness-based embedded adaptive fuzzy control of turbocharged diesel engines

    NASA Astrophysics Data System (ADS)

    Rigatos, Gerasimos; Siano, Pierluigi; Arsie, Ivan

    2014-10-01

    In this paper, nonlinear embedded control for turbocharged Diesel engines is developed with the use of differential flatness theory and adaptive fuzzy control. It is shown that the dynamic model of the turbocharged Diesel engine is differentially flat and admits dynamic feedback linearization. It is also shown that the dynamic model can be written in the linear Brunovsky canonical form, for which a state feedback controller can be easily designed. To compensate for modeling errors and external disturbances, an adaptive fuzzy control scheme is implemented, making use of the transformed dynamical system of the diesel engine that is obtained through the application of differential flatness theory. Since only the system's output is measurable, the complete state vector has to be reconstructed with the use of a state observer. It is shown that a suitable learning law can be defined for the neuro-fuzzy approximators, which are part of the controller, so as to preserve the closed-loop system stability. With the use of Lyapunov stability analysis, it is proven that the proposed observer-based adaptive fuzzy control scheme results in H∞ tracking performance.

  11. A Domain-Decomposed Multilevel Method for Adaptively Refined Cartesian Grids with Embedded Boundaries

    NASA Technical Reports Server (NTRS)

    Aftosmis, M. J.; Berger, M. J.; Adomavicius, G.

    2000-01-01

    Preliminary verification and validation of an efficient Euler solver for adaptively refined Cartesian meshes with embedded boundaries is presented. The parallel, multilevel method makes use of a new on-the-fly parallel domain decomposition strategy based upon the use of space-filling curves, and automatically generates a sequence of coarse meshes for processing by the multigrid smoother. The coarse mesh generation algorithm produces grids which completely cover the computational domain at every level in the mesh hierarchy. A series of examples on realistically complex three-dimensional configurations demonstrate that this new coarsening algorithm reliably achieves mesh coarsening ratios in excess of 7 on adaptively refined meshes. Numerical investigations of the scheme's local truncation error demonstrate an achieved order of accuracy between 1.82 and 1.88. Convergence results for the multigrid scheme are presented for both subsonic and transonic test cases and demonstrate W-cycle multigrid convergence rates between 0.84 and 0.94. Preliminary parallel scalability tests on both simple wing and complex complete aircraft geometries show a computational speedup of 52 on 64 processors using the run-time mesh partitioner.
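
    The on-the-fly partitioning rests on ordering cells along a space-filling curve and cutting the ordered list into contiguous chunks. The sketch below uses a Morton (Z-order) key purely as an illustration of that idea; the solver itself may use a different curve and work-weighted cuts.

    ```python
    def morton_key(i, j, k, bits=10):
        """Interleave the bits of integer cell coordinates to get a Z-order (Morton) key."""
        key = 0
        for b in range(bits):
            key |= (((i >> b) & 1) << (3 * b)
                    | ((j >> b) & 1) << (3 * b + 1)
                    | ((k >> b) & 1) << (3 * b + 2))
        return key

    def partition(cells, n_ranks):
        """Sort cells along the space-filling curve and cut the list into contiguous chunks."""
        ordered = sorted(cells, key=lambda c: morton_key(*c))
        chunk = -(-len(ordered) // n_ranks)              # ceiling division
        return [ordered[r * chunk:(r + 1) * chunk] for r in range(n_ranks)]

    # e.g. partition([(i, j, k) for i in range(8) for j in range(8) for k in range(8)], n_ranks=4)
    ```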

  12. An embedded mesh method using piecewise constant multipliers with stabilization: mathematical and numerical aspects

    DOE PAGES

    Puso, M. A.; Kokko, E.; Settgast, R.; ...

    2014-10-22

    An embedded mesh method using piecewise constant multipliers originally proposed by Puso et al. (CMAME, 2012) is analyzed here to determine effects of the pressure stabilization term and small cut cells. The approach is implemented for transient dynamics using the central difference scheme for the time discretization. It is shown that the resulting equations of motion are a stable linear system with a condition number independent of mesh size. Furthermore, we show that the constraints and the stabilization terms can be recast as non-proportional damping such that the time integration of the scheme is provably stable with a critical time step computed from the undamped equations of motion. Effects of small cuts are discussed throughout the presentation. A mesh study is conducted to evaluate the effects of the stabilization on the discretization error and conditioning and is used to recommend an optimal value for the stabilization scaling parameter. Several nonlinear problems are also analyzed and compared with comparable conforming mesh results. Finally, we show several demanding problems highlighting the robustness of the proposed approach.

  13. Hiding Techniques for Dynamic Encryption Text based on Corner Point

    NASA Astrophysics Data System (ADS)

    Abdullatif, Firas A.; Abdullatif, Alaa A.; al-Saffar, Amna

    2018-05-01

    A hiding technique for dynamically encrypted text using an encoding table and a symmetric encryption method (the AES algorithm) is presented in this paper. The encoding table is generated dynamically from the MSBs of the cover image points and is used as the first phase of encryption. The Harris corner point algorithm is applied to the cover image to generate corner points, which are used to generate a dynamic AES key for the second phase of text encryption. The embedding process uses the LSBs of the image pixels, excluding the Harris corner points, for greater robustness. Experimental results have demonstrated that the proposed scheme achieves good embedding quality, error-free text recovery, and high PSNR values.

  14. Optimized diffusion gradient orientation schemes for corrupted clinical DTI data sets.

    PubMed

    Dubois, J; Poupon, C; Lethimonnier, F; Le Bihan, D

    2006-08-01

    A method is proposed for generating schemes of diffusion gradient orientations which allow the diffusion tensor to be reconstructed from partial data sets in clinical DT-MRI, should the acquisition be corrupted or terminated before completion because of patient motion. A general energy-minimization electrostatic model was developed in which the interactions between orientations are weighted according to their temporal order during acquisition. In this report, two corruption scenarios were specifically considered for generating relatively uniform schemes of 18 and 60 orientations, with useful subsets of 6 and 15 orientations. The sets and subsets were compared to conventional sets through their energy, condition number and rotational invariance. Schemes of 18 orientations were tested on a volunteer. The optimized sets were similar to uniform sets in terms of energy, condition number and rotational invariance, whether the complete set or only a subset was considered. Diffusion maps obtained in vivo were close to those for uniform sets whatever the acquisition time was. This was not the case with conventional schemes, whose subset uniformity was insufficient. With the proposed approach, sets of orientations responding to several corruption scenarios can be generated, which is potentially useful for imaging uncooperative patients or infants.
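
    The abstract describes an electrostatic model whose pairwise interactions are weighted by temporal order; a minimal version of such an energy is sketched below. The 1/distance interaction, the antipodal term, and the form of the weight matrix are our assumptions rather than the paper's exact model.

    ```python
    import numpy as np

    def weighted_electrostatic_energy(dirs, weights):
        """Sum of weighted repulsions between unit gradient directions (antipodal pairs included).

        dirs    : (N, 3) array of unit vectors, in acquisition order
        weights : (N, N) symmetric matrix, e.g. weights[i, j] = exp(-abs(i - j) / tau),
                  down-weighting pairs acquired far apart in time (the weighting is assumed)
        """
        energy = 0.0
        n = len(dirs)
        for i in range(n):
            for j in range(i + 1, n):
                d_plus = np.linalg.norm(dirs[i] - dirs[j])
                d_minus = np.linalg.norm(dirs[i] + dirs[j])   # diffusion directions are antipodally symmetric
                energy += weights[i, j] * (1.0 / d_plus + 1.0 / d_minus)
        return energy
    ```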

  15. Planar reorientation of a free-free beam in space using embedded electromechanical actuators

    NASA Technical Reports Server (NTRS)

    Kolmanovsky, Ilya V.; Mcclamroch, N. Harris

    1993-01-01

    It is demonstrated that the planar reorientation of a free-free beam in zero gravity space can be accomplished by periodically changing the shape of the beam using embedded electromechanical actuators. The dynamics which determine the shape of the free-free beam is assumed to be characterized by the Euler-Bernoulli equation, including material damping, with appropriate boundary conditions. The coupling between the rigid body motion and the flexible motion is explained using the angular momentum expression which includes rotatory inertia and kinematically exact effects. A control scheme is proposed where the embedded actuators excite the flexible motion of the beam so that it rotates in the desired sense with respect to a fixed inertial reference. Relations are derived which relate the average rotation rate to the amplitudes and the frequencies of the periodic actuation signal and the properties of the beam. These reorientation maneuvers can be implemented by using feedback control.

  16. PNS predictions for supersonic/hypersonic flows over finned missile configurations

    NASA Technical Reports Server (NTRS)

    Bhutta, Bilal A.; Lewis, Clark H.

    1992-01-01

    Finned missile design entails accurate and computationally fast numerical techniques for predicting viscous flows over complex lifting configurations at small to moderate angles of attack and over Mach 3 to 15; these flows are often characterized by strong embedded shocks, so that numerical algorithms are also required to capture embedded shocks. The recent real-gas Flux Vector Splitting technique is here extended to investigate the Mach 3 flow over a typical finned missile configuration with/without side fin deflections. Elliptic grid-generation techniques for Mach 15 flows are shown to be inadequate for Mach 3 flows over finned configurations and need to be modified. Fin-deflection studies indicate that even small amounts of missile fin deflection can substantially modify vehicle aerodynamics. This 3D parabolized Navier-Stokes scheme is also extended into an efficient embedded algorithm for studying small axially separated flow regions due to strong fin and control surface deflections.

  17. Ultrafast fingerprint indexing for embedded systems

    NASA Astrophysics Data System (ADS)

    Zhou, Ru; Sin, Sang Woo; Li, Dongju; Isshiki, Tsuyoshi; Kunieda, Hiroaki

    2011-10-01

    A novel core-based fingerprint indexing scheme for embedded systems is presented in this paper. Our approach is enabled by our new precise and fast core-detection algorithm with the direction map. It introduces the feature of the CMP (core minutiae pair), which describes the coordinates of minutiae and the direction of the ridges associated with the minutiae based on the uniquely defined core coordinates. Since each CMP is invariant to shift and rotation of the fingerprint image, the CMP comparison between a template and an input image can be performed without any alignment. The proposed indexing algorithm based on CMP is suitable for embedded systems because tremendous speedup and memory reduction are achieved. In fact, experiments with the fingerprint database FVC2002 show that identification becomes about 40 times faster than with conventional approaches, even though the database includes fingerprints with no core.

  18. Validation of the Lung Subtyping Panel in Multiple Fresh-Frozen and Formalin-Fixed, Paraffin-Embedded Lung Tumor Gene Expression Data Sets.

    PubMed

    Faruki, Hawazin; Mayhew, Gregory M; Fan, Cheng; Wilkerson, Matthew D; Parker, Scott; Kam-Morgan, Lauren; Eisenberg, Marcia; Horten, Bruce; Hayes, D Neil; Perou, Charles M; Lai-Goldman, Myla

    2016-06-01

    Context: A histologic classification of lung cancer subtypes is essential in guiding therapeutic management. Objective: To complement morphology-based classification of lung tumors, a previously developed lung subtyping panel (LSP) of 57 genes was tested using multiple public fresh-frozen gene-expression data sets and a prospectively collected set of formalin-fixed, paraffin-embedded lung tumor samples. Design: The LSP gene-expression signature was evaluated in multiple lung cancer gene-expression data sets totaling 2177 patients collected from 4 platforms: Illumina RNAseq (San Diego, California), Agilent (Santa Clara, California) and Affymetrix (Santa Clara) microarrays, and quantitative reverse transcription-polymerase chain reaction. Gene centroids were calculated for each of 3 genomic-defined subtypes: adenocarcinoma, squamous cell carcinoma, and neuroendocrine, the latter of which encompassed both small cell carcinoma and carcinoid. Classification by LSP into 3 subtypes was evaluated in both fresh-frozen and formalin-fixed, paraffin-embedded tumor samples, and agreement with the original morphology-based diagnosis was determined. Results: The LSP-based classifications demonstrated overall agreement with the original clinical diagnosis ranging from 78% (251 of 322) to 91% (492 of 538 and 869 of 951) in the fresh-frozen public data sets and 84% (65 of 77) in the formalin-fixed, paraffin-embedded data set. The LSP performance was independent of tissue-preservation method and gene-expression platform. Secondary, blinded pathology review of formalin-fixed, paraffin-embedded samples demonstrated concordance of 82% (63 of 77) with the original morphology diagnosis. Conclusions: The LSP gene-expression signature is a reproducible and objective method for classifying lung tumors and demonstrates good concordance with morphology-based classification across multiple data sets. The LSP panel can supplement morphologic assessment of lung cancers, particularly when classification by standard methods is challenging.
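
    Classification against subtype gene centroids is typically a nearest-centroid rule; the correlation-based sketch below illustrates the idea. The subtype labels, the choice of Pearson correlation, and the function name are assumptions, since the abstract does not state the panel's exact distance measure.

    ```python
    import numpy as np

    SUBTYPES = ("adenocarcinoma", "squamous", "neuroendocrine")   # assumed label names

    def classify_nearest_centroid(sample, centroids):
        """Assign the subtype whose centroid correlates best with the sample.

        sample    : expression vector over the signature genes
        centroids : dict mapping subtype name -> centroid vector over the same genes
        """
        best, best_r = None, -2.0
        for name, centroid in centroids.items():
            r = np.corrcoef(sample, centroid)[0, 1]
            if r > best_r:
                best, best_r = name, r
        return best, best_r
    ```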

  19. LSB-Based Steganography Using Reflected Gray Code

    NASA Astrophysics Data System (ADS)

    Chen, Chang-Chu; Chang, Chin-Chen

    Steganography aims to hide secret data in an innocuous cover-medium for transmission and to make it difficult for an attacker to recognize the presence of the secret data. Even if the stego-medium is captured by an eavesdropper, the slight distortion is hard to detect. LSB-based data hiding is one of the steganographic methods used to embed secret data into the least significant bits of the pixel values in a cover image. In this paper, we propose an LSB-based scheme using reflected Gray code, which is applied to determine the embedded bit from the secret information. Following the transforming rule, the LSBs of the stego-image are not always equal to the secret bits, and experiments show that the differences reach almost 50%. According to the mathematical deduction and experimental results, the proposed scheme has the same image quality and payload as the simple LSB substitution scheme. In fact, our proposed data hiding scheme in the case of the G1 (one-bit Gray code) system is equivalent to the simple LSB substitution scheme.
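
    A minimal sketch of the G1 (one-bit Gray code) case is shown below: the secret bit is read from the least significant bit of the pixel value's reflected Gray code, and embedding nudges the pixel by at most one grey level. This also illustrates why the stego-image's ordinary LSBs agree with the secret bits only about half the time. The helper names are ours.

    ```python
    def gray_lsb(value):
        """Least significant bit of the reflected Gray code g = v XOR (v >> 1) of a pixel value."""
        return (value ^ (value >> 1)) & 1

    def embed_bit(pixel, secret_bit):
        """Adjust the pixel by at most one grey level so that gray_lsb(pixel) equals the secret bit."""
        if gray_lsb(pixel) == secret_bit:
            return pixel
        # Incrementing an even value (or decrementing an odd one) flips bit 0 without a carry,
        # which is exactly what toggles the Gray-code LSB, and it stays inside [0, 255].
        return pixel + 1 if pixel % 2 == 0 else pixel - 1

    def extract_bit(pixel):
        return gray_lsb(pixel)
    ```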

  20. Density-Dependent Formulation of Dispersion-Repulsion Interactions in Hybrid Multiscale Quantum/Molecular Mechanics (QM/MM) Models.

    PubMed

    Curutchet, Carles; Cupellini, Lorenzo; Kongsted, Jacob; Corni, Stefano; Frediani, Luca; Steindal, Arnfinn Hykkerud; Guido, Ciro A; Scalmani, Giovanni; Mennucci, Benedetta

    2018-03-13

    Mixed multiscale quantum/molecular mechanics (QM/MM) models are widely used to explore the structure, reactivity, and electronic properties of complex chemical systems. Whereas such models typically include electrostatics and potentially polarization in so-called electrostatic and polarizable embedding approaches, respectively, nonelectrostatic dispersion and repulsion interactions are instead commonly described through classical potentials despite their quantum mechanical origin. Here we present an extension of the Tkatchenko-Scheffler semiempirical van der Waals (vdW-TS) scheme aimed at describing dispersion and repulsion interactions between quantum and classical regions within a QM/MM polarizable embedding framework. Starting from the vdW-TS expression, we define a dispersion and a repulsion term, both of them density-dependent and consistently based on a Lennard-Jones-like potential. We explore transferable atom type-based parametrization strategies for the MM parameters, based on either vdW-TS calculations performed on isolated fragments or on a direct estimation of the parameters from atomic polarizabilities taken from a polarizable force field. We investigate the performance of the implementation by computing self-consistent interaction energies for the S22 benchmark set, designed to represent typical noncovalent interactions in biological systems, in both equilibrium and out-of-equilibrium geometries. Overall, our results suggest that the present implementation is a promising strategy to include dispersion and repulsion in multiscale QM/MM models incorporating their explicit dependence on the electronic density.

  1. A Secure and Robust Compressed Domain Video Steganography for Intra- and Inter-Frames Using Embedding-Based Byte Differencing (EBBD) Scheme

    PubMed Central

    Idbeaa, Tarik; Abdul Samad, Salina; Husain, Hafizah

    2016-01-01

    This paper presents a novel secure and robust steganographic technique in the compressed video domain, namely embedding-based byte differencing (EBBD). Unlike most of the current video steganographic techniques, which take into account only the intra frames for data embedding, the proposed EBBD technique aims to hide information in both intra and inter frames. The information is embedded into a compressed video by simultaneously manipulating the quantized AC coefficients (AC-QTCs) of the luminance components of the frames during the MPEG-2 encoding process. Later, during the decoding process, the embedded information can be detected and extracted completely. Furthermore, the EBBD basically deals with two security concepts: data encryption and data concealing. Hence, during the embedding process, secret data is encrypted using the simplified data encryption standard (S-DES) algorithm to provide better security to the implemented system. The security of the method lies in selecting candidate AC-QTCs within each non-overlapping 8 × 8 sub-block using a pseudo-random key. The basic performance of this steganographic technique was verified through experiments on various existing MPEG-2 encoded videos over a wide range of embedded payload rates. Overall, the experimental results verify the excellent performance of the proposed EBBD with a better trade-off in terms of imperceptibility and payload, as compared with previous techniques, while at the same time ensuring minimal bitrate increase and negligible degradation of PSNR values. PMID:26963093

  2. A Secure and Robust Compressed Domain Video Steganography for Intra- and Inter-Frames Using Embedding-Based Byte Differencing (EBBD) Scheme.

    PubMed

    Idbeaa, Tarik; Abdul Samad, Salina; Husain, Hafizah

    2016-01-01

    This paper presents a novel secure and robust steganographic technique in the compressed video domain, namely embedding-based byte differencing (EBBD). Unlike most of the current video steganographic techniques, which take into account only the intra frames for data embedding, the proposed EBBD technique aims to hide information in both intra and inter frames. The information is embedded into a compressed video by simultaneously manipulating the quantized AC coefficients (AC-QTCs) of the luminance components of the frames during the MPEG-2 encoding process. Later, during the decoding process, the embedded information can be detected and extracted completely. Furthermore, the EBBD basically deals with two security concepts: data encryption and data concealing. Hence, during the embedding process, secret data is encrypted using the simplified data encryption standard (S-DES) algorithm to provide better security to the implemented system. The security of the method lies in selecting candidate AC-QTCs within each non-overlapping 8 × 8 sub-block using a pseudo-random key. The basic performance of this steganographic technique was verified through experiments on various existing MPEG-2 encoded videos over a wide range of embedded payload rates. Overall, the experimental results verify the excellent performance of the proposed EBBD with a better trade-off in terms of imperceptibility and payload, as compared with previous techniques, while at the same time ensuring minimal bitrate increase and negligible degradation of PSNR values.

  3. High-order upwind schemes for the wave equation on overlapping grids: Maxwell's equations in second-order form

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Angel, Jordan B.; Banks, Jeffrey W.; Henshaw, William D.

    High-order accurate upwind approximations for the wave equation in second-order form on overlapping grids are developed. Although upwind schemes are well established for first-order hyperbolic systems, it was only recently shown by Banks and Henshaw how upwinding could be incorporated into the second-order form of the wave equation. This new upwind approach is extended here to solve the time-domain Maxwell's equations in second-order form; schemes of arbitrary order of accuracy are formulated for general curvilinear grids. Taylor time-stepping is used to develop single-step space-time schemes, and the upwind dissipation is incorporated by embedding the exact solution of a local Riemann problem into the discretization. Second-order and fourth-order accurate schemes are implemented for problems in two and three space dimensions, and overlapping grids are used to treat complex geometry and problems with multiple materials. Stability analysis of the upwind scheme on overlapping grids is performed using normal mode theory. The stability analysis and computations confirm that the upwind scheme remains stable on overlapping grids, including the difficult case of thin boundary grids when the traditional non-dissipative scheme becomes unstable. The accuracy properties of the scheme are carefully evaluated on a series of classical scattering problems for both perfect conductors and dielectric materials in two and three space dimensions. Finally, the upwind scheme is shown to be robust and provide high-order accuracy.

  4. High-order upwind schemes for the wave equation on overlapping grids: Maxwell's equations in second-order form

    DOE PAGES

    Angel, Jordan B.; Banks, Jeffrey W.; Henshaw, William D.

    2017-09-28

    High-order accurate upwind approximations for the wave equation in second-order form on overlapping grids are developed. Although upwind schemes are well established for first-order hyperbolic systems, it was only recently shown by Banks and Henshaw how upwinding could be incorporated into the second-order form of the wave equation. This new upwind approach is extended here to solve the time-domain Maxwell's equations in second-order form; schemes of arbitrary order of accuracy are formulated for general curvilinear grids. Taylor time-stepping is used to develop single-step space-time schemes, and the upwind dissipation is incorporated by embedding the exact solution of a local Riemann problem into the discretization. Second-order and fourth-order accurate schemes are implemented for problems in two and three space dimensions, and overlapping grids are used to treat complex geometry and problems with multiple materials. Stability analysis of the upwind scheme on overlapping grids is performed using normal mode theory. The stability analysis and computations confirm that the upwind scheme remains stable on overlapping grids, including the difficult case of thin boundary grids when the traditional non-dissipative scheme becomes unstable. The accuracy properties of the scheme are carefully evaluated on a series of classical scattering problems for both perfect conductors and dielectric materials in two and three space dimensions. Finally, the upwind scheme is shown to be robust and provide high-order accuracy.

  5. High-order upwind schemes for the wave equation on overlapping grids: Maxwell's equations in second-order form

    NASA Astrophysics Data System (ADS)

    Angel, Jordan B.; Banks, Jeffrey W.; Henshaw, William D.

    2018-01-01

    High-order accurate upwind approximations for the wave equation in second-order form on overlapping grids are developed. Although upwind schemes are well established for first-order hyperbolic systems, it was only recently shown by Banks and Henshaw [1] how upwinding could be incorporated into the second-order form of the wave equation. This new upwind approach is extended here to solve the time-domain Maxwell's equations in second-order form; schemes of arbitrary order of accuracy are formulated for general curvilinear grids. Taylor time-stepping is used to develop single-step space-time schemes, and the upwind dissipation is incorporated by embedding the exact solution of a local Riemann problem into the discretization. Second-order and fourth-order accurate schemes are implemented for problems in two and three space dimensions, and overlapping grids are used to treat complex geometry and problems with multiple materials. Stability analysis of the upwind-scheme on overlapping grids is performed using normal mode theory. The stability analysis and computations confirm that the upwind scheme remains stable on overlapping grids, including the difficult case of thin boundary grids when the traditional non-dissipative scheme becomes unstable. The accuracy properties of the scheme are carefully evaluated on a series of classical scattering problems for both perfect conductors and dielectric materials in two and three space dimensions. The upwind scheme is shown to be robust and provide high-order accuracy.

  6. A robust embedded vision system feasible white balance algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Yuan; Yu, Feihong

    2018-01-01

    White balance is a very important part of the color image processing pipeline. To meet the efficiency and accuracy needs of embedded machine vision processing systems, an efficient and robust white balance algorithm combining several classical ones is proposed. The proposed algorithm has three main parts. First, to guarantee higher efficiency, an initial parameter calculated from the statistics of the R, G, and B components of the raw data is used to initialize the subsequent iterative method. After that, the bilinear interpolation algorithm is used to implement the demosaicing procedure. Finally, an adaptive step-adjustment scheme is introduced to ensure the controllability and robustness of the algorithm. To verify the proposed algorithm's performance on an embedded vision system, a smart camera based on the IMX6 DualLite, IMX291, and XC6130 is designed. Extensive experiments on a large number of images under different color temperatures and exposure conditions show that the proposed white balance algorithm effectively avoids the color deviation problem, achieves a good balance between efficiency and quality, and is suitable for embedded machine vision processing systems.
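
    The abstract does not spell out the update law, so the sketch below stands in with a damped gray-world gain update: the channel means of the demosaiced frame drive the gains, and a step factor plays the role of the adaptive step adjustment. The function names and the fixed step are our assumptions.

    ```python
    import numpy as np

    def gray_world_gains(rgb, prev_gains=(1.0, 1.0, 1.0), step=0.5):
        """One damped gray-world update on an (H, W, 3) frame: pull each gain toward mean(G)/mean(channel)."""
        means = rgb.reshape(-1, 3).mean(axis=0)                      # mean R, G, B of the frame
        target = np.array([means[1] / means[0], 1.0, means[1] / means[2]])
        prev = np.asarray(prev_gains, dtype=float)
        return tuple(prev + step * (target - prev))                  # step in (0, 1] damps the correction

    def apply_gains(rgb, gains):
        """Apply per-channel gains and clip back to the 8-bit range."""
        return np.clip(rgb.astype(float) * np.asarray(gains), 0, 255).astype(np.uint8)
    ```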

  7. Automatically produced FRP beams with embedded FOS in complex geometry: process, material compatibility, micromechanical analysis, and performance tests

    NASA Astrophysics Data System (ADS)

    Gabler, Markus; Tkachenko, Viktoriya; Küppers, Simon; Kuka, Georg G.; Habel, Wolfgang R.; Milwich, Markus; Knippers, Jan

    2012-04-01

    The main goal of the presented work was to develop a multifunctional beam composed of fiber-reinforced plastics (FRP) and an embedded optical fiber with various fiber Bragg grating (FBG) sensors. These beams are developed for use as structural members for bridges or industrial applications. It is now possible to realize large-scale cross sections, the embedding is part of a fully automated process, and jumpers can be omitted so as not to negatively influence the laminate. The development includes the smart placement and layout of the optical fibers in the cross section, reliable strain transfer, and finally the coupling of the embedded fibers after production. Micromechanical tests and analysis were carried out to evaluate the performance of the sensor. The work was funded by the German Ministry of Economics and Technology (funding scheme ZIM). In addition to the authors of this contribution, Melanie Book with Röchling Engineering Plastics KG (Haren/Germany) and Katharina Frey with SAERTEX GmbH & Co. KG (Saerbeck/Germany) were part of the research group.

  8. Image-Based Environmental Monitoring Sensor Application Using an Embedded Wireless Sensor Network

    PubMed Central

    Paek, Jeongyeup; Hicks, John; Coe, Sharon; Govindan, Ramesh

    2014-01-01

    This article discusses the experiences from the development and deployment of two image-based environmental monitoring sensor applications using an embedded wireless sensor network. Our system uses low-power image sensors and the Tenet general purpose sensing system for tiered embedded wireless sensor networks. It leverages Tenet's built-in support for reliable delivery of high rate sensing data, scalability and its flexible scripting language, which enables mote-side image compression and the ease of deployment. Our first deployment of a pitfall trap monitoring application at the James San Jacinto Mountain Reserve provided us with insights and lessons learned into the deployment of and compression schemes for these embedded wireless imaging systems. Our three month-long deployment of a bird nest monitoring application resulted in over 100,000 images collected from a 19-camera node network deployed over an area of 0.05 square miles, despite highly variable environmental conditions. Our biologists found the on-line, near-real-time access to images to be useful for obtaining data on answering their biological questions. PMID:25171121

  9. Image-based environmental monitoring sensor application using an embedded wireless sensor network.

    PubMed

    Paek, Jeongyeup; Hicks, John; Coe, Sharon; Govindan, Ramesh

    2014-08-28

    This article discusses the experiences from the development and deployment of two image-based environmental monitoring sensor applications using an embedded wireless sensor network. Our system uses low-power image sensors and the Tenet general purpose sensing system for tiered embedded wireless sensor networks. It leverages Tenet's built-in support for reliable delivery of high rate sensing data, scalability and its flexible scripting language, which enables mote-side image compression and the ease of deployment. Our first deployment of a pitfall trap monitoring application at the James San Jacinto Mountain Reserve provided us with insights and lessons learned into the deployment of and compression schemes for these embedded wireless imaging systems. Our three month-long deployment of a bird nest monitoring application resulted in over 100,000 images collected from a 19-camera node network deployed over an area of 0.05 square miles, despite highly variable environmental conditions. Our biologists found the on-line, near-real-time access to images to be useful for obtaining data on answering their biological questions.

  10. Parametric embedding for class visualization.

    PubMed

    Iwata, Tomoharu; Saito, Kazumi; Ueda, Naonori; Stromsten, Sean; Griffiths, Thomas L; Tenenbaum, Joshua B

    2007-09-01

    We propose a new method, parametric embedding (PE), that embeds objects with the class structure into a low-dimensional visualization space. PE takes as input a set of class conditional probabilities for given data points and tries to preserve the structure in an embedding space by minimizing a sum of Kullback-Leibler divergences, under the assumption that samples are generated by a gaussian mixture with equal covariances in the embedding space. PE has many potential uses depending on the source of the input data, providing insight into the classifier's behavior in supervised, semisupervised, and unsupervised settings. The PE algorithm has a computational advantage over conventional embedding methods based on pairwise object relations since its complexity scales with the product of the number of objects and the number of classes. We demonstrate PE by visualizing supervised categorization of Web pages, semisupervised categorization of digits, and the relations of words and latent topics found by an unsupervised algorithm, latent Dirichlet allocation.
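
    In symbols, the objective described in the abstract takes roughly the following form, with given class posteriors p(c|x_n), embedding coordinates y_n, and class centers φ_c. The unit-covariance Gaussian components and the notation are our reading of the abstract, not the paper's exact equations.

    ```latex
    \min_{\{\mathbf{y}_n\},\{\boldsymbol{\phi}_c\}}
      \sum_{n} \mathrm{KL}\!\left( p(\cdot \mid x_n) \,\middle\|\, q(\cdot \mid \mathbf{y}_n) \right),
    \qquad
    q(c \mid \mathbf{y}_n) =
      \frac{\exp\!\left(-\tfrac{1}{2}\lVert \mathbf{y}_n - \boldsymbol{\phi}_c \rVert^2\right)}
           {\sum_{c'} \exp\!\left(-\tfrac{1}{2}\lVert \mathbf{y}_n - \boldsymbol{\phi}_{c'} \rVert^2\right)}.
    ```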

  11. Estimating the CCSD basis-set limit energy from small basis sets: basis-set extrapolations vs additivity schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spackman, Peter R.; Karton, Amir, E-mail: amir.karton@uwa.edu.au

    Coupled cluster calculations with all single and double excitations (CCSD) converge exceedingly slowly with the size of the one-particle basis set. We assess the performance of a number of approaches for obtaining CCSD correlation energies close to the complete basis-set limit in conjunction with relatively small DZ and TZ basis sets. These include global and system-dependent extrapolations based on the A + B/L^α two-point extrapolation formula, and the well-known additivity approach that uses an MP2-based basis-set-correction term. We show that the basis set convergence rate can change dramatically between different systems (e.g. it is slower for molecules with polar bonds and/or second-row elements). The system-dependent basis-set extrapolation scheme, in which unique basis-set extrapolation exponents for each system are obtained from lower-cost MP2 calculations, significantly accelerates the basis-set convergence relative to the global extrapolations. Nevertheless, we find that the simple MP2-based basis-set additivity scheme outperforms the extrapolation approaches. For example, the following root-mean-squared deviations are obtained for the 140 basis-set limit CCSD atomization energies in the W4-11 database: 9.1 (global extrapolation), 3.7 (system-dependent extrapolation), and 2.4 (additivity scheme) kJ mol^-1. The CCSD energy in these approximations is obtained from basis sets of up to TZ quality and the latter two approaches require additional MP2 calculations with basis sets of up to QZ quality. We also assess the performance of the basis-set extrapolations and additivity schemes for a set of 20 basis-set limit CCSD atomization energies of larger molecules including amino acids, DNA/RNA bases, aromatic compounds, and platonic hydrocarbon cages. We obtain the following RMSDs for the above methods: 10.2 (global extrapolation), 5.7 (system-dependent extrapolation), and 2.9 (additivity scheme) kJ mol^-1.

  12. Estimating the CCSD basis-set limit energy from small basis sets: basis-set extrapolations vs additivity schemes

    NASA Astrophysics Data System (ADS)

    Spackman, Peter R.; Karton, Amir

    2015-05-01

    Coupled cluster calculations with all single and double excitations (CCSD) converge exceedingly slowly with the size of the one-particle basis set. We assess the performance of a number of approaches for obtaining CCSD correlation energies close to the complete basis-set limit in conjunction with relatively small DZ and TZ basis sets. These include global and system-dependent extrapolations based on the A + B/L^α two-point extrapolation formula, and the well-known additivity approach that uses an MP2-based basis-set-correction term. We show that the basis set convergence rate can change dramatically between different systems (e.g. it is slower for molecules with polar bonds and/or second-row elements). The system-dependent basis-set extrapolation scheme, in which unique basis-set extrapolation exponents for each system are obtained from lower-cost MP2 calculations, significantly accelerates the basis-set convergence relative to the global extrapolations. Nevertheless, we find that the simple MP2-based basis-set additivity scheme outperforms the extrapolation approaches. For example, the following root-mean-squared deviations are obtained for the 140 basis-set limit CCSD atomization energies in the W4-11 database: 9.1 (global extrapolation), 3.7 (system-dependent extrapolation), and 2.4 (additivity scheme) kJ mol^-1. The CCSD energy in these approximations is obtained from basis sets of up to TZ quality and the latter two approaches require additional MP2 calculations with basis sets of up to QZ quality. We also assess the performance of the basis-set extrapolations and additivity schemes for a set of 20 basis-set limit CCSD atomization energies of larger molecules including amino acids, DNA/RNA bases, aromatic compounds, and platonic hydrocarbon cages. We obtain the following RMSDs for the above methods: 10.2 (global extrapolation), 5.7 (system-dependent extrapolation), and 2.9 (additivity scheme) kJ mol^-1.
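
    To make the two ideas compared above concrete, here is a minimal sketch (not the authors' code) of the generic two-point extrapolation and of the MP2-based additivity correction; the cardinal numbers (DZ=2, TZ=3) and the default α are illustrative assumptions, whereas the paper's system-dependent scheme derives α per system from MP2 energies.

```python
def cbs_two_point(e_lo, e_hi, l_lo=2, l_hi=3, alpha=3.0):
    """Estimate the basis-set limit from E(L) = E_CBS + B / L**alpha
    evaluated at two basis-set cardinal numbers L (here DZ=2 and TZ=3)."""
    b = (e_lo - e_hi) / (l_lo ** -alpha - l_hi ** -alpha)
    return e_hi - b * l_hi ** -alpha

def ccsd_additivity(ccsd_small, mp2_small, mp2_large):
    """MP2-based additivity: correct a small-basis CCSD energy with the
    MP2 basis-set increment obtained from a larger (e.g. QZ) basis."""
    return ccsd_small + (mp2_large - mp2_small)
```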

  13. Privacy-Preserving Location-Based Service Scheme for Mobile Sensing Data.

    PubMed

    Xie, Qingqing; Wang, Liangmin

    2016-11-25

    With the wide use of mobile sensing applications, more and more location-embedded data are collected and stored in mobile clouds, such as iCloud, Samsung cloud, etc. Using these data, the cloud service provider (CSP) can provide location-based service (LBS) for users. However, the mobile cloud is untrustworthy. Privacy concerns force the sensitive locations to be stored on the mobile cloud in an encrypted form, which makes it challenging to utilize these data to provide efficient LBS. To solve this problem, we propose a privacy-preserving LBS scheme for mobile sensing data, based on the RSA (for Rivest, Shamir and Adleman) algorithm and the ciphertext-policy attribute-based encryption (CP-ABE) scheme. The mobile cloud can perform location distance computing and comparison efficiently for authorized users, without location privacy leakage. Finally, theoretical security analysis and experimental evaluation demonstrate that our scheme is secure against the chosen plaintext attack (CPA) and efficient enough for practical applications in terms of user-side computation overhead.

  14. Best Hiding Capacity Scheme for Variable Length Messages Using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Bajaj, Ruchika; Bedi, Punam; Pal, S. K.

    Steganography is the art of hiding information in a way that prevents the detection of hidden messages. Besides the security of the data, the quantity of data that can be hidden in a single cover medium is also very important. We present a secure data hiding scheme with high embedding capacity for messages of variable length based on Particle Swarm Optimization. This technique gives the best pixel positions in the cover image, which can be used to hide the secret data. In the proposed scheme, k bits of the secret message are substituted into the k least significant bits of an image pixel, where k varies from 1 to 4 depending on the message length. The proposed scheme is tested and the results are compared with simple LSB substitution and uniform 4-bit LSB hiding (with PSO) for the test images Nature, Baboon, Lena and Kitty. The experimental study confirms that the proposed method achieves high data hiding capacity, maintains imperceptibility, and minimizes the distortion between the cover image and the obtained stego image.
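
    A minimal sketch of the k-bit LSB substitution step described above; the PSO search for the best pixel positions is outside its scope, so, purely for illustration, the first pixels in raster order are used and `k` is assumed to have been chosen from the message length.

```python
import numpy as np

def embed_lsb(cover, bits, k):
    """Write k message bits into the k least-significant bits of each
    selected pixel of the uint8 cover image."""
    stego = cover.astype(np.int32).ravel()
    mask = (1 << k) - 1
    for i in range(len(bits) // k):
        chunk = bits[i * k:(i + 1) * k]
        value = int("".join(str(int(b)) for b in chunk), 2)
        stego[i] = (stego[i] & ~mask) | value
    return stego.reshape(cover.shape).astype(np.uint8)

def extract_lsb(stego, n_bits, k):
    """Read n_bits back from the k LSBs of the same pixel positions."""
    flat = stego.ravel()
    out = []
    for i in range(n_bits // k):
        v = int(flat[i]) & ((1 << k) - 1)
        out.extend(int(b) for b in format(v, f"0{k}b"))
    return np.array(out, dtype=np.uint8)
```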

  15. Privacy-Preserving Location-Based Service Scheme for Mobile Sensing Data †

    PubMed Central

    Xie, Qingqing; Wang, Liangmin

    2016-01-01

    With the wide use of mobile sensing applications, more and more location-embedded data are collected and stored in mobile clouds, such as iCloud, Samsung cloud, etc. Using these data, the cloud service provider (CSP) can provide location-based service (LBS) for users. However, the mobile cloud is untrustworthy. Privacy concerns force the sensitive locations to be stored on the mobile cloud in an encrypted form, which makes it challenging to utilize these data to provide efficient LBS. To solve this problem, we propose a privacy-preserving LBS scheme for mobile sensing data, based on the RSA (for Rivest, Shamir and Adleman) algorithm and the ciphertext-policy attribute-based encryption (CP-ABE) scheme. The mobile cloud can perform location distance computing and comparison efficiently for authorized users, without location privacy leakage. Finally, theoretical security analysis and experimental evaluation demonstrate that our scheme is secure against the chosen plaintext attack (CPA) and efficient enough for practical applications in terms of user-side computation overhead. PMID:27897984

  16. Images as embedding maps and minimal surfaces: Movies, color, and volumetric medical images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kimmel, R.; Malladi, R.; Sochen, N.

    A general geometrical framework for image processing is presented. The authors consider intensity images as surfaces in the (x,I) space. The image is thereby a two dimensional surface in three dimensional space for gray level images. The new formulation unifies many classical schemes, algorithms, and measures via choices of parameters in a "master" geometrical measure. More important, it is a simple and efficient tool for the design of natural schemes for image enhancement, segmentation, and scale space. Here the authors give the basic motivation and apply the scheme to enhance images. They present the concept of an image as a surface in dimensions higher than the three dimensional intuitive space. This will help them handle movies, color, and volumetric medical images.

  17. Evaluating and Improving Cloud Processes in the Multi-Scale Modeling Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ackerman, Thomas P.

    2015-03-01

    The research performed under this grant was intended to improve the embedded cloud model in the Multi-scale Modeling Framework (MMF) for convective clouds by using a 2-moment microphysics scheme rather than the single moment scheme used in all the MMF runs to date. The technical report and associated documents describe the results of testing the cloud resolving model with fixed boundary conditions and evaluation of model results with data. The overarching conclusion is that such model evaluations are problematic because errors in the forcing fields control the results so strongly that variations in parameterization values cannot be usefully constrained.

  18. Connes' embedding problem and Tsirelson's problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Junge, M.; Palazuelos, C.; Navascues, M.

    2011-01-15

    We show that Tsirelson's problem concerning the set of quantum correlations and Connes' embedding problem on finite approximations in von Neumann algebras (known to be equivalent to Kirchberg's QWEP conjecture) are essentially equivalent. Specifically, Tsirelson's problem asks whether the set of bipartite quantum correlations generated between tensor product separated systems is the same as the set of correlations between commuting C*-algebras. Connes' embedding problem asks whether any separable II_1 factor is a subfactor of the ultrapower of the hyperfinite II_1 factor. We show that an affirmative answer to Connes' question implies a positive answer to Tsirelson's. Conversely, a positive answer to a matrix-valued version of Tsirelson's problem implies a positive one to Connes' problem.

  19. A minimization method on the basis of embedding the feasible set and the epigraph

    NASA Astrophysics Data System (ADS)

    Zabotin, I. Ya; Shulgina, O. N.; Yarullin, R. S.

    2016-11-01

    We propose a conditional minimization method for convex nonsmooth functions which belongs to the class of cutting-plane methods. While constructing iteration points, the feasible set and the epigraph of the objective function are approximated by polyhedral sets, so the auxiliary problems of constructing iteration points are linear programming problems. During the optimization process, the sets approximating the epigraph can be updated; these updates are performed by periodically dropping cutting planes that form the embedding sets. Convergence of the proposed method is proved, and some realizations of the method are discussed.
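
    The following sketch is a generic Kelley-style cutting-plane loop, not the authors' algorithm, and it omits their periodic cut-dropping step; it only illustrates how the epigraph is approximated by accumulated cutting planes and how each iteration point comes from a linear program. `f` and `subgrad` are user-supplied callables, and the feasible set is assumed to be a simple box given as a list of (lo, hi) pairs.

```python
import numpy as np
from scipy.optimize import linprog

def cutting_plane_min(f, subgrad, x0, bounds, max_iter=50, tol=1e-6):
    """Minimize a convex nonsmooth f over a box by accumulating cuts
    t >= f(x_j) + g_j.(x - x_j) that approximate the epigraph and
    solving an LP in the variables (x, t) at each iteration."""
    n = len(x0)
    cuts = []                                   # list of (g_j, f(x_j) - g_j.x_j)
    x = np.array(x0, float)
    best = np.inf
    for _ in range(max_iter):
        fx, g = f(x), np.asarray(subgrad(x), float)
        best = min(best, fx)
        cuts.append((g, fx - g @ x))
        # LP: minimize t subject to g_j.x - t <= -(f(x_j) - g_j.x_j)
        c = np.r_[np.zeros(n), 1.0]
        A = np.array([np.r_[g_j, -1.0] for g_j, _ in cuts])
        b = np.array([-c_j for _, c_j in cuts])
        res = linprog(c, A_ub=A, b_ub=b, bounds=bounds + [(None, None)])
        if not res.success:
            break
        x, t = res.x[:n], res.x[n]
        if best - t < tol:                      # incumbent vs LP lower bound
            break
    return x, best
```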

  20. An Energy-Based Approach for Detection and Characterization of Subtle Entities Within Laser Scanning Point-Clouds

    NASA Astrophysics Data System (ADS)

    Arav, Reuma; Filin, Sagi

    2016-06-01

    Airborne laser scans present an optimal tool to describe geomorphological features in natural environments. However, a challenge arises in the detection of such phenomena, as they are embedded in the topography, tend to blend into their surroundings and leave only a subtle signature within the data. Most object-recognition studies address mainly urban environments and follow a general pipeline where the data are partitioned into segments with uniform properties. These approaches are restricted to the man-made domain and can handle only a limited set of features that conform to well-defined geometric forms. As natural environments present a more complex set of features, interpretation of such data is still largely manual. In this paper, we propose a data-aware detection scheme that is not bound to specific domains or shapes. We define the recognition question as an energy optimization problem, solved by variational means. Our approach, based on the level-set method, geometrically characterizes local surfaces within the data and uses these characteristics as a potential field for minimization. The main advantage here is that it allows topological changes of the evolving curves, such as merging and breaking. We demonstrate the proposed methodology on the detection of collapse sinkholes.

  1. FDE-vdW: A van der Waals inclusive subsystem density-functional theory.

    PubMed

    Kevorkyants, Ruslan; Eshuis, Henk; Pavanello, Michele

    2014-07-28

    We present a formally exact van der Waals inclusive electronic structure theory, called FDE-vdW, based on the Frozen Density Embedding formulation of subsystem Density-Functional Theory. In subsystem DFT, the energy functional is composed of subsystem additive and non-additive terms. We show that an appropriate definition of the long-range correlation energy is given by the value of the non-additive correlation functional. This functional is evaluated using the fluctuation-dissipation theorem aided by a formally exact decomposition of the response functions into subsystem contributions. FDE-vdW is derived in detail and several approximate schemes are proposed, which lead to practical implementations of the method. We show that FDE-vdW is Casimir-Polder consistent, i.e., it reduces to the generalized Casimir-Polder formula for asymptotic inter-subsystems separations. Pilot calculations of binding energies of 13 weakly bound complexes singled out from the S22 set show a dramatic improvement upon semilocal subsystem DFT, provided that an appropriate exchange functional is employed. The convergence of FDE-vdW with basis set size is discussed, as well as its dependence on the choice of associated density functional approximant.

  2. FDE-vdW: A van der Waals inclusive subsystem density-functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kevorkyants, Ruslan; Pavanello, Michele, E-mail: m.pavanello@rutgers.edu; Eshuis, Henk

    2014-07-28

    We present a formally exact van der Waals inclusive electronic structure theory, called FDE-vdW, based on the Frozen Density Embedding formulation of subsystem Density-Functional Theory. In subsystem DFT, the energy functional is composed of subsystem additive and non-additive terms. We show that an appropriate definition of the long-range correlation energy is given by the value of the non-additive correlation functional. This functional is evaluated using the fluctuation–dissipation theorem aided by a formally exact decomposition of the response functions into subsystem contributions. FDE-vdW is derived in detail and several approximate schemes are proposed, which lead to practical implementations of the method. We show that FDE-vdW is Casimir-Polder consistent, i.e., it reduces to the generalized Casimir-Polder formula for asymptotic inter-subsystems separations. Pilot calculations of binding energies of 13 weakly bound complexes singled out from the S22 set show a dramatic improvement upon semilocal subsystem DFT, provided that an appropriate exchange functional is employed. The convergence of FDE-vdW with basis set size is discussed, as well as its dependence on the choice of associated density functional approximant.

  3. Identifying the minor set cover of dense connected bipartite graphs via random matching edge sets

    NASA Astrophysics Data System (ADS)

    Hamilton, Kathleen E.; Humble, Travis S.

    2017-04-01

    Using quantum annealing to solve an optimization problem requires minor embedding a logic graph into a known hardware graph. In an effort to reduce the complexity of the minor embedding problem, we introduce the minor set cover (MSC) of a known graph G: a subset of graph minors which contain any remaining minor of the graph as a subgraph. Any graph that can be embedded into G will be embeddable into a member of the MSC. Focusing on embedding into the hardware graph of commercially available quantum annealers, we establish the MSC for a particular known virtual hardware, which is a complete bipartite graph. We show that the complete bipartite graph K_{N,N} has an MSC of N minors, from which K_{N+1} is identified as the largest clique minor of K_{N,N}. The case of determining the largest clique minor of hardware with faults is briefly discussed but remains an open question.

  4. Identifying the minor set cover of dense connected bipartite graphs via random matching edge sets

    DOE PAGES

    Hamilton, Kathleen E.; Humble, Travis S.

    2017-02-23

    Using quantum annealing to solve an optimization problem requires minor embedding a logic graph into a known hardware graph. In an effort to reduce the complexity of the minor embedding problem, we introduce the minor set cover (MSC) of a known graph G: a subset of graph minors which contain any remaining minor of the graph as a subgraph. Any graph that can be embedded into G will be embeddable into a member of the MSC. Focusing on embedding into the hardware graph of commercially available quantum annealers, we establish the MSC for a particular known virtual hardware, which is a complete bipartite graph. Furthermore, we show that the complete bipartite graph K_{N,N} has an MSC of N minors, from which K_{N+1} is identified as the largest clique minor of K_{N,N}. The case of determining the largest clique minor of hardware with faults is briefly discussed but remains an open question.
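
    A small sketch (using networkx, not the authors' code) of why K_{N+1} is a clique minor of K_{N,N}: contracting N-1 matching edges leaves N-1 supernodes plus the two unmatched vertices, and all N+1 resulting branch sets are pairwise adjacent.

```python
import networkx as nx

def clique_minor_of_knn(n):
    """Build a K_{n+1} minor of K_{n,n} by contracting n-1 matching edges
    (a_i, b_i); the two uncontracted vertices stay adjacent to each other
    and to every contracted supernode."""
    g = nx.complete_bipartite_graph(n, n)   # nodes 0..n-1 and n..2n-1
    for i in range(n - 1):
        g = nx.contracted_nodes(g, i, n + i, self_loops=False)
    return g

g = clique_minor_of_knn(5)
# after the contractions the remaining 6 nodes form a complete graph K_6
assert g.number_of_nodes() == 6
assert all(g.has_edge(u, v) for u in g for v in g if u != v)
```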

  5. Employing Augmented-Reality-Embedded Instruction to Disperse the Imparities of Individual Differences in Earth Science Learning

    NASA Astrophysics Data System (ADS)

    Chen, Cheng-ping; Wang, Chang-Hwa

    2015-12-01

    Studies have proven that merging hands-on and online learning can result in an enhanced experience in learning science. In contrast to traditional online learning, multiple in-classroom activities may be involved in an augmented-reality (AR)-embedded e-learning process and thus could reduce the effects of individual differences. Using a three-stage AR-embedded instructional process, we conducted an experiment to investigate the influence of individual differences on learning the earth science phenomena of "day, night, and seasons" for junior high school students. The mixed-methods sequential explanatory design was employed. In the quantitative phase, the factors of learning styles and ICT competences were examined along with the overall learning achievement. Independent t tests and ANCOVAs were employed for inferential statistics. The results showed that overall learning achievement was significant for the AR-embedded instruction. Nevertheless, neither of the two learner factors exhibited a significant effect on learning achievement. In the qualitative phase, we analyzed student interview records, and a wide variation in students' preferred instructional stages was revealed. These findings could provide an alternative rationale for developing ICT-supported instruction, as our three-stage AR-embedded comprehensive e-learning scheme could enhance instructional adaptiveness to disperse the imparities of individual differences between learners.

  6. ADAPTIVE METHODS FOR STOCHASTIC DIFFERENTIAL EQUATIONS VIA NATURAL EMBEDDINGS AND REJECTION SAMPLING WITH MEMORY.

    PubMed

    Rackauckas, Christopher; Nie, Qing

    2017-01-01

    Adaptive time-stepping with high-order embedded Runge-Kutta pairs and rejection sampling provides efficient approaches for solving differential equations. While many such methods exist for solving deterministic systems, little progress has been made for stochastic variants. One challenge in developing adaptive methods for stochastic differential equations (SDEs) is the construction of embedded schemes with direct error estimates. We present a new class of embedded stochastic Runge-Kutta (SRK) methods with strong order 1.5 which have a natural embedding of strong order 1.0 methods. This allows for the derivation of an error estimate which requires no additional function evaluations. Next we derive a general method to reject the time steps without losing information about the future Brownian path termed Rejection Sampling with Memory (RSwM). This method utilizes a stack data structure to do rejection sampling, costing only a few floating point calculations. We show numerically that the methods generate statistically-correct and tolerance-controlled solutions. Lastly, we show that this form of adaptivity can be applied to systems of equations, and demonstrate that it solves a stiff biological model 12.28x faster than common fixed timestep algorithms. Our approach only requires the solution to a bridging problem and thus lends itself to natural generalizations beyond SDEs.

  7. ADAPTIVE METHODS FOR STOCHASTIC DIFFERENTIAL EQUATIONS VIA NATURAL EMBEDDINGS AND REJECTION SAMPLING WITH MEMORY

    PubMed Central

    Rackauckas, Christopher

    2017-01-01

    Adaptive time-stepping with high-order embedded Runge-Kutta pairs and rejection sampling provides efficient approaches for solving differential equations. While many such methods exist for solving deterministic systems, little progress has been made for stochastic variants. One challenge in developing adaptive methods for stochastic differential equations (SDEs) is the construction of embedded schemes with direct error estimates. We present a new class of embedded stochastic Runge-Kutta (SRK) methods with strong order 1.5 which have a natural embedding of strong order 1.0 methods. This allows for the derivation of an error estimate which requires no additional function evaluations. Next we derive a general method to reject the time steps without losing information about the future Brownian path termed Rejection Sampling with Memory (RSwM). This method utilizes a stack data structure to do rejection sampling, costing only a few floating point calculations. We show numerically that the methods generate statistically-correct and tolerance-controlled solutions. Lastly, we show that this form of adaptivity can be applied to systems of equations, and demonstrate that it solves a stiff biological model 12.28x faster than common fixed timestep algorithms. Our approach only requires the solution to a bridging problem and thus lends itself to natural generalizations beyond SDEs. PMID:29527134
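
    To make the "embedded pair with a free error estimate" idea concrete, here is a deterministic sketch with the simplest such pair (Heun's method embedding Euler on the same stage evaluations); it only illustrates the adaptivity logic and is not the strong-order-1.5/1.0 stochastic Runge-Kutta pair or the RSwM rejection machinery of the paper.

```python
import numpy as np

def heun_euler_adaptive(f, t0, y0, t_end, rtol=1e-6, atol=1e-9, h=1e-2):
    """Adaptive stepping with an embedded (2nd/1st order) pair: the
    difference between the Heun and Euler results reuses the same stage
    evaluations, giving an error estimate at no extra cost; steps are
    rejected and resized based on it."""
    t, y = t0, np.atleast_1d(np.asarray(y0, float))
    ts, ys = [t], [y.copy()]
    while t < t_end:
        h = min(h, t_end - t)
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y_high = y + 0.5 * h * (k1 + k2)   # order 2 (Heun)
        y_low = y + h * k1                 # order 1 (embedded Euler)
        err = np.linalg.norm((y_high - y_low) / (atol + rtol * np.abs(y_high)))
        if err <= 1.0:                     # accept the step
            t, y = t + h, y_high
            ts.append(t); ys.append(y.copy())
        # shrink or grow the step from the error estimate
        h *= min(5.0, max(0.2, 0.9 * err ** -0.5)) if err > 0 else 5.0
    return np.array(ts), np.array(ys)

# example usage on dy/dt = -y
ts, ys = heun_euler_adaptive(lambda t, y: -y, 0.0, [1.0], 5.0)
```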

  8. Working better together: joint leadership development for doctors and managers.

    PubMed

    Kelly, Nicola

    2014-01-01

    Traditionally, there have been tensions between frontline healthcare professionals and managers, with well-known stereotypes of difficult consultants and pen-pushing managers. Many junior doctors have limited management experience and have often never even met a manager prior to taking on a consultant role. Based on a successful programme pioneered by Dr Robert Klaber (Imperial, London), we have set up an innovative scheme for Birmingham Children's Hospital, pairing junior doctors and managers to learn and work together. Our aim was to cultivate positive attitudes and understanding between the two groups, break down inter-professional barriers, and provide practical leadership experience and education. We recruited 60 managers and doctors to participate in shadowing, conversation, and quality improvement projects. Thought-provoking online materials, blogs, socials, and popular monthly workshops, consisting of patient-focused debate and discussion around key leadership themes, have helped to support learning and cement shared values. Formal evaluation has demonstrated an improvement in how participants perceive their knowledge and ability based on key NHS Leadership Framework competencies. Participant feedback has been extremely positive, and everyone plans to continue to incorporate Paired Learning into their continuing professional development. We are now embedding Paired Learning in the ongoing educational programme offered at Birmingham Children's Hospital, whilst looking at extending the scheme to include different professional groups and other trusts across the region and nationally.

  9. Nonlocal dynamics of dissipative phononic fluids

    NASA Astrophysics Data System (ADS)

    Nemati, Navid; Lee, Yoonkyung E.; Lafarge, Denis; Duclos, Aroune; Fang, Nicholas

    2017-06-01

    We describe the nonlocal effective properties of a two-dimensional dissipative phononic crystal made by periodic arrays of rigid and motionless cylinders embedded in a viscothermal fluid such as air. The description is based on a nonlocal theory of sound propagation in stationary random fluid/rigid media that was proposed by Lafarge and Nemati [Wave Motion 50, 1016 (2013), 10.1016/j.wavemoti.2013.04.007]. This scheme arises from a deep analogy with electromagnetism and a set of physics-based postulates including, particularly, the action-response procedures, whereby the effective density and bulk modulus are determined. Here, we revisit this approach, and clarify further its founding physical principles through presenting it in a unified formulation together with the two-scale asymptotic homogenization theory that is interpreted as the local limit. Strong evidence is provided to show that the validity of the principles and postulates within the nonlocal theory extends to high-frequency bands, well beyond the long-wavelength regime. In particular, we demonstrate that up to the third Brillouin zone including the Bragg scattering, the complex and dispersive phase velocity of the least-attenuated wave in the phononic crystal which is generated by our nonlocal scheme agrees exactly with that reproduced by a direct approach based on the Bloch theorem and multiple scattering method. In high frequencies, the effective wave and its associated parameters are analyzed by treating the phononic crystal as a random medium.

  10. Hybrid Numerical-Analytical Scheme for Calculating Elastic Wave Diffraction in Locally Inhomogeneous Waveguides

    NASA Astrophysics Data System (ADS)

    Glushkov, E. V.; Glushkova, N. V.; Evdokimov, A. A.

    2018-01-01

    Numerical simulation of traveling wave excitation, propagation, and diffraction in structures with local inhomogeneities (obstacles) is computationally expensive due to the need for mesh-based approximation of extended domains with the rigorous account for the radiation conditions at infinity. Therefore, hybrid numerical-analytic approaches are being developed based on the conjugation of a numerical solution in a local vicinity of the obstacle and/or source with an explicit analytic representation in the remaining semi-infinite external domain. However, in standard finite-element software, such a coupling with the external field, moreover, in the case of multimode expansion, is generally not provided. This work proposes a hybrid computational scheme that allows realization of such a conjugation using a standard software. The latter is used to construct a set of numerical solutions used as the basis for the sought solution in the local internal domain. The unknown expansion coefficients on this basis and on normal modes in the semi-infinite external domain are then determined from the conditions of displacement and stress continuity at the boundary between the two domains. We describe the implementation of this approach in the scalar and vector cases. To evaluate the reliability of the results and the efficiency of the algorithm, we compare it with a semianalytic solution to the problem of traveling wave diffraction by a horizontal obstacle, as well as with a finite-element solution obtained for a limited domain artificially restricted using absorbing boundaries. As an example, we consider the incidence of a fundamental antisymmetric Lamb wave onto surface and partially submerged elastic obstacles. It is noted that the proposed hybrid scheme can also be used to determine the eigenfrequencies and eigenforms of resonance scattering, as well as the characteristics of traveling waves in embedded waveguides.

  11. Objects Architecture: A Comprehensive Design Approach for Real-Time, Distributed, Fault-Tolerant, Reactive Operating Systems.

    DTIC Science & Technology

    1987-09-01

    real-time operating system should be efficient from the real-time point...5,8]) system naming scheme. 3.2 Protecting Objects Real-time embedded systems usually neglect protection mechanisms. However, a real-time operating system cannot...allocation mechanism should adhere to application constraints. This strong relationship between a real-time operating system and the application

  12. Using Structured Chemistry Examinations (SCHemEs) as an Assessment Method to Improve Undergraduate Students' Generic, Practical, and Laboratory-Based Skills

    ERIC Educational Resources Information Center

    Kirton, Stewart B.; Al-Ahmad, Abdullah; Fergus, Suzanne

    2014-01-01

    Increase in tuition fees means there will be renewed pressure on universities to provide "value for money" courses that provide extensive training in both subject-specific and generic skills. For graduates of chemistry this includes embedding the generic, practical, and laboratory-based skills associated with industrial research as an…

  13. Covert Half Duplex Data Link Using Radar-Embedded Communications With Various Modulation Schemes

    DTIC Science & Technology

    2017-12-01

    [List-of-figures excerpt from the report: Figure 4.1, comparison of theoretical, radar-pulse-only matched filter, and radar-communications matched-filtered PD curves versus SNR for RCR = 0 dB; Figure 4.2, the same comparison for RCR = [3, 6, 10] dB.]

  14. Future Horizons: Moral Learning and the Socially Embedded Synaptic Self

    ERIC Educational Resources Information Center

    Sankey, Derek

    2011-01-01

    During the 40-year time-span of the "JME", four leading meta-narratives concerned with who we are and our place in the natural scheme of things have increasingly run up against their own inherent limitations; even as the planet is being pushed beyond sustainability. Indeed, we seem to be on the verge of another "Copernican revolution" that will…

  15. Generation of large scale GHZ states with the interactions of photons and quantum-dot spins

    NASA Astrophysics Data System (ADS)

    Miao, Chun; Fang, Shu-Dong; Dong, Ping; Yang, Ming; Cao, Zhuo-Liang

    2018-03-01

    We present a deterministic scheme for generating large-scale GHZ states in a cavity-quantum-dot system. A singly charged quantum dot is embedded in a double-sided optical microcavity with partially reflective top and bottom mirrors. The GHZ-type Bell spin state can be created, and two n-spin GHZ states can be perfectly fused into a 2n-spin GHZ state with the help of n ancilla single-photon pulses. The implementation of the current scheme depends only on photon detection and does not require multi-qubit gates or multi-qubit measurements. Discussions of the effects of cavity loss, side leakage and exciton-cavity coupling strength on the fidelity of the generated states show that the fidelity can remain sufficiently high by controlling the system parameters. The current scheme is therefore simple and experimentally feasible.

  16. Quantum watermarking scheme through Arnold scrambling and LSB steganography

    NASA Astrophysics Data System (ADS)

    Zhou, Ri-Gui; Hu, Wenwen; Fan, Ping

    2017-09-01

    Based on the NEQR of quantum images, a new quantum gray-scale image watermarking scheme is proposed through Arnold scrambling and least significant bit (LSB) steganography. The sizes of the carrier image and the watermark image are assumed to be 2n × 2n and n × n, respectively. Firstly, a classical n × n sized watermark image with 8-bit gray scale is expanded to a 2n × 2n sized image with 2-bit gray scale. Secondly, through the module of PA-MOD N, the expanded watermark image is scrambled to a meaningless image by the Arnold transform. Then, the expanded scrambled image is embedded into the carrier image by the steganography method of LSB. Finally, the time complexity analysis is given. The simulation experiment results show that our quantum circuit has lower time complexity, and the proposed watermarking scheme is superior to others.
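
    A plain-image (non-quantum) sketch of the Arnold scrambling step, assuming a square N x N watermark array; the quantum-circuit realization on NEQR states described in the paper is not reproduced here.

```python
import numpy as np

def arnold_scramble(img, iterations):
    """Arnold cat map on an N x N image: (x, y) -> (x + y, x + 2y) mod N.
    Repeated application scrambles the watermark; the map is periodic, so
    applying further rounds up to the period restores the original."""
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out
```

    The scrambled 2-bit-plane image would then be written into the carrier's least-significant-bit plane, analogously to the classical LSB sketch given earlier in this listing.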

  17. Direct measurement of nonlocal entanglement of two-qubit spin quantum states.

    PubMed

    Cheng, Liu-Yong; Yang, Guo-Hui; Guo, Qi; Wang, Hong-Fu; Zhang, Shou

    2016-01-18

    We propose efficient schemes of direct concurrence measurement for two-qubit spin and photon-polarization entangled states via the interaction between single-photon pulses and nitrogen-vacancy (NV) centers in diamond embedded in optical microcavities. For different entangled-state types, diversified quantum devices and operations are designed accordingly. The initial unknown entangled states are possessed by two spatially separated participants, and nonlocal spin (polarization) entanglement can be measured with the aid of detection probabilities of photon (NV center) states. This non-demolition entanglement measurement manner makes initial entangled particle-pair avoid complete annihilation but evolve into corresponding maximally entangled states. Moreover, joint inter-qubit operation or global qubit readout is not required for the presented schemes and the final analyses inform favorable performance under the current parameters conditions in laboratory. The unique advantages of spin qubits assure our schemes wide potential applications in spin-based solid quantum information and computation.

  18. Generalized watermarking attack based on watermark estimation and perceptual remodulation

    NASA Astrophysics Data System (ADS)

    Voloshynovskiy, Sviatoslav V.; Pereira, Shelby; Herrigel, Alexander; Baumgartner, Nazanin; Pun, Thierry

    2000-05-01

    Digital image watermarking has become a popular technique for authentication and copyright protection. For verifying the security and robustness of watermarking algorithms, specific attacks have to be applied to test them. In contrast to the known Stirmark attack, which degrades the quality of the image while destroying the watermark, this paper presents a new approach which is based on the estimation of a watermark and the exploitation of the properties of Human Visual System (HVS). The new attack satisfies two important requirements. First, image quality after the attack as perceived by the HVS is not worse than the quality of the stego image. Secondly, the attack uses all available prior information about the watermark and cover image statistics to perform the best watermark removal or damage. The proposed attack is based on a stochastic formulation of the watermark removal problem, considering the embedded watermark as additive noise with some probability distribution. The attack scheme consists of two main stages: (1) watermark estimation and partial removal by a filtering based on a Maximum a Posteriori (MAP) approach; (2) watermark alteration and hiding through addition of noise to the filtered image, taking into account the statistics of the embedded watermark and exploiting HVS characteristics. Experiments on a number of real world and computer generated images show the high efficiency of the proposed attack against known academic and commercial methods: the watermark is completely destroyed in all tested images without altering the image quality. The approach can be used against watermark embedding schemes that operate either in coordinate domain, or transform domains like Fourier, DCT or wavelet.
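
    The two-stage structure of the attack can be caricatured in a few lines; this is a crude sketch rather than the authors' MAP estimator or HVS model: a Gaussian denoising filter stands in for the MAP watermark estimate, and a local-activity mask stands in for the perceptual remodulation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def estimation_remodulation_attack(stego, sigma=1.5, strength=1.0, seed=0):
    """(1) Estimate the additive watermark as stego minus a denoised
    version and subtract it; (2) re-add noise shaped by local activity,
    so the added distortion hides where the HVS is least sensitive."""
    stego = stego.astype(float)
    denoised = gaussian_filter(stego, sigma)
    w_est = stego - denoised                 # watermark + texture estimate
    attacked = stego - strength * w_est      # partial watermark removal
    # hide extra noise where the image is busy (high local variance)
    local_var = gaussian_filter(stego ** 2, sigma) - denoised ** 2
    mask = np.sqrt(np.clip(local_var, 0, None))
    rng = np.random.default_rng(seed)
    attacked += 0.1 * mask * rng.standard_normal(stego.shape)
    return np.clip(attacked, 0, 255).astype(np.uint8)
```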

  19. The Weighting Is The Hardest Part: On The Behavior of the Likelihood Ratio Test and the Score Test Under a Data-Driven Weighting Scheme in Sequenced Samples

    PubMed Central

    Minică, Camelia C.; Genovese, Giulio; Hultman, Christina M.; Pool, René; Vink, Jacqueline M.; Neale, Michael C.; Dolan, Conor V.; Neale, Benjamin M.

    2017-01-01

    Sequence-based association studies are at a critical inflexion point with the increasing availability of exome-sequencing data. A popular test of association is the sequence kernel association test (SKAT). Weights are embedded within SKAT to reflect the hypothesized contribution of the variants to the trait variance. Because the true weights are generally unknown, and so are subject to misspecification, we examined the efficiency of a data-driven weighting scheme. We propose the use of a set of theoretically defensible weighting schemes, of which, we assume, the one that gives the largest test statistic is likely to capture best the allele frequency-functional effect relationship. We show that the use of alternative weights obviates the need to impose arbitrary frequency thresholds in sequence data association analyses. As both the score test and the likelihood ratio test (LRT) may be used in this context, and may differ in power, we characterize the behavior of both tests. We found that the two tests have equal power if the set of weights resembled the correct ones. However, if the weights are badly specified, the LRT shows superior power (due to its robustness to misspecification). With this data-driven weighting procedure the LRT detected significant signal in genes located in regions already confirmed as associated with schizophrenia – the PRRC2A (P=1.020E-06) and the VARS2 (P=2.383E-06) – in the Swedish schizophrenia case-control cohort of 11,040 individuals with exome-sequencing data. The score test is currently preferred for its computational efficiency and power. Indeed, assuming correct specification, in some circumstances the score test is the most powerful. However, LRT has the advantageous properties of being generally more robust and more powerful under weight misspecification. This is an important result given that, arguably, misspecified models are likely to be the rule rather than the exception in weighting-based approaches. PMID:28238293
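
    A toy sketch of the data-driven weighting idea (not the authors' code): a SKAT-style statistic is computed under a few Beta(MAF; a, b) weighting schemes, of which Beta(1, 25) is the usual SKAT default, and the largest value is kept; assessing its significance must then account for the maximization (e.g. by permutation), which is where the score-test/LRT comparison in the paper enters.

```python
import numpy as np
from scipy.stats import beta

def max_weighted_Q(G, resid, weight_params=((1, 25), (1, 1), (0.5, 0.5))):
    """G: genotype matrix (individuals x variants, coded 0/1/2);
    resid: phenotype residuals under the null model. Returns the largest
    variance-component statistic Q = r' G W^2 G' r over the weight set."""
    maf = np.clip(G.mean(axis=0) / 2.0, 1e-6, 1 - 1e-6)  # crude MAF estimate
    z = G.T @ resid                                      # per-variant scores
    best = -np.inf
    for a, b in weight_params:
        w = beta.pdf(maf, a, b)                          # per-variant weights
        best = max(best, float(np.sum((w * z) ** 2)))
    return best
```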

  20. A Secret 3D Model Sharing Scheme with Reversible Data Hiding Based on Space Subdivision

    NASA Astrophysics Data System (ADS)

    Tsai, Yuan-Yu

    2016-03-01

    Secret sharing is a highly relevant research field, and its application to 2D images has been thoroughly studied. However, secret sharing schemes have not kept pace with the advances of 3D models. With the rapid development of 3D multimedia techniques, extending the application of secret sharing schemes to 3D models has become necessary. In this study, an innovative secret 3D model sharing scheme for point geometries based on space subdivision is proposed. Each point in the secret point geometry is first encoded into a series of integer values that fall within [0, p - 1], where p is a predefined prime number. The share values are derived by substituting the specified integer values for all coefficients of the sharing polynomial. The surface reconstruction and the sampling concepts are then integrated to derive a cover model with sufficient model complexity for each participant. Finally, each participant has a separate 3D stego model with embedded share values. Experimental results show that the proposed technique supports reversible data hiding and the share values have higher levels of privacy and improved robustness. This technique is simple and has proven to be a feasible secret 3D model sharing scheme.
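
    A minimal sketch of the sharing arithmetic described above, under the stated assumptions that each point has already been encoded into integers in [0, p-1] and that these integers are used directly as the coefficients of the sharing polynomial; recovering the point then requires as many shares as coefficients and amounts to solving a Vandermonde system over GF(p). The cover-model construction and the reversible data hiding steps are not reproduced here.

```python
def share_point(coeffs, n, p):
    """Evaluate the polynomial whose coefficients are the encoded point
    values at x = 1..n, giving one share value per participant."""
    return [(x, sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p)
            for x in range(1, n + 1)]

def recover_coeffs(shares, p):
    """Recover the encoded point from deg+1 shares by Gauss-Jordan
    elimination on the Vandermonde system over GF(p) (p prime)."""
    k = len(shares)
    A = [[pow(x, i, p) for i in range(k)] + [y % p] for x, y in shares]
    for col in range(k):
        piv = next(r for r in range(col, k) if A[r][col])
        A[col], A[piv] = A[piv], A[col]
        inv = pow(A[col][col], -1, p)
        A[col] = [a * inv % p for a in A[col]]
        for r in range(k):
            if r != col and A[r][col]:
                factor = A[r][col]
                A[r] = [(a - factor * b) % p for a, b in zip(A[r], A[col])]
    return [row[k] for row in A[:k]]

# e.g. one encoded point (12, 7, 33) shared among 5 participants, p = 251
shares = share_point([12, 7, 33], n=5, p=251)
assert recover_coeffs(shares[:3], p=251) == [12, 7, 33]
```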

  1. Privacy-Aware Image Encryption Based on Logistic Map and Data Hiding

    NASA Astrophysics Data System (ADS)

    Sun, Jianglin; Liao, Xiaofeng; Chen, Xin; Guo, Shangwei

    The increasing need for image communication and storage has created a great necessity for securely transforming and storing images over a network. Whereas traditional image encryption algorithms usually consider the security of the whole plain image, region of interest (ROI) encryption schemes, which are of great importance in practical applications, protect the privacy regions of plain images. Existing ROI encryption schemes usually adopt approximate techniques to detect the privacy region and measure the quality of encrypted images; however, their performance is usually inconsistent with a human visual system (HVS) and is sensitive to statistical attacks. In this paper, we propose a novel privacy-aware ROI image encryption (PRIE) scheme based on logistic mapping and data hiding. The proposed scheme utilizes salient object detection to automatically, adaptively and accurately detect the privacy region of a given plain image. After private pixels have been encrypted using chaotic cryptography, the significant bits are embedded into the nonprivacy region of the plain image using data hiding. Extensive experiments are conducted to illustrate the consistency between our automatic ROI detection and HVS. Our experimental results also demonstrate that the proposed scheme exhibits satisfactory security performance.
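
    A stripped-down sketch of the encryption step, assuming the privacy region has already been detected and is supplied as a boolean mask; the salient-object detection and the data hiding of the significant bits are not reproduced, and the keystream parameters x0 and r play the role of the secret key.

```python
import numpy as np

def logistic_keystream(length, x0=0.3141, r=3.99, burn_in=1000):
    """Byte keystream from the logistic map x_{k+1} = r * x_k * (1 - x_k)."""
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = r * x * (1 - x)
    out = np.empty(length, dtype=np.uint8)
    for i in range(length):
        x = r * x * (1 - x)
        out[i] = int(x * 256) % 256
    return out

def encrypt_roi(img, mask, key=(0.3141, 3.99)):
    """XOR the pixels flagged by the (detected) privacy mask with the
    chaotic keystream; the untouched background remains available for
    the data-hiding step of the scheme."""
    out = img.copy()
    roi = out[mask]
    ks = logistic_keystream(roi.size, *key)
    out[mask] = roi ^ ks
    return out
```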

  2. Synoptic reporting in tumor pathology: advantages of a web-based system.

    PubMed

    Qu, Zhenhong; Ninan, Shibu; Almosa, Ahmed; Chang, K G; Kuruvilla, Supriya; Nguyen, Nghia

    2007-06-01

    The American College of Surgeons Commission on Cancer (ACS-CoC) mandates that pathology reports at ACS-CoC-approved cancer programs include all scientifically validated data elements for each site and tumor specimen. The College of American Pathologists (CAP) has produced cancer checklists in static text formats to assist reporting. To be inclusive, the CAP checklists are pages long, requiring extensive text editing and multiple intermediate steps. We created a set of dynamic tumor-reporting templates, using Microsoft Active Server Page (ASP.NET), with drop-down list and data-compile features, and added a reminder function to indicate missing information. Users can access this system on the Internet, prepare the tumor report by selecting relevant data from drop-down lists with an embedded tumor staging scheme, and directly transfer the final report into a laboratory information system by using the copy-and-paste function. By minimizing extensive text editing and eliminating intermediate steps, this system can reduce reporting errors, improve work efficiency, and increase compliance.

  3. Brain-computer interface using P300 and virtual reality: a gaming approach for treating ADHD.

    PubMed

    Rohani, Darius Adam; Sorensen, Helge B D; Puthusserypady, Sadasivan

    2014-01-01

    This paper presents a novel brain-computer interface (BCI) system aiming at the rehabilitation of attention-deficit/hyperactivity disorder in children. It uses the P300 potential in a series of feedback games to improve the subjects' attention. We applied a support vector machine (SVM) using temporal and template-based features to detect these P300 responses. In an experimental setup using five subjects, an average error below 30% was achieved. To make it more challenging, the BCI system has been embedded inside an immersive 3D virtual reality (VR) classroom with simulated distractions, which was created by combining a low-cost infrared camera and an "off-axis perspective projection" algorithm. The system is designed to be child-friendly, operating with only four electrodes and a non-intrusive VR setting. With these promising results, and considering the simplicity of the scheme, we hope to encourage future studies to adapt the techniques presented in this study.
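
    A small sketch of the detection stage under the assumptions stated in the comments (the authors' exact temporal and template features are not specified here): each epoch is flattened into temporal features, an optional correlation with a target-average template is appended, and a scikit-learn SVM is trained on the resulting vectors.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def p300_classifier(epochs, labels, template=None):
    """epochs: array (n_epochs, n_channels, n_samples); labels: 0/1 per
    epoch (non-target/target). `template`, if given, must have the same
    shape as one epoch (e.g. the average of known target epochs)."""
    X = epochs.reshape(len(epochs), -1)          # temporal features
    if template is not None:
        t = template.ravel()
        corr = [np.corrcoef(x, t)[0, 1] for x in X]  # template feature
        X = np.column_stack([X, corr])
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
    return clf.fit(X, labels)
```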

  4. The Persuasive Effect of Social Network Feedback on Mediated Communication: A Case Study in a Real Organization.

    PubMed

    Varotto, Alessandra; Gamberini, Luciano; Spagnolli, Anna; Martino, Francesco; Giovannardi, Isabella

    2016-03-01

    This study focuses on social feedback, namely on information on the outcome of users' online activity indirectly generated by other users, and investigates in a real setting whether it can affect subsequent activity and, if so, whether participants are aware of that. SkyPas, an application that calculates, transmits, and displays social feedback, was embedded in a common instant messaging service (Skype(™)) and used during a 7-week trial by 24 office workers at a large business organization. The trial followed an ABA scheme in which the B phase was the feedback provision phase. Results show that social feedback affects users' communication activity (participation, inward communication, outward communication, and reciprocity), sometimes even after the feedback provision phase. At the same time, users were poorly aware of this effect, showing a discrepancy between self-reported and observational measures. These results are then discussed in terms of design transparency and task compatibility.

  5. T and D-Bench--Innovative Combined Support for Education and Research in Computer Architecture and Embedded Systems

    ERIC Educational Resources Information Center

    Soares, S. N.; Wagner, F. R.

    2011-01-01

    Teaching and Design Workbench (T&D-Bench) is a framework aimed at education and research in the areas of computer architecture and embedded systems. It includes a set of features not found in other educational environments. This set of features is the result of an original combination of design requirements for T&D-Bench: that the…

  6. A parallel finite-difference method for computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Swisshelm, Julie M.

    1989-01-01

    A finite-difference scheme for solving complex three-dimensional aerodynamic flow on parallel-processing supercomputers is presented. The method consists of a basic flow solver with multigrid convergence acceleration, embedded grid refinements, and a zonal equation scheme. Multitasking and vectorization have been incorporated into the algorithm. Results obtained include multiprocessed flow simulations from the Cray X-MP and Cray-2. Speedups as high as 3.3 for the two-dimensional case and 3.5 for segments of the three-dimensional case have been achieved on the Cray-2. The entire solver attained a factor of 2.7 improvement over its unitasked version on the Cray-2. The performance of the parallel algorithm on each machine is analyzed.

  7. Polarization-dependent photon switch in a one-dimensional coupled-resonator waveguide.

    PubMed

    Zhang, Zhe-Yong; Dong, Yu-Li; Zhang, Sheng-Li; Zhu, Shi-Qun

    2013-09-09

    A polarization-dependent photon switch is one of the most important ingredients in building future large-scale all-optical quantum networks. We present a scheme for a single-photon switch in a one-dimensional coupled-resonator waveguide, where N_a Λ-type three-level atoms are individually embedded in the resonators. By tuning the interaction between atom and field, we show that an initially incident photon with a certain polarization can be transformed into its orthogonal polarization state. Finally, we use the fidelity as a figure of merit and numerically evaluate the performance of our photon switch scheme for a variety of system parameters, such as the number of atoms, the energy detuning and the dipole couplings.

  8. Embedding operational research into national disease control programme: lessons from 10 years of experience in Indonesia

    PubMed Central

    Mahendradhata, Yodi; Probandari, Ari; Widjanarko, Bagoes; Riono, Pandu; Mustikawati, Dyah; Tiemersma, Edine W.; Alisjahbana, Bachti

    2014-01-01

    There is growing recognition that operational research (OR) should be embedded into national disease control programmes. However, much of the current OR capacity building schemes are still predominantly driven by international agencies with limited integration into national disease control programmes. We demonstrated that it is possible to achieve a more sustainable capacity building effort across the country by establishing an OR group within the national tuberculosis (TB) control programme in Indonesia. Key challenges identified include long-term financial support, limited number of scientific publications, and difficulties in documenting impact on programmatic performance. External evaluation has expressed concerns in regard to utilisation of OR in policy making. Efforts to address this concern have been introduced recently and led to indications of increased utilisation of research evidence in policy making by the national TB control programme. Embedding OR in national disease control programmes is key in establishing an evidence-based disease control programme. PMID:25361728

  9. Configurations of leadership practices in hospital units.

    PubMed

    Meier, Ninna

    2015-01-01

    The purpose of this paper is to explore how leadership is practiced across four different hospital units. The study is a comparative case study of four hospital units, based on detailed observations of the everyday work practices and interactions, and on interviews with ten interdisciplinary clinical managers. Comparing leadership as configurations of practices across four different clinical settings, the author shows how flexible and often shared leadership practices were embedded in and central to the core clinical work in all units studied here, especially in more unpredictable work settings. Practices of symbolic work and emotional support to staff were particularly important when patients were severely ill. Because the study was conducted with qualitative methods, these results cannot be expected to apply in all clinical settings. Future research is invited to extend the findings presented here by exploring leadership practices from a micro-level perspective in additional health care contexts, particularly the embedded and emergent nature of such practices. This paper shows leadership practices to be primarily embedded in the clinical work and often shared across organizational or professional boundaries. This paper demonstrates how leadership practices are embedded in the everyday work in hospital units. Moreover, the analysis shows how configurations of leadership practices varied in four different clinical settings, thus contributing contextual accounts of leadership as practice and suggesting "configurations of practice" as a way to carve out similarities and differences in leadership practices across settings.

  10. Image-adaptive and robust digital wavelet-domain watermarking for images

    NASA Astrophysics Data System (ADS)

    Zhao, Yi; Zhang, Liping

    2018-03-01

    We propose a new frequency-domain, wavelet-based watermarking technique. The key idea of our scheme is twofold: a multi-tier representation of the image and odd-even quantization for embedding and extracting the watermark. Because many complementary watermarks need to be hidden, the designed watermark image is image-adaptive. The meaningful and complementary watermark images were embedded into the original (host) image by odd-even quantization of coefficients selected from the detail wavelet coefficients of the original image whose magnitudes are larger than their corresponding Just Noticeable Difference (JND) thresholds. The tests show good robustness against the best-known attacks such as noise addition, image compression, median filtering, and clipping, as well as geometric transforms. Further research may improve the performance by refining the JND thresholds.
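
    The odd-even quantization rule itself is compact; the sketch below (not the authors' code) uses a fixed step `delta` as a stand-in for the per-coefficient JND threshold and would be applied only to detail wavelet coefficients whose magnitude exceeds that threshold.

```python
import numpy as np

def embed_bit_odd_even(coeff, bit, delta):
    """Quantize the coefficient to a multiple of delta whose quantization
    index parity encodes the watermark bit (0 -> even, 1 -> odd)."""
    q = int(np.round(coeff / delta))
    if q % 2 != bit:
        q += 1 if coeff / delta >= q else -1   # move to the nearest index of the right parity
    return q * delta

def extract_bit_odd_even(coeff, delta):
    """Read the bit back as the parity of the nearest quantization index."""
    return int(np.round(coeff / delta)) % 2
```

    In practice the coefficients would come from a wavelet decomposition of the host image (for example pywt.dwt2), and the inverse transform of the modified coefficients yields the watermarked image.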

  11. Electromagnetic diode based on photonic crystal cavity with embedded highly dispersive meta-interface

    NASA Astrophysics Data System (ADS)

    Chen, Yongqiang; Dong, Lijuan; Xu, Xiaohu; Jiang, Jun; Shi, Yunlong

    2017-12-01

    In this paper, we propose a scheme for subwavelength electromagnetic diodes employing a photonic crystal (PC) cavity with an embedded electromagnetically-induced-transparency (EIT)-like highly dispersive meta-interface. A nonreciprocal response, with 21.5 dB transmission light contrast and 12.3 dBm working power, is conceptually demonstrated in a microstrip transmission-line system with asymmetric absorption and a nonlinear medium inclusion. Such high-contrast transmission and relatively low-threshold diode action stem from the composite PC-EIT mechanism. This mechanism not only possesses a large quality factor and strong localization of fields but also avoids enlarging the device volume or drastically reducing the transmittance. Our findings should be beneficial for the design of new and practical metamaterial-enabled nonlinear devices.

  12. Embedding Learning How to Learn in School Policy: The Challenge for Leadership

    ERIC Educational Resources Information Center

    Swaffield, Sue; MacBeath, John

    2006-01-01

    Achieving lasting and deep-seated change in schools through the embedding of a new set of practices with associated values is a familiar goal. This paper draws upon interviews with school coordinators and head teachers participating in the Learning How to Learn Project to explore the nature of embedding and the related challenges for leadership.…

  13. Renormalization scheme dependence of high-order perturbative QCD predictions

    NASA Astrophysics Data System (ADS)

    Ma, Yang; Wu, Xing-Gang

    2018-02-01

    Conventionally, one adopts the typical momentum flow of a physical observable as the renormalization scale for its perturbative QCD (pQCD) approximant. This simple treatment leads to renormalization scheme-and-scale ambiguities, because the renormalization scheme and scale dependence of the strong coupling and of the perturbative coefficients does not exactly cancel at any fixed order. It is believed that those ambiguities will be softened by including more higher-order terms. In this paper, to show how the renormalization scheme dependence changes when more loop terms are included, we discuss the sensitivity of the pQCD predictions to the scheme parameters by using the scheme-dependent {β_{m≥2}}-terms. We adopt two four-loop examples, e+e- → hadrons and τ decays into hadrons, for detailed analysis. Our results show that under conventional scale setting, including more and more loop terms does not reduce the scheme dependence of the pQCD prediction as efficiently as it reduces the scale dependence. Thus a proper scale-setting approach is important for reducing the scheme dependence. We observe that the principle of minimum sensitivity could be such a scale-setting approach, which provides a practical way to achieve an optimal scheme and scale by requiring that the pQCD approximant be independent of the "unphysical" theoretical conventions.

  14. Statistical evaluation of synchronous spike patterns extracted by frequent item set mining

    PubMed Central

    Torre, Emiliano; Picado-Muiño, David; Denker, Michael; Borgelt, Christian; Grün, Sonja

    2013-01-01

    We recently proposed frequent itemset mining (FIM) as a method to perform an optimized search for patterns of synchronous spikes (item sets) in massively parallel spike trains. This search outputs the occurrence count (support) of individual patterns that are not trivially explained by the counts of any superset (closed frequent item sets). The number of patterns found by FIM makes direct statistical tests infeasible due to severe multiple testing. To overcome this issue, we proposed to test the significance not of individual patterns, but instead of their signatures, defined as the pairs of pattern size z and support c. Here, we derive in detail a statistical test for the significance of the signatures under the null hypothesis of full independence (pattern spectrum filtering, PSF) by means of surrogate data. As a result, injected spike patterns that mimic assembly activity are well detected, yielding a low false negative rate. However, this approach is prone to additionally classify patterns resulting from chance overlap of real assembly activity and background spiking as significant. These patterns represent false positives with respect to the null hypothesis of having one assembly of given signature embedded in otherwise independent spiking activity. We propose the additional method of pattern set reduction (PSR) to remove these false positives by conditional filtering. By employing stochastic simulations of parallel spike trains with correlated activity in form of injected spike synchrony in subsets of the neurons, we demonstrate for a range of parameter settings that the analysis scheme composed of FIM, PSF and PSR allows to reliably detect active assemblies in massively parallel spike trains. PMID:24167487
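
    A brute-force sketch of the support counting that FIM performs efficiently; it has no closed-set or pattern-spectrum machinery and only conveys the notion of a (size, support) signature, assuming the spike trains have already been binned into sets of co-active neuron ids.

```python
from itertools import combinations
from collections import Counter

def pattern_signatures(binned_spikes, max_size=4, min_support=2):
    """Count, in each time bin, every subset (item set) of co-active
    neurons up to `max_size`, and return the (size z, support c)
    signatures reaching min_support. `binned_spikes` is a list of sets
    of neuron ids that fired in the same bin."""
    support = Counter()
    for active in binned_spikes:
        for z in range(2, max_size + 1):
            for pattern in combinations(sorted(active), z):
                support[pattern] += 1
    return {(len(p), c) for p, c in support.items() if c >= min_support}
```

    In the analysis scheme described above, these signatures, rather than the individual patterns, are the objects whose significance is then assessed against surrogate data (PSF) and conditionally filtered (PSR).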

  15. Subject-specific and pose-oriented facial features for face recognition across poses.

    PubMed

    Lee, Ping-Han; Hsu, Gee-Sern; Wang, Yun-Wen; Hung, Yi-Ping

    2012-10-01

    Most face recognition scenarios assume that frontal faces or mug shots are available for enrollment to the database, while faces of other poses are collected in the probe set. Given a face from the probe set, one needs to determine whether a match in the database exists. This is under the assumption that in forensic applications, most suspects have their mug shots available in the database, and face recognition aims at recognizing the suspects when their faces of various poses are captured by a surveillance camera. This paper considers a different scenario: given a face with multiple poses available, which may or may not include a mug shot, develop a method to recognize the face with poses different from those captured. That is, given two disjoint sets of poses of a face, one for enrollment and the other for recognition, this paper reports a method best for handling such cases. The proposed method includes feature extraction and classification. For feature extraction, we first cluster the poses of each subject's face in the enrollment set into a few pose classes and then decompose the appearance of the face in each pose class using an Embedded Hidden Markov Model, which allows us to define a set of subject-specific and pose-oriented (SSPO) facial components for each subject. For classification, an Adaboost weighting scheme is used to fuse the component classifiers with SSPO component features. The proposed method is proven to outperform other approaches, including a component-based classifier with local facial features cropped manually, in an extensive performance evaluation study.

  16. A semi-implicit level set method for multiphase flows and fluid-structure interaction problems

    NASA Astrophysics Data System (ADS)

    Cottet, Georges-Henri; Maitre, Emmanuel

    2016-06-01

    In this paper we present a novel semi-implicit time-discretization of the level set method introduced in [8] for fluid-structure interaction problems. The idea stems from a linear stability analysis derived on a simplified one-dimensional problem. The semi-implicit scheme relies on a simple filter operating as a pre-processing on the level set function. It applies to multiphase flows driven by surface tension as well as to fluid-structure interaction problems. The semi-implicit scheme avoids the stability constraints that explicit schemes need to satisfy and significantly reduces the computational cost. It is validated through comparisons with the original explicit scheme and refinement studies on two-dimensional benchmarks.

  17. Rhetoric and Reality: Using ICT To Enhance Pupil Learning--Harry Potter and the Warley Woods Mystery--Case Study 2.

    ERIC Educational Resources Information Center

    Nichol, Jon; Watson, Kate; Waites, Graham

    2003-01-01

    This case study of grade 7, 11-12 year old students in the United Kingdom embedded ICT (information and communication technology) within an existing scheme of work to develop skills and processes involved in historical investigation, using characters from the Harry Potter books. The study also examined teachers' teaching styles, beliefs, and values.…

  18. Quantum-like model of unconscious–conscious dynamics

    PubMed Central

    Khrennikov, Andrei

    2015-01-01

    We present a quantum-like model of sensation–perception dynamics (originated in Helmholtz theory of unconscious inference) based on the theory of quantum apparatuses and instruments. We illustrate our approach with the model of bistable perception of a particular ambiguous figure, the Schröder stair. This is a concrete model for unconscious and conscious processing of information and their interaction. The starting point of our quantum-like journey was the observation that perception dynamics is essentially contextual which implies impossibility of (straightforward) embedding of experimental statistical data in the classical (Kolmogorov, 1933) framework of probability theory. This motivates application of nonclassical probabilistic schemes. And the quantum formalism provides a variety of the well-approved and mathematically elegant probabilistic schemes to handle results of measurements. The theory of quantum apparatuses and instruments is the most general quantum scheme describing measurements and it is natural to explore it to model the sensation–perception dynamics. In particular, this theory provides the scheme of indirect quantum measurements which we apply to model unconscious inference leading to transition from sensations to perceptions. PMID:26283979

  19. Energy efficiency of task allocation for embedded JPEG systems.

    PubMed

    Fan, Yang-Hsin; Wu, Jan-Ou; Wang, San-Fu

    2014-01-01

    Embedded systems work everywhere, repeatedly performing a few particular functionalities. Well-known products include consumer electronics, smart home applications, telematics devices, and so forth. Recently, embedded-system development methodology has been applied to the design of cloud embedded systems, making the applications of embedded systems more diverse. However, the more an embedded system works, the more energy it consumes. This study applies hyperrectangle technology (HT) to embedded systems to obtain energy savings. HT adopts a drift effect to construct embedded systems with more hardware circuits than software components, or vice versa. It can quickly construct an embedded system from a set of hardware circuits and software components, and it allows fast exploration of the energy consumption of various embedded systems. The effects are presented by assessing a JPEG benchmark. Experimental results demonstrate that HT achieves average energy savings of 29.84%, 2.07%, and 68.80% relative to GA, GHO, and Lin, respectively.

  20. Energy Efficiency of Task Allocation for Embedded JPEG Systems

    PubMed Central

    2014-01-01

    Embedded systems work everywhere, repeatedly performing a few particular functionalities. Well-known products include consumer electronics, smart home applications, telematics devices, and so forth. Recently, embedded-system development methodology has been applied to the design of cloud embedded systems, making the applications of embedded systems more diverse. However, the more an embedded system works, the more energy it consumes. This study applies hyperrectangle technology (HT) to embedded systems to obtain energy savings. HT adopts a drift effect to construct embedded systems with more hardware circuits than software components, or vice versa. It can quickly construct an embedded system from a set of hardware circuits and software components, and it allows fast exploration of the energy consumption of various embedded systems. The effects are presented by assessing a JPEG benchmark. Experimental results demonstrate that HT achieves average energy savings of 29.84%, 2.07%, and 68.80% relative to GA, GHO, and Lin, respectively. PMID:24982983

  1. A family of four stages embedded explicit six-step methods with eliminated phase-lag and its derivatives for the numerical solution of the second order problems

    NASA Astrophysics Data System (ADS)

    Simos, T. E.

    2017-11-01

    A family of four-stage, high algebraic order, embedded explicit six-step methods for the numerical solution of second order initial- or boundary-value problems with periodic and/or oscillating solutions is studied in this paper. The free parameters of the newly proposed methods are calculated by solving the linear system of equations produced by requiring the vanishing of the phase-lag of the methods and of the phase-lag's derivatives. For the new methods we investigate: (i) the local truncation error (LTE) of the methods; (ii) the asymptotic form of the LTE, obtained using the radial Schrödinger equation as a model problem; (iii) the comparison of the asymptotic forms of the LTEs for several methods of the same family, which leads to conclusions on the efficiency of each method; (iv) the stability and the interval of periodicity of the obtained methods of the new family of embedded finite difference pairs; and (v) the application of the new family of embedded finite difference pairs to the numerical solution of several second order problems, such as the radial Schrödinger equation and astronomical problems. These applications lead to conclusions on the efficiency of the methods of the new family.

  2. Numerical Schemes for the Hamilton-Jacobi and Level Set Equations on Triangulated Domains

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Sethian, James A.

    1997-01-01

    Borrowing from techniques developed for conservation law equations, numerical schemes which discretize the Hamilton-Jacobi (H-J), level set, and Eikonal equations on triangulated domains are presented. The first scheme is a provably monotone discretization for certain forms of the H-J equations. Unfortunately, the basic scheme lacks proper Lipschitz continuity of the numerical Hamiltonian. By employing a virtual edge flipping technique, Lipschitz continuity of the numerical flux is restored on acute triangulations. Next, schemes are introduced and developed based on the weaker concept of positive coefficient approximations for homogeneous Hamiltonians. These schemes possess a discrete maximum principle on arbitrary triangulations and naturally exhibit proper Lipschitz continuity of the numerical Hamiltonian. Finally, a class of Petrov-Galerkin approximations is considered. These schemes are stabilized via a least-squares bilinear form. The Petrov-Galerkin schemes do not possess a discrete maximum principle but generalize to high order accuracy.

  3. Natural Resource Management Schemes as Entry Points for Integrated Landscape Approaches: Evidence from Ghana and Burkina Faso.

    PubMed

    Foli, Samson; Ros-Tonen, Mirjam A F; Reed, James; Sunderland, Terry

    2018-07-01

    In recognition of the failures of sectoral approaches to overcome global challenges of biodiversity loss, climate change, food insecurity and poverty, scientific discourse on biodiversity conservation and sustainable development is shifting towards integrated landscape governance arrangements. Current landscape initiatives, however, very much depend on external actors and funding, raising the question of whether, how, and under what conditions locally embedded resource management schemes can serve as entry points for the implementation of integrated landscape approaches. This paper assesses the entry-point potential of three established natural resource management schemes in West Africa that target landscape degradation with the involvement of local communities: the Chantier d'Aménagement Forestier scheme encompassing forest management sites across Burkina Faso, and the Modified Taungya System and community wildlife resource management (CREMA) initiatives in Ghana. Based on a review of the current literature, we analyze the extent to which design principles that define a landscape approach apply to these schemes. We found that the CREMA meets most of the desired criteria, but that its scale may be too limited to guarantee effective landscape governance, hence requiring upscaling. Conversely, the other two initiatives strongly lack fundamental design components regarding integrated approaches, continual learning, and capacity building. Monitoring and evaluation bodies and participatory learning and negotiation platforms could enhance the schemes' alignment with integrated landscape approaches.

  4. Multicomponent density functional theory embedding formulation.

    PubMed

    Culpitt, Tanner; Brorsen, Kurt R; Pak, Michael V; Hammes-Schiffer, Sharon

    2016-07-28

    Multicomponent density functional theory (DFT) methods have been developed to treat two types of particles, such as electrons and nuclei, quantum mechanically at the same level. In the nuclear-electronic orbital (NEO) approach, all electrons and select nuclei, typically key protons, are treated quantum mechanically. For multicomponent DFT methods developed within the NEO framework, electron-proton correlation functionals based on explicitly correlated wavefunctions have been designed and used in conjunction with well-established electronic exchange-correlation functionals. Herein a general theory for multicomponent embedded DFT is developed to enable the accurate treatment of larger systems. In the general theory, the total electronic density is separated into two subsystem densities, denoted as regular and special, and different electron-proton correlation functionals are used for these two electronic densities. In the specific implementation, the special electron density is defined in terms of spatially localized Kohn-Sham electronic orbitals, and electron-proton correlation is included only for the special electron density. The electron-proton correlation functional depends on only the special electron density and the proton density, whereas the electronic exchange-correlation functional depends on the total electronic density. This scheme includes the essential electron-proton correlation, which is a relatively local effect, as well as the electronic exchange-correlation for the entire system. This multicomponent DFT-in-DFT embedding theory is applied to the HCN and FHF(-) molecules in conjunction with two different electron-proton correlation functionals and three different electronic exchange-correlation functionals. The results illustrate that this approach provides qualitatively accurate nuclear densities in a computationally tractable manner. The general theory is also easily extended to other types of partitioning schemes for multicomponent systems.

  5. Multicomponent density functional theory embedding formulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Culpitt, Tanner; Brorsen, Kurt R.; Pak, Michael V.

    Multicomponent density functional theory (DFT) methods have been developed to treat two types of particles, such as electrons and nuclei, quantum mechanically at the same level. In the nuclear-electronic orbital (NEO) approach, all electrons and select nuclei, typically key protons, are treated quantum mechanically. For multicomponent DFT methods developed within the NEO framework, electron-proton correlation functionals based on explicitly correlated wavefunctions have been designed and used in conjunction with well-established electronic exchange-correlation functionals. Herein a general theory for multicomponent embedded DFT is developed to enable the accurate treatment of larger systems. In the general theory, the total electronic density is separated into two subsystem densities, denoted as regular and special, and different electron-proton correlation functionals are used for these two electronic densities. In the specific implementation, the special electron density is defined in terms of spatially localized Kohn-Sham electronic orbitals, and electron-proton correlation is included only for the special electron density. The electron-proton correlation functional depends on only the special electron density and the proton density, whereas the electronic exchange-correlation functional depends on the total electronic density. This scheme includes the essential electron-proton correlation, which is a relatively local effect, as well as the electronic exchange-correlation for the entire system. This multicomponent DFT-in-DFT embedding theory is applied to the HCN and FHF− molecules in conjunction with two different electron-proton correlation functionals and three different electronic exchange-correlation functionals. The results illustrate that this approach provides qualitatively accurate nuclear densities in a computationally tractable manner. The general theory is also easily extended to other types of partitioning schemes for multicomponent systems.

  6. A robust color image watermarking algorithm against rotation attacks

    NASA Astrophysics Data System (ADS)

    Han, Shao-cheng; Yang, Jin-feng; Wang, Rui; Jia, Gui-min

    2018-01-01

    A robust digital watermarking algorithm is proposed based on quaternion wavelet transform (QWT) and discrete cosine transform (DCT) for copyright protection of color images. The luminance component Y of a host color image in YIQ space is decomposed by QWT, and then the coefficients of four low-frequency subbands are transformed by DCT. An original binary watermark scrambled by Arnold map and iterated sine chaotic system is embedded into the mid-frequency DCT coefficients of the subbands. In order to improve the performance of the proposed algorithm against rotation attacks, a rotation detection scheme is implemented before watermark extracting. The experimental results demonstrate that the proposed watermarking scheme shows strong robustness not only against common image processing attacks but also against arbitrary rotation attacks.
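    The Arnold-map scrambling step mentioned above is easy to illustrate in isolation. The sketch below only shows scrambling of a square binary watermark (the iteration count plays the role of a key); the QWT/DCT embedding and the chaotic keying of the paper are not reproduced, and the array sizes are placeholders.

```python
# Minimal sketch of Arnold (cat map) scrambling of a square binary watermark.
import numpy as np

def arnold_scramble(img, iterations):
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "Arnold map needs a square image"
    out = img.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                # classic cat map: (x, y) -> (x + y, x + 2y) mod n
                nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = nxt
    return out

watermark = (np.random.default_rng(0).random((32, 32)) > 0.5).astype(np.uint8)
scrambled = arnold_scramble(watermark, iterations=7)
# the map is periodic, so descrambling is simply more forward iterations
```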

  7. Optimal solutions for the evolution of a social obesity epidemic model

    NASA Astrophysics Data System (ADS)

    Sikander, Waseem; Khan, Umar; Mohyud-Din, Syed Tauseef

    2017-06-01

    In this work, a novel modification in the traditional homotopy perturbation method (HPM) is proposed by embedding an auxiliary parameter in the boundary condition. The scheme is used to carry out a mathematical evaluation of the social obesity epidemic model. The incidence of excess weight and obesity in adulthood population and prediction of its behavior in the coming years is analyzed by using a modified algorithm. The proposed method increases the convergence of the approximate analytical solution over the domain of the problem. Furthermore, a convenient way is considered for choosing an optimal value of auxiliary parameters via minimizing the total residual error. The graphical comparison of the obtained results with the standard HPM explicitly reveals the accuracy and efficiency of the developed scheme.

  8. Discrete cosine transform and hash functions toward implementing a (robust-fragile) watermarking scheme

    NASA Astrophysics Data System (ADS)

    Al-Mansoori, Saeed; Kunhu, Alavi

    2013-10-01

    This paper proposes a blind multi-watermarking scheme based on designing two back-to-back encoders. The first encoder is implemented to embed a robust watermark into remote sensing imagery by applying a Discrete Cosine Transform (DCT) approach. Such a watermark is used in many applications to protect the copyright of the image. However, the second encoder embeds a fragile watermark using the SHA-1 hash function. The purpose behind embedding a fragile watermark is to prove the authenticity of the image (i.e., tamper detection). Thus, the proposed technique was developed as a result of new challenges with piracy of remote sensing imagery ownership. This led researchers to look for different means to secure the ownership of satellite imagery and prevent the illegal use of these resources. Therefore, the Emirates Institution for Advanced Science and Technology (EIAST) proposed utilizing an existing data security concept by embedding a digital signature, "watermark", into DubaiSat-1 satellite imagery. In this study, DubaiSat-1 images with 2.5 meter resolution are used as a cover and a colored EIAST logo is used as a watermark. In order to evaluate the robustness of the proposed technique, a couple of attacks are applied such as JPEG compression, rotation and synchronization attacks. Furthermore, tampering attacks are applied to prove image authenticity.
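    As a small illustration of the fragile-watermark side, the sketch below derives an authentication payload from an image block with SHA-1 via Python's standard hashlib. Where and how the resulting bits are embedded (e.g., in selected DCT coefficients) is scheme-specific and not shown; the block contents are a placeholder.

```python
# Sketch: derive a 160-bit fragile-watermark payload from an image block.
import hashlib
import numpy as np

def sha1_payload_bits(block):
    digest = hashlib.sha1(block.tobytes()).digest()          # 20 bytes = 160 bits
    return np.unpackbits(np.frombuffer(digest, dtype=np.uint8))

block = np.arange(64, dtype=np.uint8).reshape(8, 8)           # stand-in image block
print(sha1_payload_bits(block)[:16])                          # first 16 authentication bits
```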

  9. Nonlinear hierarchical multiscale modeling of cortical bone considering its nanoscale microstructure.

    PubMed

    Ghanbari, J; Naghdabadi, R

    2009-07-22

    We have used a hierarchical multiscale modeling scheme for the analysis of cortical bone considering it as a nanocomposite. This scheme consists of the definition of two boundary value problems, one for the macroscale and another for the microscale. The coupling between these scales is done by using the homogenization technique. At every material point in which the constitutive model is needed, a microscale boundary value problem is defined using a macroscopic kinematical quantity and solved. Using the described scheme, we have studied elastic properties of cortical bone considering its nanoscale microstructural constituents with various mineral volume fractions. Since the microstructure of bone consists of mineral platelets with nanometer size embedded in a protein matrix, it is similar to the microstructure of soft matrix nanocomposites reinforced with hard nanostructures. Considering a representative volume element (RVE) of the microstructure of bone as the microscale problem in our hierarchical multiscale modeling scheme, the global behavior of bone is obtained under various macroscopic loading conditions. This scheme may be suitable for modeling arbitrary bone geometries subjected to a variety of loading conditions. Using the presented method, mechanical properties of cortical bone, including the elastic moduli and Poisson's ratios in two major directions and the shear modulus, are obtained for different mineral volume fractions.
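    To convey how effective stiffness varies with mineral volume fraction, the sketch below evaluates the elementary Voigt and Reuss bounds for a two-phase mineral/protein mixture. This is a crude stand-in for the RVE-based homogenization of the paper, and the phase moduli are assumed illustrative values, not numbers from the study.

```python
# Voigt (upper) and Reuss (lower) bounds on the effective modulus of a
# mineral/protein mixture versus mineral volume fraction (illustrative values).
import numpy as np

E_mineral, E_protein = 100.0, 1.0          # GPa, assumed phase moduli
phi = np.linspace(0.0, 1.0, 11)            # mineral volume fraction
E_voigt = phi * E_mineral + (1 - phi) * E_protein
E_reuss = 1.0 / (phi / E_mineral + (1 - phi) / E_protein)
for f, ev, er in zip(phi, E_voigt, E_reuss):
    print(f"phi={f:.1f}  Voigt={ev:6.2f} GPa  Reuss={er:6.2f} GPa")
```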

  10. Preconditioned steepest descent methods for some nonlinear elliptic equations involving p-Laplacian terms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, Wenqiang, E-mail: wfeng1@vols.utk.edu; Salgado, Abner J., E-mail: asalgad1@utk.edu; Wang, Cheng, E-mail: cwang1@umassd.edu

    We describe and analyze preconditioned steepest descent (PSD) solvers for fourth and sixth-order nonlinear elliptic equations that include p-Laplacian terms on periodic domains in 2 and 3 dimensions. The highest and lowest order terms of the equations are constant-coefficient, positive linear operators, which suggests a natural preconditioning strategy. Such nonlinear elliptic equations often arise from time discretization of parabolic equations that model various biological and physical phenomena, in particular, liquid crystals, thin film epitaxial growth and phase transformations. The analyses of the schemes involve the characterization of the strictly convex energies associated with the equations. We first give a general framework for PSD in Hilbert spaces. Based on certain reasonable assumptions of the linear pre-conditioner, a geometric convergence rate is shown for the nonlinear PSD iteration. We then apply the general theory to the fourth and sixth-order problems of interest, making use of Sobolev embedding and regularity results to confirm the appropriateness of our pre-conditioners for the regularized p-Laplacian problems. Our results include a sharper theoretical convergence result for p-Laplacian systems compared to what may be found in existing works. We demonstrate rigorously how to apply the theory in the finite dimensional setting using finite difference discretization methods. Numerical simulations for some important physical application problems – including thin film epitaxy with slope selection and the square phase field crystal model – are carried out to verify the efficiency of the scheme.

  11. Preconditioned steepest descent methods for some nonlinear elliptic equations involving p-Laplacian terms

    NASA Astrophysics Data System (ADS)

    Feng, Wenqiang; Salgado, Abner J.; Wang, Cheng; Wise, Steven M.

    2017-04-01

    We describe and analyze preconditioned steepest descent (PSD) solvers for fourth and sixth-order nonlinear elliptic equations that include p-Laplacian terms on periodic domains in 2 and 3 dimensions. The highest and lowest order terms of the equations are constant-coefficient, positive linear operators, which suggests a natural preconditioning strategy. Such nonlinear elliptic equations often arise from time discretization of parabolic equations that model various biological and physical phenomena, in particular, liquid crystals, thin film epitaxial growth and phase transformations. The analyses of the schemes involve the characterization of the strictly convex energies associated with the equations. We first give a general framework for PSD in Hilbert spaces. Based on certain reasonable assumptions of the linear pre-conditioner, a geometric convergence rate is shown for the nonlinear PSD iteration. We then apply the general theory to the fourth and sixth-order problems of interest, making use of Sobolev embedding and regularity results to confirm the appropriateness of our pre-conditioners for the regularized p-Laplacian problems. Our results include a sharper theoretical convergence result for p-Laplacian systems compared to what may be found in existing works. We demonstrate rigorously how to apply the theory in the finite dimensional setting using finite difference discretization methods. Numerical simulations for some important physical application problems - including thin film epitaxy with slope selection and the square phase field crystal model - are carried out to verify the efficiency of the scheme.
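    The PSD iteration itself is simple to demonstrate on a toy problem. The sketch below applies preconditioned steepest descent with exact line search to a strictly convex quadratic energy using a diagonal (Jacobi) preconditioner; the papers above apply the same idea with constant-coefficient linear operators as preconditioners for p-Laplacian-type energies, which is not reproduced here.

```python
# Minimal preconditioned steepest descent on E(x) = 0.5 x^T A x - b^T x.
import numpy as np

rng = np.random.default_rng(0)
n = 50
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)            # symmetric positive definite operator
b = rng.standard_normal(n)
P_inv = 1.0 / np.diag(A)               # inverse of the diagonal preconditioner

x = np.zeros(n)
for k in range(200):
    r = b - A @ x                      # negative gradient of the energy
    d = P_inv * r                      # preconditioned search direction
    alpha = (r @ d) / (d @ (A @ d))    # exact line search for a quadratic energy
    x += alpha * d
    if np.linalg.norm(r) < 1e-10:
        break
print("iterations:", k, "residual:", np.linalg.norm(b - A @ x))
```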

  12. Student Loans Schemes in Mauritius: Experience, Analysis and Scenarios

    ERIC Educational Resources Information Center

    Mohadeb, Praveen

    2006-01-01

    This study makes a comprehensive review of the situation of student loans schemes in Mauritius, and makes recommendations, based on best practices, for setting up a national scheme that attempts to avoid weaknesses identified in some of the loans schemes of other countries. It suggests that such a scheme would be cost-effective and beneficial both…

  13. A telehealth architecture for networked embedded systems: a case study in in vivo health monitoring.

    PubMed

    Dabiri, Foad; Massey, Tammara; Noshadi, Hyduke; Hagopian, Hagop; Lin, C K; Tan, Robert; Schmidt, Jacob; Sarrafzadeh, Majid

    2009-05-01

    The improvement in processor performance through continuous breakthroughs in transistor technology has resulted in the proliferation of lightweight embedded systems. Advances in wireless technology and embedded systems have enabled remote healthcare and telemedicine. While medical examinations could previously extract only localized symptoms through snapshots, now continuous monitoring can discretely analyze how a patient's lifestyle affects his/her physiological conditions and if additional symptoms occur under various stimuli. We demonstrate how medical applications in particular benefit from a hierarchical networking scheme that will improve the quantity and quality of ubiquitous data collection. Our Telehealth networking infrastructure provides flexibility in terms of functionality and the type of applications that it supports. We specifically present a case study that demonstrates the effectiveness of our networked embedded infrastructure in an in vivo pressure application. Experimental results of the in vivo system demonstrate how it can wirelessly transmit pressure readings measuring from 0 to 1.5 lbf/in² with an accuracy of 0.02 lbf/in². The challenges in biocompatible packaging, transducer drift, power management, and in vivo signal transmission are also discussed. This research brings researchers a step closer to continuous, real-time systemic monitoring that will allow one to analyze the dynamic human physiology.

  14. Locally linear embedding: dimension reduction of massive protostellar spectra

    NASA Astrophysics Data System (ADS)

    Ward, J. L.; Lumsden, S. L.

    2016-09-01

    We present the results of the application of locally linear embedding (LLE) to reduce the dimensionality of dereddened and continuum subtracted near-infrared spectra using a combination of models and real spectra of massive protostars selected from the Red MSX Source survey database. A brief comparison is also made with two other dimension reduction techniques: principal component analysis (PCA) and Isomap, using the same set of spectra, as well as a more advanced form of LLE, Hessian locally linear embedding. We find that whilst LLE certainly has its limitations, it significantly outperforms both PCA and Isomap in classification of spectra based on the presence/absence of emission lines and provides a valuable tool for classification and analysis of large spectral data sets.
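    A minimal sketch of this kind of dimension reduction is shown below using scikit-learn's LocallyLinearEmbedding. The random "spectra" are placeholders for the dereddened, continuum-subtracted near-infrared spectra of the paper, and the neighbor count is an illustrative choice.

```python
# Reduce a stack of spectra to two dimensions with locally linear embedding.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(0)
n_spectra, n_channels = 300, 200
base = np.sin(np.linspace(0, 6 * np.pi, n_channels))
spectra = base + 0.1 * rng.standard_normal((n_spectra, n_channels))  # placeholder data

lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2, method="standard")
embedded = lle.fit_transform(spectra)      # shape (300, 2): low-dimensional coordinates
print(embedded.shape)
# method="hessian" gives Hessian LLE, which requires
# n_neighbors > n_components * (n_components + 3) / 2.
```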

  15. A new chaotic communication scheme based on adaptive synchronization.

    PubMed

    Xiang-Jun, Wu

    2006-12-01

    A new chaotic communication scheme using an adaptive synchronization technique for two unified chaotic systems is proposed. Different from the existing secure communication methods, the transmitted signal is modulated into the parameter of chaotic systems. The adaptive synchronization technique is used to synchronize two identical chaotic systems embedded in the transmitter and the receiver. It is assumed that the parameter of the receiver system is unknown. Based on the Lyapunov stability theory, an adaptive control law is derived to make the states of two identical unified chaotic systems with unknown system parameters asymptotically synchronized; thus the parameter of the receiver system is identified. Then the recovery of the original information signal in the receiver is successfully achieved on the basis of the estimated parameter. It is noticed that the time required for recovering the information signal and the accuracy of the recovered signal depend very sensitively on the frequency of the information signal. Numerical results have verified the effectiveness of the proposed scheme.
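    The basic synchronization idea can be shown on its own with simple one-way coupling between two Lorenz systems (a special case of the unified chaotic system). This is a sketch under assumed parameter values and a fixed coupling gain; the paper's adaptive law for the unknown parameter is not reproduced.

```python
# Drive-response chaos synchronization via one-way coupling on the x-variable.
import numpy as np
from scipy.integrate import solve_ivp

sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
k = 20.0                                   # coupling gain (assumed)

def coupled(t, s):
    x1, y1, z1, x2, y2, z2 = s
    # drive system
    dx1 = sigma * (y1 - x1)
    dy1 = x1 * (rho - z1) - y1
    dz1 = x1 * y1 - beta * z1
    # response system, driven through the x-component
    dx2 = sigma * (y2 - x2) + k * (x1 - x2)
    dy2 = x2 * (rho - z2) - y2
    dz2 = x2 * y2 - beta * z2
    return [dx1, dy1, dz1, dx2, dy2, dz2]

sol = solve_ivp(coupled, (0.0, 20.0), [1, 1, 1, -5, 7, 30], rtol=1e-8, atol=1e-10)
err = np.abs(sol.y[:3, -1] - sol.y[3:, -1])
print("final synchronization error:", err)   # should be close to zero
```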

  16. Characterization of a Method for Inverse Heat Conduction Using Real and Simulated Thermocouple Data

    NASA Technical Reports Server (NTRS)

    Pizzo, Michelle E.; Glass, David E.

    2017-01-01

    It is often impractical to instrument the external surface of high-speed vehicles due to the aerothermodynamic heating. Temperatures can instead be measured internal to the structure using embedded thermocouples, and direct and inverse methods can then be used to estimate temperature and heat flux on the external surface. Two thermocouples embedded at different depths are required to solve direct and inverse problems, and filtering schemes are used to reduce noise in the measured data. Accuracy in the estimated surface temperature and heat flux is dependent on several factors. Factors include the thermocouple location through the thickness of a material, the sensitivity of the surface solution to the error in the specified location of the embedded thermocouples, and the sensitivity to the error in thermocouple data. The effect of these factors on solution accuracy is studied using the methodology discussed in the work of Pizzo et al. [1] A numerical study is performed to determine if there is an optimal depth at which to embed one thermocouple through the thickness of a material assuming that a second thermocouple is installed on the back face. Solution accuracy will be discussed for a range of embedded thermocouple depths. Moreover, the sensitivity of the surface solution to (a) the error in the specified location of the embedded thermocouple and to (b) the error in the thermocouple data are quantified using numerical simulation, and the results are discussed.
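    For context, the sketch below solves only the direct (forward) 1D conduction problem with an explicit finite-difference scheme, producing synthetic temperature histories at an embedded depth and at the back face, the kind of data the inverse method would then use. Material properties, slab thickness, heat flux, and thermocouple indices are assumed illustrative values; the inverse surface-estimation step of the paper is not shown.

```python
# Forward 1D conduction model generating synthetic embedded-thermocouple data.
import numpy as np

L, nx = 0.01, 51                     # slab thickness [m], grid points (assumed)
k_th, rho, cp = 15.0, 8000.0, 500.0  # conductivity, density, specific heat (assumed)
alpha = k_th / (rho * cp)            # thermal diffusivity [m^2/s]
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha             # satisfies the explicit stability limit
q_surface = 5.0e5                    # applied surface heat flux [W/m^2] (assumed)

T = np.full(nx, 300.0)               # initial temperature [K]
i_tc1, i_tc2 = 10, nx - 1            # embedded thermocouple and back-face indices
history = []
for step in range(2000):
    Tn = T.copy()
    # interior nodes: standard FTCS update
    T[1:-1] = Tn[1:-1] + alpha * dt / dx**2 * (Tn[2:] - 2 * Tn[1:-1] + Tn[:-2])
    # heated front face via a ghost-node flux boundary condition
    T[0] = Tn[0] + alpha * dt / dx**2 * (2 * Tn[1] - 2 * Tn[0]) \
           + 2 * dt * q_surface / (rho * cp * dx)
    T[-1] = T[-2]                    # insulated back face
    history.append((step * dt, T[i_tc1], T[i_tc2]))
print(history[-1])                   # (time, embedded TC temperature, back-face temperature)
```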

  17. A simple recipe for setting up the flux equations of cyclic and linear reaction schemes of ion transport with a high number of states: The arrow scheme.

    PubMed

    Hansen, Ulf-Peter; Rauh, Oliver; Schroeder, Indra

    2016-01-01

    The calculation of flux equations or current-voltage relationships in reaction kinetic models with a high number of states can be very cumbersome. Here, a recipe based on an arrow scheme is presented, which yields a straightforward access to the minimum form of the flux equations and the occupation probability of the involved states in cyclic and linear reaction schemes. This is extremely simple for cyclic schemes without branches. If branches are involved, the effort of setting up the equations is a little bit higher. However, also here a straightforward recipe making use of so-called reserve factors is provided for implementing the branches into the cyclic scheme, thus enabling also a simple treatment of such cases.
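    For comparison, the same steady-state occupations and cycle flux can always be obtained numerically by plain linear algebra, as in the sketch below for an unbranched three-state cycle with assumed rate constants; the arrow scheme of the paper is a pencil-and-paper shortcut to the same minimum-form expressions, not reproduced here.

```python
# Steady-state occupations and cycle flux of a 3-state cyclic transport scheme.
import numpy as np

# rate constants k[i, j]: transition rate from state i to state j (assumed values)
k = np.array([[  0.0, 100.0,  20.0],
              [ 50.0,   0.0, 200.0],
              [ 10.0,  80.0,   0.0]])

Q = k.T - np.diag(k.sum(axis=1))      # generator matrix: dp/dt = Q @ p
A = np.vstack([Q, np.ones(3)])        # append the normalization sum(p) = 1
b = np.array([0.0, 0.0, 0.0, 1.0])
p, *_ = np.linalg.lstsq(A, b, rcond=None)

flux_12 = k[0, 1] * p[0] - k[1, 0] * p[1]   # net steady-state flux around the cycle
print("occupations:", p, "cycle flux:", flux_12)
```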

  18. A simple recipe for setting up the flux equations of cyclic and linear reaction schemes of ion transport with a high number of states: The arrow scheme

    PubMed Central

    Hansen, Ulf-Peter; Rauh, Oliver; Schroeder, Indra

    2016-01-01

    The calculation of flux equations or current-voltage relationships in reaction kinetic models with a high number of states can be very cumbersome. Here, a recipe based on an arrow scheme is presented, which yields a straightforward access to the minimum form of the flux equations and the occupation probability of the involved states in cyclic and linear reaction schemes. This is extremely simple for cyclic schemes without branches. If branches are involved, the effort of setting up the equations is a little bit higher. However, also here a straightforward recipe making use of so-called reserve factors is provided for implementing the branches into the cyclic scheme, thus enabling also a simple treatment of such cases. PMID:26646356

  19. Data embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.

    1997-01-01

    A method of embedding auxiliary information into a set of host data, such as a photograph, television signal, facsimile transmission, or identification card. All such host data contain intrinsic noise, allowing pixels in the host data which are nearly identical and which have values differing by less than the noise value to be manipulated and replaced with auxiliary data. As the embedding method does not change the elemental values of the host data, the auxiliary data do not noticeably affect the appearance or interpretation of the host data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user.
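    For flavor, the sketch below hides auxiliary bits in the least significant bits of host pixels and recovers them again. This is only a simple LSB illustration; the patented method instead manipulates pairs of host values that differ by less than the intrinsic noise, which is not reproduced here.

```python
# Simple LSB-style illustration of embedding and retrieving auxiliary data.
import numpy as np

def embed_lsb(host, bits):
    flat = host.flatten()                       # works on a copy of the host data
    assert bits.size <= flat.size
    flat[:bits.size] = (flat[:bits.size] & ~np.uint8(1)) | bits
    return flat.reshape(host.shape)

def extract_lsb(stego, n_bits):
    return stego.flatten()[:n_bits] & np.uint8(1)

rng = np.random.default_rng(0)
host = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # stand-in host image
message = rng.integers(0, 2, size=256, dtype=np.uint8)       # auxiliary bits
stego = embed_lsb(host, message)
assert np.array_equal(extract_lsb(stego, 256), message)      # auxiliary data recovered
```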

  20. Data embedding

    DOEpatents

    Sandford, M.T. II; Handel, T.G.

    1997-08-19

    A method is disclosed for embedding auxiliary information into a set of host data, such as a photograph, television signal, facsimile transmission, or identification card. All such host data contain intrinsic noise, allowing pixels in the host data which are nearly identical and which have values differing by less than the noise value to be manipulated and replaced with auxiliary data. As the embedding method does not change the elemental values of the host data, the auxiliary data do not noticeably affect the appearance or interpretation of the host data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. 19 figs.

  1. Design and control of an embedded vision guided robotic fish with multiple control surfaces.

    PubMed

    Yu, Junzhi; Wang, Kai; Tan, Min; Zhang, Jianwei

    2014-01-01

    This paper focuses on the development and control issues of a self-propelled robotic fish with multiple artificial control surfaces and an embedded vision system. By virtue of the hybrid propulsion capability in the body plus the caudal fin and the complementary maneuverability in accessory fins, a synthesized propulsion scheme including a caudal fin, a pair of pectoral fins, and a pelvic fin is proposed. To achieve flexible yet stable motions in aquatic environments, a central pattern generator- (CPG-) based control method is employed. Meanwhile, a monocular underwater vision serves as sensory feedback that modifies the control parameters. The integration of the CPG-based motion control and the visual processing in an embedded microcontroller allows the robotic fish to navigate online. Aquatic tests demonstrate the efficacy of the proposed mechatronic design and swimming control methods. Particularly, a pelvic fin actuated sideward swimming gait was first implemented. It is also found that the speeds and maneuverability of the robotic fish with coordinated control surfaces were largely superior to those of the swimming robot propelled by a single control surface.

  2. Manifold Embedding and Semantic Segmentation for Intraoperative Guidance With Hyperspectral Brain Imaging.

    PubMed

    Ravi, Daniele; Fabelo, Himar; Callic, Gustavo Marrero; Yang, Guang-Zhong

    2017-09-01

    Recent advances in hyperspectral imaging have made it a promising solution for intra-operative tissue characterization, with the advantages of being non-contact, non-ionizing, and non-invasive. Working with hyperspectral images in vivo, however, is not straightforward as the high dimensionality of the data makes real-time processing challenging. In this paper, a novel dimensionality reduction scheme and a new processing pipeline are introduced to obtain a detailed tumor classification map for intra-operative margin definition during brain surgery. However, existing approaches to dimensionality reduction based on manifold embedding can be time consuming and may not guarantee a consistent result, thus hindering final tissue classification. The proposed framework aims to overcome these problems through a process divided into two steps: dimensionality reduction based on an extension of the T-distributed stochastic neighbor approach is first performed and then a semantic segmentation technique is applied to the embedded results by using a Semantic Texton Forest for tissue classification. Detailed in vivo validation of the proposed method has been performed to demonstrate the potential clinical value of the system.

  3. Embedded importance watermarking for image verification in radiology

    NASA Astrophysics Data System (ADS)

    Osborne, Domininc; Rogers, D.; Sorell, M.; Abbott, Derek

    2004-03-01

    Digital medical images used in radiology are quite different from everyday continuous tone images. Radiology images require that all detailed diagnostic information can be extracted, which traditionally constrains digital medical images to be of large size and stored without loss of information. In order to transmit diagnostic images over a narrowband wireless communication link for remote diagnosis, lossy compression schemes must be used. This involves discarding detailed information and compressing the data, making it more susceptible to error. The loss of image detail and incidental degradation occurring during transmission have potential legal accountability issues, especially in the case of the null diagnosis of a tumor. The work proposed here investigates techniques for verifying the veracity of medical images - in particular, detailing the use of embedded watermarking as an objective means to ensure that important parts of the medical image can be verified. We present a result showing how embedded watermarking can be used to differentiate contextual from detailed information. The types of images used include spiral hairline fractures and small tumors, which contain essential diagnostic high spatial frequency information.

  4. Nanofluidic Device with Embedded Nanopore

    NASA Astrophysics Data System (ADS)

    Zhang, Yuning; Reisner, Walter

    2014-03-01

    Nanofluidic based devices are robust methods for biomolecular sensing and single DNA manipulation. Nanopore-based DNA sensing has attractive features that make it a leading candidate as a single-molecule DNA sequencing technology. Nanochannel based extension of DNA, combined with enzymatic or denaturation-based barcoding schemes, is already a powerful approach for genome analysis. We believe that there is revolutionary potential in devices that combine nanochannels with nanopore detectors. In particular, due to the fast translocation of a DNA molecule through a standard nanopore configuration, there is an unfavorable trade-off between signal and sequence resolution. With a combined nanochannel-nanopore device, based on embedding a nanopore inside a nanochannel, we can in principle gain independent control over both DNA translocation speed and sensing signal, solving the key drawback of the standard nanopore configuration. We demonstrate that we can detect - using fluorescence microscopy - successful translocation of DNA from the nanochannel out through the nanopore, a possible method to 'select' a given barcode for further analysis. We also show that in equilibrium DNA will not escape through an embedded sub-persistence length nanopore until a certain voltage bias is added.

  5. A Graph Theory Practice on Transformed Image: A Random Image Steganography

    PubMed Central

    Thanikaiselvan, V.; Arulmozhivarman, P.; Subashanthini, S.; Amirtharajan, Rengarajan

    2013-01-01

    Modern day information age is enriched with the advanced network communication expertise but unfortunately at the same time encounters infinite security issues when dealing with secret and/or private information. The storage and transmission of the secret information become highly essential and have led to a deluge of research in this field. In this paper, an optimistic effort has been taken to combine graceful graph along with integer wavelet transform (IWT) to implement random image steganography for secure communication. The implementation part begins with the conversion of cover image into wavelet coefficients through IWT and is followed by embedding secret image in the randomly selected coefficients through graph theory. Finally stegoimage is obtained by applying inverse IWT. This method provides a maximum of 44 dB peak signal to noise ratio (PSNR) for 266646 bits. Thus, the proposed method gives high imperceptibility through high PSNR value and high embedding capacity in the cover image due to adaptive embedding scheme and high robustness against blind attack through graph theoretic random selection of coefficients. PMID:24453857
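    The PSNR figure of merit quoted above is straightforward to compute; the sketch below evaluates it between a cover image and its stego version. The images and the LSB flips are placeholders, not the IWT/graph-theoretic embedding of the paper.

```python
# PSNR between a cover image and a stego image (placeholder data).
import numpy as np

def psnr(cover, stego, peak=255.0):
    mse = np.mean((cover.astype(np.float64) - stego.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
stego = cover.copy()
stego[::7, ::7] ^= 1                    # flip a few LSBs as a stand-in embedding
print(f"PSNR = {psnr(cover, stego):.2f} dB")
```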

  6. Design and Control of an Embedded Vision Guided Robotic Fish with Multiple Control Surfaces

    PubMed Central

    Wang, Kai; Tan, Min; Zhang, Jianwei

    2014-01-01

    This paper focuses on the development and control issues of a self-propelled robotic fish with multiple artificial control surfaces and an embedded vision system. By virtue of the hybrid propulsion capability in the body plus the caudal fin and the complementary maneuverability in accessory fins, a synthesized propulsion scheme including a caudal fin, a pair of pectoral fins, and a pelvic fin is proposed. To achieve flexible yet stable motions in aquatic environments, a central pattern generator- (CPG-) based control method is employed. Meanwhile, a monocular underwater vision serves as sensory feedback that modifies the control parameters. The integration of the CPG-based motion control and the visual processing in an embedded microcontroller allows the robotic fish to navigate online. Aquatic tests demonstrate the efficacy of the proposed mechatronic design and swimming control methods. Particularly, a pelvic fin actuated sideward swimming gait was first implemented. It is also found that the speeds and maneuverability of the robotic fish with coordinated control surfaces were largely superior to those of the swimming robot propelled by a single control surface. PMID:24688413

  7. A Student Experiment Method for Learning the Basics of Embedded Software Technologies Including Hardware/Software Co-design

    NASA Astrophysics Data System (ADS)

    Kambe, Hidetoshi; Mitsui, Hiroyasu; Endo, Satoshi; Koizumi, Hisao

    The applications of embedded system technologies have spread widely in various products, such as home appliances, cellular phones, automobiles, industrial machines and so on. Due to intensified competition, embedded software has expanded its role in realizing sophisticated functions, and new development methods like hardware/software (HW/SW) co-design for uniting HW and SW development have been researched. The shortfall of embedded SW engineers in Japan was estimated to be approximately 99,000 in 2006. Embedded SW engineers should understand HW technologies and system architecture design as well as SW technologies. However, few universities offer this kind of education systematically. We propose a student experiment method for learning the basics of embedded system development, which includes a set of experiments for developing embedded SW, developing embedded HW and experiencing HW/SW co-design. The co-design experiment helps students learn about the basics of embedded system architecture design and the flow of designing actual HW and SW modules. We developed these experiments and evaluated them.

  8. Stereoscopic 3D reconstruction using motorized zoom lenses within an embedded system

    NASA Astrophysics Data System (ADS)

    Liu, Pengcheng; Willis, Andrew; Sui, Yunfeng

    2009-02-01

    This paper describes a novel embedded system capable of estimating 3D positions of surfaces viewed by a stereoscopic rig consisting of a pair of calibrated cameras. Novel theoretical and technical aspects of the system are tied to two aspects of the design that deviate from typical stereoscopic reconstruction systems: (1) incorporation of a 10x zoom lens (Rainbow H10x8.5) and (2) implementation of the system on an embedded system. The system components include a DSP running μClinux, an embedded version of the Linux operating system, and an FPGA. The DSP orchestrates data flow within the system and performs complex computational tasks, and the FPGA provides an interface to the system devices, which consist of a CMOS camera pair and a pair of servo motors which rotate (pan) each camera. Calibration of the camera pair is accomplished using a collection of stereo images that view a common chess board calibration pattern for a set of pre-defined zoom positions. Calibration settings for an arbitrary zoom setting are estimated by interpolation of the camera parameters. A low-computational cost method for dense stereo matching is used to compute depth disparities for the stereo image pairs. Surface reconstruction is accomplished by classical triangulation of the matched points from the depth disparities. This article includes our methods and results for the following problems: (1) automatic computation of the focus and exposure settings for the lens and camera sensor, (2) calibration of the system for various zoom settings and (3) stereo reconstruction results for several free form objects.
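    The triangulation step mentioned above reduces, for a rectified pair, to the classical depth-from-disparity relation Z = f·B/d. The sketch below shows that relation with illustrative focal length, baseline, and principal point values, which are not the calibration values of the described rig.

```python
# Classical triangulation for a rectified stereo pair: depth from disparity.
import numpy as np

f_px = 1200.0          # focal length in pixels (depends on the zoom setting; assumed)
baseline_m = 0.12      # distance between the two cameras [m] (assumed)

def triangulate(disparity_px, x_px, y_px, cx=320.0, cy=240.0):
    Z = f_px * baseline_m / disparity_px        # depth along the optical axis
    X = (x_px - cx) * Z / f_px                  # lateral position
    Y = (y_px - cy) * Z / f_px                  # vertical position
    return np.array([X, Y, Z])

print(triangulate(disparity_px=15.0, x_px=400.0, y_px=260.0))  # point ~9.6 m away
```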

  9. Digital Watermarking of Autonomous Vehicles Imagery and Video Communication

    DTIC Science & Technology

    2005-10-01

    [Abstract garbled by two-column text extraction; the recoverable fragments indicate that the report concerns spread spectrum (SS) watermarking of autonomous vehicle imagery and video in homeland security and defense settings, covering spatial and spectral domains, and notes that such watermarking schemes do not require full or partial decompression.]

  10. Nonlinear data assimilation using synchronization in a particle filter

    NASA Astrophysics Data System (ADS)

    Rodrigues-Pinheiro, Flavia; Van Leeuwen, Peter Jan

    2017-04-01

    Current data assimilation methods still face problems in strongly nonlinear cases. A promising solution is a particle filter, which provides a representation of the model probability density function by a discrete set of particles. However, the basic particle filter does not work in high-dimensional cases. The performance can be improved by considering the proposal density freedom. A potential choice of proposal density might come from the synchronisation theory, in which one tries to synchronise the model with the true evolution of a system using one-way coupling via the observations. In practice, an extra term is added to the model equations that damps growth of instabilities on the synchronisation manifold. When only part of the system is observed synchronization can be achieved via a time embedding, similar to smoothers in data assimilation. In this work, two new ideas are tested. First, ensemble-based time embedding, similar to an ensemble smoother or 4DEnsVar is used on each particle, avoiding the need for tangent-linear models and adjoint calculations. Tests were performed using Lorenz96 model for 20, 100 and 1000-dimension systems. Results show state-averaged synchronisation errors smaller than observation errors even in partly observed systems, suggesting that the scheme is a promising tool to steer model states to the truth. Next, we combine these efficient particles using an extension of the Implicit Equal-Weights Particle Filter, a particle filter that ensures equal weights for all particles, avoiding filter degeneracy by construction. Promising results will be shown on low- and high-dimensional Lorenz96 models, and the pros and cons of these new ideas will be discussed.

  11. Imperceptible reversible watermarking of radiographic images based on quantum noise masking.

    PubMed

    Pan, Wei; Bouslimi, Dalel; Karasad, Mohamed; Cozic, Michel; Coatrieux, Gouenou

    2018-07-01

    Advances in information and communication technologies boost the sharing and remote access to medical images. Along with this evolution, needs in terms of data security are also increased. Watermarking can contribute to better protect images by dissimulating into their pixels some security attributes (e.g., digital signature, user identifier). But, to take full advantage of this technology in healthcare, one key problem to address is to ensure that the image distortion induced by the watermarking process does not endanger the image diagnosis value. To overcome this issue, reversible watermarking is one solution. It allows watermark removal with the exact recovery of the image. Unfortunately, reversibility does not mean that imperceptibility constraints are relaxed. Indeed, once the watermark is removed, the image is unprotected. It is thus important to ensure the invisibility of the reversible watermark in order to ensure a permanent image protection. We propose a new fragile reversible watermarking scheme for digital radiographic images, the main originality of which stands in masking a reversible watermark into the image quantum noise (the dominant noise in radiographic images). More clearly, in order to ensure the watermark imperceptibility, our scheme differentiates the image black background, where message embedding is conducted into pixel gray values with the well-known histogram shifting (HS) modulation, from the anatomical object, where HS is applied to wavelet detail coefficients, masking the watermark with the image quantum noise. In order to maintain the watermark embedder and reader synchronized in terms of image partitioning and insertion domain, our scheme makes use of different classification processes that are invariant to message embedding. We provide the theoretical performance limits of our scheme into the image quantum noise in terms of image distortion and message size (i.e. capacity). Experiments conducted on more than 800 12-bit radiographic images of different anatomical structures show that our scheme induces a very low image distortion (PSNR ∼ 76.5 dB) for a relatively important capacity (capacity ∼ 0.02 bits of message per pixel). The proposed watermarking scheme, while being reversible, preserves the diagnosis value of radiographic images by masking the watermark into the quantum noise. As theoretically and experimentally established, our scheme offers a good capacity/image quality compromise that can support different watermarking based security services such as integrity and authenticity control. The watermark can be kept in the image during the interpretation of the image, thus offering continuous protection. Such a masking strategy can be seen as the first psychovisual model for radiographic images. The reversibility allows the watermark update when necessary. Copyright © 2018 Elsevier B.V. All rights reserved.
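    The basic histogram-shifting (HS) modulation mentioned above can be sketched on pixel gray values alone, as below. The quantum-noise masking, classification, and wavelet-domain embedding of the paper are not reproduced, and handling of a non-empty "zero" bin is omitted for brevity (the toy image is chosen so that an empty bin exists).

```python
# Basic histogram-shifting reversible embedding and extraction (toy example).
import numpy as np

def hs_embed(img, bits):
    hist = np.bincount(img.ravel(), minlength=256)
    peak = int(hist.argmax())                            # most populated gray level
    zero = int(np.argmin(hist[peak + 1:])) + peak + 1    # (near-)empty level above it
    out = img.astype(np.int32).copy()
    out[(out > peak) & (out < zero)] += 1                # open a gap next to the peak
    flat = out.ravel()
    carriers = np.flatnonzero(flat == peak)[:len(bits)]
    assert carriers.size == len(bits), "not enough peak pixels for the payload"
    flat[carriers] += np.asarray(bits, dtype=np.int32)   # peak -> peak+1 encodes a 1
    return flat.reshape(img.shape).astype(np.uint8), peak, zero

def hs_extract(stego, peak, zero, n_bits):
    flat = stego.astype(np.int32).ravel()
    carriers = np.flatnonzero((flat == peak) | (flat == peak + 1))[:n_bits]
    bits = (flat[carriers] == peak + 1).astype(np.uint8)
    flat[flat == peak + 1] = peak                        # undo the embedding ...
    flat[(flat > peak + 1) & (flat <= zero)] -= 1        # ... and undo the shift
    return bits, flat.reshape(stego.shape).astype(np.uint8)

rng = np.random.default_rng(0)
img = rng.integers(0, 64, size=(128, 128), dtype=np.uint8)   # toy "image"
msg = rng.integers(0, 2, size=50, dtype=np.uint8)
stego, peak, zero = hs_embed(img, msg)
recovered_bits, restored = hs_extract(stego, peak, zero, len(msg))
assert np.array_equal(recovered_bits, msg) and np.array_equal(restored, img)
```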

  12. Controlling the set of carbon-fiber embedded cement with electric current

    DOEpatents

    Mattus, Alfred J.

    2004-06-15

    A method for promoting cement or concrete set on demand in concrete that has been chemically retarded. Carbon fiber is added to the concrete, enabling it to become electrically conductive, together with a sodium tartrate retardant and copper sulfate, which forms a copper tartrate complex in alkaline concrete mixes. Using electricity, the concrete mix anodically converts the retarding tartrate to an insoluble polyester polymer. The carbon fibers act as a continuous anode surface, with a counter electrode wire embedded in the mix. Upon energizing, the retarding effect of the tartrate is defeated by formation of the polyester polymer through condensation esterification, thereby allowing the normal set to proceed unimpeded.

  13. Coupling reconstruction and motion estimation for dynamic MRI through optical flow constraint

    NASA Astrophysics Data System (ADS)

    Zhao, Ningning; O'Connor, Daniel; Gu, Wenbo; Ruan, Dan; Basarab, Adrian; Sheng, Ke

    2018-03-01

    This paper addresses the problem of dynamic magnetic resonance image (DMRI) reconstruction and motion estimation jointly. Because of the inherent anatomical movements in DMRI acquisition, reconstruction of DMRI using motion estimation/compensation (ME/MC) has been explored under the compressed sensing (CS) scheme. In this paper, by embedding the intensity based optical flow (OF) constraint into the traditional CS scheme, we are able to couple the DMRI reconstruction and motion vector estimation. Moreover, the OF constraint is employed in a specific coarse resolution scale in order to reduce the computational complexity. The resulting optimization problem is then solved using a primal-dual algorithm due to its efficiency when dealing with nondifferentiable problems. Experiments on highly accelerated dynamic cardiac MRI with multiple receiver coils validate the performance of the proposed algorithm.

  14. Setting monitoring objectives for landscape-size areas

    Treesearch

    Craig M. Olson; Dean Angelides

    2000-01-01

    The setting of objectives for monitoring schemes for landscape-size areas should be a complex task in today's regulatory and sociopolitical atmosphere. The technology available today, the regulatory environment, and the sociopolitical considerations require multiresource inventory and monitoring schemes, whether the ownership is industrial or for preservation....

  15. Automatic segmentation of histological structures in normal and neoplastic mammary gland tissue sections

    NASA Astrophysics Data System (ADS)

    Fernandez-Gonzalez, Rodrigo; Deschamps, Thomas; Idica, Adam; Malladi, Ravikanth; Ortiz de Solorzano, Carlos

    2003-07-01

    In this paper we present a scheme for real time segmentation of histological structures in microscopic images of normal and neoplastic mammary gland sections. Paraffin embedded or frozen tissue blocks are sliced, and sections are stained with hematoxylin and eosin (H&E). The sections are then imaged using conventional bright field microscopy. The background of the images is corrected by arithmetic manipulation using a "phantom." Then we use the fast marching method with a speed function that depends on the brightness gradient of the image to obtain a preliminary approximation to the boundaries of the structures of interest within a region of interest (ROI) of the entire section manually selected by the user. We use the result of the fast marching method as the initial condition for the level set motion equation. We run this last method for a few steps and obtain the final result of the segmentation. These results can be connected from section to section to build a three-dimensional reconstruction of the entire tissue block that we are studying.

  16. GPR measurements of attenuation in concrete

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eisenmann, David, E-mail: djeisen@cnde.iastate.edu; Margetan, Frank J., E-mail: djeisen@cnde.iastate.edu; Pavel, Brittney, E-mail: djeisen@cnde.iastate.edu

    2015-03-31

    Ground-penetrating radar (GPR) signals from concrete structures are affected by several phenomena, including: (1) transmission and reflection coefficients at interfaces; (2) the radiation patterns of the antenna(s) being used; and (3) the material properties of concrete and any embedded objects. In this paper we investigate different schemes for determining the electromagnetic (EM) attenuation of concrete from measured signals obtained using commercially-available GPR equipment. We adapt procedures commonly used in ultrasonic inspections where one compares the relative strengths of two or more signals having different travel paths through the material of interest. After correcting for beam spread (i.e., diffraction), interface phenomena, and equipment amplification settings, any remaining signal differences are assumed to be due to attenuation, thus allowing the attenuation coefficient (say, in dB of loss per inch of travel) to be estimated. We begin with a brief overview of our approach, and then discuss how diffraction corrections were determined for our two 1.6 GHz GPR antennas. We then present results of attenuation measurements for two types of concrete using both pulse/echo and pitch/catch measurement setups.

  17. Sensing Home: A Cost-Effective Design for Smart Home via Heterogeneous Wireless Networks

    PubMed Central

    Fan, Xiaohu; Huang, Hao; Qi, Shipeng; Luo, Xincheng; Zeng, Jing; Xie, Qubo; Xie, Changsheng

    2015-01-01

    The aging population has inspired the marketing of advanced real-time devices for home health care, and more and more wearable devices and mobile applications have emerged in this field. However, to properly collect behavior information, accurately recognize human activities, and deploy the whole system in a real living environment is a challenging task. In this paper, we propose a feasible wireless-based solution to deploy a data collection scheme, activity recognition model, feedback control and mobile integration via heterogeneous networks. We compared candidate algorithms and found one suitable for running on cost-efficient embedded devices. Specifically, we use the Super Set Transformation method to map the raw data into a sparse binary matrix. Furthermore, the front-end devices, designed for low power consumption, gather the living data of the inhabitant via ZigBee to reduce the burden of wiring work. Finally, we evaluated our approach and show it can achieve a theoretical time-slice accuracy of 98%. The mapping solution we propose is compatible with more wearable devices and mobile apps. PMID:26633424

  18. GPR measurements of attenuation in concrete

    NASA Astrophysics Data System (ADS)

    Eisenmann, David; Margetan, Frank J.; Pavel, Brittney

    2015-03-01

    Ground-penetrating radar (GPR) signals from concrete structures are affected by several phenomena, including: (1) transmission and reflection coefficients at interfaces; (2) the radiation patterns of the antenna(s) being used; and (3) the material properties of concrete and any embedded objects. In this paper we investigate different schemes for determining the electromagnetic (EM) attenuation of concrete from measured signals obtained using commercially-available GPR equipment. We adapt procedures commonly used in ultrasonic inspections where one compares the relative strengths of two or more signals having different travel paths through the material of interest. After correcting for beam spread (i.e., diffraction), interface phenomena, and equipment amplification settings, any remaining signal differences are assumed to be due to attenuation, thus allowing the attenuation coefficient (say, in dB of loss per inch of travel) to be estimated. We begin with a brief overview of our approach, and then discuss how diffraction corrections were determined for our two 1.6 GHz GPR antennas. We then present results of attenuation measurements for two types of concrete using both pulse/echo and pitch/catch measurement setups.

  19. Sensing Home: A Cost-Effective Design for Smart Home via Heterogeneous Wireless Networks.

    PubMed

    Fan, Xiaohu; Huang, Hao; Qi, Shipeng; Luo, Xincheng; Zeng, Jing; Xie, Qubo; Xie, Changsheng

    2015-12-03

    The aging population has inspired the marketing of advanced real-time devices for home health care, and more and more wearable devices and mobile applications have emerged in this field. However, to properly collect behavior information, accurately recognize human activities, and deploy the whole system in a real living environment is a challenging task. In this paper, we propose a feasible wireless-based solution to deploy a data collection scheme, activity recognition model, feedback control and mobile integration via heterogeneous networks. We compared candidate algorithms and found one suitable for running on cost-efficient embedded devices. Specifically, we use the Super Set Transformation method to map the raw data into a sparse binary matrix. Furthermore, the front-end devices, designed for low power consumption, gather the living data of the inhabitant via ZigBee to reduce the burden of wiring work. Finally, we evaluated our approach and show it can achieve a theoretical time-slice accuracy of 98%. The mapping solution we propose is compatible with more wearable devices and mobile apps.

  20. Arc-welding quality assurance by means of embedded fiber sensor and spectral processing combining feature selection and neural networks

    NASA Astrophysics Data System (ADS)

    Mirapeix, J.; García-Allende, P. B.; Cobo, A.; Conde, O.; López-Higuera, J. M.

    2007-07-01

    A new spectral processing technique designed for its application in the on-line detection and classification of arc-welding defects is presented in this paper. A non-invasive fiber sensor embedded within a TIG torch collects the plasma radiation originating during the welding process. The spectral information is then processed by means of two consecutive stages. A compression algorithm is first applied to the data, allowing real-time analysis. The selected spectral bands are then used to feed a classification algorithm, which is demonstrated to provide efficient weld defect detection and classification. The results obtained with the proposed technique are compared to those of a similar processing scheme presented in a previous paper, showing an improvement in the performance of the monitoring system.
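
    As a rough illustration of the two-stage idea (band selection followed by classification), the sketch below uses scikit-learn's SelectKBest as a stand-in for the compression/selection stage and a small neural network as the classifier. The synthetic spectra, band count, and class structure are assumptions, not the authors' data or algorithms.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for plasma spectra: 200 welds x 512 spectral bands,
# with a handful of bands made weakly informative about a defect label.
X = rng.normal(size=(200, 512))
y = rng.integers(0, 2, size=200)
X[y == 1, 100:105] += 0.8   # "defect" class shifts a few bands

# Band selection (compression) followed by a small neural-network classifier.
pipeline = make_pipeline(
    SelectKBest(score_func=f_classif, k=20),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
print(cross_val_score(pipeline, X, y, cv=5).mean())
```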

  1. Exploring synchronisation in nonlinear data assimilation

    NASA Astrophysics Data System (ADS)

    Rodrigues-Pinheiro, Flavia; van Leeuwen, Peter Jan

    2016-04-01

    Present-day data assimilation methods are based on linearizations and face serious problems in strongly nonlinear cases such as convection. A promising solution to this problem is a particle filter, which provides a representation of the model probability density function (pdf) by a discrete set of model states, or particles. The basic particle filter uses Bayes's theorem directly, but does not work in high-dimensional cases. The performance can be improved by considering the proposal density freedom. This allows one to change the model equations to bring the particles closer to the observations, resulting in very efficient update schemes at observation times, but extending these schemes between observation times is computationally expensive. Simple solutions like nudging have been shown to be not powerful enough. A potential solution might be synchronization, in which one tries to synchronise the model of a system with the true evolution of the system via the observations. In practice this means that an extra term is added to the model equations that hampers growth of instabilities on the synchronization manifold. Especially the delayed versions, where observations are allowed to influence the state in the past, have shown some remarkable successes. Unfortunately, all these efforts ignore errors in the observations, and as soon as these are introduced the performance degrades considerably. There is a close connection between time-delayed synchronization and a Kalman smoother, which does allow for observational (and other) errors. In this presentation we will explore this connection to the full, with a view to extending synchronization to more realistic settings. Specifically, the spread of information from observed to unobserved variables is studied in detail. The results indicate that this extended synchronisation is a promising tool to steer the model states towards the observations efficiently. If time permits, we will show initial results of embedding the new synchronization method into a particle filter.
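
    A toy example of the basic (non-delayed) synchronization idea is sketched below: a relaxation term pulls the model's observed component toward noisy observations of a Lorenz-63 "truth" run. The model, gain, noise level, and integrator are illustrative assumptions; the delayed and smoother-based variants discussed above are not shown.

```python
import numpy as np

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def step(state, dt, obs=None, gain=0.0):
    """Euler step; if an observation of x is supplied, nudge x toward it."""
    d = lorenz63(state)
    if obs is not None:
        d[0] += gain * (obs - state[0])   # synchronization / nudging term
    return state + dt * d

rng = np.random.default_rng(0)
dt, n = 0.01, 5000
truth = np.array([1.0, 1.0, 1.0])
model = np.array([5.0, -5.0, 20.0])            # badly initialized model
for _ in range(n):
    truth = step(truth, dt)
    obs = truth[0] + 0.1 * rng.standard_normal()   # noisy observation of x only
    model = step(model, dt, obs=obs, gain=10.0)

print(np.abs(truth - model))   # errors shrink as the model synchronizes
```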

  2. Understanding the many-body expansion for large systems. III. Critical role of four-body terms, counterpoise corrections, and cutoffs.

    PubMed

    Liu, Kuan-Yu; Herbert, John M

    2017-10-28

    Papers I and II in this series [R. M. Richard et al., J. Chem. Phys. 141, 014108 (2014); K. U. Lao et al., ibid. 144, 164105 (2016)] have attempted to shed light on precision and accuracy issues affecting the many-body expansion (MBE), which only manifest in larger systems and thus have received scant attention in the literature. Many-body counterpoise (CP) corrections are shown to accelerate convergence of the MBE, which otherwise suffers from a mismatch between how basis-set superposition error affects subsystem versus supersystem calculations. In water clusters ranging in size up to (H2O)37, four-body terms prove necessary to achieve accurate results for both total interaction energies and relative isomer energies, but the sheer number of tetramers makes the use of cutoff schemes essential. To predict relative energies of (H2O)20 isomers, two approximations based on a lower level of theory are introduced and an ONIOM-type procedure is found to be very well converged with respect to the appropriate MBE benchmark, namely, a CP-corrected supersystem calculation at the same level of theory. Results using an energy-based cutoff scheme suggest that if reasonable approximations to the subsystem energies are available (based on classical multipoles, say), then the number of requisite subsystem calculations can be reduced even more dramatically than when distance-based thresholds are employed. The end result is several accurate four-body methods that do not require charge embedding, and which are stable in large basis sets such as aug-cc-pVTZ that have sometimes proven problematic for fragment-based quantum chemistry methods. Even with aggressive thresholding, however, the four-body approach at the self-consistent field level still requires roughly ten times more processors to outmatch the performance of the corresponding supersystem calculation, in test cases involving 1500-1800 basis functions.

  3. Understanding the many-body expansion for large systems. III. Critical role of four-body terms, counterpoise corrections, and cutoffs

    NASA Astrophysics Data System (ADS)

    Liu, Kuan-Yu; Herbert, John M.

    2017-10-01

    Papers I and II in this series [R. M. Richard et al., J. Chem. Phys. 141, 014108 (2014); K. U. Lao et al., ibid. 144, 164105 (2016)] have attempted to shed light on precision and accuracy issues affecting the many-body expansion (MBE), which only manifest in larger systems and thus have received scant attention in the literature. Many-body counterpoise (CP) corrections are shown to accelerate convergence of the MBE, which otherwise suffers from a mismatch between how basis-set superposition error affects subsystem versus supersystem calculations. In water clusters ranging in size up to (H2O)37, four-body terms prove necessary to achieve accurate results for both total interaction energies and relative isomer energies, but the sheer number of tetramers makes the use of cutoff schemes essential. To predict relative energies of (H2O)20 isomers, two approximations based on a lower level of theory are introduced and an ONIOM-type procedure is found to be very well converged with respect to the appropriate MBE benchmark, namely, a CP-corrected supersystem calculation at the same level of theory. Results using an energy-based cutoff scheme suggest that if reasonable approximations to the subsystem energies are available (based on classical multipoles, say), then the number of requisite subsystem calculations can be reduced even more dramatically than when distance-based thresholds are employed. The end result is several accurate four-body methods that do not require charge embedding, and which are stable in large basis sets such as aug-cc-pVTZ that have sometimes proven problematic for fragment-based quantum chemistry methods. Even with aggressive thresholding, however, the four-body approach at the self-consistent field level still requires roughly ten times more processors to outmatch the performance of the corresponding supersystem calculation, in test cases involving 1500-1800 basis functions.
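
    To make the bookkeeping of a truncated, distance-screened expansion concrete, the sketch below evaluates a two-body MBE with a center-of-mass distance cutoff. The fragment_energy function is a placeholder for a real subsystem electronic-structure call, and the coordinates, cutoff, and energy model are assumptions for illustration only; the paper's four-body, CP-corrected treatment is not reproduced here.

```python
import itertools
import numpy as np

def fragment_energy(coords):
    """Placeholder for a real subsystem electronic-structure calculation."""
    return -0.1 * len(coords) + 0.01 * np.sum(coords ** 2)

def mbe2_energy(fragments, cutoff):
    """Two-body many-body expansion with a distance-based cutoff.

    fragments : list of (n_atoms, 3) coordinate arrays
    cutoff    : skip dimers whose centers are farther apart than this
    """
    e1 = [fragment_energy(f) for f in fragments]
    total = sum(e1)
    centers = [f.mean(axis=0) for f in fragments]
    for i, j in itertools.combinations(range(len(fragments)), 2):
        if np.linalg.norm(centers[i] - centers[j]) > cutoff:
            continue   # distance-based screening of dimer terms
        e_ij = fragment_energy(np.vstack([fragments[i], fragments[j]]))
        total += e_ij - e1[i] - e1[j]   # pairwise correction
    # A real MBE would continue with screened trimer and tetramer corrections.
    return total

rng = np.random.default_rng(1)
waters = [rng.normal(loc=3.0 * k, size=(3, 3)) for k in range(6)]
print(mbe2_energy(waters, cutoff=6.0))
```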

  4. Comparison of Feature Selection Techniques in Machine Learning for Anatomical Brain MRI in Dementia.

    PubMed

    Tohka, Jussi; Moradi, Elaheh; Huttunen, Heikki

    2016-07-01

    We present a comparative split-half resampling analysis of various data-driven feature selection and classification methods for whole-brain voxel-based classification analysis of anatomical magnetic resonance images. We compared support vector machines (SVMs), with or without filter-based feature selection, several embedded feature selection methods, and stability selection. While comparisons of the accuracy of various classification methods have been reported previously, the variability of the out-of-training-sample classification accuracy and of the set of selected features due to independent training and test sets has not been previously addressed in a brain imaging context. We studied two classification problems: 1) Alzheimer's disease (AD) vs. normal control (NC) and 2) mild cognitive impairment (MCI) vs. NC classification. In AD vs. NC classification, the variability in the test accuracy due to the subject sample did not vary between different methods and exceeded the variability due to different classifiers. In MCI vs. NC classification, particularly with a large training set, embedded feature selection methods outperformed SVM-based ones, with the difference in test accuracy exceeding the test accuracy variability due to the subject sample. The filter and embedded methods produced divergent feature patterns for MCI vs. NC classification, which suggests the utility of embedded feature selection for this problem when linked with its good generalization performance. The stability of the feature sets was strongly correlated with the number of features selected, weakly correlated with the stability of classification accuracy, and uncorrelated with the average classification accuracy.
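
    The sketch below illustrates the split-half resampling protocol for one of the compared pipelines (a filter selector followed by a linear SVM), tracking both test-accuracy variability and feature-set overlap across splits. The synthetic data, feature counts, and number of splits are assumptions; this is not the authors' evaluation code.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1000))      # stand-in for voxel features
y = rng.integers(0, 2, size=200)
X[y == 1, :20] += 0.5                 # a few informative "voxels"

accs, feature_sets = [], []
for split in range(20):               # split-half resampling
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.5, stratify=y, random_state=split)
    selector = SelectKBest(f_classif, k=50).fit(X_tr, y_tr)
    clf = LinearSVC(dual=False).fit(selector.transform(X_tr), y_tr)
    accs.append(clf.score(selector.transform(X_te), y_te))
    feature_sets.append(set(np.flatnonzero(selector.get_support())))

# Variability of out-of-training accuracy and stability of the selected features.
print("accuracy: %.2f +/- %.2f" % (np.mean(accs), np.std(accs)))
overlaps = [len(a & b) / 50 for a, b in zip(feature_sets[:-1], feature_sets[1:])]
print("mean feature overlap between splits: %.2f" % np.mean(overlaps))
```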

  5. Adapting Cooperative Learning and Embedding It into Holistic Language Usage.

    ERIC Educational Resources Information Center

    Bailey, Dora L.; Ginnetti, Philip

    Class collaboration and small group composition illustrate the embedding of cooperative learning theory in whole language classroom events. Through this experience all students participate in active learning. The teacher has a weighty role in decision making, setting of the lesson, assigning roles, and monitoring segments of cooperative learning…

  6. NOVEL EMBEDDED CERAMIC ELECTRODE SYSTEM TO ACTIVATE NANOSTRUCTURED TITANIUM DIOXIDE FOR DEGRADATION OF MTBE

    EPA Science Inventory

    A novel reactor combining a flame-deposited nanostructured titanium dioxide film and a set of embedded ceramic electrodes was designed, developed and tested for degradation of methyl tert-butyl ether (MTBE) in water. On applying a voltage to the ceramic electrodes, a surface coro...

  7. Data embedding employing degenerate clusters of data having differences less than noise value

    DOEpatents

    Sanford, II, Maxwell T.; Handel, Theodore G.

    1998-01-01

    A method of embedding auxiliary information into a set of host data, such as a photograph, television signal, facsimile transmission, or identification card. All such host data contain intrinsic noise, allowing pixels in the host data which are nearly identical and which have values differing by less than the noise value to be manipulated and replaced with auxiliary data. As the embedding method does not change the elemental values of the host data, the auxiliary data do not noticeably affect the appearance or interpretation of the host data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user.
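
    A minimal sketch of the general idea, not the patented method: pairs of host values that differ by less than an assumed noise threshold are reordered to carry auxiliary bits, so the host's elemental values are preserved and the payload can be read back by the reverse comparison. The threshold, pairing rule, and capacity handling are illustrative assumptions.

```python
import numpy as np

def embed_bits(host, bits, noise=4):
    """Hide bits in pairs of host values that differ by less than `noise`.

    A pair (a, b) with 0 < |a - b| < noise is ordered so that a < b encodes 0
    and a > b encodes 1; the change stays below the noise floor of the host data.
    """
    data = host.astype(int).ravel().copy()
    k = 0
    for i in range(0, data.size - 1, 2):
        if k >= len(bits):
            break
        a, b = data[i], data[i + 1]
        if 0 < abs(a - b) < noise:
            lo, hi = min(a, b), max(a, b)
            data[i], data[i + 1] = (lo, hi) if bits[k] == 0 else (hi, lo)
            k += 1
    return data.reshape(host.shape), k   # k = number of bits actually embedded

def extract_bits(stego, n_bits, noise=4):
    data = stego.astype(int).ravel()
    bits = []
    for i in range(0, data.size - 1, 2):
        if len(bits) >= n_bits:
            break
        a, b = data[i], data[i + 1]
        if 0 < abs(a - b) < noise:        # the same pairs qualify after embedding
            bits.append(0 if a < b else 1)
    return bits

rng = np.random.default_rng(0)
host = rng.integers(0, 256, size=(64, 64))
stego, n = embed_bits(host, [1, 0, 1, 1, 0, 0, 1, 0])
print(extract_bits(stego, n))
```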

  8. The Activation of Embedded Words in Spoken Word Recognition

    PubMed Central

    Zhang, Xujin; Samuel, Arthur G.

    2015-01-01

    The current study investigated how listeners understand English words that have shorter words embedded in them. A series of auditory-auditory priming experiments assessed the activation of six types of embedded words (2 embedded positions × 3 embedded proportions) under different listening conditions. Facilitation of lexical decision responses to targets (e.g., pig) associated with words embedded in primes (e.g., hamster) indexed activation of the embedded words (e.g., ham). When the listening conditions were optimal, isolated embedded words (e.g., ham) primed their targets in all six conditions (Experiment 1a). Within carrier words (e.g., hamster), the same set of embedded words produced priming only when they were at the beginning or comprised a large proportion of the carrier word (Experiment 1b). When the listening conditions were made suboptimal by expanding or compressing the primes, significant priming was found for isolated embedded words (Experiment 2a), but no priming was produced when the carrier words were compressed/expanded (Experiment 2b). Similarly, priming was eliminated when the carrier words were presented with one segment replaced by noise (Experiment 3). When cognitive load was imposed, priming for embedded words was again found when they were presented in isolation (Experiment 4a), but not when they were embedded in the carrier words (Experiment 4b). The results suggest that both embedded position and proportion play important roles in the activation of embedded words, but that such activation only occurs under unusually good listening conditions. PMID:25593407

  9. The Activation of Embedded Words in Spoken Word Recognition.

    PubMed

    Zhang, Xujin; Samuel, Arthur G

    2015-01-01

    The current study investigated how listeners understand English words that have shorter words embedded in them. A series of auditory-auditory priming experiments assessed the activation of six types of embedded words (2 embedded positions × 3 embedded proportions) under different listening conditions. Facilitation of lexical decision responses to targets (e.g., pig) associated with words embedded in primes (e.g., hamster) indexed activation of the embedded words (e.g., ham). When the listening conditions were optimal, isolated embedded words (e.g., ham) primed their targets in all six conditions (Experiment 1a). Within carrier words (e.g., hamster), the same set of embedded words produced priming only when they were at the beginning or comprised a large proportion of the carrier word (Experiment 1b). When the listening conditions were made suboptimal by expanding or compressing the primes, significant priming was found for isolated embedded words (Experiment 2a), but no priming was produced when the carrier words were compressed/expanded (Experiment 2b). Similarly, priming was eliminated when the carrier words were presented with one segment replaced by noise (Experiment 3). When cognitive load was imposed, priming for embedded words was again found when they were presented in isolation (Experiment 4a), but not when they were embedded in the carrier words (Experiment 4b). The results suggest that both embedded position and proportion play important roles in the activation of embedded words, but that such activation only occurs under unusually good listening conditions.

  10. Groundwater connectivity of upland-embedded wetlands in the Prairie Pothole Region

    USGS Publications Warehouse

    Neff, Brian; Rosenberry, Donald O.

    2018-01-01

    Groundwater connections from upland-embedded wetlands to downstream waterbodies remain poorly understood. In principle, water from upland-embedded wetlands situated high in a landscape should flow via groundwater to waterbodies situated lower in the landscape. However, the degree of groundwater connectivity varies across systems due to factors such as geologic setting, hydrologic conditions, and topography. We use numerical models to evaluate the conditions suitable for groundwater connectivity between upland-embedded wetlands and downstream waterbodies in the prairie pothole region of North Dakota (USA). Results show groundwater connectivity between upland-embedded wetlands and other waterbodies is restricted when these wetlands are surrounded by a mounding water table. However, connectivity exists among adjacent upland-embedded wetlands where water–table mounds do not form. In addition, the presence of sand layers greatly facilitates groundwater connectivity of upland-embedded wetlands. Anisotropy can facilitate connectivity via groundwater flow, but only if it becomes unrealistically large. These findings help consolidate previously divergent views on the significance of local and regional groundwater flow in the prairie pothole region.

  11. Subsystem real-time time dependent density functional theory.

    PubMed

    Krishtal, Alisa; Ceresoli, Davide; Pavanello, Michele

    2015-04-21

    We present the extension of the Frozen Density Embedding (FDE) formulation of subsystem Density Functional Theory (DFT) to real-time Time Dependent Density Functional Theory (rt-TDDFT). FDE is a DFT-in-DFT embedding method that allows one to partition a larger Kohn-Sham system into a set of smaller, coupled Kohn-Sham systems. In addition to the computational advantage, FDE provides physical insight into the properties of embedded systems and the coupling interactions between them. The extension to rt-TDDFT is done straightforwardly by evolving the Kohn-Sham subsystems in time simultaneously, while updating the embedding potential between the systems at every time step. Two main applications are presented: the explicit excitation energy transfer in real time between subsystems, demonstrated for the case of the Na4 cluster, and the effect of the embedding on the optical spectra of coupled chromophores. In particular, the importance of including the full dynamic response in the embedding potential is demonstrated.

  12. Prediction of tautomer ratios by embedded-cluster integral equation theory

    NASA Astrophysics Data System (ADS)

    Kast, Stefan M.; Heil, Jochen; Güssregen, Stefan; Schmidt, K. Friedemann

    2010-04-01

    The "embedded cluster reference interaction site model" (EC-RISM) approach combines statistical-mechanical integral equation theory and quantum-chemical calculations for predicting thermodynamic data for chemical reactions in solution. The electronic structure of the solute is determined self-consistently with the structure of the solvent that is described by 3D RISM integral equation theory. The continuous solvent-site distribution is mapped onto a set of discrete background charges ("embedded cluster") that represent an additional contribution to the molecular Hamiltonian. The EC-RISM analysis of the SAMPL2 challenge set of tautomers proceeds in three stages. Firstly, the group of compounds for which quantitative experimental free energy data was provided was taken to determine appropriate levels of quantum-chemical theory for geometry optimization and free energy prediction. Secondly, the resulting workflow was applied to the full set, allowing for chemical interpretations of the results. Thirdly, disclosure of experimental data for parts of the compounds facilitated a detailed analysis of methodical issues and suggestions for future improvements of the model. Without specifically adjusting parameters, the EC-RISM model yields the smallest value of the root mean square error for the first set (0.6 kcal mol-1) as well as for the full set of quantitative reaction data (2.0 kcal mol-1) among the SAMPL2 participants.

  13. Spatially Tailored and Functionally Graded Light-Weight Structures for Optimum Mechanical Performance

    DTIC Science & Technology

    2008-01-15

    The grading scheme involves embedding particles only in the outer layers of a laminate, achieving maximal increases in bending stiffness with a minimum... by Eq. (19), with d=2. Longitudinal-transverse shear modulus: the shear modulus for distortion of the laminate in axes with one direction aligned... The effective Poisson's ratio ν_LT^e is dictated by the other material constants of the laminate (Hill, 1964; Torquato, 2001).

  14. Optical properties of YbMnBi2: A type II Weyl semimetal candidate

    NASA Astrophysics Data System (ADS)

    Pal, A.; Chinotti, M.; Degiorgi, L.; Ren, W. J.; Petrovic, C.

    2018-05-01

    We discuss our recent optical investigation of YbMnBi2, a representative type II Weyl semimetal, by considering a simple scheme for the electronic structure, which can be embedded within a recent theoretical approach for the calculation of the excitation spectrum. Our study allows us to disentangle the generic optical fingerprints of Weyl fermions, which are in broad agreement with the theoretical predictions but also challenge the present understanding of their electrodynamic response.

  15. A Novel Texture-Quantization-Based Reversible Multiple Watermarking Scheme Applied to Health Information System.

    PubMed

    Turuk, Mousami; Dhande, Ashwin

    2018-04-01

    The recent innovations in information and communication technologies have appreciably changed the panorama of the health information system (HIS). These advances provide new means to process, handle, and share medical images, and they also heighten medical image security issues in terms of confidentiality, reliability, and integrity. Digital watermarking has emerged as a new era that offers acceptable solutions to the security issues in HIS. Texture is a significant feature for detecting the embedding sites in an image, which further leads to substantial improvement in robustness. However, from the perspective of digital watermarking, this feature has received meager attention in the reported literature. This paper exploits the texture property of an image and presents a novel hybrid texture-quantization-based approach for reversible multiple watermarking. The watermarked image quality has been assessed using the peak signal to noise ratio (PSNR), structural similarity measure (SSIM), and universal image quality index (UIQI), and the obtained results are superior to the state-of-the-art methods. The algorithm has been evaluated on a variety of medical imaging modalities (CT, MRA, MRI, US) and robustness has been verified considering various image processing attacks, including JPEG compression. The proposed scheme offers additional security using repetitive embedding of BCH-encoded watermarks and an ADM-encrypted ECG signal. Experimental results achieved a maximum hiding capacity of 22,616 bits with a PSNR of 53.64 dB.
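
    For reference, two of the reported quality metrics can be computed directly with scikit-image, as in the hedged sketch below; the "watermark" here is just a small random bounded perturbation standing in for an actual embedding step, not the paper's algorithm.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(256, 256)).astype(np.uint8)

# Stand-in for a reversible watermark: a small, bounded perturbation.
watermarked = np.clip(cover.astype(int) + rng.integers(-1, 2, cover.shape), 0, 255)
watermarked = watermarked.astype(np.uint8)

print("PSNR: %.2f dB" % peak_signal_noise_ratio(cover, watermarked, data_range=255))
print("SSIM: %.4f" % structural_similarity(cover, watermarked, data_range=255))
```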

  16. Active Flow Separation Control of a Stator Vane Using Surface Injection in a Multistage Compressor Experiment

    NASA Technical Reports Server (NTRS)

    Culley, Dennis E.; Bright, Michelle M.; Prahst, Patricia S.; Strazisar, Anthony J.

    2003-01-01

    Micro-flow control actuation embedded in a stator vane was used to successfully control separation and improve near-stall performance in a multistage compressor rig at NASA Glenn. Using specially designed stator vanes configured with internal actuation to deliver pulsating air through slots along the suction surface, a research study was performed to identify performance benefits using this microflow control approach. Pressure profiles and unsteady pressure measurements along the blade surface and at the shroud provided a dynamic look at the compressor during microflow air injection. These pressure measurements led to a tracking algorithm to identify the onset of separation. The testing included steady air injection at various slot locations along the vane. The research also examined the benefit of pulsed injection and actively controlled air injection along the stator vane. Two types of actuation schemes were studied, including an embedded actuator for on-blade control. Successful application of an online detection and flow control scheme will be discussed. Testing showed dramatic performance benefit for flow reattachment and subsequent improvement in diffusion through the use of pulsed controlled injection. The paper will discuss the experimental setup, the blade configurations, and preliminary CFD results which guided the slot location along the blade. The paper will also show the pressure profiles and unsteady pressure measurements used to track flow control enhancement, and will conclude with the tracking algorithm for adjusting the control.

  17. Efficient Graph-Based Resource Allocation Scheme Using Maximal Independent Set for Randomly- Deployed Small Star Networks

    PubMed Central

    Zhou, Jian; Wang, Lusheng; Wang, Weidong; Zhou, Qingfeng

    2017-01-01

    In future scenarios of heterogeneous and dense networks, randomly-deployed small star networks (SSNs) become a key paradigm, whose system performance is restricted by inter-SSN interference and which require an efficient resource allocation scheme for interference coordination. Traditional resource allocation schemes do not specifically focus on this paradigm and are usually too time consuming in dense networks. In this article, a very efficient graph-based scheme is proposed, which applies the maximal independent set (MIS) concept from graph theory to divide SSNs into almost interference-free groups. We first construct an interference graph for the system based on a derived distance threshold indicating, for any pair of SSNs, whether there is intolerable inter-SSN interference. Then, SSNs are divided into MISs, and the same resource can be repetitively used by all the SSNs in each MIS. Empirical parameters and equations are set in the scheme to guarantee high performance. Finally, extensive scenarios, both dense and non-dense, are randomly generated and simulated to demonstrate the performance of our scheme, indicating that it outperforms the classical max K-cut-based scheme in terms of system capacity, utility and especially time cost. Its achieved system capacity, utility and fairness can be close to the near-optimal strategy obtained by a time-consuming simulated annealing search. PMID:29113109
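
    A compact sketch of the grouping step is shown below using networkx's maximal_independent_set: SSNs closer than a distance threshold are connected in an interference graph, and MISs are peeled off so that each group can reuse the same resource. The positions, threshold, and greedy peeling order are illustrative assumptions, not the paper's empirically tuned scheme.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
positions = rng.uniform(0, 500, size=(30, 2))    # random SSN locations (meters)
d_threshold = 120.0                              # intolerable-interference distance

# Interference graph: an edge means the two SSNs cannot share a resource.
G = nx.Graph()
G.add_nodes_from(range(len(positions)))
for i in range(len(positions)):
    for j in range(i + 1, len(positions)):
        if np.linalg.norm(positions[i] - positions[j]) < d_threshold:
            G.add_edge(i, j)

# Peel off maximal independent sets; each group can reuse the same resource.
groups, remaining = [], G.copy()
while remaining.number_of_nodes() > 0:
    mis = nx.maximal_independent_set(remaining, seed=0)
    groups.append(sorted(mis))
    remaining.remove_nodes_from(mis)

print(len(groups), "resource groups:", groups)
```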

  18. Exploring Sampling in the Detection of Multicategory EEG Signals

    PubMed Central

    Siuly, Siuly; Kabir, Enamul; Wang, Hua; Zhang, Yanchun

    2015-01-01

    The paper presents a structure based on sampling and machine learning techniques for the detection of multicategory EEG signals, where random sampling (RS) and optimal allocation sampling (OS) are explored. In the proposed framework, before using the RS and OS schemes, the entire EEG signal of each class is partitioned into several groups based on a particular time period. The RS and OS schemes are used in order to obtain representative observations from each group of each category of EEG data. All of the samples selected by the RS scheme from the groups of each category are then combined into one set, named the RS set. In a similar way, an OS set is obtained for the OS scheme. Eleven statistical features are then extracted from the RS and OS sets separately. Finally, this study employs three well-known classifiers: k-nearest neighbor (k-NN), multinomial logistic regression with a ridge estimator (MLR), and support vector machine (SVM) to evaluate the performance of the RS and OS feature sets. The experimental outcomes demonstrate that the RS scheme represents the EEG signals well and that k-NN with the RS scheme is the optimum choice for detection of multicategory EEG signals. PMID:25977705
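
    A toy version of the RS pipeline is sketched below: each class's signal is partitioned into time-period groups, a random sample is drawn from each group, a few statistical features are computed, and a k-NN classifier is evaluated. The synthetic signals and the particular feature list are assumptions; the paper uses eleven features and real EEG data.

```python
import numpy as np
from scipy import stats
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def features(segment):
    """A few of the kinds of statistics used for EEG segments (illustrative)."""
    return [segment.mean(), segment.std(), segment.min(), segment.max(),
            stats.skew(segment), stats.kurtosis(segment)]

X, y = [], []
for label, scale in enumerate([1.0, 1.5, 2.5]):          # three EEG "categories"
    signal = scale * rng.standard_normal(12 * 1024)
    groups = np.split(signal, 12)                         # partition by time period
    for g in groups:
        idx = rng.choice(len(g), size=128, replace=False)  # random sampling (RS)
        X.append(features(g[idx]))
        y.append(label)

X, y = np.array(X), np.array(y)
print(cross_val_score(KNeighborsClassifier(n_neighbors=3), X, y, cv=4).mean())
```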

  19. Accurate Adaptive Level Set Method and Sharpening Technique for Three Dimensional Deforming Interfaces

    NASA Technical Reports Server (NTRS)

    Kim, Hyoungin; Liou, Meng-Sing

    2011-01-01

    In this paper, we demonstrate improved accuracy of the level set method for resolving deforming interfaces by proposing two key elements: (1) accurate level set solutions on adapted Cartesian grids by judiciously choosing interpolation polynomials in regions of different grid levels and (2) enhanced reinitialization by an interface sharpening procedure. The level set equation is solved using a fifth order WENO scheme or a second order central differencing scheme depending on the availability of uniform stencils at each grid point. Grid adaptation criteria are determined so that the Hamiltonian functions at nodes adjacent to interfaces are always calculated by the fifth order WENO scheme. This selective usage of the fifth order WENO and second order central differencing schemes is confirmed to give more accurate results compared to those in the literature for standard test problems. In order to further improve accuracy, especially near thin filaments, we suggest an artificial sharpening method, which is similar in form to the conventional re-initialization method but utilizes the sign of curvature instead of the sign of the level set function. Consequently, volume loss due to numerical dissipation on thin filaments is remarkably reduced for the test problems.

  20. A New Hybrid Scheme for Preventing Channel Interference and Collision in Mobile Networks

    NASA Astrophysics Data System (ADS)

    Kim, Kyungjun; Han, Kijun

    This paper proposes a new hybrid scheme based on a given set of channels for preventing channel interference and collision in mobile networks. The proposed scheme is designed to improve system performance, focusing on enhancement of performance related to path breakage and channel interference. The objective of this scheme is to improve the performance of inter-node communication. Simulation results show that the new hybrid scheme reduces control message overhead more than a conventional random scheme.

  1. A Layered Searchable Encryption Scheme with Functional Components Independent of Encryption Methods

    PubMed Central

    Luo, Guangchun; Qin, Ke

    2014-01-01

    Searchable encryption technique enables the users to securely store and search their documents over the remote semitrusted server, which is especially suitable for protecting sensitive data in the cloud. However, various settings (based on symmetric or asymmetric encryption) and functionalities (ranked keyword query, range query, phrase query, etc.) are often realized by different methods with different searchable structures that are generally not compatible with each other, which limits the scope of application and hinders the functional extensions. We prove that asymmetric searchable structure could be converted to symmetric structure, and functions could be modeled separately apart from the core searchable structure. Based on this observation, we propose a layered searchable encryption (LSE) scheme, which provides compatibility, flexibility, and security for various settings and functionalities. In this scheme, the outputs of the core searchable component based on either symmetric or asymmetric setting are converted to some uniform mappings, which are then transmitted to loosely coupled functional components to further filter the results. In such a way, all functional components could directly support both symmetric and asymmetric settings. Based on LSE, we propose two representative and novel constructions for ranked keyword query (previously only available in symmetric scheme) and range query (previously only available in asymmetric scheme). PMID:24719565

  2. Global-scale regionalization of hydrological model parameters using streamflow data from many small catchments

    NASA Astrophysics Data System (ADS)

    Beck, Hylke; de Roo, Ad; van Dijk, Albert; McVicar, Tim; Miralles, Diego; Schellekens, Jaap; Bruijnzeel, Sampurno; de Jeu, Richard

    2015-04-01

    Motivated by the lack of large-scale model parameter regionalization studies, a large set of 3328 small catchments (< 10000 km2) around the globe was used to set up and evaluate five model parameterization schemes at global scale. The HBV-light model was chosen because of its parsimony and flexibility to test the schemes. The catchments were calibrated against observed streamflow (Q) using an objective function incorporating both behavioral and goodness-of-fit measures, after which the catchment set was split into subsets of 1215 donor and 2113 evaluation catchments based on the calibration performance. The donor catchments were subsequently used to derive parameter sets that were transferred to similar grid cells based on a similarity measure incorporating climatic and physiographic characteristics, thereby producing parameter maps with global coverage. Overall, there was a lack of suitable donor catchments for mountainous and tropical environments. The schemes with spatially-uniform parameter sets (EXP2 and EXP3) achieved the worst Q estimation performance in the evaluation catchments, emphasizing the importance of parameter regionalization. The direct transfer of calibrated parameter sets from donor catchments to similar grid cells (scheme EXP1) performed best, although there was still a large performance gap between EXP1 and HBV-light calibrated against observed Q. The schemes with parameter sets obtained by simultaneously calibrating clusters of similar donor catchments (NC10 and NC58) performed worse than EXP1. The relatively poor Q estimation performance achieved by two (uncalibrated) macro-scale hydrological models suggests there is considerable merit in regionalizing the parameters of such models. The global HBV-light parameter maps and ancillary data are freely available via http://water.jrc.ec.europa.eu.

  3. Breaking down the barriers of using strong authentication and encryption in resource constrained embedded systems

    NASA Astrophysics Data System (ADS)

    Knobler, Ron; Scheffel, Peter; Jackson, Scott; Gaj, Kris; Kaps, Jens Peter

    2013-05-01

    Various embedded systems, such as unattended ground sensors (UGS), are deployed in dangerous areas, where they are subject to compromise. Since numerous systems contain a network of devices that communicate with each other (often times with commercial off the shelf [COTS] radios), an adversary is able to intercept messages between system devices, which jeopardizes sensitive information transmitted by the system (e.g. location of system devices). Secret key algorithms such as AES are a very common means to encrypt all system messages to a sufficient security level, for which lightweight implementations exist for even very resource constrained devices. However, all system devices must use the appropriate key to encrypt and decrypt messages from each other. While traditional public key algorithms (PKAs), such as RSA and Elliptic Curve Cryptography (ECC), provide a sufficiently secure means to provide authentication and a means to exchange keys, these traditional PKAs are not suitable for very resource constrained embedded systems or systems which contain low reliability communication links (e.g. mesh networks), especially as the size of the network increases. Therefore, most UGS and other embedded systems resort to pre-placed keys (PPKs) or other naïve schemes which greatly reduce the security and effectiveness of the overall cryptographic approach. McQ has teamed with the Cryptographic Engineering Research Group (CERG) at George Mason University (GMU) to develop an approach using revolutionary cryptographic techniques that provides both authentication and encryption, but on resource constrained embedded devices, without the burden of large amounts of key distribution or storage.

  4. Real-Time Distributed Embedded Oscillator Operating Frequency Monitoring

    NASA Technical Reports Server (NTRS)

    Pollock, Julie; Oliver, Brett; Brickner, Christopher

    2012-01-01

    A document discusses the utilization of embedded clocks inside operating network data links as an auxiliary clock source to satisfy local oscillator monitoring requirements. Modern network interfaces, typically serial network links, often contain embedded clocking information of very tight precision to recover data from the link. This embedded clocking data can be utilized by the receiving device to monitor the local oscillator for tolerance to required specifications, which is often important in high-integrity fault-tolerant applications. A device can utilize a received embedded clock to determine whether the local or the remote device is out of tolerance using a single link. The local device can determine if it is failing, assuming a single-fault model, with two or more active links. Network fabric components, containing many operational links, can potentially determine faulty remote or local devices in the presence of multiple faults. Two methods of implementation are described. In the first method, a recovered clock can be used directly to monitor the local clock as a replacement of an external local oscillator. This scheme is consistent with a general clock monitoring function whereby two clock sources drive two counters that are compared over a fixed interval of time. In the second method, overflow/underflow conditions can be used to detect clock relationships for monitoring. These network interfaces often provide clock compensation circuitry to allow data to be transferred from the received (network) clock domain to the internal clock domain. This circuit could be modified to detect overflow/underflow conditions of the required buffering and report a fast or slow receive clock, respectively.
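
    The counter-comparison idea reduces to a small calculation, sketched below with illustrative frequencies and a made-up tolerance; a real implementation would count actual clock edges in hardware rather than multiplying nominal frequencies.

```python
def clock_out_of_tolerance(f_local_hz, f_recovered_hz, interval_s=0.1,
                           tolerance_ppm=100.0):
    """Compare two clocks by counting their cycles over a fixed interval.

    Returns True if the local oscillator deviates from the recovered
    (network-embedded) clock by more than `tolerance_ppm`.
    """
    local_count = int(f_local_hz * interval_s)          # counter driven by local clock
    recovered_count = int(f_recovered_hz * interval_s)  # counter driven by recovered clock
    deviation_ppm = 1e6 * abs(local_count - recovered_count) / recovered_count
    return deviation_ppm > tolerance_ppm

# Local oscillator running 250 ppm fast relative to a 100 MHz link clock.
print(clock_out_of_tolerance(100.025e6, 100e6))   # True
print(clock_out_of_tolerance(100.005e6, 100e6))   # False (within 100 ppm)
```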

  5. Nanochannel Device with Embedded Nanopore: a New Approach for Single-Molecule DNA Analysis and Manipulation

    NASA Astrophysics Data System (ADS)

    Zhang, Yuning; Reisner, Walter

    2012-02-01

    Nanopore and nanochannel based devices are robust methods for biomolecular sensing and single DNA manipulation. Nanopore-based DNA sensing has attractive features that make it a leading candidate as a single-molecule DNA sequencing technology. Nanochannel based extension of DNA, combined with enzymatic or denaturation-based barcoding schemes, is already a powerful approach for genome analysis. We believe that there is revolutionary potential in devices that combine nanochannels with nanopore detectors. In particular, due to the fast translocation of a DNA molecule through a standard nanopore configuration, there is an unfavorable trade-off between signal and sequence resolution. With a combined nanochannel-nanopore device, based on embedding a nanopore inside a nanochannel, we can in principle gain independent control over both DNA translocation speed and sensing signal, solving the key draw-back of the standard nanopore configuration. We will discuss our recent progress on device fabrication and characterization. In particular, we demonstrate that we can detect - using fluorescent microscopy - successful translocation of DNA from the nanochannel out through the nanopore, a possible method to 'select' a given barcode for further analysis. In particular, we show that in equilibrium DNA will not escape through an embedded sub-persistence length nanopore, suggesting that the embedded pore could be used as a nanoscale window through which to interrogate a nanochannel extended DNA molecule.

  6. Nanochannel Device with Embedded Nanopore: a New Approach for Single-Molecule DNA Analysis and Manipulation

    NASA Astrophysics Data System (ADS)

    Zhang, Yuning; Reisner, Walter

    2013-03-01

    Nanopore and nanochannel based devices are robust methods for biomolecular sensing and single DNA manipulation. Nanopore-based DNA sensing has attractive features that make it a leading candidate as a single-molecule DNA sequencing technology. Nanochannel based extension of DNA, combined with enzymatic or denaturation-based barcoding schemes, is already a powerful approach for genome analysis. We believe that there is revolutionary potential in devices that combine nanochannels with embedded pore detectors. In particular, due to the fast translocation of a DNA molecule through a standard nanopore configuration, there is an unfavorable trade-off between signal and sequence resolution. With a combined nanochannel-nanopore device, based on embedding a pore inside a nanochannel, we can in principle gain independent control over both DNA translocation speed and sensing signal, solving the key draw-back of the standard nanopore configuration. We demonstrate that we can optically detect successful translocation of DNA from the nanochannel out through the nanopore, a possible method to 'select' a given barcode for further analysis. In particular, we show that in equilibrium DNA will not escape through an embedded sub-persistence length nanopore, suggesting that the pore could be used as a nanoscale window through which to interrogate a nanochannel extended DNA molecule. Furthermore, electrical measurements through the nanopore are performed, indicating that DNA sensing is feasible using the nanochannel-nanopore device.

  7. Solving Set Cover with Pairs Problem using Quantum Annealing

    NASA Astrophysics Data System (ADS)

    Cao, Yudong; Jiang, Shuxian; Perouli, Debbie; Kais, Sabre

    2016-09-01

    Here we consider using quantum annealing to solve Set Cover with Pairs (SCP), an NP-hard combinatorial optimization problem that plays an important role in networking, computational biology, and biochemistry. We show an explicit construction of Ising Hamiltonians whose ground states encode the solution of SCP instances. We numerically simulate the time-dependent Schrödinger equation in order to test the performance of quantum annealing for random instances and compare with that of simulated annealing. We also discuss explicit embedding strategies for realizing our Hamiltonian construction on the D-Wave type restricted Ising Hamiltonian based on Chimera graphs. Our embedding on the Chimera graph preserves the structure of the original SCP instance and, in particular, the embedding for general complete bipartite graphs and logical disjunctions may be of broader use than the specific problem we deal with.

  8. Controlled Teleportation of a Qudit State by Partially Entangled GHZ States

    NASA Astrophysics Data System (ADS)

    Wang, Jin-wei; Shu, Lan; Mo, Zhi-wen; Zhang, Zhi-hua

    2014-08-01

    In this paper, we propose a controlled teleportation scheme which communicates an arbitrary ququart state via two sets of partially entangled GHZ states. The necessary measurements and operations are given in detail. Furthermore, the scheme is generalized to teleport a qudit state via s sets of partially entangled GHZ states.

  9. Turn Your Key--Reducing Truck Idling

    ERIC Educational Resources Information Center

    MacRae, Gareth; Stockport, Tina

    2008-01-01

    As Australia enters the era of emissions trading schemes, strategies to further curb emissions will grow in importance. At the same time, a national emissions trading scheme is set to be introduced whilst the country is set to increase its dependency and volume of road transport in years to come. This raises a doubly important question for…

  10. Opportunity NYC-Family Rewards: An Embedded Child and Family Study of Conditional Cash Transfers

    ERIC Educational Resources Information Center

    Morris, Pamela; Aber, J. Lawrence; Wolf, Sharon; Berg, Juliette

    2011-01-01

    This study builds on and informs ecological theory (Bronfenbrenner & Morris, 2006) by focusing on the contextual processes by which individual developmental trajectories can be altered. Ecological theory posits that children are embedded in a nested and interactive set of interrelated contexts beginning with the micro-system (the most…

  11. Embedding Individualized Social Goals into Routine Activities in Inclusive Early Childhood Classrooms

    ERIC Educational Resources Information Center

    Macy, Marisa G.; Bricker, Diane D.

    2007-01-01

    This study examined the effectiveness of embedding children's social goals into routine activities within inclusive preschool classroom settings. An AB (i.e. baseline and intervention) single-subject design was used across three male participants with identified disabilities. Three student-teachers, enrolled in a master's program at a university,…

  12. Massed Trials versus Trials Embedded into Game Play: Child Outcomes and Preference

    ERIC Educational Resources Information Center

    Ledford, Jennifer R.; Chazin, Kate T.; Harbin, Emilee R.; Ward, Sarah E.

    2017-01-01

    Limited data are available regarding how response prompting procedures should be used in early childhood settings. The purpose of this study was to compare the efficiency of progressive time delay instruction presented via two trial arrangements: massed and embedded. During massed trial sessions, a short instructional session was conducted,…

  13. Nursing Faculty Collaborate with Embedded Librarians to Serve Online Graduate Students in a Consortium Setting

    ERIC Educational Resources Information Center

    Guillot, Ladonna; Stahr, Beth; Meeker, Bonnie Juve'

    2010-01-01

    Nursing and library faculty face many information literacy challenges when graduate nursing programs migrate to online course delivery. The authors describe a collaborative model for providing cost-effective online library services to new graduate students in a three-university consortium. The embedded librarian service links a health sciences…

  14. Word embeddings and recurrent neural networks based on Long-Short Term Memory nodes in supervised biomedical word sense disambiguation.

    PubMed

    Jimeno Yepes, Antonio

    2017-09-01

    Word sense disambiguation helps identify the proper sense of ambiguous words in text. With large terminologies such as the UMLS Metathesaurus, ambiguities appear and highly effective disambiguation methods are required. Supervised learning methods are used as one approach to perform disambiguation. Features extracted from the context of an ambiguous word are used to identify its proper sense. The type of features has an impact on machine learning methods and thus affects disambiguation performance. In this work, we have evaluated several types of features derived from the context of the ambiguous word, and we have also explored more global features derived from MEDLINE using word embeddings. Results show that word embeddings improve the performance of more traditional features and also allow the use of recurrent neural network classifiers based on Long-Short Term Memory (LSTM) nodes. The combination of unigrams and word embeddings with an SVM sets a new state-of-the-art performance with a macro accuracy of 95.97 on the MSH WSD data set. Copyright © 2017 Elsevier Inc. All rights reserved.
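
    The feature combination described above can be sketched in a few lines: unigram counts are concatenated with averaged word-embedding vectors and fed to a linear SVM. The tiny corpus and the random vectors standing in for MEDLINE-trained embeddings are assumptions for illustration; the LSTM classifier is not shown.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

# Toy contexts of the ambiguous token "cold" (disease sense vs. temperature sense).
contexts = ["patient caught a cold and fever", "cold virus symptoms reported",
            "cold weather in winter", "a cold wind from the north"]
senses = [0, 0, 1, 1]

# Unigram features.
vectorizer = CountVectorizer()
X_unigram = vectorizer.fit_transform(contexts).toarray()

# Word-embedding features: average of per-token vectors. Random vectors are
# a stand-in for embeddings trained on MEDLINE or a similar corpus.
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=50) for w in vectorizer.get_feature_names_out()}
X_emb = np.array([np.mean([emb[w] for w in doc.split() if w in emb], axis=0)
                  for doc in contexts])

# Concatenate both feature types and train a linear SVM.
X = np.hstack([X_unigram, X_emb])
clf = LinearSVC(dual=False).fit(X, senses)
print(clf.predict(X))
```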

  15. Information-theoretic model selection for optimal prediction of stochastic dynamical systems from data

    NASA Astrophysics Data System (ADS)

    Darmon, David

    2018-03-01

    In the absence of mechanistic or phenomenological models of real-world systems, data-driven models become necessary. The discovery of various embedding theorems in the 1980s and 1990s motivated a powerful set of tools for analyzing deterministic dynamical systems via delay-coordinate embeddings of observations of their component states. However, in many branches of science, the condition of operational determinism is not satisfied, and stochastic models must be brought to bear. For such stochastic models, the tool set developed for delay-coordinate embedding is no longer appropriate, and a new toolkit must be developed. We present an information-theoretic criterion, the negative log-predictive likelihood, for selecting the embedding dimension for a predictively optimal data-driven model of a stochastic dynamical system. We develop a nonparametric estimator for the negative log-predictive likelihood and compare its performance to a recently proposed criterion based on active information storage. Finally, we show how the output of the model selection procedure can be used to compare candidate predictors for a stochastic system to an information-theoretic lower bound.
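
    A simplified version of the selection criterion is sketched below: for each candidate embedding dimension, a k-NN predictor is cross-validated and scored with a Gaussian negative log-predictive likelihood (the paper's nonparametric estimator is more general). The AR(2) test series, neighbor count, and fold scheme are assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_predict

def delay_matrix(x, dim):
    """Rows are delay vectors of length `dim`; the target is the next value."""
    rows = [x[i:i + dim] for i in range(len(x) - dim)]
    return np.array(rows), x[dim:]

# Noisy autoregressive series as a stand-in for an observed stochastic system.
rng = np.random.default_rng(0)
x = np.zeros(2000)
for t in range(2, 2000):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + 0.5 * rng.standard_normal()

for dim in range(1, 6):
    X, y = delay_matrix(x, dim)
    pred = cross_val_predict(KNeighborsRegressor(n_neighbors=20), X, y, cv=5)
    var = np.mean((y - pred) ** 2)   # out-of-sample residual variance
    nll = 0.5 * np.mean((y - pred) ** 2 / var + np.log(2 * np.pi * var))
    print(f"dim={dim}  negative log-predictive likelihood={nll:.3f}")
```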

  16. A Pareto-based Ensemble with Feature and Instance Selection for Learning from Multi-Class Imbalanced Datasets.

    PubMed

    Fernández, Alberto; Carmona, Cristobal José; José Del Jesus, María; Herrera, Francisco

    2017-09-01

    Imbalanced classification is related to those problems that have an uneven distribution among classes. In addition, when instances are located in the overlapped areas, the correct modeling of the problem becomes harder. Current solutions for both issues are often focused on the binary case study, as multi-class datasets require an additional effort to be addressed. In this research, we overcome these problems by combining feature and instance selection. Feature selection will simplify the overlapping areas, easing the generation of rules to distinguish among the classes. Selection of instances from all classes will address the imbalance itself by finding the most appropriate class distribution for the learning task, as well as possibly removing noise and difficult borderline examples. For the sake of obtaining an optimal joint set of features and instances, we embedded the search for both in a Multi-Objective Evolutionary Algorithm, using the C4.5 decision tree as the baseline classifier in this wrapper approach. The multi-objective scheme offers a double advantage: the search space becomes broader, and we may provide a set of different solutions in order to build an ensemble of classifiers. This proposal has been contrasted with several state-of-the-art solutions on imbalanced classification, showing excellent results in both binary and multi-class problems.

  17. Multispectral code excited linear prediction coding and its application in magnetic resonance images.

    PubMed

    Hu, J H; Wang, Y; Cahill, P T

    1997-01-01

    This paper reports a multispectral code excited linear prediction (MCELP) method for the compression of multispectral images. Different linear prediction models and adaptation schemes have been compared. The method that uses a forward adaptive autoregressive (AR) model has been proven to achieve a good compromise between performance, complexity, and robustness. This approach is referred to as the MFCELP method. Given a set of multispectral images, the linear predictive coefficients are updated over nonoverlapping three-dimensional (3-D) macroblocks. Each macroblock is further divided into several 3-D microblocks, and the best excitation signal for each microblock is determined through an analysis-by-synthesis procedure. The MFCELP method has been applied to multispectral magnetic resonance (MR) images. To satisfy the high quality requirement for medical images, the error between the original image set and the synthesized one is further specified using a vector quantizer. This method has been applied to images from 26 clinical MR neuro studies (20 slices/study, three spectral bands/slice, 256x256 pixels/band, 12 b/pixel). The MFCELP method provides a significant visual improvement over the discrete cosine transform (DCT) based Joint Photographic Experts Group (JPEG) method, the wavelet transform based embedded zero-tree wavelet (EZW) coding method, and the vector tree (VT) coding method, as well as the multispectral segmented autoregressive moving average (MSARMA) method we developed previously.

  18. Fisher information framework for time series modeling

    NASA Astrophysics Data System (ADS)

    Venkatesan, R. C.; Plastino, A.

    2017-08-01

    A robust prediction model invoking the Takens embedding theorem, whose working hypothesis is obtained via an inference procedure based on the minimum Fisher information principle, is presented. The coefficients of the ansatz, central to the working hypothesis, satisfy a time-independent Schrödinger-like equation in a vector setting. The inference of (i) the probability density function of the coefficients of the working hypothesis and (ii) the establishment of a constraint-driven pseudo-inverse condition for the modeling phase of the prediction scheme is made, for the case of normal distributions, with the aid of the quantum mechanical virial theorem. The well-known reciprocity relations and the associated Legendre transform structure for the Fisher information measure (FIM, hereafter)-based model in a vector setting (with least square constraints) are self-consistently derived. These relations are demonstrated to yield an intriguing form of the FIM for the modeling phase, which defines the working hypothesis solely in terms of the observed data. Prediction cases are presented employing time series obtained from: (i) the Mackey-Glass delay-differential equation, (ii) an ECG signal from the MIT-Beth Israel Deaconess Hospital (MIT-BIH) cardiac arrhythmia database, and (iii) an ECG signal from the Creighton University ventricular tachyarrhythmia database. The ECG samples were obtained from the Physionet online repository. These examples demonstrate the efficiency of the prediction model. Numerical examples for exemplary cases are provided.

  19. A broadcast-based key agreement scheme using set reconciliation for wireless body area networks.

    PubMed

    Ali, Aftab; Khan, Farrukh Aslam

    2014-05-01

    Information and communication technologies have thrived over the last few years. Healthcare systems have also benefited from this progression. A wireless body area network (WBAN) consists of small, low-power sensors used to monitor human physiological values remotely, which enables physicians to remotely monitor the health of patients. Communication security in WBANs is essential because it involves human physiological data. Key agreement and authentication are the primary issues in the security of WBANs. To agree upon a common key, the nodes exchange information with each other using wireless communication. This information exchange process must be secure enough or the information exchange should be minimized to a certain level so that if information leak occurs, it does not affect the overall system. Most of the existing solutions for this problem exchange too much information for the sake of key agreement; getting this information is sufficient for an attacker to reproduce the key. Set reconciliation is a technique used to reconcile two similar sets held by two different hosts with minimal communication complexity. This paper presents a broadcast-based key agreement scheme using set reconciliation for secure communication in WBANs. The proposed scheme allows the neighboring nodes to agree upon a common key with the personal server (PS), generated from the electrocardiogram (EKG) feature set of the host body. Minimal information is exchanged in a broadcast manner, and even if every node is missing a different subset, by reconciling these feature sets, the whole network will still agree upon a single common key. Because of the limited information exchange, if an attacker gets the information in any way, he/she will not be able to reproduce the key. The proposed scheme mitigates replay, selective forwarding, and denial of service attacks using a challenge-response authentication mechanism. The simulation results show that the proposed scheme has a great deal of adoptability in terms of security, communication overhead, and running time complexity, as compared to the existing EKG-based key agreement scheme.
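
    To make the key-agreement idea concrete, here is a deliberately simplified toy: two parties holding mostly overlapping EKG feature sets exchange only short per-feature digests, and each hashes the common subset into a session key. This is an assumption-laden illustration of agreeing on a key from similar sets with little exchanged data; it is not the paper's set-reconciliation protocol and carries none of its communication-complexity or security guarantees.

      import hashlib

      def digest(feature):
          # Compact per-feature digest; only these short digests are broadcast.
          return hashlib.sha256(repr(feature).encode()).hexdigest()[:8]

      def common_key(own_features, other_digests):
          # Keep only features whose digests the other side also announced,
          # then hash the sorted common subset into a session key.
          common = sorted(f for f in own_features if digest(f) in other_digests)
          return hashlib.sha256("|".join(map(str, common)).encode()).hexdigest()

      if __name__ == "__main__":
          ps_features   = [12, 37, 58, 91, 144, 203]   # personal server's EKG features
          node_features = [12, 37, 58, 91, 144, 250]   # sensor node's slightly different set
          key_ps   = common_key(ps_features,   {digest(f) for f in node_features})
          key_node = common_key(node_features, {digest(f) for f in ps_features})
          print(key_ps == key_node)                    # True: both derive the same key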

  20. A FRACTAL-BASED STOCHASTIC INTERPOLATION SCHEME IN SUBSURFACE HYDROLOGY

    EPA Science Inventory

    The need for a realistic and rational method for interpolating sparse data sets is widespread. Real porosity and hydraulic conductivity data do not vary smoothly over space, so an interpolation scheme that preserves irregularity is desirable. Such a scheme based on the properties...

  1. A power-efficient ZF precoding scheme for multi-user indoor visible light communication systems

    NASA Astrophysics Data System (ADS)

    Zhao, Qiong; Fan, Yangyu; Deng, Lijun; Kang, Bochao

    2017-02-01

    In this study, we propose a power-efficient ZF precoding scheme for visible light communication (VLC) downlink multi-user multiple-input-single-output (MU-MISO) systems, which incorporates zero-forcing (ZF) with the characteristics of VLC systems. The main idea of this scheme is that the channel matrix used for the pseudoinverse is built from the set of optical Access Points (APs) shared by more than one user, rather than from the set of all involved serving APs, as existing ZF precoding schemes typically do. By doing this, the power wasted by transmitting one user's data through APs that do not serve that user is avoided. In addition, the channel matrix that must be pseudoinverted becomes smaller, which helps to reduce the computation complexity. Simulation results in two scenarios show that the proposed ZF precoding scheme has higher power efficiency, better bit error rate (BER) performance and lower computation complexity compared with traditional ZF precoding schemes.
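
    A minimal numerical sketch of the core idea, under assumed (made-up) channel gains: the pseudoinverse is taken only over the columns of APs shared by more than one user, which nulls inter-user interference over those APs while keeping the inverted matrix small.

      import numpy as np

      # Toy MU-MISO VLC setting: 2 users, 5 optical APs (columns).  An AP counts as
      # "shared" when it has a non-negligible gain to more than one user.
      H = np.array([[0.9, 0.7, 0.5, 0.0, 0.0],    # user 1 is served by APs 0, 1, 2
                    [0.0, 0.6, 0.8, 0.7, 0.4]])   # user 2 is served by APs 1, 2, 3, 4

      shared = [j for j in range(H.shape[1]) if np.count_nonzero(H[:, j]) > 1]

      # Conventional ZF: pseudoinverse of the full 2x5 channel matrix.
      W_full = np.linalg.pinv(H)

      # Abstract's idea: pseudoinvert only the shared-AP columns, so no power is
      # spent pushing a user's data through APs that never reach that user, and
      # the matrix to invert is smaller (here 2x2 instead of 2x5).
      H_shared = H[:, shared]
      W_shared = np.linalg.pinv(H_shared)

      print("shared APs:", shared)
      print("interference nulled over shared APs:\n", np.round(H_shared @ W_shared, 6))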

  2. Data embedding employing degenerate clusters of data having differences less than noise value

    DOEpatents

    Sanford, M.T. II; Handel, T.G.

    1998-10-06

    A method of embedding auxiliary information into a set of host data, such as a photograph, television signal, facsimile transmission, or identification card. All such host data contain intrinsic noise, allowing pixels in the host data which are nearly identical and which have values differing by less than the noise value to be manipulated and replaced with auxiliary data. As the embedding method does not change the elemental values of the host data, the auxiliary data do not noticeably affect the appearance or interpretation of the host data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. 35 figs.
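
    A toy version of the embedding principle, assuming a one-dimensional host signal and a noise value of 1: pairs of nearly identical samples are reordered to carry auxiliary bits, so the set of elemental values is never changed. The pairing rule and bit convention here are invented for illustration and are much simpler than the patented clustering method.

      import numpy as np

      def embed_bits(host, bits, noise=1):
          # Scan non-overlapping sample pairs; a pair whose two values differ by
          # no more than the noise level (but are not equal) is reordered to carry
          # one bit: smaller-first encodes 0, larger-first encodes 1.
          data = host.copy()
          k = 0
          for i in range(0, len(data) - 1, 2):
              if k >= len(bits):
                  break
              a, b = int(data[i]), int(data[i + 1])
              if a != b and abs(a - b) <= noise:
                  lo, hi = min(a, b), max(a, b)
                  data[i], data[i + 1] = (lo, hi) if bits[k] == 0 else (hi, lo)
                  k += 1
          return data

      def extract_bits(stego, n, noise=1):
          out = []
          for i in range(0, len(stego) - 1, 2):
              if len(out) >= n:
                  break
              a, b = int(stego[i]), int(stego[i + 1])
              if a != b and abs(a - b) <= noise:
                  out.append(0 if a < b else 1)
          return out

      if __name__ == "__main__":
          host = np.repeat(np.arange(100, 150), 3)        # smooth host, many near-equal pairs
          bits = [1, 0, 1, 1, 0, 0, 1, 0]
          stego = embed_bits(host, bits)
          print(extract_bits(stego, len(bits)) == bits)   # True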

  3. Vortex Filaments in Grids for Scalable, Fine Smoke Simulation.

    PubMed

    Meng, Zhang; Weixin, Si; Yinling, Qian; Hanqiu, Sun; Jing, Qin; Heng, Pheng-Ann

    2015-01-01

    Vortex modeling can produce attractive visual effects of dynamic fluids, which are widely applicable for dynamic media, computer games, special effects, and virtual reality systems. However, it is challenging to effectively simulate intensive and fine detailed fluids such as smoke with fast increasing vortex filaments and smoke particles. The authors propose a novel vortex filaments in grids scheme in which the uniform grids dynamically bridge the vortex filaments and smoke particles for scalable, fine smoke simulation with macroscopic vortex structures. Using the vortex model, their approach supports the trade-off between simulation speed and scale of details. After computing the whole velocity, external control can be easily exerted on the embedded grid to guide the vortex-based smoke motion. The experimental results demonstrate the efficiency of using the proposed scheme for a visually plausible smoke simulation with macroscopic vortex structures.

  4. Two-character motion analysis and synthesis.

    PubMed

    Kwon, Taesoo; Cho, Young-Sang; Park, Sang Il; Shin, Sung Yong

    2008-01-01

    In this paper, we deal with the problem of synthesizing novel motions of standing-up martial arts such as Kickboxing, Karate, and Taekwondo performed by a pair of human-like characters while reflecting their interactions. Adopting an example-based paradigm, we address three non-trivial issues embedded in this problem: motion modeling, interaction modeling, and motion synthesis. For the first issue, we present a semi-automatic motion labeling scheme based on force-based motion segmentation and learning-based action classification. We also construct a pair of motion transition graphs each of which represents an individual motion stream. For the second issue, we propose a scheme for capturing the interactions between two players. A dynamic Bayesian network is adopted to build a motion transition model on top of the coupled motion transition graph that is constructed from an example motion stream. For the last issue, we provide a scheme for synthesizing a novel sequence of coupled motions, guided by the motion transition model. Although the focus of the present work is on martial arts, we believe that the framework of the proposed approach can be conveyed to other two-player motions as well.

  5. Grand Canonical adaptive resolution simulation for molecules with electrons: A theoretical framework based on physical consistency

    NASA Astrophysics Data System (ADS)

    Delle Site, Luigi

    2018-01-01

    A theoretical scheme for the treatment of an open molecular system with electrons and nuclei is proposed. The idea is based on the Grand Canonical description of a quantum region embedded in a classical reservoir of molecules. Electronic properties of the quantum region are calculated at constant electronic chemical potential equal to that of the corresponding (large) bulk system treated at full quantum level. Instead, the exchange of molecules between the quantum region and the classical environment occurs at the chemical potential of the macroscopic thermodynamic conditions. The Grand Canonical Adaptive Resolution Scheme is proposed for the treatment of the classical environment; such an approach can treat the exchange of molecules according to first principles of statistical mechanics and thermodynamics. The overall scheme is built on the basis of physical consistency, with the corresponding definition of numerical criteria for controlling the approximations implied by the coupling. Given the wide range of expertise required, this work is intended to provide guiding principles for the construction of a well-founded computational protocol for actual multiscale simulations from the electronic to the mesoscopic scale.

  6. What is New in the “New Rural Co-operative Medical System”? An Assessment in One Kazak County of the Xinjiang Uyghur Autonomous Region*

    PubMed Central

    Klotzbücher, Sascha; Lässig, Peter; Jiangmei, Qin; Weigelin-Schwiedrzik, Susanne

    2011-01-01

    In 2002, the Chinese leadership announced a change in national welfare policy: Voluntary medical schemes at county level, called the “New Rural Co-operative Medical System” should cover all counties by 2010. This article addresses the main characteristics of this system, analyses the introduction of local schemes based on our own field studies in one Kazak county of the Xinjiang Uyghur Autonomous Region since 2006, and argues that the fast progressing of the local scheme and the flexibility shown by local administrators in considering structural and procedural adjustments are not the result of central directives but of local initiatives. Recentralization from the township governments to functional departments in the provincial and the central state administration is only one aspect of current rural governance. Complementary forms of locally embedded responsiveness to the needs of health care recipients are crucial in restructuring the administration and discharge of health care. These new modes of governance are different from the hierarchical control and institutionalized representation of interests of the local population. PMID:22058584

  7. Designing the role of the embedded care manager.

    PubMed

    Hines, Patricia; Mercury, Marge

    2013-01-01

    : The role of the professional case manager is changing rapidly. Health reform has called upon the industry to ensure that care is delivered in an efficient, effective, and high-quality and low cost manner. As a means to achieve this objective, health plans and health systems are moving the care manager out of a centralized location within their organizations to "embedding" them into physician offices. This move enables the care manager to work alongside the primary care physicians and their high-risk patients. This article discusses the framework for designing and implementing an embedded care manager role into a physician practice. Key elements of the program are discussed. IMPLICATIONS FOR CARE MANAGEMENT:: Historically care management has played a foundational role in improving the quality of care for individuals and populations via the efficient and effective use of resources. Now with the goals of health care reform, a successful transition from a volume-based to value-based reimbursement system requires primary care physicians to welcome care managers into their practices to improve patient care, quality, and costs through care coordination across health care settings and populations. : As patient-centered medical homes and integrated delivery systems formulate their plans for population health management, their efforts have included embedding a care manager in the primary practice setting. Having care managers embedded at the physician offices increases their ability to collaborate with the physician and their staff in the implementation and monitoring care plans for their patients. : Implementing an embedded care manager into an existing physician's practice requires the following:Although the embedded care manager is a highly evolving role, physician groups are beginning to realize the benefits from their care management collaborations. Examples cited include improved outreach and coordination, patient adherence to care plans, and improved quality of life.

  8. Origami mechanologic.

    PubMed

    Treml, Benjamin; Gillman, Andrew; Buskohl, Philip; Vaia, Richard

    2018-06-18

    Robots autonomously interact with their environment through a continual sense-decide-respond control loop. Most commonly, the decide step occurs in a central processing unit; however, the stiffness mismatch between rigid electronics and the compliant bodies of soft robots can impede integration of these systems. We develop a framework for programmable mechanical computation embedded into the structure of soft robots that can augment conventional digital electronic control schemes. Using an origami waterbomb as an experimental platform, we demonstrate a 1-bit mechanical storage device that writes, erases, and rewrites itself in response to a time-varying environmental signal. Further, we show that mechanical coupling between connected origami units can be used to program the behavior of a mechanical bit, produce logic gates such as AND, OR, and three input majority gates, and transmit signals between mechanologic gates. Embedded mechanologic provides a route to add autonomy and intelligence in soft robots and machines. Copyright © 2018 the Author(s). Published by PNAS.

  9. Local electric dipole moments for periodic systems via density functional theory embedding.

    PubMed

    Luber, Sandra

    2014-12-21

    We describe a novel approach for the calculation of local electric dipole moments for periodic systems. Since the position operator is ill-defined in periodic systems, maximally localized Wannier functions based on the Berry-phase approach are usually employed for the evaluation of local contributions to the total electric dipole moment of the system. We propose an alternative approach: within a subsystem-density functional theory based embedding scheme, subset electric dipole moments are derived without any additional localization procedure, both for hybrid and non-hybrid exchange-correlation functionals. This opens the way to a computationally efficient evaluation of local electric dipole moments in (molecular) periodic systems as well as their rigorous splitting into atomic electric dipole moments. As examples, Infrared spectra of liquid ethylene carbonate and dimethyl carbonate are presented, which are commonly employed as solvents in Lithium ion batteries.

  10. Two-Layer Fragile Watermarking Method Secured with Chaotic Map for Authentication of Digital Holy Quran

    PubMed Central

    Khalil, Mohammed S.; Khan, Muhammad Khurram; Alginahi, Yasser M.

    2014-01-01

    This paper presents a novel watermarking method to facilitate the authentication and detection of the image forgery on the Quran images. Two layers of embedding scheme on wavelet and spatial domain are introduced to enhance the sensitivity of fragile watermarking and defend the attacks. Discrete wavelet transforms are applied to decompose the host image into wavelet prior to embedding the watermark in the wavelet domain. The watermarked wavelet coefficient is inverted back to spatial domain then the least significant bits is utilized to hide another watermark. A chaotic map is utilized to blur the watermark to make it secure against the local attack. The proposed method allows high watermark payloads, while preserving good image quality. Experiment results confirm that the proposed methods are fragile and have superior tampering detection even though the tampered area is very small. PMID:25028681

  11. Two-layer fragile watermarking method secured with chaotic map for authentication of digital Holy Quran.

    PubMed

    Khalil, Mohammed S; Kurniawan, Fajri; Khan, Muhammad Khurram; Alginahi, Yasser M

    2014-01-01

    This paper presents a novel watermarking method to facilitate the authentication and detection of the image forgery on the Quran images. Two layers of embedding scheme on wavelet and spatial domain are introduced to enhance the sensitivity of fragile watermarking and defend the attacks. Discrete wavelet transforms are applied to decompose the host image into wavelet prior to embedding the watermark in the wavelet domain. The watermarked wavelet coefficient is inverted back to spatial domain then the least significant bits is utilized to hide another watermark. A chaotic map is utilized to blur the watermark to make it secure against the local attack. The proposed method allows high watermark payloads, while preserving good image quality. Experiment results confirm that the proposed methods are fragile and have superior tampering detection even though the tampered area is very small.
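
    The sketch below illustrates only the second (spatial) layer of the scheme described in the two records above: the watermark bits are scrambled with a keyed chaotic logistic map and written into the least significant bits of the host pixels. The wavelet-domain first layer and the tamper-localization logic are omitted, and the map parameters are arbitrary.

      import numpy as np

      def logistic_permutation(n, x0, r=3.99):
          # Iterate the chaotic logistic map and use the ranking of the orbit as a
          # key-dependent permutation for scrambling the watermark bits.
          x, orbit = x0, []
          for _ in range(n):
              x = r * x * (1.0 - x)
              orbit.append(x)
          return np.argsort(orbit)

      def embed_spatial_layer(host, wm_bits, key=0.3141):
          perm = logistic_permutation(len(wm_bits), key)
          scrambled = np.asarray(wm_bits, dtype=np.uint8)[perm]
          flat = host.flatten()                       # flatten() returns a copy
          flat[:scrambled.size] = (flat[:scrambled.size] & 0xFE) | scrambled
          return flat.reshape(host.shape)

      def extract_spatial_layer(marked, n, key=0.3141):
          perm = logistic_permutation(n, key)
          wm = np.empty(n, dtype=np.uint8)
          wm[perm] = marked.flatten()[:n] & 1         # undo the chaotic scrambling
          return wm

      if __name__ == "__main__":
          host = np.random.default_rng(7).integers(0, 256, (64, 64), dtype=np.uint8)
          wm = np.random.default_rng(8).integers(0, 2, 256, dtype=np.uint8)
          marked = embed_spatial_layer(host, wm)
          print(bool(np.all(extract_spatial_layer(marked, wm.size) == wm)))   # True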

  12. Generation of vortex array laser beams with Dove prism embedded unbalanced Mach-Zehnder interferometer

    NASA Astrophysics Data System (ADS)

    Chu, Shu-Chun

    2009-02-01

    This paper introduces a scheme for generation of vortex laser beams from a solid-state laser with off-axis laser-diode pumping. The proposed system consists of a Dove prism embedded in an unbalanced Mach-Zehnder interferometer configuration. This configuration allows controlled construction of p × p vortex array beams from Ince-Gaussian modes, IGep,p modes. An incident IGe p,p laser beam of variety order p can easily be generated from an end-pumped solid-state laser with an off-axis pumping mechanism. This study simulates this type of vortex array laser beam generation and discusses beam propagation effects. The formation of ordered transverse emission patterns have applications in a variety of areas such as optical data storage, distribution, and processing that exploit the robustness of soliton and vortex fields and optical manipulations of small particles and atoms in the featured intensity distribution.

  13. Reversible Watermarking Surviving JPEG Compression.

    PubMed

    Zain, J; Clarke, M

    2005-01-01

    This paper will discuss the properties of watermarking medical images. We will also discuss the possibility of such images being compressed by JPEG and give an overview of JPEG compression. We will then propose a watermarking scheme that is reversible and robust to JPEG compression. The purpose is to verify the integrity and authenticity of medical images. We used 800x600x8-bit ultrasound (US) images in our experiment. The SHA-256 hash of the image is then embedded in the least significant bits (LSBs) of an 8x8 block in the Region of Non Interest (RONI). The image is then compressed using JPEG and decompressed using Photoshop 6.0. If the image has not been altered, the watermark extracted will match the hash (SHA-256) of the original image. The results show that the embedded watermark is robust to JPEG compression up to image quality 60 (~91% compressed).
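
    A highly simplified sketch of the integrity-check idea, with an assumed 8x8 corner block standing in for the RONI: the SHA-256 hash of the image (with those LSBs cleared) is written into the block's LSBs and later recomputed for verification. Unlike the proposed scheme, this toy is neither reversible in the formal sense nor robust to JPEG recompression.

      import hashlib
      import numpy as np

      RONI = (slice(0, 8), slice(0, 8))       # assumed 8x8 corner block used as RONI

      def image_hash_bits(img):
          """SHA-256 of the image with the RONI LSBs cleared, as 64 bits."""
          work = img.copy()
          work[RONI] &= 0xFE                  # ignore the bits that will hold the mark
          digest = hashlib.sha256(work.tobytes()).digest()
          return np.unpackbits(np.frombuffer(digest, dtype=np.uint8))[:64]

      def embed(img):
          marked = img.copy()
          block = marked[RONI].flatten()
          block = (block & 0xFE) | image_hash_bits(img)   # 64 LSBs <- 64 hash bits
          marked[RONI] = block.reshape(8, 8)
          return marked

      def verify(img):
          stored = img[RONI].flatten() & 1
          return bool(np.all(stored == image_hash_bits(img)))

      if __name__ == "__main__":
          us = np.random.default_rng(0).integers(0, 256, (600, 800), dtype=np.uint8)
          wm = embed(us)
          print(verify(wm))        # True: image intact
          wm[300, 400] ^= 4        # simulate tampering outside the RONI
          print(verify(wm))        # False: integrity check fails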

  14. Safe and Efficient Support for Embedded Multi-Processors in Ada

    NASA Astrophysics Data System (ADS)

    Ruiz, Jose F.

    2010-08-01

    New software demands increasing processing power, and multi-processor platforms are spreading as the answer to achieve the required performance. Embedded real-time systems are also subject to this trend, but in the case of real-time mission-critical systems, the properties of reliability, predictability and analyzability are also paramount. The Ada 2005 language defined a subset of its tasking model, the Ravenscar profile, that provides the basis for the implementation of deterministic and time analyzable applications on top of a streamlined run-time system. This Ravenscar tasking profile, originally designed for single processors, has proven remarkably useful for modelling verifiable real-time single-processor systems. This paper proposes a simple extension to the Ravenscar profile to support multi-processor systems using a fully partitioned approach. The implementation of this scheme is simple, and it can be used to develop applications amenable to schedulability analysis.

  15. Efficient solutions to the Euler equations for supersonic flow with embedded subsonic regions

    NASA Technical Reports Server (NTRS)

    Walters, Robert W.; Dwoyer, Douglas L.

    1987-01-01

    A line Gauss-Seidel (LGS) relaxation algorithm in conjunction with a one-parameter family of upwind discretizations of the Euler equations in two dimensions is described. Convergence of the basic algorithm to the steady state is quadratic for fully supersonic flows and is linear for other flows. This is in contrast to the block alternating direction implicit methods (either central or upwind differenced) and the upwind biased relaxation schemes, all of which converge linearly, independent of the flow regime. Moreover, the algorithm presented herein is easily coupled with methods to detect regions of subsonic flow embedded in supersonic flow. This allows marching by lines in the supersonic regions, converging each line quadratically, and iterating in the subsonic regions, and yields a very efficient iteration strategy. Numerical results are presented for two-dimensional supersonic and transonic flows containing oblique and normal shock waves which confirm the efficiency of the iteration strategy.
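
    To show what line Gauss-Seidel relaxation looks like in code, the sketch below applies it to the 2-D Laplace equation rather than to the upwind-discretized Euler equations: each line of unknowns is solved implicitly with a tridiagonal (Thomas) solve using the latest neighbouring lines. The grid size and sweep count are arbitrary.

      import numpy as np

      def thomas(a, b, c, d):
          """Solve a tridiagonal system (a: sub-, b: main, c: super-diagonal)."""
          n = len(b)
          cp, dp = np.empty(n), np.empty(n)
          cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
          for i in range(1, n):
              m = b[i] - a[i] * cp[i - 1]
              cp[i] = c[i] / m
              dp[i] = (d[i] - a[i] * dp[i - 1]) / m
          x = np.empty(n)
          x[-1] = dp[-1]
          for i in range(n - 2, -1, -1):
              x[i] = dp[i] - cp[i] * x[i + 1]
          return x

      def line_gauss_seidel(u, sweeps=500):
          """Line Gauss-Seidel for the 2-D Laplace equation: each i-line of
          unknowns is solved implicitly, using the most recent neighbouring
          lines (model problem only, not the Euler equations)."""
          ny = u.shape[1]
          a = np.full(ny - 2, 1.0); a[0] = 0.0
          c = np.full(ny - 2, 1.0); c[-1] = 0.0
          b = np.full(ny - 2, -4.0)
          for _ in range(sweeps):
              for i in range(1, u.shape[0] - 1):
                  d = -(u[i - 1, 1:-1] + u[i + 1, 1:-1])
                  d[0]  -= u[i, 0]       # known boundary values move to the RHS
                  d[-1] -= u[i, -1]
                  u[i, 1:-1] = thomas(a, b, c, d)
          return u

      if __name__ == "__main__":
          u = np.zeros((41, 41)); u[:, -1] = 1.0     # Dirichlet data on one edge
          u = line_gauss_seidel(u)
          print("centre value:", round(float(u[20, 20]), 4))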

  16. Spread spectrum image steganography.

    PubMed

    Marvel, L M; Boncelet, C R; Retter, C T

    1999-01-01

    In this paper, we present a new method of digital steganography, entitled spread spectrum image steganography (SSIS). Steganography, which means "covered writing" in Greek, is the science of communicating in a hidden manner. Following a discussion of steganographic communication theory and review of existing techniques, the new method, SSIS, is introduced. This system hides and recovers a message of substantial length within digital imagery while maintaining the original image size and dynamic range. The hidden message can be recovered using appropriate keys without any knowledge of the original image. Image restoration, error-control coding, and techniques similar to spread spectrum are described, and the performance of the system is illustrated. A message embedded by this method can be in the form of text, imagery, or any other digital signal. Applications for such a data-hiding scheme include in-band captioning, covert communication, image tamperproofing, authentication, embedded control, and revision tracking.
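
    A bare-bones sketch of the spread-spectrum hiding principle: each message bit modulates a keyed pseudo-noise pattern added to the host at low amplitude and is recovered by correlation. The interleaving, error-control coding and blind image-restoration stages of SSIS are left out, and the key and gain values are arbitrary.

      import numpy as np

      def pn_patterns(key, n, shape):
          # Key-dependent pseudo-noise (+/-1) patterns, regenerated identically at the receiver.
          rng = np.random.default_rng(key)
          return rng.choice([-1.0, 1.0], size=(n,) + shape)

      def ss_embed(host, bits, key=42, gain=2.0):
          stego = host.astype(float)
          for b, chip in zip(bits, pn_patterns(key, len(bits), host.shape)):
              stego = stego + gain * (1.0 if b else -1.0) * chip
          return stego

      def ss_extract(stego, host_estimate, n_bits, key=42):
          # Correlate the residual with each PN pattern; the sign recovers the bit.
          residual = stego - host_estimate
          return [int(np.sum(residual * chip) > 0)
                  for chip in pn_patterns(key, n_bits, stego.shape)]

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          host = rng.integers(0, 256, (128, 128)).astype(float)
          bits = [1, 0, 1, 1, 0, 1, 0, 0]
          stego = ss_embed(host, bits)
          # The original host is used for the residual here; SSIS itself estimates
          # it blindly with image-restoration filtering.
          print(ss_extract(stego, host, len(bits)) == bits)   # True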

  17. DOW-PR DOlphin and Whale Pods Routing Protocol for Underwater Wireless Sensor Networks (UWSNs).

    PubMed

    Wadud, Zahid; Ullah, Khadem; Hussain, Sajjad; Yang, Xiaodong; Qazi, Abdul Baseer

    2018-05-12

    Underwater Wireless Sensor Networks (UWSNs) have intrinsic challenges that include long propagation delays, high mobility of sensor nodes due to water currents, Doppler spread, delay variance, multipath, attenuation and geometric spreading. The existing Weighting Depth and Forwarding Area Division Depth Based Routing (WDFAD-DBR) protocol considers the weighting depth of the two hops in order to select the next Potential Forwarding Node (PFN). To improve the performance of WDFAD-DBR, we propose DOlphin and Whale Pod Routing protocol (DOW-PR). In this scheme, we divide the transmission range into a number of transmission power levels and at the same time select the next PFNs from forwarding and suppressed zones. In contrast to WDFAD-DBR, our scheme not only considers the packet upward advancement, but also takes into account the number of suppressed nodes and number of PFNs at the first and second hops. Consequently, reasonable energy reduction is observed while receiving and transmitting packets. Moreover, our scheme also considers the hops count of the PFNs from the sink. In the absence of PFNs, the proposed scheme will select the node from the suppressed region for broadcasting and thus ensures minimum loss of data. Besides this, we also propose another routing scheme (whale pod) in which multiple sinks are placed at water surface, but one sink is embedded inside the water and is physically connected with the surface sink through high bandwidth connection. Simulation results show that the proposed scheme has high Packet Delivery Ratio (PDR), low energy tax, reduced Accumulated Propagation Distance (APD) and increased the network lifetime.

  18. DOW-PR DOlphin and Whale Pods Routing Protocol for Underwater Wireless Sensor Networks (UWSNs)

    PubMed Central

    Wadud, Zahid; Ullah, Khadem; Hussain, Sajjad; Yang, Xiaodong; Qazi, Abdul Baseer

    2018-01-01

    Underwater Wireless Sensor Networks (UWSNs) have intrinsic challenges that include long propagation delays, high mobility of sensor nodes due to water currents, Doppler spread, delay variance, multipath, attenuation and geometric spreading. The existing Weighting Depth and Forwarding Area Division Depth Based Routing (WDFAD-DBR) protocol considers the weighting depth of the two hops in order to select the next Potential Forwarding Node (PFN). To improve the performance of WDFAD-DBR, we propose DOlphin and Whale Pod Routing protocol (DOW-PR). In this scheme, we divide the transmission range into a number of transmission power levels and at the same time select the next PFNs from forwarding and suppressed zones. In contrast to WDFAD-DBR, our scheme not only considers the packet upward advancement, but also takes into account the number of suppressed nodes and number of PFNs at the first and second hops. Consequently, reasonable energy reduction is observed while receiving and transmitting packets. Moreover, our scheme also considers the hops count of the PFNs from the sink. In the absence of PFNs, the proposed scheme will select the node from the suppressed region for broadcasting and thus ensures minimum loss of data. Besides this, we also propose another routing scheme (whale pod) in which multiple sinks are placed at water surface, but one sink is embedded inside the water and is physically connected with the surface sink through high bandwidth connection. Simulation results show that the proposed scheme has high Packet Delivery Ratio (PDR), low energy tax, reduced Accumulated Propagation Distance (APD) and increased the network lifetime. PMID:29757208

  19. An optical color image watermarking scheme by using compressive sensing with human visual characteristics in gyrator domain

    NASA Astrophysics Data System (ADS)

    Liansheng, Sui; Bei, Zhou; Zhanmin, Wang; Ailing, Tian

    2017-05-01

    A novel optical color image watermarking scheme considering human visual characteristics is presented in gyrator transform domain. Initially, an appropriate reference image is constructed of significant blocks chosen from the grayscale host image by evaluating visual characteristics such as visual entropy and edge entropy. Three components of the color watermark image are compressed based on compressive sensing, and the corresponding results are combined to form the grayscale watermark. Then, the frequency coefficients of the watermark image are fused into the frequency data of the gyrator-transformed reference image. The fused result is inversely transformed and partitioned, and eventually the watermarked image is obtained by mapping the resultant blocks into their original positions. The scheme can reconstruct the watermark with high perceptual quality and has the enhanced security due to high sensitivity of the secret keys. Importantly, the scheme can be implemented easily under the framework of double random phase encoding with the 4f optical system. To the best of our knowledge, it is the first report on embedding the color watermark into the grayscale host image which will be out of attacker's expectation. Simulation results are given to verify the feasibility and its superior performance in terms of noise and occlusion robustness.

  20. Electronic damping of anharmonic adsorbate vibrations at metallic surfaces

    NASA Astrophysics Data System (ADS)

    Tremblay, Jean Christophe; Monturet, Serge; Saalfrank, Peter

    2010-03-01

    The nonadiabatic coupling of an adsorbate close to a metallic surface leads to electronic damping of adsorbate vibrations and line broadening in vibrational spectroscopy. Here, a perturbative treatment of the electronic contribution to the lifetime broadening serves as a building block for a new approach, in which anharmonic vibrational transition rates are calculated from a position-dependent coupling function. Different models for the coupling function will be tested, all related to embedding theory. The first two are models based on a scattering approach with (i) a jellium-type and (ii) a density functional theory based embedding density, respectively. In a third variant a further refined model is used for the embedding density, and a semiempirical approach is taken in which a scaling factor is chosen to match harmonic, single-site, first-principles transition rates, obtained from periodic density functional theory. For the example of hydrogen atoms on (adsorption) and below (subsurface absorption) a Pd(111) surface, lifetimes of and transition rates between vibrational levels are computed. The transition rates emerging from different models serve as input for the selective subsurface adsorption of hydrogen in palladium starting from an adsorption site, by using sequences of infrared laser pulses in a laser distillation scheme.

  1. Collusion issue in video watermarking

    NASA Astrophysics Data System (ADS)

    Doerr, Gwenael; Dugelay, Jean-Luc

    2005-03-01

    Digital watermarking has first been introduced as a possible way to ensure intellectual property (IP) protection. However, fifteen years after its infancy, it is still viewed as a young technology and digital watermarking is far from being introduced in Digital Right Management (DRM) frameworks. A possible explanation is that the research community has so far mainly focused on the robustness of the embedded watermark and has almost ignored security aspects. For IP protection applications such as fingerprinting and copyright protection, the watermark should provide means to ensure some kind of trust in a non secure environment. To this end, security against attacks from malicious users has to be considered. This paper will focus on collusion attacks to evaluate security in the context of video watermarking. In particular, security pitfalls will be exhibited when frame-by-frame embedding strategies are enforced for video watermarking. Two alternative strategies will be surveyed: either eavesdropping the watermarking channel to identify some redundant hidden structure, or jamming the watermarking channel to wash out the embedded watermark signal. Finally, the need for a new brand of watermarking schemes will be highlighted if the watermark is to be released in a hostile environment, which is typically the case for IP protection applications.

  2. Nanodiamonds as multi-purpose labels for microscopy.

    PubMed

    Hemelaar, S R; de Boer, P; Chipaux, M; Zuidema, W; Hamoh, T; Martinez, F Perona; Nagl, A; Hoogenboom, J P; Giepmans, B N G; Schirhagl, R

    2017-04-07

    Nanodiamonds containing fluorescent nitrogen-vacancy centers are increasingly attracting interest for use as a probe in biological microscopy. This interest stems from (i) strong resistance to photobleaching allowing prolonged fluorescence observation times; (ii) the possibility to excite fluorescence using a focused electron beam (cathodoluminescence; CL) for high-resolution localization; and (iii) the potential use for nanoscale sensing. For all these schemes, the development of versatile molecular labeling using relatively small diamonds is essential. Here, we show the direct targeting of a biological molecule with nanodiamonds as small as 70 nm using a streptavidin conjugation and standard antibody labelling approach. We also show internalization of 40 nm sized nanodiamonds. The fluorescence from the nanodiamonds survives osmium-fixation and plastic embedding making them suited for correlative light and electron microscopy. We show that CL can be observed from epon-embedded nanodiamonds, while surface-exposed nanoparticles also stand out in secondary electron (SE) signal due to the exceptionally high diamond SE yield. Finally, we demonstrate the magnetic read-out using fluorescence from diamonds prior to embedding. Thus, our results firmly establish nanodiamonds containing nitrogen-vacancy centers as unique, versatile probes for combining and correlating different types of microscopy, from fluorescence imaging and magnetometry to ultrastructural investigation using electron microscopy.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramezani, Hamidreza; Dubois, Marc; Wang, Yuan

    Here, we propose a mechanism for directional excitation without breaking reciprocity. This is achieved by embedding an impedance matched parity-time symmetric potential in a three-port system. The amplitude distribution within the gain and loss regions is strongly influenced by the direction of the incoming field. Consequently, the excitation of the third port is contingent on the direction of incidence while transmission in the main channel is immune. This design improves the four-port directional coupler scheme, as there is no need to implement an anechoic termination to one of the ports.

  4. Flexible Plasmonic Sensors

    PubMed Central

    Shir, Daniel; Ballard, Zachary S.; Ozcan, Aydogan

    2016-01-01

    Mechanical flexibility and the advent of scalable, low-cost, and high-throughput fabrication techniques have enabled numerous potential applications for plasmonic sensors. Sensitive and sophisticated biochemical measurements can now be performed through the use of flexible plasmonic sensors integrated into existing medical and industrial devices or sample collection units. More robust sensing schemes and practical techniques must be further investigated to fully realize the potentials of flexible plasmonics as a framework for designing low-cost, embedded and integrated sensors for medical, environmental, and industrial applications. PMID:27547023

  5. A new Euler scheme based on harmonic-polygon approach for solving first order ordinary differential equation

    NASA Astrophysics Data System (ADS)

    Yusop, Nurhafizah Moziyana Mohd; Hasan, Mohammad Khatim; Wook, Muslihah; Amran, Mohd Fahmi Mohamad; Ahmad, Siti Rohaidah

    2017-10-01

    There are many benefits to improving the Euler scheme for solving ordinary differential equation problems; among them are simple implementation and low computational cost. However, the limited accuracy of the Euler scheme often pushes practitioners toward more complex methods. The main purpose of this research is therefore to construct a new modified Euler scheme that improves the accuracy of the Polygon scheme at various step sizes. The new scheme combines the Polygon scheme with the harmonic-mean concept and is called the Harmonic-Polygon scheme. This Harmonic-Polygon scheme offers advantages that the basic Euler scheme cannot provide for solving ordinary differential equation problems. Four sets of problems are solved via the Harmonic-Polygon scheme. Findings show that the new scheme produces much better accuracy.
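
    The abstract does not give the scheme's update formula, so the sketch below is only a guess at its flavour: the two slopes of the improved-polygon (midpoint) construction are combined with a harmonic mean before the Euler-type update. Treat the formula, function names and test problem as assumptions rather than the authors' method.

      import numpy as np

      def harmonic_polygon_step(f, t, y, h):
          """One step of a harmonic-mean/polygon-flavoured Euler update (a sketch,
          not the published Harmonic-Polygon formula)."""
          k1 = f(t, y)
          k2 = f(t + 0.5 * h, y + 0.5 * h * k1)            # polygon (midpoint) slope
          k_h = 2.0 * k1 * k2 / (k1 + k2) if (k1 + k2) != 0 else k2
          return y + h * k_h

      def solve(f, t0, y0, t_end, h):
          ts, ys = [t0], [y0]
          while ts[-1] < t_end - 1e-12:
              ys.append(harmonic_polygon_step(f, ts[-1], ys[-1], h))
              ts.append(ts[-1] + h)
          return np.array(ts), np.array(ys)

      if __name__ == "__main__":
          f = lambda t, y: -2.0 * y                        # y' = -2y, y(0) = 1
          ts, ys = solve(f, 0.0, 1.0, 1.0, 0.05)
          print("numerical y(1):", round(float(ys[-1]), 5),
                " exact:", round(float(np.exp(-2.0)), 5))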

  6. Ensemble superparameterization versus stochastic parameterization: A comparison of model uncertainty representation in tropical weather prediction

    NASA Astrophysics Data System (ADS)

    Subramanian, Aneesh C.; Palmer, Tim N.

    2017-06-01

    Stochastic schemes to represent model uncertainty in the European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble prediction system has helped improve its probabilistic forecast skill over the past decade by both improving its reliability and reducing the ensemble mean error. The largest uncertainties in the model arise from the model physics parameterizations. In the tropics, the parameterization of moist convection presents a major challenge for the accurate prediction of weather and climate. Superparameterization is a promising alternative strategy for including the effects of moist convection through explicit turbulent fluxes calculated from a cloud-resolving model (CRM) embedded within a global climate model (GCM). In this paper, we compare the impact of initial random perturbations in embedded CRMs, within the ECMWF ensemble prediction system, with stochastically perturbed physical tendency (SPPT) scheme as a way to represent model uncertainty in medium-range tropical weather forecasts. We especially focus on forecasts of tropical convection and dynamics during MJO events in October-November 2011. These are well-studied events for MJO dynamics as they were also heavily observed during the DYNAMO field campaign. We show that a multiscale ensemble modeling approach helps improve forecasts of certain aspects of tropical convection during the MJO events, while it also tends to deteriorate certain large-scale dynamic fields with respect to stochastically perturbed physical tendencies approach that is used operationally at ECMWF.Plain Language SummaryProbabilistic weather forecasts, especially for tropical weather, is still a significant challenge for global weather forecasting systems. Expressing uncertainty along with weather forecasts is important for informed decision making. Hence, we explore the use of a relatively new approach in using super-parameterization, where a cloud resolving model is embedded within a global model, in probabilistic tropical weather forecasts at medium range. We show that this approach helps improve modeling uncertainty in forecasts of certain features such as precipitation magnitude and location better, but forecasts of tropical winds are not necessarily improved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=discrete+AND+maths&id=EJ1090293','ERIC'); return false;" href="https://eric.ed.gov/?q=discrete+AND+maths&id=EJ1090293"><span>Embedded Simultaneous Prompting Procedure to Teach STEM Content to High School Students with Moderate Disabilities in an Inclusive Setting</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Heinrich, Sara; Collins, Belva C.; Knight, Victoria; Spriggs, Amy D.</p> <p>2016-01-01</p> <p>Effects of an embedded simultaneous prompting procedure to teach STEM (science, technology, engineering, math) content to three secondary students with moderate intellectual disabilities in an inclusive general education classroom were evaluated in the current study. 
Students learned discrete (i.e., geometric figures, science vocabulary, or use of…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018CQGra..35i4002C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018CQGra..35i4002C"><span>Inference of boundaries in causal sets</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Cunningham, William J.</p> <p>2018-05-01</p> <p>We investigate the extrinsic geometry of causal sets in (1+1) -dimensional Minkowski spacetime. The properties of boundaries in an embedding space can be used not only to measure observables, but also to supplement the discrete action in the partition function via discretized Gibbons–Hawking–York boundary terms. We define several ways to represent a causal set using overlapping subsets, which then allows us to distinguish between null and non-null bounding hypersurfaces in an embedding space. We discuss algorithms to differentiate between different types of regions, consider when these distinctions are possible, and then apply the algorithms to several spacetime regions. Numerical results indicate the volumes of timelike boundaries can be measured to within 0.5% accuracy for flat boundaries and within 10% accuracy for highly curved boundaries for medium-sized causal sets with N  =  214 spacetime elements.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1257983','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1257983"><span>Embedding global and collective in a torus network with message class map based tree path selection</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Chen, Dong; Coteus, Paul W.; Eisley, Noel A.</p> <p></p> <p>Embodiments of the invention provide a method, system and computer program product for embedding a global barrier and global interrupt network in a parallel computer system organized as a torus network. The computer system includes a multitude of nodes. In one embodiment, the method comprises taking inputs from a set of receivers of the nodes, dividing the inputs from the receivers into a plurality of classes, combining the inputs of each of the classes to obtain a result, and sending said result to a set of senders of the nodes. Embodiments of the invention provide a method, system and computermore » program product for embedding a collective network in a parallel computer system organized as a torus network. In one embodiment, the method comprises adding to a torus network a central collective logic to route messages among at least a group of nodes in a tree structure.« less</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23556962','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23556962"><span>Harnessing quantum transport by transient chaos.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Yang, Rui; Huang, Liang; Lai, Ying-Cheng; Grebogi, Celso; Pecora, Louis M</p> <p>2013-03-01</p> <p>Chaos has long been recognized to be generally advantageous from the perspective of control. 
In particular, the infinite number of unstable periodic orbits embedded in a chaotic set and the intrinsically sensitive dependence on initial conditions imply that a chaotic system can be controlled to a desirable state by using small perturbations. Investigation of chaos control, however, was largely limited to nonlinear dynamical systems in the classical realm. In this paper, we show that chaos may be used to modulate or harness quantum mechanical systems. To be concrete, we focus on quantum transport through nanostructures, a problem of considerable interest in nanoscience, where a key feature is conductance fluctuations. We articulate and demonstrate that chaos, more specifically transient chaos, can be effective in modulating the conductance-fluctuation patterns. Experimentally, this can be achieved by applying an external gate voltage in a device of suitable geometry to generate classically inaccessible potential barriers. Adjusting the gate voltage allows the characteristics of the dynamical invariant set responsible for transient chaos to be varied in a desirable manner which, in turn, can induce continuous changes in the statistical characteristics of the quantum conductance-fluctuation pattern. To understand the physical mechanism of our scheme, we develop a theory based on analyzing the spectrum of the generalized non-Hermitian Hamiltonian that includes the effect of leads, or electronic waveguides, as self-energy terms. As the escape rate of the underlying non-attracting chaotic set is increased, the imaginary part of the complex eigenenergy becomes increasingly large so that pointer states are more difficult to form, making smoother the conductance-fluctuation pattern.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4448944','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4448944"><span>From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Tsai, Wen-Ting; Hassan, Ahmed; Sarkar, Purbasha; Correa, Joaquin; Metlagel, Zoltan; Jorgens, Danielle M.; Auer, Manfred</p> <p>2014-01-01</p> <p>Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. 
The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets. PMID:25145678</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5458883','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5458883"><span>Fiber-Embedded Metallic Materials: From Sensing towards Nervous Behavior</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Saheb, Nouari; Mekid, Samir</p> <p>2015-01-01</p> <p>Embedding of fibers in materials has attracted serious attention from researchers and has become a new research trend. Such material structures are usually termed “smart” or more recently “nervous”. Materials can have the capability of sensing and responding to the surrounding environmental stimulus, in the former, and the capability of feeling multiple structural and external stimuli, while feeding information back to a controller for appropriate real-time action, in the latter. In this paper, embeddable fibers, embedding processes, and behavior of fiber-embedded metallic materials are reviewed. Particular emphasis has been given to embedding fiber Bragg grating (FBG) array sensors and piezo wires, because of their high potential to be used in nervous materials for structural health monitoring. Ultrasonic consolidation and laser-based layered manufacturing processes are discussed in detail because of their high potential to integrate fibers without disruption. In addition, current challenges associated with embedding fibers in metallic materials are highlighted and recommendations for future research work are set. 
PMID:28793689</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015SPIE.9669E..03Z','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015SPIE.9669E..03Z"><span>An integrated compact airborne multispectral imaging system using embedded computer</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Zhang, Yuedong; Wang, Li; Zhang, Xuguo</p> <p>2015-08-01</p> <p>An integrated compact airborne multispectral imaging system using embedded computer based control system was developed for small aircraft multispectral imaging application. The multispectral imaging system integrates CMOS camera, filter wheel with eight filters, two-axis stabilized platform, miniature POS (position and orientation system) and embedded computer. The embedded computer has excellent universality and expansibility, and has advantages in volume and weight for airborne platform, so it can meet the requirements of control system of the integrated airborne multispectral imaging system. The embedded computer controls the camera parameters setting, filter wheel and stabilized platform working, image and POS data acquisition, and stores the image and data. The airborne multispectral imaging system can connect peripheral device use the ports of the embedded computer, so the system operation and the stored image data management are easy. This airborne multispectral imaging system has advantages of small volume, multi-function, and good expansibility. The imaging experiment results show that this system has potential for multispectral remote sensing in applications such as resource investigation and environmental monitoring.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26574270','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26574270"><span>Open-ended recursive calculation of single residues of response functions for perturbation-dependent basis sets.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Friese, Daniel H; Ringholm, Magnus; Gao, Bin; Ruud, Kenneth</p> <p>2015-10-13</p> <p>We present theory, implementation, and applications of a recursive scheme for the calculation of single residues of response functions that can treat perturbations that affect the basis set. This scheme enables the calculation of nonlinear light absorption properties to arbitrary order for other perturbations than an electric field. We apply this scheme for the first treatment of two-photon circular dichroism (TPCD) using London orbitals at the Hartree-Fock level of theory. In general, TPCD calculations suffer from the problem of origin dependence, which has so far been solved by using the velocity gauge for the electric dipole operator. This work now enables comparison of results from London orbital and velocity gauge based TPCD calculations. 
We find that the results from the two approaches both exhibit strong basis set dependence but that they are very similar with respect to their basis set convergence.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/20426055','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/20426055"><span>Evaluation of 4D-CT lung registration.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Kabus, Sven; Klinder, Tobias; Murphy, Keelin; van Ginneken, Bram; van Lorenz, Cristian; Pluim, Josien P W</p> <p>2009-01-01</p> <p>Non-rigid registration accuracy assessment is typically performed by evaluating the target registration error at manually placed landmarks. For 4D-CT lung data, we compare two sets of landmark distributions: a smaller set primarily defined on vessel bifurcations as commonly described in the literature and a larger set being well-distributed throughout the lung volume. For six different registration schemes (three in-house schemes and three schemes frequently used by the community) the landmark error is evaluated and found to depend significantly on the distribution of the landmarks. In particular, lung regions near to the pleura show a target registration error three times larger than near-mediastinal regions. While the inter-method variability on the landmark positions is rather small, the methods show discriminating differences with respect to consistency and local volume change. In conclusion, both a well-distributed set of landmarks and a deformation vector field analysis are necessary for reliable non-rigid registration accuracy assessment.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25845645','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25845645"><span>Performance of Frozen Density Embedding for Modeling Hole Transfer Reactions.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ramos, Pablo; Papadakis, Markos; Pavanello, Michele</p> <p>2015-06-18</p> <p>We have carried out a thorough benchmark of the frozen density-embedding (FDE) method for calculating hole transfer couplings. We have considered 10 exchange-correlation functionals, 3 nonadditive kinetic energy functionals, and 3 basis sets. Overall, we conclude that with a 7% mean relative unsigned error, the PBE and PW91 functionals coupled with the PW91k nonadditive kinetic energy functional and a TZP basis set constitute the most stable and accurate levels of theory for hole transfer coupling calculations. 
The FDE-ET method is found to be an excellent tool for computing diabatic couplings for hole transfer reactions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/21774552','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/21774552"><span>DEKOIS: demanding evaluation kits for objective in silico screening--a versatile tool for benchmarking docking programs and scoring functions.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Vogel, Simon M; Bauer, Matthias R; Boeckler, Frank M</p> <p>2011-10-24</p> <p>For widely applied in silico screening techniques success depends on the rational selection of an appropriate method. We herein present a fast, versatile, and robust method to construct demanding evaluation kits for objective in silico screening (DEKOIS). This automated process enables creating tailor-made decoy sets for any given sets of bioactives. It facilitates a target-dependent validation of docking algorithms and scoring functions helping to save time and resources. We have developed metrics for assessing and improving decoy set quality and employ them to investigate how decoy embedding affects docking. We demonstrate that screening performance is target-dependent and can be impaired by latent actives in the decoy set (LADS) or enhanced by poor decoy embedding. The presented method allows extending and complementing the collection of publicly available high quality decoy sets toward new target space. All present and future DEKOIS data sets will be made accessible at www.dekois.com.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2008SPIE.7073E..1MC','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2008SPIE.7073E..1MC"><span>Design and evaluation of sparse quantization index modulation watermarking schemes</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Cornelis, Bruno; Barbarien, Joeri; Dooms, Ann; Munteanu, Adrian; Cornelis, Jan; Schelkens, Peter</p> <p>2008-08-01</p> <p>In the past decade the use of digital data has increased significantly. The advantages of digital data are, amongst others, easy editing, fast, cheap and cross-platform distribution and compact storage. The most crucial disadvantages are the unauthorized copying and copyright issues, by which authors and license holders can suffer considerable financial losses. Many inexpensive methods are readily available for editing digital data and, unlike analog information, the reproduction in the digital case is simple and robust. Hence, there is great interest in developing technology that helps to protect the integrity of a digital work and the copyrights of its owners. Watermarking, which is the embedding of a signal (known as the watermark) into the original digital data, is one method that has been proposed for the protection of digital media elements such as audio, video and images. In this article, we examine watermarking schemes for still images, based on selective quantization of the coefficients of a wavelet transformed image, i.e. sparse quantization-index modulation (QIM) watermarking. Different grouping schemes for the wavelet coefficients are evaluated and experimentally verified for robustness against several attacks. 
Efficiency of coherent-state quantum cryptography in the presence of loss: Influence of realistic error correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heid, Matthias; Luetkenhaus, Norbert

    2006-05-15

    We investigate the performance of a continuous-variable quantum key distribution scheme in a practical setting. More specifically, we take a nonideal error reconciliation procedure into account. The quantum channel connecting the two honest parties is assumed to be lossy but noiseless. Secret key rates are given for the case that the measurement outcomes are postselected or a reverse reconciliation scheme is applied. The reverse reconciliation scheme loses its initial advantage in the practical setting. If one combines postselection with reverse reconciliation, however, much of this advantage can be recovered.

Work, Train, Win: Work-Based Learning Design and Management for Productivity Gains. OECD Education Working Papers, No. 135

    ERIC Educational Resources Information Center

    Kis, Viktoria

    2016-01-01

    Realising the potential of work-based learning schemes as a driver of productivity requires careful design and support. The length of work-based learning schemes should be adapted to the profile of productivity gains. A scheme that is too long for a given skill set might be unattractive for learners and waste public resources, but a scheme that is…
Efficient Hybrid Watermarking Scheme for Security and Transmission Bit Rate Enhancement of 3D Color-Plus-Depth Video Communication

    NASA Astrophysics Data System (ADS)

    El-Shafai, W.; El-Rabaie, S.; El-Halawany, M.; Abd El-Samie, F. E.

    2018-03-01

    Three-Dimensional Video-plus-Depth (3DV + D) comprises diverse video streams captured by different cameras around an object. Therefore, there is a great need to fulfill efficient compression to transmit and store the 3DV + D content in compressed form to attain future resource bounds whilst preserving a decisive reception quality. Also, the security of the transmitted 3DV + D is a critical issue for protecting its copyright content. This paper proposes an efficient hybrid watermarking scheme for securing the 3DV + D transmission, which is the homomorphic transform based Singular Value Decomposition (SVD) in Discrete Wavelet Transform (DWT) domain. The objective of the proposed watermarking scheme is to increase the immunity of the watermarked 3DV + D to attacks and achieve adequate perceptual quality. Moreover, the proposed watermarking scheme reduces the transmission-bandwidth requirements for transmitting the color-plus-depth 3DV over limited-bandwidth wireless networks through embedding the depth frames into the color frames of the transmitted 3DV + D. Thus, it saves the transmission bit rate and subsequently it enhances the channel bandwidth-efficiency. The performance of the proposed watermarking scheme is compared with those of the state-of-the-art hybrid watermarking schemes. The comparisons depend on both the subjective visual results and the objective results; the Peak Signal-to-Noise Ratio (PSNR) of the watermarked frames and the Normalized Correlation (NC) of the extracted watermark frames.
    Extensive simulation results on standard 3DV + D sequences have been conducted in the presence of attacks. The obtained results confirm that the proposed hybrid watermarking scheme is robust against attacks: it achieves not only very good perceptual quality, with high PSNR values and savings in the transmission bit rate, but also high correlation coefficient values for the extracted watermarks under attack, compared to the existing hybrid watermarking schemes.
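A generic sketch of the DWT-plus-SVD embedding step underlying this family of hybrid watermarking schemes is given below. It is illustrative only: plain additive embedding of singular values in the LL subband, not the authors' homomorphic-transform variant. It assumes the PyWavelets package, and `alpha` is an invented embedding strength.

```python
# Illustrative DWT + SVD watermark embedding: the watermark's singular values
# are blended into the singular values of the host's LL subband.
import numpy as np
import pywt  # PyWavelets

def embed_watermark(host: np.ndarray, mark: np.ndarray, alpha: float = 0.05) -> np.ndarray:
    """Embed `mark` into `host` (2-D float arrays; mark must match the LL shape)."""
    LL, (LH, HL, HH) = pywt.dwt2(host, "haar")            # one-level DWT of the host
    U, S, Vt = np.linalg.svd(LL, full_matrices=False)      # SVD of the LL subband
    _, Sm, _ = np.linalg.svd(mark, full_matrices=False)
    S_marked = S + alpha * Sm                              # additive embedding in singular values
    LL_marked = (U * S_marked) @ Vt                        # rebuild LL with the modified spectrum
    return pywt.idwt2((LL_marked, (LH, HL, HH)), "haar")   # inverse DWT -> watermarked frame

# Example: embed a small random watermark into a random "frame".
host = np.random.rand(256, 256)
mark = np.random.rand(128, 128)   # LL of a 256x256 one-level haar DWT is 128x128
watermarked = embed_watermark(host, mark)
print(float(np.abs(watermarked - host).max()))  # distortion stays small for small alpha
```

Extraction reverses the step using the stored original singular values and the watermark's singular vectors as side information; that bookkeeping is omitted here.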
Experimental study on cross-sensitivity of temperature and vibration of embedded fiber Bragg grating sensors

    NASA Astrophysics Data System (ADS)

    Chen, Tao; Ye, Meng-li; Liu, Shu-liang; Deng, Yan

    2018-03-01

    In view of the principle for occurrence of cross-sensitivity, a series of calibration experiments are carried out to solve the cross-sensitivity problem of embedded fiber Bragg gratings (FBGs) using the reference grating method. Moreover, an ultrasonic-vibration-assisted grinding (UVAG) model is established, and finite element analysis (FEA) is carried out under the monitoring environment of the embedded temperature measurement system. In addition, the related temperature acquisition tests are set in accordance with the requirements of the reference grating method. Finally, comparative analyses of the simulation and experimental results are performed, and it may be concluded that the reference grating method may be utilized to effectively solve the cross-sensitivity of embedded FBGs.

A Vision-Based Driver Nighttime Assistance and Surveillance System Based on Intelligent Image Sensing Techniques and a Heterogamous Dual-Core Embedded System Architecture

    PubMed Central

    Chen, Yen-Lin; Chiang, Hsin-Han; Chiang, Chuan-Yen; Liu, Chuan-Ming; Yuan, Shyan-Ming; Wang, Jenq-Haur

    2012-01-01

    This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented by a set of embedded software components and modules, and integrates these modules to accomplish a component-based system framework on an embedded heterogamous dual-core platform. Therefore, this study develops and implements computer vision and sensing techniques of nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames in front of the host car captured from CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogamous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle-embedded vision-based nighttime driver assistance and surveillance system. PMID:22736956

Tag-KEM from Set Partial Domain One-Way Permutations

    NASA Astrophysics Data System (ADS)

    Abe, Masayuki; Cui, Yang; Imai, Hideki; Kurosawa, Kaoru

    Recently a framework called Tag-KEM/DEM was introduced to construct efficient hybrid encryption schemes. Although it is known that the generic encode-then-encrypt construction of chosen-ciphertext-secure public-key encryption also applies to secure Tag-KEM construction, and some known encoding methods like OAEP can be used for this purpose, it is worth pursuing more efficient encoding methods dedicated to Tag-KEM construction. This paper proposes an encoding method that yields efficient Tag-KEM schemes when combined with set partial one-way permutations such as RSA and Rabin's encryption scheme. To our knowledge, this leads to the most practical hybrid encryption scheme of this type.
    We also present an efficient Tag-KEM which is CCA-secure under the general factoring assumption rather than the Blum factoring assumption.

Multiple crack detection in 3D using a stable XFEM and global optimization

    NASA Astrophysics Data System (ADS)

    Agathos, Konstantinos; Chatzi, Eleni; Bordas, Stéphane P. A.

    2018-02-01

    A numerical scheme is proposed for the detection of multiple cracks in three-dimensional (3D) structures. The scheme is based on a variant of the extended finite element method (XFEM) and a hybrid optimizer solution. The proposed XFEM variant is particularly well-suited for the simulation of 3D fracture problems, and as such serves as an efficient solution to the so-called forward problem. A set of heuristic optimization algorithms are recombined into a multiscale optimization scheme. The introduced approach proves effective in tackling the complex inverse problem involved, where identification of multiple flaws is sought on the basis of sparse measurements collected near the structural boundary. The potential of the scheme is demonstrated through a set of numerical case studies of varying complexity.

Course-Embedded Mentoring for First-Year Students: Melding Academic Subject Support with Role Modeling, Psycho-Social Support, and Goal Setting

    ERIC Educational Resources Information Center

    Henry, Jim; Bruland, Holly Huff; Sano-Franchini, Jennifer

    2011-01-01

    This article examines a mentoring initiative that embedded advanced students in first-year composition courses to mentor students to excel to the best of their abilities. Mentors attended all classes along with students and conducted many out-of-class individual conferences, documenting each of them using program-implemented work logs. Four hundred…

Meeting stroke survivors' perceived needs: a qualitative study of a community-based exercise and education scheme.

    PubMed

    Reed, Mary; Harrington, Rachel; Duggan, Aine; Wood, Victorine A

    2010-01-01

    A qualitative study using a phenomenological approach to explore stroke survivors' needs and their perceptions of whether a community stroke scheme met these needs. Semi-structured in-depth interviews of 12 stroke survivors, purposively selected from participants attending a new community stroke scheme. Interpretative phenomenological analysis of interviews by two researchers independently.
    Participants attending the community stroke scheme sought to reconstruct their lives in the aftermath of their stroke. To enable this they needed internal resources of confidence and sense of purpose to 'create their social self', and external resources of 'responsive services' and an 'informal support network', to provide direction and encouragement. Participants felt the community stroke scheme met some of these needs through exercise, goal setting and peer group interaction, which included social support and knowledge acquisition. Stroke survivors need a variety of internal and external resources so that they can rebuild their lives positively post stroke. A stroke-specific community scheme, based on exercise, life-centred goal setting, peer support and knowledge acquisition, is an external resource that can help with meeting some of the stroke survivor's needs.

Reducing sojourn points from recurrence plots to improve transition detection: Application to fetal heart rate transitions.

    PubMed

    Zaylaa, Amira; Charara, Jamal; Girault, Jean-Marc

    2015-08-01

    The analysis of biomedical signals demonstrating complexity through recurrence plots is challenging. Quantification of recurrences is often biased by sojourn points that hide dynamic transitions. To overcome this problem, time series have previously been embedded at high dimensions. However, no one has quantified the elimination of sojourn points and the rate of detection, nor has the enhancement of transition detection been investigated. This paper reports our on-going efforts to improve the detection of dynamic transitions from logistic maps and fetal hearts by reducing sojourn points. Three signal-based recurrence plots were developed, i.e. embedded with specific settings, derivative-based and m-time pattern. Determinism, cross-determinism and the percentage of reduced sojourn points were computed to detect transitions. For logistic maps, an increase of 50% and 34.3% in sensitivity of detection over alternatives was achieved by m-time pattern and embedded recurrence plots with specific settings, respectively, and with a 100% specificity. For fetal heart rates, embedded recurrence plots with specific settings provided the best performance, followed by the derivative-based recurrence plot, then the unembedded recurrence plot using the determinism parameter. The relative errors between healthy and distressed fetuses were 153%, 95% and 91%. More than 50% of sojourn points were eliminated, allowing better detection of heart transitions triggered by gaseous exchange factors. This could be significant in improving the diagnosis of fetal state. Copyright © 2014 Elsevier Ltd. All rights reserved.
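A minimal recurrence-plot construction helps make the notion of sojourn points concrete. The sketch below is illustrative only: plain time-delay embedding with an assumed dimension, delay and threshold, not the authors' m-time pattern or derivative-based variants.

```python
# Time-delay embedding followed by a thresholded distance matrix; raising the
# embedding dimension is the classical way to suppress sojourn points that
# blur dynamic transitions in the recurrence plot.
import numpy as np

def delay_embed(x: np.ndarray, dim: int, tau: int) -> np.ndarray:
    """Stack delayed copies of x into state vectors of length `dim`."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def recurrence_plot(x: np.ndarray, dim: int = 3, tau: int = 1, eps: float = 0.1) -> np.ndarray:
    """Binary recurrence matrix R[i, j] = 1 if states i and j are closer than eps."""
    states = delay_embed(x, dim, tau)
    dists = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
    return (dists < eps).astype(int)

# Example: logistic map in its chaotic regime.
x = np.empty(500)
x[0] = 0.4
for k in range(499):
    x[k + 1] = 3.9 * x[k] * (1.0 - x[k])
R = recurrence_plot(x, dim=3, tau=1, eps=0.1)
print("recurrence rate:", R.mean())   # fraction of recurrent state pairs
```

Quantifiers such as determinism are then computed from the diagonal-line structure of `R`; that bookkeeping is omitted here.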
A Robust Zero-Watermarking Algorithm for Audio

    NASA Astrophysics Data System (ADS)

    Chen, Ning; Zhu, Jie

    2007-12-01

    In traditional watermarking algorithms, the insertion of a watermark into the host signal inevitably introduces some perceptible quality degradation. Another problem is the inherent conflict between imperceptibility and robustness. The zero-watermarking technique can solve these problems successfully. Instead of embedding a watermark, the zero-watermarking technique extracts some essential characteristics from the host signal and uses them for watermark detection. However, most of the available zero-watermarking schemes are designed for still images and their robustness is not satisfactory. In this paper, an efficient and robust zero-watermarking technique for audio signals is presented. The multiresolution characteristic of the discrete wavelet transform (DWT), the energy compression characteristic of the discrete cosine transform (DCT), and the Gaussian noise suppression property of the higher-order cumulant are combined to extract essential features from the host audio signal; these are then used for watermark recovery. Simulation results demonstrate the effectiveness of our scheme in terms of inaudibility, detection reliability, and robustness.

Distributed fiber-optic laser-ultrasound generation based on ghost-mode of tilted fiber Bragg gratings.

    PubMed

    Tian, Jiajun; Zhang, Qi; Han, Ming

    2013-03-11

    Active ultrasonic testing is widely used for medical diagnosis, material characterization and structural health monitoring. The ultrasonic transducer is a key component in active ultrasonic testing. Due to their many advantages, such as small size, light weight, and immunity to electromagnetic interference, fiber-optic ultrasonic transducers are particularly attractive for permanent, embedded applications in active ultrasonic testing for structural health monitoring. However, current fiber-optic transducers only allow effective ultrasound generation at a single location at the fiber end. Here we demonstrate a fiber-optic device that can effectively generate ultrasound at multiple, selected locations along a fiber in a controllable manner, based on a smart light tapping scheme that taps out only the light of a particular wavelength for laser-ultrasound generation and allows light of longer wavelengths to pass by without loss.
    Such a scheme may also find applications in remote fiber-optic device tuning and quasi-distributed biochemical fiber-optic sensing.

An automated subtraction of NLO EW infrared divergences

    NASA Astrophysics Data System (ADS)

    Schönherr, Marek

    2018-02-01

    In this paper a generalisation of the Catani-Seymour dipole subtraction method to next-to-leading order electroweak calculations is presented. All singularities due to photon and gluon radiation off both massless and massive partons in the presence of both massless and massive spectators are accounted for. Particular attention is paid to the simultaneous subtraction of singularities of both QCD and electroweak origin, which are present in the next-to-leading order corrections to processes with more than one perturbative order contributing at Born level. Similarly, the embedding of non-dipole-like photon splittings in the dipole subtraction scheme is discussed. The implementation of the formulated subtraction scheme in the framework of the Sherpa Monte-Carlo event generator, including the restriction of the dipole phase space through the α-parameters and the extension of its existing subtraction for NLO QCD calculations, is detailed, and numerous internal consistency checks validating the obtained results are presented.
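For orientation, the generic structure of dipole subtraction at NLO, which the record above generalises to the electroweak sector, can be written as follows; the notation is the standard Catani-Seymour one, and the explicit dipole kernels are process dependent and not reproduced here.

```latex
% The auxiliary cross section d\sigma^A reproduces the real-emission
% singularities pointwise and can be integrated analytically over the
% one-parton subspace, so both integrals below are separately finite.
\sigma^{\mathrm{NLO}}
  = \int_{m+1}\left[\mathrm{d}\sigma^{\mathrm{R}} - \mathrm{d}\sigma^{\mathrm{A}}\right]
  + \int_{m}\left[\mathrm{d}\sigma^{\mathrm{V}} + \int_{1}\mathrm{d}\sigma^{\mathrm{A}}\right],
\qquad
\mathrm{d}\sigma^{\mathrm{A}} = \sum_{\mathrm{dipoles}}
  \mathrm{d}\sigma^{\mathrm{B}} \otimes \mathrm{d}V_{\mathrm{dipole}} .
```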
Incorporation of perceptually adaptive QIM with singular value decomposition for blind audio watermarking

    NASA Astrophysics Data System (ADS)

    Hu, Hwai-Tsu; Chou, Hsien-Hsin; Yu, Chu; Hsu, Ling-Yuan

    2014-12-01

    This paper presents a novel approach for blind audio watermarking. The proposed scheme utilizes the flexibility of the discrete wavelet packet transformation (DWPT) to approximate the critical bands and adaptively determines suitable embedding strengths for carrying out quantization index modulation (QIM). The singular value decomposition (SVD) is employed to analyze the matrix formed by the DWPT coefficients and embed watermark bits by manipulating singular values subject to perceptual criteria. To achieve even better performance, two auxiliary enhancement measures are attached to the developed scheme. Performance evaluation and comparison are demonstrated in the presence of common digital signal processing attacks. Experimental results confirm that the combination of the DWPT, SVD, and adaptive QIM achieves imperceptible data hiding with satisfying robustness and payload capacity. Moreover, the inclusion of a self-synchronization capability allows the developed watermarking system to withstand time-shifting and cropping attacks.

Crypto-Watermarking of Transmitted Medical Images.

    PubMed

    Al-Haj, Ali; Mohammad, Ahmad; Amer, Alaa'

    2017-02-01

    Telemedicine is a booming healthcare practice that has facilitated the exchange of medical data and expertise between healthcare entities. However, the widespread use of telemedicine applications requires a secured scheme to guarantee confidentiality and verify the authenticity and integrity of exchanged medical data. In this paper, we describe a region-based, crypto-watermarking algorithm capable of providing confidentiality, authenticity, and integrity for medical images of different modalities. The proposed algorithm provides authenticity by embedding robust watermarks in images' region of non-interest using SVD in the DWT domain. Integrity is provided at two levels: strict integrity implemented by a cryptographic hash watermark, and content-based integrity implemented by a symmetric encryption-based tamper localization scheme. Confidentiality is achieved as a byproduct of hiding the patient's data in the image. Performance of the algorithm was evaluated with respect to imperceptibility, robustness, capacity, and tamper localization, using different medical images. The results showed the effectiveness of the algorithm in providing security for telemedicine applications.

Decoding mobile-phone image sensor rolling shutter effect for visible light communications

    NASA Astrophysics Data System (ADS)

    Liu, Yang

    2016-01-01

    Optical wireless communication (OWC) using visible light, also known as visible light communication (VLC), has attracted significant attention recently. As the traditional OWC and VLC receivers (Rxs) are based on PIN photo-diodes or avalanche photo-diodes, deploying the complementary metal-oxide-semiconductor (CMOS) image sensor as the VLC Rx is attractive, since nowadays nearly every person has a smart phone with an embedded CMOS image sensor. However, deploying the CMOS image sensor as the VLC Rx is challenging. In this work, we propose and demonstrate two simple contrast ratio (CR) enhancement schemes to improve the contrast of the rolling shutter pattern, and we describe their processing algorithms one by one.
    The experimental results show that both the proposed CR enhancement schemes can significantly mitigate the high-intensity fluctuations of the rolling shutter pattern and improve the bit-error-rate performance.

Secure Publish-Subscribe Protocols for Heterogeneous Medical Wireless Body Area Networks

    PubMed Central

    Picazo-Sanchez, Pablo; Tapiador, Juan E.; Peris-Lopez, Pedro; Suarez-Tangil, Guillermo

    2014-01-01

    Security and privacy issues in medical wireless body area networks (WBANs) constitute a major unsolved concern because of the challenges posed by the scarcity of resources in WBAN devices and the usability restrictions imposed by the healthcare domain. In this paper, we describe a WBAN architecture based on the well-known publish-subscribe paradigm. We present two protocols for publishing data and sending commands to a sensor that guarantee confidentiality and fine-grained access control. Both protocols are based on a recently proposed ciphertext policy attribute-based encryption (CP-ABE) scheme that is lightweight enough to be embedded into wearable sensors. We show how sensors can implement lattice-based access control (LBAC) policies using this scheme, which are highly appropriate for the eHealth domain. We report experimental results with a prototype implementation demonstrating the suitability of our proposed solution. PMID:25460814

Benchmarks for single-phase flow in fractured porous media

    NASA Astrophysics Data System (ADS)

    Flemisch, Bernd; Berre, Inga; Boon, Wietse; Fumagalli, Alessio; Schwenck, Nicolas; Scotti, Anna; Stefansson, Ivar; Tatomir, Alexandru

    2018-01-01

    This paper presents several test cases intended to be benchmarks for numerical schemes for single-phase fluid flow in fractured porous media. A number of solution strategies are compared, including a vertex and two cell-centred finite volume methods, a non-conforming embedded discrete fracture model, a primal and a dual extended finite element formulation, and a mortar discrete fracture model. The proposed benchmarks test the schemes by increasing the difficulties in terms of network geometry, e.g. intersecting fractures, and physical parameters, e.g. low and high fracture-matrix permeability ratio as well as heterogeneous fracture permeabilities. For each problem, the results presented are the number of unknowns, the approximation errors in the porous matrix and in the fractures with respect to a reference solution, and the sparsity and condition number of the discretized linear system.
    All data and meshes used in this study are publicly available for further comparisons.

Coherent states formulation of polymer field theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Man, Xingkun; Villet, Michael C.; Materials Research Laboratory, University of California, Santa Barbara, California 93106

    2014-01-14

    We introduce a stable and efficient complex Langevin (CL) scheme to enable the first direct numerical simulations of the coherent-states (CS) formulation of polymer field theory. In contrast with Edwards' well-known auxiliary-field (AF) framework, the CS formulation does not contain an embedded nonlinear, non-local, implicit functional of the auxiliary fields, and the action of the field theory has a fully explicit, semi-local, and finite-order polynomial character. In the context of a polymer solution model, we demonstrate that the new CS-CL dynamical scheme for sampling fluctuations in the space of coherent states yields results in good agreement with now-standard AF-CL simulations. The formalism is potentially applicable to a broad range of polymer architectures and may facilitate systematic generation of trial actions for use in coarse-graining and numerical renormalization-group studies.

Efficient Measurement of Multiparticle Entanglement with Embedding Quantum Simulator.

    PubMed

    Chen, Ming-Cheng; Wu, Dian; Su, Zu-En; Cai, Xin-Dong; Wang, Xi-Lin; Yang, Tao; Li, Li; Liu, Nai-Le; Lu, Chao-Yang; Pan, Jian-Wei

    2016-02-19

    The quantum measurement of entanglement is a demanding task in the field of quantum information. Here, we report the direct and scalable measurement of multiparticle entanglement with embedding photonic quantum simulators. In this embedding framework [R. Di Candia et al., Phys. Rev. Lett. 111, 240502 (2013)], the N-qubit entanglement, which does not associate with a physical observable directly, can be efficiently measured with only two (for even N) and six (for odd N) local measurement settings.
    Our experiment uses multiphoton quantum simulators to mimic dynamical concurrence and three-tangle entangled systems and to track their entanglement evolutions.

Embedding a Palliative Approach in Nursing Care Delivery

    PubMed Central

    Porterfield, Pat; Roberts, Della; Lee, Joyce; Liang, Leah; Reimer-Kirkham, Sheryl; Pesut, Barb; Schalkwyk, Tilly; Stajduhar, Kelli; Tayler, Carolyn; Baumbusch, Jennifer; Thorne, Sally

    2017-01-01

    A palliative approach involves adapting and integrating principles and values from palliative care into the care of persons who have life-limiting conditions throughout their illness trajectories. The aim of this research was to determine what approaches to nursing care delivery support the integration of a palliative approach in hospital, residential, and home care settings. The findings substantiate the importance of embedding the values and tenets of a palliative approach into nursing care delivery, the roles that nurses have in working with interdisciplinary teams to integrate a palliative approach, and the need for practice supports to facilitate that embedding and integration. PMID:27930401

Dispersion-relation-preserving finite difference schemes for computational acoustics

    NASA Technical Reports Server (NTRS)
    Tam, Christopher K. W.; Webb, Jay C.

    1993-01-01

    Time-marching dispersion-relation-preserving (DRP) schemes can be constructed by optimizing the finite difference approximations of the space and time derivatives in wave number and frequency space. A set of radiation and outflow boundary conditions compatible with the DRP schemes is constructed, and a sequence of numerical simulations is conducted to test the effectiveness of the DRP schemes and the radiation and outflow boundary conditions. Close agreement with the exact solutions is obtained.
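As a brief, hedged illustration of the optimization idea behind the DRP construction described above (the stencil width N, the antisymmetric coefficients a_j, and the range of the wavenumber integral are scheme choices not reproduced from the paper), the spatial derivative and its effective wavenumber can be written as:

```latex
% Centered (2N+1)-point approximation of du/dx with antisymmetric
% coefficients a_{-j} = -a_j; the DRP idea is to choose the a_j that
% minimize the integrated mismatch E between the effective wavenumber
% \bar{k} and the exact wavenumber k over a band of resolved scales.
\left(\frac{\partial u}{\partial x}\right)_{l}
  \simeq \frac{1}{\Delta x}\sum_{j=-N}^{N} a_{j}\, u_{l+j},
\qquad
\bar{k}\,\Delta x = 2\sum_{j=1}^{N} a_{j}\sin\!\left(j\,k\,\Delta x\right),
\qquad
E = \int \left|\,\bar{k}\,\Delta x - k\,\Delta x\,\right|^{2}\,\mathrm{d}(k\,\Delta x).
```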
A new semiclassical decoupling scheme for electronic transitions in molecular collisions - Application to vibrational-to-electronic energy transfer

    NASA Technical Reports Server (NTRS)

    Lee, H.-W.; Lam, K. S.; Devries, P. L.; George, T. F.

    1980-01-01

    A new semiclassical decoupling scheme (the trajectory-based decoupling scheme) is introduced in a computational study of vibrational-to-electronic energy transfer for a simple model system that simulates collinear atom-diatom collisions. The probability of energy transfer (P) is calculated quasiclassically using the new scheme as well as quantum mechanically as a function of the atomic electronic-energy separation (lambda), with overall good agreement between the two sets of results. Classical mechanics with the new decoupling scheme is found to be capable of predicting resonance behavior, whereas an earlier decoupling scheme (the coordinate-based decoupling scheme) failed. Interference effects are not exhibited in the P vs lambda results.

Research on numerical control system based on S3C2410 and MCX314AL

    NASA Astrophysics Data System (ADS)

    Ren, Qiang; Jiang, Tingbiao

    2008-10-01

    With the rapid development of micro-computer technology, embedded systems, CNC technology and integrated circuits, a numerical control system with powerful functions can be realized with a few high-speed CPU chips and RISC (Reduced Instruction Set Computing) chips of small size and high stability. In addition, the real-time operating system also makes the attainment of an embedded system possible. Developing an NC system based on embedded technology can overcome some shortcomings of common PC-based CNC systems, such as wasted resources, low control precision, low frequency and low integration. This paper discusses a hardware platform for an ENC (Embedded Numerical Control) system based on the embedded processor chip ARM (Advanced RISC Machines) S3C2410 and the DSP (Digital Signal Processor) MCX314AL, and introduces the process of developing the ENC system software. Finally, the MCX314AL driver is written for the embedded Linux operating system. Embedded Linux handles multitasking well and satisfies the real-time and reliability requirements of motion control. With embedded technology, the NC system makes the best use of resources and remains compact, providing a wealth of functions and superior performance at a lower cost. ENC can therefore be regarded as the direction of future development.

On the Ck-embedding of Lorentzian manifolds in Ricci-flat spaces

    NASA Astrophysics Data System (ADS)

    Avalos, R.; Dahia, F.; Romero, C.

    2018-05-01

    In this paper, we investigate the problem of non-analytic embeddings of Lorentzian manifolds in Ricci-flat semi-Riemannian spaces. In order to do this, we first review some relevant results in the area and then motivate both the mathematical and physical interest in this problem. We show that any n-dimensional compact Lorentzian manifold (M^n, g), with g in the Sobolev space H^{s+3}, s > n/2, admits an isometric embedding in a (2n + 2)-dimensional Ricci-flat semi-Riemannian manifold. The sharpest result available for these types of embeddings, in the general setting, comes as a corollary of Greene's remarkable embedding theorems [R. Greene, Mem. Am. Math. Soc. 97, 1 (1970)], which guarantee the embedding of a compact n-dimensional semi-Riemannian manifold into an n(n + 5)-dimensional semi-Euclidean space, thereby guaranteeing the embedding into a Ricci-flat space of the same dimension. The theorem presented here improves this corollary by n^2 + 3n - 2 codimensions, by replacing the Riemann-flat condition with the Ricci-flat one from the beginning. Finally, we present a corollary of this theorem, which shows that a compact strip in an n-dimensional globally hyperbolic space-time can be embedded in a (2n + 2)-dimensional Ricci-flat semi-Riemannian manifold.

Unsupervised image matching based on manifold alignment.

    PubMed

    Pei, Yuru; Huang, Fengchun; Shi, Fuhao; Zha, Hongbin

    2012-08-01

    This paper challenges the issue of automatic matching between two image sets with similar intrinsic structures and different appearances, especially when there is no prior correspondence. An unsupervised manifold alignment framework is proposed to establish correspondence between data sets by a mapping function in the mutual embedding space. We introduce a local similarity metric based on parameterized distance curves to represent the connection of one point with the rest of the manifold. A small set of valid feature pairs can be found without manual interaction by matching the distance curve of one manifold with the curve cluster of the other manifold.
    To avoid potential confusions in image matching, we propose an extended affine transformation to solve the nonrigid alignment in the embedding space. The comparatively tight alignments and the structure preservation can be obtained simultaneously. The point pairs with the minimum distance after alignment are viewed as the matchings. We apply manifold alignment to image set matching problems. The correspondence between image sets of different poses, illuminations, and identities can be established effectively by our approach.

Should learners reason one step at a time? A randomised trial of two diagnostic scheme designs.

    PubMed

    Blissett, Sarah; Morrison, Deric; McCarty, David; Sibbald, Matthew

    2017-04-01

    Making a diagnosis can be difficult for learners as they must integrate multiple clinical variables. Diagnostic schemes can help learners with this complex task. A diagnostic scheme is an algorithm that organises possible diagnoses by assigning signs or symptoms (e.g. systolic murmur) to groups of similar diagnoses (e.g. aortic stenosis and aortic sclerosis) and provides distinguishing features to help discriminate between similar diagnoses (e.g. carotid pulse). The current literature does not identify whether scheme layouts should guide learners to reason one step at a time in a terminally branching scheme or weigh multiple variables simultaneously in a hybrid scheme. We compared diagnostic accuracy, perceptual errors and cognitive load using two scheme layouts for cardiac auscultation. Focused on the task of identifying murmurs on Harvey, a cardiopulmonary simulator, 86 internal medicine residents used two scheme layouts. The terminally branching scheme organised the information into single variable decisions. The hybrid scheme combined single variable decisions with a chart integrating multiple distinguishing features. Using a crossover design, participants completed one set of murmurs (diastolic or systolic) with either the terminally branching or the hybrid scheme. The second set of murmurs was completed with the other scheme. A repeated measures MANOVA was performed to compare diagnostic accuracy, perceptual errors and cognitive load between the scheme layouts. There was a main effect of the scheme layout (Wilks' λ = 0.841, F(3,80) = 5.1, p = 0.003). Use of a terminally branching scheme was associated with increased diagnostic accuracy (65 versus 53%, p = 0.02), fewer perceptual errors (0.61 versus 0.98 errors, p = 0.001) and lower cognitive load (3.1 versus 3.5/7, p = 0.023). The terminally branching scheme was associated with improved diagnostic accuracy, fewer perceptual errors and lower cognitive load, suggesting that terminally branching schemes are effective for improving diagnostic accuracy. These findings can inform the design of schemes and other clinical decision aids.
    © 2017 John Wiley & Sons Ltd and The Association for the Study of Medical Education.

Visualizing and enhancing a deep learning framework using patients age and gender for chest x-ray image retrieval

    NASA Astrophysics Data System (ADS)

    Anavi, Yaron; Kogan, Ilya; Gelbart, Elad; Geva, Ofer; Greenspan, Hayit

    2016-03-01

    We explore the combination of text metadata, such as patients' age and gender, with image-based features for X-ray chest pathology image retrieval. We focus on a feature set extracted from a pre-trained deep convolutional network shown in earlier work to achieve state-of-the-art results. Two distance measures are explored: a descriptor-based measure, which computes the distance between image descriptors, and a classification-based measure, which compares the corresponding SVM classification probabilities. We show that retrieval results improve once the age and gender information is combined with the features extracted from the last layers of the network, with the best results obtained using the classification-based scheme. Visualization of the X-ray data is presented by embedding the high-dimensional deep learning features in a 2-D space while preserving the pairwise distances using the t-SNE algorithm. The 2-D visualization gives the unique ability to find groups of X-ray images that are similar to the query image and among themselves, a characteristic not available in a traditional 1-D ranking.

Research on the supercapacitor support schemes for LVRT of variable-frequency drive in the thermal power plant

    NASA Astrophysics Data System (ADS)

    Han, Qiguo; Zhu, Kai; Shi, Wenming; Wu, Kuayu; Chen, Kai

    2018-02-01

    In order to solve the problem of low voltage ride through (LVRT) for the variable-frequency drives (VFDs) of major auxiliary equipment in a thermal power plant, a scheme in which a supercapacitor is paralleled onto the DC link of the VFD is put forward, and two solutions, direct parallel support and voltage-boost parallel support of the supercapacitor, are proposed. The capacitor values for the relevant motor loads are calculated according to the law of energy conservation and verified by Matlab simulation. Finally, a test prototype is set up, and the test results prove the feasibility of the proposed schemes.
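The energy-conservation sizing mentioned in the record above can be illustrated with a back-of-the-envelope calculation; all numbers below are invented for illustration and are not taken from the paper.

```python
# Capacitor sizing by energy conservation: the supercapacitor on the DC link
# must supply the drive power P for the ride-through time t while its voltage
# sags from the nominal DC-link voltage to the drive's undervoltage trip level.
P_load = 75e3      # W, assumed auxiliary-drive power
t_ride = 0.5       # s, assumed low-voltage ride-through duration
V_dc   = 540.0     # V, nominal DC-link voltage (assumed)
V_min  = 400.0     # V, minimum DC-link voltage the VFD tolerates (assumed)

# Energy balance: 0.5 * C * (V_dc**2 - V_min**2) >= P_load * t_ride
C_required = 2.0 * P_load * t_ride / (V_dc**2 - V_min**2)
print(f"required capacitance ~ {C_required:.2f} F")   # ~0.57 F for these numbers
```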
Renormalization of QCD in the interpolating momentum subtraction scheme at three loops

    NASA Astrophysics Data System (ADS)

    Gracey, J. A.; Simms, R. M.

    2018-04-01

    We introduce a more general set of kinematic renormalization schemes than the original momentum subtraction schemes of Celmaster and Gonsalves. These new schemes will depend on a parameter ω, which tags the external momentum of one of the legs of the three-point vertex functions in QCD. In each of the three new schemes, we renormalize QCD in the Landau and maximal Abelian gauges and establish the three-loop renormalization group functions in each gauge. For an application, we evaluate two critical exponents at the Banks-Zaks fixed point and demonstrate that their values appear to be numerically scheme independent in a subrange of the conformal window.

The data embedding method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sandford, M.T. II; Bradley, J.N.; Handel, T.G.

    Data embedding is a new steganographic method for combining digital information sets. This paper describes the data embedding method and gives examples of its application using software written in the C-programming language. Sandford and Handel produced a computer program (BMPEMBED, Ver. 1.51 written for IBM PC/AT or compatible, MS/DOS Ver. 3.3 or later) that implements data embedding in an application for digital imagery. Information is embedded into, and extracted from, Truecolor or color-pallet images in Microsoft® bitmap (.BMP) format. Hiding data in the noise component of a host, by means of an algorithm that modifies or replaces the noise bits, is termed 'steganography.' Data embedding differs markedly from conventional steganography, because it uses the noise component of the host to insert information with few or no modifications to the host data values or their statistical properties. Consequently, the entropy of the host data is affected little by using data embedding to add information. The data embedding method applies to host data compressed with transform, or 'lossy' compression algorithms, as for example ones based on discrete cosine transform and wavelet functions. Analysis of the host noise generates a key required for embedding and extracting the auxiliary data from the combined data. The key is stored easily in the combined data. Images without the key cannot be processed to extract the embedded information. To provide security for the embedded data, one can remove the key from the combined data and manage it separately. The image key can be encrypted and stored in the combined data or transmitted separately as a ciphertext much smaller in size than the embedded data.
    The key size is typically ten to one-hundred bytes, and it is derived from the original host data by an analysis algorithm.

Enhancing of chemical compound and drug name recognition using representative tag scheme and fine-grained tokenization.

    PubMed

    Dai, Hong-Jie; Lai, Po-Ting; Chang, Yung-Chun; Tsai, Richard Tzong-Han

    2015-01-01

    The functions of chemical compounds and drugs that affect biological processes and their particular effect on the onset and treatment of diseases have attracted increasing interest with the advancement of research in the life sciences.
  412. Enhancing of chemical compound and drug name recognition using representative tag scheme and fine-grained tokenization.

    PubMed

    Dai, Hong-Jie; Lai, Po-Ting; Chang, Yung-Chun; Tsai, Richard Tzong-Han

    2015-01-01

    The functions of chemical compounds and drugs that affect biological processes and their particular effect on the onset and treatment of diseases have attracted increasing interest with the advancement of research in the life sciences. To extract knowledge from the extensive literature on such compounds and drugs, the organizers of BioCreative IV administered the CHEMical Compound and Drug Named Entity Recognition (CHEMDNER) task to establish a standard dataset for evaluating state-of-the-art chemical entity recognition methods. This study introduces the approach of our CHEMDNER system. Instead of emphasizing the development of novel feature sets for machine learning, this study investigates the effect of various tag schemes on the recognition of the names of chemicals and drugs by using conditional random fields. Experiments were conducted using combinations of different tokenization strategies and tag schemes to investigate the effects of tag set selection and tokenization method on the CHEMDNER task. This study presents the CHEMDNER performance of three more representative tag schemes (IOBE, IOBES, and IOB12E) when applied to a widely utilized IOB tag set and combined with coarse-/fine-grained tokenization methods. The experimental results reveal that the fine-grained tokenization strategy performs best in terms of precision, recall and F-scores when the IOBES tag set is utilized. The IOBES model with fine-grained tokenization yielded the best F-scores in the six chemical entity categories other than the "Multiple" entity category. Nonetheless, no significant improvement was observed when a more representative tag scheme was used with the coarse- or fine-grained tokenization rules. The best F-scores achieved using the developed system on the test dataset of the CHEMDNER task were 0.833 and 0.815 for the chemical document indexing and the chemical entity mention recognition tasks, respectively. The results herein highlight the importance of tag set selection and the use of different tokenization strategies. Fine-grained tokenization combined with the IOBES tag set most effectively recognizes chemical and drug names. To the best of the authors' knowledge, this is the first comprehensive investigation of the use of various tag schemes combined with different tokenization strategies for the recognition of chemical entities.
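    A minimal sketch of the IOBES tag scheme discussed in the record above: single-token entities are tagged S-, multi-token entities B-/I-/E-, and everything else O. The token list, span format and entity label used here are assumptions for illustration, not the CHEMDNER data format.

        def to_iobes(tokens, spans):
            """spans: list of (start_idx, end_idx_inclusive, entity_type) over token indices."""
            tags = ["O"] * len(tokens)
            for start, end, etype in spans:
                if start == end:
                    tags[start] = f"S-{etype}"
                else:
                    tags[start] = f"B-{etype}"
                    for i in range(start + 1, end):
                        tags[i] = f"I-{etype}"
                    tags[end] = f"E-{etype}"
            return tags

        tokens = ["Aspirin", "inhibits", "cyclooxygenase", "-", "2"]
        spans = [(0, 0, "CHEM"), (2, 4, "CHEM")]
        print(list(zip(tokens, to_iobes(tokens, spans))))
        # [('Aspirin', 'S-CHEM'), ('inhibits', 'O'), ('cyclooxygenase', 'B-CHEM'), ('-', 'I-CHEM'), ('2', 'E-CHEM')]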
  413. A Nonrigid Kernel-Based Framework for 2D-3D Pose Estimation and 2D Image Segmentation

    PubMed Central

    Sandhu, Romeil; Dambreville, Samuel; Yezzi, Anthony; Tannenbaum, Allen

    2013-01-01

    In this work, we present a nonrigid approach to jointly solving the tasks of 2D-3D pose estimation and 2D image segmentation. In general, most frameworks that couple both pose estimation and segmentation assume that one has exact knowledge of the 3D object. However, under nonideal conditions, this assumption may be violated if only a general class to which a given shape belongs is given (e.g., cars, boats, or planes). Thus, we propose to solve the 2D-3D pose estimation and 2D image segmentation via nonlinear manifold learning of 3D embedded shapes for a general class of objects or deformations for which one may not be able to associate a skeleton model. Specifically, the novelty of our method is threefold: First, we present and derive a gradient flow for the task of nonrigid pose estimation and segmentation. Second, due to the possible nonlinear structures of one's training set, we evolve the preimage obtained through kernel PCA for the task of shape analysis. Third, we show that the derivation for shape weights is general. This allows us to use various kernels, as well as other statistical learning methodologies, with only minimal changes needing to be made to the overall shape evolution scheme. In contrast with other techniques, we approach the nonrigid problem, which is an infinite-dimensional task, with a finite-dimensional optimization scheme. More importantly, we do not explicitly need to know the interaction between various shapes such as that needed for skeleton models as this is done implicitly through shape learning. We provide experimental results on several challenging pose estimation and segmentation scenarios. PMID:20733218

  414. A standard test case suite for two-dimensional linear transport on the sphere: results from a collection of state-of-the-art schemes

    NASA Astrophysics Data System (ADS)

    Lauritzen, P. H.; Ullrich, P. A.; Jablonowski, C.; Bosler, P. A.; Calhoun, D.; Conley, A. J.; Enomoto, T.; Dong, L.; Dubey, S.; Guba, O.; Hansen, A. B.; Kaas, E.; Kent, J.; Lamarque, J.-F.; Prather, M. J.; Reinert, D.; Shashkin, V. V.; Skamarock, W. C.; Sørensen, B.; Taylor, M. A.; Tolstykh, M. A.

    2013-09-01

    Recently, a standard test case suite for 2-D linear transport on the sphere was proposed to assess important aspects of accuracy in geophysical fluid dynamics with a "minimal" set of idealized model configurations/runs/diagnostics. Here we present results from 19 state-of-the-art transport scheme formulations based on finite-difference/finite-volume methods as well as emerging (in the context of atmospheric/oceanographic sciences) Galerkin methods. Discretization grids range from traditional regular latitude-longitude grids to more isotropic domain discretizations such as icosahedral and cubed-sphere tessellations of the sphere. The schemes are evaluated using a wide range of diagnostics in idealized flow environments. Accuracy is assessed in single- and two-tracer configurations using conventional error norms as well as novel diagnostics designed for climate and climate-chemistry applications. In addition, algorithmic considerations that may be important for computational efficiency are reported on. The latter is inevitably computing-platform dependent. The ensemble of results from a wide variety of schemes presented here helps shed light on the ability of the test case suite diagnostics and flow settings to discriminate between algorithms and provide insights into accuracy in the context of global atmospheric/ocean modeling. A library of benchmark results is provided to facilitate scheme intercomparison and model development.
    Simple software and data sets are made available to facilitate the process of model evaluation and scheme intercomparison.

  415. A standard test case suite for two-dimensional linear transport on the sphere: results from a collection of state-of-the-art schemes

    NASA Astrophysics Data System (ADS)

    Lauritzen, P. H.; Ullrich, P. A.; Jablonowski, C.; Bosler, P. A.; Calhoun, D.; Conley, A. J.; Enomoto, T.; Dong, L.; Dubey, S.; Guba, O.; Hansen, A. B.; Kaas, E.; Kent, J.; Lamarque, J.-F.; Prather, M. J.; Reinert, D.; Shashkin, V. V.; Skamarock, W. C.; Sørensen, B.; Taylor, M. A.; Tolstykh, M. A.

    2014-01-01

    Recently, a standard test case suite for 2-D linear transport on the sphere was proposed to assess important aspects of accuracy in geophysical fluid dynamics with a "minimal" set of idealized model configurations/runs/diagnostics. Here we present results from 19 state-of-the-art transport scheme formulations based on finite-difference/finite-volume methods as well as emerging (in the context of atmospheric/oceanographic sciences) Galerkin methods. Discretization grids range from traditional regular latitude-longitude grids to more isotropic domain discretizations such as icosahedral and cubed-sphere tessellations of the sphere. The schemes are evaluated using a wide range of diagnostics in idealized flow environments. Accuracy is assessed in single- and two-tracer configurations using conventional error norms as well as novel diagnostics designed for climate and climate-chemistry applications. In addition, algorithmic considerations that may be important for computational efficiency are reported on. The latter is inevitably computing platform dependent. The ensemble of results from a wide variety of schemes presented here helps shed light on the ability of the test case suite diagnostics and flow settings to discriminate between algorithms and provide insights into accuracy in the context of global atmospheric/ocean modeling. A library of benchmark results is provided to facilitate scheme intercomparison and model development. Simple software and data sets are made available to facilitate the process of model evaluation and scheme intercomparison.
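    One of the conventional error norms referred to in the two transport test suite records above can be sketched as a normalized, area-weighted l2 error between a numerical tracer field and a reference field. The variable names and the uniform-grid toy example are assumptions, not the published diagnostic code.

        import numpy as np

        def l2_error(q, q_ref, cell_area):
            # normalized l2 error: sqrt( sum(A*(q-q_ref)^2) / sum(A*q_ref^2) )
            num = np.sum(cell_area * (q - q_ref) ** 2)
            den = np.sum(cell_area * q_ref ** 2)
            return np.sqrt(num / den)

        # toy example on a uniform 1-D grid
        q_ref = np.sin(np.linspace(0, np.pi, 100))
        q = q_ref + 1e-3 * np.random.randn(100)
        print(l2_error(q, q_ref, np.ones(100)))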
  416. Evaluating the effectiveness of self-administration of medication (SAM) schemes in the hospital setting: a systematic review of the literature.

    PubMed

    Richardson, Suzanna J; Brooks, Hannah L; Bramley, George; Coleman, Jamie J

    2014-01-01

    Self-administration of medicines is believed to increase patients' understanding about their medication and to promote their independence and autonomy in the hospital setting. The effect of inpatient self-administration of medication (SAM) schemes on patients, staff and institutions is currently unclear. To systematically review the literature relating to the effect of SAM schemes on the following outcomes: patient knowledge, patient compliance/medication errors, success in self-administration, patient satisfaction, staff satisfaction, staff workload, and costs. Keyword and text word searches of online databases were performed between January and March 2013. Included articles described and evaluated inpatient SAM schemes. Case studies and anecdotal studies were excluded. 43 papers were included for final analysis. Due to the heterogeneity of results and unclear findings it was not possible to perform a quantitative synthesis of results. Participation in SAM schemes often led to increased knowledge about drugs and drug regimens, but not side effects. However, the effect of SAM schemes on patient compliance/medication errors was inconclusive. Patients and staff were highly satisfied with their involvement in SAM schemes. SAM schemes appear to provide some benefits (e.g. increased patient knowledge), but their effect on other outcomes (e.g. compliance) is unclear. Few studies of high methodological quality using validated outcome measures exist. Inconsistencies in both measuring and reporting outcomes across studies make it challenging to compare results and draw substantive conclusions about the effectiveness of SAM schemes.

  417. The role of axis embedding on rigid rotor decomposition analysis of variational rovibrational wave functions.

    PubMed

    Szidarovszky, Tamás; Fábri, Csaba; Császár, Attila G

    2012-05-07

    Approximate rotational characterization of variational rovibrational wave functions via the rigid rotor decomposition (RRD) protocol is developed for Hamiltonians based on arbitrary sets of internal coordinates and axis embeddings. An efficient and general procedure is given that allows employing the Eckart embedding with arbitrary polyatomic Hamiltonians through a fully numerical approach. RRD tables formed by projecting rotational-vibrational wave functions into products of rigid-rotor basis functions and previously determined vibrational eigenstates yield rigid-rotor labels for rovibrational eigenstates by selecting the largest overlap. Embedding-dependent RRD analyses are performed, up to high energies and rotational excitations, for the H₂¹⁶O isotopologue of the water molecule. Irrespective of the embedding chosen, the RRD procedure proves effective in providing unambiguous rotational assignments at low energies and J values. Rotational labeling of rovibrational states of H₂¹⁶O proves to be increasingly difficult beyond about 10,000 cm⁻¹, close to the barrier to linearity of the water molecule.
    For medium energies and excitations the Eckart embedding yields the largest RRD coefficients, thus providing the largest number of unambiguous rotational labels.

  418. Implementation of density functional embedding theory within the projector-augmented-wave method and applications to semiconductor defect states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Kuang; Libisch, Florian; Carter, Emily A., E-mail: eac@princeton.edu

    We report a new implementation of the density functional embedding theory (DFET) in the VASP code, using the projector-augmented-wave (PAW) formalism. Newly developed algorithms allow us to efficiently perform optimized effective potential optimizations within PAW. The new algorithm generates robust and physically correct embedding potentials, as we verified using several test systems including a covalently bound molecule, a metal surface, and bulk semiconductors. We show that with the resulting embedding potential, embedded cluster models can reproduce the electronic structure of point defects in bulk semiconductors, thereby demonstrating the validity of DFET in semiconductors for the first time. Compared to our previous version, the new implementation of DFET within VASP affords use of all features of VASP (e.g., a systematic PAW library, a wide selection of functionals, a more flexible choice of U correction formalisms, and faster computational speed) with DFET. Furthermore, our results are fairly robust with respect to both plane-wave and Gaussian type orbital basis sets in the embedded cluster calculations. This suggests that the density functional embedding method is potentially an accurate and efficient way to study properties of isolated defects in semiconductors.

  419. A Target Coverage Scheduling Scheme Based on Genetic Algorithms in Directional Sensor Networks

    PubMed Central

    Gil, Joon-Min; Han, Youn-Hee

    2011-01-01

    As a promising tool for monitoring the physical world, directional sensor networks (DSNs) consisting of a large number of directional sensors are attracting increasing attention. As directional sensors in DSNs have limited battery power and restricted angles of sensing range, maximizing the network lifetime while monitoring all the targets in a given area remains a challenge. A major technique to conserve the energy of directional sensors is to use a node wake-up scheduling protocol by which some sensors remain active to provide sensing services, while the others are inactive to conserve their energy.
    In this paper, we first address a Maximum Set Covers for DSNs (MSCD) problem, which is known to be NP-complete, and present a greedy algorithm-based target coverage scheduling scheme that can solve this problem by heuristics. This scheme is used as a baseline for comparison. We then propose a target coverage scheduling scheme based on a genetic algorithm that can find the optimal cover sets to extend the network lifetime while monitoring all targets by the evolutionary global search technique. To verify and evaluate these schemes, we conducted simulations and showed that the schemes can contribute to extending the network lifetime. Simulation results indicated that the genetic algorithm-based scheduling scheme had better performance than the greedy algorithm-based scheme in terms of maximizing network lifetime. PMID:22319387
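    A greedy heuristic in the spirit of the baseline scheme described in the record above: build disjoint cover sets one at a time, each time picking the sensor direction that covers the most still-uncovered targets. The data structures and function name are assumptions, not the paper's implementation.

        def greedy_cover_sets(directions, targets):
            """directions: {sensor_id: set of targets it covers}; returns a list of cover sets."""
            covers = []
            available = dict(directions)
            while True:
                uncovered = set(targets)
                chosen = []
                pool = dict(available)
                while uncovered and pool:
                    best = max(pool, key=lambda s: len(pool[s] & uncovered))
                    if not pool[best] & uncovered:
                        break
                    uncovered -= pool[best]
                    chosen.append(best)
                    del pool[best]
                if uncovered:        # cannot build another complete cover
                    break
                covers.append(chosen)
                for s in chosen:     # each sensor serves in only one cover set
                    del available[s]
            return covers

        dirs = {"s1": {1, 2}, "s2": {2, 3}, "s3": {1, 3}, "s4": {1, 2, 3}}
        print(greedy_cover_sets(dirs, targets={1, 2, 3}))   # e.g. [['s4'], ['s1', 's2']]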
  420. Implementing technology-based embedded assessment in the home and community life of individuals aging with disabilities: a participatory research and development study.

    PubMed

    Chen, Ke-Yu; Harniss, Mark; Patel, Shwetak; Johnson, Kurt

    2014-03-01

    The goal of the study was to investigate the accuracy, feasibility and acceptability of implementing an embedded assessment system in the homes of individuals aging with disabilities. We developed and studied a location tracking system, UbiTrack, which can be used for both indoor and outdoor location sensing. The system was deployed in the homes of five participants with spinal cord injuries, muscular dystrophy, multiple sclerosis and late effects of polio. We collected sensor data throughout the deployment, conducted pre and post interviews and collected weekly diaries to measure ground truth. The system was deployed successfully although there were challenges related to system installation and calibration. System accuracy ranged from 62% to 87% depending upon room configuration and number of wireless access points installed. In general, participants reported that the system was easy to use, did not require significant effort on their part and did not interfere with their daily lives. Embedded assessment has great potential as a mechanism to gather ongoing information about the health of individuals aging with disabilities; however, there are significant challenges to its implementation in real-world settings with people with disabilities that will need to be resolved before it can be practically implemented. Technology-based embedded assessment has the potential to promote health for adults with disabilities and allow for aging in place. It may also reduce the difficulty, cost and intrusiveness of health measurement. Many new commercial and non-commercial products are available to support embedded assessment; however, most products have not been well-tested in real-world environments with individuals aging with disability. Community settings and the diverse population of people with disabilities pose significant challenges to the implementation of embedded assessment systems.

  421. Reversible watermarking for authentication of DICOM images.

    PubMed

    Zain, J M; Baldwin, L P; Clarke, M

    2004-01-01

    We propose a watermarking scheme that can recover the original image from the watermarked one. The purpose is to verify the integrity and authenticity of DICOM images. We used 800×600×8-bit ultrasound (US) images in our experiment. SHA-256 of the whole image is embedded in the least significant bits of the RONI (Region of Non-Interest). If the image has not been altered, the watermark will be extracted and the original image will be recovered. SHA-256 of the recovered image will be compared with the extracted watermark for authentication.
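    A simplified sketch of the embed-and-verify idea in the DICOM record above: hash the image with the RONI least significant bits cleared, then write the digest bits into those LSBs so a receiver can recompute and compare. The published scheme is additionally reversible, which this sketch does not reproduce; the array and mask names are assumptions.

        import hashlib
        import numpy as np

        def embed_digest(img: np.ndarray, roni: np.ndarray) -> np.ndarray:
            """img: uint8 grayscale image; roni: boolean mask with at least 256 True pixels."""
            work = img.copy()
            work[roni] &= 0xFE                          # clear RONI LSBs before hashing
            digest_bits = np.unpackbits(np.frombuffer(
                hashlib.sha256(work.tobytes()).digest(), dtype=np.uint8))
            idx = np.flatnonzero(roni.ravel())[:digest_bits.size]
            flat = work.ravel()
            flat[idx] |= digest_bits                    # write the 256 digest bits into RONI LSBs
            return flat.reshape(img.shape)

        def verify(img: np.ndarray, roni: np.ndarray) -> bool:
            idx = np.flatnonzero(roni.ravel())[:256]
            extracted = np.packbits(img.ravel()[idx] & 1).tobytes()
            work = img.copy()
            work[roni] &= 0xFE                          # recompute the hash over the cleared image
            return hashlib.sha256(work.tobytes()).digest() == extracted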
  422. What's in a Name: A Comparative Analysis of the United States Real ID Act and the United Kingdom's National Identity Scheme

    DTIC Science & Technology

    2015-12-01

    …has always been important to modern governments, but the issue has become much more pressing in the Internet age, when a person's digital identity… nationals in the UK are being given different cards. 4. Place of birth 5. Signature (digitally embedded in the card) 6. Date of card issue and date… license or personal identification card number; a digital photograph of the person; the person's address of principal residence; the person's…

  423. Directional excitation without breaking reciprocity

    DOE PAGES

    Ramezani, Hamidreza; Dubois, Marc; Wang, Yuan; ...

    2016-09-02

    Here, we propose a mechanism for directional excitation without breaking reciprocity. This is achieved by embedding an impedance matched parity-time symmetric potential in a three-port system. The amplitude distribution within the gain and loss regions is strongly influenced by the direction of the incoming field. Consequently, the excitation of the third port is contingent on the direction of incidence while transmission in the main channel is immune. This design improves the four-port directional coupler scheme, as there is no need to implement an anechoic termination to one of the ports.

  424. Near-complete teleportation of a superposed coherent state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheong, Yong Wook; Kim, Hyunjae; Lee, Hai-Woong

    2004-09-01

    The four Bell-type entangled coherent states, |α⟩|−α⟩ ± |−α⟩|α⟩ and |α⟩|α⟩ ± |−α⟩|−α⟩, can be discriminated with a high probability using only linear optical means, as long as α is not too small. Based on this observation, we propose a simple scheme to almost completely teleport a superposed coherent state. The nonunitary transformation that is required to complete the teleportation can be achieved by embedding the receiver's field state in a larger Hilbert space consisting of the field and a single atom and performing a unitary transformation on this Hilbert space.

  425. Advanced microprocessor based power protection system using artificial neural network techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Z.; Kalam, A.; Zayegh, A.

    This paper describes an intelligent embedded microprocessor based system for fault classification in a power system protection system using advanced 32-bit microprocessor technology.
    The paper demonstrates the development of a protective relay to provide overcurrent protection schemes for fault detection. It also describes a method for power fault classification in a three-phase system based on the use of neural network technology. The proposed design is implemented and tested on a single-line three-phase power system in a power laboratory. Both the hardware and software development are described in detail.

  426. A numerical study of the 2- and 3-dimensional unsteady Navier-Stokes equations in velocity-vorticity variables using compact difference schemes

    NASA Technical Reports Server (NTRS)

    Gatski, T. B.; Grosch, C. E.

    1984-01-01

    A compact finite-difference approximation to the unsteady Navier-Stokes equations in velocity-vorticity variables is used to numerically simulate a number of flows. These include two-dimensional laminar flow of a vortex evolving over a flat plate with an embedded cavity, the unsteady flow over an elliptic cylinder, and aspects of the transient dynamics of the flow over a rearward facing step. The methodology required to extend the two-dimensional formulation to three dimensions is presented.

  427. Analysis and control of supersonic vortex breakdown flows

    NASA Technical Reports Server (NTRS)

    Kandil, Osama A.

    1990-01-01

    Analysis and computation of steady, compressible, quasi-axisymmetric flow of an isolated, slender vortex are considered. The compressible, Navier-Stokes equations are reduced to a simpler set by using the slenderness and quasi-axisymmetry assumptions. The resulting set along with a compatibility equation are transformed from the diverging physical domain to a rectangular computational domain. Solving for a compatible set of initial profiles and specifying a compatible set of boundary conditions, the equations are solved using a type-differencing scheme. Vortex breakdown locations are detected by the failure of the scheme to converge. Computational examples include isolated vortex flows at different Mach numbers, external axial-pressure gradients and swirl ratios.

  428. Range Sidelobe Suppression Using Complementary Sets in Distributed Multistatic Radar Networks

    PubMed Central

    Wang, Xuezhi; Song, Yongping; Huang, Xiaotao; Moran, Bill

    2017-01-01

    We propose an alternative waveform scheme built on mutually-orthogonal complementary sets for a distributed multistatic radar. Our analysis and simulation show a reduced frequency band requirement for signal separation between antennas with centralized signal processing using the same carrier frequency. While the scheme can tolerate fluctuations of carrier frequencies and phases, range sidelobes arise when carrier frequencies between antennas are significantly different. PMID:29295566
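    The basic property that complementary waveform sets exploit can be shown for the simplest case, a Golay complementary pair: each sequence has autocorrelation sidelobes on its own, but the sum of the two autocorrelations is an ideal delta with zero range sidelobes. This is background illustration only, not the mutually-orthogonal set construction of the record above.

        import numpy as np

        # length-8 Golay pair from the standard concatenation construction [a|b], [a|-b]
        a = np.array([1, 1, 1, -1, 1, 1, -1, 1])
        b = np.array([1, 1, 1, -1, -1, -1, 1, -1])

        def autocorr(x):
            return np.correlate(x, x, mode="full")

        combined = autocorr(a) + autocorr(b)
        print(combined)   # 16 at the zero-lag position, 0 at every other lag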
  429. Density-Dependent Quantized Least Squares Support Vector Machine for Large Data Sets.

    PubMed

    Nan, Shengyu; Sun, Lei; Chen, Badong; Lin, Zhiping; Toh, Kar-Ann

    2017-01-01

    Based on the knowledge that input data distribution is important for learning, a data density-dependent quantization scheme (DQS) is proposed for sparse input data representation. The usefulness of the representation scheme is demonstrated by using it as a data preprocessing unit attached to the well-known least squares support vector machine (LS-SVM) for application on big data sets. Essentially, the proposed DQS adopts a single shrinkage threshold to obtain a simple quantization scheme, which adapts its outputs to input data density. With this quantization scheme, a large data set is quantized to a small subset where considerable sample size reduction is generally obtained. In particular, the sample size reduction can save significant computational cost when using the quantized subset for feature approximation via the Nyström method. Based on the quantized subset, the approximated features are incorporated into LS-SVM to develop a data density-dependent quantized LS-SVM (DQLS-SVM), where an analytic solution is obtained in the primal solution space. The developed DQLS-SVM is evaluated on synthetic and benchmark data with particular emphasis on large data sets. Extensive experimental results show that the learning machine incorporating DQS attains not only high computational efficiency but also good generalization performance.

  430. Notched-noise embedded frequency specific chirps for objective audiometry using auditory brainstem responses

    PubMed Central

    Corona-Strauss, Farah I.; Schick, Bernhard; Delb, Wolfgang; Strauss, Daniel J.

    2012-01-01

    It has been shown recently that chirp-evoked auditory brainstem responses (ABRs) show better performance than click stimulations, especially at low intensity levels. In this paper we present the development, test, and evaluation of a series of notched-noise embedded frequency specific chirps. ABRs were collected in healthy young control subjects using the developed stimuli. Results of the analysis of the corresponding ABRs using a time-scale phase synchronization stability (PSS) measure are also reported. The resultant wave V amplitude and latency measures showed a similar behavior as for values reported in literature.
    The PSS of frequency specific chirp-evoked ABRs reflected the presence of the wave V for all stimulation intensities. The scales that resulted in higher PSS are in line with previous findings, where ABRs evoked by broadband chirps were analyzed, and which stated that low frequency channels are better for the recognition and analysis of chirp-evoked ABRs. We conclude that the development and test of the series of notched-noise embedded frequency specific chirps allowed the assessment of frequency specific ABRs, showing an identifiable wave V for different intensity levels. Future work may include the development of a faster automatic recognition scheme for these frequency specific ABRs. PMID:26557336

  431. Towards Guided Underwater Survey Using Light Visual Odometry

    NASA Astrophysics Data System (ADS)

    Nawaf, M. M.; Drap, P.; Royer, J. P.; Merad, D.; Saccone, M.

    2017-02-01

    A light distributed visual odometry method adapted to an embedded hardware platform is proposed. The aim is to guide underwater surveys in real time. We rely on an image stream captured using a portable stereo rig attached to the embedded system. Taken images are analyzed on the fly to assess image quality in terms of sharpness and lightness, so that immediate actions can be taken accordingly. Images are then transferred over the network to another processing unit to compute the odometry. Relying on a standard ego-motion estimation approach, we speed up point matching between image quadruplets using a low-level point matching scheme relying on the fast Harris operator and template matching that is invariant to illumination changes. We benefit from having the light source attached to the hardware platform to estimate an a priori rough depth belief following the law of light divergence over distance. The rough depth is used to limit the point correspondence search zone as it linearly depends on disparity. A stochastic relative bundle adjustment is applied to minimize re-projection errors. The evaluation of the proposed method demonstrates the gain in terms of computation time w.r.t. other approaches that use more sophisticated feature descriptors. The built system opens promising areas for further development and integration of embedded computer vision techniques.

  432. Biometric feature embedding using robust steganography technique

    NASA Astrophysics Data System (ADS)

    Rashid, Rasber D.; Sellahewa, Harin; Jassim, Sabah A.

    2013-05-01

    This paper is concerned with robust steganographic techniques to hide and communicate biometric data in mobile media objects like images, over open networks. More specifically, the aim is to embed binarised features extracted using discrete wavelet transforms and local binary patterns of face images as a secret message in an image.
    The need for such techniques can arise in law enforcement, forensics, counter terrorism, internet/mobile banking and border control. What differentiates this problem from normal information hiding techniques is the added requirement that there should be minimal effect on face recognition accuracy. We propose an LSB-Witness embedding technique in which the secret message is already present in the LSB plane but instead of changing the cover image LSB values, the second LSB plane will be changed to stand as a witness/informer to the receiver during message recovery. Although this approach may affect the stego quality, it eliminates the weakness of traditional LSB schemes that is exploited by steganalysis techniques for LSB, such as PoV and RS steganalysis, to detect the existence of a secret message. Experimental results show that the proposed method is robust against PoV and RS attacks compared to other variants of LSB. We also discuss variants of this approach and determine capacity requirements for embedding face biometric feature vectors while maintaining the accuracy of face recognition.
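    One plausible reading of the LSB-Witness idea in the record above, sketched as an assumption rather than the authors' exact algorithm: leave the cover LSB plane untouched, set the second LSB to 1 where the cover LSB already equals the message bit and to 0 where it does not, and let the receiver recover each bit from the LSB together with its witness bit.

        import numpy as np

        def embed_witness(cover: np.ndarray, bits: np.ndarray) -> np.ndarray:
            flat = cover.flatten().copy()
            lsb = flat[:bits.size] & 1
            witness = (lsb == bits).astype(np.uint8)
            flat[:bits.size] = (flat[:bits.size] & 0xFD) | (witness << 1)  # write 2nd LSB only
            return flat.reshape(cover.shape)

        def recover(stego: np.ndarray, n_bits: int) -> np.ndarray:
            flat = stego.flatten()[:n_bits]
            lsb = flat & 1
            witness = (flat >> 1) & 1
            return np.where(witness == 1, lsb, 1 - lsb).astype(np.uint8)

        cover = np.random.randint(0, 256, size=(32, 32), dtype=np.uint8)
        message = np.random.randint(0, 2, size=128).astype(np.uint8)
        assert np.array_equal(recover(embed_witness(cover, message), 128), message)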
  433. Embedded Cohesive Elements (ECE) Approach to the Simulation of Spall Fracture Experiment

    NASA Astrophysics Data System (ADS)

    Bonora, Nicola; Esposito, Luca; Ruggiero, Andrew

    2007-06-01

    Numerical simulations of flyer plate impact tests usually show discrepancies between the calculated and observed velocity versus time plots, relative to the spall signal portion, in terms of both signal amplitude and frequency. These are often ascribed either to the material model or to the numerical scheme used. Bonora et al. (2003) [Bonora N., Ruggiero A. and Milella P.P., 2003, Fracture energy effect on spall signal, Proc. of 13th APS SCCM03, Portland, USA] showed that, for ductile metals, these differences can be imputed to the dissipation process during fracturing due to the viscous separation of the spall fracture plane surfaces. In this work that concept has been further developed by implementing an embedded cohesive element (ECE) technology in FEM. The ECE method consists of embedding cohesive elements (normal and shear forces only) into standard isoparametric 2D or 3D FEM continuum elements. The cohesive elements remain silent and inactive until the continuum element fails. At failure, the continuum element is removed while the ECE becomes active until the separation energy is dissipated. Here, the methodology is presented and applied to simulate soft spall in ductile metals such as OFHC copper. Results of a parametric study on mesh size and cohesive law shape effects are presented.

  434. Baseline and extensions approach to information retrieval of complex medical data: Poznan's approach to the bioCADDIE 2016

    PubMed Central

    Cieslewicz, Artur; Dutkiewicz, Jakub; Jedrzejek, Czeslaw

    2018-01-01

    Information retrieval from biomedical repositories has become a challenging task because of their increasing size and complexity. To facilitate the research aimed at improving the search for relevant documents, various information retrieval challenges have been launched. In this article, we present the improved medical information retrieval systems designed by Poznan University of Technology and Poznan University of Medical Sciences as a contribution to the bioCADDIE 2016 challenge, a task focusing on information retrieval from a collection of 794 992 datasets generated from 20 biomedical repositories. The system developed by our team utilizes the Terrier 4.2 search platform enhanced by a query expansion method using word embeddings. This approach, after post-challenge modifications and improvements (with particular regard to assigning proper weights for original and expanded terms), allowed us to achieve the second-best infNDCG measure (0.4539) compared with the challenge results, and an infAP of 0.3978. This demonstrates that proper utilization of word embeddings can be a valuable addition to the information retrieval process. Some analysis is provided on related work involving other bioCADDIE contributions. We discuss the possibility of improving our results by using better word embedding schemes to find candidates for query expansion. Database URL: https://biocaddie.org/benchmark-data PMID:29688372
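    A minimal sketch of word-embedding query expansion of the kind described in the bioCADDIE record above: each query term is expanded with its nearest neighbours in an embedding space, and expansion terms receive a lower weight than the original terms. The toy vectors, neighbour count and down-weighting factor are assumptions, not the authors' Terrier configuration.

        import numpy as np

        embeddings = {                       # toy 3-d vectors; real systems use 100-300 dimensions
            "tumor":  np.array([0.9, 0.1, 0.0]),
            "cancer": np.array([0.8, 0.2, 0.1]),
            "glioma": np.array([0.7, 0.3, 0.0]),
            "kidney": np.array([0.1, 0.9, 0.2]),
        }

        def cosine(u, v):
            return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

        def expand_query(terms, k=2, expansion_weight=0.3):
            weighted = {t: 1.0 for t in terms}          # original terms keep full weight
            for t in terms:
                if t not in embeddings:
                    continue
                sims = sorted(((cosine(embeddings[t], v), w) for w, v in embeddings.items()
                               if w != t), reverse=True)
                for sim, w in sims[:k]:
                    weighted.setdefault(w, expansion_weight * sim)
            return weighted

        print(expand_query(["tumor"]))       # original term weight 1.0, neighbours weighted less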
  435. Blind Compressed Sensing Enables 3-Dimensional Dynamic Free Breathing Magnetic Resonance Imaging of Lung Volumes and Diaphragm Motion.

    PubMed

    Bhave, Sampada; Lingala, Sajan Goud; Newell, John D; Nagle, Scott K; Jacob, Mathews

    2016-06-01

    The objective of this study was to increase the spatial and temporal resolution of dynamic 3-dimensional (3D) magnetic resonance imaging (MRI) of lung volumes and diaphragm motion. To achieve this goal, we evaluate the utility of the proposed blind compressed sensing (BCS) algorithm to recover data from highly undersampled measurements. We evaluated the performance of the BCS scheme to recover dynamic data sets from retrospectively and prospectively undersampled measurements. We also compared its performance against that of view-sharing, the nuclear norm minimization scheme, and the l1 Fourier sparsity regularization scheme. Quantitative experiments were performed on a healthy subject using a fully sampled 2D data set with uniform radial sampling, which was retrospectively undersampled with 16 radial spokes per frame to correspond to an undersampling factor of 8. The images obtained from the 4 reconstruction schemes were compared with the fully sampled data using mean square error and normalized high-frequency error metrics. The schemes were also compared using prospective 3D data acquired on a Siemens 3 T TIM TRIO MRI scanner on 8 healthy subjects during free breathing. Two expert cardiothoracic radiologists (R1 and R2) qualitatively evaluated the reconstructed 3D data sets using a 5-point scale (0-4) on the basis of spatial resolution, temporal resolution, and presence of aliasing artifacts. The BCS scheme gives better reconstructions (mean square error = 0.0232 and normalized high frequency = 0.133) than the other schemes in the 2D retrospective undersampling experiments, producing minimally distorted reconstructions up to an acceleration factor of 8 (16 radial spokes per frame). The prospective 3D experiments show that the BCS scheme provides visually improved reconstructions compared with the other schemes. The BCS scheme provides improved qualitative scores over the nuclear norm and l1 Fourier sparsity regularization schemes in the temporal blurring and spatial blurring categories. The qualitative scores for aliasing artifacts in the images reconstructed by the nuclear norm scheme and the BCS scheme are comparable. The comparisons of the tidal volume changes also show that the BCS scheme has less temporal blurring as compared with the nuclear norm minimization scheme and the l1 Fourier sparsity regularization scheme. The minute ventilation estimated by BCS for tidal breathing in the supine position (4 L/min) and the measured supine inspiratory capacity (1.5 L) are in good agreement with the literature. The improved performance of BCS can be explained by its ability to efficiently adapt to the data, thus providing a richer representation of the signal. The feasibility of the BCS scheme was demonstrated for dynamic 3D free breathing MRI of lung volumes and diaphragm motion. A temporal resolution of ∼500 milliseconds and a spatial resolution of 2.7 × 2.7 × 10 mm, with whole lung coverage (16 slices), were achieved using the BCS scheme.

  436. An RBF-FD closest point method for solving PDEs on surfaces

    NASA Astrophysics Data System (ADS)

    Petras, A.; Ling, L.; Ruuth, S. J.

    2018-10-01

    Partial differential equations (PDEs) on surfaces appear in many applications throughout the natural and applied sciences. The classical closest point method (Ruuth and Merriman (2008) [17]) is an embedding method for solving PDEs on surfaces using standard finite difference schemes. In this paper, we formulate an explicit closest point method using finite difference schemes derived from radial basis functions (RBF-FD). Unlike the orthogonal gradients method (Piret (2012) [22]), our proposed method uses RBF centers on regular grid nodes.
    This formulation not only reduces the computational cost but also avoids the ill-conditioning from point clustering on the surface and is more natural to couple with a grid-based manifold evolution algorithm (Leung and Zhao (2009) [26]). When compared to the standard finite difference discretization of the closest point method, the proposed method requires a smaller computational domain surrounding the surface, resulting in a decrease in the number of sampling points on the surface. In addition, higher-order schemes can easily be constructed by increasing the number of points in the RBF-FD stencil. Applications to a variety of examples are provided to illustrate the numerical convergence of the method.

  437. Secure annotation for medical images based on reversible watermarking in the Integer Fibonacci-Haar transform domain

    NASA Astrophysics Data System (ADS)

    Battisti, F.; Carli, M.; Neri, A.

    2011-03-01

    The increasing use of digital image-based applications is resulting in huge databases that are often difficult to use and prone to misuse and privacy concerns. These issues are especially crucial in medical applications. The most commonly adopted solution is the encryption of both the image and the patient data in separate files that are then linked. This practice is inefficient since, in order to retrieve patient data or analysis details, it is necessary to decrypt both files. In this contribution, an alternative solution for secure medical image annotation is presented. The proposed framework is based on the joint use of a key-dependent wavelet transform (the Integer Fibonacci-Haar transform), a secure cryptographic scheme, and a reversible watermarking scheme. The system allows: i) the insertion of the patient data into the encrypted image without requiring knowledge of the original image, ii) the encryption of annotated images without causing loss in the embedded information, and iii) thanks to the complete reversibility of the process, the recovery of the original image after mark removal. Experimental results show the effectiveness of the proposed scheme.

  438. Efficient reversible data hiding in encrypted image with public key cryptosystem

    NASA Astrophysics Data System (ADS)

    Xiang, Shijun; Luo, Xinrong

    2017-12-01

    This paper proposes a new reversible data hiding scheme for encrypted images by using the homomorphic and probabilistic properties of the Paillier cryptosystem. The proposed method can embed additional data directly into an encrypted image without any preprocessing operations on the original image. By selecting two pixels as a group for encryption, the data hider can retrieve the absolute differences of groups of two pixels by employing a modular multiplicative inverse method. Additional data can be embedded into the encrypted image by shifting the histogram of the absolute differences, using the homomorphic property in the encrypted domain. On the receiver side, a legal user can extract the marked histogram in the encrypted domain in the same way as in the data hiding procedure. Then, the hidden data can be extracted from the marked histogram and the encrypted version of the original image can be restored by using inverse histogram shifting operations. Besides, the marked absolute differences can be computed after decryption for extraction of the additional data and restoration of the original image. Compared with previous state-of-the-art works, the proposed scheme can effectively avoid preprocessing operations before encryption and can efficiently embed and extract data in the encrypted domain. The experiments on standard image files also certify the effectiveness of the proposed scheme.
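    The record above shifts a histogram of absolute differences in the encrypted domain; the sketch below shows the plain-domain histogram-shifting primitive that such schemes build on, applied to a 1-D array of differences. The peak bin p and empty bin z (with p < z) are assumed inputs, and this is not the Paillier-domain protocol itself.

        import numpy as np

        def hs_embed(diffs, bits, p, z):
            out = diffs.copy()
            out[(out > p) & (out < z)] += 1            # shift bins in (p, z) up to free bin p+1
            carriers = np.flatnonzero(out == p)[:len(bits)]
            out[carriers] += np.asarray(bits)          # bit 1 -> p+1, bit 0 -> stays at p
            return out

        def hs_extract(marked, n_bits, p, z):
            carriers = np.flatnonzero((marked == p) | (marked == p + 1))[:n_bits]
            bits = (marked[carriers] == p + 1).astype(int)
            restored = marked.copy()
            restored[restored == p + 1] = p            # undo the embedded 1-bits
            restored[(restored > p + 1) & (restored <= z)] -= 1   # shift bins back down
            return bits, restored

        diffs = np.array([0, 1, 2, 1, 3, 1, 0, 2, 5, 1])
        marked = hs_embed(diffs, [1, 0, 1, 1], p=1, z=4)
        bits, restored = hs_extract(marked, 4, p=1, z=4)
        assert list(bits) == [1, 0, 1, 1] and np.array_equal(restored, diffs)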
  439. Analysis of switch and examine combining with post-examining selection in cognitive radio

    NASA Astrophysics Data System (ADS)

    Agarwal, Rupali; Srivastava, Neelam; Katiyar, Himanshu

    2018-06-01

    Performing spectrum sensing in a fading environment is one of the most challenging tasks for a CR system. Diversity combining schemes are used to combat the effect of fading, and hence the detection probability of the CR is improved. Among many diversity combining techniques, switched diversity offers one of the lowest complexity solutions. A receiver embedded with switched diversity looks for an acceptable diversity path (having signal to noise ratio (SNR) above the required threshold) to receive the data. In the conventional switch and examine combining (SEC) scheme, when no acceptable path is found after all the paths are examined, the receiver randomly chooses an unacceptable path. Switch and examine combining with post-examining selection (SECp) is a modified version of conventional SEC. In SECp, the conventional SEC scheme is altered in a way that it selects the best path when no acceptable path is found after all paths have been examined. In this paper, a formula for the probability of detection is derived using the SECp and SEC diversity combining techniques over a Rayleigh fading channel. The performance of SECp is also compared with SEC and the no-diversity case. The performance comparison is done with the help of SNR versus detection probability curves and complementary receiver operating characteristic curves.
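    A toy model of the two branch-selection rules compared in the record above (illustrative only, not the paper's closed-form analysis): SEC takes the first branch whose SNR exceeds the threshold and falls back to an arbitrary branch if none qualifies, while SECp falls back to the best examined branch instead.

        import random

        def sec_select(snrs, threshold):
            for snr in snrs:
                if snr >= threshold:
                    return snr
            return random.choice(snrs)          # no acceptable path: arbitrary fallback

        def secp_select(snrs, threshold):
            for snr in snrs:
                if snr >= threshold:
                    return snr
            return max(snrs)                    # no acceptable path: best examined path

        branches = [0.4, 0.7, 0.9]              # instantaneous branch SNRs, linear scale
        print(sec_select(branches, threshold=1.0), secp_select(branches, threshold=1.0))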
  440. Araldite as an Embedding Medium for Electron Microscopy

    PubMed Central

    Glauert, Audrey M.; Glauert, R. H.

    1958-01-01

    Epoxy resins are suitable media for embedding for electron microscopy, as they set uniformly with virtually no shrinkage. A mixture of araldite epoxy resins has been developed which is soluble in ethanol, and which yields a block of the required hardness for thin sectioning. The critical modifications to the conventional mixtures are the choice of a plasticized resin in conjunction with an aliphatic anhydride as the hardener. The hardness of the final block can be varied by incorporating additional plasticizer, and the rate of setting can be controlled by the use of an amine accelerator. The properties of the araldite mixture can be varied quite widely by adjusting the proportions of the various constituents. The procedure for embedding biological specimens is similar to that employed with methacrylates, although longer soaking times are recommended to ensure the complete penetration of the more viscous epoxy resin. An improvement in the preservation of the fine structure of a variety of specimens has already been reported, and a typical electron microgram illustrates the present paper. PMID:13525433

  441. Embedding global barrier and collective in torus network with each node combining input from receivers according to class map for output to senders

    DOEpatents

    Chen, Dong; Coteus, Paul W; Eisley, Noel A; Gara, Alan; Heidelberger, Philip; Senger, Robert M; Salapura, Valentina; Steinmacher-Burow, Burkhard; Sugawara, Yutaka; Takken, Todd E

    2013-08-27

    Embodiments of the invention provide a method, system and computer program product for embedding a global barrier and global interrupt network in a parallel computer system organized as a torus network. The computer system includes a multitude of nodes. In one embodiment, the method comprises taking inputs from a set of receivers of the nodes, dividing the inputs from the receivers into a plurality of classes, combining the inputs of each of the classes to obtain a result, and sending said result to a set of senders of the nodes.
440. Araldite as an Embedding Medium for Electron Microscopy

    PubMed Central

    Glauert, Audrey M.; Glauert, R. H.

    1958-01-01

    Epoxy resins are suitable media for embedding for electron microscopy, as they set uniformly with virtually no shrinkage. A mixture of araldite epoxy resins has been developed which is soluble in ethanol and which yields a block of the required hardness for thin sectioning. The critical modifications to the conventional mixtures are the choice of a plasticized resin in conjunction with an aliphatic anhydride as the hardener. The hardness of the final block can be varied by incorporating additional plasticizer, and the rate of setting can be controlled by the use of an amine accelerator. The properties of the araldite mixture can be varied quite widely by adjusting the proportions of the various constituents. The procedure for embedding biological specimens is similar to that employed with methacrylates, although longer soaking times are recommended to ensure the complete penetration of the more viscous epoxy resin. An improvement in the preservation of the fine structure of a variety of specimens has already been reported, and a typical electron micrograph illustrates the present paper. PMID:13525433

441. Embedding global barrier and collective in torus network with each node combining input from receivers according to class map for output to senders

    DOEpatents

    Chen, Dong; Coteus, Paul W; Eisley, Noel A; Gara, Alan; Heidelberger, Philip; Senger, Robert M; Salapura, Valentina; Steinmacher-Burow, Burkhard; Sugawara, Yutaka; Takken, Todd E

    2013-08-27

    Embodiments of the invention provide a method, system and computer program product for embedding a global barrier and global interrupt network in a parallel computer system organized as a torus network. The computer system includes a multitude of nodes. In one embodiment, the method comprises taking inputs from a set of receivers of the nodes, dividing the inputs from the receivers into a plurality of classes, combining the inputs of each of the classes to obtain a result, and sending said result to a set of senders of the nodes. Embodiments of the invention also provide a method, system and computer program product for embedding a collective network in a parallel computer system organized as a torus network. In one embodiment, the method comprises adding to the torus network a central collective logic to route messages among at least a group of nodes in a tree structure.
442. Quantum and semiclassical spin networks: from atomic and molecular physics to quantum computing and gravity

    NASA Astrophysics Data System (ADS)

    Aquilanti, Vincenzo; Bitencourt, Ana Carla P.; Ferreira, Cristiane da S.; Marzuoli, Annalisa; Ragni, Mirco

    2008-11-01

    The mathematical apparatus of quantum-mechanical angular momentum (re)coupling, developed originally to describe spectroscopic phenomena in atomic, molecular, optical and nuclear physics, is embedded in modern algebraic settings which emphasize the underlying combinatorial aspects. SU(2) recoupling theory, involving Wigner's 3nj symbols, as well as the related problems of their calculation, general properties, and asymptotic limits for large entries, nowadays plays a prominent role also in quantum gravity and quantum computing applications. We refer to the ingredients of this theory—and of its extension to other Lie and quantum groups—by using the collective term of 'spin networks'. Recent progress is reported on the already established connections with the mathematical theory of discrete orthogonal polynomials (the so-called Askey scheme), providing powerful tools based on asymptotic expansions, which correspond on the physical side to various levels of semi-classical limits. These results are useful not only in theoretical molecular physics but also in motivating algorithms for the computationally demanding problems of molecular dynamics and chemical reaction theory, where large angular momenta are typically involved. As for quantum chemistry, applications of these techniques include the selection and classification of complete orthogonal basis sets in atomic and molecular problems, either in configuration space (Sturmian orbitals) or in momentum space. In this paper, we list and discuss some aspects of these developments—such as, for instance, the hyperquantization algorithm—as well as a few applications to quantum gravity and topology, thus providing evidence of a unifying background structure.

443. Theoretical and experimental investigations of an active hydrofoil with SMA actuators

    NASA Astrophysics Data System (ADS)

    Rediniotis, Othon K.; Lagoudas, Dimitris C.; Mashio, Tomoka; Garner, Luke J.; Qidwai, Muhammad A.

    1997-06-01

    In the area of underwater vehicle design, the development of highly maneuverable vehicles is presently of interest, with designs based on the swimming techniques and anatomic structure of fish: primarily the undulatory body motions, the highly controllable fins, and the large-aspect-ratio lunate tail. The tailoring and implementation of the accumulated knowledge into biomimetic vehicles is a task of multidisciplinary nature, with two of the dominant fields being actuation and hydrodynamic control. Within this framework, we present here our progress towards the development of a type of biomimetic muscle that utilizes shape memory alloy (SMA) technology. The muscle is presently applied to the control of hydrodynamic forces and moments, including thrust generation, on a 2D hydrofoil. The main actuation elements are two sets of thin SMA wires embedded into an elastomeric element that provides the main structural support. Controlled heating and cooling of the two wire sets generates bi-directional bending of the elastomer, which in turn deflects or oscillates the trailing edge of the hydrofoil. The aquatic environment of the hydrofoil lends itself to cooling schemes that utilize the excellent heat transfer properties of water. The modeling of deflected shapes as a function of input current has been carried out using a thermomechanical constitutive model for SMA coupled with the elastic response of the elastomer. An approximate structural analysis, as well as a detailed FEM analysis, has been performed, and the model predictions are compared with preliminary experimental measurements.
444. Secure and Privacy-Preserving Body Sensor Data Collection and Query Scheme

    PubMed

    Zhu, Hui; Gao, Lijuan; Li, Hui

    2016-02-01

    With the development of body sensor networks and the pervasiveness of smart phones, different types of personal data can be collected in real time by body sensors, and the potential value of massive personal data has attracted considerable interest recently. However, the privacy issues of sensitive personal data are still challenging today. Aiming at these challenges, in this paper we focus on the threats from the telemetry interface and present a secure and privacy-preserving body sensor data collection and query scheme, named SPCQ, for outsourced computing. In the proposed SPCQ scheme, users' personal information is collected by different types of body sensors and converted into multi-dimensional data; each dimension is converted into numeric form and uploaded to the cloud server, which provides a secure, efficient and accurate data query service while the privacy of sensitive personal information and users' query data is guaranteed. Specifically, based on an improved homomorphic encryption technology over a composite-order group, we propose a special weighted Euclidean distance contrast algorithm (WEDC) for multi-dimensional vectors over encrypted data. With the SPCQ scheme, the confidentiality of sensitive personal data, the privacy of data users' queries and an accurate query service can be achieved in the cloud server. Detailed analysis shows that SPCQ can resist various security threats from the telemetry interface. In addition, we also implement SPCQ on an embedded device, a smart phone and a laptop with a real medical database, and extensive simulation results demonstrate that our proposed SPCQ scheme is highly efficient in terms of computation and communication costs.
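    For reference, the plaintext quantity that the WEDC algorithm evaluates over ciphertexts is an ordinary weighted Euclidean distance between multi-dimensional records; a short Python sketch of that plain computation (the vectors and weights are invented, and no encryption is shown):

        import math

        def weighted_euclidean_distance(x, y, w):
            """Weighted Euclidean distance sqrt(sum_i w_i * (x_i - y_i)^2)."""
            assert len(x) == len(y) == len(w)
            return math.sqrt(sum(wi * (xi - yi) ** 2 for xi, yi, wi in zip(x, y, w)))

        # Compare a query record against a stored record, weighting the
        # dimensions (e.g. heart rate, temperature, SpO2) differently.
        query   = [72.0, 36.6, 98.0]
        stored  = [80.0, 37.1, 96.0]
        weights = [1.0, 4.0, 2.0]
        print(weighted_euclidean_distance(query, stored, weights))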
445. An Artificial Neural Network Embedded Position and Orientation Determination Algorithm for Low Cost MEMS INS/GPS Integrated Sensors

    PubMed Central

    Chiang, Kai-Wei; Chang, Hsiu-Wen; Li, Chia-Yuan; Huang, Yun-Wen

    2009-01-01

    Digital mobile mapping, which integrates digital imaging with direct geo-referencing, has developed rapidly over the past fifteen years. Direct geo-referencing is the determination of the time-variable position and orientation parameters for a mobile digital imager. The most common technologies used for this purpose today are satellite positioning using the Global Positioning System (GPS) and an Inertial Navigation System (INS) using an Inertial Measurement Unit (IMU). They are usually integrated in such a way that the GPS receiver is the main position sensor, while the IMU is the main orientation sensor. The Kalman Filter (KF) is considered the optimal estimation tool for real-time INS/GPS integrated kinematic position and orientation determination. An intelligent hybrid scheme consisting of an Artificial Neural Network (ANN) and a KF has been proposed in previous studies to overcome the limitations of the KF and to improve the performance of the INS/GPS integrated system. However, the accuracy requirements of general mobile mapping applications cannot easily be achieved, even with the ANN-KF scheme. Therefore, this study proposes an intelligent position and orientation determination scheme that embeds an ANN within a conventional Rauch-Tung-Striebel (RTS) smoother to improve the overall accuracy of a MEMS INS/GPS integrated system in post-mission mode. By combining the Micro Electro Mechanical Systems (MEMS) INS/GPS integrated system and the intelligent ANN-RTS smoother scheme proposed in this study, a cheaper but still reasonably accurate position and orientation determination scheme can be anticipated. PMID:22574034

446. Adaptive fuzzy-neural-network control for maglev transportation system

    PubMed

    Wai, Rong-Jong; Lee, Jeng-Dao

    2008-01-01

    A magnetic-levitation (maglev) transportation system including levitation and propulsion control is a subject of considerable scientific interest because of its highly nonlinear and unstable behavior. In this paper, the dynamic model of a maglev transportation system including levitated electromagnets and a propulsive linear induction motor (LIM), based on the concepts of mechanical geometry and motion dynamics, is developed first. Then, a model-based sliding-mode control (SMC) strategy is introduced. In order to alleviate chattering phenomena caused by the inappropriate selection of the uncertainty bound, a simple bound estimation algorithm is embedded in the SMC strategy to form an adaptive sliding-mode control (ASMC) scheme. However, this estimate is always positive, so that tracking errors introduced by any uncertainty will cause the estimated bound to increase, even to infinity, over time. Therefore, an adaptive fuzzy-neural-network control (AFNNC) scheme is further designed by imitating the SMC strategy for the maglev transportation system. In the model-free AFNNC, online learning algorithms are designed to cope with the chattering phenomena caused by the sign action in the SMC design, and to ensure the stability of the controlled system without requiring auxiliary compensated controllers despite the existence of uncertainties. The outputs of the AFNNC scheme can be directly supplied to the electromagnets and the LIM without complicated control transformations, relaxing the strict constraints of conventional model-based control methodologies. The effectiveness of the proposed control schemes for the maglev transportation system is verified by numerical simulations, and the superiority of the AFNNC scheme is indicated in comparison with the SMC and ASMC strategies.
447. Secure and Privacy-Preserving Body Sensor Data Collection and Query Scheme

    PubMed Central

    Zhu, Hui; Gao, Lijuan; Li, Hui

    2016-01-01

    With the development of body sensor networks and the pervasiveness of smart phones, different types of personal data can be collected in real time by body sensors, and the potential value of massive personal data has attracted considerable interest recently. However, the privacy issues of sensitive personal data are still challenging today. Aiming at these challenges, in this paper we focus on the threats from the telemetry interface and present a secure and privacy-preserving body sensor data collection and query scheme, named SPCQ, for outsourced computing. In the proposed SPCQ scheme, users' personal information is collected by different types of body sensors and converted into multi-dimensional data; each dimension is converted into numeric form and uploaded to the cloud server, which provides a secure, efficient and accurate data query service while the privacy of sensitive personal information and users' query data is guaranteed. Specifically, based on an improved homomorphic encryption technology over a composite-order group, we propose a special weighted Euclidean distance contrast algorithm (WEDC) for multi-dimensional vectors over encrypted data. With the SPCQ scheme, the confidentiality of sensitive personal data, the privacy of data users' queries and an accurate query service can be achieved in the cloud server. Detailed analysis shows that SPCQ can resist various security threats from the telemetry interface. In addition, we also implement SPCQ on an embedded device, a smart phone and a laptop with a real medical database, and extensive simulation results demonstrate that our proposed SPCQ scheme is highly efficient in terms of computation and communication costs. PMID:26840319
448. A discontinuous Galerkin conservative level set scheme for interface capturing in multiphase flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Owkes, Mark, E-mail: mfc86@cornell.edu; Desjardins, Olivier

    2013-09-15

    The accurate conservative level set (ACLS) method of Desjardins et al. [O. Desjardins, V. Moureau, H. Pitsch, An accurate conservative level set/ghost fluid method for simulating turbulent atomization, J. Comput. Phys. 227 (18) (2008) 8395–8416] is extended by using a discontinuous Galerkin (DG) discretization. DG allows the scheme to have an arbitrarily high order of accuracy with the smallest possible computational stencil, resulting in an accurate method with good parallel scaling. This work includes a DG implementation of the level set transport equation, which moves the level set with the flow field velocity, and a DG implementation of the reinitialization equation, which is used to maintain the shape of the level set profile to promote good mass conservation. A near second-order-converging interface curvature is obtained by following a height function methodology (common amongst volume of fluid schemes) in the context of the conservative level set. Various numerical experiments are conducted to test the properties of the method and show excellent results, even on coarse meshes. The tests include Zalesak's disk, two-dimensional deformation of a circle, time evolution of a standing wave, and a study of the Kelvin–Helmholtz instability. Finally, this novel methodology is employed to simulate the break-up of a turbulent liquid jet.

449. Explicit Von Neumann Stability Conditions for the c-tau Scheme: A Basic Scheme in the Development of the CE-SE Courant Number Insensitive Schemes

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung

    2005-01-01

    As part of the continuous development of the space-time conservation element and solution element (CE-SE) method, recently a set of so-called "Courant number insensitive schemes" has been proposed. The key advantage of these new schemes is that the numerical dissipation associated with them generally does not increase as the Courant number decreases. As such, they can be applied to problems with large Courant number disparities (such as what commonly occurs in Navier-Stokes problems) without incurring excessive numerical dissipation.
450. Implementation of a cryo-electron tomography tilt-scheme optimized for high resolution subtomogram averaging

    PubMed

    Hagen, Wim J H; Wan, William; Briggs, John A G

    2017-02-01

    Cryo-electron tomography (cryoET) allows 3D structural information to be obtained from cells and other biological samples in their close-to-native state. In combination with subtomogram averaging, detailed structures of repeating features can be resolved. CryoET data is collected as a series of images of the sample from different tilt angles; this is performed by physically rotating the sample in the microscope between each image. The angles at which the images are collected, and the order in which they are collected, together are called the tilt-scheme. Here we describe a "dose-symmetric tilt-scheme" that begins at low tilt and then alternates between increasingly positive and negative tilts. This tilt-scheme maximizes the amount of high-resolution information maintained in the tomogram for subsequent subtomogram averaging, and may also be advantageous for other applications. We describe implementation of the tilt-scheme in combination with further data-collection refinements including setting thresholds on acceptable drift and improving focus accuracy. Requirements for microscope set-up are introduced, and a macro is provided which automates the application of the tilt-scheme within SerialEM. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
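    The dose-symmetric ordering is easy to express programmatically; the following Python sketch generates one tilt order consistent with the description above (the increment and maximum tilt are illustrative parameters, and the actual SerialEM macro may group and order tilts differently):

        def dose_symmetric_tilts(increment=3.0, max_tilt=60.0):
            """Return tilt angles starting at 0 and alternating between increasingly
            positive and negative tilts: 0, +i, -i, +2i, -2i, ..."""
            angles = [0.0]
            tilt = increment
            while tilt <= max_tilt:
                angles.append(+tilt)
                angles.append(-tilt)
                tilt += increment
            return angles

        print(dose_symmetric_tilts(3.0, 12.0))
        # [0.0, 3.0, -3.0, 6.0, -6.0, 9.0, -9.0, 12.0, -12.0]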
451. Guidelines for reporting embedded recruitment trials

    PubMed

    Madurasinghe, Vichithranie W

    2016-01-14

    Recruitment to clinical trials is difficult, with many trials failing to recruit to target and on time. Embedding trials of recruitment interventions within host trials may provide a successful way to improve this. There are no guidelines for reporting such embedded methodology trials. As part of the Medical Research Council funded Systematic Techniques for Assisting Recruitment to Trials (MRC START) programme, designed to test interventions to improve recruitment to trials, we developed guidelines for reporting embedded trials. We followed a three-phase guideline development process: (1) a pre-meeting literature review to generate items for the reporting guidelines; (2) face-to-face consensus meetings to draft the reporting guidelines; and (3) post-meeting feedback review and pilot testing, followed by finalisation of the reporting guidelines. We developed a reporting checklist based on the Consolidated Standards for Reporting Trials (CONSORT) statement 2010. Embedded trials evaluating recruitment interventions should follow the CONSORT statement 2010 and report all items listed as essential. We used a number of examples to illustrate key issues that arise in embedded trials and how best to report them, including (a) how to deal with the description of the host trial; (b) the importance of describing items that may differ in the host and embedded trials (such as the setting and the eligible population); and (c) the importance of clearly identifying the point at which the recruitment interventions were embedded in the host trial. Implementation of these guidelines will improve the quality of reports of embedded recruitment trials while advancing the science, design and conduct of embedded trials as a whole.

452. Sequential Feedback Scheme Outperforms the Parallel Scheme for Hamiltonian Parameter Estimation

    PubMed

    Yuan, Haidong

    2016-10-14

    Measurement and estimation of parameters are essential for science and engineering, where the main quest is to find the highest achievable precision with the given resources and to design schemes that attain it. Two schemes, the sequential feedback scheme and the parallel scheme, are usually studied in quantum parameter estimation. While the sequential feedback scheme represents the most general scheme, it remains unknown whether it can outperform the parallel scheme for any quantum estimation task. In this Letter, we show that the sequential feedback scheme has a threefold improvement over the parallel scheme for Hamiltonian parameter estimation on two-dimensional systems, and an improvement of order O(d+1) for Hamiltonian parameter estimation on d-dimensional systems. We also show that, contrary to the conventional belief, it is possible to simultaneously achieve the highest precision for estimating all three components of a magnetic field, which sets a benchmark on the local precision limit for the estimation of a magnetic field.

453. The design of a new laser acupuncture instrument based on internet

    NASA Astrophysics Data System (ADS)

    Li, Chengwei; Liu, Jiguang; Huang, Zhen; Jin, Zhigao

    2006-06-01

    Laser acupuncture is defined as the stimulation of traditional acupuncture points with low-intensity, non-thermal laser irradiation, and its therapeutic use is rapidly gaining in popularity. As a recovery instrument, a physiotherapy device has a long treatment period but a good curative effect; furthermore, the treatment scheme needs to be revised on the basis of exchanges between patients and medical staff. In this paper, a new Internet-based laser acupuncture instrument is designed. This multi-functional visual physiotherapy system, built on an embedded TCP/IP protocol stack, is further developed to provide visual real-time communication between patients and doctors over the Internet. Patients can receive professional medical care at home, so the equipment is suitable for settings where specialists are scarce, such as villages, towns, communities, small private clinics, and individual households. For such equipment, the key is the design of an embedded networking module; the solution presented in this paper is a DSP-based Ethernet interface.
454. TAS: A Transonic Aircraft/Store flow field prediction code

    NASA Technical Reports Server (NTRS)

    Thompson, D. S.

    1983-01-01

    A numerical procedure has been developed that has the capability to predict the transonic flow field around an aircraft with an arbitrarily located, separated store. The TAS code, the product of a joint General Dynamics/NASA ARC/AFWAL research and development program, will serve as the basis for a comprehensive predictive method for aircraft with arbitrary store loadings. This report describes the numerical procedures employed to simulate the flow field around a configuration of this type. The validity of TAS code predictions is established by comparison with existing experimental data. In addition, future areas of development of the code are outlined. A brief description of code utilization is also given in the Appendix. The aircraft/store configuration is simulated using a mesh embedding approach. The computational domain is discretized by three meshes: (1) a planform-oriented wing/body fine mesh, (2) a cylindrical store mesh, and (3) a global Cartesian crude mesh. This embedded mesh scheme enables simulation of stores with fins of arbitrary angular orientation.

455. Combination of watermarking and joint watermarking-decryption for reliability control and traceability of medical images

    PubMed

    Bouslimi, D; Coatrieux, G; Cozic, M; Roux, Ch

    2014-01-01

    In this paper, we propose a novel crypto-watermarking system for the purpose of verifying the reliability of images and tracing them, i.e. identifying the person at the origin of an illegal distribution. This system couples a common watermarking method, based on Quantization Index Modulation (QIM), and a joint watermarking-decryption (JWD) approach. At the emitter side, it allows the insertion of a watermark as a proof of reliability of the image before sending it encrypted; at the reception, another watermark, a proof of traceability, is embedded during the decryption process. The scheme we propose makes such a combination of watermarking approaches interoperable, taking into account the risk of interference between embedded watermarks and allowing access to both reliability and traceability proofs. Experimental results confirm the efficiency of our system and demonstrate that it can be used to identify the physician at the origin of a disclosure even if the image has been modified.
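    The QIM primitive used for the reliability watermark is compact enough to sketch; below is a minimal scalar quantization index modulation embedder and detector in Python (step size and sample values are illustrative, and this is not the paper's full crypto-watermarking chain):

        import numpy as np

        def qim_embed(samples, bits, delta):
            """Quantization index modulation: quantize each sample onto one of two
            interleaved lattices (offset 0 or delta/2) according to the bit."""
            s = np.asarray(samples, dtype=float)
            b = np.asarray(bits, dtype=float)
            offset = b * (delta / 2.0)
            return np.round((s - offset) / delta) * delta + offset

        def qim_detect(marked, delta):
            """Recover bits by checking which lattice each marked sample is closer to."""
            m = np.asarray(marked, dtype=float)
            d0 = np.abs(m - np.round(m / delta) * delta)
            d1 = np.abs(m - (np.round((m - delta / 2.0) / delta) * delta + delta / 2.0))
            return (d1 < d0).astype(int)

        signal = np.array([10.3, 4.9, 7.2, 1.1])
        marked = qim_embed(signal, [1, 0, 1, 1], delta=2.0)
        print(qim_detect(marked, delta=2.0))   # -> [1 0 1 1]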
456. Practical steganalysis of digital images: state of the art

    NASA Astrophysics Data System (ADS)

    Fridrich, Jessica; Goljan, Miroslav

    2002-04-01

    Steganography is the art of hiding the very presence of communication by embedding secret messages into innocuous-looking cover documents, such as digital images. Detection of steganography, estimation of message length, and its extraction belong to the field of steganalysis. Steganalysis has recently received a great deal of attention both from law enforcement and the media. In our paper, we classify and review current stego-detection algorithms that can be used to trace popular steganographic products. We recognize several qualitatively different approaches to practical steganalysis: visual detection, detection based on first-order statistics (histogram analysis), dual statistics methods that use spatial correlations in images and higher-order statistics (RS steganalysis), universal blind detection schemes, and special cases, such as JPEG compatibility steganalysis. We also present some new results regarding our previously proposed detection of LSB embedding using sensitive dual statistics. The recent steganalytic methods indicate that the most common paradigm in image steganography, bit replacement or bit substitution, is inherently insecure, with safe capacities far smaller than previously thought.

457. Locating damage using integrated global-local approach with wireless sensing system and single-chip impedance measurement device

    PubMed

    Lin, Tzu-Hsuan; Lu, Yung-Chi; Hung, Shih-Lin

    2014-01-01

    This study developed an integrated global-local approach for locating damage on building structures. A damage detection approach with a novel embedded frequency response function damage index (NEFDI) was proposed and embedded in the Imote2.NET-based wireless structural health monitoring (SHM) system to locate global damage. Local damage is then identified using an electromechanical impedance- (EMI-) based damage detection method. The electromechanical impedance was measured using a single-chip impedance measurement device, which has the advantages of small size, low cost, and portability. The feasibility of the proposed damage detection scheme was studied with reference to a numerical example of a six-storey shear plane frame structure and a small-scale experimental steel frame. Numerical and experimental analysis using the integrated global-local SHM approach reveals that, after NEFDI indicates the approximate location of a damaged area, the EMI-based damage detection approach can then identify the detailed damage location in the structure of the building.
458. Accurate description of charged excitations in molecular solids from embedded many-body perturbation theory

    NASA Astrophysics Data System (ADS)

    Li, Jing; D'Avino, Gabriele; Duchemin, Ivan; Beljonne, David; Blase, Xavier

    2018-01-01

    We present a novel hybrid quantum/classical approach to the calculation of charged excitations in molecular solids based on the many-body Green's function GW formalism. Molecules described at the GW level are embedded into the crystalline environment modeled with an accurate classical polarizable scheme. This allows the calculation of electron addition and removal energies in the bulk and at crystal surfaces where charged excitations are probed in photoelectron experiments. By considering the paradigmatic case of pentacene and perfluoropentacene crystals, we discuss the different contributions from intermolecular interactions to electronic energy levels, distinguishing between polarization, which is accounted for by combining quantum and classical polarizabilities, and crystal field effects, which can impact energy levels by up to ±0.6 eV. After introducing band dispersion, we achieve quantitative agreement (within 0.2 eV) on the ionization potential and electron affinity measured at pentacene and perfluoropentacene crystal surfaces characterized by standing molecules.

459. Localized lossless authentication watermark (LAW)

    NASA Astrophysics Data System (ADS)

    Celik, Mehmet U.; Sharma, Gaurav; Tekalp, A. Murat; Saber, Eli S.

    2003-06-01

    A novel framework is proposed for lossless authentication watermarking of images which allows authentication and recovery of original images without any distortion. This overcomes a significant limitation of traditional authentication watermarks that irreversibly alter image data in the process of watermarking and authenticate the watermarked image rather than the original. In particular, authenticity is verified before full reconstruction of the original image, whose integrity is inferred from the reversibility of the watermarking procedure. This reduces computational requirements in situations when either the verification step fails or the zero-distortion reconstruction is not required. A particular instantiation of the framework is implemented using a hierarchical authentication scheme and the lossless generalized-LSB data embedding mechanism. The resulting algorithm, called localized lossless authentication watermark (LAW), can localize tampered regions of the image; has a low embedding distortion, which can be removed entirely if necessary; and supports public/private key authentication and recovery options. The effectiveness of the framework and the instantiation is demonstrated through examples.
460. Giant nonlinear interaction between two optical beams via a quantum dot embedded in a photonic wire

    NASA Astrophysics Data System (ADS)

    Nguyen, H. A.; Grange, T.; Reznychenko, B.; Yeo, I.; de Assis, P.-L.; Tumanov, D.; Fratini, F.; Malik, N. S.; Dupuy, E.; Gregersen, N.; Auffèves, A.; Gérard, J.-M.; Claudon, J.; Poizat, J.-Ph.

    2018-05-01

    Optical nonlinearities usually appear for large intensities, but discrete transitions allow for giant nonlinearities operating at the single-photon level. This has been demonstrated in the last decade for a single optical mode with cold atomic gases, or single two-level systems coupled to light via a tailored photonic environment. Here, we demonstrate a two-mode giant nonlinearity with a single semiconductor quantum dot (QD) embedded in a photonic wire antenna. We exploit two detuned optical transitions associated with the exciton-biexciton QD level scheme. Owing to the broadband waveguide antenna, the two transitions are efficiently interfaced with two free-space laser beams. The reflection of one laser beam is then controlled by the other beam, with a threshold power as low as 10 photons per exciton lifetime (1.6 nW). Such a two-color nonlinearity opens appealing perspectives for the realization of ultralow-power logical gates and optical quantum gates, and could also be implemented in an integrated photonic circuit based on planar waveguides.

461. Improved Discretization of Grounding Lines and Calving Fronts using an Embedded-Boundary Approach in BISICLES

    NASA Astrophysics Data System (ADS)

    Martin, D. F.; Cornford, S. L.; Schwartz, P.; Bhalla, A.; Johansen, H.; Ng, E.

    2017-12-01

    Correctly representing grounding line and calving-front dynamics is of fundamental importance in modeling marine ice sheets, since the configuration of these interfaces exerts a controlling influence on the dynamics of the ice sheet. Traditional ice sheet models have struggled to correctly represent these regions without very high spatial resolution. We have developed a front-tracking discretization for grounding lines and calving fronts based on the Chombo embedded-boundary cut-cell framework. This promises better representation of these interfaces compared with a traditional stair-step discretization on Cartesian meshes like those currently used in the block-structured AMR BISICLES code. The dynamic adaptivity of the BISICLES model complements the subgrid-scale discretizations of this scheme, producing a robust approach for tracking the evolution of these interfaces. Also, the fundamental discontinuous nature of flow across grounding lines is respected by mathematically treating it as a material phase change. We present examples of this approach to demonstrate its effectiveness.
462. Enabling MPEG-2 video playback in embedded systems through improved data cache efficiency

    NASA Astrophysics Data System (ADS)

    Soderquist, Peter; Leeser, Miriam E.

    1999-01-01

    Digital video decoding, enabled by the MPEG-2 Video standard, is an important future application for embedded systems, particularly PDAs and other information appliances. Many such systems require portability and wireless communication capabilities, and thus face severe limitations in size and power consumption. This places a premium on integration and efficiency, and favors software solutions for video functionality over specialized hardware. The processors in most embedded systems currently lack the computational power needed to perform video decoding, but a related and equally important problem is the required data bandwidth, and the need to cost-effectively ensure adequate data supply. MPEG data sets are very large and generate significant amounts of excess memory traffic for standard data caches, up to 100 times the amount required for decoding. Meanwhile, cost and power limitations restrict cache sizes in embedded systems. Some systems, including many media processors, eliminate caches in favor of memories under direct, painstaking software control in the manner of digital signal processors. Yet MPEG data has locality which caches can exploit if properly optimized, providing fast, flexible, and automatic data supply. We propose a set of enhancements which target the specific needs of the heterogeneous types within the MPEG decoder working set. These optimizations significantly improve the efficiency of small caches, reducing cache-memory traffic by almost 70 percent, and can make an enhanced 4 KB cache perform better than a standard 1 MB cache. This performance improvement can enable high-resolution, full-frame-rate video playback in cheaper, smaller systems than would otherwise be possible.

463. Aristotelian syllogisms

    NASA Astrophysics Data System (ADS)

    Ollongren, Alexander

    2011-02-01

    Aristotelian assertive syllogistic logic (without modalities) is embedded in the author's Lingua Cosmica. The well-known basic structures of assertions, and the conversions between them, in this logic are represented in LINCOS. Since these representations correspond with set-theoretic operations, the latter are embedded in LINCOS as well. Based on this, valid argumentation in Aristotle's sense is obtained for four important so-called perfect figures. Their constructive (intuitionistic) verifications are of a surprisingly elegant simplicity.
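    As an illustration of the set-theoretic reading, the perfect figure Barbara (all M are P; all S are M; therefore all S are P) can be checked with ordinary subset operations; the example sets in this Python sketch are invented:

        def all_are(a, b):
            """Set-theoretic reading of the categorical assertion 'all A are B'."""
            return a <= b                        # subset relation

        # Barbara: all M are P, all S are M  |-  all S are P
        P = {"socrates", "plato", "fido"}        # mortals
        M = {"socrates", "plato"}                # humans
        S = {"socrates"}                         # athenians

        if all_are(M, P) and all_are(S, M):
            assert all_are(S, P)                 # guaranteed by transitivity of the subset relation
            print("Barbara holds for these sets")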
464. The Benard problem: A comparison of finite difference and spectral collocation eigen value solutions

    NASA Technical Reports Server (NTRS)

    Skarda, J. Raymond Lee; Mccaughan, Frances E.; Fitzmaurice, Nessan

    1995-01-01

    The application of spectral methods, using a Chebyshev collocation scheme, to solve hydrodynamic stability problems is demonstrated on the Benard problem. Implementation of the Chebyshev collocation formulation is described. The performance of the spectral scheme is compared with that of a second-order finite difference scheme. An exact solution to the Marangoni-Benard problem is used to evaluate the performance of both schemes. The error of the spectral scheme is at least seven orders of magnitude smaller than the finite difference error for a grid resolution of N = 15 (number of points used). The performance of the spectral formulation far exceeded the performance of the finite difference formulation for this problem. The spectral scheme required only slightly more effort to set up than the second-order finite difference scheme, which suggests that the spectral scheme may actually be faster to implement than higher-order finite difference schemes.
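    For readers unfamiliar with the method, Chebyshev collocation differentiates a function through a dense matrix built on Chebyshev-Gauss-Lobatto points; the following numpy sketch builds the standard first-derivative matrix (a generic textbook construction, not the stability solver used in the report):

        import numpy as np

        def cheb(N):
            """Chebyshev-Gauss-Lobatto points and first-derivative collocation
            matrix on [-1, 1] (Trefethen-style construction)."""
            if N == 0:
                return np.zeros((1, 1)), np.array([1.0])
            x = np.cos(np.pi * np.arange(N + 1) / N)          # collocation points
            c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
            X = np.tile(x, (N + 1, 1)).T
            dX = X - X.T
            D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))   # off-diagonal entries
            D -= np.diag(D.sum(axis=1))                       # diagonal via negative row sums
            return D, x

        # Differentiate sin(pi*x) spectrally and compare with the exact derivative.
        D, x = cheb(15)
        err = np.max(np.abs(D @ np.sin(np.pi * x) - np.pi * np.cos(np.pi * x)))
        print(err)   # spectrally small error already at modest N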
465. Ab initio optimization principle for the ground states of translationally invariant strongly correlated quantum lattice models

    PubMed

    Ran, Shi-Ju

    2016-05-01

    In this work, a simple and fundamental numeric scheme dubbed the ab initio optimization principle (AOP) is proposed for the ground states of translationally invariant strongly correlated quantum lattice models. The idea is to transform a nondeterministic-polynomial-hard ground-state simulation with infinite degrees of freedom into a single optimization problem of a local function with a finite number of physical and ancillary degrees of freedom. This work contributes mainly in the following aspects. (1) AOP provides a simple and efficient scheme to simulate the ground state by solving a local optimization problem. Its solution contains two kinds of boundary states, one of which plays the role of the entanglement bath that mimics the interactions between a supercell and the infinite environment, while the other gives the ground state in a tensor network (TN) form. (2) In the sense of TN, a novel decomposition named tensor ring decomposition (TRD) is proposed to implement AOP. Instead of following the contraction-truncation scheme used by many existing TN-based algorithms, TRD solves the contraction of a uniform TN in the opposite way, by encoding the contraction in a set of self-consistent equations that automatically reconstruct the whole TN, making the simulation simple and unified. (3) AOP inherits and develops the ideas of different well-established methods, including the density matrix renormalization group (DMRG), infinite time-evolving block decimation (iTEBD), network contractor dynamics, density matrix embedding theory, etc., providing a unified perspective that was previously missing in this field. (4) AOP as well as TRD give novel implications for existing TN-based algorithms: a modified iTEBD is suggested, and the two-dimensional (2D) AOP is argued to be an intrinsic 2D extension of DMRG based on an infinite projected entangled pair state. This paper focuses on one-dimensional quantum models to present AOP. Benchmarks are given on a transverse Ising chain and the 2D classical Ising model, showing the remarkable efficiency and accuracy of AOP.
466. Nearest neighbor-density-based clustering methods for large hyperspectral images

    NASA Astrophysics Data System (ADS)

    Cariou, Claude; Chehdi, Kacem

    2017-10-01

    We address the problem of hyperspectral image (HSI) pixel partitioning using nearest neighbor-density-based (NN-DB) clustering methods. NN-DB methods are able to cluster objects without specifying the number of clusters to be found. Within the NN-DB approach, we focus on deterministic methods, e.g. ModeSeek, knnClust, and GWENN (standing for Graph WatershEd using Nearest Neighbors). These methods only require the availability of a k-nearest neighbor (kNN) graph based on a given distance metric. Recently, a new DB clustering method, called Density Peak Clustering (DPC), has received much attention, and kNN versions of it have quickly followed and shown their efficiency. However, NN-DB methods still suffer from the difficulty of obtaining the kNN graph, due to the quadratic complexity with respect to the number of pixels. This is why GWENN was embedded into a multiresolution (MR) scheme to bypass the computation of the full kNN graph over the image pixels. In this communication, we propose to extend the MR-GWENN scheme in three respects. Firstly, similarly to knnClust, the original labeling rule of GWENN is modified to account for local density values in addition to the labels of previously processed objects. Secondly, we set up a modified NN search procedure within the MR scheme, in order to stabilize the number of clusters found from the coarsest to the finest spatial resolution. Finally, we show that these extensions can be easily adapted to the three other NN-DB methods (ModeSeek, knnClust, knnDPC) for pixel clustering in large HSIs. Experiments are conducted to compare the four NN-DB methods for pixel clustering in HSIs. We show that NN-DB methods can outperform a classical clustering method such as fuzzy c-means (FCM) in terms of classification accuracy, relevance of found clusters, and clustering speed. Finally, we demonstrate the feasibility and evaluate the performance of NN-DB methods on a very large image acquired by our AISA Eagle hyperspectral imaging sensor.
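    To make the NN-DB idea concrete, here is a small numpy sketch of kNN mode-seeking clustering in the spirit of ModeSeek; the density estimate, linking rule, and parameters are simplified assumptions and do not reproduce GWENN or MR-GWENN:

        import numpy as np

        def knn_mode_seek(X, k=10):
            """Cluster points by linking each one to its densest k-nearest neighbor
            and following the links up to a local density mode."""
            n = len(X)
            d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)     # pairwise squared distances
            knn = np.argsort(d2, axis=1)[:, :k + 1]                 # each row includes the point itself
            density = 1.0 / (np.sort(d2, axis=1)[:, 1:k + 1].mean(1) + 1e-12)
            parent = np.array([nb[np.argmax(density[nb])] for nb in knn])
            labels = np.arange(n)
            for i in range(n):
                j, steps = i, 0
                while parent[j] != j and steps < n:                 # follow links to a mode
                    j = parent[j]
                    steps += 1
                labels[i] = j
            modes, labels = np.unique(labels, return_inverse=True)
            return labels, len(modes)

        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
        labels, n_clusters = knn_mode_seek(X, k=10)
        print(n_clusters)   # typically 2 for this toy example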
467. 3D facial landmarks: Inter-operator variability of manual annotation

    PubMed Central

    2014-01-01

    Background: Manual annotation of landmarks is a known source of variance, which exists in all fields of medical imaging, influencing the accuracy and interpretation of the results. However, the variability of human facial landmarks is only sparsely addressed in the current literature, as opposed to, e.g., the research fields of orthodontics and cephalometrics. We present a full facial 3D annotation procedure and a sparse set of manually annotated landmarks, in an effort to reduce operator time and minimize the variance. Method: Facial scans from 36 voluntary unrelated blood donors from the Danish Blood Donor Study were randomly chosen. Six operators twice manually annotated 73 anatomical and pseudo-landmarks, using a three-step scheme producing a dense point correspondence map. We analyzed both the intra- and inter-operator variability using mixed-model ANOVA, and then compared four sparse sets of landmarks in order to construct a dense correspondence map of the 3D scans with minimum point variance. Results: The anatomical landmarks of the eye were associated with the lowest variance, particularly the centers of the pupils, whereas points on the jaw and eyebrows had the highest variation. Intra-operator and portrait-related variability was marginal. Using a sparse set of landmarks (n=14) that captures the whole face, the mean dense-point variance was reduced from 1.92 to 0.54 mm. Conclusion: The inter-operator variability was primarily associated with particular landmarks, with more leniently defined landmarks having the highest variability. The variables embedded in the portrait and the reliability of a trained operator had only a marginal influence on the variability. Further, using 14 of the annotated landmarks we were able to reduce the variability and create a dense correspondence mesh that captures all facial features. PMID:25306436

468. FPGA wavelet processor design using language for instruction-set architectures (LISA)

    NASA Astrophysics Data System (ADS)

    Meyer-Bäse, Uwe; Vera, Alonzo; Rao, Suhasini; Lenk, Karl; Pattichis, Marios

    2007-04-01

    The design of a microprocessor is a long, tedious, and error-prone task, typically consisting of the following design phases: architecture exploration, software design (assembler, linker, loader, profiler), architecture implementation (RTL generation for an FPGA or cell-based ASIC), and verification. The Language for Instruction-Set Architectures (LISA) allows a microprocessor to be modeled not only from an instruction-set description but also from an architecture description, including pipelining behavior, which keeps the design and development tools consistent across all levels of the design. To explore the capability of the LISA processor design platform, a.k.a. CoWare Processor Designer, we present in this paper three microprocessor designs that implement an 8/8 wavelet transform processor of the kind used in today's FBI fingerprint compression scheme. We have designed a 3-stage pipelined 16-bit RISC processor (NanoBlaze). Although RISC microprocessors are usually considered "fast" processors due to design concepts like a constant instruction word size, deep pipelines, and many general-purpose registers, it turns out that DSP operations consume substantial processing time in a RISC processor. In a second step, we used design principles from programmable digital signal processors (PDSPs) to improve the throughput of the DWT processor. A multiply-accumulate operation, along with indirect addressing, was the key to achieving higher throughput. A further improvement is possible with today's FPGA technology: today's FPGAs offer a large number of embedded array multipliers, and it is now feasible to design a "true" vector processor (TVP). A multiplication of two vectors can be done in just one clock cycle with our TVP, and a complete scalar product in two clock cycles. Code profiling and Xilinx FPGA ISE synthesis results are provided that demonstrate the substantial improvement of a TVP over traditional RISC or PDSP designs.
469. From Curves to Trees: A Tree-like Shapes Distance Using the Elastic Shape Analysis Framework

    PubMed

    Mottini, A; Descombes, X; Besse, F

    2015-04-01

    Trees are a special type of graph that can be found in various disciplines. In the field of biomedical imaging, trees have been widely studied as they can be used to describe structures such as neurons, blood vessels and lung airways. It has been shown that the morphological characteristics of these structures can provide information on their function, aiding the characterization of pathological states. It is therefore important to develop methods that analyze their shape and quantify differences between their structures. In this paper, we present a method for the comparison of tree-like shapes that takes into account both topological and geometrical information. This method, which is based on the Elastic Shape Analysis Framework, also computes the mean shape of a population of trees. As a first application, we have considered the comparison of axon morphology. The performance of our method has been evaluated on two sets of images. For the first set, we considered four different populations of neurons from different animals and brain sections from the NeuroMorpho.org open database. The second set was composed of a database of 3D confocal microscopy images of three populations of axonal trees (normal and two types of mutations) of the same type of neurons. We calculated the inter- and intra-class distances between the populations and embedded the distance in a classification scheme. We compared the performance of our method against three other state-of-the-art algorithms, and the results showed that the proposed method better distinguishes between the populations. Furthermore, we present the mean shape of each population. These shapes give a more complete picture of the morphological characteristics of each population, compared to the average value of certain predefined features.

470. Heuristic pattern correction scheme using adaptively trained generalized regression neural networks

    PubMed

    Hoya, T; Chambers, J A

    2001-01-01

    In many pattern classification problems, an intelligent neural system is required which can learn newly encountered but misclassified patterns incrementally, while keeping a good classification performance over the past patterns stored in the network. In this paper, a heuristic pattern correction scheme is proposed using adaptively trained generalized regression neural networks (GRNNs). The scheme is based upon both network growing and dual-stage shrinking mechanisms. In the network growing phase, a subset of the misclassified patterns in each incoming data set is iteratively added into the network until all the patterns in the incoming data set are classified correctly. Then, the redundancy introduced in the growing phase is removed in the dual-stage network shrinking. Both long- and short-term memory models are considered in the network shrinking; these are motivated by biological studies of the brain. The learning capability of the proposed scheme is investigated through extensive simulation studies.
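    The GRNN at the core of the scheme is essentially a Gaussian-kernel regressor over the stored patterns; below is a compact Python sketch of GRNN prediction and naive incremental growing (bandwidth, data, and update policy are illustrative, not the paper's growing/shrinking mechanism):

        import numpy as np

        def grnn_predict(X_train, y_train, x_query, sigma=0.5):
            """Generalized regression neural network output: a Gaussian-kernel
            weighted average of the stored target values."""
            d2 = ((X_train - x_query) ** 2).sum(axis=1)
            w = np.exp(-d2 / (2.0 * sigma ** 2))
            return (w @ y_train) / (w.sum() + 1e-12)

        # Toy example: learn y = x0 + x1 from a few stored patterns.
        X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
        y = X.sum(axis=1)
        print(grnn_predict(X, y, np.array([0.9, 0.1])))   # close to 1.0

        # Incremental learning in the spirit of the scheme: a poorly predicted
        # pattern can simply be appended to the stored set.
        X = np.vstack([X, [0.5, 0.5]])
        y = np.append(y, 1.0)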
Matching by linear programming and successive convexification.

    PubMed

    Jiang, Hao; Drew, Mark S; Li, Ze-Nian

    2007-06-01

    We present a novel convex programming scheme to solve matching problems, focusing on the challenging problem of matching in a large search range and with cluttered background. Matching is formulated as metric labeling with L1 regularization terms, for which we propose a novel linear programming relaxation method and an efficient successive convexification implementation. The unique feature of the proposed relaxation scheme is that a much smaller set of basis labels is used to represent the original label space. This greatly reduces the size of the search space. A successive convexification scheme solves the labeling problem in a coarse-to-fine manner. Importantly, the original cost function is reconvexified at each stage, in the new focus region only, and the focus region is updated so as to refine the search result. This makes the method well suited for matching with large label sets. Experiments demonstrate successful applications of the proposed matching scheme in object detection, motion estimation, and tracking.

A Code Division Multiple Access Communication System for the Low Frequency Band.

    DTIC Science & Technology

    1983-04-01

    Keywords: frequency channels; spread-spectrum communication; complex sequences; orthogonal codes; impulsive noise. ... their transmissions with signature sequences. Our LF/CDMA scheme is different in that each user's signature sequence set consists of M orthogonal sequences, and thus log2 M ...

Experimental demonstration of selective quantum process tomography on an NMR quantum information processor

    NASA Astrophysics Data System (ADS)

    Gaikwad, Akshay; Rehal, Diksha; Singh, Amandeep; Arvind; Dorai, Kavita

    2018-02-01

    We present the NMR implementation of a scheme for selective and efficient quantum process tomography without ancilla. We generalize this scheme such that it can be implemented efficiently using only a set of measurements involving product operators. The method allows us to estimate any element of the quantum process matrix to a desired precision, provided a set of quantum states can be prepared efficiently. Our modified technique requires fewer experimental resources than the standard implementation of selective and efficient quantum process tomography, as it exploits the special nature of NMR measurements to compute specific elements of the process matrix from a restricted set of subsystem measurements. To demonstrate the efficacy of our scheme, we experimentally tomograph the processes corresponding to "no operation," a controlled-NOT (CNOT), and a controlled-Hadamard gate on a two-qubit NMR quantum information processor, with high fidelities.

A hybrid Lagrangian Voronoi-SPH scheme

    NASA Astrophysics Data System (ADS)

    Fernandez-Gutierrez, D.; Souto-Iglesias, A.; Zohdi, T. I.

    2018-07-01

    A hybrid Lagrangian Voronoi-SPH scheme, with an explicit weakly compressible formulation for both the Voronoi and SPH sub-domains, has been developed. The SPH discretization is substituted by Voronoi elements close to solid boundaries, where SPH consistency and boundary-condition implementation become problematic. A buffer zone to couple the dynamics of both sub-domains is used.
This zone is formed by a set of particles where fields are interpolated taking into account both SPH particles and Voronoi elements. A particle may move in or out of the buffer zone depending on its proximity to a solid boundary. The accuracy of the coupled scheme is discussed by means of a set of well-known verification benchmarks.

Extracting similar terms from multiple EMR-based semantic embeddings to support chart reviews.

    PubMed

    Cheng Ye, M S; Fabbri, Daniel

    2018-05-21

    Word embeddings project semantically similar terms into nearby points in a vector space. When trained on clinical text, these embeddings can be leveraged to improve keyword search and text highlighting. In this paper, we present methods to refine the selection of similar terms from multiple EMR-based word embeddings, and evaluate their performance quantitatively and qualitatively across multiple chart review tasks. Word embeddings were trained on each clinical note type in an EMR. These embeddings were then combined, weighted, and truncated to select a refined set of similar terms to be used in keyword search and text highlighting. To evaluate their quality, we measured the similar terms' information retrieval (IR) performance using precision-at-K (P@5, P@10). Additionally, a user study evaluated users' search term preferences, while a timing study measured the time to answer a question from a clinical chart. The refined terms outperformed the baseline method's information retrieval performance (e.g., increasing the average P@5 from 0.48 to 0.60). Additionally, the refined terms were preferred by most users and reduced the average time to answer a question. Clinical information can be more quickly retrieved and synthesized when using semantically similar terms from multiple embeddings. Copyright © 2018.
Published by Elsevier Inc.

Numerical Schemes for the Hamilton-Jacobi and Level Set Equations on Triangulated Domains

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Sethian, James A.

    2006-01-01

    Borrowing from techniques developed for conservation law equations, we have developed both monotone and higher-order accurate numerical schemes which discretize the Hamilton-Jacobi and level set equations on triangulated domains. The use of unstructured meshes containing triangles (2D) and tetrahedra (3D) easily accommodates mesh adaptation to resolve disparate level set feature scales with a minimal number of solution unknowns. The minisymposium talk will discuss these algorithmic developments and present sample calculations using our adaptive triangulation algorithm applied to various moving interface problems such as etching, deposition, and curvature flow.
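For reference, the monotone building block behind such schemes is easiest to see on a uniform grid. The sketch below is a first-order Godunov-type upwind update for the one-dimensional level set equation phi_t + F*|phi_x| = 0 with F >= 0 and periodic boundaries; it is only the structured-grid analogue of the unstructured, triangulated-domain construction described above.

    import numpy as np

    # First-order upwind (Godunov) step for phi_t + F*|phi_x| = 0, F >= 0,
    # on a uniform periodic grid (np.roll wraps the boundary).
    def level_set_step(phi, F, dx, dt):
        dminus = (phi - np.roll(phi, 1)) / dx            # backward difference D- phi
        dplus = (np.roll(phi, -1) - phi) / dx            # forward difference  D+ phi
        grad = np.sqrt(np.maximum(dminus, 0.0) ** 2 + np.minimum(dplus, 0.0) ** 2)
        return phi - dt * F * grad                       # explicit Euler; CFL: F*dt <= dx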
Quantum color image watermarking based on Arnold transformation and LSB steganography

    NASA Astrophysics Data System (ADS)

    Zhou, Ri-Gui; Hu, Wenwen; Fan, Ping; Luo, Gaofeng

    In this paper, a quantum color image watermarking scheme is proposed through twice-scrambling by Arnold transformations and steganography by least significant bit (LSB) substitution. Both the carrier image and the watermark image are represented by the novel quantum representation of color digital images model (NCQI). The image sizes for the carrier and the watermark are assumed to be 2^n × 2^n and 2^(n-1) × 2^(n-1), respectively. At first, the watermark is scrambled into a disordered form through an image preprocessing technique that simultaneously exchanges pixel positions and alters the color information based on Arnold transforms. Then, the scrambled watermark of size 2^(n-1) × 2^(n-1) with 24-qubit grayscale is further expanded to an image of size 2^n × 2^n with 6-qubit grayscale using the nearest-neighbor interpolation method. Finally, the scrambled and expanded watermark is embedded into the carrier by the LSB steganography scheme, and a key image of size 2^n × 2^n with 3-qubit information is generated at the same time, such that only the key image can be used to retrieve the original watermark. The extraction of the watermark is the reverse of embedding and is achieved by applying the sequence of operations in the reverse order. Simulation-based experimental results involving different carrier and watermark images (i.e., conventional or non-quantum) were obtained with the classical computer's MATLAB 2014b software and illustrate that the present method performs well in terms of three criteria: visual quality, robustness and steganography capacity.
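The two classical building blocks named in the record, Arnold scrambling and LSB substitution, are easy to state outside the quantum representation. The sketch below is the classical analogue only; the NCQI circuits, the color-channel handling and the key image are not reproduced, and a square N x N image stored as a NumPy array is assumed.

    import numpy as np

    # Arnold cat-map scrambling of a square N x N image: (x, y) -> (x+y, x+2y) mod N.
    def arnold(img, iterations=1):
        n = img.shape[0]
        out = img.copy()
        for _ in range(iterations):
            scrambled = np.empty_like(out)
            for x in range(n):
                for y in range(n):
                    scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
            out = scrambled
        return out

    # LSB substitution: replace the least significant bit of each carrier pixel
    # with one watermark bit (watermark_bits is a flat array of 0/1 values).
    def embed_lsb(carrier, watermark_bits):
        flat = carrier.flatten()
        flat[:watermark_bits.size] = (flat[:watermark_bits.size] & 0xFE) | watermark_bits
        return flat.reshape(carrier.shape)

    def extract_lsb(stego, n_bits):
        return stego.flatten()[:n_bits] & 1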
A joint FED watermarking system using spatial fusion for verifying the security issues of teleradiology.

    PubMed

    Viswanathan, P; Krishna, P Venkata

    2014-05-01

    Teleradiology allows transmission of medical images for clinical data interpretation to provide improved e-health care access, delivery, and standards. The remote transmission raises various ethical and legal issues such as image retention, fraud, privacy, and malpractice liability. A joint FED watermarking system, that is, a joint fingerprint/encryption/dual watermarking system, is proposed for addressing these issues. The system combines a region-based substitution dual watermarking algorithm using spatial fusion, a stream cipher algorithm using a symmetric key, and a fingerprint verification algorithm using invariants. This paper aims to provide access to the outcomes of medical images with confidentiality, availability, integrity, and proof of origin. The watermarking, encryption, and fingerprint enrollment are conducted jointly in the protection stage such that the extraction, decryption, and verification can be applied independently. The dual watermarking system, introducing two different embedding schemes, one used for patient data and the other for fingerprint features, reduces the difficulty of maintaining multiple documents such as authentication data, personnel and diagnosis data, and medical images. The spatial fusion algorithm, which determines the region of embedding using a threshold derived from the image to embed the encrypted patient data, follows the exact rules of fusion, resulting in better quality than other fusion techniques. The four-step stream cipher algorithm using a symmetric key for encrypting the patient data, together with the fingerprint verification system using algebraic invariants, improves the robustness of the medical information. The proposed scheme was evaluated for security and quality on DICOM medical images and performed well in terms of resistance to attacks, quality index, and imperceptibility.

Randomized central limit theorems: A unified theory.

    PubMed

    Eliazar, Iddo; Klafter, Joseph

    2010-08-01

    The central limit theorems (CLTs) characterize the macroscopic statistical behavior of large ensembles of independent and identically distributed random variables. The CLTs assert that the universal probability laws governing ensembles' aggregate statistics are either Gaussian or Lévy, and that the universal probability laws governing ensembles' extreme statistics are Fréchet, Weibull, or Gumbel. The scaling schemes underlying the CLTs are deterministic: all ensemble components are scaled by a common deterministic scale. However, there are "random environment" settings in which the underlying scaling schemes are stochastic: the ensemble components are scaled by different random scales. Examples of such settings include Holtsmark's law for gravitational fields and the stretched exponential law for relaxation times. In this paper we establish a unified theory of randomized central limit theorems (RCLTs), in which the deterministic CLT scaling schemes are replaced with stochastic scaling schemes, and present "randomized counterparts" to the classic CLTs. The RCLT scaling schemes are shown to be governed by Poisson processes with power-law statistics, and the RCLTs are shown to universally yield the Lévy, Fréchet, and Weibull probability laws.

Embedded DCT and wavelet methods for fine granular scalable video: analysis and comparison

    NASA Astrophysics Data System (ADS)

    van der Schaar-Mitrea, Mihaela; Chen, Yingwei; Radha, Hayder

    2000-04-01

    Video transmission over bandwidth-varying networks is becoming increasingly important due to emerging applications such as streaming of video over the Internet. The fundamental obstacle in designing such systems resides in the varying characteristics of the Internet (i.e., bandwidth variations and packet-loss patterns). In MPEG-4, a new SNR scalability scheme, called Fine-Granular-Scalability (FGS), is currently under standardization; it is able to adapt in real time (i.e., at transmission time) to Internet bandwidth variations. The FGS framework consists of a non-scalable motion-predicted base layer and an intra-coded fine-granular scalable enhancement layer. For example, the base layer can be coded using a DCT-based, MPEG-4 compliant, highly efficient video compression scheme.
Subsequently, the difference between the original and the decoded base layer is computed, and the resulting FGS residual signal is intra-frame coded with an embedded scalable coder. In order to achieve high coding efficiency when compressing the FGS enhancement layer, it is crucial to analyze the nature and characteristics of residual signals common to the SNR scalability framework (including FGS). In this paper, we present a thorough analysis of SNR residual signals by evaluating their statistical properties, compaction efficiency and frequency characteristics. The signal analysis revealed that the energy compaction of the DCT and wavelet transforms is limited and that the frequency characteristics of SNR residual signals decay rather slowly. Moreover, the blockiness artifacts of the low bit-rate coded base layer result in artificial high frequencies in the residual signal. Subsequently, a variety of wavelet and embedded DCT coding techniques applicable to the FGS framework are evaluated and their results are interpreted based on the identified signal properties. As expected from the theoretical signal analysis, the rate-distortion performances of the embedded wavelet and DCT-based coders are very similar. However, improved results can be obtained for the wavelet coder by deblocking the base layer prior to the FGS residual computation. Based on the theoretical analysis and our measurements, we conclude that for an optimal complexity versus coding-efficiency trade-off, only a limited wavelet decomposition (e.g., 2 stages) needs to be performed for the FGS residual signal. Also, it was observed that the good rate-distortion performance of a coding technique for a certain image type (e.g., natural still images) does not necessarily translate into similarly good performance for signals with different visual characteristics and statistical properties.
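The core objects discussed above are the FGS residual (original minus decoded base layer) and its embedded, most-significant-bit-plane-first ordering, which is what allows the enhancement layer to be truncated at an arbitrary point. The sketch below shows only those two steps in the pixel domain; a real FGS coder would operate on DCT coefficients of the residual and handle signs and entropy coding, none of which is reproduced here.

    import numpy as np

    # FGS-style residual and bit-plane ordering (pixel domain, magnitudes only).
    def fgs_bitplanes(original, decoded_base, n_planes=8):
        residual = original.astype(np.int16) - decoded_base.astype(np.int16)
        magnitude = np.abs(residual).astype(np.uint8)
        planes = [(magnitude >> b) & 1 for b in range(n_planes - 1, -1, -1)]  # MSB first
        return residual, planes   # transmitting a prefix of `planes` refines quality gradually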
Development of a novel coding scheme (SABICS) to record nurse-child interactive behaviours in a community dental preventive intervention.

    PubMed

    Zhou, Yuefang; Cameron, Elaine; Forbes, Gillian; Humphris, Gerry

    2012-08-01

    To develop and validate the St Andrews Behavioural Interaction Coding Scheme (SABICS): a tool to record nurse-child interactive behaviours. The SABICS was developed primarily from observation of video-recorded interactions and refined through an iterative process of applying the scheme to new data sets. Its practical applicability was assessed via implementation of the scheme in specialised behavioural coding software. Reliability was calculated using Cohen's kappa. Discriminant validity was assessed using logistic regression. The SABICS contains 48 codes. Fifty-five nurse-child interactions were successfully coded by administering the scheme on The Observer XT 8.0 system. Two visualizations of interaction patterns demonstrated the scheme's capability to capture complex interaction processes. Cohen's kappa was 0.66 (inter-coder) and 0.88 and 0.78 (two intra-coders). The frequency of nurse behaviours such as "instruction" (OR = 1.32, p = 0.027) and "praise" (OR = 2.04, p = 0.027) predicted a child receiving the intervention. The SABICS is a unique system for recording interactions between dental nurses and 3-5 year old children. It records and displays complex nurse-child interactive behaviours. It is easily administered and demonstrates reasonable psychometric properties. The SABICS has potential for other paediatric settings, and its development procedure may be helpful for the development of similar coding schemes. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

Flexible IQ-V Scheme of a DFIG for Rapid Voltage Regulation of a Wind Power Plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Jinho; Muljadi, Eduard; Park, Jung-Wook

    This paper proposes a flexible reactive current-to-voltage (IQ-V) scheme of a doubly-fed induction generator (DFIG) for the rapid voltage regulation of a wind power plant (WPP). In the proposed scheme, the WPP controller dispatches different voltage set points to the DFIGs depending on their rotor voltage margins. The DFIGs inject different reactive power with the flexible IQ-V schemes implemented in the rotor-side and grid-side converters. The IQ-V characteristic, which consists of the gain and width of a linear band and the IQ capability, varies with time depending on the IQ capability of the converters and the voltage dip at the point of interconnection (POI). To increase the IQ capability during a fault, the active current is reduced in proportion to the voltage dip. If the IQ capability and/or the POI voltage dip are large, the IQ-V gain is set to be high, thereby providing rapid voltage regulation. To avoid an overvoltage after the fault clearance, a rapid IQ reduction scheme is implemented in the WPP and DFIG controllers. The performance of the proposed flexible scheme was verified under scenarios with various disturbances. In conclusion, the proposed scheme can help increase wind power penetration without jeopardizing voltage stability.
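The record describes the IQ-V characteristic qualitatively: a linear band with a gain, bounded by the converter's reactive-current capability. The function below is a hypothetical, static rendering of such a droop characteristic; the gain, deadband and limit values are illustrative only, whereas in the proposed scheme the gain and the IQ limit vary with time and with the converters' headroom.

    # Hypothetical IQ-V droop: reactive current command as a function of POI voltage.
    def iq_command(v_poi, v_ref=1.0, deadband=0.02, gain=5.0, iq_max=1.0):
        error = v_ref - v_poi                     # positive during a voltage dip
        if abs(error) <= deadband:                # inside the dead zone around the set point
            return 0.0
        sign = 1.0 if error > 0 else -1.0
        iq = gain * (error - sign * deadband)     # linear band beyond the deadband
        return max(-iq_max, min(iq_max, iq))      # clamp to the converter's IQ capability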
A generalized form of the Bernoulli Trial collision scheme in DSMC: Derivation and evaluation

    NASA Astrophysics Data System (ADS)

    Roohi, Ehsan; Stefanov, Stefan; Shoja-Sani, Ahmad; Ejraei, Hossein

    2018-02-01

    The impetus of this research is to present a generalized Bernoulli Trial collision scheme in the context of the direct simulation Monte Carlo (DSMC) method. Previously, a succession of collision schemes mathematically based on the Kac stochastic model has been put forward. These include the Bernoulli Trial (BT), Ballot Box (BB), Simplified Bernoulli Trial (SBT) and Intelligent Simplified Bernoulli Trial (ISBT) schemes. The number of pairs considered for a possible collision in the above-mentioned schemes varies between N^(l)(N^(l) - 1)/2 in BT, 1 in BB, and (N^(l) - 1) in SBT or ISBT, where N^(l) is the instantaneous number of particles in the lth cell. Here, we derive a generalized form of the Bernoulli Trial collision scheme (GBT) where the number of selected pairs is any desired value smaller than (N^(l) - 1), i.e., Nsel < (N^(l) - 1), while keeping the collision frequency and the accuracy of the solution the same as in the original SBT and BT models. We derive two distinct formulas for the GBT scheme, where both formulas recover the BB and SBT limits if Nsel is set to 1 and N^(l) - 1, respectively, and provide accurate solutions for a wide set of test cases. The present generalization further improves the computational efficiency of the BT-based collision models compared to the standard no-time-counter (NTC) and nearest-neighbor (NN) collision models.

Emulating RRTMG Radiation with Deep Neural Networks for the Accelerated Model for Climate and Energy

    NASA Astrophysics Data System (ADS)

    Pal, A.; Norman, M. R.

    2017-12-01

    The RRTMG radiation scheme in the Accelerated Model for Climate and Energy Multi-scale Model Framework (ACME-MMF) is a bottleneck and consumes approximately 50% of the computational time. Simulating a case with the RRTMG radiation scheme in ACME-MMF at high throughput and high resolution will therefore require speeding up this calculation while retaining physical fidelity.
In this study, RRTMG radiation is emulated with deep neural networks (DNNs). The first step towards this goal is to run a case with ACME-MMF and generate input data sets for the DNNs. A principal component analysis of these input data sets is carried out. Artificial data sets are created from the previous data sets to cover a wider space. These artificial data sets are used in a standalone RRTMG radiation scheme to generate outputs in a cost-effective manner. The resulting input-output pairs are used to train DNNs with multiple architectures (DNN 1). Another network (DNN 2) is trained on the inputs to predict the error. A reverse emulation is trained to map the output back to the input. An error-controlled code is developed with the two DNNs and determines when, or if, the original parameterization needs to be used.

Pre-coding method and apparatus for multiple source or time-shifted single source data and corresponding inverse post-decoding method and apparatus

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu (Inventor)

    1997-01-01

    A pre-coding method and device for improving data compression performance by removing correlation between a first original data set and a second original data set, each having M members. The pre-coding method produces a compression-efficiency-enhancing double-difference data set. The method and device produce a double-difference data set, i.e., an adjacent-delta calculation performed on a cross-delta data set or a cross-delta calculation performed on two adjacent-delta data sets, from either one of (1) two adjacent spectral bands coming from two discrete sources, respectively, or (2) two time-shifted data sets coming from a single source. The resulting double-difference data set is then coded using either a distortionless data encoding scheme (entropy encoding) or a lossy data compression scheme. Also disclosed are a post-decoding method and device for recovering a second original data set having been represented by such a double-difference data set.
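The double-difference operation itself is compact enough to state directly: a cross-delta between the two data sets followed by an adjacent-delta along the set (the record also allows the reverse order). The sketch below assumes equal-length one-dimensional integer arrays and shows the matching inverse used at post-decoding; function names are illustrative.

    import numpy as np

    # Double-difference pre-coding: cross-delta between two sources, then an
    # adjacent-delta along the resulting set.
    def double_difference(set_a, set_b):
        cross = set_b.astype(np.int32) - set_a.astype(np.int32)   # cross-delta
        return np.diff(cross, prepend=0)                          # adjacent-delta

    # Inverse post-decoding: undo the adjacent-delta, then the cross-delta.
    def recover_set_b(set_a, dd):
        cross = np.cumsum(dd)
        return set_a.astype(np.int32) + cross

    a = np.array([10, 12, 15, 20])
    b = np.array([11, 14, 18, 22])
    assert np.array_equal(recover_set_b(a, double_difference(a, b)), b)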
The Effect of Different Resistance Training Load Schemes on Strength and Body Composition in Trained Men

    PubMed Central

    Lopes, Charles Ricardo; Aoki, Marcelo Saldanha; Crisp, Alex Harley; de Mattos, Renê Scarpari; Lins, Miguel Alves; da Mota, Gustavo Ribeiro; Schoenfeld, Brad Jon; Marchetti, Paulo Henrique

    2017-01-01

    The purpose of this study was to evaluate the impact of moderate-load (10 RM) and low-load (20 RM) resistance training schemes on maximal strength and body composition. Sixteen resistance-trained men were randomly assigned to one of two groups: a moderate-load group (n = 8) or a low-load group (n = 8). The resistance training schemes consisted of 8 exercises performed 4 times per week for 6 weeks. In order to equate the number of repetitions performed by each group, the moderate-load group performed 6 sets of 10 RM, while the low-load group performed 3 sets of 20 RM. Between-group differences were evaluated using a 2-way ANOVA and independent t-tests. There was no difference in the weekly total load lifted (sets × reps × kg) between the 2 groups. Both groups equally improved maximal strength and measures of body composition after 6 weeks of resistance training, with no significant between-group differences detected. In conclusion, both moderate-load and low-load resistance training schemes, equated for total load lifted, induced similar improvements in maximal strength and body composition in resistance-trained men. PMID:28828088

Approaching the theoretical limit in periodic local MP2 calculations with atomic-orbital basis sets: the case of LiH.

    PubMed

    Usvyat, Denis; Civalleri, Bartolomeo; Maschio, Lorenzo; Dovesi, Roberto; Pisani, Cesare; Schütz, Martin

    2011-06-07

    The atomic-orbital basis set limit is approached in periodic correlated calculations for solid LiH. The valence correlation energy is evaluated at the level of local periodic second-order Møller-Plesset perturbation theory (MP2), using basis sets of progressively increasing size, and also employing "bond"-centered basis functions in addition to the standard atom-centered ones. Extended basis sets, which contain linear dependencies, are processed only at the MP2 stage via a dual basis set scheme. The local approximation (domain) error has been consistently eliminated by expanding the orbital excitation domains.
As a final result, it is demonstrated that the complete basis set limit can be reached for both HF and local MP2 periodic calculations, and a general scheme is outlined for the definition of high-quality atomic-orbital basis sets for solids. © 2011 American Institute of Physics.

Term frequency - function of document frequency: a new term weighting scheme for enterprise information retrieval

    NASA Astrophysics Data System (ADS)

    Zhang, Hui; Wang, Deqing; Wu, Wenjun; Hu, Hongping

    2012-11-01

    In today's business environment, enterprises are increasingly under pressure to process the vast amount of data produced every day within the enterprise. One approach is to focus on business intelligence (BI) applications and increase the commercial added value through such business analytics activities. Term weighting, which is used to represent documents as vectors in the term space, is a vital task in enterprise information retrieval (IR), text categorisation, text analytics, etc. When determining term weights in a document, the traditional TF-IDF scheme sets the weight of a term considering only its occurrence frequency within the document and in the entire set of documents, which means that some meaningful terms cannot receive appropriate weights. In this article, we propose a new term weighting scheme called Term Frequency - Function of Document Frequency (TF-FDF) to address this issue. Instead of using a monotonically decreasing function such as inverse document frequency, FDF uses a convex function that dynamically adjusts weights according to the significance of the words in a document set. This function can be manually tuned based on the distribution of the most meaningful words, which semantically represent the document set. Our experiments show that TF-FDF achieves higher Normalised Discounted Cumulative Gain in IR than TF-IDF and its variants, improving the accuracy of relevance ranking of the IR results.
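To make the contrast concrete, the sketch below places a standard TF-IDF weight next to the TF-FDF idea, in which the monotonically decreasing IDF factor is replaced by a tunable function of document frequency. The actual FDF function is not given in the record, so the convex placeholder passed as `fdf` is an assumption, not the authors' formula.

    import numpy as np

    def tf_idf(tf, df, n_docs):
        return tf * np.log(n_docs / (1.0 + df))          # monotone decreasing in df

    def tf_fdf(tf, df, n_docs, fdf=lambda r: 1.0 + (r - 0.1) ** 2):
        # fdf takes the document-frequency ratio df/n_docs; convex, corpus-tuned placeholder.
        return tf * fdf(df / n_docs)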
Discriminative graph embedding for label propagation.

    PubMed

    Nguyen, Canh Hao; Mamitsuka, Hiroshi

    2011-09-01

    In many applications, the available information is encoded in graph structures. This is a common situation for biological networks, social networks, web communities and document citations. We investigate the problem of classifying node labels on a similarity graph given only the graph structure on the nodes. Conventional machine learning methods usually require data to reside in some Euclidean space or to have a kernel representation. Applying these methods to nodes on graphs would require embedding the graphs into such spaces. By embedding and then learning the nodes on graphs, most existing methods are either not flexible with respect to the learning objective or not efficient enough for large-scale applications. We propose a method to embed a graph into a feature space for a discriminative purpose. Our idea is to include label information in the embedding process, making the space representation tailored to the task. We design embedding objective functions such that the subsequent learning formulations become spectral transforms. We then reformulate these spectral transforms into multiple kernel learning problems. Our method, while being tailored to discriminative tasks, is efficient and can scale to massive data sets. We show the need for discriminative embedding on simulations. Applied to biological network problems, our method is shown to outperform baselines.

Minimizing embedding impact in steganography using trellis-coded quantization

    NASA Astrophysics Data System (ADS)

    Filler, Tomáš; Judas, Jan; Fridrich, Jessica

    2010-01-01

    In this paper, we propose a practical approach to minimizing embedding impact in steganography based on syndrome coding and trellis-coded quantization, and contrast its performance with bounds derived from appropriate rate-distortion bounds. We assume that each cover element can be assigned a positive scalar expressing the impact of making an embedding change at that element (single-letter distortion). The problem is to embed a given payload with the minimal possible average embedding impact. This task, which can be viewed as a generalization of matrix embedding or writing on wet paper, has been approached using heuristic and suboptimal tools in the past. Here, we propose a fast and very versatile solution to this problem that can theoretically achieve performance arbitrarily close to the bound. It is based on syndrome coding using linear convolutional codes with the optimal binary quantizer implemented using the Viterbi algorithm run in the dual domain. The complexity and memory requirements of the embedding algorithm are linear with respect to the number of cover elements. For practitioners, we include detailed algorithms for finding good codes and their implementation. Finally, we report extensive experimental results for a large set of relative payloads and for different distortion profiles, including the wet paper channel.
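The simplest instance of the matrix embedding that the syndrome-trellis construction above generalizes is binary Hamming-code embedding: with the [7,4] code, 3 message bits are hidden in 7 cover bits by flipping at most one of them. The sketch below shows that special case only; it ignores per-element distortions, which is exactly what the trellis-coded construction adds.

    import numpy as np

    # Parity-check matrix of the [7,4] Hamming code; column j is the binary
    # representation of j+1 (least significant bit in the first row).
    H = np.array([[1, 0, 1, 0, 1, 0, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])

    def embed(cover_bits, message_bits):
        syndrome = (H @ cover_bits) % 2
        diff = syndrome ^ message_bits
        stego = cover_bits.copy()
        if diff.any():
            col = int(diff[0] + 2 * diff[1] + 4 * diff[2]) - 1   # column of H equal to diff
            stego[col] ^= 1                                       # flip at most one cover bit
        return stego

    def extract(stego_bits):
        return (H @ stego_bits) % 2   # the recipient recomputes the syndrome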
The renormalization scale-setting problem in QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Xing-Gang; Brodsky, Stanley J.; Mojaza, Matin

    2013-09-01

    A key problem in making precise perturbative QCD predictions is to set the proper renormalization scale of the running coupling. The conventional scale-setting procedure assigns an arbitrary range and an arbitrary systematic error to fixed-order pQCD predictions. In fact, this ad hoc procedure gives results which depend on the choice of the renormalization scheme, and it is in conflict with the standard scale-setting procedure used in QED. Predictions for physical results should be independent of the choice of the scheme or other theoretical conventions. We review current ideas and points of view on how to deal with the renormalization scale ambiguity and show how to obtain renormalization scheme- and scale-independent estimates. We begin by introducing the renormalization group (RG) equation and an extended version, which expresses the invariance of physical observables under both renormalization scheme and scale-parameter transformations. The RG equation provides a convenient way of estimating the scheme and scale dependence of a physical process. We then discuss self-consistency requirements of the RG equations, such as reflexivity, symmetry, and transitivity, which must be satisfied by a scale-setting method. Four typical scale-setting methods suggested in the literature, i.e., the Fastest Apparent Convergence (FAC) criterion, the Principle of Minimum Sensitivity (PMS), the Brodsky–Lepage–Mackenzie method (BLM), and the Principle of Maximum Conformality (PMC), are introduced. Basic properties and their applications are discussed. We pay particular attention to the PMC, which satisfies all of the requirements of RG invariance. Using the PMC, all non-conformal terms associated with the β-function in the perturbative series are summed into the running coupling, and one obtains a unique, scale-fixed, scheme-independent prediction at any finite order. The PMC provides the principle underlying the BLM method, since it gives the general rule for extending BLM up to any perturbative order; in fact, they are equivalent to each other through the PMC–BLM correspondence principle. Thus, all the features previously observed in the BLM literature are also adaptable to the PMC. The PMC scales and the resulting finite-order PMC predictions are to high accuracy independent of the choice of the initial renormalization scale, and thus consistent with RG invariance. The PMC is also consistent with the renormalization scale-setting procedure for QED in the zero-color limit. The use of the PMC thus eliminates a serious systematic scale error in perturbative QCD predictions, greatly improving the precision of empirical tests of the Standard Model and their sensitivity to new physics.
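To see why the choice of renormalization scale matters numerically at fixed order, one can evaluate the one-loop running coupling at a few different scales. The snippet below uses the textbook one-loop formula with nf = 5 and alpha_s(MZ) = 0.118 as reference values (flavour thresholds and higher loops are ignored); it illustrates the size of the ambiguity, not any particular scale-setting prescription such as the PMC.

    import numpy as np

    # One-loop running: alpha_s(mu) = alpha_ref / (1 + b0*alpha_ref*ln(mu^2/mu_ref^2)).
    def alpha_s(mu, alpha_ref=0.118, mu_ref=91.1876, nf=5):
        b0 = (33.0 - 2.0 * nf) / (12.0 * np.pi)
        return alpha_ref / (1.0 + b0 * alpha_ref * np.log(mu ** 2 / mu_ref ** 2))

    for mu in (10.0, 45.0, 91.2, 200.0):   # renormalization scales in GeV
        print(f"alpha_s({mu:6.1f} GeV) = {alpha_s(mu):.4f}")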
Targeted ENO schemes with tailored resolution property for hyperbolic conservation laws

    NASA Astrophysics Data System (ADS)

    Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2017-11-01

    In this paper, we extend the range of targeted ENO (TENO) schemes (Fu et al. (2016) [18]) by proposing an eighth-order TENO8 scheme. A general formulation to construct the high-order undivided difference τK within the weighting strategy is proposed. With the underlying scale-separation strategy, sixth-order accuracy for τK in smooth solution regions is designed for good performance and robustness. Furthermore, a unified framework to optimize independently the dispersion and dissipation properties of high-order finite-difference schemes is proposed. The new framework enables tailoring of dispersion and dissipation as functions of wavenumber. The optimal linear scheme has minimum dispersion error and a dissipation error that satisfies a dispersion-dissipation relation. Employing the optimal linear scheme, a sixth-order TENO8-opt scheme is constructed. A set of benchmark cases involving strong discontinuities and broadband fluctuations is computed to demonstrate the high-resolution properties of the new schemes.

On basis set superposition error corrected stabilization energies for large n-body clusters.

    PubMed

    Walczak, Katarzyna; Friedrich, Joachim; Dolg, Michael

    2011-10-07

    In this contribution, we propose an approximate basis set superposition error (BSSE) correction scheme for the site-site function counterpoise and for the Valiron-Mayer function counterpoise correction of second order to account for the basis set superposition error in clusters with a large number of subunits. The accuracy of the proposed scheme has been investigated for a series of water clusters at the CCSD(T), CCSD, MP2, and self-consistent field levels of theory using Dunning's correlation-consistent basis sets. The BSSE-corrected stabilization energies for a series of water clusters are presented. A study regarding the possible savings with respect to computational resources has been carried out, as well as a monitoring of the basis set dependence of the approximate BSSE corrections. © 2011 American Institute of Physics.
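The elementary operation that the approximate site-site and VMFC(2) schemes above extend to many subunits is the Boys-Bernardi counterpoise correction for a dimer. The sketch below only fixes the bookkeeping; `energy(fragment, basis)` stands for a hypothetical electronic-structure call in which the partner's basis functions are kept as ghost functions.

    # Counterpoise-corrected interaction energy of a dimer AB.
    def counterpoise_interaction_energy(energy, monomer_a, monomer_b, dimer_basis):
        e_ab = energy(monomer_a + monomer_b, dimer_basis)  # dimer in the dimer basis
        e_a = energy(monomer_a, dimer_basis)               # monomer A with ghost functions of B
        e_b = energy(monomer_b, dimer_basis)               # monomer B with ghost functions of A
        return e_ab - e_a - e_b                            # BSSE-corrected interaction energy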
Narrowing the error in electron correlation calculations by basis set re-hierarchization and use of the unified singlet and triplet electron-pair extrapolation scheme: Application to a test set of 106 systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Varandas, A. J. C., E-mail: varandas@uc.pt; Departamento de Física, Universidade Federal do Espírito Santo, 29075-910 Vitória; Pansini, F. N. N.

    2014-12-14

    A method previously suggested for calculating the correlation energy at the complete one-electron basis set limit by reassignment of the basis hierarchical numbers and use of the unified singlet- and triplet-pair extrapolation scheme is applied to a test set of 106 systems, some with up to 48 electrons. The approach is utilized to obtain extrapolated correlation energies from raw values calculated with second-order Møller-Plesset perturbation theory and the coupled-cluster singles and doubles method, some of the latter also with perturbative triples corrections. The calculated correlation energies have also been used to predict atomization energies within an additive scheme. Good agreement is obtained with the best available estimates even when the (d, t) pair of hierarchical numbers is utilized to perform the extrapolations. This conceivably justifies that there is no strong reason to exclude double-zeta energies from extrapolations, especially if the basis is calibrated to comply with the theoretical model.

Event-triggered attitude control of spacecraft

    NASA Astrophysics Data System (ADS)

    Wu, Baolin; Shen, Qiang; Cao, Xibin

    2018-02-01

    The problem of spacecraft attitude stabilization with limited communication and external disturbances is investigated based on an event-triggered control scheme. In the proposed scheme, information on attitude and control torque only needs to be transmitted at discrete triggering times, when a defined measurement error exceeds a state-dependent threshold. The proposed control scheme not only guarantees that spacecraft attitude control errors converge toward a small invariant set containing the origin, but also ensures that there is no accumulation of triggering instants. The performance of the proposed control scheme is demonstrated through numerical simulation.
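The heart of such an event-triggered scheme is the triggering rule itself: transmit, and update the control torque, only when the measurement error since the last transmission exceeds a state-dependent threshold. The rule sketched below, with threshold sigma*||x|| + eps, is a common choice assumed here for illustration; the record does not specify the exact form, and the strictly positive eps is what rules out an accumulation of triggering instants.

    import numpy as np

    def should_transmit(x, x_last_sent, sigma=0.1, eps=1e-3):
        e = np.linalg.norm(x - x_last_sent)            # measurement error since the last event
        return e >= sigma * np.linalg.norm(x) + eps    # state-dependent threshold

    # Inside the attitude control loop (attitude_controller is hypothetical):
    #   if should_transmit(x, x_last_sent):
    #       x_last_sent = x.copy()
    #       u = attitude_controller(x)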
An inference engine for embedded diagnostic systems

    NASA Technical Reports Server (NTRS)

    Fox, Barry R.; Brewster, Larry T.

    1987-01-01

    The implementation of an inference engine for embedded diagnostic systems is described. The system consists of two distinct parts. The first is an off-line compiler which accepts a propositional logic statement of the relationship between facts and conclusions and produces the data structures required by the on-line inference engine. The second part consists of the inference engine and interface routines, which accept assertions of fact and return the conclusions that necessarily follow. Given a set of assertions, it will generate exactly the conclusions which logically follow. At the same time, it will detect any inconsistencies which may propagate from an inconsistent set of assertions or a poorly formulated set of rules. The memory requirements are fixed and the worst-case execution times are bounded at compile time. The data structures and inference algorithms are very simple and well understood, and they are described in detail. The system has been implemented in Lisp, Pascal, and Modula-2.

Cosimulation of embedded system using RTOS software simulator

    NASA Astrophysics Data System (ADS)

    Wang, Shihao; Duan, Zhigang; Liu, Mingye

    2003-09-01

    Embedded system design often employs co-simulation to verify a system's function; one widely used software verification tool is the instruction set simulator (ISS). As a full functional model of the target CPU, an ISS interprets the instructions of the embedded software step by step, which is usually time-consuming since it simulates at a low level. Hence the ISS often becomes the bottleneck of co-simulation in a complicated system. In this paper, a new software verification tool, the RTOS software simulator (RSS), is presented. The mechanism of its operation is described in full detail. In the RSS method, the RTOS API is extended and a hardware simulator driver is adopted to handle data exchange and synchronization between the two simulators.
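The speed argument above is easiest to see from what an instruction set simulator has to do: fetch, decode and interpret every target instruction on the host. The toy loop below, with an invented three-register ISA, illustrates that per-instruction overhead; it is not related to any specific ISS, RSS or RTOS API.

    # Toy instruction-set-simulator loop (fetch / decode / execute per instruction).
    def run_iss(program, max_steps=1000):
        regs = {"r0": 0, "r1": 0, "r2": 0}
        pc = 0
        for _ in range(max_steps):
            if pc >= len(program):
                break
            op, dst, src = program[pc]        # fetch + decode
            if op == "li":                    # load immediate
                regs[dst] = src
            elif op == "add":                 # add register to register
                regs[dst] += regs[src]
            pc += 1                           # advance the program counter
        return regs

    print(run_iss([("li", "r0", 5), ("li", "r1", 7), ("add", "r0", "r1")]))
    # {'r0': 12, 'r1': 7, 'r2': 0}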