Sample records for simple straightforward method

  1. A Simple Estimation Method for Aggregate Government Outsourcing

    ERIC Educational Resources Information Center

    Minicucci, Stephen; Donahue, John D.

    2004-01-01

    The scholarly and popular debate on the delegation to the private sector of governmental tasks rests on an inadequate empirical foundation, as no systematic data are collected on direct versus indirect service delivery. We offer a simple method for approximating levels of service outsourcing, based on relatively straightforward combinations of and…

  2. In-situ polymerization PLOT columns I: divinylbenzene

    NASA Technical Reports Server (NTRS)

    Shen, T. C.

    1992-01-01

A novel method for preparation of porous-layer open-tubular (PLOT) columns is described. The method involves a simple, reproducible, and straightforward in-situ polymerization of monomer directly on the metal tube.

  3. A Simple Spreadsheet Program for the Calculation of Lattice-Site Distributions

    ERIC Educational Resources Information Center

    McCaffrey, John G.

    2009-01-01

A simple spreadsheet program is presented that can be used by undergraduate students to calculate the lattice-site distributions in solids. A major strength of the method is the natural way in which the correct number of ions or atoms is present, or absent, at specific lattice distances. The expanding-cube method utilized is straightforward to…
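
The expanding-cube counting is easy to sketch in code. A minimal illustration for a simple cubic lattice (the function name and the shell cutoff are ours, not taken from the paper's spreadsheet):

```python
from collections import Counter

def lattice_site_distribution(n):
    """Count lattice sites of a simple cubic lattice at each squared
    distance from the origin by enumerating an expanding cube of
    half-width n; shells with radius <= n lie fully inside the cube."""
    counts = Counter()
    for x in range(-n, n + 1):
        for y in range(-n, n + 1):
            for z in range(-n, n + 1):
                d2 = x * x + y * y + z * z
                if 0 < d2 <= n * n:
                    counts[d2] += 1
    return dict(sorted(counts.items()))

shells = lattice_site_distribution(4)
print(shells[1], shells[2], shells[3])  # 6 12 8
```

This reproduces the familiar coordination numbers of a simple cubic lattice: 6 nearest, 12 next-nearest and 8 third-nearest neighbours.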

  4. Energy Expansion for the Period of Anharmonic Oscillators by the Method of Lindstedt-Poincare

    ERIC Educational Resources Information Center

    Fernandez, Francisco M.

    2004-01-01

    A simple, straightforward and efficient method is proposed for the calculation of the period of anharmonic oscillators as an energy series. The approach is based on perturbation theory and the method of Lindstedt-Poincare.
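
As a generic illustration of the Lindstedt-Poincare idea (a textbook sketch of the Duffing oscillator, not the paper's own derivation): time is stretched as τ = ωt and both the solution and the frequency are expanded,

```latex
\ddot{x} + \omega_0^2 x + \epsilon x^3 = 0, \qquad
x = x_0 + \epsilon x_1 + \cdots, \qquad
\omega = \omega_0 + \epsilon \omega_1 + \epsilon^2 \omega_2 + \cdots
```

Choosing each ω_k to cancel the secular (unbounded) terms order by order gives, for oscillation amplitude a,

```latex
\omega = \omega_0 \left( 1 + \frac{3 \epsilon a^2}{8 \omega_0^2}
       - \frac{21 \epsilon^2 a^4}{256 \omega_0^4} + \cdots \right),
\qquad T = \frac{2\pi}{\omega}
```

and inverting T = 2π/ω yields the period as a series, which can then be re-expressed in terms of the energy.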

  5. Ion beam induced 18F-radiofluorination: straightforward synthesis of gaseous radiotracers for the assessment of regional lung ventilation using positron emission tomography.

    PubMed

    Gómez-Vallejo, V; Lekuona, A; Baz, Z; Szczupak, B; Cossío, U; Llop, J

    2016-09-29

A simple, straightforward and efficient method for the synthesis of [18F]CF4 and [18F]SF6 based on an ion beam-induced isotopic exchange reaction is presented. Positron emission tomography ventilation studies in rodents using [18F]CF4 showed a uniform distribution of the radiofluorinated gas within the lungs and rapid elimination after discontinuation of the administration.

  6. Investigating an Aerial Image First

    ERIC Educational Resources Information Center

    Wyrembeck, Edward P.; Elmer, Jeffrey S.

    2006-01-01

    Most introductory optics lab activities begin with students locating the real image formed by a converging lens. The method is simple and straightforward--students move a screen back and forth until the real image is in sharp focus on the screen. Students then draw a simple ray diagram to explain the observation using only two or three special…
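
The relation the screen measurement verifies is the thin-lens equation, 1/f = 1/d_o + 1/d_i. A minimal numerical sketch (the focal length and object distance are illustrative):

```python
def image_distance(f, d_o):
    """Solve the thin-lens equation 1/f = 1/d_o + 1/d_i for d_i."""
    if d_o == f:
        raise ValueError("object at the focal point: image at infinity")
    return 1.0 / (1.0 / f - 1.0 / d_o)

d_i = image_distance(10.0, 30.0)  # f = 10 cm, object 30 cm from the lens
m = -d_i / 30.0                   # magnification; negative = inverted image
print(d_i, m)  # 15.0 -0.5
```

A real, inverted, half-size image forms 15 cm behind the lens, which is where the students' screen comes into sharp focus.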

  7. Simple method to detect triacylglycerol biosynthesis in a yeast-based recombinant system

USDA-ARS's Scientific Manuscript database

    Standard methods to quantify the activity of triacylglycerol (TAG) synthesizing enzymes DGAT and PDAT (TAG-SE) require a sensitive but rather arduous laboratory assay based on radio-labeled substrates. Here we describe two straightforward methods to detect TAG production in baker’s yeast Saccharomyc...

  8. Wronskian Method for Bound States

    ERIC Educational Resources Information Center

    Fernandez, Francisco M.

    2011-01-01

    We propose a simple and straightforward method based on Wronskians for the calculation of bound-state energies and wavefunctions of one-dimensional quantum-mechanical problems. We explicitly discuss the asymptotic behaviour of the wavefunction and show that the allowed energies make the divergent part vanish. As illustrative examples we consider…

  9. Estimating p-n Diode Bulk Parameters, Bandgap Energy and Absolute Zero by a Simple Experiment

    ERIC Educational Resources Information Center

    Ocaya, R. O.; Dejene, F. B.

    2007-01-01

    This paper presents a straightforward but interesting experimental method for p-n diode characterization. The method differs substantially from many approaches in diode characterization by offering much tighter control over the temperature and current variables. The method allows the determination of important diode constants such as temperature…

  10. Breeding for phytonutrient content; examples from watermelon

USDA-ARS's Scientific Manuscript database

Breeding for high phytonutrient fruits and vegetables can be a fairly straightforward endeavor when the compounds of interest produce a visible effect or the methods for quantifying the compounds are simple and inexpensive. Lycopene in tomatoes and watermelon is one such compound, since the amount of r...

  11. Auto-programmable impulse neural circuits

    NASA Technical Reports Server (NTRS)

    Watula, D.; Meador, J.

    1990-01-01

    Impulse neural networks use pulse trains to communicate neuron activation levels. Impulse neural circuits emulate natural neurons at a more detailed level than that typically employed by contemporary neural network implementation methods. An impulse neural circuit which realizes short term memory dynamics is presented. The operation of that circuit is then characterized in terms of pulse frequency modulated signals. Both fixed and programmable synapse circuits for realizing long term memory are also described. The implementation of a simple and useful unsupervised learning law is then presented. The implementation of a differential Hebbian learning rule for a specific mean-frequency signal interpretation is shown to have a straightforward implementation using digital combinational logic with a variation of a previously developed programmable synapse circuit. This circuit is expected to be exploited for simple and straightforward implementation of future auto-adaptive neural circuits.

  12. Virtual Ray Tracing as a Conceptual Tool for Image Formation in Mirrors and Lenses

    ERIC Educational Resources Information Center

    Heikkinen, Lasse; Savinainen, Antti; Saarelainen, Markku

    2016-01-01

    The ray tracing method is widely used in teaching geometrical optics at the upper secondary and university levels. However, using simple and straightforward examples may lead to a situation in which students use the model of ray tracing too narrowly. Previous studies show that students seem to use the ray tracing method too concretely instead of…

  13. A Review of Scoring Algorithms for Ability and Aptitude Tests.

    ERIC Educational Resources Information Center

    Chevalier, Shirley A.

    In conventional practice, most educators and educational researchers score cognitive tests using a dichotomous right-wrong scoring system. Although simple and straightforward, this method does not take into consideration other factors, such as partial knowledge or guessing tendencies and abilities. This paper discusses alternative scoring models:…
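
The contrast between dichotomous and partial-credit scoring is easy to make concrete. A hypothetical sketch (the answer key and credit table are invented for illustration):

```python
def dichotomous_score(responses, key):
    """Right/wrong scoring: one point per exact match with the key."""
    return sum(r == k for r, k in zip(responses, key))

def partial_credit_score(responses, key, credit):
    """Polytomous scoring: a credit table maps (given, correct) pairs to
    fractional credit for partially correct answers; exact matches score 1."""
    return sum(credit.get((r, k), 1.0 if r == k else 0.0)
               for r, k in zip(responses, key))

key = ["A", "B", "C"]
credit = {("D", "B"): 0.5}  # option D shows partial knowledge on item 2
print(dichotomous_score(["A", "D", "C"], key))             # 2
print(partial_credit_score(["A", "D", "C"], key, credit))  # 2.5
```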

  14. A simple, sensitive graphical method of treating thermogravimetric analysis data

    Treesearch

    Abraham Broido

    1969-01-01

Thermogravimetric Analysis (TGA) is finding increasing utility in investigations of the pyrolysis and combustion behavior of materials. Although a theoretical treatment of the TGA behavior of an idealized reaction is relatively straightforward, major complications can be introduced when the reactions are complex, e.g., in the pyrolysis of cellulose, and when...

  15. Summary of methods for calculating dynamic lateral stability and response and for estimating aerodynamic stability derivatives

    NASA Technical Reports Server (NTRS)

    Campbell, John P; Mckinney, Marion O

    1952-01-01

    A summary of methods for making dynamic lateral stability and response calculations and for estimating the aerodynamic stability derivatives required for use in these calculations is presented. The processes of performing calculations of the time histories of lateral motions, of the period and damping of these motions, and of the lateral stability boundaries are presented as a series of simple straightforward steps. Existing methods for estimating the stability derivatives are summarized and, in some cases, simple new empirical formulas are presented. Detailed estimation methods are presented for low-subsonic-speed conditions but only a brief discussion and a list of references are given for transonic and supersonic speed conditions.

  16. New method for the rapid extraction of natural products: efficient isolation of shikimic acid from star anise.

    PubMed

    Just, Jeremy; Deans, Bianca J; Olivier, Wesley J; Paull, Brett; Bissember, Alex C; Smith, Jason A

    2015-05-15

    A new, practical, rapid, and high-yielding process for the pressurized hot water extraction (PHWE) of multigram quantities of shikimic acid from star anise (Illicium verum) using an unmodified household espresso machine has been developed. This operationally simple and inexpensive method enables the efficient and straightforward isolation of shikimic acid and the facile preparation of a range of its synthetic derivatives.

  17. Dehydration Polymerization for Poly(hetero)arene Conjugated Polymers.

    PubMed

    Mirabal, Rafael A; Vanderzwet, Luke; Abuadas, Sara; Emmett, Michael R; Schipper, Derek

    2018-02-18

The lack of scalable and sustainable methods to prepare conjugated polymers belies their importance in many enabling technologies. Accessing high-performance poly(hetero)arene conjugated polymers by dehydration has remained an unsolved problem in synthetic chemistry and has historically required transition-metal coupling reactions. Herein, we report a dehydration method that allows access to conjugated heterocyclic materials. Using the technique, we have prepared a series of small molecules and polymers. The reaction avoids transition metals, proceeds at room temperature, requires only a simple base as reactant, and produces water as the sole by-product. The dehydration reaction is technically simple and provides a sustainable and straightforward method to prepare conjugated heteroarene motifs. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Integrated Hamiltonian sampling: a simple and versatile method for free energy simulations and conformational sampling.

    PubMed

    Mori, Toshifumi; Hamers, Robert J; Pedersen, Joel A; Cui, Qiang

    2014-07-17

Motivated by specific applications and the recent work of Gao and co-workers on integrated tempering sampling (ITS), we have developed a novel sampling approach referred to as integrated Hamiltonian sampling (IHS). IHS is straightforward to implement and complementary to existing methods for free energy simulation and enhanced configurational sampling. The method carries out sampling using an effective Hamiltonian constructed by integrating the Boltzmann distributions of a series of Hamiltonians. By judiciously selecting the weights of the different Hamiltonians, one achieves rapid transitions among the energy landscapes that underlie different Hamiltonians and therefore an efficient sampling of important regions of the conformational space. Along this line, IHS shares similar motivations with the enveloping distribution sampling (EDS) approach of van Gunsteren and co-workers, although the ways that distributions of different Hamiltonians are integrated are rather different in IHS and EDS. Specifically, we report efficient ways for determining the weights using a combination of histogram flattening and weighted histogram analysis approaches, which make it straightforward to include many end-state and intermediate Hamiltonians in IHS so as to enhance its flexibility. Using several relatively simple condensed-phase examples, we illustrate the implementation and application of IHS as well as potential developments for the near future. The relation of IHS to several related sampling methods such as Hamiltonian replica exchange molecular dynamics and λ-dynamics is also briefly discussed.
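
The core construction of IHS, an effective Hamiltonian whose Boltzmann factor is a weighted sum of component Boltzmann factors, can be sketched numerically (the energies and weights below are invented; this is not the authors' code):

```python
import math

def effective_energy(energies, weights, beta=1.0):
    """exp(-beta * U_eff) = sum_k w_k * exp(-beta * U_k), evaluated at one
    configuration, using a log-sum-exp for numerical stability."""
    m = max(-beta * u for u in energies)
    s = sum(w * math.exp(-beta * u - m) for w, u in zip(weights, energies))
    return -(m + math.log(s)) / beta

# two hypothetical end-state energies at the same configuration
u_eff = effective_energy([0.0, 5.0], [0.5, 0.5], beta=1.0)
print(u_eff)  # dominated by the lower-energy Hamiltonian
```

Because the lowest-energy Hamiltonian dominates the sum wherever it is deepest, the effective surface tracks whichever component landscape is lowest, which is what drives the rapid transitions described in the abstract.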

  19. Generation of multicellular tumor spheroids by the hanging-drop method.

    PubMed

    Timmins, Nicholas E; Nielsen, Lars K

    2007-01-01

    Owing to their in vivo-like characteristics, three-dimensional (3D) multicellular tumor spheroid (MCTS) cultures are gaining increasing popularity as an in vitro model of tumors. A straightforward and simple approach to the cultivation of these MCTS is the hanging-drop method. Cells are suspended in droplets of medium, where they develop into coherent 3D aggregates and are readily accessed for analysis. In addition to being simple, the method eliminates surface interactions with an underlying substratum (e.g., polystyrene plastic or agarose), requires only a low number of starting cells, and is highly reproducible. This method has also been applied to the co-cultivation of mixed cell populations, including the co-cultivation of endothelial cells and tumor cells as a model of early tumor angiogenesis.

  20. A simplified design of the staggered herringbone micromixer for practical applications

    PubMed Central

    Du, Yan; Zhang, Zhiyi; Yim, ChaeHo; Lin, Min; Cao, Xudong

    2010-01-01

We demonstrated a simple method for the device design of a staggered herringbone micromixer (SHM) using numerical simulation. By correlating the simulated concentrations with channel length, we obtained a series of concentration versus channel length profiles, and used mixing completion length Lm as the only parameter to evaluate the performance of device structure on mixing. Fluorescence quenching experiments were subsequently conducted to verify the optimized SHM structure for a specific application. Good agreement was found between the optimization and the experimental data. Since Lm is a straightforward, easily defined and calculated parameter for characterization of mixing performance, this method for designing micromixers is simple and effective for practical applications. PMID:20697584

  1. A simplified design of the staggered herringbone micromixer for practical applications.

    PubMed

    Du, Yan; Zhang, Zhiyi; Yim, Chaeho; Lin, Min; Cao, Xudong

    2010-05-07

We demonstrated a simple method for the device design of a staggered herringbone micromixer (SHM) using numerical simulation. By correlating the simulated concentrations with channel length, we obtained a series of concentration versus channel length profiles, and used mixing completion length L(m) as the only parameter to evaluate the performance of device structure on mixing. Fluorescence quenching experiments were subsequently conducted to verify the optimized SHM structure for a specific application. Good agreement was found between the optimization and the experimental data. Since L(m) is a straightforward, easily defined and calculated parameter for characterization of mixing performance, this method for designing micromixers is simple and effective for practical applications.

  2. PhyLIS: a simple GNU/Linux distribution for phylogenetics and phyloinformatics.

    PubMed

    Thomson, Robert C

    2009-07-30

    PhyLIS is a free GNU/Linux distribution that is designed to provide a simple, standardized platform for phylogenetic and phyloinformatic analysis. The operating system incorporates most commonly used phylogenetic software, which has been pre-compiled and pre-configured, allowing for straightforward application of phylogenetic methods and development of phyloinformatic pipelines in a stable Linux environment. The software is distributed as a live CD and can be installed directly or run from the CD without making changes to the computer. PhyLIS is available for free at http://www.eve.ucdavis.edu/rcthomson/phylis/.

  3. PhyLIS: A Simple GNU/Linux Distribution for Phylogenetics and Phyloinformatics

    PubMed Central

    Thomson, Robert C.

    2009-01-01

    PhyLIS is a free GNU/Linux distribution that is designed to provide a simple, standardized platform for phylogenetic and phyloinformatic analysis. The operating system incorporates most commonly used phylogenetic software, which has been pre-compiled and pre-configured, allowing for straightforward application of phylogenetic methods and development of phyloinformatic pipelines in a stable Linux environment. The software is distributed as a live CD and can be installed directly or run from the CD without making changes to the computer. PhyLIS is available for free at http://www.eve.ucdavis.edu/rcthomson/phylis/. PMID:19812729

  4. Multiplicity-dependent and nonbinomial efficiency corrections for particle number cumulants

    NASA Astrophysics Data System (ADS)

    Bzdak, Adam; Holzmann, Romain; Koch, Volker

    2016-12-01

In this article we extend previous work on efficiency corrections for cumulant measurements [Bzdak and Koch, Phys. Rev. C 86, 044904 (2012), 10.1103/PhysRevC.86.044904; Phys. Rev. C 91, 027901 (2015), 10.1103/PhysRevC.91.027901]. We will discuss the limitations of the methods presented in these papers. Specifically we will consider multiplicity-dependent efficiencies as well as nonbinomial efficiency distributions. We will discuss the simplest and most straightforward methods to implement those corrections.

  5. Multiplicity-dependent and nonbinomial efficiency corrections for particle number cumulants

    DOE PAGES

    Bzdak, Adam; Holzmann, Romain; Koch, Volker

    2016-12-19

Here, we extend previous work on efficiency corrections for cumulant measurements [Bzdak and Koch, Phys. Rev. C 86, 044904 (2012), 10.1103/PhysRevC.86.044904; Phys. Rev. C 91, 027901 (2015), 10.1103/PhysRevC.91.027901]. We will then discuss the limitations of the methods presented in these papers. Specifically we will consider multiplicity-dependent efficiencies as well as nonbinomial efficiency distributions. We will discuss the simplest and most straightforward methods to implement those corrections.

  6. Microrheology with optical tweezers: measuring the relative viscosity of solutions 'at a glance'.

    PubMed

    Tassieri, Manlio; Del Giudice, Francesco; Robertson, Emma J; Jain, Neena; Fries, Bettina; Wilson, Rab; Glidle, Andrew; Greco, Francesco; Netti, Paolo Antonio; Maffettone, Pier Luca; Bicanic, Tihana; Cooper, Jonathan M

    2015-03-06

    We present a straightforward method for measuring the relative viscosity of fluids via a simple graphical analysis of the normalised position autocorrelation function of an optically trapped bead, without the need of embarking on laborious calculations. The advantages of the proposed microrheology method are evident when it is adopted for measurements of materials whose availability is limited, such as those involved in biological studies. The method has been validated by direct comparison with conventional bulk rheology methods, and has been applied both to characterise synthetic linear polyelectrolytes solutions and to study biomedical samples.

  7. Microrheology with Optical Tweezers: Measuring the relative viscosity of solutions ‘at a glance'

    PubMed Central

    Tassieri, Manlio; Giudice, Francesco Del; Robertson, Emma J.; Jain, Neena; Fries, Bettina; Wilson, Rab; Glidle, Andrew; Greco, Francesco; Netti, Paolo Antonio; Maffettone, Pier Luca; Bicanic, Tihana; Cooper, Jonathan M.

    2015-01-01

    We present a straightforward method for measuring the relative viscosity of fluids via a simple graphical analysis of the normalised position autocorrelation function of an optically trapped bead, without the need of embarking on laborious calculations. The advantages of the proposed microrheology method are evident when it is adopted for measurements of materials whose availability is limited, such as those involved in biological studies. The method has been validated by direct comparison with conventional bulk rheology methods, and has been applied both to characterise synthetic linear polyelectrolytes solutions and to study biomedical samples. PMID:25743468

  8. Simple Perturbation Example for Quantum Chemistry.

    ERIC Educational Resources Information Center

    Goodfriend, P. L.

    1985-01-01

    Presents a simple example that illustrates various aspects of the Rayleigh-Schrodinger perturbation theory. The example is a particularly good one because it is straightforward and can be compared with both the exact solution and with experimental data. (JN)

  9. A not-so-funny thing happened on the way to relicensing the Edwards Dam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ayer, F.J.; Isaacson, M.

    1995-12-31

What started out as a seemingly straightforward and simple exercise, obtaining a new FERC license for the Edwards Dam in Augusta, Maine, turned out to be anything but straightforward and far from simple. This article tells the story of one of the more interesting and possibly precedent-setting cases in the "class of '93" and is presented in three sections: (1) the history of the Edwards Dam and the FERC regulatory process through the spring of 1995; (2) Edwards' response to the dam removal campaign; and (3) recommendations for FERC licensees threatened by dam removal during relicensing.

  10. Straightforward fabrication of black nano silica dusting powder for latent fingerprint imaging

    NASA Astrophysics Data System (ADS)

    Komalasari, Isna; Krismastuti, Fransiska Sri Herwahyu; Elishian, Christine; Handayani, Eka Mardika; Nugraha, Willy Cahya; Ketrin, Rosi

    2017-11-01

Imaging of latent fingerprint patterns (aka fingermarks) is one of the most important and accurate detection methods in forensic investigation because of the characteristic nature of individual fingerprints. This detection technique relies on the mechanical adherence of fingerprint powder to the moisture and oily components of the skin left on the surface. The particle size of the fingerprint powder is one of the critical parameters for obtaining an excellent fingerprint image. This study develops a simple, cheap and straightforward method to fabricate nano-sized black dusting fingerprint powder based on nanosilica and applies the powder to visualize latent fingerprints. The nanostructured silica was prepared from tetraethoxysilane (TEOS) and then modified with nanocarbon, methylene blue and sodium acetate to color the powder. Finally, as a proof-of-principle, the ability of this black nanosilica dusting powder to image latent fingerprints is successfully demonstrated, and the results show that this fingerprint powder provides a clearer fingerprint pattern than the commercial one, highlighting the potential application of nanostructured silica in forensic science.

  11. A simple implementation of a normal mixture approach to differential gene expression in multiclass microarrays.

    PubMed

    McLachlan, G J; Bean, R W; Jones, L Ben-Tovim

    2006-07-01

An important problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. We provide a straightforward and easily implemented method for estimating the posterior probability that an individual gene is null. The problem can be expressed in a two-component mixture framework, using an empirical Bayes approach. Current methods of implementing this approach either have some limitations due to the minimal assumptions made or, with more specific assumptions, are computationally intensive. By converting the value of the test statistic used to test the significance of each gene to a z-score, we propose a simple two-component normal mixture that adequately models the distribution of this score. The usefulness of our approach is demonstrated on three real datasets.
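
A minimal sketch of the posterior-null computation under a two-component z-score mixture (the mixture parameters here are hypothetical; in the paper they would come from the empirical-Bayes fit):

```python
import math

def normal_pdf(z, mu=0.0, sigma=1.0):
    """Density of N(mu, sigma**2) at z."""
    return math.exp(-0.5 * ((z - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def posterior_null(z, pi0, mu1, sigma1):
    """Posterior probability that a gene with z-score z is null under
    f(z) = pi0 * N(0, 1) + (1 - pi0) * N(mu1, sigma1**2)."""
    f0 = pi0 * normal_pdf(z)
    f1 = (1.0 - pi0) * normal_pdf(z, mu1, sigma1)
    return f0 / (f0 + f1)

# hypothetical fit: 90% of genes null, non-null scores centred at z = 3
print(posterior_null(0.5, 0.9, 3.0, 1.0))  # near 1: gene looks null
print(posterior_null(3.5, 0.9, 3.0, 1.0))  # small: likely differentially expressed
```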

  12. A simple recipe for setting up the flux equations of cyclic and linear reaction schemes of ion transport with a high number of states: The arrow scheme.

    PubMed

    Hansen, Ulf-Peter; Rauh, Oliver; Schroeder, Indra

    2016-01-01

The calculation of flux equations or current-voltage relationships in reaction kinetic models with a high number of states can be very cumbersome. Here, a recipe based on an arrow scheme is presented, which yields straightforward access to the minimum form of the flux equations and the occupation probabilities of the involved states in cyclic and linear reaction schemes. This is extremely simple for cyclic schemes without branches. If branches are involved, the effort of setting up the equations is somewhat higher; however, a straightforward recipe making use of so-called reserve factors is provided for implementing the branches into the cyclic scheme, enabling a simple treatment of such cases as well.
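
For comparison with the closed-form recipe, the steady-state occupation probabilities of a small scheme can also be obtained numerically from the master equation. A sketch for a hypothetical cyclic three-state scheme (all rate constants invented):

```python
# k[i][j] is the rate constant for the transition i -> j (0 = no arrow)
k = [[0.0, 2.0, 1.0],
     [1.0, 0.0, 3.0],
     [2.0, 1.0, 0.0]]

def steady_state(k, dt=0.01, steps=20000):
    """Relax the master equation
    dp_j/dt = sum_i k[i][j]*p_i - p_j * sum_m k[j][m]
    with explicit Euler steps; the fixed point of the iteration is the
    exact steady state, since total probability is conserved."""
    n = len(k)
    p = [1.0 / n] * n
    for _ in range(steps):
        p = [p[j] + dt * (sum(k[i][j] * p[i] for i in range(n))
                          - p[j] * sum(k[j]))
             for j in range(n)]
    s = sum(p)
    return [x / s for x in p]

p = steady_state(k)
print(p)  # ≈ [0.346, 0.269, 0.385]; occupations sum to 1
```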

  13. A simple recipe for setting up the flux equations of cyclic and linear reaction schemes of ion transport with a high number of states: The arrow scheme

    PubMed Central

    Hansen, Ulf-Peter; Rauh, Oliver; Schroeder, Indra

    2016-01-01

The calculation of flux equations or current-voltage relationships in reaction kinetic models with a high number of states can be very cumbersome. Here, a recipe based on an arrow scheme is presented, which yields a straightforward access to the minimum form of the flux equations and the occupation probability of the involved states in cyclic and linear reaction schemes. This is extremely simple for cyclic schemes without branches. If branches are involved, the effort of setting up the equations is a little bit higher. However, also here a straightforward recipe making use of so-called reserve factors is provided for implementing the branches into the cyclic scheme, thus enabling also a simple treatment of such cases. PMID:26646356

  14. A simple method for processing data with least square method

    NASA Astrophysics Data System (ADS)

    Wang, Chunyan; Qi, Liqun; Chen, Yongxiang; Pang, Guangning

    2017-08-01

The least square method is widely used in data processing and error estimation. The mathematical method has become an essential technique for parameter estimation, data processing, regression analysis and experimental data fitting, and has become a criterion tool for statistical inference. In measurement data analysis, complex distributions are usually treated by the least-squares principle, i.e., matrices are used to solve for the final estimate and to improve its accuracy. In this paper, a new approach to carrying out the least-squares solution is presented; it is based on algebraic computation and is relatively straightforward and easy to understand. The practicability of this method is illustrated by a concrete example.
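
For a straight-line fit, the least-squares estimate the abstract refers to reduces to the familiar normal-equation formulas; a generic textbook sketch (not the authors' algebraic construction):

```python
def least_squares_line(xs, ys):
    """Fit y = a + b*x by minimising the sum of squared residuals;
    a and b come from the closed-form normal-equation solution."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

a, b = least_squares_line([0, 1, 2, 3], [1, 3, 5, 7])  # data on y = 1 + 2x
print(a, b)  # 1.0 2.0
```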

  15. Opportunity Cost: A Reexamination

    ERIC Educational Resources Information Center

    Parkin, Michael

    2016-01-01

    Is opportunity cost an ambiguous and arbitrary concept or a simple, straightforward, and fruitful one? This reexamination of opportunity cost addresses this question, and shows that opportunity cost is an ambiguous concept because "two" definitions are in widespread use. One of the definitions is indeed simple, fruitful, and one that…

  16. Simple method for the generation of multiple homogeneous field volumes inside the bore of superconducting magnets.

    PubMed

    Chou, Ching-Yu; Ferrage, Fabien; Aubert, Guy; Sakellariou, Dimitris

    2015-07-17

    Standard Magnetic Resonance magnets produce a single homogeneous field volume, where the analysis is performed. Nonetheless, several modern applications could benefit from the generation of multiple homogeneous field volumes along the axis and inside the bore of the magnet. In this communication, we propose a straightforward method using a combination of ring structures of permanent magnets in order to cancel the gradient of the stray field in a series of distinct volumes. These concepts were demonstrated numerically on an experimentally measured magnetic field profile. We discuss advantages and limitations of our method and present the key steps required for an experimental validation.

  17. Simple and rapid high-performance liquid chromatographic method for the determination of aspartame and its metabolites in foods.

    PubMed

    Gibbs, B F; Alli, I; Mulligan, C N

    1996-02-23

A method for the determination of aspartame (N-L-alpha-aspartyl-L-phenylalanine methyl ester) and its metabolites, applicable on a routine quality-assurance basis, is described. Liquid samples (diet Coke, 7-Up, Pepsi, etc.) were injected directly onto a mini-cartridge reversed-phase column on a high-performance liquid chromatographic system, whereas solid samples (Equal, hot chocolate powder, pudding, etc.) were extracted with water. Optimising chromatographic conditions resolved the components of interest within 12 min. The by-products were confirmed by mass spectrometry. Although the method was developed on a two-pump HPLC system fitted with a diode-array detector, it is straightforward and can be adapted to the simplest HPLC configuration. Using a single-piston pump (with damper), a fixed-wavelength detector and a recorder/integrator, the degradation products can be monitored as they decompose. The results obtained agreed with those of previously reported, more tedious methods. The method is simple, rapid, quantitative and does not involve complex, hazardous or toxic chemistry.

  18. A straightforward method for Vacuum-Ultraviolet flux measurements: The case of the hydrogen discharge lamp and implications for solid-phase actinometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fulvio, D., E-mail: daniele.fulvio@uni-jena.de, E-mail: dfu@oact.inaf.it; Brieva, A. C.; Jäger, C.

    2014-07-07

Vacuum-Ultraviolet (VUV) radiation is responsible for the photo-processing of simple and complex molecules in several terrestrial and extraterrestrial environments. In the laboratory such radiation is commonly simulated by inexpensive and easy-to-use microwave-powered hydrogen discharge lamps. However, VUV flux measurements are not trivial, and the methods/devices typically used for this purpose, mainly actinometry and calibrated VUV silicon photodiodes, are either not very accurate or expensive, and lack general suitability to experimental setups. Here, we present a straightforward method for measuring the VUV photon flux based on the photoelectric effect and using a gold photodetector. This method is easily applicable to most experimental setups, bypasses the major problems of the other methods, and provides reliable flux measurements. As a case study, the method is applied to a microwave-powered hydrogen discharge lamp. In addition, the comparison of these flux measurements to those obtained by O2 actinometry experiments allows us to estimate the quantum yield (QY) values QY(122 nm) = 0.44 ± 0.16 and QY(160 nm) = 0.87 ± 0.30 for solid-phase O2 actinometry.

  19. Immunodiagnosis of childhood malignancies.

    PubMed

    Parham, D M; Holt, H

    1999-09-01

Immunodiagnosis using immunohistochemical techniques is currently the most commonly employed and readily available method of ancillary diagnosis in pediatric oncopathology. The methodology comprises relatively simple steps, based on straightforward biologic concepts, and the reagents used are generally well characterized and widely used. The principle of cancer immunodiagnosis is the determination of neoplastic lineage through detection of proteins typical of cell differentiation pathways. Methodology sensitivity varies and has become greater with each new generation of tests, but technical drawbacks should be considered to avoid excessive background or nonspecific results. Automated instrumentation offers a degree of accuracy and reproducibility not easily attainable by manual methods.

  20. New QCD sum rules based on canonical commutation relations

    NASA Astrophysics Data System (ADS)

    Hayata, Tomoya

    2012-04-01

    A new derivation of QCD sum rules by canonical commutators is developed. It is a simple and straightforward generalization of the Thomas-Reiche-Kuhn sum rule, based on the Kugo-Ojima operator formalism of a non-abelian gauge theory and a suitable subtraction of UV divergences. By applying the method to the vector and axial-vector currents in QCD, the exact Weinberg sum rules are examined. Vector-current sum rules and new fractional-power sum rules are also discussed.
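
    For orientation, the Thomas-Reiche-Kuhn sum rule that the abstract generalizes follows from a double commutator of the Hamiltonian with the position operator. The sketch below is the standard single-particle identity, not the paper's QCD result:

```latex
% double-commutator route to the Thomas-Reiche-Kuhn sum rule
% (single particle, \hat{H} = \hat{p}^2/2m + V(\hat{x}))
[\hat{H},\hat{x}] = -\frac{i\hbar}{m}\,\hat{p}
\qquad\Longrightarrow\qquad
[\hat{x},[\hat{H},\hat{x}]] = \frac{\hbar^{2}}{m}

% expanding the expectation value in energy eigenstates |n\rangle gives
\langle 0|[\hat{x},[\hat{H},\hat{x}]]|0\rangle
  = 2\sum_{n}\,(E_{n}-E_{0})\,\bigl|\langle n|\hat{x}|0\rangle\bigr|^{2}
  = \frac{\hbar^{2}}{m}
```

    Dividing by two yields the familiar statement that the energy-weighted sum of squared dipole matrix elements equals ħ²/2m.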

  1. On geodesics of the rotation group SO(3)

    NASA Astrophysics Data System (ADS)

    Novelia, Alyssa; O'Reilly, Oliver M.

    2015-11-01

    Geodesics on SO(3) are characterized by constant angular velocity motions and as great circles on a three-sphere. The former interpretation is widely used in optometry and the latter features in the interpolation of rotations in computer graphics. The simplicity of these two disparate interpretations belies the complexity of the corresponding rotations. Using a quaternion representation for a rotation, we present a simple proof of the equivalence of the aforementioned characterizations and a straightforward method to establish features of the corresponding rotations.
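
    The equivalence described above can be checked numerically: for a fixed axis, the constant angular velocity rotation and the great-circle (slerp) path between the same endpoint quaternions coincide. A minimal stdlib-only Python sketch (axis, angle, and parameter values are arbitrary illustrations):

```python
import math

def axis_angle_quat(axis, angle):
    # unit quaternion (w, x, y, z) for rotation by `angle` about unit `axis`
    s = math.sin(angle / 2)
    return (math.cos(angle / 2), axis[0] * s, axis[1] * s, axis[2] * s)

def slerp(q0, q1, t):
    # great-circle interpolation on the unit three-sphere
    dot = sum(a * b for a, b in zip(q0, q1))
    theta = math.acos(max(-1.0, min(1.0, dot)))
    if theta < 1e-12:
        return q1
    w0 = math.sin((1 - t) * theta) / math.sin(theta)
    w1 = math.sin(t * theta) / math.sin(theta)
    return tuple(w0 * a + w1 * b for a, b in zip(q0, q1))

identity = (1.0, 0.0, 0.0, 0.0)
axis = (0.0, 0.0, 1.0)
q1 = axis_angle_quat(axis, 1.2)          # rotate 1.2 rad about z

# constant-angular-velocity motion: the rotation angle grows linearly in t
t = 0.4
q_const = axis_angle_quat(axis, 1.2 * t)
q_slerp = slerp(identity, q1, t)
```

    Both paths give the same quaternion at every t, which is the equivalence the paper proves in general.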

  2. Tenax extraction as a simple approach to improve environmental risk assessments.

    PubMed

    Harwood, Amanda D; Nutile, Samuel A; Landrum, Peter F; Lydy, Michael J

    2015-07-01

    It is well documented that using exhaustive chemical extractions is not an effective means of assessing exposure of hydrophobic organic compounds in sediments and that bioavailability-based techniques are an improvement over traditional methods. One technique that has shown special promise as a method for assessing the bioavailability of hydrophobic organic compounds in sediment is the use of Tenax-extractable concentrations. A 6-h or 24-h single-point Tenax-extractable concentration correlates to both bioaccumulation and toxicity. This method has demonstrated effectiveness for several hydrophobic organic compounds in various organisms under both field and laboratory conditions. In addition, a Tenax bioaccumulation model was developed for multiple compounds relating 24-h Tenax-extractable concentrations to oligochaete tissue concentrations exposed in both the laboratory and field. This model has demonstrated predictive capacity for additional compounds and species. Use of Tenax-extractable concentrations to estimate exposure is rapid, simple, straightforward, and relatively inexpensive, as well as accurate. Therefore, this method would be an invaluable tool if implemented in risk assessments. © 2015 SETAC.

  3. Controlling the Universe

    ERIC Educational Resources Information Center

    Evanson, Nick

    2004-01-01

    Basic electronic devices have been used to great effect with console computer games. This paper looks at a range of devices, from the very simple, such as microswitches and potentiometers, up to the more complex Hall effect probe. There is a great deal of relatively straightforward use of simple devices in computer game systems, and having read…

  4. 50 Ways to Improve Student Behavior: Simple Solutions to Complex Challenges

    ERIC Educational Resources Information Center

    Breaux, Annette; Whitaker, Todd

    2010-01-01

    In a lively and engaging style, Annette Breaux and Todd Whitaker share 50 simple, straightforward techniques for improving student behavior and increasing student cooperation, participation, and achievement. Each practical, well-defined strategy can be applied in classrooms of all grade levels and subjects. Strategies include: (1) How to make…

  5. Star-shaped Polymers through Simple Wavelength-Selective Free-Radical Photopolymerization.

    PubMed

    Eibel, Anna; Fast, David E; Sattelkow, Jürgen; Zalibera, Michal; Wang, Jieping; Huber, Alex; Müller, Georgina; Neshchadin, Dmytro; Dietliker, Kurt; Plank, Harald; Grützmacher, Hansjörg; Gescheidt, Georg

    2017-11-06

    Star-shaped polymers represent highly desired materials in nanotechnology and life sciences, including biomedical applications (e.g., diagnostic imaging, tissue engineering, and targeted drug delivery). Herein, we report a straightforward synthesis of wavelength-selective multifunctional photoinitiators (PIs) that contain a bisacylphosphane oxide (BAPO) group and an α-hydroxy ketone moiety within one molecule. By using three different wavelengths, these photoactive groups can be selectively addressed and activated, thereby allowing the synthesis of ABC-type miktoarm star polymers through a simple, highly selective, and robust free-radical polymerization method. The photochemistry of these new initiators and the feasibility of this concept were investigated in unprecedented detail by using various spectroscopic techniques. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Simple formalism for efficient derivatives and multi-determinant expansions in quantum Monte Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Filippi, Claudia, E-mail: c.filippi@utwente.nl; Assaraf, Roland, E-mail: assaraf@lct.jussieu.fr; Moroni, Saverio, E-mail: moroni@democritos.it

    2016-05-21

    We present a simple and general formalism to compute efficiently the derivatives of a multi-determinant Jastrow-Slater wave function, the local energy, the interatomic forces, and similar quantities needed in quantum Monte Carlo. Through a straightforward manipulation of matrices evaluated on the occupied and virtual orbitals, we obtain an efficiency equivalent to algorithmic differentiation in the computation of the interatomic forces and the optimization of the orbital parameters. Furthermore, for a large multi-determinant expansion, the significant computational gain afforded by a recently introduced table method is here extended to the local value of any one-body operator and to its derivatives, in both all-electron and pseudopotential calculations.

  7. Ultimate Longitudinal Strength of Composite Ship Hulls

    NASA Astrophysics Data System (ADS)

    Zhang, Xiangming; Huang, Lingkai; Zhu, Libao; Tang, Yuhang; Wang, Anwen

    2017-01-01

    A simple analytical model to estimate the longitudinal strength of composite ship hulls under buckling, material failure and ultimate collapse is presented in this paper. Ship hulls are regarded as assemblies of stiffened panels, which are idealized as groups of plate-stiffener combinations. The ultimate strain of the plate-stiffener combination under buckling or material failure is predicted with composite beam-column theory. The effects of the initial imperfection of the ship hull and the eccentricity of the load are included. The corresponding longitudinal strengths of the ship hull are derived in a straightforward manner. A longitudinally framed ship hull made of symmetrically stacked unidirectional plies under sagging is analyzed. The analytical results agree well with finite element results. The initial deflection of the ship hull and the eccentricity of the load can dramatically reduce its bending capacity. The proposed formulations provide a simple but useful tool for longitudinal strength estimation in practical design.

  8. Simple method for the generation of multiple homogeneous field volumes inside the bore of superconducting magnets

    PubMed Central

    Chou, Ching-Yu; Ferrage, Fabien; Aubert, Guy; Sakellariou, Dimitris

    2015-01-01

    Standard Magnetic Resonance magnets produce a single homogeneous field volume, where the analysis is performed. Nonetheless, several modern applications could benefit from the generation of multiple homogeneous field volumes along the axis and inside the bore of the magnet. In this communication, we propose a straightforward method using a combination of ring structures of permanent magnets in order to cancel the gradient of the stray field in a series of distinct volumes. These concepts were demonstrated numerically on an experimentally measured magnetic field profile. We discuss advantages and limitations of our method and present the key steps required for an experimental validation. PMID:26182891

  9. Cascade multicomponent synthesis of indoles, pyrazoles, and pyridazinones by functionalization of alkenes.

    PubMed

    Matcha, Kiran; Antonchick, Andrey P

    2014-10-27

    The development of multicomponent reactions for indole synthesis is demanding and has hardly been explored. The present study describes the development of a novel multicomponent, cascade approach for indole synthesis. Various substituted indole derivatives were obtained from simple reagents, such as unfunctionalized alkenes, diazonium salts, and sodium triflinate, using a straightforward and regioselective method. The method is based on the radical trifluoromethylation of alkenes as an entry into Fischer indole synthesis. Besides indole synthesis, the application of the multicomponent cascade reaction to the synthesis of pyrazoles and pyridazinones is described. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. In-Gel Determination of L-Amino Acid Oxidase Activity Based on the Visualization of Prussian Blue-Forming Reaction

    PubMed Central

    Zhou, Ning; Zhao, Chuntian

    2013-01-01

    L-amino acid oxidase (LAAO) is attracting increasing attention due to its important functions. Diverse detection methods with their own properties have been developed for characterization of LAAO. In the present study, a simple, rapid, sensitive, cost-effective and reproducible method for quantitative in-gel determination of LAAO activity based on the visualization of Prussian blue-forming reaction is described. Coupled with SDS-PAGE, this Prussian blue agar assay can be directly used to determine the numbers and approximate molecular weights of LAAO in one step, allowing straightforward application for purification and sequence identification of LAAO from diverse samples. PMID:23383337

  11. Simple Backdoors on RSA Modulus by Using RSA Vulnerability

    NASA Astrophysics Data System (ADS)

    Sun, Hung-Min; Wu, Mu-En; Yang, Cheng-Ta

    This investigation proposes two methods for embedding backdoors in the RSA modulus N=pq rather than in the public exponent e. This strategy not only permits manufacturers to embed backdoors in an RSA system, but also allows users to choose any desired public exponent, such as e=2^16+1, to ensure efficient encryption. This work utilizes a lattice attack and an exhaustive attack to embed backdoors in the two proposed methods, called RSASBLT and RSASBES, respectively. Both approaches involve straightforward steps, making their running time roughly the same as normal RSA key-generation time, implying that no one can detect the backdoor by observing timing disparities.

  12. Confirmatory factor analysis using Microsoft Excel.

    PubMed

    Miles, Jeremy N V

    2005-11-01

    This article presents a method for using Microsoft (MS) Excel for confirmatory factor analysis (CFA). CFA is often seen as an impenetrable technique, and thus, when it is taught, there is frequently little explanation of the mechanisms or underlying calculations. The aim of this article is to demonstrate that this is not the case; it is relatively straightforward to produce a spreadsheet in MS Excel that can carry out simple CFA. It is possible, with few or no programming skills, to effectively program a CFA analysis and, thus, to gain insight into the workings of the procedure.
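
    A sense of how little machinery simple CFA needs: the just-identified one-factor model with three indicators even has a closed-form solution that fits in a handful of spreadsheet cells. The Python sketch below mirrors those cell formulas; the loadings and unique variances are hypothetical illustration values, not from the article:

```python
import math

# observed covariance matrix S built here from known loadings, for illustration
lam_true = [0.8, 0.7, 0.6]          # hypothetical factor loadings
theta_true = [0.36, 0.51, 0.64]     # unique variances (so each variance is 1.0)

S = [[lam_true[i] * lam_true[j] if i != j else lam_true[i] ** 2 + theta_true[i]
      for j in range(3)] for i in range(3)]

# closed-form solution for the just-identified one-factor, three-indicator model:
# lambda_1 = sqrt(s12 * s13 / s23), and cyclic permutations
s12, s13, s23 = S[0][1], S[0][2], S[1][2]
lam = [math.sqrt(s12 * s13 / s23),
       math.sqrt(s12 * s23 / s13),
       math.sqrt(s13 * s23 / s12)]
theta = [S[i][i] - lam[i] ** 2 for i in range(3)]   # unique variances
```

    With more indicators the model is over-identified and an iterative fit (as in the article's spreadsheet) replaces the closed form, but the underlying calculations remain this transparent.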

  13. Improved silver staining of nucleolar organiser regions in paraffin wax sections using an inverted incubation technique.

    PubMed Central

    Coghill, G; Grant, A; Orrell, J M; Jankowski, J; Evans, A T

    1990-01-01

    A new simple modification to the silver staining of nucleolar organiser regions (AgNORs) was devised which, by performing the incubation with the slide inverted, results in minimal undesirable background staining, a persistent problem. Inverted incubation is facilitated by the use of a commercially available plastic coverplate. This technique has several additional advantages over other published staining protocols. In particular, the method is straightforward, fast, and maintains a high degree of contrast between the background and the AgNORs. Images PMID:1702451

  14. A step-by-step guide to non-linear regression analysis of experimental data using a Microsoft Excel spreadsheet.

    PubMed

    Brown, A M

    2001-06-01

    The objective of this present study was to introduce a simple, easily understood method for carrying out non-linear regression analysis based on user input functions. While it is relatively straightforward to fit data with simple functions such as linear or logarithmic functions, fitting data with more complicated non-linear functions is more difficult. Commercial specialist programmes are available that will carry out this analysis, but these programmes are expensive and are not intuitive to learn. An alternative method described here is to use the SOLVER function of the ubiquitous spreadsheet programme Microsoft Excel, which employs an iterative least squares fitting routine to produce the optimal goodness of fit between data and function. The intent of this paper is to lead the reader through an easily understood step-by-step guide to implementing this method, which can be applied to any function in the form y=f(x), and is well suited to fast, reliable analysis of data in all fields of biology.
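
    The iterative least-squares idea behind SOLVER can be sketched outside Excel as well. The hedged Python example below fits y = a·exp(b·x) by Gauss-Newton with step halving; the synthetic noiseless data and starting guesses are chosen for illustration, and SOLVER's internal algorithm differs in detail:

```python
import math

# synthetic data from y = a*exp(b*x); in practice y would be experimental
a_true, b_true = 2.5, 0.3
xs = [i * 0.5 for i in range(11)]                  # x = 0.0 .. 5.0
ys = [a_true * math.exp(b_true * x) for x in xs]

def sse(a, b):
    return sum((y - a * math.exp(b * x)) ** 2 for x, y in zip(xs, ys))

a, b = 2.0, 0.2                                    # initial guesses, as in SOLVER
for _ in range(100):
    # Gauss-Newton: solve the 2x2 normal equations (J^T J) d = J^T r
    jtj = [[0.0, 0.0], [0.0, 0.0]]
    jtr = [0.0, 0.0]
    for x, y in zip(xs, ys):
        e = math.exp(b * x)
        r = y - a * e
        ja, jb = e, a * x * e                      # partials of the model
        jtj[0][0] += ja * ja; jtj[0][1] += ja * jb; jtj[1][1] += jb * jb
        jtr[0] += ja * r;     jtr[1] += jb * r
    jtj[1][0] = jtj[0][1]
    det = jtj[0][0] * jtj[1][1] - jtj[0][1] * jtj[1][0]
    da = (jtr[0] * jtj[1][1] - jtr[1] * jtj[0][1]) / det
    db = (jtr[1] * jtj[0][0] - jtr[0] * jtj[1][0]) / det
    step = 1.0                                     # halve the step if SSE worsens
    while sse(a + step * da, b + step * db) > sse(a, b) and step > 1e-8:
        step *= 0.5
    a, b = a + step * da, b + step * db
```

    On noiseless data the iteration recovers the generating parameters; with real data it converges to the least-squares estimates, exactly the quantity SOLVER reports.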

  15. The Crisis Prevention Analysis Model.

    ERIC Educational Resources Information Center

    Hoverland, Hal; And Others

    1986-01-01

    The Crisis Prevention Analysis model offers a framework for simple, straightforward self-appraisal by college administrators of problems in four areas: fiscal; faculty and staff; support functions; and goals and attitudes. (MSE)

  16. Zombie states for description of structure and dynamics of multi-electron systems

    NASA Astrophysics Data System (ADS)

    Shalashilin, Dmitrii V.

    2018-05-01

    Canonical Coherent States (CSs) of the harmonic oscillator have been extensively used as a basis in a number of computational methods of quantum dynamics. However, generalising such techniques to fermionic systems is difficult because Fermionic Coherent States (FCSs) require the complicated algebra of Grassmann numbers, which is not well suited to numerical calculations. This paper introduces a coherent antisymmetrised superposition of "dead" and "alive" electronic states called here a Zombie State (ZS), which can be used in the manner of FCSs but without Grassmann algebra. Instead, for Zombie States, a very simple sign-changing rule is used in the definition of the creation and annihilation operators. Calculation of electronic structure Hamiltonian matrix elements between two ZSs then becomes very simple, and a straightforward technique for the time propagation of fermionic wave functions can be developed. By analogy with the existing methods based on Canonical Coherent States of the harmonic oscillator, fermionic wave functions can be propagated using a set of randomly selected Zombie States as a basis. As a proof of principle, the proposed Coupled Zombie States approach is tested on a simple example, showing that the technique is exact.
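
    The abstract does not spell out the paper's sign-changing rule, but the standard second-quantization convention it resembles is easy to demonstrate: acting on mode p picks up a factor (-1)^(number of occupied modes below p). A stdlib-only Python sketch verifying the anticommutator {a_p, a_p†} = 1 on a small state (the sample state and mode index are arbitrary; this illustrates the general sign rule, not the ZS formalism itself):

```python
# occupation-number states as bitmasks; a state is a map {mask: amplitude}

def sign(mask, p):
    # (-1)^(number of occupied modes below p) -- the fermionic sign rule
    return -1 if bin(mask & ((1 << p) - 1)).count("1") % 2 else 1

def create(state, p):
    out = {}
    for mask, c in state.items():
        if not mask & (1 << p):                  # mode p must be empty
            m = mask | (1 << p)
            out[m] = out.get(m, 0) + sign(mask, p) * c
    return out

def annihilate(state, p):
    out = {}
    for mask, c in state.items():
        if mask & (1 << p):                      # mode p must be occupied
            m = mask & ~(1 << p)
            out[m] = out.get(m, 0) + sign(mask, p) * c
    return out

def add(s1, s2):
    out = dict(s1)
    for m, c in s2.items():
        out[m] = out.get(m, 0) + c
    return out

# verify {a_p, a_p^dagger} = 1 on a sample 4-mode state
psi = {0b0101: 1.0, 0b0011: -0.5}
p = 1
lhs = add(annihilate(create(psi, p), p), create(annihilate(psi, p), p))
```

    The anticommutator acts as the identity on the sample state, which is the algebraic content the sign rule has to reproduce.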

  17. Robust and Simple Non-Reflecting Boundary Conditions for the Euler Equations: A New Approach Based on the Space-Time CE/SE Method

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung; Himansu, Ananda; Loh, Ching-Yuen; Wang, Xiao-Yen; Yu, Shang-Tao

    2003-01-01

    This paper reports on a significant advance in the area of non-reflecting boundary conditions (NRBCs) for unsteady flow computations. As a part of the development of the space-time conservation element and solution element (CE/SE) method, sets of NRBCs for 1D Euler problems are developed without using any characteristics-based techniques. These conditions are much simpler than those commonly reported in the literature, yet so robust that they are applicable to subsonic, transonic and supersonic flows even in the presence of discontinuities. In addition, the straightforward multidimensional extensions of the present 1D NRBCs have been shown numerically to be equally simple and robust. The paper details the theoretical underpinning of these NRBCs, and explains their unique robustness and accuracy in terms of the conservation of space-time fluxes. Some numerical results for an extended Sod's shock-tube problem, illustrating the effectiveness of the present NRBCs are included, together with an associated simple Fortran computer program. As a preliminary to the present development, a review of the basic CE/SE schemes is also included.

  18. A new formulation for anisotropic radiative transfer problems. I - Solution with a variational technique

    NASA Technical Reports Server (NTRS)

    Cheyney, H., III; Arking, A.

    1976-01-01

    The equations of radiative transfer in anisotropically scattering media are reformulated as linear operator equations in a single independent variable. The resulting equations are suitable for solution by a variety of standard mathematical techniques. The operators appearing in the resulting equations are in general nonsymmetric; however, it is shown that every bounded linear operator equation can be embedded in a symmetric linear operator equation and a variational solution can be obtained in a straightforward way. For purposes of demonstration, a Rayleigh-Ritz variational method is applied to three problems involving simple phase functions. It is to be noted that the variational technique demonstrated is of general applicability and permits simple solutions for a wide range of otherwise difficult mathematical problems in physics.
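
    The symmetrization step can be illustrated on a finite-dimensional analogue: a nonsymmetric system A x = b is replaced by the symmetric normal equation (AᵀA) x = Aᵀb, whose solution minimizes a quadratic functional, so a variational (descent) method applies. A Python sketch under these assumptions; the matrix, right-hand side, step size, and iteration count are illustrative, and the paper's actual embedding and Rayleigh-Ritz basis are more general:

```python
# nonsymmetric A x = b recast as the symmetric system (A^T A) x = A^T b,
# whose solution minimizes J(x) = x.(A^T A).x / 2 - x.(A^T b);
# plain gradient descent on J stands in for a variational expansion

A = [[4.0, 1.0, 0.0],
     [2.0, 5.0, 1.0],
     [0.0, 1.0, 3.0]]          # nonsymmetric operator
b = [1.0, 2.0, 3.0]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

At = [list(col) for col in zip(*A)]
AtA = matmul(At, A)            # symmetric positive definite
Atb = matvec(At, b)

x = [0.0, 0.0, 0.0]
lr = 0.02                      # step size below 2 / lambda_max(AtA)
for _ in range(4000):
    g = [gi - bi for gi, bi in zip(matvec(AtA, x), Atb)]   # gradient of J
    x = [xi - lr * gi for xi, gi in zip(x, g)]

residual = max(abs(r - bi) for r, bi in zip(matvec(A, x), b))
```

    The minimizer of the symmetric functional solves the original nonsymmetric system, which is the essence of the embedding argument.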

  19. Experimenting with woodwind instruments

    NASA Astrophysics Data System (ADS)

    Lo Presto, Michael C.

    2007-05-01

    Simple experiments involving musical instruments of the woodwind family can be used to demonstrate the basic physics of vibrating air columns in resonance tubes using nothing more than straightforward measurements and data collection hardware and software. More involved experimentation with the same equipment can provide insight into the effects of holes in the tubing and other factors that make simple tubes useful as musical instruments.
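
    The basic resonance-tube relations behind such experiments are f_n = n·v/(2L) for a tube open at both ends (all harmonics) and f_n = n·v/(4L) with odd n only for a tube closed at one end. A quick numerical check, with example values for tube length and sound speed:

```python
# resonant frequencies of an air column
v = 343.0   # speed of sound in air, m/s (approximate, near 20 C)
L = 0.5     # tube length in metres (example value)

open_open = [n * v / (2 * L) for n in range(1, 6)]          # n = 1, 2, 3, ...
closed_open = [n * v / (4 * L) for n in (1, 3, 5, 7, 9)]    # odd harmonics only
```

    The closed-open tube's fundamental is an octave below the open-open tube's, and its overtone series skips the even harmonics, both of which are easy to confirm with the data-collection hardware described above.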

  20. Nomogram for sample size calculation on a straightforward basis for the kappa statistic.

    PubMed

    Hong, Hyunsook; Choi, Yunhee; Hahn, Seokyung; Park, Sue Kyung; Park, Byung-Joo

    2014-09-01

    Kappa is a widely used measure of agreement. However, it may not be straightforward in some situations, such as sample size calculation, due to the kappa paradox: high agreement but low kappa. Hence, in sample size calculation it seems reasonable to express the level of agreement under a certain marginal prevalence as a simple proportion of agreement rather than as a kappa value. Therefore, sample size formulae and nomograms using a simple proportion of agreement rather than a kappa under certain marginal prevalences are proposed. A sample size formula was derived using the kappa statistic under the common correlation model and a goodness-of-fit statistic. The nomogram for the sample size formula was developed using SAS 9.3. Sample size formulae using a simple proportion of agreement instead of a kappa statistic, and nomograms to eliminate the inconvenience of using a mathematical formula, were produced. A nomogram for sample size calculation based on a simple proportion of agreement should be useful in the planning stages when the focus of interest is testing the hypothesis of interobserver agreement involving two raters and nominal outcome measures. Copyright © 2014 Elsevier Inc. All rights reserved.
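
    The kappa paradox the authors start from is easy to reproduce: with two raters, binary ratings, and a common marginal prevalence p, chance agreement is pe = p² + (1-p)², so the same observed proportion of agreement yields very different kappa values as p becomes skewed. A small Python illustration (the agreement and prevalence values are arbitrary):

```python
# kappa = (po - pe) / (1 - pe), with chance agreement pe computed from
# the common marginal prevalence p of a binary rating

def kappa(po, p):
    pe = p * p + (1 - p) * (1 - p)
    return (po - pe) / (1 - pe)

# the "kappa paradox": identical observed agreement po = 0.85, but kappa
# collapses when the marginal prevalence is skewed
k_balanced = kappa(0.85, 0.50)   # pe = 0.50
k_skewed = kappa(0.85, 0.90)     # pe = 0.82
```

    With balanced prevalence, kappa is 0.70; with 90% prevalence the same 85% agreement gives kappa of roughly 0.17, which is why the authors parameterize their sample size formula by the proportion of agreement under a given prevalence instead.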

  1. SLIC superpixels compared to state-of-the-art superpixel methods.

    PubMed

    Achanta, Radhakrishna; Shaji, Appu; Smith, Kevin; Lucchi, Aurelien; Fua, Pascal; Süsstrunk, Sabine

    2012-11-01

    Computer vision applications have come to rely increasingly on superpixels in recent years, but it is not always clear what constitutes a good superpixel algorithm. In an effort to understand the benefits and drawbacks of existing methods, we empirically compare five state-of-the-art superpixel algorithms for their ability to adhere to image boundaries, speed, memory efficiency, and their impact on segmentation performance. We then introduce a new superpixel algorithm, simple linear iterative clustering (SLIC), which adapts a k-means clustering approach to efficiently generate superpixels. Despite its simplicity, SLIC adheres to boundaries as well as or better than previous methods. At the same time, it is faster and more memory efficient, improves segmentation performance, and is straightforward to extend to supervoxel generation.
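
    The core of SLIC is k-means in a joint colour-and-position feature space with a compactness weight m and grid interval S. A heavily simplified Python sketch on a tiny synthetic grayscale image; real SLIC restricts each centre's search to a 2S×2S window and works in CIELAB colour, and the image, seeds, and parameters here are purely illustrative:

```python
import math

# simplified SLIC-style clustering: k-means over (intensity, x, y) with the
# SLIC distance D = sqrt(d_colour^2 + (m/S)^2 * d_xy^2)
W = H = 8
img = [[0.0 if x < 4 else 1.0 for x in range(W)] for y in range(H)]  # two flat regions

S = 4.0                      # grid interval for 2 seeds on an 8x8 image
m = 0.1                      # small compactness weight: colour dominates
centers = [(img[4][2], 2.0, 4.0), (img[4][6], 6.0, 4.0)]  # seeds on the grid

for _ in range(5):
    # assignment step: each pixel joins the nearest centre under D
    labels = [[0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            best, lab = float("inf"), 0
            for k, (ci, cx, cy) in enumerate(centers):
                dc = img[y][x] - ci
                ds = math.hypot(x - cx, y - cy)
                d = math.hypot(dc, (m / S) * ds)
                if d < best:
                    best, lab = d, k
            labels[y][x] = lab
    # update step: move each centre to the mean of its assigned pixels
    for k in range(len(centers)):
        pts = [(img[y][x], x, y)
               for y in range(H) for x in range(W) if labels[y][x] == k]
        if pts:
            centers[k] = tuple(sum(v) / len(pts) for v in zip(*pts))
```

    On this toy image the two superpixels snap to the two flat regions; the compactness weight m trades boundary adherence against regular superpixel shape, exactly the knob the paper discusses.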

  2. Parallel tempering for the traveling salesman problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Percus, Allon; Wang, Richard; Hyman, Jeffrey

    We explore the potential of parallel tempering as a combinatorial optimization method, applying it to the traveling salesman problem. We compare simulation results of parallel tempering with a benchmark implementation of simulated annealing, and study how different choices of parameters affect the relative performance of the two methods. We find that a straightforward implementation of parallel tempering can outperform simulated annealing in several crucial respects. When parameters are chosen appropriately, both methods yield close approximations to the actual minimum distance for an instance with 200 nodes. However, parallel tempering yields more consistently accurate results when a series of independent simulations is performed. Our results suggest that parallel tempering might offer a simple but powerful alternative to simulated annealing for combinatorial optimization problems.
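
    A straightforward implementation of the kind the authors describe fits in a few dozen lines: one replica per temperature, Metropolis-accepted 2-opt moves within each replica, and occasional exchanges between neighbouring temperatures. The Python sketch below runs on a toy instance of cities on a circle; the temperature ladder, sweep count, and instance are illustrative choices, not the paper's settings:

```python
import math, random

# parallel tempering for the TSP: Metropolis 2-opt moves in each replica,
# plus replica exchange between neighbouring temperatures; the toy instance
# puts the cities on a circle, so the optimal tour is the perimeter
random.seed(1)
N = 10
pts = [(math.cos(2 * math.pi * i / N), math.sin(2 * math.pi * i / N))
       for i in range(N)]

def length(tour):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % N]]) for i in range(N))

temps = [0.01, 0.05, 0.2]                        # illustrative temperature ladder
tours = [random.sample(range(N), N) for _ in temps]
best = min(length(t) for t in tours)

for _ in range(6000):
    for r, T in enumerate(temps):
        i, j = sorted(random.sample(range(N), 2))
        cand = tours[r][:i] + tours[r][i:j + 1][::-1] + tours[r][j + 1:]  # 2-opt
        dE = length(cand) - length(tours[r])
        if dE <= 0 or random.random() < math.exp(-dE / T):
            tours[r] = cand
        best = min(best, length(tours[r]))
    # exchange step: swap a random pair of neighbouring-temperature replicas
    r = random.randrange(len(temps) - 1)
    delta = (1 / temps[r] - 1 / temps[r + 1]) * (length(tours[r + 1]) - length(tours[r]))
    if delta <= 0 or random.random() < math.exp(-delta):
        tours[r], tours[r + 1] = tours[r + 1], tours[r]
```

    The hot replicas explore freely while the cold one refines, and the exchange step lets good configurations migrate down the ladder; for this convex instance the best tour found converges to the circle's perimeter, 2N·sin(π/N) ≈ 6.18.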

  3. Zero-multipole summation method for efficiently estimating electrostatic interactions in molecular system.

    PubMed

    Fukuda, Ikuo

    2013-11-07

    The zero-multipole summation method has been developed to efficiently evaluate the electrostatic Coulombic interactions of a point charge system. This summation prevents the electrically non-neutral multipole states that may artificially be generated by a simple cutoff truncation, which often causes large amounts of energetic noise and significant artifacts. The resulting energy function is represented by a constant term plus a simple pairwise summation, using a damped or undamped Coulombic pair potential function along with a polynomial of the distance between each particle pair. Thus, the implementation is straightforward and enables facile applications to high-performance computations. Any higher-order multipole moment can be taken into account in the neutrality principle, and it only affects the degree and coefficients of the polynomial and the constant term. The lowest and second moments correspond respectively to the Wolf zero-charge scheme and the zero-dipole summation scheme, which was previously proposed. Relationships with other non-Ewald methods are discussed, to validate the current method in their contexts. Good numerical efficiencies were easily obtained in the evaluation of Madelung constants of sodium chloride and cesium chloride crystals.
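
    The flavour of such damped, charge-neutralised pairwise sums can be seen in a classic test the abstract mentions: the Madelung constant of NaCl. A hedged Python sketch using a Wolf-style real-space sum with erfc damping; the damping parameter and cutoff are illustrative, and the identity used assumes α is small enough that the reciprocal-space part of the Ewald sum is negligible:

```python
import math

# NaCl Madelung constant from a damped real-space lattice sum: with
# nearest-neighbour distance 1 and relative charges s = (-1)^(i+j+k),
#   M = 2*alpha/sqrt(pi) - sum_{0 < r <= Rc} s * erfc(alpha*r) / r,
# valid when the reciprocal-space Ewald contribution is negligible
alpha, Rc = 0.25, 12.0
acc = 0.0
n = int(Rc)
for i in range(-n, n + 1):
    for j in range(-n, n + 1):
        for k in range(-n, n + 1):
            r = math.sqrt(i * i + j * j + k * k)
            if 0 < r <= Rc:
                s = -1 if (i + j + k) % 2 else 1
                acc += s * math.erfc(alpha * r) / r

M = 2 * alpha / math.sqrt(math.pi) - acc   # should approach 1.7476
```

    A bare cutoff of the undamped 1/r sum oscillates wildly with Rc, whereas the damped, neutralised sum converges quickly; this is the behaviour the zero-multipole family of methods generalises to higher moments.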

  4. Highly accurate symplectic element based on two variational principles

    NASA Astrophysics Data System (ADS)

    Qing, Guanghui; Tian, Jia

    2018-02-01

    To meet the stability requirements on numerical results, the mathematical theory of classical mixed methods is relatively complex. Generalized mixed methods, however, are automatically stable, and their construction is simple and straightforward. In this paper, based on the seminal idea of generalized mixed methods, a simple, stable, and highly accurate 8-node noncompatible symplectic element (NCSE8) was developed by combining the modified Hellinger-Reissner mixed variational principle with the minimum energy principle. To ensure the accuracy of in-plane stress results, a simultaneous equation approach was also suggested. Numerical experimentation shows that the stress accuracy of NCSE8 is nearly the same as that of displacement methods, and the results are in good agreement with exact solutions when the mesh is relatively fine. NCSE8 has the advantages of a clear concept, easy implementation in a finite element program, higher accuracy, and wide applicability to various linear elasticity problems with compressible and nearly incompressible materials. NCSE8 may prove even more advantageous for fracture problems due to its better stress accuracy.

  5. Rutherford's Scattering Formula via the Runge-Lenz Vector.

    ERIC Educational Resources Information Center

    Basano, L.; Bianchi, A.

    1980-01-01

    Discusses how the Runge-Lenz vector provides a way to derive the relation between deflection angle and impact parameter for Coulomb- and Kepler-fields in a very simple and a straightforward way. (Author/HM)

  6. Three Dimensional (3D) Printing: A Straightforward, User-Friendly Protocol to Convert Virtual Chemical Models to Real-Life Objects

    ERIC Educational Resources Information Center

    Rossi, Sergio; Benaglia, Maurizio; Brenna, Davide; Porta, Riccardo; Orlandi, Manuel

    2015-01-01

    A simple procedure to convert protein data bank files (.pdb) into a stereolithography file (.stl) using VMD software (Virtual Molecular Dynamic) is reported. This tutorial allows generating, with a very simple protocol, three-dimensional customized structures that can be printed by a low-cost 3D-printer, and used for teaching chemical education…

  7. An evidence-based concept of implant dentistry. Utilization of short and narrow platform implants.

    PubMed

    Ruiz, Jose-Luis

    2012-09-01

    As a profession, we must remember that tooth replacement is not a luxury; it is often a necessity for health reasons. Although bone augmentation, CBCT, and expensive surgical guides are often indicated for complex cases, they are being overused. Simple or straightforward implant cases, in which there is sufficient natural bone for a narrow or shorter implant, can be predictably performed by well-trained GPs and other trained specialists. Complex cases, requiring bone augmentation or presenting other complexities as described herein, should be referred to a surgical specialist. Implant courses and curricula have to be based on the level of complexity of implant surgery that each clinician wishes to provide to his or her patients. Using a "logical approach" to implant dentistry keeps cases simple or straightforward, and more accessible to patients, through the correct use of narrow and shorter implants.

  8. The analysis of non-linear dynamic behavior (including snap-through) of postbuckled plates by simple analytical solution

    NASA Technical Reports Server (NTRS)

    Ng, C. F.

    1988-01-01

    Static postbuckling and nonlinear dynamic analyses of plates are usually accomplished by multimode analyses, although the methods are complicated and do not give a straightforward understanding of the nonlinear behavior. Assuming single-mode transverse displacement, a simple formula is derived for the transverse load-displacement relationship of a plate under in-plane compression. The formula is used to derive a simple analytical expression for the static postbuckling displacement and the nonlinear dynamic responses of postbuckled plates under sinusoidal or random excitation. Regions with softening and hardening spring behavior are identified. Also, the highly nonlinear motion of snap-through and its effects on the overall dynamic response can be easily interpreted using the single-mode formula. Theoretical results are compared with experimental results obtained using a buckled aluminum panel, with discrete frequency and broadband point excitation. Some important effects of the snap-through motion on the dynamic response of the postbuckled plates are found.
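
    A single-mode reduction of this kind typically yields a Duffing-type amplitude equation. The sketch below shows only the generic form, with modal coefficients m_e, k₁, k₃ depending on geometry and boundary conditions; it is not the paper's exact formula:

```latex
% single-mode ansatz w(x,y,t) = q(t)\,\phi(x,y) under in-plane load N
m_e\,\ddot{q} + k_1\!\left(1 - \frac{N}{N_{cr}}\right) q + k_3\,q^{3} = F(t)

% for N > N_{cr} the linear stiffness is negative, so two stable buckled
% equilibria flank the unstable flat state q = 0:
q^{\ast} = \pm\sqrt{\frac{k_1\,(N/N_{cr} - 1)}{k_3}}
```

    Snap-through corresponds to excitation driving q across the unstable equilibrium at q = 0, which is why the softening and hardening regions identified above govern the dynamic response.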

  9. A simple fabrication of CH3NH3PbI3 perovskite for solar cells using low-purity PbI2

    NASA Astrophysics Data System (ADS)

    Guo, Nanjie; Zhang, Taiyang; Li, Ge; Xu, Feng; Qian, Xufang; Zhao, Yixin

    2017-01-01

    CH3NH3PbI3 (MAPbI3) perovskite is usually prepared from costly high-purity PbI2. Low-cost, low-purity PbI2 has seldom been reported for the fabrication of MAPbI3 because it does not dissolve well even in the widely adopted solvent DMF. We developed an easy method to adapt low-purity PbI2 for the fabrication of high-quality MAPbI3 simply by adding some hydrochloric acid to the mixture of low-purity PbI2, MAI, and DMF. This straightforward method not only helps dissolve the low-purity PbI2 in DMF by reacting with some of its impurities, but also leads to the successful fabrication of high-quality perovskite solar cells with up to 14.80% efficiency, comparable to those made from high-purity PbI2 precursors. Project supported by the National Natural Science Foundation of China (Nos. 51372151, 21303103) and Houyingdong Grant (No. 151046).

  10. Identifying Two-Dimensional Z2 Antiferromagnetic Topological Insulators

    NASA Astrophysics Data System (ADS)

    Bègue, F.; Pujol, P.; Ramazashvili, R.

    2018-01-01

    We revisit the question of whether a two-dimensional topological insulator may arise in a commensurate Néel antiferromagnet, where staggered magnetization breaks the symmetry with respect to both elementary translation and time reversal, but retains their product as a symmetry. In contrast to the so-called Z2 topological insulators, an exhaustive characterization of antiferromagnetic topological phases with the help of topological invariants has been missing. We analyze a simple model of an antiferromagnetic topological insulator and chart its phase diagram, using a recently proposed criterion for centrosymmetric systems [13]. We then adapt two methods, originally designed for paramagnetic systems, and make antiferromagnetic topological phases manifest. The proposed methods apply far beyond the particular examples treated in this work, and admit straightforward generalization. We illustrate this with two examples of non-centrosymmetric systems, where no simple criteria have been known to identify topological phases. We also present, for some cases, an explicit construction of edge states in an antiferromagnetic topological insulator.

  11. Robust and Simple Non-Reflecting Boundary Conditions for the Euler Equations - A New Approach based on the Space-Time CE/SE Method

    NASA Technical Reports Server (NTRS)

    Chang, S.-C.; Himansu, A.; Loh, C.-Y.; Wang, X.-Y.; Yu, S.-T.J.

    2005-01-01

    This paper reports on a significant advance in the area of nonreflecting boundary conditions (NRBCs) for unsteady flow computations. As a part of the development of the space-time conservation element and solution element (CE/SE) method, sets of NRBCs for 1D Euler problems are developed without using any characteristics-based techniques. These conditions are much simpler than those commonly reported in the literature, yet so robust that they are applicable to subsonic, transonic and supersonic flows even in the presence of discontinuities. In addition, the straightforward multidimensional extensions of the present 1D NRBCs have been shown numerically to be equally simple and robust. The paper details the theoretical underpinning of these NRBCs, and explains their unique robustness and accuracy in terms of the conservation of space-time fluxes. Some numerical results for an extended Sod's shock-tube problem, illustrating the effectiveness of the present NRBCs, are included, together with an associated simple Fortran computer program. As a preliminary to the present development, a review of the basic CE/SE schemes is also included.

  12. Endobronchial valves for bronchopleural fistula: pitfalls and principles

    PubMed Central

    Gaspard, Dany; Bartter, Thaddeus; Boujaoude, Ziad; Raja, Haroon; Arya, Rohan; Meena, Nikhil; Abouzgheib, Wissam

    2016-01-01

    Background: Placement of endobronchial valves for bronchopleural fistula (BPF) is not always straightforward. A simple guide to the steps for an uncomplicated procedure does not encompass pitfalls that need to be understood and overcome to maximize the efficacy of this modality. Objectives: The objective of this study was to discuss examples of difficult cases for which the placement of endobronchial valves was not straightforward and required alterations in the usual basic steps. Subsequently, we aimed to provide guiding principles for a successful procedure. Methods: Six illustrative cases were selected to demonstrate issues that can arise during endobronchial valve placement. Results: In each case, a real or apparent lack of decrease in airflow through a BPF was diagnosed and addressed. We have used the selected problem cases to illustrate principles, with the goal of helping to increase the success rate for endobronchial valve placement in the treatment of BPF. Conclusions: This series demonstrates issues that complicate effective placement of endobronchial valves for BPF. These issues form the basis for troubleshooting steps that complement the basic procedural steps. PMID:27742781

  13. A Conceptual Approach to Assimilating Remote Sensing Data to Improve Soil Moisture Profile Estimates in a Surface Flux/Hydrology Model. 3; Disaggregation

    NASA Technical Reports Server (NTRS)

    Caulfield, John; Crosson, William L.; Inguva, Ramarao; Laymon, Charles A.; Schamschula, Marius

    1998-01-01

    This is a follow-up on the preceding presentation by Crosson and Schamschula. The grid size for remote microwave measurements is much coarser than the hydrological model computational grids. To validate the hydrological models with measurements, we propose mechanisms to disaggregate the microwave measurements to allow comparison with outputs from the hydrological models. Weighted interpolation and Bayesian methods are proposed to facilitate the comparison. While remote measurements occur at a large scale, they reflect underlying small-scale features. We can give continuing estimates of the small-scale features by correcting the simple zeroth-order estimate from each small-scale model with each large-scale measurement, using a straightforward method based on Kalman filtering.
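
The correction step described above can be sketched as a scalar Kalman-filter update in which one coarse measurement of the spatial mean adjusts the fine-scale estimates. All variable names and numbers below are illustrative assumptions, not the paper's model:

```python
import numpy as np

# Hypothetical sketch: a coarse (large-scale) measurement z of the mean
# soil moisture is used to correct fine-scale prior estimates x_prior
# through a Kalman-filter gain. Names and values are ours for illustration.

def disaggregate_update(x_prior, P, z, R):
    """Correct fine-scale estimates with one coarse-scale measurement.

    x_prior : array of fine-scale prior estimates
    P       : prior error variance of each fine-scale estimate (scalar)
    z       : coarse-scale measurement, modeled as the mean of x
    R       : measurement noise variance
    """
    H = np.full_like(x_prior, 1.0 / len(x_prior))  # observation: z ~ mean(x)
    innovation = z - H @ x_prior                   # coarse-scale residual
    S = H @ (P * H) + R                            # innovation variance
    K = (P * H) / S                                # Kalman gain per cell
    return x_prior + K * innovation

x = np.array([0.20, 0.25, 0.30, 0.25])  # fine-scale prior (volumetric)
x_new = disaggregate_update(x, P=0.01, z=0.28, R=0.001)
```

The update pulls every fine-scale cell toward consistency with the coarse measurement while preserving the small-scale pattern.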

  14. A statistical theory for sound radiation and reflection from a duct

    NASA Technical Reports Server (NTRS)

    Cho, Y. C.

    1979-01-01

    A new analytical method is introduced for the study of the sound radiation and reflection from the open end of a duct. The sound is thought of as an aggregation of quasiparticles (phonons). The motion of the latter is described in terms of the statistical distribution, which is derived from the classical wave theory. The results are in good agreement with the solutions obtained using the Wiener-Hopf technique when the latter is applicable, but the new method is simple and provides a straightforward physical interpretation of the problem. Furthermore, it is applicable to a problem involving a duct in which modes are difficult to determine or cannot be defined at all, whereas the Wiener-Hopf technique is not.

  15. Computation of Sensitivity Derivatives of Navier-Stokes Equations using Complex Variables

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.

    2004-01-01

    Accurate computation of sensitivity derivatives is becoming an important item in Computational Fluid Dynamics (CFD) because of recent emphasis on using nonlinear CFD methods in aerodynamic design, optimization, stability and control related problems. Several techniques are available to compute gradients or sensitivity derivatives of desired flow quantities or cost functions with respect to selected independent (design) variables. Perhaps the most common and oldest method is to use straightforward finite differences for the evaluation of sensitivity derivatives. Although very simple, this method is prone to errors associated with choice of step sizes and can be cumbersome for geometric variables. The cost per design variable for computing sensitivity derivatives with central differencing is at least equal to the cost of three full analyses, but is usually much larger in practice due to difficulty in choosing step sizes. Another approach gaining popularity is the use of Automatic Differentiation software (such as ADIFOR) to process the source code, which in turn can be used to evaluate the sensitivity derivatives of preselected functions with respect to chosen design variables. In principle, this approach is also very straightforward and quite promising. The main drawback is the large memory requirement because memory use increases linearly with the number of design variables. ADIFOR software can also be cumbersome for large CFD codes and has not yet reached a full maturity level for production codes, especially in parallel computing environments.
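
The complex-variable alternative the title refers to can be illustrated, in its simplest scalar form, by the complex-step derivative f'(x) ≈ Im f(x + ih)/h. Because no subtraction of nearby values occurs, the step can be made tiny without cancellation error, unlike central differencing. A minimal sketch; the test function and step sizes are ours, not from the report:

```python
import cmath

def complex_step(f, x, h=1e-30):
    # No subtractive cancellation: h can be essentially machine-tiny.
    return f(x + 1j * h).imag / h

def central_diff(f, x, h):
    # Standard central difference; accuracy degrades as h shrinks.
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda z: cmath.exp(z) / cmath.sqrt(z)   # analytic test function
x = 1.5
# Exact derivative: f'(x) = e^x x^(-1/2) (1 - 1/(2x))
exact = (cmath.exp(x) * (1 - 0.5 / x) / cmath.sqrt(x)).real

err_cs = abs(complex_step(f, x) - exact)
err_fd = abs(central_diff(lambda v: (cmath.exp(v) / cmath.sqrt(v)).real,
                          x, 1e-10) - exact)
```

Here err_cs sits at machine precision even with h = 1e-30, while the finite-difference error is limited by the step-size trade-off the abstract describes.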

  16. Emergence of a confined state in a weakly bent wire

    NASA Astrophysics Data System (ADS)

    Granot, Er'El

    2002-06-01

    In this paper we use a simple, straightforward technique to investigate the emergence of a bound state in a weakly bent wire. We show that the bend behaves like an infinitely shallow potential well, and in the limit of small bending angle (φ ≪ 1) and low energy the bend can be represented by a simple one-dimensional δ-function potential, V(x) = −(2c_bφ²)δ(x), where c_b ≅ 2.1.
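
The quoted δ-potential implies the standard shallow-well bound state; in units where ħ²/2m = 1 (consistent with the form of V above), the binding energy scales as φ⁴:

```latex
% Bound state of V(x) = -\alpha\,\delta(x), with \alpha = 2 c_b \varphi^2.
% Ansatz and matching condition at the origin:
\psi(x) \propto e^{-\kappa |x|}, \qquad
\psi'(0^+) - \psi'(0^-) = -\alpha\,\psi(0)
\;\Rightarrow\; \kappa = \frac{\alpha}{2},
\qquad
E = -\kappa^2 = -\frac{\alpha^2}{4} = -c_b^2\,\varphi^4 \approx -4.4\,\varphi^4 .
```

The quartic dependence on the bending angle is why the well is "infinitely shallow" as φ → 0.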

  17. Straightforward Generation of Pillared, Microporous Graphene Frameworks for Use in Supercapacitors.

    PubMed

    Yuan, Kai; Xu, Yazhou; Uihlein, Johannes; Brunklaus, Gunther; Shi, Lei; Heiderhoff, Ralf; Que, Mingming; Forster, Michael; Chassé, Thomas; Pichler, Thomas; Riedl, Thomas; Chen, Yiwang; Scherf, Ullrich

    2015-11-01

    Microporous, pillared graphene-based frameworks are generated in a simple functionalization/coupling procedure starting from reduced graphene oxide. They are used for the fabrication of high-performance supercapacitor devices.

  18. ASSESSMENT OF SPATIAL AUTOCORRELATION IN EMPIRICAL MODELS IN ECOLOGY

    EPA Science Inventory

    Statistically assessing ecological models is inherently difficult because data are autocorrelated and this autocorrelation varies in an unknown fashion. At a simple level, the linking of a single species to a habitat type is a straightforward analysis. With some investigation int...

  19. LandScape: a simple method to aggregate p-values and other stochastic variables without a priori grouping.

    PubMed

    Wiuf, Carsten; Schaumburg-Müller Pallesen, Jonatan; Foldager, Leslie; Grove, Jakob

    2016-08-01

    In many areas of science it is custom to perform many, potentially millions, of tests simultaneously. To gain statistical power it is common to group tests based on a priori criteria such as predefined regions or by sliding windows. However, it is not straightforward to choose grouping criteria and the results might depend on the chosen criteria. Methods that summarize, or aggregate, test statistics or p-values, without relying on a priori criteria, are therefore desirable. We present a simple method to aggregate a sequence of stochastic variables, such as test statistics or p-values, into fewer variables without assuming a priori defined groups. We provide different ways to evaluate the significance of the aggregated variables based on theoretical considerations and resampling techniques, and show that under certain assumptions the FWER is controlled in the strong sense. Validity of the method was demonstrated using simulations and real data analyses. Our method may be a useful supplement to standard procedures relying on evaluation of test statistics individually. Moreover, by being agnostic and not relying on predefined selected regions, it might be a practical alternative to conventionally used methods of aggregation of p-values over regions. The method is implemented in Python and freely available online (through GitHub, see the Supplementary information).
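
For contrast with the grouping-free approach the authors propose, the conventional a priori aggregation they argue against can be sketched with Fisher's method over fixed sliding windows. This is the standard baseline, not the LandScape method itself:

```python
import math

# Fisher's method: under H0, -2 * sum(log p) over k independent p-values
# follows a chi-square distribution with 2k degrees of freedom. Applying
# it over predefined sliding windows is the a priori grouping the paper
# seeks to avoid; window size k must be chosen in advance.

def fisher_combine(pvals):
    """Return Fisher's chi-square statistic and its degrees of freedom."""
    stat = -2.0 * sum(math.log(p) for p in pvals)
    return stat, 2 * len(pvals)

def sliding_windows(pvals, k):
    return [pvals[i:i + k] for i in range(len(pvals) - k + 1)]

pvals = [0.80, 0.02, 0.01, 0.03, 0.70, 0.90]   # illustrative values
stats = [fisher_combine(w) for w in sliding_windows(pvals, 3)]
```

The window covering the three small p-values yields the largest statistic, but the result depends on the arbitrary choice of k, which is exactly the dependence the paper's aggregation method removes.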

  20. Handspinning Enabled Highly Concentrated Carbon Nanotubes with Controlled Orientation in Nanofibers

    PubMed Central

    Lee, Hoik; Watanabe, Kei; Kim, Myungwoong; Gopiraman, Mayakrishnan; Song, Kyung-Hun; Lee, Jung Soon; Kim, Ick Soo

    2016-01-01

    The novel method, handspinning (HS), was invented by mimicking commonly observed methods in our daily lives. The use of HS allows us to fabricate carbon nanotube-reinforced nanofibers (CNT-reinforced nanofibers) by addressing three significant challenges: (i) the difficulty of forming nanofibers at high concentrations of CNTs, (ii) aggregation of the CNTs, and (iii) control of the orientation of the CNTs. The handspun nanofibers showed better physical properties than fibers fabricated by conventional methods, such as electrospinning. Handspun nanofibers retain a larger amount of CNTs than electrospun nanofibers, and the CNTs are easily aligned uniaxially. We attributed these improvements provided by the HS process to simple mechanical stretching force, which allows for orienting the nanofillers along with the force direction without agglomeration, leading to increased contact area between the CNTs and the polymer matrix, thereby providing enhanced interactions. HS is a simple and straightforward method as it does not require an electric field, and, hence, any kinds of polymers and solvents can be applicable. Furthermore, it is feasible to retain a large amount of various nanofillers in the fibers to enhance their physical and chemical properties. Therefore, HS provides an effective pathway to create new types of reinforced nanofibers with outstanding properties. PMID:27876892

  1. Quantitative Modeling of Earth Surface Processes

    NASA Astrophysics Data System (ADS)

    Pelletier, Jon D.

    This textbook describes some of the most effective and straightforward quantitative techniques for modeling Earth surface processes. By emphasizing a core set of equations and solution techniques, the book presents state-of-the-art models currently employed in Earth surface process research, as well as a set of simple but practical research tools. Detailed case studies demonstrate application of the methods to a wide variety of processes including hillslope, fluvial, aeolian, glacial, tectonic, and climatic systems. Exercises at the end of each chapter begin with simple calculations and then progress to more sophisticated problems that require computer programming. All the necessary computer codes are available online at www.cambridge.org/9780521855976. Assuming some knowledge of calculus and basic programming experience, this quantitative textbook is designed for advanced geomorphology courses and as a reference book for professional researchers in Earth and planetary science looking for a quantitative approach to Earth surface processes.

  2. More details...
  3. Second-Order Conditioning in "Drosophila"

    ERIC Educational Resources Information Center

    Tabone, Christopher J.; de Belle, J. Steven

    2011-01-01

    Associative conditioning in "Drosophila melanogaster" has been well documented for several decades. However, most studies report only simple associations of conditioned stimuli (CS, e.g., odor) with unconditioned stimuli (US, e.g., electric shock) to measure learning or establish memory. Here we describe a straightforward second-order conditioning…

  4. Experimenting with Woodwind Instruments

    ERIC Educational Resources Information Center

    LoPresto, Michael C.

    2007-01-01

    Simple experiments involving musical instruments of the woodwind family can be used to demonstrate the basic physics of vibrating air columns in resonance tubes using nothing more than straightforward measurements and data collection hardware and software. More involved experimentation with the same equipment can provide insight into the effects…

  5. A Simple Adsorption Experiment

    ERIC Educational Resources Information Center

    Guirado, Gonzalo; Ayllon, Jose A.

    2011-01-01

    The study of adsorption phenomenon is one of the most relevant and traditional physical chemistry experiments performed by chemistry undergraduate students in laboratory courses. In this article, we describe an easy, inexpensive, and straightforward way to experimentally determine adsorption isotherms using pieces of filter paper as the adsorbent…

  6. Research on Leadership, Motivation and Quality of Life in the Air Force Missile and Tanker Units

    DTIC Science & Technology

    1977-06-01

    studying directly the results of applying different management principles in leading organizations. Enlightened management had demonstrated In... technologies for handling individual differences are fairly simple and straightforward. The demands placed upon analyses involving relationships and

  7. Ethical Frameworks, Moral Practices and Outdoor Education.

    ERIC Educational Resources Information Center

    Fox, Karen M.; Lautt, Mick

    Insights from quantum physics and chaos theory help create new metaphors about ethical frameworks and moral practices in outdoor education. The seemingly straightforward concept of values is analogous to the initial simple nonlinear equation of a fractal. The value claims of outdoor education--trust, cooperation, environmental awareness,…

  8. The Relativistic Rocket

    ERIC Educational Resources Information Center

    Antippa, Adel F.

    2009-01-01

    We solve the problem of the relativistic rocket by making use of the relation between Lorentzian and Galilean velocities, as well as the laws of superposition of successive collinear Lorentz boosts in the limit of infinitesimal boosts. The solution is conceptually simple, and technically straightforward, and provides an example of a powerful…

  9. Common Magnets, Unexpected Polarities

    ERIC Educational Resources Information Center

    Olson, Mark

    2013-01-01

    In this paper, I discuss a "misconception" in magnetism so simple and pervasive as to be typically unnoticed. That magnets have poles might be considered one of the more straightforward notions in introductory physics. However, the magnets common to students' experiences are likely different from those presented in educational…

  10. BVDV: Detection, Risk Management and Control

    USDA-ARS?s Scientific Manuscript database

    The terms bovine viral diarrhea (BVD) and bovine viral diarrhea viruses (BVDV) are difficult to define in simple straightforward statements because both are umbrella terms covering a wide range of observations and entities. While diarrhea is in the name, BVD, it is used in reference to a number of ...

  11. Implementation of cascade logic gates and majority logic gate on a simple and universal molecular platform.

    PubMed

    Gao, Jinting; Liu, Yaqing; Lin, Xiaodong; Deng, Jiankang; Yin, Jinjin; Wang, Shuo

    2017-10-25

    Wiring a series of simple logic gates to process complex data is significantly important and a large challenge for untraditional molecular computing systems. The programmable property of DNA endows its powerful application in molecular computing. In our investigation, it was found that DNA exhibits excellent peroxidase-like activity in a colorimetric system of TMB/H2O2/Hemin (TMB, 3,3',5,5'-tetramethylbenzidine) in the presence of K+ and Cu2+, which is significantly inhibited by the addition of an antioxidant. According to the modulated catalytic activity of this DNA-based catalyst, three cascade logic gates including AND-OR-INH (INHIBIT), AND-INH and OR-INH were successfully constructed. Interestingly, by only modulating the concentration of Cu2+, a majority logic gate with a single-vote veto function was realized following the same threshold value as that of the cascade logic gates. The strategy is quite straightforward and versatile and provides an instructive method for constructing multiple logic gates on a simple platform to implement complex molecular computing.
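
The gate cascades named above can be sketched as plain Boolean functions; INH (INHIBIT) is AND with an inverted second input. This is a hypothetical abstraction of the logic only, not a model of the TMB/H2O2/Hemin chemistry:

```python
# Boolean sketch of the named cascades. The chemical inputs (K+, Cu2+,
# antioxidant) are abstracted to True/False signals for illustration.

def INH(a, inhibitor):
    """INHIBIT gate: pass a only when the inhibiting input is absent."""
    return a and not inhibitor

def AND_INH(a, b, inhibitor):        # (a AND b) feeding an INH gate
    return INH(a and b, inhibitor)

def OR_INH(a, b, inhibitor):         # (a OR b) feeding an INH gate
    return INH(a or b, inhibitor)

def majority_with_veto(a, b, c, veto):
    """Majority gate: at least two of three inputs high, but a single
    veto vote overrides the output (the 'single-vote veto function')."""
    return (int(a) + int(b) + int(c) >= 2) and not veto
```

The veto input plays the same role as the inhibiting input of INH, applied on top of the two-of-three threshold.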

  12. Rapid and Simple Detection of Hot Spot Point Mutations of Epidermal Growth Factor Receptor, BRAF, and NRAS in Cancers Using the Loop-Hybrid Mobility Shift Assay

    PubMed Central

    Matsukuma, Shoichi; Yoshihara, Mitsuyo; Kasai, Fumio; Kato, Akinori; Yoshida, Akira; Akaike, Makoto; Kobayashi, Osamu; Nakayama, Haruhiko; Sakuma, Yuji; Yoshida, Tsutomu; Kameda, Yoichi; Tsuchiya, Eiju; Miyagi, Yohei

    2006-01-01

    A simple and rapid method to detect the epidermal growth factor receptor hot spot mutation L858R in lung adenocarcinoma was developed based on principles similar to the universal heteroduplex generator technology. A single-stranded oligonucleotide with an internal deletion was used to generate heteroduplexes (loop-hybrids) bearing a loop in the complementary strand derived from the polymerase chain reaction product of the normal or mutant allele. By placing deletion in the oligonucleotide adjacent to the mutational site, difference in electrophoretic mobility between loop-hybrids with normal and mutated DNA was distinguishable in a native polyacrylamide gel. The method was also modified to detect in-frame deletion mutations of epidermal growth factor receptor in lung adenocarcinomas. In addition, the method was adapted to detect hot spot mutations in the B-type Raf kinase (BRAF) at V600 and in a Ras-oncogene (NRAS) at Q61, the mutations commonly found in thyroid carcinomas. Our mutation detection system, designated the loop-hybrid mobility shift assay was sensitive enough to detect mutant DNA comprising 7.5% of the total DNA. As a simple and straightforward mutation detection technique, loop-hybrid mobility shift assay may be useful for the molecular diagnosis of certain types of clinical cancers. Other applications are also discussed. PMID:16931592

  13. A new potential for the numerical simulations of electrolyte solutions on a hypersphere

    NASA Astrophysics Data System (ADS)

    Caillol, Jean-Michel

    1993-12-01

    We propose a new way of performing numerical simulations of the restricted primitive model of electrolytes—and related models—on a hypersphere. In this new approach, the system is viewed as a single component fluid of charged bihard spheres constrained to move at the surface of a four dimensional sphere. A charged bihard sphere is defined as the rigid association of two antipodal charged hard spheres of opposite signs. These objects interact via a simple analytical potential obtained by solving the Poisson-Laplace equation on the hypersphere. This new technique of simulation enables a precise determination of the chemical potential of the charged species in the canonical ensemble by a straightforward application of Widom's insertion method. Comparisons with previous simulations demonstrate the efficiency and the reliability of the method.

  14. Large-Scale Surfactant-Free Synthesis of p-Type SnTe Nanoparticles for Thermoelectric Applications

    PubMed Central

    Han, Guang; Zhang, Ruizhi; Popuri, Srinivas R.; Greer, Heather F.; Reece, Michael J.; Bos, Jan-Willem G.; Zhou, Wuzong; Knox, Andrew R.; Gregory, Duncan H.

    2017-01-01

    A facile one-pot aqueous solution method has been developed for the fast and straightforward synthesis of SnTe nanoparticles in more than ten gram quantities per batch. The synthesis involves boiling an alkaline Na2SnO2 solution and a NaHTe solution for short time scales, in which the NaOH concentration and reaction duration play vital roles in controlling the phase purity and particle size, respectively. Spark plasma sintering of the SnTe nanoparticles produces nanostructured compacts that have a comparable thermoelectric performance to bulk counterparts synthesised by more time- and energy-intensive methods. This approach, combining an energy-efficient, surfactant-free solution synthesis with spark plasma sintering, provides a simple, rapid, and inexpensive route to p-type SnTe nanostructured materials. PMID:28772593

  15. Analysis of air-, moisture- and solvent-sensitive chemical compounds by mass spectrometry using an inert atmospheric pressure solids analysis probe.

    PubMed

    Mosely, Jackie A; Stokes, Peter; Parker, David; Dyer, Philip W; Messinis, Antonis M

    2018-02-01

    A novel method has been developed that enables chemical compounds to be transferred from an inert atmosphere glove box and into the atmospheric pressure ion source of a mass spectrometer whilst retaining a controlled chemical environment. This innovative method is simple and cheap to implement on some commercially available mass spectrometers. We have termed this approach inert atmospheric pressure solids analysis probe (iASAP) and demonstrate the benefit of this methodology for two air-/moisture-sensitive chemical compounds whose characterisation by mass spectrometry is now possible and easily achieved. The simplicity of the design means that moving between iASAP and standard ASAP is straightforward and quick, providing a highly flexible platform with rapid sample turnaround.

  16. Preparation and Luminescence Thermochromism of Tetranuclear Copper(I)-Pyridine-Iodide Clusters

    ERIC Educational Resources Information Center

    Parmeggiani, Fabio; Sacchetti, Alessandro

    2012-01-01

    A simple and straightforward synthesis of a tetranuclear copper(I)-pyridine-iodide cluster is described as a laboratory experiment for advanced inorganic chemistry undergraduate students. The product is used to demonstrate the fascinating and visually impressive phenomenon of luminescence thermochromism: exposed to long-wave UV light, the…

  17. The American Indians: Answers to 101 Questions.

    ERIC Educational Resources Information Center

    Bureau of Indian Affairs (Dept. of Interior), Washington, DC.

    Presented in a simple and straightforward manner, this publication answers questions basic to an understanding of the American Indian and his socioeconomic position in the United States. The following identify major areas covered and representative questions: (1) The Indian People (Who is an Indian?); (2) The Legal Status of Indians (Are Indians…

  18. The Food-Safe Schools Action Guide

    ERIC Educational Resources Information Center

    Centers for Disease Control and Prevention, 2007

    2007-01-01

    "The Food-Safe School Needs Assessment and Planning Guide" is a tool that can help schools assess their food safety policies, procedures, and programs and develop plans for improvement. This tool includes a simple, straightforward questionnaire, score card, and planning guide that give administrators, school staff, families, and students a chance…

  19. Children's Ability to Comprehend Main Ideas After Reading Expository Prose.

    ERIC Educational Resources Information Center

    Baumann, James F.

    A study was conducted to evaluate children's ability to comprehend main ideas after reading connected discourse and to develop and validate a straightforward and intuitively simple system for identifying main ideas in prose. Three experimental passages were randomly selected from third and sixth grade social studies textbooks, and education…

  20. Start Smart! Building Brain Power in the Early Years.

    ERIC Educational Resources Information Center

    Schiller, Pam

    Noting current brain development research, this book offers simple, straightforward ways to boost children's brain power with active exploration, repetition, sensory exploration, laughter, and more. The chapters describe how and why the brain develops and explain how parents can give their children the best foundation for future learning.…

  21. Methods for comparing 3D surface attributes

    NASA Astrophysics Data System (ADS)

    Pang, Alex; Freeman, Adam

    1996-03-01

    A common task in data analysis is to compare two or more sets of data, statistics, presentations, etc. A predominant method in use is side-by-side visual comparison of images. While straightforward, it burdens the user with the task of discerning the differences between the two images. The user is further taxed when the images are of 3D scenes. This paper presents several methods for analyzing the extent, magnitude, and manner in which surfaces in 3D differ in their attributes. The surface geometry is assumed to be identical and only the surface attributes (color, texture, etc.) are variable. As a case in point, we examine the differences obtained when a 3D scene is rendered progressively using radiosity with different form factor calculation methods. The comparison methods include extensions of simple methods such as mapping difference information to color or transparency, and more recent methods including the use of surface texture, perturbation, and adaptive placement of error glyphs.
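
The simplest method mentioned, mapping difference information to color, might look like the following per-vertex sketch; the diverging blue-white-red scale and the sample values are our assumptions, not the paper's implementation:

```python
# Hypothetical sketch: map the signed per-vertex attribute difference of
# two renderings of the same geometry onto a diverging color scale.

def diff_to_color(a, b, max_abs):
    """Map (a - b) to an RGB triple in [0, 1]:
    blue where a < b, white where equal, red where a > b."""
    t = max(-1.0, min(1.0, (a - b) / max_abs))   # clamp to [-1, 1]
    if t >= 0:
        return (1.0, 1.0 - t, 1.0 - t)           # fade white -> red
    return (1.0 + t, 1.0 + t, 1.0)               # fade white -> blue

# Attribute values (e.g. radiosity) at the same vertices of two solutions
pairs = [(0.9, 0.4), (0.5, 0.5), (0.2, 0.45)]
colors = [diff_to_color(a, b, max_abs=0.5) for a, b in pairs]
```

Mapping to transparency instead would replace the RGB ramp with an alpha value proportional to |t|.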

  1. Straightforward Inference of Ancestry and Admixture Proportions through Ancestry-Informative Insertion Deletion Multiplexing

    PubMed Central

    Pereira, Rui; Phillips, Christopher; Pinto, Nádia; Santos, Carla; dos Santos, Sidney Emanuel Batista; Amorim, António; Carracedo, Ángel; Gusmão, Leonor

    2012-01-01

    Ancestry-informative markers (AIMs) show high allele frequency divergence between different ancestral or geographically distant populations. These genetic markers are especially useful in inferring the likely ancestral origin of an individual or estimating the apportionment of ancestry components in admixed individuals or populations. The study of AIMs is of great interest in clinical genetics research, particularly to detect and correct for population substructure effects in case-control association studies, but also in population and forensic genetics studies. This work presents a set of 46 ancestry-informative insertion deletion polymorphisms selected to efficiently measure population admixture proportions of four different origins (African, European, East Asian and Native American). All markers are analyzed in short fragments (under 230 basepairs) through a single PCR followed by capillary electrophoresis (CE) allowing a very simple one tube PCR-to-CE approach. HGDP-CEPH diversity panel samples from the four groups, together with Oceanians, were genotyped to evaluate the efficiency of the assay in clustering populations from different continental origins and to establish reference databases. In addition, other populations from diverse geographic origins were tested using the HGDP-CEPH samples as reference data. The results revealed that the AIM-INDEL set developed is highly efficient at inferring the ancestry of individuals and provides good estimates of ancestry proportions at the population level. In conclusion, we have optimized the multiplexed genotyping of 46 AIM-INDELs in a simple and informative assay, enabling a more straightforward alternative to the commonly available AIM-SNP typing methods dependent on complex, multi-step protocols or implementation of large-scale genotyping technologies. PMID:22272242

  2. MUSTA fluxes for systems of conservation laws

    NASA Astrophysics Data System (ADS)

    Toro, E. F.; Titarev, V. A.

    2006-08-01

    This paper is about numerical fluxes for hyperbolic systems and we first present a numerical flux, called GFORCE, that is a weighted average of the Lax-Friedrichs and Lax-Wendroff fluxes. For the linear advection equation with constant coefficient, the new flux reduces identically to that of the Godunov first-order upwind method. Then we incorporate GFORCE in the framework of the MUSTA approach [E.F. Toro, Multi-Stage Predictor-Corrector Fluxes for Hyperbolic Equations. Technical Report NI03037-NPA, Isaac Newton Institute for Mathematical Sciences, University of Cambridge, UK, 17th June, 2003], resulting in a version that we call GMUSTA. For non-linear systems this gives results that are comparable to those of the Godunov method in conjunction with the exact Riemann solver or complete approximate Riemann solvers, noting however that in our approach, the solution of the Riemann problem in the conventional sense is avoided. Both the GFORCE and GMUSTA fluxes are extended to multi-dimensional non-linear systems in a straightforward unsplit manner, resulting in linearly stable schemes that have the same stability regions as the straightforward multi-dimensional extension of Godunov's method. The methods are applicable to general meshes. The schemes of this paper share with the family of centred methods the common properties of being simple and applicable to a large class of hyperbolic systems, but the schemes of this paper are distinctly more accurate. Finally, we proceed to the practical implementation of our numerical fluxes in the framework of high-order finite volume WENO methods for multi-dimensional non-linear hyperbolic systems. Numerical results are presented for the Euler equations and for the equations of magnetohydrodynamics.
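
The GFORCE construction can be made concrete for linear advection u_t + a u_x = 0, flux F(u) = a u, where a weighted average of the Lax-Friedrichs and two-step Lax-Wendroff fluxes with weight ω = 1/(1 + c), c the Courant number, reduces identically to the Godunov upwind flux, as the abstract states. A minimal sketch under that standard formulation (Toro's report gives the general nonlinear version):

```python
def gforce_flux(uL, uR, a, dt, dx):
    """GFORCE intercell flux for linear advection with speed a."""
    lam = dt / dx
    c = a * lam                                   # Courant number
    fL, fR = a * uL, a * uR
    f_lf = 0.5 * (fL + fR) - 0.5 / lam * (uR - uL)   # Lax-Friedrichs flux
    u_lw = 0.5 * (uL + uR) - 0.5 * lam * (fR - fL)   # LW midpoint state
    f_lw = a * u_lw                                  # Lax-Wendroff flux
    omega = 1.0 / (1.0 + c)                          # GFORCE weight
    return omega * f_lw + (1.0 - omega) * f_lf

# For a > 0 the Godunov (upwind) flux is a * uL; GFORCE matches it.
flux = gforce_flux(2.0, -1.0, a=1.5, dt=0.4, dx=1.0)   # c = 0.6
```

No Riemann problem is solved anywhere in the evaluation, which is the point of the centred MUSTA/GFORCE framework.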

  3. Diketopyrrolopyrrole-based carbon dots for photodynamic therapy.

    PubMed

    He, Haozhe; Zheng, Xiaohua; Liu, Shi; Zheng, Min; Xie, Zhigang; Wang, Yong; Yu, Meng; Shuai, Xintao

    2018-06-01

    The development of a simple and straightforward strategy to synthesize multifunctional carbon dots for photodynamic therapy (PDT) has been an emerging focus. In this work, diketopyrrolopyrrole-based fluorescent carbon dots (DPP CDs) were designed and synthesized through a facile one-pot hydrothermal method by using diketopyrrolopyrrole (DPP) and chitosan (CTS) as raw materials. DPP CDs not only maintained the ability of DPP to generate singlet oxygen (¹O₂) but also have excellent hydrophilic properties and outstanding biocompatibility. In vitro and in vivo experiments demonstrated that DPP CDs greatly inhibited the growth of tumor cells under laser irradiation (540 nm). This study highlights the potential of the rational design of CDs for efficient cancer therapy.

  4. Sutureless Technique to Fix the Great Saphenous Vein along the Atrioventricular Groove Using Fibrin Glue in Off-Pump Coronary Artery Bypass Grafting.

    PubMed

    Ohira, Suguru; Doi, Kiyoshi; Yaku, Hitoshi

    2016-04-05

    We describe a simple method to fix the great saphenous vein graft (SVG) to the right coronary artery along the atrioventricular groove using fibrin glue in off-pump coronary artery bypass grafting (OPCAB). After completion of the proximal anastomosis, the SVG was placed along the atrioventricular groove to the acute margin. Fibrin glue was sprayed using pressurized carbon dioxide gas. A distal anastomosis was subsequently performed after rotating the heart to expose the posterior descending artery. It is a straightforward and reproducible technique to determine the optimal length of the SVG and prevent kinking or stretching of the graft, especially in OPCAB.

  5. Straightforward synthesis of non-natural L-chalcogen and L-diselenide N-Boc-protected-γ-amino acid derivatives.

    PubMed

    Kawasoko, Cristiane Y; Foletto, Patricia; Rodrigues, Oscar E D; Dornelles, Luciano; Schwab, Ricardo S; Braga, Antonio L

    2013-08-21

    The synthesis of new chiral seleno-, telluro-, and thio-N-Boc-γ-amino acids is described herein. These new compounds were prepared through a simple and short synthetic route, from the inexpensive and commercially-available amino acid L-glutamic acid. The products, with a highly modular character, were obtained in good to excellent yields, via hydrolysis of chalcogen pyroglutamic derivatives with overall retention of the L-glutamic acid stereochemistry. Also, an L-diselenide-N-Boc-γ-amino acid was prepared in good yield. This new synthetic route represents an efficient method for preparing new L-chalcogen- and L-diselenide-γ-amino acids with biological potential.

  6. Multiwavelength metasurfaces through spatial multiplexing

    DOE PAGES

    Arbabi, Ehsan; Arbabi, Amir; Kamali, Seyedeh Mahsa; ...

    2016-09-06

    Metasurfaces are two-dimensional arrangements of optical scatterers rationally arranged to control optical wavefronts. Despite the significant advances made in wavefront engineering through metasurfaces, most of these devices are designed for and operate at a single wavelength. Here we show that spatial multiplexing schemes can be applied to increase the number of operation wavelengths. We use a high-contrast dielectric transmitarray platform with amorphous silicon nano-posts to demonstrate polarization-insensitive metasurface lenses with a numerical aperture of 0.46 that focus light at 915 and 1550 nm to the same focal distance. We investigate two different methods, one based on large-scale segmentation and one on meta-atom interleaving, and compare their performances. An important feature of this method is its simple generalization to adding more wavelengths or new functionalities to a device. Furthermore, it provides a relatively straightforward method for achieving multi-functional and multiwavelength metasurface devices.

  7. Protecting Privacy of Shared Epidemiologic Data without Compromising Analysis Potential

    DOE PAGES

    Cologne, John; Grant, Eric J.; Nakashima, Eiji; ...

    2012-01-01

    Objective. Ensuring privacy of research subjects when epidemiologic data are shared with outside collaborators involves masking (modifying) the data, but overmasking can compromise utility (analysis potential). Methods of statistical disclosure control for protecting privacy may be impractical for individual researchers involved in small-scale collaborations. Methods. We investigated a simple approach based on measures of disclosure risk and analytical utility that are straightforward for epidemiologic researchers to derive. The method is illustrated using data from the Japanese Atomic-bomb Survivor population. Results. Masking by modest rounding did not adequately enhance security but rounding to remove several digits of relative accuracy effectively reduced the risk of identification without substantially reducing utility. Grouping or adding random noise led to noticeable bias. Conclusions. When sharing epidemiologic data, it is recommended that masking be performed using rounding. Specific treatment should be determined separately in individual situations after consideration of the disclosure risks and analysis needs.

  8. Protecting Privacy of Shared Epidemiologic Data without Compromising Analysis Potential

    PubMed Central

    Cologne, John; Grant, Eric J.; Nakashima, Eiji; Chen, Yun; Funamoto, Sachiyo; Katayama, Hiroaki

    2012-01-01

    Objective. Ensuring privacy of research subjects when epidemiologic data are shared with outside collaborators involves masking (modifying) the data, but overmasking can compromise utility (analysis potential). Methods of statistical disclosure control for protecting privacy may be impractical for individual researchers involved in small-scale collaborations. Methods. We investigated a simple approach based on measures of disclosure risk and analytical utility that are straightforward for epidemiologic researchers to derive. The method is illustrated using data from the Japanese Atomic-bomb Survivor population. Results. Masking by modest rounding did not adequately enhance security but rounding to remove several digits of relative accuracy effectively reduced the risk of identification without substantially reducing utility. Grouping or adding random noise led to noticeable bias. Conclusions. When sharing epidemiologic data, it is recommended that masking be performed using rounding. Specific treatment should be determined separately in individual situations after consideration of the disclosure risks and analysis needs. PMID:22505949
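    The rounding-based masking recommended in this study can be sketched in a few lines; the `round_sig` helper and the sample values below are illustrative inventions, not taken from the paper.

```python
import math

def round_sig(x, sig):
    """Mask a value by rounding it to `sig` significant digits."""
    if x == 0:
        return 0.0
    return round(x, sig - int(math.floor(math.log10(abs(x)))) - 1)

# Masking quasi-identifying numeric values to two significant digits
# removes several digits of relative accuracy while preserving scale.
doses = [1.2345, 0.06789, 153.2]
masked = [round_sig(d, 2) for d in doses]  # [1.2, 0.068, 150.0]
```

Rounding to significant digits (rather than a fixed decimal place) applies a uniform relative coarsening across values of very different magnitudes.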

  9. 3D Documentation and BIM Modeling of Cultural Heritage Structures Using UAVs: The Case of the Foinikaria Church

    NASA Astrophysics Data System (ADS)

    Themistocleous, K.; Agapiou, A.; Hadjimitsis, D.

    2016-10-01

    The documentation of architectural cultural heritage sites has traditionally been expensive and labor-intensive. New innovative technologies, such as Unmanned Aerial Vehicles (UAVs), provide an affordable, reliable and straightforward method of capturing cultural heritage sites, thereby providing a more efficient and sustainable approach to documentation of cultural heritage structures. In this study, hundreds of images of the Panagia Chryseleousa church in Foinikaria, Cyprus were taken using a UAV with an attached high-resolution camera. The images were processed to generate an accurate digital 3D model using Structure from Motion techniques. A Building Information Model (BIM) was then used to generate drawings of the church. The methodology described in the paper provides an accurate, simple and cost-effective method of documenting cultural heritage sites and generating digital 3D models using novel techniques and innovative methods.

  10. Quantifying errors without random sampling.

    PubMed

    Phillips, Carl V; LaPole, Luwanna M

    2003-06-12

    All quantifications of mortality, morbidity, and other health measures involve numerous sources of error. The routine quantification of random sampling error makes it easy to forget that other sources of error can and should be quantified. When a quantification does not involve sampling, error is almost never quantified and results are often reported in ways that dramatically overstate their precision. We argue that the precision implicit in typical reporting is problematic and sketch methods for quantifying the various sources of error, building up from simple examples that can be solved analytically to more complex cases. There are straightforward ways to partially quantify the uncertainty surrounding a parameter that is not characterized by random sampling, such as limiting reported significant figures. We present simple methods for doing such quantifications, and for incorporating them into calculations. More complicated methods become necessary when multiple sources of uncertainty must be combined. We demonstrate that Monte Carlo simulation, using available software, can estimate the uncertainty resulting from complicated calculations with many sources of uncertainty. We apply the method to the current estimate of the annual incidence of foodborne illness in the United States. Quantifying uncertainty from systematic errors is practical. Reporting this uncertainty would more honestly represent study results, help show the probability that estimated values fall within some critical range, and facilitate better targeting of further research.
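    A minimal Monte Carlo propagation of non-sampling uncertainty in this spirit might look like the following; the input distributions, bounds, and the `simulate_total` name are invented for illustration and are not the paper's foodborne-illness model.

```python
import random
import statistics

def simulate_total(n_draws=50_000, seed=42):
    """Propagate uncertainty in two multiplicative inputs by simulation,
    returning a 95% interval and the median of the simulated totals."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_draws):
        reported = rng.uniform(8_000, 12_000)   # reported case count (uncertain)
        multiplier = rng.uniform(10, 40)        # under-reporting factor (uncertain)
        totals.append(reported * multiplier)
    totals.sort()
    lo = totals[int(0.025 * n_draws)]
    hi = totals[int(0.975 * n_draws)]
    return lo, statistics.median(totals), hi
```

Reporting the resulting interval rather than a single point estimate avoids the false precision the authors criticize.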

  11. Increasing the Drive of Your Physics Class

    ERIC Educational Resources Information Center

    Eisenstein, Stanley

    2008-01-01

    First-year physics students often have a difficult time grasping Newton's laws of motion and recognizing the forces that these laws depend on. The "Paper Car" project is an experiential activity that is rich in application of force principles. It is also simple enough that students are able to integrate straightforward but non-trivial physics…

  12. Work in Progress

    ERIC Educational Resources Information Center

    Tetlow, Linda

    2009-01-01

    Emergency shelters for disaster relief are an ever present necessity. Anyone who has ever camped in a tent will understand that finding the perfect design, that will be simple and quick to erect, be stable in a variety of weather conditions, and will accommodate a number of people sleeping, sitting or even standing is not straightforward and…

  13. ACED IT: A Tool for Improved Ethical and Moral Decision-Making

    ERIC Educational Resources Information Center

    Kreitler, Crystal Mata; Stenmark, Cheryl K.; Rodarte, Allen M.; Piñón DuMond, Rebecca

    2014-01-01

    Numerous examples of unethical organizational decision-making highlighted in the media have led many to question the general moral perception and ethical judgments of individuals. The present study examined two forms of a straightforward ethical decision-making (EDM) tool (ACED IT cognitive map) that could be a relatively simple instrument for…

  14. Development and Application of Multidisciplinary Sustainability Metrics to Environmental Management in the San Luis Basin in Colorado at AESS

    EPA Science Inventory

    A pilot project was initiated to create an approach to measure, monitor, and maintain prosperity and environmental quality within a regional system. The goal was to produce a scientifically defensible but straightforward and inexpensive methodology that is simple to use and int...

  15. Identities for Generalized Fibonacci Numbers: A Combinatorial Approach

    ERIC Educational Resources Information Center

    Plaza, A.; Falcon, S.

    2008-01-01

    This note shows a combinatorial approach to some identities for generalized Fibonacci numbers. While it is a straightforward task to prove these identities with induction, and also by arithmetical manipulations such as rearrangements, the approach used here is quite simple to follow and eventually reduces the proof to a counting problem. (Contains…
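    One member of this family of identities for the ordinary Fibonacci numbers, F(m+n) = F(m)F(n+1) + F(m-1)F(n), can be checked numerically; the quick verification below is my illustration, not material from the note itself.

```python
def fib(n):
    """Ordinary Fibonacci numbers with F(0) = 0, F(1) = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Convolution identity: F(m+n) = F(m)*F(n+1) + F(m-1)*F(n)
for m in range(2, 10):
    for n in range(1, 10):
        assert fib(m + n) == fib(m) * fib(n + 1) + fib(m - 1) * fib(n)
```

The combinatorial proof interprets both sides as counts of tilings; the loop above merely confirms the identity on small cases.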

  16. Meals without Squeals: Child Care Feeding Guide and Cookbook.

    ERIC Educational Resources Information Center

    Berman, Christine; Fromer, Jacki

    Simple, straightforward information on child nutrition and growth is offered in this child care feeding guide and cookbook. The book contains clear, easy-to-read menus and recipes, provides solutions to common feeding problems, and shows ways to offer children positive learning experiences with food. Chapter 1 gives an overview to important issues…

  17. Teaching Note: Intimacy Timelines as a Tool for Teaching Feminism

    ERIC Educational Resources Information Center

    Briggs, Lindsay

    2017-01-01

    This essay will describe one activity that the author uses in her human sexuality course to illustrate how patriarchal systems have affected the experiences of females and males across the sexual lifespan. Through this fairly simple and straightforward activity students are able to utilize common experiences and knowledge of real-world issues and…

  18. Connect the Dots: A Dedicated System for Learning Links Teacher Teams to Student Outcomes

    ERIC Educational Resources Information Center

    Ermeling, Bradley A.

    2012-01-01

    Establishing school-based professional learning appears so simple and straightforward during inspiring presentations at summer workshops, but keeping collaborative work focused on teaching and learning in such a way that it produces consistent results is a highly underestimated task. Investigations and experience from a group of researchers at the…

  19. A symmetric multivariate leakage correction for MEG connectomes

    PubMed Central

    Colclough, G.L.; Brookes, M.J.; Smith, S.M.; Woolrich, M.W.

    2015-01-01

    Ambiguities in the source reconstruction of magnetoencephalographic (MEG) measurements can cause spurious correlations between estimated source time-courses. In this paper, we propose a symmetric orthogonalisation method to correct for these artificial correlations between a set of multiple regions of interest (ROIs). This process enables the straightforward application of network modelling methods, including partial correlation or multivariate autoregressive modelling, to infer connectomes, or functional networks, from the corrected ROIs. Here, we apply the correction to simulated MEG recordings of simple networks and to a resting-state dataset collected from eight subjects, before computing the partial correlations between power envelopes of the corrected ROI time-courses. We show accurate reconstruction of our simulated networks, and in the analysis of real MEG resting-state connectivity, we find dense bilateral connections within the motor and visual networks, together with longer-range direct fronto-parietal connections. PMID:25862259
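    The core of a symmetric (Löwdin-style) orthogonalisation can be sketched with an SVD; this is a generic sketch of the idea, assuming the standard closest-orthonormal-rows construction, and is not the authors' released implementation (which additionally handles scaling).

```python
import numpy as np

def symmetric_orthogonalise(X):
    """Return the closest (least-squares) set of mutually orthonormal
    row time-courses to the rows of X, via the polar factor of the SVD."""
    U, _, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ Vt

rng = np.random.default_rng(0)
rois = rng.standard_normal((3, 200))       # 3 "leaky" ROI time-courses
corrected = symmetric_orthogonalise(rois)  # rows now exactly orthonormal
```

Because no ROI is privileged as the reference, the correction treats all regions symmetrically, unlike pairwise regression-based leakage removal.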

  20. Environmentally-friendly lithium recycling from a spent organic li-ion battery.

    PubMed

    Renault, Stéven; Brandell, Daniel; Edström, Kristina

    2014-10-01

    A simple and straightforward method using non-polluting solvents and a single thermal treatment step at moderate temperature was investigated as an environmentally-friendly process to recycle lithium from organic electrode materials for secondary lithium batteries. This method, highly dependent on the choice of electrolyte, gives up to 99% of sustained capacity for the recycled materials used in a second-life battery when compared with the original. The best results were obtained using a dimethyl carbonate/lithium bis(trifluoromethane sulfonyl) imide electrolyte that does not decompose in the presence of water. The process involves a thermal decomposition step at moderate temperature that converts the extracted organic material into lithium carbonate, which is then used as a lithiation agent for the preparation of fresh electrode material without loss of lithium. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Shape-selective synthesis of non-micellar cobalt oxide (CoO) nanomaterials by microwave irradiations

    NASA Astrophysics Data System (ADS)

    Kundu, Subrata; Jayachandran, M.

    2013-04-01

    Shape-selective formation of CoO nanoparticles has been developed using a simple one-step in situ non-micellar microwave (MW) heating method. CoO NPs were synthesized by mixing aqueous CoCl2·6H2O solution with poly(vinyl alcohol) (PVA) in the presence of sodium hydroxide (NaOH). The reaction mixture was irradiated with MW for a total time of 2 min. This process exclusively generated different shapes such as nanospheres, nanosheets, and nanodendrites simply by tuning the Co(II) ion to PVA molar ratios and controlling other reaction parameters. The proposed synthesis method is efficient, straightforward, reproducible, and robust. Beyond catalysis, these cobalt oxide nanomaterials can be used for making pigments and battery materials, for developing solid-state sensors, and as an anisotropy source for magnetic recording.

  2. Parallel cascade selection molecular dynamics for efficient conformational sampling and free energy calculation of proteins

    NASA Astrophysics Data System (ADS)

    Kitao, Akio; Harada, Ryuhei; Nishihara, Yasutaka; Tran, Duy Phuoc

    2016-12-01

    Parallel Cascade Selection Molecular Dynamics (PaCS-MD) was proposed as an efficient conformational sampling method to investigate conformational transition pathway of proteins. In PaCS-MD, cycles of (i) selection of initial structures for multiple independent MD simulations and (ii) conformational sampling by independent MD simulations are repeated until the convergence of the sampling. The selection is conducted so that protein conformation gradually approaches a target. The selection of snapshots is a key to enhance conformational changes by increasing the probability of rare event occurrence. Since the procedure of PaCS-MD is simple, no modification of MD programs is required; the selections of initial structures and the restart of the next cycle in the MD simulations can be handled with relatively simple scripts with straightforward implementation. Trajectories generated by PaCS-MD were further analyzed by the Markov state model (MSM), which enables calculation of free energy landscape. The combination of PaCS-MD and MSM is reported in this work.

  3. Berimbau: A simple instrument for teaching basic concepts in the physics and psychoacoustics of music

    NASA Astrophysics Data System (ADS)

    Vilão, Rui C.; Melo, Santino L. S.

    2014-12-01

    We address the production of musical tones by a simple musical instrument of the Brazilian tradition: the berimbau-de-barriga. The vibration physics of the string and of the air mass inside the gourd are reviewed. Straightforward measurements of an actual berimbau, which illustrate the basic physical phenomena, are performed using a PC-based "soundcard oscilloscope." The inharmonicity of the string and the role of the gourd are discussed in the context of known results in the psychoacoustics of pitch definition.

  4. A signal-flow-graph approach to on-line gradient calculation.

    PubMed

    Campolucci, P; Uncini, A; Piazza, F

    2000-08-01

    A large class of nonlinear dynamic adaptive systems such as dynamic recurrent neural networks can be effectively represented by signal flow graphs (SFGs). By this method, complex systems are described as a general connection of many simple components, each of them implementing a simple one-input, one-output transformation, as in an electrical circuit. Even if graph representations are popular in the neural network community, they are often used for qualitative description rather than for rigorous representation and computational purposes. In this article, a method for both on-line and batch-backward gradient computation of a system output or cost function with respect to system parameters is derived by the SFG representation theory and its known properties. The system can be any causal, in general nonlinear and time-variant, dynamic system represented by an SFG, in particular any feedforward, time-delay, or recurrent neural network. In this work, we use discrete-time notation, but the same theory holds for the continuous-time case. The gradient is obtained in a straightforward way by the analysis of two SFGs, the original one and its adjoint (obtained from the first by simple transformations), without the complex chain rule expansions of derivatives usually employed. This method can be used for sensitivity analysis and for learning both off-line and on-line. On-line learning is particularly important since it is required by many real applications, such as digital signal processing, system identification and control, channel equalization, and predistortion.

  5. Predicting the response of seven Asian glaciers to future climate scenarios using a simple linear glacier model

    NASA Astrophysics Data System (ADS)

    Ren, Diandong; Karoly, David J.

    2008-03-01

    Observations from seven Central Asian glaciers (35-55°N; 70-95°E) are used, together with regional temperature data, to infer uncertain parameters for a simple linear model of the glacier length variations. The glacier model is based on first-order glacier dynamics and requires knowledge of reference states of forcing and glacier perturbation magnitude. An adjoint-based variational method is used to optimally determine the glacier reference states in 1900 and the uncertain glacier model parameters. The simple glacier model is then used to estimate the glacier length variations until 2060 using regional temperature projections from an ensemble of climate model simulations for a future climate change scenario (SRES A2). For the period 2000-2060, all glaciers are projected to experience substantial further shrinkage, especially those with gentle slopes (e.g., Glacier Chogo Lungma retreats ~4 km). Although some small glaciers are projected to lose nearly one-third of their year-2000 length, the existence of the glaciers studied here is not threatened by the year 2060. The differences between the individual glacier responses are large. No straightforward relationship is found between glacier size and the projected fractional change of its length.

  6. Direct Monte Carlo simulation of chemical reaction systems: Simple bimolecular reactions

    NASA Astrophysics Data System (ADS)

    Piersall, Shannon D.; Anderson, James B.

    1991-07-01

    In applications to several simple reaction systems we have explored a "direct simulation" method for predicting and understanding the behavior of gas phase chemical reaction systems. This Monte Carlo method, originated by Bird, has been found remarkably successful in treating a number of difficult problems in rarefied gas dynamics. Extension to chemical reactions offers a powerful tool for treating reaction systems with nonthermal distributions, with coupled gas-dynamic and reaction effects, with emission and absorption of radiation, and with many other effects difficult to treat in any other way. The usual differential equations of chemical kinetics are eliminated. For a bimolecular reaction of the type A+B→C+D with a rate sufficiently low to allow a continued thermal equilibrium of reactants we find that direct simulation reproduces the expected second-order kinetics. Simulations for a range of temperatures yield the activation energies expected for the reaction models specified. For faster reactions under conditions leading to a depletion of energetic reactant species, the expected slowing of reaction rates and departures from equilibrium distributions are observed. The minimum sample sizes required for adequate simulations are as low as 1000 molecules for these cases. The calculations are found to be simple and straightforward for the homogeneous systems considered. Although computation requirements may be excessively high for very slow reactions, they are reasonably low for fast reactions, for which nonequilibrium effects are most important.
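    Bird's direct-simulation machinery tracks molecular velocities and collisions in full; the flavour of particle-level stochastic kinetics for A + B → C + D can be conveyed by a much simpler Gillespie-style simulation. This is a deliberate substitution for illustration, not the paper's DSMC algorithm, and the rate constant and counts are invented.

```python
import random

def ssa_bimolecular(na, nb, k, t_end, seed=0):
    """Gillespie-style stochastic simulation of A + B -> C + D."""
    rng = random.Random(seed)
    t, nc = 0.0, 0
    while na > 0 and nb > 0:
        rate = k * na * nb              # propensity of the single channel
        t += rng.expovariate(rate)      # exponential waiting time to next event
        if t > t_end:
            break
        na, nb, nc = na - 1, nb - 1, nc + 1
    return na, nb, nc

# Second-order kinetics predicts n(t) = n0 / (1 + k*n0*t) for na == nb,
# i.e. roughly half the reactant pairs consumed in this run.
a, b, c = ssa_bimolecular(1000, 1000, k=1e-5, t_end=100.0)
```

Like the direct-simulation method described above, no rate differential equation is integrated; the kinetics emerge from the event-by-event simulation itself.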

  7. eCAF: A New Tool for the Conversational Analysis of Electronic Communication

    ERIC Educational Resources Information Center

    Duncan-Howell, Jennifer

    2009-01-01

    Electronic communication is characteristically concerned with "the message" (eM), those who send them (S), and those who receive and read them (R). This relationship could be simplified into the equation eM = S + R. When this simple equation is applied to electronic communication, several elements are added that make this straightforward act of…

  8. Untangling the Licensing Web and Other Copyright Questions

    ERIC Educational Resources Information Center

    Sparkler, Andrew; Poliniak, Susan

    2010-01-01

    Copyright law is a daunting subject for most lawyers, so it's no surprise that many music educators feel uneasy dealing with it as well. But in truth, obtaining permissions for using copyrighted works can be a very simple and straightforward process. This article walks the readers through the steps of obtaining permission for a fictional piece of…

  9. X-ray emission from reverse-shocked ejecta in supernova remnants

    NASA Technical Reports Server (NTRS)

    Cioffi, Denis F.; Mckee, Christopher F.

    1990-01-01

    A simple physical model of the dynamics of a young supernova remnant is used to derive a straightforward kinematical description of the reverse shock. With suitable approximations, formulae can then be developed to give the X-ray emission of the reverse-shocked ejecta. The results are found to agree favorably with observations of SN1006.

  10. Reading Redefined for a Transmedia Universe

    ERIC Educational Resources Information Center

    Lamb, Annette

    2011-01-01

    Once upon a time, reading was as simple and straightforward as decoding words on a page. No more. Digital age technologies have made such an impact on the way people interact with content that the old definitions of "reading" and "books" no longer apply. Times, as they say, are changing. The digital age is transforming nearly every aspect of one's…

  11. Is this truly an international journal? [Editorial

    Treesearch

    William M. Block

    2007-01-01

    Although the Journal includes papers on species and habitats occurring in different countries and on different continents, and authors are from various places, we question whether or not it is truly an international journal. Our reasoning here is simple and straightforward. We received a fair number of submissions from across the globe. We see the same proportion of...

  12. Possibilities: A Financial Resource for Parents of Children with Disabilities

    ERIC Educational Resources Information Center

    PACER Center, 2010

    2010-01-01

    This publication was created for middle-income parents of children under the age of 18 who have disabilities. It is a simple, straightforward resource to help them manage their money, and plan for them and their children's financial future and overall well-being. The financial management techniques presented here can help parents, not just in…

  13. Behind the Curtain: Assessing the Case for National Curriculum Standards. Policy Analysis. No. 661

    ERIC Educational Resources Information Center

    McCluskey, Neal

    2010-01-01

    The argument for national curriculum standards sounds simple: set high standards, make all schools meet them, and watch American students achieve at high levels. It is straightforward and compelling, and it is driving a sea change in American education policy. Unfortunately, setting high standards and getting American students to hit them is…

  14. Space-time least-squares finite element method for convection-reaction system with transformed variables

    PubMed Central

    Nam, Jaewook

    2011-01-01

    We present a method to solve a convection-reaction system based on a least-squares finite element method (LSFEM). For steady-state computations, issues related to recirculation flow are stated and demonstrated with a simple example. The method can compute concentration profiles in open flow even when the generation term is small. This is the case for estimating hemolysis in blood. Time-dependent flows are computed with the space-time LSFEM discretization. We observe that the computed hemoglobin concentration can become negative in certain regions of the flow, which is a physically unacceptable result. To prevent this, we propose a quadratic transformation of variables. The transformed governing equation can be solved in a straightforward way by LSFEM with no sign of unphysical behavior. The effect of localized high shear on blood damage is shown in a circular Couette-flow-with-blade configuration, and a physiological condition is tested in an arterial graft flow. PMID:21709752

  15. A method to deconvolve stellar rotational velocities II. The probability distribution function via Tikhonov regularization

    NASA Astrophysics Data System (ADS)

    Christen, Alejandra; Escarate, Pedro; Curé, Michel; Rial, Diego F.; Cassetti, Julia

    2016-10-01

    Aims: Knowing the distribution of stellar rotational velocities is essential for understanding stellar evolution. Because we measure the projected rotational speed v sin i, we need to solve an ill-posed problem given by a Fredholm integral of the first kind to recover the "true" rotational velocity distribution. Methods: After discretization of the Fredholm integral we apply the Tikhonov regularization method to obtain directly the probability distribution function for stellar rotational velocities. We propose a simple and straightforward procedure to determine the Tikhonov parameter. We applied Monte Carlo simulations to show that the Tikhonov method is a consistent and asymptotically unbiased estimator. Results: This method is applied to a sample of cluster stars. We obtain confidence intervals using a bootstrap method. Our results are in close agreement with those obtained using the Lucy method for recovering the probability density distribution of rotational velocities. Furthermore, the Lucy estimate lies inside our confidence interval. Conclusions: Tikhonov regularization is a highly robust method that deconvolves the rotational velocity probability density function from a sample of v sin i data directly without the need for any convergence criteria.
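    After discretization, the Tikhonov step reduces to a regularized least-squares solve. The sketch below shows that step in generic form; the identity kernel and test vector are stand-ins, not the paper's projection kernel or parameter-selection procedure.

```python
import numpy as np

def tikhonov_solve(K, y, lam):
    """Solve min_f ||K f - y||^2 + lam * ||f||^2 via the normal equations."""
    n = K.shape[1]
    return np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ y)

# Sanity checks on a trivially well-conditioned kernel: as lam -> 0 the
# unregularized solution returns; a large lam shrinks the solution.
K = np.eye(4)
y = np.array([1.0, 2.0, 3.0, 4.0])
f_small = tikhonov_solve(K, y, 1e-12)   # ~ y
f_large = tikhonov_solve(K, y, 1.0)     # ~ y / 2
```

For a genuinely ill-posed kernel, the regularization term is what keeps the solve stable; the choice of lam then trades bias against noise amplification, which is what the paper's selection procedure addresses.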

  16. Matching factorization theorems with an inverse-error weighting

    NASA Astrophysics Data System (ADS)

    Echevarria, Miguel G.; Kasemets, Tomas; Lansberg, Jean-Philippe; Pisano, Cristian; Signori, Andrea

    2018-06-01

    We propose a new fast method to match factorization theorems applicable in different kinematical regions, such as the transverse-momentum-dependent and the collinear factorization theorems in Quantum Chromodynamics. At variance with well-known approaches relying on their simple addition and subsequent subtraction of double-counted contributions, ours simply builds on their weighting using the theory uncertainties deduced from the factorization theorems themselves. This allows us to estimate the unknown complete matched cross section from an inverse-error-weighted average. The method is simple and provides an evaluation of the theoretical uncertainty of the matched cross section associated with the uncertainties from the power corrections to the factorization theorems (additional uncertainties, such as the nonperturbative ones, should be added for a proper comparison with experimental data). Its usage is illustrated with several basic examples, such as Z boson, W boson, H0 boson and Drell-Yan lepton-pair production in hadronic collisions, and compared to the state-of-the-art Collins-Soper-Sterman subtraction scheme. It is also not limited to the transverse-momentum spectrum, and can straightforwardly be extended to match any (un)polarized cross section differential in other variables, including multi-differential measurements.
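    The inverse-error-weighted average itself is elementary. In the sketch below, `da` and `db` stand for the two regime predictions and `ea`, `eb` for their power-correction uncertainties; the function and variable names are mine, not the paper's.

```python
import math

def inverse_error_match(da, ea, db, eb):
    """Combine two predictions, weighting each by its inverse squared theory
    uncertainty; also return the uncertainty of the combination."""
    wa, wb = 1.0 / ea**2, 1.0 / eb**2
    matched = (wa * da + wb * db) / (wa + wb)
    err = math.sqrt(1.0 / (wa + wb))
    return matched, err

# Equal uncertainties give the plain average; a much more precise
# prediction dominates the combination in its region of validity.
val, err = inverse_error_match(10.0, 1.0, 14.0, 1.0)   # -> (12.0, ~0.707)
```

Because the weights come from the factorization theorems' own error estimates, no double-counted contribution has to be subtracted by hand, which is the contrast the abstract draws with subtraction schemes.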

  17. Matching factorization theorems with an inverse-error weighting

    DOE PAGES

    Echevarria, Miguel G.; Kasemets, Tomas; Lansberg, Jean-Philippe; ...

    2018-04-03

    We propose a new fast method to match factorization theorems applicable in different kinematical regions, such as the transverse-momentum-dependent and the collinear factorization theorems in Quantum Chromodynamics. At variance with well-known approaches relying on their simple addition and subsequent subtraction of double-counted contributions, ours simply builds on their weighting using the theory uncertainties deduced from the factorization theorems themselves. This allows us to estimate the unknown complete matched cross section from an inverse-error-weighted average. The method is simple and provides an evaluation of the theoretical uncertainty of the matched cross section associated with the uncertainties from the power corrections to the factorization theorems (additional uncertainties, such as the nonperturbative ones, should be added for a proper comparison with experimental data). Its usage is illustrated with several basic examples, such as Z boson, W boson, H0 boson and Drell–Yan lepton-pair production in hadronic collisions, and compared to the state-of-the-art Collins–Soper–Sterman subtraction scheme. In conclusion, it is also not limited to the transverse-momentum spectrum, and can straightforwardly be extended to match any (un)polarized cross section differential in other variables, including multi-differential measurements.

  18. Matching factorization theorems with an inverse-error weighting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Echevarria, Miguel G.; Kasemets, Tomas; Lansberg, Jean-Philippe

    We propose a new fast method to match factorization theorems applicable in different kinematical regions, such as the transverse-momentum-dependent and the collinear factorization theorems in Quantum Chromodynamics. At variance with well-known approaches relying on their simple addition and subsequent subtraction of double-counted contributions, ours simply builds on their weighting using the theory uncertainties deduced from the factorization theorems themselves. This allows us to estimate the unknown complete matched cross section from an inverse-error-weighted average. The method is simple and provides an evaluation of the theoretical uncertainty of the matched cross section associated with the uncertainties from the power corrections to the factorization theorems (additional uncertainties, such as the nonperturbative ones, should be added for a proper comparison with experimental data). Its usage is illustrated with several basic examples, such as Z boson, W boson, H0 boson and Drell–Yan lepton-pair production in hadronic collisions, and compared to the state-of-the-art Collins–Soper–Sterman subtraction scheme. In conclusion, it is also not limited to the transverse-momentum spectrum, and can straightforwardly be extended to match any (un)polarized cross section differential in other variables, including multi-differential measurements.

  19. Fourier-based classification of protein secondary structures.

    PubMed

    Shu, Jian-Jun; Yong, Kian Yan

    2017-04-15

    The correct prediction of protein secondary structures is one of the key issues in predicting the correct protein folded shape, which is used for determining gene function. Existing methods make use of amino acid properties as indices to classify protein secondary structures, but are faced with a significant number of misclassifications. The paper presents a technique for the classification of protein secondary structures based on protein "signal-plotting" and the use of the Fourier technique for digital signal processing. New indices are proposed to classify protein secondary structures by analyzing hydrophobicity profiles. The approach is simple and straightforward. Results show that more types of protein secondary structures can be classified by means of these newly proposed indices. Copyright © 2017 Elsevier Inc. All rights reserved.
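    The spectral signature underlying such "signal-plotting" analyses can be illustrated with a toy example. The sketch below is not the authors' method; it only shows how a discrete Fourier transform exposes the ~3.6-residue period of an alpha-helix in a synthetic profile:

```python
import numpy as np

# Synthetic "hydrophobicity profile" with the ~3.6-residue period of an
# alpha-helix; a real profile would be built from amino-acid hydrophobicity
# indices, but a pure cosine makes the spectral signature obvious.
n = np.arange(36)
profile = np.cos(2 * np.pi * n / 3.6)

spectrum = np.abs(np.fft.rfft(profile)) ** 2
freqs = np.fft.rfftfreq(len(profile))           # cycles per residue
peak_freq = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
```

    A dominant peak near 1/3.6 ≈ 0.28 cycles per residue flags helical periodicity; sheet-like periods (~2 residues) would instead peak near 0.5.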

  20. Direct Alkynylation of 3H-Imidazo[4,5-b]pyridines Using gem-Dibromoalkenes as Alkynes Source.

    PubMed

    Aziz, Jessy; Baladi, Tom; Piguel, Sandrine

    2016-05-20

    C2 direct alkynylation of 3H-imidazo[4,5-b]pyridine derivatives is explored for the first time. Stable and readily available 1,1-dibromo-1-alkenes, electrophilic alkyne precursors, are used as coupling partners. The simple reaction conditions include an inexpensive copper catalyst (CuBr·SMe2 or Cu(OAc)2), a phosphine ligand (DPEphos) and a base (LiOtBu) in 1,4-dioxane at 120 °C. This C-H alkynylation method proved to be compatible with a variety of substituents on both coupling partners: heteroarenes and gem-dibromoalkenes. This protocol allows the straightforward synthesis of various 2-alkynyl-3H-imidazo[4,5-b]pyridines, a valuable scaffold in drug design.

  1. Ultrasound diagnosis of penile fracture.

    PubMed

    Nomura, Jason T; Sierzenski, Paul R

    2010-04-01

    Rupture of the corpus cavernosum, penile fracture, is an uncommon occurrence. Diagnosis is straightforward when classical historical and physical examination findings are present; however, atypical presentations can make the diagnosis difficult. We review the literature supporting the use of ultrasound for the diagnosis of penile fracture and the ultrasonographic findings in patients with penile fracture. A 32-year-old man presented with penile ecchymosis after sex but lacked several historical and physical examination elements for a diagnosis of penile fracture. Ultrasound performed by the treating physician revealed rupture of the tunica albuginea and the presence of a hematoma, leading to a diagnosis of penile fracture. Ultrasound is a simple, efficient, and non-invasive imaging method to assist in the diagnosis of penile fracture. Copyright 2010 Elsevier Inc. All rights reserved.

  2. PCTDSE: A parallel Cartesian-grid-based TDSE solver for modeling laser-atom interactions

    NASA Astrophysics Data System (ADS)

    Fu, Yongsheng; Zeng, Jiaolong; Yuan, Jianmin

    2017-01-01

    We present a parallel Cartesian-grid-based time-dependent Schrödinger equation (TDSE) solver for modeling laser-atom interactions. It can simulate the single-electron dynamics of atoms in arbitrary time-dependent vector potentials. We use a split-operator method combined with fast Fourier transforms (FFT) on a three-dimensional (3D) Cartesian grid. Parallelization is realized using a 2D decomposition strategy based on the Message Passing Interface (MPI) library, which results in good parallel scaling on modern supercomputers. We give simple applications for the hydrogen atom using benchmark problems from the literature and obtain reproducible results. Extensions to other laser-atom systems are straightforward, with minimal modifications of the source code.
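    The split-operator/FFT propagation such solvers are built on can be sketched in one dimension. The published code is 3D, Cartesian and MPI-parallel; the harmonic potential and grid parameters below are illustrative stand-ins (atomic units throughout):

```python
import numpy as np

# 1D split-operator time step: half potential step, full kinetic step in
# momentum space (via FFT), half potential step (Strang splitting).
N, L, dt = 256, 40.0, 0.01
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
V = 0.5 * x ** 2  # harmonic potential as a stand-in for the atomic potential

psi = np.exp(-x ** 2 / 2).astype(complex)      # Gaussian initial state
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)  # normalize

expV = np.exp(-0.5j * dt * V)        # half step in the potential
expT = np.exp(-0.5j * dt * k ** 2)   # full kinetic step, T = k**2 / 2

for _ in range(100):
    psi = expV * psi
    psi = np.fft.ifft(expT * np.fft.fft(psi))
    psi = expV * psi

norm = np.sum(np.abs(psi) ** 2) * dx  # should stay 1: the scheme is unitary
```

    Because each factor is a pure phase and the FFT pair is unitary, the norm is conserved to machine precision, a standard sanity check for this family of propagators.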

  3. Measuring droplet size distributions from overlapping interferometric particle images.

    PubMed

    Bocanegra Evans, Humberto; Dam, Nico; van der Voort, Dennis; Bertens, Guus; van de Water, Willem

    2015-02-01

    Interferometric particle imaging provides a simple way to measure the probability density function (PDF) of droplet sizes from out-focus images. The optical setup is straightforward, but the interpretation of the data is a problem when particle images overlap. We propose a new way to analyze the images. The emphasis is not on a precise identification of droplets, but on obtaining a good estimate of the PDF of droplet sizes in the case of overlapping particle images. The algorithm is tested using synthetic and experimental data. We next use these methods to measure the PDF of droplet sizes produced by spinning disk aerosol generators. The mean primary droplet diameter agrees with predictions from the literature, but we find a broad distribution of satellite droplet sizes.

  4. Enantioconvergent Cross-Couplings of Racemic Alkylmetal Reagents with Unactivated Secondary Alkyl Electrophiles: Catalytic Asymmetric Negishi α-Alkylations of N-Boc-pyrrolidine

    PubMed Central

    Cordier, Christopher J.; Lundgren, Rylan J.; Fu, Gregory C.

    2013-01-01

    Although enantioconvergent alkyl-alkyl couplings of racemic electrophiles have been developed, there have been no reports of the corresponding reactions of racemic nucleophiles. Herein, we describe Negishi cross-couplings of racemic α-zincated N-Boc-pyrrolidine with unactivated secondary halides, thus providing a one-pot, catalytic asymmetric method for the synthesis of a range of 2-alkylpyrrolidines (an important family of target molecules) from N-Boc-pyrrolidine, a commercially available precursor. Preliminary mechanistic studies indicate that two of the most straightforward mechanisms for enantioconvergence (a dynamic kinetic resolution of the organometallic coupling partner and a simple β-hydride elimination/β-migratory insertion pathway) are unlikely to be operative. PMID:23869442

  5. Probing the type of anomalous diffusion with single-particle tracking.

    PubMed

    Ernst, Dominique; Köhler, Jürgen; Weiss, Matthias

    2014-05-07

    Many reactions in complex fluids, e.g. signaling cascades in the cytoplasm of living cells, are governed by a diffusion-driven encounter of reactants. Yet, diffusion in complex fluids often exhibits an anomalous characteristic ('subdiffusion'). Since different types of subdiffusion have distinct effects on timing and equilibria of chemical reactions, a thorough determination of the reactants' type of random walk is key to a quantitative understanding of reactions in complex fluids. Here we introduce a straightforward and simple approach for determining the type of subdiffusion from single-particle tracking data. Unlike previous approaches, our method also is sensitive to transient subdiffusion phenomena, e.g. obstructed diffusion below the percolation threshold. We validate our strategy with data from experiment and simulation.
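    A common starting point for such analyses, shown here only as a generic sketch and not the authors' transient-sensitive method, is to estimate the anomalous exponent alpha from the time-averaged mean squared displacement, MSD(tau) ~ tau**alpha, with alpha < 1 signaling subdiffusion:

```python
import numpy as np

# Simulate ordinary 2D Brownian motion (alpha = 1) as a test track.
rng = np.random.default_rng(0)
track = np.cumsum(rng.normal(size=(10000, 2)), axis=0)

def time_averaged_msd(track, lags):
    # Time-averaged MSD over all start times, for each lag.
    return np.array([
        np.mean(np.sum((track[lag:] - track[:-lag]) ** 2, axis=1))
        for lag in lags
    ])

lags = np.arange(1, 50)
msd = time_averaged_msd(track, lags)
alpha = np.polyfit(np.log(lags), np.log(msd), 1)[0]  # log-log slope
```

    For the Brownian test track the fitted exponent comes out close to 1; obstructed or viscoelastic subdiffusion would push it below 1.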

  6. Rapid trifluoromethylation and perfluoroalkylation of five-membered heterocycles by photoredox catalysis in continuous flow.

    PubMed

    Straathof, Natan J W; Gemoets, Hannes P L; Wang, Xiao; Schouten, Jaap C; Hessel, Volker; Noël, Timothy

    2014-06-01

    Trifluoromethylated and perfluoroalkylated heterocycles are important building blocks for the synthesis of numerous pharmaceutical products and agrochemicals, and are widely applied in materials science. To date, trifluoromethylated and perfluoroalkylated hetero-aromatic systems can be prepared utilizing visible light photoredox catalysis methodologies in batch. While several limitations are associated with these batch protocols, the application of microflow technology can greatly enhance and intensify these reactions. A simple and straightforward photocatalytic trifluoromethylation and perfluoroalkylation method has been developed in continuous microflow, using commercially available photocatalysts and microflow components. A selection of five-membered hetero-aromatics was successfully trifluoromethylated (12 examples) and perfluoroalkylated (5 examples) within several minutes (8-20 min). © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Simple and Reliable Method to Quantify the Hepatitis B Viral Load and Replicative Capacity in Liver Tissue and Blood Leukocytes

    PubMed Central

    Minosse, Claudia; Coen, Sabrina; Visco Comandini, Ubaldo; Lionetti, Raffaella; Montalbano, Marzia; Cerilli, Stefano; Vincenti, Donatella; Baiocchini, Andrea; Capobianchi, Maria R.; Menzo, Stefano

    2016-01-01

    Background: A functional cure of chronic hepatitis B (CHB) is feasible, but a clear view of the intrahepatic viral dynamics in each patient is needed. Intrahepatic covalently closed circular DNA (cccDNA) is the stable form of the viral genome in infected cells, and represents the ideal marker of parenchymal colonization. Its relationships with easily accessible peripheral parameters need to be elucidated in order to avoid invasive procedures in patients. Objectives: The goal of this study was to design, set up, and validate a reliable and straightforward method for the quantification of the cccDNA and total DNA of the hepatitis B virus (HBV) in a variety of clinical samples. Patients and Methods: Clinical samples from a cohort of CHB patients, including liver biopsies in some, were collected for the analysis of intracellular HBV molecular markers using novel molecular assays. Results: A plasmid construct, including sequences from the HBV genome and from the human gene hTERT, was generated as an isomolar multi-standard for HBV quantitation and normalization to the cellular contents. The specificity of the real-time assay for the cccDNA was assessed using Dane particles isolated on a density gradient. A comparison of liver tissue from 6 untreated and 6 treated patients showed that the treatment deeply reduced the replicative capacity (total DNA/cccDNA), but had limited impact on the parenchymal colonization. The peripheral blood mononuclear cells (PBMCs) and granulocytes from the treated and untreated patients were also analyzed. Conclusions: A straightforward method for the quantification of intracellular HBV molecular parameters in clinical samples was developed and validated. The widespread use of such versatile assays could better define the prognosis of CHB, and allow a more rational approach to time-limited tailored treatment strategies. PMID:27882060

  8. SU-F-T-486: A Simple Approach to Performing Light Versus Radiation Field Coincidence Quality Assurance Using An Electronic Portal Imaging Device (EPID)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herchko, S; Ding, G

    2016-06-15

    Purpose: To develop an accurate, straightforward, and user-independent method for performing light versus radiation field coincidence quality assurance utilizing EPID images, a simple phantom made of readily-accessible materials, and a free software program. Methods: A simple phantom consisting of a blocking tray, graph paper, and high-density wire was constructed. The phantom was used to accurately set the size of a desired light field and imaged on the electronic portal imaging device (EPID). A macro written for use in ImageJ, a free image processing software, was then used to determine the radiation field size, utilizing the high-density wires on the phantom for a pixel-to-distance calibration. The macro also performs an analysis on the measured radiation field utilizing the tolerances recommended in the AAPM Task Group 142 report. To verify the accuracy of this method, radiochromic film was used to qualitatively demonstrate agreement between the film and EPID results, and an additional ImageJ macro was used to quantitatively compare the radiation field sizes measured with both the EPID and film images. Results: The results of this technique were benchmarked against film measurements, which have been the gold standard for testing light versus radiation field coincidence. The agreement between this method and film measurements was within 0.5 mm. Conclusion: Due to the operator dependency associated with tracing light fields and measuring radiation fields by hand when using film, this method allows for a more accurate comparison between the light and radiation fields with minimal operator dependency. Removing the need for radiographic or radiochromic film also eliminates a recurring cost and increases procedural efficiency.

  9. Generalized type II hybrid ARQ scheme using punctured convolutional coding

    NASA Astrophysics Data System (ADS)

    Kallel, Samir; Haccoun, David

    1990-11-01

    A method is presented to construct rate-compatible convolutional (RCC) codes from known high-rate punctured convolutional codes, obtained from best-rate 1/2 codes. The construction method is rather simple and straightforward, and still yields good codes. Moreover, low-rate codes can be obtained without any limit on the lowest achievable code rate. Based on the RCC codes, a generalized type-II hybrid ARQ scheme, which combines the benefits of the modified type-II hybrid ARQ strategy of Hagenauer (1988) with the code-combining ARQ strategy of Chase (1985), is proposed and analyzed. With the proposed generalized type-II hybrid ARQ strategy, the throughput increases as the starting coding rate increases, and as the channel degrades, it tends to merge with the throughput of rate 1/2 type-II hybrid ARQ schemes with code combining, thus allowing the system to be flexible and adaptive to channel conditions, even under wide noise variations and severe degradations.
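    The puncturing idea behind rate-compatible code families is easy to sketch: encode at rate 1/2 and delete output bits according to a fixed pattern. The toy encoder below (generators 7 and 5 octal, constraint length 3) is illustrative only, not the specific codes of the paper:

```python
def conv_encode_r12(bits, g1=0b111, g2=0b101, K=3):
    """Toy rate-1/2 convolutional encoder: two output bits per input bit,
    each the parity of the shift-register bits selected by a generator."""
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & ((1 << K) - 1)
        out.append(bin(state & g1).count("1") % 2)
        out.append(bin(state & g2).count("1") % 2)
    return out

def puncture(coded, pattern):
    # Transmit only the coded bits where the repeating pattern has a 1.
    return [c for i, c in enumerate(coded) if pattern[i % len(pattern)]]

msg = [1, 0, 1, 1, 0, 0]
coded = conv_encode_r12(msg)               # 12 coded bits: rate 1/2
punctured = puncture(coded, [1, 1, 1, 0])  # keep 3 of every 4: rate 2/3
```

    Rate compatibility comes from nesting the patterns: each higher-rate code transmits a subset of the bits of the lower-rate one, so an ARQ retransmission can simply send the previously punctured bits.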

  10. Efficient correction of wavefront inhomogeneities in X-ray holographic nanotomography by random sample displacement

    NASA Astrophysics Data System (ADS)

    Hubert, Maxime; Pacureanu, Alexandra; Guilloud, Cyril; Yang, Yang; da Silva, Julio C.; Laurencin, Jerome; Lefebvre-Joud, Florence; Cloetens, Peter

    2018-05-01

    In X-ray tomography, ring-shaped artifacts present in the reconstructed slices are an inherent problem degrading the global image quality and hindering the extraction of quantitative information. To overcome this issue, we propose a strategy for suppression of ring artifacts originating from the coherent mixing of the incident wave and the object. We discuss the limits of validity of the empty-beam correction in the framework of a simple formalism. We then deduce a correction method based on two-dimensional random sample displacement, with minimal cost in terms of spatial resolution, acquisition, and processing time. The method is demonstrated on bone tissue and on a hydrogen electrode of a ceramic-metallic solid oxide cell. Compared to the standard empty-beam correction, we obtain high-quality nanotomography images revealing detailed object features. The resulting absence of artifacts allows straightforward segmentation and subsequent quantification of the data.

  11. The S-Tunnel for tunnelled dialysis catheter: an alternative approach to the prevention of displacement.

    PubMed

    Jenkins, Glyndwr W; Kelly, Michael; Anwar, Siddiq; Ahmed, Saeed S

    2015-01-01

    Vascular access has been described in the literature anywhere from the 'Achilles Heel' to the 'Cornerstone' of haemodialysis. Displacement of a central venous catheter is not an uncommon occurrence. We discuss an alternative method of placement for the tunnelled central venous catheter to prevent displacement in those patients with excess anterior chest wall soft tissue. A new surgical technique for placement of a tunnelled central venous catheter was developed in an attempt to reduce the number of displacements. This involved the creation of a second tunnel at a 90° angle to the original retrograde tunnelled path. The authors have currently placed five 'S-Line' tunnelled central venous catheters with no reports of displacement or line infection over a period of 18 months. The 'S-Line' offers a simple, straightforward and most importantly safe method to reduce the incidence of tunnelled right internal jugular central venous catheter displacement.

  12. A weak Hamiltonian finite element method for optimal control problems

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Bless, Robert R.

    1989-01-01

    A temporal finite element method based on a mixed form of the Hamiltonian weak principle is developed for dynamics and optimal control problems. The mixed form of Hamilton's weak principle contains both displacements and momenta as primary variables that are expanded in terms of nodal values and simple polynomial shape functions. Unlike other forms of Hamilton's principle, however, time derivatives of the momenta and displacements do not appear therein; instead, only the virtual momenta and virtual displacements are differentiated with respect to time. Based on the duality that is observed to exist between the mixed form of Hamilton's weak principle and variational principles governing classical optimal control problems, a temporal finite element formulation of the latter can be developed in a rather straightforward manner. Several well-known problems in dynamics and optimal control are illustrated. The example dynamics problem involves a time-marching problem. As optimal control examples, elementary trajectory optimization problems are treated.

  13. A weak Hamiltonian finite element method for optimal control problems

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Bless, Robert R.

    1990-01-01

    A temporal finite element method based on a mixed form of the Hamiltonian weak principle is developed for dynamics and optimal control problems. The mixed form of Hamilton's weak principle contains both displacements and momenta as primary variables that are expanded in terms of nodal values and simple polynomial shape functions. Unlike other forms of Hamilton's principle, however, time derivatives of the momenta and displacements do not appear therein; instead, only the virtual momenta and virtual displacements are differentiated with respect to time. Based on the duality that is observed to exist between the mixed form of Hamilton's weak principle and variational principles governing classical optimal control problems, a temporal finite element formulation of the latter can be developed in a rather straightforward manner. Several well-known problems in dynamics and optimal control are illustrated. The example dynamics problem involves a time-marching problem. As optimal control examples, elementary trajectory optimization problems are treated.

  14. Weak Hamiltonian finite element method for optimal control problems

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Bless, Robert R.

    1991-01-01

    A temporal finite element method based on a mixed form of the Hamiltonian weak principle is developed for dynamics and optimal control problems. The mixed form of Hamilton's weak principle contains both displacements and momenta as primary variables that are expanded in terms of nodal values and simple polynomial shape functions. Unlike other forms of Hamilton's principle, however, time derivatives of the momenta and displacements do not appear therein; instead, only the virtual momenta and virtual displacements are differentiated with respect to time. Based on the duality that is observed to exist between the mixed form of Hamilton's weak principle and variational principles governing classical optimal control problems, a temporal finite element formulation of the latter can be developed in a rather straightforward manner. Several well-known problems in dynamics and optimal control are illustrated. The example dynamics problem involves a time-marching problem. As optimal control examples, elementary trajectory optimization problems are treated.

  15. Deep sequencing approaches for the analysis of prokaryotic transcriptional boundaries and dynamics.

    PubMed

    James, Katherine; Cockell, Simon J; Zenkin, Nikolay

    2017-05-01

    The identification of the protein-coding regions of a genome is straightforward due to the universality of start and stop codons. However, the boundaries of the transcribed regions, conditional operon structures, non-coding RNAs and the dynamics of transcription, such as pausing of elongation, are non-trivial to identify, even in the comparatively simple genomes of prokaryotes. Traditional methods for the study of these areas, such as tiling arrays, are noisy, labour-intensive and lack the resolution required for densely-packed bacterial genomes. Recently, deep sequencing has become increasingly popular for the study of the transcriptome due to its lower costs, higher accuracy and single nucleotide resolution. These methods have revolutionised our understanding of prokaryotic transcriptional dynamics. Here, we review the deep sequencing and data analysis techniques that are available for the study of transcription in prokaryotes, and discuss the bioinformatic considerations of these analyses. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Core-mass nonadiabatic corrections to molecules: H2, H2+, and isotopologues.

    PubMed

    Diniz, Leonardo G; Alijah, Alexander; Mohallem, José Rachid

    2012-10-28

    For high-precision calculations of rovibrational states of light molecules, it is essential to include non-adiabatic corrections. In the absence of crossings of potential energy surfaces, they can be incorporated in a single-surface picture through coordinate-dependent vibrational and rotational reduced masses. We present a compact method for their evaluation and relate in particular the vibrational mass to a well-defined nuclear core mass derived from a Mulliken analysis of the electronic density. For the rotational mass we propose a simple but very effective parametrization. The use of these masses in the nuclear Schrödinger equation yields numerical data for the corrections of a much higher quality than can be obtained with optimized constant masses, typically better than 0.1 cm-1. We demonstrate the method for H2, H2+, and singly deuterated isotopologues. Isotopic asymmetry does not present any particular difficulty. Generalization to polyatomic molecules is straightforward.

  17. Protecting privacy of shared epidemiologic data without compromising analysis potential.

    PubMed

    Cologne, John; Grant, Eric J; Nakashima, Eiji; Chen, Yun; Funamoto, Sachiyo; Katayama, Hiroaki

    2012-01-01

    Ensuring privacy of research subjects when epidemiologic data are shared with outside collaborators involves masking (modifying) the data, but overmasking can compromise utility (analysis potential). Methods of statistical disclosure control for protecting privacy may be impractical for individual researchers involved in small-scale collaborations. We investigated a simple approach based on measures of disclosure risk and analytical utility that are straightforward for epidemiologic researchers to derive. The method is illustrated using data from the Japanese Atomic-bomb Survivor population. Masking by modest rounding did not adequately enhance security but rounding to remove several digits of relative accuracy effectively reduced the risk of identification without substantially reducing utility. Grouping or adding random noise led to noticeable bias. When sharing epidemiologic data, it is recommended that masking be performed using rounding. Specific treatment should be determined separately in individual situations after consideration of the disclosure risks and analysis needs.
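    Masking by rounding to a fixed number of significant digits, the treatment recommended above, can be sketched as follows (the digit counts are illustrative, not the study's calibrated values):

```python
import math

def mask_by_rounding(value, sig_digits):
    # Round to a fixed number of significant digits, removing relative
    # accuracy (and hence re-identification risk) while preserving the
    # value's overall magnitude for analysis.
    if value == 0:
        return 0.0
    exponent = math.floor(math.log10(abs(value)))
    factor = 10.0 ** (exponent - sig_digits + 1)
    return round(value / factor) * factor

masked = mask_by_rounding(12345.6, 3)
```

    Unlike grouping into bins or adding random noise, which the study found introduced noticeable bias, rounding keeps each masked value centered on the original.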

  18. History in Perspective (HIP): A Collaborative Project between the University of New Hampshire, SAU #56, and 13 Other School Districts

    ERIC Educational Resources Information Center

    Moyer, Judith; Onosko, Joseph; Forcey, Charles; Cobb, Casey

    2003-01-01

    This article discusses the History in Perspective Project (HIP), a collaborative project between the University of New Hampshire (UNH), its Supervisory Administration Unit #56 (SAU #56), and 13 other school districts. The authors' three-pronged plan was simple, straightforward, and, in some ways, experimental. From observation and experience, they…

  19. Using Modern Solid-State Analytical Tools for Investigations of an Advanced Carbon Capture Material: Experiments for the Inorganic Chemistry Laboratory

    ERIC Educational Resources Information Center

    Wriedt, Mario; Sculley, Julian P.; Aulakh, Darpandeep; Zhou, Hong-Cai

    2016-01-01

    A simple and straightforward synthesis of an ultrastable porous metal-organic framework (MOF) based on copper(II) and a mixed N donor ligand system is described as a laboratory experiment for chemistry undergraduate students. These experiments and the resulting analysis are designed to teach students basic research tools and procedures while…

  20. Toward a formal definition of water scarcity in natural human systems

    Treesearch

    W.K. Jaeger; A.J. Plantinga; H. Chang; K. Dello; G. Grant; D. Hulse; J.J. McDonnell; S. Lancaster; H. Moradkhani; A.T. Morzillo; P. Mote; A. Nolin; M. Santlemann; J. Wu

    2013-01-01

    Water scarcity may appear to be a simple concept, but it can be difficult to apply to complex natural-human systems. While aggregate scarcity indices are straightforward to compute, they do not adequately represent the spatial and temporal variations in water scarcity that arise from complex systems interactions. The uncertain effects of future climate change on water...

  1. Building a Phylogenetic Tree of the Human and Ape Superfamily Using DNA-DNA Hybridization Data

    ERIC Educational Resources Information Center

    Maier, Caroline Alexander

    2004-01-01

    The study describes the process of DNA-DNA hybridization and the history of its use by Sibley and Alquist in simple, straightforward, and interesting language that students easily understand to create their own phylogenetic tree of the hominoid superfamily. They calibrate the DNA clock and use it to estimate the divergence dates of the various…

  2. Towards the estimation of effect measures in studies using respondent-driven sampling.

    PubMed

    Rotondi, Michael A

    2014-06-01

    Respondent-driven sampling (RDS) is an increasingly common sampling technique to recruit hidden populations. Statistical methods for RDS are not straightforward due to the correlation between individual outcomes and subject weighting; thus, analyses are typically limited to estimation of population proportions. This manuscript applies the method of variance estimates recovery (MOVER) to construct confidence intervals for effect measures such as risk difference (difference of proportions) or relative risk in studies using RDS. To illustrate the approach, MOVER is used to construct confidence intervals for differences in the prevalence of demographic characteristics between an RDS study and convenience study of injection drug users. MOVER is then applied to obtain a confidence interval for the relative risk between education levels and HIV seropositivity and current infection with syphilis, respectively. This approach provides a simple method to construct confidence intervals for effect measures in RDS studies. Since it only relies on a proportion and appropriate confidence limits, it can also be applied to previously published manuscripts.
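    The MOVER construction for a risk difference combines the separate confidence limits of the two proportions. A simple sketch, using Wilson score intervals as the per-proportion limits (an RDS analysis would substitute RDS-adjusted estimates and limits; the counts below are invented):

```python
import math

def wilson_ci(x, n, z=1.959964):
    # Wilson score interval for a binomial proportion
    # (simple-random-sampling stand-in for RDS-adjusted limits).
    p = x / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return center - half, center + half

def mover_risk_difference(x1, n1, x2, n2):
    # MOVER: recover variance estimates from each proportion's own limits
    # and combine them into a confidence interval for p1 - p2.
    p1, p2 = x1 / n1, x2 / n2
    l1, u1 = wilson_ci(x1, n1)
    l2, u2 = wilson_ci(x2, n2)
    d = p1 - p2
    lower = d - math.sqrt((p1 - l1) ** 2 + (u2 - p2) ** 2)
    upper = d + math.sqrt((u1 - p1) ** 2 + (p2 - l2) ** 2)
    return lower, upper

lo, hi = mover_risk_difference(45, 100, 30, 100)
```

    Because only point estimates and confidence limits enter the formula, the same combination can be applied to limits taken from previously published results.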

  3. Study on the interaction between hematoporphyrin monomethyl ether and DNA and the determination of hematoporphyrin monomethyl ether using the resonance light scattering technique

    NASA Astrophysics Data System (ADS)

    Chen, Zhanguang; Song, Tianhe; Chen, Xi; Wang, Shaobin; Chen, Junhui

    2010-10-01

    The interaction between the photosensitizer anticancer drug hematoporphyrin monomethyl ether (HMME) and ctDNA has been studied based on the decreased resonance light scattering (RLS) phenomenon. The RLS, UV-vis and fluorescence spectra characteristics of the HMME-ctDNA system were investigated. In addition, a phosphodiester quaternary ammonium salt (PQAS), a recently synthesized gemini surfactant, was used to determine the anticancer drug HMME based on the increased RLS intensity. Under the optimum assay conditions, the enhanced RLS intensity was proportional to the concentration of HMME. The linear range was 0.8-8.4 μg mL-1, with correlation coefficient R² = 0.9913. The detection limit was 0.014 μg mL-1. Human serum and urine samples were analyzed satisfactorily, which proved that this method is reliable and applicable for the determination of HMME in body fluids. The presented method is simple, sensitive and straightforward and could be valuable in clinical analysis.

  4. Timeliness “at a glance”: assessing the turnaround time through the six sigma metrics.

    PubMed

    Ialongo, Cristiano; Bernardini, Sergio

    2016-01-01

    Almost thirty years of systematic analysis have proven the turnaround time to be a fundamental dimension for the clinical laboratory. Several indicators are to date available to assess and report quality with respect to timeliness, but they sometimes lack communicative immediacy and accuracy. Six sigma is a paradigm developed within the industrial domain for assessing quality and addressing goals and issues. The sigma level computed through the Z-score method is a simple and straightforward tool which reports quality on a universal dimensionless scale and can handle non-normal data. Herein we report our preliminary experience in using the sigma level to assess the change in urgent (STAT) test turnaround time due to the implementation of total automation. We found that the Z-score method is a valuable and easy-to-use method for assessing and communicating the quality level of laboratory timeliness, providing a good correspondence with the actual change in efficiency that was retrospectively observed.
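    The sigma-level computation can be sketched as converting the fraction of STAT tests that miss the turnaround-time target into a sigma level via the inverse normal CDF. The 1.5-sigma shift is the usual industrial convention and the counts below are invented:

```python
from statistics import NormalDist

def sigma_level(n_defects, n_total, shift=1.5):
    # Treat every test exceeding the TAT goal as a "defect" and convert
    # the defect fraction to a sigma level; the 1.5-sigma shift converts
    # the observed long-term rate to a short-term capability figure.
    p_defect = n_defects / n_total
    return NormalDist().inv_cdf(1 - p_defect) + shift

level = sigma_level(38, 1000)  # 3.8% of STAT tests missed the TAT goal
```

    A defect rate of a few percent lands in the 3-4 sigma range; world-class "six sigma" performance corresponds to about 3.4 defects per million opportunities.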

  5. Digestion-ligation-only Hi-C is an efficient and cost-effective method for chromosome conformation capture.

    PubMed

    Lin, Da; Hong, Ping; Zhang, Siheng; Xu, Weize; Jamal, Muhammad; Yan, Keji; Lei, Yingying; Li, Liang; Ruan, Yijun; Fu, Zhen F; Li, Guoliang; Cao, Gang

    2018-05-01

    Chromosome conformation capture (3C) technologies can be used to investigate 3D genomic structures. However, high background noise, high costs, and a lack of straightforward noise evaluation in current methods impede the advancement of 3D genomic research. Here we developed a simple digestion-ligation-only Hi-C (DLO Hi-C) technology to explore the 3D landscape of the genome. This method requires only two rounds of digestion and ligation, without the need for biotin labeling and pulldown. Non-ligated DNA was efficiently removed in a cost-effective step by purifying specific linker-ligated DNA fragments. Notably, random ligation could be quickly evaluated in an early quality-control step before sequencing. Moreover, an in situ version of DLO Hi-C using a four-cutter restriction enzyme has been developed. We applied DLO Hi-C to delineate the genomic architecture of THP-1 and K562 cells and uncovered chromosomal translocations. This technology may facilitate investigation of genomic organization, gene regulation, and (meta)genome assembly.

  6. Detection of proteins using a colorimetric bio-barcode assay.

    PubMed

    Nam, Jwa-Min; Jang, Kyung-Jin; Groves, Jay T

    2007-01-01

    The colorimetric bio-barcode assay is a red-to-blue color change-based protein detection method with ultrahigh sensitivity. This assay is based on both the bio-barcode amplification method, which allows for the detection of minuscule amounts of target with attomolar sensitivity, and the gold nanoparticle-based colorimetric DNA detection method, which allows for simple and straightforward detection of biomolecules of interest (here we detect interleukin-2, an important biomarker (cytokine) for many immunodeficiency-related diseases and cancers). The protocol is composed of the following steps: (i) conjugation of target capture molecules and barcode DNA strands onto silica microparticles, (ii) target capture with probes, (iii) separation and release of barcode DNA strands from the separated probes, (iv) detection of released barcode DNA using DNA-modified gold nanoparticle probes and (v) red-to-blue color change analysis with graphics software. Actual target detection and quantification steps with premade probes take approximately 3 h (the whole protocol including probe preparation takes approximately 3 days).

  7. Edge theory approach to topological entanglement entropy, mutual information, and entanglement negativity in Chern-Simons theories

    NASA Astrophysics Data System (ADS)

    Wen, Xueda; Matsuura, Shunji; Ryu, Shinsei

    2016-06-01

    We develop an approach based on edge theories to calculate the entanglement entropy and related quantities in (2+1)-dimensional topologically ordered phases. Our approach is complementary to, e.g., the existing methods using the replica trick and Witten's method of surgery, and applies to a generic spatial manifold of genus g, which can be bipartitioned in an arbitrary way. The effects of fusion and braiding of Wilson lines can also be studied straightforwardly within our framework. By considering a generic superposition of states with different Wilson line configurations, through an interference effect, we can detect, via the entanglement entropy, the topological data of Chern-Simons theories, e.g., the R symbols, monodromy, and topological spins of quasiparticles. Furthermore, using our method, we calculate other entanglement/correlation measures such as the mutual information and the entanglement negativity. In particular, it is found that the entanglement negativity of two adjacent noncontractible regions on a torus provides a simple way to distinguish Abelian and non-Abelian topological orders.

  8. A simple method to separate red wine nonpolymeric and polymeric phenols by solid-phase extraction.

    PubMed

    Pinelo, Manuel; Laurie, V Felipe; Waterhouse, Andrew L

    2006-04-19

    Simple polyphenols and tannins differ in the way that they contribute to the organoleptic profile of wine and in their effects on human health. Very few straightforward techniques to separate red wine nonpolymeric phenols from the polymeric fraction are available in the literature. In general, they are complex, time-consuming, and generate large amounts of waste. In this procedure, the separation of these compounds was achieved using C18 cartridges, three solvents with different elution strengths, and pH adjustments of the experimental matrices. Two full factorial 2³ experimental designs were performed to find the optimal critical variables and their values, allowing for the maximization of tannin recovery and separation efficiency (SE). Nonpolymeric phenols such as phenolic acids, monomers, and oligomers of flavonols and flavan-3-ols and anthocyanins were removed from the column by means of an aqueous solvent followed by ethyl acetate. The polymeric fraction was then eluted with a combination of methanol/acetone/water. The best results were attained with 1 mL of wine sample, a 10% methanol/water solution (first eluant), ethyl acetate (second eluant), and 66% acetone/water as the polymeric phenols-eluting solution (third eluant), obtaining an SE of ca. 90%. Trials with this method on fruit juices also showed high separation efficiency. Hence, this solid-phase extraction method has been shown to be a simple and efficient alternative for the separation of nonpolymeric phenolic fractions from polymeric ones, and it could have important applications in sample purification prior to biological testing due to the nonspecific binding of polymeric phenolics to nearly all enzymes and receptor sites.

  9. Measurement of the airway surface liquid volume with simple light refraction microscopy.

    PubMed

    Harvey, Peter R; Tarran, Robert; Garoff, Stephen; Myerburg, Mike M

    2011-09-01

    In the cystic fibrosis (CF) lung, the airway surface liquid (ASL) volume is depleted, impairing mucus clearance from the lung and leading to chronic airway infection and obstruction. Several therapeutics have been developed that aim to restore normal airway surface hydration to the CF airway, yet preclinical evaluation of these agents is hindered by the paucity of methods available to directly measure the ASL. Therefore, we sought to develop a straightforward approach to measure the ASL volume that would serve as the basis for a standardized method to assess mucosal hydration using readily available resources. Primary human bronchial epithelial (HBE) cells cultured at an air-liquid interface develop a liquid meniscus at the edge of the culture. We hypothesized that the size of the fluid meniscus is determined by the ASL volume and could be measured as an index of the epithelial surface hydration status. A simple method was developed to measure the volume of fluid present in the meniscus by imaging the refraction of light at the ASL interface with the culture wall using low-magnification microscopy. Using this method, we found that primary CF HBE cells had a reduced ASL volume compared with non-CF HBE cells, and that known modulators of ASL volume caused the predicted responses. Thus, we have demonstrated that this method can detect physiologically relevant changes in the ASL volume, and propose that this novel approach may be used to rapidly assess the effects of airway hydration therapies in high-throughput screening assays.

  10. Arrival Time Tracking of Partially Resolved Acoustic Rays with Application to Ocean Acoustic Tomography

    DTIC Science & Technology

    1991-03-01

    Ocean acoustic tomography. A straightforward method of arrival time estimation, based on locating the maximum value of an interpolated arrival, was used with limited success for analysis of data from the December 1988 Monterey Bay Tomography Experiment. Close examination of the data revealed multiple... The estimation of arrival times along an ocean acoustic ray path is an important component of ocean acoustic tomography.
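
    The snippet above describes peak-picking on an interpolated arrival. A minimal sketch of that idea follows; the pulse shape, sample interval, and three-point parabolic refinement are illustrative assumptions, not the report's actual processing.

```python
# Illustrative sketch (not the report's code): estimate an arrival time by
# locating the maximum of a sampled arrival and refining it to sub-sample
# accuracy with three-point parabolic interpolation.
import math

def parabolic_peak_time(samples, dt):
    """Return a sub-sample estimate of the time of the peak of `samples`."""
    i = max(range(len(samples)), key=lambda k: samples[k])
    if i == 0 or i == len(samples) - 1:
        return i * dt  # peak on the boundary: no refinement possible
    y0, y1, y2 = samples[i - 1], samples[i], samples[i + 1]
    # Vertex of the parabola through the three points around the maximum.
    denom = y0 - 2 * y1 + y2
    offset = 0.0 if denom == 0 else 0.5 * (y0 - y2) / denom
    return (i + offset) * dt

# Synthetic arrival: a Gaussian pulse centered at t0 = 0.1234 s.
dt = 0.001
t0 = 0.1234
samples = [math.exp(-((k * dt - t0) / 0.005) ** 2) for k in range(300)]
t_est = parabolic_peak_time(samples, dt)
```

    Parabolic interpolation of the three samples around the maximum is a standard way to push a peak-time estimate well below the sampling interval.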

  11. Universal fragment descriptors for predicting properties of inorganic crystals

    NASA Astrophysics Data System (ADS)

    Isayev, Olexandr; Oses, Corey; Toher, Cormac; Gossett, Eric; Curtarolo, Stefano; Tropsha, Alexander

    2017-06-01

    Although historically materials discovery has been driven by a laborious trial-and-error process, knowledge-driven materials design can now be enabled by the rational combination of Machine Learning methods and materials databases. Here, data from the AFLOW repository for ab initio calculations is combined with Quantitative Materials Structure-Property Relationship models to predict important properties: metal/insulator classification, band gap energy, bulk/shear moduli, Debye temperature and heat capacities. The prediction's accuracy compares well with the quality of the training data for virtually any stoichiometric inorganic crystalline material, reciprocating the available thermomechanical experimental data. The universality of the approach is attributed to the construction of the descriptors: Property-Labelled Materials Fragments. The representations require only minimal structural input allowing straightforward implementations of simple heuristic design rules.
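    As a rough illustration of descriptor-based property prediction in the same spirit as the abstract above: the descriptor vectors, band gap values, and the k-nearest-neighbour model below are toy assumptions for illustration, not the paper's Property-Labelled Materials Fragments pipeline or AFLOW data.

```python
# Toy sketch of a descriptor-vector property model: each material is a short
# numeric descriptor vector, and an unseen material's property is predicted
# by averaging over its k nearest training descriptors. All numbers are
# fabricated for illustration only.
import math

# (descriptor vector, band gap in eV) -- hypothetical toy data
train = [
    ([0.2, 1.0, 3.1], 0.0),
    ([0.8, 0.1, 2.2], 1.1),
    ([0.9, 0.0, 2.0], 1.4),
    ([0.1, 0.9, 3.0], 0.0),
    ([0.7, 0.2, 2.4], 0.9),
]

def knn_predict(x, train, k=3):
    """Average the property over the k nearest training descriptors."""
    nearest = sorted(train, key=lambda p: math.dist(x, p[0]))
    return sum(y for _, y in nearest[:k]) / k

gap = knn_predict([0.85, 0.05, 2.1], train)
```

    Real QMSPR models use far richer descriptors and learners, but the interface is the same: a fixed-length descriptor in, a property estimate out.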

  12. An iterative solution of an integral equation for radiative transfer by using a variational technique

    NASA Technical Reports Server (NTRS)

    Yoshikawa, K. K.

    1973-01-01

    An effective iterative technique is introduced to solve a nonlinear integral equation frequently associated with radiative transfer problems. The problem is formulated in such a way that each step of an iterative sequence requires the solution of a linear integral equation. The advantage of a previously introduced variational technique which utilizes a stepwise constant trial function is exploited to cope with the nonlinear problem. The method is simple and straightforward. Rapid convergence is obtained by employing a linear interpolation of the iterative solutions. Using absorption coefficients of the Milne-Eddington type, which are applicable to some planetary atmospheric radiation problems, solutions are found in terms of temperature and radiative flux. These solutions are presented numerically and show excellent agreement with other numerical solutions.
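
    The iteration scheme described above, in which the nonlinearity is frozen at the previous iterate so that each step needs only a linear solve, can be sketched on a toy nonlinear Fredholm equation. The kernel, the value of λ, and the trapezoid quadrature below are illustrative assumptions, not the paper's Milne-Eddington problem.

```python
# Toy analogue: solve  u(x) = 1 + lam * x * \int_0^1 u(t)^2 dt  by iterating
# with the nonlinear term evaluated at the previous iterate, so each update
# is a linear (here explicit) solve.
lam = 0.1
n = 101
h = 1.0 / (n - 1)
xs = [k * h for k in range(n)]

def trapezoid(vals, h):
    return h * (0.5 * vals[0] + sum(vals[1:-1]) + 0.5 * vals[-1])

u = [1.0] * n  # initial guess
for _ in range(50):
    c = trapezoid([v * v for v in u], h)      # nonlinearity frozen at previous iterate
    u_new = [1.0 + lam * x * c for x in xs]   # linear update
    converged = max(abs(a - b) for a, b in zip(u_new, u)) < 1e-12
    u = u_new
    if converged:
        break

# Residual check: u should satisfy the equation to quadrature accuracy.
c = trapezoid([v * v for v in u], h)
residual = max(abs(u[k] - (1.0 + lam * xs[k] * c)) for k in range(n))
```

    For this kernel the integral reduces to a constant c satisfying c = 1 + λc + λ²c²/3 ≈ 1.1157, which the converged iterate reproduces.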

  13. Universal fragment descriptors for predicting properties of inorganic crystals.

    PubMed

    Isayev, Olexandr; Oses, Corey; Toher, Cormac; Gossett, Eric; Curtarolo, Stefano; Tropsha, Alexander

    2017-06-05

    Although historically materials discovery has been driven by a laborious trial-and-error process, knowledge-driven materials design can now be enabled by the rational combination of Machine Learning methods and materials databases. Here, data from the AFLOW repository for ab initio calculations is combined with Quantitative Materials Structure-Property Relationship models to predict important properties: metal/insulator classification, band gap energy, bulk/shear moduli, Debye temperature and heat capacities. The prediction's accuracy compares well with the quality of the training data for virtually any stoichiometric inorganic crystalline material, reciprocating the available thermomechanical experimental data. The universality of the approach is attributed to the construction of the descriptors: Property-Labelled Materials Fragments. The representations require only minimal structural input allowing straightforward implementations of simple heuristic design rules.

  14. Wall Layers

    DTIC Science & Technology

    1992-01-14

    modes. Nonlinearity 4, 697-726. Campbell, S. A. 1991. The Effects of Symmetry on Low Dimensional Modal Interactions. Ph.D. Thesis. (Theoretical and... et al.; they have a paper ready for submission entitled "Bifurcation from symmetric heteroclinic cycles with three interacting modes". The purpose of this... simple model for the effects of riblets on the growth and form of eigenstructures is under investigation. This model is a straightforward extension of

  15. Coordinating Procedural and Conceptual Knowledge to Make Sense of Word Equations: Understanding the Complexity of a "Simple" Completion Task at the Learner's Resolution

    ERIC Educational Resources Information Center

    Taber, Keith S.; Bricheno, Pat

    2009-01-01

    The present paper discusses the conceptual demands of an apparently straightforward task set to secondary-level students--completing chemical word equations with a single omitted term. Chemical equations are of considerable importance in chemistry, and school students are expected to learn to be able to write and interpret them. However, it is…

  16. Copper catalyzed oxidative homocoupling of terminal alkynes to 1,3-diynes: a Cu3(BTC)2 MOF as an efficient and ligand free catalyst for Glaser-Hay coupling.

    PubMed

    Devarajan, Nainamalai; Karthik, Murugan; Suresh, Palaniswamy

    2017-11-07

    A straightforward and efficient method has been demonstrated for the oxidative coupling of terminal alkynes using a simple Cu3(BTC)2 metal-organic framework as a sustainable heterogeneous copper catalyst. A series of symmetrical 1,3-diynes bearing diverse functional groups have been synthesized in moderate to excellent yields via a Cu3(BTC)2-catalyzed Glaser-Hay reaction. The coordinatively unsaturated open Cu(II) sites in Cu3(BTC)2 catalyze the homocoupling in the presence of air as an environmentally friendly oxidant, without the use of external oxidants, ligands or any additives. The present methodology avoids stoichiometric reagents and harsh or special reaction conditions, and shows good functional group tolerance. The as-prepared catalyst could be separated easily by simple filtration and reused several times without any notable loss in activity. A hot filtration test confirmed the true heterogeneity of the catalyst. Additionally, the powder X-ray diffraction pattern of the reused catalyst revealed its high stability.

  17. Construction of mutually unbiased bases with cyclic symmetry for qubit systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seyfarth, Ulrich; Ranade, Kedar S.

    2011-10-15

    For the complete estimation of arbitrary unknown quantum states by measurements, the use of mutually unbiased bases has been well established in theory and experiment for the past 20 years. However, most constructions of these bases make heavy use of abstract algebra and the mathematical theory of finite rings and fields, and no simple and generally accessible construction is available. This is particularly true in the case of a system composed of several qubits, which is arguably the most important case in quantum information science and quantum computation. In this paper, we close this gap by providing a simple and straightforward method for the construction of mutually unbiased bases in the case of a qubit register. We show that our construction is also accessible to experiments, since only Hadamard and controlled-phase gates are needed, which are available in most practical realizations of a quantum computer. Moreover, our scheme possesses the optimal scaling possible, i.e., the number of gates scales only linearly in the number of qubits.
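
    As a minimal single-qubit illustration of a gate-based construction of this kind (the paper treats general qubit registers; here d = 2 only), three mutually unbiased bases can be read off from the identity, the Hadamard gate H, and the phase gate S applied after H:

```python
# Single-qubit sketch (d = 2, an assumption for illustration): the columns of
# I, H, and S*H form three mutually unbiased bases, i.e. any two vectors from
# different bases satisfy |<a|b>|^2 = 1/d = 1/2.
import itertools

s2 = 2 ** -0.5
I = [[1, 0], [0, 1]]
H = [[s2, s2], [s2, -s2]]
S = [[1, 0], [0, 1j]]  # phase gate

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def columns(M):
    return [[M[0][j], M[1][j]] for j in range(2)]

bases = [columns(I), columns(H), columns(matmul(S, H))]

def overlap2(a, b):
    ip = sum(x.conjugate() * y for x, y in zip(a, b))
    return abs(ip) ** 2

# All cross-basis overlaps, which must all equal 1/2.
checks = [overlap2(a, b)
          for B1, B2 in itertools.combinations(bases, 2)
          for a in B1 for b in B2]
```

    The same unbiasedness condition, |⟨a|b⟩|² = 1/d, is what any multi-qubit construction must satisfy; here the `checks` list verifies it exhaustively for one qubit.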

  18. High resolution human diffusion tensor imaging using 2-D navigated multi-shot SENSE EPI at 7 Tesla

    PubMed Central

    Jeong, Ha-Kyu; Gore, John C.; Anderson, Adam W.

    2012-01-01

    The combination of parallel imaging with partial Fourier acquisition has greatly improved the performance of diffusion-weighted single-shot EPI and is the preferred method for acquisitions at low to medium magnetic field strength such as 1.5 or 3 Tesla. Increased off-resonance effects and reduced transverse relaxation times at 7 Tesla, however, generate more significant artifacts than at lower magnetic field strength and limit data acquisition. Additional acceleration of k-space traversal using a multi-shot approach, which acquires a subset of k-space data after each excitation, reduces these artifacts relative to conventional single-shot acquisitions. However, corrections for motion-induced phase errors are not straightforward in accelerated, diffusion-weighted multi-shot EPI because of phase aliasing. In this study, we introduce a simple acquisition and corresponding reconstruction method for diffusion-weighted multi-shot EPI with parallel imaging suitable for use at high field. The reconstruction uses a simple modification of the standard SENSE algorithm to account for shot-to-shot phase errors; the method is called Image Reconstruction using Image-space Sampling functions (IRIS). Using this approach, reconstruction from highly aliased in vivo image data using 2-D navigator phase information is demonstrated for human diffusion-weighted imaging studies at 7 Tesla. The final reconstructed images show submillimeter in-plane resolution with no ghosts and much reduced blurring and off-resonance artifacts. PMID:22592941

  19. Metallization and Biopatterning on Ultra-Flexible Substrates via Dextran Sacrificial Layers

    PubMed Central

    Tseng, Peter; Pushkarsky, Ivan; Di Carlo, Dino

    2014-01-01

    Micro-patterning tools adopted from the semiconductor industry have mostly been optimized to pattern features onto rigid silicon and glass substrates, however, recently the need to pattern on soft substrates has been identified in simulating cellular environments or developing flexible biosensors. We present a simple method of introducing a variety of patterned materials and structures into ultra-flexible polydimethylsiloxane (PDMS) layers (elastic moduli down to 3 kPa) utilizing water-soluble dextran sacrificial thin films. Dextran films provided a stable template for photolithography, metal deposition, particle adsorption, and protein stamping. These materials and structures (including dextran itself) were then readily transferrable to an elastomer surface following PDMS (10:1 to 70:1 base-to-crosslinker ratios) curing over the patterned dextran layer and after sacrificial etch of the dextran in water. We demonstrate that this simple and straightforward approach can controllably manipulate surface wetting and protein adsorption characteristics of PDMS, covalently link protein patterns for stable cell patterning, generate composite structures of epoxy or particles for study of cell mechanical response, and stably integrate certain metals with use of vinyl molecular adhesives. This method is compatible over the complete moduli range of PDMS, and potentially generalizable over a host of additional micro- and nano-structures and materials. PMID:25153326

  20. The Computation of Global Viscoelastic Co- and Post-seismic Displacement in a Realistic Earth Model by Straightforward Numerical Inverse Laplace Integration

    NASA Astrophysics Data System (ADS)

    Tang, H.; Sun, W.

    2016-12-01

    The theoretical computation of dislocation theory in a given earth model is necessary for explaining observations of the co- and post-seismic deformation of earthquakes. For this purpose, computation theories based on layered or pure half space [Okada, 1985; Okubo, 1992; Wang et al., 2006] and on a spherically symmetric earth [Piersanti et al., 1995; Pollitz, 1997; Sabadini & Vermeersen, 1997; Wang, 1999] have been proposed. The compressibility, curvature and continuous variation of the radial structure of the Earth should be simultaneously taken into account for modern high-precision, displacement-based observations such as GPS. Therefore, Tanaka et al. [2006; 2007] computed global displacement and gravity variation by combining the reciprocity theorem (RPT) [Okubo, 1993] with numerical inverse Laplace integration (NIL) instead of the normal mode method [Peltier, 1974]. Without using RPT, we follow the straightforward numerical integration of co-seismic deformation given by Sun et al. [1996] to present a straightforward numerical inverse Laplace integration method (SNIL). This method is used to compute the co- and post-seismic displacement of point dislocations buried in a spherically symmetric, self-gravitating, viscoelastic and multilayered earth model, and is easy to extend to applications involving the geoid and gravity. Compared with pre-existing methods, this method is relatively more straightforward and time-saving, mainly because we sum the associated Legendre polynomials and dislocation Love numbers before using the Riemann-Mellin formula to implement SNIL.
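
    The numerical inverse Laplace step mentioned above rests on the standard Riemann-Mellin (Bromwich) inversion integral, reproduced here for reference; F, s and c are the usual symbols of the general formula, not notation taken from this abstract:

```latex
f(t) = \mathcal{L}^{-1}\{F\}(t)
     = \frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} F(s)\, e^{st}\, \mathrm{d}s ,
```

    where the real abscissa c lies to the right of all singularities of F(s); numerical schemes approximate this contour integral by a finite sum.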

  1. Practical Considerations in Pediatric Surgery

    PubMed Central

    Louis, Matthew R.; Meaike, Jesse D.; Chamata, Edward; Hollier, Larry H.

    2016-01-01

    The care of pediatric patients requires special considerations that are often not addressed in the literature. Relatively straightforward tasks such as clinical evaluation, antibiotic use, splinting, wound closure, and care of simple burns become complicated in the pediatric population for several reasons. The authors seek to demystify some of these topics using the senior author's years of clinical experience treating pediatric patients by giving practical advice and general considerations when treating children. PMID:27895539

  2. Rapid Fabrication of Cell-Laden Alginate Hydrogel 3D Structures by Micro Dip-Coating.

    PubMed

    Ghanizadeh Tabriz, Atabak; Mills, Christopher G; Mullins, John J; Davies, Jamie A; Shu, Wenmiao

    2017-01-01

    Development of a simple, straightforward 3D fabrication method to culture cells in 3D, without relying on any complex fabrication methods, remains a challenge. In this paper, we describe a new technique that allows fabrication of scalable 3D cell-laden hydrogel structures easily, without complex machinery: the technique can be done using only apparatus already available in a typical cell biology laboratory. The fabrication method involves micro dip-coating of cell-laden hydrogels covering the surface of a metal bar into the cross-linking reagents calcium chloride or barium chloride to form hollow tubular structures. This method can be used to form single layers with thickness ranging from 126 to 220 µm or multilayered tubular structures. This fabrication method uses alginate hydrogel as the primary biomaterial, and a secondary biomaterial can be added depending on the desired application. We demonstrate the feasibility of this method, with a survival rate of over 75% immediately after fabrication and normal responsiveness of cells within these tubular structures, using mouse dermal embryonic fibroblast cells and human embryonic kidney 293 cells containing a tetracycline-responsive red fluorescent protein (tHEK cells).

  3. Realistic absorption coefficient of each individual film in a multilayer architecture

    NASA Astrophysics Data System (ADS)

    Cesaria, M.; Caricato, A. P.; Martino, M.

    2015-02-01

    A spectrophotometric strategy, termed the multilayer-method (ML-method), is presented and discussed to realistically calculate the absorption coefficient of each individual layer embedded in multilayer architectures without reverse engineering, numerical refinements or assumptions about layer homogeneity and thickness. The strategy extends, in a non-straightforward way, a consolidated route already published by the authors and here termed the basic-method, able to accurately characterize an absorbing film covering a transparent substrate. The ML-method inherently accounts for the non-measurable contribution of the interfaces (including multiple reflections), describes the specific film structure as determined by the multilayer architecture and the deposition approach and parameters used, exploits simple mathematics, and has a wide range of applicability (high-to-weak absorption regions, thick-to-ultrathin films). Reliability tests are performed on films and multilayers based on a well-known material (indium tin oxide) by deliberately changing the film structural quality through doping, thickness-tuning and the underlying supporting-film. Results are found consistent with information obtained by standard (optical and structural) analysis, the basic-method and band gap values reported in the literature. The discussed example-applications demonstrate the ability of the ML-method to overcome the drawbacks commonly limiting an accurate description of multilayer architectures.

  4. Neonatal Ear Molding: Timing and Technique.

    PubMed

    Anstadt, Erin Elizabeth; Johns, Dana Nicole; Kwok, Alvin Chi-Ming; Siddiqi, Faizi; Gociman, Barbu

    2016-03-01

    The incidence of auricular deformities is believed to be ∼11.5 per 10,000 births, excluding children with microtia. Although not life-threatening, auricular deformities can cause undue distress for patients and their families. Although surgical procedures have traditionally been used to reconstruct congenital auricular deformities, ear molding has been gaining acceptance as an efficacious, noninvasive alternative for the treatment of newborns with ear deformations. We present the successful correction of bilateral Stahl's ear deformity in a newborn through a straightforward, nonsurgical method implemented on the first day of life. The aim of this report is to make pediatric practitioners aware of an effective and simple molding technique appropriate for correction of congenital auricular anomalies. In addition, it stresses the importance of very early initiation of ear cartilage molding for achieving the desired outcome. Copyright © 2016 by the American Academy of Pediatrics.

  5. Analysis of capture-recapture models with individual covariates using data augmentation

    USGS Publications Warehouse

    Royle, J. Andrew

    2009-01-01

    I consider the analysis of capture-recapture models with individual covariates that influence detection probability. Bayesian analysis of the joint likelihood is carried out using a flexible data augmentation scheme that facilitates analysis by Markov chain Monte Carlo methods, and a simple and straightforward implementation in freely available software. This approach is applied to a study of meadow voles (Microtus pennsylvanicus) in which auxiliary data on a continuous covariate (body mass) are recorded, and it is thought that detection probability is related to body mass. In a second example, the model is applied to an aerial waterfowl survey in which a double-observer protocol is used. The fundamental unit of observation is the cluster of individual birds, and the size of the cluster (a discrete covariate) is used as a covariate on detection probability.

  6. Real-Time XRD Studies of Li-O2 Electrochemical Reaction in Nonaqueous Lithium-Oxygen Battery.

    PubMed

    Lim, Hyunseob; Yilmaz, Eda; Byon, Hye Ryung

    2012-11-01

    Understanding of the electrochemical process in the rechargeable Li-O2 battery has suffered from a lack of proper analytical tools, especially for identifying the chemical species and the number of electrons involved in the discharge/recharge process. Here we present a simple and straightforward analytical method for simultaneously obtaining chemical and quantitative information on Li2O2 (the discharge product) and byproducts using in situ XRD measurements. By real-time monitoring of the solid-state Li2O2 peak area, the accurate efficiency of Li2O2 formation and the number of electrons can be evaluated during full discharge. Furthermore, by observing the sequential area change of the Li2O2 peak during recharge, we found nonlinearity of the Li2O2 decomposition rate for the first time in an ether-based electrolyte.

  7. A Thermal Management Systems Model for the NASA GTX RBCC Concept

    NASA Technical Reports Server (NTRS)

    Traci, Richard M.; Farr, John L., Jr.; Laganelli, Tony; Walker, James (Technical Monitor)

    2002-01-01

    The Vehicle Integrated Thermal Management Analysis Code (VITMAC) was further developed to aid the analysis, design, and optimization of propellant and thermal management concepts for advanced propulsion systems. The computational tool is based on engineering level principles and models. A graphical user interface (GUI) provides a simple and straightforward method to assess and evaluate multiple concepts before undertaking more rigorous analysis of candidate systems. The tool incorporates the Chemical Equilibrium and Applications (CEA) program and the RJPA code to permit heat transfer analysis of both rocket and air breathing propulsion systems. Key parts of the code have been validated with experimental data. The tool was specifically tailored to analyze rocket-based combined-cycle (RBCC) propulsion systems being considered for space transportation applications. This report describes the computational tool and its development and verification for NASA GTX RBCC propulsion system applications.

  8. Hartree and Exchange in Ensemble Density Functional Theory: Avoiding the Nonuniqueness Disaster.

    PubMed

    Gould, Tim; Pittalis, Stefano

    2017-12-15

    Ensemble density functional theory is a promising method for the efficient and accurate calculation of excitations of quantum systems, at least if useful functionals can be developed to broaden its domain of practical applicability. Here, we introduce a guaranteed single-valued "Hartree-exchange" ensemble density functional, E_{Hx}[n], in terms of the right derivative of the universal ensemble density functional with respect to the coupling constant at vanishing interaction. We show that E_{Hx}[n] is straightforwardly expressible using block eigenvalues of a simple matrix [Eq. (14)]. Specialized expressions for E_{Hx}[n] from the literature, including those involving superpositions of Slater determinants, can now be regarded as originating from the unifying picture presented here. We thus establish a clear and practical description for Hartree and exchange in ensemble systems.

  9. Image Gently(SM): a national education and communication campaign in radiology using the science of social marketing.

    PubMed

    Goske, Marilyn J; Applegate, Kimberly E; Boylan, Jennifer; Butler, Priscilla F; Callahan, Michael J; Coley, Brian D; Farley, Shawn; Frush, Donald P; Hernanz-Schulman, Marta; Jaramillo, Diego; Johnson, Neil D; Kaste, Sue C; Morrison, Gregory; Strauss, Keith J

    2008-12-01

    Communication campaigns are an accepted method for altering societal attitudes, increasing knowledge, and achieving social and behavioral change particularly within public health and the social sciences. The Image Gently(SM) campaign is a national education and awareness campaign in radiology designed to promote the need for and opportunities to decrease radiation to children when CT scans are indicated. In this article, the relatively new science of social marketing is reviewed and the theoretical basis for an effective communication campaign in radiology is discussed. Communication strategies are considered and the type of outcomes that should be measured are reviewed. This methodology has demonstrated that simple, straightforward safety messages on radiation protection targeted to medical professionals throughout the radiology community, utilizing multiple media, can affect awareness potentially leading to change in practice.

  10. Modified overdentures for the management of oligodontia and developmental defects.

    PubMed

    Abadi, B J; Kimmel, N A; Falace, D A

    1982-01-01

    A technique for the construction of complete dentures over unaltered natural teeth has been described and illustrated for three different situations. The procedure is straightforward and simple and varies only slightly from conventional overdenture construction. The technique offers several advantages for a patient who wishes to keep the remaining natural teeth unaltered but who requires significant functional or esthetic improvement. Since the teeth are unaltered, any type of future treatment may be considered at any time without being compromised. This is an important factor to consider for the young patient. The cost, when compared to the fabrication of a fixed or cast removable prosthesis, is significantly less, while still providing acceptable esthetics and function. The versatility of this procedure allows its use in a number of situations which are not amenable to more complicated treatment methods.

  11. Simultaneous Determination of Procainamide and N-acetylprocainamide in Rat Plasma by Ultra-High-Pressure Liquid Chromatography Coupled with a Diode Array Detector and Its Application to a Pharmacokinetic Study in Rats.

    PubMed

    Balla, Anusha; Cho, Kwan Hyung; Kim, Yu Chul; Maeng, Han-Joo

    2018-03-30

    A simple, sensitive, and reliable reversed-phase ultra-high-pressure liquid chromatography (UHPLC) method coupled with a diode array detector (DAD) for the simultaneous determination of procainamide (PA) and its major metabolite, N-acetylprocainamide (NAPA), in rat plasma was developed and validated. A simple deproteinization with methanol was applied to the rat plasma samples, which were analyzed using UHPLC equipped with DAD at 280 nm and a Synergi™ 4 µm polar reversed-phase column, using 1% acetic acid (pH 5.5) and methanol (76:24, v/v) as eluent in isocratic mode at a flow rate of 0.2 mL/min. The method showed good linearity (r² > 0.998) over the concentration ranges of 20-100,000 and 20-10,000 ng/mL for PA and NAPA, respectively. Intra- and inter-day accuracies ranged from 97.7 to 110.9% for PA and from 99.7 to 109.2% for NAPA, with precision <10.5% for both. The lower limit of quantification was 20 ng/mL for both compounds. This is the first report of a UHPLC-DAD bioanalytical method for the simultaneous measurement of PA and NAPA. The most obvious advantage of this method over previously reported HPLC methods is that it requires small sample and injection volumes, with a straightforward, one-step sample preparation, overcoming the limitations of previous methods that use large sample volumes and complex sample preparation. The devised method was successfully applied to the quantification of PA and NAPA after an intravenous bolus administration of 10 mg/kg procainamide hydrochloride to rats.
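
    A calibration linearity check of the sort reported above (least-squares fit of standards, r² evaluation, back-calculation of an unknown) can be sketched as follows; the standard concentrations and peak areas are invented illustrative numbers, not the study's data.

```python
# Hedged sketch of a calibration-curve linearity check: ordinary least
# squares on hypothetical standards, then back-calculation of an unknown
# concentration from its detector response.
def linfit(xs, ys):
    """Fit y = a*x + b by ordinary least squares; return (a, b, r_squared)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return a, b, 1.0 - ss_res / ss_tot

# Hypothetical standards (ng/mL) and detector peak areas.
conc = [20, 100, 1000, 10000, 100000]
area = [0.41, 2.1, 20.4, 203.0, 2010.0]
slope, intercept, r2 = linfit(conc, area)

# Back-calculate the concentration of an unknown from its peak area.
unknown = (55.0 - intercept) / slope
```

    In practice an acceptance criterion such as r² > 0.998 (as in the abstract) would be applied to the fitted curve before quantifying samples.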

  12. Eigenstates and dynamics of Hooke's atom: Exact results and path integral simulations

    NASA Astrophysics Data System (ADS)

    Gholizadehkalkhoran, Hossein; Ruokosenmäki, Ilkka; Rantala, Tapio T.

    2018-05-01

The system of two interacting electrons in a one-dimensional harmonic potential, or Hooke's atom, is revisited. On one hand, it serves as a model for quantum dots in the strong confinement regime; on the other, it provides a hard test bench for new methods because of the "space splitting" arising from the one-dimensional Coulomb potential. Here, we complete the numerous previous studies of the ground state of Hooke's atom by including the excited states and dynamics, not considered earlier. With perturbation theory, we reach essentially exact eigenstate energies and wave functions for the strong confinement regime as novel results. We also consider quantum dynamics induced by an external perturbation in a simple separable case. Finally, we test our novel numerical approach based on real-time path integrals (RTPIs) in reproducing the above. The RTPI turns out to be a straightforward approach, with an exact account of electronic correlations, for solving the eigenstates and dynamics without the conventional restrictions of electronic structure methods.

  13. Determination of bulk and interface density of states in metal oxide semiconductor thin-film transistors by using capacitance-voltage characteristics

    NASA Astrophysics Data System (ADS)

    Wei, Xixiong; Deng, Wanling; Fang, Jielin; Ma, Xiaoyu; Huang, Junkai

    2017-10-01

A physics-based, straightforward extraction technique for the interface and bulk density of states in metal oxide semiconductor thin-film transistors (TFTs) is proposed, using the capacitance-voltage (C-V) characteristics. The interface trap density distribution with energy is extracted from analysis of the C-V characteristics, and the bulk trap density is then determined from the obtained interface state distribution. With this method it is found that, for the interface traps, the deep-state density near mid-gap is approximately constant while the tail-state density increases exponentially with energy; the bulk trap density is a superposition of exponential deep states and exponential tail states. The validity of the extraction is verified by comparison with measured current-voltage (I-V) characteristics and with simulation results from a technology computer-aided design (TCAD) model. The extraction requires no numerical iteration and is simple, fast, and accurate, making it very useful for TFT device characterization.

  14. Robots In War: Issues Of Risk And Ethics

    DTIC Science & Technology

    2009-01-01

…unexpected, untested ways. (And even straightforward, simple rules such as Asimov's Laws of Robotics (Asimov, 1950) can create unexpected dilemmas.) Likewise, we may understand each rule of engagement and believe them to be sensible, but are they truly consistent…

  15. Exploiting the acylating nature of the imide-Ugi intermediate: a straightforward synthesis of tetrahydro-1,4-benzodiazepin-2-ones.

    PubMed

    Mossetti, Riccardo; Saggiorato, Dèsirèe; Tron, Gian Cesare

    2011-12-16

    We describe a simple and novel protocol for the synthesis of tetrahydro-1,4-benzodiazepin-2-ones with three points of diversity, exploiting the acylating properties of the recently rediscovered Ugi-imide. The final compounds can be easily prepared in three synthetic steps using a multicomponent reaction, a Staudinger reduction, and an acylative protocol, with good to excellent yields for each synthetic step.

  16. The Right to Fair and Equal Treatment: A Straightforward Guide to Human Rights and the Canadian Human Rights Act.

    ERIC Educational Resources Information Center

    G. Allan Roeher Inst., Toronto (Ontario).

    This book, written in simple language, explains the Canadian Human Rights Act and how and when it can be used to assist individuals with mental handicaps. The book is designed to help people learn their rights as citizens of Canada and learn that if something wrong is done to them they can do something to change it. It explains what human rights…

  17. Acoustic levitation and the Boltzmann-Ehrenfest principle

    NASA Technical Reports Server (NTRS)

    Putterman, S.; Rudnick, Joseph; Barmatz, M.

    1989-01-01

    The Boltzmann-Ehrenfest principle of adiabatic invariance relates the acoustic potential acting on a sample positioned in a single-mode cavity to the shift in resonant frequency caused by the presence of this sample. This general and simple relation applies to samples and cavities of arbitrary shape, dimension, and compressibility. Positioning forces and torques can, therefore, be determined from straightforward measurements of frequency shifts. Applications to the Rayleigh disk phenomenon and levitated cylinders are presented.

  18. A Model for S&T Information Provision to Small R&D Systems in Developing Countries with Case Studies in Ethiopia and Tanzania. Stockholm Papers in Library and Information Science.

    ERIC Educational Resources Information Center

    Winkel, Annette; Schwarz, Stephan

    By carefully considering the special characteristics of two small African scientific and technical (S&T) information systems for research and development (R&D), this report defines a simple and straightforward model which can be easily implemented in similar situations with a minimum of external support. The model is designed to build up a…

  19. Low-Level Graphics Cues For Solid Image Interpretation

    NASA Astrophysics Data System (ADS)

    McAnulty, Michael A.; Gemmill, Jill P.; Kegley, Kathleen A.; Chiu, Haw-Tsang

    1984-08-01

Several straightforward techniques for displaying arbitrary solids of the sort encountered in the life sciences are presented, all variations of simple three-dimensional scatter plots. They are all targeted at a medium-cost raster display (an AED-512 has been used here); practically any host computer may be used to implement them. All techniques are broadly applicable and were implemented as Master's degree projects. The major hardware constraint is data transmission speed, and this is met by minimizing the amount of graphical data, ignoring enhancement of the data, and using terminal scan-conversion and aspect firmware wherever possible. Three simple rendering techniques and the use of several graphics cues are described.

  20. Time of Flight Electrochemistry: Diffusion Coefficient Measurements Using Interdigitated Array (IDA) Electrodes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Fei; Kolesov, Grigory; Parkinson, Bruce A.

    2014-09-26

A simple and straightforward method for measuring diffusion coefficients using interdigitated array (IDA) electrodes is reported. The method does not require that the exact electrode area be known but depends only on the size of the gap between the IDA electrode pairs. Electroactive molecules produced at the generator electrode of the IDA by a voltage step or scan can diffuse to the collector electrode, and the time delay before the current for the reverse electrochemical reaction is detected at the collector is used to calculate the diffusion coefficient. The measurement of the diffusion rate of Ru(NH3)6^2+ in aqueous solution is used as an example of measuring diffusion coefficients with this method. Additionally, a digital simulation of the electrochemical response of the IDA electrodes was used to simulate the entire current/voltage/time behavior of the system and verify the experimentally measured diffusion coefficients. This work was supported as part of the Center for Molecular Electrocatalysis, an Energy Frontier Research Center funded by the Department of Energy, Office of Science, Office of Basic Energy Sciences.
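The abstract's delay-time idea can be sketched with a simple back-of-the-envelope estimate. The one-dimensional transit-time relation t ≈ g²/(2D) and the numbers below are our illustration, not the authors' exact analysis or data:

```python
# Illustrative sketch (not the paper's exact analysis): estimate a diffusion
# coefficient from the generator-to-collector delay time of an IDA electrode
# pair, assuming a 1D diffusive transit-time relation t ~ g^2 / (2*D).

def diffusion_coefficient(gap_m, delay_s):
    """Estimate D (m^2/s) from the electrode gap (m) and measured delay (s)."""
    return gap_m ** 2 / (2.0 * delay_s)

# Hypothetical numbers: a 2 micrometre gap and a 2.7 ms delay give D on the
# order of 7e-10 m^2/s, typical of small ions in aqueous solution.
D = diffusion_coefficient(2e-6, 2.7e-3)
print(f"D = {D:.2e} m^2/s")
```

The inverse use is equally simple: with a known D, the expected delay fixes the gap size one can resolve.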

  1. Differential Variance Analysis: a direct method to quantify and visualize dynamic heterogeneities

    NASA Astrophysics Data System (ADS)

    Pastore, Raffaele; Pesce, Giuseppe; Caggioni, Marco

    2017-03-01

Many amorphous materials show spatially heterogeneous dynamics, as different regions of the same system relax at different rates. Such a signature, known as Dynamic Heterogeneity, has been crucial to understanding the nature of the jamming transition in simple model systems and is currently considered very promising for characterizing more complex fluids of industrial and biological relevance. Unfortunately, measurements of dynamic heterogeneities typically require sophisticated experimental set-ups and are performed by few specialized groups. It is now possible to quantitatively characterize the relaxation process and the emergence of dynamic heterogeneities using a straightforward method, here validated on video microscopy data of hard-sphere colloidal glasses. We call this method Differential Variance Analysis (DVA), since it focuses on the variance of the differential frames, obtained by subtracting images at different time lags. Moreover, direct visualization of dynamic heterogeneities naturally appears in the differential frames when the time lag is set to the one corresponding to the maximum dynamic susceptibility. This approach opens the way to effectively characterize and tailor a wide variety of soft materials, from complex formulated products to biological tissues.

  2. Taking Halo-Independent Dark Matter Methods Out of the Bin

    DOE PAGES

    Fox, Patrick J.; Kahn, Yonatan; McCullough, Matthew

    2014-10-30

We develop a new halo-independent strategy for analyzing emerging DM hints, utilizing the method of extended maximum likelihood. This approach does not require the binning of events, making it uniquely suited to the analysis of emerging DM direct detection hints. It determines a preferred envelope, at a given confidence level, for the DM velocity integral which best fits the data using all available information and can be used even in the case of a single anomalous scattering event. All of the halo-independent information from a direct detection result may then be presented in a single plot, allowing simple comparisons between multiple experiments. This results in the halo-independent analogue of the usual mass and cross-section plots found in typical direct detection analyses, where limit curves may be compared with best-fit regions in halo-space. The method is straightforward to implement, using already-established techniques, and its utility is demonstrated through the first unbinned halo-independent comparison of the three anomalous events observed in the CDMS-Si detector with recent limits from the LUX experiment.

  3. Analysis of volatile compounds by open-air ionization mass spectrometry.

    PubMed

    Meher, Anil Kumar; Chen, Yu-Chie

    2017-05-08

This study demonstrates a simple method for rapid and in situ identification of volatile and endogenous compounds in culinary spice samples through mass spectrometry (MS). The method only requires a holder that places a solid spice sample (2-3 mm) close to a mass spectrometer inlet to which a high voltage is applied. Volatile species responsible for the aroma of the spice samples can then be readily detected by the mass spectrometer. Sample pretreatment is not required prior to MS analysis, and no solvent is used during the analysis. The high voltage applied to the inlet of the mass spectrometer induces the ionization of volatile compounds released from the solid spice samples; moisture in the air also contributes to their ionization. Dried spices, including cinnamon and cloves, are used as model samples to demonstrate this straightforward MS analysis, which can be completed within a few seconds. We also demonstrate the suitability of the method for rapid screening of cinnamon quality through detection of the presence of a hepatotoxic agent, coumarin. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. A rapid, straightforward, and print house compatible mass fabrication method for integrating 3D paper-based microfluidics.

    PubMed

    Xiao, Liangpin; Liu, Xianming; Zhong, Runtao; Zhang, Kaiqing; Zhang, Xiaodi; Zhou, Xiaomian; Lin, Bingcheng; Du, Yuguang

    2013-11-01

Three-dimensional (3D) paper-based microfluidics, which features high performance and rapid determination, promises to carry out multistep sample pretreatment and ordered chemical reactions, and has been used for medical diagnosis, cell culture, environmental determination, and other applications with broad market prospects. However, the existing fabrication methods for 3D paper-based microfluidics have drawbacks: cumbersome and time-consuming device assembly; expensive and difficult manufacturing processes; and contamination caused by organic reagents used during fabrication. Here, we present a simple printing-bookbinding method for the mass fabrication of 3D paper-based microfluidics. This approach involves two main steps: (i) wax printing and (ii) bookbinding. We tested the delivery capability, diffusion rate, and homogeneity, and demonstrated the applicability of the device to chemical analysis by nitrite colorimetric assays. The described method is rapid (<30 s), cheap, easy to manipulate, and compatible with the flat-stitching method that is common in a print house, making it an ideal scheme for large-scale production of 3D paper-based microfluidics. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Finite difference time domain calculation of transients in antennas with nonlinear loads

    NASA Technical Reports Server (NTRS)

    Luebbers, Raymond J.; Beggs, John H.; Kunz, Karl S.; Chamberlin, Kent

    1991-01-01

Determining transient electromagnetic fields in antennas with nonlinear loads is a challenging problem. Typical methods used involve calculating frequency domain parameters at a large number of different frequencies, then applying Fourier transform methods plus nonlinear equation solution techniques. If the antenna is simple enough so that the open circuit time domain voltage can be determined independently of the effects of the nonlinear load on the antenna's current, time stepping methods can be applied in a straightforward way. Here, transient fields for antennas with more general geometries are calculated directly using Finite Difference Time Domain (FDTD) methods. In each FDTD cell which contains a nonlinear load, a nonlinear equation is solved at each time step. As a test case, the transient current in a long dipole antenna with a nonlinear load excited by a pulsed plane wave is computed using this approach. The results agree well with both calculated and measured results previously published. The approach given here extends the applicability of the FDTD method to problems involving scattering from targets, including nonlinear loads and materials, and to coupling between antennas containing nonlinear loads. It may also be extended to propagation through nonlinear materials.
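The per-cell step described above (one nonlinear equation solved at each time step) can be sketched outside of FDTD proper. The stand-in model below is our assumption, not the paper's: a diode load fed by a Thevenin-equivalent source, with the node equation solved by Newton iteration at each "time step", which is the same pattern a nonlinear-load FDTD cell update uses:

```python
import math

# Illustration only: Newton solve of a diode-load node equation, repeated
# once per time step, mimicking the per-cell solve inside a nonlinear-load
# FDTD update. Model (assumed): diode current i = Is*(exp(v/vt) - 1) fed by
# a Thevenin source vs through resistance R, so the node equation is
#   f(v) = (vs - v)/R - Is*(exp(v/vt) - 1) = 0.

def solve_diode_node(vs, R=50.0, Is=1e-12, vt=0.0259, v0=0.5, tol=1e-12):
    """Newton iteration for the diode node voltage at one time step."""
    v = v0
    for _ in range(100):
        e = math.exp(v / vt)
        f = (vs - v) / R - Is * (e - 1.0)
        df = -1.0 / R - Is * e / vt
        step = f / df
        v -= step
        if abs(step) < tol:
            break
    return v

# Sweep a pulsed source sample by sample: one nonlinear solve per step.
voltages = [solve_diode_node(vs) for vs in (0.0, 0.2, 0.5, 1.0)]
print(voltages)
```

In an actual FDTD code the Thevenin source would be replaced by the field update equations of the cell containing the load, but the inner Newton loop has the same shape.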

  6. Extending the range of real time density matrix renormalization group simulations

    NASA Astrophysics Data System (ADS)

    Kennes, D. M.; Karrasch, C.

    2016-03-01

We discuss a few simple modifications to time-dependent density matrix renormalization group (DMRG) algorithms which allow access to larger time scales. We specifically aim at beginners and present practical aspects of how to implement these modifications within any standard matrix product state (MPS) based formulation of the method. Most importantly, we show how to 'combine' the Schrödinger and Heisenberg time evolutions of arbitrary pure states | ψ 〉 and operators A in the evaluation of 〈A〉ψ(t) = 〈 ψ | A(t) | ψ 〉 . This includes quantum quenches. The generalization to (non-)thermal mixed state dynamics 〈A〉ρ(t) = Tr [ ρA(t) ] induced by an initial density matrix ρ is straightforward. In the context of linear response (ground state or finite temperature T > 0) correlation functions, one can extend the simulation time by a factor of two by 'exploiting time translation invariance', which is efficiently implementable within MPS DMRG. We present a simple analytic argument for why a recently introduced disentangler succeeds in reducing the effort of time-dependent simulations at T > 0. Finally, we advocate the python programming language as an elegant option for beginners to set up a DMRG code.

  7. Ultrasonic monitoring of droplets' evaporation: Application to human whole blood.

    PubMed

    Laux, D; Ferrandis, J Y; Brutin, D

    2016-09-01

During the evaporation of a colloidal droplet, a sol-gel transition can be observed, described by the desiccation time τD and the gelation time τG. These characteristic times, which can be linked to the viscoelastic properties of the droplet and to its composition, are classically determined by analyzing the evolution of droplet mass during evaporation. Even if monitoring mass evolution versus time seems straightforward, this approach is very sensitive to environmental conditions (vibrations, air flow…), as mass has to be evaluated very accurately using ultra-sensitive weighing scales. In this study we investigated the potential of ultrasonic shear reflectometry to assess τD and τG in a simple and reliable manner. To validate this approach, our study focused on the evaporation of blood droplets, on which a great deal of work has recently been published. Desiccation and gelation times measured with shear ultrasonic reflectometry correlated perfectly with values obtained from mass-versus-time analysis. This ultrasonic method, which is not very sensitive to environmental perturbations, is therefore very interesting for monitoring the drying of blood droplets in a simple manner, and is more generally suitable for investigating the evaporation of complex fluid droplets. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Scalability problems of simple genetic algorithms.

    PubMed

    Thierens, D

    1999-01-01

Scalable evolutionary computation has become an intensively studied research topic in recent years. The issue of scalability is predominant in any field of algorithmic design, but it became particularly relevant for the design of competent genetic algorithms once the scalability problems of simple genetic algorithms were understood. Here we present some of the work that has aided in getting a clear insight into the scalability problems of simple genetic algorithms. In particular, we discuss the important issue of building block mixing. We show how the need for mixing places a boundary in the GA parameter space that, together with the boundary from the schema theorem, delimits the region where the GA converges reliably to the optimum in problems of bounded difficulty. This region shrinks rapidly with increasing problem size unless the building blocks are tightly linked in the problem coding structure. In addition, we look at how straightforward extensions of the simple genetic algorithm, namely elitism, niching, and restricted mating, do not significantly improve its scalability.

  9. A simple method for identifying parameter correlations in partially observed linear dynamic models.

    PubMed

    Li, Pu; Vu, Quoc Dong

    2015-12-14

Parameter estimation represents one of the most significant challenges in systems biology, because biological models commonly contain a large number of parameters among which there may be functional interrelationships, leading to the problem of non-identifiability. Although identifiability analysis has been extensively studied by analytical as well as numerical approaches, systematic methods for remedying practically non-identifiable models have rarely been investigated. We propose a simple method for identifying pairwise correlations and higher-order interrelationships of parameters in partially observed linear dynamic models. This is done by deriving the output sensitivity matrix and analyzing the linear dependencies of its columns. Consequently, analytical relations between the identifiability of the model parameters and the initial conditions as well as the input functions can be derived. In the case of structural non-identifiability, identifiable combinations can be obtained by solving the resulting homogeneous linear equations. In the case of practical non-identifiability, experimental conditions (i.e. initial conditions and constant control signals) can be provided which are necessary for remedying the non-identifiability and for unique parameter estimation. It is noted that the approach does not consider noisy data. In this way, the practical non-identifiability issue, which is common in linear biological models, can be remedied. Several linear compartment models, including an insulin receptor dynamics model, are taken to illustrate the application of the proposed approach. Both the structural and the practical identifiability of partially observed linear dynamic models can be clarified by the proposed method. The result of this method provides important information for experimental design to remedy the practical non-identifiability where applicable. The derivation of the method is straightforward, and thus the algorithm can be easily implemented in a software package.
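The linear-algebra core of the approach described above — checking linear dependence among the columns of the output sensitivity matrix — is easy to sketch. The matrix below is entirely synthetic, constructed purely for illustration:

```python
import numpy as np

# Sketch of the identifiability check: columns of the output sensitivity
# matrix S (rows = sample times, columns = parameters) that are linearly
# dependent indicate correlated, non-identifiable parameters. S here is
# made up, with column 2 = column 0 + 2 * column 1 by construction.
S = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 2.0],
              [1.0, 1.0, 3.0],
              [2.0, 1.0, 4.0]])

rank = np.linalg.matrix_rank(S)
n_params = S.shape[1]
print(f"rank {rank} < {n_params} parameters -> correlated parameters exist")

# The null space of S gives the identifiable combination: the last right
# singular vector recovers the dependency c0 + 2*c1 - c2 = 0 (up to scale).
_, _, vt_rows = np.linalg.svd(S)
null_vec = vt_rows[-1]
print(np.round(null_vec / null_vec[0], 6))
```

For a real model the rows of S would be output sensitivities d y(t_i)/d p_j evaluated along a nominal trajectory, and near-zero singular values (rather than exact zeros) would flag practical non-identifiability.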

  10. Elements of a next generation time-series ASCII data file format for Earth Sciences

    NASA Astrophysics Data System (ADS)

    Webster, C. J.

    2015-12-01

Data in ASCII comma separated value (CSV) format are recognized as the most simple, straightforward and readable type of data present in the geosciences. Many scientific workflows developed over the years rely on data using this simple format. However, there is a need for a lightweight ASCII header format standard that is easy to create and easy to work with. Current OGC grade XML standards are complex and difficult to implement for researchers with few resources. Ideally, such a format should provide the data in CSV for easy consumption by generic applications such as spreadsheets. The format should use an existing time standard. The header should be easily human readable as well as machine parsable. The metadata format should be extendable to allow vocabularies to be adopted as they are created by external standards bodies. The creation of such a format will increase the productivity of software engineers and scientists because fewer translators and checkers would be required.
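A minimal sketch of the kind of file the abstract argues for: a human-readable, machine-parsable ASCII header in front of plain CSV. The `#`-prefixed key-value header convention and the field names are our assumptions for illustration, not a published standard:

```python
import csv
import io

# Hypothetical example of a lightweight ASCII time-series file: a block of
# "# key: value" header lines (human readable and machine parsable) followed
# by ordinary CSV that a spreadsheet can consume directly.
SAMPLE = """\
# title: magnetometer station A
# time_standard: ISO 8601 (UTC)
# units: nT
time,bx
2015-12-01T00:00:00Z,30512.1
2015-12-01T00:01:00Z,30511.8
"""

def read_timeseries(text):
    """Split a header-plus-CSV file into a metadata dict and data rows."""
    meta, data_lines = {}, []
    for line in text.splitlines():
        if line.startswith("#"):
            key, _, value = line.lstrip("# ").partition(":")
            meta[key.strip()] = value.strip()
        else:
            data_lines.append(line)
    rows = list(csv.DictReader(io.StringIO("\n".join(data_lines))))
    return meta, rows

meta, rows = read_timeseries(SAMPLE)
print(meta["units"], len(rows))  # nT 2
```

Because the data part is plain CSV, generic tools still open it unchanged; only header-aware readers gain the metadata.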

  11. Hierarchical modeling for reliability analysis using Markov models. B.S./M.S. Thesis - MIT

    NASA Technical Reports Server (NTRS)

    Fagundo, Arturo

    1994-01-01

    Markov models represent an extremely attractive tool for the reliability analysis of many systems. However, Markov model state space grows exponentially with the number of components in a given system. Thus, for very large systems Markov modeling techniques alone become intractable in both memory and CPU time. Often a particular subsystem can be found within some larger system where the dependence of the larger system on the subsystem is of a particularly simple form. This simple dependence can be used to decompose such a system into one or more subsystems. A hierarchical technique is presented which can be used to evaluate these subsystems in such a way that their reliabilities can be combined to obtain the reliability for the full system. This hierarchical approach is unique in that it allows the subsystem model to pass multiple aggregate state information to the higher level model, allowing more general systems to be evaluated. Guidelines are developed to assist in the system decomposition. An appropriate method for determining subsystem reliability is also developed. This method gives rise to some interesting numerical issues. Numerical error due to roundoff and integration are discussed at length. Once a decomposition is chosen, the remaining analysis is straightforward but tedious. However, an approach is developed for simplifying the recombination of subsystem reliabilities. Finally, a real world system is used to illustrate the use of this technique in a more practical context.

  12. Image denoising for real-time MRI.

    PubMed

    Klosowski, Jakob; Frahm, Jens

    2017-03-01

The aim was to develop an image noise filter suitable for MRI in real time (acquisition and display), which preserves small isolated details and efficiently removes background noise without introducing blur, smearing, or patch artifacts. The proposed method extends the nonlocal means algorithm to adapt the influence of the original pixel value according to a simple measure of patch regularity. Detail preservation is improved by a compactly supported weighting kernel that closely approximates the commonly used exponential weight, while an oracle step ensures efficient background noise removal. Denoising experiments were conducted on real-time images of healthy subjects reconstructed by regularized nonlinear inversion from radial acquisitions with pronounced undersampling. The filter leads to a signal-to-noise ratio (SNR) improvement of at least 60% without noticeable artifacts or loss of detail. The method compares visually with more complex state-of-the-art filters such as the block-matching three-dimensional filter, and in certain cases better matches the underlying noise model. Acceleration of the computation to more than 100 complex frames per second using graphics processing units is straightforward. The sensitivity of nonlocal means to small details can be significantly increased by the simple strategies presented here, which allows partial restoration of SNR in iteratively reconstructed images without introducing a noticeable time delay or image artifacts. Magn Reson Med 77:1340-1352, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
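The family of filters discussed above can be illustrated with a toy one-dimensional nonlocal means. This is our generic sketch of the base algorithm the paper extends, using the plain exponential weight, not the authors' adaptive center weight, compact kernel, or oracle step:

```python
import numpy as np

# Toy 1D nonlocal-means sketch: each sample is replaced by a weighted
# average of all samples, where the weight depends on how similar the
# patches surrounding the two samples look.
def nlm_1d(x, patch=2, h=0.5):
    n = len(x)
    pad = np.pad(x, patch, mode="edge")
    patches = np.array([pad[i:i + 2 * patch + 1] for i in range(n)])
    out = np.empty(n)
    for i in range(n):
        d2 = ((patches - patches[i]) ** 2).mean(axis=1)  # patch distances
        w = np.exp(-d2 / h ** 2)                         # exponential weight
        out[i] = (w * x).sum() / w.sum()
    return out

# Synthetic check: denoising a noisy step signal should reduce the mean
# squared error relative to the clean signal.
rng = np.random.default_rng(2)
clean = np.repeat([0.0, 1.0], 50)
noisy = clean + rng.normal(0.0, 0.1, 100)
denoised = nlm_1d(noisy)
print(((denoised - clean) ** 2).mean() < ((noisy - clean) ** 2).mean())
```

The paper's modifications slot into the two commented lines: a compactly supported kernel replaces `np.exp`, and the self-weight `w[i]` is adapted by a patch-regularity measure.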

  13. A novel technique to solve nonlinear higher-index Hessenberg differential-algebraic equations by Adomian decomposition method.

    PubMed

    Benhammouda, Brahim

    2016-01-01

Since 1980, the Adomian decomposition method (ADM) has been extensively used as a simple, powerful tool that applies directly to solve different kinds of nonlinear equations, including functional, differential, integro-differential and algebraic equations. However, for differential-algebraic equations (DAEs) the ADM has been applied in only four earlier works. There, the DAEs are first pre-processed by transformations such as index reductions before applying the ADM. The drawback of such transformations is that they can involve complex algorithms, can be computationally expensive and may lead to non-physical solutions. The purpose of this paper is to propose a novel technique that applies the ADM directly to solve a class of nonlinear higher-index Hessenberg DAE systems efficiently. The main advantage of this technique is that, firstly, it avoids complex transformations like index reductions and leads to a simple general algorithm. Secondly, it reduces the computational work by solving only linear algebraic systems with a constant coefficient matrix at each iteration, except for the first iteration where the algebraic system is nonlinear (if the DAE is nonlinear with respect to the algebraic variable). To demonstrate the effectiveness of the proposed technique, we apply it to a nonlinear index-three Hessenberg DAE system with nonlinear algebraic constraints. The technique is straightforward and can be programmed in Maple or Mathematica to simulate real application problems.

  14. Dynamic optimization of chemical processes using ant colony framework.

    PubMed

    Rajesh, J; Gupta, K; Kusumakar, H S; Jayaraman, V K; Kulkarni, B D

    2001-11-01

The ant colony framework is illustrated by considering the dynamic optimization of six important benchmark examples. This new computational tool is simple to implement and can tackle problems with state as well as terminal constraints in a straightforward fashion. It requires fewer grid points to reach the global optimum at relatively low computational effort. The examples, with varying degrees of complexity, illustrate its potential for solving a large class of process optimization problems in chemical engineering.

  15. Enrico Fermi, flaws and all

    NASA Astrophysics Data System (ADS)

    Formato, Megan

    2018-01-01

With the title The Last Man Who Knew Everything and a first chapter entitled “Prodigy,” a reader could be forgiven for expecting David Schwartz’s new biography of Enrico Fermi to be a straightforward hagiography. Luckily, Schwartz’s ambitions are not as simple as providing yet another account of a great man of 20th-century physics. He has other, thornier questions in mind, some of which he credibly addresses and others that he handles less convincingly.

  16. A straightforward, validated liquid chromatography coupled to tandem mass spectrometry method for the simultaneous detection of nine drugs of abuse and their metabolites in hair and nails.

    PubMed

    Cappelle, Delphine; De Doncker, Mireille; Gys, Celine; Krysiak, Kamelia; De Keukeleire, Steven; Maho, Walid; Crunelle, Cleo L; Dom, Geert; Covaci, Adrian; van Nuijs, Alexander L N; Neels, Hugo

    2017-04-01

Hair and nails allow for a stable accumulation of compounds over time and retrospective investigation of past exposure and/or consumption. Owing to their long window of detection (weeks to months), analysis of these matrices can provide information complementary to blood and urine analysis, or can be used on its own when, for example, elimination from the body has already occurred. Drugs of abuse are often used together, and therefore multi-analyte methods capable of detecting several substances and their metabolites in a single run are of importance. This paper presents the development and validation of a method based on liquid chromatography coupled to tandem mass spectrometry (LC-MS/MS) for the simultaneous detection of nine drugs of abuse and their metabolites in hair and nails. We focused on a simple and straightforward sample preparation to reduce costs and allow application in routine laboratory practice. Chromatographic and mass spectrometric parameters, such as column type, mobile phase, and multiple reaction monitoring transitions, were optimized. The method was validated according to the European Medicines Agency guidelines with an assessment of specificity, limit of quantification (LOQ), linearity, accuracy, precision, carry-over, matrix effects, recovery, and process efficiency. Linearity ranged from 25 to 20,000 pg/mg for hair and from 50 to 20,000 pg/mg for nails, and the lowest calibration point met the requirements for the LOQ (25 pg/mg for hair and 50 pg/mg for nails). Although it was not the main focus of the article, the reliability of the method was proven through successful participation in a proficiency test and by investigation of authentic hair and nail samples from self-reported drug users. In the future, the method should allow comparison between the two matrices to acquire an in-depth knowledge of nail analysis and to define cutoff levels for nail analysis, as they exist for hair. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Kolmogorov-Smirnov statistical test for analysis of ZAP-70 expression in B-CLL, compared with quantitative PCR and IgV(H) mutation status.

    PubMed

    Van Bockstaele, Femke; Janssens, Ann; Piette, Anne; Callewaert, Filip; Pede, Valerie; Offner, Fritz; Verhasselt, Bruno; Philippé, Jan

    2006-07-15

    ZAP-70 has been proposed as a surrogate marker for immunoglobulin heavy-chain variable region (IgV(H)) mutation status, which is known as a prognostic marker in B-cell chronic lymphocytic leukemia (CLL). The flow cytometric analysis of ZAP-70 suffers from difficulties in standardization and interpretation. We applied the Kolmogorov-Smirnov (KS) statistical test to make the analysis more straightforward. We examined ZAP-70 expression by flow cytometry in 53 patients with CLL. Analysis was performed as initially described by Crespo et al. (N Engl J Med 2003;348:1764-1775) and, alternatively, by application of the KS statistical test comparing T cells with B cells. Receiver-operating-characteristic (ROC) curve analyses were performed to determine the optimal cut-off values for ZAP-70 measured by the two approaches. ZAP-70 protein expression was compared with ZAP-70 mRNA expression measured by quantitative PCR (qPCR) and with the IgV(H) mutation status. Both flow cytometric analyses correlated well with the molecular technique and proved to be of equal value in predicting the IgV(H) mutation status. Applying the KS test is reproducible, simple, and straightforward, and overcomes a number of difficulties encountered in the Crespo method. The KS statistical test is an essential part of the software delivered with modern routine analytical flow cytometers and is well suited for analysis of ZAP-70 expression in CLL. (c) 2006 International Society for Analytical Cytology.
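
    The core of the KS approach can be sketched generically (illustrative code, not the cytometer software): the statistic is the largest vertical gap between the empirical CDFs of two samples, e.g. T-cell versus B-cell ZAP-70 fluorescence values.

```python
# Two-sample Kolmogorov-Smirnov statistic: the maximum absolute difference
# between the empirical CDFs of two samples (tied values handled together).
def ks_statistic(sample_a, sample_b):
    a, b = sorted(sample_a), sorted(sample_b)
    na, nb = len(a), len(b)
    i = j = 0
    d = 0.0
    while i < na and j < nb:
        x = min(a[i], b[j])
        while i < na and a[i] == x:   # step past all ties in sample A
            i += 1
        while j < nb and b[j] == x:   # step past all ties in sample B
            j += 1
        d = max(d, abs(i / na - j / nb))
    return d

print(ks_statistic([1, 2, 3], [1, 2, 3]))     # 0.0: identical distributions
print(ks_statistic([1, 2, 3], [10, 20, 30]))  # 1.0: fully separated samples
```

    A D value near 0 indicates overlapping T- and B-cell distributions (ZAP-70 negative), while a large D indicates a clear shift.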

  18. Simple citric acid-catalyzed surface esterification of cellulose nanocrystals.

    PubMed

    Ávila Ramírez, Jhon Alejandro; Fortunati, Elena; Kenny, José María; Torre, Luigi; Foresti, María Laura

    2017-02-10

    A simple, straightforward route for the surface esterification of cellulose nanocrystals (CNC) is herein proposed. CNC obtained from microcrystalline cellulose were acetylated using citric acid as catalyst, an α-hydroxy acid present in citrus fruits and produced industrially by certain molds in sucrose- or glucose-containing media. No additional solvent was added to the system; instead, the acylant (acetic anhydride) was used in sufficient excess to allow CNC dispersion and proper suspension agitation. By tuning the catalyst load, CNC with two different degrees of substitution (DS = 0.18 and 0.34) were obtained. Acetylated cellulose nanocrystals were characterized in terms of chemical structure, crystallinity, morphology, thermal decomposition, and dispersion in a non-polar solvent. The results illustrate, for the first time, the suitability of the proposed protocol for the simple surface acetylation of cellulose nanocrystals. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. fluff: exploratory analysis and visualization of high-throughput sequencing data

    PubMed Central

    Georgiou, Georgios

    2016-01-01

    Summary. In this article we describe fluff, a software package that allows for simple exploration, clustering and visualization of high-throughput sequencing data mapped to a reference genome. The package contains three command-line tools to generate publication-quality figures in an uncomplicated manner using sensible defaults. Genome-wide data can be aggregated, clustered and visualized in a heatmap, according to different clustering methods. This includes a predefined setting to identify dynamic clusters between different conditions or developmental stages. Alternatively, clustered data can be visualized in a bandplot. Finally, fluff includes a tool to generate genomic profiles. As command-line tools, the fluff programs can easily be integrated into standard analysis pipelines. The installation is straightforward and documentation is available at http://fluff.readthedocs.org. Availability. fluff is implemented in Python and runs on Linux. The source code is freely available for download at https://github.com/simonvh/fluff. PMID:27547532

  20. One-step, simple, and green synthesis of tin dioxide/graphene nanocomposites and their application to lithium-ion battery anodes

    NASA Astrophysics Data System (ADS)

    Jiang, Zaixing; Zhang, Dongjie; Li, Yue; Cheng, Hao; Wang, Mingqiang; Wang, Xueqin; Bai, Yongping; Lv, Haibao; Yao, Yongtao; Shao, Lu; Huang, Yudong

    2014-10-01

    Graphene, with its extraordinary thermal, mechanical, and electrical properties, offers possibilities in a variety of applications. Recent advances in the synthesis of graphene composites using supercritical fluids are highlighted. Supercritical fluids exhibit unique features for the synthesis of composites owing to their low viscosity, high diffusivity, near-zero surface tension, and tunability. Here, we report the preparation of a tin dioxide (SnO2)/graphene nanocomposite through a supercritical CO2 method. We demonstrate that the SnO2 nanoparticles are homogeneously dispersed on the surface of the graphene sheets with a particle size of 2.3-2.6 nm. The SnO2/graphene nanocomposites exhibit higher lithium storage capacity and better cycling performance than comparable carbon nanotube (CNT) nanocomposites. The reported synthetic procedure is straightforward, green, and inexpensive, and it may be readily adopted to produce large quantities of graphene-based nanocomposites.

  1. The multilayer temporal network of public transport in Great Britain

    NASA Astrophysics Data System (ADS)

    Gallotti, Riccardo; Barthelemy, Marc

    2015-01-01

    Despite the widespread availability of information concerning public transport coming from different sources, it is extremely hard to have a complete picture, in particular at a national scale. Here, we integrate timetable data obtained from the United Kingdom open-data program together with timetables of domestic flights, and obtain a comprehensive snapshot of the temporal characteristics of the whole UK public transport system for a week in October 2010. In order to focus on multi-modal aspects of the system, we use a coarse graining procedure and define explicitly the coupling between different transport modes such as connections at airports, ferry docks, rail, metro, coach and bus stations. The resulting weighted, directed, temporal and multilayer network is provided in simple, commonly used formats, ensuring easy access and straightforward application of existing or purpose-built methods to this new and extensive dataset.
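
    A temporal multilayer edge list of the kind the abstract describes can be consumed with nothing more than the standard library. The column names below are assumptions for illustration, not the dataset's actual specification.

```python
import csv
import io

# Hypothetical temporal multilayer edge list: one timetabled event per row
# (origin, destination, departure, arrival, layer = transport mode).
data = """origin,destination,departure,arrival,layer
A,B,480,495,bus
B,C,500,540,rail
A,C,470,520,coach
"""

events = list(csv.DictReader(io.StringIO(data)))

# A time-respecting (temporal) path A -> B -> C across two layers:
# each leg must depart no earlier than the previous leg arrives.
legs = [e for e in events if e["layer"] in ("bus", "rail")]
ok = int(legs[1]["departure"]) >= int(legs[0]["arrival"])
print(ok)
```

    The same row-by-row representation extends directly to weighted, directed multilayer analyses: the `layer` field indexes the mode, and inter-layer coupling corresponds to transfers at shared stops.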

  2. Selection and characterization of a DNA aptamer to crystal violet.

    PubMed

    Chen, Yang; Wang, Jine; Zhang, Yajie; Xu, Lijun; Gao, Tian; Wang, Bing; Pei, Renjun

    2018-06-13

    Aptamers are short single-stranded DNA or RNA molecules that can be selected in vitro by systematic evolution of ligands by exponential enrichment (SELEX). In order to develop novel light-up probes to substitute for G-quadruplex (G4) structures, we selected a DNA aptamer for crystal violet (CV), a triphenylmethane light-up dye, by a modified affinity chromatography-based SELEX. The ssDNA pool was first coupled to streptavidin-coated agarose beads through a biotin-labeled complementary oligonucleotide, and the aptamer sequences were then released from the agarose beads by CV affinity. This method is simple, straightforward, and effective. An aptamer sequence with a low micromolar dissociation constant (Kd) and good specificity was obtained after 11 rounds of selection. The light-up properties of the CV-aptamer complex were also investigated, and CV showed dramatic fluorescence enhancement upon binding. The CV-aptamer pair could thus be used as a novel light-up fluorescent probe for the design of biosensors.

  3. Generation of genome-modified Drosophila cell lines using SwAP.

    PubMed

    Franz, Alexandra; Brunner, Erich; Basler, Konrad

    2017-10-02

    The ease of generating genetically modified animals and cell lines has been markedly increased by the recent development of the versatile CRISPR/Cas9 tool. However, while the isolation of isogenic cell populations is usually straightforward for mammalian cell lines, the generation of clonal Drosophila cell lines has remained a longstanding challenge, hampered by the difficulty of getting Drosophila cells to grow at low densities. Here, we describe a highly efficient workflow to generate clonal Cas9-engineered Drosophila cell lines using a combination of cell pools, limiting dilution in conditioned medium and PCR with allele-specific primers, enabling the efficient selection of a clonal cell line with a suitable mutation profile. We validate the protocol by documenting the isolation, selection and verification of eight independently Cas9-edited armadillo mutant Drosophila cell lines. Our method provides a powerful and simple workflow that improves the utility of Drosophila cells for genetic studies with CRISPR/Cas9.

  4. Synthesis and luminescent properties of uniform monodisperse LuPO4:Eu3+/Tb3+ hollow microspheres

    PubMed Central

    Gao, Yu; Yu, He; Shi, Cheng; Zhao, Guiyan; Bi, Yanfeng; Ding, Fu; Sun, Yaguang; Xu, Zhenhe

    2017-01-01

    Uniform monodisperse LuPO4:Eu3+/Tb3+ hollow microspheres with diameters of about 2.4 µm have been successfully synthesized by the combination of a facile homogeneous precipitation approach, an ion-exchange process and a calcination process. The possible formation mechanism for the hollow microspheres was presented. Furthermore, the luminescence properties revealed that the LuPO4:Eu3+ and LuPO4:Tb3+ phosphors show strong orange-red and green emissions under ultraviolet excitation, respectively, which endows this material with potential application in many fields, such as light display systems and optoelectronic devices. Since the synthetic process can be carried out under mild conditions, it should be straightforward to scale up the entire process for large-scale production of the LuPO4 hollow microspheres. Furthermore, this general and simple method may be of much significance in the synthesis of many other inorganic materials. PMID:29308268

  5. Fast self-assembly of silver nanoparticle monolayer in hydrophobic environment and its application as SERS substrate

    NASA Astrophysics Data System (ADS)

    Leiterer, Christian; Zopf, David; Seise, Barbara; Jahn, Franka; Weber, Karina; Popp, Jürgen; Cialla-May, Dana; Fritzsche, Wolfgang

    2014-09-01

    We present a method that allows the straightforward wet-chemical synthesis of silver nanoparticles (AgNPs), their hydrophobic coating and assembly into a monolayer, and their utilization as substrates for surface-enhanced Raman spectroscopy (SERS). In order to fabricate the SERS-active substrates, AgNPs were synthesized in water by chemical reduction of Ag+, coated with a hydrophobic shell (dodecanethiol), transferred to a non-polar solvent, and finally assembled through precipitation into a SERS-active self-assembled monolayer (SAM). Simple approaches for concentration and purification of the coated AgNPs are shown. The synthesized particles and SAMs were characterized by transmission electron microscopy, optical imaging, and spectroscopic measurements. This manuscript can be used as a do-it-yourself (DIY) tutorial that allows making SAMs from coated AgNPs (<15 nm) in any laboratory within less than 1 h and their utilization as potential low-cost SERS substrates (movies 1-4).

  6. Topological Classification of Crystalline Insulators through Band Structure Combinatorics

    NASA Astrophysics Data System (ADS)

    Kruthoff, Jorrit; de Boer, Jan; van Wezel, Jasper; Kane, Charles L.; Slager, Robert-Jan

    2017-10-01

    We present a method for efficiently enumerating all allowed, topologically distinct, electronic band structures within a given crystal structure in all physically relevant dimensions. The algorithm applies to crystals without time-reversal, particle-hole, chiral, or any other anticommuting or anti-unitary symmetries. The results presented match the mathematical structure underlying the topological classification of these crystals in terms of K-theory and therefore elucidate this abstract mathematical framework from a simple combinatorial perspective. Using a straightforward counting procedure, we classify all allowed topological phases of spinless particles in crystals in class A. Employing this classification, we study transitions between topological phases within class A that are driven by band inversions at high-symmetry points in the first Brillouin zone. This enables us to list all possible types of phase transitions within a given crystal structure and to identify whether or not they give rise to intermediate Weyl semimetallic phases.

  7. Structure in the 3D Galaxy Distribution. III. Fourier Transforming the Universe: Phase and Power Spectra

    NASA Technical Reports Server (NTRS)

    Scargle, Jeffrey D.; Way, M. J.; Gazis, P. G.

    2017-01-01

    We demonstrate the effectiveness of a relatively straightforward analysis of the complex 3D Fourier transform of galaxy coordinates derived from redshift surveys. Numerical demonstrations of this approach are carried out on a volume-limited sample of the Sloan Digital Sky Survey redshift survey. The direct unbinned transform yields a complex 3D data cube quite similar to that from the Fast Fourier Transform of finely binned galaxy positions. In both cases, deconvolution of the sampling window function yields estimates of the true transform. Simple power spectrum estimates from these transforms are roughly consistent with those using more elaborate methods. The complex Fourier transform characterizes spatial distributional properties beyond the power spectrum in a manner different from (and we argue is more easily interpreted than) the conventional multipoint hierarchy. We identify some threads of modern large-scale inference methodology that will presumably yield detections in new wider and deeper surveys.
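
    The contrast between the direct unbinned transform and the FFT of binned positions can be illustrated with a 1D analogue (a sketch, not the authors' SDSS pipeline). The mock points below are drawn with a planted density 1 + 0.8·cos(2π·5x), so both estimates should peak at Fourier mode 5.

```python
import numpy as np

# Rejection-sample mock "galaxy" positions with a planted plane-wave density.
rng = np.random.default_rng(1)
cand = rng.uniform(0, 1, 40000)
keep = rng.uniform(0, 1.8, 40000) < 1 + 0.8 * np.cos(2 * np.pi * 5 * cand)
x = cand[keep]

# Direct unbinned transform: a plain sum of phase factors over the points.
modes = np.arange(1, 20)
direct = np.exp(-2j * np.pi * np.outer(modes, x)).sum(axis=1)

# Binned estimate: FFT of a fine histogram of the same points.
counts, _ = np.histogram(x, bins=256, range=(0.0, 1.0))
binned = np.fft.fft(counts)[1:20]

# Both estimates recover the planted structure at mode 5.
print(modes[np.argmax(np.abs(direct))], modes[np.argmax(np.abs(binned))])
```

    For low-order modes the two spectra agree closely; the binned version trades a small aliasing/pixelization error for the FFT's speed.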

  8. Electroformation of Janus and patchy capsules

    NASA Astrophysics Data System (ADS)

    Rozynek, Zbigniew; Mikkelsen, Alexander; Dommersnes, Paul; Fossum, Jon Otto

    2014-05-01

    Janus and patchy particles have designed heterogeneous surfaces that consist of two or several patches with different materials properties. These particles are emerging as building blocks for a new class of soft matter and functional materials. Here we introduce a route for forming heterogeneous capsules by producing highly ordered jammed colloidal shells of various shapes with domains of controlled size and composition. These structures combine the functionalities offered by Janus or patchy particles, and those given by permeable shells such as colloidosomes. The simple assembly route involves the synergetic action of electro-hydrodynamic flow and electro-coalescence. We demonstrate that the method is robust and straightforwardly extendable to production of multi-patchy capsules. This forms a starting point for producing patchy colloidosomes with domains of anisotropic chemical surface properties, permeability or mixed liquid-solid phase domains, which could be exploited to produce functional emulsions, light and hollow supra-colloidosome structures, or scaffolds.

  9. Modeling individual effects in the Cormack-Jolly-Seber Model: A state-space formulation

    USGS Publications Warehouse

    Royle, J. Andrew

    2008-01-01

    In population and evolutionary biology, there exists considerable interest in individual heterogeneity in parameters of demographic models for open populations. However, flexible and practical solutions to the development of such models have proven to be elusive. In this article, I provide a state-space formulation of open population capture-recapture models with individual effects. The state-space formulation provides a generic and flexible framework for modeling and inference in models with individual effects, and it yields a practical means of estimation in these complex problems via contemporary methods of Markov chain Monte Carlo. A straightforward implementation can be achieved in the software package WinBUGS. I provide an analysis of a simple model with constant detection and survival probability parameters. A second example is based on data from a 7-year study of European dippers, in which a model with year and individual effects is fitted.
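
    The constant-parameter special case can be sketched directly (a generic Cormack-Jolly-Seber likelihood, not the article's WinBUGS state-space code): phi is survival, p is detection, and capture histories are 0/1 vectors conditioned on first capture.

```python
# Likelihood of one capture history under the constant-parameter CJS model.
def cjs_likelihood(history, phi, p):
    first = history.index(1)
    last = len(history) - 1 - history[::-1].index(1)
    lik = 1.0
    # Between first and last capture the animal is known alive:
    # it survives each interval and is detected or missed.
    for t in range(first + 1, last + 1):
        lik *= phi * (p if history[t] else 1 - p)
    # chi: probability of never being seen again after the last capture
    # (died, or survived but went undetected), computed by back-recursion.
    chi = 1.0
    for _ in range(len(history) - 1 - last):
        chi = 1 - phi + phi * (1 - p) * chi
    return lik * chi

print(round(cjs_likelihood([1, 0, 1], 0.8, 0.5), 4))  # 0.16 = phi*(1-p)*phi*p
```

    Individual effects enter by letting phi and p vary per animal; the state-space formulation then treats the latent alive/dead states explicitly, which is what makes MCMC estimation practical.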

  10. Synthesis and luminescent properties of uniform monodisperse LuPO4:Eu3+/Tb3+ hollow microspheres

    NASA Astrophysics Data System (ADS)

    Gao, Yu; Yu, He; Shi, Cheng; Zhao, Guiyan; Bi, Yanfeng; Xu, Baotong; Ding, Fu; Sun, Yaguang; Xu, Zhenhe

    2017-12-01

    Uniform monodisperse LuPO4:Eu3+/Tb3+ hollow microspheres with diameters of about 2.4 µm have been successfully synthesized by the combination of a facile homogeneous precipitation approach, an ion-exchange process and a calcination process. The possible formation mechanism for the hollow microspheres was presented. Furthermore, the luminescence properties revealed that the LuPO4:Eu3+ and LuPO4:Tb3+ phosphors show strong orange-red and green emissions under ultraviolet excitation, respectively, which endows this material with potential application in many fields, such as light display systems and optoelectronic devices. Since the synthetic process can be carried out under mild conditions, it should be straightforward to scale up the entire process for large-scale production of the LuPO4 hollow microspheres. Furthermore, this general and simple method may be of much significance in the synthesis of many other inorganic materials.

  11. Acoustic impulse response method as a source of undergraduate research projects and advanced laboratory experiments.

    PubMed

    Robertson, W M; Parker, J M

    2012-03-01

    A straightforward and inexpensive implementation of acoustic impulse response measurement is described utilizing the signal processing technique of coherent averaging. The technique is capable of high signal-to-noise measurements with personal computer data acquisition equipment, an amplifier/speaker, and a high quality microphone. When coupled with simple waveguide test systems fabricated from commercial PVC plumbing pipe, impulse response measurement has proven to be ideal for undergraduate research projects (often of publishable quality) or for advanced laboratory experiments. The technique provides important learning objectives for science or engineering students in areas such as interfacing and computer control of experiments; analog-to-digital conversion and sampling; time and frequency analysis using Fourier transforms; signal processing; and insight into a variety of current research areas such as acoustic bandgap materials, acoustic metamaterials, and fast and slow wave manipulation. © 2012 Acoustical Society of America
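
    The principle of coherent averaging (a generic numerical sketch, not the authors' hardware setup) is that synchronized repeats of the same response add in phase while uncorrelated noise averages toward zero, improving SNR roughly as the square root of the number of averages.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
# Toy impulse response: a decaying sinusoid standing in for the waveguide's response.
impulse_response = np.exp(-5 * t) * np.sin(2 * np.pi * 40 * t)

def averaged_record(n_avg, noise_std=1.0):
    """Average n_avg noisy, time-aligned records of the same response."""
    trials = impulse_response + rng.normal(0, noise_std, (n_avg, t.size))
    return trials.mean(axis=0)

err_4 = np.std(averaged_record(4) - impulse_response)
err_400 = np.std(averaged_record(400) - impulse_response)
# 100x more averages should cut the residual noise by roughly 10x.
print(err_400 < err_4)
```

    In practice the repeats come from re-triggering the same excitation and aligning records on the trigger, which is exactly what makes inexpensive PC data acquisition sufficient.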

  12. Zipf's word frequency law in natural language: a critical review and future directions.

    PubMed

    Piantadosi, Steven T

    2014-10-01

    The frequency distribution of words has been a key object of study in statistical linguistics for the past 70 years. This distribution approximately follows a simple mathematical form known as Zipf's law. This article first shows that human language has a highly complex, reliable structure in the frequency distribution over and above this classic law, although prior data visualization methods have obscured this fact. A number of empirical phenomena related to word frequencies are then reviewed. These facts are chosen to be informative about the mechanisms giving rise to Zipf's law and are then used to evaluate many of the theoretical explanations of Zipf's law in language. No prior account straightforwardly explains all the basic facts or is supported with independent evaluation of its underlying assumptions. To make progress at understanding why language obeys Zipf's law, studies must seek evidence beyond the law itself, testing assumptions and evaluating novel predictions with new, independent data.
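
    The rank-frequency data underlying Zipf's law are simple to compute (a toy illustration with a made-up sentence, not the article's corpora): count word frequencies, sort descending, and compare against C/rank.

```python
from collections import Counter

def rank_frequency(text):
    """Word frequencies sorted from most to least frequent (rank order)."""
    counts = Counter(text.lower().split())
    return sorted(counts.values(), reverse=True)

text = "the cat sat on the mat and the dog sat on the log"
freqs = rank_frequency(text)
print(freqs)  # [4, 2, 2, 1, 1, 1, 1, 1]: "the" is most frequent

# Under an ideal Zipf law, the frequency at rank r is about freqs[0] / r.
zipf_prediction = [freqs[0] / r for r in range(1, len(freqs) + 1)]
```

    On real corpora one examines this comparison on log-log axes over many orders of magnitude of rank, which is where the deviations discussed in the review become visible.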

  13. The enhanced Software Life Cycle Support Environment (ProSLCSE): Automation for enterprise and process modeling

    NASA Technical Reports Server (NTRS)

    Milligan, James R.; Dutton, James E.

    1993-01-01

    In this paper, we have introduced a comprehensive method for enterprise modeling that addresses the three important aspects of how an organization goes about its business. FirstEP includes infrastructure modeling, information modeling, and process modeling notations that are intended to be easy to learn and use. The notations stress the use of straightforward visual languages that are intuitive, syntactically simple, and semantically rich. ProSLCSE will be developed with automated tools and services to facilitate enterprise modeling and process enactment. In the spirit of FirstEP, ProSLCSE tools will also be seductively easy to use. Achieving fully managed, optimized software development and support processes will be long and arduous for most software organizations, and many serious problems will have to be solved along the way. ProSLCSE will provide the ability to document, communicate, and modify existing processes, which is the necessary first step.

  14. Turbopump Performance Improved by Evolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Oyama, Akira; Liou, Meng-Sing

    2002-01-01

    The development of design optimization technology for turbomachinery has been initiated using a multiobjective evolutionary algorithm under NASA's Intelligent Synthesis Environment and Revolutionary Aeropropulsion Concepts programs. As an alternative to traditional gradient-based methods, evolutionary algorithms (EAs) are emergent design-optimization algorithms modeled after the mechanisms found in natural evolution. EAs search from multiple points instead of moving from a single point. In addition, they require no derivatives or gradients of the objective function, leading to robustness and simplicity in coupling to any evaluation code. Parallel efficiency also becomes very high with a simple master-slave scheme for function evaluations, since such evaluations (for example, computational fluid dynamics runs) often consume the most CPU time. Application of EAs to multiobjective design problems is also straightforward because EAs maintain a population of design candidates in parallel. Because of these advantages, EAs are a unique and attractive approach to real-world design optimization problems.
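
    The derivative-free character of EAs can be shown with a bare-bones (1+1) evolution strategy: a single-objective toy sketch, not the multiobjective algorithm of the article. The objective is treated as a black box, which is why EAs couple so easily to external evaluation codes.

```python
import random

def evolve(objective, x0, sigma=0.3, generations=2000, seed=0):
    """Minimize objective by mutate-and-select; no gradients needed."""
    rng = random.Random(seed)
    x = list(x0)
    fx = objective(x)
    for _ in range(generations):
        candidate = [xi + rng.gauss(0, sigma) for xi in x]  # Gaussian mutation
        fc = objective(candidate)
        if fc <= fx:                  # selection: keep the better design point
            x, fx = candidate, fc
    return x, fx

def sphere(v):                        # simple smooth test objective
    return sum(xi * xi for xi in v)

best, fbest = evolve(sphere, [2.0, -3.0])
print(fbest < sphere([2.0, -3.0]))    # the search improved the design
```

    A population-based, multiobjective EA replaces the single point with many candidates evaluated in parallel (the master-slave scheme mentioned above) and keeps a Pareto front instead of a single best value.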

  15. Calibration of strain-gage installations in aircraft structures for the measurement of flight loads

    NASA Technical Reports Server (NTRS)

    Skopinski, T H; Aiken, William S , Jr; Huston, Wilber B

    1954-01-01

    A general method has been developed for calibrating strain-gage installations in aircraft structures, which permits the measurement in flight of the shear or lift, the bending moment, and the torque or pitching moment on the principal lifting or control surfaces. Although the stress in structural members may not be a simple function of the three loads of interest, a straightforward procedure is given for numerically combining the outputs of several bridges in such a way that the loads may be obtained. Extensions of the basic procedure by means of electrical combination of the strain-gage bridges are described which permit compromises between strain-gage installation time, availability of recording instruments, and data reduction time. The basic principles of strain-gage calibration procedures are illustrated by reference to the data for two aircraft structures of typical construction, one a straight and the other a swept horizontal stabilizer.
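
    The numerical combination of bridge outputs can be illustrated with a small least-squares sketch (synthetic numbers, not the report's calibration data): applied calibration loads relate linearly to bridge outputs, and the combination coefficients are recovered from the calibration runs.

```python
import numpy as np

rng = np.random.default_rng(0)
# Assumed linear model: loads = outputs @ beta, with 4 bridges -> 3 loads
# (shear, bending moment, torque). beta_true is an invented coefficient matrix.
beta_true = np.array([[2.0, 0.5, 0.1],
                      [0.3, 1.5, 0.2],
                      [0.1, 0.2, 3.0],
                      [0.5, 0.1, 0.4]])

mu = rng.normal(size=(20, 4))        # bridge outputs from 20 calibration runs
loads = mu @ beta_true               # the known applied calibration loads

# Least-squares fit of the combination coefficients from the calibration data.
beta_hat, *_ = np.linalg.lstsq(mu, loads, rcond=None)
print(np.allclose(beta_hat, beta_true))  # True: coefficients recovered
```

    In flight, the same `beta_hat` converts measured bridge outputs into load estimates: `loads_est = mu_flight @ beta_hat`.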

  16. Simultaneous Quantification of Multiple Alternatively Spliced mRNA Transcripts Using Droplet Digital PCR.

    PubMed

    Sun, Bing; Zheng, Yun-Ling

    2018-01-01

    Currently there is no sensitive, precise, and reproducible method to quantitate alternative splicing of mRNA transcripts. Droplet digital™ PCR (ddPCR™) analysis allows for accurate digital counting for quantification of gene expression. Human telomerase reverse transcriptase (hTERT) is one of the essential components required for telomerase activity and for the maintenance of telomeres. Several alternatively spliced forms of hTERT mRNA in human primary and tumor cells have been reported in the literature. Using one pair of primers and two probes for hTERT, four alternatively spliced forms of hTERT (α-/β+, α+/β- single deletions, α-/β- double deletion, and nondeletion α+/β+) were accurately quantified through a novel analysis method via data collected from a single ddPCR reaction. In this chapter, we describe this ddPCR method that enables direct quantitative comparison of four alternatively spliced forms of the hTERT messenger RNA without the need for internal standards or multiple pairs of primers specific for each variant, eliminating the technical variation due to differential PCR amplification efficiency for different amplicons and the challenges of quantification using standard curves. This simple and straightforward method should have general utility for quantifying alternatively spliced gene transcripts.
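
    The droplet counts behind such quantification follow Poisson statistics. A generic sketch (the standard ddPCR Poisson correction, not the authors' dual-probe hTERT analysis): with n droplets of which k are positive, the mean number of target copies per droplet is lambda = -ln(1 - k/n).

```python
import math

def copies_per_droplet(k_positive, n_droplets):
    """Poisson-corrected mean target copies per droplet."""
    return -math.log(1.0 - k_positive / n_droplets)

lam = copies_per_droplet(10000, 20000)  # half the droplets are positive
print(round(lam, 3))  # 0.693, i.e. ln(2) copies per droplet
```

    Multiplying lambda by the droplet count and dividing by the partitioned volume gives an absolute concentration without a standard curve, which is what lets the ratios of the four splice variants be compared directly within one reaction.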

  17. Simple and effective graphene laser processing for neuron patterning application

    NASA Astrophysics Data System (ADS)

    Lorenzoni, Matteo; Brandi, Fernando; Dante, Silvia; Giugni, Andrea; Torre, Bruno

    2013-06-01

    A straightforward fabrication technique to obtain patterned substrates promoting ordered neuron growth is presented. Chemical vapor deposition (CVD) single-layer graphene (SLG) was machined by means of a single-pulse UV laser ablation technique at the lowest effective laser fluence in order to minimize laser damage effects. Patterned substrates were then coated with poly-D-lysine by means of a simple immersion in solution. Primary embryonic hippocampal neurons were cultured on our substrates, demonstrating an ordered interconnected neuron pattern mimicking the pattern design. Surprisingly, the functionalization is more effective on the SLG, resulting in notably higher alignment for neuron adhesion and growth. Therefore, the proposed technique should be considered a valuable candidate to realize a new generation of highly specialized biosensors.

  18. Minimalist design of a robust real-time quantum random number generator

    NASA Astrophysics Data System (ADS)

    Kravtsov, K. S.; Radchenko, I. V.; Kulik, S. P.; Molotkov, S. N.

    2015-08-01

    We present a simple and robust construction of a real-time quantum random number generator (QRNG). Our minimalist approach ensures stable operation of the device as well as its simple and straightforward hardware implementation as a stand-alone module. As a source of randomness the device uses measurements of time intervals between clicks of a single-photon detector. The obtained raw sequence is then filtered and processed by a deterministic randomness extractor, which is realized as a look-up table. This enables high speed on-the-fly processing without the need of extensive computations. The overall performance of the device is around 1 random bit per detector click, resulting in 1.2 Mbit/s generation rate in our implementation.
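
    The idea of a table-driven deterministic extractor can be sketched with the classic von Neumann debiasing rule (an assumption for illustration, not the authors' exact construction): precomputing the rule over bit pairs makes the run-time step a single look-up, which is what enables on-the-fly processing.

```python
# Look-up table over raw bit pairs: unequal pairs emit one unbiased bit,
# equal pairs emit nothing (the von Neumann rule, precomputed as a LUT).
LUT = {(0, 1): 0, (1, 0): 1, (0, 0): None, (1, 1): None}

def extract(bits):
    """Debias a raw bit sequence via the pairwise look-up table."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        v = LUT[(bits[i], bits[i + 1])]
        if v is not None:
            out.append(v)
    return out

print(extract([0, 1, 1, 1, 1, 0, 0, 0]))  # [0, 1]
```

    A production extractor would use wider input words and a larger table, but the run-time structure (index, look up, append) is the same.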

  19. Simple and effective graphene laser processing for neuron patterning application

    PubMed Central

    Lorenzoni, Matteo; Brandi, Fernando; Dante, Silvia; Giugni, Andrea; Torre, Bruno

    2013-01-01

    A straightforward fabrication technique to obtain patterned substrates promoting ordered neuron growth is presented. Chemical vapor deposition (CVD) single-layer graphene (SLG) was machined by means of a single-pulse UV laser ablation technique at the lowest effective laser fluence in order to minimize laser damage effects. Patterned substrates were then coated with poly-D-lysine by means of a simple immersion in solution. Primary embryonic hippocampal neurons were cultured on our substrates, demonstrating an ordered interconnected neuron pattern mimicking the pattern design. Surprisingly, the functionalization is more effective on the SLG, resulting in notably higher alignment for neuron adhesion and growth. Therefore, the proposed technique should be considered a valuable candidate to realize a new generation of highly specialized biosensors. PMID:23739674

  20. Endobronchial valves for bronchopleural fistula: pitfalls and principles.

    PubMed

    Gaspard, Dany; Bartter, Thaddeus; Boujaoude, Ziad; Raja, Haroon; Arya, Rohan; Meena, Nikhil; Abouzgheib, Wissam

    2017-01-01

    Placement of endobronchial valves for bronchopleural fistula (BPF) is not always straightforward. A simple guide to the steps for an uncomplicated procedure does not encompass pitfalls that need to be understood and overcome to maximize the efficacy of this modality. The objective of this study was to discuss examples of difficult cases for which the placement of endobronchial valves was not straightforward and required alterations in the usual basic steps. Subsequently, we aimed to provide guiding principles for a successful procedure. Six illustrative cases were selected to demonstrate issues that can arise during endobronchial valve placement. In each case, a real or apparent lack of decrease in airflow through a BPF was diagnosed and addressed. We have used the selected problem cases to illustrate principles, with the goal of helping to increase the success rate for endobronchial valve placement in the treatment of BPF. This series demonstrates issues that complicate effective placement of endobronchial valves for BPF. These issues form the basis for troubleshooting steps that complement the basic procedural steps.

  1. Games among relatives revisited.

    PubMed

    Allen, Benjamin; Nowak, Martin A

    2015-08-07

    We present a simple model for the evolution of social behavior in family-structured, finite sized populations. Interactions are represented as evolutionary games describing frequency-dependent selection. Individuals interact more frequently with siblings than with members of the general population, as quantified by an assortment parameter r, which can be interpreted as "relatedness". Other models, mostly of spatially structured populations, have shown that assortment can promote the evolution of cooperation by facilitating interaction between cooperators, but this effect depends on the details of the evolutionary process. For our model, we find that sibling assortment promotes cooperation in stringent social dilemmas such as the Prisoner's Dilemma, but not necessarily in other situations. These results are obtained through straightforward calculations of changes in gene frequency. We also analyze our model using inclusive fitness. We find that the quantity of inclusive fitness does not exist for general games. For special games, where inclusive fitness exists, it provides less information than the straightforward analysis. Copyright © 2015 Elsevier Ltd. All rights reserved.
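
    The gene-frequency logic can be sketched for the donation-game special case (an illustrative assumption; the paper treats general games): with assortment r, an individual meets its own type with probability r and a random member of a population with cooperator frequency p otherwise.

```python
def payoffs(p, r, b, c):
    """Expected payoffs of cooperators and defectors under assortment r."""
    f_coop = -c + b * (r + (1 - r) * p)  # always pays c; receives b from C partners
    f_defect = b * (1 - r) * p           # receives b only from unrelated cooperators
    return f_coop, f_defect

# Cooperation is favored at every p exactly when r*b > c (a Hamilton-like rule).
fc, fd = payoffs(p=0.5, r=0.5, b=3.0, c=1.0)  # here r*b = 1.5 > c = 1.0
print(fc > fd)  # True
```

    Subtracting the two expressions shows the p-dependence cancels, leaving the condition r·b > c, which is why assortment helps in stringent dilemmas like the Prisoner's Dilemma but not necessarily in other games.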

  2. Breakdown of the classical description of a local system.

    PubMed

    Kot, Eran; Grønbech-Jensen, Niels; Nielsen, Bo M; Neergaard-Nielsen, Jonas S; Polzik, Eugene S; Sørensen, Anders S

    2012-06-08

    We provide a straightforward demonstration of a fundamental difference between classical and quantum mechanics for a single local system: namely, the absence of a joint probability distribution of the position x and momentum p. Elaborating on a recently reported criterion by Bednorz and Belzig [Phys. Rev. A 83, 052113 (2011)] we derive a simple criterion that must be fulfilled for any joint probability distribution in classical physics. We demonstrate the violation of this criterion using the homodyne measurement of a single photon state, thus proving a straightforward signature of the breakdown of a classical description of the underlying state. Most importantly, the criterion used does not rely on quantum mechanics and can thus be used to demonstrate nonclassicality of systems not immediately apparent to exhibit quantum behavior. The criterion is directly applicable to any system described by the continuous canonical variables x and p, such as a mechanical or an electrical oscillator and a collective spin of a large ensemble.

  3. Advanced data assimilation in strongly nonlinear dynamical systems

    NASA Technical Reports Server (NTRS)

    Miller, Robert N.; Ghil, Michael; Gauthiez, Francois

    1994-01-01

    Advanced data assimilation methods are applied to simple but highly nonlinear problems. The dynamical systems studied here are the stochastically forced double well and the Lorenz model. In both systems, linear approximation of the dynamics about the critical points near which regime transitions occur is not always sufficient to track their occurrence or nonoccurrence. Straightforward application of the extended Kalman filter yields mixed results. The ability of the extended Kalman filter to track transitions of the double-well system from one stable critical point to the other depends on the frequency and accuracy of the observations relative to the mean-square amplitude of the stochastic forcing. The ability of the filter to track the chaotic trajectories of the Lorenz model is limited to short times, as is the ability of strong-constraint variational methods. Examples are given to illustrate the difficulties involved, and qualitative explanations for these difficulties are provided. Three generalizations of the extended Kalman filter are described. The first is based on inspection of the innovation sequence, that is, the successive differences between observations and forecasts; it works very well for the double-well problem. The second, an extension to fourth-order moments, yields excellent results for the Lorenz model but will be unwieldy when applied to models with high-dimensional state spaces. A third, more practical method--based on an empirical statistical model derived from a Monte Carlo simulation--is formulated, and shown to work very well. Weak-constraint methods can be made to perform satisfactorily in the context of these simple models, but such methods do not seem to generalize easily to practical models of the atmosphere and ocean. In particular, it is shown that the equations derived in the weak variational formulation are difficult to solve conveniently for large systems.
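    A minimal sketch of the kind of experiment this record describes, under stated assumptions: standard Lorenz-63 parameters, a forward-Euler discretization, and illustrative choices of observation interval, noise level and the decision to observe only x (none of these are the paper's exact setup). As the abstract notes, the EKF's ability to track chaotic trajectories is limited to short times; this sketch only sets up the machinery.

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz-63 model."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def jacobian(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Tangent-linear matrix of the Lorenz-63 right-hand side."""
    x, y, z = s
    return np.array([[-sigma, sigma, 0.0],
                     [rho - z, -1.0, -x],
                     [y, x, -beta]])

def ekf_run(n_steps=2000, dt=0.01, obs_every=25, obs_var=1.0, seed=0):
    """Extended Kalman filter assimilating noisy observations of x only."""
    rng = np.random.default_rng(seed)
    truth = np.array([1.0, 1.0, 1.0])
    est = truth + rng.normal(0.0, 1.0, 3)     # perturbed initial analysis
    P = np.eye(3)                             # analysis error covariance
    H = np.array([[1.0, 0.0, 0.0]])           # observe x only
    R = np.array([[obs_var]])
    Q = 1e-4 * np.eye(3)                      # crude model-error covariance
    errors = []
    for k in range(n_steps):
        truth = truth + dt * lorenz(truth)    # forward-Euler "truth" run
        F = np.eye(3) + dt * jacobian(est)    # discrete tangent-linear model
        est = est + dt * lorenz(est)          # forecast step
        P = F @ P @ F.T + Q
        if k % obs_every == 0:                # analysis (update) step
            y_obs = truth[0] + rng.normal(0.0, np.sqrt(obs_var))
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            est = est + K @ (np.array([y_obs]) - H @ est)
            P = (np.eye(3) - K @ H) @ P
        errors.append(np.linalg.norm(truth - est))
    return np.array(errors)

errors = ekf_run()
```

Plotting `errors` against time shows the intermittent loss of tracking the abstract refers to; the innovation-based and higher-moment generalizations it describes are refinements of the update step above.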

  4. Arterial input function of an optical tracer for dynamic contrast enhanced imaging can be determined from pulse oximetry oxygen saturation measurements

    NASA Astrophysics Data System (ADS)

    Elliott, Jonathan T.; Wright, Eric A.; Tichauer, Kenneth M.; Diop, Mamadou; Morrison, Laura B.; Pogue, Brian W.; Lee, Ting-Yim; St. Lawrence, Keith

    2012-12-01

    In many cases, kinetic modeling requires that the arterial input function (AIF)—the time-dependent arterial concentration of a tracer—be characterized. A straightforward method to measure the AIF of red and near-infrared optical dyes (e.g., indocyanine green) using a pulse oximeter is presented. The method is motivated by the ubiquity of pulse oximeters used in both preclinical and clinical applications, as well as the gap in currently available technologies to measure AIFs in small animals. The method is based on quantifying the interference that is observed in the derived arterial oxygen saturation (SaO2) following a bolus injection of a light-absorbing dye. In other words, the change in SaO2 can be converted into dye concentration knowing the chromophore-specific extinction coefficients, the true arterial oxygen saturation, and total hemoglobin concentration. A simple error analysis was performed to highlight potential limitations of the approach, and a validation of the method was conducted in rabbits by comparing the pulse oximetry method with the AIF acquired using a pulse dye densitometer. Considering that determining the AIF is required for performing quantitative tracer kinetics, this method provides a flexible tool for measuring the arterial dye concentration that could be used in a variety of applications.

  5. Arterial input function of an optical tracer for dynamic contrast enhanced imaging can be determined from pulse oximetry oxygen saturation measurements.

    PubMed

    Elliott, Jonathan T; Wright, Eric A; Tichauer, Kenneth M; Diop, Mamadou; Morrison, Laura B; Pogue, Brian W; Lee, Ting-Yim; St Lawrence, Keith

    2012-12-21

    In many cases, kinetic modeling requires that the arterial input function (AIF)--the time-dependent arterial concentration of a tracer--be characterized. A straightforward method to measure the AIF of red and near-infrared optical dyes (e.g., indocyanine green) using a pulse oximeter is presented. The method is motivated by the ubiquity of pulse oximeters used in both preclinical and clinical applications, as well as the gap in currently available technologies to measure AIFs in small animals. The method is based on quantifying the interference that is observed in the derived arterial oxygen saturation (SaO₂) following a bolus injection of a light-absorbing dye. In other words, the change in SaO₂ can be converted into dye concentration knowing the chromophore-specific extinction coefficients, the true arterial oxygen saturation, and total hemoglobin concentration. A simple error analysis was performed to highlight potential limitations of the approach, and a validation of the method was conducted in rabbits by comparing the pulse oximetry method with the AIF acquired using a pulse dye densitometer. Considering that determining the AIF is required for performing quantitative tracer kinetics, this method provides a flexible tool for measuring the arterial dye concentration that could be used in a variety of applications.
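    The conversion these two records describe can be sketched with a deliberately simplified one-wavelength Beer-Lambert toy model: the dye's absorbance, wrongly attributed to hemoglobin, shifts the apparent saturation, and the shift is inverted back into dye concentration. All function names, coefficient values and the single-wavelength simplification are illustrative assumptions, not the paper's two-wavelength pulse-oximetry calibration.

```python
def apparent_saturation(true_s, c_dye, eps_hbo2, eps_hb, eps_dye, thb):
    """Forward toy model: the saturation a one-wavelength Beer-Lambert
    analysis would report when the dye's absorbance is wrongly attributed
    to hemoglobin. Parameter values here are illustrative only."""
    a = (eps_hbo2 * true_s + eps_hb * (1.0 - true_s)) * thb + eps_dye * c_dye
    return (a / thb - eps_hb) / (eps_hbo2 - eps_hb)

def dye_concentration(apparent_s, true_s, eps_hbo2, eps_hb, eps_dye, thb):
    """Invert the artifact: convert the saturation shift back into dye
    concentration, given extinction coefficients, true saturation and tHb."""
    return (apparent_s - true_s) * (eps_hbo2 - eps_hb) * thb / eps_dye

# Round trip with made-up red-wavelength coefficients (HbO2 absorbs less
# than Hb near 660 nm, so the dye mimics a drop in saturation)
kw = dict(eps_hbo2=0.3, eps_hb=3.2, eps_dye=1.0, thb=2.0)
s_app = apparent_saturation(0.98, 0.05, **kw)
c_back = dye_concentration(s_app, 0.98, **kw)
```

The algebra makes the abstract's statement concrete: the saturation shift is linear in dye concentration, with a slope fixed by the extinction coefficients and total hemoglobin.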

  6. Thermodynamic equilibrium solubility measurements in simulated fluids by 96-well plate method in early drug discovery.

    PubMed

    Bharate, Sonali S; Vishwakarma, Ram A

    2015-04-01

    An early prediction of solubility in physiological media (PBS, SGF and SIF) is useful for qualitatively predicting the bioavailability and absorption of lead candidates. Despite the availability of multiple solubility estimation methods, none of the reported methods offers a simplified, fixed protocol for diverse sets of compounds. Therefore, a simple and medium-throughput solubility estimation protocol is highly desirable during the lead optimization stage. The present work introduces a rapid method for assessment of thermodynamic equilibrium solubility of compounds in aqueous media using a 96-well microplate. The developed protocol is straightforward to set up and takes advantage of the sensitivity of UV spectroscopy. The compound, in stock solution in methanol, is introduced in microgram quantities into microplate wells followed by drying at ambient temperature. Microplates were shaken upon addition of test media and the supernatant was analyzed by UV method. A plot of absorbance versus concentration of a sample provides the saturation point, which is the thermodynamic equilibrium solubility of the sample. The established protocol was validated using a large panel of commercially available drugs and against the conventional miniaturized shake-flask method (r² > 0.84). Additionally, statistically significant QSPR models were established using experimental solubility values of 52 compounds. Copyright © 2015 Elsevier Ltd. All rights reserved.
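    The saturation-point readout described above can be sketched as a simple two-line intersection: below solubility, absorbance rises linearly with nominal concentration; above it, absorbance plateaus. This is a hedged toy implementation with invented data and an assumed `plateau_frac` heuristic, not the paper's analysis procedure.

```python
import numpy as np

def saturation_point(conc, absorbance, plateau_frac=0.3):
    """Estimate equilibrium solubility from an absorbance-vs-nominal-
    concentration series (sorted by concentration): intersect the initial
    linear rise (through the origin) with the terminal plateau.
    plateau_frac: fraction of highest-concentration points assumed saturated."""
    conc = np.asarray(conc, dtype=float)
    absorbance = np.asarray(absorbance, dtype=float)
    n_plateau = max(2, int(len(conc) * plateau_frac))
    a_sat = absorbance[-n_plateau:].mean()        # plateau absorbance
    rising = absorbance < 0.9 * a_sat             # points clearly below plateau
    slope = (conc[rising] * absorbance[rising]).sum() / (conc[rising] ** 2).sum()
    return a_sat / slope                          # where the two lines meet

# Synthetic series: linear rise of slope 0.5 AU per unit, saturating at 2.0
c = np.linspace(0.2, 5.0, 25)
a = np.minimum(0.5 * c, 1.0)
solubility = saturation_point(c, a)
```

On real plates the same idea would be applied per well after blank subtraction; the heuristic threshold (0.9 × plateau) is an assumption of this sketch.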

  7. Substitution of the nitro group with Grignard reagents: facile arylation and alkenylation of pyridine N-oxides.

    PubMed

    Zhang, Fang; Zhang, Song; Duan, Xin-Fang

    2012-11-02

    The unprecedented substitution of a nitro group with aryl or alkenyl groups of Grignard reagents affords 2-aryl or alkenylpyridine N-oxides in modest to high yields with high chemoselectivity. This protocol allows a simple and clean synthesis of various 2-substituted pyridine N-oxides and the corresponding pyridine derivatives. Furthermore, straightforward one-pot iterative functionalization of pyridine N-oxides can also be achieved simply by the successive application of two Grignard reagents.

  8. Phase-space quantum mechanics study of two identical particles in an external oscillatory potential

    NASA Technical Reports Server (NTRS)

    Nieto, Luis M.; Gadella, Manuel

    1993-01-01

    This simple example is used to show how the Moyal formalism works when applied to systems of identical particles. The symmetric and antisymmetric Moyal propagators are evaluated for this case; from them, the correct energy levels are obtained, as well as the Wigner functions for the symmetric and antisymmetric states of the two-particle system. Finally, the solution of the Bloch equation is obtained straightforwardly from the expressions for the Moyal propagators.

  9. Statistical methods to estimate treatment effects from multichannel electroencephalography (EEG) data in clinical trials.

    PubMed

    Ma, Junshui; Wang, Shubing; Raubertas, Richard; Svetnik, Vladimir

    2010-07-15

    With the increasing popularity of using electroencephalography (EEG) to reveal the treatment effect in drug development clinical trials, the vast volume and complex nature of EEG data compose an intriguing, but challenging, topic. In this paper the statistical analysis methods recommended by the EEG community, along with methods frequently used in the published literature, are first reviewed. A straightforward adjustment of the existing methods to handle multichannel EEG data is then introduced. In addition, based on the spatial smoothness property of EEG data, a new category of statistical methods is proposed. The new methods use a linear combination of low-degree spherical harmonic (SPHARM) basis functions to represent a spatially smoothed version of the EEG data on the scalp, which is close to a sphere in shape. In total, seven statistical methods, including both the existing and the newly proposed methods, are applied to two clinical datasets to compare their power to detect a drug effect. Contrary to the EEG community's recommendation, our results suggest that (1) the nonparametric method does not outperform its parametric counterpart; and (2) including baseline data in the analysis does not always improve the statistical power. In addition, our results recommend that (3) simple paired statistical tests should be avoided due to their poor power; and (4) the proposed spatially smoothed methods perform better than their unsmoothed versions. Copyright 2010 Elsevier B.V. All rights reserved.
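    The spatial-smoothing idea in this record can be sketched as a least-squares projection of channel values onto a low-degree spherical-harmonic basis. This is an illustrative toy with random pseudo-electrode positions and synthetic data, not the paper's SPHARM pipeline; the basis is written out explicitly (unnormalized, degrees 0-2), since only the spanned subspace matters for the fit.

```python
import numpy as np

def spharm_design(theta, phi):
    """Design matrix of real spherical harmonics up to degree 2, evaluated at
    sensor positions (theta: azimuth, phi: polar angle). Unnormalized: only
    the subspace spanned matters for a least-squares projection."""
    st, ct = np.sin(phi), np.cos(phi)
    return np.column_stack([
        np.ones_like(theta),           # l = 0
        ct,                            # l = 1, m = 0
        st * np.cos(theta),            # l = 1, m = +1
        st * np.sin(theta),            # l = 1, m = -1
        3.0 * ct**2 - 1.0,             # l = 2, m = 0
        st * ct * np.cos(theta),       # l = 2, m = +1
        st * ct * np.sin(theta),       # l = 2, m = -1
        st**2 * np.cos(2.0 * theta),   # l = 2, m = +2
        st**2 * np.sin(2.0 * theta),   # l = 2, m = -2
    ])

def spharm_smooth(values, theta, phi):
    """Spatially smooth channel values by projecting onto the low-degree basis."""
    B = spharm_design(theta, phi)
    coef, *_ = np.linalg.lstsq(B, values, rcond=None)
    return B @ coef

# Toy scalp: 64 pseudo-electrodes on the upper hemisphere, a smooth degree-1
# pattern plus channel noise; smoothing should pull values back toward it.
rng = np.random.default_rng(1)
theta = rng.uniform(0.0, 2.0 * np.pi, 64)
phi = rng.uniform(0.05, np.pi / 2.0, 64)
signal = np.cos(phi) + 0.3 * np.sin(phi) * np.cos(theta)
noisy = signal + rng.normal(0.0, 0.2, 64)
smoothed = spharm_smooth(noisy, theta, phi)
```

Because the projection discards the high-spatial-frequency component of the noise, the smoothed map is closer to the underlying pattern than the raw channels, which is the mechanism behind the power gain the abstract reports.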

  10. Intercomparison of gamma scattering, gammatography, and radiography techniques for mild steel nonuniform corrosion detection

    NASA Astrophysics Data System (ADS)

    Priyada, P.; Margret, M.; Ramar, R.; Shivaramu; Menaka, M.; Thilagam, L.; Venkataraman, B.; Raj, Baldev

    2011-03-01

    This paper focuses on mild steel (MS) corrosion detection and an intercomparison of results obtained by gamma scattering, gammatography, and radiography techniques. The gamma scattering non-destructive evaluation (NDE) method utilizes scattered gamma radiation for the detection of corrosion; the scattering experimental setup is an indigenously designed, automated, personal computer (PC) controlled scanning system consisting of a computerized numerical control (CNC) six-axis source-detector system and a four-axis job positioning system. The system has been successfully used to quantify the magnitude of corrosion and the thickness profile of an MS plate with nonuniform corrosion, and the results are correlated with those obtained from conventional gammatography and radiography imaging measurements. A simple and straightforward algorithm to reconstruct the densities of the objects under investigation, with an unambiguous interpretation of the signal as a function of material density at any point of the thick object being inspected, is described. In this method the density of the target need not be known; knowledge of the target material's mass attenuation coefficients (composition) at the incident and scattered energies is enough to reconstruct the density of each voxel of the specimen being studied. Monte Carlo (MC) numerical simulation of the phenomena is done using the Monte Carlo N-Particle Transport Code (MCNP); quantitative estimates of the signal-to-noise ratio for different percentages of MS corrosion derived from these simulations are presented, and the spectra are compared with the experimental data. The gammatography experiments are carried out using the same PC-controlled scanning system in a narrow-beam, good-geometry setup, and the thickness loss is estimated from the measured transmitted intensity. Radiography of the MS plates is carried out using a 160 kV X-ray machine. The digitized radiographs, with a resolution of 50 μm, are processed for the detection of corrosion damage at five different locations. The thickness losses due to corrosion of the MS plate obtained by the gamma scattering method are compared with the values obtained by the gammatography and radiography techniques. The percentage thickness loss estimated at different positions of the corroded MS plate varies from 17.78 to 27.0, from 18.9 to 24.28, and from 18.9 to 24.28 by the gamma scattering, gammatography, and radiography techniques, respectively. Overall, the results of the three techniques are mutually consistent.
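    The density-reconstruction step this record describes (density recovered knowing only mass attenuation coefficients) can be sketched as a toy single-voxel inversion: the scattered signal depends on density both linearly (scattering) and exponentially (self-attenuation along the in- and out-going paths), so the density is the root of a fixed-point equation. All constants and the single-voxel, single-material geometry are invented for illustration.

```python
import math

def density_from_scatter(S, k, mu_in, mu_out, L_in, L_out,
                         tol=1e-12, max_iter=500):
    """Solve S = k * rho * exp(-(mu_in*L_in + mu_out*L_out) * rho) for the
    voxel density rho by fixed-point iteration. mu are mass attenuation
    coefficients (cm^2/g) at the incident and scattered energies, L the
    path lengths (cm), k a geometry/efficiency constant. The physically
    relevant (smaller) root is reached from the attenuation-free guess."""
    a = mu_in * L_in + mu_out * L_out
    rho = S / k                      # first guess: ignore self-attenuation
    for _ in range(max_iter):
        new = (S / k) * math.exp(a * rho)
        if abs(new - rho) < tol:
            return new
        rho = new
    return rho

# Round trip: a voxel of density 1.0 g/cm^3 with total attenuation
# factor a = 0.5 cm^3/g produces S = k * 1.0 * exp(-0.5); invert it back.
S_meas = 1.0 * math.exp(-0.5)
rho_est = density_from_scatter(S_meas, 1.0, 0.25, 0.25, 1.0, 1.0)
```

In the actual technique the path lengths through previously reconstructed voxels are accumulated as the scan proceeds, but the per-voxel inversion has this shape.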

  11. On the `simple' form of the gravitational action and the self-interacting graviton

    NASA Astrophysics Data System (ADS)

    Tomboulis, E. T.

    2017-09-01

    The so-called ΓΓ-form of the gravitational Lagrangian, long known to provide its most compact expression as well as the most efficient generation of the graviton vertices, is taken as the starting point for discussing General Relativity as a theory of the self-interacting graviton. A straightforward but general method of converting to a covariant formulation by the introduction of a reference metric is given. It is used to recast the Einstein field equation as the equation of motion of a spin-2 particle interacting with the canonical energy-momentum tensor symmetrized by the standard Belinfante method applicable to any field carrying nonzero spin. This represents the graviton field equation in a form complying with the precepts of standard field theory. It is then shown how representations based on other, at face value completely unrelated definitions of energy-momentum (pseudo)tensors are all related by the addition of appropriate superpotential terms. Specifically, the superpotentials are explicitly constructed which connect to: i) the common definition consisting simply of the nonlinear part of the Einstein tensor; ii) the Landau-Lifshitz definition.

  12. Use of a Corona Discharge to Selectively Pattern a Hydrophilic/Hydrophobic Interface for Integrating Segmented Flow with Microchip Electrophoresis and Electrochemical Detection

    PubMed Central

    Filla, Laura A.; Kirkpatrick, Douglas C.; Martin, R. Scott

    2011-01-01

    Segmented flow in microfluidic devices involves the use of droplets that are generated either on- or off-chip. When used with off-chip sampling methods, segmented flow has been shown to prevent analyte dispersion and improve temporal resolution by periodically surrounding an aqueous flow stream with an immiscible carrier phase as it is transferred to the microchip. To analyze the droplets by methods such as electrochemistry or electrophoresis, a method to “desegment” the flow into separate aqueous and immiscible carrier phase streams is needed. In this paper, a simple and straightforward approach for this desegmentation process was developed by first creating an air/water junction in natively hydrophobic and perpendicular PDMS channels. The air-filled channel was treated with a corona discharge electrode to create a hydrophilic/hydrophobic interface. When a segmented flow stream encounters this interface, only the aqueous sample phase enters the hydrophilic channel, where it can be subsequently analyzed by electrochemistry or microchip-based electrophoresis with electrochemical detection. It is shown that the desegmentation process does not significantly degrade the temporal resolution of the system, with rise times as low as 12 s reported after droplets are recombined into a continuous flow stream. This approach demonstrates significant advantages over previous studies in that the treatment process takes only a few minutes, fabrication is relatively simple, and reversible sealing of the microchip is possible. This work should enable future studies where off-chip processes such as microdialysis can be integrated with segmented flow and electrochemical-based detection. PMID:21718004

  13. Straightforward rapid spectrophotometric quantification of total cyanogenic glycosides in fresh and processed cassava products.

    PubMed

    Tivana, Lucas Daniel; Da Cruz Francisco, Jose; Zelder, Felix; Bergenståhl, Bjorn; Dejmek, Petr

    2014-09-01

    In this study, we extend pioneering studies and demonstrate the straightforward applicability of the corrin-based chemosensor aquacyanocobyrinic acid (ACCA) for the instantaneous detection and rapid quantification of endogenous cyanide in fresh and processed cassava roots. Hydrolytically liberated endogenous cyanide from cyanogenic glycosides (CNp) reacts with ACCA to form dicyanocobyrinic acid (DCCA), accompanied by a change of colour from orange to violet. The method was successfully tested on various cassava samples containing between 6 and 200 mg equiv. HCN/kg, as verified with isonicotinate/1,3-dimethylbarbiturate as an independent method. The affinity of the ACCA sensor for cyanide is high and coordination is fast, so the colorimetric response can be monitored instantaneously with spectrophotometric methods. Direct applications of the sensor, without the need for extensive and laborious extraction processes, are demonstrated in water-extracted samples, in acid-extracted samples, and directly on juice drops. ACCA showed high precision, with a standard deviation (STDV) between 0.03 and 0.06, and high accuracy (93-96%). Overall, the ACCA procedure is straightforward, safe and easily performed. In a proof-of-concept study, rapid screening of ten samples within 20 min was demonstrated. Copyright © 2014 Elsevier Ltd. All rights reserved.
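    The quantification step behind a colorimetric assay like this reduces to a standard-curve calibration and inversion. The sketch below is generic Beer-Lambert bookkeeping with invented slope, offset and concentration values; it is not the paper's calibration data.

```python
import numpy as np

def fit_calibration(conc_std, absorbance_std):
    """Least-squares line through the standards (Beer-Lambert: A = m*c + b)."""
    m, b = np.polyfit(conc_std, absorbance_std, 1)
    return m, b

def quantify(absorbance, m, b):
    """Convert a sample absorbance to concentration via the standard curve."""
    return (absorbance - b) / m

# Synthetic standards: assumed slope 0.004 AU per (mg HCN equiv./kg), small offset
c_std = np.array([0.0, 25.0, 50.0, 100.0, 200.0])
a_std = 0.004 * c_std + 0.01
m, b = fit_calibration(c_std, a_std)
sample_conc = quantify(0.25, m, b)   # a sample reading of 0.25 AU
```

In practice one would also check that sample readings fall within the calibrated range and propagate the fit uncertainty into the reported concentration.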

  14. A Simple Model to Quantify Radiolytic Production following Electron Emission from Heavy-Atom Nanoparticles Irradiated in Liquid Suspensions.

    PubMed

    Wardlow, Nathan; Polin, Chris; Villagomez-Bernabe, Balder; Currell, Fred

    2015-11-01

    We present a simple model for a component of the radiolytic production of any chemical species due to electron emission from irradiated nanoparticles (NPs) in a liquid environment, provided the expression for the G value for product formation is known and is reasonably well characterized by a linear dependence on beam energy. This model takes nanoparticle size, composition, density and a number of other readily available parameters (such as X-ray and electron attenuation data) as inputs and therefore allows for the ready determination of this contribution. Several approximations are used, thus this model provides an upper limit to the yield of chemical species due to electron emission, rather than a distinct value, and this upper limit is compared with experimental results. After the general model is developed we provide details of its application to the generation of HO• through irradiation of gold nanoparticles (AuNPs), a potentially important process in nanoparticle-based enhancement of radiotherapy. This model has been constructed with the intention of making it accessible to other researchers who wish to estimate chemical yields through this process, and is shown to be applicable to NPs of single elements and mixtures. The model can be applied without the need to develop additional skills (such as using a Monte Carlo toolkit), providing a fast and straightforward method of estimating chemical yields. A simple framework for determining the HO• yield for different NP sizes at constant NP concentration and initial photon energy is also presented.

  15. Equivalent circuit models for interpreting impedance perturbation spectroscopy data

    NASA Astrophysics Data System (ADS)

    Smith, R. Lowell

    2004-07-01

    As in-situ structural integrity monitoring disciplines mature, there is a growing need to process sensor/actuator data efficiently in real time. Although smaller, faster embedded processors will contribute to this, it is also important to develop straightforward, robust methods to reduce the overall computational burden for practical applications of interest. This paper addresses the use of equivalent circuit modeling techniques for inferring structure attributes monitored using impedance perturbation spectroscopy. In pioneering work about ten years ago significant progress was associated with the development of simple impedance models derived from the piezoelectric equations. Using mathematical modeling tools currently available from research in ultrasonics and impedance spectroscopy is expected to provide additional synergistic benefits. For purposes of structural health monitoring the objective is to use impedance spectroscopy data to infer the physical condition of structures to which small piezoelectric actuators are bonded. Features of interest include stiffness changes, mass loading, and damping or mechanical losses. Equivalent circuit models are typically simple enough to facilitate the development of practical analytical models of the actuator-structure interaction. This type of parametric structure model allows raw impedance/admittance data to be interpreted optimally using standard multiple, nonlinear regression analysis. One potential long-term outcome is the possibility of cataloging measured viscoelastic properties of the mechanical subsystems of interest as simple lists of attributes and their statistical uncertainties, whose evolution can be followed in time. Equivalent circuit models are well suited for addressing calibration and self-consistency issues such as temperature corrections, Poisson mode coupling, and distributed relaxation processes.
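    The "standard multiple, nonlinear regression" step described above can be sketched with a Butterworth-Van Dyke equivalent circuit, a common model for a bonded piezoelectric actuator: a motional R-L-C branch in parallel with a static capacitance. The circuit topology, parameter values, frequency band and the use of |Y| rather than complex impedance are illustrative assumptions of this sketch.

```python
import numpy as np
from scipy.optimize import curve_fit

def bvd_admittance_mag(f, R, L_mH, C_nF, C0_nF):
    """|Y(f)| of a Butterworth-Van Dyke equivalent circuit: a motional
    R-L-C branch (the mechanical resonance) in parallel with the static
    capacitance C0. L in mH and C in nF keep the fit parameters O(1)."""
    w = 2.0 * np.pi * f
    L, C, C0 = L_mH * 1e-3, C_nF * 1e-9, C0_nF * 1e-9
    y = 1j * w * C0 + 1.0 / (R + 1j * w * L + 1.0 / (1j * w * C))
    return np.abs(y)

# Synthetic "measured" spectrum from known parameters, then a refit from a
# deliberately wrong initial guess; changes in the fitted R, L, C would be
# the structural-health features of interest (damping, stiffness, mass).
true = (50.0, 1.0, 1.0, 5.0)             # R [ohm], L [mH], C [nF], C0 [nF]
f = np.linspace(120e3, 200e3, 400)       # band around the ~159 kHz resonance
data = bvd_admittance_mag(f, *true)
fit, _ = curve_fit(bvd_admittance_mag, f, data, p0=(80.0, 1.2, 0.8, 4.0))
```

Scaling L and C into mH/nF keeps the Jacobian well conditioned, which matters more than the choice of optimizer for fits like this.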

  16. Study and selection of in vivo protein interactions by coupling bimolecular fluorescence complementation and flow cytometry.

    PubMed

    Morell, Montse; Espargaro, Alba; Aviles, Francesc Xavier; Ventura, Salvador

    2008-01-01

    We present a high-throughput approach to study weak protein-protein interactions by coupling bimolecular fluorescent complementation (BiFC) to flow cytometry (FC). In BiFC, the interaction partners (bait and prey) are fused to two rationally designed fragments of a fluorescent protein, which recovers its function upon the binding of the interacting proteins. For weak protein-protein interactions, the detected fluorescence is proportional to the interaction strength, thereby allowing in vivo discrimination between closely related binders with different affinity for the bait protein. FC provides a method for high-speed multiparametric data acquisition and analysis; the assay is simple, thousands of cells can be analyzed in seconds and, if required, selected using fluorescence-activated cell sorting (FACS). The combination of both methods (BiFC-FC) provides a technically straightforward, fast and highly sensitive method to validate weak protein interactions and to screen and identify optimal ligands in biologically synthesized libraries. Once plasmids encoding the protein fusions have been obtained, the evaluation of a specific interaction, the generation of a library and selection of active partners using BiFC-FC can be accomplished in 5 weeks.

  17. Demographic stability metrics for conservation prioritization of isolated populations.

    PubMed

    Finn, Debra S; Bogan, Michael T; Lytle, David A

    2009-10-01

    Systems of geographically isolated habitat patches house species that occur naturally as small, disjunct populations. Many of these species are of conservation concern, particularly under the interacting influences of isolation and rapid global change. One potential conservation strategy is to prioritize the populations most likely to persist through change and act as sources for future recolonization of less stable localities. We propose an approach to classify long-term population stability (and, presumably, future persistence potential) with composite demographic metrics derived from standard population-genetic data. Stability metrics can be related to simple habitat measures for a straightforward method of classifying localities to inform conservation management. We tested these ideas in a system of isolated desert headwater streams with mitochondrial sequence data from 16 populations of a flightless aquatic insect. Populations exhibited a wide range of stability scores, which were significantly predicted by dry-season aquatic habitat size. This preliminary test suggests strong potential for our proposed method of classifying isolated populations according to persistence potential. The approach is complementary to existing methods for prioritizing local habitats according to diversity patterns and should be tested further in other systems and with additional loci to inform composite demographic stability scores.
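    One standard ingredient from which composite demographic metrics like those described here can be built is nucleotide diversity (π), the average pairwise proportion of differing sites among aligned haplotypes. This sketch is a generic population-genetics calculation with toy sequences, not the paper's composite stability score.

```python
from itertools import combinations

def nucleotide_diversity(seqs):
    """Average pairwise proportion of differing sites (pi) among aligned,
    equal-length haplotype sequences."""
    pairs = list(combinations(seqs, 2))
    if not pairs:
        return 0.0
    diffs = [sum(a != b for a, b in zip(s1, s2)) / len(s1)
             for s1, s2 in pairs]
    return sum(diffs) / len(diffs)

# Toy haplotypes: a low-variation and a high-variation population
pi_stable = nucleotide_diversity(["ACGTACGT", "ACGTACGT", "ACGTACGA"])
pi_variable = nucleotide_diversity(["ACGTACGT", "TGCAACGT", "ACGTTTTT"])
```

Ranking populations by such summaries, and regressing them against habitat measures like dry-season aquatic habitat size, is the kind of classification the abstract proposes.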

  18. Quantitative Analysis of Homogeneous Electrocatalytic Reactions at IDA Electrodes: The Example of [Ni(PPh2NBn2)2]2+

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Fei; Parkinson, B. A.; Divan, Ralu

    Interdigitated array (IDA) electrodes have been applied to study EC' (electron transfer followed by a catalytic reaction) mechanisms, and a new method for the quantitative analysis of IDA results was developed. In this method, the currents at the IDA generator and collector electrodes for an EC' mechanism are derived from the number of redox cycles and the contribution of the non-catalytic current. The fractions of bipotential recycling species and catalytically active species are then calculated, which helps in understanding the catalytic reaction mechanism. The homogeneous hydrogen evolution reaction catalyzed by the [Ni(PPh2NBn2)2]2+ (where PPh2NBn2 is 1,5-dibenzyl-3,7-diphenyl-1,5-diaza-3,7-diphosphacyclooctane) electrocatalyst was examined and analyzed with IDA electrodes. In addition, the existence of reaction intermediates in the catalytic cycle is inferred from the electrochemical behavior of glassy carbon disk electrodes and carbon IDA electrodes. This quantitative analysis of IDA electrode cyclic voltammetry currents can be used as a simple and straightforward method for determining the reaction mechanism in other catalytic systems as well.

  19. Potential of far-ultraviolet absorption spectroscopy as a highly sensitive qualitative and quantitative analysis method for polymer films, part I: classification of commercial food wrap films.

    PubMed

    Sato, Harumi; Higashi, Noboru; Ikehata, Akifumi; Koide, Noriko; Ozaki, Yukihiro

    2007-07-01

    The aim of the present study is to propose a totally new technique for the utilization of far-ultraviolet (UV) spectroscopy in polymer thin film analysis. Far-UV spectra in the 120-300 nm region have been measured in situ for six kinds of commercial polymer wrap films by use of a novel type of far-UV spectrometer that does not need vacuum evaporation. These films can be straightforwardly classified into three groups, polyethylene (PE) films, polyvinyl chloride (PVC) films, and polyvinylidene chloride (PVDC) films, by using the raw spectra. The differences in the wavelength of the absorption band due to the σ-σ* transition of the C-C bond have been used for the classification of the six kinds of films. Using this method, it was easy to distinguish the three kinds of PE films and to separate the two kinds of PVDC films. Compared with other spectroscopic methods, the advantages of this technique include nondestructive analysis, easy spectral measurement, high sensitivity, and simple spectral analysis. The present study has demonstrated that far-UV spectroscopy is a very promising technique for polymer film analysis.

  20. Adaptive behaviour and multiple equilibrium states in a predator-prey model.

    PubMed

    Pimenov, Alexander; Kelly, Thomas C; Korobeinikov, Andrei; O'Callaghan, Michael J A; Rachinskii, Dmitrii

    2015-05-01

    There is evidence that multiple stable equilibrium states are possible in real-life ecological systems. Phenomenological mathematical models which exhibit such properties can be constructed rather straightforwardly. For instance, for a predator-prey system this result can be achieved through the use of a non-monotonic functional response for the predator. However, while formal formulation of such a model is not a problem, the biological justification for such functional responses and models is usually inconclusive. In this note, we explore a conjecture that a multitude of equilibrium states can be caused by an adaptation of animal behaviour to changes of environmental conditions. In order to verify this hypothesis, we consider a simple predator-prey model, which is a straightforward extension of the classic Lotka-Volterra predator-prey model. In this model, we made an intuitively transparent assumption that the prey can change its mode of behaviour in response to the pressure of predation, choosing either "safe" or "risky" (or "business as usual") behaviour. In order to avoid a situation where one of the modes gives an absolute advantage, we introduce the concept of the "cost of a policy" into the model. A simple conceptual two-dimensional predator-prey model, which is minimal with this property, and does not rely on unusual functional responses, higher dimensionality or behaviour change by the predator, exhibits two stable co-existing equilibrium states with basins of attraction separated by a separatrix of a saddle point. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. Progress on CD-DVD laser microfabrication method to develop cell culture scaffolds integrating biomimetic characteristics

    NASA Astrophysics Data System (ADS)

    Hautefeuille, Mathieu; Vázquez-Victorio, Genaro; Cruz-Ramírez, Aaron; Cabriales, Lucia; Jiménez-Diaz, Edgar; Escutia-Guadarrama, Lidia; López-Aparicio, Jehú; Pérez-Calixto, Daniel; Cano-Jorge, Mariel; Nieto-Rivera, Brenda; Sánchez-Olvera, Raúl

    2018-02-01

    The development of organ-on-chip platforms and biological scaffolds currently requires simpler methods to microstructure biocompatible materials in three dimensions, fabricate structural and functional elements in biomaterials, or modify the physicochemical properties of desired substrates. With the aim of creating simple, cost-effective alternatives to conventional existing techniques for producing such platforms with very specific properties, a low-power CD-DVD laser pickup head was recycled and mounted on a programmable three-axis micro-displacement system in order to modify the surface of polymeric materials in a local fashion. Thanks to a specially designed method using a strongly absorbing additive to coat the materials of interest, it has been possible to establish and precisely control processes useful in microtechnology for biomedical applications and normally restricted to much less affordable high-power lasers. In this work, we present our latest progress regarding the application of our fabrication technique to the development of organ-on-chip platforms through the simple integration of several biomimetic characteristics typically achieved with traditional, less cost-effective microtechnology methods in one step or through replica-molding. Our straightforward approach enables great control of local laser microablation for true on-demand biomimetic micropatterned designs in several transparent polymers and hydrogels of tunable stiffness, and allows integration of microfluidics, microelectronics, optical waveguides, surface microstructuring and even transfer of superficial protein micropatterns onto a variety of biocompatible materials. The results presented here were validated using hepatic and fibroblast cell lines to demonstrate the viability of our procedure for organ-on-chip development and to show the impact of such features on cell culture.

  2. Simultaneous determination of V, Ni and Fe in fuel fly ash using solid sampling high resolution continuum source graphite furnace atomic absorption spectrometry.

    PubMed

    Cárdenas Valdivia, A; Vereda Alonso, E; López Guerrero, M M; Gonzalez-Rodriguez, J; Cano Pavón, J M; García de Torres, A

    2018-03-01

    A green and simple method is proposed in this work for the simultaneous determination of V, Ni and Fe in fuel ash samples by solid sampling high resolution continuum source graphite furnace atomic absorption spectrometry (SS HR CS GFAAS). The application of fast programs in combination with direct solid sampling eliminates pretreatment steps and involves minimal manipulation of the sample. Iridium-treated platforms were applied throughout the present study, enabling the use of aqueous standards for calibration. Correlation coefficients for the calibration curves were typically better than 0.9931. The concentrations found in the fuel ash samples analysed ranged from 0.66% to 4.2% for V, 0.23-0.7% for Ni and 0.10-0.60% for Fe. Precision values (%RSD) were 5.2%, 10.0% and 9.8% for V, Ni and Fe, respectively, obtained as the average of the %RSD of six replicates of each fuel ash sample. The optimum conditions established were applied to the determination of the target analytes in fuel ash samples. In order to test the accuracy and applicability of the proposed method, five ash samples from the combustion of fuel in power stations were analysed. The method accuracy was evaluated by comparing the results obtained using the proposed method with those obtained by ICP OES after acid digestion; the results showed good agreement. The goal of this work has been to develop a fast and simple methodology that permits the use of aqueous standards for straightforward calibration and the simultaneous determination of V, Ni and Fe in fuel ash samples by direct SS HR CS GFAAS.
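
    The aqueous-standard calibration step described above amounts to an ordinary straight-line fit and its inversion for an unknown. A minimal sketch with entirely hypothetical absorbance values (none of these numbers are from the study):

```python
import numpy as np

# Hypothetical aqueous calibration standards: analyte mass (ng) vs. absorbance.
conc = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
absorbance = np.array([0.002, 0.101, 0.198, 0.405, 0.790])

# Straight-line calibration and its correlation coefficient.
slope, intercept = np.polyfit(conc, absorbance, 1)
r = np.corrcoef(conc, absorbance)[0, 1]

# Invert the calibration for an unknown sample's absorbance.
unknown_conc = (0.300 - intercept) / slope
print(round(r, 4), round(unknown_conc, 1))  # r near 1, conc near 15.0 ng
```

    In practice one would also propagate the fit uncertainty into the reported concentration; the sketch only shows the inversion itself.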

  3. Photometric correction for an optical CCD-based system based on the sparsity of an eight-neighborhood gray gradient.

    PubMed

    Zhang, Yuzhong; Zhang, Yan

    2016-07-01

    In an optical measurement and analysis system based on a CCD, due to the existence of optical vignetting and natural vignetting, photometric distortion, in which the intensity falls off away from the image center, affects the subsequent processing and measuring precision severely. To deal with this problem, an easy and straightforward method used for photometric distortion correction is presented in this paper. This method introduces a simple polynomial fitting model of the photometric distortion function and employs a particle swarm optimization algorithm to get these model parameters by means of a minimizing eight-neighborhood gray gradient. Compared with conventional calibration methods, this method can obtain the profile information of photometric distortion from only a single common image captured by the optical CCD-based system, with no need for a uniform luminance area source used as a standard reference source and relevant optical and geometric parameters in advance. To illustrate the applicability of this method, numerical simulations and photometric distortions with different lens parameters are evaluated using this method in this paper. Moreover, the application example of temperature field correction for casting billets also demonstrates the effectiveness of this method. The experimental results show that the proposed method is able to achieve the maximum absolute error for vignetting estimation of 0.0765 and the relative error for vignetting estimation from different background images of 3.86%.
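
    As a rough illustration of the idea (not the authors' implementation), the polynomial vignetting model and the eight-neighborhood gray-gradient cost can be sketched as follows. A coarse grid search stands in for the paper's particle swarm optimizer, and the image, model form, and parameter values are all synthetic assumptions:

```python
import numpy as np

def vignette(shape, a, b):
    """Hypothetical even-polynomial falloff model V = 1 + a*r^2 + b*r^4,
    with r^2 normalized to 1 at the image corners."""
    h, w = shape
    y, x = np.mgrid[0:h, 0:w]
    r2 = ((x - w / 2) ** 2 + (y - h / 2) ** 2) / ((w / 2) ** 2 + (h / 2) ** 2)
    return 1.0 + a * r2 + b * r2 ** 2

def gradient_cost(img):
    """Mean absolute eight-neighborhood gray gradient, normalized by the
    mean intensity so that globally dimming the image cannot cheat the cost."""
    cost = 0.0
    for dy, dx in [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                   (0, 1), (1, -1), (1, 0), (1, 1)]:
        cost += np.abs(img - np.roll(np.roll(img, dy, axis=0), dx, axis=1)).mean()
    return cost / (8.0 * img.mean())

def corrected(img, a, b):
    return img / np.clip(vignette(img.shape, a, b), 0.05, None)

# Synthetic flat scene darkened by a known vignette (a = -0.4, b = -0.1).
rng = np.random.default_rng(0)
scene = 100.0 + rng.normal(0.0, 0.5, (64, 64))
observed = scene * vignette(scene.shape, -0.4, -0.1)

# Coarse grid search over the model parameters; any minimizer of the
# gradient cost (e.g. PSO, as in the paper) plays the same role.
best = min(((a, b) for a in np.linspace(-0.8, 0.2, 11)
            for b in np.linspace(-0.4, 0.2, 7)),
           key=lambda p: gradient_cost(corrected(observed, *p)))
print(best)
```

    A flat scene corrected with the right parameters has only noise-level gradients, so the search should land at or next to the true parameters.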

  4. Statistical Approaches for Spatiotemporal Prediction of Low Flows

    NASA Astrophysics Data System (ADS)

    Fangmann, A.; Haberlandt, U.

    2017-12-01

    An adequate assessment of regional climate change impacts on streamflow requires the integration of various sources of information and modeling approaches. This study proposes simple statistical tools for inclusion in model ensembles, which are fast and straightforward in their application, yet able to yield accurate streamflow predictions in time and space. Target variables for all approaches are annual low flow indices derived from a data set of 51 records of average daily discharge for northwestern Germany. The models require input of climatic data in the form of meteorological drought indices, derived from observed daily climatic variables and averaged over the streamflow gauges' catchment areas. Four different modeling approaches are analyzed. The basis for all of them are multiple linear regression models that estimate low flows as a function of a set of meteorological indices and/or physiographic and climatic catchment descriptors. For the first method, individual regression models are fitted at each station, predicting annual low flow values from a set of annual meteorological indices, which are subsequently regionalized using a set of catchment characteristics. The second method combines temporal and spatial prediction within a single panel data regression model, allowing estimation of annual low flow values from input of both annual meteorological indices and catchment descriptors. The third and fourth methods represent non-stationary low flow frequency analyses and require fitting of regional distribution functions. Method three is subject to a spatiotemporal prediction of an index value, method four to estimation of L-moments that adapt the regional frequency distribution to the at-site conditions. The results show that method two outperforms successive prediction in time and space. Method three also shows a high performance in the near future period, but since it relies on a stationary distribution, its application for prediction of far future changes may be problematic. Spatiotemporal prediction of L-moments appeared highly uncertain for higher-order moments, resulting in unrealistic future low flow values. All in all, the results promote the inclusion of simple statistical methods in climate change impact assessment.
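
    The panel-style regression of the second method, pooling annual values across stations with both time-varying indices and static catchment descriptors, can be illustrated with a toy synthetic example (the predictors, coefficients, and noise level below are invented, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(1)
n_stations, n_years = 5, 20

# Hypothetical predictors: an annual meteorological drought index per
# station-year, and one static catchment descriptor (e.g. area) per station.
drought = rng.normal(0.0, 1.0, (n_stations, n_years))
area = rng.uniform(50.0, 500.0, n_stations)

# Synthetic "true" annual low flows combining both effects, plus noise.
low_flow = (2.0 + 0.8 * drought + 0.004 * area[:, None]
            + rng.normal(0.0, 0.1, (n_stations, n_years)))

# Pooled (panel) design matrix: intercept, annual index, catchment descriptor.
X = np.column_stack([
    np.ones(n_stations * n_years),
    drought.ravel(),
    np.repeat(area, n_years),
])
coef, *_ = np.linalg.lstsq(X, low_flow.ravel(), rcond=None)
print(coef)  # approximately [2.0, 0.8, 0.004]
```

    Pooling lets one regression predict both across years (via the index) and across sites (via the descriptor), which is the appeal of the panel formulation over fitting each station separately.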

  5. The Use of the Puzzle Box as a Means of Assessing the Efficacy of Environmental Enrichment

    PubMed Central

    O'Connor, Angela M.; Burton, Thomas J.; Leamey, Catherine A.; Sawatari, Atomu

    2014-01-01

    Environmental enrichment can dramatically influence the development and function of neural circuits. Further, enrichment has been shown to successfully delay the onset of symptoms in models of Huntington’s disease [1-4], suggesting environmental factors can evoke a neuroprotective effect against the progressive, cellular level damage observed in neurodegenerative disorders. The ways in which an animal can be environmentally enriched, however, can vary considerably. Further, there is no straightforward manner in which the effects of environmental enrichment can be assessed: most methods require either fairly complicated behavioral paradigms and/or postmortem anatomical/physiological analyses. This protocol describes the use of a simple and inexpensive behavioral assay, the Puzzle Box [5-7], as a robust means of determining the efficacy of increased social, sensory and motor stimulation on mice compared to cohorts raised in standard laboratory conditions. This simple problem solving task takes advantage of a rodent’s innate desire to avoid open enclosures by seeking shelter. Cognitive ability is assessed by adding increasingly complex impediments to the shelter’s entrance. The time a given subject takes to successfully remove the obstructions and enter the shelter serves as the primary metric for task performance. This method could provide a reliable means of rapidly assessing the efficacy of different enrichment protocols on cognitive function, thus paving the way for systematically determining the role specific environmental factors play in delaying the onset of neurodevelopmental and neurodegenerative disease. PMID:25590345

  6. TEMPERATURE SCENARIO DEVELOPMENT USING REGRESSION METHODS

    EPA Science Inventory

    A method of developing scenarios of future temperature conditions resulting from climatic change is presented. The method is straightforward and can be used to provide information about daily temperature variations and diurnal ranges, monthly average high, and low temperatures, an...

  7. Targeted Analyte Detection by Standard Addition Improves Detection Limits in MALDI Mass Spectrometry

    PubMed Central

    Eshghi, Shadi Toghi; Li, Xingde; Zhang, Hui

    2014-01-01

    Matrix-assisted laser desorption/ionization has proven an effective tool for fast and accurate determination of many molecules. However, the detector sensitivity and chemical noise compromise the detection of many invaluable low-abundance molecules from biological and clinical samples. To challenge this limitation, we developed a targeted analyte detection (TAD) technique. In TAD, the target analyte is selectively elevated by spiking a known amount of that analyte into the sample, thereby raising its concentration above the noise level, where we take advantage of the improved sensitivity to detect the presence of the endogenous analyte in the sample. We assessed TAD on three peptides in simple and complex background solutions with various exogenous analyte concentrations in two MALDI matrices. TAD successfully improved the limit of detection (LOD) of target analytes when the target peptides were added to the sample in a concentration close to optimum concentration. The optimum exogenous concentration was estimated through a quantitative method to be approximately equal to the original LOD for each target. Also, we showed that TAD could achieve LOD improvements on an average of 3-fold in a simple and 2-fold in a complex sample. TAD provides a straightforward assay to improve the LOD of generic target analytes without the need for costly hardware modifications. PMID:22877355
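
    The standard-addition logic of TAD can be illustrated with a toy numerical model (the response factor, noise level, and concentrations below are invented): spiking at roughly the original LOD lifts the target peak above the chemical noise, and the excess over a spike-only reference reveals the endogenous analyte:

```python
import numpy as np

rng = np.random.default_rng(2)
response = 1.0   # hypothetical instrument response factor (signal per unit conc.)
noise_sd = 1.0   # chemical-noise level; take LOD ~ 3 * noise_sd
lod = 3 * noise_sd

def measure(conc, n=10):
    """Toy MALDI peak intensity: linear in concentration plus chemical noise."""
    return response * conc + rng.normal(0.0, noise_sd, n)

c_endogenous = 1.0   # below the conventional LOD of 3.0: undetectable as-is
c_spike = lod        # spike at roughly the original LOD, per the abstract

blank_plus_spike = measure(0.0 + c_spike)
sample_plus_spike = measure(c_endogenous + c_spike)

# The spiked sample reads measurably above the spike-only reference,
# revealing the endogenous analyte hidden below the LOD.
recovered = sample_plus_spike.mean() - blank_plus_spike.mean()
print(recovered)
```

    In the real assay the comparison is between measured and expected spiked-signal levels rather than against a separate blank, but the arithmetic of the excess signal is the same.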

  8. Targeted analyte detection by standard addition improves detection limits in matrix-assisted laser desorption/ionization mass spectrometry.

    PubMed

    Toghi Eshghi, Shadi; Li, Xingde; Zhang, Hui

    2012-09-18

    Matrix-assisted laser desorption/ionization (MALDI) has proven an effective tool for fast and accurate determination of many molecules. However, the detector sensitivity and chemical noise compromise the detection of many invaluable low-abundance molecules from biological and clinical samples. To challenge this limitation, we developed a targeted analyte detection (TAD) technique. In TAD, the target analyte is selectively elevated by spiking a known amount of that analyte into the sample, thereby raising its concentration above the noise level, where we take advantage of the improved sensitivity to detect the presence of the endogenous analyte in the sample. We assessed TAD on three peptides in simple and complex background solutions with various exogenous analyte concentrations in two MALDI matrices. TAD successfully improved the limit of detection (LOD) of target analytes when the target peptides were added to the sample in a concentration close to optimum concentration. The optimum exogenous concentration was estimated through a quantitative method to be approximately equal to the original LOD for each target. Also, we showed that TAD could achieve LOD improvements on an average of 3-fold in a simple and 2-fold in a complex sample. TAD provides a straightforward assay to improve the LOD of generic target analytes without the need for costly hardware modifications.

  9. Variable tunneling barriers in FEBID based PtC metal-matrix nanocomposites as a transducing element for humidity sensing.

    PubMed

    Kolb, Florian; Schmoltner, Kerstin; Huth, Michael; Hohenau, Andreas; Krenn, Joachim; Klug, Andreas; List, Emil J W; Plank, Harald

    2013-08-02

    The development of simple gas sensing concepts is still of great interest for science and technology. An ideal device would combine a single-step fabrication method with a sensor that is sensitive, analyte-selective, quantitative, and reversible without special operating/regeneration conditions such as high temperatures or special environments. In this study we demonstrate a new gas sensing concept based on a nanosized PtC metal-matrix system fabricated in a single step via focused electron beam induced deposition (FEBID). The sensors respond to polar H2O molecules selectively, quantitatively, and reversibly, without any special regeneration conditions after detection events, whereas non-polar species (O2, CO2, N2) produce no response. The key elements are isolated Pt nanograins (2-3 nm) embedded in a dielectric carbon matrix. The electrical transport in such materials is based on tunneling effects in the correlated variable range hopping regime, where the dielectric carbon matrix screens the electric field between the particles, which governs the final conductivity. The specific change of these dielectric properties upon physisorption of polar gas molecules (H2O) alters the tunneling probability and thus the overall conductivity, enabling a simple and straightforward sensing concept.

  10. Does linear separability really matter? Complex visual search is explained by simple search

    PubMed Central

    Vighneshvel, T.; Arun, S. P.

    2013-01-01

    Visual search in real life involves complex displays with a target among multiple types of distracters, but in the laboratory, it is often tested using simple displays with identical distracters. Can complex search be understood in terms of simple searches? This link may not be straightforward if complex search has emergent properties. One such property is linear separability, whereby search is hard when a target cannot be separated from its distracters using a single linear boundary. However, evidence in favor of linear separability is based on testing stimulus configurations in an external parametric space that need not be related to their true perceptual representation. We therefore set out to assess whether linear separability influences complex search at all. Our null hypothesis was that complex search performance depends only on classical factors such as target-distracter similarity and distracter homogeneity, which we measured using simple searches. Across three experiments involving a variety of artificial and natural objects, differences between linearly separable and nonseparable searches were explained using target-distracter similarity and distracter heterogeneity. Further, simple searches accurately predicted complex search regardless of linear separability (r = 0.91). Our results show that complex search is explained by simple search, refuting the widely held belief that linear separability influences visual search. PMID:24029822

  11. DDOT MXD+ method development report.

    DOT National Transportation Integrated Search

    2015-09-01

    Mixed-use development has become increasingly common across the country, including Washington, D.C. : However, a straightforward and empirically validated method for evaluating the traffic impacts of such : projects is still needed. The data presente...

  12. Implementation of Basic and Universal Gates In a single Circuit Based On Quantum-dot Cellular Automata Using Multi-Layer Crossbar Wire

    NASA Astrophysics Data System (ADS)

    Bhowmik, Dhrubajyoti; Saha, Apu Kr; Dutta, Paramartha; Nandi, Supratim

    2017-08-01

    Quantum-dot Cellular Automata (QCA) is one of the most promising emerging nanotechnologies for electronic circuits, owing to lower power consumption, higher speed and smaller size in comparison with CMOS technology. The essential device, a quantum-dot cell, can be used to build logic gates and wires, and is thus the key building block of nanotechnology circuits. By applying simple gates, the hardware requirements for a QCA circuit can be decreased and circuits can be less complex in terms of level, delay and cell count. This article presents a modest approach to implementing novel optimized simple and universal gates, which can be applied to design many variants of complex QCA circuits. The proposed gates are simple in structure and capable of implementing any digital circuit. The main aim is to build all basic and universal gates in a single circuit, with and without crossbar wire. Simulation results and physical relations confirm its usefulness in implementing any digital circuit.

  13. A smart sensor architecture based on emergent computation in an array of outer-totalistic cells

    NASA Astrophysics Data System (ADS)

    Dogaru, Radu; Dogaru, Ioana; Glesner, Manfred

    2005-06-01

    A novel smart-sensor architecture is proposed, capable to segment and recognize characters in a monochrome image. It is capable to provide a list of ASCII codes representing the recognized characters from the monochrome visual field. It can operate as a blind's aid or for industrial applications. A bio-inspired cellular model with simple linear neurons was found the best to perform the nontrivial task of cropping isolated compact objects such as handwritten digits or characters. By attaching a simple outer-totalistic cell to each pixel sensor, emergent computation in the resulting cellular automata lattice provides a straightforward and compact solution to the otherwise computationally intensive problem of character segmentation. A simple and robust recognition algorithm is built in a compact sequential controller accessing the array of cells so that the integrated device can provide directly a list of codes of the recognized characters. Preliminary simulation tests indicate good performance and robustness to various distortions of the visual field.
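
    The emergent cropping behavior can be imitated with a simple sketch (not the authors' rule): an outer-totalistic update, in which a cell activates when it or any of its 8 neighbors is active and the underlying pixel is foreground, grows a seed to a fixed point that covers exactly one isolated character:

```python
import numpy as np

def outer_totalistic_step(marker, mask):
    """One CA step: a cell becomes active if it is active or any of its
    8 neighbors is active, restricted to foreground pixels (the mask)."""
    neigh = np.zeros_like(marker)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy or dx:
                neigh |= np.roll(np.roll(marker, dy, axis=0), dx, axis=1)
    return (marker | neigh) & mask

# Toy monochrome field with two isolated "characters".
img = np.zeros((8, 12), dtype=bool)
img[1:4, 1:4] = True     # character A (3 x 3 = 9 pixels)
img[4:7, 7:11] = True    # character B (3 x 4 = 12 pixels)

# Seed a marker inside character A and iterate to a fixed point.
marker = np.zeros_like(img)
marker[2, 2] = True
while True:
    nxt = outer_totalistic_step(marker, img)
    if (nxt == marker).all():
        break
    marker = nxt

print(marker.sum(), img.sum())  # 9 21: only the seeded character is cropped
```

    The fixed point isolates one connected object per seed, which is the essence of using local emergent computation for segmentation; the paper's cells additionally run in hardware attached to each pixel sensor.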

  14. QR Codes: Outlook for Food Science and Nutrition.

    PubMed

    Sanz-Valero, Javier; Álvarez Sabucedo, Luis M; Wanden-Berghe, Carmina; Santos Gago, Juan M

    2016-01-01

    QR codes open up the possibility of developing simple-to-use, cost-effective, and functional systems based on the optical recognition of inexpensive tags attached to physical objects. These systems, combined with Web platforms, can provide advanced services that are already broadly used in many contexts of everyday life. Because they rely on the automatic recognition of messages embedded in simple graphics by means of common devices such as mobile phones, QR codes are very convenient for the average user. Regrettably, their potential has not yet been fully exploited in the domains of food science and nutrition. This paper points out some applications to make the most of this technology for these domains in a straightforward manner. Given their characteristics, we are addressing systems with low barriers to entry and high scalability for deployment; therefore, their launch among professional and final users is quite simple. The paper also provides high-level indications for the evaluation of the technological framework required to implement the identified possibilities of use.

  15. Bifurcations of solitary wave solutions for (two and three)-dimensional nonlinear partial differential equation in quantum and magnetized plasma by using two different methods

    NASA Astrophysics Data System (ADS)

    Khater, Mostafa M. A.; Seadawy, Aly R.; Lu, Dianchen

    2018-06-01

    In this research, we study two new techniques, called the extended simple equation method and the novel (G′/G)-expansion method. The extended simple equation method depends on the auxiliary equation (dϕ/dξ = α + λϕ + μϕ²), which can be solved in three ways depending on conditions on the parameters: when (λ = 0) the auxiliary equation reduces to the Riccati equation; when (α = 0) it reduces to the Bernoulli equation; and when (α ≠ 0, λ ≠ 0, μ ≠ 0) we obtain its general solutions. The novel (G′/G)-expansion method depends on the similar auxiliary equation (G′/G)′ = μ + λ(G′/G) + (v − 1)(G′/G)², whose solutions depend on the value of (λ² − 4(v − 1)μ) and on conditions on the parameters: when (λ = 0) it reduces to the Riccati equation; when (μ = 0) it reduces to the Bernoulli equation; and when (λ² ≠ 4(v − 1)μ) we obtain its general solutions. This shows how both auxiliary equations are special cases of the Riccati equation. We apply these methods to the two-dimensional nonlinear Kadomtsev-Petviashvili Burgers equation in quantum plasma and the three-dimensional nonlinear modified Zakharov-Kuznetsov equation of ion-acoustic waves in a magnetized plasma. We obtain exact traveling wave solutions of these important models and, under special conditions on the parameters, solitary traveling wave solutions. All calculations in this study have been established and verified with the aid of the Maple package program. The executed methods are powerful, effective and straightforward for solving nonlinear partial differential equations and obtaining new solutions.
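
    The role of the auxiliary equation can be checked numerically. A minimal sketch for the λ = 0 (Riccati-type) special case, verifying that the standard tangent ansatz satisfies dϕ/dξ = α + λϕ + μϕ² (the parameter values are arbitrary choices with αμ > 0):

```python
import numpy as np

# Auxiliary equation of the extended simple equation method:
#   dphi/dxi = alpha + lam*phi + mu*phi**2
alpha, lam, mu = 1.0, 0.0, 2.0   # lam = 0: the Riccati-type special case

# For lam = 0 and alpha*mu > 0 a standard solution is
#   phi(xi) = sqrt(alpha/mu) * tan(sqrt(alpha*mu) * xi).
xi = np.linspace(-0.4, 0.4, 2001)
phi = np.sqrt(alpha / mu) * np.tan(np.sqrt(alpha * mu) * xi)

# Check the ODE residual with a central-difference derivative.
dphi = np.gradient(phi, xi)
residual = dphi - (alpha + lam * phi + mu * phi**2)
print(np.abs(residual[1:-1]).max())  # tiny: the ansatz satisfies the ODE
```

    Substituting such a ϕ(ξ) into a traveling-wave ansatz is what converts the PDE into algebraic conditions on the expansion coefficients.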

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Armour, M.A.; Nelson, C.; Sather, P.; Briker, Y.

    Users of pesticides may have waste or surplus quantities or spills for disposal. One alternative is to deactivate the pesticide at the handling site by using a straightforward chemical reaction. This option can be practical for those who use relatively small quantities of a large variety of pesticides, for example, greenhouse workers, small farmers, and agricultural researchers. This paper describes practical on-site methods for the disposal of spills or small waste quantities of five commonly used pesticides: Diazinon, Chlorpyrifos, Iprodione, 2,4-D, and Captan. These have been tested in the laboratory for the rate of disappearance of the pesticide, the degree of conversion to nontoxic products, the nature and identity of the products, the practicality of the method, and the ease of reproducibility. Methods selected were shown to be safe for the operator, reliable, and reproducible. Greater than 99% of the starting material had to be reacted under reasonable conditions and length of time. Detailed descriptions of the reactions are presented, so that they can be performed with reproducible results. Protective clothing worn during the handling and application of pesticides may become contaminated. Simple laundering does not always remove all of the pesticide residues. Thus, chronic dermal exposure may result from pesticide-contaminated clothing. Appropriate methods of laundering using specific pretreatments have been determined. 7 refs.

  17. Vacuum-assisted fluid flow in microchannels to pattern substrates and cells.

    PubMed

    Shrirao, Anil B; Kung, Frank H; Yip, Derek; Cho, Cheul H; Townes-Anderson, Ellen

    2014-09-01

    Substrate and cell patterning are widely used techniques in cell biology to study cell-to-cell and cell-substrate interactions. Conventional patterning techniques work well only with simple shapes, small areas and selected bio-materials. This paper describes a method to distribute cell suspensions as well as substrate solutions into complex, long, closed (dead-end) polydimethylsiloxane (PDMS) microchannels using negative pressure. Our method builds upon a previous vacuum-assisted method used for micromolding (Jeon et al 1999 Adv. Mater 11 946) and successfully patterned collagen-I, fibronectin and Sal-1 substrates on glass and polystyrene surfaces, filling microchannels with lengths up to 120 mm and covering areas up to 13 × 10 mm(2). Vacuum-patterned substrates were subsequently used to culture mammalian PC12 and fibroblast cells and amphibian neurons. Cells were also patterned directly by injecting cell suspensions into microchannels using vacuum. Fibroblast and neuronal cells patterned using vacuum showed normal growth and minimal cell death indicating no adverse effects of vacuum on cells. Our method fills reversibly sealed PDMS microchannels. This enables the user to remove the PDMS microchannel cast and access the patterned biomaterial or cells for further experimental purposes. Overall, this is a straightforward technique that has broad applicability for cell biology.

  18. Vacuum-assisted Fluid Flow in Microchannels to Pattern Substrates and Cells

    PubMed Central

    Shrirao, Anil B.; Kung, Frank H.; Yip, Derek; Cho, Cheul H.; Townes-Anderson, Ellen

    2014-01-01

    Substrate and cell patterning are widely used techniques in cell biology to study cell-to-cell and cell-to-substrate interactions. Conventional patterning techniques work well only with simple shapes, small areas and selected bio-materials. This paper describes a method to distribute cell suspensions as well as substrate solutions into complex, long, closed (dead-end) polydimethylsiloxane (PDMS) microchannels using negative pressure. Our method builds upon a previous vacuum-assisted method used for micromolding (Jeon, Choi et al. 1999) and successfully patterned collagen-I, fibronectin and Sal-1 substrates on glass and polystyrene surfaces, filling microchannels with lengths up to 120 mm and covering areas up to 13 × 10 mm2. Vacuum-patterned substrates were subsequently used to culture mammalian PC12 and fibroblast cells and amphibian neurons. Cells were also patterned directly by injecting cell suspensions into microchannels using vacuum. Fibroblast and neuronal cells patterned using vacuum showed normal growth and minimal cell death indicating no adverse effects of vacuum on cells. Our method fills reversibly sealed PDMS microchannels. This enables the user to remove the PDMS microchannel cast and access the patterned biomaterial or cells for further experimental purposes. Overall, this is a straightforward technique that has broad applicability for cell biology. PMID:24989641

  19. The need for a usable assessment tool to analyse the efficacy of emergency care systems in developing countries: proposal to use the TEWS methodology.

    PubMed

    Sun, Jared H; Twomey, Michele; Tran, Jeffrey; Wallis, Lee A

    2012-11-01

    Ninety percent of emergency incidents occur in developing countries, and this is only expected to get worse as these nations develop. As a result, governments in developing countries are establishing emergency care systems. However, there is currently no widely-usable, objective method to monitor or research the rapid growth of emergency care in the developing world. Analysis of current quantitative methods to assess emergency care in developing countries, and the proposal of a more appropriate method. Currently accepted methods to quantitatively assess the efficacy of emergency care systems cannot be performed in most developing countries due to weak record-keeping infrastructure and the inappropriateness of applying Western derived coefficients to developing country conditions. As a result, although emergency care in the developing world is rapidly growing, researchers and clinicians are unable to objectively measure its progress or determine which policies work best in their respective countries. We propose the TEWS methodology, a simple analytical tool that can be handled by low-resource, developing countries. By relying on the most basic universal parameters, simplest calculations and straightforward protocol, the TEWS methodology allows for widespread analysis of emergency care in the developing world. This could become essential in the establishment and growth of new emergency care systems worldwide.

  20. Software Models Impact Stresses

    NASA Technical Reports Server (NTRS)

    Hanshaw, Timothy C.; Roy, Dipankar; Toyooka, Mark

    1991-01-01

    Generalized Impact Stress Software designed to assist engineers in predicting stresses caused by variety of impacts. Program straightforward, simple to implement on personal computers, "user friendly", and handles variety of boundary conditions applied to struck body being analyzed. Applications include mathematical modeling of motions and transient stresses of spacecraft, analysis of slamming of piston, of fast valve shutoffs, and play of rotating bearing assembly. Provides fast and inexpensive analytical tool for analysis of stresses and reduces dependency on expensive impact tests. Written in FORTRAN 77. Requires use of commercial software package PLOT88.

  1. Direct conversion of rheological compliance measurements into storage and loss moduli.

    PubMed

    Evans, R M L; Tassieri, Manlio; Auhl, Dietmar; Waigh, Thomas A

    2009-07-01

    We remove the need for Laplace/inverse-Laplace transformations of experimental data, by presenting a direct and straightforward mathematical procedure for obtaining frequency-dependent storage and loss moduli [G'(omega) and G''(omega), respectively], from time-dependent experimental measurements. The procedure is applicable to ordinary rheological creep (stress-step) measurements, as well as all microrheological techniques, whether they access a Brownian mean-square displacement, or a forced compliance. Data can be substituted directly into our simple formula, thus eliminating traditional fitting and smoothing procedures that disguise relevant experimental noise.
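
    The paper's piecewise-linear conversion formula can be transcribed directly into code. The sketch below follows our reading of the discrete formula (the indexing conventions and function signature are ours) and checks it against a Maxwell element, for which the piecewise-linear representation of J(t) is exact:

```python
import numpy as np

def creep_to_moduli(t, J, eta, omega):
    """Convert creep compliance samples J(t) (with t[0] = 0) to G'(w), G''(w)
    via the piecewise-linear formula of Evans et al. (2009); `eta` is the
    steady-flow viscosity governing J(t) ~ t/eta beyond the last sample."""
    iw = 1j * omega
    # First interval (t0 = 0) and the viscous tail beyond t_N.
    D = (iw * J[0]
         + (1.0 - np.exp(-iw * t[1])) * (J[1] - J[0]) / t[1]
         + np.exp(-iw * t[-1]) / eta)
    # Interior intervals: slope_k * (exp(-iw t_{k-1}) - exp(-iw t_k)).
    slopes = np.diff(J[1:]) / np.diff(t[1:])
    D += np.sum(slopes * (np.exp(-iw * t[1:-1]) - np.exp(-iw * t[2:])))
    G = iw / D
    return G.real, G.imag

# Sanity check against a Maxwell element (G0 = 1 Pa, eta = 1 Pa*s), for which
# J(t) = 1/G0 + t/eta and G*(w) = i*w*eta*G0 / (G0 + i*w*eta).
t = np.linspace(0.0, 50.0, 5001)
J = 1.0 + t
Gp, Gpp = creep_to_moduli(t, J, eta=1.0, omega=1.0)
print(Gp, Gpp)  # both approximately 0.5 at w = 1 rad/s
```

    Because the data enter the formula directly, no Laplace inversion, fitting, or smoothing step is needed, which is the point of the method.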

  2. Direct conversion of rheological compliance measurements into storage and loss moduli

    NASA Astrophysics Data System (ADS)

    Evans, R. M. L.; Tassieri, Manlio; Auhl, Dietmar; Waigh, Thomas A.

    2009-07-01

    We remove the need for Laplace/inverse-Laplace transformations of experimental data, by presenting a direct and straightforward mathematical procedure for obtaining frequency-dependent storage and loss moduli [ G'(ω) and G″(ω) , respectively], from time-dependent experimental measurements. The procedure is applicable to ordinary rheological creep (stress-step) measurements, as well as all microrheological techniques, whether they access a Brownian mean-square displacement, or a forced compliance. Data can be substituted directly into our simple formula, thus eliminating traditional fitting and smoothing procedures that disguise relevant experimental noise.

  3. Controlling Disulfide Bond Formation and Crystal Growth from 2-Mercaptobenzoic Acid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rowland, Clare E.; Cantos, P. M.; Toby, B. H.

    2011-03-02

    We report disulfide bond formation from 2-mercaptobenzoic acid (2-MBA) under hydrothermal conditions as a function of pH. Under acidic conditions, 2-MBA remains unchanged. Upon increasing pH, however, we observe 50% oxidation to 2,2'-disulfanediyldibenzoic acid (2,2'-DSBA), which is isolated as a cocrystal of both the thiol and disulfide molecules. At neutral pH, we observe complete oxidation and concurrent crystal growth. The pH sensitivity of this system allows targeting crystals of specific composition from simple building units through a straightforward pH manipulation.

  4. Complex and open fractures: a straightforward approach to management in the cat.

    PubMed

    Corr, Sandra

    2012-01-01

    Cats often present with traumatic injuries of the limbs, including complex and open fractures, frequently as a result of road traffic accidents. On initial assessment, complex and open fractures may appear to require expertise beyond the experience of the general practitioner and, in some cases, referral to a specialist may be indicated or amputation should be considered. Many cases, however, can be managed using straightforward principles. This review describes a logical and practical approach to treating such injuries. It discusses general principles of fracture management, highlights the treatment of open fractures, and describes the use of external skeletal fixation for stabilisation of both open and complex fractures. Most fractures can be stabilised using equipment and expertise available in general practice if the basic principles of fracture fixation are understood and rigorously applied. Many textbooks and journal articles have been published on the management of fractures in companion animals, presenting case studies, case series and original biomechanical research. The simple strategy for managing complex injuries that is provided in this review is based on the published literature and the author's clinical experience.

  5. Straightforward and effective protein encapsulation in polypeptide-based artificial cells.

    PubMed

    Zhi, Zheng-Liang; Haynie, Donald T

    2006-01-01

    A simple and straightforward approach to encapsulating an enzyme and preserving its function in polypeptide-based artificial cells is demonstrated. A model enzyme, glucose oxidase (GOx), was encapsulated by repeated stepwise adsorption of poly(L-lysine) and poly(L-glutamic acid) onto GOx-coated CaCO3 templates. These polypeptides are known from previous research to exhibit nanometer-scale organization in multilayer films. Templates were dissolved by ethylenediaminetetraacetic acid (EDTA) at neutral pH. Addition of polyethylene glycol (PEG) to the polypeptide assembly solutions greatly increased enzyme retention on the templates, resulting in high-capacity, high-activity loading of the enzyme into artificial cells. Assay of enzyme activity showed that over 80 mg mL(-1) GOx was retained in artificial cells after polypeptide multilayer film formation and template dissolution in the presence of PEG, but only one-fifth as much was retained in the absence of PEG. Encapsulation is a means of improving the availability of therapeutic macromolecules in biomedicine. This work therefore represents a means of developing polypeptide-based artificial cells for use as therapeutic biomacromolecule delivery vehicles.

  6. A moving control volume approach to computing hydrodynamic forces and torques on immersed bodies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nangia, Nishant; Johansen, Hans; Patankar, Neelesh A.

    Here, we present a moving control volume (CV) approach to computing hydrodynamic forces and torques on complex geometries. The method requires surface and volumetric integrals over a simple and regular Cartesian box that moves with an arbitrary velocity to enclose the body at all times. The moving box is aligned with Cartesian grid faces, which makes the integral evaluation straightforward in an immersed boundary (IB) framework. Discontinuous and noisy derivatives of velocity and pressure at the fluid–structure interface are avoided and far-field (smooth) velocity and pressure information is used. We re-visit the approach to compute hydrodynamic forces and torques through force/torque balance equations in a Lagrangian frame that some of us took in a prior work (Bhalla et al., 2013 [13]). We prove the equivalence of the two approaches for IB methods, thanks to the use of Peskin's delta functions. Both approaches are able to suppress spurious force oscillations and are in excellent agreement, as expected theoretically. Test cases ranging from Stokes to high Reynolds number regimes are considered. We discuss regridding issues for the moving CV method in an adaptive mesh refinement (AMR) context. The proposed moving CV method is not limited to a specific IB method and can also be used, for example, with embedded boundary methods.

  7. A moving control volume approach to computing hydrodynamic forces and torques on immersed bodies

    DOE PAGES

    Nangia, Nishant; Johansen, Hans; Patankar, Neelesh A.; ...

    2017-10-01

    Here, we present a moving control volume (CV) approach to computing hydrodynamic forces and torques on complex geometries. The method requires surface and volumetric integrals over a simple and regular Cartesian box that moves with an arbitrary velocity to enclose the body at all times. The moving box is aligned with Cartesian grid faces, which makes the integral evaluation straightforward in an immersed boundary (IB) framework. Discontinuous and noisy derivatives of velocity and pressure at the fluid–structure interface are avoided and far-field (smooth) velocity and pressure information is used. We re-visit the approach to compute hydrodynamic forces and torques through force/torque balance equations in a Lagrangian frame that some of us took in a prior work (Bhalla et al., 2013 [13]). We prove the equivalence of the two approaches for IB methods, thanks to the use of Peskin's delta functions. Both approaches are able to suppress spurious force oscillations and are in excellent agreement, as expected theoretically. Test cases ranging from Stokes to high Reynolds number regimes are considered. We discuss regridding issues for the moving CV method in an adaptive mesh refinement (AMR) context. The proposed moving CV method is not limited to a specific IB method and can also be used, for example, with embedded boundary methods.

  8. Molecular Diagnosis and Biomarker Identification on SELDI proteomics data by ADTBoost method.

    PubMed

    Wang, Lu-Yong; Chakraborty, Amit; Comaniciu, Dorin

    2005-01-01

    Clinical proteomics is an emerging field that will have great impact on molecular diagnosis, identification of disease biomarkers, drug discovery and clinical trials in the post-genomic era. Protein profiling of tissues and fluids from disease cases and pathological controls, together with other proteomics techniques, will play an important role in molecular diagnosis, therapeutics and personalized healthcare. We introduce a robust diagnostic method based on ADTBoost, a novel algorithm for proteomics data analysis that improves classification accuracy. It generates classification rules that are often smaller and easier to interpret, frequently yields the most discriminative features, which can be utilized as biomarkers for diagnostic purposes, and provides a measure of prediction confidence. We applied this method to amyotrophic lateral sclerosis (ALS) disease data acquired by surface-enhanced laser-desorption/ionization time-of-flight mass spectrometry (SELDI-TOF MS) experiments. Our method is shown to have outstanding prediction capacity through cross-validation, ROC analysis and a comparative study. Our molecular diagnosis method provides an efficient way to distinguish ALS disease from neurological controls, with results expressed in a simple and straightforward alternating decision tree or conditional format. We identified the most discriminative peaks in the proteomic data, which can be utilized as biomarkers for diagnosis. The approach will have broad application in molecular diagnosis through proteomics data analysis and personalized medicine in the post-genomic era.

  9. Interpreting medium ring canonical conformers by a triangular plane tessellation of the macrocycle

    NASA Astrophysics Data System (ADS)

    Khalili, Pegah; Barnett, Christopher B.; Naidoo, Kevin J.

    2013-05-01

    Cyclic conformational coordinates are essential for the distinction of molecular ring conformers as the use of Cremer-Pople coordinates have illustrated for five- and six-membered rings. Here, by tessellating medium rings into triangular planes and using the relative angles made between triangular planes we are able to assign macrocyclic pucker conformations into canonical pucker conformers such as chairs, boats, etc. We show that the definition is straightforward compared with other methods popularly used for small rings and that it is computationally simple to implement for complex macrocyclic rings. These cyclic conformational coordinates directly couple to the motion of individual nodes of a ring. Therefore, they are useful for correlating the physical properties of macrocycles with their ring pucker and measuring the dynamic ring conformational behavior. We illustrate the triangular tessellation, assignment, and pucker analysis on 7- and 8-membered rings. Sets of canonical states are given for cycloheptane and cyclooctane that have been previously experimentally analysed.
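
    The tessellation scheme can be sketched in a few lines: triangulate the ring using its centroid and measure the angle between adjacent triangle planes. This is a minimal illustration under our reading of the approach (the centroid-fan triangulation and the planar heptagon test input are our assumptions, not the authors' code):

```python
import numpy as np

def plane_angles(ring_xyz):
    """Tessellate a ring into triangles (centroid + consecutive atom pairs)
    and return the angle (degrees) between each pair of adjacent planes."""
    ring = np.asarray(ring_xyz, dtype=float)
    centroid = ring.mean(axis=0)
    n = len(ring)
    normals = []
    for i in range(n):
        # Normal of the triangle (centroid, atom i, atom i+1)
        v1 = ring[i] - centroid
        v2 = ring[(i + 1) % n] - centroid
        nvec = np.cross(v1, v2)
        normals.append(nvec / np.linalg.norm(nvec))
    normals = np.array(normals)
    # Angle between each triangle plane and the next one around the ring
    cosines = np.clip(np.einsum('ij,ij->i', normals,
                                np.roll(normals, -1, axis=0)), -1.0, 1.0)
    return np.degrees(np.arccos(cosines))

# A planar regular heptagon: all triangle planes coincide, so all angles are ~0.
theta = np.linspace(0, 2 * np.pi, 7, endpoint=False)
flat_ring = np.c_[np.cos(theta), np.sin(theta), np.zeros(7)]
print(plane_angles(flat_ring))  # -> seven values, all ~0
```

    For a puckered (non-planar) ring the same angles become non-zero, and their pattern around the ring is what gets matched against canonical conformers such as chairs and boats.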

  10. STRUCTURE IN THE 3D GALAXY DISTRIBUTION: III. FOURIER TRANSFORMING THE UNIVERSE: PHASE AND POWER SPECTRA.

    PubMed

    Scargle, Jeffrey D; Way, M J; Gazis, P R

    2017-04-10

    We demonstrate the effectiveness of a relatively straightforward analysis of the complex 3D Fourier transform of galaxy coordinates derived from redshift surveys. Numerical demonstrations of this approach are carried out on a volume-limited sample of the Sloan Digital Sky Survey redshift survey. The direct unbinned transform yields a complex 3D data cube quite similar to that from the Fast Fourier Transform (FFT) of finely binned galaxy positions. In both cases deconvolution of the sampling window function yields estimates of the true transform. Simple power spectrum estimates from these transforms are roughly consistent with those using more elaborate methods. The complex Fourier transform characterizes spatial distributional properties beyond the power spectrum in a manner different from (and we argue is more easily interpreted than) the conventional multi-point hierarchy. We identify some threads of modern large scale inference methodology that will presumably yield detections in new wider and deeper surveys.
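
    The direct unbinned transform is simple to sketch: evaluate F(k) = sum_j exp(-i k.x_j) over the point positions at chosen wavevectors, with |F(k)|^2/N as a naive power estimate. The Poisson mock sample and the choice of k-vectors below are illustrative assumptions, not the survey analysis itself:

```python
import numpy as np

def direct_transform(positions, kvecs):
    """Unbinned Fourier transform of a 3D point set:
    F(k) = sum_j exp(-i k . x_j), evaluated at the given k-vectors."""
    phases = positions @ kvecs.T          # (N_points, N_k) array of k . x
    return np.exp(-1j * phases).sum(axis=0)

rng = np.random.default_rng(0)
box = 100.0
pts = rng.uniform(0, box, size=(2000, 3))  # unclustered (Poisson) mock "galaxies"

# A few k-vectors commensurate with the box
kgrid = 2 * np.pi / box * np.array([[1, 0, 0], [0, 2, 0], [1, 1, 1]], float)
F = direct_transform(pts, kgrid)
power = np.abs(F) ** 2 / len(pts)  # simple shot-noise-normalized power estimate
print(power)  # for a Poisson sample, each value scatters around 1
```

    Clustering in real survey data would push these normalized values systematically above the Poisson level at the relevant scales.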

  11. Measuring community bicycle helmet use among children.

    PubMed Central

    Schieber, R. A.; Sacks, J. J.

    2001-01-01

    Bicycling is a popular recreational activity and a principal mode of transportation for children in the United States, yet about 300 children die and 430,000 are injured annually. Wearing a bicycle helmet is an important countermeasure, since it reduces the risk of serious brain injury by up to 85%. The Centers for Disease Control and Prevention (CDC) have funded state health departments to conduct bicycle helmet programs, and their effectiveness has been evaluated by monitoring community bicycle helmet use. Although it would appear that measuring bicycle helmet use is easy, it is actually neither simple nor straightforward. The authors describe what they have learned about assessing helmet use and what methods have been most useful. They also detail several key practical decisions that define the current CDC position regarding helmet use assessment. Although important enough in their own right, the lessons learned in the CDC's bicycle helmet evaluation may serve as a model for evaluating other injury prevention and public health programs. PMID:11847297

  12. Photocatalytic activity of low temperature oxidized Ti-6Al-4V.

    PubMed

    Unosson, Erik; Persson, Cecilia; Welch, Ken; Engqvist, Håkan

    2012-05-01

    Numerous advanced surface modification techniques exist to improve bone integration and antibacterial properties of titanium based implants and prostheses. A simple and straightforward method of obtaining uniform and controlled TiO(2) coatings of devices with complex shapes is H(2)O(2)-oxidation and hot water aging. Based on the photoactivated bactericidal properties of TiO(2), this study was aimed at optimizing the treatment to achieve high photocatalytic activity. Ti-6Al-4V samples were H(2)O(2)-oxidized and hot water aged for up to 24 and 72 h, respectively. Degradation measurements of rhodamine B during UV-A illumination of samples showed a near linear relationship between photocatalytic activity and total treatment time, and a nanoporous coating was observed by scanning electron microscopy. Grazing incidence X-ray diffraction showed a gradual decrease in crystallinity of the surface layer, suggesting that the increase in surface area rather than anatase formation was responsible for the increase in photocatalytic activity.

  13. High Glass Transition Temperature Renewable Polymers via Biginelli Multicomponent Polymerization.

    PubMed

    Boukis, Andreas C; Llevot, Audrey; Meier, Michael A R

    2016-04-01

    A novel and straightforward one-pot multicomponent polycondensation method was established in this work. The Biginelli reaction is a versatile multicomponent reaction of an aldehyde, a β-ketoester (acetoacetate) and urea, which can all be obtained from renewable resources, yielding diversely substituted 3,4-dihydropyrimidin-2(1H)-ones (DHMPs). In this study, renewable diacetoacetate monomers with different spacer chain lengths (C3, C6, C10, C20) were prepared via simple transesterification of renewable diols and commercial acetoacetates. The diacetoacetate monomers were then reacted with renewable dialdehydes, i.e., terephthalaldehyde and divanillin in a Biginelli type step-growth polymerization. The obtained DHMP polymers (polyDHMPs) displayed high molar masses, high glass transition temperatures (Tg) up to 203 °C and good thermal stability (Td5%) of 280 °C. The Tg of the polyDHMPs could be tuned by variation of the structure of the dialdehyde or the diacetoacetate component. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Zipf’s word frequency law in natural language: A critical review and future directions

    PubMed Central

    2014-01-01

    The frequency distribution of words has been a key object of study in statistical linguistics for the past 70 years. This distribution approximately follows a simple mathematical form known as Zipf's law. This article first shows that human language has a highly complex, reliable structure in the frequency distribution over and above this classic law, although prior data visualization methods have obscured this fact. A number of empirical phenomena related to word frequencies are then reviewed. These facts are chosen to be informative about the mechanisms giving rise to Zipf's law and are then used to evaluate many of the theoretical explanations of Zipf's law in language. No prior account straightforwardly explains all the basic facts or is supported with independent evaluation of its underlying assumptions. To make progress at understanding why language obeys Zipf's law, studies must seek evidence beyond the law itself, testing assumptions and evaluating novel predictions with new, independent data. PMID:24664880
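
    The rank-frequency form of Zipf's law, f(r) proportional to r^(-a) with a near 1, can be checked on any word list with a log-log fit. The toy corpus below is made up purely for illustration; real estimates require large corpora and more careful statistics:

```python
from collections import Counter
import math

text = ("the quick brown fox jumps over the lazy dog and the dog barks at the "
        "fox while the quick cat watches the dog and the fox").split()
freqs = sorted(Counter(text).values(), reverse=True)

# Estimate the Zipf exponent a in f(r) ~ C / r^a by least squares
# on log-frequency versus log-rank.
xs = [math.log(r) for r in range(1, len(freqs) + 1)]
ys = [math.log(f) for f in freqs]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
a = -sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
    / sum((x - mx) ** 2 for x in xs)
print(f"fitted Zipf exponent ~ {a:.2f}")
```

    On a corpus this small the exponent is meaningless except as a demonstration of the fitting mechanics; the article's point is precisely that the law's apparent simplicity hides structure that only large, carefully visualized datasets reveal.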

  15. Visibility graphlet approach to chaotic time series

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mutua, Stephen (Computer Science Department, Masinde Muliro University of Science and Technology, P.O. Box 190-50100, Kakamega); Gu, Changgui, E-mail: gu-changgui@163.com, E-mail: hjyang@ustc.edu.cn

    Many novel methods have been proposed for mapping time series into complex networks. Although some dynamical behaviors can be effectively captured by existing approaches, the preservation and tracking of the temporal behaviors of a chaotic system remains an open problem. In this work, we extended the visibility graphlet approach to investigate both discrete and continuous chaotic time series. We applied visibility graphlets to capture the reconstructed local states, so that each is treated as a node and tracked downstream to create a temporal chain link. Our empirical findings show that the approach accurately captures the dynamical properties of chaotic systems. Networks constructed from periodic dynamic phases all converge to regular networks and to unique network structures for each model in the chaotic zones. Furthermore, our results show that the characterization of chaotic and non-chaotic zones in the Lorenz system corresponds to the maximal Lyapunov exponent, thus providing a simple and straightforward way to analyze chaotic systems.
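
    The underlying visibility construction (mapping a time series to a network by linking samples that can "see" each other over the intervening values) can be sketched as follows. This is the basic natural-visibility criterion, not the authors' extended graphlet pipeline; the logistic-map series is an illustrative input:

```python
import numpy as np

def visibility_graph(series):
    """Natural visibility graph of a time series: nodes are samples (t, y_t);
    i and j are linked if the straight line between them clears every
    intermediate sample."""
    y = np.asarray(series, dtype=float)
    n = len(y)
    edges = set()
    for i in range(n - 1):
        for j in range(i + 1, n):
            visible = all(
                y[k] < y[j] + (y[i] - y[j]) * (j - k) / (j - i)
                for k in range(i + 1, j)
            )
            if visible:
                edges.add((i, j))
    return edges

# Logistic map in its chaotic regime (r = 4) as a test series.
x, r, traj = 0.4, 4.0, []
for _ in range(50):
    traj.append(x)
    x = r * x * (1 - x)
edges = visibility_graph(traj)
degrees = np.bincount(np.array(list(edges)).ravel(), minlength=len(traj))
print(len(edges), degrees.max())
```

    Consecutive samples are always mutually visible, so the graph always contains the time-ordered chain; chaotic versus periodic dynamics then show up in the extra long-range links and the resulting degree statistics.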

  16. Atlas of relations between climatic parameters and distributions of important trees and shrubs in North America—Modern data for climatic estimation from vegetation inventories

    USGS Publications Warehouse

    Thompson, Robert S.; Anderson, Katherine H.; Pelltier, Richard T.; Strickland, Laura E.; Shafer, Sarah L.; Bartlein, Patrick J.

    2012-01-01

    Vegetation inventories (plant taxa present in a vegetation assemblage at a given site) can be used to estimate climatic parameters based on the identification of the range of a given parameter where all taxa in an assemblage overlap ("Mutual Climatic Range"). For the reconstruction of past climates from fossil or subfossil plant assemblages, we assembled the data necessary for such analyses for 530 woody plant taxa and eight climatic parameters in North America. Here we present examples of how these data can be used to obtain paleoclimatic estimates from botanical data in a straightforward, simple, and robust fashion. We also include matrices of climate parameter versus occurrence or nonoccurrence of the individual taxa. These relations are depicted graphically as histograms of the population distributions of the occurrences of a given taxon plotted against a given climatic parameter. This provides a new method for quantification of paleoclimatic parameters from fossil plant assemblages.
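
    The Mutual Climatic Range idea reduces to intersecting the tolerated interval of every taxon present in the assemblage. A minimal sketch, with entirely hypothetical taxa and tolerance values (not from the atlas):

```python
def mutual_climatic_range(assemblage, ranges):
    """Mutual Climatic Range: intersect the tolerated interval of each taxon
    present; the overlap is the estimated range of the climatic parameter."""
    assemblage = list(assemblage)
    lo = max(ranges[t][0] for t in assemblage)
    hi = min(ranges[t][1] for t in assemblage)
    if lo > hi:
        return None  # no mutual overlap: assemblage is climatically inconsistent
    return (lo, hi)

# Hypothetical July-temperature tolerances (degC) for three taxa.
tolerances = {
    "Picea glauca":      (8.0, 18.0),
    "Pinus banksiana":   (13.0, 22.0),
    "Betula papyrifera": (10.0, 21.0),
}
print(mutual_climatic_range(tolerances.keys(), tolerances))  # -> (13.0, 18.0)
```

    With the atlas's occurrence matrices, the same intersection would be run once per climatic parameter over the real per-taxon distributions.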

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curceanu, C.; Bragadireanu, M.; Sirghi, D.

    The Pauli Exclusion Principle (PEP) is one of the basic principles of modern physics and, even if there are no compelling reasons to doubt its validity, it is still debated today because an intuitive, elementary explanation is still missing, and because of its unique stand among the basic symmetries of physics. We present an experimental test of the validity of the Pauli Exclusion Principle for electrons based on a straightforward idea put forward a few years ago by Ramberg and Snow (E. Ramberg and G. A. Snow 1990 Phys. Lett. B 238 438). We performed a very accurate search of X-rays from the Pauli-forbidden atomic transitions of electrons in the already filled 1S shells of copper atoms. Although the experiment has a very simple structure, it poses deep conceptual and interpretational problems. Here we describe the experimental method and recent experimental results interpreted as an upper limit for the probability to violate the Pauli Exclusion Principle. We also present future plans to upgrade the experimental apparatus.

  18. Fourier-interpolation superresolution optical fluctuation imaging (fSOFi) (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Enderlein, Joerg; Stein, Simon C.; Huss, Anja; Hähnel, Dirk; Gregor, Ingo

    2016-02-01

    Stochastic Optical Fluctuation Imaging (SOFI) is a superresolution fluorescence microscopy technique which enhances the spatial resolution of an image by evaluating the temporal fluctuations of blinking fluorescent emitters. SOFI is not based on the identification and localization of single molecules, as in the widely used Photoactivation Localization Microscopy (PALM) or Stochastic Optical Reconstruction Microscopy (STORM), but computes a superresolved image via temporal cumulants from a recorded movie. A technical challenge is that, when directly applying the SOFI algorithm to a movie of raw images, the pixel size of the final SOFI image is the same as that of the original images, which becomes problematic when the final SOFI resolution is much smaller than this value. In the past, sophisticated cross-correlation schemes have been used to tackle this problem. Here, we present an alternative, exact, straightforward, and simple solution using an interpolation scheme based on Fourier transforms. We exemplify the method on simulated and experimental data.
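
    A Fourier-transform interpolation of this kind is typically realized by zero-padding the centered spectrum, which performs exact sinc interpolation for band-limited data. The sketch below illustrates that general scheme as we understand it, not the authors' published code:

```python
import numpy as np

def fourier_upsample(img, factor):
    """Upsample a 2D image by zero-padding its centered Fourier transform,
    which shrinks the pixel size by `factor` without altering the content."""
    ny, nx = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))
    NY, NX = ny * factor, nx * factor
    big = np.zeros((NY, NX), dtype=complex)
    y0, x0 = (NY - ny) // 2, (NX - nx) // 2
    big[y0:y0 + ny, x0:x0 + nx] = F          # embed spectrum in a larger grid
    # factor**2 restores the amplitude scaling of the inverse FFT
    return (np.fft.ifft2(np.fft.ifftshift(big)) * factor ** 2).real

img = np.add.outer(np.sin(np.linspace(0, np.pi, 8)),
                   np.cos(np.linspace(0, np.pi, 8)))
up = fourier_upsample(img, 4)
print(img.shape, "->", up.shape)  # (8, 8) -> (32, 32)
```

    The interpolated image passes exactly through the original samples (up[::4, ::4] equals img up to round-off), which is the "exact" property the abstract emphasizes over ad hoc interpolation.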

  19. STRUCTURE IN THE 3D GALAXY DISTRIBUTION: III. FOURIER TRANSFORMING THE UNIVERSE: PHASE AND POWER SPECTRA

    PubMed Central

    Scargle, Jeffrey D.; Way, M. J.; Gazis, P. R.

    2017-01-01

    We demonstrate the effectiveness of a relatively straightforward analysis of the complex 3D Fourier transform of galaxy coordinates derived from redshift surveys. Numerical demonstrations of this approach are carried out on a volume-limited sample of the Sloan Digital Sky Survey redshift survey. The direct unbinned transform yields a complex 3D data cube quite similar to that from the Fast Fourier Transform (FFT) of finely binned galaxy positions. In both cases deconvolution of the sampling window function yields estimates of the true transform. Simple power spectrum estimates from these transforms are roughly consistent with those using more elaborate methods. The complex Fourier transform characterizes spatial distributional properties beyond the power spectrum in a manner different from (and we argue is more easily interpreted than) the conventional multi-point hierarchy. We identify some threads of modern large scale inference methodology that will presumably yield detections in new wider and deeper surveys. PMID:29628519

  20. Structure in the 3D Galaxy Distribution. III. Fourier Transforming the Universe: Phase and Power Spectra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scargle, Jeffrey D.; Way, M. J.; Gazis, P. R., E-mail: Jeffrey.D.Scargle@nasa.gov, E-mail: Michael.J.Way@nasa.gov, E-mail: PGazis@sbcglobal.net

    We demonstrate the effectiveness of a relatively straightforward analysis of the complex 3D Fourier transform of galaxy coordinates derived from redshift surveys. Numerical demonstrations of this approach are carried out on a volume-limited sample of the Sloan Digital Sky Survey redshift survey. The direct unbinned transform yields a complex 3D data cube quite similar to that from the Fast Fourier Transform of finely binned galaxy positions. In both cases, deconvolution of the sampling window function yields estimates of the true transform. Simple power spectrum estimates from these transforms are roughly consistent with those using more elaborate methods. The complex Fourier transform characterizes spatial distributional properties beyond the power spectrum in a manner different from (and we argue is more easily interpreted than) the conventional multipoint hierarchy. We identify some threads of modern large-scale inference methodology that will presumably yield detections in new wider and deeper surveys.

  1. Co3O4 Nanowire Arrays toward Superior Water Oxidation Electrocatalysis in Alkaline Media by Surface Amorphization.

    PubMed

    Zhou, Dan; He, Liangbo; Zhang, Rong; Hao, Shuai; Hou, Xiandeng; Liu, Zhiang; Du, Gu; Asiri, Abdullah M; Zheng, Chengbin; Sun, Xuping

    2017-11-07

    It is highly desirable to develop a simple, fast and straightforward method to boost the alkaline water oxidation of metal oxide catalysts. In this communication, we report our recent finding that the generation of an amorphous Co-borate layer on Co3O4 nanowire arrays supported on Ti mesh (Co3O4@Co-Bi NA/TM) leads to significantly boosted OER activity. The as-prepared Co3O4@Co-Bi NA/TM demands an overpotential of 304 mV to drive a geometrical current density of 20 mA cm-2 in 1.0 M KOH, which is 109 mV less than that for Co3O4 NA/TM, with its catalytic activity being preserved for at least 20 h. This suggests that the amorphous Co-Bi layer promotes more CoOx(OH)y generation on the Co3O4 surface. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Graphene oxide--MnO2 nanocomposites for supercapacitors.

    PubMed

    Chen, Sheng; Zhu, Junwu; Wu, Xiaodong; Han, Qiaofeng; Wang, Xin

    2010-05-25

    A composite of graphene oxide supported by needle-like MnO(2) nanocrystals (GO-MnO(2) nanocomposites) has been fabricated through a simple soft chemical route in a water-isopropyl alcohol system. The formation mechanism of these intriguing nanocomposites investigated by transmission electron microscopy and Raman and ultraviolet-visible absorption spectroscopy is proposed as intercalation and adsorption of manganese ions onto the GO sheets, followed by the nucleation and growth of the crystal species in a double solvent system via dissolution-crystallization and oriented attachment mechanisms, which in turn results in the exfoliation of GO sheets. Interestingly, it was found that the electrochemical performance of as-prepared nanocomposites could be enhanced by the chemical interaction between GO and MnO(2). This method provides a facile and straightforward approach to deposit MnO(2) nanoparticles onto the graphene oxide sheets (single layer of graphite oxide) and may be readily extended to the preparation of other classes of hybrids based on GO sheets for technological applications.

  3. Ultrasensitive aptamer biosensor for malathion detection based on cationic polymer and gold nanoparticles.

    PubMed

    Bala, Rajni; Kumar, Munish; Bansal, Kavita; Sharma, Rohit K; Wangoo, Nishima

    2016-11-15

    In this work, we have demonstrated a novel sensing strategy for an organophosphorus pesticide, malathion, employing unmodified gold nanoparticles, an aptamer and a positively charged, water-soluble polyelectrolyte, poly(diallyldimethylammonium chloride) (PDDA). When associated with the aptamer, PDDA prevents the aggregation of the gold nanoparticles; no such inhibition occurs when the aptamer-specific pesticide is added to the solution, changing the color of the solution from red to blue. The biosensor is simple and straightforward, and the assay can be completed in a few minutes without the need for any expensive equipment or trained personnel. The proposed method was linear over the concentration range 0.5-1000 pM, with a limit of detection of 0.06 pM. Moreover, the proposed assay selectively recognized malathion in the presence of other interfering substances and can thus be applied to real samples for the rapid screening of malathion. Copyright © 2016 Elsevier B.V. All rights reserved.
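
    As a side note on how such figures of merit are commonly obtained: a 3-sigma limit of detection follows from the calibration slope and the standard deviation of blank replicates. All numbers below are invented for illustration and are not the paper's data:

```python
import numpy as np

# Illustrative 3-sigma limit-of-detection estimate for a roughly linear
# colorimetric calibration (made-up calibration points and blank noise).
conc = np.array([0.5, 5, 50, 500, 1000])           # pM
signal = np.array([0.012, 0.05, 0.46, 4.1, 8.3])   # colorimetric response
slope, intercept = np.polyfit(conc, signal, 1)     # least-squares calibration
sd_blank = 0.00017                                 # std. dev. of blanks (assumed)
lod = 3 * sd_blank / slope
print(f"LOD ~ {lod:.2f} pM")
```

    In practice a low-pM assay would be fit on the linear low end of the range; the point here is only the 3*sd/slope mechanics.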

  4. A plasmid-based lacZα gene assay for DNA polymerase fidelity measurement

    PubMed Central

    Keith, Brian J.; Jozwiakowski, Stanislaw K.; Connolly, Bernard A.

    2013-01-01

    A significantly improved DNA polymerase fidelity assay, based on a gapped plasmid containing the lacZα reporter gene in a single-stranded region, is described. Nicking at two sites flanking lacZα, and removing the excised strand by thermocycling in the presence of complementary competitor DNA, is used to generate the gap. Simple methods are presented for preparing the single-stranded competitor. The gapped plasmid can be purified, in high amounts and in a very pure state, using benzoylated–naphthoylated DEAE–cellulose, resulting in a low background mutation frequency (∼1 × 10−4). Two key parameters, the number of detectable sites and the expression frequency, necessary for measuring polymerase error rates have been determined. DNA polymerase fidelity is measured by gap filling in vitro, followed by transformation into Escherichia coli and scoring of blue/white colonies and converting the ratio to error rate. Several DNA polymerases have been used to fully validate this straightforward and highly sensitive system. PMID:23098700
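
    The conversion from blue/white colony counts to a per-base error rate uses exactly the two parameters the abstract names: the number of detectable sites and the expression frequency. A sketch with invented numbers, following the standard forward-mutation calculation as we read it:

```python
def polymerase_error_rate(mutant_colonies, total_colonies,
                          detectable_sites, expression_freq,
                          background=0.0):
    """Per-base error rate from a lacZalpha forward-mutation assay:
    (mutant fraction - background) / (detectable sites x expression frequency)."""
    mutant_fraction = mutant_colonies / total_colonies
    return (mutant_fraction - background) / (detectable_sites * expression_freq)

# Illustrative numbers only (not from the paper): 25 white colonies out of
# 50,000, 120 detectable sites, 0.5 expression frequency, 1e-4 background.
rate = polymerase_error_rate(25, 50_000, 120, 0.5, background=1e-4)
print(f"error rate ~ {rate:.2e}")
```

    The low background mutation frequency the authors achieve (~1e-4) matters because it is subtracted from the observed mutant fraction before normalization; a noisy background would swamp high-fidelity polymerases.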

  5. C2 Arylated Benzo[b]thiophene Derivatives as Staphylococcus aureus NorA Efflux Pump Inhibitors.

    PubMed

    Liger, François; Bouhours, Pascale; Ganem-Elbaz, Carine; Jolivalt, Claude; Pellet-Rostaing, Stéphane; Popowycz, Florence; Paris, Jean-Marc; Lemaire, Marc

    2016-02-04

    An innovative and straightforward synthesis of second-generation 2-arylbenzo[b]thiophenes as structural analogues of INF55 and the first generation of our laboratory-made molecules was developed. The synthesis of C2-arylated benzo[b]thiophene derivatives was achieved through a method involving direct arylation, followed by simple structural modifications. Among the 34 compounds tested, two of them were potent NorA pump inhibitors, which led to a 16-fold decrease in the ciprofloxacin minimum inhibitory concentration (MIC) against the SA-1199B strain at concentrations of 0.25 and 0.5 μg mL(-1) (1 and 1.5 μM, respectively). This is a promising result relative to that obtained for reserpine (MIC=20 μg mL(-1)), a reference compound amongst NorA pump inhibitors. These molecules thus represent promising candidates to be used in combination with ciprofloxacin against fluoroquinolone-resistant strains. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habercorn, Lasse; Merkl, Jan-Philip; Kloust, Hauke Christian

    Polymer encapsulation of quantum dots via seeded emulsion polymerization is a powerful tool for the preparation of fluorescent nanoparticles with extraordinary stability in aqueous solution. The seeded emulsion polymerization method allows a straightforward and simple in-situ functionalization of the polymer shell while preserving the optical properties of the quantum dots. These requirements are essential for the application of semiconductor nanoparticles as markers in biomedical applications. Polymer-encapsulated quantum dots showed only a marginal loss of quantum yield when exposed to copper(II) ions, whereas under normal conditions the quantum dots were totally quenched in the presence of copper(II) ions. Furthermore, a broad range of in-situ functionalized polymer-coated quantum dots were obtained by addition of functional monomers or surfactants such as fluorescent dye molecules, antibodies or specific DNA aptamers. The emulsion polymerization can also be used to prepare multifunctional hybrid systems, combining different nanoparticles within one construct without any adverse effect on the properties of the starting materials.

  7. A simple approach to the joint inversion of seismic body and surface waves applied to the southwest U.S.

    NASA Astrophysics Data System (ADS)

    West, Michael; Gao, Wei; Grand, Stephen

    2004-08-01

    Body and surface wave tomography have complementary strengths when applied to regional-scale studies of the upper mantle. We present a straightforward technique for their joint inversion which hinges on treating surface waves as horizontally propagating rays with deep sensitivity kernels. This formulation allows surface wave phase or group measurements to be integrated directly into existing body wave tomography inversions with modest effort. We apply the joint inversion to a synthetic case and to data from the RISTRA project in the southwest U.S. The data variance reductions demonstrate that the joint inversion produces a better fit to the combined dataset, not merely a compromise. For large arrays, this method offers an improvement over augmenting body wave tomography with a one-dimensional model. The joint inversion combines the absolute velocity of a surface wave model with the high resolution afforded by body waves, both qualities that are required to understand regional-scale mantle phenomena.
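
    The "modest effort" integration can be pictured as stacking the surface-wave rows beneath the body-wave rows of a single linear system and solving jointly. A toy two-cell sketch with made-up sensitivity kernels and residuals (the real inversion involves many rays, regularization, and 3D kernels):

```python
import numpy as np

# Toy joint inversion: solve for slowness perturbations m in two cells from
# one body-wave ray (equal path length in both cells) and one surface-wave
# "ray" whose depth kernel weights the shallow cell more heavily.
G_body = np.array([[1.0, 1.0]])      # body-wave path lengths through cells
G_surf = np.array([[1.6, 0.4]])      # surface-wave sensitivity kernel (assumed)
d_body = np.array([0.30])            # body-wave travel-time residual (s)
d_surf = np.array([0.36])            # surface-wave phase delay (s)

w = 1.0                              # relative weight of the surface-wave data
G = np.vstack([G_body, w * G_surf])  # one stacked system fits both datasets
d = np.concatenate([d_body, w * d_surf])
m, *_ = np.linalg.lstsq(G, d, rcond=None)
print(m)  # the model that fits both datasets simultaneously
```

    The weight w is the knob that trades off the two datasets; the paper's variance-reduction argument is that a sensible joint solution fits both, not merely a compromise between them.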

  8. Structure in the 3D Galaxy Distribution: III. Fourier Transforming the Universe: Phase and Power Spectra

    NASA Technical Reports Server (NTRS)

    Scargle, Jeffrey D.; Way, M. J.; Gazis, P. R.

    2017-01-01

    We demonstrate the effectiveness of a relatively straightforward analysis of the complex 3D Fourier transform of galaxy coordinates derived from redshift surveys. Numerical demonstrations of this approach are carried out on a volume-limited sample of the Sloan Digital Sky Survey redshift survey. The direct unbinned transform yields a complex 3D data cube quite similar to that from the Fast Fourier Transform (FFT) of finely binned galaxy positions. In both cases deconvolution of the sampling window function yields estimates of the true transform. Simple power spectrum estimates from these transforms are roughly consistent with those using more elaborate methods. The complex Fourier transform characterizes spatial distributional properties beyond the power spectrum in a manner different from (and we argue is more easily interpreted than) the conventional multi-point hierarchy. We identify some threads of modern large scale inference methodology that will presumably yield detections in new wider and deeper surveys.

  9. A model of color vision with a robot system

    NASA Astrophysics Data System (ADS)

    Wang, Haihui

    2006-01-01

    In this paper, we propose to generalize the saccade target method, positing that perceptual stability in general arises by learning the effects one's actions have on sensor responses. The apparent stability of color percepts across saccadic eye movements can be explained by positing that perception involves observing how sensory input changes in response to motor activities. The changes related to self-motion can be learned and, once learned, used to form stable percepts. The variation of sensor data in response to a motor act is therefore a requirement for stable perception rather than something that has to be compensated for in order to perceive a stable world. We provide a simple implementation of this sensory-motor contingency view of perceptual stability, and show how a straightforward application of the temporal-difference learning technique yields color percepts that are stable across saccadic eye movements, even though the raw sensor input may change radically.

  10. Scanning micro-resonator direct-comb absolute spectroscopy

    PubMed Central

    Gambetta, Alessio; Cassinerio, Marco; Gatti, Davide; Laporta, Paolo; Galzerano, Gianluca

    2016-01-01

    Direct optical Frequency Comb Spectroscopy (DFCS) is proving to be a fundamental tool in many areas of science and technology thanks to its unique performance in terms of ultra-broadband, high-speed detection and frequency accuracy, allowing for high-fidelity mapping of atomic and molecular energy structure. Here we present a novel DFCS approach based on a scanning Fabry-Pérot micro-cavity resonator (SMART) providing a simple, compact and accurate method to resolve the mode structure of an optical frequency comb. The SMART approach, while drastically reducing system complexity, allows for a straightforward absolute calibration of the optical-frequency axis with an ultimate resolution limited by the micro-resonator resonance linewidth and can be used in any spectral region from UV to THz. We present an application to high-precision spectroscopy of acetylene at 1.54 μm, demonstrating performance comparable to or even better than that of current state-of-the-art DFCS systems in terms of sensitivity, optical bandwidth and frequency resolution. PMID:27752132

  11. Homogenization of Large-Scale Movement Models in Ecology

    USGS Publications Warehouse

    Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.

    2011-01-01

    A difficulty in using diffusion models to predict large scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small scale (10-100 m) habitat variability on large scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.
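The key averaging step can be sketched as follows, under the assumption (consistent with the homogenized limit of ecological diffusion, where the effective motility is a harmonic rather than arithmetic average of the small-scale motilities) that habitat patches and their motility values are known; the numbers here are made up:

```python
# Hedged sketch: area-weighted harmonic mean of small-scale motilities,
# the averaging assumed here for homogenized ecological diffusion.
# Patch motilities and areas below are illustrative, not from the paper.
def effective_motility(mu_patches, areas):
    """Harmonic mean of patch motilities weighted by patch area."""
    total = sum(areas)
    return total / sum(a / m for a, m in zip(areas, mu_patches))

# Two equal-area habitats: fast movement (mu=4) and slow movement (mu=1)
mu_bar = effective_motility([4.0, 1.0], [0.5, 0.5])  # -> 1.6
```

Note how the slow habitat dominates: the harmonic mean (1.6) sits well below the arithmetic mean (2.5), capturing the fact that residence time concentrates where movement is slow.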

  12. Electron beam induced strong organic/inorganic grafting for thermally stable lithium-ion battery separators

    NASA Astrophysics Data System (ADS)

    Choi, Yunah; Kim, Jin Il; Moon, Jungjin; Jeong, Jongyeob; Park, Jong Hyeok

    2018-06-01

    A tailored interface between organic and inorganic materials is of great importance to maximize the synergistic effects from hybridization. Polyethylene separators over-coated with inorganic thin films are the state-of-the-art technology for preparing various secondary batteries with high safety. Unfortunately, the organic/inorganic hybrid separators have the drawback of a non-ideal interface, thus causing poor thermal/dimensional stability. Here, we report a straightforward method to resolve the drawback of the non-ideal interface between vapor deposited SiO2 and polyethylene separators, to produce a highly stable lithium-ion battery separator through strong chemical linking generated by direct electron beam irradiation. The simple treatment with an electron beam with an optimized dose generates thermally stable polymer separators, which may enhance battery safety under high-temperature conditions. Additionally, the newly formed Si-O-C or Si-CH3 chemical bonding enhances electrolyte-separator compatibility and thus may provide a better environment for ionic transport between the cathode and anode, thereby leading to better charge/discharge behaviors.

  13. Simple prescription for computing the interparticle potential energy for D-dimensional gravity systems

    NASA Astrophysics Data System (ADS)

    Accioly, Antonio; Helayël-Neto, José; Barone, F. E.; Herdy, Wallace

    2015-02-01

    A straightforward prescription for computing the D-dimensional potential energy of gravitational models, which is strongly based on the Feynman path integral, is built up. Using this method, the static potential energy for the interaction of two masses is found in the context of D-dimensional higher-derivative gravity models, and its behavior is analyzed afterwards in both ultraviolet and infrared regimes. As a consequence, two new gravity systems in which the potential energy is finite at the origin, respectively, in D = 5 and D = 6, are found. Since the aforementioned prescription is equivalent to that based on the marriage between quantum mechanics (to leading order, i.e., in the first Born approximation) and the nonrelativistic limit of quantum field theory, and bearing in mind that the latter relies basically on the calculation of the nonrelativistic Feynman amplitude (M_NR), a trivial expression for computing M_NR is obtained from our prescription as an added bonus.

  14. Modeling of proton-induced radioactivation background in hard X-ray telescopes: Geant4-based simulation and its demonstration by Hitomi's measurement in a low Earth orbit

    NASA Astrophysics Data System (ADS)

    Odaka, Hirokazu; Asai, Makoto; Hagino, Kouichi; Koi, Tatsumi; Madejski, Greg; Mizuno, Tsunefumi; Ohno, Masanori; Saito, Shinya; Sato, Tamotsu; Wright, Dennis H.; Enoto, Teruaki; Fukazawa, Yasushi; Hayashi, Katsuhiro; Kataoka, Jun; Katsuta, Junichiro; Kawaharada, Madoka; Kobayashi, Shogo B.; Kokubun, Motohide; Laurent, Philippe; Lebrun, Francois; Limousin, Olivier; Maier, Daniel; Makishima, Kazuo; Mimura, Taketo; Miyake, Katsuma; Mori, Kunishiro; Murakami, Hiroaki; Nakamori, Takeshi; Nakano, Toshio; Nakazawa, Kazuhiro; Noda, Hirofumi; Ohta, Masayuki; Ozaki, Masanobu; Sato, Goro; Sato, Rie; Tajima, Hiroyasu; Takahashi, Hiromitsu; Takahashi, Tadayuki; Takeda, Shin'ichiro; Tanaka, Takaaki; Tanaka, Yasuyuki; Terada, Yukikatsu; Uchiyama, Hideki; Uchiyama, Yasunobu; Watanabe, Shin; Yamaoka, Kazutaka; Yasuda, Tetsuya; Yatsu, Yoichi; Yuasa, Takayuki; Zoglauer, Andreas

    2018-05-01

    Hard X-ray astronomical observatories in orbit suffer from a significant amount of background due to radioactivation induced by cosmic-ray protons and/or geomagnetically trapped protons. Within the framework of a full Monte Carlo simulation, we present modeling of in-orbit instrumental background which is dominated by radioactivation. To reduce the computation time required by straightforward simulations of delayed emissions from activated isotopes, we insert a semi-analytical calculation that converts production probabilities of radioactive isotopes by interaction of the primary protons into decay rates at measurement time of all secondary isotopes. Therefore, our simulation method is separated into three steps: (1) simulation of isotope production, (2) semi-analytical conversion to decay rates, and (3) simulation of decays of the isotopes at measurement time. This method is verified by a simple setup that has a CdTe semiconductor detector, and shows a 100-fold improvement in efficiency over the straightforward simulation. To demonstrate its experimental performance, the simulation framework was tested against data measured with a CdTe sensor in the Hard X-ray Imager onboard the Hitomi X-ray Astronomy Satellite, which was put into a low Earth orbit with an altitude of 570 km and an inclination of 31°, and thus experienced a large amount of irradiation from geomagnetically trapped protons during its passages through the South Atlantic Anomaly. The simulation is able to treat full histories of the proton irradiation and multiple measurement windows. The simulation results agree very well with the measured data, showing that the measured background is well described by the combination of proton-induced radioactivation of the CdTe detector itself and thick Bi4Ge3O12 scintillator shields, leakage of cosmic X-ray background and albedo gamma-ray radiation, and emissions from naturally contaminated isotopes in the detector system.
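Step (2) of the three-step method above, the semi-analytical conversion from isotope production to decay rate at measurement time, can be sketched for the simplest case of a single isotope with no decay chain. The production rate, half-life, and timing values below are illustrative assumptions, not numbers from the paper:

```python
# Hedged sketch of the semi-analytical conversion: turn a constant isotope
# production rate during irradiation into an activity at measurement time.
# Single isotope, no decay chains; all numbers are illustrative.
import math

def decay_rate(production_rate, half_life, t_irradiate, t_cool):
    """Activity [decays/s] after irradiating at production_rate [1/s]
    for t_irradiate seconds, then cooling for t_cool seconds."""
    lam = math.log(2) / half_life
    # Isotope population at end of irradiation (production vs. decay balance)
    n_end = production_rate / lam * (1.0 - math.exp(-lam * t_irradiate))
    # Remaining population decays freely during the cooling interval
    return lam * n_end * math.exp(-lam * t_cool)

# e.g. 100 isotopes/s produced for 600 s, measured 300 s later, T1/2 = 600 s
activity = decay_rate(100.0, 600.0, 600.0, 300.0)
```

The full framework tracks many isotopes over the complete proton-irradiation history (e.g. repeated South Atlantic Anomaly passages), which amounts to summing this expression over isotopes and irradiation intervals.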

  15. Modeling of proton-induced radioactivation background in hard X-ray telescopes: Geant4-based simulation and its demonstration by Hitomi's measurement in a low Earth orbit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Odaka, Hirokazu; Asai, Makoto; Hagino, Kouichi

    Hard X-ray astronomical observatories in orbit suffer from a significant amount of background due to radioactivation induced by cosmic-ray protons and/or geomagnetically trapped protons. Within the framework of a full Monte Carlo simulation, we present modeling of in-orbit instrumental background which is dominated by radioactivation. To reduce the computation time required by straightforward simulations of delayed emissions from activated isotopes, we insert a semi-analytical calculation that converts production probabilities of radioactive isotopes by interaction of the primary protons into decay rates at measurement time of all secondary isotopes. Therefore, our simulation method is separated into three steps: (1) simulation of isotope production, (2) semi-analytical conversion to decay rates, and (3) simulation of decays of the isotopes at measurement time. This method is verified by a simple setup that has a CdTe semiconductor detector, and shows a 100-fold improvement in efficiency over the straightforward simulation. To demonstrate its experimental performance, the simulation framework was tested against data measured with a CdTe sensor in the Hard X-ray Imager onboard the Hitomi X-ray Astronomy Satellite, which was put into a low Earth orbit with an altitude of 570 km and an inclination of 31°, and thus experienced a large amount of irradiation from geomagnetically trapped protons during its passages through the South Atlantic Anomaly. The simulation is able to treat full histories of the proton irradiation and multiple measurement windows. The simulation results agree very well with the measured data, showing that the measured background is well described by the combination of proton-induced radioactivation of the CdTe detector itself and thick Bi4Ge3O12 scintillator shields, leakage of cosmic X-ray background and albedo gamma-ray radiation, and emissions from naturally contaminated isotopes in the detector system.

  16. Modeling of proton-induced radioactivation background in hard X-ray telescopes: Geant4-based simulation and its demonstration by Hitomi's measurement in a low Earth orbit

    DOE PAGES

    Odaka, Hirokazu; Asai, Makoto; Hagino, Kouichi; ...

    2018-02-19

    Hard X-ray astronomical observatories in orbit suffer from a significant amount of background due to radioactivation induced by cosmic-ray protons and/or geomagnetically trapped protons. Within the framework of a full Monte Carlo simulation, we present modeling of in-orbit instrumental background which is dominated by radioactivation. To reduce the computation time required by straightforward simulations of delayed emissions from activated isotopes, we insert a semi-analytical calculation that converts production probabilities of radioactive isotopes by interaction of the primary protons into decay rates at measurement time of all secondary isotopes. Therefore, our simulation method is separated into three steps: (1) simulation of isotope production, (2) semi-analytical conversion to decay rates, and (3) simulation of decays of the isotopes at measurement time. This method is verified by a simple setup that has a CdTe semiconductor detector, and shows a 100-fold improvement in efficiency over the straightforward simulation. To demonstrate its experimental performance, the simulation framework was tested against data measured with a CdTe sensor in the Hard X-ray Imager onboard the Hitomi X-ray Astronomy Satellite, which was put into a low Earth orbit with an altitude of 570 km and an inclination of 31°, and thus experienced a large amount of irradiation from geomagnetically trapped protons during its passages through the South Atlantic Anomaly. The simulation is able to treat full histories of the proton irradiation and multiple measurement windows. The simulation results agree very well with the measured data, showing that the measured background is well described by the combination of proton-induced radioactivation of the CdTe detector itself and thick Bi4Ge3O12 scintillator shields, leakage of cosmic X-ray background and albedo gamma-ray radiation, and emissions from naturally contaminated isotopes in the detector system.

  17. Exploring Physics with Computer Animation and PhysGL

    NASA Astrophysics Data System (ADS)

    Bensky, T. J.

    2016-10-01

    This book shows how the web-based PhysGL programming environment (http://physgl.org) can be used to teach and learn elementary mechanics (physics) using simple coding exercises. The book's theme is that the lessons encountered in such a course can be used to generate physics-based animations, providing students with compelling and self-made visuals to aid their learning. Topics presented are parallel to those found in a traditional physics text, making for straightforward integration into a typical lecture-based physics course. Users will appreciate the ease with which compelling OpenGL-based graphics and animations can be produced using PhysGL, as well as its clean, simple language constructs. The author argues that coding should be a standard part of lower-division STEM courses, and provides many anecdotal experiences and observations, including the observed benefits of the coding work.

  18. A constraint on antigravity of antimatter from precision spectroscopy of simple atoms

    NASA Astrophysics Data System (ADS)

    Karshenboim, S. G.

    2009-10-01

    Consideration of antigravity for antiparticles is an attractive target for various experimental projects. There are a number of theoretical arguments against it, but it is not quite clear what kind of experimental data and theoretical assumptions are involved. In this paper we present straightforward arguments against the possibility of antigravity, based on a few simple theoretical assumptions and some experimental data. The data are: astrophysical data on the rotation of the Solar System with respect to the center of our galaxy, and precision spectroscopy data on hydrogen and positronium. The theoretical assumptions, for the case of an absent gravitational field, are: equality of the electron and positron masses and equality of the proton and positron charges. We also assume that QED is correct at the level of accuracy at which it is clearly confirmed experimentally.

  19. A simple tagging system for protein encapsulation.

    PubMed

    Seebeck, Florian P; Woycechowsky, Kenneth J; Zhuang, Wei; Rabe, Jürgen P; Hilvert, Donald

    2006-04-12

    Molecular containers that encapsulate specific cargo can be useful for many natural and non-natural processes. We report a simple system, based on charge complementarity, for the encapsulation of appropriately tagged proteins within an engineered, proteinaceous capsid. Four negative charges per monomer were added to the lumazine synthase from Aquifex aeolicus (AaLS). The capsids formed by the engineered AaLS associate with green fluorescent protein bearing a positively charged deca-arginine tag upon coproduction in Escherichia coli. Analytical ultracentrifugation and scanning force microscopy studies indicated that the engineered AaLS retains the ability to form capsids, but that their average size was substantially increased. The success of this strategy demonstrates that both the container and guest components of protein-based encapsulation systems can be convergently designed in a straightforward manner, which may help to extend their versatility.

  20. Virtual reality systems

    NASA Technical Reports Server (NTRS)

    Johnson, David W.

    1992-01-01

    Virtual realities are a type of human-computer interface (HCI) and as such may be understood from a historical perspective. In the earliest era, the computer was a very simple, straightforward machine. Interaction was human manipulation of an inanimate object, little more than the provision of an explicit instruction set to be carried out without deviation. In short, control resided with the user. In the second era of HCI, some level of intelligence and control was imparted to the system to enable a dialogue with the user. Simple context sensitive help systems are early examples, while more sophisticated expert system designs typify this era. Control was shared more equally. In this, the third era of the HCI, the constructed system emulates a particular environment, constructed with rules and knowledge about 'reality'. Control is, in part, outside the realm of the human-computer dialogue. Virtual reality systems are discussed.

  1. The simple procedure for the fluxgate magnetometers calibration

    NASA Astrophysics Data System (ADS)

    Marusenkov, Andriy

    2014-05-01

    The fluxgate magnetometers are widely used in geophysical investigations, including geomagnetic field monitoring at the global network of geomagnetic observatories as well as electromagnetic sounding of the Earth's crust conductivity. For these tasks the magnetometers have to be calibrated to an appropriate level of accuracy. As a particular case, this work considers ways to satisfy the recent requirements on the scaling and orientation errors of 1-second INTERMAGNET magnetometers. The goal of the present study was to choose a simple and reliable calibration method for estimating the scale factors and angular errors of three-axis magnetometers in the field. There are a large number of scalar calibration methods that use a free rotation of the sensor in the calibration field, followed by complicated data processing procedures for the numerical solution of a set of high-order equations. The chosen approach also exploits the Earth's magnetic field as a calibrating signal but, in contrast to other methods, the sensor is oriented in particular positions with respect to the total field vector instead of being rotated freely. This allows the use of very simple and straightforward linear computation formulas and, as a result, more reliable estimates of the calibrated parameters. The scale factors are estimated by sequentially aligning each component of the sensor in two positions: parallel and anti-parallel to the Earth's magnetic field vector. The non-orthogonality angles between each pair of components are estimated after sequentially aligning the components at angles of +/- 45 and +/- 135 degrees of arc with respect to the total field vector. Thanks to this four-position approach, the estimates of the non-orthogonality angles are invariant to the zero offsets and to non-linearity of the transfer functions of the components. Experimental verification of the proposed method by means of the Coil Calibration system reveals that the achieved accuracy (<0.04% for scale factors and 0.03 degrees of arc for angular errors) is sufficient for many applications, and in particular satisfies the INTERMAGNET requirements for 1-second instruments.
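The two-position scale-factor estimate described above reduces to simple linear algebra: a reading parallel to the field is r+ = k·F + b and anti-parallel is r- = -k·F + b, so the offset b cancels in the difference. A sketch with synthetic readings (the field value and the "true" k and b are assumptions for illustration):

```python
# Hedged sketch of the parallel / anti-parallel scale-factor estimate.
# r_plus = k*F + b, r_minus = -k*F + b; readings below are synthetic.
def scale_and_offset(r_plus, r_minus, field):
    k = (r_plus - r_minus) / (2.0 * field)   # scale factor; offset cancels
    b = (r_plus + r_minus) / 2.0             # zero offset; field term cancels
    return k, b

F = 49000.0                                  # assumed local total field, nT
k, b = scale_and_offset(1.02 * F + 50.0, -1.02 * F + 50.0, F)
# recovers k ≈ 1.02 and b ≈ 50 nT
```

The same cancellation is why the four-position (±45°, ±135°) scheme makes the non-orthogonality estimates insensitive to zero offsets.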

  2. Single molecule photobleaching (SMPB) technology for counting of RNA, DNA, protein and other molecules in nanoparticles and biological complexes by TIRF instrumentation.

    PubMed

    Zhang, Hui; Guo, Peixuan

    2014-05-15

    Direct counting of biomolecules within biological complexes or nanomachines is demanding. Single molecule counting using optical microscopy is challenging due to the diffraction limit. The single molecule photobleaching (SMPB) technology for direct counting developed by our team (Shu et al., 2007 [18]; Zhang et al., 2007 [19]) offers a simple and straightforward method to determine the stoichiometry of molecules or subunits within biocomplexes or nanomachines at nanometer scales. Stoichiometry is determined by real-time observation of the number of descending steps resulting from the photobleaching of individual fluorophores. This technology has now been used extensively for single molecule counting of protein, RNA, and other macromolecules in a variety of complexes or nanostructures. Here, we elucidate the SMPB technology, using the counting of RNA molecules within the bacteriophage phi29 DNA-packaging biomotor as an example. The method described here can be applied to single molecule counting of other molecules in other systems. The construction of a concise, simple and economical single molecule total internal reflection fluorescence (TIRF) microscope combining prism-type and objective-type TIRF is described. The imaging system contains a deep-cooled sensitive EMCCD camera with single-fluorophore detection sensitivity, a laser combiner for simultaneous dual-color excitation, and a Dual-View™ imager to split the outgoing signals into different detector channels based on their wavelengths. The methodology of the single molecule photobleaching assay used to elucidate the stoichiometry of RNA on the phi29 DNA packaging motor and the mechanism of protein/RNA interaction is described. Different methods for single-fluorophore labeling of RNA molecules are reviewed. The process of statistical modeling to reveal the true copy number of the biomolecules based on the binomial distribution is also described. Copyright © 2014 Elsevier Inc. All rights reserved.
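The counting idea itself, finding the number of descending steps in an intensity trace, can be sketched with a synthetic trace. This is not the authors' analysis code; the step sizes, noise level, and threshold are illustrative assumptions:

```python
# Hedged sketch of SMPB step counting: detect descending photobleaching
# steps in a noisy stepwise intensity trace. Trace is synthetic: three
# fluorophores bleaching one after another (3 -> 2 -> 1 -> 0 units).
import numpy as np

rng = np.random.default_rng(1)
trace = np.concatenate([np.full(50, 3.0), np.full(50, 2.0),
                        np.full(50, 1.0), np.full(50, 0.0)])
trace += rng.normal(0.0, 0.05, trace.size)  # small detector noise

drops = np.diff(trace) < -0.5               # drops far larger than noise
n_steps = int(drops.sum())                  # number of bleaching steps -> 3
```

Real traces need more care (step sizes vary, fluorophores can blink), which is where the binomial modeling mentioned in the abstract comes in: undercounted copies due to dark fluorophores are corrected statistically.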

  3. Genetic toxicity assessment: employing the best science for human safety evaluation part IV: Recommendation of a working group of the Gesellschaft fuer Umwelt-Mutationsforschung (GUM) for a simple and straightforward approach to genotoxicity testing.

    PubMed

    Pfuhler, Stefan; Albertini, Silvio; Fautz, Rolf; Herbold, Bernd; Madle, Stephan; Utesch, Dietmar; Poth, Albrecht

    2007-06-01

    Based on new scientific developments and experience of the regulation of chemical compounds, a working group of the Gesellschaft fuer Umweltmutationsforschung (GUM), a German-speaking section of the European Environmental Mutagen Society, proposes a simple and straightforward approach to genotoxicity testing. This strategy is divided into basic testing (stage I) and follow-up testing (stage II). Stage I consists of a bacterial gene mutation test plus an in vitro micronucleus test, therewith covering all mutagenicity endpoints. Stage II testing is in general required only if relevant positive results occur in stage I testing and will usually be in vivo. However, an isolated positive bacterial gene mutation test in stage I can be followed up with a gene mutation assay in mammalian cells. If this assay turns out negative and there are no compound-specific reasons for concern, in vivo follow-up testing may not be required. In those cases where in vivo testing is indicated, a single study combining the analysis of micronuclei in bone marrow with the comet assay in appropriately selected tissues is suggested. Negative results for both end points in relevant tissues will generally provide sufficient evidence to conclude that the test compound is nongenotoxic in vivo. Compounds which were recognized as in vivo somatic cell mutagens/genotoxicants in this hazard identification step will need further testing. In the absence of additional data, such compounds will have to be assumed to be potential genotoxic carcinogens and potential germ cell mutagens.

  4. Fractal Hypothesis of the Pelagic Microbial Ecosystem-Can Simple Ecological Principles Lead to Self-Similar Complexity in the Pelagic Microbial Food Web?

    PubMed

    Våge, Selina; Thingstad, T Frede

    2015-01-01

    Trophic interactions are highly complex and modern sequencing techniques reveal enormous biodiversity across multiple scales in marine microbial communities. Within the chemically and physically relatively homogeneous pelagic environment, this calls for an explanation beyond spatial and temporal heterogeneity. Based on observations of simple parasite-host and predator-prey interactions occurring at different trophic levels and levels of phylogenetic resolution, we present a theoretical perspective on this enormous biodiversity, discussing in particular self-similar aspects of pelagic microbial food web organization. Fractal methods have been used to describe a variety of natural phenomena, with studies of habitat structures being an application in ecology. In contrast to mathematical fractals where pattern generating rules are readily known, however, identifying mechanisms that lead to natural fractals is not straightforward. Here we put forward the hypothesis that trophic interactions between pelagic microbes may be organized in a fractal-like manner, with the emergent network resembling the structure of the Sierpinski triangle. We discuss a mechanism that could be underlying the formation of repeated patterns at different trophic levels and discuss how this may help understand characteristic biomass size-spectra that hint at scale-invariant properties of the pelagic environment. If the idea of simple underlying principles leading to a fractal-like organization of the pelagic food web could be formalized, this would extend an ecologist's mindset on how biological complexity could be accounted for. It may furthermore benefit ecosystem modeling by facilitating adequate model resolution across multiple scales.

  5. Fractal Hypothesis of the Pelagic Microbial Ecosystem—Can Simple Ecological Principles Lead to Self-Similar Complexity in the Pelagic Microbial Food Web?

    PubMed Central

    Våge, Selina; Thingstad, T. Frede

    2015-01-01

    Trophic interactions are highly complex and modern sequencing techniques reveal enormous biodiversity across multiple scales in marine microbial communities. Within the chemically and physically relatively homogeneous pelagic environment, this calls for an explanation beyond spatial and temporal heterogeneity. Based on observations of simple parasite-host and predator-prey interactions occurring at different trophic levels and levels of phylogenetic resolution, we present a theoretical perspective on this enormous biodiversity, discussing in particular self-similar aspects of pelagic microbial food web organization. Fractal methods have been used to describe a variety of natural phenomena, with studies of habitat structures being an application in ecology. In contrast to mathematical fractals where pattern generating rules are readily known, however, identifying mechanisms that lead to natural fractals is not straightforward. Here we put forward the hypothesis that trophic interactions between pelagic microbes may be organized in a fractal-like manner, with the emergent network resembling the structure of the Sierpinski triangle. We discuss a mechanism that could be underlying the formation of repeated patterns at different trophic levels and discuss how this may help understand characteristic biomass size-spectra that hint at scale-invariant properties of the pelagic environment. If the idea of simple underlying principles leading to a fractal-like organization of the pelagic food web could be formalized, this would extend an ecologist's mindset on how biological complexity could be accounted for. It may furthermore benefit ecosystem modeling by facilitating adequate model resolution across multiple scales. PMID:26648929

  6. Development and validation of an LC-MS/MS method for the toxicokinetic study of deoxynivalenol and its acetylated derivatives in chicken and pig plasma.

    PubMed

    Broekaert, N; Devreese, M; De Mil, T; Fraeyman, S; De Baere, S; De Saeger, S; De Backer, P; Croubels, S

    2014-11-15

    This study aims to develop an LC-MS/MS method allowing the determination of 3-acetyl-deoxynivalenol, 15-acetyl-deoxynivalenol, deoxynivalenol and its main in vivo metabolite, deepoxy-deoxynivalenol, in broiler chickens and pigs. These species have a high exposure to these toxins, given their mainly cereal-based diet. Several sample cleanup strategies were tested and further optimized by means of fractional factorial designs. A simple and straightforward sample preparation method was developed, consisting of a deproteinisation step with acetonitrile, followed by evaporation of the supernatant and reconstitution in water. The method was single-laboratory validated according to European guidelines and found to be applicable for the intended purpose, with a linear response up to 200 ng ml⁻¹ and limits of quantification of 0.1-2 ng ml⁻¹. As a proof of concept, biological samples from a broiler chicken that received either deoxynivalenol, 3- or 15-acetyl-deoxynivalenol were analyzed. Preliminary results indicate nearly complete hydrolysis of 3-acetyl-deoxynivalenol to deoxynivalenol, and to a lesser extent of 15-acetyl-deoxynivalenol to deoxynivalenol. No deepoxy-deoxynivalenol was detected in any of the plasma samples. The method will be applied to study the full toxicokinetic properties of deoxynivalenol, 3-acetyl-deoxynivalenol and 15-acetyl-deoxynivalenol in broiler chickens and pigs. Copyright © 2014 Elsevier B.V. All rights reserved.

  7. Automatic and rapid identification of glycopeptides by nano-UPLC-LTQ-FT-MS and proteomic search engine.

    PubMed

    Giménez, Estela; Gay, Marina; Vilaseca, Marta

    2017-01-30

    Here we demonstrate the potential of nano-UPLC-LTQ-FT-MS and the Byonic™ proteomic search engine for the separation, detection, and identification of N- and O-glycopeptide glycoforms in standard glycoproteins. The use of a BEH C18 nanoACQUITY column allowed the separation of the glycopeptides present in the glycoprotein digest and a baseline resolution of the glycoforms of the same glycopeptide on the basis of the number of sialic acids. Moreover, we evaluated several acquisition strategies in order to improve the detection and characterization of glycopeptide glycoforms with the maximum identification percentages. The proposed strategy is simple to set up with the technology platforms commonly used in proteomic labs. The method allows a general glycosylation map of a given protein, including glycosites and their corresponding glycosylated structures, to be obtained straightforwardly and rapidly. The MS strategy selected in this work, based on a gas-phase fractionation approach, led to 136 unique peptides from four standard proteins, which represented 78% of the total number of peptides identified. Moreover, the method does not require an extra glycopeptide enrichment step, thus preventing the bias that this step could cause towards certain glycopeptide species. Data are available via ProteomeXchange with identifier PXD003578. We propose a simple and high-throughput glycoproteomics-based methodology that allows the separation of glycopeptide glycoforms on the basis of the number of sialic acids, and their automatic and rapid identification without prior knowledge of protein glycosites or of the type and structure of the glycans. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Superacid-Surfactant Exchange: Enabling Nondestructive Dispersion of Full-Length Carbon Nanotubes in Water.

    PubMed

    Wang, Peng; Kim, Mijin; Peng, Zhiwei; Sun, Chuan-Fu; Mok, Jasper; Lieberman, Anna; Wang, YuHuang

    2017-09-26

    Attaining aqueous solutions of individual, long single-walled carbon nanotubes is a critical first step for harnessing the extraordinary properties of these materials. However, the widely used ultrasonication-ultracentrifugation approach and its variants inadvertently cut the nanotubes into short pieces. The process is also time-consuming and difficult to scale. Here we present an unexpectedly simple solution to this decade-old challenge by directly neutralizing a nanotube-chlorosulfonic acid solution in the presence of sodium deoxycholate. This straightforward superacid-surfactant exchange eliminates the need for both ultrasonication and ultracentrifugation altogether, allowing aqueous solutions of individual nanotubes to be prepared within minutes and preserving the full length of the nanotubes. We found that the average length of the processed nanotubes is more than 350% longer than sonicated controls, with a significant fraction approaching ∼9 μm, a length that is limited by only the raw material. The nondestructive nature is manifested by an extremely low density of defects, bright and homogeneous photoluminescence in the near-infrared, and ultrahigh electrical conductivity in transparent thin films (130 Ω/sq at 83% transmittance), which well exceeds that of indium tin oxide. Furthermore, we demonstrate that our method is fully compatible with established techniques for sorting nanotubes by their electronic structures and can also be readily applied to graphene. This surprisingly simple method thus enables nondestructive aqueous solution processing of high-quality carbon nanomaterials at large-scale and low-cost with the potential for a wide range of fundamental studies and applications, including, for example, transparent conductors, near-infrared imaging, and high-performance electronics.

  9. A straightforward method for measuring the range of apparent density of microplastics.

    PubMed

    Li, Lingyun; Li, Mengmeng; Deng, Hua; Cai, Li; Cai, Huiwen; Yan, Beizhan; Hu, Jun; Shi, Huahong

    2018-10-15

    Density has been regarded as the primary property that affects the distribution and bioavailability of microplastics in the water column. To measure the density of microplastics, we developed a simple and rapid method based on density gradient solutions. In this study, we tested four solvents for making the density gradient solutions, i.e., ethanol (0.8 g/cm³), ultrapure water (1.0 g/cm³), saturated NaI (1.8 g/cm³) and ZnCl₂ (1.8 g/cm³). The density of microplastics was measured by observing whether particles float or sink in the density gradient solutions. We found that density gradient solutions made from ZnCl₂ had a larger measurement uncertainty than those from NaI, most likely due to the higher surface tension of the ZnCl₂ solution. Solutions made from ethanol, ultrapure water, and NaI gave density results consistent with the listed densities of commercial products, indicating that these density gradient solutions are suitable for measuring microplastics in the density range of 0.8-1.8 g/cm³. Copyright © 2018 Elsevier B.V. All rights reserved.
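    The float-or-sink readout described above amounts to bracketing a particle's density between the densest solution it sinks in and the lightest solution it floats in. A minimal sketch of that logic, using the paper's three solution densities but an otherwise hypothetical helper:

```python
# Hedged sketch: bracket a microplastic particle's density from
# float/sink observations in graded solutions (hypothetical helper,
# not the paper's protocol). Solution densities in g/cm^3.
SOLUTION_DENSITIES = [0.8, 1.0, 1.8]  # ethanol, ultrapure water, saturated NaI

def density_bracket(floats):
    """floats[i] is True if the particle floats in SOLUTION_DENSITIES[i]."""
    lower, upper = None, None
    for rho, is_floating in zip(SOLUTION_DENSITIES, floats):
        if is_floating:           # floating: particle density <= solution's
            upper = rho if upper is None else min(upper, rho)
        else:                     # sinking: particle density >= solution's
            lower = rho if lower is None else max(lower, rho)
    return lower, upper
```

    A particle that sinks in ethanol and water but floats in saturated NaI is thus bracketed between 1.0 and 1.8 g/cm³.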

  10. Non-invasive determination of glucose directly in raw fruits using a continuous flow system based on microdialysis sampling and amperometric detection at an integrated enzymatic biosensor.

    PubMed

    Vargas, E; Ruiz, M A; Campuzano, S; Reviejo, A J; Pingarrón, J M

    2016-03-31

    A non-destructive, rapid and simple-to-use sensing method for the direct determination of glucose in non-processed fruits is described. The strategy involves on-line microdialysis sampling coupled to a continuous flow system with amperometric detection at an enzymatic biosensor. Apart from the direct determination of glucose in fruit juices and blended fruits, this work describes for the first time the successful application of an enzymatic biosensor-based electrochemical approach to the non-invasive determination of glucose in raw fruits. The methodology correlates, through a previous calibration set-up, the amperometric signal generated from glucose in non-processed fruits with its content in % (w/w). Comparison of the results obtained with the proposed approach in different fruits against those provided by another method, using the same commercial biosensor as amperometric detector in stirred solutions, showed no significant differences. Moreover, in comparison with other available methodologies, this microdialysis-coupled continuous-flow amperometric biosensor procedure features straightforward sample preparation, low cost, reduced assay time (sampling rate of 7 h⁻¹) and ease of automation. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. SPHRINT - Printing Drug Delivery Microspheres from Polymeric Melts.

    PubMed

    Shpigel, Tal; Uziel, Almog; Lewitus, Dan Y

    2018-06-01

    This paper describes a simple, straightforward, and rapid method for producing microspheres from molten polymers by merely printing them in an inkjet-like manner onto a superoleophobic surface (microsphere printing, hence SPHRINT). Similar to 3D printing, a polymer melt is deposited onto a surface; however, in contrast to 2D or 3D printing, the surface is not wetted (i.e., it exhibits contact angles with liquids above 150° due to its low surface energy), resulting in the formation of discrete spherical microspheres. In this study, microspheres were printed using polycaprolactone and poly(lactic-co-glycolic acid) loaded with a model active pharmaceutical ingredient, ibuprofen (IBU). The formation of microspheres was captured by high-speed imaging and was found to involve several physical phenomena characterized by non-dimensional numbers, including the thinning and breakup of highly viscous, weakly elastic filaments, described here for the first time in pure polymer melts. The resulting IBU-loaded microspheres had higher sphericity, reproducible sizes and shapes, and superior drug encapsulation efficiencies with a distinctly high process yield (>95%) compared to the conventional solvent-based methods used presently. Furthermore, the microspheres showed sustained release profiles. Copyright © 2018 Elsevier B.V. All rights reserved.

  12. Size Distribution of Sea-Salt Emissions as a Function of Relative Humidity

    NASA Astrophysics Data System (ADS)

    Zhang, K. M.; Knipping, E. M.; Wexler, A. S.; Bhave, P. V.; Tonnesen, G. S.

    2004-12-01

    Here we introduce a simple method for correcting sea-salt particle-size distributions as a function of relative humidity. Distinct from previous approaches, our derivation uses the particle size at formation as the reference state rather than the dry particle size. The correction factors, corresponding to the size at formation and the size at 80% RH, are given as polynomial functions of local relative humidity and are straightforward to implement. Without major compromises, the correction factors are thermodynamically accurate and can be applied between 0.45 and 0.99 RH. Since the thermodynamic properties of sea-salt electrolytes depend only weakly on ambient temperature, these factors can be regarded as temperature independent. The correction factor w.r.t. the size at 80% RH is in excellent agreement with those from Fitzgerald's and Gerber's growth equations, while the correction factor w.r.t. the size at formation has the advantage of being independent of dry size and of relative humidity at formation. The resultant sea-salt emissions can be used directly in atmospheric model simulations at urban, regional and global scales without further correction. Application of this method to several common open-ocean and surf-zone sea-salt-particle source functions is described.

  13. Development and application of accurate analytical models for single active electron potentials

    NASA Astrophysics Data System (ADS)

    Miller, Michelle; Jaron-Becker, Agnieszka; Becker, Andreas

    2015-05-01

    The single active electron (SAE) approximation is a theoretical model frequently employed to study scenarios in which inner-shell electrons may productively be treated as frozen spectators to a physical process of interest, and accurate analytical approximations for these potentials are sought as a useful simulation tool. Density functional theory is often used to construct an SAE potential, requiring a further approximation for the exchange-correlation functional. In this study, we employ the Krieger, Li, and Iafrate (KLI) modification to the optimized-effective-potential (OEP) method to reduce the complexity of the problem to the straightforward solution of a system of linear equations, through simple arguments regarding the behavior of the exchange-correlation potential in regions where a single orbital dominates. We employ this method for the solution of atomic and molecular potentials, and use the resultant curves to devise a systematic construction of highly accurate and useful analytical approximations for several systems. Supported by the U.S. Department of Energy (Grant No. DE-FG02-09ER16103), and the U.S. National Science Foundation (Graduate Research Fellowship, Grants No. PHY-1125844 and No. PHY-1068706).

  14. Simple procedure for phase-space measurement and entanglement validation

    NASA Astrophysics Data System (ADS)

    Rundle, R. P.; Mills, P. W.; Tilma, Todd; Samson, J. H.; Everitt, M. J.

    2017-08-01

    It has recently been shown that it is possible to represent the complete quantum state of any system as a phase-space quasiprobability distribution (Wigner function) [Phys. Rev. Lett. 117, 180401 (2016), 10.1103/PhysRevLett.117.180401]. Such functions take the form of expectation values of an observable that has a direct analogy to displaced parity operators. In this work we give a procedure for the measurement of the Wigner function that should be applicable to any quantum system. We have applied our procedure to IBM's Quantum Experience five-qubit quantum processor to demonstrate that we can measure and generate the Wigner functions of two different Bell states as well as the five-qubit Greenberger-Horne-Zeilinger state. Because Wigner functions for spin systems are not unique, we define, compare, and contrast two distinct examples. We show how the use of these Wigner functions leads to an optimal method for quantum state analysis especially in the situation where specific characteristic features are of particular interest (such as for spin Schrödinger cat states). Furthermore we show that this analysis leads to straightforward, and potentially very efficient, entanglement test and state characterization methods.

  15. IOL calculation using paraxial matrix optics.

    PubMed

    Haigis, Wolfgang

    2009-07-01

    Matrix methods have a long tradition in paraxial physiological optics. They are especially suited to describing and handling optical systems in a simple and intuitive manner. While these methods are increasingly applied to calculate the refractive power(s) of toric intraocular lenses (IOLs), they are hardly used in routine IOL power calculations for cataract and refractive surgery, where analytical formulae are commonly utilized. Since these algorithms are also based on paraxial optics, matrix optics can offer rewarding approaches to standard IOL calculation tasks, as will be shown here. Some basic concepts of matrix optics are introduced, the system matrix for the eye is defined, and its application in typical IOL calculation problems is illustrated. Explicit expressions are derived to determine: predicted refraction for a given IOL power; necessary IOL power for a given target refraction; refractive power for a phakic IOL (PIOL); and predicted refraction for a thick lens system. Numerical examples with typical clinical values are given for each of these expressions. It is shown that matrix optics can be applied in a straightforward and intuitive way to most problems of modern routine IOL calculation, in thick or thin lens approximation, for aphakic or phakic eyes.
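    As a concrete illustration of the system-matrix idea, the sketch below composes paraxial refraction and translation matrices for a hypothetical two-thin-lens eye (cornea plus IOL; the powers, spacing, and index are illustrative values, not the paper's clinical examples) and reads the equivalent power off the system matrix:

```python
import numpy as np

# Hedged sketch: paraxial system matrix for a hypothetical two-thin-lens
# eye model (illustrative values, not the paper's clinical examples).
def refraction(power):
    # Refraction matrix for a thin lens/surface of given power (diopters)
    return np.array([[1.0, -power],
                     [0.0, 1.0]])

def translation(d, n=1.0):
    # Translation matrix for distance d (meters) in a medium of index n
    return np.array([[1.0, 0.0],
                     [d / n, 1.0]])

P_cornea, P_iol = 43.0, 20.0     # diopters (illustrative)
d, n = 0.005, 1.336              # 5 mm of aqueous between the two lenses

# Light meets the cornea first, so its matrix is applied first (rightmost)
S = refraction(P_iol) @ translation(d, n) @ refraction(P_cornea)
P_total = -S[0, 1]               # equivalent power of the combined system

# The matrix result reproduces the classical two-lens formula
P_formula = P_cornea + P_iol - (d / n) * P_cornea * P_iol
```

    The sign conventions for the matrix elements vary between textbooks; here the equivalent power sits in the upper-right element of the system matrix.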

  16. Communication and Organization in Software Development: An Empirical Study

    NASA Technical Reports Server (NTRS)

    Seaman, Carolyn B.; Basili, Victor R.

    1996-01-01

    The empirical study described in this paper addresses the issue of communication among members of a software development organization. The independent variables are various attributes of organizational structure. The dependent variable is the effort spent on sharing information which is required by the software development process in use. The research questions upon which the study is based ask whether or not these attributes of organizational structure have an effect on the amount of communication effort expended. In addition, there are a number of blocking variables which have been identified. These are used to account for factors other than organizational structure which may have an effect on communication effort. The study uses both quantitative and qualitative methods for data collection and analysis. These methods include participant observation, structured interviews, and graphical data presentation. The results of this study indicate that several attributes of organizational structure do affect communication effort, but not in a simple, straightforward way. In particular, the distances between communicators in the reporting structure of the organization, as well as in the physical layout of offices, affects how quickly they can share needed information, especially during meetings. These results provide a better understanding of how organizational structure helps or hinders communication in software development.

  17. TEA: the epigenome platform for Arabidopsis methylome study.

    PubMed

    Su, Sheng-Yao; Chen, Shu-Hwa; Lu, I-Hsuan; Chiang, Yih-Shien; Wang, Yu-Bin; Chen, Pao-Yang; Lin, Chung-Yen

    2016-12-22

    Bisulfite sequencing (BS-seq) has become a standard technology for profiling genome-wide DNA methylation at single-base resolution. It allows researchers to conduct genome-wide cytosine methylation analyses of genomic imprinting, transcriptional regulation, and cellular development and differentiation. A single BS-seq experiment is resolved into many features according to the sequence contexts, making methylome data analysis and visualization a complex task. We developed a streamlined platform, TEA, for analyzing and visualizing data from whole-genome BS-seq (WGBS) experiments conducted in the model plant Arabidopsis thaliana. To capture the essence of the genome methylation level and to meet the efficiency requirements of running online, we introduce a straightforward method for measuring genome methylation in each sequence context by gene. The method is scripted in Java to process BS-seq mapping results. Through a simple data-uploading process, the TEA server deploys a web-based platform for deep analysis by linking data to an updated Arabidopsis annotation database and toolkits. TEA is an intuitive and efficient online platform for analyzing the Arabidopsis genomic DNA methylation landscape. It provides several ways to help users exploit WGBS data. TEA is freely accessible for academic users at: http://tea.iis.sinica.edu.tw .

  18. Simulating the Generalized Gibbs Ensemble (GGE): A Hilbert space Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Alba, Vincenzo

    By combining classical Monte Carlo and Bethe ansatz techniques we devise a numerical method to construct the Truncated Generalized Gibbs Ensemble (TGGE) for the spin-1/2 isotropic Heisenberg (XXX) chain. The key idea is to sample the Hilbert space of the model with the appropriate GGE probability measure. The method can be extended to other integrable systems, such as the Lieb-Liniger model. We benchmark the approach focusing on GGE expectation values of several local observables. As finite-size effects decay exponentially with system size, moderately large chains are sufficient to extract thermodynamic quantities. The Monte Carlo results are in agreement with both the Thermodynamic Bethe Ansatz (TBA) and the Quantum Transfer Matrix approach (QTM). Remarkably, it is possible to extract in a simple way the steady-state Bethe-Gaudin-Takahashi (BGT) roots distributions, which encode complete information about the GGE expectation values in the thermodynamic limit. Finally, it is straightforward to simulate extensions of the GGE, in which, besides the local integral of motion (local charges), one includes arbitrary functions of the BGT roots. As an example, we include in the GGE the first non-trivial quasi-local integral of motion.

  19. Ray-based approach to integrated 3D visual communication

    NASA Astrophysics Data System (ADS)

    Naemura, Takeshi; Harashima, Hiroshi

    2001-02-01

    For a high sense of reality in next-generation communications, it is very important to realize three-dimensional (3D) spatial media instead of existing 2D image media. In order to deal comprehensively with a variety of 3D visual data formats, the authors first introduce the concept of "Integrated 3D Visual Communication," which reflects the necessity of developing a neutral representation method independent of input/output systems. The discussion then concentrates on the ray-based approach to this concept, in which any visual sensation is considered to be derived from a set of light rays. This approach is a simple and straightforward solution to the problem of how to represent 3D space, an issue shared by various fields including 3D image communications, computer graphics, and virtual reality. This paper mainly presents several developments in this approach, including efficient methods of representing ray data, a real-time video-based rendering system, an interactive rendering system based on integral photography, a concept of a virtual object surface for compressing the tremendous amount of data, and a light-ray capturing system using a telecentric lens. Experimental results demonstrate the effectiveness of the proposed techniques.

  20. Rapid and selective determination of multi-sulfonamides by high-performance thin layer chromatography coupled to fluorescent densitometry and electrospray ionization mass detection.

    PubMed

    Chen, Yisheng; Schwack, Wolfgang

    2014-02-28

    In the European Union (EU), sulfonamides are among the most widely administered groups of antibiotics in animal husbandry. Therefore, monitoring their residues in edible animal tissues plays an important role in the EU food safety framework. In this work, a simple and efficient method for the rapid screening by high-performance thin-layer chromatography (HPTLC) of twelve priority sulfonamides frequently prescribed as veterinary drugs was established. Sample extracts obtained with acetonitrile were tenfold concentrated and applied to HPTLC without any further cleanup. Following separation and fluram derivatization, sensitive and selective quantitation of the analytes can readily be accomplished with fluorescence densitometry. Limits of detection and quantitation were 15-40 and 35-70 μg/kg, respectively. Additionally, a confirmative detection by HPTLC-electrospray ionization mass spectrometry (HPTLC-ESI/MS) was optimized, offering straightforward identification of target zones. Therefore, the risk of potential false positive findings can efficiently be reduced. The method was validated to meet the enforced Commission Regulation (EU) No. 37/2010, with regard to different matrix complexities (bovine milk, porcine liver and kidney). Copyright © 2014 Elsevier B.V. All rights reserved.

  1. Electrically Controllable Microparticle Synthesis and Digital Microfluidic Manipulation by Electric-Field-Induced Droplet Dispensing into Immiscible Fluids

    PubMed Central

    Um, Taewoong; Hong, Jiwoo; Im, Do Jin; Lee, Sang Joon; Kang, In Seok

    2016-01-01

    The dispensing of tiny droplets is a basic and crucial process in a myriad of applications, such as DNA/protein microarrays, cell culture, chemical synthesis of microparticles, and digital microfluidics. This work systematically demonstrates droplet dispensing into immiscible fluids through the electric charge concentration (ECC) method. Dispensing exhibits three main modes (i.e., attaching, uniform, and bursting modes) as a function of flow rate, applied voltage, and the gap distance between the nozzle and the oil surface. Through a conventional nozzle with a diameter of a few millimeters, charged droplets with volumes ranging from a few μL to a few tens of nL can be uniformly dispensed into the oil chamber without reduction in nozzle size. Based on the features of the proposed method (e.g., formation of droplets with controllable polarity and amount of electric charge in a water and oil system), a simple and straightforward method is developed for microparticle synthesis, including the preparation of colloidosomes and the fabrication of Janus microparticles with anisotropic internal structures. Finally, a combined system consisting of ECC-induced droplet dispensing and electrophoresis of charged droplet (ECD)-driven manipulation systems is constructed. This integrated platform will provide increased utility and flexibility in microfluidic applications because a charged droplet can be delivered to the intended position by programmable electric control. PMID:27534580

  2. The first three rungs of the cosmological distance ladder

    NASA Astrophysics Data System (ADS)

    Krisciunas, Kevin; DeBenedictis, Erika; Steeger, Jeremy; Bischoff-Kim, Agnes; Tabak, Gil; Pasricha, Kanika

    2012-05-01

    It is straightforward to determine the size of the Earth and the distance to the Moon without using a telescope. The methods have been known since the third century BCE. However, few astronomers have done this measurement from data they have taken. We use a gnomon to determine the latitude and longitude of South Bend, Indiana, and College Station, Texas, and determine the value of the radius of the Earth to be Rearth = 6290 km, only 1.4% smaller than the known value. We use the method of Aristarchus and the size of the Earth's shadow during the lunar eclipse of June 15, 2011 to estimate the distance to the Moon to be 62.3 Rearth, 3.3% greater than the known mean value. We use measurements of the angular motion of the Moon against the background stars over the course of two nights, using a simple cross-staff device, to estimate the Moon's distance at perigee and apogee. We use simultaneous observations of asteroid 1996 HW1 obtained with small telescopes in Socorro, New Mexico, and Ojai, California, to obtain a value of the Astronomical Unit of (1.59±0.19)×10⁸ km, about 6% too large. The data and methods presented here can easily become part of an introductory astronomy laboratory class.
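    The gnomon measurement above is essentially Eratosthenes' method: two sites near the same meridian, a known north-south ground distance, and the difference in measured latitudes. A minimal sketch with hypothetical figures (not the paper's data):

```python
import math

# Hedged sketch: Eratosthenes' method with hypothetical figures
# (not the paper's data). Two sites near the same meridian; a gnomon
# gives each site's latitude, and the latitude difference subtends the
# known north-south ground distance on a great circle.
delta_lat_deg = 11.0       # difference in measured latitudes (degrees)
north_south_km = 1208.0    # meridional ground distance between the sites

R_earth = north_south_km / math.radians(delta_lat_deg)  # arc = R * angle
```

    With these illustrative numbers the estimate lands near 6.3 × 10³ km, the same order as the paper's Rearth = 6290 km.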

  3. Direct extraction of genomic DNA from maize with aqueous ionic liquid buffer systems for applications in genetically modified organisms analysis.

    PubMed

    Gonzalez García, Eric; Ressmann, Anna K; Gaertner, Peter; Zirbs, Ronald; Mach, Robert L; Krska, Rudolf; Bica, Katharina; Brunner, Kurt

    2014-12-01

    To date, the extraction of genomic DNA is considered a bottleneck in the process of genetically modified organisms (GMOs) detection. Conventional DNA isolation methods are associated with long extraction times and multiple pipetting and centrifugation steps, which makes the entire procedure not only tedious and complicated but also prone to sample cross-contamination. In recent times, ionic liquids have emerged as innovative solvents for biomass processing, due to their outstanding properties for dissolution of biomass and biopolymers. In this study, a novel, easily applicable, and time-efficient method for the direct extraction of genomic DNA from biomass based on aqueous-ionic liquid solutions was developed. The straightforward protocol relies on extraction of maize in a 10 % solution of ionic liquids in aqueous phosphate buffer for 5 min at room temperature, followed by a denaturation step at 95 °C for 10 min and a simple filtration to remove residual biopolymers. A set of 22 ionic liquids was tested in a buffer system and 1-ethyl-3-methylimidazolium dimethylphosphate, as well as the environmentally benign choline formate, were identified as ideal candidates. With this strategy, the quality of the genomic DNA extracted was significantly improved and the extraction protocol was notably simplified compared with a well-established method.

  4. Deep Correlated Holistic Metric Learning for Sketch-Based 3D Shape Retrieval.

    PubMed

    Dai, Guoxian; Xie, Jin; Fang, Yi

    2018-07-01

    How to effectively retrieve desired 3D models with simple queries is a long-standing problem in the computer vision community. The model-based approach is quite straightforward but nontrivial, since people do not always have the desired 3D query model at hand. Recently, wide-screen electronic devices have become prevalent in our daily lives, which makes sketch-based 3D shape retrieval a promising candidate due to its simplicity and efficiency. The main challenge of the sketch-based approach is the huge modality gap between sketches and 3D shapes. In this paper, we propose a novel deep correlated holistic metric learning (DCHML) method to mitigate the discrepancy between the sketch and 3D shape domains. The proposed DCHML trains two distinct deep neural networks (one for each domain) jointly, learning two deep nonlinear transformations that map features from both domains into a new feature space. The proposed loss, including a discriminative loss and a correlation loss, aims to increase the discrimination of features within each domain as well as the correlation between different domains. In the new feature space, the discriminative loss minimizes the intra-class distance of the deep transformed features and maximizes their inter-class distance to a large margin within each domain, while the correlation loss focuses on mitigating the distribution discrepancy across the two domains. Different from existing deep metric learning methods, which apply a loss only at the output layer, our proposed DCHML is trained with losses at both the hidden layer and the output layer, further improving performance by encouraging features in the hidden layer to have the desired properties as well. Our proposed method is evaluated on three benchmarks, the 3D Shape Retrieval Contest 2013, 2014, and 2016 benchmarks, and the experimental results demonstrate its superiority over state-of-the-art methods.

  5. Tachometer Derived From Brushless Shaft-Angle Resolver

    NASA Technical Reports Server (NTRS)

    Howard, David E.; Smith, Dennis A.

    1995-01-01

    Tachometer circuit operates in conjunction with brushless shaft-angle resolver. By performing sequence of straightforward mathematical operations on resolver signals and utilizing simple trigonometric identity, generates voltage proportional to rate of rotation of shaft. One advantage is use of brushless shaft-angle resolver as main source of rate signal: no brushes to wear out, no brush noise, and brushless resolvers have proven robustness. No switching of signals to generate noise. Another advantage, shaft-angle resolver used as shaft-angle sensor, tachometer input obtained without adding another sensor. Present circuit reduces overall size, weight, and cost of tachometer.

  6. PLANT DERMATITIS: ASIAN PERSPECTIVE

    PubMed Central

    Goon, Anthony Teik Jin; Goh, Chee Leok

    2011-01-01

    Occupational and recreational plant exposure on the skin is fairly common. Plant products and extracts are commonly used and found extensively in the environment. Adverse reactions to plants and their products are also fairly common. However, making the diagnosis of contact dermatitis from plants and plant extracts is not always simple and straightforward. Phytodermatitis refers to inflammation of the skin caused by a plant. The clinical patterns may be allergic phytodermatitis, photophytodermatitis, irritant contact dermatitis, pharmacological injury, and mechanical injury. In this article, we will focus mainly on allergy contact dermatitis from plants or allergic phytodermatitis occurring in Asia. PMID:22345775

  7. Grammar of binding in the languages of the world: Unity versus diversity.

    PubMed

    Reuland, Eric

    2017-11-01

    Cole, Hermon, and Yanti (2015) present a number of far-reaching conclusions about language universals on the basis of their study of the anaphoric systems of the Austronesian languages of Indonesia. The present contribution critically assesses these conclusions. It reports a further set of data, and shows that, contrary to what these authors argue, the systems they discuss can be straightforwardly accounted for by a simple set of universal principles plus properties of the vocabularies of the languages involved. I conclude this article with some remarks on acquisition. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Berkeley's moral philosophy.

    PubMed Central

    Warnock, G

    1990-01-01

    Berkeley held that the moral duty of mankind was to obey God's laws; that--since God was a benevolent Creator--the object of His laws must be to promote the welfare and flourishing of mankind; and that, accordingly, humans could identify their moral duties by asking what system of laws for conduct would in fact tend to promote that object. This position--which is akin to that of 'rule' Utilitarianism--is neither unfamiliar nor manifestly untenable. He was surely mistaken, however, in his further supposition that, if this theory were accepted, the resolution of all (or most) particular moral dilemmas would be simple and straightforward. PMID:2181141

  9. Incorporating uncertainty into medical decision making: an approach to unexpected test results.

    PubMed

    Bianchi, Matt T; Alexander, Brian M; Cash, Sydney S

    2009-01-01

    The utility of diagnostic tests derives from the ability to translate the population concepts of sensitivity and specificity into information that is useful for the individual patient: the predictive value of the result. As the array of available diagnostic testing broadens, there is a temptation to de-emphasize history and physical findings and defer to the objective rigor of technology. However, diagnostic test interpretation is not always straightforward. One significant barrier to routine use of probability-based test interpretation is the uncertainty inherent in pretest probability estimation, the critical first step of Bayesian reasoning. The context in which this uncertainty presents the greatest challenge is when test results oppose clinical judgment, and it is in this situation that decision support would be most helpful. The authors propose a simple graphical approach that incorporates uncertainty in pretest probability and has specific application to the interpretation of unexpected results. This method quantitatively demonstrates how uncertainty in disease probability may be amplified when test results are unexpected (opposing clinical judgment), even for tests with high sensitivity and specificity. The authors provide a simple nomogram for determining whether an unexpected test result suggests that one should "switch diagnostic sides." This graphical framework overcomes the limitation of pretest probability uncertainty in Bayesian analysis and guides decision making when it is most challenging: the interpretation of unexpected test results.
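    The amplification effect described above is easy to reproduce with Bayes' rule: convert the pretest probability to odds, multiply by the test's likelihood ratio, and convert back. The sketch below (illustrative numbers, not the authors' nomogram) shows how a modest pretest range of 0.7-0.9 spreads into a much wider post-test range after an unexpected negative result, even for a test with 95% sensitivity and specificity.

```python
# Hedged sketch: Bayes' rule for post-test probability (illustrative
# numbers, not the authors' nomogram).
def post_test_probability(pretest, sensitivity, specificity, positive=True):
    if positive:
        lr = sensitivity / (1.0 - specificity)   # likelihood ratio, positive test
    else:
        lr = (1.0 - sensitivity) / specificity   # likelihood ratio, negative test
    odds = pretest / (1.0 - pretest) * lr        # pretest odds times LR
    return odds / (1.0 + odds)                   # back to probability

# An unexpected NEGATIVE result against a high pretest probability:
# the pretest range 0.7-0.9 spreads into a much wider post-test range.
spread = [post_test_probability(p, 0.95, 0.95, positive=False)
          for p in (0.7, 0.8, 0.9)]
```

    With 95% sensitivity and specificity, the post-test probabilities come out roughly 0.11, 0.17, and 0.32: a 0.2-wide pretest band widens into a post-test band spanning about a factor of three.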

  10. "Click" analytics for "click" chemistry - A simple method for calibration-free evaluation of online NMR spectra

    NASA Astrophysics Data System (ADS)

    Michalik-Onichimowska, Aleksandra; Kern, Simon; Riedel, Jens; Panne, Ulrich; King, Rudibert; Maiwald, Michael

    2017-04-01

    Driven mostly by the search for chemical syntheses under biocompatible conditions, so-called "click" chemistry has rapidly become a growing field of research. The resulting simple one-pot reactions have so far only scarcely been accompanied by adequate optimization via comparably straightforward and robust analysis techniques with short set-up times. Here, we report on a fast and reliable calibration-free online NMR monitoring approach for technical mixtures. It combines a versatile fluidic system, continuous-flow measurement of 1H spectra with a time interval of 20 s per spectrum, and a robust, fully automated algorithm to interpret the obtained data. As a proof of concept, the thiol-ene coupling between N-Boc cysteine methyl ester and allyl alcohol was conducted in a variety of non-deuterated solvents while its time-resolved behaviour was characterized with step-tracer experiments. Overlapping signals in online spectra during thiol-ene coupling could be deconvoluted with a spectral model using indirect hard modeling and were subsequently converted to either molar ratios (using a calibration-free approach) or absolute concentrations (using 1-point calibration). For various solvents the kinetic constant k of the pseudo-first-order reaction was estimated to be 3.9 h⁻¹ at 25 °C. The obtained results were compared with direct integration of non-overlapping signals and showed good agreement with the implemented mass balance.
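    For context, a pseudo-first-order rate constant like the reported k ≈ 3.9 h⁻¹ can be recovered from time-resolved concentration data with a log-linear fit; the sketch below uses synthetic noiseless data, not the paper's NMR spectra.

```python
import numpy as np

# Hedged sketch: recovering a pseudo-first-order rate constant from
# time-resolved concentration data (synthetic, not the paper's spectra).
k_true = 3.9                        # h^-1, the order of magnitude reported
t = np.linspace(0.0, 1.0, 21)       # sampling times in hours
c = np.exp(-k_true * t)             # normalized reactant concentration C/C0

# Pseudo-first order means ln C = -k t, so the slope of a
# log-linear fit is -k.
k_fit = -np.polyfit(t, np.log(c), 1)[0]
```

    With real spectra the concentrations would first come from the deconvolution step; the fit itself is unchanged.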

  11. Volume determination of irregularly-shaped quasi-spherical nanoparticles.

    PubMed

    Attota, Ravi Kiran; Liu, Eileen Cherry

    2016-11-01

    Nanoparticles (NPs) are widely used in diverse application areas, such as medicine, engineering, and cosmetics. The size (or volume) of NPs is one of the most important parameters for their successful application. It is relatively straightforward to determine the volume of regular NPs such as spheres and cubes from a one-dimensional or two-dimensional measurement. However, due to the three-dimensional nature of NPs, it is challenging to determine the proper physical size of many types of regularly and irregularly-shaped quasi-spherical NPs at high throughput using a single tool. Here, we present a relatively simple method that gives a better volume estimate of NPs by combining measurements of their top-down projection areas and peak heights from two tools. The proposed method is significantly faster and more economical than the electron tomography method. We demonstrate the improved accuracy of the combined method over scanning electron microscopy (SEM) or atomic force microscopy (AFM) alone by using modeling, simulations, and measurements. This study also exposes the existence of inherent measurement biases for both SEM and AFM, which usually produce larger measured diameters with SEM than with AFM. However, in some cases SEM-measured diameters appear to have less error than AFM-measured diameters, especially for widely used irregularly-shaped NPs such as gold and silver. The method provides a much-needed high-throughput volumetric measurement approach useful for many applications. Graphical Abstract The combined method for volume determination of irregularly-shaped quasi-spherical nanoparticles.
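    One simple way to combine the two measurements is a spheroid model: equatorial diameter from the projected area, polar axis from the peak height. The sketch below is an illustrative estimator under that assumption, not necessarily the authors' exact formula; for a true sphere it reduces to the usual (π/6)d³.

```python
import math

# Hedged sketch: volume of a quasi-spherical particle from its top-down
# projected area (e.g., SEM) and its peak height (e.g., AFM), assuming a
# spheroid whose equatorial cross-section matches the projection.
# Illustrative model only, not necessarily the paper's exact estimator.
def spheroid_volume(projected_area, height):
    d_eq = 2.0 * math.sqrt(projected_area / math.pi)  # equatorial diameter
    return (math.pi / 6.0) * d_eq**2 * height         # V = (pi/6) d_eq^2 h

# Sanity check: a true sphere of diameter d projects to area pi (d/2)^2
d = 50.0                                   # nm (illustrative)
area = math.pi * (d / 2.0) ** 2            # projected area of that sphere
```

    A flattened particle (height below the equatorial diameter) gets a proportionally smaller volume than a sphere of the same projected area would.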

  12. Determining flexor-tendon repair techniques via soft computing

    NASA Technical Reports Server (NTRS)

    Johnson, M.; Firoozbakhsh, K.; Moniem, M.; Jamshidi, M.

    2001-01-01

    An SC-based multi-objective decision-making method for determining the optimal flexor-tendon repair technique from experimental and clinical survey data, and with variable circumstances, was presented. Results were compared with those from the Taguchi method. The Taguchi method requires ad hoc decisions when the outcomes for individual objectives contradict a particular preference or circumstance, whereas the SC-based multi-objective technique provides a rigorous, straightforward computational process in which changing preferences and the importance of differing objectives are easily accommodated. Adding more objectives is likewise straightforward. The use of fuzzy-set representations of information categories provides insight into their performance throughout the range of their universe of discourse. The ability of the technique to provide a "best" medical decision given a particular physician, hospital, patient, situation, and other criteria was also demonstrated.

  13. Determining flexor-tendon repair techniques via soft computing.

    PubMed

    Johnson, M; Firoozbakhsh, K; Moniem, M; Jamshidi, M

    2001-01-01

    An SC-based multi-objective decision-making method for determining the optimal flexor-tendon repair technique from experimental and clinical survey data, and with variable circumstances, was presented. Results were compared with those from the Taguchi method. The Taguchi method requires ad hoc decisions when the outcomes for individual objectives contradict a particular preference or circumstance, whereas the SC-based multi-objective technique provides a rigorous, straightforward computational process in which changing preferences and the importance of differing objectives are easily accommodated. Adding more objectives is likewise straightforward. The use of fuzzy-set representations of information categories provides insight into their performance throughout the range of their universe of discourse. The ability of the technique to provide a "best" medical decision given a particular physician, hospital, patient, situation, and other criteria was also demonstrated.

  14. A Simple and Efficient Methodology To Improve Geometric Accuracy in Gamma Knife Radiation Surgery: Implementation in Multiple Brain Metastases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karaiskos, Pantelis, E-mail: pkaraisk@med.uoa.gr; Gamma Knife Department, Hygeia Hospital, Athens; Moutsatsos, Argyris

    Purpose: To propose, verify, and implement a simple and efficient methodology for the improvement of total geometric accuracy in multiple brain metastases gamma knife (GK) radiation surgery. Methods and Materials: The proposed methodology exploits the directional dependence of magnetic resonance imaging (MRI)-related spatial distortions stemming from background field inhomogeneities, also known as sequence-dependent distortions, with respect to the read-gradient polarity during MRI acquisition. First, an extra MRI pulse sequence is acquired with the same imaging parameters as those used for routine patient imaging, aside from a reversal in the read-gradient polarity. Then, "average" image data are compounded from data acquired from the 2 MRI sequences and are used for treatment planning purposes. The method was applied and verified in a polymer gel phantom irradiated with multiple shots in an extended region of the GK stereotactic space. Its clinical impact in dose delivery accuracy was assessed in 15 patients with a total of 96 relatively small (<2 cm) metastases treated with GK radiation surgery. Results: Phantom study results showed that use of average MR images eliminates the effect of sequence-dependent distortions, leading to a total spatial uncertainty of less than 0.3 mm, attributed mainly to gradient nonlinearities. In brain metastases patients, non-eliminated sequence-dependent distortions lead to target localization uncertainties of up to 1.3 mm (mean: 0.51 ± 0.37 mm) with respect to the corresponding target locations in the "average" MRI series. Due to these uncertainties, a considerable underdosage (5%-32% of the prescription dose) was found in 33% of the studied targets. Conclusions: The proposed methodology is simple and straightforward in its implementation. Regarding multiple brain metastases applications, the suggested approach may substantially improve total GK dose delivery accuracy in smaller, outlying targets.

  15. Segmentation of remotely sensed data using parallel region growing

    NASA Technical Reports Server (NTRS)

    Tilton, J. C.; Cox, S. C.

    1983-01-01

    The improved spatial resolution of the new earth resources satellites will increase the need for effective utilization of spatial information in machine processing of remotely sensed data. One promising technique is scene segmentation by region growing. Region growing can use spatial information in two ways: only spatially adjacent regions merge together, and merging criteria can be based on region-wide spatial features. A simple region growing approach is described in which the similarity criterion is based on region mean and variance (a simple spatial feature). An effective way to implement region growing for remote sensing is as an iterative parallel process on a large parallel processor. A straightforward parallel pixel-based implementation of the algorithm is explored and its efficiency is compared with sequential pixel-based, sequential region-based, and parallel region-based implementations. Experimental results from an aircraft scanner data set are presented, as is a discussion of proposed improvements to the segmentation algorithm.
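    The mean-based merging criterion described above can be illustrated with a minimal sequential, pixel-based sketch: starting from a seed, absorb 4-connected pixels whose value lies within a tolerance of the running region mean. This is a toy version for illustration, not the parallel implementation the paper evaluates.

```python
import numpy as np
from collections import deque

def grow_region(img, seed, tol):
    """Grow a region from `seed`, absorbing 4-connected pixels whose
    value is within `tol` of the current region mean."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, n = float(img[seed]), 1
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            p = (y + dy, x + dx)
            if 0 <= p[0] < h and 0 <= p[1] < w and not mask[p]:
                if abs(float(img[p]) - total / n) <= tol:
                    mask[p] = True
                    total += float(img[p])
                    n += 1
                    queue.append(p)
    return mask

# Toy image: a low-valued region in the upper-left corner, 9s elsewhere
img = np.array([[1, 1, 9],
                [1, 2, 9],
                [9, 9, 9]], dtype=float)
region = grow_region(img, (0, 0), tol=1.5)  # covers the four low pixels
```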

  16. Reflection of a polarized light cone

    NASA Astrophysics Data System (ADS)

    Brody, Jed; Weiss, Daniel; Berland, Keith

    2013-01-01

    We introduce a visually appealing experimental demonstration of Fresnel reflection. In this simple optical experiment, a polarized light beam travels through a high numerical-aperture microscope objective, reflects off a glass slide, and travels back through the same objective lens. The return beam is sampled with a polarizing beam splitter and produces a surprising geometric pattern on an observation screen. Understanding the origin of this pattern requires careful attention to geometry and an understanding of the Fresnel coefficients for S and P polarized light. We demonstrate that in addition to a relatively simple experimental implementation, the shape of the observed pattern can be computed both analytically and by using optical modeling software. The experience of working through complex mathematical computations and demonstrating their agreement with a surprising experimental observation makes this a highly educational experiment for undergraduate optics or advanced-lab courses. It also provides a straightforward yet non-trivial system for teaching students how to use optical modeling software.

  17. Contrast discrimination, non-uniform patterns and change blindness.

    PubMed Central

    Scott-Brown, K C; Orbach, H S

    1998-01-01

    Change blindness--our inability to detect large changes in natural scenes when saccades, blinks and other transients interrupt visual input--seems to contradict psychophysical evidence for our exquisite sensitivity to contrast changes. Can the type of effects described as 'change blindness' be observed with simple, multi-element stimuli, amenable to psychophysical analysis? Such stimuli, composed of five mixed contrast elements, elicited a striking increase in contrast increment thresholds compared to those for an isolated element. Cue presentation prior to the stimulus substantially reduced thresholds, as for change blindness with natural scenes. On one hand, explanations for change blindness based on abstract and sketchy representations in short-term visual memory seem inappropriate for this low-level image property of contrast where there is ample evidence for exquisite performance on memory tasks. On the other hand, the highly increased thresholds for mixed contrast elements, and the decreased thresholds when a cue is present, argue against any simple early attentional or sensory explanation for change blindness. Thus, psychophysical results for very simple patterns cannot straightforwardly predict results even for the slightly more complicated patterns studied here. PMID:9872004

  18. Simple robust control laws for robot manipulators. Part 2: Adaptive case

    NASA Technical Reports Server (NTRS)

    Bayard, D. S.; Wen, J. T.

    1987-01-01

    A new class of asymptotically stable adaptive control laws is introduced for application to the robotic manipulator. Unlike most applications of adaptive control theory to robotic manipulators, this analysis addresses the nonlinear dynamics directly without approximation, linearization, or ad hoc assumptions, and utilizes a parameterization based on physical (time-invariant) quantities. This approach is made possible by using energy-like Lyapunov functions which retain the nonlinear character and structure of the dynamics, rather than simple quadratic forms which are ubiquitous to the adaptive control literature, and which have bound the theory tightly to linear systems with unknown parameters. It is a unique feature of these results that the adaptive forms arise by straightforward certainty equivalence adaptation of their nonadaptive counterparts found in the companion to this paper (i.e., by replacing unknown quantities by their estimates) and that this simple approach leads to asymptotically stable closed-loop adaptive systems. Furthermore, it is emphasized that this approach does not require convergence of the parameter estimates (i.e., via persistent excitation), invertibility of the mass matrix estimate, or measurement of the joint accelerations.

  19. Shot-Noise Limited Single-Molecule FRET Histograms: Comparison between Theory and Experiments†

    PubMed Central

    Nir, Eyal; Michalet, Xavier; Hamadani, Kambiz M.; Laurence, Ted A.; Neuhauser, Daniel; Kovchegov, Yevgeniy; Weiss, Shimon

    2011-01-01

    We describe a simple approach and present a straightforward numerical algorithm to compute the best fit shot-noise limited proximity ratio histogram (PRH) in single-molecule fluorescence resonant energy transfer diffusion experiments. The key ingredient is the use of the experimental burst size distribution, as obtained after burst search through the photon data streams. We show how the use of an alternated laser excitation scheme and a correspondingly optimized burst search algorithm eliminates several potential artifacts affecting the calculation of the best fit shot-noise limited PRH. This algorithm is tested extensively on simulations and simple experimental systems. We find that dsDNA data exhibit a wider PRH than expected from shot noise only and hypothetically account for it by assuming a small Gaussian distribution of distances with an average standard deviation of 1.6 Å. Finally, we briefly mention the results of a future publication and illustrate them with a simple two-state model system (DNA hairpin), for which the kinetic transition rates between the open and closed conformations are extracted. PMID:17078646

  20. Nodal Analysis Optimization Based on the Use of Virtual Current Sources: A Powerful New Pedagogical Method

    ERIC Educational Resources Information Center

    Chatzarakis, G. E.

    2009-01-01

    This paper presents a new pedagogical method for nodal analysis optimization based on the use of virtual current sources, applicable to any linear electric circuit (LEC), regardless of its complexity. The proposed method leads to straightforward solutions, mostly arrived at by inspection. Furthermore, the method is easily adapted to computer…

  1. A shock-capturing SPH scheme based on adaptive kernel estimation

    NASA Astrophysics Data System (ADS)

    Sigalotti, Leonardo Di G.; López, Hender; Donoso, Arnaldo; Sira, Eloy; Klapp, Jaime

    2006-02-01

    Here we report a method that converts standard smoothed particle hydrodynamics (SPH) into a working shock-capturing scheme without relying on solutions to the Riemann problem. Unlike existing adaptive SPH simulations, the present scheme is based on an adaptive kernel estimation of the density, which combines intrinsic features of both the kernel and nearest neighbor approaches in a way that the amount of smoothing required in low-density regions is effectively controlled. Symmetrized SPH representations of the gas dynamic equations along with the usual kernel summation for the density are used to guarantee variational consistency. Implementation of the adaptive kernel estimation involves a very simple procedure and allows for a unique scheme that handles strong shocks and rarefactions the same way. Since it represents a general improvement of the integral interpolation on scattered data, it is also applicable to other fluid-dynamic models. When the method is applied to supersonic compressible flows with sharp discontinuities, as in the classical one-dimensional shock-tube problem and its variants, the accuracy of the results is comparable, and in most cases superior, to that obtained from high quality Godunov-type methods and SPH formulations based on Riemann solutions. The extension of the method to two- and three-space dimensions is straightforward. In particular, for the two-dimensional cylindrical Noh's shock implosion and Sedov point explosion problems the present scheme produces much better results than those obtained with conventional SPH codes.

  2. An immersed boundary method for simulating vesicle dynamics in three dimensions

    NASA Astrophysics Data System (ADS)

    Seol, Yunchang; Hu, Wei-Fan; Kim, Yongsam; Lai, Ming-Chih

    2016-10-01

    We extend our previous immersed boundary (IB) method for 3D axisymmetric inextensible vesicle in Navier-Stokes flows (Hu et al., 2014 [17]) to general three dimensions. Despite a similar spirit in numerical algorithms to the axisymmetric case, the fully 3D numerical implementation is much more complicated and is far from straightforward. A vesicle membrane surface is known to be incompressible and exhibits bending resistance. As in 3D axisymmetric case, instead of keeping the vesicle locally incompressible, we adopt a modified elastic tension energy to make the vesicle surface patch nearly incompressible so that solving the unknown tension (Lagrange multiplier for the incompressible constraint) can be avoided. Nevertheless, the new elastic force derived from the modified tension energy has exactly the same mathematical form as the original one except the different definitions of tension. The vesicle surface is discretized on a triangular mesh where the elastic tension and bending force are calculated on each vertex (Lagrangian marker in the IB method) of the triangulation. A series of numerical tests on the present scheme are conducted to illustrate the robustness and applicability of the method. We perform the convergence study for the immersed boundary forces and the fluid velocity field. We then study the vesicle dynamics in various flows such as quiescent, simple shear, and gravitational flows. Our numerical results show good agreements with those obtained in previous theoretical, experimental and numerical studies.

  3. Testing framework for embedded languages

    NASA Astrophysics Data System (ADS)

    Leskó, Dániel; Tejfel, Máté

    2012-09-01

    Embedding a new programming language into an existing one is a widely used technique, because it speeds up the development process and provides part of the language infrastructure for free (e.g. lexical and syntactical analyzers). In this paper we present a further advantage of this development approach: adding testing support for these new languages. Tool support for testing is a crucial point for a newly designed programming language. It could be built the hard way, by creating a testing tool from scratch, or we could try to reuse existing testing tools by extending them with an interface to our new language. The second approach requires less work, and it also fits the embedded approach very well. The problem is that creating such interfaces is not straightforward at all, because existing testing tools were mostly not designed to be extendable or to deal with new languages. This paper presents an extendable and modular model of a testing framework, in which the most basic design decision was to keep the previously mentioned interface creation simple and straightforward. Other important aspects of our model are test data generation, the oracle problem, and the customizability of the whole testing phase.

  4. A procedure for landslide susceptibility zonation by the conditional analysis method

    NASA Astrophysics Data System (ADS)

    Clerici, Aldo; Perego, Susanna; Tellini, Claudio; Vescovi, Paolo

    2002-12-01

    Numerous methods have been proposed for landslide probability zonation of the landscape by means of a Geographic Information System (GIS). Among the multivariate methods, i.e. those methods which simultaneously take into account all the factors contributing to instability, the Conditional Analysis method applied to a subdivision of the territory into Unique Condition Units is particularly straightforward from a conceptual point of view and particularly suited to the use of a GIS. In fact, working on the principle that future landslides are more likely to occur under those conditions which led to past instability, landslide susceptibility is defined by computing the landslide density in correspondence with different combinations of instability factors. The conceptual simplicity of this method, however, does not necessarily imply that it is simple to implement, especially as it requires rather complex operations and a high number of GIS commands. Moreover, there is the possibility that, in order to achieve satisfactory results, the procedure has to be repeated a few times changing the factors or modifying the class subdivision. To solve this problem, we created a shell program which, by combining the shell commands, the GIS Geographical Research Analysis Support System (GRASS) commands and the gawk language commands, carries out the whole procedure automatically. This makes the construction of a Landslide Susceptibility Map easy and fast for large areas too, and even when a high spatial resolution is adopted, as shown by application of the procedure to the Parma River basin, in the Italian Northern Apennines.
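    The core computation of the Conditional Analysis method, landslide density per Unique Condition Unit, can be sketched on toy rasters: pixels sharing the same combination of factor classes form a UCU, and its susceptibility is the fraction of those pixels affected by past landslides. Array names and values below are illustrative; the actual workflow runs inside GRASS on real factor maps.

```python
import numpy as np

def susceptibility(factors, landslides):
    """Compute landslide density per Unique Condition Unit.
    `factors` is a list of equally shaped class rasters; `landslides`
    is a 0/1 raster of mapped past landslides. Returns a dict mapping
    each factor-class combination to its landslide density."""
    keys = list(zip(*[f.ravel().tolist() for f in factors]))
    counts = {}
    for key, slid in zip(keys, landslides.ravel().tolist()):
        total, hits = counts.get(key, (0, 0))
        counts[key] = (total + 1, hits + int(slid))
    return {k: hits / total for k, (total, hits) in counts.items()}

# Two toy factor rasters (e.g. slope class and lithology class)
slope = np.array([[0, 0], [1, 1]])
litho = np.array([[0, 1], [0, 1]])
slides = np.array([[1, 0], [1, 0]])  # landslides on litho class 0 only
density = susceptibility([slope, litho], slides)
```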

  5. Efficient and precise calculation of the b-matrix elements in diffusion-weighted imaging pulse sequences.

    PubMed

    Zubkov, Mikhail; Stait-Gardner, Timothy; Price, William S

    2014-06-01

    Precise NMR diffusion measurements require detailed knowledge of the cumulative dephasing effect caused by the numerous gradient pulses present in most NMR pulse sequences. This effect, which ultimately manifests itself as the diffusion-related NMR signal attenuation, is usually described by the b-value or the b-matrix in the case of multidirectional diffusion weighting, the latter being common in diffusion-weighted NMR imaging. Neglecting some of the gradient pulses introduces an error in the calculated diffusion coefficient reaching in some cases 100% of the expected value. Therefore, ensuring the b-matrix calculation includes all the known gradient pulses leads to significant error reduction. Calculation of the b-matrix for simple gradient waveforms is rather straightforward, yet it grows cumbersome when complexly shaped and/or numerous gradient pulses are introduced. Making three broad assumptions about the gradient pulse arrangement in a sequence results in an efficient framework for calculation of b-matrices, as well as providing some insight into optimal gradient pulse placement. The framework allows accounting for the diffusion-sensitising effect of complexly shaped gradient waveforms with modest computational time and power. This is achieved by using the b-matrix elements of the simple unmodified pulse sequence and minimising the integration of the complexly shaped gradient waveform in the modified sequence. Such re-evaluation of the b-matrix elements retains all the analytical relevance of the straightforward approach, yet at least halves the amount of symbolic integration required. The application of the framework is demonstrated with the evaluation of the expression describing the diffusion-sensitizing effect caused by different bipolar gradient pulse modules. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. SIZE DISTRIBUTION OF SEA-SALT EMISSIONS AS A FUNCTION OF RELATIVE HUMIDITY

    EPA Science Inventory

    This note presents a straightforward method to correct sea-salt-emission particle-size distributions according to local relative humidity. The proposed method covers a wide range of relative humidity (0.45 to 0.99) and its derivation incorporates recent laboratory results on sea-...

  7. Least squares estimation of avian molt rates

    USGS Publications Warehouse

    Johnson, D.H.

    1989-01-01

    A straightforward least squares method of estimating the rate at which birds molt feathers is presented, suitable for birds captured more than once during the period of molt. The date of molt onset can also be estimated. The method is applied to male and female mourning doves.
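    The basic idea, fitting molt score against capture date by least squares so that the slope gives the molt rate and the x-intercept the date of molt onset, can be sketched with hypothetical data. This illustrates the general regression idea only, not Johnson's exact estimator for birds captured repeatedly.

```python
import numpy as np

def molt_rate(days, scores):
    """Least-squares fit of molt score (fraction completed) against
    day of capture. Returns (rate per day, estimated onset day),
    where onset is the x-intercept of the fitted line."""
    slope, intercept = np.polyfit(days, scores, 1)
    onset = -intercept / slope
    return slope, onset

# Hypothetical captures: molt progresses 2.5% per day starting on day 10
days = np.array([10.0, 20.0, 30.0, 40.0])
scores = np.array([0.00, 0.25, 0.50, 0.75])
rate, onset = molt_rate(days, scores)
```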

  8. A Method of Assembling Compact Coherent Fiber-Optic Bundles

    NASA Technical Reports Server (NTRS)

    Martin, Stefan; Liu, Duncan; Levine, Bruce Martin; Shao, Michael; Wallace, James

    2007-01-01

    A method of assembling coherent fiber-optic bundles in which all the fibers are packed together as closely as possible is undergoing development. The method is based, straightforwardly, on the established concept of hexagonal close packing; hence, the development efforts are focused on fixtures and techniques for practical implementation of hexagonal close packing of parallel optical fibers.

  9. Developing a Method to Mask Trees in Commercial Multispectral Imagery

    NASA Astrophysics Data System (ADS)

    Becker, S. J.; Daughtry, C. S. T.; Jain, D.; Karlekar, S. S.

    2015-12-01

    The US Army has an increasing focus on using automated remote sensing techniques with commercial multispectral imagery (MSI) to map urban and peri-urban agricultural and vegetative features; however, similar spectral profiles between trees (i.e., forest canopy) and other vegetation result in confusion between these cover classes. Established vegetation indices, like the Normalized Difference Vegetation Index (NDVI), are typically not effective in reliably differentiating between trees and other vegetation. Previous research in tree mapping has included integration of hyperspectral imagery (HSI) and LiDAR for tree detection and species identification, as well as the use of MSI to distinguish tree crowns from non-vegetated features. This project developed a straightforward method to model and also mask out trees from eight-band WorldView-2 (1.85 meter x 1.85 meter resolution at nadir) satellite imagery at the Beltsville Agricultural Research Center in Beltsville, MD spanning 2012 - 2015. The study site included tree cover, a range of agricultural and vegetative cover types, and urban features. The modeling method exploits the product of the red and red edge bands and defines accurate thresholds between trees and other land covers. Results show this method outperforms established vegetation indices including the NDVI, Soil Adjusted Vegetation Index, Normalized Difference Water Index, Simple Ratio, and Normalized Difference Red Edge Index in correctly masking trees while preserving the other information in the imagery. This method is useful when HSI and LiDAR collection are not possible or when using archived MSI.
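    The band-product thresholding the abstract describes might look like the following sketch. The comparison direction and the threshold value are assumptions made for illustration; in practice the threshold is scene-dependent and must be calibrated, as the study does against its other cover classes.

```python
import numpy as np

def tree_mask(red, red_edge, threshold):
    """Mask candidate tree pixels by thresholding the product of the
    red and red-edge bands. The direction of the comparison and the
    threshold below are illustrative assumptions, not the study's
    calibrated values."""
    product = red.astype(float) * red_edge.astype(float)
    return product < threshold

# Hypothetical reflectances: first column vegetation-like (low red,
# strong red-edge), second column soil-like (higher red)
red = np.array([[0.05, 0.30], [0.04, 0.25]])
red_edge = np.array([[0.40, 0.35], [0.35, 0.30]])
mask = tree_mask(red, red_edge, threshold=0.05)
```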

  10. Does daily nurse staffing match ward workload variability? Three hospitals' experiences.

    PubMed

    Gabbay, Uri; Bukchin, Michael

    2009-01-01

    Nurse shortage and rising healthcare resource burdens mean that appropriate workforce use is imperative. This paper aims to evaluate whether daily nurse staffing meets ward workload needs. Nurse attendance and daily nurses' workload capacity in three hospitals were evaluated. Statistical process control was used to evaluate intra-ward nurse workload capacity and day-to-day variations. Statistical process control is a statistics-based method for process monitoring that uses charts with a predefined target measure and control limits. Standardization was performed for inter-ward analysis by converting ward-specific crude measures to ward-specific relative measures (observed/expected). Two charts, acceptable and tolerable daily nurse workload intensity, were defined. Appropriate staffing indicators were defined as those exceeding predefined rates within acceptable and tolerable limits (50 percent and 80 percent, respectively). A total of 42 percent of the overall days fell within acceptable control limits and 71 percent within tolerable control limits. Appropriate staffing indicators were met in only 33 percent of wards regarding acceptable nurse workload intensity and in only 45 percent of wards regarding tolerable workloads. The study did not differentiate crude nurse attendance, and it did not take patient severity into account since crude bed occupancy was used. Double statistical process control charts and certain staffing indicators were used, which is open to debate. Wards that met appropriate staffing indicators prove the method's feasibility; wards that did not prove the importance of, and the need for, process evaluation and monitoring. The methods presented for monitoring daily staffing appropriateness are simple to implement, either for intra-ward day-to-day variation using nurse workload capacity statistical process control charts, or for inter-ward evaluation using a standardized measure of nurse workload intensity. The real challenge will be to develop planning systems and implement corrective interventions such as dynamic and flexible daily staffing, which will face difficulties and barriers. The paper fulfils the need for workforce utilization evaluation. A simple method using available data for evaluating daily staffing appropriateness, easy to implement and operate, is presented. The statistical process control method enables intra-ward evaluation, while standardization by converting crude into relative measures enables inter-ward analysis. The staffing indicator definitions enable performance evaluation. This original study uses statistical process control to develop simple standardization methods and applies straightforward statistical tools. The method is not limited to crude measures; it can use weighted workload measures such as nursing acuity or weighted nurse level (i.e. grade/band).
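    The observed/expected standardization with control limits can be sketched as follows. The control limit and target rate below are illustrative placeholders, not the paper's calibrated values (the paper uses separate acceptable and tolerable charts).

```python
import numpy as np

def staffing_ok(observed, expected, limit=0.2, target_rate=0.5):
    """SPC-style staffing check: standardize each day's nurse workload
    as observed/expected, count days within +/- `limit` of the target
    ratio 1.0, and call staffing appropriate if that fraction meets
    `target_rate`. Returns (appropriate?, fraction of in-control days).
    Illustrative sketch only."""
    ratio = np.asarray(observed, dtype=float) / np.asarray(expected, dtype=float)
    in_control = np.abs(ratio - 1.0) <= limit
    frac = in_control.mean()
    return frac >= target_rate, frac

# Hypothetical ward: 4 days of observed vs. expected workload capacity
ok, frac = staffing_ok(observed=[9, 10, 13, 11], expected=[10, 10, 10, 10])
# 3 of 4 days fall within the +/-20% limits
```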

  11. Hybrid Analysis of Engine Core Noise

    NASA Astrophysics Data System (ADS)

    O'Brien, Jeffrey; Kim, Jeonglae; Ihme, Matthias

    2015-11-01

    Core noise, or the noise generated within an aircraft engine, is becoming an increasing concern for the aviation industry as other noise sources are progressively reduced. The prediction of core noise generation and propagation is especially challenging for computationalists since it involves extensive multiphysics including chemical reaction and moving blades in addition to the aerothermochemical effects of heated jets. In this work, a representative engine flow path is constructed using experimentally verified geometries to simulate the physics of core noise. A combustor, single-stage turbine, nozzle and jet are modeled in separate calculations using appropriate high fidelity techniques including LES, actuator disk theory and Ffowcs-Williams Hawkings surfaces. A one way coupling procedure is developed for passing fluctuations downstream through the flowpath. This method effectively isolates the core noise from other acoustic sources, enables straightforward study of the interaction between core noise and jet exhaust, and allows for simple distinction between direct and indirect noise. The impact of core noise on the farfield jet acoustics is studied extensively and the relative efficiency of different disturbance types and shapes is examined in detail.

  12. Engineered ascorbate peroxidase as a genetically encoded reporter for electron microscopy.

    PubMed

    Martell, Jeffrey D; Deerinck, Thomas J; Sancak, Yasemin; Poulos, Thomas L; Mootha, Vamsi K; Sosinsky, Gina E; Ellisman, Mark H; Ting, Alice Y

    2012-11-01

    Electron microscopy (EM) is the standard method for imaging cellular structures with nanometer resolution, but existing genetic tags are inactive in most cellular compartments or require light and can be difficult to use. Here we report the development of 'APEX', a genetically encodable EM tag that is active in all cellular compartments and does not require light. APEX is a monomeric 28-kDa peroxidase that withstands strong EM fixation to give excellent ultrastructural preservation. We demonstrate the utility of APEX for high-resolution EM imaging of a variety of mammalian organelles and specific proteins using a simple and robust labeling procedure. We also fused APEX to the N or C terminus of the mitochondrial calcium uniporter (MCU), a recently identified channel whose topology is disputed. These fusions give EM contrast exclusively in the mitochondrial matrix, suggesting that both the N and C termini of MCU face the matrix. Because APEX staining is not dependent on light activation, APEX should make EM imaging of any cellular protein straightforward, regardless of the size or thickness of the specimen.

  13. Resin embedded multicycle imaging (REMI): a tool to evaluate protein domains.

    PubMed

    Busse, B L; Bezrukov, L; Blank, P S; Zimmerberg, J

    2016-08-08

    Protein complexes associated with cellular processes comprise a significant fraction of all biology, but our understanding of their heterogeneous organization remains inadequate, particularly for physiological densities of multiple protein species. Towards resolving this limitation, we here present a new technique based on resin-embedded multicycle imaging (REMI) of proteins in-situ. By stabilizing protein structure and antigenicity in acrylic resins, affinity labels were repeatedly applied, imaged, removed, and replaced. In principle, an arbitrarily large number of proteins of interest may be imaged on the same specimen with subsequent digital overlay. A series of novel preparative methods were developed to address the problem of imaging multiple protein species in areas of the plasma membrane or volumes of cytoplasm of individual cells. For multiplexed examination of antibody staining we used straightforward computational techniques to align sequential images, and super-resolution microscopy was used to further define membrane protein colocalization. We give one example of a fibroblast membrane with eight multiplexed proteins. A simple statistical analysis of this limited membrane proteomic dataset is sufficient to demonstrate the analytical power contributed by additional imaged proteins when studying membrane protein domains.

  14. Heat-treated stainless steel felt as scalable anode material for bioelectrochemical systems.

    PubMed

    Guo, Kun; Soeriyadi, Alexander H; Feng, Huajun; Prévoteau, Antonin; Patil, Sunil A; Gooding, J Justin; Rabaey, Korneel

    2015-11-01

    This work reports a simple and scalable method to convert stainless steel (SS) felt into an effective anode for bioelectrochemical systems (BESs) by means of heat treatment. X-ray photoelectron spectroscopy and cyclic voltammetry elucidated that the heat treatment generated an iron oxide rich layer on the SS felt surface. The iron oxide layer dramatically enhanced the electroactive biofilm formation on SS felt surface in BESs. Consequently, the sustained current densities achieved on the treated electrodes (1 cm(2)) were around 1.5±0.13 mA/cm(2), which was seven times higher than the untreated electrodes (0.22±0.04 mA/cm(2)). To test the scalability of this material, the heat-treated SS felt was scaled up to 150 cm(2) and similar current density (1.5 mA/cm(2)) was achieved on the larger electrode. The low cost, straightforwardness of the treatment, high conductivity and high bioelectrocatalytic performance make heat-treated SS felt a scalable anodic material for BESs. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. On the resolution of plenoptic PIV

    NASA Astrophysics Data System (ADS)

    Deem, Eric A.; Zhang, Yang; Cattafesta, Louis N.; Fahringer, Timothy W.; Thurow, Brian S.

    2016-08-01

    Plenoptic PIV offers a simple, single camera solution for volumetric velocity measurements of fluid flow. However, due to the novel manner in which the particle images are acquired and processed, few references exist to aid in determining the resolution limits of the measurements. This manuscript provides a framework for determining the spatial resolution of plenoptic PIV based on camera design and experimental parameters. This information can then be used to determine the smallest length scales of flows that are observable by plenoptic PIV, the dynamic range of plenoptic PIV, and the corresponding uncertainty in plenoptic PIV measurements. A simplified plenoptic camera is illustrated to provide the reader with a working knowledge of the method in which the light field is recorded. Then, operational considerations are addressed. This includes a derivation of the depth resolution in terms of the design parameters of the camera. Simulated volume reconstructions are presented to validate the derived limits. It is found that, while determining the lateral resolution is relatively straightforward, many factors affect the resolution along the optical axis. These factors are addressed and suggestions are proposed for improving performance.

  16. One-Step Synthesis of Boron Nitride Quantum Dots: Simple Chemistry Meets Delicate Nanotechnology.

    PubMed

    Liu, Bingping; Yan, Shihai; Song, Zhongqian; Liu, Mengli; Ji, Xuqiang; Yang, Wenrong; Liu, Jingquan

    2016-12-23

    Herein, a conceptually new and straightforward aqueous route is described for the synthesis of hydroxyl- and amino-functionalized boron nitride quantum dots (BNQDs) with quantum yields (QY) as high as 18.3 % by using a facile bottom-up approach, in which a mixture of boric acid and ammonia solution was hydrothermally treated in one pot at 200 °C for 12 h. The functionalized BNQDs, with excellent photoluminescence properties, could be easily dispersed in an aqueous medium and applied as fluorescent probes for the detection of ferrous (Fe²⁺) and ferric (Fe³⁺) ions with excellent selectivity and low detection limits. The mechanisms for the hydrothermal reaction and fluorescence quenching were also simulated by using density functional theory (DFT), which confirmed the feasibility and advantages of this strategy. It provides a scalable and eco-friendly method for the preparation of BNQDs with good dispersibility and could also be generalized to the synthesis of other 2D quantum dots and nanoplates. © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Durable and mass producible polymer surface structures with different combinations of micro-micro hierarchy

    NASA Astrophysics Data System (ADS)

    Jiang, Yu; Suvanto, Mika; Pakkanen, Tapani A.

    2016-01-01

    Extensive studies have aimed at fabricating hierarchical surface structures inspired by nature. Synthetic hierarchical structures, however, typically sacrifice mechanical resistance for functionality by introducing finer-scale features, leaving the surfaces less durable. Surface micro-micro hierarchy has been proven effective in replacing micro-nano hierarchy with respect to superhydrophobicity, yet little attention has been paid to combined micro-micro hierarchies that incorporate both surface pillars and pits. Fabricating this type of hierarchy may be less straightforward, possibly requiring a complicated multi-step process. In this study, we present a simple yet mass-producible fabrication method for hierarchical structures with different combinations of surface pillars and pits. The fabrication was based on a single aluminum (Al) mold with sequential mountings. The fabricated structures exhibit high mechanical durability and structural stability under normal loads up to 100 kg. In addition, a theoretical estimation of the wetting state shows a promising way of stabilizing a water droplet on the surface pit structures in a more stable Cassie-Baxter state.

  18. Improved antifouling properties and selective biofunctionalization of stainless steel by employing heterobifunctional silane-polyethylene glycol overlayers and avidin-biotin technology

    PubMed Central

    Hynninen, Ville; Vuori, Leena; Hannula, Markku; Tapio, Kosti; Lahtonen, Kimmo; Isoniemi, Tommi; Lehtonen, Elina; Hirsimäki, Mika; Toppari, J. Jussi; Valden, Mika; Hytönen, Vesa P.

    2016-01-01

    A straightforward solution-based method to modify the biofunctionality of stainless steel (SS) using heterobifunctional silane-polyethylene glycol (silane-PEG) overlayers is reported. Reduced nonspecific biofouling of both proteins and bacteria onto SS and further selective biofunctionalization of the modified surface were achieved. According to photoelectron spectroscopy analyses, the silane-PEGs formed less than 10 Å thick overlayers with close to 90% surface coverage and reproducible chemical compositions. Consequently, the surfaces also became more hydrophilic, and the observed non-specific biofouling of proteins was reduced by approximately 70%. In addition, the attachment of E. coli was reduced by more than 65%. Moreover, the potential of the overlayer to be further modified was demonstrated by successfully coupling biotinylated alkaline phosphatase (bAP) to a silane-PEG-biotin overlayer via avidin-biotin bridges. The activity of the immobilized enzyme was shown to be well preserved without compromising the achieved antifouling properties. Overall, the simple solution-based approach enables the tailoring of SS to enhance its activity for biomedical and biotechnological applications. PMID:27381834

  19. Computationally efficient stochastic optimization using multiple realizations

    NASA Astrophysics Data System (ADS)

    Bayer, P.; Bürger, C. M.; Finkel, M.

    2008-02-01

    The presented study is concerned with computationally efficient methods for solving stochastic optimization problems involving multiple equally probable realizations of uncertain parameters. A new and straightforward technique is introduced that is based on dynamically ordering the stack of realizations during the search procedure. The rationale is that a small number of critical realizations govern the output of a reliability-based objective function. Using a test problem typical of designing a water-supply well field, several variants of this "stack ordering" approach are evaluated, and the results are statistically assessed in terms of optimality and nominal reliability. The study demonstrates that simple ordering of a given set of 500 realizations while applying an evolutionary search algorithm can save about half of the model runs without compromising the optimization procedure. More advanced variants of stack ordering can, if properly configured, save more than 97% of the computational effort that would be required if the entire set of realizations were considered. These findings are promising for similar problems in water management and reliability-based design in general, and particularly for non-convex problems that require heuristic search techniques.
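
The core of the stack-ordering idea is easy to sketch: screen each candidate design against the realizations in their current order, abort at the first failure, and promote the failing (critical) realization to the top of the stack so that later candidates are rejected as early as possible. The sketch below is a minimal illustration under assumed names (`passes` stands in for a hypothetical model run; the all-realizations reliability criterion is one simple choice), not the paper's implementation:

```python
def reliable(design, stack, passes, runs):
    """Return True if `design` satisfies the constraint in every realization.

    `stack` is the (mutable) ordered list of realizations; `passes(design, r)`
    is a hypothetical model run returning True on success; `runs` is a one-item
    list counting model evaluations. A failing realization is promoted to the
    front of the stack so subsequent candidates are screened against it first.
    """
    for i, r in enumerate(stack):
        runs[0] += 1
        if not passes(design, r):
            stack.insert(0, stack.pop(i))  # promote the critical realization
            return False                   # early exit: no further model runs
    return True
```

Because infeasible candidates tend to fail on the same few critical realizations, the early exit plus promotion means most rejections cost a single model run instead of a full sweep over the stack.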

  20. Improved antifouling properties and selective biofunctionalization of stainless steel by employing heterobifunctional silane-polyethylene glycol overlayers and avidin-biotin technology

    NASA Astrophysics Data System (ADS)

    Hynninen, Ville; Vuori, Leena; Hannula, Markku; Tapio, Kosti; Lahtonen, Kimmo; Isoniemi, Tommi; Lehtonen, Elina; Hirsimäki, Mika; Toppari, J. Jussi; Valden, Mika; Hytönen, Vesa P.

    2016-07-01

    A straightforward solution-based method to modify the biofunctionality of stainless steel (SS) using heterobifunctional silane-polyethylene glycol (silane-PEG) overlayers is reported. Reduced nonspecific biofouling of both proteins and bacteria onto SS and further selective biofunctionalization of the modified surface were achieved. According to photoelectron spectroscopy analyses, the silane-PEGs formed less than 10 Å thick overlayers with close to 90% surface coverage and reproducible chemical compositions. Consequently, the surfaces also became more hydrophilic, and the observed non-specific biofouling of proteins was reduced by approximately 70%. In addition, the attachment of E. coli was reduced by more than 65%. Moreover, the potential of the overlayer to be further modified was demonstrated by successfully coupling biotinylated alkaline phosphatase (bAP) to a silane-PEG-biotin overlayer via avidin-biotin bridges. The activity of the immobilized enzyme was shown to be well preserved without compromising the achieved antifouling properties. Overall, the simple solution-based approach enables the tailoring of SS to enhance its activity for biomedical and biotechnological applications.

  1. Sensing Cell-Culture Assays with Low-Cost Circuitry.

    PubMed

    Pérez, Pablo; Huertas, Gloria; Maldonado-Jacobi, Andrés; Martín, María; Serrano, Juan A; Olmo, Alberto; Daza, Paula; Yúfera, Alberto

    2018-06-11

    An alternative approach for cell-culture end-point protocols is proposed herein. This new technique is suitable for real-time remote sensing. It is based on Electrical Cell-substrate Impedance Spectroscopy (ECIS) and employs the Oscillation-Based Test (OBT) method. Simple and straightforward circuit blocks form the basis of the proposed measurement system. The oscillation parameters, frequency and amplitude, constitute the outcome and correlate directly with the culture status. A user can remotely track the evolution of cell cultures in real time over the complete experiment through a web tool that continuously displays the acquired data. Experiments carried out with commercial electrodes and a well-established cell line (AA8) are described, obtaining the cell number in real time from growth assays. The electrodes were electrically characterized along the design flow in order to predict the system performance and the sensitivity curves. Curves for one week of cell growth are reported. The experimental results validate the proposed OBT for cell-culture characterization. Furthermore, the proposed electrode model provides a good approximation of the cell number and the time evolution of the studied cultures.

  2. Resin embedded multicycle imaging (REMI): a tool to evaluate protein domains

    PubMed Central

    Busse, B. L.; Bezrukov, L.; Blank, P. S.; Zimmerberg, J.

    2016-01-01

    Protein complexes associated with cellular processes comprise a significant fraction of all biology, but our understanding of their heterogeneous organization remains inadequate, particularly for physiological densities of multiple protein species. Towards resolving this limitation, we here present a new technique based on resin-embedded multicycle imaging (REMI) of proteins in-situ. By stabilizing protein structure and antigenicity in acrylic resins, affinity labels were repeatedly applied, imaged, removed, and replaced. In principle, an arbitrarily large number of proteins of interest may be imaged on the same specimen with subsequent digital overlay. A series of novel preparative methods were developed to address the problem of imaging multiple protein species in areas of the plasma membrane or volumes of cytoplasm of individual cells. For multiplexed examination of antibody staining we used straightforward computational techniques to align sequential images, and super-resolution microscopy was used to further define membrane protein colocalization. We give one example of a fibroblast membrane with eight multiplexed proteins. A simple statistical analysis of this limited membrane proteomic dataset is sufficient to demonstrate the analytical power contributed by additional imaged proteins when studying membrane protein domains. PMID:27499335

  3. Design of label-free, homogeneous biosensing platform based on plasmonic coupling and surface-enhanced Raman scattering using unmodified gold nanoparticles.

    PubMed

    Yi, Zi; Li, Xiao-Yan; Liu, Feng-Juan; Jin, Pei-Yan; Chu, Xia; Yu, Ru-Qin

    2013-05-15

    Surface-enhanced Raman scattering (SERS) has emerged as a promising spectroscopic technique for biosensing. However, to design a SERS-based biosensor, almost all current methods involve time-consuming and complicated modification of the metallic nanoparticles with a Raman-active dye and a biorecognition element, which restricts their widespread application. Herein, we report a label-free, homogeneous and easy-to-operate biosensing platform for rapid, simple and sensitive SERS detection using unmodified gold nanoparticles (Au NPs). This strategy exploits the difference in adsorption behavior of single-stranded DNA (ssDNA) and double-stranded DNA (dsDNA) on citrate-coated Au NPs. In the presence of dsDNA, aggregation of the Au NPs takes place after adding salt solution because dsDNA cannot adsorb on the Au NPs to protect them from salt-induced aggregation. This aggregation gives rise to plasmonic coupling of adjacent metallic NPs and turns on the enhancement of the Raman scattering, producing a strong SERS signal. In contrast, ssDNA can adsorb on the Au NP surface through strong electrostatic attraction and protect the particles from salt-induced aggregation, giving a weak SERS signal. The approach is not only straightforward and simple in design but also rapid and convenient in operation. The feasibility and universality of the design have been demonstrated by the detection of DNA and Hg²⁺, and the assay achieves a signal-to-background ratio as high as ∼30 together with excellent selectivity. The method can be extended to detect various analytes, such as other metal ions, proteins and small molecules, by using oligonucleotides that selectively bind the analytes. Copyright © 2012 Elsevier B.V. All rights reserved.

  4. "Nailing" the management of the ingrown great toenail.

    PubMed

    Block, Stan L

    2014-11-01

    "Nailing" the management of the severely ingrown great toenail, commonly encountered in the adolescent population, is an important tool in the pediatrician's armamentarium. I have found great toenail removal to be worthwhile, with straightforward indications, and quite rewarding for my patients in terms of time, convenience, and cost. The key to the procedure is to keep it simple. Four vital steps are involved: (1) obtaining an operative permit and explaining the procedure; (2) performing a careful, complete digital nerve block; (3) removing the entire toenail; and, importantly, (4) performing a partial chemical matricectomy, using readily available silver nitrate sticks, to prevent frequent recurrences. Copyright 2014, SLACK Incorporated.

  5. Passive athermalization of multimode interference devices for wavelength-locking applications.

    PubMed

    Ruiz-Perez, Victor I; May-Arrioja, Daniel A; Guzman-Sepulveda, Jose R

    2017-03-06

    In this paper we demonstrate the passive, material-based athermalization of all-fiber architectures by cascading multimode interference (MMI) devices. In-line thermal compensation is achieved by including a liquid-core multimode section of variable length that allows ensuring temperature-independent operation while preserving the inherent filter-like spectral response of the MMI devices. The design of the temperature compensation unit is straightforward and its fabrication is simple. The applicability of our approach is experimentally verified by fabricating a wavelength-locked MMI laser with sensitivity of only -0.1 pm/°C, which is at least one order of magnitude lower than that achieved with other fiber optics devices.

  6. Convenient optical pressure gauge for multimegabar pressures calibrated to 300 GPa

    NASA Astrophysics Data System (ADS)

    Sun, Liling; Ruoff, Arthur L.; Stupian, Gary

    2005-01-01

    The accurate measurement of pressure by a straightforward and inexpensive optical procedure has been needed in the multimegabar region since static pressures over 216 GPa, 361 GPa, 420 GPa and 560 GPa were obtained in the diamond anvil cell. Here, a simple optical pressure gauge based on the Raman shift of the diamond at the center of a diamond tip at the diamond-sample interface is calibrated against a primary gauge (Pt isotherm at 300 K from shock data) to 300 GPa, thus enabling researchers who do not have a synchrotron to conveniently measure pressure with an optical scale from 50 to 300 GPa.

  7. UV-vis spectra as an alternative to the Lowry method for quantify hair damage induced by surfactants.

    PubMed

    Pires-Oliveira, Rafael; Joekes, Inés

    2014-11-01

    It is well known that long-term use of shampoo damages human hair. Although the Lowry method has been widely used to quantify hair damage, it is unsuitable in the presence of some surfactants, and no alternative method has been proposed in the literature. In this work, a different method is used to investigate and compare the hair damage induced by four types of surfactants (including three commercial-grade surfactants) and water. Hair samples were immersed in aqueous surfactant solutions under conditions that resemble a shower (38 °C, constant shaking). These solutions become colored with time of contact with hair, and their UV-vis spectra were recorded. For comparison, the amounts of protein extracted from hair by sodium dodecyl sulfate (SDS) and by water were estimated by the Lowry method. Additionally, non-pigmented vs. pigmented hair and also sepia melanin were used to understand the color of the washing solutions and their spectra. The results presented herein show that hair degradation is mostly caused by the extraction of proteins, cuticle fragments and melanin granules from the hair fiber. It was found that the intensity of the solution color varies with the charge density of the surfactants. Furthermore, the intensity of the solution color can be correlated with the amount of protein quantified by the Lowry method as well as with the degree of hair damage. The UV-vis spectrum of hair washing solutions is a simple and straightforward method to quantify and compare hair damage induced by different commercial surfactants. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. Electrically Controllable Microparticle Synthesis and Digital Microfluidic Manipulation by Electric-Field-Induced Droplet Dispensing into Immiscible Fluids

    NASA Astrophysics Data System (ADS)

    Um, Taewoong; Hong, Jiwoo; Kang, In Seok

    2016-11-01

    The dispensing of tiny droplets is a basic and crucial process in a myriad of applications, such as DNA/protein microarrays, cell culture, chemical synthesis of microparticles, and digital microfluidics. This work demonstrates droplet dispensing into immiscible fluids through the electric charge concentration (ECC) method. Three main modes (attaching, uniform and bursting) are observed as a function of flow rate, applied voltage and the gap distance between the nozzle and the oil surface. Through a conventional nozzle with a diameter of a few millimeters, charged droplets with volumes ranging from a few μL down to a few tens of nL can be uniformly dispensed into the oil chamber without reducing the nozzle size. Based on the features of the proposed method (e.g., formation of droplets with controllable polarity and amount of electric charge in water-oil systems), a simple and straightforward method is developed for microparticle synthesis, including the preparation of colloidosomes and the fabrication of Janus microparticles with anisotropic internal structures. Finally, a combined system consisting of ECC-induced droplet dispensing and electrophoresis of charged droplet (ECD)-driven manipulation is constructed. This work was supported by the BK21Plus Program for advanced education of creative chemical engineers of the National Research Foundation of Korea (NRF) Grant funded by the Korea government (MSIP).

  9. Generalized Fisher matrices

    NASA Astrophysics Data System (ADS)

    Heavens, A. F.; Seikel, M.; Nord, B. D.; Aich, M.; Bouffanais, Y.; Bassett, B. A.; Hobson, M. P.

    2014-12-01

    The Fisher Information Matrix formalism (Fisher 1935) is extended to cases where the data are divided into two parts (X, Y), where the expectation value of Y depends on X according to some theoretical model, and X and Y both have errors with arbitrary covariance. In the simplest case, (X, Y) represent data pairs of abscissa and ordinate, in which case the analysis deals with the case of data pairs with errors in both coordinates, but X can be any measured quantities on which Y depends. The analysis applies for arbitrary covariance, provided all errors are Gaussian, and provided the errors in X are small, both in comparison with the scale over which the expected signal Y changes, and with the width of the prior distribution. This generalizes the Fisher Matrix approach, which normally only considers errors in the `ordinate' Y. In this work, we include errors in X by marginalizing over latent variables, effectively employing a Bayesian hierarchical model, and deriving the Fisher Matrix for this more general case. The methods here also extend to likelihood surfaces which are not Gaussian in the parameter space, and so techniques such as DALI (Derivative Approximation for Likelihoods) can be generalized straightforwardly to include arbitrary Gaussian data error covariances. For simple mock data and theoretical models, we compare to Markov Chain Monte Carlo experiments, illustrating the method with cosmological supernova data. We also include the new method in the FISHER4CAST software.
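
For orientation, the standard Gaussian-data Fisher matrix, and the familiar small-error device of folding abscissa errors into an effective variance (which the generalization above derives rigorously by marginalizing over latent variables), can be written as follows. These are standard textbook expressions for the fixed-covariance, uncorrelated-error case, not the paper's general-covariance result:

```latex
% Standard Fisher matrix: Gaussian data Y with mean \mu(\theta), fixed covariance C
F_{\alpha\beta} \;=\; \frac{\partial \mu^{\mathsf T}}{\partial \theta_\alpha}\, C^{-1}\,
\frac{\partial \mu}{\partial \theta_\beta}

% Small, uncorrelated errors in the abscissa X (data pairs (x_i, y_i)):
% to leading order each point acquires an effective variance,
\sigma_{\mathrm{eff},i}^2 \;=\; \sigma_{y,i}^2
  + \left(\frac{\partial \mu}{\partial x}\Big|_{x_i}\right)^{\!2} \sigma_{x,i}^2,
\qquad
F_{\alpha\beta} \;=\; \sum_i \frac{1}{\sigma_{\mathrm{eff},i}^2}\,
\frac{\partial \mu(x_i)}{\partial \theta_\alpha}\,
\frac{\partial \mu(x_i)}{\partial \theta_\beta}
```

The effective-variance form makes explicit why the errors in X must be small compared with the scale over which \(\mu\) changes: the correction uses only the local slope \(\partial\mu/\partial x\).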

  10. Cuckoo Search with Lévy Flights for Weighted Bayesian Energy Functional Optimization in Global-Support Curve Data Fitting

    PubMed Central

    Gálvez, Akemi; Iglesias, Andrés; Cabellos, Luis

    2014-01-01

    The problem of data fitting is very important in many theoretical and applied fields. In this paper, we consider the problem of optimizing a weighted Bayesian energy functional for data fitting by using global-support approximating curves. By global-support curves we mean curves expressed as a linear combination of basis functions whose support is the whole domain of the problem, as opposed to other common approaches in CAD/CAM and computer graphics driven by piecewise functions (such as B-splines and NURBS) that provide local control of the shape of the curve. Our method applies a powerful nature-inspired metaheuristic algorithm called cuckoo search, introduced recently to solve optimization problems. A major advantage of this method is its simplicity: cuckoo search requires only two parameters, many fewer than other metaheuristic approaches, so the parameter tuning becomes a very simple task. The paper shows that this new approach can be successfully used to solve our optimization problem. To check the performance of our approach, it has been applied to five illustrative examples of different types, including open and closed 2D and 3D curves that exhibit challenging features, such as cusps and self-intersections. Our results show that the method performs pretty well, being able to solve our minimization problem in an astonishingly straightforward way. PMID:24977175

  11. Cuckoo search with Lévy flights for weighted Bayesian energy functional optimization in global-support curve data fitting.

    PubMed

    Gálvez, Akemi; Iglesias, Andrés; Cabellos, Luis

    2014-01-01

    The problem of data fitting is very important in many theoretical and applied fields. In this paper, we consider the problem of optimizing a weighted Bayesian energy functional for data fitting by using global-support approximating curves. By global-support curves we mean curves expressed as a linear combination of basis functions whose support is the whole domain of the problem, as opposed to other common approaches in CAD/CAM and computer graphics driven by piecewise functions (such as B-splines and NURBS) that provide local control of the shape of the curve. Our method applies a powerful nature-inspired metaheuristic algorithm called cuckoo search, introduced recently to solve optimization problems. A major advantage of this method is its simplicity: cuckoo search requires only two parameters, many fewer than other metaheuristic approaches, so the parameter tuning becomes a very simple task. The paper shows that this new approach can be successfully used to solve our optimization problem. To check the performance of our approach, it has been applied to five illustrative examples of different types, including open and closed 2D and 3D curves that exhibit challenging features, such as cusps and self-intersections. Our results show that the method performs pretty well, being able to solve our minimization problem in an astonishingly straightforward way.
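
As both abstracts note, the appeal of cuckoo search is that essentially only two parameters (the population size and the abandonment fraction pa) need tuning. The sketch below is a bare-bones version of the algorithm with Lévy-flight steps drawn via Mantegna's algorithm, applied to a generic objective in place of the weighted Bayesian energy functional; the step-size constant and all names are illustrative assumptions:

```python
import math
import random

def levy_step(rng, beta=1.5):
    """One heavy-tailed, Levy-stable-distributed step (Mantegna's algorithm)."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return rng.gauss(0, sigma) / abs(rng.gauss(0, 1)) ** (1 / beta)

def cuckoo_search(f, dim, n=15, pa=0.25, step=0.05, iters=300, lo=-5.0, hi=5.0, seed=1):
    """Minimize f over [lo, hi]^dim with a minimal cuckoo search."""
    rng = random.Random(seed)
    clip = lambda v: max(lo, min(hi, v))
    nests = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    fit = [f(x) for x in nests]
    best = min(range(n), key=fit.__getitem__)
    for _ in range(iters):
        for i in range(n):  # new solution by a Levy flight biased toward the best nest
            cand = [clip(x + step * levy_step(rng) * (x - nests[best][d]))
                    for d, x in enumerate(nests[i])]
            j = rng.randrange(n)  # compare against a randomly chosen nest
            fc = f(cand)
            if fc < fit[j]:
                nests[j], fit[j] = cand, fc
        worst = sorted(range(n), key=fit.__getitem__, reverse=True)
        for i in worst[:int(pa * n)]:  # abandon a fraction pa of the worst nests
            nests[i] = [rng.uniform(lo, hi) for _ in range(dim)]
            fit[i] = f(nests[i])
        best = min(range(n), key=fit.__getitem__)
    return nests[best], fit[best]
```

In the curve-fitting setting of the paper, `f` would evaluate the weighted Bayesian energy functional of a candidate coefficient vector for the global-support basis; here any smooth test function (e.g. a sphere function) exercises the same machinery.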

  12. Estimating the burden of recurrent events in the presence of competing risks: the method of mean cumulative count.

    PubMed

    Dong, Huiru; Robison, Leslie L; Leisenring, Wendy M; Martin, Leah J; Armstrong, Gregory T; Yasui, Yutaka

    2015-04-01

    Cumulative incidence has been widely used to estimate the cumulative probability of developing an event of interest by a given time in the presence of competing risks. When it is of interest to measure the total burden of recurrent events in a population, however, the cumulative incidence method is not appropriate because it considers only the first occurrence of the event of interest for each individual: subsequent occurrences are not included. Here, we discuss a straightforward and intuitive method termed "mean cumulative count," which summarizes all events that occur in the population by a given time, not just the first event for each subject. We explore the mathematical relationship between the mean cumulative count and cumulative incidence. The calculation of the mean cumulative count is described in detail using a simple hypothetical example, and computation code with an illustrative example is provided. Using follow-up data from January 1975 to August 2009 collected in the Childhood Cancer Survivor Study, we show applications of the mean cumulative count and cumulative incidence for the outcome of subsequent neoplasms to demonstrate the different but complementary information obtained from the two approaches and the specific utility of the former. © The Author 2015. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
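
The flavor of such a per-time-point, all-events estimator is easy to convey in code. The sketch below computes the classic mean cumulative function (the Nelson-style recurrent-event estimator) under right-censoring only; the mean cumulative count of the paper extends this idea to handle competing risks, which is not reproduced here. The data layout and names are illustrative assumptions:

```python
def mean_cumulative_function(subjects):
    """Mean cumulative number of events per subject by each event time.

    `subjects` is a list of (event_times, follow_up_end) pairs: each subject
    may experience the event repeatedly while under observation up to
    follow_up_end. Returns the step function as a sorted list of
    (time, value) pairs.
    """
    times = sorted({t for events, _ in subjects for t in events})
    value, curve = 0.0, []
    for t in times:
        at_risk = sum(1 for _, end in subjects if end >= t)  # still observed at t
        d = sum(events.count(t) for events, _ in subjects)   # events occurring at t
        if at_risk:
            value += d / at_risk  # average per-subject increment at time t
        curve.append((t, value))
    return curve
```

Unlike cumulative incidence, which would stop counting a subject after their first event, every occurrence contributes an increment here, so the curve can exceed 1 when events recur.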

  13. Conservative, unconditionally stable discretization methods for Hamiltonian equations, applied to wave motion in lattice equations modeling protein molecules

    NASA Astrophysics Data System (ADS)

    LeMesurier, Brenton

    2012-01-01

    A new approach is described for generating exactly energy-momentum conserving time discretizations for a wide class of Hamiltonian systems of DEs with quadratic momenta, including mechanical systems with central forces; it is well-suited in particular to the large systems that arise in both spatial discretizations of nonlinear wave equations and lattice equations such as the Davydov System modeling energetic pulse propagation in protein molecules. The method is unconditionally stable, making it well-suited to equations of broadly “Discrete NLS form”, including many arising in nonlinear optics. Key features of the resulting discretizations are exact conservation of both the Hamiltonian and quadratic conserved quantities related to continuous linear symmetries, preservation of time reversal symmetry, unconditional stability, and respecting the linearity of certain terms. The last feature allows a simple, efficient iterative solution of the resulting nonlinear algebraic systems that retain unconditional stability, avoiding the need for full Newton-type solvers. One distinction from earlier work on conservative discretizations is a new and more straightforward nearly canonical procedure for constructing the discretizations, based on a “discrete gradient calculus with product rule” that mimics the essential properties of partial derivatives. This numerical method is then used to study the Davydov system, revealing that previously conjectured continuum limit approximations by NLS do not hold, but that sech-like pulses related to NLS solitons can nevertheless sometimes arise.
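
The conservation mechanism behind such schemes can be stated compactly. For a Hamiltonian system \(\dot z = J\nabla H(z)\) with \(J\) skew-symmetric, a discrete gradient \(\bar\nabla H\) is any map satisfying the discrete chain rule below; the resulting one-step method then conserves \(H\) exactly for any step size, which is the root of the unconditional stability. This is the generic discrete-gradient argument, not the paper's specific "discrete gradient calculus with product rule" construction:

```latex
% Defining property of a discrete gradient \bar\nabla H(z, z') (discrete chain rule):
\bar\nabla H(z, z')^{\mathsf T} (z' - z) \;=\; H(z') - H(z)

% One-step scheme for \dot z = J \nabla H(z), J skew-symmetric:
\frac{z_{n+1} - z_n}{\Delta t} \;=\; J\, \bar\nabla H(z_n, z_{n+1})

% Exact energy conservation, for any \Delta t:
H(z_{n+1}) - H(z_n)
  \;=\; \bar\nabla H^{\mathsf T} (z_{n+1} - z_n)
  \;=\; \Delta t\, \bar\nabla H^{\mathsf T} J\, \bar\nabla H
  \;=\; 0
```

The last equality holds because \(x^{\mathsf T} J x = 0\) for any skew-symmetric \(J\); no smallness assumption on \(\Delta t\) enters anywhere.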

  14. Time Recovery for a Complex Process Using Accelerated Dynamics.

    PubMed

    Paz, S Alexis; Leiva, Ezequiel P M

    2015-04-14

    The hyperdynamics method (HD) developed by Voter (J. Chem. Phys. 1996, 106, 4665) provides the theoretical basis for constructing an accelerated simulation scheme that retains time-scale information. Since HD is based on transition state theory, pseudoequilibrium conditions (PEC) must be satisfied before any system in a trapped state may be accelerated. As the system evolves, many trapped states may appear, and the PEC must be assumed in each one to accelerate the escape. However, since the system evolution is a priori unknown, the PEC cannot be permanently assumed to be true. Furthermore, the different parameters of the bias function used may need drastic recalibration during this evolution. To overcome these problems, we present a general scheme to switch between HD and conventional molecular dynamics (MD) in an automatic fashion during the simulation. To decide when HD should start and finish, criteria based on the energetic properties of the system are introduced. In addition, a very simple bias function is proposed, leading to a straightforward on-the-fly setup of the required parameters. A way to measure the quality of the simulation is suggested. The efficiency of the present hybrid HD-MD method is tested for a two-dimensional model potential and for the coalescence process of two nanoparticles. In spite of the considerable complexity of the latter system (165 degrees of freedom), some relevant mechanistic properties were recovered with the present method.
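
The time-scale bookkeeping that any such hybrid scheme inherits from Voter's formulation is worth recalling as background: during the accelerated phases the dynamics run on the biased potential \(V(\mathbf r) + \Delta V(\mathbf r)\), with \(\Delta V\) vanishing at the dividing surfaces between states, and the physical elapsed time is recovered by reweighting each MD step by the Boltzmann factor of the local bias. This is the standard hyperdynamics relation, not a detail specific to this paper:

```latex
% Hyperdynamics time reweighting: MD evolved on V(\mathbf r) + \Delta V(\mathbf r)
t_{\mathrm{phys}} \;=\; \sum_{i=1}^{N_{\mathrm{steps}}} \Delta t_{\mathrm{MD}}\;
  e^{\beta\, \Delta V(\mathbf r_i)},
\qquad \beta = \frac{1}{k_B T}
```

During conventional-MD phases \(\Delta V = 0\) and each step simply contributes \(\Delta t_{\mathrm{MD}}\), so the same bookkeeping covers both regimes of a hybrid HD-MD run.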

  15. Ultrasonic Welding of Thermoplastic Composite Coupons for Mechanical Characterization of Welded Joints through Single Lap Shear Testing.

    PubMed

    Villegas, Irene F; Palardy, Genevieve

    2016-02-11

    This paper presents a novel straightforward method for ultrasonic welding of thermoplastic-composite coupons in optimum processing conditions. The ultrasonic welding process described in this paper is based on three main pillars. Firstly, flat energy directors are used for preferential heat generation at the joining interface during the welding process. A flat energy director is a neat thermoplastic resin film that is placed between the parts to be joined prior to the welding process and heats up preferentially owing to its lower compressive stiffness relative to the composite substrates. Consequently, flat energy directors provide a simple solution that does not require molding of resin protrusions on the surfaces of the composite substrates, as opposed to ultrasonic welding of unreinforced plastics. Secondly, the process data provided by the ultrasonic welder is used to rapidly define the optimum welding parameters for any thermoplastic composite material combination. Thirdly, displacement control is used in the welding process to ensure consistent quality of the welded joints. According to this method, thermoplastic-composite flat coupons are individually welded in a single lap configuration. Mechanical testing of the welded coupons allows determining the apparent lap shear strength of the joints, which is one of the properties most commonly used to quantify the strength of thermoplastic composite welded joints.

  16. Ultrasonic Welding of Thermoplastic Composite Coupons for Mechanical Characterization of Welded Joints through Single Lap Shear Testing

    PubMed Central

    Villegas, Irene F.; Palardy, Genevieve

    2016-01-01

    This paper presents a novel straightforward method for ultrasonic welding of thermoplastic-composite coupons in optimum processing conditions. The ultrasonic welding process described in this paper is based on three main pillars. Firstly, flat energy directors are used for preferential heat generation at the joining interface during the welding process. A flat energy director is a neat thermoplastic resin film that is placed between the parts to be joined prior to the welding process and heats up preferentially owing to its lower compressive stiffness relative to the composite substrates. Consequently, flat energy directors provide a simple solution that does not require molding of resin protrusions on the surfaces of the composite substrates, as opposed to ultrasonic welding of unreinforced plastics. Secondly, the process data provided by the ultrasonic welder is used to rapidly define the optimum welding parameters for any thermoplastic composite material combination. Thirdly, displacement control is used in the welding process to ensure consistent quality of the welded joints. According to this method, thermoplastic-composite flat coupons are individually welded in a single lap configuration. Mechanical testing of the welded coupons allows determining the apparent lap shear strength of the joints, which is one of the properties most commonly used to quantify the strength of thermoplastic composite welded joints. PMID:26890931

  17. Realistic absorption coefficient of ultrathin films

    NASA Astrophysics Data System (ADS)

    Cesaria, M.; Caricato, A. P.; Martino, M.

    2012-10-01

    A theoretical algorithm and an experimental procedure are discussed for a new route to determine the absorption/scattering properties of thin films deposited on transparent substrates. Notably, the non-measurable contribution of the film-substrate interface is inherently accounted for. The experimental procedure exploits only measurable spectra combined according to a very simple algorithm, while the theoretical derivation requires neither numerical handling of the acquired spectra nor any assumption about film homogeneity or substrate thickness. The film absorption response is estimated by subtracting the measured absorption spectrum of the bare substrate from that of the film-on-substrate structure, but in a non-straightforward way: an assumption about the absorption profile of the overall structure is introduced, together with a corrective factor accounting for the relative film-to-substrate thickness. The method is tested on films of a well-known material (ITO) as a function of film structural quality and of the influence of the film-substrate interface, both deliberately changed by thickness tuning and doping. The results are fully consistent with information obtained by standard optical analysis and with band-gap values reported in the literature. Additionally, comparison with a conventional method demonstrates that our route is generally more accurate and particularly suited for very thin films.
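
For contrast, the conventional route the authors compare against reduces to a Beer-Lambert estimate of the absorption coefficient from a measured transmittance. A minimal sketch with hypothetical film thickness and transmittance; it neglects reflection and interface losses, which is precisely what the corrected method addresses:

```python
import math

def absorption_coefficient(T, d_cm):
    """Conventional Beer-Lambert estimate alpha = -ln(T) / d, ignoring
    reflection and film-substrate interface contributions."""
    return -math.log(T) / d_cm

# Hypothetical 100 nm film transmitting 81% of the incident light.
alpha = absorption_coefficient(T=0.81, d_cm=100e-7)   # ~2.1e4 cm^-1
```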

  18. Thermal stability increase in metallic nanoparticles-loaded cellulose nanocrystal nanocomposites.

    PubMed

    Goikuria, U; Larrañaga, A; Vilas, J L; Lizundia, E

    2017-09-01

    Due to the potential of CNC-based flexible materials for novel industrial applications, the aim of this work is to improve the thermal stability of cellulose nanocrystal (CNC) films through a straightforward and scalable method. Based on a nanocomposite approach, five different metallic nanoparticles (ZnO, SiO2, TiO2, Al2O3 and Fe2O3) were co-assembled in water with CNCs to obtain free-standing nanocomposite films. Thermogravimetric analysis (TGA) reveals an increased thermal stability upon nanoparticle incorporation. This increase in thermal stability reaches a maximum of 75°C for the nanocomposites containing 10 wt% of Fe2O3 or ZnO. The activation energies of the thermodegradation process (Ea), determined according to the Kissinger and Ozawa-Flynn-Wall methods, further confirm the delayed degradation of CNC nanocomposites upon heating. Finally, the changes induced in the crystalline structure during thermodegradation were followed by wide-angle X-ray diffraction (WAXD). Thermal degradation is also observed to proceed at higher temperatures for nanocomposites containing metallic nanoparticles. Overall, the experimental findings make the nanocomposite approach a simple, low-cost, environmentally friendly strategy to overcome the relatively poor thermal stability of CNCs extracted via sulfuric-acid-assisted hydrolysis of cellulose. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Heuristic extraction of rules in pruned artificial neural networks models used for quantifying highly overlapping chromatographic peaks.

    PubMed

    Hervás, César; Silva, Manuel; Serrano, Juan Manuel; Orejuela, Eva

    2004-01-01

    The suitability of an approach for extracting heuristic rules from trained artificial neural networks (ANNs) pruned by a regularization method and with architectures designed by evolutionary computation for quantifying highly overlapping chromatographic peaks is demonstrated. The ANN input data are estimated by the Levenberg-Marquardt method in the form of a four-parameter Weibull curve associated with the profile of the chromatographic band. To test this approach, two N-methylcarbamate pesticides, carbofuran and propoxur, were quantified using a classic peroxyoxalate chemiluminescence reaction as a detection system for chromatographic analysis. Straightforward network topologies (one- and two-output models) allow the analytes to be quantified in concentration ratios ranging from 1:7 to 5:1 with an average standard error of prediction for the generalization test of 2.7 and 2.3% for carbofuran and propoxur, respectively. The reduced dimensions of the selected ANN architectures, especially those obtained after using heuristic rules, allowed simple quantification equations to be developed that transform the input variables into output variables. These equations can be easily interpreted from a chemical point of view to obtain quantitative analytical information regarding the effect of both analytes on the characteristics of the chromatographic bands, namely profile, dispersion, peak height, and residence time. Copyright 2004 American Chemical Society.
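
The preprocessing step, fitting a four-parameter Weibull-type curve to a chromatographic band by Levenberg-Marquardt, can be sketched with SciPy, whose `curve_fit` defaults to LM for unconstrained problems. The profile below is a generic Weibull-shaped band fitted to synthetic data; the authors' exact parameterization may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_band(t, A, t0, b, c):
    """Generic four-parameter Weibull-type band:
    A amplitude, t0 onset time, b scale, c shape."""
    z = np.clip((t - t0) / b, 1e-12, None)   # floor avoids 0**negative
    return A * z ** (c - 1) * np.exp(-(z ** c))

# Synthetic band with a little noise, then an LM fit from a rough guess.
t = np.linspace(0, 10, 200)
true = (2.0, 1.0, 2.5, 2.0)
rng = np.random.default_rng(0)
y = weibull_band(t, *true) + rng.normal(0, 0.01, t.size)

popt, _ = curve_fit(weibull_band, t, y, p0=(1.5, 0.8, 2.0, 2.0))
```

The fitted (A, t0, b, c) would then serve as the ANN input variables.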

  20. Interpolation for de-Dopplerisation

    NASA Astrophysics Data System (ADS)

    Graham, W. R.

    2018-05-01

    'De-Dopplerisation' is one aspect of a problem frequently encountered in experimental acoustics: deducing an emitted source signal from received data. It is necessary when source and receiver are in relative motion, and requires interpolation of the measured signal. This introduces error. In acoustics, typical current practice is to employ linear interpolation and reduce error by over-sampling. In other applications, more advanced approaches with better performance have been developed. Associated with this work is a large body of theoretical analysis, much of which is highly specialised. Nonetheless, a simple and compact performance metric is available: the Fourier transform of the 'kernel' function underlying the interpolation method. Furthermore, in the acoustics context, it is a more appropriate indicator than other, more abstract, candidates. On this basis, interpolators from three families previously identified as promising (piecewise-polynomial, windowed-sinc, and B-spline-based) are compared. The results show that significant improvements over linear interpolation can straightforwardly be obtained. The recommended approach is B-spline-based interpolation, which performs best irrespective of accuracy specification. Its only drawback is a pre-filtering requirement, which represents an additional implementation cost compared to other methods. If this cost is unacceptable, and aliasing errors (on re-sampling) up to approximately 1% can be tolerated, a family of piecewise-cubic interpolators provides the best alternative.
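
The gains over linear interpolation are easy to demonstrate. The sketch below resamples a coarsely sampled tone at shifted instants (as de-Dopplerisation requires) and compares the worst-case error of linear interpolation against a cubic spline; the signal and grids are illustrative, and the paper's B-spline interpolators with pre-filtering would do better still.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# A 4 Hz tone sampled at 32 Hz, resampled on a shifted emission-time grid.
t = np.linspace(0, 1, 33)
x = np.sin(2 * np.pi * 4 * t)
t_new = np.linspace(0.05, 0.95, 500)
ref = np.sin(2 * np.pi * 4 * t_new)

err_lin = np.max(np.abs(np.interp(t_new, t, x) - ref))
err_spl = np.max(np.abs(CubicSpline(t, x)(t_new) - ref))
```

With 8 samples per period, linear interpolation leaves a few-percent worst-case error, while the spline's is more than an order of magnitude smaller.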

  1. A Gaussian quadrature method for total energy analysis in electronic state calculations

    NASA Astrophysics Data System (ADS)

    Fukushima, Kimichika

    This article reports studies by Fukushima and coworkers since 1980 concerning their highly accurate numerical integral method using Gaussian quadratures to evaluate the total energy in electronic state calculations. Gauss-Legendre and Gauss-Laguerre quadratures were used for integrals over finite and infinite regions, respectively. Our previous article showed that, for diatomic molecules such as CO and FeO, elliptic coordinates efficiently achieve high numerical integral accuracy even with a numerical basis set including transition-metal atomic orbitals. This article generalizes the approach to multiatomic systems, with direct integrals in each decomposed elliptic coordinate determined from the nuclear positions of the picked-up atom pairs. Sample calculations were performed for the molecules O3 and H2O. This article also presents, in another coordinate system, a numerical integral that partially uses Becke's decomposition published in 1988, but without Becke's fuzzy cell generated by polynomials of the internuclear distance between the paired atoms. Instead, simple nuclear weights comprising exponential functions around the nuclei are used. The one-center integral is performed with a Gaussian quadrature pack in spherical coordinates, included in the author's original program from around 1980. For this decomposition into one-center integrals, sample calculations are carried out for Li2.
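
The two quadrature families named above are available directly in NumPy; the toy integrals below (not the electronic-structure integrals themselves) show the finite-interval and semi-infinite usage.

```python
import numpy as np

# Gauss-Legendre on [-1, 1]: integrate x^4 (exact value 2/5).
xg, wg = np.polynomial.legendre.leggauss(8)
I_fin = np.sum(wg * xg**4)

# Gauss-Laguerre with weight e^{-x} on [0, inf): integrate x^2 e^{-x}
# (exact value Gamma(3) = 2).
xl, wl = np.polynomial.laguerre.laggauss(8)
I_inf = np.sum(wl * xl**2)
```

An 8-point rule of either family is exact for polynomial integrands up to degree 15, so both results are exact to rounding.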

  2. A varying-coefficient method for analyzing longitudinal clinical trials data with nonignorable dropout

    PubMed Central

    Forster, Jeri E.; MaWhinney, Samantha; Ball, Erika L.; Fairclough, Diane

    2011-01-01

    Dropout is common in longitudinal clinical trials and when the probability of dropout depends on unobserved outcomes even after conditioning on available data, it is considered missing not at random and therefore nonignorable. To address this problem, mixture models can be used to account for the relationship between a longitudinal outcome and dropout. We propose a Natural Spline Varying-coefficient mixture model (NSV), which is a straightforward extension of the parametric Conditional Linear Model (CLM). We assume that the outcome follows a varying-coefficient model conditional on a continuous dropout distribution. Natural cubic B-splines are used to allow the regression coefficients to semiparametrically depend on dropout and inference is therefore more robust. Additionally, this method is computationally stable and relatively simple to implement. We conduct simulation studies to evaluate performance and compare methodologies in settings where the longitudinal trajectories are linear and dropout time is observed for all individuals. Performance is assessed under conditions where model assumptions are both met and violated. In addition, we compare the NSV to the CLM and a standard random-effects model using an HIV/AIDS clinical trial with probable nonignorable dropout. The simulation studies suggest that the NSV is an improvement over the CLM when dropout has a nonlinear dependence on the outcome. PMID:22101223

  3. Polyacrylamide medium for the electrophoretic separation of biomolecules

    DOEpatents

    Madabhushi, Ramakrishna S.; Gammon, Stuart A.

    2003-11-11

    A polyacrylamide medium for the electrophoretic separation of biomolecules. The medium comprises high-molecular-weight polyacrylamides (PAAm) having a viscosity-average molecular weight (Mv) of about 675-725 kDa, synthesized by a conventional redox polymerization technique. Using this separation medium, capillary electrophoresis of the BigDye DNA sequencing standard was performed. A single-base resolution of ~725 bases was achieved in ~60 minutes in a non-covalently coated capillary of 50 µm i.d., 40 cm effective length, and a field of 160 V/cm at 40°C. The resolution achieved with this formulation under identical conditions is much superior (725 bases vs. 625 bases) and faster (60 min vs. 75 min) compared with commercially available PAAm, such as that supplied by Amersham. The formulation method employed here to synthesize PAAm is straightforward and simple, and does not require cumbersome methods such as emulsion polymerization to achieve very high molecular weights. The formulation also does not require separation of the PAAm from the reaction mixture prior to reconstituting the polymer to a final concentration. Furthermore, the formulation is prepared from a single average-molecular-weight PAAm, as opposed to the mixture of two different average-molecular-weight PAAms previously required to achieve high resolution.

  4. A parallel competitive Particle Swarm Optimization for non-linear first arrival traveltime tomography and uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Luu, Keurfon; Noble, Mark; Gesret, Alexandrine; Belayouni, Nidhal; Roux, Pierre-François

    2018-04-01

    Seismic traveltime tomography is an optimization problem that requires large computational efforts. Therefore, linearized techniques are commonly used for their low computational cost. These local optimization methods are likely to get trapped in a local minimum as they critically depend on the initial model. On the other hand, global optimization methods based on Markov chain Monte Carlo (MCMC) are insensitive to the initial model but turn out to be computationally expensive. Particle Swarm Optimization (PSO) is a rather new global optimization approach with few tuning parameters that has shown excellent convergence rates and is straightforwardly parallelizable, allowing a good distribution of the workload. However, while it can traverse several local minima of the evaluated misfit function, classical implementations of PSO can get trapped in local minima at later iterations as particle inertia diminishes. We propose a Competitive PSO (CPSO) that helps particles escape from local minima with a simple implementation that improves the swarm's diversity. The model space can be sampled by running the optimizer multiple times and by keeping all the models explored by the swarms in the different runs. A traveltime tomography algorithm based on CPSO is successfully applied on a real 3D data set in the context of induced seismicity.
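
For orientation, a minimal global-best PSO is sketched below on a multimodal toy misfit (a 2-D Rastrigin surface). This is the standard variant, not the authors' CPSO, and all settings are illustrative.

```python
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best PSO: inertia plus pulls toward personal and
    global best positions (standard variant, not the authors' CPSO)."""
    rng = np.random.default_rng(seed)
    lo = np.asarray(bounds[0], dtype=float)
    hi = np.asarray(bounds[1], dtype=float)
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved] = x[improved]
        pbest_f[improved] = fx[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, float(pbest_f.min())

def rastrigin(z):
    """Multimodal 2-D toy misfit with global minimum 0 at the origin."""
    return 10 * z.size + float(np.sum(z**2 - 10 * np.cos(2 * np.pi * z)))

best, fbest = pso(rastrigin, ([-5.12, -5.12], [5.12, 5.12]))
```

In the authors' CPSO, additional competitive mechanisms preserve swarm diversity so particles can still escape local minima late in the run, when the inertia term alone no longer suffices.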

  5. A universal approach to fabricate ordered colloidal crystals arrays based on electrostatic self-assembly.

    PubMed

    Zhang, Xun; Zhang, Junhu; Zhu, Difu; Li, Xiao; Zhang, Xuemin; Wang, Tieqiang; Yang, Bai

    2010-12-07

    We present a novel and simple method to fabricate two-dimensional (2D) poly(styrene sulfate) (PSS, negatively charged) colloidal crystals on a positively charged substrate. Our strategy contains two separate steps: one is the three-dimensional (3D) assembly of PSS particles in ethanol, and the other is electrostatic adsorption in water. First, 3D assembly in ethanol phase eliminates electrostatic attractions between colloids and the substrate. As a result, high-quality colloidal crystals are easily generated, for electrostatic attractions are unfavorable for the movement of colloidal particles during convective self-assembly. Subsequently, top layers of colloidal spheres are washed away in the water phase, whereas well-packed PSS colloids that are in contact with the substrate are tightly linked due to electrostatic interactions, resulting in the formation of ordered arrays of 2D colloidal spheres. Cycling these processes leads to the layer-by-layer assembly of 3D colloidal crystals with controllable layers. In addition, this strategy can be extended to the fabrication of patterned 2D colloidal crystals on patterned polyelectrolyte surfaces, not only on planar substrates but also on nonplanar substrates. This straightforward method may open up new possibilities for practical use of colloidal crystals of excellent quality, various patterns, and controllable fashions.

  6. Intelligent reservoir operation system based on evolving artificial neural networks

    NASA Astrophysics Data System (ADS)

    Chaves, Paulo; Chang, Fi-John

    2008-06-01

    We propose a novel intelligent reservoir operation system based on an evolving artificial neural network (ANN). 'Evolving' means the parameters of the ANN model are identified by the GA evolutionary optimization technique, so that the ANN model represents the operational strategies of reservoir operation. The main advantages of the Evolving ANN Intelligent System (ENNIS) are as follows: (i) only a small number of parameters need to be optimized, even for long optimization horizons; (ii) multiple decision variables are easy to handle; and (iii) the operation model combines straightforwardly with other prediction models. The developed intelligent system was applied to the operation of the Shihmen Reservoir in North Taiwan to investigate its applicability and practicability. The proposed method was first applied to a simple formulation of the Shihmen Reservoir operation, with a single objective and a single decision variable, and its results were compared to those obtained by dynamic programming. The constructed network proved to be a good operational strategy. The method was then applied to the reservoir with multiple (five) decision variables. The results demonstrated that the developed evolving neural networks improved the operation performance of the reservoir when compared to its current operational strategy. The system successfully handled various decision variables simultaneously and provided reasonable and suitable decisions.

  7. Monodisperse Ultrasmall Manganese-Doped Multimetallic Oxysulfide Nanoparticles as Highly Efficient Oxygen Reduction Electrocatalyst.

    PubMed

    Zhang, Yingying; Wang, Xiang; Hu, Dandan; Xue, Chaozhuang; Wang, Wei; Yang, Huajun; Li, Dongsheng; Wu, Tao

    2018-04-25

    Highly efficient and cheap non-Pt electrocatalysts, such as transition-metal-based catalysts prepared via facile methods, are desirable for the oxygen reduction reaction (ORR) in large-scale practical energy conversion and storage systems. Herein, we report a straightforward top-down synthesis of monodisperse ultrasmall manganese-doped multimetallic (ZnGe) oxysulfide nanoparticles (NPs) as an efficient ORR electrocatalyst by simple ultrasonic treatment of Mn-doped Zn-Ge-S chalcogenidometalate crystal precursors in H2O/EtOH for only 1 h at room temperature. The thus-obtained ultrasmall monodisperse Mn-doped oxysulfide NPs with an ultralow Mn loading level (3.92 wt %) not only exhibit onset and half-wave potentials (0.92 and 0.86 V vs the reversible hydrogen electrode, respectively) comparable to commercial 20 wt % Pt/C, but also exceptionally high metal mass activity (189 mA/mg at 0.8 V) and good methanol tolerance. A combination of transmission electron microscopy, scanning electron microscopy, X-ray photoelectron spectroscopy, and electrochemical analysis demonstrated that the homogeneous distribution of a large amount of Mn(III) on the surface of the NPs mainly accounts for the high ORR activity. We believe that this simple synthesis of Mn-doped multimetallic (ZnGe) oxysulfide NPs derived from chalcogenidometalates will open a new route to exploring discrete-cluster-based chalcogenidometalates as novel non-Pt electrocatalysts for energy applications and provide a facile way to reduce the amount of catalyst while maintaining the desired catalytic performance.

  8. Evaluating surrogate endpoints, prognostic markers, and predictive markers — some simple themes

    PubMed Central

    Baker, Stuart G.; Kramer, Barnett S.

    2014-01-01

    Background A surrogate endpoint is an endpoint observed earlier than the true endpoint (a health outcome) that is used to draw conclusions about the effect of treatment on the unobserved true endpoint. A prognostic marker is a marker for predicting the risk of an event given a control treatment; it informs treatment decisions when there is information on the anticipated benefits and harms of a new treatment applied to persons at high risk. A predictive marker is a marker for predicting the effect of treatment on outcome in a subgroup of patients or study participants; it provides more rigorous information for treatment selection than a prognostic marker when it is based on estimated treatment effects in a randomized trial. Methods We organized our discussion around a different theme for each topic. Results “Fundamentally an extrapolation” refers to the non-statistical considerations and assumptions needed when using surrogate endpoints to evaluate a new treatment. “Decision analysis to the rescue” refers to the use of decision analysis to evaluate an additional prognostic marker, because it is not possible to choose between purely statistical measures of marker performance. “The appeal of simplicity” refers to a straightforward and efficient use of a single randomized trial to evaluate overall treatment effect and treatment effects within subgroups using predictive markers. Conclusion The simple themes provide a general guideline for the evaluation of surrogate endpoints, prognostic markers, and predictive markers. PMID:25385934

  9. GAP Noise Computation By The CE/SE Method

    NASA Technical Reports Server (NTRS)

    Loh, Ching Y.; Chang, Sin-Chung; Wang, Xiao Y.; Jorgenson, Philip C. E.

    2001-01-01

    A typical gap noise problem is considered in this paper using the new space-time conservation element and solution element (CE/SE) method. Implementation of the computation is straightforward. No turbulence model, LES (large eddy simulation) or a preset boundary layer profile is used, yet the computed frequency agrees well with the experimental one.

  10. An efficient, widely applicable cryopreservation of Lilium shoot tips by droplet vitrification

    USDA-ARS?s Scientific Manuscript database

    We report a straightforward and widely applicable cryopreservation method for Lilium shoot tips. This method uses adventitious shoots that were induced from leaf segments cultured for 4 weeks on a shoot regeneration medium containing 1 mg L-1 a-naphthaleneacetic acid (NAA) and 0.5 mg L-1 thidiazuron...

  11. Straightforward analytical method to determine opium alkaloids in poppy seeds and bakery products.

    PubMed

    López, Patricia; Pereboom-de Fauw, Diana P K H; Mulder, Patrick P J; Spanjer, Martien; de Stoppelaar, Joyce; Mol, Hans G J; de Nijs, Monique

    2018-03-01

    A straightforward method to determine the content of six opium alkaloids (morphine, codeine, thebaine, noscapine, papaverine and narceine) in poppy seeds and bakery products was developed and validated down to a limit of quantification (LOQ) of 0.1 mg/kg. The method was based on extraction with acetonitrile/water/formic acid, ten-fold dilution and analysis by LC-MS/MS using a pH 10 carbonate buffer. The method was applied for the analysis of 41 samples collected in 2015 in the Netherlands and Germany. All samples contained morphine, ranging from 0.2 to 240 mg/kg. The levels of codeine and thebaine ranged from below LOQ to 348 mg/kg and from below LOQ to 106 mg/kg, respectively. Sixty percent of the samples exceeded the guidance reference value of 4 mg/kg of morphine set by BfR in Germany, whereas 25% of the samples did not comply with the limits set for morphine, codeine, thebaine and noscapine by Hungarian legislation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Counter-extrapolation method for conjugate interfaces in computational heat and mass transfer.

    PubMed

    Le, Guigao; Oulaid, Othmane; Zhang, Junfeng

    2015-03-01

    In this paper a conjugate interface method is developed by performing extrapolations along the normal direction. Compared to other existing conjugate models, our method has several technical advantages, including a simple and straightforward algorithm, accurate representation of the interface geometry, applicability to any interface-lattice relative orientation, and availability of the normal gradient. The model is validated by simulating steady and unsteady convection-diffusion systems with a flat interface and a steady diffusion system with a circular interface, and good agreement is observed when comparing the lattice Boltzmann results with the respective analytical solutions. A more general system with an unsteady convection-diffusion process and a curved interface, i.e., the cooling of a hot cylinder in a cold flow, is also simulated as an example to illustrate the practical usefulness of our model, and the effects of the cylinder heat capacity and thermal diffusivity on the cooling process are examined. Results show that a cylinder with a larger heat capacity releases more heat energy into the fluid and cools down more slowly, while enhanced heat conduction inside the cylinder facilitates the cooling of the system. Although these findings appear obvious from physical principles, these confirming results demonstrate the application potential of our method in more complex systems. In addition, the basic idea and algorithm of the counter-extrapolation procedure presented here can be readily extended to other lattice Boltzmann models and even other computational technologies for heat and mass transfer systems.
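
The physical condition enforced at a conjugate interface is continuity of temperature and of normal heat flux. A minimal sketch (ordinary one-dimensional finite differences, not the lattice Boltzmann counter-extrapolation itself) solves this continuity condition for the interface temperature between two one-sided nodes; all values are hypothetical.

```python
def interface_temperature(k1, T1, d1, k2, T2, d2):
    """Equate the one-sided fluxes k_i * (T_i - T_if) / d_i across a
    perfect interface and solve for the interface temperature T_if."""
    g1, g2 = k1 / d1, k2 / d2
    return (g1 * T1 + g2 * T2) / (g1 + g2)

# Conductivities 1 and 4, nearest nodes at 100 and 0 degrees, both at
# distance 0.5 from the interface.
T_if = interface_temperature(k1=1.0, T1=100.0, d1=0.5, k2=4.0, T2=0.0, d2=0.5)
```

Here T_if = 20: the flux leaving medium 1, 1*(100-20)/0.5 = 160, matches the flux entering medium 2, 4*(20-0)/0.5 = 160.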

  13. A method to improve the nutritional quality of foods and beverages based on dietary recommendations.

    PubMed

    Nijman, C A J; Zijp, I M; Sierksma, A; Roodenburg, A J C; Leenen, R; van den Kerkhoff, C; Weststrate, J A; Meijer, G W

    2007-04-01

    The increasing consumer interest in health prompted Unilever to develop a globally applicable method (Nutrition Score) to evaluate and improve the nutritional composition of its foods and beverages portfolio. Based on (inter)national dietary recommendations, generic benchmarks were developed to evaluate foods and beverages on their content of trans fatty acids, saturated fatty acids, sodium and sugars. High intakes of these key nutrients are associated with undesirable health effects. In principle, the developed generic benchmarks can be applied globally to any food and beverage product. Product category-specific benchmarks were developed when it was not feasible to meet generic benchmarks because of technological and/or taste factors. The whole Unilever global foods and beverages portfolio has been evaluated and actions have been taken to improve its nutritional quality. The advantages of this method over other initiatives to assess the nutritional quality of foods are its basis in the latest nutritional scientific insights and its global applicability. The Nutrition Score is the first simple, transparent and straightforward method that can be applied globally and across all food and beverage categories to evaluate nutritional composition. It can help food manufacturers improve the nutritional value of their products. In addition, the Nutrition Score can be a starting point for a powerful front-of-pack health indicator. This can have a significant positive impact on public health, especially when implemented by all food manufacturers.
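
Mechanically, a benchmark scheme of this kind reduces to comparing a product's content of each key nutrient against a limit. The sketch below uses entirely hypothetical thresholds (the actual Unilever benchmarks are not reproduced here):

```python
# HYPOTHETICAL generic benchmarks per 100 g -- illustrative only, not
# the published Nutrition Score limits.
BENCHMARKS = {"trans_fat_g": 0.1, "sat_fat_g": 1.5,
              "sodium_mg": 400, "sugars_g": 10}

def exceeded(product):
    """Return the key nutrients whose content exceeds its benchmark."""
    return [k for k, limit in BENCHMARKS.items()
            if product.get(k, 0) > limit]

flags = exceeded({"trans_fat_g": 0.0, "sat_fat_g": 3.2,
                  "sodium_mg": 150, "sugars_g": 22})
# flags -> ["sat_fat_g", "sugars_g"]
```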

  14. Measurement of microchannel fluidic resistance with a standard voltage meter.

    PubMed

    Godwin, Leah A; Deal, Kennon S; Hoepfner, Lauren D; Jackson, Louis A; Easley, Christopher J

    2013-01-03

    A simplified method for measuring the fluidic resistance (R(fluidic)) of microfluidic channels is presented, in which the electrical resistance (R(elec)) of a channel filled with a conductivity standard solution can be measured and directly correlated to R(fluidic) using a simple equation. Although a slight correction factor could be applied in this system to improve accuracy, results showed that a standard voltage meter could be used without calibration to determine R(fluidic) to within 12% error. Results accurate to within 2% were obtained when a geometric correction factor was applied using these particular channels. When compared to standard flow rate measurements, such as meniscus tracking in outlet tubing, this approach provided a more straightforward alternative and resulted in lower measurement error. The method was validated using 9 different fluidic resistance values (from ∼40 to 600 kPa s mm(-3)) and over 30 separately fabricated microfluidic devices. Furthermore, since the method is analogous to resistance measurements with a voltage meter in electrical circuits, dynamic R(fluidic) measurements were possible in more complex microfluidic designs. Microchannel R(elec) was shown to dynamically mimic pressure waveforms applied to a membrane in a variable microfluidic resistor. The variable resistor was then used to dynamically control aqueous-in-oil droplet sizes and spacing, providing a unique and convenient control system for droplet-generating devices. This conductivity-based method for fluidic resistance measurement is thus a useful tool for static or real-time characterization of microfluidic systems. Copyright © 2012 Elsevier B.V. All rights reserved.
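
The proportionality the method exploits follows from two textbook resistance formulas for a filled rectangular channel: the electrical resistance of the conductivity standard and the hydraulic resistance share the same length and width dependence, so one tracks the other. A sketch with hypothetical channel dimensions (the paper's exact correlation equation and correction factor are not reproduced here):

```python
def r_elec(L, w, h, sigma):
    """Electrical resistance of a channel filled with a conductivity
    standard of conductivity sigma (S/m): R = L / (sigma * w * h)."""
    return L / (sigma * w * h)

def r_fluidic(L, w, h, mu):
    """Textbook hydraulic resistance of a low-aspect-ratio rectangular
    channel (h << w): R ~ 12 mu L / (w h^3 (1 - 0.63 h/w))."""
    return 12 * mu * L / (w * h**3 * (1 - 0.63 * h / w))

# Hypothetical channel: 1 cm long, 100 um wide, 20 um deep; water and a
# 1 S/m conductivity standard.
L, w, h = 1e-2, 100e-6, 20e-6
Re = r_elec(L, w, h, sigma=1.0)      # ohms
Rf = r_fluidic(L, w, h, mu=1e-3)     # Pa s / m^3
```

For fixed depth and aspect ratio, the ratio Rf/Re depends only on the fluid properties and channel cross-section, which is why a voltage meter reading can stand in for a flow measurement.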

  15. Measurement of Microchannel Fluidic Resistance with a Standard Voltage Meter

    PubMed Central

    Godwin, Leah A.; Deal, Kennon S.; Hoepfner, Lauren D.; Jackson, Louis A.; Easley, Christopher J.

    2012-01-01

    A simplified method for measuring the fluidic resistance (Rfluidic) of microfluidic channels is presented, in which the electrical resistance (Relec) of a channel filled with a conductivity standard solution can be measured and directly correlated to Rfluidic using a simple equation. Although a slight correction factor could be applied in this system to improve accuracy, results showed that a standard voltage meter could be used without calibration to determine Rfluidic to within 12% error. Results accurate to within 2% were obtained when a geometric correction factor was applied using these particular channels. When compared to standard flow rate measurements, such as meniscus tracking in outlet tubing, this approach provided a more straightforward alternative and resulted in lower measurement error. The method was validated using 9 different fluidic resistance values (from ~40 – 600 kPa s mm−3) and over 30 separately fabricated microfluidic devices. Furthermore, since the method is analogous to resistance measurements with a voltage meter in electrical circuits, dynamic Rfluidic measurements were possible in more complex microfluidic designs. Microchannel Relec was shown to dynamically mimic pressure waveforms applied to a membrane in a variable microfluidic resistor. The variable resistor was then used to dynamically control aqueous-in-oil droplet sizes and spacing, providing a unique and convenient control system for droplet-generating devices. This conductivity-based method for fluidic resistance measurement is thus a useful tool for static or real-time characterization of microfluidic systems. PMID:23245901

  16. Intra-individual reaction time variability and all-cause mortality over 17 years: a community-based cohort study.

    PubMed

    Batterham, Philip J; Bunce, David; Mackinnon, Andrew J; Christensen, Helen

    2014-01-01

    Very few studies have examined the association between intra-individual reaction time variability and subsequent mortality. Furthermore, the ability of simple measures of variability to predict mortality has not been compared with more complex measures. In a prospective cohort study, 896 community-based Australian adults aged 70+ were interviewed up to four times from 1990 to 2002, with vital status assessed until June 2007. From this cohort, 770-790 participants were included in Cox proportional hazards regression models of survival. Vital status and time in study were used to conduct survival analyses. The mean reaction time and three measures of intra-individual reaction time variability were calculated separately across 20 trials of simple and choice reaction time tasks. Models were adjusted for a range of demographic, physical health and mental health measures. Greater intra-individual simple reaction time variability, as assessed by the raw standard deviation (raw SD), coefficient of variation (CV) or intra-individual standard deviation (ISD), was strongly associated with an increased hazard of all-cause mortality in adjusted Cox regression models. Mean reaction time had no significant association with mortality. Intra-individual variability in simple reaction time thus appears to have a robust association with mortality over 17 years. Health professionals such as neuropsychologists may benefit in their detection of neuropathology by supplementing neuropsychiatric testing with the straightforward process of testing simple reaction time and calculating the raw SD or CV.
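
The two simple measures the study highlights are quick to compute from a block of trials. A sketch with hypothetical reaction times (the ISD, which involves detrending, is omitted):

```python
import statistics

def rt_variability(trials_ms):
    """Mean, raw standard deviation, and coefficient of variation
    across a block of reaction-time trials."""
    mean = statistics.mean(trials_ms)
    raw_sd = statistics.stdev(trials_ms)
    return mean, raw_sd, raw_sd / mean   # CV is scale-free

# Hypothetical 20-trial simple reaction time block (ms).
trials = [310, 295, 402, 288, 350, 305, 299, 410, 315, 300,
          296, 330, 290, 365, 302, 298, 340, 310, 305, 320]
mean_rt, raw_sd, cv = rt_variability(trials)
```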

  17. GAPTrap: A Simple Expression System for Pluripotent Stem Cells and Their Derivatives.

    PubMed

    Kao, Tim; Labonne, Tanya; Niclis, Jonathan C; Chaurasia, Ritu; Lokmic, Zerina; Qian, Elizabeth; Bruveris, Freya F; Howden, Sara E; Motazedian, Ali; Schiesser, Jacqueline V; Costa, Magdaline; Sourris, Koula; Ng, Elizabeth; Anderson, David; Giudice, Antonietta; Farlie, Peter; Cheung, Michael; Lamande, Shireen R; Penington, Anthony J; Parish, Clare L; Thomson, Lachlan H; Rafii, Arash; Elliott, David A; Elefanty, Andrew G; Stanley, Edouard G

    2016-09-13

    The ability to reliably express fluorescent reporters or other genes of interest is important for using human pluripotent stem cells (hPSCs) as a platform for investigating cell fates and gene function. We describe a simple expression system, designated GAPTrap (GT), in which reporter genes, including GFP, mCherry, mTagBFP2, luc2, Gluc, and lacZ are inserted into the GAPDH locus in hPSCs. Independent clones harboring variations of the GT vectors expressed remarkably consistent levels of the reporter gene. Differentiation experiments showed that reporter expression was reliably maintained in hematopoietic cells, cardiac mesoderm, definitive endoderm, and ventral midbrain dopaminergic neurons. Similarly, analysis of teratomas derived from GT-lacZ hPSCs showed that β-galactosidase expression was maintained in a spectrum of cell types representing derivatives of the three germ layers. Thus, the GAPTrap vectors represent a robust and straightforward tagging system that enables indelible labeling of PSCs and their differentiated derivatives. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  18. An Alternative Derivation of the Energy Levels of the "Particle on a Ring" System

    NASA Astrophysics Data System (ADS)

    Vincent, Alan

    1996-10-01

    All acceptable wave functions must be continuous mathematical functions. This criterion limits the acceptable functions for a particle in a linear 1-dimensional box to sine functions. If, however, the linear box is bent round into a ring, acceptable wave functions are those which are continuous at the 'join'. On this model some acceptable linear functions become unacceptable for the ring and some unacceptable cosine functions become acceptable. This approach can be used to produce a straightforward derivation of the energy levels and wave functions of the particle on a ring. These simple wave mechanical systems can be used as models of linear and cyclic delocalised systems such as conjugated hydrocarbons or the benzene ring. The promotion energy of an electron can then be used to calculate the wavelength of absorption of uv light. The simple model gives results of the correct order of magnitude and shows that, as the chain length increases, the uv maximum moves to longer wavelengths, as found experimentally.
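The continuity condition at the "join" can be written out explicitly. This is the standard textbook result, sketched here for a ring of circumference L:

```latex
% Bending a box of length L into a ring imposes periodicity at the join:
%   \psi(s + L) = \psi(s),
% so acceptable solutions are sines and cosines with an integer number of
% wavelengths around the ring, i.e. wavenumber k = 2\pi n / L:
\psi_n(s) \propto \sin\!\left(\frac{2\pi n s}{L}\right),\ \cos\!\left(\frac{2\pi n s}{L}\right),
\qquad n = 0, 1, 2, \dots
% The corresponding energies are
E_n = \frac{\hbar^2 k^2}{2m} = \frac{n^2 h^2}{2 m L^2},
% four times the particle-in-a-box value n^2 h^2/(8 m L^2) for the same L,
% with each level for n \ge 1 doubly degenerate (sine and cosine solutions).
```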

  19. Flow derivatives and curvatures for a normal shock

    NASA Astrophysics Data System (ADS)

    Emanuel, G.

    2018-03-01

    A detached bow shock wave is strongest where it is normal to the upstream velocity. While the jump conditions across the shock are straightforward, many properties, such as the shock's curvatures and derivatives of the pressure, along and normal to a normal shock, are indeterminate. A novel procedure is introduced for resolving the indeterminacy when the unsteady flow is three-dimensional and the upstream velocity may be nonuniform. Utilizing this procedure, normal shock relations are provided for the nonunique orientation of the flow plane and the corresponding shock's curvatures and, e.g., the downstream normal derivatives of the pressure and the velocity components. These algebraic relations explicitly show the dependence of these parameters on the shock's shape and the upstream velocity gradient. A simple relation, valid only for a normal shock, is obtained for the average curvatures. Results are also obtained when the shock is an elliptic paraboloid shock. These derivatives are both simple and proportional to the average curvature.

  20. Hippocampal Replay is Not a Simple Function of Experience

    PubMed Central

    Gupta, Anoopum S.; van der Meer, Matthijs A. A.; Touretzky, David S.; Redish, A. David

    2015-01-01

    Replay of behavioral sequences in the hippocampus during sharp-wave-ripple complexes (SWRs) provides a potential mechanism for memory consolidation and the learning of knowledge structures. Current hypotheses imply that replay should straightforwardly reflect recent experience. However, we find these hypotheses to be incompatible with the content of replay on a task with two distinct behavioral sequences (A&B). We observed forward and backward replay of B even when rats had been performing A for >10 minutes. Furthermore, replay of non-local sequence B occurred more often when B was infrequently experienced. Neither forward nor backward sequences preferentially represented highly-experienced trajectories within a session. Additionally, we observed the construction of never-experienced novel-path sequences. These observations challenge the idea that sequence activation during SWRs is a simple replay of recent experience. Instead, replay reflected all physically available trajectories within the environment, suggesting a potential role in active learning and maintenance of the cognitive map. PMID:20223204

  1. Self-assembly synthesis of precious-metal-free 3D ZnO nano/micro spheres with excellent photocatalytic hydrogen production from solar water splitting

    NASA Astrophysics Data System (ADS)

    Guo, Si-yao; Zhao, Tie-jun; Jin, Zu-quan; Wan, Xiao-mei; Wang, Peng-gang; Shang, Jun; Han, Song

    2015-10-01

    A simple and straightforward solution growth route is developed to prepare microporous 3D nano/micro ZnO microspheres with a large BET surface area of 288 m2 g-1 at room temperature. The formation mechanism of the hierarchical 3D nano/micro ZnO microspheres and their corresponding hydrogen evolution performance are discussed in depth. In particular, these novel hierarchical 3D ZnO microspheres sustain undiminished hydrogen evolution for at least 24 h under simulated solar light illumination, even without a precious metal as cocatalyst. Since the complex production processes of photocatalysts and the high cost of precious-metal cocatalysts remain major constraints hindering the application of solar water splitting, these 3D nano/micro ZnO microspheres could be expected to be applicable in precious-metal-free solar water splitting systems owing to their low cost, simple preparation and high catalytic activity.

  2. Multiple advanced logic gates made of DNA-Ag nanocluster and the application for intelligent detection of pathogenic bacterial genes.

    PubMed

    Lin, Xiaodong; Liu, Yaqing; Deng, Jiankang; Lyu, Yanlong; Qian, Pengcheng; Li, Yunfei; Wang, Shuo

    2018-02-21

    The integration of multiple DNA logic gates on a universal platform to implement advanced logic functions is a critical challenge for DNA computing. Herein, a straightforward and powerful strategy, in which a guanine-rich DNA sequence lights up a silver nanocluster and a fluorophore, was developed to construct a library of logic gates on a simple DNA-templated silver nanocluster (DNA-AgNCs) platform. This library included the basic logic gates YES, AND, OR, INHIBIT, and XOR, which were further integrated into complex logic circuits to implement diverse advanced arithmetic/non-arithmetic functions including a half-adder, half-subtractor, multiplexer, and demultiplexer. Under UV irradiation, all the logic functions could be instantly visualized, confirming excellent repeatability. The logic operations were entirely based on DNA hybridization under enzyme-free and label-free conditions, avoiding waste accumulation and reducing cost. Interestingly, a DNA-AgNCs-based multiplexer was, for the first time, used as an intelligent biosensor to identify pathogenic genes (E. coli and S. aureus genes) with high sensitivity. The investigation provides a prototype for the wireless integration of multiple devices on even the simplest single-strand DNA platform to perform diverse complex functions in a straightforward and cost-effective way.
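The half-adder reported above is the combination of an XOR gate (sum bit) and an AND gate (carry bit). The molecular implementation works by DNA hybridization; only the Boolean behaviour it realizes is sketched here.

```python
# Boolean behaviour of a half-adder built from XOR and AND gates,
# as in the DNA-AgNCs logic circuits described above (logic only,
# not the hybridization chemistry).

def half_adder(a: int, b: int) -> tuple[int, int]:
    """Return (sum, carry) for one-bit inputs: sum = a XOR b, carry = a AND b."""
    return a ^ b, a & b

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"inputs {a},{b} -> sum {s}, carry {c}")
```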

  3. STORMWATER BEST MANAGEMENT PRACTICE MONITORING

    EPA Science Inventory

    Implementation of an effective BMP monitoring program is not a straightforward task. BMPs by definition are devices, practices, or methods used to manage stormwater runoff. This umbrella term lumps widely varying techniques into a single category. Also, with the existence of ...

  4. Empty backhaul, an opportunity to avoid fuel expended on the road.

    DOT National Transportation Integrated Search

    2009-11-01

    "An effort was undertaken to determine whether or not vehicle telemetry could provide data which would indicate whether a commercial vehicle was operating under loaded or unloaded conditions. With a straightforward method for establishing the loa...

  5. Analytic Evolution of Singular Distribution Amplitudes in QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tandogan Kunkel, Asli

    2014-08-01

    Distribution amplitudes (DAs) are the basic functions that contain information about the quark momentum. DAs are necessary to describe hard exclusive processes in quantum chromodynamics. We describe a method of analytic evolution of DAs that have singularities, such as nonzero values at the end points of the support region, jumps at some points inside the support region, and cusps. We illustrate the method by applying it to the evolution of a flat (constant) DA and an antisymmetric flat DA, and then use the method for evolution of the two-photon generalized distribution amplitude. Our approach to DA evolution has advantages over the standard method of expansion in Gegenbauer polynomials [1, 2] and over a straightforward iteration of an initial distribution with the evolution kernel. Expansion in Gegenbauer polynomials requires an infinite number of terms in order to accurately reproduce functions in the vicinity of singular points. Straightforward iteration of an initial distribution produces logarithmically divergent terms at each iteration. In our method the logarithmic singularities are summed from the start, which immediately produces a continuous curve. Afterwards, in order to get precise results, only one or two iterations are needed.

  6. The Effects of Computer-Supported Inquiry-Based Learning Methods and Peer Interaction on Learning Stellar Parallax

    ERIC Educational Resources Information Center

    Ruzhitskaya, Lanika

    2011-01-01

    The presented research study investigated the effects of computer-supported inquiry-based learning and peer interaction methods on the effectiveness of learning a scientific concept. The stellar parallax concept was selected as a basic yet important scientific construct in astronomy, which is based on a straightforward relationship of several…

  7. A new method for long-term storage of titred microbial standard solutions suitable for microbiologic quality control activities of pharmaceutical companies.

    PubMed

    Chiellini, Carolina; Mocali, Stefano; Fani, Renato; Ferro, Iolanda; Bruschi, Serenella; Pinzani, Alessandro

    2016-08-01

    Commercially available lyophilized microbial standards are expensive and subject to reduction in cell viability due to freeze-drying stress. Here we introduce an inexpensive and straightforward method for in-house microbial standard preparation and cryoconservation that preserves constant cell titre and cell viability over 14 months.

  8. Kangaroo – A pattern-matching program for biological sequences

    PubMed Central

    2002-01-01

    Background: Biologists are often interested in performing a simple database search to identify proteins or genes that contain a well-defined sequence pattern. Many databases do not provide straightforward or readily available query tools to perform simple searches, such as identifying transcription binding sites, protein motifs, or repetitive DNA sequences. However, in many cases simple pattern-matching searches can reveal a wealth of information. We present in this paper a regular expression pattern-matching tool that was used to identify short repetitive DNA sequences in human coding regions for the purpose of identifying potential mutation sites in mismatch repair deficient cells. Results: Kangaroo is a web-based regular expression pattern-matching program that can search for patterns in DNA, protein, or coding region sequences in ten different organisms. The program is implemented to facilitate a wide range of queries with no restriction on the length or complexity of the query expression. The program is accessible on the web at http://bioinfo.mshri.on.ca/kangaroo/ and the source code is freely distributed at http://sourceforge.net/projects/slritools/. Conclusion: A low-level simple pattern-matching application can prove to be a useful tool in many research settings. For example, Kangaroo was used to identify potential genetic targets in a human colorectal cancer variant that is characterized by a high frequency of mutations in coding regions containing mononucleotide repeats. PMID:12150718
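A minimal local equivalent of the kind of query described (finding mononucleotide repeats with a regular expression) can be sketched in a few lines. The repeat-length threshold of 8 and the toy sequence are arbitrary illustrations, not Kangaroo's settings.

```python
import re

def mononucleotide_repeats(seq, min_len=8):
    """Return (start, run) pairs for runs of a single base of at least min_len."""
    # ([ACGT]) captures one base; \1{min_len-1,} requires it to repeat,
    # so the whole match is a run of min_len or more identical bases.
    pattern = re.compile(r"([ACGT])\1{%d,}" % (min_len - 1))
    return [(m.start(), m.group(0)) for m in pattern.finditer(seq)]

# Toy "coding region" with two embedded mononucleotide repeats
coding_region = "ATGGCC" + "A" * 10 + "TTGCGC" + "C" * 9 + "TAA"
print(mononucleotide_repeats(coding_region))
```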

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Haixia; Zhang, Jing

    We propose a scheme for continuous-variable quantum cloning of coherent states with phase-conjugate input modes using linear optics. The quantum cloning machine yields M identical optimal clones from N replicas of a coherent state and N replicas of its phase conjugate. This scheme can be straightforwardly implemented with the setups accessible at present, since its optical implementation only employs simple linear optical elements and homodyne detection. Compared with the original scheme for continuous-variable quantum cloning with phase-conjugate input modes proposed by Cerf and Iblisdir [Phys. Rev. Lett. 87, 247903 (2001)], which utilized a nondegenerate optical parametric amplifier, our scheme loses the output of phase-conjugate clones and is regarded as irreversible quantum cloning.

  10. Food Addiction: An Evolving Nonlinear Science

    PubMed Central

    Shriner, Richard; Gold, Mark

    2014-01-01

    The purpose of this review is to familiarize readers with the role that addiction plays in the formation and treatment of obesity, type 2 diabetes and disorders of eating. We will outline several useful models that integrate metabolism, addiction, and human relationship adaptations to eating. A special effort will be made to demonstrate how the use of simple and straightforward nonlinear models can and are being used to improve our knowledge and treatment of patients suffering from nutritional pathology. Moving forward, the reader should be able to incorporate some of the findings in this review into their own practice, research, teaching efforts or other interests in the fields of nutrition, diabetes, and/or bariatric (weight) management. PMID:25421535

  11. Note: A three-dimensional calibration device for the confocal microscope.

    PubMed

    Jensen, K E; Weitz, D A; Spaepen, F

    2013-01-01

    Modern confocal microscopes enable high-precision measurement in three dimensions by collecting stacks of 2D (x-y) images that can be assembled digitally into a 3D image. It is difficult, however, to ensure position accuracy, particularly along the optical (z) axis where scanning is performed by a different physical mechanism than in x-y. We describe a simple device to calibrate simultaneously the x, y, and z pixel-to-micrometer conversion factors for a confocal microscope. By taking a known 2D pattern and positioning it at a precise angle with respect to the microscope axes, we created a 3D reference standard. The device is straightforward to construct and easy to use.
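The geometry behind the device is simple enough to sketch: features of a 2D pattern of known pitch, tilted at a known angle, come into best focus at heights separated by pitch × tan(tilt), which calibrates the z step per slice. The numbers below are hypothetical, not the paper's.

```python
import math

def z_step_um(pitch_um, tilt_deg, slices_between_features):
    """Micrometers per z-slice from a known pattern tilted at a known angle.

    Adjacent features separated in-plane by pitch_um sit at heights differing
    by pitch_um * tan(tilt); dividing by the number of z-slices between their
    best-focus planes gives the z pixel-to-micrometer conversion factor.
    """
    dz = pitch_um * math.tan(math.radians(tilt_deg))
    return dz / slices_between_features

# Hypothetical numbers: 10 um grid pitch, 30 degree tilt, features coming
# into focus 12 slices apart.
print(f"{z_step_um(10.0, 30.0, 12):.3f} um per slice")
```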

  12. Generating circularly polarized radiation in the extreme ultraviolet spectral range at the free-electron laser FLASH

    NASA Astrophysics Data System (ADS)

    von Korff Schmising, Clemens; Weder, David; Noll, Tino; Pfau, Bastian; Hennecke, Martin; Strüber, Christian; Radu, Ilie; Schneider, Michael; Staeck, Steffen; Günther, Christian M.; Lüning, Jan; Merhe, Alaa el dine; Buck, Jens; Hartmann, Gregor; Viefhaus, Jens; Treusch, Rolf; Eisebitt, Stefan

    2017-05-01

    A new device for polarization control at the free electron laser facility FLASH1 at DESY has been commissioned for user operation. The polarizer is based on phase retardation upon reflection off metallic mirrors. Its performance is characterized in three independent measurements and confirms the theoretical predictions of efficient and broadband generation of circularly polarized radiation in the extreme ultraviolet spectral range from 35 eV to 90 eV. The degree of circular polarization reaches up to 90% while maintaining high total transmission values exceeding 30%. The simple design of the device allows straightforward alignment for user operation and rapid switching between left and right circularly polarized radiation.

  13. Implementing the DC Mode in Cosmological Simulations with Supercomoving Variables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gnedin, Nickolay Y; Kravtsov, Andrey V; Rudd, Douglas H

    2011-06-02

    As emphasized by previous studies, proper treatment of the density fluctuation on the fundamental scale of a cosmological simulation volume - the 'DC mode' - is critical for accurate modeling of spatial correlations on scales ~> 10% of the simulation box size. We provide further illustration of the effects of the DC mode on the abundance of halos in small boxes and show that it is straightforward to incorporate this mode in cosmological codes that use the 'supercomoving' variables. The equations governing the evolution of dark matter and baryons recast with these variables are particularly simple and include the expansion factor, and hence the effect of the DC mode, explicitly only in the Poisson equation.

  14. Valuation of exotic options in the framework of Levy processes

    NASA Astrophysics Data System (ADS)

    Milev, Mariyan; Georgieva, Svetla; Markovska, Veneta

    2013-12-01

    In this paper we explore a straightforward procedure for pricing derivatives with the Monte Carlo approach when the underlying process is a jump-diffusion. We have compared the Black-Scholes model with one of its extensions, the Merton model. The latter is better at capturing market phenomena and is comparable to stochastic volatility models in terms of pricing accuracy. We present simulations of asset paths and pricing of barrier options for both geometric Brownian motion and exponential Levy processes, as in the concrete case of the Merton model. A desired level of accuracy is obtained with simple computer operations in MATLAB with efficient computational time.
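The pricing procedure described can be sketched as a plain Monte Carlo loop. The parameters below (jump intensity, lognormal jump-size distribution, barrier level) are illustrative assumptions, and the paper's MATLAB implementation is replaced here by NumPy.

```python
import numpy as np

# Monte Carlo pricing of a down-and-out call under a Merton jump-diffusion.
# All parameter values are illustrative, not taken from the paper.
rng = np.random.default_rng(42)
S0, K, B = 100.0, 100.0, 80.0          # spot, strike, knock-out barrier
r, sigma, T = 0.05, 0.2, 1.0           # risk-free rate, diffusive vol, maturity
lam, mu_j, sig_j = 0.3, -0.10, 0.15    # jump intensity, lognormal jump params
n_paths, n_steps = 50_000, 252
dt = T / n_steps
kappa = np.exp(mu_j + 0.5 * sig_j**2) - 1.0  # E[e^J] - 1, drift compensator

log_s = np.full(n_paths, np.log(S0))
min_s = np.full(n_paths, S0)
for _ in range(n_steps):
    z = rng.standard_normal(n_paths)
    n_jumps = rng.poisson(lam * dt, n_paths)
    # Sum of n_jumps lognormal jumps is conditionally normal in log-space
    jump = mu_j * n_jumps + sig_j * np.sqrt(n_jumps) * rng.standard_normal(n_paths)
    log_s += (r - 0.5 * sigma**2 - lam * kappa) * dt + sigma * np.sqrt(dt) * z + jump
    min_s = np.minimum(min_s, np.exp(log_s))  # track barrier crossings

payoff = np.maximum(np.exp(log_s) - K, 0.0)
disc = np.exp(-r * T)
vanilla = disc * payoff.mean()
down_and_out = disc * np.where(min_s > B, payoff, 0.0).mean()  # knocked-out paths pay 0
print(f"vanilla ~ {vanilla:.2f}, down-and-out ~ {down_and_out:.2f}")
```

Since knocked-out paths are a subset of all paths, the down-and-out price can never exceed the vanilla price computed on the same paths; that invariant is a cheap sanity check on the simulation.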

  15. Common Magnets, Unexpected Polarities

    NASA Astrophysics Data System (ADS)

    Olson, Mark

    2013-11-01

    In this paper, I discuss a "misconception" in magnetism so simple and pervasive as to be typically unnoticed. That magnets have poles might be considered one of the more straightforward notions in introductory physics. However, the magnets common to students' experiences are likely different from those presented in educational contexts. This leads students, in my experience, to frequently and erroneously attribute magnetic poles based on geometric associations rather than actual observed behavior. This polarity discrepancy can provide teachers the opportunity to engage students in authentic inquiry about objects in their daily experiences. I've found that investigation of the magnetic polarities of common magnets provides a productive context for students in which to develop valuable and authentic scientific inquiry practices.

  16. Applications of multiple-constraint matrix updates to the optimal control of large structures

    NASA Technical Reports Server (NTRS)

    Smith, S. W.; Walcott, B. L.

    1992-01-01

    Low-authority control or vibration suppression in large, flexible space structures can be formulated as a linear feedback control problem requiring computation of displacement and velocity feedback gain matrices. To ensure stability in the uncontrolled modes, these gain matrices must be symmetric and positive definite. In this paper, efficient computation of symmetric, positive-definite feedback gain matrices is accomplished through the use of multiple-constraint matrix update techniques originally developed for structural identification applications. Two systems were used to illustrate the application: a simple spring-mass system and a planar truss. From these demonstrations, use of this multiple-constraint technique is seen to provide a straightforward approach for computing the low-authority gains.
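The stability claim above (symmetric positive-definite gains yield a stable closed loop) is easy to verify numerically for a spring-mass example. The matrices below are a made-up 2-DOF illustration, not the paper's truss model, and random SPD matrices stand in for the multiple-constraint-updated gains.

```python
import numpy as np

# Closed loop for M q'' + K q = -Gd q - Gv q':
#   state x = [q, q'],  x' = A x,
#   A = [[0, I], [-M^{-1}(K + Gd), -M^{-1} Gv]].
# With M, K, Gd, Gv all symmetric positive definite, the energy argument
# gives Re(eig(A)) < 0, i.e. asymptotic stability.

rng = np.random.default_rng(1)

def spd(n):
    """Random symmetric positive-definite matrix (illustrative gain)."""
    a = rng.standard_normal((n, n))
    return a @ a.T + n * np.eye(n)

n = 2
M = np.diag([1.0, 2.0])                    # masses
K = np.array([[2.0, -1.0], [-1.0, 2.0]])   # spring-mass chain stiffness
Gd, Gv = spd(n), spd(n)                    # displacement and velocity gains

Minv = np.linalg.inv(M)
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-Minv @ (K + Gd), -Minv @ Gv]])
eigs = np.linalg.eigvals(A)
print("max Re(eig) =", eigs.real.max())    # negative -> asymptotically stable
```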

  17. Assessing exposure to transformation products of soil-applied organic contaminants in surface water: comparison of model predictions and field data.

    PubMed

    Kern, Susanne; Singer, Heinz; Hollender, Juliane; Schwarzenbach, René P; Fenner, Kathrin

    2011-04-01

    Transformation products (TPs) of chemicals released to soil, for example, pesticides, are regularly detected in surface and groundwater with some TPs even dominating observed pesticide levels. Given the large number of TPs potentially formed in the environment, straightforward prioritization methods based on available data and simple, evaluative models are required to identify TPs with a high aquatic exposure potential. While different such methods exist, none of them has so far been systematically evaluated against field data. Using a dynamic multimedia, multispecies model for TP prioritization, we compared the predicted relative surface water exposure potential of pesticides and their TPs with experimental data for 16 pesticides and 46 TPs measured in a small river draining a Swiss agricultural catchment. Twenty TPs were determined quantitatively using solid-phase extraction liquid chromatography mass spectrometry (SPE-LC-MS/MS), whereas the remaining 26 TPs could only be detected qualitatively because of the lack of analytical reference standards. Accordingly, the two sets of TPs were used for quantitative and qualitative model evaluation, respectively. Quantitative comparison of predicted with measured surface water exposure ratios for 20 pairs of TPs and parent pesticides indicated agreement within a factor of 10, except for chloridazon-desphenyl and chloridazon-methyl-desphenyl. The latter two TPs were found to be present in elevated concentrations during baseflow conditions and in groundwater samples across Switzerland, pointing toward high concentrations in exfiltrating groundwater. A simple leaching relationship was shown to qualitatively agree with the observed baseflow concentrations and to thus be useful in identifying TPs for which the simple prioritization model might underestimate actual surface water concentrations. 
Application of the model to the 26 qualitatively analyzed TPs showed that most of those TPs categorized as exhibiting a high aquatic exposure potential could be confirmed to be present in the majority of water samples investigated. On the basis of these results, we propose a generally applicable, model-based approach to identify those TPs of soil-applied organic contaminants that exhibit a high aquatic exposure potential to prioritize them for higher-tier, experimental investigations.

  18. Stability Analysis of Algebraic Reconstruction for Immersed Boundary Methods with Application in Flow and Transport in Porous Media

    NASA Astrophysics Data System (ADS)

    Yousefzadeh, M.; Battiato, I.

    2017-12-01

    Flow and reactive transport problems in porous media often involve complex geometries with stationary or evolving boundaries due to absorption and dissolution processes. Grid-based methods (e.g. finite volume, finite element, etc.) are a vital tool for studying these problems. Yet implementing these methods requires one first to answer the question of what type of grid to use. Among the possible answers, Cartesian grids are one of the most attractive options, as they possess a simple discretization stencil and are usually straightforward to generate at essentially no computational cost. The Immersed Boundary Method (IBM), a Cartesian-grid-based methodology, maintains most of the useful features of structured grids while exhibiting a high level of resilience in dealing with complex geometries. These features make it increasingly attractive for modeling transport in evolving porous media, as the cost of grid generation is greatly reduced. Yet stability issues and severe time-step restrictions due to explicit-time implementations, combined with limited studies on the implementation of Neumann (constant flux) and linear and nonlinear Robin (e.g. reaction) boundary conditions (BCs), have significantly limited the applicability of IBMs to transport in porous media. We have developed an implicit IBM capable of handling all types of BCs and addressed several numerical issues, including unconditional stability criteria, compactness, and the reduction of spurious oscillations near the immersed boundary. We tested the method for several transport and flow scenarios, including dissolution processes in porous media, and demonstrate its capabilities. Successful validation against both experimental and numerical data has been carried out.

  19. A comparison of two methods for retrieving ICD-9-CM data: the effect of using an ontology-based method for handling terminology changes.

    PubMed

    Yu, Alexander C; Cimino, James J

    2011-04-01

    Most existing controlled terminologies can be characterized as collections of terms, wherein the terms are arranged in a simple list or organized in a hierarchy. These kinds of terminologies are considered useful for standardizing terms and encoding data and are currently used in many existing information systems. However, they suffer from a number of limitations that make data reuse difficult. Relatively recently, it has been proposed that formal ontological methods can be applied to some of the problems of terminological design. Biomedical ontologies organize concepts (embodiments of knowledge about biomedical reality) whereas terminologies organize terms (what is used to code patient data at a certain point in time, based on the particular terminology version). However, the application of these methods to existing terminologies is not straightforward. The use of these terminologies is firmly entrenched in many systems, and what might seem to be a simple option of replacing these terminologies is not possible. Moreover, these terminologies evolve over time in order to suit the needs of users. Any methodology must therefore take these constraints into consideration, hence the need for formal methods of managing changes. Along these lines, we have developed a formal representation of the concept-term relation, around which we have also developed a methodology for management of terminology changes. The objective of this study was to determine whether our methodology would result in improved retrieval of data. Design: Comparison of two methods for retrieving data encoded with terms from the International Classification of Diseases (ICD-9-CM), based on their recall when retrieving data for ICD-9-CM terms whose codes had changed but which had retained their original meaning (code change). Measurements: Recall and interclass correlation coefficient. Results: Statistically significant differences were detected (p<0.05) with the McNemar test for two terms whose codes had changed. Furthermore, when all the cases are combined in an overall category, our method also performs statistically significantly better (p<0.05). Conclusion: Our study shows that an ontology-based ICD-9-CM data retrieval method that takes into account the effects of terminology changes performs better on recall than one that does not in the retrieval of data for terms whose codes had changed but which retained their original meaning. Copyright © 2011 Elsevier Inc. All rights reserved.

  20. A Comparison of Two Methods for Retrieving ICD-9-CM data: The Effect of Using an Ontology-based Method for Handling Terminology Changes

    PubMed Central

    Yu, Alexander C.; Cimino, James J.

    2012-01-01

    Objective: Most existing controlled terminologies can be characterized as collections of terms, wherein the terms are arranged in a simple list or organized in a hierarchy. These kinds of terminologies are considered useful for standardizing terms and encoding data and are currently used in many existing information systems. However, they suffer from a number of limitations that make data reuse difficult. Relatively recently, it has been proposed that formal ontological methods can be applied to some of the problems of terminological design. Biomedical ontologies organize concepts (embodiments of knowledge about biomedical reality) whereas terminologies organize terms (what is used to code patient data at a certain point in time, based on the particular terminology version). However, the application of these methods to existing terminologies is not straightforward. The use of these terminologies is firmly entrenched in many systems, and what might seem to be a simple option of replacing these terminologies is not possible. Moreover, these terminologies evolve over time in order to suit the needs of users. Any methodology must therefore take these constraints into consideration, hence the need for formal methods of managing changes. Along these lines, we have developed a formal representation of the concept-term relation, around which we have also developed a methodology for management of terminology changes. The objective of this study was to determine whether our methodology would result in improved retrieval of data. Design: Comparison of two methods for retrieving data encoded with terms from the International Classification of Diseases (ICD-9-CM), based on their recall when retrieving data for ICD-9-CM terms whose codes had changed but which had retained their original meaning (code change). Measurements: Recall and interclass correlation coefficient. Results: Statistically significant differences were detected (p<0.05) with the McNemar test for two terms whose codes had changed. Furthermore, when all the cases are combined in an overall category, our method also performs statistically significantly better (p<0.05). Conclusion: Our study shows that an ontology-based ICD-9-CM data retrieval method that takes into account the effects of terminology changes performs better on recall than one that does not in the retrieval of data for terms whose codes had changed but which retained their original meaning. PMID:21262390

  1. A straightforward method to determine flavouring substances in food by GC-MS.

    PubMed

    Lopez, Patricia; van Sisseren, Maarten; De Marco, Stefania; Jekel, Ad; de Nijs, Monique; Mol, Hans G J

    2015-05-01

    A straightforward GC-MS method was developed to determine the occurrence of fourteen flavouring compounds in food. It was successfully validated for four generic types of food (liquids, semi-solids, dry solids and fatty solids) in terms of limit of quantification, linearity, selectivity, matrix effects, recovery (53-120%) and repeatability (3-22%). The method was applied to a survey of 61 Dutch food products. The survey was designed to cover all the food commodities for which EU Regulation 1334/2008 sets maximum permitted levels. All samples were compliant with EU legislation. However, the levels of coumarin (0.6-63 mg/kg) may result in an exposure that, in the case of children, would exceed the tolerable daily intake (TDI) of 0.1 mg/kg bw/day. In addition to coumarin, estragole, methyl-eugenol, (R)-(+)-pulegone and thujone were EU-regulated substances detected in thirty-one of the products. The non-EU-regulated alkenylbenzenes, trans-anethole and myristicin, were commonly present in beverages and in herbs-containing products. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Matrix Treatment of Ray Optics.

    ERIC Educational Resources Information Center

    Quon, W. Steve

    1996-01-01

    Describes a method to combine two learning experiences--optical physics and matrix mathematics--in a straightforward laboratory experiment that allows engineering/physics students to integrate a variety of learning insights and technical skills, including using lasers, studying refraction through thin lenses, applying concepts of matrix…
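
    The matrix treatment referred to above reduces each optical element to a 2×2 ray-transfer (ABCD) matrix acting on a (height, angle) ray vector. As a minimal sketch (the standard thin-lens and free-space forms, not anything specific to this particular laboratory experiment):

    ```python
    import numpy as np

    def free_space(d):
        """Ray-transfer matrix for propagation over distance d."""
        return np.array([[1.0, d], [0.0, 1.0]])

    def thin_lens(f):
        """Ray-transfer matrix for a thin lens of focal length f."""
        return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

    # A ray is (height y, angle theta). A parallel ray through a thin lens
    # of focal length f, followed by propagation over f, crosses the axis
    # at the focal plane (height -> 0).
    system = free_space(10.0) @ thin_lens(10.0)   # lens f = 10, then 10 units of travel
    ray_in = np.array([1.0, 0.0])                 # parallel ray at height 1
    ray_out = system @ ray_in
    print(ray_out)  # height 0 at the focal plane
    ```

    Matrices compose by multiplication in reverse order of traversal, which is what makes the method straightforward to extend to multi-element systems.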

  3. NONSTATIONARY SPATIAL MODELING OF ENVIRONMENTAL DATA USING A PROCESS CONVOLUTION APPROACH

    EPA Science Inventory

    Traditional approaches to modeling spatial processes involve the specification of the covariance structure of the field. Although such methods are straightforward to understand and effective in some situations, there are often problems in incorporating non-stationarity and in ma...

  4. Methods and Techniques of Revenue Forecasting.

    ERIC Educational Resources Information Center

    Caruthers, J. Kent; Wentworth, Cathi L.

    1997-01-01

    Revenue forecasting is the critical first step in most college and university budget-planning processes. While it seems a straightforward exercise, effective forecasting requires consideration of a number of interacting internal and external variables, including demographic trends, economic conditions, and broad social priorities. The challenge…

  5. QUASI-PML FOR WAVES IN CYLINDRICAL COORDINATES. (R825225)

    EPA Science Inventory

    We prove that the straightforward extension of Berenger's original perfectly matched layer (PML) is not reflectionless at a cylindrical interface in the continuum limit. A quasi-PML is developed as an absorbing boundary condition (ABC) for the finite-difference time-domain method...

  6. Efficient Synthesis of γ-Lactams by a Tandem Reductive Amination/Lactamization Sequence

    PubMed Central

    Nöth, Julica; Frankowski, Kevin J.; Neuenswander, Benjamin; Aubé, Jeffrey; Reiser, Oliver

    2009-01-01

    A three-component method for synthesizing highly-substituted γ-lactams from readily available maleimides, aldehydes and amines is described. A new reductive amination/intramolecular lactamization sequence provides a straightforward route to the lactam products in a single manipulation. The general utility of this method is demonstrated by the parallel synthesis of a γ-lactam library. PMID:18338857

  7. Efficient synthesis of gamma-lactams by a tandem reductive amination/lactamization sequence.

    PubMed

    Nöth, Julica; Frankowski, Kevin J; Neuenswander, Benjamin; Aubé, Jeffrey; Reiser, Oliver

    2008-01-01

    A three-component method for the synthesis of highly substituted gamma-lactams from readily available maleimides, aldehydes, and amines is described. A new reductive amination/intramolecular lactamization sequence provides a straightforward route to the lactam products in a single manipulation. The general utility of this method is demonstrated by the parallel synthesis of a gamma-lactam library.

  8. A branch-migration based fluorescent probe for straightforward, sensitive and specific discrimination of DNA mutations

    PubMed Central

    Xiao, Xianjin; Wu, Tongbo; Xu, Lei; Chen, Wei

    2017-01-01

    Abstract Genetic mutations are important biomarkers for cancer diagnostics and surveillance. Preferably, the methods for mutation detection should be straightforward, highly specific and sensitive to low-level mutations within various sequence contexts, fast and applicable at room-temperature. Though some of the currently available methods have shown very encouraging results, their discrimination efficiency is still very low. Herein, we demonstrate a branch-migration based fluorescent probe (BM probe) which is able to identify the presence of known or unknown single-base variations at abundances down to 0.3%-1% within 5 min, even in highly GC-rich sequence regions. The discrimination factors between the perfect-match target and single-base mismatched target are determined to be 89–311 by measurement of their respective branch-migration products via polymerase elongation reactions. The BM probe not only enabled sensitive detection of two types of EGFR-associated point mutations located in GC-rich regions, but also successfully identified the BRAF V600E mutation in the serum from a thyroid cancer patient which could not be detected by the conventional sequencing method. The new method would be an ideal choice for high-throughput in vitro diagnostics and precise clinical treatment. PMID:28201758

  9. Simulated low-intensity optical pulsar observation with single-photon detector

    NASA Astrophysics Data System (ADS)

    Leeb, W. R.; Alves, J.; Meingast, S.; Brunner, M.

    2015-02-01

    Context. Optical radiation of pulsars offers valuable clues to the physics of neutron stars, which are our only probes of the most extreme states of matter in the present-day universe. Still, only about 1% of all cataloged pulsars have known optical counterparts. Aims: The goal of this work is to develop an observational method optimized for discovering faint optical pulsars. Methods: A single-photon detector transforms the signal received by the telescope into a pulse sequence. The events obtained are time tagged and transformed into a histogram of event time differences. The histogram envelope presents the autocorrelation of the recorded optical signal and thus displays any periodicity of the input signal. Results: Simulations show that faint pulsars radiating in the optical regime can be detected in a straightforward way. As an example, a fictitious pulsar with a V-magnitude of 24.6 mag and a signature like the Crab pulsar can be discovered within one minute using an 8-m class telescope. At the detector's peak sensitivity the average optical flux density would then amount to Fν = 0.63 μJy. With a 40-m class telescope, such as the forthcoming European ELT, the detection of optical pulsars with magnitudes V< 30 mag is within reach for a measurement time of one minute. A two-hour "blind search" with the ELT could reach V ~ 31.3 mag. Conclusions: This method allows detecting faint periodic optical radiation with simple equipment and easy signal processing.
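
    The event-time-difference histogram described in this abstract is simple to prototype. The sketch below simulates a time-tagged photon stream (half pulsed, half flat background; the period, counts and jitter are invented for illustration, not the paper's parameters) and shows the excess of pair differences near one spin period:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    period = 0.0337        # s, a Crab-like spin period (illustrative)
    duration = 20.0        # s of simulated observation

    # Event stream: half the photons cluster near a fixed rotational phase,
    # half are flat background, mimicking a single-photon detector record.
    n_half = 2000
    pulse_idx = rng.integers(0, int(duration / period), size=n_half)
    pulsed = pulse_idx * period + rng.normal(0.0, period / 20, size=n_half)
    background = rng.uniform(0.0, duration, size=n_half)
    times = np.sort(np.concatenate([pulsed, background]))

    # Histogram of forward event time differences up to a few periods; its
    # envelope approximates the autocorrelation of the optical signal, so a
    # periodic source produces peaks at multiples of the period.
    max_lag = 3 * period
    diffs = np.concatenate([
        times[i + 1:np.searchsorted(times, t + max_lag)] - t
        for i, t in enumerate(times)
    ])
    hist, edges = np.histogram(diffs, bins=300, range=(0.0, max_lag))
    centers = 0.5 * (edges[:-1] + edges[1:])

    # Bins near lag = one period stand out against the flat background.
    near_period = np.abs(centers - period) < period / 30
    print(hist[near_period].max(), np.median(hist))
    ```

    No phase folding or trial-period search is needed to reveal the periodicity, which is the appeal of the method for blind searches.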

  10. Bayesian microsaccade detection

    PubMed Central

    Mihali, Andra; van Opheusden, Bas; Ma, Wei Ji

    2017-01-01

    Microsaccades are high-velocity fixational eye movements, with special roles in perception and cognition. The default microsaccade detection method is to determine when the smoothed eye velocity exceeds a threshold. We have developed a new method, Bayesian microsaccade detection (BMD), which performs inference based on a simple statistical model of eye positions. In this model, a hidden state variable changes between drift and microsaccade states at random times. The eye position is a biased random walk with different velocity distributions for each state. BMD generates samples from the posterior probability distribution over the eye state time series given the eye position time series. Applied to simulated data, BMD recovers the “true” microsaccades with fewer errors than alternative algorithms, especially at high noise. Applied to EyeLink eye tracker data, BMD detects almost all the microsaccades detected by the default method, but also apparent microsaccades embedded in high noise—although these can also be interpreted as false positives. Next we apply the algorithms to data collected with a Dual Purkinje Image eye tracker, whose higher precision justifies defining the inferred microsaccades as ground truth. When we add artificial measurement noise, the inferences of all algorithms degrade; however, at noise levels comparable to EyeLink data, BMD recovers the “true” microsaccades with 54% fewer errors than the default algorithm. Though unsuitable for online detection, BMD has other advantages: It returns probabilities rather than binary judgments, and it can be straightforwardly adapted as the generative model is refined. We make our algorithm available as a software package. PMID:28114483
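
    For contrast with BMD, the "default" velocity-threshold detection mentioned in the abstract can be sketched in a few lines. The smoothing window, threshold value and synthetic trace below are illustrative assumptions, not the parameters of BMD or of any published detector:

    ```python
    import numpy as np

    def detect_saccades(pos, dt, threshold, smooth=3):
        """Default-style detector: flag samples whose smoothed speed exceeds a threshold.

        pos: (N, 2) eye positions in degrees; dt: sample interval in seconds.
        Returns a boolean array marking putative microsaccade samples.
        """
        vel = np.gradient(pos, dt, axis=0)                  # deg/s per axis
        speed = np.linalg.norm(vel, axis=1)
        kernel = np.ones(smooth) / smooth
        smoothed = np.convolve(speed, kernel, mode="same")  # moving average
        return smoothed > threshold

    # Synthetic trace: slow drift with one injected high-velocity excursion.
    rng = np.random.default_rng(1)
    dt = 0.001                                              # 1 kHz sampling
    pos = np.cumsum(rng.normal(0, 0.001, size=(500, 2)), axis=0)
    pos[200:210] += np.linspace(0, 0.5, 10)[:, None]        # ~50 deg/s ramp
    flags = detect_saccades(pos, dt, threshold=20.0)
    print(flags[200:210].any(), flags[:150].any())
    ```

    The binary output of such a detector is exactly what BMD replaces with posterior probabilities over the eye-state time series.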

  11. Bundle Adjustment-Based Stability Analysis Method with a Case Study of a Dual Fluoroscopy Imaging System

    NASA Astrophysics Data System (ADS)

    Al-Durgham, K.; Lichti, D. D.; Detchev, I.; Kuntze, G.; Ronsky, J. L.

    2018-05-01

    A fundamental task in photogrammetry is the temporal stability analysis of a camera/imaging-system's calibration parameters. This is essential to validate the repeatability of the parameters' estimation, to detect any behavioural changes in the camera/imaging system and to ensure precise photogrammetric products. Many stability analysis methods exist in the photogrammetric literature; each one has different methodological bases, and advantages and disadvantages. This paper presents a simple and rigorous stability analysis method that can be straightforwardly implemented for a single camera or an imaging system with multiple cameras. The basic collinearity model is used to capture differences between two calibration datasets, and to establish the stability analysis methodology. Geometric simulation is used as a tool to derive image and object space scenarios. Experiments were performed on real calibration datasets from a dual fluoroscopy (DF; X-ray-based) imaging system. The calibration data consisted of hundreds of images and thousands of image observations from six temporal points over a two-day period for a precise evaluation of the DF system stability. The stability of the DF system (for the single-camera analysis) was found to be within a range of 0.01 to 0.66 mm in terms of 3D coordinate root-mean-square error (RMSE), and 0.07 to 0.19 mm for the dual-camera analysis. To the authors' best knowledge, this work is the first to address the topic of DF stability analysis.
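
    The stability metric quoted above (RMSE of 3D coordinates between calibration epochs) reduces to a one-line computation once two matched point sets are in hand. The synthetic points below merely illustrate the bookkeeping; they are not the dual-fluoroscopy data:

    ```python
    import numpy as np

    # Two matched 3D point sets, standing in for object-space coordinates
    # reconstructed from calibrations on two different days.
    rng = np.random.default_rng(3)
    pts_day1 = rng.uniform(-100, 100, size=(50, 3))           # mm
    pts_day2 = pts_day1 + rng.normal(0, 0.05, size=(50, 3))   # small drift

    # RMSE over the 3D point-to-point distances.
    rmse = np.sqrt(np.mean(np.sum((pts_day1 - pts_day2) ** 2, axis=1)))
    print(f"3D RMSE: {rmse:.3f} mm")
    ```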

  12. A novel tool for user-friendly estimation of natural, diagnostic and professional radiation risk: Radio-Risk software.

    PubMed

    Carpeggiani, Clara; Paterni, Marco; Caramella, Davide; Vano, Eliseo; Semelka, Richard C; Picano, Eugenio

    2012-11-01

    Awareness of radiological risk is low among doctors and patients. An educational/decision tool that considers each patient's cumulative lifetime radiation exposure would facilitate provider-patient communication. The purpose of this work was to develop user-friendly software for simple estimation and communication of radiological risk to patients and doctors as a part of the SUIT-Heart (Stop Useless Imaging Testing in Heart disease) Project of the Tuscany Region. We developed a novel software program (PC platform, Windows OS, fully downloadable at http://suit-heart.ifc.cnr.it) considering reference dose estimates from American Heart Association Radiological Imaging 2009 guidelines and UK Royal College of Radiology 2007 guidelines. Age- and gender-weighted cancer risks were derived from the Biological Effects of Ionising Radiation VII Committee report, 2006. With simple input functions (demographics, age, gender) the user selects from a predetermined menu variables relating to natural (e.g., airplane flights and geo-tracked background exposure), professional (e.g., cath lab workers) and medical (e.g., CT, cardiac scintigraphy, coronary stenting) sources. The program provides a simple numeric (cumulative effective dose in milliSievert, mSv, and equivalent number of chest X-rays) and graphic (cumulative temporal trends of exposure, cancer cases out of 100 exposed persons) display. The software thus allows straightforward estimation of cumulative dose (in multiples of chest X-rays) and risk (as extra % lifetime cancer risk). Pictorial display of radiation risk may be valuable for increasing radiological awareness in cardiologists. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
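
    The dose bookkeeping described above is simple arithmetic. The sketch below is a minimal illustration, not Radio-Risk itself: the per-procedure doses are rough reference values I am assuming for the example, and the chest X-ray equivalent assumes a commonly quoted ~0.02 mSv per chest film:

    ```python
    # Assumed typical effective dose of one chest X-ray (mSv); a common
    # reference value, not necessarily the one used by Radio-Risk.
    CHEST_XRAY_MSV = 0.02

    # Hypothetical per-procedure effective doses in mSv (illustrative only).
    TYPICAL_DOSE_MSV = {
        "chest_xray": 0.02,
        "chest_ct": 7.0,
        "cardiac_scintigraphy": 9.0,
        "coronary_stenting": 15.0,
    }

    def cumulative_dose(procedures):
        """Sum effective dose over a list of (procedure, count) pairs."""
        return sum(TYPICAL_DOSE_MSV[name] * n for name, n in procedures)

    history = [("chest_xray", 3), ("chest_ct", 2), ("coronary_stenting", 1)]
    total = cumulative_dose(history)
    print(f"{total:.2f} mSv ≈ {total / CHEST_XRAY_MSV:.0f} chest X-rays")
    ```

    Expressing the total as a count of chest X-rays is what makes the number communicable to patients, which is the stated goal of the tool.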

  13. Observations in public settings

    Treesearch

    Robert G. Lee

    1977-01-01

    Straightforward observation of children in their everyday environments is a more appropriate method of discovering the meaning of their relationships to nature than complex methodologies or reductionist commonsense thinking. Observational study requires an explicit conceptual framework and adherence to procedures that allow scientific inference. Error may come from...

  14. An initial investigation into methods of computing transonic aerodynamic sensitivity coefficients

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.

    1988-01-01

    The initial effort was concentrated on developing the quasi-analytical approach for two-dimensional transonic flow. To keep the problem computationally efficient and straightforward, only the two-dimensional flow was considered and the problem was modeled using the transonic small perturbation equation.

  15. Metal-free Synthesis of Ynones from Acyl Chlorides and Potassium Alkynyltrifluoroborate Salts

    PubMed Central

    Taylor, Cassandra L.; Bolshan, Yuri

    2015-01-01

    Ynones are a valuable functional group and building block in organic synthesis. Ynones serve as a precursor to many important organic functional groups and scaffolds. Traditional methods for the preparation of ynones are associated with drawbacks including harsh conditions, multiple purification steps, and the presence of unwanted byproducts. An alternative method for the straightforward preparation of ynones from acyl chlorides and potassium alkynyltrifluoroborate salts is described herein. The adoption of organotrifluoroborate salts as an alternative to organometallic reagents for the formation of new carbon-carbon bonds has a number of advantages. Potassium organotrifluoroborate salts are shelf stable, have good functional group tolerance, low toxicity, and a wide variety are straightforward to prepare. The title reaction proceeds rapidly at ambient temperature in the presence of a Lewis acid without the exclusion of air and moisture. Fair to excellent yields may be obtained via reaction of various aryl and alkyl acid chlorides with alkynyltrifluoroborate salts in the presence of boron trichloride. PMID:25742169

  16. Newton-Euler Dynamic Equations of Motion for a Multi-body Spacecraft

    NASA Technical Reports Server (NTRS)

    Stoneking, Eric

    2007-01-01

    The Magnetospheric MultiScale (MMS) mission employs a formation of spinning spacecraft with several flexible appendages and thruster-based control. To understand the complex dynamic interaction of thruster actuation, appendage motion, and spin dynamics, each spacecraft is modeled as a tree of rigid bodies connected by spherical or gimballed joints. The method presented facilitates assembling by inspection the exact, nonlinear dynamic equations of motion for a multibody spacecraft suitable for solution by numerical integration. The building block equations are derived by applying Newton's and Euler's equations of motion to an "element" consisting of two bodies and one joint (spherical and gimballed joints are considered separately). Patterns in the "mass" and "force" matrices guide assembly by inspection of a general N-body tree-topology system. Straightforward linear algebra operations are employed to eliminate extraneous constraint equations, resulting in a minimum-dimension system of equations to solve. This method thus combines a straightforward, easily-extendable, easily-mechanized formulation with an efficient computer implementation.

  17. Criteria for determining the need for surgical treatment of tricuspid regurgitation during mitral valve replacement

    PubMed Central

    2012-01-01

    Background Tricuspid regurgitation (TR) is common in patients with mitral valve disease; however, there are no straightforward, rapidly determinable criteria available for deciding whether TR repair should be performed during mitral valve replacement. The aim of our retrospective study was to identify a simple and fast criterion for determining whether TR repair should be performed in patients undergoing mitral valve replacement. Methods We reviewed the records of patients who underwent mitral valve replacement with or without (control) TR repair (DeVega or Kay procedure) from January 2005 to December 2008. Preoperative and 2-year postoperative echocardiographic measurements included right ventricular and atrial diameter, interventricular septum size, TR severity, ejection fraction, and pulmonary artery pressure. Results A total of 89 patients were included (control, n = 50; DeVega, n = 27; Kay, n = 12). Demographic and clinical characteristics were similar between groups. Cardiac variables were similar between the DeVega and Kay groups. Right atrial and ventricular diameters and ejection fraction were significantly decreased postoperatively in both the control and operation (DeVega + Kay) groups (P < 0.05). Pulmonary artery pressure was significantly decreased postoperatively in the operation groups (P < 0.05). Our findings indicate that surgical intervention for TR should be considered during mitral valve replacement if any of the following preoperative criteria are met: right atrial transverse diameter > 57 mm; right ventricular end-diastolic diameter > 55 mm; pulmonary artery pressure > 58 mmHg. Conclusions Our findings suggest echocardiography may be used as a rapid and simple means of determining which patients require TR repair during mitral valve replacement. PMID:22443513

  18. The Oceanographic Multipurpose Software Environment (OMUSE v1.0)

    NASA Astrophysics Data System (ADS)

    Pelupessy, Inti; van Werkhoven, Ben; van Elteren, Arjen; Viebahn, Jan; Candy, Adam; Portegies Zwart, Simon; Dijkstra, Henk

    2017-08-01

    In this paper we present the Oceanographic Multipurpose Software Environment (OMUSE). OMUSE aims to provide a homogeneous environment for existing or newly developed numerical ocean simulation codes, simplifying their use and deployment. In this way, numerical experiments that combine ocean models representing different physics or spanning different ranges of physical scales can be easily designed. Rapid development of simulation models is made possible through the creation of simple high-level scripts. The low-level core of the abstraction in OMUSE is designed to deploy these simulations efficiently on heterogeneous high-performance computing resources. Cross-verification of simulation models with different codes and numerical methods is facilitated by the unified interface that OMUSE provides. Reproducibility in numerical experiments is fostered by allowing complex numerical experiments to be expressed in portable scripts that conform to a common OMUSE interface. Here, we present the design of OMUSE as well as the modules and model components currently included, which range from a simple conceptual quasi-geostrophic solver to the global circulation model POP (Parallel Ocean Program). The uniform access to the codes' simulation state and the extensive automation of data transfer and conversion operations aids the implementation of model couplings. We discuss the types of couplings that can be implemented using OMUSE. We also present example applications that demonstrate the straightforward model initialization and the concurrent use of data analysis tools on a running model. We give examples of multiscale and multiphysics simulations by embedding a regional ocean model into a global ocean model and by coupling a surface wave propagation model with a coastal circulation model.

  19. In Vitro Cell Death Discrimination and Screening Method by Simple and Cost-Effective Viability Analysis.

    PubMed

    Helm, Katharina; Beyreis, Marlena; Mayr, Christian; Ritter, Markus; Jakab, Martin; Kiesslich, Tobias; Plaetzer, Kristjan

    2017-01-01

    For in vitro cytotoxicity testing, discrimination of apoptosis and necrosis represents valuable information. Viability analysis performed at two different time points post treatment could serve such a purpose because the dynamics of metabolic activity of apoptotic and necrotic cells is different, i.e. a more rapid decline of cellular metabolism during necrosis whereas cellular metabolism is maintained during the entire execution phase of apoptosis. This study describes a straightforward approach to distinguish apoptosis and necrosis. A431 human epidermoid carcinoma cells were treated with different concentrations/doses of actinomycin D (Act-D), 4,5,6,7-tetrabromo-2-azabenzimidazole (TBB), Ro 31-8220, H2O2 and photodynamic treatment (PDT). The resazurin viability signal was recorded at 2 and 24 hrs post treatment. Apoptosis and necrosis were verified by measuring caspase 3/7 and membrane integrity. Calculation of the difference curve between the 2 and 24 hrs resazurin signals yields the following information: a positive difference signal indicates apoptosis (i.e. high metabolic activity at early time points and low signal at 24 hrs post treatment) while an early reduction of the viability signal indicates necrosis. For all treatments, this dose-dependent sequence of cellular responses could be confirmed by independent assays. Simple and cost-effective viability analysis provides reliable information about the dose ranges of a cytotoxic agent where apoptosis or necrosis occurs. This may serve as a starting point for further in-depth characterisation of cytotoxic treatments. © 2017 The Author(s). Published by S. Karger AG, Basel.
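
    The two-time-point decision logic described above can be sketched as a small classifier. The thresholds and the "normalised to untreated control" convention below are assumptions chosen for illustration, not values from the study:

    ```python
    def classify_cell_death(v2h, v24h, drop=0.2):
        """Crude reading of the two-time-point viability logic.

        v2h, v24h: resazurin viability signals at 2 h and 24 h post treatment,
        normalised so an untreated control reads 1.0. The `drop` threshold is
        an illustrative assumption.
        """
        if v2h < 1.0 - drop:            # early loss of metabolic activity
            return "necrosis-like"
        if v2h - v24h > drop:           # activity maintained early, lost late
            return "apoptosis-like"
        return "no clear death response"

    print(classify_cell_death(0.95, 0.40))  # high early, low late
    print(classify_cell_death(0.30, 0.20))  # early collapse
    ```

    In practice such a rule would be applied across a dose series to map out the concentration ranges where each response dominates, as the abstract describes.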

  20. Identification of regions of normal grey matter and white matter from pathologic glioblastoma and necrosis in frozen sections using Raman imaging.

    PubMed

    Kast, Rachel; Auner, Gregory; Yurgelevic, Sally; Broadbent, Brandy; Raghunathan, Aditya; Poisson, Laila M; Mikkelsen, Tom; Rosenblum, Mark L; Kalkanis, Steven N

    2015-11-01

    In neurosurgical applications, a tool capable of distinguishing grey matter, white matter, and areas of tumor and/or necrosis in near-real time could greatly aid in tumor resection decision making. Raman spectroscopy is a non-destructive spectroscopic technique which provides molecular information about the tissue under examination based on the vibrational properties of the constituent molecules. With careful measurement and data processing, a spatial step and repeat acquisition of Raman spectra can be used to create Raman images. Forty frozen brain tissue sections were imaged in their entirety using a 300-µm-square measurement grid, and two or more regions of interest within each tissue were also imaged using a 25-µm-square step size. Molecular correlates for histologic features of interest were identified within the Raman spectra, and novel imaging algorithms were developed to compare molecular features across multiple tissues. In previous work, the relative concentration of individual biomolecules was imaged. Here, the relative concentrations at 1004, 1300:1344, and 1660 cm⁻¹, which correspond primarily to protein and lipid content, were simultaneously imaged across all tissues. This provided simple interpretation of boundaries between grey matter, white matter, and diseased tissue, and corresponded with findings from adjacent hematoxylin and eosin-stained sections. This novel, yet simple, multi-channel imaging technique allows clinically-relevant resolution with straightforward molecular interpretation of Raman images not possible by imaging any single peak. This method can be applied to either surgical or laboratory tools for rapid, non-destructive imaging of grey and white matter.
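
    A crude way to render such a three-band map is to pick the spectral sample nearest each band and min-max scale the three planes into one false-colour RGB image. Everything below is an illustrative assumption, not the authors' processing chain: the toy data cube is random, the 1300:1344 cm⁻¹ entry is treated as a single band at 1344 cm⁻¹, and the per-channel normalisation is the simplest possible choice:

    ```python
    import numpy as np

    def raman_rgb(cube, wavenumbers, bands=(1004.0, 1344.0, 1660.0)):
        """cube: (H, W, S) spectra sampled at `wavenumbers` (cm^-1).
        Returns an (H, W, 3) false-colour image scaled to [0, 1] per channel."""
        idx = [int(np.argmin(np.abs(wavenumbers - b))) for b in bands]
        img = cube[:, :, idx].astype(float)
        img -= img.min(axis=(0, 1))          # per-channel min-max scaling
        span = img.max(axis=(0, 1))
        span[span == 0] = 1.0                # guard against flat channels
        return img / span

    wn = np.linspace(800, 1800, 200)         # toy wavenumber axis
    cube = np.random.default_rng(2).random((4, 4, 200))
    rgb = raman_rgb(cube, wn)
    print(rgb.shape, float(rgb.min()), float(rgb.max()))
    ```

    Mapping three diagnostic bands to three colour channels is what gives the single composite image its directly interpretable tissue boundaries.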

  1. Adjustment of total suspended solids data for use in sediment studies

    USGS Publications Warehouse

    Glysson, G. Douglas; Gray, John R.; Conge, L.M.; Hotchkiss, Rollin H.; Glade, Michael

    2000-01-01

    The U.S. Environmental Protection Agency identifies fluvial sediment as the single most widespread pollutant in the Nation's rivers and streams, affecting aquatic habitat, drinking water treatment processes, and recreational uses of rivers, lakes, and estuaries. A significant amount of suspended-sediment data has been produced using the total suspended solids (TSS) laboratory analysis method. An evaluation of data collected and analyzed by the U.S. Geological Survey and others has shown that the variation in TSS analytical results is considerably larger than that for traditional suspended-sediment concentration analyses (SSC) and that the TSS data show a negative bias when compared to SSC data. This paper presents the initial results of a continuing investigation into the differences between TSS and SSC results. It explores possible relations between these differences and other hydrologic data collected at the same stations. A general equation was developed to relate TSS data to SSC data. However, this general equation is not applicable for data from individual stations. Based on these analyses, there appears to be no simple, straightforward way to relate TSS and SSC data unless pairs of TSS and SSC results are available for a station.

  2. Determination of Carbonyl Compounds in Cork Agglomerates by GDME-HPLC-UV: Identification of the Extracted Compounds by HPLC-MS/MS.

    PubMed

    Brandão, Pedro Francisco; Ramos, Rui Miguel; Almeida, Paulo Joaquim; Rodrigues, José António

    2017-02-08

    A new approach is proposed for the extraction and determination of carbonyl compounds in solid samples, such as wood or cork materials. Cork products are used as building materials due to their singular characteristics; however, little is known about its aldehyde emission potential and content. Sample preparation was done by using a gas-diffusion microextraction (GDME) device for the direct extraction of volatile aldehydes and derivatization with 2,4-dinitrophenylhydrazine. Analytical determination of the extracts was done by HPLC-UV, with detection at 360 nm. The developed methodology proved to be a reliable tool for aldehyde determination in cork agglomerate samples with suitable method features. Mass spectrometry studies were performed for each sample, which enabled the identification, in the extracts, of the derivatization products of a total of 13 aldehydes (formaldehyde, acetaldehyde, furfural, propanal, 5-methylfurfural, butanal, benzaldehyde, pentanal, hexanal, trans-2-heptenal, heptanal, octanal, and trans-2-nonenal) and 4 ketones (3-hydroxy-2-butanone, acetone, cyclohexanone, and acetophenone). This new analytical methodology simultaneously proved to be consistent for the identification and determination of aldehydes in cork agglomerates and a very simple and straightforward procedure.

  3. A Method for Whole Brain Ex Vivo Magnetic Resonance Imaging with Minimal Susceptibility Artifacts

    PubMed Central

    Shatil, Anwar S.; Matsuda, Kant M.; Figley, Chase R.

    2016-01-01

    Magnetic resonance imaging (MRI) is a non-destructive technique that is capable of localizing pathologies and assessing other anatomical features (e.g., tissue volume, microstructure, and white matter connectivity) in postmortem, ex vivo human brains. However, when brains are removed from the skull and cerebrospinal fluid (i.e., their normal in vivo magnetic environment), air bubbles and air–tissue interfaces typically cause magnetic susceptibility artifacts that severely degrade the quality of ex vivo MRI data. In this report, we describe a relatively simple and cost-effective experimental setup for acquiring artifact-free ex vivo brain images using a clinical MRI system with standard hardware. In particular, we outline the necessary steps, from collecting an ex vivo human brain to the MRI scanner setup, and have also described changing the formalin (as might be necessary in longitudinal postmortem studies). Finally, we share some representative ex vivo MRI images that have been acquired using the proposed setup in order to demonstrate the efficacy of this approach. We hope that this protocol will provide both clinicians and researchers with a straightforward and cost-effective solution for acquiring ex vivo MRI data from whole postmortem human brains. PMID:27965620

  4. Electrospun nanofibers-mediated on-demand drug release.

    PubMed

    Chen, Menglin; Li, Yan-Fang; Besenbacher, Flemming

    2014-11-01

    A living system has a complex and accurate regulation system with intelligent sensor-processor-effector components to enable the release of vital bioactive substances on demand at a specific site and time. Stimuli-responsive polymers mimic biological systems in a crude way: an external stimulus results in a change in conformation, solubility, or alteration of the hydrophilic/hydrophobic balance, and consequently release of a bioactive substance. Electrospinning is a straightforward and robust method to produce nanofibers with the potential to incorporate drugs in a simple, rapid, and reproducible process. This feature article emphasizes an emerging area using an electrospinning technique to generate biomimetic nanofibers as drug delivery devices that are responsive to different stimuli, such as temperature, pH, light, and electric/magnetic field, for controlled release of therapeutic substances. Although still in its infancy, this mimicry of living systems by stimuli-responsive nanofibers encompasses both their fibrous structure and their bio-regulatory function as on-demand drug-release depots. Electrospun nanofibers with extracellular matrix morphology intrinsically guide cellular drug uptake, which is highly desirable for translating the promise of drug delivery into clinical success. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Social Research, by Sotirios Sarantakos. Palgrave Macmillan, 500 pp., £19.99, ISBN 1403943206.

    PubMed

    2005-10-01

    I found Sarantakos's book to be a clear and straightforward guide to social research methods. The book is aimed at undergraduate level, and I am sure will appeal to students from a range of disciplines. The structure of the book reflects the different stages of the research process and as such it is easy to locate all the required sections. It introduces qualitative and quantitative approaches in a balanced way and includes sufficient detail of the philosophical roots of each of these research traditions. It was good to find simple and easy-to-follow accounts of the complex underpinnings of a number of research approaches that are popular in nursing research. Included are topics such as interpretivism, symbolic interactionism and phenomenology, as well as the more usually found hallmarks of positivistic-type research. While feminism and feminist research in nursing are important subject areas, I did wonder if the amount of attention they received was perhaps a little unbalanced when compared with the other subject areas covered. There are, however, useful sections on data collection and analysis. I found, for example, the section on grounded theory approaches to data analysis particularly good.

  6. Determination of essential elements in beverages, herbal infusions and dietary supplements using a new straightforward sequential approach based on flame atomic absorption spectrometry.

    PubMed

    Gómez-Nieto, Beatriz; Gismera, Mª Jesús; Sevilla, Mª Teresa; Procopio, Jesús R

    2017-03-15

    A simple method based on FAAS was developed for the sequential multi-element determination of Cu, Zn, Mn, Mg and Si in beverages and food supplements with successful results. The main absorption lines for Cu, Zn and Si and secondary lines for Mn and Mg were selected to carry out the measurements. The sample introduction was performed using a flow injection system. By measuring on the wings of the absorption lines, the upper limit of the linear range was increased up to 110 mg L⁻¹ for Mg, 200 mg L⁻¹ for Si and 13 mg L⁻¹ for Zn. The determination of the five elements was carried out, in triplicate, without the need for additional sample dilutions and/or re-measurements, using less than 3.5 mL of sample to perform the complete analysis. The LODs were 0.008 mg L⁻¹ for Cu, 0.017 mg L⁻¹ for Zn, 0.011 mg L⁻¹ for Mn, 0.16 mg L⁻¹ for Si and 0.11 mg L⁻¹ for Mg. Copyright © 2016 Elsevier Ltd. All rights reserved.
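
    LODs of this kind are routinely derived from a linear calibration. As a hedged illustration (the abstract does not state the exact procedure used), the common 3σ estimate from a fitted calibration line looks like this, with invented standards and blank noise:

    ```python
    import numpy as np

    # Illustrative calibration standards (mg/L) and absorbance readings.
    conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
    absorb = np.array([0.002, 0.051, 0.101, 0.198, 0.402])

    # Least-squares calibration line: absorbance = slope * conc + intercept.
    slope, intercept = np.polyfit(conc, absorb, 1)

    # 3-sigma limit of detection from an assumed blank standard deviation.
    blank_sd = 0.0011   # assumed std dev of repeated blank measurements
    lod = 3 * blank_sd / slope
    print(f"slope {slope:.4f} per mg/L, LOD ≈ {lod:.3f} mg/L")
    ```

    Measuring on line wings, as the method does, lowers the calibration slope and so trades sensitivity for an extended linear range.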

  7. Unpolarized resonance grating reflectors with 44% fractional bandwidth.

    PubMed

    Niraula, Manoj; Magnusson, Robert

    2016-06-01

    There is immense scientific interest in the properties of resonant thin films embroidered with periodic nanoscale features, and this device class possesses considerable innovation potential. Accordingly, we report unpolarized broadband reflectors enabled by a serial arrangement of a pair of polarized subwavelength gratings. Optimized with numerical methods, our elemental gratings consist of a partially etched crystalline-silicon film on a quartz substrate. The resulting reflectors exhibit extremely wide spectral reflection bands in one polarization. By arranging two such reflectors sequentially with orthogonal periodicities, an unpolarized spectral band results that exceeds those of the individual polarized bands. In the experiments reported herein, we achieve zero-order reflectance exceeding 97% under unpolarized light incidence over a 500 nm wide wavelength band. This wide band represents a ∼44% fractional band in the near infrared. Moreover, the resonant unpolarized broadband accommodates an ultra-high reflection band spanning ∼85 nm and exceeding 99.9% in efficiency. The elemental polarization-sensitive reflectors, based on one-dimensional (1D) resonant gratings, have a simple design and robust performance, and are straightforward to fabricate. Hence, this technology is a promising alternative to traditional multilayer thin-film reflectors, especially at longer wavelengths of light where multilayer deposition may be infeasible or impractical.

  8. Modeling direct band-to-band tunneling: From bulk to quantum-confined semiconductor devices

    NASA Astrophysics Data System (ADS)

    Carrillo-Nuñez, H.; Ziegler, A.; Luisier, M.; Schenk, A.

    2015-06-01

    A rigorous framework to study direct band-to-band tunneling (BTBT) in homo- and hetero-junction semiconductor nanodevices is introduced. An interaction Hamiltonian coupling conduction and valence bands (CVBs) is derived using a multiband envelope method. A general form of the BTBT probability is then obtained from the linear response to the "CVBs interaction" that drives the system out of equilibrium. Simple expressions in terms of the one-electron spectral function are developed to compute the BTBT current in two- and three-dimensional semiconductor structures. Additionally, a two-band envelope equation based on the Flietner model of imaginary dispersion is proposed for the same purpose. In order to characterize their accuracy and differences, both approaches are compared with full-band, atomistic quantum transport simulations of Ge, InAs, and InAs-Si Esaki diodes. As another numerical application, the BTBT current in InAs-Si nanowire tunnel field-effect transistors is computed. It is found that both approaches agree with high accuracy. The first one is considerably easier to conceive and could be implemented straightforwardly in existing quantum transport tools based on the effective mass approximation to account for BTBT in nanodevices.

  9. Comparison of Clobetasol Propionate Generics Using Simplified in Vitro Bioequivalence Method for Topical Drug Products.

    PubMed

    Soares, Kelen Carine Costa; de Souza, Weidson Carlos; de Souza Texeira, Leonardo; da Cunha-Filho, Marcilio Sergio Soares; Gelfuso, Guilherme Martins; Gratieri, Tais

    2017-11-20

    The aim of this paper is to propose a simple in vitro skin penetration experiment, in which the drug is extracted from the whole skin piece, as a valid test for formulation screening and optimization during the development process, for equivalence assessment during quality control, or post-approval after changes to the product. Twelve clobetasol propionate (CP) formulations (six creams and six ointments) from the local market were used as a model to challenge the proposed methodology in comparison to in vitro skin penetration with tape-stripping for drug extraction. To support the results, physicochemical tests for pH, viscosity, density and assay, as well as in vitro release, were performed. Both protocols, extracting the drug from the skin using the tape-stripping technique or extracting from the full skin, were capable of differentiating CP formulations. Only one formulation did not present a statistical difference from the reference drug product in penetration tests, and only two other ointments presented release equivalent to the reference. The proposed protocol is straightforward and reproducible. The results suggest the bioinequivalence of the tested CP formulations, reinforcing the necessity of such evaluations. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  10. Vertically oriented TiO(x)N(y) nanopillar arrays with embedded Ag nanoparticles for visible-light photocatalysis.

    PubMed

    Jiang, Weitao; Ullah, Najeeb; Divitini, Giorgio; Ducati, Caterina; Kumar, R Vasant; Ding, Yucheng; Barber, Zoe H

    2012-03-27

    We present a straightforward method to produce highly crystalline, vertically oriented TiO(x)N(y) nanopillars (up to 1 μm in length) with a band gap in the visible-light region. This process starts with reactive dc sputtering to produce a TiN porous film, followed by a simple oxidation process at elevated temperatures in oxygen or air. By controlling the oxidation conditions, the band gap of the prepared TiO(x)N(y) can be tuned to different wavelengths within the range of visible light. Furthermore, in order to inhibit carrier recombination to enhance the photocatalytic activity, Ag nanoparticles have been embedded into the nanogaps between the TiO(x)N(y) pillars by photoinduced reduction of Ag(+) (aq) irradiated with visible light. Transmission electron microscopy reveals that the Ag nanoparticles with a diameter of about 10 nm are uniformly dispersed along the pillars. The prepared TiO(x)N(y) nanopillar matrix and Ag:TiO(x)N(y) network show strong photocatalytic activity under visible-light irradiation, evaluated via degradation of Rhodamine B. © 2012 American Chemical Society

  11. X-Ray Micro-CT Observations of Hydrate Pore Habit and Lattice Boltzmann Simulations on Permeability Evolution in Hydrate Bearing Sediments (HBS)

    NASA Astrophysics Data System (ADS)

    Chen, X.; Espinoza, N.; Verma, R.; Prodanovic, M.

    2017-12-01

    We use X-ray micro-computed tomography (μCT) to observe xenon hydrate growth. During xenon hydrate formation in a single pore and a sandpack, we observe heterogeneous (patchy) hydrate distribution at both pore (10 μm) and core scales (10 cm). These results present similarities with earlier observations on naturally occurring and synthetic hydrate-bearing sediment (HBS). Based on image analyses of xenon hydrate in the single pore, we find that, under the quasi-isothermal condition, the xenon volumetric growth rate versus overpressurization curve fits an Arrhenius-type equation. Using the μCT images of HBS, we are able to calculate the permeability of HBS using a lattice Boltzmann method. We find the reduced permeability versus hydrate saturation curve fits a simple Corey-type model, as suggested by earlier studies. However, the patchy distribution of hydrate does not permit a straightforward interpretation of the saturation exponent. This work provides fundamental observations of hydrate growth and pore habit in sediments and of how hydrate habit affects the hydraulic conductivity of HBS. Further implications can be extended to the strength, seismic velocities and electrical properties of HBS.
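
    The Corey-type fit named above can be sketched numerically. This is a minimal illustration, assuming the commonly used form k = k0 * (1 - S_h)^n; the exponent n = 3 is an assumed placeholder, not a value reported in the abstract:

```python
def corey_permeability(k0, s_h, n=3.0):
    """Corey-type reduction of absolute permeability k0 with
    hydrate saturation s_h: k = k0 * (1 - s_h)**n, where the
    saturation exponent n is a fit parameter (n = 3 is assumed
    here for illustration only)."""
    if not 0.0 <= s_h < 1.0:
        raise ValueError("hydrate saturation must lie in [0, 1)")
    return k0 * (1.0 - s_h) ** n

# At 50% hydrate saturation with n = 3, permeability drops to 1/8
# of its hydrate-free value.
k_reduced = corey_permeability(1.0, 0.5)
```

    As the abstract notes, patchy (rather than uniform) hydrate distribution makes the fitted exponent hard to interpret physically.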

  12. High contrast imaging and flexible photomanipulation for quantitative in vivo multiphoton imaging with polygon scanning microscope.

    PubMed

    Li, Yongxiao; Montague, Samantha J; Brüstle, Anne; He, Xuefei; Gillespie, Cathy; Gaus, Katharina; Gardiner, Elizabeth E; Lee, Woei Ming

    2018-02-28

    In this study, we introduce two key improvements that overcome limitations of existing polygon scanning microscopes while maintaining high spatial and temporal imaging resolution over a large field of view (FOV). First, we propose a simple and straightforward means to control the scanning angle of the polygon mirror to carry out photomanipulation without resorting to high-speed optical modulators. Second, we devise a flexible data sampling method that increases image contrast by over 2-fold and yields digital images with 100 megapixels (10 240 × 10 240) per frame at 0.25 Hz. This generates sub-diffraction-limited pixels (60 nm per pixel over the FOV of 512 μm), which increases the degrees of freedom to extract signals computationally. The unique combined optical and digital control recorded fine fluorescence recovery after localized photobleaching (r ~10 μm) within fluorescent giant unilamellar vesicles, and micro-vascular dynamics after laser-induced injury during thrombus formation in vivo. These new improvements expand the quantitative biological-imaging capacity of any polygon scanning microscope system. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Fractal pharmacokinetics.

    PubMed

    Pereira, Luis M

    2010-06-01

    Pharmacokinetics (PK) has been traditionally dealt with under the homogeneity assumption. However, biological systems are nowadays comprehensively understood as being inherently fractal. Specifically, the microenvironments where drug molecules interact with membrane interfaces, metabolic enzymes or pharmacological receptors, are unanimously recognized as unstirred, space-restricted, heterogeneous and geometrically fractal. Therefore, classical Fickean diffusion and the notion of the compartment as a homogeneous kinetic space must be revisited. Diffusion in fractal spaces has been studied for a long time making use of fractional calculus and expanding on the notion of dimension. Combining this new paradigm with the need to describe and explain experimental data results in defining time-dependent rate constants with a characteristic fractal exponent. Under the one-compartment simplification this strategy is straightforward. However, precisely due to the heterogeneity of the underlying biology, often at least a two-compartment model is required to address macroscopic data such as drug concentrations. This simple modelling step-up implies significant analytical and numerical complications. However, a few methods are available that make possible the original desideratum. In fact, exploring the full range of parametric possibilities and looking at different drugs and respective biological concentrations, it may be concluded that all PK modelling approaches are indeed particular cases of the fractal PK theory.
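
    The time-dependent rate constant with a characteristic fractal exponent mentioned above has a simple closed form in the one-compartment case. A sketch under the usual assumption k(t) = k * t^(-h) with h < 1 (symbol names are illustrative, not taken from the paper):

```python
import math

def fractal_concentration(c0, k, h, t):
    """One-compartment elimination with a time-dependent rate
    constant k(t) = k * t**(-h), h being the fractal exponent.
    Integrating dC/dt = -k * t**(-h) * C gives
    C(t) = c0 * exp(-k * t**(1 - h) / (1 - h)) for h < 1."""
    if h >= 1.0:
        raise ValueError("this closed form assumes h < 1")
    return c0 * math.exp(-k * t ** (1.0 - h) / (1.0 - h))

# h = 0 recovers classical first-order kinetics C = c0 * exp(-k*t),
# consistent with classical PK being a special case of the fractal theory.
classical = fractal_concentration(10.0, 0.2, 0.0, 5.0)
```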

  14. Rheological Properties and Electrospinnability of High-Amylose Starch in Formic Acid.

    PubMed

    Lancuški, Anica; Vasilyev, Gleb; Putaux, Jean-Luc; Zussman, Eyal

    2015-08-10

    Starch derivatives, such as starch esters, are commonly used as alternatives to pure starch due to their enhanced mechanical properties. However, simple and efficient processing routes are still being sought. In the present article, we report on a straightforward method for electrospinning high-amylose starch-formate nanofibers from 17 wt % aqueous formic acid (FA) dispersions. The diameter of the electrospun starch-formate fibers ranged from 80 to 300 nm. The electrospinnability window between starch gelatinization and phase separation was determined using optical microscopy and rheological studies. This window was shown to strongly depend on the water content of the FA dispersions. While pure FA rapidly gelatinized starch, yielding solutions suitable for electrospinning within a few hours at room temperature, the presence of water (80 and 90 vol % FA) significantly delayed gelatinization and dissolution, which deteriorated fiber quality. A complete destabilization of the electrospinning process was observed in 70 vol % FA dispersions. Optical micrographs showed that FA induced a disruption of the starch granules, with a loss of crystallinity confirmed by X-ray diffraction. As a result, starch fiber mats exhibited a higher elongation at break when compared to brittle starch films.

  15. The magnetic lead field theorem in the quasi-static approximation and its use for magnetoencephalography forward calculation in realistic volume conductors.

    PubMed

    Nolte, Guido

    2003-11-21

    The equation for the magnetic lead field for a given magnetoencephalography (MEG) channel is well known for arbitrary frequencies omega but is not directly applicable to MEG in the quasi-static approximation. In this paper we derive an equation for omega = 0 starting from the very definition of the lead field instead of using Helmholtz's reciprocity theorems. The results are (a) the transpose of the conductivity times the lead field is divergence-free, and (b) the lead field differs from the one in any other volume conductor by a gradient of a scalar function. Consequently, for a piecewise homogeneous and isotropic volume conductor, the lead field is always tangential at the outermost surface. Based on this theoretical result, we formulated a simple and fast method for the MEG forward calculation for one shell of arbitrary shape: we correct the corresponding lead field for a spherical volume conductor by a superposition of basis functions, gradients of harmonic functions constructed here from spherical harmonics, with coefficients fitted to the boundary conditions. The algorithm was tested for a prolate spheroid of realistic shape for which the analytical solution is known. For high order in the expansion, we found the solutions to be essentially exact and for reasonable accuracies much fewer multiplications are needed than in typical implementations of the boundary element methods. The generalization to more shells is straightforward.
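
    The correction step described above, superposing basis functions with coefficients fitted to boundary conditions, reduces to a linear least-squares problem. A generic sketch of that fitting step; the function name and the framing in terms of boundary values at surface points are illustrative assumptions, not the paper's exact harmonic basis or boundary condition:

```python
import numpy as np

def fit_correction_coefficients(basis_at_surface, target_at_surface):
    """Least-squares fit of expansion coefficients c so that the
    superposition basis_at_surface @ c best matches the boundary
    values target_at_surface.
    basis_at_surface:  (n_points, n_basis) array, one column per
                       basis function evaluated at surface points
    target_at_surface: (n_points,) boundary values to match"""
    coeffs, *_ = np.linalg.lstsq(basis_at_surface, target_at_surface,
                                 rcond=None)
    return coeffs
```

    With enough surface points, higher expansion orders give an overdetermined system whose least-squares solution converges toward the exact boundary condition, mirroring the "essentially exact" high-order behavior reported in the abstract.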

  16. Learning how to rate video-recorded therapy sessions: a practical guide for trainees and advanced clinicians.

    PubMed

    McCullough, Leigh; Bhatia, Maneet; Ulvenes, Pal; Berggraf, Lene; Osborn, Kristin

    2011-06-01

    Watching and rating psychotherapy sessions is an important yet often overlooked component of psychotherapy training. This article provides a simple and straightforward guide for using one Website (www.ATOStrainer.com) that provides an automated training protocol for rating psychotherapy sessions. By the end of the article, readers will have the knowledge to go to the Website and begin using this training method as soon as they have a recorded session to view. This article presents: (a) an overview of the Achievement of Therapeutic Objectives Scale (ATOS; McCullough et al., 2003a), a research tool used to rate psychotherapy sessions; (b) a description of APA training tapes, available for purchase from APA Books, that have been rated and scored by ATOS-trained clinicians and posted on the Website; (c) step-by-step procedures on how ratings can be done; (d) an introduction to www.ATOStrainer.com, where ratings can be entered and compared with expert ratings; and (e) first-hand personal experiences of the authors using this training method and the benefits it affords both trainees and experienced therapists. This psychotherapy training Website has the potential to be a key resource for graduate students, researchers, and clinicians. Our long-range goal is to promote the growth of our understanding of psychotherapy and to improve the quality of psychotherapy provided for patients.

  17. Cavitation and non-cavitation regime for large-scale ultrasonic standing wave particle separation systems--In situ gentle cavitation threshold determination and free radical related oxidation.

    PubMed

    Johansson, Linda; Singh, Tanoj; Leong, Thomas; Mawson, Raymond; McArthur, Sally; Manasseh, Richard; Juliano, Pablo

    2016-01-01

    Here we suggest a novel and straightforward approach, using the sonochemiluminescent chemical luminol, to guide the design of liter-scale ultrasound standing-wave particle manipulation systems in terms of frequency and acoustic power for operation in either cavitation or non-cavitation regimes. We show that this method offers a simple way of in situ determination of the cavitation threshold for a selected separation vessel geometry. Since the pressure field is system specific, the cavitation threshold is system specific (for the threshold parameter range). In this study we discuss cavitation effects and also measure one implication of cavitation for the application of milk fat separation: the degree of milk fat lipid oxidation, by headspace volatile measurements. For the evaluated vessel, 2 MHz as opposed to 1 MHz operation enabled operation in non-cavitation or low-cavitation conditions, as measured by the luminol intensity threshold method. In all cases the lipid oxidation derived volatiles were below the human sensory detection level. Ultrasound treatment did not significantly influence the oxidative changes in milk for either 1 MHz (doses of 46 kJ/L and 464 kJ/L) or 2 MHz (doses of 37 kJ/L and 373 kJ/L) operation. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. Gradient Pre-Emphasis to Counteract First-Order Concomitant Fields on Asymmetric MRI Gradient Systems

    PubMed Central

    Tao, Shengzhen; Weavers, Paul T.; Trzasko, Joshua D.; Shu, Yunhong; Huston, John; Lee, Seung-Kyun; Frigo, Louis M.; Bernstein, Matt A.

    2016-01-01

    PURPOSE To develop a gradient pre-emphasis scheme that prospectively counteracts the effects of the first-order concomitant fields for any arbitrary gradient waveform played on asymmetric gradient systems, and to demonstrate the effectiveness of this approach using a real-time implementation on a compact gradient system. METHODS After reviewing the first-order concomitant fields that are present on asymmetric gradients, a generalized gradient pre-emphasis model assuming arbitrary gradient waveforms is developed to counteract their effects. A numerically straightforward, simple to implement approximate solution to this pre-emphasis problem is derived, which is compatible with the current hardware infrastructure used on conventional MRI scanners for eddy current compensation. The proposed method was implemented on the gradient driver sub-system, and its real-time use was tested using a series of phantom and in vivo data acquired from 2D Cartesian phase-difference, echo-planar imaging (EPI) and spiral acquisitions. RESULTS The phantom and in vivo results demonstrate that unless accounted for, first-order concomitant fields introduce considerable phase estimation error into the measured data and result in images exhibiting spatially dependent blurring/distortion. The resulting artifacts are effectively prevented using the proposed gradient pre-emphasis. CONCLUSION An efficient and effective gradient pre-emphasis framework is developed to counteract the effects of first-order concomitant fields of asymmetric gradient systems. PMID:27373901

  19. Investigation of phosphoserine and cytidine 5'-phosphate by heteronuclear two-dimensional spectroscopy: samples with strong proton coupling

    NASA Astrophysics Data System (ADS)

    Bolton, Philip H.

    Heteronuclear two-dimensional magnetic resonance is a novel method for investigating the conformations of cellular phosphates. The two-dimensional proton spectra are detected indirectly via the phosphorus-31 nucleus and thus allow determination of proton chemical shifts and coupling constants in situations in which the normal proton spectrum is obscured. Previous investigations of cellular phosphates with relatively simple spin systems have shown that the two-dimensional proton spectrum can be readily related to the normal proton spectrum by subspectral analysis. The normal proton spectrum can be decomposed into two subspectra, one for each polarization of the phosphorus-31 nucleus. The two-dimensional spectrum arises from the difference between the subspectra, and the normal proton spectrum is the sum. This allows simulation of the two-dimensional spectra and hence determination of the proton chemical shifts and coupling constants. Many cellular phosphates of interest, such as 5'-nucleotides and phosphoserine, contain three protons coupled to the phosphorus which are strongly coupled to one another. These samples are amenable to the two-dimensional method and the straightforward subspectral analysis is preserved when a 90° pulse is applied to the protons in the magnetization transfer step. The two-dimensional proton spectra of the samples investigated here have higher resolution than the normal proton spectra, revealing spectral features not readily apparent in the normal proton spectra.
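
    The subspectral analysis described above, where the normal proton spectrum is the sum of the two polarization subspectra and the two-dimensional spectrum arises from their difference, can be inverted by half-sum and half-difference. A minimal numerical sketch (array and function names are illustrative):

```python
import numpy as np

def subspectra(normal_spectrum, two_d_spectrum):
    """Recover the two phosphorus-31 polarization subspectra from
    the normal proton spectrum (their sum) and the two-dimensional
    spectrum (their difference)."""
    up = 0.5 * (normal_spectrum + two_d_spectrum)
    down = 0.5 * (normal_spectrum - two_d_spectrum)
    return up, down
```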

  20. High-throughput DNA extraction of forensic adhesive tapes.

    PubMed

    Forsberg, Christina; Jansson, Linda; Ansell, Ricky; Hedman, Johannes

    2016-09-01

    Tape-lifting has, since its introduction in the early 2000s, become a well-established sampling method in forensic DNA analysis. Sampling is quick and straightforward, while the subsequent DNA extraction is more challenging due to the "stickiness", rigidity and size of the tape. We have developed, validated and implemented a simple and efficient direct lysis DNA extraction protocol for adhesive tapes that requires limited manual labour. The method uses Chelex beads and is applied with SceneSafe FAST tape. This direct lysis protocol provided higher mean DNA yields than PrepFiler Express BTA on Automate Express, although the differences were not significant when using clothes worn in a controlled fashion as reference material (p=0.13 and p=0.34 for T-shirts and button-down shirts, respectively). Through in-house validation we show that the method is fit for purpose for application in casework, as it provides high DNA yields and amplifiability, as well as good reproducibility and DNA extract stability. After implementation in casework, the proportion of extracts with DNA concentrations above 0.01 ng/μL increased from 71% to 76%. Apart from providing higher DNA yields compared with the previous method, the introduction of the developed direct lysis protocol also reduced the amount of manual labour by half and doubled the potential throughput for tapes at the laboratory. Generally, simplified manual protocols can serve as a cost-effective alternative to sophisticated automation solutions when the aim is to enable high-throughput DNA extraction of complex crime scene samples. Copyright © 2016 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  1. Using shape contexts method for registration of contra lateral breasts in thermal images.

    PubMed

    Etehadtavakol, Mahnaz; Ng, Eddie Yin-Kwee; Gheissari, Niloofar

    2014-12-10

    To achieve symmetric boundaries for the left and right breasts in thermal images by registration. The proposed registration method consists of two steps. In the first step, shape context, an approach presented by Belongie and Malik, was applied to register the two breast boundaries. Shape context is an approach to measuring shape similarity. Two sets of finite sample points from the contours of the two breasts are presented, and the correspondences between the two shapes are found: for each sample point, the point with the most similar shape context is obtained. In this study, an aligning transformation that maps one shape onto the other was then estimated from these correspondences. The use of a thin-plate spline permitted good estimation of a plane transformation capable of mapping arbitrary points from one shape onto the other. The obtained aligning transformation of boundary points was applied successfully to map the interior points of the two breasts. Some advantages of using the shape context method in this work are as follows: (1) no special landmarks or key points are needed; (2) it is tolerant to all common shape deformations; and (3) although it is uncomplicated and straightforward to use, it gives a remarkably powerful descriptor for point sets, significantly improving point set registration. Results are very promising. The proposed algorithm was implemented for 32 cases, and boundary registration was performed perfectly in 28 of them. We used the shape context method, which is simple and easy to implement, to achieve symmetric boundaries for the left and right breasts in thermal images.
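
    The shape-context descriptor used in the first step is a log-polar histogram of the positions of all other boundary sample points relative to a given point. A minimal sketch (the 5 x 12 binning is a conventional choice; production implementations also normalize radii by the mean pairwise distance for scale invariance):

```python
import numpy as np

def shape_context(points, index, n_r=5, n_theta=12):
    """Log-polar histogram of all other sample points relative to
    points[index] -- the shape-context descriptor of Belongie and
    Malik, in sketch form."""
    delta = np.delete(points, index, axis=0) - points[index]
    r = np.hypot(delta[:, 0], delta[:, 1])
    theta = np.mod(np.arctan2(delta[:, 1], delta[:, 0]), 2 * np.pi)
    # Log-spaced radial edges, padded so min/max distances fall inside.
    r_edges = np.logspace(np.log10(r.min()) - 1e-6,
                          np.log10(r.max()) + 1e-6, n_r + 1)
    t_edges = np.linspace(0.0, 2.0 * np.pi, n_theta + 1)
    hist, _, _ = np.histogram2d(r, theta, bins=[r_edges, t_edges])
    return hist  # shape (n_r, n_theta): radius bins x angle bins
```

    Matching points by comparing these histograms (e.g. with a chi-squared distance) yields the correspondences from which the thin-plate spline transformation is then estimated.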

  2. Theory of the Quantized Hall Conductance in Periodic Systems: a Topological Analysis.

    NASA Astrophysics Data System (ADS)

    Czerwinski, Michael Joseph

    The integral quantization of the Hall conductance in two-dimensional periodic systems is investigated from a topological point of view. Attention is focused on the contributions from the electronic sub-bands which arise from perturbed Landau levels. After reviewing the theoretical work leading to the identification of the Hall conductance as a topological quantum number, both a determination and an interpretation of these quantized values for the sub-band conductances are made. It is shown that the Hall conductance of each sub-band can be regarded as the sum of two terms which will be referred to as classical and nonclassical. Although each of these contributions individually leads to a fractional conductance, the sum of these two contributions does indeed yield an integer. These integral conductances are found to be given by the solution of a simple Diophantine equation which depends on the periodic perturbation. A connection between the quantized value of the Hall conductance and the covering of real space by the zeroes of the sub-band wavefunctions allows for a determination of these conductances under more general potentials. A method is described for obtaining the conductance values from only those states bordering the Brillouin zone, and not the states in its interior. This method is demonstrated to give Hall conductances in agreement with those obtained from the Diophantine equation for the sinusoidal potential case explored earlier. Generalizing a simple gauge invariance argument from real space to k-space, a k-space 'vector potential' is introduced. This allows for an explicit identification of the Hall conductance with the phase winding number of the sub-band wavefunction around the Brillouin zone.
The previously described division of the Hall conductance into classical and nonclassical contributions is in this way made more rigorous; based on periodicity considerations alone, these terms are identified as the winding numbers associated with (i) the basis states and (ii) the coefficients of these basis states, respectively. In this way a general Diophantine equation, independent of the periodic potential, is obtained. Finally, the use of the 'parallel transport' of state vectors in the determination of an overall phase convention for these states is described. This is seen to lead to a simple and straightforward method for determining the Hall conductance. This method is based on the states directly, without reference to the particular component wavefunctions of these states. Mention is made of the generality of calculations of this type, within the context of the geometric (or Berry) phases acquired by systems under an adiabatic modification of their environment.
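
    The abstract does not write out the "simple Diophantine equation"; as an assumption, the standard TKNN-type form is sketched here. For magnetic flux p/q per unit cell (with gcd(p, q) = 1), the r-th gap satisfies r = q*s + p*t with the constraint |t| <= q/2, and the integer t gives the Hall conductance:

```python
def hall_conductance(p, q, r):
    """Brute-force solution of the Diophantine equation
    r = q*s + p*t under the constraint |t| <= q/2; returns (t, s),
    where t is the integer Hall conductance of the r-th gap.
    This TKNN-type form is an assumed stand-in, not necessarily
    the exact equation of the thesis."""
    for t in range(-(q // 2), q // 2 + 1):
        if (r - p * t) % q == 0:
            return t, (r - p * t) // q
    raise ValueError("no solution; check that gcd(p, q) = 1")
```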

  3. Polyaniline nanofibers: a unique polymer nanostructure for versatile applications.

    PubMed

    Li, Dan; Huang, Jiaxing; Kaner, Richard B

    2009-01-20

    Known for more than 150 years, polyaniline is the oldest and potentially one of the most useful conducting polymers because of its facile synthesis, environmental stability, and simple acid/base doping/dedoping chemistry. Because a nanoform of this polymer could offer new properties or enhanced performance, nanostructured polyaniline has attracted a great deal of interest during the past few years. This Account summarizes our recent research on the syntheses, processing, properties, and applications of polyaniline nanofibers. By monitoring the nucleation behavior of polyaniline, we demonstrate that high-quality nanofibers can be readily produced in bulk quantity using the conventional chemical oxidative polymerization of aniline. The polyaniline nanostructures formed using this simple method have led to a number of exciting discoveries. For example, we can readily prepare aqueous polyaniline colloids by purifying polyaniline nanofibers and controlling the pH. The colloids formed are self-stabilized via electrostatic repulsions without the need for any chemical modification or steric stabilizer, thus providing a simple and environmentally friendly way to process this polymer. An unusual nanoscale photothermal effect called "flash welding", which we discovered with polyaniline nanofibers, has led to the development of new techniques for making asymmetric polymer membranes and patterned nanofiber films and creating polymer-based nanocomposites. We also demonstrate the use of flash-welded polyaniline films for monolithic actuators. Taking advantage of the unique reduction/oxidation chemistry of polyaniline, we can decorate polyaniline nanofibers with metal nanoparticles through in situ reduction of selected metal salts. The resulting polyaniline/metal nanoparticle composites show promise for use in ultrafast nonvolatile memory devices and for chemical catalysis. 
In addition, the use of polyaniline nanofibers or their composites can significantly enhance the sensitivity, selectivity, and response time of polyaniline-based chemical sensors. By combining straightforward synthesis and composite formation with exceptional solution processability, we have developed a range of new useful functionalities. Further research on nanostructured conjugated polymers holds promise for even more exciting discoveries and intriguing applications.

  4. Optical Characterization of Light-Bending Mechanisms in Photonic Crystals with Simple Cubic Symmetry

    NASA Astrophysics Data System (ADS)

    Frey, Brian James

    For much of Earth's history, light was reputed to be an intangible, intractable, and transient quantity, but our understanding of light has since been revolutionized. The flow of electromagnetic energy through space can today be manipulated with a degree of precision and control once only dreamed of; rapidly developing technologies can create, guide, bend, and detect light to produce useful energy and information. One field where these technologies are most relevant is the field of light trapping, which concerns the harvesting of incident photons within a limited space by scattering, slowing, or otherwise prolonging and enhancing their interaction with matter. Over the past few decades, a class of materials, called photonic crystals (PCs), has emerged that is ideally suited for this task. This is because their wavelength-scale periodicity in one, two, or three dimensions can be designed to alter the dispersion relation and photonic density-of-states in a controllable manner. In this work, a TiO2 simple cubic PC with high dielectric contrast ( > 4:1) is fabricated with a lattice constant of 450 nm, and a newly discovered light-trapping mechanism is demonstrated, which bends light by 90 degrees and enhances optical absorption by one to two orders-of-magnitude over that in a reference film of the same thickness. It is shown that, for wavelengths from 450-950 nm, the achievable enhancement factor for this structure surpasses, by multiple times, the theoretical limit of 4n² derived under the assumption of an ergodic system. These results derive directly from the symmetry of the simple cubic lattice and are fundamental in nature, not depending on the material used or on the method of fabrication.
The light trapping capability of these PCs has straightforward applications that would be useful in a variety of areas where increased light-matter interaction is desirable, such as white-light generation, thin-film solar cells, photocatalytic pollutant degradation and hydrogen fuel production, and chemical sensing.
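
    The ergodic 4n² enhancement limit quoted above is easy to evaluate. For TiO2, assuming a visible-range refractive index of roughly 2.5 (an assumed illustrative value, not one given in the abstract), the limit is 25:

```python
def ergodic_limit(n):
    """Classical light-trapping absorption-enhancement limit
    4 * n**2 (the Yablonovitch limit) for a medium of refractive
    index n, derived under the ergodic assumption."""
    return 4.0 * n ** 2

limit_tio2 = ergodic_limit(2.5)  # 25.0 for an assumed n = 2.5
```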

  5. Methods for Assessing College Student Use of Alcohol and Other Drugs. A Prevention 101 Series Publication

    ERIC Educational Resources Information Center

    Higher Education Center for Alcohol and Other Drug Abuse and Violence Prevention, 2008

    2008-01-01

    This guide offers a straightforward method for gathering and reporting student survey data on substance use-related problems. It will be of particular interest to program directors for AOD prevention programs on campus, or to members of a campus-based task force or campus and community coalition that is charged with assessing the need for new…

  6. Neutron-Encoded Protein Quantification by Peptide Carbamylation

    NASA Astrophysics Data System (ADS)

    Ulbrich, Arne; Merrill, Anna E.; Hebert, Alexander S.; Westphall, Michael S.; Keller, Mark P.; Attie, Alan D.; Coon, Joshua J.

    2014-01-01

    We describe a chemical tag for duplex proteome quantification using neutron encoding (NeuCode). The method utilizes the straightforward, efficient, and inexpensive carbamylation reaction. We demonstrate the utility of NeuCode carbamylation by accurately measuring quantitative ratios from tagged yeast lysates mixed in known ratios and by applying this method to quantify differential protein expression in mice fed either a control or a high-fat diet.

  7. Exact Solutions for the Integrable Sixth-Order Drinfeld-Sokolov-Satsuma-Hirota System by the Analytical Methods.

    PubMed

    Manafian Heris, Jalil; Lakestani, Mehrdad

    2014-01-01

    We establish exact solutions, including periodic wave and solitary wave solutions, for the integrable sixth-order Drinfeld-Sokolov-Satsuma-Hirota system. We solve this system using the generalized (G'/G)-expansion and generalized tanh-coth methods. These methods are developed for searching for exact travelling wave solutions of nonlinear partial differential equations. It is shown that these methods, with the help of symbolic computation, provide a straightforward and powerful mathematical tool for solving nonlinear partial differential equations.
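For reference, the standard ansatz of the (G'/G)-expansion method named above, in its general form (the balancing order m and the auxiliary-equation parameters λ, μ are determined per equation; none of these are taken from this record):

```latex
u(\xi) = a_0 + \sum_{i=1}^{m} a_i \left( \frac{G'(\xi)}{G(\xi)} \right)^{i},
\qquad G''(\xi) + \lambda\, G'(\xi) + \mu\, G(\xi) = 0
```

A travelling-wave substitution such as \(\xi = x - ct\) first reduces the PDE to an ODE in \(\xi\), after which the coefficients \(a_i\) are fixed by balancing and by collecting powers of \(G'/G\).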

  8. Forces in General Relativity

    ERIC Educational Resources Information Center

    Ridgely, Charles T.

    2010-01-01

    Many textbooks dealing with general relativity do not demonstrate the derivation of forces in enough detail. The analyses presented herein demonstrate straightforward methods for computing forces by way of general relativity. Covariant divergence of the stress-energy-momentum tensor is used to derive a general expression of the force experienced…

  9. Teaching Stress Physiology Using Zebrafish ("Danio Rerio")

    ERIC Educational Resources Information Center

    Cooper, Michael; Dhawale, Shree; Mustafa, Ahmed

    2009-01-01

    A straightforward and inexpensive laboratory experiment is presented that investigates the physiological stress response of zebrafish after a 5 degree C increase in water temperature. This experiment is designed for an undergraduate physiology lab and allows students to learn the scientific method and relevant laboratory techniques without causing…

  10. "Quantum Interference with Slits" Revisited

    ERIC Educational Resources Information Center

    Rothman, Tony; Boughn, Stephen

    2011-01-01

    Marcella has presented a straightforward technique employing the Dirac formalism to calculate single- and double-slit interference patterns. He claims that no reference is made to classical optics or scattering theory and that his method therefore provides a purely quantum mechanical description of these experiments. He also presents his…

  11. Theory of the fundamental laser linewidth

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldberg, P.; Milonni, P.W.; Sundaram, B.

    1991-08-01

    The theory of the laser linewidth is formulated to account for arbitrarily large output couplings and spatial hole burning. We show explicitly that the linewidth can be interpreted in terms of either spontaneous-emission noise or the amplification of vacuum field modes leaking into the cavity, depending on the ordering of operators in the correlation function determining the laser spectrum. This allows us to derive the Petermann K factor associated with excess "spontaneous-emission noise" in a physically transparent and mathematically simple way, without the need to introduce adjoint modes of the resonator. It also allows us to straightforwardly include spatial-hole-burning effects, which are found to increase the K factor and the linewidth in high-gain systems appreciably.

  12. Tissue differentiation by diffuse reflectance spectroscopy for automated oral and maxillofacial laser surgery: ex vivo pilot study

    NASA Astrophysics Data System (ADS)

    Zam, Azhar; Stelzle, Florian; Tangermann-Gerk, Katja; Adler, Werner; Nkenke, Emeka; Schmidt, Michael; Douplik, Alexandre

    2010-02-01

    Remote laser surgery lacks haptic feedback during the laser ablation of tissue. Hence, there is a risk of iatrogenic damage to or destruction of anatomical structures such as nerves or salivary glands. Diffuse reflectance spectroscopy provides a straightforward and simple approach to optical tissue differentiation. We measured diffuse reflectance from seven tissue types ex vivo. We applied Linear Discriminant Analysis (LDA) to differentiate the seven tissue types and computed the area under the ROC curve (AUC). Special emphasis was placed on the identification of nerves and salivary glands as the most crucial tissues for maxillofacial surgery. The results show promise for differentiating tissues as guidance for oral and maxillofacial laser surgery by means of diffuse reflectance.

  13. The 57Fe Mössbauer parameters of pyrite and marcasite with different provenances

    USGS Publications Warehouse

    Evans, B.J.; Johnson, R.G.; Senftle, F.E.; Cecil, C.B.; Dulong, F.

    1982-01-01

    The Mössbauer parameters of pyrite and marcasite exhibit appreciable variations, which bear no simple relationship to the geological environment in which they occur but appear to be selectively influenced by impurities, especially arsenic, in the pyrite lattice. Quantitative and qualitative determinations of pyrite/marcasite mechanical mixtures are straightforward at 298 K and 77 K but do require least-squares computer fittings and are limited to accuracies ranging from ±5 to ±15 per cent by uncertainties in the parameter values of the pure phases. The methodology and results of this investigation are directly applicable to coals for which the presence and relative amounts of pyrite and marcasite could be of considerable genetic significance.
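The quantification step described above, estimating the fraction of each phase in a mechanical mixture by least-squares fitting against pure-phase spectra, can be sketched in minimal form. This is a hypothetical one-parameter fit against reference spectra, not the authors' fitting code:

```python
# Estimate the fraction f of phase A in a mixture spectrum assumed to obey
#   mix = f * pure_a + (1 - f) * pure_b
# via a closed-form one-parameter least-squares fit over all channels.
def mixture_fraction(mix, pure_a, pure_b):
    """Least-squares estimate of the fraction of pure_a in mix."""
    num = sum((m - b) * (a - b) for m, a, b in zip(mix, pure_a, pure_b))
    den = sum((a - b) ** 2 for a, b in zip(pure_a, pure_b))
    return num / den

# Toy two-channel example: a 30/70 blend of two reference "spectra".
f = mixture_fraction([0.3, 0.7], [1.0, 0.0], [0.0, 1.0])
```

As the abstract notes, uncertainties in the pure-phase parameter values propagate into the recovered fraction, which is what limits such determinations to roughly ±5 to ±15 per cent.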

  14. Action languages: Dimensions, effects

    NASA Technical Reports Server (NTRS)

    Hayes, Daniel G.; Streeter, Gordon

    1989-01-01

    Dimensions of action languages are discussed for communication between humans and machines, and the message-handling capabilities of object-oriented programming systems are examined. Design of action languages is seen to be very contextual. Economical and effective design will depend on features of situations, the tasks intended to be accomplished, and the nature of the devices themselves. Current object-oriented systems turn out to have fairly simple and straightforward message-handling facilities, which in themselves do little to buffer action or even, in some cases, to handle competing messages. Even so, it is possible to program a certain amount of discretion about how they react to messages. Such thoughtfulness, and perhaps relative autonomy of program modules, seems prerequisite to future systems that handle complex interactions in changing situations.

  15. On the physical parameters for Centaurus X-3 and Hercules X-1.

    NASA Technical Reports Server (NTRS)

    Mccluskey, G. E., Jr.; Kondo, Y.

    1972-01-01

    It is shown how upper and lower limits on the physical parameters of X-ray sources in Centaurus X-3 and Hercules X-1 may be determined from a reasonably simple and straightforward consideration. The basic assumption is that component A (the non-X-ray-emitting component) is not a star collapsing toward its Schwarzschild radius (i.e., a black hole). This assumption appears reasonable since component A (the central occulting star) appears to physically occult component X. If component A is a 'normal' star, both observation and theory indicate that its mass is not greater than about 60 solar masses. The possibility that component X is either a neutron star or a white dwarf is considered.

  16. The factor structure and screening utility of the Social Interaction Anxiety Scale.

    PubMed

    Rodebaugh, Thomas L; Woods, Carol M; Heimberg, Richard G; Liebowitz, Michael R; Schneier, Franklin R

    2006-06-01

    The widely used Social Interaction Anxiety Scale (SIAS; R. P. Mattick & J. C. Clarke, 1998) possesses favorable psychometric properties, but questions remain concerning its factor structure and item properties. Analyses included 445 people with social anxiety disorder and 1,689 undergraduates. Simple unifactorial models fit poorly, and models that accounted for differences due to item wording (i.e., reverse scoring) provided superior fit. It was further found that clients and undergraduates approached some items differently, and the SIAS may be somewhat overly conservative in selecting analogue participants from an undergraduate sample. Overall, this study provides support for the excellent properties of the SIAS's straightforwardly worded items, although questions remain regarding its reverse-scored items. Copyright 2006 APA, all rights reserved.

  17. A toy model for the yield of a tamped fission bomb

    NASA Astrophysics Data System (ADS)

    Reed, B. Cameron

    2018-02-01

    A simple expression is developed for estimating the yield of a tamped fission bomb, that is, a basic nuclear weapon comprising a fissile core jacketed by a surrounding neutron-reflecting tamper. This expression is based on modeling the nuclear chain reaction as a geometric progression in combination with a previously published expression for the threshold-criticality condition for such a core. The derivation is especially straightforward, as it requires no knowledge of diffusion theory and should be accessible to students of both physics and policy. The calculation can be set up as a single page spreadsheet. Application to the Little Boy and Fat Man bombs of World War II gives results in reasonable accord with published yield estimates for these weapons.
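The geometric-progression picture of the chain reaction described above can be sketched numerically. This is an illustrative toy with assumed parameter values, not the paper's yield expression:

```python
# Total fissions after g generations of a chain reaction with effective
# multiplication factor k, modeled as a geometric series:
#   N = n0 * (1 + k + k^2 + ... + k^(g-1)) = n0 * (k^g - 1) / (k - 1)
def total_fissions(k: float, generations: int, n0: float = 1.0) -> float:
    """Sum of the geometric series of fissions over all generations."""
    if k == 1.0:
        return n0 * generations
    return n0 * (k ** generations - 1.0) / (k - 1.0)

# Roughly 180 MeV of recoverable energy per fission (assumed round figure),
# converted to joules.
ENERGY_PER_FISSION_J = 180e6 * 1.602e-19

def toy_yield_joules(k: float, generations: int) -> float:
    """Toy yield: total fissions times energy released per fission."""
    return total_fissions(k, generations) * ENERGY_PER_FISSION_J
```

In a real core, k falls as the material expands, which is why the paper couples this progression to a threshold-criticality condition; the fixed-k sketch here only illustrates the series itself.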

  18. A von Hamos x-ray spectrometer based on a segmented-type diffraction crystal for single-shot x-ray emission spectroscopy and time-resolved resonant inelastic x-ray scattering studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Szlachetko, J.; Institute of Physics, Jan Kochanowski University, 25-406 Kielce; Nachtegaal, M.

    2012-10-15

    We report on the design and performance of a wavelength-dispersive spectrometer based on the von Hamos geometry. The spectrometer is equipped with a segmented-type crystal for x-ray diffraction and provides an energy resolution on the order of 0.25 eV to 1 eV over an energy range of 8000 eV-9600 eV. The use of a segmented crystal permits a simple and straightforward crystal preparation that preserves the spectrometer's resolution and efficiency. Application of the spectrometer to time-resolved resonant inelastic x-ray scattering and single-shot x-ray emission spectroscopy is demonstrated.

  19. A zero-equation turbulence model for two-dimensional hybrid Hall thruster simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cappelli, Mark A., E-mail: cap@stanford.edu; Young, Christopher V.; Cha, Eunsun

    2015-11-15

    We present a model for electron transport across the magnetic field of a Hall thruster and integrate this model into 2-D hybrid particle-in-cell simulations. The model is based on a simple scaling of the turbulent electron energy dissipation rate and the assumption that this dissipation results in Ohmic heating. Implementing the model into 2-D hybrid simulations is straightforward and leverages the existing framework for solving the electron fluid equations. The model recovers the axial variation in the mobility seen in experiments, predicting the generation of a transport barrier which anchors the region of plasma acceleration. The predicted xenon neutral and ion velocities are found to be in good agreement with laser-induced fluorescence measurements.

  20. How do I write a scientific article?-A personal perspective.

    PubMed

    Lippi, Giuseppe

    2017-10-01

    Scientific writing is not an easy task. Although there is no single, universally agreed strategy for assembling a successful scientific article, it is undeniable that some basic notions, gathered over decades of experience, may help increase the chances of acceptance of a scientific manuscript. Therefore, the purpose of this article is to present a personal and arbitrary perspective on how to write a scientific article, entailing a tentative flowchart and a checklist describing the most important aspects of each section of the manuscript. The final suggestion, which can be summarized in one simple and straightforward concept, is that you should always remember that a scientific article is meant to be read by others (i.e., referees and readers) and not by yourself.
