Sample records for straightforward method based

  1. Arrival Time Tracking of Partially Resolved Acoustic Rays with Application to Ocean Acoustic Tomography

    DTIC Science & Technology

    1991-03-01

    ocean acoustic tomography. A straightforward method of arrival time estimation, based on locating the maximum value of an interpolated arrival, was ... used with limited success for analysis of data from the December 1988 Monterey Bay Tomography Experiment. Close examination of the data revealed multiple ... estimation of arrival times along an ocean acoustic ray path is an important component of ocean acoustic tomography. A straightforward method of arrival time
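
    The record only sketches the estimator, but the core operation it names (locating the maximum of an interpolated arrival) is easy to illustrate. The sketch below is a non-authoritative reading of that idea using three-point parabolic interpolation around the peak of a sampled arrival envelope; the sampling rate and the interpolation scheme are assumptions, not details taken from the paper.

      import numpy as np

      def arrival_time(envelope, fs):
          """Estimate an arrival time by locating the maximum of the sampled
          arrival envelope and refining it with three-point parabolic
          interpolation (assumed scheme; fs is the sampling rate in Hz)."""
          k = int(np.argmax(envelope))                  # coarse peak sample
          if 0 < k < len(envelope) - 1:
              y0, y1, y2 = envelope[k - 1], envelope[k], envelope[k + 1]
              # vertex of the parabola through the three samples
              delta = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
          else:
              delta = 0.0
          return (k + delta) / fs

      # toy check: a Gaussian pulse peaking at 0.5 s, sampled at 1 kHz
      fs = 1000.0
      t = np.arange(0.0, 1.0, 1.0 / fs)
      print(arrival_time(np.exp(-0.5 * ((t - 0.5) / 0.01) ** 2), fs))  # ~0.5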

  2. The Computation of Global Viscoelastic Co- and Post-seismic Displacement in a Realistic Earth Model by Straightforward Numerical Inverse Laplace Integration

    NASA Astrophysics Data System (ADS)

    Tang, H.; Sun, W.

    2016-12-01

    The theoretical computation of dislocation effects in a given earth model is necessary to explain observations of the co- and post-seismic deformation of earthquakes. For this purpose, computation theories based on layered or pure half-space models [Okada, 1985; Okubo, 1992; Wang et al., 2006] and on spherically symmetric earth models [Piersanti et al., 1995; Pollitz, 1997; Sabadini & Vermeersen, 1997; Wang, 1999] have been proposed. For modern high-precision displacement-based observations such as GPS, the compressibility, the curvature and the continuous variation of the radial structure of the Earth should be taken into account simultaneously. Therefore, Tanaka et al. [2006; 2007] computed global displacement and gravity variation by combining the reciprocity theorem (RPT) [Okubo, 1993] with numerical inverse Laplace integration (NIL) instead of the normal mode method [Peltier, 1974]. Without using RPT, we follow the straightforward numerical integration of co-seismic deformation given by Sun et al. [1996] to present a straightforward numerical inverse Laplace integration method (SNIL). This method is used to compute the co- and post-seismic displacement of point dislocations buried in a spherically symmetric, self-gravitating, viscoelastic and multilayered earth model, and is easily extended to geoid and gravity applications. Compared with pre-existing methods, this method is more straightforward and time-saving, mainly because we sum the associated Legendre polynomials and dislocation Love numbers before using the Riemann-Mellin formula to implement the SNIL.
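
    The abstract gives no formulas, but the basic operation underneath SNIL, a direct numerical inverse Laplace transform, can be sketched. The following evaluates the Bromwich integral along the contour s = gamma + i*omega with a plain trapezoidal rule; the contour abscissa, truncation frequency and test transform are illustrative assumptions, and the actual method additionally sums associated Legendre polynomials and dislocation Love numbers before the inversion.

      import numpy as np

      def inverse_laplace(F, t, gamma=1.0, wmax=200.0, n=20000):
          """Crude Bromwich inversion:
          f(t) = (exp(gamma*t)/pi) * Int_0^wmax Re[F(gamma+iw) e^(iwt)] dw,
          with gamma chosen to the right of all singularities of F."""
          w = np.linspace(0.0, wmax, n)
          integrand = np.real(F(gamma + 1j * w) * np.exp(1j * w * t))
          return np.exp(gamma * t) / np.pi * np.trapz(integrand, w)

      # sanity check on a known pair: F(s) = 1/(s+1)  <->  f(t) = exp(-t)
      print(inverse_laplace(lambda s: 1.0 / (s + 1.0), t=2.0))  # ~exp(-2) = 0.135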

  3. Simple method to detect triacylglycerol biosynthesis in a yeast-based recombinant system

    USDA-ARS's Scientific Manuscript database

    Standard methods to quantify the activity of triacylglycerol (TAG) synthesizing enzymes DGAT and PDAT (TAG-SE) require a sensitive but rather arduous laboratory assay based on radio-labeled substrates. Here we describe two straightforward methods to detect TAG production in baker’s yeast Saccharomyc...

  4. Nodal Analysis Optimization Based on the Use of Virtual Current Sources: A Powerful New Pedagogical Method

    ERIC Educational Resources Information Center

    Chatzarakis, G. E.

    2009-01-01

    This paper presents a new pedagogical method for nodal analysis optimization based on the use of virtual current sources, applicable to any linear electric circuit (LEC), regardless of its complexity. The proposed method leads to straightforward solutions, mostly arrived at by inspection. Furthermore, the method is easily adapted to computer…
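
    The record is about pedagogy rather than computation, but the destination of any nodal analysis is the linear system G v = i. As a point of reference, a minimal sketch (component values invented for illustration, not taken from the paper):

      import numpy as np

      # Two-node circuit: a 1 A source into node 1; R1 = 2 ohm from node 1
      # to ground, R2 = 4 ohm between nodes 1 and 2, R3 = 8 ohm from node 2
      # to ground.  Nodal equations: G @ v = i, G being the conductance matrix.
      G = np.array([[1/2 + 1/4, -1/4],
                    [-1/4,      1/4 + 1/8]])
      i = np.array([1.0, 0.0])
      print(np.linalg.solve(G, i))   # node voltages [~1.71, ~1.14] V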

  5. Ion beam induced 18F-radiofluorination: straightforward synthesis of gaseous radiotracers for the assessment of regional lung ventilation using positron emission tomography.

    PubMed

    Gómez-Vallejo, V; Lekuona, A; Baz, Z; Szczupak, B; Cossío, U; Llop, J

    2016-09-29

    A simple, straightforward and efficient method for the synthesis of [18F]CF4 and [18F]SF6 based on an ion beam-induced isotopic exchange reaction is presented. Positron emission tomography ventilation studies in rodents using [18F]CF4 showed a uniform distribution of the radiofluorinated gas within the lungs and rapid elimination after discontinuation of the administration.

  6. Energy Expansion for the Period of Anharmonic Oscillators by the Method of Lindstedt-Poincare

    ERIC Educational Resources Information Center

    Fernandez, Francisco M.

    2004-01-01

    A simple, straightforward and efficient method is proposed for the calculation of the period of anharmonic oscillators as an energy series. The approach is based on perturbation theory and the method of Lindstedt-Poincare.
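
    The abstract quotes no formulas; for a concrete instance of the kind of result such an expansion yields, the first-order Lindstedt-Poincare frequency of the Duffing oscillator (a standard textbook case, not necessarily the paper's example) is

      \ddot{x} + x + \epsilon x^{3} = 0, \qquad
      \omega(a) = 1 + \tfrac{3}{8}\epsilon a^{2} + O(\epsilon^{2}), \qquad
      T(a) = \frac{2\pi}{\omega(a)},

    where a is the oscillation amplitude; the paper's contribution is to recast such series in terms of the energy rather than the amplitude.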

  7. The Effects of Computer-Supported Inquiry-Based Learning Methods and Peer Interaction on Learning Stellar Parallax

    ERIC Educational Resources Information Center

    Ruzhitskaya, Lanika

    2011-01-01

    The presented research study investigated the effects of computer-supported inquiry-based learning and peer interaction methods on the effectiveness of learning a scientific concept. The stellar parallax concept was selected as a basic yet important scientific construct in astronomy, which is based on a straightforward relationship of several…

  8. Determining flexor-tendon repair techniques via soft computing

    NASA Technical Reports Server (NTRS)

    Johnson, M.; Firoozbakhsh, K.; Moniem, M.; Jamshidi, M.

    2001-01-01

    An SC-based multi-objective decision-making method for determining the optimal flexor-tendon repair technique from experimental and clinical survey data, and with variable circumstances, was presented. Results were compared with those from the Taguchi method. Using the Taguchi method results in the need to perform ad-hoc decisions when the outcomes for individual objectives are contradictory to a particular preference or circumstance, whereas the SC-based multi-objective technique provides a rigorous straightforward computational process in which changing preferences and importance of differing objectives are easily accommodated. Also, adding more objectives is straightforward and easily accomplished. The use of fuzzy-set representations of information categories provides insight into their performance throughout the range of their universe of discourse. The ability of the technique to provide a "best" medical decision given a particular physician, hospital, patient, situation, and other criteria was also demonstrated.
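
    As a rough, non-authoritative sketch of the aggregation step in such an SC-based (fuzzy) multi-objective ranking, with membership functions, objectives, scores and weights all invented for illustration:

      import numpy as np

      def ramp(x, lo, hi):
          """Linear 'higher is better' fuzzy membership on [lo, hi]."""
          return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

      # Two hypothetical repair techniques scored on three objectives
      # (e.g. tensile strength, gap resistance, ease of use), 0-10 scale.
      scores = np.array([[7.0, 5.5, 8.0],     # technique A
                         [8.5, 6.0, 4.0]])    # technique B
      weights = np.array([0.5, 0.3, 0.2])     # preferences; changing them, or
                                              # appending objectives, is trivial
      overall = ramp(scores, 0.0, 10.0) @ weights
      print("best technique:", "AB"[overall.argmax()])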

  9. Determining flexor-tendon repair techniques via soft computing.

    PubMed

    Johnson, M; Firoozbakhsh, K; Moniem, M; Jamshidi, M

    2001-01-01

    An SC-based multi-objective decision-making method for determining the optimal flexor-tendon repair technique from experimental and clinical survey data, and with variable circumstances, was presented. Results were compared with those from the Taguchi method. Using the Taguchi method results in the need to perform ad-hoc decisions when the outcomes for individual objectives are contradictory to a particular preference or circumstance, whereas the SC-based multi-objective technique provides a rigorous straightforward computational process in which changing preferences and importance of differing objectives are easily accommodated. Also, adding more objectives is straightforward and easily accomplished. The use of fuzzy-set representations of information categories provides insight into their performance throughout the range of their universe of discourse. The ability of the technique to provide a "best" medical decision given a particular physician, hospital, patient, situation, and other criteria was also demonstrated.

  10. Wronskian Method for Bound States

    ERIC Educational Resources Information Center

    Fernandez, Francisco M.

    2011-01-01

    We propose a simple and straightforward method based on Wronskians for the calculation of bound-state energies and wavefunctions of one-dimensional quantum-mechanical problems. We explicitly discuss the asymptotic behaviour of the wavefunction and show that the allowed energies make the divergent part vanish. As illustrative examples we consider…

  11. A branch-migration based fluorescent probe for straightforward, sensitive and specific discrimination of DNA mutations

    PubMed Central

    Xiao, Xianjin; Wu, Tongbo; Xu, Lei; Chen, Wei

    2017-01-01

    Genetic mutations are important biomarkers for cancer diagnostics and surveillance. Preferably, the methods for mutation detection should be straightforward, highly specific and sensitive to low-level mutations within various sequence contexts, fast and applicable at room-temperature. Though some of the currently available methods have shown very encouraging results, their discrimination efficiency is still very low. Herein, we demonstrate a branch-migration based fluorescent probe (BM probe) which is able to identify the presence of known or unknown single-base variations at abundances down to 0.3%-1% within 5 min, even in highly GC-rich sequence regions. The discrimination factors between the perfect-match target and single-base mismatched target are determined to be 89–311 by measurement of their respective branch-migration products via polymerase elongation reactions. The BM probe not only enabled sensitive detection of two types of EGFR-associated point mutations located in GC-rich regions, but also successfully identified the BRAF V600E mutation in the serum from a thyroid cancer patient which could not be detected by the conventional sequencing method. The new method would be an ideal choice for high-throughput in vitro diagnostics and precise clinical treatment. PMID:28201758

  12. A Simple Estimation Method for Aggregate Government Outsourcing

    ERIC Educational Resources Information Center

    Minicucci, Stephen; Donahue, John D.

    2004-01-01

    The scholarly and popular debate on the delegation to the private sector of governmental tasks rests on an inadequate empirical foundation, as no systematic data are collected on direct versus indirect service delivery. We offer a simple method for approximating levels of service outsourcing, based on relatively straightforward combinations of…

  13. Straightforward rapid spectrophotometric quantification of total cyanogenic glycosides in fresh and processed cassava products.

    PubMed

    Tivana, Lucas Daniel; Da Cruz Francisco, Jose; Zelder, Felix; Bergenståhl, Bjorn; Dejmek, Petr

    2014-09-01

    In this study, we extend pioneering studies and demonstrate the straightforward applicability of the corrin-based chemosensor aquacyanocobyrinic acid (ACCA) for the instantaneous detection and rapid quantification of endogenous cyanide in fresh and processed cassava roots. Hydrolytically liberated endogenous cyanide from cyanogenic glycosides (CNp) reacts with ACCA to form dicyanocobyrinic acid (DCCA), accompanied by a colour change from orange to violet. The method was successfully tested on various cassava samples containing between 6 and 200 mg equiv. HCN/kg, as verified with isonicotinate/1,3-dimethylbarbiturate as an independent method. The affinity of the ACCA sensor for cyanide is high and coordination occurs fast, so the colorimetric response can be monitored instantaneously with spectrophotometric methods. Direct applications of the sensor without the need for extensive and laborious extraction processes are demonstrated in water-extracted samples, in acid-extracted samples, and directly on juice drops. ACCA showed high precision, with a standard deviation (STDV) between 0.03 and 0.06, and high accuracy (93-96%). Overall, the ACCA procedure is straightforward, safe and easily performed. In a proof-of-concept study, rapid screening of ten samples within 20 min was demonstrated.

  14. A Method of Assembling Compact Coherent Fiber-Optic Bundles

    NASA Technical Reports Server (NTRS)

    Martin, Stefan; Liu, Duncan; Levine, Bruce Martin; Shao, Michael; Wallace, James

    2007-01-01

    A method of assembling coherent fiber-optic bundles in which all the fibers are packed together as closely as possible is undergoing development. The method is based, straightforwardly, on the established concept of hexagonal close packing; hence, the development efforts are focused on fixtures and techniques for practical implementation of hexagonal close packing of parallel optical fibers.

  15. Photogrammetric Modeling and Image-Based Rendering for Rapid Virtual Environment Creation

    DTIC Science & Technology

    2004-12-01

    area and different methods have been proposed. Pertinent methods include: Camera Calibration, Structure from Motion, Stereo Correspondence, and Image-Based Rendering. 1.1.1 Camera Calibration: Determining the 3D structure of a model from multiple views becomes simpler if the intrinsic (or internal) ... can introduce significant nonlinearities into the image. We have found that camera calibration is a straightforward process which can simplify the

  16. Label-free SERS detection of Salmonella Typhimurium on DNA aptamer modified AgNR substrates

    USDA-ARS's Scientific Manuscript database

    A straightforward label-free method based on aptamer binding and surface-enhanced Raman spectroscopy (SERS) has been developed for the detection of Salmonella Typhimurium, an important foodborne pathogen that causes gastroenteritis in both humans and animals. Surface of the SERS-active silver nanor...

  17. Three-dimensional motor schema based navigation

    NASA Technical Reports Server (NTRS)

    Arkin, Ronald C.

    1989-01-01

    Reactive schema-based navigation is possible in space domains by extending the methods developed for ground-based navigation found within the Autonomous Robot Architecture (AuRA). Reformulation of two-dimensional motor schemas for three-dimensional applications is a straightforward process. The manifold advantages of schema-based control persist, including modular development, amenability to distributed processing, and responsiveness to environmental sensing. Simulation results show the feasibility of this methodology for space docking operations in a cluttered work area.

  18. Methods for Assessing College Student Use of Alcohol and Other Drugs. A Prevention 101 Series Publication

    ERIC Educational Resources Information Center

    Higher Education Center for Alcohol and Other Drug Abuse and Violence Prevention, 2008

    2008-01-01

    This guide offers a straightforward method for gathering and reporting student survey data on substance use-related problems. It will be of particular interest to program directors for AOD prevention programs on campus, or to members of a campus-based task force or campus and community coalition that is charged with assessing the need for new…

  19. Extraction of quasi-straightforward-propagating photons from diffused light transmitting through a scattering medium by polarization modulation

    NASA Astrophysics Data System (ADS)

    Horinaka, Hiromichi; Hashimoto, Koji; Wada, Kenji; Cho, Yoshio; Osawa, Masahiko

    1995-07-01

    The utilization of light polarization is proposed to extract quasi-straightforward-propagating photons from diffused light transmitting through a scattering medium under continuously operating conditions. Removal of a floor level normally appearing on the dynamic range over which the extraction capability is maintained is demonstrated. By use of pulse-based observations this cw scheme of extraction of quasi-straightforward-propagating photons is directly shown to be equivalent to the use of a temporal gate in the pulse-based operation.

  20. Robust regression on noisy data for fusion scaling laws

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Verdoolaege, Geert, E-mail: geert.verdoolaege@ugent.be; Laboratoire de Physique des Plasmas de l'ERM - Laboratorium voor Plasmafysica van de KMS

    2014-11-15

    We introduce the method of geodesic least squares (GLS) regression for estimating fusion scaling laws. Based on straightforward principles, the method is easily implemented, yet it clearly outperforms established regression techniques, particularly in cases of significant uncertainty on both the response and predictor variables. We apply GLS for estimating the scaling of the L-H power threshold, resulting in estimates for ITER that are somewhat higher than predicted earlier.

  1. Maxwell iteration for the lattice Boltzmann method with diffusive scaling

    NASA Astrophysics Data System (ADS)

    Zhao, Weifeng; Yong, Wen-An

    2017-03-01

    In this work, we present an alternative derivation of the Navier-Stokes equations from Bhatnagar-Gross-Krook models of the lattice Boltzmann method with diffusive scaling. This derivation is based on the Maxwell iteration and can expose certain important features of the lattice Boltzmann solutions. Moreover, it will be seen to be much more straightforward and logically clearer than the existing approaches including the Chapman-Enskog expansion.

  2. Straightforward analytical method to determine opium alkaloids in poppy seeds and bakery products.

    PubMed

    López, Patricia; Pereboom-de Fauw, Diana P K H; Mulder, Patrick P J; Spanjer, Martien; de Stoppelaar, Joyce; Mol, Hans G J; de Nijs, Monique

    2018-03-01

    A straightforward method to determine the content of six opium alkaloids (morphine, codeine, thebaine, noscapine, papaverine and narceine) in poppy seeds and bakery products was developed and validated down to a limit of quantification (LOQ) of 0.1 mg/kg. The method was based on extraction with acetonitrile/water/formic acid, ten-fold dilution and analysis by LC-MS/MS using a pH 10 carbonate buffer. The method was applied for the analysis of 41 samples collected in 2015 in the Netherlands and Germany. All samples contained morphine, ranging from 0.2 to 240 mg/kg. The levels of codeine and thebaine ranged from below LOQ to 348 mg/kg and from below LOQ to 106 mg/kg, respectively. Sixty percent of the samples exceeded the guidance reference value of 4 mg/kg of morphine set by BfR in Germany, whereas 25% of the samples did not comply with the limits set for morphine, codeine, thebaine and noscapine by Hungarian legislation.

  3. A simple method for processing data with least square method

    NASA Astrophysics Data System (ADS)

    Wang, Chunyan; Qi, Liqun; Chen, Yongxiang; Pang, Guangning

    2017-08-01

    The least squares method is widely used in data processing and error estimation. It has become an essential technique for parameter estimation, data processing, regression analysis and experimental data fitting, and a standard tool for statistical inference. In measurement data analysis, fitting under the least squares principle is usually carried out in matrix form, solving for the final estimate and improving its accuracy. In this paper, a new method for obtaining the least squares solution is presented, which is based on algebraic computation and is relatively straightforward and easy to understand. The practicability of this method is illustrated by a concrete example.
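
    The matrix formulation the abstract alludes to is the normal-equation solution of the least squares problem; a generic illustration (not the paper's specific algebraic shortcut) for a straight-line fit:

      import numpy as np

      # Fit y = b0 + b1*x by least squares via the normal equations.
      x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
      y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])
      X = np.column_stack([np.ones_like(x), x])   # design matrix
      beta = np.linalg.solve(X.T @ X, X.T @ y)    # solves (X'X) beta = X'y
      print(beta)                                 # intercept ~1.0, slope ~2.0
      # np.linalg.lstsq(X, y, rcond=None) is the better-conditioned route.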

  4. Ultratrace level determination and quantitative analysis of kidney injury biomarkers in patient samples attained by zinc oxide nanorods

    NASA Astrophysics Data System (ADS)

    Singh, Manpreet; Alabanza, Anginelle; Gonzalez, Lorelis E.; Wang, Weiwei; Reeves, W. Brian; Hahm, Jong-In

    2016-02-01

    Determining ultratrace amounts of protein biomarkers in patient samples in a straightforward and quantitative manner is extremely important for early disease diagnosis and treatment. Here, we successfully demonstrate the novel use of zinc oxide nanorods (ZnO NRs) in the ultrasensitive and quantitative detection of two acute kidney injury (AKI)-related protein biomarkers, tumor necrosis factor (TNF)-α and interleukin (IL)-8, directly from patient samples. We first validate the ZnO NRs-based IL-8 results via comparison with those obtained from using a conventional enzyme-linked immunosorbent method in samples from 38 individuals. We further assess the full detection capability of the ZnO NRs-based technique by quantifying TNF-α, whose levels in human urine are often below the detection limits of conventional methods. Using the ZnO NR platforms, we determine the TNF-α concentrations of all 46 patient samples tested, down to the fg per mL level. Subsequently, we screen for TNF-α levels in approximately 50 additional samples collected from different patient groups in order to demonstrate a potential use of the ZnO NRs-based assay in assessing cytokine levels useful for further clinical monitoring. Our research efforts demonstrate that ZnO NRs can be straightforwardly employed in the rapid, ultrasensitive, quantitative, and simultaneous detection of multiple AKI-related biomarkers directly in patient urine samples, providing an unparalleled detection capability beyond those of conventional analysis methods. Additional key advantages of the ZnO NRs-based approach include a fast detection speed, low-volume assay condition, multiplexing ability, and easy automation/integration capability to existing fluorescence instrumentation. Therefore, we anticipate that our ZnO NRs-based detection method will be highly beneficial for overcoming the frequent challenges in early biomarker development and treatment assessment, pertaining to the facile and ultrasensitive quantification of hard-to-trace biomolecules.

  5. Straightforward fabrication of black nano silica dusting powder for latent fingerprint imaging

    NASA Astrophysics Data System (ADS)

    Komalasari, Isna; Krismastuti, Fransiska Sri Herwahyu; Elishian, Christine; Handayani, Eka Mardika; Nugraha, Willy Cahya; Ketrin, Rosi

    2017-11-01

    Imaging of a latent fingerprint pattern (also known as a fingermark) is one of the most important and accurate detection methods in forensic investigation because of the individuality of fingerprints. The detection technique relies on the mechanical adherence of fingerprint powder to the moisture and oily components of the skin left on a surface, and the particle size of the fingerprint powder is one of the critical parameters for obtaining an excellent fingerprint image. This study develops a simple, cheap and straightforward method to fabricate a nano-sized black dusting fingerprint powder based on nanosilica and applies the powder to visualize latent fingerprints. The nanostructured silica was prepared from tetraethoxysilane (TEOS) and then modified with nanocarbon, methylene blue and sodium acetate to color the powder. Finally, as a proof of principle, the ability of this black nanosilica dusting powder to image latent fingerprints is successfully demonstrated; the results show that this fingerprint powder provides a clearer fingerprint pattern than the commercial one, highlighting the potential application of nanostructured silica in forensic science.

  6. Newton-Euler Dynamic Equations of Motion for a Multi-body Spacecraft

    NASA Technical Reports Server (NTRS)

    Stoneking, Eric

    2007-01-01

    The Magnetospheric MultiScale (MMS) mission employs a formation of spinning spacecraft with several flexible appendages and thruster-based control. To understand the complex dynamic interaction of thruster actuation, appendage motion, and spin dynamics, each spacecraft is modeled as a tree of rigid bodies connected by spherical or gimballed joints. The method presented facilitates assembling by inspection the exact, nonlinear dynamic equations of motion for a multibody spacecraft suitable for solution by numerical integration. The building block equations are derived by applying Newton's and Euler's equations of motion to an "element" consisting of two bodies and one joint (spherical and gimballed joints are considered separately). Patterns in the "mass" and "force" matrices guide assembly by inspection of a general N-body tree-topology system. Straightforward linear algebra operations are employed to eliminate extraneous constraint equations, resulting in a minimum-dimension system of equations to solve. This method thus combines a straightforward, easily-extendable, easily-mechanized formulation with an efficient computer implementation.

  7. Hydrophenoxylation of internal alkynes catalysed with a heterobimetallic Cu-NHC/Au-NHC system.

    PubMed

    Lazreg, Faïma; Guidone, Stefano; Gómez-Herrera, Alberto; Nahra, Fady; Cazin, Catherine S J

    2017-02-21

    A straightforward method for the hydrophenoxylation of internal alkynes, using N-heterocyclic carbene-based copper(I) and gold(I) complexes, is described. The heterobimetallic catalytic system proceeds via dual activation of the substrates to afford the desired vinyl ether derivatives. This methodology is shown to be highly efficient and tolerates a wide range of substituted phenols and alkynes.

  8. A straightforward method for Vacuum-Ultraviolet flux measurements: The case of the hydrogen discharge lamp and implications for solid-phase actinometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fulvio, D., E-mail: daniele.fulvio@uni-jena.de, E-mail: dfu@oact.inaf.it; Brieva, A. C.; Jäger, C.

    2014-07-07

    Vacuum-Ultraviolet (VUV) radiation is responsible for the photo-processing of simple and complex molecules in several terrestrial and extraterrestrial environments. In the laboratory such radiation is commonly simulated by inexpensive and easy-to-use microwave-powered hydrogen discharge lamps. However, VUV flux measurements are not trivial: the methods and devices typically used for this purpose, mainly actinometry and calibrated VUV silicon photodiodes, are either not very accurate or expensive, and lack general suitability to experimental setups. Here, we present a straightforward method for measuring the VUV photon flux based on the photoelectric effect and using a gold photodetector. This method is easily applicable to most experimental setups, bypasses the major problems of the other methods, and provides reliable flux measurements. As a case study, the method is applied to a microwave-powered hydrogen discharge lamp. In addition, the comparison of these flux measurements to those obtained by O2 actinometry experiments allows us to estimate the quantum yield (QY) values QY122 = 0.44 ± 0.16 and QY160 = 0.87 ± 0.30 for solid-phase O2 actinometry.

  9. Immunodiagnosis of childhood malignancies.

    PubMed

    Parham, D M; Holt, H

    1999-09-01

    Immunodiagnosis using immunohistochemical techniques is currently the most commonly employed and readily available method of ancillary diagnosis in pediatric oncopathology. The methodology comprises relatively simple steps, based on straightforward biologic concepts, and the reagents used are generally well characterized and widely used. The principle of cancer immunodiagnosis is the determination of neoplastic lineage through detection of proteins typical of cell differentiation pathways. Methodology sensitivity varies and has become greater with each new generation of tests, but technical drawbacks should be considered to avoid excessive background or nonspecific results. Automated instrumentation offers a degree of accuracy and reproducibility not easily attainable by manual methods.

  10. Background estimation and player detection in badminton video clips using histogram of pixel values along temporal dimension

    NASA Astrophysics Data System (ADS)

    Peng, Yahui; Ma, Xiao; Gao, Xinyu; Zhou, Fangxu

    2015-12-01

    Computer vision is an important tool for sports video processing. However, its application to badminton match analysis is very limited. In this study, we proposed straightforward but robust histogram-based background estimation and player detection methods for badminton video clips, and compared the results with the naive averaging method and the mixture-of-Gaussians method, respectively. The proposed method yielded better background estimation results than the naive averaging method and more accurate player detection results than the mixture-of-Gaussians player detection method. The preliminary results indicated that the proposed histogram-based method could estimate the background and extract the players accurately. We conclude that the proposed method can be used for badminton player tracking, and further studies are warranted for automated match analysis.
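
    A minimal sketch of the per-pixel temporal-histogram idea as described: for each pixel, the most frequent intensity bin over time is taken as background, and players can then be found by differencing. The bin count, intensity range and unvectorized loops are assumptions made for clarity, not details from the paper.

      import numpy as np

      def histogram_background(frames, nbins=64):
          """Background of a clip as the per-pixel modal intensity along the
          temporal dimension; frames has shape (T, H, W), values in [0, 255]."""
          T, H, W = frames.shape
          edges = np.linspace(0.0, 256.0, nbins + 1)
          idx = np.digitize(frames, edges) - 1        # (T, H, W) bin indices
          bg = np.empty((H, W))
          for r in range(H):                          # simple, not vectorized
              for c in range(W):
                  counts = np.bincount(idx[:, r, c], minlength=nbins)
                  k = counts.argmax()                 # modal bin
                  bg[r, c] = 0.5 * (edges[k] + edges[k + 1])
          return bg

      # players are then detected by thresholding |frame - background|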

  11. A stiffness derivative finite element technique for determination of crack tip stress intensity factors

    NASA Technical Reports Server (NTRS)

    Parks, D. M.

    1974-01-01

    A finite element technique for determination of elastic crack tip stress intensity factors is presented. The method, based on the energy release rate, requires no special crack tip elements. Further, the solution for only a single crack length is required, and the crack is 'advanced' by moving nodal points rather than by removing nodal tractions at the crack tip and performing a second analysis. The promising straightforward extension of the method to general three-dimensional crack configurations is presented and contrasted with the practical impossibility of conventional energy methods.

  12. Photometry unlocks 3D information from 2D localization microscopy data.

    PubMed

    Franke, Christian; Sauer, Markus; van de Linde, Sebastian

    2017-01-01

    We developed a straightforward photometric method, temporal, radial-aperture-based intensity estimation (TRABI), that allows users to extract 3D information from existing 2D localization microscopy data. TRABI uses the accurate determination of photon numbers in different regions of the emission pattern of single emitters to generate a z-dependent photometric parameter. This method can determine fluorophore positions up to 600 nm from the focal plane and can be combined with biplane detection to further improve axial localization.

  13. A Universally Applicable and Rapid Method for Measuring the Growth of Streptomyces and Other Filamentous Microorganisms by Methylene Blue Adsorption-Desorption

    PubMed Central

    Fischer, Marco

    2013-01-01

    Quantitative assessment of growth of filamentous microorganisms, such as streptomycetes, is generally restricted to determination of dry weight. Here, we describe a straightforward methylene blue-based sorption assay to monitor microbial growth quantitatively, simply, and rapidly. The assay is equally applicable to unicellular and filamentous bacterial and eukaryotic microorganisms. PMID:23666340

  14. Parallelization of PANDA discrete ordinates code using spatial decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humbert, P.

    2006-07-01

    We present the parallel method, based on spatial domain decomposition, implemented in the 2D and 3D versions of the discrete ordinates code PANDA. The spatial mesh is orthogonal and the spatial domain decomposition is Cartesian. For 3D problems a 3D Cartesian domain topology is created and the parallel method is based on a domain diagonal plane ordered sweep algorithm. The parallel efficiency of the method is improved by directions and octants pipelining. The implementation of the algorithm is straightforward using MPI blocking point-to-point communications. The efficiency of the method is illustrated by an application to the 3D-Ext C5G7 benchmark of the OECD/NEA. (authors)

  15. In-Gel Determination of L-Amino Acid Oxidase Activity Based on the Visualization of Prussian Blue-Forming Reaction

    PubMed Central

    Zhou, Ning; Zhao, Chuntian

    2013-01-01

    L-amino acid oxidase (LAAO) is attracting increasing attention due to its important functions. Diverse detection methods with their own properties have been developed for characterization of LAAO. In the present study, a simple, rapid, sensitive, cost-effective and reproducible method for quantitative in-gel determination of LAAO activity based on the visualization of Prussian blue-forming reaction is described. Coupled with SDS-PAGE, this Prussian blue agar assay can be directly used to determine the numbers and approximate molecular weights of LAAO in one step, allowing straightforward application for purification and sequence identification of LAAO from diverse samples. PMID:23383337

  16. Lean body mass correction of standardized uptake value in simultaneous whole-body positron emission tomography and magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Jochimsen, Thies H.; Schulz, Jessica; Busse, Harald; Werner, Peter; Schaudinn, Alexander; Zeisig, Vilia; Kurch, Lars; Seese, Anita; Barthel, Henryk; Sattler, Bernhard; Sabri, Osama

    2015-06-01

    This study explores the possibility of using simultaneous positron emission tomography-magnetic resonance imaging (PET-MRI) to estimate the lean body mass (LBM) in order to obtain a standardized uptake value (SUV) which is less dependent on the patients' adiposity. This approach is compared to (1) the commonly-used method based on a predictive equation for LBM, and (2) to using an LBM derived from PET-CT data. It is hypothesized that an MRI-based correction of SUV provides a robust method due to the high soft-tissue contrast of MRI. A straightforward approach to calculate an MRI-derived LBM is presented. It is based on the fat and water images computed from the two-point Dixon MRI primarily used for attenuation correction in PET-MRI. From these images, a water fraction was obtained for each voxel. Averaging over the whole body yielded the weight-normalized LBM. Performance of the new approach in terms of reducing variations of 18F-Fludeoxyglucose SUVs in brain and liver across 19 subjects was compared with results using predictive methods and PET-CT data to estimate the LBM. The MRI-based method reduced the coefficient of variation of SUVs in the brain by 41 ± 10%, which is comparable to the reduction by the PET-CT method (35 ± 10%). The reduction of the predictive LBM method was 29 ± 8%. In the liver, the reduction was less clear, presumably due to other sources of variation. In conclusion, employing the Dixon data in simultaneous PET-MRI for calculation of lean body mass provides a brain SUV which is less dependent on patient adiposity. The reduced dependency is comparable to that obtained by CT and predictive equations. Therefore, it is more comparable across patients. The technique does not impose an overhead in measurement time and is straightforward to implement.

  17. Lean body mass correction of standardized uptake value in simultaneous whole-body positron emission tomography and magnetic resonance imaging.

    PubMed

    Jochimsen, Thies H; Schulz, Jessica; Busse, Harald; Werner, Peter; Schaudinn, Alexander; Zeisig, Vilia; Kurch, Lars; Seese, Anita; Barthel, Henryk; Sattler, Bernhard; Sabri, Osama

    2015-06-21

    This study explores the possibility of using simultaneous positron emission tomography-magnetic resonance imaging (PET-MRI) to estimate the lean body mass (LBM) in order to obtain a standardized uptake value (SUV) which is less dependent on the patients' adiposity. This approach is compared to (1) the commonly-used method based on a predictive equation for LBM, and (2) to using an LBM derived from PET-CT data. It is hypothesized that an MRI-based correction of SUV provides a robust method due to the high soft-tissue contrast of MRI. A straightforward approach to calculate an MRI-derived LBM is presented. It is based on the fat and water images computed from the two-point Dixon MRI primarily used for attenuation correction in PET-MRI. From these images, a water fraction was obtained for each voxel. Averaging over the whole body yielded the weight-normalized LBM. Performance of the new approach in terms of reducing variations of 18F-Fludeoxyglucose SUVs in brain and liver across 19 subjects was compared with results using predictive methods and PET-CT data to estimate the LBM. The MRI-based method reduced the coefficient of variation of SUVs in the brain by 41 ± 10%, which is comparable to the reduction by the PET-CT method (35 ± 10%). The reduction of the predictive LBM method was 29 ± 8%. In the liver, the reduction was less clear, presumably due to other sources of variation. In conclusion, employing the Dixon data in simultaneous PET-MRI for calculation of lean body mass provides a brain SUV which is less dependent on patient adiposity. The reduced dependency is comparable to that obtained by CT and predictive equations. Therefore, it is more comparable across patients. The technique does not impose an overhead in measurement time and is straightforward to implement.
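
    Both copies of this record describe the same core computation, which is simple enough to sketch: form a per-voxel water fraction from the Dixon water and fat images, average it over a body mask, and scale the body weight. The masking rule and variable names below are assumptions, and unit bookkeeping for the SUV itself is omitted.

      import numpy as np

      def lbm_from_dixon(water, fat, weight_kg, eps=1e-6):
          """Weight-normalized lean body mass from two-point Dixon images
          (water and fat are co-registered 3-D arrays on a common scale)."""
          body = (water + fat) > eps                    # crude body mask
          wf = water[body] / (water[body] + fat[body])  # per-voxel water fraction
          return wf.mean() * weight_kg                  # LBM estimate in kg

      # The LBM-normalized SUV then follows the usual definition (up to unit
      # conversions): suv_lbm = tissue_concentration * lbm / injected_dose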

  18. Nanoparticle Contrast Agents for Enhanced Microwave Imaging and Thermal Treatment of Breast Cancer

    DTIC Science & Technology

    2010-10-01

    continue to increase in step with decreasing critical dimensions, electrodynamic effects directly influence high-frequency device performance, and...computational burden is significant. The Cellular Monte Carlo (CMC) method, originally developed by Kometer et al. [50], was designed to reduce this...combination of a full-wave FDTD solver with a device simulator based upon a stochastic transport kernel is conceptually straightforward, but the

  19. Next-Generation Sequencing-Based Approaches for Mutation Mapping and Identification in Caenorhabditis elegans

    PubMed Central

    Doitsidou, Maria; Jarriault, Sophie; Poole, Richard J.

    2016-01-01

    The use of next-generation sequencing (NGS) has revolutionized the way phenotypic traits are assigned to genes. In this review, we describe NGS-based methods for mapping a mutation and identifying its molecular identity, with an emphasis on applications in Caenorhabditis elegans. In addition to an overview of the general principles and concepts, we discuss the main methods, provide practical and conceptual pointers, and guide the reader in the types of bioinformatics analyses that are required. Owing to the speed and the plummeting costs of NGS-based methods, mapping and cloning a mutation of interest has become straightforward, quick, and relatively easy. Removing this bottleneck previously associated with forward genetic screens has significantly advanced the use of genetics to probe fundamental biological processes in an unbiased manner. PMID:27729495

  20. Computational chemistry in 25 years

    NASA Astrophysics Data System (ADS)

    Abagyan, Ruben

    2012-01-01

    Here we make some predictions based on three methods: straightforward extrapolation of existing trends; self-fulfilling prophecy; and picking some current grievances and predicting that they will be addressed or solved. We predict the growth of multicore computing and a dramatic growth of data, as well as improvements in force fields and sampling methods. We also predict that the effects of therapeutic and environmental molecules on the human body, as well as complex natural chemical signalling, will come to be understood in terms of three-dimensional models of their binding to specific pockets.

  1. Speedy milking of fresh venom from aculeate hymenopterans.

    PubMed

    Fox, Eduardo G P; Xu, Meng; Wang, Lei; Chen, Li; Lu, Yong-Yue

    2018-05-01

    A straightforward method for extracting aculeate arthropod venoms by centrifugation is described, based on adapting a glass insert containing a piece of metal mesh or glass wool into a centrifuge tube. Venom apparatuses are centrifuged for 30 s intervals at ≈2000-6000 g, with samples being dislodged between cycles. Venom from fire ants, honeybees, and a social wasp was extracted within minutes. The method is suited for small-scale bioassays and allows for faithful descriptions of unmodified toxin cocktails.

  2. PET attenuation correction for rigid MR Tx/Rx coils from 176Lu background activity

    NASA Astrophysics Data System (ADS)

    Lerche, Christoph W.; Kaltsas, Theodoris; Caldeira, Liliana; Scheins, Jürgen; Rota Kops, Elena; Tellmann, Lutz; Pietrzyk, Uwe; Herzog, Hans; Shah, N. Jon

    2018-02-01

    One challenge for PET-MR hybrid imaging is the correction for attenuation of the 511 keV annihilation radiation by the required RF transmit and/or RF receive coils. Although there are strategies for building PET transparent Tx/Rx coils, such optimised coils still cause significant attenuation of the annihilation radiation leading to artefacts and biases in the reconstructed activity concentrations. We present a straightforward method to measure the attenuation of Tx/Rx coils in simultaneous MR-PET imaging based on the natural 176Lu background contained in the scintillator of the PET detector without the requirement of an external CT scanner or PET scanner with transmission source. The method was evaluated on a prototype 3T MR-BrainPET produced by Siemens Healthcare GmbH, both with phantom studies and with true emission images from patient/volunteer examinations. Furthermore, the count rate stability of the PET scanner and the x-ray properties of the Tx/Rx head coil were investigated. Even without energy extrapolation from the two dominant γ energies of 176Lu to 511 keV, the presented method for attenuation correction, based on the measurement of 176Lu background attenuation, shows slightly better performance than the coil attenuation correction currently used. The coil attenuation correction currently used is based on an external transmission scan with rotating 68Ge sources acquired on a Siemens ECAT HR+ PET scanner. However, the main advantage of the presented approach is its straightforwardness and ready availability without the need for additional accessories.

  3. Extending the Li&Ma method to include PSF information

    NASA Astrophysics Data System (ADS)

    Nievas-Rosillo, M.; Contreras, J. L.

    2016-02-01

    The so-called Li & Ma formula is still the most frequently used method for estimating the significance of observations carried out by Imaging Atmospheric Cherenkov Telescopes. In this work a straightforward extension of the method for point sources, which profits from the good imaging capabilities of current instruments, is proposed. It is based on a likelihood ratio under the assumption of a well-known PSF and a smooth background. Its performance is tested with Monte Carlo simulations based on real observations, and its sensitivity is compared to standard methods which do not incorporate PSF information. The gain in significance that can be attributed to the inclusion of the PSF is around 10% and can be boosted if a background model is assumed or a finer binning is used.
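
    The baseline being extended here is standard and can be stated exactly: Li & Ma (1983), equation (17). A direct transcription follows (the paper's PSF-aware likelihood ratio is not reproduced):

      import numpy as np

      def li_ma_significance(n_on, n_off, alpha):
          """Li & Ma (1983), Eq. 17: significance of an on/off observation,
          where alpha is the on/off exposure ratio."""
          term_on = n_on * np.log((1.0 + alpha) / alpha * n_on / (n_on + n_off))
          term_off = n_off * np.log((1.0 + alpha) * n_off / (n_on + n_off))
          return np.sqrt(2.0 * (term_on + term_off))

      print(li_ma_significance(120, 300, 0.3))   # ~2.6 sigma for these counts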

  4. Landslide early warning based on failure forecast models: the example of Mt. de La Saxe rockslide, northern Italy

    NASA Astrophysics Data System (ADS)

    Manconi, A.; Giordan, D.

    2015-02-01

    We investigate the use of landslide failure forecast models by exploiting near-real-time monitoring data. Starting from the inverse velocity theory, we analyze landslide surface displacements on different temporal windows, and apply straightforward statistical methods to obtain confidence intervals on the estimated time of failure. Here we describe the main concepts of our method, and show an example of application to a real emergency scenario, the La Saxe rockslide, Aosta Valley region, northern Italy. Based on the herein presented case study, we identify operational thresholds based on the reliability of the forecast models, in order to support the management of early warning systems in the most critical phases of the landslide emergency.
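
    A minimal sketch of the inverse-velocity forecast that the method builds on (the linear case of Fukuzono's relation): fit a line to 1/v(t) and extrapolate to zero. The data below are synthetic, and the paper's confidence-interval machinery over multiple temporal windows is not reproduced.

      import numpy as np

      t = np.arange(10.0)                  # days of monitoring
      v = 2.0 / (12.0 - t)                 # synthetic accelerating velocity
      inv_v = 1.0 / v                      # inverse velocity decays linearly

      slope, intercept = np.polyfit(t, inv_v, 1)
      t_f = -intercept / slope             # time at which 1/v reaches zero
      print(t_f)                           # ~12 days: forecast time of failure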

  5. Aqua-vanadyl ion interaction with Nafion® membranes

    DOE PAGES

    Vijayakumar, Murugesan; Govind, Niranjan; Li, Bin; ...

    2015-03-23

    Lack of a comprehensive understanding of the interactions between the Nafion membrane and battery electrolytes prevents the straightforward tailoring of optimal materials for redox flow battery applications. In this work, we analyzed the interaction between the aqua-vanadyl cation and sulfonic sites within the pores of Nafion membranes using combined theoretical and experimental X-ray spectroscopic methods. Molecular-level interactions, namely solvent-shared and contact ion pair mechanisms, are discussed based on vanadium and sulfur K-edge spectroscopic analysis.

  6. Addressable-Matrix Integrated-Circuit Test Structure

    NASA Technical Reports Server (NTRS)

    Sayah, Hoshyar R.; Buehler, Martin G.

    1991-01-01

    Method of quality control based on use of row- and column-addressable test structure speeds collection of data on widths of resistor lines and coverage of steps in integrated circuits. By use of straightforward mathematical model, line widths and step coverages deduced from measurements of electrical resistances in each of various combinations of lines, steps, and bridges addressable in test structure. Intended for use in evaluating processes and equipment used in manufacture of application-specific integrated circuits.

  7. VLSI Architectures and CAD

    DTIC Science & Technology

    1989-11-01

    considerable promise is a variation of the familiar Lempel-Ziv adaptive data compression scheme that permits a straightforward mapping to hardware...types of data. The UNIX "compress" implementation is based upon Terry Welch's 1984 variation of the Lempel-Ziv method (LZW). One flaw lies in the fact...or more; it must effectively compress all types of data (i.e. the algorithm must be universal); the implementation must be contained within a small
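
    The snippet references the LZW variant behind UNIX "compress"; a toy LZW encoder (no variable code widths or dictionary resets, unlike the real tool) can be sketched as:

      def lzw_compress(data: bytes) -> list:
          """Toy LZW: emit dictionary indices; the dictionary starts with all
          single bytes and grows by one phrase per emitted code."""
          table = {bytes([i]): i for i in range(256)}
          w, out = b"", []
          for b in data:
              wb = w + bytes([b])
              if wb in table:
                  w = wb                       # extend the current phrase
              else:
                  out.append(table[w])         # emit the longest known phrase
                  table[wb] = len(table)       # register the new phrase
                  w = bytes([b])
          if w:
              out.append(table[w])
          return out

      print(lzw_compress(b"TOBEORNOTTOBEORTOBEORNOT"))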

  8. A New Approach to Parallel Dynamic Partitioning for Adaptive Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Heber, Gerd; Biswas, Rupak; Gao, Guang R.

    1999-01-01

    Classical mesh partitioning algorithms were designed for rather static situations, and their straightforward application in a dynamical framework may lead to unsatisfactory results, e.g., excessive data migration among processors. Furthermore, special attention should be paid to their amenability to parallelization. In this paper, a novel parallel method for the dynamic partitioning of adaptive unstructured meshes is described. It is based on a linear representation of the mesh using self-avoiding walks.

  9. B-spline Method in Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Botella, Olivier; Shariff, Karim; Mansour, Nagi N. (Technical Monitor)

    2001-01-01

    B-spline functions are bases for piecewise polynomials that possess attractive properties for complex flow simulations: they have compact support, provide straightforward handling of boundary conditions and grid nonuniformities, and yield numerical schemes with high resolving power, where the order of accuracy is a mere input parameter. This paper reviews the progress made on the development and application of B-spline numerical methods to computational fluid dynamics problems. Basic B-spline approximation properties are investigated, and their relationship with conventional numerical methods is reviewed. Some fundamental developments towards efficient complex-geometry spline methods are covered, such as local interpolation methods, fast solution algorithms on cartesian grids, non-conformal block-structured discretization, formulation of spline bases of higher continuity over triangulations, and treatment of pressure oscillations in the Navier-Stokes equations. Application of some of these techniques to the computation of viscous incompressible flows is presented.
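
    The basis functions themselves are compactly defined by the Cox-de Boor recursion; a direct, unoptimized transcription (the recursive form is chosen for readability, not one of the fast algorithms the paper reviews):

      def bspline_basis(i, p, knots, x):
          """Cox-de Boor recursion: the i-th B-spline of degree p on the
          knot vector `knots`, evaluated at x."""
          if p == 0:
              return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
          left = right = 0.0
          if knots[i + p] > knots[i]:
              left = ((x - knots[i]) / (knots[i + p] - knots[i])
                      * bspline_basis(i, p - 1, knots, x))
          if knots[i + p + 1] > knots[i + 1]:
              right = ((knots[i + p + 1] - x) / (knots[i + p + 1] - knots[i + 1])
                       * bspline_basis(i + 1, p - 1, knots, x))
          return left + right

      # quadratic B-splines on a uniform knot vector sum to one
      knots = [0, 1, 2, 3, 4, 5, 6]
      print(sum(bspline_basis(i, 2, knots, 2.5) for i in range(4)))  # 1.0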

  10. An exact noniterative linear method for locating sources based on measuring receiver arrival times.

    PubMed

    Militello, C; Buenafuente, S R

    2007-06-01

    In this paper an exact, linear solution to the source localization problem based on the time of arrival at the receivers is presented. The method is unique in that the source's position can be obtained by solving a system of linear equations, three for a plane and four for a volume. This simplification comes at the cost of one receiver beyond the mathematically required minimum (3+1 in two dimensions and 4+1 in three dimensions). The equations are easily worked out for any receiver configuration and their geometrical interpretation is straightforward. Unlike other methods, the system of reference used to describe the receivers' positions is completely arbitrary. The relationship between this method and previously published ones is discussed, showing how the present, more general, method overcomes nonlinearity and unknown-dependency issues.
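
    The receiver counts quoted (3+1 in a plane, 4+1 in a volume) match the standard linearization in which squared range equations are differenced so the quadratic terms cancel, leaving a system linear in the source position and emission time. The sketch below uses that generic construction, which may differ in detail from the authors' formulation; the geometry and signal speed are invented.

      import numpy as np

      def locate(receivers, times, c=343.0):
          """Linear 2-D TOA localization: difference the squared range
          equations |x - r_i|^2 = c^2 (t_i - t0)^2 against receiver 0,
          leaving equations linear in (x, y, t0); needs N >= 4 receivers."""
          r, t = np.asarray(receivers), np.asarray(times)
          A = np.column_stack([
              -2.0 * (r[1:] - r[0]),            # coefficients of x and y
              2.0 * c**2 * (t[1:] - t[0]),      # coefficient of t0
          ])
          b = (c**2 * (t[1:]**2 - t[0]**2)
               - (r[1:]**2).sum(axis=1) + (r[0]**2).sum())
          sol, *_ = np.linalg.lstsq(A, b, rcond=None)
          return sol[:2], sol[2]                # position, emission time

      # four receivers on a square; source at (3, 4), emitted at t0 = 0.1 s
      rec = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
      times = 0.1 + np.linalg.norm(rec - [3.0, 4.0], axis=1) / 343.0
      print(locate(rec, times))                 # ~(3, 4), ~0.1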

  11. Gaussian-based techniques for quantum propagation from the time-dependent variational principle: Formulation in terms of trajectories of coupled classical and quantum variables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shalashilin, Dmitrii V.; Burghardt, Irene

    2008-08-28

    In this article, two coherent-state based methods of quantum propagation, namely, coupled coherent states (CCS) and Gaussian-based multiconfiguration time-dependent Hartree (G-MCTDH), are put on the same formal footing, using a derivation from a variational principle in Lagrangian form. By this approach, oscillations of the classical-like Gaussian parameters and oscillations of the quantum amplitudes are formally treated in an identical fashion. We also suggest a new approach denoted here as coupled coherent states trajectories (CCST), which completes the family of Gaussian-based methods. Using the same formalism for all related techniques allows their systematization and a straightforward comparison of their mathematical structure and cost.

  12. Two Formal Gas Models For Multi-Agent Sweeping and Obstacle Avoidance

    NASA Technical Reports Server (NTRS)

    Kerr, Wesley; Spears, Diana; Spears, William; Thayer, David

    2004-01-01

    The task addressed here is a dynamic search through a bounded region, while avoiding multiple large obstacles, such as buildings. In the case of limited sensors and communication, maintaining spatial coverage - especially after passing the obstacles - is a challenging problem. Here, we investigate two physics-based approaches to solving this task with multiple simulated mobile robots, one based on artificial forces and the other based on the kinetic theory of gases. The desired behavior is achieved with both methods, and a comparison is made between them. Because both approaches are physics-based, formal assurances about the multi-robot behavior are straightforward, and are included in the paper.

  13. Evaluation of radiation loading on finite cylindrical shells using the fast Fourier transform: A comparison with direct numerical integration.

    PubMed

    Liu, S X; Zou, M S

    2018-03-01

    The radiation loading on a vibratory finite cylindrical shell is conventionally evaluated through the direct numerical integration (DNI) method. An alternative strategy via the fast Fourier transform algorithm is put forward in this work based on the general expression of radiation impedance. To check the feasibility and efficiency of the proposed method, a comparison with DNI is presented through numerical cases. The results obtained using the present method agree well with those calculated by DNI. More importantly, the proposed calculating strategy can significantly save the time cost compared with the conventional approach of straightforward numerical integration.

  14. Three-dimensional analysis by electron diffraction methods of nanocrystalline materials.

    PubMed

    Gammer, Christoph; Mangler, Clemens; Karnthaler, Hans-Peter; Rentenberger, Christian

    2011-12-01

    To analyze nanocrystalline structures quantitatively in 3D, a novel method is presented based on electron diffraction. It allows determination of the average size and morphology of the coherently scattering domains (CSD) in a straightforward way without the need to prepare multiple sections. The method is applicable to all kinds of bulk nanocrystalline materials. As an example, the average size of the CSD in nanocrystalline FeAl made by severe plastic deformation is determined in 3D. Assuming ellipsoidal CSD, it is deduced that the CSD have a width of 19 ± 2 nm, a length of 18 ± 1 nm, and a height of 10 ± 1 nm.

  15. Distance Mapping in Proteins Using Fluorescence Spectroscopy: The Tryptophan-Induced Quenching (TrIQ) Method

    PubMed Central

    Mansoor, Steven E.; DeWitt, Mark A.; Farrens, David L.

    2014-01-01

    Studying the interplay between protein structure and function remains a daunting task. Especially lacking are methods for measuring structural changes in real time. Here we report our most recent improvements to a method that can be used to address such questions. This method, which we now call tryptophan-induced quenching (TrIQ), provides a straightforward, sensitive and inexpensive way to address questions of conformational dynamics and short-range protein interactions. Importantly, TrIQ only occurs over relatively short distances (~5 to 15 Å), making it complementary to traditional fluorescence resonance energy transfer (FRET) methods that occur over distances too large for precise studies of protein structure. As implied in the name, TrIQ measures the efficient quenching induced in some fluorophores by tryptophan (Trp). We present here our analysis of the TrIQ effect for five different fluorophores that span a range of sizes and spectral properties. Each probe was attached to four different cysteine residues on T4 lysozyme and the extent of TrIQ caused by a nearby Trp was measured. Our results show that for smaller probes, TrIQ is distance dependent. Moreover, we also demonstrate how TrIQ data can be analyzed to determine the fraction of fluorophores involved in a static, non-fluorescent complex with Trp. Based on this analysis, our study shows that each fluorophore has a different TrIQ profile, or "sphere of quenching", which correlates with its size, rotational flexibility, and the length of attachment linker. This TrIQ-based "sphere of quenching" is unique to every Trp-probe pair and reflects the distance within which one can expect to see the TrIQ effect. It provides a straightforward, readily accessible approach for mapping distances within proteins and monitoring conformational changes using fluorescence spectroscopy. PMID:20886836

  16. Mapping loci influencing blood pressure in the Framingham pedigrees using model-free LOD score analysis of a quantitative trait.

    PubMed

    Knight, Jo; North, Bernard V; Sham, Pak C; Curtis, David

    2003-12-31

    This paper presents a method of performing model-free LOD-score based linkage analysis on quantitative traits. It is implemented in the QMFLINK program. The method is used to perform a genome screen on the Framingham Heart Study data. A number of markers that show some support for linkage in our study coincide substantially with those implicated in other linkage studies of hypertension. Although the new method needs further testing on additional real and simulated data sets we can already say that it is straightforward to apply and may offer a useful complementary approach to previously available methods for the linkage analysis of quantitative traits.

  17. Mapping loci influencing blood pressure in the Framingham pedigrees using model-free LOD score analysis of a quantitative trait

    PubMed Central

    Knight, Jo; North, Bernard V; Sham, Pak C; Curtis, David

    2003-01-01

    This paper presents a method of performing model-free LOD-score based linkage analysis on quantitative traits. It is implemented in the QMFLINK program. The method is used to perform a genome screen on the Framingham Heart Study data. A number of markers that show some support for linkage in our study coincide substantially with those implicated in other linkage studies of hypertension. Although the new method needs further testing on additional real and simulated data sets we can already say that it is straightforward to apply and may offer a useful complementary approach to previously available methods for the linkage analysis of quantitative traits. PMID:14975142

  18. Young's Modulus of a Marshmallow

    NASA Astrophysics Data System (ADS)

    Pestka, Kenneth A.

    2008-03-01

    When teaching the subject of elasticity, it is often difficult to find a straightforward quantitative laboratory that can give a "hands-on" feel for the subject. This paper presents an experiment that demonstrates the essentials of elasticity by observing the behavior of marshmallows under a compressive load. Like other marshmallow-based activities [1,2], this experiment is straightforward, fun, and readily extendable to more complicated and advanced topics.
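
    As a minimal worked example of the underlying calculation (with hypothetical numbers, not measurements from the paper), Young's modulus follows from E = stress/strain for a cylindrical marshmallow under a known compressive load:

    ```python
    # Hypothetical marshmallow compression: E = (F/A) / (dL/L0).
    import math

    d = 0.025    # marshmallow diameter, m (hypothetical)
    L0 = 0.030   # initial height, m (hypothetical)
    F = 0.981    # compressive load from a 100 g mass, N
    dL = 0.004   # measured compression, m (hypothetical)

    A = math.pi * (d / 2) ** 2        # cross-sectional area, m^2
    E = (F / A) / (dL / L0)           # Young's modulus, Pa
    print(f"E = {E / 1000:.1f} kPa")  # ~15 kPa for these numbers
    ```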

  19. Parameter estimation procedure for complex non-linear systems: calibration of ASM No. 1 for N-removal in a full-scale oxidation ditch.

    PubMed

    Abusam, A; Keesman, K J; van Straten, G; Spanjers, H; Meinema, K

    2001-01-01

    When applied to large simulation models, the process of parameter estimation is also called calibration. Calibration of complex non-linear systems, such as activated sludge plants, is often not an easy task. On the one hand, manual calibration of such complex systems is usually time-consuming, and its results are often not reproducible. On the other hand, conventional automatic calibration methods are not always straightforward and often hampered by local minima problems. In this paper a new straightforward and automatic procedure, which is based on the response surface method (RSM) for selecting the best identifiable parameters, is proposed. In RSM, the process response (output) is related to the levels of the input variables in terms of a first- or second-order regression model. Usually, RSM is used to relate measured process output quantities to process conditions. However, in this paper RSM is used for selecting the dominant parameters, by evaluating parameters sensitivity in a predefined region. Good results obtained in calibration of ASM No. 1 for N-removal in a full-scale oxidation ditch proved that the proposed procedure is successful and reliable.
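
    As a hedged sketch of the selection idea (much simplified relative to the paper's calibration procedure): fit a first-order response-surface regression of the model output on scaled parameter perturbations, and rank parameters by the magnitude of their fitted coefficients.

    ```python
    # First-order response surface for parameter ranking (toy model).
    import numpy as np

    rng = np.random.default_rng(6)

    def model(p):          # toy stand-in for a process model output
        return 3.0 * p[0] - 0.2 * p[1] + 1.5 * p[2] + 0.01 * p[3]

    n_runs, n_par = 40, 4
    P = rng.uniform(-1, 1, (n_runs, n_par))        # scaled perturbations
    y = np.array([model(p) for p in P])

    X = np.column_stack([np.ones(n_runs), P])      # intercept + linear terms
    coef = np.linalg.lstsq(X, y, rcond=None)[0][1:]
    print(np.argsort(-np.abs(coef)))               # ranking: 0, 2, 1, 3
    ```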

  20. The Dalgarno-Lewis summation technique: Some comments and examples

    NASA Astrophysics Data System (ADS)

    Mavromatis, Harry A.

    1991-08-01

    The Dalgarno-Lewis technique [A. Dalgarno and J. T. Lewis, "The exact calculation of long-range forces between atoms by perturbation theory," Proc. R. Soc. London Ser. A 233, 70-74 (1955)] provides an elegant method to obtain exact results for various orders in perturbation theory, while avoiding the infinite sums which arise in each order. In the present paper this technique, which perhaps has not been exploited as much as it could be, is first reviewed with attention to some of its not-so-straightforward details, and then six examples of the method are given using three different one-dimensional bases.

  1. Low-cost fluorimetric determination of radicals based on fluorogenic dimerization of the natural phenol sesamol.

    PubMed

    Makino, Yumi; Uchiyama, Seiichi; Ohno, Ken-ichi; Arakawa, Hidetoshi

    2010-02-15

    A novel fluorimetric method for determining radicals using the natural phenol sesamol as a fluorogenic reagent is reported. In this assay, sesamol was reacted with aqueous radicals to yield one isomer of a sesamol dimer exclusively. The dimer emitted purple fluorescence near 400 nm around neutral pH, where it assumed the monoanionic form. This method was applied to the straightforward detection of radical nitric oxide (NO). The ready availability of sesamol should enable rapid implementation of applications utilizing this new assay, particularly in high-throughput analysis or screening.

  2. TEMPERATURE SCENARIO DEVELOPMENT USING REGRESSION METHODS

    EPA Science Inventory

    A method of developing scenarios of future temperature conditions resulting from climatic change is presented. he method is straightforward and can be used to provide information about daily temperature variations and diurnal ranges, monthly average high, and low temperatures, an...

  3. Industrial ecology: Quantitative methods for exploring a lower carbon future

    NASA Astrophysics Data System (ADS)

    Thomas, Valerie M.

    2015-03-01

    Quantitative methods for environmental and cost analyses of energy, industrial, and infrastructure systems are briefly introduced and surveyed, with the aim of encouraging broader utilization and development of quantitative methods in sustainable energy research. Material and energy flow analyses can provide an overall system overview. The methods of engineering economics and cost benefit analysis, such as net present values, are the most straightforward approach for evaluating investment options, with the levelized cost of energy being a widely used metric in electricity analyses. Environmental lifecycle assessment has been extensively developed, with both detailed process-based and comprehensive input-output approaches available. Optimization methods provide an opportunity to go beyond engineering economics to develop detailed least-cost or least-impact combinations of many different choices.
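
    To make the engineering-economics step concrete, here is a minimal sketch of the levelized cost of energy (LCOE) metric mentioned above, with assumed capital cost, operating cost, annual output, and discount rate (all figures hypothetical): discounted lifetime cost divided by discounted lifetime generation.

    ```python
    # Minimal LCOE sketch with assumed figures.
    capex = 1_200_000.0   # upfront capital cost, $ (assumption)
    opex = 25_000.0       # annual O&M cost, $/yr (assumption)
    energy = 2_000_000.0  # annual generation, kWh/yr (assumption)
    r = 0.06              # discount rate (assumption)
    years = 25            # project lifetime

    disc_cost = capex + sum(opex / (1 + r) ** t for t in range(1, years + 1))
    disc_energy = sum(energy / (1 + r) ** t for t in range(1, years + 1))
    print(f"LCOE = {disc_cost / disc_energy:.3f} $/kWh")
    ```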

  4. DDOT MXD+ method development report.

    DOT National Transportation Integrated Search

    2015-09-01

    Mixed-use development has become increasingly common across the country, including Washington, D.C. : However, a straightforward and empirically validated method for evaluating the traffic impacts of such : projects is still needed. The data presente...

  5. In-situ polymerization PLOT columns I: divinylbenzene

    NASA Technical Reports Server (NTRS)

    Shen, T. C.

    1992-01-01

    A novel method for the preparation of porous-layer open-tubular (PLOT) columns is described. The method involves a simple, reproducible, and straightforward in-situ polymerization of the monomer directly on the metal tube.

  6. Effects of partitioning and scheduling sparse matrix factorization on communication and load balance

    NASA Technical Reports Server (NTRS)

    Venugopal, Sesh; Naik, Vijay K.

    1991-01-01

    A block based, automatic partitioning and scheduling methodology is presented for sparse matrix factorization on distributed memory systems. Using experimental results, this technique is analyzed for communication and load imbalance overhead. To study the performance effects, these overheads were compared with those obtained from a straightforward 'wrap mapped' column assignment scheme. All experimental results were obtained using test sparse matrices from the Harwell-Boeing data set. The results show that there is a communication and load balance tradeoff. The block based method results in lower communication cost whereas the wrap mapped scheme gives better load balance.

  7. Introduction of biotin or folic acid into polypyrrole magnetite core-shell nanoparticles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nan, Alexandrina; Turcu, Rodica; Liebscher, Jürgen

    2013-11-13

    Contributing to the current effort to develop magnetic core-shell nanoparticles with improved properties (reduced toxicity, high colloidal and chemical stability, wide scope of application) through straightforward and reproducible methods, new core-shell magnetic nanoparticles were developed based on polypyrrole shells functionalized with biotin and folic acid. Magnetite nanoparticles stabilized by sebacic acid were used as magnetic cores. The morphology of the magnetite was determined by transmission electron microscopy (TEM), while the chemical structure was investigated by FT-IR.

  8. New QCD sum rules based on canonical commutation relations

    NASA Astrophysics Data System (ADS)

    Hayata, Tomoya

    2012-04-01

    A new derivation of QCD sum rules by canonical commutators is developed. It is a simple and straightforward generalization of the Thomas-Reiche-Kuhn sum rule on the basis of the Kugo-Ojima operator formalism of a non-abelian gauge theory and a suitable subtraction of UV divergences. By applying the method to the vector and axial-vector currents in QCD, the exact Weinberg sum rules are examined. Vector current sum rules and new fractional power sum rules are also discussed.

  9. Platinum adlayered ruthenium nanoparticles, method for preparing, and uses thereof

    DOEpatents

    Tong, YuYe; Du, Bingchen

    2015-08-11

    A superior, industrially scalable one-pot ethylene glycol-based wet chemistry method to prepare platinum-adlayered ruthenium nanoparticles has been developed that offers an exquisite control of the platinum packing density of the adlayers and effectively prevents sintering of the nanoparticles during the deposition process. The wet chemistry based method for the controlled deposition of submonolayer platinum is advantageous in terms of processing and maximizing the use of platinum and can, in principle, be scaled up straightforwardly to an industrial level. The reactivity of the Pt(31)-Ru sample was about 150% higher than that of the industrial benchmark PtRu (1:1) alloy sample but with 3.5 times less platinum loading. Using the Pt(31)-Ru nanoparticles would lower the electrode material cost compared to using the industrial benchmark alloy nanoparticles for direct methanol fuel cell applications.

  10. FT-IR imaging for quantitative determination of liver fat content in non-alcoholic fatty liver.

    PubMed

    Kochan, K; Maslak, E; Chlopicki, S; Baranska, M

    2015-08-07

    In this work we apply FT-IR imaging of large areas of liver tissue cross-section samples (∼5 cm × 5 cm) for quantitative assessment of steatosis in a murine model of non-alcoholic fatty liver disease (NAFLD). We quantified the area of liver tissue occupied by lipid droplets (LDs) by FT-IR imaging and, for comparison, by Oil Red O (ORO) staining. Two alternative FT-IR based approaches are presented. The first, straightforward method was based on average spectra from tissues and provided values of the fat content by using a PLS regression model and the reference method. The second one – the chemometric-based method – enabled us to determine the values of the fat content independently of the reference method by means of k-means cluster (KMC) analysis. In summary, FT-IR images of large liver sections may prove useful for quantifying liver steatosis without the need for tissue staining.

  11. Information recovery in propagation-based imaging with decoherence effects

    NASA Astrophysics Data System (ADS)

    Froese, Heinrich; Lötgering, Lars; Wilhein, Thomas

    2017-05-01

    During the past decades the optical imaging community has witnessed a rapid emergence of novel imaging modalities such as coherent diffraction imaging (CDI), propagation-based imaging and ptychography. These methods have been demonstrated to recover complex-valued scalar wave fields from redundant data without the need for refractive or diffractive optical elements. This renders these techniques suitable for imaging experiments with EUV and x-ray radiation, where the use of lenses is complicated by fabrication, photon efficiency and cost. However, decoherence effects can be detrimental to the reconstruction quality of the numerical algorithms involved. Here we demonstrate propagation-based optical phase retrieval from multiple near-field intensities with decoherence effects such as partially coherent illumination, detector point spread, binning and position uncertainties of the detector. Methods for overcoming these systematic experimental errors - based on the decomposition of the data into mutually incoherent modes - are proposed and numerically tested. We believe that the results presented here open up novel algorithmic methods to accelerate detector readout rates and enable subpixel resolution in propagation-based phase retrieval. Furthermore, the techniques can be straightforwardly extended to methods such as CDI, ptychography and holography.

  12. Cascade multicomponent synthesis of indoles, pyrazoles, and pyridazinones by functionalization of alkenes.

    PubMed

    Matcha, Kiran; Antonchick, Andrey P

    2014-10-27

    The development of multicomponent reactions for indole synthesis is demanding and has hardly been explored. The present study describes the development of a novel multicomponent, cascade approach for indole synthesis. Various substituted indole derivatives were obtained from simple reagents, such as unfunctionalized alkenes, diazonium salts, and sodium triflinate, by using an established straightforward and regioselective method. The method is based on the radical trifluoromethylation of alkenes as an entry into Fischer indole synthesis. Besides indole synthesis, the application of the multicomponent cascade reaction to the synthesis of pyrazoles and pyridazinones is described. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Analytical solutions for systems of partial differential-algebraic equations.

    PubMed

    Benhammouda, Brahim; Vazquez-Leal, Hector

    2014-01-01

    This work presents the application of the power series method (PSM) to find solutions of partial differential-algebraic equations (PDAEs). Two systems of index-one and index-three are solved to show that PSM can provide analytical solutions of PDAEs in convergent series form. What is more, we present the post-treatment of the power series solutions with the Laplace-Padé (LP) resummation method as a useful strategy to find exact solutions. The main advantage of the proposed methodology is that the procedure is based on a few straightforward steps and it does not generate secular terms or depends of a perturbation parameter.
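
    To see the PSM mechanics on a toy problem (a single ODE, far simpler than the index-one and index-three PDAE systems treated in the paper, and without the Laplace-Padé post-treatment), substitute y = Σ a_k x^k into y' = y with y(0) = 1; matching coefficients gives the recurrence a_{k+1} = a_k/(k+1), whose sum reproduces exp(x):

    ```python
    # Power series method on y' = y, y(0) = 1 (toy illustration).
    import math

    a = [1.0]                      # a_0 from the initial condition
    for k in range(20):
        a.append(a[k] / (k + 1))   # recurrence from matching coefficients

    x = 0.5
    psm = sum(ak * x ** k for k, ak in enumerate(a))
    print(psm, math.exp(x))        # truncated series agrees with exp(0.5)
    ```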

  14. Pooling across cells to normalize single-cell RNA sequencing data with many zero counts.

    PubMed

    Lun, Aaron T L; Bach, Karsten; Marioni, John C

    2016-04-27

    Normalization of single-cell RNA sequencing data is necessary to eliminate cell-specific biases prior to downstream analyses. However, this is not straightforward for noisy single-cell data where many counts are zero. We present a novel approach where expression values are summed across pools of cells, and the summed values are used for normalization. Pool-based size factors are then deconvolved to yield cell-based factors. Our deconvolution approach outperforms existing methods for accurate normalization of cell-specific biases in simulated data. Similar behavior is observed in real data, where deconvolution improves the relevance of results of downstream analyses.
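
    A minimal sketch of the pool-and-deconvolve idea (the published algorithm adds cell ordering, multiple pool sizes, and robustness safeguards): each pool's size factor supplies one linear equation in the unknown cell-specific factors, and the resulting system is solved by least squares. Choosing a pool size coprime with the number of cells keeps the sliding-pool system full rank.

    ```python
    # Toy pool-based normalization with deconvolution to cell factors.
    import numpy as np

    rng = np.random.default_rng(1)
    n_cells, n_genes, pool_size = 50, 200, 7
    true_sf = rng.uniform(0.5, 2.0, n_cells)       # cell-specific biases
    mu = rng.gamma(2.0, 2.0, n_genes)              # mean expression profile
    counts = rng.poisson(np.outer(true_sf, mu))    # zero-rich count matrix

    ref = counts.mean(axis=0)                      # "average cell" reference
    rows, b = [], []
    for start in range(n_cells):                   # sliding pools of cells
        idx = [(start + j) % n_cells for j in range(pool_size)]
        pooled = counts[idx].sum(axis=0)
        b.append(np.median(pooled / np.maximum(ref, 1e-9)))  # pool factor
        row = np.zeros(n_cells)
        row[idx] = 1.0
        rows.append(row)

    sf = np.linalg.lstsq(np.array(rows), np.array(b), rcond=None)[0]
    sf *= true_sf.mean() / sf.mean()               # fix the arbitrary scale
    print(np.corrcoef(sf, true_sf)[0, 1])          # close to 1
    ```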

  15. A rapid, straightforward, and print house compatible mass fabrication method for integrating 3D paper-based microfluidics.

    PubMed

    Xiao, Liangpin; Liu, Xianming; Zhong, Runtao; Zhang, Kaiqing; Zhang, Xiaodi; Zhou, Xiaomian; Lin, Bingcheng; Du, Yuguang

    2013-11-01

    Three-dimensional (3D) paper-based microfluidic devices, which feature high performance and rapid determination, can carry out multistep sample pretreatment and sequential chemical reactions, and have been used for medical diagnosis, cell culture, environmental determination, and so on, with broad market prospects. However, the existing fabrication methods for 3D paper-based microfluidics have drawbacks, such as cumbersome and time-consuming device assembly, expensive and difficult manufacturing processes, and contamination caused by the organic reagents used during fabrication. Here, we present a simple printing-bookbinding method for the mass fabrication of 3D paper-based microfluidics. This approach involves two main steps: (i) wax-printing and (ii) bookbinding. We tested the delivery capability, diffusion rate, and homogeneity, and demonstrated the applicability of the device to chemical analysis by nitrite colorimetric assays. The described method is rapid (<30 s), cheap, easy to manipulate, and compatible with the flat-stitching method that is common in a print house, making it an ideal scheme for large-scale production of 3D paper-based microfluidics. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Dehydration Polymerization for Poly(hetero)arene Conjugated Polymers.

    PubMed

    Mirabal, Rafael A; Vanderzwet, Luke; Abuadas, Sara; Emmett, Michael R; Schipper, Derek

    2018-02-18

    The lack of scalable and sustainable methods to prepare conjugated polymers belies their importance in many enabling technologies. Accessing high-performance poly(hetero)arene conjugated polymers by dehydration has remained an unsolved problem in synthetic chemistry and has historically required transition-metal coupling reactions. Herein, we report a dehydration method that allows access to conjugated heterocyclic materials. By using the technique, we have prepared a series of small molecules and polymers. The reaction avoids transition metals and proceeds at room temperature; the only required reagent is a simple base, and water is the sole by-product. The dehydration reaction is technically simple and provides a sustainable and straightforward method to prepare conjugated heteroarene motifs. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. A physics-based solver to optimize the illumination of cylindrical targets in spherically distributed high power laser systems.

    PubMed

    Gourdain, P-A

    2017-05-01

    In recent years, our understanding of high energy density plasmas has played an important role in improving inertial fusion confinement and in emerging new fields of physics, such as laboratory astrophysics. Every new idea required developing innovative experimental platforms at high power laser facilities, such as OMEGA or NIF. These facilities, designed to focus all their beams onto spherical targets or hohlraum windows, are now required to shine them on more complex targets. While the pointing on planar geometries is relatively straightforward, it becomes problematic for cylindrical targets or targets with more complex geometries. This publication describes how the distribution of laser beams on a cylindrical target can be done simply by using a set of physical laws as a pointing procedure. The advantage of the method is threefold. First, it is straightforward, requiring no mathematical enterprise besides solving ordinary differential equations. Second, it will converge if a local optimum exists. Finally, it is computationally inexpensive. Experimental results show that this approach produces a geometrical beam distribution that yields cylindrically symmetric implosions.

  18. A physics-based solver to optimize the illumination of cylindrical targets in spherically distributed high power laser systems

    NASA Astrophysics Data System (ADS)

    Gourdain, P.-A.

    2017-05-01

    In recent years, our understanding of high energy density plasmas has played an important role in improving inertial fusion confinement and in emerging new fields of physics, such as laboratory astrophysics. Every new idea required developing innovative experimental platforms at high power laser facilities, such as OMEGA or NIF. These facilities, designed to focus all their beams onto spherical targets or hohlraum windows, are now required to shine them on more complex targets. While the pointing on planar geometries is relatively straightforward, it becomes problematic for cylindrical targets or targets with more complex geometries. This publication describes how the distribution of laser beams on a cylindrical target can be done simply by using a set of physical laws as a pointing procedure. The advantage of the method is threefold. First, it is straightforward, requiring no mathematical enterprise besides solving ordinary differential equations. Second, it will converge if a local optimum exists. Finally, it is computationally inexpensive. Experimental results show that this approach produces a geometrical beam distribution that yields cylindrically symmetric implosions.

  19. “Smooth” Semiparametric Regression Analysis for Arbitrarily Censored Time-to-Event Data

    PubMed Central

    Zhang, Min; Davidian, Marie

    2008-01-01

    Summary A general framework for regression analysis of time-to-event data subject to arbitrary patterns of censoring is proposed. The approach is relevant when the analyst is willing to assume that distributions governing model components that are ordinarily left unspecified in popular semiparametric regression models, such as the baseline hazard function in the proportional hazards model, have densities satisfying mild “smoothness” conditions. Densities are approximated by a truncated series expansion that, for fixed degree of truncation, results in a “parametric” representation, which makes likelihood-based inference coupled with adaptive choice of the degree of truncation, and hence flexibility of the model, computationally and conceptually straightforward with data subject to any pattern of censoring. The formulation allows popular models, such as the proportional hazards, proportional odds, and accelerated failure time models, to be placed in a common framework; provides a principled basis for choosing among them; and renders useful extensions of the models straightforward. The utility and performance of the methods are demonstrated via simulations and by application to data from time-to-event studies. PMID:17970813

  20. Fabrication of organic-inorganic perovskite thin films for planar solar cells via pulsed laser deposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liang, Yangang; Zhang, Xiaohang; Gong, Yunhui

    2016-01-15

    We report on the fabrication of organic-inorganic perovskite thin films using a hybrid method consisting of pulsed laser deposition (PLD) of lead iodide and spin-coating of methylammonium iodide. Smooth and highly crystalline CH3NH3PbI3 thin films have been fabricated on silicon and on glass substrates coated with fluorine-doped tin oxide using this PLD-based hybrid method. Planar perovskite solar cells with an inverted structure have been successfully fabricated using the perovskite films. Because of its versatility, the PLD-based hybrid fabrication method not only provides easy and precise control of the thickness of the perovskite thin films, but also offers a straightforward platform for studying the feasibility of using other metal halides and organic salts to form the organic-inorganic perovskite structure.

  1. Mesoscale energy deposition footprint model for kiloelectronvolt cluster bombardment of solids.

    PubMed

    Russo, Michael F; Garrison, Barbara J

    2006-10-15

    Molecular dynamics simulations have been performed to model 5-keV C60 and Au3 projectile bombardment of an amorphous water substrate. The goal is to obtain detailed insights into the dynamics of motion in order to develop a straightforward and less computationally demanding model of the process of ejection. The molecular dynamics results provide the basis for the mesoscale energy deposition footprint model. This model provides a method for predicting relative yields based on information from less than 1 ps of simulation time.

  2. Harmonic skeleton guided evaluation of stenoses in human coronary arteries.

    PubMed

    Yang, Yan; Zhu, Lei; Haker, Steven; Tannenbaum, Allen R; Giddens, Don P

    2005-01-01

    This paper presents a novel approach that three-dimensionally visualizes and evaluates stenoses in human coronary arteries by using harmonic skeletons. A harmonic skeleton is the center line of a multi-branched tubular surface extracted based on a harmonic function, which is the solution of the Laplace equation. This skeletonization method guarantees smoothness and connectivity and provides a fast and straightforward way to calculate local cross-sectional areas of the arteries, and thus provides the possibility to localize and evaluate coronary artery stenosis, which is a commonly seen pathology in coronary artery disease.

  3. Gaussian Mixture Model of Heart Rate Variability

    PubMed Central

    Costa, Tommaso; Boccignone, Giuseppe; Ferraro, Mario

    2012-01-01

    Heart rate variability (HRV) is an important measure of sympathetic and parasympathetic functions of the autonomic nervous system and a key indicator of cardiovascular condition. This paper proposes a novel method to investigate HRV, namely by modelling it as a linear combination of Gaussians. Results show that three Gaussians are enough to describe the stationary statistics of heart variability and to provide a straightforward interpretation of the HRV power spectrum. Comparisons have also been made with synthetic data generated from different physiologically based models, showing the plausibility of the Gaussian mixture parameters. PMID:22666386
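
    A sketch of the modelling step on synthetic RR intervals (a stand-in for real recordings), fitting a three-component Gaussian mixture with scikit-learn and reporting weights, means, and widths:

    ```python
    # Three-Gaussian mixture fit to synthetic RR-interval data.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(2)
    rr = np.concatenate([rng.normal(0.80, 0.02, 2000),   # three overlapping
                         rng.normal(0.85, 0.04, 1500),   # regimes of RR
                         rng.normal(0.95, 0.06, 500)])   # intervals, seconds

    gmm = GaussianMixture(n_components=3, random_state=0)
    gmm.fit(rr.reshape(-1, 1))
    for w, m, v in zip(gmm.weights_, gmm.means_.ravel(),
                       gmm.covariances_.ravel()):
        print(f"weight={w:.2f}  mean={m:.3f} s  std={v ** 0.5:.3f} s")
    ```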

  4. Bandgap profiling in CIGS solar cells via valence electron energy-loss spectroscopy

    NASA Astrophysics Data System (ADS)

    Deitz, Julia I.; Karki, Shankar; Marsillac, Sylvain X.; Grassman, Tyler J.; McComb, David W.

    2018-03-01

    A robust, reproducible method for the extraction of relative bandgap trends from scanning transmission electron microscopy (STEM) based electron energy-loss spectroscopy (EELS) is described. The effectiveness of the approach is demonstrated by profiling the bandgap through a CuIn1-xGaxSe2 solar cell that possesses intentional Ga/(In + Ga) composition variation. The EELS-determined bandgap profile is compared to the nominal profile calculated from compositional data collected via STEM-based energy dispersive X-ray spectroscopy. The EELS based profile is found to closely track the calculated bandgap trends, with only a small, fixed offset difference. This method, which is particularly advantageous for relatively narrow bandgap materials and/or STEM systems with modest resolution capabilities (i.e., >100 meV), compromises absolute accuracy to provide a straightforward route for the correlation of local electronic structure trends with nanoscale chemical and physical structure/microstructure within semiconductor materials and devices.

  5. Lagrangian methods of cosmic web classification

    NASA Astrophysics Data System (ADS)

    Fisher, J. D.; Faltenbacher, A.; Johnson, M. S. T.

    2016-05-01

    The cosmic web defines the large-scale distribution of matter we see in the Universe today. Classifying the cosmic web into voids, sheets, filaments and nodes allows one to explore structure formation and the role environmental factors have on halo and galaxy properties. While existing studies of cosmic web classification concentrate on grid-based methods, this work explores a Lagrangian approach where the V-web algorithm proposed by Hoffman et al. is implemented with techniques borrowed from smoothed particle hydrodynamics. The Lagrangian approach allows one to classify individual objects (e.g. particles or haloes) based on properties of their nearest neighbours in an adaptive manner. It can be applied directly to a halo sample which dramatically reduces computational cost and potentially allows an application of this classification scheme to observed galaxy samples. Finally, the Lagrangian nature admits a straightforward inclusion of the Hubble flow negating the necessity of a visually defined threshold value which is commonly employed by grid-based classification methods.
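
    The V-web rule itself is compact: count the eigenvalues of the (symmetrized) velocity shear tensor that exceed a threshold; zero, one, two, or three eigenvalues above threshold classify a point as void, sheet, filament, or node. The sketch below applies the rule to a stand-in tensor with an illustrative threshold; in the Lagrangian variant described above, the tensor would be estimated from an object's nearest neighbours with SPH-style kernels.

    ```python
    # V-web classification rule from shear-tensor eigenvalues.
    import numpy as np

    def classify(shear, threshold=0.44):           # illustrative threshold
        lam = np.linalg.eigvalsh(0.5 * (shear + shear.T))
        n_above = int(np.sum(lam > threshold))
        return ["void", "sheet", "filament", "node"][n_above]

    rng = np.random.default_rng(3)
    T = rng.standard_normal((3, 3))                # stand-in shear tensor
    print(classify(T))
    ```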

  6. Detection of proteins using a colorimetric bio-barcode assay.

    PubMed

    Nam, Jwa-Min; Jang, Kyung-Jin; Groves, Jay T

    2007-01-01

    The colorimetric bio-barcode assay is a red-to-blue color change-based protein detection method with ultrahigh sensitivity. This assay is based on both the bio-barcode amplification method that allows for detecting miniscule amount of targets with attomolar sensitivity and gold nanoparticle-based colorimetric DNA detection method that allows for a simple and straightforward detection of biomolecules of interest (here we detect interleukin-2, an important biomarker (cytokine) for many immunodeficiency-related diseases and cancers). The protocol is composed of the following steps: (i) conjugation of target capture molecules and barcode DNA strands onto silica microparticles, (ii) target capture with probes, (iii) separation and release of barcode DNA strands from the separated probes, (iv) detection of released barcode DNA using DNA-modified gold nanoparticle probes and (v) red-to-blue color change analysis with a graphic software. Actual target detection and quantification steps with premade probes take approximately 3 h (whole protocol including probe preparations takes approximately 3 days).

  7. Local Geometry and Evolutionary Conservation of Protein Surfaces Reveal the Multiple Recognition Patches in Protein-Protein Interactions

    PubMed Central

    Laine, Elodie; Carbone, Alessandra

    2015-01-01

    Protein-protein interactions (PPIs) are essential to all biological processes and they represent increasingly important therapeutic targets. Here, we present a new method for accurately predicting protein-protein interfaces, understanding their properties, origins and binding to multiple partners. Contrary to machine learning approaches, our method combines in a rational and very straightforward way three sequence- and structure-based descriptors of protein residues: evolutionary conservation, physico-chemical properties and local geometry. The implemented strategy yields very precise predictions for a wide range of protein-protein interfaces and discriminates them from small-molecule binding sites. Beyond its predictive power, the approach permits to dissect interaction surfaces and unravel their complexity. We show how the analysis of the predicted patches can foster new strategies for PPIs modulation and interaction surface redesign. The approach is implemented in JET2, an automated tool based on the Joint Evolutionary Trees (JET) method for sequence-based protein interface prediction. JET2 is freely available at www.lcqb.upmc.fr/JET2. PMID:26690684

  8. Evolutionary Algorithm Based Feature Optimization for Multi-Channel EEG Classification.

    PubMed

    Wang, Yubo; Veluvolu, Kalyana C

    2017-01-01

    Most BCI systems that rely on EEG signals employ Fourier-based methods for time-frequency decomposition for feature extraction. The band-limited multiple Fourier linear combiner is well-suited for such band-limited signals due to its real-time applicability. Despite the improved performance of these techniques in two-channel settings, their application to multiple-channel EEG is not straightforward and remains challenging. As more channels become available, a spatial filter is required to eliminate noise and preserve the useful information. Moreover, multiple-channel EEG also adds high dimensionality to the frequency feature space. Feature selection is then required to stabilize the performance of the classifier. In this paper, we develop a new method based on an Evolutionary Algorithm (EA) to solve these two problems simultaneously. The real-valued EA encodes both the spatial filter estimates and the feature selection into its solution and optimizes it with respect to the classification error. Three Fourier-based designs are tested in this paper. Our results show that the combination of the Fourier-based method with the covariance matrix adaptation evolution strategy (CMA-ES) has the best overall performance.
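
    As a much-simplified stand-in for the optimization loop (the paper uses CMA-ES; the sketch below uses a plain (1+1) evolution strategy instead), this shows how a real-valued genome can encode feature selection and be scored by classification error:

    ```python
    # (1+1) evolution strategy selecting features for a nearest-centroid
    # classifier; only the first three features are informative.
    import numpy as np

    rng = np.random.default_rng(4)
    n, d = 300, 20
    X = rng.standard_normal((n, d))
    y = (X[:, :3].sum(axis=1) > 0).astype(int)

    def error(w):
        Xw = X * (w > 0.5)                         # threshold -> selection
        c0, c1 = Xw[y == 0].mean(0), Xw[y == 1].mean(0)
        pred = (np.linalg.norm(Xw - c1, axis=1) <
                np.linalg.norm(Xw - c0, axis=1)).astype(int)
        return np.mean(pred != y)

    w, sigma = rng.random(d), 0.2
    best = error(w)
    for _ in range(500):                           # mutate, keep if no worse
        cand = np.clip(w + sigma * rng.standard_normal(d), 0, 1)
        e = error(cand)
        if e <= best:
            w, best = cand, e
    print(best, np.where(w > 0.5)[0])              # low error, features found
    ```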

  9. Easy method of matching fighter engine to airframe for use in aircraft engine design courses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mattingly, J.D.

    1989-01-01

    The proper match of the engine(s) to the airframe affects both aircraft size and life cycle cost. A fast and straightforward method is developed and used for the matching of fighter engine(s) to airframes during conceptual design. A thrust-lapse equation is developed for the dual-spool, mixed-flow, afterburning turbofan type of engine based on the installation losses of 'Aircraft Engine Design' and the performance predictions of the cycle analysis programs ONX and OFFX. Using system performance requirements, the effects of aircraft thrust-to-weight, wing loading, and engine cycle on takeoff weight are analyzed and example design course results presented. 5 refs.

  10. PCTDSE: A parallel Cartesian-grid-based TDSE solver for modeling laser-atom interactions

    NASA Astrophysics Data System (ADS)

    Fu, Yongsheng; Zeng, Jiaolong; Yuan, Jianmin

    2017-01-01

    We present a parallel Cartesian-grid-based time-dependent Schrödinger equation (TDSE) solver for modeling laser-atom interactions. It can simulate the single-electron dynamics of atoms in arbitrary time-dependent vector potentials. We use a split-operator method combined with fast Fourier transforms (FFT) on a three-dimensional (3D) Cartesian grid. Parallelization is realized using a 2D decomposition strategy based on the Message Passing Interface (MPI) library, which results in good parallel scaling on modern supercomputers. We give simple applications for the hydrogen atom using benchmark problems from the references and obtain reproducible results. The extensions to other laser-atom systems are straightforward with minimal modifications of the source code.
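
    The core of such a solver, reduced to one dimension for brevity (the actual code is 3D with MPI-based 2D decomposition), is the Strang-split step that alternates half potential kicks with a full kinetic step applied in Fourier space:

    ```python
    # 1D split-operator TDSE step via FFT (harmonic potential, atomic units).
    import numpy as np

    N, L, dt = 512, 40.0, 0.01
    x = np.linspace(-L / 2, L / 2, N, endpoint=False)
    k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
    V = 0.5 * x ** 2                               # harmonic potential
    psi = np.exp(-(x - 1.0) ** 2).astype(complex)  # displaced Gaussian
    psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * (L / N))

    for _ in range(1000):                          # Strang splitting
        psi *= np.exp(-0.5j * V * dt)              # half potential kick
        psi = np.fft.ifft(np.exp(-0.5j * k ** 2 * dt) * np.fft.fft(psi))
        psi *= np.exp(-0.5j * V * dt)              # half potential kick
    print(np.sum(np.abs(psi) ** 2) * (L / N))      # norm stays ~1 (unitary)
    ```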

  11. Estimating p-n Diode Bulk Parameters, Bandgap Energy and Absolute Zero by a Simple Experiment

    ERIC Educational Resources Information Center

    Ocaya, R. O.; Dejene, F. B.

    2007-01-01

    This paper presents a straightforward but interesting experimental method for p-n diode characterization. The method differs substantially from many approaches in diode characterization by offering much tighter control over the temperature and current variables. The method allows the determination of important diode constants such as temperature…

  12. Applications of 3D-EDGE Detection for ALS Point Cloud

    NASA Astrophysics Data System (ADS)

    Ni, H.; Lin, X. G.; Zhang, J. X.

    2017-09-01

    Edge detection has been one of the major issues in the fields of remote sensing and photogrammetry. With the fast development of laser scanning sensor technology, dense point clouds have become increasingly common. Precise 3D-edges can be detected from these point clouds, and many edge or feature-line extraction methods have been proposed. Among these methods, an easy-to-use 3D-edge detection method, AGPN (Analyzing Geometric Properties of Neighborhoods), has been proposed. The AGPN method detects edges based on the analysis of the geometric properties of a query point's neighbourhood. It detects two kinds of 3D-edges, boundary elements and fold edges, and it has many applications. This paper presents three applications of AGPN: 3D line segment extraction, ground point filtering, and ground breakline extraction. Experiments show that the AGPN method gives a straightforward solution to these applications.

  13. On time discretizations for the simulation of the batch settling-compression process in one dimension.

    PubMed

    Bürger, Raimund; Diehl, Stefan; Mejías, Camilo

    2016-01-01

    The main purpose of the recently introduced Bürger-Diehl simulation model for secondary settling tanks was to resolve spatial discretization problems when both hindered settling and the phenomena of compression and dispersion are included. Straightforward time integration unfortunately means long computational times. The next step in the development is to introduce and investigate time-integration methods for more efficient simulations, but where other aspects such as implementation complexity and robustness are equally considered. This is done for batch settling simulations. The key findings are partly a new time-discretization method and partly its comparison with other specially tailored and standard methods. Several advantages and disadvantages for each method are given. One conclusion is that the new linearly implicit method is easier to implement than another one (semi-implicit method), but less efficient based on two types of batch sedimentation tests.

  14. Shannon information, LMC complexity and Rényi entropies: a straightforward approach.

    PubMed

    López-Ruiz, Ricardo

    2005-04-01

    The LMC complexity, an indicator of complexity based on a probabilistic description, is revisited. A straightforward approach allows us to establish the time evolution of this indicator in a near-equilibrium situation and gives us new insight for interpreting the LMC complexity of a general non-equilibrium system. Its relationship with the Rényi entropies is also explained. One of the advantages of this indicator is that its calculation does not require a considerable computational effort in many cases of physical and biological interest.
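
    For reference, the indicator is the product C = H · D of the normalized Shannon entropy H and the disequilibrium D (the distance from equiprobability), so C vanishes both at full order and at equilibrium; a small sketch:

    ```python
    # LMC complexity C = H * D of a discrete probability distribution.
    import numpy as np

    def lmc(p):
        p = np.asarray(p, dtype=float)
        p = p / p.sum()
        N = len(p)
        nz = p[p > 0]
        H = -np.sum(nz * np.log(nz)) / np.log(N)   # normalized entropy
        D = np.sum((p - 1.0 / N) ** 2)             # disequilibrium
        return H * D

    print(lmc([0.25, 0.25, 0.25, 0.25]))  # equilibrium: D = 0, so C = 0
    print(lmc([1.0, 0.0, 0.0, 0.0]))      # fully ordered: H = 0, so C = 0
    print(lmc([0.6, 0.2, 0.1, 0.1]))      # intermediate: C > 0
    ```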

  15. Diketopyrrolopyrrole-based carbon dots for photodynamic therapy.

    PubMed

    He, Haozhe; Zheng, Xiaohua; Liu, Shi; Zheng, Min; Xie, Zhigang; Wang, Yong; Yu, Meng; Shuai, Xintao

    2018-06-01

    The development of a simple and straightforward strategy to synthesize multifunctional carbon dots for photodynamic therapy (PDT) has been an emerging focus. In this work, diketopyrrolopyrrole-based fluorescent carbon dots (DPP CDs) were designed and synthesized through a facile one-pot hydrothermal method by using diketopyrrolopyrrole (DPP) and chitosan (CTS) as raw materials. DPP CDs not only maintained the ability of DPP to generate singlet oxygen (1O2) but also have excellent hydrophilic properties and outstanding biocompatibility. In vitro and in vivo experiments demonstrated that DPP CDs greatly inhibited the growth of tumor cells under laser irradiation (540 nm). This study highlights the potential of the rational design of CDs for efficient cancer therapy.

  16. SIZE DISTRIBUTION OF SEA-SALT EMISSIONS AS A FUNCTION OF RELATIVE HUMIDITY

    EPA Science Inventory

    This note presents a straightforward method to correct sea-salt-emission particle-size distributions according to local relative humidity. The proposed method covers a wide range of relative humidity (0.45 to 0.99) and its derivation incorporates recent laboratory results on sea-...

  17. Least squares estimation of avian molt rates

    USGS Publications Warehouse

    Johnson, D.H.

    1989-01-01

    A straightforward least squares method of estimating the rate at which birds molt feathers is presented, suitable for birds captured more than once during the period of molt. The date of molt onset can also be estimated. The method is applied to male and female mourning doves.
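
    A sketch of the idea with hypothetical capture data (the paper's estimator is tailored to birds captured more than once during molt): ordinary least squares of molt score on date gives the molt rate as the slope and the onset date as the x-intercept.

    ```python
    # Least squares molt rate and onset from hypothetical captures.
    import numpy as np

    day = np.array([10, 14, 20, 25, 31, 38, 44])   # days since 1 June
    score = np.array([0.05, 0.12, 0.25, 0.33, 0.47, 0.60, 0.71])

    slope, intercept = np.polyfit(day, score, 1)
    print(f"molt rate  = {slope:.4f} per day")
    print(f"molt onset = day {-intercept / slope:.1f}")
    ```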

  18. TE/TM decomposition of electromagnetic sources

    NASA Technical Reports Server (NTRS)

    Lindell, Ismo V.

    1988-01-01

    Three methods are given by which bounded EM sources can be decomposed into two parts radiating transverse electric (TE) and transverse magnetic (TM) fields with respect to a given constant direction in space. The theory applies source equivalence and nonradiating source concepts, which lead to decomposition methods based on a recursive formula or two differential equations for the determination of the TE and TM components of the original source. Decompositions for a dipole in terms of point, line, and plane sources are studied in detail. The planar decomposition is seen to match to an earlier result given by Clemmow (1963). As an application of the point decomposition method, it is demonstrated that the general exact image expression for the Sommerfeld half-space problem, previously derived through heuristic reasoning, can be more straightforwardly obtained through the present decomposition method.

  19. Measuring housing quality in the absence of a monetized real estate market.

    PubMed

    Rindfuss, Ronald R; Piotrowski, Martin; Thongthai, Varachai; Prasartkul, Pramote

    2007-03-01

    Measuring housing quality or value or both has been a weak component of demographic and development research in less developed countries that lack an active real estate (housing) market. We describe a new method based on a standardized subjective rating process. It is designed to be used in settings that do not have an active, monetized housing market. The method is applied in an ongoing longitudinal study in north-east Thailand and could be straightforwardly used in many other settings. We develop a conceptual model of the process whereby households come to reside in high-quality or low-quality housing units. We use this theoretical model in conjunction with longitudinal data to show that the new method of measuring housing quality behaves as theoretically expected, thus providing evidence of face validity.

  20. Direct synthesis of vertically aligned ZnO nanowires on FTO substrates using a CVD method and the improvement of photovoltaic performance

    PubMed Central

    2012-01-01

    In this work, we report a direct synthesis of vertically aligned ZnO nanowires on fluorine-doped tin oxide-coated substrates using the chemical vapor deposition (CVD) method. ZnO nanowires with a length of more than 30 μm were synthesized, and dye-sensitized solar cells (DSSCs) based on the as-grown nanowires were fabricated, which showed improvement of the device performance compared to those fabricated using transferred ZnO nanowires. Dependence of the cell performance on nanowire length and annealing temperature was also examined. This synthesis method provided a straightforward, one-step CVD process to grow relatively long ZnO nanowires and avoided subsequent nanowire transfer process, which simplified DSSC fabrication and improved cell performance. PMID:22673046

  1. Fast solution of elliptic partial differential equations using linear combinations of plane waves.

    PubMed

    Pérez-Jordá, José M

    2016-02-01

    Given an arbitrary elliptic partial differential equation (PDE), a procedure for obtaining its solution is proposed based on the method of Ritz: the solution is written as a linear combination of plane waves and the coefficients are obtained by variational minimization. The PDE to be solved is cast as a system of linear equations Ax = b, where the matrix A is not sparse, which prevents the straightforward application of standard iterative methods in order to solve it. This sparseness problem can be circumvented by means of a recursive bisection approach based on the fast Fourier transform, which makes it possible to implement fast versions of some stationary iterative methods (such as Gauss-Seidel) consuming O(N log N) memory and executing an iteration in O(N log² N) time, N being the number of plane waves used. In a similar way, fast versions of Krylov subspace methods and multigrid methods can also be implemented. These procedures are tested on Poisson's equation expressed in adaptive coordinates. It is found that the best results are obtained with the GMRES method using a multigrid preconditioner with Gauss-Seidel relaxation steps.
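
    The simplest special case makes the plane-wave strategy transparent: with uniform coordinates and periodic boundaries the Poisson operator is diagonal in the plane-wave basis, so one FFT round trip solves the problem exactly; the adaptive coordinates used in the paper destroy this diagonality, which is what motivates the fast iterative solvers developed there.

    ```python
    # Periodic 1D Poisson solve, -u'' = f, in the plane-wave basis.
    import numpy as np

    N, L = 256, 2 * np.pi
    x = np.linspace(0, L, N, endpoint=False)
    f = np.sin(3 * x)                             # right-hand side

    k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
    fk = np.fft.fft(f)
    uk = np.zeros_like(fk)
    uk[k != 0] = fk[k != 0] / k[k != 0] ** 2      # divide by k^2 (zero mode = 0)
    u = np.real(np.fft.ifft(uk))

    print(np.max(np.abs(u - np.sin(3 * x) / 9)))  # ~1e-15: exact is sin(3x)/9
    ```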

  2. Setup for functional cell ablation with lasers: coupling of a laser to a microscope.

    PubMed

    Sweeney, Sean T; Hidalgo, Alicia; de Belle, J Steven; Keshishian, Haig

    2012-06-01

    The selective removal of cells by ablation is a powerful tool in the study of eukaryotic developmental biology, providing much information about their origin, fate, or function in the developing organism. In Drosophila, three main methods have been used to ablate cells: chemical, genetic, and laser ablation. Each method has its own applicability with regard to developmental stage and the cells to be ablated, and its own limitations. The primary advantage of laser-based ablation is the flexibility provided by the method: The operations can be performed in any cell pattern and at any time in development. Laser-based techniques permit manipulation of structures within cells, even to the molecular level. They can also be used for gene activation. However, laser ablation can be expensive, labor-intensive, and time-consuming. Although live cells can be difficult to image in Drosophila embryos, the use of vital fluorescent imaging methods has made laser-mediated cell manipulation methods more appealing; the methods are relatively straightforward. This article provides the information necessary for setting up and using a laser microscope for laser ablation studies.

  3. Temporal Downscaling of Crop Coefficient and Crop Water Requirement from Growing Stage to Substage Scales

    PubMed Central

    Shang, Songhao

    2012-01-01

    Crop water requirement is essential for agricultural water management, which is usually available for crop growing stages. However, crop water requirement values of monthly or weekly scales are more useful for water management. A method was proposed to downscale crop coefficient and water requirement from growing stage to substage scales, which is based on the interpolation of accumulated crop and reference evapotranspiration calculated from their values in growing stages. The proposed method was compared with two straightforward methods, that is, direct interpolation of crop evapotranspiration and crop coefficient by assuming that stage average values occurred in the middle of the stage. These methods were tested with a simulated daily crop evapotranspiration series. Results indicate that the proposed method is more reliable, showing that the downscaled crop evapotranspiration series is very close to the simulated ones. PMID:22619572
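
    A simplified sketch of the downscaling (linear interpolation here, whereas the paper interpolates the accumulated crop and reference evapotranspiration): interpolate the accumulated stage totals onto a finer time axis and difference, which preserves the stage totals exactly.

    ```python
    # Downscale stage crop ET to 10-day periods via accumulated totals.
    import numpy as np

    stage_end = np.array([0, 30, 70, 110, 130])      # stage bounds, days
    stage_et = np.array([25.0, 80.0, 120.0, 40.0])   # stage crop ET, mm

    cum = np.concatenate([[0.0], np.cumsum(stage_et)])
    fine_t = np.linspace(0, 130, 14)                 # 10-day time axis
    fine_cum = np.interp(fine_t, stage_end, cum)     # interpolated curve
    fine_et = np.diff(fine_cum)                      # substage ET values

    print(fine_et.sum(), stage_et.sum())             # totals match: 265 265
    ```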

  4. van der Waals interactions between nanostructures: Some analytic results from series expansions

    NASA Astrophysics Data System (ADS)

    Stedman, T.; Drosdoff, D.; Woods, L. M.

    2014-01-01

    The van der Waals force between objects of nontrivial geometries is considered. A technique based on a perturbation series approach is formulated in the dilute limit. We show that the dielectric response and object size can be decoupled and dominant contributions in terms of object separations can be obtained. This is a powerful method, which enables straightforward calculations of the interaction for different geometries. Our results for planar structures, such as thin sheets, infinitely long ribbons, and ribbons with finite dimensions, may be applicable for nanostructured devices where the van der Waals interaction plays an important role.

  5. Stepwise Bay Annulation of Indigo for the Synthesis of Desymmetrized Electron Acceptors and Donor–Acceptor Constructs

    DOE PAGES

    Kolaczkowski, Matthew A.; He, Bo; Liu, Yi

    2016-10-10

    In this work, a selective stepwise annulation of indigo has been demonstrated as a means of providing both monoannulated and differentially double-annulated indigo derivatives. Disparate substitution of the electron-accepting bay-annulated indigo system allows for fine control over both the electronic properties and the donor-acceptor structural architectures. Optical and electronic properties were characterized computationally as well as through UV-vis absorption spectroscopy and cyclic voltammetry. Finally, this straightforward method provides a modular approach for the design of indigo-based materials with tailored optoelectronic properties.

  6. Applications of remote sensing, volume 3

    NASA Technical Reports Server (NTRS)

    Landgrebe, D. A. (Principal Investigator)

    1977-01-01

    The author has identified the following significant results. Of the four change detection techniques (post-classification comparison, delta data, spectral/temporal, and layered spectral-temporal), the post-classification comparison was selected for further development. This was based upon the test performance of the four change detection methods, the straightforwardness of the procedures, and the output products desired. A standardized, modified, supervised classification procedure for analyzing the Texas coastal zone data was compiled. This procedure was developed so that all quadrangles in the study area would be classified using similar analysis techniques to allow for meaningful comparisons and evaluations of the classifications.

  7. Swimming of an assembly of rigid spheres at low Reynolds number.

    PubMed

    Felderhof, B U

    2014-11-01

    A matrix formulation is derived for the calculation of the swimming speed and the power required for swimming of an assembly of rigid spheres immersed in a viscous fluid of infinite extent. The spheres may have arbitrary radii and may interact with elastic forces. The analysis is based on the Stokes mobility matrix of the set of spheres, defined in low Reynolds number hydrodynamics. For small-amplitude swimming, optimization of the swimming speed at given power leads to an eigenvalue problem. The method allows straightforward calculation of the swimming performance of structures modeled as assemblies of interacting rigid spheres.

  8. Harmonic Skeleton Guided Evaluation of Stenoses in Human Coronary Arteries

    PubMed Central

    Yang, Yan; Zhu, Lei; Haker, Steven; Tannenbaum, Allen R.; Giddens, Don P.

    2013-01-01

    This paper presents a novel approach that three-dimensionally visualizes and evaluates stenoses in human coronary arteries by using harmonic skeletons. A harmonic skeleton is the center line of a multi-branched tubular surface extracted based on a harmonic function, which is the solution of the Laplace equation. This skeletonization method guarantees smoothness and connectivity and provides a fast and straightforward way to calculate local cross-sectional areas of the arteries, and thus provides the possibility to localize and evaluate coronary artery stenosis, which is a commonly seen pathology in coronary artery disease. PMID:16685882

  9. Observational clues to the energy release process in impulsive solar bursts

    NASA Technical Reports Server (NTRS)

    Batchelor, David

    1990-01-01

    The nature of the energy release process that produces impulsive bursts of hard X-rays and microwaves during solar flares is discussed, based on new evidence obtained using the method of Crannell et al. (1978). It is shown that the hard X-ray spectral index gamma is negatively correlated with the microwave peak frequency, suggesting a common source for the microwaves and X-rays. The thermal and nonthermal models are compared. It is found that the most straightforward explanations for burst time behavior are shock-wave particle acceleration in the nonthermal model and thermal conduction fronts in the thermal model.

  10. Second law of thermodynamics in volume diffusion hydrodynamics in multicomponent gas mixtures

    NASA Astrophysics Data System (ADS)

    Dadzie, S. Kokou

    2012-10-01

    We present the thermodynamic structure of a new continuum flow model for multicomponent gas mixtures. The continuum model is based on a volume diffusion concept involving specific species. It is independent of the observer's reference frame and enables straightforward tracking of a selected species within a mixture composed of a large number of constituents. A method to derive the second law and the constitutive equations accompanying the model is presented. Using the configuration of a rotating fluid, we illustrate an example of non-classical flow physics predicted by the new contributions in the entropy and constitutive equations.

  11. A Conceptual Approach to Assimilating Remote Sensing Data to Improve Soil Moisture Profile Estimates in a Surface Flux/Hydrology Model. 3; Disaggregation

    NASA Technical Reports Server (NTRS)

    Caulfield, John; Crosson, William L.; Inguva, Ramarao; Laymon, Charles A.; Schamschula, Marius

    1998-01-01

    This is a follow-up to the preceding presentation by Crosson and Schamschula. The grid size of remote microwave measurements is much coarser than the hydrological model computational grids. To validate the hydrological models against measurements, we propose mechanisms to disaggregate the microwave measurements to allow comparison with outputs from the hydrological models. Weighted interpolation and Bayesian methods are proposed to facilitate the comparison. While remote measurements occur at a large scale, they reflect underlying small-scale features. We can provide continuing estimates of the small-scale features by correcting a simple zeroth-order estimate of each small-scale model with each large-scale measurement, using a straightforward method based on Kalman filtering.
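
    A hedged sketch of the correction idea (a scalar Kalman-style update applied uniformly to the fine-scale estimates, much simplified relative to the actual disaggregation scheme): each coarse-scale measurement shifts the fine-scale estimates in proportion to the ratio of their error variances.

    ```python
    # Scalar Kalman-style update of fine-scale estimates from a
    # coarse-scale (aggregated) measurement; all numbers hypothetical.
    import numpy as np

    fine_est = np.array([0.20, 0.30, 0.25, 0.35])  # prior soil moisture
    P = 0.01                                       # prior error variance
    R = 0.0025                                     # measurement error variance

    z = 0.24                                       # coarse-pixel measurement
    pred = fine_est.mean()                         # predicted coarse value
    K = P / (P + R)                                # Kalman gain
    fine_upd = fine_est + K * (z - pred)           # shift all fine estimates
    print(fine_upd)                                # corrected toward z
    ```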

  12. Optimal 2D-SIM reconstruction by two filtering steps with Richardson-Lucy deconvolution.

    PubMed

    Perez, Victor; Chang, Bo-Jui; Stelzer, Ernst Hans Karl

    2016-11-16

    Structured illumination microscopy relies on reconstruction algorithms to yield super-resolution images. Artifacts can arise in the reconstruction and affect the image quality. Current reconstruction methods involve a parametrized apodization function and a Wiener filter. Empirically tuning the parameters in these functions can minimize artifacts, but such an approach is subjective and produces volatile results. We present a robust and objective method that yields optimal results by two straightforward filtering steps with Richardson-Lucy-based deconvolutions. We provide a resource to identify artifacts in 2D-SIM images by analyzing two main reasons for artifacts, out-of-focus background and a fluctuating reconstruction spectrum. We show how the filtering steps improve images of test specimens, microtubules, yeast and mammalian cells.
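
    The building block of both filtering steps is the Richardson-Lucy iteration; a minimal 1D version is sketched below (the paper applies it within the full 2D-SIM reconstruction):

    ```python
    # Minimal Richardson-Lucy deconvolution of a blurred 1D signal.
    import numpy as np

    def richardson_lucy(observed, psf, iters=50):
        est = np.full_like(observed, observed.mean())  # flat initial guess
        psf_m = psf[::-1]                              # mirrored PSF
        for _ in range(iters):
            conv = np.convolve(est, psf, mode="same")
            ratio = observed / np.maximum(conv, 1e-12)
            est *= np.convolve(ratio, psf_m, mode="same")
        return est

    x = np.zeros(100)
    x[30], x[60] = 1.0, 0.5                            # two point sources
    psf = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2)
    psf /= psf.sum()
    blurred = np.convolve(x, psf, mode="same")
    print(richardson_lucy(blurred, psf).argmax())      # peak recovered at ~30
    ```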

  13. Optimal 2D-SIM reconstruction by two filtering steps with Richardson-Lucy deconvolution

    NASA Astrophysics Data System (ADS)

    Perez, Victor; Chang, Bo-Jui; Stelzer, Ernst Hans Karl

    2016-11-01

    Structured illumination microscopy relies on reconstruction algorithms to yield super-resolution images. Artifacts can arise in the reconstruction and affect the image quality. Current reconstruction methods involve a parametrized apodization function and a Wiener filter. Empirically tuning the parameters in these functions can minimize artifacts, but such an approach is subjective and produces volatile results. We present a robust and objective method that yields optimal results by two straightforward filtering steps with Richardson-Lucy-based deconvolutions. We provide a resource to identify artifacts in 2D-SIM images by analyzing two main reasons for artifacts, out-of-focus background and a fluctuating reconstruction spectrum. We show how the filtering steps improve images of test specimens, microtubules, yeast and mammalian cells.

  14. RAPID COMMUNICATION Time-resolved measurements with a vortex flowmeter in a pulsating turbulent flow using wavelet analysis

    NASA Astrophysics Data System (ADS)

    Laurantzon, F.; Örlü, R.; Segalini, A.; Alfredsson, P. H.

    2010-12-01

    Vortex flowmeters are commonly employed in technical applications and are available in a variety of commercial types. However, their robustness and accuracy can easily be impaired by environmental conditions, such as inflow disturbances and/or pulsating conditions. Various post-processing techniques for the vortex signal have been used, but all of these methods have so far been aimed at obtaining an improved estimate of the time-averaged bulk velocity. Here, on the other hand, we propose, based on wavelet analysis, a straightforward way to utilize the signal from a vortex shedder to extract the time-resolved, and thereby the phase-averaged, velocity under pulsatile flow conditions. The method was verified with hot-wire and laser Doppler velocimetry measurements.

  15. Lesion Detection in CT Images Using Deep Learning Semantic Segmentation Technique

    NASA Astrophysics Data System (ADS)

    Kalinovsky, A.; Liauchuk, V.; Tarasau, A.

    2017-05-01

    In this paper, the problem of automatic detection of tuberculosis lesion on 3D lung CT images is considered as a benchmark for testing out algorithms based on a modern concept of Deep Learning. For training and testing of the algorithms a domestic dataset of 338 3D CT scans of tuberculosis patients with manually labelled lesions was used. The algorithms which are based on using Deep Convolutional Networks were implemented and applied in three different ways including slice-wise lesion detection in 2D images using semantic segmentation, slice-wise lesion detection in 2D images using sliding window technique as well as straightforward detection of lesions via semantic segmentation in whole 3D CT scans. The algorithms demonstrate superior performance compared to algorithms based on conventional image analysis methods.
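
    A generic sketch of the slice-wise sliding-window variant mentioned above, under stated assumptions: `predict_patch` stands in for a hypothetical trained classifier that scores a fixed-size 2D patch, and scores are accumulated into a lesion-probability map for the slice.

```python
import numpy as np

def sliding_window_map(slice2d, predict_patch, win=64, stride=32):
    """Accumulate patch scores into a per-pixel lesion probability map."""
    h, w = slice2d.shape
    prob = np.zeros((h, w))
    hits = np.zeros((h, w))
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            p = predict_patch(slice2d[y:y + win, x:x + win])  # scalar in [0, 1]
            prob[y:y + win, x:x + win] += p
            hits[y:y + win, x:x + win] += 1
    return prob / np.maximum(hits, 1)   # average score where windows overlap
```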

  16. Development of methods for inferring cloud thickness and cloud-base height from satellite radiance data

    NASA Technical Reports Server (NTRS)

    Smith, William L., Jr.; Minnis, Patrick; Alvarez, Joseph M.; Uttal, Taneil; Intrieri, Janet M.; Ackerman, Thomas P.; Clothiaux, Eugene

    1993-01-01

    Cloud-top height is a major factor determining the outgoing longwave flux at the top of the atmosphere. The downwelling radiation from the cloud strongly affects the cooling rate within the atmosphere and the longwave radiation incident at the surface. Thus, determination of cloud-base temperature is important for proper calculation of fluxes below the cloud. Cloud-base altitude is also an important factor in aircraft operations. Cloud-top height or temperature can be derived in a straightforward manner using satellite-based infrared data. Cloud-base temperature, however, is not observable from the satellite, but is related to the height, phase, and optical depth of the cloud in addition to other variables. This study uses surface and satellite data taken during the First ISCCP Regional Experiment (FIRE) Phase-2 Intensive Field Observation (IFO) period (13 Nov. - 7 Dec. 1991) to improve techniques for deriving cloud-base height from conventional satellite data.

  17. Accelerated gradient methods for the x-ray imaging of solar flares

    NASA Astrophysics Data System (ADS)

    Bonettini, S.; Prato, M.

    2014-05-01

    In this paper we present new optimization strategies for the reconstruction of x-ray images of solar flares by means of the data collected by the Reuven Ramaty high energy solar spectroscopic imager. The imaging concept of the satellite is based on rotating modulation collimator instruments, which allow the use of both Fourier imaging approaches and reconstruction techniques based on the straightforward inversion of the modulated count profiles. Although in the last decade, greater attention has been devoted to the former strategies due to their very limited computational cost, here we consider the latter model and investigate the effectiveness of different accelerated gradient methods for the solution of the corresponding constrained minimization problem. Moreover, regularization is introduced through either an early stopping of the iterative procedure, or a Tikhonov term added to the discrepancy function by means of a discrepancy principle accounting for the Poisson nature of the noise affecting the data.
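
    A minimal, non-accelerated sketch of the constrained minimization described above: projected gradient iterations on the Poisson (Kullback-Leibler) discrepancy with a nonnegativity constraint. The accelerated variants studied in the paper add momentum and step-size strategies omitted here, and the operator `A` and counts `y` are hypothetical placeholders.

```python
import numpy as np

def projected_gradient_poisson(A, y, n_iter=200, step=1e-3, eps=1e-12):
    """Minimize the KL discrepancy sum(Ax - y*log(Ax)) subject to x >= 0."""
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        Ax = np.maximum(A @ x, eps)
        grad = A.T @ (1.0 - y / Ax)         # gradient of the KL discrepancy
        x = np.maximum(x - step * grad, 0)  # gradient step + projection
    return x
```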

  18. A subsystem identification method based on the path concept with coupling strength estimation

    NASA Astrophysics Data System (ADS)

    Magrans, Francesc Xavier; Poblet-Puig, Jordi; Rodríguez-Ferran, Antonio

    2018-02-01

    For complex geometries, the definition of the subsystems is not a straightforward task. We present here a subsystem identification method based on the direct transfer matrix, which represents the first-order paths. The key ingredient is a cluster analysis of the rows of the powers of the transfer matrix. These powers represent high-order paths in the system and are more affected than low-order paths by damping. Once subsystems are identified, the proposed approach also provides a quantification of the degree of coupling between subsystems. This information is relevant for deciding whether a subsystem may be analysed in a computer model, or measured in the laboratory, independently of the rest of the subsystems. The two features (subsystem identification and quantification of the degree of coupling) are illustrated by means of numerical examples: plates coupled by means of springs and rooms connected by means of a cavity.
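
    An illustrative sketch of the path-based idea, under stated assumptions: raise the direct transfer matrix to a power (high-order paths), then cluster its rows; rows with similar high-order path patterns are grouped into one subsystem. The row normalization, path order, and cluster count are assumptions for illustration, not the paper's prescriptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def identify_subsystems(T, order=4, n_clusters=2):
    """Cluster degrees of freedom by the rows of T**order."""
    Tn = np.linalg.matrix_power(T, order)                  # high-order paths
    rows = Tn / np.linalg.norm(Tn, axis=1, keepdims=True)  # scale-free rows
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(rows)
```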

  19. A weak Hamiltonian finite element method for optimal control problems

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Bless, Robert R.

    1989-01-01

    A temporal finite element method based on a mixed form of the Hamiltonian weak principle is developed for dynamics and optimal control problems. The mixed form of Hamilton's weak principle contains both displacements and momenta as primary variables that are expanded in terms of nodal values and simple polynomial shape functions. Unlike other forms of Hamilton's principle, however, time derivatives of the momenta and displacements do not appear therein; instead, only the virtual momenta and virtual displacements are differentiated with respect to time. Based on the duality that is observed to exist between the mixed form of Hamilton's weak principle and variational principles governing classical optimal control problems, a temporal finite element formulation of the latter can be developed in a rather straightforward manner. Several well-known problems in dynamics and optimal control are illustrated. The example dynamics problem involves a time-marching problem. As optimal control examples, elementary trajectory optimization problems are treated.

  20. A weak Hamiltonian finite element method for optimal control problems

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Bless, Robert R.

    1990-01-01

    A temporal finite element method based on a mixed form of the Hamiltonian weak principle is developed for dynamics and optimal control problems. The mixed form of Hamilton's weak principle contains both displacements and momenta as primary variables that are expanded in terms of nodal values and simple polynomial shape functions. Unlike other forms of Hamilton's principle, however, time derivatives of the momenta and displacements do not appear therein; instead, only the virtual momenta and virtual displacements are differentiated with respect to time. Based on the duality that is observed to exist between the mixed form of Hamilton's weak principle and variational principles governing classical optimal control problems, a temporal finite element formulation of the latter can be developed in a rather straightforward manner. Several well-known problems in dynamics and optimal control are illustrated. The example dynamics problem involves a time-marching problem. As optimal control examples, elementary trajectory optimization problems are treated.

  1. Weak Hamiltonian finite element method for optimal control problems

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Bless, Robert R.

    1991-01-01

    A temporal finite element method based on a mixed form of the Hamiltonian weak principle is developed for dynamics and optimal control problems. The mixed form of Hamilton's weak principle contains both displacements and momenta as primary variables that are expanded in terms of nodal values and simple polynomial shape functions. Unlike other forms of Hamilton's principle, however, time derivatives of the momenta and displacements do not appear therein; instead, only the virtual momenta and virtual displacements are differentiated with respect to time. Based on the duality that is observed to exist between the mixed form of Hamilton's weak principle and variational principles governing classical optimal control problems, a temporal finite element formulation of the latter can be developed in a rather straightforward manner. Several well-known problems in dynamics and optimal control are illustrated. The example dynamics problem involves a time-marching problem. As optimal control examples, elementary trajectory optimization problems are treated.

  2. Heuristics for connectivity-based brain parcellation of SMA/pre-SMA through force-directed graph layout.

    PubMed

    Crippa, Alessandro; Cerliani, Leonardo; Nanetti, Luca; Roerdink, Jos B T M

    2011-02-01

    We propose the use of force-directed graph layout as an explorative tool for connectivity-based brain parcellation studies. The method can be used as a heuristic to find the number of clusters intrinsically present in the data (if any) and to investigate their organisation. It provides an intuitive representation of the structure of the data and facilitates interactive exploration of properties of single seed voxels as well as relations among (groups of) voxels. We validate the method on synthetic data sets and we investigate the changes in connectivity in the supplementary motor cortex, a brain region whose parcellation has been previously investigated via connectivity studies. This region is supposed to present two easily distinguishable connectivity patterns, putatively denoted by SMA (supplementary motor area) and pre-SMA. Our method provides insights with respect to the connectivity patterns of the premotor cortex. These present a substantial variation among subjects, and their subdivision into two well-separated clusters is not always straightforward. Copyright © 2010 Elsevier Inc. All rights reserved.
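
    A minimal sketch of the heuristic, assuming a correlation-based similarity and a median threshold (both illustrative choices): build a graph whose edge weights are similarities between seed-voxel connectivity profiles, then run a force-directed (spring) layout so that voxels with similar profiles land close together and putative clusters become visible.

```python
import numpy as np
import networkx as nx

def layout_from_profiles(profiles):
    """profiles: (n_voxels, n_targets) connectivity matrix -> 2D layout."""
    sim = np.corrcoef(profiles)            # voxel-by-voxel similarity
    thresh = np.median(sim)                # illustrative sparsification
    n = sim.shape[0]
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if sim[i, j] > thresh:
                G.add_edge(i, j, weight=sim[i, j])
    return nx.spring_layout(G, weight="weight")  # dict: voxel -> 2D position
```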

  3. Outstanding performance of configuration interaction singles and doubles using exact exchange Kohn-Sham orbitals in real-space numerical grid method

    NASA Astrophysics Data System (ADS)

    Lim, Jaechang; Choi, Sunghwan; Kim, Jaewook; Kim, Woo Youn

    2016-12-01

    To assess the performance of multi-configuration methods using exact exchange Kohn-Sham (KS) orbitals, we implemented configuration interaction singles and doubles (CISD) in a real-space numerical grid code. We obtained KS orbitals with the exchange-only optimized effective potential under the Krieger-Li-Iafrate (KLI) approximation. Thanks to the distinctive features of KLI orbitals compared with Hartree-Fock (HF), such as bound virtual orbitals with compact shapes and orbital energy gaps similar to excitation energies, KLI-CISD for small molecules shows much faster convergence as a function of simulation box size and active space (i.e., the number of virtual orbitals) than HF-CISD. The former also gives more accurate excitation energies with a few dominant configurations than the latter, even with many more configurations. The systematic control of basis set errors is straightforward in grid bases. Therefore, grid-based multi-configuration methods using exact exchange KS orbitals provide a promising new way to make accurate electronic structure calculations.

  4. Study on the interaction between hematoporphyrin monomethyl ether and DNA and the determination of hematoporphyrin monomethyl ether using the resonance light scattering technique

    NASA Astrophysics Data System (ADS)

    Chen, Zhanguang; Song, Tianhe; Chen, Xi; Wang, Shaobin; Chen, Junhui

    2010-10-01

    The interaction between photosensitizer anticancer drug hematoporphyrin monomethyl ether (HMME) and ctDNA has been studied based on the decreased resonance light scattering (RLS) phenomenon. The RLS, UV-vis and fluorescence spectra characteristics of the HMME-ctDNA system were investigated. Besides, the phosphodiesters quaternary ammonium salt (PQAS), a kind of new gemini surfactant synthesized recently, was used to determine anticancer drug HMME based on the increasing RLS intensity. Under the optimum assay conditions, the enhanced RLS intensity was proportional to the concentration of HMME. The linear range was 0.8-8.4 μg mL⁻¹, with correlation coefficient R² = 0.9913. The detection limit was 0.014 μg mL⁻¹. The human serum samples and urine samples were determined satisfactorily, which proved that this method was reliable and applicable in the determination of HMME in body fluid. The presented method was simple, sensitive and straightforward and could be a significant method in clinical analysis.

  5. GAP Noise Computation By The CE/SE Method

    NASA Technical Reports Server (NTRS)

    Loh, Ching Y.; Chang, Sin-Chung; Wang, Xiao Y.; Jorgenson, Philip C. E.

    2001-01-01

    A typical gap noise problem is considered in this paper using the new space-time conservation element and solution element (CE/SE) method. Implementation of the computation is straightforward. No turbulence model, LES (large eddy simulation) or a preset boundary layer profile is used, yet the computed frequency agrees well with the experimental one.

  6. An efficient, widely applicable cryopreservation of Lilium shoot tips by droplet vitrification

    USDA-ARS?s Scientific Manuscript database

    We report a straightforward and widely applicable cryopreservation method for Lilium shoot tips. This method uses adventitious shoots that were induced from leaf segments cultured for 4 weeks on a shoot regeneration medium containing 1 mg L⁻¹ α-naphthaleneacetic acid (NAA) and 0.5 mg L⁻¹ thidiazuron...

  7. Photometric correction for an optical CCD-based system based on the sparsity of an eight-neighborhood gray gradient.

    PubMed

    Zhang, Yuzhong; Zhang, Yan

    2016-07-01

    In an optical measurement and analysis system based on a CCD, due to the existence of optical vignetting and natural vignetting, photometric distortion, in which the intensity falls off away from the image center, affects the subsequent processing and measuring precision severely. To deal with this problem, an easy and straightforward method used for photometric distortion correction is presented in this paper. This method introduces a simple polynomial fitting model of the photometric distortion function and employs a particle swarm optimization algorithm to get these model parameters by means of a minimizing eight-neighborhood gray gradient. Compared with conventional calibration methods, this method can obtain the profile information of photometric distortion from only a single common image captured by the optical CCD-based system, with no need for a uniform luminance area source used as a standard reference source and relevant optical and geometric parameters in advance. To illustrate the applicability of this method, numerical simulations and photometric distortions with different lens parameters are evaluated using this method in this paper. Moreover, the application example of temperature field correction for casting billets also demonstrates the effectiveness of this method. The experimental results show that the proposed method is able to achieve the maximum absolute error for vignetting estimation of 0.0765 and the relative error for vignetting estimation from different background images of 3.86%.
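
    A hedged sketch of the idea, under stated assumptions: model vignetting as a radial polynomial V(r) = 1 + a·r² + b·r⁴ + c·r⁶, divide it out, and search (a, b, c) with a tiny particle swarm so that the corrected image minimizes its mean eight-neighborhood gray gradient. The model form, swarm hyperparameters, and coefficient bounds are illustrative, not the paper's exact setup.

```python
import numpy as np

def eight_neighbor_gradient(img):
    """Mean absolute gray difference against the 8 neighbors (interior only)."""
    g = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy or dx:
                g += np.abs(img - np.roll(np.roll(img, dy, 0), dx, 1))
    return g[1:-1, 1:-1].mean()

def correct_vignetting(img, n_particles=20, n_iter=60, seed=0):
    img = img.astype(float)
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    rad2 = ((yy - h / 2) ** 2 + (xx - w / 2) ** 2) / ((h / 2) ** 2 + (w / 2) ** 2)

    def cost(p):
        a, b, c = p
        v = 1 + a * rad2 + b * rad2**2 + c * rad2**3
        return eight_neighbor_gradient(img / np.maximum(v, 1e-6))

    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1, 1, (n_particles, 3))      # particle positions (a, b, c)
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pcost = np.array([cost(p) for p in pos])
    gbest = pbest[pcost.argmin()]
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        costs = np.array([cost(p) for p in pos])
        better = costs < pcost
        pbest[better] = pos[better]
        pcost[better] = costs[better]
        gbest = pbest[pcost.argmin()]
    a, b, c = gbest
    return img / np.maximum(1 + a * rad2 + b * rad2**2 + c * rad2**3, 1e-6)
```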

  8. Virtual World Currency Value Fluctuation Prediction System Based on User Sentiment Analysis.

    PubMed

    Kim, Young Bin; Lee, Sang Hyeok; Kang, Shin Jin; Choi, Myung Jin; Lee, Jung; Kim, Chang Hun

    2015-01-01

    In this paper, we present a method for predicting the value of virtual currencies used in virtual gaming environments that support multiple users, such as massively multiplayer online role-playing games (MMORPGs). Predicting virtual currency values in a virtual gaming environment has rarely been explored; it is difficult to apply real-world methods for predicting fluctuating currency values or shares to the virtual gaming world on account of differences in domains between the two worlds. To address this issue, we herein predict virtual currency value fluctuations by collecting user opinion data from a virtual community and analyzing user sentiments or emotions from the opinion data. The proposed method is straightforward and applicable to predicting virtual currencies as well as to gaming environments, including MMORPGs. We test the proposed method using large-scale MMORPGs and demonstrate that virtual currencies can be effectively and efficiently predicted with it.

  9. Virtual World Currency Value Fluctuation Prediction System Based on User Sentiment Analysis

    PubMed Central

    Kim, Young Bin; Lee, Sang Hyeok; Kang, Shin Jin; Choi, Myung Jin; Lee, Jung; Kim, Chang Hun

    2015-01-01

    In this paper, we present a method for predicting the value of virtual currencies used in virtual gaming environments that support multiple users, such as massively multiplayer online role-playing games (MMORPGs). Predicting virtual currency values in a virtual gaming environment has rarely been explored; it is difficult to apply real-world methods for predicting fluctuating currency values or shares to the virtual gaming world on account of differences in domains between the two worlds. To address this issue, we herein predict virtual currency value fluctuations by collecting user opinion data from a virtual community and analyzing user sentiments or emotions from the opinion data. The proposed method is straightforward and applicable to predicting virtual currencies as well as to gaming environments, including MMORPGs. We test the proposed method using large-scale MMORPGs and demonstrate that virtual currencies can be effectively and efficiently predicted with it. PMID:26241496

  10. Fourier-based classification of protein secondary structures.

    PubMed

    Shu, Jian-Jun; Yong, Kian Yan

    2017-04-15

    The correct prediction of protein secondary structures is one of the key issues in predicting the correct protein folded shape, which is used for determining gene function. Existing methods make use of amino acids properties as indices to classify protein secondary structures, but are faced with a significant number of misclassifications. The paper presents a technique for the classification of protein secondary structures based on protein "signal-plotting" and the use of the Fourier technique for digital signal processing. New indices are proposed to classify protein secondary structures by analyzing hydrophobicity profiles. The approach is simple and straightforward. Results show that more types of protein secondary structures can be classified by means of these newly-proposed indices. Copyright © 2017 Elsevier Inc. All rights reserved.
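
    An illustrative "signal-plotting" sketch, not the paper's own indices: map a sequence to a hydropathy signal (the Kyte-Doolittle scale, a standard choice assumed here) and inspect the Fourier magnitude near the alpha-helix periodicity of about 3.6 residues per turn. A reasonably long sequence is assumed.

```python
import numpy as np

# Kyte-Doolittle hydropathy scale
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
      "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
      "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
      "Y": -1.3, "V": 4.2}

def helix_periodicity_strength(seq):
    """Relative spectral power near the ~3.6-residue alpha-helix period."""
    x = np.array([KD[a] for a in seq], dtype=float)
    x -= x.mean()                               # remove the DC component
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x))             # cycles per residue
    band = (freqs > 1 / 4.0) & (freqs < 1 / 3.2)   # window around 1/3.6
    return spec[band].max() / spec[1:].mean()   # relative peak height
```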

  11. Morphology-Controlled Synthesis of Organometal Halide Perovskite Inverse Opals.

    PubMed

    Chen, Kun; Tüysüz, Harun

    2015-11-09

    The booming development of organometal halide perovskites in recent years has prompted the exploration of morphology-control strategies to improve their performance in photovoltaic, photonic, and optoelectronic applications. However, the preparation of organometal halide perovskites with high hierarchical architecture is still highly challenging and a general morphology-control method for various organometal halide perovskites has not been achieved. A mild and scalable method to prepare organometal halide perovskites in inverse opal morphology is presented that uses a polystyrene-based artificial opal as hard template. Our method is flexible and compatible with different halides and organic ammonium compositions. Thus, the perovskite inverse opal maintains the advantage of straightforward structure and band gap engineering. Furthermore, optoelectronic investigations reveal that morphology exerted influence on the conducting nature of organometal halide perovskites. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. An entropy-based method for determining the flow depth distribution in natural channels

    NASA Astrophysics Data System (ADS)

    Moramarco, Tommaso; Corato, Giovanni; Melone, Florisa; Singh, Vijay P.

    2013-08-01

    A methodology is developed for determining the bathymetry of river cross-sections during floods from sampled surface flow velocities and existing low-flow hydraulic data. Similar to Chiu (1988), who proposed an entropy-based velocity distribution, the flow depth distribution in a cross-section of a natural channel is derived by entropy maximization. The depth distribution depends on one parameter, whose estimate is straightforward, and on the maximum flow depth. Applied to a velocity data set from five river gage sites, the method modeled the flow area observed during flow measurements and accurately assessed the corresponding discharge by coupling the flow depth distribution with the entropic relation between mean velocity and maximum velocity. The methodology opens a new perspective for flow monitoring by remote sensing, considering that the two main quantities on which it is based, surface flow velocity and flow depth, might be sensed by new sensors operating aboard aircraft or satellites.
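
    The entropic mean/maximum velocity relation used in this line of work (Chiu) is u_mean / u_max = Φ(M) = e^M / (e^M − 1) − 1/M, where M is the entropy parameter. A minimal sketch of the discharge step follows; the calibrated value M ≈ 2.1 is an illustrative assumption.

```python
import numpy as np

def phi(M):
    """Chiu's entropic ratio u_mean / u_max as a function of M."""
    return np.exp(M) / (np.exp(M) - 1.0) - 1.0 / M

def discharge(u_max, area, M=2.1):
    """Q = Phi(M) * u_max * A, with A the entropy-derived flow area."""
    return phi(M) * u_max * area
```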

  13. Recent Development in Chemical Depolymerization of Lignin: A Review

    DOE PAGES

    Wang, Hai; Tucker, Melvin; Ji, Yun

    2013-01-01

    This article reviewed recent developments in the chemical depolymerization of lignins. Five types of treatment were discussed: base-catalyzed, acid-catalyzed, metallic-catalyzed, ionic liquid-assisted, and supercritical fluid-assisted lignin depolymerization. The methods employed in this research were described and the important results highlighted. Generally, base-catalyzed and acid-catalyzed methods were straightforward, but their selectivity was low. The severe reaction conditions (high pressure, high temperature, and extreme pH) required specially designed reactors, which led to high facility and handling costs. Ionic liquid- and supercritical fluid-assisted lignin depolymerizations had high selectivity, but the high costs of ionic liquid recycling and supercritical fluid facilities limited their application to commercial-scale biomass treatment. Metallic-catalyzed depolymerization had great advantages because of its high selectivity toward certain monomeric compounds and much milder reaction conditions than base-catalyzed or acid-catalyzed depolymerization. It would be a great contribution to lignin conversion if appropriate catalysts were synthesized.

  14. A UML profile for framework modeling.

    PubMed

    Xu, Xiao-liang; Wang, Le-yu; Zhou, Hong

    2004-01-01

    The current standard Unified Modeling Language (UML) cannot adequately model framework flexibility and extendability due to the lack of appropriate constructs to distinguish framework hot-spots from kernel elements. A new UML profile that customizes UML for framework modeling was presented using the extension mechanisms of UML, providing a group of UML extensions to meet the needs of framework modeling. In this profile, extended class diagrams and sequence diagrams were defined to straightforwardly identify the hot-spots and describe their instantiation restrictions. A transformation model based on design patterns was also put forward, such that the profile-based framework design diagrams could be automatically mapped to the corresponding implementation diagrams. It was shown that the presented profile makes framework modeling more straightforward and therefore easier to understand and instantiate.

  15. STORMWATER BEST MANAGEMENT PRACTICE MONITORING

    EPA Science Inventory

    Implementation of an effective BMP monitoring program is not a straightforward task. BMPs by definition are devices, practices, or methods used to manage stormwater runoff. This umbrella term lumps widely varying techniques into a single category. Also, with the existence of ...

  16. Empty backhaul, an opportunity to avoid fuel expended on the road.

    DOT National Transportation Integrated Search

    2009-11-01

    "An effort was undertaken to determine whether or not vehicle telemetry could provide data : which would indicate whether a commercial vehicle was operating under loaded or unloaded conditions. : With a straightforward method for establishing the loa...

  17. Analytic Evolution of Singular Distribution Amplitudes in QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tandogan Kunkel, Asli

    2014-08-01

    Distribution amplitudes (DAs) are the basic functions that contain information about the quark momentum. DAs are necessary to describe hard exclusive processes in quantum chromodynamics. We describe a method of analytic evolution of DAs that have singularities, such as nonzero values at the end points of the support region, jumps at some points inside the support region, and cusps. We illustrate the method by applying it to the evolution of a flat (constant) DA and an antisymmetric flat DA, and then use the method for the evolution of the two-photon generalized distribution amplitude. Our approach to DA evolution has advantages over the standard method of expansion in Gegenbauer polynomials [1, 2] and over a straightforward iteration of an initial distribution with the evolution kernel. Expansion in Gegenbauer polynomials requires an infinite number of terms in order to accurately reproduce functions in the vicinity of singular points. Straightforward iteration of an initial distribution produces logarithmically divergent terms at each iteration. In our method the logarithmic singularities are summed from the start, which immediately produces a continuous curve. Afterwards, in order to get precise results, only one or two iterations are needed.

  18. Empirical Likelihood-Based Estimation of the Treatment Effect in a Pretest-Posttest Study.

    PubMed

    Huang, Chiung-Yu; Qin, Jing; Follmann, Dean A

    2008-09-01

    The pretest-posttest study design is commonly used in medical and social science research to assess the effect of a treatment or an intervention. Recently, interest has been rising in developing inference procedures that improve efficiency while relaxing assumptions used in the pretest-posttest data analysis, especially when the posttest measurement might be missing. In this article we propose a semiparametric estimation procedure based on empirical likelihood (EL) that incorporates the common baseline covariate information to improve efficiency. The proposed method also yields an asymptotically unbiased estimate of the response distribution. Thus functions of the response distribution, such as the median, can be estimated straightforwardly, and the EL method can provide a more appealing estimate of the treatment effect for skewed data. We show that, compared with existing methods, the proposed EL estimator has appealing theoretical properties, especially when the working model for the underlying relationship between the pretest and posttest measurements is misspecified. A series of simulation studies demonstrates that the EL-based estimator outperforms its competitors when the working model is misspecified and the data are missing at random. We illustrate the methods by analyzing data from an AIDS clinical trial (ACTG 175).

  19. Empirical Likelihood-Based Estimation of the Treatment Effect in a Pretest–Posttest Study

    PubMed Central

    Huang, Chiung-Yu; Qin, Jing; Follmann, Dean A.

    2013-01-01

    The pretest–posttest study design is commonly used in medical and social science research to assess the effect of a treatment or an intervention. Recently, interest has been rising in developing inference procedures that improve efficiency while relaxing assumptions used in the pretest–posttest data analysis, especially when the posttest measurement might be missing. In this article we propose a semiparametric estimation procedure based on empirical likelihood (EL) that incorporates the common baseline covariate information to improve efficiency. The proposed method also yields an asymptotically unbiased estimate of the response distribution. Thus functions of the response distribution, such as the median, can be estimated straightforwardly, and the EL method can provide a more appealing estimate of the treatment effect for skewed data. We show that, compared with existing methods, the proposed EL estimator has appealing theoretical properties, especially when the working model for the underlying relationship between the pretest and posttest measurements is misspecified. A series of simulation studies demonstrates that the EL-based estimator outperforms its competitors when the working model is misspecified and the data are missing at random. We illustrate the methods by analyzing data from an AIDS clinical trial (ACTG 175). PMID:23729942

  20. Ultrasonic-based membrane aided sample preparation of urine proteomes.

    PubMed

    Jesus, Jemmyson Romário; Santos, Hugo M; López-Fernández, H; Lodeiro, Carlos; Arruda, Marco Aurélio Zezzi; Capelo, J L

    2018-02-01

    A new ultrafast ultrasonic-based method for shotgun proteomics as well as label-free protein quantification in urine samples is developed. The method first separates the urine proteins using nitrocellulose-based membranes, after which the proteins are in-membrane digested using trypsin. The enzymatic digestion process is accelerated from overnight to four minutes using a sonoreactor ultrasonic device. Overall, the sample treatment pipeline comprising protein separation, digestion and identification is done in just 3 h. The process is assessed using urine of healthy volunteers. The method shows that males can be differentiated from females using the protein content of urine in a fast, easy and straightforward way. 232 and 226 proteins were identified in the urine of males and females, respectively. Of these, 162 are common to both genders, whilst 70 are unique to males and 64 to females. From the 162 common proteins, 13 are present at levels that are statistically different (p < 0.05). The method matches the analytical minimalism concept as outlined by Halls, as each stage of this analysis is evaluated to minimize the time, cost, sample requirement, reagent consumption, energy requirements and production of waste products. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Highly accurate symplectic element based on two variational principles

    NASA Astrophysics Data System (ADS)

    Qing, Guanghui; Tian, Jia

    2018-02-01

    To meet the stability requirements on numerical results, the mathematical theory of classical mixed methods is relatively complex. However, generalized mixed methods are automatically stable, and their construction is simple and straightforward. In this paper, based on the seminal idea of the generalized mixed methods, a simple, stable, and highly accurate 8-node noncompatible symplectic element (NCSE8) was developed by combining the modified Hellinger-Reissner mixed variational principle and the minimum energy principle. To ensure the accuracy of in-plane stress results, a simultaneous equation approach was also suggested. Numerical experimentation shows that the stress results of NCSE8 are nearly as accurate as those of displacement methods, and they are in good agreement with the exact solutions when the mesh is relatively fine. NCSE8 has the advantages of a clear concept, easy implementation in a finite element computer program, higher accuracy, and wide applicability to various linear elasticity problems with compressible and nearly incompressible materials. NCSE8 may prove even more advantageous for fracture problems due to its better stress accuracy.

  2. A Simple Spreadsheet Program for the Calculation of Lattice-Site Distributions

    ERIC Educational Resources Information Center

    McCaffrey, John G.

    2009-01-01

    A simple spreadsheet program is presented that can be used by undergraduate students to calculate the lattice-site distributions in solids. A major strength of the method is the natural way in which the correct number of ions or atoms are present, or absent, at specific lattice distances. The expanding-cube method utilized is straightforward to…

  3. A new method for long-term storage of titred microbial standard solutions suitable for microbiologic quality control activities of pharmaceutical companies.

    PubMed

    Chiellini, Carolina; Mocali, Stefano; Fani, Renato; Ferro, Iolanda; Bruschi, Serenella; Pinzani, Alessandro

    2016-08-01

    Commercially available lyophilized microbial standards are expensive and subject to reduction in cell viability due to freeze-drying stress. Here we introduce an inexpensive and straightforward method for in-house microbial standard preparation and cryoconservation that preserves constant cell titre and cell viability over 14 months.

  4. Dual-wavelength digital holographic imaging with phase background subtraction

    NASA Astrophysics Data System (ADS)

    Khmaladze, Alexander; Matz, Rebecca L.; Jasensky, Joshua; Seeley, Emily; Holl, Mark M. Banaszak; Chen, Zhan

    2012-05-01

    Three-dimensional digital holographic microscopic phase imaging of objects that are thicker than the wavelength of the imaging light is ambiguous and results in phase wrapping. In recent years, several unwrapping methods that employed two or more wavelengths were introduced. These methods compare the phase information obtained from each of the wavelengths and extend the range of unambiguous height measurements. A straightforward dual-wavelength phase imaging method is presented which allows for a flexible tradeoff between the maximum height of the sample and the amount of noise the method can tolerate. For highly accurate phase measurements, phase unwrapping of objects with heights higher than the beat (synthetic) wavelength (i.e. the product of the original two wavelengths divided by their difference), can be achieved. Consequently, three-dimensional measurements of a wide variety of biological systems and microstructures become technically feasible. Additionally, an effective method of removing phase background curvature based on slowly varying polynomial fitting is proposed. This method allows accurate volume measurements of several small objects with the same image frame.
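
    A minimal sketch of the dual-wavelength step described above: the difference of the two wrapped phase maps, rewrapped to [0, 2π), behaves like a single-wavelength phase at the much longer beat (synthetic) wavelength Λ = λ₁λ₂/|λ₁ − λ₂|, which extends the unambiguous range. The result below is an optical path difference map; converting it to physical height depends on the geometry (reflection vs. transmission) and refractive index, which are not assumed here.

```python
import numpy as np

def synthetic_phase_opd(phi1, phi2, lam1, lam2):
    """Coarse optical path difference from two wrapped phase maps."""
    beat = lam1 * lam2 / abs(lam1 - lam2)      # synthetic (beat) wavelength
    phi_s = np.mod(phi1 - phi2, 2 * np.pi)     # unambiguous synthetic phase
    return phi_s / (2 * np.pi) * beat          # OPD map, range [0, beat)

# Example: lam1 = 633e-9, lam2 = 532e-9 gives beat ~ 3.3 um of range,
# roughly five times the unambiguous range of either wavelength alone.
```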

  5. Fast graph-based relaxed clustering for large data sets using minimal enclosing ball.

    PubMed

    Qian, Pengjiang; Chung, Fu-Lai; Wang, Shitong; Deng, Zhaohong

    2012-06-01

    Although graph-based relaxed clustering (GRC) is one of the spectral clustering algorithms with straightforwardness and self-adaptability, it is sensitive to the parameters of the adopted similarity measure and also has high time complexity O(N³), which severely weakens its usefulness for large data sets. In order to overcome these shortcomings, after introducing certain constraints for GRC, an enhanced version of GRC [constrained GRC (CGRC)] is proposed to increase the robustness of GRC to the parameters of the adopted similarity measure, and accordingly, a novel algorithm called fast GRC (FGRC) based on CGRC is developed in this paper by using the core-set-based minimal enclosing ball approximation. A distinctive advantage of FGRC is that its asymptotic time complexity is linear with the data set size N. At the same time, FGRC also inherits the straightforwardness and self-adaptability from GRC, making the proposed FGRC a fast and effective clustering algorithm for large data sets. The advantages of FGRC are validated by various benchmarking and real data sets.

  6. Representation of complex probabilities and complex Gibbs sampling

    NASA Astrophysics Data System (ADS)

    Salcedo, Lorenzo Luis

    2018-03-01

    Complex weights appear in Physics which are beyond a straightforward importance sampling treatment, as required in Monte Carlo calculations. This is the well-known sign problem. The complex Langevin approach amounts to effectively constructing a positive distribution on the complexified manifold reproducing the expectation values of the observables through their analytical extension. Here we discuss the direct construction of such positive distributions, paying attention to their localization on the complexified manifold. Explicit localized representations are obtained for complex probabilities defined on Abelian and non-Abelian groups. The viability and performance of a complex version of the heat bath method, based on such representations, is analyzed.

  7. Shifting Native Chemical Ligation into Reverse through N→S Acyl Transfer

    PubMed Central

    Macmillan, Derek; Adams, Anna; Premdjee, Bhavesh

    2011-01-01

    Peptide thioester synthesis by N→S acyl transfer is being intensively explored by many research groups the world over. Reasons for this likely include the often straightforward method of precursor assembly using Fmoc-based chemistry and the fundamentally interesting acyl migration process. In this review we introduce recent advances in this exciting area and discuss, in more detail, our own efforts towards the synthesis of peptide thioesters through N→S acyl transfer in native peptide sequences. We have found that several peptide thioesters can be readily prepared and, what’s more, there appears to be ample opportunity for further development and discovery. PMID:22347724

  8. Nonlattice simulation for supersymmetric gauge theories in one dimension.

    PubMed

    Hanada, Masanori; Nishimura, Jun; Takeuchi, Shingo

    2007-10-19

    Lattice simulation of supersymmetric gauge theories is not straightforward. In some cases the lack of manifest supersymmetry just necessitates cumbersome fine-tuning, but in the worst cases the chiral and/or Majorana nature of fermions makes it difficult to even formulate an appropriate lattice theory. We propose circumventing all these problems inherent in the lattice approach by adopting a nonlattice approach for one-dimensional supersymmetric gauge theories, which are important in the string or M theory context. In particular, our method can be used to investigate the gauge-gravity duality from first principles, and to simulate M theory based on the matrix theory conjecture.

  9. Fragment-based screening by protein crystallography: successes and pitfalls.

    PubMed

    Chilingaryan, Zorik; Yin, Zhou; Oakley, Aaron J

    2012-10-08

    Fragment-based drug discovery (FBDD) concerns the screening of low-molecular weight compounds against macromolecular targets of clinical relevance. These compounds act as starting points for the development of drugs. FBDD has evolved and grown in popularity over the past 15 years. In this paper, the rationale and technology behind the use of X-ray crystallography in fragment based screening (FBS) will be described, including fragment library design and use of synchrotron radiation and robotics for high-throughput X-ray data collection. Some recent uses of crystallography in FBS will be described in detail, including interrogation of the drug targets β-secretase, phenylethanolamine N-methyltransferase, phosphodiesterase 4A and Hsp90. These examples provide illustrations of projects where crystallography is straightforward or difficult, and where other screening methods can help overcome the limitations of crystallography necessitated by diffraction quality.

  10. Fragment-Based Screening by Protein Crystallography: Successes and Pitfalls

    PubMed Central

    Chilingaryan, Zorik; Yin, Zhou; Oakley, Aaron J.

    2012-01-01

    Fragment-based drug discovery (FBDD) concerns the screening of low-molecular weight compounds against macromolecular targets of clinical relevance. These compounds act as starting points for the development of drugs. FBDD has evolved and grown in popularity over the past 15 years. In this paper, the rationale and technology behind the use of X-ray crystallography in fragment based screening (FBS) will be described, including fragment library design and use of synchrotron radiation and robotics for high-throughput X-ray data collection. Some recent uses of crystallography in FBS will be described in detail, including interrogation of the drug targets β-secretase, phenylethanolamine N-methyltransferase, phosphodiesterase 4A and Hsp90. These examples provide illustrations of projects where crystallography is straightforward or difficult, and where other screening methods can help overcome the limitations of crystallography necessitated by diffraction quality. PMID:23202926

  11. Resonant transition-based quantum computation

    NASA Astrophysics Data System (ADS)

    Chiang, Chen-Fu; Hsieh, Chang-Yu

    2017-05-01

    In this article we assess a novel quantum computation paradigm based on the resonant transition (RT) phenomenon commonly associated with atomic and molecular systems. We thoroughly analyze the intimate connections between the RT-based quantum computation and the well-established adiabatic quantum computation (AQC). Both quantum computing frameworks encode solutions to computational problems in the spectral properties of a Hamiltonian and rely on the quantum dynamics to obtain the desired output state. We discuss how one can adapt any adiabatic quantum algorithm to a corresponding RT version and the two approaches are limited by different aspects of Hamiltonians' spectra. The RT approach provides a compelling alternative to the AQC under various circumstances. To better illustrate the usefulness of the novel framework, we analyze the time complexity of an algorithm for 3-SAT problems and discuss straightforward methods to fine tune its efficiency.

  12. Unicameral bone cysts: general characteristics and management controversies.

    PubMed

    Pretell-Mazzini, Juan; Murphy, Robert Francis; Kushare, Indranil; Dormans, John P

    2014-05-01

    Unicameral bone cysts are benign bone lesions that are often asymptomatic and commonly develop in the proximal humerus and femur of skeletally immature patients. The etiology of these lesions remains unknown. Most patients present with a pathologic fracture, but these cysts can be discovered incidentally, as well. Radiographically, a unicameral bone cyst appears as a radiolucent lesion with cortical thinning and is centrally located within the metaphysis. Although diagnosis is frequently straightforward, management remains controversial. Because the results of various management methods are heterogeneous, no single method has emerged as the standard of care. New minimally invasive techniques involve cyst decompression with bone grafting and instrumentation. These techniques have yielded promising results, with low rates of complications and recurrence reported; however, prospective clinical trials are needed to compare these techniques with current evidence-based treatments.

  13. Lidar-Based Rock-Fall Hazard Characterization of Cliffs

    USGS Publications Warehouse

    Collins, Brian D.; Greg M.Stock,

    2017-01-01

    Rock falls from cliffs and other steep slopes present numerous challenges for detailed geological characterization. In steep terrain, rock-fall source areas are both dangerous and difficult to access, severely limiting the ability to make detailed structural and volumetric measurements necessary for hazard assessment. Airborne and terrestrial lidar survey methods can provide high-resolution data needed for volumetric, structural, and deformation analyses of rock falls, potentially making these analyses straightforward and routine. However, specific methods to collect, process, and analyze lidar data of steep cliffs are needed to maximize analytical accuracy and efficiency. This paper presents observations showing how lidar data sets should be collected, filtered, registered, and georeferenced to tailor their use in rock fall characterization. Additional observations concerning surface model construction, volumetric calculations, and deformation analysis are also provided.

  14. Comparison of prescription reimbursement methodologies in Japan and the United States.

    PubMed

    Akaho, Eiichi; MacLaughlin, Eric J; Takeuchi, Yoshikazu

    2003-01-01

    To compare methods of prescription reimbursement in Japan and the United States. Data were obtained through interviews and a search of the pharmacy literature using MEDLINE, International Pharmaceutical Abstracts, the Iowa Drug Information Service, and the Internet. Search terms were pharmacy, dispensing fee, reimbursement, prescriptions, Japan, United States, and average wholesale price (AWP). A comprehensive search was done (i.e., no year limits were observed). Data extraction was performed manually by the authors. The reimbursement systems for prescriptions differ widely between Japan and the United States. The reimbursement system in the United States is fairly straightforward and easy to understand; it is generally based on product cost (e.g., AWP minus a percentage) plus a small dispensing fee. The system in Japan is extremely complex. Reimbursement formulae have four components, including fees for professional dispensing, drug cost, counseling and administration, and medication supplies and devices. Additionally, various adjustments to the final amount are made based on dosage form, length of therapy, number of prescriptions dispensed by the pharmacy per month, and when the prescription is filled (e.g., after hours, on Sundays or holidays). In Japan, each pharmacist is limited to filling 40 prescriptions per day, but each "prescription" can involve several medication orders, making it difficult to compare Japanese pharmacists' workloads with those of their counterparts in the United States. In addition, Japanese pharmacists are remunerated for providing various cognitive services, such as taking a patient history, counseling a patient, consulting with a physician, and identifying drug-related problems. Japan and the United States have very different methods of reimbursing pharmacists for dispensing prescriptions, each with positive and negative features. Based on the features of the pharmacy reimbursement systems in each country, perhaps the optimal pharmacy practice system would have workload limits that reflect safety standards and the amount of support staff available, provide a fair and standardized method for determining drug cost, be relatively straightforward, pay for cognitive services, and provide care for all citizens through some type of national health care system.

  15. General Dialdehyde Click Chemistry for Amine Bioconjugation.

    PubMed

    Elahipanah, Sina; O'Brien, Paul J; Rogozhnikov, Dmitry; Yousaf, Muhammad N

    2017-05-17

    The development of methods for conjugating a range of molecules to primary amine functional groups has revolutionized the fields of chemistry, biology, and material science. The primary amine is a key functional group and one of the most important nucleophiles and bases used in all of synthetic chemistry. Therefore, tremendous interest in the synthesis of molecules containing primary amines and strategies to devise chemical reactions to react with primary amines has been at the core of chemical research. In particular, primary amines are a ubiquitous functional group found in biological systems as free amino acids, as key side chain lysines in proteins, and in signaling molecules and metabolites and are also present in many natural product classes. Due to its abundance, the primary amine is the most convenient functional group handle in molecules for ligation to other molecules for a broad range of applications that impact all scientific fields. Because of the primary amine's central importance in synthetic chemistry, acid-base chemistry, redox chemistry, and biology, many methods have been developed to efficiently react with primary amines, including activated carboxylic acids, isothiocyanates, Michael addition type systems, and reaction with ketones or aldehydes followed by in situ reductive amination. Herein, we introduce a new traceless, high-yield, fast click-chemistry method based on the rapid and efficient trapping of amine groups via a functionalized dialdehyde group. The click reaction occurs in mild conditions in organic solvents or aqueous media and proceeds in high yield, and the starting dialdehyde reagent and resulting dialdehyde click conjugates are stable. Moreover, no catalyst or dialdehyde-activating group is required, and the only byproduct is water. The initial dialdehyde and the resulting conjugate are both straightforward to characterize, and the reaction proceeds with high atom economy. To demonstrate the broad scope of this new click-conjugation strategy, we designed a straightforward scheme to synthesize a suite of dialdehyde reagents. The dialdehyde molecules were used for applications in cell-surface engineering and for tailoring surfaces for material science applications. We anticipate the broad utility of the general dialdehyde click chemistry to primary amines in all areas of chemical research, ranging from polymers and bioconjugation to material science and nanoscience.

  16. Near-edge band structures and band gaps of Cu-based semiconductors predicted by the modified Becke-Johnson potential plus an on-site Coulomb U

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yubo; Zhang, Jiawei; Wang, Youwei

    Diamond-like Cu-based multinary semiconductors are a rich family of materials that hold promise in a wide range of applications. Unfortunately, accurate theoretical understanding of the electronic properties of these materials is hindered by the involvement of Cu d electrons. Density functional theory (DFT) based calculations using the local density approximation or generalized gradient approximation often give qualitatively wrong electronic properties of these materials, especially for narrow-gap systems. The modified Becke-Johnson (mBJ) method has been shown to be a promising alternative to more elaborate theory such as the GW approximation for fast materials screening and predictions. However, straightforward applications of the mBJ method to these materials still encounter significant difficulties because of the insufficient treatment of the localized d electrons. We show that combining the promise of the mBJ potential with the spirit of the well-established DFT + U method leads to a much improved description of the electronic structures, including the most challenging narrow-gap systems. A survey of the band gaps of about 20 Cu-based semiconductors calculated using the mBJ + U method shows that the results agree with reliable values to within ±0.2 eV.

  17. Speeding Up Non-Parametric Bootstrap Computations for Statistics Based on Sample Moments in Small/Moderate Sample Size Applications

    PubMed Central

    Chaibub Neto, Elias

    2015-01-01

    In this paper we propose a vectorized implementation of the non-parametric bootstrap for statistics based on sample moments. Basically, we adopt the multinomial sampling formulation of the non-parametric bootstrap, and compute bootstrap replications of sample moment statistics by simply weighting the observed data according to multinomial counts instead of evaluating the statistic on a resampled version of the observed data. Using this formulation we can generate a matrix of bootstrap weights and compute the entire vector of bootstrap replications with a few matrix multiplications. Vectorization is particularly important for matrix-oriented programming languages such as R, where matrix/vector calculations tend to be faster than scalar operations implemented in a loop. We illustrate the application of the vectorized implementation in real and simulated data sets, when bootstrapping Pearson’s sample correlation coefficient, and compare its performance against two state-of-the-art R implementations of the non-parametric bootstrap, as well as a straightforward one based on a for loop. Our investigations spanned varying sample sizes and number of bootstrap replications. The vectorized bootstrap compared favorably against the state-of-the-art implementations in all cases tested, and was remarkably/considerably faster for small/moderate sample sizes. The same results were observed in the comparison with the straightforward implementation, except for large sample sizes, where the vectorized bootstrap was slightly slower than the straightforward implementation due to increased time expenditures in the generation of weight matrices via multinomial sampling. PMID:26125965
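
    The multinomial-weight formulation translates directly into a few matrix products. Below is a sketch in that spirit (in numpy rather than the paper's R) for Pearson's correlation: draw multinomial counts, convert them to weights summing to one, and form every bootstrap replication of the required sample moments at once.

```python
import numpy as np

def bootstrap_correlation(x, y, n_boot=10000, seed=0):
    """Vectorized multinomial-weight bootstrap of Pearson's correlation."""
    n = len(x)
    rng = np.random.default_rng(seed)
    # (n_boot, n) matrix of bootstrap weights, each row summing to 1
    W = rng.multinomial(n, np.full(n, 1.0 / n), size=n_boot) / n
    mx, my = W @ x, W @ y                 # weighted means, all replications
    cxy = W @ (x * y) - mx * my           # weighted covariances
    vx = W @ (x * x) - mx**2              # weighted variances
    vy = W @ (y * y) - my**2
    return cxy / np.sqrt(vx * vy)         # (n_boot,) bootstrap correlations
```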

  18. Multiwavelength metasurfaces through spatial multiplexing

    DOE PAGES

    Arbabi, Ehsan; Arbabi, Amir; Kamali, Seyedeh Mahsa; ...

    2016-09-06

    Metasurfaces are two-dimensional arrangements of optical scatterers rationally arranged to control optical wavefronts. Despite the significant advances made in wavefront engineering through metasurfaces, most of these devices are designed for and operate at a single wavelength. Here we show that spatial multiplexing schemes can be applied to increase the number of operation wavelengths. We use a high-contrast dielectric transmitarray platform with amorphous silicon nano-posts to demonstrate polarization-insensitive metasurface lenses with a numerical aperture of 0.46 that focus light at 915 and 1550 nm to the same focal distance. We investigate two different methods, one based on large-scale segmentation and one on meta-atom interleaving, and compare their performances. An important feature of this method is its simple generalization to adding more wavelengths or new functionalities to a device. Furthermore, it provides a relatively straightforward method for achieving multi-functional and multiwavelength metasurface devices.

  19. Protecting Privacy of Shared Epidemiologic Data without Compromising Analysis Potential

    DOE PAGES

    Cologne, John; Grant, Eric J.; Nakashima, Eiji; ...

    2012-01-01

    Objective. Ensuring privacy of research subjects when epidemiologic data are shared with outside collaborators involves masking (modifying) the data, but overmasking can compromise utility (analysis potential). Methods of statistical disclosure control for protecting privacy may be impractical for individual researchers involved in small-scale collaborations. Methods. We investigated a simple approach based on measures of disclosure risk and analytical utility that are straightforward for epidemiologic researchers to derive. The method is illustrated using data from the Japanese Atomic-bomb Survivor population. Results. Masking by modest rounding did not adequately enhance security but rounding to remove several digits of relative accuracy effectively reduced the risk of identification without substantially reducing utility. Grouping or adding random noise led to noticeable bias. Conclusions. When sharing epidemiologic data, it is recommended that masking be performed using rounding. Specific treatment should be determined separately in individual situations after consideration of the disclosure risks and analysis needs.

  20. Protecting Privacy of Shared Epidemiologic Data without Compromising Analysis Potential

    PubMed Central

    Cologne, John; Grant, Eric J.; Nakashima, Eiji; Chen, Yun; Funamoto, Sachiyo; Katayama, Hiroaki

    2012-01-01

    Objective. Ensuring privacy of research subjects when epidemiologic data are shared with outside collaborators involves masking (modifying) the data, but overmasking can compromise utility (analysis potential). Methods of statistical disclosure control for protecting privacy may be impractical for individual researchers involved in small-scale collaborations. Methods. We investigated a simple approach based on measures of disclosure risk and analytical utility that are straightforward for epidemiologic researchers to derive. The method is illustrated using data from the Japanese Atomic-bomb Survivor population. Results. Masking by modest rounding did not adequately enhance security but rounding to remove several digits of relative accuracy effectively reduced the risk of identification without substantially reducing utility. Grouping or adding random noise led to noticeable bias. Conclusions. When sharing epidemiologic data, it is recommended that masking be performed using rounding. Specific treatment should be determined separately in individual situations after consideration of the disclosure risks and analysis needs. PMID:22505949

  1. Measurement of charge transfer potential barrier in pinned photodiode CMOS image sensors

    NASA Astrophysics Data System (ADS)

    Chen, Cao; Bing, Zhang; Junfeng, Wang; Longsheng, Wu

    2016-05-01

    The charge transfer potential barrier (CTPB) formed beneath the transfer gate causes a noticeable image lag issue in pinned photodiode (PPD) CMOS image sensors (CIS), and is difficult to measure straightforwardly since it is embedded inside the device. From an understanding of the CTPB formation mechanism, we report on an alternative method to feasibly measure the CTPB height by performing a linear extrapolation coupled with a horizontal left-shift on the sensor photoresponse curve under steady-state illumination. The principle of the proposed method was studied theoretically in detail. Application of the measurements to a prototype PPD-CIS chip with an array of 160 × 160 pixels is demonstrated. Such a method is intended to shed new light on the optimization of lag-free, high-speed sensors based on PPD devices. Project supported by the National Defense Pre-Research Foundation of China (No. 51311050301095).

  2. Quantitative Image Restoration in Bright Field Optical Microscopy.

    PubMed

    Gutiérrez-Medina, Braulio; Sánchez Miranda, Manuel de Jesús

    2017-11-07

    Bright field (BF) optical microscopy is regarded as a poor method to observe unstained biological samples due to intrinsic low image contrast. We introduce quantitative image restoration in bright field (QRBF), a digital image processing method that restores out-of-focus BF images of unstained cells. Our procedure is based on deconvolution, using a point spread function modeled from theory. By comparing with reference images of bacteria observed in fluorescence, we show that QRBF faithfully recovers shape and enables quantifying the size of individual cells, even from a single input image. We applied QRBF in a high-throughput image cytometer to assess shape changes in Escherichia coli during hyperosmotic shock, finding size heterogeneity. We demonstrate that QRBF is also applicable to eukaryotic cells (yeast). Altogether, digital restoration emerges as a straightforward alternative to methods designed to generate contrast in BF imaging for quantitative analysis. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.
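
    The QRBF step itself is a deconvolution with a theory-derived PSF. The sketch below shows the general shape of such a restoration using Richardson-Lucy deconvolution from scikit-image, with a Gaussian stand-in for the modeled PSF; the library choice, file name, and PSF parameters are assumptions, not the authors' implementation.

        import numpy as np
        from skimage import io, restoration

        # Gaussian stand-in for a theory-modeled point spread function (assumed)
        x = np.arange(-7, 8)
        g = np.exp(-x**2 / (2.0 * 2.0**2))
        psf = np.outer(g, g)
        psf /= psf.sum()

        img = io.imread("bf_cells.tif").astype(float)  # hypothetical input image
        img /= img.max()                               # normalize to [0, 1]
        restored = restoration.richardson_lucy(img, psf, num_iter=30)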

  3. A straightforward method to determine flavouring substances in food by GC-MS.

    PubMed

    Lopez, Patricia; van Sisseren, Maarten; De Marco, Stefania; Jekel, Ad; de Nijs, Monique; Mol, Hans G J

    2015-05-01

    A straightforward GC-MS method was developed to determine the occurrence of fourteen flavouring compounds in food. It was successfully validated for four generic types of food (liquids, semi-solids, dry solids and fatty solids) in terms of limit of quantification, linearity, selectivity, matrix effects, recovery (53-120%) and repeatability (3-22%). The method was applied to a survey of 61 Dutch food products. The survey was designed to cover all the food commodities for which the EU Regulation 1334/2008 set maximum permitted levels. All samples were compliant with EU legislation. However, the levels of coumarin (0.6-63 mg/kg) may result in an exposure that, in case of children, would exceed the tolerable daily intake (TDI) of 0.1 mg/kg bw/day. In addition to coumarin, estragole, methyl-eugenol, (R)-(+)-pulegone and thujone were EU-regulated substances detected in thirty-one of the products. The non-EU regulated alkenylbenzenes, trans-anethole and myristicin, were commonly present in beverages and in herbs-containing products. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Matrix Treatment of Ray Optics.

    ERIC Educational Resources Information Center

    Quon, W. Steve

    1996-01-01

    Describes a method to combine two learning experiences--optical physics and matrix mathematics--in a straightforward laboratory experiment that allows engineering/physics students to integrate a variety of learning insights and technical skills, including using lasers, studying refraction through thin lenses, applying concepts of matrix…

  5. NONSTATIONARY SPATIAL MODELING OF ENVIRONMENTAL DATA USING A PROCESS CONVOLUTION APPROACH

    EPA Science Inventory

    Traditional approaches to modeling spatial processes involve the specification of the covariance structure of the field. Although such methods are straightforward to understand and effective in some situations, there are often problems in incorporating non-stationarity and in ma...

  6. Methods and Techniques of Revenue Forecasting.

    ERIC Educational Resources Information Center

    Caruthers, J. Kent; Wentworth, Cathi L.

    1997-01-01

    Revenue forecasting is the critical first step in most college and university budget-planning processes. While it seems a straightforward exercise, effective forecasting requires consideration of a number of interacting internal and external variables, including demographic trends, economic conditions, and broad social priorities. The challenge…

  7. QUASI-PML FOR WAVES IN CYLINDRICAL COORDINATES. (R825225)

    EPA Science Inventory

    We prove that the straightforward extension of Berenger's original perfectly matched layer (PML) is not reflectionless at a cylindrical interface in the continuum limit. A quasi-PML is developed as an absorbing boundary condition (ABC) for the finite-difference time-domain method...

  8. Landslide early warning based on failure forecast models: the example of the Mt. de La Saxe rockslide, northern Italy

    NASA Astrophysics Data System (ADS)

    Manconi, A.; Giordan, D.

    2015-07-01

    We apply failure forecast models by exploiting near-real-time monitoring data for the La Saxe rockslide, a large unstable slope threatening Aosta Valley in northern Italy. Starting from the inverse velocity theory, we analyze landslide surface displacements automatically and in near real time over different temporal windows and apply straightforward statistical methods to obtain confidence intervals on the estimated time of failure. Based on this case study, we identify operational thresholds that are established on the reliability of the forecast models. Our approach is aimed at supporting the management of early warning systems in the most critical phases of a landslide emergency.
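
    The inverse velocity theory cited above is typically applied by fitting a straight line to inverse velocity versus time and extrapolating to its zero crossing, which gives the estimated time of failure; confidence intervals can then be derived from the regression residuals. A minimal sketch with synthetic numbers, not the operational La Saxe code:

        import numpy as np

        def forecast_failure_time(t, v):
            # Fit a line to 1/v vs. time and extrapolate to 1/v = 0,
            # the classical inverse-velocity failure-time estimate.
            inv_v = 1.0 / np.asarray(v, dtype=float)
            slope, intercept = np.polyfit(t, inv_v, 1)
            return -intercept / slope

        t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # days (synthetic)
        v = np.array([2.0, 2.5, 3.3, 5.0, 10.0])  # mm/day, accelerating
        t_f = forecast_failure_time(t, v)          # predicted day of failure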

  9. Multilocus Association Mapping Using Variable-Length Markov Chains

    PubMed Central

    Browning, Sharon R.

    2006-01-01

    I propose a new method for association-based gene mapping that makes powerful use of multilocus data, is computationally efficient, and is straightforward to apply over large genomic regions. The approach is based on the fitting of variable-length Markov chain models, which automatically adapt to the degree of linkage disequilibrium (LD) between markers to create a parsimonious model for the LD structure. Edges of the fitted graph are tested for association with trait status. This approach can be thought of as haplotype testing with sophisticated windowing that accounts for extent of LD to reduce degrees of freedom and number of tests while maximizing information. I present analyses of two published data sets that show that this approach can have better power than single-marker tests or sliding-window haplotypic tests. PMID:16685642

  10. Multilocus association mapping using variable-length Markov chains.

    PubMed

    Browning, Sharon R

    2006-06-01

    I propose a new method for association-based gene mapping that makes powerful use of multilocus data, is computationally efficient, and is straightforward to apply over large genomic regions. The approach is based on the fitting of variable-length Markov chain models, which automatically adapt to the degree of linkage disequilibrium (LD) between markers to create a parsimonious model for the LD structure. Edges of the fitted graph are tested for association with trait status. This approach can be thought of as haplotype testing with sophisticated windowing that accounts for extent of LD to reduce degrees of freedom and number of tests while maximizing information. I present analyses of two published data sets that show that this approach can have better power than single-marker tests or sliding-window haplotypic tests.

  11. Virtual Ray Tracing as a Conceptual Tool for Image Formation in Mirrors and Lenses

    ERIC Educational Resources Information Center

    Heikkinen, Lasse; Savinainen, Antti; Saarelainen, Markku

    2016-01-01

    The ray tracing method is widely used in teaching geometrical optics at the upper secondary and university levels. However, using simple and straightforward examples may lead to a situation in which students use the model of ray tracing too narrowly. Previous studies show that students seem to use the ray tracing method too concretely instead of…

  12. Efficient Synthesis of γ-Lactams by a Tandem Reductive Amination/Lactamization Sequence

    PubMed Central

    Nöth, Julica; Frankowski, Kevin J.; Neuenswander, Benjamin; Aubé, Jeffrey; Reiser, Oliver

    2009-01-01

    A three-component method for synthesizing highly-substituted γ-lactams from readily available maleimides, aldehydes and amines is described. A new reductive amination/intramolecular lactamization sequence provides a straightforward route to the lactam products in a single manipulation. The general utility of this method is demonstrated by the parallel synthesis of a γ-lactam library. PMID:18338857

  13. Efficient synthesis of gamma-lactams by a tandem reductive amination/lactamization sequence.

    PubMed

    Nöth, Julica; Frankowski, Kevin J; Neuenswander, Benjamin; Aubé, Jeffrey; Reiser, Oliver

    2008-01-01

    A three-component method for the synthesis of highly substituted gamma-lactams from readily available maleimides, aldehydes, and amines is described. A new reductive amination/intramolecular lactamization sequence provides a straightforward route to the lactam products in a single manipulation. The general utility of this method is demonstrated by the parallel synthesis of a gamma-lactam library.

  14. Soft but Strong. Neg-Raising, Soft Triggers, and Exhaustification

    ERIC Educational Resources Information Center

    Romoli, Jacopo

    2012-01-01

    In this thesis, I focus on scalar implicatures, presuppositions and their connections. In chapter 2, I propose a scalar implicature-based account of neg-raising inferences, standardly analyzed as a presuppositional phenomenon (Gajewski 2005, 2007). I show that an approach based on scalar implicatures can straightforwardly account for the…

  15. Decision Support Framework Implementation Of The Web-Based Environmental Decision Analysis DASEES: Decision Analysis For A Sustainable Environment, Economy, And Society

    EPA Science Inventory

    Solutions to pervasive environmental problems often are not amenable to a straightforward application of science-based actions. These problems encompass large-scale environmental policy questions where environmental concerns, economic constraints, and societal values conflict ca...

  16. Three methods to construct predictive models using logistic regression and likelihood ratios to facilitate adjustment for pretest probability give similar results.

    PubMed

    Chan, Siew Foong; Deeks, Jonathan J; Macaskill, Petra; Irwig, Les

    2008-01-01

    To compare three predictive models based on logistic regression to estimate adjusted likelihood ratios allowing for interdependency between diagnostic variables (tests). This study was a review of the theoretical basis, assumptions, and limitations of published models, and a statistical extension of methods and application to a case study of the diagnosis of obstructive airways disease based on history and clinical examination. Albert's method includes an offset term to estimate an adjusted likelihood ratio for combinations of tests. The Spiegelhalter and Knill-Jones method uses the unadjusted likelihood ratio for each test as a predictor and computes shrinkage factors to allow for interdependence. Knottnerus' method differs from the other methods because it requires sequencing of tests, which limits its application to situations where there are few tests and substantial data. Although parameter estimates differed between the models, predicted "posttest" probabilities were generally similar. Construction of predictive models using logistic regression is preferred to the independence Bayes' approach when it is important to adjust for dependency of test errors. Methods to estimate adjusted likelihood ratios from predictive models should be considered in preference to a standard logistic regression model to facilitate ease of interpretation and application. Albert's method provides the most straightforward approach.

  17. Observations in public settings

    Treesearch

    Robert G. Lee

    1977-01-01

    Straightforward observation of children in their everyday environments is a more appropriate method of discovering the meaning of their relationships to nature than complex methodologies or reductionist commonsense thinking. Observational study requires an explicit conceptual framework and adherence to procedures that allow scientific inference. Error may come from...

  18. An initial investigation into methods of computing transonic aerodynamic sensitivity coefficients

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.

    1988-01-01

    The initial effort was concentrated on developing the quasi-analytical approach for two-dimensional transonic flow. To keep the problem computationally efficient and straightforward, only the two-dimensional flow was considered and the problem was modeled using the transonic small perturbation equation.

  19. Breeding for phytonutrient content; examples from watermelon

    USDA-ARS?s Scientific Manuscript database

    Breeding for high phytonutrient fruits and vegetables can be a fairly straightforward endeavor when the compounds of interest produce a visible effect or the methods for quantifying the compounds are simple and inexpensive. Lycopene in tomatoes and watermelon is one such compound, since the amount of r...

  20. Metal-free Synthesis of Ynones from Acyl Chlorides and Potassium Alkynyltrifluoroborate Salts

    PubMed Central

    Taylor, Cassandra L.; Bolshan, Yuri

    2015-01-01

    Ynones are a valuable functional group and building block in organic synthesis. Ynones serve as a precursor to many important organic functional groups and scaffolds. Traditional methods for the preparation of ynones are associated with drawbacks including harsh conditions, multiple purification steps, and the presence of unwanted byproducts. An alternative method for the straightforward preparation of ynones from acyl chlorides and potassium alkynyltrifluoroborate salts is described herein. The adoption of organotrifluoroborate salts as an alternative to organometallic reagents for the formation of new carbon-carbon bonds has a number of advantages. Potassium organotrifluoroborate salts are shelf stable, have good functional group tolerance, low toxicity, and a wide variety are straightforward to prepare. The title reaction proceeds rapidly at ambient temperature in the presence of a Lewis acid without the exclusion of air and moisture. Fair to excellent yields may be obtained via reaction of various aryl and alkyl acid chlorides with alkynyltrifluoroborate salts in the presence of boron trichloride. PMID:25742169

  1. Vibrationally resolved photoelectron spectroscopy of electronic excited states of DNA bases: application to the ã state of thymine cation.

    PubMed

    Hochlaf, Majdi; Pan, Yi; Lau, Kai-Chung; Majdi, Youssef; Poisson, Lionel; Garcia, Gustavo A; Nahon, Laurent; Al Mogren, Muneerah Mogren; Schwell, Martin

    2015-02-19

    To fully understand light-molecule interaction dynamics at short time scales, recent theoretical and experimental studies have demonstrated the importance of accurately characterizing not only the ground (D0) but also the electronic excited states (e.g., D1) of molecules. While ground state investigations are currently straightforward, those of electronic excited states are not. Here, we characterized the à electronic state of the ionic thymine (T(+)) DNA base using explicitly correlated coupled cluster ab initio methods and state-of-the-art synchrotron-based electron/ion coincidence techniques. The experimental spectrum is composed of rich and long vibrational progressions corresponding to the population of the low frequency modes of T(+)(Ã). This work challenges numerous previous works carried out on DNA bases using common synchrotron and VUV-based photoelectron spectroscopies. We hence provide a powerful theoretical and experimental framework to study the electronic structure of ionized DNA bases that could be generalized to other medium-sized biologically relevant systems.

  2. An ontology-based annotation of cardiac implantable electronic devices to detect therapy changes in a national registry.

    PubMed

    Rosier, Arnaud; Mabo, Philippe; Chauvin, Michel; Burgun, Anita

    2015-05-01

    The patient population benefitting from cardiac implantable electronic devices (CIEDs) is increasing. This study introduces a device annotation method that supports the consistent description of the functional attributes of cardiac devices and evaluates how this method can detect device changes from a CIED registry. We designed the Cardiac Device Ontology, an ontology of CIEDs and device functions. We annotated 146 cardiac devices with this ontology and used it to detect therapy changes with respect to atrioventricular pacing, cardiac resynchronization therapy, and defibrillation capability in a French national registry of patients with implants (STIDEFIX). We then analyzed a set of 6905 device replacements from the STIDEFIX registry. Ontology-based identification of therapy changes (upgraded, downgraded, or similar) was accurate (6905 cases) and performed better than straightforward analysis of the registry codes (F-measure 1.00 versus 0.75 to 0.97). This study demonstrates the feasibility and effectiveness of ontology-based functional annotation of devices in the cardiac domain. Such annotation allowed a better description and in-depth analysis of STIDEFIX. This method was useful for the automatic detection of therapy changes and may be reused for analyzing data from other device registries.

  3. Colorimetric detection of ammonia using smartphones based on localized surface plasmon resonance of silver nanoparticles.

    PubMed

    Amirjani, Amirmostafa; Fatmehsari, Davoud Haghshenas

    2018-01-01

    In this work, a rapid and straightforward method was developed for colorimetric determination of ammonia using smartphones. The mechanism is based on the manipulation of the surface plasmon band of silver nanoparticles (AgNPs) via the formation of the Ag(NH3)2+ complex. This complex decreases the amount of AgNPs in the solution and, consequently, the color intensity of the colloidal system decreases. Not only can the variation in color intensity of the solution be tracked by a UV-vis spectrophotometer, but a smartphone can also be employed to monitor the color intensity variation by RGB analysis. Ammonia, in the concentration range of 10-1000 mg L-1, was successfully measured spectrophotometrically (UV-vis spectrophotometer) and colorimetrically (RGB measurement) with detection limits of 180 and 200 mg L-1, respectively. Linear relationships were developed for both methods. The response time of the developed colorimetric sensor was around 20 s. Both the colorimetric and spectrophotometric methods showed reliable performance for the determination of ammonia in real samples. Copyright © 2017 Elsevier B.V. All rights reserved.
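
    The RGB read-out described above reduces to averaging a colour channel over a region of interest of the phone image and applying a linear calibration. A minimal sketch under those assumptions; the file names, crop box, and use of the green channel are illustrative choices, not details from the paper:

        import numpy as np
        from PIL import Image

        def mean_channel(path, box, channel=1):
            # Average one colour channel (default: green) over a crop box.
            roi = np.asarray(Image.open(path).crop(box), dtype=float)
            return roi[..., channel].mean()

        box = (100, 100, 200, 200)                  # assumed ROI in pixels
        standards = np.array([10, 100, 500, 1000])  # mg L-1, known levels
        intensities = np.array([mean_channel(f"std_{c}.jpg", box)
                                for c in standards])
        slope, intercept = np.polyfit(standards, intensities, 1)

        ammonia = (mean_channel("sample.jpg", box) - intercept) / slope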

  4. Determination of absorption coefficient of nanofluids with unknown refractive index from reflection and transmission spectra

    NASA Astrophysics Data System (ADS)

    Kim, Joong Bae; Lee, Seungyoon; Lee, Kyungeun; Lee, Ikjin; Lee, Bong Jae

    2018-07-01

    It has been shown that the absorption coefficient of a nanofluid can be actively tuned by changing material, size, shape, and concentration of the nanoparticle suspension. In applications of engineered nanofluids for the direct absorption of solar radiation, it is important to experimentally characterize the absorption coefficient of nanofluids in the solar spectrum. If the refractive index of the base fluid (i.e., the solution without nanoparticles) is known a priori, the absorption coefficient of nanofluids can be easily determined from the transmission spectrum. However, if the refractive index of the base fluid is not known, it is not straightforward to extract the absorption coefficient solely from the transmission spectrum. The present work aims to develop an analytical method of determining the absorption coefficient of nanofluids with unknown refractive index by measuring both reflection and transmission spectra. The proposed method will be validated with deionized water, and the effect of measurement uncertainty will be carefully examined. Finally, the general applicability of the proposed method will also be demonstrated for Therminol VP-1 as well as the Therminol VP-1 - graphite nanofluid.
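
    For orientation, neglecting multiple internal reflections, a single-pass model relates the measured reflectance R and transmittance T of a layer of thickness L to its absorption coefficient. This simplified textbook relation illustrates the inversion problem; it is not necessarily the exact model developed in the paper:

        T = (1 - R)^2 \, e^{-\alpha L}
        \qquad\Longrightarrow\qquad
        \alpha = \frac{1}{L}\,\ln\frac{(1 - R)^2}{T}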

  5. Quantifying the Sources of Intermodel Spread in Equilibrium Climate Sensitivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caldwell, Peter M.; Zelinka, Mark D.; Taylor, Karl E.

    This paper clarifies the causes of intermodel differences in the global-average temperature response to doubled CO2, commonly known as equilibrium climate sensitivity (ECS). The authors begin by noting several issues with the standard approach for decomposing ECS into a sum of forcing and feedback terms. This leads to a derivation of an alternative method based on linearizing the effect of the net feedback. Consistent with previous studies, the new method identifies shortwave cloud feedback as the dominant source of intermodel spread in ECS. This new approach also reveals that covariances between cloud feedback and forcing, between lapse rate and longwave cloud feedbacks, and between albedo and shortwave cloud feedbacks play an important and previously underappreciated role in determining model differences in ECS. Finally, defining feedbacks based on fixed relative rather than specific humidity (as suggested by Held and Shell) reduces the covariances between processes and leads to more straightforward interpretations of results.

  6. Quantifying the Sources of Intermodel Spread in Equilibrium Climate Sensitivity

    DOE PAGES

    Caldwell, Peter M.; Zelinka, Mark D.; Taylor, Karl E.; ...

    2016-01-07

    This paper clarifies the causes of intermodel differences in the global-average temperature response to doubled CO2, commonly known as equilibrium climate sensitivity (ECS). The authors begin by noting several issues with the standard approach for decomposing ECS into a sum of forcing and feedback terms. This leads to a derivation of an alternative method based on linearizing the effect of the net feedback. Consistent with previous studies, the new method identifies shortwave cloud feedback as the dominant source of intermodel spread in ECS. This new approach also reveals that covariances between cloud feedback and forcing, between lapse rate and longwave cloud feedbacks, and between albedo and shortwave cloud feedbacks play an important and previously underappreciated role in determining model differences in ECS. Finally, defining feedbacks based on fixed relative rather than specific humidity (as suggested by Held and Shell) reduces the covariances between processes and leads to more straightforward interpretations of results.
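
    In the standard forcing-feedback framework this work starts from, ECS = -F/λ, with forcing F and net feedback λ (negative for a stable climate). A first-order expansion about multimodel means, consistent with the abstract's "linearizing the effect of the net feedback" (a sketch of the idea, not the paper's exact decomposition), is:

        \mathrm{ECS} = -\frac{F}{\lambda},
        \qquad
        \delta(\mathrm{ECS}) \;\approx\;
        -\frac{\delta F}{\bar{\lambda}}
        \;+\; \frac{\bar{F}}{\bar{\lambda}^{2}}\,\delta\lambda .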

  7. Dense electro-optic frequency comb generated by two-stage modulation for dual-comb spectroscopy.

    PubMed

    Wang, Shuai; Fan, Xinyu; Xu, Bingxin; He, Zuyuan

    2017-10-01

    An electro-optic frequency comb enables frequency-agile comb-based spectroscopy without using sophisticated phase-locking electronics. Nevertheless, dense electro-optic frequency combs over broad spans have yet to be developed. In this Letter, we propose a straightforward and efficient method for electro-optic frequency comb generation with a small line spacing and a large span. This method is based on two-stage modulation: generating an 18 GHz line-spacing comb at the first stage and a 250 MHz line-spacing comb at the second stage. After generating an electro-optic frequency comb covering 1500 lines, we set up an easily established mutually coherent hybrid dual-comb interferometer, which combines the generated electro-optic frequency comb and a free-running mode-locked laser. As a proof of concept, this hybrid dual-comb interferometer is used to measure the absorption and dispersion profiles of the molecular transition of H¹³CN with a spectral resolution of 250 MHz.

  8. Straightforward Method for Coverage of Major Vessels After Modified Radical Neck Dissection.

    PubMed

    González-García, Raúl; Moreno-García, Carlos; Moreno-Sánchez, Manuel; Román-Romero, Leticia

    2017-06-01

    A new method for covering the internal jugular vein and carotid artery after exposure of the cervical vascular axis subsequent to neck dissection is presented. To cover the most caudal part of the vascular axis, a platysma coli muscle flap is harvested from the most medial and inferior part of the neck in a caudally based fashion and is slightly rotated posteriorly up to 45°. In addition, a superiorly based sternocleidomastoid muscle flap involving the posterior half of the muscle after detachment of the clavicle head is harvested and rotated 45° anteriorly to cover the upper two thirds of the vascular axis. This technique seems to be a good alternative to the pectoralis major myocutaneous flap for covering cervical major vessels if no classical radical neck dissection is performed, especially in those malnourished oncologic patients who will undergo adjuvant radiotherapy after surgical treatment. Copyright © 2016 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  9. An evidence-based concept of implant dentistry. Utilization of short and narrow platform implants.

    PubMed

    Ruiz, Jose-Luis

    2012-09-01

    As a profession, we must remember that tooth replacement is not a luxury; it is often a necessity for health reasons. Although bone augmentation, CBCT, and expensive surgical guides are often indicated for complex cases, they are being overused. Simple or straightforward implant cases, where there is sufficient natural bone for a narrow or shorter implant, can be predictably performed by well-trained GPs and other trained specialists. Complex cases requiring bone augmentation and other complexities as described herein should be referred to a surgical specialist. Implant courses and curricula have to be based on the level of complexity of implant surgery that each clinician wishes to provide to his or her patients. Using a "logical approach" to implant dentistry keeps cases simple or straightforward, and more accessible to patients through the correct use of narrow and shorter implants.

  10. Uncertainty management by relaxation of conflicting constraints in production process scheduling

    NASA Technical Reports Server (NTRS)

    Dorn, Juergen; Slany, Wolfgang; Stary, Christian

    1992-01-01

    Mathematical-analytical methods as used in Operations Research approaches are often insufficient for scheduling problems. This is due to three reasons: the combinatorial complexity of the search space, conflicting objectives for production optimization, and the uncertainty in the production process. Knowledge-based techniques, especially approximate reasoning and constraint relaxation, are promising ways to overcome these problems. A case study from an industrial CIM environment, namely high-grade steel production, is presented to demonstrate how knowledge-based scheduling with the desired capabilities could work. By using fuzzy set theory, the applied knowledge representation technique covers the uncertainty inherent in the problem domain. Based on this knowledge representation, a classification of jobs according to their importance is defined which is then used for the straightforward generation of a schedule. A control strategy which comprises organizational, spatial, temporal, and chemical constraints is introduced. The strategy supports the dynamic relaxation of conflicting constraints in order to improve tentative schedules.

  11. LandScape: a simple method to aggregate p-values and other stochastic variables without a priori grouping.

    PubMed

    Wiuf, Carsten; Schaumburg-Müller Pallesen, Jonatan; Foldager, Leslie; Grove, Jakob

    2016-08-01

    In many areas of science it is customary to perform many, potentially millions, of tests simultaneously. To gain statistical power it is common to group tests based on a priori criteria such as predefined regions or by sliding windows. However, it is not straightforward to choose grouping criteria, and the results might depend on the chosen criteria. Methods that summarize, or aggregate, test statistics or p-values without relying on a priori criteria are therefore desirable. We present a simple method to aggregate a sequence of stochastic variables, such as test statistics or p-values, into fewer variables without assuming a priori defined groups. We provide different ways to evaluate the significance of the aggregated variables based on theoretical considerations and resampling techniques, and show that under certain assumptions the FWER is controlled in the strong sense. Validity of the method was demonstrated using simulations and real data analyses. Our method may be a useful supplement to standard procedures relying on evaluation of test statistics individually. Moreover, by being agnostic and not relying on predefined selected regions, it might be a practical alternative to conventionally used methods of aggregation of p-values over regions. The method is implemented in Python and freely available online (through GitHub, see the Supplementary information).

  12. Neutron-Encoded Protein Quantification by Peptide Carbamylation

    NASA Astrophysics Data System (ADS)

    Ulbrich, Arne; Merrill, Anna E.; Hebert, Alexander S.; Westphall, Michael S.; Keller, Mark P.; Attie, Alan D.; Coon, Joshua J.

    2014-01-01

    We describe a chemical tag for duplex proteome quantification using neutron encoding (NeuCode). The method utilizes the straightforward, efficient, and inexpensive carbamylation reaction. We demonstrate the utility of NeuCode carbamylation by accurately measuring quantitative ratios from tagged yeast lysates mixed in known ratios and by applying this method to quantify differential protein expression in mice fed either a control or a high-fat diet.

  13. Exact Solutions for the Integrable Sixth-Order Drinfeld-Sokolov-Satsuma-Hirota System by the Analytical Methods.

    PubMed

    Manafian Heris, Jalil; Lakestani, Mehrdad

    2014-01-01

    We establish exact solutions, including periodic wave and solitary wave solutions, for the integrable sixth-order Drinfeld-Sokolov-Satsuma-Hirota system. We solve this system using the generalized (G'/G)-expansion and generalized tanh-coth methods, which are developed for searching exact travelling wave solutions of nonlinear partial differential equations. It is shown that these methods, with the help of symbolic computation, provide a straightforward and powerful mathematical tool for solving nonlinear partial differential equations.
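
    For reference, the (G'/G)-expansion method seeks travelling-wave solutions u(ξ) as a finite power series in G'/G, where G satisfies a linear second-order ODE; in its standard form:

        u(\xi) = \sum_{i=0}^{m} a_i \left(\frac{G'}{G}\right)^{i},
        \qquad
        G'' + \lambda G' + \mu G = 0,

    with the order m fixed by balancing the highest-order derivative against the strongest nonlinearity.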

  14. Forces in General Relativity

    ERIC Educational Resources Information Center

    Ridgely, Charles T.

    2010-01-01

    Many textbooks dealing with general relativity do not demonstrate the derivation of forces in enough detail. The analyses presented herein demonstrate straightforward methods for computing forces by way of general relativity. Covariant divergence of the stress-energy-momentum tensor is used to derive a general expression of the force experienced…

  15. Teaching Stress Physiology Using Zebrafish ("Danio Rerio")

    ERIC Educational Resources Information Center

    Cooper, Michael; Dhawale, Shree; Mustafa, Ahmed

    2009-01-01

    A straightforward and inexpensive laboratory experiment is presented that investigates the physiological stress response of zebrafish after a 5 degree C increase in water temperature. This experiment is designed for an undergraduate physiology lab and allows students to learn the scientific method and relevant laboratory techniques without causing…

  16. "Quantum Interference with Slits" Revisited

    ERIC Educational Resources Information Center

    Rothman, Tony; Boughn, Stephen

    2011-01-01

    Marcella has presented a straightforward technique employing the Dirac formalism to calculate single- and double-slit interference patterns. He claims that no reference is made to classical optics or scattering theory and that his method therefore provides a purely quantum mechanical description of these experiments. He also presents his…

  17. Efficient energy stable schemes for isotropic and strongly anisotropic Cahn-Hilliard systems with the Willmore regularization

    NASA Astrophysics Data System (ADS)

    Chen, Ying; Lowengrub, John; Shen, Jie; Wang, Cheng; Wise, Steven

    2018-07-01

    We develop efficient energy stable numerical methods for solving isotropic and strongly anisotropic Cahn-Hilliard systems with the Willmore regularization. The scheme, which involves adaptive mesh refinement and a nonlinear multigrid finite difference method, is constructed based on a convex splitting approach. We prove that, for the isotropic Cahn-Hilliard system with the Willmore regularization, the total free energy of the system is non-increasing for any time step and mesh sizes. A straightforward modification of the scheme is then used to solve the regularized strongly anisotropic Cahn-Hilliard system, and it is numerically verified that the discrete energy of the anisotropic system is also non-increasing, and can be efficiently solved by using the modified stable method. We present numerical results in both two and three dimensions that are in good agreement with those in earlier work on the topics. Numerical simulations are presented to demonstrate the accuracy and efficiency of the proposed methods.
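
    The convex-splitting construction referred to above writes the free energy as a difference of two convex functionals and treats the convex part implicitly and the concave part explicitly. Schematically, for a Cahn-Hilliard-type gradient flow (a generic sketch, not the paper's exact discrete operators):

        E(\phi) = E_c(\phi) - E_e(\phi),
        \qquad
        \frac{\phi^{n+1} - \phi^{n}}{\tau}
        = \Delta_h\left(
            \frac{\delta E_c}{\delta \phi}(\phi^{n+1})
          - \frac{\delta E_e}{\delta \phi}(\phi^{n})
          \right),

    a choice that yields energy dissipation for any time step, as stated in the abstract.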

  18. Tenax extraction as a simple approach to improve environmental risk assessments.

    PubMed

    Harwood, Amanda D; Nutile, Samuel A; Landrum, Peter F; Lydy, Michael J

    2015-07-01

    It is well documented that using exhaustive chemical extractions is not an effective means of assessing exposure of hydrophobic organic compounds in sediments and that bioavailability-based techniques are an improvement over traditional methods. One technique that has shown special promise as a method for assessing the bioavailability of hydrophobic organic compounds in sediment is the use of Tenax-extractable concentrations. A 6-h or 24-h single-point Tenax-extractable concentration correlates to both bioaccumulation and toxicity. This method has demonstrated effectiveness for several hydrophobic organic compounds in various organisms under both field and laboratory conditions. In addition, a Tenax bioaccumulation model was developed for multiple compounds relating 24-h Tenax-extractable concentrations to oligochaete tissue concentrations exposed in both the laboratory and field. This model has demonstrated predictive capacity for additional compounds and species. Use of Tenax-extractable concentrations to estimate exposure is rapid, simple, straightforward, and relatively inexpensive, as well as accurate. Therefore, this method would be an invaluable tool if implemented in risk assessments. © 2015 SETAC.

  19. Space-time least-squares finite element method for convection-reaction system with transformed variables

    PubMed Central

    Nam, Jaewook

    2011-01-01

    We present a method to solve a convection-reaction system based on a least-squares finite element method (LSFEM). For steady-state computations, issues related to recirculation flow are stated and demonstrated with a simple example. The method can compute concentration profiles in open flow even when the generation term is small. This is the case for estimating hemolysis in blood. Time-dependent flows are computed with the space-time LSFEM discretization. We observe that the computed hemoglobin concentration can become negative in certain regions of the flow; it is a physically unacceptable result. To prevent this, we propose a quadratic transformation of variables. The transformed governing equation can be solved in a straightforward way by LSFEM with no sign of unphysical behavior. The effect of localized high shear on blood damage is shown in a circular Couette-flow-with-blade configuration, and a physiological condition is tested in an arterial graft flow. PMID:21709752
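
    The quadratic transformation mentioned in the abstract enforces non-negativity by construction: writing the concentration as a square, any real solution yields c ≥ 0. By the chain rule, a convection-reaction equation for c becomes an equation for u (schematic form, with a generic source term S):

        c = u^{2} \ge 0,
        \qquad
        \partial_t c + \mathbf{v}\cdot\nabla c = S
        \;\Longrightarrow\;
        2u\,(\partial_t u + \mathbf{v}\cdot\nabla u) = S .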

  20. Applications of computer algebra to distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Storch, Joel A.

    1993-01-01

    In the analysis of vibrations of continuous elastic systems, one often encounters complicated transcendental equations with roots directly related to the system's natural frequencies. Typically, these equations contain system parameters whose values must be specified before a numerical solution can be obtained. The present paper presents a method whereby the fundamental frequency can be obtained in analytical form to any desired degree of accuracy. The method is based upon truncation of rapidly converging series involving inverse powers of the system natural frequencies. A straightforward method to developing these series and summing them in closed form is presented. It is demonstrated how Computer Algebra can be exploited to perform the intricate analytical procedures which otherwise would render the technique difficult to apply in practice. We illustrate the method by developing two analytical approximations to the fundamental frequency of a vibrating cantilever carrying a rigid tip body. The results are compared to the numerical solution of the exact (transcendental) frequency equation over a range of system parameters.
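
    The rapidly converging series in question are sums of inverse powers of the natural frequencies; because the fundamental dominates such sums, ratios of successive sums isolate it. One classical way to express this device (a generic illustration, not the paper's specific closed forms):

        S_k = \sum_{n} \omega_n^{-2k},
        \qquad
        \omega_1 \approx \left(\frac{S_k}{S_{k+1}}\right)^{1/2}
        \longrightarrow \omega_1 \quad (k \to \infty).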

  1. Biocompatible artificial DNA linker that is read through by DNA polymerases and is functional in Escherichia coli

    PubMed Central

    El-Sagheer, Afaf H.; Sanzone, A. Pia; Gao, Rachel; Tavassoli, Ali; Brown, Tom

    2011-01-01

    A triazole mimic of a DNA phosphodiester linkage has been produced by templated chemical ligation of oligonucleotides functionalized with 5′-azide and 3′-alkyne. The individual azide and alkyne oligonucleotides were synthesized by standard phosphoramidite methods and assembled using a straightforward ligation procedure. This highly efficient chemical equivalent of enzymatic DNA ligation has been used to assemble a 300-mer from three 100-mer oligonucleotides, demonstrating the total chemical synthesis of very long oligonucleotides. The base sequences of the DNA strands containing this artificial linkage were copied during PCR with high fidelity and a gene containing the triazole linker was functional in Escherichia coli. PMID:21709264

  2. Augmented Lagrange Programming Neural Network for Localization Using Time-Difference-of-Arrival Measurements.

    PubMed

    Han, Zifa; Leung, Chi Sing; So, Hing Cheung; Constantinides, Anthony George

    2017-08-15

    A commonly used measurement model for locating a mobile source is time-difference-of-arrival (TDOA). As each TDOA measurement defines a hyperbola, it is not straightforward to compute the mobile source position due to the nonlinear relationship in the measurements. This brief exploits the Lagrange programming neural network (LPNN), which provides a general framework to solve nonlinear constrained optimization problems, for the TDOA-based localization. The local stability of the proposed LPNN solution is also analyzed. Simulation results are included to evaluate the localization accuracy of the LPNN scheme by comparing with the state-of-the-art methods and the optimality benchmark of Cramér-Rao lower bound.
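
    The nonlinearity mentioned above comes from the TDOA measurement model itself: each delay difference constrains the source position x to one sheet of a hyperboloid relative to a sensor pair. In standard notation, with sensor positions s_i, reference sensor s_1, and propagation speed c:

        c\,\Delta t_{i1}
        = \lVert \mathbf{x} - \mathbf{s}_i \rVert
        - \lVert \mathbf{x} - \mathbf{s}_1 \rVert,
        \qquad i = 2, \dots, M .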

  3. An adaptive approach to the physical annealing strategy for simulated annealing

    NASA Astrophysics Data System (ADS)

    Hasegawa, M.

    2013-02-01

    A new and reasonable method for adaptive implementation of simulated annealing (SA) is studied on two types of random traveling salesman problems. The idea is based on the previous finding on the search characteristics of the threshold algorithms, that is, the primary role of the relaxation dynamics in their finite-time optimization process. It is shown that the effective temperature for optimization can be predicted from the system's behavior analogous to the stabilization phenomenon occurring in the heating process starting from a quenched solution. The subsequent slow cooling near the predicted point draws out the inherent optimizing ability of finite-time SA in more straightforward manner than the conventional adaptive approach.

  4. Microfluidic-Based Bacteria Isolation from Whole Blood for Diagnostics of Blood Stream Infection.

    PubMed

    Zelenin, Sergey; Ramachandraiah, Harisha; Faridi, Asim; Russom, Aman

    2017-01-01

    Bacterial blood stream infection (BSI) potentially leads to life-threatening clinical conditions and medical emergencies such as severe sepsis, septic shock, and multi-organ failure syndrome. Blood culturing is currently the gold standard for the identification of microorganisms and, although it has been automated over the past decade, the process still requires 24-72 h to complete. This long turnaround time, especially for the identification of antimicrobial resistance, is driving the development of rapid molecular diagnostic methods. Rapid detection of microbial pathogens in blood related to bloodstream infections will allow the clinician to decide on or adjust the antimicrobial therapy, potentially reducing the morbidity, mortality, and economic burden associated with BSI. For molecular-based methods, there is a lot to gain from an improved and straightforward method for isolation of bacteria from whole blood for downstream processing. We describe a microfluidic-based sample-preparation approach that rapidly and selectively lyses all blood cells while it extracts intact bacteria for downstream analysis. Whole blood is exposed to a mild detergent, which lyses most blood cells, and then to osmotic shock using deionized water, which eliminates the remaining white blood cells. The recovered bacteria are 100% viable, which opens up possibilities for performing drug susceptibility tests and for nucleic-acid-based molecular identification.

  5. Developing collaborative classifiers using an expert-based model

    USGS Publications Warehouse

    Mountrakis, G.; Watts, R.; Luo, L.; Wang, Jingyuan

    2009-01-01

    This paper presents a hierarchical, multi-stage adaptive strategy for image classification. We iteratively apply various classification methods (e.g., decision trees, neural networks), identify regions of parametric and geographic space where accuracy is low, and in these regions, test and apply alternate methods, repeating the process until the entire image is classified. Currently, classifiers are evaluated through human input using an expert-based system; therefore, this paper acts as the proof of concept for collaborative classifiers. Because we decompose the problem into smaller, more manageable sub-tasks, our classification exhibits increased flexibility compared to existing methods since classification methods are tailored to the idiosyncrasies of specific regions. A major benefit of our approach is its scalability and collaborative support since selected low-accuracy classifiers can be easily replaced with others without affecting classification accuracy in high accuracy areas. At each stage, we develop spatially explicit accuracy metrics that provide straightforward assessment of results by non-experts and point to areas that need algorithmic improvement or ancillary data. Our approach is demonstrated in the task of detecting impervious surface areas, an important indicator for human-induced alterations to the environment, using a 2001 Landsat scene from Las Vegas, Nevada. © 2009 American Society for Photogrammetry and Remote Sensing.

  6. A Straightforward Method for Glucosinolate Extraction and Analysis with High-pressure Liquid Chromatography (HPLC).

    PubMed

    Grosser, Katharina; van Dam, Nicole M

    2017-03-15

    Glucosinolates are a well-studied and highly diverse class of natural plant compounds. They play important roles in plant resistance, rapeseed oil quality, food flavoring, and human health. The biological activity of glucosinolates is released upon tissue damage, when they are mixed with the enzyme myrosinase. This results in the formation of pungent and toxic breakdown products, such as isothiocyanates and nitriles. Currently, more than 130 structurally different glucosinolates have been identified. The chemical structure of the glucosinolate is an important determinant of the product that is formed, which in turn determines its biological activity. The latter may range from detrimental (e.g., progoitrin) to beneficial (e.g., glucoraphanin). Each glucosinolate-containing plant species has its own specific glucosinolate profile. For this reason, it is important to correctly identify and reliably quantify the different glucosinolates present in brassicaceous leaf, seed, and root crops or, for ecological studies, in their wild relatives. Here, we present a well-validated, targeted, and robust method to analyze glucosinolate profiles in a wide range of plant species and plant organs. Intact glucosinolates are extracted from ground plant materials with a methanol-water mixture at high temperatures to disable myrosinase activity. Thereafter, the resulting extract is brought onto an ion-exchange column for purification. After sulfatase treatment, the desulfoglucosinolates are eluted with water and the eluate is freeze-dried. The residue is taken up in an exact volume of water, which is analyzed by high-pressure liquid chromatography (HPLC) with a photodiode array (PDA) or ultraviolet (UV) detector. Detection and quantification are achieved by conducting comparisons of the retention times and UV spectra of commercial reference standards. The concentrations are calculated based on a sinigrin reference curve and well-established response factors. The advantages and disadvantages of this straightforward method, when compared to faster and more technologically advanced methods, are discussed here.

  7. A Straightforward Method for Glucosinolate Extraction and Analysis with High-pressure Liquid Chromatography (HPLC)

    PubMed Central

    Grosser, Katharina; van Dam, Nicole M.

    2017-01-01

    Glucosinolates are a well-studied and highly diverse class of natural plant compounds. They play important roles in plant resistance, rapeseed oil quality, food flavoring, and human health. The biological activity of glucosinolates is released upon tissue damage, when they are mixed with the enzyme myrosinase. This results in the formation of pungent and toxic breakdown products, such as isothiocyanates and nitriles. Currently, more than 130 structurally different glucosinolates have been identified. The chemical structure of the glucosinolate is an important determinant of the product that is formed, which in turn determines its biological activity. The latter may range from detrimental (e.g., progoitrin) to beneficial (e.g., glucoraphanin). Each glucosinolate-containing plant species has its own specific glucosinolate profile. For this reason, it is important to correctly identify and reliably quantify the different glucosinolates present in brassicaceous leaf, seed, and root crops or, for ecological studies, in their wild relatives. Here, we present a well-validated, targeted, and robust method to analyze glucosinolate profiles in a wide range of plant species and plant organs. Intact glucosinolates are extracted from ground plant materials with a methanol-water mixture at high temperatures to disable myrosinase activity. Thereafter, the resulting extract is brought onto an ion-exchange column for purification. After sulfatase treatment, the desulfoglucosinolates are eluted with water and the eluate is freeze-dried. The residue is taken up in an exact volume of water, which is analyzed by high-pressure liquid chromatography (HPLC) with a photodiode array (PDA) or ultraviolet (UV) detector. Detection and quantification are achieved by conducting comparisons of the retention times and UV spectra of commercial reference standards. The concentrations are calculated based on a sinigrin reference curve and well-established response factors. The advantages and disadvantages of this straightforward method, when compared to faster and more technologically advanced methods, are discussed here. PMID:28362416
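
    The quantification step in both records above is plain arithmetic: a sinigrin calibration slope converts peak area to amount, and a compound-specific response factor corrects for detector response. A minimal sketch of that calculation; the function name and all numbers are placeholders, not values from the protocol:

        def glucosinolate_conc(peak_area, sinigrin_slope, response_factor,
                               extract_volume_ml, sample_mass_g):
            # Peak area -> amount via the sinigrin curve, corrected by the
            # compound-specific response factor, then scaled per gram of sample.
            amount_per_ml = (peak_area / sinigrin_slope) * response_factor
            return amount_per_ml * extract_volume_ml / sample_mass_g

        # Placeholder numbers for illustration only
        conc = glucosinolate_conc(peak_area=152000.0, sinigrin_slope=50000.0,
                                  response_factor=1.11,
                                  extract_volume_ml=1.0, sample_mass_g=0.1)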

  8. Isolation of Soybean Agglutinin (SBA) from Soy Meal.

    ERIC Educational Resources Information Center

    Sattsangi, Prem D.; And Others

    1982-01-01

    Describes a straightforward and relatively inexpensive method for routine isolation of purified soybean agglutinin, suitable for use as a starting material in most studies, especially for fluorescent-labeling experiments. The process is used as a project to provide advanced laboratory training at a two-year college. (Author/JN)

  9. Invasive Species Science Update (No. 3)

    Treesearch

    Mee-Sook Kim; Jack Butler

    2009-01-01

    Although scientific journals are the traditional method for disseminating research results, information must be distributed more rapidly and widely using approaches that connect researchers directly with managers. The exchange of information between science producer and science user would appear to be straightforward because, for the most part, the two groups speak the...

  10. A Review of Scoring Algorithms for Ability and Aptitude Tests.

    ERIC Educational Resources Information Center

    Chevalier, Shirley A.

    In conventional practice, most educators and educational researchers score cognitive tests using a dichotomous right-wrong scoring system. Although simple and straightforward, this method does not take into consideration other factors, such as partial knowledge or guessing tendencies and abilities. This paper discusses alternative scoring models:…

  11. The Michigan Public High School Context and Performance Report Card

    ERIC Educational Resources Information Center

    Van Beek, Michael; Bowen, Daniel; Mills, Jonathan

    2012-01-01

    Assessing a high school's effectiveness is not straightforward. Comparing a school's standardized test scores to those of other schools is one approach to measuring effectiveness, but a major objection to this method is that students' test scores tend to be related to students' "socioeconomic" status--family household income, for…

  12. Leasing versus Borrowing: Evaluating Alternative Forms of Consumer Credit.

    ERIC Educational Resources Information Center

    Nunnally, Bennie H., Jr.; Plath, D. Anthony

    1989-01-01

    Presents a straightforward method for evaluating lease versus borrow (buy) decisions illustrated with actual financing cost data reported to new car purchasers. States that individuals should consider after-tax cash flows associated with alternative arrangements, time in which cash flow occurs, and opportunity cost of capital to identify the least…

  13. A near-infrared spectroscopy routine for unambiguous identification of cryptic ant species

    USDA-ARS?s Scientific Manuscript database

    The identification of species – of importance for most biological disciplines – is not always straightforward as cryptic species present a hurdle for traditional species discrimination. Fibre-optic near-infrared spectroscopy (NIRS) is a rapid and cheap method for a wide range of different applicatio...

  14. A Thermal Management Systems Model for the NASA GTX RBCC Concept

    NASA Technical Reports Server (NTRS)

    Traci, Richard M.; Farr, John L., Jr.; Laganelli, Tony; Walker, James (Technical Monitor)

    2002-01-01

    The Vehicle Integrated Thermal Management Analysis Code (VITMAC) was further developed to aid the analysis, design, and optimization of propellant and thermal management concepts for advanced propulsion systems. The computational tool is based on engineering level principles and models. A graphical user interface (GUI) provides a simple and straightforward method to assess and evaluate multiple concepts before undertaking more rigorous analysis of candidate systems. The tool incorporates the Chemical Equilibrium and Applications (CEA) program and the RJPA code to permit heat transfer analysis of both rocket and air breathing propulsion systems. Key parts of the code have been validated with experimental data. The tool was specifically tailored to analyze rocket-based combined-cycle (RBCC) propulsion systems being considered for space transportation applications. This report describes the computational tool and its development and verification for NASA GTX RBCC propulsion system applications.

  15. On the use and the performance of software reliability growth models

    NASA Technical Reports Server (NTRS)

    Keiller, Peter A.; Miller, Douglas R.

    1991-01-01

    We address the problem of predicting future failures for a piece of software. The number of failures occurring during a finite future time interval is predicted from the number of failures observed during an initial period of usage by using software reliability growth models. Two different methods for using the models are considered: straightforward use of individual models, and dynamic selection among models based on goodness-of-fit and quality-of-prediction criteria. Performance is judged by the error of the predicted number of failures over future finite time intervals, relative to the number of failures eventually observed during the intervals. Six of the former models and eight of the latter are evaluated, based on their performance on twenty data sets. Many open questions remain regarding the use and the performance of software reliability growth models.

  16. FRET-based binding assay between a fluorescent cAMP analogue and a cyclic nucleotide-binding domain tagged with a CFP.

    PubMed

    Romero, Francisco; Santana-Calvo, Carmen; Sánchez-Guevara, Yoloxochitl; Nishigaki, Takuya

    2017-09-01

    The cyclic nucleotide-binding domain (CNBD) functions as a regulatory domain of many proteins involved in cyclic nucleotide signalling. We developed a straightforward and reliable binding assay based on intermolecular fluorescence resonance energy transfer (FRET) between an adenosine-3', 5'-cyclic monophosphate analogue labelled with fluorescein and a recombinant CNBD of human EPAC1 tagged with a cyan fluorescence protein (CFP). The high FRET efficiency of this method (~ 80%) allowed us to perform several types of binding experiments with nanomolar range of sample using conventional equipment. In addition, the CFP tag on the CNBD enabled us to perform a specific binding experiment using an unpurified protein. Considering these advantages, this technique is useful to study poorly characterized CNBDs. © 2017 Federation of European Biochemical Societies.
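
    The ~80% efficiency quoted above can be read against the standard FRET relations (general formulas, not specific to this assay): efficiency falls off with the sixth power of the donor-acceptor distance r relative to the Förster radius R_0, and is commonly estimated from donor quenching:

        E = \frac{R_0^{6}}{R_0^{6} + r^{6}},
        \qquad
        E = 1 - \frac{F_{DA}}{F_{D}},

    where F_{DA} and F_{D} denote donor emission with and without the acceptor.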

  17. A Continuous Method for Gene Flow

    PubMed Central

    Palczewski, Michal; Beerli, Peter

    2013-01-01

    Most modern population genetics inference methods are based on the coalescence framework. Methods that allow estimating parameters of structured populations commonly insert migration events into the genealogies. For these methods the calculation of the coalescence probability density of a genealogy requires a product over all time periods between events. Data sets that contain populations with high rates of gene flow among them require an enormous number of calculations. A new method, transition probability-structured coalescence (TPSC), replaces the discrete migration events with probability statements. Because the speed of calculation is independent of the amount of gene flow, this method allows calculating the coalescence densities efficiently. The current implementation of TPSC uses an approximation simplifying the interaction among lineages. Simulations and coverage comparisons of TPSC vs. MIGRATE show that TPSC allows estimation of high migration rates more precisely, but because of the approximation the estimation of low migration rates is biased. The implementation of TPSC into programs that calculate quantities on phylogenetic tree structures is straightforward, so the TPSC approach will facilitate more general inferences in many computer programs. PMID:23666937

  18. New method for the rapid extraction of natural products: efficient isolation of shikimic acid from star anise.

    PubMed

    Just, Jeremy; Deans, Bianca J; Olivier, Wesley J; Paull, Brett; Bissember, Alex C; Smith, Jason A

    2015-05-15

    A new, practical, rapid, and high-yielding process for the pressurized hot water extraction (PHWE) of multigram quantities of shikimic acid from star anise (Illicium verum) using an unmodified household espresso machine has been developed. This operationally simple and inexpensive method enables the efficient and straightforward isolation of shikimic acid and the facile preparation of a range of its synthetic derivatives.

  19. Graph-based layout analysis for PDF documents

    NASA Astrophysics Data System (ADS)

    Xu, Canhui; Tang, Zhi; Tao, Xin; Li, Yun; Shi, Cao

    2013-03-01

    To increase the flexibility and enrich the reading experience of e-books on small portable screens, a graph-based method is proposed to perform layout analysis on Portable Document Format (PDF) documents. Digitally born documents have inherent advantages, such as representing text and fractional images in explicit form, which can be straightforwardly exploited. To integrate traditional image-based document analysis with the inherent metadata provided by a PDF parser, the page primitives, including text, image, and path elements, are processed to produce text and non-text layers for separate analysis. The graph-based method operates at the superpixel representation level, and page text elements, corresponding to vertices, are used to construct an undirected graph. Euclidean distance between adjacent vertices is applied in a top-down manner to cut the spanning tree formed by Kruskal's algorithm, and edge orientation is then used in a bottom-up manner to extract text lines from each subtree. Non-textual objects, on the other hand, are segmented by connected component analysis. For each segmented text and non-text composite, a 13-dimensional feature vector is extracted for labelling purposes. Experimental results on selected pages from PDF books are presented.
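
    A minimal sketch of the top-down step described above: build a minimum spanning tree over text-element centroids with Kruskal's algorithm and cut edges whose Euclidean length exceeds a threshold, leaving one connected component per text block. The centroid coordinates and the threshold are illustrative assumptions; the paper's actual features and cut criterion may differ.

        import math
        from itertools import combinations

        def text_blocks(centroids, max_edge=30.0):
            """Group text-element centroids into blocks by cutting long MST edges."""
            parent = list(range(len(centroids)))

            def find(i):
                while parent[i] != i:
                    parent[i] = parent[parent[i]]
                    i = parent[i]
                return i

            # Kruskal: scan candidate edges in order of increasing length,
            # but skip (cut) any edge longer than the threshold.
            edges = sorted(
                (math.dist(centroids[i], centroids[j]), i, j)
                for i, j in combinations(range(len(centroids)), 2)
            )
            for length, i, j in edges:
                if length > max_edge:
                    break  # all remaining edges are longer; they are cut
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj  # union: edge joins two components

            blocks = {}
            for i in range(len(centroids)):
                blocks.setdefault(find(i), []).append(i)
            return list(blocks.values())

        # Two well-separated clusters of text elements -> two blocks.
        print(text_blocks([(0, 0), (10, 0), (0, 12), (200, 0), (210, 5)]))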

  20. Ray-based approach to integrated 3D visual communication

    NASA Astrophysics Data System (ADS)

    Naemura, Takeshi; Harashima, Hiroshi

    2001-02-01

    For a high sense of reality in next-generation communications, it is very important to realize three-dimensional (3D) spatial media instead of the existing 2D image media. In order to deal comprehensively with a variety of 3D visual data formats, the authors first introduce the concept of "Integrated 3D Visual Communication," which reflects the necessity of developing a neutral representation method independent of input/output systems. The following discussions then concentrate on the ray-based approach to this concept, in which any visual sensation is considered to be derived from a set of light rays. This approach is a simple and straightforward solution to the problem of how to represent 3D space, an issue shared by various fields including 3D image communications, computer graphics, and virtual reality. This paper mainly presents several developments in this approach, including some efficient methods of representing ray data, a real-time video-based rendering system, an interactive rendering system based on integral photography, a concept of a virtual object surface for the compression of the tremendous amount of data, and a light-ray capturing system using a telecentric lens. Experimental results demonstrate the effectiveness of the proposed techniques.

  1. Evaluation of Health Profession Student Attitudes toward an Online Nutrition Education Problem-Based Learning Module

    ERIC Educational Resources Information Center

    Gould, Kathleen; Sadera, William

    2015-01-01

    The intent of problem-based learning (PBL) is to increase student motivation to learn, to promote critical thinking and to teach students to learn with complexity. PBL encourages students to understand that there are no straightforward answers and that problem solutions depend on context. This paper discusses the experience of undergraduate health…

  2. Ubiquitous Discussion Forum: Introducing Mobile Phones and Voice Discussion into a Web Discussion Forum

    ERIC Educational Resources Information Center

    Wei, Fu-Hsiang; Chen, Gwo-Dong; Wang, Chin-Yeh; Li, Liang-Yi

    2007-01-01

    Web-based discussion forums enable users to share knowledge in straightforward and popular platforms. However, discussion forums have several problems, such as the lack of immediate delivery and response, the heavily text-based medium, inability to hear expressions of voice and the heuristically created discussion topics which can impede the…

  3. Back to Learning: How Research-Based Classroom Instruction Can Make the Impossible Possible

    ERIC Educational Resources Information Center

    Parsons, Les

    2012-01-01

    Based on the most up-to-date research, "Back to Learning" presents straightforward analysis and practical guidance on confronting bullying, taming the digital universe, and changing the troublesome trend in students' entitled attitudes toward learning and grades. "Back to Learning" gives teachers the background they need to: (1) understand how the…

  4. A comparison study: image-based vs signal-based retrospective gating on microCT

    NASA Astrophysics Data System (ADS)

    Liu, Xuan; Salmon, Phil L.; Laperre, Kjell; Sasov, Alexander

    2017-09-01

    Retrospective gating in animal studies with microCT has gained popularity in recent years. Previously, we used ECG signals for cardiac gating, and breathing airflow or video signals of abdominal motion for respiratory gating. This method is adequate and works well for most applications. However, over the years, researchers have noticed some pitfalls in the method. For example, the additional signal acquisition step may increase the failure rate in practice. X-ray image-based gating, on the other hand, does not require any extra step during scanning. We therefore investigate image-based gating techniques. This paper presents a comparison study of the image-based versus the signal-based approach to retrospective gating. The two application areas we have studied are respiratory and cardiac imaging for both rats and mice. Image-based respiratory gating on microCT is relatively straightforward and has been done by several other researchers and groups. This method retrieves an intensity curve from a region of interest (ROI) placed in the lung area on all projections. From scans on our systems, based on a step-and-shoot scanning mode, we confirm that this method is very effective. A detailed comparison between image-based and signal-based gating methods is given. For cardiac gating, breathing motion is not negligible and has to be dealt with. Another difficulty in cardiac gating is the relatively smaller amplitude of cardiac movements compared to the respiratory movements, and the higher heart rate. A higher heart rate requires high-speed image acquisition. We have been working on our systems to improve the acquisition speed. A dual gating technique has been developed to achieve adequate cardiac imaging.
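
    A minimal sketch of the intensity-curve step: average the pixel values inside a lung ROI on every projection, then bin projections by the phase of the resulting breathing signal. The array shapes, the ROI location, and the FFT-based phase estimate are illustrative assumptions, not details from the paper.

        import numpy as np

        def respiratory_bins(projections, roi, n_bins=4):
            """projections: (n_proj, H, W) array; roi: (row0, row1, col0, col1)."""
            r0, r1, c0, c1 = roi
            # Mean ROI intensity per projection traces the breathing motion.
            signal = projections[:, r0:r1, c0:c1].mean(axis=(1, 2))
            signal = signal - signal.mean()
            # Crude phase estimate from the analytic signal (Hilbert transform
            # via FFT); real gating software would use a more robust method.
            spectrum = np.fft.fft(signal)
            spectrum[len(spectrum) // 2 + 1:] = 0.0
            phase = np.angle(2.0 * np.fft.ifft(spectrum))  # in (-pi, pi]
            bins = np.floor((phase + np.pi) / (2 * np.pi) * n_bins)
            return bins.astype(int).clip(0, n_bins - 1)

        # Synthetic example: 360 projections of 64x64 pixels, 10 breathing cycles.
        proj = np.random.rand(360, 64, 64)
        proj += 0.5 * np.sin(np.linspace(0, 20 * np.pi, 360))[:, None, None]
        print(np.bincount(respiratory_bins(proj, roi=(16, 48, 16, 48))))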

  5. Diagnostic accuracy of tablet-based software for the detection of concussion.

    PubMed

    Yang, Suosuo; Flores, Benjamin; Magal, Rotem; Harris, Kyrsti; Gross, Jonathan; Ewbank, Amy; Davenport, Sasha; Ormachea, Pablo; Nasser, Waleed; Le, Weidong; Peacock, W Frank; Katz, Yael; Eagleman, David M

    2017-01-01

    Despite the high prevalence of traumatic brain injury (TBI), there are few rapid and straightforward tests to improve its assessment. To this end, we developed a tablet-based software battery ("BrainCheck") for concussion detection that is well suited to sports, emergency department, and clinical settings. This article is a study of the diagnostic accuracy of BrainCheck. We administered BrainCheck to 30 TBI patients and 30 pain-matched controls at a hospital emergency department (ED), and to 538 healthy individuals at 10 control test sites. We compared the results of the tablet-based assessment against physician diagnoses derived from brain scans, clinical examination, and the SCAT3 test, a traditional measure of TBI. We found consistent distributions of normative data and high test-retest reliability. Based on these assessments, we defined a composite score that distinguishes TBI from non-TBI individuals with high sensitivity (83%) and specificity (87%). We conclude that our testing application provides a rapid, portable testing method for TBI.

  6. A straightforward, validated liquid chromatography coupled to tandem mass spectrometry method for the simultaneous detection of nine drugs of abuse and their metabolites in hair and nails.

    PubMed

    Cappelle, Delphine; De Doncker, Mireille; Gys, Celine; Krysiak, Kamelia; De Keukeleire, Steven; Maho, Walid; Crunelle, Cleo L; Dom, Geert; Covaci, Adrian; van Nuijs, Alexander L N; Neels, Hugo

    2017-04-01

    Hair and nails allow for a stable accumulation of compounds over time and retrospective investigation of past exposure and/or consumption. Owing to their long window of detection (weeks to months), analysis of these matrices can provide information complementary to blood and urine analysis, or can be used on its own when, e.g., elimination from the body has already occurred. Drugs of abuse are often used together and, therefore, multi-analyte methods capable of detecting several substances and their metabolites in a single run are important. This paper presents the development and validation of a method based on liquid chromatography coupled to tandem mass spectrometry (LC-MS/MS) for the simultaneous detection of nine drugs of abuse and their metabolites in hair and nails. We focused on a simple and straightforward sample preparation to reduce costs and allow application in routine laboratory practice. Chromatographic and mass spectrometric parameters, such as column type, mobile phase, and multiple reaction monitoring transitions, were optimized. The method was validated according to the European Medicines Agency guidelines with an assessment of specificity, limit of quantification (LOQ), linearity, accuracy, precision, carry-over, matrix effects, recovery, and process efficiency. Linearity ranged from 25 to 20,000 pg mg⁻¹ in hair and from 50 to 20,000 pg mg⁻¹ in nails, and the lowest calibration point met the requirements for the LOQ (25 pg mg⁻¹ for hair and 50 pg mg⁻¹ for nails). Although it was not the main focus of the article, the reliability of the method was proven through successful participation in a proficiency test and by investigation of authentic hair and nail samples from self-reported drug users. In the future, the method should allow comparison between the two matrices, to acquire in-depth knowledge of nail analysis and to define cutoff levels for nail analysis, as they exist for hair. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. A Quadrupole Dalton-based multi-attribute method for product characterization, process development, and quality control of therapeutic proteins.

    PubMed

    Xu, Weichen; Jimenez, Rod Brian; Mowery, Rachel; Luo, Haibin; Cao, Mingyan; Agarwal, Nitin; Ramos, Irina; Wang, Xiangyang; Wang, Jihong

    2017-10-01

    During the manufacturing and storage processes, therapeutic proteins are subject to various post-translational modifications (PTMs), such as isomerization, deamidation, oxidation, disulfide bond modifications, and glycosylation. Certain PTMs may affect bioactivity, stability, or the pharmacokinetic and pharmacodynamic profile and are therefore classified as potential critical quality attributes (pCQAs). Identifying, monitoring, and controlling these PTMs are usually key elements of the Quality by Design (QbD) approach. Traditionally, multiple analytical methods are utilized for these purposes, which is time consuming and costly. In recent years, multi-attribute monitoring methods have been developed in the biopharmaceutical industry. However, these methods combine high-end mass spectrometry with complicated data analysis software, which can pose difficulties when implemented in a quality control (QC) environment. Here we report a multi-attribute method (MAM) using a Quadrupole Dalton (QDa) mass detector to selectively monitor and quantitate PTMs in a therapeutic monoclonal antibody. The output from the QDa-based MAM is straightforward and automatic. Evaluation results indicate that this method provides results comparable to the traditional assays. To ensure future application in the QC environment, the method was qualified according to the International Conference on Harmonization (ICH) guidelines and applied in the characterization of drug substance and stability samples. The QDa-based MAM is shown to be an extremely useful tool for product and process characterization studies that enables straightforward understanding of process impacts on multiple quality attributes, while being QC friendly and cost-effective.

  8. Multiaxis Rainflow Fatigue Methods for Nonstationary Vibration

    NASA Technical Reports Server (NTRS)

    Irvine, T.

    2016-01-01

    Mechanical structures and components may be subjected to cyclical loading conditions, including sine and random vibration. Such systems must be designed and tested accordingly. Rainflow cycle counting is the standard method for reducing a stress time history to a table of amplitude-cycle pairings prior to the Palmgren-Miner cumulative damage calculation. The damage calculation is straightforward for sinusoidal stress but very complicated for random stress, particularly for nonstationary vibration. This paper evaluates candidate methods and makes a recommendation for further study of a hybrid technique.
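
    To make the damage bookkeeping concrete, the sketch below applies the Palmgren-Miner rule to a rainflow-style table of amplitude-cycle pairings, using a hypothetical Basquin-type S-N curve N(S) = A * S^(-m); the constants and the table are illustrative, not values from the paper.

        def miner_damage(rainflow_table, A=1.0e12, m=3.0):
            """Cumulative damage D = sum(n_i / N_i) from (amplitude, cycles) pairs.

            N_i is the number of cycles to failure at stress amplitude S_i,
            taken here from a Basquin-type S-N curve: N(S) = A * S**(-m).
            Failure is predicted when D reaches 1.
            """
            return sum(n / (A * S ** (-m)) for S, n in rainflow_table)

        # Hypothetical rainflow output: (stress amplitude in MPa, cycle count).
        table = [(120.0, 5000), (80.0, 20000), (40.0, 100000)]
        print(f"Miner damage index: {miner_damage(table):.4f}")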

  9. Viral Delivery of dsRNA for Control of Insect Agricultural Pests and Vectors of Human Disease: Prospects and Challenges

    PubMed Central

    Kolliopoulou, Anna; Taning, Clauvis N. T.; Smagghe, Guy; Swevers, Luc

    2017-01-01

    RNAi is applied as a new and safe method for pest control in agriculture, but the efficiency and specificity of delivery of the dsRNA trigger remain a critical issue. Various agents have been proposed to augment dsRNA delivery, such as engineered micro-organisms and synthetic nanoparticles, but the use of viruses has received relatively little attention. Here we present a critical view of the potential of recombinant viruses for efficient and specific delivery of dsRNA. First of all, this requires the availability of plasmid-based reverse genetics systems for virus production, of which an overview is presented. For RNA viruses, the application seems straightforward, since dsRNA is produced as an intermediate molecule during viral replication, but DNA viruses also have potential through the production of RNA hairpins after transcription. However, the application of recombinant viruses for dsRNA delivery may not be straightforward in many cases, since viruses can encode RNAi suppressors, and virus-induced silencing effects can be determined by the properties of the encoded RNAi suppressor. An alternative is virus-like particles that retain the efficiency and specificity determinants of natural virions but encapsidate non-replicating RNA. Finally, the use of viruses raises important safety issues which need to be addressed before application can proceed. PMID:28659820

  10. Auto-programmable impulse neural circuits

    NASA Technical Reports Server (NTRS)

    Watula, D.; Meador, J.

    1990-01-01

    Impulse neural networks use pulse trains to communicate neuron activation levels. Impulse neural circuits emulate natural neurons at a more detailed level than that typically employed by contemporary neural network implementation methods. An impulse neural circuit which realizes short-term memory dynamics is presented. The operation of that circuit is then characterized in terms of pulse-frequency-modulated signals. Both fixed and programmable synapse circuits for realizing long-term memory are also described. The implementation of a simple and useful unsupervised learning law is then presented. A differential Hebbian learning rule for a specific mean-frequency signal interpretation is shown to be realizable in a straightforward way using digital combinational logic together with a variation of a previously developed programmable synapse circuit. This circuit is expected to be exploited for simple and straightforward implementation of future auto-adaptive neural circuits.

  11. Functional connectivity analysis in EEG source space: The choice of method

    PubMed Central

    Knyazeva, Maria G.

    2017-01-01

    Functional connectivity (FC) is among the most informative features derived from EEG. However, the most straightforward sensor-space analysis of FC is unreliable owing to volume conduction effects. An alternative—source-space analysis of FC—is optimal for high- and mid-density EEG (hdEEG, mdEEG); however, it is questionable for the widely used low-density EEG (ldEEG) because of inadequate surface sampling. Here, using simulations, we investigate the performance of two source FC methods, inverse-based source FC (ISFC) and cortical partial coherence (CPC). To examine the effects of the localization errors of the inverse method on the FC estimation, we simulated an oscillatory source with varying locations and SNRs. To compare the FC estimations of the two methods, we simulated two synchronized sources with varying between-source distance and SNR. The simulations were implemented for hdEEG, mdEEG, and ldEEG. We showed that the performance of both methods deteriorates for deep sources owing to their inaccurate localization and smoothing. The accuracy of both methods improves with increasing between-source distance. The best ISFC performance was achieved using hd/mdEEG, while the best CPC performance was observed with ldEEG. In conclusion, with hdEEG, ISFC outperforms CPC and should therefore be the preferred method. In studies based on ldEEG, CPC is the method of choice. PMID:28727750

  12. San Luis Basin Sustainability Metrics Project: A Methodology for Evaluating Regional Sustainability

    EPA Science Inventory

    Although there are several scientifically-based sustainability metrics, many are data intensive, difficult to calculate, and fail to capture all aspects of a system. To address these issues, we produced a scientifically-defensible, but straightforward and inexpensive, methodology…

  13. Straightforward Generation of Pillared, Microporous Graphene Frameworks for Use in Supercapacitors.

    PubMed

    Yuan, Kai; Xu, Yazhou; Uihlein, Johannes; Brunklaus, Gunther; Shi, Lei; Heiderhoff, Ralf; Que, Mingming; Forster, Michael; Chassé, Thomas; Pichler, Thomas; Riedl, Thomas; Chen, Yiwang; Scherf, Ullrich

    2015-11-01

    Microporous, pillared graphene-based frameworks are generated in a simple functionalization/coupling procedure starting from reduced graphene oxide. They are used for the fabrication of high-performance supercapacitor devices. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Expanding the Interaction Lexicon for 3D Graphics

    DTIC Science & Technology

    2001-11-01

    believe that extending it to work with image-based rendering engines is straightforward. I could modify plenoptic image editing [Seitz] to allow...M. Seitz and Kiriakos N. Kutulakos. Plenoptic Image Editing. International Conference on Computer Vision ‘98, pages 17-24. [ShapeCapture

  15. Generalized type II hybrid ARQ scheme using punctured convolutional coding

    NASA Astrophysics Data System (ADS)

    Kallel, Samir; Haccoun, David

    1990-11-01

    A method is presented to construct rate-compatible convolutional (RCC) codes from known high-rate punctured convolutional codes, obtained from best-rate 1/2 codes. The construction method is rather simple and straightforward, and still yields good codes. Moreover, low-rate codes can be obtained without any limit on the lowest achievable code rate. Based on the RCC codes, a generalized type-II hybrid ARQ scheme, which combines the benefits of the modified type-II hybrid ARQ strategy of Hagenauer (1988) with the code-combining ARQ strategy of Chase (1985), is proposed and analyzed. With the proposed generalized type-II hybrid ARQ strategy, the throughput increases as the starting coding rate increases, and as the channel degrades, it tends to merge with the throughput of rate 1/2 type-II hybrid ARQ schemes with code combining, thus allowing the system to be flexible and adaptive to channel conditions, even under wide noise variations and severe degradations.

  16. Efficient correction of wavefront inhomogeneities in X-ray holographic nanotomography by random sample displacement

    NASA Astrophysics Data System (ADS)

    Hubert, Maxime; Pacureanu, Alexandra; Guilloud, Cyril; Yang, Yang; da Silva, Julio C.; Laurencin, Jerome; Lefebvre-Joud, Florence; Cloetens, Peter

    2018-05-01

    In X-ray tomography, ring-shaped artifacts present in the reconstructed slices are an inherent problem degrading the global image quality and hindering the extraction of quantitative information. To overcome this issue, we propose a strategy for suppression of ring artifacts originating from the coherent mixing of the incident wave and the object. We discuss the limits of validity of the empty beam correction in the framework of a simple formalism. We then deduce a correction method based on two-dimensional random sample displacement, with minimal cost in terms of spatial resolution, acquisition, and processing time. The method is demonstrated on bone tissue and on a hydrogen electrode of a ceramic-metallic solid oxide cell. Compared to the standard empty beam correction, we obtain high quality nanotomography images revealing detailed object features. The resulting absence of artifacts allows straightforward segmentation and posterior quantification of the data.

  17. Aspergillus vertebral osteomyelitis in a chronic lymphocytic leukemia patient diagnosed by a novel panfungal polymerase chain reaction method.

    PubMed

    Dayan, Lior; Sprecher, Hannah; Hananni, Amos; Rosenbaum, Hana; Milloul, Victor; Oren, Ilana

    2007-01-01

    Vertebral osteomyelitis and discitis caused by Aspergillus spp. are rare. Early diagnosis and early antifungal therapy are critical to improving the prognosis for these patients. The diagnosis of invasive fungal infections is, in many cases, not straightforward and requires invasive procedures so that histological examination and culture can be performed. Furthermore, traditional microbiological tests (i.e., cultures and stains) lack the sensitivity needed for the diagnosis of invasive aspergillosis. The purpose of this report is to present a case of vertebral osteomyelitis caused by Aspergillus spp. diagnosed using a novel polymerase chain reaction (PCR) assay; the study design is a case report. Aspergillus DNA was detected in DNA extracted from the necrotic bone tissue using a novel "panfungal" PCR method. Treatment with voriconazole was started based on this diagnosis. This novel technique enabled us to accurately diagnose an unusual bone pathogen that requires a unique treatment.

  18. Sizing of single fluorescently stained DNA fragments by scanning microscopy

    PubMed Central

    Laib, Stephan; Rankl, Michael; Ruckstuhl, Thomas; Seeger, Stefan

    2003-01-01

    We describe an approach to determine DNA fragment sizes based on the fluorescence detection of single adsorbed fragments on specifically coated glass cover slips. The brightness of single fragments stained with the DNA bisintercalation dye TOTO-1 is determined by scanning the surface with a confocal microscope. The brightness of adsorbed fragments is found to be proportional to the fragment length. The method needs only a minute amount of DNA, and inexpensive and easily available surface coatings, such as poly-L-lysine, 3-aminopropyltriethoxysilane, and polyornithine, can be used. We performed DNA sizing of fragment lengths between 2 and 14 kb. Further, we resolved the size distribution before and after an enzymatic restriction digest; no separation from buffers or enzymes was necessary. DNA sizes were determined within an uncertainty of 7–14%. The proposed method is straightforward and can be applied to standardized microtiter plates. PMID:14602931

  19. Microfluidic etching and oxime-based tailoring of biodegradable polyketoesters.

    PubMed

    Barrett, Devin G; Lamb, Brian M; Yousaf, Muhammad N

    2008-09-02

    A straightforward, flexible, and inexpensive method to etch biodegradable poly(1,2,6-hexanetriol alpha-ketoglutarate) films is reported. Microfluidic delivery of the etchant, a solution of NaOH, can create micron-scale channels through local hydrolysis of the polyester film. In addition, the presence of a ketone in the repeat unit allows for chemoselective modification either before or after etching, enabling the design of functionalized microchannels. Oxyamine-tethered ligands delivered to the surface react with ketone groups on the polyketoester to generate covalent oxime linkages. By thermally sealing an etched film to a second flat surface, poly(1,2,6-hexanetriol alpha-ketoglutarate) can be used to create biodegradable microfluidic devices. To determine the versatility of the microfluidic etching technique, poly(epsilon-caprolactone) was etched with acetone. This strategy provides a facile method for the direct patterning of biodegradable materials, both through etching and through chemoselective ligand immobilization.

  20. Efficient Construction of Discrete Adjoint Operators on Unstructured Grids by Using Complex Variables

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Kleb, William L.

    2005-01-01

    A methodology is developed and implemented to mitigate the lengthy software development cycle typically associated with constructing a discrete adjoint solver for aerodynamic simulations. The approach is based on a complex-variable formulation that enables straightforward differentiation of complicated real-valued functions. An automated scripting process is used to create the complex-variable form of the set of discrete equations. An efficient method for assembling the residual and cost function linearizations is developed. The accuracy of the implementation is verified through comparisons with a discrete direct method as well as a previously developed handcoded discrete adjoint approach. Comparisons are also shown for a large-scale configuration to establish the computational efficiency of the present scheme. To ultimately demonstrate the power of the approach, the implementation is extended to high temperature gas flows in chemical nonequilibrium. Finally, several fruitful research and development avenues enabled by the current work are suggested.
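
    The complex-variable formulation mentioned above is commonly realized as complex-step differentiation: perturbing a real-valued function along the imaginary axis yields a derivative free of subtractive cancellation. The sketch below demonstrates the idea on a scalar function; it illustrates the general technique, not the paper's adjoint construction.

        import cmath

        def complex_step_derivative(f, x, h=1.0e-30):
            # f'(x) ~= Im(f(x + i*h)) / h, accurate to O(h^2) with no
            # subtractive cancellation, so h can be taken extremely small.
            return f(complex(x, h)).imag / h

        f = lambda x: cmath.exp(x) / cmath.sqrt(x)   # any analytic function
        x0 = 1.5
        exact = (cmath.exp(x0) * (x0 - 0.5) / x0 ** 1.5).real
        print(complex_step_derivative(f, x0), exact)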

  2. The solution of radiative transfer problems in molecular bands without the LTE assumption by accelerated lambda iteration methods

    NASA Technical Reports Server (NTRS)

    Kutepov, A. A.; Kunze, D.; Hummer, D. G.; Rybicki, G. B.

    1991-01-01

    An iterative method based on the use of approximate transfer operators, which was designed initially to solve multilevel NLTE line-formation problems in stellar atmospheres, is adapted and applied to the solution of NLTE molecular band radiative transfer problems in planetary atmospheres. The matrices to be constructed and inverted are much smaller than those used in the traditional Curtis matrix technique, which makes possible the treatment of more realistic problems using relatively small computers. This technique converges much more rapidly than straightforward iteration between the transfer equation and the equations of statistical equilibrium. A test application of this new technique to the solution of NLTE radiative transfer problems for optically thick and thin bands (the 4.3 micron CO2 band in the Venusian atmosphere and the 4.7 and 2.3 micron CO bands in the Earth's atmosphere) is described.

  3. Computer Generated Diffraction Patterns Of Rough Surfaces

    NASA Astrophysics Data System (ADS)

    Rakels, Jan H.

    1989-03-01

    It is generally accepted that optical methods are the most promising for the in-process measurement of surface finish. These methods have the advantages of being non-contacting and of fast data acquisition. In the Micro-Engineering Centre at the University of Warwick, an optical sensor has been devised which can measure the rms roughness, slope, and wavelength of turned and precision-ground surfaces. The operation of this device is based upon the Kirchhoff-Fresnel diffraction integral. Application of this theory to ideal turned surfaces is straightforward, and indeed the theoretically calculated diffraction patterns are in close agreement with patterns produced by an actual optical instrument. Since it is mathematically difficult to introduce real surface profiles into the diffraction integral, a computer program has been devised which simulates the operation of the optical sensor. The program produces a diffraction pattern as graphical output. Comparisons between computer-generated and actual diffraction patterns of the same surfaces show a high correlation.

  4. Probabilistic sensitivity analysis incorporating the bootstrap: an example comparing treatments for the eradication of Helicobacter pylori.

    PubMed

    Pasta, D J; Taylor, J L; Henning, J M

    1999-01-01

    Decision-analytic models are frequently used to evaluate the relative costs and benefits of alternative therapeutic strategies for health care. Various types of sensitivity analysis are used to evaluate the uncertainty inherent in the models. Although probabilistic sensitivity analysis is more difficult theoretically and computationally, the results can be much more powerful and useful than deterministic sensitivity analysis. The authors show how a Monte Carlo simulation can be implemented using standard software to perform a probabilistic sensitivity analysis incorporating the bootstrap. The method is applied to a decision-analytic model evaluating the cost-effectiveness of Helicobacter pylori eradication. The necessary steps are straightforward and are described in detail. The use of the bootstrap avoids certain difficulties encountered with theoretical distributions. The probabilistic sensitivity analysis provided insights into the decision-analytic model beyond the traditional base-case and deterministic sensitivity analyses and should become the standard method for assessing sensitivity.
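
    A minimal sketch of the approach described above, under assumed inputs: patient-level cost and effect samples for two strategies are resampled with replacement (the bootstrap), and each resample feeds one Monte Carlo iteration of the decision model, here reduced to an incremental cost-effectiveness ratio. The data and iteration count are hypothetical.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical patient-level (cost, effect) data for two strategies.
        cost_a, eff_a = rng.normal(900, 150, 200), rng.normal(0.70, 0.10, 200)
        cost_b, eff_b = rng.normal(650, 120, 200), rng.normal(0.62, 0.12, 200)

        def bootstrap_icer(n_iter=5000):
            """One ICER per bootstrap resample of the patient-level data."""
            icers = np.empty(n_iter)
            for k in range(n_iter):
                ia = rng.integers(0, len(cost_a), len(cost_a))  # resample indices
                ib = rng.integers(0, len(cost_b), len(cost_b))
                d_cost = cost_a[ia].mean() - cost_b[ib].mean()
                d_eff = eff_a[ia].mean() - eff_b[ib].mean()
                icers[k] = d_cost / d_eff
            return icers

        icers = bootstrap_icer()
        print(np.percentile(icers, [2.5, 50, 97.5]))  # uncertainty interval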

  5. Enhanced NMR-based profiling of polyphenols in commercially available grape juices using solid-phase extraction.

    PubMed

    Savage, Angela K; van Duynhoven, John P M; Tucker, Gregory; Daykin, Clare A

    2011-12-01

    Grapes and related products, such as juices, and in particular their polyphenols, have previously been associated with many health benefits, such as protection against cardiovascular disease. Within grapes, a large range of structurally diverse polyphenols can be present, and their characterisation stands as a challenge. ¹H NMR spectroscopy would in principle provide a rapid, nondestructive, and straightforward method for profiling of polyphenols. However, polyphenol profiling and identification in grape juices is hindered because signals of prevailing carbohydrates cause spectral overlap and compromise dynamic range. This study describes the development of an extraction method prior to analysis using ¹H NMR spectroscopy, which can potentially significantly increase the number of detectable polyphenols and aid their identification, by reducing signal overlap and selectively removing heavily dominating compounds such as sugars. Copyright © 2012 John Wiley & Sons, Ltd.

  6. A chiral diamine: practical implications of a three-stereoisomer cocrystallization.

    PubMed

    Dolinar, Brian S; Samedov, Kerim; Maloney, Andrew G P; West, Robert; Khrustalev, Victor N; Guzei, Ilia A

    2018-01-01

    A brief comparison of seven straightforward methods for molecular crystal-volume estimation revealed that their precisions are comparable. A chiral diamine, N2,N3-bis[2,6-bis(propan-2-yl)phenyl]butane-2,3-diamine, C28H44N2, has been used to illustrate the application of the methods. Three stereoisomers of the diamine cocrystallize in the centrosymmetric space group P21/c with Z' = 1.5. The molecules occupying general positions are RR and SS, whereas that residing on an inversion center is meso. This is one of only ten examples of three stereoisomers with two asymmetric atoms cocrystallizing together reported to the Cambridge Structural Database (CSD). The conformations of the SS/RR and meso molecules differ considerably and lead to statistically significantly different C(asymmetric)-C(asymmetric) bond lengths in the diastereomers. An advanced Python script-based CSD searching technique for chiral compounds is presented.

  7. Protecting privacy of shared epidemiologic data without compromising analysis potential.

    PubMed

    Cologne, John; Grant, Eric J; Nakashima, Eiji; Chen, Yun; Funamoto, Sachiyo; Katayama, Hiroaki

    2012-01-01

    Ensuring privacy of research subjects when epidemiologic data are shared with outside collaborators involves masking (modifying) the data, but overmasking can compromise utility (analysis potential). Methods of statistical disclosure control for protecting privacy may be impractical for individual researchers involved in small-scale collaborations. We investigated a simple approach based on measures of disclosure risk and analytical utility that are straightforward for epidemiologic researchers to derive. The method is illustrated using data from the Japanese Atomic-bomb Survivor population. Masking by modest rounding did not adequately enhance security but rounding to remove several digits of relative accuracy effectively reduced the risk of identification without substantially reducing utility. Grouping or adding random noise led to noticeable bias. When sharing epidemiologic data, it is recommended that masking be performed using rounding. Specific treatment should be determined separately in individual situations after consideration of the disclosure risks and analysis needs.
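
    A sketch of the kind of masking the study recommends: round shared values to a few significant digits, then check how many records remain unique on the masked variables (a simple proxy for disclosure risk). The data, the digit count, and the risk measure are illustrative assumptions, not the study's actual protocol.

        import numpy as np

        rng = np.random.default_rng(1)

        def round_sig(x, digits=2):
            """Round to a fixed number of significant digits of relative accuracy."""
            x = np.asarray(x, dtype=float)
            mag = np.floor(np.log10(np.abs(x), where=x != 0, out=np.zeros_like(x)))
            factor = 10.0 ** (digits - 1 - mag)
            return np.round(x * factor) / factor

        def unique_fraction(columns):
            """Fraction of records with a unique combination of masked values."""
            rows = list(zip(*columns))
            counts = {r: rows.count(r) for r in set(rows)}
            return sum(1 for r in rows if counts[r] == 1) / len(rows)

        age = rng.uniform(20, 90, 500)
        dose = rng.lognormal(0.0, 1.0, 500)
        print("risk, raw   :", unique_fraction([tuple(age), tuple(dose)]))
        print("risk, masked:", unique_fraction([tuple(round_sig(age)),
                                                tuple(round_sig(dose))]))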

  8. Accelerated Enveloping Distribution Sampling: Enabling Sampling of Multiple End States while Preserving Local Energy Minima.

    PubMed

    Perthold, Jan Walther; Oostenbrink, Chris

    2018-05-17

    Enveloping distribution sampling (EDS) is an efficient approach to calculate multiple free-energy differences from a single molecular dynamics (MD) simulation. However, the construction of an appropriate reference-state Hamiltonian that samples all states efficiently is not straightforward. We propose a novel approach for the construction of the EDS reference-state Hamiltonian, related to a previously described procedure to smoothen energy landscapes. In contrast to previously suggested EDS approaches, our reference-state Hamiltonian preserves local energy minima of the combined end-states. Moreover, we propose an intuitive, robust and efficient parameter optimization scheme to tune EDS Hamiltonian parameters. We demonstrate the proposed method with established and novel test systems and conclude that our approach allows for the automated calculation of multiple free-energy differences from a single simulation. Accelerated EDS promises to be a robust and user-friendly method to compute free-energy differences based on solid statistical mechanics.

  9. A versatile and efficient high-throughput cloning tool for structural biology.

    PubMed

    Geertsma, Eric R; Dutzler, Raimund

    2011-04-19

    Methods for the cloning of large numbers of open reading frames into expression vectors are of critical importance for challenging structural biology projects. Here we describe a system termed fragment exchange (FX) cloning that facilitates the high-throughput generation of expression constructs. The method is based on a class IIS restriction enzyme and negative selection markers. FX cloning combines attractive features of established recombination- and ligation-independent cloning methods: it allows the straightforward transfer of an open reading frame into a variety of expression vectors and is highly efficient and very economical in use. In addition, FX cloning avoids the common but undesirable feature of significantly extending target open reading frames with cloning-related sequences, as it leaves a minimal seam of only a single extra amino acid on either side of the protein. The method has proven to be very robust and suitable for all common pro- and eukaryotic expression systems. It considerably speeds up the generation of expression constructs compared to traditional methods and thus facilitates broader expression screening.

  10. A Dissipative Systems Theory for FDTD With Application to Stability Analysis and Subgridding

    NASA Astrophysics Data System (ADS)

    Bekmambetova, Fadime; Zhang, Xinyue; Triverio, Piero

    2017-02-01

    This paper establishes a far-reaching connection between the Finite-Difference Time-Domain method (FDTD) and the theory of dissipative systems. The FDTD equations for a rectangular region are written as a dynamical system having the magnetic and electric fields on the boundary as inputs and outputs. Suitable expressions for the energy stored in the region and the energy absorbed from the boundaries are introduced, and used to show that the FDTD system is dissipative under a generalized Courant-Friedrichs-Lewy condition. Based on the concept of dissipation, a powerful theoretical framework to investigate the stability of FDTD methods is devised. The new method makes FDTD stability proofs simpler, more intuitive, and modular. Stability conditions can indeed be given on the individual components (e.g. boundary conditions, meshes, embedded models) instead of the whole coupled setup. As an example of application, we derive a new subgridding method with material traverse, arbitrary grid refinement, and guaranteed stability. The method is easy to implement and has a straightforward stability proof. Numerical results confirm its stability, low reflections, and ability to handle material traverse.
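
    For reference, a bound of this type reduces, for the standard Yee scheme on a uniform rectangular grid with maximum wave speed c, to the classical Courant-Friedrichs-Lewy limit written below in LaTeX; the paper's generalized condition is a variant of this and may differ in detail.

        \Delta t \le \frac{1}{c\sqrt{\frac{1}{\Delta x^{2}} + \frac{1}{\Delta y^{2}} + \frac{1}{\Delta z^{2}}}}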

  11. The Expense Tied to Secondary Course Failure: The Case of Ontario

    ERIC Educational Resources Information Center

    Faubert, Brenton

    2016-01-01

    This article describes a study that examined the volume of secondary course failure and its direct budget impact on Ontario's K-12 public education system. The study employed a straightforward, descriptive accounting method to estimate the annual expenditure tied to secondary course failure, taking into account some factors known to be…

  12. Investigating an Aerial Image First

    ERIC Educational Resources Information Center

    Wyrembeck, Edward P.; Elmer, Jeffrey S.

    2006-01-01

    Most introductory optics lab activities begin with students locating the real image formed by a converging lens. The method is simple and straightforward--students move a screen back and forth until the real image is in sharp focus on the screen. Students then draw a simple ray diagram to explain the observation using only two or three special…

  13. A simple, sensitive graphical method of treating thermogravimetric analysis data

    Treesearch

    Abraham Broido

    1969-01-01

    Thermogravimetric Analysis (TGA) is finding increasing utility in investigations of the pyrolysis and combustion behavior of materials. Although a theoretical treatment of the TGA behavior of an idealized reaction is relatively straightforward, major complications can be introduced when the reactions are complex, e.g., in the pyrolysis of cellulose, and when...

  14. What Does It Mean to Assess Gifted Students' Perceptions of Giftedness Labels?

    ERIC Educational Resources Information Center

    Meadows, Bryan; Neumann, Jacob W.

    2017-01-01

    Measuring gifted and talented ("GT") students' perceptions of their "GT" label might seem to be a relatively straightforward affair. Most of this research uses survey methods that ask "GT" students to complete Likert scale or open-ended response questionnaires about their perceptions of the label and then presents…

  15. Cyber Portfolio: The Innovative Menu for 21st Century Technology

    ERIC Educational Resources Information Center

    Robles, Ava Clare Marie O.

    2012-01-01

    Cyber portfolio is a valuable innovative menu for teachers who seek out strategies or methods to integrate technology into their lessons. This paper presents a straightforward preparation on how to innovate a menu that addresses the 21st century skills blended with higher order thinking skills, multiple intelligence, technology and multimedia.…

  16. A facile synthesis of pyrrolo[2,3-b]quinolines via a Rh(I)-catalyzed carbodiimide-Pauson-Khand-type reaction.

    PubMed

    Saito, Takao; Furukawa, Naoki; Otani, Takashi

    2010-03-07

    A new straightforward synthetic method for 2,3-dihydro-1H-pyrrolo[2,3-b]quinolin-2-ones via a [RhCl(CO)2]2/dppp-catalyzed Pauson-Khand-type reaction of N-[2-(2-alkyn-1-yl)phenyl]carbodiimides is reported.

  17. The Environmental Impact of Electrical Generation: Nuclear vs. Conventional.

    ERIC Educational Resources Information Center

    McDermott, John J., Ed.

    This minicourse, partially supported by the Division of Nuclear Education and Training of the U.S. Atomic Energy Commission, is an effort to describe the benefit-to-risk ratio of various methods of generating electrical power. It attempts to present an unbiased, straightforward, and objective view of the advantages and disadvantages of nuclear…

  18. Seeing the Invisible with Schlieren Imaging

    ERIC Educational Resources Information Center

    Lekholm, Ville; Ramme, Goran; Thornell, Greger

    2011-01-01

    Schlieren imaging is a method for visualizing differences in refractive index as caused by pressure or temperature non-uniformities within a medium, or as caused by the mixing of two fluids. It is an inexpensive yet powerful and straightforward tool for sensitive and high-resolution visualization of otherwise invisible phenomena. In this article,…

  19. A practical modification of horizontal line sampling for snag and cavity tree inventory

    Treesearch

    M. J. Ducey; G. J. Jordan; J. H. Gove; H. T. Valentine

    2002-01-01

    Snags and cavity trees are important structural features in forests, but they are often sparsely distributed, making efficient inventories problematic. We present a straightforward modification of horizontal line sampling designed to facilitate inventory of these features while remaining compatible with commonly employed sampling methods for the living overstory. The...

  20. Straightforward and effective protein encapsulation in polypeptide-based artificial cells.

    PubMed

    Zhi, Zheng-Liang; Haynie, Donald T

    2006-01-01

    A simple and straightforward approach to encapsulating an enzyme and preserving its function in polypeptide-based artificial cells is demonstrated. A model enzyme, glucose oxidase (GOx), was encapsulated by repeated stepwise adsorption of poly(L-lysine) and poly(L-glutamic acid) onto GOx-coated CaCO3 templates. These polypeptides are known from previous research to exhibit nanometer-scale organization in multilayer films. Templates were dissolved by ethylenediaminetetraacetic acid (EDTA) at neutral pH. Addition of polyethylene glycol (PEG) to the polypeptide assembly solutions greatly increased enzyme retention on the templates, resulting in high-capacity, high-activity loading of the enzyme into artificial cells. Assay of enzyme activity showed that over 80 mg mL⁻¹ GOx was retained in artificial cells after polypeptide multilayer film formation and template dissolution in the presence of PEG, but only one-fifth as much was retained in the absence of PEG. Encapsulation is a means of improving the availability of therapeutic macromolecules in biomedicine. This work therefore represents a means of developing polypeptide-based artificial cells for use as therapeutic biomacromolecule delivery vehicles.

  1. Chirality in Magnetic Multilayers Probed by the Symmetry and the Amplitude of Dichroism in X-Ray Resonant Magnetic Scattering

    NASA Astrophysics Data System (ADS)

    Chauleau, Jean-Yves; Legrand, William; Reyren, Nicolas; Maccariello, Davide; Collin, Sophie; Popescu, Horia; Bouzehouane, Karim; Cros, Vincent; Jaouen, Nicolas; Fert, Albert

    2018-01-01

    Chirality in condensed matter has recently become a topic of the utmost importance because of its significant role in the understanding and mastering of a large variety of new fundamental physical mechanisms. Versatile experimental approaches capable of easily revealing the exact winding of order parameters are therefore essential. Here we report x-ray resonant magnetic scattering as a straightforward tool to reveal directly the properties of chiral magnetic systems. We show that it can straightforwardly and unambiguously determine the main characteristics of chiral magnetic distributions: their chiral nature, the quantitative winding sense (clockwise or counterclockwise), and the type, i.e., Néel (cycloidal) or Bloch (helical). This method is model independent, does not require a priori knowledge of the magnetic parameters, and can be applied to any system with magnetic domains ranging from a few nanometers (wavelength limited) to several microns. Using prototypical multilayers with tailored magnetic chiralities driven by spin-orbit-related effects at Co|Pt interfaces, we illustrate the strength of this method.

  2. Detecting Genetic Interactions for Quantitative Traits Using m-Spacing Entropy Measure

    PubMed Central

    Yee, Jaeyong; Kwon, Min-Seok; Park, Taesung; Park, Mira

    2015-01-01

    A number of statistical methods for detecting gene-gene interactions have been developed in genetic association studies with binary traits. However, many phenotype measures are intrinsically quantitative and categorizing continuous traits may not always be straightforward and meaningful. Association of gene-gene interactions with an observed distribution of such phenotypes needs to be investigated directly without categorization. Information gain based on entropy measure has previously been successful in identifying genetic associations with binary traits. We extend the usefulness of this information gain by proposing a nonparametric evaluation method of conditional entropy of a quantitative phenotype associated with a given genotype. Hence, the information gain can be obtained for any phenotype distribution. Because any functional form, such as Gaussian, is not assumed for the entire distribution of a trait or a given genotype, this method is expected to be robust enough to be applied to any phenotypic association data. Here, we show its use to successfully identify the main effect, as well as the genetic interactions, associated with a quantitative trait. PMID:26339620
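
    The conditional-entropy building block can be illustrated with the classical Vasicek m-spacing estimator for a sample of a continuous trait. The abstract does not give the authors' exact estimator, so this is a generic sketch of the m-spacing idea, with illustrative data.

        import numpy as np

        def m_spacing_entropy(sample, m=3):
            """Vasicek-style m-spacing estimate of differential entropy.

            H ~ (1/n) * sum_i log( n/(2m) * (x_(i+m) - x_(i-m)) ),
            with order-statistic indices clamped at the sample boundaries.
            """
            x = np.sort(np.asarray(sample, dtype=float))
            n = len(x)
            hi = np.minimum(np.arange(n) + m, n - 1)     # index of x_(i+m)
            lo = np.maximum(np.arange(n) - m, 0)         # index of x_(i-m)
            spacings = np.maximum(x[hi] - x[lo], 1e-12)  # guard against ties
            return np.mean(np.log(n / (2.0 * m) * spacings))

        # Entropy of N(0,1) is 0.5*log(2*pi*e) ~ 1.4189; compare the estimate.
        rng = np.random.default_rng(2)
        print(m_spacing_entropy(rng.normal(0.0, 1.0, 5000)))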

  3. A step-by-step guide to non-linear regression analysis of experimental data using a Microsoft Excel spreadsheet.

    PubMed

    Brown, A M

    2001-06-01

    The objective of this present study was to introduce a simple, easily understood method for carrying out non-linear regression analysis based on user input functions. While it is relatively straightforward to fit data with simple functions such as linear or logarithmic functions, fitting data with more complicated non-linear functions is more difficult. Commercial specialist programmes are available that will carry out this analysis, but these programmes are expensive and are not intuitive to learn. An alternative method described here is to use the SOLVER function of the ubiquitous spreadsheet programme Microsoft Excel, which employs an iterative least squares fitting routine to produce the optimal goodness of fit between data and function. The intent of this paper is to lead the reader through an easily understood step-by-step guide to implementing this method, which can be applied to any function in the form y=f(x), and is well suited to fast, reliable analysis of data in all fields of biology.
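
    The same iterative least-squares idea is easy to reproduce outside a spreadsheet; the sketch below fits a user-defined function by minimizing the sum of squared residuals, using SciPy in place of Excel's SOLVER. The function, parameters, and data are invented for illustration.

        import numpy as np
        from scipy.optimize import least_squares

        def model(params, x):
            a, b = params
            return a * (1.0 - np.exp(-b * x))    # user-input function y = f(x)

        def residuals(params, x, y):
            return model(params, x) - y          # objective: sum of squares

        # Synthetic data from known parameters plus noise.
        rng = np.random.default_rng(3)
        x = np.linspace(0.0, 10.0, 50)
        y = model([2.0, 0.6], x) + rng.normal(0.0, 0.05, x.size)

        fit = least_squares(residuals, x0=[1.0, 1.0], args=(x, y))
        print("fitted a, b:", fit.x)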

  4. A non-linear regression analysis program for describing electrophysiological data with multiple functions using Microsoft Excel.

    PubMed

    Brown, Angus M

    2006-04-01

    The objective of this present study was to demonstrate a method for fitting complex electrophysiological data with multiple functions using the SOLVER add-in of the ubiquitous spreadsheet Microsoft Excel. SOLVER minimizes the difference between the sum of the squares of the data to be fit and the function(s) describing the data using an iterative generalized reduced gradient method. While it is a straightforward procedure to fit data with linear functions, and we have previously demonstrated a method of non-linear regression analysis of experimental data based upon a single function, it is more complex to fit data with multiple functions, usually requiring specialized expensive computer software. In this paper we describe an easily understood program for fitting experimentally acquired data, in this case the stimulus-evoked compound action potential from the mouse optic nerve, with multiple Gaussian functions. The program is flexible and can be applied to describe data with a wide variety of user-input functions.
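
    Extending the single-function case to multiple functions amounts to fitting a sum. The sketch below fits two Gaussians to a synthetic trace with scipy.optimize.curve_fit, standing in for the Excel SOLVER workflow the paper describes; the waveform and starting guesses are invented.

        import numpy as np
        from scipy.optimize import curve_fit

        def two_gaussians(t, a1, mu1, s1, a2, mu2, s2):
            g = lambda a, mu, s: a * np.exp(-((t - mu) ** 2) / (2.0 * s ** 2))
            return g(a1, mu1, s1) + g(a2, mu2, s2)

        # Synthetic "compound action potential": two overlapping components.
        rng = np.random.default_rng(4)
        t = np.linspace(0.0, 5.0, 400)
        y = two_gaussians(t, 1.0, 1.5, 0.3, 0.6, 2.4, 0.5)
        y += rng.normal(0, 0.02, t.size)

        p0 = [0.8, 1.2, 0.4, 0.5, 2.6, 0.4]          # rough starting guesses
        popt, _ = curve_fit(two_gaussians, t, y, p0=p0)
        print(np.round(popt, 3))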

  5. Edge theory approach to topological entanglement entropy, mutual information, and entanglement negativity in Chern-Simons theories

    NASA Astrophysics Data System (ADS)

    Wen, Xueda; Matsuura, Shunji; Ryu, Shinsei

    2016-06-01

    We develop an approach based on edge theories to calculate the entanglement entropy and related quantities in (2+1)-dimensional topologically ordered phases. Our approach is complementary to, e.g., the existing methods using replica trick and Witten's method of surgery, and applies to a generic spatial manifold of genus g , which can be bipartitioned in an arbitrary way. The effects of fusion and braiding of Wilson lines can be also straightforwardly studied within our framework. By considering a generic superposition of states with different Wilson line configurations, through an interference effect, we can detect, by the entanglement entropy, the topological data of Chern-Simons theories, e.g., the R symbols, monodromy, and topological spins of quasiparticles. Furthermore, by using our method, we calculate other entanglement/correlation measures such as the mutual information and the entanglement negativity. In particular, it is found that the entanglement negativity of two adjacent noncontractible regions on a torus provides a simple way to distinguish Abelian and non-Abelian topological orders.
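
    For orientation, the topological data enter the entanglement entropy of a disk-like region with smooth boundary of length L through the standard area-law-plus-constant form below (in LaTeX), where \mathcal{D} is the total quantum dimension; this relation is textbook background rather than a result quoted from the abstract.

        S_A = \alpha L - \gamma + \cdots, \qquad \gamma = \log \mathcal{D}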

  6. A Quick and Easy Simplification of Benzocaine's NMR Spectrum

    NASA Astrophysics Data System (ADS)

    Carpenter, Suzanne R.; Wallace, Richard H.

    2006-04-01

    The preparation of benzocaine is a common experiment used in sophomore-level organic chemistry. Its straightforward procedure and predictably good yields make it ideal for the beginning organic student. Analysis of the product via NMR spectroscopy, however, can be confusing to the novice interpreter. An inexpensive, quick, and effective method for simplifying the NMR spectrum is reported. The method results in a spectrum that is cleanly integrated and more easily interpreted.

  7. ADE-FDTD Scattered-Field Formulation for Dispersive Materials

    PubMed Central

    Kong, Soon-Cheol; Simpson, Jamesina J.; Backman, Vadim

    2009-01-01

    This Letter presents a scattered-field formulation for modeling dispersive media using the finite-difference time-domain (FDTD) method. Specifically, the auxiliary differential equation method is applied to Drude and Lorentz media for a scattered field FDTD model. The present technique can also be applied in a straightforward manner to Debye media. Excellent agreement is achieved between the FDTD-calculated and exact theoretical results for the reflection coefficient in half-space problems. PMID:19844602
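
    As background on the auxiliary differential equation approach named above: for a Drude medium with plasma frequency \omega_p and collision frequency \gamma, the polarization current \mathbf{J} satisfies the auxiliary equation below (in LaTeX), which is time-stepped alongside the Maxwell curl equations; the Lorentz case adds a restoring term. This is the standard textbook form, not an equation quoted from the Letter.

        \frac{\partial \mathbf{J}}{\partial t} + \gamma\,\mathbf{J} = \varepsilon_0\,\omega_p^{2}\,\mathbf{E}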

  9. Direct Alkynylation of 3H-Imidazo[4,5-b]pyridines Using gem-Dibromoalkenes as Alkynes Source.

    PubMed

    Aziz, Jessy; Baladi, Tom; Piguel, Sandrine

    2016-05-20

    C2 direct alkynylation of 3H-imidazo[4,5-b]pyridine derivatives is explored for the first time. Stable and readily available 1,1-dibromo-1-alkenes, electrophilic alkyne precursors, are used as coupling partners. The simple reaction conditions include an inexpensive copper catalyst (CuBr·SMe2 or Cu(OAc)2), a phosphine ligand (DPEphos), and a base (LiOtBu) in 1,4-dioxane at 120 °C. This C-H alkynylation method proved to be compatible with a variety of substitutions on both coupling partners, heteroarenes and gem-dibromoalkenes. This protocol allows the straightforward synthesis of various 2-alkynyl-3H-imidazo[4,5-b]pyridines, a valuable scaffold in drug design.

  10. RNA-Seq for Bacterial Gene Expression.

    PubMed

    Poulsen, Line Dahl; Vinther, Jeppe

    2018-06-01

    RNA sequencing (RNA-seq) has become the preferred method for global quantification of bacterial gene expression. With the continued improvements in sequencing technology and data analysis tools, the most labor-intensive and expensive part of an RNA-seq experiment is the preparation of sequencing libraries, which is also essential for the quality of the data obtained. Here, we present a straightforward and inexpensive basic protocol for preparation of strand-specific RNA-seq libraries from bacterial RNA as well as a computational pipeline for the data analysis of sequencing reads. The protocol is based on the Illumina platform and allows easy multiplexing of samples and the removal of sequencing reads that are PCR duplicates. © 2018 by John Wiley & Sons, Inc. © 2018 John Wiley & Sons, Inc.

  11. Sound field separation with sound pressure and particle velocity measurements.

    PubMed

    Fernandez-Grande, Efren; Jacobsen, Finn; Leclère, Quentin

    2012-12-01

    In conventional near-field acoustic holography (NAH) it is not possible to distinguish between sound from the two sides of the array, thus, it is a requirement that all the sources are confined to only one side and radiate into a free field. When this requirement cannot be fulfilled, sound field separation techniques make it possible to distinguish between outgoing and incoming waves from the two sides, and thus NAH can be applied. In this paper, a separation method based on the measurement of the particle velocity in two layers and another method based on the measurement of the pressure and the velocity in a single layer are proposed. The two methods use an equivalent source formulation with separate transfer matrices for the outgoing and incoming waves, so that the sound from the two sides of the array can be modeled independently. A weighting scheme is proposed to account for the distance between the equivalent sources and measurement surfaces and for the difference in magnitude between pressure and velocity. Experimental and numerical studies have been conducted to examine the methods. The double layer velocity method seems to be more robust to noise and flanking sound than the combined pressure-velocity method, although it requires an additional measurement surface. On the whole, the separation methods can be useful when the disturbance of the incoming field is significant. Otherwise the direct reconstruction is more accurate and straightforward.

  12. Use of the LUS in sequence allele designations to facilitate probabilistic genotyping of NGS-based STR typing results.

    PubMed

    Just, Rebecca S; Irwin, Jodi A

    2018-05-01

    Some of the expected advantages of next generation sequencing (NGS) for short tandem repeat (STR) typing include enhanced mixture detection and genotype resolution via sequence variation among non-homologous alleles of the same length. However, at the same time that NGS methods for forensic DNA typing have advanced in recent years, many caseworking laboratories have implemented or are transitioning to probabilistic genotyping to assist the interpretation of complex autosomal STR typing results. Current probabilistic software programs are designed for length-based data, and were not intended to accommodate sequence strings as the product input. Yet to leverage the benefits of NGS for enhanced genotyping and mixture deconvolution, the sequence variation among same-length products must be utilized in some form. Here, we propose use of the longest uninterrupted stretch (LUS) in allele designations as a simple method to represent sequence variation within the STR repeat regions and facilitate - in the near term - probabilistic interpretation of NGS-based typing results. An examination of published population data indicated that a reference LUS region is straightforward to define for most autosomal STR loci, and that using repeat unit plus LUS length as the allele designator can represent greater than 80% of the alleles detected by sequencing. A proof-of-concept study performed using a freely available probabilistic software demonstrated that the LUS length can be used in allele designations when a program does not require alleles to be integers, and that utilizing sequence information improves interpretation of both single-source and mixed contributor STR typing results as compared to using repeat unit information alone. The LUS concept for allele designation maintains the repeat-based allele nomenclature that will permit backward compatibility to extant STR databases, and the LUS lengths themselves will be concordant regardless of the NGS assay or analysis tools employed. Further, these biologically based, easy-to-derive designations uphold clear relationships between parent alleles and their stutter products, enabling analysis in fully continuous probabilistic programs that model stutter while avoiding the algorithmic complexities that come with string-based searches. Though using repeat unit plus LUS length as the allele designator does not capture variation that occurs outside of the core repeat regions, this straightforward approach would permit the large majority of known STR sequence variation to be used for mixture deconvolution and, in turn, result in more informative mixture statistics in the near term. Ultimately, the method could bridge the gap from current length-based probabilistic systems to facilitate broader adoption of NGS by forensic DNA testing laboratories. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
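
    As a hedged sketch of the longest-uninterrupted-stretch idea, the Python helper below counts the longest run of consecutive motif copies in an STR sequence; the AGAT motif, the interrupted allele, and the "12_7"-style designator in the comments are invented examples, not data from the study.

        def lus(seq: str, motif: str) -> int:
            """Longest uninterrupted stretch: maximum number of consecutive
            tandem copies of `motif` anywhere in `seq`."""
            best = run = 0
            i = 0
            while i + len(motif) <= len(seq):
                if seq[i:i + len(motif)] == motif:
                    run += 1
                    best = max(best, run)
                    i += len(motif)
                else:
                    run = 0
                    i += 1
            return best

        # Hypothetical allele: 4 AGAT copies, one AGAC interruption, then 7 more
        seq = "AGAT" * 4 + "AGAC" + "AGAT" * 7
        print(lus(seq, "AGAT"))   # -> 7
        # A repeat-unit-plus-LUS designator in the proposed style: e.g. "12_7"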

  13. SNDR Limits of Oscillator-Based Sensor Readout Circuits.

    PubMed

    Cardes, Fernando; Quintero, Andres; Gutierrez, Eric; Buffa, Cesare; Wiesbauer, Andreas; Hernandez, Luis

    2018-02-03

    This paper analyzes the influence of phase noise and distortion on the performance of oscillator-based sensor data acquisition systems. Circuit noise inherent to the oscillator circuit manifests as phase noise and limits the SNR. Moreover, oscillator nonlinearity generates distortion for large input signals. Phase noise analysis of oscillators is well known in the literature, but the relationship between phase noise and the SNR of an oscillator-based sensor is not straightforward. This paper proposes a model to estimate the influence of phase noise on the performance of an oscillator-based system by reflecting the phase noise to the oscillator input. The proposed model is based on periodic steady-state analysis tools to predict the SNR of the oscillator. The accuracy of this model has been validated by both simulation and experiment in a 130 nm CMOS prototype. We also propose a method to estimate the SNDR and the dynamic range of an oscillator-based readout circuit that reduces the simulation time by more than one order of magnitude compared to standard time-domain simulations. This speed-up enables the optimization and verification of this kind of system with iterative algorithms.

  14. Unbundling in Current Broadband and Next-Generation Ultra-Broadband Access Networks

    NASA Astrophysics Data System (ADS)

    Gaudino, Roberto; Giuliano, Romeo; Mazzenga, Franco; Valcarenghi, Luca; Vatalaro, Francesco

    2014-05-01

    This article overviews the methods that are currently under investigation for implementing multi-operator open-access/shared-access techniques in next-generation access ultra-broadband architectures, starting from the traditional "unbundling-of-the-local-loop" techniques implemented in legacy twisted-pair digital subscriber line access networks. A straightforward replication of these copper-based unbundling-of-the-local-loop techniques is usually not feasible on next-generation access networks, including fiber-to-the-home point-to-multipoint passive optical networks. To investigate this issue, the article first gives a concise description of traditional copper-based unbundling-of-the-local-loop solutions, then focalizes on both next-generation access hybrid fiber-copper digital subscriber line fiber-to-the-cabinet scenarios and on fiber to the home by accounting for the mix of regulatory and technological reasons driving the next-generation access migration path, focusing mostly on the European situation.

  15. Segmenting human from photo images based on a coarse-to-fine scheme.

    PubMed

    Lu, Huchuan; Fang, Guoliang; Shao, Xinqing; Li, Xuelong

    2012-06-01

    Human segmentation in photo images is a challenging and important problem that finds numerous applications ranging from album making and photo classification to image retrieval. Previous works on human segmentation usually demand a time-consuming training phase for complex shape-matching processes. In this paper, we propose a straightforward framework to automatically recover human bodies from color photos. Employing a coarse-to-fine strategy, we first detect a coarse torso (CT) using the multicue CT detection algorithm and then extract the accurate region of the upper body. Then, an iterative multiple oblique histogram algorithm is presented to accurately recover the lower body based on human kinematics. The performance of our algorithm is evaluated on our own data set (containing 197 images with human body region ground truth data), VOC 2006, and the 2010 data set. Experimental results demonstrate the merits of the proposed method in segmenting a person with various poses.

  16. Information-based approach to performance estimation and requirements allocation in multisensor fusion for target recognition

    NASA Astrophysics Data System (ADS)

    Harney, Robert C.

    1997-03-01

    A novel methodology offering the potential for resolving two of the significant problems of implementing multisensor target recognition systems, i.e., the rational selection of a specific sensor suite and optimal allocation of requirements among sensors, is presented. Based on a sequence of conjectures (and their supporting arguments) concerning the relationship of extractable information content to recognition performance of a sensor system, a set of heuristics (essentially a reformulation of Johnson's criteria applicable to all sensor and data types) is developed. An approach to quantifying the information content of sensor data is described. Coupling this approach with the widely accepted Johnson's criteria for target recognition capabilities results in a quantitative method for comparing the target recognition ability of diverse sensors (imagers, nonimagers, active, passive, electromagnetic, acoustic, etc.). Extension to describing the performance of multiple sensors is straightforward. The application of the technique to sensor selection and requirements allocation is discussed.

  17. Inviscid Design of Hypersonic Wind Tunnel Nozzles for a Real Gas

    NASA Technical Reports Server (NTRS)

    Korte, J. J.

    2000-01-01

    A straightforward procedure has been developed to quickly determine an inviscid design of a hypersonic wind tunnel nozzle when the test gas is both calorically and thermally imperfect. This real gas procedure divides the nozzle into four distinct parts: subsonic, throat to conical, conical, and turning flow regions. The design process is greatly simplified by treating the imperfect gas effects only in the source flow region. This simplification can be justified for a large class of hypersonic wind tunnel nozzle design problems. The final nozzle design is obtained either by doing a classical boundary layer correction or by using this inviscid design as the starting point for a viscous design optimization based on computational fluid dynamics. An example of a real gas nozzle design is used to illustrate the method. The accuracy of the real gas design procedure is shown to compare favorably with an ideal gas design based on computed flow field solutions.

  18. Micro-sampling method based on high-resolution continuum source graphite furnace atomic absorption spectrometry for calcium determination in blood and mitochondrial suspensions.

    PubMed

    Gómez-Nieto, Beatriz; Gismera, Mª Jesús; Sevilla, Mª Teresa; Satrústegui, Jorgina; Procopio, Jesús R

    2017-08-01

    A straightforward micro-sampling method based on high-resolution continuum source atomic absorption spectrometry (HR-CS AAS) was developed to determine extracellular and intracellular Ca in samples of interest in clinical and biomedical analysis. Solid sampling platforms were used to introduce the micro-samples into the graphite furnace atomizer. The secondary absorption line for Ca, located at 239.856 nm, was selected to carry out the measurements. Experimental parameters such as pyrolysis and atomization temperatures and the amount of sample introduced for the measurements were optimized. Calibration was performed using aqueous standards, and the approach of measuring at the wings of the absorption lines was employed to expand the linear response range. The limit of detection was 0.02 mg L-1 Ca (0.39 ng Ca) and the upper limit of the linear range was increased up to 8.0 mg L-1 Ca (160 ng Ca). The proposed method was used to determine Ca in mitochondrial suspensions and whole blood samples with successful results. Adequate recoveries (within 91-107%) were obtained in the tests performed for validation purposes. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. A shock-capturing SPH scheme based on adaptive kernel estimation

    NASA Astrophysics Data System (ADS)

    Sigalotti, Leonardo Di G.; López, Hender; Donoso, Arnaldo; Sira, Eloy; Klapp, Jaime

    2006-02-01

    Here we report a method that converts standard smoothed particle hydrodynamics (SPH) into a working shock-capturing scheme without relying on solutions to the Riemann problem. Unlike existing adaptive SPH simulations, the present scheme is based on an adaptive kernel estimation of the density, which combines intrinsic features of both the kernel and nearest neighbor approaches in a way that the amount of smoothing required in low-density regions is effectively controlled. Symmetrized SPH representations of the gas dynamic equations along with the usual kernel summation for the density are used to guarantee variational consistency. Implementation of the adaptive kernel estimation involves a very simple procedure and allows for a unique scheme that handles strong shocks and rarefactions the same way. Since it represents a general improvement of the integral interpolation on scattered data, it is also applicable to other fluid-dynamic models. When the method is applied to supersonic compressible flows with sharp discontinuities, as in the classical one-dimensional shock-tube problem and its variants, the accuracy of the results is comparable, and in most cases superior, to that obtained from high quality Godunov-type methods and SPH formulations based on Riemann solutions. The extension of the method to two- and three-space dimensions is straightforward. In particular, for the two-dimensional cylindrical Noh's shock implosion and Sedov point explosion problems the present scheme produces much better results than those obtained with conventional SPH codes.
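
    The adaptive kernel estimation can be illustrated with a toy 1-D density summation in which each particle's smoothing length is iterated against its own density estimate. This is a minimal sketch under invented particle positions and a simple h = eta*m/rho adaptation rule, not the authors' scheme.

        import numpy as np

        def w_cubic(q):
            """Standard 1-D cubic spline kernel (normalisation 2/3)."""
            w = np.where(q < 1, 1 - 1.5 * q**2 + 0.75 * q**3,
                np.where(q < 2, 0.25 * (2 - q)**3, 0.0))
            return (2.0 / 3.0) * w

        def adaptive_density(x, m, eta=1.2, iters=20):
            """Iterate h_i = eta*m_i/rho_i together with the kernel summation
            rho_i = sum_j m_j W(|x_i - x_j|, h_i) (gather formulation)."""
            h = np.full_like(x, 2.0 * (x.max() - x.min()) / len(x))
            for _ in range(iters):
                r = np.abs(x[:, None] - x[None, :])
                rho = (m[None, :] * w_cubic(r / h[:, None])).sum(axis=1) / h
                h = eta * m / rho          # more smoothing where density is low
            return rho, h

        # Particles crowded on the left, sparse on the right (illustrative)
        x = np.concatenate([np.linspace(0, 1, 80), np.linspace(1.05, 3, 20)])
        m = np.full_like(x, 1.0 / len(x))
        rho, h = adaptive_density(x, m)
        print(rho[:3], h[-3:])             # low-density tail gets larger h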

  20. Linking flowability and granulometry of lactose powders.

    PubMed

    Boschini, F; Delaval, V; Traina, K; Vandewalle, N; Lumay, G

    2015-10-15

    The flowing properties of 10 lactose powders commonly used in pharmaceutical industries have been analyzed with three recently improved measurement methods. The first method is based on the heap shape measurement. This straightforward measurement method provides two physical parameters (angle of repose αr and static cohesive index σr) allowing a first screening of the powder properties. The second method estimates the rheological properties of a powder by analyzing the powder flow in a rotating drum. This more advanced method gives a large set of physical parameters (flowing angle αf, dynamic cohesive index σf, angle of first avalanche αa and powder aeration %ae) leading to deeper interpretations. The third method is an improvement of the classical bulk and tapped density measurements. In addition to the improvement of the measurement precision, the densification dynamics of the powder bulk submitted to taps is analyzed. The link between the macroscopic physical parameters obtained with these methods and the powder granulometry is analyzed. Moreover, the correlations between the different flowability indexes are discussed. Finally, the link between grain shape and flowability is discussed qualitatively. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. Altitude Effects on Thermal Ice Protection System Performance; a Study of an Alternative Approach

    NASA Technical Reports Server (NTRS)

    Addy, Harold E., Jr.; Orchard, David; Wright, William B.; Oleskiw, Myron

    2016-01-01

    Research has been conducted to better understand the phenomena involved during operation of an aircraft's thermal ice protection system under running wet icing conditions. In such situations, supercooled water striking a thermally ice-protected surface does not fully evaporate but runs aft to a location where it freezes. The effects of altitude, in terms of air pressure and density, on the processes involved were of particular interest. Initial study results showed that the altitude effects on heat energy transfer were accurately modeled using existing methods, but water mass transport was not. Based upon those results, a new method to account for altitude effects on thermal ice protection system operation was proposed. The method employs a two-step process where heat energy and mass transport are sequentially matched, linked by matched surface temperatures. While not providing exact matching of heat and mass transport to reference conditions, the method produces a better simulation than other methods. Moreover, it does not rely on the application of empirical correction factors, but instead relies on the straightforward application of the primary physics involved. This report describes the method, shows results of testing the method, and discusses its limitations.

  2. Friction coefficient of an intact free liquid jet moving in air

    NASA Astrophysics Data System (ADS)

    Comiskey, P. M.; Yarin, A. L.

    2018-04-01

    Here, we propose a novel method of determining the friction coefficient of intact free liquid jets moving in quiescent air. The middle-size jets of this kind are relevant for such applications as decorative fountains, fiber-forming, fire suppression, agriculture, and forensics. The present method is based on measurements of trajectories created using a straightforward experimental apparatus emulating such jets at a variety of initial inclination angles. Then, the trajectories are described theoretically, accounting for the longitudinal traction imposed on such jets by the surrounding air. The comparison of the experimental data with the theoretical predictions shows that the results can be perfectly superimposed with the friction coefficient C_fd = 5 Re_d^(-1/2 ± 0.05) in the range 621 ≤ Re_d ≤ 1289, with Re_d being the Reynolds number based on the local cross-sectional diameter of the jet. The results also show that the farthest distance such jets can reach corresponds to an initial inclination angle α = 35°, which is in agreement with already published data.

  3. Robust and Simple Non-Reflecting Boundary Conditions for the Euler Equations: A New Approach Based on the Space-Time CE/SE Method

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung; Himansu, Ananda; Loh, Ching-Yuen; Wang, Xiao-Yen; Yu, Shang-Tao

    2003-01-01

    This paper reports on a significant advance in the area of non-reflecting boundary conditions (NRBCs) for unsteady flow computations. As a part of the development of the space-time conservation element and solution element (CE/SE) method, sets of NRBCs for 1D Euler problems are developed without using any characteristics-based techniques. These conditions are much simpler than those commonly reported in the literature, yet so robust that they are applicable to subsonic, transonic and supersonic flows even in the presence of discontinuities. In addition, the straightforward multidimensional extensions of the present 1D NRBCs have been shown numerically to be equally simple and robust. The paper details the theoretical underpinning of these NRBCs, and explains their unique robustness and accuracy in terms of the conservation of space-time fluxes. Some numerical results for an extended Sod's shock-tube problem, illustrating the effectiveness of the present NRBCs are included, together with an associated simple Fortran computer program. As a preliminary to the present development, a review of the basic CE/SE schemes is also included.

  4. Forward and backward tone mapping of high dynamic range images based on subband architecture

    NASA Astrophysics Data System (ADS)

    Bouzidi, Ines; Ouled Zaid, Azza

    2015-01-01

    This paper presents a novel High Dynamic Range (HDR) tone mapping (TM) system based on sub-band architecture. Standard wavelet filters of the Daubechies, Symlets, Coiflets and Biorthogonal families were used to estimate the proposed system's performance in terms of Low Dynamic Range (LDR) image quality and reconstructed HDR image fidelity. During the TM stage, the HDR image is first decomposed into sub-bands using a symmetrical analysis-synthesis filter bank. The transform coefficients are then rescaled using a predefined gain map. The inverse Tone Mapping (iTM) stage is straightforward. Indeed, the LDR image passes through the same sub-band architecture. But, instead of reducing the dynamic range, the LDR content is boosted to an HDR representation. Moreover, our TM scheme includes an optimization module to select the gain map components that minimize the reconstruction error, consequently resulting in high-fidelity HDR content. Comparisons with recent state-of-the-art methods have shown that our method provides better results in terms of visual quality and HDR reconstruction fidelity using objective and subjective evaluations.
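
    A rough flavour of sub-band tone mapping can be given in a few lines with PyWavelets: decompose the log-luminance, rescale the coarse band with a gain factor, and invert. The wavelet choice, gain value, and synthetic image below are illustrative assumptions; the paper's optimized gain map and iTM stage are not reproduced.

        import numpy as np
        import pywt

        def tonemap(hdr_lum, wavelet="bior4.4", levels=4, gain=0.7):
            """Sub-band tone mapping sketch: compress the coarse band of the
            log-luminance, leave detail bands intact, then invert."""
            coeffs = pywt.wavedec2(np.log1p(hdr_lum), wavelet, level=levels)
            coeffs[0] = coeffs[0] * gain            # flatten the coarse band
            ldr = np.expm1(pywt.waverec2(coeffs, wavelet))
            return np.clip(ldr / ldr.max(), 0, 1)

        # Synthetic HDR luminance spanning several decades (illustrative)
        y, x = np.mgrid[0:256, 0:256]
        hdr = np.exp(8.0 * np.exp(-((x - 128)**2 + (y - 128)**2) / 4000.0))
        ldr = tonemap(hdr)
        print(ldr.shape, float(ldr.max()))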

  5. a Context-Aware Tourism Recommender System Based on a Spreading Activation Method

    NASA Astrophysics Data System (ADS)

    Bahramian, Z.; Abbaspour, R. Ali; Claramunt, C.

    2017-09-01

    Users planning a trip to a given destination often search for the most appropriate points of interest, a non-straightforward task as the range of information available is very large and not very well structured. The research presented in this paper introduces a context-aware tourism recommender system that overcomes the information overload problem by providing personalized recommendations based on the user's preferences. It also incorporates contextual information to improve the recommendation process. As previous context-aware tourism recommender systems suffer from a lack of formal definitions to represent contextual information and user preferences, the proposed system is enhanced using an ontology approach. We also apply a spreading activation technique to contextualize user preferences and learn the user profile dynamically according to the user's feedback. The proposed method assigns more effect in the spreading process to nodes whose preference values are assigned directly by the user. The results show the overall performance of the proposed context-aware tourism recommender system in an experimental application to the city of Tehran.
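
    A minimal sketch of spreading activation, assuming an invented ontology fragment and decay factor: activation starts at user-rated nodes and flows along weighted links, so related concepts accumulate scores for ranking.

        def spread(graph, seeds, decay=0.6, rounds=3):
            """Basic spreading activation: each round, every node passes a
            decayed, weighted share of its activation to its neighbours."""
            act = dict(seeds)                  # user-rated nodes start charged
            for _ in range(rounds):
                nxt = dict(act)
                for node, links in graph.items():
                    for nbr, w in links.items():
                        nxt[nbr] = nxt.get(nbr, 0.0) + decay * w * act.get(node, 0.0)
                act = nxt
            return sorted(act.items(), key=lambda kv: -kv[1])

        # Tiny illustrative concept graph (weights invented)
        graph = {
            "museum":  {"history": 0.9, "architecture": 0.4},
            "history": {"old_town": 0.8},
            "park":    {"outdoors": 0.9},
        }
        print(spread(graph, {"museum": 1.0}))  # history-related nodes rank high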

  6. Non-invasive determination of glucose directly in raw fruits using a continuous flow system based on microdialysis sampling and amperometric detection at an integrated enzymatic biosensor.

    PubMed

    Vargas, E; Ruiz, M A; Campuzano, S; Reviejo, A J; Pingarrón, J M

    2016-03-31

    A non-destructive, rapid and simple-to-use sensing method for the direct determination of glucose in non-processed fruits is described. The strategy involved on-line microdialysis sampling coupled with a continuous flow system with amperometric detection at an enzymatic biosensor. Apart from direct determination of glucose in fruit juices and blended fruits, this work describes for the first time the successful application of an enzymatic biosensor-based electrochemical approach to the non-invasive determination of glucose in raw fruits. The methodology correlates, through a previous calibration set-up, the amperometric signal generated from glucose in non-processed fruits with its content in % (w/w). The comparison of the results obtained using the proposed approach in different fruits with those provided by another method involving the same commercial biosensor as amperometric detector in stirred solutions pointed out that there were no significant differences. Moreover, in comparison with other available methodologies, this microdialysis-coupled continuous flow system amperometric biosensor-based procedure features straightforward sample preparation, low cost, reduced assay time (sampling rate of 7 h-1) and ease of automation. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Nanoparticle assisted laser desorption/ionization mass spectrometry for small molecule analytes.

    PubMed

    Abdelhamid, Hani Nasser

    2018-03-01

    Nanoparticle assisted laser desorption/ionization mass spectrometry (NPs-ALDI-MS) shows remarkable characteristics and has a promising future in terms of real sample analysis. The incorporation of NPs can advance several methods including surface assisted LDI-MS, and surface enhanced LDI-MS. These methods have advanced the detection of many thermally labile and nonvolatile biomolecules. Nanoparticles circumvent the drawbacks of conventional organic matrices for the analysis of small molecules. In most cases, NPs offer a clear background without interfering peaks, absence of fragmentation of thermally labile molecules, and allow the ionization of species with weak noncovalent interactions. Furthermore, an enhancement in sensitivity and selectivity can be achieved. NPs enable straightforward analysis of target species in a complex sample. This review (with 239 refs.) covers the progress made in laser-based mass spectrometry in combination with the use of metallic NPs (such as AuNPs, AgNPs, PtNPs, and PdNPs), NPs consisting of oxides and chalcogenides, silicon-based NPs, carbon-based nanomaterials, quantum dots, and metal-organic frameworks. Graphical abstract An overview is given on nanomaterials for use in surface-assisted laser desorption/ionization mass spectrometry of small molecules.

  8. Managing Uncertainty in Runoff Estimation with the U.S. Environmental Protection Agency National Stormwater Calculator.

    EPA Science Inventory

    The U.S. Environmental Protection Agency National Stormwater Calculator (NSWC) simplifies the task of estimating runoff through a straightforward simulation process based on the EPA Stormwater Management Model. The NSWC accesses localized climate and soil hydrology data, and opti...

  9. Rangeland and Oak Relationships

    Treesearch

    Dick R. McCleery

    1991-01-01

    Hardwood rangelands are becoming an endangered resource on the Central Coast of California. Straightforward inventory processes and management guidelines on which to base sound management decisions provide the landowner the tools to protect and utilize these important hardwood resources. Utilizing a WOODLAND INFORMATION STICK and a ZIG ZAG TRANSECT, landowners can...

  10. Computational Approaches to Identify Promoters and cis-Regulatory Elements in Plant Genomes1

    PubMed Central

    Rombauts, Stephane; Florquin, Kobe; Lescot, Magali; Marchal, Kathleen; Rouzé, Pierre; Van de Peer, Yves

    2003-01-01

    The identification of promoters and their regulatory elements is one of the major challenges in bioinformatics and integrates comparative, structural, and functional genomics. Many different approaches have been developed to detect conserved motifs in a set of genes that are either coregulated or orthologous. However, although recent approaches seem promising, in general, unambiguous identification of regulatory elements is not straightforward. The delineation of promoters is even harder, due to its complex nature, and in silico promoter prediction is still in its infancy. Here, we review the different approaches that have been developed for identifying promoters and their regulatory elements. We discuss the detection of cis-acting regulatory elements using word-counting or probabilistic methods (so-called “search by signal” methods) and the delineation of promoters by considering both sequence content and structural features (“search by content” methods). As an example of search by content, we explored in greater detail the association of promoters with CpG islands. However, due to differences in sequence content, the parameters used to detect CpG islands in humans and other vertebrates cannot be used for plants. Therefore, a preliminary attempt was made to define parameters that could possibly define CpG and CpNpG islands in Arabidopsis, by exploring the compositional landscape around the transcriptional start site. To this end, a data set of more than 5,000 gene sequences was built, including the promoter region, the 5′-untranslated region, and the first introns and coding exons. Preliminary analysis shows that promoter location based on the detection of potential CpG/CpNpG islands in the Arabidopsis genome is not straightforward. Nevertheless, because the landscape of CpG/CpNpG islands differs considerably between promoters and introns on the one side and exons (whether coding or not) on the other, more sophisticated approaches can probably be developed for the successful detection of “putative” CpG and CpNpG islands in plants. PMID:12857799
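
    For illustration, the classic vertebrate criteria (GC content above 50% and observed/expected CpG above 0.6 in a sliding window) can be scanned in a few lines of Python; as the abstract stresses, these thresholds do not transfer to plants, so the values below are placeholders.

        def cpg_islands(seq, win=200, step=50, gc_min=0.5, oe_min=0.6):
            """Scan with the classic vertebrate criteria (GC > 50%,
            observed/expected CpG > 0.6); plant thresholds would differ."""
            hits = []
            for s in range(0, len(seq) - win + 1, step):
                w = seq[s:s + win].upper()
                g, c = w.count("G"), w.count("C")
                cpg = w.count("CG")
                gc = (g + c) / win
                oe = cpg * win / (g * c) if g and c else 0.0
                if gc > gc_min and oe > oe_min:
                    hits.append((s, s + win, round(gc, 2), round(oe, 2)))
            return hits

        # Illustrative sequence: an AT-rich stretch followed by a CpG-dense one
        seq = "AT" * 200 + "CG" * 150
        print(cpg_islands(seq)[:3])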

  11. Zephyr: Open-source Parallel Seismic Waveform Inversion in an Integrated Python-based Framework

    NASA Astrophysics Data System (ADS)

    Smithyman, B. R.; Pratt, R. G.; Hadden, S. M.

    2015-12-01

    Seismic Full-Waveform Inversion (FWI) is an advanced method to reconstruct wave properties of materials in the Earth from a series of seismic measurements. These methods have been developed by researchers since the late 1980s, and now see significant interest from the seismic exploration industry. As researchers move towards implementing advanced numerical modelling (e.g., 3D, multi-component, anisotropic and visco-elastic physics), it is desirable to make use of a modular approach, minimizing the effort of developing a new set of tools for each new numerical problem. SimPEG (http://simpeg.xyz) is an open source project aimed at constructing a general framework to enable geophysical inversion in various domains. In this abstract we describe Zephyr (https://github.com/bsmithyman/zephyr), which is a coupled research project focused on parallel FWI in the seismic context. The software is built on top of Python, Numpy and IPython, which enables very flexible testing and implementation of new features. Zephyr is an open source project, and is released freely to enable reproducible research. We currently implement a parallel, distributed seismic forward modelling approach that solves the 2.5D (two-and-one-half dimensional) viscoacoustic Helmholtz equation at a range of modelling frequencies, generating forward solutions for a given source behaviour, and gradient solutions for a given set of observed data. Solutions are computed in a distributed manner on a set of heterogeneous workers. The researcher's frontend computer may be separated from the worker cluster by a network link to enable full support for computation on remote clusters from individual workstations or laptops. The present codebase introduces a numerical discretization equivalent to that used by FULLWV, a well-known seismic FWI research codebase. This makes it straightforward to compare results from Zephyr directly with FULLWV. The flexibility introduced by the use of a Python programming environment makes extension of the codebase with new methods much more straightforward. This enables comparison and integration of new efforts with existing results.

  12. Improved perovskite phototransistor prepared using multi-step annealing method

    NASA Astrophysics Data System (ADS)

    Cao, Mingxuan; Zhang, Yating; Yu, Yu; Yao, Jianquan

    2018-02-01

    Organic-inorganic hybrid perovskites with good intrinsic physical properties have received substantial interest for solar cell and optoelectronic applications. However, perovskite films always suffer from low carrier mobility due to structural imperfections, including sharp grain boundaries and pinholes, restricting their device performance and application potential. Here we demonstrate a straightforward strategy based on a multi-step annealing process to improve the performance of a perovskite photodetector. Annealing temperature and duration greatly affect the surface morphology and optoelectrical properties of perovskites, which determine the device properties of the phototransistor. Perovskite films treated with the multi-step annealing method tend to form highly uniform, well-crystallized and high-surface-coverage films, which exhibit stronger ultraviolet-visible absorption and photoluminescence spectra compared to perovskites prepared by the conventional one-step annealing process. The field-effect mobilities of the perovskite photodetector treated by the one-step direct annealing method are 0.121 (0.062) cm2V-1s-1 for holes (electrons), which increase to 1.01 (0.54) cm2V-1s-1 for devices treated with the multi-step slow annealing method. Moreover, the perovskite phototransistors exhibit a fast photoresponse speed of 78 μs. In general, this work focuses on the influence of annealing methods on the perovskite phototransistor rather than on obtaining its best parameters. These findings show that multi-step annealing is a feasible route to high-performance perovskite-based photodetectors.

  13. Solution of the Schrodinger Equation for One-Dimensional Anharmonic Potentials: An Undergraduate Computational Experiment

    ERIC Educational Resources Information Center

    Beddard, Godfrey S.

    2011-01-01

    A method of solving the Schrodinger equation using a basis set expansion is described and used to calculate energy levels and wavefunctions of the hindered rotation of ethane and the ring puckering of cyclopentene. The calculations were performed using a computer algebra package and the calculations are straightforward enough for undergraduates to…
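
    The basis-set idea is straightforward to reproduce: build the position operator in a truncated harmonic-oscillator basis, form H = diag(n + 1/2) + lambda*X^4, and diagonalize. A minimal numpy sketch (hbar = m = omega = 1; the basis size and quartic coupling are illustrative) is given below; it is not the article's computer-algebra worksheet.

        import numpy as np

        def anharmonic_levels(lam=0.1, nbasis=60, nlevels=5):
            """Diagonalise H = p^2/2 + x^2/2 + lam*x^4 in a truncated
            harmonic-oscillator basis (hbar = m = omega = 1)."""
            n = np.arange(nbasis)
            X = np.zeros((nbasis, nbasis))
            off = np.sqrt((n[:-1] + 1) / 2.0)     # <n|x|n+1> matrix elements
            X[n[:-1], n[:-1] + 1] = off
            X[n[:-1] + 1, n[:-1]] = off
            H = np.diag(n + 0.5) + lam * np.linalg.matrix_power(X, 4)
            return np.linalg.eigvalsh(H)[:nlevels]

        print(anharmonic_levels())   # ground state ~0.559 for lam = 0.1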

  14. The Assess-and-Fix Approach: Using Non-Destructive Evaluations to Help Select Pipe Renewal Methods (WaterRF Report 4473)

    EPA Science Inventory

    Nondestructive examinations (NDE) can be easily performed as part of a typical water main rehabilitation project. Once a bypass water system has been installed and the water main has been cleaned, pulling a scanning tool through the main is very straightforward. An engineer can t...

  15. Are You Paying Too Much? Cutting Telephone System Costs by Tracking Expenses.

    ERIC Educational Resources Information Center

    Sachnoff, Neil S.

    1990-01-01

    Personnel responsible for telecommunications systems need a way to ensure pursuit of the right technology at the right time does not lead them away from considerations of the right price. A three-step method of historical and monthly reviews offers a straightforward means of regaining control of expenditures and tracking costs. (MSE)

  16. Chapter 2. Methods and terminology used with studies of habitat associations

    Treesearch

    D. Archibald McCallum

    1994-01-01

    The forest owl conservation assessments emphasize the relationship between flammulated, boreal, and great gray owls and the forests in which they occur. The habitat requirements of the owls and their principal prey bear strongly on the conservation status of the owls. Establishing the characteristics of the owl/habitat relationship is not a trivial or straightforward...

  17. Bringing Catalysis with Gold Nanoparticles in Green Solvents to Graduate Level Students

    ERIC Educational Resources Information Center

    Raghuwanshi, Vikram Singh; Wendt, Robert; O'Neill, Maeve; Ochmann, Miguel; Som, Tirtha; Fenger, Robert; Mohrmann, Marie; Hoell, Armin; Rademann, Klaus

    2017-01-01

    We demonstrate here a novel laboratory experiment for the synthesis of gold nanoparticles (AuNPs) by using a low energy gold-sputtering method together with a modern, green, and biofriendly deep eutectic solvent (DES). The strategy is straightforward, economical, ecofriendly, rapid, and clean. It yields uniform AuNPs of 5 nm in diameter with high…

  18. Post-processing method for wind speed ensemble forecast using wind speed and direction

    NASA Astrophysics Data System (ADS)

    Sofie Eide, Siri; Bjørnar Bremnes, John; Steinsland, Ingelin

    2017-04-01

    Statistical methods are widely applied to enhance the quality of both deterministic and ensemble NWP forecasts. In many situations, like wind speed forecasting, most of the predictive information is contained in one variable in the NWP models. However, in statistical calibration of deterministic forecasts it is often seen that including more variables can further improve forecast skill. For ensembles this is rarely taken advantage of, mainly because it is generally not straightforward to include multiple variables. In this study, it is demonstrated how multiple variables can be included in Bayesian model averaging (BMA) by using a flexible regression method for estimating the conditional means. The method is applied to wind speed forecasting at 204 Norwegian stations based on wind speed and direction forecasts from the ECMWF ensemble system. At about 85 % of the sites the ensemble forecasts were improved in terms of CRPS by adding wind direction as a predictor compared to only using wind speed. On average the improvements were about 5 %, but mainly for moderate to strong wind situations. For weak wind speeds adding wind direction had a more or less neutral impact.
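
    One hedged way to picture how wind direction can enter a conditional-mean regression is the least-squares sketch below, where direction enters through sin/cos features so the fit is periodic across 0/360 degrees; the data are synthetic, and the model is far simpler than the flexible regression and BMA weighting used in the study.

        import numpy as np

        def design(speed, direction_deg):
            """Direction enters via sin/cos so the regression is continuous
            across the 0/360 degree wrap-around."""
            d = np.deg2rad(direction_deg)
            return np.column_stack([np.ones_like(speed), speed,
                                    speed * np.sin(d), speed * np.cos(d)])

        rng = np.random.default_rng(1)
        fc_spd = rng.gamma(4.0, 2.0, 500)          # ensemble-mean speed forecasts
        fc_dir = rng.uniform(0, 360, 500)
        # Synthetic truth with a direction-dependent bias (illustrative only)
        obs = (0.8 * fc_spd * (1 + 0.2 * np.sin(np.deg2rad(fc_dir)))
               + rng.normal(0, 0.5, 500))

        beta, *_ = np.linalg.lstsq(design(fc_spd, fc_dir), obs, rcond=None)
        mu = design(fc_spd, fc_dir) @ beta         # conditional means for BMA
        print(beta.round(3), float(np.mean((mu - obs) ** 2)))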

  19. Non-Contact Surface Roughness Measurement by Implementation of a Spatial Light Modulator

    PubMed Central

    Aulbach, Laura; Salazar Bloise, Félix; Lu, Min; Koch, Alexander W.

    2017-01-01

    The surface structure, especially the roughness, has a significant influence on numerous parameters, such as friction and wear, and is therefore an indicator of the quality of technical systems. In the last decades, a broad variety of surface roughness measurement methods were developed. Most of these methods suffer either from a destructive measurement procedure or from the infeasibility of online monitoring. This article proposes a new non-contact method for measuring the surface roughness that is straightforward to implement and easy to extend to online monitoring processes. The key element is a liquid-crystal-based spatial light modulator, integrated in an interferometric setup. By varying the imprinted phase of the modulator, a correlation between the imprinted phase and the fringe visibility of an interferogram is measured, and the surface roughness can be derived. This paper presents the theoretical approach of the method and first simulation and experimental results for a set of surface roughnesses. The experimental results are compared with values obtained by an atomic force microscope and a stylus profiler. PMID:28294990

  20. Comparison of AGE and Spectral Methods for the Simulation of Far-Wakes

    NASA Technical Reports Server (NTRS)

    Bisset, D. K.; Rogers, M. M.; Kega, Dennis (Technical Monitor)

    1999-01-01

    Turbulent flow simulation methods based on finite differences are attractive for their simplicity, flexibility and efficiency, but not always for accuracy or stability. This report demonstrates that a good compromise is possible with the Advected Grid Explicit (AGE) method. AGE has proven to be both efficient and accurate for simulating turbulent free-shear flows, including planar mixing layers and planar jets. Its efficiency results from its localized fully explicit finite difference formulation (Bisset 1998a,b) that is very straightforward to compute, outweighing the need for a fairly small timestep. Also, most of the successful simulations were slightly under-resolved, and therefore they were, in effect, large-eddy simulations (LES) without a sub-grid-scale (SGS) model, rather than direct numerical simulations (DNS). The principle is that the role of the smallest scales of turbulent motion (when the Reynolds number is not too low) is to dissipate turbulent energy, and therefore they do not have to be simulated when the numerical method is inherently dissipative at its resolution limits. Such simulations are termed 'auto-LES' (LES with automatic SGS modeling) in this report.

  1. IMPLEMENTATION AND VALIDATION OF A FULLY IMPLICIT ACCUMULATOR MODEL IN RELAP-7

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Haihua; Zou, Ling; Zhang, Hongbin

    2016-01-01

    This paper presents the implementation and validation of an accumulator model in RELAP-7 under the framework of the preconditioned Jacobian-free Newton Krylov (JFNK) method, based on the similar model used in RELAP5. RELAP-7 is a new nuclear reactor system safety analysis code being developed at the Idaho National Laboratory (INL). RELAP-7 is a fully implicit system code. The JFNK and preconditioning methods used in RELAP-7 are briefly discussed. The slightly modified accumulator model is summarized for completeness. The implemented model was validated with the LOFT L3-1 test and benchmarked against RELAP5 results. RELAP-7 and RELAP5 had almost identical results for the accumulator gas pressure and water level, although there were some minor differences in other parameters such as accumulator gas temperature and tank wall temperature. One advantage of the JFNK method is the ease of maintaining and modifying models, owing to the full separation of numerical methods from physical models. It would be straightforward to extend the current RELAP-7 accumulator model to simulate the advanced accumulator design.
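
    The practical appeal of JFNK (the solver needs only residual evaluations, never an assembled Jacobian) can be seen in a toy boundary-value problem with SciPy's newton_krylov; the equation and grid below are invented stand-ins, not RELAP-7 physics.

        import numpy as np
        from scipy.optimize import newton_krylov

        def residual(u):
            """Toy steady-state residual: u'' = 5*exp(u), u(0)=1, u(1)=0.
            JFNK needs only this function; the Jacobian action is obtained
            by finite-differencing residual evaluations internally."""
            h = 1.0 / (len(u) - 1)
            r = np.empty_like(u)
            r[0], r[-1] = u[0] - 1.0, u[-1]
            r[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2 - 5.0 * np.exp(u[1:-1])
            return r

        u = newton_krylov(residual, np.linspace(1.0, 0.0, 50), method="lgmres")
        print(float(np.abs(residual(u)).max()))   # residual driven near zero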

  2. SORTAN: a Unix program for calculation and graphical presentation of fault slip as induced by stresses

    NASA Astrophysics Data System (ADS)

    Pascal, Christophe

    2004-04-01

    Stress inversion programs are nowadays frequently used in tectonic analysis. The purpose of this family of programs is to reconstruct the stress tensor characteristics from fault slip data acquired in the field or derived from earthquake focal mechanisms (i.e. inverse methods). Until now, little attention has been paid to direct methods (i.e. to determine fault slip directions from an inferred stress tensor). During the 1990s, the fast increase in resolution in 3D seismic reflection techniques made it possible to determine the geometry of subsurface faults with a satisfactory accuracy but not to determine precisely their kinematics. This recent improvement allows the use of direct methods. A computer program, namely SORTAN, is introduced. The program is highly portable on Unix platforms, straightforward to install and user-friendly. The computation is based on classical stress-fault slip relationships and allows for fast treatment of a set of faults and graphical presentation of the results (i.e. slip directions). In addition, the SORTAN program permits one to test the sensitivity of the results to input uncertainties. It is a complementary tool to classical stress inversion methods and can be used to check the mechanical consistency and the limits of structural interpretations based upon 3D seismic reflection surveys.
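
    The direct method's core computation (slip parallel to the resolved shear traction) takes only a few lines; the sketch below, with an invented Andersonian stress state and fault orientation, is a generic Wallace-Bott calculation rather than SORTAN's implementation.

        import numpy as np

        def slip_direction(stress, normal):
            """Wallace-Bott style direct method: the predicted slip direction
            is the shear component of the traction on the fault plane."""
            n = normal / np.linalg.norm(normal)
            t = stress @ n                       # traction vector on the plane
            t_shear = t - (t @ n) * n            # remove the normal component
            return t_shear / np.linalg.norm(t_shear)

        # Illustrative stress tensor (MPa, compression negative), x=E, y=N, z=up
        stress = np.diag([-60.0, -30.0, -10.0])
        # Fault plane dipping 60 degrees, strike along y (invented geometry)
        normal = np.array([np.sin(np.radians(60)), 0.0, np.cos(np.radians(60))])
        print(slip_direction(stress, normal).round(3))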

  3. Dispersive heterodyne probing method for laser frequency stabilization based on spectral hole burning in rare-earth doped crystals.

    PubMed

    Gobron, O; Jung, K; Galland, N; Predehl, K; Le Targat, R; Ferrier, A; Goldner, P; Seidelin, S; Le Coq, Y

    2017-06-26

    Frequency-locking a laser to a spectral hole in rare-earth doped crystals at cryogenic temperature has been shown to be a promising alternative to the use of high finesse Fabry-Perot cavities when seeking a laser with very high short-term stability (M. J. Thorpe et al., Nature Photonics 5, 688 (2011)). We demonstrate here a novel technique for achieving such stabilization, based on generating a heterodyne beat-note between a master laser and a slave laser whose dephasing, caused by propagation near a spectral hole, generates the error signal of the frequency lock. The master laser is far detuned from the center of the inhomogeneous absorption profile, and therefore exhibits only limited interaction with the crystal despite a potentially high optical power. The demodulation and frequency corrections are generated digitally with a hardware and software implementation based on a field-programmable gate array and a Software Defined Radio platform, making it straightforward to address several frequency channels (spectral holes) in parallel.

  4. Synthesis and Characterization of Highly Crystalline Graphene Aerogels

    DOE PAGES

    Worsley, Marcus A.; Pham, Thang T.; Yan, Aiming; ...

    2014-10-06

    Aerogels are used in a broad range of scientific and industrial applications due to their large surface areas, ultrafine pore sizes, and extremely low densities. Recently, a large number of reports have described graphene aerogels based on the reduction of graphene oxide (GO). Though these GO-based aerogels represent a considerable advance relative to traditional carbon aerogels, they remain significantly inferior to individual graphene sheets due to their poor crystallinity. Here, we report a straightforward method to synthesize highly crystalline GO-based graphene aerogels via high-temperature processing common in commercial graphite production. The crystallization of the graphene aerogels versus annealing temperature is characterized using Raman and X-ray absorption spectroscopy, X-ray diffraction, and electron microscopy. Nitrogen porosimetry shows that the highly crystalline graphene macrostructure maintains a high surface area and ultrafine pore size. Because of their enhanced crystallinity, these graphene aerogels exhibit a ~200 °C improvement in oxidation temperature and an order of magnitude increase in electrical conductivity.

  5. WEC Design Response Toolbox v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coe, Ryan; Michelen, Carlos; Eckert-Gallup, Aubrey

    2016-03-30

    The WEC Design Response Toolbox (WDRT) is a numerical toolbox for design-response analysis of wave energy converters (WECs). The WDRT was developed during a series of efforts to better understand WEC survival design. The WDRT has been designed as a tool for researchers and developers, enabling the straightforward application of statistical and engineering methods. The toolbox includes methods for short-term extreme response, environmental characterization, long-term extreme response and risk analysis, fatigue, and design wave composition.

  6. Integrated Hamiltonian sampling: a simple and versatile method for free energy simulations and conformational sampling.

    PubMed

    Mori, Toshifumi; Hamers, Robert J; Pedersen, Joel A; Cui, Qiang

    2014-07-17

    Motivated by specific applications and the recent work of Gao and co-workers on integrated tempering sampling (ITS), we have developed a novel sampling approach referred to as integrated Hamiltonian sampling (IHS). IHS is straightforward to implement and complementary to existing methods for free energy simulation and enhanced configurational sampling. The method carries out sampling using an effective Hamiltonian constructed by integrating the Boltzmann distributions of a series of Hamiltonians. By judiciously selecting the weights of the different Hamiltonians, one achieves rapid transitions among the energy landscapes that underlie different Hamiltonians and therefore an efficient sampling of important regions of the conformational space. Along this line, IHS shares similar motivations as the enveloping distribution sampling (EDS) approach of van Gunsteren and co-workers, although the ways that distributions of different Hamiltonians are integrated are rather different in IHS and EDS. Specifically, we report efficient ways for determining the weights using a combination of histogram flattening and weighted histogram analysis approaches, which make it straightforward to include many end-state and intermediate Hamiltonians in IHS so as to enhance its flexibility. Using several relatively simple condensed phase examples, we illustrate the implementation and application of IHS as well as potential developments for the near future. The relation of IHS to several related sampling methods such as Hamiltonian replica exchange molecular dynamics and λ-dynamics is also briefly discussed.
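
    The integrated-Hamiltonian idea can be caricatured in one dimension: sum the Boltzmann factors of two end-state potentials into an effective potential and run plain Metropolis on it, so both basins are visited in a single trajectory. The wells, weights, and temperature below are invented, and the reweighting back to end-state averages is omitted.

        import numpy as np

        kT = 1.0
        def u1(x): return (x - 2.0) ** 2           # end-state Hamiltonian A
        def u2(x): return (x + 2.0) ** 2           # end-state Hamiltonian B

        def u_eff(x, w=(0.5, 0.5)):
            # Boltzmann factors are summed, so both basins stay thermally
            # accessible within one effective Hamiltonian
            z = w[0] * np.exp(-u1(x) / kT) + w[1] * np.exp(-u2(x) / kT)
            return -kT * np.log(z)

        rng = np.random.default_rng(0)
        x, samples = 2.0, []
        for _ in range(20000):                     # plain Metropolis on U_eff
            xp = x + rng.normal(0.0, 0.5)
            if np.log(rng.random()) < -(u_eff(xp) - u_eff(x)) / kT:
                x = xp
            samples.append(x)
        s = np.array(samples)
        print("fraction in right-hand well:", float((s > 0).mean()))  # ~0.5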

  7. New method for characterizing paper coating structures using argon ion beam milling and field emission scanning electron microscopy.

    PubMed

    Dahlström, C; Allem, R; Uesaka, T

    2011-02-01

    We have developed a new method for characterizing microstructures of paper coating using the argon ion beam milling technique and field emission scanning electron microscopy. The combination of these two techniques produces extremely high-quality images with very few artefacts, which are particularly suited for quantitative analyses of coating structures. A new evaluation method has been developed by using the marker-controlled watershed segmentation technique on the secondary electron images. The high-quality secondary electron images with well-defined pores make it possible to use this semi-automatic segmentation method. One advantage of using secondary electron images instead of backscattered electron images is being able to avoid possible overestimation of the porosity because of the signal depth. A comparison was made between the new method and the conventional method using greyscale histogram thresholding of backscattered electron images. The results showed that the conventional method overestimated the pore area by 20% and detected around 5% more pores than the new method. As examples of the application of the new method, we have investigated the distributions of coating binders, and the relationship between local coating porosity and base sheet structures. The technique revealed, for the first time with direct evidence, the long-suspected coating non-uniformity, i.e. binder migration, and the correlation between coating porosity and base sheet mass density, in a straightforward way. © 2010 The Authors. Journal compilation © 2010 The Royal Microscopical Society.
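
    A hedged sketch of marker-controlled watershed segmentation with scikit-image, using a synthetic blob mask in place of a secondary electron image: distance-transform peaks seed the markers, and the watershed splits touching pores.

        import numpy as np
        from scipy import ndimage as ndi
        from skimage.feature import peak_local_max
        from skimage.segmentation import watershed

        # Synthetic binary "pore" mask standing in for a thresholded SE image
        rng = np.random.default_rng(2)
        img = np.zeros((128, 128), dtype=bool)
        for cy, cx, r in rng.integers(10, 118, (20, 3)):
            yy, xx = np.ogrid[:128, :128]
            img |= (yy - cy) ** 2 + (xx - cx) ** 2 < (r % 9 + 3) ** 2

        # Marker-controlled watershed: distance-transform peaks seed the basins
        dist = ndi.distance_transform_edt(img)
        peaks = peak_local_max(dist, min_distance=5, labels=img)
        markers = np.zeros_like(dist, dtype=int)
        markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
        labels = watershed(-dist, markers, mask=img)
        print("pores found:", int(labels.max()))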

  8. Kinematics of an in-parallel actuated manipulator based on the Stewart platform mechanism

    NASA Technical Reports Server (NTRS)

    Williams, Robert L., II

    1992-01-01

    This paper presents kinematic equations and solutions for an in-parallel actuated robotic mechanism based on Stewart's platform. These equations are required for inverse position and resolved rate (inverse velocity) platform control. NASA LaRC has a Vehicle Emulator System (VES) platform designed by MIT which is based on Stewart's platform. The inverse position solution is straight-forward and computationally inexpensive. Given the desired position and orientation of the moving platform with respect to the base, the lengths of the prismatic leg actuators are calculated. The forward position solution is more complicated and theoretically has 16 solutions. The position and orientation of the moving platform with respect to the base is calculated given the leg actuator lengths. Two methods are pursued in this paper to solve this problem. The resolved rate (inverse velocity) solution is derived. Given the desired Cartesian velocity of the end-effector, the required leg actuator rates are calculated. The Newton-Raphson Jacobian matrix resulting from the second forward position kinematics solution is a modified inverse Jacobian matrix. Examples and simulations are given for the VES.
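
    The inverse position solution is indeed a one-liner per leg: transform each platform attachment point by the pose and take the distance to its base point. The sketch below uses invented hexagonal attachment geometry, not the VES dimensions.

        import numpy as np

        def leg_lengths(base_pts, plat_pts, R, p):
            """Inverse position solution: for pose (R, p) of the moving
            platform, each leg length is |p + R*a_i - b_i|."""
            world = (R @ plat_pts.T).T + p        # platform points in base frame
            return np.linalg.norm(world - base_pts, axis=1)

        def rot_z(t):
            c, s = np.cos(t), np.sin(t)
            return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

        # Illustrative hexagonal attachment points (not the VES geometry)
        ang_b = np.deg2rad([0, 60, 120, 180, 240, 300])
        ang_p = ang_b + np.deg2rad(30)
        base_pts = np.column_stack([2 * np.cos(ang_b), 2 * np.sin(ang_b), 0 * ang_b])
        plat_pts = np.column_stack([1 * np.cos(ang_p), 1 * np.sin(ang_p), 0 * ang_p])

        pose_R, pose_p = rot_z(np.deg2rad(5)), np.array([0.1, -0.05, 1.5])
        print(leg_lengths(base_pts, plat_pts, pose_R, pose_p).round(4))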

  9. Efficient synthesis of cysteine-rich cyclic peptides through intramolecular native chemical ligation of N-Hnb-Cys peptide crypto-thioesters.

    PubMed

    Terrier, Victor P; Delmas, Agnès F; Aucagne, Vincent

    2017-01-04

    We herein introduce a straightforward synthetic route to cysteine-containing cyclic peptides based on the intramolecular native chemical ligation of in situ generated thioesters. Key precursors are N-Hnb-Cys crypto-thioesters, easily synthesized by Fmoc-based SPPS. The strategy is applied to a representative range of naturally occurring cyclic disulfide-rich peptide sequences.

  10. Simplification of the time-dependent generalized self-interaction correction method using two sets of orbitals: Application of the optimized effective potential formalism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Messud, J.; Dinh, P. M.; Suraud, Eric

    2009-10-15

    We propose a simplification of the time-dependent self-interaction correction (TD-SIC) method using two sets of orbitals, applying the optimized effective potential (OEP) method. The resulting scheme is called time-dependent 'generalized SIC-OEP'. A straightforward approximation, using the spatial localization of one set of orbitals, leads to the 'generalized SIC-Slater' formalism. We show that it represents a great improvement compared to the traditional SIC-Slater and Krieger-Li-Iafrate formalisms.

  11. Simplification of the time-dependent generalized self-interaction correction method using two sets of orbitals: Application of the optimized effective potential formalism

    NASA Astrophysics Data System (ADS)

    Messud, J.; Dinh, P. M.; Reinhard, P.-G.; Suraud, Eric

    2009-10-01

    We propose a simplification of the time-dependent self-interaction correction (TD-SIC) method using two sets of orbitals, applying the optimized effective potential (OEP) method. The resulting scheme is called time-dependent “generalized SIC-OEP.” A straightforward approximation, using the spatial localization of one set of orbitals, leads to the “generalized SIC-Slater” formalism. We show that it represents a great improvement compared to the traditional SIC-Slater and Krieger-Li-Iafrate formalisms.

  12. Galerkin Method for Nonlinear Dynamics

    NASA Astrophysics Data System (ADS)

    Noack, Bernd R.; Schlegel, Michael; Morzynski, Marek; Tadmor, Gilead

    A Galerkin method is presented for control-oriented reduced-order models (ROM). This method generalizes linear approaches elaborated by M. Morzyński et al. for the nonlinear Navier-Stokes equation. These ROM are used as plants for control design in the chapters by G. Tadmor et al., S. Siegel, and R. King in this volume. Focus is placed on empirical ROM which compress flow data in the proper orthogonal decomposition (POD). The chapter shall provide a complete description for construction of straight-forward ROM as well as the physical understanding and teste
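
    The POD compression step can be sketched with an SVD of a snapshot matrix: the leading left singular vectors are the modes, and projecting snapshots onto them gives the temporal coefficients of the ROM. The snapshot data below are synthetic, and the Galerkin projection of the Navier-Stokes operator itself is omitted.

        import numpy as np

        # Synthetic snapshot matrix: each column is one flow state (illustrative)
        x = np.linspace(0, 2 * np.pi, 200)
        t = np.linspace(0, 10, 80)
        snaps = (np.sin(x[:, None] - t[None, :])
                 + 0.3 * np.sin(3 * x[:, None] + 2 * t[None, :]))

        mean = snaps.mean(axis=1, keepdims=True)
        U, s, _ = np.linalg.svd(snaps - mean, full_matrices=False)
        energy = np.cumsum(s**2) / np.sum(s**2)
        r = int(np.searchsorted(energy, 0.99) + 1)   # modes for 99% energy
        modes = U[:, :r]                             # POD basis
        a = modes.T @ (snaps - mean)                 # temporal coefficients
        print(r, a.shape)                            # low-order description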

  13. A heuristic way of obtaining the Kerr metric

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Enderlein, J.

    1997-09-01

    An intuitive, straightforward way of finding the metric of a rotating black hole is presented, based on the algebra of differential forms. The representation obtained for the metric displays a simplicity which is not obvious in the usual Boyer-Lindquist coordinates. © 1997 American Association of Physics Teachers.

  14. Embracing the Cloud: Six Ways to Look at the Shift to Cloud Computing

    ERIC Educational Resources Information Center

    Ullman, David F.; Haggerty, Blake

    2010-01-01

    Cloud computing is the latest paradigm shift for the delivery of IT services. Where previous paradigms (centralized, decentralized, distributed) were based on fairly straightforward approaches to technology and its management, cloud computing is radical in comparison. The literature on cloud computing, however, suffers from many divergent…

  15. Visual reconciliation of alternative similarity spaces in climate modeling

    Treesearch

    J Poco; A Dasgupta; Y Wei; William Hargrove; C.R. Schwalm; D.N. Huntzinger; R Cook; E Bertini; C.T. Silva

    2015-01-01

    Visual data analysis often requires grouping of data objects based on their similarity. In many application domains researchers use algorithms and techniques like clustering and multidimensional scaling to extract groupings from data. While extracting these groups using a single similarity criteria is relatively straightforward, comparing alternative criteria poses...

  16. Proper Acknowledgment?

    ERIC Educational Resources Information Center

    East, Julianne

    2005-01-01

    The concern in Australian universities about the prevalence of plagiarism has led to the development of policies about academic integrity and in turn focused attention on the need to inform students about how to avoid plagiarism and how to properly acknowledge. Teaching students how to avoid plagiarism can appear to be straightforward if based on…

  17. BAT - The Bayesian analysis toolkit

    NASA Astrophysics Data System (ADS)

    Caldwell, Allen; Kollár, Daniel; Kröninger, Kevin

    2009-11-01

    We describe the development of a new toolkit for data analysis. The analysis package is based on Bayes' Theorem, and is realized with the use of Markov Chain Monte Carlo. This gives access to the full posterior probability distribution. Parameter estimation, limit setting and uncertainty propagation are implemented in a straightforward manner.
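
    The Markov Chain Monte Carlo engine behind such a toolkit can be caricatured with a few lines of Metropolis sampling for a one-parameter posterior; this generic Python sketch (flat prior, Gaussian likelihood, invented data) is not BAT's C++ interface.

        import numpy as np

        def log_post(theta, data):
            """Gaussian likelihood with unknown mean, flat prior on [-10, 10]."""
            if abs(theta) > 10:
                return -np.inf
            return -0.5 * np.sum((data - theta) ** 2)

        rng = np.random.default_rng(3)
        data = rng.normal(1.7, 1.0, 50)
        chain, theta = [], 0.0
        for _ in range(10000):                 # Metropolis random walk
            prop = theta + rng.normal(0, 0.3)
            if np.log(rng.random()) < log_post(prop, data) - log_post(theta, data):
                theta = prop
            chain.append(theta)
        burned = np.array(chain[2000:])        # discard burn-in
        # Parameter estimate and interval straight from the posterior samples
        print(burned.mean().round(3), np.percentile(burned, [16, 84]).round(3))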

  18. University Students' Reading of Their First-Year Mathematics Textbooks

    ERIC Educational Resources Information Center

    Shepherd, Mary D.; Selden, Annie; Selden, John

    2012-01-01

    This article reports the observed behaviors and difficulties that 11 precalculus and calculus students exhibited in reading new passages from their mathematics textbooks. To gauge the "effectiveness" of these students' reading, we asked them to attempt straightforward mathematical tasks, based directly on what they had just read. The…

  19. Cultural and Disciplinary Variation in Academic Discourse: The Issue of Influencing Factors

    ERIC Educational Resources Information Center

    Yakhontova, Tatyana

    2006-01-01

    This paper demonstrates the role of disciplinary context in shaping the common rhetorical and textual features of research texts in different languages and, more broadly, problematizes the validity of straightforward sociocultural explanations of rhetorical differences frequently used in the literature. The research is based on the contrastive…

  20. Pedagogical Sensemaking or "Doing School": In Well-Designed Workshop Sessions, Facilitation Makes the Difference

    ERIC Educational Resources Information Center

    Olmstead, Alice; Turpen, Chandra

    2017-01-01

    Although physics education researchers often use workshops to promote instructional change in higher education, little research has been done to investigate workshop design. Initial evidence suggests that many workshop sessions focus primarily on raising faculty's awareness of research-based instructional strategies, a fairly straightforward goal…

  1. Realistic absorption coefficient of each individual film in a multilayer architecture

    NASA Astrophysics Data System (ADS)

    Cesaria, M.; Caricato, A. P.; Martino, M.

    2015-02-01

    A spectrophotometric strategy, termed the multilayer-method (ML-method), is presented and discussed to realistically calculate the absorption coefficient of each individual layer embedded in multilayer architectures without reverse engineering, numerical refinements, or assumptions about layer homogeneity and thickness. The strategy extends, in a non-straightforward way, a consolidated route already published by the authors and here termed the basic-method, able to accurately characterize an absorbing film covering a transparent substrate. The ML-method inherently accounts for non-measurable contributions of the interfaces (including multiple reflections), describes the specific film structure as determined by the multilayer architecture and by the deposition approach and parameters used, exploits simple mathematics, and has a wide range of applicability (high-to-weak absorption regions, thick-to-ultrathin films). Reliability tests are performed on films and multilayers based on a well-known material (indium tin oxide) by deliberately changing the film structural quality through doping, thickness-tuning and underlying supporting-films. The results are consistent with information obtained by standard (optical and structural) analysis, the basic-method, and band gap values reported in the literature. The discussed example-applications demonstrate the ability of the ML-method to overcome the drawbacks commonly limiting an accurate description of multilayer architectures.
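
    The ML-method itself is not reproduced here, but the single-film relation it builds on (the basic-method setting of an absorbing film on a transparent substrate) can be sketched: the absorption coefficient follows from measured transmittance after a first-pass reflection correction. The relation and the sample numbers below are illustrative assumptions, not the authors' exact procedure.

    ```python
    import numpy as np

    def absorption_coefficient(T, R, d_cm):
        """Estimate alpha (1/cm) of a single film from measured transmittance T
        and reflectance R, using the common first-pass relation
        T ~ (1 - R) * exp(-alpha * d).  A sketch, not the ML-method itself."""
        return -np.log(T / (1.0 - R)) / d_cm

    # Illustrative numbers for a ~200 nm film:
    T, R, d = 0.62, 0.12, 200e-7  # transmittance, reflectance, thickness in cm
    print(f"alpha ~ {absorption_coefficient(T, R, d):.3e} cm^-1")
    ```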

  2. Microrheology with optical tweezers: measuring the relative viscosity of solutions 'at a glance'.

    PubMed

    Tassieri, Manlio; Del Giudice, Francesco; Robertson, Emma J; Jain, Neena; Fries, Bettina; Wilson, Rab; Glidle, Andrew; Greco, Francesco; Netti, Paolo Antonio; Maffettone, Pier Luca; Bicanic, Tihana; Cooper, Jonathan M

    2015-03-06

    We present a straightforward method for measuring the relative viscosity of fluids via a simple graphical analysis of the normalised position autocorrelation function of an optically trapped bead, without the need of embarking on laborious calculations. The advantages of the proposed microrheology method are evident when it is adopted for measurements of materials whose availability is limited, such as those involved in biological studies. The method has been validated by direct comparison with conventional bulk rheology methods, and has been applied both to characterise synthetic linear polyelectrolyte solutions and to study biomedical samples.
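
    For a bead held in a harmonic trap in a Newtonian fluid, the normalised position autocorrelation decays as exp(-t/lambda) with a lag time proportional to the fluid viscosity, so the sample-to-reference ratio of lag times reads off the relative viscosity. A minimal sketch under that Newtonian assumption; the 1/e read-off is a simplification of the paper's graphical analysis, and the signals are assumed to be recorded bead positions.

    ```python
    import numpy as np

    def norm_autocorr(x, max_lag):
        """Normalised position autocorrelation A(tau) = <x(t)x(t+tau)> / <x^2>."""
        x = x - x.mean()
        var = np.mean(x * x)
        return np.array([np.mean(x[:-k or None] * x[k:]) / var for k in range(max_lag)])

    def lag_time(x, dt, max_lag=200):
        """Decay time from the first crossing of A(tau) = 1/e.
        Assumes the decay is resolved within max_lag samples."""
        A = norm_autocorr(x, max_lag)
        k = np.argmax(A < np.exp(-1.0))
        return k * dt

    # Relative viscosity = ratio of decay times, sample over reference (e.g. water):
    # eta_rel = lag_time(x_sample, dt) / lag_time(x_water, dt)
    ```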

  3. Microrheology with Optical Tweezers: Measuring the relative viscosity of solutions ‘at a glance'

    PubMed Central

    Tassieri, Manlio; Giudice, Francesco Del; Robertson, Emma J.; Jain, Neena; Fries, Bettina; Wilson, Rab; Glidle, Andrew; Greco, Francesco; Netti, Paolo Antonio; Maffettone, Pier Luca; Bicanic, Tihana; Cooper, Jonathan M.

    2015-01-01

    We present a straightforward method for measuring the relative viscosity of fluids via a simple graphical analysis of the normalised position autocorrelation function of an optically trapped bead, without the need of embarking on laborious calculations. The advantages of the proposed microrheology method are evident when it is adopted for measurements of materials whose availability is limited, such as those involved in biological studies. The method has been validated by direct comparison with conventional bulk rheology methods, and has been applied both to characterise synthetic linear polyelectrolyte solutions and to study biomedical samples. PMID:25743468

  4. Alignment method for parabolic trough solar concentrators

    DOEpatents

    Diver, Richard B [Albuquerque, NM

    2010-02-23

    A Theoretical Overlay Photographic (TOP) alignment method uses the overlay of a theoretical projected image of a perfectly aligned concentrator on a photographic image of the concentrator to align the mirror facets of a parabolic trough solar concentrator. The alignment method is practical and straightforward, and inherently aligns the mirror facets to the receiver. When integrated with clinometer measurements for which gravity and mechanical drag effects have been accounted for and which are made in a manner and location consistent with the alignment method, all of the mirrors on a common drive can be aligned and optimized for any concentrator orientation.

  5. Stellar Color Regression: A Spectroscopy-based Method for Color Calibration to a Few Millimagnitude Accuracy and the Recalibration of Stripe 82

    NASA Astrophysics Data System (ADS)

    Yuan, Haibo; Liu, Xiaowei; Xiang, Maosheng; Huang, Yang; Zhang, Huihua; Chen, Bingqiu

    2015-02-01

    In this paper we propose a spectroscopy-based stellar color regression (SCR) method to perform accurate color calibration for modern imaging surveys, taking advantage of millions of stellar spectra now available. The method is straightforward, insensitive to systematic errors in the spectroscopically determined stellar atmospheric parameters, applicable to regions that are effectively covered by spectroscopic surveys, and capable of delivering an accuracy of a few millimagnitudes for color calibration. As an illustration, we have applied the method to the Sloan Digital Sky Survey (SDSS) Stripe 82 data. With a total number of 23,759 spectroscopically targeted stars, we have mapped out the small but strongly correlated color zero-point errors present in the photometric catalog of Stripe 82, and we improve the color calibration by a factor of two to three. Our study also reveals some small but significant magnitude dependence errors in the z band for some charge-coupled devices (CCDs). Such errors are likely to be present in all the SDSS photometric data. Our results are compared with those from a completely independent test based on the intrinsic colors of red galaxies presented by Ivezić et al. The comparison, as well as other tests, shows that the SCR method has achieved a color calibration internally consistent at a level of about 5 mmag in u - g, 3 mmag in g - r, and 2 mmag in r - i and i - z. Given the power of the SCR method, we discuss briefly the potential benefits by applying the method to existing, ongoing, and upcoming imaging surveys.
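
    Schematically, the SCR idea is a regression of observed colour on spectroscopic stellar parameters, after which the local calibration error is read off as the median residual per sky region. A sketch of that logic; the linear model, the three-parameter set, and the region grouping are illustrative stand-ins for the paper's actual fit.

    ```python
    import numpy as np

    def fit_intrinsic_color(params, color):
        """Least-squares fit of colour vs. (Teff, logg, [Fe/H]); a stand-in for
        the paper's regression.  params: (N, 3) array, color: (N,) array."""
        X = np.column_stack([np.ones(len(color)), params])
        coef, *_ = np.linalg.lstsq(X, color, rcond=None)
        return coef

    def zero_point_offsets(params, color, region_id, coef):
        """Median (observed - predicted) colour per sky region = calibration error."""
        pred = np.column_stack([np.ones(len(color)), params]) @ coef
        resid = color - pred
        return {r: np.median(resid[region_id == r]) for r in np.unique(region_id)}
    ```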

  6. The Betting Odds Rating System: Using soccer forecasts to forecast soccer.

    PubMed

    Wunderlich, Fabian; Memmert, Daniel

    2018-01-01

    Betting odds are frequently found to outperform mathematical models in sports-related forecasting tasks; however, the factors contributing to betting odds are not fully traceable, and in contrast to rating-based forecasts no straightforward measure of team-specific quality is deducible from the betting odds. The present study investigates the approach of combining the methods of mathematical models and the information included in betting odds. A soccer forecasting model based on the well-known ELO rating system and taking advantage of betting odds as a source of information is presented. Data from almost 15,000 soccer matches (seasons 2007/2008 until 2016/2017) are used, including both domestic matches (English Premier League, German Bundesliga, Spanish Primera Division and Italian Serie A) and international matches (UEFA Champions League, UEFA Europa League). The novel betting odds based ELO model is shown to outperform classic ELO models, thus demonstrating that betting odds prior to a match contain more relevant information than the result of the match itself. It is shown how the novel model can help to gain valuable insights into the quality of soccer teams and its development over time, thus having a practical benefit in performance analysis. Moreover, it is argued that network based approaches might help in further improving rating and forecasting methods.
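
    The classic ELO update the model builds on is simple; the sketch below swaps the rating-based expected score for an odds-implied one, which is the gist of the approach. The exact blending used in the paper is not reproduced, and the naive odds-to-probability normalisation below only approximately removes the bookmaker margin.

    ```python
    def elo_expected(r_home, r_away, home_adv=100.0):
        """Classic ELO expected score for the home team."""
        return 1.0 / (1.0 + 10.0 ** ((r_away - r_home - home_adv) / 400.0))

    def odds_implied_expectation(odds_home, odds_draw, odds_away):
        """Normalise inverse decimal odds into probabilities, then form the
        soccer-ELO expected score P(win) + 0.5 * P(draw)."""
        inv = [1.0 / o for o in (odds_home, odds_draw, odds_away)]
        total = sum(inv)
        return inv[0] / total + 0.5 * inv[1] / total

    def update(r_home, r_away, result, odds, k=20.0):
        """result: 1 home win, 0.5 draw, 0 away win.  Using the odds-implied
        expectation instead of elo_expected() is the odds-based variant."""
        expected = odds_implied_expectation(*odds)
        delta = k * (result - expected)
        return r_home + delta, r_away - delta
    ```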

  7. Polyacrylamide medium for the electrophoretic separation of biomolecules

    DOEpatents

    Madabhushi, Ramakrishna S.; Gammon, Stuart A.

    2003-11-11

    A polyacrylamide medium for the electrophoretic separation of biomolecules. The polyacrylamide medium comprises high molecular weight polyacrylamides (PAAm) having a viscosity average molecular weight (Mv) of about 675-725 kDa, synthesized by a conventional redox polymerization technique. Using this separation medium, capillary electrophoresis of the BigDye DNA sequencing standard was performed. A single-base resolution of ~725 bases was achieved in ~60 minutes in a non-covalently coated capillary of 50 µm i.d. and 40 cm effective length, at a field of 160 V/cm and 40 °C. The resolution achieved with this formulation in separating DNA under identical conditions is much superior (725 bases vs. 625 bases) and faster (60 min vs. 75 min) compared with commercially available PAAm, such as that supplied by Amersham. The formulation method employed here to synthesize PAAm is straightforward and simple, and does not require cumbersome methods such as emulsion polymerization to achieve very high molecular weights. The formulation also does not require separation of the PAAm from the reaction mixture prior to reconstituting the polymer to a final concentration. Furthermore, it is prepared from a single average molecular weight PAAm, as opposed to the mixture of two different average molecular weight PAAms previously required to achieve high resolution.

  8. The Betting Odds Rating System: Using soccer forecasts to forecast soccer

    PubMed Central

    Memmert, Daniel

    2018-01-01

    Betting odds are frequently found to outperform mathematical models in sports-related forecasting tasks; however, the factors contributing to betting odds are not fully traceable, and in contrast to rating-based forecasts no straightforward measure of team-specific quality is deducible from the betting odds. The present study investigates the approach of combining the methods of mathematical models and the information included in betting odds. A soccer forecasting model based on the well-known ELO rating system and taking advantage of betting odds as a source of information is presented. Data from almost 15,000 soccer matches (seasons 2007/2008 until 2016/2017) are used, including both domestic matches (English Premier League, German Bundesliga, Spanish Primera Division and Italian Serie A) and international matches (UEFA Champions League, UEFA Europa League). The novel betting odds based ELO model is shown to outperform classic ELO models, thus demonstrating that betting odds prior to a match contain more relevant information than the result of the match itself. It is shown how the novel model can help to gain valuable insights into the quality of soccer teams and its development over time, thus having a practical benefit in performance analysis. Moreover, it is argued that network based approaches might help in further improving rating and forecasting methods. PMID:29870554

  9. Phase unwinding for dictionary compression with multiple channel transmission in magnetic resonance fingerprinting.

    PubMed

    Lattanzi, Riccardo; Zhang, Bei; Knoll, Florian; Assländer, Jakob; Cloos, Martijn A

    2018-06-01

    Magnetic Resonance Fingerprinting reconstructions can become computationally intractable with multiple transmit channels if the B1+ phases are included in the dictionary. We describe a general method that allows the transmit phases to be omitted. We show that this enables straightforward implementation of dictionary compression to further reduce the problem dimensionality. We merged the raw data of each RF source into a single k-space dataset, extracted the transceiver phases from the corresponding reconstructed images and used them to unwind the phase in each time frame. All phase-unwound time frames were combined in a single set before performing SVD-based compression. We conducted synthetic, phantom and in-vivo experiments to demonstrate the feasibility of SVD-based compression in the case of two-channel transmission. Unwinding the phases before SVD-based compression yielded artifact-free parameter maps. For fully sampled acquisitions, parameters were accurate with as few as 6 compressed time frames. SVD-based compression performed well in-vivo with highly under-sampled acquisitions using 16 compressed time frames, which reduced reconstruction time from 750 to 25 min. Our method reduces the dimensions of the dictionary atoms and enables the implementation of any fingerprint compression strategy in the case of multiple transmit channels. Copyright © 2018 Elsevier Inc. All rights reserved.
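
    The compression step itself is standard truncated SVD along the time dimension; a minimal sketch follows, in which the dictionary shape and rank are illustrative and the phase-unwinding step is assumed to have already been applied.

    ```python
    import numpy as np

    def compress_dictionary(D, rank):
        """D: (n_timeframes, n_atoms) complex dictionary, phases already unwound.
        Returns the rank-'rank' temporal basis and the compressed dictionary."""
        U, s, Vh = np.linalg.svd(D, full_matrices=False)
        Ur = U[:, :rank]                 # compressed temporal basis
        D_c = Ur.conj().T @ D            # (rank, n_atoms) compressed atoms
        return Ur, D_c

    # Matching then happens in the compressed space:
    # y_c = Ur.conj().T @ y
    # best_atom = np.argmax(np.abs(D_c.conj().T @ y_c))
    ```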

  10. A simple recipe for setting up the flux equations of cyclic and linear reaction schemes of ion transport with a high number of states: The arrow scheme.

    PubMed

    Hansen, Ulf-Peter; Rauh, Oliver; Schroeder, Indra

    2016-01-01

    The calculation of flux equations or current-voltage relationships in reaction kinetic models with a high number of states can be very cumbersome. Here, a recipe based on an arrow scheme is presented, which yields straightforward access to the minimum form of the flux equations and the occupation probability of the involved states in cyclic and linear reaction schemes. This is extremely simple for cyclic schemes without branches. If branches are involved, the effort of setting up the equations is somewhat higher. However, here too a straightforward recipe making use of so-called reserve factors is provided for implementing the branches into the cyclic scheme, enabling a simple treatment of such cases as well.
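
    The arrow scheme itself is a pencil-and-paper recipe, but its results can be cross-checked numerically: the steady-state occupation probabilities of any cyclic or linear scheme follow from the null space of the rate matrix. A generic linear-algebra sketch, not the arrow scheme itself; the 4-state cyclic rates are made up.

    ```python
    import numpy as np

    # Rate matrix Q for a 4-state cycle: Q[i, j] = rate from state j to state i,
    # diagonal chosen so that columns sum to zero (probability conservation).
    k = {(0, 1): 2.0, (1, 0): 1.0, (1, 2): 3.0, (2, 1): 0.5,
         (2, 3): 1.5, (3, 2): 2.5, (3, 0): 1.0, (0, 3): 0.8}
    Q = np.zeros((4, 4))
    for (i, j), rate in k.items():
        Q[i, j] += rate
    Q -= np.diag(Q.sum(axis=0))

    # Steady state: Q p = 0 with sum(p) = 1 -> null-space vector, normalised.
    w, v = np.linalg.eig(Q)
    p = np.real(v[:, np.argmin(np.abs(w))])
    p /= p.sum()
    print(p)  # occupation probabilities; net cycle flux follows from rate * p terms
    ```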

  11. A simple recipe for setting up the flux equations of cyclic and linear reaction schemes of ion transport with a high number of states: The arrow scheme

    PubMed Central

    Hansen, Ulf-Peter; Rauh, Oliver; Schroeder, Indra

    2016-01-01

    The calculation of flux equations or current-voltage relationships in reaction kinetic models with a high number of states can be very cumbersome. Here, a recipe based on an arrow scheme is presented, which yields straightforward access to the minimum form of the flux equations and the occupation probability of the involved states in cyclic and linear reaction schemes. This is extremely simple for cyclic schemes without branches. If branches are involved, the effort of setting up the equations is somewhat higher. However, here too a straightforward recipe making use of so-called reserve factors is provided for implementing the branches into the cyclic scheme, enabling a simple treatment of such cases as well. PMID:26646356

  12. Automatic peak selection by a Benjamini-Hochberg-based algorithm.

    PubMed

    Abbas, Ahmed; Kong, Xin-Bing; Liu, Zhi; Jing, Bing-Yi; Gao, Xin

    2013-01-01

    A common issue in bioinformatics is that computational methods often generate a large number of predictions sorted according to certain confidence scores. A key problem is then determining how many predictions must be selected to include most of the true predictions while maintaining reasonably high precision. In nuclear magnetic resonance (NMR)-based protein structure determination, for instance, computational peak picking methods are becoming more and more common, although expert-knowledge remains the method of choice to determine how many peaks among thousands of candidate peaks should be taken into consideration to capture the true peaks. Here, we propose a Benjamini-Hochberg (B-H)-based approach that automatically selects the number of peaks. We formulate the peak selection problem as a multiple testing problem. Given a candidate peak list sorted by either volumes or intensities, we first convert the peaks into p-values and then apply the B-H-based algorithm to automatically select the number of peaks. The proposed approach is tested on the state-of-the-art peak picking methods, including WaVPeak [1] and PICKY [2]. Compared with the traditional fixed number-based approach, our approach returns significantly more true peaks. For instance, by combining WaVPeak or PICKY with the proposed method, the missing peak rates are on average reduced by 20% and 26%, respectively, in a benchmark set of 32 spectra extracted from eight proteins. The consensus of the B-H-selected peaks from both WaVPeak and PICKY achieves 88% recall and 83% precision, which significantly outperforms each individual method and the consensus method without using the B-H algorithm. The proposed method can be used as a standard procedure for any peak picking method and straightforwardly applied to some other prediction selection problems in bioinformatics. The source code, documentation and example data of the proposed method is available at http://sfb.kaust.edu.sa/pages/software.aspx.
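
    The selection rule underneath is the standard Benjamini-Hochberg step-up procedure applied to the peak-derived p-values; a minimal sketch, with the significance level and p-values as illustrative inputs.

    ```python
    import numpy as np

    def bh_select(pvals, alpha=0.05):
        """Benjamini-Hochberg step-up: keep the largest k such that
        p_(k) <= k * alpha / m; returns indices of selected hypotheses."""
        p = np.asarray(pvals)
        m = len(p)
        order = np.argsort(p)
        thresh = alpha * np.arange(1, m + 1) / m
        below = p[order] <= thresh
        if not below.any():
            return np.array([], dtype=int)
        k = np.max(np.nonzero(below)[0])   # largest sorted index passing the bound
        return order[:k + 1]               # all peaks up to and including it

    peaks_p = [0.001, 0.008, 0.039, 0.041, 0.20, 0.74]
    print(bh_select(peaks_p))  # indices of automatically selected peaks
    ```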

  13. Automatic Peak Selection by a Benjamini-Hochberg-Based Algorithm

    PubMed Central

    Abbas, Ahmed; Kong, Xin-Bing; Liu, Zhi; Jing, Bing-Yi; Gao, Xin

    2013-01-01

    A common issue in bioinformatics is that computational methods often generate a large number of predictions sorted according to certain confidence scores. A key problem is then determining how many predictions must be selected to include most of the true predictions while maintaining reasonably high precision. In nuclear magnetic resonance (NMR)-based protein structure determination, for instance, computational peak picking methods are becoming more and more common, although expert-knowledge remains the method of choice to determine how many peaks among thousands of candidate peaks should be taken into consideration to capture the true peaks. Here, we propose a Benjamini-Hochberg (B-H)-based approach that automatically selects the number of peaks. We formulate the peak selection problem as a multiple testing problem. Given a candidate peak list sorted by either volumes or intensities, we first convert the peaks into p-values and then apply the B-H-based algorithm to automatically select the number of peaks. The proposed approach is tested on the state-of-the-art peak picking methods, including WaVPeak [1] and PICKY [2]. Compared with the traditional fixed number-based approach, our approach returns significantly more true peaks. For instance, by combining WaVPeak or PICKY with the proposed method, the missing peak rates are on average reduced by 20% and 26%, respectively, in a benchmark set of 32 spectra extracted from eight proteins. The consensus of the B-H-selected peaks from both WaVPeak and PICKY achieves 88% recall and 83% precision, which significantly outperforms each individual method and the consensus method without using the B-H algorithm. The proposed method can be used as a standard procedure for any peak picking method and straightforwardly applied to some other prediction selection problems in bioinformatics. The source code, documentation and example data of the proposed method is available at http://sfb.kaust.edu.sa/pages/software.aspx. PMID:23308147

  14. Measuring the performance of super-resolution reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Dijk, Judith; Schutte, Klamer; van Eekeren, Adam W. M.; Bijl, Piet

    2012-06-01

    For many military operations situational awareness is of great importance. This situational awareness and related tasks such as Target Acquisition can be acquired using cameras, of which the resolution is an important characteristic. Super resolution reconstruction algorithms can be used to improve the effective sensor resolution. In order to judge these algorithms and the conditions under which they operate best, performance evaluation methods are necessary. This evaluation, however, is not straightforward for several reasons. First of all, frequency-based evaluation techniques alone will not provide a correct answer, due to the fact that they are unable to discriminate between structure-related and noise-related effects. Secondly, most super-resolution packages perform additional image enhancement techniques such as noise reduction and edge enhancement. As these algorithms improve the results they cannot be evaluated separately. Thirdly, a single high-resolution ground truth is rarely available. Therefore, evaluation of the differences in high resolution between the estimated high resolution image and its ground truth is not that straightforward. Fourth, different artifacts can occur due to super-resolution reconstruction, which are not known on forehand and hence are difficult to evaluate. In this paper we present a set of new evaluation techniques to assess super-resolution reconstruction algorithms. Some of these evaluation techniques are derived from processing on dedicated (synthetic) imagery. Other evaluation techniques can be evaluated on both synthetic and natural images (real camera data). The result is a balanced set of evaluation algorithms that can be used to assess the performance of super-resolution reconstruction algorithms.

  15. Fault Analysis of Space Station DC Power Systems-Using Neural Network Adaptive Wavelets to Detect Faults

    NASA Technical Reports Server (NTRS)

    Momoh, James A.; Wang, Yanchun; Dolce, James L.

    1997-01-01

    This paper describes the application of neural network adaptive wavelets for fault diagnosis of the space station power system. The method combines the wavelet transform with a neural network by incorporating daughter wavelets into the weights. Therefore, the wavelet transform and the neural network training procedure become one stage, which avoids the complex computation of wavelet parameters and makes the procedure more straightforward. The simulation results show that the proposed method is very efficient for the identification of fault locations.

  16. Soliton and quasi-periodic wave solutions for b-type Kadomtsev-Petviashvili equation

    NASA Astrophysics Data System (ADS)

    Singh, Manjit; Gupta, R. K.

    2017-11-01

    In this paper, truncated Laurent expansion is used to obtain the bilinear equation of a nonlinear evolution equation. As an application of Hirota's method, multisoliton solutions are constructed from the bilinear equation. Extending the application of Hirota's method and employing multidimensional Riemann theta function, one and two-periodic wave solutions are also obtained in a straightforward manner. The asymptotic behavior of one and two-periodic wave solutions under small amplitude limits is presented, and their relations with soliton solutions are also demonstrated.

  17. Multiplicity-dependent and nonbinomial efficiency corrections for particle number cumulants

    NASA Astrophysics Data System (ADS)

    Bzdak, Adam; Holzmann, Romain; Koch, Volker

    2016-12-01

    In this article we extend previous work on efficiency corrections for cumulant measurements [Bzdak and Koch, Phys. Rev. C 86, 044904 (2012), 10.1103/PhysRevC.86.044904; Phys. Rev. C 91, 027901 (2015), 10.1103/PhysRevC.91.027901]. We will discuss the limitations of the methods presented in these papers. Specifically, we will consider multiplicity-dependent efficiencies as well as nonbinomial efficiency distributions. We will discuss the simplest and most straightforward methods to implement those corrections.
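
    In the simplest baseline case (binomial, multiplicity-independent efficiency), measured factorial moments are the true ones scaled by powers of the efficiency, so the correction is a division by eps^i before converting to cumulants. The sketch below shows only that baseline correction, not the multiplicity-dependent or nonbinomial extensions developed in the paper.

    ```python
    import numpy as np

    def factorial_moments(counts, kmax):
        """F_i = <N(N-1)...(N-i+1)> from event-by-event multiplicities."""
        N = np.asarray(counts, dtype=float)
        return [np.mean(np.prod([N - j for j in range(i)], axis=0))
                for i in range(1, kmax + 1)]

    def corrected_mean_and_c2(counts, eff):
        """Binomial efficiency baseline: true F_i = measured F_i / eff**i.
        Second cumulant from factorial moments: C2 = F2 + F1 - F1**2."""
        F = [f / eff**i for i, f in enumerate(factorial_moments(counts, 2), start=1)]
        return F[0], F[1] + F[0] - F[0] ** 2
    ```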

  18. Study of the extra-ionic electron distributions in semi-metallic structures by nuclear quadrupole resonance techniques

    NASA Technical Reports Server (NTRS)

    Murty, A. N.

    1976-01-01

    A straightforward self-consistent method was developed to estimate solid state electrostatic potentials, fields, and field gradients in ionic solids. The method is a direct practical application of basic electrostatics to the solid state and also helps in understanding the principles of crystal structure. The necessary mathematical equations, derived from first principles, are presented, and the systematic computational procedure developed to arrive at the solid state electrostatic field gradient values is given.

  19. Gravimetric capillary method for kinematic viscosity measurements

    NASA Technical Reports Server (NTRS)

    Rosenberger, Franz; Iwan, J.; Alexander, D.; Jin, Wei-Qing

    1992-01-01

    A novel version of the capillary method for viscosity measurements of liquids is presented. Viscosity data can be deduced in a straightforward way from mass transfer data obtained by differential weighing during the gravity-induced flow of the liquid between two cylindrical chambers. Tests of this technique with water, carbon tetrachloride, and ethanol suggest that this arrangement provides an accuracy of about +/- 1 percent. The technique facilitates operation under sealed, isothermal conditions and thus can readily be applied to reactive and/or high vapor pressure liquids.
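
    Under Hagen-Poiseuille flow, the mass transfer rate through the capillary is set by the driving head and the kinematic viscosity, so nu can be backed out of the measured dm/dt. A hedged sketch of that relation under a constant-head simplification; the geometry and numbers are illustrative, and the actual instrument tracks a time-varying head by differential weighing.

    ```python
    import math

    def kinematic_viscosity(dm_dt, r, L, dh, rho, g=9.81):
        """Poiseuille flow driven by a liquid head dh:
        Q = pi * g * dh * r**4 / (8 * nu * L)  (volumetric), dm/dt = rho * Q,
        so nu = pi * rho * g * dh * r**4 / (8 * L * dm/dt).  SI units."""
        return math.pi * rho * g * dh * r**4 / (8.0 * L * dm_dt)

    # Illustrative run: 0.2 mm radius, 10 cm capillary, 5 cm head, ~1 mg/s flow.
    print(kinematic_viscosity(dm_dt=1.1e-6, r=2e-4, L=0.1, dh=0.05, rho=1000.0))
    ```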

  20. Multiplicity-dependent and nonbinomial efficiency corrections for particle number cumulants

    DOE PAGES

    Bzdak, Adam; Holzmann, Romain; Koch, Volker

    2016-12-19

    Here, we extend previous work on efficiency corrections for cumulant measurements [Bzdak and Koch, Phys. Rev. C 86, 044904 (2012), 10.1103/PhysRevC.86.044904; Phys. Rev. C 91, 027901 (2015), 10.1103/PhysRevC.91.027901]. We will then discuss the limitations of the methods presented in these papers. Specifically, we will consider multiplicity-dependent efficiencies as well as nonbinomial efficiency distributions. We will discuss the simplest and most straightforward methods to implement those corrections.

  1. Advances in computer simulation of genome evolution: toward more realistic evolutionary genomics analysis by approximate bayesian computation.

    PubMed

    Arenas, Miguel

    2015-04-01

    NGS technologies enable the fast and cheap generation of genomic data. Nevertheless, ancestral genome inference is not so straightforward due to complex evolutionary processes acting on this material, such as inversions, translocations, and other genome rearrangements that, in addition to their implicit complexity, can co-occur and confound ancestral inferences. Recently, models of genome evolution that accommodate such complex genomic events are emerging. This letter explores these novel evolutionary models and proposes their incorporation into robust statistical approaches based on computer simulations, such as approximate Bayesian computation, that may produce a more realistic evolutionary analysis of genomic data. Advantages and pitfalls in using these analytical methods are discussed. Potential applications of these ancestral genomic inferences are also pointed out.
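
    In its simplest rejection form, approximate Bayesian computation simulates data under parameters drawn from the prior and keeps those whose summary statistics land close to the observed ones. A generic rejection-ABC sketch; the Poisson simulator and mean summary are stand-ins, not a genome rearrangement model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate(theta, n=100):
        """Stand-in simulator; a real study would simulate genome rearrangements."""
        return rng.poisson(theta, size=n)

    def summary(x):
        return x.mean()

    def rejection_abc(observed, prior_draws=20_000, tol=0.1):
        s_obs = summary(observed)
        accepted = []
        for _ in range(prior_draws):
            theta = rng.uniform(0.0, 20.0)            # draw from the prior
            if abs(summary(simulate(theta)) - s_obs) < tol:
                accepted.append(theta)                # keep close simulations
        return np.array(accepted)                     # approximate posterior sample

    observed = rng.poisson(7.0, size=100)
    post = rejection_abc(observed)
    print(post.mean(), post.std())
    ```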

  2. Direct Determination of the Equilibrium Unbinding Potential Profile for a Short DNA Duplex from Force Spectroscopy Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noy, A

    2004-05-04

    Modern force microscopy techniques allow researchers to use mechanical forces to probe interactions between biomolecules. However, such measurements often happen in a non-equilibrium regime, which precludes straightforward extraction of the equilibrium energy information. Here we use the work-averaging method based on the Jarzynski equality to reconstruct the equilibrium interaction potential for the unbinding of a complementary 14-mer DNA duplex from the results of non-equilibrium single-molecule measurements. The reconstructed potential reproduces most of the features of the DNA stretching transition, previously observed only in equilibrium stretching of long DNA sequences. We also compare the reconstructed potential with the thermodynamic parameters of DNA duplex unbinding and show that the reconstruction accurately predicts the duplex melting enthalpy.
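
    The work-averaging step rests on the Jarzynski equality, dG = -kT ln< exp(-W/kT) >, averaged over repeated non-equilibrium pulls; a minimal sketch, with the work values (given in kT units) as illustrative inputs.

    ```python
    import numpy as np

    def jarzynski_free_energy(work_kT):
        """Free-energy difference (in kT) from non-equilibrium work values (in kT):
        dG = -ln< exp(-W) >.  Uses log-sum-exp for numerical stability."""
        W = np.asarray(work_kT)
        return -(np.logaddexp.reduce(-W) - np.log(len(W)))

    # Illustrative unbinding works from repeated force-spectroscopy pulls:
    W = np.array([12.1, 14.7, 11.3, 18.2, 13.0, 12.8])
    print(jarzynski_free_energy(W))  # rare low-W pulls dominate the average
    ```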

  3. One-Pot Synthesis of Cyclopropane-Fused Cyclic Amidines: An Oxidative Carbanion Cyclization.

    PubMed

    Veeranna, Kirana Devarahosahalli; Das, Kanak Kanti; Baskaran, Sundarababu

    2017-12-18

    A novel and efficient one-pot method has been developed for the synthesis of cyclopropane-fused bicyclic amidines on the basis of a CuBr2-mediated oxidative cyclization of carbanions. The usefulness of this unique multicomponent strategy has been demonstrated by the use of a wide variety of substrates to furnish novel cyclopropane-containing amidines with a quaternary center in very good yields. This ketenimine-based approach provides straightforward access to biologically active and pharmaceutically important 3-azabicyclo[n.1.0]alkane frameworks under mild conditions. The synthetic power of this methodology is exemplified in the concise synthesis of the pharmaceutically important antidepressant drug candidate GSK1360707 and key intermediates for the synthesis of amitifadine, bicifadine, and narlaprevir. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Embracing change.

    PubMed

    Christiansen, Carl

    2009-01-01

    As the world changes financially, the healthcare architectural design world is following suit. Gone are the straightforward days of designing a hospital based solely on the programming and function of the space. Now, architects must evolve not only to understand healthcare financing and its availability to clients, but to work in a more collaborative way to design projects. Today, fewer projects are moving forward, creating increased competition among architects. This is also generating more scrutiny in regard to cost control and risk management, which has forced architects to consider alternatives to contractual relationships and design/delivery methods. New ways of thinking about bringing a hospital to life foster creativity in both the design and delivery processes. Although these changes can be somewhat uncomfortable, they foster learning, collaboration, and ultimately, benefits for all who participate in the process.

  5. Low-emittance tuning of storage rings using normal mode beam position monitor calibration

    NASA Astrophysics Data System (ADS)

    Wolski, A.; Rubin, D.; Sagan, D.; Shanks, J.

    2011-07-01

    We describe a new technique for low-emittance tuning of electron and positron storage rings. This technique is based on calibration of the beam position monitors (BPMs) using excitation of the normal modes of the beam motion, and has benefits over conventional methods. It is relatively fast and straightforward to apply, it can be as easily applied to a large ring as to a small ring, and the tuning for low emittance becomes completely insensitive to BPM gain and alignment errors that can be difficult to determine accurately. We discuss the theory behind the technique, present some simulation results illustrating that it is highly effective and robust for low-emittance tuning, and describe the results of some initial experimental tests on the CesrTA storage ring.

  6. Crystal-structure prediction via the Floppy-Box Monte Carlo algorithm: Method and application to hard (non)convex particles

    NASA Astrophysics Data System (ADS)

    de Graaf, Joost; Filion, Laura; Marechal, Matthieu; van Roij, René; Dijkstra, Marjolein

    2012-12-01

    In this paper, we describe the way to set up the floppy-box Monte Carlo (FBMC) method [L. Filion, M. Marechal, B. van Oorschot, D. Pelt, F. Smallenburg, and M. Dijkstra, Phys. Rev. Lett. 103, 188302 (2009), 10.1103/PhysRevLett.103.188302] to predict crystal-structure candidates for colloidal particles. The algorithm is explained in detail to ensure that it can be straightforwardly implemented on the basis of this text. The handling of hard-particle interactions in the FBMC algorithm is given special attention, as (soft) short-range and semi-long-range interactions can be treated in an analogous way. We also discuss two types of algorithms for checking for overlaps between polyhedra, the method of separating axes and a triangular-tessellation based technique. These can be combined with the FBMC method to enable crystal-structure prediction for systems composed of highly shape-anisotropic particles. Moreover, we present the results for the dense crystal structures predicted using the FBMC method for 159 (non)convex faceted particles, on which the findings in [J. de Graaf, R. van Roij, and M. Dijkstra, Phys. Rev. Lett. 107, 155501 (2011), 10.1103/PhysRevLett.107.155501] were based. Finally, we comment on the process of crystal-structure prediction itself and the choices that can be made in these simulations.

  7. "Dip-and-read" paper-based analytical devices using distance-based detection with color screening.

    PubMed

    Yamada, Kentaro; Citterio, Daniel; Henry, Charles S

    2018-05-15

    An improved paper-based analytical device (PAD) using color screening to enhance device performance is described. Current detection methods for PADs relying on the distance-based signalling motif can be slow due to the assay time being limited by capillary flow rates that wick fluid through the detection zone. For traditional distance-based detection motifs, analysis can take up to 45 min for a channel length of 5 cm. By using a color screening method, quantification with a distance-based PAD can be achieved in minutes through a "dip-and-read" approach. A colorimetric indicator line deposited onto a paper substrate using inkjet-printing undergoes a concentration-dependent colorimetric response for a given analyte. This color intensity-based response has been converted to a distance-based signal by overlaying a color filter with a continuous color intensity gradient matching the color of the developed indicator line. As a proof-of-concept, Ni quantification in welding fume was performed as a model assay. The results of multiple independent user testing gave a mean absolute percentage error and average relative standard deviation of 10.5% and 11.2%, respectively, which is an improvement over analysis based on simple visual color comparison with a read guide (12.2%, 14.9%). In addition to the analytical performance comparison, an interference study and a shelf life investigation were performed to further demonstrate practical utility. The developed system demonstrates an alternative detection approach for distance-based PADs enabling fast (∼10 min), quantitative, and straightforward assays.

  8. F-Expansion Method and New Exact Solutions of the Schrödinger-KdV Equation

    PubMed Central

    Filiz, Ali; Ekici, Mehmet; Sonmezoglu, Abdullah

    2014-01-01

    The F-expansion method is proposed to seek exact solutions of nonlinear evolution equations. With the aid of symbolic computation, we choose the Schrödinger-KdV equation with a source to illustrate the validity and advantages of the proposed method. A number of Jacobi elliptic function solutions are obtained, including the Weierstrass elliptic function solutions. When the modulus m of the Jacobi elliptic function approaches 1 or 0, soliton-like solutions and trigonometric-function solutions are also obtained, respectively. The proposed method is a straightforward, short, promising, and powerful method for nonlinear evolution equations in mathematical physics. PMID:24672327

  9. F-expansion method and new exact solutions of the Schrödinger-KdV equation.

    PubMed

    Filiz, Ali; Ekici, Mehmet; Sonmezoglu, Abdullah

    2014-01-01

    The F-expansion method is proposed to seek exact solutions of nonlinear evolution equations. With the aid of symbolic computation, we choose the Schrödinger-KdV equation with a source to illustrate the validity and advantages of the proposed method. A number of Jacobi elliptic function solutions are obtained, including the Weierstrass elliptic function solutions. When the modulus m of the Jacobi elliptic function approaches 1 or 0, soliton-like solutions and trigonometric-function solutions are also obtained, respectively. The proposed method is a straightforward, short, promising, and powerful method for nonlinear evolution equations in mathematical physics.

  10. Monovalent Streptavidin that Senses Oligonucleotides

    PubMed Central

    Wang, Jingxian; Kostic, Natasa; Stojanovic, Milan N.

    2013-01-01

    We report a straightforward chemical route to monovalent streptavidin, a valuable reagent for imaging. The one-step process is based on a (tris)biotinylated oligonucleotide blocking three of streptavidin's four biotin binding sites. Further, the complex is highly sensitive to single-base differences, whereby perfectly matched oligonucleotides trigger dissociation of the biotin-streptavidin interaction at higher rates than single-base mismatches. The unique properties and ease of synthesis open wide opportunities for practical applications in imaging and biosensing. PMID:23606329

  11. Finite and spectral cell method for wave propagation in heterogeneous materials

    NASA Astrophysics Data System (ADS)

    Joulaian, Meysam; Duczek, Sascha; Gabbert, Ulrich; Düster, Alexander

    2014-09-01

    In the current paper we present a fast, reliable technique for simulating wave propagation in complex structures made of heterogeneous materials. The proposed approach, the spectral cell method, is a combination of the finite cell method and the spectral element method that significantly lowers preprocessing and computational expenditure. The spectral cell method takes advantage of explicit time-integration schemes coupled with a diagonal mass matrix to reduce the time spent on solving the equation system. By employing a fictitious domain approach, this method also helps to eliminate some of the difficulties associated with mesh generation. Besides introducing a proper, specific mass lumping technique, we also study the performance of the low-order and high-order versions of this approach based on several numerical examples. Our results show that, when combined with explicit time-integration algorithms, the high-order version of the spectral cell method requires less memory storage and less CPU time than other possible versions. Moreover, as the implementation of the proposed method in available finite element programs is straightforward, these properties turn the method into a viable tool for practical applications such as structural health monitoring [1-3], quantitative ultrasound applications [4], or the active control of vibrations and noise [5, 6].
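
    The payoff of the diagonal (lumped) mass matrix is that explicit central-difference time stepping involves no linear solve; each step reduces to an element-wise division. A generic 1D sketch of that update, using plain lumped-mass finite elements on a bar rather than the fictitious-domain machinery of the spectral cell method; the mesh, material values and load are illustrative.

    ```python
    import numpy as np

    # 1D bar, linear elements, lumped (diagonal) mass -> trivial explicit update.
    n, L, E, rho, A = 200, 1.0, 1.0, 1.0, 1.0
    h = L / n
    K = np.zeros((n + 1, n + 1))
    for e in range(n):                       # assemble stiffness
        ke = (E * A / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
        K[e:e + 2, e:e + 2] += ke
    m = rho * A * h * np.ones(n + 1)         # row-sum mass lumping
    m[0] = m[-1] = rho * A * h / 2

    dt = 0.5 * h * np.sqrt(rho / E)          # safely below the CFL limit
    u = np.zeros(n + 1); u_prev = u.copy()
    f = np.zeros(n + 1); f[-1] = 1e-3        # tip load
    for step in range(1000):
        acc = (f - K @ u) / m                # diagonal mass: element-wise divide
        u_new = 2 * u - u_prev + dt**2 * acc
        u_new[0] = 0.0                       # fixed end
        u, u_prev = u_new, u
    print(u[-1])
    ```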

  12. Using the Objective Borderline Method (OBM) to Support Board of Examiners' Decisions in a Medical Programme

    ERIC Educational Resources Information Center

    Shulruf, Boaz; Booth, Roger; Baker, Heather; Bagg, Warwick; Barrow, Mark

    2017-01-01

    Decisions about progress through an academic programme are made by Boards of Examiners, on the basis of students' course assessments. For most students such pass/fail grading decisions are straightforward. However, for those students whose results are borderline (either at a pass/fail boundary or boundaries between grades) the exercise of some…

  13. Controlled-Root Approach To Digital Phase-Locked Loops

    NASA Technical Reports Server (NTRS)

    Stephens, Scott A.; Thomas, J. Brooks

    1995-01-01

    Performance tailored more flexibly and directly to satisfy design requirements. Controlled-root approach improved method for analysis and design of digital phase-locked loops (DPLLs). Developed rigorously from first principles for fully digital loops, making DPLL theory and design simpler and more straightforward (particularly for third- or fourth-order DPLL) and controlling performance more accurately in case of high gain.

  14. Driven by Data: How Three Districts Are Successfully Using Data, Rather than Gut Feelings, to Align Staff Development with School Needs

    ERIC Educational Resources Information Center

    Gold, Stephanie

    2005-01-01

    The concept of data-driven professional development is both straightforward and sensible. Implementing this approach is another story, which is why many administrators are turning to sophisticated tools to help manage data collection and analysis. These tools allow educators to assess and correlate student outcomes, instructional methods, and…

  15. Straightforward Preparation Method for Complexes Bearing a Bidentate N-Heterocyclic Carbene to Introduce Undergraduate Students to Research Methodology

    ERIC Educational Resources Information Center

    Fernández, Alberto; López-Torres, Margarita; Fernández, Jesús J.; Vázquez-García, Digna; Marcos, Ismael

    2017-01-01

    A laboratory experiment for students in advanced inorganic chemistry is described. In this experiment, students prepare two metal complexes with a potentially bidentate-carbene ligand. The complexes are synthesized by reaction of a bisimidazolium salt with silver(I) oxide or palladium(II) acetate. Silver and palladium complexes are binuclear and…

  16. A First Laboratory Utilizing NMR for Undergraduate Education: Characterization of Edible Fats and Oils by Quantitative ¹³C NMR

    ERIC Educational Resources Information Center

    Fry, Charles G.; Hofstetter, Heike; Bowman, Matthew D.

    2017-01-01

    Quantitative ¹³C NMR provides a straightforward method of analyzing edible oils in undergraduate chemistry laboratories. ¹³C spectra are relatively easy to understand, and are much simpler to analyze and work up than the corresponding ¹H spectra. Average chain length, degree of saturation, and average…

  17. Structuring a risk-based bioassay program for uranium usage in university laboratories

    NASA Astrophysics Data System (ADS)

    Dawson, Johnne Talia

    Bioassay programs are integral to a radiation safety program. They are used as a method of determining whether individuals working with radioactive material have been exposed and have received a resulting dose. For radionuclides that are not found in nature, determining an exposure is straightforward. However, for a naturally occurring radionuclide like uranium, it is not as straightforward to determine whether a dose is the result of an occupational exposure. The purpose of this project is to address this issue within the University of Nevada, Las Vegas (UNLV) bioassay program. The project consisted of two components that studied the effectiveness of a bioassay program in determining the dose from an acute inhalation of uranium. The first component addresses the creation of excretion curves, implemented in MATLAB, that allow UNLV to determine the time to which an inhalation dose can be attributed. The excretion curves were based on the ICRP 30 lung model, as well as the Annual Limit on Intake (ALI) values located in the Nuclear Regulatory Commission's 10CFR20, which is itself based on ICRP 30 (International Commission on Radiological Protection). The excretion curves would allow UNLV to conduct in-house investigations of inhalation doses without solely depending on outside investigations and sources. The second component of the project focused on the creation of a risk-based bioassay program for UNLV with a bioassay frequency that depends on the individual. Determining the risk-based bioassay program required the use of baseline variance in order to minimize the investigation of false positives among those individuals who undergo bioassays for uranium work. The proposed program was compared against an evaluation limit of 10 mrem per quarter, an investigational limit of 125 mrem per quarter, and the federal/state requirement of 1.25 rem per quarter. It was determined that a bioassay program whose frequency varies per person, depending on the chemical class of material being worked with, in conjunction with continuous air monitoring, can sufficiently meet ALARA standards.

  18. The concept of template-based de novo design from drug-derived molecular fragments and its application to TAR RNA.

    PubMed

    Schüller, Andreas; Suhartono, Marcel; Fechner, Uli; Tanrikulu, Yusuf; Breitung, Sven; Scheffer, Ute; Göbel, Michael W; Schneider, Gisbert

    2008-02-01

    Principles of fragment-based molecular design are presented and discussed in the context of de novo drug design. The underlying idea is to dissect known drug molecules into fragments by straightforward pseudo-retro-synthesis. The resulting building blocks are then used for automated assembly of new molecules. A particular question has been whether this approach is actually able to perform scaffold-hopping. A prospective case study illustrates the usefulness of fragment-based de novo design for finding new scaffolds. We were able to identify a novel ligand disrupting the interaction between the Tat peptide and TAR RNA, which is part of the human immunodeficiency virus (HIV-1) mRNA. Using a single template structure (acetylpromazine) as reference molecule and a topological pharmacophore descriptor (CATS), new chemotypes were automatically generated by our de novo design software Flux. Flux features an evolutionary algorithm for fragment-based compound assembly and optimization. Pharmacophore superimposition and docking into the target RNA suggest perfect matching between the template molecule and the designed compound. Chemical synthesis was straightforward, and bioactivity of the designed molecule was confirmed in a FRET assay. This study demonstrates the practicability of de novo design for generating RNA ligands containing novel molecular scaffolds.

  19. Likelihood-based confidence intervals for estimating floods with given return periods

    NASA Astrophysics Data System (ADS)

    Martins, Eduardo Sávio P. R.; Clarke, Robin T.

    1993-06-01

    This paper discusses aspects of the calculation of likelihood-based confidence intervals for T-year floods, with particular reference to (1) the two-parameter gamma distribution; (2) the Gumbel distribution; (3) the two-parameter log-normal distribution, and other distributions related to the normal by Box-Cox transformations. Calculation of the confidence limits is straightforward using the Nelder-Mead algorithm with a constraint incorporated, although care is necessary to ensure convergence either of the Nelder-Mead algorithm, or of the Newton-Raphson calculation of maximum-likelihood estimates. Methods are illustrated using records from 18 gauging stations in the basin of the River Itajai-Acu, State of Santa Catarina, southern Brazil. A small and restricted simulation compared likelihood-based confidence limits with those given by use of the central limit theorem; for the same confidence probability, the confidence limits of the simulation were wider than those of the central limit theorem, which failed more frequently to contain the true quantile being estimated. The paper discusses possible applications of likelihood-based confidence intervals in other areas of hydrological analysis.
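
    One concrete route to such intervals is the profile likelihood: write the T-year quantile as an explicit parameter, maximise the likelihood over the remaining parameter for each trial quantile, and keep the quantiles whose deviance stays within the chi-squared cutoff. A sketch for the Gumbel case; the simulated record and return period are illustrative, and a bounded scalar optimiser stands in for the paper's constrained Nelder-Mead.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import gumbel_r, chi2

    def profile_ci(x, T=100, level=0.95):
        """Likelihood-based CI for the Gumbel T-year quantile x_T.
        Gumbel: x_T = mu + beta * y_T with y_T = -ln(-ln(1 - 1/T)),
        so mu = x_T - beta * y_T lets us profile over beta alone."""
        y_T = -np.log(-np.log(1.0 - 1.0 / T))

        def nll(beta, x_T):
            return -gumbel_r.logpdf(x, loc=x_T - beta * y_T, scale=beta).sum()

        def profile(x_T):
            return minimize_scalar(nll, bounds=(1e-6, 10 * x.std()),
                                   args=(x_T,), method="bounded").fun

        mu, beta = gumbel_r.fit(x)                # MLE, then scan the deviance
        xT_hat = mu + beta * y_T
        cut = profile(xT_hat) + 0.5 * chi2.ppf(level, df=1)
        grid = np.linspace(0.5 * xT_hat, 2.0 * xT_hat, 400)
        inside = [xT for xT in grid if profile(xT) <= cut]
        return xT_hat, min(inside), max(inside)

    x = gumbel_r.rvs(loc=100, scale=30, size=40, random_state=0)
    print(profile_ci(x))  # point estimate and likelihood-based confidence limits
    ```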

  20. Efficient Band-to-Trap Tunneling Model Including Heterojunction Band Offset

    DOE PAGES

    Gao, Xujiao; Huang, Andy; Kerr, Bert

    2017-10-25

    In this paper, we present an efficient band-to-trap tunneling model based on the Schenk approach, in which an analytic density-of-states (DOS) model is developed based on the open boundary scattering method. The new model explicitly includes the effect of heterojunction band offset, in addition to the well-known field effect. Its analytic form enables straightforward implementation into TCAD device simulators. It is applicable to all one-dimensional potentials, which can be approximated to a good degree such that the approximated potentials lead to piecewise analytic wave functions with open boundary conditions. The model allows for simulating both the electric-field-enhanced and band-offset-enhanced carriermore » recombination due to the band-to-trap tunneling near the heterojunction in a heterojunction bipolar transistor (HBT). Simulation results of an InGaP/GaAs/GaAs NPN HBT show that the proposed model predicts significantly increased base currents, due to the hole-to-trap tunneling enhanced by the emitter-base junction band offset. Finally, the results compare favorably with experimental observation.« less

  1. Construction of mutually unbiased bases with cyclic symmetry for qubit systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seyfarth, Ulrich; Ranade, Kedar S.

    2011-10-15

    For the complete estimation of arbitrary unknown quantum states by measurements, the use of mutually unbiased bases has been well established in theory and experiment for the past 20 years. However, most constructions of these bases make heavy use of abstract algebra and the mathematical theory of finite rings and fields, and no simple and generally accessible construction is available. This is particularly true in the case of a system composed of several qubits, which is arguably the most important case in quantum information science and quantum computation. In this paper, we close this gap by providing a simple and straightforward method for the construction of mutually unbiased bases in the case of a qubit register. We show that our construction is also accessible to experiments, since only Hadamard and controlled-phase gates are needed, which are available in most practical realizations of a quantum computer. Moreover, our scheme possesses the optimal scaling possible, i.e., the number of gates scales only linearly in the number of qubits.
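
    For a single qubit the idea can be checked directly: the eigenbases of Z, X and Y are mutually unbiased (every cross-basis overlap satisfies |<e|f>|^2 = 1/d = 1/2), and the X and Y bases are reachable from the computational basis with just Hadamard and phase gates. A small verification sketch for the single-qubit case only; the paper's cyclic multi-qubit construction is not reproduced.

    ```python
    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard
    S = np.diag([1, 1j])                           # phase gate

    Z_basis = np.eye(2, dtype=complex)             # columns are basis vectors
    X_basis = H @ Z_basis                          # Hadamard maps Z -> X eigenbasis
    Y_basis = S @ H @ Z_basis                      # phase after Hadamard gives Y

    bases = [Z_basis, X_basis, Y_basis]
    for a in range(3):
        for b in range(a + 1, 3):
            overlaps = np.abs(bases[a].conj().T @ bases[b]) ** 2
            assert np.allclose(overlaps, 0.5), (a, b)  # unbiased: 1/d = 1/2
    print("3 mutually unbiased single-qubit bases verified")
    ```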

  2. Efficient Band-to-Trap Tunneling Model Including Heterojunction Band Offset

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Xujiao; Huang, Andy; Kerr, Bert

    In this paper, we present an efficient band-to-trap tunneling model based on the Schenk approach, in which an analytic density-of-states (DOS) model is developed based on the open boundary scattering method. The new model explicitly includes the effect of heterojunction band offset, in addition to the well-known field effect. Its analytic form enables straightforward implementation into TCAD device simulators. It is applicable to all one-dimensional potentials, which can be approximated to a good degree such that the approximated potentials lead to piecewise analytic wave functions with open boundary conditions. The model allows for simulating both the electric-field-enhanced and band-offset-enhanced carrier recombination due to the band-to-trap tunneling near the heterojunction in a heterojunction bipolar transistor (HBT). Simulation results of an InGaP/GaAs/GaAs NPN HBT show that the proposed model predicts significantly increased base currents, due to the hole-to-trap tunneling enhanced by the emitter-base junction band offset. Finally, the results compare favorably with experimental observation.

  3. Methylation Sensitive Amplification Polymorphism Sequencing (MSAP-Seq)-A Method for High-Throughput Analysis of Differentially Methylated CCGG Sites in Plants with Large Genomes.

    PubMed

    Chwialkowska, Karolina; Korotko, Urszula; Kosinska, Joanna; Szarejko, Iwona; Kwasniewski, Miroslaw

    2017-01-01

    Epigenetic mechanisms, including histone modifications and DNA methylation, mutually regulate chromatin structure, maintain genome integrity, and affect gene expression and transposon mobility. Variations in DNA methylation within plant populations, as well as methylation in response to internal and external factors, are of increasing interest, especially in the crop research field. Methylation Sensitive Amplification Polymorphism (MSAP) is one of the most commonly used methods for assessing DNA methylation changes in plants. This method involves gel-based visualization of PCR fragments from selectively amplified DNA that are cleaved using methylation-sensitive restriction enzymes. In this study, we developed and validated a new method based on the conventional MSAP approach called Methylation Sensitive Amplification Polymorphism Sequencing (MSAP-Seq). We improved the MSAP-based approach by replacing the conventional separation of amplicons on polyacrylamide gels with direct, high-throughput sequencing using Next Generation Sequencing (NGS) and automated data analysis. MSAP-Seq allows for global sequence-based identification of changes in DNA methylation. This technique was validated in Hordeum vulgare. However, MSAP-Seq can be straightforwardly implemented in different plant species, including crops with large, complex and highly repetitive genomes. The incorporation of high-throughput sequencing into MSAP-Seq enables parallel and direct analysis of DNA methylation in hundreds of thousands of sites across the genome. MSAP-Seq provides direct genomic localization of changes and enables quantitative evaluation. We have shown that the MSAP-Seq method specifically targets gene-containing regions and that a single analysis can cover three-quarters of all genes in large genomes. Moreover, MSAP-Seq's simplicity, cost effectiveness, and high-multiplexing capability make this method highly affordable. Therefore, MSAP-Seq can be used for DNA methylation analysis in crop plants with large and complex genomes.

  4. Methylation Sensitive Amplification Polymorphism Sequencing (MSAP-Seq)—A Method for High-Throughput Analysis of Differentially Methylated CCGG Sites in Plants with Large Genomes

    PubMed Central

    Chwialkowska, Karolina; Korotko, Urszula; Kosinska, Joanna; Szarejko, Iwona; Kwasniewski, Miroslaw

    2017-01-01

    Epigenetic mechanisms, including histone modifications and DNA methylation, mutually regulate chromatin structure, maintain genome integrity, and affect gene expression and transposon mobility. Variations in DNA methylation within plant populations, as well as methylation in response to internal and external factors, are of increasing interest, especially in the crop research field. Methylation Sensitive Amplification Polymorphism (MSAP) is one of the most commonly used methods for assessing DNA methylation changes in plants. This method involves gel-based visualization of PCR fragments from selectively amplified DNA that are cleaved using methylation-sensitive restriction enzymes. In this study, we developed and validated a new method based on the conventional MSAP approach called Methylation Sensitive Amplification Polymorphism Sequencing (MSAP-Seq). We improved the MSAP-based approach by replacing the conventional separation of amplicons on polyacrylamide gels with direct, high-throughput sequencing using Next Generation Sequencing (NGS) and automated data analysis. MSAP-Seq allows for global sequence-based identification of changes in DNA methylation. This technique was validated in Hordeum vulgare. However, MSAP-Seq can be straightforwardly implemented in different plant species, including crops with large, complex and highly repetitive genomes. The incorporation of high-throughput sequencing into MSAP-Seq enables parallel and direct analysis of DNA methylation in hundreds of thousands of sites across the genome. MSAP-Seq provides direct genomic localization of changes and enables quantitative evaluation. We have shown that the MSAP-Seq method specifically targets gene-containing regions and that a single analysis can cover three-quarters of all genes in large genomes. Moreover, MSAP-Seq's simplicity, cost effectiveness, and high-multiplexing capability make this method highly affordable. Therefore, MSAP-Seq can be used for DNA methylation analysis in crop plants with large and complex genomes. PMID:29250096

  5. Field-gradient partitioning for fracture and frictional contact in the material point method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Homel, Michael A.; Herbold, Eric B.

    Contact and fracture in the material point method require grid-scale enrichment or partitioning of material into distinct velocity fields to allow for displacement or velocity discontinuities at a material interface. We present a new method in which a kernel-based damage field is constructed from the particle data. The gradient of this field is used to dynamically repartition the material into contact pairs at each node. Our approach avoids the need to construct and evolve explicit cracks or contact surfaces and is therefore well suited to problems involving complex 3-D fracture with crack branching and coalescence. A straightforward extension of this approach permits frictional ‘self-contact’ between surfaces that are initially part of a single velocity field, enabling more accurate simulation of granular flow, porous compaction, fragmentation, and comminution of brittle materials. Finally, numerical simulations of self-contact and dynamic crack propagation are presented to demonstrate the accuracy of the approach.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Millis, Andrew

    Understanding the behavior of interacting electrons in molecules and solids, so that one can predict new superconductors, catalysts, light harvesters, energy and battery materials and optimize existing ones, is the "quantum many-body problem". This is one of the scientific grand challenges of the 21st century. A complete solution to the problem has been proven to be exponentially hard, meaning that straightforward numerical approaches fail. New insights and new methods are needed to provide accurate yet feasible approximate solutions. This CMSCN project brought together chemists and physicists to combine insights from the two disciplines to develop innovative new approaches. Outcomes included the Density Matrix Embedding method, a new, computationally inexpensive and extremely accurate approach that may enable first-principles treatment of superconducting and magnetic properties of strongly correlated materials; new techniques for existing methods, including an Adaptively Truncated Hilbert Space approach that will vastly expand the capabilities of the dynamical mean field method; a self-energy embedding theory; and a new memory-function based approach to calculating the behavior of driven systems. The methods developed under this project are now being applied to improve our understanding of superconductivity, to calculate novel topological properties of materials, and to characterize and improve the properties of nanoscale devices.

  7. Field-gradient partitioning for fracture and frictional contact in the material point method

    DOE PAGES

    Homel, Michael A.; Herbold, Eric B.

    2016-08-15

    Contact and fracture in the material point method require grid-scale enrichment or partitioning of material into distinct velocity fields to allow for displacement or velocity discontinuities at a material interface. We present a new method in which a kernel-based damage field is constructed from the particle data. The gradient of this field is used to dynamically repartition the material into contact pairs at each node. Our approach avoids the need to construct and evolve explicit cracks or contact surfaces and is therefore well suited to problems involving complex 3-D fracture with crack branching and coalescence. A straightforward extension of this approach permits frictional ‘self-contact’ between surfaces that are initially part of a single velocity field, enabling more accurate simulation of granular flow, porous compaction, fragmentation, and comminution of brittle materials. Finally, numerical simulations of self-contact and dynamic crack propagation are presented to demonstrate the accuracy of the approach.
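
    A rough one-dimensional illustration of the field-gradient idea follows: build a kernel-smoothed nodal damage field from particle damage values, then split the particles near a node into two velocity fields according to their side of the damage surface. The hat kernel and the sign-based partition rule are simplifications assumed here for brevity; the full 3-D algorithm is considerably richer.

      import numpy as np

      def damage_field(nodes, x_p, d_p, h=1.0):
          """Kernel-weighted nodal damage from particle positions and damage."""
          w = np.maximum(0.0, 1.0 - np.abs(nodes[:, None] - x_p[None, :]) / h)
          return (w * d_p).sum(axis=1) / np.maximum(w.sum(axis=1), 1e-12)

      nodes = np.linspace(0.0, 4.0, 5)
      x_p = np.array([1.2, 1.8, 2.2, 2.8])     # particle positions
      d_p = np.array([0.1, 0.9, 0.8, 0.05])    # particle damage values
      D = damage_field(nodes, x_p, d_p)
      grad_D = np.gradient(D, nodes)           # nodal damage-field gradient

      # Partition particles near the middle node by the side of the damage
      # surface they lie on (sign of the offset projected on the gradient).
      side = np.sign((x_p - nodes[2]) * grad_D[2])
      print({"field A": x_p[side >= 0], "field B": x_p[side < 0]})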

  8. Socializing Messages in Blue-Collar Families: Communicative Pathways to Social Mobility and Reproduction

    ERIC Educational Resources Information Center

    Lucas, Kristen

    2011-01-01

    This study explicitly links processes of anticipatory socialization to social mobility and reproduction. An examination of the socializing messages exchanged between blue-collar parents (n = 41) and their children (n = 25) demonstrate that family-based messages about work and career seldom occur in straightforward, unambiguous ways. Instead,…

  9. Reinforcing Alcohol Prevention (RAP) Program: A Secondary School Curriculum to Combat Underage Drinking and Impaired Driving

    ERIC Educational Resources Information Center

    Will, Kelli England; Sabo, Cynthia Shier

    2010-01-01

    The Reinforcing Alcohol Prevention (RAP) Program is an alcohol prevention curriculum developed in partnership with secondary schools to serve their need for a brief, evidence-based, and straightforward program that aligned with state learning objectives. Program components included an educational lesson, video, and interactive activities delivered…

  10. Test Review: A Review of the Five Factor Personality Inventory-Children

    ERIC Educational Resources Information Center

    Klingbeil, David A.

    2009-01-01

    This article presents a review of the Five Factor Personality Inventory-Children (FFPI-C), a quick and easily administered personality assessment for children and adolescents with clear and straightforward scoring and interpretation procedures. The FFPI-C is based on a theoretical model of personality developed through the work of Allport (Allport…

  11. Modeling Time-Dependent Association in Longitudinal Data: A Lag as Moderator Approach

    ERIC Educational Resources Information Center

    Selig, James P.; Preacher, Kristopher J.; Little, Todd D.

    2012-01-01

    We describe a straightforward, yet novel, approach to examine time-dependent association between variables. The approach relies on a measurement-lag research design in conjunction with statistical interaction models. We base arguments in favor of this approach on the potential for better understanding the associations between variables by…

  12. Connect the Dots: A Dedicated System for Learning Links Teacher Teams to Student Outcomes

    ERIC Educational Resources Information Center

    Ermeling, Bradley A.

    2012-01-01

    Establishing school-based professional learning appears so simple and straightforward during inspiring presentations at summer workshops, but keeping collaborative work focused on teaching and learning in such a way that it produces consistent results is a highly underestimated task. Investigations and experience from a group of researchers at the…

  13. Effect-size measures as descriptors of assay quality in high-content screening: A brief review of some available methodologies

    USDA-ARS?s Scientific Manuscript database

    The field of high-content screening (HCS) typically uses measures of screen quality conceived for fairly straightforward high-throughput screening (HTS) scenarios. However, in contrast to HTS, image-based HCS systems rely on multidimensional readouts reporting biological responses associated with co...

  14. Native Amazonian Children Forego Egalitarianism in Merit-Based Tasks When They Learn to Count

    ERIC Educational Resources Information Center

    Jara-Ettinger, Julian; Gibson, Edward; Kidd, Celeste; Piantadosi, Steve

    2016-01-01

    Cooperation often results in a final material resource that must be shared, but deciding how to distribute that resource is not straightforward. A distribution could count as fair if all members receive an equal reward ("egalitarian distributions"), or if each member's reward is proportional to their merit ("merit-based…

  15. Profile and effects of consumer involvement in fresh meat.

    PubMed

    Verbeke, Wim; Vackier, Isabelle

    2004-05-01

    This study investigates the profile and effects of consumer involvement in fresh meat as a product category based on cross-sectional data collected in Belgium. Analyses confirm that involvement in meat is a multidimensional construct including four facets: pleasure value, symbolic value, risk importance and risk probability. Four involvement-based meat consumer segments are identified: straightforward, cautious, indifferent, and concerned. Socio-demographic differences between the segments relate to gender, age and presence of children. The segments differ in terms of extensiveness of the decision-making process, impact and trust in information sources, levels of concern, price consciousness, claimed meat consumption, consumption intention, and preferred place of purchase. The two segments with a strong perception of meat risks constitute two-thirds of the market. They can be typified as cautious meat lovers versus concerned meat consumers. Efforts aiming at consumer reassurance through quality improvement, traceability, labelling or communication may gain effectiveness when targeted specifically to these two segments. Whereas straightforward meat lovers focus mainly on taste as the decisive criterion, indifferent consumers are strongly price oriented.

  16. Inverse Calibration Free fs-LIBS of Copper-Based Alloys

    NASA Astrophysics Data System (ADS)

    Smaldone, Antonella; De Bonis, Angela; Galasso, Agostino; Guarnaccio, Ambra; Santagata, Antonio; Teghil, Roberto

    2016-09-01

    In this work, the analysis of copper-based alloys of different composition by the Laser Induced Breakdown Spectroscopy (LIBS) technique with fs laser pulses is presented. A Nd:Glass laser (Twinkle Light Conversion, λ = 527 nm at 250 fs) and a set of bronze and brass certified standards were used. The inverse Calibration-Free method (inverse CF-LIBS) was applied to estimate the temperature of the fs-laser-induced plasma and thereby achieve quantitative elemental analysis of these materials. This approach supports the hypothesis that assessing the plasma temperature in fs-LIBS yields straightforward and reliable analytical data. To this end, the capability of the adopted inverse CF-LIBS method, which relies on fulfilment of the Local Thermodynamic Equilibrium (LTE) condition, for indirect determination of the species excitation temperature is demonstrated. The estimated process temperatures give good agreement between the certified and the experimentally determined compositions of the bronze and brass materials employed here, although further correction procedures, such as calibration curves, may be required. The results demonstrate that the inverse CF-LIBS method can be applied with fs laser pulses, even though matrix effects may affect the plasma properties, restricting its application to unknown samples for which a certified standard of similar composition is available.

  17. In vivo recording of aerodynamic force with an aerodynamic force platform: from drones to birds.

    PubMed

    Lentink, David; Haselsteiner, Andreas F; Ingersoll, Rivers

    2015-03-06

    Flapping wings enable flying animals and biomimetic robots to generate elevated aerodynamic forces. Measurements that demonstrate this capability are based on experiments with tethered robots and animals, and indirect force calculations based on measured kinematics or airflow during free flight. Remarkably, there exists no method to measure these forces directly during free flight. Such in vivo recordings in freely behaving animals are essential to better understand the precise aerodynamic function of their flapping wings, in particular during the downstroke versus upstroke. Here, we demonstrate a new aerodynamic force platform (AFP) for non-intrusive aerodynamic force measurement in freely flying animals and robots. The platform encloses the animal or object that generates fluid force with a physical control surface, which mechanically integrates the net aerodynamic force that is transferred to the earth. Using a straightforward analytical solution of the Navier-Stokes equation, we verified that the method is accurate. We subsequently validated the method with a quadcopter that is suspended in the AFP and generates unsteady thrust profiles. These independent measurements confirm that the AFP is indeed accurate. We demonstrate the effectiveness of the AFP by studying aerodynamic weight support of a freely flying bird in vivo. These measurements confirm earlier findings based on kinematics and flow measurements, which suggest that the avian downstroke, not the upstroke, is primarily responsible for body weight support during take-off and landing.

  18. A method to improve the nutritional quality of foods and beverages based on dietary recommendations.

    PubMed

    Nijman, C A J; Zijp, I M; Sierksma, A; Roodenburg, A J C; Leenen, R; van den Kerkhoff, C; Weststrate, J A; Meijer, G W

    2007-04-01

    The increasing consumer interest in health prompted Unilever to develop a globally applicable method (Nutrition Score) to evaluate and improve the nutritional composition of its foods and beverages portfolio. Based on (inter)national dietary recommendations, generic benchmarks were developed to evaluate foods and beverages on their content of trans fatty acids, saturated fatty acids, sodium and sugars. High intakes of these key nutrients are associated with undesirable health effects. In principle, the developed generic benchmarks can be applied globally for any food and beverage product. Product category-specific benchmarks were developed when it was not feasible to meet generic benchmarks because of technological and/or taste factors. The whole Unilever global foods and beverages portfolio has been evaluated and actions have been taken to improve the nutritional quality. The advantages of this method over other initiatives to assess the nutritional quality of foods are that it is based on the latest nutritional scientific insights and its global applicability. The Nutrition Score is the first simple, transparent and straightforward method that can be applied globally and across all food and beverage categories to evaluate the nutritional composition. It can help food manufacturers to improve the nutritional value of their products. In addition, the Nutrition Score can be a starting point for a powerful health indicator front-of-pack. This can have a significant positive impact on public health, especially when implemented by all food manufacturers.

  19. Non-parametric estimation of population size changes from the site frequency spectrum.

    PubMed

    Waltoft, Berit Lindum; Hobolth, Asger

    2018-06-11

    The history of changes in population size is useful for understanding the evolutionary history of a species. Genetic variation within a species can be summarized by the site frequency spectrum (SFS). For a sample of size n, the SFS is a vector of length n - 1 where entry i is the number of sites where the mutant base appears i times and the ancestral base appears n - i times. We present a new method, CubSFS, for estimating the changes in population size of a panmictic population from an observed SFS. First, we provide a straightforward proof of the expression for the expected site frequency spectrum depending only on the population size. Our derivation is based on an eigenvalue decomposition of the instantaneous coalescent rate matrix. Second, we solve the inverse problem of determining the changes in population size from an observed SFS. Our solution is based on a cubic spline for the population size. The cubic spline is determined by minimizing the weighted average of two terms, namely (i) the goodness of fit to the observed SFS, and (ii) a penalty term based on the smoothness of the changes. The weight is determined by cross-validation. The new method is validated on simulated demographic histories and applied on unfolded and folded SFS from 26 different human populations from the 1000 Genomes Project.
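
    The first ingredient above, the eigendecomposition of the instantaneous coalescent rate matrix, is easy to sketch. The matrix below describes the pure-death process on the number of ancestral lineages (k coalesces to k - 1 at rate k(k-1)/2 in coalescent units), and diagonalization gives transition probabilities P(t) = V exp(Dt) V^{-1} without numerical ODE solving. This is a generic coalescent sketch, not the CubSFS implementation.

      import numpy as np

      def coalescent_rate_matrix(n):
          """Rate matrix of the lineage-count process, states k = 1..n."""
          Q = np.zeros((n, n))
          for k in range(2, n + 1):
              rate = k * (k - 1) / 2.0
              Q[k - 1, k - 1] = -rate       # leave state k ...
              Q[k - 1, k - 2] = rate        # ... by coalescing to k - 1
          return Q

      n = 10
      Q = coalescent_rate_matrix(n)
      evals, V = np.linalg.eig(Q)           # distinct real eigenvalues here
      Vinv = np.linalg.inv(V)

      def transition(t):
          return (V * np.exp(evals * t)) @ Vinv   # P(t) = V e^{Dt} V^{-1}

      p0 = np.zeros(n); p0[n - 1] = 1.0     # start with n = 10 lineages
      # probability that at most 2 ancestral lineages remain at t = 0.5
      print((p0 @ transition(0.5))[:2].sum())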

  20. FTTH: the overview of existing technologies

    NASA Astrophysics Data System (ADS)

    Nowak, Dawid; Murphy, John

    2005-06-01

    The growing popularity of the Internet is the key driver behind the development of new access methods that would enable customers to experience true broadband. Among the various technologies, access methods based on optical fiber are getting more and more attention, as they offer the ultimate solution for delivering different services to the customers' premises. Three different architectures have been proposed to facilitate the roll-out of Fiber-to-the-Home (FTTH) infrastructure. Point-to-point Ethernet networks are the most straightforward and already mature solution. Different flavors of Passive Optical Networks (PONs) with Time Division Multiplexing Access (TDMA) are becoming more widespread as the necessary equipment becomes available on the market. The third main contender is PONs with Wavelength Division Multiplexing Access (WDMA). Although still in their infancy, laboratory tests show that they have many advantages over present solutions. In this paper we present a brief comparison of these three access methods. In our analysis the architecture of each solution is presented, the applicability of each system is examined from different viewpoints, and their advantages and disadvantages are highlighted.

  1. On the critical forcing amplitude of forced nonlinear oscillators

    NASA Astrophysics Data System (ADS)

    Febbo, Mariano; Ji, Jinchen C.

    2013-12-01

    The steady-state response of forced single degree-of-freedom weakly nonlinear oscillators under primary resonance conditions can exhibit saddle-node bifurcations, jump and hysteresis phenomena if the amplitude of the excitation exceeds a certain value. This critical value of the excitation amplitude, or critical forcing amplitude, plays an important role in determining the occurrence of saddle-node bifurcations in the frequency-response curve. This work develops an alternative method to determine the critical forcing amplitude for single degree-of-freedom nonlinear oscillators. Based on a Lagrange multiplier approach, the proposed method treats the calculation of the critical forcing amplitude as an optimization problem with constraints imposed by the existence of locations of vertical tangency. In comparison with the Gröbner basis method, the proposed approach is more straightforward and thus easier to apply for finding the critical forcing amplitude both analytically and numerically. Three examples are given to confirm the validity of the theoretical predictions. The first two present the analytical form of the critical forcing amplitude and the third is an example of a numerically computed solution.
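
    The vertical-tangency formulation lends itself to a compact numerical sketch. Using the standard multiple-scales frequency-response relation of a damped Duffing oscillator as a stand-in (the scaling below is an illustrative convention, not the paper's formulation), the critical forcing amplitude is reached when the two saddle-node points merge, i.e. when G and its first two amplitude derivatives vanish simultaneously:

      import numpy as np
      from scipy.optimize import fsolve

      mu, alpha = 0.1, 1.0   # damping and cubic stiffness (illustrative values)

      def equations(x):
          a, s, f = x                           # amplitude, detuning, forcing
          u = s - 0.375 * alpha * a**2          # offset from the backbone curve
          G = (u**2 + mu**2) * a**2 - f**2 / 4.0        # response relation
          Ga = -1.5 * alpha * u * a**3 + 2.0 * a * (u**2 + mu**2)       # dG/da
          Gaa = (1.125 * alpha**2 * a**4 - 7.5 * alpha * u * a**2
                 + 2.0 * u**2 + 2.0 * mu**2)                            # d2G/da2
          return [G, Ga, Gaa]

      a_c, s_c, f_c = fsolve(equations, x0=[0.5, 0.2, 0.1])
      print(f"f_c = {f_c:.4f}, s_c = {s_c:.4f}")
      # -> f_c = 0.1281 and s_c = 0.1732 (sqrt(3)*mu, the classical result)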

  2. Specular reflection treatment for the 3D radiative transfer equation solved with the discrete ordinates method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Le Hardy, D.; Favennec, Y., E-mail: yann.favennec@univ-nantes.fr; Rousseau, B.

    The contribution of this paper lies in the development of numerical algorithms for the mathematical treatment of specular reflection on borders when dealing with the numerical solution of radiative transfer problems. Since the radiative transfer equation is integro-differential, the discrete ordinates method allows one to write down a set of semi-discrete equations in which weights are to be calculated. The calculation of these weights is well known to be based either on a quadrature or on angular discretization, making the use of such a method straightforward for the state equation. The diffuse contribution of reflection on borders is also usually well taken into account. However, the calculation of accurate partition ratio coefficients is much trickier for the specular condition applied on arbitrary geometrical borders. This paper presents algorithms that analytically calculate the partition ratio coefficients needed in numerical treatments. The developed algorithms, combined with a decentered finite element scheme, are validated through comparisons with analytical solutions before being applied to complex geometries.

  3. Solving gap metabolites and blocked reactions in genome-scale models: application to the metabolic network of Blattabacterium cuenoti.

    PubMed

    Ponce-de-León, Miguel; Montero, Francisco; Peretó, Juli

    2013-10-31

    Metabolic reconstruction is the computational process that aims to elucidate the network of metabolites interconnected through reactions catalyzed by activities assigned to one or more genes. Reconstructed models may contain inconsistencies that appear as gap metabolites and blocked reactions. Although automatic methods for solving this problem have been developed previously, there are many situations where manual curation is still needed. We introduce a general definition of gap metabolite that allows its detection in a straightforward manner. Moreover, a method is proposed for the detection of Unconnected Modules, defined as isolated sets of blocked reactions connected through gap metabolites. The method has been successfully applied to the curation of iCG238, the genome-scale metabolic model for the bacterium Blattabacterium cuenoti, an obligate endosymbiont of cockroaches. We found the proposed approach to be a valuable tool for the curation of genome-scale metabolic models. The outcome of its application to the genome-scale model B. cuenoti iCG238 is a more accurate model version, named B. cuenoti iMP240.
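
    The gap-metabolite definition reduces to a simple test on the stoichiometric matrix, sketched here on a made-up network with all reactions taken as irreversible for brevity (reversible reactions and the Unconnected Module detection are omitted): a metabolite with no producing or no consuming reaction cannot carry steady-state flux.

      import numpy as np

      S = np.array([           # toy network, 4 metabolites x 3 reactions
          [-1,  0,  0],        # A: only consumed -> root gap (no producer)
          [ 1, -1,  0],        # B: produced and consumed -> connected
          [ 0,  1,  0],        # C: only produced -> dead end (no consumer)
          [ 0,  0, -1],        # D: only consumed -> root gap
      ])
      mets = ["A", "B", "C", "D"]

      no_producer = ~(S > 0).any(axis=1)
      no_consumer = ~(S < 0).any(axis=1)
      for m, p, c in zip(mets, no_producer, no_consumer):
          if p or c:
              print(m, "is a gap metabolite:", "no producer" if p else "no consumer")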

  4. Application of geometric algebra for the description of polymer conformations.

    PubMed

    Chys, Pieter

    2008-03-14

    In this paper a Clifford algebra-based method is applied to calculate polymer chain conformations. The approach enables the calculation of the position of an atom in space with the knowledge of the bond length (l), valence angle (theta), and rotation angle (phi) of each of the preceding bonds in the chain. Hence, the set of geometrical parameters {l(i),theta(i),phi(i)} yields all the position coordinates p(i) of the main chain atoms. Moreover, the method allows the calculation of side chain conformations and the computation of rotations of chain segments. With these features it is, in principle, possible to generate conformations of any type of chemical structure. This method is proposed as an alternative to the classical approach by matrix algebra. It is more straightforward, and its final symbolic representation is considerably simpler than that of matrix algebra. Approaches for realistic modeling by means of incorporation of energetic considerations can be combined with it. This article, however, focuses entirely on presenting the suitable mathematical framework on which further developments and applications can be built.

  5. Benchmarking Density Functional Theory Based Methods To Model NiOOH Material Properties: Hubbard and van der Waals Corrections vs Hybrid Functionals.

    PubMed

    Zaffran, Jeremie; Caspary Toroker, Maytal

    2016-08-09

    NiOOH has recently been used to catalyze water oxidation by way of electrochemical water splitting. Few experimental data are available to rationalize the successful catalytic capability of NiOOH. Thus, theory has a distinctive role for studying its properties. However, the unique layered structure of NiOOH is associated with the presence of essential dispersion forces within the lattice. Hence, the choice of an appropriate exchange-correlation functional within Density Functional Theory (DFT) is not straightforward. In this work, we will show that standard DFT is sufficient to evaluate the geometry, but DFT+U and hybrid functionals are required to calculate the oxidation states. Notably, the benefit of DFT with van der Waals correction is marginal. Furthermore, only hybrid functionals succeed in opening a bandgap, and such methods are necessary to study NiOOH electronic structure. In this work, we expect to give guidelines to theoreticians dealing with this material and to present a rational approach in the choice of the DFT method of calculation.

  6. Illustrating, Quantifying, and Correcting for Bias in Post-hoc Analysis of Gene-Based Rare Variant Tests of Association

    PubMed Central

    Grinde, Kelsey E.; Arbet, Jaron; Green, Alden; O'Connell, Michael; Valcarcel, Alessandra; Westra, Jason; Tintle, Nathan

    2017-01-01

    To date, gene-based rare variant testing approaches have focused on aggregating information across sets of variants to maximize statistical power in identifying genes showing significant association with diseases. Beyond identifying genes that are associated with diseases, the identification of causal variant(s) in those genes and estimation of their effect is crucial for planning replication studies and characterizing the genetic architecture of the locus. However, we illustrate that straightforward single-marker association statistics can suffer from substantial bias introduced by conditioning on gene-based test significance, due to the phenomenon often referred to as “winner's curse.” We illustrate the ramifications of this bias on variant effect size estimation and variant prioritization/ranking approaches, outline parameters of genetic architecture that affect this bias, and propose a bootstrap resampling method to correct for this bias. We find that our correction method significantly reduces the bias due to winner's curse (average two-fold decrease in bias, p < 2.2 × 10−6) and, consequently, substantially improves mean squared error and variant prioritization/ranking. The method is particularly helpful in adjustment for winner's curse effects when the initial gene-based test has low power and for relatively more common, non-causal variants. Adjustment for winner's curse is recommended for all post-hoc estimation and ranking of variants after a gene-based test. Further work is necessary to continue seeking ways to reduce bias and improve inference in post-hoc analysis of gene-based tests under a wide variety of genetic architectures. PMID:28959274
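
    The conditioning idea behind such a correction can be sketched generically: resample the data, recompute the effect estimate only in bootstrap samples where the discovery test is again significant, and subtract the bias that this conditioning induces. A one-sample z-test stands in below for the gene-based test; this illustrates the principle, not the authors' exact estimator.

      import numpy as np

      rng = np.random.default_rng(1)
      x = rng.normal(0.15, 1.0, size=200)     # observed data, modest true effect

      def significant(sample):                # stand-in "discovery" test
          z = sample.mean() / (sample.std(ddof=1) / np.sqrt(len(sample)))
          return abs(z) > 1.96

      beta_hat = x.mean()
      boot = []
      for _ in range(2000):
          s = rng.choice(x, size=len(x), replace=True)
          if significant(s):                  # condition on re-discovery
              boot.append(s.mean())

      bias = np.mean(boot) - beta_hat         # bias induced by conditioning
      print("naive:", round(beta_hat, 3), "corrected:", round(beta_hat - bias, 3))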

  7. Isolation and determination of absolute configurations of insect-produced methyl-branched hydrocarbons

    PubMed Central

    Bello, Jan E.; McElfresh, J. Steven; Millar, Jocelyn G.

    2015-01-01

    Although the effects of stereochemistry have been studied extensively for volatile insect pheromones, little is known about the effects of chirality in the nonvolatile methyl-branched hydrocarbons (MBCHs) used by many insects as contact pheromones. MBCHs generally contain one or more chiral centers and so two or more stereoisomeric forms are possible for each structure. However, it is not known whether insects biosynthesize these molecules in high stereoisomeric purity, nor is it known whether insects can distinguish the different stereoisomeric forms of MBCHs. This knowledge gap is due in part to the lack of methods for isolating individual MBCHs from the complex cuticular hydrocarbon (CHC) blends of insects, as well as the difficulty in determining the absolute configurations of the isolated MBCHs. To address these deficiencies, we report a straightforward method for the isolation of individual cuticular hydrocarbons from the complex CHC blend. The method was used to isolate 36 pure MBCHs from 20 species in nine insect orders. The absolute stereochemistries of the purified MBCHs then were determined by digital polarimetry. The absolute configurations of all of the isolated MBCHs were determined to be (R) by comparison with a library of synthesized, enantiomerically pure standards, suggesting that the biosynthetic pathways used to construct MBCHs are highly conserved within the Insecta. The development of a straightforward method for isolation of specific CHCs will enable determination of their functional roles by providing pure compounds for bioassays. PMID:25583471

  8. A sparse matrix-vector multiplication based algorithm for accurate density matrix computations on systems of millions of atoms

    NASA Astrophysics Data System (ADS)

    Ghale, Purnima; Johnson, Harley T.

    2018-06-01

    We present an efficient sparse matrix-vector (SpMV) based method to compute the density matrix P from a given Hamiltonian in electronic structure computations. Our method is a hybrid approach based on Chebyshev-Jackson approximation theory and matrix purification methods like the second order spectral projection purification (SP2). Recent methods to compute the density matrix scale as O(N) in the number of floating point operations but are accompanied by large memory and communication overhead, and they are based on iterative use of the sparse matrix-matrix multiplication kernel (SpGEMM), which is known to be computationally irregular. In addition to irregularity in the sparse Hamiltonian H, the nonzero structure of intermediate estimates of P depends on products of H and evolves over the course of computation. On the other hand, an expansion of the density matrix P in terms of Chebyshev polynomials is straightforward and SpMV based; however, the resulting density matrix may not satisfy the required constraints exactly. In this paper, we analyze the strengths and weaknesses of the Chebyshev-Jackson polynomials and the second order spectral projection purification (SP2) method, and propose to combine them so that the accurate density matrix can be computed using the SpMV computational kernel only, and without having to store the density matrix P. Our method accomplishes these objectives by using the Chebyshev polynomial estimate as the initial guess for SP2, which is followed by using sparse matrix-vector multiplications (SpMVs) to replicate the behavior of the SP2 algorithm for purification. We demonstrate the method on a tight-binding model system of an oxide material containing more than 3 million atoms. In addition, we also present the predicted behavior of our method when applied to near-metallic Hamiltonians with a wide energy spectrum.
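
    The SpMV-only kernel at the heart of this strategy can be sketched as a Jackson-damped Chebyshev expansion of the zero-temperature Fermi projector theta(mu - H), applied to a vector through the three-term recursion so that the density matrix itself is never stored. The Hamiltonian is assumed pre-scaled to have its spectrum in [-1, 1]; the SP2-style refinement described above is omitted.

      import numpy as np
      from scipy.sparse import diags

      def cheb_coeffs_step(mu, K):
          """Chebyshev coefficients of theta(mu - x) with Jackson damping."""
          c = np.empty(K)
          c[0] = (np.pi - np.arccos(mu)) / np.pi
          k = np.arange(1, K)
          c[1:] = -2.0 * np.sin(k * np.arccos(mu)) / (np.pi * k)
          g = ((K - k + 1) * np.cos(np.pi * k / (K + 1))            # Jackson
               + np.sin(np.pi * k / (K + 1)) / np.tan(np.pi / (K + 1))) / (K + 1)
          c[1:] *= g
          return c

      def apply_projector(H, v, mu, K=200):
          """Approximate theta(mu - H) @ v using K sparse matrix-vector products."""
          c = cheb_coeffs_step(mu, K)
          t0, t1 = v, H @ v
          out = c[0] * t0 + c[1] * t1
          for k in range(2, K):
              t0, t1 = t1, 2.0 * (H @ t1) - t0   # Chebyshev recursion
              out += c[k] * t1
          return out

      n = 1000                                   # toy tight-binding chain
      H = diags([0.5 * np.ones(n - 1), 0.5 * np.ones(n - 1)], [-1, 1])
      v = np.zeros(n); v[0] = 1.0
      print(apply_projector(H, v, mu=0.0)[:3])   # one column of the projector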

  9. Effective Presentation of Metabolic Rate Information for Lunar Extravehicular Activity (EVA)

    NASA Technical Reports Server (NTRS)

    Mackin, Michael A.; Gonia, Philip; Lombay-Gonzalez, Jose

    2010-01-01

    During human exploration of the lunar surface, a suited crewmember needs effective and accurate information about consumable levels remaining in their life support system. The information must be presented in a manner that supports real-time consumable monitoring and route planning. Since consumable usage is closely tied to metabolic rate, the lunar suit must estimate metabolic rate from life support sensors, such as oxygen tank pressures, carbon dioxide partial pressure, and cooling water inlet and outlet temperatures. To provide adequate warnings that account for traverse time for a crewmember to return to a safe haven, accurate forecasts of consumable depletion rates are required. The forecasts must be presented to the crewmember in a straightforward, effective manner. In order to evaluate methods for displaying consumable forecasts, a desktop-based simulation of a lunar Extravehicular Activity (EVA) has been developed for the Constellation lunar suit's life-support system. The program was used to compare the effectiveness of several different data presentation methods.

  10. Low-Energy Electron Potentiometry: Contactless Imaging of Charge Transport on the Nanoscale.

    PubMed

    Kautz, J; Jobst, J; Sorger, C; Tromp, R M; Weber, H B; van der Molen, S J

    2015-09-04

    Charge transport measurements form an essential tool in condensed matter physics. The usual approach is to contact a sample by two or four probes, measure the resistance and derive the resistivity, assuming homogeneity within the sample. A more thorough understanding, however, requires knowledge of local resistivity variations. Spatially resolved information is particularly important when studying novel materials like topological insulators, where the current is localized at the edges, or quasi-two-dimensional (2D) systems, where small-scale variations can determine global properties. Here, we demonstrate a new method to determine spatially-resolved voltage maps of current-carrying samples. This technique is based on low-energy electron microscopy (LEEM) and is therefore quick and non-invasive. It makes use of resonance-induced contrast, which strongly depends on the local potential. We demonstrate our method using single to triple layer graphene. However, it is straightforwardly extendable to other quasi-2D systems, most prominently to the upcoming class of layered van der Waals materials.

  11. FDTD simulation of field performance in reverberation chamber excited by two excitation antennas

    NASA Astrophysics Data System (ADS)

    Wang, Song; Wu, Zhan-cheng; Cui, Yao-zhong

    2013-03-01

    The excitation source is one of the critical items that determine the electromagnetic fields in a reverberation chamber (RC). In order to optimize the electromagnetic field performance, a new method of exciting the RC with two antennas is proposed based on theoretical analysis. A full 3D simulation of the RC is carried out with the finite difference time domain (FDTD) method for two excitation conditions: one antenna and two antennas. The broadband response of the RC is obtained by fast Fourier transformation (FFT) after only one simulation. Numerical data show that the field uniformity in the test space is improved with two transmitting antennas, while the normalized electric fields decrease slightly compared to the one-antenna condition. It is straightforward to recognize that two-antenna excitation can reduce the demands on the power amplifier, as the total input power is split between the two antennas; consequently, the cost of electromagnetic compatibility (EMC) testing in a large-scale RC can be reduced.
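
    The broadband-response step is worth a small sketch: with a pulsed excitation, a single time-domain run yields the response over the whole band, because the transfer function is the ratio of the output spectrum to the input spectrum. The time series below are synthetic placeholders, not FDTD output.

      import numpy as np

      dt = 1e-11                                   # time step (s)
      t = np.arange(4096) * dt
      src = np.exp(-((t - 2.0e-9) / 3e-10) ** 2)   # Gaussian excitation pulse
      probe = 0.6 * np.exp(-((t - 2.5e-9) / 3e-10) ** 2)   # synthetic response

      f = np.fft.rfftfreq(len(t), dt)
      H = np.fft.rfft(probe) / np.fft.rfft(src)    # broadband transfer function
      i = np.argmin(np.abs(f - 1e9))
      print(abs(H[i]))                             # -> ~0.6 at 1 GHz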

  12. Advanced Multipurpose Rendezvous Tracking System Study

    NASA Technical Reports Server (NTRS)

    Laurie, R. J.; Sterzer, F.

    1982-01-01

    Rendezvous and docking (R&D) sensors needed to support Earth orbital operations of vehicles were investigated to determine the form they should take. An R&D sensor must enable an interceptor vehicle to determine both the relative position and the relative attitude of a target vehicle. Relative position determination is fairly straightforward and places few constraints on the sensor. Relative attitude determination, however, is more difficult. The attitude is calculated based on relative position measurements of several reflectors placed in a known arrangement on the target vehicle. The constraints imposed on the sensor by the attitude determination method are severe. Narrow beamwidth, wide field of view (fov), high range accuracy, and fast random scan capability are all required to determine attitude by this method. A consideration of these constraints as well as others imposed by expected operating conditions and the available technology led to the conclusion that the sensor should be a cw optical radar employing a semiconductor laser transmitter and an image dissector receiver.
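
    The attitude-determination step described above (recovering relative attitude from measured positions of reflectors in a known body-frame arrangement) is, in modern terms, a point-set registration problem. The SVD-based least-squares solution sketched below is a standard way to solve it; the study itself does not prescribe this particular algorithm.

      import numpy as np

      def attitude(body_pts, measured_pts):
          """Least-squares rotation R with measured ~ R @ body + translation."""
          pb = body_pts - body_pts.mean(axis=0)
          pm = measured_pts - measured_pts.mean(axis=0)
          U, _, Vt = np.linalg.svd(pm.T @ pb)
          d = np.sign(np.linalg.det(U @ Vt))       # guard against reflections
          return U @ np.diag([1.0, 1.0, d]) @ Vt

      body = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1.0]])
      R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1.0]])   # 90 deg about z
      meas = body @ R_true.T + np.array([5.0, 2.0, 0.5])        # rotated + offset
      print(np.allclose(attitude(body, meas), R_true))          # -> True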

  13. Photothermal conversion of CO₂ into CH₄ with H₂ over Group VIII nanocatalysts: an alternative approach for solar fuel production.

    PubMed

    Meng, Xianguang; Wang, Tao; Liu, Lequan; Ouyang, Shuxin; Li, Peng; Hu, Huilin; Kako, Tetsuya; Iwai, Hideo; Tanaka, Akihiro; Ye, Jinhua

    2014-10-20

    The photothermal conversion of CO2 provides a straightforward and effective method for the highly efficient production of solar fuels with high solar-light utilization efficiency. This is due to several crucial features of the Group VIII nanocatalysts, including effective energy utilization over the whole range of the solar spectrum, excellent photothermal performance, and unique activation abilities. Photothermal CO2 reaction rates (mol h(-1) g(-1)) that are several orders of magnitude larger than those obtained with photocatalytic methods (μmol h(-1) g(-1)) were thus achieved. It is proposed that the overall water-based CO2 conversion process can be achieved by combining light-driven H2 production from water and photothermal CO2 conversion with H2. More generally, this work suggests that traditional catalysts that are characterized by intense photoabsorption will find new applications in photo-induced green-chemistry processes. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Skiving stacked sheets of paper into test paper for rapid and multiplexed assay

    PubMed Central

    Yang, Mingzhu; Zhang, Wei; Yang, Junchuan; Hu, Binfeng; Cao, Fengjing; Zheng, Wenshu; Chen, Yiping; Jiang, Xingyu

    2017-01-01

    This paper shows that stacking sheets of paper preincubated with different biological reagents and skiving them into uniform test-paper sheets allows mass manufacturing of multiplexed immunoassay devices and simultaneous detection of multiple targets that can be read out by a barcode scanner. The thickness of one sheet of paper can form the width of a module for the barcode; when stacked, these sheets of paper can form a series of barcodes representing the targets, depending on the color contrast provided by a colored precipitate of an immunoassay. The uniform thickness of sheets of paper allows high-quality signal readout. The manufacturing method allows highly efficient fabrication of the materials and substrates for a straightforward assay of targets that range from drugs of abuse to biomarkers of blood-transmitted infections. In addition, as a novel alternative to the conventional point-of-care testing method, the paper-based barcode assay system can provide highly efficient, accurate, and objective diagnoses. PMID:29214218

  15. Structural kinetic modeling of metabolic networks.

    PubMed

    Steuer, Ralf; Gross, Thilo; Selbig, Joachim; Blasius, Bernd

    2006-08-08

    To develop and investigate detailed mathematical models of metabolic processes is one of the primary challenges in systems biology. However, despite considerable advance in the topological analysis of metabolic networks, kinetic modeling is still often severely hampered by inadequate knowledge of the enzyme-kinetic rate laws and their associated parameter values. Here we propose a method that aims to give a quantitative account of the dynamical capabilities of a metabolic system, without requiring any explicit information about the functional form of the rate equations. Our approach is based on constructing a local linear model at each point in parameter space, such that each element of the model is either directly experimentally accessible or amenable to a straightforward biochemical interpretation. This ensemble of local linear models, encompassing all possible explicit kinetic models, then allows for a statistical exploration of the comprehensive parameter space. The method is exemplified on two paradigmatic metabolic systems: the glycolytic pathway of yeast and a realistic-scale representation of the photosynthetic Calvin cycle.
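
    The ensemble-of-local-linear-models idea can be sketched on a two-metabolite toy pathway: parameterize the Jacobian at a steady state by normalized saturation values with known ranges, sample that space, and collect eigenvalue statistics. The motif assumed below (a two-step chain with end-product inhibition, scaling constants set to 1) is deliberately simple and is in fact provably always stable, which the sampling confirms; larger networks are where such sampling becomes informative.

      import numpy as np

      rng = np.random.default_rng(0)
      c1 = c2 = 1.0                    # v*/S* steady-state scaling constants
      n_stable = 0
      trials = 10000
      for _ in range(trials):
          t11 = rng.uniform(0, 1)      # saturation of v1 w.r.t. its substrate S1
          t12 = rng.uniform(-1, 0)     # inhibition of v1 by the product S2
          t22 = rng.uniform(0, 1)      # saturation of v2 w.r.t. S2
          J = np.array([[-t11 * c1,        -t12 * c1],
                        [ t11 * c2, (t12 - t22) * c2]])
          if np.linalg.eigvals(J).real.max() < 0:
              n_stable += 1
      # trace < 0 and det = t11*t22 > 0 for this motif, so the fraction is 1.0
      print("stable fraction:", n_stable / trials)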

  16. The use of noise equivalent count rate and the NEMA phantom for PET image quality evaluation.

    PubMed

    Yang, Xin; Peng, Hao

    2015-03-01

    PET image quality is directly associated with two important parameters: count-rate performance and image signal-to-noise ratio (SNR). The framework of noise equivalent count rate (NECR) was developed in the 1990s and has been widely used since then to evaluate count-rate performance of PET systems. The concept of NECR is not entirely straightforward, however, and among the issues requiring clarification are its original definition, its relationship to image quality, and its consistency among different derivation methods. In particular, we try to answer whether a higher NECR measurement using a standard NEMA phantom actually corresponds to better imaging performance. The paper covers the following topics: 1) revisiting the original analytical model for NECR derivation; 2) validating three methods for NECR calculation based on the NEMA phantom/standard; and 3) studying the spatial dependence of NECR and the quantitative relationship between NECR and image SNR. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
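
    For reference, the commonly used form of the quantity under discussion is NECR = T^2 / (T + S + k f R), with trues T, scatters S, randoms R, k = 2 for delayed-window randoms subtraction (k = 1 for a noiseless randoms estimate), and f the fraction of the field of view subtended by the phantom; differing conventions for k and f are part of the inconsistency the paper examines.

      def necr(trues, scatters, randoms, k=2.0, f=1.0):
          """Noise-equivalent count rate; all inputs in counts per second."""
          return trues**2 / (trues + scatters + k * f * randoms)

      print(necr(trues=120e3, scatters=60e3, randoms=90e3, f=0.4))  # ~57.1 kcps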

  17. Analysis of Glyphosate and Aminomethylphosphonic Acid in Nutritional Ingredients and Milk by Derivatization with Fluorenylmethyloxycarbonyl Chloride and Liquid Chromatography-Mass Spectrometry.

    PubMed

    Ehling, Stefan; Reddy, Todime M

    2015-12-09

    A straightforward analytical method based on derivatization with fluorenylmethyloxycarbonyl chloride and liquid chromatography-mass spectrometry has been developed for the analysis of residues of glyphosate and aminomethylphosphonic acid (AMPA) in a suite of nutritional ingredients derived from soybean, corn, and sugar beet and also in cow's milk and human breast milk. Accuracy and intermediate precision were 91-116% and <10% RSD, respectively, in soy protein isolate. Limits of quantitation were 0.05 and 0.005 μg/g in powdered and liquid samples, respectively. Glyphosate and AMPA were quantified at 0.105 and 0.210 μg/g (soy protein isolate) and 0.850 and 2.71 μg/g (soy protein concentrate, both derived from genetically modified soybean), respectively. Residues were not detected in soy milk, soybean oil, corn oil, maltodextrin, sucrose, cow's milk, whole milk powder, or human breast milk. The method is proposed as a convenient tool for the survey of glyphosate and AMPA in the ingredient supply chain.

  18. Eigenstates and dynamics of Hooke's atom: Exact results and path integral simulations

    NASA Astrophysics Data System (ADS)

    Gholizadehkalkhoran, Hossein; Ruokosenmäki, Ilkka; Rantala, Tapio T.

    2018-05-01

    The system of two interacting electrons in a one-dimensional harmonic potential, or Hooke's atom, is considered again. On the one hand, it serves as a model for quantum dots in the strong confinement regime; on the other hand, it provides a hard test bench for new methods, owing to the "space splitting" arising from the one-dimensional Coulomb potential. Here, we complete the numerous previous studies of the ground state of Hooke's atom by including the excited states and dynamics, not considered earlier. With perturbation theory, we reach essentially exact eigenstate energies and wave functions for the strong confinement regime as novel results. We also consider quantum dynamics induced by an external perturbation in a simple separable case. Finally, we test our novel numerical approach based on real-time path integrals (RTPIs) in reproducing the above. The RTPI turns out to be a straightforward approach, with an exact account of electronic correlations, for solving the eigenstates and dynamics without the conventional restrictions of electronic structure methods.

  19. Determination of bulk and interface density of states in metal oxide semiconductor thin-film transistors by using capacitance-voltage characteristics

    NASA Astrophysics Data System (ADS)

    Wei, Xixiong; Deng, Wanling; Fang, Jielin; Ma, Xiaoyu; Huang, Junkai

    2017-10-01

    A straightforward physics-based technique for extracting the interface and bulk density of states in metal oxide semiconductor thin-film transistors (TFTs) is proposed, using capacitance-voltage (C-V) characteristics. The distribution of interface trap density with energy is extracted from analysis of the C-V characteristics; using the obtained interface state distribution, the bulk trap density is then determined. With this method it is found that, for the interface traps, the deep-state density near mid-gap is approximately constant while the tail-state density increases exponentially with energy; the bulk trap density is a superposition of exponential deep states and exponential tail states. The validity of the extraction is verified by comparison with measured current-voltage (I-V) characteristics and with simulation results from a technology computer-aided design (TCAD) model. The extraction requires no numerical iteration and is simple, fast, and accurate, making it very useful for TFT device characterization.

  20. A method to estimate statistical errors of properties derived from charge-density modelling

    PubMed Central

    Lecomte, Claude

    2018-01-01

    Estimating uncertainties of property values derived from a charge-density model is not straightforward. A methodology based on the calculation of sample standard deviations (SSD) of properties, using randomly deviating charge-density models, is proposed in the MoPro software. The parameter shifts applied in the deviating models are generated so as to respect the variance–covariance matrix obtained from the least-squares refinement. This 'SSD methodology' can be applied to estimate uncertainties of any property related to a charge-density model obtained by least-squares fitting, including topological properties such as critical-point coordinates, electron density, Laplacian and ellipticity at critical points, and charges integrated over atomic basins. Errors on electrostatic potentials and interaction energies are also now available through this procedure. The method is exemplified with the charge density of the compound (E)-5-phenylpent-1-enylboronic acid, refined at 0.45 Å resolution. The procedure is implemented in the freely available MoPro program dedicated to charge-density refinement and modelling. PMID:29724964
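
    The core of the SSD procedure can be sketched generically: draw parameter sets consistent with the refinement's variance-covariance matrix via its Cholesky factor, evaluate the derived property on each deviating model, and report the sample standard deviation. The parameters, covariance matrix, and property function below are toy stand-ins, not MoPro quantities.

      import numpy as np

      rng = np.random.default_rng(0)
      p_hat = np.array([1.00, 0.30, -0.12])        # refined parameters (toy)
      cov = np.array([[4e-4, 1e-4, 0.0 ],          # variance-covariance matrix
                      [1e-4, 9e-4, 2e-5],
                      [0.0,  2e-5, 1e-4]])
      L = np.linalg.cholesky(cov)

      def property_of(p):                          # hypothetical derived property
          return p[0] * np.exp(p[1]) + p[2] ** 2

      values = [property_of(p_hat + L @ rng.standard_normal(3))
                for _ in range(5000)]
      print("property =", round(property_of(p_hat), 4),
            "+/-", round(float(np.std(values, ddof=1)), 4))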

  1. Parameter-free determination of the exchange constant in thin films using magnonic patterning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Langer, M.; Wagner, K.; Fassbender, J.

    2016-03-07

    An all-electrical method is presented to determine the exchange constant of magnetic thin films using ferromagnetic resonance. For films 20 nm thick and below, the determination of the exchange constant A, a fundamental magnetic quantity, is anything but straightforward. The most common methods are based on the characterization of perpendicular standing spin waves; these approaches are challenging in this thickness regime, however, due to (i) very high energies and (ii) rather small intensities. In the presented approach, surface patterning is applied to a permalloy (Ni{sub 80}Fe{sub 20}) film and a Co{sub 2}Fe{sub 0.4}Mn{sub 0.6}Si Heusler compound. Acting as a magnonic crystal, such structures enable the coupling of backward-volume spin waves to the uniform mode. Subsequent ferromagnetic resonance measurements give access to the spin-wave spectra free of unquantifiable parameters and, thus, to the exchange constant A with high accuracy.

  2. Fast segmentation of stained nuclei in terabyte-scale, time resolved 3D microscopy image stacks.

    PubMed

    Stegmaier, Johannes; Otte, Jens C; Kobitski, Andrei; Bartschat, Andreas; Garcia, Ariel; Nienhaus, G Ulrich; Strähle, Uwe; Mikut, Ralf

    2014-01-01

    Automated analysis of multi-dimensional microscopy images has become an integral part of modern research in life science. Most available algorithms that provide sufficient segmentation quality, however, are infeasible for a large amount of data due to their high complexity. In this contribution we present a fast parallelized segmentation method that is especially suited for the extraction of stained nuclei from microscopy images, e.g., of developing zebrafish embryos. The idea is to transform the input image based on gradient and normal directions in the proximity of detected seed points such that it can be handled by straightforward global thresholding like Otsu's method. We evaluate the quality of the obtained segmentation results on a set of real and simulated benchmark images in 2D and 3D and show the algorithm's superior performance compared to other state-of-the-art algorithms. We achieve an up to ten-fold decrease in processing times, allowing us to process large data sets while still providing reasonable segmentation results.
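
    The key trick, transforming the image so that plain global thresholding suffices, can be illustrated with a toy stand-in: a Laplacian-of-Gaussian filter (in place of the seed-point/gradient-based transform used by the authors) followed by Otsu's threshold on a synthetic image.

      import numpy as np
      from scipy import ndimage
      from skimage.filters import threshold_otsu

      rng = np.random.default_rng(0)
      img = rng.normal(0.1, 0.03, (256, 256))                 # noisy background
      yy, xx = np.mgrid[:256, :256]
      centers = [(r, c) for r in (40, 110, 180) for c in (40, 110, 180)]
      for cy, cx in centers:                                  # nine fake nuclei
          img += 0.8 * np.exp(-((yy - cy)**2 + (xx - cx)**2) / (2 * 8.0**2))

      transformed = -ndimage.gaussian_laplace(img, sigma=4)   # enhance blobs
      mask = transformed > threshold_otsu(transformed)        # global threshold
      labels, n = ndimage.label(mask)
      print("detected nuclei:", n)                            # -> 9 for this toy image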

  3. Energy Harvesting with a Liquid-Metal Microfluidic Influence Machine

    NASA Astrophysics Data System (ADS)

    Conner, Christopher; de Visser, Tim; Loessberg, Joshua; Sherman, Sam; Smith, Andrew; Ma, Shuo; Napoli, Maria Teresa; Pennathur, Sumita; Weld, David

    2018-04-01

    We describe and demonstrate an alternative energy-harvesting technology based on a microfluidic realization of a Wimshurst influence machine. The prototype device converts the mechanical energy of a pressure-driven flow into electrical energy, using a multiphase system composed of droplets of liquid mercury surrounded by insulating oil. Electrostatic induction between adjacent metal droplets drives charge through external electrode paths, resulting in continuous charge amplification and collection. We demonstrate a power output of 4 nW from the initial prototype and present calculations suggesting that straightforward device optimization could increase the power output by more than 3 orders of magnitude. At that level, the power efficiency of this energy-harvesting mechanism, limited by viscous dissipation, could exceed 90%. The microfluidic context enables straightforward scaling and parallelization, as well as hydraulic matching to a variety of ambient mechanical energy sources, such as human locomotion.

  4. The difficult mountain: enriched composition in adjective–noun phrases

    PubMed Central

    Pickering, Martin J.; McElree, Brian

    2012-01-01

    When readers need to go beyond the straightforward compositional meaning of a sentence (i.e., when enriched composition is required), costly additional processing is the norm. However, this conclusion is based entirely on research that has looked at enriched composition between two phrases or within the verb phrase (e.g., the verb and its complement in … started the book …) where there is a discrepancy between the semantic expectations of the verb and the semantics of the noun. We carried out an eye-tracking experiment investigating enriched composition within a single noun phrase, as in the difficult mountain. As compared with adjective–noun phrases that allow a straightforward compositional interpretation (the difficult exercise), the coerced phrases were more difficult to process. These results indicate that coercion effects can be found in the absence of a typing violation and within a single noun phrase. PMID:21826403

  5. Sandia Corporation (Albuquerque, NM)

    DOEpatents

    Diver, Richard B.

    2010-02-23

    A Theoretical Overlay Photographic (TOP) alignment method uses the overlay of a theoretical projected image of a perfectly aligned concentrator on a photographic image of the concentrator to align the mirror facets of a parabolic trough solar concentrator. The alignment method is practical and straightforward, and inherently aligns the mirror facets to the receiver. When integrated with clinometer measurements for which gravity and mechanical drag effects have been accounted for and which are made in a manner and location consistent with the alignment method, all of the mirrors on a common drive can be aligned and optimized for any concentrator orientation.

  6. Individualized localization and cortical surface-based registration of intracranial electrodes

    PubMed Central

    Dykstra, Andrew R.; Chan, Alexander M.; Quinn, Brian T.; Zepeda, Rodrigo; Keller, Corey J.; Cormier, Justine; Madsen, Joseph R.; Eskandar, Emad N.; Cash, Sydney S.

    2011-01-01

    In addition to its widespread clinical use, the intracranial electroencephalogram (iEEG) is increasingly being employed as a tool to map the neural correlates of normal cognitive function as well as for developing neuroprosthetics. Despite recent advances, and unlike other established brain mapping modalities (e.g. functional MRI, magneto- and electroencephalography), registering the iEEG with respect to neuroanatomy in individuals – and coregistering functional results across subjects – remains a significant challenge. Here we describe a method which coregisters high-resolution preoperative MRI with postoperative computerized tomography (CT) for the purpose of individualized functional mapping of both normal and pathological (e.g., interictal discharges and seizures) brain activity. Our method accurately (within 3mm, on average) localizes electrodes with respect to an individual’s neuroanatomy. Furthermore, we outline a principled procedure for either volumetric or surface-based group analyses. We demonstrate our method in five patients with medically-intractable epilepsy undergoing invasive monitoring of the seizure focus prior to its surgical removal. The straightforward application of this procedure to all types of intracranial electrodes, robustness to deformations in both skull and brain, and the ability to compare electrode locations across groups of patients make this procedure an important tool for basic scientists as well as clinicians. PMID:22155045

  7. A novel 96-well gel-based assay for determining antifungal activity against filamentous fungi.

    PubMed

    Troskie, Anscha Mari; Vlok, Nicolas Maré; Rautenbach, Marina

    2012-12-01

    In recent years, the global rise in antibiotic resistance and environmental consciousness have led to a renewed fervour to find and develop novel antibiotics, including antifungals. However, the influence of the environment on antifungal activity is often disregarded, and many in vitro assays may cause the activity of certain antifungals to be over- or underestimated. The general antifungal test assays that are economically accessible to most scientists rely primarily on visual examination or on spectrophotometric analysis. The effect of certain morphogenic antifungals, which may lead to hyperbranching of filamentous fungi, unfortunately renders these methods unreliable. To minimise the difficulties caused by hyperbranching, we developed a straightforward, economical 96-well gel-based method, independent of spectrophotometric analysis, for highly repeatable determination of antifungal activity. For the calculation of inhibition parameters, this method relies on visualisation of the assay results by digitisation. The antifungal activity results from our novel micro-gel dilution assay are comparable to those of the micro-broth dilution assay used as the standard reference test of the Clinical and Laboratory Standards Institute. Furthermore, our economical assay is multifunctional, as it permits microscopic analysis of the preserved assay results while rendering highly reliable data. Copyright © 2012 Elsevier B.V. All rights reserved.

  9. Reducing workpieces to their base geometry for multi-step incremental forming using manifold harmonics

    NASA Astrophysics Data System (ADS)

    Carette, Yannick; Vanhove, Hans; Duflou, Joost

    2018-05-01

    Single Point Incremental Forming is a flexible process that is well-suited for small batch production and rapid prototyping of complex sheet metal parts. The distributed nature of the deformation process and the unsupported sheet imply that controlling the final accuracy of the workpiece is challenging. To improve the process limits and the accuracy of SPIF, the use of multiple forming passes has been proposed and discussed by a number of authors. Most methods use multiple intermediate models, where the previous one is strictly smaller than the next one, while gradually increasing the workpieces' wall angles. Another method that can be used is the manufacture of a smoothed-out "base geometry" in the first pass, after which more detailed features can be added in subsequent passes. In both methods, the selection of these intermediate shapes is freely decided by the user. However, their practical implementation in the production of complex freeform parts is not straightforward. The original CAD model can be manually adjusted or completely new CAD models can be created. This paper discusses an automatic method that is able to extract the base geometry from a full STL-based CAD model in an analytical way. Harmonic decomposition is used to express the final geometry as the sum of individual surface harmonics. It is then possible to filter these harmonic contributions to obtain a new CAD model with a desired level of geometric detail. This paper explains the technique and its implementation, as well as its use in the automatic generation of multi-step geometries.
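
    A compact sketch of the harmonic filtering: build a graph Laplacian from the mesh edges (uniform weights here, rather than the cotangent weights usually preferred for manifold harmonics), expand the vertex coordinates in its lowest-frequency eigenvectors, and rebuild the geometry from the first k modes only. The noisy ring below is a toy mesh standing in for an STL model.

      import numpy as np
      from scipy.sparse import csr_matrix, diags
      from scipy.sparse.linalg import eigsh

      def low_pass_mesh(verts, edges, k=20):
          """Keep only the k lowest-frequency harmonic modes of the geometry."""
          n = len(verts)
          i, j = np.asarray(edges).T
          w = np.ones(len(i))
          A = csr_matrix((np.r_[w, w], (np.r_[i, j], np.r_[j, i])), shape=(n, n))
          L = diags(np.asarray(A.sum(axis=1)).ravel()) - A    # graph Laplacian
          _, U = eigsh(L, k=k, sigma=-1e-8, which="LM")       # lowest modes
          return U @ (U.T @ verts)                            # project, rebuild

      n = 200                                                 # noisy ring "mesh"
      theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
      rng = np.random.default_rng(0)
      verts = np.c_[np.cos(theta), np.sin(theta), 0.1 * rng.standard_normal(n)]
      edges = [(i, (i + 1) % n) for i in range(n)]
      base = low_pass_mesh(verts, edges, k=9)                 # smooth base shape
      print(np.abs(base[:, 2]).max() < np.abs(verts[:, 2]).max())   # -> True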

  10. Web-Based Phylogenetic Assignment Tool for Analysis of Terminal Restriction Fragment Length Polymorphism Profiles of Microbial Communities

    PubMed Central

    Kent, Angela D.; Smith, Dan J.; Benson, Barbara J.; Triplett, Eric W.

    2003-01-01

    Culture-independent DNA fingerprints are commonly used to assess the diversity of a microbial community. However, relating species composition to community profiles produced by community fingerprint methods is not straightforward. Terminal restriction fragment length polymorphism (T-RFLP) is a community fingerprint method in which phylogenetic assignments may be inferred from the terminal restriction fragment (T-RF) sizes through the use of web-based resources that predict T-RF sizes for known bacteria. The process quickly becomes computationally intensive due to the need to analyze profiles produced by multiple restriction digests and the complexity of profiles generated by natural microbial communities. A web-based tool is described here that rapidly generates phylogenetic assignments from submitted community T-RFLP profiles based on a database of fragments produced by known 16S rRNA gene sequences. Users have the option of submitting a customized database generated from unpublished sequences or from a gene other than the 16S rRNA gene. This phylogenetic assignment tool allows users to employ T-RFLP to simultaneously analyze microbial community diversity and species composition. An analysis of the variability of bacterial species composition throughout the water column in a humic lake was carried out to demonstrate the functionality of the phylogenetic assignment tool. This method was validated by comparing the results generated by this program with results from a 16S rRNA gene clone library. PMID:14602639

  11. Mechanical testing of hydrogels in cartilage tissue engineering: beyond the compressive modulus.

    PubMed

    Xiao, Yinghua; Friis, Elizabeth A; Gehrke, Stevin H; Detamore, Michael S

    2013-10-01

    Injuries to articular cartilage result in significant pain to patients and high medical costs. Unfortunately, cartilage repair strategies have been notoriously unreliable and/or complex. Biomaterial-based tissue-engineering strategies offer great promise, including the use of hydrogels to regenerate articular cartilage. Mechanical integrity is arguably the most important functional outcome of engineered cartilage, although mechanical testing of hydrogel-based constructs to date has focused primarily on deformation rather than failure properties. In addition to deformation testing, as the field of cartilage tissue engineering matures, this community will benefit from the addition of mechanical failure testing to outcome analyses, given the crucial clinical importance of the success of engineered constructs. However, there is a tremendous disparity in the methods used to evaluate mechanical failure of hydrogels and articular cartilage. In an effort to bridge the gap in mechanical testing methods of articular cartilage and hydrogels in cartilage regeneration, this review classifies the different toughness measurements for each. The urgency for identifying the common ground between these two disparate fields is high, as mechanical failure is ready to stand alongside stiffness as a functional design requirement. In comparing toughness measurement methods between hydrogels and cartilage, we recommend that the best option for evaluating mechanical failure of hydrogel-based constructs for cartilage tissue engineering may be tensile testing based on the single edge notch test, in part because specimen preparation is more straightforward and a related American Society for Testing and Materials (ASTM) standard can be adopted in a fracture mechanics context.

  12. Climate-based models for pulsed resources improve predictability of consumer population dynamics: outbreaks of house mice in forest ecosystems.

    PubMed

    Holland, E Penelope; James, Alex; Ruscoe, Wendy A; Pech, Roger P; Byrom, Andrea E

    2015-01-01

    Accurate predictions of the timing and magnitude of consumer responses to episodic seeding events (masts) are important for understanding ecosystem dynamics and for managing outbreaks of invasive species generated by masts. While models relating consumer populations to resource fluctuations have been developed successfully for a range of natural and modified ecosystems, a critical gap that needs addressing is better prediction of resource pulses. A recent model used change in summer temperature from one year to the next (ΔT) for predicting masts for forest and grassland plants in New Zealand. We extend this climate-based method in the framework of a model for consumer-resource dynamics to predict invasive house mouse (Mus musculus) outbreaks in forest ecosystems. Compared with previous mast models based on absolute temperature, the ΔT method for predicting masts resulted in an improved model for mouse population dynamics. There was also a threshold effect of ΔT on the likelihood of an outbreak occurring. The improved climate-based method for predicting resource pulses and consumer responses provides a straightforward rule of thumb for determining, with one year's advance warning, whether management intervention might be required in invaded ecosystems. The approach could be applied to consumer-resource systems worldwide where climatic variables are used to model the size and duration of resource pulses, and may have particular relevance for ecosystems where global change scenarios predict increased variability in climatic events.

  13. SNDR Limits of Oscillator-Based Sensor Readout Circuits

    PubMed Central

    Buffa, Cesare; Wiesbauer, Andreas; Hernandez, Luis

    2018-01-01

    This paper analyzes the influence of phase noise and distortion on the performance of oscillator-based sensor data acquisition systems. Circuit noise inherent to the oscillator circuit manifests as phase noise and limits the SNR. Moreover, oscillator nonlinearity generates distortion for large input signals. Phase noise analysis of oscillators is well known in the literature, but the relationship between phase noise and the SNR of an oscillator-based sensor is not straightforward. This paper proposes a model to estimate the influence of phase noise on the performance of an oscillator-based system by reflecting the phase noise to the oscillator input. The proposed model is based on periodic steady-state analysis tools to predict the SNR of the oscillator. The accuracy of this model has been validated by both simulation and experiment in a 130 nm CMOS prototype. We also propose a method to estimate the SNDR and the dynamic range of an oscillator-based readout circuit that improves the simulation time by more than one order of magnitude compared to standard time-domain simulations. This speed-up enables the optimization and verification of this kind of system with iterative algorithms. PMID:29401646

  14. Refractive collimation beam shaper design and sensitivity analysis using a free-form profile construction method.

    PubMed

    Tsai, Chung-Yu

    2017-07-01

    A refractive laser beam shaper comprising two free-form profiles is presented. The profiles are designed using a free-form profile construction method such that each incident ray is directed in a certain user-specified direction or to a particular point on the target surface so as to achieve the required illumination distribution of the output beam. The validity of the proposed design method is demonstrated by means of ZEMAX simulations. The method is mathematically straightforward and easily implemented in computer code. It thus provides a convenient tool for the design and sensitivity analysis of laser beam shapers and similar optical components.

  15. Anatomising proton NMR spectra with pure shift 2D J-spectroscopy: A cautionary tale

    NASA Astrophysics Data System (ADS)

    Kiraly, Peter; Foroozandeh, Mohammadali; Nilsson, Mathias; Morris, Gareth A.

    2017-09-01

    Analysis of proton NMR spectra has been a key tool in structure determination for over 60 years. A classic tool is 2D J-spectroscopy, but common problems are the difficulty of obtaining the absorption mode lineshapes needed for accurate results, and the need for a 45° shear of the final 2D spectrum. A novel 2D NMR method is reported here that allows straightforward determination of homonuclear couplings, using a modified version of the PSYCHE method to suppress couplings in the direct dimension. The method illustrates the need for care when combining pure shift data acquisition with multiple pulse methods.

  16. Computing Instantaneous Frequency by normalizing Hilbert Transform

    NASA Technical Reports Server (NTRS)

    Huang, Norden E. (Inventor)

    2005-01-01

    This invention presents the Normalized Amplitude Hilbert Transform (NAHT) and the Normalized Hilbert Transform (NHT), both of which are new methods for computing Instantaneous Frequency. The method is designed specifically to circumvent the limitations set by the Bedrosian and Nuttall theorems, and to provide a sharp local measure of error when the quadrature and the Hilbert Transform do not agree. The motivation for this method is that straightforward application of the Hilbert Transform, followed by taking the derivative of the phase angle as the Instantaneous Frequency (IF), leads to a mistake that remains common to this date. In order to make the Hilbert Transform method work, the data have to obey certain restrictions.

  17. Computing Instantaneous Frequency by normalizing Hilbert Transform

    DOEpatents

    Huang, Norden E.

    2005-05-31

    This invention presents the Normalized Amplitude Hilbert Transform (NAHT) and the Normalized Hilbert Transform (NHT), both of which are new methods for computing Instantaneous Frequency. The method is designed specifically to circumvent the limitations set by the Bedrosian and Nuttall theorems, and to provide a sharp local measure of error when the quadrature and the Hilbert Transform do not agree. The motivation for this method is that straightforward application of the Hilbert Transform, followed by taking the derivative of the phase angle as the Instantaneous Frequency (IF), leads to a mistake that remains common to this date. In order to make the Hilbert Transform method work, the data have to obey certain restrictions.
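
    As a concrete illustration of the two records above, here is a minimal Python sketch of amplitude-normalized instantaneous-frequency estimation. It is a sketch in the spirit of the NHT, not the patented algorithm: the envelope is a single cubic spline through the maxima of |x| (an assumption standing in for Huang's iterative spline normalization).

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import hilbert, find_peaks

def instantaneous_frequency(x, fs):
    """Amplitude-normalize x, then take d(phase)/dt of its analytic signal."""
    t = np.arange(len(x)) / fs
    peaks, _ = find_peaks(np.abs(x))                       # empirical envelope nodes
    envelope = CubicSpline(t[peaks], np.abs(x)[peaks])(t)  # spline envelope
    x_norm = x / np.maximum(envelope, 1e-12)               # near-unit-amplitude carrier
    phase = np.unwrap(np.angle(hilbert(x_norm)))
    return np.gradient(phase, t) / (2.0 * np.pi)           # instantaneous frequency, Hz

# Example: amplitude-modulated chirp sweeping 5 -> 15 Hz over one second.
fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
x = (1.0 + 0.5 * t) * np.cos(2.0 * np.pi * (5.0 * t + 5.0 * t ** 2))
print(instantaneous_frequency(x, fs)[100:105])  # roughly 6 Hz near t = 0.1 s
```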

  18. [Application of a trans-membrane migration method in the study of human sperm motility: a review].

    PubMed

    Hong, C Y

    1991-09-01

    The transmembrane migration method is a bioassay specifically designed to study drug effects on human sperm motility. It was first used in the study of sperm-immobilizing agents, which have a membrane-stabilizing effect. It was then used to investigate the relationship between calcium ions and sperm motility. Recently, this method has been used to screen drugs that stimulate sperm motility. It has also been modified for the study of porcine sperm motility. Computer-assisted semen analysis showed that the transmembrane migration method is most suitable for studying drug effects on the rapid and straightforward motility of sperm.

  19. Computation of Sensitivity Derivatives of Navier-Stokes Equations using Complex Variables

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.

    2004-01-01

    Accurate computation of sensitivity derivatives is becoming an important item in Computational Fluid Dynamics (CFD) because of recent emphasis on using nonlinear CFD methods in aerodynamic design, optimization, stability and control related problems. Several techniques are available to compute gradients or sensitivity derivatives of desired flow quantities or cost functions with respect to selected independent (design) variables. Perhaps the most common and oldest method is to use straightforward finite differences for the evaluation of sensitivity derivatives. Although very simple, this method is prone to errors associated with the choice of step sizes and can be cumbersome for geometric variables. The cost per design variable for computing sensitivity derivatives with central differencing is at least equal to the cost of three full analyses, but is usually much larger in practice due to the difficulty in choosing step sizes. Another approach gaining popularity is the use of Automatic Differentiation software (such as ADIFOR) to process the source code, which in turn can be used to evaluate the sensitivity derivatives of preselected functions with respect to chosen design variables. In principle, this approach is also very straightforward and quite promising. The main drawback is the large memory requirement, because memory use increases linearly with the number of design variables. ADIFOR software can also be cumbersome for large CFD codes and has not yet reached full maturity for production codes, especially in parallel computing environments.
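
    For context, the complex-variable technique named in the title is commonly implemented as the "complex-step" derivative: perturb the input along the imaginary axis and read the derivative from the imaginary part, so no subtraction (and hence no cancellation error) occurs. A minimal sketch with an illustrative test function, not the flow solver itself:

```python
import numpy as np

def complex_step_derivative(f, x, h=1e-30):
    # The step can be made tiny because no difference of nearly
    # equal values is formed, unlike finite differences.
    return np.imag(f(x + 1j * h)) / h

# Classic test function (Squire & Trapp); any analytic f works.
f = lambda x: np.exp(x) / np.sqrt(np.sin(x) ** 3 + np.cos(x) ** 3)
print(complex_step_derivative(f, 1.5))  # agrees with the analytic derivative
```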

  20. Application of artificial neural networks and genetic algorithms to modeling molecular electronic spectra in solution

    NASA Astrophysics Data System (ADS)

    Lilichenko, Mark; Kelley, Anne Myers

    2001-04-01

    A novel approach is presented for finding the vibrational frequencies, Franck-Condon factors, and vibronic linewidths that best reproduce typical, poorly resolved electronic absorption (or fluorescence) spectra of molecules in condensed phases. While calculation of the theoretical spectrum from the molecular parameters is straightforward within the harmonic oscillator approximation for the vibrations, "inversion" of an experimental spectrum to deduce these parameters is not. Standard nonlinear least-squares fitting methods such as Levenberg-Marquardt are highly susceptible to becoming trapped in local minima in the error function unless very good initial guesses for the molecular parameters are made. Here we employ a genetic algorithm to force a broad search through parameter space and couple it with the Levenberg-Marquardt method to speed convergence to each local minimum. In addition, a neural network trained on a large set of synthetic spectra is used to provide an initial guess for the fitting parameters and to narrow the range searched by the genetic algorithm. The combined algorithm provides excellent fits to a variety of single-mode absorption spectra with experimentally negligible errors in the parameters. It converges more rapidly than the genetic algorithm alone and more reliably than the Levenberg-Marquardt method alone, and is robust in the presence of spectral noise. Extensions to multimode systems, and/or to include other spectroscopic data such as resonance Raman intensities, are straightforward.
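
    A hedged sketch of the hybrid global-plus-local strategy described above. SciPy's differential evolution stands in for the genetic algorithm (a substitution, not the authors' implementation), and a single-Gaussian "spectrum" is invented for the example; Levenberg-Marquardt then polishes the candidate returned by the global stage.

```python
import numpy as np
from scipy.optimize import differential_evolution, least_squares

x = np.linspace(0, 10, 200)
true_p = (2.0, 1.3, 0.4)  # amplitude, center, width of the toy "spectrum"
model = lambda p, x: p[0] * np.exp(-((x - p[1]) / p[2]) ** 2)
y_obs = model(true_p, x) + 0.01 * np.random.default_rng(0).normal(size=x.size)

residuals = lambda p: model(p, x) - y_obs
cost = lambda p: np.sum(residuals(p) ** 2)

# Global stage: broad population-based search, robust to poor initial guesses.
result = differential_evolution(cost, bounds=[(0, 5), (0, 10), (0.1, 2)], seed=0)
# Local stage: Levenberg-Marquardt refinement from the global candidate.
fit = least_squares(residuals, result.x, method="lm")
print(fit.x)  # close to true_p despite the noise
```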

  1. Pegasus first mission - Flight results

    NASA Astrophysics Data System (ADS)

    Mosier, Marty; Harris, Gary; Richards, Bob; Rovner, Dan; Carroll, Brent

    On April 5, 1990, after release from a B-52 aircraft at 43,198 ft, the three-stage Pegasus solid-propellant rocket successfully completed its maiden flight by injecting its 423-lb payload into a 273 x 370-nmi 94-deg-inclination orbit. The first flight successfully achieved all mission objectives, validating Pegasus's unique air-launched concept, the vehicle's design, and its straightforward ground processing, integration and test methods.

  2. Developing Confidence Limits For Reliability Of Software

    NASA Technical Reports Server (NTRS)

    Hayhurst, Kelly J.

    1991-01-01

    Technique developed for estimating reliability of software by use of Moranda geometric de-eutrophication model. Pivotal method enables straightforward construction of exact bounds with associated degree of statistical confidence about reliability of software. Confidence limits thus derived provide precise means of assessing quality of software. Limits take into account number of bugs found while testing and effects of sampling variation associated with random order of discovering bugs.

  3. Adding Resistances and Capacitances in Introductory Electricity

    NASA Astrophysics Data System (ADS)

    Efthimiou, C. J.; Llewellyn, R. A.

    2005-09-01

    All introductory physics textbooks, with or without calculus, cover the addition of both resistances and capacitances in series and in parallel as discrete summations. However, none includes problems that involve continuous versions of resistors in parallel or capacitors in series. This paper introduces a method for solving the continuous problems that is logical, straightforward, and within the mathematical preparation of students at the introductory level.
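
    As an example of the continuous limit such problems involve (our illustration, not necessarily one from the paper), consider a conductor of resistivity ρ whose circular cross-section tapers linearly from radius a to b over length L. Summing infinitesimal resistances in series gives, in the usual quasi-one-dimensional approximation,

    \[ R = \int_0^L \frac{\rho\,dx}{A(x)}, \qquad A(x) = \pi\left(a + \frac{(b-a)x}{L}\right)^2 \quad\Longrightarrow\quad R = \frac{\rho L}{\pi a b}, \]

    which reduces to the familiar ρL/(πa²) when b = a. A capacitor with a continuously varying gap or dielectric is treated the same way via 1/C = ∫ dx / (ε A(x)).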

  4. Influences of barriers to cessation and reasons for quitting on substance use among treatment-seeking smokers who report heavy drinking

    PubMed Central

    Foster, Dawn W.; Schmidt, Norman B.; Zvolensky, Michael J.

    2015-01-01

    Objectives We examined behavioral and cognitively-based quit processes among concurrent alcohol and tobacco users and assessed whether smoking and drinking were differentially influenced. Methods Participants were 200 treatment-seeking smokers (37.50% female; Mage = 30.72; SD = 12.68) who reported smoking an average of 10 or more cigarettes daily for at least one year. Results Barriers to cessation (BCS) and reasons for quitting (RFQ) were generally correlated with substance use. BCS moderated the relationship between quit methods and cigarette use such that quit methods were negatively associated with smoking, particularly among those with more BCS. RFQ moderated the association between quit methods and cigarette use such that quit methods were negatively linked with smoking among those with fewer RFQ, but positively linked with smoking among those with more RFQ. Two three-way interactions emerged. The first indicated that among individuals with fewer RFQ, quit methods were negatively associated with smoking, and this was strongest among those with more BCS. However, among those with more RFQ, smoking and quit methods were positively associated, particularly among those with more BCS. The second showed that among those with fewer RFQ, quit methods were negatively linked with drinking frequency, and this was strongest among those with more BCS. However, among those with fewer BCS, drinking and quit methods were positively linked. Conclusions The relationship between behavioral and cognitively-based quit processes and substance use is not straightforward. There may be concurrent substance-using individuals for whom these processes might be associated with increased substance use. PMID:26949566

  5. p-TSA/Base-Promoted Propargylation/Cyclization of β-Ketothioamides for the Regioselective Synthesis of Highly Substituted (Hydro)thiophenes.

    PubMed

    Nandi, Ganesh Chandra; Singh, Maya Shankar

    2016-07-15

    Metal-free, p-toluenesulfonic acid (p-TSA)-mediated, straightforward propargylation of β-ketothioamides with aryl propargyl alcohol has been achieved at room temperature. In addition, the reaction also provided thiazole rings as byproducts. Furthermore, the propargylated thioamides undergo intramolecular 1,5-cyclization to afford fully substituted (hydro)thiophenes in the presence of base. Notably, the approach is pot, atom, and step economical (PASE).

  6. Cleft lift procedure for pilonidal disease: technique and perioperative management.

    PubMed

    Favuzza, J; Brand, M; Francescatti, A; Orkin, B

    2015-08-01

    Pilonidal disease is a common condition affecting young patients. It is often disruptive to their lifestyle due to recurrent abscesses or chronic wound drainage. The most common surgical treatment, "cystectomy," removes useful tissue unnecessarily and does not address the etiology of the condition. Herein, we describe the etiology of pilonidal disease and our technique for definitive management using the cleft lift procedure, including perioperative management and surgical technique. We have used the cleft lift procedure in nearly 200 patients with pilonidal disease, in both primary and salvage settings, with an equally high rate of success in both. It results in a closed wound with relatively minimal discomfort and straightforward wound care. We have described our current approach to recurrent and complex pilonidal disease using the cleft lift procedure. Once learned, the cleft lift procedure is a straightforward and highly successful solution to a chronic and challenging condition.

  7. Free space optical communication based on pulsed lasers

    NASA Astrophysics Data System (ADS)

    Drozd, Tadeusz; Mierczyk, Zygmunt; Zygmunt, Marek; Wojtanowski, Jacek

    2016-12-01

    Most current optical data transmission systems are based on continuous wave (cw) lasers. This results from the tendency to increase data transmission speed and from the simplicity of implementation (straightforward modulation). Pulsed lasers, which find many applications in a variety of industrial, medical and military systems, are not common in this field. Depending on the type, pulsed lasers can generate instantaneous power many times greater than that of cw lasers. As such, they seem very attractive for use in data transmission technology, especially for the potentially larger ranges of transmission, or in adverse atmospheric conditions where low-power cw-laser-based transmission is no longer feasible. It is also a very practical idea to implement data transmission capability in pulsed laser devices already in use, increasing the functionality of this type of equipment. At the Institute of Optoelectronics at the Military University of Technology, a unique method of data transmission based on pulsed laser radiation has been developed. This method is discussed in the paper in terms of both data transmission speed and transmission range. Additionally, in order to verify the theoretical assumptions, modules for voice and data transmission were developed and practically tested, which is also reported, including measurements of Bit Error Rate (BER) and performance-versus-range analysis.

  8. Nonenzymatic glucose sensor based on renewable electrospun Ni nanoparticle-loaded carbon nanofiber paste electrode.

    PubMed

    Liu, Yang; Teng, Hong; Hou, Haoqing; You, Tianyan

    2009-07-15

    A novel nonenzymatic glucose sensor was developed based on a renewable Ni nanoparticle-loaded carbon nanofiber paste (NiCFP) electrode. The NiCF nanocomposite was prepared by a combination of the electrospinning technique and a thermal treatment method. Scanning electron microscopy (SEM) and transmission electron microscopy (TEM) images showed that large amounts of spherical nanoparticles were well dispersed on the surface of, or embedded in, the carbon nanofibers. The nanoparticles were composed of Ni and NiO, as revealed by energy dispersive X-ray spectroscopy (EDX) and X-ray powder diffraction (XRD). In application to nonenzymatic glucose determination, the renewable NiCFP electrodes, constructed by simply mixing the electrospun nanocomposite with mineral oil, exhibited a strong and fast amperometric response without being poisoned by chloride ions. A low detection limit of 1 microM and a wide linear range from 2 microM to 2.5 mM (R=0.9997) were obtained. The current response of the proposed glucose sensor was highly sensitive and stable, attributable to the electrocatalytic performance of the firmly embedded Ni nanoparticles as well as the chemical inertness of the carbon-based electrode. The good analytical performance, low cost and straightforward preparation method make this novel electrode material promising for the development of an effective glucose sensor.

  9. Gathering opinion leader data for a tailored implementation intervention in secondary healthcare: a randomised trial.

    PubMed

    Farley, Katherine; Hanbury, Andria; Thompson, Carl

    2014-03-10

    Health professionals' behaviour is a key component in compliance with evidence-based recommendations. Opinion leaders are an oft-used means of influencing such behaviours in implementation studies, but reliably and cost-effectively identifying them is not straightforward. Survey and questionnaire-based data collection methods have potential, and carefully chosen items can, in theory, both aid identification of opinion leaders and help in the design of the implementation strategy itself. This study compares two methods of identifying opinion leaders for behaviour-change interventions. Healthcare professionals working in a single UK mental health NHS Foundation Trust were randomly allocated to one of two questionnaires. The first, slightly longer questionnaire asked for multiple nominations of opinion leaders, with specific information about the nature of the relationship with each nominee. The second, shorter version asked simply for a list of named "champions" but no additional information. We compared, using chi-square statistics, both the questionnaire response rates and the number of health professionals likely to be influenced by the opinion leaders (i.e. the "coverage" rates) for the two questionnaire conditions. Both questionnaire versions had low response rates: only 15% of health professionals named colleagues in the longer questionnaire and 13% in the shorter version. The opinion leaders identified by both methods had a low number of contacts (range of coverage, 2-6 each). There were no significant differences in response rates or coverage between the two identification methods. The low response and population coverage rates for both questionnaire versions suggest that alternative methods of identifying opinion leaders for implementation studies may be more effective. Future research should seek to identify and evaluate alternative, non-questionnaire-based methods of identifying opinion leaders in order to maximise their potential in organisational behaviour change interventions.

  10. STELLAR COLOR REGRESSION: A SPECTROSCOPY-BASED METHOD FOR COLOR CALIBRATION TO A FEW MILLIMAGNITUDE ACCURACY AND THE RECALIBRATION OF STRIPE 82

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yuan, Haibo; Liu, Xiaowei; Xiang, Maosheng

    In this paper we propose a spectroscopy-based stellar color regression (SCR) method to perform accurate color calibration for modern imaging surveys, taking advantage of millions of stellar spectra now available. The method is straightforward, insensitive to systematic errors in the spectroscopically determined stellar atmospheric parameters, applicable to regions that are effectively covered by spectroscopic surveys, and capable of delivering an accuracy of a few millimagnitudes for color calibration. As an illustration, we have applied the method to the Sloan Digital Sky Survey (SDSS) Stripe 82 data. With a total number of 23,759 spectroscopically targeted stars, we have mapped out the small but strongly correlated color zero-point errors present in the photometric catalog of Stripe 82, and we improve the color calibration by a factor of two to three. Our study also reveals some small but significant magnitude-dependent errors in the z band for some charge-coupled devices (CCDs). Such errors are likely to be present in all the SDSS photometric data. Our results are compared with those from a completely independent test based on the intrinsic colors of red galaxies presented by Ivezić et al. The comparison, as well as other tests, shows that the SCR method has achieved a color calibration internally consistent at a level of about 5 mmag in u – g, 3 mmag in g – r, and 2 mmag in r – i and i – z. Given the power of the SCR method, we discuss briefly the potential benefits of applying the method to existing, ongoing, and upcoming imaging surveys.
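
    A schematic numerical illustration of the regression-then-residuals idea (synthetic data and an assumed linear color model, not the authors' pipeline): fit stellar color against spectroscopic parameters over the whole sample, then read relative zero-point offsets from per-detector median residuals.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5000
T = rng.uniform(4000, 7000, N)              # effective temperature, K
feh = rng.uniform(-2.0, 0.4, N)             # metallicity [Fe/H]
ccd = rng.integers(0, 2, N)                 # two detector regions
offset = np.where(ccd == 0, +0.005, -0.005) # hidden +/-5 mmag zero-point errors

# Synthetic observed color: intrinsic (parameter-driven) + offset + noise.
color = 2.0 - 2e-4 * T + 0.02 * feh + offset + rng.normal(0, 0.01, N)

# Regress intrinsic color on spectroscopic parameters (assumed linear form).
A = np.column_stack([np.ones(N), T, feh])
coef, *_ = np.linalg.lstsq(A, color, rcond=None)
residual = color - A @ coef

for c in (0, 1):  # per-region median residual exposes the relative offsets
    print(c, f"{np.median(residual[ccd == c]) * 1e3:+.1f} mmag")
```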

  11. Inverse Planning Approach for 3-D MRI-Based Pulse-Dose Rate Intracavitary Brachytherapy in Cervix Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chajon, Enrique; Dumas, Isabelle; Touleimat, Mahmoud B.Sc.

    2007-11-01

    Purpose: The purpose of this study was to evaluate the inverse planning simulated annealing (IPSA) software for the optimization of dose distribution in patients with cervix carcinoma treated with MRI-based pulsed-dose rate intracavitary brachytherapy. Methods and Materials: Thirty patients treated with a technique using a customized vaginal mold were selected. Dose-volume parameters obtained using the IPSA method were compared with the classic manual optimization method (MOM). Target volumes and organs at risk were delineated according to the Gynecological Brachytherapy Group/European Society for Therapeutic Radiology and Oncology recommendations. Because the pulsed dose rate program was based on clinical experience with low dose rate, dwell time values were required to be as homogeneous as possible. To achieve this goal, different modifications of the IPSA program were applied. Results: The first dose distribution calculated by the IPSA algorithm proposed a heterogeneous distribution of dwell time positions. The mean D90, D100, and V100 calculated with both methods did not differ significantly when the constraints were applied. For the bladder, doses calculated at the ICRU reference point derived from the MOM differed significantly from the doses calculated by the IPSA method (mean, 58.4 vs. 55 Gy respectively; p = 0.0001). For the rectum, the doses calculated at the ICRU reference point were also significantly lower with the IPSA method. Conclusions: The inverse planning method provided fast and automatic solutions for the optimization of dose distribution. However, the straightforward use of IPSA generated significant heterogeneity in dwell time values. Caution is therefore recommended in the use of inverse optimization tools, together with clinically relevant study of new dosimetric rules.

  12. A note on improved F-expansion method combined with Riccati equation applied to nonlinear evolution equations.

    PubMed

    Islam, Md Shafiqul; Khan, Kamruzzaman; Akbar, M Ali; Mastroberardino, Antonio

    2014-10-01

    The purpose of this article is to present an analytical method, namely the improved F-expansion method combined with the Riccati equation, for finding exact solutions of nonlinear evolution equations. The present method is capable of calculating all branches of solutions simultaneously, even if multiple solutions are very close and thus difficult to distinguish with numerical techniques. To verify the computational efficiency, we consider the modified Benjamin-Bona-Mahony equation and the modified Korteweg-de Vries equation. Our results reveal that the method is a very effective and straightforward way of formulating the exact travelling wave solutions of nonlinear wave equations arising in mathematical physics and engineering.

  13. A note on improved F-expansion method combined with Riccati equation applied to nonlinear evolution equations

    PubMed Central

    Islam, Md. Shafiqul; Khan, Kamruzzaman; Akbar, M. Ali; Mastroberardino, Antonio

    2014-01-01

    The purpose of this article is to present an analytical method, namely the improved F-expansion method combined with the Riccati equation, for finding exact solutions of nonlinear evolution equations. The present method is capable of calculating all branches of solutions simultaneously, even if multiple solutions are very close and thus difficult to distinguish with numerical techniques. To verify the computational efficiency, we consider the modified Benjamin–Bona–Mahony equation and the modified Korteweg-de Vries equation. Our results reveal that the method is a very effective and straightforward way of formulating the exact travelling wave solutions of nonlinear wave equations arising in mathematical physics and engineering. PMID:26064530
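
    For orientation, F-expansion-type methods typically posit a finite series in an auxiliary function F that solves a Riccati equation; one common template (a general sketch, not necessarily the authors' exact normalization) is

    \[ u(\xi) = \sum_{i=-N}^{N} a_i F(\xi)^i, \qquad \frac{dF}{d\xi} = \sigma + F^2, \]

    with, e.g., \(F(\xi) = -\sqrt{-\sigma}\,\tanh(\sqrt{-\sigma}\,\xi)\) for \(\sigma < 0\) and \(F(\xi) = \sqrt{\sigma}\,\tan(\sqrt{\sigma}\,\xi)\) for \(\sigma > 0\). Balancing the highest-order derivative against the strongest nonlinearity fixes N, and substituting the series reduces the PDE to algebraic equations for the coefficients \(a_i\).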

  14. Theory of wavelet-based coarse-graining hierarchies for molecular dynamics.

    PubMed

    Rinderspacher, Berend Christopher; Bardhan, Jaydeep P; Ismail, Ahmed E

    2017-07-01

    We present a multiresolution approach to compressing the degrees of freedom and potentials associated with molecular dynamics, such as the bond potentials. The approach suggests a systematic way to accelerate large-scale molecular simulations with more than two levels of coarse graining, particularly applications of polymeric materials. In particular, we derive explicit models for (arbitrarily large) linear (homo)polymers and iterative methods to compute large-scale wavelet decompositions from fragment solutions. This approach does not require explicit preparation of atomistic-to-coarse-grained mappings, but instead uses the theory of diffusion wavelets for graph Laplacians to develop system-specific mappings. Our methodology leads to a hierarchy of system-specific coarse-grained degrees of freedom that provides a conceptually clear and mathematically rigorous framework for modeling chemical systems at relevant model scales. The approach is capable of automatically generating as many coarse-grained model scales as necessary, that is, to go beyond the two scales in conventional coarse-grained strategies; furthermore, the wavelet-based coarse-grained models explicitly link time and length scales. Finally, a straightforward method for the reintroduction of omitted degrees of freedom is presented, which plays a major role in maintaining model fidelity in long-time simulations and in capturing emergent behaviors.

  15. Background-independent condensed matter models for quantum gravity

    NASA Astrophysics Data System (ADS)

    Hamma, Alioscia; Markopoulou, Fotini

    2011-09-01

    A number of recent proposals on a quantum theory of gravity are based on the idea that spacetime geometry and gravity are derivative concepts and only apply at an approximate level. There are two fundamental challenges to any such approach. At the conceptual level, there is a clash between the 'timelessness' of general relativity and emergence. Secondly, the lack of a fundamental spacetime renders difficult the straightforward application of well-known methods of statistical physics to the problem. We recently initiated a study of such problems using spin systems based on the evolution of quantum networks with no a priori geometric notions as models for emergent geometry and gravity. In this paper, we review two such models. The first model is a model of emergent (flat) space and matter, and we show how to use methods from quantum information theory to derive features such as the speed of light from a non-geometric quantum system. The second model exhibits interacting matter and geometry, with the geometry defined by the behavior of matter. This model has primitive notions of gravitational attraction that we illustrate with a toy black hole, and exhibits entanglement between matter and geometry and thermalization of the quantum geometry.

  16. Robust and Simple Non-Reflecting Boundary Conditions for the Euler Equations - A New Approach based on the Space-Time CE/SE Method

    NASA Technical Reports Server (NTRS)

    Chang, S.-C.; Himansu, A.; Loh, C.-Y.; Wang, X.-Y.; Yu, S.-T.J.

    2005-01-01

    This paper reports on a significant advance in the area of nonreflecting boundary conditions (NRBCs) for unsteady flow computations. As a part of the development of the space-time conservation element and solution element (CE/SE) method, sets of NRBCs for 1D Euler problems are developed without using any characteristics-based techniques. These conditions are much simpler than those commonly reported in the literature, yet so robust that they are applicable to subsonic, transonic and supersonic flows even in the presence of discontinuities. In addition, the straightforward multidimensional extensions of the present 1D NRBCs have been shown numerically to be equally simple and robust. The paper details the theoretical underpinning of these NRBCs, and explains their unique robustness and accuracy in terms of the conservation of space-time fluxes. Some numerical results for an extended Sod's shock-tube problem, illustrating the effectiveness of the present NRBCs, are included, together with an associated simple Fortran computer program. As a preliminary to the present development, a review of the basic CE/SE schemes is also included.

  17. TEA: the epigenome platform for Arabidopsis methylome study.

    PubMed

    Su, Sheng-Yao; Chen, Shu-Hwa; Lu, I-Hsuan; Chiang, Yih-Shien; Wang, Yu-Bin; Chen, Pao-Yang; Lin, Chung-Yen

    2016-12-22

    Bisulfite sequencing (BS-seq) has become a standard technology to profile genome-wide DNA methylation at single-base resolution. It allows researchers to conduct genome-wide cytosine methylation analyses on questions of genomic imprinting, transcriptional regulation, and cellular development and differentiation. A single data set from a BS-Seq experiment is resolved into many features according to sequence context, making methylome data analysis and visualization a complex task. We developed a streamlined platform, TEA, for analyzing and visualizing data from whole-genome BS-Seq (WGBS) experiments conducted in the model plant Arabidopsis thaliana. To capture the essence of genome methylation levels and to run efficiently online, we introduce a straightforward method for measuring genome methylation in each sequence context by gene. The method is scripted in Java to process BS-Seq mapping results. Through a simple data uploading process, the TEA server deploys a web-based platform for deep analysis by linking data to an updated Arabidopsis annotation database and toolkits. TEA is an intuitive and efficient online platform for analyzing the Arabidopsis genomic DNA methylation landscape. It provides several ways to help users exploit WGBS data. TEA is freely accessible for academic users at http://tea.iis.sinica.edu.tw.
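
    A hedged Python sketch of a per-gene, per-context methylation summary of the kind described above (TEA's own routine is scripted in Java; the "weighted methylation level" definition used here is a common one in the field, not necessarily TEA's exact formula):

```python
# Weighted methylation level of one gene in one context:
# (sum of methylated read counts) / (sum of total read counts)
# over the gene's cytosines in that context. Data are invented.
def gene_methylation(calls, gene_positions, context="CpG"):
    """calls maps position -> (methylated_reads, total_reads, context)."""
    meth = total = 0
    for pos in gene_positions:
        m, t, ctx = calls.get(pos, (0, 0, None))
        if ctx == context:
            meth += m
            total += t
    return meth / total if total else float("nan")

calls = {101: (8, 10, "CpG"), 150: (1, 12, "CHH"), 199: (5, 9, "CpG")}
print(gene_methylation(calls, [101, 150, 199]))  # (8 + 5) / (10 + 9) ~ 0.68
```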

  18. Vapor and healing treatment for CH3NH3PbI3-xClx films toward large-area perovskite solar cells

    NASA Astrophysics Data System (ADS)

    Gouda, Laxman; Gottesman, Ronen; Tirosh, Shay; Haltzi, Eynav; Hu, Jiangang; Ginsburg, Adam; Keller, David A.; Bouhadana, Yaniv; Zaban, Arie

    2016-03-01

    Hybrid methyl-ammonium lead trihalide perovskites are promising low-cost materials for use in solar cells and other optoelectronic applications. With a certified photovoltaic conversion efficiency record of 20.1%, scale-up for commercial purposes is already underway. However, preparation of large-area perovskite films remains a challenge, and films of perovskites on large electrodes suffer from non-uniform performance. Thus, production and characterization of the lateral uniformity of large-area films is a crucial step towards scale-up of devices. In this paper, we present a reproducible method for improving the lateral uniformity and performance of large-area perovskite solar cells (32 cm2). The method is based on methyl-ammonium iodide (MAI) vapor treatment as a new step in the sequential deposition of perovskite films. Following the MAI vapor treatment, we used high throughput techniques to map the photovoltaic performance throughout the large-area device. The lateral uniformity and performance of all photovoltaic parameters (Voc, Jsc, Fill Factor, Photo-conversion efficiency) increased, with an overall improvement in photo-conversion efficiency of ~100% following a vapor treatment at 140 °C. Based on XRD and photoluminescence measurements, we propose that the MAI treatment promotes a "healing effect" in the perovskite film which increases the lateral uniformity across the large-area solar cell. Thus, the straightforward MAI vapor treatment is highly beneficial for large-scale commercialization of perovskite solar cells, regardless of the specific deposition method.

  19. Possible Reasons for Students' Ineffective Reading of Their First-Year University Mathematics Textbooks. Technical Report. No. 2011-2

    ERIC Educational Resources Information Center

    Shepherd, Mary D.; Selden, Annie; Selden, John

    2011-01-01

    This paper reports the observed behaviors and difficulties that eleven precalculus and calculus students exhibited in reading new passages from their mathematics textbooks. To gauge the effectiveness of these students' reading, we asked them to attempt straightforward mathematical tasks, based directly on what they had just read. These …

  20. Discourse Analysis of Interpersonal Meaning to Understand the Discrepancy between Teacher Knowing and Practice

    ERIC Educational Resources Information Center

    Ilhan, Emine Gül Çelebi; Erbas, Ayhan Kürsat

    2016-01-01

    As is well known, bridging teacher knowledge or learning with practice is not a straightforward task. This paper aims to explore this discrepancy between a mathematics teacher's knowing and practices and to offer ways of alignment between the two based on the social/interpersonal meanings and their realization through teacher's discourse. In this…

  1. Accounting for imperfect detection in Hill numbers for biodiversity studies

    USGS Publications Warehouse

    Broms, Kristin M.; Hooten, Mevin B.; Fitzpatrick, Ryan M.

    2015-01-01

    The occupancy-based Hill number estimators are always at their asymptotic values (i.e. as if an infinite number of samples had been taken for the study region), thereby making it easy to compare biodiversity between different assemblages. In addition, the Hill numbers are computed as derived quantities within a Bayesian hierarchical model, allowing for straightforward inference.
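
    For reference, the Hill numbers mentioned above form the standard diversity family, parameterized by the diversity order q, with p_i the relative abundance (here, an occupancy-derived probability) of species i among S species:

    \[ {}^{q}D = \left( \sum_{i=1}^{S} p_i^{\,q} \right)^{1/(1-q)} \quad (q \neq 1), \qquad {}^{1}D = \exp\!\left( -\sum_{i=1}^{S} p_i \ln p_i \right), \]

    so that q = 0 recovers species richness and q = 2 the inverse Simpson concentration.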

  2. Statistical linearization for multi-input/multi-output nonlinearities

    NASA Technical Reports Server (NTRS)

    Lin, Ching-An; Cheng, Victor H. L.

    1991-01-01

    Formulas are derived for the computation of the random-input describing functions for MIMO nonlinearities; these straightforward and rigorous derivations are based on the optimal mean-square linear approximation. The computations involve evaluations of multiple integrals. It is shown that, for certain classes of nonlinearities, multiple-integral evaluations are obviated and the computations are significantly simplified.
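
    The defining least-squares condition behind such describing functions (shown here for a zero-mean random input; a sketch of the general principle rather than the paper's full derivation) is that the linear gain N minimizes E‖f(x) − Nx‖², giving

    \[ N = \mathrm{E}\big[f(\mathbf{x})\,\mathbf{x}^{\mathsf{T}}\big] \left(\mathrm{E}\big[\mathbf{x}\,\mathbf{x}^{\mathsf{T}}\big]\right)^{-1}, \]

    and evaluating these expectations for Gaussian inputs produces the multiple integrals referred to above.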

  3. A Household-Based Distribution-Sensitive Human Development Index: An Empirical Application to Mexico, Nicaragua and Peru

    ERIC Educational Resources Information Center

    Lopez-Calva, Luis F.; Ortiz-Juarez, Eduardo

    2012-01-01

    In measuring human development, one of the main concerns relates to the inclusion of a measure that penalizes inequalities in the distribution of achievements across the population. Using indicators from nationally representative household surveys and census data, this paper proposes a straightforward methodology to estimate a household-based…

  4. Repositioning the substrate activity screening (SAS) approach as a fragment-based method for identification of weak binders.

    PubMed

    Gladysz, Rafaela; Cleenewerck, Matthias; Joossens, Jurgen; Lambeir, Anne-Marie; Augustyns, Koen; Van der Veken, Pieter

    2014-10-13

    Fragment-based drug discovery (FBDD) has evolved into an established approach for "hit" identification. Typically, most applications of FBDD depend on specialised cost- and time-intensive biophysical techniques. The substrate activity screening (SAS) approach has been proposed as a relatively cheap and straightforward alternative for identification of fragments for enzyme inhibitors. We have investigated SAS for the discovery of inhibitors of oncology target urokinase (uPA). Although our results support the key hypotheses of SAS, we also encountered a number of unreported limitations. In response, we propose an efficient modified methodology: "MSAS" (modified substrate activity screening). MSAS circumvents the limitations of SAS and broadens its scope by providing additional fragments and more coherent SAR data. As well as presenting and validating MSAS, this study expands existing SAR knowledge for the S1 pocket of uPA and reports new reversible and irreversible uPA inhibitor scaffolds. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Photocatalytic activity of low temperature oxidized Ti-6Al-4V.

    PubMed

    Unosson, Erik; Persson, Cecilia; Welch, Ken; Engqvist, Håkan

    2012-05-01

    Numerous advanced surface modification techniques exist to improve the bone integration and antibacterial properties of titanium-based implants and prostheses. A simple and straightforward method of obtaining uniform and controlled TiO2 coatings on devices with complex shapes is H2O2 oxidation and hot-water aging. Based on the photoactivated bactericidal properties of TiO2, this study was aimed at optimizing the treatment to achieve high photocatalytic activity. Ti-6Al-4V samples were H2O2-oxidized and hot-water aged for up to 24 and 72 h, respectively. Degradation measurements of rhodamine B during UV-A illumination of the samples showed a near-linear relationship between photocatalytic activity and total treatment time, and a nanoporous coating was observed by scanning electron microscopy. Grazing incidence X-ray diffraction showed a gradual decrease in crystallinity of the surface layer, suggesting that the increase in surface area rather than anatase formation was responsible for the increase in photocatalytic activity.

  6. Fourier-interpolation superresolution optical fluctuation imaging (fSOFi) (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Enderlein, Joerg; Stein, Simon C.; Huss, Anja; Hähnel, Dirk; Gregor, Ingo

    2016-02-01

    Stochastic Optical Fluctuation Imaging (SOFI) is a superresolution fluorescence microscopy technique which enhances the spatial resolution of an image by evaluating the temporal fluctuations of blinking fluorescent emitters. SOFI is not based on the identification and localization of single molecules, as in the widely used Photoactivation Localization Microscopy (PALM) or Stochastic Optical Reconstruction Microscopy (STORM), but computes a superresolved image via temporal cumulants from a recorded movie. A technical challenge is that, when directly applying the SOFI algorithm to a movie of raw images, the pixel size of the final SOFI image is the same as that of the original images, which becomes problematic when the final SOFI resolution is much smaller than this value. In the past, sophisticated cross-correlation schemes have been used for tackling this problem. Here, we present an alternative, exact, straightforward, and simple solution using an interpolation scheme based on Fourier transforms. We exemplify the method on simulated and experimental data.
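
    The Fourier-interpolation step itself is compact: zero-padding the centered 2-D spectrum is exact sinc interpolation of a band-limited image, so a SOFI image can be placed on a finer pixel grid without cross-correlation schemes. A self-contained sketch (upsampling factor and test image are illustrative):

```python
import numpy as np

def fourier_upsample(img, factor=4):
    """Sinc-interpolate img onto a grid 'factor' times finer via FFT zero-padding.

    For simplicity, Nyquist-row splitting subtleties for even sizes are ignored.
    """
    ny, nx = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))
    out = np.zeros((ny * factor, nx * factor), dtype=complex)
    y0 = (ny * factor - ny) // 2
    x0 = (nx * factor - nx) // 2
    out[y0:y0 + ny, x0:x0 + nx] = F          # embed spectrum in the larger grid
    out = np.fft.ifft2(np.fft.ifftshift(out))
    return np.real(out) * factor ** 2        # undo ifft2's larger normalization

img = np.random.rand(64, 64)
hi_res = fourier_upsample(img, factor=4)     # 256 x 256 interpolated image
```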

  7. Fitting Formulae and Constraints for the Existence of S-type and P-type Habitable Zones in Binary Systems

    NASA Astrophysics Data System (ADS)

    Wang, Zhaopeng; Cuntz, Manfred

    2017-10-01

    We derive fitting formulae for the quick determination of the existence of S-type and P-type habitable zones (HZs) in binary systems. Based on previous work, we consider the limits of the climatological HZ in binary systems (which sensitively depend on the system parameters) based on a joint constraint encompassing planetary orbital stability and a habitable region for a possible system planet. Additionally, we employ updated results on planetary climate models obtained by Kopparapu and collaborators. Our results are applied to four P-type systems (Kepler-34, Kepler-35, Kepler-413, and Kepler-1647) and two S-type systems (TrES-2 and KOI-1257). Our method allows us to gauge the existence of climatological HZs for these systems in a straightforward manner with detailed consideration of the observational uncertainties. Further applications may include studies of other existing systems as well as systems to be identified through future observational campaigns.

  8. Fitting Formulae and Constraints for the Existence of S-type and P-type Habitable Zones in Binary Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Zhaopeng; Cuntz, Manfred, E-mail: zhaopeng.wang@mavs.uta.edu, E-mail: cuntz@uta.edu

    We derive fitting formulae for the quick determination of the existence of S-type and P-type habitable zones (HZs) in binary systems. Based on previous work, we consider the limits of the climatological HZ in binary systems (which sensitively depend on the system parameters) based on a joint constraint encompassing planetary orbital stability and a habitable region for a possible system planet. Additionally, we employ updated results on planetary climate models obtained by Kopparapu and collaborators. Our results are applied to four P-type systems (Kepler-34, Kepler-35, Kepler-413, and Kepler-1647) and two S-type systems (TrES-2 and KOI-1257). Our method allows us to gauge the existence of climatological HZs for these systems in a straightforward manner with detailed consideration of the observational uncertainties. Further applications may include studies of other existing systems as well as systems to be identified through future observational campaigns.

  9. Continuity-based model interfacing for plant-wide simulation: a general approach.

    PubMed

    Volcke, Eveline I P; van Loosdrecht, Mark C M; Vanrolleghem, Peter A

    2006-08-01

    In plant-wide simulation studies of wastewater treatment facilities, existing models of different origin often need to be coupled. However, as these submodels are likely to contain different state variables, their coupling is not straightforward. The continuity-based interfacing method (CBIM) provides a general framework to construct model interfaces for models of wastewater systems, taking into account conservation principles. In this contribution, the CBIM approach is applied to study the effect of sludge digestion reject water treatment with a SHARON-Anammox process on a plant-wide scale. Separate models were available for the SHARON process and for the Anammox process. The Benchmark simulation model no. 2 (BSM2) is used to simulate the behaviour of the complete WWTP including sludge digestion. The CBIM approach is followed to develop three different model interfaces. At the same time, the generally applicable CBIM approach was further refined, and particular issues that arise when coupling models in which pH is considered as a state variable are pointed out.

  10. A plasmid-based lacZα gene assay for DNA polymerase fidelity measurement

    PubMed Central

    Keith, Brian J.; Jozwiakowski, Stanislaw K.; Connolly, Bernard A.

    2013-01-01

    A significantly improved DNA polymerase fidelity assay, based on a gapped plasmid containing the lacZα reporter gene in a single-stranded region, is described. Nicking at two sites flanking lacZα, and removing the excised strand by thermocycling in the presence of complementary competitor DNA, is used to generate the gap. Simple methods are presented for preparing the single-stranded competitor. The gapped plasmid can be purified, in high amounts and in a very pure state, using benzoylated-naphthoylated DEAE-cellulose, resulting in a low background mutation frequency (~1 × 10^-4). Two key parameters, the number of detectable sites and the expression frequency, necessary for measuring polymerase error rates have been determined. DNA polymerase fidelity is measured by gap filling in vitro, followed by transformation into Escherichia coli, scoring of blue/white colonies, and conversion of the ratio to an error rate. Several DNA polymerases have been used to fully validate this straightforward and highly sensitive system. PMID:23098700
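
    To show how the two parameters named above enter an error-rate calculation, here is a hedged numerical sketch using the combining formula common to lacZ-based forward mutation assays; all numbers are invented placeholders, not values from the paper.

```python
# error rate per nucleotide = (mutant freq - background) /
#                             (detectable sites x expression frequency)
background_freq = 1e-4    # background mutation frequency of the assay
mutant_freq = 28e-4       # white/faint-blue colony fraction after gap filling
detectable_sites = 120    # lacZ-alpha positions that yield a phenotype (assumed)
expression_freq = 0.7     # chance the newly synthesized strand is expressed (assumed)

error_rate = (mutant_freq - background_freq) / (detectable_sites * expression_freq)
print(f"errors per nucleotide synthesized: {error_rate:.2e}")  # ~3.2e-05
```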

  11. Paper-polymer composite devices with minimal fluorescence background.

    PubMed

    Wang, Chang-Ming; Chen, Chong-You; Liao, Wei-Ssu

    2017-04-22

    Paper-based devices incorporating polymer films offer advantages in simplicity and rugged backing. However, their applications are restricted by the high fluorescence background of conventional laminating pouches. Herein, we report a straightforward approach for fabricating devices with minimal fluorescence background, in which filter paper was shaped and laminated between two biaxially oriented polypropylene (OPP) and polyvinyl butyral (PVB) composite films. This composite film provides mechanical strength for enhanced device durability, protection from environmental contamination, and prevention of reagent degradation. The approach was tested by the determination of copper ions with a fluorescent probe, while the detection of glucose was used to illustrate the improved device durability. Our results show that lamination with the polymer composite lengthens device lifetime while enabling fluorescence detection methods, with a greatly reduced fluorescent background compared with that of commercially available lamination pouches. By combining rapid device prototyping with low-cost materials, we believe this composite design will further expand the potential of paper-based devices. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Cuff-less blood pressure measurement using pulse arrival time and a Kalman filter

    NASA Astrophysics Data System (ADS)

    Zhang, Qiang; Chen, Xianxiang; Fang, Zhen; Xue, Yongjiao; Zhan, Qingyuan; Yang, Ting; Xia, Shanhong

    2017-02-01

    The present study designs an algorithm to increase the accuracy of continuous blood pressure (BP) estimation. Pulse arrival time (PAT) has been widely used for continuous BP estimation. However, because of motion artifacts and physiological activity, PAT-based methods often suffer from low BP estimation accuracy. This paper uses a signal-quality-modified Kalman filter to track blood pressure changes. A Kalman filter guarantees that the BP estimate is optimal in the sense of minimizing the mean square error. We propose a joint signal quality index to adjust the measurement noise covariance, pushing the Kalman filter to weigh more heavily measurements from cleaner data. Twenty 2 h physiological data segments selected from the MIMIC II database were used to evaluate the performance. Compared with straightforward use of the PAT-based linear regression model, the proposed model achieved higher measurement accuracy. Due to its low computational complexity, the proposed algorithm can easily be transplanted into wearable sensor devices.
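
    A minimal sketch (assumptions throughout: random-walk BP model, scalar filter, an invented SQI-to-R mapping) of the signal-quality-modified Kalman idea described above:

```python
import numpy as np

def track_bp(bp_meas, sqi, q=0.05, r0=4.0):
    """Track BP from PAT-derived estimates, trusting clean beats more."""
    x, p = bp_meas[0], 1.0            # state (BP) and its variance
    out = []
    for z, s in zip(bp_meas, sqi):
        p += q                        # predict: random-walk BP model
        r = r0 / max(s, 1e-3)         # poor quality -> larger R -> less trust
        k = p / (p + r)               # Kalman gain
        x += k * (z - x)              # update with the PAT-derived measurement
        p *= (1 - k)
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(1)
true_bp = 120 + np.cumsum(rng.normal(0, 0.1, 300))   # slowly drifting BP
sqi = rng.uniform(0.2, 1.0, 300)                     # joint signal quality in (0, 1]
meas = true_bp + rng.normal(0, 3.0, 300) / sqi       # noisier when SQI is low
print(track_bp(meas, sqi)[-5:])                      # smoothed BP track
```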

  13. An efficient matrix product operator representation of the quantum chemical Hamiltonian

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keller, Sebastian, E-mail: sebastian.keller@phys.chem.ethz.ch; Reiher, Markus, E-mail: markus.reiher@phys.chem.ethz.ch; Dolfi, Michele, E-mail: dolfim@phys.ethz.ch

    We describe how to efficiently construct the quantum chemical Hamiltonian operator in matrix product form. We present its implementation as a density matrix renormalization group (DMRG) algorithm for quantum chemical applications. Existing implementations of DMRG for quantum chemistry are based on the traditional formulation of the method, which was developed from the point of view of Hilbert space decimation and attained higher performance compared to straightforward implementations of matrix product based DMRG. The latter variationally optimizes a class of ansatz states known as matrix product states, where operators are correspondingly represented as matrix product operators (MPOs). The MPO construction scheme presented here eliminates the previous performance disadvantages while retaining the additional flexibility provided by a matrix product approach; for example, the specification of expectation values becomes an input parameter. In this way, MPOs for different symmetries (abelian and non-abelian) and different relativistic and non-relativistic models may be solved by an otherwise unmodified program.

  14. Monte-Carlo-based uncertainty propagation with hierarchical models—a case study in dynamic torque

    NASA Astrophysics Data System (ADS)

    Klaus, Leonard; Eichstädt, Sascha

    2018-04-01

    For a dynamic calibration, a torque transducer is described by a mechanical model, and the corresponding model parameters are to be identified from measurement data. A measuring device for the primary calibration of dynamic torque, and a corresponding model-based calibration approach, have recently been developed at PTB. The complete mechanical model of the calibration set-up is very complex, and involves several calibration steps—making a straightforward implementation of a Monte Carlo uncertainty evaluation tedious. With this in mind, we here propose to separate the complete model into sub-models, with each sub-model being treated with individual experiments and analysis. The uncertainty evaluation for the overall model then has to combine the information from the sub-models in line with Supplement 2 of the Guide to the Expression of Uncertainty in Measurement. In this contribution, we demonstrate how to carry this out using the Monte Carlo method. The uncertainty evaluation involves various input quantities of different origin and the solution of a numerical optimisation problem.
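
    In a Monte Carlo setting, the sub-model composition amounts to drawing samples from each sub-model's parameter distribution and pushing them through the combined model, in the spirit of GUM Supplement 2. The sketch below does this for a deliberately simplified two-parameter transducer model; all distributions and numbers are invented for illustration and bear no relation to the actual PTB set-up.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 100_000

      # Sub-model A: torsional stiffness from its own calibration step,
      # summarized as a normal distribution (value, standard uncertainty).
      k = rng.normal(1.2e4, 30.0, n)        # N*m/rad (illustrative)

      # Sub-model B: mass moment of inertia from a separate experiment.
      J = rng.normal(3.5e-3, 2.0e-5, n)     # kg*m^2 (illustrative)

      # Overall model: e.g. the resonance frequency of the transducer.
      f0 = np.sqrt(k / J) / (2.0 * np.pi)

      print(f"f0 = {f0.mean():.1f} Hz, u(f0) = {f0.std(ddof=1):.2f} Hz")
      print("95 % coverage interval:", np.percentile(f0, [2.5, 97.5]))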

  15. Simple prescription for computing the interparticle potential energy for D-dimensional gravity systems

    NASA Astrophysics Data System (ADS)

    Accioly, Antonio; Helayël-Neto, José; Barone, F. E.; Herdy, Wallace

    2015-02-01

    A straightforward prescription for computing the D-dimensional potential energy of gravitational models, which is strongly based on the Feynman path integral, is built up. Using this method, the static potential energy for the interaction of two masses is found in the context of D-dimensional higher-derivative gravity models, and its behavior is analyzed afterwards in both ultraviolet and infrared regimes. As a consequence, two new gravity systems in which the potential energy is finite at the origin, respectively, in D = 5 and D = 6, are found. Since the aforementioned prescription is equivalent to that based on the marriage between quantum mechanics (to leading order, i.e., in the first Born approximation) and the nonrelativistic limit of quantum field theory, and bearing in mind that the latter relies basically on the calculation of the nonrelativistic Feynman amplitude $\mathcal{M}_{\mathrm{NR}}$, a trivial expression for computing $\mathcal{M}_{\mathrm{NR}}$ is obtained from our prescription as an added bonus.

  16. Sensor-less pseudo-sinusoidal drive for a permanent-magnet brushless ac motor

    NASA Astrophysics Data System (ADS)

    Liu, Li-Hsiang; Chern, Tzuen-Lih; Pan, Ping-Lung; Huang, Tsung-Mou; Tsay, Der-Min; Kuang, Jao-Hwa

    2012-04-01

    Precise rotor-position information is required for a permanent-magnet brushless ac motor (BLACM) drive. In the conventional sinusoidal drive method, either an encoder or a resolver is usually employed. For position sensor-less vector control schemes, the rotor flux estimation and torque components are obtained by complicated coordinate transformations. These computationally intensive methods are susceptible to current distortions and parameter variations. To reduce this complexity, this work presents a sensor-less pseudo-sinusoidal drive scheme with speed control for a three-phase BLACM. Based on the sinusoidal drive scheme, a floating period of each phase current is inserted for back electromotive force detection. The zero-crossing point is determined directly by the proposed scheme, and the rotor magnetic position and rotor speed can be estimated simultaneously. Several experiments for various active angle periods are undertaken. Furthermore, a current feedback control is included to minimize and compensate for torque fluctuations. The experimental results show that the proposed method has competitive performance compared with conventional drive methods for BLACMs. The proposed scheme is straightforward, bringing the benefits of sensor-less drive and negating the need for coordinate transformations in the operating process.

  17. Frequency response function-based explicit framework for dynamic identification in human-structure systems

    NASA Astrophysics Data System (ADS)

    Wei, Xiaojun; Živanović, Stana

    2018-05-01

    The aim of this paper is to propose a novel theoretical framework for dynamic identification in a structure occupied by a single human. The framework enables the prediction of the dynamics of the human-structure system from the known properties of the individual system components, the identification of human body dynamics from the known dynamics of the empty structure and the human-structure system and the identification of the properties of the structure from the known dynamics of the human and the human-structure system. The novelty of the proposed framework is the provision of closed-form solutions in terms of frequency response functions obtained by curve fitting measured data. The advantages of the framework over existing methods are that there is neither need for nonlinear optimisation nor need for spatial/modal models of the empty structure and the human-structure system. In addition, the second-order perturbation method is employed to quantify the effect of uncertainties in human body dynamics on the dynamic identification of the empty structure and the human-structure system. The explicit formulation makes the method computationally efficient and straightforward to use. A series of numerical examples and experiments are provided to illustrate the working of the method.

  18. Cutoff Finder: A Comprehensive and Straightforward Web Application Enabling Rapid Biomarker Cutoff Optimization

    PubMed Central

    Budczies, Jan; Klauschen, Frederick; Sinn, Bruno V.; Győrffy, Balázs; Schmitt, Wolfgang D.; Darb-Esfahani, Silvia; Denkert, Carsten

    2012-01-01

    Gene or protein expression data are usually represented by metric or at least ordinal variables. In order to translate a continuous variable into a clinical decision, it is necessary to determine a cutoff point and to stratify patients into two groups, each requiring a different kind of treatment. Currently, there is no standard method or standard software for biomarker cutoff determination. Therefore, we developed Cutoff Finder, a bundle of optimization and visualization methods for cutoff determination that is accessible online. While one of the methods for cutoff optimization is based solely on the distribution of the marker under investigation, other methods optimize the correlation of the dichotomization with respect to an outcome or survival variable. We illustrate the functionality of Cutoff Finder by the analysis of the gene expression of estrogen receptor (ER) and progesterone receptor (PgR) in breast cancer tissues. The distribution of these important markers is analyzed and correlated with immunohistologically determined ER status and distant metastasis-free survival. Cutoff Finder is expected to fill a relevant gap in the available biometric software repertoire and will enable faster optimization of new diagnostic biomarkers. The tool can be accessed at http://molpath.charite.de/cutoff. PMID:23251644
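
    As a sketch of the outcome-based flavor of cutoff optimization, the snippet below scans every candidate threshold and keeps the one whose dichotomization is most significantly associated with a binary outcome (Fisher's exact test). It illustrates the general idea only, not Cutoff Finder's exact algorithm, and the data are simulated.

      import numpy as np
      from scipy.stats import fisher_exact

      def optimize_cutoff(marker, outcome):
          """Return (cutoff, p-value) minimizing the Fisher exact p-value
          of the 2x2 table formed by dichotomizing the marker."""
          best = (None, 1.0)
          for c in np.unique(marker)[1:]:      # keep both groups non-empty
              hi = marker >= c
              table = [[np.sum(hi & (outcome == 1)), np.sum(hi & (outcome == 0))],
                       [np.sum(~hi & (outcome == 1)), np.sum(~hi & (outcome == 0))]]
              p = fisher_exact(table)[1]
              if p < best[1]:
                  best = (float(c), p)
          return best

      rng = np.random.default_rng(2)
      marker = rng.lognormal(size=200)           # e.g. ER expression level
      outcome = (marker > 1.0).astype(int)       # true threshold at 1.0
      flip = rng.random(200) < 0.15              # 15 % label noise
      outcome = np.where(flip, 1 - outcome, outcome)
      print(optimize_cutoff(marker, outcome))    # recovers a cutoff near 1.0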

  19. Review and future prospects for DNA barcoding methods in forensic palynology.

    PubMed

    Bell, Karen L; Burgess, Kevin S; Okamoto, Kazufusa C; Aranda, Roman; Brosi, Berry J

    2016-03-01

    Pollen can be a critical forensic marker in cases where determining geographic origin is important, including investigative leads, missing persons cases, and intelligence applications. However, its use has previously been limited by the need for a high level of specialization by expert palynologists, slow speeds of identification, and relatively poor taxonomic resolution (typically to the plant family or genus level). By contrast, identification of pollen through DNA barcoding has the potential to overcome all three of these limitations, and it may seem surprising that the method has not been widely implemented. Despite what might seem a straightforward application of DNA barcoding to pollen, there are technical issues that have delayed progress. However, recent developments of standard methods for DNA barcoding of pollen, along with improvements in high-throughput sequencing technology, have overcome most of these technical issues. Based on these recent methodological developments in pollen DNA barcoding, we believe that now is the time to start applying these techniques in forensic palynology. In this article, we discuss the potential for these methods, and outline directions for future research to further improve on the technology and increase its applicability to a broader range of situations. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  20. A dynamic-solver-consistent minimum action method: With an application to 2D Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Wan, Xiaoliang; Yu, Haijun

    2017-02-01

    This paper discusses the necessity and strategy to unify the development of a dynamic solver and a minimum action method (MAM) for a spatially extended system when employing the large deviation principle (LDP) to study the effects of small random perturbations. A dynamic solver is used to approximate the unperturbed system, and a minimum action method is used to approximate the LDP, which corresponds to solving an Euler-Lagrange equation related to but more complicated than the unperturbed system. We will clarify possible inconsistencies induced by independent numerical approximations of the unperturbed system and the LDP, based on which we propose to define both the dynamic solver and the MAM on the same approximation space for spatial discretization. The semi-discrete LDP can then be regarded as the exact LDP of the semi-discrete unperturbed system, which is a finite-dimensional ODE system. We achieve this methodology for the two-dimensional Navier-Stokes equations using a divergence-free approximation space. The method developed can be used to study the nonlinear instability of wall-bounded parallel shear flows, and be generalized straightforwardly to three-dimensional cases. Numerical experiments are presented.

  1. Quantitative image analysis for evaluating the coating thickness and pore distribution in coated small particles.

    PubMed

    Laksmana, F L; Van Vliet, L J; Hartman Kok, P J A; Vromans, H; Frijlink, H W; Van der Voort Maarschalk, K

    2009-04-01

    This study aims to develop a characterization method for coating structure based on image analysis, which is particularly promising for the rational design of coated particles in the pharmaceutical industry. The method applies the MATLAB image processing toolbox to images of coated particles taken with confocal laser scanning microscopy (CLSM). The coating thicknesses were determined along the particle perimeter, from which a statistical analysis could be performed to obtain relevant thickness properties, e.g. the minimum coating thickness and the span of the thickness distribution. The characterization of the pore structure involved a proper segmentation of pores from the coating and a granulometry operation. The presented method facilitates the quantification of the porosity, thickness and pore size distribution of a coating. These parameters are considered the important coating properties that are critical to coating functionality. Additionally, the effect of coating process variations on coating quality can be straightforwardly assessed. Enabling a good characterization of coating qualities, the presented method can be used as a fast and effective tool to predict coating functionality. This approach also enables the influence of different process conditions on coating properties to be effectively monitored, which ultimately leads to process tailoring.
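
    The perimeter thickness measurement can be approximated in a few lines: cast radial rays from the particle centre through a binary coating mask and count coating pixels per ray. The synthetic concentric-disk image and all parameters below are illustrative stand-ins for a segmented CLSM image, not the paper's MATLAB pipeline.

      import numpy as np

      def radial_thickness(coating, center, n_rays=360):
          """Coating thickness (in pixels) along rays from the centre;
          `coating` is a boolean image marking coating pixels."""
          h, w = coating.shape
          out = []
          for ang in np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False):
              r = np.arange(0, max(h, w))
              y = (center[0] + r * np.sin(ang)).astype(int)
              x = (center[1] + r * np.cos(ang)).astype(int)
              ok = (y >= 0) & (y < h) & (x >= 0) & (x < w)
              out.append(coating[y[ok], x[ok]].sum())
          return np.array(out)

      # Synthetic particle: core radius 40 px, coating out to 50 px.
      yy, xx = np.mgrid[:128, :128]
      rr = np.hypot(yy - 64, xx - 64)
      coating = (rr >= 40) & (rr < 50)
      t = radial_thickness(coating, (64, 64))
      print(t.min(), round(t.mean(), 1), np.ptp(t))   # min, mean, span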

  2. Real-Time XRD Studies of Li-O2 Electrochemical Reaction in Nonaqueous Lithium-Oxygen Battery.

    PubMed

    Lim, Hyunseob; Yilmaz, Eda; Byon, Hye Ryung

    2012-11-01

    Understanding of the electrochemical processes in rechargeable Li-O2 batteries has suffered from a lack of proper analytical tools, especially for identifying the chemical species and the number of electrons involved in the discharge/recharge process. Here we present a simple and straightforward analytical method for simultaneously attaining chemical and quantitative information on Li2O2 (the discharge product) and byproducts using in situ XRD measurements. By real-time monitoring of the solid-state Li2O2 peak area, the accurate efficiency of Li2O2 formation and the number of electrons can be evaluated during full discharge. Furthermore, by observing the sequential area change of the Li2O2 peak during recharge, we found nonlinearity in the Li2O2 decomposition rate for the first time in an ether-based electrolyte.

  3. Bayesian Estimation of Fish Disease Prevalence from Pooled Samples Incorporating Sensitivity and Specificity

    NASA Astrophysics Data System (ADS)

    Williams, Christopher J.; Moffitt, Christine M.

    2003-03-01

    An important emerging issue in fisheries biology is the health of free-ranging populations of fish, particularly with respect to the prevalence of certain pathogens. For many years, pathologists focused on captive populations and interest was in the presence or absence of certain pathogens, so it was economically attractive to test pooled samples of fish. Recently, investigators have begun to study individual fish prevalence from pooled samples. Estimation of disease prevalence from pooled samples is straightforward when assay sensitivity and specificity are perfect, but this assumption is unrealistic. Here we illustrate the use of a Bayesian approach for estimating disease prevalence from pooled samples when sensitivity and specificity are not perfect. We also focus on diagnostic plots to monitor the convergence of the Gibbs-sampling-based Bayesian analysis. The methods are illustrated with a sample data set.
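
    The estimation problem is compact enough to write down directly. The sketch below computes a grid posterior for prevalence from pooled assay results with imperfect sensitivity and specificity under a uniform prior; the grid computation stands in for the Gibbs sampler discussed in the paper, and the counts are invented.

      import numpy as np

      def prevalence_posterior(n_pools, n_pos, pool_size, se, sp, grid=2001):
          """Posterior of individual-level prevalence p from pooled tests.
          A pool is truly positive if any of its `pool_size` fish is
          infected; the assay flags it with sensitivity se and calls a
          clean pool positive with probability 1 - sp."""
          p = np.linspace(0.0, 1.0, grid)
          pool_infected = 1.0 - (1.0 - p) ** pool_size
          p_pos = se * pool_infected + (1.0 - sp) * (1.0 - pool_infected)
          log_lik = (n_pos * np.log(p_pos + 1e-300)
                     + (n_pools - n_pos) * np.log(1.0 - p_pos + 1e-300))
          post = np.exp(log_lik - log_lik.max())
          return p, post / np.trapz(post, p)

      p, post = prevalence_posterior(n_pools=50, n_pos=12, pool_size=5,
                                     se=0.95, sp=0.98)
      print("posterior mean prevalence:", round(np.trapz(p * post, p), 4))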

  4. A Novel Partial Sequence Alignment Tool for Finding Large Deletions

    PubMed Central

    Aruk, Taner; Ustek, Duran; Kursun, Olcay

    2012-01-01

    Finding large deletions in genome sequences has become increasingly useful in bioinformatics, such as in clinical research and diagnosis. Although there are a number of publicly available next generation sequencing mapping and sequence alignment programs, these software packages do not correctly align fragments containing deletions larger than one kb. We present a fast alignment software package, BinaryPartialAlign, that can be used by wet lab scientists to find long structural variations in their experiments. For BinaryPartialAlign, we make use of the Smith-Waterman (SW) algorithm with a binary-search-based approach for alignment with large gaps that we call partial alignment. The BinaryPartialAlign implementation is compared with other straightforward applications of SW. Simulation results on mtDNA fragments demonstrate the effectiveness (runtime and accuracy) of the proposed method. PMID:22566777
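
    For reference, the classical Smith-Waterman recurrence that BinaryPartialAlign builds on fits in a few lines (score matrix only, linear gap penalty). The binary-search partial-alignment logic for kilobase-scale deletions is not reproduced here, and the scoring parameters are illustrative.

      import numpy as np

      def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
          """Local alignment score matrix H; H.max() is the best local
          alignment score, clamped at zero per the SW recurrence."""
          H = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
          for i in range(1, len(a) + 1):
              for j in range(1, len(b) + 1):
                  s = match if a[i - 1] == b[j - 1] else mismatch
                  H[i, j] = max(0, H[i - 1, j - 1] + s,
                                H[i - 1, j] + gap, H[i, j - 1] + gap)
          return H

      H = smith_waterman("ACACACTA", "AGCACACA")
      print(H.max(), np.unravel_index(H.argmax(), H.shape))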

  5. An Index and Test of Linear Moderated Mediation.

    PubMed

    Hayes, Andrew F

    2015-01-01

    I describe a test of linear moderated mediation in path analysis based on an interval estimate of the parameter of a function linking the indirect effect to values of a moderator, a parameter that I call the index of moderated mediation. This test can be used for models that integrate moderation and mediation in which the relationship between the indirect effect and the moderator is estimated as linear, including many of the models described by Edwards and Lambert (2007) and Preacher, Rucker, and Hayes (2007), as well as extensions of these models to processes involving multiple mediators operating in parallel or in serial. Generalization of the method to latent variable models is straightforward. Three empirical examples describe the computation of the index and the test, and its implementation is illustrated using Mplus and the PROCESS macro for SPSS and SAS.
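
    In the simplest first-stage moderation model (M regressed on X, W, and XW; Y regressed on M and X), the index is the product of the XW coefficient from the a-path and the M coefficient from the b-path. The sketch below computes it by least squares with a percentile bootstrap interval; the simulated data and model layout are illustrative assumptions, not PROCESS itself.

      import numpy as np

      rng = np.random.default_rng(3)
      n = 500
      X, W = rng.normal(size=n), rng.normal(size=n)
      M = 0.4 * X + 0.2 * W + 0.3 * X * W + rng.normal(size=n)
      Y = 0.5 * M + 0.1 * X + rng.normal(size=n)

      def index_mod_med(X, W, M, Y):
          A = np.column_stack([np.ones(len(X)), X, W, X * W])
          a = np.linalg.lstsq(A, M, rcond=None)[0]   # a3 = coeff of X*W
          B = np.column_stack([np.ones(len(X)), M, X])
          b = np.linalg.lstsq(B, Y, rcond=None)[0]   # b1 = coeff of M
          return a[3] * b[1]                         # index = a3 * b1

      idx = index_mod_med(X, W, M, Y)                # true value: 0.15
      boot = [index_mod_med(X[i], W[i], M[i], Y[i])
              for i in (rng.integers(0, n, n) for _ in range(2000))]
      print(round(idx, 3), np.percentile(boot, [2.5, 97.5]).round(3))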

  6. Glycofunctionalization of Poly(lactic-co-glycolic acid) Polymers: Building Blocks for the Generation of Defined Sugar-Coated Nanoparticles.

    PubMed

    Palmioli, Alessandro; La Ferla, Barbara

    2018-06-15

    A set of poly(lactic-co-glycolic acid) polymers functionalized with different monosaccharides as well as glycodendrimers and surface-decorated nanoparticles (NPs) were synthesized and characterized. The functionalization of the polymer was carried out through amide bond formation with amino-modified sugar monomers and through a biocompatible chemoselective method exploiting the reducing end of a free sugar. The assemblage of the NPs adopting a nanoprecipitation method was straightforward and allowed the preparation of sugars/sugar dendrimer coated NPs.

  7. Optimization of Statistical Methods Impact on Quantitative Proteomics Data.

    PubMed

    Pursiheimo, Anna; Vehmas, Anni P; Afzal, Saira; Suomi, Tomi; Chand, Thaman; Strauss, Leena; Poutanen, Matti; Rokka, Anne; Corthals, Garry L; Elo, Laura L

    2015-10-02

    As tools for quantitative label-free mass spectrometry (MS) rapidly develop, a consensus about the best practices is not apparent. In the work described here we compared popular statistical methods for detecting differential protein expression from quantitative MS data, using both controlled experiments with known quantitative differences for specific proteins used as standards and "real" experiments where differences in protein abundance are not known a priori. Our results suggest that data-driven reproducibility-optimization can consistently produce reliable differential expression rankings for label-free proteome tools and is straightforward to apply.

  8. Copper-Catalyzed, Directing Group-Assisted Fluorination of Arene and Heteroarene C-H Bonds

    PubMed Central

    Truong, Thanh; Klimovica, Kristine; Daugulis, Olafs

    2013-01-01

    We have developed a method for direct, copper-catalyzed, auxiliary-assisted fluorination of β-sp2 C-H bonds of benzoic acid derivatives and γ-sp2 C-H bonds of α,α-disubstituted benzylamine derivatives. The reaction employs CuI catalyst, AgF fluoride source, and DMF, pyridine, or DMPU solvent at moderately elevated temperatures. Selective mono- or difluorination can be achieved by simply changing reaction conditions. The method shows excellent functional group tolerance and provides a straightforward way for the preparation of ortho-fluorinated benzoic acids. PMID:23758609

  9. Synthetic Method for Oligonucleotide Block by Using Alkyl-Chain-Soluble Support.

    PubMed

    Matsuno, Yuki; Shoji, Takao; Kim, Shokaku; Chiba, Kazuhiro

    2016-02-19

    A straightforward method for the synthesis of oligonucleotide blocks using a Cbz-type alkyl-chain-soluble support (Z-ACSS) attached to the 3'-OH group of 3'-terminal nucleosides was developed. The Z-ACSS allowed for the preparation of fully protected deoxyribo- and ribo-oligonucleotides without chromatographic purification and released dimer- to tetramer-size oligonucleotide blocks via hydrogenation over a Pd/C catalyst without significant loss or migration of protective groups such as the 5'-end 4,4'-dimethoxytrityl group, 2-cyanoethyl groups on internucleotide bonds, or 2'-TBS groups.

  10. MUSTA fluxes for systems of conservation laws

    NASA Astrophysics Data System (ADS)

    Toro, E. F.; Titarev, V. A.

    2006-08-01

    This paper is about numerical fluxes for hyperbolic systems and we first present a numerical flux, called GFORCE, that is a weighted average of the Lax-Friedrichs and Lax-Wendroff fluxes. For the linear advection equation with constant coefficient, the new flux reduces identically to that of the Godunov first-order upwind method. Then we incorporate GFORCE in the framework of the MUSTA approach [E.F. Toro, Multi-Stage Predictor-Corrector Fluxes for Hyperbolic Equations. Technical Report NI03037-NPA, Isaac Newton Institute for Mathematical Sciences, University of Cambridge, UK, 17th June, 2003], resulting in a version that we call GMUSTA. For non-linear systems this gives results that are comparable to those of the Godunov method in conjunction with the exact Riemann solver or complete approximate Riemann solvers, noting however that in our approach, the solution of the Riemann problem in the conventional sense is avoided. Both the GFORCE and GMUSTA fluxes are extended to multi-dimensional non-linear systems in a straightforward unsplit manner, resulting in linearly stable schemes that have the same stability regions as the straightforward multi-dimensional extension of Godunov's method. The methods are applicable to general meshes. The schemes of this paper share with the family of centred methods the common properties of being simple and applicable to a large class of hyperbolic systems, but the schemes of this paper are distinctly more accurate. Finally, we proceed to the practical implementation of our numerical fluxes in the framework of high-order finite volume WENO methods for multi-dimensional non-linear hyperbolic systems. Numerical results are presented for the Euler equations and for the equations of magnetohydrodynamics.
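
    The GFORCE construction itself is a one-liner: a CFL-weighted average of the Lax-Friedrichs and Lax-Wendroff fluxes with weight omega = 1/(1 + c). The sketch below applies it to linear advection, where it reduces to the Godunov upwind flux; the grid, CFL number and square-pulse test are illustrative choices.

      import numpy as np

      def gforce_flux(uL, uR, f, dx, dt, c):
          """GFORCE flux: omega-weighted Lax-Wendroff/Lax-Friedrichs
          average, with omega = 1/(1 + c) and c the CFL number."""
          fL, fR = f(uL), f(uR)
          lf = 0.5 * (fL + fR) - 0.5 * (dx / dt) * (uR - uL)
          lw = f(0.5 * (uL + uR) - 0.5 * (dt / dx) * (fR - fL))
          w = 1.0 / (1.0 + c)
          return w * lw + (1.0 - w) * lf

      # Linear advection u_t + a u_x = 0 on a square pulse.
      a, dx, cfl = 1.0, 0.01, 0.9
      dt = cfl * dx / a
      x = np.arange(0.0, 1.0, dx)
      u = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)
      f = lambda v: a * v
      for _ in range(40):
          flux = gforce_flux(u[:-1], u[1:], f, dx, dt, cfl)
          u[1:-1] -= dt / dx * (flux[1:] - flux[:-1])
      print(round(u.max(), 3))   # pulse advects without blow-up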

  11. Retrieval of Snow and Rain From Combined X- and W-Band Airborne Radar Measurements

    NASA Technical Reports Server (NTRS)

    Liao, Liang; Meneghini, Robert; Tian, Lin; Heymsfield, Gerald M.

    2008-01-01

    Two independent airborne dual-wavelength techniques, based on nadir measurements of radar reflectivity factors and Doppler velocities, respectively, are investigated with respect to their capability of estimating microphysical properties of hydrometeors. The data used to investigate the methods are taken from the ER-2 Doppler radar (X-band) and Cloud Radar System (W-band) airborne Doppler radars during the Cirrus Regional Study of Tropical Anvils and Cirrus Layers-Florida Area Cirrus Experiment campaign in 2002. Validity is assessed by the degree to which the methods produce consistent retrievals of the microphysics. For deriving snow parameters, the reflectivity-based technique has a clear advantage over the Doppler-velocity-based approach because of the large dynamic range in the dual-frequency ratio (DFR) with respect to the median diameter Do and the fact that the difference in mean Doppler velocity at the two frequencies, i.e., the differential Doppler velocity (DDV), in snow is small relative to the measurement errors and is often not uniquely related to Do. The DFR and DDV can also be used to independently derive Do in rain. At W-band, the DFR-based algorithms are highly sensitive to attenuation from rain, cloud water, and water vapor. Thus, the retrieval algorithms depend on various assumptions regarding these components, whereas the DDV-based approach is unaffected by attenuation. In view of the difficulties and ambiguities associated with the attenuation correction at W-band, the DDV approach in rain is more straightforward and potentially more accurate than the DFR method.

  12. Activity-based costing: a practical model for cost calculation in radiotherapy.

    PubMed

    Lievens, Yolande; van den Bogaert, Walter; Kesteloot, Katrien

    2003-10-01

    The activity-based costing method was used to compute radiotherapy costs. This report describes the model developed, the calculated costs, and possible applications for the Leuven radiotherapy department. Activity-based costing is an advanced cost calculation technique that allocates resource costs to products based on activity consumption. In the Leuven model, a complex allocation principle with a large diversity of cost drivers was avoided by introducing an extra allocation step between activity groups and activities. A straightforward principle of time consumption, weighted by factors of treatment complexity, was used. The model was developed in an iterative way, progressively defining the constituting components (costs, activities, products, and cost drivers). Radiotherapy costs are predominantly determined by personnel and equipment costs. Treatment-related activities consume the greatest proportion of the resource costs, with treatment delivery the most important component. This translates into products with a prolonged total or daily treatment time being the most costly. The model was also used to illustrate the impact of changes in resource costs and in practice patterns. The presented activity-based costing model is a practical tool to evaluate the actual cost structure of a radiotherapy department and to evaluate possible resource or practice changes.
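
    The allocation principle can be shown with a toy calculation: resource costs are assigned to activities, each activity receives a cost per minute, and a treatment is costed by the complexity-weighted time it consumes. All figures and category names below are invented for illustration and do not reflect the Leuven department's data.

      # Annual resource costs (EUR) and their assignment to activities.
      resource_cost = {"personnel": 900_000.0, "equipment": 600_000.0}
      assignment = {
          "simulation":         {"personnel": 0.20, "equipment": 0.10},
          "treatment_delivery": {"personnel": 0.80, "equipment": 0.90},
      }
      annual_minutes = {"simulation": 60_000, "treatment_delivery": 200_000}

      # Cost per activity-minute = assigned resource cost / annual time.
      cost_per_min = {
          act: sum(resource_cost[r] * frac for r, frac in shares.items())
               / annual_minutes[act]
          for act, shares in assignment.items()
      }

      def treatment_cost(minutes_by_activity, complexity_weight=1.0):
          """Cost of one treatment from its time consumption per activity."""
          return complexity_weight * sum(
              cost_per_min[a] * m for a, m in minutes_by_activity.items())

      # A 25-fraction course: 60 min simulation + 25 x 12 min delivery.
      print(treatment_cost({"simulation": 60, "treatment_delivery": 300}))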

  13. Arterial input function of an optical tracer for dynamic contrast enhanced imaging can be determined from pulse oximetry oxygen saturation measurements

    NASA Astrophysics Data System (ADS)

    Elliott, Jonathan T.; Wright, Eric A.; Tichauer, Kenneth M.; Diop, Mamadou; Morrison, Laura B.; Pogue, Brian W.; Lee, Ting-Yim; St. Lawrence, Keith

    2012-12-01

    In many cases, kinetic modeling requires that the arterial input function (AIF)—the time-dependent arterial concentration of a tracer—be characterized. A straightforward method to measure the AIF of red and near-infrared optical dyes (e.g., indocyanine green) using a pulse oximeter is presented. The method is motivated by the ubiquity of pulse oximeters used in both preclinical and clinical applications, as well as the gap in currently available technologies to measure AIFs in small animals. The method is based on quantifying the interference that is observed in the derived arterial oxygen saturation (SaO2) following a bolus injection of a light-absorbing dye. In other words, the change in SaO2 can be converted into dye concentration knowing the chromophore-specific extinction coefficients, the true arterial oxygen saturation, and total hemoglobin concentration. A simple error analysis was performed to highlight potential limitations of the approach, and a validation of the method was conducted in rabbits by comparing the pulse oximetry method with the AIF acquired using a pulse dye densitometer. Considering that determining the AIF is required for performing quantitative tracer kinetics, this method provides a flexible tool for measuring the arterial dye concentration that could be used in a variety of applications.

  14. Arterial input function of an optical tracer for dynamic contrast enhanced imaging can be determined from pulse oximetry oxygen saturation measurements.

    PubMed

    Elliott, Jonathan T; Wright, Eric A; Tichauer, Kenneth M; Diop, Mamadou; Morrison, Laura B; Pogue, Brian W; Lee, Ting-Yim; St Lawrence, Keith

    2012-12-21

    In many cases, kinetic modeling requires that the arterial input function (AIF), the time-dependent arterial concentration of a tracer, be characterized. A straightforward method to measure the AIF of red and near-infrared optical dyes (e.g., indocyanine green) using a pulse oximeter is presented. The method is motivated by the ubiquity of pulse oximeters used in both preclinical and clinical applications, as well as the gap in currently available technologies to measure AIFs in small animals. The method is based on quantifying the interference that is observed in the derived arterial oxygen saturation (SaO₂) following a bolus injection of a light-absorbing dye. In other words, the change in SaO₂ can be converted into dye concentration knowing the chromophore-specific extinction coefficients, the true arterial oxygen saturation, and total hemoglobin concentration. A simple error analysis was performed to highlight potential limitations of the approach, and a validation of the method was conducted in rabbits by comparing the pulse oximetry method with the AIF acquired using a pulse dye densitometer. Considering that determining the AIF is required for performing quantitative tracer kinetics, this method provides a flexible tool for measuring the arterial dye concentration that could be used in a variety of applications.
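
    A toy version of the conversion can be written with a single-wavelength Beer-Lambert attribution model: the dye's extra absorption at the red wavelength is misattributed to deoxyhemoglobin, and inverting that attribution recovers the dye concentration. This is a simplified sketch of the idea rather than the paper's calibration; the extinction coefficients are rough literature-scale values and would need verification before any real use.

      EPS_HB   = 3227.0     # deoxy-Hb extinction at 660 nm [1/(M cm)], approx.
      EPS_HBO2 = 319.6      # oxy-Hb extinction at 660 nm   [1/(M cm)], approx.
      EPS_ICG  = 130_000.0  # indocyanine green near 660 nm [1/(M cm)], approx.

      def dye_concentration(sao2_measured, sao2_true, thb_molar):
          """Dye concentration (M) from the apparent SaO2 change: the
          oximeter attributes the dye's extra red absorption to deoxy-Hb,
          so the reading dips below the true saturation."""
          dS = sao2_measured - sao2_true         # negative during the bolus
          return dS * (EPS_HBO2 - EPS_HB) * thb_molar / EPS_ICG

      # 2.3 mM total hemoglobin (~15 g/dL); reading dips from 0.98 to 0.90:
      print(dye_concentration(0.90, 0.98, 2.3e-3))   # ~4e-6 M of dye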

  15. Multiobjective decision-making in integrated water management

    NASA Astrophysics Data System (ADS)

    Pouwels, I. H. M.; Wind, H. G.; Witter, V. J.

    1995-08-01

    Traditionally, decision-making by water authorities in the Netherlands has largely been based on intuition. Their tasks were, after all, relatively few and straightforward. The growing number of tasks, together with the new integrated approach to water management issues, however, is leading water authorities to rationalise their decision processes. In order to choose the most effective water management measures, the external effects of these measures need to be taken into account. Therefore, methods have been developed to incorporate these effects in the decision-making phase. Using analytical evaluation methods, the effects of various measures on the water system (physical and chemical quality, ecology and quantity) can be taken into consideration. In this manner a more cognitive way of choosing between alternative measures can be obtained. This paper describes an application of such a decision method on a river basin scale. The main topics of this paper are the extent to which uncertainties (in technical information and deficiencies in the techniques applied) limit the usefulness of these methods, and the question whether these techniques can really be used to select measures that give maximum environmental benefit for minimum cost. It is shown that the influence of these restrictions on the validity of the outcome of the decision methods can be profound. Using these results, improvement of the methods can be realised.

  16. Evaluation of a Method for Rapid Detection of Listeria monocytogenes in Dry-Cured Ham Based on Impedanciometry Combined with Chromogenic Agar.

    PubMed

    Labrador, Mirian; Rota, María C; Pérez, Consuelo; Herrera, Antonio; Bayarri, Susana

    2018-05-01

    The food industry is in need of rapid, reliable methodologies for the detection of Listeria monocytogenes in ready-to-eat products as an alternative to the International Organization for Standardization (ISO) 11290-1 reference method. The aim of this study was to evaluate impedanciometry combined with chromogenic agar culture for the detection of L. monocytogenes in dry-cured ham. The experimental setup consisted of assaying four strains of L. monocytogenes and two strains of Listeria innocua in pure culture. The method was evaluated according to the ISO 16140:2003 standard through a comparative study with the ISO reference method on 119 samples of dry-cured ham. Significant determination coefficients (R² of up to 0.99) were obtained for all strains assayed in pure culture. The comparative study showed 100% accuracy, 100% specificity, and 100% sensitivity. Impedanciometry followed by chromogenic agar culture was capable of detecting 1 CFU/25 g of food. L. monocytogenes was not detected in the 65 commercial samples tested. The method evaluated herein represents a promising alternative for the food industry in its efforts to control L. monocytogenes. The overall analysis time is shorter, and the method permits straightforward analysis of a large number of samples with reliable results.

  17. The historical bases of the Rayleigh and Ritz methods

    NASA Astrophysics Data System (ADS)

    Leissa, A. W.

    2005-11-01

    Rayleigh's classical book Theory of Sound was first published in 1877. In it are many examples of calculating fundamental natural frequencies of free vibration of continuum systems (strings, bars, beams, membranes, plates) by assuming the mode shape, and setting the maximum values of potential and kinetic energy in a cycle of motion equal to each other. This procedure is well known as "Rayleigh's Method." In 1908, Ritz laid out his famous method for determining frequencies and mode shapes, choosing multiple admissible displacement functions, and minimizing a functional involving both potential and kinetic energies. He then demonstrated it in detail in 1909 for the completely free square plate. In 1911, Rayleigh wrote a paper congratulating Ritz on his work, but stating that he himself had used Ritz's method in many places in his book and in another publication. Subsequently, hundreds of research articles and many books have appeared which use the method, some calling it the "Ritz method" and others the "Rayleigh-Ritz method." The present article examines the method in detail, as Ritz presented it, and as Rayleigh claimed to have used it. It concludes that, although Rayleigh did solve a few problems which involved minimization of a frequency, these solutions were not by the straightforward, direct method presented by Ritz and used subsequently by others. Therefore, Rayleigh's name should not be attached to the method.
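
    Rayleigh's procedure, as described above, reduces to a single quotient: assume a mode shape, equate the maximum potential and kinetic energies over a cycle, and solve for the frequency. The sketch below applies it to a taut fixed-fixed string, where the exact mode shape recovers omega = (pi/L) sqrt(T/rho) and a cruder trial shape overestimates it, as the method guarantees; the string parameters are arbitrary illustrative values.

      import numpy as np

      L, T, rho = 1.0, 100.0, 0.4        # length, tension, mass per length
      x = np.linspace(0.0, L, 10_001)

      def rayleigh_omega(phi):
          """Rayleigh quotient for a string: max PE = max KE."""
          dphi = np.gradient(phi, x)
          num = T * np.trapz(dphi ** 2, x)     # ~ 2 x max potential energy
          den = rho * np.trapz(phi ** 2, x)    # ~ 2 x max kinetic / omega^2
          return np.sqrt(num / den)

      print(rayleigh_omega(np.sin(np.pi * x / L)))   # exact shape: ~49.673
      print(rayleigh_omega(x * (L - x)))             # parabola: ~50.0, higher
      print(np.pi / L * np.sqrt(T / rho))            # closed-form reference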

  18. Statistical methods to estimate treatment effects from multichannel electroencephalography (EEG) data in clinical trials.

    PubMed

    Ma, Junshui; Wang, Shubing; Raubertas, Richard; Svetnik, Vladimir

    2010-07-15

    With the increasing popularity of using electroencephalography (EEG) to reveal the treatment effect in drug development clinical trials, the vast volume and complex nature of EEG data compose an intriguing, but challenging, topic. In this paper the statistical analysis methods recommended by the EEG community, along with methods frequently used in the published literature, are first reviewed. A straightforward adjustment of the existing methods to handle multichannel EEG data is then introduced. In addition, based on the spatial smoothness property of EEG data, a new category of statistical methods is proposed. The new methods use a linear combination of low-degree spherical harmonic (SPHARM) basis functions to represent a spatially smoothed version of the EEG data on the scalp, which is close to a sphere in shape. In total, seven statistical methods, including both the existing and the newly proposed methods, are applied to two clinical datasets to compare their power to detect a drug effect. Contrary to the EEG community's recommendation, our results suggest that (1) the nonparametric method does not outperform its parametric counterpart; and (2) including baseline data in the analysis does not always improve the statistical power. In addition, our results recommend that (3) simple paired statistical tests should be avoided due to their poor power; and (4) the proposed spatially smoothed methods perform better than their unsmoothed versions. Copyright 2010 Elsevier B.V. All rights reserved.

  19. Design of Energy Storage Reactors for Dc-To-Dc Converters. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Chen, D. Y.

    1975-01-01

    Two methodical approaches to the design of energy-storage reactors for a group of widely used dc-to-dc converters are presented. One of these approaches is based on a steady-state time-domain analysis of piecewise-linearized circuit models of the converters, while the other approach is based on an analysis of the same circuit models, but from an energy point of view. The design procedure developed from the first approach includes a search through a stored data file of magnetic core characteristics and results in a list of usable reactor designs which meet a particular converter's requirements. Because of the complexity of this procedure, a digital computer usually is used to implement the design algorithm. The second approach, based on a study of the storage and transfer of energy in the magnetic reactors, leads to a straightforward design procedure which can be implemented with hand calculations. An equation to determine the lower-bound volume of workable cores for given converter design specifications is derived. Using this computed lower-bound volume, a comparative evaluation of various converter configurations is presented.

  20. Study of hypervelocity projectile impact on thick metal plates

    DOE PAGES

    Roy, Shawoon K.; Trabia, Mohamed; O’Toole, Brendan; ...

    2016-01-01

    Hypervelocity impacts generate extreme pressure and shock waves in impacted targets that undergo severe localized deformation within a few microseconds. These impact experiments pose unique challenges in terms of obtaining accurate measurements. Similarly, simulating these experiments is not straightforward. This paper proposed an approach to experimentally measure the velocity of the back surface of an A36 steel plate impacted by a projectile. All experiments used a combination of a two-stage light-gas gun and the photonic Doppler velocimetry (PDV) technique. The experimental data were used to benchmark and verify computational studies. Two different finite-element methods were used to simulate the experiments: Lagrangian-based smooth particle hydrodynamics (SPH) and Eulerian-based hydrocode. Both codes used the Johnson-Cook material model and the Mie-Grüneisen equation of state. Experiments and simulations were compared based on the physical damage area and the back surface velocity. Finally, the results of this study showed that the proposed simulation approaches could be used to reduce the need for expensive experiments.

  1. Self-Supporting, Hydrophobic, Ionic Liquid-Based Reference Electrodes Prepared by Polymerization-Induced Microphase Separation.

    PubMed

    Chopade, Sujay A; Anderson, Evan L; Schmidt, Peter W; Lodge, Timothy P; Hillmyer, Marc A; Bühlmann, Philippe

    2017-10-27

    Interfaces of ionic liquids and aqueous solutions exhibit stable electrical potentials over a wide range of aqueous electrolyte concentrations. This makes ionic liquids suitable as bridge materials that, in electroanalytical measurements, separate the reference electrode from samples with low and/or unknown ionic strengths. However, methods for the preparation of ionic liquid-based reference electrodes have not been explored widely. We have designed a convenient and reliable synthesis of ionic liquid-based reference electrodes by polymerization-induced microphase separation. This technique allows for a facile, single-pot synthesis of ready-to-use reference electrodes that incorporate ion-conducting nanochannels filled with either 1-octyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide or 1-dodecyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide as the ionic liquid, supported by a mechanically robust cross-linked polystyrene phase. This synthesis procedure allows for the straightforward design of various reference electrode geometries. These reference electrodes exhibit low resistance as well as good reference potential stability and reproducibility when immersed in aqueous solutions varying from deionized, purified water to 100 mM KCl, while requiring no correction for liquid junction potentials.

  2. A High Performance Impedance-based Platform for Evaporation Rate Detection.

    PubMed

    Chou, Wei-Lung; Lee, Pee-Yew; Chen, Cheng-You; Lin, Yu-Hsin; Lin, Yung-Sheng

    2016-10-17

    This paper describes a novel impedance-based platform for detecting the evaporation rate. The model compound hyaluronic acid was employed here for demonstration purposes. Multiple evaporation tests on the model compound as a humectant at various concentrations in solution were conducted for comparison purposes. The conventional weight-loss approach is the most straightforward, but time-consuming, measurement technique for evaporation rate detection. A clear disadvantage is that a large volume of sample is required and multiple sample tests cannot be conducted at the same time. For the first time in the literature, an electrical impedance sensing chip is successfully applied to real-time evaporation investigation in a time-sharing, continuous and automatic manner. Moreover, as little as 0.5 ml of test sample is required in this impedance-based apparatus, and a large impedance variation is demonstrated among various dilute solutions. The proposed high-sensitivity and fast-response impedance sensing system is found to outperform the conventional weight-loss approach in terms of evaporation rate detection.

  3. Deriving field-based species sensitivity distributions (f-SSDs) from stacked species distribution models (S-SDMs).

    PubMed

    Schipper, Aafke M; Posthuma, Leo; de Zwart, Dick; Huijbregts, Mark A J

    2014-12-16

    Quantitative relationships between species richness and single environmental factors, also called species sensitivity distributions (SSDs), are helpful to understand and predict biodiversity patterns, identify environmental management options and set environmental quality standards. However, species richness is typically dependent on a variety of environmental factors, implying that it is not straightforward to quantify SSDs from field monitoring data. Here, we present a novel and flexible approach to solve this, based on the method of stacked species distribution modeling. First, a species distribution model (SDM) is established for each species, describing its probability of occurrence in relation to multiple environmental factors. Next, the predictions of the SDMs are stacked along the gradient of each environmental factor with the remaining environmental factors at fixed levels. By varying those fixed levels, our approach can be used to investigate how field-based SSDs for a given environmental factor change in relation to changing confounding influences, including for example optimal, typical, or extreme environmental conditions. This provides an asset in the evaluation of potential management measures to reach good ecological status.
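
    The two-step procedure is compact enough to sketch end to end: fit one SDM per species, then stack the predicted occurrence probabilities along the gradient of one factor while the remaining factors are held fixed. The simulated community, logistic-regression SDMs and two-factor layout below are illustrative assumptions.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(7)
      n_sites, n_species = 1000, 30
      env = rng.normal(size=(n_sites, 2))     # factor 0 is the stressor
      slopes = rng.normal(-1.0, 0.5, (n_species, 2))
      logit = env @ slopes.T + rng.normal(0.0, 0.5, n_species)
      occ = rng.random((n_sites, n_species)) < 1.0 / (1.0 + np.exp(-logit))

      # Step 1: one SDM per species.
      sdms = [LogisticRegression().fit(env, occ[:, s])
              for s in range(n_species)]

      # Step 2: stack predictions along factor 0, with factor 1 fixed at
      # its typical (here: mean) level -> expected richness = f-SSD.
      gradient = np.linspace(-3.0, 3.0, 50)
      grid = np.column_stack([gradient, np.zeros_like(gradient)])
      richness = sum(m.predict_proba(grid)[:, 1] for m in sdms)
      print(richness[[0, 25, 49]].round(1))   # richness falls with stressor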

  4. Study of carbon nanotube-rich impedimetric recognition electrode for ultra-low determination of polycyclic aromatic hydrocarbons in water.

    PubMed

    Muñoz, Jose; Navarro-Senent, Cristina; Crivillers, Nuria; Mas-Torrent, Marta

    2018-04-14

    Carbon nanotubes (CNTs) have been studied as an electrochemical recognition element for the impedimetric determination of priority polycyclic aromatic hydrocarbons (PAHs) in water, using hexacyanoferrate as a redox probe. For this goal, an indium tin oxide (ITO) electrode functionalized with a silane-based self-assembled monolayer carrying CNTs has been engineered. The electroanalytical method, which is similar to an antibody-antigen assay, is straightforward and exploits the high CNT-PAH affinity obtained via π-interactions. After optimizing the experimental conditions, the resulting CNT-based impedimetric recognition platform exhibits ultra-low detection limits (1.75 ± 0.04 ng·L⁻¹) for the sum of PAHs tested, which was also validated using a certified reference PAH mixture. Graphical abstract: Schematic of an indium tin oxide (ITO) electrode functionalized with a silane-based self-assembled monolayer carrying carbon nanotubes (CNTs) as a recognition platform for the ultra-low determination of total polycyclic aromatic hydrocarbons (PAHs) in water via π-interactions using electrochemical impedance spectroscopy (EIS).

  5. Computationally efficient stochastic optimization using multiple realizations

    NASA Astrophysics Data System (ADS)

    Bayer, P.; Bürger, C. M.; Finkel, M.

    2008-02-01

    The presented study is concerned with computationally efficient methods for solving stochastic optimization problems involving multiple equally probable realizations of uncertain parameters. A new and straightforward technique is introduced that is based on dynamically ordering the stack of realizations during the search procedure. The rationale is that a small number of critical realizations govern the output of a reliability-based objective function. Using a problem typical of designing a water supply well field, several variants of this "stack ordering" approach are tested. The results are statistically assessed in terms of optimality and nominal reliability. This study demonstrates that simply ordering a given stack of 500 realizations while applying an evolutionary search algorithm can save about half of the model runs without compromising the optimization procedure. More advanced variants of stack ordering can, if properly configured, save more than 97% of the computational effort that would be required if the entire set of realizations were considered. The findings herein are promising for similar problems of water management and reliability-based design in general, and particularly for non-convex problems that require heuristic search techniques.
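
    The core trick can be sketched independently of any groundwater model: evaluate the stack in its current order, abandon a candidate design as soon as the required reliability is unattainable, and promote the realizations that failed to the front of the stack for the next candidate. The simulate() callable and all numbers below are hypothetical stand-ins for actual model runs.

      import numpy as np

      def reliability_check(design, realizations, order, simulate,
                            required=0.95):
          """Run realizations in stack order; stop early once more than
          the allowed number of constraint violations is seen, and move
          the critical (failed) realizations to the front of the stack."""
          allowed = int((1.0 - required) * len(order))
          failed, runs, feasible = [], 0, True
          for idx in order:
              runs += 1
              if not simulate(design, realizations[idx]):
                  failed.append(idx)
                  if len(failed) > allowed:
                      feasible = False
                      break                     # early termination
          order[:] = failed + [i for i in order if i not in failed]
          return feasible, runs

      rng = np.random.default_rng(4)
      reals = rng.normal(1.0, 0.2, 500)         # equally probable fields
      sim = lambda d, r: d * r >= 1.0           # toy yield constraint
      order = list(range(500))
      print(reliability_check(1.3, reals, order, sim))  # many runs needed
      print(reliability_check(1.0, reals, order, sim))  # critical-first: few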

  6. Organometallic Palladium Reagents for Cysteine Bioconjugation

    PubMed Central

    Vinogradova, Ekaterina V.; Zhang, Chi; Spokoyny, Alexander M.; Pentelute, Bradley L.; Buchwald, Stephen L.

    2015-01-01

    Transition-metal based reactions have found wide use in organic synthesis and are used frequently to functionalize small molecules.1,2 However, there are very few reports of using transition-metal based reactions to modify complex biomolecules3,4, which is due to the need for stringent reaction conditions (for example, aqueous media, low temperature, and mild pH) and the existence of multiple, reactive functional groups found in biopolymers. Here we report that palladium(II) complexes can be used for efficient and highly selective cysteine conjugation reactions. The bioconjugation reaction is rapid and robust under a range of biocompatible reaction conditions. The straightforward synthesis of the palladium reagents from diverse and easily accessible aryl halide and trifluoromethanesulfonate precursors makes the method highly practical, providing access to a large structural space for protein modification. The resulting aryl bioconjugates are stable towards acids, bases, oxidants, and external thiol nucleophiles. The broad utility of the new bioconjugation platform was further corroborated by the synthesis of new classes of stapled peptides and antibody-drug conjugates. These palladium complexes show potential as a new set of benchtop reagents for diverse bioconjugation applications. PMID:26511579

  7. Unveiling combinatorial regulation through the combination of ChIP information and in silico cis-regulatory module detection

    PubMed Central

    Sun, Hong; Guns, Tias; Fierro, Ana Carolina; Thorrez, Lieven; Nijssen, Siegfried; Marchal, Kathleen

    2012-01-01

    Computationally retrieving biologically relevant cis-regulatory modules (CRMs) is not straightforward. Because of the large number of candidates and the imperfection of the screening methods, many spurious CRMs are detected that are as high scoring as the biologically true ones. Using ChIP-information allows not only to reduce the regions in which the binding sites of the assayed transcription factor (TF) should be located, but also allows restricting the valid CRMs to those that contain the assayed TF (here referred to as applying CRM detection in a query-based mode). In this study, we show that exploiting ChIP-information in a query-based way makes in silico CRM detection a much more feasible endeavor. To be able to handle the large datasets, the query-based setting and other specificities proper to CRM detection on ChIP-Seq based data, we developed a novel powerful CRM detection method ‘CPModule’. By applying it on a well-studied ChIP-Seq data set involved in self-renewal of mouse embryonic stem cells, we demonstrate how our tool can recover combinatorial regulation of five known TFs that are key in the self-renewal of mouse embryonic stem cells. Additionally, we make a number of new predictions on combinatorial regulation of these five key TFs with other TFs documented in TRANSFAC. PMID:22422841

  8. Stereo-tomography in triangulated models

    NASA Astrophysics Data System (ADS)

    Yang, Kai; Shao, Wei-Dong; Xing, Feng-yuan; Xiong, Kai

    2018-04-01

    Stereo-tomography is a distinctive tomographic method. It is capable of estimating the scatterer position, the local dip of the scatterer and the background velocity simultaneously. Building a geologically consistent velocity model is always appealing for applied and earthquake seismologists. Unlike previous work, which incorporated various regularization techniques into the cost function of stereo-tomography, we regard extending stereo-tomography to a triangulated model as the most straightforward way to achieve this goal. In this paper, we provide all the Fréchet derivatives of stereo-tomographic data components with respect to model components for a slowness-squared triangulated model (or sloth model) in 2D Cartesian coordinates, based on ray perturbation theory for interfaces. A sloth model representation means a sparser model representation when compared with the conventional B-spline representation. A sparser model representation leads to a smaller stereo-tomographic (Fréchet) matrix, a higher-accuracy solution when solving linear equations, a faster convergence rate and a lower requirement on the quantity of data. Moreover, a quantitative representation of interfaces strengthens the relationships among different model components, which makes cross regularizations among these components, such as node coordinates, scatterer coordinates and scattering angles, more straightforward and easier to implement. The sensitivity analysis, the model resolution matrix analysis and a series of synthetic data examples demonstrate the correctness of the Fréchet derivatives, the applicability of the regularization terms and the robustness of stereo-tomography in a triangulated model. This provides a solid theoretical foundation for future real applications.

  9. On the wide-range bias dependence of transistor d.c. and small-signal current gain factors.

    NASA Technical Reports Server (NTRS)

    Schmidt, P.; Das, M. B.

    1972-01-01

    Critical reappraisal of the bias dependence of the dc and small-signal ac current gain factors of planar bipolar transistors over a wide range of currents. This is based on a straightforward consideration of the three basic components of the dc base current arising due to emitter-to-base injected minority carrier transport, base-to-emitter carrier injection, and emitter-base surface depletion layer recombination effects. Experimental results on representative n-p-n and p-n-p silicon devices are given which support most of the analytical findings.

  10. Adjoint-Based Sensitivity Kernels for Glacial Isostatic Adjustment in a Laterally Varying Earth

    NASA Astrophysics Data System (ADS)

    Crawford, O.; Al-Attar, D.; Tromp, J.; Mitrovica, J. X.; Austermann, J.; Lau, H. C. P.

    2017-12-01

    We consider a new approach to both the forward and inverse problems in glacial isostatic adjustment. We present a method for forward modelling GIA in compressible and laterally heterogeneous earth models with a variety of linear and non-linear rheologies. Instead of using the so-called sea level equation, which must be solved iteratively, the forward theory we present consists of a number of coupled evolution equations that can be straightforwardly numerically integrated. We also apply the adjoint method to the inverse problem in order to calculate the derivatives of measurements of GIA with respect to the viscosity structure of the Earth. Such derivatives quantify the sensitivity of the measurements to the model. The adjoint method enables efficient calculation of continuous and laterally varying derivatives, allowing us to calculate the sensitivity of measurements of glacial isostatic adjustment to the Earth's three-dimensional viscosity structure. The derivatives have a number of applications within the inverse method. Firstly, they can be used within a gradient-based optimisation method to find a model which minimises some data misfit function. The derivatives can also be used to quantify the uncertainty in such a model and hence to provide understanding of which parts of the model are well constrained. Finally, they enable construction of measurements which provide sensitivity to a particular part of the model space. We illustrate both the forward and inverse aspects with numerical examples in a spherically symmetric earth model.

  11. Surface-from-gradients without discrete integrability enforcement: A Gaussian kernel approach.

    PubMed

    Ng, Heung-Sun; Wu, Tai-Pang; Tang, Chi-Keung

    2010-11-01

    Representative surface reconstruction algorithms taking a gradient field as input enforce the integrability constraint in a discrete manner. While enforcing integrability allows the subsequent integration to produce surface heights, existing algorithms have one or more of the following disadvantages: They can only handle dense per-pixel gradient fields, smooth out sharp features in a partially integrable field, or produce severe surface distortion in the results. In this paper, we present a method which does not enforce discrete integrability and reconstructs a 3D continuous surface from a gradient or a height field, or a combination of both, which can be dense or sparse. The key to our approach is the use of kernel basis functions, which transfer the continuous surface reconstruction problem into high-dimensional space, where a closed-form solution exists. By using the Gaussian kernel, we can derive a straightforward implementation which is able to produce results better than traditional techniques. In general, an important advantage of our kernel-based method is that the method does not suffer discretization and finite approximation, both of which lead to surface distortion, which is typical of Fourier or wavelet bases widely adopted by previous representative approaches. We perform comparisons with classical and recent methods on benchmark as well as challenging data sets to demonstrate that our method produces accurate surface reconstruction that preserves salient and sharp features. The source code and executable of the system are available for downloading.
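
    The closed-form character of the kernel approach is easy to demonstrate in the height-field case (the paper additionally handles gradient data): scattered heights are fit by Gaussian kernel basis functions through a regularized linear solve, yielding a continuous surface that can be evaluated anywhere. Kernel width, regularization and the test data below are illustrative choices.

      import numpy as np

      def fit_surface(pts, heights, sigma=0.15, lam=1e-6):
          """Kernel regression surface: solve (K + lam*I) alpha = heights
          once, then evaluate sum_i alpha_i k(q, p_i) at any query q."""
          d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
          K = np.exp(-d2 / (2.0 * sigma ** 2))
          alpha = np.linalg.solve(K + lam * np.eye(len(pts)), heights)

          def surface(q):
              d2q = ((q[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
              return np.exp(-d2q / (2.0 * sigma ** 2)) @ alpha
          return surface

      # Sparse, noisy samples of a smooth bump on the unit square.
      rng = np.random.default_rng(5)
      pts = rng.random((200, 2))
      z = np.exp(-((pts - 0.5) ** 2).sum(1) / 0.05) + rng.normal(0, 0.01, 200)
      surf = fit_surface(pts, z)
      print(surf(np.array([[0.5, 0.5], [0.0, 0.0]])).round(3))   # ~[1, 0]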

  12. Thermally induced chain orientation for improved thermal conductivity of P(VDF-TrFE) thin films

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Junnan; Tan, Aaron C.; Green, Peter F.

    2017-01-01

    A large increase in thermal conductivity κ was observed in a P(VDF-TrFE) thin film annealed above the melting temperature, due to extensive ordering of polymer backbone chains perpendicular to the substrate after recrystallization from the melt. This finding may lay out a straightforward method to improve the thin-film κ of semicrystalline polymers whose chain orientation is sensitive to thermal annealing.

  13. Direct Synthesis of 5-Aryl Barbituric Acids by Rhodium(II)-Catalyzed Reactions of Arenes with Diazo Compounds**

    PubMed Central

    Best, Daniel; Burns, David J; Lam, Hon Wai

    2015-01-01

    A commercially available rhodium(II) complex catalyzes the direct arylation of 5-diazobarbituric acids with arenes, allowing straightforward access to 5-aryl barbituric acids. Free N—H groups are tolerated on the barbituric acid, with no complications arising from N—H insertion processes. This method was applied to the concise synthesis of a potent matrix metalloproteinase (MMP) inhibitor. PMID:25959544

  14. Data normalization in biosurveillance: an information-theoretic approach.

    PubMed

    Peter, William; Najmi, Amir H; Burkom, Howard

    2007-10-11

    An approach to identifying public health threats by characterizing syndromic surveillance data in terms of its surprisability is discussed. Surprisability in our model is measured by assigning a probability distribution to a time series, and then calculating its entropy, leading to a straightforward designation of an alert. Initial application of our method is to investigate the applicability of using suitably-normalized syndromic counts (i.e., proportions) to improve early event detection.
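
    One minimal reading of this idea: assign a probability distribution to a trailing window of counts, compute its Shannon entropy, and alert when the entropy drops, i.e. when probability mass concentrates on a few days. The sketch below does exactly that on simulated data with an injected outbreak; the window length, training period and alert threshold are illustrative assumptions, not the authors' parameterization.

      import numpy as np

      def window_entropy(counts, window=28):
          """Shannon entropy of each trailing window, after normalizing
          the window's counts into a probability distribution."""
          H = []
          for t in range(window, len(counts)):
              w = counts[t - window:t].astype(float)
              p = w / w.sum()
              H.append(-np.sum(p * np.log(p + 1e-12)))
          return np.array(H)

      rng = np.random.default_rng(6)
      counts = rng.poisson(20, 120).astype(float)
      counts[100:104] += 60                   # injected outbreak signal
      H = window_entropy(counts)
      mu, sd = H[:60].mean(), H[:60].std()    # stats from a clean period
      alert = H < mu - 4.0 * sd               # surprisingly low entropy
      print(np.where(alert)[0] + 28)          # flagged days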

  15. Turbulent Dispersion Modelling in a Complex Urban Environment - Data Analysis and Model Development

    DTIC Science & Technology

    2010-02-01

    Technology Laboratory (Dstl) is used as a benchmark for comparison. Comparisons are also made with some more practically oriented computational fluid dynamics ... predictions. To achieve clarity in the range of approaches available for practical models of contaminant dispersion in urban areas, an overview of ... complexity of those methods is simplified to a degree that allows straightforward practical implementation and application. Using these results as a

  16. Parallelization of Lower-Upper Symmetric Gauss-Seidel Method for Chemically Reacting Flow

    NASA Technical Reports Server (NTRS)

    Yoon, Seokkwan; Jost, Gabriele; Chang, Sherry

    2005-01-01

    Development of technologies for exploration of the solar system has revived an interest in computational simulation of chemically reacting flows, since planetary probe vehicles exhibit non-equilibrium phenomena during atmospheric entry of a planet or a moon as well as during reentry to the Earth. Stability in combustion is essential for new propulsion systems. Numerical solution of real-gas flows often increases computational work by an order of magnitude compared to perfect-gas flow, partly because of the increased complexity of the equations to solve. Recently, as part of Project Columbia, NASA has integrated a cluster of interconnected SGI Altix systems to provide a ten-fold increase in current supercomputing capacity that includes an SGI Origin system. Both the new and existing machines are based on a cache-coherent non-uniform memory access architecture. The Lower-Upper Symmetric Gauss-Seidel (LU-SGS) relaxation method has been implemented into both perfect- and real-gas flow codes, including the Real-Gas Aerodynamic Simulator (RGAS). However, the vectorized RGAS code runs inefficiently on cache-based shared-memory machines such as the SGI systems. Parallelization of a Gauss-Seidel method is nontrivial due to its sequential nature. The LU-SGS method has been vectorized on an oblique plane in the INS3D-LU code, which has been one of the base codes for the NAS Parallel Benchmarks. The oblique plane has been called a hyperplane by computer scientists. It is straightforward to parallelize a Gauss-Seidel method by partitioning the hyperplanes once they are formed. Another parallelization strategy is to schedule processors like a pipeline in software. Both the hyperplane and pipeline methods have been implemented using OpenMP directives. The present paper reports the performance of the parallelized RGAS code on SGI Origin and Altix systems.
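
    The hyperplane idea is easy to demonstrate on a toy problem: on a structured grid, all points with the same index sum i + j lie on one anti-diagonal and depend only on values from earlier diagonals, so every point on a diagonal can be updated concurrently. The sketch below applies this wavefront ordering to a serial 2D Laplace Gauss-Seidel sweep; it illustrates only the ordering, not the actual LU-SGS scheme or the RGAS code:

```python
import numpy as np

def hyperplane_gauss_seidel(u, sweeps=100):
    """Sketch of hyperplane (wavefront) ordering for a Gauss-Seidel sweep.

    For a 2D Laplace problem, interior points with equal i + j form an
    anti-diagonal "hyperplane"; neighbors always lie on adjacent diagonals,
    so all points on one diagonal are mutually independent and could be
    updated in parallel (e.g. one OpenMP loop per diagonal).
    """
    n, m = u.shape
    for _ in range(sweeps):
        for d in range(2, n + m - 3):                # hyperplanes i + j = d
            i = np.arange(max(1, d - m + 2), min(n - 2, d - 1) + 1)
            j = d - i
            # Gauss-Seidel update: (i-1, j) and (i, j-1) already carry this
            # sweep's fresh values, since they lie on diagonal d - 1.
            u[i, j] = 0.25 * (u[i - 1, j] + u[i + 1, j]
                              + u[i, j - 1] + u[i, j + 1])
    return u
```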

  17. Regression without truth with Markov chain Monte-Carlo

    NASA Astrophysics Data System (ADS)

    Madan, Hennadii; Pernuš, Franjo; Likar, Boštjan; Špiclin, Žiga

    2017-03-01

    Regression without truth (RWT) is a statistical technique for estimating the error-model parameters of each method in a group of methods used for measurement of a certain quantity. A very attractive aspect of RWT is that it does not rely on a reference method or "gold standard" data, which is otherwise difficult to obtain. RWT was used for a reference-free performance comparison of several methods for measuring left ventricular ejection fraction (EF), i.e., the percentage of blood leaving the ventricle each time the heart contracts, and has since been applied to various other quantitative imaging biomarkers (QIBs). Herein, we show how Markov chain Monte-Carlo (MCMC), a computational technique for drawing samples from a statistical distribution with a probability density function known only up to a normalizing coefficient, can be used to augment RWT to gain a number of important benefits compared to the original approach based on iterative optimization. For instance, the proposed MCMC-based RWT enables estimation of the joint posterior distribution of the error-model parameters, straightforward quantification of the uncertainty of the estimates, and estimation of the true value of the measurand with corresponding credible intervals (CIs); it does not require a finite support for the prior distribution of the measurand, and it generally has much improved robustness against convergence to non-global maxima. The proposed approach is validated using synthetic data that emulate the EF data for 45 patients measured with 8 different methods. The obtained results show that the 90% CIs of the corresponding parameter estimates contain the true values of all error-model parameters and the measurand. A potential real-world application is to take measurements of a certain QIB with several different methods and then use the proposed framework to compute estimates of the true values and their uncertainty, vital information for diagnosis based on QIBs.
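
    For readers unfamiliar with MCMC, the core building block is a sampler that needs the target density only up to its normalizing coefficient, exactly as stated above. The following is a generic random-walk Metropolis sketch with illustrative step size and sample count, not the specific RWT posterior used in the paper:

```python
import numpy as np

def metropolis(log_post, theta0, n_samples=20000, step=0.05, rng=None):
    """Minimal random-walk Metropolis sampler (a generic MCMC building block).

    `log_post` is the log of the posterior density, known only up to the
    normalizing constant. A real RWT application would target the joint
    posterior over all methods' error-model parameters plus the true values
    of the measurand.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    theta = np.asarray(theta0, float)
    lp = log_post(theta)
    chain = []
    for _ in range(n_samples):
        prop = theta + step * rng.standard_normal(theta.shape)
        lp_prop = log_post(prop)
        # Accept with probability min(1, post(prop) / post(theta)).
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)

# Credible intervals fall straight out of the samples, e.g.:
# ci = np.percentile(chain, [5, 95], axis=0)   # 90% CI per parameter
```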

  18. Modeling of proton-induced radioactivation background in hard X-ray telescopes: Geant4-based simulation and its demonstration by Hitomi's measurement in a low Earth orbit

    NASA Astrophysics Data System (ADS)

    Odaka, Hirokazu; Asai, Makoto; Hagino, Kouichi; Koi, Tatsumi; Madejski, Greg; Mizuno, Tsunefumi; Ohno, Masanori; Saito, Shinya; Sato, Tamotsu; Wright, Dennis H.; Enoto, Teruaki; Fukazawa, Yasushi; Hayashi, Katsuhiro; Kataoka, Jun; Katsuta, Junichiro; Kawaharada, Madoka; Kobayashi, Shogo B.; Kokubun, Motohide; Laurent, Philippe; Lebrun, Francois; Limousin, Olivier; Maier, Daniel; Makishima, Kazuo; Mimura, Taketo; Miyake, Katsuma; Mori, Kunishiro; Murakami, Hiroaki; Nakamori, Takeshi; Nakano, Toshio; Nakazawa, Kazuhiro; Noda, Hirofumi; Ohta, Masayuki; Ozaki, Masanobu; Sato, Goro; Sato, Rie; Tajima, Hiroyasu; Takahashi, Hiromitsu; Takahashi, Tadayuki; Takeda, Shin'ichiro; Tanaka, Takaaki; Tanaka, Yasuyuki; Terada, Yukikatsu; Uchiyama, Hideki; Uchiyama, Yasunobu; Watanabe, Shin; Yamaoka, Kazutaka; Yasuda, Tetsuya; Yatsu, Yoichi; Yuasa, Takayuki; Zoglauer, Andreas

    2018-05-01

    Hard X-ray astronomical observatories in orbit suffer from a significant amount of background due to radioactivation induced by cosmic-ray protons and/or geomagnetically trapped protons. Within the framework of a full Monte Carlo simulation, we present modeling of in-orbit instrumental background which is dominated by radioactivation. To reduce the computation time required by straightforward simulations of delayed emissions from activated isotopes, we insert a semi-analytical calculation that converts production probabilities of radioactive isotopes by interaction of the primary protons into decay rates at measurement time of all secondary isotopes. Therefore, our simulation method is separated into three steps: (1) simulation of isotope production, (2) semi-analytical conversion to decay rates, and (3) simulation of decays of the isotopes at measurement time. This method is verified by a simple setup that has a CdTe semiconductor detector, and shows a 100-fold improvement in efficiency over the straightforward simulation. To demonstrate its experimental performance, the simulation framework was tested against data measured with a CdTe sensor in the Hard X-ray Imager onboard the Hitomi X-ray Astronomy Satellite, which was put into a low Earth orbit with an altitude of 570 km and an inclination of 31°, and thus experienced a large amount of irradiation from geomagnetically trapped protons during its passages through the South Atlantic Anomaly. The simulation is able to treat full histories of the proton irradiation and multiple measurement windows. The simulation results agree very well with the measured data, showing that the measured background is well described by the combination of proton-induced radioactivation of the CdTe detector itself and thick Bi4Ge3O12 scintillator shields, leakage of cosmic X-ray background and albedo gamma-ray radiation, and emissions from naturally contaminated isotopes in the detector system.
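
    Step (2), the semi-analytical conversion, can be hinted at with a toy single-isotope example: nuclei produced at time t and observed at t_meas survive with probability exp(-λ(t_meas - t)), so summing over the irradiation history and multiplying by λ yields the decay rate. The sketch below is our simplified illustration only (no decay chains, unlike the full method, which propagates all secondary isotopes):

```python
import numpy as np

def activity_at_measurement(prod_times, prod_counts, half_life, t_meas):
    """Toy conversion of an isotope production history to a decay rate.

    `prod_counts[i]` nuclei produced at `prod_times[i]` decay exponentially
    with constant lambda = ln(2) / half_life; the surviving population at
    t_meas times lambda is the activity to feed into the decay simulation.
    """
    lam = np.log(2.0) / half_life
    prod_times = np.asarray(prod_times, float)
    prod_counts = np.asarray(prod_counts, float)
    surviving = prod_counts * np.exp(-lam * (t_meas - prod_times))
    return lam * surviving.sum()

# e.g. three SAA passages producing 1e6, 8e5, and 1.2e6 nuclei (all values
# hypothetical), measured one day after the first passage:
# a = activity_at_measurement([0.0, 5400.0, 10800.0], [1e6, 8e5, 1.2e6],
#                             half_life=6600.0, t_meas=86400.0)
```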

  19. Modeling of proton-induced radioactivation background in hard X-ray telescopes: Geant4-based simulation and its demonstration by Hitomi's measurement in a low Earth orbit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Odaka, Hirokazu; Asai, Makoto; Hagino, Kouichi

    Hard X-ray astronomical observatories in orbit suffer from a significant amount of background due to radioactivation induced by cosmic-ray protons and/or geomagnetically trapped protons. Within the framework of a full Monte Carlo simulation, we present modeling of in-orbit instrumental background which is dominated by radioactivation. To reduce the computation time required by straightforward simulations of delayed emissions from activated isotopes, we insert a semi-analytical calculation that converts production probabilities of radioactive isotopes by interaction of the primary protons into decay rates at measurement time of all secondary isotopes. Therefore, our simulation method is separated into three steps: (1) simulation of isotope production, (2) semi-analytical conversion to decay rates, and (3) simulation of decays of the isotopes at measurement time. This method is verified by a simple setup that has a CdTe semiconductor detector, and shows a 100-fold improvement in efficiency over the straightforward simulation. To demonstrate its experimental performance, the simulation framework was tested against data measured with a CdTe sensor in the Hard X-ray Imager onboard the Hitomi X-ray Astronomy Satellite, which was put into a low Earth orbit with an altitude of 570 km and an inclination of 31°, and thus experienced a large amount of irradiation from geomagnetically trapped protons during its passages through the South Atlantic Anomaly. The simulation is able to treat full histories of the proton irradiation and multiple measurement windows. The simulation results agree very well with the measured data, showing that the measured background is well described by the combination of proton-induced radioactivation of the CdTe detector itself and thick Bi4Ge3O12 scintillator shields, leakage of cosmic X-ray background and albedo gamma-ray radiation, and emissions from naturally contaminated isotopes in the detector system.

  20. Modeling of proton-induced radioactivation background in hard X-ray telescopes: Geant4-based simulation and its demonstration by Hitomi's measurement in a low Earth orbit

    DOE PAGES

    Odaka, Hirokazu; Asai, Makoto; Hagino, Kouichi; ...

    2018-02-19

    Hard X-ray astronomical observatories in orbit suffer from a significant amount of background due to radioactivation induced by cosmic-ray protons and/or geomagnetically trapped protons. Within the framework of a full Monte Carlo simulation, we present modeling of in-orbit instrumental background which is dominated by radioactivation. To reduce the computation time required by straightforward simulations of delayed emissions from activated isotopes, we insert a semi-analytical calculation that converts production probabilities of radioactive isotopes by interaction of the primary protons into decay rates at measurement time of all secondary isotopes. Therefore, our simulation method is separated into three steps: (1) simulation of isotope production, (2) semi-analytical conversion to decay rates, and (3) simulation of decays of the isotopes at measurement time. This method is verified by a simple setup that has a CdTe semiconductor detector, and shows a 100-fold improvement in efficiency over the straightforward simulation. To demonstrate its experimental performance, the simulation framework was tested against data measured with a CdTe sensor in the Hard X-ray Imager onboard the Hitomi X-ray Astronomy Satellite, which was put into a low Earth orbit with an altitude of 570 km and an inclination of 31°, and thus experienced a large amount of irradiation from geomagnetically trapped protons during its passages through the South Atlantic Anomaly. The simulation is able to treat full histories of the proton irradiation and multiple measurement windows. The simulation results agree very well with the measured data, showing that the measured background is well described by the combination of proton-induced radioactivation of the CdTe detector itself and thick Bi4Ge3O12 scintillator shields, leakage of cosmic X-ray background and albedo gamma-ray radiation, and emissions from naturally contaminated isotopes in the detector system.
