Sample records for accurate fast simple

  1. The Field Assessment Stroke Triage for Emergency Destination (FAST-ED): a Simple and Accurate Pre-Hospital Scale to Detect Large Vessel Occlusion Strokes

    PubMed Central

    Lima, Fabricio O.; Silva, Gisele S.; Furie, Karen L.; Frankel, Michael R.; Lev, Michael H.; Camargo, Érica CS; Haussen, Diogo C.; Singhal, Aneesh B.; Koroshetz, Walter J.; Smith, Wade S.; Nogueira, Raul G.

    2016-01-01

    Background and Purpose: Patients with large vessel occlusion strokes (LVOS) may be better served by direct transfer to endovascular-capable centers, avoiding hazardous delays between primary and comprehensive stroke centers. However, accurate stroke field triage remains challenging. We aimed to develop a simple field scale to identify LVOS. Methods: The FAST-ED scale was based on items of the NIHSS with higher predictive value for LVOS and tested in the STOPStroke cohort, in which patients underwent CT angiography within the first 24 hours of stroke onset. LVOS were defined by total occlusions involving the intracranial ICA, MCA-M1, MCA-M2, or basilar arteries. Patients with partial, bi-hemispheric, and/or anterior + posterior circulation occlusions were excluded. Receiver operating characteristic (ROC) curve, sensitivity, specificity, and positive (PPV) and negative (NPV) predictive values of FAST-ED were compared with those of the NIHSS, the Rapid Arterial oCclusion Evaluation (RACE) scale, and the Cincinnati Prehospital Stroke Severity Scale (CPSS). Results: LVO was detected in 240 of the 727 qualifying patients (33%). FAST-ED had accuracy comparable to the NIHSS and higher than RACE and CPSS in predicting LVO (area under the ROC curve: FAST-ED=0.81 as reference; NIHSS=0.80, p=0.28; RACE=0.77, p=0.02; and CPSS=0.75, p=0.002). A FAST-ED ≥4 had sensitivity of 0.60, specificity of 0.89, PPV of 0.72, and NPV of 0.82, versus 0.55, 0.87, 0.68, and 0.79 for RACE ≥5, and 0.56, 0.85, 0.65, and 0.78 for CPSS ≥2, respectively. Conclusions: FAST-ED is a simple scale that, if successfully validated in the field, may be used by emergency medical professionals to identify LVOS in the pre-hospital setting, enabling rapid triage of patients. PMID:27364531
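
    A worked note (not from the study): the reported PPV and NPV follow from the scale's sensitivity, specificity, and the 33% LVO prevalence in this cohort via the standard Bayes-rule formulas. The short sketch below applies those formulas to the FAST-ED ≥4 operating point quoted in the abstract.

    ```python
    # Minimal sketch: PPV and NPV from sensitivity, specificity and prevalence,
    #   PPV = se*p / (se*p + (1-sp)*(1-p))
    #   NPV = sp*(1-p) / ((1-se)*p + sp*(1-p))
    def predictive_values(sensitivity, specificity, prevalence):
        """Return (PPV, NPV) for a binary test at the given prevalence."""
        p = prevalence
        ppv = sensitivity * p / (sensitivity * p + (1 - specificity) * (1 - p))
        npv = specificity * (1 - p) / ((1 - sensitivity) * p + specificity * (1 - p))
        return ppv, npv

    # Operating point for FAST-ED >= 4 reported in the abstract (prevalence 240/727)
    ppv, npv = predictive_values(sensitivity=0.60, specificity=0.89, prevalence=240 / 727)
    print(f"PPV ~ {ppv:.2f}, NPV ~ {npv:.2f}")  # roughly 0.72 and 0.82
    ```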

  2. Simple and accurate sum rules for highly relativistic systems

    NASA Astrophysics Data System (ADS)

    Cohen, Scott M.

    2005-03-01

    In this paper, I consider the Bethe and Thomas-Reiche-Kuhn sum rules, which together form the foundation of Bethe's theory of energy loss from fast charged particles to matter. For nonrelativistic target systems, the use of closure leads directly to simple expressions for these quantities. In the case of relativistic systems, on the other hand, the calculation of sum rules is fraught with difficulties. Various perturbative approaches have been used over the years to obtain relativistic corrections, but these methods fail badly when the system in question is very strongly bound. Here, I present an approach that leads to relatively simple expressions yielding accurate sums, even for highly relativistic many-electron systems. I also offer an explanation for the difference between relativistic and nonrelativistic sum rules in terms of the Zitterbewegung of the electrons.

  3. Reverse radiance: a fast accurate method for determining luminance

    NASA Astrophysics Data System (ADS)

    Moore, Kenneth E.; Rykowski, Ronald F.; Gangadhara, Sanjay

    2012-10-01

    Reverse ray tracing from a region of interest backward to the source has long been proposed as an efficient method of determining luminous flux. The idea is to trace rays only from where the final flux needs to be known back to the source, rather than tracing in the forward direction from the source outward to see where the light goes. Once the reverse ray reaches the source, the radiance the equivalent forward ray would have represented is determined and the resulting flux computed. Although reverse ray tracing is conceptually simple, the method critically depends upon an accurate source model in both the near and far field. An overly simplified source model, such as an ideal Lambertian surface, substantially detracts from the accuracy, and thus the benefit, of the method. This paper introduces an improved method of reverse ray tracing, which we call Reverse Radiance, that avoids assumptions about the source properties. The new method uses measured data from a Source Imaging Goniometer (SIG) that simultaneously measures near and far field luminous data. Incorporating this data into a fast reverse ray tracing integration method yields fast, accurate data for a wide variety of illumination problems.

  4. Fast and accurate computation of projected two-point functions

    NASA Astrophysics Data System (ADS)

    Grasshorn Gebhardt, Henry S.; Jeong, Donghui

    2018-01-01

    We present the two-point function from the fast and accurate spherical Bessel transformation (2-FAST) algorithm (our code is available at https://github.com/hsgg/twoFAST) for a fast and accurate computation of integrals involving one or two spherical Bessel functions. These types of integrals occur when projecting the galaxy power spectrum P(k) onto the configuration space, ξℓν(r), or spherical harmonic space, Cℓ(χ, χ'). First, we employ the FFTLog transformation of the power spectrum to divide the calculation into P(k)-dependent coefficients and P(k)-independent integrations of basis functions multiplied by spherical Bessel functions. We find analytical expressions for the latter integrals in terms of special functions, for which recursion provides a fast and accurate evaluation. The algorithm, therefore, circumvents direct integration of highly oscillating spherical Bessel functions.
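
    For orientation, the simplest projected quantity being computed here has the form ξℓ(r) = (1/2π²) ∫ dk k² P(k) jℓ(kr). The sketch below evaluates this integral by naive direct quadrature, which is exactly the slow, oscillation-plagued computation that 2-FAST is designed to avoid; the toy power spectrum is an assumption chosen only to make the example self-contained.

    ```python
    # Naive baseline: direct quadrature of xi_l(r) = 1/(2 pi^2) * int dk k^2 P(k) j_l(k r).
    # 2-FAST circumvents this kind of brute-force integration of oscillatory Bessel functions.
    import numpy as np
    from scipy.special import spherical_jn

    def xi_ell_direct(r, ell, pk, kmin=1e-4, kmax=10.0, nk=200_000):
        """Brute-force trapezoidal estimate of the projected two-point function."""
        k = np.linspace(kmin, kmax, nk)
        integrand = k**2 * pk(k) * spherical_jn(ell, k * r)
        return np.trapz(integrand, k) / (2.0 * np.pi**2)

    # Toy power spectrum (assumption, not the galaxy P(k) used in the paper)
    pk_toy = lambda k: k * np.exp(-(k / 0.3) ** 2)

    print(xi_ell_direct(r=50.0, ell=0, pk=pk_toy))
    ```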

  5. Field Assessment Stroke Triage for Emergency Destination: A Simple and Accurate Prehospital Scale to Detect Large Vessel Occlusion Strokes.

    PubMed

    Lima, Fabricio O; Silva, Gisele S; Furie, Karen L; Frankel, Michael R; Lev, Michael H; Camargo, Érica C S; Haussen, Diogo C; Singhal, Aneesh B; Koroshetz, Walter J; Smith, Wade S; Nogueira, Raul G

    2016-08-01

    Patients with large vessel occlusion strokes (LVOS) may be better served by direct transfer to endovascular-capable centers, avoiding hazardous delays between primary and comprehensive stroke centers. However, accurate stroke field triage remains challenging. We aimed to develop a simple field scale to identify LVOS. The Field Assessment Stroke Triage for Emergency Destination (FAST-ED) scale was based on items of the National Institutes of Health Stroke Scale (NIHSS) with higher predictive value for LVOS and tested in the Screening Technology and Outcomes Project in Stroke (STOPStroke) cohort, in which patients underwent computed tomographic angiography within the first 24 hours of stroke onset. LVOS were defined by total occlusions involving the intracranial internal carotid artery, middle cerebral artery-M1, middle cerebral artery-M2, or basilar arteries. Patients with partial, bihemispheric, and anterior+posterior circulation occlusions were excluded. Receiver operating characteristic curve, sensitivity, specificity, positive predictive value, and negative predictive value of FAST-ED were compared with the NIHSS, Rapid Arterial Occlusion Evaluation (RACE) scale, and Cincinnati Prehospital Stroke Severity (CPSS) scale. LVO was detected in 240 of the 727 qualifying patients (33%). FAST-ED had accuracy comparable to the NIHSS and higher than RACE and CPSS in predicting LVO (area under the receiver operating characteristic curve: FAST-ED=0.81 as reference; NIHSS=0.80, P=0.28; RACE=0.77, P=0.02; and CPSS=0.75, P=0.002). A FAST-ED ≥4 had sensitivity of 0.60, specificity of 0.89, positive predictive value of 0.72, and negative predictive value of 0.82 versus RACE ≥5 of 0.55, 0.87, 0.68, and 0.79, and CPSS ≥2 of 0.56, 0.85, 0.65, and 0.78, respectively. FAST-ED is a simple scale that, if successfully validated in the field, may be used by medical emergency professionals to identify LVOS in the prehospital setting, enabling rapid triage of patients.

  6. Fast and accurate computation of system matrix for area integral model-based algebraic reconstruction technique

    NASA Astrophysics Data System (ADS)

    Zhang, Shunli; Zhang, Dinghua; Gong, Hao; Ghasemalizadeh, Omid; Wang, Ge; Cao, Guohua

    2014-11-01

    Iterative algorithms, such as the algebraic reconstruction technique (ART), are popular for image reconstruction. For iterative reconstruction, the area integral model (AIM) is more accurate, and thus gives better reconstruction quality, than the line integral model (LIM). However, the computation of the system matrix for AIM is more complex and time-consuming than that for LIM. Here, we propose a fast and accurate method to compute the system matrix for AIM. First, we calculate the intersection of each boundary line of a narrow fan-beam with pixels in a recursive and efficient manner. Then, by grouping the beam-pixel intersection area into six types according to the slopes of the two boundary lines, we analytically compute the intersection area of the narrow fan-beam with the pixels in a simple algebraic fashion. Overall, experimental results show that our method is about three times faster than the Siddon algorithm and about two times faster than the distance-driven model (DDM) in computation of the system matrix. For one iteration, the reconstruction speed of our AIM-based ART is also faster than that of LIM-based ART using the Siddon algorithm and of DDM-based ART. The fast reconstruction speed of our method was accomplished without compromising the image quality.
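
    Context for readers unfamiliar with ART: the system matrix discussed here is consumed by the classical Kaczmarz-style ART update, a minimal sketch of which follows. The sketch makes no assumption about how the matrix entries (line- or area-integral weights) were computed and is not code from the paper.

    ```python
    # Minimal sketch of the ART (Kaczmarz) iteration:
    #   x <- x + lam * (b_i - a_i . x) / ||a_i||^2 * a_i
    # A is the system matrix (rows = rays, columns = pixels), b the measured projections.
    import numpy as np

    def art_reconstruct(A, b, n_iter=10, lam=1.0):
        x = np.zeros(A.shape[1])
        row_norms = np.einsum("ij,ij->i", A, A)  # squared norm of each row
        for _ in range(n_iter):
            for i in range(A.shape[0]):
                if row_norms[i] > 0:
                    x += lam * (b[i] - A[i] @ x) / row_norms[i] * A[i]
        return x

    # Tiny demo on a 2x2 image with 4 illustrative "rays"
    A = np.array([[1., 1., 0., 0.],
                  [0., 0., 1., 1.],
                  [1., 0., 1., 0.],
                  [0., 1., 0., 1.]])
    x_true = np.array([1.0, 2.0, 3.0, 4.0])
    print(art_reconstruct(A, A @ x_true, n_iter=200))  # approaches [1, 2, 3, 4]
    ```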

  7. BASIC: A Simple and Accurate Modular DNA Assembly Method.

    PubMed

    Storch, Marko; Casini, Arturo; Mackrow, Ben; Ellis, Tom; Baldwin, Geoff S

    2017-01-01

    Biopart Assembly Standard for Idempotent Cloning (BASIC) is a simple, accurate, and robust DNA assembly method. The method is based on linker-mediated DNA assembly and provides highly accurate DNA assembly with 99 % correct assemblies for four parts and 90 % correct assemblies for seven parts [1]. The BASIC standard defines a single entry vector for all parts flanked by the same prefix and suffix sequences and its idempotent nature means that the assembled construct is returned in the same format. Once a part has been adapted into the BASIC format it can be placed at any position within a BASIC assembly without the need for reformatting. This allows laboratories to grow comprehensive and universal part libraries and to share them efficiently. The modularity within the BASIC framework is further extended by the possibility of encoding ribosomal binding sites (RBS) and peptide linker sequences directly on the linkers used for assembly. This makes BASIC a highly versatile library construction method for combinatorial part assembly including the construction of promoter, RBS, gene variant, and protein-tag libraries. In comparison with other DNA assembly standards and methods, BASIC offers a simple robust protocol; it relies on a single entry vector, provides for easy hierarchical assembly, and is highly accurate for up to seven parts per assembly round [2].

  8. A Fast and Accurate Method of Radiation Hydrodynamics Calculation in Spherical Symmetry

    NASA Astrophysics Data System (ADS)

    Stamer, Torsten; Inutsuka, Shu-ichiro

    2018-06-01

    We develop a new numerical scheme for solving the radiative transfer equation in a spherically symmetric system. This scheme does not rely on any kind of diffusion approximation, and it is accurate for optically thin, thick, and intermediate systems. In the limit of a homogeneously distributed extinction coefficient, our method is very accurate and exceptionally fast. We combine this fast method with a slower but more generally applicable method to describe realistic problems. We perform various test calculations, including a simplified protostellar collapse simulation. We also discuss possible future improvements.

  9. Accurate screening for insulin resistance in PCOS women using fasting insulin concentrations.

    PubMed

    Lunger, Fabian; Wildt, Ludwig; Seeber, Beata

    2013-06-01

    The aims of this cross-sectional study were to evaluate the relative agreement of both static and dynamic methods of diagnosing insulin resistance (IR) in women with polycystic ovary syndrome (PCOS) and to suggest a simple screening method for IR. All participants underwent serial blood draws for hormonal profiling and lipid assessment, a 3-hour, 75 g oral glucose tolerance test (OGTT) with measurements of glucose and insulin every 15 min, and an ACTH stimulation test. The prevalence of IR ranged from 12.2% to 60.5%, depending on the IR index used. Based on the largest area under the curve in receiver operating characteristic (ROC) analyses, the dynamic indices outperformed the static indices, with the glucose-to-insulin ratio and fasting insulin (fInsulin) demonstrating the best diagnostic properties. Applying two cut-offs representing fInsulin extremes (<7 and >13 mIU/l, respectively) gave the diagnosis in 70% of the patients with high accuracy. Currently utilized indices for assessing IR give highly variable results in women with PCOS. The most accurate indices based on dynamic testing can be time-consuming and labor-intensive. We suggest the use of fInsulin as a simple screening test, which can reduce the number of OGTTs needed to routinely assess insulin resistance in women with PCOS.
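
    A minimal sketch of the two-cut-off fasting-insulin screen described above follows. The cut-off values are those quoted in the abstract; the handling of the intermediate range (referral for dynamic testing such as an OGTT) is an assumption made explicit in the code.

    ```python
    # Sketch of the two-cut-off fasting-insulin screen described in the abstract.
    #   fInsulin < 7 mIU/l  -> classify as not insulin resistant
    #   fInsulin > 13 mIU/l -> classify as insulin resistant
    #   in between          -> indeterminate; refer for dynamic testing (assumption)
    def screen_insulin_resistance(f_insulin_miu_per_l):
        if f_insulin_miu_per_l < 7:
            return "not IR"
        if f_insulin_miu_per_l > 13:
            return "IR"
        return "indeterminate - dynamic testing (e.g. OGTT) required"

    for value in (5.2, 9.8, 16.4):
        print(value, "->", screen_insulin_resistance(value))
    ```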

  10. Fast and accurate calculation of dilute quantum gas using Uehling–Uhlenbeck model equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yano, Ryosuke, E-mail: ryosuke.yano@tokiorisk.co.jp

    The Uehling–Uhlenbeck (U–U) model equation is studied for the fast and accurate calculation of a dilute quantum gas. In particular, the direct simulation Monte Carlo (DSMC) method is used to solve the U–U model equation. DSMC analysis based on the U–U model equation is expected to enable the thermalization to be accurately obtained using a small number of sample particles and the dilute quantum gas dynamics to be calculated in a practical time. Finally, the applicability of DSMC analysis based on the U–U model equation to the fast and accurate calculation of a dilute quantum gas is confirmed by calculating the viscosity coefficient of a Bose gas on the basis of the Green–Kubo expression and the shock layer of a dilute Bose gas around a cylinder.

  11. Fast and accurate mock catalogue generation for low-mass galaxies

    NASA Astrophysics Data System (ADS)

    Koda, Jun; Blake, Chris; Beutler, Florian; Kazin, Eyal; Marin, Felipe

    2016-06-01

    We present an accurate and fast framework for generating mock catalogues including low-mass haloes, based on an implementation of the COmoving Lagrangian Acceleration (COLA) technique. Multiple realisations of mock catalogues are crucial for analyses of large-scale structure, but conventional N-body simulations are too computationally expensive for the production of thousands of realisations. We show that COLA simulations can produce accurate mock catalogues with moderate computational resources for low- to intermediate-mass galaxies in 10^12 M⊙ haloes, both in real and redshift space. COLA simulations have accurate peculiar velocities, without systematic errors in the velocity power spectra for k ≤ 0.15 h Mpc^-1, and with only 3 per cent error for k ≤ 0.2 h Mpc^-1. We use COLA with 10 time steps and a Halo Occupation Distribution to produce 600 mock galaxy catalogues of the WiggleZ Dark Energy Survey. Our parallelized code for efficient generation of accurate halo catalogues is publicly available at github.com/junkoda/cola_halo.

  12. A fast and accurate surface plasmon resonance system

    NASA Astrophysics Data System (ADS)

    Espinosa Sánchez, Y. M.; Luna Moreno, D.; Noé Arias, E.; Garnica Campos, G.

    2012-10-01

    In this work we propose a Surface Plasmon Resonance (SPR) system driven by LabVIEW software that produces fast, simple and accurate measurements of samples. The system acquires 2000 data points over a 20-degree range in 20 seconds, with a resolution of 0.01 degrees. All the information is sent from the computer to the microcontroller as an array of bytes in hexadecimal format to be analyzed. Besides SPR measurements, the system can also be used to measure the critical angle and the Brewster angle using the Abeles method.

  13. Flight Research into Simple Adaptive Control on the NASA FAST Aircraft

    NASA Technical Reports Server (NTRS)

    Hanson, Curtis E.

    2011-01-01

    A series of simple adaptive controllers with varying levels of complexity were designed, implemented and flight tested on the NASA Full-Scale Advanced Systems Testbed (FAST) aircraft. Lessons learned from the development and flight testing are presented.

  14. An accurate, fast, and scalable solver for high-frequency wave propagation

    NASA Astrophysics Data System (ADS)

    Zepeda-Núñez, L.; Taus, M.; Hewett, R.; Demanet, L.

    2017-12-01

    In many science and engineering applications, solving time-harmonic high-frequency wave propagation problems quickly and accurately is of paramount importance. For example, in geophysics, particularly in oil exploration, such problems can be the forward problem in an iterative process for solving the inverse problem of subsurface inversion. It is important to solve these wave propagation problems accurately in order to efficiently obtain meaningful solutions of the inverse problems: low order forward modeling can hinder convergence. Additionally, due to the volume of data and the iterative nature of most optimization algorithms, the forward problem must be solved many times. Therefore, a fast solver is necessary to make solving the inverse problem feasible. For time-harmonic high-frequency wave propagation, obtaining both speed and accuracy is historically challenging. Recently, there have been many advances in the development of fast solvers for such problems, including methods which have linear complexity with respect to the number of degrees of freedom. While most methods scale optimally only in the context of low-order discretizations and smooth wave speed distributions, the method of polarized traces has been shown to retain optimal scaling for high-order discretizations, such as hybridizable discontinuous Galerkin methods and for highly heterogeneous (and even discontinuous) wave speeds. The resulting fast and accurate solver is consequently highly attractive for geophysical applications. To date, this method relies on a layered domain decomposition together with a preconditioner applied in a sweeping fashion, which has limited straight-forward parallelization. In this work, we introduce a new version of the method of polarized traces which reveals more parallel structure than previous versions while preserving all of its other advantages. We achieve this by further decomposing each layer and applying the preconditioner to these new components separately and

  15. Genuine Onion: Simple, Fast, Flexible, and Cheap Website Authentication

    DTIC Science & Technology

    2015-05-21

    Genuine onion: Simple, Fast, Flexible, and Cheap Website Authentication. Paul Syverson, U.S. Naval Research Laboratory, paul.syverson@nrl.navy.mil. ...access to Internet websites. Tor is also used to access sites on the .onion virtual domain. The focus of .onion use and discussion has traditionally... onion system can be used to provide an entirely separate benefit: basic website authentication. We also argue that not only can onionsites provide

  16. Simple and Accurate Method for Central Spin Problems

    NASA Astrophysics Data System (ADS)

    Lindoy, Lachlan P.; Manolopoulos, David E.

    2018-06-01

    We describe a simple quantum mechanical method that can be used to obtain accurate numerical results over long timescales for the spin correlation tensor of an electron spin that is hyperfine coupled to a large number of nuclear spins. This method does not suffer from the statistical errors that accompany a Monte Carlo sampling of the exact eigenstates of the central spin Hamiltonian obtained from the algebraic Bethe ansatz, or from the growth of the truncation error with time in the time-dependent density matrix renormalization group (TDMRG) approach. As a result, it can be applied to larger central spin problems than the algebraic Bethe ansatz, and for longer times than the TDMRG algorithm. It is therefore an ideal method to use to solve central spin problems, and we expect that it will also prove useful for a variety of related problems that arise in a number of different research fields.

  17. Correlative imaging across microscopy platforms using the fast and accurate relocation of microscopic experimental regions (FARMER) method

    NASA Astrophysics Data System (ADS)

    Huynh, Toan; Daddysman, Matthew K.; Bao, Ying; Selewa, Alan; Kuznetsov, Andrey; Philipson, Louis H.; Scherer, Norbert F.

    2017-05-01

    Imaging specific regions of interest (ROIs) of nanomaterials or biological samples with different imaging modalities (e.g., light and electron microscopy) or at subsequent time points (e.g., before and after off-microscope procedures) requires relocating the ROIs. Unfortunately, relocation is typically difficult and very time consuming to achieve. Previously developed techniques involve the fabrication of arrays of features, the procedures for which are complex, and the added features can interfere with imaging the ROIs. We report the Fast and Accurate Relocation of Microscopic Experimental Regions (FARMER) method, which only requires determining the coordinates of 3 (or more) conspicuous reference points (REFs) and employs an algorithm based on geometric operators to relocate ROIs in subsequent imaging sessions. The 3 REFs can be quickly added to various regions of a sample using simple tools (e.g., permanent markers or conductive pens) and do not interfere with the ROIs. The coordinates of the REFs and the ROIs are obtained in the first imaging session (on a particular microscope platform) using an accurate and precise encoded motorized stage. In subsequent imaging sessions, the FARMER algorithm finds the new coordinates of the ROIs (on the same or different platforms), using the coordinates of the manually located REFs and the previously recorded coordinates. FARMER is convenient, fast (3-15 min/session, at least 10-fold faster than manual searches), accurate (4.4 μm average error on a microscope with a 100x objective), and precise (almost all errors are <8 μm), even with deliberate rotating and tilting of the sample well beyond normal repositioning accuracy. We demonstrate this versatility by imaging and re-imaging a diverse set of samples and imaging methods: live mammalian cells at different time points; fixed bacterial cells on two microscopes with different imaging modalities; and nanostructures on optical and electron microscopes. FARMER can be readily
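
    The abstract does not spell out the geometric operators used by FARMER. As one plausible illustration (an assumption, not the published algorithm), the 2D affine map determined by the three reference points can be solved for and then applied to the stored ROI coordinates:

    ```python
    # Illustrative sketch (not the published FARMER algorithm): relocate ROIs via the
    # 2D affine transform that maps the 3 reference points (REFs) recorded in the
    # original session onto their coordinates remeasured in the new session.
    import numpy as np

    def affine_from_refs(refs_old, refs_new):
        """refs_old, refs_new: (3, 2) arrays of matching stage coordinates."""
        src = np.hstack([refs_old, np.ones((3, 1))])          # rows [x, y, 1]
        M, *_ = np.linalg.lstsq(src, refs_new, rcond=None)    # (3, 2) affine matrix
        return M

    def relocate(rois_old, M):
        """Map (N, 2) ROI coordinates into the new session's stage frame."""
        src = np.hstack([rois_old, np.ones((len(rois_old), 1))])
        return src @ M

    refs_old = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
    refs_new = np.array([[1.0, 2.0], [10.8, 2.5], [0.5, 11.9]])  # hypothetical remeasured REFs
    M = affine_from_refs(refs_old, refs_new)
    print(relocate(np.array([[5.0, 5.0]]), M))
    ```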

  18. Fast and accurate image recognition algorithms for fresh produce food safety sensing

    NASA Astrophysics Data System (ADS)

    Yang, Chun-Chieh; Kim, Moon S.; Chao, Kuanglin; Kang, Sukwon; Lefcourt, Alan M.

    2011-06-01

    This research developed and evaluated the multispectral algorithms derived from hyperspectral line-scan fluorescence imaging under violet LED excitation for detection of fecal contamination on Golden Delicious apples. The algorithms utilized the fluorescence intensities at four wavebands, 680 nm, 684 nm, 720 nm, and 780 nm, for computation of simple functions for effective detection of contamination spots created on the apple surfaces using four concentrations of aqueous fecal dilutions. The algorithms detected more than 99% of the fecal spots. The effective detection of feces showed that a simple multispectral fluorescence imaging algorithm based on violet LED excitation may be appropriate to detect fecal contamination on fast-speed apple processing lines.
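
    The abstract names the four wavebands but not the exact "simple functions". The sketch below shows one hypothetical example of such a function, a pixel-wise band ratio with a threshold applied to a four-band fluorescence image; the particular ratio and threshold are illustrative assumptions, not the published algorithm.

    ```python
    # Hypothetical multispectral detection sketch: a pixel-wise band ratio with a
    # threshold. Bands correspond to 680, 684, 720 and 780 nm; the specific function
    # and threshold below are illustrative assumptions only.
    import numpy as np

    def detect_spots(cube, threshold=1.2):
        """cube: (H, W, 4) fluorescence intensities at [680, 684, 720, 780] nm."""
        b680, b684, b720, b780 = (cube[..., i].astype(float) for i in range(4))
        ratio = (b680 + b684) / (b720 + b780 + 1e-9)  # avoid division by zero
        return ratio > threshold  # boolean mask of candidate contamination pixels

    mask = detect_spots(np.random.rand(64, 64, 4))
    print(mask.sum(), "candidate pixels")
    ```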

  19. Learning fast accurate movements requires intact frontostriatal circuits

    PubMed Central

    Shabbott, Britne; Ravindran, Roshni; Schumacher, Joseph W.; Wasserman, Paula B.; Marder, Karen S.; Mazzoni, Pietro

    2013-01-01

    The basal ganglia are known to play a crucial role in movement execution, but their importance for motor skill learning remains unclear. Obstacles to our understanding include the lack of a universally accepted definition of motor skill learning (definition confound), and difficulties in distinguishing learning deficits from execution impairments (performance confound). We studied how healthy subjects and subjects with a basal ganglia disorder learn fast accurate reaching movements. We addressed the definition and performance confounds by: (1) focusing on an operationally defined core element of motor skill learning (speed-accuracy learning), and (2) using normal variation in initial performance to separate movement execution impairment from motor learning abnormalities. We measured motor skill learning as performance improvement in a reaching task with a speed-accuracy trade-off. We compared the performance of subjects with Huntington's disease (HD), a neurodegenerative basal ganglia disorder, to that of premanifest carriers of the HD mutation and of control subjects. The initial movements of HD subjects were less skilled (slower and/or less accurate) than those of control subjects. To factor out these differences in initial execution, we modeled the relationship between learning and baseline performance in control subjects. Subjects with HD exhibited a clear learning impairment that was not explained by differences in initial performance. These results support a role for the basal ganglia in both movement execution and motor skill learning. PMID:24312037

  20. The Subread aligner: fast, accurate and scalable read mapping by seed-and-vote

    PubMed Central

    Liao, Yang; Smyth, Gordon K.; Shi, Wei

    2013-01-01

    Read alignment is an ongoing challenge for the analysis of data from sequencing technologies. This article proposes an elegantly simple multi-seed strategy, called seed-and-vote, for mapping reads to a reference genome. The new strategy chooses the mapped genomic location for the read directly from the seeds. It uses a relatively large number of short seeds (called subreads) extracted from each read and allows all the seeds to vote on the optimal location. When the read length is <160 bp, overlapping subreads are used. More conventional alignment algorithms are then used to fill in detailed mismatch and indel information between the subreads that make up the winning voting block. The strategy is fast because the overall genomic location has already been chosen before the detailed alignment is done. It is sensitive because no individual subread is required to map exactly, nor are individual subreads constrained to map close by other subreads. It is accurate because the final location must be supported by several different subreads. The strategy extends easily to find exon junctions, by locating reads that contain sets of subreads mapping to different exons of the same gene. It scales up efficiently for longer reads. PMID:23558742
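
    A toy sketch of the seed-and-vote idea (an illustration only, not the Subread implementation) is given below: short subreads are extracted from the read, each subread looks up its exact hits in a k-mer index of the reference, every hit votes for the implied read start, and the location with the most votes wins.

    ```python
    # Toy seed-and-vote sketch (illustration only, not the Subread aligner).
    from collections import Counter, defaultdict

    def build_index(reference, k=8):
        index = defaultdict(list)
        for pos in range(len(reference) - k + 1):
            index[reference[pos:pos + k]].append(pos)
        return index

    def seed_and_vote(read, index, k=8, n_seeds=10):
        votes = Counter()
        step = max(1, (len(read) - k) // max(1, n_seeds - 1))
        for offset in range(0, len(read) - k + 1, step):   # extract subreads (seeds)
            for hit in index.get(read[offset:offset + k], ()):
                votes[hit - offset] += 1                   # vote for implied read start
        return votes.most_common(1)[0] if votes else None  # (location, votes)

    ref = "ACGTACGTTTGACCGTAGGCTAGCTAGGACGTTACGATCGATCGTACGTTAGC"
    read = ref[20:45]
    print(seed_and_vote(read, build_index(ref)))
    ```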

  1. SpotCaliper: fast wavelet-based spot detection with accurate size estimation.

    PubMed

    Püspöki, Zsuzsanna; Sage, Daniel; Ward, John Paul; Unser, Michael

    2016-04-15

    SpotCaliper is a novel wavelet-based image-analysis software providing a fast automatic detection scheme for circular patterns (spots), combined with the precise estimation of their size. It is implemented as an ImageJ plugin with a friendly user interface. The user is allowed to edit the results by modifying the measurements (in a semi-automated way) and to extract data for further analysis. The fine tuning of the detections includes the possibility of adjusting or removing the original detections, as well as adding further spots. The main advantage of the software is its ability to capture the size of spots in a fast and accurate way. Availability and implementation: http://bigwww.epfl.ch/algorithms/spotcaliper/. Contact: zsuzsanna.puspoki@epfl.ch. Supplementary data are available at Bioinformatics online.

  2. BLESS 2: accurate, memory-efficient and fast error correction method.

    PubMed

    Heo, Yun; Ramachandran, Anand; Hwu, Wen-Mei; Ma, Jian; Chen, Deming

    2016-08-01

    The most important features of error correction tools for sequencing data are accuracy, memory efficiency and fast runtime. The previous version of BLESS was highly memory-efficient and accurate, but it was too slow to handle reads from large genomes. We have developed a new version of BLESS to improve runtime and accuracy while maintaining a small memory usage. The new version, called BLESS 2, has an error correction algorithm that is more accurate than BLESS, and the algorithm has been parallelized using hybrid MPI and OpenMP programming. BLESS 2 was compared with five top-performing tools, and it was found to be the fastest when it was executed on two computing nodes using MPI, with each node containing twelve cores. Also, BLESS 2 showed at least 11% higher gain while retaining the memory efficiency of the previous version for large genomes. Availability and implementation: freely available at https://sourceforge.net/projects/bless-ec. Contact: dchen@illinois.edu. Supplementary data are available at Bioinformatics online.

  3. Concept of a Fast and Simple Atmospheric Radiative Transfer Model for Aerosol Retrieval

    NASA Astrophysics Data System (ADS)

    Seidel, Felix; Kokhanovsky, Alexander A.

    2010-05-01

    Radiative transfer modelling (RTM) is an indispensable tool for a number of applications, including astrophysics, climate studies and quantitative remote sensing. It simulates the attenuation of light through a translucent medium. Here, we look at the scattering and absorption of solar light on its way to the Earth's surface and back to space or back into a remote sensing instrument. RTM is regularly used in the framework of the so-called atmospheric correction to find properties of the surface. Further, RTM can be inverted to retrieve features of the atmosphere, such as the aerosol optical depth (AOD), for instance. Present-day RTMs such as 6S, MODTRAN, SHARM, RT3, SCIATRAN or RTMOM have errors of only a few percent; however, they are rather slow and often not easy to use. We present here a concept for a fast and simple RTM in the visible spectral range. It uses a blend of different existing RTM approaches with a special emphasis on fast approximative analytical equations and parametrizations. This concept may be helpful for efficient retrieval algorithms, which do not have to rely on the classic look-up-tables (LUT) approach. For example, it can be used to retrieve AOD without complex inversion procedures including multiple iterations. Naturally, there is always a trade-off between speed and modelling accuracy. The code can therefore be run in two different modes. The regular mode provides a reasonable ratio between speed and accuracy, while the optional mode is very fast but less accurate. The regular mode approximates the diffuse scattered light by calculating the first (single scattering) and second order of scattering according to the classical method of successive orders of scattering. The very fast mode calculates only the single scattering approximation, which does not need any slow numerical integration procedure, and uses a simple correction factor to account for multiple scattering. This factor is a parametrization of MODTRAN results, which

  4. Fast and Accurate Circuit Design Automation through Hierarchical Model Switching.

    PubMed

    Huynh, Linh; Tagkopoulos, Ilias

    2015-08-21

    In computer-aided biological design, the trifecta of characterized part libraries, accurate models and optimal design parameters is crucial for producing reliable designs. As the number of parts and model complexity increase, however, it becomes exponentially more difficult for any optimization method to search the solution space, hence creating a trade-off that hampers efficient design. To address this issue, we present a hierarchical computer-aided design architecture that uses a two-step approach for biological design. First, a simple model of low computational complexity is used to predict circuit behavior and assess candidate circuit branches through branch-and-bound methods. Then, a complex, nonlinear circuit model is used for a fine-grained search of the reduced solution space, thus achieving more accurate results. Evaluation with a benchmark of 11 circuits and a library of 102 experimental designs with known characterization parameters demonstrates a speed-up of 3 orders of magnitude when compared to other design methods that provide optimality guarantees.

  5. A simple, fast and accurate in-situ method to measure the rate of transport of redox species through membranes for lithium batteries

    NASA Astrophysics Data System (ADS)

    Meddings, Nina; Owen, John R.; Garcia-Araez, Nuria

    2017-10-01

    Lithium ion conducting membranes are important to protect the lithium metal electrode and act as a barrier to crossover species such as polysulphides in Li-S systems, redox mediators in Li-O2 cells or dissolved cathode species or electrolyte oxidation products in high voltage Li-ion batteries. We present an in-situ method for measuring permeability of membranes to crossover redox species. The method employs a 'Swagelok' cell design equipped with a glassy carbon working electrode, in which redox species are placed initially in the counter electrode compartment only. Permeability through the membrane, which separates working and counter electrodes, is determined using a square wave voltammetry technique that allows the concentration of crossover redox species to be evaluated over time with very high precision. We test the method using a model and well-behaved electrochemical system to demonstrate its sensitivity, reproducibility and reliability relative to alternative approaches. This new method offers advantages in terms of small electrolyte volume, and simple, fast, quantitative and in-situ measurement.

  6. Learning accurate very fast decision trees from uncertain data streams

    NASA Astrophysics Data System (ADS)

    Liang, Chunquan; Zhang, Yang; Shi, Peng; Hu, Zhengguo

    2015-12-01

    Most existing works on data stream classification assume the streaming data is precise and definite. Such assumption, however, does not always hold in practice, since data uncertainty is ubiquitous in data stream applications due to imprecise measurement, missing values, privacy protection, etc. The goal of this paper is to learn accurate decision tree models from uncertain data streams for classification analysis. On the basis of very fast decision tree (VFDT) algorithms, we proposed an algorithm for constructing an uncertain VFDT tree with classifiers at tree leaves (uVFDTc). The uVFDTc algorithm can exploit uncertain information effectively and efficiently in both the learning and the classification phases. In the learning phase, it uses Hoeffding bound theory to learn from uncertain data streams and yield fast and reasonable decision trees. In the classification phase, at tree leaves it uses uncertain naive Bayes (UNB) classifiers to improve the classification performance. Experimental results on both synthetic and real-life datasets demonstrate the strong ability of uVFDTc to classify uncertain data streams. The use of UNB at tree leaves has improved the performance of uVFDTc, especially the any-time property, the benefit of exploiting uncertain information, and the robustness against uncertainty.
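
    For reference, the Hoeffding bound underlying VFDT-style split decisions states that, with probability 1 − δ, the true mean of a random variable with range R differs from the mean observed over n samples by at most ε = sqrt(R² ln(1/δ) / (2n)); a split is made once the observed difference in split quality between the two best attributes exceeds ε. A minimal sketch:

    ```python
    # Hoeffding bound used in VFDT-style trees:
    #   eps = sqrt(R^2 * ln(1/delta) / (2 * n))
    # Split when the observed difference in split quality between the best and the
    # second-best attribute exceeds eps.
    import math

    def hoeffding_bound(value_range, delta, n):
        return math.sqrt(value_range**2 * math.log(1.0 / delta) / (2.0 * n))

    def should_split(best_gain, second_gain, value_range, delta, n):
        return (best_gain - second_gain) > hoeffding_bound(value_range, delta, n)

    # e.g. information gain bounded by 1 bit for a 2-class problem
    print(should_split(best_gain=0.30, second_gain=0.18, value_range=1.0, delta=1e-6, n=500))
    ```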

  7. Fast and accurate denoising method applied to very high resolution optical remote sensing images

    NASA Astrophysics Data System (ADS)

    Masse, Antoine; Lefèvre, Sébastien; Binet, Renaud; Artigues, Stéphanie; Lassalle, Pierre; Blanchet, Gwendoline; Baillarin, Simon

    2017-10-01

    Restoration of Very High Resolution (VHR) optical Remote Sensing Images (RSIs) is critical and leads to the problem of removing instrumental noise while keeping the integrity of relevant information. Improving denoising in an image processing chain implies increasing image quality and improving the performance of all following tasks operated by experts (photo-interpretation, cartography, etc.) or by algorithms (land cover mapping, change detection, 3D reconstruction, etc.). In a context of large industrial VHR image production, the selected denoising method should optimize accuracy and robustness, preserving relevant information and saliency, as well as rapidity, owing to the huge amount of data acquired and/or archived. Very recent research in image processing has led to a fast and accurate algorithm called Non Local Bayes (NLB) that we propose to adapt and optimize for VHR RSIs. This method is well suited for mass production thanks to its best trade-off between accuracy and computational complexity compared to other state-of-the-art methods. NLB is based on a simple principle: similar structures in an image have similar noise distributions and thus can be denoised with the same noise estimation. In this paper, we describe the algorithm's operations and performance in detail, and analyze parameter sensitivities on various typical real areas observed in VHR RSIs.

  8. Fast and Accurate Exhaled Breath Ammonia Measurement

    PubMed Central

    Solga, Steven F.; Mudalel, Matthew L.; Spacek, Lisa A.; Risby, Terence H.

    2014-01-01

    This exhaled breath ammonia method uses a fast and highly sensitive spectroscopic method known as quartz enhanced photoacoustic spectroscopy (QEPAS) that uses a quantum cascade based laser. The monitor is coupled to a sampler that measures mouth pressure and carbon dioxide. The system is temperature controlled and specifically designed to address the reactivity of this compound. The sampler provides immediate feedback to the subject and the technician on the quality of the breath effort. Together with the quick response time of the monitor, this system is capable of accurately measuring exhaled breath ammonia representative of deep lung systemic levels. Because the system is easy to use and produces real time results, it has enabled experiments to identify factors that influence measurements. For example, mouth rinse and oral pH reproducibly and significantly affect results and therefore must be controlled. Temperature and mode of breathing are other examples. As our understanding of these factors evolves, error is reduced, and clinical studies become more meaningful. This system is very reliable and individual measurements are inexpensive. The sampler is relatively inexpensive and quite portable, but the monitor is neither. This limits options for some clinical studies and provides a rationale for future innovations. PMID:24962141

  9. A fast and accurate dihedral interpolation loop subdivision scheme

    NASA Astrophysics Data System (ADS)

    Shi, Zhuo; An, Yalei; Wang, Zhongshuai; Yu, Ke; Zhong, Si; Lan, Rushi; Luo, Xiaonan

    2018-04-01

    In this paper, we propose a fast and accurate dihedral interpolation Loop subdivision scheme for subdivision surfaces based on triangular meshes. In order to solve the problem of surface shrinkage, we keep the limit condition unchanged, which is important. Extraordinary vertices are handled using modified Butterfly rules. Subdivision schemes are computationally costly as the number of faces grows exponentially at higher levels of subdivision. To address this problem, our approach is to use local surface information to adaptively refine the model. This is achieved simply by changing the threshold value of the dihedral angle parameter, i.e., the angle between the normals of a triangular face and its adjacent faces. We then demonstrate the effectiveness of the proposed method for various 3D graphic triangular meshes, and extensive experimental results show that it can match or exceed the expected results at lower computational cost.
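
    The adaptivity criterion described above hinges on the dihedral angle between a triangular face and each adjacent face, computed from their normals. A minimal sketch of that computation and of the threshold test follows; the threshold value used here is an arbitrary assumption.

    ```python
    # Sketch: dihedral angle between two triangles sharing an edge, computed from
    # their (consistently wound) unit normals, plus the refinement threshold test.
    import numpy as np

    def face_normal(a, b, c):
        n = np.cross(b - a, c - a)
        return n / np.linalg.norm(n)

    def dihedral_angle_deg(tri1, tri2):
        cosang = np.clip(np.dot(face_normal(*tri1), face_normal(*tri2)), -1.0, 1.0)
        return np.degrees(np.arccos(cosang))  # 0 deg = coplanar faces

    def needs_refinement(tri1, tri2, threshold_deg=15.0):  # threshold is an assumption
        return dihedral_angle_deg(tri1, tri2) > threshold_deg

    t1 = [np.array(p, float) for p in [(0, 0, 0), (1, 0, 0), (0, 1, 0)]]
    t2 = [np.array(p, float) for p in [(1, 0, 0), (1, 1, 0.4), (0, 1, 0)]]  # shares an edge with t1
    print(dihedral_angle_deg(t1, t2), needs_refinement(t1, t2))  # ~29 deg -> refine
    ```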

  10. Magnetic gaps in organic tri-radicals: From a simple model to accurate estimates.

    PubMed

    Barone, Vincenzo; Cacelli, Ivo; Ferretti, Alessandro; Prampolini, Giacomo

    2017-03-14

    The calculation of the energy gap between the magnetic states of organic poly-radicals still represents a challenging playground for quantum chemistry, and high-level techniques are required to obtain accurate estimates. On these grounds, the aim of the present study is twofold. On the one hand, it shows that, thanks to recent algorithmic and technical improvements, we are able to compute reliable quantum mechanical results for systems of current fundamental and technological interest. On the other hand, proper parameterization of a simple Hubbard Hamiltonian allows for a sound rationalization of magnetic gaps in terms of basic physical effects, unraveling the role played by electron delocalization, Coulomb repulsion, and effective exchange in tuning the magnetic character of the ground state. As case studies, we have chosen three prototypical organic tri-radicals, namely, 1,3,5-trimethylenebenzene, 1,3,5-tridehydrobenzene, and 1,2,3-tridehydrobenzene, which differ in either geometric or electronic structure. After discussing the differences among the three species and their consequences on the magnetic properties in terms of the simple model mentioned above, accurate and reliable values for the energy gap between the lowest quartet and doublet states are computed by means of the so-called difference dedicated configuration interaction (DDCI) technique, and the final results are discussed and compared to both available experimental and computational estimates.

  11. Simple, Fast, and Sensitive Method for Quantification of Tellurite in Culture Media▿

    PubMed Central

    Molina, Roberto C.; Burra, Radhika; Pérez-Donoso, José M.; Elías, Alex O.; Muñoz, Claudia; Montes, Rebecca A.; Chasteen, Thomas G.; Vásquez, Claudio C.

    2010-01-01

    A fast, simple, and reliable chemical method for tellurite quantification is described. The procedure is based on the NaBH4-mediated reduction of TeO₃²⁻ followed by the spectrophotometric determination of elemental tellurium in solution. The method is highly reproducible, is stable at different pH values, and exhibits linearity over a broad range of tellurite concentrations. PMID:20525868

  12. Progress in fast, accurate multi-scale climate simulations

    DOE PAGES

    Collins, W. D.; Johansen, H.; Evans, K. J.; ...

    2015-06-01

    We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth with these computational improvements include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allowing more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures such as many-core processors and GPUs. As a result, approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.

  13. Fast and accurate Voronoi density gridding from Lagrangian hydrodynamics data

    NASA Astrophysics Data System (ADS)

    Petkova, Maya A.; Laibe, Guillaume; Bonnell, Ian A.

    2018-01-01

    Voronoi grids have been successfully used to represent density structures of gas in astronomical hydrodynamics simulations. While some codes are explicitly built around using a Voronoi grid, others, such as Smoothed Particle Hydrodynamics (SPH), use particle-based representations and can benefit from constructing a Voronoi grid for post-processing their output. So far, calculating the density of each Voronoi cell from SPH data has been done numerically, which is both slow and potentially inaccurate. This paper proposes an alternative analytic method, which is fast and accurate. We derive an expression for the integral of a cubic spline kernel over the volume of a Voronoi cell and link it to the density of the cell. Mass conservation is ensured rigorously by the procedure. The method can be applied more broadly to integrate a spherically symmetric polynomial function over the volume of a random polyhedron.
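
    For readers unfamiliar with the kernel in question, below is a minimal sketch of the standard 3D cubic spline (M4) SPH kernel, together with a numerical check of its normalisation. The paper's contribution, the closed-form integral of this kernel over a Voronoi cell, is not reproduced here.

    ```python
    # Standard 3D cubic spline (M4) SPH kernel with compact support 2h, normalised so
    # that its volume integral is 1.
    import numpy as np

    def cubic_spline_w(r, h):
        q = np.asarray(r, dtype=float) / h
        sigma = 1.0 / (np.pi * h**3)
        w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
            np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))
        return sigma * w

    # Quick check of the normalisation: 4*pi * int_0^{2h} W(r) r^2 dr should be ~1
    r = np.linspace(0.0, 2.0, 100_001)
    print(np.trapz(4.0 * np.pi * cubic_spline_w(r, 1.0) * r**2, r))
    ```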

  14. Fast and accurate grid representations for atom-based docking with partner flexibility.

    PubMed

    de Vries, Sjoerd J; Zacharias, Martin

    2017-06-30

    Macromolecular docking methods can broadly be divided into geometric and atom-based methods. Geometric methods use fast algorithms that operate on simplified, grid-like molecular representations, while atom-based methods are more realistic and flexible, but far less efficient. Here, a hybrid approach of grid-based and atom-based docking is presented, combining precalculated grid potentials with neighbor lists for fast and accurate calculation of atom-based intermolecular energies and forces. The grid representation is compatible with simultaneous multibody docking and can tolerate considerable protein flexibility. When implemented in our docking method ATTRACT, grid-based docking was found to be ∼35x faster. With the OPLSX forcefield instead of the ATTRACT coarse-grained forcefield, the average speed improvement was >100x. Grid-based representations may allow atom-based docking methods to explore large conformational spaces with many degrees of freedom, such as multiple macromolecules including flexibility. This increases the domain of biological problems to which docking methods can be applied.
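
    The abstract gives no implementation detail, but the core ingredient of any grid-based scoring scheme of this kind is interpolating a precomputed potential at each atom position. A generic trilinear-interpolation sketch is shown below; it illustrates the idea only and is not ATTRACT code.

    ```python
    # Generic sketch of grid-based energy evaluation (not ATTRACT code): a potential
    # precomputed on a regular 3D grid is trilinearly interpolated at atom positions.
    import numpy as np

    def trilinear(grid, origin, spacing, point):
        """grid: (nx, ny, nz) potential values; point: (x, y, z) in the same frame."""
        f = (np.asarray(point) - origin) / spacing            # fractional grid coords
        i0 = np.clip(np.floor(f).astype(int), 0, np.array(grid.shape) - 2)
        t = f - i0                                            # weights in [0, 1]
        c = grid[i0[0]:i0[0] + 2, i0[1]:i0[1] + 2, i0[2]:i0[2] + 2]
        wx, wy, wz = (np.array([1 - s, s]) for s in t)
        return np.einsum("i,j,k,ijk->", wx, wy, wz, c)

    def grid_energy(grid, origin, spacing, atom_coords):
        return sum(trilinear(grid, origin, spacing, xyz) for xyz in atom_coords)

    grid = np.random.rand(10, 10, 10)
    print(grid_energy(grid, origin=np.zeros(3), spacing=1.0, atom_coords=[(2.3, 4.7, 5.1)]))
    ```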

  15. Flexible, fast and accurate sequence alignment profiling on GPGPU with PaSWAS.

    PubMed

    Warris, Sven; Yalcin, Feyruz; Jackson, Katherine J L; Nap, Jan Peter

    2015-01-01

    To obtain large-scale sequence alignments in a fast and flexible way is an important step in the analyses of next generation sequencing data. Applications based on the Smith-Waterman (SW) algorithm are often either not fast enough, limited to dedicated tasks or not sufficiently accurate due to statistical issues. Current SW implementations that run on graphics hardware do not report the alignment details necessary for further analysis. With the Parallel SW Alignment Software (PaSWAS) it is possible (a) to have easy access to the computational power of NVIDIA-based general purpose graphics processing units (GPGPUs) to perform high-speed sequence alignments, and (b) retrieve relevant information such as score, number of gaps and mismatches. The software reports multiple hits per alignment. The added value of the new SW implementation is demonstrated with two test cases: (1) tag recovery in next generation sequence data and (2) isotype assignment within an immunoglobulin 454 sequence data set. Both cases show the usability and versatility of the new parallel Smith-Waterman implementation.
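
    As a point of reference for what PaSWAS parallelizes on the GPU, here is a plain single-threaded Smith-Waterman local-alignment scoring sketch with linear gap penalties. PaSWAS itself runs on GPGPUs and reports per-alignment details such as gap and mismatch counts, which this toy version does not.

    ```python
    # Plain CPU reference sketch of Smith-Waterman local alignment scoring with
    # linear gap penalties (scoring only; no traceback or alignment details).
    import numpy as np

    def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-2):
        H = np.zeros((len(a) + 1, len(b) + 1))
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                diag = H[i - 1, j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                H[i, j] = max(0, diag, H[i - 1, j] + gap, H[i, j - 1] + gap)
        return H.max()  # best local alignment score

    print(smith_waterman_score("ACACACTA", "AGCACACA"))
    ```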

  16. Simple and accurate wavemeter implemented with a polarization interferometer.

    PubMed

    Dimmick, T E

    1997-12-20

    A simple and accurate wavemeter for measuring the wavelength of monochromatic light is described. The device uses the wavelength-dependent phase lag between the principal polarization states of a length of birefringent material (retarder) as the basis for the measurement of the optical wavelength. The retarder is sandwiched between a polarizer and a polarizing beam splitter and is oriented such that its principal axes are at 45 deg to the axis of the polarizer and the principal axes of the beam splitter. As a result of the disparity in propagation velocities between the principal polarization states of the retarder, the ratio of the optical power exiting the two ports of the polarizing beam splitter is wavelength dependent. If the input wavelength is known to be within a specified range, the measurement of the power ratio uniquely determines the input wavelength. The device offers the advantage of trading wavelength coverage for increased resolution simply through the choice of the retarder length. Implementations of the device employing both bulk-optic components and fiber-optic components are described, and the results of a laboratory test of a fiber-optic prototype are presented. The prototype had a wavelength accuracy of ±0.03 nm.

  17. Cross hole GPR traveltime inversion using a fast and accurate neural network as a forward model

    NASA Astrophysics Data System (ADS)

    Mejer Hansen, Thomas

    2017-04-01

    Probabilistic formulated inverse problems can be solved using Monte Carlo based sampling methods. In principle both advanced prior information, such as based on geostatistics, and complex non-linear forward physical models can be considered. However, in practice these methods can be associated with huge computational costs that in practice limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical response of some earth model has to be evaluated. Here, it is suggested to replace a numerical complex evaluation of the forward problem, with a trained neural network that can be evaluated very fast. This will introduce a modeling error, that is quantified probabilistically such that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first arrival travel time inversion of cross hole ground-penetrating radar (GPR) data. An accurate forward model, based on 2D full-waveform modeling followed by automatic travel time picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the full forward model, and considerably faster, and more accurate, than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of the types of inverse problems that can be solved using non-linear Monte Carlo sampling techniques.

  18. Introducing GAMER: A Fast and Accurate Method for Ray-tracing Galaxies Using Procedural Noise

    NASA Astrophysics Data System (ADS)

    Groeneboom, N. E.; Dahle, H.

    2014-03-01

    We developed a novel approach for fast and accurate ray-tracing of galaxies using procedural noise fields. Our method allows for efficient and realistic rendering of synthetic galaxy morphologies, where individual components such as the bulge, disk, stars, and dust can be synthesized in different wavelengths. These components follow empirically motivated overall intensity profiles but contain an additional procedural noise component that gives rise to complex natural patterns that mimic interstellar dust and star-forming regions. These patterns produce more realistic-looking galaxy images than using analytical expressions alone. The method is fully parallelized and creates accurate high- and low- resolution images that can be used, for example, in codes simulating strong and weak gravitational lensing. In addition to having a user-friendly graphical user interface, the C++ software package GAMER is easy to implement into an existing code.

  19. Compression-based distance (CBD): a simple, rapid, and accurate method for microbiota composition comparison

    PubMed Central

    2013-01-01

    Background Perturbations in intestinal microbiota composition have been associated with a variety of gastrointestinal tract-related diseases. The alleviation of symptoms has been achieved using treatments that alter the gastrointestinal tract microbiota toward that of healthy individuals. Identifying differences in microbiota composition through the use of 16S rRNA gene hypervariable tag sequencing has profound health implications. Current computational methods for comparing microbial communities are usually based on multiple alignments and phylogenetic inference, making them time consuming and requiring exceptional expertise and computational resources. As sequencing data rapidly grows in size, simpler analysis methods are needed to meet the growing computational burdens of microbiota comparisons. Thus, we have developed a simple, rapid, and accurate method, independent of multiple alignments and phylogenetic inference, to support microbiota comparisons. Results We create a metric, called compression-based distance (CBD) for quantifying the degree of similarity between microbial communities. CBD uses the repetitive nature of hypervariable tag datasets and well-established compression algorithms to approximate the total information shared between two datasets. Three published microbiota datasets were used as test cases for CBD as an applicable tool. Our study revealed that CBD recaptured 100% of the statistically significant conclusions reported in the previous studies, while achieving a decrease in computational time required when compared to similar tools without expert user intervention. Conclusion CBD provides a simple, rapid, and accurate method for assessing distances between gastrointestinal tract microbiota 16S hypervariable tag datasets. PMID:23617892

  20. Compression-based distance (CBD): a simple, rapid, and accurate method for microbiota composition comparison.

    PubMed

    Yang, Fang; Chia, Nicholas; White, Bryan A; Schook, Lawrence B

    2013-04-23

    Perturbations in intestinal microbiota composition have been associated with a variety of gastrointestinal tract-related diseases. The alleviation of symptoms has been achieved using treatments that alter the gastrointestinal tract microbiota toward that of healthy individuals. Identifying differences in microbiota composition through the use of 16S rRNA gene hypervariable tag sequencing has profound health implications. Current computational methods for comparing microbial communities are usually based on multiple alignments and phylogenetic inference, making them time consuming and requiring exceptional expertise and computational resources. As sequencing data rapidly grows in size, simpler analysis methods are needed to meet the growing computational burdens of microbiota comparisons. Thus, we have developed a simple, rapid, and accurate method, independent of multiple alignments and phylogenetic inference, to support microbiota comparisons. We create a metric, called compression-based distance (CBD) for quantifying the degree of similarity between microbial communities. CBD uses the repetitive nature of hypervariable tag datasets and well-established compression algorithms to approximate the total information shared between two datasets. Three published microbiota datasets were used as test cases for CBD as an applicable tool. Our study revealed that CBD recaptured 100% of the statistically significant conclusions reported in the previous studies, while achieving a decrease in computational time required when compared to similar tools without expert user intervention. CBD provides a simple, rapid, and accurate method for assessing distances between gastrointestinal tract microbiota 16S hypervariable tag datasets.
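
    The idea lends itself to a very small sketch: concatenate the two datasets, compress with a standard compressor, and normalize. The snippet below implements the classic normalized compression distance (NCD) with zlib as an illustration of the approach; the exact normalization used for CBD in the paper may differ, and gzip-style compressors have a limited window that matters for large files.

    ```python
    # Illustrative sketch in the spirit of CBD: the classic normalized compression
    # distance (NCD) computed with zlib. The paper's exact CBD normalization and
    # choice of compressor may differ.
    import zlib

    def compressed_size(data: bytes) -> int:
        return len(zlib.compress(data, 9))

    def ncd(x: bytes, y: bytes) -> float:
        cx, cy, cxy = compressed_size(x), compressed_size(y), compressed_size(x + y)
        return (cxy - min(cx, cy)) / max(cx, cy)

    seq_a = b"ACGTACGTACGT" * 200
    seq_b = b"ACGTACGAACGT" * 200
    print(ncd(seq_a, seq_a), ncd(seq_a, seq_b))  # identical data -> near 0
    ```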

  1. A Simple yet Accurate Method for Students to Determine Asteroid Rotation Periods from Fragmented Light Curve Data

    ERIC Educational Resources Information Center

    Beare, R. A.

    2008-01-01

    Professional astronomers use specialized software not normally available to students to determine the rotation periods of asteroids from fragmented light curve data. This paper describes a simple yet accurate method based on Microsoft Excel[R] that enables students to find periods in asteroid light curve and other discontinuous time series data of…
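
    The ERIC record describes a spreadsheet-based approach; for readers who prefer a scripted equivalent, the sketch below applies the Lomb-Scargle periodogram (a standard tool for unevenly sampled, fragmented time series) to a synthetic light curve. This is a generic illustration, not the Excel method from the paper.

    ```python
    # Generic scripted alternative (not the Excel method from the paper): a
    # Lomb-Scargle periodogram applied to a fragmented, unevenly sampled light curve.
    import numpy as np
    from scipy.signal import lombscargle

    rng = np.random.default_rng(0)
    true_period = 6.4                                   # hours (synthetic example)
    t = np.sort(rng.uniform(0.0, 48.0, 150))            # uneven sampling over 48 h
    mag = 0.3 * np.sin(2 * np.pi * t / true_period) + 0.02 * rng.standard_normal(t.size)

    periods = np.linspace(2.0, 12.0, 5000)
    ang_freqs = 2 * np.pi / periods
    power = lombscargle(t, mag - mag.mean(), ang_freqs)
    print("best period ~", periods[np.argmax(power)], "hours")
    ```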

  2. Fast and accurate preparation of fatty acid methyl esters by microwave-assisted derivatization in the yeast Saccharomyces cerevisiae.

    PubMed

    Khoomrung, Sakda; Chumnanpuen, Pramote; Jansa-ard, Suwanee; Nookaew, Intawat; Nielsen, Jens

    2012-06-01

    We present a fast and accurate method for preparation of fatty acid methyl esters (FAMEs) using microwave-assisted derivatization of fatty acids present in yeast samples. The esterification of free/bound fatty acids to FAMEs was completed within 5 min, which is 24 times faster than with conventional heating methods. The developed method was validated in two ways: (1) through comparison with a conventional method (hot plate) and (2) through validation with the standard reference material (SRM) 3275-2 omega-3 and omega-6 fatty acids in fish oil (from the National Institute of Standards and Technology, USA). There were no significant differences (P>0.05) in yields of FAMEs in either validation. By performing a simple modification of closed-vessel microwave heating, it was possible to carry out the esterification in Pyrex glass tubes kept inside the closed vessel. In this way, we were able to increase the number of sample preparations to several hundred samples per day, as the time for preparing reused vessels was eliminated. Cell disruption pretreatment steps are not required, since the direct FAME preparation provides equally quantitative results. The new microwave-assisted derivatization method facilitates the preparation of FAMEs directly from yeast cells, but the method is likely to also be applicable for other biological samples.

  3. Introducing GAMER: A fast and accurate method for ray-tracing galaxies using procedural noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Groeneboom, N. E.; Dahle, H., E-mail: nicolaag@astro.uio.no

    2014-03-10

    We developed a novel approach for fast and accurate ray-tracing of galaxies using procedural noise fields. Our method allows for efficient and realistic rendering of synthetic galaxy morphologies, where individual components such as the bulge, disk, stars, and dust can be synthesized in different wavelengths. These components follow empirically motivated overall intensity profiles but contain an additional procedural noise component that gives rise to complex natural patterns that mimic interstellar dust and star-forming regions. These patterns produce more realistic-looking galaxy images than using analytical expressions alone. The method is fully parallelized and creates accurate high- and low-resolution images that can be used, for example, in codes simulating strong and weak gravitational lensing. In addition to having a user-friendly graphical user interface, the C++ software package GAMER is easy to implement into an existing code.

  4. Fast and accurate implementation of Fourier spectral approximations of nonlocal diffusion operators and its applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Du, Qiang, E-mail: qd2125@columbia.edu; Yang, Jiang, E-mail: jyanghkbu@gmail.com

    This work is concerned with the Fourier spectral approximation of various integral differential equations associated with some linear nonlocal diffusion and peridynamic operators under periodic boundary conditions. For radially symmetric kernels, the nonlocal operators under consideration are diagonalizable in the Fourier space so that the main computational challenge is on the accurate and fast evaluation of their eigenvalues or Fourier symbols consisting of possibly singular and highly oscillatory integrals. For a large class of fractional power-like kernels, we propose a new approach based on reformulating the Fourier symbols both as coefficients of a series expansion and solutions of some simple ODE models. We then propose a hybrid algorithm that utilizes both truncated series expansions and high order Runge–Kutta ODE solvers to provide fast evaluation of Fourier symbols in both one and higher dimensional spaces. It is shown that this hybrid algorithm is robust, efficient and accurate. As applications, we combine this hybrid spectral discretization in the spatial variables and the fourth-order exponential time differencing Runge–Kutta for temporal discretization to offer high order approximations of some nonlocal gradient dynamics including nonlocal Allen–Cahn equations, nonlocal Cahn–Hilliard equations, and nonlocal phase-field crystal models. Numerical results show the accuracy and effectiveness of the fully discrete scheme and illustrate some interesting phenomena associated with the nonlocal models.

  5. Accurate evaluation of fast threshold voltage shift for SiC MOS devices under various gate bias stress conditions

    NASA Astrophysics Data System (ADS)

    Sometani, Mitsuru; Okamoto, Mitsuo; Hatakeyama, Tetsuo; Iwahashi, Yohei; Hayashi, Mariko; Okamoto, Dai; Yano, Hiroshi; Harada, Shinsuke; Yonezawa, Yoshiyuki; Okumura, Hajime

    2018-04-01

    We investigated methods of measuring the threshold voltage (Vth) shift of 4H-silicon carbide (SiC) metal–oxide–semiconductor field-effect transistors (MOSFETs) under positive DC, negative DC, and AC gate bias stresses. A fast measurement method for Vth shift under both positive and negative DC stresses revealed the existence of an extremely large Vth shift in the short-stress-time region. We then examined the effect of fast Vth shifts on drain current (Id) changes within a pulse under AC operation. The fast Vth shifts were suppressed by nitridation. However, the Id change within one pulse occurred even in commercially available SiC MOSFETs. The correlation between Id changes within one pulse and Vth shifts measured by a conventional method is weak. Thus, a fast and in situ measurement method is indispensable for the accurate evaluation of Id changes under AC operation.

  6. Highly accurate and fast optical penetration-based silkworm gender separation system

    NASA Astrophysics Data System (ADS)

    Kamtongdee, Chakkrit; Sumriddetchkajorn, Sarun; Chanhorm, Sataporn

    2015-07-01

    Based on our research work over the last five years, this paper highlights our innovative optical sensing system that can identify and separate silkworm gender, making it highly suitable for the sericulture industry. The key idea relies on our proposed optical penetration concepts, which, once combined with simple image processing operations, lead to high accuracy in identifying silkworm gender. Inside the system, there are electronic and mechanical parts that assist in controlling the overall system operation, processing the optical signal, and separating the female from the male silkworm pupae. With the current system performance, we achieve an accuracy of more than 95% in identifying the gender of silkworm pupae, at an average system operational speed of 30 silkworm pupae/minute. Three of our systems are already in operation at Thailand's Queen Sirikit Sericulture Centers.

  7. Fast and accurate phylogeny reconstruction using filtered spaced-word matches

    PubMed Central

    Leimeister, Chris-André; Sohrabi-Jahromi, Salma; Morgenstern, Burkhard

    2017-01-01

    Motivation: Word-based or ‘alignment-free’ algorithms are increasingly used for phylogeny reconstruction and genome comparison, since they are much faster than traditional approaches that are based on full sequence alignments. Existing alignment-free programs, however, are less accurate than alignment-based methods. Results: We propose Filtered Spaced Word Matches (FSWM), a fast alignment-free approach to estimate phylogenetic distances between large genomic sequences. For a pre-defined binary pattern of match and don’t-care positions, FSWM rapidly identifies spaced word-matches between input sequences, i.e. gap-free local alignments with matching nucleotides at the match positions and with mismatches allowed at the don’t-care positions. We then estimate the number of nucleotide substitutions per site by considering the nucleotides aligned at the don’t-care positions of the identified spaced-word matches. To reduce the noise from spurious random matches, we use a filtering procedure where we discard all spaced-word matches for which the overall similarity between the aligned segments is below a threshold. We show that our approach can accurately estimate substitution frequencies even for distantly related sequences that cannot be analyzed with existing alignment-free methods; phylogenetic trees constructed with FSWM distances are of high quality. A program run on a pair of eukaryotic genomes of a few hundred Mb each takes a few minutes. Availability and Implementation: The program source code for FSWM including a documentation, as well as the software that we used to generate artificial genome sequences are freely available at http://fswm.gobics.de/ Contact: chris.leimeister@stud.uni-goettingen.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28073754

  8. Fast and accurate phylogeny reconstruction using filtered spaced-word matches.

    PubMed

    Leimeister, Chris-André; Sohrabi-Jahromi, Salma; Morgenstern, Burkhard

    2017-04-01

    Word-based or 'alignment-free' algorithms are increasingly used for phylogeny reconstruction and genome comparison, since they are much faster than traditional approaches that are based on full sequence alignments. Existing alignment-free programs, however, are less accurate than alignment-based methods. We propose Filtered Spaced Word Matches (FSWM), a fast alignment-free approach to estimate phylogenetic distances between large genomic sequences. For a pre-defined binary pattern of match and don't-care positions, FSWM rapidly identifies spaced word-matches between input sequences, i.e. gap-free local alignments with matching nucleotides at the match positions and with mismatches allowed at the don't-care positions. We then estimate the number of nucleotide substitutions per site by considering the nucleotides aligned at the don't-care positions of the identified spaced-word matches. To reduce the noise from spurious random matches, we use a filtering procedure where we discard all spaced-word matches for which the overall similarity between the aligned segments is below a threshold. We show that our approach can accurately estimate substitution frequencies even for distantly related sequences that cannot be analyzed with existing alignment-free methods; phylogenetic trees constructed with FSWM distances are of high quality. A program run on a pair of eukaryotic genomes of a few hundred Mb each takes a few minutes. The program source code for FSWM including a documentation, as well as the software that we used to generate artificial genome sequences are freely available at http://fswm.gobics.de/. chris.leimeister@stud.uni-goettingen.de. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press.
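
    A toy sketch of the mechanism (the pattern, scoring and filtering used by the real FSWM program are more elaborate): index spaced words at the '1' positions of a binary pattern, collect matches between two sequences, and estimate a Jukes-Cantor distance from mismatches at the '0' (don't-care) positions.

        import math
        from collections import defaultdict

        PATTERN = "1100110101"  # '1' = match position, '0' = don't-care position

        def spaced_word(seq, pos, pattern=PATTERN):
            return "".join(seq[pos + k] for k, c in enumerate(pattern) if c == "1")

        def spaced_word_matches(s1, s2, pattern=PATTERN):
            index = defaultdict(list)
            for i in range(len(s1) - len(pattern) + 1):
                index[spaced_word(s1, i, pattern)].append(i)
            for j in range(len(s2) - len(pattern) + 1):
                for i in index.get(spaced_word(s2, j, pattern), []):
                    yield i, j

        def jukes_cantor_distance(s1, s2, pattern=PATTERN):
            dontcare = [k for k, c in enumerate(pattern) if c == "0"]
            mismatches = total = 0
            for i, j in spaced_word_matches(s1, s2, pattern):
                # The real FSWM discards spurious matches by an overall
                # similarity score before this counting step.
                for k in dontcare:
                    total += 1
                    mismatches += s1[i + k] != s2[j + k]
            p = mismatches / total if total else 0.0
            return -0.75 * math.log(1.0 - 4.0 * p / 3.0)

        print(jukes_cantor_distance("ACGTACGGTACGTTACGATCGAT" * 5,
                                    "ACGTACGGTACGTAACGATCGAT" * 5))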

  9. Fast and accurate quantum molecular dynamics of dense plasmas across temperature regimes

    DOE PAGES

    Sjostrom, Travis; Daligault, Jerome

    2014-10-10

    Here, we develop and implement a new quantum molecular dynamics approximation that allows fast and accurate simulations of dense plasmas from cold to hot conditions. The method is based on a carefully designed orbital-free implementation of density functional theory. The results for hydrogen and aluminum are in very good agreement with Kohn-Sham (orbital-based) density functional theory and path integral Monte Carlo calculations for microscopic features such as the electron density as well as the equation of state. The present approach does not scale with temperature and hence extends to higher temperatures than is accessible in the Kohn-Sham method and lower temperatures than is accessible by path integral Monte Carlo calculations, while being significantly less computationally expensive than either of those two methods.

  10. Simple and fast PO-CL method for the evaluation of antioxidant capacity of hydrophilic and hydrophobic antioxidants.

    PubMed

    Zargoosh, Kiomars; Ghayeb, Yousef; Azmoon, Behnaz; Qandalee, Mohammad

    2013-08-01

    A simple and fast procedure is described for evaluating the antioxidant activity of hydrophilic and hydrophobic compounds by using the peroxyoxalate-chemiluminescence (PO-CL) reaction of Bis(2,4,6-trichlorophenyl) oxalate (TCPO) with hydrogen peroxide in the presence of di(tert-butyl)2-(tert-butylamino)-5-[(E)-2-phenyl-1-ethenyl]3,4-furandicarboxylate as a highly fluorescent fluorophore. The IC50 values of the well-known antioxidants were calculated and the results were expressed as gallic equivalent antioxidant capacity (GEAC). It was found that the proposed method is free of physical quenching and oxidant interference; for this reason, the proposed method is able to accurately determine the scavenging activity of the antioxidants toward free radicals. Finally, the proposed method was applied to the evaluation of antioxidant activity of complex real samples such as soybean oil and sunflower oil (as hydrophobic samples) and honey (as a hydrophilic sample). To the best of our knowledge, this is the first time that total antioxidant activity can be determined directly in soybean oil, sunflower oil and honey (not in their extracts) using PO-CL reactions. Copyright © 2013 Elsevier B.V. All rights reserved.

  11. Simple and fast screening of G-quadruplex ligands with electrochemical detection system.

    PubMed

    Fan, Qiongxuan; Li, Chao; Tao, Yaqin; Mao, Xiaoxia; Li, Genxi

    2016-11-01

    Small molecules that may facilitate and stabilize the formation of G-quadruplexes can be used for cancer treatments, because the G-quadruplex structure can inhibit the activity of telomerase, an enzyme over-expressed in many cancer cells. Therefore, there is considerable interest in developing a simple and high-performance method for screening small molecules binding to G-quadruplex. Here, we have designed a simple electrochemical approach to screen such ligands based on the fact that the formation and stabilization of G-quadruplex by ligand may inhibit electron transfer of redox species to electrode surface. As a proof-of-concept study, two types of classical G-quadruplex ligands, TMPyP4 and BRACO-19, are studied in this work, which demonstrates that this method is fast and robust and it may be applied to screen G-quadruplex ligands for anticancer drugs testing and design in the future. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Fast and accurate estimation of the covariance between pairwise maximum likelihood distances.

    PubMed

    Gil, Manuel

    2014-01-01

    Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of the mean squared error.

  13. Fast and accurate estimation of the covariance between pairwise maximum likelihood distances

    PubMed Central

    2014-01-01

    Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of the mean squared error. PMID:25279263

  14. Fast and accurate focusing analysis of large photon sieve using pinhole ring diffraction model.

    PubMed

    Liu, Tao; Zhang, Xin; Wang, Lingjie; Wu, Yanxiong; Zhang, Jizhen; Qu, Hemeng

    2015-06-10

    In this paper, we develop a pinhole ring diffraction model for the focusing analysis of a large photon sieve. Instead of analyzing individual pinholes, we discuss the focusing of all of the pinholes in a single ring. An explicit equation for the diffracted field of an individual pinhole ring is proposed. We investigate the validity range of this generalized model and analytically describe the sufficient conditions for the validity of the pinhole ring diffraction model. A practical example and investigation reveal the high accuracy of the pinhole ring diffraction model. This simulation method can be used for fast and accurate focusing analysis of a large photon sieve.

  15. Coarse-Graining Polymer Field Theory for Fast and Accurate Simulations of Directed Self-Assembly

    NASA Astrophysics Data System (ADS)

    Liu, Jimmy; Delaney, Kris; Fredrickson, Glenn

    To design effective manufacturing processes using polymer directed self-assembly (DSA), the semiconductor industry benefits greatly from having a complete picture of stable and defective polymer configurations. Field-theoretic simulations are an effective way to study these configurations and predict defect populations. Self-consistent field theory (SCFT) is a particularly successful theory for studies of DSA. Although other models exist that are faster to simulate, these models are phenomenological or derived through asymptotic approximations, often leading to a loss of accuracy relative to SCFT. In this study, we employ our recently-developed method to produce an accurate coarse-grained field theory for diblock copolymers. The method uses a force- and stress-matching strategy to map output from SCFT simulations into parameters for an optimized phase field model. This optimized phase field model is just as fast as existing phenomenological phase field models, but makes more accurate predictions of polymer self-assembly, both in bulk and in confined systems. We study the performance of this model under various conditions, including its predictions of domain spacing, morphology and defect formation energies. Samsung Electronics.

  16. A simple and fast representation space for classifying complex time series

    NASA Astrophysics Data System (ADS)

    Zunino, Luciano; Olivares, Felipe; Bariviera, Aurelio F.; Rosso, Osvaldo A.

    2017-03-01

    In the context of time series analysis, considerable effort has been directed towards the implementation of efficient discriminating statistical quantifiers. Very recently, a simple and fast representation space has been introduced, namely the number of turning points versus the Abbe value. It is able to separate time series from stationary and non-stationary processes with long-range dependences. In this work we show that this bidimensional approach is useful for distinguishing complex time series: different sets of financial and physiological data are efficiently discriminated. Additionally, a multiscale generalization that takes into account the multiple time scales often involved in complex systems has also been proposed. This multiscale analysis is essential to achieve higher discriminative power between physiological time series in health and disease.
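
    A small sketch of the two quantifiers that span this representation space; the exact normalization of the Abbe value in the paper may differ slightly (here it is taken as half the mean squared successive difference divided by the variance, which is close to 1 for white noise and near 0 for smooth signals).

        import numpy as np

        def turning_points(x):
            d = np.diff(x)
            return int(np.sum(d[:-1] * d[1:] < 0))  # sign changes of the slope

        def abbe_value(x):
            x = np.asarray(x, dtype=float)
            return 0.5 * np.mean(np.diff(x) ** 2) / np.var(x)

        rng = np.random.default_rng(0)
        noise = rng.normal(size=1000)            # irregular: many turning points
        walk = np.cumsum(rng.normal(size=1000))  # smoother: fewer turning points
        for series in (noise, walk):
            print(turning_points(series), round(abbe_value(series), 3))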

  17. Fast and Accurate Hybrid Stream PCRTM-SOLAR Radiative Transfer Model for Reflected Solar Spectrum Simulation in the Cloudy Atmosphere

    NASA Technical Reports Server (NTRS)

    Yang, Qiguang; Liu, Xu; Wu, Wan; Kizer, Susan; Baize, Rosemary R.

    2016-01-01

    A hybrid stream PCRTM-SOLAR model has been proposed for fast and accurate radiative transfer simulation. It calculates the reflected solar (RS) radiances in a fast, coarse way and then, with the help of a pre-saved matrix, transforms the results to obtain the desired highly accurate RS spectrum. The methodology has been demonstrated with the hybrid stream discrete ordinate (HSDO) radiative transfer (RT) model. The HSDO method calculates the monochromatic radiances using a 4-stream discrete ordinate method, where only a small number of monochromatic radiances are simulated with both the 4-stream and a larger N-stream (N = 16) discrete ordinate RT algorithm. The accuracy of the obtained channel radiance is comparable to the result from the N-stream moderate resolution atmospheric transmission version 5 (MODTRAN5). The root-mean-square errors are usually less than 5×10^-4 mW/sq cm/sr/cm. The computational speed is three to four orders of magnitude faster than the medium-speed correlated-k option of MODTRAN5. This method is very efficient for simulating thousands of RS spectra under multi-layer cloud/aerosol and solar radiation conditions for climate change studies and numerical weather prediction applications.

  18. Toward accurate and fast iris segmentation for iris biometrics.

    PubMed

    He, Zhaofeng; Tan, Tieniu; Sun, Zhenan; Qiu, Xianchao

    2009-09-01

    Iris segmentation is an essential module in iris recognition because it defines the effective image region used for subsequent processing such as feature extraction. Traditional iris segmentation methods often involve an exhaustive search of a large parameter space, which is time consuming and sensitive to noise. To address these problems, this paper presents a novel algorithm for accurate and fast iris segmentation. After efficient reflection removal, an Adaboost-cascade iris detector is first built to extract a rough position of the iris center. Edge points of iris boundaries are then detected, and an elastic model named pulling and pushing is established. Under this model, the center and radius of the circular iris boundaries are iteratively refined in a way driven by the restoring forces of Hooke's law. Furthermore, a smoothing spline-based edge fitting scheme is presented to deal with noncircular iris boundaries. After that, eyelids are localized via edge detection followed by curve fitting. The novelty here is the adoption of a rank filter for noise elimination and a histogram filter for tackling the shape irregularity of eyelids. Finally, eyelashes and shadows are detected via a learned prediction model. This model provides an adaptive threshold for eyelash and shadow detection by analyzing the intensity distributions of different iris regions. Experimental results on three challenging iris image databases demonstrate that the proposed algorithm outperforms state-of-the-art methods in both accuracy and speed.

  19. Simple and fast orotracheal intubation procedure in rats.

    PubMed

    Tomasello, Giovanni; Damiani, Francesco; Cassata, Giovanni; Palumbo, Vincenzo Davide; Sinagra, Emanuele; Damiani, Provvidenza; Bruno, Antonino; Cicero, Luca; Cupido, Francesco; Carini, Francesco; Lo Monte, Attilio Ignazio

    2016-05-06

    Endotracheal intubation in the rat is difficult because of the extremely small size of the anatomical structures (oral cavity, epiglottis and vocal cords), the small inlet for an endotracheal tube and the lack of proper technical instruments. In this study we used seventy rats weighing 400-500 g. The equipment needed for the intubation was an operating table, a length of cotton, a cotton tip, an orotracheal tube, neonatal laryngoscope blades, a KTR4 small animal ventilator and isoflurane for inhalation anaesthesia. Premedication was carried out with medetomidine hydrochloride (1 mg/mL); then, using a closed glass chamber, a mixture of oxygen and isoflurane was administered. By means of a neonatal laryngoscope the orotracheal tube was advanced into the oral cavity until the wire guide was visualized through the vocal cords; then it was passed through them. The tube was introduced directly into the larynx over the wire guide; subsequently, the guide was removed and the tube placed into the trachea. Breathing was confirmed using a glove, cut at the end of a finger, simulating a small balloon. We achieved a fast and simple orotracheal intubation in all animals employed. We believe that our procedure is easier and faster than those previously reported in the scientific literature.

  20. Fast and Accurate Poisson Denoising With Trainable Nonlinear Diffusion.

    PubMed

    Feng, Wensen; Qiao, Peng; Chen, Yunjin

    2018-06-01

    The degradation of the acquired signal by Poisson noise is a common problem for various imaging applications, such as medical imaging, night vision, and microscopy. Up to now, many state-of-the-art Poisson denoising techniques mainly concentrate on achieving utmost performance, with little consideration for computational efficiency. Therefore, in this paper we aim to propose an efficient Poisson denoising model with both high computational efficiency and recovery quality. To this end, we exploit the newly developed trainable nonlinear reaction diffusion (TNRD) model, which has proven to be an extremely fast image restoration approach with performance surpassing recent state-of-the-art methods. However, the straightforward direct gradient descent employed in the original TNRD-based denoising task is not applicable in this paper. To solve this problem, we resort to the proximal gradient descent method. We retrain the model parameters, including the linear filters and influence functions, by taking into account the Poisson noise statistics, and end up with a well-trained nonlinear diffusion model specialized for Poisson denoising. The trained model provides strongly competitive results against state-of-the-art approaches, while retaining a simple structure and high efficiency. Furthermore, our proposed model comes with an additional advantage: the diffusion process is well-suited for parallel computation on graphics processing units (GPUs). For images of size , our GPU implementation takes less than 0.1 s to produce state-of-the-art Poisson denoising performance.
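
    A hedged sketch of the proximal step that replaces direct gradient descent for Poisson data: the prox of tau*(u - f*log u) has a closed form. The smoothing term below is a crude Laplacian stand-in, not the trained TNRD filters and influence functions, and all parameter values are invented.

        import numpy as np

        def prox_poisson(v, f, tau):
            # closed-form minimizer of 0.5*(u - v)**2 + tau*(u - f*log(u)), elementwise
            return 0.5 * ((v - tau) + np.sqrt((v - tau) ** 2 + 4.0 * tau * f))

        def smoothness_gradient(u):
            # stand-in regularizer gradient: negative discrete (periodic) Laplacian
            lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                   + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
            return -lap

        def denoise(f, tau=0.25, lam=0.5, iters=100):
            u = np.maximum(f.astype(float), 1.0)
            for _ in range(iters):
                u = prox_poisson(u - tau * lam * smoothness_gradient(u), f, tau)
            return u

        rng = np.random.default_rng(1)
        x = np.linspace(0.0, np.pi, 64)
        clean = 20.0 + 30.0 * np.outer(np.sin(x), np.sin(x))   # smooth test image
        noisy = rng.poisson(clean).astype(float)
        print(round(np.abs(noisy - clean).mean(), 2),
              round(np.abs(denoise(noisy) - clean).mean(), 2))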

  1. A fast and accurate frequency estimation algorithm for sinusoidal signal with harmonic components

    NASA Astrophysics Data System (ADS)

    Hu, Jinghua; Pan, Mengchun; Zeng, Zhidun; Hu, Jiafei; Chen, Dixiang; Tian, Wugang; Zhao, Jianqiang; Du, Qingfa

    2016-10-01

    Frequency estimation is a fundamental problem in many applications, such as traditional vibration measurement, power system supervision, and microelectromechanical system sensor control. In this paper, a fast and accurate frequency estimation algorithm is proposed to address the low efficiency of traditional methods. The proposed algorithm consists of coarse and fine frequency estimation steps, and we demonstrate that it is more efficient than conventional searching methods at achieving coarse frequency estimation (locating the peak of the FFT amplitude spectrum) by applying a modified zero-crossing technique. Thus, the proposed estimation algorithm requires fewer hardware and software resources and can achieve even higher efficiency when the experimental data increase. Experimental results with a modulated magnetic signal show that the root mean square error of frequency estimation is below 0.032 Hz with the proposed algorithm, which has lower computational complexity and better global performance than conventional frequency estimation methods.
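
    A generic two-stage estimator in the same spirit (the paper's modified zero-crossing scheme and its treatment of harmonics are more involved than this sketch): a coarse FFT peak search followed by a zero-crossing refinement, demonstrated on a synthetic signal.

        import numpy as np

        def estimate_frequency(x, fs):
            x = np.asarray(x, float) - np.mean(x)
            # Coarse step: locate the FFT amplitude peak.
            spec = np.abs(np.fft.rfft(x))
            f_coarse = np.fft.rfftfreq(len(x), d=1.0 / fs)[np.argmax(spec)]
            # Fine step: average period between linearly interpolated upward zero crossings.
            idx = np.where((x[:-1] < 0) & (x[1:] >= 0))[0]
            t = idx - x[idx] / (x[idx + 1] - x[idx])
            f_fine = fs / np.mean(np.diff(t)) if len(t) > 1 else f_coarse
            return f_coarse, f_fine

        fs, f0 = 1000.0, 50.3
        t = np.arange(0, 1.0, 1.0 / fs)
        signal = np.sin(2 * np.pi * f0 * t) + 0.2 * np.sin(2 * np.pi * 3 * f0 * t)
        print(estimate_frequency(signal, fs))   # roughly (50.0, 50.3)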

  2. Simple and accurate method for determining dissolved inorganic carbon in environmental water by reaction headspace gas chromatography.

    PubMed

    Xie, Wei-Qi; Gong, Yi-Xian; Yu, Kong-Xian

    2018-03-01

    We investigate a simple and accurate method for quantitatively analyzing dissolved inorganic carbon in environmental water by reaction headspace gas chromatography. The neutralization reaction between the inorganic carbon species (i.e. bicarbonate ions and carbonate ions) in environmental water and hydrochloric acid is carried out in a sealed headspace vial, and the carbon dioxide formed from the neutralization reaction, the self-decomposition of carbonic acid, and dissolved carbon dioxide in environmental water is then analyzed by headspace gas chromatography. The data show that the headspace gas chromatography method has good precision (relative standard deviation ≤ 1.63%) and accuracy (relative differences ≤ 5.81% compared with the coulometric titration technique). The headspace gas chromatography method is simple, reliable, and can be well applied in the dissolved inorganic carbon detection in environmental water. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Accurate van der Waals coefficients from density functional theory

    PubMed Central

    Tao, Jianmin; Perdew, John P.; Ruzsinszky, Adrienn

    2012-01-01

    The van der Waals interaction is a weak, long-range correlation, arising from quantum electronic charge fluctuations. This interaction affects many properties of materials. A simple and yet accurate estimate of this effect will facilitate computer simulation of complex molecular materials and drug design. Here we develop a fast approach for accurate evaluation of dynamic multipole polarizabilities and van der Waals (vdW) coefficients of all orders from the electron density and static multipole polarizabilities of each atom or other spherical object, without empirical fitting. Our dynamic polarizabilities (dipole, quadrupole, octupole, etc.) are exact in the zero- and high-frequency limits, and exact at all frequencies for a metallic sphere of uniform density. Our theory predicts dynamic multipole polarizabilities in excellent agreement with more expensive many-body methods, and yields therefrom vdW coefficients C6, C8, C10 for atom pairs with a mean absolute relative error of only 3%. PMID:22205765
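
    For illustration only: with a one-term (London/Unsöld) model of the dynamic dipole polarizability at imaginary frequency, the Casimir-Polder integral for C6 can be evaluated numerically and checked against its closed form. The static polarizability and effective frequency below are hypothetical, roughly hydrogen-like atomic-unit values, not numbers from the paper.

        import numpy as np
        from scipy.integrate import quad

        def alpha_iw(w, alpha0, w1):
            # single-oscillator model of the dynamic polarizability at imaginary frequency i*w
            return alpha0 / (1.0 + (w / w1) ** 2)

        def c6_casimir_polder(alpha0_a, w_a, alpha0_b, w_b):
            integrand = lambda w: alpha_iw(w, alpha0_a, w_a) * alpha_iw(w, alpha0_b, w_b)
            val, _ = quad(integrand, 0.0, np.inf)
            return 3.0 / np.pi * val

        def c6_london(alpha0_a, w_a, alpha0_b, w_b):
            # closed form of the same integral for one-term polarizabilities
            return 1.5 * alpha0_a * alpha0_b * w_a * w_b / (w_a + w_b)

        # hypothetical, hydrogen-like inputs (atomic units)
        print(c6_casimir_polder(4.5, 0.43, 4.5, 0.43))
        print(c6_london(4.5, 0.43, 4.5, 0.43))   # the two values agree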

  4. A machine learning method for fast and accurate characterization of depth-of-interaction gamma cameras

    NASA Astrophysics Data System (ADS)

    Pedemonte, Stefano; Pierce, Larry; Van Leemput, Koen

    2017-11-01

    Measuring the depth-of-interaction (DOI) of gamma photons enables increasing the resolution of emission imaging systems. Several design variants of DOI-sensitive detectors have been recently introduced to improve the performance of scanners for positron emission tomography (PET). However, the accurate characterization of the response of DOI detectors, necessary to accurately measure the DOI, remains an unsolved problem. Numerical simulations are, at the state of the art, imprecise, while measuring the characteristics of DOI detectors directly is hindered by the impossibility of imposing the depth-of-interaction in an experimental set-up. In this article we introduce a machine learning approach for extracting accurate forward models of gamma imaging devices from simple pencil-beam measurements, using a nonlinear dimensionality reduction technique in combination with a finite mixture model. The method is purely data-driven, not requiring simulations, and is applicable to a wide range of detector types. The proposed method was evaluated both in a simulation study and with data acquired using a monolithic gamma camera designed for PET (the cMiCE detector), demonstrating the accurate recovery of the DOI characteristics. The combination of the proposed calibration technique with maximum a posteriori estimation of the coordinates of interaction provided a depth resolution of ≈1.14 mm for the simulated PET detector and ≈1.74 mm for the cMiCE detector. The software and experimental data are made available at http://occiput.mgh.harvard.edu/depthembedding/.

  5. FastChem: An ultra-fast equilibrium chemistry

    NASA Astrophysics Data System (ADS)

    Kitzmann, Daniel; Stock, Joachim

    2018-04-01

    FastChem is an equilibrium chemistry code that calculates the chemical composition of the gas phase for given temperatures and pressures. Written in C++, it is based on a semi-analytic approach, and is optimized for extremely fast and accurate calculations.

  6. A fast and simple bonding method for low cost microfluidic chip fabrication

    NASA Astrophysics Data System (ADS)

    Yin, Zhifu; Zou, Helin

    2018-01-01

    With the development of microstructure fabrication techniques, microfluidic chips are widely used in biological and medical research. Future advances in their commercial applications depend on the mass bonding of microfluidic chips. In this study we present a simple, low-cost, and fast way of bonding microfluidic chips at room temperature. The influence of the bonding pressure on the deformation of the microchannel and adhesive tape was analyzed by numerical simulation. By this method, the microfluidic chip can be fully sealed at low temperature and pressure without using any equipment. The dye water and gas leakage test indicated that the microfluidic chip can be bonded without leakage or blockage, and its bonding strength can reach 0.84 MPa.

  7. Fast algorithms for transforming back and forth between a signed permutation and its equivalent simple permutation.

    PubMed

    Gog, Simon; Bader, Martin

    2008-10-01

    The problem of sorting signed permutations by reversals is a well-studied problem in computational biology. The first polynomial time algorithm was presented by Hannenhalli and Pevzner in 1995. The algorithm was improved several times, and nowadays the most efficient algorithm has a subquadratic running time. Simple permutations played an important role in the development of these algorithms. Although the latest result of Tannier et al. does not require simple permutations, the preliminary version of their algorithm as well as the first polynomial time algorithm of Hannenhalli and Pevzner use the structure of simple permutations. More precisely, the latter algorithms require a precomputation that transforms a permutation into an equivalent simple permutation. To the best of our knowledge, all published algorithms for this transformation have at least a quadratic running time. For further investigations on genome rearrangement problems, the existence of a fast algorithm for the transformation could be crucial. Another important task is the back transformation, i.e., given a sorting sequence for the simple permutation, transforming it into a sorting sequence for the original permutation. Again, the naive approach results in an algorithm with quadratic running time. In this paper, we present a linear time algorithm for transforming a permutation into an equivalent simple permutation, and an O(n log n) algorithm for the back transformation of the sorting sequence.

  8. Accurate analysis and visualization of cardiac (11)C-PIB uptake in amyloidosis with semiautomatic software.

    PubMed

    Kero, Tanja; Lindsjö, Lars; Sörensen, Jens; Lubberink, Mark

    2016-08-01

    (11)C-PIB PET is a promising non-invasive diagnostic tool for cardiac amyloidosis. Semiautomatic analysis of PET data is now available but it is not known how accurate these methods are for amyloid imaging. The aim of this study was to evaluate the feasibility of one semiautomatic software tool for analysis and visualization of (11)C-PIB left ventricular retention index (RI) in cardiac amyloidosis. Patients with systemic amyloidosis and cardiac involvement (n = 10) and healthy controls (n = 5) were investigated with dynamic (11)C-PIB PET. Two observers analyzed the PET studies with semiautomatic software to calculate the left ventricular RI of (11)C-PIB and to create parametric images. The mean RI at 15-25 min from the semiautomatic analysis was compared with RI based on manual analysis and showed comparable values (0.056 vs 0.054 min(-1) for amyloidosis patients and 0.024 vs 0.025 min(-1) in healthy controls; P = .78) and the correlation was excellent (r = 0.98). Inter-reader reproducibility also was excellent (intraclass correlation coefficient, ICC > 0.98). Parametric polarmaps and histograms made visual separation of amyloidosis patients and healthy controls fast and simple. Accurate semiautomatic analysis of cardiac (11)C-PIB RI in amyloidosis patients is feasible. Parametric polarmaps and histograms make visual interpretation fast and simple.

  9. A Simple Transmission Electron Microscopy Method for Fast Thickness Characterization of Suspended Graphene and Graphite Flakes.

    PubMed

    Rubino, Stefano; Akhtar, Sultan; Leifer, Klaus

    2016-02-01

    We present a simple, fast method for thickness characterization of suspended graphene/graphite flakes that is based on transmission electron microscopy (TEM). We derive an analytical expression for the intensity of the transmitted electron beam I0(t), as a function of the specimen thickness t (t < λ, where λ is the absorption constant for graphite). We show that in thin graphite crystals the transmitted intensity is a linear function of t. Furthermore, high-resolution (HR) TEM simulations are performed to obtain λ for a 001 zone axis orientation, in a two-beam case and in a low symmetry orientation. Subsequently, HR (used to determine t) and bright-field (to measure I0(0) and I0(t)) images were acquired to experimentally determine λ. The experimental value measured in a low symmetry orientation matches the calculated value (i.e., λ=225±9 nm). The simulations also show that the linear approximation is valid up to a sample thickness of 3-4 nm regardless of the orientation and up to several tens of nanometers for a low symmetry orientation. When compared with standard techniques for thickness determination of graphene/graphite, the method we propose has the advantage of being simple and fast, requiring only the acquisition of bright-field images.
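
    A minimal numerical sketch, assuming the exponential attenuation I(t) = I(0)·exp(-t/λ) behind the linear thin-specimen limit and the λ ≈ 225 nm value quoted above; the count values are invented.

        import numpy as np

        LAMBDA_NM = 225.0   # absorption constant reported for a low-symmetry orientation

        def thickness_exact(i_t, i_0, lam=LAMBDA_NM):
            return -lam * np.log(i_t / i_0)

        def thickness_linear(i_t, i_0, lam=LAMBDA_NM):
            # valid while t << lambda (a few nm for graphene / thin graphite)
            return lam * (1.0 - i_t / i_0)

        i0 = 1000.0                        # counts through a hole (no specimen)
        for counts in (998.5, 995.0, 985.0):
            print(round(thickness_exact(counts, i0), 2),
                  round(thickness_linear(counts, i0), 2))   # thickness in nm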

  10. Fast and Accurate Prediction of Stratified Steel Temperature During Holding Period of Ladle

    NASA Astrophysics Data System (ADS)

    Deodhar, Anirudh; Singh, Umesh; Shukla, Rishabh; Gautham, B. P.; Singh, Amarendra K.

    2017-04-01

    Thermal stratification of liquid steel in a ladle during the holding period and the teeming operation has a direct bearing on the superheat available at the caster and hence on the caster set points such as casting speed and cooling rates. The changes in the caster set points are typically carried out based on temperature measurements at the end of tundish outlet. Thermal prediction models provide advance knowledge of the influence of process and design parameters on the steel temperature at various stages. Therefore, they can be used in making accurate decisions about the caster set points in real time. However, this requires both fast and accurate thermal prediction models. In this work, we develop a surrogate model for the prediction of thermal stratification using data extracted from a set of computational fluid dynamics (CFD) simulations, pre-determined using design of experiments technique. Regression method is used for training the predictor. The model predicts the stratified temperature profile instantaneously, for a given set of process parameters such as initial steel temperature, refractory heat content, slag thickness, and holding time. More than 96 pct of the predicted values are within an error range of ±5 K (±5 °C), when compared against corresponding CFD results. Considering its accuracy and computational efficiency, the model can be extended for thermal control of casting operations. This work also sets a benchmark for developing similar thermal models for downstream processes such as tundish and caster.

  11. Fast and accurate spectral estimation for online detection of partial broken bar in induction motors

    NASA Astrophysics Data System (ADS)

    Samanta, Anik Kumar; Naha, Arunava; Routray, Aurobinda; Deb, Alok Kanti

    2018-01-01

    In this paper, an online and real-time system is presented for detecting partial broken rotor bar (BRB) of inverter-fed squirrel cage induction motors under light load condition. This system with minor modifications can detect any fault that affects the stator current. A fast and accurate spectral estimator based on the theory of Rayleigh quotient is proposed for detecting the spectral signature of BRB. The proposed spectral estimator can precisely determine the relative amplitude of fault sidebands and has low complexity compared to available high-resolution subspace-based spectral estimators. Detection of low-amplitude fault components has been improved by removing the high-amplitude fundamental frequency using an extended-Kalman based signal conditioner. Slip is estimated from the stator current spectrum for accurate localization of the fault component. Complexity and cost of sensors are minimal as only a single-phase stator current is required. The hardware implementation has been carried out on an Intel i7 based embedded target ported through the Simulink Real-Time. Evaluation of threshold and detectability of faults with different conditions of load and fault severity are carried out with empirical cumulative distribution function.

  12. A simple accurate chest-compression depth gauge using magnetic coils during cardiopulmonary resuscitation

    NASA Astrophysics Data System (ADS)

    Kandori, Akihiko; Sano, Yuko; Zhang, Yuhua; Tsuji, Toshio

    2015-12-01

    This paper describes a new method for calculating chest compression depth and a simple chest-compression gauge for validating the accuracy of the method. The chest-compression gauge has two plates incorporating two magnetic coils, a spring, and an accelerometer. The coils are located at both ends of the spring, and the accelerometer is set on the bottom plate. Waveforms obtained using the magnetic coils (hereafter, "magnetic waveforms"), which are proportional to compression-force waveforms, and the acceleration waveforms were measured at the same time. The weight factor expressing the relationship between the second derivatives of the magnetic waveforms and the measured acceleration waveforms was calculated. An estimated-compression-displacement (depth) waveform was obtained by multiplying the weight factor and the magnetic waveforms. Displacements of two large springs (with similar spring constants) within a thorax and displacements of a cardiopulmonary resuscitation training manikin were measured using the gauge to validate the accuracy of the calculated waveform. A laser-displacement detection system was used to compare the real displacement waveform and the estimated waveform. Intraclass correlation coefficients (ICCs) between the real displacement measured using the laser system and the estimated displacement waveforms were calculated. The estimated displacement error of the compression depth was within 2 mm (<1 standard deviation). All ICCs (two springs and a manikin) were above 0.85 (0.99 in the case of one of the springs). The developed simple chest-compression gauge, based on a new calculation method, provides an accurate compression depth (estimation error < 2 mm).
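
    A hedged numerical sketch of the estimation step described above: fit one weight factor relating the second derivative of the coil ("magnetic") waveform to the accelerometer signal, then scale the magnetic waveform by it to obtain depth. Signal shapes, sampling rate, and noise level are invented for illustration.

        import numpy as np

        def estimate_depth(magnetic, accel, fs):
            d2m = np.gradient(np.gradient(magnetic, 1.0 / fs), 1.0 / fs)
            # least-squares weight factor between d2m and the accelerometer signal
            w = np.dot(d2m, accel) / np.dot(d2m, d2m)
            return w * magnetic

        fs = 200.0                                                 # Hz, assumed
        t = np.arange(0, 5, 1.0 / fs)
        true_depth = 0.025 * (1 - np.cos(2 * np.pi * 1.8 * t))     # ~5 cm at 108 cpm
        magnetic = 40.0 * true_depth                               # coil output (arbitrary units)
        rng = np.random.default_rng(3)
        accel = (np.gradient(np.gradient(true_depth, 1.0 / fs), 1.0 / fs)
                 + rng.normal(0.0, 0.5, t.size))                   # noisy accelerometer
        est = estimate_depth(magnetic, accel, fs)
        print(round(np.max(np.abs(est - true_depth)) * 1000, 2), "mm max error")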

  13. Simple tunnel diode circuit for accurate zero crossing timing

    NASA Technical Reports Server (NTRS)

    Metz, A. J.

    1969-01-01

    A tunnel diode circuit, capable of timing the zero crossing point of bipolar pulses, provides an effective design for a fast crossing detector. It combines a nonlinear load line with the diode to detect the zero crossing of a wide range of input waveshapes.

  14. Fast and Accurate Approximation to Significance Tests in Genome-Wide Association Studies

    PubMed Central

    Zhang, Yu; Liu, Jun S.

    2011-01-01

    Genome-wide association studies commonly involve simultaneous tests of millions of single nucleotide polymorphisms (SNPs) for disease association. The SNPs in nearby genomic regions, however, are often highly correlated due to linkage disequilibrium (LD, a genetic term for correlation). Simple Bonferroni correction for multiple comparisons is therefore too conservative. Permutation tests, which are often employed in practice, are both computationally expensive for genome-wide studies and limited in their scopes. We present an accurate and computationally efficient method, based on Poisson de-clumping heuristics, for approximating genome-wide significance of SNP associations. Compared with permutation tests and other multiple comparison adjustment approaches, our method computes the most accurate and robust p-value adjustments for millions of correlated comparisons within seconds. We demonstrate analytically that the accuracy and the efficiency of our method are nearly independent of the sample size, the number of SNPs, and the scale of p-values to be adjusted. In addition, our method can be easily adapted to estimate the false discovery rate. When applied to genome-wide SNP datasets, we observed highly variable p-value adjustment results evaluated from different genomic regions. The variation in adjustments along the genome, however, is well conserved between the European and the African populations. The p-value adjustments are significantly correlated with LD among SNPs, recombination rates, and SNP densities. Given the large variability of sequence features in the genome, we further discuss a novel approach of using SNP-specific (local) thresholds to detect genome-wide significant associations. This article has supplementary material online. PMID:22140288

  15. Pole Photogrammetry with AN Action Camera for Fast and Accurate Surface Mapping

    NASA Astrophysics Data System (ADS)

    Gonçalves, J. A.; Moutinho, O. F.; Rodrigues, A. C.

    2016-06-01

    High resolution and high accuracy terrain mapping can provide height change detection for studies of erosion, subsidence or land slip. A UAV flying at a low altitude above the ground, with a compact camera, acquires images with resolution appropriate for these change detections. However, there may be situations where different approaches may be needed, either because higher resolution is required or the operation of a drone is not possible. Pole photogrammetry, where a camera is mounted on a pole, pointing to the ground, is an alternative. This paper describes a very simple system of this kind, created for topographic change detection, based on an action camera. These cameras have high quality and very flexible image capture. Although radial distortion is normally high, it can be treated in an auto-calibration process. The system is composed of a light aluminium pole, 4 meters long, with a 12 megapixel GoPro camera. Average ground sampling distance at the image centre is 2.3 mm. The user moves along a path, taking successive photos, with a time lapse of 0.5 or 1 second, and adjusting the speed in order to have an appropriate overlap, with enough redundancy for 3D coordinate extraction. Marked ground control points are surveyed with GNSS for precise georeferencing of the DSM and orthoimage that are created by structure from motion processing software. An average vertical accuracy of 1 cm could be achieved, which is enough for many applications, for example soil erosion studies. The GNSS survey in RTK mode with permanent stations is now very fast (5 seconds per point), which results, together with the image collection, in very fast field work. Since the image resolution is 1/4 cm, improved accuracy can be achieved, if needed, by using a total station for the control point survey, although the field work time increases.
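
    A back-of-the-envelope check of the quoted ground sampling distance; every camera parameter below (sensor width, focal length, image width) is an assumed value for a generic wide-angle action camera, not a specification taken from the paper.

        def ground_sampling_distance(height_m, sensor_width_mm, focal_mm, image_width_px):
            # nadir GSD in mm per pixel from similar triangles
            return height_m * 1000.0 * sensor_width_mm / (focal_mm * image_width_px)

        gsd_mm = ground_sampling_distance(height_m=4.0, sensor_width_mm=6.17,
                                          focal_mm=2.77, image_width_px=4000)
        print(round(gsd_mm, 2), "mm per pixel at nadir")   # on the order of 2 mm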

  16. Hi-Plex for Simple, Accurate, and Cost-Effective Amplicon-based Targeted DNA Sequencing.

    PubMed

    Pope, Bernard J; Hammet, Fleur; Nguyen-Dumont, Tu; Park, Daniel J

    2018-01-01

    Hi-Plex is a suite of methods to enable simple, accurate, and cost-effective highly multiplex PCR-based targeted sequencing (Nguyen-Dumont et al., Biotechniques 58:33-36, 2015). At its core is the principle of using gene-specific primers (GSPs) to "seed" (or target) the reaction and universal primers to "drive" the majority of the reaction. In this manner, effects on amplification efficiencies across the target amplicons can, to a large extent, be restricted to early seeding cycles. Product sizes are defined within a relatively narrow range to enable high-specificity size selection, replication uniformity across target sites (including in the context of fragmented input DNA such as that derived from fixed tumor specimens (Nguyen-Dumont et al., Biotechniques 55:69-74, 2013; Nguyen-Dumont et al., Anal Biochem 470:48-51, 2015), and application of high-specificity genetic variant calling algorithms (Pope et al., Source Code Biol Med 9:3, 2014; Park et al., BMC Bioinformatics 17:165, 2016). Hi-Plex offers a streamlined workflow that is suitable for testing large numbers of specimens without the need for automation.

  17. Fast sweeping method for the factored eikonal equation

    NASA Astrophysics Data System (ADS)

    Fomel, Sergey; Luo, Songting; Zhao, Hongkai

    2009-09-01

    We develop a fast sweeping method for the factored eikonal equation. We decompose the solution of a general eikonal equation as the product of two factors: the first factor is the solution to a simple eikonal equation (such as distance) or a previously computed solution to an approximate eikonal equation; the second factor is a necessary modification/correction. Appropriate discretization and a fast sweeping strategy are designed for the equation of the correction part. The key idea is to enforce the causality of the original eikonal equation during the Gauss-Seidel iterations. Using extensive numerical examples we demonstrate that (1) the convergence behavior of the fast sweeping method for the factored eikonal equation is the same as for the original eikonal equation, i.e., the number of iterations for the Gauss-Seidel iterations is independent of the mesh size, and (2) the numerical solution from the factored eikonal equation is more accurate than the numerical solution directly computed from the original eikonal equation, especially for point sources.
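
    To illustrate the sweeping mechanism only, here is a plain (unfactored) Gauss-Seidel fast sweeping solver for |∇T| = s on a uniform grid with the standard two-point local update; the paper instead sweeps on the multiplicative correction of T = T0·τ, which removes the source singularity and is not reproduced in this sketch.

        import numpy as np

        def fast_sweep(slowness, h, source, n_sweeps=4):
            n, m = slowness.shape
            big = 1e10
            T = np.full((n, m), big)
            T[source] = 0.0
            orders = [(range(n), range(m)),
                      (range(n - 1, -1, -1), range(m)),
                      (range(n), range(m - 1, -1, -1)),
                      (range(n - 1, -1, -1), range(m - 1, -1, -1))]
            for _ in range(n_sweeps):
                for rows, cols in orders:            # four alternating sweep directions
                    for i in rows:
                        for j in cols:
                            if (i, j) == source:
                                continue
                            a = min(T[i - 1, j] if i > 0 else big,
                                    T[i + 1, j] if i < n - 1 else big)
                            b = min(T[i, j - 1] if j > 0 else big,
                                    T[i, j + 1] if j < m - 1 else big)
                            sh = slowness[i, j] * h
                            if abs(a - b) >= sh:     # one-sided update
                                t_new = min(a, b) + sh
                            else:                    # two-sided update
                                t_new = 0.5 * (a + b + np.sqrt(2 * sh * sh - (a - b) ** 2))
                            T[i, j] = min(T[i, j], t_new)
            return T

        T = fast_sweep(np.ones((101, 101)), h=0.01, source=(50, 50))
        print(round(T[50, 100], 4))   # close to 0.5, the distance from the centre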

  18. Fast and simple high-capacity quantum cryptography with error detection

    PubMed Central

    Lai, Hong; Luo, Ming-Xing; Pieprzyk, Josef; Zhang, Jun; Pan, Lei; Li, Shudong; Orgun, Mehmet A.

    2017-01-01

    Quantum cryptography is commonly used to generate fresh secure keys with quantum signal transmission for instant use between two parties. However, research shows that the relatively low key generation rate hinders its practical use where a symmetric cryptography component consumes the shared key. That is, the security of the symmetric cryptography demands a frequent rate of key updates, which leads to a higher consumption of the internal one-time-pad communication bandwidth, since it requires the length of the key to be as long as that of the secret. In order to alleviate these issues, we develop a matrix algorithm for fast and simple high-capacity quantum cryptography. Our scheme can achieve secure private communication with fresh keys generated from Fibonacci- and Lucas-valued orbital angular momentum (OAM) states for the seed to construct recursive Fibonacci and Lucas matrices. Moreover, the proposed matrix algorithm for quantum cryptography can ultimately be simplified to matrix multiplication, which is implemented and optimized in modern computers. Most importantly, considerable information capacity can be improved effectively and efficiently by the recursive property of Fibonacci and Lucas matrices, thereby avoiding the restriction of physical conditions, such as the communication bandwidth. PMID:28406240

  19. Fast and simple high-capacity quantum cryptography with error detection.

    PubMed

    Lai, Hong; Luo, Ming-Xing; Pieprzyk, Josef; Zhang, Jun; Pan, Lei; Li, Shudong; Orgun, Mehmet A

    2017-04-13

    Quantum cryptography is commonly used to generate fresh secure keys with quantum signal transmission for instant use between two parties. However, research shows that the relatively low key generation rate hinders its practical use where a symmetric cryptography component consumes the shared key. That is, the security of the symmetric cryptography demands a frequent rate of key updates, which leads to a higher consumption of the internal one-time-pad communication bandwidth, since it requires the length of the key to be as long as that of the secret. In order to alleviate these issues, we develop a matrix algorithm for fast and simple high-capacity quantum cryptography. Our scheme can achieve secure private communication with fresh keys generated from Fibonacci- and Lucas-valued orbital angular momentum (OAM) states for the seed to construct recursive Fibonacci and Lucas matrices. Moreover, the proposed matrix algorithm for quantum cryptography can ultimately be simplified to matrix multiplication, which is implemented and optimized in modern computers. Most importantly, considerable information capacity can be improved effectively and efficiently by the recursive property of Fibonacci and Lucas matrices, thereby avoiding the restriction of physical conditions, such as the communication bandwidth.

  20. Fast and simple high-capacity quantum cryptography with error detection

    NASA Astrophysics Data System (ADS)

    Lai, Hong; Luo, Ming-Xing; Pieprzyk, Josef; Zhang, Jun; Pan, Lei; Li, Shudong; Orgun, Mehmet A.

    2017-04-01

    Quantum cryptography is commonly used to generate fresh secure keys with quantum signal transmission for instant use between two parties. However, research shows that the relatively low key generation rate hinders its practical use where a symmetric cryptography component consumes the shared key. That is, the security of the symmetric cryptography demands a frequent rate of key updates, which leads to a higher consumption of the internal one-time-pad communication bandwidth, since it requires the length of the key to be as long as that of the secret. In order to alleviate these issues, we develop a matrix algorithm for fast and simple high-capacity quantum cryptography. Our scheme can achieve secure private communication with fresh keys generated from Fibonacci- and Lucas-valued orbital angular momentum (OAM) states for the seed to construct recursive Fibonacci and Lucas matrices. Moreover, the proposed matrix algorithm for quantum cryptography can ultimately be simplified to matrix multiplication, which is implemented and optimized in modern computers. Most importantly, considerable information capacity can be improved effectively and efficiently by the recursive property of Fibonacci and Lucas matrices, thereby avoiding the restriction of physical conditions, such as the communication bandwidth.
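
    The key-expansion arithmetic rests on the recursive structure of Fibonacci and Lucas numbers, which matrix multiplication captures through the Q-matrix identity Q^n = [[F(n+1), F(n)], [F(n), F(n-1)]]. The snippet below only illustrates that arithmetic and implements none of the quantum/OAM parts of the protocol.

        import numpy as np

        Q = np.array([[1, 1], [1, 0]], dtype=object)   # object dtype keeps exact big integers

        def mat_pow(M, n):
            # fast exponentiation by squaring
            result = np.identity(2, dtype=object)
            while n:
                if n & 1:
                    result = result @ M
                M = M @ M
                n >>= 1
            return result

        def fibonacci(n):
            return mat_pow(Q, n)[0, 1]

        def lucas(n):
            return fibonacci(n - 1) + fibonacci(n + 1)

        print([fibonacci(n) for n in range(1, 10)])   # 1 1 2 3 5 8 13 21 34
        print([lucas(n) for n in range(1, 10)])       # 1 3 4 7 11 18 29 47 76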

  1. A Simple yet Accurate Method for the Estimation of the Biovolume of Planktonic Microorganisms.

    PubMed

    Saccà, Alessandro

    2016-01-01

    Determining the biomass of microbial plankton is central to the study of fluxes of energy and materials in aquatic ecosystems. This is typically accomplished by applying proper volume-to-carbon conversion factors to group-specific abundances and biovolumes. A critical step in this approach is the accurate estimation of biovolume from two-dimensional (2D) data such as those available through conventional microscopy techniques or flow-through imaging systems. This paper describes a simple yet accurate method for the assessment of the biovolume of planktonic microorganisms, which works with any image analysis system allowing for the measurement of linear distances and the estimation of the cross sectional area of an object from a 2D digital image. The proposed method is based on Archimedes' principle about the relationship between the volume of a sphere and that of a cylinder in which the sphere is inscribed, plus a coefficient of 'unellipticity' introduced here. Validation and careful evaluation of the method are provided using a variety of approaches. The new method proved to be highly precise with all convex shapes characterised by approximate rotational symmetry, and combining it with an existing method specific for highly concave or branched shapes allows covering the great majority of cases with good reliability. Thanks to its accuracy, consistency, and low resources demand, the new method can conveniently be used in substitution of any extant method designed for convex shapes, and can readily be coupled with automated cell imaging technologies, including state-of-the-art flow-through imaging devices.

  2. A Simple yet Accurate Method for the Estimation of the Biovolume of Planktonic Microorganisms

    PubMed Central

    2016-01-01

    Determining the biomass of microbial plankton is central to the study of fluxes of energy and materials in aquatic ecosystems. This is typically accomplished by applying proper volume-to-carbon conversion factors to group-specific abundances and biovolumes. A critical step in this approach is the accurate estimation of biovolume from two-dimensional (2D) data such as those available through conventional microscopy techniques or flow-through imaging systems. This paper describes a simple yet accurate method for the assessment of the biovolume of planktonic microorganisms, which works with any image analysis system allowing for the measurement of linear distances and the estimation of the cross sectional area of an object from a 2D digital image. The proposed method is based on Archimedes’ principle about the relationship between the volume of a sphere and that of a cylinder in which the sphere is inscribed, plus a coefficient of ‘unellipticity’ introduced here. Validation and careful evaluation of the method are provided using a variety of approaches. The new method proved to be highly precise with all convex shapes characterised by approximate rotational symmetry, and combining it with an existing method specific for highly concave or branched shapes allows covering the great majority of cases with good reliability. Thanks to its accuracy, consistency, and low resources demand, the new method can conveniently be used in substitution of any extant method designed for convex shapes, and can readily be coupled with automated cell imaging technologies, including state-of-the-art flow-through imaging devices. PMID:27195667
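
    A hedged sketch of the geometric core: by the sphere-cylinder relation, a solid of revolution whose 2D projection has area A and rotation-symmetric width w has volume close to (2/3)·A·w; the 'unellipticity' coefficient (applied here as a simple multiplicative factor, set to 1 for a perfect ellipse) corrects non-elliptical outlines. The exact way the published method applies that coefficient may differ.

        import math

        def biovolume(projected_area, minor_width, unellipticity=1.0):
            # volume estimate from 2D measurements of a roughly convex cell
            return (2.0 / 3.0) * projected_area * minor_width * unellipticity

        # check against a prolate spheroid with semi-axes a = 10, b = 4 (same length units)
        a, b = 10.0, 4.0
        area = math.pi * a * b                      # projected ellipse area
        exact = 4.0 / 3.0 * math.pi * a * b * b     # exact spheroid volume
        print(round(biovolume(area, 2 * b), 1), round(exact, 1))   # identical values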

  3. Two fast and accurate heuristic RBF learning rules for data classification.

    PubMed

    Rouhani, Modjtaba; Javan, Dawood S

    2016-03-01

    This paper presents new Radial Basis Function (RBF) learning methods for classification problems. The proposed methods use heuristics to determine the spreads, the centers and the number of hidden neurons of the network in such a way that higher efficiency is achieved with fewer neurons, while the learning algorithm remains fast and simple. To keep the network size limited, neurons are added to the network recursively until a termination condition is met. Each neuron covers part of the training data. The termination condition is that all training data are covered or that the maximum number of neurons is reached. In each step, the center and spread of the new neuron are selected to maximize its coverage. Maximizing the coverage of the neurons leads to a network with fewer neurons and hence a lower VC dimension and better generalization. Using the power exponential distribution function as the activation function of the hidden neurons, and in the light of the new learning approaches, it is proved that all data become linearly separable in the space of hidden-layer outputs, which implies that there exist linear output-layer weights with zero training error. The proposed methods are applied to some well-known datasets and the simulation results, compared with SVM and some other leading RBF learning methods, show their satisfactory and comparable performance. Copyright © 2015 Elsevier Ltd. All rights reserved.
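
    The construction is only outlined in the abstract, so the following Python sketch is a simplified stand-in: it uses a fixed spread and a plain distance threshold as the notion of 'coverage' (the paper tunes spreads and uses a power exponential activation), and greedily adds one hidden neuron at a time, centred where it covers the most still-uncovered points of one class.

      import numpy as np

      def greedy_rbf_centers(X, y, spread=1.0, max_neurons=20):
          """Toy greedy selection of RBF centers (sketch, not the paper's rule).

          Repeatedly picks the training point that, used as a center with a
          fixed spread, covers the largest number of uncovered same-class
          points, until everything is covered or max_neurons is reached.
          """
          uncovered = np.ones(len(X), dtype=bool)
          neurons = []  # (center, spread, class label)
          while uncovered.any() and len(neurons) < max_neurons:
              best = None
              for i in np.where(uncovered)[0]:
                  d = np.linalg.norm(X - X[i], axis=1)
                  covers = uncovered & (d <= spread) & (y == y[i])
                  if best is None or covers.sum() > best[0]:
                      best = (covers.sum(), i, covers)
              _, i, covers = best
              neurons.append((X[i].copy(), spread, y[i]))
              uncovered &= ~covers
          return neurons

      X = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]])
      y = np.array([0, 0, 1, 1])
      print(len(greedy_rbf_centers(X, y, spread=0.5)))  # 2 neurons, one per cluster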

  4. RICO: A NEW APPROACH FOR FAST AND ACCURATE REPRESENTATION OF THE COSMOLOGICAL RECOMBINATION HISTORY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fendt, W. A.; Wandelt, B. D.; Chluba, J.

    2009-04-15

    We present RICO, a code designed to compute the ionization fraction of the universe during the epoch of hydrogen and helium recombination with an unprecedented combination of speed and accuracy. This is accomplished by training the machine learning code PICO on the calculations of a multilevel cosmological recombination code which self-consistently includes several physical processes that were neglected previously. After training, RICO is used to fit the free electron fraction as a function of the cosmological parameters. While, for example, at low redshifts (z ≲ 900), much of the net change in the ionization fraction can be captured by lowering the hydrogen fudge factor in RECFAST by about 3%, RICO provides a means of effectively using the accurate ionization history of the full recombination code in the standard cosmological parameter estimation framework without the need to add new or refined fudge factors or functions to a simple recombination model. Within the new approach presented here, it is easy to update RICO whenever a more accurate full recombination code becomes available. Once trained, RICO computes the cosmological ionization history with negligible fitting error in approximately 10 ms, a speedup of at least 10^6 over the full recombination code that was used here. Also RICO is able to reproduce the ionization history of the full code to a level well below 0.1%, thereby ensuring that the theoretical power spectra of cosmic microwave background (CMB) fluctuations can be computed to sufficient accuracy and speed for analysis from upcoming CMB experiments like Planck. Furthermore, it will enable cross-checking different recombination codes across cosmological parameter space, a comparison that will be very important in order to assure the accurate interpretation of future CMB data.

  5. Fast learning of simple perceptual discriminations reduces brain activation in working memory and in high-level auditory regions.

    PubMed

    Daikhin, Luba; Ahissar, Merav

    2015-07-01

    Introducing simple stimulus regularities facilitates learning of both simple and complex tasks. This facilitation may reflect an implicit change in the strategies used to solve the task when successful predictions regarding incoming stimuli can be formed. We studied the modifications in brain activity associated with fast perceptual learning based on regularity detection. We administered a two-tone frequency discrimination task and measured brain activation (fMRI) under two conditions: with and without a repeated reference tone. Although participants could not explicitly tell the difference between these two conditions, the introduced regularity affected both performance and the pattern of brain activation. The "No-Reference" condition induced a larger activation in frontoparietal areas known to be part of the working memory network. However, only the condition with a reference showed fast learning, which was accompanied by a reduction of activity in two regions: the left intraparietal area, involved in stimulus retention, and the posterior superior-temporal area, involved in representing auditory regularities. We propose that this joint reduction reflects a reduction in the need for online storage of the compared tones. We further suggest that this change reflects an implicit strategic shift "backwards" from reliance mainly on working memory networks in the "No-Reference" condition to increased reliance on detected regularities stored in high-level auditory networks.

  6. Fast and accurate enzyme activity measurements using a chip-based microfluidic calorimeter.

    PubMed

    van Schie, Morten M C H; Ebrahimi, Kourosh Honarmand; Hagen, Wilfred R; Hagedoorn, Peter-Leon

    2018-03-01

    Recent developments in microfluidic and nanofluidic technologies have resulted in development of new chip-based microfluidic calorimeters with potential use in different fields. One application would be the accurate high-throughput measurement of enzyme activity. Calorimetry is a generic way to measure activity of enzymes, but unlike conventional calorimeters, chip-based calorimeters can be easily automated and implemented in high-throughput screening platforms. However, application of chip-based microfluidic calorimeters to measure enzyme activity has been limited due to problems associated with miniaturization such as incomplete mixing and a decrease in volumetric heat generated. To address these problems we introduced a calibration method and devised a convenient protocol for using a chip-based microfluidic calorimeter. Using the new calibration method, the progress curve of alkaline phosphatase, which has product inhibition for phosphate, measured by the calorimeter was the same as that recorded by UV-visible spectroscopy. Our results may enable use of current chip-based microfluidic calorimeters in a simple manner as a tool for high-throughput screening of enzyme activity with potential applications in drug discovery and enzyme engineering. Copyright © 2017. Published by Elsevier Inc.

  7. Accurate prediction of pregnancy viability by means of a simple scoring system.

    PubMed

    Bottomley, Cecilia; Van Belle, Vanya; Kirk, Emma; Van Huffel, Sabine; Timmerman, Dirk; Bourne, Tom

    2013-01-01

    What is the performance of a simple scoring system to predict whether women will have an ongoing viable intrauterine pregnancy beyond the first trimester? A simple scoring system using demographic and initial ultrasound variables accurately predicts pregnancy viability beyond the first trimester with an area under the curve (AUC) in a receiver operating characteristic curve of 0.924 [95% confidence interval (CI) 0.900-0.947] on an independent test set. Individual demographic and ultrasound factors, such as maternal age, vaginal bleeding and gestational sac size, are strong predictors of miscarriage. Previous mathematical models have combined individual risk factors with reasonable performance. A simple scoring system derived from a mathematical model that can be easily implemented in clinical practice has not previously been described for the prediction of ongoing viability. This was a prospective observational study in a single early pregnancy assessment centre during a 9-month period. A cohort of 1881 consecutive women undergoing transvaginal ultrasound scan at a gestational age <84 days were included. Women were excluded if the first trimester outcome was not known. Demographic features, symptoms and ultrasound variables were tested for their influence on ongoing viability. Logistic regression was used to determine the influence on first trimester viability from demographics and symptoms alone, ultrasound findings alone and then from all the variables combined. Each model was developed on a training data set, and a simple scoring system was derived from this. This scoring system was tested on an independent test data set. The final outcome based on a total of 1435 participants was an ongoing viable pregnancy in 885 (61.7%) and early pregnancy loss in 550 (38.3%) women. The scoring system using significant demographic variables alone (maternal age and amount of bleeding) to predict ongoing viability gave an AUC of 0.724 (95% CI = 0.692-0.756) in the training set
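
    The abstract does not give the rule for turning the fitted logistic model into points, so the snippet below only illustrates one common recipe (scale the coefficients by the smallest absolute coefficient and round to integers) on entirely synthetic data; the predictor names and coefficients are invented and are not those of the published score.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n = 500
      age = rng.uniform(18, 45, n)                 # invented predictors
      bleeding = rng.integers(0, 4, n)             # 0 = none ... 3 = heavy
      sac_mm = rng.uniform(2, 40, n)
      logit = 3.0 - 0.07 * age - 0.8 * bleeding + 0.02 * sac_mm
      viable = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

      X = np.column_stack([age, bleeding, sac_mm])
      model = LogisticRegression(max_iter=1000).fit(X, viable)

      # One common recipe for a bedside score: divide each coefficient by the
      # smallest absolute coefficient and round to the nearest integer.
      coefs = model.coef_[0]
      points = np.round(coefs / np.abs(coefs).min()).astype(int)
      print(dict(zip(["age", "bleeding", "sac_mm"], points)))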

  8. A Simple and Accurate Method for Measuring Enzyme Activity.

    ERIC Educational Resources Information Center

    Yip, Din-Yan

    1997-01-01

    Presents methods commonly used for investigating enzyme activity using catalase and presents a new method for measuring catalase activity that is more reliable and accurate. Provides results that are readily reproduced and quantified. Can also be used for investigations of enzyme properties such as the effects of temperature, pH, inhibitors,…

  9. Fast Physically Accurate Rendering of Multimodal Signatures of Distributed Fracture in Heterogeneous Materials.

    PubMed

    Visell, Yon

    2015-04-01

    This paper proposes a fast, physically accurate method for synthesizing multimodal, acoustic and haptic, signatures of distributed fracture in quasi-brittle heterogeneous materials, such as wood, granular media, or other fiber composites. Fracture processes in these materials are challenging to simulate with existing methods, due to the prevalence of large numbers of disordered, quasi-random spatial degrees of freedom, representing the complex physical state of a sample over the geometric volume of interest. Here, I develop an algorithm for simulating such processes, building on a class of statistical lattice models of fracture that have been widely investigated in the physics literature. This algorithm is enabled through a recently published mathematical construction based on the inverse transform method of random number sampling. It yields a purely time domain stochastic jump process representing stress fluctuations in the medium. The latter can be readily extended by a mean field approximation that captures the averaged constitutive (stress-strain) behavior of the material. Numerical simulations and interactive examples demonstrate the ability of these algorithms to generate physically plausible acoustic and haptic signatures of fracture in complex, natural materials interactively at audio sampling rates.
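
    The enabling construction named in the abstract is the inverse transform method of random number sampling. The generic Python sketch below shows that technique on an arbitrary illustrative target (exponential waiting times between events); the paper's actual stress-fluctuation jump process is not reproduced here.

      import numpy as np

      def inverse_transform_samples(inverse_cdf, n, rng=None):
          """Draw n samples by pushing uniform(0,1) variates through an inverse CDF."""
          rng = rng or np.random.default_rng()
          return inverse_cdf(rng.random(n))

      # Illustrative target: exponential waiting times with rate lam
      # (a stand-in distribution, not the paper's fracture statistics).
      lam = 2.0
      waits = inverse_transform_samples(lambda u: -np.log(1.0 - u) / lam, 10000)
      event_times = np.cumsum(waits)   # a simple time-domain jump process
      print(waits.mean())              # close to 1/lam = 0.5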

  10. Massively Parallel Processing for Fast and Accurate Stamping Simulations

    NASA Astrophysics Data System (ADS)

    Gress, Jeffrey J.; Xu, Siguang; Joshi, Ramesh; Wang, Chuan-tao; Paul, Sabu

    2005-08-01

    The competitive automotive market drives automotive manufacturers to speed up vehicle development cycles and reduce lead-time. Fast tooling development is one of the key areas supporting fast and short vehicle development programs (VDP). In the past ten years, stamping simulation has become the most effective validation tool for predicting and resolving potential formability and quality problems before the dies are physically made. Stamping simulation and formability analysis have become a critical business segment in GM's math-based die engineering process. As simulation becomes one of the major production tools in the engineering factory, simulation speed and accuracy are two of the most important measures for stamping simulation technology. The speed and time-in-system of forming analysis become even more critical to supporting fast VDP and tooling readiness. Since 1997, General Motors Die Center has been working jointly with our software vendor to develop and implement a parallel version of simulation software for mass production analysis applications. By 2001, this technology had matured into distributed memory processing (DMP) of draw die simulations in a networked distributed memory computing environment. In 2004, this technology was refined to massively parallel processing (MPP) and extended to line die forming analysis (draw, trim, flange, and associated spring-back) running on a dedicated computing environment. The evolution of this technology and the insight gained through the implementation of DMP/MPP technology, as well as performance benchmarks, are discussed in this publication.

  11. An accurate and efficient acoustic eigensolver based on a fast multipole BEM and a contour integral method

    NASA Astrophysics Data System (ADS)

    Zheng, Chang-Jun; Gao, Hai-Feng; Du, Lei; Chen, Hai-Bo; Zhang, Chuanzeng

    2016-01-01

    An accurate numerical solver is developed in this paper for eigenproblems governed by the Helmholtz equation and formulated through the boundary element method. A contour integral method is used to convert the nonlinear eigenproblem into an ordinary eigenproblem, so that eigenvalues can be extracted accurately by solving a set of standard boundary element systems of equations. In order to accelerate the solution procedure, the parameters affecting the accuracy and efficiency of the method are studied and two contour paths are compared. Moreover, a wideband fast multipole method is implemented with a block IDR (s) solver to reduce the overall solution cost of the boundary element systems of equations with multiple right-hand sides. The Burton-Miller formulation is employed to identify the fictitious eigenfrequencies of the interior acoustic problems with multiply connected domains. The actual effect of the Burton-Miller formulation on tackling the fictitious eigenfrequency problem is investigated and the optimal choice of the coupling parameter as α = i / k is confirmed through exterior sphere examples. Furthermore, the numerical eigenvalues obtained by the developed method are compared with the results obtained by the finite element method to show the accuracy and efficiency of the developed method.

  12. An accurate model for the computation of the dose of protons in water.

    PubMed

    Embriaco, A; Bellinzona, V E; Fontana, A; Rotondi, A

    2017-06-01

    The accurate and fast calculation of the dose in proton radiation therapy is an essential ingredient for successful treatments. We propose a novel approach with a minimal number of parameters. The approach is based on the exact calculation of the electromagnetic part of the interaction, namely the Molière theory of the multiple Coulomb scattering for the transversal 1D projection and the Bethe-Bloch formula for the longitudinal stopping power profile, including a gaussian energy straggling. To this e.m. contribution the nuclear proton-nucleus interaction is added with a simple two-parameter model. Then, the non gaussian lateral profile is used to calculate the radial dose distribution with a method that assumes the cylindrical symmetry of the distribution. The results, obtained with a fast C++ based computational code called MONET (MOdel of ioN dosE for Therapy), are in very good agreement with the FLUKA MC code, within a few percent in the worst case. This study provides a new tool for fast dose calculation or verification, possibly for clinical use. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  13. A simple fast pulse gas valve using a dynamic pressure differential as the primary closing mechanism

    NASA Astrophysics Data System (ADS)

    Thomas, J. C.; Hwang, D. Q.; Horton, R. D.; Rogers, J. H.; Raman, R.

    1993-06-01

    In this article we describe a simple fast pulse gas valve developed for use in a plasma discharge experiment. The valve delivers 1017-1019 molecules per pulse varied by changing the voltage on the electromagnetic driver power supply. Valve pulse widths are observed to be less than 300 μs full width at half maximum with a rise time of less than 100 μs resulting in a maximum gas flow rate of ˜1022 molecules per second. An optical transmission technique was used to determine the mechanical opening and closing characteristics of the valve piston. A fast ionization gauge (FIG) was used for diagnosis of the temporal character of the gas pulse while the total gas throughput was determined by measuring the change in pressure per pulse in a small test chamber with a convectron tube gauge. Calibration of the FIG was accomplished by comparing the net change in pressure in a large chamber as measured by the FIG to the net change in pressure in a small test chamber as measured by the convectron tube gauge.

  14. Fast and Simple Discriminative Analysis of Anthocyanins-Containing Berries Using LC/MS Spectral Data.

    PubMed

    Yang, Heejung; Kim, Hyun Woo; Kwon, Yong Soo; Kim, Ho Kyong; Sung, Sang Hyun

    2017-09-01

    Anthocyanins are potent antioxidant agents that protect against many degenerative diseases; however, they are unstable because they are vulnerable to external stimuli including temperature, pH and light. This vulnerability hinders the quality control of anthocyanin-containing berries using classical high-performance liquid chromatography (HPLC) analytical methodologies based on UV or MS chromatograms. To develop an alternative approach for the quality assessment and discrimination of anthocyanin-containing berries, we used MS spectral data acquired in a short analytical time rather than UV or MS chromatograms. Mixtures of anthocyanins were separated from other components in a short gradient time (5 min) due to their higher polarity, and the representative MS spectrum was acquired from the MS chromatogram corresponding to the mixture of anthocyanins. The chemometric data from the representative MS spectra contained reliable information for the identification and relative quantification of anthocyanins in berries with good precision and accuracy. This fast and simple methodology, which consists of a simple sample preparation method and short gradient analysis, could be applied to reliably discriminate the species and geographical origins of different anthocyanin-containing berries. These features make the technique useful for the food industry. Copyright © 2017 John Wiley & Sons, Ltd.

  15. A Fast and Accurate Sparse Continuous Signal Reconstruction by Homotopy DCD with Non-Convex Regularization

    PubMed Central

    Wang, Tianyun; Lu, Xinfei; Yu, Xiaofei; Xi, Zhendong; Chen, Weidong

    2014-01-01

    In recent years, various applications regarding sparse continuous signal recovery such as source localization, radar imaging, communication channel estimation, etc., have been addressed from the perspective of compressive sensing (CS) theory. However, there are two major defects that need to be tackled when considering any practical utilization. The first issue is off-grid problem caused by the basis mismatch between arbitrary located unknowns and the pre-specified dictionary, which would make conventional CS reconstruction methods degrade considerably. The second important issue is the urgent demand for low-complexity algorithms, especially when faced with the requirement of real-time implementation. In this paper, to deal with these two problems, we have presented three fast and accurate sparse reconstruction algorithms, termed as HR-DCD, Hlog-DCD and Hlp-DCD, which are based on homotopy, dichotomous coordinate descent (DCD) iterations and non-convex regularizations, by combining with the grid refinement technique. Experimental results are provided to demonstrate the effectiveness of the proposed algorithms and related analysis. PMID:24675758

  16. An All-Fragments Grammar for Simple and Accurate Parsing

    DTIC Science & Technology

    2012-03-21

    Tsujii. Probabilistic CFG with latent annotations. In Proceedings of ACL, 2005. Slav Petrov and Dan Klein. Improved Inference for Unlexicalized Parsing. In Proceedings of NAACL-HLT, 2007. Slav Petrov and Dan Klein. Sparse Multi-Scale Grammars for Discriminative Latent Variable Parsing. In Proceedings of EMNLP, 2008. Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. Learning Accurate, Compact, and Interpretable Tree Annotation. In Proceedings

  17. An unexpected way forward: towards a more accurate and rigorous protein-protein binding affinity scoring function by eliminating terms from an already simple scoring function.

    PubMed

    Swanson, Jon; Audie, Joseph

    2018-01-01

    A fundamental and unsolved problem in biophysical chemistry is the development of a computationally simple, physically intuitive, and generally applicable method for accurately predicting and physically explaining protein-protein binding affinities from protein-protein interaction (PPI) complex coordinates. Here, we propose that the simplification of a previously described six-term PPI scoring function to a four term function results in a simple expression of all physically and statistically meaningful terms that can be used to accurately predict and explain binding affinities for a well-defined subset of PPIs that are characterized by (1) crystallographic coordinates, (2) rigid-body association, (3) normal interface size, and hydrophobicity and hydrophilicity, and (4) high quality experimental binding affinity measurements. We further propose that the four-term scoring function could be regarded as a core expression for future development into a more general PPI scoring function. Our work has clear implications for PPI modeling and structure-based drug design.

  18. Simple heuristics in over-the-counter drug choices: a new hint for medical education and practice.

    PubMed

    Riva, Silvia; Monti, Marco; Antonietti, Alessandro

    2011-01-01

    Over-the-counter (OTC) drugs are widely available and often purchased by consumers without advice from a health care provider. Many people rely on self-management of medications to treat common medical conditions. Although OTC medications are regulated by the National and the International Health and Drug Administration, many people are unaware of proper dosing, side effects, adverse drug reactions, and possible medication interactions. This study examined how subjects make their decisions to select an OTC drug, evaluating the role of cognitive heuristics which are simple and adaptive rules that help the decision-making process of people in everyday contexts. By analyzing 70 subjects' information-search and decision-making behavior when selecting OTC drugs, we examined the heuristics they applied in order to assess whether simple decision-making processes were also accurate and relevant. Subjects were tested with a sequence of two experimental tests based on a computerized Java system devised to analyze participants' choices in a virtual environment. We found that subjects' information-search behavior reflected the use of fast and frugal heuristics. In addition, although the heuristics which correctly predicted subjects' decisions implied significantly fewer cues on average than the subjects did in the information-search task, they were accurate in describing order of information search. A simple combination of a fast and frugal tree and a tallying rule predicted more than 78% of subjects' decisions. The current emphasis in health care is to shift some responsibility onto the consumer through expansion of self medication. To know which cognitive mechanisms are behind the choice of OTC drugs is becoming a relevant purpose of current medical education. These findings have implications both for the validity of simple heuristics describing information searches in the field of OTC drug choices and for current medical education, which has to prepare competent health
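
    The abstract reports that a fast-and-frugal tree combined with a tallying rule predicted most choices, but does not list the cues; the Python sketch below therefore uses invented cue names purely to show how such a lexicographic tree and an unweighted tally can be coded and compared.

      def fast_and_frugal_tree(drug):
          """Toy lexicographic tree: each cue can trigger an immediate exit.
          Cue names and exit order are invented for illustration."""
          if drug["known_side_effects"]:
              return "reject"
          if drug["recommended_by_pharmacist"]:
              return "buy"
          if drug["price_acceptable"]:
              return "buy"
          return "reject"

      def tallying(drug, cues=("recommended_by_pharmacist", "price_acceptable",
                               "familiar_brand")):
          """Unweighted tally: buy if at least half of the positive cues are present."""
          score = sum(bool(drug[c]) for c in cues)
          return "buy" if score >= len(cues) / 2 else "reject"

      drug = {"known_side_effects": False, "recommended_by_pharmacist": True,
              "price_acceptable": False, "familiar_brand": True}
      print(fast_and_frugal_tree(drug), tallying(drug))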

  19. Simple, Fast and Accurate Implementation of the Diffusion Approximation Algorithm for Stochastic Ion Channels with Multiple States

    PubMed Central

    Orio, Patricio; Soudry, Daniel

    2012-01-01

    Background The phenomena that emerge from the interaction of the stochastic opening and closing of ion channels (channel noise) with the non-linear neural dynamics are essential to our understanding of the operation of the nervous system. The effects that channel noise can have on neural dynamics are generally studied using numerical simulations of stochastic models. Algorithms based on discrete Markov Chains (MC) seem to be the most reliable and trustworthy, but even optimized algorithms come with a non-negligible computational cost. Diffusion Approximation (DA) methods use Stochastic Differential Equations (SDE) to approximate the behavior of a number of MCs, considerably speeding up simulation times. However, model comparisons have suggested that DA methods did not lead to the same results as in MC modeling in terms of channel noise statistics and effects on excitability. Recently, it was shown that the difference arose because MCs were modeled with coupled gating particles, while the DA was modeled using uncoupled gating particles. Implementations of DA with coupled particles, in the context of a specific kinetic scheme, yielded similar results to MC. However, it remained unclear how to generalize these implementations to different kinetic schemes, or whether they were faster than MC algorithms. Additionally, a steady state approximation was used for the stochastic terms, which, as we show here, can introduce significant inaccuracies. Main Contributions We derived the SDE explicitly for any given ion channel kinetic scheme. The resulting generic equations were surprisingly simple and interpretable – allowing an easy, transparent and efficient DA implementation, avoiding unnecessary approximations. The algorithm was tested in a voltage clamp simulation and in two different current clamp simulations, yielding the same results as MC modeling. Also, the simulation efficiency of this DA method demonstrated considerable superiority over MC methods, except when
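
    The generic SDEs derived in the paper are not reproduced in the abstract. As a hedged stand-in, the sketch below integrates the classic single-variable diffusion approximation for a two-state channel gate with the Euler-Maruyama method; it conveys the flavour of replacing a Markov chain by an SDE but is not the authors' coupled multi-state formulation.

      import numpy as np

      def simulate_gate_fraction(alpha, beta, n_channels, dt=1e-5, steps=20000, seed=0):
          """Euler-Maruyama integration of a two-state open fraction x in [0, 1]:
          dx = (alpha*(1-x) - beta*x) dt + sqrt((alpha*(1-x) + beta*x)/N) dW."""
          rng = np.random.default_rng(seed)
          x = alpha / (alpha + beta)          # start at the deterministic steady state
          out = np.empty(steps)
          for k in range(steps):
              drift = alpha * (1.0 - x) - beta * x
              diff = np.sqrt(max(alpha * (1.0 - x) + beta * x, 0.0) / n_channels)
              x += drift * dt + diff * np.sqrt(dt) * rng.standard_normal()
              x = min(max(x, 0.0), 1.0)       # keep the fraction in [0, 1]
              out[k] = x
          return out

      trace = simulate_gate_fraction(alpha=100.0, beta=50.0, n_channels=1000)
      print(trace.mean())   # close to alpha/(alpha+beta) = 2/3, with channel-noise jitter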

  20. FAMBE-pH: A Fast and Accurate Method to Compute the Total Solvation Free Energies of Proteins

    PubMed Central

    Vorobjev, Yury N.; Vila, Jorge A.

    2009-01-01

    A fast and accurate method to compute the total solvation free energies of proteins as a function of pH is presented. The method makes use of a combination of approaches, some of which have already appeared in the literature; (i) the Poisson equation is solved with an optimized fast adaptive multigrid boundary element (FAMBE) method; (ii) the electrostatic free energies of the ionizable sites are calculated for their neutral and charged states by using a detailed model of atomic charges; (iii) a set of optimal atomic radii is used to define a precise dielectric surface interface; (iv) a multilevel adaptive tessellation of this dielectric surface interface is achieved by using multisized boundary elements; and (v) 1:1 salt effects are included. The equilibrium proton binding/release is calculated with the Tanford–Schellman integral if the proteins contain more than ∼20–25 ionizable groups; for a smaller number of ionizable groups, the ionization partition function is calculated directly. The FAMBE method is tested as a function of pH (FAMBE-pH) with three proteins, namely, bovine pancreatic trypsin inhibitor (BPTI), hen egg white lysozyme (HEWL), and bovine pancreatic ribonuclease A (RNaseA). The results are (a) the FAMBE-pH method reproduces the observed pKa's of the ionizable groups of these proteins within an average absolute value of 0.4 pK units and a maximum error of 1.2 pK units and (b) comparison of the calculated total pH-dependent solvation free energy for BPTI, between the exact calculation of the ionization partition function and the Tanford–Schellman integral method, shows agreement within 1.2 kcal/mol. These results indicate that calculation of total solvation free energies with the FAMBE-pH method can provide an accurate prediction of protein conformational stability at a given fixed pH and, if coupled with molecular mechanics or molecular dynamics methods, can also be used for more realistic studies of protein folding, unfolding, and dynamics

  1. Simple, Fast and Effective Correction for Irradiance Spatial Nonuniformity in Measurement of IVs of Large Area Cells at NREL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moriarty, Tom

    The NREL cell measurement lab measures the IV parameters of cells of multiple sizes and configurations. A large contributing factor to errors and uncertainty in Jsc, Imax, Pmax and efficiency can be the irradiance spatial nonuniformity. Correcting for this nonuniformity through its precise and frequent measurement can be very time consuming. This paper explains a simple, fast and effective method based on bicubic interpolation for determining and correcting for spatial nonuniformity and verification of the method's efficacy.
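
    The report's exact procedure is not given in this summary, so the Python snippet below is only a sketch of the underlying idea: sample the relative irradiance on a coarse grid, interpolate it bicubically over the cell aperture, and divide the measured current density by the average of the interpolated map. scipy's RectBivariateSpline is used as a stand-in interpolator, and all numbers are hypothetical.

      import numpy as np
      from scipy.interpolate import RectBivariateSpline

      # Coarse grid of measured relative irradiance over the test plane (assumed data).
      x_coarse = np.linspace(0.0, 10.0, 5)          # cm
      y_coarse = np.linspace(0.0, 10.0, 5)          # cm
      irradiance = 1.0 + 0.02 * np.random.default_rng(1).standard_normal((5, 5))

      # Bicubic interpolation onto a fine grid covering the cell aperture.
      spline = RectBivariateSpline(x_coarse, y_coarse, irradiance, kx=3, ky=3)
      x_fine = np.linspace(0.0, 10.0, 101)
      y_fine = np.linspace(0.0, 10.0, 101)
      nonuniformity_map = spline(x_fine, y_fine)

      # Correction: divide the measured Jsc by the mean relative irradiance
      # seen by the cell area (hypothetical Jsc value).
      correction = nonuniformity_map.mean()
      jsc_measured = 35.0                            # mA/cm^2
      print(jsc_measured / correction)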

  2. FastRNABindR: Fast and Accurate Prediction of Protein-RNA Interface Residues.

    PubMed

    El-Manzalawy, Yasser; Abbas, Mostafa; Malluhi, Qutaibah; Honavar, Vasant

    2016-01-01

    A wide range of biological processes, including regulation of gene expression, protein synthesis, and replication and assembly of many viruses, are mediated by RNA-protein interactions. However, experimental determination of the structures of protein-RNA complexes is expensive and technically challenging. Hence, a number of computational tools have been developed for predicting protein-RNA interfaces. Some of the state-of-the-art protein-RNA interface predictors rely on position-specific scoring matrix (PSSM)-based encoding of the protein sequences. The computational efforts needed for generating PSSMs severely limit the practical utility of protein-RNA interface prediction servers. In this work, we experiment with two approaches, random sampling and sequence similarity reduction, for extracting a representative reference database of protein sequences from more than 50 million protein sequences in UniRef100. Our results suggest that randomly sampled databases produce better PSSM profiles (in terms of the number of hits used to generate the profile and the distance of the generated profile to the corresponding profile generated using the entire UniRef100 data as well as the accuracy of the machine learning classifier trained using these profiles). Based on our results, we developed FastRNABindR, an improved version of RNABindR for predicting protein-RNA interface residues using PSSM profiles generated using 1% of the UniRef100 sequences sampled uniformly at random. To the best of our knowledge, FastRNABindR is the only protein-RNA interface residue prediction online server that requires generation of PSSM profiles for query sequences and accepts hundreds of protein sequences per submission. Our approach for determining the optimal BLAST database for a protein-RNA interface residue classification task has the potential of substantially speeding up, and hence increasing the practical utility of, other amino acid sequence based predictors of protein-protein and protein

  3. Therapeutic Drug Monitoring of Phenytoin by Simple, Rapid, Accurate, Highly Sensitive and Novel Method and Its Clinical Applications.

    PubMed

    Shaikh, Abdul S; Guo, Ruichen

    2017-01-01

    Phenytoin has very challenging pharmacokinetic properties. To prevent its toxicity and ensure efficacy, continuous therapeutic monitoring is required. It is hard to find a single assay that is simple, accurate, rapid, easily available, economical and highly sensitive for therapeutic monitoring of phenytoin. The present study is directed towards establishing and validating a simple, rapid, accurate, highly sensitive, novel and environmentally friendly liquid chromatography/mass spectrometry (LC/MS) method for offering rapid and reliable TDM results of phenytoin in epileptic patients to physicians and clinicians for making immediate and rational decisions. 27 epileptic patients with uncontrolled seizures or suspected of non-compliance or toxicity of phenytoin were selected and advised to undergo TDM of phenytoin by neurologists of Qilu Hospital, Jinan, China. The LC/MS assay was used to perform the therapeutic monitoring of phenytoin. An Agilent 1100 LC/MS system was used for TDM. A mixture of 5 mM ammonium acetate and methanol (35:65, v/v) was used as the mobile phase. A Diamonsil C18 (150 mm × 4.6 mm, 5 μm) column was used to separate the analytes in plasma. The samples were prepared with a simple one-step protein precipitation method. The technique was validated according to the guidelines of the International Conference on Harmonisation (ICH). The calibration curve demonstrated good linearity over the 0.2-20 µg/mL concentration range, with the regression equation y = 0.0667855x + 0.00241785 and a correlation coefficient (R2) of 0.99928. The specificity, recovery, linearity, accuracy, precision and stability results were within the accepted limits. A concentration of 0.2 µg/mL was established as the lower limit of quantitation (LLOQ), which is 12.5 times lower than that of the currently available enzyme-multiplied immunoassay technique (EMIT) for measurement of phenytoin in epilepsy patients. A rapid, simple, economical, precise, highly sensitive and novel LC/MS assay has been
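
    Given the calibration equation quoted above, back-calculating a plasma concentration from an instrument response is a one-line rearrangement. The response value in the example is a placeholder, not a measurement from the study.

      SLOPE = 0.0667855        # from the reported calibration equation
      INTERCEPT = 0.00241785

      def phenytoin_concentration(response):
          """Invert the calibration line: x = (y - intercept) / slope (ug/mL)."""
          return (response - INTERCEPT) / SLOPE

      # A hypothetical response of 0.70 corresponds to roughly 10.4 ug/mL,
      # inside the validated 0.2-20 ug/mL range.
      print(phenytoin_concentration(0.70))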

  4. A simple and fast detection method for bovine milk residues in foods: a 2-site monoclonal antibody immunochromatography assay.

    PubMed

    Xuli, Wu; Weiyi, He; Ji, Kunmei; Wenpu, Wan; Dongsheng, Hu; Hui, Wu; Xinpin, Luo; Zhigang, Liu

    2013-03-01

    The ingredient declaration on food labels assumes paramount importance in the protection of food-allergic consumers. China has not implemented food allergen labeling. A gold immunochromatography assay (GICA) was developed using 2 monoclonal antibodies (mAb) against the milk allergen β-lactoglobulin in this study. The GICA was specific for pure milk samples with a sensitivity of 0.2 ng/mL. Milk protein traces extracted from 110 food products were detected by this method. The labels of 106 were confirmed by our GICA method: 57 food samples originally labeled as containing milk were positive for β-lactoglobulin and 49 food samples labeled as not containing milk were negative for β-lactoglobulin. However, 3 food samples falsely labeled as containing milk were found to contain no β-lactoglobulin whereas 1 food sample labeled as not containing milk actually contained β-lactoglobulin. First, these negatives could be because of the addition of a casein fraction. Second, some countries demand that food manufacturers label all ingredients derived from milk as "containing milk" even though the ingredients contain no detectable milk protein by any method. Our GICA method could thus provide a fast and simple method for semiquantitation of β-lactoglobulin in foods. The present method provides a fast, simple, semiquantitative method for the determination of milk allergens in foods. © 2013 Institute of Food Technologists®

  5. A fast and simple dose-calibrator-based quality control test for the radionuclidic purity of cyclotron-produced (99m)Tc.

    PubMed

    Tanguay, J; Hou, X; Esquinas, P; Vuckovic, M; Buckley, K; Schaffer, P; Bénard, F; Ruth, T J; Celler, A

    2015-11-07

    Cyclotron production of 99mTc through the (100)Mo(p,2n)99mTc reaction channel is actively being investigated as an alternative to reactor-based (99)Mo generation by nuclear fission of (235)U. Like most radioisotope production methods, cyclotron production of 99mTc will result in creation of unwanted impurities, including Tc and non-Tc isotopes. It is important to measure the amounts of these impurities for release of cyclotron-produced 99mTc (CPTc) for clinical use. Detection of radioactive impurities will rely on measurements of their gamma (γ) emissions. Gamma spectroscopy is not suitable for this purpose because the overwhelming presence of 99mTc and the count-rate limitations of γ spectroscopy systems preclude fast and accurate measurement of small amounts of impurities. In this article we describe a simple and fast method for measuring γ emission rates from radioactive impurities in CPTc. The proposed method is similar to that used to identify (99)Mo breakthrough in generator-produced 99mTc: one dose calibrator (DC) reading of a CPTc source placed in a lead shield is followed by a second reading of the same source in air. Our experimental and theoretical analysis show that the ratio of DC readings in lead to those in air are linearly related to γ emission rates from impurities per MBq of 99mTc over a large range of clinically-relevant production conditions. We show that estimates of the γ emission rates from Tc impurities per MBq of 99mTc can be used to estimate increases in radiation dose (relative to pure 99mTc) to patients injected with CPTc-based radiopharmaceuticals. This enables establishing dosimetry-based clinical-release criteria that can be tested using commercially-available dose calibrators. We show that our approach is highly sensitive to the presence of 93gTc, 93mTc, 94gTc, 94mTc, 95mTc, 95gTc, and 96gTc, in addition to a number of non-Tc impurities.
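
    The operational core of the test is a linear relation between the lead-to-air ratio of two dose-calibrator readings and the impurity γ emission rate per MBq of 99mTc. The calibration constants and the release limit below are placeholders, not values from the paper; the snippet only shows how such a linear release test would be applied.

      # Placeholder calibration for the assumed linear relation
      # emission_rate_per_MBq = A * (reading_in_lead / reading_in_air) + B.
      A, B = 1.0e4, 0.0                     # hypothetical units: gammas / (s * MBq)
      RELEASE_LIMIT = 500.0                 # hypothetical release threshold

      def impurity_emission_rate(reading_lead, reading_air):
          """Estimate the impurity gamma emission rate per MBq of 99mTc."""
          return A * (reading_lead / reading_air) + B

      rate = impurity_emission_rate(reading_lead=3.2, reading_air=250.0)
      print(rate, "release OK" if rate < RELEASE_LIMIT else "hold batch")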

  6. Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance

    PubMed Central

    Hong, Ha; Solomon, Ethan A.; DiCarlo, James J.

    2015-01-01

    database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. PMID:26424887
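
    The 'simple learned weighted sums' amount to fitting linear weights over a population of recorded firing rates. The sketch below does this with ridge-regularised least squares on synthetic data; the neuron and image counts, the regulariser and the behavioural target are all stand-ins, not the study's recordings.

      import numpy as np

      rng = np.random.default_rng(0)
      n_images, n_neurons = 200, 150
      rates = rng.poisson(5.0, size=(n_images, n_neurons)).astype(float)  # fake IT rates
      true_w = rng.normal(0.0, 0.1, n_neurons)
      behaviour = rates @ true_w + rng.normal(0.0, 0.5, n_images)  # fake per-image performance

      # Learned weighted sum: ridge-regularised least squares (lambda is arbitrary).
      lam = 1.0
      w = np.linalg.solve(rates.T @ rates + lam * np.eye(n_neurons), rates.T @ behaviour)
      prediction = rates @ w

      print(np.corrcoef(prediction, behaviour)[0, 1])  # correlation of prediction with 'behaviour'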

  7. Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance.

    PubMed

    Majaj, Najib J; Hong, Ha; Solomon, Ethan A; DiCarlo, James J

    2015-09-30

    database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. Copyright © 2015 the authors 0270-6474/15/3513402-17$15.00/0.

  8. A Simple and Accurate Rate-Driven Infiltration Model

    NASA Astrophysics Data System (ADS)

    Cui, G.; Zhu, J.

    2017-12-01

    In this study, we develop a novel Rate-Driven Infiltration Model (RDIMOD) for simulating infiltration into soils. Unlike traditional methods, RDIMOD avoids numerically solving the highly non-linear Richards equation or simply modeling with empirical parameters. RDIMOD employs infiltration rate as model input to simulate the one-dimensional infiltration process by solving an ordinary differential equation. The model can simulate the evolutions of wetting front, infiltration rate, and cumulative infiltration on any surface slope including vertical and horizontal directions. Compared with the results from the Richards equation for both vertical and horizontal infiltration, RDIMOD simply and accurately predicts infiltration processes for any type of soil and soil hydraulic model without numerical difficulty. Taking into account the accuracy, capability, and computational effectiveness and stability, RDIMOD can be used in large-scale hydrologic and land-atmosphere modeling.
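
    RDIMOD's governing equation is not given in the abstract. Purely to illustrate the rate-driven idea of integrating an ordinary differential equation fed by an infiltration-rate input, the sketch below advances cumulative infiltration with scipy's solve_ivp and converts it to a wetting-front depth under a crude sharp-front assumption; none of this is the authors' actual model.

      import numpy as np
      from scipy.integrate import solve_ivp

      def infiltration_rate(t):
          """Prescribed infiltration-rate input in mm/h (an arbitrary decaying pulse)."""
          return 20.0 * np.exp(-t / 2.0) + 2.0

      # dI/dt = r(t): cumulative infiltration driven directly by the rate input.
      sol = solve_ivp(lambda t, I: [infiltration_rate(t)], t_span=(0.0, 12.0),
                      y0=[0.0], dense_output=True, max_step=0.1)

      theta_s, theta_i = 0.45, 0.15          # saturated / initial water content (assumed)
      t = np.linspace(0.0, 12.0, 7)
      cumulative = sol.sol(t)[0]                        # mm
      wetting_front = cumulative / (theta_s - theta_i)  # mm, sharp-front assumption
      print(np.round(cumulative, 1), np.round(wetting_front, 1))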

  9. FAST: FAST Analysis of Sequences Toolbox

    PubMed Central

    Lawrence, Travis J.; Kauffman, Kyle T.; Amrine, Katherine C. H.; Carper, Dana L.; Lee, Raymond S.; Becich, Peter J.; Canales, Claudia J.; Ardell, David H.

    2015-01-01

    FAST (FAST Analysis of Sequences Toolbox) provides simple, powerful open source command-line tools to filter, transform, annotate and analyze biological sequence data. Modeled after the GNU (GNU's Not Unix) Textutils such as grep, cut, and tr, FAST tools such as fasgrep, fascut, and fastr make it easy to rapidly prototype expressive bioinformatic workflows in a compact and generic command vocabulary. Compact combinatorial encoding of data workflows with FAST commands can simplify the documentation and reproducibility of bioinformatic protocols, supporting better transparency in biological data science. Interface self-consistency and conformity with conventions of GNU, Matlab, Perl, BioPerl, R, and GenBank help make FAST easy and rewarding to learn. FAST automates numerical, taxonomic, and text-based sorting, selection and transformation of sequence records and alignment sites based on content, index ranges, descriptive tags, annotated features, and in-line calculated analytics, including composition and codon usage. Automated content- and feature-based extraction of sites and support for molecular population genetic statistics make FAST useful for molecular evolutionary analysis. FAST is portable, easy to install and secure thanks to the relative maturity of its Perl and BioPerl foundations, with stable releases posted to CPAN. Development as well as a publicly accessible Cookbook and Wiki are available on the FAST GitHub repository at https://github.com/tlawrence3/FAST. The default data exchange format in FAST is Multi-FastA (specifically, a restriction of BioPerl FastA format). Sanger and Illumina 1.8+ FastQ formatted files are also supported. FAST makes it easier for non-programmer biologists to interactively investigate and control biological data at the speed of thought. PMID:26042145

  10. Fast and accurate edge orientation processing during object manipulation

    PubMed Central

    Flanagan, J Randall; Johansson, Roland S

    2018-01-01

    Quickly and accurately extracting information about a touched object’s orientation is a critical aspect of dexterous object manipulation. However, the speed and acuity of tactile edge orientation processing with respect to the fingertips as reported in previous perceptual studies appear inadequate in these respects. Here we directly establish the tactile system’s capacity to process edge-orientation information during dexterous manipulation. Participants extracted tactile information about edge orientation very quickly, using it within 200 ms of first touching the object. Participants were also strikingly accurate. With edges spanning the entire fingertip, edge-orientation resolution was better than 3° in our object manipulation task, which is several times better than reported in previous perceptual studies. Performance remained impressive even with edges as short as 2 mm, consistent with our ability to precisely manipulate very small objects. Taken together, our results radically redefine the spatial processing capacity of the tactile system. PMID:29611804

  11. Sorting protein lists with nwCompare: a simple and fast algorithm for n-way comparison of proteomic data files.

    PubMed

    Pont, Frédéric; Fournié, Jean Jacques

    2010-03-01

    MS, the reference technology for proteomics, routinely produces large numbers of protein lists whose fast comparison would prove very useful. Unfortunately, most software tools only allow comparisons of two or three lists at once. We introduce here nwCompare, a simple tool for n-way comparison of several protein lists without any query language, and exemplify its use with differential and shared cancer cell proteomes. As the software compares character strings, it can be applied to any type of data mining, such as genomic or metabolomic data lists.
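
    nwCompare's internals are not described here; the sketch below is just a generic n-way comparison of protein identifier lists with set operations, which is the kind of operation the tool automates. The accession numbers are arbitrary examples.

      from functools import reduce

      def n_way_compare(named_lists):
          """Return the proteins shared by all lists and those unique to each list."""
          sets = {name: set(ids) for name, ids in named_lists.items()}
          shared = reduce(set.intersection, sets.values())
          unique = {name: s - set().union(*(o for n, o in sets.items() if n != name))
                    for name, s in sets.items()}
          return shared, unique

      lists = {
          "cell_line_A": ["P04637", "P38398", "Q09472"],
          "cell_line_B": ["P04637", "Q09472", "P42336"],
          "cell_line_C": ["P04637", "P42336", "O15350"],
      }
      shared, unique = n_way_compare(lists)
      print(shared)   # {'P04637'} is common to all three lists
      print(unique)   # e.g. 'P38398' is seen only in cell_line_A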

  12. A fast algorithm for determining bounds and accurate approximate p-values of the rank product statistic for replicate experiments.

    PubMed

    Heskes, Tom; Eisinga, Rob; Breitling, Rainer

    2014-11-21

    The rank product method is a powerful statistical technique for identifying differentially expressed molecules in replicated experiments. A critical issue in molecule selection is accurate calculation of the p-value of the rank product statistic to adequately address multiple testing. Both exact calculation and permutation and gamma approximations have been proposed to determine molecule-level significance. These current approaches have serious drawbacks as they are either computationally burdensome or provide inaccurate estimates in the tail of the p-value distribution. We derive strict lower and upper bounds to the exact p-value along with an accurate approximation that can be used to assess the significance of the rank product statistic in a computationally fast manner. The bounds and the proposed approximation are shown to provide far better accuracy over existing approximate methods in determining tail probabilities, with the slightly conservative upper bound protecting against false positives. We illustrate the proposed method in the context of a recently published analysis on transcriptomic profiling performed in blood. We provide a method to determine upper bounds and accurate approximate p-values of the rank product statistic. The proposed algorithm provides an order of magnitude increase in throughput as compared with current approaches and offers the opportunity to explore new application domains with even larger multiple testing issues. The R code is published in one of the Additional files and is available at http://www.ru.nl/publish/pages/726696/rankprodbounds.zip.
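
    Only the p-value bounds are new in this work; the rank product statistic itself is standard (the geometric mean over replicates of each gene's within-replicate rank), and the short sketch below computes it for a synthetic expression matrix so the quantity being bounded is concrete. Gene and replicate counts are arbitrary.

      import numpy as np

      def rank_product(expr):
          """Rank product per gene, with rank 1 = largest value in each replicate."""
          ranks = expr.shape[0] - expr.argsort(axis=0).argsort(axis=0)
          return np.exp(np.log(ranks).mean(axis=1))

      rng = np.random.default_rng(0)
      expr = rng.normal(0.0, 1.0, size=(1000, 4))   # 1000 'genes', 4 replicates
      expr[0] += 3.0                                # one consistently up-regulated gene
      rp = rank_product(expr)
      print(rp[0], rp.mean())                       # gene 0 has a much smaller rank product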

  13. Fast and accurate determination of arsenobetaine in fish tissues using accelerated solvent extraction and HPLC-ICP-MS determination.

    PubMed

    Wahlen, Raimund

    2004-04-01

    A high-performance liquid chromatography-inductively coupled plasma-mass spectrometry (HPLC-ICP-MS) method has been developed for the fast and accurate analysis of arsenobetaine (AsB) in fish samples extracted by accelerated solvent extraction. The combined extraction and analysis approach is validated using certified reference materials for AsB in fish and during a European intercomparison exercise with a blind sample. Up to six species of arsenic (As) can be separated and quantitated in the extracts within a 10-min isocratic elution. The method is optimized so as to minimize time-consuming sample preparation steps and allow for automated extraction and analysis of large sample batches. A comparison of standard addition and external calibration show no significant difference in the results obtained, which indicates that the LC-ICP-MS method is not influenced by severe matrix effects. The extraction procedure can process up to 24 samples in an automated manner, yet the robustness of the developed HPLC-ICP-MS approach is highlighted by the capability to run more than 50 injections per sequence, which equates to a total run-time of more than 12 h. The method can therefore be used to rapidly and accurately assess the proportion of nontoxic AsB in fish samples with high total As content during toxicological screening studies.

  14. A vision-based system for fast and accurate laser scanning in robot-assisted phonomicrosurgery.

    PubMed

    Dagnino, Giulio; Mattos, Leonardo S; Caldwell, Darwin G

    2015-02-01

    Surgical quality in phonomicrosurgery can be improved by combining open-loop laser control (e.g., high-speed scanning capabilities) with a robust and accurate closed-loop visual servoing system. A new vision-based system for laser scanning control during robot-assisted phonomicrosurgery was developed and tested. Laser scanning was accomplished with a dual control strategy, which adds a vision-based trajectory correction phase to a fast open-loop laser controller. The system is designed to eliminate open-loop aiming errors caused by system calibration limitations and by the unpredictable topology of real targets. Evaluation of the new system was performed using CO(2) laser cutting trials on artificial targets and ex-vivo tissue. This system produced accuracy values corresponding to pixel resolution even when smoke created by the laser-target interaction clutters the camera view. In realistic test scenarios, trajectory-following RMS errors were reduced by almost 80% with respect to open-loop system performance, reaching mean error values around 30 μm and maximum observed errors in the order of 60 μm. A new vision-based laser microsurgical control system was shown to be effective and promising with significant positive potential impact on the safety and quality of laser microsurgeries.

  15. Simple heuristics in over-the-counter drug choices: a new hint for medical education and practice

    PubMed Central

    Riva, Silvia; Monti, Marco; Antonietti, Alessandro

    2011-01-01

    Introduction Over-the-counter (OTC) drugs are widely available and often purchased by consumers without advice from a health care provider. Many people rely on self-management of medications to treat common medical conditions. Although OTC medications are regulated by the National and the International Health and Drug Administration, many people are unaware of proper dosing, side effects, adverse drug reactions, and possible medication interactions. Purpose This study examined how subjects make their decisions to select an OTC drug, evaluating the role of cognitive heuristics which are simple and adaptive rules that help the decision-making process of people in everyday contexts. Subjects and methods By analyzing 70 subjects’ information-search and decision-making behavior when selecting OTC drugs, we examined the heuristics they applied in order to assess whether simple decision-making processes were also accurate and relevant. Subjects were tested with a sequence of two experimental tests based on a computerized Java system devised to analyze participants’ choices in a virtual environment. Results We found that subjects’ information-search behavior reflected the use of fast and frugal heuristics. In addition, although the heuristics which correctly predicted subjects’ decisions implied significantly fewer cues on average than the subjects did in the information-search task, they were accurate in describing order of information search. A simple combination of a fast and frugal tree and a tallying rule predicted more than 78% of subjects’ decisions. Conclusion The current emphasis in health care is to shift some responsibility onto the consumer through expansion of self medication. To know which cognitive mechanisms are behind the choice of OTC drugs is becoming a relevant purpose of current medical education. These findings have implications both for the validity of simple heuristics describing information searches in the field of OTC drug choices and

  16. A Simple and Accurate Analysis of Conductivity Loss in Millimeter-Wave Helical Slow-Wave Structures

    NASA Astrophysics Data System (ADS)

    Datta, S. K.; Kumar, Lalit; Basu, B. N.

    2009-04-01

    Electromagnetic field analysis of a helix slow-wave structure was carried out and a closed form expression was derived for the inductance per unit length of the transmission-line equivalent circuit of the structure, taking into account the actual helix tape dimensions and surface current on the helix over the actual metallic area of the tape. The expression of the inductance per unit length, thus obtained, was used for estimating the increment in the inductance per unit length caused by penetration of the magnetic flux into the conducting surfaces following Wheeler's incremental inductance rule, which was subsequently interpreted for the attenuation constant of the propagating structure. The analysis was computationally simple and accurate, and accrues the accuracy of 3D electromagnetic analysis by allowing the use of dispersion characteristics obtainable from any standard electromagnetic modeling. The approach was benchmarked against measurement for two practical structures, and excellent agreement was observed. The analysis was subsequently applied to demonstrate the effects of conductivity on the attenuation constant of a typical broadband millimeter-wave helical slow-wave structure with respect to helix materials and copper plating on the helix, surface finish of the helix, dielectric loading effect and effect of high temperature operation - a comparative study of various such aspects is covered.
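
    For readers unfamiliar with Wheeler's incremental inductance rule, its standard textbook form (not the paper's helix-specific expressions) relates the conductor-loss resistance per unit length to the derivative of the inductance per unit length with respect to an inward recession n of all conducting walls, and then to the attenuation constant:

      R = \frac{R_s}{\mu_0}\,\frac{\partial L}{\partial n},
      \qquad R_s = \sqrt{\frac{\pi f \mu_0}{\sigma}},
      \qquad \alpha_c = \frac{R}{2 Z_0},

    where R_s is the surface resistance of a nonmagnetic conductor with conductivity σ, L is the inductance per unit length and Z_0 is the characteristic impedance. The paper applies this idea with the helix-specific inductance and dispersion data; the expressions above are only the generic rule.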

  17. Factors That Influence Fast Mapping in Children Exposed to Spanish and English

    PubMed Central

    Alt, Mary; Meyers, Christina; Figueroa, Cecilia

    2015-01-01

    Purpose The purpose of this study was to determine if children exposed to two languages would benefit from the phonotactic probability cues of a single language in the same way as monolingual peers and to determine if cross-linguistic influence would be present in a fast mapping task. Method Two groups of typically-developing children (monolingual English and bilingual Spanish-English) took part in a computer-based fast mapping task which manipulated phonotactic probability. Children were preschool-aged (N = 50) or school-aged (N = 34). Fast mapping was assessed through name identification and naming tasks. Data were analyzed using mixed ANOVAs with post-hoc testing and simple regression. Results Bilingual and monolingual preschoolers showed sensitivity to English phonotactic cues in both tasks, but bilingual preschoolers were less accurate than monolingual peers in the naming task. School-aged bilingual children had nearly identical performance to monolingual peers. Conclusions Knowing that children exposed to two languages can benefit from the statistical cues of a single language can help inform ideas about instruction and assessment for bilingual learners. PMID:23816663

  18. Fast and accurate face recognition based on image compression

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Blasch, Erik

    2017-05-01

    Image compression is desired for many image-related applications, especially for network-based applications with bandwidth and storage constraints. Typical reports in the face recognition community concentrate on the maximal compression rate that does not decrease recognition accuracy. In general, wavelet-based face recognition methods such as EBGM (elastic bunch graph matching) and FPB (face pattern byte) achieve high performance but run slowly due to their high computation demands. The PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis) algorithms run fast but perform poorly in face recognition. In this paper, we propose a novel face recognition method based on a standard image compression algorithm, termed compression-based (CPB) face recognition. First, all gallery images are compressed by the selected compression algorithm. Second, a mixed image is formed with the probe and gallery images and then compressed. Third, a composite compression ratio (CCR) is computed with three compression ratios calculated from the probe, gallery and mixed images. Finally, the CCR values are compared and the largest CCR corresponds to the matched face. The time cost of each face matching is about the time of compressing the mixed face image. We tested the proposed CPB method on the "ASUMSS face database" (visible and thermal images) from 105 subjects. The face recognition accuracy with visible images is 94.76% when using JPEG compression. On the same face dataset, the accuracy of the FPB algorithm was reported as 91.43%. The JPEG-compression-based (JPEG-CPB) face recognition is standard and fast, and may be integrated into a real-time imaging device.
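
    The matching steps above (compress the gallery, compress a probe plus gallery mixture, compare ratios) can be mimicked in a few lines. The sketch below uses zlib on raw pixel arrays as a stand-in for the paper's JPEG pipeline, and the way the three ratios are combined into a 'composite compression ratio' is a guess, not the authors' formula; the random 'faces' are only there to make the example self-contained.

      import zlib
      import numpy as np

      def compression_ratio(img):
          """Compressed-size / raw-size for a uint8 image (zlib stands in for JPEG)."""
          raw = img.tobytes()
          return len(zlib.compress(raw, 9)) / len(raw)

      def composite_ratio(probe, gallery):
          """Assumed combination: the mixture compresses better when probe and
          gallery are redundant, so a larger value suggests a better match."""
          mixed = np.concatenate([probe, gallery], axis=1)
          r_p, r_g, r_m = map(compression_ratio, (probe, gallery, mixed))
          return (r_p + r_g) / (2.0 * r_m)

      rng = np.random.default_rng(0)
      gallery_faces = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(3)]
      probe = gallery_faces[1].copy()            # probe identical to gallery face 1
      scores = [composite_ratio(probe, g) for g in gallery_faces]
      print(int(np.argmax(scores)))              # expected match: index 1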

  19. Combined inverse-forward artificial neural networks for fast and accurate estimation of the diffusion coefficients of cartilage based on multi-physics models.

    PubMed

    Arbabi, Vahid; Pouran, Behdad; Weinans, Harrie; Zadpoor, Amir A

    2016-09-06

    Analytical and numerical methods have been used to extract essential engineering parameters such as elastic modulus, Poisson's ratio, permeability and diffusion coefficient from experimental data in various types of biological tissues. The major limitation associated with analytical techniques is that they are often only applicable to problems with simplified assumptions. Numerical multi-physics methods, on the other hand, enable minimizing the simplified assumptions but require substantial computational expertise, which is not always available. In this paper, we propose a novel approach that combines inverse and forward artificial neural networks (ANNs) which enables fast and accurate estimation of the diffusion coefficient of cartilage without any need for computational modeling. In this approach, an inverse ANN is trained using our multi-zone biphasic-solute finite-bath computational model of diffusion in cartilage to estimate the diffusion coefficient of the various zones of cartilage given the concentration-time curves. Robust estimation of the diffusion coefficients, however, requires introducing certain levels of stochastic variations during the training process. Determining the required level of stochastic variation is performed by coupling the inverse ANN with a forward ANN that receives the diffusion coefficient as input and returns the concentration-time curve as output. Combined together, forward-inverse ANNs enable computationally inexperienced users to obtain accurate and fast estimation of the diffusion coefficients of cartilage zones. The diffusion coefficients estimated using the proposed approach are compared with those determined using direct scanning of the parameter space as the optimization approach. It has been shown that both approaches yield comparable results. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Constructing simple yet accurate potentials for describing the solvation of HCl/water clusters in bulk helium and nanodroplets.

    PubMed

    Boese, A Daniel; Forbert, Harald; Masia, Marco; Tekin, Adem; Marx, Dominik; Jansen, Georg

    2011-08-28

    The infrared spectroscopy of molecules, complexes, and molecular aggregates dissolved in superfluid helium clusters, commonly called HElium NanoDroplet Isolation (HENDI) spectroscopy, is an established, powerful experimental technique for extracting high resolution ro-vibrational spectra at ultra-low temperatures. Realistic quantum simulations of such systems, in particular in cases where the solute is undergoing a chemical reaction, require accurate solute-helium potentials which are also simple enough to be efficiently evaluated over the vast number of steps required in typical Monte Carlo or molecular dynamics sampling. This precludes using global potential energy surfaces as often parameterized for small complexes in the realm of high-resolution spectroscopic investigations that, in view of the computational effort imposed, are focused on the intermolecular interaction of rigid molecules with helium. Simple Lennard-Jones-like pair potentials, on the other hand, fall short in providing the required flexibility and accuracy in order to account for chemical reactions of the solute molecule. Here, a general scheme of constructing sufficiently accurate site-site potentials for use in typical quantum simulations is presented. This scheme employs atom-based grids, accounts for local and global minima, and is applied to the special case of a HCl(H2O)4 cluster solvated by helium. As a first step, accurate interaction energies of a helium atom with a set of representative configurations sampled from a trajectory following the dissociation of the HCl(H2O)4 cluster were computed using an efficient combination of density functional theory and symmetry-adapted perturbation theory, i.e. the DFT-SAPT approach. For each of the sampled cluster configurations, a helium atom was placed at several hundred positions distributed in space, leading to an overall number of about 400,000 such quantum chemical calculations. The resulting total interaction energies, decomposed into
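
    The fitting step, reduced to its simplest form, amounts to a least-squares adjustment of site-site pair parameters against reference He-cluster interaction energies. The sketch below uses synthetic energies and a plain 12-6 form per site purely for illustration; the real scheme uses atom-based grids and DFT-SAPT reference data as described above.

      import numpy as np
      from scipy.optimize import least_squares

      rng = np.random.default_rng(0)
      sites = rng.normal(scale=1.5, size=(7, 3))          # hypothetical solute atom positions
      probes = rng.uniform(-8.0, 8.0, size=(4000, 3))     # helium positions around the cluster
      d = np.linalg.norm(probes[:, None, :] - sites[None, :, :], axis=2)
      probes = probes[d.min(axis=1) > 2.0]                # drop unphysically close probe points

      def site_site_energy(params, probes, sites):
          """Sum of 12-6 pair terms between the He atom and every solute site."""
          n = len(sites)
          eps = params[:n] ** 2                           # squared to keep well depths positive
          sig = params[n:] ** 2 + 1.0                     # squared + offset keeps radii positive
          r = np.linalg.norm(probes[:, None, :] - sites[None, :, :], axis=2)
          return np.sum(4.0 * eps * ((sig / r) ** 12 - (sig / r) ** 6), axis=1)

      # synthetic "reference" energies standing in for the DFT-SAPT points described above
      true = np.concatenate([np.full(7, 0.3), np.full(7, 1.2)])
      e_ref = site_site_energy(true, probes, sites) + rng.normal(scale=1e-3, size=len(probes))

      fit = least_squares(lambda p: site_site_energy(p, probes, sites) - e_ref, x0=np.ones(14))
      print("max residual of fitted potential:", np.abs(fit.fun).max())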

  1. A fast and simple dose-calibrator-based quality control test for the radionuclidic purity of cyclotron-produced 99mTc

    NASA Astrophysics Data System (ADS)

    Tanguay, J.; Hou, X.; Esquinas, P.; Vuckovic, M.; Buckley, K.; Schaffer, P.; Bénard, F.; Ruth, T. J.; Celler, A.

    2015-11-01

    Cyclotron production of 99mTc through the 100Mo(p,2n)99mTc reaction channel is actively being investigated as an alternative to reactor-based 99Mo generation by nuclear fission of 235U. Like most radioisotope production methods, cyclotron production of 99mTc will result in the creation of unwanted impurities, including Tc and non-Tc isotopes. It is important to measure the amounts of these impurities before release of cyclotron-produced 99mTc (CPTc) for clinical use. Detection of radioactive impurities relies on measurements of their gamma (γ) emissions. Gamma spectroscopy is not suitable for this purpose because the overwhelming presence of 99mTc and the count-rate limitations of γ spectroscopy systems preclude fast and accurate measurement of small amounts of impurities. In this article we describe a simple and fast method for measuring γ emission rates from radioactive impurities in CPTc. The proposed method is similar to that used to identify 99Mo breakthrough in generator-produced 99mTc: one dose calibrator (DC) reading of a CPTc source placed in a lead shield is followed by a second reading of the same source in air. Our experimental and theoretical analyses show that the ratio of DC readings in lead to those in air is linearly related to the γ emission rate from impurities per MBq of 99mTc over a large range of clinically relevant production conditions. We show that estimates of the γ emission rates from Tc impurities per MBq of 99mTc can be used to estimate increases in radiation dose (relative to pure 99mTc) to patients injected with CPTc-based radiopharmaceuticals. This enables dosimetry-based clinical-release criteria to be established and tested using commercially available dose calibrators. We show that our approach is highly sensitive to the presence of 93gTc, 93mTc, 94gTc, 94mTc
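
    In use, the test reduces to two dose-calibrator readings and a previously established straight line. A minimal sketch with made-up numbers (the slope and intercept would come from the calibration described above and are purely hypothetical here):

      reading_air = 1050.0     # MBq indicated for the CPTc source measured in air (hypothetical)
      reading_pb = 12.6        # MBq indicated for the same source inside the lead shield (hypothetical)
      ratio = reading_pb / reading_air

      slope, intercept = 2.1e-3, 9.5e-3              # made-up constants for the linear relation
      impurity_rate = (ratio - intercept) / slope    # gamma emissions from impurities per MBq of 99mTc
      print(f"estimated impurity gamma-emission rate: {impurity_rate:.2f} (arbitrary units per MBq)")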

  2. Development and validation of a fast and simple multi-analyte procedure for quantification of 40 drugs relevant to emergency toxicology using GC-MS and one-point calibration.

    PubMed

    Meyer, Golo M J; Weber, Armin A; Maurer, Hans H

    2014-05-01

    Diagnosis and prognosis of poisonings should be confirmed by comprehensive screening and reliable quantification of xenobiotics, for example by gas chromatography-mass spectrometry (GC-MS) or liquid chromatography-mass spectrometry (LC-MS). The turnaround time should be short enough to have an impact on clinical decisions. In emergency toxicology, quantification using full-scan acquisition is preferable because it allows screening and quantification of expected and unexpected drugs in one run. Therefore, a multi-analyte full-scan GC-MS approach with liquid-liquid extraction and one-point calibration was developed and validated for quantification of 40 drugs relevant to emergency toxicology. Validation showed that 36 drugs could be determined quickly, accurately, and reliably in the range of upper therapeutic to toxic concentrations. Daily one-point calibration with calibrators stored for up to four weeks reduced the workload and the turnaround time to less than 1 h. In summary, the multi-analyte approach with simple liquid-liquid extraction, GC-MS identification, and quantification via fast one-point calibration was successfully applied to proficiency tests and real case samples. Copyright © 2013 John Wiley & Sons, Ltd.
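
    One-point calibration reduces each quantification to a single proportionality against the daily calibrator. A minimal sketch of that arithmetic with made-up numbers, assuming a detector response that is linear through the origin:

      cal_conc = 2.0           # mg/L, concentration of the daily one-point calibrator (hypothetical)
      cal_response = 1.50e6    # peak area measured for the calibrator (hypothetical)
      sample_response = 9.3e5  # peak area measured for the patient sample (hypothetical)

      # one-point calibration assumes a linear response passing through zero
      sample_conc = sample_response / cal_response * cal_conc
      print(f"estimated concentration: {sample_conc:.2f} mg/L")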

  3. Culturing In Vivo-like Murine Astrocytes Using the Fast, Simple, and Inexpensive AWESAM Protocol.

    PubMed

    Wolfes, Anne C; Dean, Camin

    2018-01-10

    The AWESAM (a low-cost easy stellate astrocyte method) protocol entails a fast, simple, and inexpensive way to generate large quantities of in vivo-like mouse and rat astrocyte monocultures: Brain cells can be isolated from different brain regions, and after a week of cell culture, non-astrocytic cells are shaken off by placing the culture dishes on a shaker for 6 h in the incubator. The remaining astrocytes are then passaged into new plates with an astrocyte-specific medium (termed NB+H). NB+H contains low concentrations of heparin-binding EGF-like growth factor (HBEGF), which is used in place of serum in medium. After growing in NB+H, AWESAM astrocytes have a stellate morphology and feature fine processes. Moreover, these astrocytes have more in vivo-like gene expression than astrocytes generated by previously published methods. Ca2+ imaging, vesicle dynamics, and other events close to the membrane can thus be studied in the fine astrocytic processes in vitro, e.g., using live cell confocal or TIRF microscopy. Notably, AWESAM astrocytes also exhibit spontaneous Ca2+ signaling similar to astrocytes in vivo.

  4. Simple and fast multiplex PCR method for detection of species origin in meat products.

    PubMed

    Izadpanah, Mehrnaz; Mohebali, Nazanin; Elyasi Gorji, Zahra; Farzaneh, Parvaneh; Vakhshiteh, Faezeh; Shahzadeh Fazeli, Seyed Abolhassan

    2018-02-01

    Identification of animal species is one of the major concerns in food regulatory control and quality assurance systems. Different approaches have been used for species identification of animal-origin feedstuffs. This study aimed to develop a multiplex PCR approach to detect the origin of meat and meat products. Specific primers were designed based on the conserved region of the mitochondrial Cytochrome C Oxidase subunit I (COX1) gene. This method could successfully distinguish the origin of pig, camel, sheep, donkey, goat, cow, and chicken in one single reaction. Since the PCR products derived from each species have unique molecular weights, the amplified products could be separated by electrophoresis and analyzed based on their size. Due to the synchronized amplification of segments within a single PCR reaction, multiplex PCR is considered a simple, fast, and inexpensive technique that can be applied for identification of meat products in the food industry. This technique is now regarded as a practical method to identify species origin and could be further applied to the identification of animal feedstuffs.

  5. Fast imaging of live organisms with sculpted light sheets

    NASA Astrophysics Data System (ADS)

    Chmielewski, Aleksander K.; Kyrsting, Anders; Mahou, Pierre; Wayland, Matthew T.; Muresan, Leila; Evers, Jan Felix; Kaminski, Clemens F.

    2015-04-01

    Light-sheet microscopy is an increasingly popular technique in the life sciences owing to its fast 3D imaging of fluorescent samples with low phototoxicity compared with confocal methods. In this work we present a new, fast, flexible and simple-to-implement method to optimize the illumination light-sheet to the requirement at hand. A telescope composed of two electrically tuneable lenses enables us to define the thickness and position of the light-sheet independently but accurately within milliseconds, and therefore to optimize the image quality of the features of interest interactively. We demonstrated the practical benefit of this technique by 1) assembling large fields of view from tiled single exposures, each with individually optimized illumination settings; and 2) sculpting the light-sheet to trace complex sample shapes within single exposures. This technique proved compatible with confocal line-scanning detection, further improving image contrast and resolution. Finally, we determined the effect of light-sheet optimization in the context of scattering tissue, devising procedures for balancing image quality, field of view and acquisition speed.

  6. Fast Solvers for Moving Material Interfaces

    DTIC Science & Technology

    2008-01-01

    interface method—with the semi-Lagrangian contouring method developed in References [16–20]. We are now finalizing portable C/C++ codes for fast adaptive ... stepping scheme couples a CIR predictor with a trapezoidal corrector using the velocity evaluated from the CIR approximation. It combines the ... formula with efficient geometric algorithms and fast, accurate contouring techniques. A modular adaptive implementation with fast new geometry modules

  7. A simple and fast physics-based analytical method to calculate therapeutic and stray doses from external beam, megavoltage x-ray therapy

    PubMed Central

    Wilson, Lydia J; Newhauser, Wayne D

    2015-01-01

    State-of-the-art radiotherapy treatment planning systems provide reliable estimates of the therapeutic radiation but are known to underestimate or neglect the stray radiation exposures. Most commonly, stray radiation exposures are reconstructed using empirical formulas or lookup tables. The purpose of this study was to develop the basic physics of a model capable of calculating the total absorbed dose both inside and outside of the therapeutic radiation beam for external beam photon therapy. The model was developed using measurements of total absorbed dose in a water-box phantom from a 6 MV medical linear accelerator to calculate dose profiles in both the in-plane and cross-plane direction for a variety of square field sizes and depths in water. The water-box phantom facilitated development of the basic physical aspects of the model. RMS discrepancies between measured and calculated total absorbed dose values in water were less than 9.3% for all fields studied. Computation times for 10 million dose points within a homogeneous phantom were approximately 4 minutes. These results suggest that the basic physics of the model are sufficiently simple, fast, and accurate to serve as a foundation for a variety of clinical and research applications, some of which may require that the model be extended or simplified based on the needs of the user. A potentially important advantage of a physics-based approach is that the model is more readily adaptable to a wide variety of treatment units and treatment techniques than with empirical models. PMID:26040833

  8. A simple and fast physics-based analytical method to calculate therapeutic and stray doses from external beam, megavoltage x-ray therapy.

    PubMed

    Jagetic, Lydia J; Newhauser, Wayne D

    2015-06-21

    State-of-the-art radiotherapy treatment planning systems provide reliable estimates of the therapeutic radiation but are known to underestimate or neglect the stray radiation exposures. Most commonly, stray radiation exposures are reconstructed using empirical formulas or lookup tables. The purpose of this study was to develop the basic physics of a model capable of calculating the total absorbed dose both inside and outside of the therapeutic radiation beam for external beam photon therapy. The model was developed using measurements of total absorbed dose in a water-box phantom from a 6 MV medical linear accelerator to calculate dose profiles in both the in-plane and cross-plane direction for a variety of square field sizes and depths in water. The water-box phantom facilitated development of the basic physical aspects of the model. RMS discrepancies between measured and calculated total absorbed dose values in water were less than 9.3% for all fields studied. Computation times for 10 million dose points within a homogeneous phantom were approximately 4 min. These results suggest that the basic physics of the model are sufficiently simple, fast, and accurate to serve as a foundation for a variety of clinical and research applications, some of which may require that the model be extended or simplified based on the needs of the user. A potentially important advantage of a physics-based approach is that the model is more readily adaptable to a wide variety of treatment units and treatment techniques than with empirical models.

  9. Multimodal Spatial Calibration for Accurately Registering EEG Sensor Positions

    PubMed Central

    Chen, Shengyong; Xiao, Gang; Li, Xiaoli

    2014-01-01

    This paper proposes a fast and accurate method to calibrate multiple multimodal sensors using a novel photogrammetry system for fast localization of EEG sensors. The EEG sensors are placed on the human head and the multimodal sensors are installed around the head to simultaneously obtain all EEG sensor positions. A multiple-view calibration process is implemented to obtain the transformations between the views. We first develop an efficient local repair algorithm to improve the depth map, and then design a special calibration body. Based on these, accurate and robust calibration results can be achieved. We evaluate the proposed method using the corners of a chessboard calibration plate. Experimental results demonstrate that the proposed method achieves good performance and can be further applied to EEG source localization on the human brain. PMID:24803954

  10. A simple risk score for identifying individuals with impaired fasting glucose in the Southern Chinese population.

    PubMed

    Wang, Hui; Liu, Tao; Qiu, Quan; Ding, Peng; He, Yan-Hui; Chen, Wei-Qing

    2015-01-23

    This study aimed to develop and validate a simple risk score for detecting individuals with impaired fasting glucose (IFG) among the Southern Chinese population. A sample of participants aged ≥20 years and without known diabetes from the 2006-2007 Guangzhou diabetes cross-sectional survey was used to develop separate risk scores for men and women. The participants completed a self-administered structured questionnaire and underwent simple clinical measurements. The risk scores were developed by multiple logistic regression analysis. External validation was performed based on three other studies: the 2007 Zhuhai rural population-based study, the 2008-2010 Guangzhou diabetes cross-sectional study and the 2007 Tibet population-based study. Performance of the scores was measured with the Hosmer-Lemeshow goodness-of-fit test and ROC c-statistic. Age, waist circumference, body mass index and family history of diabetes were included in the risk score for both men and women, with the additional factor of hypertension for men. The ROC c-statistic was 0.70 for both men and women in the derivation samples. Risk scores of ≥28 for men and ≥18 for women showed respective sensitivity, specificity, positive predictive value and negative predictive value of 56.6%, 71.7%, 13.0% and 96.0% for men and 68.7%, 60.2%, 11% and 96.0% for women in the derivation population. The scores performed comparably with the Zhuhai rural sample and the 2008-2010 Guangzhou urban samples but poorly in the Tibet sample. The performance of pre-existing USA, Shanghai, and Chengdu risk scores was poorer in our population than in their original study populations. The results suggest that the developed simple IFG risk scores can be generalized in Guangzhou city and nearby rural regions and may help primary health care workers to identify individuals with IFG in their practice.

  11. A Simple Risk Score for Identifying Individuals with Impaired Fasting Glucose in the Southern Chinese Population

    PubMed Central

    Wang, Hui; Liu, Tao; Qiu, Quan; Ding, Peng; He, Yan-Hui; Chen, Wei-Qing

    2015-01-01

    This study aimed to develop and validate a simple risk score for detecting individuals with impaired fasting glucose (IFG) among the Southern Chinese population. A sample of participants aged ≥20 years and without known diabetes from the 2006–2007 Guangzhou diabetes cross-sectional survey was used to develop separate risk scores for men and women. The participants completed a self-administered structured questionnaire and underwent simple clinical measurements. The risk scores were developed by multiple logistic regression analysis. External validation was performed based on three other studies: the 2007 Zhuhai rural population-based study, the 2008–2010 Guangzhou diabetes cross-sectional study and the 2007 Tibet population-based study. Performance of the scores was measured with the Hosmer-Lemeshow goodness-of-fit test and ROC c-statistic. Age, waist circumference, body mass index and family history of diabetes were included in the risk score for both men and women, with the additional factor of hypertension for men. The ROC c-statistic was 0.70 for both men and women in the derivation samples. Risk scores of ≥28 for men and ≥18 for women showed respective sensitivity, specificity, positive predictive value and negative predictive value of 56.6%, 71.7%, 13.0% and 96.0% for men and 68.7%, 60.2%, 11% and 96.0% for women in the derivation population. The scores performed comparably with the Zhuhai rural sample and the 2008–2010 Guangzhou urban samples but poorly in the Tibet sample. The performance of pre-existing USA, Shanghai, and Chengdu risk scores was poorer in our population than in their original study populations. The results suggest that the developed simple IFG risk scores can be generalized in Guangzhou city and nearby rural regions and may help primary health care workers to identify individuals with IFG in their practice. PMID:25625405

  12. Profitable capitation requires accurate costing.

    PubMed

    West, D A; Hicks, L L; Balas, E A; West, T D

    1996-01-01

    In the name of costing accuracy, nurses are asked to track inventory use on a per-treatment basis, while more significant costs, such as general overhead and nursing salaries, are usually allocated to patients or treatments on an average-cost basis. Accurate treatment costing and financial viability require analysis of all resources actually consumed in treatment delivery, including nursing services and inventory. More precise costing information enables more profitable decisions, as demonstrated by comparing the ratio-of-cost-to-treatment (aggregate costing) method with alternative activity-based costing (ABC) methods. Nurses must participate in this costing process to ensure that capitation bids are based upon accurate costs rather than simple averages.

  13. A deep learning approach to estimate stress distribution: a fast and accurate surrogate of finite-element analysis.

    PubMed

    Liang, Liang; Liu, Minliang; Martin, Caitlin; Sun, Wei

    2018-01-01

    Structural finite-element analysis (FEA) has been widely used to study the biomechanics of human tissues and organs, as well as tissue-medical device interactions and treatment strategies. However, patient-specific FEA models usually require complex procedures to set up and long computing times to obtain final simulation results, preventing prompt feedback to clinicians in time-sensitive clinical applications. In this study, using machine learning techniques, we developed a deep learning (DL) model to directly estimate the stress distributions of the aorta. The DL model was designed and trained to take the FEA input and directly output the aortic wall stress distributions, bypassing the FEA calculation process. The trained DL model is capable of predicting the stress distributions with average errors of 0.492% and 0.891% in the von Mises stress distribution and peak von Mises stress, respectively. To our knowledge, this is the first study to demonstrate the feasibility and great potential of using the DL technique as a fast and accurate surrogate of FEA for stress analysis. © 2018 The Author(s).
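
    A minimal sketch of the surrogate idea: fit a small fully connected network to input-output pairs that would, in the real study, come from FEA. The shape/load encoding, data and network size below are placeholders, not the authors' architecture.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      n_cases, n_features, n_nodes = 500, 20, 300

      X = rng.normal(size=(n_cases, n_features))          # placeholder shape/pressure encoding
      W = rng.normal(size=(n_features, n_nodes))
      stress = np.tanh(X @ W) * 200.0 + 250.0             # synthetic "FEA" nodal stresses (kPa)

      surrogate = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=1500,
                               random_state=0).fit(X[:400], stress[:400])

      pred = surrogate.predict(X[400:])                   # predict stresses for unseen cases
      rel_err = np.abs(pred - stress[400:]) / np.abs(stress[400:])
      print("mean relative error on held-out cases:", rel_err.mean())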

  14. New fast least-squares algorithm for estimating the best-fitting parameters due to simple geometric-structures from gravity anomalies.

    PubMed

    Essa, Khalid S

    2014-01-01

    A new fast least-squares method is developed to estimate the shape factor (q-parameter) of a buried structure using normalized residual anomalies obtained from gravity data. The problem of shape factor estimation is transformed into a problem of finding a solution of a non-linear equation of the form f(q) = 0 by defining the anomaly value at the origin and at different points on the profile (N-value). Procedures are also formulated to estimate the depth (z-parameter) and the amplitude coefficient (A-parameter) of the buried structure. The method is simple and rapid for estimating the parameters that produce gravity anomalies. This technique is used for a class of geometrically simple anomalous bodies, including the semi-infinite vertical cylinder, the infinitely long horizontal cylinder, and the sphere. The technique is tested and verified on theoretical models with and without random errors. It is also successfully applied to real data sets from Senegal and India, and the inverted parameters are in good agreement with the known actual values.

  15. New fast least-squares algorithm for estimating the best-fitting parameters due to simple geometric-structures from gravity anomalies

    PubMed Central

    Essa, Khalid S.

    2013-01-01

    A new fast least-squares method is developed to estimate the shape factor (q-parameter) of a buried structure using normalized residual anomalies obtained from gravity data. The problem of shape factor estimation is transformed into a problem of finding a solution of a non-linear equation of the form f(q) = 0 by defining the anomaly value at the origin and at different points on the profile (N-value). Procedures are also formulated to estimate the depth (z-parameter) and the amplitude coefficient (A-parameter) of the buried structure. The method is simple and rapid for estimating the parameters that produce gravity anomalies. This technique is used for a class of geometrically simple anomalous bodies, including the semi-infinite vertical cylinder, the infinitely long horizontal cylinder, and the sphere. The technique is tested and verified on theoretical models with and without random errors. It is also successfully applied to real data sets from Senegal and India, and the inverted parameters are in good agreement with the known actual values. PMID:25685472

  16. Fast and Accurate Construction of Ultra-Dense Consensus Genetic Maps Using Evolution Strategy Optimization

    PubMed Central

    Mester, David; Ronin, Yefim; Schnable, Patrick; Aluru, Srinivas; Korol, Abraham

    2015-01-01

    Our aim was to develop a fast and accurate algorithm for constructing consensus genetic maps for chip-based SNP genotyping data with a high proportion of shared markers between mapping populations. Chip-based genotyping of SNP markers allows the production of high-density genetic maps with a relatively standardized set of marker loci for different mapping populations. The availability of a standard high-throughput mapping platform simplifies consensus analysis by ignoring unique markers at the stage of consensus mapping, thereby reducing the mathematical complexity of the problem and, in turn, allowing larger mapping data sets to be analyzed using global rather than local optimization criteria. Our three-phase analytical scheme includes automatic selection of ~100-300 of the most informative (resolvable by recombination) markers per linkage group, building a stable skeletal marker order for each data set and verifying it using jackknife re-sampling, and consensus mapping analysis based on a global optimization criterion. A novel Evolution Strategy optimization algorithm with a global optimization criterion presented in this paper is able to generate high-quality, ultra-dense consensus maps with many thousands of markers per genome. This algorithm utilizes "potentially good orders" both in the initial solution and in the new mutation procedures that generate trial solutions, enabling a consensus order to be obtained in reasonable time. The developed algorithm, tested on a wide range of simulated data and real-world data (Arabidopsis), outperformed two tested state-of-the-art algorithms in mapping accuracy and computation time. PMID:25867943

  17. A Simple and Accurate Network for Hydrogen and Carbon Chemistry in the Interstellar Medium

    NASA Astrophysics Data System (ADS)

    Gong, Munan; Ostriker, Eve C.; Wolfire, Mark G.

    2017-07-01

    Chemistry plays an important role in the interstellar medium (ISM), regulating the heating and cooling of the gas and determining abundances of molecular species that trace gas properties in observations. Although solving the time-dependent equations is necessary for accurate abundances and temperature in the dynamic ISM, a full chemical network is too computationally expensive to incorporate into numerical simulations. In this paper, we propose a new simplified chemical network for hydrogen and carbon chemistry in the atomic and molecular ISM. We compare results from our chemical network in detail with results from a full photodissociation region (PDR) code, and also with the Nelson & Langer (NL99) network previously adopted in the simulation literature. We show that our chemical network gives similar results to the PDR code in the equilibrium abundances of all species over a wide range of densities, temperature, and metallicities, whereas the NL99 network shows significant disagreement. Applying our network to 1D models, we find that the CO-dominated regime delimits the coldest gas and that the corresponding temperature tracks the cosmic-ray ionization rate in molecular clouds. We provide a simple fit for the locus of CO-dominated regions as a function of gas density and column. We also compare with observations of diffuse and translucent clouds. We find that the CO, CHx, and OHx abundances are consistent with equilibrium predictions for densities n = 100-1000 cm^-3, but the predicted equilibrium C abundance is higher than that seen in observations, signaling the potential importance of non-equilibrium/dynamical effects.

  18. Simple but accurate GCM-free approach for quantifying anthropogenic climate change

    NASA Astrophysics Data System (ADS)

    Lovejoy, S.

    2014-12-01

    We are so used to analysing the climate with the help of giant computer models (GCMs) that it is easy to get the impression that they are indispensable. Yet anthropogenic warming is so large (roughly 0.9 °C) that it turns out to be straightforward to quantify it with more empirically based methodologies that can be readily understood by the layperson. The key is to use the CO2 forcing as a linear surrogate for all the anthropogenic effects from 1880 to the present (implicitly including all effects due to greenhouse gases, aerosols and land use changes). To a good approximation, double the economic activity, double the effects. The relationship between the forcing and global mean temperature is extremely linear, as can be seen graphically and understood without fancy statistics [Lovejoy, 2014a] (see the attached figure and http://www.physics.mcgill.ca/~gang/Lovejoy.htm). To an excellent approximation, the deviations from the linear forcing-temperature relation can be interpreted as the natural variability. For example, this direct yet accurate approach makes it graphically obvious that the "pause" or "hiatus" in the warming since 1998 is simply a natural cooling event that has roughly offset the anthropogenic warming [Lovejoy, 2014b]. Rather than trying to prove that the warming is anthropogenic, with a little extra work (and some nonlinear geophysics theory and pre-industrial multiproxies) we can disprove the competing theory that it is natural. This approach leads to the estimate that the probability of the industrial-scale warming being a giant natural fluctuation is ≈0.1%: it can be dismissed. This removes the last climate-skeptic argument - that the models are wrong and the warming is natural - and finally allows for closure of the debate. In this talk we argue that this new, direct, simple, intuitive approach provides an indispensable tool for communicating - and convincing - the public of both the reality and the amplitude of anthropogenic warming
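
    The core arithmetic is an ordinary linear regression of global temperature on a CO2-based forcing surrogate, with the residuals read as natural variability. The sketch below uses short synthetic series and log2(CO2) as the surrogate purely for illustration; it is not the observational analysis of the cited papers.

      import numpy as np

      rng = np.random.default_rng(0)
      years = np.arange(1880, 2021)
      co2 = 291.0 * np.exp(0.0008 * (years - 1880) ** 1.3)       # made-up CO2 series (ppm)
      forcing = np.log2(co2 / 277.0)                             # doublings relative to pre-industrial
      temp = 2.3 * forcing + rng.normal(scale=0.12, size=years.size)  # synthetic anomalies (deg C)

      slope, intercept = np.polyfit(forcing, temp, 1)
      anthropogenic = slope * forcing + intercept                # linear anthropogenic component
      natural = temp - anthropogenic                             # residuals: natural variability

      print(f"effective sensitivity: {slope:.2f} deg C per CO2 doubling")
      print(f"anthropogenic warming 1880-2020: {anthropogenic[-1] - anthropogenic[0]:.2f} deg C")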

  19. Simple Test Functions in Meshless Local Petrov-Galerkin Methods

    NASA Technical Reports Server (NTRS)

    Raju, Ivatury S.

    2016-01-01

    Two meshless local Petrov-Galerkin (MLPG) methods, based on two different trial functions but using a simple linear test function, were developed for beam and column problems. These methods used generalized moving least squares (GMLS) and radial basis (RB) interpolation functions as trial functions. Both methods were tested on various patch test problems and passed them successfully. The methods were then applied to various beam vibration problems and to problems involving Euler and Beck's columns. Both methods yielded accurate solutions for all problems studied. The simple linear test function offers considerable savings in computing effort, as the domain integrals involved in the weak form are avoided. The two methods based on this simple linear test function produced accurate results for frequencies and buckling loads. Of the two methods studied, the method with radial basis trial functions is particularly attractive as it is simple, accurate, and robust.

  20. An efficient and accurate 3D displacements tracking strategy for digital volume correlation

    NASA Astrophysics Data System (ADS)

    Pan, Bing; Wang, Bo; Wu, Dafang; Lubineau, Gilles

    2014-07-01

    Owing to its inherent computational complexity, practical implementation of digital volume correlation (DVC) for internal displacement and strain mapping faces important challenges in computational efficiency. In this work, an efficient and accurate 3D displacement tracking strategy is proposed for fast DVC calculation. The efficiency advantage is achieved through three improvements. First, to eliminate the need to update the Hessian matrix in each iteration, an efficient 3D inverse compositional Gauss-Newton (3D IC-GN) algorithm is introduced to replace existing forward additive algorithms for accurate sub-voxel displacement registration. Second, to ensure that the 3D IC-GN algorithm converges accurately and rapidly and to avoid time-consuming integer-voxel displacement searching, a generalized reliability-guided displacement tracking strategy is designed to transfer an accurate and complete initial guess of deformation to each calculation point from its computed neighbors. Third, to avoid repeated computation of sub-voxel intensity interpolation coefficients, an interpolation-coefficient lookup table is established for tricubic interpolation. The computational complexity of the proposed fast DVC and of existing typical DVC algorithms is first analyzed quantitatively in terms of the necessary arithmetic operations. Numerical tests are then performed to verify the performance of the fast DVC algorithm in terms of measurement accuracy and computational efficiency. The experimental results indicate that, compared with the existing DVC algorithm, the presented fast DVC algorithm produces similar precision and slightly higher accuracy at a substantially reduced computational cost.

  1. Validation of simple indexes to assess insulin sensitivity during pregnancy in Wistar and Sprague-Dawley rats.

    PubMed

    Cacho, J; Sevillano, J; de Castro, J; Herrera, E; Ramos, M P

    2008-11-01

    Insulin resistance plays a role in the pathogenesis of diabetes, including gestational diabetes. The glucose clamp is considered the gold standard for determining in vivo insulin sensitivity, both in human and in animal models. However, the clamp is laborious, time consuming and, in animals, requires anesthesia and collection of multiple blood samples. In human studies, a number of simple indexes, derived from fasting glucose and insulin levels, have been obtained and validated against the glucose clamp. However, these indexes have not been validated in rats and their accuracy in predicting altered insulin sensitivity remains to be established. In the present study, we have evaluated whether indirect estimates based on fasting glucose and insulin levels are valid predictors of insulin sensitivity in nonpregnant and 20-day-pregnant Wistar and Sprague-Dawley rats. We have analyzed the homeostasis model assessment of insulin resistance (HOMA-IR), the quantitative insulin sensitivity check index (QUICKI), and the fasting glucose-to-insulin ratio (FGIR) by comparing them with the insulin sensitivity (SI(Clamp)) values obtained during the hyperinsulinemic-isoglycemic clamp. We have performed a calibration analysis to evaluate the ability of these indexes to accurately predict insulin sensitivity as determined by the reference glucose clamp. Finally, to assess the reliability of these indexes for the identification of animals with impaired insulin sensitivity, performance of the indexes was analyzed by receiver operating characteristic (ROC) curves in Wistar and Sprague-Dawley rats. We found that HOMA-IR, QUICKI, and FGIR correlated significantly with SI(Clamp), exhibited good sensitivity and specificity, accurately predicted SI(Clamp), and yielded lower insulin sensitivity in pregnant than in nonpregnant rats. Together, our data demonstrate that these indexes provide an easy and accurate measure of insulin sensitivity during pregnancy in the rat.
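
    The three surrogate indexes compared here have simple closed forms; the definitions below follow their common use in human studies (fasting glucose in mg/dL, insulin in µU/mL), so treat the exact units and constants as illustrative for the rat setting.

      import math

      def homa_ir(glucose_mg_dl, insulin_uU_ml):
          # equivalently (glucose in mmol/L * insulin in uU/mL) / 22.5
          return glucose_mg_dl * insulin_uU_ml / 405.0

      def quicki(glucose_mg_dl, insulin_uU_ml):
          return 1.0 / (math.log10(glucose_mg_dl) + math.log10(insulin_uU_ml))

      def fgir(glucose_mg_dl, insulin_uU_ml):
          return glucose_mg_dl / insulin_uU_ml

      g, i = 95.0, 12.0        # hypothetical fasting values
      print(homa_ir(g, i), quicki(g, i), fgir(g, i))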

  2. A Simple and Fast Method for the Production and Characterization of Methylic and Ethylic Biodiesels from Tucum Oil via an Alkaline Route

    PubMed Central

    de Oliveira, Marcelo Firmino; Vieira, Andressa Tironi; Batista, Antônio Carlos Ferreira; Rodrigues, Hugo de Souza; Stradiotto, Nelson Ramos

    2011-01-01

    A simple, fast, and complete route for the production of methylic and ethylic biodiesel from tucum oil is described. Aliquots of the oil obtained directly from pressed tucum (pulp and almonds) were treated with potassium methoxide or ethoxide at 40°C for 40 min. The biodiesel formed was removed from the reactor and washed with 0.1 M HCl aqueous solution. A simple distillation at 100°C was carried out in order to remove water and alcohol species from the biodiesel. The oxidative stability index was obtained for the tucum oil as well as the methylic and ethylic biodiesel at 6.13, 2.90, and 2.80 h, for storage times higher than 8 days. Quality control of the original oil and of the methylic and ethylic biodiesels, such as the amount of glycerin produced during the transesterification process, was accomplished by the TLC, GC-MS, and FT-IR techniques. The results obtained in this study indicate the potential for biofuel production by simple treatment of tucum, an important Amazonian fruit. PMID:21629751

  3. A Fast, Accurate and Sensitive GC-FID Method for the Analyses of Glycols in Water and Urine

    NASA Technical Reports Server (NTRS)

    Kuo, C. Mike; Alverson, James T.; Gazda, Daniel B.

    2017-01-01

    Glycols, specifically ethylene glycol and 1,2-propanediol, are some of the major organic compounds found in the humidity condensate samples collected on the International Space Station. The current analytical method for glycols is a GC/MS method with direct sample injection. This method is simple and fast, but it is not very sensitive: reporting limits for ethylene glycol and 1,2-propanediol are only 1 ppm. A much more sensitive GC/FID method was developed, in which glycols are derivatized with benzoyl chloride for 10 minutes before being extracted with hexane. Using 1,3-propanediol as an internal standard, the detection limit of the GC/FID method was determined to be 50 ppb, and the analysis takes only 7 minutes. Data from the GC/MS and the new GC/FID methods show excellent agreement with each other. Factors affecting the sensitivity, including sample volume, NaOH concentration and volume, volume of benzoyl chloride, and reaction time and temperature, were investigated. Interferences during derivatization and possible methods to reduce them were also investigated.
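
    Quantification against the 1,3-propanediol internal standard amounts to fitting the analyte-to-internal-standard peak-area ratio against concentration and inverting the line for an unknown. A minimal sketch with made-up numbers:

      import numpy as np

      cal_conc = np.array([0.05, 0.1, 0.5, 1.0, 5.0])          # ppm standards (hypothetical)
      area_ratio = np.array([0.021, 0.043, 0.22, 0.45, 2.21])  # analyte area / IS area (hypothetical)

      slope, intercept = np.polyfit(cal_conc, area_ratio, 1)   # internal-standard calibration line

      unknown_ratio = 0.31
      unknown_conc = (unknown_ratio - intercept) / slope
      print(f"estimated glycol concentration: {unknown_conc:.2f} ppm")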

  4. Accurate Encoding and Decoding by Single Cells: Amplitude Versus Frequency Modulation

    PubMed Central

    Micali, Gabriele; Aquino, Gerardo; Richards, David M.; Endres, Robert G.

    2015-01-01

    Cells sense external concentrations and, via biochemical signaling, respond by regulating the expression of target proteins. In both signaling networks and gene regulation there are two main mechanisms by which the concentration can be encoded internally: amplitude modulation (AM), where the absolute concentration of an internal signaling molecule encodes the stimulus, and frequency modulation (FM), where the period between successive bursts represents the stimulus. Although both mechanisms have been observed in biological systems, the question of when it is beneficial for cells to use either AM or FM is largely unanswered. Here, we first consider a simple model for a single receptor (or ion channel), which can either signal continuously whenever a ligand is bound, or produce a burst of signaling molecules upon receptor binding. We find that bursty signaling is more accurate than continuous signaling only for sufficiently fast dynamics. This suggests that modulation based on bursts may be more common in signaling networks than in gene regulation. We then extend our model to multiple receptors, where continuous and bursty signaling are equivalent to AM and FM, respectively, finding that AM is always more accurate. This implies that the reason some cells use FM is related to factors other than accuracy, such as the ability to coordinate expression of multiple genes or to implement threshold crossing mechanisms. PMID:26030820

  5. Simple and fast polydimethylsiloxane (PDMS) patterning using a cutting plotter and vinyl adhesives to achieve etching results.

    PubMed

    Hyun Kim; Sun-Young Yoo; Ji Sung Kim; Zihuan Wang; Woon Hee Lee; Kyo-In Koo; Jong-Mo Seo; Dong-Il Cho

    2017-07-01

    Inhibition of polydimethylsiloxane (PDMS) polymerization could be observed when PDMS was spin-coated over vinyl substrates. The degree of polymerization, partial or full curing, depended on the PDMS thickness coated over the vinyl substrate. This characteristic was exploited to achieve a simple and fast PDMS patterning method using a vinyl adhesive layer patterned with a cutting plotter. The proposed patterning method produced results resembling PDMS etching. Therefore, patterning PDMS over PDMS, glass, silicon, and gold substrates was tested to compare the results with conventional etching methods. Vinyl stencils with widths ranging from 200 μm to 1500 μm were used for the procedure. To evaluate the accuracy of the cutting plotter, the stencil widths designed in AutoCAD software were compared with the actual stencil widths. Furthermore, the accuracy of the method was also evaluated by comparing the widths of the actual stencils with those of the etched PDMS results.

  6. Fast and Accurate Microplate Method (Biolog MT2) for Detection of Fusarium Fungicides Resistance/Sensitivity.

    PubMed

    Frąc, Magdalena; Gryta, Agata; Oszust, Karolina; Kotowicz, Natalia

    2016-01-01

    The need to find fungicides against Fusarium is a key step in chemical plant protection and in using appropriate chemical agents. Existing, conventional methods for evaluating the resistance of Fusarium isolates to fungicides are costly, time-consuming and potentially environmentally harmful due to the use of large amounts of potentially toxic chemicals. Therefore, the development of fast, accurate and effective methods for detecting Fusarium resistance to fungicides is urgently required. The MT2 microplate (Biolog(TM)) method is traditionally used for bacteria identification and the evaluation of their ability to utilize different carbon substrates. However, to the best of our knowledge, there are no reports concerning the use of this technical tool to determine the fungicide resistance of Fusarium isolates. For this reason, the objectives of this study were to develop a fast method for detecting Fusarium resistance to fungicides and to validate its effectiveness against the traditional hole-plate assay. In the presented study, the MT2 microplate-based assay was evaluated for potential use as an alternative resistance detection method. This was carried out using three commercially available fungicides containing the following active substances: triazoles (tebuconazole), benzimidazoles (carbendazim) and strobilurins (azoxystrobin), in six concentrations (0, 0.0005, 0.005, 0.05, 0.1, 0.2%), for nine selected Fusarium isolates. The particular concentrations of each fungicide were loaded into MT2 microplate wells, and the wells were inoculated with Fusarium mycelium suspended in PM4-IF inoculating fluid. Before inoculation, the suspension was standardized for each isolate to 75% transmittance. The traditional hole-plate method was used as a control assay; the fungicide concentrations in the control method were 0, 0.0005, 0.005, 0.05, 0.5, 1, 2, 5, 10, 25, and 50%. Strong relationships between MT2 microplate and traditional hole

  7. Fast and Accurate Microplate Method (Biolog MT2) for Detection of Fusarium Fungicides Resistance/Sensitivity

    PubMed Central

    Frąc, Magdalena; Gryta, Agata; Oszust, Karolina; Kotowicz, Natalia

    2016-01-01

    The need to find fungicides against Fusarium is a key step in chemical plant protection and in using appropriate chemical agents. Existing, conventional methods for evaluating the resistance of Fusarium isolates to fungicides are costly, time-consuming and potentially environmentally harmful due to the use of large amounts of potentially toxic chemicals. Therefore, the development of fast, accurate and effective methods for detecting Fusarium resistance to fungicides is urgently required. The MT2 microplate (BiologTM) method is traditionally used for bacteria identification and the evaluation of their ability to utilize different carbon substrates. However, to the best of our knowledge, there are no reports concerning the use of this technical tool to determine the fungicide resistance of Fusarium isolates. For this reason, the objectives of this study were to develop a fast method for detecting Fusarium resistance to fungicides and to validate its effectiveness against the traditional hole-plate assay. In the presented study, the MT2 microplate-based assay was evaluated for potential use as an alternative resistance detection method. This was carried out using three commercially available fungicides containing the following active substances: triazoles (tebuconazole), benzimidazoles (carbendazim) and strobilurins (azoxystrobin), in six concentrations (0, 0.0005, 0.005, 0.05, 0.1, 0.2%), for nine selected Fusarium isolates. The particular concentrations of each fungicide were loaded into MT2 microplate wells, and the wells were inoculated with Fusarium mycelium suspended in PM4-IF inoculating fluid. Before inoculation, the suspension was standardized for each isolate to 75% transmittance. The traditional hole-plate method was used as a control assay; the fungicide concentrations in the control method were 0, 0.0005, 0.005, 0.05, 0.5, 1, 2, 5, 10, 25, and 50%. Strong relationships between MT2 microplate and traditional hole

  8. Simple prediction scores predict good and devastating outcomes after stroke more accurately than physicians.

    PubMed

    Reid, John Michael; Dai, Dingwei; Delmonte, Susanna; Counsell, Carl; Phillips, Stephen J; MacLeod, Mary Joan

    2017-05-01

    Physicians are often asked to prognosticate soon after a patient presents with stroke. This study aimed to compare two outcome prediction scores (the Five Simple Variables [FSV] score and the PLAN [Preadmission comorbidities, Level of consciousness, Age, and focal Neurologic deficit] score) with informal prediction by physicians. Demographic and clinical variables were prospectively collected from consecutive patients hospitalised with acute ischaemic or haemorrhagic stroke (2012-13). In-person or telephone follow-up at 6 months established vital and functional status (modified Rankin score [mRS]). The area under the receiver operating curve (AUC) was used to establish prediction score performance. Five hundred and seventy-five patients were included; 46% were female, the median age was 76 years, and 88% had ischaemic stroke. Six months after stroke, 47% of patients had a good outcome (alive and independent, mRS 0-2) and 26% a devastating outcome (dead or severely dependent, mRS 5-6). The FSV and PLAN scores were superior to physician prediction (AUCs of 0.823-0.863 versus 0.773-0.805, P < 0.0001) for good and devastating outcomes. The FSV score was superior to the PLAN score for predicting good outcomes and vice versa for devastating outcomes (P < 0.001). Outcome prediction was more accurate for those with later presentations (>24 hours from onset). The FSV and PLAN scores are validated in this population for outcome prediction after both ischaemic and haemorrhagic stroke. The FSV score is the least complex of all developed scores and can assist outcome prediction by physicians. © The Author 2016. Published by Oxford University Press on behalf of the British Geriatrics Society. All rights reserved. For permissions, please email: journals.permissions@oup.com

  9. The NAFLD Index: A Simple and Accurate Screening Tool for the Prediction of Non-Alcoholic Fatty Liver Disease.

    PubMed

    Ichino, Naohiro; Osakabe, Keisuke; Sugimoto, Keiko; Suzuki, Koji; Yamada, Hiroya; Takai, Hiroji; Sugiyama, Hiroko; Yukitake, Jun; Inoue, Takashi; Ohashi, Koji; Hata, Tadayoshi; Hamajima, Nobuyuki; Nishikawa, Toru; Hashimoto, Senju; Kawabe, Naoto; Yoshioka, Kentaro

    2015-01-01

    Non-alcoholic fatty liver disease (NAFLD) is a common debilitating condition in many industrialized countries that increases the risk of cardiovascular disease. The aim of this study was to derive a simple and accurate screening tool for the prediction of NAFLD in the Japanese population. A total of 945 participants, 279 men and 666 women living in Hokkaido, Japan, were enrolled among residents who attended a health check-up program from 2010 to 2014. Participants with an alcohol consumption > 20 g/day and/or a chronic liver disease, such as chronic hepatitis B, chronic hepatitis C or autoimmune hepatitis, were excluded from this study. Clinical and laboratory data were examined to identify predictive markers of NAFLD. A new predictive index for NAFLD, the NAFLD index, was constructed for men and for women. The NAFLD index for men = -15.5693 + 0.3264 [BMI] + 0.0134 [triglycerides (mg/dl)], and for women = -31.4686 + 0.3683 [BMI] + 2.5699 [albumin (g/dl)] + 4.6740 [ALT/AST] - 0.0379 [HDL cholesterol (mg/dl)]. The AUROC of the NAFLD index for men and for women was 0.87 (95% CI 0.88-1.60) and 0.90 (95% CI 0.66-1.02), respectively. The cut-off point of -5.28 for men predicted NAFLD with an accuracy of 82.8%. For women, the cut-off point of -7.65 predicted NAFLD with an accuracy of 87.7%. A new index for the non-invasive prediction of NAFLD, the NAFLD index, was constructed using available clinical and laboratory data. This index is a simple screening tool to predict the presence of NAFLD.
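
    The reported index is a plain linear combination of routine measurements; the sketch below implements the published formulas and cut-off points, assuming (from the sign of the coefficients) that values above the cut-off predict NAFLD, with example inputs that are hypothetical.

      def nafld_index_men(bmi, triglycerides_mg_dl):
          return -15.5693 + 0.3264 * bmi + 0.0134 * triglycerides_mg_dl

      def nafld_index_women(bmi, albumin_g_dl, alt_ast_ratio, hdl_mg_dl):
          return (-31.4686 + 0.3683 * bmi + 2.5699 * albumin_g_dl
                  + 4.6740 * alt_ast_ratio - 0.0379 * hdl_mg_dl)

      # cut-off points from the abstract: -5.28 for men, -7.65 for women
      print("predict NAFLD (man):", nafld_index_men(26.0, 160.0) >= -5.28)
      print("predict NAFLD (woman):", nafld_index_women(27.5, 4.3, 1.1, 52.0) >= -7.65)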

  10. No Generalization of Practice for Nonzero Simple Addition

    ERIC Educational Resources Information Center

    Campbell, Jamie I. D.; Beech, Leah C.

    2014-01-01

    Several types of converging evidence have suggested recently that skilled adults solve very simple addition problems (e.g., 2 + 1, 4 + 2) using a fast, unconscious counting algorithm. These results stand in opposition to the long-held assumption in the cognitive arithmetic literature that such simple addition problems normally are solved by fact…

  11. A simple and inclusive method to determine the habit plane in transmission electron microscope based on accurate measurement of foil thickness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qiu, Dong, E-mail: d.qiu@uq.edu.au; Zhang, Mingxing

    2014-08-15

    A simple and inclusive method is proposed for accurate determination of the habit plane between bicrystals in the transmission electron microscope. Whilst this method can be regarded as a variant of surface trace analysis, the major innovation lies in the improved accuracy and efficiency of foil thickness measurement, which involves a simple tilt of the thin foil about a permanent tilting axis of the specimen holder, rather than cumbersome tilt about the surface trace of the habit plane. An experimental study has been done to validate the proposed method in determining the habit plane between lamellar α2 plates and the γ matrix in a Ti–Al–Nb alloy. Both high accuracy (± 1°) and high precision (± 1°) have been achieved using the new method. The sources of experimental error as well as the applicability of this method are discussed. Some tips to minimise the experimental errors are also suggested. - Highlights: • An improved algorithm is formulated to measure the foil thickness. • Habit plane can be determined with a single tilt holder based on the new algorithm. • Better accuracy and precision within ± 1° are achievable using the proposed method. • The data for multi-facet determination can be collected simultaneously.

  12. FAST (Four chamber view And Swing Technique) Echo: a Novel and Simple Algorithm to Visualize Standard Fetal Echocardiographic Planes

    PubMed Central

    Yeo, Lami; Romero, Roberto; Jodicke, Cristiano; Oggè, Giovanna; Lee, Wesley; Kusanovic, Juan Pedro; Vaisbuch, Edi; Hassan, Sonia S.

    2010-01-01

    Objective To describe a novel and simple algorithm (FAST Echo: Four chamber view And Swing Technique) to visualize standard diagnostic planes of fetal echocardiography from dataset volumes obtained with spatiotemporal image correlation (STIC) and applying a new display technology (OmniView). Methods We developed an algorithm to image standard fetal echocardiographic planes by drawing four dissecting lines through the longitudinal view of the ductal arch contained in a STIC volume dataset. Three of the lines are locked to provide simultaneous visualization of targeted planes, and the fourth line (unlocked) “swings” through the ductal arch image (“swing technique”), providing an infinite number of cardiac planes in sequence. Each line generated the following plane(s): 1) Line 1: three-vessels and trachea view; 2) Line 2: five-chamber view and long axis view of the aorta (obtained by rotation of the five-chamber view on the y-axis); 3) Line 3: four-chamber view; and 4) “Swing” line: three-vessels and trachea view, five-chamber view and/or long axis view of the aorta, four-chamber view, and stomach. The algorithm was then tested in 50 normal hearts (15.3 – 40 weeks of gestation) and visualization rates for cardiac diagnostic planes were calculated. To determine if the algorithm could identify planes that departed from the normal images, we tested the algorithm in 5 cases with proven congenital heart defects. Results In normal cases, the FAST Echo algorithm (3 locked lines and rotation of the five-chamber view on the y-axis) was able to generate the intended planes (longitudinal view of the ductal arch, pulmonary artery, three-vessels and trachea view, five-chamber view, long axis view of the aorta, four-chamber view): 1) individually in 100% of cases [except for the three-vessel and trachea view, which was seen in 98% (49/50)]; and 2) simultaneously in 98% (49/50). The “swing technique” was able to generate the three-vessels and trachea view, five

  13. Four-chamber view and 'swing technique' (FAST) echo: a novel and simple algorithm to visualize standard fetal echocardiographic planes.

    PubMed

    Yeo, L; Romero, R; Jodicke, C; Oggè, G; Lee, W; Kusanovic, J P; Vaisbuch, E; Hassan, S

    2011-04-01

    To describe a novel and simple algorithm (four-chamber view and 'swing technique' (FAST) echo) for visualization of standard diagnostic planes of fetal echocardiography from dataset volumes obtained with spatiotemporal image correlation (STIC) and applying a new display technology (OmniView). We developed an algorithm to image standard fetal echocardiographic planes by drawing four dissecting lines through the longitudinal view of the ductal arch contained in a STIC volume dataset. Three of the lines are locked to provide simultaneous visualization of targeted planes, and the fourth line (unlocked) 'swings' through the ductal arch image (swing technique), providing an infinite number of cardiac planes in sequence. Each line generates the following plane(s): (a) Line 1: three-vessels and trachea view; (b) Line 2: five-chamber view and long-axis view of the aorta (obtained by rotation of the five-chamber view on the y-axis); (c) Line 3: four-chamber view; and (d) 'swing line': three-vessels and trachea view, five-chamber view and/or long-axis view of the aorta, four-chamber view and stomach. The algorithm was then tested in 50 normal hearts in fetuses at 15.3-40 weeks' gestation and visualization rates for cardiac diagnostic planes were calculated. To determine whether the algorithm could identify planes that departed from the normal images, we tested the algorithm in five cases with proven congenital heart defects. In normal cases, the FAST echo algorithm (three locked lines and rotation of the five-chamber view on the y-axis) was able to generate the intended planes (longitudinal view of the ductal arch, pulmonary artery, three-vessels and trachea view, five-chamber view, long-axis view of the aorta, four-chamber view) individually in 100% of cases (except for the three-vessels and trachea view, which was seen in 98% (49/50)) and simultaneously in 98% (49/50). The swing technique was able to generate the three-vessels and trachea view, five-chamber view and/or long

  14. NetCoffee: a fast and accurate global alignment approach to identify functionally conserved proteins in multiple networks.

    PubMed

    Hu, Jialu; Kehr, Birte; Reinert, Knut

    2014-02-15

    Owing to recent advancements in high-throughput technologies, protein-protein interaction networks of more and more species have become available in public databases. The question of how to identify functionally conserved proteins across species has attracted a lot of attention in computational biology. Network alignments provide a systematic way to solve this problem. However, most existing alignment tools encounter limitations in tackling this problem, so the demand for faster and more efficient alignment tools is growing. We present a fast and accurate algorithm, NetCoffee, which finds a global alignment of multiple protein-protein interaction networks. NetCoffee searches for a global alignment by maximizing a target function using simulated annealing on a set of weighted bipartite graphs that are constructed using a triplet approach similar to T-Coffee. To assess its performance, NetCoffee was applied to four real datasets. Our results suggest that NetCoffee remedies several limitations of previous algorithms, outperforms all existing alignment tools in terms of speed and nevertheless identifies biologically meaningful alignments. The source code and data are freely available for download under the GNU GPL v3 license at https://code.google.com/p/netcoffee/.
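
    A toy illustration of the annealing idea, reduced to two tiny networks and a plain one-to-one node mapping scored by conserved interactions; NetCoffee itself works on multiple networks via weighted bipartite graphs and a T-Coffee-like triplet scheme, so this is only a sketch of the optimization style, not of the tool.

      import math
      import random

      edges_a = {(0, 1), (1, 2), (2, 3), (3, 4)}      # toy network A (a path)
      edges_b = {(4, 0), (0, 1), (1, 2), (2, 3)}      # toy network B (also a path)
      n = 5

      def conserved(mapping):
          """Count network-A interactions preserved under the current node mapping."""
          return sum(1 for (u, v) in edges_a
                     if (mapping[u], mapping[v]) in edges_b or (mapping[v], mapping[u]) in edges_b)

      random.seed(0)
      mapping = list(range(n))                        # candidate assignment of A-nodes to B-nodes
      score, temp = conserved(mapping), 2.0
      for _ in range(5000):
          i, j = random.sample(range(n), 2)
          mapping[i], mapping[j] = mapping[j], mapping[i]       # propose a swap
          new = conserved(mapping)
          if new >= score or random.random() < math.exp((new - score) / temp):
              score = new                                       # accept the move
          else:
              mapping[i], mapping[j] = mapping[j], mapping[i]   # reject: undo the swap
          temp *= 0.999
      print("alignment:", mapping, "conserved interactions:", score)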

  15. Accurate radiative transfer calculations for layered media.

    PubMed

    Selden, Adrian C

    2016-07-01

    Simple yet accurate results for radiative transfer in layered media with discontinuous refractive index are obtained by the method of K-integrals. These are certain weighted integrals applied to the angular intensity distribution at the refracting boundaries. The radiative intensity is expressed as the sum of the asymptotic angular intensity distribution valid in the depth of the scattering medium and a transient term valid near the boundary. Integrated boundary equations are obtained, yielding simple linear equations for the intensity coefficients, enabling the angular emission intensity and the diffuse reflectance (albedo) and transmittance of the scattering layer to be calculated without solving the radiative transfer equation directly. Examples are given of half-space, slab, interface, and double-layer calculations, and extensions to multilayer systems are indicated. The K-integral method is orders of magnitude more accurate than diffusion theory and can be applied to layered scattering media with a wide range of scattering albedos, with potential applications to biomedical and ocean optics.

  16. A simple and efficient HPLC method for benznidazole dosage in human breast milk.

    PubMed

    Marson, María E; Padró, Juan M; Reta, Mario R; Altcheh, Jaime; García-Bournissen, Facundo; Mastrantonio, Guido

    2013-08-01

    Chagas disease is a significant public health problem in Latin America and, due to migration, in other nonendemic regions. The 2 drugs currently available for treatment, nifurtimox and benznidazole (BNZ), are associated with a high risk of toxicity at therapeutic doses. Excretion of drug into human breast milk is a potential source of unwanted exposure and pharmacologic effects in the nursing infant. However, this phenomenon had not been evaluated until now, and measurement techniques for both drugs in milk had not been developed. In this work, we describe the development of a simple and fast method to quantify BNZ in human milk using a pretreatment that involves acid protein precipitation followed by tandem microfiltration, and reverse-phase high-performance liquid chromatography/ultraviolet analysis. It is simple because it takes only 3 steps to obtain a clean extracted solution that is ready to inject into the high-performance liquid chromatography equipment. It is fast because a complete analysis of a sample takes only 36 minutes. Although the composition of human breast milk is highly variable, and lipids are among the most difficult components to clean up in a milk sample, the procedure has proven to be robust and sensitive, with a limit of detection of 0.3 μg/mL and a limit of quantitation of 0.9 μg/mL. Despite a 70% recovery value, which could be considered relatively low, this recovery is reproducible (coefficient of variation <10%) and the analytical response within the linear range is very good (adjusted r = 0.9969). Real human breast milk samples from patients under treatment with BNZ were analyzed to support the validation of the method. The method described is fast, specific, accurate, precise, and sufficiently sensitive in the clinical context for the quantification of BNZ in human milk. For all these reasons, it is suitable for clinical risk evaluation studies.

  17. Simple and accurate methods for quantifying deformation, disruption, and development in biological tissues

    PubMed Central

    Boyle, John J.; Kume, Maiko; Wyczalkowski, Matthew A.; Taber, Larry A.; Pless, Robert B.; Xia, Younan; Genin, Guy M.; Thomopoulos, Stavros

    2014-01-01

    When mechanical factors underlie growth, development, disease or healing, they often function through local regions of tissue where deformation is highly concentrated. Current optical techniques to estimate deformation can lack precision and accuracy in such regions due to challenges in distinguishing a region of concentrated deformation from an error in displacement tracking. Here, we present a simple and general technique for improving the accuracy and precision of strain estimation and an associated technique for distinguishing a concentrated deformation from a tracking error. The strain estimation technique improves accuracy relative to other state-of-the-art algorithms by directly estimating strain fields without first estimating displacements, resulting in a very simple method and low computational cost. The technique for identifying local elevation of strain enables for the first time the successful identification of the onset and consequences of local strain concentrating features such as cracks and tears in a highly strained tissue. We apply these new techniques to demonstrate a novel hypothesis in prenatal wound healing. More generally, the analytical methods we have developed provide a simple tool for quantifying the appearance and magnitude of localized deformation from a series of digital images across a broad range of disciplines. PMID:25165601
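
    For orientation only, the sketch below computes a Green-Lagrange strain tensor from a local affine fit to tracked point pairs. The published technique estimates strain fields directly from images without first estimating displacements, so this conventional baseline only illustrates the strain measure being reported, not the authors' algorithm.

    # Illustrative baseline, not the authors' direct method: least-squares fit of a local
    # affine deformation to corresponding points, then E = 0.5 * (F^T F - I).
    import numpy as np

    def local_strain(ref_pts, def_pts):
        """ref_pts, def_pts: (n, 2) arrays of corresponding points within a small window."""
        X = np.hstack([ref_pts, np.ones((len(ref_pts), 1))])    # affine design matrix
        A, *_ = np.linalg.lstsq(X, def_pts, rcond=None)          # def_pts ~= X @ A
        F = A[:2, :].T                                           # deformation gradient
        return 0.5 * (F.T @ F - np.eye(2))                       # Green-Lagrange strain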

  18. Fast protein folding kinetics

    PubMed Central

    Gelman, Hannah; Gruebele, Martin

    2014-01-01

    Fast folding proteins have been a major focus of computational and experimental study because they are accessible to both techniques: they are small and fast enough to be reasonably simulated with current computational power, but have dynamics slow enough to be observed with specially developed experimental techniques. This coupled study of fast folding proteins has provided insight into the mechanisms which allow some proteins to find their native conformation in well under 1 ms and has uncovered examples of theoretically predicted phenomena such as downhill folding. The study of fast folders also informs our understanding of even “slow” folding processes: fast folders are small, relatively simple protein domains and the principles that govern their folding also govern the folding of more complex systems. This review summarizes the major theoretical and experimental techniques used to study fast folding proteins and provides an overview of the major findings of fast folding research. Finally, we examine the themes that have emerged from studying fast folders and briefly summarize their application to protein folding in general as well as some work that is left to do. PMID:24641816

  19. Accurate, simple, and inexpensive assays to diagnose F8 gene inversion mutations in hemophilia A patients and carriers.

    PubMed

    Dutta, Debargh; Gunasekera, Devi; Ragni, Margaret V; Pratt, Kathleen P

    2016-12-27

    The most frequent mutations resulting in hemophilia A are an intron 22 or intron 1 gene inversion, which together cause ∼50% of severe hemophilia A cases. We report a simple and accurate RNA-based assay to detect these mutations in patients and heterozygous carriers. The assays do not require specialized equipment or expensive reagents; therefore, they may provide useful and economic protocols that could be standardized for central laboratory testing. RNA is purified from a blood sample, and reverse transcription nested polymerase chain reaction (RT-NPCR) reactions amplify DNA fragments with the F8 sequence spanning the exon 22 to 23 splice site (intron 22 inversion test) or the exon 1 to 2 splice site (intron 1 inversion test). These sequences will be amplified only from F8 RNA without an intron 22 or intron 1 inversion mutation, respectively. Additional RT-NPCR reactions are then carried out to amplify the inverted sequences extending from F8 exon 19 to the first in-frame stop codon within intron 22 or a chimeric transcript containing F8 exon 1 and the VBP1 gene. These latter 2 products are produced only by individuals with an intron 22 or intron 1 inversion mutation, respectively. The intron 22 inversion mutations may be further classified (eg, as type 1 or type 2, reflecting the specific homologous recombination sites) by the standard DNA-based "inverse-shifting" PCR assay if desired. Efficient Bcl I and T4 DNA ligase enzymes that cleave and ligate DNA in minutes were used, which is a substantial improvement over previous protocols that required overnight incubations. These protocols can accurately detect F8 inversion mutations via same-day testing of patient samples.

  20. Validation of a simple and fast method to quantify in vitro mineralization with fluorescent probes used in molecular imaging of bone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moester, Martiene J.C.; Schoeman, Monique A.E.; Oudshoorn, Ineke B.

    2014-01-03

    Highlights: •We validate a simple and fast method of quantification of in vitro mineralization. •Fluorescently labeled agents can detect calcium deposits in the mineralized matrix of cell cultures. •Fluorescent signals of the probes correlated with Alizarin Red S staining. -- Abstract: Alizarin Red S staining is the standard method to indicate and quantify matrix mineralization during differentiation of osteoblast cultures. KS483 cells are multipotent mouse mesenchymal progenitor cells that can differentiate into chondrocytes, adipocytes and osteoblasts and are a well-characterized model for the study of bone formation. Matrix mineralization is the last step of differentiation of bone cells and is therefore a very important outcome measure in bone research. Fluorescently labelled calcium chelating agents, e.g. BoneTag and OsteoSense, are currently used for in vivo imaging of bone. The aim of the present study was to validate these probes for fast and simple detection and quantification of in vitro matrix mineralization by KS483 cells and thus enabling high-throughput screening experiments. KS483 cells were cultured under osteogenic conditions in the presence of compounds that either stimulate or inhibit osteoblast differentiation and thereby matrix mineralization. After 21 days of differentiation, fluorescence of stained cultures was quantified with a near-infrared imager and compared to Alizarin Red S quantification. Fluorescence of both probes closely correlated to Alizarin Red S staining in both inhibiting and stimulating conditions. In addition, both compounds displayed specificity for mineralized nodules. We therefore conclude that this method of quantification of bone mineralization using fluorescent compounds is a good alternative for the Alizarin Red S staining.

  1. The new ATLAS Fast Calorimeter Simulation

    NASA Astrophysics Data System (ADS)

    Schaarschmidt, J.; ATLAS Collaboration

    2017-10-01

    Current and future needs for large-scale simulated samples motivate the development of reliable fast simulation techniques. The new Fast Calorimeter Simulation is an improved parameterized response of single particles in the ATLAS calorimeter that aims to accurately emulate the key features of the detailed calorimeter response as simulated with Geant4, yet run approximately ten times faster. Principal component analysis and machine learning techniques are used to improve the performance and decrease the memory need compared to the current version of the ATLAS Fast Calorimeter Simulation. A prototype of this new Fast Calorimeter Simulation is in development and its integration into the ATLAS simulation infrastructure is ongoing.
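
    A minimal sketch of the kind of dimensionality reduction mentioned above: principal component analysis applied to per-layer shower energy fractions, so that single-particle showers can be stored and regenerated from a few components. The data, layer count and component count below are invented; this is not ATLAS software.

    # Not ATLAS code: PCA compression of stand-in "shower" energy fractions.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    showers = rng.dirichlet(alpha=np.ones(24), size=5000)   # stand-in for per-layer energy fractions

    pca = PCA(n_components=5)
    coeffs = pca.fit_transform(showers)                      # compact representation to store
    regenerated = pca.inverse_transform(coeffs)              # approximate shower at simulation time
    print("explained variance:", pca.explained_variance_ratio_.sum())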

  2. A simple, fast and sensitive screening LC-ESI-MS/MS method for antibiotics in fish.

    PubMed

    Guidi, Letícia Rocha; Santos, Flávio Alves; Ribeiro, Ana Cláudia S R; Fernandes, Christian; Silva, Luiza H M; Gloria, Maria Beatriz A

    2017-01-15

    The objective of this study was to develop and validate a fast, sensitive and simple liquid chromatography-electrospray ionization-tandem mass spectrometry (LC-ESI-MS/MS) method for the screening of six classes of antibiotics (aminoglycosides, beta-lactams, macrolides, quinolones, sulfonamides and tetracyclines) in fish. Samples were extracted with trichloroacetic acid. LC separation was achieved on a Zorbax Eclipse XDB C18 column and gradient elution using 0.1% heptafluorobutyric acid in water and acetonitrile as mobile phase. Analysis was carried out in multiple reaction monitoring mode via electrospray interface operated in the positive ionization mode, with sulfaphenazole as internal standard. The method was suitable for routine screening purposes of 40 antibiotics, according to EC Guidelines for the Validation of Screening Methods for Residues of Veterinary Medicines, taking into consideration threshold value, cut-off factor, detection capability, limit of detection, sensitivity and specificity. Real fish samples (n=193) from aquaculture were analyzed and 15% were positive for enrofloxacin (quinolone), one of them at a higher concentration than the level of interest (50 µg kg-1), suggesting possible contamination or illegal use of that antibiotic. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. A simple method for accurate endotracheal placement of an intubation tube in Guinea pigs to assess lung injury following chemical exposure.

    PubMed

    Nambiar, M P; Gordon, R K; Moran, T S; Richards, S M; Sciuto, A M

    2007-01-01

    Guinea pigs are considered the animal model of choice for toxicology and medical countermeasure studies against chemical warfare agents (CWAs) and toxic organophosphate pesticides because of their low levels of carboxylesterase compared to rats and mice. However, it is difficult to intubate guinea pigs for CWA inhalation experiments without damaging the larynx. We describe an easy intubation technique for accurate endotracheal placement of the intubation tube. The technique uses a speculum made by cutting a medium-size ear speculum along the midline, leaving the circular connector to the otoscope intact. Guinea pigs were anesthetized with Telazol/medetomidine, the tongue was pulled forward using blunt forceps, and an otoscope fitted with the specially prepared speculum was inserted gently. Insertion of the speculum raises the epiglottis and restrains the movement of the vocal cords, which allows smooth insertion of the metal stylet-reinforced intubation tube. Accurate endotracheal placement of the intubation tube was achieved by measuring the length from the tracheal bifurcation to the vocal cords and from the vocal cords to the upper front teeth. The average length of the trachea in guinea pigs (275 +/- 25 g) was 5.5 +/- 0.2 cm and the distance from the vocal cords to the front teeth was typically 3 cm. Aligning an intubation tube marked at 6 cm with the upper front teeth therefore places the tube 2.5 cm above the tracheal bifurcation. This simple method of intubation does not disturb the natural flora of the mouth and causes minimal laryngeal damage. It is rapid and reliable, and will be very valuable in inhalation exposure to chemical/biological warfare agents or toxic chemicals to assess respiratory toxicity and develop medical countermeasures.

  4. A simple and accurate rule-based modeling framework for simulation of autocrine/paracrine stimulation of glioblastoma cell motility and proliferation by L1CAM in 2-D culture.

    PubMed

    Caccavale, Justin; Fiumara, David; Stapf, Michael; Sweitzer, Liedeke; Anderson, Hannah J; Gorky, Jonathan; Dhurjati, Prasad; Galileo, Deni S

    2017-12-11

    Glioblastoma multiforme (GBM) is a devastating brain cancer for which there is no known cure. Its malignancy is due to rapid cell division along with high motility and invasiveness of cells into the brain tissue. Simple 2-dimensional laboratory assays (e.g., a scratch assay) are commonly used to measure the effects of various experimental perturbations, such as treatment with chemical inhibitors. Several mathematical models have been developed to aid the understanding of the motile behavior and proliferation of GBM cells. However, many are mathematically complicated, look at multiple interdependent phenomena, and/or use modeling software not freely available to the research community. These attributes make the adoption of models and simulations of even simple 2-dimensional cell behavior an uncommon practice by cancer cell biologists. Herein, we developed an accurate, yet simple, rule-based modeling framework to describe the in vitro behavior of GBM cells that are stimulated by the L1CAM protein using freely available NetLogo software. In our model, L1CAM is released by cells and acts through two cell-surface receptors and a point of signaling convergence to increase cell motility and proliferation. A simple graphical interface is provided so that changes can be made easily to several parameters controlling cell behavior, and behavior of the cells is viewed both pictorially and with dedicated graphs. We fully describe the hierarchical rule-based modeling framework, show simulation results under several settings, describe the accuracy compared to experimental data, and discuss the potential usefulness for predicting future experimental outcomes and for use as a teaching tool for cell biology students. It is concluded that this simple modeling framework and its simulations accurately reflect much of the GBM cell motility behavior observed experimentally in vitro in the laboratory. Our framework can be modified easily to suit the needs of investigators interested in other
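
    The rule-based idea can be mimicked outside NetLogo with a few lines of Python. The toy sketch below is not the published model: cells on a grid secrete an "L1CAM-like" factor, and higher local factor levels increase both step length and division probability; all parameter values are hypothetical.

    # Toy Python analogue of the rule-based idea (not the published NetLogo model).
    import numpy as np

    rng = np.random.default_rng(1)
    GRID, N0, STEPS = 100, 50, 200
    factor = np.zeros((GRID, GRID))                          # secreted "L1CAM-like" factor
    cells = [rng.integers(40, 60, size=2) for _ in range(N0)]

    for _ in range(STEPS):
        new_cells = []
        for pos in cells:
            x, y = pos
            factor[x, y] += 1.0                              # secretion at the cell's location
            local = factor[x, y]
            step = 1 + int(local > 5)                        # more factor -> larger random step
            pos[:] = (pos + rng.integers(-step, step + 1, size=2)) % GRID
            if rng.random() < 0.01 * (1 + 0.1 * local):      # more factor -> more division
                new_cells.append(pos.copy())
        cells.extend(new_cells)
        factor *= 0.95                                       # crude decay standing in for diffusion
    print(f"{len(cells)} cells after {STEPS} steps")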

  5. Fast and Simple Microwave Synthesis of TiO2/Au Nanoparticles for Gas-Phase Photocatalytic Hydrogen Generation.

    PubMed

    May-Masnou, Anna; Soler, Lluís; Torras, Miquel; Salles, Pol; Llorca, Jordi; Roig, Anna

    2018-01-01

    The fabrication of small anatase titanium dioxide (TiO2) nanoparticles (NPs) attached to larger anisotropic gold (Au) morphologies by a very fast and simple two-step microwave-assisted synthesis is presented. The TiO2/Au NPs are synthesized using polyvinylpyrrolidone (PVP) as reducing, capping and stabilizing agent through a polyol approach. To optimize the contact between the titania and the gold and facilitate electron transfer, the PVP is removed by calcination at mild temperatures. The nanocatalysts' activity is then evaluated in the photocatalytic production of hydrogen from water/ethanol mixtures in the gas phase at ambient temperature. A maximum value of 5.3 mmol·gcat-1·h-1 (7.4 mmol·gTiO2-1·h-1) of hydrogen is recorded for the system with larger gold particles at an optimum calcination temperature of 450 °C. Herein we demonstrate that TiO2-based photocatalysts with high Au loading and large Au particle size (≈50 nm) NPs have photocatalytic activity.

  6. Fast and accurate reference-free alignment of subtomograms.

    PubMed

    Chen, Yuxiang; Pfeffer, Stefan; Hrabe, Thomas; Schuller, Jan Michael; Förster, Friedrich

    2013-06-01

    In cryoelectron tomography, alignment and averaging of subtomograms, each depicting the same macromolecule, improve the resolution compared to the individual subtomograms. Major challenges of subtomogram alignment are noise enhancement due to overfitting, the bias of an initial reference in the iterative alignment process, and the computational cost of processing increasingly large amounts of data. Here, we propose an efficient and accurate alignment algorithm via a generalized convolution theorem, which allows computation of a constrained correlation function using spherical harmonics. This formulation dramatically increases the computational speed of rotational matching compared to rotation search in Cartesian space and, in contrast to other spherical-harmonics-based approaches, does so without sacrificing accuracy. Using this sampling method, a reference-free alignment procedure is proposed to tackle reference bias and overfitting, which also includes contrast transfer function correction by Wiener filtering. Application of the method to simulated data allowed us to obtain resolutions near the ground truth. For two experimental datasets, ribosomes from yeast lysate and purified 20S proteasomes, we achieved reconstructions of approximately 20 Å and 16 Å, respectively. The software is ready-to-use and made public to the community. Copyright © 2013 Elsevier Inc. All rights reserved.

  7. Newly developed double neural network concept for reliable fast plasma position control

    NASA Astrophysics Data System (ADS)

    Jeon, Young-Mu; Na, Yong-Su; Kim, Myung-Rak; Hwang, Y. S.

    2001-01-01

    Neural networks are considered as a parameter estimation tool in plasma control for next-generation tokamaks such as ITER. Neural networks have been reported to be so accurate and fast for plasma equilibrium identification that they may be applied to the control of complex tokamak plasmas. For this application, the reliability of the conventional neural network needs to be improved. In this study, a new double neural network concept is developed to achieve this. The new concept has been applied to simple plasma position identification in the KSTAR tokamak as a feasibility test. The concept shows higher reliability and fault tolerance even under severe fault conditions, which may make neural networks applicable to plasma control reliably and widely in future tokamaks.

  8. An automated, fast and accurate registration method to link stranded seeds in permanent prostate implants

    NASA Astrophysics Data System (ADS)

    Westendorp, Hendrik; Nuver, Tonnis T.; Moerland, Marinus A.; Minken, André W.

    2015-10-01

    The geometry of a permanent prostate implant varies over time. Seeds can migrate and edema of the prostate affects the position of seeds. Seed movements directly influence dosimetry which relates to treatment quality. We present a method that tracks all individual seeds over time allowing quantification of seed movements. This linking procedure was tested on transrectal ultrasound (TRUS) and cone-beam CT (CBCT) datasets of 699 patients. These datasets were acquired intraoperatively during a dynamic implantation procedure, that combines both imaging modalities. The procedure was subdivided in four automatic linking steps. (I) The Hungarian Algorithm was applied to initially link seeds in CBCT and the corresponding TRUS datasets. (II) Strands were identified and optimized based on curvature and linefits: non optimal links were removed. (III) The positions of unlinked seeds were reviewed and were linked to incomplete strands if within curvature- and distance-thresholds. (IV) Finally, seeds close to strands were linked, also if the curvature-threshold was violated. After linking the seeds an affine transformation was applied. The procedure was repeated until the results were stable or the 6th iteration ended. All results were visually reviewed for mismatches and uncertainties. Eleven implants showed a mismatch and in 12 cases an uncertainty was identified. On average the linking procedure took 42 ms per case. This accurate and fast method has the potential to be used for other time spans, like Day 30, and other imaging modalities. It can potentially be used during a dynamic implantation procedure to faster and better evaluate the quality of the permanent prostate implant.
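
    Step (I) can be illustrated with SciPy's optimal assignment routine: the sketch below links two hypothetical seed point sets by minimizing the total pairwise distance. The strand-based and curvature-based refinements of steps (II)-(IV) are not shown.

    # Sketch of step (I) only: Hungarian-algorithm linking of hypothetical seed positions.
    import numpy as np
    from scipy.optimize import linear_sum_assignment
    from scipy.spatial.distance import cdist

    cbct_seeds = np.random.rand(60, 3) * 50.0     # invented CBCT seed coordinates (mm)
    trus_seeds = cbct_seeds + np.random.normal(scale=2.0, size=cbct_seeds.shape)

    cost = cdist(cbct_seeds, trus_seeds)           # pairwise Euclidean distances
    rows, cols = linear_sum_assignment(cost)       # optimal one-to-one linking
    print("mean matched distance (mm):", cost[rows, cols].mean())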

  9. Fast batch injection analysis system for on-site determination of ethanol in gasohol and fuel ethanol.

    PubMed

    Pereira, Polyana F; Marra, Mariana C; Munoz, Rodrigo A A; Richter, Eduardo M

    2012-02-15

    A simple, accurate and fast (180 injections h-1) batch injection analysis (BIA) system with multiple-pulse amperometric detection has been developed for selective determination of ethanol in gasohol and fuel ethanol. A sample aliquot (100 μL) was directly injected onto a gold electrode immersed in 0.5 mol L-1 NaOH solution (the only reagent). The proposed BIA method requires minimal sample manipulation and can easily be used for on-site analysis. The results obtained with the BIA method were compared to those obtained by gas chromatography, and similar results were obtained (at the 95% confidence level). Published by Elsevier B.V.

  10. Simple scale interpolator facilitates reading of graphs

    NASA Technical Reports Server (NTRS)

    Fetterman, D. E., Jr.

    1965-01-01

    Simple transparent overlay with interpolation scale facilitates accurate, rapid reading of graph coordinate points. This device can be used for enlarging drawings and locating points on perspective drawings.

  11. A fast numerical scheme for causal relativistic hydrodynamics with dissipation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takamoto, Makoto, E-mail: takamoto@tap.scphys.kyoto-u.ac.jp; Inutsuka, Shu-ichiro

    2011-08-01

    Highlights: •We have developed a new multi-dimensional numerical scheme for causal relativistic hydrodynamics with dissipation. •Our new scheme can calculate the evolution of dissipative relativistic hydrodynamics faster and more effectively than existing schemes. •Since we use the Riemann solver for solving the advection steps, our method can capture shocks very accurately. - Abstract: In this paper, we develop a stable and fast numerical scheme for relativistic dissipative hydrodynamics based on Israel-Stewart theory. Israel-Stewart theory is a stable and causal description of dissipation in relativistic hydrodynamics, although it includes a relaxation process on the timescale of collisions between constituent particles, which introduces stiff equations and makes practical numerical calculation difficult. In our new scheme, we use Strang's splitting method and use piecewise exact solutions to handle the extremely short relaxation timescale. In addition, since we split the calculation into an inviscid step and a dissipative step, a Riemann solver can be used to obtain the numerical flux for the inviscid step. The use of a Riemann solver enables us to capture shocks very accurately. Simple numerical examples are shown. The present scheme can be applied to various high-energy phenomena of astrophysics and nuclear physics.
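
    A toy one-dimensional illustration of the splitting strategy described above: the stiff relaxation term is advanced with its exact exponential solution in two half-steps wrapped around an explicit advection step. The sketch is scalar and non-relativistic and only shows the Strang splitting pattern, not the authors' scheme.

    # Toy Strang-splitting example: advection plus stiff relaxation toward q_eq.
    import numpy as np

    nx = 200
    dx = 1.0 / nx
    c, tau, dt, nsteps = 1.0, 1.0e-4, 2.0e-3, 100          # tau << dt: stiff source term
    x = np.linspace(0.0, 1.0, nx, endpoint=False)
    q = np.exp(-200.0 * (x - 0.3) ** 2)                     # initial profile
    q_eq = 0.5                                              # equilibrium of the stiff source

    def relax(q, half_dt):
        # exact ("piecewise exact") solution of dq/dt = -(q - q_eq)/tau over half_dt
        return q_eq + (q - q_eq) * np.exp(-half_dt / tau)

    for _ in range(nsteps):
        q = relax(q, 0.5 * dt)                              # half-step: stiff relaxation
        q = q - c * dt / dx * (q - np.roll(q, 1))           # full step: first-order upwind advection
        q = relax(q, 0.5 * dt)                              # half-step: stiff relaxation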

  12. Hormonal profiling: Development of a simple method to extract and quantify phytohormones in complex matrices by UHPLC-MS/MS.

    PubMed

    Delatorre, Carolina; Rodríguez, Ana; Rodríguez, Lucía; Majada, Juan P; Ordás, Ricardo J; Feito, Isabel

    2017-01-01

    Plant growth regulators (PGRs) are chemically diverse compounds that play essential roles in plant development and the regulation of physiological processes. They exert their functions through a mechanism called cross-talk (involving either synergistic or antagonistic actions); thus, it is of great interest to study as many PGRs as possible to obtain accurate information about plant status. Much effort has been devoted to developing methods capable of analyzing large numbers of these compounds, but they frequently exclude some chemical families or important PGRs within each family. In addition, most of the methods are specially designed for matrices that are easy to work with. Therefore, we aimed to develop a method that meets the requirements lacking in the literature while also being fast and reliable. Here we present a simple, fast and robust method for the extraction and quantification of 20 different PGRs by UHPLC-MS/MS, optimized in complex matrices. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Simple Mathematical Models Do Not Accurately Predict Early SIV Dynamics

    PubMed Central

    Noecker, Cecilia; Schaefer, Krista; Zaccheo, Kelly; Yang, Yiding; Day, Judy; Ganusov, Vitaly V.

    2015-01-01

    Upon infection of a new host, human immunodeficiency virus (HIV) replicates in the mucosal tissues and is generally undetectable in circulation for 1–2 weeks post-infection. Several interventions against HIV including vaccines and antiretroviral prophylaxis target virus replication at this earliest stage of infection. Mathematical models have been used to understand how HIV spreads from mucosal tissues systemically and what impact vaccination and/or antiretroviral prophylaxis has on viral eradication. Because predictions of such models have rarely been compared to experimental data, it remains unclear which processes included in these models are critical for predicting early HIV dynamics. Here we modified the “standard” mathematical model of HIV infection to include two populations of infected cells: cells that are actively producing the virus and cells that are transitioning into virus production mode. We evaluated the effects of several poorly known parameters on infection outcomes in this model and compared model predictions to experimental data on infection of non-human primates with variable doses of simian immunodeficiency virus (SIV). First, we found that the mode of virus production by infected cells (budding vs. bursting) has a minimal impact on the early virus dynamics for a wide range of model parameters, as long as the parameters are constrained to provide the observed rate of SIV load increase in the blood of infected animals. Interestingly and in contrast with previous results, we found that the bursting mode of virus production generally results in a higher probability of viral extinction than the budding mode of virus production. Second, this mathematical model was not able to accurately describe the change in experimentally determined probability of host infection with increasing viral doses. Third and finally, the model was also unable to accurately explain the decline in the time to virus detection with increasing viral dose. These results
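
    The extended model described above can be written compactly as ordinary differential equations with an extra compartment for cells transitioning into virus production. The parameter values in the sketch below are placeholders, not fitted SIV values.

    # Sketch of the extended standard model: target cells T, transitioning infected cells E,
    # productively infected cells I, and free virus V. Parameter values are made up.
    import numpy as np
    from scipy.integrate import odeint

    def rhs(y, t, lam, d, beta, k, delta, p, c):
        T, E, I, V = y
        dT = lam - d * T - beta * T * V        # target cell supply, death and infection
        dE = beta * T * V - k * E              # cells moving toward virus production
        dI = k * E - delta * I                 # productively infected cells
        dV = p * I - c * V                     # virus production and clearance
        return [dT, dE, dI, dV]

    params = (1.0e4, 0.01, 2.0e-7, 1.0, 0.5, 500.0, 10.0)   # lam, d, beta, k, delta, p, c
    t = np.linspace(0.0, 30.0, 600)                          # days post infection
    sol = odeint(rhs, [1.0e6, 0.0, 0.0, 1.0e-3], t, args=params)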

  14. Application of fast Fourier transform cross-correlation and mass spectrometry data for accurate alignment of chromatograms.

    PubMed

    Zheng, Yi-Bao; Zhang, Zhi-Min; Liang, Yi-Zeng; Zhan, De-Jian; Huang, Jian-Hua; Yun, Yong-Huan; Xie, Hua-Lin

    2013-04-19

    Chromatography has been established as one of the most important analytical methods in the modern analytical laboratory. However, preprocessing of the chromatograms, especially peak alignment, is usually a time-consuming task prior to extracting useful information from the datasets because of the small unavoidable differences in the experimental conditions caused by minor changes and drift. Most of the alignment algorithms are performed on reduced datasets using only the detected peaks in the chromatograms, which means a loss of data and introduces the problem of extraction of peak data from the chromatographic profiles. These disadvantages can be overcome by using the full chromatographic information that is generated from hyphenated chromatographic instruments. A new alignment algorithm called CAMS (Chromatogram Alignment via Mass Spectra) is presented here to correct the retention time shifts among chromatograms accurately and rapidly. In this report, peaks of each chromatogram were detected based on Continuous Wavelet Transform (CWT) with Haar wavelet and were aligned against the reference chromatogram via the correlation of mass spectra. The aligning procedure was accelerated by Fast Fourier Transform cross correlation (FFT cross correlation). This approach has been compared with several well-known alignment methods on real chromatographic datasets, which demonstrates that CAMS can preserve the shape of peaks and achieve a high quality alignment result. Furthermore, the CAMS method was implemented in the Matlab language and available as an open source package at http://www.github.com/matchcoder/CAMS. Copyright © 2013. Published by Elsevier B.V.
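
    The shift-estimation step that FFT cross-correlation accelerates can be illustrated with two synthetic traces; the sketch below recovers the retention-time drift between a reference peak and a delayed copy. The CWT peak detection and mass-spectral correlation used by CAMS are not shown.

    # Illustration of FFT cross-correlation for retention-time shift estimation only.
    import numpy as np

    def fft_delay(reference, signal):
        """Return the delay (in sampling points) of `signal` relative to `reference`."""
        n = len(reference)
        xcorr = np.fft.ifft(np.conj(np.fft.fft(reference)) * np.fft.fft(signal)).real
        lag = int(np.argmax(xcorr))
        return lag if lag <= n // 2 else lag - n             # wrap circular lag to a signed delay

    t = np.linspace(0.0, 10.0, 2000)                         # retention-time axis (min)
    ref = np.exp(-((t - 4.0) / 0.05) ** 2)                   # reference peak at 4.0 min
    drifted = np.exp(-((t - 4.2) / 0.05) ** 2)               # same peak drifted to 4.2 min
    print("estimated delay:", fft_delay(ref, drifted), "points")   # ~40 points = 0.2 min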

  15. YAHA: fast and flexible long-read alignment with optimal breakpoint detection.

    PubMed

    Faust, Gregory G; Hall, Ira M

    2012-10-01

    With improved short-read assembly algorithms and the recent development of long-read sequencers, split mapping will soon be the preferred method for structural variant (SV) detection. Yet, current alignment tools are not well suited for this. We present YAHA, a fast and flexible hash-based aligner. YAHA is as fast and accurate as BWA-SW at finding the single best alignment per query and is dramatically faster and more sensitive than both SSAHA2 and MegaBLAST at finding all possible alignments. Unlike other aligners that report all, or one, alignment per query, or that use simple heuristics to select alignments, YAHA uses a directed acyclic graph to find the optimal set of alignments that cover a query using a biologically relevant breakpoint penalty. YAHA can also report multiple mappings per defined segment of the query. We show that YAHA detects more breakpoints in less time than BWA-SW across all SV classes, and especially excels at complex SVs comprising multiple breakpoints. YAHA is currently supported on 64-bit Linux systems. Binaries and sample data are freely available for download from http://faculty.virginia.edu/irahall/YAHA. imh4y@virginia.edu.
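
    The chaining idea (selecting a best-scoring set of alignments that cover the query, with a penalty for each breakpoint) can be conveyed with a generic dynamic program. YAHA's actual graph construction and penalty model are more elaborate than this sketch.

    # Generic chaining sketch, not YAHA's implementation. Each alignment is
    # (query_start, query_end, score); a fixed penalty is charged per breakpoint.
    def best_chain(alignments, breakpoint_penalty=10.0):
        alns = sorted(alignments, key=lambda a: a[1])            # sort by query end
        best = [a[2] for a in alns]                              # best chain score ending at i
        back = [None] * len(alns)
        for i, (qs_i, qe_i, sc_i) in enumerate(alns):
            for j in range(i):
                if alns[j][1] <= qs_i:                           # j ends before i starts
                    cand = best[j] + sc_i - breakpoint_penalty
                    if cand > best[i]:
                        best[i], back[i] = cand, j
        i = max(range(len(alns)), key=lambda k: best[k])         # traceback from the best chain end
        chain = []
        while i is not None:
            chain.append(alns[i])
            i = back[i]
        return list(reversed(chain)), max(best)

    chain, score = best_chain([(0, 50, 40.0), (45, 120, 60.0), (60, 130, 55.0), (130, 200, 50.0)])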

  16. Operator Priming and Generalization of Practice in Adults' Simple Arithmetic

    ERIC Educational Resources Information Center

    Chen, Yalin; Campbell, Jamie I. D.

    2016-01-01

    There is a renewed debate about whether educated adults solve simple addition problems (e.g., 2 + 3) by direct fact retrieval or by fast, automatic counting-based procedures. Recent research testing adults' simple addition and multiplication showed that a 150-ms preview of the operator (+ or ×) facilitated addition, but not multiplication,…

  17. Fast and Accurate Metadata Authoring Using Ontology-Based Recommendations.

    PubMed

    Martínez-Romero, Marcos; O'Connor, Martin J; Shankar, Ravi D; Panahiazar, Maryam; Willrett, Debra; Egyedi, Attila L; Gevaert, Olivier; Graybeal, John; Musen, Mark A

    2017-01-01

    In biomedicine, high-quality metadata are crucial for finding experimental datasets, for understanding how experiments were performed, and for reproducing those experiments. Despite the recent focus on metadata, the quality of metadata available in public repositories continues to be extremely poor. A key difficulty is that the typical metadata acquisition process is time-consuming and error prone, with weak or nonexistent support for linking metadata to ontologies. There is a pressing need for methods and tools to speed up the metadata acquisition process and to increase the quality of metadata that are entered. In this paper, we describe a methodology and set of associated tools that we developed to address this challenge. A core component of this approach is a value recommendation framework that uses analysis of previously entered metadata and ontology-based metadata specifications to help users rapidly and accurately enter their metadata. We performed an initial evaluation of this approach using metadata from a public metadata repository.

  18. Fast and Accurate Metadata Authoring Using Ontology-Based Recommendations

    PubMed Central

    Martínez-Romero, Marcos; O’Connor, Martin J.; Shankar, Ravi D.; Panahiazar, Maryam; Willrett, Debra; Egyedi, Attila L.; Gevaert, Olivier; Graybeal, John; Musen, Mark A.

    2017-01-01

    In biomedicine, high-quality metadata are crucial for finding experimental datasets, for understanding how experiments were performed, and for reproducing those experiments. Despite the recent focus on metadata, the quality of metadata available in public repositories continues to be extremely poor. A key difficulty is that the typical metadata acquisition process is time-consuming and error prone, with weak or nonexistent support for linking metadata to ontologies. There is a pressing need for methods and tools to speed up the metadata acquisition process and to increase the quality of metadata that are entered. In this paper, we describe a methodology and set of associated tools that we developed to address this challenge. A core component of this approach is a value recommendation framework that uses analysis of previously entered metadata and ontology-based metadata specifications to help users rapidly and accurately enter their metadata. We performed an initial evaluation of this approach using metadata from a public metadata repository. PMID:29854196

  19. Stonehenge: A Simple and Accurate Predictor of Lunar Eclipses

    NASA Astrophysics Data System (ADS)

    Challener, S.

    1999-12-01

    Over the last century, much has been written about the astronomical significance of Stonehenge. The rage peaked in the mid to late 1960s when new computer technology enabled astronomers to make the first complete search for celestial alignments. Because there are hundreds of rocks or holes at Stonehenge and dozens of bright objects in the sky, the quest was fraught with obvious statistical problems. A storm of controversy followed and the subject nearly vanished from print. Only a handful of these alignments remain compelling. Today, few astronomers and still fewer archaeologists would argue that Stonehenge served primarily as an observatory. Instead, Stonehenge probably served as a sacred meeting place, which was consecrated by certain celestial events. These would include the sun's risings and settings at the solstices and possibly some lunar risings as well. I suggest that Stonehenge was also used to predict lunar eclipses. While Hawkins and Hoyle also suggested that Stonehenge was used in this way, their methods are complex and they make use of only early, minor, or outlying areas of Stonehenge. In contrast, I suggest a way that makes use of the imposing, central region of Stonehenge; the area built during the final phase of activity. To predict every lunar eclipse without predicting eclipses that do not occur, I use the less familiar lunar cycle of 47 lunar months. By moving markers about the Sarsen Circle, the Bluestone Circle, and the Bluestone Horseshoe, all umbral lunar eclipses can be predicted accurately.

  20. Fast and accurate inference of local ancestry in Latino populations

    PubMed Central

    Baran, Yael; Pasaniuc, Bogdan; Sankararaman, Sriram; Torgerson, Dara G.; Gignoux, Christopher; Eng, Celeste; Rodriguez-Cintron, William; Chapela, Rocio; Ford, Jean G.; Avila, Pedro C.; Rodriguez-Santana, Jose; Burchard, Esteban Gonzàlez; Halperin, Eran

    2012-01-01

    Motivation: It is becoming increasingly evident that the analysis of genotype data from recently admixed populations is providing important insights into medical genetics and population history. Such analyses have been used to identify novel disease loci, to understand recombination rate variation and to detect recent selection events. The utility of such studies crucially depends on accurate and unbiased estimation of the ancestry at every genomic locus in recently admixed populations. Although various methods have been proposed and shown to be extremely accurate in two-way admixtures (e.g. African Americans), only a few approaches have been proposed and thoroughly benchmarked on multi-way admixtures (e.g. Latino populations of the Americas). Results: To address these challenges we introduce here methods for local ancestry inference which leverage the structure of linkage disequilibrium in the ancestral population (LAMP-LD), and incorporate the constraint of Mendelian segregation when inferring local ancestry in nuclear family trios (LAMP-HAP). Our algorithms uniquely combine hidden Markov models (HMMs) of haplotype diversity within a novel window-based framework to achieve superior accuracy as compared with published methods. Further, unlike previous methods, the structure of our HMM does not depend on the number of reference haplotypes but on a fixed constant, and it is thereby capable of utilizing large datasets while remaining highly efficient and robust to over-fitting. Through simulations and analysis of real data from 489 nuclear trio families from the mainland US, Puerto Rico and Mexico, we demonstrate that our methods achieve superior accuracy compared with published methods for local ancestry inference in Latinos. Availability: http://lamp.icsi.berkeley.edu/lamp/lampld/ Contact: bpasaniu@hsph.harvard.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:22495753

  1. A simple fluorescent probe for the fast sequential detection of copper and biothiols based on a benzothiazole derivative

    NASA Astrophysics Data System (ADS)

    Shen, Youming; Zhang, Xiangyang; Zhang, Chunxiang; Zhang, Youyu; Jin, Junling; Li, Haitao

    2018-02-01

    A simple benzothiazole fluorescent chemosensor was developed for the fast sequential detection of Cu2+ and biothiols through modulation of the excited-state intramolecular proton transfer (ESIPT) process. Compound 1 exhibits highly selective and sensitive "on-off" fluorescence recognition of Cu2+ with a 1:1 binding stoichiometry by hindering ESIPT. The in situ generated 1-Cu2+ complex can serve as an "on-off" fluorescent probe with high selectivity toward biothiols via a Cu2+ displacement approach, which restores ESIPT. It is worth pointing out that the 1-Cu2+ complex responds faster to cysteine (within 1 min) than to other biothiols such as homocysteine (25 min) and glutathione (25 min). Moreover, compound 1 displays a 160 nm Stokes shift for reversibly monitoring Cu2+ and biothiols. In addition, the probe is successfully used for fluorescent cellular imaging. This strategy of modulating the ESIPT state has been used for the determination of Cu2+ and Cys with satisfactory results, which further demonstrates its value for practical applications.

  2. Fast and simple microwave synthesis of TiO2/Au nanoparticles for gas-phase photocatalytic hydrogen generation

    NASA Astrophysics Data System (ADS)

    May-Masnou, Anna; Soler, Lluís; Torras, Miquel; Salles, Pol; Llorca, Jordi; Roig, Anna

    2018-04-01

    The fabrication of small anatase titanium dioxide (TiO2) nanoparticles (NPs) attached to larger anisotropic gold (Au) morphologies by a very fast and simple two-step microwave-assisted synthesis is presented. The TiO2/Au NPs are synthesized using polyvinylpyrrolidone (PVP) as reducing, capping and stabilizing agent through a polyol approach. To optimize the contact between the titania and the gold and facilitate electron transfer, the PVP is removed by calcination at mild temperatures. The nanocatalysts activity is then evaluated in the photocatalytic production of hydrogen from water/ethanol mixtures in gas-phase at ambient temperature. A maximum value of 5.3 mmol·gcat-1·h-1 (7.4 mmol·gTiO2-1·h-1) of hydrogen is recorded for the system with larger gold particles at an optimum calcination temperature of 450 °C. Herein we demonstrate that TiO2-based photocatalysts with high Au loading and large Au particle size (≈ 50 nm) NPs have photocatalytic activity.

  3. Fast and accurate determination of K, Ca, and Mg in human serum by sector field ICP-MS.

    PubMed

    Yu, Lee L; Davis, W Clay; Nuevo Ordonez, Yoana; Long, Stephen E

    2013-11-01

    Electrolytes in serum are important biomarkers for skeletal and cellular health. The levels of electrolytes are monitored by measuring the Ca, Mg, K, and Na in blood serum. Many reference methods have been developed for the determination of Ca, Mg, and K in clinical measurements; however, isotope dilution thermal ionization mass spectrometry (ID-TIMS) has traditionally been the primary reference method serving as an anchor for traceability and accuracy to these secondary reference methods. The sample matrix must be separated before ID-TIMS measurements, which is a slow and tedious process that hindered the adoption of the technique in routine clinical measurements. We have developed a fast and accurate method for the determination of Ca, Mg, and K in serum by taking advantage of the higher mass resolution capability of the modern sector field inductively coupled plasma mass spectrometry (SF-ICP-MS). Each serum sample was spiked with a mixture containing enriched (44)Ca, (26)Mg, and (41)K, and the (42)Ca(+):(44)Ca(+), (24)Mg(+):(26)Mg(+), and (39)K(+):(41)K(+) ratios were measured. The Ca and Mg ratios were measured in medium resolution mode (m/Δm ≈ 4 500), and the K ratio in high resolution mode (m/Δm ≈ 10 000). Residual (40)Ar(1)H(+) interference was still observed but the deleterious effects of the interference were minimized by measuring the sample at K > 100 ng g(-1). The interferences of Sr(++) at the two Ca isotopes were less than 0.25 % of the analyte signal, and they were corrected with the (88)Sr(+) intensity by using the Sr(++):Sr(+) ratio. The sample preparation involved only simple dilutions, and the measurement using this sample preparation approach is known as dilution-and-shoot (DNS). The DNS approach was validated with samples prepared via the traditional acid digestion approach followed by ID-SF-ICP-MS measurement. DNS and digested samples of SRM 956c were measured with ID-SF-ICP-MS for quality assurance, and the results (mean

  4. Fast and Accurate Simulation Technique for Large Irregular Arrays

    NASA Astrophysics Data System (ADS)

    Bui-Van, Ha; Abraham, Jens; Arts, Michel; Gueuning, Quentin; Raucy, Christopher; Gonzalez-Ovejero, David; de Lera Acedo, Eloy; Craeye, Christophe

    2018-04-01

    A fast full-wave simulation technique is presented for the analysis of large irregular planar arrays of identical 3-D metallic antennas. The solution method relies on the Macro Basis Functions (MBF) approach and an interpolatory technique to compute the interactions between MBFs. The Harmonic-polynomial (HARP) model is established for the near-field interactions in a modified system of coordinates. For extremely large arrays made of complex antennas, two approaches assuming a limited radius of influence for mutual coupling are considered: one is based on a sparse-matrix LU decomposition and the other one on a tessellation of the array in the form of overlapping sub-arrays. The computation of all embedded element patterns is sped up with the help of the non-uniform FFT algorithm. Extensive validations are shown for arrays of log-periodic antennas envisaged for the low-frequency SKA (Square Kilometer Array) radio-telescope. The analysis of SKA stations with such a large number of elements has not been treated yet in the literature. Validations include comparison with results obtained with commercial software and with experiments. The proposed method is particularly well suited to array synthesis, in which several orders of magnitude can be saved in terms of computation time.

  5. relaxGUI: a new software for fast and simple NMR relaxation data analysis and calculation of ps-ns and μs motion of proteins.

    PubMed

    Bieri, Michael; d'Auvergne, Edward J; Gooley, Paul R

    2011-06-01

    Investigation of protein dynamics on the ps-ns and μs-ms timeframes provides detailed insight into the mechanisms of enzymes and the binding properties of proteins. Nuclear magnetic resonance (NMR) is an excellent tool for studying protein dynamics at atomic resolution. Analysis of relaxation data using model-free analysis can be a tedious and time consuming process, which requires good knowledge of scripting procedures. The software relaxGUI was developed for fast and simple model-free analysis and is fully integrated into the software package relax. It is written in Python and uses wxPython to build the graphical user interface (GUI) for maximum performance and multi-platform use. This software allows the analysis of NMR relaxation data with ease and the generation of publication quality graphs as well as color coded images of molecular structures. The interface is designed for simple data analysis and management. The software was tested and validated against the command line version of relax.

  6. SATe-II: very fast and accurate simultaneous estimation of multiple sequence alignments and phylogenetic trees.

    PubMed

    Liu, Kevin; Warnow, Tandy J; Holder, Mark T; Nelesen, Serita M; Yu, Jiaye; Stamatakis, Alexandros P; Linder, C Randal

    2012-01-01

    Highly accurate estimation of phylogenetic trees for large data sets is difficult, in part because multiple sequence alignments must be accurate for phylogeny estimation methods to be accurate. Coestimation of alignments and trees has been attempted but currently only SATé estimates reasonably accurate trees and alignments for large data sets in practical time frames (Liu K., Raghavan S., Nelesen S., Linder C.R., Warnow T. 2009b. Rapid and accurate large-scale coestimation of sequence alignments and phylogenetic trees. Science. 324:1561-1564). Here, we present a modification to the original SATé algorithm that improves upon SATé (which we now call SATé-I) in terms of speed and of phylogenetic and alignment accuracy. SATé-II uses a different divide-and-conquer strategy than SATé-I and so produces smaller more closely related subsets than SATé-I; as a result, SATé-II produces more accurate alignments and trees, can analyze larger data sets, and runs more efficiently than SATé-I. Generally, SATé is a metamethod that takes an existing multiple sequence alignment method as an input parameter and boosts the quality of that alignment method. SATé-II-boosted alignment methods are significantly more accurate than their unboosted versions, and trees based upon these improved alignments are more accurate than trees based upon the original alignments. Because SATé-I used maximum likelihood (ML) methods that treat gaps as missing data to estimate trees and because we found a correlation between the quality of tree/alignment pairs and ML scores, we explored the degree to which SATé's performance depends on using ML with gaps treated as missing data to determine the best tree/alignment pair. We present two lines of evidence that using ML with gaps treated as missing data to optimize the alignment and tree produces very poor results. First, we show that the optimization problem where a set of unaligned DNA sequences is given and the output is the tree and alignment of

  7. Two-dimensional fast marching for geometrical optics.

    PubMed

    Capozzoli, Amedeo; Curcio, Claudio; Liseno, Angelo; Savarese, Salvatore

    2014-11-03

    We develop an approach for the fast and accurate determination of geometrical optics solutions to Maxwell's equations in inhomogeneous 2D media and for TM polarized electric fields. The eikonal equation is solved by the fast marching method. Particular attention is paid to consistently discretizing the scatterers' boundaries and matching the discretization to that of the computational domain. The ray tracing is performed, in a direct and inverse way, by using a technique introduced in computer graphics for the fast and accurate generation of textured images from vector fields. The transport equation is solved by resorting only to its integral form, the transport of polarization being trivial for the considered geometry and polarization. Numerical results for the plane wave scattering of two perfectly conducting circular cylinders and for a Luneburg lens prove the accuracy of the algorithm. In particular, it is shown how the approach is capable of properly accounting for the multiple scattering occurring between the two metallic cylinders and how inverse ray tracing should be preferred to direct ray tracing in the case of the Luneburg lens.
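
    As an independent illustration of the eikonal step, the sketch below computes first-arrival travel times from a point source in a piecewise-constant medium with the fast marching method, via the scikit-fmm package (assumed to be installed and importable as skfmm). It is not the authors' code, which also handles boundary discretization, ray tracing and the transport equation.

    # Eikonal solution |grad T| * v = 1 by fast marching (scikit-fmm assumed available).
    import numpy as np
    import skfmm

    n = 201
    yy, xx = np.meshgrid(np.linspace(-1.0, 1.0, n), np.linspace(-1.0, 1.0, n), indexing="ij")
    phi = np.ones((n, n))
    phi[n // 2, n // 4] = -1.0                     # sign change marks the point source
    speed = np.where(xx > 0.0, 1.5, 1.0)           # two-region medium with different wave speeds
    travel_time = skfmm.travel_time(phi, speed, dx=2.0 / (n - 1))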

  8. Simple Fourier optics formalism for high-angular-resolution systems and nulling interferometry.

    PubMed

    Hénault, François

    2010-03-01

    Reviewed are various designs of advanced, multiaperture optical systems dedicated to high-angular-resolution imaging or to the detection of exoplanets by nulling interferometry. A simple Fourier optics formalism applicable to both imaging arrays and nulling interferometers is presented, allowing their basic theoretical relationships to be derived as convolution or cross-correlation products suitable for fast and accurate computation. Several unusual designs, such as a "superresolving telescope" utilizing a mosaicking observation procedure or a free-flying, axially recombined interferometer are examined, and their performance in terms of imaging and nulling capacity are assessed. In all considered cases, it is found that the limiting parameter is the diameter of the individual telescopes. A final section devoted to nulling interferometry shows an apparent superiority of axial versus multiaxial recombining schemes. The entire study is valid only in the framework of first-order geometrical optics and scalar diffraction theory. Furthermore, it is assumed that all entrance subapertures are optically conjugated with their associated exit pupils.
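
    The convolution and cross-correlation products mentioned above lend themselves to FFT-based evaluation. A generic scalar-diffraction sketch: the far-field amplitude is the Fourier transform of the pupil function, so the point spread function of a two-aperture array follows from a single FFT. The aperture layout and sampling are arbitrary and not taken from the paper.

    # Generic Fourier-optics sketch: PSF of a two-aperture pupil as |FFT(pupil)|^2.
    import numpy as np

    n = 1024
    pupil = np.zeros((n, n))
    yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    for cx in (-120, 120):                                   # two circular sub-apertures
        pupil[(xx - cx) ** 2 + yy ** 2 < 40 ** 2] = 1.0

    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))   # far-field amplitude
    psf = np.abs(field) ** 2                                         # point spread function
    psf /= psf.max()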

  9. Fast and accurate automated cell boundary determination for fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Arce, Stephen Hugo; Wu, Pei-Hsun; Tseng, Yiider

    2013-07-01

    Detailed measurement of cell phenotype information from digital fluorescence images has the potential to greatly advance biomedicine in various disciplines such as patient diagnostics or drug screening. Yet, the complexity of cell conformations presents a major barrier preventing effective determination of cell boundaries, and introduces measurement error that propagates throughout subsequent assessment of cellular parameters and statistical analysis. State-of-the-art image segmentation techniques that require user-interaction, prolonged computation time and specialized training cannot adequately provide the support for high content platforms, which often sacrifice resolution to foster the speedy collection of massive amounts of cellular data. This work introduces a strategy that allows us to rapidly obtain accurate cell boundaries from digital fluorescent images in an automated format. Hence, this new method has broad applicability to promote biotechnology.

  10. Do we need 3D tube current modulation information for accurate organ dosimetry in chest CT? Protocols dose comparisons.

    PubMed

    Lopez-Rendon, Xochitl; Zhang, Guozhi; Coudyzer, Walter; Develter, Wim; Bosmans, Hilde; Zanca, Federica

    2017-11-01

    To compare the lung and breast dose associated with three chest protocols: standard, organ-based tube current modulation (OBTCM) and fast-speed scanning; and to estimate the error associated with organ dose when modelling the longitudinal (z-) TCM versus the 3D-TCM in Monte Carlo simulations (MC) for these three protocols. Five adult and three paediatric cadavers with different BMI were scanned. The CTDI vol of the OBTCM and the fast-speed protocols were matched to the patient-specific CTDI vol of the standard protocol. Lung and breast doses were estimated using MC with both z- and 3D-TCM simulated and compared between protocols. The fast-speed scanning protocol delivered the highest doses. A slight reduction for breast dose (up to 5.1%) was observed for two of the three female cadavers with the OBTCM in comparison to the standard. For both adult and paediatric, the implementation of the z-TCM data only for organ dose estimation resulted in 10.0% accuracy for the standard and fast-speed protocols, while relative dose differences were up to 15.3% for the OBTCM protocol. At identical CTDI vol values, the standard protocol delivered the lowest overall doses. Only for the OBTCM protocol is the 3D-TCM needed if an accurate (<10.0%) organ dosimetry is desired. • The z-TCM information is sufficient for accurate dosimetry for standard protocols. • The z-TCM information is sufficient for accurate dosimetry for fast-speed scanning protocols. • For organ-based TCM schemes, the 3D-TCM information is necessary for accurate dosimetry. • At identical CTDI vol , the fast-speed scanning protocol delivered the highest doses. • Lung dose was higher in XCare than standard protocol at identical CTDI vol .

  11. An accurate automated technique for quasi-optics measurement of the microwave diagnostics for fusion plasma

    NASA Astrophysics Data System (ADS)

    Hu, Jianqiang; Liu, Ahdi; Zhou, Chu; Zhang, Xiaohui; Wang, Mingyuan; Zhang, Jin; Feng, Xi; Li, Hong; Xie, Jinlin; Liu, Wandong; Yu, Changxuan

    2017-08-01

    A new integrated technique for fast and accurate measurement of the quasi-optics, especially for the microwave/millimeter wave diagnostic systems of fusion plasma, has been developed. Using the LabVIEW-based comprehensive scanning system, we can realize not only automatic but also fast and accurate measurement, which will help to eliminate the effects of temperature drift and standing wave/multi-reflection. With the Matlab-based asymmetric two-dimensional Gaussian fitting method, all the desired parameters of the microwave beam can be obtained. This technique can be used in the design and testing of microwave diagnostic systems such as reflectometers and the electron cyclotron emission imaging diagnostic systems of the Experimental Advanced Superconducting Tokamak.
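
    The beam-parameter extraction can be illustrated with a least-squares fit of an axis-aligned asymmetric 2-D Gaussian in SciPy. The paper uses a Matlab-based routine that may include further parameters (e.g., rotation), so the sketch below is indicative only.

    # Fit an asymmetric (independent x/y widths) 2-D Gaussian to a synthetic intensity map.
    import numpy as np
    from scipy.optimize import curve_fit

    def gauss2d(coords, amp, x0, y0, wx, wy, offset):
        x, y = coords
        g = amp * np.exp(-((x - x0) ** 2 / (2 * wx ** 2) + (y - y0) ** 2 / (2 * wy ** 2))) + offset
        return g.ravel()

    x, y = np.meshgrid(np.linspace(-50, 50, 101), np.linspace(-50, 50, 101))
    measured = gauss2d((x, y), 1.0, 3.0, -5.0, 12.0, 8.0, 0.05).reshape(x.shape)
    measured = measured + np.random.normal(scale=0.01, size=measured.shape)

    p0 = (1.0, 0.0, 0.0, 10.0, 10.0, 0.0)
    popt, _ = curve_fit(gauss2d, (x, y), measured.ravel(), p0=p0)
    print("fitted centre and waists:", popt[1:5])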

  12. Development and Validation of a Fast, Accurate and Cost-Effective Aeroservoelastic Method on Advanced Parallel Computing Systems

    NASA Technical Reports Server (NTRS)

    Goodwin, Sabine A.; Raj, P.

    1999-01-01

    Progress to date towards the development and validation of a fast, accurate and cost-effective aeroelastic method for advanced parallel computing platforms such as the IBM SP2 and the SGI Origin 2000 is presented in this paper. The ENSAERO code, developed at the NASA-Ames Research Center has been selected for this effort. The code allows for the computation of aeroelastic responses by simultaneously integrating the Euler or Navier-Stokes equations and the modal structural equations of motion. To assess the computational performance and accuracy of the ENSAERO code, this paper reports the results of the Navier-Stokes simulations of the transonic flow over a flexible aeroelastic wing body configuration. In addition, a forced harmonic oscillation analysis in the frequency domain and an analysis in the time domain are done on a wing undergoing a rigid pitch and plunge motion. Finally, to demonstrate the ENSAERO flutter-analysis capability, aeroelastic Euler and Navier-Stokes computations on an L-1011 wind tunnel model including pylon, nacelle and empennage are underway. All computational solutions are compared with experimental data to assess the level of accuracy of ENSAERO. As the computations described above are performed, a meticulous log of computational performance in terms of wall clock time, execution speed, memory and disk storage is kept. Code scalability is also demonstrated by studying the impact of varying the number of processors on computational performance on the IBM SP2 and the Origin 2000 systems.

  13. Simple formula for the surface area of the body and a simple model for anthropometry.

    PubMed

    Reading, Bruce D; Freeman, Brian

    2005-03-01

    The body surface area (BSA) of any adult, when derived from the arithmetic mean of the different values calculated from four independent accepted formulae, can be expressed accurately in Systeme International d'Unites (SI) units by the simple equation BSA = (1/6)(WH)^0.5, where W is body weight in kg, H is body height in m, and BSA is in m^2. This formula, which is derived in part by modeling the body as a simple solid of revolution or a prolate spheroid (i.e., a stretched ellipsoid of revolution), gives students, teachers, and clinicians a simple rule for the rapid estimation of surface area using rational units. The formula was tested independently for human subjects by using it to predict body volume and then comparing this prediction against the actual volume measured by Archimedes' principle. Copyright 2005 Wiley-Liss, Inc.
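
    As a worked example of the rule, the short Python sketch below evaluates BSA = (1/6)(WH)^0.5 for an illustrative subject; the function name and the example weight and height are ours, not the authors'.

```python
import math

def bsa_simple(weight_kg: float, height_m: float) -> float:
    """Body surface area (m^2) from the simple rule BSA = (1/6) * sqrt(W * H),
    with W in kg and H in m, as stated in the abstract."""
    return math.sqrt(weight_kg * height_m) / 6.0

# Illustrative example: a 70 kg, 1.75 m adult gives roughly 1.84 m^2,
# close to classical BSA formulae for the same subject.
print(f"BSA = {bsa_simple(70.0, 1.75):.2f} m^2")
```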

  14. The Fast Scattering Code (FSC): Validation Studies and Program Guidelines

    NASA Technical Reports Server (NTRS)

    Tinetti, Ana F.; Dunn, Mark H.

    2011-01-01

    The Fast Scattering Code (FSC) is a frequency domain noise prediction program developed at the NASA Langley Research Center (LaRC) to simulate the acoustic field produced by the interaction of known, time harmonic incident sound with bodies of arbitrary shape and surface impedance immersed in a potential flow. The code uses the equivalent source method (ESM) to solve an exterior 3-D Helmholtz boundary value problem (BVP) by expanding the scattered acoustic pressure field into a series of point sources distributed on a fictitious surface placed inside the actual scatterer. This work provides additional code validation studies and illustrates the range of code parameters that produce accurate results with minimal computational costs. Systematic noise prediction studies are presented in which monopole generated incident sound is scattered by simple geometric shapes - spheres (acoustically hard and soft surfaces), oblate spheroids, flat disk, and flat plates with various edge topologies. Comparisons between FSC simulations and analytical results and experimental data are presented.

  15. 3ARM: A Fast, Accurate Radiative Transfer Model for Use in Climate Models

    NASA Technical Reports Server (NTRS)

    Bergstrom, R. W.; Kinne, S.; Sokolik, I. N.; Toon, O. B.; Mlawer, E. J.; Clough, S. A.; Ackerman, T. P.; Mather, J.

    1996-01-01

    A new radiative transfer model combining the efforts of three groups of researchers is discussed. The model accurately computes radiative transfer in inhomogeneous absorbing, scattering, and emitting atmospheres. As an illustration of the model, results are shown for the effects of dust on thermal radiation.

  18. Molecularly imprinted microspheres synthesized by a simple, fast, and universal suspension polymerization for selective extraction of the topical anesthetic benzocaine in human serum and fish tissues.

    PubMed

    Sun, Hui; Lai, Jia-Ping; Chen, Fang; Zhu, De-Rong

    2015-02-01

    A simple, fast, and universal suspension polymerization method was used to synthesize the molecularly imprinted microspheres (MIMs) for the topical anesthetic benzocaine (BZC). The desired diameter (10-20 μm) and uniform morphology of the MIMs were obtained easily by changing one or more of the synthesis conditions, including type and amount of surfactant, stirring rate, and ratio of organic to water phase. The MIMs obtained were used as a molecular-imprinting solid-phase-extraction (MISPE) material for extraction of BZC in human serum and fish tissues. The MISPE results revealed that the BZC in these biosamples could be enriched effectively after the MISPE operation. The recoveries of BZC on MIMs cartridges were higher than 90% (n = 3). Finally, an MISPE-HPLC method with UV detection was developed for highly selective extraction and fast detection of trace BZC in human serum and fish tissues. The developed method could also be used for the enrichment and detection of BZC in other complex biosamples.

  19. FragBag, an accurate representation of protein structure, retrieves structural neighbors from the entire PDB quickly and accurately.

    PubMed

    Budowski-Tal, Inbal; Nov, Yuval; Kolodny, Rachel

    2010-02-23

    Fast identification of protein structures that are similar to a specified query structure in the entire Protein Data Bank (PDB) is fundamental in structure and function prediction. We present FragBag, an ultrafast and accurate method for comparing protein structures. We describe a protein structure by the collection of its overlapping short contiguous backbone segments, and discretize this set using a library of fragments. Then, we succinctly represent the protein as a "bag of fragments", a vector that counts the number of occurrences of each fragment, and measure the similarity between two structures by the similarity between their vectors. Our representation has two additional benefits: (i) it can be used to construct an inverted index, for implementing a fast structural search engine of the entire PDB, and (ii) one can specify a structure as a collection of substructures, without combining them into a single structure; this is valuable for structure prediction, when there are reliable predictions only of parts of the protein. We use receiver operating characteristic curve analysis to quantify the success of FragBag in identifying neighbor candidate sets in a dataset of over 2,900 structures. The gold standard is the set of neighbors found by six state-of-the-art structural aligners. Our best FragBag library finds more accurate candidate sets than three other filter methods: SGM, PRIDE, and a method by Zotenko et al. More interestingly, FragBag performs on a par with the computationally expensive, yet highly trusted structural aligners STRUCTAL and CE.
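
    A minimal sketch of the bag-of-fragments idea described above: count how often each library fragment occurs in a structure and compare the count vectors. The fragment-assignment step is not shown, and the cosine similarity used here is an illustrative choice rather than FragBag's exact similarity measure.

```python
from collections import Counter
import math

def bag_of_fragments(fragment_ids, library_size):
    """Vector counting how often each library fragment occurs in one structure.

    `fragment_ids` is the sequence of library indices assigned to the structure's
    overlapping backbone segments (the assignment step itself is not shown here).
    """
    counts = Counter(fragment_ids)
    return [counts.get(i, 0) for i in range(library_size)]

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Illustrative: two structures described with a 5-fragment library.
vec_a = bag_of_fragments([0, 2, 2, 4, 1], library_size=5)
vec_b = bag_of_fragments([0, 2, 4, 4, 3], library_size=5)
print(cosine_similarity(vec_a, vec_b))
```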

  20. Accurate and simple method for quantification of hepatic fat content using magnetic resonance imaging: a prospective study in biopsy-proven nonalcoholic fatty liver disease.

    PubMed

    Hatta, Tomoko; Fujinaga, Yasunari; Kadoya, Masumi; Ueda, Hitoshi; Murayama, Hiroaki; Kurozumi, Masahiro; Ueda, Kazuhiko; Komatsu, Michiharu; Nagaya, Tadanobu; Joshita, Satoru; Kodama, Ryo; Tanaka, Eiji; Uehara, Tsuyoshi; Sano, Kenji; Tanaka, Naoki

    2010-12-01

    To assess the degree of hepatic fat content, simple and noninvasive methods with high objectivity and reproducibility are required. Magnetic resonance imaging (MRI) is one such candidate, although its accuracy remains unclear. We aimed to validate an MRI method for quantifying hepatic fat content by calibrating MRI reading with a phantom and comparing MRI measurements in human subjects with estimates of liver fat content in liver biopsy specimens. The MRI method was performed by a combination of MRI calibration using a phantom and double-echo chemical shift gradient-echo sequence (double-echo fast low-angle shot sequence) that has been widely used on a 1.5-T scanner. Liver fat content in patients with nonalcoholic fatty liver disease (NAFLD, n = 26) was derived from a calibration curve generated by scanning the phantom. Liver fat was also estimated by optical image analysis. The correlation between the MRI measurements and liver histology findings was examined prospectively. Magnetic resonance imaging measurements showed a strong correlation with liver fat content estimated from the results of light microscopic examination (correlation coefficient 0.91, P < 0.001) regardless of the degree of hepatic steatosis. Moreover, the severity of lobular inflammation or fibrosis did not influence the MRI measurements. This MRI method is simple and noninvasive, has excellent ability to quantify hepatic fat content even in NAFLD patients with mild steatosis or advanced fibrosis, and can be performed easily without special devices.

  1. Accurate Time/Frequency Transfer Method Using Bi-Directional WDM Transmission

    NASA Technical Reports Server (NTRS)

    Imaoka, Atsushi; Kihara, Masami

    1996-01-01

    An accurate time transfer method is proposed using bi-directional wavelength division multiplexing (WDM) signal transmission along a single optical fiber. This method will be used in digital telecommunication networks and yields a time synchronization accuracy of better than 1 ns for long transmission lines over several tens of kilometers. The method can accurately measure the difference in delay between two wavelength signals caused by the chromatic dispersion of the fiber in conventional simple bi-directional dual-wavelength frequency transfer methods. We describe the characteristics of this difference in delay and then show that a delay measurement accuracy below 0.1 ns can be obtained by transmitting 156 Mb/s time reference signals at 1.31 micrometers and 1.55 micrometers along a 50 km fiber using the proposed method. The sub-nanosecond delay measurement using the simple bi-directional dual-wavelength transmission along a 100 km fiber with a wavelength spacing of 1 nm in the 1.55 micrometer range is also shown.

  2. A simple animal support for convenient weighing

    USGS Publications Warehouse

    Pan, H.P.; Caslick, J.W.; Harke, D.T.; Decker, D.G.

    1965-01-01

    A simple animal support constructed of web belts to hold skittish pigs for weighing was developed. The support is easily made, noninjurious to the pigs, and compact, facilitating rapid, accurate weighing. With minor modifications, the support can probably be used in weighing other animals.

  3. Individual differences in the components of children's and adults' information processing for simple symbolic and non-symbolic numeric decisions.

    PubMed

    Thompson, Clarissa A; Ratcliff, Roger; McKoon, Gail

    2016-10-01

    How do speed and accuracy trade off, and what components of information processing develop as children and adults make simple numeric comparisons? Data from symbolic and non-symbolic number tasks were collected from 19 first graders (Mage=7.12 years), 26 second/third graders (Mage=8.20 years), 27 fourth/fifth graders (Mage=10.46 years), and 19 seventh/eighth graders (Mage=13.22 years). The non-symbolic task asked children to decide whether an array of asterisks had a larger or smaller number than 50, and the symbolic task asked whether a two-digit number was greater than or less than 50. We used a diffusion model analysis to estimate components of processing in tasks from accuracy, correct and error response times, and response time (RT) distributions. Participants who were accurate on one task were accurate on the other task, and participants who made fast decisions on one task made fast decisions on the other task. Older participants extracted a higher quality of information from the stimulus arrays, were more willing to make a decision, and were faster at encoding, transforming the stimulus representation, and executing their responses. Individual participants' accuracy and RTs were uncorrelated. Drift rate and boundary settings were significantly related across tasks, but they were unrelated to each other. Accuracy was mainly determined by drift rate, and RT was mainly determined by boundary separation. We concluded that RT and accuracy operate largely independently. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Protein detection by Simple Western™ analysis.

    PubMed

    Harris, Valerie M

    2015-01-01

    Protein Simple© has taken a well-known protein detection method, the western blot, and revolutionized it. The Simple Western™ system uses capillary electrophoresis to identify and quantitate a protein of interest. Protein Simple© provides multiple detection apparatuses (Wes, Sally Sue, or Peggy Sue) that are suggested to save scientists valuable time by allowing the researcher to prepare the protein sample, load it along with the necessary antibodies and substrates, and walk away. Within 3-5 h the protein will be separated by size or charge, immunodetection of the target protein will be accurately quantitated, and results will be immediately made available. Using the Peggy Sue instrument, one study recently examined changes in MAPK signaling proteins in the sex-determining stage of gonadal development. Here the methodology is described.

  5. Fast-SNP: a fast matrix pre-processing algorithm for efficient loopless flux optimization of metabolic models

    PubMed Central

    Saa, Pedro A.; Nielsen, Lars K.

    2016-01-01

    Motivation: Computation of steady-state flux solutions in large metabolic models is routinely performed using flux balance analysis based on a simple LP (Linear Programming) formulation. A minimal requirement for thermodynamic feasibility of the flux solution is the absence of internal loops, which are enforced using ‘loopless constraints’. The resulting loopless flux problem is a substantially harder MILP (Mixed Integer Linear Programming) problem, which is computationally expensive for large metabolic models. Results: We developed a pre-processing algorithm that significantly reduces the size of the original loopless problem into an easier and equivalent MILP problem. The pre-processing step employs a fast matrix sparsification algorithm—Fast-sparse null-space pursuit (SNP)—inspired by recent results on SNP. By finding a reduced feasible ‘loop-law’ matrix subject to known directionalities, Fast-SNP considerably improves the computational efficiency in several metabolic models running different loopless optimization problems. Furthermore, analysis of the topology encoded in the reduced loop matrix enabled identification of key directional constraints for the potential permanent elimination of infeasible loops in the underlying model. Overall, Fast-SNP is an effective and simple algorithm for efficient formulation of loop-law constraints, making loopless flux optimization feasible and numerically tractable at large scale. Availability and Implementation: Source code for MATLAB including examples is freely available for download at http://www.aibn.uq.edu.au/cssb-resources under Software. Optimization uses Gurobi, CPLEX or GLPK (the latter is included with the algorithm). Contact: lars.nielsen@uq.edu.au Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27559155
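
    For orientation, the 'loop-law' matrix that loopless constraints act on is, in the standard loopless-FBA formulation, built from a basis of the null space of the internal-reaction stoichiometric matrix. The toy network and SciPy computation below only illustrate that object; they are not Fast-SNP's sparsification algorithm.

```python
import numpy as np
from scipy.linalg import null_space

# Toy stoichiometric matrix restricted to internal reactions
# (rows = metabolites, columns = internal reactions).
# Reactions 0 -> 1 -> 2 -> 0 form a single internal loop.
S_internal = np.array([
    [-1.0,  0.0,  1.0],
    [ 1.0, -1.0,  0.0],
    [ 0.0,  1.0, -1.0],
])

# Columns of N span the internal loop space; loopless constraints require that
# the flux pattern admits no circulation within this subspace.
N = null_space(S_internal)
print(N.round(3))   # one loop: fluxes proportional to (1, 1, 1)
```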

  6. Accurate simulation of backscattering spectra in the presence of sharp resonances

    NASA Astrophysics Data System (ADS)

    Barradas, N. P.; Alves, E.; Jeynes, C.; Tosaki, M.

    2006-06-01

    In elastic backscattering spectrometry, the shape of the observed spectrum due to resonances in the nuclear scattering cross-section is influenced by many factors. If the energy spread of the beam before interaction is larger than the resonance width, then a simple convolution with the energy spread on exit and with the detection system resolution will lead to a calculated spectrum with a resonance much sharper than the observed signal. Also, the yield from a thin layer will not be calculated accurately. We have developed an algorithm for the accurate simulation of backscattering spectra in the presence of sharp resonances. Albeit approximate, the algorithm leads to dramatic improvements in the quality and accuracy of the simulations. It is simple to implement and leads to only small increases of the calculation time, being thus suitable for routine data analysis. We show different experimental examples, including samples with roughness and porosity.

  7. Development of a fast and simple test system for the semiquantitative protein detection in cerebrospinal liquids based on gold nanoparticles.

    PubMed

    Göbel, Gero; Lange, Robert; Hollidt, Jörg-Michael; Lisdat, Fred

    2016-01-01

    The fast and simple detection of increased protein concentrations in cerebrospinal liquids is preferable in emergency medicine, and it can help to avoid unnecessary laboratory work by an early classification of neurological diseases. Here a test system is developed which is based on the electrostatic interaction between negatively charged gold nanoparticles and proteins at pH values around 5. The test system can be adjusted in such a way that protein/nanoparticle aggregates are formed, leading to a red-shift in the absorption spectrum of the nanoparticle suspension. At concentrations above 500 mg/l the color of the suspension changes from red via violet toward blue in a rather small concentration range from 500 to 1000 mg/l. Furthermore, the influence of various parameters such as gold nanoparticle concentration, pH value and varying ion concentration in the sample on the test system is examined. Finally, cerebrospinal liquids of a larger number of patients have been analyzed. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. Fast and accurate non-sequential protein structure alignment using a new asymmetric linear sum assignment heuristic.

    PubMed

    Brown, Peter; Pullan, Wayne; Yang, Yuedong; Zhou, Yaoqi

    2016-02-01

    The three-dimensional tertiary structure of a protein at near atomic level resolution provides insight alluding to its function and evolution. As protein structure decides its functionality, similarity in structure usually implies similarity in function. As such, structure alignment techniques are often useful in the classification of protein function. Given the rapidly growing rate of new, experimentally determined structures being made available from repositories such as the Protein Data Bank, fast and accurate computational structure comparison tools are required. This paper presents SPalignNS, a non-sequential protein structure alignment tool using a novel asymmetrical greedy search technique. The performance of SPalignNS was evaluated against existing sequential and non-sequential structure alignment methods by performing trials with commonly used datasets. These benchmark datasets used to gauge alignment accuracy include (i) 9538 pairwise alignments implied by the HOMSTRAD database of homologous proteins; (ii) a subset of 64 difficult alignments from set (i) that have low structure similarity; (iii) 199 pairwise alignments of proteins with similar structure but different topology; and (iv) a subset of 20 pairwise alignments from the RIPC set. SPalignNS is shown to achieve greater alignment accuracy (lower or comparable root-mean-square distance with increased structure overlap coverage) for all datasets, and the highest agreement with reference alignments from the challenging dataset (iv) above, when compared with both sequentially constrained alignments and other non-sequential alignments. SPalignNS was implemented in C++. The source code, binary executable, and a web server version are freely available at http://sparks-lab.org. Contact: yaoqi.zhou@griffith.edu.au. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  9. Fast and Simple Analytical Method for Direct Determination of Total Chlorine Content in Polyglycerol by ICP-MS.

    PubMed

    Jakóbik-Kolon, Agata; Milewski, Andrzej; Dydo, Piotr; Witczak, Magdalena; Bok-Badura, Joanna

    2018-02-23

    The fast and simple method for total chlorine determination in polyglycerols using low-resolution inductively coupled plasma mass spectrometry (ICP-MS), without the need for additional equipment and time-consuming sample decomposition, was evaluated. A linear calibration curve for the 35Cl isotope in the concentration range 20-800 µg/L was observed. Limits of detection and quantification were 15 µg/L and 44 µg/L, respectively. This corresponds to the ability to detect 3 µg/g and determine 9 µg/g of chlorine in polyglycerol under the studied conditions (0.5% matrix: polyglycerol samples diluted or dissolved with water to an overall concentration of 0.5%). Matrix effects as well as the effect of chlorine origin have been evaluated. The presence of 0.5% (m/m) of matrix species similar to polyglycerol (polyethylene glycol, PEG) did not influence the chlorine determination for PEGs with average molecular weights (MW) up to 2000 Da. Good precision and accuracy of the chlorine content determination were achieved regardless of its origin (inorganic/organic). High analyte recovery and low relative standard deviation values were observed for real polyglycerol samples spiked with chloride. Additionally, the Combustion Ion Chromatography System was used as a reference method. The results confirmed the high accuracy and precision of the tested method.
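
    The calibration figures quoted above follow the usual linear-calibration workflow; the sketch below reproduces that workflow with made-up signal values and the common 3σ/10σ conventions for detection and quantification limits, which may differ in detail from the authors' procedure.

```python
import numpy as np

# Hypothetical calibration data: Cl standard concentration (ug/L) vs. 35Cl signal (counts/s).
conc   = np.array([20, 50, 100, 200, 400, 800], dtype=float)
signal = np.array([310, 760, 1510, 3010, 6020, 12010], dtype=float)

slope, intercept = np.polyfit(conc, signal, 1)   # linear calibration curve
sigma_blank = 22.0                               # hypothetical std. dev. of the blank signal

lod = 3 * sigma_blank / slope                    # common 3-sigma detection limit
loq = 10 * sigma_blank / slope                   # common 10-sigma quantification limit

sample_signal = 950.0
sample_conc = (sample_signal - intercept) / slope  # back-calculated sample concentration
print(f"slope={slope:.2f}, LOD={lod:.1f} ug/L, LOQ={loq:.1f} ug/L, sample={sample_conc:.0f} ug/L")
```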

  10. Fast particle confinement with optimized coil currents in the W7-X stellarator

    NASA Astrophysics Data System (ADS)

    Drevlak, M.; Geiger, J.; Helander, P.; Turkin, Y.

    2014-07-01

    One of the principal goals of the W7-X stellarator is to demonstrate good confinement of energetic ions at finite β. This confinement, however, is sensitive to the magnetic field configuration and is thus vulnerable to design modifications of the coil geometry. The collisionless drift orbit losses for 60 keV protons in W7-X are studied using the ANTS code. Particles in this energy range will be produced by the neutral beam injection (NBI) system being constructed for W7-X, and are particularly important because protons at this energy accurately mimic the behaviour of 3.5 MeV α-particles in a HELIAS reactor. To investigate the possibility of improved fast particle confinement, several approaches to adjusting the coil currents (5 main field coil currents + 2 auxiliary coil currents) were explored. These strategies include simple rules of thumb as well as computational optimization of various properties of the magnetic field. It is shown that significant improvement of collisionless fast particle confinement can be achieved in W7-X for particle populations similar to α-particles produced in fusion reactions. Nevertheless, the experimental goal of demonstrating confinement improvement with rising plasma pressure using an NBI-generated population appears to be difficult based on optimization of the coil currents only. The principal reason for this difficulty is that the NBI deposition profile is broader than the region of good fast-ion confinement around the magnetic axis.

  11. Decision support system of e-book provider selection for library using Simple Additive Weighting

    NASA Astrophysics Data System (ADS)

    Ciptayani, P. I.; Dewi, K. C.

    2018-01-01

    Each library has its own criteria, and differences in the importance of each criterion, when choosing an e-book provider. The large number of providers and the different importance levels of each criterion make the problem of determining the e-book provider complex and time-consuming. The aim of this study was to implement a decision support system (DSS) to assist the library in selecting the best e-book provider based on its preferences. The DSS works by comparing the importance of each criterion and the condition of each alternative decision. Simple Additive Weighting (SAW) is a DSS method that is quite simple, fast and widely used. This study used 9 criteria and 18 providers to demonstrate how SAW works. With the DSS, decision-making time can be shortened and the calculation results can be more accurate than manual calculations.
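
    A minimal sketch of Simple Additive Weighting with hypothetical criteria weights and provider scores; the max-normalization of benefit criteria used here is the textbook SAW step, and all numbers are invented for illustration.

```python
# Simple Additive Weighting (SAW): normalize each criterion, multiply by its
# weight, sum across criteria, and rank alternatives by the total score.
weights = [0.4, 0.35, 0.25]            # hypothetical importance of 3 criteria (sums to 1)
providers = {                          # hypothetical raw scores per criterion (higher is better)
    "Provider A": [70, 9000, 3],
    "Provider B": [85, 7000, 4],
    "Provider C": [60, 9500, 5],
}

# Column maxima used to normalize benefit criteria to the [0, 1] range.
col_max = [max(scores[i] for scores in providers.values()) for i in range(len(weights))]

ranking = {
    name: sum(w * (score / m) for w, score, m in zip(weights, scores, col_max))
    for name, scores in providers.items()
}

for name, total in sorted(ranking.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {total:.3f}")
```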

  12. USI: a fast and accurate approach for conceptual document annotation.

    PubMed

    Fiorini, Nicolas; Ranwez, Sylvie; Montmain, Jacky; Ranwez, Vincent

    2015-03-14

    Semantic approaches such as concept-based information retrieval rely on a corpus in which resources are indexed by concepts belonging to a domain ontology. In order to keep such applications up-to-date, new entities need to be frequently annotated to enrich the corpus. However, this task is time-consuming and requires a high-level of expertise in both the domain and the related ontology. Different strategies have thus been proposed to ease this indexing process, each one taking advantage from the features of the document. In this paper we present USI (User-oriented Semantic Indexer), a fast and intuitive method for indexing tasks. We introduce a solution to suggest a conceptual annotation for new entities based on related already indexed documents. Our results, compared to those obtained by previous authors using the MeSH thesaurus and a dataset of biomedical papers, show that the method surpasses text-specific methods in terms of both quality and speed. Evaluations are done via usual metrics and semantic similarity. By only relying on neighbor documents, the User-oriented Semantic Indexer does not need a representative learning set. Yet, it provides better results than the other approaches by giving a consistent annotation scored with a global criterion - instead of one score per concept.

  13. Chromatic information and feature detection in fast visual analysis

    DOE PAGES

    Del Viva, Maria M.; Punzi, Giovanni; Shevell, Steven K.; ...

    2016-08-01

    The visual system is able to recognize a scene based on a sketch made of very simple features. This ability is likely crucial for survival, when fast image recognition is necessary, and it is believed that a primal sketch is extracted very early in the visual processing. Such highly simplified representations can be sufficient for accurate object discrimination, but an open question is the role played by color in this process. Rich color information is available in natural scenes, yet artist's sketches are usually monochromatic; and, black-and-white movies provide compelling representations of real world scenes. Also, the contrast sensitivity of color is low at fine spatial scales. We approach the question from the perspective of optimal information processing by a system endowed with limited computational resources. We show that when such limitations are taken into account, the intrinsic statistical properties of natural scenes imply that the most effective strategy is to ignore fine-scale color features and devote most of the bandwidth to gray-scale information. We find confirmation of these information-based predictions from psychophysics measurements of fast-viewing discrimination of natural scenes. As a result, we conclude that the lack of colored features in our visual representation, and our overall low sensitivity to high-frequency color components, are a consequence of an adaptation process, optimizing the size and power consumption of our brain for the visual world we live in.

  15. A simple and rapid creatinine sensing via DLS selectivity, using calix[4]arene thiol functionalized gold nanoparticles.

    PubMed

    Sutariya, Pinkesh G; Pandya, Alok; Lodha, Anand; Menon, Shobhana K

    2016-01-15

    A new, simple, ultra-sensitive and selective approach has been reported for the "on spot" colorimetric detection of creatinine based on calix[4]arene functionalized gold nanoparticles (AuNPs), with excellent discrimination in the presence of other biomolecules. The lower detection limit of the method is 2.16 nM. The gold nanoparticles and p-tert-butylcalix[4]arene were synthesized by a microwave-assisted method. Specifically, in our study, we used dynamic light scattering (DLS), which is a powerful method for the determination of small changes in particle size and improves the selectivity and sensitivity of the creatinine detection system over the colorimetric method. The nanoassembly is characterized by transmission electron microscopy (TEM), DLS, UV-vis and ESI-MS spectroscopy, which demonstrate the binding affinity arising from hydrogen bonding and electrostatic interaction between the -NH group of creatinine and pSDSC4. It exhibits a fast response time (<60 s) to creatinine and has a long shelf-life (>5 weeks). The developed pSDSC4-AuNPs based creatinine biosensor will be established as a simple, reliable and accurate tool for the determination of creatinine in human urine samples. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. Operator priming and generalization of practice in adults' simple arithmetic.

    PubMed

    Chen, Yalin; Campbell, Jamie I D

    2016-04-01

    There is a renewed debate about whether educated adults solve simple addition problems (e.g., 2 + 3) by direct fact retrieval or by fast, automatic counting-based procedures. Recent research testing adults' simple addition and multiplication showed that a 150-ms preview of the operator (+ or ×) facilitated addition, but not multiplication, suggesting that a general addition procedure was primed by the + sign. In Experiment 1 (n = 36), we applied this operator-priming paradigm to rule-based problems (0 + N = N, 1 × N = N, 0 × N = 0) and 1 + N problems with N ranging from 0 to 9. For the rule-based problems, we found both operator-preview facilitation and generalization of practice (e.g., practicing 0 + 3 sped up unpracticed 0 + 8), the latter being a signature of procedure use; however, we also found operator-preview facilitation for 1 + N in the absence of generalization, which implies the 1 + N problems were solved by fact retrieval but nonetheless were facilitated by an operator preview. Thus, the operator preview effect does not discriminate procedure use from fact retrieval. Experiment 2 (n = 36) investigated whether a population with advanced mathematical training-engineering and computer science students-would show generalization of practice for nonrule-based simple addition problems (e.g., 1 + 4, 4 + 7). The 0 + N problems again presented generalization, whereas no nonzero problem type did; but all nonzero problems sped up when the identical problems were retested, as predicted by item-specific fact retrieval. The results pose a strong challenge to the generality of the proposal that skilled adults' simple addition is based on fast procedural algorithms, and instead support a fact-retrieval model of fast addition performance. (c) 2016 APA, all rights reserved).

  17. A Simple Device to Measure Root Growth Rates

    ERIC Educational Resources Information Center

    Rauser, Wilfried E.; Horton, Roger F.

    1975-01-01

    Describes construction and use of a simple auxanometer which students can use to accurately measure root growth rates of intact seedlings. Typical time course data are presented for the effect of ethylene and indole acetic acid on pea root growth. (Author/BR)

  18. Fast and accurate modeling of nonlinear pulse propagation in graded-index multimode fibers.

    PubMed

    Conforti, Matteo; Mas Arabi, Carlos; Mussot, Arnaud; Kudlinski, Alexandre

    2017-10-01

    We develop a model for the description of nonlinear pulse propagation in multimode optical fibers with a parabolic refractive index profile. It consists of a 1+1D generalized nonlinear Schrödinger equation with a periodic nonlinear coefficient, which can be solved in an extremely fast and efficient way. The model is able to quantitatively reproduce recently observed phenomena like geometric parametric instability and broadband dispersive wave emission. We envisage that our equation will represent a valuable tool for the study of spatiotemporal nonlinear dynamics in the growing field of multimode fiber optics.
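
    As a rough illustration of the kind of model described, the sketch below integrates a 1+1D nonlinear Schrödinger equation with a z-periodic nonlinear coefficient using the standard split-step Fourier method. The normalized parameters, the sinusoidal form of gamma(z), and the sign conventions are our assumptions, not the authors' model.

```python
import numpy as np

# Model integrated here (normalized units, assumed form):
#   dA/dz = -i*(beta2/2)*d^2A/dt^2 + i*gamma(z)*|A|^2*A
nt, t_max = 1024, 20.0
t = np.linspace(-t_max, t_max, nt, endpoint=False)
w = 2 * np.pi * np.fft.fftfreq(nt, d=t[1] - t[0])

beta2 = -1.0                        # illustrative group-velocity dispersion
gamma0, z_period = 1.0, 0.5         # mean nonlinearity and its oscillation period
dz, n_steps = 1e-3, 5000

A = 1.0 / np.cosh(t)                # illustrative input pulse (sech)
half_disp = np.exp(1j * (beta2 / 2) * w**2 * (dz / 2))   # half-step linear operator

for n in range(n_steps):
    gamma_z = gamma0 * (1 + 0.5 * np.sin(2 * np.pi * n * dz / z_period))  # periodic nonlinear coefficient
    A = np.fft.ifft(half_disp * np.fft.fft(A))        # half dispersion step
    A = A * np.exp(1j * gamma_z * np.abs(A)**2 * dz)  # full nonlinear step
    A = np.fft.ifft(half_disp * np.fft.fft(A))        # half dispersion step

print("output peak power:", float(np.max(np.abs(A))**2))
```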

  19. Sparse imaging for fast electron microscopy

    NASA Astrophysics Data System (ADS)

    Anderson, Hyrum S.; Ilic-Helms, Jovana; Rohrer, Brandon; Wheeler, Jason; Larson, Kurt

    2013-02-01

    Scanning electron microscopes (SEMs) are used in neuroscience and materials science to image centimeters of sample area at nanometer scales. Since imaging rates are in large part SNR-limited, large collections can lead to weeks of around-the-clock imaging time. To increase data collection speed, we propose and demonstrate on an operational SEM a fast method to sparsely sample and reconstruct smooth images. To accurately localize the electron probe position at fast scan rates, we model the dynamics of the scan coils, and use the model to rapidly and accurately visit a randomly selected subset of pixel locations. Images are reconstructed from the undersampled data by compressed sensing inversion using image smoothness as a prior. We report image fidelity as a function of acquisition speed by comparing traditional raster to sparse imaging modes. Our approach is equally applicable to other domains of nanometer microscopy in which the time to position a probe is a limiting factor (e.g., atomic force microscopy), or in which excessive electron doses might otherwise alter the sample being observed (e.g., scanning transmission electron microscopy).

  20. Adaptive optics in spinning disk microscopy: improved contrast and brightness by a simple and fast method.

    PubMed

    Fraisier, V; Clouvel, G; Jasaitis, A; Dimitrov, A; Piolot, T; Salamero, J

    2015-09-01

    Multiconfocal microscopy gives a good compromise between fast imaging and reasonable resolution. However, the low intensity of live fluorescent emitters is a major limitation to this technique. Aberrations induced by the optical setup, especially the mismatch of the refractive index and the biological sample itself, distort the point spread function and further reduce the amount of detected photons. Altogether, this leads to impaired image quality, preventing accurate analysis of molecular processes in biological samples and imaging deep in the sample. The amount of detected fluorescence can be improved with adaptive optics. Here, we used a compact adaptive optics module (adaptive optics box for sectioning optical microscopy), which was specifically designed for spinning disk confocal microscopy. The module overcomes undesired anomalies by correcting for most of the aberrations in confocal imaging. Existing aberration detection methods require prior illumination, which bleaches the sample. To avoid multiple exposures of the sample, we established an experimental model describing the depth dependence of major aberrations. This model allows us to correct for those aberrations when performing a z-stack, gradually increasing the amplitude of the correction with depth. It does not require illumination of the sample for aberration detection, thus minimizing photobleaching and phototoxicity. With this model, we improved both signal-to-background ratio and image contrast. Here, we present comparative studies on a variety of biological samples. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.

  1. GenoCore: A simple and fast algorithm for core subset selection from large genotype datasets.

    PubMed

    Jeong, Seongmun; Kim, Jae-Yoon; Jeong, Soon-Chun; Kang, Sung-Taeg; Moon, Jung-Kyung; Kim, Namshin

    2017-01-01

    Selecting core subsets from plant genotype datasets is important for enhancing cost-effectiveness and shortening the time required for analyses of genome-wide association studies (GWAS), genomics-assisted breeding of crop species, etc. Recently, a large number of genetic markers (>100,000 single nucleotide polymorphisms) have been identified from high-density single nucleotide polymorphism (SNP) arrays and next-generation sequencing (NGS) data. However, there is no software available for picking out an efficient and consistent core subset from such a huge dataset. It is necessary to develop software that can extract genetically important samples from a population with coherence. We here present a new program, GenoCore, which can quickly and efficiently find the core subset representing the entire population. We introduce simple measures of coverage and diversity scores, which reflect genotype errors and genetic variations, and can help to select samples rapidly and accurately from crop genotype datasets. Comparisons of our method to other core collection software using example datasets are performed to validate the performance according to genetic distance, diversity, coverage, required system resources, and the number of selected samples. GenoCore selects the smallest, most consistent, and most representative core collection from all samples, using less memory with more efficient scores, and shows greater genetic coverage compared to the other software tested. GenoCore was written in the R language, and can be accessed online with an example dataset and test results at https://github.com/lovemun/Genocore.
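
    The coverage and diversity scores are defined in the paper itself; the sketch below only shows the general shape of a greedy core-subset selection driven by a simple allele-coverage score over a made-up genotype matrix, and is not GenoCore's actual scoring.

```python
import numpy as np

def coverage(subset, genotypes, full_alleles):
    """Fraction of (marker, allele) pairs in the whole panel that `subset` carries."""
    if not subset:
        return 0.0
    sub = genotypes[subset, :]
    covered = {(j, a) for j in range(genotypes.shape[1]) for a in np.unique(sub[:, j])}
    return len(covered) / len(full_alleles)

def greedy_core(genotypes, target=0.99):
    """Greedily add the sample that raises coverage most until the target is reached."""
    full = {(j, a) for j in range(genotypes.shape[1]) for a in np.unique(genotypes[:, j])}
    selected = []
    while coverage(selected, genotypes, full) < target:
        remaining = [i for i in range(genotypes.shape[0]) if i not in selected]
        best = max(remaining, key=lambda i: coverage(selected + [i], genotypes, full))
        selected.append(best)
    return selected

rng = np.random.default_rng(0)
G = rng.integers(0, 3, size=(30, 50))   # 30 samples x 50 markers coded 0/1/2
core = greedy_core(G)
print(len(core), "samples selected:", core)
```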

  2. A binary method for simple and accurate two-dimensional cursor control from EEG with minimal subject training.

    PubMed

    Kayagil, Turan A; Bai, Ou; Henriquez, Craig S; Lin, Peter; Furlani, Stephen J; Vorbach, Sherry; Hallett, Mark

    2009-05-06

    Brain-computer interfaces (BCI) use electroencephalography (EEG) to interpret user intention and control an output device accordingly. We describe a novel BCI method to use a signal from five EEG channels (comprising one primary channel with four additional channels used to calculate its Laplacian derivation) to provide two-dimensional (2-D) control of a cursor on a computer screen, with simple threshold-based binary classification of band power readings taken over pre-defined time windows during subject hand movement. We tested the paradigm with four healthy subjects, none of whom had prior BCI experience. Each subject played a game wherein he or she attempted to move a cursor to a target within a grid while avoiding a trap. We also present supplementary results including one healthy subject using motor imagery, one primary lateral sclerosis (PLS) patient, and one healthy subject using a single EEG channel without Laplacian derivation. For the four healthy subjects using real hand movement, the system provided accurate cursor control with little or no required user training. The average accuracy of the cursor movement was 86.1% (SD 9.8%), which is significantly better than chance (p = 0.0015). The best subject achieved a control accuracy of 96%, with only one incorrect bit classification out of 47. The supplementary results showed that control can be achieved under the respective experimental conditions, but with reduced accuracy. The binary method provides naïve subjects with real-time control of a cursor in 2-D using dichotomous classification of synchronous EEG band power readings from a small number of channels during hand movement. The primary strengths of our method are simplicity of hardware and software, and high accuracy when used by untrained subjects.
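
    The study's actual montage, frequency band, window lengths, and per-subject thresholds are described in its methods; the sketch below only illustrates the general scheme of thresholding band power from a Laplacian-derived channel to yield one bit per time window, using synthetic data and assumed parameter values.

```python
import numpy as np

fs = 256                                   # assumed sampling rate (Hz)
rng = np.random.default_rng(1)

def laplacian(primary, surround):
    """Small Laplacian derivation: primary channel minus the mean of 4 surrounding channels."""
    return primary - surround.mean(axis=0)

def band_power(x, fs, lo=8.0, hi=12.0):
    """Power in the [lo, hi] Hz band from the FFT of one analysis window."""
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= lo) & (freqs <= hi)
    return spectrum[band].sum()

# Synthetic 2 s window from 5 channels (1 primary + 4 surrounding).
eeg = rng.standard_normal((5, 2 * fs))
signal = laplacian(eeg[0], eeg[1:])

threshold = 500.0                          # assumed; in practice calibrated per subject
bit = int(band_power(signal, fs) > threshold)
print("classified bit:", bit)
```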

  3. Generating Accurate Urban Area Maps from Nighttime Satellite (DMSP/OLS) Data

    NASA Technical Reports Server (NTRS)

    Imhoff, Marc; Lawrence, William; Elvidge, Christopher

    2000-01-01

    There has been an increasing interest by the international research community to use the nighttime acquired "city-lights" data sets collected by the US Defense Meteorological Satellite Program's Operational Linescan system to study issues relative to urbanization. Many researchers are interested in using these data to estimate human demographic parameters over large areas and then characterize the interactions between urban development , natural ecosystems, and other aspects of the human enterprise. Many of these attempts rely on an ability to accurately identify urbanized area. However, beyond the simple determination of the loci of human activity, using these data to generate accurate estimates of urbanized area can be problematic. Sensor blooming and registration error can cause large overestimates of urban land based on a simple measure of lit area from the raw data. We discuss these issues, show results of an attempt to do a historical urban growth model in Egypt, and then describe a few basic processing techniques that use geo-spatial analysis to threshold the DMSP data to accurately estimate urbanized areas. Algorithm results are shown for the United States and an application to use the data to estimate the impact of urban sprawl on sustainable agriculture in the US and China is described.

  4. Fast-SG: an alignment-free algorithm for hybrid assembly.

    PubMed

    Di Genova, Alex; Ruz, Gonzalo A; Sagot, Marie-France; Maass, Alejandro

    2018-05-01

    Long-read sequencing technologies are the ultimate solution for genome repeats, allowing near reference-level reconstructions of large genomes. However, long-read de novo assembly pipelines are computationally intense and require a considerable amount of coverage, thereby hindering their broad application to the assembly of large genomes. Alternatively, hybrid assembly methods that combine short- and long-read sequencing technologies can reduce the time and cost required to produce de novo assemblies of large genomes. Here, we propose a new method, called Fast-SG, that uses a new ultrafast alignment-free algorithm specifically designed for constructing a scaffolding graph using light-weight data structures. Fast-SG can construct the graph from either short or long reads. This allows the reuse of efficient algorithms designed for short-read data and permits the definition of novel modular hybrid assembly pipelines. Using comprehensive standard datasets and benchmarks, we show how Fast-SG outperforms the state-of-the-art short-read aligners when building the scaffolding graph and can be used to extract linking information from either raw or error-corrected long reads. We also show how a hybrid assembly approach using Fast-SG with shallow long-read coverage (5X) and moderate computational resources can produce long-range and accurate reconstructions of the genomes of Arabidopsis thaliana (Ler-0) and human (NA12878). Fast-SG opens a door to achieve accurate hybrid long-range reconstructions of large genomes with low effort, high portability, and low cost.

  5. Abel inversion using fast Fourier transforms.

    PubMed

    Kalal, M; Nugent, K A

    1988-05-15

    A fast Fourier transform based Abel inversion technique is proposed. The method is faster than previously used techniques, potentially very accurate (even for a relatively small number of points), and capable of handling large data sets. The technique is discussed in the context of its use with 2-D digital interferogram analysis algorithms. Several examples are given.

  6. Generating Fast and Accurate Compliance Reports for Various Data Rates

    NASA Astrophysics Data System (ADS)

    Penugonda, Srinath

    As the demands on industry data rates have increased, there is a need for interoperable interfaces to function flawlessly. Added to this complexity, the number of I/O data lines is also increasing, making it more time consuming to design and test. This in general leads to the creation of compliance standards to which interfaces must adhere. The goal of this thesis is to aid signal integrity engineers with a better and faster way of rendering a full picture of the interface compliance parameters. Three different interfaces at various data rates were chosen. They are: 25 Gbps Very Short Reach (VSR) based on the Optical Internetworking Forum (OIF); Mobile Industry Processor Interface (MIPI), particularly for cameras, based on the MIPI Alliance organization, up to 1.5 Gbps; and a passive Universal Serial Bus (USB) Type-C cable based on the USB organization, particularly generation I with a data rate of 10 Gbps. After a full understanding of each of the interfaces, complete end-to-end reports for each of the interfaces were developed with an easy-to-use user interface. A standard one-to-one comparison is done with commercially available software tools for the above-mentioned interfaces. The tools were developed in MATLAB and Python. Data was usually obtained by probing at the interconnect, from either an oscilloscope or a vector network analyzer.

  7. Fast neural network surrogates for very high dimensional physics-based models in computational oceanography.

    PubMed

    van der Merwe, Rudolph; Leen, Todd K; Lu, Zhengdong; Frolov, Sergey; Baptista, Antonio M

    2007-05-01

    We present neural network surrogates that provide extremely fast and accurate emulation of a large-scale circulation model for the coupled Columbia River, its estuary and near ocean regions. The circulation model has O(10^7) degrees of freedom, is highly nonlinear and is driven by ocean, atmospheric and river influences at its boundaries. The surrogates provide accurate emulation of the full circulation code and run over 1000 times faster. Such fast dynamic surrogates will enable significant advances in ensemble forecasts in oceanography and weather.

  8. On the nature of fast sausage waves in coronal loops

    NASA Astrophysics Data System (ADS)

    Bahari, Karam

    2018-05-01

    The effect of the parameters of coronal loops on the nature of fast sausage waves is investigated. To do this, three models of the coronal loop were considered: a simple loop model, a current-carrying loop model, and a model with radially structured density called the "Inner μ" profile. For all the models the magnetohydrodynamic (MHD) equations were solved analytically in the linear approximation and the restoring forces of the oscillations were obtained. The ratio of the magnetic tension force to the pressure gradient force was obtained as a function of the distance from the axis of the loop. In the simple loop model, for all values of the loop parameters the fast sausage waves have a mixed nature of Alfvénic and fast MHD waves; in the current-carrying loop model with a thick annulus and low density contrast the fast sausage waves can be considered as purely Alfvénic waves in the core region of the loop; and in the "Inner μ" profile, for each set of loop parameters the wave can be considered as a purely Alfvénic wave in some regions of the loop.

  9. The fast kinematic magnetic dynamo and the dissipationless limit

    NASA Technical Reports Server (NTRS)

    Finn, John M.; Ott, Edward

    1990-01-01

    The evolution of the magnetic field in models that incorporate chaotic field line stretching, field cancellation, and finite magnetic Reynolds number is examined analytically and numerically. Although the models used here are highly idealized, it is claimed that they display and illustrate typical behavior relevant to fast magnetic dynamo action. It is shown, in particular, that consideration of magnetic flux through a finite fixed surface provides a simple and effective way of deducing fast dynamo behavior from the zero resistivity equation. Certain aspects of the fast dynamo problem can thus be reduced to a study of nonlinear dynamic properties of the underlying flow.

  10. A simple and fast method for extraction and quantification of cryptophyte phycoerythrin.

    PubMed

    Thoisen, Christina; Hansen, Benni Winding; Nielsen, Søren Laurentius

    2017-01-01

    The microalgal pigment phycoerythrin (PE) is of commercial interest as a natural colorant in food and cosmetics, as well as a fluoroprobe for laboratory analysis. Several methods for extraction and quantification of PE are available, but they typically comprise various extraction buffers, repetitive freeze-thaw cycles and liquid nitrogen, making extraction procedures more complicated. A simple method for extraction of PE from cryptophytes is described using standard laboratory materials and equipment. The cryptophyte cells on the filters were disrupted at -80 °C, phosphate buffer was added for extraction at 4 °C, and this was followed by absorbance measurement. The cryptophyte Rhodomonas salina was used as a model organism. •Simple method for extraction and quantification of phycoerythrin from cryptophytes.•Minimal usage of equipment and chemicals, and low labor costs.•Applicable for industrial and biological purposes.

  11. A simple capacitive method to evaluate ethanol fuel samples

    NASA Astrophysics Data System (ADS)

    Vello, Tatiana P.; de Oliveira, Rafael F.; Silva, Gustavo O.; de Camargo, Davi H. S.; Bufon, Carlos C. B.

    2017-02-01

    Ethanol is a biofuel used worldwide. However, the presence of excessive water, either from the distillation process or from fraudulent adulteration, is a major concern in the use of ethanol fuel. High water levels may cause engine malfunction, in addition to being considered illegal. Here, we describe the development of a simple, fast and accurate platform based on nanostructured sensors to evaluate ethanol samples. The device fabrication is facile, based on standard microfabrication and thin-film deposition methods. The sensor operation relies on capacitance measurements employing a parallel plate capacitor containing a conformational aluminum oxide (Al2O3) thin layer (15 nm). The sensor operates over the full range of water concentration, i.e., from approximately 0% to 100% vol. of water in ethanol, with water traces being detectable down to 0.5% vol. These characteristics make the proposed device unique with respect to other platforms. Finally, the good agreement between the sensor response and analyses performed by gas chromatography of ethanol biofuel endorses the accuracy of the proposed method. Due to the full operation range, the reported sensor has the technological potential for use as a point-of-care analytical tool at gas stations or in the chemical, pharmaceutical, and beverage industries, to mention a few.
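
    To make plain why capacitance tracks water content, the sketch below evaluates an ideal parallel-plate capacitor whose dielectric is a water-ethanol mixture. The linear volume-weighted mixing of permittivities, the geometry, and the permittivity values are simplifying assumptions for illustration; they do not model the actual Al2O3-coated nanostructured device.

```python
EPS0 = 8.854e-12        # vacuum permittivity (F/m)
EPS_ETHANOL = 24.5      # relative permittivity of ethanol (approximate, room temperature)
EPS_WATER = 80.1        # relative permittivity of water (approximate, room temperature)

def capacitance(water_fraction, area_m2=1e-4, gap_m=1e-4):
    """Ideal parallel-plate capacitance with a water-ethanol mixture as dielectric.

    Assumes a simple volume-weighted average of the two relative permittivities.
    """
    eps_mix = water_fraction * EPS_WATER + (1 - water_fraction) * EPS_ETHANOL
    return EPS0 * eps_mix * area_m2 / gap_m

for frac in (0.0, 0.005, 0.05, 0.5, 1.0):        # 0% to 100% vol. water
    print(f"{frac:5.1%} water -> {capacitance(frac) * 1e12:.2f} pF")
```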

  12. Removal of the Gibbs phenomenon and its application to fast-Fourier-transform-based mode solvers.

    PubMed

    Wangüemert-Pérez, J G; Godoy-Rubio, R; Ortega-Moñux, A; Molina-Fernández, I

    2007-12-01

    A simple strategy for accurately recovering discontinuous functions from their Fourier series coefficients is presented. The aim of the proposed approach, named spectrum splitting (SS), is to remove the Gibbs phenomenon by making use of signal-filtering-based concepts and some properties of the Fourier series. While the technique can be used in a vast range of situations, it is particularly suitable for being incorporated into fast-Fourier-transform-based electromagnetic mode solvers (FFT-MSs), which are known to suffer from very poor convergence rates when applied to situations where the field distributions are highly discontinuous (e.g., silicon-on-insulator photonic wires). The resultant method, SS-FFT-MS, is exhaustively tested under the assumption of a simplified one-dimensional model, clearly showing a dramatic improvement of the convergence rates with respect to the original FFT-based methods.

  13. Peptide-gated ion channels and the simple nervous system of Hydra.

    PubMed

    Gründer, Stefan; Assmann, Marc

    2015-02-15

    Neurons either use electrical or chemical synapses to communicate with each other. Transmitters at chemical synapses are either small molecules or neuropeptides. After binding to their receptors, transmitters elicit postsynaptic potentials, which can either be fast and transient or slow and longer lasting, depending on the type of receptor. Fast transient potentials are mediated by ionotropic receptors and slow long-lasting potentials by metabotropic receptors. Transmitters and receptors are well studied for animals with a complex nervous system such as vertebrates and insects, but much less is known for animals with a simple nervous system like Cnidaria. As cnidarians arose early in animal evolution, nervous systems might have first evolved within this group and the study of neurotransmission in cnidarians might reveal an ancient mechanism of neuronal communication. The simple nervous system of the cnidarian Hydra extensively uses neuropeptides and, recently, we cloned and functionally characterized an ion channel that is directly activated by neuropeptides of the Hydra nervous system. These results demonstrate the existence of peptide-gated ion channels in Hydra, suggesting they mediate fast transmission in its nervous system. As related channels are also present in the genomes of the cnidarian Nematostella, of placozoans and of ctenophores, it should be considered that the early nervous systems of cnidarians and ctenophores have co-opted neuropeptides for fast transmission at chemical synapses. © 2015. Published by The Company of Biologists Ltd.

  14. Pediatric siMS score: A new, simple and accurate continuous metabolic syndrome score for everyday use in pediatrics.

    PubMed

    Vukovic, Rade; Milenkovic, Tatjana; Stojan, George; Vukovic, Ana; Mitrovic, Katarina; Todorovic, Sladjana; Soldatovic, Ivan

    2017-01-01

    The dichotomous nature of the current definition of metabolic syndrome (MS) in youth results in loss of information. On the other hand, the calculation of continuous MS scores using standardized residuals in linear regression (Z scores) or factor scores of principal component analysis (PCA) is highly impractical for clinical use. Recently, a novel, easily calculated continuous MS score called the siMS score was developed based on the IDF MS criteria for the adult population. The aim was to develop a Pediatric siMS score (PsiMS), a modified continuous MS score for use in obese youth, based on the original siMS score, while keeping the score as simple as possible and retaining high correlation with more complex scores. The database consisted of clinical data on 153 obese (BMI ≥95th percentile) children and adolescents. Continuous MS scores were calculated using Z scores and PCA, as well as the original siMS score. Four variants of the PsiMS score were developed in accordance with IDF criteria for MS in youth, and the correlation of these scores with PCA- and Z-score-derived continuous MS scores was assessed. The PsiMS score calculated using the formula (2 × Waist/Height) + (Glucose (mmol/l)/5.6) + (Triglycerides (mmol/l)/1.7) + (Systolic BP/130) - (HDL (mmol/l)/1.02) showed the highest correlation with most of the complex continuous scores (0.792-0.901). The original siMS score also showed high correlation with the continuous MS scores. The PsiMS score represents a practical and accurate score for the evaluation of MS in obese youth. The original siMS score should be used when evaluating large cohorts consisting of both adults and children.
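
    The PsiMS formula quoted above translates directly into code; the example values below are invented, and waist and height are taken in the same length unit so that the 2 × Waist/Height term is dimensionless.

```python
def psims_score(waist_cm, height_cm, glucose_mmol_l, triglycerides_mmol_l,
                systolic_bp_mmhg, hdl_mmol_l):
    """Pediatric siMS score:
    (2*Waist/Height) + (Glucose/5.6) + (Triglycerides/1.7) + (SystolicBP/130) - (HDL/1.02)
    """
    return (2 * waist_cm / height_cm
            + glucose_mmol_l / 5.6
            + triglycerides_mmol_l / 1.7
            + systolic_bp_mmhg / 130
            - hdl_mmol_l / 1.02)

# Illustrative (made-up) values for one obese adolescent; prints about 3.12.
print(round(psims_score(waist_cm=95, height_cm=160, glucose_mmol_l=5.2,
                        triglycerides_mmol_l=1.9, systolic_bp_mmhg=125,
                        hdl_mmol_l=1.1), 2))
```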

  15. Simple and ultra-fast recognition and quantitation of compounded monoclonal antibodies: Application to flow injection analysis combined to UV spectroscopy and matching method.

    PubMed

    Jaccoulet, E; Schweitzer-Chaput, A; Toussaint, B; Prognon, P; Caudron, E

    2018-09-01

    Compounding of monoclonal antibodies (mAbs) is constantly increasing in hospitals. Quality control (QC) of the compounded mAbs, based on quantification and identification, is required to prevent potential errors, and a fast method is needed to manage outpatient chemotherapy administration. A simple and ultra-fast (less than 30 s) method using flow injection analysis combined with a least-squares matching method provided by the analyzer software was performed and evaluated for the routine hospital QC of three compounded mAbs: bevacizumab, infliximab and rituximab. The method was evaluated through qualitative and quantitative parameters. Preliminary analysis of the UV absorption and second derivative spectra of the mAbs allowed us to adapt the analytical conditions according to the therapeutic range of the mAbs. In terms of quantitative QC, linearity, accuracy and precision were assessed as specified in ICH guidelines. Very satisfactory recovery was achieved and the RSDs (%) of the intermediate precision were less than 1.1%. Qualitative analytical parameters were also evaluated in terms of specificity, sensitivity and global precision through a confusion matrix. Results were concentration and mAb dependent, and excellent (100%) specificity and sensitivity were reached within a specific concentration range. Finally, routine application on "real life" samples (n = 209) from different batches of the three mAbs complied with the specifications of the quality control, i.e., excellent identification (100%) and concentrations within ±15% of the target belonging to the calibration range. The successful use of the combination of second derivative spectroscopy and the partial least-squares matching method demonstrated the interest of FIA for the ultra-fast QC of mAbs after compounding. Copyright © 2018 Elsevier B.V. All rights reserved.

  16. A Fast-Starting Robotic Fish

    NASA Astrophysics Data System (ADS)

    Modarres-Sadeghi, Yahya; Watts, Matthew; Conte, Joe; Hover, Franz; Triantafyllou, Michael

    2009-11-01

    We have built a simple mechanical system to emulate the fast-start performance of fish. The system consisted of a thin metal beam covered by a urethane rubber fish body. The body form of the mechanical fish was modeled on a pike, the most extensively studied fast-start specialist. The mechanical fish was held in curvature and hung in water by two restraining lines, which were simultaneously released by pneumatic cutting mechanisms. The potential energy in the beam was transferred into the fluid, thereby accelerating the fish, similar to a pike. We measured the resulting velocity and acceleration, as well as the efficiency of propulsion for the mechanical fish model, and also ran a series of flow visualization tests to observe the resulting flow pattern. We also studied the influence of the stiffness and geometry of the tail on the efficiency of propulsion and the flow pattern. The hydrodynamic efficiency of the fish, calculated from the transfer of energy, was around 10%. Flow visualization of the mechanical fast-start wake showed that the acceleration is associated with the fast movement of an intense vortex in a near-lateral direction.

  17. Low voltage-driven oxide phototransistors with fast recovery, high signal-to-noise ratio, and high responsivity fabricated via a simple defect-generating process

    PubMed Central

    Yun, Myeong Gu; Kim, Ye Kyun; Ahn, Cheol Hyoun; Cho, Sung Woon; Kang, Won Jun; Cho, Hyung Koun; Kim, Yong-Hoon

    2016-01-01

    We have demonstrated that photo-thin film transistors (photo-TFTs) fabricated via a simple defect-generating process could achieve fast recovery, a high signal to noise (S/N) ratio, and high sensitivity. The photo-TFTs are inverted-staggered bottom-gate type indium-gallium-zinc-oxide (IGZO) TFTs fabricated using atomic layer deposition (ALD)-derived Al2O3 gate insulators. The surfaces of the Al2O3 gate insulators are damaged by ion bombardment during the deposition of the IGZO channel layers by sputtering and the damage results in the hysteresis behavior of the photo-TFTs. The hysteresis loops broaden as the deposition power density increases. This implies that we can easily control the amount of the interface trap sites and/or trap sites in the gate insulator near the interface. The photo-TFTs with large hysteresis-related defects have high S/N ratio and fast recovery in spite of the low operation voltages including a drain voltage of 1 V, positive gate bias pulse voltage of 3 V, and gate voltage pulse width of 3 V (0 to 3 V). In addition, through the hysteresis-related defect-generating process, we have achieved a high responsivity since the bulk defects that can be photo-excited and eject electrons also increase with increasing deposition power density. PMID:27553518

  18. Fast, accurate, small-scale 3D scene capture using a low-cost depth sensor

    PubMed Central

    Carey, Nicole; Nagpal, Radhika; Werfel, Justin

    2017-01-01

    Commercially available depth sensing devices are primarily designed for domains that are either macroscopic or static. We develop a solution for fast microscale 3D reconstruction using off-the-shelf components. By the addition of lenses, precise calibration of camera internals and positioning, and development of bespoke software, we turn an infrared depth sensor designed for human-scale motion and object detection into a device with mm-level accuracy capable of recording at up to 30 Hz. PMID:28758159

  19. Fast and accurate modeling of stray light in optical systems

    NASA Astrophysics Data System (ADS)

    Perrin, Jean-Claude

    2017-11-01

    The first problem to be solved in most optical designs with respect to stray light is that of internal reflections on the several surfaces of individual lenses and mirrors, and on the detector itself. The stray light ratio can be considerably reduced by taking stray light into account during the optimization, to find solutions in which the irradiance due to these ghosts is kept to the minimum possible value. Unfortunately, the routines available in most optical design software, for example CODE V, do not by themselves permit exact quantitative calculations of the stray light due to these ghosts. Therefore, the engineer in charge of the optical design is confronted with the problem of using two different software packages, one for the design and optimization, for example CODE V, and one for stray light analysis, for example ASAP. This makes a complete optimization very complex. Nevertheless, using special techniques and combinations of the routines available in CODE V, it is possible to have at one's disposal a software macro tool to do such an analysis quickly and accurately, including Monte-Carlo ray tracing, or taking into account diffraction effects. This analysis can be done in a few minutes, compared to hours with other software.

  20. Development of the Fully Adaptive Storm Tide (FAST) Model for hurricane induced storm surges and associated inundation

    NASA Astrophysics Data System (ADS)

    Teng, Y. C.; Kelly, D.; Li, Y.; Zhang, K.

    2016-02-01

    A new state-of-the-art model (the Fully Adaptive Storm Tide model, FAST) for the prediction of storm surges over complex landscapes is presented. The FAST model is based on the conservation form of the full non-linear depth-averaged long wave equations. The equations are solved via an explicit finite volume scheme with interfacial fluxes computed via Osher's approximate Riemann solver. Geometric source terms are treated in a high order, well-balanced manner. The numerical solution technique has been chosen to enable the accurate simulation of wetting and drying over complex topography. Another important feature of the FAST model is the use of a simple underlying Cartesian mesh with tree-based static and dynamic adaptive mesh refinement (AMR). This permits the simulation of unsteady flows over varying landscapes (including localized features such as canals) by locally increasing (or relaxing) grid resolution in a dynamic fashion. The use of (dynamic) AMR lowers the computational cost of the storm surge model whilst retaining high resolution (and thus accuracy) where and when it is required. In addition, the FAST model has been designed to execute in a parallel computational environment with localized time-stepping. The FAST model has already been carefully verified against a series of benchmark-type problems (Kelly et al. 2015). Here we present two simulations of the storm tide due to Hurricane Ike (2008) and Hurricane Sandy (2012). The model incorporates high resolution LIDAR data for the major portion of New York City. Results compare favorably with water elevations measured by NOAA tidal gauges and by deployed mobile sensors, and with high water marks collected by the USGS.

  1. Automatic building detection based on Purposive FastICA (PFICA) algorithm using monocular high resolution Google Earth images

    NASA Astrophysics Data System (ADS)

    Ghaffarian, Saman; Ghaffarian, Salar

    2014-11-01

    This paper proposes an improved FastICA model named Purposive FastICA (PFICA), initialized by a simple color space transformation and a novel masking approach, to automatically detect buildings from high resolution Google Earth imagery. ICA and FastICA algorithms are Blind Source Separation (BSS) techniques for unmixing source signals using reference data sets. In order to overcome the limitations of the ICA and FastICA algorithms and make them purposeful, we developed a novel method involving three main steps: (1) improving the FastICA algorithm using the Moore-Penrose pseudo-inverse matrix model; (2) automated seeding of the PFICA algorithm based on the LUV color space and proposed simple rules to split the image into three regions: shadow + vegetation, bare soil + roads, and buildings, respectively; (3) masking out the final building detection results from the PFICA outputs using the K-means clustering algorithm with two clusters and simple morphological operations to remove noise (sketched below). Evaluation of the results illustrates that buildings detected from dense and suburban districts with diverse characteristics and color combinations using our proposed method have 88.6% and 85.5% overall pixel-based and object-based precision, respectively.
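
    The final masking step (two-cluster K-means on the PFICA output followed by simple morphological clean-up) can be sketched roughly as follows. This is an illustrative reconstruction rather than the authors' code; it assumes scikit-learn and SciPy are available and that the PFICA response is a 2D array:

        import numpy as np
        from sklearn.cluster import KMeans
        from scipy import ndimage

        def building_mask(pfica_output):
            """Cluster a PFICA response image into two groups, keep the brighter
            cluster as the building mask, and clean it with simple morphology."""
            h, w = pfica_output.shape
            labels = KMeans(n_clusters=2, n_init=10).fit_predict(
                pfica_output.reshape(-1, 1)).reshape(h, w)
            # assume buildings correspond to the cluster with the higher mean response
            building_label = int(pfica_output[labels == 1].mean() >
                                 pfica_output[labels == 0].mean())
            mask = labels == building_label
            # morphological opening/closing removes isolated noise pixels
            mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
            mask = ndimage.binary_closing(mask, structure=np.ones((3, 3)))
            return mask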

  2. Gearing up for Fast Grading and Reporting

    ERIC Educational Resources Information Center

    O'Connor, Ken; Jung, Lee Ann; Reeves, Douglas

    2018-01-01

    The authors posit that the traditional grading system involving points and percentages is not the best way to prepare students to be the self-directed, independent learners they need to be. A better system would produce grades that are FAST (fair, accurate, specific, timely). To bring about these changes, school leaders need not create full…

  3. A pipette-based calibration system for fast-scan cyclic voltammetry with fast response times.

    PubMed

    Ramsson, Eric S

    2016-01-01

    Fast-scan cyclic voltammetry (FSCV) is an electrochemical technique that utilizes the oxidation and/or reduction of an analyte of interest to infer rapid changes in concentrations. In order to calibrate the resulting oxidative or reductive current, known concentrations of an analyte must be introduced under controlled settings. Here, I describe a simple and cost-effective method, using a Petri dish and pipettes, for the calibration of carbon fiber microelectrodes (CFMs) using FSCV.

  4. LinkImpute: Fast and Accurate Genotype Imputation for Nonmodel Organisms

    PubMed Central

    Money, Daniel; Gardner, Kyle; Migicovsky, Zoë; Schwaninger, Heidi; Zhong, Gan-Yuan; Myles, Sean

    2015-01-01

    Obtaining genome-wide genotype data from a set of individuals is the first step in many genomic studies, including genome-wide association and genomic selection. All genotyping methods suffer from some level of missing data, and genotype imputation can be used to fill in the missing data and improve the power of downstream analyses. Model organisms like human and cattle benefit from high-quality reference genomes and panels of reference genotypes that aid in imputation accuracy. In nonmodel organisms, however, genetic and physical maps often are either of poor quality or are completely absent, and there are no panels of reference genotypes available. There is therefore a need for imputation methods designed specifically for nonmodel organisms in which genomic resources are poorly developed and marker order is unreliable or unknown. Here we introduce LinkImpute, a software package based on a k-nearest neighbor genotype imputation method, LD-kNNi, which is designed for unordered markers. No physical or genetic maps are required, and it is designed to work on unphased genotype data from heterozygous species. It exploits the fact that markers useful for imputation often are not physically close to the missing genotype but rather distributed throughout the genome. Using genotyping-by-sequencing data from diverse and heterozygous accessions of apples, grapes, and maize, we compare LD-kNNi with several genotype imputation methods and show that LD-kNNi is fast, comparable in accuracy to the best-existing methods, and exhibits the least bias in allele frequency estimates. PMID:26377960
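
    The LD-kNNi idea (choose predictor markers by linkage disequilibrium rather than physical proximity, then impute from the k nearest samples over those markers) can be conveyed with a toy routine. This is not the published LinkImpute implementation, and the distance and weighting details differ from the paper:

        import numpy as np

        def impute_one(G, sample, marker, k=5, l=20):
            """Impute the missing genotype G[sample, marker] (coded 0/1/2, np.nan
            where missing). Toy LD-kNN sketch, not the published LD-kNNi code."""
            has_target = ~np.isnan(G[:, marker])
            # 1. rank other markers by squared correlation (LD) with the target marker
            r2 = np.zeros(G.shape[1])
            for j in range(G.shape[1]):
                if j == marker:
                    continue
                ok = has_target & ~np.isnan(G[:, j])
                if ok.sum() > 2 and G[ok, j].std() > 0 and G[ok, marker].std() > 0:
                    r2[j] = np.corrcoef(G[ok, marker], G[ok, j])[0, 1] ** 2
            ld_markers = np.argsort(r2)[-l:]
            # 2. distance from the focal sample to every other sample over those markers
            dist = np.nanmean(np.abs(G[:, ld_markers] - G[sample, ld_markers]), axis=1)
            dist[sample] = np.inf                 # exclude self
            dist[~has_target] = np.inf            # neighbours must carry the target genotype
            neighbours = np.argsort(dist)[:k]
            # 3. distance-weighted average of the neighbours' genotypes, rounded to 0/1/2
            weights = 1.0 / (dist[neighbours] + 1e-6)
            return float(np.round(np.average(G[neighbours, marker], weights=weights)))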

  5. LinkImpute: Fast and Accurate Genotype Imputation for Nonmodel Organisms.

    PubMed

    Money, Daniel; Gardner, Kyle; Migicovsky, Zoë; Schwaninger, Heidi; Zhong, Gan-Yuan; Myles, Sean

    2015-09-15

    Obtaining genome-wide genotype data from a set of individuals is the first step in many genomic studies, including genome-wide association and genomic selection. All genotyping methods suffer from some level of missing data, and genotype imputation can be used to fill in the missing data and improve the power of downstream analyses. Model organisms like human and cattle benefit from high-quality reference genomes and panels of reference genotypes that aid in imputation accuracy. In nonmodel organisms, however, genetic and physical maps often are either of poor quality or are completely absent, and there are no panels of reference genotypes available. There is therefore a need for imputation methods designed specifically for nonmodel organisms in which genomic resources are poorly developed and marker order is unreliable or unknown. Here we introduce LinkImpute, a software package based on a k-nearest neighbor genotype imputation method, LD-kNNi, which is designed for unordered markers. No physical or genetic maps are required, and it is designed to work on unphased genotype data from heterozygous species. It exploits the fact that markers useful for imputation often are not physically close to the missing genotype but rather distributed throughout the genome. Using genotyping-by-sequencing data from diverse and heterozygous accessions of apples, grapes, and maize, we compare LD-kNNi with several genotype imputation methods and show that LD-kNNi is fast, comparable in accuracy to the best-existing methods, and exhibits the least bias in allele frequency estimates. Copyright © 2015 Money et al.

  6. Mammalian choices: combining fast-but-inaccurate and slow-but-accurate decision-making systems.

    PubMed

    Trimmer, Pete C; Houston, Alasdair I; Marshall, James A R; Bogacz, Rafal; Paul, Elizabeth S; Mendl, Mike T; McNamara, John M

    2008-10-22

    Empirical findings suggest that the mammalian brain has two decision-making systems that act at different speeds. We represent the faster system using standard signal detection theory. We represent the slower (but more accurate) cortical system as the integration of sensory evidence over time until a certain level of confidence is reached. We then consider how two such systems should be combined optimally for a range of information linkage mechanisms. We conclude with some performance predictions that will hold if our representation is realistic.
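
    The combination described above can be caricatured in a few lines (a conceptual toy, not the authors' optimal-combination analysis): a fast signal-detection rule answers immediately when a single noisy sample is unambiguous, and otherwise hands over to a slower accumulator that integrates evidence until a confidence bound is reached:

        import numpy as np

        rng = np.random.default_rng(0)

        def decide(stimulus_mean, fast_threshold=1.0, slow_bound=3.0, noise=1.0):
            """Return (+1/-1 decision, number of samples used)."""
            x = rng.normal(stimulus_mean, noise)
            # fast system: signal-detection-style decision on one sample
            if abs(x) >= fast_threshold:
                return (1 if x > 0 else -1), 1
            # slow system: integrate further samples until a confidence bound is hit
            evidence, n = x, 1
            while abs(evidence) < slow_bound:
                evidence += rng.normal(stimulus_mean, noise)
                n += 1
            return (1 if evidence > 0 else -1), n

        trials = [decide(+0.3) for _ in range(1000)]
        accuracy = np.mean([c == 1 for c, _ in trials])
        mean_samples = np.mean([n for _, n in trials])
        print(f"accuracy={accuracy:.2f}, mean samples={mean_samples:.1f}")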

  7. Optimization-Based Calibration of FAST.Farm Parameters Against SOWFA: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moreira, Paula D; Annoni, Jennifer; Jonkman, Jason

    2018-01-04

    FAST.Farm is a medium-fidelity wind farm modeling tool that can be used to assess power and loads contributions of wind turbines in a wind farm. The objective of this paper is to undertake a calibration procedure to set the user parameters of FAST.Farm to accurately represent results from large-eddy simulations. The results provide an in-depth analysis of the comparison of FAST.Farm and large-eddy simulations before and after calibration. The comparison of FAST.Farm and large-eddy simulation results is presented with respect to streamwise and radial velocity components as well as wake-meandering statistics (mean and standard deviation) in the lateral and vertical directions under different atmospheric and turbine operating conditions.

  8. Searching for an Accurate Marker-Based Prediction of an Individual Quantitative Trait in Molecular Plant Breeding

    PubMed Central

    Fu, Yong-Bi; Yang, Mo-Hua; Zeng, Fangqin; Biligetu, Bill

    2017-01-01

    Molecular plant breeding with the aid of molecular markers has played an important role in modern plant breeding over the last two decades. Many marker-based predictions for quantitative traits have been made to enhance parental selection, but the trait prediction accuracy remains generally low, even with the aid of dense, genome-wide SNP markers. To search for more accurate trait-specific prediction with informative SNP markers, we conducted a literature review on the prediction issues in molecular plant breeding and on the applicability of an RNA-Seq technique for developing function-associated specific trait (FAST) SNP markers. To understand whether and how FAST SNP markers could enhance trait prediction, we also performed theoretical reasoning on the effectiveness of these markers in trait-specific prediction, and verified the reasoning through computer simulation. In the end, the search yielded an alternative to regular genomic selection with FAST SNP markers that could be explored to achieve more accurate trait-specific prediction. Continuous search for better alternatives is encouraged to enhance marker-based predictions for an individual quantitative trait in molecular plant breeding. PMID:28729875

  9. Synthesis of Survey Questions That Accurately Discriminate the Elements of the TPACK Framework

    ERIC Educational Resources Information Center

    Jaikaran-Doe, Seeta; Doe, Peter Edward

    2015-01-01

    A number of validated survey instruments for assessing technological pedagogical content knowledge (TPACK) do not accurately discriminate between the seven elements of the TPACK framework particularly technological content knowledge (TCK) and technological pedagogical knowledge (TPK). By posing simple questions that assess technological,…

  10. Predicting Mercury's precession using simple relativistic Newtonian dynamics

    NASA Astrophysics Data System (ADS)

    Friedman, Y.; Steiner, J. M.

    2016-03-01

    We present a new simple relativistic model for planetary motion describing accurately the anomalous precession of the perihelion of Mercury and its origin. The model is based on transforming Newton's classical equation for planetary motion from absolute to real spacetime influenced by the gravitational potential and introducing the concept of influenced direction.

  11. Improved Ecosystem Predictions of the California Current System via Accurate Light Calculations

    DTIC Science & Technology

    2011-09-30

    Mobley, Curtis D. (Sequoia Scientific, Inc., Bellevue, WA)

  12. Large-Scale Off-Target Identification Using Fast and Accurate Dual Regularized One-Class Collaborative Filtering and Its Application to Drug Repurposing.

    PubMed

    Lim, Hansaim; Poleksic, Aleksandar; Yao, Yuan; Tong, Hanghang; He, Di; Zhuang, Luke; Meng, Patrick; Xie, Lei

    2016-10-01

    Target-based screening is one of the major approaches in drug discovery. Besides the intended target, unexpected drug off-target interactions often occur, and many of them have not been recognized and characterized. The off-target interactions can be responsible for either therapeutic or side effects. Thus, identifying the genome-wide off-targets of lead compounds or existing drugs will be critical for designing effective and safe drugs, and providing new opportunities for drug repurposing. Although many computational methods have been developed to predict drug-target interactions, they are either less accurate than the one that we are proposing here or computationally too intensive, thereby limiting their capability for large-scale off-target identification. In addition, the performances of most machine learning based algorithms have been mainly evaluated to predict off-target interactions in the same gene family for hundreds of chemicals. It is not clear how these algorithms perform in terms of detecting off-targets across gene families on a proteome scale. Here, we are presenting a fast and accurate off-target prediction method, REMAP, which is based on a dual regularized one-class collaborative filtering algorithm, to explore continuous chemical space, protein space, and their interactome on a large scale. When tested in a reliable, extensive, and cross-gene family benchmark, REMAP outperforms the state-of-the-art methods. Furthermore, REMAP is highly scalable. It can screen a dataset of 200,000 chemicals against 20,000 proteins within 2 hours. Using the reconstructed genome-wide target profile as the fingerprint of a chemical compound, we predicted that seven FDA-approved drugs can be repurposed as novel anti-cancer therapies. The anti-cancer activity of six of them is supported by experimental evidence. Thus, REMAP is a valuable addition to the existing in silico toolbox for drug target identification, drug repurposing, phenotypic screening, and

  13. Large-Scale Off-Target Identification Using Fast and Accurate Dual Regularized One-Class Collaborative Filtering and Its Application to Drug Repurposing

    PubMed Central

    Poleksic, Aleksandar; Yao, Yuan; Tong, Hanghang; Meng, Patrick; Xie, Lei

    2016-01-01

    Target-based screening is one of the major approaches in drug discovery. Besides the intended target, unexpected drug off-target interactions often occur, and many of them have not been recognized and characterized. The off-target interactions can be responsible for either therapeutic or side effects. Thus, identifying the genome-wide off-targets of lead compounds or existing drugs will be critical for designing effective and safe drugs, and providing new opportunities for drug repurposing. Although many computational methods have been developed to predict drug-target interactions, they are either less accurate than the one that we are proposing here or computationally too intensive, thereby limiting their capability for large-scale off-target identification. In addition, the performances of most machine learning based algorithms have been mainly evaluated to predict off-target interactions in the same gene family for hundreds of chemicals. It is not clear how these algorithms perform in terms of detecting off-targets across gene families on a proteome scale. Here, we are presenting a fast and accurate off-target prediction method, REMAP, which is based on a dual regularized one-class collaborative filtering algorithm, to explore continuous chemical space, protein space, and their interactome on a large scale. When tested in a reliable, extensive, and cross-gene family benchmark, REMAP outperforms the state-of-the-art methods. Furthermore, REMAP is highly scalable. It can screen a dataset of 200,000 chemicals against 20,000 proteins within 2 hours. Using the reconstructed genome-wide target profile as the fingerprint of a chemical compound, we predicted that seven FDA-approved drugs can be repurposed as novel anti-cancer therapies. The anti-cancer activity of six of them is supported by experimental evidence. Thus, REMAP is a valuable addition to the existing in silico toolbox for drug target identification, drug repurposing, phenotypic screening, and

  14. Fast and low-cost method for VBES bathymetry generation in coastal areas

    NASA Astrophysics Data System (ADS)

    Sánchez-Carnero, N.; Aceña, S.; Rodríguez-Pérez, D.; Couñago, E.; Fraile, P.; Freire, J.

    2012-12-01

    Sea floor topography is key information in coastal area management. Nowadays, LiDAR and multibeam technologies provide accurate bathymetries in those areas; however, these methodologies are still too expensive for small customers (fishermen's associations, small research groups) wishing to keep periodic surveillance of environmental resources. In this paper, we analyse a simple methodology for vertical beam echosounder (VBES) bathymetric data acquisition and postprocessing, using low-cost means and free customizable tools such as ECOSONS and gvSIG (which is compared with the industry-standard ArcGIS). Echosounder data were filtered, resampled and interpolated (using kriging or radial basis functions). Moreover, the presented methodology includes two data correction processes: Monte Carlo simulation, used to reduce GPS errors, and manually applied bathymetric line transformations, both improving the obtained results. As an example, we present the bathymetry of the Ría de Cedeira (Galicia, NW Spain), a good testbed area for coastal bathymetry methodologies given its extension and rich topography. The statistical analysis, performed by direct ground-truthing, rendered an upper bound of 1.7 m error at the 95% confidence level and 0.7 m r.m.s. (cross-validation provided 30 cm and 25 cm, respectively). The methodology presented is fast and easy to implement, accurate outside transects (accuracy can be estimated), and can be used as a low-cost periodic monitoring method.
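
    The interpolation stage of such a workflow can be sketched with SciPy's radial basis function interpolator (one of the two interpolants mentioned above); the file name and column layout here are hypothetical:

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        # soundings: columns = easting, northing, depth (hypothetical cleaned VBES data)
        soundings = np.loadtxt("cleaned_soundings.xyz")
        xy, depth = soundings[:, :2], soundings[:, 2]

        # thin-plate-spline RBF with mild smoothing; kriging would be an alternative
        rbf = RBFInterpolator(xy, depth, kernel="thin_plate_spline",
                              smoothing=1.0, neighbors=50)

        # evaluate on a regular grid to produce the bathymetric map
        xg, yg = np.meshgrid(np.linspace(xy[:, 0].min(), xy[:, 0].max(), 200),
                             np.linspace(xy[:, 1].min(), xy[:, 1].max(), 200))
        grid_depth = rbf(np.column_stack([xg.ravel(), yg.ravel()])).reshape(xg.shape)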

  15. Interpretation of fast-ion signals during beam modulation experiments

    DOE PAGES

    Heidbrink, W. W.; Collins, C. S.; Stagner, L.; ...

    2016-07-22

    Fast-ion signals produced by a modulated neutral beam are used to infer fast-ion transport. The measured quantity is the divergence of perturbed fast-ion flux from the phase-space volume measured by the diagnostic, ∇·Γ̄. Since velocity-space transport often contributes to this divergence, the phase-space sensitivity of the diagnostic (or “weight function”) plays a crucial role in the interpretation of the signal. The source and sink make major contributions to the signal but their effects are accurately modeled by calculations that employ an exponential decay term for the sink. Recommendations for optimal design of a fast-ion transport experiment are given, illustrated by results from DIII-D measurements of fast-ion transport by Alfvén eigenmodes. Finally, the signal-to-noise ratio of the diagnostic, systematic uncertainties in the modeling of the source and sink, and the non-linearity of the perturbation all contribute to the error in ∇·Γ̄.

  16. Simple refractometer based on in-line fiber interferometers

    NASA Astrophysics Data System (ADS)

    Esteban, Ó.; Martínez Manuel, R.; Shlyagin, M. G.

    2015-09-01

    A very simple but accurate optical fiber refractometer based on the Fresnel reflection at the fiber tip and two in-line low-reflectivity mirrors for light intensity referencing is reported. Each mirror was generated by connecting together two fiber sections with FC/PC and FC/APC connectors using a standard FC/PC mating sleeve. For sensor interrogation, a standard DFB diode laser pumped with a sawtooth-wave current was used. A resolution of 6 × 10^-4 was experimentally demonstrated using different liquids. The simple sensor construction and the use of low-cost components make the reported system interesting for many applications.

  17. Validation approach for a fast and simple targeted screening method for 75 antibiotics in meat and aquaculture products using LC-MS/MS.

    PubMed

    Dubreil, Estelle; Gautier, Sophie; Fourmond, Marie-Pierre; Bessiral, Mélaine; Gaugain, Murielle; Verdon, Eric; Pessel, Dominique

    2017-04-01

    An approach is described to validate a fast and simple targeted screening method for antibiotic analysis in meat and aquaculture products by LC-MS/MS. The validation strategy was applied to a panel of 75 antibiotics belonging to different families, i.e., penicillins, cephalosporins, sulfonamides, macrolides, quinolones and phenicols. The samples were extracted once with acetonitrile, concentrated by evaporation and injected into the LC-MS/MS system. The approach chosen for the validation was based on the Community Reference Laboratory (CRL) guidelines for the validation of qualitative screening methods. The aim of the validation was to prove sufficient sensitivity of the method to detect all the targeted antibiotics at the level of interest, generally the maximum residue limit (MRL). A robustness study was also performed to test the influence of different factors. The validation showed that the method can detect and identify 73 of the 75 antibiotics studied in meat and aquaculture products at the validation levels.

  18. A simple robust and accurate a posteriori sub-cell finite volume limiter for the discontinuous Galerkin method on unstructured meshes

    NASA Astrophysics Data System (ADS)

    Dumbser, Michael; Loubère, Raphaël

    2016-08-01

    In this paper we propose a simple, robust and accurate nonlinear a posteriori stabilization of the Discontinuous Galerkin (DG) finite element method for the solution of nonlinear hyperbolic PDE systems on unstructured triangular and tetrahedral meshes in two and three space dimensions. This novel a posteriori limiter, which has been recently proposed for the simple Cartesian grid case in [62], is able to resolve discontinuities at a sub-grid scale and is substantially extended here to general unstructured simplex meshes in 2D and 3D. It can be summarized as follows: At the beginning of each time step, an approximation of the local minimum and maximum of the discrete solution is computed for each cell, taking into account also the vertex neighbors of an element. Then, an unlimited discontinuous Galerkin scheme of approximation degree N is run for one time step to produce a so-called candidate solution. Subsequently, an a posteriori detection step checks the unlimited candidate solution at time t^(n+1) for positivity, absence of floating point errors and whether the discrete solution has remained within or at least very close to the bounds given by the local minimum and maximum computed in the first step. Elements that do not satisfy all the previously mentioned detection criteria are flagged as troubled cells. For these troubled cells, the candidate solution is discarded as inappropriate and consequently needs to be recomputed. Within these troubled cells the old discrete solution at the previous time t^n is scattered onto small sub-cells (N_s = 2N + 1 sub-cells per element edge), in order to obtain a set of sub-cell averages at time t^n. Then, a more robust second order TVD finite volume scheme is applied to update the sub-cell averages within the troubled DG cells from time t^n to time t^(n+1). The new sub-grid data at time t^(n+1) are finally gathered back into a valid cell-centered DG polynomial of degree N by using a classical conservative and higher order

  19. Reliable and accurate extraction of Hamaker constants from surface force measurements.

    PubMed

    Miklavcic, S J

    2018-08-15

    A simple and accurate closed-form expression for the Hamaker constant that best represents experimental surface force data is presented. Numerical comparisons are made with the current standard least squares approach, which falsely assumes error-free separation measurements, and with a nonlinear version assuming that independent measurements of force and separation are both subject to error. The comparisons demonstrate that not only is the proposed formula easily implemented, it is also considerably more accurate. This option is appropriate for any value of the Hamaker constant, high or low, and certainly for any interacting system exhibiting an inverse square distance dependent van der Waals force. Copyright © 2018 Elsevier Inc. All rights reserved.
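
    The abstract does not reproduce the paper's closed-form expression, but the standard least-squares baseline it improves on (error-free separations, inverse-square van der Waals force) is easy to sketch for a sphere-plane geometry, F(D) = -A R / (6 D^2), where A is the Hamaker constant and R the probe radius. The geometry, values and noise level below are illustrative assumptions:

        import numpy as np

        def hamaker_least_squares(D, F, R):
            """Ordinary least-squares estimate of the Hamaker constant A from
            force-separation data, assuming a sphere-plane van der Waals law
            F(D) = -A*R/(6*D**2) and error-free separations (the standard
            approach the paper improves upon)."""
            x = -R / (6.0 * D**2)            # model: F = A * x
            return float(np.dot(x, F) / np.dot(x, x))

        # synthetic example: A = 1e-20 J, R = 10 um probe, noisy force data
        rng = np.random.default_rng(1)
        D = np.linspace(2e-9, 20e-9, 50)                       # separations (m)
        F = -1e-20 * 10e-6 / (6 * D**2) + rng.normal(0, 2e-11, D.size)
        print(hamaker_least_squares(D, F, R=10e-6))            # about 1e-20 J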

  20. Fast and Accurate Simulation of the Cray XMT Multithreaded Supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Villa, Oreste; Tumeo, Antonino; Secchi, Simone

    Irregular applications, such as data mining and analysis or graph-based computations, show unpredictable memory/network access patterns and control structures. Highly multithreaded architectures with large processor counts, like the Cray MTA-1, MTA-2 and XMT, appear to address their requirements better than commodity clusters. However, research on highly multithreaded systems is currently limited by the lack of adequate architectural simulation infrastructures due to issues such as size of the machines, memory footprint, simulation speed, accuracy and customization. At the same time, Shared-memory MultiProcessors (SMPs) with multi-core processors have become an attractive platform to simulate large scale machines. In this paper, we introduce a cycle-level simulator of the highly multithreaded Cray XMT supercomputer. The simulator runs unmodified XMT applications. We discuss how we tackled the challenges posed by its development, detailing the techniques introduced to make the simulation as fast as possible while maintaining a high accuracy. By mapping XMT processors (ThreadStorm with 128 hardware threads) to host computing cores, the simulation speed remains constant as the number of simulated processors increases, up to the number of available host cores. The simulator supports zero-overhead switching among different accuracy levels at run-time and includes a network model that takes into account contention. On a modern 48-core SMP host, our infrastructure simulates a large set of irregular applications 500 to 2000 times slower than real time when compared to a 128-processor XMT, while remaining within 10% of accuracy. Emulation is only 25 to 200 times slower than real time.

  1. A simple finite element method for the Stokes equations

    DOE PAGES

    Mu, Lin; Ye, Xiu

    2017-03-21

    The goal of this paper is to introduce a simple finite element method to solve the Stokes equations. This method is in primal velocity-pressure formulation and is so simple that both velocity and pressure are approximated by piecewise constant functions. Implementation issues as well as error analysis are investigated. A basis for a divergence-free subspace of the velocity field is constructed so that the original saddle point problem can be reduced to a symmetric and positive definite system with much fewer unknowns. The numerical experiments indicate that the method is accurate.

  2. A simple finite element method for the Stokes equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mu, Lin; Ye, Xiu

    The goal of this paper is to introduce a simple finite element method to solve the Stokes equations. This method is in primal velocity-pressure formulation and is so simple that both velocity and pressure are approximated by piecewise constant functions. Implementation issues as well as error analysis are investigated. A basis for a divergence-free subspace of the velocity field is constructed so that the original saddle point problem can be reduced to a symmetric and positive definite system with much fewer unknowns. The numerical experiments indicate that the method is accurate.

  3. Extracting Time-Accurate Acceleration Vectors From Nontrivial Accelerometer Arrangements.

    PubMed

    Franck, Jennifer A; Blume, Janet; Crisco, Joseph J; Franck, Christian

    2015-09-01

    Sports-related concussions are of significant concern in many impact sports, and their detection relies on accurate measurements of the head kinematics during impact. Among the most prevalent recording technologies are videography, and more recently, the use of single-axis accelerometers mounted in a helmet, such as the HIT system. Successful extraction of the linear and angular impact accelerations depends on an accurate analysis methodology governed by the equations of motion. Current algorithms are able to estimate the magnitude of acceleration and hit location, but make assumptions about the hit orientation and are often limited in the position and/or orientation of the accelerometers. The newly formulated algorithm presented in this manuscript accurately extracts the full linear and rotational acceleration vectors from a broad arrangement of six single-axis accelerometers directly from the governing set of kinematic equations. The new formulation linearizes the nonlinear centripetal acceleration term with a finite-difference approximation and provides a fast and accurate solution for all six components of acceleration over long time periods (>250 ms). The approximation of the nonlinear centripetal acceleration term provides an accurate computation of the rotational velocity as a function of time and allows for reconstruction of a multiple-impact signal. Furthermore, the algorithm determines the impact location and orientation and can distinguish between glancing, high rotational velocity impacts, or direct impacts through the center of mass. Results are shown for ten simulated impact locations on a headform geometry computed with three different accelerometer configurations in varying degrees of signal noise. Since the algorithm does not require simplifications of the actual impacted geometry, the impact vector, or a specific arrangement of accelerometer orientations, it can be easily applied to many impact investigations in which accurate kinematics need
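
    The general structure of such an extraction can be sketched as follows (an illustrative reconstruction under stated assumptions, not the authors' algorithm): each single-axis accelerometer at position r_i with sensing direction n_i on the rigid body measures n_i · (a + α × r_i + ω × (ω × r_i)); carrying ω forward from the previous time step makes the centripetal term a known quantity, leaving a 6 × 6 linear system in the linear acceleration a and angular acceleration α:

        import numpy as np

        def step(readings, positions, directions, omega_prev, dt):
            """One time step: recover linear acceleration a and angular acceleration
            alpha from six single-axis accelerometer readings, treating the
            centripetal term with the previous step's angular velocity (a
            finite-difference style linearization).
            readings: (6,), positions/directions: (6, 3), omega_prev: (3,)."""
            A = np.zeros((6, 6))
            b = np.zeros(6)
            for i in range(6):
                n, r = directions[i], positions[i]
                # measured value: n . (a + alpha x r + omega x (omega x r))
                centripetal = np.cross(omega_prev, np.cross(omega_prev, r))
                A[i, :3] = n                      # coefficients of a
                A[i, 3:] = np.cross(r, n)         # n . (alpha x r) = (r x n) . alpha
                b[i] = readings[i] - n @ centripetal
            sol = np.linalg.solve(A, b)
            a, alpha = sol[:3], sol[3:]
            omega_next = omega_prev + alpha * dt  # integrate for the next step
            return a, alpha, omega_next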

  4. Accurate and fast multiple-testing correction in eQTL studies.

    PubMed

    Sul, Jae Hoon; Raj, Towfique; de Jong, Simone; de Bakker, Paul I W; Raychaudhuri, Soumya; Ophoff, Roel A; Stranger, Barbara E; Eskin, Eleazar; Han, Buhm

    2015-06-04

    In studies of expression quantitative trait loci (eQTLs), it is of increasing interest to identify eGenes, the genes whose expression levels are associated with variation at a particular genetic variant. Detecting eGenes is important for follow-up analyses and prioritization because genes are the main entities in biological processes. To detect eGenes, one typically focuses on the genetic variant with the minimum p value among all variants in cis with a gene and corrects for multiple testing to obtain a gene-level p value. For performing multiple-testing correction, a permutation test is widely used. Because of growing sample sizes of eQTL studies, however, the permutation test has become a computational bottleneck in eQTL studies. In this paper, we propose an efficient approach for correcting for multiple testing and assess eGene p values by utilizing a multivariate normal distribution. Our approach properly takes into account the linkage-disequilibrium structure among variants, and its time complexity is independent of sample size. By applying our small-sample correction techniques, our method achieves high accuracy in both small and large studies. We have shown that our method consistently produces extremely accurate p values (accuracy > 98%) for three human eQTL datasets with different sample sizes and SNP densities: the Genotype-Tissue Expression pilot dataset, the multi-region brain dataset, and the HapMap 3 dataset. Copyright © 2015 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
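
    The core idea, correcting the minimum cis-variant p value against a null distribution that respects LD, can be sketched by Monte Carlo sampling from a multivariate normal with the variants' correlation matrix. The published method is analytic, sample-size independent and includes small-sample corrections; the sketch below only conveys the principle:

        import numpy as np
        from scipy import stats

        def egene_pvalue(min_p_observed, ld_corr, n_draws=100_000, seed=0):
            """Gene-level p value for the best cis-variant, correcting for multiple
            correlated tests by sampling association z-scores from a multivariate
            normal with the variants' LD correlation matrix (Monte Carlo sketch)."""
            rng = np.random.default_rng(seed)
            z = rng.multivariate_normal(np.zeros(ld_corr.shape[0]), ld_corr,
                                        size=n_draws)
            # minimum two-sided p value across variants for each null draw
            min_p_null = 2.0 * stats.norm.sf(np.abs(z).max(axis=1))
            return (np.sum(min_p_null <= min_p_observed) + 1) / (n_draws + 1)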

  5. QUESP and QUEST revisited - fast and accurate quantitative CEST experiments.

    PubMed

    Zaiss, Moritz; Angelovski, Goran; Demetriou, Eleni; McMahon, Michael T; Golay, Xavier; Scheffler, Klaus

    2018-03-01

    Chemical exchange saturation transfer (CEST) NMR or MRI experiments allow detection of low concentrated molecules with enhanced sensitivity via their proton exchange with the abundant water pool. Be it endogenous metabolites or exogenous contrast agents, an exact quantification of the actual exchange rate is required to design optimal pulse sequences and/or specific sensitive agents. Refined analytical expressions allow deeper insight and improvement of accuracy for common quantification techniques. The accuracy of standard quantification methodologies, such as quantification of exchange rate using varying saturation power or varying saturation time, is improved especially for the case of nonequilibrium initial conditions and weak labeling conditions, meaning the saturation amplitude is smaller than the exchange rate (γB1 < k). The improved analytical 'quantification of exchange rate using varying saturation power/time' (QUESP/QUEST) equations allow for more accurate exchange rate determination, and provide clear insights on the general principles to execute the experiments and to perform numerical evaluation. The proposed methodology was evaluated on the large-shift regime of paramagnetic chemical-exchange-saturation-transfer agents using simulated data and data of the paramagnetic Eu(III) complex of DOTA-tetraglycineamide. The refined formulas yield improved exchange rate estimation. General convergence intervals of the methods that would apply for smaller shift agents are also discussed. Magn Reson Med 79:1708-1721, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  6. A simple, robust and efficient high-order accurate shock-capturing scheme for compressible flows: Towards minimalism

    NASA Astrophysics Data System (ADS)

    Ohwada, Taku; Shibata, Yuki; Kato, Takuma; Nakamura, Taichi

    2018-06-01

    Developed is a high-order accurate shock-capturing scheme for the compressible Euler/Navier-Stokes equations; the formal accuracy is 5th order in space and 4th order in time. The performance and efficiency of the scheme are validated in various numerical tests. The main ingredients of the scheme are nothing special; they are variants of the standard numerical flux, MUSCL, the usual Lagrange polynomial and the conventional Runge-Kutta method. The scheme can compute a boundary layer accurately with a reasonable resolution and capture a stationary contact discontinuity sharply without inner points. And yet it is endowed with high resistance against shock anomalies (carbuncle phenomenon, post-shock oscillations, etc.). A good balance between high robustness and low dissipation is achieved by blending three types of numerical fluxes according to the physical situation in an intuitively easy-to-understand way. The performance of the scheme is largely comparable to that of WENO5-Rusanov, while its computational cost is 30-40% less than that of the advanced scheme.

  7. A Simple Model of Hox Genes: Bone Morphology Demonstration

    ERIC Educational Resources Information Center

    Shmaefsky, Brian

    2008-01-01

    Visual demonstrations of abstract scientific concepts are effective strategies for enhancing content retention (Shmaefsky 2004). The concepts associated with gene regulation of growth and development are particularly complex and are well suited for teaching with visual models. This demonstration provides a simple and accurate model of Hox gene…

  8. Accurate Modeling of Galaxy Clustering on Small Scales: Testing the Standard ΛCDM + Halo Model

    NASA Astrophysics Data System (ADS)

    Sinha, Manodeep; Berlind, Andreas A.; McBride, Cameron; Scoccimarro, Roman

    2015-01-01

    The large-scale distribution of galaxies can be explained fairly simply by assuming (i) a cosmological model, which determines the dark matter halo distribution, and (ii) a simple connection between galaxies and the halos they inhabit. This conceptually simple framework, called the halo model, has been remarkably successful at reproducing the clustering of galaxies on all scales, as observed in various galaxy redshift surveys. However, none of these previous studies have carefully modeled the systematics and thus truly tested the halo model in a statistically rigorous sense. We present a new accurate and fully numerical halo model framework and test it against clustering measurements from two luminosity samples of galaxies drawn from the SDSS DR7. We show that the simple ΛCDM cosmology + halo model is not able to simultaneously reproduce the galaxy projected correlation function and the group multiplicity function. In particular, the more luminous sample shows significant tension with theory. We discuss the implications of our findings and how this work paves the way for constraining galaxy formation by accurate simultaneous modeling of multiple galaxy clustering statistics.

  9. Fast algorithm for probabilistic bone edge detection (FAPBED)

    NASA Astrophysics Data System (ADS)

    Scepanovic, Danilo; Kirshtein, Joshua; Jain, Ameet K.; Taylor, Russell H.

    2005-04-01

    The registration of preoperative CT to intra-operative reality systems is a crucial step in Computer Assisted Orthopedic Surgery (CAOS). The intra-operative sensors include 3D digitizers, fiducials, X-rays and Ultrasound (US). FAPBED is designed to process CT volumes for registration to tracked US data. Tracked US is advantageous because it is real time, noninvasive, and non-ionizing, but it is also known to have inherent inaccuracies which create the need to develop a framework that is robust to various uncertainties, and can be useful in US-CT registration. Furthermore, conventional registration methods depend on accurate and absolute segmentation. Our proposed probabilistic framework addresses the segmentation-registration duality, wherein exact segmentation is not a prerequisite to achieve accurate registration. In this paper, we develop a method for fast and automatic probabilistic bone surface (edge) detection in CT images. Various features that influence the likelihood of the surface at each spatial coordinate are combined using a simple probabilistic framework, which strikes a fair balance between a high-level understanding of features in an image and the low-level number crunching of standard image processing techniques. The algorithm evaluates different features for detecting the probability of a bone surface at each voxel, and compounds the results of these methods to yield a final, low-noise, probability map of bone surfaces in the volume. Such a probability map can then be used in conjunction with a similar map from tracked intra-operative US to achieve accurate registration. Eight sample pelvic CT scans were used to extract feature parameters and validate the final probability maps. An un-optimized fully automatic Matlab code runs in five minutes per CT volume on average, and was validated by comparison against hand-segmented gold standards. The mean probability assigned to nonzero surface points was 0.8, while nonzero non-surface points had a mean

  10. Valid statistical approaches for analyzing sholl data: Mixed effects versus simple linear models.

    PubMed

    Wilson, Machelle D; Sethi, Sunjay; Lein, Pamela J; Keil, Kimberly P

    2017-03-01

    The Sholl technique is widely used to quantify dendritic morphology. Data from such studies, which typically sample multiple neurons per animal, are often analyzed using simple linear models. However, simple linear models fail to account for intra-class correlation that occurs with clustered data, which can lead to faulty inferences. Mixed effects models account for intra-class correlation that occurs with clustered data; thus, these models more accurately estimate the standard deviation of the parameter estimate, which produces more accurate p-values. While mixed models are not new, their use in neuroscience has lagged behind their use in other disciplines. A review of the published literature illustrates common mistakes in analyses of Sholl data. Analysis of Sholl data collected from Golgi-stained pyramidal neurons in the hippocampus of male and female mice using both simple linear and mixed effects models demonstrates that the p-values and standard deviations obtained using the simple linear models are biased downwards and lead to erroneous rejection of the null hypothesis in some analyses. The mixed effects approach more accurately models the true variability in the data set, which leads to correct inference. Mixed effects models avoid faulty inference in Sholl analysis of data sampled from multiple neurons per animal by accounting for intra-class correlation. Given the widespread practice in neuroscience of obtaining multiple measurements per subject, there is a critical need to apply mixed effects models more widely. Copyright © 2017 Elsevier B.V. All rights reserved.
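
    In practice the contrast between the two approaches is a one-line difference in most statistics packages. A minimal sketch with statsmodels, assuming a long-format table with hypothetical columns intersections, radius, treatment and animal (the clustering variable):

        import pandas as pd
        import statsmodels.formula.api as smf

        # long-format Sholl data: one row per neuron per radius (hypothetical file
        # and column names); 'animal' identifies the cluster each neuron belongs to
        df = pd.read_csv("sholl_data.csv")

        # naive simple linear model (ignores clustering of neurons within animals)
        ols_fit = smf.ols("intersections ~ radius * treatment", data=df).fit()

        # mixed effects model with a random intercept per animal, which accounts
        # for the intra-class correlation and gives honest standard errors
        lme_fit = smf.mixedlm("intersections ~ radius * treatment", data=df,
                              groups=df["animal"]).fit()
        print(lme_fit.summary())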

  11. Measurement of jaw motion: the proposal of a simple and accurate method.

    PubMed

    Pinheiro, A P; Pereira, A A; Andrade, A O; Bellomo, D

    2011-01-01

    The analysis of jaw movements has long been used as a measure for clinical diagnosis and assessment. A number of strategies are available for monitoring the trajectory; however most of these strategies make use of expensive tools, which are often not available to many clinics in the world. In this context, this research proposes the development of a new tool capable of quantifying the movements of opening/closing, protrusion and laterotrusion of the mandible. These movements are important for the clinical evaluation of both the temporomandibular function and muscles involved in mastication. The proposed system, unlike current commercial systems, employs a low-cost video camera and a computer program, which is used for reconstructing the trajectory of a reflective marker that is fixed on the jaw. In order to illustrate the application of the devised tool a clinical trial was carried out, investigating jaw movements of 10 subjects. The results obtained in this study were compatible with those found in the literature with the advantage of using a low-cost, simple, non-invasive and flexible solution customized for the practical needs of clinics. The average error of the system was less than 1.0%.

  12. An Accurate and Stable FFT-based Method for Pricing Options under Exp-Lévy Processes

    NASA Astrophysics Data System (ADS)

    Ding, Deng; Chong U, Sio

    2010-05-01

    An accurate and stable method for pricing European options in exp-Lévy models is presented. The main idea of this new method is to combine the quadrature technique and the Carr-Madan Fast Fourier Transform method. The theoretical analysis shows that the overall complexity of this new method is still O(N log N) with N grid points, as for the fast Fourier transform methods. Numerical experiments for different exp-Lévy processes also show that the numerical algorithm proposed by this new method is accurate and stable for small strike prices K. This develops and improves on the Carr-Madan method.
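
    For orientation, a plain Carr-Madan FFT call pricer (the component the paper combines with a quadrature technique) can be sketched as follows, here with the Black-Scholes characteristic function standing in for a general exp-Lévy model; the parameter choices are illustrative, not the paper's:

        import numpy as np

        def carr_madan_calls(S0, r, T, sigma, alpha=1.5, N=4096, eta=0.25):
            """European call prices on a log-strike grid via the Carr-Madan FFT,
            using the Black-Scholes characteristic function as a stand-in."""
            def phi(u):  # characteristic function of log(S_T)
                return np.exp(1j * u * (np.log(S0) + (r - 0.5 * sigma**2) * T)
                              - 0.5 * sigma**2 * u**2 * T)

            v = eta * np.arange(N)
            psi = (np.exp(-r * T) * phi(v - 1j * (alpha + 1))
                   / (alpha**2 + alpha - v**2 + 1j * (2 * alpha + 1) * v))
            lam = 2 * np.pi / (N * eta)             # log-strike spacing
            b = 0.5 * N * lam                       # log-strikes span [-b, b)
            # Simpson weights improve the quadrature accuracy of the FFT sum
            simpson = (3 + (-1) ** np.arange(1, N + 1)) / 3.0
            simpson[0] = 1.0 / 3.0
            x = np.exp(1j * v * b) * psi * eta * simpson
            k = -b + lam * np.arange(N)
            calls = np.exp(-alpha * k) / np.pi * np.real(np.fft.fft(x))
            return np.exp(k), calls                 # strikes and call prices

        strikes, prices = carr_madan_calls(S0=100.0, r=0.05, T=1.0, sigma=0.2)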

  13. A fast and accurate online sequential learning algorithm for feedforward networks.

    PubMed

    Liang, Nan-Ying; Huang, Guang-Bin; Saratchandran, P; Sundararajan, N

    2006-11-01

    In this paper, we develop an online sequential learning algorithm for single hidden layer feedforward networks (SLFNs) with additive or radial basis function (RBF) hidden nodes in a unified framework. The algorithm is referred to as online sequential extreme learning machine (OS-ELM) and can learn data one-by-one or chunk-by-chunk (a block of data) with fixed or varying chunk size. The activation functions for additive nodes in OS-ELM can be any bounded nonconstant piecewise continuous functions and the activation functions for RBF nodes can be any integrable piecewise continuous functions. In OS-ELM, the parameters of hidden nodes (the input weights and biases of additive nodes or the centers and impact factors of RBF nodes) are randomly selected and the output weights are analytically determined based on the sequentially arriving data. The algorithm uses the ideas of ELM of Huang et al. developed for batch learning which has been shown to be extremely fast with generalization performance better than other batch training methods. Apart from selecting the number of hidden nodes, no other control parameters have to be manually chosen. Detailed performance comparison of OS-ELM is done with other popular sequential learning algorithms on benchmark problems drawn from the regression, classification and time series prediction areas. The results show that the OS-ELM is faster than the other sequential algorithms and produces better generalization performance.
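
    A minimal sketch of the OS-ELM recursion for sigmoid additive nodes follows (our own condensed reconstruction of the published update equations; chunk handling, node types other than sigmoid, and regularization details are omitted):

        import numpy as np

        class OSELM:
            """Minimal OS-ELM sketch: random sigmoid hidden nodes, output weights
            updated sequentially with a recursive least-squares rule."""
            def __init__(self, n_inputs, n_hidden, seed=0):
                rng = np.random.default_rng(seed)
                self.W = rng.uniform(-1, 1, (n_inputs, n_hidden))   # input weights
                self.b = rng.uniform(-1, 1, n_hidden)                # biases
                self.beta = None                                     # output weights
                self.P = None

            def _hidden(self, X):
                return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))  # sigmoid nodes

            def init_batch(self, X0, T0):
                H0 = self._hidden(X0)             # needs at least n_hidden samples
                self.P = np.linalg.inv(H0.T @ H0)
                self.beta = self.P @ H0.T @ T0

            def partial_fit(self, X, T):
                H = self._hidden(X)
                # recursive update of P and beta for the new chunk, so no past
                # data needs to be stored
                M = np.linalg.inv(np.eye(H.shape[0]) + H @ self.P @ H.T)
                self.P -= self.P @ H.T @ M @ H @ self.P
                self.beta += self.P @ H.T @ (T - H @ self.beta)

            def predict(self, X):
                return self._hidden(X) @ self.beta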

  14. Light Field Imaging Based Accurate Image Specular Highlight Removal

    PubMed Central

    Wang, Haoqian; Xu, Chenxue; Wang, Xingzheng; Zhang, Yongbing; Peng, Bo

    2016-01-01

    Specular reflection removal is indispensable to many computer vision tasks. However, most existing methods fail or degrade in complex real scenarios because of their individual drawbacks. Benefiting from light field imaging technology, this paper proposes a novel and accurate approach to remove specularity and improve image quality. We first capture images with specularity using a light field camera (Lytro ILLUM). After accurately estimating the image depth, a simple and concise threshold strategy is adopted to cluster the specular pixels into “unsaturated” and “saturated” categories. Finally, a color variance analysis of multiple views and a local color refinement are individually conducted on the two categories to recover diffuse color information. Experimental evaluation by comparison with existing methods on our light field dataset together with the Stanford light field archive verifies the effectiveness of our proposed algorithm. PMID:27253083

  15. An accurate model for predicting high frequency noise of nanoscale NMOS SOI transistors

    NASA Astrophysics Data System (ADS)

    Shen, Yanfei; Cui, Jie; Mohammadi, Saeed

    2017-05-01

    A nonlinear and scalable model suitable for predicting high frequency noise of N-type Metal Oxide Semiconductor (NMOS) transistors is presented. The model is developed for a commercial 45 nm CMOS SOI technology and its accuracy is validated through comparison with measured performance of a microwave low noise amplifier. The model employs the virtual source nonlinear core and adds parasitic elements to accurately simulate the RF behavior of multi-finger NMOS transistors up to 40 GHz. For the first time, the traditional long-channel thermal noise model is supplemented with an injection noise model to accurately represent the noise behavior of these short-channel transistors up to 26 GHz. The developed model is simple and easy to extract, yet very accurate.

  16. Fast imputation using medium- or low-coverage sequence data

    USDA-ARS?s Scientific Manuscript database

    Direct imputation from raw sequence reads can be more accurate than calling genotypes first and then imputing, especially if read depth is low or error rates high, but different imputation strategies are required than those used for data from genotyping chips. A fast algorithm to impute from lower t...

  17. Understanding and comparisons of different sampling approaches for the Fourier Amplitudes Sensitivity Test (FAST)

    PubMed Central

    Xu, Chonggang; Gertner, George

    2013-01-01

    Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, the FAST analysis is mainly confined to the estimation of partial variances contributed by the main effects of model parameters, but does not allow for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements. PMID:24143037
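
    The variance decomposition described above can be demonstrated with a toy search-curve FAST implementation (an illustrative sketch that ignores interference-free frequency selection and interaction terms, so it recovers first-order indices only):

        import numpy as np

        def fast_first_order(model, n_params, freqs, n_samples=1025, M=4):
            """Toy search-curve FAST: sample all parameters along one periodic curve
            and read each parameter's main-effect variance off the harmonics of its
            driving frequency."""
            s = np.linspace(-np.pi, np.pi, n_samples, endpoint=False)
            # periodic search curve mapping s into the unit hypercube
            X = 0.5 + np.arcsin(np.sin(np.outer(s, freqs))) / np.pi
            Y = np.array([model(x) for x in X])
            Y = Y - Y.mean()
            total_var = np.mean(Y**2)
            S = np.zeros(n_params)
            for i, w in enumerate(freqs):
                part = 0.0
                for p in range(1, M + 1):
                    A = 2.0 * np.mean(Y * np.cos(p * w * s))
                    B = 2.0 * np.mean(Y * np.sin(p * w * s))
                    part += 0.5 * (A**2 + B**2)
                S[i] = part / total_var
            return S

        # example: additive model in which the second parameter should dominate
        print(fast_first_order(lambda x: x[0] + 5.0 * x[1] + 0.1 * x[2],
                               n_params=3, freqs=np.array([11, 35, 77])))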

  18. Understanding and comparisons of different sampling approaches for the Fourier Amplitudes Sensitivity Test (FAST).

    PubMed

    Xu, Chonggang; Gertner, George

    2011-01-01

    Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, FAST analysis has mainly been confined to the estimation of partial variances contributed by the main effects of model parameters, and has not allowed for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to the variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements.

  19. Accurate and automatic extrinsic calibration method for blade measurement system integrated by different optical sensors

    NASA Astrophysics Data System (ADS)

    He, Wantao; Li, Zhongwei; Zhong, Kai; Shi, Yusheng; Zhao, Can; Cheng, Xu

    2014-11-01

    Fast and precise 3D inspection systems are in great demand in modern manufacturing processes. At present, the available sensors have their own pros and cons, and there is hardly an omnipotent sensor able to handle complex inspection tasks in an accurate and effective way. The prevailing solution is to integrate multiple sensors and take advantage of their individual strengths. For obtaining a holistic 3D profile, the data from different sensors should be registered into a coherent coordinate system. However, some complex-shaped objects, such as blades, have thin-wall features, for which ICP registration becomes unstable. It is therefore very important to calibrate the extrinsic parameters of each sensor in the integrated measurement system. This paper proposes an accurate and automatic extrinsic parameter calibration method for a blade measurement system integrated from different optical sensors. In this system, a fringe projection sensor (FPS) and a conoscopic holography sensor (CHS) are integrated into a multi-axis motion platform, and the sensors can be moved optimally to any desired position on the object's surface. To simplify the calibration process, a special calibration artifact is designed according to the characteristics of the two sensors. An automatic registration procedure based on correlation and segmentation roughly aligns the artifact datasets obtained by the FPS and CHS without any manual operation or data pre-processing, and the Generalized Gauss-Markoff model is then used to estimate the optimal transformation parameters. Experiments show the measurement result of a blade, in which several sampled patches are merged into one point cloud, verifying the performance of the proposed method.

  20. Arbitrarily accurate twin composite π-pulse sequences

    NASA Astrophysics Data System (ADS)

    Torosov, Boyan T.; Vitanov, Nikolay V.

    2018-04-01

    We present three classes of symmetric broadband composite pulse sequences. The composite phases are given by analytic formulas (rational fractions of π) valid for any number of constituent pulses. The transition probability is expressed by simple analytic formulas and the order of pulse area error compensation grows linearly with the number of pulses. Therefore, any desired compensation order can be produced by an appropriate composite sequence; in this sense, they are arbitrarily accurate. These composite pulses perform equally well as or better than previously published ones. Moreover, the current sequences are more flexible as they allow total pulse areas of arbitrary integer multiples of π.

  1. The attentional drift-diffusion model extends to simple purchasing decisions.

    PubMed

    Krajbich, Ian; Lu, Dingchao; Camerer, Colin; Rangel, Antonio

    2012-01-01

    How do we make simple purchasing decisions (e.g., whether or not to buy a product at a given price)? Previous work has shown that the attentional drift-diffusion model (aDDM) can provide accurate quantitative descriptions of the psychometric data for binary and trinary value-based choices, and of how the choice process is guided by visual attention. Here we extend the aDDM to the case of purchasing decisions, and test it using an eye-tracking experiment. We find that the model also provides a reasonably accurate quantitative description of the relationship between choice, reaction time, and visual fixations using parameters that are very similar to those that best fit the previous data. The only critical difference is that the choice biases induced by the fixations are about half as big in purchasing decisions as in binary choices. This suggests that a similar computational process is used to make binary choices, trinary choices, and simple purchasing decisions.

  2. The Attentional Drift-Diffusion Model Extends to Simple Purchasing Decisions

    PubMed Central

    Krajbich, Ian; Lu, Dingchao; Camerer, Colin; Rangel, Antonio

    2012-01-01

    How do we make simple purchasing decisions (e.g., whether or not to buy a product at a given price)? Previous work has shown that the attentional drift-diffusion model (aDDM) can provide accurate quantitative descriptions of the psychometric data for binary and trinary value-based choices, and of how the choice process is guided by visual attention. Here we extend the aDDM to the case of purchasing decisions, and test it using an eye-tracking experiment. We find that the model also provides a reasonably accurate quantitative description of the relationship between choice, reaction time, and visual fixations using parameters that are very similar to those that best fit the previous data. The only critical difference is that the choice biases induced by the fixations are about half as big in purchasing decisions as in binary choices. This suggests that a similar computational process is used to make binary choices, trinary choices, and simple purchasing decisions. PMID:22707945

  3. Fast modal decomposition for optical fibers using digital holography.

    PubMed

    Lyu, Meng; Lin, Zhiquan; Li, Guowei; Situ, Guohai

    2017-07-26

    Eigenmode decomposition of the light field at the output end of optical fibers can provide fundamental insights into the nature of electromagnetic-wave propagation through the fibers. Here we present a fast and complete modal decomposition technique for step-index optical fibers. The proposed technique employs digital holography to measure the light field at the output end of the multimode optical fiber, and utilizes the modal orthonormal property of the basis modes to calculate the modal coefficients of each mode. Optical experiments were carried out to demonstrate the proposed decomposition technique, showing that this approach is fast, accurate and cost-effective.

  4. Accurate formulas for interaction force and energy in frequency modulation force spectroscopy

    NASA Astrophysics Data System (ADS)

    Sader, John E.; Jarvis, Suzanne P.

    2004-03-01

    Frequency modulation atomic force microscopy utilizes the change in resonant frequency of a cantilever to detect variations in the interaction force between cantilever tip and sample. While a simple relation exists enabling the frequency shift to be determined for a given force law, the required complementary inverse relation does not exist for arbitrary oscillation amplitudes of the cantilever. In this letter we address this problem and present simple yet accurate formulas that enable the interaction force and energy to be determined directly from the measured frequency shift. These formulas are valid for any oscillation amplitude and interaction force, and are therefore of widespread applicability in frequency modulation dynamic force spectroscopy.
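
    For orientation, a commonly quoted closed form of the Sader-Jarvis inversion is reproduced below from the broader frequency-modulation AFM literature; it is not stated in this abstract, so readers should verify it against the original letter. Here k is the cantilever spring constant, a the oscillation amplitude, and Ω(z) = Δf(z)/f0 the normalized frequency shift:

      F(z) = 2k \int_z^{\infty} \left[ \left( 1 + \frac{\sqrt{a}}{8\sqrt{\pi (t - z)}} \right) \Omega(t)
             - \frac{a^{3/2}}{\sqrt{2 (t - z)}} \, \frac{d\Omega(t)}{dt} \right] dt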

  5. Simple Parametric Model for Intensity Calibration of Cassini Composite Infrared Spectrometer Data

    NASA Technical Reports Server (NTRS)

    Brasunas, J.; Mamoutkine, A.; Gorius, N.

    2016-01-01

    Accurate intensity calibration of a linear Fourier-transform spectrometer typically requires the unknown science target and the two calibration targets to be acquired under identical conditions. We present a simple model suitable for vector calibration that enables accurate calibration via adjustments of measured spectral amplitudes and phases when these three targets are recorded at different detector or optics temperatures. Our model makes calibration more accurate both by minimizing biases due to changing instrument temperatures that are always present at some level and by decreasing estimate variance through incorporating larger averages of science and calibration interferogram scans.

  6. The Krylov accelerated SIMPLE(R) method for flow problems in industrial furnaces

    NASA Astrophysics Data System (ADS)

    Vuik, C.; Saghir, A.; Boerstoel, G. P.

    2000-08-01

    Numerical modeling of the melting and combustion process is an important tool in gaining understanding of the physical and chemical phenomena that occur in a gas- or oil-fired glass-melting furnace. The incompressible Navier-Stokes equations are used to model the gas flow in the furnace. The discrete Navier-Stokes equations are solved by the SIMPLE(R) pressure-correction method. In these applications, many SIMPLE(R) iterations are necessary to obtain an accurate solution. In this paper, Krylov accelerated versions are proposed: GCR-SIMPLE(R). The properties of these methods are investigated for a simple two-dimensional flow. Thereafter, the efficiencies of the methods are compared for three-dimensional flows in industrial glass-melting furnaces.

  7. A simple formula for predicting claw volume of cattle.

    PubMed

    Scott, T D; Naylor, J M; Greenough, P R

    1999-11-01

    The object of this study was to develop a simple method for accurately calculating the volume of bovine claws under field conditions. The digits of 30 slaughterhouse beef cattle were examined and the following four linear measurements taken from each pair of claws: (1) the length of the dorsal surface of the claw (Toe); (2) the length of the coronary band (CorBand); (3) the length of the bearing surface (Base); and (4) the height of the claw at the abaxial groove (AbaxGr). Measurements of claw volume using a simple hydrometer were highly repeatable (r² = 0.999), and claw volume could be calculated from the linear measurements using the formula: Claw Volume (cm³) = (17.192 x Base) + (7.467 x AbaxGr) + (45.270 x CorBand) - 798.5. This formula was found to be accurate (r² = 0.88) when compared to volume data derived from a hydrometer displacement procedure. The front claws occupied 54% of the total volume compared to 46% for the hind claws. Copyright 1999 Harcourt Publishers Ltd.
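
    The regression above translates directly into a one-line helper. The sketch below simply evaluates the published formula; the measurement values in the usage example are placeholders for illustration, not data from the study (all lengths in cm).

      def claw_volume_cm3(base, abax_gr, cor_band):
          """Estimate bovine claw volume (cm^3) from three linear measurements (cm),
          using the regression reported in the abstract above."""
          return 17.192 * base + 7.467 * abax_gr + 45.270 * cor_band - 798.5

      # illustrative placeholder measurements, not values from the study
      print(round(claw_volume_cm3(base=18.0, abax_gr=8.0, cor_band=16.0), 1))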

  8. Verge and Foliot Clock Escapement: A Simple Dynamical System

    NASA Astrophysics Data System (ADS)

    Denny, Mark

    2010-09-01

    The earliest mechanical clocks appeared in Europe in the 13th century. From about 1250 CE to 1670 CE, these simple clocks consisted of a weight suspended from a rope or chain that was wrapped around a horizontal axle. To tell time, the weight must fall with a slow uniform speed, but, under the action of gravity alone, such a suspended weight would accelerate. To prevent this acceleration, an escapement mechanism was required. The best such escapement mechanism was called the verge and foliot escapement, and it was so successful that it lasted until about 1800 CE. These simple weight-driven clocks with verge and foliot escapements were accurate enough to mark the hours but not minutes or seconds. From 1670, significant improvements were made (principally by introducing pendulums and the newly invented anchor escapement) that justified the introduction of hands to mark minutes, and then seconds. By the end of the era of mechanical clocks, in the first half of the 20th century, these much-studied and much-refined machines were accurate to a millisecond a day.

  9. A simple and accurate HPLC method for fecal bile acid profile in healthy and cirrhotic subjects: validation by GC-MS and LC-MS/MS

    PubMed Central

    Kakiyama, Genta; Muto, Akina; Takei, Hajime; Nittono, Hiroshi; Murai, Tsuyoshi; Kurosawa, Takao; Hofmann, Alan F.; Pandak, William M.; Bajaj, Jasmohan S.

    2014-01-01

    We have developed a simple and accurate HPLC method for measurement of fecal bile acids using phenacyl derivatives of unconjugated bile acids, and applied it to the measurement of fecal bile acids in cirrhotic patients. The HPLC method has the following steps: 1) lyophilization of the stool sample; 2) reconstitution in buffer and enzymatic deconjugation using cholylglycine hydrolase/sulfatase; 3) incubation with 0.1 N NaOH in 50% isopropanol at 60°C to hydrolyze esterified bile acids; 4) extraction of bile acids from particulate material using 0.1 N NaOH; 5) isolation of deconjugated bile acids by solid phase extraction; 6) formation of phenacyl esters by derivatization using phenacyl bromide; and 7) HPLC separation measuring eluted peaks at 254 nm. The method was validated by showing that results obtained by HPLC agreed with those obtained by LC-MS/MS and GC-MS. We then applied the method to measuring total fecal bile acid (concentration) and bile acid profile in samples from 38 patients with cirrhosis (17 early, 21 advanced) and 10 healthy subjects. Bile acid concentrations were significantly lower in patients with advanced cirrhosis, suggesting impaired bile acid synthesis. PMID:24627129

  10. A Simple and Fast Extraction Method for the Determination of Multiclass Antibiotics in Eggs Using LC-MS/MS.

    PubMed

    Wang, Kun; Lin, Kunde; Huang, Xinwen; Chen, Meng

    2017-06-21

    The purpose of this study was to develop and validate a simple, fast, and specific extraction method for the analysis of 64 antibiotics from nine classes (including sulfonamides, quinolones, tetracyclines, macrolides, lincosamides, nitrofurans, β-lactams, nitroimidazoles, and chloramphenicols) in chicken eggs. Briefly, egg samples were simply extracted with a mixture of acetonitrile-water (90:10, v/v) and 0.1 mol·L⁻¹ Na₂EDTA solution, assisted by ultrasonication. The extract was centrifuged, condensed, and directly analyzed by liquid chromatography coupled to tandem mass spectrometry. Compared with conventional cleanup methods (passing extracts through solid phase extraction cartridges), the established method demonstrated comparable efficiency in eliminating matrix effects and higher or equivalent recoveries for most of the target compounds. Typical validation parameters including specificity, linearity, matrix effect, limits of detection (LODs) and quantification (LOQs), the decision limit, detection capability, trueness, and precision were evaluated. The recoveries of target compounds ranged from 70.8% to 116.1% at three spiking levels (5, 20, and 50 μg·kg⁻¹), with relative standard deviations less than 14%. LODs and LOQs were in the ranges of 0.005-2.00 μg·kg⁻¹ and 0.015-6.00 μg·kg⁻¹, respectively, for all of the antibiotics. A total of five antibiotics were successfully detected in 22 commercial eggs from local markets. This work suggests that the method is suitable for the analysis of multiclass antibiotics in eggs.

  11. Noninvasive Tests Do Not Accurately Differentiate Nonalcoholic Steatohepatitis From Simple Steatosis: A Systematic Review and Meta-analysis.

    PubMed

    Verhaegh, Pauline; Bavalia, Roisin; Winkens, Bjorn; Masclee, Ad; Jonkers, Daisy; Koek, Ger

    2018-06-01

    Nonalcoholic fatty liver disease is a rapidly increasing health problem. Liver biopsy analysis is the most sensitive test to differentiate between nonalcoholic steatohepatitis (NASH) and simple steatosis (SS), but noninvasive methods are needed. We performed a systematic review and meta-analysis of noninvasive tests for differentiating NASH from SS, focusing on blood markers. We performed a systematic search of the PubMed, Medline and Embase (1990-2016) databases using defined keywords, limited to full-text papers in English and human adults, and identified 2608 articles. Two independent reviewers screened the articles and identified 122 eligible articles that used liver biopsy as reference standard. If at least 2 studies were available, pooled sensitivity (sensp) and specificity (specp) values were determined using the Meta-Analysis Package for R (metafor). In the 122 studies analyzed, 219 different blood markers (107 single markers and 112 scoring systems) were identified to differentiate NASH from simple steatosis, and 22 other diagnostic tests were studied. Markers identified related to several pathophysiological mechanisms. The markers analyzed in the largest proportions of studies were alanine aminotransferase (sensp, 63.5% and specp, 74.4%) within routine biochemical tests, adiponectin (sensp, 72.0% and specp, 75.7%) within inflammatory markers, CK18-M30 (sensp, 68.4% and specp, 74.2%) within markers of cell death or proliferation and homeostatic model assessment of insulin resistance (sensp, 69.0% and specp, 72.7%) within the metabolic markers. Two scoring systems could also be pooled: the NASH test (differentiated NASH from borderline NASH plus simple steatosis with 22.9% sensp and 95.3% specp) and the GlycoNASH test (67.1% sensp and 63.8% specp). In the meta-analysis, we found no test to differentiate NASH from SS with a high level of pooled sensitivity and specificity (≥80%). However, some blood markers, when included in scoring

  12. New Method for Accurate Calibration of Micro-Channel Plate based Detection Systems and its use in the Fast Plasma Investigation of NASA's Magnetospheric MultiScale Mission

    NASA Astrophysics Data System (ADS)

    Gliese, U.; Avanov, L. A.; Barrie, A.; Kujawski, J. T.; Mariano, A. J.; Tucker, C. J.; Chornay, D. J.; Cao, N. T.; Zeuch, M.; Pollock, C. J.; Jacques, A. D.

    2013-12-01

    The Fast Plasma Investigation (FPI) of the NASA Magnetospheric MultiScale (MMS) mission employs 16 Dual Electron Spectrometers (DESs) and 16 Dual Ion Spectrometers (DISs), with 4 of each type on each of 4 spacecraft, to enable fast (30 ms for electrons; 150 ms for ions) and spatially differentiated measurements of the full 3D particle velocity distributions. This approach presents a new and challenging aspect to the calibration and operation of these instruments on the ground and in flight. The response uniformity and reliability of their calibration and the approach to handling any temporal evolution of these calibrated characteristics all assume enhanced importance in this application, where we attempt to understand the meaning of particle distributions within the ion and electron diffusion regions. Traditionally, the micro-channel plate (MCP) based detection systems for electrostatic particle spectrometers have been calibrated by setting a fixed detection threshold and, subsequently, measuring a detection system count rate plateau curve to determine the MCP voltage that ensures the count rate has reached a constant value independent of further variation in the MCP voltage. This is achieved when most of the MCP pulse height distribution (PHD) is located at higher values (larger pulses) than the detection amplifier threshold. This method is adequate in single-channel detection systems and in multi-channel detection systems with very low crosstalk between channels. However, in dense multi-channel systems, it can be inadequate. Furthermore, it fails to fully and individually characterize each of the fundamental parameters of the detection system. We present a new detection system calibration method that enables accurate and repeatable measurement and calibration of MCP gain, MCP efficiency, signal loss due to variation in gain and efficiency, crosstalk from effects both above and below the MCP, noise margin, and stability margin in one single measurement. The fundamental

  13. A simple backscattering microscope for fast tracking of biological molecules

    PubMed Central

    Sowa, Yoshiyuki; Steel, Bradley C.; Berry, Richard M.

    2010-01-01

    Recent developments in techniques for observing single molecules under light microscopes have helped reveal the mechanisms by which molecular machines work. A wide range of markers can be used to detect molecules, from single fluorophores to micron sized markers, depending on the research interest. Here, we present a new and simple objective-type backscattering microscope to track gold nanoparticles with nanometer and microsecond resolution. The total noise of our system in a 55 kHz bandwidth is ∼0.6 nm per axis, sufficient to measure molecular movement. We found our backscattering microscopy to be useful not only for in vitro but also for in vivo experiments because of lower background scattering from cells than in conventional dark-field microscopy. We demonstrate the application of this technique to measuring the motion of a biological rotary molecular motor, the bacterial flagellar motor, in live Escherichia coli cells. PMID:21133475

  14. A novel method for the accurate evaluation of Poisson's ratio of soft polymer materials.

    PubMed

    Lee, Jae-Hoon; Lee, Sang-Soo; Chang, Jun-Dong; Thompson, Mark S; Kang, Dong-Joong; Park, Sungchan; Park, Seonghun

    2013-01-01

    A new method with a simple algorithm was developed to accurately measure Poisson's ratio of soft materials such as polyvinyl alcohol hydrogel (PVA-H) with a custom experimental apparatus consisting of a tension device, a micro X-Y stage, an optical microscope, and a charge-coupled device camera. In the proposed method, the initial positions of the four vertices of an arbitrarily selected quadrilateral from the sample surface were first measured to generate a 2D 1st-order 4-node quadrilateral element for finite element numerical analysis. Next, minimum and maximum principal strains were calculated from differences between the initial and deformed shapes of the quadrilateral under tension. Finally, Poisson's ratio of PVA-H was determined by the ratio of minimum principal strain to maximum principal strain. This novel method has the advantage of accurately evaluating Poisson's ratio despite misalignment between specimens and experimental devices. In this study, Poisson's ratio of PVA-H was 0.44 ± 0.025 (n = 6) for 2.6-47.0% elongations, with a tendency to decrease with increasing elongation. The current evaluation method of Poisson's ratio with a simple measurement system can be employed in a real-time automated vision-tracking system to accurately evaluate the material properties of various soft materials.
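
    The vertex-based strain evaluation described above can be prototyped in a few lines. The sketch below is a simplified, hypothetical reconstruction (not the authors' implementation): it fits an affine deformation to the four tracked vertices by least squares, takes the eigenvalues of the small-strain tensor as principal strains, and reports -ε_min/ε_max as the Poisson's ratio estimate; the sign convention and the toy coordinates are assumptions made for illustration.

      import numpy as np

      def poisson_from_quad(ref_pts, def_pts):
          """Estimate Poisson's ratio from 4 quadrilateral vertices tracked in 2D.

          ref_pts, def_pts : (4, 2) arrays of vertex coordinates before/after stretching.
          Fits x_def ~= F @ x_ref + c, then uses the small-strain tensor of F.
          """
          ref_pts = np.asarray(ref_pts, float)
          def_pts = np.asarray(def_pts, float)
          A = np.hstack([ref_pts, np.ones((4, 1))])          # [x, y, 1] per vertex
          coeffs, *_ = np.linalg.lstsq(A, def_pts, rcond=None)   # least-squares affine fit
          F = coeffs[:2, :].T                                 # 2x2 deformation gradient
          eps = 0.5 * (F + F.T) - np.eye(2)                   # small-strain tensor
          e_min, e_max = np.sort(np.linalg.eigvalsh(eps))
          return -e_min / e_max                               # lateral contraction / axial stretch

      # toy example: 10% stretch along x, 4.4% contraction along y -> ratio ~0.44
      ref = [[0, 0], [10, 0], [10, 10], [0, 10]]
      deformed = [[0, 0], [11, 0], [11, 9.56], [0, 9.56]]
      print(round(poisson_from_quad(ref, deformed), 3))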

  15. FASTSIM2: a second-order accurate frictional rolling contact algorithm

    NASA Astrophysics Data System (ADS)

    Vollebregt, E. A. H.; Wilders, P.

    2011-01-01

    In this paper we consider the frictional (tangential) steady rolling contact problem. We confine ourselves to the simplified theory, instead of using full elastostatic theory, in order to be able to compute results fast, as needed for on-line application in vehicle system dynamics simulation packages. The FASTSIM algorithm is the leading technology in this field and is employed in all dominant railway vehicle system dynamics packages (VSD) in the world. The main contribution of this paper is a new version "FASTSIM2" of the FASTSIM algorithm, which is second-order accurate. This is relevant for VSD, because with the new algorithm 16 times less grid points are required for sufficiently accurate computations of the contact forces. The approach is based on new insights in the characteristics of the rolling contact problem when using the simplified theory, and on taking precise care of the contact conditions in the numerical integration scheme employed.

  16. Fast and accurate Monte Carlo modeling of a kilovoltage X-ray therapy unit using a photon-source approximation for treatment planning in complex media.

    PubMed

    Zeinali-Rafsanjani, B; Mosleh-Shirazi, M A; Faghihi, R; Karbasi, S; Mosalaei, A

    2015-01-01

    To accurately recompute dose distributions in chest-wall radiotherapy with 120 kVp kilovoltage X-rays, an MCNP4C Monte Carlo model is presented using a fast method that obviates the need to fully model the tube components. To validate the model, half-value layer (HVL), percentage depth doses (PDDs) and beam profiles were measured. Dose measurements were performed for a more complex situation using thermoluminescence dosimeters (TLDs) placed within a Rando phantom. The measured and computed first and second HVLs were 3.8, 10.3 mm Al and 3.8, 10.6 mm Al, respectively. The differences between measured and calculated PDDs and beam profiles in water were within 2 mm/2% for all data points. In the Rando phantom, differences for majority of data points were within 2%. The proposed model offered an approximately 9500-fold reduced run time compared to the conventional full simulation. The acceptable agreement, based on international criteria, between the simulations and the measurements validates the accuracy of the model for its use in treatment planning and radiobiological modeling studies of superficial therapies including chest-wall irradiation using kilovoltage beam.

  17. A convenient and accurate parallel Input/Output USB device for E-Prime.

    PubMed

    Canto, Rosario; Bufalari, Ilaria; D'Ausilio, Alessandro

    2011-03-01

    Psychological and neurophysiological experiments require the accurate control of timing and synchrony for Input/Output signals. For instance, a typical Event-Related Potential (ERP) study requires an extremely accurate synchronization of stimulus delivery with recordings. This is typically done via computer software such as E-Prime, and fast communications are typically assured by the Parallel Port (PP). However, the PP is an old and disappearing technology that, for example, is no longer available on portable computers. Here we propose a convenient USB device enabling parallel I/O capabilities. We tested this device against the PP on both a desktop and a laptop machine in different stress tests. Our data demonstrate the accuracy of our system, which suggests that it may be a good substitute for the PP with E-Prime.

  18. DNA barcode data accurately assign higher spider taxa

    PubMed Central

    Coddington, Jonathan A.; Agnarsson, Ingi; Cheng, Ren-Chung; Čandek, Klemen; Driskell, Amy; Frick, Holger; Gregorič, Matjaž; Kostanjšek, Rok; Kropf, Christian; Kweskin, Matthew; Lokovšek, Tjaša; Pipan, Miha; Vidergar, Nina

    2016-01-01

    The use of unique DNA sequences as a method for taxonomic identification is no longer fundamentally controversial, even though debate continues on the best markers, methods, and technology to use. Although both existing databanks such as GenBank and BOLD, as well as reference taxonomies, are imperfect, in best case scenarios “barcodes” (whether single or multiple, organelle or nuclear, loci) clearly are an increasingly fast and inexpensive method of identification, especially as compared to manual identification of unknowns by increasingly rare expert taxonomists. Because most species on Earth are undescribed, a complete reference database at the species level is impractical in the near term. The question therefore arises whether unidentified species can, using DNA barcodes, be accurately assigned to more inclusive groups such as genera and families—taxonomic ranks of putatively monophyletic groups for which the global inventory is more complete and stable. We used a carefully chosen test library of CO1 sequences from 49 families, 313 genera, and 816 species of spiders to assess the accuracy of genus and family-level assignment. We used BLAST queries of each sequence against the entire library and got the top ten hits. The percent sequence identity was reported from these hits (PIdent, range 75–100%). Accurate assignment of higher taxa (PIdent above which errors totaled less than 5%) occurred for genera at PIdent values >95 and families at PIdent values ≥ 91, suggesting these as heuristic thresholds for accurate generic and familial identifications in spiders. Accuracy of identification increases with numbers of species/genus and genera/family in the library; above five genera per family and fifteen species per genus all higher taxon assignments were correct. We propose that using percent sequence identity between conventional barcode sequences may be a feasible and reasonably accurate method to identify animals to family/genus. However, the quality of
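
    The heuristic identity thresholds reported above (genus for PIdent > 95, family for PIdent >= 91) are easy to express as a small decision rule. The helper below is an illustrative sketch of that rule only, not the authors' pipeline; it assumes the input is the percent identity of the best BLAST hit together with that hit's genus and family labels.

      def assign_higher_taxon(pident, hit_genus, hit_family,
                              genus_cutoff=95.0, family_cutoff=91.0):
          """Assign an unidentified spider CO1 sequence to genus/family from its
          best-hit percent identity, using the heuristic thresholds in the abstract."""
          if pident > genus_cutoff:
              return {"genus": hit_genus, "family": hit_family}
          if pident >= family_cutoff:
              return {"genus": None, "family": hit_family}
          return {"genus": None, "family": None}       # below family-level confidence

      # illustrative call with made-up hit data
      print(assign_higher_taxon(96.2, "Araneus", "Araneidae"))
      print(assign_higher_taxon(92.0, "Araneus", "Araneidae"))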

  19. Accurate Energy Transaction Allocation using Path Integration and Interpolation

    NASA Astrophysics Data System (ADS)

    Bhide, Mandar Mohan

    This thesis investigates many of the popular cost allocation methods that are based on actual usage of the transmission network. The Energy Transaction Allocation (ETA) method originally proposed by A. Fradi, S. Brigonne and B. Wollenberg, which gives the unique advantage of accurately allocating transmission network usage, is discussed subsequently. A modified calculation of ETA based on a simple interpolation technique is then proposed. The proposed methodology not only increases the accuracy of the calculation but also decreases the number of calculations to less than half of that required by the original ETA.

  20. Simple Fabrication of a Highly Sensitive and Fast Glucose Biosensor using Enzyme Immobilized in Mesocellular Carbon Foam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Dohoon; Lee, Jinwoo; Kim, Jungbae

    2005-12-05

    We fabricated a highly sensitive and fast glucose biosensor by simply immobilizing glucose oxidase (GOx) in mesocellular carbon foam (MSU-F-C). Due to its unique structure, the MSU-F-C enabled high enzyme loading without serious mass transfer limitation, resulting in high catalytic efficiency. As a result, the glucose biosensor fabricated with MSU-F-C/GOx showed high sensitivity and a fast response. Given these results and its inherent electrical conductivity, we anticipate that MSU-F-C will make a useful matrix for enzyme immobilization in various biocatalytic and electrobiocatalytic applications.

  1. Simple Identification of Complex ADHD Subtypes Using Current Symptom Counts

    ERIC Educational Resources Information Center

    Volk, Heather E.; Todorov, Alexandre A.; Hay, David A.; Todd, Richard D.

    2009-01-01

    The results of the assessment of the accuracy of simple rules based on symptom counts for assigning youths to attention deficit hyperactivity disorder subtypes show that having six or more total symptoms and fewer than three hyperactive-impulsive symptoms is an accurate predictor of the latent class severe inattentive subtype.

  2. A simple approximation for the current-voltage characteristics of high-power, relativistic diodes

    DOE PAGES

    Ekdahl, Carl

    2016-06-10

    A simple approximation for the current-voltage characteristics of a relativistic electron diode is presented. The approximation is accurate from non-relativistic through relativistic electron energies. Although it is empirically developed, it has many of the fundamental properties of the exact diode solutions. Lastly, the approximation is simple enough to be remembered and worked on almost any pocket calculator, so it has proven to be quite useful on the laboratory floor.

  3. Tau-independent Phase Analysis: A Novel Method for Accurately Determining Phase Shifts.

    PubMed

    Tackenberg, Michael C; Jones, Jeff R; Page, Terry L; Hughey, Jacob J

    2018-06-01

    Estimations of period and phase are essential in circadian biology. While many techniques exist for estimating period, comparatively few methods are available for estimating phase. Current approaches to analyzing phase often vary between studies and are sensitive to coincident changes in period and the stage of the circadian cycle at which the stimulus occurs. Here we propose a new technique, tau-independent phase analysis (TIPA), for quantifying phase shifts in multiple types of circadian time-course data. Through comprehensive simulations, we show that TIPA is both more accurate and more precise than the standard actogram approach. TIPA is computationally simple and therefore will enable accurate and reproducible quantification of phase shifts across multiple subfields of chronobiology.

  4. Method for Accurately Calibrating a Spectrometer Using Broadband Light

    NASA Technical Reports Server (NTRS)

    Simmons, Stephen; Youngquist, Robert

    2011-01-01

    A novel method has been developed for performing very fine calibration of a spectrometer. This process is particularly useful for modern miniature charge-coupled device (CCD) spectrometers where a typical factory wavelength calibration has been performed and a finer, more accurate calibration is desired. Typically, the factory calibration is done with a spectral line source that generates light at known wavelengths, allowing specific pixels in the CCD array to be assigned wavelength values. This method is good to about 1 nm across the spectrometer's wavelength range. This new method appears to be accurate to about 0.1 nm, a factor of ten improvement. White light is passed through an unbalanced Michelson interferometer, producing an optical signal with significant spectral variation. A simple theory can be developed to describe this spectral pattern, so by comparing the actual spectrometer output against this predicted pattern, errors in the wavelength assignment made by the spectrometer can be determined.
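
    The idea of comparing the measured spectrum of white light passed through an unbalanced Michelson interferometer against a predicted fringe pattern can be sketched numerically. The snippet below is a minimal illustration under simple assumptions (a flat source spectrum, perfect fringe visibility, a purely linear wavelength error, and a made-up 0.05 mm path difference): it predicts the channeled spectrum 0.5*(1 + cos(2*pi*OPD/lambda)) and grid-searches a small linear correction to the factory wavelength axis that best matches a simulated measurement.

      import numpy as np

      def channeled_spectrum(wavelength_nm, opd_nm):
          """Idealized transmission of an unbalanced Michelson at each wavelength."""
          return 0.5 * (1.0 + np.cos(2.0 * np.pi * opd_nm / wavelength_nm))

      opd_nm = 5.0e4                                   # assumed 0.05 mm path difference
      factory_wl = np.linspace(400.0, 700.0, 2048)     # factory wavelength per pixel (nm)
      true_wl = factory_wl + 0.3 - 0.0004 * (factory_wl - 550.0)   # hidden calibration error
      measured = channeled_spectrum(true_wl, opd_nm)   # what the spectrometer would record

      # grid-search a linear correction wl = factory_wl + a + b*(factory_wl - 550)
      best = None
      for a in np.linspace(-0.5, 0.5, 101):
          for b in np.linspace(-0.001, 0.001, 101):
              predicted = channeled_spectrum(factory_wl + a + b * (factory_wl - 550.0), opd_nm)
              err = np.mean((predicted - measured) ** 2)
              if best is None or err < best[0]:
                  best = (err, a, b)
      print("recovered offset %.3f nm, slope %.5f" % (best[1], best[2]))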

  5. Fast and accurate: high-speed metrological large-range AFM for surface and nanometrology

    NASA Astrophysics Data System (ADS)

    Dai, Gaoliang; Koenders, Ludger; Fluegge, Jens; Hemmleb, Matthias

    2018-05-01

    Low measurement speed remains a major shortcoming of scanning probe microscopy. It not only leads to low measurement throughput, but also to significant measurement drift over the long measurement times needed (up to hours or even days). To overcome this challenge, PTB, the national metrology institute of Germany, has developed a high-speed metrological large-range atomic force microscope (HS Met. LR-AFM) capable of measurement speeds up to 1 mm s‑1. This paper introduces the design concept in detail. After modelling scanning probe microscopic measurements, our results suggest that the signal spectrum of the surface to be measured is the spatial spectrum of the surface scaled by the scanning speed: the higher the scanning speed, the broader the spectrum to be measured. To realise an accurate HS Met. LR-AFM, our solution is to combine different stages/sensors synchronously in measurements, which together provide a much larger spectral range for high-speed measurement capability. Two application examples are demonstrated. The first is a new concept called reference areal surface metrology. Using the developed HS Met. LR-AFM, surfaces are measured accurately and traceably at a speed of 500 µm s‑1 and the results are applied as a reference 3D data map of the surfaces. By correlating the reference 3D data sets with 3D data sets of tools under calibration, measured on the same surface, it is possible to comprehensively characterise the tools, for instance their spectral properties. Investigation results for two commercial confocal microscopes are demonstrated, with very promising results. The second example is the calibration of a kind of 3D nano standard, which has spatially distributed landmarks, i.e. special unique features defined by 3D coordinates. Experimental investigations confirmed that the calibration accuracy is maintained at a measurement speed of 100 µm s‑1, which improves the calibration efficiency by a

  6. Ultra-fast quantitative imaging using ptychographic iterative engine based digital micro-mirror device

    NASA Astrophysics Data System (ADS)

    Sun, Aihui; Tian, Xiaolin; Kong, Yan; Jiang, Zhilong; Liu, Fei; Xue, Liang; Wang, Shouyu; Liu, Cheng

    2018-01-01

    As a lensfree imaging technique, the ptychographic iterative engine (PIE) method can provide both quantitative sample amplitude and phase distributions while avoiding aberration. However, it requires field of view (FoV) scanning, which often relies on mechanical translation; this not only slows down the measurement but also introduces mechanical errors that decrease both the resolution and the accuracy of the retrieved information. In order to achieve highly accurate quantitative imaging at high speed, a digital micromirror device (DMD) is adopted in PIE for large-FoV scanning controlled by on/off state coding of the DMD. Measurements were implemented using biological samples as well as a USAF resolution target, proving the high resolution of quantitative imaging with the proposed system. Considering its fast and accurate imaging capability, it is believed that the DMD-based PIE technique provides a potential solution for medical observation and measurements.

  7. Research on a pulmonary nodule segmentation method combining fast self-adaptive FCM and classification.

    PubMed

    Liu, Hui; Zhang, Cai-Ming; Su, Zhi-Yuan; Wang, Kai; Deng, Kai

    2015-01-01

    The key problem of computer-aided diagnosis (CAD) of lung cancer is to segment pathologically changed tissues quickly and accurately. As pulmonary nodules are a potential manifestation of lung cancer, we propose a fast and self-adaptive pulmonary nodule segmentation method based on a combination of FCM clustering and classification learning. The enhanced spatial function considers contributions to fuzzy membership from both the grayscale similarity between central pixels and single neighboring pixels and the spatial similarity between central pixels and their neighborhood, and effectively improves the convergence rate and self-adaptivity of the algorithm. Experimental results show that the proposed method can achieve more accurate segmentation of vascular adhesion, pleural adhesion, and ground glass opacity (GGO) pulmonary nodules than other typical algorithms.
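
    For readers unfamiliar with the FCM core that the method builds on, the snippet below shows a plain fuzzy c-means iteration on pixel intensities. The enhanced spatial membership function and the classification stage described in the abstract are not reproduced here, and the fuzzifier, cluster count, and toy data are assumptions for illustration.

      import numpy as np

      def fuzzy_c_means(values, n_clusters=3, m=2.0, n_iter=50, seed=0):
          """Plain FCM on 1-D data (e.g., CT pixel intensities). Returns centers, memberships."""
          rng = np.random.default_rng(seed)
          u = rng.random((len(values), n_clusters))
          u /= u.sum(axis=1, keepdims=True)                 # fuzzy memberships sum to 1
          for _ in range(n_iter):
              um = u ** m
              centers = um.T @ values / um.sum(axis=0)      # weighted cluster centers
              dist = np.abs(values[:, None] - centers[None, :]) + 1e-12
              inv = dist ** (-2.0 / (m - 1.0))
              u = inv / inv.sum(axis=1, keepdims=True)      # membership update
          return centers, u

      # toy "image": three intensity populations (background, parenchyma, nodule)
      vals = np.concatenate([np.random.normal(mu, 5, 300) for mu in (20, 90, 160)])
      centers, u = fuzzy_c_means(vals)
      print(np.sort(centers).round(1))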

  8. Research on a Pulmonary Nodule Segmentation Method Combining Fast Self-Adaptive FCM and Classification

    PubMed Central

    Liu, Hui; Zhang, Cai-Ming; Su, Zhi-Yuan; Wang, Kai; Deng, Kai

    2015-01-01

    The key problem of computer-aided diagnosis (CAD) of lung cancer is to segment pathologically changed tissues quickly and accurately. As pulmonary nodules are a potential manifestation of lung cancer, we propose a fast and self-adaptive pulmonary nodule segmentation method based on a combination of FCM clustering and classification learning. The enhanced spatial function considers contributions to fuzzy membership from both the grayscale similarity between central pixels and single neighboring pixels and the spatial similarity between central pixels and their neighborhood, and effectively improves the convergence rate and self-adaptivity of the algorithm. Experimental results show that the proposed method can achieve more accurate segmentation of vascular adhesion, pleural adhesion, and ground glass opacity (GGO) pulmonary nodules than other typical algorithms. PMID:25945120

  9. A simple and fast heuristic for protein structure comparison.

    PubMed

    Pelta, David A; González, Juan R; Moreno Vega, Marcos

    2008-03-25

    Protein structure comparison is a key problem in bioinformatics. There exist several methods for doing protein comparison, the solution of the Maximum Contact Map Overlap problem (MAX-CMO) being one of the alternatives available. Although this problem may be solved using exact algorithms, researchers require approximate algorithms that obtain good quality solutions using fewer computational resources than the former. We propose a variable neighborhood search metaheuristic for solving MAX-CMO. We analyze this strategy in two aspects: 1) from an optimization point of view the strategy is tested on two different datasets, obtaining an error of 3.5% (over 2702 pairs) and 1.7% (over 161 pairs) with respect to optimal values, thus leading to highly accurate solutions in a simpler and less expensive way than exact algorithms; 2) in terms of protein structure classification, we conduct experiments on three datasets and show that it is feasible to detect structural similarities at SCOP's family and CATH's architecture levels using normalized overlap values. Some limitations and the role of normalization are outlined for doing classification at SCOP's fold level. We designed, implemented and tested a new tool for solving MAX-CMO, based on a well-known metaheuristic technique. The good balance between solution quality and computational effort makes it a valuable tool. Moreover, to the best of our knowledge, this is the first time the MAX-CMO measure is tested at SCOP's fold and CATH's architecture levels with encouraging results.

  10. 3D palmprint data fast acquisition and recognition

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoxu; Huang, Shujun; Gao, Nan; Zhang, Zonghua

    2014-11-01

    This paper presents a fast 3D (three-dimensional) palmprint capture system and develops an efficient 3D palmprint feature extraction and recognition method. In order to rapidly acquire the accurate 3D shape and texture of a palmprint, a DLP projector triggers a CCD camera to realize synchronization. By generating and projecting green fringe pattern images onto the measured palm surface, 3D palmprint data are calculated from the fringe pattern images. A periodic feature vector can be derived from the calculated 3D palmprint data, so undistorted 3D biometrics are obtained. Using the obtained 3D palmprint data, feature matching tests have been carried out using Gabor filters, competition rules, and the mean curvature. Experimental results show that the proposed acquisition method can rapidly obtain the 3D shape information of a palmprint. Initial experiments on recognition show that the proposed method is efficient when using 3D palmprint data.

  11. SU-E-J-208: Fast and Accurate Auto-Segmentation of Abdominal Organs at Risk for Online Adaptive Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gupta, V; Wang, Y; Romero, A

    2014-06-01

    Purpose: Various studies have demonstrated that online adaptive radiotherapy by real-time re-optimization of the treatment plan can improve organs-at-risk (OARs) sparing in the abdominal region. Its clinical implementation, however, requires fast and accurate auto-segmentation of OARs in CT scans acquired just before each treatment fraction. Auto-segmentation is particularly challenging in the abdominal region due to the frequently observed large deformations. We present a clinical validation of a new auto-segmentation method that uses fully automated non-rigid registration for propagating abdominal OAR contours from planning to daily treatment CT scans. Methods: OARs were manually contoured by an expert panel to obtain ground truth contours for repeat CT scans (3 per patient) of 10 patients. For the non-rigid alignment, we used a new non-rigid registration method that estimates the deformation field by optimizing local normalized correlation coefficient with smoothness regularization. This field was used to propagate planning contours to repeat CTs. To quantify the performance of the auto-segmentation, we compared the propagated and ground truth contours using two widely used metrics: Dice coefficient (Dc) and Hausdorff distance (Hd). The proposed method was benchmarked against translation and rigid alignment based auto-segmentation. Results: For all organs, the auto-segmentation performed better than the baseline (translation) with an average processing time of 15 s per fraction CT. The overall improvements ranged from 2% (heart) to 32% (pancreas) in Dc, and 27% (heart) to 62% (spinal cord) in Hd. For liver, kidneys, gall bladder, stomach, spinal cord and heart, Dc above 0.85 was achieved. Duodenum and pancreas were the most challenging organs with both showing relatively larger spreads and medians of 0.79 and 2.1 mm for Dc and Hd, respectively. Conclusion: Based on the achieved accuracy and computational time we conclude that the investigated auto
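
    Since the evaluation above relies entirely on the Dice coefficient and the Hausdorff distance, a small reference implementation of the two metrics may be helpful. The version below works on binary masks with NumPy only and uses a brute-force Hausdorff over all foreground voxels, which is fine for illustration but slow for large contours; the toy masks are made up.

      import numpy as np

      def dice(mask_a, mask_b):
          """Dice coefficient between two boolean masks."""
          inter = np.logical_and(mask_a, mask_b).sum()
          return 2.0 * inter / (mask_a.sum() + mask_b.sum())

      def hausdorff(mask_a, mask_b, spacing=1.0):
          """Symmetric Hausdorff distance between foreground voxel sets (in mm if
          'spacing' is the voxel size)."""
          pts_a = np.argwhere(mask_a) * spacing
          pts_b = np.argwhere(mask_b) * spacing
          d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=2)
          return max(d.min(axis=1).max(), d.min(axis=0).max())

      # toy example: two overlapping squares on a 2D grid
      a = np.zeros((50, 50), bool); a[10:30, 10:30] = True
      b = np.zeros((50, 50), bool); b[12:32, 12:32] = True
      print(round(dice(a, b), 3), round(hausdorff(a, b, spacing=1.0), 3))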

  12. Accuracy of two simple methods for estimation of thyroidal {sup 131}I kinetics for dosimetry-based treatment of Graves' disease

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Traino, A. C.; Xhafa, B.; Sezione di Fisica Medica, U.O. Fisica Sanitaria, Azienda Ospedaliero-Universitaria Pisana, via Roma n. 67, Pisa 56125

    2009-04-15

    One of the major challenges to the more widespread use of individualized, dosimetry-based radioiodine treatment of Graves' disease is the development of a reasonably fast, simple, and cost-effective method to measure thyroidal 131I kinetics in patients. Even though the fixed activity administration method does not optimize the therapy, often giving too high or too low a dose to the gland, it provides effective treatment for almost 80% of patients without consuming excessive time and resources. In this article two simple methods for the evaluation of the kinetics of 131I in the thyroid gland are presented and discussed. The first is based on two measurements 4 and 24 h after a diagnostic 131I administration, and the second on one measurement 4 h after such an administration together with a linear correlation between this measurement and the maximum uptake in the thyroid. The thyroid absorbed dose calculated by each of the two methods is compared to that calculated by a more complete 131I kinetics evaluation, based on seven thyroid uptake measurements for 35 patients at various times after the therapy administration. There are differences in the thyroid absorbed doses between those derived by each of the two simpler methods and the "reference" value (derived from more complete uptake measurements following the therapeutic 131I administration), with 20% median and 40% 90-percentile differences for the first method (i.e., based on two thyroid uptake measurements at 4 and 24 h after 131I administration) and 25% median and 45% 90-percentile differences for the second method (i.e., based on one measurement at 4 h post-administration). Predictably, although relatively fast and convenient, neither of these simpler methods appears to be as accurate as thyroid dose estimates based on more complete kinetic data.

  13. fastBMA: scalable network inference and transitive reduction.

    PubMed

    Hung, Ling-Hong; Shi, Kaiyuan; Wu, Migao; Young, William Chad; Raftery, Adrian E; Yeung, Ka Yee

    2017-10-01

    Inferring genetic networks from genome-wide expression data is extremely demanding computationally. We have developed fastBMA, a distributed, parallel, and scalable implementation of Bayesian model averaging (BMA) for this purpose. fastBMA also includes a computationally efficient module for eliminating redundant indirect edges in the network by mapping the transitive reduction to an easily solved shortest-path problem. We evaluated the performance of fastBMA on synthetic data and experimental genome-wide time series yeast and human datasets. When using a single CPU core, fastBMA is up to 100 times faster than the next fastest method, LASSO, with increased accuracy. It is a memory-efficient, parallel, and distributed application that scales to human genome-wide expression data. A 10 000-gene regulation network can be obtained in a matter of hours using a 32-core cloud cluster (2 nodes of 16 cores). fastBMA is a significant improvement over its predecessor ScanBMA. It is more accurate and orders of magnitude faster than other fast network inference methods such as the one based on LASSO. The improved scalability allows it to calculate networks from genome scale data in a reasonable time frame. The transitive reduction method can improve accuracy in denser networks. fastBMA is available as code (M.I.T. license) from GitHub (https://github.com/lhhunghimself/fastBMA), as part of the updated networkBMA Bioconductor package (https://www.bioconductor.org/packages/release/bioc/html/networkBMA.html) and as ready-to-deploy Docker images (https://hub.docker.com/r/biodepot/fastbma/). © The Authors 2017. Published by Oxford University Press.
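
    To illustrate the idea of removing redundant indirect edges by casting transitive reduction as a shortest-path problem, the sketch below works on a small weighted digraph in which each edge weight is taken to be -log of an edge confidence (an assumption; fastBMA's actual weighting and data structures are not reproduced here). An edge u->v is dropped when some indirect path from u to v is at least as strong, i.e. has no larger total weight, than the direct edge.

      import heapq
      from collections import defaultdict

      def shortest_path(graph, src, dst, skip_edge):
          """Dijkstra from src to dst, ignoring one specified direct edge."""
          dist, heap = {src: 0.0}, [(0.0, src)]
          while heap:
              d, u = heapq.heappop(heap)
              if u == dst:
                  return d
              if d > dist.get(u, float("inf")):
                  continue
              for v, w in graph[u].items():
                  if (u, v) == skip_edge:
                      continue
                  nd = d + w
                  if nd < dist.get(v, float("inf")):
                      dist[v] = nd
                      heapq.heappush(heap, (nd, v))
          return float("inf")

      def transitive_reduction(edges):
          """Drop edge u->v if an indirect path is at least as strong (weights = -log confidence)."""
          graph = defaultdict(dict)
          for u, v, w in edges:
              graph[u][v] = w
          kept = []
          for u, v, w in edges:
              if shortest_path(graph, u, v, skip_edge=(u, v)) <= w:
                  continue                      # redundant: explained by an indirect path
              kept.append((u, v, w))
          return kept

      # toy network: A->B->C strong, plus a weaker direct A->C edge that should be removed
      toy = [("A", "B", 0.2), ("B", "C", 0.3), ("A", "C", 0.9)]
      print(transitive_reduction(toy))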

  14. Fast and accurate semi-automated segmentation method of spinal cord MR images at 3T applied to the construction of a cervical spinal cord template.

    PubMed

    El Mendili, Mohamed-Mounir; Chen, Raphaël; Tiret, Brice; Villard, Noémie; Trunet, Stéphanie; Pélégrini-Issac, Mélanie; Lehéricy, Stéphane; Pradat, Pierre-François; Benali, Habib

    2015-01-01

    To design a fast and accurate semi-automated segmentation method for spinal cord 3T MR images and to construct a template of the cervical spinal cord. A semi-automated double threshold-based method (DTbM) was proposed enabling both cross-sectional and volumetric measures from 3D T2-weighted turbo spin echo MR scans of the spinal cord at 3T. Eighty-two healthy subjects, 10 patients with amyotrophic lateral sclerosis, 10 with spinal muscular atrophy and 10 with spinal cord injuries were studied. DTbM was compared with active surface method (ASM), threshold-based method (TbM) and manual outlining (ground truth). Accuracy of segmentations was scored visually by a radiologist in cervical and thoracic cord regions. Accuracy was also quantified at the cervical and thoracic levels as well as at C2 vertebral level. To construct a cervical template from healthy subjects' images (n=59), a standardization pipeline was designed leading to well-centered straight spinal cord images and accurate probability tissue map. Visual scoring showed better performance for DTbM than for ASM. Mean Dice similarity coefficient (DSC) was 95.71% for DTbM and 90.78% for ASM at the cervical level and 94.27% for DTbM and 89.93% for ASM at the thoracic level. Finally, at C2 vertebral level, mean DSC was 97.98% for DTbM compared with 98.02% for TbM and 96.76% for ASM. DTbM showed similar accuracy compared with TbM, but with the advantage of limited manual interaction. A semi-automated segmentation method with limited manual intervention was introduced and validated on 3T images, enabling the construction of a cervical spinal cord template.

  15. Fast and Accurate Semi-Automated Segmentation Method of Spinal Cord MR Images at 3T Applied to the Construction of a Cervical Spinal Cord Template

    PubMed Central

    El Mendili, Mohamed-Mounir; Trunet, Stéphanie; Pélégrini-Issac, Mélanie; Lehéricy, Stéphane; Pradat, Pierre-François; Benali, Habib

    2015-01-01

    Objective To design a fast and accurate semi-automated segmentation method for spinal cord 3T MR images and to construct a template of the cervical spinal cord. Materials and Methods A semi-automated double threshold-based method (DTbM) was proposed enabling both cross-sectional and volumetric measures from 3D T2-weighted turbo spin echo MR scans of the spinal cord at 3T. Eighty-two healthy subjects, 10 patients with amyotrophic lateral sclerosis, 10 with spinal muscular atrophy and 10 with spinal cord injuries were studied. DTbM was compared with active surface method (ASM), threshold-based method (TbM) and manual outlining (ground truth). Accuracy of segmentations was scored visually by a radiologist in cervical and thoracic cord regions. Accuracy was also quantified at the cervical and thoracic levels as well as at C2 vertebral level. To construct a cervical template from healthy subjects’ images (n=59), a standardization pipeline was designed leading to well-centered straight spinal cord images and accurate probability tissue map. Results Visual scoring showed better performance for DTbM than for ASM. Mean Dice similarity coefficient (DSC) was 95.71% for DTbM and 90.78% for ASM at the cervical level and 94.27% for DTbM and 89.93% for ASM at the thoracic level. Finally, at C2 vertebral level, mean DSC was 97.98% for DTbM compared with 98.02% for TbM and 96.76% for ASM. DTbM showed similar accuracy compared with TbM, but with the advantage of limited manual interaction. Conclusion A semi-automated segmentation method with limited manual intervention was introduced and validated on 3T images, enabling the construction of a cervical spinal cord template. PMID:25816143

  16. A Simple, Fast, Low Cost, HPLC/UV Validated Method for Determination of Flutamide: Application to Protein Binding Studies.

    PubMed

    Esmaeilzadeh, Sara; Valizadeh, Hadi; Zakeri-Milani, Parvin

    2016-06-01

    The main goal of this study was the development of a reverse phase high performance liquid chromatography (RP-HPLC) method for flutamide quantitation applicable to protein binding studies. An ultrafiltration method was used for the protein binding study of flutamide. For sample analysis, flutamide was extracted by a simple and low cost extraction method using diethyl ether and then determined by HPLC/UV. Acetanilide was used as an internal standard. The chromatographic system consisted of a reversed-phase C8 column with a C8 pre-column, and a mobile phase of 29% (v/v) methanol, 38% (v/v) acetonitrile and 33% (v/v) potassium dihydrogen phosphate buffer (50 mM) with pH adjusted to 3.2. Acetanilide and flutamide were eluted at 1.8 and 2.9 min, respectively. The linearity of the method was confirmed in the range of 62.5-16000 ng/ml (r² > 0.99). The limit of quantification was shown to be 62.5 ng/ml. Precision and accuracy ranges were found to be (0.2-1.4%, 90-105%) and (0.2-5.3%, 86.7-98.5%), respectively. Acetanilide and flutamide capacity factor values of 1.35 and 2.87, tailing factor values of 1.24 and 1.07, and resolution values of 1.8 and 3.22 were obtained in accordance with ICH guidelines. Based on the obtained results, a rapid, precise, accurate, sensitive and cost-effective analysis procedure is proposed for the quantitative determination of flutamide.

  17. Fast simulation of yttrium-90 bremsstrahlung photons with GATE.

    PubMed

    Rault, Erwann; Staelens, Steven; Van Holen, Roel; De Beenhouwer, Jan; Vandenberghe, Stefaan

    2010-06-01

    Multiple investigators have recently reported the use of yttrium-90 (90Y) bremsstrahlung single photon emission computed tomography (SPECT) imaging for the dosimetry of targeted radionuclide therapies. Because Monte Carlo (MC) simulations are useful for studying SPECT imaging, this study investigates the MC simulation of 90Y bremsstrahlung photons in SPECT. To overcome the computationally expensive simulation of electrons, the authors propose a fast way to simulate the emission of 90Y bremsstrahlung photons based on prerecorded bremsstrahlung photon probability density functions (PDFs). The accuracy of bremsstrahlung photon simulation is evaluated in two steps. First, the validity of the fast bremsstrahlung photon generator is checked. To that end, fast and analog simulations of photons emitted from a 90Y point source in a water phantom are compared. The same setup is then used to verify the accuracy of the bremsstrahlung photon simulations, comparing the results obtained with PDFs generated from both simulated and measured data to measurements. In both cases, the energy spectra and point spread functions of the photons detected in a scintillation camera are used. Results show that the fast simulation method is responsible for a 5% overestimation of the low-energy fluence (below 75 keV) of the bremsstrahlung photons detected using a scintillation camera. The spatial distribution of the detected photons is, however, accurately reproduced with the fast method and a computational acceleration of approximately 17-fold is achieved. When measured PDFs are used in the simulations, the simulated energy spectrum of photons emitted from a point source of 90Y in a water phantom and detected in a scintillation camera closely approximates the measured spectrum. The PSF of the photons imaged in the 50-300 keV energy window is also accurately estimated with a 12.4% underestimation of the full width at half maximum and 4.5% underestimation of the full width at tenth maximum

  18. RapGene: a fast and accurate strategy for synthetic gene assembly in Escherichia coli

    PubMed Central

    Zampini, Massimiliano; Stevens, Pauline Rees; Pachebat, Justin A.; Kingston-Smith, Alison; Mur, Luis A. J.; Hayes, Finbarr

    2015-01-01

    The ability to assemble DNA sequences de novo through efficient and powerful DNA fabrication methods is one of the foundational technologies of synthetic biology. Gene synthesis, in particular, has been considered the main driver for the emergence of this new scientific discipline. Here we describe RapGene, a rapid gene assembly technique which was successfully tested for the synthesis and cloning of both prokaryotic and eukaryotic genes through a ligation independent approach. The method developed in this study is a complete bacterial gene synthesis platform for the quick, accurate and cost effective fabrication and cloning of gene-length sequences that employ the widely used host Escherichia coli. PMID:26062748

  19. Fast ion transport at a gas-metal interface

    DOE PAGES

    McDevitt, Christopher J.; Tang, Xian-Zhu; Guo, Zehua

    2017-11-06

    Fast ion transport and the resulting fusion yield reduction are computed at a gas-metal interface. The extent of fusion yield reduction is observed to depend sensitively on the charge state of the surrounding pusher material and the width of the atomically mixed region. These sensitivities suggest that idealized boundary conditions often implemented at the gas-pusher interface for the purpose of estimating fast ion loss will likely overestimate fusion reactivity reduction in several important limits. Additionally, the impact of a spatially complex material interface is investigated by considering a collection of droplets of the pusher material immersed in a DT plasma. It is found that for small Knudsen numbers, the extent of fusion yield reduction scales with the surface area of the material interface. As the Knudsen number is increased, however, the simple surface area scaling is broken, suggesting that hydrodynamic mix has a nontrivial impact on the extent of fast ion losses.

  20. SOLAR OBLIQUITY INDUCED BY PLANET NINE: SIMPLE CALCULATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lai, Dong

    2016-12-01

    Bailey et al. and Gomes et al. recently suggested that the 6° misalignment between the Sun’s rotational equator and the orbital plane of the major planets may be produced by forcing from the hypothetical Planet Nine on an inclined orbit. Here, we present a simple yet accurate calculation of the effect, which provides a clear description of how the Sun’s spin orientation depends on the properties of Planet Nine in this scenario.

  1. A simple and fast heuristic for protein structure comparison

    PubMed Central

    Pelta, David A; González, Juan R; Moreno Vega, Marcos

    2008-01-01

    Background Protein structure comparison is a key problem in bioinformatics. Several methods exist for protein comparison, one of which is the solution of the Maximum Contact Map Overlap (MAX-CMO) problem. Although this problem may be solved using exact algorithms, researchers require approximate algorithms that obtain good quality solutions using fewer computational resources than exact algorithms. Results We propose a variable neighborhood search metaheuristic for solving MAX-CMO. We analyze this strategy in two aspects: 1) from an optimization point of view the strategy is tested on two different datasets, obtaining an error of 3.5% (over 2702 pairs) and 1.7% (over 161 pairs) with respect to optimal values, thus leading to highly accurate solutions in a simpler and less expensive way than exact algorithms; 2) in terms of protein structure classification, we conduct experiments on three datasets and show that it is feasible to detect structural similarities at SCOP's family and CATH's architecture levels using normalized overlap values. Some limitations and the role of normalization are outlined for doing classification at SCOP's fold level. Conclusion We designed, implemented and tested a new tool for solving MAX-CMO, based on a well-known metaheuristic technique. The good balance between solution quality and computational effort makes it a valuable tool. Moreover, to the best of our knowledge, this is the first time the MAX-CMO measure is tested at SCOP's fold and CATH's architecture levels with encouraging results. Software is available for download at . PMID:18366735

  2. Allele-sharing models: LOD scores and accurate linkage tests.

    PubMed

    Kong, A; Cox, N J

    1997-11-01

    Starting with a test statistic for linkage analysis based on allele sharing, we propose an associated one-parameter model. Under general missing-data patterns, this model allows exact calculation of likelihood ratios and LOD scores and has been implemented by a simple modification of existing software. Most important, accurate linkage tests can be performed. Using an example, we show that some previously suggested approaches to handling less than perfectly informative data can be unacceptably conservative. Situations in which this model may not perform well are discussed, and an alternative model that requires additional computations is suggested.

  3. Allele-sharing models: LOD scores and accurate linkage tests.

    PubMed Central

    Kong, A; Cox, N J

    1997-01-01

    Starting with a test statistic for linkage analysis based on allele sharing, we propose an associated one-parameter model. Under general missing-data patterns, this model allows exact calculation of likelihood ratios and LOD scores and has been implemented by a simple modification of existing software. Most important, accurate linkage tests can be performed. Using an example, we show that some previously suggested approaches to handling less than perfectly informative data can be unacceptably conservative. Situations in which this model may not perform well are discussed, and an alternative model that requires additional computations is suggested. PMID:9345087

  4. Fast Image Texture Classification Using Decision Trees

    NASA Technical Reports Server (NTRS)

    Thompson, David R.

    2011-01-01

    Texture analysis would permit improved autonomous, onboard science data interpretation for adaptive navigation, sampling, and downlink decisions. These analyses would assist with terrain analysis and instrument placement in both macroscopic and microscopic image data products. Unfortunately, most state-of-the-art texture analysis demands computationally expensive convolutions of filters involving many floating-point operations. This makes them infeasible for radiation- hardened computers and spaceflight hardware. A new method approximates traditional texture classification of each image pixel with a fast decision-tree classifier. The classifier uses image features derived from simple filtering operations involving integer arithmetic. The texture analysis method is therefore amenable to implementation on FPGA (field-programmable gate array) hardware. Image features based on the "integral image" transform produce descriptive and efficient texture descriptors. Training the decision tree on a set of training data yields a classification scheme that produces reasonable approximations of optimal "texton" analysis at a fraction of the computational cost. A decision-tree learning algorithm employing the traditional k-means criterion of inter-cluster variance is used to learn tree structure from training data. The result is an efficient and accurate summary of surface morphology in images. This work is an evolutionary advance that unites several previous algorithms (k-means clustering, integral images, decision trees) and applies them to a new problem domain (morphology analysis for autonomous science during remote exploration). Advantages include order-of-magnitude improvements in runtime, feasibility for FPGA hardware, and significant improvements in texture classification accuracy.
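
    As a rough illustration of the approach described above, the sketch below builds per-pixel features from box sums of an integral image (window mean and variance, both obtainable with integer-friendly arithmetic) and classifies them with a shallow decision tree. The toy image, window size, feature choice and labels are assumptions made for the example, not the flight implementation.

        # Per-pixel texture features from integral images, classified by a
        # shallow decision tree (all data and parameters are illustrative).
        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        def integral(img):
            # cumulative sums so any rectangular sum costs four lookups
            return img.cumsum(axis=0).cumsum(axis=1)

        def box_sum(ii, r0, c0, r1, c1):
            # sum of img[r0:r1, c0:c1] recovered from the integral image ii
            s = ii[r1 - 1, c1 - 1]
            if r0 > 0:
                s -= ii[r0 - 1, c1 - 1]
            if c0 > 0:
                s -= ii[r1 - 1, c0 - 1]
            if r0 > 0 and c0 > 0:
                s += ii[r0 - 1, c0 - 1]
            return s

        def features(ii, ii2, r, c, w=7):
            # window mean and variance from integral images of img and img**2
            h = w // 2
            r0, c0, r1, c1 = r - h, c - h, r + h + 1, c + h + 1
            n = (r1 - r0) * (c1 - c0)
            mean = box_sum(ii, r0, c0, r1, c1) / n
            var = box_sum(ii2, r0, c0, r1, c1) / n - mean ** 2
            return [mean, var]

        rng = np.random.default_rng(0)
        img = np.vstack([np.full((32, 64), 80.0),                        # smooth texture
                         rng.integers(20, 140, (32, 64)).astype(float)]) # noisy texture
        ii, ii2 = integral(img), integral(img ** 2)
        coords = [(r, c) for r in range(8, 56, 4) for c in range(8, 56, 4)]
        X = [features(ii, ii2, r, c) for r, c in coords]
        y = [0 if r < 32 else 1 for r, _ in coords]
        clf = DecisionTreeClassifier(max_depth=2).fit(X, y)
        print(clf.predict([features(ii, ii2, 10, 30), features(ii, ii2, 50, 30)]))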

  5. Antifouling booster biocide extraction from marine sediments: a fast and simple method based on vortex-assisted matrix solid-phase extraction.

    PubMed

    Caldas, Sergiane Souza; Soares, Bruno Meira; Abreu, Fiamma; Castro, Ítalo Braga; Fillmann, Gilberto; Primel, Ednei Gilberto

    2018-03-01

    This paper reports the development of an analytical method employing vortex-assisted matrix solid-phase dispersion (MSPD) for the extraction of diuron, Irgarol 1051, TCMTB (2-thiocyanomethylthiobenzothiazole), DCOIT (4,5-dichloro-2-n-octyl-3-(2H)-isothiazolin-3-one), and dichlofluanid from sediment samples. Separation and determination were performed by liquid chromatography-tandem mass spectrometry. Important MSPD parameters, such as sample mass, mass of C18, and type and volume of extraction solvent, were investigated by response surface methodology. Quantitative recoveries were obtained with 2.0 g of sediment sample, 0.25 g of C18 as the solid support, and 10 mL of methanol as the extraction solvent. The MSPD method was suitable for the extraction and determination of antifouling biocides in sediment samples, with recoveries between 61 and 103% and a relative standard deviation lower than 19%. Limits of quantification between 0.5 and 5 ng g-1 were obtained. Vortex-assisted MSPD was shown to be fast and easy to use, with the advantages of low cost and reduced solvent consumption compared to the commonly employed techniques for the extraction of booster biocides from sediment samples. Finally, the developed method was applied to real samples. Results revealed that the developed extraction method is effective and simple, thus allowing the determination of biocides in sediment samples.

  6. Development and validation of a novel, simple, and accurate spectrophotometric method for the determination of lead in human serum.

    PubMed

    Shayesteh, Tavakol Heidari; Khajavi, Farzad; Khosroshahi, Abolfazl Ghafuri; Mahjub, Reza

    2016-01-01

    The determination of blood lead levels is the most useful indicator of the amount of lead that is absorbed by the human body. Various methods, like atomic absorption spectroscopy (AAS), have already been used for the detection of lead in biological fluids, but most of these methods are based on complicated, expensive, and highly specialized instruments. In this study, a simple and accurate spectroscopic method for the determination of lead has been developed and applied for the investigation of lead concentration in biological samples. A silica gel column was used to extract lead and eliminate interfering agents in human serum samples. The column was washed with deionized water. The pH was adjusted to the value of 8.2 using phosphate buffer, and then tartrate and cyanide solutions were added as masking agents. The lead content was extracted into the organic phase containing dithizone as a complexing reagent, and the dithizone-Pb(II) complex that formed was determined by visible spectrophotometry at 538 nm. The recovery was found to be 84.6%. In order to validate the method, a calibration curve involving various concentration levels was calculated and proven to be linear in the range of 0.01-1.5 μg/ml, with an R2 regression coefficient of 0.9968 by statistical analysis of linear model validation. The largest error % values were found to be -5.80 and +11.6% for intra-day and inter-day measurements, respectively. The largest RSD % values were calculated to be 6.54 and 12.32% for intra-day and inter-day measurements, respectively. Further, the limit of detection (LOD) was calculated to be 0.002 μg/ml. The developed method was applied to determine the lead content in the human serum of volunteer miners, and it has been shown that there is no statistically significant difference between the data provided by this novel method and the data obtained from AAS.
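
    As an illustration only, a calibration workflow of the kind described above (linear fit, regression coefficient, and intra-day precision expressed as RSD) could be scripted as in the sketch below; the concentrations, absorbance values and replicates are invented, not the study's data.

        # Toy calibration curve, r^2 and intra-day RSD for a spectrophotometric
        # assay; every number here is a placeholder.
        import numpy as np

        conc = np.array([0.01, 0.1, 0.5, 1.0, 1.5])           # ug/mL lead standards
        absorb = np.array([0.012, 0.095, 0.48, 0.97, 1.46])   # absorbance at 538 nm

        slope, intercept = np.polyfit(conc, absorb, 1)
        pred = slope * conc + intercept
        r2 = 1 - np.sum((absorb - pred) ** 2) / np.sum((absorb - absorb.mean()) ** 2)

        # intra-day precision of one standard, expressed as RSD (%)
        replicates = np.array([0.475, 0.481, 0.470, 0.478])
        rsd = 100 * replicates.std(ddof=1) / replicates.mean()

        print(f"calibration: A = {slope:.3f}*C + {intercept:.4f}, r^2 = {r2:.4f}")
        print(f"intra-day RSD = {rsd:.2f} %")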

  7. Indexed variation graphs for efficient and accurate resistome profiling.

    PubMed

    Rowe, Will P M; Winn, Martyn D

    2018-05-14

    Antimicrobial resistance remains a major threat to global health. Profiling the collective antimicrobial resistance genes within a metagenome (the "resistome") facilitates greater understanding of antimicrobial resistance gene diversity and dynamics. In turn, this can allow for gene surveillance, individualised treatment of bacterial infections and more sustainable use of antimicrobials. However, resistome profiling can be complicated by high similarity between reference genes, as well as the sheer volume of sequencing data and the complexity of analysis workflows. We have developed an efficient and accurate method for resistome profiling that addresses these complications and improves upon currently available tools. Our method combines a variation graph representation of gene sets with an LSH Forest indexing scheme to allow for fast classification of metagenomic sequence reads using similarity-search queries. Subsequent hierarchical local alignment of classified reads against graph traversals enables accurate reconstruction of full-length gene sequences using a scoring scheme. We provide our implementation, GROOT, and show it to be both faster and more accurate than a current reference-dependent tool for resistome profiling. GROOT runs on a laptop and can process a typical 2 gigabyte metagenome in 2 minutes using a single CPU. Our method is not restricted to resistome profiling and has the potential to improve current metagenomic workflows. GROOT is written in Go and is available at https://github.com/will-rowe/groot (MIT license). will.rowe@stfc.ac.uk. Supplementary data are available at Bioinformatics online.
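
    GROOT itself combines a variation graph with an LSH Forest index; as a loose, self-contained illustration of the similarity-search step only, the sketch below compares MinHash signatures of a read's k-mers against candidate reference genes. The k-mer size, number of hash functions and sequences are placeholders, and this is not GROOT's data structure.

        # Toy MinHash similarity between one read and candidate reference genes.
        import hashlib

        def kmers(seq, k=11):
            return {seq[i:i + k] for i in range(len(seq) - k + 1)}

        def minhash_signature(kmer_set, num_hashes=64):
            # one "hash function" per seed; keep the minimum hash per seed
            sig = []
            for seed in range(num_hashes):
                sig.append(min(int(hashlib.md5(f"{seed}:{km}".encode()).hexdigest(), 16)
                               for km in kmer_set))
            return sig

        def signature_similarity(a, b):
            # fraction of matching minima approximates Jaccard similarity
            return sum(x == y for x, y in zip(a, b)) / len(a)

        refs = {"geneA": "ATGGCTAAAGGTCTTGCTGAAGCTGGTAAAGTTCTTGATAAA",
                "geneB": "ATGAGTCGTACCGGATTAGCCGAAGATCTTTGCAAACCGGTT"}
        read = "GGTCTTGCTGAAGCTGGTAAAGTTCTT"

        read_sig = minhash_signature(kmers(read))
        for name, seq in refs.items():
            sim = signature_similarity(read_sig, minhash_signature(kmers(seq)))
            print(name, round(sim, 2))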

  8. A simple and fast method for computing the relativistic Compton Scattering Kernel for radiative transfer

    NASA Technical Reports Server (NTRS)

    Kershaw, David S.; Prasad, Manoj K.; Beason, J. Douglas

    1986-01-01

    The Klein-Nishina differential cross section averaged over a relativistic Maxwellian electron distribution is analytically reduced to a single integral, which can then be rapidly evaluated in a variety of ways. A particularly fast method for numerically computing this single integral is presented. This is, to the authors' knowledge, the first correct computation of the Compton scattering kernel.
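
    The point of the reduction above is that the thermally averaged kernel becomes a single one-dimensional integral that is cheap to evaluate. The sketch below shows such an average over a relativistic Maxwell-Juettner electron distribution, with a placeholder kernel standing in for the actual reduced Klein-Nishina expression derived in the paper.

        # One-dimensional thermal average over a relativistic Maxwellian
        # (Maxwell-Juettner) distribution; the kernel is a placeholder.
        import numpy as np
        from scipy.integrate import quad
        from scipy.special import kn

        def maxwell_juettner(gamma, theta):
            # normalized distribution in Lorentz factor; theta = kT / (m_e c^2)
            beta = np.sqrt(1.0 - 1.0 / gamma**2)
            return gamma**2 * beta * np.exp(-gamma / theta) / (theta * kn(2, 1.0 / theta))

        def kernel(gamma):
            # placeholder for the electron-energy-dependent scattering kernel
            return 1.0 / gamma

        def averaged_kernel(theta):
            integrand = lambda g: maxwell_juettner(g, theta) * kernel(g)
            value, _ = quad(integrand, 1.0, 1.0 + 60.0 * theta)
            return value

        print(averaged_kernel(theta=0.1))   # kT about 51 keV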

  9. Extremely simple holographic projection of color images

    NASA Astrophysics Data System (ADS)

    Makowski, Michal; Ducin, Izabela; Kakarenko, Karol; Suszek, Jaroslaw; Kolodziejczyk, Andrzej; Sypek, Maciej

    2012-03-01

    A very simple scheme of holographic projection is presented with some experimental results showing good quality image projection without any imaging lens. This technique can be regarded as an alternative to classic projection methods. It is based on the reconstruction of real images from three phase-iterated Fourier holograms. The illumination is performed with three laser beams of primary colors. A divergent wavefront geometry is used to achieve an increased throw angle of the projection, compared to plane wave illumination. Optical fibers are used as light guides in order to keep the setup as simple as possible and to provide point-like sources of high-quality divergent wavefronts at optimized positions relative to the light modulator. Absorbing spectral filters are implemented to multiplex three holograms on a single phase-only spatial light modulator. Hence color mixing occurs without any time-division methods, which cause rainbow effects and color flicker. The zero diffraction order with divergent illumination is practically invisible and the speckle field is effectively suppressed with phase optimization and time averaging techniques. The main advantages of the proposed concept are: a very simple and highly miniaturizable configuration; the lack of a lens; a single LCoS (Liquid Crystal on Silicon) modulator; a strong resistance to imperfections and obstructions of the spatial light modulator such as dead pixels, dust, mud, fingerprints etc.; and simple calculations based on the Fast Fourier Transform (FFT), easily processed in real time on a GPU (graphics processing unit).

  10. A fast and simple spectrofluorometric method for the determination of alendronate sodium in pharmaceuticals

    PubMed Central

    Ezzati Nazhad Dolatabadi, Jafar; Hamishehkar, Hamed; de la Guardia, Miguel; Valizadeh, Hadi

    2014-01-01

    Introduction: Alendronate sodium enhances bone formation, increases osteoblast proliferation and maturation, and leads to the inhibition of osteoblast apoptosis. Therefore, a rapid and simple spectrofluorometric method has been developed and validated for its quantitative determination. Methods: The procedure is based on the reaction of the primary amino group of alendronate with o-phthalaldehyde (OPA) in sodium hydroxide solution. Results: The calibration graph was linear over the concentration range of 0.0-2.4 μM, and the limit of detection and limit of quantification of the method were 8.89 and 29 nM, respectively. The enthalpy and entropy of the reaction between alendronate sodium and OPA showed that the reaction is endothermic and entropy-favored (ΔH = 154.08 kJ/mol; ΔS = 567.36 J/mol K), which indicates that the OPA-alendronate interaction increases at elevated temperature. Conclusion: This simple method can be used as a practical technique for the analysis of alendronate in various samples. PMID:24790897

  11. A Highly Accurate Face Recognition System Using Filtering Correlation

    NASA Astrophysics Data System (ADS)

    Watanabe, Eriko; Ishikawa, Sayuri; Kodate, Kashiko

    2007-09-01

    The authors previously constructed a highly accurate fast face recognition optical correlator (FARCO) [E. Watanabe and K. Kodate: Opt. Rev. 12 (2005) 460], and subsequently developed an improved, super high-speed FARCO (S-FARCO), which is able to process several hundred thousand frames per second. The principal advantage of our new system is its wide applicability to any correlation scheme. Three different configurations were proposed, each depending on correlation speed. This paper describes and evaluates a software correlation filter. The face recognition function proved highly accurate even with a low-resolution facial image size (64 × 64 pixels). An operation speed of less than 10 ms was achieved using a personal computer with a central processing unit (CPU) of 3 GHz and 2 GB memory. When we applied the software correlation filter to a high-security cellular phone face recognition system, experiments on 30 female students over a period of three months yielded low error rates: 0% false acceptance rate and 2% false rejection rate. Therefore, the filtering correlation works effectively when applied to low resolution images such as web-based images or faces captured by a monitoring camera.

  12. Simple and Fast Method for Fabrication of Endoscopic Implantable Sensor Arrays

    PubMed Central

    Tahirbegi, I. Bogachan; Alvira, Margarita; Mir, Mònica; Samitier, Josep

    2014-01-01

    Here we have developed a simple method for the fabrication of disposable implantable all-solid-state ion-selective electrodes (ISE) in an array format without using complex fabrication equipment or clean room facilities. The electrodes were designed in a needle shape instead of planar electrodes for a full contact with the tissue. The needle-shape platform comprises 12 metallic pins which were functionalized with conductive inks and ISE membranes. The modified microelectrodes were characterized with cyclic voltammetry, scanning electron microscope (SEM), and optical interferometry. The surface area and roughness factor of each microelectrode were determined and reproducible values were obtained for all the microelectrodes on the array. In this work, the microelectrodes were modified with membranes for the detection of pH and nitrate ions to prove the reliability of the fabricated sensor array platform adapted to an endoscope. PMID:24971473

  13. Simple and fast method for fabrication of endoscopic implantable sensor arrays.

    PubMed

    Tahirbegi, I Bogachan; Alvira, Margarita; Mir, Mònica; Samitier, Josep

    2014-06-26

    Here we have developed a simple method for the fabrication of disposable implantable all-solid-state ion-selective electrodes (ISE) in an array format without using complex fabrication equipment or clean room facilities. The electrodes were designed in a needle shape instead of planar electrodes for a full contact with the tissue. The needle-shape platform comprises 12 metallic pins which were functionalized with conductive inks and ISE membranes. The modified microelectrodes were characterized with cyclic voltammetry, scanning electron microscope (SEM), and optical interferometry. The surface area and roughness factor of each microelectrode were determined and reproducible values were obtained for all the microelectrodes on the array. In this work, the microelectrodes were modified with membranes for the detection of pH and nitrate ions to prove the reliability of the fabricated sensor array platform adapted to an endoscope.

  14. Large-scale extraction of accurate drug-disease treatment pairs from biomedical literature for drug repurposing

    PubMed Central

    2013-01-01

    Background A large-scale, highly accurate, machine-understandable drug-disease treatment relationship knowledge base is important for computational approaches to drug repurposing. The large body of published biomedical research articles and clinical case reports available on MEDLINE is a rich source of FDA-approved drug-disease indication as well as drug-repurposing knowledge that is crucial for applying FDA-approved drugs for new diseases. However, much of this information is buried in free text and not captured in any existing databases. The goal of this study is to extract a large number of accurate drug-disease treatment pairs from published literature. Results In this study, we developed a simple but highly accurate pattern-learning approach to extract treatment-specific drug-disease pairs from 20 million biomedical abstracts available on MEDLINE. We extracted a total of 34,305 unique drug-disease treatment pairs, the majority of which are not included in existing structured databases. Our algorithm achieved a precision of 0.904 and a recall of 0.131 in extracting all pairs, and a precision of 0.904 and a recall of 0.842 in extracting frequent pairs. In addition, we have shown that the extracted pairs strongly correlate with both drug target genes and therapeutic classes, therefore may have high potential in drug discovery. Conclusions We demonstrated that our simple pattern-learning relationship extraction algorithm is able to accurately extract many drug-disease pairs from the free text of biomedical literature that are not captured in structured databases. The large-scale, accurate, machine-understandable drug-disease treatment knowledge base that is resultant of our study, in combination with pairs from structured databases, will have high potential in computational drug repurposing tasks. PMID:23742147
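
    As a minimal sketch of how extracted pairs can be scored against a curated gold standard to obtain precision and recall figures like those quoted above, consider the toy example below; both pair sets are invented.

        # Toy precision/recall scoring of extracted drug-disease pairs.
        extracted = {("metformin", "type 2 diabetes"),
                     ("aspirin", "myocardial infarction"),
                     ("aspirin", "headache"),
                     ("drugX", "diseaseY")}           # one false positive
        gold = {("metformin", "type 2 diabetes"),
                ("aspirin", "myocardial infarction"),
                ("aspirin", "headache"),
                ("warfarin", "atrial fibrillation")}  # one pair the extractor missed

        tp = len(extracted & gold)
        precision = tp / len(extracted)
        recall = tp / len(gold)
        print(f"precision = {precision:.3f}, recall = {recall:.3f}")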

  15. A fast algorithm for identifying friends-of-friends halos

    NASA Astrophysics Data System (ADS)

    Feng, Y.; Modi, C.

    2017-07-01

    We describe a simple and fast algorithm for identifying friends-of-friends features and prove its correctness. The algorithm avoids unnecessary expensive neighbor queries, uses minimal memory overhead, and avoids slowdowns in high over-density regions. We define our algorithm formally based on pair enumeration, a problem that has been heavily studied in fast 2-point correlation codes, and our reference implementation employs a dual KD-tree correlation function code. We construct features in a hierarchical tree structure, and use a splay operation to reduce the average cost of identifying the root of a feature from O[log L] to O[1] (L is the size of a feature) without additional memory costs. This reduces the overall time complexity of merging trees from O[L log L] to O[L], reducing the number of operations per splay by orders of magnitude. We next introduce a pruning operation that skips merge operations between two fully self-connected KD-tree nodes. This improves the robustness of the algorithm, reducing the number of merge operations in high-density peaks from O[δ²] to O[δ]. We show that for a cosmological data set the algorithm eliminates more than half of the merge operations for typically used linking lengths b ∼ 0.2 (relative to the mean separation). Furthermore, our algorithm is extremely simple and easy to implement on top of an existing pair enumeration code, reusing the optimization effort that has been invested in fast correlation function codes.
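
    The splay-based root identification described above plays the same role that path compression plays in a classic union-find structure. A minimal friends-of-friends grouping built on union-find, with brute-force pair enumeration standing in for the paper's dual KD-tree enumeration, might look like this:

        # Minimal friends-of-friends grouping with union-find (path compression
        # plus union by size); O(N^2) pair enumeration is used for clarity.
        import numpy as np

        def friends_of_friends(points, linking_length):
            n = len(points)
            parent = list(range(n))
            size = [1] * n

            def find(i):
                # path compression keeps root lookups cheap, like the splay step
                while parent[i] != i:
                    parent[i] = parent[parent[i]]
                    i = parent[i]
                return i

            def union(i, j):
                ri, rj = find(i), find(j)
                if ri == rj:
                    return
                if size[ri] < size[rj]:
                    ri, rj = rj, ri
                parent[rj] = ri
                size[ri] += size[rj]

            for i in range(n):
                for j in range(i + 1, n):
                    if np.linalg.norm(points[i] - points[j]) < linking_length:
                        union(i, j)
            return [find(i) for i in range(n)]

        pts = np.array([[0.0, 0.0], [0.1, 0.0], [0.15, 0.05], [5.0, 5.0], [5.05, 5.1]])
        print(friends_of_friends(pts, linking_length=0.2))   # two groups expected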

  16. Determining accurate distances to nearby galaxies

    NASA Astrophysics Data System (ADS)

    Bonanos, Alceste Zoe

    2005-11-01

    Determining accurate distances to nearby or distant galaxies is a conceptually simple yet practically complicated task. Presently, distances to nearby galaxies are only known to an accuracy of 10-15%. The current anchor galaxy of the extragalactic distance scale is the Large Magellanic Cloud, which has large (10-15%) systematic uncertainties associated with it, because of its morphology, its non-uniform reddening and the unknown metallicity dependence of the Cepheid period-luminosity relation. This work aims to determine accurate distances to some nearby galaxies, and subsequently help reduce the error in the extragalactic distance scale and the Hubble constant H0. In particular, this work presents the first distance determination of the DIRECT Project to M33 with detached eclipsing binaries. DIRECT aims to obtain a new anchor galaxy for the extragalactic distance scale by measuring direct, accurate (to 5%) distances to two Local Group galaxies, M31 and M33, with detached eclipsing binaries. It involves a massive variability survey of these galaxies and subsequent photometric and spectroscopic follow-up of the detached binaries discovered. In this work, I also present a catalog of variable stars discovered in one of the DIRECT fields, M31Y, which includes 41 eclipsing binaries. Additionally, we derive the distance to the Draco Dwarf Spheroidal galaxy, with ~100 RR Lyrae found in our first CCD variability study of this galaxy. A "hybrid" method of discovering Cepheids with ground-based telescopes is described next. It involves applying the image subtraction technique on the images obtained from ground-based telescopes and then following them up with the Hubble Space Telescope to derive Cepheid period-luminosity distances. By re-analyzing ESO Very Large Telescope data on M83 (NGC 5236), we demonstrate that this method is much more powerful for detecting variability, especially in crowded fields. I finally present photometry for the Wolf-Rayet binary WR 20a.

  17. Fast simulation tool for ultraviolet radiation at the earth's surface

    NASA Astrophysics Data System (ADS)

    Engelsen, Ola; Kylling, Arve

    2005-04-01

    FastRT is a fast, yet accurate, UV simulation tool that computes downward surface UV doses, UV indices, and irradiances in the spectral range 290 to 400 nm with a resolution as small as 0.05 nm. It computes a full UV spectrum within a few milliseconds on a standard PC, and enables the user to convolve the spectrum with user-defined and built-in spectral response functions including the International Commission on Illumination (CIE) erythemal response function used for UV index calculations. The program accounts for the main radiative input parameters, i.e., instrumental characteristics, solar zenith angle, ozone column, aerosol loading, clouds, surface albedo, and surface altitude. FastRT is based on look-up tables of carefully selected entries of atmospheric transmittances and spherical albedos, and exploits the smoothness of these quantities with respect to atmospheric, surface, geometrical, and spectral parameters. An interactive site, http://nadir.nilu.no/~olaeng/fastrt/fastrt.html, enables the public to run the FastRT program with most input options. This page also contains updated information about FastRT and links to freely downloadable source codes and binaries.

  18. Accurate Simulation of MPPT Methods Performance When Applied to Commercial Photovoltaic Panels

    PubMed Central

    2015-01-01

    A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturers' datasheet, to perform MPPT simulations, is described. The method takes into account variations in the ambient conditions (solar irradiation and solar cell temperature) and allows fast comparison of MPPT methods or prediction of their performance when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day, and under realistic ambient conditions. PMID:25874262

  19. Accurate simulation of MPPT methods performance when applied to commercial photovoltaic panels.

    PubMed

    Cubas, Javier; Pindado, Santiago; Sanz-Andrés, Ángel

    2015-01-01

    A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturers' datasheet, to perform MPPT simulations, is described. The method takes into account variations in the ambient conditions (solar irradiation and solar cell temperature) and allows fast comparison of MPPT methods or prediction of their performance when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day, and under realistic ambient conditions.
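
    The records above describe building a panel model from manufacturer datasheet values for MPPT simulation. As a generic sketch, not the authors' parameter-extraction procedure, the single-diode I-V model below is evaluated with a simple fixed-point iteration; every parameter value is a placeholder.

        # Generic single-diode model: I = Iph - I0*(exp((V+I*Rs)/Vt) - 1) - (V+I*Rs)/Rsh
        import numpy as np

        def iv_current(v, iph=8.2, i0=7.5e-8, rs=0.2, rsh=300.0, n=1.3, cells=60, t=298.15):
            vt = cells * n * 1.380649e-23 * t / 1.602176634e-19   # thermal voltage of the string
            i = iph                                               # initial guess
            for _ in range(200):                                  # fixed-point iteration
                i = iph - i0 * (np.exp((v + i * rs) / vt) - 1.0) - (v + i * rs) / rsh
            return i

        voltages = np.linspace(0.0, 36.0, 181)
        currents = np.array([iv_current(v) for v in voltages])
        powers = voltages * currents
        vmpp = voltages[np.argmax(powers)]
        print(f"estimated MPP: V = {vmpp:.1f} V, P = {powers.max():.1f} W")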

  20. Time-Accurate Solutions of Incompressible Navier-Stokes Equations for Potential Turbopump Applications

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Kwak, Dochan

    2001-01-01

    Two numerical procedures, one based on the artificial compressibility method and the other on the pressure projection method, are outlined for obtaining time-accurate solutions of the incompressible Navier-Stokes equations. The performance of the two methods is compared by obtaining unsteady solutions for the evolution of twin vortices behind a flat plate. Calculated results are compared with experimental and other numerical results. For an unsteady flow which requires a small physical time step, the pressure projection method was found to be computationally efficient since it does not require any subiteration procedure. It was observed that the artificial compressibility method requires a fast convergence scheme at each physical time step in order to satisfy the incompressibility condition. This was obtained by using a GMRES-ILU(0) solver in our computations. When a line-relaxation scheme was used, the time accuracy was degraded and time-accurate computations became very expensive.

  1. Fast Euler solver for transonic airfoils. I - Theory. II - Applications

    NASA Technical Reports Server (NTRS)

    Dadone, Andrea; Moretti, Gino

    1988-01-01

    Equations written in terms of generalized Riemann variables are presently integrated by inverting six bidiagonal matrices and two tridiagonal matrices, using an implicit Euler solver that is based on the lambda-formulation. The solution is found on a C-grid whose boundaries are very close to the airfoil. The fast solver is then applied to the computation of several flowfields on a NACA 0012 airfoil at various Mach number and alpha values, yielding results that are primarily concerned with transonic flows. The effects of grid fineness and boundary distances are analyzed; the code is found to be robust and accurate, as well as fast.

  2. A fast calibration method for 3-D tracking of ultrasound images using a spatial localizer.

    PubMed

    Pagoulatos, N; Haynor, D R; Kim, Y

    2001-09-01

    We have developed a fast calibration method for computing the position and orientation of 2-D ultrasound (US) images in 3-D space where a position sensor is mounted on the US probe. This calibration is required in the fields of 3-D ultrasound and registration of ultrasound with other imaging modalities. Most of the existing calibration methods require a complex and tedious experimental procedure. Our method is simple and it is based on a custom-built phantom. Thirty N-fiducials (markers in the shape of the letter "N") embedded in the phantom provide the basis for our calibration procedure. We calibrated a 3.5-MHz sector phased-array probe with a magnetic position sensor, and we studied the accuracy and precision of our method. A typical calibration procedure requires approximately 2 min. We conclude that we can achieve accurate and precise calibration using a single US image, provided that a large number (approximately ten) of N-fiducials are captured within the US image, enabling a representative sampling of the imaging plane.

  3. Accurate, consistent, and fast droplet splitting and dispensing in electrowetting on dielectric digital microfluidics

    NASA Astrophysics Data System (ADS)

    Nikapitiya, N. Y. Jagath B.; Nahar, Mun Mun; Moon, Hyejin

    2017-12-01

    This letter reports two novel electrode design considerations to satisfy two very important aspects of EWOD operation: (1) highly consistent volume of generated droplets and (2) highly improved accuracy of the generated droplet volume. Considering the design principles investigated, two novel designs were proposed: an L-junction electrode design to offer high-throughput droplet generation and a Y-junction electrode design to split a droplet very quickly while maintaining equal volumes of the two parts. Devices with the novel designs were fabricated and tested, and the results are compared with those of the conventional approach. It is demonstrated that the inaccuracy and inconsistency of the droplet volume dispensed in the device with the novel electrode designs are as low as 0.17 and 0.10%, respectively, while those of the conventional approach are 25 and 0.76%, respectively. The dispensing frequency is enhanced from 4 to 9 Hz by using the novel design.

  4. Stroke echoscan protocol: a fast and accurate pathway to diagnose embolic strokes.

    PubMed

    Pagola, Jorge; González-Alujas, Teresa; Muchada, Marian; Teixidó, Gisela; Flores, Alan; De Blauwe, Sophie; Seró, Laia; Luna, David Rodríguez; Rubiera, Marta; Ribó, Marc; Boned, Sandra; Álvarez-Sabin, José; Evangelista, Arturo; Molina, Carlos A

    2015-01-01

    Cardiac Echoscan is the simplified transthoracic echocardiogram focused on the main source of emboli detection in the acute stroke diagnosis (Stroke Echoscan). We describe the clinical impact related to the Stroke Echoscan protocol in our Center. Acute stroke patients who underwent the Stroke Echoscan by a trained stroke neurologist were included (Echoscan group). All examinations were reviewed by cardiologists. The main embolic stroke etiologies were: ventricular akinesia (VA), severe aortic atheroma (AA) plaque and cardiac shunt (SHUNT). The rate of the embolic stroke etiologies and the median length of stay (LOS) were compared with a cohort of patients studied by cardiologist (Echo group). Eighty acute stroke patients were included. The sensitivity (S) and specificity (E) were: VA (S 98.6%, E 66.7%, k = .7), AA (S 93.3%, E 96.9%, k = .88) and SHUNT (S 100%, E 100%, k = 1), respectively. The rate of AA diagnosis was significantly higher in Echoscan group (18.8% vs. 8.9%; P = .05). Echoscan protocol significantly reduced the LOS: 6 days (IQR 3-10) versus Echo group 9 days (IQR 6-13; P < .001). The Echoscan protocol was an accurate quick test, which reduced the length of stay and increased the percentage of severe AA plaque diagnosis. Copyright © 2014 by the American Society of Neuroimaging.

  5. Inexpensive and fast pathogenic bacteria screening using field-effect transistors.

    PubMed

    Formisano, Nello; Bhalla, Nikhil; Heeran, Mel; Reyes Martinez, Juana; Sarkar, Amrita; Laabei, Maisem; Jolly, Pawan; Bowen, Chris R; Taylor, John T; Flitsch, Sabine; Estrela, Pedro

    2016-11-15

    While pathogenic bacteria contribute to a large number of globally important diseases and infections, current clinical diagnosis is based on processes that often involve culturing which can be time-consuming. Therefore, innovative, simple, rapid and low-cost solutions to effectively reduce the burden of bacterial infections are urgently needed. Here we demonstrate a label-free sensor for fast bacterial detection based on metal-oxide-semiconductor field-effect transistors (MOSFETs). The electric charge of bacteria binding to the glycosylated gates of a MOSFET enables quantification in a straightforward manner. We show that the limit of quantitation is 1.9×10(5) CFU/mL with this simple device, which is more than 10,000-times lower than is achieved with electrochemical impedance spectroscopy (EIS) and matrix-assisted laser desorption ionisation time-of-flight mass spectrometry (MALDI-ToF) on the same modified surfaces. Moreover, the measurements are extremely fast and the sensor can be mass produced at trivial cost as a tool for initial screening of pathogens. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. Fast, accurate and easy-to-pipeline methods for amplicon sequence processing

    NASA Astrophysics Data System (ADS)

    Antonielli, Livio; Sessitsch, Angela

    2016-04-01

    Next generation sequencing (NGS) technologies have been established for years as an essential resource in microbiology. While on the one hand metagenomic studies can benefit from the continuously increasing throughput of the Illumina (Solexa) technology, on the other hand the spread of third generation sequencing technologies (PacBio, Oxford Nanopore) is taking whole genome sequencing beyond the assembly of fragmented draft genomes, making it now possible to finish bacterial genomes even without short read correction. Besides (meta)genomic analysis, next-gen amplicon sequencing is still fundamental for microbial studies. Amplicon sequencing of the 16S rRNA gene and ITS (Internal Transcribed Spacer) remains a well-established, widespread method for a multitude of different purposes concerning the identification and comparison of archaeal/bacterial (16S rRNA gene) and fungal (ITS) communities occurring in diverse environments. Numerous different pipelines have been developed to process NGS-derived amplicon sequences, among which Mothur, QIIME and USEARCH are the most well-known and cited ones. The entire process, from initial raw sequence data through read error correction, paired-end read assembly, primer stripping, quality filtering, clustering, OTU taxonomic classification and BIOM table rarefaction as well as alternative "normalization" methods, will be addressed. An effective and accurate strategy will be presented using state-of-the-art bioinformatic tools, and the example of a straightforward one-script pipeline for 16S rRNA gene or ITS MiSeq amplicon sequencing will be provided. Finally, instructions on how to automatically retrieve nucleotide sequences from NCBI and therefore apply the pipeline to targets other than 16S rRNA gene (Greengenes, SILVA) and ITS (UNITE) will be discussed.

  7. Genometa--a fast and accurate classifier for short metagenomic shotgun reads.

    PubMed

    Davenport, Colin F; Neugebauer, Jens; Beckmann, Nils; Friedrich, Benedikt; Kameri, Burim; Kokott, Svea; Paetow, Malte; Siekmann, Björn; Wieding-Drewes, Matthias; Wienhöfer, Markus; Wolf, Stefan; Tümmler, Burkhard; Ahlers, Volker; Sprengel, Frauke

    2012-01-01

    Metagenomic studies use high-throughput sequence data to investigate microbial communities in situ. However, considerable challenges remain in the analysis of these data, particularly with regard to speed and reliable analysis of microbial species as opposed to higher level taxa such as phyla. We here present Genometa, a computationally undemanding graphical user interface program that enables identification of bacterial species and gene content from datasets generated by inexpensive high-throughput short read sequencing technologies. Our approach was first verified on two simulated metagenomic short read datasets, detecting 100% and 94% of the bacterial species included with few false positives or false negatives. Subsequent comparative benchmarking analysis against three popular metagenomic algorithms on an Illumina human gut dataset revealed Genometa to attribute the most reads to bacteria at species level (i.e. including all strains of that species) and demonstrate similar or better accuracy than the other programs. Lastly, speed was demonstrated to be many times that of BLAST due to the use of modern short read aligners. Our method is highly accurate if bacteria in the sample are represented by genomes in the reference sequence but cannot find species absent from the reference. This method is one of the most user-friendly and resource efficient approaches and is thus feasible for rapidly analysing millions of short reads on a personal computer. The Genometa program, a step by step tutorial and Java source code are freely available from http://genomics1.mh-hannover.de/genometa/ and on http://code.google.com/p/genometa/. This program has been tested on Ubuntu Linux and Windows XP/7.

  8. A new fast direct solver for the boundary element method

    NASA Astrophysics Data System (ADS)

    Huang, S.; Liu, Y. J.

    2017-09-01

    A new fast direct linear equation solver for the boundary element method (BEM) is presented in this paper. The idea of the new fast direct solver stems from the concept of the hierarchical off-diagonal low-rank matrix. The hierarchical off-diagonal low-rank matrix can be decomposed into the multiplication of several diagonal block matrices. The inverse of the hierarchical off-diagonal low-rank matrix can be calculated efficiently with the Sherman-Morrison-Woodbury formula. In this paper, a more general and efficient approach to approximate the coefficient matrix of the BEM with the hierarchical off-diagonal low-rank matrix is proposed. Compared to the current fast direct solver based on the hierarchical off-diagonal low-rank matrix, the proposed method is suitable for solving general 3-D boundary element models. Several numerical examples of 3-D potential problems with the total number of unknowns up to above 200,000 are presented. The results show that the new fast direct solver can be applied to solve large 3-D BEM models accurately and with better efficiency compared with the conventional BEM.
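
    The solver above relies on the Sherman-Morrison-Woodbury identity to invert matrices perturbed by low-rank blocks. A small numerical check of that identity, not the hierarchical BEM solver itself, is sketched below.

        # Check of (A + U C V)^{-1} = A^{-1} - A^{-1} U (C^{-1} + V A^{-1} U)^{-1} V A^{-1}
        import numpy as np

        rng = np.random.default_rng(1)
        n, k = 8, 2                                   # matrix size, rank of the update
        A = np.diag(rng.uniform(1.0, 2.0, n))         # easy-to-invert diagonal block
        U = rng.standard_normal((n, k))
        C = np.diag(rng.uniform(0.5, 1.5, k))
        V = rng.standard_normal((k, n))

        A_inv = np.diag(1.0 / np.diag(A))
        core = np.linalg.inv(np.linalg.inv(C) + V @ A_inv @ U)
        woodbury_inv = A_inv - A_inv @ U @ core @ V @ A_inv

        direct_inv = np.linalg.inv(A + U @ C @ V)
        print("max abs difference:", np.abs(woodbury_inv - direct_inv).max())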

  9. A Simple, Fast, Low Cost, HPLC/UV Validated Method for Determination of Flutamide: Application to Protein Binding Studies

    PubMed Central

    Esmaeilzadeh, Sara; Valizadeh, Hadi; Zakeri-Milani, Parvin

    2016-01-01

    Purpose: The main goal of this study was the development of a reverse phase high performance liquid chromatography (RP-HPLC) method for flutamide quantitation which is applicable to protein binding studies. Methods: An ultrafiltration method was used for the protein binding study of flutamide. For sample analysis, flutamide was extracted by a simple and low-cost extraction method using diethyl ether and then determined by HPLC/UV. Acetanilide was used as an internal standard. The chromatographic system consisted of a reversed-phase C8 column with a C8 pre-column, and a mobile phase of 29% (v/v) methanol, 38% (v/v) acetonitrile and 33% (v/v) potassium dihydrogen phosphate buffer (50 mM) with pH adjusted to 3.2. Results: Acetanilide and flutamide were eluted at 1.8 and 2.9 min, respectively. The linearity of the method was confirmed in the range of 62.5-16000 ng/ml (r2 > 0.99). The limit of quantification was shown to be 62.5 ng/ml. Precision and accuracy ranges were found to be (0.2-1.4%, 90-105%) and (0.2-5.3%, 86.7-98.5%), respectively. Acetanilide and flutamide capacity factor values of 1.35 and 2.87, tailing factor values of 1.24 and 1.07, and resolution values of 1.8 and 3.22 were obtained, in accordance with ICH guidelines. Conclusion: Based on the obtained results, a rapid, precise, accurate, sensitive and cost-effective analysis procedure was proposed for the quantitative determination of flutamide. PMID:27478788

  10. Fast response pyroelectric detector-preamplifier assembled device

    NASA Astrophysics Data System (ADS)

    Bai, PiJi; Tai, Yunjian; Liu, Huiping

    2008-03-01

    The pyroelectric detector is widely used for its simple structure and high performance-to-price ratio. It has been used in thermal detection, infrared spectroscopy and laser testing. When the pyroelectric detector is applied in practice, a fast response speed is needed. To improve the response speed of the pyroelectric detector, some specific techniques have been used in the preamplifier circuit. High sensitivity and fast response of the pyroelectric detector-preamplifier assembled device have been achieved. When the device is applied under severe shock conditions, it must survive the corresponding shock testing. For its reliability, some specific techniques were used in the device fabrication procedure. Finally, the performance parameter test results and simulated application results given in this paper show that the performance of the pyroelectric detector-preamplifier assembled device achieved the design goal.

  11. Highly accurate symplectic element based on two variational principles

    NASA Astrophysics Data System (ADS)

    Qing, Guanghui; Tian, Jia

    2018-02-01

    Because of the stability requirements on numerical results, the mathematical theory of classical mixed methods is relatively complex. However, generalized mixed methods are automatically stable, and their construction is simple and straightforward. In this paper, based on the seminal idea of the generalized mixed methods, a simple, stable, and highly accurate 8-node noncompatible symplectic element (NCSE8) was developed by combining the modified Hellinger-Reissner mixed variational principle and the minimum energy principle. To ensure the accuracy of in-plane stress results, a simultaneous equation approach was also suggested. Numerical experimentation shows that the accuracy of the stress results of NCSE8 is nearly the same as that of displacement methods, and they are in good agreement with the exact solutions when the mesh is relatively fine. NCSE8 has the advantages of a clear concept, easy implementation in a finite element computer program, higher accuracy, and wide applicability to various compressible and nearly incompressible linear elasticity problems. It is possible that NCSE8 becomes even more advantageous for fracture problems due to its better stress accuracy.

  12. Fast and accurate Monte Carlo sampling of first-passage times from Wiener diffusion models.

    PubMed

    Drugowitsch, Jan

    2016-02-11

    We present a new, fast approach for drawing boundary crossing samples from Wiener diffusion models. Diffusion models are widely applied to model choices and reaction times in two-choice decisions. Samples from these models can be used to simulate the choices and reaction times they predict. These samples, in turn, can be utilized to adjust the models' parameters to match observed behavior from humans and other animals. Usually, such samples are drawn by simulating a stochastic differential equation in discrete time steps, which is slow and leads to biases in the reaction time estimates. Our method, instead, uses known expressions for first-passage time densities, which results in unbiased, exact samples and a hundred- to thousand-fold speed increase in typical situations. In its most basic form it is restricted to diffusion models with symmetric boundaries and non-leaky accumulation, but our approach can be extended to also handle asymmetric boundaries or to approximate leaky accumulation.
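
    For contrast with the exact-density approach described above, the naive discrete-time baseline it replaces can be sketched as follows; the drift, boundary and time step are illustrative, and the discretization itself is the source of the bias the paper avoids.

        # Naive Euler-Maruyama sampling of first-passage times from a symmetric
        # two-boundary Wiener diffusion (the slow, biased baseline).
        import numpy as np

        def simulate_first_passage(drift=0.5, bound=1.0, dt=1e-3, rng=None):
            rng = rng if rng is not None else np.random.default_rng()
            x, t = 0.0, 0.0
            while abs(x) < bound:
                x += drift * dt + np.sqrt(dt) * rng.standard_normal()
                t += dt
            return t, (1 if x >= bound else -1)   # reaction time, chosen boundary

        rng = np.random.default_rng(42)
        samples = [simulate_first_passage(rng=rng) for _ in range(200)]
        rts = np.array([t for t, _ in samples])
        upper_fraction = np.mean([c == 1 for _, c in samples])
        print(f"mean RT = {rts.mean():.3f}, P(upper boundary) = {upper_fraction:.2f}")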

  13. Fast and accurate predictions of covalent bonds in chemical space.

    PubMed

    Chang, K Y Samuel; Fias, Stijn; Ramakrishnan, Raghunathan; von Lilienfeld, O Anatole

    2016-05-07

    We assess the predictive accuracy of perturbation theory based estimates of changes in covalent bonding due to linear alchemical interpolations among molecules. We have investigated σ bonding to hydrogen, as well as σ and π bonding between main-group elements, occurring in small sets of iso-valence-electronic molecules with elements drawn from second to fourth rows in the p-block of the periodic table. Numerical evidence suggests that first order Taylor expansions of covalent bonding potentials can achieve high accuracy if (i) the alchemical interpolation is vertical (fixed geometry), (ii) it involves elements from the third and fourth rows of the periodic table, and (iii) an optimal reference geometry is used. This leads to near linear changes in the bonding potential, resulting in analytical predictions with chemical accuracy (∼1 kcal/mol). Second order estimates deteriorate the prediction. If initial and final molecules differ not only in composition but also in geometry, all estimates become substantially worse, with second order being slightly more accurate than first order. The independent particle approximation based second order perturbation theory performs poorly when compared to the coupled perturbed or finite difference approach. Taylor series expansions up to fourth order of the potential energy curve of highly symmetric systems indicate a finite radius of convergence, as illustrated for the alchemical stretching of H2 (+). Results are presented for (i) covalent bonds to hydrogen in 12 molecules with 8 valence electrons (CH4, NH3, H2O, HF, SiH4, PH3, H2S, HCl, GeH4, AsH3, H2Se, HBr); (ii) main-group single bonds in 9 molecules with 14 valence electrons (CH3F, CH3Cl, CH3Br, SiH3F, SiH3Cl, SiH3Br, GeH3F, GeH3Cl, GeH3Br); (iii) main-group double bonds in 9 molecules with 12 valence electrons (CH2O, CH2S, CH2Se, SiH2O, SiH2S, SiH2Se, GeH2O, GeH2S, GeH2Se); (iv) main-group triple bonds in 9 molecules with 10 valence electrons (HCN, HCP, HCAs, HSiN, HSi
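
    Schematically, the first-order "vertical" (fixed-geometry) alchemical estimate discussed above amounts to a Taylor expansion in a coupling parameter; in generic notation (the symbols below are not taken verbatim from the paper):

        % first-order alchemical Taylor expansion at fixed geometry R,
        % along a linear coupling H(lambda) between molecules A and B
        E_B(\mathbf{R}) \approx E_A(\mathbf{R})
            + \left.\frac{\partial E(\lambda,\mathbf{R})}{\partial \lambda}\right|_{\lambda=0},
        \qquad H(\lambda) = (1-\lambda)\,H_A + \lambda\,H_B, \quad \lambda \in [0,1].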

  14. GOSSIP: a method for fast and accurate global alignment of protein structures.

    PubMed

    Kifer, I; Nussinov, R; Wolfson, H J

    2011-04-01

    The database of known protein structures (PDB) is increasing rapidly. This results in a growing need for methods that can cope with the vast amount of structural data. To analyze the accumulating data, it is important to have a fast tool for identifying similar structures and clustering them by structural resemblance. Several excellent tools have been developed for the comparison of protein structures. These usually address the task of local structure alignment, an important yet computationally intensive problem due to its complexity. It is difficult to use such tools for comparing a large number of structures to each other at a reasonable time. Here we present GOSSIP, a novel method for a global all-against-all alignment of any set of protein structures. The method detects similarities between structures down to a certain cutoff (a parameter of the program), hence allowing it to detect similar structures at a much higher speed than local structure alignment methods. GOSSIP compares many structures in times which are several orders of magnitude faster than well-known available structure alignment servers, and it is also faster than a database scanning method. We evaluate GOSSIP both on a dataset of short structural fragments and on two large sequence-diverse structural benchmarks. Our conclusions are that for a threshold of 0.6 and above, the speed of GOSSIP is obtained with no compromise of the accuracy of the alignments or of the number of detected global similarities. A server, as well as an executable for download, are available at http://bioinfo3d.cs.tau.ac.il/gossip/.

  15. Nanopore Technology: A Simple, Inexpensive, Futuristic Technology for DNA Sequencing.

    PubMed

    Gupta, P D

    2016-10-01

    In health care, the importance of DNA sequencing has been fully established. Sanger's capillary electrophoresis DNA sequencing methodology is time consuming and cumbersome, and hence more expensive. Lately, because of its versatility, DNA sequencing has become a household name, and therefore there is an urgent need for a simple, fast, inexpensive DNA sequencing technology. At the beginning of this century efforts were made and nanopore DNA sequencing technology was developed; although it is still in its infancy, it is the futuristic technology.

  16. Fast 3D NIR systems for facial measurement and lip-reading

    NASA Astrophysics Data System (ADS)

    Brahm, Anika; Ramm, Roland; Heist, Stefan; Rulff, Christian; Kühmstedt, Peter; Notni, Gunther

    2017-05-01

    Structured-light projection is a well-established optical method for the non-destructive contactless three-dimensional (3D) measurement of object surfaces. In particular, there is a great demand for accurate and fast 3D scans of human faces or facial regions of interest in medicine, safety, face modeling, games, virtual life, or entertainment. New developments of facial expression detection and machine lip-reading can be used for communication tasks, future machine control, or human-machine interactions. In such cases, 3D information may offer more detailed information than 2D images which can help to increase the power of current facial analysis algorithms. In this contribution, we present new 3D sensor technologies based on three different methods of near-infrared projection technologies in combination with a stereo vision setup of two cameras. We explain the optical principles of an NIR GOBO projector, an array projector and a modified multi-aperture projection method and compare their performance parameters to each other. Further, we show some experimental measurement results of applications where we realized fast, accurate, and irritation-free measurements of human faces.

  17. Study on shear properties of coral sand under cyclic simple shear condition

    NASA Astrophysics Data System (ADS)

    Ji, Wendong; Zhang, Yuting; Jin, Yafei

    2018-05-01

    In recent years, ocean development in our country has urgently needed to be accelerated, and the construction of artificial coral reefs has become an important development direction. In this paper, experimental studies of simple shear and cyclic simple shear of coral sand are carried out, and the shear properties and particle breakage of coral sand are analyzed. The results show that the coral sand samples exhibit overall shear failure in the simple shear test, which makes it more accurate and effective for studying particle breakage. The shear displacement corresponding to the peak shear stress of the simple shear test is significantly larger than that corresponding to the peak shear stress of the direct shear test. The degree of particle breakage caused by the simple shear test is significantly related to the normal stress level. The particle breakage of coral sand after the cyclic simple shear test increases markedly compared with that of the simple shear test, and particle breakage occurs across the whole particle size range. Increasing the number of cycles in the cyclic simple shear test results in continuous compaction of the sample, so that the envelope curve of peak shear force increases with the accumulated shear displacement.

  18. Fast rerouting schemes for protected mobile IP over MPLS networks

    NASA Astrophysics Data System (ADS)

    Wen, Chih-Chao; Chang, Sheng-Yi; Chen, Huan; Chen, Kim-Joan

    2005-10-01

    Fast rerouting is a critical traffic engineering operation in MPLS networks. To implement the Mobile IP service over an MPLS network, one can combine it with the fast rerouting operation to enhance availability and survivability. MPLS can protect the critical LSP tunnel between the Home Agent (HA) and the Foreign Agent (FA) using the fast rerouting scheme. In this paper, we propose a simple but efficient algorithm to address the triangle routing problem for Mobile IP over MPLS networks. We consider this routing issue as a link weighting and capacity assignment (LW-CA) problem. The derived solution is used to plan the fast restoration mechanism to protect against link or node failure. We first model the LW-CA problem as a mixed integer optimization problem. Our goal is to minimize the call blocking probability on the most congested working trunk for the mobile IP connections. Many existing network topologies are used to evaluate the performance of our scheme. Results show that our proposed scheme obtains the best performance in terms of the smallest blocking probability compared to other schemes.

  19. Accurate Behavioral Simulator of All-Digital Time-Domain Smart Temperature Sensors by Using SIMULINK

    PubMed Central

    Chen, Chun-Chi; Chen, Chao-Lieh; Lin, You-Ting

    2016-01-01

    This study proposes a new behavioral simulator that uses SIMULINK for all-digital CMOS time-domain smart temperature sensors (TDSTSs) to perform rapid and accurate simulations. Inverter-based TDSTSs, which offer low cost and a simple structure for temperature-to-digital conversion, have been developed. Typically, electronic design automation tools, such as HSPICE, are used to simulate TDSTSs for performance evaluation. However, such tools require extremely long simulation times and complex procedures to analyze the results and generate figures. In this paper, we organize simple but accurate equations into a temperature-dependent model (TDM) with which the temperature behavior of TDSTSs can be evaluated. Furthermore, temperature-sensing models of a single CMOS NOT gate were devised using HSPICE simulations. Using the TDM and these temperature-sensing models, a novel simulator in the SIMULINK environment was developed that substantially accelerates the simulation and simplifies the evaluation procedures. Experiments demonstrated that the simulation results of the proposed simulator agree favorably with those obtained from HSPICE simulations, showing that the proposed simulator functions successfully. This is the first behavioral simulator addressing the rapid simulation of TDSTSs. PMID:27509507

  20. Not So Fast! (and Not So Frugal!): Rethinking the Recognition Heuristic

    ERIC Educational Resources Information Center

    Oppenheimer, Daniel M.

    2003-01-01

    The "fast and frugal" approach to reasoning (Gigerenzer, G., & Todd, P. M. (1999). "Simple heuristics that make us smart." New York: Oxford University Press) claims that individuals use non-compensatory strategies in judgment--the idea that only one cue is taken into account in reasoning. The simplest and most important of these heuristics…

  1. Fast or Frugal, but Not Both: Decision Heuristics under Time Pressure

    ERIC Educational Resources Information Center

    Bobadilla-Suarez, Sebastian; Love, Bradley C.

    2018-01-01

    Heuristics are simple, yet effective, strategies that people use to make decisions. Because heuristics do not require all available information, they are thought to be easy to implement and to not tax limited cognitive resources, which has led heuristics to be characterized as fast-and-frugal. We question this monolithic conception of heuristics…

  2. An Approach in Radiation Therapy Treatment Planning: A Fast, GPU-Based Monte Carlo Method.

    PubMed

    Karbalaee, Mojtaba; Shahbazi-Gahrouei, Daryoush; Tavakoli, Mohammad B

    2017-01-01

    An accurate and fast radiation dose calculation is essential for successful radiotherapy. The aim of this study was to implement a new graphics processing unit (GPU) based radiation therapy treatment planning method for accurate and fast dose calculation in radiotherapy centers. A program was written to run in parallel on the GPU. The code was validated against EGSnrc/DOSXYZnrc. Moreover, a semi-automatic, rotary, asymmetric phantom was designed and produced using bone, lung, and soft-tissue equivalent materials. All measurements were performed using a Mapcheck dosimeter. The accuracy of the code was validated using the experimental data obtained from the anthropomorphic phantom as the gold standard. The findings showed that, compared with DOSXYZnrc in the virtual phantom, more than 95% of the voxels met a <3% dose-difference or 3 mm distance-to-agreement (DTA) criterion. Moreover, for the anthropomorphic phantom, compared to the Mapcheck dose measurements, a <5% dose-difference or 5 mm DTA was observed. The fast calculation speed and high accuracy of the GPU-based Monte Carlo method in dose calculation may be useful in routine radiation therapy centers as the core component of a treatment planning verification system.

  3. Précis of Simple heuristics that make us smart.

    PubMed

    Todd, P M; Gigerenzer, G

    2000-10-01

    How can anyone be rational in a world where knowledge is limited, time is pressing, and deep thought is often an unattainable luxury? Traditional models of unbounded rationality and optimization in cognitive science, economics, and animal behavior have tended to view decision-makers as possessing supernatural powers of reason, limitless knowledge, and endless time. But understanding decisions in the real world requires a more psychologically plausible notion of bounded rationality. In Simple heuristics that make us smart (Gigerenzer et al. 1999), we explore fast and frugal heuristics--simple rules in the mind's adaptive toolbox for making decisions with realistic mental resources. These heuristics can enable both living organisms and artificial systems to make smart choices quickly and with a minimum of information by exploiting the way that information is structured in particular environments. In this précis, we show how simple building blocks that control information search, stop search, and make decisions can be put together to form classes of heuristics, including: ignorance-based and one-reason decision making for choice, elimination models for categorization, and satisficing heuristics for sequential search. These simple heuristics perform comparably to more complex algorithms, particularly when generalizing to new data--that is, simplicity leads to robustness. We present evidence regarding when people use simple heuristics and describe the challenges to be addressed by this research program.
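
    As a concrete illustration of the one-reason decision making described above, the short Python sketch below implements a take-the-best-style comparison: cues are examined in order of validity and the first discriminating cue decides. The cue names, values, and validity ordering are invented for the example and are not taken from the précis.

      def take_the_best(cues_a, cues_b, validity_order):
          """One-reason decision making: examine cues in order of validity and
          decide on the first cue that discriminates between the two options.
          Cue values are 1 (positive), 0 (negative), or None (unknown)."""
          for cue in validity_order:
              a, b = cues_a.get(cue), cues_b.get(cue)
              if a is not None and b is not None and a != b:
                  return "A" if a > b else "B"
          return "guess"  # no cue discriminates

      # Hypothetical example: which of two cities is larger?
      city_a = {"is_capital": 0, "has_soccer_team": 1, "has_airport": 1}
      city_b = {"is_capital": 0, "has_soccer_team": 0, "has_airport": 1}
      order = ["is_capital", "has_soccer_team", "has_airport"]
      print(take_the_best(city_a, city_b, order))  # -> A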

  4. Modifying scoping codes to accurately calculate TMI-cores with lifetimes greater than 500 effective full-power days

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bai, D.; Levine, S.L.; Luoma, J.

    1992-01-01

    The Three Mile Island unit 1 core reloads have been designed using fast but accurate scoping codes, PSUI-LEOPARD and ADMARC. PSUI-LEOPARD has been normalized to EPRI-CPM2 results and used to calculate the two-group constants, whereas ADMARC is a modern two-dimensional, two-group diffusion theory nodal code. Problems in accuracy were encountered for cycles 8 and higher as the core lifetime was increased beyond 500 effective full-power days. This is because the cores more heavily loaded in both ²³⁵U and ¹⁰B have harder neutron spectra, which produces a change in the transport effect in the baffle reflector region, and the burnable poison (BP) simulations were not accurate enough for the cores containing the increased amount of ¹⁰B required in the BP rods. In the authors' study, a technique has been developed to take into account the change in the transport effect in the baffle region by modifying the fast neutron diffusion coefficient as a function of cycle length and core exposure or burnup. A more accurate BP simulation method is also developed, using integral transport theory and CPM2 data, to calculate the BP contribution to the equivalent fuel assembly (supercell) two-group constants. The net result is that the accuracy of the scoping codes is as good as that produced by CASMO/SIMULATE or CPM2/SIMULATE when comparing with measured data.

  5. Simple, inexpensive computerized rodent activity meters.

    PubMed

    Horton, R M; Karachunski, P I; Kellermann, S A; Conti-Fine, B M

    1995-10-01

    We describe two approaches for using obsolescent computers, either an IBM PC XT or an Apple Macintosh Plus, to accurately quantify spontaneous rodent activity, as revealed by continuous monitoring of the spontaneous usage of running activity wheels. Because such computers can commonly be obtained at little or no expense, and other commonly available materials and inexpensive parts can be used, these meters can be built quite economically. Construction of these meters requires no specialized electronics expertise, and their software requirements are simple. The computer interfaces are potentially of general interest, as they could also be used for monitoring a variety of events in a research setting.

  6. High Frequency QRS ECG Accurately Detects Cardiomyopathy

    NASA Technical Reports Server (NTRS)

    Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds

    2005-01-01

    RAZ scoring is a simple, accurate and inexpensive screening technique for cardiomyopathy. Although HF QRS ECG is highly sensitive for cardiomyopathy, its specificity may be compromised in patients with cardiac pathologies other than cardiomyopathy, such as uncomplicated coronary artery disease or multiple coronary disease risk factors. Further studies are required to determine whether HF QRS might be useful for monitoring cardiomyopathy severity or the efficacy of therapy in a longitudinal fashion.

  7. Secular Orbit Evolution in Systems with a Strong External Perturber—A Simple and Accurate Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrade-Ines, Eduardo; Eggl, Siegfried, E-mail: eandrade.ines@gmail.com, E-mail: siegfried.eggl@jpl.nasa.gov

    We present a semi-analytical correction to the seminal solution for the secular motion of a planet's orbit under the gravitational influence of an external perturber derived by Heppenheimer. A comparison between analytical predictions and numerical simulations allows us to determine corrective factors for the secular frequency and forced eccentricity in the coplanar restricted three-body problem. The correction is given in the form of a polynomial function of the system's parameters that can be applied to first-order forced eccentricity and secular frequency estimates. The resulting secular equations are simple, straightforward to use, and improve the fidelity of Heppenheimer's solution well beyond higher-order models. The quality and convergence of the corrected secular equations are tested for a wide range of parameters, and the limits of their applicability are given.

  8. A Simple Technique for Accurate Transfer of Secondary Copings in a Tooth-Supported Telescopic Prosthesis.

    PubMed

    Shankargouda, Swapnil B; Sidhu, Preena; Kardalkar, Swetha; Desai, Pooja M

    2017-02-01

    Residual ridge resorption is a rapid, progressive, irreversible, and inevitable process of bone resorption. Long-standing teeth and implants have been shown to have maintained the bone around them without resorption. Thus, overdenture therapy has been proven to be beneficial in situations where few remaining teeth are present. In addition to the various advantages seen with tooth-supported telescopic overdentures, a few shortcomings can also be expected, including unseating of the overdenture, increased bulk of the prosthesis, secondary caries, etc. The precise transfer of the secondary telescopic copings to maintain the spatial relationship, without any micromovement, remains the most critical step in ensuring the success of the tooth-supported telescopic prosthesis. Thus, a simple and innovative technique of splinting the secondary copings was devised to prevent distortion and micromovement and maintain its spatial relationship. © 2015 by the American College of Prosthodontists.

  9. Local SIMPLE multi-atlas-based segmentation applied to lung lobe detection on chest CT

    NASA Astrophysics Data System (ADS)

    Agarwal, M.; Hendriks, E. A.; Stoel, B. C.; Bakker, M. E.; Reiber, J. H. C.; Staring, M.

    2012-02-01

    For multi-atlas-based segmentation approaches, a segmentation fusion scheme that considers local performance measures may be more accurate than a method that uses a global performance measure. We improve upon an existing segmentation fusion method called SIMPLE and extend it to be localized and suitable for multi-labeled segmentations. We demonstrate the algorithm's performance on 23 CT scans of COPD patients using a leave-one-out experiment. Our algorithm performs significantly better (p < 0.01) than majority voting, STAPLE, and SIMPLE, with a median overlap of the fissure of 0.45, 0.48, 0.55, and 0.6 for majority voting, STAPLE, SIMPLE, and the proposed algorithm, respectively.
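
    For readers unfamiliar with label fusion, the Python sketch below shows the simplest of the baselines compared above, per-voxel majority voting. The toy label maps are invented for illustration; the paper's localized SIMPLE algorithm is considerably more involved.

      import numpy as np

      def majority_vote(label_maps):
          """Fuse several atlas-based segmentations by per-voxel majority voting.
          label_maps: list of integer label arrays of identical shape.
          Ties are resolved in favor of the smallest label value."""
          stacked = np.stack(label_maps, axis=0)
          n_labels = int(stacked.max()) + 1
          votes = np.stack([(stacked == lab).sum(axis=0) for lab in range(n_labels)], axis=0)
          return votes.argmax(axis=0)

      # Toy 1-D example with three "atlases" and labels 0/1/2.
      a = np.array([0, 1, 1, 2])
      b = np.array([0, 1, 2, 2])
      c = np.array([1, 1, 1, 2])
      print(majority_vote([a, b, c]))  # -> [0 1 1 2]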

  10. Apparatus for continuous, fast, and precise measurements of position and velocity of a small spherical particle

    NASA Technical Reports Server (NTRS)

    Venkataraman, T. S.; Eidson, W. W.; Cohen, L. D.; Farina, J. D.; Acquista, C.

    1983-01-01

    The position and velocity of optically levitated glass spheres (radii 10-20 microns) moving in a gas are measured accurately, rapidly, and continuously using a high-speed rotating polygon mirror. The experimental technique developed here has repeatable position accuracies better than 20 microns. Each measurement takes less than 1 microsec and can be repeated every 100 microsec. The position of the levitated glass spheres can be manipulated accurately by modulating the laser power with an acousto-optic modulator. The technique provides a fast and accurate method to study general particle dynamics in a fluid.

  11. An accurate, simple prognostic model consisting of age, JAK2, CALR, and MPL mutation status for patients with primary myelofibrosis.

    PubMed

    Rozovski, Uri; Verstovsek, Srdan; Manshouri, Taghi; Dembitz, Vilma; Bozinovic, Ksenija; Newberry, Kate; Zhang, Ying; Bove, Joseph E; Pierce, Sherry; Kantarjian, Hagop; Estrov, Zeev

    2017-01-01

    In most patients with primary myelofibrosis, one of three mutually exclusive somatic mutations is detected. In approximately 60% of patients, the Janus kinase 2 gene is mutated, in 20%, the calreticulin gene is mutated, and in 5%, the myeloproliferative leukemia virus gene is mutated. Although patients with mutated calreticulin or myeloproliferative leukemia genes have a favorable outcome, and those with none of these mutations have an unfavorable outcome, prognostication based on mutation status is challenging due to the heterogeneous survival of patients with mutated Janus kinase 2. To develop a prognostic model based on mutation status, we screened primary myelofibrosis patients seen at the MD Anderson Cancer Center, Houston, USA, between 2000 and 2013 for the presence of Janus kinase 2, calreticulin, and myeloproliferative leukemia mutations. Of 344 primary myelofibrosis patients, Janus kinase 2 V617F was detected in 226 (66%), calreticulin mutation in 43 (12%), and myeloproliferative leukemia mutation in 16 (5%); 59 patients (17%) were triple-negatives. A 50% cut-off dichotomized Janus kinase 2-mutated patients into those with high Janus kinase 2 V617F allele burden and favorable survival and those with low Janus kinase 2 V617F allele burden and unfavorable survival. Patients with a favorable mutation status (high Janus kinase 2 V617F allele burden/myeloproliferative leukemia/calreticulin mutation) and aged 65 years or under had a median survival of 126 months. Patients with one risk factor (low Janus kinase 2 V617F allele burden/triple-negative or age >65 years) had an intermediate survival duration, and patients aged over 65 years with an adverse mutation status (low Janus kinase 2 V617F allele burden or triple-negative) had a median survival of only 35 months. Our simple and easily applied age- and mutation status-based scoring system accurately predicted the survival of patients with primary myelofibrosis. Copyright© Ferrata Storti Foundation.

  12. Fast Fuzzy Arithmetic Operations

    NASA Technical Reports Server (NTRS)

    Hampton, Michael; Kosheleva, Olga

    1997-01-01

    In engineering applications of fuzzy logic, the main goal is not to simulate the way the experts really think, but to come up with a good engineering solution that would (ideally) be better than the expert's control. In such applications, it makes perfect sense to restrict ourselves to simplified approximate expressions for membership functions. If we need to perform arithmetic operations with the resulting fuzzy numbers, then we can use simple and fast algorithms that are known for operations with simple membership functions. In other applications, especially the ones that are related to humanities, simulating experts is one of the main goals. In such applications, we must use membership functions that capture every nuance of the expert's opinion; these functions are therefore complicated, and fuzzy arithmetic operations with the corresponding fuzzy numbers become a computational problem. In this paper, we design a new algorithm for performing such operations. This algorithm is applicable in the case when the negative logarithms -log(u(x)) of the membership functions u(x) are convex, and it reduces the computation time from O(n^2) to O(n log(n)) (where n is the number of points x at which we know the membership functions u(x)).

  13. Accurate upwind methods for the Euler equations

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1993-01-01

    A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope steepening technique, which has no effect in smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one-intermediate-state model is employed. A modification of this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical at smooth regions, and yield high resolution at discontinuities.
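
    The abstract notes that the monotonicity constraint is conveniently coded with the median function. The Python sketch below shows one standard way such a constraint can look: a median-of-three helper, the two-argument minmod written as median(0, x, y), and a classic monotonized-central slope limiter. This illustrates the general technique and is not necessarily the exact constraint used in the paper.

      import numpy as np

      def median3(a, b, c):
          """Elementwise median of three arrays."""
          return np.maximum(np.minimum(a, b), np.minimum(np.maximum(a, b), c))

      def minmod(x, y):
          """Two-argument minmod written through the median: minmod(x, y) = median(0, x, y)."""
          return median3(np.zeros_like(x), x, y)

      def mc_limited_slopes(u):
          """Monotonized-central limited slopes for 1-D cell averages u.
          The central-difference slope is constrained by twice the one-sided
          differences, which keeps the piecewise linear reconstruction monotone."""
          d_minus = u[1:-1] - u[:-2]
          d_plus = u[2:] - u[1:-1]
          central = 0.5 * (d_minus + d_plus)
          return minmod(central, minmod(2.0 * d_minus, 2.0 * d_plus))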

  14. Proposal and evaluation of FASDIM, a Fast And Simple De-Identification Method for unstructured free-text clinical records.

    PubMed

    Chazard, Emmanuel; Mouret, Capucine; Ficheur, Grégoire; Schaffar, Aurélien; Beuscart, Jean-Baptiste; Beuscart, Régis

    2014-04-01

    Medical free-text records provide rich information about patients, but often need to be de-identified by removing the Protected Health Information (PHI) whenever identification of the patient is not required. Pattern matching techniques require pre-defined dictionaries, and machine learning techniques require an extensive training set. Methods exist in French, but they either give weak results or are not freely available. The objective is to define and evaluate FASDIM, a Fast And Simple De-Identification Method for French medical free-text records. FASDIM consists of removing all the words that are not present in the authorized word list, and removing all the numbers except those that match a list of protection patterns. The corresponding lists are extended over the iterations of the method. For the evaluation, the workload is estimated during the de-identification of the records. The efficiency of the de-identification is assessed by independent medical experts on 508 discharge letters that are randomly selected and de-identified by FASDIM. Finally, the letters are encoded before and after de-identification according to 3 terminologies (ATC, ICD10, CCAM) and the codes are compared. The construction of the list of authorized words is progressive: 12 h for the first 7000 letters, 16 additional hours for 20,000 additional letters. The Recall (proportion of removed Protected Health Information, PHI) is 98.1%, the Precision (proportion of PHI among the removed tokens) is 79.6%, and the F-measure (harmonic mean) is 87.9%. On average 30.6 terminology codes are encoded per letter, and 99.02% of those codes are preserved despite the de-identification. FASDIM achieves good results in French and is freely available. It is easy to implement and does not require any predefined dictionary. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
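
    A minimal sketch of the word-list idea behind FASDIM is given below in Python. The authorized words and the number-protection pattern are hypothetical placeholders; the real method builds these lists iteratively and operates on French clinical text.

      import re

      # Hypothetical authorized word list and number-protection pattern;
      # the real FASDIM lists are built iteratively from the letters themselves.
      AUTHORIZED = {"the", "patient", "was", "admitted", "for", "days", "of", "aspirin", "mg"}
      PROTECTED_NUMBER = re.compile(r"^\d{1,3}$")  # e.g. keep small doses/durations

      def fasdim_like(text):
          """Keep authorized words and protected numbers; mask everything else."""
          out = []
          for token in text.split():
              word = token.strip(".,;:").lower()
              if word.isdigit():
                  out.append(token if PROTECTED_NUMBER.match(word) else "[NUM]")
              elif word in AUTHORIZED:
                  out.append(token)
              else:
                  out.append("[X]")  # unknown word, assumed to be PHI
          return " ".join(out)

      print(fasdim_like("Mr Dupont was admitted for 3 days, aspirin 75 mg, born 19550412."))
      # -> [X] [X] was admitted for 3 days, aspirin 75 mg, [X] [NUM]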

  15. An Accurate Co-registration Method for Airborne Repeat-pass InSAR

    NASA Astrophysics Data System (ADS)

    Dong, X. T.; Zhao, Y. H.; Yue, X. J.; Han, C. M.

    2017-10-01

    Interferometric Synthetic Aperture Radar (InSAR) technology plays a significant role in topographic mapping and surface deformation detection. Compared with spaceborne repeat-pass InSAR, airborne repeat-pass InSAR avoids the problems of long revisit times and low-resolution images. Because it can obtain abundant information flexibly, accurately, and quickly, airborne repeat-pass InSAR is valuable for monitoring deformation of shallow ground. In order to obtain precise ground elevation information and the interferometric coherence needed for deformation monitoring from the master and slave images, accurate co-registration must be ensured. Because of the side-looking geometry, the repeated observation paths, and the long baseline, the initial slant ranges and flight heights differ considerably between repeat flight paths. These differences cause pixels located at identical coordinates in the master and slave images to correspond to ground resolution cells of different sizes. The mismatch is most obvious in the long-slant-range parts of the master and slave images. To resolve the differing pixel sizes and obtain accurate co-registration results, a new method is proposed based on the Range-Doppler (RD) imaging model. VV-polarization C-band airborne repeat-pass InSAR images were used in the experiment. The experimental results show that the proposed method achieves superior co-registration accuracy.

  16. Science Opportunity Analyzer (SOA): Science Planning Made Simple

    NASA Technical Reports Server (NTRS)

    Streiffert, Barbara A.; Polanskey, Carol A.

    2004-01-01

    For the first time at JPL, the Cassini mission to Saturn is using distributed science operations for developing its experiments. Remote scientists needed the ability to: a) identify observation opportunities; b) create accurate, detailed designs for their observations; c) verify that their designs meet their objectives; d) check their observations against project flight rules and constraints; and e) communicate their observations to other scientists. Many existing tools provide one or more of these functions, but Science Opportunity Analyzer (SOA) has been built to unify these tasks into a single application. Accurate: SOA utilizes the JPL Navigation and Ancillary Information Facility (NAIF) SPICE software toolkit, which provides high-fidelity modeling and facilitates rapid adaptation to other flight projects. Portable: available on Unix, Windows, and Linux. Adaptable: designed to be a multi-mission tool so it can be readily adapted to other flight projects; implemented in Java, Java 3D, and other innovative technologies. Conclusion: SOA is easy to use, requiring only 6 simple steps. SOA's ability to show the same accurate information in multiple ways (multiple visualization formats, data plots, listings, and file output) is essential to meet the needs of a diverse, distributed science operations environment.

  17. Accurate Estimate of Some Propagation Characteristics for the First Higher Order Mode in Graded Index Fiber with Simple Analytic Chebyshev Method

    NASA Astrophysics Data System (ADS)

    Dutta, Ivy; Chowdhury, Anirban Roy; Kumbhakar, Dharmadas

    2013-03-01

    Using a Chebyshev power series approach, an accurate description of the first higher-order (LP11) mode of graded-index fibers having three different profile shape functions is presented in this paper and applied to predict their propagation characteristics. These characteristics include the fractional power guided through the core, the excitation efficiency, and the Petermann I and II spot sizes, together with their approximate analytic formulations. We show that where two and three Chebyshev points in the LP11 mode approximation give fairly accurate results, the values based on our calculations involving four Chebyshev points match the available exact numerical results excellently.

  18. Fast, Computer Supported Experimental Determination of Absolute Zero Temperature at School

    ERIC Educational Resources Information Center

    Bogacz, Bogdan F.; Pedziwiatr, Antoni T.

    2014-01-01

    A simple and fast experimental method of determining absolute zero temperature is presented. An air gas thermometer coupled with a pressure sensor and the COACH data acquisition system is used over a wide range of temperatures. By constructing a pressure vs temperature plot for air under constant volume it is possible to obtain--by extrapolation to zero…
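
    The extrapolation itself is a one-line linear fit. The Python sketch below uses invented constant-volume pressure readings to show how the intercept of a pressure-temperature line with the temperature axis estimates absolute zero; the numbers are illustrative, not measurements from the paper.

      import numpy as np

      # Invented constant-volume readings: temperature (degrees C) vs pressure (hPa).
      T_celsius = np.array([5.0, 20.0, 40.0, 60.0, 80.0])
      p_hPa = np.array([1013.0, 1068.0, 1141.0, 1213.0, 1286.0])

      # Fit p = a*T + b; the temperature at which the line reaches p = 0
      # estimates absolute zero on the Celsius scale.
      a, b = np.polyfit(T_celsius, p_hPa, 1)
      print(f"Estimated absolute zero: {-b / a:.0f} degrees C")  # about -273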

  19. A fast, preconditioned conjugate gradient Toeplitz solver

    NASA Technical Reports Server (NTRS)

    Pan, Victor; Schrieber, Robert

    1989-01-01

    A simple factorization is given of an arbitrary hermitian, positive definite matrix in which the factors are well-conditioned, hermitian, and positive definite. In fact, given knowledge of the extreme eigenvalues of the original matrix A, an optimal improvement can be achieved, making the condition numbers of each of the two factors equal to the square root of the condition number of A. This technique is then applied to the solution of hermitian, positive definite Toeplitz systems. Large linear systems with hermitian, positive definite Toeplitz matrices arise in some signal processing applications. A stable fast algorithm is given for solving these systems that is based on the preconditioned conjugate gradient method. The algorithm exploits Toeplitz structure to reduce the cost of an iteration to O(n log n) by applying the fast Fourier transform to compute matrix-vector products. Matrix factorization is used as a preconditioner.
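
    The key to the O(n log n) iteration cost is that a Toeplitz matrix-vector product can be computed with FFTs after embedding the matrix in a circulant one. The Python sketch below demonstrates this standard embedding for real data; it is a generic illustration rather than the paper's full preconditioned conjugate gradient solver.

      import numpy as np
      from scipy.linalg import toeplitz

      def toeplitz_matvec(first_col, first_row, x):
          """Toeplitz matrix-vector product in O(n log n) via circulant embedding.
          first_col and first_row define the n x n Toeplitz matrix
          (first_col[0] must equal first_row[0]); real data is assumed."""
          n = len(x)
          c = np.concatenate([first_col, [0.0], first_row[:0:-1]])  # 2n circulant column
          xp = np.concatenate([x, np.zeros(n)])
          y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(xp))
          return y[:n].real

      # Sanity check against a dense multiplication.
      col = np.array([4.0, 1.0, 0.5, 0.25])
      row = np.array([4.0, 2.0, 1.0, 0.5])
      x = np.array([1.0, -1.0, 2.0, 0.0])
      assert np.allclose(toeplitz(col, row) @ x, toeplitz_matvec(col, row, x))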

  20. Standardizing a simpler, more sensitive and accurate tail bleeding assay in mice

    PubMed Central

    Liu, Yang; Jennings, Nicole L; Dart, Anthony M; Du, Xiao-Jun

    2012-01-01

    AIM: To optimize the experimental protocols for a simple, sensitive and accurate bleeding assay. METHODS: The bleeding assay was performed in mice by tail tip amputation, immersing the tail in saline at 37 °C, continuously monitoring bleeding patterns and measuring bleeding volume from changes in the body weight. Sensitivity and extent of variation of bleeding time and bleeding volume were compared in mice treated with the P2Y receptor inhibitor prasugrel at various doses or in mice deficient in FcRγ, a signaling protein of the glycoprotein VI receptor. RESULTS: We described details of the bleeding assay with the aim of standardizing this commonly used assay. The bleeding assay detailed here was simple to operate and permitted continuous monitoring of the bleeding pattern and detection of re-bleeding. We also reported a simple and accurate way of quantifying bleeding volume from changes in the body weight, which correlated well with a chemical assay of hemoglobin levels (r² = 0.990, P < 0.0001). We determined by tail bleeding assay the dose-effect relation of the anti-platelet drug prasugrel from 0.015 to 5 mg/kg. Our results showed that the correlation of bleeding time and volume was unsatisfactory and that, compared with the bleeding time, bleeding volume was more sensitive in detecting a partial inhibition of platelets' haemostatic activity (P < 0.01). Similarly, in mice with genetic disruption of FcRγ as a signaling molecule of P-selectin glycoprotein ligand-1 leading to platelet dysfunction, both the increased bleeding volume and the repeated bleeding pattern defined the phenotype of the knockout mice better than a prolonged bleeding time. CONCLUSION: Determination of bleeding pattern and bleeding volume, in addition to bleeding time, improved the sensitivity and accuracy of this assay, particularly when platelet function is partially inhibited. PMID:24520531

  1. A fast and simple solid phase microextraction coupled with gas chromatography-triple quadrupole mass spectrometry method for the assay of urinary markers of glutaric acidemias.

    PubMed

    Naccarato, Attilio; Gionfriddo, Emanuela; Elliani, Rosangela; Sindona, Giovanni; Tagarelli, Antonio

    2014-10-30

    The analysis of characteristic urinary acidic markers such as glutaric, 3-hydroxyglutaric, 2-hydroxyglutaric, adipic, suberic, sebacic, ethylmalonic, 3-hydroxyisovaleric and isobutyric acid constitutes the recommended follow-up testing procedure for glutaric acidemia type 1 (GA-1) and type 2 (GA-2). The goal of the work presented herein is the development of a fast and simple method for the quantification of these biomarkers in human urine. The proposed analytical approach is based on the use of solid phase microextraction (SPME) combined with gas chromatography-triple quadrupole mass spectrometry (GC-QqQ-MS) after a rapid derivatization of the acidic moieties with propyl chloroformate, propanol and pyridine. Trueness and precision of the proposed protocol, tested at 5, 30 and 80 mg l⁻¹, provided satisfactory values: recoveries were in the range between 72% and 116% and the relative standard deviations (RSD%) were between 0.9% and 18% (except for isobutyric acid at 5 mg l⁻¹). The LOD values achieved by the proposed method ranged between 1.0 and 473 μg l⁻¹. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. Automated and fast building of three-dimensional RNA structures.

    PubMed

    Zhao, Yunjie; Huang, Yangyu; Gong, Zhou; Wang, Yanjie; Man, Jianfen; Xiao, Yi

    2012-01-01

    Building tertiary structures of non-coding RNA is required to understand their functions and to design new molecules. Current algorithms for RNA tertiary structure prediction give satisfactory accuracy only for RNAs of small size and simple topology, and many of them need manual manipulation. Here, we present an automated and fast program, 3dRNA, for RNA tertiary structure prediction with reasonable accuracy for RNAs of larger size and complex topology.

  3. Determinants of High Fasting Insulin and Insulin Resistance Among Overweight/Obese Adolescents.

    PubMed

    Ling, Jerri Chiu Yun; Mohamed, Mohd Nahar Azmi; Jalaludin, Muhammad Yazid; Rampal, Sanjay; Zaharan, Nur Lisa; Mohamed, Zahurin

    2016-11-08

    Hyperinsulinaemia is the earliest subclinical metabolic abnormality, which precedes insulin resistance in obese children. An investigation was conducted on the potential predictors of fasting insulin and insulin resistance among overweight/obese adolescents in a developing Asian country. A total of 173 overweight/obese (BMI > 85th percentile) multi-ethnic Malaysian adolescents aged 13 years were recruited from 23 randomly selected schools in this cross-sectional study. Waist circumference (WC), body fat percentage (BF%), physical fitness score (PFS), fasting glucose and fasting insulin were measured. Insulin resistance was calculated using the homeostasis model assessment of insulin resistance (HOMA-IR). Adjusted stepwise multiple regression analysis was performed to predict fasting insulin and HOMA-IR. Covariates included pubertal stage, socioeconomic status, and nutritional and physical activity scores. One-third of our adolescents were insulin resistant, with girls having significantly higher fasting insulin and HOMA-IR than boys. Gender, pubertal stage, BMI, WC and BF% had significant, positive moderate correlations with fasting insulin and HOMA-IR, while PFS was inversely correlated (p < 0.05). Fasting insulin was primarily predicted by female gender (Beta = 0.305, p < 0.0001), higher BMI (Beta = -0.254, p = 0.02) and greater WC (Beta = 0.242, p = 0.03). This study demonstrated that gender, BMI and WC are simple predictors of fasting insulin and insulin resistance in overweight/obese adolescents.
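
    For reference, HOMA-IR is conventionally computed from fasting glucose and fasting insulin as in the short Python sketch below. The example values are invented, and since the abstract does not state its unit convention, this follows the standard glucose (mmol/L) x insulin (microU/mL) / 22.5 form.

      def homa_ir(fasting_glucose_mmol_l, fasting_insulin_uU_ml):
          """Standard HOMA-IR: glucose (mmol/L) * insulin (microU/mL) / 22.5."""
          return fasting_glucose_mmol_l * fasting_insulin_uU_ml / 22.5

      # Invented example: glucose 5.0 mmol/L, insulin 18 microU/mL -> HOMA-IR of 4.0.
      print(round(homa_ir(5.0, 18.0), 2))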

  4. Testing a simple field method for assessing nitrate removal in riparian zones

    Treesearch

    Philippe Vidon; Michael G. Dosskey

    2008-01-01

    Being able to identify riparian sites that function better for nitrate removal from groundwater is critical to using riparian zones efficiently for water quality management. For this purpose, managers need a method that is quick, inexpensive, and accurate enough to enable effective management decisions. This study assesses the precision and accuracy of a simple...

  5. Improvements in the Approximate Formulae for the Period of the Simple Pendulum

    ERIC Educational Resources Information Center

    Turkyilmazoglu, M.

    2010-01-01

    This paper is concerned with improvements in some exact formulae for the period of the simple pendulum problem. Two recently presented formulae are re-examined and refined rationally, yielding more accurate approximate periods. Based on the improved expressions here, a particular new formula is proposed for the period. It is shown that the derived…
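
    To make the comparison concrete, the Python sketch below evaluates the exact pendulum period through the complete elliptic integral of the first kind and compares it with the small-angle formula. This is textbook material added for illustration and is not one of the improved formulae discussed in the paper.

      import numpy as np
      from scipy.special import ellipk

      def pendulum_period(length_m, theta0_rad, g=9.81):
          """Exact period of a simple pendulum released from rest at amplitude theta0:
          T = 4*sqrt(L/g)*K(m) with m = sin^2(theta0/2); scipy's ellipk takes
          the parameter m rather than the modulus k."""
          m = np.sin(theta0_rad / 2.0) ** 2
          return 4.0 * np.sqrt(length_m / g) * ellipk(m)

      def small_angle_period(length_m, g=9.81):
          """Small-angle approximation T0 = 2*pi*sqrt(L/g)."""
          return 2.0 * np.pi * np.sqrt(length_m / g)

      # At 60 degrees amplitude the exact period exceeds T0 by roughly 7 percent.
      print(pendulum_period(1.0, np.radians(60.0)) / small_angle_period(1.0))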

  6. A fast complex integer convolution using a hybrid transform

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1978-01-01

    It is shown that the Winograd transform can be combined with a complex integer transform over the Galois field GF(q-squared) to yield a new algorithm for computing the discrete cyclic convolution of complex number points. By this means a fast method for accurately computing the cyclic convolution of a sequence of complex numbers for long convolution lengths can be obtained. This new hybrid algorithm requires fewer multiplications than previous algorithms.

  7. Small-size pedestrian detection in large scene based on fast R-CNN

    NASA Astrophysics Data System (ADS)

    Wang, Shengke; Yang, Na; Duan, Lianghua; Liu, Lu; Dong, Junyu

    2018-04-01

    Pedestrian detection is a canonical sub-problem of object detection that has been in high demand during recent years. Although recent deep learning object detectors such as Fast/Faster R-CNN have shown excellent performance for general object detection, they have limited success for small-size pedestrian detection in large-view scenes. We find that the insufficient resolution of feature maps leads to unsatisfactory accuracy when handling small instances. In this paper, we investigate issues involving Fast R-CNN for pedestrian detection. Driven by these observations, we propose a very simple but effective baseline for pedestrian detection based on Fast R-CNN, employing the DPM detector to generate proposals for accuracy, and training a Fast R-CNN style network to jointly optimize small-size pedestrian detection, with skip connections concatenating features from different layers to address the coarseness of the feature maps. The accuracy of small-size pedestrian detection in real large scenes is thereby improved.

  8. A fast and accurate method for perturbative resummation of transverse momentum-dependent observables

    NASA Astrophysics Data System (ADS)

    Kang, Daekyoung; Lee, Christopher; Vaidya, Varun

    2018-04-01

    We propose a novel strategy for the perturbative resummation of transverse momentum-dependent (TMD) observables, using the q_T spectra of gauge bosons (γ*, Higgs) in pp collisions in the regime of low (but perturbative) transverse momentum q_T as a specific example. First we introduce a scheme to choose the factorization scale for virtuality in momentum space instead of in impact parameter space, allowing us to avoid integrating over (or cutting off) a Landau pole in the inverse Fourier transform of the latter to the former. The factorization scale for rapidity is still chosen as a function of impact parameter b, but in such a way designed to obtain a Gaussian form (in ln b) for the exponentiated rapidity evolution kernel, guaranteeing convergence of the b integral. We then apply this scheme to obtain the q_T spectra for Drell-Yan and Higgs production at NNLL accuracy. In addition, using this scheme we are able to obtain a fast semi-analytic formula for the perturbative resummed cross sections in momentum space: analytic in its dependence on all physical variables at each order of logarithmic accuracy, up to a numerical expansion for the pure mathematical Bessel function in the inverse Fourier transform that needs to be performed just once for all observables and kinematics, to any desired accuracy.

  9. A fast and accurate method for perturbative resummation of transverse momentum-dependent observables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, Daekyoung; Lee, Christopher; Vaidya, Varun

    Here, we propose a novel strategy for the perturbative resummation of transverse momentum-dependent (TMD) observables, using the q_T spectra of gauge bosons (γ*, Higgs) in pp collisions in the regime of low (but perturbative) transverse momentum q_T as a specific example. First we introduce a scheme to choose the factorization scale for virtuality in momentum space instead of in impact parameter space, allowing us to avoid integrating over (or cutting off) a Landau pole in the inverse Fourier transform of the latter to the former. The factorization scale for rapidity is still chosen as a function of impact parameter b, but in such a way designed to obtain a Gaussian form (in ln b) for the exponentiated rapidity evolution kernel, guaranteeing convergence of the b integral. We then apply this scheme to obtain the q_T spectra for Drell-Yan and Higgs production at NNLL accuracy. In addition, using this scheme we are able to obtain a fast semi-analytic formula for the perturbative resummed cross sections in momentum space: analytic in its dependence on all physical variables at each order of logarithmic accuracy, up to a numerical expansion for the pure mathematical Bessel function in the inverse Fourier transform that needs to be performed just once for all observables and kinematics, to any desired accuracy.

  10. Utilizing fast multipole expansions for efficient and accurate quantum-classical molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Schwörer, Magnus; Lorenzen, Konstantin; Mathias, Gerald; Tavan, Paul

    2015-03-01

    Recently, a novel approach to hybrid quantum mechanics/molecular mechanics (QM/MM) molecular dynamics (MD) simulations has been suggested [Schwörer et al., J. Chem. Phys. 138, 244103 (2013)]. Here, the forces acting on the atoms are calculated by grid-based density functional theory (DFT) for a solute molecule and by a polarizable molecular mechanics (PMM) force field for a large solvent environment composed of several 103-105 molecules as negative gradients of a DFT/PMM hybrid Hamiltonian. The electrostatic interactions are efficiently described by a hierarchical fast multipole method (FMM). Adopting recent progress of this FMM technique [Lorenzen et al., J. Chem. Theory Comput. 10, 3244 (2014)], which particularly entails a strictly linear scaling of the computational effort with the system size, and adapting this revised FMM approach to the computation of the interactions between the DFT and PMM fragments of a simulation system, here, we show how one can further enhance the efficiency and accuracy of such DFT/PMM-MD simulations. The resulting gain of total performance, as measured for alanine dipeptide (DFT) embedded in water (PMM) by the product of the gains in efficiency and accuracy, amounts to about one order of magnitude. We also demonstrate that the jointly parallelized implementation of the DFT and PMM-MD parts of the computation enables the efficient use of high-performance computing systems. The associated software is available online.

  11. A fast and accurate method for perturbative resummation of transverse momentum-dependent observables

    DOE PAGES

    Kang, Daekyoung; Lee, Christopher; Vaidya, Varun

    2018-04-27

    Here, we propose a novel strategy for the perturbative resummation of transverse momentum-dependent (TMD) observables, using the q_T spectra of gauge bosons (γ*, Higgs) in pp collisions in the regime of low (but perturbative) transverse momentum q_T as a specific example. First we introduce a scheme to choose the factorization scale for virtuality in momentum space instead of in impact parameter space, allowing us to avoid integrating over (or cutting off) a Landau pole in the inverse Fourier transform of the latter to the former. The factorization scale for rapidity is still chosen as a function of impact parameter b, but in such a way designed to obtain a Gaussian form (in ln b) for the exponentiated rapidity evolution kernel, guaranteeing convergence of the b integral. We then apply this scheme to obtain the q_T spectra for Drell-Yan and Higgs production at NNLL accuracy. In addition, using this scheme we are able to obtain a fast semi-analytic formula for the perturbative resummed cross sections in momentum space: analytic in its dependence on all physical variables at each order of logarithmic accuracy, up to a numerical expansion for the pure mathematical Bessel function in the inverse Fourier transform that needs to be performed just once for all observables and kinematics, to any desired accuracy.

  12. Fast realization of nonrecursive digital filters with limits on signal delay

    NASA Astrophysics Data System (ADS)

    Titov, M. A.; Bondarenko, N. N.

    1983-07-01

    Attention is given to the problem of achieving a fast realization of nonrecursive digital filters with the aim of reducing signal delay. It is shown that a realization wherein the impulse characteristic of the filter is divided into blocks satisfies the delay requirements and is almost as economical in terms of the number of multiplications as conventional fast convolution. In addition, the block method leads to a reduction in the needed size of the memory and in the number of additions; the short-convolution procedure is substantially simplified. Finally, the block method facilitates the paralleling of computations owing to the simple transfers between subfilters.
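
    A minimal sketch of block-partitioned FIR filtering is given below in Python: the impulse response is split into blocks and the correspondingly delayed partial convolutions are summed, which is the basic idea behind reducing signal delay. Real low-delay implementations compute each partial convolution with short FFTs on streaming input; the block length here is an arbitrary illustrative choice.

      import numpy as np

      def partitioned_fir(x, h, part_len=64):
          """Convolve x with h by splitting the impulse response h into blocks of
          length part_len and summing the correspondingly delayed partial
          convolutions; the result equals the full convolution np.convolve(x, h)."""
          y = np.zeros(len(x) + len(h) - 1)
          for start in range(0, len(h), part_len):
              h_part = h[start:start + part_len]
              seg = np.convolve(x, h_part)
              y[start:start + len(seg)] += seg
          return y

      # Sanity check against direct convolution.
      rng = np.random.default_rng(0)
      x = rng.standard_normal(1000)
      h = rng.standard_normal(300)
      assert np.allclose(partitioned_fir(x, h), np.convolve(x, h))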

  13. Open Probe fast GC-MS - combining ambient sampling ultra-fast separation and in-vacuum ionization for real-time analysis.

    PubMed

    Keshet, U; Alon, T; Fialkov, A B; Amirav, A

    2017-07-01

    An Open Probe inlet was combined with a low-thermal-mass ultra-fast gas chromatograph (GC), an in-vacuum electron ionization ion source, and the mass spectrometer (MS) of a GC-MS to obtain real-time analysis with separation. The Open Probe enables ambient sampling via sample vaporization in an oven that is open to room air, and the ultra-fast GC provides ~30-s separation; if no separation is required, it can act as a transfer line with a 2 to 3-s sample transfer time. Sample analysis is as simple as touching the sample, pushing the sample holder into the Open Probe oven and obtaining the results in 30 s. The Open Probe fast GC was mounted on a standard Agilent 7890 GC that was coupled with an Agilent 5977A MS. Open Probe fast GC-MS provides real-time analysis combined with GC separation and library identification, and it uses the low-cost MS of GC-MS. The operation of Open Probe fast GC-MS is demonstrated in the 30-s separation and 50-s full analysis cycle time of tetrahydrocannabinol and cannabinol in Cannabis flower, sub-1-min analysis of trace trinitrotoluene transferred from a finger onto a glass surface, vitamin E in canola oil, sterols in olive oil, polybrominated flame retardants in plastics, alprazolam in a Xanax drug pill, and free fatty acids and cholesterol in human blood. The extrapolated limit of detection for pyrene is <1 fg, but the concentration is too high and the software noise calculation is untrustworthy. The broad range of compounds amenable to analysis is demonstrated in the analysis of reserpine. The possible alternate use of standard GC-MS and Open Probe fast GC-MS is demonstrated in the analysis of heroin in its street drug powder. The use of the Open Probe with the fast GC acting as a transfer line is demonstrated in a <10-s analysis without separation of ibuprofen and estradiol. Copyright © 2017 John Wiley & Sons, Ltd.

  14. Fast Simulation of the Impact Parameter Calculation of Electrons through Pair Production

    NASA Astrophysics Data System (ADS)

    Bang, Hyesun; Kweon, MinJung; Huh, Kyoung Bum; Pachmayer, Yvonne

    2018-05-01

    A fast simulation method is introduced that tremendously reduces the time required for the impact parameter calculation, a key observable in physics analyses of high energy physics experiments and in detector optimisation studies. The impact parameter of electrons produced through pair production was calculated considering key related processes using the Bethe-Heitler formula, the Tsai formula and a simple geometric model. The calculations were performed under various conditions and the results were compared with those from full GEANT4 simulations. The computation time using this fast simulation method is 10⁴ times shorter than that of the full GEANT4 simulation.

  15. Simple, fast, and low-cost camera-based water content measurement with colorimetric fluorescent indicator

    NASA Astrophysics Data System (ADS)

    Song, Seok-Jeong; Kim, Tae-Il; Kim, Youngmi; Nam, Hyoungsik

    2018-05-01

    Recently, a simple, sensitive, and low-cost fluorescent indicator has been proposed to determine water contents in organic solvents, drugs, and foodstuffs. A change in water content leads to a change in the indicator's fluorescence color under ultraviolet (UV) light. Whereas in the previous research the water content values could be estimated only from the spectrum obtained by a bulky and expensive spectrometer, this paper demonstrates a simple and low-cost camera-based water content measurement scheme with the same fluorescent water indicator. Water content is calculated over the range of 0-30% by quadratic polynomial regression models with color information extracted from the captured images of samples. In particular, several color spaces such as RGB, xyY, L∗a∗b∗, u′v′, HSV, and YCBCR have been investigated to establish the optimal color information features over both linear and nonlinear RGB data given by a camera before and after gamma correction. In the end, a 2nd order polynomial regression model along with HSV in the linear domain achieves the minimum mean square error of 1.06% for a 3-fold cross validation method. Additionally, the resultant water content estimation model is implemented and evaluated in an off-the-shelf Android-based smartphone.
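
    The regression step described above is straightforward; the Python sketch below fits a second-order polynomial to hypothetical hue-versus-water-content calibration points with NumPy. The numeric values are invented for illustration and do not come from the paper.

      import numpy as np

      # Hypothetical calibration: hue of the indicator's fluorescence (HSV H channel,
      # scaled 0-1) versus known water content in percent.
      hue = np.array([0.62, 0.58, 0.53, 0.47, 0.40, 0.33, 0.25])
      water_percent = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0])

      # Second-order polynomial regression, as in the paper's modeling approach.
      model = np.poly1d(np.polyfit(hue, water_percent, 2))
      print(model(0.50))  # predicted water content (%) for a new sample's hue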

  16. Strawberry: Fast and accurate genome-guided transcript reconstruction and quantification from RNA-Seq.

    PubMed

    Liu, Ruolin; Dickerson, Julie

    2017-11-01

    We propose a novel method and software tool, Strawberry, for transcript reconstruction and quantification from RNA-Seq data under the guidance of genome alignment and independent of gene annotation. Strawberry consists of two modules: assembly and quantification. The novelty of Strawberry is that the two modules use different optimization frameworks but utilize the same data graph structure, which allows a highly efficient, expandable and accurate algorithm for dealing with large data. The assembly module parses aligned reads into splicing graphs, and uses network flow algorithms to select the most likely transcripts. The quantification module uses a latent class model to assign read counts from the nodes of splicing graphs to transcripts. Strawberry simultaneously estimates the transcript abundances and corrects for sequencing bias through an EM algorithm. Based on simulations, Strawberry outperforms Cufflinks and StringTie in terms of both assembly and quantification accuracy. In an evaluation on a real data set, the transcript expression estimated by Strawberry has the highest correlation with Nanostring probe counts, an independent experimental measure of transcript expression. Strawberry is written in C++14, and is available as open source software at https://github.com/ruolin/strawberry under the MIT license.

  17. A dental vision system for accurate 3D tooth modeling.

    PubMed

    Zhang, Li; Alemzadeh, K

    2006-01-01

    This paper describes an active vision system based reverse engineering approach to extract the three-dimensional (3D) geometric information from dental teeth and transfer this information into Computer-Aided Design/Computer-Aided Manufacture (CAD/CAM) systems to improve the accuracy of 3D teeth models and at the same time improve the quality of the construction units to help patient care. The vision system involves the development of a dental vision rig, edge detection, boundary tracing and fast & accurate 3D modeling from a sequence of sliced silhouettes of physical models. The rig is designed using engineering design methods such as a concept selection matrix and weighted objectives evaluation chart. Reconstruction results and accuracy evaluation are presented on digitizing different teeth models.

  18. Stability properties and fast ion confinement of hybrid tokamak plasma configurations

    NASA Astrophysics Data System (ADS)

    Graves, J. P.; Brunetti, D.; Pfefferle, D.; Faustin, J. M. P.; Cooper, W. A.; Kleiner, A.; Lanthaler, S.; Patten, H. W.; Raghunathan, M.

    2015-11-01

    In hybrid scenarios with flat q just above unity, extremely fast growing tearing modes are born from toroidal sidebands of the near resonant ideal internal kink mode. New scalings of the growth rate with the magnetic Reynolds number arise from two fluid effects and sheared toroidal flow. Non-linear saturated 1/1 dominant modes obtained from initial value stability calculation agree with the amplitude of the 1/1 component of a 3D VMEC equilibrium calculation. Viable and realistic equilibrium representation of such internal kink modes allow fast ion studies to be accurately established. Calculations of MAST neutral beam ion distributions using the VENUS-LEVIS code show very good agreement of observed impaired core fast ion confinement when long lived modes occur. The 3D ICRH code SCENIC also enables the establishment of minority RF distributions in hybrid plasmas susceptible to saturated near resonant internal kink modes.

  19. Neighborhood fast food availability and fast food consumption.

    PubMed

    Oexle, Nathalie; Barnes, Timothy L; Blake, Christine E; Bell, Bethany A; Liese, Angela D

    2015-09-01

    Recent nutritional and public health research has focused on how the availability of various types of food in a person's immediate area or neighborhood influences his or her food choices and eating habits. It has been theorized that people living in areas with a wealth of unhealthy fast-food options may show higher levels of fast-food consumption, a factor that often coincides with being overweight or obese. However, measuring food availability in a particular area is difficult to achieve consistently: there may be differences in the strict physical locations of food options as compared to how individuals perceive their personal food availability, and various studies may use either one or both of these measures. The aim of this study was to evaluate the association between weekly fast-food consumption and both a person's perceived availability of fast-food and an objective measure of fast-food presence - Geographic Information Systems (GIS) - within that person's neighborhood. A randomly selected population-based sample of eight counties in South Carolina was used to conduct a cross-sectional telephone survey assessing self-report fast-food consumption and perceived availability of fast food. GIS was used to determine the actual number of fast-food outlets within each participant's neighborhood. Using multinomial logistic regression analyses, we found that neither perceived availability nor GIS-based presence of fast-food was significantly associated with weekly fast-food consumption. Our findings indicate that availability might not be the dominant factor influencing fast-food consumption. We recommend using subjective availability measures and considering individual characteristics that could influence both perceived availability of fast food and its impact on fast-food consumption. If replicated, our findings suggest that interventions aimed at reducing fast-food consumption by limiting neighborhood fast-food availability might not be completely effective

  20. Simple and rapid analytical method for detection of amino acids in blood using blood spot on filter paper, fast-GC/MS and isotope dilution technique.

    PubMed

    Kawana, Shuichi; Nakagawa, Katsuhiro; Hasegawa, Yuki; Yamaguchi, Seiji

    2010-11-15

    A simple and rapid method for quantitative analysis of amino acids, including valine (Val), leucine (Leu), isoleucine (Ile), methionine (Met) and phenylalanine (Phe), in whole blood has been developed using GC/MS. In this method, whole blood was collected using a filter paper technique, and a 1/8 in. blood spot punch was used for sample preparation. Amino acids were extracted from the sample, and the extracts were purified using cation-exchange resins. The isotope dilution method using ²H₈-Val, ²H₃-Leu, ²H₃-Met and ²H₅-Phe as internal standards was applied. Following propyl chloroformate derivatization, the derivatives were analyzed using fast-GC/MS. The extraction recoveries using these techniques ranged from 69.8% to 87.9%, and analysis time for each sample was approximately 26 min. Calibration curves at concentrations from 0.0 to 1666.7 μmol/l for Val, Leu, Ile and Phe and from 0.0 to 333.3 μmol/l for Met showed good linearity with regression coefficients=1. The method detection limits for Val, Leu, Ile, Met and Phe were 24.2, 16.7, 8.7, 1.5 and 12.9 μmol/l, respectively. This method was applied to blood spot samples obtained from patients with phenylketonuria (PKU), maple syrup urine disease (MSUD), hypermethionine and neonatal intrahepatic cholestasis caused by citrin deficiency (NICCD), and the analysis results showed that the concentrations of amino acids that characterize these diseases were increased. These results indicate that this method provides a simple and rapid procedure for precise determination of amino acids in whole blood. Copyright © 2010 Elsevier B.V. All rights reserved.

  1. Vision drives accurate approach behavior during prey capture in laboratory mice

    PubMed Central

    Hoy, Jennifer L.; Yavorska, Iryna; Wehr, Michael; Niell, Cristopher M.

    2016-01-01

    Summary The ability to genetically identify and manipulate neural circuits in the mouse is rapidly advancing our understanding of visual processing in the mammalian brain [1,2]. However, studies investigating the circuitry that underlies complex ethologically-relevant visual behaviors in the mouse have been primarily restricted to fear responses [3–5]. Here, we show that a laboratory strain of mouse (Mus musculus, C57BL/6J) robustly pursues, captures and consumes live insect prey, and that vision is necessary for mice to perform the accurate orienting and approach behaviors leading to capture. Specifically, we differentially perturbed visual or auditory input in mice and determined that visual input is required for accurate approach, allowing maintenance of bearing to within 11 degrees of the target on average during pursuit. While mice were able to capture prey without vision, the accuracy of their approaches and capture rate dramatically declined. To better explore the contribution of vision to this behavior, we developed a simple assay that isolated visual cues and simplified analysis of the visually guided approach. Together, our results demonstrate that laboratory mice are capable of exhibiting dynamic and accurate visually-guided approach behaviors, and provide a means to estimate the visual features that drive behavior within an ethological context. PMID:27773567

  2. Rapid Classification and Identification of Multiple Microorganisms with Accurate Statistical Significance via High-Resolution Tandem Mass Spectrometry

    NASA Astrophysics Data System (ADS)

    Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y.; Drake, Steven K.; Gucek, Marjan; Sacks, David B.; Yu, Yi-Kuo

    2018-06-01

    Rapid and accurate identification and classification of microorganisms is of paramount importance to public health and safety. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is complicating correct microbial identification even in a simple sample due to the large number of candidates present. To properly untwine candidate microbes in samples containing one or more microbes, one needs to go beyond apparent morphology or simple "fingerprinting"; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptide-centric representations of microbes to better separate them and by augmenting our earlier analysis method that yields accurate statistical significance. Here, we present an updated analysis workflow that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using 226 MS/MS publicly available data files (each containing from 2500 to nearly 100,000 MS/MS spectra) and 4000 additional MS/MS data files, that the updated workflow can correctly identify multiple microbes at the genus and often the species level for samples containing more than one microbe. We have also shown that the proposed workflow computes accurate statistical significances, i.e., E values for identified peptides and unified E values for identified microbes. Our updated analysis workflow MiCId, a freely available software for Microorganism Classification and Identification, is available for download at https://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.

  3. Rapid Classification and Identification of Multiple Microorganisms with Accurate Statistical Significance via High-Resolution Tandem Mass Spectrometry.

    PubMed

    Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y; Drake, Steven K; Gucek, Marjan; Sacks, David B; Yu, Yi-Kuo

    2018-06-05

    Rapid and accurate identification and classification of microorganisms is of paramount importance to public health and safety. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is complicating correct microbial identification even in a simple sample due to the large number of candidates present. To properly untwine candidate microbes in samples containing one or more microbes, one needs to go beyond apparent morphology or simple "fingerprinting"; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptide-centric representations of microbes to better separate them and by augmenting our earlier analysis method that yields accurate statistical significance. Here, we present an updated analysis workflow that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using 226 MS/MS publicly available data files (each containing from 2500 to nearly 100,000 MS/MS spectra) and 4000 additional MS/MS data files, that the updated workflow can correctly identify multiple microbes at the genus and often the species level for samples containing more than one microbe. We have also shown that the proposed workflow computes accurate statistical significances, i.e., E values for identified peptides and unified E values for identified microbes. Our updated analysis workflow MiCId, a freely available software for Microorganism Classification and Identification, is available for download at https://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.

  4. fast_protein_cluster: parallel and optimized clustering of large-scale protein modeling data.

    PubMed

    Hung, Ling-Hong; Samudrala, Ram

    2014-06-15

    fast_protein_cluster is a fast, parallel and memory efficient package used to cluster 60 000 sets of protein models (with up to 550 000 models per set) generated by the Nutritious Rice for the World project. fast_protein_cluster is an optimized and extensible toolkit that supports Root Mean Square Deviation after optimal superposition (RMSD) and Template Modeling score (TM-score) as metrics. RMSD calculations using a laptop CPU are 60× faster than qcprot and 3× faster than current graphics processing unit (GPU) implementations. New GPU code further increases the speed of RMSD and TM-score calculations. fast_protein_cluster provides novel k-means and hierarchical clustering methods that are up to 250× and 2000× faster, respectively, than Clusco, and identify significantly more accurate models than Spicker and Clusco. fast_protein_cluster is written in C++ using OpenMP for multi-threading support. Custom streaming Single Instruction Multiple Data (SIMD) extensions and advanced vector extension intrinsics code accelerate CPU calculations, and OpenCL kernels support AMD and Nvidia GPUs. fast_protein_cluster is available under the M.I.T. license. (http://software.compbio.washington.edu/fast_protein_cluster) © The Author 2014. Published by Oxford University Press.
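
    For readers unfamiliar with the metric, RMSD after optimal superposition is conventionally computed with the Kabsch algorithm; the sketch below is a plain NumPy version for two conformations, not the SIMD/OpenCL-optimized code used in fast_protein_cluster:

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two (N, 3) coordinate sets after optimal rigid superposition."""
    P = P - P.mean(axis=0)                    # remove translations
    Q = Q - Q.mean(axis=0)
    H = P.T @ Q                               # 3x3 covariance matrix
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # optimal rotation
    return np.sqrt(np.mean(np.sum((P @ R.T - Q) ** 2, axis=1)))
```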

  5. Accurate Bit Error Rate Calculation for Asynchronous Chaos-Based DS-CDMA over Multipath Channel

    NASA Astrophysics Data System (ADS)

    Kaddoum, Georges; Roviras, Daniel; Chargé, Pascal; Fournier-Prunaret, Daniele

    2009-12-01

    An accurate approach to compute the bit error rate expression for multiuser chaos-based DS-CDMA systems is presented in this paper. For a more realistic communication system, a slow fading multipath channel is considered, together with a simple RAKE receiver structure. Based on the bit energy distribution, this approach gives accurate results at low computational cost compared with other computation methods in the literature. Perfect estimation of the channel coefficients with the associated delays and chaos synchronization is assumed. The bit error rate is derived in terms of the bit energy distribution, the number of paths, the noise variance, and the number of users. Results are illustrated by theoretical calculations and numerical simulations, which point out the accuracy of our approach.
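
    The essential idea of averaging over the bit energy distribution can be illustrated in a stripped-down, single-user AWGN setting, where the BER is the Gaussian Q-function evaluated per bit energy and then averaged; the full multiuser, multipath expression in the paper is considerably more involved. A sketch under those simplifying assumptions:

```python
import numpy as np
from scipy.special import erfc

def q_function(x):
    return 0.5 * erfc(x / np.sqrt(2.0))

def ber_over_energy_distribution(bit_energies, n0):
    """Average BER over an empirical per-bit energy distribution (single user, AWGN)."""
    return np.mean(q_function(np.sqrt(2.0 * np.asarray(bit_energies) / n0)))

# Chaotic spreading makes the bit energy vary from bit to bit (hypothetical distribution)
rng = np.random.default_rng(0)
eb = rng.gamma(shape=5.0, scale=0.2, size=10_000)
print(ber_over_energy_distribution(eb, n0=0.5))
```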

  6. Accurate radiation temperature and chemical potential from quantitative photoluminescence analysis of hot carrier populations.

    PubMed

    Gibelli, François; Lombez, Laurent; Guillemoles, Jean-François

    2017-02-15

    In order to characterize hot carrier populations in semiconductors, photoluminescence measurement is a convenient tool, enabling us to probe the carrier thermodynamical properties in a contactless way. However, the analysis of the photoluminescence spectra is based on some assumptions which will be discussed in this work. We especially emphasize the importance of the variation of the material absorptivity that should be considered to access accurate thermodynamical properties of the carriers, especially by varying the excitation power. The proposed method enables us to obtain more accurate results of thermodynamical properties by taking into account a rigorous physical description and finds direct application in investigating hot carrier solar cells, which are an adequate concept for achieving high conversion efficiencies with a relatively simple device architecture.
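
    In the Boltzmann limit, this kind of analysis usually rests on the generalized Planck law, I_PL(E) ∝ A(E) E² exp(−(E − Δμ)/kT), so dividing out a measured absorptivity A(E) before fitting the high-energy tail yields the carrier temperature; a rough sketch of that step (illustrative only; extracting the chemical potential additionally requires an absolute intensity calibration):

```python
import numpy as np

K_B_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def carrier_temperature_from_tail(energy_ev, pl_intensity, absorptivity):
    """Carrier temperature from a linear fit of ln(I / (A * E^2)) versus E.

    The slope of the high-energy tail is -1/(k_B T) in the Boltzmann approximation;
    the absorptivity A(E) must be measured, not assumed constant.
    """
    y = np.log(pl_intensity / (absorptivity * energy_ev ** 2))
    slope, _intercept = np.polyfit(energy_ev, y, 1)
    return -1.0 / (K_B_EV * slope)
```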

  7. Machine learning predictions of molecular properties: Accurate many-body potentials and nonlocality in chemical space

    DOE PAGES

    Hansen, Katja; Biegler, Franziska; Ramakrishnan, Raghunathan; ...

    2015-06-04

    Simultaneously accurate and efficient prediction of molecular properties throughout chemical compound space is a critical ingredient toward rational compound design in chemical and pharmaceutical industries. Aiming toward this goal, we develop and apply a systematic hierarchy of efficient empirical methods to estimate atomization and total energies of molecules. These methods range from a simple sum over atoms, to addition of bond energies, to pairwise interatomic force fields, reaching to the more sophisticated machine learning approaches that are capable of describing collective interactions between many atoms or bonds. In the case of equilibrium molecular geometries, even simple pairwise force fields demonstrate prediction accuracy comparable to benchmark energies calculated using density functional theory with hybrid exchange-correlation functionals; however, accounting for the collective many-body interactions proves to be essential for approaching the “holy grail” of chemical accuracy of 1 kcal/mol for both equilibrium and out-of-equilibrium geometries. This remarkable accuracy is achieved by a vectorized representation of molecules (so-called Bag of Bonds model) that exhibits strong nonlocality in chemical space. The same representation allows us to predict accurate electronic properties of molecules, such as their polarizability and molecular frontier orbital energies.
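
    The regression engine typically used with such vectorized representations is kernel ridge regression; a bare-bones NumPy version with a Gaussian kernel is sketched below, with generic feature vectors standing in for the Bag of Bonds descriptor (descriptor construction itself is not shown):

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def krr_fit(X_train, y_train, sigma=10.0, lam=1e-6):
    """Solve (K + lambda*I) alpha = y for the regression weights."""
    K = gaussian_kernel(X_train, X_train, sigma)
    return np.linalg.solve(K + lam * np.eye(len(y_train)), y_train)

def krr_predict(X_test, X_train, alpha, sigma=10.0):
    return gaussian_kernel(X_test, X_train, sigma) @ alpha
```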

  8. Machine Learning Predictions of Molecular Properties: Accurate Many-Body Potentials and Nonlocality in Chemical Space

    PubMed Central

    2015-01-01

    Simultaneously accurate and efficient prediction of molecular properties throughout chemical compound space is a critical ingredient toward rational compound design in chemical and pharmaceutical industries. Aiming toward this goal, we develop and apply a systematic hierarchy of efficient empirical methods to estimate atomization and total energies of molecules. These methods range from a simple sum over atoms, to addition of bond energies, to pairwise interatomic force fields, reaching to the more sophisticated machine learning approaches that are capable of describing collective interactions between many atoms or bonds. In the case of equilibrium molecular geometries, even simple pairwise force fields demonstrate prediction accuracy comparable to benchmark energies calculated using density functional theory with hybrid exchange-correlation functionals; however, accounting for the collective many-body interactions proves to be essential for approaching the “holy grail” of chemical accuracy of 1 kcal/mol for both equilibrium and out-of-equilibrium geometries. This remarkable accuracy is achieved by a vectorized representation of molecules (so-called Bag of Bonds model) that exhibits strong nonlocality in chemical space. In addition, the same representation allows us to predict accurate electronic properties of molecules, such as their polarizability and molecular frontier orbital energies. PMID:26113956

  9. Accurate SERS detection of malachite green in aquatic products on basis of graphene wrapped flexible sensor.

    PubMed

    Ouyang, Lei; Yao, Ling; Zhou, Taohong; Zhu, Lihua

    2018-10-16

    Malachite Green (MG) is a banned pesticide for aquaculture products. As a required inspection item, its fast and accurate determination before products reach the market is very important. Surface enhanced Raman scattering (SERS) is a promising tool for MG sensing, but several problems must be overcome, such as fairly poor sensitivity and reproducibility, and especially laser-induced chemical conversion and photo-bleaching during SERS observation. Using a graphene-wrapped, Ag-array-based flexible membrane sensor, a modified SERS strategy was proposed for the sensitive and accurate detection of MG. The graphene layer functioned as an inert protector, impeding chemical conversion of the bioproduct Leucomalachite Green (LMG) to MG during SERS detection, and as a heat transmitter, preventing laser-induced photo-bleaching, which enables the separate detection of MG and LMG in fish extracts. The combination of the Ag array and the graphene cover also produced plentiful, densely and uniformly distributed hot spots, leading to an analytical enhancement factor of up to 3.9 × 10⁸ and excellent reproducibility (relative standard deviation as low as 5.8% over 70 runs). The proposed method was easily used for MG detection with a limit of detection (LOD) as low as 2.7 × 10⁻¹¹ mol L⁻¹. The flexibility of the sensor makes it well suited to in-field fast detection of MG residues on the scale of a living fish through a surface extraction and paste-transfer approach. The developed strategy was successfully applied to the analysis of real samples, showing good prospects for both fast inspection and quantitative detection of MG. Copyright © 2018 Elsevier B.V. All rights reserved.

  10. Fast left ventricle tracking in CMR images using localized anatomical affine optical flow

    NASA Astrophysics Data System (ADS)

    Queirós, Sandro; Vilaça, João. L.; Morais, Pedro; Fonseca, Jaime C.; D'hooge, Jan; Barbosa, Daniel

    2015-03-01

    In daily cardiology practice, assessment of left ventricular (LV) global function using non-invasive imaging remains central for the diagnosis and follow-up of patients with cardiovascular diseases. Despite the different methodologies currently accessible for LV segmentation in cardiac magnetic resonance (CMR) images, a fast and complete LV delineation is still not widely available for routine use. In this study, a localized anatomically constrained affine optical flow method is proposed for fast and automatic LV tracking throughout the full cardiac cycle in short-axis CMR images. Starting from an automatically delineated LV in the end-diastolic frame, the endocardial and epicardial boundaries are propagated by estimating the motion between adjacent cardiac phases using optical flow. In order to reduce the computational burden, the motion is only estimated in an anatomical region of interest around the tracked boundaries and subsequently integrated into a local affine motion model. Such localized estimation enables the capture of complex motion patterns, while still being spatially consistent. The method was validated on 45 CMR datasets taken from the 2009 MICCAI LV segmentation challenge. The proposed approach proved to be robust and efficient, with an average distance error of 2.1 mm and a correlation with reference ejection fraction of 0.98 (1.9 +/- 4.5%). Moreover, it proved to be fast, taking 5 seconds for the tracking of a full 4D dataset (30 ms per image). Overall, a novel fast, robust and accurate LV tracking methodology was proposed, enabling accurate assessment of relevant global function cardiac indices, such as volumes and ejection fraction.
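
    The local affine motion model step amounts to fitting a 2D affine transform to the optical-flow displacements inside the anatomical region of interest by linear least squares; a compact sketch of that fit (point coordinates and flow vectors are assumed to be given):

```python
import numpy as np

def fit_affine_from_flow(points, displacements):
    """Fit x' = A x + t to ROI points and their optical-flow displacements.

    points:        (N, 2) pixel coordinates inside the region of interest
    displacements: (N, 2) estimated flow vectors at those points
    """
    targets = points + displacements
    X = np.hstack([points, np.ones((len(points), 1))])    # rows [x, y, 1]
    params, *_ = np.linalg.lstsq(X, targets, rcond=None)  # shape (3, 2)
    A, t = params[:2].T, params[2]
    return A, t
```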

  11. Simple optimized Brenner potential for thermodynamic properties of diamond

    NASA Astrophysics Data System (ADS)

    Liu, F.; Tang, Q. H.; Shang, B. S.; Wang, T. C.

    2012-02-01

    We have examined the commonly used Brenner potentials in the context of the thermodynamic properties of diamond. A simple optimized Brenner potential is proposed that provides very good predictions of the thermodynamic properties of diamond. It is shown that, compared to the experimental data, the lattice wave theory of molecular dynamics (LWT) with this optimized Brenner potential can accurately predict the temperature dependence of specific heat, lattice constant, Grüneisen parameters and coefficient of thermal expansion (CTE) of diamond.

  12. Improved Detection System Description and New Method for Accurate Calibration of Micro-Channel Plate Based Instruments and Its Use in the Fast Plasma Investigation on NASA's Magnetospheric MultiScale Mission

    NASA Technical Reports Server (NTRS)

    Gliese, U.; Avanov, L. A.; Barrie, A. C.; Kujawski, J. T.; Mariano, A. J.; Tucker, C. J.; Chornay, D. J.; Cao, N. T.; Gershman, D. J.; Dorelli, J. C.; hide

    2015-01-01

    The Fast Plasma Investigation (FPI) on NASA's Magnetospheric MultiScale (MMS) mission employs 16 Dual Electron Spectrometers (DESs) and 16 Dual Ion Spectrometers (DISs) with 4 of each type on each of 4 spacecraft to enable fast (30 ms for electrons; 150 ms for ions) and spatially differentiated measurements of the full 3D particle velocity distributions. This approach presents a new and challenging aspect to the calibration and operation of these instruments on the ground and in flight. The response uniformity, the reliability of their calibration and the approach to handling any temporal evolution of these calibrated characteristics all assume enhanced importance in this application, where we attempt to understand the meaning of particle distributions within the ion and electron diffusion regions of magnetically reconnecting plasmas. Traditionally, the micro-channel plate (MCP) based detection systems for electrostatic particle spectrometers have been calibrated using the plateau curve technique. In this, a fixed detection threshold is set. The detection system count rate is then measured as a function of MCP voltage to determine the MCP voltage that ensures the count rate has reached a constant value independent of further variation in the MCP voltage. This is achieved when most of the MCP pulse height distribution (PHD) is located at higher values (larger pulses) than the detection system discrimination threshold. This method is adequate in single-channel detection systems and in multi-channel detection systems with very low crosstalk between channels. However, in dense multi-channel systems, it can be inadequate. Furthermore, it fails to fully describe the behavior of the detection system and individually characterize each of its fundamental parameters. To improve this situation, we have developed a detailed phenomenological description of the detection system, its behavior and its signal, crosstalk and noise sources. Based on this, we have devised a new detection
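
    As background, the plateau-curve calibration described above boils down to scanning the MCP bias and locating the voltage beyond which the count rate stops changing appreciably; a simplified sketch (the 1% relative-change tolerance is an arbitrary illustrative choice, not an FPI setting):

```python
import numpy as np

def plateau_voltage(mcp_voltages, count_rates, rel_tol=0.01):
    """Lowest MCP voltage at which the count rate has plateaued.

    The plateau is declared where the relative change between consecutive
    voltage steps stays below rel_tol for the remainder of the scan.
    """
    rates = np.asarray(count_rates, dtype=float)
    rel_change = np.abs(np.diff(rates)) / rates[1:]
    for i in range(len(rel_change)):
        if np.all(rel_change[i:] < rel_tol):
            return mcp_voltages[i + 1]
    return None  # no plateau reached within the scanned range
```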

  13. FAST TRACK COMMUNICATION Accurate estimate of α variation and isotope shift parameters in Na and Mg+

    NASA Astrophysics Data System (ADS)

    Sahoo, B. K.

    2010-12-01

    We present accurate calculations of fine-structure constant variation coefficients and isotope shifts in Na and Mg+ using the relativistic coupled-cluster method. In our approach, we are able to discover the roles of various correlation effects explicitly to all orders in these calculations. Most of the results, especially for the excited states, are reported for the first time. It is possible to ascertain suitable anchor and probe lines for the studies of possible variation in the fine-structure constant by using the above results in the considered systems.

  14. High performance pipelined multiplier with fast carry-save adder

    NASA Technical Reports Server (NTRS)

    Wu, Angus

    1990-01-01

    A high-performance pipelined multiplier is described. Its high performance results from the fast carry-save adder basic cell, which has a simple structure and is suitable for the Gate Forest semi-custom environment. The carry-save adder computes the sum and carry within two gate delays. Results show that the proposed adder can operate at 200 MHz for a 2-micron CMOS process; better performance is expected in a Gate Forest realization.
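
    Functionally, a carry-save cell is a 3:2 compressor: the sum bit is the XOR of the three inputs and the carry bit is their majority, with carry propagation deferred to a final adder stage. A behavioural sketch of that logic (software illustration only; the actual cell is a gate-level CMOS design):

```python
def carry_save_add(a, b, c, width=32):
    """One carry-save (3:2 compressor) stage applied bitwise to three operands."""
    mask = (1 << width) - 1
    sum_word = (a ^ b ^ c) & mask                              # XOR gives the sum bits
    carry_word = (((a & b) | (b & c) | (a & c)) << 1) & mask   # majority gives the carries
    return sum_word, carry_word

s, cy = carry_save_add(13, 7, 9)
assert s + cy == 13 + 7 + 9   # a final carry-propagate add resolves the result
```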

  15. Reconsidering "evidence" for fast-and-frugal heuristics.

    PubMed

    Hilbig, Benjamin E

    2010-12-01

    In several recent reviews, authors have argued for the pervasive use of fast-and-frugal heuristics in human judgment. They have provided an overview of heuristics and have reiterated findings corroborating that such heuristics can be very valid strategies leading to high accuracy. They also have reviewed previous work that implies that simple heuristics are actually used by decision makers. Unfortunately, concerning the latter point, these reviews appear to be somewhat incomplete. More importantly, previous conclusions have been derived from investigations that bear some noteworthy methodological limitations. I demonstrate these by proposing a new heuristic and provide some novel critical findings. Also, I review some of the relevant literature that is often not, or only partially, considered. Overall, although some fast-and-frugal heuristics indeed seem to predict behavior at times, there is little to no evidence for others. More generally, the empirical evidence available does not warrant the conclusion that heuristics are pervasively used.

  16. Prediction of Petermann I and II Spot Sizes for Single-mode Dispersion-shifted and Dispersion-flattened Fibers by a Simple Technique

    NASA Astrophysics Data System (ADS)

    Kamila, Kiranmay; Panda, Anup Kumar; Gangopadhyay, Sankar

    2013-09-01

    Employing the series expression for the fundamental modal field of dispersion-shifted trapezoidal and dispersion-flattened graded and step W fibers, we present simple but accurate analytical expressions for the Petermann I and II spot sizes of such fibers. Choosing some typical dispersion-shifted trapezoidal and dispersion-flattened graded and step W fibers as examples, we show that our estimations agree excellently with the exact numerical results. The evaluation of the concerned propagation parameters by our formalism requires very little computation. This accurate but simple formalism will benefit system engineers working in the field of all-optical technology.
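
    For orientation, the quantities involved are conventionally defined from the fundamental modal field ψ(r) as follows (standard definitions added here for context; the paper's contribution is the simple series-based evaluation of these integrals, not the definitions themselves):

```latex
w_{P1}^{2} \;=\; \frac{2\int_{0}^{\infty}\psi^{2}(r)\,r^{3}\,dr}{\int_{0}^{\infty}\psi^{2}(r)\,r\,dr},
\qquad
w_{P2}^{2} \;=\; \frac{2\int_{0}^{\infty}\psi^{2}(r)\,r\,dr}{\int_{0}^{\infty}\left(\frac{d\psi}{dr}\right)^{2} r\,dr}.
```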

  17. Development of a Simple RP-HPLC-UV Method for Determination of Azithromycin in Bulk and Pharmaceutical Dosage forms as an Alternative to the USP Method

    PubMed Central

    Ghari, Tayebeh; Kobarfard, Farzad; Mortazavi, Seyed Alireza

    2013-01-01

    The present study was designed to develop a simple, validated liquid chromatographic method for the analysis of azithromycin in bulk and pharmaceutical dosage forms using an ultraviolet detector. The best stationary phase was determined to be a C18 column, 5 μm, 250 mm × 4.6 mm. The mobile phase was optimized to obtain a fast and selective separation of the drug. The flow rate was 1.5 mL/min, the wavelength was set at 210 nm, and the volume of each injection was 500 μL. An isocratic methanol/buffer mobile phase at a ratio of 90:10 v/v gave the best separation and resolution. The proposed method was accurate, precise, sensitive, and linear over a wide range of azithromycin concentrations. The developed method has the advantage of using a UV detector compared with the USP method, in which an electrochemical detector is used. The validated method was successfully applied to the determination of azithromycin in bulk and pharmaceutical dosage forms. PMID:24250672

  18. A simple model for cell type recognition using 2D-correlation analysis of FTIR images from breast cancer tissue

    NASA Astrophysics Data System (ADS)

    Ali, Mohamed H.; Rakib, Fazle; Al-Saad, Khalid; Al-Saady, Rafif; Lyng, Fiona M.; Goormaghtigh, Erik

    2018-07-01

    Breast cancer is the second most common cancer after lung cancer. So far, in clinical practice, most cancer parameters originating from histopathology rely on the visualization by a pathologist of microscopic structures observed in stained tissue sections, including immunohistochemistry markers. Fourier transform infrared (FTIR) spectroscopy provides a biochemical fingerprint of a biopsy sample and, together with advanced data analysis techniques, can accurately classify cell types. Yet, one of the challenges when dealing with FTIR imaging is the slow recording of the data. A one-cm² tissue section requires several hours of image recording. We show in the present paper that 2D covariance analysis singles out only a few wavenumbers where both variance and covariance are large. Simple models could be built using 4 wavenumbers to identify the 4 main cell types present in breast cancer tissue sections. Decision trees provide particularly simple models to reach discrimination between the 4 cell types. The robustness of these simple decision-tree models was challenged with FTIR spectral data obtained using different recording conditions. One test set was recorded by transflection on tissue sections in the presence of paraffin, while the training set was obtained on dewaxed tissue sections by transmission. Furthermore, the test set was collected with a different brand of FTIR microscope and a different pixel size. Despite the different recording conditions, separating extracellular matrix (ECM) from carcinoma spectra was 100% successful, underscoring the robustness of this univariate model and the utility of covariance analysis for revealing efficient wavenumbers. We suggest that 2D covariance maps using the full spectral range could be most useful to select the interesting wavenumbers and achieve very fast data acquisition on quantum cascade laser infrared imaging microscopes.
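
    A decision tree restricted to a handful of wavenumbers is easy to reproduce with standard tooling; the schematic sketch below uses scikit-learn, with the feature columns standing for absorbances at four selected wavenumbers and the values and labels being random placeholders rather than the paper's data:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))   # placeholder: absorbance at 4 selected wavenumbers
y = rng.choice(["carcinoma", "ECM", "lymphocytes", "other"], size=200)  # placeholder labels

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(clf.predict(X[:5]))       # predicted cell type for the first five spectra
```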

  19. Autonomous celestial navigation based on Earth ultraviolet radiance and fast gradient statistic feature extraction

    NASA Astrophysics Data System (ADS)

    Lu, Shan; Zhang, Hanmo

    2016-01-01

    To meet the requirement of autonomous orbit determination, this paper proposes a fast curve fitting method based on earth ultraviolet features to obtain an accurate earth vector direction and thus achieve high-precision autonomous navigation. Firstly, combining the stable character of earth ultraviolet radiance with an atmospheric radiation transmission model, the paper simulates the earth ultraviolet radiation model at different times and chooses the proper observation band. Then a fast, improved edge-extraction method combining the Sobel operator and local binary patterns (LBP) is utilized, which can both eliminate noise efficiently and extract earth ultraviolet limb features accurately. Earth centroid locations on the simulated images are then estimated via least-squares fitting using part of the limb edges. Finally, taking advantage of the estimated earth vector direction and earth distance, an Extended Kalman Filter (EKF) is applied to realize autonomous navigation. Experimental results indicate that the proposed method achieves sub-pixel earth centroid location estimation and greatly enhances autonomous celestial navigation precision.
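
    The centroid-estimation step, fitting a circle to the extracted limb-edge pixels, can be done in closed form with an algebraic (Kåsa-type) least-squares fit; a minimal sketch under that assumption:

```python
import numpy as np

def fit_circle_least_squares(x, y):
    """Algebraic least-squares circle fit to limb-edge pixel coordinates.

    Solves x^2 + y^2 = 2*a*x + 2*b*y + c, giving centre (a, b) and radius.
    """
    A = np.column_stack([2.0 * x, 2.0 * y, np.ones_like(x)])
    rhs = x ** 2 + y ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return (a, b), np.sqrt(c + a ** 2 + b ** 2)
```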

  20. Ultra-fast HPM detectors improve NAD(P)H FLIM

    NASA Astrophysics Data System (ADS)

    Becker, Wolfgang; Wetzker, Cornelia; Benda, Aleš

    2018-02-01

    Metabolic imaging by NAD(P)H FLIM requires the decay functions in the individual pixels to be resolved into the decay components of bound and unbound NAD(P)H. Metabolic information is contained in the lifetime and relative amplitudes of the components. The separation of the decay components and the accuracy of the amplitudes and lifetimes improves substantially by using ultra-fast HPM-100-06 and HPM-100-07 hybrid detectors. The IRF width in combination with the Becker & Hickl SPC-150N and SPC-150NX TCSPC modules is less than 20 ps. An IRF this fast does not interfere with the fluorescence decay. The usual deconvolution process in the data analysis then virtually becomes a simple curve fitting, and the parameters of the NAD(P)H decay components are obtained at unprecedented accuracy.
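
    When the IRF is much shorter than the fluorescence lifetimes, the per-pixel analysis indeed approaches a direct two-component fit; a minimal sketch of such a fit with SciPy (no IRF convolution, which is only defensible under that assumption; starting values are typical, not taken from the paper):

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, tau1, a2, tau2):
    """Two-component NAD(P)H decay: free (short lifetime) plus bound (long lifetime)."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

def fit_nadph_decay(t_ns, counts):
    p0 = (counts.max() * 0.7, 0.4, counts.max() * 0.3, 2.5)   # typical starting guesses
    popt, _ = curve_fit(biexp, t_ns, counts, p0=p0, maxfev=10_000)
    a1, tau1, a2, tau2 = popt
    return popt, a2 / (a1 + a2)   # fitted parameters and bound fraction a2/(a1+a2)
```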

  1. The Monash University Interactive Simple Climate Model

    NASA Astrophysics Data System (ADS)

    Dommenget, D.

    2013-12-01

    The Monash University interactive simple climate model is a web-based interface that allows students and the general public to explore the physical simulation of the climate system with a real global climate model. It is based on the Globally Resolved Energy Balance (GREB) model, a climate model published by Dommenget and Floeter [2011] in the peer-reviewed journal Climate Dynamics. The model simulates most of the main physical processes in the climate system in a very simplistic way and therefore allows very fast and simple climate model simulations on a normal PC. Despite its simplicity, the model simulates the climate response to external forcings, such as a doubling of the CO2 concentration, very realistically (similar to state-of-the-art climate models). The Monash simple climate model web interface allows users to study the results of more than 2000 different model experiments in an interactive way, to work through a number of tutorials on the interactions of physical processes in the climate system, and to solve some puzzles. By switching physical processes off and on, users can deconstruct the climate and learn how the different processes interact to generate the observed climate and how they interact to generate the IPCC-predicted climate change for an anthropogenic CO2 increase. The presentation will illustrate how this web-based tool works and what the possibilities for teaching students with this tool are.

  2. A second order discontinuous Galerkin fast sweeping method for Eikonal equations

    NASA Astrophysics Data System (ADS)

    Li, Fengyan; Shu, Chi-Wang; Zhang, Yong-Tao; Zhao, Hongkai

    2008-09-01

    In this paper, we construct a second order fast sweeping method with a discontinuous Galerkin (DG) local solver for computing viscosity solutions of a class of static Hamilton-Jacobi equations, namely the Eikonal equations. Our piecewise linear DG local solver is built on a DG method developed recently [Y. Cheng, C.-W. Shu, A discontinuous Galerkin finite element method for directly solving the Hamilton-Jacobi equations, Journal of Computational Physics 223 (2007) 398-415] for the time-dependent Hamilton-Jacobi equations. The causality property of Eikonal equations is incorporated into the design of this solver. The resulting local nonlinear system in the Gauss-Seidel iterations is a simple quadratic system and can be solved explicitly. The compactness of the DG method and the fast sweeping strategy lead to fast convergence of the new scheme for Eikonal equations. Extensive numerical examples verify efficiency, convergence and second order accuracy of the proposed method.
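
    For context, the first-order fast sweeping method that the paper upgrades with a DG local solver alternates Gauss-Seidel sweeps in the four diagonal orderings, updating each node from the Godunov upwind formula; a compact sketch for |∇u| = 1 on a uniform grid (first order only, not the second-order DG scheme of the paper):

```python
import numpy as np

def fast_sweeping_eikonal(source_mask, h=1.0, n_sweeps=4):
    """First-order fast sweeping solver for |grad u| = 1 (distance to the source set)."""
    u = np.where(source_mask, 0.0, 1e10)
    ny, nx = u.shape
    orderings = [(range(ny), range(nx)),
                 (range(ny), range(nx - 1, -1, -1)),
                 (range(ny - 1, -1, -1), range(nx)),
                 (range(ny - 1, -1, -1), range(nx - 1, -1, -1))]
    for _ in range(n_sweeps):
        for ys, xs in orderings:
            for i in ys:
                for j in xs:
                    if source_mask[i, j]:
                        continue
                    a = min(u[max(i - 1, 0), j], u[min(i + 1, ny - 1), j])
                    b = min(u[i, max(j - 1, 0)], u[i, min(j + 1, nx - 1)])
                    if abs(a - b) >= h:                     # one-sided update
                        cand = min(a, b) + h
                    else:                                   # two-sided Godunov update
                        cand = 0.5 * (a + b + np.sqrt(2.0 * h * h - (a - b) ** 2))
                    u[i, j] = min(u[i, j], cand)
    return u
```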

  3. PhosSA: Fast and accurate phosphorylation site assignment algorithm for mass spectrometry data.

    PubMed

    Saeed, Fahad; Pisitkun, Trairak; Hoffert, Jason D; Rashidian, Sara; Wang, Guanghui; Gucek, Marjan; Knepper, Mark A

    2013-11-07

    Phosphorylation site assignment of high throughput tandem mass spectrometry (LC-MS/MS) data is one of the most common and critical aspects of phosphoproteomics. Correctly assigning phosphorylated residues helps us understand their biological significance. Common search algorithms (such as Sequest, Mascot, etc.) do not incorporate site assignment by design; therefore, additional algorithms are essential to assign phosphorylation sites for mass spectrometry data. The main contribution of this study is the design and implementation of a linear time and space dynamic programming strategy for phosphorylation site assignment, referred to as PhosSA. The proposed algorithm uses the summation of peak intensities associated with theoretical spectra as an objective function. Quality control of the assigned sites is achieved using a post-processing redundancy criterion that indicates the signal-to-noise ratio properties of the fragmented spectra. The quality of the algorithm was assessed using experimentally generated data sets of synthetic peptides for which the phosphorylation sites were known. We report that PhosSA was able to achieve a high degree of accuracy and sensitivity with all the experimentally generated mass spectrometry data sets. The implemented algorithm is shown to be extremely fast and scalable with an increasing number of spectra (we report up to 0.5 million spectra/hour on a moderate workstation). The algorithm is designed to accept results from both Sequest and Mascot search engines. An executable is freely available at http://helixweb.nih.gov/ESBL/PhosSA/ for academic research purposes.

  4. A simple method for HPLC retention time prediction: linear calibration using two reference substances.

    PubMed

    Sun, Lei; Jin, Hong-Yu; Tian, Run-Tao; Wang, Ming-Juan; Liu, Li-Na; Ye, Liu-Ping; Zuo, Tian-Tian; Ma, Shuang-Cheng

    2017-01-01

    Analysis of related substances in pharmaceutical chemicals and of multiple components in traditional Chinese medicines needs a large number of reference substances to identify the chromatographic peaks accurately, but reference substances are costly. Thus, the relative retention (RR) method has been widely adopted in pharmacopoeias and the literature for characterizing the HPLC behavior of reference substances that are unavailable. The problem is that it is difficult to reproduce the RR on different columns, due to the error between measured retention time (tR) and predicted tR in some cases. Therefore, it is useful to develop an alternative and simple method for accurate prediction of tR. In the present study, based on the thermodynamic theory of HPLC, a method named linear calibration using two reference substances (LCTRS) was proposed. The method includes three steps: two-point prediction, validation by multiple-point regression, and sequential matching. The tR of compounds on an HPLC column can be calculated from standard retention times and a linear relationship. The method was validated with two medicines on 30 columns. It was demonstrated that the LCTRS method is simple, yet more accurate and more robust on different HPLC columns than the RR method. Hence quality standards using the LCTRS method are easy to reproduce in different laboratories, with a lower cost of reference substances.
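
    The two-point step can be read as a straight-line map between the standard and measured retention times of the two reference substances, applied to every other compound's standard retention time; a minimal sketch of that reading (hypothetical retention times in minutes, not values from the study):

```python
def predict_retention_time(t_std_compound, t_std_ref1, t_std_ref2, t_meas_ref1, t_meas_ref2):
    """Two-reference linear calibration of retention time (LCTRS-style two-point step)."""
    slope = (t_meas_ref2 - t_meas_ref1) / (t_std_ref2 - t_std_ref1)
    return t_meas_ref1 + slope * (t_std_compound - t_std_ref1)

# Hypothetical example: references elute at 8.0/20.0 min in the standard system
# and at 8.6/21.1 min on the column at hand.
print(predict_retention_time(12.4, 8.0, 20.0, 8.6, 21.1))
```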

  5. The Fast-Casual Conundrum: Fast-Casual Restaurant Entrées Are Higher in Calories than Fast Food.

    PubMed

    Schoffman, Danielle E; Davidson, Charis R; Hales, Sarah B; Crimarco, Anthony E; Dahl, Alicia A; Turner-McGrievy, Gabrielle M

    2016-10-01

    Frequently eating fast food has been associated with consuming a diet high in calories, and there is a public perception that fast-casual restaurants (eg, Chipotle) are healthier than traditional fast food (eg, McDonald's). However, research has not examined whether fast-food entrées and fast-casual entrées differ in calorie content. The purpose of this study was to determine whether the caloric content of entrées at fast-food restaurants differed from that found at fast-casual restaurants. This study was a cross-sectional analysis of secondary data. Calorie information from 2014 for lunch and dinner entrées for fast-food and fast-casual restaurants was downloaded from the MenuStat database. Mean calories per entrée between fast-food restaurants and fast-casual restaurants and the proportion of restaurant entrées that fell into different calorie ranges were assessed. A t test was conducted to test the hypothesis that there was no difference between the average calories per entrée at fast-food and fast-casual restaurants. To examine the difference in distribution of entrées in different calorie ranges between fast-food and fast-casual restaurants, χ(2) tests were used. There were 34 fast-food and 28 fast-casual restaurants included in the analysis (n=3,193 entrées). Fast-casual entrées had significantly more calories per entrée (760±301 kcal) than fast-food entrées (561±268; P<0.0001). A greater proportion of fast-casual entrées compared with fast-food entrées exceeded the median of 640 kcal per entrée (P<0.0001). Although fast-casual entrées contained more calories than fast-food entrées in the study sample, future studies should compare actual purchasing patterns from these restaurants to determine whether the energy content or nutrient density of full meals (ie, entrées with sides and drinks) differs between fast-casual restaurants and fast-food restaurants. Calorie-conscious consumers should consider the calorie content of entrée items

  6. Combinational Reasoning of Quantitative Fuzzy Topological Relations for Simple Fuzzy Regions

    PubMed Central

    Liu, Bo; Li, Dajun; Xia, Yuanping; Ruan, Jian; Xu, Lili; Wu, Huanyi

    2015-01-01

    In recent years, formalization and reasoning of topological relations have become a hot topic as a means to generate knowledge about the relations between spatial objects at the conceptual and geometrical levels. These mechanisms have been widely used in spatial data query, spatial data mining, evaluation of equivalence and similarity in a spatial scene, as well as for consistency assessment of the topological relations of multi-resolution spatial databases. The concept of computational fuzzy topological space is applied to simple fuzzy regions to solve fuzzy topological relations efficiently and more accurately. Thus, extending the existing research and improving upon previous work, this paper presents a new method to describe fuzzy topological relations between simple spatial regions in Geographic Information Sciences (GIS) and Artificial Intelligence (AI). Firstly, we propose new definitions for simple fuzzy line segments and simple fuzzy regions based on computational fuzzy topology. Then, based on the new definitions, we propose a new combinational reasoning method to compute the topological relations between simple fuzzy regions. Moreover, this study found that there are (1) 23 different topological relations between a simple crisp region and a simple fuzzy region, and (2) 152 different topological relations between two simple fuzzy regions. Finally, we discuss some examples to demonstrate the validity of the new method; through comparisons with existing fuzzy models, we show that the proposed method can compute more relations than the existing models, as it is more expressive. PMID:25775452

  7. Towards fast and accurate temperature mapping with proton resonance frequency-based MR thermometry

    PubMed Central

    Yuan, Jing; Mei, Chang-Sheng; Panych, Lawrence P.; McDannold, Nathan J.; Madore, Bruno

    2012-01-01

    The capability to image temperature is a very attractive feature of MRI and has been actively exploited for guiding minimally invasive thermal therapies. Among many MR-based temperature-sensitive approaches, proton resonance frequency (PRF) thermometry provides the advantage of excellent linearity of signal with temperature over a large temperature range. Furthermore, the PRF shift has been shown to be fairly independent of tissue type and thermal history. For these reasons, the PRF method has evolved into the most widely used MR-based thermometry method. In the present paper, the basic principles of PRF-based temperature mapping will be reviewed, along with associated pulse sequence designs. Technical advancements aimed at increasing the imaging speed and/or temperature accuracy of PRF-based thermometry sequences, such as image acceleration, fat suppression, reduced field-of-view imaging, as well as motion tracking and correction, will be discussed. The development of accurate MR thermometry methods applicable to moving organs with non-negligible fat content represents a very challenging goal, but recent developments suggest that this goal may be achieved. If so, MR-guided thermal therapies may be expected to play an increasingly important therapeutic and palliative role, as a minimally invasive alternative to surgery. PMID:22773966
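
    The temperature map itself follows from the phase difference of two gradient-echo acquisitions via ΔT = Δφ / (γ α B0 TE), with α ≈ −0.01 ppm/°C; a small sketch of that conversion (field strength, echo time and coefficient are typical values, not taken from the paper):

```python
import numpy as np

GAMMA = 2 * np.pi * 42.58e6   # proton gyromagnetic ratio, rad/s/T
ALPHA = -0.01e-6              # PRF thermal coefficient, about -0.01 ppm per degC

def prf_temperature_change(phase, phase_baseline, b0_tesla=3.0, te_s=0.010):
    """Temperature change map (degC) from the phase difference of two GRE images."""
    dphi = np.angle(np.exp(1j * (phase - phase_baseline)))   # wrap to (-pi, pi]
    return dphi / (GAMMA * ALPHA * b0_tesla * te_s)

# A 1 rad phase change at 3 T with TE = 10 ms corresponds to roughly -12.5 degC
# (the sign depends on the scanner's phase convention).
print(prf_temperature_change(np.array(1.0), np.array(0.0)))
```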

  8. Prolonged Nightly Fasting and Breast Cancer Prognosis

    PubMed Central

    Marinac, Catherine R.; Nelson, Sandahl H.; Breen, Caitlin I.; Hartman, Sheri J.; Natarajan, Loki; Pierce, John P.; Flatt, Shirley W.; Sears, Dorothy D.; Patterson, Ruth E.

    2016-01-01

    associated with a statistically significant higher risk of breast cancer mortality (hazard ratio, 1.21; 95% CI, 0.91-1.60) or a statistically significant higher risk of all-cause mortality (hazard ratio, 1.22; 95% CI, 0.95-1.56). In multivariable linear regression models, each 2-hour increase in the nightly fasting duration was associated with significantly lower hemoglobin A1c levels (β = −0.37; 95% CI, −0.72 to −0.01) and a longer duration of nighttime sleep (β = 0.20; 95% CI, 0.14-0.26). CONCLUSIONS AND RELEVANCE Prolonging the length of the nightly fasting interval may be a simple, nonpharmacologic strategy for reducing the risk of breast cancer recurrence. Improvements in glucoregulation and sleep may be mechanisms linking nightly fasting with breast cancer prognosis. PMID:27032109

  9. Prolonged Nightly Fasting and Breast Cancer Prognosis.

    PubMed

    Marinac, Catherine R; Nelson, Sandahl H; Breen, Caitlin I; Hartman, Sheri J; Natarajan, Loki; Pierce, John P; Flatt, Shirley W; Sears, Dorothy D; Patterson, Ruth E

    2016-08-01

    % CI, 0.91-1.60) or a statistically significant higher risk of all-cause mortality (hazard ratio, 1.22; 95% CI, 0.95-1.56). In multivariable linear regression models, each 2-hour increase in the nightly fasting duration was associated with significantly lower hemoglobin A1c levels (β = -0.37; 95% CI, -0.72 to -0.01) and a longer duration of nighttime sleep (β = 0.20; 95% CI, 0.14-0.26). Prolonging the length of the nightly fasting interval may be a simple, nonpharmacologic strategy for reducing the risk of breast cancer recurrence. Improvements in glucoregulation and sleep may be mechanisms linking nightly fasting with breast cancer prognosis.

  10. Simple gas chromatographic method for furfural analysis.

    PubMed

    Gaspar, Elvira M S M; Lopes, João F

    2009-04-03

    A new, simple, gas chromatographic method was developed for the direct analysis of 5-hydroxymethylfurfural (5-HMF), 2-furfural (2-F) and 5-methylfurfural (5-MF) in liquid and water soluble foods, using direct immersion SPME coupled to GC-FID and/or GC-TOF-MS. The fiber (DVB/CAR/PDMS) conditions were optimized: pH effect, temperature, adsorption and desorption times. The method is simple and accurate (RSD<8%), showed good recoveries (77-107%) and good limits of detection (GC-FID: 1.37 μg L⁻¹ for 2-F, 8.96 μg L⁻¹ for 5-MF, 6.52 μg L⁻¹ for 5-HMF; GC-TOF-MS: 0.3, 1.2 and 0.9 ng mL⁻¹ for 2-F, 5-MF and 5-HMF, respectively). It was applied to different commercial food matrices: honey, white, demerara, brown and yellow table sugars, and white and red balsamic vinegars. This one-step, sensitive and direct method for the analysis of furfurals will contribute to characterise and quantify their presence in the human diet.

  11. Time Evolving Fission Chain Theory and Fast Neutron and Gamma-Ray Counting Distributions

    DOE PAGES

    Kim, K. S.; Nakae, L. F.; Prasad, M. K.; ...

    2015-11-01

    Here, we solve a simple theoretical model of time evolving fission chains due to Feynman that generalizes and asymptotically approaches the point model theory. The point model theory has been used to analyze thermal neutron counting data. This extension of the theory underlies fast counting data for both neutrons and gamma rays from metal systems. Fast neutron and gamma-ray counting is now possible using liquid scintillator arrays with nanosecond time resolution. For individual fission chains, the differential equations describing three correlated probability distributions are solved: the time-dependent internal neutron population, accumulation of fissions in time, and accumulation of leaked neutrons in time. Explicit analytic formulas are given for correlated moments of the time evolving chain populations. The equations for random time gate fast neutron and gamma-ray counting distributions, due to randomly initiated chains, are presented. Correlated moment equations are given for both random time gate and triggered time gate counting. Explicit formulas are given for all correlated moments up to triple order, for all combinations of correlated fast neutrons and gamma rays. The nonlinear differential equations for probabilities for time dependent fission chain populations have a remarkably simple Monte Carlo realization. A Monte Carlo code was developed for this theory and is shown to statistically realize the solutions to the fission chain theory probability distributions. Combined with random initiation of chains and detection of external quanta, the Monte Carlo code generates time tagged data for neutron and gamma-ray counting and from these data the counting distributions.
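
    The Monte Carlo realization mentioned above can be illustrated with a toy branching-process simulation of a single chain, in which each neutron either leaks or induces a fission emitting a random number of new neutrons; the probabilities and multiplicity distribution below are placeholders, not evaluated nuclear data:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_chain(p_fission=0.3, nu_mean=2.5, max_events=100_000):
    """Simulate one fission chain; return (number of fissions, number of leaked neutrons)."""
    active, fissions, leaked = 1, 0, 0
    while active and fissions + leaked < max_events:
        active -= 1
        if rng.random() < p_fission:
            fissions += 1
            active += rng.poisson(nu_mean)   # placeholder neutron multiplicity
        else:
            leaked += 1
    return fissions, leaked

chains = [simulate_chain() for _ in range(10_000)]
print("mean leaked neutrons per chain:", np.mean([leaked for _, leaked in chains]))
```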

  12. Wide dynamic logarithmic InGaAs sensor suitable for eye-safe active imaging

    NASA Astrophysics Data System (ADS)

    Ni, Yang; Bouvier, Christian; Arion, Bogdan; Noguier, Vincent

    2016-05-01

    In this paper, we present a simple method to analyze the injection efficiency of the photodiode interface circuit under fast shuttering conditions for active imaging applications. This simple model is inspired by the companion model for reactive elements widely used in CAD. We demonstrate that the traditional CTIA photodiode interface is not adequate for active imaging, where fast and precise shuttering operation is necessary. Afterwards, we present a direct-amplification-based photodiode interface which can provide accurate and fast shuttering operation on the photodiode. These considerations have been used in NIT's newly developed ROIC and corresponding SWIR sensors, both in VGA 15 um pitch (NSC1201) and in QVGA 25 um pitch (NSC1401).

  13. A simple protocol for NMR analysis of the enantiomeric purity of chiral hydroxylamines.

    PubMed

    Tickell, David A; Mahon, Mary F; Bull, Steven D; James, Tony D

    2013-02-15

    A practically simple three-component chiral derivatization protocol for determining the enantiopurity of chiral hydroxylamines by (1)H NMR spectroscopic analysis is described, involving their treatment with 2-formylphenylboronic acid and enantiopure BINOL to afford a mixture of diastereomeric nitrono-boronate esters whose ratio is an accurate reflection of the enantiopurity of the parent hydroxylamine.

  14. Simple, Sensitive and Accurate Multiplex Detection of Clinically Important Melanoma DNA Mutations in Circulating Tumour DNA with SERS Nanotags

    PubMed Central

    Wee, Eugene J.H.; Wang, Yuling; Tsao, Simon Chang-Hao; Trau, Matt

    2016-01-01

    Sensitive and accurate identification of specific DNA mutations can influence clinical decisions. However, accurate diagnosis from limiting samples such as circulating tumour DNA (ctDNA) is challenging. Current approaches based on fluorescence such as quantitative PCR (qPCR) and more recently, droplet digital PCR (ddPCR) have limitations in multiplex detection, sensitivity and the need for expensive specialized equipment. Herein we describe an assay capitalizing on the multiplexing and sensitivity benefits of surface-enhanced Raman spectroscopy (SERS) with the simplicity of standard PCR to address the limitations of current approaches. This proof-of-concept method could reproducibly detect as few as 0.1% (10 copies, CV < 9%) of target sequences thus demonstrating the high sensitivity of the method. The method was then applied to specifically detect three important melanoma mutations in multiplex. Finally, the PCR/SERS assay was used to genotype cell lines and ctDNA from serum samples, where results were subsequently validated with ddPCR. With ddPCR-like sensitivity and accuracy yet at the convenience of standard PCR, we believe this multiplex PCR/SERS method could find wide applications in both diagnostics and research. PMID:27446486

  15. Simple, Sensitive and Accurate Multiplex Detection of Clinically Important Melanoma DNA Mutations in Circulating Tumour DNA with SERS Nanotags.

    PubMed

    Wee, Eugene J H; Wang, Yuling; Tsao, Simon Chang-Hao; Trau, Matt

    2016-01-01

    Sensitive and accurate identification of specific DNA mutations can influence clinical decisions. However, accurate diagnosis from limiting samples such as circulating tumour DNA (ctDNA) is challenging. Current approaches based on fluorescence such as quantitative PCR (qPCR) and more recently, droplet digital PCR (ddPCR) have limitations in multiplex detection, sensitivity and the need for expensive specialized equipment. Herein we describe an assay capitalizing on the multiplexing and sensitivity benefits of surface-enhanced Raman spectroscopy (SERS) with the simplicity of standard PCR to address the limitations of current approaches. This proof-of-concept method could reproducibly detect as few as 0.1% (10 copies, CV < 9%) of target sequences thus demonstrating the high sensitivity of the method. The method was then applied to specifically detect three important melanoma mutations in multiplex. Finally, the PCR/SERS assay was used to genotype cell lines and ctDNA from serum samples, where results were subsequently validated with ddPCR. With ddPCR-like sensitivity and accuracy yet at the convenience of standard PCR, we believe this multiplex PCR/SERS method could find wide applications in both diagnostics and research.

  16. Fast Coding of Orientation in Primary Visual Cortex

    PubMed Central

    Shriki, Oren; Kohn, Adam; Shamir, Maoz

    2012-01-01

    Understanding how populations of neurons encode sensory information is a major goal of systems neuroscience. Attempts to answer this question have focused on responses measured over several hundred milliseconds, a duration much longer than that frequently used by animals to make decisions about the environment. How reliably sensory information is encoded on briefer time scales, and how best to extract this information, is unknown. Although it has been proposed that neuronal response latency provides a major cue for fast decisions in the visual system, this hypothesis has not been tested systematically and in a quantitative manner. Here we use a simple ‘race to threshold’ readout mechanism to quantify the information content of spike time latency of primary visual (V1) cortical cells to stimulus orientation. We find that many V1 cells show pronounced tuning of their spike latency to stimulus orientation and that almost as much information can be extracted from spike latencies as from firing rates measured over much longer durations. To extract this information, stimulus onset must be estimated accurately. We show that the responses of cells with weak tuning of spike latency can provide a reliable onset detector. We find that spike latency information can be pooled from a large neuronal population, provided that the decision threshold is scaled linearly with the population size, yielding a processing time of the order of a few tens of milliseconds. Our results provide a novel mechanism for extracting information from neuronal populations over the very brief time scales in which behavioral judgments must sometimes be made. PMID:22719237
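
    The race-to-threshold readout can be mimicked with a toy simulation in which each orientation channel accumulates spikes and the first channel to reach a fixed count determines the decision; the rates, channel count and threshold below are invented for illustration and are not the fitted values of the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def race_to_threshold(rates_hz, threshold, dt=0.001, t_max=0.2):
    """Return (winning channel, decision time) for homogeneous Poisson channels."""
    rates = np.asarray(rates_hz, dtype=float)
    counts = np.zeros(len(rates), dtype=int)
    t = 0.0
    while t < t_max:
        counts += rng.random(len(rates)) < rates * dt   # Bernoulli approximation per bin
        winners = np.flatnonzero(counts >= threshold)
        if winners.size:
            return int(winners[0]), t
        t += dt
    return None, t_max

# Two orientation channels; the stimulus drives channel 0 more strongly
print(race_to_threshold([80.0, 40.0], threshold=5))
```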

  17. Highly accurate fast lung CT registration

    NASA Astrophysics Data System (ADS)

    Rühaak, Jan; Heldmann, Stefan; Kipshagen, Till; Fischer, Bernd

    2013-03-01

    Lung registration in thoracic CT scans has received much attention in the medical imaging community. Possible applications range from follow-up analysis, motion correction for radiation therapy, monitoring of air flow and pulmonary function to lung elasticity analysis. In a clinical environment, runtime is always a critical issue, ruling out quite a few excellent registration approaches. In this paper, a highly efficient variational lung registration method based on minimizing the normalized gradient fields distance measure with curvature regularization is presented. The method ensures diffeomorphic deformations by an additional volume regularization. Supplemental user knowledge, like a segmentation of the lungs, may be incorporated as well. The accuracy of our method was evaluated on 40 test cases from clinical routine. In the EMPIRE10 lung registration challenge, our scheme ranks third, with respect to various validation criteria, out of 28 algorithms with an average landmark distance of 0.72 mm. The average runtime is about 1:50 min on a standard PC, making it by far the fastest approach of the top-ranking algorithms. Additionally, the ten publicly available DIR-Lab inhale-exhale scan pairs were registered to subvoxel accuracy at computation times of only 20 seconds. Our method thus combines very attractive runtimes with state-of-the-art accuracy in a unique way.

  18. Simulating Eastern- and Central-Pacific Type ENSO Using a Simple Coupled Model

    NASA Astrophysics Data System (ADS)

    Fang, Xianghui; Zheng, Fei

    2018-06-01

    Severe biases exist in state-of-the-art general circulation models (GCMs) in capturing realistic central-Pacific (CP) El Niño structures. At the same time, many observational analyses have emphasized that thermocline (TH) feedback and zonal advective (ZA) feedback play dominant roles in the development of eastern-Pacific (EP) and CP El Niño-Southern Oscillation (ENSO), respectively. In this work, a simple linear air-sea coupled model, which can accurately depict the strength distribution of the TH and ZA feedbacks in the equatorial Pacific, is used to investigate these two types of El Niño. The results indicate that the model can reproduce the main characteristics of CP ENSO if the TH feedback is switched off and the ZA feedback is retained as the only positive feedback, confirming the dominant role played by ZA feedback in the development of CP ENSO. Further experiments indicate that, through a simple nonlinear control approach, many ENSO characteristics, including the existence of both CP and EP El Niño and the asymmetries between El Niño and La Niña, can be successfully captured using the simple linear air-sea coupled model. These analyses indicate that an accurate depiction of the climatological sea surface temperature distribution and the related ZA feedback, which are the subject of severe biases in GCMs, is very important in simulating a realistic CP El Niño.

  19. Machine Learning of Parameters for Accurate Semiempirical Quantum Chemical Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dral, Pavlo O.; von Lilienfeld, O. Anatole; Thiel, Walter

    2015-05-12

    We investigate possible improvements in the accuracy of semiempirical quantum chemistry (SQC) methods through the use of machine learning (ML) models for the parameters. For a given class of compounds, ML techniques require sufficiently large training sets to develop ML models that can be used for adapting SQC parameters to reflect changes in molecular composition and geometry. The ML-SQC approach allows the automatic tuning of SQC parameters for individual molecules, thereby improving the accuracy without deteriorating transferability to molecules with molecular descriptors very different from those in the training set. The performance of this approach is demonstrated for the semiempirical OM2 method using a set of 6095 constitutional isomers C7H10O2, for which accurate ab initio atomization enthalpies are available. The ML-OM2 results show improved average accuracy and a much reduced error range compared with those of standard OM2 results, with mean absolute errors in atomization enthalpies dropping from 6.3 to 1.7 kcal/mol. They are also found to be superior to the results from specific OM2 reparameterizations (rOM2) for the same set of isomers. The ML-SQC approach thus holds promise for fast and reasonably accurate high-throughput screening of materials and molecules.

  20. Machine learning of parameters for accurate semiempirical quantum chemical calculations

    DOE PAGES

    Dral, Pavlo O.; von Lilienfeld, O. Anatole; Thiel, Walter

    2015-04-14

    We investigate possible improvements in the accuracy of semiempirical quantum chemistry (SQC) methods through the use of machine learning (ML) models for the parameters. For a given class of compounds, ML techniques require sufficiently large training sets to develop ML models that can be used for adapting SQC parameters to reflect changes in molecular composition and geometry. The ML-SQC approach allows the automatic tuning of SQC parameters for individual molecules, thereby improving the accuracy without deteriorating transferability to molecules with molecular descriptors very different from those in the training set. The performance of this approach is demonstrated for the semiempirical OM2 method using a set of 6095 constitutional isomers C7H10O2, for which accurate ab initio atomization enthalpies are available. The ML-OM2 results show improved average accuracy and a much reduced error range compared with those of standard OM2 results, with mean absolute errors in atomization enthalpies dropping from 6.3 to 1.7 kcal/mol. They are also found to be superior to the results from specific OM2 reparameterizations (rOM2) for the same set of isomers. The ML-SQC approach thus holds promise for fast and reasonably accurate high-throughput screening of materials and molecules.

  1. Spectroscopic vector analysis for fast pattern quality monitoring

    NASA Astrophysics Data System (ADS)

    Sohn, Younghoon; Ryu, Sungyoon; Lee, Chihoon; Yang, Yusin

    2018-03-01

    In the semiconductor industry, fast and effective measurement of pattern variation has been a key challenge for assuring mass-production quality. Pattern measurement techniques such as conventional CD-SEMs or optical CDs have been used extensively, but these techniques are increasingly limited in terms of measurement throughput and the time spent in modeling. In this paper we propose a time-effective pattern monitoring method based on a direct spectrum approach. In this technique, a wavelength band sensitive to a specific pattern change is selected from the spectroscopic ellipsometry signal scattered by the pattern to be measured, and the amplitude and phase variation in that wavelength band are analyzed as a measurement index of the pattern change. This pattern-change measurement technique was applied to several process steps and its applicability was verified. Owing to its fast and simple analysis, the method can be adopted for large-scale process variation monitoring, maximizing measurement throughput.

  2. Insulin degludec/insulin aspart once daily in Type 2 diabetes: a comparison of simple or stepwise titration algorithms (BOOST® : SIMPLE USE).

    PubMed

    Park, S W; Bebakar, W M W; Hernandez, P G; Macura, S; Hersløv, M L; de la Rosa, R

    2017-02-01

    To compare the efficacy and safety of two titration algorithms for insulin degludec/insulin aspart (IDegAsp) administered once daily with metformin in participants with insulin-naïve Type 2 diabetes mellitus. This open-label, parallel-group, 26-week, multicentre, treat-to-target trial randomly allocated participants (1:1) to two titration arms. The Simple algorithm titrated IDegAsp twice weekly based on a single pre-breakfast self-monitored plasma glucose (SMPG) measurement. The Stepwise algorithm titrated IDegAsp once weekly based on the lowest of three consecutive pre-breakfast SMPG measurements. In both groups, IDegAsp once daily was titrated to pre-breakfast plasma glucose values of 4.0-5.0 mmol/l. The primary endpoint was change from baseline in HbA1c (%) after 26 weeks. Change in HbA1c at Week 26 was IDegAsp Simple -14.6 mmol/mol (-1.3%) (to 52.4 mmol/mol; 6.9%) and IDegAsp Stepwise -11.9 mmol/mol (-1.1%) (to 54.7 mmol/mol; 7.2%). The estimated between-group treatment difference was -1.97 mmol/mol [95% confidence interval (CI) -4.1, 0.2] (-0.2%, 95% CI -0.4, 0.02), confirming the non-inferiority of IDegAsp Simple to IDegAsp Stepwise (non-inferiority limit of ≤ 0.4%). Mean reduction in fasting plasma glucose and 8-point SMPG profiles were similar between groups. Rates of confirmed hypoglycaemia were lower for IDegAsp Stepwise [2.1 per patient years of exposure (PYE)] vs. IDegAsp Simple (3.3 PYE) (estimated rate ratio IDegAsp Simple/IDegAsp Stepwise 1.8; 95% CI 1.1, 2.9). Nocturnal hypoglycaemia rates were similar between groups. No severe hypoglycaemic events were reported. In participants with insulin-naïve Type 2 diabetes mellitus, the IDegAsp Simple titration algorithm improved HbA1c levels as effectively as a Stepwise titration algorithm. Hypoglycaemia rates were lower in the Stepwise arm. © 2016 The Authors. Diabetic Medicine published by John Wiley & Sons Ltd on behalf of Diabetes UK.

  3. Fast and accurate detection of cancer cell using a versatile three-channel plasmonic sensor

    NASA Astrophysics Data System (ADS)

    Hoseinian, M.; Ahmadi, A. R.; Bolorizadeh, M. A.

    2016-09-01

    Surface Plasmon Resonance (SPR) optical fiber sensors can be used as cost-effective, small-sized biosensors that are relatively simple to operate. Additionally, these instruments are label-free, hence rendering them highly sensitive for biological measurements. In this study, a three-channel microstructured optical fiber plasmonic-based portable biosensor is designed and analyzed using the Finite Element Method. The proposed system is capable of determining changes in the sample's refractive index with a precision of the order of one thousandth. The biosensor measures three absorption resonance wavelengths of the analytes simultaneously. This property is one of the main advantages of the proposed biosensor since it reduces the error in the measured wavelength and enhances the accuracy of the results up to 10^-5 m/RIU by reducing noise. In this paper, the Jurkat cell, an indicator cell for leukemia, is considered as the analyte, and its absorption resonance wavelengths as well as the sensitivity in each channel are determined.

  4. Analysing simple motions using the Doppler effect—‘seeing’ sound

    NASA Astrophysics Data System (ADS)

    Stonawski, Tamás; Gálik, Tamás

    2017-01-01

    The Doppler effect has seen widespread use in the past hundred years. It is used for medical imaging, for measuring speed, temperature, direction, etc, and it makes the spatial relations of motion easy to map. The Doppler effect also allows GPS receivers to measure the speed of a vehicle significantly more accurately than dashboard speedometers. Its diverse applications have prompted us to revisit the simple motions from kinematics with the help of everyday objects in our experiments.

  5. A Simple Iterative Model Accurately Captures Complex Trapline Formation by Bumblebees Across Spatial Scales and Flower Arrangements

    PubMed Central

    Reynolds, Andrew M.; Lihoreau, Mathieu; Chittka, Lars

    2013-01-01

    Pollinating bees develop foraging circuits (traplines) to visit multiple flowers in a manner that minimizes overall travel distance, a task analogous to the travelling salesman problem. We report on an in-depth exploration of an iterative improvement heuristic model of bumblebee traplining previously found to accurately replicate the establishment of stable routes by bees between flowers distributed over several hectares. The critical test for a model is its predictive power for empirical data for which the model has not been specifically developed, and here the model is shown to be consistent with observations from different research groups made at several spatial scales and using multiple configurations of flowers. We refine the model to account for the spatial search strategy of bees exploring their environment, and test several previously unexplored predictions. We find that the model predicts accurately 1) the increasing propensity of bees to optimize their foraging routes with increasing spatial scale; 2) that bees cannot establish stable optimal traplines for all spatial configurations of rewarding flowers; 3) the observed trade-off between travel distance and prioritization of high-reward sites (with a slight modification of the model); 4) the temporal pattern with which bees acquire approximate solutions to travelling salesman-like problems over several dozen foraging bouts; 5) the instability of visitation schedules in some spatial configurations of flowers; 6) the observation that in some flower arrays, bees' visitation schedules are highly individually different; 7) the searching behaviour that leads to efficient location of flowers and routes between them. Our model constitutes a robust theoretical platform to generate novel hypotheses and refine our understanding about how small-brained insects develop a representation of space and use it to navigate in complex and dynamic environments. PMID:23505353
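
    The trapline model is an iterative improvement heuristic over visit orders. As a hedged, minimal stand-in for that idea (not the published learning rule, which reinforces transitions probabilistically), the sketch below repeatedly tries 2-opt-style reversals of a visit order and keeps any candidate that shortens total travel distance; the flower coordinates are invented.

        # Minimal accept-if-shorter route improvement loop over flower visit orders.
        # This is only a simplified stand-in for the published bee trapline model.
        import numpy as np

        rng = np.random.default_rng(1)
        flowers = rng.uniform(0, 100, size=(8, 2))   # hypothetical flower coordinates

        def route_length(order):
            pts = flowers[list(order) + [order[0]]]  # close the loop back to the start
            return np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))

        order = list(range(len(flowers)))
        best = route_length(order)
        for bout in range(2000):                     # "foraging bouts"
            i, j = sorted(rng.choice(len(order), size=2, replace=False))
            candidate = order[:i] + order[i:j + 1][::-1] + order[j + 1:]  # 2-opt reversal
            if route_length(candidate) < best:
                order, best = candidate, route_length(candidate)
        print(order, round(best, 1))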

  6. Fast Reliability Assessing Method for Distribution Network with Distributed Renewable Energy Generation

    NASA Astrophysics Data System (ADS)

    Chen, Fan; Huang, Shaoxiong; Ding, Jinjin; Ding, Jinjin; Gao, Bo; Xie, Yuguang; Wang, Xiaoming

    2018-01-01

    This paper proposes a fast reliability assessment method for distribution grids with distributed renewable energy generation. First, the Weibull distribution and the Beta distribution are used to describe the probability distributions of wind speed and solar irradiance, respectively, and models of the wind farm, solar park and local load are built for reliability assessment. Then, based on production cost simulation, probability discretization and linearized power flow, an optimal power flow problem whose objective is the minimum cost of conventional power generation is solved. A reliability assessment of the distribution grid can thus be carried out quickly and accurately. The Loss Of Load Probability (LOLP) and Expected Energy Not Supplied (EENS) are selected as the reliability indices. A MATLAB simulation of the IEEE RBTS BUS6 system indicates that the fast reliability assessment method computes these indices much faster than the Monte Carlo method while maintaining accuracy.
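
    As a hedged illustration of the reliability indices involved (not the paper's discretized optimal power flow method), the sketch below samples wind speed from a Weibull distribution and solar irradiance from a Beta distribution, converts them to power with made-up plant parameters, and estimates LOLP and EENS by brute-force sampling.

        # Brute-force estimation of LOLP and EENS from sampled renewable generation.
        # Distribution parameters, capacities and the load model are all invented.
        import numpy as np

        rng = np.random.default_rng(2)
        n = 100_000
        wind = rng.weibull(2.0, n) * 8.0                 # Weibull wind speed [m/s]
        solar = rng.beta(2.0, 3.0, n)                    # Beta-distributed irradiance (normalized)

        p_wind = np.clip((wind - 3.0) / (12.0 - 3.0), 0, 1) * 2.0   # toy 2 MW turbine curve
        p_solar = solar * 1.5                                        # toy 1.5 MW PV park
        p_conv = 5.0                                                 # conventional capacity [MW]
        load = rng.normal(7.0, 0.8, n)                               # local load [MW]

        shortfall = np.maximum(load - (p_wind + p_solar + p_conv), 0.0)
        lolp = np.mean(shortfall > 0)                    # Loss Of Load Probability
        eens = np.mean(shortfall) * 8760                 # Expected Energy Not Supplied [MWh/yr]
        print(f"LOLP = {lolp:.3f}, EENS = {eens:.0f} MWh/yr")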

  7. Simple method for quick estimation of aquifer hydrogeological parameters

    NASA Astrophysics Data System (ADS)

    Ma, C.; Li, Y. Y.

    2017-08-01

    The development of simple and accurate methods to determine aquifer hydrogeological parameters is important for groundwater resources assessment and management. To address the problem of estimating aquifer parameters from unsteady pumping test data, a fitting function for the Theis well function was proposed using an optimization-based fitting method, and a simple linear regression equation was then established. The aquifer parameters can be obtained by solving for the coefficients of the regression equation. The application of the proposed method was illustrated using two published data sets. Error statistics and analysis of the pumping drawdown showed that the method proposed in this paper yields quick and accurate estimates of the aquifer parameters. The proposed method can reliably identify the aquifer parameters from long-distance observed drawdowns and from early drawdowns. It is hoped that the proposed method will be helpful for practicing hydrogeologists and hydrologists.
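
    For context, the Theis solution gives drawdown s = (Q / 4 pi T) W(u) with u = r^2 S / (4 T t), and the aquifer parameters T and S are recovered by fitting observed drawdowns. The paper replaces this with a closed-form regression built from an approximation of W(u); the sketch below instead uses SciPy's exponential integral and a nonlinear fit on synthetic data, purely to illustrate the underlying relationship.

        # Fit transmissivity T and storativity S to synthetic Theis drawdowns.
        # W(u) is the exponential integral E1(u); all numbers are illustrative.
        import numpy as np
        from scipy.special import exp1
        from scipy.optimize import curve_fit

        Q, r = 0.02, 50.0                      # pumping rate [m^3/s], observation distance [m]
        t = np.logspace(2, 5, 30)              # observation times [s]

        def theis(t, T, S):
            u = r**2 * S / (4.0 * T * t)
            return Q / (4.0 * np.pi * T) * exp1(u)

        s_obs = theis(t, 5e-3, 2e-4) * (1 + 0.02 * np.random.default_rng(3).normal(size=t.size))
        (T_fit, S_fit), _ = curve_fit(theis, t, s_obs, p0=(1e-3, 1e-4))
        print(T_fit, S_fit)                    # should recover roughly T = 5e-3, S = 2e-4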

  8. Fast decomposition of two ultrasound longitudinal waves in cancellous bone using a phase rotation parameter for bone quality assessment: Simulation study.

    PubMed

    Taki, Hirofumi; Nagatani, Yoshiki; Matsukawa, Mami; Kanai, Hiroshi; Izumi, Shin-Ichi

    2017-10-01

    Ultrasound signals that pass through cancellous bone may be considered to consist of two longitudinal waves, which are called fast and slow waves. Accurate decomposition of these fast and slow waves is considered to be highly beneficial in determination of the characteristics of cancellous bone. In the present study, a fast decomposition method using a wave transfer function with a phase rotation parameter was applied, in a simulation study, to received signals that had passed through bovine bone specimens with various bone volume to total volume (BV/TV) ratios; the elastic finite-difference time-domain method was used and the ultrasound wave propagated parallel to the bone axes. The proposed method succeeded in decomposing both fast and slow waves accurately; the normalized residual intensity was less than -19.5 dB when the specimen thickness ranged from 4 to 7 mm and the BV/TV value ranged from 0.144 to 0.226. There was a strong relationship between the phase rotation value and the BV/TV value. The ratio of the peak envelope amplitude of the decomposed fast wave to that of the slow wave increased monotonically with increasing BV/TV ratio, indicating the high performance of the proposed method in estimating the BV/TV value of cancellous bone.

  9. A pairwise maximum entropy model accurately describes resting-state human brain networks

    PubMed Central

    Watanabe, Takamitsu; Hirose, Satoshi; Wada, Hiroyuki; Imai, Yoshio; Machida, Toru; Shirouzu, Ichiro; Konishi, Seiki; Miyashita, Yasushi; Masuda, Naoki

    2013-01-01

    The resting-state human brain networks underlie fundamental cognitive functions and consist of complex interactions among brain regions. However, the level of complexity of the resting-state networks has not been quantified, which has prevented comprehensive descriptions of the brain activity as an integrative system. Here, we address this issue by demonstrating that a pairwise maximum entropy model, which takes into account region-specific activity rates and pairwise interactions, can be robustly and accurately fitted to resting-state human brain activities obtained by functional magnetic resonance imaging. Furthermore, to validate the approximation of the resting-state networks by the pairwise maximum entropy model, we show that the functional interactions estimated by the pairwise maximum entropy model reflect anatomical connexions more accurately than the conventional functional connectivity method. These findings indicate that a relatively simple statistical model not only captures the structure of the resting-state networks but also provides a possible method to derive physiological information about various large-scale brain networks. PMID:23340410
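
    A minimal sketch of the model class in question: a pairwise maximum entropy (Ising-like) model over binarized regional activity, fitted by gradient ascent so that its first and second moments match the data. Exhaustive enumeration keeps it exact for a handful of regions; the "activity" below is random placeholder data, not resting-state fMRI.

        # Fit a pairwise maximum entropy model to +/-1 "activity" by moment matching.
        import numpy as np
        from itertools import product

        rng = np.random.default_rng(4)
        n_roi, n_t = 6, 2000
        data = (rng.normal(size=(n_t, n_roi)) > 0).astype(float) * 2 - 1   # binarized activity

        emp_m = data.mean(0)                       # empirical means
        emp_c = data.T @ data / n_t                # empirical pairwise moments

        states = np.array(list(product([-1, 1], repeat=n_roi)), dtype=float)
        h = np.zeros(n_roi)
        J = np.zeros((n_roi, n_roi))
        for step in range(500):                    # gradient ascent on the log-likelihood
            E = states @ h + 0.5 * np.einsum('si,ij,sj->s', states, J, states)
            p = np.exp(E - E.max()); p /= p.sum()
            mod_m = p @ states
            mod_c = states.T @ (states * p[:, None])
            h += 0.1 * (emp_m - mod_m)
            J += 0.1 * (emp_c - mod_c)
            np.fill_diagonal(J, 0.0)
        print(np.max(np.abs(emp_m - mod_m)), np.max(np.abs(emp_c - mod_c)))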

  10. Accurate determination of segmented X-ray detector geometry

    PubMed Central

    Yefanov, Oleksandr; Mariani, Valerio; Gati, Cornelius; White, Thomas A.; Chapman, Henry N.; Barty, Anton

    2015-01-01

    Recent advances in X-ray detector technology have resulted in the introduction of segmented detectors composed of many small detector modules tiled together to cover a large detection area. Due to mechanical tolerances and the desire to be able to change the module layout to suit the needs of different experiments, the pixels on each module might not align perfectly on a regular grid. Several detectors are designed to permit detector sub-regions (or modules) to be moved relative to each other for different experiments. Accurate determination of the location of detector elements relative to the beam-sample interaction point is critical for many types of experiment, including X-ray crystallography, coherent diffractive imaging (CDI), small angle X-ray scattering (SAXS) and spectroscopy. For detectors with moveable modules, the relative positions of pixels are no longer fixed, necessitating the development of a simple procedure to calibrate detector geometry after reconfiguration. We describe a simple and robust method for determining the geometry of segmented X-ray detectors using measurements obtained by serial crystallography. By comparing the location of observed Bragg peaks to the spot locations predicted from the crystal indexing procedure, the position, rotation and distance of each module relative to the interaction region can be refined. We show that the refined detector geometry greatly improves the results of experiments. PMID:26561117
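
    The refinement idea can be sketched as a small least-squares problem: for one module, find the in-plane shift and rotation that best map the spot positions predicted from crystal indexing onto the observed Bragg peaks. Real refinement handles many modules, the detector distance and outliers; the data and misalignment below are synthetic.

        # Recover a module's shift and rotation from predicted vs. observed peak positions.
        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(5)
        predicted = rng.uniform(0, 100, size=(200, 2))           # spots from indexing [px]
        true_shift, true_angle = np.array([1.8, -0.7]), 0.01      # unknown misalignment
        R = np.array([[np.cos(true_angle), -np.sin(true_angle)],
                      [np.sin(true_angle),  np.cos(true_angle)]])
        observed = predicted @ R.T + true_shift + 0.05 * rng.normal(size=predicted.shape)

        def residual(params):
            dx, dy, a = params
            Ra = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
            return (predicted @ Ra.T + [dx, dy] - observed).ravel()

        fit = least_squares(residual, x0=[0.0, 0.0, 0.0])
        print(fit.x)   # recovered shift (px) and rotation (rad) of the module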

  11. Accurate determination of segmented X-ray detector geometry

    DOE PAGES

    Yefanov, Oleksandr; Mariani, Valerio; Gati, Cornelius; ...

    2015-10-22

    Recent advances in X-ray detector technology have resulted in the introduction of segmented detectors composed of many small detector modules tiled together to cover a large detection area. Due to mechanical tolerances and the desire to be able to change the module layout to suit the needs of different experiments, the pixels on each module might not align perfectly on a regular grid. Several detectors are designed to permit detector sub-regions (or modules) to be moved relative to each other for different experiments. Accurate determination of the location of detector elements relative to the beam-sample interaction point is critical for many types of experiment, including X-ray crystallography, coherent diffractive imaging (CDI), small angle X-ray scattering (SAXS) and spectroscopy. For detectors with moveable modules, the relative positions of pixels are no longer fixed, necessitating the development of a simple procedure to calibrate detector geometry after reconfiguration. We describe a simple and robust method for determining the geometry of segmented X-ray detectors using measurements obtained by serial crystallography. By comparing the location of observed Bragg peaks to the spot locations predicted from the crystal indexing procedure, the position, rotation and distance of each module relative to the interaction region can be refined. Furthermore, we show that the refined detector geometry greatly improves the results of experiments.

  12. Simulating polar bear energetics during a seasonal fast using a mechanistic model.

    PubMed

    Mathewson, Paul D; Porter, Warren P

    2013-01-01

    In this study we tested the ability of a mechanistic model (Niche Mapper™) to accurately model adult, non-denning polar bear (Ursus maritimus) energetics while fasting during the ice-free season in the western Hudson Bay. The model uses a steady state heat balance approach, which calculates the metabolic rate that will allow an animal to maintain its core temperature in its particular microclimate conditions. Predicted weight loss for a 120 day fast typical of the 1990s was comparable to empirical studies of the population, and the model was able to reach a heat balance at the target metabolic rate for the entire fast, supporting use of the model to explore the impacts of climate change on polar bears. Niche Mapper predicted that all but the poorest condition bears would survive a 120 day fast under current climate conditions. When the fast extended to 180 days, Niche Mapper predicted mortality of up to 18% for males. Our results illustrate how environmental conditions, variation in animal properties, and thermoregulation processes may impact survival during extended fasts because polar bears were predicted to require additional energetic expenditure for thermoregulation during a 180 day fast. A uniform 3°C temperature increase reduced male mortality during a 180 day fast from 18% to 15%. Niche Mapper explicitly links an animal's energetics to environmental conditions and thus can be a valuable tool to help inform predictions of climate-related population changes. Since Niche Mapper is a generic model, it can make energetic predictions for other species threatened by climate change.

  13. Simulating Polar Bear Energetics during a Seasonal Fast Using a Mechanistic Model

    PubMed Central

    Mathewson, Paul D.; Porter, Warren P.

    2013-01-01

    In this study we tested the ability of a mechanistic model (Niche Mapper™) to accurately model adult, non-denning polar bear (Ursus maritimus) energetics while fasting during the ice-free season in the western Hudson Bay. The model uses a steady state heat balance approach, which calculates the metabolic rate that will allow an animal to maintain its core temperature in its particular microclimate conditions. Predicted weight loss for a 120 day fast typical of the 1990s was comparable to empirical studies of the population, and the model was able to reach a heat balance at the target metabolic rate for the entire fast, supporting use of the model to explore the impacts of climate change on polar bears. Niche Mapper predicted that all but the poorest condition bears would survive a 120 day fast under current climate conditions. When the fast extended to 180 days, Niche Mapper predicted mortality of up to 18% for males. Our results illustrate how environmental conditions, variation in animal properties, and thermoregulation processes may impact survival during extended fasts because polar bears were predicted to require additional energetic expenditure for thermoregulation during a 180 day fast. A uniform 3°C temperature increase reduced male mortality during a 180 day fast from 18% to 15%. Niche Mapper explicitly links an animal’s energetics to environmental conditions and thus can be a valuable tool to help inform predictions of climate-related population changes. Since Niche Mapper is a generic model, it can make energetic predictions for other species threatened by climate change. PMID:24019883

  14. Simple and Fast Sample Preparation Followed by Gas Chromatography-Tandem Mass Spectrometry (GC-MS/MS) for the Analysis of 2- and 4-Methylimidazole in Cola and Dark Beer.

    PubMed

    Choi, Sol Ji; Jung, Mun Yhung

    2017-04-01

    We have developed a simple and fast sample preparation technique in combination with gas chromatography-tandem mass spectrometry (GC-MS/MS) for the quantification of 2-methylimidazole (2-MeI) and 4-methylimidazole (4-MeI) in colas and dark beers. Conventional sample preparation techniques for GC-MS require laborious and time-consuming steps consisting of sample concentration, pH adjustment, ion pair extraction, centrifugation, back-extraction, centrifugation, derivatization, and extraction. Our sample preparation technique consists of only 2 steps (in situ derivatization and extraction), which require less than 3 min. This method provided high linearity, low limits of detection and quantification, high recovery, and high intra- and interday repeatability. It was found that an internal standard method with a diluted stable isotope (4-MeI-d6) and 2-ethylimidazole (2-EI) could not correctly compensate for the matrix effects. Thus, the standard addition technique was used for the quantification of 2- and 4-MeI. The established method was successfully applied to colas and dark beers for the determination of 2-MeI and 4-MeI. The 4-MeI contents in colas and dark beers ranged from 8 to 319 μg/L and from trace to 417 μg/L, respectively. A small quantity (0 to 8 μg/L) of 2-MeI was found only in dark beers. The contents of 4-MeI (22 μg/L) in colas obtained from fast food restaurants were significantly lower than those (177 μg/L) in canned or bottled colas. © 2017 Institute of Food Technologists®.
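
    Since standard addition was used for quantification, a worked miniature of that calculation may help: spike the sample with known analyte amounts, fit a line to signal versus added concentration, and read the native concentration off the x-intercept. The peak areas and spike levels below are invented, not data from the study.

        # Standard addition: estimate the native analyte concentration from the x-intercept
        # of a signal-versus-spiked-concentration line. All numbers are illustrative.
        import numpy as np

        added = np.array([0.0, 50.0, 100.0, 200.0])       # spiked 4-MeI [ug/L]
        signal = np.array([1.20, 1.78, 2.41, 3.58])       # GC-MS/MS peak areas (arbitrary units)

        slope, intercept = np.polyfit(added, signal, 1)
        native_conc = intercept / slope                    # magnitude of the x-intercept
        print(f"estimated native 4-MeI: {native_conc:.0f} ug/L")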

  15. A simple and fast method based on mixed hemimicelles coated magnetite nanoparticles for simultaneous extraction of acidic and basic pollutants.

    PubMed

    Asgharinezhad, Ali Akbar; Ebrahimzadeh, Homeira

    2016-01-01

    One of the considerable and debated areas in analytical chemistry is the single-step simultaneous extraction of acidic and basic pollutants. In this research, a simple and fast coextraction of acidic and basic pollutants (with different polarities) using magnetic dispersive micro-solid phase extraction based on mixed hemimicelle assembly was introduced for the first time. Cetyltrimethylammonium bromide (CTAB)-coated Fe3O4 nanoparticles were successfully applied as an efficient sorbent to adsorb 4-nitrophenol and 4-chlorophenol as two acidic model compounds and chlorinated aromatic amines as basic model compounds. Using a central composite design methodology combined with a desirability function approach, the optimal experimental conditions were evaluated. The selected conditions were pH = 10; concentration of CTAB = 0.86 mmol L(-1); sorbent amount = 55.5 mg; sorption time = 11.0 min; no salt addition to the sample; type and volume of the eluent = 120 μL methanol containing 5% acetic acid and 0.01 mol L(-1) HCl; and elution time = 1.0 min. Under the optimum conditions, detection limits and linear dynamic ranges were in the range of 0.05-0.1 and 0.25-500 μg L(-1), respectively. The extraction recoveries and relative standard deviations (n = 5) were in the range of 71.4-98.0% and 4.5-6.5%, respectively. The performance of the optimized method was confirmed by coextraction of other acidic and basic compounds. Ultimately, the applicability of the method was successfully demonstrated by the extraction and determination of the target analytes in various water samples, and satisfactory results were obtained.

  16. Accurately controlled sequential self-folding structures by polystyrene film

    NASA Astrophysics Data System (ADS)

    Deng, Dongping; Yang, Yang; Chen, Yong; Lan, Xing; Tice, Jesse

    2017-08-01

    Four-dimensional (4D) printing overcomes traditional fabrication limitations by designing heterogeneous materials that enable the printed structures to evolve over time (the fourth dimension) under external stimuli. Here, we present a simple 4D printing method for self-folding structures that can be sequentially and accurately folded. When heated above their glass transition temperature, pre-strained polystyrene films shrink in the XY plane. In our process, silver ink traces printed on the film provide the heat stimulus by conducting current to trigger the self-folding behavior. The parameters affecting the folding process are studied and discussed. Sequential folding and accurately controlled folding angles are achieved by using printed ink traces and an angle-lock design. Theoretical analyses are carried out to guide the design of the folding processes. Programmable structures such as a lock and a three-dimensional antenna are fabricated to test the feasibility and potential applications of this method. These self-folding structures change their shapes after fabrication under controlled stimuli (electric current) and have potential applications in the fields of electronics, consumer devices, and robotics. Our design and fabrication method provides an easy way, using silver ink printed on polystyrene films, to 4D print self-folding structures for electrically induced sequential folding with angular control.

  17. Fast restoration approach for motion blurred image based on deconvolution under the blurring paths

    NASA Astrophysics Data System (ADS)

    Shi, Yu; Song, Jie; Hua, Xia

    2015-12-01

    For real-time motion deblurring, it is of utmost importance to achieve a higher processing speed at about the same image quality. This paper presents a fast Richardson-Lucy motion deblurring approach that removes motion blur by rotating the blurred image along the blurring path. The computational time is thus reduced sharply by using the one-dimensional Fast Fourier Transform within a one-dimensional Richardson-Lucy method. In order to obtain accurate transformation results, an interpolation method is incorporated to fetch the gray values. Experimental results demonstrate that the proposed approach is efficient and effective in reducing motion blur along the blur paths.
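
    A hedged sketch of the building block involved: a one-dimensional Richardson-Lucy iteration, applied after the blurred image has been rotated so the motion path lies along one axis. Direct convolution stands in here for the paper's 1-D FFT implementation, and the signal and blur kernel are synthetic.

        # One-dimensional Richardson-Lucy deconvolution on a synthetic signal.
        import numpy as np

        def rl_deblur_1d(blurred, psf, n_iter=50):
            psf = psf / psf.sum()
            psf_mirror = psf[::-1]
            estimate = np.full_like(blurred, blurred.mean())
            for _ in range(n_iter):
                reblurred = np.convolve(estimate, psf, mode="same")
                ratio = blurred / np.maximum(reblurred, 1e-12)
                estimate *= np.convolve(ratio, psf_mirror, mode="same")
            return estimate

        x = np.zeros(128); x[40] = 1.0; x[40:60] += 0.2      # synthetic 1-D scene
        psf = np.ones(9) / 9.0                               # horizontal motion-blur kernel
        blurred = np.convolve(x, psf, mode="same")
        print(np.argmax(rl_deblur_1d(blurred, psf)))         # sharp peak recovered near index 40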

  18. Accurate and precise determination of isotopic ratios by MC-ICP-MS: a review.

    PubMed

    Yang, Lu

    2009-01-01

    For many decades the accurate and precise determination of isotope ratios has remained a very strong interest to many researchers due to its important applications in earth, environmental, biological, archeological, and medical sciences. Traditionally, thermal ionization mass spectrometry (TIMS) has been the technique of choice for achieving the highest accuracy and precision. However, recent developments in multi-collector inductively coupled plasma mass spectrometry (MC-ICP-MS) have brought a new dimension to this field. In addition to its simple and robust sample introduction, high sample throughput, and high mass resolution, the flat-topped peaks generated by this technique provide for accurate and precise determination of isotope ratios with precision reaching 0.001%, comparable to that achieved with TIMS. These features, in combination with the ability of the ICP source to ionize nearly all elements in the periodic table, have resulted in an increased use of MC-ICP-MS for such measurements in various sample matrices. To determine accurate and precise isotope ratios with MC-ICP-MS, utmost care must be exercised during sample preparation, optimization of the instrument, and mass bias corrections. Unfortunately, there are inconsistencies and errors evident in many MC-ICP-MS publications, including errors in mass bias correction models. This review examines "state-of-the-art" methodologies presented in the literature for achievement of precise and accurate determinations of isotope ratios by MC-ICP-MS. Some general rules for such accurate and precise measurements are suggested, and calculations of combined uncertainty of the data using a few common mass bias correction models are outlined.
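
    As a hedged illustration of one commonly used mass bias correction model discussed in such reviews, the exponential law, the sketch below derives the fractionation exponent f from a measured ratio of known (certified) value and applies it to the ratio of interest. The masses and ratios are illustrative numbers only, not recommended reference data.

        # Exponential-law mass bias correction: R_true = R_meas * (m1/m2)**f.
        import math

        def exponential_mass_bias(r_meas, m1, m2, f):
            return r_meas * (m1 / m2) ** f

        # 1) derive f from an internal reference pair with certified ratio r_cert
        r_cert, r_meas_ref, m_ref1, m_ref2 = 0.7325, 0.7219, 205.9745, 207.9766
        f = math.log(r_cert / r_meas_ref) / math.log(m_ref1 / m_ref2)

        # 2) apply it to the ratio of interest measured in the same session
        print(exponential_mass_bias(1.042, 206.9759, 207.9766, f))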

  19. Fast Numerical Methods for the Design of Layered Photonic Structures with Rough Interfaces

    NASA Technical Reports Server (NTRS)

    Komarevskiy, Nikolay; Braginsky, Leonid; Shklover, Valery; Hafner, Christian; Lawson, John

    2011-01-01

    Modified boundary conditions (MBC) and a multilayer approach (MA) are proposed as fast and efficient numerical methods for the design of 1D photonic structures with rough interfaces. These methods are applicable to structures composed of materials with an arbitrary permittivity tensor. MBC and MA are numerically validated on different types of interface roughness and permittivities of the constituent materials. The proposed methods can be combined with the 4x4 scattering matrix method as a field solver and an evolutionary strategy as an optimizer. The resulting optimization procedure is fast, accurate, numerically stable and can be used to design structures for various applications.

  20. Fast and simultaneous determination of 12 polyphenols in apple peel and pulp by using chemometrics-assisted high-performance liquid chromatography with diode array detection.

    PubMed

    Wang, Tong; Wu, Hai-Long; Xie, Li-Xia; Zhu, Li; Liu, Zhi; Sun, Xiao-Dong; Xiao, Rong; Yu, Ru-Qin

    2017-04-01

    In this work, a smart chemometrics-enhanced strategy, high-performance liquid chromatography with diode array detection coupled with a second-order calibration method based on the alternating trilinear decomposition algorithm, was proposed to simultaneously quantify 12 polyphenols in different kinds of apple peel and pulp samples. The proposed strategy proved to be a powerful tool for solving the problems of coelution, unknown interferences, and chromatographic shifts in high-performance liquid chromatography analysis, making it possible to determine 12 polyphenols in complex apple matrices within 10 min under simple elution conditions. The average recoveries with standard deviations, and figures of merit including sensitivity, selectivity, limit of detection, and limit of quantitation, were calculated to validate the accuracy of the proposed method. Compared with the quantitative results of the classic high-performance liquid chromatography method, statistical and graphical analysis showed that the proposed strategy gave more reliable results. All results indicated that the proposed method for the quantitative analysis of apple polyphenols is accurate, fast, universal, simple, and green, and it is expected to develop into an attractive alternative for the simultaneous determination of multi-targeted analytes in complex matrices. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Simple heuristic for the viscosity of polydisperse hard spheres

    NASA Astrophysics Data System (ADS)

    Farr, Robert S.

    2014-12-01

    We build on the work of Mooney [Colloids Sci. 6, 162 (1951)] to obtain a heuristic analytic approximation to the viscosity of a suspension of any size distribution of hard spheres in a Newtonian solvent. The result agrees reasonably well with rheological data on monodisperse and bidisperse hard spheres, and also provides an approximation to the random close packing fraction of polydisperse spheres. The implied packing fraction is less accurate than that obtained by Farr and Groot [J. Chem. Phys. 131(24), 244104 (2009)], but has the advantage of being quick and simple to evaluate.
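
    For context, Mooney's monodisperse expression, which this heuristic generalizes, is eta_r = exp(2.5 phi / (1 - phi / phi_max)). The sketch below only evaluates that starting-point formula with an assumed phi_max; it does not reproduce the paper's recursive treatment of arbitrary size distributions.

        # Mooney's relative viscosity for a monodisperse hard-sphere suspension,
        # with an assumed maximum packing fraction phi_max.
        import math

        def mooney_viscosity(phi, phi_max=0.64):
            return math.exp(2.5 * phi / (1.0 - phi / phi_max))

        for phi in (0.1, 0.3, 0.5):
            print(phi, round(mooney_viscosity(phi), 2))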

  2. Fast analysis of radionuclide decay chain migration

    NASA Astrophysics Data System (ADS)

    Chen, J. S.; Liang, C. P.; Liu, C. W.; Li, L.

    2014-12-01

    A novel tool for rapidly predicting the long-term plume behavior of a radionuclide decay chain of arbitrary length is presented in this study. The tool is based on generalized analytical solutions in compact form derived for a set of two-dimensional advection-dispersion equations coupled with sequential first-order decay reactions in a groundwater system. The performance of the developed tool is evaluated against a numerical model using a Laplace transform finite difference scheme. The results of the performance evaluation indicate that the developed model is robust and accurate. The developed model is then used to rapidly examine the transport behavior of a four-member radionuclide decay chain. Results show that the plume extent and concentration level of any target radionuclide are very sensitive to the longitudinal and transverse dispersion, the decay rate constant and the retardation factor. The developed model is a useful tool for rapidly assessing the ecological and environmental impact of accidental radionuclide releases such as the Fukushima nuclear disaster, where multiple radionuclides leaked from the reactor, subsequently contaminating the local groundwater and ocean seawater in the vicinity of the nuclear plant.

  3. Penetration of fast projectiles into resistant media: From macroscopic to subatomic projectiles

    NASA Astrophysics Data System (ADS)

    Gaite, José

    2017-09-01

    The penetration of a fast projectile into a resistant medium is a complex process that is suitable for simple modeling, in which basic physical principles can be profitably employed. This study connects two different domains: the fast motion of macroscopic bodies in resistant media and the interaction of charged subatomic particles with matter at high energies, which furnish the two limit cases of the problem of penetrating projectiles of different sizes. These limit cases actually have overlapping applications; for example, in space physics and technology. The intermediate or mesoscopic domain finds application in atom cluster implantation technology. Here it is shown that the penetration of fast nano-projectiles is ruled by a slightly modified Newton's inertial quadratic force, namely, F ∼ v^(2−β), where β vanishes as the inverse of projectile diameter. Factors essential to penetration depth are ratio of projectile to medium density and projectile shape.
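
    Taking the quoted force law at face value, the equation of motion m v dv/dx = -k v^(2-beta) integrates to a finite penetration depth D = m v0^beta / (k beta) for beta > 0. The sketch below checks that closed form against a crude numerical integration; the parameter values are arbitrary illustrations of the scaling, not results from the paper.

        # Closed-form penetration depth versus a crude Euler integration of
        # dv/dx = -(k/m) * v**(1 - beta). Parameters are arbitrary.
        m, k, beta, v0 = 1.0, 0.5, 0.2, 100.0

        d_analytic = m * v0**beta / (k * beta)

        v, x, dx = v0, 0.0, 1e-4
        while v > 1e-6:                  # stop at a small cutoff velocity
            v -= (k / m) * v**(1 - beta) * dx
            x += dx
        # the numerical depth falls slightly short of the analytic one because of the cutoff
        print(d_analytic, x)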

  4. FastScript3D - A Companion to Java 3D

    NASA Technical Reports Server (NTRS)

    Koenig, Patti

    2005-01-01

    FastScript3D is a computer program, written in the Java 3D(TM) programming language, that establishes an alternative language that helps users who lack expertise in Java 3D to use Java 3D for constructing three-dimensional (3D)-appearing graphics. The FastScript3D language provides a set of simple, intuitive, one-line text-string commands for creating, controlling, and animating 3D models. The first word in a string is the name of a command; the rest of the string contains the data arguments for the command. The commands can also be used as an aid to learning Java 3D. Developers can extend the language by adding custom text-string commands. The commands can define new 3D objects or load representations of 3D objects from files in formats compatible with such other software systems as X3D. The text strings can be easily integrated into other languages. FastScript3D facilitates communication between scripting languages [which enable programming of hyper-text markup language (HTML) documents to interact with users] and Java 3D. The FastScript3D language can be extended and customized on both the scripting side and the Java 3D side.

  5. Fast and precise thermoregulation system in physiological brain slice experiment

    NASA Astrophysics Data System (ADS)

    Sheu, Y. H.; Young, M. S.

    1995-12-01

    We have developed a fast and precise thermoregulation system incorporated within a physiological experiment on a brain slice. The thermoregulation system is used to control the temperature of a recording chamber in which the brain slice is placed. It consists of a single-chip microcomputer, a set command module, a display module, and an FLC module. A fuzzy control algorithm was developed and a fuzzy logic controller then designed for achieving fast, smooth thermostatic performance and providing precise temperature control with an accuracy of 0.1 °C, from room temperature through 42 °C (the experimental temperature range). The fuzzy logic controller is implemented by microcomputer software and related peripheral hardware circuits. Six operating modes of thermoregulation are offered by the system and can be further extended according to experimental needs. The test results of this study demonstrate that the fuzzy control method is easily implemented by a microcomputer and also verify that this method provides a simple way to achieve fast and precise high-performance control of a nonlinear thermoregulation system in a physiological brain slice experiment.

  6. Purification of pharmaceutical preparations using thin-layer chromatography to obtain mass spectra with Direct Analysis in Real Time and accurate mass spectrometry.

    PubMed

    Wood, Jessica L; Steiner, Robert R

    2011-06-01

    Forensic analysis of pharmaceutical preparations requires a comparative analysis with a standard of the suspected drug in order to identify the active ingredient. Analytical standards can be expensive to purchase or unobtainable from the drug manufacturers. Direct Analysis in Real Time (DART™) is a novel, ambient ionization technique, typically coupled with a JEOL AccuTOF™ (accurate mass) mass spectrometer. While a fast and easy technique to perform, a drawback of using DART™ is the lack of component separation of mixtures prior to ionization. Various in-house pharmaceutical preparations were purified using thin-layer chromatography (TLC) and mass spectra were subsequently obtained using the AccuTOF™-DART™ technique. Utilizing TLC prior to sample introduction provides a simple, low-cost solution to acquiring mass spectra of the purified preparation. Each spectrum was compared against an in-house molecular formula list to confirm the accurate mass elemental compositions. Spectra of purified ingredients of known pharmaceuticals were added to an in-house library for use as comparators for casework samples. Resolving isomers from one another can be accomplished using collision-induced dissociation after ionization. Challenges arose when the pharmaceutical preparation required an optimized TLC solvent to achieve proper separation and purity of the standard. Purified spectra were obtained for 91 preparations and included in an in-house drug standard library. Primary standards would only need to be purchased when pharmaceutical preparations not previously encountered are submitted for comparative analysis. TLC prior to DART™ analysis demonstrates a time-efficient and cost-saving technique for the forensic drug analysis community. Copyright © 2011 John Wiley & Sons, Ltd.

  7. Design and Calibration of a Dispersive Imaging Spectrometer Adaptor for a Fast IR Camera on NSTX-U

    NASA Astrophysics Data System (ADS)

    Reksoatmodjo, Richard; Gray, Travis; Princeton Plasma Physics Laboratory Team

    2017-10-01

    A dispersive spectrometer adaptor was designed, constructed and calibrated for use on a fast infrared camera employed to measure temperatures on the lower divertor tiles of the NSTX-U tokamak. This adaptor efficiently and evenly filters and distributes long-wavelength infrared photons between 8.0 and 12.0 microns across the 128x128 pixel detector of the fast IR camera. By determining the width of these separated wavelength bands across the camera detector, and then determining the corresponding average photon count for each photon wavelength, the temperature, and thus the heat flux, of the divertor tiles can be calculated very accurately using Planck's law. This approach of designing an exterior dispersive adaptor for the fast IR camera allows accurate temperature measurements to be made of materials with unknown emissivity. Further, the relative simplicity and affordability of this adaptor design provides an attractive option over more expensive, slower, dispersive IR camera systems. This work was made possible by funding from the Department of Energy for the Summer Undergraduate Laboratory Internship (SULI) program. This work is supported by the US DOE Contract No. DE-AC02-09CH11466.

  8. A simple, fast and cheap non-SPE screening method for antibacterial residue analysis in milk and liver using liquid chromatography-tandem mass spectrometry.

    PubMed

    Martins, Magda Targa; Melo, Jéssica; Barreto, Fabiano; Hoff, Rodrigo Barcellos; Jank, Louise; Bittencourt, Michele Soares; Arsand, Juliana Bazzan; Schapoval, Elfrides Eva Scherman

    2014-11-01

    In routine laboratory work, screening methods for multiclass analysis can process a large number of samples in a short time. The main challenge is to develop a methodology to detect as many different classes of residues as possible, combined with speed and low cost. An efficient technique for the analysis of multiclass antibacterial residues (fluoroquinolones, tetracyclines, sulfonamides and trimethoprim) was developed based on simple, environment-friendly extraction for bovine milk, cattle and poultry liver. Acidified ethanol was used as an extracting solvent for milk samples. Liver samples were treated using EDTA-washed sand for cell disruption, methanol:water and acidified acetonitrile as extracting solvent. A total of 24 antibacterial residues were detected and confirmed using liquid chromatography coupled to tandem mass spectrometry (LC-MS/MS), at levels between 10, 25 and 50% of the maximum residue limit (MRL). For liver samples a metabolite (sulfaquinoxaline-OH) was also monitored. A validation procedure was conducted for screening purposes in accordance with European Union requirements (2002/657/EC). The detection capability (CCβ) false compliant rate was less than 5% at the lowest level for each residue. Specificity and ruggedness were also discussed. Incurred and routine samples were analyzed and the method was successfully applied. The results proved that this method can be an important tool in routine analysis, since it is very fast and reliable. Copyright © 2014. Published by Elsevier B.V.

  9. Determination of Caffeine in Beverages by High Performance Liquid Chromatography.

    ERIC Educational Resources Information Center

    DiNunzio, James E.

    1985-01-01

    Describes the equipment, procedures, and results for the determination of caffeine in beverages by high performance liquid chromatography. The method is simple, fast, accurate, and, because sample preparation is minimal, it is well suited for use in a teaching laboratory. (JN)

  10. Simple and accurate quantification of BTEX in ambient air by SPME and GC-MS.

    PubMed

    Baimatova, Nassiba; Kenessov, Bulat; Koziel, Jacek A; Carlsen, Lars; Bektassov, Marat; Demyanenko, Olga P

    2016-07-01

    Benzene, toluene, ethylbenzene and xylenes (BTEX) comprise one of the most ubiquitous and hazardous groups of ambient air pollutants of concern. Application of standard analytical methods for quantification of BTEX is limited by the complexity of sampling and sample preparation equipment, and budget requirements. Methods based on SPME represent a simpler alternative, but still require complex calibration procedures. The objective of this research was to develop a simpler, low-budget, and accurate method for quantification of BTEX in ambient air based on SPME and GC-MS. Standard 20-mL headspace vials were used for field air sampling and calibration. To avoid challenges with obtaining and working with 'zero' air, slope factors of external standard calibration were determined using standard addition and inherently polluted lab air. For the polydimethylsiloxane (PDMS) fiber, differences between the slope factors of calibration plots obtained using lab and outdoor air were below 14%. The PDMS fiber provided higher precision during calibration, while the use of the Carboxen/PDMS fiber resulted in lower detection limits for benzene and toluene. To provide sufficient accuracy, the use of 20-mL vials requires triplicate sampling and analysis. The method was successfully applied to the analysis of 108 ambient air samples from Almaty, Kazakhstan. Average concentrations of benzene, toluene, ethylbenzene and o-xylene were 53, 57, 11 and 14 µg m(-3), respectively. The developed method can be modified for further quantification of a wider range of volatile organic compounds in air. In addition, the new method is amenable to automation. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Fusion of microlitre water-in-oil droplets for simple, fast and green chemical assays.

    PubMed

    Chiu, S-H; Urban, P L

    2015-08-07

    A simple format for microscale chemical assays is proposed. It does not require the use of test tubes, microchips or microtiter plates. Microlitre-range (ca. 0.7-5.0 μL) aqueous droplets are generated by a commercial micropipette in a non-polar matrix inside a Petri dish. When two droplets are pipetted nearby, they spontaneously coalesce within seconds, priming a chemical reaction. Detection of the reaction product is accomplished by colorimetry, spectrophotometry, or fluorimetry using simple light-emitting diode (LED) arrays as the sources of monochromatic light, while chemiluminescence detection of the analytes present in single droplets is conducted in the dark. A smartphone camera is used as the detector. The limits of detection obtained for the developed in-droplet assays are estimated to be: 1.4 nmol (potassium permanganate by colorimetry), 1.4 pmol (fluorescein by fluorimetry), and 580 fmol (sodium hypochlorite by chemiluminescence detection). The format has successfully been used to monitor the progress of chemical and biochemical reactions over time with sub-second resolution. A semi-quantitative analysis of ascorbic acid using Tillman's reagent is presented. A few tens of individual droplets can be scanned in parallel. Rapid switching of the LED light sources with different wavelengths enables a spectral analysis of multiple droplets. Very little solid waste is produced. The assay matrix is readily recycled, thus the volume of liquid waste produced each time is also very small (typically, 1-10 μL per analysis). Various water-immiscible translucent liquids can be used as the reaction matrix: including silicone oil, 1-octanol as well as soybean cooking oil.

  12. Efficient and accurate causal inference with hidden confounders from genome-transcriptome variation data

    PubMed Central

    2017-01-01

    Mapping gene expression as a quantitative trait using whole genome-sequencing and transcriptome analysis makes it possible to discover the functional consequences of genetic variation. We developed a novel method and ultra-fast software, Findr, for highly accurate causal inference between gene expression traits using cis-regulatory DNA variations as causal anchors, which improves on current methods by taking into consideration hidden confounders and weak regulations. Findr outperformed existing methods on the DREAM5 Systems Genetics challenge and on the prediction of microRNA and transcription factor targets in human lymphoblastoid cells, while being nearly a million times faster. Findr is publicly available at https://github.com/lingfeiwang/findr. PMID:28821014

  13. Electron theory of fast and ultrafast dissipative magnetization dynamics.

    PubMed

    Fähnle, M; Illg, C

    2011-12-14

    For metallic magnets we review the experimental and electron-theoretical investigations of fast magnetization dynamics (on a timescale of ns to 100 ps) and of laser-pulse-induced ultrafast dynamics (few hundred fs). It is argued that for both situations the dominant contributions to the dissipative part of the dynamics arise from the excitation of electron-hole pairs and from the subsequent relaxation of these pairs by spin-dependent scattering processes, which transfer angular momentum to the lattice. By effective field theories (generalized breathing and bubbling Fermi-surface models) it is shown that the Gilbert equation of motion, which is often used to describe the fast dissipative magnetization dynamics, must be extended in several aspects. The basic assumptions of the Elliott-Yafet theory, which is often used to describe the ultrafast spin relaxation after laser-pulse irradiation, are discussed very critically. However, it is shown that for Ni this theory probably yields a value for the spin-relaxation time T(1) in good agreement with the experimental value. A relation between the quantity α characterizing the damping of the fast dynamics in simple situations and the time T(1) is derived. © 2011 IOP Publishing Ltd

  14. A Quasiphysics Intelligent Model for a Long Range Fast Tool Servo

    PubMed Central

    Liu, Qiang; Zhou, Xiaoqin; Lin, Jieqiong; Xu, Pengzi; Zhu, Zhiwei

    2013-01-01

    Accurately modeling the dynamic behaviors of a fast tool servo (FTS) is one of the key issues in the ultraprecision positioning of the cutting tool. Herein, a quasiphysics intelligent model (QPIM) integrating a linear physics model (LPM) and a radial basis function (RBF) based neural model (NM) is developed to accurately describe the dynamic behaviors of a voice coil motor (VCM) actuated long range fast tool servo (LFTS). To identify the parameters of the LPM, a novel Opposition-based Self-adaptive Replacement Differential Evolution (OSaRDE) algorithm is proposed, which is shown to converge faster without compromising solution quality and to outperform similar evolutionary algorithms considered for comparison. The modeling errors of the LPM and the QPIM are investigated by experiments. The modeling error of the LPM presents an obvious trend component, about ±1.15% of the full span range, verifying the efficiency of the proposed OSaRDE algorithm for system identification. As for the QPIM, the trend component in the residual error of the LPM can be well suppressed, and the error of the QPIM remains at the noise level. All the results verify the efficiency and superiority of the proposed modeling and identification approaches. PMID:24163627

  15. Simple to complex modeling of breathing volume using a motion sensor.

    PubMed

    John, Dinesh; Staudenmayer, John; Freedson, Patty

    2013-06-01

    To compare simple and complex modeling techniques to estimate categories of low, medium, and high ventilation (VE) from ActiGraph™ activity counts. Vertical axis ActiGraph™ GT1M activity counts, oxygen consumption and VE were measured during treadmill walking and running, sports, household chores and labor-intensive employment activities. Categories of low (<19.3 l/min), medium (19.3 to 35.4 l/min) and high (>35.4 l/min) VEs were derived from activity intensity classifications (light <2.9 METs, moderate 3.0 to 5.9 METs and vigorous >6.0 METs). We examined the accuracy of two simple techniques (multiple regression and activity count cut-point analyses) and one complex (random forest technique) modeling technique in predicting VE from activity counts. Prediction accuracy of the complex random forest technique was marginally better than the simple multiple regression method. Both techniques accurately predicted VE categories almost 80% of the time. The multiple regression and random forest techniques were more accurate (85 to 88%) in predicting medium VE. Both techniques predicted the high VE (70 to 73%) with greater accuracy than low VE (57 to 60%). Actigraph™ cut-points for light, medium and high VEs were <1381, 1381 to 3660 and >3660 cpm. There were minor differences in prediction accuracy between the multiple regression and the random forest technique. This study provides methods to objectively estimate VE categories using activity monitors that can easily be deployed in the field. Objective estimates of VE should provide a better understanding of the dose-response relationship between internal exposure to pollutants and disease. Copyright © 2013 Elsevier B.V. All rights reserved.
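
    The reported ActiGraph cut-points translate directly into a tiny classifier, sketched below with invented example counts; the thresholds are those quoted in the abstract, and the category labels carry the corresponding VE ranges.

        # Map ActiGraph vertical-axis counts per minute to VE categories using the
        # cut-points reported in the abstract (<1381, 1381-3660, >3660 cpm).
        def ve_category(counts_per_minute):
            if counts_per_minute < 1381:
                return "low VE (<19.3 L/min)"
            if counts_per_minute <= 3660:
                return "medium VE (19.3-35.4 L/min)"
            return "high VE (>35.4 L/min)"

        for cpm in (500, 2200, 5100):   # invented example counts
            print(cpm, ve_category(cpm))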

  16. Two simple models of classical heat pumps.

    PubMed

    Marathe, Rahul; Jayannavar, A M; Dhar, Abhishek

    2007-03-01

    Motivated by recent studies of models of particle and heat quantum pumps, we study similar simple classical models and examine the possibility of heat pumping. Unlike many of the usual ratchet models of molecular engines, the models we study do not have particle transport. We consider a two-spin system and a coupled oscillator system which exchange heat with multiple heat reservoirs and which are acted upon by periodic forces. The simplicity of our models allows accurate numerical and exact solutions and unambiguous interpretation of results. We demonstrate that while both our models seem to be built on similar principles, one is able to function as a heat pump (or engine) while the other is not.

  17. Fasting for weight loss: an effective strategy or latest dieting trend?

    PubMed

    Johnstone, A

    2015-05-01

    With the increasing obesity epidemic comes the search for effective dietary approaches for calorie restriction and weight loss. Here I examine whether fasting is the latest 'fad diet' as portrayed in popular media and discuss whether it is a safe and effective approach or whether it is an idiosyncratic diet trend that promotes short-term weight loss, with no concern for long-term weight maintenance. Fasting has long been used under historical and experimental conditions and has recently been popularised by 'intermittent fasting' or 'modified fasting' regimes, in which a very low-calorie allowance is allowed, on alternate days (ADF) or 2 days a week (5:2 diet), where 'normal' eating is resumed on non-diet days. It is a simple concept, which makes it easy to follow with no difficult calorie counting every other day. This approach does seem to promote weight loss, but is linked to hunger, which can be a limiting factor for maintaining food restriction. The potential health benefits of fasting can be related to both the acute food restriction and chronic influence of weight loss; the long-term effect of chronic food restriction in humans is not yet clear, but may be a potentially interesting future dietary strategy for longevity, particularly given the overweight epidemic. One approach does not fit all in the quest to achieve body weight control, but this could be a dietary strategy for consideration. With the obesity epidemic comes the search for dietary strategies to (i) prevent weight gain, (ii) promote weight loss and (iii) prevent weight regain. With over half of the population of the United Kingdom and other developed countries being collectively overweight or obese, there is considerable pressure to achieve these goals, from both a public health and a clinical perspective. Certainly not one dietary approach will solve these complex problems. Although there is some long-term success with gastric surgical options for morbid obesity, there is still a requirement

  18. Fast and accurate imputation of summary statistics enhances evidence of functional enrichment.

    PubMed

    Pasaniuc, Bogdan; Zaitlen, Noah; Shi, Huwenbo; Bhatia, Gaurav; Gusev, Alexander; Pickrell, Joseph; Hirschhorn, Joel; Strachan, David P; Patterson, Nick; Price, Alkes L

    2014-10-15

    Imputation using external reference panels (e.g. 1000 Genomes) is a widely used approach for increasing power in genome-wide association studies and meta-analysis. Existing hidden Markov models (HMM)-based imputation approaches require individual-level genotypes. Here, we develop a new method for Gaussian imputation from summary association statistics, a type of data that is becoming widely available. In simulations using 1000 Genomes (1000G) data, this method recovers 84% (54%) of the effective sample size for common (>5%) and low-frequency (1-5%) variants [increasing to 87% (60%) when summary linkage disequilibrium information is available from target samples] versus the gold standard of 89% (67%) for HMM-based imputation, which cannot be applied to summary statistics. Our approach accounts for the limited sample size of the reference panel, a crucial step to eliminate false-positive associations, and it is computationally very fast. As an empirical demonstration, we apply our method to seven case-control phenotypes from the Wellcome Trust Case Control Consortium (WTCCC) data and a study of height in the British 1958 birth cohort (1958BC). Gaussian imputation from summary statistics recovers 95% (105%) of the effective sample size (as quantified by the ratio of χ2 association statistics) compared with HMM-based imputation from individual-level genotypes at the 227 (176) published single nucleotide polymorphisms (SNPs) in the WTCCC (1958BC height) data. In addition, for publicly available summary statistics from large meta-analyses of four lipid traits, we publicly release imputed summary statistics at 1000G SNPs, which could not have been obtained using previously published methods, and demonstrate their accuracy by masking subsets of the data. We show that 1000G imputation using our approach increases the magnitude and statistical evidence of enrichment at genic versus non-genic loci for these traits, as compared with an analysis without 1000G
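
    A hedged sketch of the core linear-algebra step in Gaussian imputation of summary statistics: given an LD (correlation) matrix from a reference panel, z-scores at untyped SNPs are imputed from typed ones via the conditional Gaussian mean, with a small ridge term acknowledging the panel's finite sample size. The LD matrix, SNP sets and z-scores below are synthetic placeholders, and the regularization constant is an assumption, not the published tuning.

        # Impute z-scores at untyped SNPs from typed ones under a reference LD matrix.
        import numpy as np

        rng = np.random.default_rng(6)
        n_snp = 50
        A = rng.normal(size=(200, n_snp))
        Sigma = np.corrcoef(A, rowvar=False)            # stand-in reference-panel LD matrix

        typed = np.arange(0, n_snp, 2)                  # SNPs with observed summary stats
        untyped = np.arange(1, n_snp, 2)
        z_typed = rng.normal(size=typed.size)

        lam = 0.1                                       # ridge term for finite panel size (assumed)
        W = Sigma[np.ix_(untyped, typed)] @ np.linalg.inv(
            Sigma[np.ix_(typed, typed)] + lam * np.eye(typed.size))
        z_imputed = W @ z_typed
        info = np.diag(W @ Sigma[np.ix_(typed, untyped)])   # per-SNP imputation quality proxy
        print(z_imputed[:5], info[:5])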

  19. Fast and accurate imputation of summary statistics enhances evidence of functional enrichment

    PubMed Central

    Pasaniuc, Bogdan; Zaitlen, Noah; Shi, Huwenbo; Bhatia, Gaurav; Gusev, Alexander; Pickrell, Joseph; Hirschhorn, Joel; Strachan, David P.; Patterson, Nick; Price, Alkes L.

    2014-01-01

    Motivation: Imputation using external reference panels (e.g. 1000 Genomes) is a widely used approach for increasing power in genome-wide association studies and meta-analysis. Existing hidden Markov models (HMM)-based imputation approaches require individual-level genotypes. Here, we develop a new method for Gaussian imputation from summary association statistics, a type of data that is becoming widely available. Results: In simulations using 1000 Genomes (1000G) data, this method recovers 84% (54%) of the effective sample size for common (>5%) and low-frequency (1–5%) variants [increasing to 87% (60%) when summary linkage disequilibrium information is available from target samples] versus the gold standard of 89% (67%) for HMM-based imputation, which cannot be applied to summary statistics. Our approach accounts for the limited sample size of the reference panel, a crucial step to eliminate false-positive associations, and it is computationally very fast. As an empirical demonstration, we apply our method to seven case–control phenotypes from the Wellcome Trust Case Control Consortium (WTCCC) data and a study of height in the British 1958 birth cohort (1958BC). Gaussian imputation from summary statistics recovers 95% (105%) of the effective sample size (as quantified by the ratio of χ2 association statistics) compared with HMM-based imputation from individual-level genotypes at the 227 (176) published single nucleotide polymorphisms (SNPs) in the WTCCC (1958BC height) data. In addition, for publicly available summary statistics from large meta-analyses of four lipid traits, we publicly release imputed summary statistics at 1000G SNPs, which could not have been obtained using previously published methods, and demonstrate their accuracy by masking subsets of the data. We show that 1000G imputation using our approach increases the magnitude and statistical evidence of enrichment at genic versus non-genic loci for these traits, as compared with an analysis

  20. Diagnostic Performance of 48-Hour Fasting Test and Insulin Surrogates in Patients With Suspected Insulinoma.

    PubMed

    Ueda, Keijiro; Kawabe, Ken; Lee, Lingaku; Tachibana, Yuichi; Fujimori, Nao; Igarashi, Hisato; Oda, Yoshinao; Jensen, Robert T; Takayanagi, Ryoichi; Ito, Tetsuhide

    2017-04-01

    This study aimed to evaluate the usefulness of the 48-hour fasting test and insulin surrogates followed by a glucagon stimulatory test (GST) for the diagnosis of insulinoma. Thirty-five patients with suspected insulinoma who underwent the 48-hour fasting test and GST were retrospectively included in our study: 15 patients with surgically proven insulinomas and 20 patients in whom insulinoma was clinically ruled out. We determined the duration of the fasting test, plasma glucose levels, serum levels of immunoreactive insulin and C-peptide, and insulin surrogates (serum levels of β-hydroxybutyrate, free fatty acid, and the response of plasma glucose to intravenous glucagon [ΔPG]) at the end of the fast. The sensitivity and specificity of the 48-hour fasting test were 100.0% and 80.0%, respectively, for the diagnosis of insulinoma. When the 48-hour fasting test was combined with immunoreactive insulin, C-peptide, or the insulin surrogates, the combination with GST showed the best results: sensitivity, specificity, and accuracy were 93.3%, 95.0%, and 94.3%, respectively, with one false-negative and one false-positive case. A more accurate and less invasive diagnosis of insulinoma was possible by combining the 48-hour fasting test with the GST, compared with the existing method.
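    The reported figures for the combined test follow directly from the stated counts (15 insulinomas with one false negative, 20 non-insulinoma patients with one false positive); a minimal arithmetic check:

```python
# Reproduce the reported diagnostic figures from the stated counts:
# 15 insulinoma patients (1 false negative) and 20 controls (1 false positive).
tp, fn = 14, 1
tn, fp = 19, 1

sensitivity = tp / (tp + fn)                 # 14/15 ≈ 0.933
specificity = tn / (tn + fp)                 # 19/20 = 0.950
accuracy = (tp + tn) / (tp + tn + fp + fn)   # 33/35 ≈ 0.943

print(f"sensitivity={sensitivity:.1%}, specificity={specificity:.1%}, accuracy={accuracy:.1%}")
```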

  1. Unexpectedly increased anorexigenic postprandial responses of PYY and GLP-1 to fast ice cream consumption in adult patients with Prader-Willi syndrome.

    PubMed

    Rigamonti, A E; Bini, S; Grugni, G; Agosti, F; De Col, A; Mallone, M; Cella, S G; Sartorio, A

    2014-10-01

    The effect of eating rate on the release of anorexigenic gut peptides in Prader-Willi syndrome (PWS), a neurogenetic disorder clinically characterized by hyperphagia and excessive obesity, has not been investigated so far. Postprandial PYY and GLP-1 responses to fast (5 min) and slow (30 min) ice cream consumption were measured in adult PWS patients, age-matched patients with simple obesity and normal-weight subjects. Visual analog scales (VASs) were used to evaluate the subjective feelings of hunger and satiety. Fast ice cream consumption stimulated GLP-1 release in normal subjects, with a greater increase observed after slow feeding. Neither fast nor slow feeding changed circulating levels of GLP-1 in obese patients, while, unexpectedly, fast feeding (but not slow feeding) stimulated GLP-1 release in PWS patients. Plasma PYY concentrations increased in all groups, irrespective of the eating rate. Slow feeding was more effective in stimulating PYY release in normal subjects, whereas fast feeding was more effective in PWS patients. Slow feeding evoked lower hunger and higher satiety compared with fast feeding in normal subjects, a finding not evident in obese patients. Unexpectedly, fast feeding evoked lower hunger and higher satiety in PWS patients in comparison with slow feeding. Fast feeding leads to higher concentrations of anorexigenic gut peptides and favours satiety in adult PWS patients, a pattern not evident in age-matched patients with simple obesity, suggesting the existence of a different pathophysiological substrate in these two clinical conditions. © 2013 John Wiley & Sons Ltd.

  2. Fast Responding Oxygen Sensor For Respiratorial Analysis

    NASA Astrophysics Data System (ADS)

    Karpf, Hellfried H.; Kroneis, H. W.; Marsoner, Hermann J.; Metzler, H.; Gravenstein, N.

    1990-02-01

    Breath-by-breath monitoring of the partial pressure of oxygen is the main motivation for the development of a fast responding optical oxygen sensor. Monitoring of PO2 is of primary interest in critical care, in artificial respiration, in breath-by-breath determination of respiratory coefficients and in pulmonary examinations. The requirements arising from these and similar applications are high precision, high long-term stability, and time constants of less than 0.1 s. To meet these requirements, we investigated different approaches to fast PO2 measurement by means of optical sensors based on fluorescence quenching. The experimental set-up is simple: a rigid transparent layer is coated with a thin layer of a hydrophobic polymer with high permeability for oxygen, and the oxygen-sensitive indicator material is embedded in this polymer. An experimental set-up showed time constants of 30 milliseconds. The lifetime is in the range of several months. Testing of our equipment by an independent working group showed surprisingly good correlation with data obtained by mass spectrometry.
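    The abstract does not state the calibration law, but optical oxygen sensors based on fluorescence quenching are conventionally described by the Stern-Volmer relation, recalled below for orientation (an assumption about this sensor, not a statement from the paper; I_0 and tau_0 are the unquenched intensity and lifetime, K_SV the quenching constant):

```latex
% Standard Stern-Volmer relation for collisional oxygen quenching:
\[
  \frac{I_0}{I} \;=\; \frac{\tau_0}{\tau} \;=\; 1 + K_{\mathrm{SV}}\, p\mathrm{O}_2
\]
```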

  3. Fast temporal correlation between hard X-ray and ultraviolet continuum brightenings

    NASA Technical Reports Server (NTRS)

    Machado, Marcos E.; Mauas, Pablo J.

    1986-01-01

    Recent Solar Maximum Mission (SMM) observations have shown fast and simultaneous increases in hard X-ray (HXR, E > 25 keV) and ultraviolet continuum (UVC, λλ ≈ 1600 and 1388 Å) radiation. A simple and natural explanation is given for this phenomenon, one that does not invoke extreme conditions for energy transport processes and that confirms earlier results on the effect of XUV photoionization in the solar atmosphere.

  4. Kinetics of Fast Atoms in the Terrestrial Atmosphere

    NASA Technical Reports Server (NTRS)

    Kharchenko, Vasili A.; Dalgarno, A.; Mellott, Mary (Technical Monitor)

    2002-01-01

    This report summarizes our investigations performed under NASA Grant NAG5-8058. The three-year research effort, supported by the Geospace Sciences SR&T program (Ionospheric, Thermospheric, and Mesospheric Physics), was designed to investigate fluxes of energetic oxygen and nitrogen atoms in the terrestrial thermosphere. Fast atoms are produced by absorption of solar radiation and by coupling between the ionosphere and the neutral thermospheric gas. We have investigated the impact of hot oxygen and nitrogen atoms on the thermal balance, chemistry and radiation properties of the terrestrial thermosphere. Our calculations have focused on an accurate quantitative description of the thermalization of energetic O and N atoms in collisions with atoms and molecules of the ambient neutral gas. Upward fluxes of oxygen and nitrogen atoms, the rate of atmospheric heating by hot oxygen atoms, and the energy input into translational and rotational-vibrational degrees of freedom of atmospheric molecules have been evaluated. Altitude profiles of hot oxygen and nitrogen atoms have been analyzed and compared with available observational data. Energetic oxygen atoms in the terrestrial atmosphere have been investigated for decades, but insufficient information on the kinetics of fast atmospheric atoms has been a main obstacle to the interpretation of observational data and modeling of the hot geocorona. The recent development of accurate computational methods for collisional kinetics is an important step in the quantitative description of hot atoms in the thermosphere. Modeling of relaxation processes in the terrestrial atmosphere has incorporated data from recent observations, and theoretical predictions have been tested by new laboratory measurements.

  5. A simple magic cup to inject excitement and curiosity in physics

    NASA Astrophysics Data System (ADS)

    Amir, Nazir

    2018-05-01

    This article highlights a simple demonstration kit that can be easily fabricated in Design & Technology (D&T) workshops to inject excitement and curiosity into students’ learning of physics concepts such as density and optics. Using an ice cream cup from a fast food restaurant and a transparent circular acrylic piece, students can be guided to make a ‘magic’ cup, while at the same time get inquisitive about the physics behind the magic. The project highlights a way of linking physics to D&T in a feasible manner which can motivate and engage students.

  6. Real-time detection of fast and thermal neutrons in radiotherapy with CMOS sensors.

    PubMed

    Arbor, Nicolas; Higueret, Stephane; Elazhar, Halima; Combe, Rodolphe; Meyer, Philippe; Dehaynin, Nicolas; Taupin, Florence; Husson, Daniel

    2017-03-07

    The peripheral dose distribution is a growing concern in the development of new external-beam radiation modalities. Secondary particles, especially photo-neutrons produced by the accelerator, irradiate the patient tens of centimeters or more away from the tumor volume. However, the out-of-field dose is still not accurately estimated by treatment planning software. This study demonstrates the possibility of using a specially designed CMOS sensor for fast and thermal neutron monitoring in radiotherapy. The 14 µm-thick sensitive layer and the integrated electronic chain of the CMOS are particularly suitable for real-time measurements in γ/n mixed fields. An experimental field-size dependency of the fast neutron production rate, supported by Monte Carlo simulations and CR-39 data, has been observed. This dependency points out the potential benefits of real-time monitoring of fast and thermal neutrons during intensity-modulated radiation therapy.

  7. Simple apparatus for polarization sensing of analytes

    NASA Astrophysics Data System (ADS)

    Gryczynski, Zygmunt; Gryczynski, Ignacy; Lakowicz, Joseph R.

    2000-09-01

    We describe a simple device for fluorescence sensing based on an inexpensive light source, a dual photocell and a Watson bridge. The emission is detected from two fluorescent samples, one of which changes intensity in response to the analyte. The emission from these two samples is observed through two orthogonally oriented polarizers and an analyzer polarizer. The latter polarizer is rotated to yield equal intensities from both sides of the dual photocell, as determined by a zero voltage from the Watson bridge. Using this device, we are able to measure fluorescein concentration to an accuracy near 2% at 1 µM fluorescein, and pH values accurate to ±0.02 pH units. We also use this approach with a UV hand lamp and a glucose-sensitive protein to measure glucose concentrations near 2 µM to an accuracy of ±0.1 µM. This approach requires only simple electronics, which can be battery powered. Additionally, the method is generic and can be applied with any fluorescent sample that displays a change in intensity. One can imagine this approach being used to develop portable point-of-care clinical devices.

  8. Simple X-ray versus ultrasonography examination in blunt chest trauma: effective tools of accurate diagnosis and considerations for rib fractures.

    PubMed

    Hwang, Eun Gu; Lee, Yunjung

    2016-12-01

    Simple radiography is the standard diagnostic tool for rib fractures caused by chest trauma, but it has some limitations, so other tools are also being used. The aims of this study were to investigate the effectiveness of ultrasonography (US) for identifying rib fractures and to identify factors influencing its effectiveness. Between October 2003 and August 2007, 201 patients with blunt chest trauma underwent chest radiographic and US examinations for diagnosis of rib fractures. The two modalities were compared in terms of effectiveness based on simple radiographic readings and US examination results. We also investigated the factors that influenced the effectiveness of US examination. Rib fractures were detected on radiography in 69 patients (34.3%) but not in the remaining 132 patients. Rib fractures were diagnosed by US examination in 160 patients (84.6%). Of the 132 patients who showed no rib fractures on radiography, 92 showed rib fractures on US. Among the 69 patients with rib fractures detected on radiography, 33 had additional rib fractures detected on US. Of all patients, 76 (37.8%) had identical radiographic and US results, and 125 (62.2%) had fractures detected on US that were previously undetected on radiography or additional fractures detected on US. Age, time until US examination, and fracture location were not significant influencing factors. However, US was significantly more effective in the group without fractures detected on radiography than in the group with fractures detected on radiography (P = 0.003). US examination could detect rib fractures missed on simple radiography and is especially effective in patients without fractures detected on radiography. More attention should be paid to patients with chest trauma who have no detected fractures on radiography.

  9. High-resolution nuclear magnetic resonance measurements in inhomogeneous magnetic fields: A fast two-dimensional J-resolved experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Yuqing; Cai, Shuhui; Yang, Yu

    2016-03-14

    High spectral resolution in nuclear magnetic resonance (NMR) is a prerequisite for achieving accurate information relevant to molecular structures and composition assignments. The continuous development of superconducting magnets guarantees strong and homogeneous static magnetic fields for satisfactory spectral resolution. However, there exist circumstances, such as measurements on biological tissues and heterogeneous chemical samples, where the field homogeneity is degraded and spectral line broadening seems inevitable. Here we propose an NMR method, named intermolecular zero-quantum coherence J-resolved spectroscopy (iZQC-JRES), to face the challenge of field inhomogeneity and obtain the desired high-resolution two-dimensional J-resolved spectra with fast acquisition. Theoretical analyses for this method are given according to the intermolecular multiple-quantum coherence treatment. Experiments on (a) a simple chemical solution and (b) an aqueous solution of mixed metabolites under externally deshimmed fields, and on (c) a table grape sample with intrinsic field inhomogeneity from magnetic susceptibility variations, demonstrate the feasibility and applicability of the iZQC-JRES method. The application of this method to inhomogeneous chemical and biological samples, perhaps including in vivo samples, appears promising.

  10. Fast Two-Dimensional Bubble Analysis of Biopolymer Filamentous Networks Pore Size from Confocal Microscopy Thin Data Stacks

    PubMed Central

    Molteni, Matteo; Magatti, Davide; Cardinali, Barbara; Rocco, Mattia; Ferri, Fabio

    2013-01-01

    The average pore size ξ0 of filamentous networks assembled from biological macromolecules is one of the most important physical parameters affecting their biological functions. Modern optical methods, such as confocal microscopy, can noninvasively image such networks, but extracting a quantitative estimate of ξ0 is a nontrivial task. We present here a fast and simple method based on a two-dimensional bubble approach, which works by analyzing one by one the (thresholded) images of a series of three-dimensional thin data stacks. No skeletonization or reconstruction of the full geometry of the entire network is required. The method was validated by using many isotropic in silico generated networks of different structures, morphologies, and concentrations. For each type of network, the method provides accurate estimates (a few percent) of the average and the standard deviation of the three-dimensional distribution of the pore sizes, defined as the diameters of the largest spheres that can be fit into the pore zones of the entire gel volume. When applied to the analysis of real confocal microscopy images taken on fibrin gels, the method provides an estimate of ξ0 consistent with results from elastic light scattering data. PMID:23473499
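    As a rough illustration of a bubble-style pore-size estimate (not the authors' algorithm), the sketch below thresholds one 2D slice and uses a Euclidean distance transform to find the largest disks that fit in the pore phase; the function name, the filter window and the toy image are assumptions made for this example:

```python
import numpy as np
from scipy import ndimage

def bubble_pore_sizes(binary_slice, pixel_size=1.0):
    """Rough 2D analogue of the bubble method on one thresholded slice.

    binary_slice is True where fibers are and False in the pores. The
    Euclidean distance transform of the pore phase gives, at every pore
    pixel, the radius of the largest disk centred there that avoids the
    fibers; local maxima of this map approximate bubble radii.
    """
    dist = ndimage.distance_transform_edt(~binary_slice) * pixel_size
    # keep local maxima of the distance map as candidate bubble centres
    local_max = (dist == ndimage.maximum_filter(dist, size=5)) & (dist > 0)
    radii = dist[local_max]
    return 2.0 * radii  # bubble diameters

# toy demo: a 100x100 image with a few hundred random "fiber" pixels
rng = np.random.default_rng(0)
img = np.zeros((100, 100), dtype=bool)
img[rng.integers(0, 100, 300), rng.integers(0, 100, 300)] = True
diam = bubble_pore_sizes(img)
print(f"mean pore diameter ~ {diam.mean():.1f} px, sd {diam.std():.1f} px")
```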

  11. Accurate hybrid stochastic simulation of a system of coupled chemical or biochemical reactions.

    PubMed

    Salis, Howard; Kaznessis, Yiannis

    2005-02-01

    The dynamical solution of a well-mixed, nonlinear stochastic chemical kinetic system, described by the Master equation, may be exactly computed using the stochastic simulation algorithm. However, because the computational cost scales with the number of reaction occurrences, systems with one or more "fast" reactions become costly to simulate. This paper describes a hybrid stochastic method that partitions the system into subsets of fast and slow reactions, approximates the fast reactions as a continuous Markov process, using a chemical Langevin equation, and accurately describes the slow dynamics using the integral form of the "Next Reaction" variant of the stochastic simulation algorithm. The key innovation of this method is its mechanism of efficiently monitoring the occurrences of slow, discrete events while simultaneously simulating the dynamics of a continuous, stochastic or deterministic process. In addition, by introducing an approximation in which multiple slow reactions may occur within a time step of the numerical integration of the chemical Langevin equation, the hybrid stochastic method performs much faster with only a marginal decrease in accuracy. Multiple examples, including a biological pulse generator and a large-scale system benchmark, are simulated using the exact and proposed hybrid methods as well as, for comparison, a previous hybrid stochastic method. Probability distributions of the solutions are compared and the weak errors of the first two moments are computed. In general, these hybrid methods may be applied to the simulation of the dynamics of a system described by stochastic differential, ordinary differential, and Master equations.
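    The hybrid scheme above builds on the exact stochastic simulation algorithm, whose cost grows with the number of reaction firings. The minimal Gillespie direct-method sketch below (a toy reversible isomerization, not the hybrid Langevin/Next-Reaction method of the paper) illustrates that baseline:

```python
import numpy as np

def gillespie_ssa(x0, rates, t_end, seed=0):
    """Minimal Gillespie direct method for A -> B (k1) and B -> A (k2).

    This is the exact SSA the abstract refers to, not the hybrid scheme;
    it shows why cost scales with the number of reaction occurrences.
    """
    rng = np.random.default_rng(seed)
    k1, k2 = rates
    a, b = x0
    t, traj = 0.0, [(0.0, a, b)]
    while t < t_end:
        props = np.array([k1 * a, k2 * b])   # reaction propensities
        a0 = props.sum()
        if a0 == 0:
            break
        t += rng.exponential(1.0 / a0)       # waiting time to next reaction
        if rng.random() < props[0] / a0:     # choose which reaction fires
            a, b = a - 1, b + 1
        else:
            a, b = a + 1, b - 1
        traj.append((t, a, b))
    return traj

traj = gillespie_ssa((100, 0), (1.0, 0.5), t_end=5.0)
print(f"{len(traj)} reaction events, final state {traj[-1][1:]}")
```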

  12. The Development and Validation of Novel, Simple High-Performance Liquid Chromatographic Method with Refractive Index Detector for Quantification of Memantine Hydrochloride in Dissolution Samples.

    PubMed

    Sawant, Tukaram B; Wakchaure, Vikas S; Rakibe, Udyakumar K; Musmade, Prashant B; Chaudhari, Bhata R; Mane, Dhananjay V

    2017-07-01

    The present study aimed to develop an analytical method for quantification of memantine (MEM) hydrochloride in dissolution samples using high-performance liquid chromatography with a refractive index (RI) detector. Chromatographic separation was achieved on a C18 (250 × 4.5 mm, 5 μm) column using an isocratic mobile phase comprising buffer (pH 5.2):methanol (40:60, v/v) pumped at a flow rate of 1.0 mL/min. The column effluents were monitored using the RI detector. The retention time of MEM was found to be ~6.5 ± 0.3 min. The developed chromatographic method was validated and found to be linear over the concentration range of 5.0-45.0 μg/mL for MEM. Mean recovery of MEM was 99.2 ± 0.5% (w/w). The method was found to be simple, fast, precise and accurate, and can be utilized for the quantification of MEM in dissolution samples. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
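    In practice, quantifying a dissolution sample with such a validated method reduces to a linear calibration fit and back-calculation from peak area. The sketch below uses made-up peak areas over the stated 5.0-45.0 µg/mL range purely for illustration; none of the numbers come from the paper:

```python
import numpy as np

# Hypothetical calibration data over the validated 5-45 µg/mL range
# (peak areas are invented; an RI detector reports area counts).
conc = np.array([5.0, 15.0, 25.0, 35.0, 45.0])        # µg/mL
area = np.array([1020., 3060., 5110., 7150., 9180.])  # arbitrary units

slope, intercept = np.polyfit(conc, area, 1)
r = np.corrcoef(conc, area)[0, 1]

# Back-calculate an unknown dissolution sample from its peak area
sample_area = 4080.0
sample_conc = (sample_area - intercept) / slope
print(f"r^2 = {r**2:.4f}, sample ≈ {sample_conc:.1f} µg/mL")
```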

  13. Highly Accurate Analytical Approximate Solution to a Nonlinear Pseudo-Oscillator

    NASA Astrophysics Data System (ADS)

    Wu, Baisheng; Liu, Weijia; Lim, C. W.

    2017-07-01

    A second-order Newton method is presented to construct analytical approximate solutions to a nonlinear pseudo-oscillator in which the restoring force is inversely proportional to the dependent variable. The nonlinear equation is first expressed in a specific form and then solved in two steps, a predictor and a corrector step. In each step, the harmonic balance method is used in an appropriate manner to obtain a set of linear algebraic equations. With only one simple second-order Newton iteration step, a short, explicit, and highly accurate analytical approximate solution can be derived. The approximate solutions are valid for all amplitudes of the pseudo-oscillator. Furthermore, the method incorporates second-order Taylor expansion in a natural way and exhibits a significantly faster convergence rate.
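    One form consistent with the verbal description (restoring force inversely proportional to the displacement) is recalled below for orientation; the paper's exact normalisation and initial conditions may differ, so treat this as an assumption. The first integral follows by multiplying the equation by \dot{x} and integrating once:

```latex
\[
  \ddot{x} + \frac{1}{x} = 0, \qquad x(0) = A,\; \dot{x}(0) = 0
  \quad\Longrightarrow\quad
  \tfrac{1}{2}\dot{x}^{2} + \ln\frac{|x|}{A} = 0 .
\]
```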

  14. New correction procedures for the fast field program which extend its range

    NASA Technical Reports Server (NTRS)

    West, M.; Sack, R. A.

    1990-01-01

    A fast field program (FFP) algorithm was developed, based on the method of Lee et al., for the prediction of sound pressure level from low frequency, high intensity sources. To permit accurate predictions at distances greater than 2 km, new correction procedures had to be included in the algorithm. Certain functions, whose Hankel transforms can be determined analytically, are subtracted from the depth-dependent Green's function. The distance response is then obtained as the sum of these transforms and the fast Fourier transform (FFT) of the residual k-dependent function. One procedure, which permits the elimination of most complex exponentials, has allowed significant changes in the structure of the FFP algorithm and has resulted in a substantial reduction in computation time.

  15. Time accurate application of the MacCormack 2-4 scheme on massively parallel computers

    NASA Technical Reports Server (NTRS)

    Hudson, Dale A.; Long, Lyle N.

    1995-01-01

    Many recent computational efforts in turbulence and acoustics research have used higher order numerical algorithms. One popular method has been the explicit MacCormack 2-4 scheme. The MacCormack 2-4 scheme is second order accurate in time and fourth order accurate in space, and is stable for CFL numbers below 2/3. Current research has shown that the method can give accurate results but does exhibit significant Gibbs phenomena at sharp discontinuities. The impact of adding Jameson-type second, third, and fourth order artificial viscosity was examined here. Category 2 problems, the nonlinear traveling wave and the Riemann problem, were computed using a CFL number of 0.25. This research found that dispersion errors can be significantly reduced or nearly eliminated by using a combination of second and third order terms in the damping. Use of second and fourth order terms reduced the magnitude of dispersion errors but not as effectively as the second and third order combination. The program was coded using Thinking Machines' CM Fortran, a variant of Fortran 90/High Performance Fortran, and was executed on a 2K CM-200. Simple extrapolation boundary conditions were used for both problems.
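    To make the predictor-corrector structure concrete, here is the classic second-order MacCormack step for linear advection on a periodic grid. This is the plain 2-2 scheme shown only for illustration, not the 2-4 variant used in the paper (which replaces the one-sided differences with fourth-order ones); all names and parameters are assumptions:

```python
import numpy as np

def maccormack_step(u, c, dt, dx):
    """One classic (second-order) MacCormack predictor-corrector step for
    the linear advection equation u_t + c u_x = 0 on a periodic grid."""
    lam = c * dt / dx
    # predictor: forward difference
    u_star = u - lam * (np.roll(u, -1) - u)
    # corrector: backward difference on the predicted field
    return 0.5 * (u + u_star - lam * (u_star - np.roll(u_star, 1)))

# advect a Gaussian pulse once around a periodic domain
x = np.linspace(0.0, 1.0, 200, endpoint=False)
dx = x[1] - x[0]
u = np.exp(-200.0 * (x - 0.5) ** 2)
dt = 0.5 * dx                       # CFL = 0.5 with c = 1
for _ in range(int(round(1.0 / dt))):
    u = maccormack_step(u, c=1.0, dt=dt, dx=dx)
print(f"peak amplitude after one transit: {u.max():.3f}")
```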

  16. Arikan and Alamouti matrices based on fast block-wise inverse Jacket transform

    NASA Astrophysics Data System (ADS)

    Lee, Moon Ho; Khan, Md Hashem Ali; Kim, Kyeong Jin

    2013-12-01

    Recently, Lee and Hou (IEEE Signal Process Lett 13: 461-464, 2006) proposed one-dimensional and two-dimensional fast algorithms for block-wise inverse Jacket transforms (BIJTs). Their BIJTs are not true inverse Jacket transforms from a mathematical point of view because their inverses do not satisfy the usual condition, i.e., the product of a matrix and its inverse is not equal to the identity matrix. Therefore, we mathematically propose a fast block-wise inverse Jacket transform of orders N = 2^k, 3^k, 5^k, and 6^k, where k is a positive integer. Based on the Kronecker product of the successive lower order Jacket matrices and the basis matrix, fast algorithms for realizing these transforms are obtained. Due to the simple inverses and fast algorithms of Arikan polar binary and Alamouti multiple-input multiple-output (MIMO) non-binary matrices, which are obtained from BIJTs, they can be applied in areas such as the 3GPP physical layer for ultra mobile broadband permutation matrix design, first-order q-ary Reed-Muller code design, diagonal channel design, diagonal subchannel decomposition for interference alignment, and 4G MIMO long-term evolution Alamouti precoding design.
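    The construction above leans on the Kronecker-product identity (A ⊗ B)^{-1} = A^{-1} ⊗ B^{-1}, which is what allows a large block-wise transform to be inverted from its small factors. A quick numerical check with generic matrices (not actual Jacket matrices, which are an assumption left out here):

```python
import numpy as np

# Verify (A kron B)^{-1} == A^{-1} kron B^{-1} for arbitrary invertible factors.
rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))
B = rng.normal(size=(2, 2))

lhs = np.linalg.inv(np.kron(A, B))
rhs = np.kron(np.linalg.inv(A), np.linalg.inv(B))
print(np.allclose(lhs, rhs))  # True
```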

  17. Accurate image-charge method by the use of the residue theorem for core-shell dielectric sphere

    NASA Astrophysics Data System (ADS)

    Fu, Jing; Xu, Zhenli

    2018-02-01

    An accurate image-charge method (ICM) is developed for ionic interactions outside a core-shell structured dielectric sphere. Core-shell particles have wide applications, and their theoretical investigation requires efficient methods for the Green's function used to calculate pairwise interactions of ions. The ICM is based on an inverse Mellin transform of the coefficients of the spherical harmonic series of the Green's function, such that the polarization charge due to the dielectric boundaries is represented by a series of image point charges and an image line charge. The residue theorem is used to accurately calculate the density of the line charge. Numerical results show that the ICM is promising for fast evaluation of the Green's function, and thus it is useful for theoretical investigations of core-shell particles. This routine is also applicable to other problems with spherical dielectric interfaces, such as multilayered media and Debye-Hückel equations.
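    For orientation only, the simplest image-charge result, the textbook Kelvin image for a grounded conducting sphere, is recalled below; the core-shell dielectric case treated in the paper replaces this single point image with point images plus a line-image charge whose density is obtained via the residue theorem:

```latex
% Kelvin image for a point charge q at distance d from the centre of a
% grounded conducting sphere of radius a (d > a): an image charge q'
% placed at distance r' from the centre on the same radial line.
\[
  q' = -\,q\,\frac{a}{d}, \qquad r' = \frac{a^{2}}{d}
\]
```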

  18. Evaluation and application of a fast module in a PLC based interlock and control system

    NASA Astrophysics Data System (ADS)

    Zaera-Sanz, M.

    2009-08-01

    The LHC Beam Interlock System requires a controller performing a simple matrix function to collect the different beam dump requests. To satisfy the expected safety level of the interlock, the system should be robust and reliable. The PLC is a promising candidate to fulfil both aspects but is too slow to meet the expected response time, which is of the order of microseconds. Siemens has introduced a so-called fast module (FM352-5 Boolean Processor). It provides independent and extremely fast control of a process within a larger control system, using an onboard Field Programmable Gate Array (FPGA) to execute code in parallel, which results in extremely fast scan times. It is therefore interesting to investigate its features and to evaluate it as a possible candidate for the beam interlock system. This paper presents the results of this study and may also be useful for other applications requiring fast processing with a PLC.

  19. PconsD: ultra rapid, accurate model quality assessment for protein structure prediction.

    PubMed

    Skwark, Marcin J; Elofsson, Arne

    2013-07-15

    Clustering methods are often needed for accurately assessing the quality of modeled protein structures. Recent blind evaluation of quality assessment methods in CASP10 showed that there is little difference between many different methods as far as ranking models and selecting the best model are concerned. When comparing many models, the computational cost of the model comparison can become significant. Here, we present PconsD, a fast, stream-computing method for distance-driven model quality assessment that runs on consumer hardware. PconsD is at least one order of magnitude faster than other methods of comparable accuracy. The source code for PconsD is freely available at http://d.pcons.net/, together with supplementary benchmarking data. Contact: arne@bioinfo.se. Supplementary data are available at Bioinformatics online.

  20. RCD+: Fast loop modeling server.

    PubMed

    López-Blanco, José Ramón; Canosa-Valls, Alejandro Jesús; Li, Yaohang; Chacón, Pablo

    2016-07-08

    Modeling loops is a critical and challenging step in protein modeling and prediction. We have developed a quick online service (http://rcd.chaconlab.org) for ab initio loop modeling combining a coarse-grained conformational search with full-atom refinement. Our original Random Coordinate Descent (RCD) loop closure algorithm has been greatly improved to enrich the sampling distribution towards near-native conformations. These improvements include a new workflow optimization, MPI parallelization and fast backbone angle sampling based on neighbor-dependent Ramachandran probability distributions. The server starts by efficiently searching the vast conformational space using only the loop sequence information and the environment atomic coordinates. The generated closed loop models are subsequently ranked using a fast distance-orientation dependent energy filter. Top ranked loops are refined with the Rosetta energy function to obtain accurate all-atom predictions that can be interactively inspected in a user-friendly web interface. Using standard benchmarks, the average root mean squared deviation (RMSD) is 0.8 and 1.4 Å for 8- and 12-residue loops, respectively, in the challenging modeling scenario in which the side chains of the loop environment are fully remodeled. These results are not only very competitive compared with those obtained with public state-of-the-art methods, but they are also obtained ∼10-fold faster. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.