Science.gov

Sample records for accurate fast simple

  1. New simple method for fast and accurate measurement of volumes

    NASA Astrophysics Data System (ADS)

    Frattolillo, Antonio

    2006-04-01

    A new simple method is presented, which allows us to measure, in just a few minutes and with reasonable accuracy (error below 1%), the volume confined inside a generic enclosure, regardless of the complexity of its shape. The technique proposed also allows us to measure the volume of any portion of a complex manifold, including, for instance, pipes and pipe fittings, valves, gauge heads, and so on, without disassembling the manifold at all. For this purpose an airtight variable volume is used, whose volume adjustment can be precisely measured; it has an overall capacity larger than that of the unknown volume. Such a variable volume is initially filled with a suitable test gas (for instance, air) at a known pressure, as carefully measured by means of a high precision capacitive gauge. By opening a valve, the test gas is allowed to expand into the previously evacuated unknown volume. A feedback control loop reacts to the resulting finite pressure drop, contracting the variable volume until the pressure exactly recovers its initial value. The overall reduction of the variable volume achieved at the end of this process gives a direct measurement of the unknown volume, and definitively eliminates the problem of dead spaces. The proposed method does not require the test gas to be rigorously held at a constant temperature, resulting in a huge simplification compared to the complex arrangements commonly used in metrology (gas expansion method), which can deliver extremely accurate measurements but require rather expensive equipment and time-consuming procedures, and are therefore impractical in most applications. A simple theoretical analysis of the thermodynamic cycle and the results of experimental tests are described, which demonstrate that, in spite of its simplicity, the method provides a measurement accuracy within 0.5%. The system requires just a few minutes to complete a single measurement, and is ready again immediately at the end of the process.
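
    The bookkeeping behind the method can be illustrated with the ideal-gas law: if the initial pressure is restored, the total gas volume returns to its initial value, so the contraction of the variable volume equals the unknown volume. A minimal simulation sketch (all parameter names and values are illustrative, not from the paper; it assumes a strictly constant temperature for simplicity, which the paper shows is not strictly required):

```python
def measure_unknown_volume(v_unknown, v_var_initial=2.0, p_initial=100.0,
                           step=1e-4, tol=1e-6):
    """Simulate the feedback loop: expand the test gas into the evacuated
    unknown volume, then shrink the variable volume until the initial
    pressure is restored. Returns the total contraction, which equals
    the unknown volume for an isothermal ideal gas."""
    n_rt = p_initial * v_var_initial          # P*V is conserved at constant T
    v_var = v_var_initial
    # After opening the valve the gas occupies v_var + v_unknown.
    pressure = n_rt / (v_var + v_unknown)
    # Feedback: contract the variable volume until pressure recovers p_initial.
    while p_initial - pressure > tol:
        v_var -= step
        pressure = n_rt / (v_var + v_unknown)
    return v_var_initial - v_var              # total contraction = measured volume

print(measure_unknown_volume(0.5))
```

    The dead-space problem disappears because only the contraction is read off; no geometric knowledge of the manifold enters the calculation.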

  2. A simple, fast, and accurate algorithm to estimate large phylogenies by maximum likelihood.

    PubMed

    Guindon, Stéphane; Gascuel, Olivier

    2003-10-01

    The increase in the number of large data sets and the complexity of current probabilistic sequence evolution models necessitates fast and reliable phylogeny reconstruction methods. We describe a new approach, based on the maximum-likelihood principle, which clearly satisfies these requirements. The core of this method is a simple hill-climbing algorithm that adjusts tree topology and branch lengths simultaneously. This algorithm starts from an initial tree built by a fast distance-based method and modifies this tree to improve its likelihood at each iteration. Due to this simultaneous adjustment of the topology and branch lengths, only a few iterations are sufficient to reach an optimum. We used extensive and realistic computer simulations to show that the topological accuracy of this new method is at least as high as that of the existing maximum-likelihood programs and much higher than the performance of distance-based and parsimony approaches. The reduction of computing time is dramatic in comparison with other maximum-likelihood packages, while the likelihood maximization ability tends to be higher. For example, only 12 min were required on a standard personal computer to analyze a data set consisting of 500 rbcL sequences with 1,428 base pairs from plant plastids, thus reaching a speed of the same order as some popular distance-based and parsimony algorithms. This new method is implemented in the PHYML program, which is freely available on our web page: http://www.lirmm.fr/w3ifa/MAAS/.
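
    The core move, climbing a likelihood surface by local adjustment, can be shown on the smallest possible case: a single branch length between two sequences under the Jukes-Cantor model, where the maximum-likelihood distance also has a closed form to check against. This is a toy sketch of hill climbing, not PHYML's actual algorithm:

```python
import math

def jc69_log_likelihood(d, n_sites, n_diff):
    """Log-likelihood of a pairwise alignment under Jukes-Cantor: a site
    differs with probability p(d) = 3/4 * (1 - exp(-4d/3))."""
    p = 0.75 * (1.0 - math.exp(-4.0 * d / 3.0))
    return n_diff * math.log(p) + (n_sites - n_diff) * math.log(1.0 - p)

def hill_climb_branch_length(n_sites, n_diff, d=0.5, step=0.1, min_step=1e-7):
    """Hill climbing on one branch length: move to a neighbor whenever it
    improves the likelihood, and halve the step when neither neighbor does."""
    ll = jc69_log_likelihood(d, n_sites, n_diff)
    while step > min_step:
        improved = False
        for cand in (d + step, d - step):
            if cand > 0:
                cand_ll = jc69_log_likelihood(cand, n_sites, n_diff)
                if cand_ll > ll:
                    d, ll, improved = cand, cand_ll, True
        if not improved:
            step /= 2.0
    return d

# Closed-form ML distance for comparison: d = -3/4 * ln(1 - 4p/3), p = k/n.
n, k = 1000, 150
analytic = -0.75 * math.log(1.0 - 4.0 * (k / n) / 3.0)
print(hill_climb_branch_length(n, k), analytic)
```

    PHYML performs this kind of local improvement on all branch lengths and the topology at once, which is why few iterations suffice.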

  3. Simple, fast, and accurate methodology for quantitative analysis using Fourier transform infrared spectroscopy, with bio-hybrid fuel cell examples.

    PubMed

    Mackie, David M; Jahnke, Justin P; Benyamin, Marcus S; Sumner, James J

    2016-01-01

    The standard methodologies for quantitative analysis (QA) of mixtures using Fourier transform infrared (FTIR) instruments have evolved until they are now more complicated than necessary for many users' purposes. We present a simpler methodology, suitable for widespread adoption of FTIR QA as a standard laboratory technique across disciplines by occasional users.
    • Algorithm is straightforward and intuitive, yet it is also fast, accurate, and robust.
    • Relies on component spectra, minimization of errors, and local adaptive mesh refinement.
    • Tested successfully on real mixtures of up to nine components.
    We show that our methodology is robust to challenging experimental conditions such as similar substances, component percentages differing by three orders of magnitude, and imperfect (noisy) spectra. As examples, we analyze biological, chemical, and physical aspects of bio-hybrid fuel cells.

  4. Simple, fast, and accurate methodology for quantitative analysis using Fourier transform infrared spectroscopy, with bio-hybrid fuel cell examples

    PubMed Central

    Mackie, David M.; Jahnke, Justin P.; Benyamin, Marcus S.; Sumner, James J.

    2016-01-01

    The standard methodologies for quantitative analysis (QA) of mixtures using Fourier transform infrared (FTIR) instruments have evolved until they are now more complicated than necessary for many users’ purposes. We present a simpler methodology, suitable for widespread adoption of FTIR QA as a standard laboratory technique across disciplines by occasional users.
    • Algorithm is straightforward and intuitive, yet it is also fast, accurate, and robust.
    • Relies on component spectra, minimization of errors, and local adaptive mesh refinement.
    • Tested successfully on real mixtures of up to nine components.
    We show that our methodology is robust to challenging experimental conditions such as similar substances, component percentages differing by three orders of magnitude, and imperfect (noisy) spectra. As examples, we analyze biological, chemical, and physical aspects of bio-hybrid fuel cells. PMID:26977411

  5. Fast and Provably Accurate Bilateral Filtering.

    PubMed

    Chaudhury, Kunal N; Dabhade, Swapnil D

    2016-06-01

    The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires O(S) operations per pixel, where S is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to O(1) per pixel for arbitrary S. The algorithm has a simple implementation involving N+1 spatial filterings, where N is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order N required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with the state-of-the-art methods in terms of speed and accuracy. PMID:27093722
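
    The paper's O(1) algorithm rests on a shiftable approximation of the Gaussian range kernel. The general mechanism, trading the per-pixel window sum for a fixed number of spatial filterings of pointwise-transformed images, can be sketched with a simpler truncated-Taylor factorization (an illustrative variant under our own assumptions, not the authors' exact construction):

```python
import numpy as np

def box_mean(x, size):
    """Box (uniform) spatial filter via 2D cumulative sums, O(1) per pixel."""
    r = size // 2
    p = np.pad(x, r, mode="edge")
    c = np.pad(p.cumsum(axis=0).cumsum(axis=1), ((1, 0), (1, 0)))
    H, W = x.shape
    return (c[size:size + H, size:size + W] - c[:H, size:size + W]
            - c[size:size + H, :W] + c[:H, :W]) / (size * size)

def fast_bilateral(img, size=5, sigma_r=0.4, order=25):
    """Approximate bilateral filter: factor the Gaussian range kernel as
    exp(-(a-b)^2/2s^2) = exp(-a^2/2s^2) exp(-b^2/2s^2) exp(ab/s^2) and
    truncate the Taylor series of the last factor, so the whole filter
    becomes a fixed number of box filterings of transformed images."""
    a = np.exp(-img ** 2 / (2 * sigma_r ** 2))
    num = np.zeros_like(img)
    den = np.zeros_like(img)
    coef = np.ones_like(img)                # img**n / (sigma_r**(2n) * n!)
    for n in range(order + 1):
        h = a * img ** n                    # exp(-b^2/2s^2) * b^n, filtered below
        num += coef * box_mean(h * img, size)
        den += coef * box_mean(h, size)
        coef = coef * img / (sigma_r ** 2 * (n + 1))
    return num / den                        # the center-pixel factors cancel

def brute_bilateral(img, size=5, sigma_r=0.4):
    """Direct O(S)-per-pixel bilateral filter, used as the reference."""
    r = size // 2
    out = np.empty_like(img)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            win = img[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            w = np.exp(-(win - img[i, j]) ** 2 / (2 * sigma_r ** 2))
            out[i, j] = (w * win).sum() / w.sum()
    return out
```

    For intensities normalized to [0, 1] this Taylor variant needs a fairly high order to converge, which is exactly the inefficiency the paper's shiftable approximation avoids; the sketch only shows the structural idea of replacing the window sum by repeated spatial filtering.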

  6. Fast and Accurate Exhaled Breath Ammonia Measurement

    PubMed Central

    Solga, Steven F.; Mudalel, Matthew L.; Spacek, Lisa A.; Risby, Terence H.

    2014-01-01

    This exhaled breath ammonia method uses a fast and highly sensitive spectroscopic method known as quartz enhanced photoacoustic spectroscopy (QEPAS) that uses a quantum cascade based laser. The monitor is coupled to a sampler that measures mouth pressure and carbon dioxide. The system is temperature controlled and specifically designed to address the reactivity of this compound. The sampler provides immediate feedback to the subject and the technician on the quality of the breath effort. Together with the quick response time of the monitor, this system is capable of accurately measuring exhaled breath ammonia representative of deep lung systemic levels. Because the system is easy to use and produces real time results, it has enabled experiments to identify factors that influence measurements. For example, mouth rinse and oral pH reproducibly and significantly affect results and therefore must be controlled. Temperature and mode of breathing are other examples. As our understanding of these factors evolves, error is reduced, and clinical studies become more meaningful. This system is very reliable and individual measurements are inexpensive. The sampler is relatively inexpensive and quite portable, but the monitor is neither. This limits options for some clinical studies and provides rationale for future innovations. PMID:24962141

  7. Fast, accurate, robust and Open Source Brain Extraction Tool (OSBET)

    NASA Astrophysics Data System (ADS)

    Namias, R.; Donnelly Kehoe, P.; D'Amato, J. P.; Nagel, J.

    2015-12-01

    The removal of non-brain regions in neuroimaging is a critical preprocessing task. Skull-stripping depends on several factors, including the noise level in the image, the anatomy of the subject being scanned and the acquisition sequence. For these and other reasons, an ideal brain extraction method should be fast, accurate, user friendly, open-source and knowledge based (to allow interaction with the algorithm in case the expected outcome is not obtained), producing stable results and making it possible to automate the process for large datasets. There is already a large number of validated tools to perform this task, but none of them meets all the desired characteristics. In this paper we introduce an open source brain extraction tool (OSBET), composed of four steps using simple, well-known operations such as optimal thresholding, binary morphology, labeling and geometrical analysis, that aims to assemble all the desired features. We present an experiment comparing OSBET with six other state-of-the-art techniques on a publicly available dataset consisting of 40 T1-weighted 3D scans and their corresponding manually segmented images. OSBET achieved both a short runtime and excellent accuracy, obtaining the best Dice coefficient. Further validation should be performed, for instance in unhealthy populations, to generalize its usage for clinical purposes.

  8. BASIC: A Simple and Accurate Modular DNA Assembly Method.

    PubMed

    Storch, Marko; Casini, Arturo; Mackrow, Ben; Ellis, Tom; Baldwin, Geoff S

    2017-01-01

    Biopart Assembly Standard for Idempotent Cloning (BASIC) is a simple, accurate, and robust DNA assembly method. The method is based on linker-mediated DNA assembly and provides highly accurate DNA assembly with 99 % correct assemblies for four parts and 90 % correct assemblies for seven parts [1]. The BASIC standard defines a single entry vector for all parts flanked by the same prefix and suffix sequences and its idempotent nature means that the assembled construct is returned in the same format. Once a part has been adapted into the BASIC format it can be placed at any position within a BASIC assembly without the need for reformatting. This allows laboratories to grow comprehensive and universal part libraries and to share them efficiently. The modularity within the BASIC framework is further extended by the possibility of encoding ribosomal binding sites (RBS) and peptide linker sequences directly on the linkers used for assembly. This makes BASIC a highly versatile library construction method for combinatorial part assembly including the construction of promoter, RBS, gene variant, and protein-tag libraries. In comparison with other DNA assembly standards and methods, BASIC offers a simple robust protocol; it relies on a single entry vector, provides for easy hierarchical assembly, and is highly accurate for up to seven parts per assembly round [2]. PMID:27671933

  10. Simple and accurate optical height sensor for wafer inspection systems

    NASA Astrophysics Data System (ADS)

    Shimura, Kei; Nakai, Naoya; Taniguchi, Koichi; Itoh, Masahide

    2016-02-01

    An accurate method for measuring the wafer surface height is required for wafer inspection systems to adjust the focus of inspection optics quickly and precisely. A method for projecting a laser spot onto the wafer surface obliquely and for detecting its image displacement using a one-dimensional position-sensitive detector is known, and a variety of methods have been proposed for improving the accuracy by compensating the measurement error due to the surface patterns. We have developed a simple and accurate method in which an image of a reticle with eight slits is projected on the wafer surface and its reflected image is detected using an image sensor. The surface height is calculated by averaging the coordinates of the images of the slits in both directions in the captured image. Pattern-related measurement error was reduced by applying the coordinate averaging to the multiple-slit-projection method. Accuracy of better than 0.35 μm was achieved for a patterned wafer at the reference height and ±0.1 mm from the reference height in a simple configuration.

  11. Fast and accurate line scanner based on white light interferometry

    NASA Astrophysics Data System (ADS)

    Lambelet, Patrick; Moosburger, Rudolf

    2013-04-01

    White-light interferometry is a highly accurate technology for 3D measurements. The principle is widely utilized in surface metrology instruments but rarely adopted for in-line inspection systems. The main challenges for rolling out inspection systems based on white-light interferometry to the production floor are its sensitivity to environmental vibrations and relatively long measurement times: a large quantity of data needs to be acquired and processed in order to obtain a single topographic measurement. Heliotis developed a smart-pixel CMOS camera (lock-in camera) which is specially suited for white-light interferometry. The demodulation of the interference signal is treated at the level of the pixel, which typically reduces the acquired data by one order of magnitude. Along with the high bandwidth of the dedicated lock-in camera, vertical scan speeds of more than 40 mm/s are reachable. The high scan speed allows for the realization of inspection systems that are rugged against external vibrations as present on the production floor. For many industrial applications, such as the inspection of wafer bumps, surfaces of mechanical parts and solar panels, large areas need to be measured. In this case either the instrument or the sample is displaced laterally and several measurements are stitched together. The cycle time of such a system is mostly limited by the stepping time for multiple lateral displacements. A line scanner based on white-light interferometry would eliminate most of the stepping time while maintaining robustness and accuracy. A. Olszak proposed a simple geometry to realize such a lateral scanning interferometer. We demonstrate that such inclined interferometers can benefit significantly from the fast in-pixel demodulation capabilities of the lock-in camera. One drawback of an inclined observation perspective is that its application is limited to objects with scattering surfaces. We therefore propose an alternate geometry.

  12. Filtered schemes for Hamilton-Jacobi equations: A simple construction of convergent accurate difference schemes

    NASA Astrophysics Data System (ADS)

    Oberman, Adam M.; Salvador, Tiago

    2015-03-01

    We build a simple and general class of finite difference schemes for first order Hamilton-Jacobi (HJ) Partial Differential Equations. These filtered schemes are convergent to the unique viscosity solution of the equation. The schemes are accurate: we implement second, third and fourth order accurate schemes in one dimension and second order accurate schemes in two dimensions, indicating how to build higher order ones. They are also explicit, which means they can be solved using the fast sweeping method. The accuracy of the method is validated with computational results for the eikonal equation and other HJ equations in one and two dimensions, using filtered schemes made from standard centered differences, higher order upwinding and ENO interpolation.
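
    The construction itself fits in a few lines: evaluate both a monotone (upwind) discretization and an accurate (centered) one, and blend them through a filter function that is the identity where the two agree and vanishes where they disagree. A hedged 1D sketch for the eikonal equation |u'| = f (the particular filter shape and the choice ε = √h are one common option, not necessarily the paper's exact parameters):

```python
import math

def filter_fn(x):
    """A filter function of the kind used by filtered schemes: identity
    near zero, zero far away, continuous in between."""
    ax = abs(x)
    if ax <= 1.0:
        return x
    if ax >= 2.0:
        return 0.0
    return math.copysign(2.0 - ax, x)

def filtered_eikonal_residual(u, h, f=1.0):
    """Filtered discretization of |u'(x)| = f on a 1D grid of spacing h:
    blend a monotone upwind scheme with a second-order centered one."""
    eps = math.sqrt(h)
    res = []
    for i in range(1, len(u) - 1):
        dm = (u[i] - u[i - 1]) / h                        # backward difference
        dp = (u[i + 1] - u[i]) / h                        # forward difference
        monotone = max(dm, -dp, 0.0) - f                  # upwind (Rouy-Tourin)
        accurate = abs((u[i + 1] - u[i - 1]) / (2 * h)) - f  # centered, 2nd order
        res.append(monotone + eps * filter_fn((accurate - monotone) / eps))
    return res
```

    Where the solution is smooth the two discretizations differ by O(h), the filter acts as the identity and the scheme inherits the centered accuracy; near kinks the argument blows up, the filter returns zero and the convergent monotone scheme takes over.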

  13. Accurate and fast fiber transfer delay measurement based on phase discrimination and frequency measurement

    NASA Astrophysics Data System (ADS)

    Dong, J. W.; Wang, B.; Gao, C.; Wang, L. J.

    2016-09-01

    An accurate and fast fiber transfer delay measurement method is demonstrated. As a key technique, a simple ambiguity resolving process based on phase discrimination and frequency measurement is used to overcome the contradiction between measurement accuracy and system complexity. The system achieves a high measurement accuracy of 0.2 ps with a 0.1 ps measurement resolution and a large dynamic range up to 50 km as well as no dead zone.
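
    The ambiguity-resolving step can be illustrated in a few lines: a phase reading at a high frequency fixes the fractional cycle of the delay very precisely but is ambiguous modulo one period, while a coarse delay estimate (good to within half a period) fixes the integer number of cycles. The numbers below are hypothetical, not taken from the paper:

```python
import math

def resolve_delay(coarse_delay, phase, f):
    """Combine a coarse delay estimate with a precise but ambiguous phase
    reading at frequency f: the phase gives the fractional cycle, the
    coarse estimate resolves the integer-cycle ambiguity."""
    frac = phase / (2 * math.pi)            # fractional cycles in [0, 1)
    n = round(coarse_delay * f - frac)      # integer number of whole cycles
    return (n + frac) / f

# Hypothetical example: ~250 us fiber delay, 100 MHz measurement tone,
# coarse estimate accurate to well under one 10 ns period.
true_delay = 250.0003e-6
f = 100e6
phase = (2 * math.pi * true_delay * f) % (2 * math.pi)
print(resolve_delay(250.002e-6, phase, f))
```

    The measurement accuracy is then set by the phase discriminator alone, while the coarse step only has to be good to half a period, which is what decouples accuracy from system complexity.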

  14. A fast and accurate algorithm for diploid individual haplotype reconstruction.

    PubMed

    Wu, Jingli; Liang, Binbin

    2013-08-01

    Haplotypes can provide significant information in many research fields, including molecular biology and medical therapy. However, haplotyping is much more difficult than genotyping by using only biological techniques. With the development of sequencing technologies, it becomes possible to obtain haplotypes by combining sequence fragments. The haplotype reconstruction problem of a diploid individual has received considerable attention in recent years. It assembles the two haplotypes for a chromosome given the collection of fragments coming from the two haplotypes. Fragment errors significantly increase the difficulty of the problem, which has been shown to be NP-hard. In this paper, a fast and accurate algorithm, named FAHR, is proposed for haplotyping a single diploid individual. Algorithm FAHR reconstructs the SNP sites of a pair of haplotypes one after another. The SNP fragments that cover some SNP site are partitioned into two groups according to the alleles of the corresponding SNP site, and the SNP values of the pair of haplotypes are ascertained by using the fragments in the group that contains more SNP fragments. Experimental comparisons were conducted among the FAHR, Fast Hare and DGS algorithms by using the haplotypes on chromosome 1 of 60 individuals in CEPH samples, which were released by the International HapMap Project. Experimental results under different parameter settings indicate that the reconstruction rate of the FAHR algorithm is higher than those of the Fast Hare and DGS algorithms, and the running time of the FAHR algorithm is shorter than those of the Fast Hare and DGS algorithms. Moreover, the FAHR algorithm has high efficiency even for the reconstruction of long haplotypes and is very practical for realistic applications.
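
    The per-site voting idea can be sketched in miniature. This deliberately ignores how fragments are assigned between the two haplotypes, which is the hard part FAHR addresses; it only shows how majority voting over covering fragments suppresses isolated fragment errors (data and names are illustrative):

```python
from collections import Counter

def vote_haplotype(fragments, n_sites):
    """Majority-vote each SNP site from the fragments assigned to one
    haplotype; '-' marks sites that no fragment covers. Fragments are
    {site: allele} dicts with 0/1 alleles."""
    hap = []
    for site in range(n_sites):
        alleles = Counter(f[site] for f in fragments if f.get(site) is not None)
        hap.append(alleles.most_common(1)[0][0] if alleles else '-')
    return hap

# Three fragments from one haplotype, with a sequencing error at site 2
# in the first fragment (0 instead of 1); voting recovers the true allele.
frags = [{0: 0, 1: 1, 2: 0}, {1: 1, 2: 1, 3: 0}, {2: 1, 3: 0, 4: 1}]
print(vote_haplotype(frags, 5))  # → [0, 1, 1, 0, 1]
```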

  15. An accurate and simple quantum model for liquid water.

    PubMed

    Paesani, Francesco; Zhang, Wei; Case, David A; Cheatham, Thomas E; Voth, Gregory A

    2006-11-14

    The path-integral molecular dynamics and centroid molecular dynamics methods have been applied to investigate the behavior of liquid water at ambient conditions starting from a recently developed simple point charge/flexible (SPC/Fw) model. Several quantum structural, thermodynamic, and dynamical properties have been computed and compared to the corresponding classical values, as well as to the available experimental data. The path-integral molecular dynamics simulations show that the inclusion of quantum effects results in a less structured liquid with a reduced amount of hydrogen bonding in comparison to its classical analog. The nuclear quantization also leads to a smaller dielectric constant and a larger diffusion coefficient relative to the corresponding classical values. Collective and single molecule time correlation functions show a faster decay than their classical counterparts. Good agreement with the experimental measurements in the low-frequency region is obtained for the quantum infrared spectrum, which also shows a higher intensity and a redshift relative to its classical analog. A modification of the original parametrization of the SPC/Fw model is suggested and tested in order to construct an accurate quantum model, called q-SPC/Fw, for liquid water. The quantum results for several thermodynamic and dynamical properties computed with the new model are shown to be in a significantly better agreement with the experimental data. Finally, a force-matching approach was applied to the q-SPC/Fw model to derive an effective quantum force field for liquid water in which the effects due to the nuclear quantization are explicitly distinguished from those due to the underlying molecular interactions. Thermodynamic and dynamical properties computed using standard classical simulations with this effective quantum potential are found in excellent agreement with those obtained from significantly more computationally demanding full centroid molecular dynamics simulations.

  16. Fast and accurate determination of modularity and its effect size

    NASA Astrophysics Data System (ADS)

    Treviño, Santiago, III; Nyberg, Amy; Del Genio, Charo I.; Bassler, Kevin E.

    2015-02-01

    We present a fast spectral algorithm for community detection in complex networks. Our method searches for the partition with the maximum value of the modularity via the interplay of several refinement steps that include both agglomeration and division. We validate the accuracy of the algorithm by applying it to several real-world benchmark networks. On all these, our algorithm performs as well or better than any other known polynomial scheme. This allows us to extensively study the modularity distribution in ensembles of Erdős-Rényi networks, producing theoretical predictions for means and variances inclusive of finite-size corrections. Our work provides a way to accurately estimate the effect size of modularity, providing a z-score measure of it and enabling a more informative comparison of networks with different numbers of nodes and links.
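
    The quantity being maximized is Newman's modularity, which for a partition into communities can be computed directly (the z-score effect size in the paper comes from comparing this value against an Erdős-Rényi ensemble, which is not sketched here):

```python
def modularity(adj, communities):
    """Newman modularity Q = sum_c (e_c/m - (d_c/2m)^2), where e_c is the
    number of intra-community edges, d_c the total degree of community c,
    and m the total number of edges. adj maps node -> list of neighbors."""
    m = sum(len(nbrs) for nbrs in adj.values()) / 2
    q = 0.0
    for comm in communities:
        nodes = set(comm)
        e_c = sum(1 for u in nodes for v in adj[u] if v in nodes) / 2
        d_c = sum(len(adj[u]) for u in nodes)
        q += e_c / m - (d_c / (2 * m)) ** 2
    return q

# Two disjoint triangles, partitioned into their natural communities.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4, 5], 4: [3, 5], 5: [3, 4]}
print(modularity(adj, [[0, 1, 2], [3, 4, 5]]))  # → 0.5
```

    The algorithm described in the abstract searches over partitions to maximize this Q through alternating agglomeration and division steps.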

  17. Learning accurate very fast decision trees from uncertain data streams

    NASA Astrophysics Data System (ADS)

    Liang, Chunquan; Zhang, Yang; Shi, Peng; Hu, Zhengguo

    2015-12-01

    Most existing works on data stream classification assume the streaming data is precise and definite. Such assumption, however, does not always hold in practice, since data uncertainty is ubiquitous in data stream applications due to imprecise measurement, missing values, privacy protection, etc. The goal of this paper is to learn accurate decision tree models from uncertain data streams for classification analysis. On the basis of very fast decision tree (VFDT) algorithms, we propose an algorithm for constructing an uncertain VFDT tree with classifiers at tree leaves (uVFDTc). The uVFDTc algorithm can exploit uncertain information effectively and efficiently in both the learning and the classification phases. In the learning phase, it uses Hoeffding bound theory to learn from uncertain data streams and yield fast and reasonable decision trees. In the classification phase, at tree leaves it uses uncertain naive Bayes (UNB) classifiers to improve the classification performance. Experimental results on both synthetic and real-life datasets demonstrate the strong ability of uVFDTc to classify uncertain data streams. The use of UNB at tree leaves has improved the performance of uVFDTc, especially the any-time property, the benefit of exploiting uncertain information, and the robustness against uncertainty.
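
    The Hoeffding-bound split test underlying VFDT-style learners can be sketched in isolation (the uncertain-data extensions uVFDTc and UNB are not shown; the usual tie-breaking threshold is also omitted for clarity):

```python
import math

def hoeffding_bound(value_range, delta, n):
    """Hoeffding bound used by VFDT-style learners: with probability at
    least 1 - delta, the observed mean of n samples of a variable with
    the given range is within eps of the true mean."""
    return math.sqrt(value_range ** 2 * math.log(1.0 / delta) / (2.0 * n))

def should_split(best_gain, second_gain, value_range, delta, n):
    """Split a leaf when the gap between the best and the second-best
    attribute's information gain exceeds eps."""
    return best_gain - second_gain > hoeffding_bound(value_range, delta, n)

# Information gain for binary class labels lies in [0, 1] (range 1).
print(should_split(0.30, 0.20, 1.0, 1e-7, 2000))  # enough samples to split
print(should_split(0.30, 0.20, 1.0, 1e-7, 200))   # too few samples yet
```

    This is what lets the tree grow from a stream without storing it: the leaf only needs counts sufficient to make the gain gap statistically significant.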

  18. Learning fast accurate movements requires intact frontostriatal circuits

    PubMed Central

    Shabbott, Britne; Ravindran, Roshni; Schumacher, Joseph W.; Wasserman, Paula B.; Marder, Karen S.; Mazzoni, Pietro

    2013-01-01

    The basal ganglia are known to play a crucial role in movement execution, but their importance for motor skill learning remains unclear. Obstacles to our understanding include the lack of a universally accepted definition of motor skill learning (definition confound), and difficulties in distinguishing learning deficits from execution impairments (performance confound). We studied how healthy subjects and subjects with a basal ganglia disorder learn fast accurate reaching movements. We addressed the definition and performance confounds by: (1) focusing on an operationally defined core element of motor skill learning (speed-accuracy learning), and (2) using normal variation in initial performance to separate movement execution impairment from motor learning abnormalities. We measured motor skill learning as performance improvement in a reaching task with a speed-accuracy trade-off. We compared the performance of subjects with Huntington's disease (HD), a neurodegenerative basal ganglia disorder, to that of premanifest carriers of the HD mutation and of control subjects. The initial movements of HD subjects were less skilled (slower and/or less accurate) than those of control subjects. To factor out these differences in initial execution, we modeled the relationship between learning and baseline performance in control subjects. Subjects with HD exhibited a clear learning impairment that was not explained by differences in initial performance. These results support a role for the basal ganglia in both movement execution and motor skill learning. PMID:24312037

  19. Simple tunnel diode circuit for accurate zero crossing timing

    NASA Technical Reports Server (NTRS)

    Metz, A. J.

    1969-01-01

    Tunnel diode circuit, capable of timing the zero crossing point of bipolar pulses, provides effective design for a fast crossing detector. It combines a nonlinear load line with the diode to detect the zero crossing of a wide range of input waveshapes.

  20. A Simple and Accurate Method for Measuring Enzyme Activity.

    ERIC Educational Resources Information Center

    Yip, Din-Yan

    1997-01-01

    Presents methods commonly used for investigating enzyme activity using catalase and presents a new method for measuring catalase activity that is more reliable and accurate. Provides results that are readily reproduced and quantified. Can also be used for investigations of enzyme properties such as the effects of temperature, pH, inhibitors,…

  1. Fast and Accurate Circuit Design Automation through Hierarchical Model Switching.

    PubMed

    Huynh, Linh; Tagkopoulos, Ilias

    2015-08-21

    In computer-aided biological design, the trifecta of characterized part libraries, accurate models and optimal design parameters is crucial for producing reliable designs. As the number of parts and model complexity increase, however, it becomes exponentially more difficult for any optimization method to search the solution space, hence creating a trade-off that hampers efficient design. To address this issue, we present a hierarchical computer-aided design architecture that uses a two-step approach for biological design. First, a simple model of low computational complexity is used to predict circuit behavior and assess candidate circuit branches through branch-and-bound methods. Then, a complex, nonlinear circuit model is used for a fine-grained search of the reduced solution space, thus achieving more accurate results. Evaluation with a benchmark of 11 circuits and a library of 102 experimental designs with known characterization parameters demonstrates a speed-up of 3 orders of magnitude when compared to other design methods that provide optimality guarantees.

  2. Fast Monte Carlo Electron-Photon Transport Method and Application in Accurate Radiotherapy

    NASA Astrophysics Data System (ADS)

    Hao, Lijuan; Sun, Guangyao; Zheng, Huaqing; Song, Jing; Chen, Zhenping; Li, Gui

    2014-06-01

    The Monte Carlo (MC) method is the most accurate computational method for dose calculation, but its wide application in clinical accurate radiotherapy is hindered by its slow convergence and long computation times. In MC dose calculation research, the main task is to speed up computation while maintaining high precision. The purpose of this paper is to enhance the calculation speed of the MC method for electron-photon transport with high precision and ultimately to reduce the accurate radiotherapy dose calculation time on a normal computer to the level of several hours, which meets the requirement of clinical dose verification. Based on the existing Super Monte Carlo Simulation Program (SuperMC), developed by the FDS Team, a fast MC method for electron-photon coupled transport was presented with focus on two aspects: firstly, through simplifying and optimizing the physical model of the electron-photon transport, the calculation speed was increased with only a slight reduction in calculation accuracy; secondly, a variety of MC calculation acceleration methods were used, for example, reusing information obtained in previous calculations to avoid repeated simulation of particles with identical histories, and applying proper variance reduction techniques to accelerate the MC convergence rate. The fast MC method was tested on a number of simple physical models and clinical cases, including nasopharyngeal carcinoma, peripheral lung tumor, and cervical carcinoma. The results show that the fast MC method for electron-photon transport is fast enough to meet the requirement of clinical accurate radiotherapy dose verification. Later, the method will be applied to the Accurate/Advanced Radiation Therapy System (ARTS) as a MC dose verification module.

  3. Toward accurate and fast iris segmentation for iris biometrics.

    PubMed

    He, Zhaofeng; Tan, Tieniu; Sun, Zhenan; Qiu, Xianchao

    2009-09-01

    Iris segmentation is an essential module in iris recognition because it defines the effective image region used for subsequent processing such as feature extraction. Traditional iris segmentation methods often involve an exhaustive search of a large parameter space, which is time consuming and sensitive to noise. To address these problems, this paper presents a novel algorithm for accurate and fast iris segmentation. After efficient reflection removal, an Adaboost-cascade iris detector is first built to extract a rough position of the iris center. Edge points of iris boundaries are then detected, and an elastic model named pulling and pushing is established. Under this model, the center and radius of the circular iris boundaries are iteratively refined in a way driven by the restoring forces of Hooke's law. Furthermore, a smoothing spline-based edge fitting scheme is presented to deal with noncircular iris boundaries. After that, eyelids are localized via edge detection followed by curve fitting. The novelty here is the adoption of a rank filter for noise elimination and a histogram filter for tackling the shape irregularity of eyelids. Finally, eyelashes and shadows are detected via a learned prediction model. This model provides an adaptive threshold for eyelash and shadow detection by analyzing the intensity distributions of different iris regions. Experimental results on three challenging iris image databases demonstrate that the proposed algorithm outperforms state-of-the-art methods in both accuracy and speed. PMID:19574626

  5. Progress in fast, accurate multi-scale climate simulations

    SciTech Connect

    Collins, W. D.; Johansen, H.; Evans, K. J.; Woodward, C. S.; Caldwell, P. M.

    2015-06-01

    We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth with these computational improvements include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allowing more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures such as many-core processors and GPUs. As a result, approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.

  6. Progress in fast, accurate multi-scale climate simulations

    DOE PAGES

    Collins, W. D.; Johansen, H.; Evans, K. J.; Woodward, C. S.; Caldwell, P. M.

    2015-06-01

    We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth with these computational improvements include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allowing more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures such as many-core processors and GPUs. As a result, approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.

  7. Progress in Fast, Accurate Multi-scale Climate Simulations

    SciTech Connect

    Collins, William D; Johansen, Hans; Evans, Katherine J; Woodward, Carol S.; Caldwell, Peter

    2015-01-01

    We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allow more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures, such as many-core processors and GPUs, so that these approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.

  8. Very Fast and Accurate Azimuth Disambiguation of Vector Magnetograms

    NASA Astrophysics Data System (ADS)

    Rudenko, G. V.; Anfinogentov, S. A.

    2014-05-01

    We present a method for fast and accurate azimuth disambiguation of vector magnetogram data regardless of the location of the analyzed region on the solar disk. The direction of the transverse field is determined with the principle of minimum deviation of the field from the reference (potential) field. The new disambiguation (NDA) code is examined on the well-known models of Metcalf et al. ( Solar Phys. 237, 267, 2006) and Leka et al. ( Solar Phys. 260, 83, 2009), and on an artificial model based on the observed magnetic field of AR 10930 (Rudenko, Myshyakov, and Anfinogentov, Astron. Rep. 57, 622, 2013). We compare Hinode/SOT-SP vector magnetograms of AR 10930 disambiguated with three codes: the NDA code, the nonpotential magnetic-field calculation (NPFC: Georgoulis, Astrophys. J. Lett. 629, L69, 2005), and the spherical minimum-energy method (Rudenko, Myshyakov, and Anfinogentov, Astron. Rep. 57, 622, 2013). We then illustrate the performance of NDA on SDO/HMI full-disk magnetic-field observations. We show that our new algorithm is more than four times faster than the fastest algorithm that provides the disambiguation with a satisfactory accuracy (NPFC). At the same time, its accuracy is similar to that of the minimum-energy method (a very slow algorithm). In contrast to other codes, the NDA code maintains high accuracy when the region to be analyzed is very close to the limb.
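The NDA code itself is not reproduced here; the following is a minimal sketch of the stated minimum-deviation principle: each transverse vector, whose azimuth is known only modulo 180°, is flipped when its orientation deviates more than 90° from the reference (potential) field. Names are hypothetical:

```python
import numpy as np

def disambiguate(bt_obs, bt_pot):
    """Resolve the 180-degree azimuth ambiguity of transverse field vectors.

    bt_obs: (N, 2) observed transverse vectors (azimuth known only mod 180)
    bt_pot: (N, 2) reference (potential-field) transverse vectors
    Returns bt_obs with each vector flipped where its dot product with the
    reference is negative, i.e. the orientation minimizing the deviation."""
    dots = np.einsum('ij,ij->i', bt_obs, bt_pot)
    sign = np.where(dots < 0, -1.0, 1.0)
    return bt_obs * sign[:, None]
```

The per-pixel decision is embarrassingly parallel, which is consistent with the large speedup the paper reports over energy-minimization approaches.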

  9. Stonehenge: A Simple and Accurate Predictor of Lunar Eclipses

    NASA Astrophysics Data System (ADS)

    Challener, S.

    1999-12-01

    Over the last century, much has been written about the astronomical significance of Stonehenge. The rage peaked in the mid to late 1960s when new computer technology enabled astronomers to make the first complete search for celestial alignments. Because there are hundreds of rocks or holes at Stonehenge and dozens of bright objects in the sky, the quest was fraught with obvious statistical problems. A storm of controversy followed and the subject nearly vanished from print. Only a handful of these alignments remain compelling. Today, few astronomers and still fewer archaeologists would argue that Stonehenge served primarily as an observatory. Instead, Stonehenge probably served as a sacred meeting place, which was consecrated by certain celestial events. These would include the sun's risings and settings at the solstices and possibly some lunar risings as well. I suggest that Stonehenge was also used to predict lunar eclipses. While Hawkins and Hoyle also suggested that Stonehenge was used in this way, their methods are complex and they make use of only early, minor, or outlying areas of Stonehenge. In contrast, I suggest a way that makes use of the imposing, central region of Stonehenge; the area built during the final phase of activity. To predict every lunar eclipse without predicting eclipses that do not occur, I use the less familiar lunar cycle of 47 lunar months. By moving markers about the Sarsen Circle, the Bluestone Circle, and the Bluestone Horseshoe, all umbral lunar eclipses can be predicted accurately.

  10. Fast and accurate computation of system matrix for area integral model-based algebraic reconstruction technique

    NASA Astrophysics Data System (ADS)

    Zhang, Shunli; Zhang, Dinghua; Gong, Hao; Ghasemalizadeh, Omid; Wang, Ge; Cao, Guohua

    2014-11-01

    Iterative algorithms, such as the algebraic reconstruction technique (ART), are popular for image reconstruction. For iterative reconstruction, the area integral model (AIM) is more accurate for better reconstruction quality than the line integral model (LIM). However, the computation of the system matrix for AIM is more complex and time-consuming than that for LIM. Here, we propose a fast and accurate method to compute the system matrix for AIM. First, we calculate the intersection of each boundary line of a narrow fan-beam with pixels in a recursive and efficient manner. Then, by grouping the beam-pixel intersection area into six types according to the slopes of the two boundary lines, we analytically compute the intersection area of the narrow fan-beam with the pixels in a simple algebraic fashion. Overall, experimental results show that our method is about three times faster than the Siddon algorithm and about two times faster than the distance-driven model (DDM) in computation of the system matrix. The reconstruction speed of our AIM-based ART is also faster than the LIM-based ART that uses the Siddon algorithm and DDM-based ART, for one iteration. The fast reconstruction speed of our method was accomplished without compromising the image quality.
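The paper's fast AIM system-matrix computation is not reproduced here; for context, a generic sketch of the ART (Kaczmarz) row-by-row update that such a system matrix feeds, with hypothetical names:

```python
import numpy as np

def art(A, b, n_iter=50, relax=1.0, x0=None):
    """Algebraic reconstruction technique (Kaczmarz): sweep the rows of the
    system matrix A, projecting the current image estimate x onto each
    hyperplane a_i . x = b_i with relaxation factor `relax`."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    row_norm2 = (A * A).sum(axis=1)
    for _ in range(n_iter):
        for i in range(m):
            if row_norm2[i] == 0:
                continue
            x += relax * (b[i] - A[i] @ x) / row_norm2[i] * A[i]
    return x
```

Whether a_i holds line-intersection lengths (LIM) or beam-pixel intersection areas (AIM) changes only how A is built, which is exactly the computation the paper accelerates.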

  11. A fast and accurate method for echocardiography strain rate imaging

    NASA Astrophysics Data System (ADS)

    Tavakoli, Vahid; Sahba, Nima; Hajebi, Nima; Nambakhsh, Mohammad Saleh

    2009-02-01

    Strain and strain rate imaging have recently proved superior to classical motion estimation methods for myocardial evaluation, providing a novel technique for quantitative analysis of myocardial function. In this paper, we propose a novel strain rate imaging algorithm that uses a new optical flow technique and is faster and more accurate than previous correlation-based methods. The new method presumes spatiotemporal constancy of the intensity and magnitude of the image, and makes use of spline moments in a multiresolution approach. The cardiac central point is obtained using a combination of the center of mass and endocardial tracking. It is shown that the proposed method helps overcome the intensity variations of ultrasound texture while preserving the ability of the motion estimation technique to handle different motions and orientations. Evaluation on simulated, phantom (a contractile rubber balloon), and real sequences shows that this technique is more accurate and faster than previous methods.

  12. Massively Parallel Processing for Fast and Accurate Stamping Simulations

    NASA Astrophysics Data System (ADS)

    Gress, Jeffrey J.; Xu, Siguang; Joshi, Ramesh; Wang, Chuan-tao; Paul, Sabu

    2005-08-01

    The competitive automotive market drives automotive manufacturers to speed up vehicle development cycles and reduce lead time. Fast tooling development is one of the key areas supporting fast and short vehicle development programs (VDP). In the past ten years, stamping simulation has become the most effective validation tool for predicting and resolving all potential formability and quality problems before the dies are physically made. Stamping simulation and formability analysis have become a critical business segment in the GM math-based die engineering process. As simulation has become one of the major production tools in the engineering factory, simulation speed and accuracy are two of the most important measures of stamping simulation technology. The speed and time-in-system of forming analysis are even more critical to supporting the fast VDP and tooling readiness. Since 1997, the General Motors Die Center has been working jointly with our software vendor to develop and implement a parallel version of simulation software for mass production analysis applications. By 2001, this technology had matured in the form of distributed memory processing (DMP) of draw die simulations in a networked distributed-memory computing environment. In 2004, this technology was refined to massively parallel processing (MPP) and extended to line die forming analysis (draw, trim, flange, and associated spring-back) running on a dedicated computing environment. The evolution of this technology and the insight gained through the implementation of DMP/MPP technology, as well as performance benchmarks, are discussed in this publication.

  13. Fast and Accurate Construction of Confidence Intervals for Heritability.

    PubMed

    Schweiger, Regev; Kaufman, Shachar; Laaksonen, Reijo; Kleber, Marcus E; März, Winfried; Eskin, Eleazar; Rosset, Saharon; Halperin, Eran

    2016-06-01

    Estimation of heritability is fundamental in genetic studies. Recently, heritability estimation using linear mixed models (LMMs) has gained popularity because these estimates can be obtained from unrelated individuals collected in genome-wide association studies. Typically, heritability estimation under LMMs uses the restricted maximum likelihood (REML) approach. Existing methods for the construction of confidence intervals and estimators of SEs for REML rely on asymptotic properties. However, these assumptions are often violated because of the bounded parameter space, statistical dependencies, and limited sample size, leading to biased estimates and inflated or deflated confidence intervals. Here, we show that the estimation of confidence intervals by state-of-the-art methods is inaccurate, especially when the true heritability is relatively low or relatively high. We further show that these inaccuracies occur in datasets including thousands of individuals. Such biases are present, for example, in estimates of heritability of gene expression in the Genotype-Tissue Expression project and of lipid profiles in the Ludwigshafen Risk and Cardiovascular Health study. We also show that often the probability that the genetic component is estimated as 0 is high even when the true heritability is bounded away from 0, emphasizing the need for accurate confidence intervals. We propose a computationally efficient method, ALBI (accurate LMM-based heritability bootstrap confidence intervals), for estimating the distribution of the heritability estimator and for constructing accurate confidence intervals. Our method can be used as an add-on to existing methods for estimating heritability and variance components, such as GCTA, FaST-LMM, GEMMA, or EMMAX. PMID:27259052
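ALBI's actual construction of the estimator distribution is not detailed in the abstract; the following is a generic parametric-bootstrap sketch of the underlying idea, namely respecting the bounded parameter space that asymptotic normal intervals ignore. The error model and all names are hypothetical toys, not ALBI's method:

```python
import numpy as np

def bootstrap_ci(h_hat, se, n_boot=10_000, alpha=0.05, seed=0):
    """Parametric-bootstrap percentile CI for an estimator constrained to
    [0, 1], as heritability is.  Resample estimator values around the point
    estimate under a normal error model, clip to the boundary (a mass of
    draws piles up at 0 or 1 when the estimate is near a boundary), and
    take percentiles of the resulting distribution."""
    rng = np.random.default_rng(seed)
    draws = np.clip(h_hat + se * rng.standard_normal(n_boot), 0.0, 1.0)
    lo, hi = np.quantile(draws, [alpha / 2, 1 - alpha / 2])
    return lo, hi
```

For a low point estimate the lower endpoint collapses to 0 rather than going negative, illustrating why boundary-aware intervals differ from the asymptotic ones the paper criticizes.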

  14. BBMap: A Fast, Accurate, Splice-Aware Aligner

    SciTech Connect

    Bushnell, Brian

    2014-03-17

    Alignment of reads is one of the primary computational tasks in bioinformatics. Of paramount importance to resequencing, alignment is also crucial to other areas - quality control, scaffolding, string-graph assembly, homology detection, assembly evaluation, error-correction, expression quantification, and even as a tool to evaluate other tools. An optimal aligner would greatly improve virtually any sequencing process, but optimal alignment is prohibitively expensive for gigabases of data. Here, we present BBMap [1], a fast splice-aware aligner for short and long reads. We demonstrate that BBMap has superior speed, sensitivity, and specificity to alternative high-throughput aligners bowtie2 [2], bwa [3], smalt [4], GSNAP [5], and BLASR [6].

  15. A fast and accurate decoder for underwater acoustic telemetry

    NASA Astrophysics Data System (ADS)

    Ingraham, J. M.; Deng, Z. D.; Li, X.; Fu, T.; McMichael, G. A.; Trumbo, B. A.

    2014-07-01

    The Juvenile Salmon Acoustic Telemetry System, developed by the U.S. Army Corps of Engineers, Portland District, has been used to monitor the survival of juvenile salmonids passing through hydroelectric facilities in the Federal Columbia River Power System. Cabled hydrophone arrays deployed at dams receive coded transmissions sent from acoustic transmitters implanted in fish. The signals' time of arrival on different hydrophones is used to track fish in 3D. In this article, a new algorithm that decodes the received transmissions is described and the results are compared to results for the previous decoding algorithm. In a laboratory environment, the new decoder was able to decode signals with lower signal strength than the previous decoder, effectively increasing decoding efficiency and range. In field testing, the new algorithm decoded significantly more signals than the previous decoder and three-dimensional tracking experiments showed that the new decoder's time-of-arrival estimates were accurate. At multiple distances from hydrophones, the new algorithm tracked more points more accurately than the previous decoder. The new algorithm was also more than 10 times faster, which is critical for real-time applications on an embedded system.

  16. A fast and accurate decoder for underwater acoustic telemetry.

    PubMed

    Ingraham, J M; Deng, Z D; Li, X; Fu, T; McMichael, G A; Trumbo, B A

    2014-07-01

    The Juvenile Salmon Acoustic Telemetry System, developed by the U.S. Army Corps of Engineers, Portland District, has been used to monitor the survival of juvenile salmonids passing through hydroelectric facilities in the Federal Columbia River Power System. Cabled hydrophone arrays deployed at dams receive coded transmissions sent from acoustic transmitters implanted in fish. The signals' time of arrival on different hydrophones is used to track fish in 3D. In this article, a new algorithm that decodes the received transmissions is described and the results are compared to results for the previous decoding algorithm. In a laboratory environment, the new decoder was able to decode signals with lower signal strength than the previous decoder, effectively increasing decoding efficiency and range. In field testing, the new algorithm decoded significantly more signals than the previous decoder and three-dimensional tracking experiments showed that the new decoder's time-of-arrival estimates were accurate. At multiple distances from hydrophones, the new algorithm tracked more points more accurately than the previous decoder. The new algorithm was also more than 10 times faster, which is critical for real-time applications on an embedded system. PMID:25085162

  17. A new simple multidomain fast multipole boundary element method

    NASA Astrophysics Data System (ADS)

    Huang, S.; Liu, Y. J.

    2016-09-01

    A simple multidomain fast multipole boundary element method (BEM) for solving potential problems is presented in this paper, which can be applied to solve a true multidomain problem or a large-scale single-domain problem using the domain decomposition technique. In this multidomain BEM, the coefficient matrix is formed simply by assembling the coefficient matrices of each subdomain and the interface conditions between subdomains, without eliminating any unknown variables on the interfaces. Compared with other conventional multidomain BEM approaches, this new approach is more efficient with the fast multipole method, regardless of how the subdomains are connected. Instead of solving the linear system of equations directly, the entire coefficient matrix is partitioned and decomposed using the Schur complement in this new approach. Numerical results show that the new multidomain fast multipole BEM uses fewer iterations with the iterative equation solver in most cases and less CPU time than the traditional fast multipole BEM in solving large-scale BEM models. A large-scale fuel cell model with more than 6 million elements was solved successfully on a cluster within 3 hours using the new multidomain fast multipole BEM.
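The paper's own partitioning is not reproduced here; a generic two-block dense version of the Schur-complement elimination it applies to the assembled matrix, with hypothetical names:

```python
import numpy as np

def schur_solve(A11, A12, A21, A22, b1, b2):
    """Solve the block system [[A11, A12], [A21, A22]] [x1; x2] = [b1; b2]
    by eliminating x1: form the Schur complement S = A22 - A21 A11^{-1} A12,
    solve S x2 = b2 - A21 A11^{-1} b1, then back-substitute for x1."""
    A11_inv_A12 = np.linalg.solve(A11, A12)
    A11_inv_b1 = np.linalg.solve(A11, b1)
    S = A22 - A21 @ A11_inv_A12
    x2 = np.linalg.solve(S, b2 - A21 @ A11_inv_b1)
    x1 = A11_inv_b1 - A11_inv_A12 @ x2
    return x1, x2
```

In the BEM setting the blocks correspond to subdomain unknowns and interface unknowns, and the subdomain solves are where the fast multipole acceleration enters.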

  18. Orbital Advection by Interpolation: A Fast and Accurate Numerical Scheme for Super-Fast MHD Flows

    SciTech Connect

    Johnson, B M; Guan, X; Gammie, F

    2008-04-11

    In numerical models of thin astrophysical disks that use an Eulerian scheme, gas orbits supersonically through a fixed grid. As a result the timestep is sharply limited by the Courant condition. Also, because the mean flow speed with respect to the grid varies with position, the truncation error varies systematically with position. For hydrodynamic (unmagnetized) disks an algorithm called FARGO has been developed that advects the gas along its mean orbit using a separate interpolation substep. This relaxes the constraint imposed by the Courant condition, which now depends only on the peculiar velocity of the gas, and results in a truncation error that is more nearly independent of position. This paper describes a FARGO-like algorithm suitable for evolving magnetized disks. Our method is second order accurate on a smooth flow and preserves ∇ · B = 0 to machine precision. The main restriction is that B must be discretized on a staggered mesh. We give a detailed description of an implementation of the code and demonstrate that it produces the expected results on linear and nonlinear problems. We also point out how the scheme might be generalized to make the integration of other supersonic/super-fast flows more efficient. Although our scheme reduces the variation of truncation error with position, it does not eliminate it. We show that the residual position dependence leads to characteristic radial variations in the density over long integrations.
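A minimal 1D periodic sketch of the FARGO-style idea described above: split the uniform orbital shift into an exact integer-cell roll plus a fractional-cell shift handled by interpolation (linear here for brevity; the paper's scheme is higher order and constrained-transport aware, and all names below are hypothetical):

```python
import numpy as np

def orbital_shift(q, shift_cells):
    """Advect profile q by `shift_cells` (possibly fractional) on a periodic
    grid: an exact integer roll plus linear interpolation for the remainder.
    The timestep limit then depends only on the residual (peculiar) motion,
    not on the large mean orbital speed."""
    n = len(q)
    k = int(np.floor(shift_cells))    # integer part: exact, no diffusion
    f = shift_cells - k               # fractional part in [0, 1)
    q_int = np.roll(q, k)
    # blend each cell with its upstream neighbor for the fractional shift
    return (1.0 - f) * q_int + f * np.roll(q_int, 1)
```

Only the fractional interpolation introduces truncation error, which is why the error becomes nearly independent of the local orbital speed.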

  19. Fast and Accurate Simulation of the Cray XMT Multithreaded Supercomputer

    SciTech Connect

    Villa, Oreste; Tumeo, Antonino; Secchi, Simone; Manzano Franco, Joseph B.

    2012-12-31

    Irregular applications, such as data mining and analysis or graph-based computations, show unpredictable memory/network access patterns and control structures. Highly multithreaded architectures with large processor counts, like the Cray MTA-1, MTA-2 and XMT, appear to address their requirements better than commodity clusters. However, the research on highly multithreaded systems is currently limited by the lack of adequate architectural simulation infrastructures due to issues such as size of the machines, memory footprint, simulation speed, accuracy and customization. At the same time, Shared-memory MultiProcessors (SMPs) with multi-core processors have become an attractive platform to simulate large scale machines. In this paper, we introduce a cycle-level simulator of the highly multithreaded Cray XMT supercomputer. The simulator runs unmodified XMT applications. We discuss how we tackled the challenges posed by its development, detailing the techniques introduced to make the simulation as fast as possible while maintaining a high accuracy. By mapping XMT processors (ThreadStorm with 128 hardware threads) to host computing cores, the simulation speed remains constant as the number of simulated processors increases, up to the number of available host cores. The simulator supports zero-overhead switching among different accuracy levels at run-time and includes a network model that takes into account contention. On a modern 48-core SMP host, our infrastructure simulates a large set of irregular applications 500 to 2000 times slower than real time when compared to a 128-processor XMT, while remaining within 10% of accuracy. Emulation is only from 25 to 200 times slower than real time.

  20. Fast and Accurate Detection of Multiple Quantitative Trait Loci

    PubMed Central

    Nettelblad, Carl; Holmgren, Sverker

    2013-01-01

    We present a new computational scheme that enables efficient and reliable quantitative trait loci (QTL) scans for experimental populations. Using a standard brute-force exhaustive search effectively prohibits accurate QTL scans involving more than two loci from being performed in practice, at least if permutation testing is used to determine significance. Some more elaborate global optimization approaches, for example DIRECT, have been applied earlier to QTL search problems. Dramatic speedups have been reported for high-dimensional scans. However, since a heuristic termination criterion must be used in these types of algorithms, the accuracy of the optimization process cannot be guaranteed. Indeed, earlier results show that a small bias in the significance thresholds is sometimes introduced. Our new optimization scheme, PruneDIRECT, is based on an analysis leading to a computable (Lipschitz) bound on the slope of a transformed objective function. The bound is derived for both infinite- and finite-size populations. Introducing a Lipschitz bound in DIRECT leads to an algorithm related to classical Lipschitz optimization. Regions in the search space can be permanently excluded (pruned) during the optimization process. Heuristic termination criteria can thus be avoided. Hence, PruneDIRECT has a well-defined error bound and can in practice be guaranteed to be equivalent to a corresponding exhaustive search. We present simulation results that show that for simultaneous mapping of three QTLs using permutation testing, PruneDIRECT is typically more than 50 times faster than exhaustive search. The speedup is higher for stronger QTLs. This could be used to quickly detect strong candidate eQTL networks. PMID:23919387
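PruneDIRECT's QTL-specific Lipschitz bound is not reproduced here; the following is a 1D toy of the generic pruning principle it rests on: given a valid Lipschitz constant L, a subinterval whose best achievable value cannot beat the incumbent can be permanently excluded, so no heuristic termination is needed. All names are hypothetical:

```python
def lipschitz_prune(f, lo, hi, L, tol=1e-3):
    """Branch-and-prune global minimization of f on [lo, hi] given a
    Lipschitz bound L on |f'|: a subinterval with center c and half-width r
    can be discarded once f(c) - L*r >= best, since no point inside it can
    improve on the incumbent."""
    best_x, best = lo, f(lo)
    boxes = [(lo, hi)]
    while boxes:
        a, b = boxes.pop()
        c, r = 0.5 * (a + b), 0.5 * (b - a)
        fc = f(c)
        if fc < best:
            best_x, best = c, fc
        if fc - L * r >= best or r < tol:
            continue        # pruned or resolved: no better minimum inside
        boxes += [(a, c), (c, b)]
    return best_x, best
```

Because the exclusion test uses a rigorous lower bound, the result matches an exhaustive search to within the resolution tol, mirroring the guarantee claimed for PruneDIRECT.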

  1. Fast and accurate predictions of covalent bonds in chemical space.

    PubMed

    Chang, K Y Samuel; Fias, Stijn; Ramakrishnan, Raghunathan; von Lilienfeld, O Anatole

    2016-05-01

    We assess the predictive accuracy of perturbation theory based estimates of changes in covalent bonding due to linear alchemical interpolations among molecules. We have investigated σ bonding to hydrogen, as well as σ and π bonding between main-group elements, occurring in small sets of iso-valence-electronic molecules with elements drawn from second to fourth rows in the p-block of the periodic table. Numerical evidence suggests that first order Taylor expansions of covalent bonding potentials can achieve high accuracy if (i) the alchemical interpolation is vertical (fixed geometry), (ii) it involves elements from the third and fourth rows of the periodic table, and (iii) an optimal reference geometry is used. This leads to near linear changes in the bonding potential, resulting in analytical predictions with chemical accuracy (∼1 kcal/mol). Second order estimates deteriorate the prediction. If initial and final molecules differ not only in composition but also in geometry, all estimates become substantially worse, with second order being slightly more accurate than first order. The independent particle approximation based second order perturbation theory performs poorly when compared to the coupled perturbed or finite difference approach. Taylor series expansions up to fourth order of the potential energy curve of highly symmetric systems indicate a finite radius of convergence, as illustrated for the alchemical stretching of H2 (+). Results are presented for (i) covalent bonds to hydrogen in 12 molecules with 8 valence electrons (CH4, NH3, H2O, HF, SiH4, PH3, H2S, HCl, GeH4, AsH3, H2Se, HBr); (ii) main-group single bonds in 9 molecules with 14 valence electrons (CH3F, CH3Cl, CH3Br, SiH3F, SiH3Cl, SiH3Br, GeH3F, GeH3Cl, GeH3Br); (iii) main-group double bonds in 9 molecules with 12 valence electrons (CH2O, CH2S, CH2Se, SiH2O, SiH2S, SiH2Se, GeH2O, GeH2S, GeH2Se); (iv) main-group triple bonds in 9 molecules with 10 valence electrons (HCN, HCP, HCAs, HSiN, HSi

  2. Fast and accurate predictions of covalent bonds in chemical space

    NASA Astrophysics Data System (ADS)

    Chang, K. Y. Samuel; Fias, Stijn; Ramakrishnan, Raghunathan; von Lilienfeld, O. Anatole

    2016-05-01

    We assess the predictive accuracy of perturbation theory based estimates of changes in covalent bonding due to linear alchemical interpolations among molecules. We have investigated σ bonding to hydrogen, as well as σ and π bonding between main-group elements, occurring in small sets of iso-valence-electronic molecules with elements drawn from second to fourth rows in the p-block of the periodic table. Numerical evidence suggests that first order Taylor expansions of covalent bonding potentials can achieve high accuracy if (i) the alchemical interpolation is vertical (fixed geometry), (ii) it involves elements from the third and fourth rows of the periodic table, and (iii) an optimal reference geometry is used. This leads to near linear changes in the bonding potential, resulting in analytical predictions with chemical accuracy (˜1 kcal/mol). Second order estimates deteriorate the prediction. If initial and final molecules differ not only in composition but also in geometry, all estimates become substantially worse, with second order being slightly more accurate than first order. The independent particle approximation based second order perturbation theory performs poorly when compared to the coupled perturbed or finite difference approach. Taylor series expansions up to fourth order of the potential energy curve of highly symmetric systems indicate a finite radius of convergence, as illustrated for the alchemical stretching of H 2+ . Results are presented for (i) covalent bonds to hydrogen in 12 molecules with 8 valence electrons (CH4, NH3, H2O, HF, SiH4, PH3, H2S, HCl, GeH4, AsH3, H2Se, HBr); (ii) main-group single bonds in 9 molecules with 14 valence electrons (CH3F, CH3Cl, CH3Br, SiH3F, SiH3Cl, SiH3Br, GeH3F, GeH3Cl, GeH3Br); (iii) main-group double bonds in 9 molecules with 12 valence electrons (CH2O, CH2S, CH2Se, SiH2O, SiH2S, SiH2Se, GeH2O, GeH2S, GeH2Se); (iv) main-group triple bonds in 9 molecules with 10 valence electrons (HCN, HCP, HCAs, HSiN, HSi

  4. Effects of Fast Simple Numerical Calculation Training on Neural Systems.

    PubMed

    Takeuchi, Hikaru; Nagase, Tomomi; Taki, Yasuyuki; Sassa, Yuko; Hashizume, Hiroshi; Nouchi, Rui; Kawashima, Ryuta

    2016-01-01

    Cognitive training, including fast simple numerical calculation (FSNC), has been shown to improve performance on untrained processing speed and executive function tasks in the elderly. However, the effects of FSNC training on cognitive functions in the young and on neural mechanisms remain unknown. We investigated the effects of 1-week intensive FSNC training on cognitive function, regional gray matter volume (rGMV), and regional cerebral blood flow at rest (resting rCBF) in healthy young adults. FSNC training was associated with improvements in performance on simple processing speed, speeded executive functioning, and simple and complex arithmetic tasks. FSNC training was associated with a reduction in rGMV and an increase in resting rCBF in the frontopolar areas and a weak but widespread increase in resting rCBF in an anatomical cluster in the posterior region. These results provide direct evidence that FSNC training alone can improve performance on processing speed and executive function tasks as well as plasticity of brain structures and perfusion. Our results also indicate that changes in neural systems in the frontopolar areas may underlie these cognitive improvements.

  5. Effects of Fast Simple Numerical Calculation Training on Neural Systems

    PubMed Central

    Takeuchi, Hikaru; Nagase, Tomomi; Taki, Yasuyuki; Sassa, Yuko; Hashizume, Hiroshi; Nouchi, Rui; Kawashima, Ryuta

    2016-01-01

    Cognitive training, including fast simple numerical calculation (FSNC), has been shown to improve performance on untrained processing speed and executive function tasks in the elderly. However, the effects of FSNC training on cognitive functions in the young and on neural mechanisms remain unknown. We investigated the effects of 1-week intensive FSNC training on cognitive function, regional gray matter volume (rGMV), and regional cerebral blood flow at rest (resting rCBF) in healthy young adults. FSNC training was associated with improvements in performance on simple processing speed, speeded executive functioning, and simple and complex arithmetic tasks. FSNC training was associated with a reduction in rGMV and an increase in resting rCBF in the frontopolar areas and a weak but widespread increase in resting rCBF in an anatomical cluster in the posterior region. These results provide direct evidence that FSNC training alone can improve performance on processing speed and executive function tasks as well as plasticity of brain structures and perfusion. Our results also indicate that changes in neural systems in the frontopolar areas may underlie these cognitive improvements. PMID:26881117

  6. FastRNABindR: Fast and Accurate Prediction of Protein-RNA Interface Residues

    PubMed Central

    EL-Manzalawy, Yasser; Abbas, Mostafa; Malluhi, Qutaibah; Honavar, Vasant

    2016-01-01

A wide range of biological processes, including regulation of gene expression, protein synthesis, and replication and assembly of many viruses, are mediated by RNA-protein interactions. However, experimental determination of the structures of protein-RNA complexes is expensive and technically challenging. Hence, a number of computational tools have been developed for predicting protein-RNA interfaces. Some of the state-of-the-art protein-RNA interface predictors rely on position-specific scoring matrix (PSSM)-based encoding of the protein sequences. The computational effort needed to generate PSSMs severely limits the practical utility of protein-RNA interface prediction servers. In this work, we experiment with two approaches, random sampling and sequence similarity reduction, for extracting a representative reference database of protein sequences from the more than 50 million protein sequences in UniRef100. Our results suggest that randomly sampled databases produce better PSSM profiles (in terms of the number of hits used to generate the profile, the distance of the generated profile to the corresponding profile generated using the entire UniRef100 data, and the accuracy of the machine learning classifier trained using these profiles). Based on our results, we developed FastRNABindR, an improved version of RNABindR, for predicting protein-RNA interface residues using PSSM profiles generated using 1% of the UniRef100 sequences sampled uniformly at random. To the best of our knowledge, FastRNABindR is the only protein-RNA interface residue prediction online server that requires generation of PSSM profiles for query sequences and accepts hundreds of protein sequences per submission.
Our approach for determining the optimal BLAST database for a protein-RNA interface residue classification task has the potential of substantially speeding up, and hence increasing the practical utility of, other amino acid sequence based predictors of protein-protein and protein
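The sampling step described above (a 1% uniform random sample of a huge sequence database used to build the PSSM reference) can be sketched with per-record Bernoulli sampling, which avoids holding the whole database in memory at once in a streaming setting. This is a minimal sketch with a toy FASTA parser; the record names are hypothetical and this is not the FastRNABindR implementation.

```python
import random

def sample_fasta(records, fraction=0.01, seed=42):
    """Keep each FASTA record independently with probability `fraction`,
    approximating a 1% uniform random sample of a large database."""
    rng = random.Random(seed)
    return [rec for rec in records if rng.random() < fraction]

def parse_fasta(lines):
    """Minimal FASTA parser: yields (header, sequence) tuples."""
    header, seq = None, []
    for line in lines:
        line = line.rstrip()
        if line.startswith(">"):
            if header is not None:
                yield header, "".join(seq)
            header, seq = line[1:], []
        else:
            seq.append(line)
    if header is not None:
        yield header, "".join(seq)

# Toy database of 10,000 single-line records
toy = []
for i in range(10000):
    toy.append(f">seq{i}")
    toy.append("ACDEFGHIKLMNPQRSTVWY")
records = list(parse_fasta(toy))
subset = sample_fasta(records, fraction=0.01)
print(len(records), len(subset))  # subset is roughly 1% of the input
```

The sampled subset would then be written back out as FASTA and formatted into a BLAST database for PSSM generation.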

  7. FastRNABindR: Fast and Accurate Prediction of Protein-RNA Interface Residues.

    PubMed

    El-Manzalawy, Yasser; Abbas, Mostafa; Malluhi, Qutaibah; Honavar, Vasant

    2016-01-01

A wide range of biological processes, including regulation of gene expression, protein synthesis, and replication and assembly of many viruses, are mediated by RNA-protein interactions. However, experimental determination of the structures of protein-RNA complexes is expensive and technically challenging. Hence, a number of computational tools have been developed for predicting protein-RNA interfaces. Some of the state-of-the-art protein-RNA interface predictors rely on position-specific scoring matrix (PSSM)-based encoding of the protein sequences. The computational effort needed to generate PSSMs severely limits the practical utility of protein-RNA interface prediction servers. In this work, we experiment with two approaches, random sampling and sequence similarity reduction, for extracting a representative reference database of protein sequences from the more than 50 million protein sequences in UniRef100. Our results suggest that randomly sampled databases produce better PSSM profiles (in terms of the number of hits used to generate the profile, the distance of the generated profile to the corresponding profile generated using the entire UniRef100 data, and the accuracy of the machine learning classifier trained using these profiles). Based on our results, we developed FastRNABindR, an improved version of RNABindR, for predicting protein-RNA interface residues using PSSM profiles generated using 1% of the UniRef100 sequences sampled uniformly at random. To the best of our knowledge, FastRNABindR is the only protein-RNA interface residue prediction online server that requires generation of PSSM profiles for query sequences and accepts hundreds of protein sequences per submission.
Our approach for determining the optimal BLAST database for a protein-RNA interface residue classification task has the potential of substantially speeding up, and hence increasing the practical utility of, other amino acid sequence based predictors of protein-protein and protein

  8. A Simple Dewar/Cryostat for Thermally Equilibrating Samples at Known Temperatures for Accurate Cryogenic Luminescence Measurements.

    PubMed

    Weaver, Phoebe G; Jagow, Devin M; Portune, Cameron M; Kenney, John W

    2016-01-01

The design and operation of a simple liquid nitrogen Dewar/cryostat apparatus based upon a small fused silica optical Dewar, a thermocouple assembly, and a CCD spectrograph are described. The experiments for which this Dewar/cryostat is designed require fast sample loading, fast sample freezing, fast alignment of the sample, accurate and stable sample temperatures, and small size and portability of the Dewar/cryostat cryogenic unit. When coupled with the fast data acquisition rates of the CCD spectrograph, this Dewar/cryostat is capable of supporting cryogenic luminescence spectroscopic measurements on luminescent samples at a series of known, stable temperatures in the 77-300 K range. A temperature-dependent study of the oxygen quenching of luminescence in a rhodium(III) transition metal complex is presented as an example of the type of investigation possible with this Dewar/cryostat. In the context of this apparatus, a stable temperature for cryogenic spectroscopy means a luminescent sample that is thermally equilibrated with either liquid nitrogen or gaseous nitrogen at a known, measurable temperature that does not vary (ΔT < 0.1 K) during the short time scale (~1-10 sec) of the spectroscopic measurement by the CCD. The Dewar/cryostat works by taking advantage of the positive thermal gradient dT/dh that develops above the liquid nitrogen level in the Dewar, where h is the height of the sample above the liquid nitrogen level. The slow evaporation of the liquid nitrogen results in a slow increase in h over several hours and a consequent slow increase in the sample temperature T over this time period. A quickly acquired luminescence spectrum effectively catches the sample at a constant, thermally equilibrated temperature. PMID:27501355

  9. Algorithms for Accurate and Fast Plotting of Contour Surfaces in 3D Using Hexahedral Elements

    NASA Astrophysics Data System (ADS)

    Singh, Chandan; Saini, Jaswinder Singh

    2016-07-01

In the present study, fast and accurate algorithms for the generation of contour surfaces in 3D are described using hexahedral elements, which are popular in finite element analysis. The contour surfaces are described in the form of groups of boundaries of contour segments, and their interior points are derived using the contour equation. The locations of contour boundaries and of the interior points on contour surfaces are as accurate as the interpolation results obtained by hexahedral elements, and thus there are no discrepancies between the analysis and visualization results.
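The core of any hexahedral contouring scheme is the trilinear interpolant of the eight nodal values and the contour equation it defines. The sketch below shows the standard 8-node trilinear shape functions on the reference cube and a bisection search for the point where an element edge crosses a contour level; it illustrates the interpolation machinery the abstract refers to, not the paper's specific surface-assembly algorithm.

```python
import numpy as np

# Local coordinates of the 8 nodes of the reference hexahedron,
# with (xi, eta, zeta) each in [-1, 1].
NODES = np.array([[-1,-1,-1],[ 1,-1,-1],[ 1, 1,-1],[-1, 1,-1],
                  [-1,-1, 1],[ 1,-1, 1],[ 1, 1, 1],[-1, 1, 1]], float)

def shape_functions(xi, eta, zeta):
    """Standard trilinear shape functions N_i(xi, eta, zeta)."""
    return 0.125 * (1 + NODES[:, 0]*xi) * (1 + NODES[:, 1]*eta) * (1 + NODES[:, 2]*zeta)

def interpolate(nodal_values, xi, eta, zeta):
    """Field value inside the element from its 8 nodal values."""
    return shape_functions(xi, eta, zeta) @ nodal_values

def edge_crossing(nodal_values, a, b, level, tol=1e-10):
    """Bisection along the element edge from node a to node b for the
    point where the interpolated field equals the contour level."""
    fa = nodal_values[a] - level
    fb = nodal_values[b] - level
    if fa * fb > 0:
        return None  # edge does not cross the contour
    pa, pb = NODES[a], NODES[b]
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        fm = interpolate(nodal_values, *(pa + mid * (pb - pa))) - level
        if fa * fm <= 0:
            hi = mid   # sign change lies in [lo, mid]
        else:
            lo = mid   # sign change lies in [mid, hi]
    return pa + 0.5 * (lo + hi) * (pb - pa)

vals = np.array([0., 1., 2., 1., 1., 2., 3., 2.])   # nodal field values
p = edge_crossing(vals, 0, 1, level=0.5)
print(p)  # ≈ [0, -1, -1]: halfway along the edge from node 0 to node 1
```

Interior points of the contour surface are found the same way, by solving the contour equation along lines through the element rather than only along its edges.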

  10. Fast and accurate image recognition algorithms for fresh produce food safety sensing

    NASA Astrophysics Data System (ADS)

    Yang, Chun-Chieh; Kim, Moon S.; Chao, Kuanglin; Kang, Sukwon; Lefcourt, Alan M.

    2011-06-01

This research developed and evaluated multispectral algorithms derived from hyperspectral line-scan fluorescence imaging under violet LED excitation for the detection of fecal contamination on Golden Delicious apples. The algorithms used the fluorescence intensities at four wavebands, 680 nm, 684 nm, 720 nm, and 780 nm, to compute simple functions for effective detection of contamination spots created on the apple surfaces using four concentrations of aqueous fecal dilutions. The algorithms detected more than 99% of the fecal spots. The effective detection of feces showed that a simple multispectral fluorescence imaging algorithm based on violet LED excitation may be appropriate for detecting fecal contamination on high-speed apple processing lines.
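The abstract says the four waveband intensities feed "simple functions" but does not give the functions themselves, so the sketch below uses a hypothetical two-band ratio with an illustrative threshold on a simulated image pair. The band choice (680/720 nm) mirrors the abstract; the ratio form and the threshold value are assumptions for illustration only.

```python
import numpy as np

def detect_contamination(i680, i720, threshold=1.5):
    """Flag pixels whose 680/720 nm fluorescence ratio exceeds a threshold.
    Hypothetical detection rule; the paper's actual functions may differ."""
    ratio = np.divide(i680, i720, out=np.zeros_like(i680), where=i720 > 0)
    return ratio > threshold

rng = np.random.default_rng(0)
i680 = rng.uniform(0.8, 1.2, size=(64, 64))   # background fluorescence at 680 nm
i720 = rng.uniform(0.9, 1.1, size=(64, 64))   # reference band at 720 nm
i680[20:24, 30:34] = 5.0                      # simulated 4x4 fecal spot
mask = detect_contamination(i680, i720)
print(int(mask.sum()))  # → 16 (the 4x4 spot region)
```

Background ratios stay below 1.2/0.9 ≈ 1.33, so the threshold of 1.5 cleanly separates the simulated spot from clean apple surface.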

  11. Simple accurate approximations for the optical properties of metallic nanospheres and nanoshells.

    PubMed

    Schebarchov, Dmitri; Auguié, Baptiste; Le Ru, Eric C

    2013-03-28

    This work aims to provide simple and accurate closed-form approximations to predict the scattering and absorption spectra of metallic nanospheres and nanoshells supporting localised surface plasmon resonances. Particular attention is given to the validity and accuracy of these expressions in the range of nanoparticle sizes relevant to plasmonics, typically limited to around 100 nm in diameter. Using recent results on the rigorous radiative correction of electrostatic solutions, we propose a new set of long-wavelength polarizability approximations for both nanospheres and nanoshells. The improvement offered by these expressions is demonstrated with direct comparisons to other approximations previously obtained in the literature, and their absolute accuracy is tested against the exact Mie theory. PMID:23358525
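The closed-form expressions of the paper build on radiatively corrected electrostatic polarizabilities. The sketch below uses the generic long-wavelength form common in the plasmonics literature (quasi-static dipole polarizability of a sphere with the standard radiative correction, in a CGS-like convention) rather than the paper's own improved expressions; the permittivity value is illustrative.

```python
import numpy as np

def polarizability(eps, a, k):
    """Quasi-static dipole polarizability of a sphere of radius a and
    relative permittivity eps, with the standard radiative correction.
    Generic long-wavelength form, not the paper's exact expressions."""
    alpha0 = a**3 * (eps - 1.0) / (eps + 2.0)
    return alpha0 / (1.0 - (2.0 / 3.0) * 1j * k**3 * alpha0)

def cross_sections(eps, a, wavelength):
    """Extinction and scattering cross sections in the dipole limit."""
    k = 2 * np.pi / wavelength
    alpha = polarizability(eps, a, k)
    sigma_ext = 4 * np.pi * k * np.imag(alpha)
    sigma_sca = (8 * np.pi / 3) * k**4 * np.abs(alpha)**2
    return sigma_ext, sigma_sca

# Illustrative metal permittivity near the dipolar plasmon resonance
# (resonance condition eps ≈ -2 for a sphere in vacuum)
eps = -2.1 + 0.6j
ext, sca = cross_sections(eps, a=25.0, wavelength=400.0)  # lengths in nm
print(ext > 0 and sca > 0)
```

Without the radiative correction the quasi-static polarizability violates the optical theorem for non-absorbing particles; the corrected form is what makes such approximations usable up to the ~100 nm sizes discussed in the abstract.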

  12. Odontoma-associated tooth impaction: accurate diagnosis with simple methods? Case report and literature review.

    PubMed

    Troeltzsch, Matthias; Liedtke, Jan; Troeltzsch, Volker; Frankenberger, Roland; Steiner, Timm; Troeltzsch, Markus

    2012-10-01

    Odontomas account for the largest fraction of odontogenic tumors and are frequent causes of tooth impaction. A case of a 13-year-old female patient with an odontoma-associated impaction of a mandibular molar is presented with a review of the literature. Preoperative planning involved simple and convenient methods such as clinical examination and panoramic radiography, which led to a diagnosis of complex odontoma and warranted surgical removal. The clinical diagnosis was confirmed histologically. Multidisciplinary consultation may enable the clinician to find the accurate diagnosis and appropriate therapy based on the clinical and radiographic appearance. Modern radiologic methods such as cone-beam computed tomography or computed tomography should be applied only for special cases, to decrease radiation.

  13. A simple and accurate resist parameter extraction method for sub-80-nm DRAM patterns

    NASA Astrophysics Data System (ADS)

    Lee, Sook; Hwang, Chan; Park, Dong-Woon; Kim, In-Sung; Kim, Ho-Chul; Woo, Sang-Gyun; Cho, Han-Ku; Moon, Joo-Tae

    2004-05-01

Due to the polarization effect of high-NA lithography, the consideration of resist effects in lithography simulation becomes increasingly important. In spite of the importance of resist simulation, many process engineers are reluctant to consider resist effects in lithography simulation due to the time-consuming procedure required to extract the resist parameters and the uncertainty in measuring some of them. Weiss suggested a simplified development model that does not require the complex kinetic parameters. For device fabrication engineers, there is a simple and accurate parameter extraction and optimization method using the Weiss model. This method needs the refractive index, Dill's parameters, and development rate monitoring (DRM) data for parameter extraction. The parameters extracted by this sequence alone are not accurate, so they must be optimized to fit the critical dimension scanning electron microscopy (CD SEM) data of line and space patterns. Hence, FiRM from Sigma-C is utilized as the resist parameter optimization program. According to our study, the illumination shape, the aberrations, and the pupil mesh points have a large effect on the accuracy of the resist parameters in optimization. To obtain the optimum parameters, the saturated mesh points in terms of normalized intensity log slope (NILS) must be found prior to optimization. Simulation results using the parameters optimized by this method show good agreement with experiments for iso-dense bias, focus-exposure matrix data, and sub-80-nm device pattern simulation.

  14. Fast and accurate analytical model to solve inverse problem in SHM using Lamb wave propagation

    NASA Astrophysics Data System (ADS)

    Poddar, Banibrata; Giurgiutiu, Victor

    2016-04-01

Lamb wave propagation is at the center of attention of researchers for structural health monitoring of thin-walled structures. This is due to the fact that Lamb wave modes are natural modes of wave propagation in these structures, with long travel distances and little attenuation, which brings the prospect of monitoring large structures with few sensors/actuators. However, the problem of damage detection and identification is an "inverse problem" in which we do not have the luxury of knowing the exact mathematical model of the system. The problem is made more challenging by the confounding factors of statistical variation of the material and geometric properties, and it may also be ill-posed. Due to all these complexities, a direct solution of the problem of damage detection and identification in SHM is impossible; instead, an indirect method using the solution of the "forward problem" is popular for solving the "inverse problem". This requires a fast forward-problem solver. Because of the complexities involved in the forward problem of scattering of Lamb waves from damage, researchers rely primarily on numerical techniques such as FEM, BEM, etc. But these methods are too slow to be practical for structural health monitoring. We have developed a fast and accurate analytical forward-problem solver for this purpose. This solver, CMEP (complex modes expansion and vector projection), can simulate scattering of Lamb waves from all types of damage in thin-walled structures quickly and accurately to assist the inverse-problem solver.

  15. Automated Fast and Accurate Display Calibration Using ADT Compensated LCD for Mobile Phone

    NASA Astrophysics Data System (ADS)

    Han, Chan-Ho; Park, Kil-Houm

Gamma correction is an essential and time-consuming task in every display device, such as CRTs and LCDs, and the gray-scale CCT reproduction of most LCDs differs considerably from that of a standard CRT. An automated, fast, and accurate display adjustment method and system for gamma correction and constant gray-scale CCT calibration of mobile phone LCDs is presented in this paper. We developed a test-pattern display and register control program for the mobile phone, and an automatic measurement program on a computer using a spectroradiometer. The proposed system maintains the given gamma and CCT values accurately. In addition, this system makes fast mobile phone LCD adjustment possible within one hour.
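Gamma correction itself reduces to an inverse-power lookup table: a display maps code value V (normalized to 0..1) to luminance roughly as L = V**gamma, so the correction LUT applies V**(1/gamma) before the panel. The sketch below builds a standard 8-bit LUT and checks that LUT plus native response recovers a near-linear ramp; the paper's register-level, measured calibration loop is device-specific and not reproduced here.

```python
import numpy as np

def gamma_lut(gamma=2.2, levels=256):
    """8-bit gamma-correction lookup table: code value -> corrected code."""
    v = np.arange(levels) / (levels - 1)
    return np.round((v ** (1.0 / gamma)) * (levels - 1)).astype(np.uint8)

lut = gamma_lut(2.2)

# Applying the LUT and then the display's native power-law response
# recovers a (quantised) linear luminance ramp.
display = (lut / 255.0) ** 2.2
print(np.max(np.abs(display - np.arange(256) / 255.0)) < 0.01)
```

The residual error comes only from 8-bit quantisation of the LUT; in a real calibration system the measured panel response replaces the assumed pure power law.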

  16. Fast, Accurate RF Propagation Modeling and Simulation Tool for Highly Cluttered Environments

    SciTech Connect

    Kuruganti, Phani Teja

    2007-01-01

As network-centric warfare and distributed operations paradigms unfold, there is a need for robust, fast wireless network deployment tools. These tools must take into consideration the terrain of the operating theater and facilitate specific modeling of end-to-end network performance based on accurate RF propagation predictions. It is well known that empirical models cannot provide accurate, site-specific predictions of radio channel behavior. In this paper an event-driven wave propagation simulation is proposed as a computationally efficient technique for predicting critical propagation characteristics of RF signals in cluttered environments. Convincing validation and simulator performance studies confirm the suitability of this method for indoor and urban area RF channel modeling. By integrating our RF propagation prediction tool, RCSIM, with popular packet-level network simulators, we are able to construct an end-to-end network analysis tool for wireless networks operated in built-up urban areas.

  17. Introducing GAMER: A fast and accurate method for ray-tracing galaxies using procedural noise

    SciTech Connect

    Groeneboom, N. E.; Dahle, H.

    2014-03-10

We developed a novel approach for fast and accurate ray-tracing of galaxies using procedural noise fields. Our method allows for efficient and realistic rendering of synthetic galaxy morphologies, where individual components such as the bulge, disk, stars, and dust can be synthesized in different wavelengths. These components follow empirically motivated overall intensity profiles but contain an additional procedural noise component that gives rise to complex natural patterns that mimic interstellar dust and star-forming regions. These patterns produce more realistic-looking galaxy images than analytical expressions alone. The method is fully parallelized and creates accurate high- and low-resolution images that can be used, for example, in codes simulating strong and weak gravitational lensing. In addition to having a user-friendly graphical user interface, the C++ software package GAMER is easy to integrate into an existing code.
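The rendering idea of combining a smooth empirical profile with a procedural noise field can be sketched in a few lines: an exponential disk intensity profile multiplied by a smooth random texture. The value noise below (bilinear upsampling of a coarse random grid) is a simple stand-in for the procedural noise fields GAMER uses; the profile, grid size, and noise amplitude are illustrative choices.

```python
import numpy as np

def value_noise(shape, cells, rng):
    """Smooth random field: bilinear upsampling of a coarse random grid."""
    coarse = rng.random((cells, cells))
    y = np.linspace(0, cells - 1, shape[0])
    x = np.linspace(0, cells - 1, shape[1])
    y0, x0 = np.floor(y).astype(int), np.floor(x).astype(int)
    y1, x1 = np.minimum(y0 + 1, cells - 1), np.minimum(x0 + 1, cells - 1)
    fy, fx = (y - y0)[:, None], (x - x0)[None, :]
    return (coarse[np.ix_(y0, x0)] * (1 - fy) * (1 - fx)
            + coarse[np.ix_(y1, x0)] * fy * (1 - fx)
            + coarse[np.ix_(y0, x1)] * (1 - fy) * fx
            + coarse[np.ix_(y1, x1)] * fy * fx)

rng = np.random.default_rng(3)
n = 256
yy, xx = np.mgrid[0:n, 0:n]
r = np.hypot(yy - n / 2, xx - n / 2)
disk = np.exp(-r / 30.0)                     # smooth exponential intensity profile
noise = 0.5 + value_noise((n, n), 16, rng)   # multiplicative texture in [0.5, 1.5]
image = disk * noise
print(image.shape)
```

Summing several such noise layers at different grid resolutions (octaves) gives the fractal-looking dust and star-forming texture described in the abstract.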

  18. A Simple yet Accurate Method for the Estimation of the Biovolume of Planktonic Microorganisms.

    PubMed

    Saccà, Alessandro

    2016-01-01

    Determining the biomass of microbial plankton is central to the study of fluxes of energy and materials in aquatic ecosystems. This is typically accomplished by applying proper volume-to-carbon conversion factors to group-specific abundances and biovolumes. A critical step in this approach is the accurate estimation of biovolume from two-dimensional (2D) data such as those available through conventional microscopy techniques or flow-through imaging systems. This paper describes a simple yet accurate method for the assessment of the biovolume of planktonic microorganisms, which works with any image analysis system allowing for the measurement of linear distances and the estimation of the cross sectional area of an object from a 2D digital image. The proposed method is based on Archimedes' principle about the relationship between the volume of a sphere and that of a cylinder in which the sphere is inscribed, plus a coefficient of 'unellipticity' introduced here. Validation and careful evaluation of the method are provided using a variety of approaches. The new method proved to be highly precise with all convex shapes characterised by approximate rotational symmetry, and combining it with an existing method specific for highly concave or branched shapes allows covering the great majority of cases with good reliability. Thanks to its accuracy, consistency, and low resources demand, the new method can conveniently be used in substitution of any extant method designed for convex shapes, and can readily be coupled with automated cell imaging technologies, including state-of-the-art flow-through imaging devices. PMID:27195667
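The geometric idea can be sketched as follows, under an explicit reading of the abstract that may not match the paper's exact formulation: a cell close to a solid of revolution is approximated by the spheroid inscribed in its bounding cylinder, whose volume is 2/3 of the cylinder's (Archimedes), and the "unellipticity" coefficient is taken here as the measured silhouette area divided by the area of the matching ellipse. Both the cylinder approximation and this definition of the coefficient are assumptions for illustration.

```python
import math

def biovolume(length, width, silhouette_area):
    """Approximate cell volume from 2D measurements (length, width,
    silhouette area). Assumes the cell is near a solid of revolution:
    the inscribed spheroid has 2/3 of the bounding cylinder's volume,
    rescaled by an 'unellipticity' coefficient taken here as measured
    silhouette area over the matching ellipse's area. The paper's exact
    definition of the coefficient may differ."""
    cylinder = math.pi * (width / 2.0) ** 2 * length
    ellipse_area = math.pi * (length / 2.0) * (width / 2.0)
    unellipticity = silhouette_area / ellipse_area
    return (2.0 / 3.0) * cylinder * unellipticity

# Sanity check with a sphere of radius 5 um: silhouette is a circle
r = 5.0
v = biovolume(2 * r, 2 * r, math.pi * r**2)
print(abs(v - (4.0 / 3.0) * math.pi * r**3) < 1e-9)  # → True
```

For a perfect sphere the coefficient is 1 and the formula reduces exactly to (4/3)πr³, which is the consistency property any such estimator should satisfy.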

  19. A Simple yet Accurate Method for the Estimation of the Biovolume of Planktonic Microorganisms

    PubMed Central

    2016-01-01

    Determining the biomass of microbial plankton is central to the study of fluxes of energy and materials in aquatic ecosystems. This is typically accomplished by applying proper volume-to-carbon conversion factors to group-specific abundances and biovolumes. A critical step in this approach is the accurate estimation of biovolume from two-dimensional (2D) data such as those available through conventional microscopy techniques or flow-through imaging systems. This paper describes a simple yet accurate method for the assessment of the biovolume of planktonic microorganisms, which works with any image analysis system allowing for the measurement of linear distances and the estimation of the cross sectional area of an object from a 2D digital image. The proposed method is based on Archimedes’ principle about the relationship between the volume of a sphere and that of a cylinder in which the sphere is inscribed, plus a coefficient of ‘unellipticity’ introduced here. Validation and careful evaluation of the method are provided using a variety of approaches. The new method proved to be highly precise with all convex shapes characterised by approximate rotational symmetry, and combining it with an existing method specific for highly concave or branched shapes allows covering the great majority of cases with good reliability. Thanks to its accuracy, consistency, and low resources demand, the new method can conveniently be used in substitution of any extant method designed for convex shapes, and can readily be coupled with automated cell imaging technologies, including state-of-the-art flow-through imaging devices. PMID:27195667

  20. Simple, rapid and accurate molecular diagnosis of acute promyelocytic leukemia by loop mediated amplification technology.

    PubMed

    Spinelli, Orietta; Rambaldi, Alessandro; Rigo, Francesca; Zanghì, Pamela; D'Agostini, Elena; Amicarelli, Giulia; Colotta, Francesco; Divona, Mariadomenica; Ciardi, Claudia; Coco, Francesco Lo; Minnucci, Giulia

    2015-01-01

The diagnostic work-up of acute promyelocytic leukemia (APL) includes the cytogenetic demonstration of the t(15;17) translocation and/or the PML-RARA chimeric transcript by RQ-PCR or RT-PCR. The latter assays provide suitable results in 3-6 hours. We describe here two new, rapid and specific assays that detect PML-RARA transcripts, based on the RT-QLAMP (Reverse Transcription-Quenching Loop-mediated Isothermal Amplification) technology, in which RNA retrotranscription and cDNA amplification are carried out in a single tube with one enzyme at one temperature, in a fluorescence, real-time format. A single-tube triplex assay detects bcr1 and bcr3 PML-RARA transcripts along with the GUS housekeeping gene. A single-tube duplex assay detects bcr2 and GUSB. In 73 APL cases, these assays detected bcr1, bcr2 and bcr3 transcripts in 16 minutes. All 81 non-APL samples were negative by RT-QLAMP for chimeric transcripts whereas GUSB was detectable. In 11 APL patients in which RT-PCR yielded equivocal breakpoint type results, RT-QLAMP assays unequivocally and accurately defined the breakpoint type (as confirmed by sequencing). Furthermore, RT-QLAMP could amplify two bcr2 transcripts with particularly extended PML exon 6 deletions not amplified by RQ-PCR. RT-QLAMP reproducible sensitivity is 10(-3) for bcr1 and bcr3 and 10(-2) for bcr2, thus making this assay particularly attractive at diagnosis and leaving RQ-PCR for the molecular monitoring of minimal residual disease during follow-up. In conclusion, PML-RARA RT-QLAMP compared to RT-PCR or RQ-PCR is a valid improvement to perform rapid, simple and accurate molecular diagnosis of APL. PMID:25815362

  1. Simple, rapid and accurate molecular diagnosis of acute promyelocytic leukemia by loop mediated amplification technology

    PubMed Central

    Spinelli, Orietta; Rambaldi, Alessandro; Rigo, Francesca; Zanghì, Pamela; D'Agostini, Elena; Amicarelli, Giulia; Colotta, Francesco; Divona, Mariadomenica; Ciardi, Claudia; Coco, Francesco Lo; Minnucci, Giulia

    2015-01-01

The diagnostic work-up of acute promyelocytic leukemia (APL) includes the cytogenetic demonstration of the t(15;17) translocation and/or the PML-RARA chimeric transcript by RQ-PCR or RT-PCR. The latter assays provide suitable results in 3-6 hours. We describe here two new, rapid and specific assays that detect PML-RARA transcripts, based on the RT-QLAMP (Reverse Transcription-Quenching Loop-mediated Isothermal Amplification) technology, in which RNA retrotranscription and cDNA amplification are carried out in a single tube with one enzyme at one temperature, in a fluorescence, real-time format. A single-tube triplex assay detects bcr1 and bcr3 PML-RARA transcripts along with the GUS housekeeping gene. A single-tube duplex assay detects bcr2 and GUSB. In 73 APL cases, these assays detected bcr1, bcr2 and bcr3 transcripts in 16 minutes. All 81 non-APL samples were negative by RT-QLAMP for chimeric transcripts whereas GUSB was detectable. In 11 APL patients in which RT-PCR yielded equivocal breakpoint type results, RT-QLAMP assays unequivocally and accurately defined the breakpoint type (as confirmed by sequencing). Furthermore, RT-QLAMP could amplify two bcr2 transcripts with particularly extended PML exon 6 deletions not amplified by RQ-PCR. RT-QLAMP reproducible sensitivity is 10−3 for bcr1 and bcr3 and 10−2 for bcr2, thus making this assay particularly attractive at diagnosis and leaving RQ-PCR for the molecular monitoring of minimal residual disease during follow-up. In conclusion, PML-RARA RT-QLAMP compared to RT-PCR or RQ-PCR is a valid improvement to perform rapid, simple and accurate molecular diagnosis of APL. PMID:25815362

  2. Fast and accurate mock catalogue generation for low-mass galaxies

    NASA Astrophysics Data System (ADS)

    Koda, Jun; Blake, Chris; Beutler, Florian; Kazin, Eyal; Marin, Felipe

    2016-06-01

We present an accurate and fast framework for generating mock catalogues including low-mass haloes, based on an implementation of the COmoving Lagrangian Acceleration (COLA) technique. Multiple realisations of mock catalogues are crucial for analyses of large-scale structure, but conventional N-body simulations are too computationally expensive for the production of thousands of realisations. We show that COLA simulations can produce accurate mock catalogues with moderate computational resources for low- to intermediate-mass galaxies in 1012 M⊙ haloes, both in real and redshift space. COLA simulations have accurate peculiar velocities, without systematic errors in the velocity power spectra for k ≤ 0.15 h Mpc-1, and with only 3 per cent error for k ≤ 0.2 h Mpc-1. We use COLA with 10 time steps and a Halo Occupation Distribution to produce 600 mock galaxy catalogues of the WiggleZ Dark Energy Survey. Our parallelized code for efficient generation of accurate halo catalogues is publicly available at github.com/junkoda/cola_halo.
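The final step mentioned above, populating a halo catalogue with galaxies via a Halo Occupation Distribution, can be sketched with the common Zheng-style HOD: each halo hosts a central galaxy with probability given by a smoothed step in mass, plus a Poisson number of satellites following a power law. The functional forms are standard, but the parameter values and the toy mass catalogue below are illustrative, not the WiggleZ parameters of the paper.

```python
import math
import numpy as np

def mean_centrals(log_m, log_m_min=12.0, sigma=0.3):
    """<N_cen>(M): smoothed step in log halo mass (standard HOD form)."""
    return 0.5 * (1.0 + np.vectorize(math.erf)((log_m - log_m_min) / sigma))

def mean_satellites(m, m0=10**12.2, m1=10**13.2, alpha=1.0):
    """<N_sat>(M): power law above a cutoff mass (standard HOD form)."""
    return (np.clip(m - m0, 0.0, None) / m1) ** alpha

rng = np.random.default_rng(2)
halo_mass = 10 ** rng.uniform(11.0, 14.5, size=100000)  # toy catalogue, Msun/h

# Bernoulli centrals, Poisson satellites only in occupied haloes
n_cen = rng.random(halo_mass.size) < mean_centrals(np.log10(halo_mass))
n_sat = rng.poisson(mean_satellites(halo_mass) * n_cen)
print(int(n_cen.sum()), int(n_sat.sum()))
```

Satellites are then placed within each halo following an assumed density profile and velocity distribution, which is where the accurate COLA peculiar velocities matter.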

  3. A simple accurate chest-compression depth gauge using magnetic coils during cardiopulmonary resuscitation

    NASA Astrophysics Data System (ADS)

    Kandori, Akihiko; Sano, Yuko; Zhang, Yuhua; Tsuji, Toshio

    2015-12-01

    This paper describes a new method for calculating chest compression depth and a simple chest-compression gauge for validating the accuracy of the method. The chest-compression gauge has two plates incorporating two magnetic coils, a spring, and an accelerometer. The coils are located at both ends of the spring, and the accelerometer is set on the bottom plate. Waveforms obtained using the magnetic coils (hereafter, "magnetic waveforms"), which are proportional to compression-force waveforms and the acceleration waveforms were measured at the same time. The weight factor expressing the relationship between the second derivatives of the magnetic waveforms and the measured acceleration waveforms was calculated. An estimated-compression-displacement (depth) waveform was obtained by multiplying the weight factor and the magnetic waveforms. Displacements of two large springs (with similar spring constants) within a thorax and displacements of a cardiopulmonary resuscitation training manikin were measured using the gauge to validate the accuracy of the calculated waveform. A laser-displacement detection system was used to compare the real displacement waveform and the estimated waveform. Intraclass correlation coefficients (ICCs) between the real displacement using the laser system and the estimated displacement waveforms were calculated. The estimated displacement error of the compression depth was within 2 mm (<1 standard deviation). All ICCs (two springs and a manikin) were above 0.85 (0.99 in the case of one of the springs). The developed simple chest-compression gauge, based on a new calculation method, provides an accurate compression depth (estimation error < 2 mm).
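The calculation described above can be sketched numerically: the magnetic-coil signal m(t) is proportional to displacement, so its second derivative is proportional to the measured acceleration a(t); a least-squares fit of m''(t) against a(t) gives the weight factor, and multiplying m(t) by it yields the estimated displacement. The waveforms below are synthetic stand-ins for the gauge's signals.

```python
import numpy as np

fs = 100.0                                           # sample rate, Hz
t = np.arange(0, 5, 1 / fs)
depth = 0.025 * (1 - np.cos(2 * np.pi * 2.0 * t))    # true displacement, m (2 Hz compressions)
gain = 37.0                                          # unknown coil gain (arbitrary)
m = gain * depth                                     # magnetic waveform, proportional to displacement
accel = np.gradient(np.gradient(depth, t), t)        # "measured" acceleration

# Weight factor: least-squares fit of the magnetic waveform's second
# derivative against the acceleration, then scale the magnetic waveform.
m_dd = np.gradient(np.gradient(m, t), t)
w = np.dot(m_dd, accel) / np.dot(m_dd, m_dd)
depth_est = w * m
print(np.max(np.abs(depth_est - depth)) < 0.002)     # within the 2 mm error reported
```

With ideal proportional signals the fit recovers the gain exactly; in the real gauge, sensor noise and drift are what limit the estimate to the ~2 mm accuracy reported.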

  4. A simple accurate chest-compression depth gauge using magnetic coils during cardiopulmonary resuscitation.

    PubMed

    Kandori, Akihiko; Sano, Yuko; Zhang, Yuhua; Tsuji, Toshio

    2015-12-01

    This paper describes a new method for calculating chest compression depth and a simple chest-compression gauge for validating the accuracy of the method. The chest-compression gauge has two plates incorporating two magnetic coils, a spring, and an accelerometer. The coils are located at both ends of the spring, and the accelerometer is set on the bottom plate. Waveforms obtained using the magnetic coils (hereafter, "magnetic waveforms"), which are proportional to compression-force waveforms and the acceleration waveforms were measured at the same time. The weight factor expressing the relationship between the second derivatives of the magnetic waveforms and the measured acceleration waveforms was calculated. An estimated-compression-displacement (depth) waveform was obtained by multiplying the weight factor and the magnetic waveforms. Displacements of two large springs (with similar spring constants) within a thorax and displacements of a cardiopulmonary resuscitation training manikin were measured using the gauge to validate the accuracy of the calculated waveform. A laser-displacement detection system was used to compare the real displacement waveform and the estimated waveform. Intraclass correlation coefficients (ICCs) between the real displacement using the laser system and the estimated displacement waveforms were calculated. The estimated displacement error of the compression depth was within 2 mm (<1 standard deviation). All ICCs (two springs and a manikin) were above 0.85 (0.99 in the case of one of the springs). The developed simple chest-compression gauge, based on a new calculation method, provides an accurate compression depth (estimation error < 2 mm).

  5. Simple and accurate quantification of quantum dots via single-particle counting.

    PubMed

    Zhang, Chun-yang; Johnson, Lawrence W

    2008-03-26

    Quantification of quantum dots (QDs) is essential to the quality control of QD synthesis, development of QD-based LEDs and lasers, functionalizing of QDs with biomolecules, and engineering of QDs for biological applications. However, simple and accurate quantification of QD concentration in a variety of buffer solutions and in complex mixtures still remains a critical technological challenge. Here, we introduce a new methodology for quantification of QDs via single-particle counting, which is conceptually different from established UV-vis absorption and fluorescence spectrum techniques where large amounts of purified QDs are needed and specific absorption coefficient or quantum yield values are necessary for measurements. We demonstrate that single-particle counting allows us to nondiscriminately quantify different kinds of QDs by their distinct fluorescence burst counts in a variety of buffer solutions regardless of their composition, structure, and surface modifications, and without the necessity of absorption coefficient and quantum yield values. This single-particle counting can also unambiguously quantify individual QDs in a complex mixture, which is practically impossible for both UV-vis absorption and fluorescence spectrum measurements. Importantly, the application of this single-particle counting is not just limited to QDs but also can be extended to fluorescent microspheres, quantum dot-based microbeads, and fluorescent nanorods, some of which currently lack efficient quantification methods.
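
    The counting principle is simple enough to sketch: detect contiguous fluorescence bursts above a background threshold and compare counts between samples. The trace and threshold below are made-up illustrations, not data from the paper:

```python
import numpy as np

def count_bursts(trace, threshold):
    """Count contiguous runs of samples above threshold; in single-particle
    counting each run corresponds to one particle crossing the detection
    volume."""
    above = trace > threshold
    starts = np.flatnonzero(above[1:] & ~above[:-1])  # False -> True edges
    return len(starts) + int(above[0])

# Synthetic photon trace: background near zero plus three bursts
trace = np.zeros(200)
trace[20:24] = 8.0
trace[90:97] = 12.0
trace[150:152] = 6.0
```

    The relative concentration of two samples is then simply the ratio of their burst counts over equal acquisition times.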

  6. Fast and accurate short read alignment with Burrows–Wheeler transform

    PubMed Central

    Li, Heng; Durbin, Richard

    2009-01-01

    Motivation: The enormous amount of short reads generated by the new DNA sequencing technologies calls for the development of fast and accurate read alignment programs. A first generation of hash table-based methods has been developed, including MAQ, which is accurate, feature rich and fast enough to align short reads from a single individual. However, MAQ does not support gapped alignment for single-end reads, which makes it unsuitable for alignment of longer reads where indels may occur frequently. The speed of MAQ is also a concern when the alignment is scaled up to the resequencing of hundreds of individuals. Results: We implemented Burrows-Wheeler Alignment tool (BWA), a new read alignment package that is based on backward search with Burrows–Wheeler Transform (BWT), to efficiently align short sequencing reads against a large reference sequence such as the human genome, allowing mismatches and gaps. BWA supports both base space reads, e.g. from Illumina sequencing machines, and color space reads from AB SOLiD machines. Evaluations on both simulated and real data suggest that BWA is ∼10–20× faster than MAQ, while achieving similar accuracy. In addition, BWA outputs alignment in the new standard SAM (Sequence Alignment/Map) format. Variant calling and other downstream analyses after the alignment can be achieved with the open source SAMtools software package. Availability: http://maq.sourceforge.net Contact: rd@sanger.ac.uk PMID:19451168
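
    The core of BWT-based alignment, exact backward search, fits in a few lines. This toy version (naive rotation sort, linear-scan rank queries instead of an FM-index, no mismatch or gap handling) only illustrates the idea:

```python
def bwt(text):
    """Burrows-Wheeler transform; `text` must end with a unique sentinel
    ('$') that sorts before every other character."""
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(r[-1] for r in rotations)

def count_occurrences(pattern, text):
    """Count exact occurrences of `pattern` in `text` by backward search
    on the BWT, processing the pattern right to left."""
    b = bwt(text)
    C, total = {}, 0
    for c in sorted(set(b)):            # C[c] = # characters smaller than c
        C[c] = total
        total += b.count(c)
    lo, hi = 0, len(b)
    for c in reversed(pattern):
        if c not in C:
            return 0
        lo = C[c] + b[:lo].count(c)     # rank queries; a real FM-index
        hi = C[c] + b[:hi].count(c)     # answers these in O(1)
        if lo >= hi:
            return 0
    return hi - lo
```

    Each step narrows the suffix-array interval of rotations prefixed by the growing pattern suffix; BWA adds bounded backtracking on top of this loop to allow mismatches and gaps.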

  7. Fast and accurate dating of nuclear events using La-140/Ba-140 isotopic activity ratio.

    PubMed

    Yamba, Kassoum; Sanogo, Oumar; Kalinowski, Martin B; Nikkinen, Mika; Koulidiati, Jean

    2016-06-01

    This study reports on a fast and accurate assessment of the zero time of certain nuclear events using the La-140/Ba-140 isotopic activity ratio. For a non-steady nuclear fission reaction, the dating is not possible. For the hypothesis of a nuclear explosion and for a release from a steady-state nuclear fission reaction, the zero-times will differ. This assessment is fast because we propose some constants that can be used directly for the calculation of zero time and its upper and lower age limits. The assessment is accurate because zero time is calculated with a mathematical method, namely the weighted least-squares method, to evaluate an average value of the age of a nuclear event. This was done using two databases that exhibit differences between the values of some nuclear parameters. As an example, the calculation method is applied to the detection of the radionuclides La-140 and Ba-140 in May 2010 at the radionuclide station JPP37 (Okinawa Island, Japan).
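
    The zero-time arithmetic behind such dating can be sketched from the Bateman equations, assuming the sample starts as pure Ba-140 (the fission-produced parent) and using half-life values taken here from standard nuclear data; the paper's point is precisely that such values differ slightly between databases:

```python
import math

T_BA, T_LA = 12.75, 1.678          # half-lives in days (assumed values)
LAM_BA = math.log(2) / T_BA
LAM_LA = math.log(2) / T_LA

def activity_ratio(t_days):
    """La-140/Ba-140 activity ratio t days after a pure-Ba-140 fission
    event (Bateman solution for a parent-daughter pair)."""
    d = LAM_LA - LAM_BA
    return LAM_LA / d * (1.0 - math.exp(-d * t_days))

def zero_time(ratio):
    """Invert the measured ratio to recover the event age in days."""
    d = LAM_LA - LAM_BA
    return -math.log(1.0 - ratio * d / LAM_LA) / d
```

    At large ages the ratio saturates at λ_La/(λ_La − λ_Ba) ≈ 1.15 (transient equilibrium), which is what bounds the range of ages this method can resolve.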

  8. Continuum heterogeneous biofilm model--a simple and accurate method for effectiveness factor determination.

    PubMed

    Gonzo, Elio Emilio; Wuertz, Stefan; Rajal, Veronica B

    2012-07-01

    We present a novel analytical approach to describe biofilm processes considering continuum variation of both biofilm density and substrate effective diffusivity. A simple perturbation and matching technique was used to quantify biofilm activity using the steady-state diffusion-reaction equation with continuum variable substrate effective diffusivity and biofilm density, along the coordinate normal to the biofilm surface. The procedure allows prediction of an effectiveness factor, η, defined as the ratio between the observed rate of substrate utilization (reaction rate with diffusion resistance) and the rate of substrate utilization without diffusion limitation. Main assumptions are that (i) the biofilm is a continuum, (ii) substrate is transferred by diffusion only and is consumed only by microorganisms at a rate according to Monod kinetics, (iii) biofilm density and substrate effective diffusivity change in the x direction, (iv) the substrate concentration above the biofilm surface is known, and (v) the substratum is impermeable. With this approach one can evaluate, in a fast and efficient way, the effect of different parameters that characterize a heterogeneous biofilm and the kinetics of the rate of substrate consumption on the behavior of the biological system. Based on a comparison of η profiles, the activity of a homogeneous biofilm could be as much as 47.8% higher than that of a heterogeneous biofilm, under the given conditions. A comparison of η values estimated for first-order kinetics and η values obtained by numerical techniques showed a maximum deviation of 1.75% in a narrow range of modified Thiele modulus values. When external mass transfer resistance is also considered, a global effectiveness factor, η(0), can be calculated. The main advantage of the approach lies in the analytical expression for the calculation of the intrinsic effectiveness factor η and its implementation in a computer program. For the test cases studied convergence was
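
    For the limiting case the abstract benchmarks against, first-order kinetics in a flat slab with constant density and diffusivity, the effectiveness factor has the classical closed form η = tanh(φ)/φ in the Thiele modulus φ. A minimal sketch of that benchmark (not the authors' perturbation solution):

```python
import math

def effectiveness_factor(thiele):
    """Effectiveness factor for first-order kinetics in a flat slab:
    eta = tanh(phi) / phi, where phi is the Thiele modulus."""
    return 1.0 if thiele == 0 else math.tanh(thiele) / thiele
```

    For small φ, η approaches 1 (no diffusion limitation); for large φ, η falls off as roughly 1/φ, the strongly diffusion-limited regime.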

  10. Fast and Accurate Semiautomatic Segmentation of Individual Teeth from Dental CT Images

    PubMed Central

    Kang, Ho Chul; Choi, Chankyu; Shin, Juneseuk; Lee, Jeongjin; Shin, Yeong-Gil

    2015-01-01

    In this paper, we propose a fast and accurate semiautomatic method to effectively distinguish individual teeth from the sockets of teeth in dental CT images. Parameter values of thresholding and shapes of the teeth are propagated to the neighboring slice, based on the separated teeth from reference images. After the propagation of threshold values and shapes of the teeth, the histogram of the current slice was analyzed. The individual teeth are automatically separated and segmented by using seeded region growing. Then, the newly generated separation information is iteratively propagated to the neighboring slice. Our method was validated by ten sets of dental CT scans, and the results were compared with the manually segmented result and conventional methods. The average error of absolute value of volume measurement was 2.29 ± 0.56%, which was more accurate than conventional methods. Boosting up the speed with the multicore processors was shown to be 2.4 times faster than a single core processor. The proposed method identified the individual teeth accurately, demonstrating that it can give dentists substantial assistance during dental surgery. PMID:26413143

  11. Fast and Accurate Semiautomatic Segmentation of Individual Teeth from Dental CT Images.

    PubMed

    Kang, Ho Chul; Choi, Chankyu; Shin, Juneseuk; Lee, Jeongjin; Shin, Yeong-Gil

    2015-01-01

    In this paper, we propose a fast and accurate semiautomatic method to effectively distinguish individual teeth from the sockets of teeth in dental CT images. Parameter values of thresholding and shapes of the teeth are propagated to the neighboring slice, based on the separated teeth from reference images. After the propagation of threshold values and shapes of the teeth, the histogram of the current slice was analyzed. The individual teeth are automatically separated and segmented by using seeded region growing. Then, the newly generated separation information is iteratively propagated to the neighboring slice. Our method was validated by ten sets of dental CT scans, and the results were compared with the manually segmented result and conventional methods. The average error of absolute value of volume measurement was 2.29 ± 0.56%, which was more accurate than conventional methods. Boosting up the speed with the multicore processors was shown to be 2.4 times faster than a single core processor. The proposed method identified the individual teeth accurately, demonstrating that it can give dentists substantial assistance during dental surgery. PMID:26413143
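
    Seeded region growing, the core segmentation step above, reduces to a flood fill constrained by an intensity window (here the window the method would propagate between slices). A toy 4-connected version, not the authors' implementation:

```python
def region_grow(image, seed, lo, hi):
    """Collect the 4-connected region of pixels with lo <= value <= hi,
    starting from `seed` given as (row, col)."""
    h, w = len(image), len(image[0])
    stack, region = [seed], set()
    while stack:
        y, x = stack.pop()
        if (y, x) in region or not (0 <= y < h and 0 <= x < w):
            continue
        if lo <= image[y][x] <= hi:
            region.add((y, x))
            stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return region

# Two bright "teeth" on a dark background; growing from a seed inside one
# of them must not leak into the other.
img = [[0, 0, 0, 0, 0],
       [0, 9, 9, 0, 0],
       [0, 0, 0, 0, 0],
       [0, 0, 0, 9, 0],
       [0, 0, 0, 0, 0]]
```

    In the paper's pipeline a seed and threshold window per tooth come from the previous slice, which is what keeps neighboring teeth separated.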

  12. A Simple and Fast Spline Filtering Algorithm for Surface Metrology

    PubMed Central

    Zhang, Hao; Ott, Daniel; Song, John; Tong, Mingsi; Chu, Wei

    2015-01-01

    Spline filters and their corresponding robust filters are commonly used filters recommended in ISO (the International Organization for Standardization) standards for surface evaluation. Generally, these linear and non-linear spline filters, composed of symmetric, positive-definite matrices, are solved in an iterative fashion based on a Cholesky decomposition. They have been demonstrated to be relatively efficient, but complicated and inconvenient to implement. A new spline-filter algorithm is proposed by means of the discrete cosine transform or the discrete Fourier transform. The algorithm is conceptually simple and very convenient to implement. PMID:26958443
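
    One way to see why a transform makes spline filtering simple: the second-difference operator with reflective boundaries is diagonal in the DCT-II basis, so the smoothing-spline normal equations become an element-wise gain. The sketch below is a generic Whittaker-style smoother built this way, as an illustration of the principle rather than the specific ISO filter of the paper:

```python
import numpy as np

def dct_spline_smooth(y, lam):
    """Smoothing spline solved in the DCT domain: minimises
    ||y - x||^2 + lam * ||D2 x||^2, where the second-difference operator
    D2 (reflective boundaries) has DCT-II eigenvalues 2 - 2*cos(pi*k/n)."""
    n = len(y)
    k = np.arange(n)
    # Orthonormal DCT-II matrix, built explicitly to stay numpy-only
    C = np.cos(np.pi * np.outer(k, 2 * np.arange(n) + 1) / (2 * n))
    C[0] *= np.sqrt(1.0 / n)
    C[1:] *= np.sqrt(2.0 / n)
    gain = 1.0 / (1.0 + lam * (2.0 - 2.0 * np.cos(np.pi * k / n)) ** 2)
    return C.T @ (gain * (C @ y))          # inverse DCT of the scaled spectrum
```

    No iteration and no Cholesky factorisation are needed: the whole filter is one forward transform, one element-wise multiply, and one inverse transform.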

  13. Fast and simple spectral FLIM for biochemical and medical imaging.

    PubMed

    Popleteeva, Marina; Haas, Kalina T; Stoppa, David; Pancheri, Lucio; Gasparini, Leonardo; Kaminski, Clemens F; Cassidy, Liam D; Venkitaraman, Ashok R; Esposito, Alessandro

    2015-09-01

    Spectrally resolved fluorescence lifetime imaging microscopy (λFLIM) has powerful potential for biochemical and medical imaging applications. However, long acquisition times, low spectral resolution and the complexity of λFLIM often narrow its use to specialized laboratories. Therefore, we demonstrate here a simple spectral FLIM based on a solid-state detector array providing in-pixel histogramming and delivering faster acquisition, a larger dynamic range, and more spectral elements than state-of-the-art λFLIM. We successfully apply this novel microscopy system to biochemical and medical imaging, demonstrating that solid-state detectors are a key strategic technology to enable complex assays in biomedical laboratories and the clinic.

  14. Fast and accurate quantum molecular dynamics of dense plasmas across temperature regimes

    DOE PAGES

    Sjostrom, Travis; Daligault, Jerome

    2014-10-10

    Here, we develop and implement a new quantum molecular dynamics approximation that allows fast and accurate simulations of dense plasmas from cold to hot conditions. The method is based on a carefully designed orbital-free implementation of density functional theory. The results for hydrogen and aluminum are in very good agreement with Kohn-Sham (orbital-based) density functional theory and path integral Monte Carlo calculations for microscopic features such as the electron density as well as the equation of state. The present approach does not scale with temperature and hence extends to higher temperatures than is accessible in the Kohn-Sham method and lower temperatures than is accessible by path integral Monte Carlo calculations, while being significantly less computationally expensive than either of those two methods.

  15. Fast and accurate quantum molecular dynamics of dense plasmas across temperature regimes

    SciTech Connect

    Sjostrom, Travis; Daligault, Jerome

    2014-10-10

    Here, we develop and implement a new quantum molecular dynamics approximation that allows fast and accurate simulations of dense plasmas from cold to hot conditions. The method is based on a carefully designed orbital-free implementation of density functional theory. The results for hydrogen and aluminum are in very good agreement with Kohn-Sham (orbital-based) density functional theory and path integral Monte Carlo calculations for microscopic features such as the electron density as well as the equation of state. The present approach does not scale with temperature and hence extends to higher temperatures than is accessible in the Kohn-Sham method and lower temperatures than is accessible by path integral Monte Carlo calculations, while being significantly less computationally expensive than either of those two methods.

  16. A fast and accurate frequency estimation algorithm for sinusoidal signal with harmonic components

    NASA Astrophysics Data System (ADS)

    Hu, Jinghua; Pan, Mengchun; Zeng, Zhidun; Hu, Jiafei; Chen, Dixiang; Tian, Wugang; Zhao, Jianqiang; Du, Qingfa

    2016-10-01

    Frequency estimation is a fundamental problem in many applications, such as traditional vibration measurement, power system supervision, and microelectromechanical system sensors control. In this paper, a fast and accurate frequency estimation algorithm is proposed to address the low efficiency of traditional methods. The proposed algorithm consists of coarse and fine frequency estimation steps, and we demonstrate that it is more efficient than conventional searching methods to achieve coarse frequency estimation (locating the peak of the FFT amplitude) by applying a modified zero-crossing technique. Thus, the proposed estimation algorithm requires fewer hardware and software resources and can achieve even higher efficiency when the experimental data increase. Experimental results with a modulated magnetic signal show that the root mean square error of frequency estimation is below 0.032 Hz with the proposed algorithm, which has lower computational complexity and better global performance than conventional frequency estimation methods.
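
    A hedged sketch of the coarse-then-fine idea: a zero-crossing count gives the coarse frequency, which localizes the FFT peak, and parabolic interpolation of the windowed peak gives the fine step. The refinement details of the paper's modified zero-crossing technique are not reproduced here:

```python
import numpy as np

def estimate_frequency(x, fs):
    """Coarse estimate from the zero-crossing count, refined by parabolic
    interpolation of the Hann-windowed FFT magnitude near the coarse bin."""
    n = len(x)
    sb = np.signbit(x)
    crossings = np.count_nonzero(sb[1:] != sb[:-1])
    coarse = crossings * fs / (2.0 * n)          # two crossings per period
    spec = np.abs(np.fft.rfft(x * np.hanning(n)))
    k0 = int(round(coarse * n / fs))             # only search near coarse bin
    lo, hi = max(1, k0 - 2), min(len(spec) - 1, k0 + 3)
    k = lo + int(np.argmax(spec[lo:hi]))
    a, b, c = spec[k - 1], spec[k], spec[k + 1]
    delta = 0.5 * (a - c) / (a - 2.0 * b + c)    # parabolic vertex offset
    return (k + delta) * fs / n
```

    The zero-crossing step replaces a full scan for the FFT maximum, which is where the efficiency gain over conventional peak searching comes from.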

  17. Fast and accurate read mapping with approximate seeds and multiple backtracking

    PubMed Central

    Siragusa, Enrico; Weese, David; Reinert, Knut

    2013-01-01

    We present Masai, a read mapper representing the state-of-the-art in terms of speed and accuracy. Our tool is an order of magnitude faster than RazerS 3 and mrFAST, 2–4 times faster and more accurate than Bowtie 2 and BWA. The novelties of our read mapper are filtration with approximate seeds and a method for multiple backtracking. Approximate seeds, compared with exact seeds, increase filtration specificity while preserving sensitivity. Multiple backtracking amortizes the cost of searching a large set of seeds by taking advantage of the repetitiveness of next-generation sequencing data. Combined together, these two methods significantly speed up approximate search on genomic data sets. Masai is implemented in C++ using the SeqAn library. The source code is distributed under the BSD license and binaries for Linux, Mac OS X and Windows can be freely downloaded from http://www.seqan.de/projects/masai. PMID:23358824

  18. TagGD: fast and accurate software for DNA Tag generation and demultiplexing.

    PubMed

    Costea, Paul Igor; Lundeberg, Joakim; Akan, Pelin

    2013-01-01

    Multiplexing is of vital importance for utilizing the full potential of next generation sequencing technologies. We here report TagGD (DNA-based Tag Generator and Demultiplexor), a fully-customisable, fast and accurate software package that can generate thousands of barcodes satisfying user-defined constraints and can guarantee full demultiplexing accuracy. The barcodes are designed to minimise their interference with the experiment. Insertion, deletion and substitution events are considered when designing and demultiplexing barcodes. 20,000 barcodes of length 18 were designed in 5 minutes and 2 million barcoded Illumina HiSeq-like reads generated with an error rate of 2% were demultiplexed with full accuracy in 5 minutes. We believe that our software meets a central demand in the current high-throughput biology and can be utilised in any field with ample sample abundance. The software is available on GitHub (https://github.com/pelinakan/UBD.git).
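
    A toy version of distance-constrained barcode generation: greedily accept random sequences whose Levenshtein distance to every accepted barcode is at least a minimum d, so that the stated insertion, deletion and substitution tolerance holds by construction. TagGD's further user-defined constraints (GC content, homopolymer runs, and so on) are not modeled:

```python
import itertools
import random

def levenshtein(a, b):
    """Edit distance (insertions, deletions, substitutions), two-row DP."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[-1] + 1,            # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def generate_barcodes(n, length, min_dist, seed=0, max_attempts=100000):
    """Greedy rejection sampling of DNA barcodes with pairwise edit
    distance >= min_dist."""
    rng = random.Random(seed)
    barcodes, attempts = [], 0
    while len(barcodes) < n and attempts < max_attempts:
        attempts += 1
        cand = "".join(rng.choice("ACGT") for _ in range(length))
        if all(levenshtein(cand, b) >= min_dist for b in barcodes):
            barcodes.append(cand)
    return barcodes
```

    The greedy check is quadratic in the number of barcodes; generating thousands of codes quickly, as TagGD does, requires smarter candidate pruning than this sketch.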

  19. CoMOGrad and PHOG: From Computer Vision to Fast and Accurate Protein Tertiary Structure Retrieval

    PubMed Central

    Karim, Rezaul; Aziz, Mohd. Momin Al; Shatabda, Swakkhar; Rahman, M. Sohel; Mia, Md. Abul Kashem; Zaman, Farhana; Rakin, Salman

    2015-01-01

    The number of entries in a structural database of proteins is increasing day by day. Methods for retrieving protein tertiary structures from such a large database have turned out to be the key to comparative analysis of structures, which plays an important role in understanding proteins and their functions. In this paper, we present fast and accurate methods for the retrieval of proteins having tertiary structures similar to a query protein from a large database. Our proposed methods borrow ideas from the field of computer vision. The speed and accuracy of our methods come from two newly introduced features, the co-occurrence matrix of oriented gradients and the pyramid histogram of oriented gradients, and the use of Euclidean distance as the distance measure. Experimental results clearly indicate the superiority of our approach in both running time and accuracy. Our method is readily available for use from this website: http://research.buet.ac.bd:8080/Comograd/. PMID:26293226
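
    Both descriptors build on one computer-vision primitive worth sketching: a magnitude-weighted histogram of gradient orientations over a 2D array. The pyramid levels and co-occurrence statistics of the paper's actual features are omitted:

```python
import numpy as np

def orientation_histogram(img, bins=8):
    """Magnitude-weighted histogram of unsigned gradient orientations,
    normalised to sum to 1 (the basic HOG building block)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)   # fold to [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    s = hist.sum()
    return hist / s if s else hist
```

    Comparing two such histograms with Euclidean distance is exactly the kind of cheap, fixed-length comparison that makes retrieval over a large database fast.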

  20. Fast, simple and efficient assembly of nanolayered materials and devices

    NASA Astrophysics Data System (ADS)

    Merrill, M. H.; Sun, C. T.

    2009-02-01

    A new method of 'directed' self-assembly is demonstrated that has the potential to simply and quickly build nanostructured materials and devices. Called spin-spray layer-by-layer self-assembly (SSLbL), it is a modification of the well-known layer-by-layer method (LbL). Using SSLbL, it is possible to create and stack nanometre-thick, uniform layers containing a wide variety of different polymers, nanoparticles, or colloids in less than 25 s per bilayer, orders of magnitude faster than traditional LbL. This is done by modifying traditional dipping LbL to a system where carefully chosen volumes of polymer or colloidal solutions are sprayed directly on a rotating substrate. SSLbL is also much less wasteful of valuable nanoparticles and polymers than LbL. It is shown that in contrast to less than 1% material usage found in LbL, SSLbL has material usage efficiency up to 50%, and this can be further improved. Another direct result of the spin-spray modification is simple control of the in-plane structure of nanolayered films using masks, which is demonstrated. Such capability opens up the possibility of simply and inexpensively building complete nanocomposite devices with both vertical and lateral organization.

  1. Simple and fast annealing synthesis of titanium dioxide nanostructures

    NASA Astrophysics Data System (ADS)

    Kim, Hansoo; Park, Jongbok; Ryu, Yeontack; Yu, Choongho

    2010-02-01

    Titanium dioxide (TiO2) has been intensively studied due to its useful applications such as dye-sensitized solar cells and electrodes in lithium ion batteries. In this study diverse TiO2 nanostructures were synthesized by a simplified synthetic method. Since it does not require a high reaction temperature or complicated processes, it can be useful for producing a large quantity of TiO2 nanomaterials at very low temperatures. Crucial synthesis conditions such as eutectic catalyst (copper), growth temperatures, and annealing time were systematically investigated. Only 30 minutes of annealing at 850 °C was enough to produce densely-packed ~10 μm long nanowires (~100 nm diameter), and a longer reaction time changed the morphology from wires to belts. The nanostructures were identified to be rutile structure with the [110] growth direction by x-ray and electron diffraction. Our simple but effective method can be utilized for other metal oxide nanowires, especially with materials of a high melting temperature.

  2. Fast and accurate border detection in dermoscopy images using statistical region merging

    NASA Astrophysics Data System (ADS)

    Celebi, M. Emre; Kingravi, Hassan A.; Iyatomi, Hitoshi; Lee, JeongKyu; Aslandogan, Y. Alp; Van Stoecker, William; Moss, Randy; Malters, Joseph M.; Marghoob, Ashfaq A.

    2007-03-01

    As a result of advances in skin imaging technology and the development of suitable image processing techniques during the last decade, there has been a significant increase of interest in the computer-aided diagnosis of melanoma. Automated border detection is one of the most important steps in this procedure, since the accuracy of the subsequent steps crucially depends on it. In this paper, a fast and unsupervised approach to border detection in dermoscopy images of pigmented skin lesions based on the Statistical Region Merging algorithm is presented. The method is tested on a set of 90 dermoscopy images. The border detection error is quantified by a metric in which a set of dermatologist-determined borders is used as the ground-truth. The proposed method is compared to six state-of-the-art automated methods (optimized histogram thresholding, orientation-sensitive fuzzy c-means, gradient vector flow snakes, dermatologist-like tumor extraction algorithm, meanshift clustering, and the modified JSEG method) and borders determined by a second dermatologist. The results demonstrate that the presented method achieves both fast and accurate border detection in dermoscopy images.

  3. Approximate likelihood-ratio test for branches: A fast, accurate, and powerful alternative.

    PubMed

    Anisimova, Maria; Gascuel, Olivier

    2006-08-01

    We revisit statistical tests for branches of evolutionary trees reconstructed upon molecular data. A new, fast, approximate likelihood-ratio test (aLRT) for branches is presented here as a competitive alternative to nonparametric bootstrap and Bayesian estimation of branch support. The aLRT is based on the idea of the conventional LRT, with the null hypothesis corresponding to the assumption that the inferred branch has length 0. We show that the LRT statistic is asymptotically distributed as a maximum of three random variables drawn from the ½χ0² + ½χ1² mixture distribution. The new aLRT of interior branch uses this distribution for significance testing, but the test statistic is approximated in a slightly conservative but practical way as 2(l1 − l2), i.e., double the difference between the maximum log-likelihood values corresponding to the best tree and the second best topological arrangement around the branch of interest. Such a test is fast because the log-likelihood value l2 is computed by optimizing only over the branch of interest and the four adjacent branches, whereas other parameters are fixed at their optimal values corresponding to the best ML tree. The performance of the new test was studied on simulated 4-, 12-, and 100-taxon data sets with sequences of different lengths. The aLRT is shown to be accurate, powerful, and robust to certain violations of model assumptions. The aLRT is implemented within the algorithm used by the recent fast maximum likelihood tree estimation program PHYML (Guindon and Gascuel, 2003).
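
    Given the two log-likelihoods, the statistic and its p-value under a ½χ0² + ½χ1² mixture take a line each. A sketch using erfc for the one-degree-of-freedom χ² survival function (not PHYML's implementation, which also handles the maximum-of-three correction):

```python
import math

def alrt_pvalue(l1, l2):
    """p-value of the aLRT statistic 2*(l1 - l2) under the
    (1/2)*chi2_0 + (1/2)*chi2_1 mixture distribution."""
    stat = 2.0 * (l1 - l2)
    if stat <= 0:
        return 1.0          # all the chi2_0 mass sits at zero
    # Survival function of chi2 with 1 dof: P(X > x) = erfc(sqrt(x/2))
    return 0.5 * math.erfc(math.sqrt(stat / 2.0))
```

    Halving the χ1² tail probability is what makes the mixture test less conservative than naively using a plain one-degree-of-freedom χ² test.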

  4. Simple but accurate GCM-free approach for quantifying anthropogenic climate change

    NASA Astrophysics Data System (ADS)

    Lovejoy, S.

    2014-12-01

    We are so used to analysing the climate with the help of giant computer models (GCMs) that it is easy to get the impression that they are indispensable. Yet anthropogenic warming is so large (roughly 0.9 °C) that it turns out to be straightforward to quantify it with more empirically based methodologies that can be readily understood by the layperson. The key is to use the CO2 forcing as a linear surrogate for all the anthropogenic effects from 1880 to the present (implicitly including all effects due to Greenhouse Gases, aerosols and land use changes). To a good approximation, double the economic activity, double the effects. The relationship between the forcing and global mean temperature is extremely linear, as can be seen graphically and understood without fancy statistics [Lovejoy, 2014a] (see the attached figure and http://www.physics.mcgill.ca/~gang/Lovejoy.htm). To an excellent approximation, the deviations from the linear forcing - temperature relation can be interpreted as the natural variability. For example, this direct yet accurate approach makes it graphically obvious that the "pause" or "hiatus" in the warming since 1998 is simply a natural cooling event that has roughly offset the anthropogenic warming [Lovejoy, 2014b]. Rather than trying to prove that the warming is anthropogenic, with a little extra work (and some nonlinear geophysics theory and pre-industrial multiproxies) we can disprove the competing theory that it is natural. This approach leads to the estimate that the probability of the industrial-scale warming being a giant natural fluctuation is ≈0.1%: it can be dismissed. This destroys the last climate-skeptic argument - that the models are wrong and the warming is natural - and finally allows for a closure of the debate.
    In this talk we argue that this new, direct, simple, intuitive approach provides an indispensable tool for communicating - and convincing - the public of both the reality and the amplitude of anthropogenic warming.
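
    The surrogate argument is ordinary linear regression: fit temperature against the forcing series, read the slope as the anthropogenic response, and treat the residuals as natural variability. Everything below (forcing series, response slope, noise level) is synthetic and only mimics the structure of the analysis, not Lovejoy's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical yearly CO2 forcing surrogate (W/m^2, 1880..2014) and a
# temperature anomaly built as a linear response plus "natural" noise.
forcing = np.linspace(0.0, 2.5, 135)
natural = rng.normal(0.0, 0.1, forcing.size)
temp = 0.36 * forcing + natural                  # anomaly in deg C

# Slope = response to the forcing surrogate; residuals = the empirical
# estimate of natural variability discussed in the abstract.
slope, intercept = np.polyfit(forcing, temp, 1)
residuals = temp - (slope * forcing + intercept)
```

    With 135 "years" of data the fit recovers the built-in slope closely, which is the sense in which the deviations from the line, and not the line itself, carry the natural variability.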

  5. Highly accurate and fast optical penetration-based silkworm gender separation system

    NASA Astrophysics Data System (ADS)

    Kamtongdee, Chakkrit; Sumriddetchkajorn, Sarun; Chanhorm, Sataporn

    2015-07-01

    Based on our research work in the last five years, this paper highlights our innovative optical sensing system that can identify and separate silkworm gender, highly suitable for the sericulture industry. The key idea relies on our proposed optical penetration concepts and, once combined with simple image processing operations, leads to high accuracy in identifying silkworm gender. Inside the system, there are electronic and mechanical parts that assist in controlling the overall system operation, processing the optical signal, and separating the female from male silkworm pupae. With the current system performance, we achieve an accuracy of more than 95% in identifying the gender of silkworm pupae, with an average system operational speed of 30 silkworm pupae/minute. Three of our systems are already in operation at Thailand's Queen Sirikit Sericulture Centers.

  6. FastME 2.0: A Comprehensive, Accurate, and Fast Distance-Based Phylogeny Inference Program.

    PubMed

    Lefort, Vincent; Desper, Richard; Gascuel, Olivier

    2015-10-01

    FastME provides distance algorithms to infer phylogenies. FastME is based on balanced minimum evolution, which is the very principle of Neighbor Joining (NJ). FastME improves over NJ by performing topological moves using fast, sophisticated algorithms. The first version of FastME only included Nearest Neighbor Interchange. The new 2.0 version also includes Subtree Pruning and Regrafting, while remaining as fast as NJ and providing a number of facilities: Distance estimation for DNA and proteins with various models and options, bootstrapping, and parallel computations. FastME is available using several interfaces: Command-line (to be integrated in pipelines), PHYLIP-like, and a Web server (http://www.atgc-montpellier.fr/fastme/).

  7. FastME 2.0: A Comprehensive, Accurate, and Fast Distance-Based Phylogeny Inference Program

    PubMed Central

    Lefort, Vincent; Desper, Richard; Gascuel, Olivier

    2015-01-01

    FastME provides distance algorithms to infer phylogenies. FastME is based on balanced minimum evolution, which is the very principle of Neighbor Joining (NJ). FastME improves over NJ by performing topological moves using fast, sophisticated algorithms. The first version of FastME only included Nearest Neighbor Interchange. The new 2.0 version also includes Subtree Pruning and Regrafting, while remaining as fast as NJ and providing a number of facilities: Distance estimation for DNA and proteins with various models and options, bootstrapping, and parallel computations. FastME is available using several interfaces: Command-line (to be integrated in pipelines), PHYLIP-like, and a Web server (http://www.atgc-montpellier.fr/fastme/). PMID:26130081

  9. A fast GNU method to draw accurate scientific illustrations for taxonomy.

    PubMed

    Montesanto, Giuseppe

    2015-01-01

    Nowadays only digital figures are accepted by the most important journals of taxonomy. These may be produced by scanning conventional drawings, made with high-precision technical ink-pens, which normally use capillary cartridges and various line widths. Digital drawing techniques that use vector graphics have already been described in the literature to support scientists in drawing figures and plates for scientific illustrations; these techniques use many different software packages and hardware devices. The present work gives step-by-step instructions on how to make accurate line drawings with a new procedure that uses bitmap graphics with the GNU Image Manipulation Program (GIMP). This method is noteworthy: it is very accurate, producing detailed lines at the highest resolution; the raster lines appear as realistic ink-made drawings; it is faster than the traditional way of making illustrations; everyone can use this simple technique; and it is completely free, as it does not require expensive, licensed software and can be used with different operating systems. The method has been developed by drawing figures of terrestrial isopods, and some examples are given here.

  10. A fast GNU method to draw accurate scientific illustrations for taxonomy

    PubMed Central

    Montesanto, Giuseppe

    2015-01-01

    Nowadays only digital figures are accepted by the most important journals of taxonomy. These may be produced by scanning conventional drawings, made with high-precision technical ink-pens, which normally use capillary cartridges and various line widths. Digital drawing techniques that use vector graphics have already been described in the literature to support scientists in drawing figures and plates for scientific illustrations; these techniques use many different software packages and hardware devices. The present work gives step-by-step instructions on how to make accurate line drawings with a new procedure that uses bitmap graphics with the GNU Image Manipulation Program (GIMP). This method is noteworthy: it is very accurate, producing detailed lines at the highest resolution; the raster lines appear as realistic ink-made drawings; it is faster than the traditional way of making illustrations; everyone can use this simple technique; and it is completely free, as it does not require expensive, licensed software and can be used with different operating systems. The method has been developed by drawing figures of terrestrial isopods, and some examples are given here. PMID:26261449

  11. A Simple yet Accurate Method for Students to Determine Asteroid Rotation Periods from Fragmented Light Curve Data

    ERIC Educational Resources Information Center

    Beare, R. A.

    2008-01-01

    Professional astronomers use specialized software not normally available to students to determine the rotation periods of asteroids from fragmented light curve data. This paper describes a simple yet accurate method based on Microsoft Excel[R] that enables students to find periods in asteroid light curve and other discontinuous time series data of…
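
    A spreadsheet-free sketch of the same idea: scan trial periods and keep the one whose best-fit sinusoid leaves the smallest residual. The sampling times, period, and grid below are invented for illustration; this is a generic least-squares period scan, not the paper's Excel procedure:

```python
import math

def period_scan(times, mags, periods):
    """Return the trial period whose best-fit sinusoid leaves the
    smallest residual over irregularly sampled data."""
    best_p, best_res = None, float("inf")
    for p in periods:
        w = 2 * math.pi / p
        s = [math.sin(w * t) for t in times]
        c = [math.cos(w * t) for t in times]
        # Solve the 2x2 normal equations for mags ~ a*sin + b*cos.
        ss = sum(x * x for x in s); cc = sum(x * x for x in c)
        sc = sum(x * y for x, y in zip(s, c))
        sy = sum(x * y for x, y in zip(s, mags))
        cy = sum(x * y for x, y in zip(c, mags))
        det = ss * cc - sc * sc
        if abs(det) < 1e-12:
            continue
        a = (sy * cc - cy * sc) / det
        b = (cy * ss - sy * sc) / det
        res = sum((m - a * si - b * ci) ** 2
                  for m, si, ci in zip(mags, s, c))
        if res < best_res:
            best_p, best_res = p, res
    return best_p

# Irregularly sampled, noise-free sinusoid with a 0.3-day period.
times = [0.17 * i + 0.05 * math.sin(i) for i in range(40)]
mags = [math.sin(2 * math.pi * t / 0.3) for t in times]
grid = [0.1 + 0.001 * k for k in range(401)]   # trial periods 0.100..0.500
best = period_scan(times, mags, grid)
```

    With fragmented (irregularly sampled) data, the least-squares fit at each trial period plays the role of folding the light curve; the residual minimum picks out the true period.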

  12. Fast and Accurate Large-Scale Detection of β-Lactamase Genes Conferring Antibiotic Resistance.

    PubMed

    Lee, Jae Jin; Lee, Jung Hun; Kwon, Dae Beom; Jeon, Jeong Ho; Park, Kwang Seung; Lee, Chang-Ro; Lee, Sang Hee

    2015-10-01

    Fast detection of β-lactamase (bla) genes allows improved surveillance studies and infection control measures, which can minimize the spread of antibiotic resistance. Although several molecular diagnostic methods have been developed to detect limited bla gene types, these methods have significant limitations, such as their failure to detect almost all clinically available bla genes. We developed a fast and accurate molecular method to overcome these limitations using 62 primer pairs, which were designed through elaborate optimization processes. To verify the ability of this large-scale bla detection method (large-scale blaFinder), assays were performed on previously reported bacterial control isolates/strains. To confirm the applicability of the large-scale blaFinder, the assays were performed on unreported clinical isolates. With perfect specificity and sensitivity in 189 control isolates/strains and 403 clinical isolates, the large-scale blaFinder detected almost all clinically available bla genes. Notably, the large-scale blaFinder detected 24 additional unreported bla genes in the isolates/strains that were previously studied, suggesting that previous methods detecting only limited types of bla genes can miss unexpected bla genes existing in pathogenic bacteria, and that our method has the ability to detect almost all bla genes existing in a clinical isolate. The ability of the large-scale blaFinder to detect bla genes on a large scale enables prompt application to the detection of almost all bla genes present in bacterial pathogens. The widespread use of the large-scale blaFinder in the future will provide an important aid for monitoring the emergence and dissemination of bla genes and minimizing the spread of resistant bacteria. PMID:26169415

  13. Fast and Accurate Large-Scale Detection of β-Lactamase Genes Conferring Antibiotic Resistance

    PubMed Central

    Lee, Jae Jin; Lee, Jung Hun; Kwon, Dae Beom; Jeon, Jeong Ho; Park, Kwang Seung; Lee, Chang-Ro

    2015-01-01

    Fast detection of β-lactamase (bla) genes allows improved surveillance studies and infection control measures, which can minimize the spread of antibiotic resistance. Although several molecular diagnostic methods have been developed to detect limited bla gene types, these methods have significant limitations, such as their failure to detect almost all clinically available bla genes. We developed a fast and accurate molecular method to overcome these limitations using 62 primer pairs, which were designed through elaborate optimization processes. To verify the ability of this large-scale bla detection method (large-scale blaFinder), assays were performed on previously reported bacterial control isolates/strains. To confirm the applicability of the large-scale blaFinder, the assays were performed on unreported clinical isolates. With perfect specificity and sensitivity in 189 control isolates/strains and 403 clinical isolates, the large-scale blaFinder detected almost all clinically available bla genes. Notably, the large-scale blaFinder detected 24 additional unreported bla genes in the isolates/strains that were previously studied, suggesting that previous methods detecting only limited types of bla genes can miss unexpected bla genes existing in pathogenic bacteria, and that our method has the ability to detect almost all bla genes existing in a clinical isolate. The ability of the large-scale blaFinder to detect bla genes on a large scale enables prompt application to the detection of almost all bla genes present in bacterial pathogens. The widespread use of the large-scale blaFinder in the future will provide an important aid for monitoring the emergence and dissemination of bla genes and minimizing the spread of resistant bacteria. PMID:26169415
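
    The core of any in-silico primer screen like the one above is locating a forward primer and the reverse complement of a reverse primer in a template and predicting the amplicon. A toy version follows; the primer and template sequences are invented for illustration (the paper's 62 validated pairs are not reproduced here):

```python
def revcomp(seq):
    """Reverse complement of a DNA sequence."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def amplicon_length(template, fwd, rev):
    """Return the predicted amplicon length if both primers bind the
    template in PCR orientation (forward as-is, reverse as its
    reverse complement), else None."""
    start = template.find(fwd)
    end = template.find(revcomp(rev))
    if start == -1 or end == -1 or end + len(rev) <= start:
        return None
    return end + len(rev) - start

# Invented 50-bp template flanked by the two primer binding sites.
template = "AAGGATCCGT" + "T" * 30 + "GCTAAGCTTA"
fwd = "AAGGATCCGT"
rev = revcomp("GCTAAGCTTA")   # reverse primer written 5'->3'
length = amplicon_length(template, fwd, rev)
```

    Real primer design must also handle mismatches, melting temperatures, and cross-reactivity, which is what the paper's "elaborate optimization processes" refer to.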

  14. [Fast and accurate extraction of ring-down time in cavity ring-down spectroscopy].

    PubMed

    Wang, Dan; Hu, Ren-Zhi; Xie, Pin-Hua; Qin, Min; Ling, Liu-Yi; Duan, Jun

    2014-10-01

    Research is conducted on accurate and efficient algorithms for extracting the ring-down time (τ) in cavity ring-down spectroscopy (CRDS), which is used to measure the NO3 radical in the atmosphere. Fast and accurate extraction of the ring-down time guarantees more precise and faster measurement. In this research, five commonly used algorithms are selected to extract the ring-down time: the fast Fourier transform (FFT) algorithm, the discrete Fourier transform (DFT) algorithm, the linear regression of the sum (LRS) algorithm, the Levenberg-Marquardt (LM) algorithm and the least squares (LS) algorithm. Simulated ring-down signals with various amplitude levels of white noise are fitted using the five above-mentioned algorithms, and the fitting results of the five algorithms are compared and analysed in four respects: vulnerability to noise, accuracy and precision of the fitting, speed of the fitting, and the preferable length of the fitted ring-down signal waveform. The results show that the Levenberg-Marquardt algorithm and the linear regression of the sum algorithm provide more precise results and prove to have higher noise immunity, although, by comparison, the fitting speed of the Levenberg-Marquardt algorithm turns out to be slower. In addition, by analysis of simulated ring-down signals, five to ten times the ring-down time is selected as the best fitting waveform length, because in this case the standard deviation of the fitting results of the five algorithms proves to be the minimum. An externally modulated diode laser and a cavity consisting of two high-reflectivity mirrors are used to construct a cavity ring-down spectroscopy detection system. According to our experimental conditions, in which the noise level is 0.2%, the linear regression of the sum algorithm and the Levenberg-Marquardt algorithm are selected to process the experimental data.
The experimental results show that the accuracy and precision of linear regression of
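
    The simplest of the ideas being compared, a least-squares straight-line fit to the logarithm of the decay, can be sketched as follows. This is a generic illustration on noise-free synthetic data, not the paper's LRS or LM implementations (a log-linear fit degrades quickly once noise is added, which is the paper's point):

```python
import math

def tau_from_log_fit(ts, ys):
    """Estimate the ring-down time tau of y = A * exp(-t / tau)
    by ordinary least squares on (t, ln y)."""
    logs = [math.log(y) for y in ys]
    n = len(ts)
    mt = sum(ts) / n
    ml = sum(logs) / n
    slope = (sum((t - mt) * (l - ml) for t, l in zip(ts, logs))
             / sum((t - mt) ** 2 for t in ts))
    return -1.0 / slope           # slope of ln y is -1/tau

# Synthetic decay: amplitude 2.0, tau = 40 microseconds, sampled to 5 tau.
tau_true = 40e-6
ts = [i * 1e-6 for i in range(200)]
ys = [2.0 * math.exp(-t / tau_true) for t in ts]
tau_est = tau_from_log_fit(ts, ys)
```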

  15. Fast and accurate search for non-coding RNA pseudoknot structures in genomes

    PubMed Central

    Huang, Zhibin; Wu, Yong; Robertson, Joseph; Feng, Liang; Malmberg, Russell L.; Cai, Liming

    2008-01-01

    Motivation: Searching genomes for non-coding RNAs (ncRNAs) by their secondary structure has become an important goal for bioinformatics. For pseudoknot-free structures, ncRNA search can be effective based on the covariance model and CYK-type dynamic programming. However, the computational difficulty of aligning an RNA sequence to a pseudoknot has prohibited fast and accurate search of arbitrary RNA structures. Our previous work introduced a graph model for RNA pseudoknots and proposed to solve the structure–sequence alignment by graph optimization. Given k candidate regions in the target sequence for each of the n stems in the structure, we could compute a best alignment in time O(k^t n), based upon a tree decomposition of width t of the structure graph. However, to implement this method in programs that can routinely perform fast yet accurate RNA pseudoknot searches, we need novel heuristics to ensure that, without degrading the accuracy, only a small number of stem candidates need to be examined and a tree decomposition of small tree width can always be found for the structure graph. Results: The current work builds on the previous one with newly developed preprocessing algorithms to reduce the values of the parameters k and t and to implement the search method in a practical program, called RNATOPS, for RNA pseudoknot search. In particular, we introduce techniques, based on probabilistic profiling and distance penalty functions, which can identify for every stem just a small number k (e.g. k ≤ 10) of plausible regions in the target sequence to which the stem needs to align. We also devised a specialized tree decomposition algorithm that can yield tree decompositions of small width t (e.g. t ≤ 4) for almost all RNA structure graphs. Our experiments show that with RNATOPS it is possible to routinely search prokaryotic and eukaryotic genomes for specific RNA structures of medium to large sizes, including pseudoknots, with high sensitivity and high

  16. Pole Photogrammetry with AN Action Camera for Fast and Accurate Surface Mapping

    NASA Astrophysics Data System (ADS)

    Gonçalves, J. A.; Moutinho, O. F.; Rodrigues, A. C.

    2016-06-01

    High resolution and high accuracy terrain mapping can provide height change detection for studies of erosion, subsidence or landslip. A UAV flying at a low altitude above the ground, with a compact camera, acquires images with a resolution appropriate for these change detections. However, there may be situations where different approaches are needed, either because higher resolution is required or because operating a drone is not possible. Pole photogrammetry, where a camera is mounted on a pole pointing at the ground, is an alternative. This paper describes a very simple system of this kind, created for topographic change detection, based on an action camera. These cameras offer high quality and very flexible image capture. Although radial distortion is normally high, it can be treated in an auto-calibration process. The system is composed of a light aluminium pole, 4 meters long, with a 12-megapixel GoPro camera. The average ground sampling distance at the image centre is 2.3 mm. The user moves along a path, taking successive photos with a time lapse of 0.5 or 1 second and adjusting the walking speed so as to have an appropriate overlap, with enough redundancy for 3D coordinate extraction. Marked ground control points are surveyed with GNSS for precise georeferencing of the DSM and orthoimage that are created by structure-from-motion processing software. An average vertical accuracy of 1 cm could be achieved, which is enough for many applications, for example soil erosion. The GNSS survey in RTK mode with permanent stations is now very fast (5 seconds per point), which, together with the image collection, results in very fast field work. If improved accuracy is needed, it can be achieved, since the image resolution is 1/4 cm, by using a total station for the control point survey, although the field work time increases.
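
    The quoted ground sampling distance follows from simple similar-triangles geometry. A sketch with assumed camera parameters (the sensor width and focal length below are typical action-camera values, not the paper's calibration results):

```python
def ground_sampling_distance(sensor_width_mm, image_width_px,
                             focal_mm, height_mm):
    """GSD at the image centre for a nadir-pointing camera:
    pixel pitch scaled by the height-to-focal-length ratio."""
    pixel_pitch_mm = sensor_width_mm / image_width_px
    return pixel_pitch_mm * height_mm / focal_mm

# Assumed values: 1/2.3" sensor (~6.17 mm wide), 4000 px across,
# 3 mm focal length, camera 4 m above the ground.
gsd_mm = ground_sampling_distance(6.17, 4000, 3.0, 4000.0)
```

    With these assumed numbers the centre GSD comes out near 2 mm, the same order as the paper's 2.3 mm; the exact figure depends on the calibrated focal length and the distortion model.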

  17. A simple technique for accurate and complete characterisation of a Fabry-Perot cavity.

    PubMed

    Locke, C R; Stuart, D; Ivanov, E N; Luiten, A N

    2009-11-23

    It has become a significant challenge to accurately characterise the properties of recently developed very high finesse optical resonators (F > 10^6). A similar challenge is encountered when trying to measure the properties of cavities in which either the probing laser or the cavity length is intrinsically unstable. We demonstrate in this article the means by which the finesse, mode-matching, free spectral range, mirror transmissions and dispersion may be measured easily and accurately even when the laser or cavity has a relatively poor intrinsic frequency stability. PMID:19997438

  18. Fast and accurate inductance and coupling calculation for a multi-layer Nb process

    NASA Astrophysics Data System (ADS)

    Fourie, Coenrad J.; Takahashi, Akitomo; Yoshikawa, Nobuyuki

    2015-03-01

    Currently, fabrication processes for superconductive integrated circuits are moving to multiple wiring and shielding layers, some of which are placed below the main ground plane (GP) and device layers. The Advanced Industrial Science and Technology advanced process (ADP2) was the first such multi-layer Nb process with planarized passive transmission line and GP layers below the junction layer, and is at the time of writing still the most developed. This process allows complex circuit designs, and accurate inductance extraction helps to push the boundaries of the layouts possible. We show that the position of ground connections between ground layers influences the inductance of structures for which these GPs act as return path, and that this needs to be accounted for in modelling. However, due to the number of wiring layers and GPs, full layout modelling of large cells causes long calculation times. In this paper we discuss methods with which to reduce model size, and calibrate InductEx calculations using these methods against measured results. We show that model reduction followed by calibration results in fast calculation times while good accuracy is maintained. We also show that InductEx correctly handles coupling between conductors in a multi-layer layout, and how to model layouts to gauge unwanted coupling between power lines and single flux quantum electronics.

  19. A Fast and Accurate Sparse Continuous Signal Reconstruction by Homotopy DCD with Non-Convex Regularization

    PubMed Central

    Wang, Tianyun; Lu, Xinfei; Yu, Xiaofei; Xi, Zhendong; Chen, Weidong

    2014-01-01

    In recent years, various applications regarding sparse continuous signal recovery such as source localization, radar imaging, communication channel estimation, etc., have been addressed from the perspective of compressive sensing (CS) theory. However, there are two major defects that need to be tackled when considering any practical utilization. The first issue is off-grid problem caused by the basis mismatch between arbitrary located unknowns and the pre-specified dictionary, which would make conventional CS reconstruction methods degrade considerably. The second important issue is the urgent demand for low-complexity algorithms, especially when faced with the requirement of real-time implementation. In this paper, to deal with these two problems, we have presented three fast and accurate sparse reconstruction algorithms, termed as HR-DCD, Hlog-DCD and Hlp-DCD, which are based on homotopy, dichotomous coordinate descent (DCD) iterations and non-convex regularizations, by combining with the grid refinement technique. Experimental results are provided to demonstrate the effectiveness of the proposed algorithms and related analysis. PMID:24675758
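
    The coordinate-descent core of such sparse solvers is easy to illustrate with the convex l1 penalty. The sketch below is textbook lasso coordinate descent on a tiny made-up problem; the paper's non-convex regularizers, homotopy continuation, and grid refinement are not reproduced:

```python
def soft(z, lam):
    """Soft-thresholding operator, the proximal step for the l1 penalty."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

def cd_lasso(A, y, lam, iters=50):
    """Cyclic coordinate descent for min 0.5*||y - A x||^2 + lam*||x||_1.
    A is given as a list of columns (each a list of floats)."""
    x = [0.0] * len(A)
    for _ in range(iters):
        for j, col in enumerate(A):
            # Residual excluding column j's current contribution.
            r = [yi - sum(A[k][i] * x[k] for k in range(len(A)) if k != j)
                 for i, yi in enumerate(y)]
            rho = sum(c * ri for c, ri in zip(col, r))
            x[j] = soft(rho, lam) / sum(c * c for c in col)
    return x

# Orthonormal columns (identity), so the solution is just soft(y, lam):
A = [[1.0, 0.0, 0.0, 0.0],
     [0.0, 1.0, 0.0, 0.0],
     [0.0, 0.0, 1.0, 0.0],
     [0.0, 0.0, 0.0, 1.0]]
y = [3.0, 0.0, 0.0, -2.0]
x = cd_lasso(A, y, lam=0.5)
```

    Non-convex penalties such as log-sum sharpen the shrinkage step, and the dichotomous coordinate descent of the paper replaces the exact coordinate update with cheap shift-and-add operations.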

  20. IVUSAngio tool: a publicly available software for fast and accurate 3D reconstruction of coronary arteries.

    PubMed

    Doulaverakis, Charalampos; Tsampoulatidis, Ioannis; Antoniadis, Antonios P; Chatzizisis, Yiannis S; Giannopoulos, Andreas; Kompatsiaris, Ioannis; Giannoglou, George D

    2013-11-01

    There is an ongoing research and clinical interest in the development of reliable and easily accessible software for the 3D reconstruction of coronary arteries. In this work, we present the architecture and validation of IVUSAngio Tool, an application which performs fast and accurate 3D reconstruction of the coronary arteries by using intravascular ultrasound (IVUS) and biplane angiography data. The 3D reconstruction is based on the fusion of the detected arterial boundaries in IVUS images with the 3D IVUS catheter path derived from the biplane angiography. The IVUSAngio Tool suite integrates all the intermediate processing and computational steps and provides a user-friendly interface. It also offers additional functionality, such as automatic selection of the end-diastolic IVUS images, semi-automatic and automatic IVUS segmentation, vascular morphometric measurements, graphical visualization of the 3D model and export in a format compatible with other computer-aided design applications. Our software was applied and validated in 31 human coronary arteries yielding quite promising results. Collectively, the use of IVUSAngio Tool significantly reduces the total processing time for 3D coronary reconstruction. IVUSAngio Tool is distributed as free software, publicly available to download and use.

  1. PRIMAL: Fast and Accurate Pedigree-based Imputation from Sequence Data in a Founder Population

    PubMed Central

    Livne, Oren E.; Han, Lide; Alkorta-Aranburu, Gorka; Wentworth-Sheilds, William; Abney, Mark; Ober, Carole; Nicolae, Dan L.

    2015-01-01

    Founder populations and large pedigrees offer many well-known advantages for genetic mapping studies, including cost-efficient study designs. Here, we describe PRIMAL (PedigRee IMputation ALgorithm), a fast and accurate pedigree-based phasing and imputation algorithm for founder populations. PRIMAL incorporates both existing and original ideas, such as a novel indexing strategy of Identity-By-Descent (IBD) segments based on clique graphs. We were able to impute the genomes of 1,317 South Dakota Hutterites, who had genome-wide genotypes for ~300,000 common single nucleotide variants (SNVs), from 98 whole genome sequences. Using a combination of pedigree-based and LD-based imputation, we were able to assign 87% of genotypes with >99% accuracy over the full range of allele frequencies. Using the IBD cliques we were also able to infer the parental origin of 83% of alleles, and genotypes of deceased recent ancestors for whom no genotype information was available. This imputed data set will enable us to better study the relative contribution of rare and common variants on human phenotypes, as well as parental origin effect of disease risk alleles in >1,000 individuals at minimal cost. PMID:25735005

  2. Novel Accurate and Fast Optic Disc Detection in Retinal Images With Vessel Distribution and Directional Characteristics.

    PubMed

    Zhang, Dongbo; Zhao, Yuanyuan

    2016-01-01

    A novel accurate and fast optic disc (OD) detection method is proposed, using vessel distribution and directional characteristics. A feature combining three vessel distribution characteristics, i.e., local vessel density, compactness, and uniformity, is designed to find the possible horizontal coordinate of the OD. Then, according to the global vessel direction characteristic, a General Hough Transformation is introduced to identify the vertical coordinate of the OD. By confining the possible OD vertical range and by simplifying the vessel structure with blocks, we greatly decrease the computational cost of the algorithm. Four public datasets have been tested. The OD localization accuracy ranges from 93.8% to 99.7% when 8-20% of vessel detection results are adopted for OD detection. Average computation times for STARE images are about 3.4-11.5 s, depending on image size. The proposed method shows satisfactory robustness on both normal and diseased images. It is better than many previous methods with respect to accuracy and efficiency.

  3. SMARTIES: User-friendly codes for fast and accurate calculations of light scattering by spheroids

    NASA Astrophysics Data System (ADS)

    Somerville, W. R. C.; Auguié, B.; Le Ru, E. C.

    2016-05-01

    We provide a detailed user guide for SMARTIES, a suite of MATLAB codes for the calculation of the optical properties of oblate and prolate spheroidal particles, with comparable capabilities and ease-of-use as Mie theory for spheres. SMARTIES is a MATLAB implementation of an improved T-matrix algorithm for the theoretical modelling of electromagnetic scattering by particles of spheroidal shape. The theory behind the improvements in numerical accuracy and convergence is briefly summarized, with reference to the original publications. Instructions of use, and a detailed description of the code structure, its range of applicability, as well as guidelines for further developments by advanced users are discussed in separate sections of this user guide. The code may be useful to researchers seeking a fast, accurate and reliable tool to simulate the near-field and far-field optical properties of elongated particles, but will also appeal to other developers of light-scattering software seeking a reliable benchmark for non-spherical particles with a challenging aspect ratio and/or refractive index contrast.

  4. FAMSA: Fast and accurate multiple sequence alignment of huge protein families

    PubMed Central

    Deorowicz, Sebastian; Debudaj-Grabysz, Agnieszka; Gudyś, Adam

    2016-01-01

    Rapid development of modern sequencing platforms has contributed to the unprecedented growth of protein family databases. The abundance of sets containing hundreds of thousands of sequences is a formidable challenge for multiple sequence alignment algorithms. The article introduces FAMSA, a new progressive algorithm designed for fast and accurate alignment of thousands of protein sequences. Its features include the utilization of the longest common subsequence measure for determining pairwise similarities, a novel method of evaluating gap costs, and a new iterative refinement scheme. Importantly, its implementation is highly optimized and parallelized to make the most of modern computer platforms. Thanks to the above, quality indicators, i.e. sum-of-pairs and total-column scores, show FAMSA to be superior to competing algorithms, such as Clustal Omega or MAFFT, for datasets exceeding a few thousand sequences. This quality does not come at the expense of time or memory requirements, which are an order of magnitude lower than those of existing solutions. For example, a family of 415519 sequences was analyzed in less than two hours and required no more than 8 GB of RAM. FAMSA is available for free at http://sun.aei.polsl.pl/REFRESH/famsa. PMID:27670777
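
    The pairwise similarity measure FAMSA starts from is the longest common subsequence. The classic dynamic program for the LCS length is shown below; this is the generic textbook version, whereas FAMSA uses a much faster bit-parallel variant:

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of strings a and b,
    via the standard O(len(a) * len(b)) dynamic program, keeping
    only two rows of the DP table."""
    prev = [0] * (len(b) + 1)
    for ca in a:
        cur = [0]
        for j, cb in enumerate(b, 1):
            cur.append(prev[j - 1] + 1 if ca == cb
                       else max(prev[j], cur[j - 1]))
        prev = cur
    return prev[-1]
```

    For two protein sequences, the LCS length normalized by sequence length gives a cheap alignment-free similarity that FAMSA uses to build its guide tree.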

  5. Accurate, fast and cost-effective diagnostic test for monosomy 1p36 using real-time quantitative PCR.

    PubMed

    Cunha, Pricila da Silva; Pena, Heloisa B; D'Angelo, Carla Sustek; Koiffmann, Celia P; Rosenfeld, Jill A; Shaffer, Lisa G; Stofanko, Martin; Gonçalves-Dornelas, Higgor; Pena, Sérgio Danilo Junho

    2014-01-01

    Monosomy 1p36 is considered the most common subtelomeric deletion syndrome in humans and it accounts for 0.5-0.7% of all the cases of idiopathic intellectual disability. The molecular diagnosis is often made by microarray-based comparative genomic hybridization (aCGH), which has the drawback of being a high-cost technique. However, patients with classic monosomy 1p36 share some typical clinical characteristics that, together with its common prevalence, justify the development of a less expensive, targeted diagnostic method. In this study, we developed a simple, rapid, and inexpensive real-time quantitative PCR (qPCR) assay for targeted diagnosis of monosomy 1p36, easily accessible for low-budget laboratories in developing countries. For this, we have chosen two target genes which are deleted in the majority of patients with monosomy 1p36: PRKCZ and SKI. In total, 39 patients previously diagnosed with monosomy 1p36 by aCGH, fluorescent in situ hybridization (FISH), and/or multiplex ligation-dependent probe amplification (MLPA) all tested positive on our qPCR assay. By simultaneously using these two genes we have been able to detect 1p36 deletions with 100% sensitivity and 100% specificity. We conclude that qPCR of PRKCZ and SKI is a fast and accurate diagnostic test for monosomy 1p36, costing less than 10 US dollars in reagent costs.

  6. Accurate, Fast and Cost-Effective Diagnostic Test for Monosomy 1p36 Using Real-Time Quantitative PCR

    PubMed Central

    Cunha, Pricila da Silva; Pena, Heloisa B.; D'Angelo, Carla Sustek; Koiffmann, Celia P.; Rosenfeld, Jill A.; Shaffer, Lisa G.; Stofanko, Martin; Gonçalves-Dornelas, Higgor; Pena, Sérgio Danilo Junho

    2014-01-01

    Monosomy 1p36 is considered the most common subtelomeric deletion syndrome in humans and it accounts for 0.5–0.7% of all the cases of idiopathic intellectual disability. The molecular diagnosis is often made by microarray-based comparative genomic hybridization (aCGH), which has the drawback of being a high-cost technique. However, patients with classic monosomy 1p36 share some typical clinical characteristics that, together with its common prevalence, justify the development of a less expensive, targeted diagnostic method. In this study, we developed a simple, rapid, and inexpensive real-time quantitative PCR (qPCR) assay for targeted diagnosis of monosomy 1p36, easily accessible for low-budget laboratories in developing countries. For this, we have chosen two target genes which are deleted in the majority of patients with monosomy 1p36: PRKCZ and SKI. In total, 39 patients previously diagnosed with monosomy 1p36 by aCGH, fluorescent in situ hybridization (FISH), and/or multiplex ligation-dependent probe amplification (MLPA) all tested positive on our qPCR assay. By simultaneously using these two genes we have been able to detect 1p36 deletions with 100% sensitivity and 100% specificity. We conclude that qPCR of PRKCZ and SKI is a fast and accurate diagnostic test for monosomy 1p36, costing less than 10 US dollars in reagent costs. PMID:24839341
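
    A qPCR deletion readout like this one is commonly interpreted through relative copy number, e.g. the 2^-ΔΔCt method against a two-copy control. A sketch with invented Ct values and an illustrative threshold (neither is from the paper):

```python
def relative_copy_number(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative quantification by 2^-ddCt: the target gene's Ct is
    normalized to a reference gene, then compared to a normal
    two-copy control sample."""
    ddct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** -ddct

# Invented Ct values: the target (e.g. PRKCZ) amplifies one cycle later
# in the patient than in the control -> about half the copy number.
ratio = relative_copy_number(26.0, 25.0, 25.0, 25.0)
deleted = ratio < 0.7   # illustrative cut-off for a heterozygous deletion
```

    A ratio near 1 indicates two copies, while a ratio near 0.5 flags a heterozygous 1p36 deletion; running two targets (PRKCZ and SKI), as the study does, guards against a false call from a single noisy well.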

  7. A Simple, Accurate Model for Alkyl Adsorption on Late Transition Metals

    SciTech Connect

    Montemore, Matthew M.; Medlin, James W.

    2013-01-18

    A simple model that predicts the adsorption energy of an arbitrary alkyl in the high-symmetry sites of late transition metal fcc(111) and related surfaces is presented. The model makes predictions based on a few simple attributes of the adsorbate and surface, including the d-shell filling and the matrix coupling element, as well as the adsorption energy of methyl in the top sites. We use the model to screen surfaces for alkyl chain-growth properties and to explain trends in alkyl adsorption strength, site preference, and vibrational softening.

  8. Two fast and accurate heuristic RBF learning rules for data classification.

    PubMed

    Rouhani, Modjtaba; Javan, Dawood S

    2016-03-01

    This paper presents new Radial Basis Function (RBF) learning methods for classification problems. The proposed methods use heuristics to determine the spreads, the centers and the number of hidden neurons of the network in such a way that higher efficiency is achieved with fewer neurons, while the learning algorithm remains fast and simple. To keep the network size limited, neurons are added to the network recursively until a termination condition is met. Each neuron covers some of the training data. The termination condition is to cover all training data or to reach the maximum number of neurons. In each step, the center and spread of the new neuron are selected to maximize its coverage. Maximizing the neurons' coverage leads to a network with fewer neurons and hence a lower VC dimension and better generalization. Using the power exponential distribution function as the activation function of the hidden neurons, and in light of the new learning approaches, it is proved that all data become linearly separable in the space of hidden-layer outputs, which implies that there exist linear output-layer weights with zero training error. The proposed methods are applied to some well-known datasets and the simulation results, compared with SVM and other leading RBF learning methods, show their satisfactory and comparable performance. PMID:26797472

  9. Two fast and accurate heuristic RBF learning rules for data classification.

    PubMed

    Rouhani, Modjtaba; Javan, Dawood S

    2016-03-01

    This paper presents new Radial Basis Function (RBF) learning methods for classification problems. The proposed methods use heuristics to determine the spreads, the centers and the number of hidden neurons of the network in such a way that higher efficiency is achieved with fewer neurons, while the learning algorithm remains fast and simple. To keep the network size limited, neurons are added to the network recursively until a termination condition is met. Each neuron covers some of the training data. The termination condition is to cover all training data or to reach the maximum number of neurons. In each step, the center and spread of the new neuron are selected to maximize its coverage. Maximizing the neurons' coverage leads to a network with fewer neurons and hence a lower VC dimension and better generalization. Using the power exponential distribution function as the activation function of the hidden neurons, and in light of the new learning approaches, it is proved that all data become linearly separable in the space of hidden-layer outputs, which implies that there exist linear output-layer weights with zero training error. The proposed methods are applied to some well-known datasets and the simulation results, compared with SVM and other leading RBF learning methods, show their satisfactory and comparable performance.
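
    The claim that training data become linearly separable in the hidden-layer space can be checked with the simplest construction: one Gaussian unit per training point and output weights solved exactly. This is a generic illustration on XOR, not the paper's recursive coverage-maximizing rule or its power-exponential activation:

```python
import math

def gaussian_features(points, centers, sigma):
    """Hidden-layer outputs: one Gaussian RBF unit per center."""
    return [[math.exp(-sum((a - b) ** 2 for a, b in zip(p, c))
                      / (2 * sigma ** 2)) for c in centers]
            for p in points]

def solve(mat, rhs):
    """Gaussian elimination with partial pivoting for square systems."""
    n = len(mat)
    m = [row[:] + [r] for row, r in zip(mat, rhs)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            m[r] = [a - f * b for a, b in zip(m[r], m[col])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c]
                              for c in range(r + 1, n))) / m[r][r]
    return x

# XOR: not linearly separable in the input space.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [-1.0, 1.0, 1.0, -1.0]
phi = gaussian_features(X, X, sigma=0.5)      # one RBF unit per point
w = solve(phi, y)                             # exact output-layer weights
preds = [1.0 if sum(wi * f for wi, f in zip(w, feats)) > 0 else -1.0
         for feats in phi]
```

    Placing a unit on every training point guarantees zero training error but maximizes network size; the paper's contribution is heuristics that reach separability with far fewer neurons.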

  10. Fast and Accurate Microplate Method (Biolog MT2) for Detection of Fusarium Fungicides Resistance/Sensitivity

    PubMed Central

    Frąc, Magdalena; Gryta, Agata; Oszust, Karolina; Kotowicz, Natalia

    2016-01-01

    Finding fungicides effective against Fusarium is a key step in chemical plant protection and in the choice of appropriate chemical agents. Existing, conventional methods for evaluating the resistance of Fusarium isolates to fungicides are costly, time-consuming and potentially environmentally harmful due to the use of large amounts of potentially toxic chemicals. Therefore, the development of fast, accurate and effective detection methods for Fusarium resistance to fungicides is urgently required. The MT2 microplate (Biolog™) method is traditionally used for bacteria identification and the evaluation of their ability to utilize different carbon substrates. However, to the best of our knowledge, there are no reports concerning the use of this technical tool to determine the fungicide resistance of Fusarium isolates. For this reason, the objectives of this study are to develop a fast method for detecting Fusarium resistance to fungicides and to validate its effectiveness against the traditional hole-plate assay. In the present study, the MT2 microplate-based assay was evaluated for potential use as an alternative resistance detection method. This was carried out using three commercially available fungicides, containing the following active substances: triazoles (tebuconazole), benzimidazoles (carbendazim) and strobilurins (azoxystrobin), in six concentrations (0, 0.0005, 0.005, 0.05, 0.1, 0.2%), for nine selected Fusarium isolates. The particular concentrations of each fungicide were loaded into MT2 microplate wells. The wells were inoculated with Fusarium mycelium suspended in PM4-IF inoculating fluid. Before inoculation, the suspension was standardized for each isolate to 75% transmittance. The traditional hole-plate method was used as the control assay. The fungicide concentrations in the control method were the following: 0, 0.0005, 0.005, 0.05, 0.5, 1, 2, 5, 10, 25, and 50%. Strong relationships between MT2 microplate and traditional hole

  11. Fast and Accurate Microplate Method (Biolog MT2) for Detection of Fusarium Fungicides Resistance/Sensitivity.

    PubMed

    Frąc, Magdalena; Gryta, Agata; Oszust, Karolina; Kotowicz, Natalia

    2016-01-01

    Finding fungicides effective against Fusarium is a key step in chemical plant protection and in the choice of appropriate chemical agents. Existing, conventional methods for evaluating the resistance of Fusarium isolates to fungicides are costly, time-consuming and potentially environmentally harmful due to the use of large amounts of potentially toxic chemicals. Therefore, the development of fast, accurate and effective detection methods for Fusarium resistance to fungicides is urgently required. The MT2 microplate (Biolog™) method is traditionally used for bacteria identification and the evaluation of their ability to utilize different carbon substrates. However, to the best of our knowledge, there are no reports concerning the use of this technical tool to determine the fungicide resistance of Fusarium isolates. For this reason, the objectives of this study are to develop a fast method for detecting Fusarium resistance to fungicides and to validate its effectiveness against the traditional hole-plate assay. In the present study, the MT2 microplate-based assay was evaluated for potential use as an alternative resistance detection method. This was carried out using three commercially available fungicides, containing the following active substances: triazoles (tebuconazole), benzimidazoles (carbendazim) and strobilurins (azoxystrobin), in six concentrations (0, 0.0005, 0.005, 0.05, 0.1, 0.2%), for nine selected Fusarium isolates. The particular concentrations of each fungicide were loaded into MT2 microplate wells. The wells were inoculated with Fusarium mycelium suspended in PM4-IF inoculating fluid. Before inoculation, the suspension was standardized for each isolate to 75% transmittance. The traditional hole-plate method was used as the control assay. The fungicide concentrations in the control method were the following: 0, 0.0005, 0.005, 0.05, 0.5, 1, 2, 5, 10, 25, and 50%.
Strong relationships between MT2 microplate and traditional hole
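
    A dose-response readout of the kind produced by such a microplate assay can be summarized as percent growth inhibition relative to the fungicide-free control well. The numbers below are invented, and the Biolog MT2 readout (dye reduction) may call for a different statistic than this generic one:

```python
def inhibition_percent(control_growth, treated_growth):
    """Growth inhibition relative to the fungicide-free control well
    (a generic dose-response summary, not the paper's exact statistic)."""
    return 100.0 * (control_growth - treated_growth) / control_growth

# simulated well readings for one isolate across the six concentrations
concs = [0, 0.0005, 0.005, 0.05, 0.1, 0.2]       # percent active substance
readings = [1.00, 0.95, 0.70, 0.30, 0.15, 0.05]  # optical density, arbitrary units
profile = [inhibition_percent(readings[0], r) for r in readings]
```

    A sensitive isolate shows a monotonically rising inhibition profile over the concentration series; a resistant one stays flat.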

  12. RAP: Accurate and Fast Motif Finding Based on Protein-Binding Microarray Data

    PubMed Central

    Orenstein, Yaron; Mick, Eran

    2013-01-01

    The novel high-throughput technology of protein-binding microarrays (PBMs) measures the binding intensity of a transcription factor to thousands of DNA probe sequences. Several algorithms have been developed to extract binding-site motifs from these data. Such motifs are commonly represented by positional weight matrices. Previous studies have shown that the motifs produced by these algorithms are either accurate in predicting in vitro binding or similar to previously published motifs, but not both. In this work, we present a new simple algorithm to infer binding-site motifs from PBM data. It outperforms prior art both in predicting in vitro binding and in producing motifs similar to literature motifs. Our results challenge previous claims that motifs with lower information content are better models for transcription-factor binding specificity. Moreover, we tested the effect of motif length and side positions flanking the “core” motif in the binding site. We show that side positions have a significant effect and should not be removed, as commonly done. A large drop in result quality of all methods is observed between in vitro and in vivo binding prediction. The software is available at acgt.cs.tau.ac.il/rap. PMID:23464877
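
    Scoring a PBM probe with a positional weight matrix is the basic operation behind this kind of motif evaluation. The sketch below uses a best-window log-odds score, a common choice but not necessarily RAP's exact ranking statistic:

```python
import numpy as np

BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

def score_probe(pwm, seq):
    """Best log-odds score of the motif over all offsets of the probe.
    pwm has one row per base and one column per motif position."""
    k = pwm.shape[1]
    best = -np.inf
    for i in range(len(seq) - k + 1):
        s = sum(pwm[BASES[b], j] for j, b in enumerate(seq[i:i + k]))
        best = max(best, s)
    return best

# toy 2-column PWM strongly preferring the dinucleotide "AC"
probs = np.array([[0.7, 0.1],   # A
                  [0.1, 0.7],   # C
                  [0.1, 0.1],   # G
                  [0.1, 0.1]])  # T
pwm = np.log(probs / 0.25)      # log-odds against a uniform background
```

    A probe containing the preferred site then scores higher than one without it, which is the signal the motif-finding algorithms optimize against measured intensities.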

  13. Fast and accurate simulations of diffusion-weighted MRI signals for the evaluation of acquisition sequences

    NASA Astrophysics Data System (ADS)

    Rensonnet, Gaëtan; Jacobs, Damien; Macq, Benoît; Taquet, Maxime

    2016-03-01

    Diffusion-weighted magnetic resonance imaging (DW-MRI) is a powerful tool to probe the diffusion of water through tissues. Through the application of magnetic gradients of appropriate direction, intensity and duration constituting the acquisition parameters, information can be retrieved about the underlying microstructural organization of the brain. In this context, an important and open question is to determine an optimal sequence of such acquisition parameters for a specific purpose. The use of simulated DW-MRI data for a given microstructural configuration provides a convenient and efficient way to address this problem. We first present a novel hybrid method for the synthetic simulation of DW-MRI signals that combines analytic expressions in simple geometries such as spheres and cylinders and Monte Carlo (MC) simulations elsewhere. Our hybrid method remains valid for any acquisition parameters and provides identical levels of accuracy with a computational time that is 90% shorter than that required by MC simulations for commonly-encountered microstructural configurations. We apply our novel simulation technique to estimate the radius of axons under various noise levels with different acquisition protocols commonly used in the literature. The results of our comparison suggest that protocols favoring a large number of gradient intensities, such as Cube and Sphere (CUSP) imaging, provide more accurate radius estimation than conventional single-shell HARDI acquisitions for an identical acquisition time.
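
    The core idea of such a hybrid scheme, combining closed-form expressions where they exist with Monte Carlo elsewhere and weighting the compartments by volume fraction, can be illustrated in one dimension. Everything below (free diffusion, narrow-pulse approximation, parameter values) is our simplification, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_signal(q, D, n_walkers=20000, T=0.05, n_steps=200):
    """Narrow-pulse Monte Carlo estimate of the diffusion signal
    E(q) = <cos(q * dx)> for free 1-D diffusion (a stand-in for the
    complicated geometries that require MC in the hybrid method)."""
    dt = T / n_steps
    steps = rng.normal(0.0, np.sqrt(2 * D * dt), (n_walkers, n_steps))
    dx = steps.sum(axis=1)             # net displacement of each walker
    return np.cos(q * dx).mean()

def analytic_signal(q, D, T):
    """Gaussian (free-diffusion) compartment: E(q) = exp(-q^2 D T)."""
    return np.exp(-q**2 * D * T)

def hybrid_signal(q, D, f_analytic, T=0.05):
    """Volume-fraction-weighted combination of analytic and MC compartments."""
    return f_analytic * analytic_signal(q, D, T) + (1 - f_analytic) * mc_signal(q, D, T=T)
```

    For free diffusion the MC estimate converges to the analytic value, which is precisely why the hybrid approach can hand simple geometries to the analytic branch and save the MC budget for the rest.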

  14. Simple and accurate methods for quantifying deformation, disruption, and development in biological tissues

    PubMed Central

    Boyle, John J.; Kume, Maiko; Wyczalkowski, Matthew A.; Taber, Larry A.; Pless, Robert B.; Xia, Younan; Genin, Guy M.; Thomopoulos, Stavros

    2014-01-01

    When mechanical factors underlie growth, development, disease or healing, they often function through local regions of tissue where deformation is highly concentrated. Current optical techniques to estimate deformation can lack precision and accuracy in such regions due to challenges in distinguishing a region of concentrated deformation from an error in displacement tracking. Here, we present a simple and general technique for improving the accuracy and precision of strain estimation and an associated technique for distinguishing a concentrated deformation from a tracking error. The strain estimation technique improves accuracy relative to other state-of-the-art algorithms by directly estimating strain fields without first estimating displacements, resulting in a very simple method and low computational cost. The technique for identifying local elevation of strain enables for the first time the successful identification of the onset and consequences of local strain concentrating features such as cracks and tears in a highly strained tissue. We apply these new techniques to demonstrate a novel hypothesis in prenatal wound healing. More generally, the analytical methods we have developed provide a simple tool for quantifying the appearance and magnitude of localized deformation from a series of digital images across a broad range of disciplines. PMID:25165601
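
    As a point of reference for the strain quantities involved, the Green-Lagrange strain of a tracked line segment between two frames is a standard building block (the paper's contribution is to estimate strain fields directly from image series, without this explicit displacement tracking):

```python
import math

def lagrangian_strain(p0, q0, p1, q1):
    """Green-Lagrange strain of the segment p-q between a reference frame
    (p0, q0) and a deformed frame (p1, q1): E = (l^2 - l0^2) / (2 * l0^2)."""
    l0 = math.dist(p0, q0)   # reference length
    l = math.dist(p1, q1)    # deformed length
    return (l * l - l0 * l0) / (2 * l0 * l0)

# a segment stretched by 10% along its axis
e = lagrangian_strain((0, 0), (1, 0), (0, 0), (1.1, 0))
```

    A 10% stretch gives E = (1.21 - 1)/2 = 0.105; a localized tear shows up as a sharp spike in such strain values that a pure tracking error does not reproduce consistently across neighboring segments.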

  15. NINJA-OPS: Fast Accurate Marker Gene Alignment Using Concatenated Ribosomes.

    PubMed

    Al-Ghalith, Gabriel A; Montassier, Emmanuel; Ward, Henry N; Knights, Dan

    2016-01-01

    The explosion of bioinformatics technologies in the form of next generation sequencing (NGS) has facilitated a massive influx of genomics data in the form of short reads. Short read mapping is therefore a fundamental component of next generation sequencing pipelines which routinely match these short reads against reference genomes for contig assembly. However, such techniques have seldom been applied to microbial marker gene sequencing studies, which have mostly relied on novel heuristic approaches. We propose NINJA Is Not Just Another OTU-Picking Solution (NINJA-OPS, or NINJA for short), a fast and highly accurate novel method enabling reference-based marker gene matching (picking Operational Taxonomic Units, or OTUs). NINJA takes advantage of the Burrows-Wheeler (BW) alignment using an artificial reference chromosome composed of concatenated reference sequences, the "concatesome," as the BW input. Other features include automatic support for paired-end reads with arbitrary insert sizes. NINJA is also free and open source and implements several pre-filtering methods that elicit substantial speedup when coupled with existing tools. We applied NINJA to several published microbiome studies, obtaining accuracy similar to or better than previous reference-based OTU-picking methods while achieving an order of magnitude or more speedup and using a fraction of the memory footprint. NINJA is a complete pipeline that takes a FASTA-formatted input file and outputs a QIIME-formatted taxonomy-annotated BIOM file for an entire MiSeq run of human gut microbiome 16S genes in under 10 minutes on a dual-core laptop.

  16. NINJA-OPS: Fast Accurate Marker Gene Alignment Using Concatenated Ribosomes

    PubMed Central

    Al-Ghalith, Gabriel A.; Montassier, Emmanuel; Ward, Henry N.; Knights, Dan

    2016-01-01

    The explosion of bioinformatics technologies in the form of next generation sequencing (NGS) has facilitated a massive influx of genomics data in the form of short reads. Short read mapping is therefore a fundamental component of next generation sequencing pipelines which routinely match these short reads against reference genomes for contig assembly. However, such techniques have seldom been applied to microbial marker gene sequencing studies, which have mostly relied on novel heuristic approaches. We propose NINJA Is Not Just Another OTU-Picking Solution (NINJA-OPS, or NINJA for short), a fast and highly accurate novel method enabling reference-based marker gene matching (picking Operational Taxonomic Units, or OTUs). NINJA takes advantage of the Burrows-Wheeler (BW) alignment using an artificial reference chromosome composed of concatenated reference sequences, the “concatesome,” as the BW input. Other features include automatic support for paired-end reads with arbitrary insert sizes. NINJA is also free and open source and implements several pre-filtering methods that elicit substantial speedup when coupled with existing tools. We applied NINJA to several published microbiome studies, obtaining accuracy similar to or better than previous reference-based OTU-picking methods while achieving an order of magnitude or more speedup and using a fraction of the memory footprint. NINJA is a complete pipeline that takes a FASTA-formatted input file and outputs a QIIME-formatted taxonomy-annotated BIOM file for an entire MiSeq run of human gut microbiome 16S genes in under 10 minutes on a dual-core laptop. PMID:26820746
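
    The "concatesome" trick can be illustrated with a toy builder that concatenates reference sequences into one artificial chromosome and keeps offsets so alignment coordinates map back to their source record. The padding scheme here is our assumption for illustration, not NINJA-OPS's actual layout:

```python
def build_concatesome(refs, pad=10):
    """Concatenate reference sequences into one artificial chromosome,
    recording (start, end, name) offsets for coordinate mapping."""
    chrom, offsets, pos = [], [], 0
    for name, seq in refs:
        offsets.append((pos, pos + len(seq), name))
        chrom.append(seq)
        chrom.append("N" * pad)   # separator so reads cannot span records
        pos += len(seq) + pad
    return "".join(chrom), offsets

def locate(offsets, pos):
    """Map a position on the concatesome back to (reference name, local position)."""
    for start, end, name in offsets:
        if start <= pos < end:
            return name, pos - start
    return None

refs = [("otu1", "ACGTACGT"), ("otu2", "TTGGCCAA")]
chrom, offsets = build_concatesome(refs)
```

    Any Burrows-Wheeler aligner can then be pointed at the single concatenated "chromosome", and each hit position is translated back into an OTU assignment.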

  17. An automated, fast and accurate registration method to link stranded seeds in permanent prostate implants

    NASA Astrophysics Data System (ADS)

    Westendorp, Hendrik; Nuver, Tonnis T.; Moerland, Marinus A.; Minken, André W.

    2015-10-01

    The geometry of a permanent prostate implant varies over time. Seeds can migrate, and edema of the prostate affects the position of seeds. Seed movements directly influence dosimetry, which relates to treatment quality. We present a method that tracks all individual seeds over time, allowing quantification of seed movements. This linking procedure was tested on transrectal ultrasound (TRUS) and cone-beam CT (CBCT) datasets of 699 patients. These datasets were acquired intraoperatively during a dynamic implantation procedure that combines both imaging modalities. The procedure was subdivided into four automatic linking steps. (I) The Hungarian algorithm was applied to initially link seeds in CBCT and the corresponding TRUS datasets. (II) Strands were identified and optimized based on curvature and line fits: non-optimal links were removed. (III) The positions of unlinked seeds were reviewed and linked to incomplete strands if within curvature and distance thresholds. (IV) Finally, seeds close to strands were linked, even if the curvature threshold was violated. After linking the seeds, an affine transformation was applied. The procedure was repeated until the results were stable or the 6th iteration ended. All results were visually reviewed for mismatches and uncertainties. Eleven implants showed a mismatch, and in 12 cases an uncertainty was identified. On average the linking procedure took 42 ms per case. This accurate and fast method has the potential to be used for other time spans, like Day 30, and other imaging modalities. It can potentially be used during a dynamic implantation procedure to evaluate the quality of the permanent prostate implant faster and better.
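
    Step (I), optimal one-to-one linking of the seed sets from the two modalities, is a minimum-total-distance assignment problem. The paper uses the Hungarian algorithm; for a handful of synthetic seeds, a brute-force search over permutations returns the same optimal matching and keeps the sketch dependency-free:

```python
import itertools
import numpy as np

def link_seeds(a, b):
    """Optimal one-to-one linking of two equal-size seed sets by total
    squared distance (brute force; the Hungarian algorithm solves the
    same problem in polynomial time for realistic seed counts)."""
    cost = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2) ** 2
    best, best_perm = np.inf, None
    for perm in itertools.permutations(range(len(b))):
        c = cost[np.arange(len(a)), perm].sum()
        if c < best:
            best, best_perm = c, perm
    return list(enumerate(best_perm))

# synthetic seed coordinates (mm) from two modalities, deliberately shuffled
a = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
b = np.array([[10.2, 0.1, 0.0], [0.1, 9.8, 0.3], [0.2, -0.1, 0.1]])
links = link_seeds(a, b)
```

    The residual distances of the linked pairs then feed the strand-fitting and affine-registration steps (II)-(IV).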

  18. Simple and fast determination of perfluorinated compounds in Taihu Lake by SPE-UHPLC-MS/MS.

    PubMed

    Zhu, Pengfei; Ling, Xia; Liu, Wenwei; Kong, Lingcan; Yao, Yuyang

    2016-09-15

    A simple and fast analytical method for the determination of eleven polyfluorinated compounds (PFCs) in source water was developed in the present work. The water sample was prepared without being filtered through a microfiltration membrane, and 500 mL of source water was enriched by solid-phase extraction (SPE). The target compounds were analyzed by ultra-high-performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS). The optimized analytical method was validated in terms of recovery, precision and method detection limits (MDLs). The recovery values after correction with the corresponding labeled standard were between 97.3 and 113.0% for samples spiked at 5 ng/L, 10 ng/L and 20 ng/L. All PFCs showed good linearity, and the linear correlation coefficients were over 0.99. The precisions were 1.0-9.0% (n=6). As a result of the enrichment, the MDL values ranged from 0.03 to 1.9 ng/L, sufficient for analysis of trace levels of PFCs in Taihu Lake. The method was further validated by determining PFCs in source water, and the results showed that PFHxS, PFHxA, PFOA and PFOS were the primary PFCs in Taihu Lake, which might differ from other reports. The method can be used for the determination of PFCs in water with stable recovery, good reproducibility, low detection limits, less solvent consumption, and savings in time and labor. To our knowledge, this is the first method that describes the effect of the filter membrane on the determination of PFCs in water, which might yield more accurate concentrations of PFCs in Taihu Lake. PMID:27454901
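
    The recovery figures quoted above follow the standard spike-recovery definition; a minimal sketch with invented numbers:

```python
def recovery_percent(measured, native, spiked):
    """Spike recovery: (measured - native) / spiked * 100.
    Standard validation metric; the concentrations here are invented."""
    return 100.0 * (measured - native) / spiked

# sample natively at 5 ng/L, spiked with 10 ng/L, measured at 14.8 ng/L
r = recovery_percent(measured=14.8, native=5.0, spiked=10.0)
```

    A recovery of 98% falls inside the 97.3-113.0% window reported for the validated method.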

  19. Simple yet accurate noncontact device for measuring the radius of curvature of a spherical mirror

    SciTech Connect

    Spiridonov, Maxim; Toebaert, David

    2006-09-10

    An easily reproducible device is demonstrated to be capable of measuring the radii of curvature of spherical mirrors, both convex and concave, without resorting to high-end interferometric or tactile devices. The former are too elaborate for our purposes, and the latter cannot be used due to the delicate nature of the coatings applied to mirrors used in high-power CO2 laser applications. The proposed apparatus is accurate enough to be useful to anyone using curved optics and needing a quick way to assess the values of the radii of curvature, be it for entrance quality control or troubleshooting an apparently malfunctioning optical system. Specifically, the apparatus was designed for checking 50 mm diameter resonator (typically flat or tens of meters concave) and telescope (typically some meters convex and concave) mirrors for a high-power CO2 laser, but it can easily be adapted to any other type of spherical mirror by a straightforward resizing.

  20. Simple yet accurate noncontact device for measuring the radius of curvature of a spherical mirror

    NASA Astrophysics Data System (ADS)

    Spiridonov, Maxim; Toebaert, David

    2006-09-01

    An easily reproducible device is demonstrated to be capable of measuring the radii of curvature of spherical mirrors, both convex and concave, without resorting to high-end interferometric or tactile devices. The former are too elaborate for our purposes, and the latter cannot be used due to the delicate nature of the coatings applied to mirrors used in high-power CO2 laser applications. The proposed apparatus is accurate enough to be useful to anyone using curved optics and needing a quick way to assess the values of the radii of curvature, be it for entrance quality control or troubleshooting an apparently malfunctioning optical system. Specifically, the apparatus was designed for checking 50 mm diameter resonator (typically flat or tens of meters concave) and telescope (typically some meters convex and concave) mirrors for a high-power CO2 laser, but it can easily be adapted to any other type of spherical mirror by a straightforward resizing.
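
    While the paper's device is not detailed here, the standard geometry behind noncontact radius-of-curvature measurement is the sagitta relation for a spherical cap, which is exact:

```python
def radius_from_sagitta(chord, sagitta):
    """Radius of a spherical surface from a chord of length c and the
    sagitta (cap depth) s: R = s/2 + c^2 / (8 s). This is the classic
    geometric relation, shown for reference; the paper's apparatus is
    not described in the abstract."""
    return sagitta / 2 + chord ** 2 / (8 * sagitta)
```

    For example, a 20 mm chord on a 100 mm radius sphere has a sagitta of about 0.50 mm, and the relation recovers the radius exactly.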

  1. Communication: Simple and accurate uniform electron gas correlation energy for the full range of densities

    NASA Astrophysics Data System (ADS)

    Chachiyo, Teepanis

    2016-07-01

    A simple correlation energy functional for the uniform electron gas is derived based on second-order Møller-Plesset perturbation theory. It reproduces the known correlation functional in the high-density limit, while in the mid-density range it maintains good agreement with the near-exact correlation energy of the uniform electron gas to within 2 × 10⁻³ hartree. The correlation energy is a function of the density parameter rs and is of the form a·ln(1 + b/rs + b/rs²). The constants "a" and "b" are derived from the known correlation functional in the high-density limit. Comparisons to Ceperley and Alder's near-exact quantum Monte Carlo results and the Vosko-Wilk-Nusair correlation functional are also reported.
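
    The functional form quoted in the abstract is easy to evaluate directly. The constant a below is the exact high-density-limit value a = (ln 2 - 1)/(2π²); the value of b is the commonly quoted one for the paramagnetic case and should be checked against the paper:

```python
import math

# Chachiyo-form correlation energy per electron (paramagnetic case):
#   e_c(rs) = a * ln(1 + b/rs + b/rs**2)
a = (math.log(2) - 1) / (2 * math.pi ** 2)   # fixed by the high-density limit
b = 20.4562557                               # commonly quoted; verify vs. paper

def e_c(rs):
    """Correlation energy per electron (hartree) at density parameter rs."""
    return a * math.log(1 + b / rs + b / rs ** 2)
```

    At rs = 1 this gives roughly -0.058 hartree, and the magnitude decays toward zero in the low-density limit, as a correlation energy must.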

  2. A simple and accurate algorithm for path integral molecular dynamics with the Langevin thermostat.

    PubMed

    Liu, Jian; Li, Dezhang; Liu, Xinzijian

    2016-07-14

    We introduce a novel simple algorithm for thermostatting path integral molecular dynamics (PIMD) with the Langevin equation. The staging transformation of path integral beads is employed for demonstration. The optimum friction coefficients for the staging modes in the free particle limit are used for all systems. In comparison to the path integral Langevin equation thermostat, the new algorithm exploits a different order of splitting for the phase space propagator associated with the Langevin equation. While the error analysis is made for both algorithms, they are also employed in the PIMD simulations of three realistic systems (the H2O molecule, liquid para-hydrogen, and liquid water) for comparison. It is shown that the new thermostat increases the time interval of PIMD by a factor of 4-6 or more for achieving the same accuracy. In addition, the supplementary material shows the error analysis made for the algorithms when the normal-mode transformation of path integral beads is used.

  3. A simple and accurate algorithm for path integral molecular dynamics with the Langevin thermostat

    NASA Astrophysics Data System (ADS)

    Liu, Jian; Li, Dezhang; Liu, Xinzijian

    2016-07-01

    We introduce a novel simple algorithm for thermostatting path integral molecular dynamics (PIMD) with the Langevin equation. The staging transformation of path integral beads is employed for demonstration. The optimum friction coefficients for the staging modes in the free particle limit are used for all systems. In comparison to the path integral Langevin equation thermostat, the new algorithm exploits a different order of splitting for the phase space propagator associated with the Langevin equation. While the error analysis is made for both algorithms, they are also employed in the PIMD simulations of three realistic systems (the H2O molecule, liquid para-hydrogen, and liquid water) for comparison. It is shown that the new thermostat increases the time interval of PIMD by a factor of 4-6 or more for achieving the same accuracy. In addition, the supplementary material shows the error analysis made for the algorithms when the normal-mode transformation of path integral beads is used.
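
    The flavor of such Langevin splitting schemes can be shown on a single degree of freedom. The sketch below is a BAOAB-type "middle" step for a 1-D harmonic oscillator, our illustrative one-bead analogue of the thermostat rather than the paper's staging-coordinate PIMD algorithm:

```python
import math
import random

def baoab_step(x, p, dt, m=1.0, k=1.0, gamma=1.0, kT=1.0, rng=random):
    """One 'middle'-type Langevin step (BAOAB splitting) for a 1-D
    harmonic oscillator with force -k*x."""
    p -= 0.5 * dt * k * x                 # B: half kick
    x += 0.5 * dt * p / m                 # A: half drift
    c1 = math.exp(-gamma * dt)            # O: exact Ornstein-Uhlenbeck update
    p = c1 * p + math.sqrt((1 - c1 * c1) * m * kT) * rng.gauss(0, 1)
    x += 0.5 * dt * p / m                 # A: half drift
    p -= 0.5 * dt * k * x                 # B: half kick
    return x, p

random.seed(1)
x = p = 0.0
samples = []
for step in range(200000):
    x, p = baoab_step(x, p, dt=0.2)
    if step > 1000:                       # discard equilibration
        samples.append(x * x)
var = sum(samples) / len(samples)         # configurational variance estimate
```

    For the harmonic oscillator, the sampled position variance should reproduce the target kT/k = 1 even at a fairly large time step, which is the practical payoff of choosing the splitting order carefully.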

  4. Global climate modeling of Saturn's atmosphere: fast and accurate radiative transfer and exploration of seasonal variability

    NASA Astrophysics Data System (ADS)

    Guerlet, Sandrine; Spiga, A.; Sylvestre, M.; Fouchet, T.; Millour, E.; Wordsworth, R.; Leconte, J.; Forget, F.

    2013-10-01

    Recent observations of Saturn’s stratospheric thermal structure and composition revealed new phenomena: an equatorial oscillation in temperature, reminiscent of the Earth's Quasi-Biennial Oscillation; strong meridional contrasts of hydrocarbons; and a warm “beacon” associated with the powerful 2010 storm. Those signatures cannot be reproduced by 1D photochemical and radiative models and suggest that atmospheric dynamics plays a key role. This motivated us to develop a complete 3D General Circulation Model (GCM) for Saturn, based on the LMDz hydrodynamical core, to explore the circulation, seasonal variability, and wave activity in Saturn's atmosphere. In order to closely reproduce Saturn's radiative forcing, particular emphasis was put on obtaining fast and accurate radiative transfer calculations. Our radiative model uses correlated-k distributions and a spectral discretization tailored for Saturn's atmosphere. We include internal heat flux, ring shadowing and aerosols. We will report on the sensitivity of the model to spectral discretization, spectroscopic databases, and aerosol scenarios (varying particle sizes, opacities and vertical structures). We will also discuss the radiative effect of the ring shadowing on Saturn's atmosphere. We will present a comparison of temperature fields obtained with this new radiative equilibrium model to those inferred from Cassini/CIRS observations. In the troposphere, our model reproduces the observed temperature knee caused by heating at the top of the tropospheric aerosol layer. In the lower stratosphere (20mbar

  5. A Simple and Accurate Model to Predict Responses to Multi-electrode Stimulation in the Retina.

    PubMed

    Maturana, Matias I; Apollo, Nicholas V; Hadjinicolaou, Alex E; Garrett, David J; Cloherty, Shaun L; Kameneva, Tatiana; Grayden, David B; Ibbotson, Michael R; Meffin, Hamish

    2016-04-01

    Implantable electrode arrays are widely used in therapeutic stimulation of the nervous system (e.g. cochlear, retinal, and cortical implants). Currently, most neural prostheses use serial stimulation (i.e. one electrode at a time) despite this severely limiting the repertoire of stimuli that can be applied. Methods to reliably predict the outcome of multi-electrode stimulation have not been available. Here, we demonstrate that a linear-nonlinear model accurately predicts neural responses to arbitrary patterns of stimulation using in vitro recordings from single retinal ganglion cells (RGCs) stimulated with a subretinal multi-electrode array. In the model, the stimulus is projected onto a low-dimensional subspace and then undergoes a nonlinear transformation to produce an estimate of spiking probability. The low-dimensional subspace is estimated using principal components analysis, which gives the neuron's electrical receptive field (ERF), i.e. the electrodes to which the neuron is most sensitive. Our model suggests that stimulation proportional to the ERF yields a higher efficacy given a fixed amount of power when compared to equal amplitude stimulation on up to three electrodes. We find that the model captures the responses of all the cells recorded in the study, suggesting that it will generalize to most cell types in the retina. The model is computationally efficient to evaluate and, therefore, appropriate for future real-time applications including stimulation strategies that make use of recorded neural activity to improve the stimulation strategy. PMID:27035143

  6. Simple and accurate modelling of the gravitational potential produced by thick and thin exponential discs

    NASA Astrophysics Data System (ADS)

    Smith, R.; Flynn, C.; Candlish, G. N.; Fellhauer, M.; Gibson, B. K.

    2015-04-01

    We present accurate models of the gravitational potential produced by a radially exponential disc mass distribution. The models are produced by combining three separate Miyamoto-Nagai discs. Such models have been used previously to model the disc of the Milky Way, but here we extend this framework to allow its application to discs of any mass, scalelength, and a wide range of thickness from infinitely thin to near spherical (ellipticities from 0 to 0.9). The models have the advantage of simplicity of implementation, and we expect faster run speeds over a double exponential disc treatment. The potentials are fully analytical, and differentiable at all points. The mass distribution of our models deviates from the radial mass distribution of a pure exponential disc by <0.4 per cent out to 4 disc scalelengths, and <1.9 per cent out to 10 disc scalelengths. We tabulate fitting parameters which facilitate construction of exponential discs for any scalelength, and a wide range of disc thickness (a user-friendly, web-based interface is also available). Our recipe is well suited for numerical modelling of the tidal effects of a giant disc galaxy on star clusters or dwarf galaxies. We consider three worked examples; the Milky Way thin and thick disc, and a discy dwarf galaxy.
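
    The building block of the model is the Miyamoto-Nagai potential; summing three of them gives the paper's construction. The component parameters below are illustrative placeholders, since the fitted (M, a, b) triples for a given scalelength and thickness come from the paper's tables:

```python
import math

def mn_potential(R, z, M, a, b, G=1.0):
    """Miyamoto-Nagai disc potential in cylindrical coordinates (R, z):
    Phi = -G M / sqrt(R^2 + (a + sqrt(z^2 + b^2))^2)."""
    return -G * M / math.sqrt(R**2 + (a + math.sqrt(z**2 + b**2))**2)

def triple_mn_potential(R, z, components):
    """Sum of three Miyamoto-Nagai discs, the construction used in the
    paper; `components` holds (M, a, b) triples (illustrative values here,
    including a negative-mass component as the paper's fits allow)."""
    return sum(mn_potential(R, z, M, a, b) for M, a, b in components)

comps = [(2.0, 3.0, 0.3), (-1.0, 5.0, 0.3), (0.5, 1.5, 0.3)]
phi0 = triple_mn_potential(8.0, 0.0, comps)
```

    Because each term is analytic and differentiable everywhere, forces follow by straightforward differentiation, which is what makes the model convenient for N-body tidal-field calculations.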

  7. A Simple and Accurate Model to Predict Responses to Multi-electrode Stimulation in the Retina

    PubMed Central

    Maturana, Matias I.; Apollo, Nicholas V.; Hadjinicolaou, Alex E.; Garrett, David J.; Cloherty, Shaun L.; Kameneva, Tatiana; Grayden, David B.; Ibbotson, Michael R.; Meffin, Hamish

    2016-01-01

    Implantable electrode arrays are widely used in therapeutic stimulation of the nervous system (e.g. cochlear, retinal, and cortical implants). Currently, most neural prostheses use serial stimulation (i.e. one electrode at a time) despite this severely limiting the repertoire of stimuli that can be applied. Methods to reliably predict the outcome of multi-electrode stimulation have not been available. Here, we demonstrate that a linear-nonlinear model accurately predicts neural responses to arbitrary patterns of stimulation using in vitro recordings from single retinal ganglion cells (RGCs) stimulated with a subretinal multi-electrode array. In the model, the stimulus is projected onto a low-dimensional subspace and then undergoes a nonlinear transformation to produce an estimate of spiking probability. The low-dimensional subspace is estimated using principal components analysis, which gives the neuron’s electrical receptive field (ERF), i.e. the electrodes to which the neuron is most sensitive. Our model suggests that stimulation proportional to the ERF yields a higher efficacy given a fixed amount of power when compared to equal amplitude stimulation on up to three electrodes. We find that the model captures the responses of all the cells recorded in the study, suggesting that it will generalize to most cell types in the retina. The model is computationally efficient to evaluate and, therefore, appropriate for future real-time applications including stimulation strategies that make use of recorded neural activity to improve the stimulation strategy. PMID:27035143
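
    The linear-nonlinear recipe, projecting the stimulus onto a low-dimensional subspace and then applying a pointwise nonlinearity, can be sketched on synthetic data. For simplicity we estimate a one-dimensional subspace with the spike-triggered average rather than the paper's PCA step, and use a fixed logistic nonlinearity:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_erf(stimuli, spikes):
    """One-dimensional stand-in for the paper's PCA step: estimate the
    electrical receptive field (ERF) as the normalized spike-triggered average."""
    sta = stimuli[spikes == 1].mean(axis=0)
    return sta / np.linalg.norm(sta)

def spike_probability(erf, stimulus, slope=5.0, threshold=0.5):
    """Linear projection onto the ERF followed by a logistic nonlinearity."""
    z = stimulus @ erf
    return 1.0 / (1.0 + np.exp(-slope * (z - threshold)))

# synthetic cell: sensitive mostly to electrode 0 of a 4-electrode array
true_erf = np.array([1.0, 0.1, 0.0, 0.0])
true_erf /= np.linalg.norm(true_erf)
stim = rng.normal(size=(5000, 4))
spikes = (stim @ true_erf + 0.3 * rng.normal(size=5000) > 0.5).astype(int)
erf_hat = fit_erf(stim, spikes)
p0 = spike_probability(erf_hat, stim[0])
```

    The recovered ERF aligns closely with the true sensitivity vector, and stimulation amplitudes proportional to it concentrate power on the electrodes the cell actually responds to, which is the efficiency argument made in the abstract.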

  8. Simple and accurate quantification of BTEX in ambient air by SPME and GC-MS.

    PubMed

    Baimatova, Nassiba; Kenessov, Bulat; Koziel, Jacek A; Carlsen, Lars; Bektassov, Marat; Demyanenko, Olga P

    2016-07-01

    Benzene, toluene, ethylbenzene and xylenes (BTEX) comprise one of the most ubiquitous and hazardous groups of ambient air pollutants of concern. Application of standard analytical methods for quantification of BTEX is limited by the complexity of sampling and sample preparation equipment, and budget requirements. Methods based on SPME represent a simpler alternative, but still require complex calibration procedures. The objective of this research was to develop a simpler, low-budget, and accurate method for quantification of BTEX in ambient air based on SPME and GC-MS. Standard 20-mL headspace vials were used for field air sampling and calibration. To avoid challenges with obtaining and working with 'zero' air, slope factors of external standard calibration were determined using standard addition and inherently polluted lab air. For the polydimethylsiloxane (PDMS) fiber, differences between the slope factors of calibration plots obtained using lab and outdoor air were below 14%. The PDMS fiber provided higher precision during calibration, while the use of the Carboxen/PDMS fiber resulted in lower detection limits for benzene and toluene. To provide sufficient accuracy, the use of 20-mL vials requires triplicate sampling and analysis. The method was successfully applied for analysis of 108 ambient air samples from Almaty, Kazakhstan. Average concentrations of benzene, toluene, ethylbenzene and o-xylene were 53, 57, 11 and 14 µg m⁻³, respectively. The developed method can be modified for further quantification of a wider range of volatile organic compounds in air. In addition, the new method is amenable to automation. PMID:27154647
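
    The calibration strategy described, determining slope factors by standard addition in inherently polluted air, rests on the classical standard-addition extrapolation, sketched here with invented numbers:

```python
def standard_addition_concentration(added, responses):
    """Classical standard-addition estimate: regress instrument response on
    spiked amount and extrapolate to zero response; the unknown native
    concentration equals intercept/slope. Generic chemometrics, not the
    paper's exact workflow."""
    n = len(added)
    mean_x = sum(added) / n
    mean_y = sum(responses) / n
    sxx = sum((x - mean_x) ** 2 for x in added)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(added, responses))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return intercept / slope

# invented example: sample natively at 5 units; detector response = 2 per unit
added = [0.0, 5.0, 10.0, 20.0]
responses = [10.0, 20.0, 30.0, 50.0]
c0 = standard_addition_concentration(added, responses)
```

    Because the same matrix (polluted air) is present in every point, matrix effects cancel out of the slope, which is what makes the approach viable without a supply of clean "zero" air.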

  9. A simple accurate method to predict time of ponding under variable intensity rainfall

    NASA Astrophysics Data System (ADS)

    Assouline, S.; Selker, J. S.; Parlange, J.-Y.

    2007-03-01

    The prediction of the time to ponding following commencement of rainfall is fundamental to hydrologic prediction of flood, erosion, and infiltration. Most of the studies to date have focused on prediction of ponding resulting from simple rainfall patterns. This approach was suitable for rainfall reported as average values over intervals of up to a day but does not take advantage of knowledge of the complex patterns of actual rainfall now commonly recorded electronically. A straightforward approach to include the instantaneous rainfall record in the prediction of ponding time and excess rainfall using only the infiltration capacity curve is presented. This method is tested against a numerical solution of the Richards equation on the basis of an actual rainfall record. The predicted time to ponding showed mean error ≤7% for a broad range of soils, with and without surface sealing. In contrast, the standard predictions had average errors of 87%, and worst-case errors exceeding a factor of 10. In addition to errors intrinsic in the modeling framework itself, errors that arise from averaging actual rainfall records over reporting intervals were evaluated. Averaging actual rainfall records observed in Israel over periods of as little as 5 min significantly reduced predicted runoff (75% for the sealed sandy loam and 46% for the silty clay loam), while hourly averaging completely failed to predict ponding in some of the cases.
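
    The straightforward approach described, stepping through the instantaneous rainfall record and declaring ponding when intensity first exceeds the infiltration capacity, can be sketched with a Green-Ampt capacity curve. The method itself works with any infiltration capacity curve, and the parameter values here are invented:

```python
def time_to_ponding(rain, dt, Ks=5.0, psi_dtheta=50.0):
    """Step through an instantaneous rainfall record (intensities in mm/h,
    interval dt in h) and return the time at which intensity first exceeds
    the infiltration capacity f(F) = Ks * (1 + psi_dtheta / F), a
    Green-Ampt-form curve parameterized by cumulative infiltration F (mm)."""
    F = 1e-6          # tiny seed for cumulative infiltration, avoids div by 0
    t = 0.0
    for i in rain:
        fc = Ks * (1 + psi_dtheta / F)
        if i > fc:
            return t  # ponding begins within this interval
        F += i * dt   # before ponding, all rain infiltrates
        t += dt
    return None       # no ponding during the record

# 5-minute record: one hour of light rain, then an intense burst
record = [2.0] * 12 + [60.0] * 12    # mm/h, dt = 1/12 h
tp = time_to_ponding(record, dt=1/12)
```

    Note how the hour of light rain delays ponding by wetting the soil gradually; averaging the record over longer intervals would smear the burst and, as the abstract reports, can suppress the predicted ponding entirely.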

  10. Growing degree hours - a simple, accurate, and precise protocol to approximate growing heat summation for grapevines

    NASA Astrophysics Data System (ADS)

    Gu, S.

    2016-08-01

    Despite its low accuracy and consistency, growing degree days (GDD) has been widely used to approximate growing heat summation (GHS) for regional classification and phenological prediction. GDD is usually calculated from the mean of daily minimum and maximum temperatures (GDDmm) above a growing base temperature (Tgb). To determine approximation errors and accuracy, daily and cumulative GDDmm were compared to GDD based on daily average temperature (GDDavg), growing degree hours (GDH) based on hourly temperatures, and growing degree minutes (GDM) based on minute-by-minute temperatures. Finite error, due to the difference between measured and true temperatures above Tgb, is large in GDDmm but is negligible in GDDavg, GDH, and GDM, depending only upon the number of measured temperatures used for daily approximation. Hidden negative error, due to temperatures below Tgb being averaged over approximation intervals larger than the measuring interval, is large in GDDmm and GDDavg but is negligible in GDH and GDM. Both GDH and GDM improve GHS approximation accuracy over GDDmm or GDDavg by summing multiple integration rectangles, reducing both finite and hidden negative errors. GDH is proposed as the standardized GHS approximation protocol, providing adequate accuracy and high precision independent of Tgb while requiring only simple data recording and processing.
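    The gap between GDDmm and GDH can be made concrete with a short sketch; the base temperature and the hourly profile below are invented for illustration, not data from the study. On a day whose night hours fall below the base, the daily min/max mean hides heat accumulated above the base (the hidden negative error), while GDH sums only the positive hourly excesses.

```python
# Illustrative comparison of GDDmm (daily min/max mean) with GDH (hourly).

def gdd_minmax(t_min, t_max, t_base):
    """GDDmm: degree-days from the mean of the daily min and max."""
    return max((t_min + t_max) / 2.0 - t_base, 0.0)

def gdh(hourly_temps, t_base):
    """GDH: positive hourly excesses over the base, summed as degree-days."""
    return sum(max(t - t_base, 0.0) for t in hourly_temps) / 24.0

# A day with a cool night below the base and a warm afternoon above it.
hours = [8, 7, 6, 6, 5, 6, 8, 11, 14, 17, 20, 22,
         24, 25, 26, 26, 25, 23, 20, 17, 14, 12, 10, 9]
t_base = 10.0

daily_gddmm = gdd_minmax(min(hours), max(hours), t_base)  # 5.5 degree-days
daily_gdh = gdh(hours, t_base)                            # about 6.08
```

    Here GDDmm understates the heat summation because the sub-base night temperatures drag the min/max mean down, whereas GDH counts only the hours actually above the base.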

  11. Simple and fast cosine approximation method for computer-generated hologram calculation.

    PubMed

    Nishitsuji, Takashi; Shimobaba, Tomoyoshi; Kakue, Takashi; Arai, Daisuke; Ito, Tomoyoshi

    2015-12-14

    The cosine function is a computationally heavy operation in computer-generated hologram (CGH) calculation; therefore, it is commonly replaced by substitution methods such as a look-up table. However, the computational load and required memory space of such methods are still large. In this study, we propose a simple and fast cosine function approximation method for CGH calculation. As a result, we succeeded in creating CGHs of sufficient quality while making the calculation up to 1.6 times faster than a look-up-table implementation of the cosine function on a CPU.
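    One common shape such approximations take is a wrapped low-order polynomial evaluated in place of the transcendental call. The parabolic variant below is a standard trick and only a stand-in illustration; it is not the paper's specific approximation.

```python
import math

# Minimal fast-cosine sketch: a wrapped parabola instead of math.cos.

def fast_cos(x):
    # Shift so cosine becomes a sine, then wrap the phase into [-pi, pi).
    x = (x + math.pi / 2.0) % (2.0 * math.pi)
    if x >= math.pi:
        x -= 2.0 * math.pi
    # Parabolic sine approximation on [-pi, pi]: no call to math.cos here.
    return (4.0 / math.pi) * x - (4.0 / math.pi ** 2) * x * abs(x)

# Worst-case absolute error of this crude variant is roughly 0.06;
# CGH-oriented approximations add correction terms for better accuracy.
max_err = max(abs(fast_cos(0.01 * i) - math.cos(0.01 * i))
              for i in range(-700, 700))
```

    The approximation is exact at the phase extremes (0, ±π/2, π), which is why such curves are attractive when moderate amplitude error is tolerable.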

  12. An accurate tool for the fast generation of dark matter halo catalogues

    NASA Astrophysics Data System (ADS)

    Monaco, P.; Sefusatti, E.; Borgani, S.; Crocce, M.; Fosalba, P.; Sheth, R. K.; Theuns, T.

    2013-08-01

    We present a new parallel implementation of the PINpointing Orbit Crossing-Collapsed HIerarchical Objects (PINOCCHIO) algorithm, a quick tool, based on Lagrangian Perturbation Theory, for the hierarchical build-up of dark matter (DM) haloes in cosmological volumes. To assess its ability to predict halo correlations on large scales, we compare its results with those of an N-body simulation of a 3 h^-1 Gpc box sampled with 2048^3 particles taken from the MICE suite, matching the same seeds for the initial conditions. Thanks to the Fastest Fourier Transforms in the West (FFTW) libraries and to the relatively simple design, the code shows very good scaling properties. The CPU time required by PINOCCHIO is a tiny fraction (~1/2000) of that required by the MICE simulation. Varying some of PINOCCHIO's numerical parameters allows one to produce a universal mass function that lies in the range allowed by published fits, although it underestimates the MICE mass function of Friends-of-Friends (FoF) haloes in the high-mass tail. We compare the matter-halo and the halo-halo power spectra with those of the MICE simulation and find that these two-point statistics are well recovered on large scales. In particular, when catalogues are matched in number density, agreement within 10 per cent is achieved for the halo power spectrum. At scales k > 0.1 h Mpc^-1, the inaccuracy of the Zel'dovich approximation in locating halo positions causes an underestimate of the power spectrum that can be modelled as a Gaussian factor with a damping scale of d = 3 h^-1 Mpc at z = 0, decreasing at higher redshift. Finally, a remarkable match is obtained for the reduced halo bispectrum, showing a good description of non-linear halo bias. Our results demonstrate the potential of PINOCCHIO as an accurate and flexible tool for generating large ensembles of mock galaxy surveys, with interesting applications for the analysis of large galaxy redshift surveys.

  13. A simple yet accurate correction for winner's curse can predict signals discovered in much larger genome scans

    PubMed Central

    Bigdeli, T. Bernard; Lee, Donghyung; Webb, Bradley Todd; Riley, Brien P.; Vladimirov, Vladimir I.; Fanous, Ayman H.; Kendler, Kenneth S.; Bacanu, Silviu-Alin

    2016-01-01

    Motivation: For genetic studies, statistically significant variants explain far less trait variance than 'sub-threshold' association signals. To dimension follow-up studies, researchers need to accurately estimate 'true' effect sizes at each SNP, e.g. the true mean of odds ratios (ORs)/regression coefficients (RRs) or Z-score noncentralities. Naïve estimates of effect sizes incur winner's curse biases, which are reduced only by laborious winner's curse adjustments (WCAs). Given that Z-score estimates can be theoretically translated to other scales, we propose a simple method to compute the WCA for Z-scores, i.e. their true means/noncentralities. Results: WCA of Z-scores shrinks them toward zero while, on the P-value scale, multiple testing adjustment (MTA) shrinks P-values toward one, which corresponds to a Z-score of zero. Thus, WCA on the Z-score scale is a proxy for MTA on the P-value scale. Therefore, to estimate Z-score noncentralities for all SNPs in genome scans, we propose the FDR Inverse Quantile Transformation (FIQT). It (i) performs the simpler MTA of P-values using FDR and (ii) obtains noncentralities by back-transforming MTA P-values to the Z-score scale. When compared to competitors, realistic simulations suggest that FIQT is (i) more accurate and (ii) more computationally efficient by orders of magnitude. Practical application of FIQT to the Psychiatric Genomics Consortium schizophrenia cohort predicts a non-trivial fraction of sub-threshold signals which become significant in much larger supersamples. Conclusions: FIQT is a simple, yet accurate, WCA method for Z-scores (and ORs/RRs, via simple transformations). Availability and Implementation: A 10-line R function implementation is available at https://github.com/bacanusa/FIQT. Contact: sabacanu@vcu.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27187203
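    The two FIQT steps described above (FDR-adjust the P-values, then map the adjusted P-values back to the Z scale) can be sketched in a few lines. This is an illustrative re-implementation of the idea, not the authors' R code; the 1e-15 clamp is an assumption that keeps the back-transform inside inv_cdf's domain.

```python
import math
from statistics import NormalDist

# Sketch of FIQT: Benjamini-Hochberg adjustment of two-sided p-values,
# then back-transformation of the adjusted p-values to shrunken Z-scores.

def fiqt(z_scores):
    nd = NormalDist()
    m = len(z_scores)
    # Two-sided p-values from the observed Z-scores (clamped away from 0).
    p = [max(2.0 * (1.0 - nd.cdf(abs(z))), 1e-15) for z in z_scores]
    # Benjamini-Hochberg step-up adjustment.
    order = sorted(range(m), key=lambda i: p[i])
    adj = [1.0] * m
    running_min = 1.0
    for rank in range(m - 1, -1, -1):
        i = order[rank]
        running_min = min(running_min, p[i] * m / (rank + 1))
        adj[i] = running_min
    # Back-transform to shrunken Z-scores, preserving the original signs.
    return [math.copysign(nd.inv_cdf(1.0 - a / 2.0), z)
            for a, z in zip(adj, z_scores)]
```

    Each output keeps the sign of its input and has magnitude no larger than it, since the adjusted p-value is never smaller than the raw one.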

  14. A simple and accurate grading system for orthoiodohippurate renal scans in the assessment of post-transplant renal function

    SciTech Connect

    Zaki, S.K.; Bretan, P.N.; Go, R.T.; Rehm, P.K.; Streem, S.B.; Novick, A.C. )

    1990-06-01

    Orthoiodohippurate renal scanning has proved to be a reliable, noninvasive method for the evaluation and follow-up of renal allograft function. However, a standardized system for grading renal function with this test is not available. We propose a simple grading system to distinguish the different functional phases of hippurate scanning in renal transplant recipients. This grading system was studied in 138 patients who were evaluated 1 week after renal transplantation. There was a significant correlation between the isotope renographic functional grade and clinical correlates of allograft function such as the serum creatinine level (p = 0.0001), blood urea nitrogen level (p = 0.0001), urine output (p = 0.005) and need for hemodialysis (p = 0.007). We recommend this grading system as a simple and accurate method to interpret orthoiodohippurate renal scans in the evaluation and follow-up of renal allograft recipients.

  15. The expansion in Gegenbauer polynomials: A simple method for the fast computation of the Gegenbauer coefficients

    NASA Astrophysics Data System (ADS)

    De Micheli, Enrico; Viano, Giovanni Alberto

    2013-04-01

    We present a simple and fast algorithm for the computation of the Gegenbauer transform, which is known to be very useful in the development of spectral methods for the numerical solution of ordinary and partial differential equations of physical interest. We prove that the coefficients of the expansion of a function f(x) in Gegenbauer (also known as ultraspherical) polynomials coincide with the Fourier coefficients of a suitable integral transform of the function f(x). This allows us to compute N Gegenbauer coefficients in O(N log2 N) operations by means of a single Fast Fourier Transform of the integral transform of f(x). We also show that the inverse Gegenbauer transform is expressible as an Abel-type transform of a suitable Fourier series. This fact yields a novel algorithm for the fast evaluation of Gegenbauer expansions.

  16. LETTERS AND COMMENTS: A trigonometric approximation for the tension in the string of a simple pendulum accurate for all amplitudes

    NASA Astrophysics Data System (ADS)

    Lima, F. M. S.

    2009-11-01

    In a previous work, O'Connell (Phys. Teach. 40, 24 (2002)) investigated the time dependence of the tension in the string of a simple pendulum oscillating within the small-angle regime. In spite of the approximation sin θ ≈ θ being accurate only for amplitudes below 7°, his experimental results are for a pendulum oscillating with an amplitude of about 18°, hence beyond the small-angle regime. This lapse may also be found in some textbooks, laboratory manuals, and internet resources. Noting that the exact analytical solution for this problem involves the so-called Jacobi elliptic functions, which are unknown to most students (and even instructors), I make use of a sinusoidal approximate solution for the pendulum equation I introduced in a recent work (Eur. J. Phys. 29 1091 (2008)) to derive a simple trigonometric approximation for the tension valid for all possible amplitudes. This approximation is compared to both O'Connell's and the exact results, revealing that it is accurate enough for analysing large-angle pendulum experiments.
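    For reference, the tension along the string follows exactly from energy conservation combined with the radial equation of motion (a standard result for release from rest at amplitude θ₀, not specific to this paper); a trigonometric approximation of the kind discussed above amounts to evaluating this expression with an approximate θ(t):

```latex
% Energy conservation between the release angle \theta_0 and angle \theta:
\tfrac{1}{2}mv^{2} = mgL\,(\cos\theta - \cos\theta_0)
% Radial (centripetal) equation of motion along the string:
T - mg\cos\theta = \frac{mv^{2}}{L}
% Eliminating v^{2} gives the exact tension:
T(\theta) = mg\,(3\cos\theta - 2\cos\theta_0)
```

    At the lowest point (θ = 0) the tension peaks at mg(3 − 2 cos θ₀), which exceeds mg for any nonzero amplitude.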

  17. Many Is Better Than One: An Integration of Multiple Simple Strategies for Accurate Lung Segmentation in CT Images.

    PubMed

    Shi, Zhenghao; Ma, Jiejue; Zhao, Minghua; Liu, Yonghong; Feng, Yaning; Zhang, Ming; He, Lifeng; Suzuki, Kenji

    2016-01-01

    Accurate lung segmentation is an essential step in developing a computer-aided lung disease diagnosis system. However, because of the high variability of computed tomography (CT) images, it remains a difficult task to accurately segment lung tissue in CT slices using a single simple strategy. Motivated by this, a novel CT lung segmentation method based on the integration of multiple strategies is proposed in this paper. Firstly, to suppress noise, the input CT slice is smoothed using the guided filter. Then, the smoothed slice is transformed into a binary image using an optimized threshold. Next, a region growing strategy is employed to extract thorax regions. Lung regions are then segmented from the thorax regions using a seed-based random walk algorithm. The segmented lung contour is subsequently smoothed and corrected with a curvature-based correction method on each axial slice. Finally, with the lung masks, the lung region is automatically segmented from a CT slice. The proposed method was validated on a CT database consisting of 23 scans, comprising 883 2D slices in total (roughly 38 slices per scan), by comparing it to a commonly used lung segmentation method. Experimental results show that the proposed method accurately segmented lung regions in CT slices. PMID:27635395

  18. Many Is Better Than One: An Integration of Multiple Simple Strategies for Accurate Lung Segmentation in CT Images

    PubMed Central

    Zhao, Minghua; Liu, Yonghong; Feng, Yaning; Zhang, Ming; He, Lifeng; Suzuki, Kenji

    2016-01-01

    Accurate lung segmentation is an essential step in developing a computer-aided lung disease diagnosis system. However, because of the high variability of computed tomography (CT) images, it remains a difficult task to accurately segment lung tissue in CT slices using a single simple strategy. Motivated by this, a novel CT lung segmentation method based on the integration of multiple strategies is proposed in this paper. Firstly, to suppress noise, the input CT slice is smoothed using the guided filter. Then, the smoothed slice is transformed into a binary image using an optimized threshold. Next, a region growing strategy is employed to extract thorax regions. Lung regions are then segmented from the thorax regions using a seed-based random walk algorithm. The segmented lung contour is subsequently smoothed and corrected with a curvature-based correction method on each axial slice. Finally, with the lung masks, the lung region is automatically segmented from a CT slice. The proposed method was validated on a CT database consisting of 23 scans, comprising 883 2D slices in total (roughly 38 slices per scan), by comparing it to a commonly used lung segmentation method. Experimental results show that the proposed method accurately segmented lung regions in CT slices. PMID:27635395

  19. 3ARM: A Fast, Accurate Radiative Transfer Model for Use in Climate Models

    NASA Technical Reports Server (NTRS)

    Bergstrom, R. W.; Kinne, S.; Sokolik, I. N.; Toon, O. B.; Mlawer, E. J.; Clough, S. A.; Ackerman, T. P.; Mather, J.

    1996-01-01

    A new radiative transfer model combining the efforts of three groups of researchers is discussed. The model accurately computes radiative transfer in inhomogeneous absorbing, scattering, and emitting atmospheres. As an illustration of the model, results are shown for the effects of dust on thermal radiation.

  20. 3ARM: A Fast, Accurate Radiative Transfer Model for use in Climate Models

    NASA Technical Reports Server (NTRS)

    Bergstrom, R. W.; Kinne, S.; Sokolik, I. N.; Toon, O. B.; Mlawer, E. J.; Clough, S. A.; Ackerman, T. P.; Mather, J.

    1996-01-01

    A new radiative transfer model combining the efforts of three groups of researchers is discussed. The model accurately computes radiative transfer in inhomogeneous absorbing, scattering, and emitting atmospheres. As an illustration of the model, results are shown for the effects of dust on thermal radiation.

  1. Nurse initiated thrombolysis in the accident and emergency department: safe, accurate, and faster than fast track

    PubMed Central

    Heath, S; Bain, R; Andrews, A; Chida, S; Kitchen, S; Walters, M

    2003-01-01

    Objective: To reduce the time between arrival at hospital of a patient with acute myocardial infarction and administration of thrombolytic therapy (door to needle time) by the introduction of nurse initiated thrombolysis in the accident and emergency department. Methods: Two acute chest pain nurse specialists (ACPNS) based in A&E for 62.5 hours of the week were responsible for initiating thrombolysis in the A&E department. The service reverts to a "fast track" system outside of these hours, with the on call medical team prescribing thrombolysis on the coronary care unit. Prospectively gathered data were analysed for a nine month period and a head to head comparison made between the mean and median door to needle times for both systems of thrombolysis delivery. Results: Data from 91 patients were analysed; 43 (47%) were thrombolysed in A&E by the ACPNS and 48 (53%) were thrombolysed in the coronary care unit by the on call medical team. The ACPNS achieved a median door to needle time of 23 minutes (IQR=17 to 32) compared with 56 minutes (IQR=34 to 79.5) for the fast track. The proportion of patients thrombolysed in 30 minutes by the ACPNS and fast track system was 72% (31 of 43) and 21% (10 of 48) respectively (difference=51%, 95% confidence intervals 34% to 69%, p<0.05). Conclusion: Diagnosis of acute myocardial infarction and administration of thrombolysis by experienced cardiology nurses in A&E is a safe and effective strategy for reducing door to needle times, even when compared with a conventional fast track system. PMID:12954678

  2. A Simple Spectrophotometric Method for the Determination of Thiobarbituric Acid Reactive Substances in Fried Fast Foods

    PubMed Central

    Zeb, Alam; Ullah, Fareed

    2016-01-01

    A simple and highly sensitive spectrophotometric method was developed for the determination of thiobarbituric acid reactive substances (TBARS) as a marker for lipid peroxidation in fried fast foods. The method uses the reaction of malondialdehyde (MDA) and TBA in a glacial acetic acid medium. The method was precise, sensitive, and highly reproducible for the quantitative determination of TBARS. The precision of the extractions and of the analytical procedure was very high compared to the reported methods. The method was used to determine the TBARS contents of fried fast foods such as Shami kebab, samosa, fried bread, and potato chips. Shami kebab, samosa, and potato chips had higher amounts of TBARS in the glacial acetic acid-water extraction system than in pure glacial acetic acid, whereas the reverse was true for the fried bread samples. The method can successfully be used for the determination of TBARS in other food matrices, especially in quality control in the food industry. PMID:27123360

  3. A Simple Spectrophotometric Method for the Determination of Thiobarbituric Acid Reactive Substances in Fried Fast Foods.

    PubMed

    Zeb, Alam; Ullah, Fareed

    2016-01-01

    A simple and highly sensitive spectrophotometric method was developed for the determination of thiobarbituric acid reactive substances (TBARS) as a marker for lipid peroxidation in fried fast foods. The method uses the reaction of malondialdehyde (MDA) and TBA in a glacial acetic acid medium. The method was precise, sensitive, and highly reproducible for the quantitative determination of TBARS. The precision of the extractions and of the analytical procedure was very high compared to the reported methods. The method was used to determine the TBARS contents of fried fast foods such as Shami kebab, samosa, fried bread, and potato chips. Shami kebab, samosa, and potato chips had higher amounts of TBARS in the glacial acetic acid-water extraction system than in pure glacial acetic acid, whereas the reverse was true for the fried bread samples. The method can successfully be used for the determination of TBARS in other food matrices, especially in quality control in the food industry. PMID:27123360

  4. Woods: A fast and accurate functional annotator and classifier of genomic and metagenomic sequences.

    PubMed

    Sharma, Ashok K; Gupta, Ankit; Kumar, Sanjiv; Dhakan, Darshan B; Sharma, Vineet K

    2015-07-01

    Functional annotation of gigantic metagenomic datasets is one of the most time-consuming and computationally demanding tasks, and it is currently a bottleneck for efficient analysis. The commonly used homology-based methods to functionally annotate and classify proteins are extremely slow. Therefore, to achieve faster and accurate functional annotation, we have developed an orthology-based functional classifier, 'Woods', using a combination of machine learning and similarity-based approaches. Woods displayed a precision of 98.79% on an independent genomic dataset, 96.66% on a simulated metagenomic dataset and >97% on two real metagenomic datasets. In addition, it performed >87 times faster than BLAST on the two real metagenomic datasets. Woods can be used as a highly efficient and accurate classifier with high-throughput capability, which facilitates its use on large metagenomic datasets. PMID:25863333

  5. A simplified hydroethidine method for fast and accurate detection of superoxide production in isolated mitochondria.

    PubMed

    Back, Patricia; Matthijssens, Filip; Vanfleteren, Jacques R; Braeckman, Bart P

    2012-04-01

    Because superoxide is involved in various physiological processes, many efforts have been made to improve its accurate quantification. We optimized and validated a superoxide-specific and -sensitive detection method. The protocol is based on fluorescence detection of the superoxide-specific hydroethidine (HE) oxidation product, 2-hydroxyethidium. We established a method for the quantification of superoxide production in isolated mitochondria without the need for acetone extraction and purification chromatography as described in previous studies.

  6. Nanoparticle film deposition using a simple and fast centrifuge sedimentation method

    NASA Astrophysics Data System (ADS)

    Markelonis, Andrew R.; Wang, Joanna S.; Ullrich, Bruno; Wai, Chien M.; Brown, Gail J.

    2015-04-01

    Colloidal nanoparticles (NPs) can be deposited uniformly on flat or on rough and uneven substrate surfaces using a standard centrifuge and common solvents. This method is suitable for depositing different types of nanoparticles on a variety of substrates, including glass, silicon wafers, aluminum foil, copper sheet, polymer film, plastic, and paper. The thickness of the films can be controlled by the amount of colloidal nanoparticle solution used in the preparation. The method offers a fast and simple procedure, compared to other currently known nanoparticle deposition techniques, for studying the optical properties of nanoparticle films.

  7. A new, fast, and simple DNA extraction method for HLA and VNTR genotyping by PCR amplification.

    PubMed

    Planelles, D; Llopis, F; Puig, N; Montoro, J A

    1996-01-01

    In the present study a new DNA extraction method is described. The new protocol, which uses caprylic acid for isolating DNA, is technically simple and very fast, as it enables us to obtain DNA from peripheral blood in only 10 minutes. Moreover, DNA preparations obtained with this procedure can be effectively used for HLA class II and variable number tandem repeat genotyping by polymerase chain reaction, so the new method is well suited for routine clinical use in any type of analysis requiring DNA typing for individual characterization.

  8. Simple and fast screening of G-quadruplex ligands with electrochemical detection system.

    PubMed

    Fan, Qiongxuan; Li, Chao; Tao, Yaqin; Mao, Xiaoxia; Li, Genxi

    2016-11-01

    Small molecules that facilitate and stabilize the formation of G-quadruplexes can be used in cancer treatment, because the G-quadruplex structure can inhibit the activity of telomerase, an enzyme over-expressed in many cancer cells. There is therefore considerable interest in developing a simple and high-performance method for screening small molecules that bind to G-quadruplexes. Here, we have designed a simple electrochemical approach to screen such ligands, based on the fact that the formation and stabilization of a G-quadruplex by a ligand may inhibit electron transfer from redox species to the electrode surface. As a proof-of-concept study, two classical G-quadruplex ligands, TMPyP4 and BRACO-19, are studied in this work, demonstrating that the method is fast and robust and may be applied to screening G-quadruplex ligands for anticancer drug testing and design in the future. PMID:27591598

  9. Simple and fast screening of G-quadruplex ligands with electrochemical detection system.

    PubMed

    Fan, Qiongxuan; Li, Chao; Tao, Yaqin; Mao, Xiaoxia; Li, Genxi

    2016-11-01

    Small molecules that facilitate and stabilize the formation of G-quadruplexes can be used in cancer treatment, because the G-quadruplex structure can inhibit the activity of telomerase, an enzyme over-expressed in many cancer cells. There is therefore considerable interest in developing a simple and high-performance method for screening small molecules that bind to G-quadruplexes. Here, we have designed a simple electrochemical approach to screen such ligands, based on the fact that the formation and stabilization of a G-quadruplex by a ligand may inhibit electron transfer from redox species to the electrode surface. As a proof-of-concept study, two classical G-quadruplex ligands, TMPyP4 and BRACO-19, are studied in this work, demonstrating that the method is fast and robust and may be applied to screening G-quadruplex ligands for anticancer drug testing and design in the future.

  10. Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance.

    PubMed

    Majaj, Najib J; Hong, Ha; Solomon, Ethan A; DiCarlo, James J

    2015-09-30

    database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. PMID:26424887

  11. Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance

    PubMed Central

    Hong, Ha; Solomon, Ethan A.; DiCarlo, James J.

    2015-01-01

    database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. PMID:26424887

  12. Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance.

    PubMed

    Majaj, Najib J; Hong, Ha; Solomon, Ethan A; DiCarlo, James J

    2015-09-30

    database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior.

  13. A fast accurate approximation method with multigrid solver for two-dimensional fractional sub-diffusion equation

    NASA Astrophysics Data System (ADS)

    Lin, Xue-lei; Lu, Xin; Ng, Michael K.; Sun, Hai-Wei

    2016-10-01

    A fast, accurate approximation method with a multigrid solver is proposed to solve a two-dimensional fractional sub-diffusion equation. Using the finite difference discretization of the fractional time derivative, a block lower triangular Toeplitz matrix is obtained, in which each main diagonal block contains a two-dimensional matrix for the Laplacian operator. Our idea is to make use of the block ɛ-circulant approximation via fast Fourier transforms, so that the resulting task is to solve a block diagonal system, where each diagonal block matrix is the sum of a complex scalar times the identity matrix and a Laplacian matrix. We show that the accuracy of the approximation scheme is O(ɛ). Because of the special diagonal block structure, we employ the multigrid method to solve the resulting linear systems. The convergence of the multigrid method is studied. Numerical examples are presented to illustrate the accuracy of the proposed approximation scheme and the efficiency of the proposed solver.

  14. READSCAN: a fast and scalable pathogen discovery program with accurate genome relative abundance estimation

    PubMed Central

    Rashid, Mamoon; Pain, Arnab

    2013-01-01

    Summary: READSCAN is a highly scalable parallel program to identify non-host sequences (of potential pathogen origin) and estimate their genome relative abundance in high-throughput sequence datasets. READSCAN accurately classified human and viral sequences on a 20.1 million reads simulated dataset in <27 min using a small Beowulf compute cluster with 16 nodes (Supplementary Material). Availability: http://cbrc.kaust.edu.sa/readscan Contact: arnab.pain@kaust.edu.sa or raeece.naeem@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23193222

  15. FAST TRACK COMMUNICATION Accurate estimate of α variation and isotope shift parameters in Na and Mg+

    NASA Astrophysics Data System (ADS)

    Sahoo, B. K.

    2010-12-01

    We present accurate calculations of fine-structure constant variation coefficients and isotope shifts in Na and Mg+ using the relativistic coupled-cluster method. In our approach, we are able to discover the roles of various correlation effects explicitly to all orders in these calculations. Most of the results, especially for the excited states, are reported for the first time. It is possible to ascertain suitable anchor and probe lines for the studies of possible variation in the fine-structure constant by using the above results in the considered systems.

  16. RapGene: a fast and accurate strategy for synthetic gene assembly in Escherichia coli

    PubMed Central

    Zampini, Massimiliano; Stevens, Pauline Rees; Pachebat, Justin A.; Kingston-Smith, Alison; Mur, Luis A. J.; Hayes, Finbarr

    2015-01-01

    The ability to assemble DNA sequences de novo through efficient and powerful DNA fabrication methods is one of the foundational technologies of synthetic biology. Gene synthesis, in particular, has been considered the main driver for the emergence of this new scientific discipline. Here we describe RapGene, a rapid gene assembly technique which was successfully tested for the synthesis and cloning of both prokaryotic and eukaryotic genes through a ligation independent approach. The method developed in this study is a complete bacterial gene synthesis platform for the quick, accurate and cost effective fabrication and cloning of gene-length sequences that employ the widely used host Escherichia coli. PMID:26062748

  17. A localized basis that allows fast and accurate second order Moller-Plesset calculations

    SciTech Connect

    Subotnik, Joseph E.; Head-Gordon, Martin

    2004-10-27

    We present a method for computing a basis of localized orthonormal orbitals (both occupied and virtual) in whose representation the Fock matrix is extremely diagonal-dominant. The existence of these orbitals is shown empirically to be sufficient for achieving highly accurate MP2 energies, calculated according to Kapuy's method. This method (which we abbreviate KMP2), which involves a different partitioning of the n-electron Hamiltonian, scales at most quadratically, with the potential for linear scaling, in the number of electrons. As such, we believe the KMP2 algorithm presented here could be the basis of a viable approach to local correlation calculations.

  18. Fast and accurate low-dimensional reduction of biophysically detailed neuron models.

    PubMed

    Marasco, Addolorata; Limongiello, Alessandro; Migliore, Michele

    2012-01-01

    Realistic models of neurons are quite successful in complementing traditional experimental techniques. However, networks of such models require computational power beyond the capabilities of current supercomputers, and the methods used so far to reduce their complexity do not take into account the key features of the cells or critical physiological properties. Here we introduce a new, automatic and fast method to map realistic neurons into equivalent reduced models running up to >40 times faster, while maintaining a very high accuracy of the membrane potential dynamics during synaptic inputs and a direct link with experimental observables. The mapping of arbitrary sets of synaptic inputs, without additional fine tuning, would also allow the convenient and efficient implementation of a new generation of large-scale simulations of brain regions reproducing the biological variability observed in real neurons, with unprecedented advances in understanding higher brain functions. PMID:23226594

  19. Unbounded Binary Search for a Fast and Accurate Maximum Power Point Tracking

    NASA Astrophysics Data System (ADS)

    Kim, Yong Sin; Winston, Roland

    2011-12-01

    This paper presents a technique for maximum power point tracking (MPPT) of a concentrating photovoltaic system using cell-level power optimization. Perturb and observe (P&O) has been the standard MPPT algorithm, but it introduces a tradeoff between the tracking speed and the accuracy of the maximum power delivered. The P&O algorithm is not suitable for rapid changes in environmental conditions caused by partial shading and self-shading, because its tracking time is linear in the length of the voltage range. Some studies have addressed fast tracking, but their methods rely on internal ad hoc parameters. In this paper, by using the proposed unbounded binary search algorithm for MPPT, the tracking time becomes a logarithmic function of the voltage search range, without ad hoc parameters.
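
The core idea, that a two-phase search reduces tracking time from linear to logarithmic in the voltage range, can be sketched as follows. This is a minimal illustration under the assumption of a unimodal power-voltage curve, not the paper's actual implementation; `power`, `v_step`, and `tol` are hypothetical names.

```python
def mppt_unbounded_search(power, v_step=0.01, tol=1e-4):
    """Locate the maximum of a unimodal power(v) curve in O(log N)
    evaluations. Phase 1 doubles the voltage bound until the curve turns
    over (the 'unbounded' part); phase 2 binary-searches on the sign of
    the local slope. Assumes power(v) rises then falls (unimodal)."""
    # Phase 1: exponential (unbounded) search to bracket the peak.
    lo, hi = 0.0, v_step
    while power(2.0 * hi) > power(hi):
        lo, hi = hi, 2.0 * hi
    hi = 2.0 * hi  # the peak now lies in (lo, hi]
    # Phase 2: binary search on the slope sign within the bracket.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if power(mid + tol) > power(mid):  # slope positive: peak to the right
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Because each phase halves or doubles the interval, the number of `power` evaluations grows logarithmically with the search range, matching the abstract's claim.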

  1. The SPECIES and ORGANISMS Resources for Fast and Accurate Identification of Taxonomic Names in Text

    PubMed Central

    Fanini, Lucia; Faulwetter, Sarah; Pavloudi, Christina; Vasileiadou, Aikaterini; Arvanitidis, Christos; Jensen, Lars Juhl

    2013-01-01

    The exponential growth of the biomedical literature is making the need for efficient, accurate text-mining tools increasingly clear. The identification of named biological entities in text is a central and difficult task. We have developed an efficient algorithm and implementation of a dictionary-based approach to named entity recognition, which we here use to identify names of species and other taxa in text. The tool, SPECIES, is more than an order of magnitude faster than and as accurate as existing tools. The precision and recall were assessed both on an existing gold-standard corpus and on a new corpus of 800 abstracts, which were manually annotated after the development of the tool. The corpus comprises abstracts from journals selected to represent many taxonomic groups, which gives insights into which types of organism names are hard to detect and which are easy. Finally, we have tagged organism names in the entire Medline database and developed a web resource, ORGANISMS, that makes the results accessible to the broad community of biologists. The SPECIES software is open source and can be downloaded from http://species.jensenlab.org along with dictionary files and the manually annotated gold-standard corpus. The ORGANISMS web resource can be found at http://organisms.jensenlab.org. PMID:23823062
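
The dictionary-based recognition strategy can be illustrated with a toy longest-match tagger. This is a sketch of the general approach only; the real SPECIES implementation uses a highly optimized dictionary with orthographic variant handling, and `tag_names` is a hypothetical function.

```python
def tag_names(text, dictionary):
    """Greedy longest-match dictionary tagger: at each token position, try
    the longest candidate phrase first and fall back to shorter ones.
    `dictionary` is a set of known taxon names."""
    tokens = text.split()
    lower = {name.lower(): name for name in dictionary}
    max_len = max(len(name.split()) for name in dictionary)
    hits, i = [], 0
    while i < len(tokens):
        for L in range(min(max_len, len(tokens) - i), 0, -1):
            cand = " ".join(tokens[i:i + L]).lower().strip(".,;")
            if cand in lower:
                hits.append(lower[cand])  # record the canonical name
                i += L                    # skip past the matched phrase
                break
        else:
            i += 1
    return hits
```

Production systems replace the linear phrase probing with hashing over precomputed token n-grams, which is what makes the published tool an order of magnitude faster than earlier taggers.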

  2. Fast and accurate sensitivity analysis of IMPT treatment plans using Polynomial Chaos Expansion

    NASA Astrophysics Data System (ADS)

    Perkó, Zoltán; van der Voort, Sebastian R.; van de Water, Steven; Hartman, Charlotte M. H.; Hoogeman, Mischa; Lathouwers, Danny

    2016-06-01

    The highly conformal planned dose distribution achievable in intensity modulated proton therapy (IMPT) can be severely compromised by uncertainties in patient setup and proton range. While several robust optimization approaches have been presented to address this issue, appropriate methods to accurately estimate the robustness of treatment plans are still lacking. To fill this gap we present Polynomial Chaos Expansion (PCE) techniques, which are easily applicable and create a meta-model of the dose engine by approximating the dose in every voxel with multidimensional polynomials. This Polynomial Chaos (PC) model can be built in an automated fashion at relatively low cost, and subsequently it can be used to perform comprehensive robustness analysis. We adapted PC to provide, among other quantities, the expected dose, the dose variance, accurate probability distributions of dose-volume histogram (DVH) metrics (e.g. minimum tumor or maximum organ dose), exact bandwidths of DVHs, and a separation of the effects of random and systematic errors. We present the outcome of our verification experiments based on 6 head-and-neck (HN) patients, and exemplify the usefulness of PCE by comparing a robust and a non-robust treatment plan for a selected HN case. The results suggest that PCE is highly valuable for both research and clinical applications.
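
The principle behind PCE, projecting a model of uncertain inputs onto orthogonal polynomials so that statistics follow directly from the coefficients, can be sketched in one dimension. This is a generic illustration assuming a standard normal input, not the paper's multidimensional dose meta-model; `pce_1d` is a hypothetical name.

```python
import math
import numpy as np

def pce_1d(f, order=4, quad=12):
    """Project a model f of a standard normal input onto probabilists'
    Hermite polynomials He_k via Gauss-Hermite quadrature. The expansion's
    mean is c[0] and its variance is sum_k k! * c[k]**2 (k >= 1), so both
    statistics come for free once the coefficients are known."""
    x, w = np.polynomial.hermite_e.hermegauss(quad)  # weight exp(-x^2/2)
    w = w / math.sqrt(2.0 * math.pi)                 # normalize to N(0,1)
    fx = f(x)
    c = []
    for k in range(order + 1):
        He_k = np.polynomial.hermite_e.hermeval(x, [0.0] * k + [1.0])
        # c_k = E[f(X) He_k(X)] / k!  (orthogonality: E[He_j He_k] = k! d_jk)
        c.append(float(np.sum(w * fx * He_k)) / math.factorial(k))
    mean = c[0]
    var = sum(math.factorial(k) * ck ** 2 for k, ck in enumerate(c) if k > 0)
    return c, mean, var
```

For f(x) = x² the expansion is He₂(x) + 1, so the mean is 1 and the variance is 2!·1² = 2, matching Var(X²) for X ~ N(0,1).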

  3. Fast and accurate determination of 3D temperature distribution using fraction-step semi-implicit method

    NASA Astrophysics Data System (ADS)

    Cen, Wei; Hoppe, Ralph; Gu, Ning

    2016-09-01

    In this paper, we propose a method to determine the 3-dimensional thermal response to electromagnetic exposure quickly and accurately. Because of its stability criterion, the explicit finite-difference time-domain (FDTD) method is fast only if the spatial step is not set very small. Here, the semi-implicit Crank-Nicolson method, which is unconditionally stable in time, is used for the time-domain discretization, and the idea of the fractional-step method is applied in 3 dimensions to obtain an efficient numerical implementation. Compared with the explicit FDTD, at similar numerical precision, the proposed method takes less than 1/200 of the execution time.
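
The contrast between the explicit stability limit and the unconditionally stable Crank-Nicolson update can be illustrated with a 1D heat-equation step; the paper's fractional-step scheme applies the same idea one spatial direction at a time in 3D. A minimal sketch, assuming Dirichlet boundaries:

```python
import numpy as np

def crank_nicolson_step(u, alpha, dx, dt):
    """One Crank-Nicolson step for the 1D heat equation u_t = alpha*u_xx
    with fixed endpoints. The scheme is unconditionally stable, so (unlike
    the explicit update) dt is not limited by dt <= dx**2 / (2*alpha)."""
    n = len(u)
    r = alpha * dt / (2.0 * dx * dx)
    off = -r * np.ones(n - 1)
    # Implicit (A) and explicit (B) halves of the update: A u_new = B u_old.
    A = np.diag((1 + 2 * r) * np.ones(n)) + np.diag(off, 1) + np.diag(off, -1)
    B = np.diag((1 - 2 * r) * np.ones(n)) - np.diag(off, 1) - np.diag(off, -1)
    for M in (A, B):  # pin the boundary rows (Dirichlet endpoints)
        M[0, :] = 0.0
        M[-1, :] = 0.0
        M[0, 0] = M[-1, -1] = 1.0
    return np.linalg.solve(A, B @ np.asarray(u))
```

With dx = 0.05 the explicit limit would be dt ≤ 0.00125; the step below uses dt = 0.01 and still decays the sine profile smoothly, which is the property the paper exploits.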

  4. Fast, Accurate and Precise Mid-Sagittal Plane Location in 3D MR Images of the Brain

    NASA Astrophysics Data System (ADS)

    Bergo, Felipe P. G.; Falcão, Alexandre X.; Yasuda, Clarissa L.; Ruppert, Guilherme C. S.

    Extraction of the mid-sagittal plane (MSP) is a key step for brain image registration and asymmetry analysis. We present a fast MSP extraction method for 3D MR images, based on automatic segmentation of the brain and on heuristic maximization of the cerebro-spinal fluid within the MSP. The method is robust to severe anatomical asymmetries between the hemispheres, caused by surgical procedures and lesions. The method is also accurate with respect to MSP delineations done by a specialist. The method was evaluated on 64 MR images (36 pathological, 20 healthy, 8 synthetic), and it found a precise and accurate approximation of the MSP in all of them, with a mean time of 60.0 seconds per image, a mean angular variation within the same image (precision) of 1.26° and a mean angular difference from specialist delineations (accuracy) of 1.64°.

  5. A Simple Transmission Electron Microscopy Method for Fast Thickness Characterization of Suspended Graphene and Graphite Flakes.

    PubMed

    Rubino, Stefano; Akhtar, Sultan; Leifer, Klaus

    2016-02-01

    We present a simple, fast method for thickness characterization of suspended graphene/graphite flakes that is based on transmission electron microscopy (TEM). We derive an analytical expression for the intensity of the transmitted electron beam I0(t) as a function of the specimen thickness t (t<λ, where λ is the absorption constant for graphite). We show that in thin graphite crystals the transmitted intensity is a linear function of t. Furthermore, high-resolution (HR) TEM simulations are performed to obtain λ for a 001 zone axis orientation, in a two-beam case and in a low symmetry orientation. Subsequently, HR images (used to determine t) and bright-field images (to measure I0(0) and I0(t)) were acquired to experimentally determine λ. The experimental value measured in the low symmetry orientation matches the calculated value (i.e., λ=225±9 nm). The simulations also show that the linear approximation is valid up to a sample thickness of 3-4 nm regardless of the orientation, and up to several tens of nanometers for a low symmetry orientation. When compared with standard techniques for thickness determination of graphene/graphite, the method we propose has the advantage of being simple and fast, requiring only the acquisition of bright-field images. PMID:26915000
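
The thickness-from-intensity relation can be sketched directly, assuming the standard exponential absorption law I(t) = I0·exp(-t/λ), from which the paper's linear thin-sample approximation follows; the function names are hypothetical.

```python
import math

def thickness_from_intensity(I0, It, lam=225.0):
    """Invert the absorption law I(t) = I0 * exp(-t / lam) for thickness t.
    lam is the absorption constant in nm; 225 nm is the paper's measured
    value for the low-symmetry orientation."""
    return lam * math.log(I0 / It)

def thickness_linear(I0, It, lam=225.0):
    """First-order (linear) approximation, valid for t << lam, i.e. the
    thin-flake regime described in the abstract."""
    return lam * (1.0 - It / I0)
```

For a few-nanometer flake the two estimates agree to well under a nanometer, which is why a single bright-field intensity ratio suffices in practice.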

  7. A fast and accurate method to predict 2D and 3D aerodynamic boundary layer flows

    NASA Astrophysics Data System (ADS)

    Bijleveld, H. A.; Veldman, A. E. P.

    2014-12-01

    A quasi-simultaneous interaction method is applied to predict 2D and 3D aerodynamic flows. This method is suitable for offshore wind turbine design software, as it is very accurate and computationally reasonably cheap. This study shows the results for a NACA 0012 airfoil. The two applied solvers converge to the experimental values when the grid is refined. We also show that in separation the eigenvalues remain positive, thus avoiding the Goldstein singularity at separation. In 3D we show a flow over a dent in which separation occurs. A rotating flat plate is used to show the applicability of the method to rotating flows. The demonstrated capabilities of the method indicate that the quasi-simultaneous interaction method is suitable for design methods for offshore wind turbine blades.

  8. A fast and accurate PCA based radiative transfer model: Extension to the broadband shortwave region

    NASA Astrophysics Data System (ADS)

    Kopparla, Pushkar; Natraj, Vijay; Spurr, Robert; Shia, Run-Lie; Crisp, David; Yung, Yuk L.

    2016-04-01

    Accurate radiative transfer (RT) calculations are necessary for many earth-atmosphere applications, from remote sensing retrieval to climate modeling. A Principal Component Analysis (PCA)-based spectral binning method has been shown to provide an order-of-magnitude increase in computational speed while maintaining an overall accuracy of 0.01% (compared to line-by-line calculations) over narrow spectral bands. In this paper, we extend the PCA method to RT calculations over the entire shortwave region of the spectrum from 0.3 to 3 microns. The region is divided into 33 spectral fields covering all major gas absorption regimes. We find that RT runtimes are shorter by factors between 10 and 100, while root-mean-square errors are of order 0.01%.
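
The PCA acceleration idea, running the expensive calculation only at a few points in the space spanned by the leading principal components and mapping the results back linearly, can be sketched as follows. This is a generic first-order illustration, not the paper's binning scheme; all names are hypothetical.

```python
import numpy as np

def pca_binned_model(X, expensive_model, n_pc=2):
    """Approximate expensive_model(x) for every row x of X by evaluating it
    only at the data mean and at perturbations along the leading principal
    components, then reconstructing linearly from the PC scores."""
    X = np.asarray(X, dtype=float)
    mu = X.mean(axis=0)
    Xc = X - mu
    # Leading principal components via SVD of the centered data.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_pc].T              # (n_features, n_pc)
    scores = Xc @ V              # PC scores of every point
    f0 = expensive_model(mu)
    # Central differences along each PC give the model's sensitivities.
    grads = np.array([
        (expensive_model(mu + V[:, k]) - expensive_model(mu - V[:, k])) / 2.0
        for k in range(n_pc)
    ])
    # First-order reconstruction at every point: n_pc+... model calls total,
    # instead of one call per row of X.
    return f0 + scores @ grads
```

When the spectra truly lie near a low-dimensional subspace (as optical-property profiles within a band tend to), only 2·n_pc + 1 expensive evaluations replace one per spectral point, which is the source of the 10-100x speedups quoted above.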

  9. Fast and accurate generation of ab initio quality atomic charges using nonparametric statistical regression.

    PubMed

    Rai, Brajesh K; Bakken, Gregory A

    2013-07-15

    We introduce a class of partial atomic charge assignment methods that provide an ab initio quality description of the electrostatics of bioorganic molecules. The method uses a set of models that neither have a fixed functional form nor require a fixed set of parameters, and therefore are capable of capturing the complexities of the charge distribution in great detail. Random Forest regression is used to build separate charge models for the elements H, C, N, O, F, S, and Cl, using training data consisting of partial charges along with a description of their surrounding chemical environments; training set charges are generated by fitting to the B3LYP/6-31G* electrostatic potential (ESP) and are subsequently refined to improve the consistency and transferability of the charge assignments. Using a set of 210 neutral, small organic molecules, the absolute hydration free energy calculated using these charges in conjunction with the Generalized Born solvation model shows a low mean unsigned error, close to 1 kcal/mol, from the experimental data. Using another large and independent test set of chemically diverse organic molecules, the method is shown to accurately reproduce charge-dependent observables (ESP and dipole moment) from ab initio calculations. The method presented here automatically provides an estimate of potential errors in the charge assignment, enabling systematic improvement of these models using additional data. This work has implications not only for the future development of charge models but also for developing methods to describe many other chemical properties that require accurate representation of the electronic structure of the system.
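
One piece of the described pipeline, a refinement that keeps per-atom charges consistent with a molecule's known total charge, can be sketched simply. The regression model itself is stubbed out as a lookup table here; `assign_charges` and `element_model` are hypothetical names, and the uniform-shift correction is an illustrative choice, not necessarily the paper's refinement.

```python
def assign_charges(elements, element_model, total_charge=0.0):
    """Assign per-atom partial charges from a per-element predictor (here a
    stub lookup standing in for the Random Forest models), then apply a
    uniform shift so the charges sum exactly to the molecule's known total
    charge -- a simple consistency constraint."""
    raw = [element_model[e] for e in elements]
    correction = (total_charge - sum(raw)) / len(raw)
    return [q + correction for q in raw]
```

Without such a constraint, independently predicted charges generally leave a spurious net charge on the molecule, which would bias any downstream electrostatics.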

  10. SU-E-T-373: A Motorized Stage for Fast and Accurate QA of Machine Isocenter

    SciTech Connect

    Moore, J; Velarde, E; Wong, J

    2014-06-01

    Purpose: Precision delivery of radiation dose relies on accurate knowledge of the machine isocenter under a variety of machine motions. This is typically determined by performing a Winston-Lutz test, which consists of imaging a known object at multiple gantry/collimator/table angles and ensuring that the maximum offset is within a specified tolerance. The first step in the Winston-Lutz test is careful placement of a ball bearing (BB) at the machine isocenter, as determined by repeated imaging and shifting until accurate placement has been achieved. Conventionally this is performed by adjusting a stage manually using vernier scales, which carry the limitation that each adjustment must be done inside the treatment room, with the risks of inaccurate adjustment of the scale and physical bumping of the table. It is proposed to use a motorized system controlled from outside the room to reduce the time required for and improve the accuracy of these tests. Methods: The three-dimensional vernier scales are replaced by three motors with an accuracy of 1 micron and a range of 25.4 mm, connected via USB to a computer in the control room. Software is designed which automatically detects the motors, assigns them to the proper axes, and allows small shifts to be entered and performed. Input values match calculated offsets in magnitude and sign to reduce conversion errors. Speed of setup, number of iterations to setup, and accuracy of final placement are assessed. Results: Automatic BB placement required 2.25 iterations and 13 minutes on average, while manual placement required 3.76 iterations and 37.5 minutes. The average final X, Y, Z offsets were 0.02 cm, 0.01 cm, 0.04 cm for automatic setup and 0.04 cm, 0.02 cm, 0.04 cm for manual setup. Conclusion: Automatic placement decreased the time and number of repeat iterations for setup while improving placement accuracy, and greatly reduces the time required to perform QA.

  11. Fast and accurate inference on gravitational waves from precessing compact binaries

    NASA Astrophysics Data System (ADS)

    Smith, Rory; Field, Scott E.; Blackburn, Kent; Haster, Carl-Johan; Pürrer, Michael; Raymond, Vivien; Schmidt, Patricia

    2016-08-01

    Inferring astrophysical information from gravitational waves emitted by compact binaries is one of the key science goals of gravitational-wave astronomy. In order to reach the full scientific potential of gravitational-wave experiments, we require techniques to mitigate the cost of Bayesian inference, especially as gravitational-wave signal models and analyses become increasingly sophisticated and detailed. Reduced-order models (ROMs) of gravitational waveforms can significantly reduce the computational cost of inference by removing redundant computations. In this paper, we construct the first reduced-order models of gravitational-wave signals that include the effects of spin precession, inspiral, merger, and ringdown in compact object binaries and that are valid for component masses describing binary neutron star, binary black hole, and mixed binary systems. This work utilizes the waveform model known as "IMRPhenomPv2." Our ROM enables the use of a fast reduced-order quadrature (ROQ) integration rule which allows us to approximate Bayesian probability density functions at a greatly reduced computational cost. We find that the ROQ rule can be used to speed up inference by factors as high as 300 without introducing systematic bias. This corresponds to a reduction in computational time from around half a year to half a day for the longest-duration and lowest-mass signals. The ROM and ROQ rules are available with the main inference library of the LIGO Scientific Collaboration, LALInference.
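
The ROQ idea, collapsing a full-grid inner product to a weighted sum over a few empirical interpolation nodes once the waveforms are well represented by a reduced basis, can be sketched as follows. This is a minimal illustration, not the LALInference implementation; all names are hypothetical.

```python
import numpy as np

def build_roq_weights(basis, data, nodes):
    """Precompute ROQ weights w so that the full-grid overlap <data, h> is
    approximated by w . h[nodes] for any h well represented by `basis`
    (rows = basis vectors over the full grid). `nodes` are the empirical
    interpolation points; len(nodes) must equal the number of basis rows."""
    B = np.asarray(basis, dtype=float)        # (m, N)
    V = B[:, nodes]                           # (m, m) basis values at nodes
    # Interpolation: h ~ sum_j c_j b_j with c = (V^T)^{-1} h[nodes], so
    # <data, h> = c . (B @ data) = (V^{-1} B @ data) . h[nodes].
    return np.linalg.solve(V, B @ np.asarray(data, dtype=float))

def roq_overlap(w, h_at_nodes):
    """Evaluate the overlap from waveform samples at the nodes only."""
    return float(np.dot(w, h_at_nodes))
```

The weights are built once per data set; each likelihood evaluation then needs the waveform only at a handful of nodes instead of the full frequency grid, which is where the quoted factor-of-300 speedups come from.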

  12. Utilizing fast multipole expansions for efficient and accurate quantum-classical molecular dynamics simulations.

    PubMed

    Schwörer, Magnus; Lorenzen, Konstantin; Mathias, Gerald; Tavan, Paul

    2015-03-14

    Recently, a novel approach to hybrid quantum mechanics/molecular mechanics (QM/MM) molecular dynamics (MD) simulations has been suggested [Schwörer et al., J. Chem. Phys. 138, 244103 (2013)]. Here, the forces acting on the atoms are calculated by grid-based density functional theory (DFT) for a solute molecule and by a polarizable molecular mechanics (PMM) force field for a large solvent environment composed of several 10³-10⁵ molecules as negative gradients of a DFT/PMM hybrid Hamiltonian. The electrostatic interactions are efficiently described by a hierarchical fast multipole method (FMM). Adopting recent progress of this FMM technique [Lorenzen et al., J. Chem. Theory Comput. 10, 3244 (2014)], which particularly entails a strictly linear scaling of the computational effort with the system size, and adapting this revised FMM approach to the computation of the interactions between the DFT and PMM fragments of a simulation system, here, we show how one can further enhance the efficiency and accuracy of such DFT/PMM-MD simulations. The resulting gain of total performance, as measured for alanine dipeptide (DFT) embedded in water (PMM) by the product of the gains in efficiency and accuracy, amounts to about one order of magnitude. We also demonstrate that the jointly parallelized implementation of the DFT and PMM-MD parts of the computation enables the efficient use of high-performance computing systems. The associated software is available online. PMID:25770527

  13. Utilizing fast multipole expansions for efficient and accurate quantum-classical molecular dynamics simulations

    SciTech Connect

    Schwörer, Magnus; Lorenzen, Konstantin; Mathias, Gerald; Tavan, Paul

    2015-03-14

    Recently, a novel approach to hybrid quantum mechanics/molecular mechanics (QM/MM) molecular dynamics (MD) simulations has been suggested [Schwörer et al., J. Chem. Phys. 138, 244103 (2013)]. Here, the forces acting on the atoms are calculated by grid-based density functional theory (DFT) for a solute molecule and by a polarizable molecular mechanics (PMM) force field for a large solvent environment composed of several 10³-10⁵ molecules as negative gradients of a DFT/PMM hybrid Hamiltonian. The electrostatic interactions are efficiently described by a hierarchical fast multipole method (FMM). Adopting recent progress of this FMM technique [Lorenzen et al., J. Chem. Theory Comput. 10, 3244 (2014)], which particularly entails a strictly linear scaling of the computational effort with the system size, and adapting this revised FMM approach to the computation of the interactions between the DFT and PMM fragments of a simulation system, here, we show how one can further enhance the efficiency and accuracy of such DFT/PMM-MD simulations. The resulting gain of total performance, as measured for alanine dipeptide (DFT) embedded in water (PMM) by the product of the gains in efficiency and accuracy, amounts to about one order of magnitude. We also demonstrate that the jointly parallelized implementation of the DFT and PMM-MD parts of the computation enables the efficient use of high-performance computing systems. The associated software is available online.

  14. Accurate and Fast Convergent Initial-Value Belief Propagation for Stereo Matching.

    PubMed

    Wang, Xiaofeng; Liu, Yiguang

    2015-01-01

    The belief propagation (BP) algorithm has some limitations, including ambiguity at edges and in textureless regions and slow convergence speed. To address these problems, we present a novel algorithm that intrinsically improves both the accuracy and the convergence speed of BP. First, traditional BP generally consumes time due to its numerous iterations. To reduce the number of iterations, inspired by the crucial importance of the initial value in nonlinear problems, a novel initial-value belief propagation (IVBP) algorithm is presented, which can greatly improve both convergence speed and accuracy. Second, the majority of the existing research on BP concentrates on the smoothness term or other energy terms, neglecting the significance of the data term. In this study, a self-adapting dissimilarity data term (SDDT) is presented to improve the accuracy of the data term, which incorporates an additional gradient-based measure into the traditional data term, with the weight determined by a robust measure-based control function. Finally, this study explores the effective combination of local and global methods. The experimental results demonstrate that our method performs well compared with state-of-the-art BP variants and simultaneously achieves better edge-preserving smoothing with fast convergence on the Middlebury and new 2014 Middlebury datasets. PMID:26349063

  15. Regular, Fast and Accurate Airborne In-Situ Methane Measurements Around the Tropopause

    NASA Astrophysics Data System (ADS)

    Dyroff, Christoph; Rauthe-Schöch, Armin; Schuck, Tanja J.; Zahn, Andreas

    2013-04-01

    We present a laser spectrometer for automated monthly measurements of methane (CH4) mixing ratios aboard the CARIBIC passenger aircraft. The instrument is based on a commercial fast methane analyzer (FMA, Los Gatos Res.), which was modified for fully unattended operation. A laboratory characterization was performed, and the results are presented with emphasis on the precision, cross sensitivity to H2O, and accuracy. An in-flight calibration strategy is described that utilizes CH4 measurements obtained from flask samples taken during the same flights. By statistical comparison of the in-situ measurements with the flask samples we derive a total uncertainty estimate of ~3.85 ppbv (1σ) around the tropopause, and ~12.4 ppbv (1σ) during aircraft ascent and descent. Data from the first two years of airborne operation are presented that span a large part of the northern-hemispheric upper troposphere and lowermost stratosphere, with occasional crossings of the tropics on flights to southern Africa. With its high spatial resolution and high accuracy, this data set is unprecedented in the highly important atmospheric layer around the tropopause.

  16. Automated system for fast and accurate analysis of SF6 injected in the surface ocean.

    PubMed

    Koo, Chul-Min; Lee, Kitack; Kim, Miok; Kim, Dae-Ok

    2005-11-01

    This paper describes an automated sampling and analysis system for the shipboard measurement of dissolved sulfur hexafluoride (SF6) in surface marine environments into which SF6 has been deliberately released. This underway system includes a gas chromatograph with an electron capture detector, a fast and highly efficient SF6-extraction device, a global positioning system, and a data acquisition system based on Visual Basic 6.0/C 6.0. This work is distinct from previous studies in that it quantifies the efficiency of the SF6-extraction device and its carryover effect, and examines the effect of surfactant on the SF6-extraction efficiency. Measurements can be continuously performed on seawater samples taken from a seawater line installed onboard a research vessel. The system runs on an hourly cycle during which one set of four SF6 standards is measured and SF6 derived from the seawater stream is subsequently analyzed for the rest of each 1 h period. This state-of-the-art system was successfully used to trace a water mass carrying Cochlodinium polykrikoides, which causes harmful algal blooms (HAB) in the coastal waters of southern Korea. The successful application of this analysis system in tracing the HAB-infected water mass suggests that the SF6 detection method described in this paper will improve the quality of future studies of biogeochemical processes in the marine environment. PMID:16294883
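
The hourly standard-based calibration can be sketched as a least-squares line through the four standards, inverted to convert detector response into concentration. This is a generic illustration, not the system's actual Visual Basic/C implementation; `calibrate` is a hypothetical name.

```python
def calibrate(std_conc, std_resp):
    """Fit response = slope * concentration + intercept through the
    calibration standards by ordinary least squares, and return a function
    that inverts the line to map a detector response to a concentration."""
    n = len(std_conc)
    mx = sum(std_conc) / n
    my = sum(std_resp) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(std_conc, std_resp))
             / sum((x - mx) ** 2 for x in std_conc))
    intercept = my - slope * mx
    return lambda resp: (resp - intercept) / slope
```

Re-fitting the line every hour, as the described system does with its four standards, compensates for slow detector drift between calibrations.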

  18. Fast, automatic, and accurate catheter reconstruction in HDR brachytherapy using an electromagnetic 3D tracking system

    SciTech Connect

    Poulin, Eric; Racine, Emmanuel; Beaulieu, Luc; Binnekamp, Dirk

    2015-03-15

    Purpose: In high dose rate brachytherapy (HDR-B), current catheter reconstruction protocols are relatively slow and error prone. The purpose of this technical note is to evaluate the accuracy and the robustness of an electromagnetic (EM) tracking system for automated and real-time catheter reconstruction. Methods: For this preclinical study, a total of ten catheters were inserted in gelatin phantoms with different trajectories. Catheters were reconstructed using an 18G biopsy needle equipped with a miniaturized sensor and used as an EM stylet, together with the second-generation Aurora® Planar Field Generator from Northern Digital Inc. The Aurora EM system provides position and orientation values with precisions of 0.7 mm and 0.2°, respectively. Phantoms were also scanned using a μCT (GE Healthcare) and a Philips Big Bore clinical computed tomography (CT) system, with spatial resolutions of 89 μm and 2 mm, respectively. Reconstructions using the EM stylet were compared to μCT and CT. To assess the robustness of the EM reconstruction, five catheters were reconstructed twice and compared. Results: Reconstruction time for one catheter was 10 s, leading to a total reconstruction time of under 3 min for a typical 17-catheter implant. When compared to the μCT, the mean EM tip identification error was 0.69 ± 0.29 mm while the CT error was 1.08 ± 0.67 mm. The mean 3D distance error was found to be 0.66 ± 0.33 mm and 1.08 ± 0.72 mm for the EM and CT, respectively. EM 3D catheter trajectories were found to be more accurate. A maximum difference of less than 0.6 mm was found between successive EM reconstructions. Conclusions: The EM reconstruction was found to be more accurate and precise than the conventional methods used for catheter reconstruction in HDR-B. This approach can be applied to any type of catheter and applicator.

  19. RRTMGP: A fast and accurate radiation code for the next decade

    NASA Astrophysics Data System (ADS)

    Mlawer, E. J.; Pincus, R.; Wehe, A.; Delamere, J.

    2015-12-01

    Atmospheric radiative processes are key drivers of the Earth's climate and must be accurately represented in global circulation models (GCMs) to allow faithful simulations of the planet's past, present, and future. The radiation code RRTMG is widely utilized by global modeling centers for both climate and weather predictions, but it has become increasingly out-of-date. The code's structure is not well suited to the current generation of computer architectures, and its stored absorption coefficients are not consistent with the most recent spectroscopic information. We are developing a new broadband radiation code for the current generation of computational architectures. This code, called RRTMGP, will be a completely restructured and modern version of RRTMG. The new code preserves the strengths of the existing RRTMG parameterization, especially the high accuracy of the k-distribution treatment of absorption by gases, but the entire code is being rewritten to provide highly efficient computation across a range of architectures. Our redesign includes refactoring the code into discrete kernels corresponding to fundamental computational elements (e.g. gas optics), optimizing the code for operating on multiple columns in parallel, simplifying the subroutine interface, revisiting the existing gas optics interpolation scheme to reduce branching, and adding flexibility with respect to run-time choices of streams, need for consideration of scattering, aerosol and cloud optics, etc. The result of the proposed development will be a single, well-supported and well-validated code amenable to optimization across a wide range of platforms. Our main emphasis is on highly parallel platforms including Graphical Processing Units (GPUs) and Many-Integrated-Core processors (MICs), which experience shows can accelerate broadband radiation calculations by as much as a factor of fifty. RRTMGP will provide highly efficient and accurate radiative flux calculations for coupled global

  20. Multi-stencils fast marching methods: a highly accurate solution to the eikonal equation on cartesian domains.

    PubMed

    Hassouna, M Sabry; Farag, A A

    2007-09-01

    A wide range of computer vision applications require an accurate solution of a particular Hamilton-Jacobi (HJ) equation, known as the Eikonal equation. In this paper, we propose an improved version of the fast marching method (FMM) that is highly accurate for both 2D and 3D Cartesian domains. The new method is called multi-stencils fast marching (MSFM); it computes the solution at each grid point by solving the Eikonal equation along several stencils and then picking the solution that satisfies the upwind condition. The stencils are centered at each grid point and cover all of its nearest neighbors. In 2D space, 2 stencils cover the 8-neighbors of the point, while in 3D space, 6 stencils cover its 26-neighbors. For those stencils that are not aligned with the natural coordinate system, the Eikonal equation is derived using directional derivatives and then solved using higher order finite difference schemes. The accuracy of the proposed method over state-of-the-art FMM-based techniques has been demonstrated through comprehensive numerical experiments.
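
    As a concrete illustration of the single-point update described above, the sketch below solves the 2D upwind discretization of the Eikonal equation along an axis-aligned stencil and a diagonal stencil (effective spacing h·√2) and keeps the smaller candidate. This is a deliberately minimal rendering of the multi-stencil idea; the function names are ours, and the paper's directional derivatives and higher order differences are not reproduced.

    ```python
    import math

    def eikonal_update(a, b, h, f):
        """Solve the upwind discretization (T-a)^2 + (T-b)^2 = (h/f)^2 for T,
        falling back to a one-sided update when the two-sided quadratic would
        violate the upwind (causality) condition."""
        rhs = h / f
        if abs(a - b) >= rhs:
            return min(a, b) + rhs
        return 0.5 * (a + b + math.sqrt(2 * rhs * rhs - (a - b) ** 2))

    def msfm_candidate(T, i, j, h, f):
        """Candidate travel time at grid point (i, j), taking the minimum over
        the axis-aligned and diagonal stencils (the 2-stencil 2D case)."""
        inf = float("inf")
        def val(di, dj):
            ni, nj = i + di, j + dj
            return T[ni][nj] if 0 <= ni < len(T) and 0 <= nj < len(T[0]) else inf
        cands = []
        for a, b, hh in (
            (min(val(-1, 0), val(1, 0)), min(val(0, -1), val(0, 1)), h),
            (min(val(-1, -1), val(1, 1)), min(val(-1, 1), val(1, -1)), h * math.sqrt(2)),
        ):
            if a == inf and b == inf:
                continue                       # no upwind neighbor on this stencil
            if a == inf or b == inf:
                cands.append(min(a, b) + hh / f)
            else:
                cands.append(eikonal_update(a, b, hh, f))
        return min(cands)
    ```

    A full MSFM solver would wrap this update in the usual FMM narrow-band loop with a priority queue; only the per-point update is sketched here.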

  1. Fast and Accurate Prediction of Numerical Relativity Waveforms from Binary Black Hole Coalescences Using Surrogate Models.

    PubMed

    Blackman, Jonathan; Field, Scott E; Galley, Chad R; Szilágyi, Béla; Scheel, Mark A; Tiglio, Manuel; Hemberger, Daniel A

    2015-09-18

    Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spherical-harmonic _{-2}Y_{ℓm} waveform modes resolved by the NR code up to ℓ=8. We compare our surrogate model to effective one body waveforms from 50M_{⊙} to 300M_{⊙} for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases).
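
    The offline/online split behind such surrogates can be sketched in a few lines: expensive solutions at training parameter values are compressed into a reduced basis, and online evaluation only interpolates basis coefficients. The "waveforms" below are toy damped chirps standing in for NR solutions (which obviously cannot be run here), and the interpolation scheme is a plain per-coefficient linear one, not the paper's method.

    ```python
    import numpy as np

    # Toy stand-in for expensive training waveforms, parameterised by q
    # (playing the role of the mass ratio).
    qs = np.linspace(1.0, 10.0, 200)           # training parameter values
    t = np.linspace(0.0, 1.0, 400)
    train = np.array([np.sin(2 * np.pi * (3 + q) * t) * np.exp(-q * t / 5)
                      for q in qs])

    # Offline (expensive) stage: reduced basis from an SVD of the training set
    _, _, Vt = np.linalg.svd(train, full_matrices=False)
    basis = Vt[:40]                             # keep the leading modes
    coeffs = train @ basis.T                    # projection coefficients

    def surrogate(q):
        """Online (fast) stage: interpolate the coefficients in q, recombine."""
        c = np.array([np.interp(q, qs, coeffs[:, j]) for j in range(len(basis))])
        return c @ basis

    # Held-out check: the surrogate should track the "true" waveform closely
    q_test = 4.3
    true = np.sin(2 * np.pi * (3 + q_test) * t) * np.exp(-q_test * t / 5)
    rel_err = np.linalg.norm(surrogate(q_test) - true) / np.linalg.norm(true)
    ```

    The online cost is a small matrix-vector product, which is what makes millisecond evaluation of an otherwise expensive model plausible.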
  3. Spectroscopic Method for Fast and Accurate Group A Streptococcus Bacteria Detection.

    PubMed

    Schiff, Dillon; Aviv, Hagit; Rosenbaum, Efraim; Tischler, Yaakov R

    2016-02-16

    Rapid and accurate detection of pathogens is paramount to human health. Spectroscopic techniques have been shown to be viable methods for detecting various pathogens. Enhanced methods of Raman spectroscopy can discriminate unique bacterial signatures; however, many of these require precise conditions and do not have in vivo replicability. Common biological detection methods such as rapid antigen detection tests have high specificity but do not have high sensitivity. Here we developed a new method of bacteria detection that is both highly specific and highly sensitive by combining the specificity of antibody staining and the sensitivity of spectroscopic characterization. Bacteria samples, treated with a fluorescent antibody complex specific to Streptococcus pyogenes, were volumetrically normalized according to their Raman bacterial signal intensity and characterized for fluorescence, eliciting a positive result for samples containing Streptococcus pyogenes and a negative result for those without. The normalized fluorescence intensity of the Streptococcus pyogenes gave a signal that is up to 16.4 times higher than that of other bacteria samples for bacteria stained in solution and up to 12.7 times higher in solid state. This method can be very easily replicated for other bacteria species using suitable antibody-dye complexes. In addition, this method shows viability for in vivo detection as it requires minute amounts of bacteria, low laser excitation power, and short integration times in order to achieve a high signal.

  5. WaveQ3D: Fast and accurate acoustic transmission loss (TL) eigenrays, in littoral environments

    NASA Astrophysics Data System (ADS)

    Reilly, Sean M.

    This study defines a new 3D Gaussian ray bundling acoustic transmission loss model in geodetic coordinates: latitude, longitude, and altitude. This approach is designed to lower the computational burden of computing accurate environmental effects in sonar training applications by eliminating the need to transform the ocean environment into a collection of Nx2D Cartesian radials. This approach also improves model accuracy by incorporating real world 3D effects, like horizontal refraction, into the model. This study starts with derivations for a 3D variant of Gaussian ray bundles in this coordinate system. To verify the accuracy of this approach, acoustic propagation predictions of transmission loss, time of arrival, and propagation direction are compared to analytic solutions and other models. To validate the model's ability to predict real world phenomena, predictions of transmission loss and propagation direction are compared to at-sea measurements, in an environment where strong horizontal refraction effects have been observed. This model has been integrated into U.S. Navy active sonar training system applications, where testing has demonstrated its ability to improve transmission loss calculation speed without sacrificing accuracy.

  6. LinkImpute: Fast and Accurate Genotype Imputation for Nonmodel Organisms.

    PubMed

    Money, Daniel; Gardner, Kyle; Migicovsky, Zoë; Schwaninger, Heidi; Zhong, Gan-Yuan; Myles, Sean

    2015-11-01

    Obtaining genome-wide genotype data from a set of individuals is the first step in many genomic studies, including genome-wide association and genomic selection. All genotyping methods suffer from some level of missing data, and genotype imputation can be used to fill in the missing data and improve the power of downstream analyses. Model organisms like human and cattle benefit from high-quality reference genomes and panels of reference genotypes that aid in imputation accuracy. In nonmodel organisms, however, genetic and physical maps often are either of poor quality or are completely absent, and there are no panels of reference genotypes available. There is therefore a need for imputation methods designed specifically for nonmodel organisms in which genomic resources are poorly developed and marker order is unreliable or unknown. Here we introduce LinkImpute, a software package based on a k-nearest neighbor genotype imputation method, LD-kNNi, which is designed for unordered markers. No physical or genetic maps are required, and it is designed to work on unphased genotype data from heterozygous species. It exploits the fact that markers useful for imputation often are not physically close to the missing genotype but rather distributed throughout the genome. Using genotyping-by-sequencing data from diverse and heterozygous accessions of apples, grapes, and maize, we compare LD-kNNi with several genotype imputation methods and show that LD-kNNi is fast, comparable in accuracy to the best existing methods, and exhibits the least bias in allele frequency estimates.
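
    The core idea — pick markers in high LD with the missing one regardless of physical position, then vote among the nearest samples on those markers — can be sketched as below. This is a simplified illustration of an LD-kNN scheme, not the LinkImpute implementation; the function name and distance/weighting choices are ours.

    ```python
    import numpy as np

    def ld_knn_impute(G, i, j, k=5, l=10):
        """Impute missing genotype G[i, j] (coded 0/1/2, -1 = missing).
        Sketch of an LD-kNN idea: pick the l markers most correlated with
        marker j, measure sample distance on those markers only, and take
        a majority vote among the k nearest samples observed at j."""
        obs = G[:, j] >= 0
        ld = np.zeros(G.shape[1])
        for m in range(G.shape[1]):             # LD of marker j with marker m
            if m == j:
                continue
            both = obs & (G[:, m] >= 0)
            if both.sum() > 2 and np.std(G[both, j]) > 0 and np.std(G[both, m]) > 0:
                ld[m] = abs(np.corrcoef(G[both, j], G[both, m])[0, 1])
        top = np.argsort(ld)[-l:]               # the l highest-LD markers
        dists = []
        for s in range(G.shape[0]):             # candidate donor samples
            if s == i or G[s, j] < 0:
                continue
            shared = (G[i, top] >= 0) & (G[s, top] >= 0)
            if shared.sum() == 0:
                continue
            d = np.mean((G[i, top][shared] - G[s, top][shared]) ** 2)
            dists.append((d, G[s, j]))
        dists.sort(key=lambda x: x[0])
        votes = [g for _, g in dists[:k]]
        return int(np.bincount(votes).argmax())
    ```

    Note that no marker order is used anywhere, which is what makes this style of imputation applicable to unordered markers in nonmodel organisms.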

  7. LinkImpute: Fast and Accurate Genotype Imputation for Nonmodel Organisms

    PubMed Central

    Money, Daniel; Gardner, Kyle; Migicovsky, Zoë; Schwaninger, Heidi; Zhong, Gan-Yuan; Myles, Sean

    2015-01-01

    Obtaining genome-wide genotype data from a set of individuals is the first step in many genomic studies, including genome-wide association and genomic selection. All genotyping methods suffer from some level of missing data, and genotype imputation can be used to fill in the missing data and improve the power of downstream analyses. Model organisms like human and cattle benefit from high-quality reference genomes and panels of reference genotypes that aid in imputation accuracy. In nonmodel organisms, however, genetic and physical maps often are either of poor quality or are completely absent, and there are no panels of reference genotypes available. There is therefore a need for imputation methods designed specifically for nonmodel organisms in which genomic resources are poorly developed and marker order is unreliable or unknown. Here we introduce LinkImpute, a software package based on a k-nearest neighbor genotype imputation method, LD-kNNi, which is designed for unordered markers. No physical or genetic maps are required, and it is designed to work on unphased genotype data from heterozygous species. It exploits the fact that markers useful for imputation often are not physically close to the missing genotype but rather distributed throughout the genome. Using genotyping-by-sequencing data from diverse and heterozygous accessions of apples, grapes, and maize, we compare LD-kNNi with several genotype imputation methods and show that LD-kNNi is fast, comparable in accuracy to the best existing methods, and exhibits the least bias in allele frequency estimates. PMID:26377960

  8. HyRec: A Fast and Highly Accurate Primordial Hydrogen and Helium Recombination Code

    NASA Astrophysics Data System (ADS)

    Ali-Haïmoud, Yacine; Hirata, Christopher M.

    2010-11-01

    We present a state-of-the-art primordial recombination code, HyRec, including all the physical effects that have been shown to significantly affect recombination. The computation of helium recombination includes simple analytic treatments of hydrogen continuum opacity in the He I 2¹P-1¹S line and the He I] 2³P-1¹S line, and treats feedback between these lines within the on-the-spot approximation. Hydrogen recombination is computed using the effective multilevel atom method, effectively accounting for an infinite number of excited states. We account for two-photon transitions from 2s and higher levels as well as frequency diffusion in Lyman-alpha with a full radiative transfer calculation. We present a new method to evolve the radiation field simultaneously with the level populations and the free electron fraction. These computations are sped up by taking advantage of the particular sparseness pattern of the equations describing the radiative transfer. The computation time for a full recombination history is ~2 seconds. This makes our code well suited for inclusion in Monte Carlo Markov chains for cosmological parameter estimation from upcoming high-precision cosmic microwave background anisotropy measurements.

  9. Fast and accurate nonenzymatic copying of an RNA-like synthetic genetic polymer.

    PubMed

    Zhang, Shenglong; Blain, J Craig; Zielinska, Daria; Gryaznov, Sergei M; Szostak, Jack W

    2013-10-29

    Recent advances suggest that it may be possible to construct simple artificial cells from two subsystems: a self-replicating cell membrane and a self-replicating genetic polymer. Although multiple pathways for the growth and division of model protocell membranes have been characterized, no self-replicating genetic material is yet available. Nonenzymatic template-directed synthesis of RNA with activated ribonucleotide monomers has led to the copying of short RNA templates; however, these reactions are generally slow (taking days to weeks) and highly error prone. N3'-P5'-linked phosphoramidate DNA (3'-NP-DNA) is similar to RNA in its overall duplex structure, and is attractive as an alternative to RNA because the high reactivity of its corresponding monomers allows rapid and efficient copying of all four nucleobases on homopolymeric RNA and DNA templates. Here we show that both homopolymeric and mixed-sequence 3'-NP-DNA templates can be copied into complementary 3'-NP-DNA sequences. G:T and A:C wobble pairing leads to a high error rate, but the modified nucleoside 2-thiothymidine suppresses wobble pairing. We show that the 2-thiothymidine modification increases both polymerization rate and fidelity in the copying of a 3'-NP-DNA template into a complementary strand of 3'-NP-DNA. Our results suggest that 3'-NP-DNA has the potential to serve as the genetic material of artificial biological systems. PMID:24101473

  10. Fast and accurate semantic annotation of bioassays exploiting a hybrid of machine learning and user confirmation

    PubMed Central

    Clark, Alex M.; Bunin, Barry A.; Litterman, Nadia K.; Schürer, Stephan C.; Visser, Ubbo

    2014-01-01

    Bioinformatics and computer aided drug design rely on the curation of a large number of protocols for biological assays that measure the ability of potential drugs to achieve a therapeutic effect. These assay protocols are generally published by scientists in the form of plain text, which needs to be more precisely annotated in order to be useful to software methods. We have developed a pragmatic approach to describing assays according to the semantic definitions of the BioAssay Ontology (BAO) project, using a hybrid of machine learning based on natural language processing, and a simplified user interface designed to help scientists curate their data with minimum effort. We have carried out this work based on the premise that pure machine learning is insufficiently accurate, and that expecting scientists to find the time to annotate their protocols manually is unrealistic. By combining these approaches, we have created an effective prototype for which annotation of bioassay text within the domain of the training set can be accomplished very quickly. Well-trained annotations require single-click user approval, while annotations from outside the training set domain can be identified using the search feature of a well-designed user interface, and subsequently used to improve the underlying models. By drastically reducing the time required for scientists to annotate their assays, we can realistically advocate for semantic annotation to become a standard part of the publication process. Once even a small proportion of the public body of bioassay data is marked up, bioinformatics researchers can begin to construct sophisticated and useful searching and analysis algorithms that will provide a diverse and powerful set of tools for drug discovery researchers. PMID:25165633
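
    The hybrid workflow — a trained model proposes annotations, and only high-confidence proposals are reduced to single-click approval — can be sketched with a toy bag-of-words scorer. Everything here (the training fragments, the ontology labels, the scoring rule, the threshold) is an invented stand-in; the real system uses an NLP model over BAO terms.

    ```python
    from collections import Counter, defaultdict

    # Toy training set: assay text fragments -> annotation label
    # (hypothetical stand-ins for BAO terms).
    training = [
        ("luciferase reporter assay in HEK293 cells", "reporter-gene assay"),
        ("fluorescence polarization binding measurement", "binding assay"),
        ("luciferase readout of promoter activity", "reporter-gene assay"),
        ("radioligand binding displacement", "binding assay"),
    ]

    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in training:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())

    def suggest(text, threshold=2.0):
        """Score each label by shared-word evidence; return (label, confident).
        'Confident' suggestions would need only single-click approval in the
        UI; the rest would fall back to the manual search feature."""
        words = set(text.lower().split())
        scores = {lab: sum(word_counts[lab][w] for w in words)
                  for lab in label_counts}
        best = max(scores, key=scores.get)
        return best, scores[best] >= threshold
    ```

    The point of the design is the routing, not the model: anything a simple model gets confidently right costs the curator one click, and everything else is channeled into a search interface whose results feed back into training.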

  11. Fast and accurate semantic annotation of bioassays exploiting a hybrid of machine learning and user confirmation.

    PubMed

    Clark, Alex M; Bunin, Barry A; Litterman, Nadia K; Schürer, Stephan C; Visser, Ubbo

    2014-01-01

    Bioinformatics and computer aided drug design rely on the curation of a large number of protocols for biological assays that measure the ability of potential drugs to achieve a therapeutic effect. These assay protocols are generally published by scientists in the form of plain text, which needs to be more precisely annotated in order to be useful to software methods. We have developed a pragmatic approach to describing assays according to the semantic definitions of the BioAssay Ontology (BAO) project, using a hybrid of machine learning based on natural language processing, and a simplified user interface designed to help scientists curate their data with minimum effort. We have carried out this work based on the premise that pure machine learning is insufficiently accurate, and that expecting scientists to find the time to annotate their protocols manually is unrealistic. By combining these approaches, we have created an effective prototype for which annotation of bioassay text within the domain of the training set can be accomplished very quickly. Well-trained annotations require single-click user approval, while annotations from outside the training set domain can be identified using the search feature of a well-designed user interface, and subsequently used to improve the underlying models. By drastically reducing the time required for scientists to annotate their assays, we can realistically advocate for semantic annotation to become a standard part of the publication process. Once even a small proportion of the public body of bioassay data is marked up, bioinformatics researchers can begin to construct sophisticated and useful searching and analysis algorithms that will provide a diverse and powerful set of tools for drug discovery researchers. PMID:25165633

  12. HapCompass: A Fast Cycle Basis Algorithm for Accurate Haplotype Assembly of Sequence Data

    PubMed Central

    Aguiar, Derek

    2012-01-01

    Genome assembly methods produce haplotype phase ambiguous assemblies due to limitations in current sequencing technologies. Determining the haplotype phase of an individual is computationally challenging and experimentally expensive. However, haplotype phase information is crucial in many bioinformatics workflows such as genetic association studies and genomic imputation. Current computational methods of determining haplotype phase from sequence data—known as haplotype assembly—have difficulties producing accurate results for large (1000 genomes-type) data or operate on restricted optimizations that are unrealistic considering modern high-throughput sequencing technologies. We present a novel algorithm, HapCompass, for haplotype assembly of densely sequenced human genome data. The HapCompass algorithm operates on a graph where single nucleotide polymorphisms (SNPs) are nodes and edges are defined by sequence reads and viewed as supporting evidence of co-occurring SNP alleles in a haplotype. In our graph model, haplotype phasings correspond to spanning trees. We define the minimum weighted edge removal optimization on this graph and develop an algorithm based on cycle basis local optimizations for resolving conflicting evidence. We then estimate the amount of sequencing required to produce a complete haplotype assembly of a chromosome. Using these estimates together with metrics borrowed from genome assembly and haplotype phasing, we compare the accuracy of HapCompass, the Genome Analysis ToolKit, and HapCut for 1000 Genomes Project and simulated data. We show that HapCompass performs significantly better for a variety of data and metrics. HapCompass is freely available for download (www.brown.edu/Research/Istrail_Lab/). PMID:22697235
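
    The graph model described above can be illustrated with a drastically reduced sketch: read observations vote on whether two SNPs share a haplotype, and a BFS spanning tree of the evidence graph propagates relative phase. The cycle-basis conflict resolution that is HapCompass's actual contribution is deliberately omitted; this only shows the spanning-tree part of the model.

    ```python
    from collections import defaultdict, deque

    def phase_snps(n_snps, observations):
        """observations: (i, j, same) triples from reads covering SNPs i and j,
        with same=True when the read places both SNPs in the same allele class.
        Net edge weights vote on relative phase; a BFS spanning tree then
        assigns each SNP a phase sign (+1/-1)."""
        weight = defaultdict(int)
        adj = defaultdict(set)
        for i, j, same in observations:
            e = (min(i, j), max(i, j))
            weight[e] += 1 if same else -1       # conflicting reads cancel out
            adj[i].add(j)
            adj[j].add(i)
        phase = {}
        for root in range(n_snps):               # one tree per connected component
            if root in phase:
                continue
            phase[root] = 1
            queue = deque([root])
            while queue:
                u = queue.popleft()
                for v in adj[u] - set(phase):
                    e = (min(u, v), max(u, v))
                    phase[v] = phase[u] * (1 if weight[e] >= 0 else -1)
                    queue.append(v)
        return phase
    ```

    On a cycle, the edges left out of the spanning tree may disagree with the assigned phases; resolving those conflicts optimally is exactly the minimum weighted edge removal problem the paper formulates.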

  13. Fast and Accurate Radiative Transfer Calculations Using Principal Component Analysis for (Exo-)Planetary Retrieval Models

    NASA Astrophysics Data System (ADS)

    Kopparla, P.; Natraj, V.; Shia, R. L.; Spurr, R. J. D.; Crisp, D.; Yung, Y. L.

    2015-12-01

    Radiative transfer (RT) computations form the engine of atmospheric retrieval codes. However, full treatment of RT processes is computationally expensive, prompting the use of two-stream approximations in current exoplanetary atmospheric retrieval codes [Line et al., 2013]. Natraj et al. [2005, 2010] and Spurr and Natraj [2013] demonstrated the ability of a technique using principal component analysis (PCA) to speed up RT computations. In the PCA method for RT performance enhancement, empirical orthogonal functions are developed for binned sets of inherent optical properties that possess some redundancy; costly multiple-scattering RT calculations are done only for the few optical states corresponding to the most important principal components, and correction factors are applied to the approximate radiation fields. Kopparla et al. [2015, in preparation] extended the PCA method to a broadband spectral region from the ultraviolet to the shortwave infrared (0.3-3 micron), accounting for the major gas absorptions in this region. Here, we apply the PCA method to some typical (exo-)planetary retrieval problems. Comparisons between the new model, called the Universal Principal Component Analysis Radiative Transfer (UPCART) model, two-stream models, and line-by-line RT models are performed for spectral radiances, spectral fluxes, and broadband fluxes. Each of these is calculated at the top of the atmosphere for several scenarios with varying aerosol types, extinction and scattering optical depth profiles, and stellar and viewing geometries. We demonstrate that very accurate radiance and flux estimates can be obtained, with better than 1% accuracy in all spectral regions and better than 0.1% in most cases, as compared to a numerically exact line-by-line RT model. The accuracy is enhanced when the results are convolved to typical instrument resolutions. The operational speed and accuracy of UPCART can be further improved by optimizing binning schemes and parallelizing the code.
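
    The PCA speedup can be demonstrated end-to-end on a toy pair of models: a "cheap" approximation is corrected by a first-order expansion of log(expensive/cheap) in principal-component space, with the expensive model evaluated only at the bin mean and a few perturbed states. The two model functions and the Gaussian "optical states" below are invented stand-ins, not actual RT physics.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def expensive(X):
        """Stand-in for a costly multiple-scattering RT calculation."""
        X = np.atleast_2d(X)
        return np.exp(-X[:, 0]) * (1 + 0.5 * X[:, 1]) + 0.05 * X[:, 0] * X[:, 1]

    def cheap(X):
        """Stand-in for a fast two-stream approximation (misses a term)."""
        X = np.atleast_2d(X)
        return np.exp(-X[:, 0]) * (1 + 0.5 * X[:, 1])

    # Many spectral points whose optical states are redundant (one "bin")
    X = rng.normal([1.0, 0.3], [0.1, 0.05], size=(2000, 2))

    # PCA of the optical states within the bin
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    scores = (X - mu) @ Vt.T
    sig = scores.std(axis=0)

    # The expensive model runs only at the mean and +/- 1-sigma PC points...
    def logratio(x):
        return np.log(expensive(x) / cheap(x))[0]

    base = logratio(mu)
    grad = np.array([(logratio(mu + sig[k] * Vt[k]) - logratio(mu - sig[k] * Vt[k]))
                     / (2 * sig[k]) for k in range(2)])

    # ...and a first-order correction upgrades the cheap model everywhere
    approx = cheap(X) * np.exp(base + scores @ grad)
    truth = expensive(X)
    rel = np.abs(approx - truth) / truth
    ```

    Here 2000 expensive evaluations are replaced by 5 (mean plus two perturbations per component), which is the source of the speedup the abstract describes.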

  14. Fast, accurate and easy-to-pipeline methods for amplicon sequence processing

    NASA Astrophysics Data System (ADS)

    Antonielli, Livio; Sessitsch, Angela

    2016-04-01

    Next generation sequencing (NGS) technologies have been established for years as an essential resource in microbiology. While on the one hand metagenomic studies can benefit from the continuously increasing throughput of the Illumina (Solexa) technology, on the other hand the spread of third generation sequencing technologies (PacBio, Oxford Nanopore) is taking whole genome sequencing beyond the assembly of fragmented draft genomes, making it now possible to finish bacterial genomes even without short read correction. Besides (meta)genomic analysis, next-gen amplicon sequencing is still fundamental for microbial studies. Amplicon sequencing of the 16S rRNA gene and the ITS (Internal Transcribed Spacer) region remains a well-established, widespread method for a multitude of purposes concerning the identification and comparison of archaeal/bacterial (16S rRNA gene) and fungal (ITS) communities occurring in diverse environments. Numerous pipelines have been developed to process NGS-derived amplicon sequences, among which Mothur, QIIME and USEARCH are the best known and most cited. The entire process, from initial raw sequence data through read error correction, paired-end read assembly, primer stripping, quality filtering, clustering, OTU taxonomic classification, and BIOM table rarefaction, as well as alternative "normalization" methods, will be addressed. An effective and accurate strategy will be presented using state-of-the-art bioinformatic tools, and the example of a straightforward one-script pipeline for 16S rRNA gene or ITS MiSeq amplicon sequencing will be provided. Finally, instructions on how to automatically retrieve nucleotide sequences from NCBI and therefore apply the pipeline to targets other than the 16S rRNA gene (Greengenes, SILVA) and ITS (UNITE) will be discussed.
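
    One step in such pipelines, OTU clustering, is easy to illustrate: dereplicate reads with abundance counts, then greedily assign each unique sequence to the first centroid it matches at or above a fixed identity threshold. This is a radically simplified sketch of abundance-sorted greedy clustering (the scheme behind tools like USEARCH/VSEARCH), with a position-by-position identity function standing in for real pairwise alignment.

    ```python
    from collections import Counter

    def identity(a, b):
        """Fraction of matching positions (a toy stand-in for alignment)."""
        return sum(x == y for x, y in zip(a, b)) / max(len(a), len(b))

    def greedy_cluster(seqs, threshold=0.97):
        """Abundance-sorted greedy OTU clustering: most abundant unique
        sequences become centroids; later sequences join the first centroid
        they match at >= threshold identity."""
        derep = Counter(seqs)                            # dereplicate, keep sizes
        order = sorted(derep, key=derep.get, reverse=True)
        members = {}
        for s in order:
            for c in members:                            # existing centroids
                if identity(s, c) >= threshold:
                    members[c].append(s)
                    break
            else:
                members[s] = [s]                         # s founds a new OTU
        return members
    ```

    Sorting by abundance first matters: it makes the (presumably error-free) high-count sequences the centroids, so low-count error variants collapse into them rather than seeding spurious OTUs.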

  15. Fast and accurate approximate inference of transcript expression from RNA-seq data

    PubMed Central

    Hensman, James; Papastamoulis, Panagiotis; Glaus, Peter; Honkela, Antti; Rattray, Magnus

    2015-01-01

    Motivation: Assigning RNA-seq reads to their transcript of origin is a fundamental task in transcript expression estimation. Where ambiguities in assignments exist due to transcripts sharing sequence, e.g. alternative isoforms or alleles, the problem can be solved through probabilistic inference. Bayesian methods have been shown to provide accurate transcript abundance estimates compared with competing methods. However, exact Bayesian inference is intractable and approximate methods such as Markov chain Monte Carlo and Variational Bayes (VB) are typically used. While providing a high degree of accuracy and modelling flexibility, standard implementations can be prohibitively slow for large datasets and complex transcriptome annotations. Results: We propose a novel approximate inference scheme based on VB and apply it to an existing model of transcript expression inference from RNA-seq data. Recent advances in VB algorithmics are used to improve the convergence of the algorithm beyond the standard Variational Bayes Expectation Maximization algorithm. We apply our algorithm to simulated and biological datasets, demonstrating a significant increase in speed with only very small loss in accuracy of expression level estimation. We carry out a comparative study against seven popular alternative methods and demonstrate that our new algorithm provides excellent accuracy and inter-replicate consistency while remaining competitive in computation time. Availability and implementation: The methods were implemented in R and C++, and are available as part of the BitSeq project at github.com/BitSeq. The method is also available through the BitSeq Bioconductor package. The source code to reproduce all simulation results can be accessed via github.com/BitSeq/BitSeqVB_benchmarking. 
Contact: james.hensman@sheffield.ac.uk or panagiotis.papastamoulis@manchester.ac.uk or Magnus.Rattray@manchester.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online.
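
    The structure of the inference problem — reads that align ambiguously to several transcripts, a Dirichlet over expression proportions, and mean-field updates alternating between read responsibilities and pseudo-counts — can be sketched as below. This mirrors only the shape of a BitSeq-style model, not its implementation or its improved convergence scheme; the digamma routine is a standard recurrence-plus-asymptotic-series approximation included to keep the sketch dependency-free.

    ```python
    import math

    def digamma(x):
        """Digamma via recurrence + asymptotic series (adequate for x > 0)."""
        r = 0.0
        while x < 6.0:
            r -= 1.0 / x
            x += 1.0
        f = 1.0 / (x * x)
        return r + math.log(x) - 0.5 / x - f * (1/12 - f * (1/120 - f / 252))

    def vb_expression(reads, n_tr, alpha=0.1, iters=200):
        """Mean-field VB for a toy read-assignment mixture: each read carries
        the set of transcripts it is compatible with; q(theta) is
        Dirichlet(a) and each q(z_n) a categorical."""
        a = [alpha + 1.0] * n_tr                  # initial pseudo-counts
        for _ in range(iters):
            counts = [0.0] * n_tr
            d_tot = digamma(sum(a))
            w = [math.exp(digamma(ak) - d_tot) for ak in a]
            for compat in reads:                  # update q(z): responsibilities
                z = sum(w[t] for t in compat)
                for t in compat:
                    counts[t] += w[t] / z
            a = [alpha + c for c in counts]       # update q(theta)
        tot = sum(a)
        return [ak / tot for ak in a]             # posterior mean expression
    ```

    Reads unique to one transcript pin its expression down, while ambiguous reads are shared in proportion to the current expression estimates, which is the essential behavior of probabilistic transcript quantification.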

  16. Design and development of a profilometer for the fast and accurate characterization of optical surfaces

    NASA Astrophysics Data System (ADS)

    Gómez-Pedrero, José A.; Rodríguez-Ibañez, Diego; Alonso, José; Quirgoa, Juan A.

    2015-09-01

    With the advent in recent years of techniques devised for the mass production of optical components with surfaces of arbitrary form (also known as free-form surfaces), the parallel development of measuring systems adapted to this new kind of surface constitutes a real necessity for the industry. Profilometry is one of the preferred methods for assessing the quality of a surface, and is widely employed in the optical fabrication industry for the quality control of its products. In this work, we present the design, development and assembly of a new profilometer with five axes of movement, specifically suited to the measurement of medium-size (up to 150 mm in diameter) free-form optical surfaces with sub-micrometer accuracy and low measuring times. The apparatus is formed by three linear motorized positioners (X, Y, Z) plus an additional angular positioner and a tilt positioner, employed to accurately locate the surface to be measured and the probe, which can be mechanical or optical, the optical probe being a confocal sensor based on chromatic aberration. Both the optical and mechanical probes guarantee sub-micrometer accuracy in the determination of the surface height, thus ensuring an accuracy in the surface curvatures of the order of 0.01 D or better. An original calibration procedure based on the measurement of a precision sphere has been developed in order to correct the perpendicularity error between the axes of the linear positioners. To reduce the measuring time of the profilometer, custom electronics based on an Arduino™ controller has been designed and produced to synchronize the five motorized positioners and the optical and mechanical probes, so that a medium-size surface (around 10 cm in diameter) with a dynamic range in curvature of around 10 D can be measured in less than 300 seconds (using three axes) while keeping the height and curvature resolutions at the figures mentioned above.
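
    A calibration against a precision sphere rests on being able to recover the sphere's centre and radius from measured points. The standard trick, sketched below, is that expanding |p − c|² = r² gives an equation linear in c and (r² − |c|²), so the fit is a plain linear least-squares problem; the noise model and test geometry are our own illustration, not the paper's procedure.

    ```python
    import numpy as np

    def fit_sphere(pts):
        """Linear least-squares sphere fit: |p - c|^2 = r^2 rearranges to
        2 p.c + (r^2 - |c|^2) = |p|^2, linear in the unknowns."""
        A = np.c_[2.0 * pts, np.ones(len(pts))]
        b = (pts ** 2).sum(axis=1)
        sol, *_ = np.linalg.lstsq(A, b, rcond=None)
        c = sol[:3]
        r = np.sqrt(sol[3] + c @ c)
        return c, r

    # Synthetic check: noisy points on a sphere of known centre and radius
    rng = np.random.default_rng(0)
    v = rng.normal(size=(500, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    pts = np.array([10.0, -5.0, 2.0]) + 25.0 * v + rng.normal(0.0, 1e-3, (500, 3))
    c, r = fit_sphere(pts)
    ```

    Systematic deviations of the fitted residuals as a function of position (rather than random scatter) are then the signature of axis non-perpendicularity that such a calibration can correct.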

  17. A Simple and Accurate Analysis of Conductivity Loss in Millimeter-Wave Helical Slow-Wave Structures

    NASA Astrophysics Data System (ADS)

    Datta, S. K.; Kumar, Lalit; Basu, B. N.

    2009-04-01

    Electromagnetic field analysis of a helix slow-wave structure was carried out, and a closed-form expression was derived for the inductance per unit length of the transmission-line equivalent circuit of the structure, taking into account the actual helix tape dimensions and the surface current on the helix over the actual metallic area of the tape. The expression for the inductance per unit length thus obtained was used to estimate the increment in the inductance per unit length caused by the penetration of the magnetic flux into the conducting surfaces, following Wheeler's incremental inductance rule, which was subsequently interpreted in terms of the attenuation constant of the propagating structure. The analysis is computationally simple and accurate, and attains the accuracy of 3D electromagnetic analysis by allowing the use of dispersion characteristics obtainable from any standard electromagnetic model. The approach was benchmarked against measurements for two practical structures, and excellent agreement was observed. The analysis was subsequently applied to demonstrate the effects of conductivity on the attenuation constant of a typical broadband millimeter-wave helical slow-wave structure with respect to helix materials and copper plating on the helix, surface finish of the helix, dielectric loading, and high-temperature operation; a comparative study of these aspects is presented.
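
    The incremental-inductance step can be shown numerically: recede the conducting walls by half a skin depth, differentiate the inductance per unit length, convert to a series resistance R = (Rs/μ) dL/dn, and take α = R/(2Z₀). The inductance function below is a hypothetical linear stand-in (the paper's closed-form expression is not reproduced here), and the capacitance per unit length is assumed given.

    ```python
    import math

    MU0 = 4e-7 * math.pi

    def skin_depth(f, sigma, mu=MU0):
        """Conductor skin depth at frequency f for conductivity sigma."""
        return 1.0 / math.sqrt(math.pi * f * mu * sigma)

    def conductor_attenuation(L_of_n, C, f, sigma, mu=MU0):
        """Wheeler's incremental inductance rule, numerically: L_of_n(n) is
        the inductance per unit length with all conducting surfaces receded
        by n (a stand-in for the paper's closed-form expression)."""
        delta = skin_depth(f, sigma)
        Rs = 1.0 / (sigma * delta)                 # surface resistance, ohm/sq
        dn = delta / 2.0                           # recession of delta/2
        dLdn = (L_of_n(dn) - L_of_n(0.0)) / dn     # one-sided difference
        R = (Rs / mu) * dLdn                       # series resistance, ohm/m
        Z0 = math.sqrt(L_of_n(0.0) / C)            # characteristic impedance
        return R / (2.0 * Z0)                      # nepers per metre
    ```

    With a toy L(n) = 1 µH/m · (1 + n/1 mm), C = 100 pF/m, copper conductivity, and f = 30 GHz, this yields roughly 0.18 Np/m; the physically interesting behavior is how α grows as conductivity drops or surface roughness shortens the effective skin depth.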

  18. Fast and accurate determination of K, Ca, and Mg in human serum by sector field ICP-MS.

    PubMed

    Yu, Lee L; Davis, W Clay; Nuevo Ordonez, Yoana; Long, Stephen E

    2013-11-01

    Electrolytes in serum are important biomarkers for skeletal and cellular health. The levels of electrolytes are monitored by measuring the Ca, Mg, K, and Na in blood serum. Many reference methods have been developed for the determination of Ca, Mg, and K in clinical measurements; however, isotope dilution thermal ionization mass spectrometry (ID-TIMS) has traditionally been the primary reference method serving as an anchor for traceability and accuracy to these secondary reference methods. The sample matrix must be separated before ID-TIMS measurements, which is a slow and tedious process that hindered the adoption of the technique in routine clinical measurements. We have developed a fast and accurate method for the determination of Ca, Mg, and K in serum by taking advantage of the higher mass resolution capability of the modern sector field inductively coupled plasma mass spectrometry (SF-ICP-MS). Each serum sample was spiked with a mixture containing enriched (44)Ca, (26)Mg, and (41)K, and the (42)Ca(+):(44)Ca(+), (24)Mg(+):(26)Mg(+), and (39)K(+):(41)K(+) ratios were measured. The Ca and Mg ratios were measured in medium resolution mode (m/Δm ≈ 4 500), and the K ratio in high resolution mode (m/Δm ≈ 10 000). Residual (40)Ar(1)H(+) interference was still observed but the deleterious effects of the interference were minimized by measuring the sample at K > 100 ng g(-1). The interferences of Sr(++) at the two Ca isotopes were less than 0.25 % of the analyte signal, and they were corrected with the (88)Sr(+) intensity by using the Sr(++):Sr(+) ratio. The sample preparation involved only simple dilutions, and the measurement using this sample preparation approach is known as dilution-and-shoot (DNS). The DNS approach was validated with samples prepared via the traditional acid digestion approach followed by ID-SF-ICP-MS measurement. DNS and digested samples of SRM 956c were measured with ID-SF-ICP-MS for quality assurance, and the results (mean
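
    The arithmetic at the core of single isotope dilution can be sketched directly from the blend equation: the measured ratio of the two isotopes is R_m = (n_x·a_x + n_s·a_s)/(n_x·b_x + n_s·b_s), which solves for the sample amount n_x. The abundance values in the example are invented for illustration, not Ca/Mg/K isotopic abundances.

    ```python
    def idms_moles(n_spike, ab_spike, ab_sample, R_measured):
        """Single isotope dilution: solve the blend isotope-ratio equation
        R_m = (n_x*a_x + n_s*a_s) / (n_x*b_x + n_s*b_s) for the sample
        amount n_x.  Each ab_* is (abundance of the reference isotope,
        abundance of the spike isotope) in that material."""
        a_s, b_s = ab_spike
        a_x, b_x = ab_sample
        return n_spike * (a_s - R_measured * b_s) / (R_measured * b_x - a_x)
    ```

    Because only isotope ratios enter, partial losses of analyte after the spike is equilibrated do not bias the result, which is what makes isotope dilution an anchor for traceability.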

  19. Fast and simple scheme for generating NOON states of photons in circuit QED.

    PubMed

    Su, Qi-Ping; Yang, Chui-Ping; Zheng, Shi-Biao

    2014-01-01

The generation, manipulation and fundamental understanding of entanglement lie at the very heart of quantum mechanics. Among the various types of entangled states, NOON states are special entangled states with two orthogonal component states in maximal superposition, which have a wide range of potential applications in quantum communication and quantum information processing. Here, we propose a fast and simple scheme for generating NOON states of photons in two superconducting resonators by using a single superconducting transmon qutrit. Because only one superconducting qutrit and two resonators are used, the experimental setup for this scheme is much simpler than in previous proposals, which require two superconducting qutrits and three cavities. In addition, this scheme is easier and faster to implement than previous proposals, which require a complex microwave pulse or a small pulse Rabi frequency in order to avoid nonresonant transitions.

  20. Simple, fast codebook training algorithm by entropy sequence for vector quantization

    NASA Astrophysics Data System (ADS)

    Pang, Chao-yang; Yao, Shaowen; Qi, Zhang; Sun, Shi-xin; Liu, Jingde

    2001-09-01

Traditional training algorithms for vector quantization, such as the LBG algorithm, use the convergence of the distortion sequence as the stopping condition. In this paper we present a novel training algorithm for vector quantization in which the convergence of the entropy sequence of the region sequence is employed as the stopping condition. Compared with the well-known LBG algorithm, it is simple, fast, and easy to understand and control. We tested the performance of the algorithm on the standard test images Lena and Barb. The results show that the PSNR difference between the algorithm and LBG is less than 0.1 dB, but its running time is only a small fraction of that of LBG.
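The entropy-based stopping rule can be illustrated on a toy one-dimensional codebook trainer (the data and parameters below are made up; the paper trains on image vectors). Training stops when the entropy of the region-occupancy distribution stops changing, instead of monitoring the distortion:

```python
import math

def train_codebook(data, k, eps=1e-6, max_iter=100):
    # Toy Lloyd-style iteration with an entropy-sequence stopping condition.
    codebook = data[:k]  # assumes the first k samples are distinct
    prev_h = None
    for _ in range(max_iter):
        # assign each sample to its nearest codeword (forming regions)
        regions = [[] for _ in range(k)]
        for x in data:
            i = min(range(k), key=lambda j: abs(x - codebook[j]))
            regions[i].append(x)
        # entropy of the region-occupancy distribution
        n = len(data)
        h = -sum((len(r) / n) * math.log2(len(r) / n) for r in regions if r)
        # update codewords to region centroids
        codebook = [sum(r) / len(r) if r else c
                    for r, c in zip(regions, codebook)]
        if prev_h is not None and abs(h - prev_h) < eps:
            break
        prev_h = h
    return codebook

cb = train_codebook([0.0, 0.1, 0.2, 9.9, 10.0, 10.1], 2)
```

On this two-cluster toy set the trainer settles on codewords near 0.1 and 10.0 once the occupancy entropy stabilizes.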

  1. Fast and simple method for Goss texture evaluation by neutron diffraction

    NASA Astrophysics Data System (ADS)

    Kucerakova, M.; Kolařík, K.; Čapek, J.; Vratislav, S.; Kalvoda, L.

    2016-09-01

A requirement of low power losses is one of the crucial demands laid on electric steel sheets used in the construction of various magnetic circuits. For the cold-rolled grain-oriented (CRGO) Fe-3%Si sheets used in the majority of power distribution transformers, the Goss texture {110}<001> is known to provide the best utility properties (low power losses, high magnetic permeability). Due to the coarse grain size of CRGO steel, neutron diffraction (ND) is predominantly used to characterize the sheets' texture in order to obtain statistically significant data. In this paper, we present a fast and simple method, based on monochromatic ND, for characterizing the perfection level of the Goss texture in CRGO steel sheets. The method is tested on 8 samples differing in fabrication technology and magnetic properties. The satisfactory performance of the method and its suitability for detailed texture analysis are demonstrated by comparing the obtained textural characteristics with the magnetic characteristics measured by the Barkhausen method.

  2. A fast and simple population code for orientation in primate V1

    PubMed Central

    Berens, Philipp; Ecker, Alexander S.; Cotton, R. James; Ma, Wei Ji; Bethge, Matthias; Tolias, Andreas S.

    2012-01-01

    Orientation tuning has been a classic model for understanding single neuron computation in the neocortex. However, little is known about how orientation can be read out from the activity of neural populations, in particular in alert animals. Our study is a first step towards that goal. We recorded from up to 20 well-isolated single neurons in the primary visual cortex of alert macaques simultaneously and applied a simple, neurally plausible decoder to read out the population code. We focus on two questions: First, what are the time course and the time scale at which orientation can be read out from the population response? Second, how complex does the decoding mechanism in a downstream neuron have to be in order to reliably discriminate between visual stimuli with different orientations? We show that the neural ensembles in primary visual cortex of awake macaques represent orientation in a way that facilitates a fast and simple read-out mechanism: with an average latency of 30–80 ms, the population code can be read out instantaneously with a short integration time of only tens of milliseconds and neither stimulus contrast nor correlations need to be taken into account to compute the optimal synaptic weight pattern. Our study shows that – similar to the case of single neuron computation – the representation of orientation in the spike patterns of neural populations can serve as an exemplary case for understanding of the computations performed by neural ensembles underlying visual processing during behavior. PMID:22855811
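A linear read-out of the kind the study argues for can be sketched with a population-vector decoder on the doubled-angle circle, a standard construction for 180°-periodic orientation (the preferred orientations and firing rates below are made up for illustration):

```python
import math

def decode_orientation(rates, pref_orients_deg):
    # Population-vector read-out: sum responses on the doubled-angle circle
    # (orientation is 180-degree periodic), then halve the resultant angle.
    x = sum(r * math.cos(2 * math.radians(t))
            for r, t in zip(rates, pref_orients_deg))
    y = sum(r * math.sin(2 * math.radians(t))
            for r, t in zip(rates, pref_orients_deg))
    return (math.degrees(math.atan2(y, x)) / 2) % 180

# four hypothetical neurons preferring 0, 45, 90 and 135 degrees
est = decode_orientation([0.0, 1.0, 0.0, 0.0], [0.0, 45.0, 90.0, 135.0])
```

The weight pattern here is fixed by the preferred orientations alone, echoing the paper's point that neither contrast nor correlations need to enter the optimal synaptic weights.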

  3. Ultra-fast single-file transport of a simple liquid beyond the collective behavior zone.

    PubMed

    Su, Jiaye; Yang, Keda; Huang, Decai

    2016-07-27

We use molecular dynamics simulations to analyze the single-file transport behavior of a simple liquid through a narrow membrane channel. As the liquid-channel interaction decreases, the liquid flow exhibits a remarkable maximum owing to the competition between liquid-liquid and liquid-channel interactions. Surprisingly, this maximum flow is coupled to a sudden reduction of the liquid occupancy, where a liquid particle moves through the channel alone at nearly constant velocity rather than in a collective motion mode. Further investigation of the encountered energy barrier suggests that this maximum flow is induced by particles with large instantaneous velocities (thermal fluctuations) that overcome the liquid-liquid and liquid-channel interaction barriers. Further decreasing the liquid-channel interaction leads to a decrease and ultimate stabilization of the liquid flow, since the energy barrier increases and then becomes steady. These results suggest that the breakdown of collective behavior can serve as a new rule for achieving fast single-file transport, especially for simple or nonpolar liquids with relatively weak liquid-liquid interactions, and they are thus helpful for the design of high-flux nanofluidic devices.

  4. SERF: A Simple, Effective, Robust, and Fast Image Super-Resolver From Cascaded Linear Regression.

    PubMed

    Hu, Yanting; Wang, Nannan; Tao, Dacheng; Gao, Xinbo; Li, Xuelong

    2016-09-01

Example learning-based image super-resolution techniques estimate a high-resolution image from a low-resolution input image by relying on high- and low-resolution image pairs. An important issue for these techniques is how to model the relationship between high- and low-resolution image patches: most existing complex models either generalize poorly to diverse natural images or require a lot of time for model training, while simple models have limited representation capability. In this paper, we propose a simple, effective, robust, and fast (SERF) image super-resolver. The proposed super-resolver is based on a series of linear least squares functions, namely, cascaded linear regression. It has few parameters to control the model and is thus able to robustly adapt to different image data sets and experimental settings. The linear least squares functions lead to closed-form solutions and therefore achieve computationally efficient implementations. To effectively decrease the gap between the estimated and ground-truth patches, we group image patches into clusters via the k-means algorithm and learn a linear regressor for each cluster at each iteration. The cascaded learning process gradually decreases the gap in high-frequency detail between the estimated high-resolution image patch and the ground-truth image patch and simultaneously obtains the linear regression parameters. Experimental results show that the proposed method achieves superior performance with lower time consumption than state-of-the-art methods.
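The cascade idea can be sketched in one dimension with a single cluster (the actual method clusters patch features with k-means and fits one regressor per cluster; the training pairs below are made up):

```python
def fit_linear(xs, ys):
    # closed-form least squares fit of y = a*x + b
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    return a, my - a * mx

def cascade(train_x, train_y, stages=3):
    # each stage fits a regressor mapping the current estimate toward the
    # target, then updates the estimates; later stages shrink the residual
    regressors, cur = [], list(train_x)
    for _ in range(stages):
        a, b = fit_linear(cur, train_y)
        regressors.append((a, b))
        cur = [a * x + b for x in cur]
    return regressors

def apply_cascade(regressors, x):
    for a, b in regressors:
        x = a * x + b
    return x

regs = cascade([1.0, 2.0, 3.0, 4.0], [3.0, 5.0, 7.0, 9.0])
```

Because each stage has a closed-form solution, training cost stays low, which is the efficiency argument the abstract makes.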

  5. Simple and fast PO-CL method for the evaluation of antioxidant capacity of hydrophilic and hydrophobic antioxidants

    NASA Astrophysics Data System (ADS)

    Zargoosh, Kiomars; Ghayeb, Yousef; Azmoon, Behnaz; Qandalee, Mohammad

    2013-08-01

A simple and fast procedure is described for evaluating the antioxidant activity of hydrophilic and hydrophobic compounds by using the peroxyoxalate-chemiluminescence (PO-CL) reaction of bis(2,4,6-trichlorophenyl) oxalate (TCPO) with hydrogen peroxide in the presence of di(tert-butyl) 2-(tert-butylamino)-5-[(E)-2-phenyl-1-ethenyl]-3,4-furandicarboxylate as a highly fluorescent fluorophore. The IC50 values of well-known antioxidants were calculated and the results were expressed as gallic equivalent antioxidant capacity (GEAC). The proposed method was found to be free of physical quenching and oxidant interference; for this reason, it can accurately determine the free-radical scavenging activity of antioxidants. Finally, the proposed method was applied to the evaluation of the antioxidant activity of complex real samples such as soybean oil and sunflower oil (as hydrophobic samples) and honey (as a hydrophilic sample). To the best of our knowledge, this is the first time that total antioxidant activity has been determined directly in soybean oil, sunflower oil and honey (not in their extracts) using PO-CL reactions.

  6. FAst MEtabolizer (FAME): A rapid and accurate predictor of sites of metabolism in multiple species by endogenous enzymes.

    PubMed

    Kirchmair, Johannes; Williamson, Mark J; Afzal, Avid M; Tyzack, Jonathan D; Choy, Alison P K; Howlett, Andrew; Rydberg, Patrik; Glen, Robert C

    2013-11-25

    FAst MEtabolizer (FAME) is a fast and accurate predictor of sites of metabolism (SoMs). It is based on a collection of random forest models trained on diverse chemical data sets of more than 20 000 molecules annotated with their experimentally determined SoMs. Using a comprehensive set of available data, FAME aims to assess metabolic processes from a holistic point of view. It is not limited to a specific enzyme family or species. Besides a global model, dedicated models are available for human, rat, and dog metabolism; specific prediction of phase I and II metabolism is also supported. FAME is able to identify at least one known SoM among the top-1, top-2, and top-3 highest ranked atom positions in up to 71%, 81%, and 87% of all cases tested, respectively. These prediction rates are comparable to or better than SoM predictors focused on specific enzyme families (such as cytochrome P450s), despite the fact that FAME uses only seven chemical descriptors. FAME covers a very broad chemical space, which together with its inter- and extrapolation power makes it applicable to a wide range of chemicals. Predictions take less than 2.5 s per molecule in batch mode on an Ultrabook. Results are visualized using Jmol, with the most likely SoMs highlighted. PMID:24219364

  7. A fast and accurate implementation of tunable algorithms used for generation of fractal-like aggregate models

    NASA Astrophysics Data System (ADS)

    Skorupski, Krzysztof; Mroczka, Janusz; Wriedt, Thomas; Riefler, Norbert

    2014-06-01

In many branches of science, experiments are expensive, require specialist equipment or are very time consuming; studying the light scattering phenomenon of fractal aggregates can serve as an example. Light scattering simulations can overcome these problems and provide additional theoretical data which complement our study. For this reason a fractal-like aggregate model as well as fast aggregation codes are needed. Until now, various computer models that try to mimic the physics behind this phenomenon have been developed. However, their implementations are mostly based on a trial-and-error procedure. Such an approach is very time consuming, and the morphological parameters of the resulting aggregates are not exact because the postconditions (e.g. the position error) cannot be very strict. In this paper we present a very fast and accurate implementation of a tunable aggregation algorithm based on the work of Filippov et al. (2000). Randomization is reduced to its necessary minimum (our technique can be more than 1000 times faster than standard algorithms) and the position of a new particle, or a cluster, is calculated with algebraic methods. Therefore, the postconditions can be extremely strict and the resulting errors negligible (e.g. the position error can be regarded as non-existent). Two different methods, based on the particle-cluster (PC) and the cluster-cluster (CC) aggregation processes, are presented.
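Tunable algorithms of this family enforce the fractal scaling law N = kf (Rg/a)^Df at every attachment step. A minimal check of that postcondition can be sketched as follows (the kf and Df values below are hypothetical; the paper's algebraic placement of each new particle is not reproduced here):

```python
import math

def radius_of_gyration(centers):
    # root-mean-square distance of monomer centers from the center of mass
    n = len(centers)
    cm = [sum(c[i] for c in centers) / n for i in range(3)]
    return math.sqrt(sum(sum((c[i] - cm[i]) ** 2 for i in range(3))
                         for c in centers) / n)

def satisfies_scaling(centers, a, kf, df, tol=1e-6):
    # fractal scaling law: N = kf * (Rg / a)**df, with monomer radius a
    n = len(centers)
    rg = radius_of_gyration(centers)
    return abs(n - kf * (rg / a) ** df) < tol

ok = satisfies_scaling([(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)],
                       a=1.0, kf=2.0, df=1.8)
```

In an algebraic (rather than trial-and-error) implementation, the new particle's distance from the aggregate's center of mass is solved so that this condition holds exactly, which is why the position error can be made negligible.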

  8. LGH: A Fast and Accurate Algorithm for Single Individual Haplotyping Based on a Two-Locus Linkage Graph.

    PubMed

    Xie, Minzhu; Wang, Jianxin; Chen, Xin

    2015-01-01

    Phased haplotype information is crucial in our complete understanding of differences between individuals at the genetic level. Given a collection of DNA fragments sequenced from a homologous pair of chromosomes, the problem of single individual haplotyping (SIH) aims to reconstruct a pair of haplotypes using a computer algorithm. In this paper, we encode the information of aligned DNA fragments into a two-locus linkage graph and approach the SIH problem by vertex labeling of the graph. In order to find a vertex labeling with the minimum sum of weights of incompatible edges, we develop a fast and accurate heuristic algorithm. It starts with detecting error-tolerant components by an adapted breadth-first search. A proper labeling of vertices is then identified for each component, with which sequencing errors are further corrected and edge weights are adjusted accordingly. After contracting each error-tolerant component into a single vertex, the above procedure is iterated on the resulting condensed linkage graph until error-tolerant components are no longer detected. The algorithm finally outputs a haplotype pair based on the vertex labeling. Extensive experiments on simulated and real data show that our algorithm is more accurate and faster than five existing algorithms for single individual haplotyping. PMID:26671798
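A greatly simplified version of the vertex-labeling idea can be sketched on a toy linkage graph: labels propagate by breadth-first search (positive edge weights suggest equal labels, negative weights suggest different labels), and the result is scored by the total weight of incompatible edges. The real algorithm additionally detects error-tolerant components, corrects sequencing errors and iterates on a condensed graph:

```python
from collections import deque

def bfs_label(n, edges):
    # edges: (u, v, w); w > 0 -> same label expected, w < 0 -> different
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))
    label = [None] * n
    for s in range(n):
        if label[s] is not None:
            continue
        label[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v, w in adj[u]:
                if label[v] is None:
                    label[v] = label[u] if w > 0 else 1 - label[u]
                    q.append(v)
    # total weight of edges whose expectation the labeling violates
    bad = sum(abs(w) for u, v, w in edges
              if (label[u] == label[v]) != (w > 0))
    return label, bad

label, bad = bfs_label(3, [(0, 1, 2), (1, 2, -3), (0, 2, -1)])
```

On this consistent toy graph the greedy BFS labeling violates no edge; with conflicting evidence, minimizing the incompatible-edge weight is exactly the optimization the paper's heuristic targets.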

  9. An accurate and efficient acoustic eigensolver based on a fast multipole BEM and a contour integral method

    NASA Astrophysics Data System (ADS)

    Zheng, Chang-Jun; Gao, Hai-Feng; Du, Lei; Chen, Hai-Bo; Zhang, Chuanzeng

    2016-01-01

An accurate numerical solver is developed in this paper for eigenproblems governed by the Helmholtz equation and formulated through the boundary element method. A contour integral method is used to convert the nonlinear eigenproblem into an ordinary eigenproblem, so that eigenvalues can be extracted accurately by solving a set of standard boundary element systems of equations. In order to accelerate the solution procedure, the parameters affecting the accuracy and efficiency of the method are studied and two contour paths are compared. Moreover, a wideband fast multipole method is implemented with a block IDR(s) solver to reduce the overall solution cost of the boundary element systems of equations with multiple right-hand sides. The Burton-Miller formulation is employed to identify the fictitious eigenfrequencies of the interior acoustic problems with multiply connected domains. The actual effect of the Burton-Miller formulation on tackling the fictitious eigenfrequency problem is investigated and the optimal choice of the coupling parameter as α = i / k is confirmed through exterior sphere examples. Furthermore, the numerical eigenvalues obtained by the developed method are compared with the results obtained by the finite element method to show the accuracy and efficiency of the developed method.
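The contour-integral conversion can be illustrated in the scalar case: an eigenvalue appears as a zero of an analytic function inside a closed contour, and two quadratures along a circle count and locate it (the argument principle). This is only a scalar toy; the paper uses a block/matrix variant with boundary element system solves at each quadrature point:

```python
import cmath

def eigen_in_contour(t, tprime, center, radius, n=2000):
    # Argument principle: s0/(2*pi*i) counts zeros of t inside the circular
    # contour, and s1/s0 locates a single simple zero. Trapezoidal quadrature
    # on the circle converges rapidly for analytic integrands.
    s0 = s1 = 0j
    for k in range(n):
        theta = 2.0 * cmath.pi * k / n
        z = center + radius * cmath.exp(1j * theta)
        dz = 1j * radius * cmath.exp(1j * theta) * (2.0 * cmath.pi / n)
        w = tprime(z) / t(z) * dz
        s0 += w
        s1 += z * w
    count = (s0 / (2j * cmath.pi)).real
    return count, (s1 / s0).real

# t(z) = z^2 - 2z has zeros at 0 and 2; only z = 2 lies inside the contour
count, loc = eigen_in_contour(lambda z: z * z - 2 * z,
                              lambda z: 2 * z - 2, 2.0, 1.0)
```

The quadrature finds exactly one zero inside the contour, at z = 2; the choice of contour path plays the same accuracy/efficiency role discussed in the abstract.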

  10. Development and validation of a novel, simple, and accurate spectrophotometric method for the determination of lead in human serum.

    PubMed

    Shayesteh, Tavakol Heidari; Khajavi, Farzad; Khosroshahi, Abolfazl Ghafuri; Mahjub, Reza

    2016-01-01

Blood lead level is the most useful indicator of the amount of lead absorbed by the human body. Various methods, like atomic absorption spectroscopy (AAS), have already been used for the detection of lead in biological fluids, but most of these methods rely on complicated, expensive, and sophisticated instruments. In this study, a simple and accurate spectroscopic method for the determination of lead was developed and applied to the investigation of lead concentration in biological samples. A silica gel column was used to extract lead and eliminate interfering agents in human serum samples. The column was washed with deionized water, the pH was adjusted to 8.2 using phosphate buffer, and tartrate and cyanide solutions were then added as masking agents. The lead content was extracted into the organic phase containing dithizone as a complexing reagent, and the dithizone-Pb(II) complex that formed was quantified by visible spectrophotometry at 538 nm. The recovery was found to be 84.6 %. In order to validate the method, a calibration curve involving various concentration levels was constructed and proven to be linear in the range of 0.01-1.5 μg/ml, with an R(2) regression coefficient of 0.9968 by statistical analysis of linear model validation. The largest error % values were found to be -5.80 and +11.6 % for intra-day and inter-day measurements, respectively. The largest RSD % values were calculated to be 6.54 and 12.32 % for intra-day and inter-day measurements, respectively. Further, the limit of detection (LOD) was calculated to be 0.002 μg/ml. The developed method was applied to determine the lead content in the human serum of volunteer miners, and it has been proven that there is no statistically significant difference between the data provided by this novel method and the data obtained from the previously studied AAS. PMID:26631397
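The calibration and detection-limit steps described here are standard computations and can be sketched as follows (the data are made up, and the 3·sd(blank)/slope convention for the LOD is an assumption of this sketch, since the abstract does not state which formula was used):

```python
def linear_fit(x, y):
    # closed-form least squares calibration: signal = slope*conc + intercept
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

def lod_from_blanks(blank_signals, slope):
    # common convention: LOD = 3 * sd(blanks) / slope (an assumption here)
    n = len(blank_signals)
    m = sum(blank_signals) / n
    sd = (sum((s - m) ** 2 for s in blank_signals) / (n - 1)) ** 0.5
    return 3 * sd / slope

# hypothetical calibration standards (conc in ug/ml) and blank absorbances
slope, intercept = linear_fit([0.01, 0.5, 1.0, 1.5], [0.12, 1.1, 2.1, 3.1])
lod = lod_from_blanks([0.10, 0.12, 0.08], slope)
```

With these synthetic points the fit recovers slope 2 and intercept 0.1, and the blank scatter translates into a concentration-domain detection limit via the slope.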

  12. Adaptive optics in spinning disk microscopy: improved contrast and brightness by a simple and fast method.

    PubMed

    Fraisier, V; Clouvel, G; Jasaitis, A; Dimitrov, A; Piolot, T; Salamero, J

    2015-09-01

    Multiconfocal microscopy gives a good compromise between fast imaging and reasonable resolution. However, the low intensity of live fluorescent emitters is a major limitation to this technique. Aberrations induced by the optical setup, especially the mismatch of the refractive index and the biological sample itself, distort the point spread function and further reduce the amount of detected photons. Altogether, this leads to impaired image quality, preventing accurate analysis of molecular processes in biological samples and imaging deep in the sample. The amount of detected fluorescence can be improved with adaptive optics. Here, we used a compact adaptive optics module (adaptive optics box for sectioning optical microscopy), which was specifically designed for spinning disk confocal microscopy. The module overcomes undesired anomalies by correcting for most of the aberrations in confocal imaging. Existing aberration detection methods require prior illumination, which bleaches the sample. To avoid multiple exposures of the sample, we established an experimental model describing the depth dependence of major aberrations. This model allows us to correct for those aberrations when performing a z-stack, gradually increasing the amplitude of the correction with depth. It does not require illumination of the sample for aberration detection, thus minimizing photobleaching and phototoxicity. With this model, we improved both signal-to-background ratio and image contrast. Here, we present comparative studies on a variety of biological samples.

  13. Optimal construction of a fast and accurate polarisable water potential based on multipole moments trained by machine learning.

    PubMed

    Handley, Chris M; Hawe, Glenn I; Kell, Douglas B; Popelier, Paul L A

    2009-08-14

To model liquid water correctly and to reproduce its structural, dynamic and thermodynamic properties warrants models that account accurately for electronic polarisation. We have previously demonstrated that polarisation can be represented by fluctuating multipole moments (derived by quantum chemical topology) predicted by multilayer perceptrons (MLPs) in response to the local structure of the cluster. Here we further develop this methodology of modeling polarisation, enabling control of the balance between accuracy (in terms of errors in the Coulomb energy) and computing time. First, the predictive ability and speed of two additional machine learning methods, radial basis function neural networks (RBFNNs) and Kriging, are assessed with respect to our previous MLP-based polarisable water models, for water dimer, trimer, tetramer, pentamer and hexamer clusters. Compared to MLPs, we find that RBFNNs achieve a 14-26% decrease in median Coulomb energy error, with a factor 2.5-3 slowdown in speed, whilst Kriging achieves a 40-67% decrease in median energy error with a factor 6.5-8.5 slowdown in speed. Then, these compromises between accuracy and speed are improved upon through a simple multi-objective optimisation to identify Pareto-optimal combinations. Compared to the Kriging results, combinations are found that are no less accurate (at the 90th energy error percentile), yet are 58% faster for the dimer and 26% faster for the pentamer.
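The Pareto-optimal selection step can be sketched directly: among candidate model combinations scored by (energy error, computing time), keep only those not dominated in both objectives. A minimal sketch with made-up (error, time) pairs:

```python
def pareto_front(points):
    # keep points not dominated in (error, time); lower is better in both.
    # simple O(n^2) sweep, adequate for small candidate sets
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front

# hypothetical (median energy error, relative compute time) candidates
front = pareto_front([(1.0, 5.0), (2.0, 2.0), (3.0, 1.0), (3.0, 3.0)])
```

The dominated candidate (3.0, 3.0) is discarded; the survivors trace exactly the accuracy-versus-speed trade-off curve the abstract optimizes over.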

  15. A fast and simple method for the polymerase chain reaction-based sexing of livestock embryos.

    PubMed

    Tavares, K C S; Carneiro, I S; Rios, D B; Feltrin, C; Ribeiro, A K C; Gaudêncio-Neto, S; Martins, L T; Aguiar, L H; Lazzarotto, C R; Calderón, C E M; Lopes, F E M; Teixeira, L P R; Bertolini, M; Bertolini, L R

    2016-01-01

    Embryo sexing is a powerful tool for livestock producers because it allows them to manage their breeding stocks more effectively. However, the cost of supplies and reagents, and the need for trained professionals to biopsy embryos by micromanipulation restrict the worldwide use of the technology to a limited number of specialized groups. The aim of this study was to couple a fast and inexpensive DNA extraction protocol with a practical biopsy approach to create a simple, quick, effective, and dependable embryo sexing procedure. From a total of 1847 sheep and cattle whole embryos or embryo biopsies, the sexing efficiency was 100% for embryo biopsies, 98% for sheep embryos, and 90.2% for cattle embryos. We used a primer pair that was common to both species and only 10% of the total extracted DNA. The whole protocol takes only 2 h to perform, which suggests that the proposed procedure can be readily applied to field conditions. Moreover, in addition to embryo sexing, the procedure can be used for further analyses, such as genotyping and molecular diagnosis in preimplantation embryos. PMID:27050974

  16. A homotopy-based sparse representation for fast and accurate shape prior modeling in liver surgical planning.

    PubMed

    Wang, Guotai; Zhang, Shaoting; Xie, Hongzhi; Metaxas, Dimitris N; Gu, Lixu

    2015-01-01

Shape priors play an important role in accurate and robust liver segmentation. However, liver shapes have complex variations, and accurate modeling of liver shapes is challenging. Using large-scale training data can improve accuracy, but it limits computational efficiency. In order to obtain accurate liver shape priors without sacrificing efficiency when dealing with large-scale training data, we investigate an effective and scalable shape prior modeling method that is more applicable in clinical liver surgical planning systems. We employed Sparse Shape Composition (SSC) to represent liver shapes by an optimized sparse combination of shapes in the repository, without any assumptions on parametric distributions of liver shapes. To leverage large-scale training data and improve the computational efficiency of SSC, we also introduced a homotopy-based method to quickly solve the L1-norm optimization problem in SSC. This method takes advantage of the sparsity of shape modeling, and solves the original optimization problem in SSC by continuously transforming it into a series of simplified problems whose solutions are fast to compute. When new training shapes arrive gradually, the homotopy strategy updates the optimal solution on the fly and avoids re-computing it from scratch. Experiments showed that SSC had high accuracy and efficiency in dealing with complex liver shape variations, excluding gross errors and preserving local details on the input liver shape. The homotopy-based SSC had a high computational efficiency, and its runtime increased very slowly as the repository's capacity and vertex number rose to a large degree. When the repository's capacity was 10,000, with 2000 vertices on each shape, the homotopy method took merely about 11.29 s to solve the optimization problem in SSC, nearly 2000 times faster than the interior point method.
The dice similarity coefficient (DSC), average symmetric surface distance (ASD), and maximum symmetric surface distance measurement
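The L1-regularized problem at the core of SSC, min 0.5·||x − Dw||² + λ||w||₁ with D the column matrix of repository shapes, can be sketched with a plain iterative soft-thresholding (ISTA) solver. Note this is a deliberately simpler stand-in for the paper's homotopy solver, shown only to make the sparse-combination idea concrete (tiny made-up "shapes"):

```python
def soft(v, t):
    # soft-thresholding operator, the proximal map of the L1 norm
    return (v - t) if v > t else (v + t) if v < -t else 0.0

def ista(D, x, lam=0.01, step=0.1, iters=500):
    # minimise 0.5*||x - D w||^2 + lam*||w||_1 by iterative soft-thresholding
    m, n = len(D), len(D[0])
    w = [0.0] * n
    for _ in range(iters):
        # residual r = D w - x and gradient g = D^T r
        r = [sum(D[i][j] * w[j] for j in range(n)) - x[i] for i in range(m)]
        g = [sum(D[i][j] * r[i] for i in range(m)) for j in range(n)]
        w = [soft(w[j] - step * g[j], step * lam) for j in range(n)]
    return w

# two toy "repository shapes" (columns) and an input shape x
D = [[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]
x = [1.0, 0.0, 0.0]
w = ista(D, x)
```

The solver selects only the matching repository shape (the second weight stays exactly zero), which is the sparsity that both ISTA and the faster homotopy method exploit.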

  17. Development and Validation of a Fast, Accurate and Cost-Effective Aeroservoelastic Method on Advanced Parallel Computing Systems

    NASA Technical Reports Server (NTRS)

    Goodwin, Sabine A.; Raj, P.

    1999-01-01

    Progress to date towards the development and validation of a fast, accurate and cost-effective aeroelastic method for advanced parallel computing platforms such as the IBM SP2 and the SGI Origin 2000 is presented in this paper. The ENSAERO code, developed at the NASA-Ames Research Center has been selected for this effort. The code allows for the computation of aeroelastic responses by simultaneously integrating the Euler or Navier-Stokes equations and the modal structural equations of motion. To assess the computational performance and accuracy of the ENSAERO code, this paper reports the results of the Navier-Stokes simulations of the transonic flow over a flexible aeroelastic wing body configuration. In addition, a forced harmonic oscillation analysis in the frequency domain and an analysis in the time domain are done on a wing undergoing a rigid pitch and plunge motion. Finally, to demonstrate the ENSAERO flutter-analysis capability, aeroelastic Euler and Navier-Stokes computations on an L-1011 wind tunnel model including pylon, nacelle and empennage are underway. All computational solutions are compared with experimental data to assess the level of accuracy of ENSAERO. As the computations described above are performed, a meticulous log of computational performance in terms of wall clock time, execution speed, memory and disk storage is kept. Code scalability is also demonstrated by studying the impact of varying the number of processors on computational performance on the IBM SP2 and the Origin 2000 systems.

  18. Fast and simple determination of perfluorinated compounds and their potential precursors in different packaging materials.

    PubMed

    Zabaleta, I; Bizkarguenaga, E; Bilbao, D; Etxebarria, N; Prieto, A; Zuloaga, O

    2016-05-15

A simple and fast analytical method was developed in the present work for the determination of fourteen perfluorinated compounds (PFCs), including three perfluoroalkylsulfonates (PFSAs), seven perfluorocarboxylic acids (PFCAs), three perfluorophosphonic acids (PFPAs) and perfluorooctanesulfonamide (PFOSA), and ten potential precursors, including four polyfluoroalkyl phosphates (PAPs), four fluorotelomer saturated acids (FTCAs) and two fluorotelomer unsaturated acids (FTUCAs), in different packaging materials. In order to achieve this objective, an ultrasonic probe-assisted extraction (UPAE) method was optimized before the analysis of the target compounds by liquid chromatography-triple quadrupole-tandem mass spectrometry (LC-QqQ-MS/MS). 7 mL of 1 % acetic acid in methanol and a 2.5-min single extraction cycle were sufficient for the extraction of all the target analytes. The optimized analytical method was validated in terms of recovery, precision and method detection limits (MDLs). Apparent recovery values after correction with the corresponding labeled standard were in the 69-103 % and 62-98 % ranges for samples fortified at 25 ng/g and 50 ng/g concentration levels, respectively, and MDL values in the 0.6-2.2 ng/g range were obtained. The developed method was applied to the analysis of plastic (milk bottle, muffin cup, pre-cooked food wrapper and coffee cup) and cardboard materials (microwave popcorn bag, greaseproof paper for French fries, cardboard pizza box and cinema cardboard popcorn box). To the best of our knowledge, this is the first method that describes the determination of fourteen PFCs and ten potential precursors in packaging materials. Moreover, the 6:2 FTCA, 6:2 FTUCA and 5:3 FTCA analytes were detected for the first time in microwave popcorn bags. PMID:26992531
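The two validation figures quoted here, labeled-standard-corrected apparent recovery and MDL, are routine calculations that can be sketched as follows (the numbers are made up, and the t·sd(blank-replicate) convention for the MDL is an assumption of this sketch, not a formula stated in the abstract):

```python
def label_corrected_recovery(raw_analyte, label_measured, label_added, spiked):
    # correct the analyte result by the recovery of its isotope-labeled
    # standard, then express the fortified amount recovered in percent
    corrected = raw_analyte * (label_added / label_measured)
    return 100.0 * corrected / spiked

def method_detection_limit(blank_replicates, t=3.14):
    # illustrative convention: MDL = t * sd of fortified-blank replicates
    # (t ~ Student's t at 99 % for n-1 df; an assumption of this sketch)
    n = len(blank_replicates)
    m = sum(blank_replicates) / n
    sd = (sum((b - m) ** 2 for b in blank_replicates) / (n - 1)) ** 0.5
    return t * sd

# hypothetical: 20 ng/g found, 80 % label recovery, 25 ng/g fortified
rec = label_corrected_recovery(20.0, 8.0, 10.0, 25.0)
```

Correcting by the labeled standard compensates for procedural losses, which is why the reported apparent recoveries cluster near 100 % despite the multi-step extraction.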

  20. A fast experimental beam hardening correction method for accurate bone mineral measurements in 3D μCT imaging system.

    PubMed

    Koubar, Khodor; Bekaert, Virgile; Brasse, David; Laquerriere, Patrice

    2015-06-01

Bone mineral density plays an important role in determining bone strength and fracture risk, so it is very important to obtain accurate bone mineral density measurements. A microcomputerized tomography (μCT) system provides 3D information about the architectural properties of bone, but quantitative analysis accuracy is decreased by artefacts in the reconstructed images, mainly beam hardening artefacts (such as cupping artefacts). In this paper, we introduce a new beam hardening correction method based on a post-reconstruction technique using off-line water and bone linearization curves determined experimentally, aiming to take into account the non-homogeneity of the scanned animal. In order to evaluate the mass correction rate, a calibration line was established to convert the reconstructed linear attenuation coefficients into bone masses. The correction method was then applied to a multimaterial cylindrical phantom and to mouse skeleton images. Mass correction rates of up to 18% between uncorrected and corrected images were obtained, and a marked improvement in the calculated mouse femur mass was observed. Results were also compared with those obtained using the simple water linearization technique, which does not take the non-homogeneity of the object into account.
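The linearization idea behind corrections of this kind can be sketched in a few lines. This is an illustrative reduction, not the authors' combined water-and-bone procedure: measured polychromatic attenuation for known water thicknesses is mapped, via a fitted polynomial, onto the attenuation an ideal monochromatic beam would give. All numbers below, including the toy hardening model, are invented for the demonstration.

```python
import numpy as np

# Hypothetical calibration: for known water thicknesses, simulate the measured
# polychromatic attenuation (beam hardening makes it sub-linear), then fit a
# polynomial "linearization curve" mapping measured values back onto the ideal
# monochromatic (linear) attenuation.
mu_water = 0.2                               # assumed effective mu (1/cm)
thickness = np.linspace(0.0, 4.0, 9)         # calibration phantom steps (cm)
ideal = mu_water * thickness                 # monochromatic attenuation
measured = ideal * (1.0 - 0.2 * ideal)       # toy hardening model

coeffs = np.polyfit(measured, ideal, deg=3)  # the linearization curve

def linearize(projection):
    """Map raw (hardened) projection values to linearized attenuation."""
    return np.polyval(coeffs, projection)

print(np.abs(linearize(measured) - ideal).max())  # small fit residual
```

In a real system the curve would be fitted to phantom scans, and a second, bone-specific curve would account for the tissue non-homogeneity the paper addresses.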

  1. A finite rate of innovation algorithm for fast and accurate spike detection from two-photon calcium imaging

    NASA Astrophysics Data System (ADS)

    Oñativia, Jon; Schultz, Simon R.; Dragotti, Pier Luigi

    2013-08-01

    Objective. Inferring the times of sequences of action potentials (APs) (spike trains) from neurophysiological data is a key problem in computational neuroscience. The detection of APs from two-photon imaging of calcium signals offers certain advantages over traditional electrophysiological approaches, as up to thousands of spatially and immunohistochemically defined neurons can be recorded simultaneously. However, due to noise, dye buffering and the limited sampling rates in common microscopy configurations, accurate detection of APs from calcium time series has proved to be a difficult problem. Approach. Here we introduce a novel approach to the problem making use of finite rate of innovation (FRI) theory (Vetterli et al 2002 IEEE Trans. Signal Process. 50 1417-28). For calcium transients well fit by a single exponential, the problem is reduced to reconstructing a stream of decaying exponentials. Signals made of a combination of exponentially decaying functions with different onset times are a subclass of FRI signals, for which much theory has recently been developed by the signal processing community. Main results. We demonstrate for the first time the use of FRI theory to retrieve the timing of APs from calcium transient time series. The final algorithm is fast, non-iterative and parallelizable. Spike inference can be performed in real-time for a population of neurons and does not require any training phase or learning to initialize parameters. Significance. The algorithm has been tested with both real data (obtained by simultaneous electrophysiology and multiphoton imaging of calcium signals in cerebellar Purkinje cell dendrites), and surrogate data, and outperforms several recently proposed methods for spike train inference from calcium imaging data.
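The core reduction described above can be conveyed with a toy example. This sketch is not the authors' full FRI pipeline (which handles noise with annihilating-filter and denoising machinery); it only shows why a trace built from decaying exponentials collapses to a spike train under the one-tap filter that annihilates the decay, with the decay constant assumed known.

```python
import numpy as np

alpha = np.exp(-1.0 / 10.0)   # assumed per-sample decay factor (tau = 10 samples)
n = np.arange(200)

# Synthetic noiseless trace: spikes at known times, each adding a decaying
# exponential transient.
true_spikes = [20, 75, 130]
trace = np.zeros_like(n, dtype=float)
for t0 in true_spikes:
    trace[t0:] += alpha ** (n[t0:] - t0)

# d[k] = c[k] - alpha*c[k-1] cancels every ongoing exponential tail and leaves
# a single impulse at each spike onset.
d = trace - alpha * np.concatenate(([0.0], trace[:-1]))
detected = np.flatnonzero(d > 0.5).tolist()
print(detected)  # → [20, 75, 130]
```

With noise and limited sampling rates the thresholding step is no longer trivial, which is where the FRI estimation theory cited above comes in.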

  2. Sewage sludge toxicity assessment using earthworm Eisenia fetida: can biochemical and histopathological analysis provide fast and accurate insight?

    PubMed

    Babić, S; Barišić, J; Malev, O; Klobučar, G; Popović, N Topić; Strunjak-Perović, I; Krasnići, N; Čož-Rakovac, R; Klobučar, R Sauerborn

    2016-06-01

Sewage sludge (SS) is a complex organic by-product of wastewater treatment plants. Deposition of large amounts of SS can increase the risk of soil contamination; therefore, there is an increasing need for fast and accurate assessment of its toxic potential. Toxic effects of SS were tested on earthworm Eisenia fetida tissue at the subcellular and biochemical level. Earthworms were exposed to depot sludge (DS) at concentration ratios of 30 or 70%, and to undiluted, 10-fold and 100-fold diluted active sludge (AS). Exposure to DS lasted 24/48 h (acute), 96 h (semi-acute) and 7/14/28 days (sub-chronic); exposure to AS lasted 48 h. Toxic effects were assessed by measuring multixenobiotic resistance mechanism (MXR) activity and lipid peroxidation levels, and by observing morphological alterations and behavioural changes. Biochemical markers confirmed the presence of MXR inhibitors in the tested AS and DS and highlighted SS-induced oxidative stress. MXR inhibition and thiobarbituric acid reactive substance (TBARS) concentrations in the whole earthworm body were higher after exposure to the lower concentration of DS. Furthermore, histopathological changes revealed damage to the earthworm body wall tissue layers as well as to the epithelial and chloragogen cells in the typhlosole region. These changes were proportional to the SS concentration in the tested soils and to exposure duration. The results may contribute to understanding SS-induced toxic effects on terrestrial invertebrates exposed through soil contact and to identifying the defence mechanisms of earthworms. PMID:26971513

  4. Fast, Simple and Accurate Handwritten Digit Classification by Training Shallow Neural Network Classifiers with the 'Extreme Learning Machine' Algorithm.

    PubMed

    McDonnell, Mark D; Tissera, Migel D; Vladusich, Tony; van Schaik, André; Tapson, Jonathan

    2015-01-01

Recent advances in training deep (multi-layer) architectures have inspired a renaissance in neural network use. For example, deep convolutional networks are becoming the default option for difficult tasks on large datasets, such as image and speech recognition. However, here we show that error rates below 1% on the MNIST handwritten digit benchmark can be replicated with shallow non-convolutional neural networks. This is achieved by training such networks using the 'Extreme Learning Machine' (ELM) approach, which also enables a very rapid training time (∼10 minutes). Adding distortions, as is common practice for MNIST, reduces error rates even further. Our methods are also shown to be capable of achieving less than 5.5% error rates on the NORB image database. To achieve these results, we introduce several enhancements to the standard ELM algorithm, which individually and in combination can significantly improve performance. The main innovation is to ensure each hidden unit operates only on a randomly sized and positioned patch of each image. This form of random 'receptive field' sampling of the input ensures the input weight matrix is sparse, with about 90% of weights equal to zero. Furthermore, combining our methods with a small number of iterations of a single-batch backpropagation method can significantly reduce the number of hidden units required to achieve a particular performance. Our close-to-state-of-the-art results for MNIST and NORB suggest that the ease of use and accuracy of the ELM algorithm for designing a single-hidden-layer neural network classifier should cause it to be given greater consideration, either as a standalone method for simpler problems or as the final classification stage in deep neural networks applied to more difficult problems.
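The essence of the ELM training scheme described here, including the sparse random "receptive field" masking, can be sketched on synthetic data. This is a minimal illustration (random linearly generated labels, not MNIST), assuming tanh hidden units and a ridge-regularized least-squares readout:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features, n_hidden, n_classes = 200, 20, 100, 3

# Synthetic, learnable data: labels come from a random linear rule on the inputs.
X = rng.normal(size=(n_samples, n_features))
y = np.argmax(X @ rng.normal(size=(n_features, n_classes)), axis=1)
T = np.eye(n_classes)[y]                      # one-hot targets for the readout

# Random, untrained input weights; ~90% are zeroed to mimic the paper's sparse
# random receptive-field sampling of the input.
W_in = rng.normal(size=(n_features, n_hidden))
W_in *= rng.random(W_in.shape) < 0.1
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W_in + b)                     # hidden-layer activations

# The entire "training": ridge-regularized least squares for the output weights.
lam = 1e-2
W_out = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ T)

train_acc = (np.argmax(H @ W_out, axis=1) == y).mean()
print(train_acc)
```

Because only `W_out` is solved for, there is no iterative gradient descent at all, which is what makes ELM training so fast.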

  5. Simple, Sensitive and Accurate Multiplex Detection of Clinically Important Melanoma DNA Mutations in Circulating Tumour DNA with SERS Nanotags

    PubMed Central

    Wee, Eugene J.H.; Wang, Yuling; Tsao, Simon Chang-Hao; Trau, Matt

    2016-01-01

Sensitive and accurate identification of specific DNA mutations can influence clinical decisions. However, accurate diagnosis from limiting samples such as circulating tumour DNA (ctDNA) is challenging. Current approaches based on fluorescence, such as quantitative PCR (qPCR) and, more recently, droplet digital PCR (ddPCR), have limitations in multiplex detection and sensitivity, and require expensive specialized equipment. Herein we describe an assay capitalizing on the multiplexing and sensitivity benefits of surface-enhanced Raman spectroscopy (SERS) with the simplicity of standard PCR to address the limitations of current approaches. This proof-of-concept method could reproducibly detect as few as 0.1% (10 copies, CV < 9%) of target sequences, demonstrating the high sensitivity of the method. The method was then applied to the specific multiplex detection of three important melanoma mutations. Finally, the PCR/SERS assay was used to genotype cell lines and ctDNA from serum samples, with results subsequently validated by ddPCR. With ddPCR-like sensitivity and accuracy, yet the convenience of standard PCR, we believe this multiplex PCR/SERS method could find wide application in both diagnostics and research. PMID:27446486

  6. SU-E-J-208: Fast and Accurate Auto-Segmentation of Abdominal Organs at Risk for Online Adaptive Radiotherapy

    SciTech Connect

    Gupta, V; Wang, Y; Romero, A; Heijmen, B; Hoogeman, M; Myronenko, A; Jordan, P

    2014-06-01

Purpose: Various studies have demonstrated that online adaptive radiotherapy by real-time re-optimization of the treatment plan can improve organs-at-risk (OARs) sparing in the abdominal region. Its clinical implementation, however, requires fast and accurate auto-segmentation of OARs in CT scans acquired just before each treatment fraction. Auto-segmentation is particularly challenging in the abdominal region due to the frequently observed large deformations. We present a clinical validation of a new auto-segmentation method that uses fully automated non-rigid registration for propagating abdominal OAR contours from planning to daily treatment CT scans. Methods: OARs were manually contoured by an expert panel to obtain ground truth contours for repeat CT scans (3 per patient) of 10 patients. For the non-rigid alignment, we used a new non-rigid registration method that estimates the deformation field by optimizing the local normalized correlation coefficient with smoothness regularization. This field was used to propagate planning contours to repeat CTs. To quantify the performance of the auto-segmentation, we compared the propagated and ground truth contours using two widely used metrics: Dice coefficient (Dc) and Hausdorff distance (Hd). The proposed method was benchmarked against translation- and rigid-alignment-based auto-segmentation. Results: For all organs, the auto-segmentation performed better than the baseline (translation), with an average processing time of 15 s per fraction CT. The overall improvements ranged from 2% (heart) to 32% (pancreas) in Dc, and 27% (heart) to 62% (spinal cord) in Hd. For liver, kidneys, gall bladder, stomach, spinal cord and heart, Dc above 0.85 was achieved. Duodenum and pancreas were the most challenging organs, both showing relatively larger spreads and medians of 0.79 and 2.1 mm for Dc and Hd, respectively. Conclusion: Based on the achieved accuracy and computational time, we conclude that the investigated auto-segmentation method is a promising candidate for online adaptive radiotherapy.
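The two evaluation metrics named above are simple to state. A minimal 2D sketch (the study scores 3D contours with physical voxel spacing), assuming binary masks:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient: 2|A∩B| / (|A|+|B|) for two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(pts_a, pts_b):
    """Symmetric Hausdorff distance between two (N, 2) point sets."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

a = np.zeros((10, 10), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((10, 10), dtype=bool); b[3:7, 3:7] = True
# Two shifted 4x4 squares: Dice = 2*9/(16+16) = 0.5625, Hausdorff = sqrt(2).
print(dice(a, b), hausdorff(np.argwhere(a), np.argwhere(b)))
```

Dc rewards volumetric overlap while Hd penalizes the single worst contour deviation, which is why the paper reports both.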

  7. A Simple Iterative Model Accurately Captures Complex Trapline Formation by Bumblebees Across Spatial Scales and Flower Arrangements

    PubMed Central

    Reynolds, Andrew M.; Lihoreau, Mathieu; Chittka, Lars

    2013-01-01

    Pollinating bees develop foraging circuits (traplines) to visit multiple flowers in a manner that minimizes overall travel distance, a task analogous to the travelling salesman problem. We report on an in-depth exploration of an iterative improvement heuristic model of bumblebee traplining previously found to accurately replicate the establishment of stable routes by bees between flowers distributed over several hectares. The critical test for a model is its predictive power for empirical data for which the model has not been specifically developed, and here the model is shown to be consistent with observations from different research groups made at several spatial scales and using multiple configurations of flowers. We refine the model to account for the spatial search strategy of bees exploring their environment, and test several previously unexplored predictions. We find that the model predicts accurately 1) the increasing propensity of bees to optimize their foraging routes with increasing spatial scale; 2) that bees cannot establish stable optimal traplines for all spatial configurations of rewarding flowers; 3) the observed trade-off between travel distance and prioritization of high-reward sites (with a slight modification of the model); 4) the temporal pattern with which bees acquire approximate solutions to travelling salesman-like problems over several dozen foraging bouts; 5) the instability of visitation schedules in some spatial configurations of flowers; 6) the observation that in some flower arrays, bees' visitation schedules are highly individually different; 7) the searching behaviour that leads to efficient location of flowers and routes between them. Our model constitutes a robust theoretical platform to generate novel hypotheses and refine our understanding about how small-brained insects develop a representation of space and use it to navigate in complex and dynamic environments. PMID:23505353
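The flavour of an iterative improvement heuristic of this kind can be conveyed with a toy stand-in. This is not the published model (which biases transition probabilities by the lengths of experienced routes); it only illustrates the shared core: a forager repeats its current circuit and keeps a tried variant only when it shortens total travel distance. Flower coordinates are invented.

```python
import math
import random

random.seed(1)

flowers = [(0, 0), (3, 1), (1, 4), (5, 5), (2, 2)]  # index 0 plays the nest

def tour_length(order):
    """Total length of the closed circuit visiting flowers in this order."""
    pts = [flowers[i] for i in order] + [flowers[order[0]]]
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))

route = list(range(len(flowers)))
best = tour_length(route)
for bout in range(500):                       # each loop = one foraging bout
    i, j = sorted(random.sample(range(1, len(flowers)), 2))
    candidate = route[:i] + route[i:j + 1][::-1] + route[j + 1:]  # try a variant
    if tour_length(candidate) < best:         # keep it only if it is shorter
        route, best = candidate, tour_length(candidate)

print(route, round(best, 2))
```

Even this greedy caricature converges to short, stable circuits on small arrays, which is the behavioural signature the full model reproduces across spatial scales.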

  9. Solid phase red cell adherence immunoassay for anti-HIV 1: a simple, rapid, and accurate method for donor screening.

    PubMed

    Watson-Williams, E J; Yee, J L; Carlson, J R; Mertens, S C; Holland, P; Sinor, L; Plapp, F V

    1988-01-01

In technically developed countries in which acquired immunodeficiency syndrome is a risk to the recipients of blood or tissue, it is mandatory to screen donors for evidence of HIV (human immunodeficiency virus) infection. Current tests, based on enzyme-linked immunoassay, are time-consuming and expensive, and as such are unsuitable for developing countries. We describe a second-generation test using anti-human IgG coupled to red cells as the indicator that antibody has reacted with the test antigen (1). The test is complete within ten minutes, is simple to perform and to read, and has 100% sensitivity and 99% specificity compared with Western blot. It is ideal for the rapid screening of organ donors, and for the screening of blood donors where cost is a major consideration.
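For readers unfamiliar with the two figures of merit quoted above, here is how they are computed, using invented counts (not the study's data) with Western blot as the reference standard:

```python
# Illustrative confusion-matrix counts against the Western blot reference.
tp, fn = 50, 0       # blot-positive donors: the screen caught all 50
tn, fp = 990, 10     # blot-negative donors: 10 false alarms out of 1000

sensitivity = tp / (tp + fn)   # fraction of true positives detected
specificity = tn / (tn + fp)   # fraction of true negatives cleared
print(sensitivity, specificity)  # → 1.0 0.99
```

For donor screening, sensitivity is the critical number: a false negative releases an infected unit, while a false positive only costs a confirmatory test.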

  10. A Simple and Accurate Method To Calculate Free Energy Profiles and Reaction Rates from Restrained Molecular Simulations of Diffusive Processes.

    PubMed

    Ovchinnikov, Victor; Nam, Kwangho; Karplus, Martin

    2016-08-25

A method is developed to obtain simultaneously free energy profiles and diffusion constants from restrained molecular simulations in diffusive systems. The method is based on low-order expansions of the free energy and diffusivity as functions of the reaction coordinate. These expansions lead to simple analytical relationships between simulation statistics and model parameters. The method is tested on 1D and 2D model systems; its accuracy is found to be comparable to or better than that of the existing alternatives, which are briefly discussed. An important aspect of the method is that the free energy is constructed by integrating its derivatives, which can be computed without the need for overlapping sampling windows. The implementation of the method in any molecular simulation program that supports external umbrella potentials (e.g., CHARMM) requires modification of only a few lines of code. As a demonstration of its applicability to realistic biomolecular systems, the method is applied to model the α-helix ↔ β-sheet transition in a 16-residue peptide in implicit solvent, with the reaction coordinate provided by the string method. Possible modifications of the method are briefly discussed; they include generalization to multidimensional reaction coordinates [in the spirit of the model of Ermak and McCammon (Ermak, D. L.; McCammon, J. A. J. Chem. Phys. 1978, 69, 1352-1360)], a higher-order expansion of the free energy surface, applicability in nonequilibrium systems, and a simple test for Markovianity. In view of the small overhead of the method relative to standard umbrella sampling, we suggest its routine application in the cases where umbrella potential simulations are appropriate.
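The central idea (a mean-force derivative estimated per restrained window, then integrated, with no overlap between windows required) can be demonstrated on a toy quadratic free energy. This sketch is an assumed simplified form, not the paper's estimator: it uses the analytic window means in place of actual restrained-simulation data, with a harmonic restraint of stiffness k.

```python
import numpy as np

k = 50.0          # restraint force constant
a = 2.0           # "true" free energy F(z) = a*z**2, used only to fake the data
centers = np.linspace(-1.0, 1.0, 11)   # restraint centers, one per window

# "Simulated" mean position in each window: for a quadratic F under a harmonic
# restraint, the Boltzmann mean sits at the minimum of the combined potential.
z_mean = k * centers / (2 * a + k)

# Mean-force estimate at each sampled position: dF/dz ≈ -k*(<z> - center).
dFdz = -k * (z_mean - centers)

# Integrate the derivative (trapezoid rule) to rebuild F up to a constant.
F = np.concatenate(([0.0],
                    np.cumsum(0.5 * (dFdz[1:] + dFdz[:-1]) * np.diff(z_mean))))
F -= F[len(F) // 2]   # anchor the profile at the central window

print(np.abs(F - a * z_mean**2).max())  # recovers the quadratic profile
```

Because each window contributes only a local derivative, the windows never need to overlap, which is exactly the practical advantage highlighted in the abstract.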

  11. All sky coordination initiative, simple service for wide-field monitoring systems to cooperate in searching for fast optical transients

    NASA Astrophysics Data System (ADS)

    Karpov, S.; Sokołowski, M.; Gorbovskoy, E.

Here we stress the necessity of cooperation between different wide-field monitoring projects (FAVOR/TORTORA, Pi of the Sky, MASTER, etc.), aimed at the independent detection of fast optical transients, in order to maximize the area of sky covered at any moment and to coordinate the monitoring of gamma-ray telescopes' fields of view. We review currently available solutions and propose a simple protocol with a dedicated service (ASCI) for such systems to share their current status and pointing schedules.
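The kind of machine-readable status exchange such a service implies could look like the following. All field names and values are hypothetical (the abstract does not specify the ASCI wire format); the point is only that a flat JSON document suffices for sharing pointing and schedules between stations:

```python
import json

# Hypothetical station status report: current pointing plus upcoming schedule,
# so other monitoring systems can fill coverage gaps.
status = {
    "station": "FAVOR",
    "utc": "2013-06-01T21:30:00Z",
    "pointing": {"ra_deg": 183.2, "dec_deg": -12.5, "fov_deg": 20.0},
    "schedule": [
        {"start": "2013-06-01T22:00:00Z", "ra_deg": 210.0, "dec_deg": 5.0},
    ],
}
msg = json.dumps(status)            # what would travel over the wire
print(json.loads(msg)["pointing"])  # round-trips cleanly
```

A central service need only aggregate such reports and republish them for the participating projects to compute joint sky coverage.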

  12. Simple and fast classification of non-LTR retrotransposons based on phylogeny of their RT domain protein sequences.

    PubMed

    Kapitonov, Vladimir V; Tempel, Sébastien; Jurka, Jerzy

    2009-12-15

The rapidly growing number of sequenced genomes requires fast and accurate computational tools for the analysis of different transposable elements (TEs). In this paper we focus on a rapid and reliable procedure for the classification of autonomous non-LTR retrotransposons based on alignment and clustering of their reverse transcriptase (RT) domains. Typically, the RT domain protein sequences encoded by different non-LTR retrotransposons are similar to each other in terms of significant BLASTP E-values. Therefore, they can be easily detected by routine BLASTP searches of genomic DNA sequences coding for proteins similar to the RT domains of known non-LTR retrotransposons. However, detailed classification of non-LTR retrotransposons, i.e. their assignment to specific clades, is a slow and complex procedure that is not formalized or integrated as a standard set of computational methods and data. Here we describe a tool (RTclass1) designed for the fast and accurate automated assignment of novel non-LTR retrotransposons to known or novel clades using phylogenetic analysis of the RT domain protein sequences. RTclass1 classifies a particular non-LTR retrotransposon based on its RT domain in less than 10 min on a standard desktop computer and achieves 99.5% accuracy. RTclass1 works either as a stand-alone program installed locally or as a web server that can be accessed remotely by uploading sequence data through the internet (http://www.girinst.org/RTphylogeny/RTclass1).
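As a contrast to full phylogenetic placement, the cheapest possible flavour of clade assignment can be sketched as nearest-reference matching. This is a deliberately simplified stand-in, not how RTclass1 works (which aligns RT domains and places the query in a phylogeny); the clade names are real non-LTR clades, but the reference fragments are invented toy strings:

```python
from collections import Counter

REFS = {  # hypothetical toy RT-domain fragments, one per clade
    "L1":  "MTAFDLEKAGLPKWLNDTIREKLG",
    "CR1": "MSVQEWRNSTLPGYIKEHAVELSG",
}

def kmers(seq, k=3):
    """Multiset of overlapping k-mers in a protein sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def classify(query, refs=REFS, k=3):
    """Assign the clade whose reference shares the most k-mers with the query."""
    q = kmers(query, k)
    scores = {clade: sum((q & kmers(ref, k)).values())
              for clade, ref in refs.items()}
    return max(scores, key=scores.get)

print(classify("TAFDLEKAGLPKWLND"))  # fragment drawn from the L1 toy reference
```

Real clade assignment needs alignment and tree building precisely because distant homologs share few exact k-mers, which is the gap RTclass1 fills.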

  13. Open LED Illuminator: A Simple and Inexpensive LED Illuminator for Fast Multicolor Particle Tracking in Neurons

    PubMed Central

    Bosse, Jens B.; Tanneti, Nikhila S.; Hogue, Ian B.; Enquist, Lynn W.

    2015-01-01

    Dual-color live cell fluorescence microscopy of fast intracellular trafficking processes, such as axonal transport, requires rapid switching of illumination channels. Typical broad-spectrum sources necessitate the use of mechanical filter switching, which introduces delays between acquisition of different fluorescence channels, impeding the interpretation and quantification of highly dynamic processes. Light Emitting Diodes (LEDs), however, allow modulation of excitation light in microseconds. Here we provide a step-by-step protocol to enable any scientist to build a research-grade LED illuminator for live cell microscopy, even without prior experience with electronics or optics. We quantify and compare components, discuss our design considerations, and demonstrate the performance of our LED illuminator by imaging axonal transport of herpes virus particles with high temporal resolution. PMID:26600461

  15. Fast Quantum Molecular Dynamics Simulations of Simple Organic Liquids under Shock Compression

    NASA Astrophysics Data System (ADS)

    Cawkwell, Marc; Niklasson, Anders; Manner, Virginia; McGrane, Shawn; Dattelbaum, Dana

    2013-06-01

    The responses of liquid formic acid, acrylonitrile, and nitromethane to shock compression have been studied using quantum-based molecular dynamics simulations with the self-consistent tight-binding code LATTE. Microcanonical Born-Oppenheimer trajectories with precise energy conservation were computed without relying on an iterative self-consistent field optimization of the electronic degrees of freedom at each time step via the Fast Quantum Mechanical Molecular Dynamics formalism. The input shock pressures required to initiate chemistry in our simulations agree very well with recent laser- and flyer-plate-driven shock compression experiments. On-the-fly analysis of the electronic structure of the liquids over hundreds of picoseconds after dynamic compression revealed that their reactivity is strongly correlated with the temperature and pressure dependence of their HOMO-LUMO gap.

  16. The dissociative single and double ionization of some simple molecules by fast ions and VUV photons

    NASA Astrophysics Data System (ADS)

    Browne, Clive Ronald Harold

The partial cross sections for the production of energetic fragment protons/deuterons in the dissociative photoionization of HCl/DCl and H2S/D2S have been determined using vacuum ultraviolet (VUV) photons in the 20-50 eV photon energy range. Thresholds in the gross structure of the partial photoionization cross sections were visible, and these values were found to agree well with previous experimental and theoretical data corresponding to Franck-Condon excitations. The kinetic energy spectra of the fragment protons/deuterons produced in the dissociative single and double photoionization of HCl/DCl and H2S/D2S by 20-50 eV photons have been obtained for the first time. The nature of the fragment ions shown in the energy spectra confirms the important role played by indirect fragmentation mechanisms, especially in the double ionization processes. Complementary mass and kinetic energy spectra of the molecular fragment ions formed in the dissociative ionization of the CH4, C2H2, C2H4, C2H6 and C3H8 group of hydrocarbons have been measured using fast (3-30 keV) H+ and He+ ions. The observed differences between projectiles in the mass and energy spectra indicate that, in contrast to H+, fragmentation of the molecules by He+ ions is not governed by the Born approximation. An investigation has also been carried out into the energy distribution of the fragment ion pairs produced in the dissociative double ionization of H2, D2, H2O and N2 by fast (3-30 keV) ion impact. The kinetic energy spectra show ample evidence of low-energy (2-7 eV) ions and ion pairs, in agreement with previous reports, supporting the suggestion that they are formed through two-electron excited autoionizing states. The energy distributions of N+/N+ ion pairs produced from the dissociative ionization of N2 by He+ ions show considerable structure and some interesting contrasts with those produced by H+ ions.

  17. FAST TRACK COMMUNICATION: A simple proof of the recent generalizations of Hawking's black hole topology theorem

    NASA Astrophysics Data System (ADS)

    Rácz, István

    2008-08-01

A key result in four-dimensional black hole physics since the early 1970s is Hawking's topology theorem, which asserts that the cross-sections of an 'apparent horizon', separating the black hole region from the rest of the spacetime, are topologically 2-spheres. Later, during the 1990s, by applying a variant of Hawking's argument, Gibbons and Woolgar were able to show the existence of a genus-dependent lower bound for the entropy of topological black holes with negative cosmological constant. Recently, Hawking's black hole topology theorem, along with the results of Gibbons and Woolgar, has been generalized to black holes in higher dimensions. Our aim here is to give a simple, self-contained proof of these generalizations, which also makes their range of applicability transparent.

  18. A fast, simple and robust protocol for growing crystals in the lipidic cubic phase.

    PubMed

    Aherne, Margaret; Lyons, Joseph A; Caffrey, Martin

    2012-12-01

    A simple and inexpensive protocol for producing crystals in the sticky and viscous mesophase used for membrane protein crystallization by the in meso method is described. It provides crystals that appear within 15-30 min of setup at 293 K. The protocol gives the experimenter a convenient way of gaining familiarity and a level of comfort with the lipidic cubic mesophase, which can be daunting as a material when first encountered. Having used the protocol to produce crystals of the test protein, lysozyme, the experimenter can proceed with confidence to apply the method to more valuable membrane (and soluble) protein targets. The glass sandwich plates prepared using this robust protocol can further be used to practice harvesting and snap-cooling of in meso-grown crystals, to explore diffraction data collection with mesophase-embedded crystals, and for an assortment of quality control and calibration applications when used in combination with a crystallization robot.

  19. Simple and Fast Method for Fabrication of Endoscopic Implantable Sensor Arrays

    PubMed Central

    Tahirbegi, I. Bogachan; Alvira, Margarita; Mir, Mònica; Samitier, Josep

    2014-01-01

    Here we have developed a simple method for the fabrication of disposable implantable all-solid-state ion-selective electrodes (ISE) in an array format without using complex fabrication equipment or clean room facilities. The electrodes were designed in a needle shape rather than as planar electrodes to ensure full contact with the tissue. The needle-shape platform comprises 12 metallic pins which were functionalized with conductive inks and ISE membranes. The modified microelectrodes were characterized with cyclic voltammetry, scanning electron microscopy (SEM), and optical interferometry. The surface area and roughness factor of each microelectrode were determined and reproducible values were obtained for all the microelectrodes on the array. In this work, the microelectrodes were modified with membranes for the detection of pH and nitrate ions to prove the reliability of the fabricated sensor array platform adapted to an endoscope. PMID:24971473

  20. Fusion of microlitre water-in-oil droplets for simple, fast and green chemical assays.

    PubMed

    Chiu, S-H; Urban, P L

    2015-08-01

    A simple format for microscale chemical assays is proposed. It does not require the use of test tubes, microchips or microtiter plates. Microlitre-range (ca. 0.7-5.0 μL) aqueous droplets are generated by a commercial micropipette in a non-polar matrix inside a Petri dish. When two droplets are pipetted nearby, they spontaneously coalesce within seconds, priming a chemical reaction. Detection of the reaction product is accomplished by colorimetry, spectrophotometry, or fluorimetry using simple light-emitting diode (LED) arrays as the sources of monochromatic light, while chemiluminescence detection of the analytes present in single droplets is conducted in the dark. A smartphone camera is used as the detector. The limits of detection obtained for the developed in-droplet assays are estimated to be: 1.4 nmol (potassium permanganate by colorimetry), 1.4 pmol (fluorescein by fluorimetry), and 580 fmol (sodium hypochlorite by chemiluminescence detection). The format has successfully been used to monitor the progress of chemical and biochemical reactions over time with sub-second resolution. A semi-quantitative analysis of ascorbic acid using Tillman's reagent is presented. A few tens of individual droplets can be scanned in parallel. Rapid switching of the LED light sources with different wavelengths enables a spectral analysis of multiple droplets. Very little solid waste is produced. The assay matrix is readily recycled, thus the volume of liquid waste produced each time is also very small (typically, 1-10 μL per analysis). Various water-immiscible translucent liquids can be used as the reaction matrix: including silicone oil, 1-octanol as well as soybean cooking oil.
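    The detection limits quoted above follow from the usual 3σ convention for limits of detection. A minimal sketch of that convention, with hypothetical blank readings and a hypothetical calibration slope (none of the numbers come from the paper):

```python
import statistics

def limit_of_detection(blank_readings, slope):
    """Estimate the LOD as 3 * (std dev of blank) / calibration slope.

    This is the common 3-sigma convention: blank_readings are detector
    intensities for analyte-free droplets, and slope is the calibration
    sensitivity (intensity per mole of analyte).
    """
    sigma_blank = statistics.stdev(blank_readings)
    return 3.0 * sigma_blank / slope

# Hypothetical blank intensities and calibration slope
blanks = [100.2, 99.8, 100.5, 99.9, 100.1]
lod = limit_of_detection(blanks, slope=2.0e12)  # intensity units per mol
```

With a smartphone camera as the detector, the "intensity" would be a pixel statistic extracted from the droplet image; the convention itself is unchanged.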

  2. A Simple and Fast Semiautomatic Procedure for the Atomistic Modeling of Complex DNA Polyhedra.

    PubMed

    Alves, Cassio; Iacovelli, Federico; Falconi, Mattia; Cardamone, Francesca; Morozzo Della Rocca, Blasco; de Oliveira, Cristiano L P; Desideri, Alessandro

    2016-05-23

    A semiautomatic procedure to build complex atomistic covalently linked DNA nanocages has been implemented in a user-friendly, free, and fast program. As a test set, seven different truncated DNA polyhedra, composed of B-DNA double helices connected through short single-stranded linkers, have been generated. The atomistic structures, including a tetrahedron, a cube, an octahedron, a dodecahedron, a triangular prism, a pentagonal prism, and a hexagonal prism, have been probed through classical molecular dynamics and analyzed to evaluate their structural and dynamical properties and to highlight possible building faults. The analysis of the simulated trajectories also allows us to investigate the role of the different geometries in defining nanocage stability and flexibility. The data indicate that the cages are stable and that their structural and dynamical parameters measured along the trajectories are only slightly affected by the different geometries. These results demonstrate that the constraints imposed by the covalent links induce an almost identical conformational variability independently of the three-dimensional geometry and that the program presented here is a reliable and valid tool to engineer DNA nanostructures. PMID:27050675

  3. Simple preparation of coated resin complexes and their incorporation into fast-disintegrating tablets.

    PubMed

    Jeong, Seong Hoon; Park, Kinam

    2010-01-01

    Although ion-exchange resins are good drug carriers for achieving sustained-release properties, the resins alone may not be sufficient. For a further sustained-release effect, a diffusion barrier or coating on the resin surface can be utilized. Initially, microencapsulation using a w/o/w double emulsion method was used to apply ethylcellulose (EC) onto the drug/resin complexes. Typical pharmaceutical waxes can serve as alternative materials to delay drug release from the complex. After the coating, the coated resin particles were incorporated into fast-disintegrating tablets to assess the effects of wet granulation and compression on the release. Among the different grades of EC tested (Ethocel 20, 45, and 100), more viscous EC resulted in better morphologies and sustained-release effects. Because the drug release rate was significantly dependent on the coating level, the release rate can be modified easily by varying the coating level. The drug release rate was also strongly dependent on the granulation and compaction process as the coated particles were incorporated into the tablet dosage form. Among the tested waxes, stearic acid contributed to sustained release as well as to lubrication and wetting properties. Even though microencapsulation or wax coating may not be practical for real manufacturing, the results provide valuable information on how to formulate sustained-release dosage forms and on their behaviour during tablet preparation. PMID:20191352

  6. Simple fast noninvasive technique for measuring brachial wall mechanics during flow mediated vasodilatation analysis

    NASA Astrophysics Data System (ADS)

    Mahmoud, Ahmed M.; Stapleton, Phoebe A.; Frisbee, Jefferson C.; D'Audiffret, Alexandre; Mukdadi, Osama M.

    2009-02-01

    Measurement of flow-mediated vasodilatation (FMD) in brachial and other conduit arteries has become a common method to assess the status of endothelial function in vivo. In spite of the direct relationship between the arterial wall multi-component strains and FMD responses, direct measurement of the wall strain tensor due to FMD has not yet been reported in the literature. In this work, a noninvasive direct ultrasound-based strain tensor measuring (STM) technique is presented to assess changes in the mechanical parameters of the vascular wall during FMD. The STM technique utilizes only sequences of B-mode ultrasound images, and starts with segmenting a region of interest within the artery and providing the acquisition parameters. Then a block matching technique is employed to measure the frame-to-frame local velocities. Displacements, diameter change, the multi-component strain tensor and strain rates are then calculated by integrating or differentiating velocity components. The accuracy of the STM algorithm was assessed using a phantom study, and was further validated using in vivo data from human subjects. Results indicate the validity and versatility of the STM algorithm, and describe how parameters other than the diameter change are sensitive to pre- and post-occlusion, which can then be used for accurate assessment of atherosclerosis.
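    The strain computation applied to the tracked wall motion can be illustrated in one dimension. A simplified sketch with a hypothetical diameter trace (the actual STM technique computes the full multi-component strain tensor from block-matched B-mode velocities, which this does not attempt):

```python
import numpy as np

def strain_and_rate(diameters, dt):
    """Circumferential engineering strain and strain rate from a time
    series of arterial diameters (one value per ultrasound frame).

    strain[i] = (d[i] - d[0]) / d[0]; the rate is its time derivative,
    here approximated by central finite differences.
    """
    d = np.asarray(diameters, dtype=float)
    strain = (d - d[0]) / d[0]
    rate = np.gradient(strain, dt)
    return strain, rate

# Hypothetical diameter trace (mm) sampled at 50 frames/s
diam = [4.00, 4.02, 4.05, 4.08, 4.10]
strain, rate = strain_and_rate(diam, dt=0.02)
```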

  7. Fast and accurate procedure for the determination of Cr(VI) in solid samples by isotope dilution mass spectrometry.

    PubMed

    Fabregat-Cabello, Neus; Rodríguez-González, Pablo; Castillo, Ángel; Malherbe, Julien; Roig-Navarro, Antoni F; Long, Stephen E; García Alonso, J Ignacio

    2012-11-20

    We present here a new environmental measurement method for the rapid extraction and accurate quantification of Cr(VI) in solid samples. The quantitative extraction of Cr(VI) is achieved in 10 minutes by means of focused microwave assisted extraction using 50 mmol/L ethylenediaminetetraacetic acid (EDTA) at pH 10 as extractant. In addition, it enables the separation of Cr species by anion exchange chromatography using a mobile phase which is a 1:10 dilution of the extracting solution. Thus, neutralization or acidification steps, which are prone to cause interconversion of Cr species, are not needed. Another benefit of using EDTA is that it allows the Cr(III)-EDTA complex and Cr(VI) to be measured simultaneously in an alkaline extraction solution. The application of a 10 minute focused microwave assisted extraction (5 min at 90 °C plus 5 min at 110 °C) has been shown to quantitatively extract all forms of hexavalent chromium from the standard reference material (SRM) candidates NIST 2700 and NIST 2701. A double spike isotope dilution mass spectrometry (IDMS) procedure was employed to study chromium interconversion reactions. It was observed that the formation of a Cr(III)-EDTA complex prevented Cr(III) oxidation in these two reference materials. Thus, a double spiking strategy for quantification is not required, and a single spike IDMS procedure using isotopically enriched Cr(VI) provided accurate results.
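    Single-spike IDMS quantification rests on a simple isotope-balance equation for the blend. A sketch using approximate natural Cr abundances and a hypothetical enriched-spike composition (none of these numbers come from the paper, and the real procedure involves mass-bias and blank corrections omitted here):

```python
def idms_amount(n_spike, ab_sample, ab_spike, r_measured):
    """Single-spike isotope dilution: solve for the analyte amount in
    the sample from the measured 52Cr/53Cr ratio of the blend.

    ab_sample/ab_spike are (abundance_52, abundance_53) tuples and
    r_measured is the 52/53 ratio measured in the sample+spike blend.
    Derived from R_m = (n_x*a_x + n_y*a_y) / (n_x*b_x + n_y*b_y).
    """
    a_x, b_x = ab_sample
    a_y, b_y = ab_spike
    return n_spike * (a_y - r_measured * b_y) / (r_measured * b_x - a_x)

# Synthetic check: blend 2 units of natural Cr with 1 unit of a
# hypothetical 53Cr-enriched spike and recover the sample amount.
natural = (0.8379, 0.0950)   # approximate natural 52Cr, 53Cr abundances
spike = (0.05, 0.90)         # hypothetical enriched-spike abundances
r_m = (2 * natural[0] + 1 * spike[0]) / (2 * natural[1] + 1 * spike[1])
n_sample = idms_amount(1.0, natural, spike, r_m)  # recovers 2.0
```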

  8. Fast and Accurate Phylogenetic Reconstruction from High-Resolution Whole-Genome Data and a Novel Robustness Estimator

    NASA Astrophysics Data System (ADS)

    Lin, Yu; Rajan, Vaibhav; Moret, Bernard M. E.

    The rapid accumulation of whole-genome data has renewed interest in the study of genomic rearrangements. Comparative genomics, evolutionary biology, and cancer research all require models and algorithms to elucidate the mechanisms, history, and consequences of these rearrangements. However, even simple models lead to NP-hard problems, particularly in the area of phylogenetic analysis. Current approaches are limited to small collections of genomes and low-resolution data (typically a few hundred syntenic blocks). Moreover, whereas phylogenetic analyses from sequence data are deemed incomplete unless bootstrapping scores (a measure of confidence) are given for each tree edge, no equivalent to bootstrapping exists for rearrangement-based phylogenetic analysis.

  9. Arapan-S: a fast and highly accurate whole-genome assembly software for viruses and small genomes

    PubMed Central

    2012-01-01

    Background Genome assembly is considered to be a challenging problem in computational biology, and has been studied extensively by many researchers. It is extremely difficult to build a general assembler that is able to reconstruct the original sequence instead of many contigs. However, we believe that creating specific assemblers, for solving specific cases, will be much more fruitful than creating general assemblers. Findings In this paper, we present Arapan-S, a whole-genome assembly program dedicated to handling small genomes. It provides only one contig (along with the reverse complement of this contig) in many cases. Although some genomes consist of a number of segments, the implemented algorithm can detect all the segments, as we demonstrate for Influenza Virus A. The Arapan-S program is based on the de Bruijn graph. We have implemented a very sophisticated and fast method to reconstruct the original sequence and neglect erroneous k-mers. The method explores the graph by using neither the shortest nor the longest path, but rather a specific and reliable path based on the coverage level or k-mers' lengths. Arapan-S uses short reads, and it was tested on raw data downloaded from the NCBI Trace Archive. Conclusions Our findings show that the accuracy of the assembly was very high; the result was checked against the European Bioinformatics Institute (EBI) database using the NCBI BLAST Sequence Similarity Search. The identity and the genome coverage were more than 99%. We also compared the efficiency of Arapan-S with other well-known assemblers. In dealing with small genomes, the accuracy of Arapan-S is significantly higher than the accuracy of other assemblers. The assembly process is very fast and requires only a few seconds. Arapan-S is available for free to the public. The binary files for Arapan-S are available through http://sourceforge.net/projects/dnascissor/files/. PMID:22591859
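    The de Bruijn graph underlying Arapan-S can be illustrated with a toy construction and a naive greedy walk; Arapan-S itself selects its path by coverage level or k-mer lengths, which this sketch does not implement:

```python
from collections import defaultdict

def de_bruijn(reads, k):
    """Build a de Bruijn graph: nodes are (k-1)-mers, edges are k-mers."""
    graph = defaultdict(list)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].append(kmer[1:])  # prefix -> suffix edge
    return graph

def walk(graph, start):
    """Greedy walk consuming edges until stuck (toy reconstruction)."""
    g = {node: list(succs) for node, succs in graph.items()}
    contig, node = start, start
    while g.get(node):
        node = g[node].pop(0)
        contig += node[-1]
    return contig

reads = ["ACGTAC", "GTACGT"]
g = de_bruijn(reads, 3)
contig = walk(g, "AC")  # -> "ACGTACGTAC"
```

Real assemblers must additionally handle sequencing errors, repeats, and reverse complements, which is where the coverage-based path selection described in the abstract comes in.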

  10. A simple device for sub-aperture stitching of fast convex surfaces

    NASA Astrophysics Data System (ADS)

    Aguirre-Aguirre, D.; Izazaga-Pérez, R.; Villalobos-Mendoza, B.; Carrasco-Licea, E.; Granados-Agustin, F. S.; Percino-Zacarías, M. E.; Salazar-Morales, M. F.; Cruz-Zavala, E.

    2015-10-01

    In this work, we show a simple device that helps in the use of the sub-aperture stitching method for testing convex surfaces with large diameter and a small f/#. This device was designed at INAOE's optical workshop to solve the problem that arises when a Newton interferometer and the sub-aperture stitching method are used. It is well known that if the f/# of a surface is small, the slopes over the surface increase rapidly, and this is critical for points far from the vertex. Therefore, if we use a reference master in the Newton interferometer to test a convex surface with a large diameter over an area far from the vertex, the master tends to slide, causing scratches on the surface under test. To solve this problem, a device was designed for mounting the surface under test with two degrees of freedom: a rotation axis and a lever to tilt the surface. As a result, the optical axis of the master can be placed in a vertical position, avoiding undesired movements of the master and making the sub-aperture stitching easier. We describe the proposed design and the results obtained with this device.

  11. A simple and inclusive method to determine the habit plane in transmission electron microscope based on accurate measurement of foil thickness

    SciTech Connect

    Qiu, Dong Zhang, Mingxing

    2014-08-15

    A simple and inclusive method is proposed for accurate determination of the habit plane between bicrystals in transmission electron microscope. Whilst this method can be regarded as a variant of surface trace analysis, the major innovation lies in the improved accuracy and efficiency of foil thickness measurement, which involves a simple tilt of the thin foil about a permanent tilting axis of the specimen holder, rather than cumbersome tilt about the surface trace of the habit plane. Experimental study has been done to validate this proposed method in determining the habit plane between lamellar α{sub 2} plates and γ matrix in a Ti–Al–Nb alloy. Both high accuracy (± 1°) and high precision (± 1°) have been achieved by using the new method. The source of the experimental errors as well as the applicability of this method is discussed. Some tips to minimise the experimental errors are also suggested. - Highlights: • An improved algorithm is formulated to measure the foil thickness. • Habit plane can be determined with a single tilt holder based on the new algorithm. • Better accuracy and precision within ± 1° are achievable using the proposed method. • The data for multi-facet determination can be collected simultaneously.

  12. A simple and accurate SNP scoring strategy based on type IIS restriction endonuclease cleavage and matrix-assisted laser desorption/ionization mass spectrometry

    PubMed Central

    Hong, Sun Pyo; Ji, Seung Il; Rhee, Hwanseok; Shin, Soo Kyeong; Hwang, Sun Young; Lee, Seung Hwan; Lee, Soong Deok; Oh, Heung-Bum; Yoo, Wangdon; Kim, Soo-Ok

    2008-01-01

    Background We describe the development of a novel matrix-assisted laser desorption ionization time-of-flight (MALDI-TOF)-based single nucleotide polymorphism (SNP) scoring strategy, termed Restriction Fragment Mass Polymorphism (RFMP), that is suitable for genotyping variations in a simple, accurate, and high-throughput manner. The assay is based on polymerase chain reaction (PCR) amplification and mass measurement of oligonucleotides containing a polymorphic base, to which a type IIS restriction endonuclease recognition site was introduced by PCR amplification. Enzymatic cleavage of the products leads to excision of oligonucleotide fragments representing base variation of the polymorphic site, whose masses were determined by MALDI-TOF MS. Results The assay represents an improvement over previous methods because it relies on the direct mass determination of PCR products rather than on an indirect analysis, where a base-extended or fluorescent reporter tag is interpreted. The RFMP strategy is simple and straightforward, requiring one restriction digestion reaction following target amplification in a single vessel. With this technology, genotypes are generated with a high call rate (99.6%) and high accuracy (99.8%) as determined by independent sequencing. Conclusion The simplicity, accuracy and amenability to high-throughput screening analysis should make the RFMP assay suitable for large-scale genotype association studies as well as clinical genotyping in laboratories. PMID:18538037
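    The RFMP readout reduces to distinguishing oligonucleotide fragment masses that differ at the polymorphic base. An illustrative sketch using approximate average DNA residue masses (the values and sequences are illustrative, not taken from the paper):

```python
# Approximate average masses (Da) of DNA nucleotide residues in a chain
RESIDUE_MASS = {"A": 313.21, "C": 289.18, "G": 329.21, "T": 304.20}
WATER = 18.02  # terminal H and OH

def oligo_mass(seq):
    """Approximate average mass of a DNA oligo with 5'-OH and 3'-OH ends."""
    return sum(RESIDUE_MASS[base] for base in seq) + WATER

# A single-base polymorphism shifts the fragment mass by a fixed amount,
# which is what MALDI-TOF resolves in the RFMP assay: here an A/G SNP
# in a hypothetical excised fragment shifts the mass by ~16 Da.
delta = oligo_mass("ACGTA") - oligo_mass("ACGTG")
```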

  13. Millimeter-wave interferometry: an attractive technique for fast and accurate sensing of civil and mechanical structures

    NASA Astrophysics Data System (ADS)

    Kim, Seoktae; Nguyen, Cam

    2014-04-01

    This paper discusses RF interferometry at millimeter-wave frequencies for sensing applications and reports the development of a millimeter-wave interferometric sensor operating around 35 GHz. The sensor is completely realized using microwave integrated circuits (MICs) and microwave monolithic integrated circuits (MMICs). It has been used for various sensing applications, including displacement and velocity measurement. The sensor achieves a resolution and maximum error of only 10 μm and 27 μm, respectively, for displacement sensing and can measure velocities as low as 27.7 mm/s with a resolution of about 2.7 mm/s. Quick response and accurate sensing, as demonstrated by the developed millimeter-wave interferometric sensor, make millimeter-wave interferometry attractive for sensing of various civil and mechanical structures.
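    Interferometric displacement sensing of this kind recovers target motion from the phase of the reflected signal. A hedged sketch of the standard round-trip relation d = Δφ·λ/(4π), assuming it applies to this sensor (the paper's processing chain is not described here):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def displacement_from_phase(delta_phi_rad, freq_hz):
    """Round-trip interferometric displacement: d = delta_phi * lambda / (4*pi).

    The factor of 4*pi (rather than 2*pi) accounts for the two-way
    path of the reflected millimeter-wave signal.
    """
    wavelength = C / freq_hz
    return delta_phi_rad * wavelength / (4 * math.pi)

# A full 2*pi phase cycle at 35 GHz corresponds to half a wavelength,
# roughly 4.3 mm of target motion.
d = displacement_from_phase(2 * math.pi, 35e9)
```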

  14. Fast and accurate resonance assignment of small-to-large proteins by combining automated and manual approaches.

    PubMed

    Niklasson, Markus; Ahlner, Alexandra; Andresen, Cecilia; Marsh, Joseph A; Lundström, Patrik

    2015-01-01

    The process of resonance assignment is fundamental to most NMR studies of protein structure and dynamics. Unfortunately, the manual assignment of residues is tedious and time-consuming, and can represent a significant bottleneck for further characterization. Furthermore, while automated approaches have been developed, they are often limited in their accuracy, particularly for larger proteins. Here, we address this by introducing the software COMPASS, which, by combining automated resonance assignment with manual intervention, is able to achieve accuracy approaching that from manual assignments at greatly accelerated speeds. Moreover, by including the option to compensate for isotope shift effects in deuterated proteins, COMPASS is far more accurate for larger proteins than existing automated methods. COMPASS is an open-source project licensed under GNU General Public License and is available for download from http://www.liu.se/forskning/foass/tidigare-foass/patrik-lundstrom/software?l=en. Source code and binaries for Linux, Mac OS X and Microsoft Windows are available.

  17. Protocol: a fast and simple in situ PCR method for localising gene expression in plant tissue

    PubMed Central

    2014-01-01

    Background An important step in characterising the function of a gene is identifying the cells in which it is expressed. Traditional methods to determine this include in situ hybridisation, gene promoter-reporter fusions or cell isolation/purification techniques followed by quantitative PCR. These methods, although frequently used, can have limitations including their time-consuming nature, limited specificity, reliance upon well-annotated promoters, high cost, and the need for specialized equipment. In situ PCR is a relatively simple and rapid method that involves the amplification of specific mRNA directly within plant tissue whilst incorporating labelled nucleotides that are subsequently detected by immunohistochemistry. Another notable advantage of this technique is that it can be used on plants that are not easily genetically transformed. Results An optimised workflow for in-tube and on-slide in situ PCR is presented that has been evaluated using multiple plant species and tissue types. The protocol includes optimised methods for: (i) fixing, embedding, and sectioning of plant tissue; (ii) DNase treatment; (iii) in situ RT-PCR with the incorporation of DIG-labelled nucleotides; (iv) signal detection using colourimetric alkaline phosphatase substrates; and (v) mounting and microscopy. We also provide advice on troubleshooting and the limitations of using fluorescence as an alternative detection method. Using our protocol, reliable results can be obtained within two days from harvesting plant material. This method requires limited specialized equipment and can be adopted by any laboratory with a vibratome (vibrating blade microtome), a standard thermocycler, and a microscope. We show that the technique can be used to localise gene expression with cell-specific resolution. Conclusions The in situ PCR method presented here is highly sensitive and specific. It reliably identifies the cellular expression pattern of even highly homologous and low abundance

  18. Validation of a fast and accurate chromatographic method for detailed quantification of vitamin E in green leafy vegetables.

    PubMed

    Cruz, Rebeca; Casal, Susana

    2013-11-15

    Vitamin E analysis in green vegetables is performed by an array of different methods, making it difficult to compare published data or choosing the adequate one for a particular sample. Aiming to achieve a consistent method with wide applicability, the current study reports the development and validation of a fast micro-method for quantification of vitamin E in green leafy vegetables. The methodology uses solid-liquid extraction based on the Folch method, with tocol as internal standard, and normal-phase HPLC with fluorescence detection. A large linear working range was confirmed, being highly reproducible, with inter-day precisions below 5% (RSD). Method sensitivity was established (below 0.02 μg/g fresh weight), and accuracy was assessed by recovery tests (>96%). The method was tested in different green leafy vegetables, evidencing diverse tocochromanol profiles, with variable ratios and amounts of α- and γ-tocopherol, and other minor compounds. The methodology is adequate for routine analyses, with a reduced chromatographic run (<7 min) and organic solvent consumption, and requires only standard chromatographic equipment available in most laboratories.
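    Quantification against an internal standard such as tocol follows the usual peak-area ratio formula. A minimal sketch with hypothetical peak areas and a hypothetical relative response factor (not values from the paper):

```python
def quantify_with_internal_standard(area_analyte, area_is, amount_is, rrf):
    """Internal-standard quantification:
    amount = (A_analyte / A_IS) * amount_IS / RRF,
    where RRF is the relative response factor of the analyte versus
    the internal standard, determined from calibration.
    """
    return (area_analyte / area_is) * amount_is / rrf

# Hypothetical fluorescence peak areas, 2.0 ug of internal standard,
# and a relative response factor of 1.1
amount = quantify_with_internal_standard(5500.0, 5000.0, 2.0, 1.1)
```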

  19. Fast and accurate global multiphase arrival tracking: the irregular shortest-path method in a 3-D spherical earth model

    NASA Astrophysics Data System (ADS)

    Huang, Guo-Jiao; Bai, Chao-Ying; Greenhalgh, Stewart

    2013-09-01

    The traditional grid/cell-based wavefront expansion algorithms, such as the shortest path algorithm, can only find the first arrivals or multiply reflected (or mode converted) waves transmitted from subsurface interfaces, but cannot calculate the other later reflections/conversions having a minimax time path. In order to overcome the above limitations, we introduce the concept of a stationary minimax time path of Fermat's Principle into the multistage irregular shortest path method. Here we extend it from Cartesian coordinates for a flat earth model to global ray tracing of multiple phases in a 3-D complex spherical earth model. The ray tracing results for 49 different kinds of crustal, mantle and core phases show that the maximum absolute traveltime error is less than 0.12 s and the average absolute traveltime error is within 0.09 s when compared with the AK135 theoretical traveltime tables for a 1-D reference model. Numerical tests in terms of computational accuracy and CPU time consumption indicate that the new scheme is an accurate, efficient and a practical way to perform 3-D multiphase arrival tracking in regional or global traveltime tomography.
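    On a fixed discretised model, the shortest path method for first arrivals reduces to Dijkstra's algorithm; the multistage extension for later minimax-time phases is beyond this sketch. A toy example with hypothetical nodes and edge traveltimes:

```python
import heapq

def shortest_traveltime(graph, source):
    """Dijkstra shortest-path traveltimes from a source node.

    graph maps each node to a list of (neighbour, edge_traveltime)
    pairs; returns the minimum traveltime to every reachable node.
    """
    times = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        t, node = heapq.heappop(heap)
        if t > times.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, dt in graph.get(node, []):
            new_t = t + dt
            if new_t < times.get(nbr, float("inf")):
                times[nbr] = new_t
                heapq.heappush(heap, (new_t, nbr))
    return times

# Toy network with traveltimes (s) on each edge
graph = {
    "src": [("a", 1.0), ("b", 4.0)],
    "a": [("b", 2.0), ("rcv", 6.0)],
    "b": [("rcv", 1.0)],
}
times = shortest_traveltime(graph, "src")  # times["rcv"] == 4.0
```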

  20. ICE-COLA: towards fast and accurate synthetic galaxy catalogues optimizing a quasi-N-body method

    NASA Astrophysics Data System (ADS)

    Izard, Albert; Crocce, Martin; Fosalba, Pablo

    2016-07-01

    Next-generation galaxy surveys demand massive ensembles of galaxy mocks to model the observables and their covariances, which is computationally prohibitive using N-body simulations. COmoving Lagrangian Acceleration (COLA) is a novel method designed to make this feasible by following an approximate dynamics, with up to three orders of magnitude speed-up compared to an exact N-body. In this paper, we investigate the optimization of the code parameters in the compromise between computational cost and recovered accuracy in observables such as two-point clustering and halo abundance. We benchmark those observables against a state-of-the-art N-body run, the MICE Grand Challenge simulation. We find that using 40 time-steps linearly spaced since z_i ≈ 20, and a force mesh resolution three times finer than that of the number of particles, yields a matter power spectrum within 1 per cent for k ≲ 1 h Mpc⁻¹ and a halo mass function within 5 per cent of those in the N-body. In turn, the halo bias is accurate within 2 per cent for k ≲ 0.7 h Mpc⁻¹, whereas, in redshift space, the halo monopole and quadrupole are within 4 per cent for k ≲ 0.4 h Mpc⁻¹. These results hold for a broad range in redshift (0 < z < 1) and for all halo mass bins investigated (M > 10^12.5 h⁻¹ M⊙). To bring the accuracy in clustering to the one per cent level, we study various methods that re-calibrate halo masses and/or velocities. We thus propose an optimized choice of COLA code parameters as a powerful tool to optimally exploit future galaxy surveys.
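    The benchmark criterion quoted above (power spectrum within 1 per cent of the N-body for k ≲ 1 h Mpc⁻¹) amounts to a per-k-bin fractional deviation check, which can be sketched as follows; the spectra below are made-up illustrative numbers, not MICE data:

    ```python
    import numpy as np

    def frac_deviation(p_approx, p_exact):
        """Fractional deviation |P_approx / P_exact - 1| per k-bin, the
        figure of merit used when benchmarking an approximate (COLA-like)
        run against a reference N-body power spectrum."""
        return np.abs(np.asarray(p_approx) / np.asarray(p_exact) - 1.0)

    # Hypothetical spectra on a few k-bins (h/Mpc) -- illustrative only.
    k = np.array([0.1, 0.3, 0.5, 1.0])
    p_nbody = np.array([1.0e4, 2.5e3, 9.0e2, 1.8e2])
    p_cola = np.array([1.004e4, 2.492e3, 9.05e2, 1.815e2])

    dev = frac_deviation(p_cola, p_nbody)
    within_1pct = bool(np.all(dev[k <= 1.0] < 0.01))  # the paper's 1% criterion
    ```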

  1. A simple and fast physics-based analytical method to calculate therapeutic and stray doses from external beam, megavoltage x-ray therapy.

    PubMed

    Jagetic, Lydia J; Newhauser, Wayne D

    2015-06-21

    State-of-the-art radiotherapy treatment planning systems provide reliable estimates of the therapeutic radiation but are known to underestimate or neglect the stray radiation exposures. Most commonly, stray radiation exposures are reconstructed using empirical formulas or lookup tables. The purpose of this study was to develop the basic physics of a model capable of calculating the total absorbed dose both inside and outside of the therapeutic radiation beam for external beam photon therapy. The model was developed using measurements of total absorbed dose in a water-box phantom from a 6 MV medical linear accelerator to calculate dose profiles in both the in-plane and cross-plane direction for a variety of square field sizes and depths in water. The water-box phantom facilitated development of the basic physical aspects of the model. RMS discrepancies between measured and calculated total absorbed dose values in water were less than 9.3% for all fields studied. Computation times for 10 million dose points within a homogeneous phantom were approximately 4 min. These results suggest that the basic physics of the model are sufficiently simple, fast, and accurate to serve as a foundation for a variety of clinical and research applications, some of which may require that the model be extended or simplified based on the needs of the user. A potentially important advantage of a physics-based approach is that the model is more readily adaptable to a wide variety of treatment units and treatment techniques than empirical models are.
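    The agreement metric quoted above (RMS discrepancy below 9.3%) can be computed from any set of measured/calculated dose pairs. A minimal sketch; the dose points below are hypothetical, not the paper's data:

    ```python
    import math

    def rms_percent_discrepancy(measured, calculated):
        """Root-mean-square of the per-point percent discrepancies between
        measured and model-calculated dose, relative to the measured value.
        Illustrative of the agreement metric only, not the authors' code."""
        pct = [100.0 * (c - m) / m for m, c in zip(measured, calculated)]
        return math.sqrt(sum(p * p for p in pct) / len(pct))

    # Hypothetical depth-dose points (normalized units).
    measured = [1.00, 0.86, 0.71, 0.55]
    calculated = [1.02, 0.84, 0.73, 0.53]

    rms = rms_percent_discrepancy(measured, calculated)  # a few per cent here
    ```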

  2. A simple, efficient, and high-order accurate curved sliding-mesh interface approach to spectral difference method on coupled rotating and stationary domains

    NASA Astrophysics Data System (ADS)

    Zhang, Bin; Liang, Chunlei

    2015-08-01

    This paper presents a simple, efficient, and high-order accurate sliding-mesh interface approach to the spectral difference (SD) method. We demonstrate the approach by solving the two-dimensional compressible Navier-Stokes equations on quadrilateral grids. This approach is an extension of the straight mortar method originally designed for stationary domains [7,8]. Our sliding method creates curved dynamic mortars on sliding-mesh interfaces to couple rotating and stationary domains. On the nonconforming sliding-mesh interfaces, the related variables are first projected from cell faces to mortars to compute common fluxes, and then the common fluxes are projected back from the mortars to the cell faces to ensure conservation. To verify the spatial order of accuracy of the sliding-mesh spectral difference (SSD) method, both inviscid and viscous flow cases are tested. It is shown that the SSD method preserves the high-order accuracy of the SD method. Meanwhile, the SSD method is found to be very efficient in terms of computational cost. This novel sliding-mesh interface method is very suitable for parallel processing with domain decomposition. It can be applied to a wide range of problems, such as the hydrodynamics of marine propellers and the aerodynamics of rotorcraft, wind turbines, and oscillating-wing power generators.

  3. Benchmark atomization energy of ethane : importance of accurate zero-point vibrational energies and diagonal Born-Oppenheimer corrections for a 'simple' organic molecule.

    SciTech Connect

    Karton, A.; Martin, J. M. L.; Ruscic, B.; Chemistry; Weizmann Institute of Science

    2007-06-01

    A benchmark calculation of the atomization energy of the 'simple' organic molecule C2H6 (ethane) has been carried out by means of W4 theory. While the molecule is straightforward in terms of one-particle and n-particle basis set convergence, its large zero-point vibrational energy (and anharmonic correction thereto) and nontrivial diagonal Born-Oppenheimer correction (DBOC) represent interesting challenges. For the W4 set of molecules and C2H6, we show that DBOCs to the total atomization energy are systematically overestimated at the SCF level, and that the correlation correction converges very rapidly with the basis set. Thus, even at the CISD/cc-pVDZ level, useful correlation corrections to the DBOC are obtained. When applying such a correction, overall agreement with experiment was only marginally improved, but a more significant improvement is seen when hydrogen-containing systems are considered in isolation. We conclude that for closed-shell organic molecules, the greatest obstacles to highly accurate computational thermochemistry may not lie in the solution of the clamped-nuclei Schroedinger equation, but rather in the zero-point vibrational energy and the diagonal Born-Oppenheimer correction.

  4. Fast and accurate finite element analysis of large-scale three-dimensional photonic devices with a robust domain decomposition method.

    PubMed

    Xue, Ming-Feng; Kang, Young Mo; Arbabi, Amir; McKeown, Steven J; Goddard, Lynford L; Jin, Jian-Ming

    2014-02-24

    A fast and accurate full-wave technique based on the dual-primal finite element tearing and interconnecting method and the second-order transmission condition is presented for large-scale three-dimensional photonic device simulations. The technique decomposes a general three-dimensional electromagnetic problem into smaller subdomain problems so that parallel computing can be performed on distributed-memory computer clusters to reduce the simulation time significantly. With the electric fields computed everywhere, photonic device parameters such as transmission and reflection coefficients are extracted. Several photonic devices, with simulation volumes up to 1.9×10⁴ (λ/n_avg)³ and modeled with over one hundred million unknowns, are simulated to demonstrate the application, efficiency, and capability of this technique. The simulations show good agreement with experimental results and, in one special case, with a simplified two-dimensional simulation.

  5. Advanced oxidation protein products (AOPP) for monitoring oxidative stress in critically ill patients: a simple, fast and inexpensive automated technique.

    PubMed

    Selmeci, László; Seres, Leila; Antal, Magda; Lukács, Júlia; Regöly-Mérei, Andrea; Acsády, György

    2005-01-01

    Oxidative stress is known to be involved in many human pathological processes. Although there are numerous methods available for the assessment of oxidative stress, most of them are still not easily applicable in a routine clinical laboratory due to the complex methodology and/or lack of automation. In research into human oxidative stress, the simplification and automation of techniques represent a key issue from a laboratory point of view at present. In 1996 a novel oxidative stress biomarker, referred to as advanced oxidation protein products (AOPP), was detected in the plasma of chronic uremic patients. Here we describe in detail an automated version of the originally published microplate-based technique that we adapted for a Cobas Mira Plus clinical chemistry analyzer. AOPP reference values were measured in plasma samples from 266 apparently healthy volunteers (university students; 81 male and 185 female subjects) with a mean age of 21.3 years (range 18-33). Over a period of 18 months we determined AOPP concentrations in more than 300 patients in our department. Our experiences appear to demonstrate that this technique is especially suitable for monitoring oxidative stress in critically ill patients (sepsis, reperfusion injury, heart failure) even at daily intervals, since AOPP exhibited rapid responses in both directions. We believe that the well-established relationship between AOPP response and induced damage makes this simple, fast and inexpensive automated technique applicable in daily routine laboratory practice for assessing and monitoring oxidative stress in critically ill or other patients.

  6. Thioflavin-S staining of bacterial inclusion bodies for the fast, simple, and inexpensive screening of amyloid aggregation inhibitors.

    PubMed

    Pouplana, S; Espargaro, A; Galdeano, C; Viayna, E; Sola, I; Ventura, S; Muñoz-Torrero, D; Sabate, R

    2014-01-01

    Amyloid aggregation is linked to a large number of human disorders, from neurodegenerative diseases such as Alzheimer's disease (AD) or spongiform encephalopathies to non-neuropathic localized diseases such as type II diabetes and cataracts. Because the formation of insoluble inclusion bodies (IBs) during recombinant protein production in bacteria has recently been shown to share mechanistic features with amyloid self-assembly, bacteria have emerged as a tool to study amyloid aggregation. Herein we present a fast, simple, inexpensive and quantitative method for the screening of potential anti-aggregating drugs. This method is based on monitoring the changes in the binding of thioflavin-S to intracellular IBs in intact Escherichia coli cells in the presence of small chemical compounds. This in vivo technique closely recapitulates previous in vitro data. Here we mainly use the Alzheimer's-related β-amyloid peptide as a model system, but the technique can easily be implemented for screening inhibitors relevant for other conformational diseases simply by changing the recombinant amyloid protein target. Indeed, we show that this methodology can also be applied to the evaluation of inhibitors of the aggregation of tau protein, another amyloidogenic protein with a key role in AD.

  7. Fast, simple, and sensitive high-performance liquid chromatography method for measuring vitamins A and E in human blood plasma.

    PubMed

    Yuan, Chao; Burgyan, Maria; Bunch, Dustin R; Reineks, Edmunds; Jackson, Raymond; Steinle, Roxanne; Wang, Sihe

    2014-09-01

    Vitamins A and E are fat-soluble vitamins that play important roles in several physiological processes. Monitoring their concentrations is needed to detect deficiency and guide therapy. In this study, we developed a high-performance liquid chromatography method to measure the major forms of vitamin A (retinol) and vitamin E (α-tocopherol and γ-tocopherol) in human blood plasma. Vitamins A and E were extracted with hexane and separated on a reversed-phase column using methanol as the mobile phase. Retinol was detected by ultraviolet absorption, whereas tocopherols were detected by fluorescence emission. The chromatographic cycle time was 4.0 min per sample. The analytical measurement range was 0.03-5.14, 0.32-36.02, and 0.10-9.99 mg/L for retinol, α-tocopherol, and γ-tocopherol, respectively. Intra-assay and total coefficients of variation were <6.0% for all compounds. This method was traceable to standard reference materials offered by the National Institute of Standards and Technology. Reference intervals were established using plasma samples collected from 51 healthy adult donors and were found to be 0.30-1.20, 6.0-23.0, and 0.3-3.2 mg/L for retinol, α-tocopherol, and γ-tocopherol, respectively. In conclusion, we developed and validated a fast, simple, and sensitive high-performance liquid chromatography method for measuring the major forms of vitamins A and E in human plasma.

  8. Evaluation of a fast and simple sample preparation method for PBDE flame retardants and DDT pesticides in fish for analysis by ELISA compared with GC-MS/MS

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A simple, fast, and cost-effective sample preparation method, previously developed and validated for the analysis of organic contaminants in fish using low-pressure gas chromatography tandem mass spectrometry (LPGC-MS/MS), was evaluated for analysis of polybrominated diphenyl ethers (PBDEs) and dich...

  9. A simple robust and accurate a posteriori sub-cell finite volume limiter for the discontinuous Galerkin method on unstructured meshes

    NASA Astrophysics Data System (ADS)

    Dumbser, Michael; Loubère, Raphaël

    2016-08-01

    In this paper we propose a simple, robust and accurate nonlinear a posteriori stabilization of the Discontinuous Galerkin (DG) finite element method for the solution of nonlinear hyperbolic PDE systems on unstructured triangular and tetrahedral meshes in two and three space dimensions. This novel a posteriori limiter, which has been recently proposed for the simple Cartesian grid case in [62], is able to resolve discontinuities at a sub-grid scale and is substantially extended here to general unstructured simplex meshes in 2D and 3D. It can be summarized as follows: At the beginning of each time step, an approximation of the local minimum and maximum of the discrete solution is computed for each cell, taking into account also the vertex neighbors of an element. Then, an unlimited discontinuous Galerkin scheme of approximation degree N is run for one time step to produce a so-called candidate solution. Subsequently, an a posteriori detection step checks the unlimited candidate solution at time t^(n+1) for positivity, absence of floating point errors and whether the discrete solution has remained within or at least very close to the bounds given by the local minimum and maximum computed in the first step. Elements that do not satisfy all the previously mentioned detection criteria are flagged as troubled cells. For these troubled cells, the candidate solution is discarded as inappropriate and consequently needs to be recomputed. Within these troubled cells the old discrete solution at the previous time t^n is scattered onto small sub-cells (N_s = 2N + 1 sub-cells per element edge), in order to obtain a set of sub-cell averages at time t^n. Then, a more robust second order TVD finite volume scheme is applied to update the sub-cell averages within the troubled DG cells from time t^n to time t^(n+1). The new sub-grid data at time t^(n+1) are finally gathered back into a valid cell-centered DG polynomial of degree N by using a classical conservative and higher order

  10. A simple and fast physics-based analytical method to calculate therapeutic and stray doses from external beam, megavoltage x-ray therapy

    PubMed Central

    Wilson, Lydia J; Newhauser, Wayne D

    2015-01-01

    State-of-the-art radiotherapy treatment planning systems provide reliable estimates of the therapeutic radiation but are known to underestimate or neglect the stray radiation exposures. Most commonly, stray radiation exposures are reconstructed using empirical formulas or lookup tables. The purpose of this study was to develop the basic physics of a model capable of calculating the total absorbed dose both inside and outside of the therapeutic radiation beam for external beam photon therapy. The model was developed using measurements of total absorbed dose in a water-box phantom from a 6 MV medical linear accelerator to calculate dose profiles in both the in-plane and cross-plane direction for a variety of square field sizes and depths in water. The water-box phantom facilitated development of the basic physical aspects of the model. RMS discrepancies between measured and calculated total absorbed dose values in water were less than 9.3% for all fields studied. Computation times for 10 million dose points within a homogeneous phantom were approximately 4 minutes. These results suggest that the basic physics of the model are sufficiently simple, fast, and accurate to serve as a foundation for a variety of clinical and research applications, some of which may require that the model be extended or simplified based on the needs of the user. A potentially important advantage of a physics-based approach is that the model is more readily adaptable to a wide variety of treatment units and treatment techniques than empirical models are. PMID:26040833

  11. A simple and fast physics-based analytical method to calculate therapeutic and stray doses from external beam, megavoltage x-ray therapy

    NASA Astrophysics Data System (ADS)

    Jagetic, Lydia J.; Newhauser, Wayne D.

    2015-06-01

    State-of-the-art radiotherapy treatment planning systems provide reliable estimates of the therapeutic radiation but are known to underestimate or neglect the stray radiation exposures. Most commonly, stray radiation exposures are reconstructed using empirical formulas or lookup tables. The purpose of this study was to develop the basic physics of a model capable of calculating the total absorbed dose both inside and outside of the therapeutic radiation beam for external beam photon therapy. The model was developed using measurements of total absorbed dose in a water-box phantom from a 6 MV medical linear accelerator to calculate dose profiles in both the in-plane and cross-plane direction for a variety of square field sizes and depths in water. The water-box phantom facilitated development of the basic physical aspects of the model. RMS discrepancies between measured and calculated total absorbed dose values in water were less than 9.3% for all fields studied. Computation times for 10 million dose points within a homogeneous phantom were approximately 4 min. These results suggest that the basic physics of the model are sufficiently simple, fast, and accurate to serve as a foundation for a variety of clinical and research applications, some of which may require that the model be extended or simplified based on the needs of the user. A potentially important advantage of a physics-based approach is that the model is more readily adaptable to a wide variety of treatment units and treatment techniques than empirical models are.

  12. A simple three-dimensional-focusing, continuous-flow mixer for the study of fast protein dynamics

    PubMed Central

    Burke, Kelly S.; Parul, Dzmitry; Reddish, Michael J.; Dyer, R. Brian

    2013-01-01

    We present a simple, yet flexible microfluidic mixer with a demonstrated mixing time as short as 80 µs that is widely accessible because it is made of commercially available parts. To simplify the study of fast protein dynamics, we have developed an inexpensive continuous-flow microfluidic mixer, requiring no specialized equipment or techniques. The mixer uses three-dimensional, hydrodynamic focusing of a protein sample stream by a surrounding sheath solution to achieve rapid diffusional mixing between the sample and sheath. Mixing initiates the reaction of interest. Reactions can be spatially observed by fluorescence or absorbance spectroscopy. We characterized the pixel-to-time calibration and diffusional mixing experimentally. We achieved a mixing time as short as 80 µs. We studied the kinetics of horse apomyoglobin (apoMb) unfolding from the intermediate (I) state to its completely unfolded (U) state, induced by a pH jump from the initial pH of 4.5 in the sample stream to a final pH of 2.0 in the sheath solution. The reaction time was probed using the fluorescence of 1-anilinonaphthalene-8-sulfonate (1,8-ANS) bound to the folded protein. We observed unfolding of apoMb within 760 µs, without populating additional intermediate states under these conditions. We also studied the reaction kinetics of the conversion of pyruvate to lactate catalyzed by lactate dehydrogenase using the intrinsic tryptophan emission of the enzyme. We observe sub-millisecond kinetics that we attribute to Michaelis complex formation and loop domain closure. These results demonstrate the utility of the three-dimensional focusing mixer for biophysical studies of protein dynamics. PMID:23760106
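    In a continuous-flow mixer the reaction coordinate is spatial: each detector pixel downstream of the mixing point corresponds to a reaction time equal to distance divided by linear flow velocity. A sketch of that pixel-to-time conversion with hypothetical numbers (the real calibration is measured experimentally, and the velocity profile is assumed uniform here):

    ```python
    def pixel_to_time(pixel_index, pixel_size_um, flow_velocity_um_per_us,
                      mix_pixel=0):
        """Map a detector pixel along the observation channel to a reaction
        time in microseconds: downstream distance / flow velocity. All
        numbers used with this sketch are hypothetical, for illustration."""
        distance_um = (pixel_index - mix_pixel) * pixel_size_um
        return distance_um / flow_velocity_um_per_us

    # Hypothetical setup: 1 um pixels, 0.5 um/us flow velocity.
    t_us = pixel_to_time(40, pixel_size_um=1.0, flow_velocity_um_per_us=0.5)
    ```

    With these assumed numbers, pixel 40 sits 40 µm downstream and corresponds to 80 µs of reaction time, on the order of the mixing dead time reported in the abstract.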

  13. A simple and fast method based on mixed hemimicelles coated magnetite nanoparticles for simultaneous extraction of acidic and basic pollutants.

    PubMed

    Asgharinezhad, Ali Akbar; Ebrahimzadeh, Homeira

    2016-01-01

    One of the considerable and disputable areas in analytical chemistry is single-step simultaneous extraction of acidic and basic pollutants. In this research, a simple and fast coextraction of acidic and basic pollutants (with different polarities), aided by magnetic dispersive micro-solid phase extraction based on mixed hemimicelle assembly, is introduced for the first time. Cetyltrimethylammonium bromide (CTAB)-coated Fe3O4 nanoparticles were successfully applied as an efficient sorbent to adsorb 4-nitrophenol and 4-chlorophenol as acidic model compounds and chlorinated aromatic amines as basic model compounds. Using a central composite design methodology combined with a desirability function approach, the optimal experimental conditions were evaluated. The optimal conditions were pH = 10; concentration of CTAB = 0.86 mmol L(-1); sorbent amount = 55.5 mg; sorption time = 11.0 min; no salt addition to the sample; type and volume of the eluent = 120 μL methanol containing 5% acetic acid and 0.01 mol L(-1) HCl; and elution time = 1.0 min. Under the optimum conditions, detection limits and linear dynamic ranges were in the ranges of 0.05-0.1 and 0.25-500 μg L(-1), respectively. Extraction recoveries and relative standard deviations (n = 5) were in the ranges of 71.4-98.0% and 4.5-6.5%, respectively. The performance of the optimized method was verified by coextraction of other acidic and basic compounds. Ultimately, the applicability of the method was successfully confirmed by the extraction and determination of the target analytes in various water samples, with satisfactory results.

  14. A Simple and Fast Kinetic Assay for the Determination of Fructan Exohydrolase Activity in Perennial Ryegrass (Lolium perenne L.).

    PubMed

    Gasperl, Anna; Morvan-Bertrand, Annette; Prud'homme, Marie-Pascale; van der Graaff, Eric; Roitsch, Thomas

    2015-01-01

    Despite the fact that fructans are the main constituent of water-soluble carbohydrates in forage grasses and cereal crops of temperate climates, little knowledge is available on the regulation of the enzymes involved in fructan metabolism. The analysis of enzyme activities involved in this process has been hampered by the low affinity of the fructan enzymes for sucrose and fructans used as fructosyl donors. Further, the analysis of fructan composition and enzyme activities is restricted to specialized labs with access to suitable HPLC equipment and appropriate fructan standards. The degradation of fructan polymers with a high degree of polymerization (DP) by fructan exohydrolases (FEHs) to fructosyloligomers is important to liberate energy stored in the form of fructan, but also under conditions where the generation of low-DP polymers is required. Based on published protocols employing enzyme-coupled endpoint reactions in single cuvettes, we developed a simple and fast kinetic 1-FEH assay. This assay can be performed in multi-well plate format using plate readers to determine the activity of 1-FEH against 1-kestotriose, resulting in a significant time reduction. Kinetic assays allow an optimal and more precise determination of enzyme activities compared to endpoint assays, and make it possible to check the quality of each reaction with respect to the linearity of the assay. The enzyme-coupled kinetic 1-FEH assay was validated in a case study showing the expected increase in 1-FEH activity during cold treatment. This assay is cost-effective and could be performed by any lab with access to a plate reader suited for kinetic measurements and readings at 340 nm, and is highly suited to assess temporal changes and relative differences in 1-FEH activities. Thus, this enzyme-coupled kinetic 1-FEH assay is of high importance both to the field of basic fructan research and to plant breeding.
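    Since the coupled assay is read at 340 nm, the measured slope of A340 versus time converts to enzyme activity via the Beer-Lambert law. A sketch of that conversion; the assay and enzyme volumes are hypothetical placeholders, and only the NADH/NADPH molar absorptivity at 340 nm (6220 L mol⁻¹ cm⁻¹) is a standard literature value:

    ```python
    def activity_nkat_per_ml(slope_a340_per_min, path_cm=1.0,
                             vol_assay_ml=0.2, vol_enzyme_ml=0.02,
                             eps_340=6220.0):
        """Convert the linear slope of A340 vs. time from an NAD(P)H-coupled
        kinetic assay into volumetric activity (nkat per mL of enzyme
        extract). Beer-Lambert: dc/dt = (dA/dt) / (eps * path). Volumes are
        hypothetical; eps_340 is the standard NAD(P)H molar absorptivity."""
        dc_dt = slope_a340_per_min / (eps_340 * path_cm)   # mol L^-1 min^-1
        mol_per_min = dc_dt * (vol_assay_ml / 1000.0)      # mol min^-1 in well
        katal = mol_per_min / 60.0                         # mol s^-1
        return katal * 1e9 / vol_enzyme_ml                 # nkat per mL enzyme

    # Hypothetical slope from a linear fit of the kinetic read: 0.05 A340/min.
    rate = activity_nkat_per_ml(0.05)
    ```

    The kinetic format's advantage shows up here: the slope comes from a linear fit over many time points, so non-linear (substrate-depleted or lagging) wells can be flagged before the conversion is applied.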

  15. A Simple and Fast Kinetic Assay for the Determination of Fructan Exohydrolase Activity in Perennial Ryegrass (Lolium perenne L.)

    PubMed Central

    Gasperl, Anna; Morvan-Bertrand, Annette; Prud’homme, Marie-Pascale; Roitsch, Thomas

    2015-01-01

    Despite the fact that fructans are the main constituent of water-soluble carbohydrates in forage grasses and cereal crops of temperate climates, little knowledge is available on the regulation of the enzymes involved in fructan metabolism. The analysis of enzyme activities involved in this process has been hampered by the low affinity of the fructan enzymes for sucrose and fructans used as fructosyl donors. Further, the analysis of fructan composition and enzyme activities is restricted to specialized labs with access to suitable HPLC equipment and appropriate fructan standards. The degradation of fructan polymers with a high degree of polymerization (DP) by fructan exohydrolases (FEHs) to fructosyloligomers is important to liberate energy stored in the form of fructan, but also under conditions where the generation of low-DP polymers is required. Based on published protocols employing enzyme-coupled endpoint reactions in single cuvettes, we developed a simple and fast kinetic 1-FEH assay. This assay can be performed in multi-well plate format using plate readers to determine the activity of 1-FEH against 1-kestotriose, resulting in a significant time reduction. Kinetic assays allow an optimal and more precise determination of enzyme activities compared to endpoint assays, and make it possible to check the quality of each reaction with respect to the linearity of the assay. The enzyme-coupled kinetic 1-FEH assay was validated in a case study showing the expected increase in 1-FEH activity during cold treatment. This assay is cost-effective and could be performed by any lab with access to a plate reader suited for kinetic measurements and readings at 340 nm, and is highly suited to assess temporal changes and relative differences in 1-FEH activities. Thus, this enzyme-coupled kinetic 1-FEH assay is of high importance both to the field of basic fructan research and to plant breeding. PMID:26734049

  16. A fast and simple dose-calibrator-based quality control test for the radionuclidic purity of cyclotron-produced 99mTc

    NASA Astrophysics Data System (ADS)

    Tanguay, J.; Hou, X.; Esquinas, P.; Vuckovic, M.; Buckley, K.; Schaffer, P.; Bénard, F.; Ruth, T. J.; Celler, A.

    2015-11-01

    Cyclotron production of 99mTc through the 100Mo(p,2n)99mTc reaction channel is actively being investigated as an alternative to reactor-based 99Mo generation by nuclear fission of 235U. Like most radioisotope production methods, cyclotron production of 99mTc will result in creation of unwanted impurities, including Tc and non-Tc isotopes. It is important to measure the amounts of these impurities for release of cyclotron-produced 99mTc (CPTc) for clinical use. Detection of radioactive impurities will rely on measurements of their gamma (γ) emissions. Gamma spectroscopy is not suitable for this purpose because the overwhelming presence of 99mTc and the count-rate limitations of γ spectroscopy systems preclude fast and accurate measurement of small amounts of impurities. In this article we describe a simple and fast method for measuring γ emission rates from radioactive impurities in CPTc. The proposed method is similar to that used to identify 99Mo breakthrough in generator-produced 99mTc: one dose calibrator (DC) reading of a CPTc source placed in a lead shield is followed by a second reading of the same source in air. Our experimental and theoretical analysis shows that the ratio of DC readings in lead to those in air is linearly related to γ emission rates from impurities per MBq of 99mTc over a large range of clinically-relevant production conditions. We show that estimates of the γ emission rates from Tc impurities per MBq of 99mTc can be used to estimate increases in radiation dose (relative to pure 99mTc) to patients injected with CPTc-based radiopharmaceuticals. This enables establishing dosimetry-based clinical-release criteria that can be tested using commercially-available dose calibrators. We show that our approach is highly sensitive to the presence of 93gTc, 93mTc, 94gTc, 94mTc, 95mTc, 95gTc, and 96gTc, in addition to a number of non-Tc impurities.

  17. A fast and simple dose-calibrator-based quality control test for the radionuclidic purity of cyclotron-produced (99m)Tc.

    PubMed

    Tanguay, J; Hou, X; Esquinas, P; Vuckovic, M; Buckley, K; Schaffer, P; Bénard, F; Ruth, T J; Celler, A

    2015-11-01

    Cyclotron production of 99mTc through the (100)Mo(p,2n)99mTc reaction channel is actively being investigated as an alternative to reactor-based (99)Mo generation by nuclear fission of (235)U. Like most radioisotope production methods, cyclotron production of 99mTc will result in creation of unwanted impurities, including Tc and non-Tc isotopes. It is important to measure the amounts of these impurities for release of cyclotron-produced 99mTc (CPTc) for clinical use. Detection of radioactive impurities will rely on measurements of their gamma (γ) emissions. Gamma spectroscopy is not suitable for this purpose because the overwhelming presence of 99mTc and the count-rate limitations of γ spectroscopy systems preclude fast and accurate measurement of small amounts of impurities. In this article we describe a simple and fast method for measuring γ emission rates from radioactive impurities in CPTc. The proposed method is similar to that used to identify (99)Mo breakthrough in generator-produced 99mTc: one dose calibrator (DC) reading of a CPTc source placed in a lead shield is followed by a second reading of the same source in air. Our experimental and theoretical analysis show that the ratio of DC readings in lead to those in air are linearly related to γ emission rates from impurities per MBq of 99mTc over a large range of clinically-relevant production conditions. We show that estimates of the γ emission rates from Tc impurities per MBq of 99mTc can be used to estimate increases in radiation dose (relative to pure 99mTc) to patients injected with CPTc-based radiopharmaceuticals. This enables establishing dosimetry-based clinical-release criteria that can be tested using commercially-available dose calibrators. We show that our approach is highly sensitive to the presence of 93gTc, 93mTc, 94gTc, 94mTc, 95mTc, 95gTc, and 96gTc, in addition to a number of non-Tc impurities.
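    Operationally, the release test reduces to one reading ratio and one linear calibration. A sketch of that arithmetic, with placeholder slope and intercept (the paper determines the actual linear coefficients experimentally for a given calibrator and shield geometry):

    ```python
    def impurity_emission_rate(reading_lead, reading_air, slope, intercept):
        """Estimate the gamma emission rate from impurities per MBq of 99mTc
        from the ratio of dose-calibrator readings in a lead shield vs. in
        air, assuming the linear relation reported in the abstract:
            rate = slope * (R_lead / R_air) + intercept.
        slope and intercept are calibration constants; the values used
        below are placeholders, not the paper's fitted coefficients."""
        ratio = reading_lead / reading_air
        return slope * ratio + intercept

    # Placeholder calibration: even a pure-99mTc source gives a nonzero
    # lead/air ratio, which the intercept absorbs.
    rate = impurity_emission_rate(reading_lead=2.0, reading_air=100.0,
                                  slope=50.0, intercept=-0.5)
    ```

    A measured rate above a dosimetry-derived threshold would then fail the clinical-release criterion; the two-reading protocol is what makes the test fast enough for routine QC.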

  18. A fast and simple dose-calibrator-based quality control test for the radionuclidic purity of cyclotron-produced (99m)Tc.

    PubMed

    Tanguay, J; Hou, X; Esquinas, P; Vuckovic, M; Buckley, K; Schaffer, P; Bénard, F; Ruth, T J; Celler, A

    2015-11-01

    Cyclotron production of 99mTc through the (100)Mo(p,2n)99mTc reaction channel is actively being investigated as an alternative to reactor-based (99)Mo generation by nuclear fission of (235)U. Like most radioisotope production methods, cyclotron production of 99mTc will result in the creation of unwanted impurities, including Tc and non-Tc isotopes. It is important to measure the amounts of these impurities before the release of cyclotron-produced 99mTc (CPTc) for clinical use. Detection of radioactive impurities will rely on measurements of their gamma (γ) emissions. Gamma spectroscopy is not suitable for this purpose because the overwhelming presence of 99mTc and the count-rate limitations of γ spectroscopy systems preclude fast and accurate measurement of small amounts of impurities. In this article we describe a simple and fast method for measuring γ emission rates from radioactive impurities in CPTc. The proposed method is similar to that used to identify (99)Mo breakthrough in generator-produced 99mTc: one dose calibrator (DC) reading of a CPTc source placed in a lead shield is followed by a second reading of the same source in air. Our experimental and theoretical analyses show that the ratio of DC readings in lead to those in air is linearly related to the γ emission rate from impurities per MBq of 99mTc over a large range of clinically relevant production conditions. We show that estimates of the γ emission rates from Tc impurities per MBq of 99mTc can be used to estimate increases in radiation dose (relative to pure 99mTc) to patients injected with CPTc-based radiopharmaceuticals. This enables the establishment of dosimetry-based clinical-release criteria that can be tested using commercially available dose calibrators. We show that our approach is highly sensitive to the presence of 93gTc, 93mTc, 94gTc, 94mTc, 95mTc, 95gTc, and 96gTc, in addition to a number of non-Tc impurities. PMID:26449791
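    The release test described in this abstract reduces to a linear calibration between the lead-to-air reading ratio and the impurity γ emission rate. A minimal numeric sketch of that idea (all calibration numbers are hypothetical, not the paper's data):

```python
import numpy as np

# Hypothetical calibration pairs: lead-to-air dose-calibrator reading ratios
# and known impurity gamma emission rates (gammas/s per MBq of 99mTc).
ratios = np.array([0.012, 0.018, 0.025, 0.033, 0.041])
emission_rates = np.array([0.0, 1.2e4, 2.6e4, 4.2e4, 5.8e4])

# Fit the linear model reported in the abstract: rate = a * ratio + b.
a, b = np.polyfit(ratios, emission_rates, 1)

def impurity_emission_rate(ratio_lead_to_air):
    """Estimate the impurity gamma emission rate per MBq of 99mTc from a
    single shielded/unshielded dose-calibrator reading pair."""
    return a * ratio_lead_to_air + b
```

    A measured ratio from a production batch would then map directly to an emission-rate estimate that can be compared against a dosimetry-based release threshold.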

  19. New Method for Accurate Calibration of Micro-Channel Plate based Detection Systems and its use in the Fast Plasma Investigation of NASA's Magnetospheric MultiScale Mission

    NASA Astrophysics Data System (ADS)

    Gliese, U.; Avanov, L. A.; Barrie, A.; Kujawski, J. T.; Mariano, A. J.; Tucker, C. J.; Chornay, D. J.; Cao, N. T.; Zeuch, M.; Pollock, C. J.; Jacques, A. D.

    2013-12-01

    The Fast Plasma Investigation (FPI) of the NASA Magnetospheric MultiScale (MMS) mission employs 16 Dual Electron Spectrometers (DESs) and 16 Dual Ion Spectrometers (DISs) with 4 of each type on each of 4 spacecraft to enable fast (30 ms for electrons; 150 ms for ions) and spatially differentiated measurements of the full 3D particle velocity distributions. This approach presents a new and challenging aspect to the calibration and operation of these instruments on the ground and in flight. The response uniformity and reliability of their calibration and the approach to handling any temporal evolution of these calibrated characteristics all assume enhanced importance in this application, where we attempt to understand the meaning of particle distributions within the ion and electron diffusion regions. Traditionally, the micro-channel plate (MCP) based detection systems for electrostatic particle spectrometers have been calibrated by setting a fixed detection threshold and, subsequently, measuring a detection system count rate plateau curve to determine the MCP voltage that ensures the count rate has reached a constant value independent of further variation in the MCP voltage. This is achieved when most of the MCP pulse height distribution (PHD) is located at higher values (larger pulses) than the detection amplifier threshold. This method is adequate in single-channel detection systems and in multi-channel detection systems with very low crosstalk between channels. However, in dense multi-channel systems, it can be inadequate. Furthermore, it fails to fully and individually characterize each of the fundamental parameters of the detection system. We present a new detection system calibration method that enables accurate and repeatable measurement and calibration of MCP gain, MCP efficiency, signal loss due to variation in gain and efficiency, crosstalk from effects both above and below the MCP, noise margin, and stability margin in one single measurement. The fundamental

  20. A Simple, Fast, Low Cost, HPLC/UV Validated Method for Determination of Flutamide: Application to Protein Binding Studies

    PubMed Central

    Esmaeilzadeh, Sara; Valizadeh, Hadi; Zakeri-Milani, Parvin

    2016-01-01

    Purpose: The main goal of this study was the development of a reversed-phase high-performance liquid chromatography (RP-HPLC) method for flutamide quantitation that is applicable to protein binding studies. Methods: Ultrafiltration was used for the protein binding study of flutamide. For sample analysis, flutamide was extracted by a simple and low-cost extraction method using diethyl ether and then determined by HPLC/UV. Acetanilide was used as an internal standard. The chromatographic system consisted of a reversed-phase C8 column with a C8 pre-column, and a mobile phase of 29% (v/v) methanol, 38% (v/v) acetonitrile, and 33% (v/v) potassium dihydrogen phosphate buffer (50 mM) adjusted to pH 3.2. Results: Acetanilide and flutamide were eluted at 1.8 and 2.9 min, respectively. The linearity of the method was confirmed in the range of 62.5-16000 ng/ml (r2 > 0.99). The limit of quantification was 62.5 ng/ml. Precision and accuracy ranges were found to be (0.2-1.4%, 90-105%) and (0.2-5.3%, 86.7-98.5%), respectively. Acetanilide and flutamide gave capacity factor values of 1.35 and 2.87, tailing factor values of 1.24 and 1.07, and resolution values of 1.8 and 3.22, in accordance with ICH guidelines. Conclusion: Based on the obtained results, a rapid, precise, accurate, sensitive, and cost-effective analysis procedure is proposed for the quantitative determination of flutamide. PMID:27478788
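    The system-suitability figures quoted in this abstract (capacity factor, resolution) follow from standard chromatographic formulas. A small sketch with an assumed dead time and peak widths (hypothetical values, chosen only for illustration):

```python
# Standard chromatographic system-suitability metrics.
def capacity_factor(t_r, t_0):
    """k' = (tR - t0) / t0, with retention time tR and column dead time t0."""
    return (t_r - t_0) / t_0

def resolution(t_r1, t_r2, w1, w2):
    """USP resolution from retention times and baseline peak widths."""
    return 2.0 * (t_r2 - t_r1) / (w1 + w2)

t0 = 0.75  # assumed column dead time (min), not stated in the abstract
k_acetanilide = capacity_factor(1.8, t0)
k_flutamide = capacity_factor(2.9, t0)
rs = resolution(1.8, 2.9, 0.5, 0.6)  # assumed baseline widths (min)
```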

  1. Fast, accurate, and robust automatic marker detection for motion correction based on oblique kV or MV projection image pairs

    SciTech Connect

    Slagmolen, Pieter; Hermans, Jeroen; Maes, Frederik; Budiharto, Tom; Haustermans, Karin; Heuvel, Frank van den

    2010-04-15

    takes a little less than a second where most time is spent on the image preprocessing. Conclusions: The authors have developed a method to automatically detect multiple markers in a pair of projection images that is robust, accurate, and sufficiently fast for clinical use. It can be used for kV, MV, or mixed image pairs and can cope with limited motion between the projection images.

  2. Fast and accurate Monte Carlo modeling of a kilovoltage X-ray therapy unit using a photon-source approximation for treatment planning in complex media

    PubMed Central

    Zeinali-Rafsanjani, B.; Mosleh-Shirazi, M. A.; Faghihi, R.; Karbasi, S.; Mosalaei, A.

    2015-01-01

    To accurately recompute dose distributions in chest-wall radiotherapy with 120 kVp kilovoltage X-rays, an MCNP4C Monte Carlo model is presented using a fast method that obviates the need to fully model the tube components. To validate the model, half-value layer (HVL), percentage depth doses (PDDs) and beam profiles were measured. Dose measurements were performed for a more complex situation using thermoluminescence dosimeters (TLDs) placed within a Rando phantom. The measured and computed first and second HVLs were 3.8, 10.3 mm Al and 3.8, 10.6 mm Al, respectively. The differences between measured and calculated PDDs and beam profiles in water were within 2 mm/2% for all data points. In the Rando phantom, differences for the majority of data points were within 2%. The proposed model offered an approximately 9500-fold reduction in run time compared to the conventional full simulation. The acceptable agreement, based on international criteria, between the simulations and the measurements validates the accuracy of the model for its use in treatment planning and radiobiological modeling studies of superficial therapies including chest-wall irradiation using kilovoltage beams. PMID:26170553
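    The half-value layers reported above come from narrow-beam attenuation; given transmission measurements, the HVL follows from an exponential fit. A minimal sketch with synthetic data generated to match a 3.8 mm Al first HVL:

```python
import numpy as np

# Synthetic narrow-beam transmission I(x) = I0 * exp(-mu * x) for added
# aluminium thickness x (mm); mu chosen so the first HVL is 3.8 mm Al.
mu_true = np.log(2) / 3.8
x = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
transmission = np.exp(-mu_true * x)

# Recover mu from a log-linear least-squares fit, then HVL = ln(2) / mu.
mu_fit = -np.polyfit(x, np.log(transmission), 1)[0]
hvl = np.log(2) / mu_fit
```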

  3. Ring polymer molecular dynamics fast computation of rate coefficients on accurate potential energy surfaces in local configuration space: Application to the abstraction of hydrogen from methane

    NASA Astrophysics Data System (ADS)

    Meng, Qingyong; Chen, Jun; Zhang, Dong H.

    2016-04-01

    To compute rate coefficients of the H/D + CH4 → H2/HD + CH3 reactions quickly and accurately, we propose a segmented strategy for fitting a suitable potential energy surface (PES), on which ring-polymer molecular dynamics (RPMD) simulations are performed. On the basis of the recently developed permutation-invariant polynomial neural-network approach [J. Li et al., J. Chem. Phys. 142, 204302 (2015)], PESs in local configuration spaces are constructed. In this strategy, the global PES is divided into three parts along the reaction coordinate: asymptotic, intermediate, and interaction. Since fewer fitting parameters are involved in the local PESs, the computational efficiency of evaluating the PES routine is enhanced by a factor of ~20 compared with the global PES. On the interaction part, the RPMD computation time for the transmission coefficient can be further reduced by cutting off the redundant part of the child trajectories. For H + CH4, good agreement is found among the present RPMD rates, those from previous simulations, and experimental results. For D + CH4, on the other hand, qualitative agreement between the present RPMD and experimental results is obtained.

  4. Quantification of reverse transcriptase activity by real-time PCR as a fast and accurate method for titration of HIV, lenti- and retroviral vectors.

    PubMed

    Vermeire, Jolien; Naessens, Evelien; Vanderstraeten, Hanne; Landi, Alessia; Iannucci, Veronica; Van Nuffel, Anouk; Taghon, Tom; Pizzato, Massimo; Verhasselt, Bruno

    2012-01-01

    Quantification of retroviruses in cell culture supernatants and other biological preparations is required in a diverse spectrum of laboratories and applications. Methods based on antigen detection, such as p24 for HIV, or on genome detection are virus specific and sometimes suffer from a limited dynamic range of detection. In contrast, measurement of reverse transcriptase (RT) activity is a generic method which can be adapted for higher sensitivity using real-time PCR quantification (qPCR-based product-enhanced RT (PERT) assay). We present an evaluation of a modified SYBR Green I-based PERT assay (SG-PERT), using commercially available reagents such as MS2 RNA and ready-to-use qPCR mixes. This assay has a dynamic range of 7 logs, a sensitivity of 10 nU HIV-1 RT and outperforms p24 ELISA for HIV titer determination by lower inter-run variation, lower cost and higher linear range. The SG-PERT values correlate with transducing and infectious units in HIV-based viral vector and replication-competent HIV-1 preparations respectively. This assay can furthermore quantify Moloney Murine Leukemia Virus-derived vectors and can be performed on different instruments, such as the Roche LightCycler® 480 and Applied Biosystems ABI 7300. We consider this test to be an accurate, fast and relatively cheap method for retroviral quantification that is easily implemented for use in routine and research laboratories.
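    In a qPCR-based PERT assay, activity is read off a log-linear standard curve relating the threshold cycle (Ct) to RT activity. A minimal sketch (slope and intercept are assumed, not the paper's values):

```python
# Assumed SG-PERT standard curve: Ct = slope * log10(activity) + intercept.
# A slope of -3.32 corresponds to ~100% PCR efficiency.
slope, intercept = -3.32, 38.0

def rt_activity_from_ct(ct):
    """Invert the standard curve to map a measured Ct to RT activity."""
    return 10 ** ((ct - intercept) / slope)
```

    On this hypothetical curve, a sample with Ct = 24.72 maps to 10^4 activity units.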

  5. Quantification of Reverse Transcriptase Activity by Real-Time PCR as a Fast and Accurate Method for Titration of HIV, Lenti- and Retroviral Vectors

    PubMed Central

    Vermeire, Jolien; Naessens, Evelien; Vanderstraeten, Hanne; Landi, Alessia; Iannucci, Veronica; Van Nuffel, Anouk; Taghon, Tom; Pizzato, Massimo; Verhasselt, Bruno

    2012-01-01

    Quantification of retroviruses in cell culture supernatants and other biological preparations is required in a diverse spectrum of laboratories and applications. Methods based on antigen detection, such as p24 for HIV, or on genome detection are virus specific and sometimes suffer from a limited dynamic range of detection. In contrast, measurement of reverse transcriptase (RT) activity is a generic method which can be adapted for higher sensitivity using real-time PCR quantification (qPCR-based product-enhanced RT (PERT) assay). We present an evaluation of a modified SYBR Green I-based PERT assay (SG-PERT), using commercially available reagents such as MS2 RNA and ready-to-use qPCR mixes. This assay has a dynamic range of 7 logs, a sensitivity of 10 nU HIV-1 RT and outperforms p24 ELISA for HIV titer determination by lower inter-run variation, lower cost and higher linear range. The SG-PERT values correlate with transducing and infectious units in HIV-based viral vector and replication-competent HIV-1 preparations respectively. This assay can furthermore quantify Moloney Murine Leukemia Virus-derived vectors and can be performed on different instruments, such as the Roche LightCycler® 480 and Applied Biosystems ABI 7300. We consider this test to be an accurate, fast and relatively cheap method for retroviral quantification that is easily implemented for use in routine and research laboratories. PMID:23227216

  6. Fast MS/MS acquisition without dynamic exclusion enables precise and accurate quantification of proteome by MS/MS fragment intensity

    PubMed Central

    Zhang, Shen; Wu, Qi; Shan, Yichu; Zhao, Qun; Zhao, Baofeng; Weng, Yejing; Sui, Zhigang; Zhang, Lihua; Zhang, Yukui

    2016-01-01

    Most current proteomic studies use data-dependent acquisition with dynamic exclusion to identify and quantify the peptides generated by digestion of a biological sample. Although dynamic exclusion permits more identifications and a higher chance of finding low-abundance proteins, the stochastic and irreproducible precursor ion selection it causes limits quantification capabilities, especially for MS/MS-based quantification. This is because a peptide is usually triggered for fragmentation only once due to dynamic exclusion, so the fragment ions used for quantification reflect the peptide abundance only at that given time point. Here, we propose a strategy of fast MS/MS acquisition without dynamic exclusion to enable precise and accurate quantification of the proteome by MS/MS fragment intensity. The results showed proteome identification efficiency comparable to traditional data-dependent acquisition with dynamic exclusion, and better quantitative accuracy and reproducibility for both label-free and isobaric labeling based quantification. It provides new insights into fully exploring the potential of modern mass spectrometers. This strategy was applied to the relative quantification of two human disease cell lines, showing great promise for quantitative proteomic applications. PMID:27198003

  7. Large-Scale Off-Target Identification Using Fast and Accurate Dual Regularized One-Class Collaborative Filtering and Its Application to Drug Repurposing

    PubMed Central

    Poleksic, Aleksandar; Yao, Yuan; Tong, Hanghang; Meng, Patrick; Xie, Lei

    2016-01-01

    Target-based screening is one of the major approaches in drug discovery. Besides the intended target, unexpected drug off-target interactions often occur, and many of them have not been recognized and characterized. The off-target interactions can be responsible for either therapeutic or side effects. Thus, identifying the genome-wide off-targets of lead compounds or existing drugs will be critical for designing effective and safe drugs, and providing new opportunities for drug repurposing. Although many computational methods have been developed to predict drug-target interactions, they are either less accurate than the one that we are proposing here or computationally too intensive, thereby limiting their capability for large-scale off-target identification. In addition, the performances of most machine learning based algorithms have been mainly evaluated to predict off-target interactions in the same gene family for hundreds of chemicals. It is not clear how these algorithms perform in terms of detecting off-targets across gene families on a proteome scale. Here, we are presenting a fast and accurate off-target prediction method, REMAP, which is based on a dual regularized one-class collaborative filtering algorithm, to explore continuous chemical space, protein space, and their interactome on a large scale. When tested in a reliable, extensive, and cross-gene family benchmark, REMAP outperforms the state-of-the-art methods. Furthermore, REMAP is highly scalable. It can screen a dataset of 200 thousand chemicals against 20 thousand proteins within 2 hours. Using the reconstructed genome-wide target profile as the fingerprint of a chemical compound, we predicted that seven FDA-approved drugs can be repurposed as novel anti-cancer therapies. The anti-cancer activity of six of them is supported by experimental evidence.
Thus, REMAP is a valuable addition to the existing in silico toolbox for drug target identification, drug repurposing, phenotypic screening, and
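    REMAP itself is a dual regularized one-class collaborative filtering algorithm. As a rough illustration of the one-class idea only (not the paper's method), here is a toy weighted matrix factorization in which unobserved chemical-protein pairs get a small confidence weight and the completed score matrix ranks off-target candidates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-class interaction matrix: 1 = known chemical-protein interaction,
# 0 = unobserved (down-weighted rather than treated as a hard negative).
R = np.zeros((6, 5))
R[[0, 1, 2, 3], [0, 1, 2, 3]] = 1.0
W = np.where(R > 0, 1.0, 0.1)        # confidence weights
k, lam, lr = 3, 0.05, 0.05           # rank, L2 penalty, step size

U = 0.1 * rng.standard_normal((6, k))
V = 0.1 * rng.standard_normal((5, k))

def loss():
    return np.sum(W * (R - U @ V.T) ** 2) + lam * (np.sum(U**2) + np.sum(V**2))

initial_loss = loss()
for _ in range(500):                 # alternating gradient steps
    E = W * (R - U @ V.T)
    U += lr * (E @ V - lam * U)
    E = W * (R - U @ V.T)
    V += lr * (E.T @ U - lam * V)

scores = U @ V.T                     # high scores at zero entries suggest off-targets
```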

  8. Encoding of movement dynamics by Purkinje cell simple spike activity during fast arm movements under resistive and assistive force fields.

    PubMed

    Yamamoto, Kenji; Kawato, Mitsuo; Kotosaka, Shinya; Kitazawa, Shigeru

    2007-02-01

    It is controversial whether simple-spike activity of cerebellar Purkinje cells during arm movements encodes movement kinematics, like velocity, or dynamics, like muscle activity. To examine this issue, we trained monkeys to flex or extend the elbow by 45 degrees in 400 ms under resistive and assistive force fields but without altering kinematics. During the task movements after training, simple-spike discharges were recorded in the intermediate part of the cerebellum in lobules V-VI, and electromyographic activity was recorded from arm muscles. Velocity profiles (kinematics) in the two force fields were almost identical to each other, whereas not only the electromyographic activities (dynamics) but also simple-spike activities in many Purkinje cells differed distinctly depending on the type of force field. Simple-spike activities carried much more mutual information about the type of force field than about the residual small difference in the height of peak velocity. The difference in simple-spike activities averaged over the recorded Purkinje cells emerged approximately 40 ms before the appearance of the difference in electromyographic activities between the two force fields, suggesting that the difference in simple-spike activities could be the origin of the difference in muscle activities. Simple-spike activity of many Purkinje cells correlated with electromyographic activity with a lead of approximately 80 ms, and these neurons had little overlap with another group of neurons whose simple-spike activity correlated with velocity profiles. These results show that simple-spike activity of at least a group of Purkinje cells in the intermediate part of cerebellar lobules V-VI encodes movement dynamics.

  9. Simple Fabrication of a Highly Sensitive and Fast Glucose Biosensor using Enzyme Immobilized in Mesocellular Carbon Foam

    SciTech Connect

    Lee, Dohoon; Lee, Jinwoo; Kim, Jungbae; Kim, Jaeyun; Na, Hyon Bin; Kim, Bokie; Shin, Chae-Ho; Kwak, Ja Hun; Dohnalkova, Alice; Grate, Jay W.; Hyeon, Taeghwan; Kim, Hak Sung

    2005-12-05

    We fabricated a highly sensitive and fast glucose biosensor by simply immobilizing glucose oxidase (GOx) in mesocellular carbon foam (MSU-F-C). Due to its unique structure, MSU-F-C enabled high enzyme loading without serious mass-transfer limitation, resulting in high catalytic efficiency. As a result, the glucose biosensor fabricated with MSU-F-C/GOx showed high sensitivity and a fast response. Given these results and its inherent electrical conductivity, we anticipate that MSU-F-C will be a useful matrix for enzyme immobilization in various biocatalytic and electrobiocatalytic applications.

  10. Fast, Simple and Accurate Handwritten Digit Classification by Training Shallow Neural Network Classifiers with the ‘Extreme Learning Machine’ Algorithm

    PubMed Central

    McDonnell, Mark D.; Tissera, Migel D.; Vladusich, Tony; van Schaik, André; Tapson, Jonathan

    2015-01-01

    Recent advances in training deep (multi-layer) architectures have inspired a renaissance in neural network use. For example, deep convolutional networks are becoming the default option for difficult tasks on large datasets, such as image and speech recognition. However, here we show that error rates below 1% on the MNIST handwritten digit benchmark can be replicated with shallow non-convolutional neural networks. This is achieved by training such networks using the ‘Extreme Learning Machine’ (ELM) approach, which also enables a very rapid training time (∼ 10 minutes). Adding distortions, as is common practice for MNIST, reduces error rates even further. Our methods are also shown to be capable of achieving less than 5.5% error rates on the NORB image database. To achieve these results, we introduce several enhancements to the standard ELM algorithm, which individually and in combination can significantly improve performance. The main innovation is to ensure each hidden-unit operates only on a randomly sized and positioned patch of each image. This form of random ‘receptive field’ sampling of the input ensures the input weight matrix is sparse, with about 90% of weights equal to zero. Furthermore, combining our methods with a small number of iterations of a single-batch backpropagation method can significantly reduce the number of hidden-units required to achieve a particular performance. Our close to state-of-the-art results for MNIST and NORB suggest that the ease of use and accuracy of the ELM algorithm for designing a single-hidden-layer neural network classifier should cause it to be given greater consideration either as a standalone method for simpler problems, or as the final classification stage in deep neural networks applied to more difficult problems. PMID:26262687
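    A minimal sketch of the ELM training scheme described above, on synthetic data rather than MNIST: random sparse input weights (standing in for the random 'receptive fields'; the paper zeroes about 90% of weights, here about 70% for a tiny input), a fixed nonlinearity, and output weights solved in closed form by ridge regression:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-class problem (assumed data, not MNIST).
n, d, hidden = 200, 16, 64
X = rng.standard_normal((n, d))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
T = np.eye(2)[y]                         # one-hot targets

# Random sparse input weights: each hidden unit sees a random input subset.
Win = rng.standard_normal((d, hidden))
Win *= rng.random((d, hidden)) < 0.3     # ~70% of weights zeroed

H = np.tanh(X @ Win)                     # hidden-layer activations

# Output weights by ridge regression -- the only training step in an ELM.
ridge = 1e-2
Wout = np.linalg.solve(H.T @ H + ridge * np.eye(hidden), H.T @ T)

train_acc = ((H @ Wout).argmax(axis=1) == y).mean()
```

    Because only the output layer is trained, and in closed form, the whole procedure is a single matrix solve, which is where the very short training times come from.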

  11. Sexing the Sciuridae: a simple and accurate set of molecular methods to determine sex in tree squirrels, ground squirrels and marmots.

    PubMed

    Gorrell, Jamieson C; Boutin, Stan; Raveh, Shirley; Neuhaus, Peter; Côté, Steeve D; Coltman, David W

    2012-09-01

    We determined the sequence of the male-specific minor histocompatibility complex antigen (Smcy) from the Y chromosome of seven squirrel species (Sciuridae, Rodentia). Based on conserved regions inside the Smcy intron sequence, we designed PCR primers for sex determination in these species that can be co-amplified with nuclear loci as controls. PCR co-amplification yields two products for males and one for females that are easily visualized as bands by agarose gel electrophoresis. Our method provides simple and reliable sex determination across a wide range of squirrel species.

  12. Simple and fast calculation of the second-order gradients for globalized dual heuristic dynamic programming in neural networks.

    PubMed

    Fairbank, Michael; Alonso, Eduardo; Prokhorov, Danil

    2012-10-01

    We derive an algorithm to exactly calculate the mixed second-order derivatives of a neural network's output with respect to its input vector and weight vector. This is necessary for the adaptive dynamic programming (ADP) algorithms globalized dual heuristic programming (GDHP) and value-gradient learning. The algorithm calculates the inner product of this second-order matrix with a given fixed vector in a time that is linear in the number of weights in the neural network. We use a "forward accumulation" of the derivative calculations which produces a much more elegant and easy-to-implement solution than has previously been published for this task. In doing so, the algorithm makes GDHP simple to implement and efficient, bridging the gap between the widely used DHP and GDHP ADP methods.
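    The object being computed is the mixed second derivative of a network output with respect to its inputs and weights, contracted with a fixed vector. For a toy one-unit 'network' y = tanh(w·x) this contraction can be written in closed form and checked against finite differences (an illustration of the quantity, not the paper's forward-accumulation algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(4)   # input vector
w = rng.standard_normal(4)   # weight vector
v = rng.standard_normal(4)   # fixed vector to contract with

def mixed_hvp(x, w, v):
    """sum_j v_j * d2y/(dx_i dw_j) for y = tanh(w @ x), in one pass."""
    t = np.tanh(w @ x)
    s = 1.0 - t * t                        # sech^2(w @ x)
    # d/dw_j [s * w_i] = -2*t*s*x_j*w_i + s*delta_ij, contracted with v:
    return -2.0 * t * s * (v @ x) * w + s * v

# Finite-difference check: perturb w along v and difference grad_x y.
def grad_x(w):
    return (1.0 - np.tanh(w @ x) ** 2) * w

eps = 1e-6
fd = (grad_x(w + eps * v) - grad_x(w - eps * v)) / (2 * eps)
```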

  13. Atmospheric transmittance of an absorbing gas. 4. OPTRAN: a computationally fast and accurate transmittance model for absorbing gases with fixed and with variable mixing ratios at variable viewing angles

    NASA Astrophysics Data System (ADS)

    McMillin, L. M.; Crone, L. J.; Goldberg, M. D.; Kleespies, T. J.

    1995-09-01

    A fast and accurate method for the generation of atmospheric transmittances, optical path transmittance (OPTRAN), is described. Results from OPTRAN are compared with those produced by other currently used methods. OPTRAN produces transmittances that can be used to generate brightness temperatures that are accurate to better than 0.2 K, well over 10 times as accurate as the current methods. This is significant because it brings the accuracy of transmittance computation to a level at which it will not adversely affect atmospheric retrievals. OPTRAN is the product of an evolution of approaches developed earlier at the National Environmental Satellite, Data, and Information Service. A major feature of OPTRAN that contributes to its accuracy is that transmittance is obtained as a function of the absorber amount rather than the pressure.
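    The feature highlighted above is that OPTRAN parameterizes transmittance by absorber amount rather than pressure. A toy sketch of that idea: precompute optical depth on a fixed grid of absorber amounts, then interpolate in absorber-amount space (Beer's law with an assumed absorption coefficient stands in for the real line-by-line physics):

```python
import numpy as np

k_abs = 0.3                                 # assumed absorption coefficient
amount_grid = np.linspace(0.0, 10.0, 101)   # fixed absorber-amount levels
tau_grid = k_abs * amount_grid              # 'precomputed' optical depths

def transmittance(amount):
    """Interpolate optical depth in absorber-amount space, then apply Beer's law."""
    tau = np.interp(amount, amount_grid, tau_grid)
    return np.exp(-tau)
```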

  14. A simple and fast method for the production and characterization of methylic and ethylic biodiesels from tucum oil via an alkaline route.

    PubMed

    de Oliveira, Marcelo Firmino; Vieira, Andressa Tironi; Batista, Antônio Carlos Ferreira; de Souza Rodrigues, Hugo; Stradiotto, Nelson Ramos

    2011-01-01

    A simple, fast, and complete route for the production of methylic and ethylic biodiesel from tucum oil is described. Aliquots of the oil obtained directly from pressed tucum (pulp and almonds) were treated with potassium methoxide or ethoxide at 40°C for 40 min. The biodiesel formed was removed from the reactor and washed with 0.1 M HCl aqueous solution. A simple distillation at 100°C was carried out in order to remove water and alcohol species from the biodiesel. The oxidative stability index was obtained for the tucum oil as well as the methylic and ethylic biodiesel at 6.13, 2.90, and 2.80 h, for storage times longer than 8 days. Quality control of the original oil and of the methylic and ethylic biodiesels, such as the amount of glycerin produced during the transesterification process, was accomplished by the TLC, GC-MS, and FT-IR techniques. The results obtained in this study indicate the potential for biofuel production by simple treatment of tucum, an important Amazonian fruit. PMID:21629751

  15. A Simple and Fast Method for the Production and Characterization of Methylic and Ethylic Biodiesels from Tucum Oil via an Alkaline Route

    PubMed Central

    de Oliveira, Marcelo Firmino; Vieira, Andressa Tironi; Batista, Antônio Carlos Ferreira; Rodrigues, Hugo de Souza; Stradiotto, Nelson Ramos

    2011-01-01

    A simple, fast, and complete route for the production of methylic and ethylic biodiesel from tucum oil is described. Aliquots of the oil obtained directly from pressed tucum (pulp and almonds) were treated with potassium methoxide or ethoxide at 40°C for 40 min. The biodiesel formed was removed from the reactor and washed with 0.1 M HCl aqueous solution. A simple distillation at 100°C was carried out in order to remove water and alcohol species from the biodiesel. The oxidative stability index was obtained for the tucum oil as well as the methylic and ethylic biodiesel at 6.13, 2.90, and 2.80 h, for storage times longer than 8 days. Quality control of the original oil and of the methylic and ethylic biodiesels, such as the amount of glycerin produced during the transesterification process, was accomplished by the TLC, GC-MS, and FT-IR techniques. The results obtained in this study indicate the potential for biofuel production by simple treatment of tucum, an important Amazonian fruit. PMID:21629751

  17. Fast and simple one-step preparation of ⁶⁸Ga citrate for routine clinical PET.

    PubMed

    Jensen, Svend B; Nielsen, Karin M; Mewis, Dennis; Kaufmann, Jens

    2013-08-01

    The imaging of infectious and inflammatory diseases using gallium-67 (⁶⁷Ga) citrate scintigraphy has been a well-established diagnostic tool for decades. In recent times, interest has focused on PET using the short-lived positron-emitting radioisotope ⁶⁸Ga. ⁶⁸Ga is not only more readily available but also provides better-quality images, whose high resolution permits quantitative analyses, thus improving the management of patients suffering from infections or inflammation. The purpose of our study was to develop a fast and reliable synthesis protocol for the preparation of ⁶⁸Ga citrate under good manufacturing practice conditions without the use of organic solvents. A commercially available synthesis module was used to perform 10 syntheses with an average yield of 768 ± 31 MBq (mean ± SD) within 10 min; 92.04 ± 1.23% of the radioactivity was located in the product vial, and the rest on the cation exchange cartridge (7.48 ± 1.23%) and in the waste vial (0.47 ± 0.28%). The radiochemical purity of the product determined by instant thin-layer chromatography was greater than 99%. The products have been proven to be sterile and pyrogen-free. Variations were made in several critical synthesis parameters, and the results are presented herein. By eliminating the use of organic solvents, the previously required quality control testing of the final product by gas chromatography can be abandoned. This novel, high-yielding method allows for a more efficient synthesis of ⁶⁸Ga citrate with both shorter production time and high radiochemical purity.

  18. A fast, robust, and simple implicit method for adaptive time-stepping on adaptive mesh-refinement grids

    NASA Astrophysics Data System (ADS)

    Commerçon, B.; Debout, V.; Teyssier, R.

    2014-03-01

    Context. Implicit solvers present strong limitations when used on supercomputing facilities and in particular for adaptive mesh-refinement codes. Aims: We present a new method for implicit adaptive time-stepping on adaptive mesh-refinement grids. We implement it in the radiation-hydrodynamics solver we designed for the RAMSES code for astrophysical purposes and, more particularly, for protostellar collapse. Methods: We briefly recall the radiation-hydrodynamics equations and the adaptive time-stepping methodology used for hydrodynamical solvers. We then introduce the different types of boundary conditions (Dirichlet, Neumann, and Robin) that are used at the interface between levels and present our implementation of the new method in the RAMSES code. The method is tested against classical diffusion and radiation-hydrodynamics tests, after which we present an application for protostellar collapse. Results: We show that using Dirichlet boundary conditions at level interfaces is a good compromise between robustness and accuracy and that it can be used in structure formation calculations. The gain in computational time over our former unique time step method ranges from factors of 5 to 50, depending on the level of adaptive time-stepping and on the problem. We successfully compare the old and new methods for protostellar collapse calculations that involve highly nonlinear physics. Conclusions: We have developed a simple but robust method for adaptive time-stepping of implicit schemes on adaptive mesh-refinement grids. It can be applied to a wide variety of physical problems that involve diffusion processes.
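    The Dirichlet interface coupling described above can be illustrated on a single fine level in 1D: one backward-Euler diffusion step in which the values at both level interfaces are held fixed at values taken from the coarse level (a toy analogue, not the RAMSES implementation):

```python
import numpy as np

def implicit_diffusion_step(u, D, dt, dx, left, right):
    """One backward-Euler step of u_t = D*u_xx on a fine patch, with
    Dirichlet values `left`/`right` imposed at the level interfaces."""
    n = len(u)
    r = D * dt / dx**2
    A = np.zeros((n, n))
    b = u.copy()
    for i in range(n):
        A[i, i] = 1.0 + 2.0 * r
        if i > 0:
            A[i, i - 1] = -r
        if i < n - 1:
            A[i, i + 1] = -r
    b[0] += r * left      # interface value supplied by the coarse level
    b[-1] += r * right
    return np.linalg.solve(A, b)

u0 = np.zeros(9)
u1 = implicit_diffusion_step(u0, D=1.0, dt=0.1, dx=0.1, left=1.0, right=1.0)
```

    Being implicit, the step is stable even at r = D*dt/dx² = 10 here, which is exactly why each refinement level can take its own large time step.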

  19. New approach for the synthesis of [18F]fluoroethyltyrosine for cancer imaging: simple, fast, and high yielding automated synthesis.

    PubMed

    Zuhayra, M; Alfteimi, A; Forstner, C Von; Lützen, U; Meller, B; Henze, E

    2009-11-01

    O-(2-[(18)F]fluoroethyl)-L-tyrosine ([(18)F]FET) is one of the first (18)F-labeled amino acids for imaging amino acid metabolism in tumors. This tracer overcomes the disadvantages of [(18)F]fluorodeoxyglucose ([(18)F]FDG) and [(11)C]methionine ([(11)C]MET). Nevertheless, the various synthetic methods providing [(18)F]FET share a major disadvantage: they require two purification steps during the synthesis, including an HPLC purification, which causes difficulties in automation, moderate yields, and long synthesis times (>60 min). A new approach for the synthesis of [(18)F]FET was developed starting from 2-bromoethyl triflate as precursor. After optimization of the synthesis parameters, including the distillation step of [(18)F]-FCH(2)CH(2)Br combined with the final purification of [(18)F]FET using a simple solid-phase extraction instead of an HPLC run, the synthesis of [(18)F]FET could be significantly simplified, shortened, and improved. The radiochemical yield (RCY) was about 45% (not decay corrected and calculated relative to the [(18)F]F(-) activity delivered from the cyclotron). The synthesis time was only 35 min from the end of bombardment (EOB), and the radiochemical purity was >99% at the end of synthesis (EOS). Thus, this simplified synthesis of [(18)F]FET offers a very good option for routine clinical use. PMID:19804977

  20. Simple and fast analysis of iohexol in human serums using micro-hydrophilic interaction liquid chromatography with monolithic column.

    PubMed

    Chaloemsuwiwattanakan, Thotsaphorn; Sangcakul, Areeporn; Kitiyakara, Chagriya; Nacapricha, Duangjai; Wilairat, Prapin; Chaisuwan, Patcharin

    2016-09-01

    A simple and rapid method based on micro-liquid chromatography using a synthetic monolithic capillary column was developed for the determination of iohexol, a marker used to evaluate the glomerular filtration rate, in human serum. A hydrophilic methacrylic acid-ethylene dimethacrylate monolith provided excellent selectivity and efficiency for iohexol, with a separation time of 3 min using a mobile phase of 40:60 v/v 50 mM phosphate buffer pH 5/methanol. Four serum protein removal methods, using perchloric acid, 50% acetonitrile, 0.1 M zinc sulfate, and a centrifugal membrane filter, were examined. The zinc sulfate method was chosen due to its simplicity, compatibility with the mobile phase system, nontoxicity, and low cost. Interday calibration curves were constructed over an iohexol concentration range of 2-500 mg/L (R(2) = 0.9997 ± 0.0001) with a detection limit of 0.44 mg/L. Intra- and interday precisions for peak area and retention time were less than 2.8 and 1.4%, respectively. The method was successfully applied to serum samples with percent recoveries from 102 to 104. The method was also applied to monitor iohexol in a healthy subject. Compared with the commercially available reversed-phase high-performance liquid chromatography method, the presented method provided a simpler chromatogram, faster separation with higher separation efficiency, and much lower sample and solvent consumption. PMID:27443792
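
    The calibration-curve figures of merit quoted above (R², detection limit) come from an ordinary least-squares fit of peak area against concentration. A generic sketch of that computation, using the common 3.3σ/slope detection-limit convention (the study's exact LOD convention is not stated in the abstract):

```python
import numpy as np

def calibration(conc, area):
    """Fit area = a*conc + b and return slope, intercept, R^2, and an
    LOD estimate of 3.3 * (residual std dev) / slope. Illustrative only."""
    conc = np.asarray(conc, float)
    area = np.asarray(area, float)
    a, b = np.polyfit(conc, area, 1)          # slope, intercept
    resid = area - (a * conc + b)
    ss_res = np.sum(resid**2)
    ss_tot = np.sum((area - area.mean())**2)
    r2 = 1.0 - ss_res / ss_tot
    sigma = np.sqrt(ss_res / (len(conc) - 2))  # residual standard deviation
    lod = 3.3 * sigma / a
    return a, b, r2, lod
```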

  1. Fast and simple epidemiological typing of Pseudomonas aeruginosa using the double-locus sequence typing (DLST) method.

    PubMed

    Basset, P; Blanc, D S

    2014-06-01

    Although the molecular typing of Pseudomonas aeruginosa is important to understand the local epidemiology of this opportunistic pathogen, it remains challenging. Our aim was to develop a simple typing method based on the sequencing of two highly variable loci. Single-strand sequencing of three highly variable loci (ms172, ms217, and oprD) was performed on a collection of 282 isolates recovered between 1994 and 2007 (from patients and the environment). As expected, the resolution of each locus alone [number of types (NT) = 35-64; index of discrimination (ID) = 0.816-0.964] was lower than that of a combination of two loci (NT = 78-97; ID = 0.966-0.971). As each pairwise combination of loci gave similar results, we selected the most robust combination, ms172 [reverse; R] and ms217 [R], to constitute the double-locus sequence typing (DLST) scheme for P. aeruginosa. This combination gave: (i) a complete genotype for 276/282 isolates (typability of 98%), (ii) 86 different types, and (iii) an ID of 0.968. Analysis of multiple isolates from the same patients or taps showed that DLST genotypes are generally stable over a period of several months. The high typability, discriminatory power, and ease of use of the proposed DLST scheme make it a method of choice for local epidemiological analyses of P. aeruginosa. Moreover, the possibility of giving an unambiguous definition of types made it possible to develop an Internet database ( http://www.dlst.org ) accessible to all. PMID:24326699
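
    The index of discrimination (ID) reported above is conventionally the Hunter-Gaston index: the probability that two isolates drawn at random from the collection have different types. A minimal sketch (not the authors' code):

```python
from collections import Counter

def discrimination_index(types):
    """Hunter-Gaston index of discrimination for a typing scheme:
    ID = 1 - sum(n_j * (n_j - 1)) / (N * (N - 1)),
    where n_j is the number of isolates of type j and N the total."""
    counts = Counter(types).values()
    n = len(types)
    return 1.0 - sum(c * (c - 1) for c in counts) / (n * (n - 1))
```

    An ID of 0.968 for the ms172/ms217 combination means two random isolates share a DLST type only about 3% of the time.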

  2. Retrieval of Areal-averaged Spectral Surface Albedo from Transmission Data Alone: Computationally Simple and Fast Approach

    SciTech Connect

    Kassianov, Evgueni I.; Barnard, James C.; Flynn, Connor J.; Riihimaki, Laura D.; Michalsky, Joseph; Hodges, G. B.

    2014-10-25

    We introduce and evaluate a simple retrieval of areal-averaged surface albedo using ground-based measurements of atmospheric transmission alone at five wavelengths (415, 500, 615, 673 and 870 nm), under fully overcast conditions. Our retrieval is based on a one-line semi-analytical equation and widely accepted assumptions regarding the weak spectral dependence of cloud optical properties, such as cloud optical depth and asymmetry parameter, in the visible and near-infrared spectral range. To illustrate the performance of our retrieval, we use as input measurements of spectral atmospheric transmission from the Multi-Filter Rotating Shadowband Radiometer (MFRSR). These MFRSR data are collected at two well-established continental sites in the United States supported by the U.S. Department of Energy’s (DOE’s) Atmospheric Radiation Measurement (ARM) Program and the National Oceanic and Atmospheric Administration (NOAA). The areal-averaged albedos obtained from the MFRSR are compared with collocated and coincident Moderate Resolution Imaging Spectroradiometer (MODIS) white-sky albedo. In particular, these comparisons are made at four MFRSR wavelengths (500, 615, 673 and 870 nm) and for four seasons (winter, spring, summer and fall) at the ARM site using multi-year (2008-2013) MFRSR and MODIS data. Good agreement, on average, for these wavelengths results in small values (≤0.01) of the corresponding root mean square errors (RMSEs) for these two sites. The obtained RMSEs are comparable with those obtained previously for the shortwave albedos (MODIS-derived versus tower-measured) for these sites during growing seasons. We also demonstrate good agreement between tower-based daily-averaged surface albedos measured for “nearby” overcast and non-overcast days. Thus, our retrieval originally developed for overcast conditions likely can be extended to non-overcast days by interpolating between overcast retrievals.

  3. Retrieval of areal-averaged spectral surface albedo from transmission data alone: computationally simple and fast approach

    NASA Astrophysics Data System (ADS)

    Kassianov, Evgueni; Barnard, James; Flynn, Connor; Riihimaki, Laura; Michalsky, Joseph J.; Hodges, Gary

    2014-10-01

    We introduce and evaluate a simple retrieval of areal-averaged surface albedo using ground-based measurements of atmospheric transmission alone at five wavelengths (415, 500, 615, 673 and 870 nm), under fully overcast conditions. Our retrieval is based on a one-line semi-analytical equation and widely accepted assumptions regarding the weak spectral dependence of cloud optical properties, such as cloud optical depth and asymmetry parameter, in the visible and near-infrared spectral range. To illustrate the performance of our retrieval, we use as input measurements of spectral atmospheric transmission from the Multi-Filter Rotating Shadowband Radiometer (MFRSR). These MFRSR data are collected at two well-established continental sites in the United States supported by the U.S. Department of Energy's (DOE's) Atmospheric Radiation Measurement (ARM) Program and the National Oceanic and Atmospheric Administration (NOAA). The areal-averaged albedos obtained from the MFRSR are compared with collocated and coincident Moderate Resolution Imaging Spectroradiometer (MODIS) white-sky albedo. In particular, these comparisons are made at four MFRSR wavelengths (500, 615, 673 and 870 nm) and for four seasons (winter, spring, summer and fall) at the ARM site using multi-year (2008-2013) MFRSR and MODIS data. Good agreement, on average, for these wavelengths results in small values (≤0.015) of the corresponding root mean square errors (RMSEs) for these two sites. The obtained RMSEs are comparable with those obtained previously for the shortwave albedos (MODIS-derived versus tower-measured) for these sites during growing seasons. We also demonstrate good agreement between tower-based daily-averaged surface albedos measured for "nearby" overcast and non-overcast days. Thus, our retrieval originally developed for overcast conditions likely can be extended to non-overcast days by interpolating between overcast retrievals.
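
    The RMSE metric used to compare MFRSR-retrieved and MODIS white-sky albedos is the standard root mean square error over matched samples. A generic sketch of the comparison, not the authors' processing code:

```python
import numpy as np

def rmse(retrieved, reference):
    """Root mean square error between two matched albedo series,
    e.g. MFRSR-retrieved versus collocated MODIS white-sky albedo."""
    retrieved = np.asarray(retrieved, float)
    reference = np.asarray(reference, float)
    return float(np.sqrt(np.mean((retrieved - reference) ** 2)))
```

    An RMSE at or below 0.01-0.015 is small relative to typical land-surface albedos (roughly 0.1-0.3 in the visible for vegetated surfaces), which is the sense in which the agreement above is "good".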

  4. A new simple and fast thermally-solvent assisted method to bond PMMA–PMMA in micro-fluidics devices

    NASA Astrophysics Data System (ADS)

    Bamshad, Arshya; Nikfarjam, Alireza; Khaleghi, Hossein

    2016-06-01

    A rapid and simple thermally-solvent assisted method of bonding was introduced for poly(methyl methacrylate) (PMMA) based microfluidic substrates. The technique is a low-temperature (68 °C) and rapid (15 min) bonding technique; in addition, only a fan-assisted oven and some paper clamps are used. Two different solvents (ethanol and isopropyl alcohol) with two different methods of cooling (one-step and three-step) were employed to determine the best solvent and cooling method (residual stresses may be released differently in each cooling method), considering bonding strength and quality. In this bonding technique, a thin film of solvent dispersed between two PMMA sheets tends to dissolve a thin surface layer of the PMMA sheets, then evaporates, and finally reconnects monomers of the PMMA sheets at the specific operating temperature. The operating temperature of this method comes from the coincidence of the solubility parameter graph of PMMA with that of the solvents. Different tests such as tensile strength, deformation, leakage, and surface characteristics tests were performed to find the optimum conditions for this bonding strategy. The best bonding quality and the highest bonding strength (28.47 MPa) occurred when a 70% isopropyl alcohol solution was employed with the one-step cooling method. Furthermore, bonding reversibility was taken into account, and critical percentages for irreversible bonding were obtained for both solvents and methods. This method provides excellent bonding quality for PMMA substrates and can be used in laboratories without any expensive or special instruments, thanks to its merits such as lower bonding time, lower cost, and higher strength in comparison with the majority of other common bonding techniques.

  5. A simple and fast ultrasound-assisted extraction procedure for Fe and Zn determination in milk-based infant formulas using flame atomic absorption spectrometry (FAAS).

    PubMed

    Machado, Ignacio; Bergmann, Gabriela; Pistón, Mariela

    2016-03-01

    A simple and fast ultrasound-assisted procedure for the determination of iron and zinc in infant formulas is presented. The analytical determinations were carried out by flame atomic absorption spectrometry. Multivariate experiments were performed for optimization; in addition, a comparative study was carried out using two ultrasonic devices. A method using an ultrasonic bath was selected because several samples can be prepared simultaneously and there is less contamination risk. Analytical precision (sr(%)) was 3.3% and 4.1% for iron and zinc, respectively. Trueness was assessed using a reference material and by comparing the results obtained for commercial samples with those from a reference method. The results were statistically equivalent to the certified values and in good agreement with those obtained using the reference method. The proposed method can be easily implemented in laboratories for routine analysis, with the advantage of being rapid and in agreement with green chemistry principles. PMID:26471568

  6. Validation of a simple and fast method to quantify in vitro mineralization with fluorescent probes used in molecular imaging of bone

    SciTech Connect

    Moester, Martiene J.C.; Schoeman, Monique A.E.; Oudshoorn, Ineke B.; Beusekom, Mara M. van; Mol, Isabel M.; Kaijzel, Eric L.; Löwik, Clemens W.G.M.; Rooij, Karien E. de

    2014-01-03

    Highlights: •We validate a simple and fast method of quantification of in vitro mineralization. •Fluorescently labeled agents can detect calcium deposits in the mineralized matrix of cell cultures. •Fluorescent signals of the probes correlated with Alizarin Red S staining. -- Abstract: Alizarin Red S staining is the standard method to indicate and quantify matrix mineralization during differentiation of osteoblast cultures. KS483 cells are multipotent mouse mesenchymal progenitor cells that can differentiate into chondrocytes, adipocytes and osteoblasts and are a well-characterized model for the study of bone formation. Matrix mineralization is the last step of differentiation of bone cells and is therefore a very important outcome measure in bone research. Fluorescently labelled calcium chelating agents, e.g. BoneTag and OsteoSense, are currently used for in vivo imaging of bone. The aim of the present study was to validate these probes for fast and simple detection and quantification of in vitro matrix mineralization by KS483 cells and thus enabling high-throughput screening experiments. KS483 cells were cultured under osteogenic conditions in the presence of compounds that either stimulate or inhibit osteoblast differentiation and thereby matrix mineralization. After 21 days of differentiation, fluorescence of stained cultures was quantified with a near-infrared imager and compared to Alizarin Red S quantification. Fluorescence of both probes closely correlated to Alizarin Red S staining in both inhibiting and stimulating conditions. In addition, both compounds displayed specificity for mineralized nodules. We therefore conclude that this method of quantification of bone mineralization using fluorescent compounds is a good alternative for the Alizarin Red S staining.

  7. Low voltage-driven oxide phototransistors with fast recovery, high signal-to-noise ratio, and high responsivity fabricated via a simple defect-generating process

    NASA Astrophysics Data System (ADS)

    Yun, Myeong Gu; Kim, Ye Kyun; Ahn, Cheol Hyoun; Cho, Sung Woon; Kang, Won Jun; Cho, Hyung Koun; Kim, Yong-Hoon

    2016-08-01

    We have demonstrated that photo-thin film transistors (photo-TFTs) fabricated via a simple defect-generating process could achieve fast recovery, a high signal to noise (S/N) ratio, and high sensitivity. The photo-TFTs are inverted-staggered bottom-gate type indium-gallium-zinc-oxide (IGZO) TFTs fabricated using atomic layer deposition (ALD)-derived Al2O3 gate insulators. The surfaces of the Al2O3 gate insulators are damaged by ion bombardment during the deposition of the IGZO channel layers by sputtering and the damage results in the hysteresis behavior of the photo-TFTs. The hysteresis loops broaden as the deposition power density increases. This implies that we can easily control the amount of the interface trap sites and/or trap sites in the gate insulator near the interface. The photo-TFTs with large hysteresis-related defects have high S/N ratio and fast recovery in spite of the low operation voltages including a drain voltage of 1 V, positive gate bias pulse voltage of 3 V, and gate voltage pulse width of 3 V (0 to 3 V). In addition, through the hysteresis-related defect-generating process, we have achieved a high responsivity since the bulk defects that can be photo-excited and eject electrons also increase with increasing deposition power density.

  8. Low voltage-driven oxide phototransistors with fast recovery, high signal-to-noise ratio, and high responsivity fabricated via a simple defect-generating process.

    PubMed

    Yun, Myeong Gu; Kim, Ye Kyun; Ahn, Cheol Hyoun; Cho, Sung Woon; Kang, Won Jun; Cho, Hyung Koun; Kim, Yong-Hoon

    2016-01-01

    We have demonstrated that photo-thin film transistors (photo-TFTs) fabricated via a simple defect-generating process could achieve fast recovery, a high signal to noise (S/N) ratio, and high sensitivity. The photo-TFTs are inverted-staggered bottom-gate type indium-gallium-zinc-oxide (IGZO) TFTs fabricated using atomic layer deposition (ALD)-derived Al2O3 gate insulators. The surfaces of the Al2O3 gate insulators are damaged by ion bombardment during the deposition of the IGZO channel layers by sputtering and the damage results in the hysteresis behavior of the photo-TFTs. The hysteresis loops broaden as the deposition power density increases. This implies that we can easily control the amount of the interface trap sites and/or trap sites in the gate insulator near the interface. The photo-TFTs with large hysteresis-related defects have high S/N ratio and fast recovery in spite of the low operation voltages including a drain voltage of 1 V, positive gate bias pulse voltage of 3 V, and gate voltage pulse width of 3 V (0 to 3 V). In addition, through the hysteresis-related defect-generating process, we have achieved a high responsivity since the bulk defects that can be photo-excited and eject electrons also increase with increasing deposition power density. PMID:27553518

  9. Low voltage-driven oxide phototransistors with fast recovery, high signal-to-noise ratio, and high responsivity fabricated via a simple defect-generating process

    PubMed Central

    Yun, Myeong Gu; Kim, Ye Kyun; Ahn, Cheol Hyoun; Cho, Sung Woon; Kang, Won Jun; Cho, Hyung Koun; Kim, Yong-Hoon

    2016-01-01

    We have demonstrated that photo-thin film transistors (photo-TFTs) fabricated via a simple defect-generating process could achieve fast recovery, a high signal to noise (S/N) ratio, and high sensitivity. The photo-TFTs are inverted-staggered bottom-gate type indium-gallium-zinc-oxide (IGZO) TFTs fabricated using atomic layer deposition (ALD)-derived Al2O3 gate insulators. The surfaces of the Al2O3 gate insulators are damaged by ion bombardment during the deposition of the IGZO channel layers by sputtering and the damage results in the hysteresis behavior of the photo-TFTs. The hysteresis loops broaden as the deposition power density increases. This implies that we can easily control the amount of the interface trap sites and/or trap sites in the gate insulator near the interface. The photo-TFTs with large hysteresis-related defects have high S/N ratio and fast recovery in spite of the low operation voltages including a drain voltage of 1 V, positive gate bias pulse voltage of 3 V, and gate voltage pulse width of 3 V (0 to 3 V). In addition, through the hysteresis-related defect-generating process, we have achieved a high responsivity since the bulk defects that can be photo-excited and eject electrons also increase with increasing deposition power density. PMID:27553518

  10. A simple, sensitive, and accurate alcohol electrode

    SciTech Connect

    Verduyn, C.; Scheffers, W.A.; Van Dijken, J.P.

    1983-04-01

    The construction and performance of an enzyme electrode is described which specifically detects lower primary aliphatic alcohols in aqueous solutions. The electrode consists of a commercial Clark-type oxygen electrode on which alcohol oxidase (E.C. 1.1.3.13) and catalase were immobilized. The decrease in electrode current is linearly proportional to ethanol concentrations between 1 and 25 ppm. The response of the electrode remains constant over 400 assays during a period of two weeks. The response time is between 1 and 2 min. Assembly of the electrode takes less than 1 h.

  11. New fast least-squares algorithm for estimating the best-fitting parameters due to simple geometric-structures from gravity anomalies

    PubMed Central

    Essa, Khalid S.

    2013-01-01

    A new fast least-squares method is developed to estimate the shape factor (q-parameter) of a buried structure using normalized residual anomalies obtained from gravity data. The problem of shape factor estimation is transformed into a problem of finding a solution of a non-linear equation of the form f(q) = 0 by defining the anomaly value at the origin and at different points on the profile (N-value). Procedures are also formulated to estimate the depth (z-parameter) and the amplitude coefficient (A-parameter) of the buried structure. The method is simple and rapid for estimating parameters that produced gravity anomalies. This technique is used for a class of geometrically simple anomalous bodies, including the semi-infinite vertical cylinder, the infinitely long horizontal cylinder, and the sphere. The technique is tested and verified on theoretical models with and without random errors. It is also successfully applied to real data sets from Senegal and India, and the inverted parameters are in good agreement with the known actual values. PMID:25685472

  12. New fast least-squares algorithm for estimating the best-fitting parameters due to simple geometric-structures from gravity anomalies.

    PubMed

    Essa, Khalid S

    2014-01-01

    A new fast least-squares method is developed to estimate the shape factor (q-parameter) of a buried structure using normalized residual anomalies obtained from gravity data. The problem of shape factor estimation is transformed into a problem of finding a solution of a non-linear equation of the form f(q) = 0 by defining the anomaly value at the origin and at different points on the profile (N-value). Procedures are also formulated to estimate the depth (z-parameter) and the amplitude coefficient (A-parameter) of the buried structure. The method is simple and rapid for estimating parameters that produced gravity anomalies. This technique is used for a class of geometrically simple anomalous bodies, including the semi-infinite vertical cylinder, the infinitely long horizontal cylinder, and the sphere. The technique is tested and verified on theoretical models with and without random errors. It is also successfully applied to real data sets from Senegal and India, and the inverted parameters are in good agreement with the known actual values.
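
    The central step above, solving a non-linear equation f(q) = 0 for the shape factor, can be handled by any bracketing root finder once f is built from the normalized anomaly values. A generic bisection sketch (the paper's specific f(q) is not reproduced here):

```python
def bisect(f, lo, hi, tol=1e-10, max_iter=200):
    """Find a root of f on [lo, hi] by bisection. Requires a sign change.
    Generic illustration of the root-finding step, not the authors' code."""
    flo, fhi = f(lo), f(hi)
    if flo * fhi > 0:
        raise ValueError("f must change sign on [lo, hi]")
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        fmid = f(mid)
        if abs(fmid) < tol or (hi - lo) < tol:
            return mid
        if flo * fmid < 0:      # root lies in the left half
            hi, fhi = mid, fmid
        else:                   # root lies in the right half
            lo, flo = mid, fmid
    return 0.5 * (lo + hi)
```

    Bisection converges unconditionally once the root is bracketed, which suits the "simple and rapid" one-parameter inversion described in the abstract.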

  13. PSI/TM-Coffee: a web server for fast and accurate multiple sequence alignments of regular and transmembrane proteins using homology extension on reduced databases

    PubMed Central

    Floden, Evan W.; Tommaso, Paolo D.; Chatzou, Maria; Magis, Cedrik; Notredame, Cedric; Chang, Jia-Ming

    2016-01-01

    The PSI/TM-Coffee web server performs multiple sequence alignment (MSA) of proteins by combining homology extension with a consistency based alignment approach. Homology extension is performed with Position Specific Iterative (PSI) BLAST searches against a choice of redundant and non-redundant databases. The main novelty of this server is to allow databases of reduced complexity to rapidly perform homology extension. This server also gives the possibility to use transmembrane proteins (TMPs) reference databases to allow even faster homology extension on this important category of proteins. Aside from an MSA, the server also outputs topological prediction of TMPs using the HMMTOP algorithm. Previous benchmarking of the method has shown this approach outperforms the most accurate alignment methods such as MSAProbs, Kalign, PROMALS, MAFFT, ProbCons and PRALINE™. The web server is available at http://tcoffee.crg.cat/tmcoffee. PMID:27106060

  14. PSI/TM-Coffee: a web server for fast and accurate multiple sequence alignments of regular and transmembrane proteins using homology extension on reduced databases.

    PubMed

    Floden, Evan W; Tommaso, Paolo D; Chatzou, Maria; Magis, Cedrik; Notredame, Cedric; Chang, Jia-Ming

    2016-07-01

    The PSI/TM-Coffee web server performs multiple sequence alignment (MSA) of proteins by combining homology extension with a consistency based alignment approach. Homology extension is performed with Position Specific Iterative (PSI) BLAST searches against a choice of redundant and non-redundant databases. The main novelty of this server is to allow databases of reduced complexity to rapidly perform homology extension. This server also gives the possibility to use transmembrane proteins (TMPs) reference databases to allow even faster homology extension on this important category of proteins. Aside from an MSA, the server also outputs topological prediction of TMPs using the HMMTOP algorithm. Previous benchmarking of the method has shown this approach outperforms the most accurate alignment methods such as MSAProbs, Kalign, PROMALS, MAFFT, ProbCons and PRALINE™. The web server is available at http://tcoffee.crg.cat/tmcoffee.

  15. Autonomous Instrumentation for Fast, Continuous and Accurate Isotopic Measurements of Water Vapor (δ18O, δ 2H, H2O) in the Field

    NASA Astrophysics Data System (ADS)

    Liem, J. S.; Dong, F.; Owano, T. G.; Baer, D. S.

    2010-12-01

    Stable isotopes of water vapor are powerful tracers to investigate the hydrological cycle and ecological processes. Therefore, continuous, in-situ and accurate measurements of δ18O and δ2H are critical to advance the understanding of water-cycle dynamics worldwide. Furthermore, the combination of meteorological techniques and high-frequency isotopic water measurements can provide detailed time-resolved information on the eco-physiological performance of plants and enable improved understanding of water fluxes at ecosystem scales. In this work, we present recent development and field deployment of a novel Water Vapor Isotope Measurement System (WVIMS) capable of simultaneous in situ measurements of δ18O and δ2H and water mixing ratio (H2O) with high precision, accuracy and speed (up to 10 Hz measurement rate). The WVIMS consists of an Analyzer (Water Vapor Isotope Analyzer), based on cavity enhanced laser absorption spectroscopy, and a Standard Source (Water Vapor Isotope Standard Source), based on quantitative evaporation of a liquid water standard (with known isotopic content), and operates in a dual-inlet configuration. The WVIMS automatically controls the entire sample and data collection, data analysis and calibration process to allow for continuous, autonomous unattended long-term operation. The WVIMS has been demonstrated for accurate (i.e. fully calibrated) measurements ranging from 500 ppmv (typical of arctic environments) to over 30,000 ppmv (typical of tropical environments) in air. Dual-inlet operation, which involves regular calibration with isotopic water vapor reference standards, essentially eliminates measurement drift, ensures data reliability, and allows operation over an extremely wide ambient temperature range (5-45 °C). This presentation will include recent measurements recorded using the WVIMS in plant growth chambers and in arctic environments. The availability of this new instrumentation provides new opportunities for detailed continuous
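
    The δ18O and δ2H values reported by such analyzers follow the standard delta notation relative to a reference standard (conventionally VSMOW). A minimal sketch of the definition; the VSMOW ratios below are approximate literature values, and the instrument internals are of course far more involved:

```python
# Approximate VSMOW reference isotope ratios (literature values)
R18_VSMOW = 0.0020052   # 18O/16O
R2_VSMOW = 0.00015576   # 2H/1H

def delta_permil(r_sample, r_standard):
    """Delta notation in per mil: delta = (R_sample/R_standard - 1) * 1000.
    Negative values mean the sample is depleted in the heavy isotope."""
    return (r_sample / r_standard - 1.0) * 1000.0
```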

  16. MO-A-BRD-10: A Fast and Accurate GPU-Based Proton Transport Monte Carlo Simulation for Validating Proton Therapy Treatment Plans

    SciTech Connect

    Wan Chan Tseung, H; Ma, J; Beltran, C

    2014-06-15

    Purpose: To build a GPU-based Monte Carlo (MC) simulation of proton transport with detailed modeling of elastic and non-elastic (NE) proton-nucleus interactions, for use in a very fast and cost-effective proton therapy treatment plan verification system. Methods: Using the CUDA framework, we implemented kernels for the following tasks: (1) Simulation of beam spots from our possible scanning nozzle configurations, (2) Proton propagation through CT geometry, taking into account nuclear elastic and multiple scattering, as well as energy straggling, (3) Bertini-style modeling of the intranuclear cascade stage of NE interactions, and (4) Simulation of nuclear evaporation. To validate our MC, we performed: (1) Secondary particle yield calculations in NE collisions with therapeutically-relevant nuclei, (2) Pencil-beam dose calculations in homogeneous phantoms, (3) A large number of treatment plan dose recalculations, and compared with Geant4.9.6p2/TOPAS. A workflow was devised for calculating plans from a commercially available treatment planning system, with scripts for reading DICOM files and generating inputs for our MC. Results: Yields, energy and angular distributions of secondaries from NE collisions on various nuclei are in good agreement with the Geant4.9.6p2 Bertini and Binary cascade models. The 3D gamma pass rate at 2%/2 mm for 70–230 MeV pencil-beam dose distributions in water, soft tissue, bone and Ti phantoms is 100%. The pass rate at 2%/2 mm for treatment plan calculations is typically above 98%. The net computational time on a NVIDIA GTX680 card, including all CPU-GPU data transfers, is around 20 s for 1×10⁷ proton histories. Conclusion: Our GPU-based proton transport MC is the first of its kind to include a detailed nuclear model to handle NE interactions on any nucleus. Dosimetric calculations demonstrate very good agreement with Geant4.9.6p2/TOPAS. Our MC is being integrated into a framework to perform fast routine clinical QA of pencil

  17. WE-A-17A-10: Fast, Automatic and Accurate Catheter Reconstruction in HDR Brachytherapy Using An Electromagnetic 3D Tracking System

    SciTech Connect

    Poulin, E; Racine, E; Beaulieu, L; Binnekamp, D

    2014-06-15

    Purpose: In high dose rate brachytherapy (HDR-B), current catheter reconstruction protocols are slow and error-prone. The purpose of this study was to evaluate the accuracy and robustness of an electromagnetic (EM) tracking system for improved catheter reconstruction in HDR-B protocols. Methods: For this proof-of-principle, a total of 10 catheters were inserted in gelatin phantoms with different trajectories. Catheters were reconstructed using a Philips-design 18G biopsy needle (used as an EM stylet) and the second generation Aurora Planar Field Generator from Northern Digital Inc. The Aurora EM system exploits alternating current technology and generates 3D points at 40 Hz. Phantoms were also scanned using a μCT (GE Healthcare) and a Philips Big Bore clinical CT system with a resolution of 0.089 mm and 2 mm, respectively. Reconstructions using the EM stylet were compared to μCT and CT. To assess the robustness of the EM reconstruction, 5 catheters were reconstructed twice and compared. Results: Reconstruction time for one catheter was 10 seconds or less. This would imply that for a typical clinical implant of 17 catheters, the total reconstruction time would be less than 3 minutes. When compared to the μCT, the mean EM tip identification error was 0.69 ± 0.29 mm while the CT error was 1.08 ± 0.67 mm. The mean 3D distance error was found to be 0.92 ± 0.37 mm and 1.74 ± 1.39 mm for the EM and CT, respectively. EM 3D catheter trajectories were found to be significantly more accurate (unpaired t-test, p < 0.05). A mean difference of less than 0.5 mm was found between successive EM reconstructions. Conclusion: The EM reconstruction was found to be faster, more accurate and more robust than the conventional methods used for catheter reconstruction in HDR-B. This approach can be applied to any type of catheters and applicators. We would like to disclose that the equipment used in this study comes from a collaboration with Philips Medical.
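
    The tip identification and 3D distance errors quoted above are Euclidean distances between reconstructed and reference (e.g. μCT) points, summarized as mean ± standard deviation. A minimal sketch of that comparison, purely illustrative:

```python
import numpy as np

def point_errors(recon_pts, ref_pts):
    """Per-point 3D Euclidean error between reconstructed and reference
    positions (shape (N, 3) each), returned as (mean, sample std dev)."""
    d = np.linalg.norm(np.asarray(recon_pts, float)
                       - np.asarray(ref_pts, float), axis=1)
    return float(d.mean()), float(d.std(ddof=1))
```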

  18. Development of a mechanism and an accurate and simple mathematical model for the description of drug release: Application to a relevant example of acetazolamide-controlled release from a bio-inspired elastin-based hydrogel.

    PubMed

    Fernández-Colino, A; Bermudez, J M; Arias, F J; Quinteros, D; Gonzo, E

    2016-04-01

    Transversality between mathematical modeling, pharmacology, and materials science is essential in order to achieve controlled-release systems with advanced properties. In this regard, the area of biomaterials provides a platform for the development of depots able to achieve controlled release of a drug, pharmacology strives to find new therapeutic molecules, and mathematical models have a connecting function, providing a rational understanding by modeling the parameters that influence the observed release. Herein we present a mechanism which, based on reasonable assumptions, explains the experimental data very well. In addition, we have developed a simple and accurate “lumped” kinetics model to correctly fit the experimentally observed drug-release behavior. This lumped model gives simple analytic solutions for the mass and rate of drug release as a function of time, without limitations on time or mass of drug released, which represents an important step forward in the area of in vitro drug delivery when compared to the current state of the art in mathematical modeling. As an example, we applied the mechanism and model to the release data for acetazolamide from a recombinant polymer. Both materials were selected because of the need to develop a suitable ophthalmic formulation for the treatment of glaucoma. The in vitro release model proposed herein provides a valuable predictive tool for ensuring product performance and batch-to-batch reproducibility, thus paving the way for the development of further pharmaceutical devices.
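
The abstract does not reproduce the lumped model's closed-form expressions, so as a generic illustration of how an analytic release law can be fit to in vitro data, the sketch below fits a simple first-order release model M(t) = M∞(1 − e^(−kt)) with SciPy. The data, parameter values and the choice of model are hypothetical, not the authors' lumped kinetics model.

```python
import numpy as np
from scipy.optimize import curve_fit

def first_order_release(t, m_inf, k):
    """Cumulative mass released at time t under first-order kinetics."""
    return m_inf * (1.0 - np.exp(-k * t))

# Synthetic release data (illustrative numbers only)
t = np.linspace(0.0, 24.0, 13)          # hours
true_m_inf, true_k = 5.0, 0.35          # mg, 1/h
rng = np.random.default_rng(0)
m_obs = first_order_release(t, true_m_inf, true_k) + rng.normal(0.0, 0.05, t.size)

popt, _ = curve_fit(first_order_release, t, m_obs, p0=[1.0, 0.1])
m_inf_fit, k_fit = popt

# One advantage of an analytic model: the release *rate* follows directly,
# dM/dt = k * (m_inf - M), valid at any time without numerical differentiation.
rate_at_t0 = k_fit * m_inf_fit
```

Because the fitted expression is analytic, both the released mass and the release rate are available at any time point, which is the practical benefit the abstract attributes to a lumped model.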

  20. Fast and Accurate Data Extraction for Near Real-Time Registration of 3-D Ultrasound and Computed Tomography in Orthopedic Surgery.

    PubMed

    Brounstein, Anna; Hacihaliloglu, Ilker; Guy, Pierre; Hodgson, Antony; Abugharbieh, Rafeef

    2015-12-01

    Automatic, accurate and real-time registration is an important step in providing effective guidance and successful anatomic restoration in ultrasound (US)-based computer assisted orthopedic surgery. We propose a method in which local phase-based bone surfaces, extracted from intra-operative US data, are registered to pre-operatively segmented computed tomography data. Extracted bone surfaces are downsampled and reinforced with high curvature features. A novel hierarchical simplification algorithm is used to further optimize the point clouds. The final point clouds are represented as Gaussian mixture models and iteratively matched by minimizing the dissimilarity between them using an L2 metric. For 44 clinical data sets from 25 pelvic fracture patients and 49 phantom data sets, we report mean surface registration accuracies of 0.31 and 0.77 mm, respectively, with an average registration time of 1.41 s. Our results suggest the viability and potential of the chosen method for real-time intra-operative registration in orthopedic surgery.
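
As a sketch of the registration objective described above, the snippet below evaluates the closed-form L2 distance between two point clouds represented as equally weighted isotropic Gaussian mixtures, which is the quantity minimized during alignment. The bandwidth sigma and the uniform weighting are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def gmm_cross_term(A, B, sigma):
    """Integral of f*g over R^d for two GMMs with equal isotropic covariance
    sigma^2*I and uniform component weights, centred at the rows of A and B."""
    d = A.shape[1]
    diff = A[:, None, :] - B[None, :, :]          # pairwise mean differences
    sq_dist = np.sum(diff ** 2, axis=-1)
    norm = (4.0 * np.pi * sigma ** 2) ** (d / 2.0)
    return float(np.mean(np.exp(-sq_dist / (4.0 * sigma ** 2)))) / norm

def gmm_l2_distance(A, B, sigma=1.0):
    """Closed-form L2 distance between the mixtures induced by clouds A and B.
    Minimising this over a transform applied to B aligns the two clouds."""
    return (gmm_cross_term(A, A, sigma)
            - 2.0 * gmm_cross_term(A, B, sigma)
            + gmm_cross_term(B, B, sigma))
```

Identical clouds give a distance of zero, and the distance grows as one cloud is displaced, which is what makes the measure usable as a registration objective.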

  2. A Fast, Accurate and Easy to Implement Method for Pose Recognition of an Intramedullary Nail using a Tracked C-arm

    NASA Astrophysics Data System (ADS)

    Esfandiari, H.; Amiri, S.; Lichti, D. D.; Anglin, C.

    2014-06-01

    A C-arm is a mobile X-ray device that is frequently used during orthopaedic surgeries. It consists of a semi-circular, arc-shaped arm that holds an X-ray transmitter at one end and an X-ray detector at the other. Intramedullary nail (IM nail) fixation is a popular orthopaedic surgery in which a metallic rod is placed into the patient's fractured bone (femur or tibia) and fixed using metal screws. The main challenge of IM-nail fixation surgery is to achieve the X-ray shot in which the distal holes of the IM nail appear as circles (the desired view) so that the surgeon can easily insert the screws. Although C-arm X-ray devices are routinely used in IM-nail fixation surgeries, the surgeons or radiation technologists (rad-techs) usually use them in a trial-and-error manner. This approach increases both radiation exposure and surgery time. In this study, we have designed and developed an IM-nail distal locking navigation technique that leads to more accurate and faster screw placement with a lower radiation dose and a minimum number of steps added to the operation, to make it more acceptable within the orthopaedic community. The specific purpose of this study was to develop and validate an automated technique for identifying the current pose of the IM nail relative to the C-arm. An accuracy assessment was performed to test the reliability of the navigation results. Translational accuracy was demonstrated to be better than 1 mm, roll and pitch rotations better than 2°, and yaw rotational accuracy better than 2-5° depending on the separation angle. Computation time was less than 3.5 seconds.

  3. A simple, fast and cheap non-SPE screening method for antibacterial residue analysis in milk and liver using liquid chromatography-tandem mass spectrometry.

    PubMed

    Martins, Magda Targa; Melo, Jéssica; Barreto, Fabiano; Hoff, Rodrigo Barcellos; Jank, Louise; Bittencourt, Michele Soares; Arsand, Juliana Bazzan; Schapoval, Elfrides Eva Scherman

    2014-11-01

    In routine laboratory work, screening methods for multiclass analysis can process a large number of samples in a short time. The main challenge is to develop a methodology that detects as many different classes of residues as possible, combined with speed and low cost. An efficient technique for the analysis of multiclass antibacterial residues (fluoroquinolones, tetracyclines, sulfonamides and trimethoprim) was developed based on a simple, environmentally friendly extraction for bovine milk, cattle liver and poultry liver. Acidified ethanol was used as the extracting solvent for milk samples. Liver samples were treated using EDTA-washed sand for cell disruption, with methanol:water and acidified acetonitrile as extracting solvents. A total of 24 antibacterial residues were detected and confirmed using liquid chromatography coupled to tandem mass spectrometry (LC-MS/MS), at levels of 10, 25 and 50% of the maximum residue limit (MRL). For liver samples, a metabolite (sulfaquinoxaline-OH) was also monitored. A validation procedure was conducted for screening purposes in accordance with European Union requirements (2002/657/EC). The detection capability (CCβ) false-compliant rate was less than 5% at the lowest level for each residue. Specificity and ruggedness were also assessed. Incurred and routine samples were analyzed and the method was successfully applied. The results proved that this method can be an important tool in routine analysis, since it is very fast and reliable.

  4. β-lactam antibiotics residues analysis in bovine milk by LC-ESI-MS/MS: a simple and fast liquid-liquid extraction method.

    PubMed

    Jank, L; Hoff, R B; Tarouco, P C; Barreto, F; Pizzolato, T M

    2012-01-01

    This study presents the development and validation of a simple method for the detection and quantification of six β-lactam antibiotics residues (ceftiofur, penicillin G, penicillin V, oxacillin, cloxacillin and dicloxacillin) in bovine milk using a fast liquid-liquid extraction (LLE) for sample preparation, followed by liquid chromatography-electrospray-tandem mass spectrometry (LC-MS/MS). LLE consisted of the addition of acetonitrile to the sample, followed by addition of sodium chloride, centrifugation and direct injection of an aliquot into the LC-MS/MS system. Separation was performed in a C(18) column, using acetonitrile and water, both with 0.1% of formic acid, as mobile phase. Method validation was performed according to the criteria of Commission Decision 2002/657/EC. Limits of detection ranged from 0.4 (penicillin G and penicillin V) to 10.0 ng ml(-1) (ceftiofur), and linearity was achieved. The decision limit (CCα), detection capability (CCβ), accuracy, inter- and intra-day repeatability of the method are reported.

  5. A fast and simple assay for busulfan in serum or plasma by liquid chromatography-tandem mass spectrometry using turbulent flow online extraction technology.

    PubMed

    Bunch, Dustin R; Heideloff, Courtney; Ritchie, James C; Wang, Sihe

    2010-12-01

    Busulfan is used in myeloablative preparation regimens for hematopoietic bone marrow transplantation. Due to its narrow therapeutic range, therapeutic drug monitoring of busulfan is recommended. In this study, a fast and simple method for measuring busulfan in serum or plasma by liquid chromatography-tandem mass spectrometry (LC-MS/MS) has been developed utilizing turbulent flow online extraction technology. Serum or plasma was mixed with acetonitrile containing d(8)-busulfan. After centrifugation, the supernatant was injected onto a turbulent flow preparatory column and then transferred to a C18 analytical column monitored by a tandem mass spectrometer set at positive electrospray ionization. The analytical cycle time was 4.0 min. The method was linear from 0.15 to 41.90 μmol/L with an accuracy of 87.9-103.0%. Inter- and intra-assay CVs across four concentration levels were 2.1-7.8%. No significant carryover or ion suppression was observed. No interference was observed from commercial control materials containing more than 100 compounds. Comparison with a well-established LC-MS/MS method using patient specimens (n = 45) showed a mean bias of 1.3%, with a Deming regression slope of 1.02, intercept of -0.02 μmol/L, and linear correlation coefficient of 0.9883. The LC-MS/MS method coupled with turbulent flow online sample-cleaning technology described here offers reliable busulfan quantitation in serum or plasma with minimal manual sample preparation and was fully validated for clinical use.
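
The method comparison above reports a Deming regression slope and intercept; for readers unfamiliar with it, the closed-form estimator can be sketched as below. The error-variance ratio of 1 (orthogonal regression) is an assumption for illustration; the abstract does not state which ratio was used.

```python
import numpy as np

def deming_regression(x, y, lam=1.0):
    """Deming regression of y on x, where lam is the ratio of the y- to
    x-measurement error variances (lam = 1 gives orthogonal regression).
    Returns (slope, intercept); assumes the data are not vertical (sxy != 0)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x.mean(), y.mean()
    sxx = np.mean((x - xm) ** 2)
    syy = np.mean((y - ym) ** 2)
    sxy = np.mean((x - xm) * (y - ym))
    slope = (syy - lam * sxx
             + np.sqrt((syy - lam * sxx) ** 2 + 4.0 * lam * sxy ** 2)) / (2.0 * sxy)
    return slope, ym - slope * xm
```

Unlike ordinary least squares, Deming regression allows measurement error in both methods, which is why it is the standard choice for method-comparison studies such as this one.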

  7. Simple, fast and selective detection of adenosine triphosphate at physiological pH using unmodified gold nanoparticles as colorimetric probes and metal ions as cross-linkers.

    PubMed

    Deng, Dehua; Xia, Ning; Li, Sujuan; Xu, Chunying; Sun, Ting; Pang, Huan; Liu, Lin

    2012-11-06

    We report a simple, fast and selective colorimetric assay of adenosine triphosphate (ATP) using unmodified gold nanoparticles (AuNPs) as probes and metal ions as cross-linkers. ATP can be assembled onto the surface of AuNPs through interaction between its electron-rich nitrogen atoms and the electron-deficient surface of the AuNPs. Accordingly, Cu2+ ions induce a change in the color and UV/Vis absorbance of the AuNPs by coordinating to the triphosphate groups and a ring nitrogen of ATP. A detection limit of 50 nM was achieved, which is comparable to or lower than that achievable by currently used electrochemical, spectroscopic or chromatographic methods. The theoretical simplicity and high selectivity reported herein demonstrate that AuNP-based colorimetric assays could be applied in a wide variety of fields by rationally designing the surface chemistry of the AuNPs. In addition, our results indicate that ATP-modified AuNPs are less stable in Cu2+-, Cd2+- or Zn2+-containing solutions due to the formation of the corresponding dimeric metal-ATP complexes.

  8. Validation of a simple and fast method to quantify in vitro mineralization with fluorescent probes used in molecular imaging of bone.

    PubMed

    Moester, Martiene J C; Schoeman, Monique A E; Oudshoorn, Ineke B; van Beusekom, Mara M; Mol, Isabel M; Kaijzel, Eric L; Löwik, Clemens W G M; de Rooij, Karien E

    2014-01-01

    Alizarin Red S staining is the standard method to indicate and quantify matrix mineralization during differentiation of osteoblast cultures. KS483 cells are multipotent mouse mesenchymal progenitor cells that can differentiate into chondrocytes, adipocytes and osteoblasts and are a well-characterized model for the study of bone formation. Matrix mineralization is the last step of differentiation of bone cells and is therefore a very important outcome measure in bone research. Fluorescently labelled calcium chelating agents, e.g. BoneTag and OsteoSense, are currently used for in vivo imaging of bone. The aim of the present study was to validate these probes for fast and simple detection and quantification of in vitro matrix mineralization by KS483 cells and thus enabling high-throughput screening experiments. KS483 cells were cultured under osteogenic conditions in the presence of compounds that either stimulate or inhibit osteoblast differentiation and thereby matrix mineralization. After 21 days of differentiation, fluorescence of stained cultures was quantified with a near-infrared imager and compared to Alizarin Red S quantification. Fluorescence of both probes closely correlated to Alizarin Red S staining in both inhibiting and stimulating conditions. In addition, both compounds displayed specificity for mineralized nodules. We therefore conclude that this method of quantification of bone mineralization using fluorescent compounds is a good alternative for the Alizarin Red S staining.

  9. Can simple tests performed in the primary care setting provide accurate and efficient diagnosis of benign prostatic hyperplasia? Rationale and design of the Diagnosis Improvement in Primary Care Trial.

    PubMed

    Carballido, J; Fourcade, R; Pagliarulo, A; Cricelli, C; Brenes, F; Pedromingo-Marino, A; Castro, R

    2009-08-01

    Effective treatment of benign prostatic hyperplasia (BPH) improves lower urinary tract symptoms (LUTS) and patient quality of life, and reduces the risk of complications arising from disease progression. However, treatment can only be initiated when men with BPH are identified by accurate diagnostic tests. Current evidence suggests that diagnostic procedures employed by primary care physicians vary widely across Europe. The expected increases in BPH prevalence accompanying the gradual aging of the population, coupled with greater use of medical therapy, mean that general practitioners (GPs) are likely to have an increasingly important role in managing the condition. The GP/primary care clinic is therefore an attractive target location for strategies designed to improve the accuracy of BPH diagnosis. The Diagnosis Improvement in Primary Care Trial (D-IMPACT) is a prospective, multicentre, epidemiological study that aims to identify the optimal subset of simple tests applied by GPs in the primary care setting to diagnose BPH in men who spontaneously report obstructive (voiding) and/or irritative (storage) LUTS. These tests comprise medical history, symptom assessment with the International Prostate Symptom Score questionnaire, urinalysis, measurement of serum levels of prostate-specific antigen and subjective GP diagnosis after completing all tests including digital rectal examination. GP diagnoses and all other tests will be compared with gold-standard diagnoses provided by specialist urologists following completion of additional diagnostic tests. D-IMPACT will establish the diagnostic performance using a non-subjective and reproducible algorithm. An adjusted and multivariate analysis of the results of D-IMPACT will allow identification of the most efficient combination of tests that facilitate accurate BPH diagnosis in the primary care setting. In addition, D-IMPACT will estimate the prevalence of BPH in patients who present spontaneously to GPs with LUTS.

  10. [Ultrafiltration as a fast and simple method for determination of free and protein bound prilocaine concentration. Clinical study following high-dose plexus anesthesia].

    PubMed

    Bachmann-Mennenga, B; Biscoping, J; Schürg, R; Sinning, E; Hempelmann, G

    1991-05-01

    Ultrafiltration as a Fast and Simple Method to Separate Free and Protein-Bound Concentrations of Local Anesthetics/Pharmacokinetic studies following high-dose anesthesia of the axillary plexus. Like many other drugs, amide-type local anesthetics are protein bound in plasma. The extent of binding varies between local anesthetics. The free, non-protein-bound fraction of these drugs is mainly responsible for cardiovascular and central-nervous side effects. If high doses are necessary for regional anesthetic procedures, it seems reasonable to determine the pharmacologically active, non-protein-bound fraction in addition to the total concentration of the local anesthetic drug. Analysis of protein binding was performed using an ultrafiltration method, which is discussed in this paper. Total (HPLC) and unbound plasma levels (combination of ultrafiltration and HPLC) of the local anesthetic drug in central venous blood were studied in 20 healthy orthopedic patients undergoing plastic surgery of the upper limb (elbow, forearm, hand), over a period of 90 min, when performing axillary plexus block with 30 ml prilocaine (CAS 721-50-6) 2% (= 600 mg). Separation of the local anesthetic fractions was achieved using the ultrafiltration system MPS-1, equipped with a YMT membrane. These membranes have a narrow pore size, retaining molecules larger than 30,000 Dalton. Ultrafiltration was accomplished by subjecting 1.2 ml of plasma to centrifugation at 2000 x g for 60 min at 30 degrees C using a clinical centrifuge equipped with a 35-degree angle head rotor. The plasma samples were adjusted to physiological pH (7.40) with a sodium-potassium-phosphate buffer. The integrity of the membrane used was controlled by a micromethod for protein estimation (sensitivity 10 micrograms/ml). (ABSTRACT TRUNCATED AT 250 WORDS)

  11. A simple, fast, and sensitive assay for the detection of DNA, thrombin, and adenosine triphosphate based on Dual-Hairpin DNA structure.

    PubMed

    He, Xiuping; Wang, Guangfeng; Xu, Gang; Zhu, Yanhong; Chen, Ling; Zhang, Xiaojun

    2013-11-19

    In the present study, a simple, fast and highly sensitive assay for the detection of DNA, thrombin and adenosine triphosphate (ATP), based on a multifunctional dual-hairpin DNA structure, was demonstrated. A DNA sequence labeled with methylene blue (MB), designed as single-stranded DNA (ssDNA) matching the target DNA, thrombin aptamer or ATP aptamer, hybridized to the adjunct probe and formed the dual-hairpin structure on the electrode. Through hybridization of the adjunct probe with the stem region of the hairpin-like capture probe, a dual hairpin with outer and inner hairpins was formed. Upon conjugation of the target with the adjunct probe in the outer hairpin, the adjunct probe was released from the dual-hairpin structure. Because the MB-labeled adjunct probe supports efficient electron transfer only when it is close to the electrode, its attachment near, or release far from, the electrode produces a detectable change in the electrochemical response. Based on this dual-hairpin platform, detection limits of 50 nM, 3 pM and 30 nM were achieved for DNA, thrombin and ATP, respectively. Another highlight of this biosensor is its regenerability and stability, owing to the merits of its structure; the pattern also demonstrated excellent reproducibility. Given its ease of use, simplicity of design and straightforward operation, as well as its regenerability and stability, the proposed approach may serve as a design template for the preparation of other molecular sensors.

  12. Accurate monotone cubic interpolation

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1991-01-01

    Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema where accuracy degenerates to second-order due to the monotonicity constraint. Algorithms for piecewise cubic interpolants, which preserve monotonicity as well as uniform third and fourth-order accuracy are presented. The gain of accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.
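
To see the monotonicity issue concretely, the sketch below contrasts a monotonicity-preserving cubic interpolant (SciPy's PchipInterpolator, which implements the classical Fritsch-Carlson scheme rather than Huynh's higher-order variant) with an unconstrained cubic spline on step-like monotone data:

```python
import numpy as np
from scipy.interpolate import CubicSpline, PchipInterpolator

# Monotone, step-like data on which an unconstrained cubic spline oscillates
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])

pchip = PchipInterpolator(x, y)   # monotonicity-preserving piecewise cubic
spline = CubicSpline(x, y)        # classic C2 cubic spline, no shape constraint

xs = np.linspace(0.0, 5.0, 501)
pchip_vals = pchip(xs)
spline_vals = spline(xs)

# PCHIP stays within the data range and is non-decreasing; the plain spline
# overshoots or undershoots near the step.
pchip_monotone = bool(np.all(np.diff(pchip_vals) >= -1e-12))
spline_rings = bool(spline_vals.min() < -1e-9 or spline_vals.max() > 1.0 + 1e-9)
```

The price of the monotonicity constraint is exactly the accuracy loss the abstract describes: near strict local extrema the Fritsch-Carlson derivative limiting flattens the interpolant, degrading it to second order, which is what Huynh's relaxed geometric framework recovers.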

  13. Highly accurate fast lung CT registration

    NASA Astrophysics Data System (ADS)

    Rühaak, Jan; Heldmann, Stefan; Kipshagen, Till; Fischer, Bernd

    2013-03-01

    Lung registration in thoracic CT scans has received much attention in the medical imaging community. Possible applications range from follow-up analysis, motion correction for radiation therapy, monitoring of air flow and pulmonary function to lung elasticity analysis. In a clinical environment, runtime is always a critical issue, ruling out quite a few excellent registration approaches. In this paper, a highly efficient variational lung registration method based on minimizing the normalized gradient fields distance measure with curvature regularization is presented. The method ensures diffeomorphic deformations by an additional volume regularization. Supplemental user knowledge, like a segmentation of the lungs, may be incorporated as well. The accuracy of our method was evaluated on 40 test cases from clinical routine. In the EMPIRE10 lung registration challenge, our scheme ranks third, with respect to various validation criteria, out of 28 algorithms with an average landmark distance of 0.72 mm. The average runtime is about 1:50 min on a standard PC, making it by far the fastest approach of the top-ranking algorithms. Additionally, the ten publicly available DIR-Lab inhale-exhale scan pairs were registered to subvoxel accuracy at computation times of only 20 seconds. Our method thus combines very attractive runtimes with state-of-the-art accuracy in a unique way.
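
As a sketch of the normalized gradient fields (NGF) distance minimized above, assuming the common formulation with an edge parameter ε (the exact discretization used in the paper may differ):

```python
import numpy as np

def ngf_distance(R, T, eps=1e-2):
    """Normalized gradient fields distance D(R, T) = sum(1 - ngf^2), where
    ngf is the eps-regularized cosine between the image gradients. The edge
    parameter eps controls how strong a gradient must be to count as an edge."""
    gRy, gRx = np.gradient(R.astype(float))
    gTy, gTx = np.gradient(T.astype(float))
    dot = gRx * gTx + gRy * gTy + eps ** 2
    nR = np.sqrt(gRx ** 2 + gRy ** 2 + eps ** 2)
    nT = np.sqrt(gTx ** 2 + gTy ** 2 + eps ** 2)
    ngf = dot / (nR * nT)
    return float(np.sum(1.0 - ngf ** 2))
```

Because the measure compares gradient *directions* rather than intensities, it is robust to the intensity differences between modalities and acquisitions; perfectly aligned images give a distance of zero.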

  14. Simple Machines Made Simple.

    ERIC Educational Resources Information Center

    St. Andre, Ralph E.

    Simple machines have become a lost point of study in elementary schools as teachers continue to have more material to cover. This manual provides hands-on, cooperative learning activities for grades three through eight concerning the six simple machines: wheel and axle, inclined plane, screw, pulley, wedge, and lever. Most activities can be…

  15. Improved Detection System Description and New Method for Accurate Calibration of Micro-Channel Plate Based Instruments and Its Use in the Fast Plasma Investigation on NASA's Magnetospheric MultiScale Mission

    NASA Technical Reports Server (NTRS)

    Gliese, U.; Avanov, L. A.; Barrie, A. C.; Kujawski, J. T.; Mariano, A. J.; Tucker, C. J.; Chornay, D. J.; Cao, N. T.; Gershman, D. J.; Dorelli, J. C.; Zeuch, M. A.; Pollock, C. J.; Jacques, A. D.

    2015-01-01

    The Fast Plasma Investigation (FPI) on NASA's Magnetospheric MultiScale (MMS) mission employs 16 Dual Electron Spectrometers (DESs) and 16 Dual Ion Spectrometers (DISs), with 4 of each type on each of 4 spacecraft, to enable fast (30 ms for electrons; 150 ms for ions) and spatially differentiated measurements of the full 3D particle velocity distributions. This approach presents a new and challenging aspect to the calibration and operation of these instruments on the ground and in flight. The response uniformity, the reliability of their calibration and the approach to handling any temporal evolution of these calibrated characteristics all assume enhanced importance in this application, where we attempt to understand the meaning of particle distributions within the ion and electron diffusion regions of magnetically reconnecting plasmas. Traditionally, the micro-channel plate (MCP) based detection systems for electrostatic particle spectrometers have been calibrated using the plateau curve technique. In this, a fixed detection threshold is set. The detection system count rate is then measured as a function of MCP voltage to determine the MCP voltage that ensures the count rate has reached a constant value independent of further variation in the MCP voltage. This is achieved when most of the MCP pulse height distribution (PHD) is located at higher values (larger pulses) than the detection system discrimination threshold. This method is adequate in single-channel detection systems and in multi-channel detection systems with very low crosstalk between channels. However, in dense multi-channel systems, it can be inadequate. Furthermore, it fails to fully describe the behavior of the detection system and individually characterize each of its fundamental parameters. To improve this situation, we have developed a detailed phenomenological description of the detection system, its behavior and its signal, crosstalk and noise sources. Based on this, we have devised a new detection

  16. A simple method to prevent hard X-ray-induced preheating effects inside the cone tip in indirect-drive fast ignition implosions

    NASA Astrophysics Data System (ADS)

    Liu, Dongxiao; Shan, Lianqiang; Zhou, Weimin; Wu, Yuchi; Zhu, Bin; Peng, Xiaoshi; Xu, Tao; Wang, Feng; Zhang, Feng; Bi, Bi; Zhang, Bo; Zhang, Zhimeng; Shui, Min; He, Yingling; Yang, Zhiwen; Chen, Tao; Chen, Li; Chen, Ming; Yang, Yimeng; Yuan, Yongteng; Wang, Peng; Gu, Yuqiu; Zhang, Baohan

    2016-06-01

    During fast-ignition implosions, preheating inside the cone tip caused by hard X-rays can strongly affect the generation and transport of hot electrons in the cone. Although indirect-drive implosions have higher implosion symmetry, they cause stronger preheating effects than direct-drive implosions. To control the preheating of the cone tip, we propose the use of indirect-drive fast-ignition targets with thicker tips. Experiments carried out at the ShenGuang-III prototype laser facility confirmed that thicker tips are effective for controlling preheating. Moreover, these results were consistent with those of 1D radiation hydrodynamic simulations.

  17. Serial measurement of hFABP and high-sensitivity troponin I post-PCI in STEMI: how fast and accurate can myocardial infarct size and no-reflow be predicted?

    PubMed

    Uitterdijk, André; Sneep, Stefan; van Duin, Richard W B; Krabbendam-Peters, Ilona; Gorsse-Bakker, Charlotte; Duncker, Dirk J; van der Giessen, Willem J; van Beusekom, Heleen M M

    2013-10-01

    The objective of this study was to compare heart-specific fatty acid binding protein (hFABP) and high-sensitivity troponin I (hsTnI) via serial measurements to identify early time points to accurately quantify infarct size and no-reflow in a preclinical swine model of ST-elevated myocardial infarction (STEMI). Myocardial necrosis, usually confirmed by hsTnI or TnT, takes several hours of ischemia before plasma levels rise in the absence of reperfusion. We evaluated the fast marker hFABP compared with hsTnI to estimate infarct size and no-reflow upon reperfused (2 h occlusion) and nonreperfused (8 h occlusion) STEMI in swine. In STEMI (n = 4) and STEMI + reperfusion (n = 8) induced in swine, serial blood samples were taken for hFABP and hsTnI and compared with triphenyl tetrazolium chloride and thioflavin-S staining for infarct size and no-reflow at the time of euthanasia. hFABP increased faster than hsTnI upon occlusion (82 ± 29 vs. 180 ± 73 min, P < 0.05) and increased immediately upon reperfusion while hsTnI release was delayed 16 ± 3 min (P < 0.05). Peak hFABP and hsTnI reperfusion values were reached at 30 ± 5 and 139 ± 21 min, respectively (P < 0.05). Infarct size (containing 84 ± 0.6% no-reflow) correlated well with area under the curve for hFABP (r(2) = 0.92) but less for hsTnI (r(2) = 0.53). At 50 and 60 min reperfusion, hFABP correlated best with infarct size (r(2) = 0.94 and 0.93) and no-reflow (r(2) = 0.96 and 0.94) and showed high sensitivity for myocardial necrosis (2.3 ± 0.6 and 0.4 ± 0.6 g). hFABP rises faster and correlates better with infarct size and no-reflow than hsTnI in STEMI + reperfusion when measured early after reperfusion. The highest sensitivity detecting myocardial necrosis, 0.4 ± 0.6 g at 60 min postreperfusion, provides an accurate and early measurement of infarct size and no-reflow.
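
Infarct size was correlated with the area under the biomarker release curve (AUC); for serial samples the standard estimate is the trapezoidal rule. A minimal sketch follows, with hypothetical sample times and concentrations (the numbers are illustrative, not data from the study):

```python
import numpy as np

def auc_trapezoid(t, c):
    """Area under a biomarker concentration-time curve (trapezoidal rule)."""
    t = np.asarray(t, dtype=float)
    c = np.asarray(c, dtype=float)
    return float(np.sum(0.5 * (c[1:] + c[:-1]) * np.diff(t)))

# Hypothetical serial hFABP samples: time (min) and concentration (ng/ml)
t = np.array([0.0, 10.0, 20.0, 30.0, 45.0, 60.0, 90.0])
c = np.array([2.0, 35.0, 80.0, 120.0, 95.0, 60.0, 20.0])
auc = auc_trapezoid(t, c)
```

Non-uniform sampling intervals, as in the serial blood draws described above, are handled naturally because each trapezoid uses its own time step.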

  18. Accurate measurement of unsteady state fluid temperature

    NASA Astrophysics Data System (ADS)

    Jaremkiewicz, Magdalena

    2016-07-01

    In this paper, two accurate methods for determining transient fluid temperature are presented. Measurements were conducted for boiling water, since its temperature is known. Initially the thermometers are at ambient temperature; they are then suddenly immersed in the saturated water. The measurements were carried out with two thermometers of different construction but with the same housing outer diameter of 15 mm. One of them is a K-type industrial thermometer that is widely available commercially. The temperature indicated by this thermometer was corrected by treating it as a first- or second-order inertia device. A new thermometer design was also proposed and used to measure the temperature of boiling water. Its characteristic feature is a cylinder-shaped housing with a sheathed thermocouple located at its center. The fluid temperature was determined from measurements taken in the axis of the solid cylindrical element (housing) using the inverse space marching method. Measurements of the transient temperature of air flowing through a wind tunnel were also carried out with the same thermometers. The proposed technique provides more accurate results than industrial thermometers combined with a simple correction based on a first- or second-order inertia model. The comparison demonstrated that the new thermometer yields the fluid temperature much faster and with higher accuracy than the industrial thermometer. Accurate measurement of fast-changing fluid temperatures is possible thanks to the low-inertia thermometer and the fast space marching method applied to solve the inverse heat conduction problem.
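
The simple correction mentioned above treats the sensor as a first-order inertia device, for which the fluid temperature can be recovered as T_fluid = T_m + τ·dT_m/dt. A minimal sketch, assuming the time constant τ is known from calibration (the paper's inverse space marching method for the new thermometer is more involved and is not reproduced here):

```python
import math

def correct_first_order(t, temps, tau):
    """Recover fluid temperature from a sluggish sensor modeled as a
    first-order inertia device: T_fluid = T_m + tau * dT_m/dt.
    Central differences inside the series, one-sided at the ends."""
    n = len(temps)
    out = []
    for i in range(n):
        if i == 0:
            d = (temps[1] - temps[0]) / (t[1] - t[0])
        elif i == n - 1:
            d = (temps[-1] - temps[-2]) / (t[-1] - t[-2])
        else:
            d = (temps[i + 1] - temps[i - 1]) / (t[i + 1] - t[i - 1])
        out.append(temps[i] + tau * d)
    return out

# Synthetic check: a tau = 5 s sensor suddenly immersed in a 100 °C bath
# reads T_m(t) = 100*(1 - exp(-t/tau)); the correction recovers ~100 °C.
tau = 5.0
ts = [0.1 * i for i in range(101)]
readings = [100.0 * (1 - math.exp(-x / tau)) for x in ts]
corrected = correct_first_order(ts, readings, tau)
```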

  19. Accurate fluorescence quantum yield determination by fluorescence correlation spectroscopy.

    PubMed

    Kempe, Daryan; Schöne, Antonie; Fitter, Jörg; Gabba, Matteo

    2015-04-01

    Here, we present a comparative method for the accurate determination of fluorescence quantum yields (QYs) by fluorescence correlation spectroscopy. By exploiting the high sensitivity of single-molecule spectroscopy, we obtain the QYs of samples in the microliter range and at (sub)nanomolar concentrations. Additionally, in combination with fluorescence lifetime measurements, our method allows the quantification of both static and collisional quenching constants. Thus, besides being simple and fast, our method opens up the possibility to photophysically characterize labeled biomolecules under application-relevant conditions and with low sample consumption, which is often important in single-molecule studies.

  20. Simple prostatectomy

    MedlinePlus

    Prostatectomy - simple; Suprapubic prostatectomy; Retropubic simple prostatectomy; Open prostatectomy; Millen procedure ... prostate and what caused your prostate to grow. Open simple prostatectomy is often used when the prostate ...

  1. Fast and simple procedure for fractionation of zinc in soil using an ultrasound probe and FAAS detection. Validation of the analytical method and evaluation of the uncertainty budget.

    PubMed

    Leśniewska, Barbara; Kisielewska, Katarzyna; Wiater, Józefa; Godlewska-Żyłkiewicz, Beata

    2016-01-01

    A new fast method for the determination of mobile zinc fractions in soil is proposed in this work. The three-stage modified BCR procedure used for fractionation of zinc in soil was accelerated by using ultrasound. The working parameters of the ultrasound probe, the power and the time of sonication, were optimized so that the analyte content in soil extracts obtained by ultrasound-assisted sequential extraction (USE) was consistent with that obtained by the conventional modified Community Bureau of Reference (BCR) procedure. The zinc content in the extracts was determined by flame atomic absorption spectrometry. The developed USE procedure shortened the total extraction time from 48 h to 27 min in comparison with the conventional modified BCR procedure. The method was fully validated and the uncertainty budget was evaluated. The trueness and reproducibility of the developed method were confirmed by analysis of the certified reference material of lake sediment BCR-701. The applicability of the procedure for fast, low-cost and reliable determination of the mobile zinc fraction in soil, useful for assessing anthropogenic impacts on natural resources and for environmental monitoring, was demonstrated by analysis of different types of soil collected from Podlaskie Province (Poland). PMID:26666658

  3. A simple, fast, and inexpensive CTAB-PVP-silica based method for genomic DNA isolation from single, small insect larvae and pupae.

    PubMed

    Huanca-Mamani, W; Rivera-Cabello, D; Maita-Maita, J

    2015-01-01

    In this study, we report a modified CTAB-PVP method combined with silicon dioxide (silica) treatment for the extraction of high quality genomic DNA from a single larva or pupa. This method efficiently obtains DNA from small specimens, which is difficult and challenging because of the small amount of starting tissue. Maceration with liquid nitrogen, phenol treatment, and the ethanol precipitation step are eliminated using this methodology. The A260/A280 absorbance ratios of the isolated DNA were approximately 1.8, suggesting that the DNA is pure and can be used for further molecular analysis. The quality of the isolated DNA permits molecular applications and represents a fast, cheap, and effective alternative method for laboratories with low budgets.

  4. A simple and fast method for chlorsulfuron and metsulfuron methyl determination in water samples using multiwalled carbon nanotubes (MWCNTs) and capillary electrophoresis.

    PubMed

    Springer, Valeria H; Lista, Adriana G

    2010-11-15

    A new method to determine metsulfuron methyl (MSM) and chlorsulfuron (CS) in different water samples was developed. It consists of a solid phase extraction (SPE) procedure using multiwalled carbon nanotubes (MWCNTs) as the sorbent material, combined with capillary zone electrophoretic determination. To carry out the pre-concentration step, a simple flow injection system was developed and optimized: 250 μL of an aqueous solution containing 50% (v/v) methanol and 2% (v/v) acetonitrile as eluent, 10 mL of sample and a flow rate of 1.15 mL min(-1) were selected. The CE variables were also optimized. Rapid determination and good resolution of the two herbicides were obtained within 9 min using a simple electrophoretic buffer (50 mmol L(-1) sodium tetraborate with 3% methanol, pH 9.0). Under the optimum conditions, the calibration curves were linear between 0.5 and 6 μg L(-1) for MSM and CS, with R(2)=0.995 and 0.997, respectively. The repeatability of the proposed method, expressed as relative standard deviation (RSD), varied between 4.1% and 5.4% (n=10), and the detection limits for MSM and CS were 0.40 and 0.36 μg L(-1), respectively. Good results were achieved when the proposed method was applied to spiked real water samples: the recovery percentages of the two analytes were in the range 86-108%. PMID:21035652
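
The figures of merit reported in records like this one (calibration slope, R², detection limit) follow standard analytical practice. A generic sketch of how a linear calibration curve and a 3.3σ/slope detection limit are computed; the data points and blank standard deviation below are invented, not the paper's:

```python
def fit_line(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Invented calibration points: concentration (ug/L) vs. detector response.
conc = [0.5, 1.0, 2.0, 4.0, 6.0]
resp = [120.0, 240.0, 470.0, 960.0, 1430.0]
slope, intercept = fit_line(conc, resp)

sd_blank = 14.0               # SD of the blank signal (assumed)
lod = 3.3 * sd_blank / slope  # common 3.3*sigma/slope detection-limit estimate
```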

  5. Using simple molecular orbital calculations to predict disease: fast DFT methods applied to enzymes implicated in PKU, Parkinson's disease and Obsessive Compulsive Disorder

    NASA Astrophysics Data System (ADS)

    Hofto, Laura; Hofto, Meghan; Cross, Jessica; Cafiero, Mauricio

    2007-09-01

    Many diseases can be traced to point mutations in the DNA coding for specific enzymes. These point mutations result in the change of one amino acid residue in the enzyme. We have developed a model using simple molecular orbital calculations which can be used to quantitatively determine the change in interaction between the enzyme's active site and necessary ligands upon mutation. We have applied this model to three hydroxylase proteins: phenylalanine hydroxylase, tyrosine hydroxylase, and tryptophan hydroxylase, and we have obtained excellent correlation between our results and observed disease symptoms. Furthermore, we are able to use this agreement as a baseline to screen other mutations which may also cause onset of disease symptoms. Our focus is on systems where the binding is due largely to dispersion, which is much more difficult to model inexpensively than pure electrostatic interactions. Our calculations are run in parallel on a sixteen processor cluster of 64-bit Athlon processors.

  6. Fast quantitative analysis of boric acid by gas chromatography-mass spectrometry coupled with a simple and selective derivatization reaction using triethanolamine.

    PubMed

    Zeng, Li-Min; Wang, Hao-Yang; Guo, Yin-Long

    2010-03-01

    A fast, selective, and sensitive GC-MS method has been developed and validated for the determination of boric acid in drinking water by derivatization with triethanolamine. This analytical strategy quantitatively converts the inorganic, nonvolatile boric acid B(OH)(3) present in drinking water into the volatile triethanolamine borate B(OCH(2)CH(2))(3)N, which facilitates the GC measurement. The SIM mode was applied in the analysis and showed high accuracy, specificity, and reproducibility, as well as effectively reducing the matrix effect. The calibration curve was linear from 0.01 μg/mL to 10.0 μg/mL with a satisfactory correlation coefficient of 0.9988. The limit of detection for boric acid was 0.04 μg/L. The method was then applied to determine the amount of boric acid in bottled drinking water, and the results were in accordance with the reported concentration values. This study offers a perspective on the utility of GC-MS as an alternative quantitative tool for the detection of B(OH)(3), and even for the detection of boron in various other samples after digesting the boron compounds to boric acid.

  7. Development of a colloidal gold immunochromatographic strip assay for simple and fast detection of human α-lactalbumin in genetically modified cow milk.

    PubMed

    Tao, Chenyu; Zhang, Qingde; Feng, Na; Shi, Deshi; Liu, Bang

    2016-03-01

    The qualitative and quantitative declaration of food ingredients is important to consumers, especially for genetically modified food as it experiences a rapid increase in sales. In this study, we designed an accurate and rapid detection system using colloidal gold immunochromatographic strip assay (GICA) methods to detect genetically modified cow milk. First, we prepared 2 monoclonal antibodies for human α-lactalbumin (α-LA) and measured their antibody titers; the one with the higher titer was used for further experiments. Then, we found the optimal pH value and protein amount of GICA for detection of pure milk samples. The developed strips successfully detected genetically modified cow milk and non-modified cow milk. To determine the sensitivity of GICA, a quantitative ELISA system was used to determine the exact amount of α-LA, and then genetically modified milk was diluted at different rates to test the sensitivity of GICA; the sensitivity was 10 μg/mL. Our results demonstrated that the applied method was effective to detect human α-LA in cow milk.

  8. Fast, simple and efficient salting-out assisted liquid-liquid extraction of naringenin from fruit juice samples prior to their enantioselective determination by liquid chromatography.

    PubMed

    Magiera, Sylwia; Kwietniowska, Ewelina

    2016-11-15

    In this study, an easy, simple and efficient method for the determination of naringenin enantiomers in fruit juices after salting-out-assisted liquid-liquid extraction (SALLE) and high-performance liquid chromatography (HPLC) with diode-array detection (DAD) was developed. The sample treatment is based on the use of water-miscible acetonitrile as the extractant and acetonitrile phase separation under high-salt conditions. After extraction, juice samples were incubated with hydrochloric acid in order to achieve hydrolysis of naringin to naringenin. The hydrolysis parameters were optimized by using a half-fraction factorial central composite design (CCD). After sample preparation, chromatographic separation was obtained on a Chiralcel® OJ-RH column using a mobile phase consisting of 10 mM aqueous ammonium acetate:methanol:acetonitrile (50:30:20; v/v/v) with detection at 288 nm. The average recovery of the analyzed compounds ranged from 85.6 to 97.1%. The proposed method was satisfactorily used for the determination of naringenin enantiomers in various fruit juice samples.

  10. Simple and Fast Continuous Estimation Method of Respiratory Frequency During Sleep using the Number of Extreme Points of Heart Rate Time Series

    NASA Astrophysics Data System (ADS)

    Yoshida, Yutaka; Yokoyama, Kiyoko; Ishii, Naohiro

    It has been reported that the frequency component of approximately 0.25 Hz in the heart rate time series (respiratory sinus arrhythmia, RSA) corresponds to the respiratory frequency. In this paper, we propose a method for continuously estimating the respiratory frequency during sleep, in real time, from the number of extreme points of the heart rate time series. The equation used by the method is very simple, and the frequency can be calculated continuously with a window width of about 18 beats. To evaluate the accuracy of the proposed method, the RSA frequency was calculated from heart rate time series recorded during supine rest. The minimum error rate, about 13.8%, was observed when the RSA had a time lag of about 11 s. When the RSA frequency time series was estimated during sleep, it varied regularly during non-REM sleep and irregularly during REM sleep. This is consistent with previous reports on respiratory variability during sleep. We therefore consider that the proposed method can be applied to respiratory monitoring systems during sleep.
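
The abstract's idea, counting extreme points of the heart-rate series over a window of about 18 beats, can be sketched as follows. The implementation details are assumed, not taken from the paper: each respiratory cycle contributes one maximum and one minimum to the RSA modulation, so the frequency is roughly half the extrema count divided by the window duration.

```python
import math

def resp_freq_from_extrema(beat_times, hr):
    """Estimate respiratory frequency (Hz) from a heart-rate series by
    counting extreme points in the window: each respiratory cycle leaves
    one maximum and one minimum in the RSA modulation, so
    f ~ (extrema / 2) / window duration. Details are assumptions."""
    extrema = sum(1 for i in range(1, len(hr) - 1)
                  if (hr[i] - hr[i - 1]) * (hr[i + 1] - hr[i]) < 0)
    return (extrema / 2) / (beat_times[-1] - beat_times[0])

# Synthetic RSA: heart rate modulated at 0.25 Hz, one beat per second.
t = list(range(19))  # an ~18-beat window, as in the paper
hr = [60 + math.sin(math.pi * x / 2) for x in t]
f = resp_freq_from_extrema(t, hr)  # recovers ~0.25 Hz
```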

  11. Simple and fast electrochemical detection of sequence-specific DNA via click chemistry-mediated labeling of hairpin DNA probes with ethynylferrocene.

    PubMed

    Hu, Qiong; Deng, Xianbao; Kong, Jinming; Dong, Yuanyuan; Liu, Qianrui; Zhang, Xueji

    2015-06-21

    A universal and straightforward electrochemical biosensing strategy for the detection and identification of sequence-specific DNA via click chemistry-mediated labeling of hairpin DNA probes (hairpins) with ethynylferrocene was reported. In the target-unbound form, the immobilized hairpins were kept in the folded stem-loop configuration with their azido terminals held in close proximity to the electrode surface, making them difficult to label with ethynylferrocene due to the considerable steric hindrance of the densely packed hairpins. Upon hybridization, they were unfolded and underwent a large conformational change, thus enabling the azido terminals to become available for subsequent conjugation with ethynylferrocene via the Cu(I)-catalyzed azide-alkyne cycloaddition (CuAAC). The quantitatively labeled ethynylferrocene could then be exploited as an electroactive probe to monitor DNA hybridization. As the unfolded hairpins were labeled in a stoichiometric ratio of 1:1, electrochemical measurement based on differential pulse voltammetry enabled reliable quantification of sequence-specific DNA. Under optimal conditions, the strategy could detect target single-stranded DNA (ssDNA) down to 0.296 pM with a good linear response over the range from 1 pM to 1 nM, and had excellent specificity in the genotyping of single-nucleotide polymorphisms. It also exhibited good detection reliability in serum samples and required no complicated protocols. More importantly, the simplicity of this strategy, together with its compatibility with microfluidic chips, gives it great potential in clinical applications, where simple procedures are generally preferred.

  12. FAST: FAST Analysis of Sequences Toolbox

    PubMed Central

    Lawrence, Travis J.; Kauffman, Kyle T.; Amrine, Katherine C. H.; Carper, Dana L.; Lee, Raymond S.; Becich, Peter J.; Canales, Claudia J.; Ardell, David H.

    2015-01-01

    FAST (FAST Analysis of Sequences Toolbox) provides simple, powerful open source command-line tools to filter, transform, annotate and analyze biological sequence data. Modeled after the GNU (GNU's Not Unix) Textutils such as grep, cut, and tr, FAST tools such as fasgrep, fascut, and fastr make it easy to rapidly prototype expressive bioinformatic workflows in a compact and generic command vocabulary. Compact combinatorial encoding of data workflows with FAST commands can simplify the documentation and reproducibility of bioinformatic protocols, supporting better transparency in biological data science. Interface self-consistency and conformity with conventions of GNU, Matlab, Perl, BioPerl, R, and GenBank help make FAST easy and rewarding to learn. FAST automates numerical, taxonomic, and text-based sorting, selection and transformation of sequence records and alignment sites based on content, index ranges, descriptive tags, annotated features, and in-line calculated analytics, including composition and codon usage. Automated content- and feature-based extraction of sites and support for molecular population genetic statistics make FAST useful for molecular evolutionary analysis. FAST is portable, easy to install and secure thanks to the relative maturity of its Perl and BioPerl foundations, with stable releases posted to CPAN. Development as well as a publicly accessible Cookbook and Wiki are available on the FAST GitHub repository at https://github.com/tlawrence3/FAST. The default data exchange format in FAST is Multi-FastA (specifically, a restriction of BioPerl FastA format). Sanger and Illumina 1.8+ FastQ formatted files are also supported. FAST makes it easier for non-programmer biologists to interactively investigate and control biological data at the speed of thought. PMID:26042145

  14. Determination of aminoglycoside residues in milk and muscle based on a simple and fast extraction procedure followed by liquid chromatography coupled to tandem mass spectrometry and time of flight mass spectrometry.

    PubMed

    Arsand, Juliana Bazzan; Jank, Louíse; Martins, Magda Targa; Hoff, Rodrigo Barcellos; Barreto, Fabiano; Pizzolato, Tânia Mara; Sirtori, Carla

    2016-07-01

    Antibiotics are widely used in veterinary medicine, mainly for the treatment and prevention of diseases. The aminoglycosides are one of the antibiotic classes that have been extensively employed in animal husbandry for the treatment of bacterial infections, but also for growth promotion. The European Union has issued strict Maximum Residue Levels (MRLs) for aminoglycosides in several products of animal origin, including bovine milk and bovine, swine and poultry muscle. This paper describes a fast and simple analytical method for the determination of ten aminoglycosides (spectinomycin, tobramycin, gentamicin, kanamycin, hygromycin, apramycin, streptomycin, dihydrostreptomycin, amikacin and neomycin) in bovine milk and in bovine, swine and poultry muscle. For sample preparation, an extraction method was developed using trichloroacetic acid, with clean-up by low-temperature precipitation and C18 bulk sorbent. Liquid chromatography coupled to tandem mass spectrometry (LC-MS/MS) was used for quantitative analysis, and liquid chromatography-quadrupole-time of flight-mass spectrometry (LC-QTOF-MS) was used for screening purposes. Both methods were validated according to European Union Commission Decision 2002/657/EC. Good performance characteristics were obtained for recovery, precision, calibration curves, specificity, decision limits (CCα) and detection capabilities (CCβ) in all matrices evaluated. The detection limits (LOD) and quantification limits (LOQ) ranged from 5 to 100 ng g(-1) and from 12.5 to 250 ng g(-1), respectively. Good linearity (r above 0.99) was achieved over concentrations ranging from 0.0 to 2.0×MRL. Recoveries ranged from 36.8% to 98.0% and the coefficients of variation from 0.9% to 20.2%; all calibration curves were prepared in the respective matrices in order to minimize matrix effects. The CCβ values obtained with the qualitative method were between 25 and 250 ng g(-1). The proposed method proved to be simple, easy and adequate for high-throughput analysis of a large number of samples.

  15. An accurate and simple method for measurement of paw edema.

    PubMed

    Fereidoni, M; Ahmadiani, A; Semnanian, S; Javan, M

    2000-01-01

    Several methods for measuring inflammation are available that rely on parameters that change during inflammation; the most commonly used estimate the volume of edema formed. In this study, we present a novel method for measuring the volume of pathologically or artificially induced edema. In this model, a liquid column is placed on a balance. When an object is immersed, the liquid applies a force F that tends to expel it. Physically, F equals the weight (W) of the volume of liquid displaced by the part of the object inserted into the liquid, and the balance is used to measure this force (F=W). Therefore, the partial or entire volume of any object, for example the inflamed hind paw of a rat, can be calculated from the specific gravity of the immersion liquid: at equilibrium, volume (V) = mass/specific gravity. The extent of edema at time t (measured as a volume) is then V(t)-V(0). The method is easy to use, and the materials are inexpensive and readily available. It is important that the rat paw (or any object whose volume is being measured) does not touch the wall of the column containing the fluid while the balance is read.
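
The balance-based measurement reduces to Archimedes' principle: the increase in the balance reading equals the weight of displaced liquid, so volume = mass/specific gravity. A minimal sketch with invented readings (not experimental data):

```python
def immersed_volume(reading_increase_g, specific_gravity=1.0):
    """Volume (mL) of the immersed part of an object from the increase in
    the balance reading: the liquid pushes back with the weight of the
    displaced liquid (F = W), so V = mass / specific gravity."""
    return reading_increase_g / specific_gravity

# Illustrative balance readings in grams, water as the immersion liquid.
v_baseline = immersed_volume(1.20)       # paw before inflammation
v_inflamed = immersed_volume(1.65)       # same paw at time t
edema_volume = v_inflamed - v_baseline   # extent of edema, V(t) - V(0)
```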

  16. Profitable capitation requires accurate costing.

    PubMed

    West, D A; Hicks, L L; Balas, E A; West, T D

    1996-01-01

    In the name of costing accuracy, nurses are asked to track inventory use on a per-treatment basis, while more significant costs, such as general overhead and nursing salaries, are usually allocated to patients or treatments on an average-cost basis. Accurate treatment costing and financial viability require analysis of all resources actually consumed in treatment delivery, including nursing services and inventory. More precise costing information enables more profitable decisions, as demonstrated by comparing the ratio-of-cost-to-treatment method (aggregate costing) with alternative activity-based costing (ABC) methods. Nurses must participate in this costing process to ensure that capitation bids are based upon accurate costs rather than simple averages. PMID:8788799
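
The contrast between aggregate (ratio-of-cost-to-treatment) costing and activity-based costing can be illustrated with a toy calculation; all figures, treatment names and cost drivers below are invented for illustration:

```python
# Two treatments with very different resource consumption.
treatments = {
    "simple_visit":  {"nurse_min": 15, "supplies": 5.0},
    "complex_visit": {"nurse_min": 90, "supplies": 40.0},
}
overhead = 200.0     # general overhead to allocate
nurse_rate = 0.80    # cost per nursing minute (assumed)
total_minutes = sum(t["nurse_min"] for t in treatments.values())

# Aggregate costing: every treatment is assigned the simple average cost.
total_cost = overhead + sum(t["nurse_min"] * nurse_rate + t["supplies"]
                            for t in treatments.values())
average_cost = total_cost / len(treatments)

# ABC: direct costs traced per treatment; overhead driven by nursing minutes.
abc = {name: t["nurse_min"] * nurse_rate + t["supplies"]
             + overhead * t["nurse_min"] / total_minutes
       for name, t in treatments.items()}
```

Under the average scheme both visits appear to cost the same, so a capitation bid priced from the average undercharges complex visits and overcharges simple ones; ABC exposes the spread.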

  17. Evaluation of a Fast and Simple Sample Preparation Method for Polybrominated Diphenyl Ether (PBDE) Flame Retardants and Dichlorodiphenyltrichloroethane (DDT) Pesticides in Fish for Analysis by ELISA Compared with GC-MS/MS.

    PubMed

    Sapozhnikova, Yelena; Simons, Tawana; Lehotay, Steven J

    2015-05-13

    A simple, fast, and cost-effective sample preparation method, previously developed and validated for the analysis of organic contaminants in fish using low-pressure gas chromatography-tandem mass spectrometry (LPGC-MS/MS), was evaluated for the analysis of polybrominated diphenyl ethers (PBDEs) and dichlorodiphenyltrichloroethane (DDT) pesticides using enzyme-linked immunosorbent assay (ELISA). The sample preparation technique was based on the quick, easy, cheap, rugged, effective, and safe (QuEChERS) approach with filter-vial dispersive solid phase extraction (d-SPE). Incurred PBDEs and DDTs were analyzed in three types of fish with 3-10% lipid content: Pacific croaker, salmon, and National Institute of Standards and Technology (NIST) Standard Reference Material 1947 (Lake Michigan fish tissue). ELISA and LPGC-MS/MS results were in agreement: ELISA accuracy was 108-111% for PBDEs and 65-82% for DDTs relative to the LPGC-MS/MS results. Similar detection limits were achieved for ELISA and LPGC-MS/MS. Matrix effects (MEs) were significant (e.g., -60%) for PBDE measurement by ELISA, but were not a factor for the DDT pesticides. This study demonstrated that the sample preparation method can be adopted for semiquantitative screening of fish samples with commercial ELISA kits for PBDEs and DDTs. PMID:25644932

  19. A fast and accurate method for the determination of total and soluble fluorine in toothpaste using high-resolution graphite furnace molecular absorption spectrometry and its comparison with established techniques.

    PubMed

    Gleisner, Heike; Einax, Jürgen W; Morés, Silvane; Welz, Bernhard; Carasek, Eduardo

    2011-04-01

    A fast and reliable method has been developed for the determination of total and soluble fluorine in toothpaste, important quality control parameters in dentifrices. The method is based on the molecular absorption of gallium mono-fluoride, GaF, using a commercially available high-resolution continuum source atomic absorption spectrometer. Transversely heated platform tubes with zirconium as permanent chemical modifier were used throughout. Before each sample injection, a palladium and zirconium modifier solution and a gallium reagent were deposited onto the graphite platform and thermally pretreated to transform them into their active forms. The samples were only diluted and introduced directly into the graphite tube together with additional gallium reagent. Under these conditions the fluoride was stable up to a pyrolysis temperature of 550 °C, and the optimum vaporization (molecule formation) temperature was 1550 °C. The GaF molecular absorption was measured at 211.248 nm, and the limits of detection and quantification were 5.2 pg and 17 pg, respectively, corresponding to a limit of quantification of about 30 μg g(-1) (ppm) F in the original toothpaste. The proposed method was used for the determination of total and soluble fluorine content in toothpaste samples from different manufacturers. The samples contained different ionic fluoride species and sodium monofluorophosphate (MFP) with covalently bonded fluorine. The results for total fluorine were compared with those obtained with a modified conventional headspace gas chromatographic procedure. Accuracy and precision of the two procedures were comparable, but the proposed procedure was much less labor-intensive, and about five times faster than the latter.

  20. A fast and accurate method for the determination of total and soluble fluorine in toothpaste using high-resolution graphite furnace molecular absorption spectrometry and its comparison with established techniques.

    PubMed

    Gleisner, Heike; Einax, Jürgen W; Morés, Silvane; Welz, Bernhard; Carasek, Eduardo

    2011-04-01

    A fast and reliable method has been developed for the determination of total and soluble fluorine in toothpaste, important quality control parameters in dentifrices. The method is based on the molecular absorption of gallium mono-fluoride, GaF, using a commercially available high-resolution continuum source atomic absorption spectrometer. Transversely heated platform tubes with zirconium as permanent chemical modifier were used throughout. Before each sample injection, a palladium and zirconium modifier solution and a gallium reagent were deposited onto the graphite platform and thermally pretreated to transform them into their active forms. The samples were only diluted and introduced directly into the graphite tube together with additional gallium reagent. Under these conditions the fluoride was stable up to a pyrolysis temperature of 550 °C, and the optimum vaporization (molecule formation) temperature was 1550 °C. The GaF molecular absorption was measured at 211.248 nm, and the limits of detection and quantification were 5.2 pg and 17 pg, respectively, corresponding to a limit of quantification of about 30 μg g⁻¹ (ppm) F in the original toothpaste. The proposed method was used for the determination of total and soluble fluorine content in toothpaste samples from different manufacturers. The samples contained different ionic fluoride species and sodium monofluorophosphate (MFP) with covalently bonded fluorine. The results for total fluorine were compared with those obtained with a modified conventional headspace gas chromatographic procedure. Accuracy and precision of the two procedures were comparable, but the proposed procedure was much less labor-intensive, and about five times faster than the latter one. PMID:21215545

  2. Accurate compressed look up table method for CGH in 3D holographic display.

    PubMed

    Gao, Chuan; Liu, Juan; Li, Xin; Xue, Gaolei; Jia, Jia; Wang, Yongtian

    2015-12-28

    Computer generated hologram (CGH) should be obtained with high accuracy and high speed in 3D holographic display, and most research focuses on high speed. In this paper, a simple and effective computation method for CGH is proposed based on Fresnel diffraction theory and look up table. Numerical simulations and optical experiments are performed to demonstrate its feasibility. The proposed method can obtain more accurate reconstructed images with lower memory usage compared with the split look up table method and the compressed look up table method without sacrificing computational speed in hologram generation, so it is called the accurate compressed look up table method (AC-LUT). It is believed that the AC-LUT method is an effective method to calculate the CGH of 3D objects for real-time 3D holographic display, where huge amounts of data are required, and it could provide fast and accurate digital transmission in various dynamic optical fields in the future. PMID:26831987

  3. Simple scale interpolator facilitates reading of graphs

    NASA Technical Reports Server (NTRS)

    Fetterman, D. E., Jr.

    1965-01-01

    Simple transparent overlay with interpolation scale facilitates accurate, rapid reading of graph coordinate points. This device can be used for enlarging drawings and locating points on perspective drawings.

  4. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.

  5. Gaussianization for fast and accurate inference from cosmological data

    NASA Astrophysics Data System (ADS)

    Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.

    2016-06-01

    We present a method to transform multivariate unimodal non-Gaussian posterior probability densities into approximately Gaussian ones via non-linear mappings, such as Box-Cox transformations and generalizations thereof. This permits an analytical reconstruction of the posterior from a point sample, like a Markov chain, and simplifies the subsequent joint analysis with other experiments. This way, a multivariate posterior density can be reported efficiently, by compressing the information contained in Markov Chain Monte Carlo samples. Further, the model evidence integral (i.e. the marginal likelihood) can be computed analytically. This method is analogous to the search for normal parameters in the cosmic microwave background, but is more general. The search for the optimally Gaussianizing transformation is performed computationally through a maximum-likelihood formalism; its quality can be judged by how well the credible regions of the posterior are reproduced. We demonstrate that our method outperforms kernel density estimates in this objective. Further, we select marginal posterior samples from Planck data with several distinct strongly non-Gaussian features, and verify the reproduction of the marginal contours. To demonstrate evidence computation, we Gaussianize the joint distribution of data from weak lensing and baryon acoustic oscillations, for different cosmological models, and find a preference for flat Λ cold dark matter. Comparing to values computed with the Savage-Dickey density ratio, and Population Monte Carlo, we find good agreement of our method within the spread of the other two.
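The Box-Cox search this abstract describes can be sketched in a few lines. The following is a minimal illustration of the idea only, not the authors' code: the grid search, the toy sample, and the function names are all invented for this sketch.

```python
import math

def box_cox(x, lam):
    """Box-Cox transform of a positive value x with exponent lam."""
    return math.log(x) if lam == 0.0 else (x ** lam - 1.0) / lam

def profile_loglik(sample, lam):
    """Profile log-likelihood that the transformed sample is Gaussian,
    including the Jacobian term (lam - 1) * sum(log x)."""
    n = len(sample)
    y = [box_cox(x, lam) for x in sample]
    mu = sum(y) / n
    var = sum((v - mu) ** 2 for v in y) / n
    return -0.5 * n * math.log(var) + (lam - 1.0) * sum(math.log(x) for x in sample)

# Grid search for the best Gaussianizing exponent on a skewed toy sample.
sample = [math.exp(0.05 * i * i) for i in range(1, 30)]
best_lam = max((l / 10.0 for l in range(-20, 21)),
               key=lambda lam: profile_loglik(sample, lam))
```

In practice the one-dimensional grid would be replaced by a multivariate optimization over the transformation parameters, with the credible-region check the authors describe as the quality criterion.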

  6. Fast and Accurate Digital Morphometry of Facial Expressions.

    PubMed

    Grewe, Carl Martin; Schreiber, Lisa; Zachow, Stefan

    2015-10-01

    Facial surgery deals with a part of the human body that is of particular importance in everyday social interactions. The perception of a person's natural, emotional, and social appearance is significantly influenced by one's expression. This is why facial dynamics has been increasingly studied by both artists and scholars since the mid-Renaissance. Currently, facial dynamics and their importance in the perception of a patient's identity play a fundamental role in planning facial surgery. Assistance is needed for patient information and communication, and documentation and evaluation of the treatment as well as during the surgical procedure. Here, the quantitative assessment of morphological features has been facilitated by the emergence of diverse digital imaging modalities in the last decades. Unfortunately, the manual data preparation usually needed for further quantitative analysis of the digitized head models (surface registration, landmark annotation) is time-consuming, and thus inhibits its use for treatment planning and communication. In this article, we refer to historical studies on facial dynamics, briefly present related work from the field of facial surgery, and draw implications for further developments in this context. A prototypical stereophotogrammetric system for high-quality assessment of patient-specific 3D dynamic morphology is described. An individual statistical model of several facial expressions is computed, and possibilities to address a broad range of clinical questions in facial surgery are demonstrated.

  7. A fast, time-accurate unsteady full potential scheme

    NASA Technical Reports Server (NTRS)

    Shankar, V.; Ide, H.; Gorski, J.; Osher, S.

    1985-01-01

    The unsteady form of the full potential equation is solved in conservation form by an implicit method based on approximate factorization. At each time level, internal Newton iterations are performed to achieve time accuracy and computational efficiency. A local time linearization procedure is introduced to provide a good initial guess for the Newton iteration. A novel flux-biasing technique is applied to generate proper forms of the artificial viscosity to treat hyperbolic regions with shocks and sonic lines present. The wake is properly modeled by accounting not only for jumps in phi, but also for jumps in higher derivatives of phi, obtained by imposing the density to be continuous across the wake. The far field is modeled using the Riemann invariants to simulate nonreflecting boundary conditions. The resulting unsteady method performs well; even at low reduced frequency levels of 0.1 or less, it requires fewer than 100 time steps per cycle at transonic Mach numbers. The code is fully vectorized for the CRAY-XMP and the VPS-32 computers.

  8. Fast and accurate database searches with MS-GF+Percolator

    SciTech Connect

    Granholm, Viktor; Kim, Sangtae; Navarro, Jose' C.; Sjolund, Erik; Smith, Richard D.; Kall, Lukas

    2014-02-28

    To identify peptides and proteins from the large number of fragmentation spectra in mass spectrometry-based proteomics, researchers commonly employ so-called database search engines. Additionally, postprocessors like Percolator have been used on the results from such search engines, to assess confidence, infer peptides and generally increase the number of identifications. A recent search engine, MS-GF+, has previously been shown to outperform these classical search engines in terms of the number of identified spectra. However, MS-GF+ generates only limited statistical estimates of the results, hence hampering the biological interpretation. Here, we enabled Percolator-processing for MS-GF+ output, and observed an increased number of identified peptides for a wide variety of datasets. In addition, Percolator directly reports false discovery rate estimates, such as q values and posterior error probabilities, as well as p values, for peptide-spectrum matches, peptides and proteins, features useful for the whole proteomics community.
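For readers unfamiliar with q values, the target-decoy estimate that such postprocessors build on can be stated concretely. This is a simplified sketch, not Percolator's actual algorithm, and the scores below are made up.

```python
def qvalues(scores, is_decoy):
    """Target-decoy q-values: the FDR at each target's score threshold is
    (#decoys at or above it) / (#targets at or above it); the q-value is the
    minimum FDR over all thresholds at least as permissive."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    targets = decoys = 0
    fdr = {}
    for i in order:
        if is_decoy[i]:
            decoys += 1
        else:
            targets += 1
            fdr[i] = decoys / targets
    # Enforce monotonicity, walking from the worst-scoring target upward.
    q, running = {}, 1.0
    for i in reversed([j for j in order if not is_decoy[j]]):
        running = min(running, fdr[i])
        q[i] = running
    return q

# Hypothetical PSM scores; True marks a decoy match.
scores = [10, 9, 8, 7, 6, 5]
is_decoy = [False, False, True, False, False, True]
q = qvalues(scores, is_decoy)  # q[0] == q[1] == 0.0; q[3] == q[4] == 0.25
```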

  9. Fast and Accurate Learning When Making Discrete Numerical Estimates.

    PubMed

    Sanborn, Adam N; Beierholm, Ulrik R

    2016-04-01

    Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates. PMID:27070155
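The two decision functions compared in this abstract (drawing a sample from the posterior versus taking its maximum) are easy to state concretely. The discrete bimodal posterior below is an invented example, not data from the paper.

```python
import random

def sample_response(posterior, rng):
    """Decision rule 1: draw the response from the posterior distribution."""
    values, probs = zip(*sorted(posterior.items()))
    return rng.choices(values, weights=probs, k=1)[0]

def max_response(posterior):
    """Decision rule 2: respond with the posterior mode (its maximum)."""
    return max(posterior, key=posterior.get)

# A discrete bimodal posterior over counts (illustrative values only).
posterior = {8: 0.35, 9: 0.10, 10: 0.05, 11: 0.10, 12: 0.40}
rng = random.Random(0)
mode = max_response(posterior)   # always returns the mode, 12
draws = [sample_response(posterior, rng) for _ in range(1000)]
```

Maximizing always returns the same response for a fixed posterior, while sampling scatters responses in proportion to the posterior mass; the paper's participants fell between these two extremes.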

  10. Fast and accurate decisions through collective vigilance in fish shoals

    PubMed Central

    Ward, Ashley J. W.; Herbert-Read, James E.; Sumpter, David J. T.; Krause, Jens

    2011-01-01

    Although it has been suggested that large animal groups should make better decisions than smaller groups, there are few empirical demonstrations of this phenomenon and still fewer explanations of how these improvements may be made. Here we show that both speed and accuracy of decision making increase with group size in fish shoals under predation threat. We examined two plausible mechanisms for this improvement: first, that groups are guided by a small proportion of high-quality decision makers and, second, that group members use self-organized division of vigilance. Repeated testing of individuals showed no evidence of different decision-making abilities between individual fish. Instead, we suggest that shoals achieve greater decision-making efficiencies through division of labor combined with social information transfer. Our results should prompt reconsideration of how we view cooperation in animal groups with fluid membership. PMID:21262802

  11. Fast and Accurate Support Vector Machines on Large Scale Systems

    SciTech Connect

    Vishnu, Abhinav; Narasimhan, Jayenthi; Holder, Larry; Kerbyson, Darren J.; Hoisie, Adolfy

    2015-09-08

    Support Vector Machines (SVM) is a supervised Machine Learning and Data Mining (MLDM) algorithm, which has become ubiquitous largely due to its high accuracy and obliviousness to dimensionality. The objective of SVM is to find an optimal boundary --- also known as a hyperplane --- which separates the samples (examples in a dataset) of different classes by a maximum margin. Usually, very few samples contribute to the definition of the boundary. However, existing parallel algorithms use the entire dataset for finding the boundary, which is sub-optimal for performance reasons. In this paper, we propose a novel distributed memory algorithm to eliminate the samples which do not contribute to the boundary definition in SVM. We propose several heuristics, which range from early (aggressive) to late (conservative) elimination of the samples, such that the overall time for generating the boundary is reduced considerably. In a few cases, a sample may be eliminated (shrunk) pre-emptively --- potentially resulting in an incorrect boundary. We propose a scalable approach to synchronize the necessary data structures such that the proposed algorithm maintains its accuracy. We consider the necessary trade-offs of single/multiple synchronization using in-depth time-space complexity analysis. We implement the proposed algorithm using MPI and compare it with libsvm --- the de facto sequential SVM software --- which we enhance with OpenMP for multi-core/many-core parallelism. Our proposed approach shows excellent efficiency using up to 4096 processes on several large datasets such as the UCI HIGGS Boson dataset and the Offending URL dataset.

  13. Mobile unit provides fast and accurate Btu measurements

    SciTech Connect

    Lansing, J. )

    1991-05-01

    Southern California Gas Co. (SoCalGas) provides service to more than four million customers in a 23,000-plus square mile area. Some 95% of these customers fall under the residential category and the remaining customers are industrial and commercial. To ensure Btu value received from the supplier and delivered to the user is accounted for properly, SoCalGas has divided its service area into 47 districts according to the gas Btu content. The company obtains the information by collecting approximately 200 sample cylinders each week from field monitoring points and transporting them to one of four laboratories for analysis. For collecting the information from each lab site, SoCalGas uses a computerized Gas Quality Measurement System (GQMS) that utilizes a Hewlett-Packard 1000 computer. Information on all the gas sample analysis is transmitted each day to the company's measurement office. About two-thirds of the lab work is performed in Los Angeles and the remaining at three satellite laboratories. Sample points are strategically located to monitor gas entering each district. By measuring gas volumes at these key points, a volume-weighted average can be determined and the customers' monthly bills then can be adjusted for gas energy content by this volume-weighted four-week average. The engineering department uses sample-cylinder analysis data to establish and maintain correct Btu boundaries. However, the time required to process this information makes it difficult for engineering to use the data promptly.

  14. Fast and accurate hashing via iterative nearest neighbors expansion.

    PubMed

    Jin, Zhongming; Zhang, Debing; Hu, Yao; Lin, Shiding; Cai, Deng; He, Xiaofei

    2014-11-01

    Recently, hashing techniques have been widely applied to the approximate nearest neighbor search problem in many real applications. The basic idea of these approaches is to generate binary codes for data points which can preserve the similarity between any two of them. Given a query, instead of performing a linear scan of the entire database, the hashing method can perform a linear scan of the points whose Hamming distance to the query is not greater than rh, where rh is a constant. However, in order to find the true nearest neighbors, both the locating time and the linear scan time are proportional to O(sum over i = 0..rh of C(c, i)) (where c is the code length), which increases exponentially as rh increases. To address this limitation, we propose a novel algorithm named iterative expanding hashing in this paper, which builds an auxiliary index based on an offline constructed nearest neighbor table to avoid large rh. This auxiliary index can be easily combined with all the traditional hashing methods. Extensive experimental results over various real large-scale datasets demonstrate the superiority of the proposed approach.
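The quantity driving this cost, the number of binary codes within Hamming distance rh of a query, is a sum of binomial coefficients and grows steeply with rh. A short sketch (not from the paper) makes the growth concrete:

```python
from math import comb

def hamming_ball_size(c, rh):
    """Number of binary codes of length c within Hamming distance rh of a
    query: the sum over i = 0..rh of C(c, i)."""
    return sum(comb(c, i) for i in range(rh + 1))

# For a 32-bit code, the number of hash buckets to probe explodes with rh:
sizes = [hamming_ball_size(32, r) for r in range(4)]
# r = 0, 1, 2, 3  ->  1, 33, 529, 5489 buckets
```

This is why the paper's auxiliary index, which keeps rh small, pays off.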

  15. Fast and Accurate Learning When Making Discrete Numerical Estimates

    PubMed Central

    Sanborn, Adam N.; Beierholm, Ulrik R.

    2016-01-01

    Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates. PMID:27070155

  16. A simple phoswich system

    NASA Astrophysics Data System (ADS)

    Ramsden, D.; Zhang, S. N.

    1988-06-01

    Normal phoswich detector systems use a combination of NaI(Tl) and CsI(Na) scintillators and require the application of careful pulse-shape discriminator techniques to resolve the two components in the scintillation light output, which have decay constants of 250 and 630 ns respectively. These techniques provide a good anticoincidence veto efficiency for a relatively narrow range in the ratio of energy deposits in the two crystals and for a detector system whose temperature is carefully controlled. This paper describes the performance of a simple phoswich which makes use of the fast UV signal from a BaF2 crystal to provide a prompt veto signal. The performance to be expected from various combinations of a BaF2 anticoincidence crystal with other primary detectors is presented. These simulations have been verified by simple experimental tests.

  17. Fast protein folding kinetics

    PubMed Central

    Gelman, Hannah; Gruebele, Martin

    2014-01-01

    Fast folding proteins have been a major focus of computational and experimental study because they are accessible to both techniques: they are small and fast enough to be reasonably simulated with current computational power, but have dynamics slow enough to be observed with specially developed experimental techniques. This coupled study of fast folding proteins has provided insight into the mechanisms which allow some proteins to find their native conformation in well under 1 ms and has uncovered examples of theoretically predicted phenomena such as downhill folding. The study of fast folders also informs our understanding of even “slow” folding processes: fast folders are small, relatively simple protein domains and the principles that govern their folding also govern the folding of more complex systems. This review summarizes the major theoretical and experimental techniques used to study fast folding proteins and provides an overview of the major findings of fast folding research. Finally, we examine the themes that have emerged from studying fast folders and briefly summarize their application to protein folding in general as well as some work that is left to do. PMID:24641816

  18. Simple Waveforms, Simply Described

    NASA Technical Reports Server (NTRS)

    Baker, John G.

    2008-01-01

    Since the first Lazarus Project calculations, it has been frequently noted that binary black hole merger waveforms are 'simple.' In this talk we examine some of the simple features of coalescence and merger waveforms from a variety of binary configurations. We suggest an interpretation of the waveforms in terms of an implicit rotating source. This allows a coherent description of both the inspiral waveforms, derivable from post-Newtonian (PN) calculations, and the numerically determined merger-ringdown. We focus particularly on similarities in the features of various multipolar waveform components generated by various systems. The late-time phase evolution of most of these waveform components is accurately described with a simple analytic fit. We also discuss apparent relationships among phase and amplitude evolution. Taken together with PN information, the features we describe can provide an approximate analytic description of full coalescence waveforms, complementary to other analytic waveform approaches.

  19. Accurate quantum chemical calculations

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  20. Toward a UV-visible-near-infrared hyperspectral imaging platform for fast multiplex reflection spectroscopy.

    PubMed

    Li, Jianping; Chan, Robert K Y

    2010-10-15

    A reflection hyperspectral imaging system covering a 350-1000 nm spectral range is realized by a UV-visible-near-IR Fourier transform imaging spectrometer. The system has a simple design and good spectral and spatial resolving performance. Accurate and fast microspectroscopic measurement results on novel colloidal crystal beads demonstrate that the system has practical potential for high-throughput molecular multiplex assays. PMID:20967056

  1. Accurate Optical Reference Catalogs

    NASA Astrophysics Data System (ADS)

    Zacharias, N.

    2006-08-01

    Current and near future all-sky astrometric catalogs on the ICRF are reviewed with the emphasis on reference star data at optical wavelengths for user applications. The standard error of a Hipparcos Catalogue star position is now about 15 mas per coordinate. For the Tycho-2 data it is typically 20 to 100 mas, depending on magnitude. The USNO CCD Astrograph Catalog (UCAC) observing program was completed in 2004 and reductions toward the final UCAC3 release are in progress. This all-sky reference catalogue will have positional errors of 15 to 70 mas for stars in the 10 to 16 mag range, with a high degree of completeness. Proper motions for the about 60 million UCAC stars will be derived by combining UCAC astrometry with available early epoch data, including yet unpublished scans of the complete set of AGK2, Hamburg Zone astrograph and USNO Black Birch programs. Accurate positional and proper motion data are combined in the Naval Observatory Merged Astrometric Dataset (NOMAD) which includes Hipparcos, Tycho-2, UCAC2, USNO-B1, NPM+SPM plate scan data for astrometry, and is supplemented by multi-band optical photometry as well as 2MASS near infrared photometry. The Milli-Arcsecond Pathfinder Survey (MAPS) mission is currently being planned at USNO. This is a micro-satellite to obtain 1 mas positions, parallaxes, and 1 mas/yr proper motions for all bright stars down to about 15th magnitude. This program will be supplemented by a ground-based program to reach 18th magnitude on the 5 mas level.

  2. Fast self-attenuation determination of low energy gamma lines.

    PubMed

    Haddad, Kh

    2016-09-01

    A linear correlation between the self-attenuation factor of the 46.5 keV line ((210)Pb) and the 1764 keV/46.5 keV count ratio has been developed in this work using triple superphosphate fertilizer samples. A similar correlation has also been developed for the 63.3 keV line ((238)U). This correlation offers a simple, fast, and accurate technique for self-attenuation determination of low energy gamma lines. Utilization of the 46.5 keV line in the ratio has remarkably improved the technique's sensitivity in comparison with other work that used a similar concept. The obtained results were used to assess the validity of the transmission technique. PMID:27337648
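The kind of linear calibration this abstract relies on can be sketched as an ordinary least-squares fit followed by prediction for a new sample. The count-ratio and attenuation-factor values below are invented for illustration and are not the paper's data.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b * x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Hypothetical calibration: count ratio R = N(1764 keV) / N(46.5 keV)
# versus the self-attenuation factor F of the 46.5 keV line.
ratios = [0.5, 1.0, 1.5, 2.0]
factors = [0.90, 0.80, 0.70, 0.60]
a, b = fit_line(ratios, factors)

def predict_factor(r):
    """Self-attenuation factor for a new sample from its measured ratio."""
    return a + b * r
```

Once the line is fitted on reference samples, a single measured count ratio yields the self-attenuation correction directly, which is what makes the technique fast.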

  3. Fasting Increases Tobramycin Oral Absorption in Mice▿

    PubMed Central

    De Leo, Luigina; Di Toro, Nicola; Decorti, Giuliana; Malusà, Noelia; Ventura, Alessandro; Not, Tarcisio

    2010-01-01

    The pharmacokinetics of the aminoglycoside tobramycin was evaluated after oral administration to fed or fasting (15 h) mice. As expected, under normal feeding conditions, oral absorption was negligible; however, fasting induced a dramatic increase in tobramycin bioavailability. The dual-sugar test with lactulose and l-rhamnose confirmed increased small bowel permeability via the paracellular route in fasting animals. When experiments aimed at increasing the oral bioavailability of hydrophilic compounds are performed, timing of fasting should be extremely accurate. PMID:20086144

  4. The SILAC Fly Allows for Accurate Protein Quantification in Vivo*

    PubMed Central

    Sury, Matthias D.; Chen, Jia-Xuan; Selbach, Matthias

    2010-01-01

    Stable isotope labeling by amino acids in cell culture (SILAC) is widely used to quantify protein abundance in tissue culture cells. Until now, the only multicellular organism completely labeled at the amino acid level was the laboratory mouse. The fruit fly Drosophila melanogaster is one of the most widely used small animal models in biology. Here, we show that feeding flies with SILAC-labeled yeast leads to almost complete labeling in the first filial generation. We used these “SILAC flies” to investigate sexual dimorphism of protein abundance in D. melanogaster. Quantitative proteome comparison of adult male and female flies revealed distinct biological processes specific for each sex. Using a tudor mutant that is defective for germ cell generation allowed us to differentiate between sex-specific protein expression in the germ line and somatic tissue. We identified many proteins with known sex-specific expression bias. In addition, several new proteins with a potential role in sexual dimorphism were identified. Collectively, our data show that the SILAC fly can be used to accurately quantify protein abundance in vivo. The approach is simple, fast, and cost-effective, making SILAC flies an attractive model system for the emerging field of in vivo quantitative proteomics. PMID:20525996

  5. Pendulum: Rich physics from a simple system

    SciTech Connect

    Nelson, R.A.; Olsson, M.G.

    1986-02-01

    We provide a comprehensive discussion of the corrections needed to accurately measure the acceleration of gravity using a plane pendulum. A simple laboratory experiment is described in which g was determined to four significant figures of accuracy.
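
The abstract's full set of corrections is not reproduced here, but the leading finite-amplitude correction illustrates the idea: the measured period T exceeds the small-angle period T0 by roughly a factor (1 + theta0^2/16), and g is then recovered from g = 4*pi^2*L/T0^2. A minimal sketch with hypothetical measurement values (not the paper's data):

```python
import math

def g_from_pendulum(length_m, period_s, amplitude_rad):
    """Estimate g from a plane pendulum, undoing the leading
    finite-amplitude correction T ~ T0 * (1 + theta0**2 / 16)."""
    small_angle_period = period_s / (1.0 + amplitude_rad ** 2 / 16.0)
    return 4.0 * math.pi ** 2 * length_m / small_angle_period ** 2

# Hypothetical measurement: 1.000 m pendulum, T = 2.0096 s at 10 deg amplitude.
g = g_from_pendulum(1.0, 2.0096, math.radians(10.0))
```

Ignoring the amplitude correction at 10 degrees would bias g low by about 0.4%, which matters when aiming for four significant figures.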

  6. How Accurately can we Calculate Thermal Systems?

    SciTech Connect

    Cullen, D; Blomquist, R N; Dean, C; Heinrichs, D; Kalugin, M A; Lee, M; Lee, Y; MacFarlan, R; Nagaya, Y; Trkov, A

    2004-04-20

    I would like to determine how accurately a variety of neutron transport code packages (code and cross section libraries) can calculate simple integral parameters, such as K{sub eff}, for systems that are sensitive to thermal neutron scattering. Since we will only consider theoretical systems, we cannot really determine absolute accuracy compared to any real system. Therefore rather than accuracy, it would be more precise to say that I would like to determine the spread in answers that we obtain from a variety of code packages. This spread should serve as an excellent indicator of how accurately we can really model and calculate such systems today. Hopefully, eventually this will lead to improvements in both our codes and the thermal scattering models that they use in the future. In order to accomplish this I propose a number of extremely simple systems that involve thermal neutron scattering that can be easily modeled and calculated by a variety of neutron transport codes. These are theoretical systems designed to emphasize the effects of thermal scattering, since that is what we are interested in studying. I have attempted to keep these systems very simple, and yet at the same time they include most, if not all, of the important thermal scattering effects encountered in a large, water-moderated, uranium fueled thermal system, i.e., our typical thermal reactors.

  7. Generalized Gradient Approximation Made Simple

    SciTech Connect

    Perdew, J.P.; Burke, K.; Ernzerhof, M.

    1996-10-01

    Generalized gradient approximations (GGA's) for the exchange-correlation energy improve upon the local spin density (LSD) description of atoms, molecules, and solids. We present a simple derivation of a simple GGA, in which all parameters (other than those in LSD) are fundamental constants. Only general features of the detailed construction underlying the Perdew-Wang 1991 (PW91) GGA are invoked. Improvements over PW91 include an accurate description of the linear response of the uniform electron gas, correct behavior under uniform scaling, and a smoother potential. © 1996 The American Physical Society.

  8. Robot navigation using simple sensor fusion

    SciTech Connect

    Jollay, D.M.; Ricks, R.E.

    1988-01-01

    Sensors on an autonomous mobile system are essential in environment determination for navigation purposes. As is well documented in previous publications, sonar sensors are inadequate in providing a depiction of a real world environment and therefore do not provide accurate information for navigation if not used in conjunction with another type of sensor. This paper describes a simple, inexpensive, and relatively fast navigation algorithm involving vision and sonar sensor fusion for use in navigating an autonomous robot in an unknown and potentially dynamic environment. Navigation of the mobile robot was accomplished by use of a TV camera as the primary sensor. Input data received from the camera were digitized through a video module and then processed using a dedicated vision system to enable detection of obstacles and to determine edge positions relative to the robot. Since 3D vision was not attempted due to its complex and time consuming nature, sonar sensors were then used as secondary sensors in order to determine the proximity of detected obstacles. By then fusing the sensor data, the robot was able to navigate (quickly and collision free) to a given goal, achieving obstacle avoidance in real-time.

  9. Simple program calculates partial liquid volumes in vessels

    SciTech Connect

    Koch, P.

    1992-04-13

    This paper reports on a simple calculator program which solves problems of partial liquid volumes for a variety of storage and process vessels, including inclined cylindrical vessels and those with conical heads. Engineers in the oil refining and chemical industries are often confronted with the problem of estimating partial liquid volumes in storage tanks or process vessels. Cistern, the calculator program presented here, allows fast and accurate resolution of problems for a wide range of vessels without user intervention, other than inputting the problem data. Running the program requires no mathematical skills. Cistern is written for Hewlett-Packard HP 41CV or HP 41CX programmable calculators (or HP 41C with extended memory modules).
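
The abstract does not list Cistern's formulas, but its core case, a partially filled horizontal cylinder, reduces to the standard circular-segment result: segment area R^2*acos((R-h)/R) - (R-h)*sqrt(2*R*h - h^2) times the vessel length. A minimal sketch of that calculation (not the HP-41 program itself):

```python
import math

def horizontal_cylinder_volume(radius, length, depth):
    """Liquid volume in a horizontal cylindrical vessel filled to the
    given depth (0 <= depth <= 2 * radius): the circular-segment area
    R^2*acos((R - h)/R) - (R - h)*sqrt(2*R*h - h^2), times the length."""
    if not 0.0 <= depth <= 2.0 * radius:
        raise ValueError("depth must lie between 0 and the diameter")
    r, h = radius, depth
    segment = r * r * math.acos((r - h) / r) - (r - h) * math.sqrt(2.0 * r * h - h * h)
    return segment * length

# Sanity check: a half-full cylinder holds exactly half the full volume.
v_half = horizontal_cylinder_volume(1.0, 3.0, 1.0)
```

Inclined vessels and conical heads, which Cistern also handles, need further geometry beyond this basic segment formula.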

  10. Fast valve

    DOEpatents

    Van Dyke, William J.

    1992-01-01

    A fast valve is disclosed that can close on the order of 7 milliseconds. It is closed by the force of a compressed air spring, with the moving parts of the valve designed to be of very light weight and the valve gate being wedge shaped with O-ring sealed faces to provide sealing contact without metal to metal contact. The combination of the O-ring seal and an air cushion creates a soft final movement of the valve closure to prevent the fast air acting valve from having a harsh closing.

  11. Fast valve

    DOEpatents

    Van Dyke, W.J.

    1992-04-07

    A fast valve is disclosed that can close on the order of 7 milliseconds. It is closed by the force of a compressed air spring, with the moving parts of the valve designed to be of very light weight and the valve gate being wedge shaped with O-ring sealed faces to provide sealing contact without metal to metal contact. The combination of the O-ring seal and an air cushion creates a soft final movement of the valve closure to prevent the fast air acting valve from having a harsh closing. 4 figs.

  12. Application of solvent-assisted dispersive solid phase extraction as a new, fast, simple and reliable preconcentration and trace detection of lead and cadmium ions in fruit and water samples.

    PubMed

    Behbahani, Mohammad; Ghareh Hassanlou, Parmoon; Amini, Mostafa M; Omidi, Fariborz; Esrafili, Ali; Farzadkia, Mehdi; Bagheri, Akbar

    2015-11-15

    In this research, a new sample treatment technique termed solvent-assisted dispersive solid phase extraction (SA-DSPE) was developed. The new method was based on the dispersion of the sorbent into the sample to maximize the contact surface. In this approach, the dispersion of the sorbent at a very low milligram level was achieved by injecting a mixture solution of the sorbent and disperser solvent into the aqueous sample. Thereby, a cloudy solution formed. The cloudy solution resulted from the dispersion of the fine particles of the sorbent in the bulk aqueous sample. After extraction, the cloudy solution was centrifuged and the enriched analytes in the sediment phase were dissolved in ethanol and determined by flame atomic absorption spectrophotometry. Under the optimized conditions, the detection limits for lead and cadmium ions were 1.2 μg L(-1) and 0.2 μg L(-1), respectively. Furthermore, the preconcentration factor was 299.3 and 137.1 for cadmium and lead ions, respectively. SA-DSPE was successfully applied for trace determination of lead and cadmium in fruit (Citrus limetta, Kiwi and pomegranate) and water samples. Finally, the introduced sample preparation method can be used as a simple, rapid, reliable, selective and sensitive method for flame atomic absorption spectrophotometric determination of trace levels of lead and cadmium ions in fruit and water samples.

  13. NNLOPS accurate associated HW production

    NASA Astrophysics Data System (ADS)

    Astill, William; Bizon, Wojciech; Re, Emanuele; Zanderighi, Giulia

    2016-06-01

    We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than the cases treated before, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross section Working Group.

  14. Evaluation of gravimetric and volumetric dispensers of particles of nuclear material. [Accurate dispensing of fissile and fertile fuel into fuel rods

    SciTech Connect

    Bayne, C.K.; Angelini, P.

    1981-08-01

    Theoretical and experimental studies compared the abilities of volumetric and gravimetric dispensers to accurately dispense fissile and fertile fuel particles. Such devices are being developed for the fabrication of sphere-pac fuel rods for high-temperature gas-cooled, light water, and fast breeder reactors. The theoretical examination suggests that, although the fuel particles are dispensed more accurately by the gravimetric dispenser, the amount of nuclear material in the fuel particles dispensed by the two methods is not significantly different. The experimental results demonstrated that the volumetric dispenser can dispense both fuel particles and nuclear materials that meet standards for fabricating fuel rods. Performance of the more complex gravimetric dispenser was not significantly better than that of the simple yet accurate volumetric dispenser.

  15. Accurate Molecular Polarizabilities Based on Continuum Electrostatics

    PubMed Central

    Truchon, Jean-François; Nicholls, Anthony; Iftimie, Radu I.; Roux, Benoît; Bayly, Christopher I.

    2013-01-01

    A novel approach for representing the intramolecular polarizability as a continuum dielectric is introduced to account for molecular electronic polarization. It is shown, using a finite-difference solution to the Poisson equation, that the Electronic Polarization from Internal Continuum (EPIC) model yields accurate gas-phase molecular polarizability tensors for a test set of 98 challenging molecules composed of heteroaromatics, alkanes and diatomics. The electronic polarization originates from a high intramolecular dielectric that produces polarizabilities consistent with B3LYP/aug-cc-pVTZ and experimental values when surrounded by vacuum dielectric. In contrast to other approaches to model electronic polarization, this simple model avoids the polarizability catastrophe and accurately calculates molecular anisotropy with the use of very few fitted parameters and without resorting to auxiliary sites or anisotropic atomic centers. On average, the unsigned errors in the average polarizability and anisotropy compared to B3LYP are 2% and 5%, respectively. The correlation between the polarizability components from B3LYP and this approach leads to an R2 of 0.990 and a slope of 0.999. Even the F2 anisotropy, shown to be a difficult case for existing polarizability models, can be reproduced within 2% error. In addition to providing new parameters for a rapid method directly applicable to the calculation of polarizabilities, this work extends the widely used Poisson equation to areas where accurate molecular polarizabilities matter. PMID:23646034

  16. Accurate momentum transfer cross section for the attractive Yukawa potential

    SciTech Connect

    Khrapak, S. A.

    2014-04-15

    An accurate expression for the momentum transfer cross section for the attractive Yukawa potential is proposed. This simple analytic expression agrees with the numerical results to within ±2% in the regime relevant for ion-particle collisions in complex (dusty) plasmas.

  17. Exploiting Resistive Guiding for Fast Ignition

    NASA Astrophysics Data System (ADS)

    Robinson, Alex

    2012-10-01

    Devising methods and schemes for controlling fast electron transport remains a major challenge in Fast Ignition research. Realistic estimates of the fast electron divergence angle require this control in order to ensure that the fast electron to hot spot coupling efficiency does not reach excessively low values. Resistivity gradients in the target will lead to strong magnetic field growth (via ∇η × j) which can be exploited for the purposes of controlling the fast electron propagation (Robinson and Sherlock, PoP (2007)). There are a number of possible schemes which might be considered. Here we will report on numerical simulations that we have carried out on both simple configurations such as parabolic reflectors, and complex arrangements (Robinson, Key and Tabak, PRL (2012)). Substantial improvements to the fast electron to hot spot coupling efficiency have been found even for realistic fast electron divergence angles.

  18. High Frequency QRS ECG Accurately Detects Cardiomyopathy

    NASA Technical Reports Server (NTRS)

    Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds

    2005-01-01

    RAZ scoring is a simple, accurate and inexpensive screening technique for cardiomyopathy. Although HF QRS ECG is highly sensitive for cardiomyopathy, its specificity may be compromised in patients with cardiac pathologies other than cardiomyopathy, such as uncomplicated coronary artery disease or multiple coronary disease risk factors. Further studies are required to determine whether HF QRS might be useful for monitoring cardiomyopathy severity or the efficacy of therapy in a longitudinal fashion.

  19. Fast Ion Conductors

    NASA Astrophysics Data System (ADS)

    Chadwick, Alan V.

    Fast ion conductors, sometimes referred to as superionic conductors or solid electrolytes, are solids with ionic conductivities that are comparable to those found in molten salts and aqueous solutions of strong electrolytes, i.e., 10⁻² to 10 S cm⁻¹. Such materials have been known for a very long time and some typical examples of the conductivity are shown in Fig. 1, along with sodium chloride as the archetypal normal ionic solid. Faraday [1] first noted the high conductivity of solid lead fluoride (PbF2) and silver sulphide (Ag2S) in the 1830s, and silver iodide was known to be an unusually good ionic conductor to German physicists early in the 1900s. However, the materials were regarded as anomalous until the mid 1960s, when they became the focus of intense interest to academics and technologists, and they have remained at the forefront of materials research [2-4]. The academic aim is to understand the fundamental origin of fast ion behaviour and the technological goal is to utilize the properties in applications, particularly in energy applications such as the electrolyte membranes in solid-state batteries and fuel cells, and in electrochemical sensors. The last four decades have seen an expansion of the types of material that exhibit fast ion behaviour, which now extends beyond simple binary ionic crystals to complex solids and even polymeric materials. Over this same period computer simulation of solids has also developed (in fact these methods and the interest in fast ion conductors were almost coincidental in their time of origin) and the techniques have played a key role in this area of research.

  20. Simple method for correct enumeration of Staphylococcus aureus.

    PubMed

    Haaber, J; Cohn, M T; Petersen, A; Ingmer, H

    2016-06-01

    Optical density (OD) measurement is applied universally to estimate cell numbers of microorganisms growing in liquid cultures. It is a fast and reliable method but is based on the assumption that the bacteria grow as single cells of equal size and that the cells are dispersed evenly in the liquid culture. When grown in such liquid cultures, the human pathogen Staphylococcus aureus is characterized by its aggregation of single cells into clusters of variable size. Here, we show that aggregation during growth in the laboratory standard medium tryptic soy broth (TSB) is common among clinical and laboratory S. aureus isolates and that aggregation may introduce significant bias when applying standard enumeration methods on S. aureus growing in laboratory batch cultures. We provide a simple and efficient sonication procedure, which can be applied prior to optical density measurements to give an accurate estimate of cellular numbers in liquid cultures of S. aureus regardless of the aggregation level of the given strain. We further show that the sonication procedure is applicable for accurate determination of cell numbers using agar plate counting of aggregating strains. PMID:27080188

  1. Accurate shear measurement with faint sources

    SciTech Connect

    Zhang, Jun; Foucaud, Sebastien; Luo, Wentao E-mail: walt@shao.ac.cn

    2015-01-01

    For cosmic shear to become an accurate cosmological probe, systematic errors in the shear measurement method must be unambiguously identified and corrected for. Previous work of this series has demonstrated that cosmic shears can be measured accurately in Fourier space in the presence of background noise and finite pixel size, without assumptions on the morphologies of galaxy and PSF. The remaining major source of error is source Poisson noise, due to the finiteness of source photon number. This problem is particularly important for faint galaxies in space-based weak lensing measurements, and for ground-based images of short exposure times. In this work, we propose a simple and rigorous way of removing the shear bias from the source Poisson noise. Our noise treatment can be generalized for images made of multiple exposures through MultiDrizzle. This is demonstrated with the SDSS and COSMOS/ACS data. With a large ensemble of mock galaxy images of unrestricted morphologies, we show that our shear measurement method can achieve sub-percent level accuracy even for images of signal-to-noise ratio less than 5 in general, making it the most promising technique for cosmic shear measurement in the ongoing and upcoming large scale galaxy surveys.

  2. Simple aging in molecular glasses

    NASA Astrophysics Data System (ADS)

    Niss, Kristine

    2015-03-01

    The glass transition takes place when the structural (alpha) relaxation freezes in and the liquid enters a non-equilibrium solid state. This usually happens when the relaxation time, τ, reaches a timescale of 1000 seconds, and τ = 1000 s is pragmatically used as a definition of the glass transition temperature Tg. However, if the glass is studied on a long enough time scale then relaxation is still seen as physical aging. Aging is a non-linear signature of the alpha relaxation in which the relaxation dynamics changes as a function of how far the system has relaxed. If the system is studied well below Tg then equilibrium will not be achieved, but just below or around Tg it is possible to systematically monitor the non-linear relaxation all the way to equilibrium. We have developed a micro cryostat which is optimized for making fast changes in temperature and keeping the temperature stable over days and even weeks. Combining this micro cryostat with a small dielectric cell it is possible to monitor non-linear relaxation in a dynamical range of more than 4 decades, from 10 seconds to 10⁵ seconds. The aging is monitored after a fast temperature jump. This means that the aging itself is isothermal, and the data therefore directly show how the relaxation rate changes as volume and structure change on the isotherm. We have studied several molecular liquids and find that the data can to a very large extent be understood in terms of a TNM formalism. This implies time-aging-time superposition and suggests a simple picture where the out-of-equilibrium ``states'' correspond to equilibrium states at another temperature. If the alpha relaxation is dynamically heterogeneous, as is commonly believed, then the aging results show that fast and slow ``modes'' of the relaxation are governed in the same way by structure and volume. We hypothesize that aging according to the TNM formalism is an intrinsic property of Roskilde Simple liquids.

  3. Accurate upwind methods for the Euler equations

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1993-01-01

    A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate, and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope steepening technique, which has no effect at smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one intermediate state model is employed. A modification for this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely, Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical at smooth regions, and yield high resolution at discontinuities.
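
As a hedged illustration of the median-function idea mentioned in the abstract (not the paper's exact constraint): the classic minmod slope limiter used in piecewise-linear reconstructions can be written as a three-argument median with zero, which is what keeps the coding simple:

```python
def median3(a, b, c):
    """Middle value of three numbers."""
    return max(min(a, b), min(max(a, b), c))

def minmod_slope(left_diff, right_diff):
    """Monotonicity-preserving slope: the classic minmod limiter,
    written compactly as median3(0, a, b). If the one-sided
    differences disagree in sign (a local extremum), the slope is 0;
    otherwise the smaller-magnitude difference is kept."""
    return median3(0.0, left_diff, right_diff)

def limited_slopes(u):
    """Limited cell slopes for a MUSCL-style piecewise-linear
    reconstruction (boundary cells left at zero slope)."""
    n = len(u)
    s = [0.0] * n
    for i in range(1, n - 1):
        s[i] = minmod_slope(u[i] - u[i - 1], u[i + 1] - u[i])
    return s

# A ramp with flat ends: slopes vanish where the data kinks.
slopes = limited_slopes([0.0, 0.0, 1.0, 2.0, 2.0])
```

The identity minmod(a, b) = median(0, a, b) replaces sign tests and conditional branches with a single expression.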

  4. Fast optimization of static axisymmetric shell structures

    NASA Astrophysics Data System (ADS)

    Jacoby, Jeffrey

    An axisymmetric shell optimization procedure is developed which is a fast, user-friendly and practical tool for design use in disciplines including aerospace, mechanical and civil engineering. The shape and thickness of a shell can be optimized to minimize shell mass, mass/volume ratio or stress with constraints imposed on von Mises stress and local buckling. The procedure was created with the aid of the GENOPT optimization development system (Dr. D. Bushnell, Lockheed Missiles and Space Co) and uses the FAST1 shell analysis program (Prof. C. R. Steele, Stanford University) to perform the constraint analysis. The optimization method used is the modified method of feasible directions. The procedure is fast because exact analysis methods allow complex shells to be modelled with only a few large shell elements and still retain a sufficiently accurate solution. This is of particular advantage near shell boundaries and intersections which can have small regions of very detailed variation in the solution. Finite element methods would require many small elements to accurately capture this detail with a resulting increase in computation time and model complexity. Reducing the complexity of the model also reduces the size of the required input and contributes to the simplicity of the procedure. Optimization design variables are the radial and axial coordinates of nodes and the shape parameters and thicknesses of the elements. Thickness distribution within an element can be optimized by specifying the thickness at evenly spaced control points. Spline interpolation is used to provide a smooth thickness variation between the control points. An effective method is developed for reducing the number of required stress constraint equations. Various shells have been optimized and include models for comparison with published results. Shape, thickness and shape/thickness optimization has been performed on examples including a simple aerobrake, sphere-nozzle intersections, ring

  5. Mirador: A Simple, Fast Search Interface for Remote Sensing Data

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher; Strub, Richard; Seiler, Edward; Joshi, Talak; MacHarrie, Peter

    2008-01-01

    A major challenge for remote sensing science researchers is searching and acquiring relevant data files for their research projects based on content, space and time constraints. Several structured query (SQ) and hierarchical navigation (HN) search interfaces have been developed to satisfy this requirement, yet the dominant search engines in the general domain are based on free-text search. The Goddard Earth Sciences Data and Information Services Center has developed a free-text search interface named Mirador that supports space-time queries, including a gazetteer and geophysical event gazetteer. In order to compensate for a slightly reduced search precision relative to SQ and HN techniques, Mirador uses several search optimizations to return results quickly. The quick response enables a more iterative search strategy than is available with many SQ and HN techniques.

  6. Machine learning scheme for fast extraction of chemically interpretable interatomic potentials

    NASA Astrophysics Data System (ADS)

    Dolgirev, Pavel E.; Kruglov, Ivan A.; Oganov, Artem R.

    2016-08-01

    We present a new method for a fast, unbiased and accurate representation of interatomic interactions. It is a combination of an artificial neural network and our new approach for pair potential reconstruction. The potential reconstruction method is simple and computationally cheap and gives rich information about interactions in crystals. This method can be combined with structure prediction and molecular dynamics simulations, providing accuracy similar to ab initio methods, but at a small fraction of the cost. We present applications to real systems and discuss the insight provided by our method.

  7. SIMPLE: An Introduction.

    ERIC Educational Resources Information Center

    Endres, Frank L.

    Symbolic Interactive Matrix Processing Language (SIMPLE) is a conversational matrix-oriented source language suited to a batch or a time-sharing environment. The two modes of operation of SIMPLE are conversational mode and programing mode. This program uses a TAURUS time-sharing system and cathode ray terminals or teletypes. SIMPLE performs all…

  8. Gompertz kinetics model of fast chemical neurotransmission currents.

    PubMed

    Easton, Dexter M

    2005-10-01

    At a chemical synapse, transmitter molecules ejected from presynaptic terminal(s) bind reversibly with postsynaptic receptors and trigger an increase in channel conductance to specific ions. This paper describes a simple but accurate predictive model for the time course of the synaptic conductance transient, based on Gompertz kinetics. In the model, two simple exponential decay terms set the rates of development and decline of transmitter action. The first, r, triggering conductance activation, is surrogate for the decelerated rate of growth of conductance, G. The second, r', responsible for Y, deactivation of the conductance, is surrogate for the decelerated rate of decline of transmitter action. Therefore, the differential equation for the net conductance change, g, triggered by the transmitter is dg/dt=g(r-r'). The solution of that equation yields the product of G(t), representing activation, and Y(t), which defines the proportional decline (deactivation) of the current. The model fits, over their full-time course, published records of macroscopic ionic current associated with fast chemical transmission. The Gompertz model is a convenient and accurate method for routine analysis and comparison of records of synaptic current and putative transmitter time course. A Gompertz fit requiring only three independent rate constants plus initial current appears indistinguishable from a Markov fit using seven rate constants.
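
A sketch of the model as described, with hypothetical rate constants: if the two rate terms decay exponentially, r(t) = A*exp(-a*t) and r'(t) = B*exp(-b*t), then dg/dt = g*(r - r') integrates in closed form to the product of a rising Gompertz factor G(t) and a declining factor Y(t):

```python
import math

def gompertz_conductance(t, g0, A, a, B, b):
    """Closed-form solution of dg/dt = g * (r - r') with exponentially
    decaying rate terms r(t) = A*exp(-a*t) and r'(t) = B*exp(-b*t):
    g(t) = g0 * G(t) * Y(t), the product of an activating Gompertz
    factor and a deactivating one."""
    G = math.exp((A / a) * (1.0 - math.exp(-a * t)))    # activation
    Y = math.exp(-(B / b) * (1.0 - math.exp(-b * t)))   # deactivation
    return g0 * G * Y

# Hypothetical parameters: activation decays faster than deactivation,
# so the conductance rises quickly, peaks, then declines.
trace = [gompertz_conductance(0.1 * i, 1.0, 8.0, 4.0, 3.0, 1.0) for i in range(60)]
peak = max(trace)
```

Only the three rate constants (plus the initial value) shape the whole transient, which is the economy the abstract contrasts with a seven-rate-constant Markov fit.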

  9. Accurate documentation and wound measurement.

    PubMed

    Hampton, Sylvie

    This article, part 4 in a series on wound management, addresses the sometimes routine yet crucial task of documentation. Clear and accurate records of a wound enable its progress to be determined so the appropriate treatment can be applied. Thorough records mean any practitioner picking up a patient's notes will know when the wound was last checked, how it looked and what dressing and/or treatment was applied, ensuring continuity of care. Documenting every assessment also has legal implications, demonstrating due consideration and care of the patient and the rationale for any treatment carried out. Part 5 in the series discusses wound dressing characteristics and selection.

  10. The Fast Scattering Code (FSC): Validation Studies and Program Guidelines

    NASA Technical Reports Server (NTRS)

    Tinetti, Ana F.; Dunn, Mark H.

    2011-01-01

    The Fast Scattering Code (FSC) is a frequency domain noise prediction program developed at the NASA Langley Research Center (LaRC) to simulate the acoustic field produced by the interaction of known, time harmonic incident sound with bodies of arbitrary shape and surface impedance immersed in a potential flow. The code uses the equivalent source method (ESM) to solve an exterior 3-D Helmholtz boundary value problem (BVP) by expanding the scattered acoustic pressure field into a series of point sources distributed on a fictitious surface placed inside the actual scatterer. This work provides additional code validation studies and illustrates the range of code parameters that produce accurate results with minimal computational costs. Systematic noise prediction studies are presented in which monopole generated incident sound is scattered by simple geometric shapes - spheres (acoustically hard and soft surfaces), oblate spheroids, flat disk, and flat plates with various edge topologies. Comparisons between FSC simulations and analytical results and experimental data are presented.

  11. A simple animal support for convenient weighing

    USGS Publications Warehouse

    Pan, H.P.; Caslick, J.W.; Harke, D.T.; Decker, D.G.

    1965-01-01

    A simple animal support constructed of web belts to hold skittish pigs for weighing was developed. The support is easily made, noninjurious to the pigs, and compact, facilitating rapid, accurate weighing. With minor modifications, the support can probably be used in weighing other animals.

  12. A Fast Liquid Chromatography Tandem Mass Spectrometric Analysis of PETN (Pentaerythritol Tetranitrate), RDX (3,5-Trinitro-1,3,5-triazacyclohexane) and HMX (Octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine) in Soil, Utilizing a Simple Ultrasonic-Assisted Extraction with Minimum Solvent.

    PubMed

    Anilanmert, Beril; Aydin, Muhammet; Apak, Resat; Avci, Gülfidan Yenel; Cengiz, Salih

    2016-01-01

    Direct analyses of explosives in soil using liquid chromatography tandem mass spectrometry (LC-MS/MS) methods are very limited in the literature and require complex procedures or relatively high amounts of solvent. A simple and rapid method was developed for the determination of pentaerythritol tetranitrate (PETN), 3,5-trinitro-1,3,5-triazacyclohexane (RDX) and octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine (HMX), which are among the explosives used in terrorist attacks. A one-step extraction method for 1.00 g soil with 2.00 mL acetonitrile, and an 8-min LC-MS/MS method, was developed. The detection limits for PETN, RDX and HMX were 5.2, 8.5 and 3.4 ng/g and the quantitation limits were 10.0, 24.5 and 6.0 ng/g. The intermediate precisions and Horwitz ratios were between 4.10-13.26% and 0.24-0.98, respectively. This method was applied to model post-blast debris collected from an artificial explosion and to real samples collected after a terrorist attack in Istanbul. The method is easy and fast and requires less solvent than other methods. PMID:27302580

  13. Fast Poisson, Fast Helmholtz and fast linear elastostatic solvers on rectangular parallelepipeds

    SciTech Connect

    Wiegmann, A.

    1999-06-01

    FFT-based fast Poisson and fast Helmholtz solvers on rectangular parallelepipeds for periodic boundary conditions in one, two, and three space dimensions can also be used to solve Dirichlet and Neumann boundary value problems. For non-zero boundary conditions, this is the special, grid-aligned case of jump corrections used in the Explicit Jump Immersed Interface method. Fast elastostatic solvers for periodic boundary conditions in two and three dimensions can also be based on the FFT. From the periodic solvers we derive fast solvers for the new 'normal' boundary conditions and essential boundary conditions on rectangular parallelepipeds. The periodic case allows a simple proof of existence and uniqueness of the solutions to the discretization of normal boundary conditions. Numerical examples demonstrate the efficiency of the fast elastostatic solvers for non-periodic boundary conditions. More importantly, the fast solvers on rectangular parallelepipeds can be used together with the Immersed Interface Method to solve problems on non-rectangular domains with general boundary conditions. Details are reported in the preprint 'The Explicit Jump Immersed Interface Method for 2D Linear Elastostatics' by the author.
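    The periodic FFT Poisson solver mentioned above fits in a few lines: transform the right-hand side, divide by the Laplacian's eigenvalues, and transform back. A minimal 2-D sketch (grid size and manufactured solution are ours, not the paper's):

```python
import numpy as np

def poisson_fft_2d(f):
    """Solve u_xx + u_yy = f with periodic BCs on [0, 2*pi)^2, zero-mean solution."""
    n = f.shape[0]
    k = np.fft.fftfreq(n, d=1.0 / n)           # integer wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    denom = -(kx**2 + ky**2)
    denom[0, 0] = 1.0                          # avoid division by zero at the mean mode
    u_hat = np.fft.fft2(f) / denom
    u_hat[0, 0] = 0.0                          # pin the arbitrary constant: zero mean
    return np.fft.ifft2(u_hat).real

n = 32
x = 2 * np.pi * np.arange(n) / n
X, Y = np.meshgrid(x, x, indexing="ij")
u_exact = np.sin(X) * np.cos(2 * Y)            # manufactured solution
f = -5.0 * u_exact                             # its Laplacian: -(1 + 4) * u
err = np.abs(poisson_fft_2d(f) - u_exact).max()
```

    For band-limited data the solver is spectrally accurate, so the error is at machine precision; the cost is dominated by two FFTs.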

  14. Fast separable nonlocal means

    NASA Astrophysics Data System (ADS)

    Ghosh, Sanjay; Chaudhury, Kunal N.

    2016-03-01

    We propose a simple and fast algorithm called PatchLift for computing distances between patches (contiguous blocks of samples) extracted from a given one-dimensional signal. PatchLift is based on the observation that the patch distances can be efficiently computed from a matrix that is derived from the one-dimensional signal using lifting; importantly, the number of operations required to compute the patch distances using this approach does not scale with the patch length. We next demonstrate how PatchLift can be used for patch-based denoising of images corrupted with Gaussian noise. In particular, we propose a separable formulation of the classical nonlocal means (NLM) algorithm that can be implemented using PatchLift. We demonstrate that the PatchLift-based implementation of separable NLM is a few orders of magnitude faster than standard NLM and is competitive with existing fast implementations of NLM. Moreover, its denoising performance is shown to be consistently superior to that of NLM and some of its variants, both in terms of peak signal-to-noise ratio/structural similarity index and visual quality.
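    The lifting observation can be sketched directly: lift the signal to the matrix E[i, j] = (x[i] - x[j])^2, accumulate running sums along its diagonals, and each patch distance becomes a difference of two table entries, independent of the patch length. This is our reading of the core idea, not the authors' implementation:

```python
import numpy as np

def patch_distances(x, p):
    """All pairwise squared patch distances; per-distance cost independent of p.

    Lift the signal to E[i, j] = (x[i] - x[j])**2, take running sums along the
    diagonals of E, and read each patch distance off as a difference of entries.
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    C = np.zeros((n + 1, n + 1))               # zero-padded for easy indexing
    C[1:, 1:] = (x[:, None] - x[None, :]) ** 2
    for i in range(1, n + 1):                  # cumulative sum along each diagonal
        C[i, 1:] += C[i - 1, :-1]
    m = n - p + 1                              # number of patch start positions
    i = np.arange(m)
    return C[np.ix_(i + p, i + p)] - C[np.ix_(i, i)]

x = np.random.default_rng(0).standard_normal(30)
p = 5
D = patch_distances(x, p)
# brute-force check for one pair of patches
d_direct = float(((x[2:2 + p] - x[7:7 + p]) ** 2).sum())
```

    Building the table costs O(n^2) once; every patch distance afterwards is a single subtraction, regardless of p.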

  15. Fast Image Texture Classification Using Decision Trees

    NASA Technical Reports Server (NTRS)

    Thompson, David R.

    2011-01-01

    Texture analysis would permit improved autonomous, onboard science data interpretation for adaptive navigation, sampling, and downlink decisions. These analyses would assist with terrain analysis and instrument placement in both macroscopic and microscopic image data products. Unfortunately, most state-of-the-art texture analysis demands computationally expensive convolutions of filters involving many floating-point operations. This makes them infeasible for radiation- hardened computers and spaceflight hardware. A new method approximates traditional texture classification of each image pixel with a fast decision-tree classifier. The classifier uses image features derived from simple filtering operations involving integer arithmetic. The texture analysis method is therefore amenable to implementation on FPGA (field-programmable gate array) hardware. Image features based on the "integral image" transform produce descriptive and efficient texture descriptors. Training the decision tree on a set of training data yields a classification scheme that produces reasonable approximations of optimal "texton" analysis at a fraction of the computational cost. A decision-tree learning algorithm employing the traditional k-means criterion of inter-cluster variance is used to learn tree structure from training data. The result is an efficient and accurate summary of surface morphology in images. This work is an evolutionary advance that unites several previous algorithms (k-means clustering, integral images, decision trees) and applies them to a new problem domain (morphology analysis for autonomous science during remote exploration). Advantages include order-of-magnitude improvements in runtime, feasibility for FPGA hardware, and significant improvements in texture classification accuracy.
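    The "integral image" transform underlying the features above is the standard summed-area table: one pass of cumulative sums makes any rectangular box sum a four-lookup, integer-only operation, which is what makes the features cheap enough for FPGA-class hardware. A minimal sketch (image contents are arbitrary test data):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero top row/left column for easy indexing."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] from four table lookups: O(1), integer arithmetic."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(40, 50))
ii = integral_image(img)
s = box_sum(ii, 3, 5, 20, 30)
```

    Box sums at any scale then feed directly into decision-tree split tests without a single floating-point convolution.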

  16. SPLASH: Accurate OH maser positions

    NASA Astrophysics Data System (ADS)

    Walsh, Andrew; Gomez, Jose F.; Jones, Paul; Cunningham, Maria; Green, James; Dawson, Joanne; Ellingsen, Simon; Breen, Shari; Imai, Hiroshi; Lowe, Vicki; Jones, Courtney

    2013-10-01

    The hydroxyl (OH) 18 cm lines are powerful and versatile probes of diffuse molecular gas that may trace a largely unstudied component of the Galactic ISM. SPLASH (the Southern Parkes Large Area Survey in Hydroxyl) is a large, unbiased and fully-sampled survey of OH emission, absorption and masers in the Galactic Plane that will achieve sensitivities an order of magnitude better than previous work. In this proposal, we request ATCA time to follow up OH maser candidates. This will give us accurate (~10") positions of the masers, which can be compared to other maser positions from HOPS, MMB and MALT-45, and will provide full polarisation measurements towards a sample of OH masers that have not been observed in MAGMO.

  17. Accurate thickness measurement of graphene

    NASA Astrophysics Data System (ADS)

    Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.

    2016-03-01

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  18. Accurate thickness measurement of graphene.

    PubMed

    Shearer, Cameron J; Slattery, Ashley D; Stapleton, Andrew J; Shapter, Joseph G; Gibson, Christopher T

    2016-03-29

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  19. Must Kohn-Sham oscillator strengths be accurate at threshold?

    SciTech Connect

    Yang Zenghui; Burke, Kieron; Faassen, Meta van

    2009-09-21

    The exact ground-state Kohn-Sham (KS) potential for the helium atom is known from accurate wave function calculations of the ground-state density. The threshold for photoabsorption from this potential matches the physical system exactly. By carefully studying its absorption spectrum, we show the answer to the title question is no. To address this problem in detail, we generate a highly accurate simple fit of a two-electron spectrum near the threshold, and apply the method to both the experimental spectrum and that of the exact ground-state Kohn-Sham potential.

  20. The importance of accurate atmospheric modeling

    NASA Astrophysics Data System (ADS)

    Payne, Dylan; Schroeder, John; Liang, Pang

    2014-11-01

    This paper will focus on the effect of atmospheric conditions on EO sensor performance using computer models. We have shown the importance of accurately modeling atmospheric effects for predicting the performance of an EO sensor. A simple example will demonstrate how real conditions for several sites in China significantly impact image correction, hyperspectral imaging, and remote sensing. The current state-of-the-art model for computing atmospheric transmission and radiance is MODTRAN® 5, developed by the US Air Force Research Laboratory and Spectral Science, Inc. Research by the US Air Force, Navy and Army resulted in the public release of LOWTRAN 2 in the early 1970s. Subsequent releases of LOWTRAN and MODTRAN® have continued until the present. The paper will demonstrate the importance of using validated models and locally measured meteorological, atmospheric and aerosol conditions to accurately simulate atmospheric transmission and radiance. Frequently, default conditions are used, which can produce errors of as much as 75% in these values. This can have a significant impact on remote sensing applications.

  1. Simple pulmonary eosinophilia

    MedlinePlus

    Pulmonary infiltrates with eosinophilia; Loffler syndrome; Eosinophilic pneumonia; Pneumonia - eosinophilic ... simple pulmonary eosinophilia is a severe type of pneumonia called acute idiopathic eosinophilic pneumonia.

  2. No Generalization of Practice for Nonzero Simple Addition

    ERIC Educational Resources Information Center

    Campbell, Jamie I. D.; Beech, Leah C.

    2014-01-01

    Several types of converging evidence have suggested recently that skilled adults solve very simple addition problems (e.g., 2 + 1, 4 + 2) using a fast, unconscious counting algorithm. These results stand in opposition to the long-held assumption in the cognitive arithmetic literature that such simple addition problems normally are solved by fact…

  3. Operator Priming and Generalization of Practice in Adults' Simple Arithmetic

    ERIC Educational Resources Information Center

    Chen, Yalin; Campbell, Jamie I. D.

    2016-01-01

    There is a renewed debate about whether educated adults solve simple addition problems (e.g., 2 + 3) by direct fact retrieval or by fast, automatic counting-based procedures. Recent research testing adults' simple addition and multiplication showed that a 150-ms preview of the operator (+ or ×) facilitated addition, but not multiplication,…

  4. Fast Fuzzy Arithmetic Operations

    NASA Technical Reports Server (NTRS)

    Hampton, Michael; Kosheleva, Olga

    1997-01-01

    In engineering applications of fuzzy logic, the main goal is not to simulate the way the experts really think, but to come up with a good engineering solution that would (ideally) be better than the expert's control. In such applications, it makes perfect sense to restrict ourselves to simplified approximate expressions for membership functions. If we need to perform arithmetic operations with the resulting fuzzy numbers, then we can use the simple and fast algorithms that are known for operations with simple membership functions. In other applications, especially ones related to the humanities, simulating experts is one of the main goals. In such applications, we must use membership functions that capture every nuance of the expert's opinion; these functions are therefore complicated, and fuzzy arithmetic operations with the corresponding fuzzy numbers become a computational problem. In this paper, we design a new algorithm for performing such operations. This algorithm is applicable in the case when the negative logarithms -log(u(x)) of the membership functions u(x) are convex, and it reduces the computation time from O(n^2) to O(n log(n)) (where n is the number of points x at which we know the membership functions u(x)).
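    The "simple and fast algorithms known for simple membership functions" mentioned above can be illustrated with triangular fuzzy numbers, where addition is exact and O(1). This sketch covers only that easy case, not the paper's O(n log n) algorithm for general log-convex membership functions:

```python
# A triangular fuzzy number is a triple (a, b, c): membership rises linearly
# from 0 at a to 1 at b, then falls linearly back to 0 at c.

def tri_add(u, v):
    """Sum of two triangular fuzzy numbers is triangular, component-wise."""
    return (u[0] + v[0], u[1] + v[1], u[2] + v[2])

def alpha_cut(u, alpha):
    """Interval of values with membership >= alpha, for 0 < alpha <= 1."""
    a, b, c = u
    return (a + alpha * (b - a), c - alpha * (c - b))

s = tri_add((1.0, 2.0, 3.0), (2.0, 3.0, 5.0))      # -> (3.0, 5.0, 8.0)
cut = alpha_cut((1.0, 2.0, 3.0), 0.5)              # -> (1.5, 2.5)
```

    For arbitrary membership functions no such closed form exists, which is exactly the computational problem the paper addresses.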

  5. Simple Machine Junk Cars

    ERIC Educational Resources Information Center

    Herald, Christine

    2010-01-01

    During the month of May, the author's eighth-grade physical science students study the six simple machines through hands-on activities, reading assignments, videos, and notes. At the end of the month, they can easily identify the six types of simple machine: inclined plane, wheel and axle, pulley, screw, wedge, and lever. To conclude this unit,…

  6. Simple, Internally Adjustable Valve

    NASA Technical Reports Server (NTRS)

    Burley, Richard K.

    1990-01-01

    Valve containing simple in-line, adjustable, flow-control orifice made from ordinary plumbing fitting and two allen setscrews. Construction of valve requires only simple drilling, tapping, and grinding. Orifice installed in existing fitting, avoiding changes in rest of plumbing.

  7. A Simple "Tubeless" Telescope

    ERIC Educational Resources Information Center

    Straulino, S.; Bonechi, L.

    2010-01-01

    Two lenses make it possible to create a simple telescope with quite large magnification. The set-up is very simple and can be reproduced in schools, provided the laboratory has a range of lenses with different focal lengths. In this article, the authors adopt the Keplerian configuration, which is composed of two converging lenses. This instrument,…

  8. Simple parametrization of fragment reduced widths in heavy ion collisions.

    PubMed

    Tripathi, R K; Townsend, L W

    1994-04-01

    A systematic analysis of the observed reduced widths obtained in relativistic heavy ion fragmentation reactions is used to develop a phenomenological parametrization of these data. The parametrization is simple, accurate, and completely general in applicability.

  9. Primitive layered gabbros from fast-spreading lower oceanic crust

    NASA Astrophysics Data System (ADS)

    Gillis, Kathryn M.; Snow, Jonathan E.; Klaus, Adam; Abe, Natsue; Adrião, Álden B.; Akizawa, Norikatsu; Ceuleneer, Georges; Cheadle, Michael J.; Faak, Kathrin; Falloon, Trevor J.; Friedman, Sarah A.; Godard, Marguerite; Guerin, Gilles; Harigane, Yumiko; Horst, Andrew J.; Hoshide, Takashi; Ildefonse, Benoit; Jean, Marlon M.; John, Barbara E.; Koepke, Juergen; Machi, Sumiaki; Maeda, Jinichiro; Marks, Naomi E.; McCaig, Andrew M.; Meyer, Romain; Morris, Antony; Nozaka, Toshio; Python, Marie; Saha, Abhishek; Wintsch, Robert P.

    2014-01-01

    Three-quarters of the oceanic crust formed at fast-spreading ridges is composed of plutonic rocks whose mineral assemblages, textures and compositions record the history of melt transport and crystallization between the mantle and the sea floor. Despite the importance of these rocks, sampling them in situ is extremely challenging owing to the overlying dykes and lavas. This means that models for understanding the formation of the lower crust are based largely on geophysical studies and ancient analogues (ophiolites) that did not form at typical mid-ocean ridges. Here we describe cored intervals of primitive, modally layered gabbroic rocks from the lower plutonic crust formed at a fast-spreading ridge, sampled by the Integrated Ocean Drilling Program at the Hess Deep rift. Centimetre-scale, modally layered rocks, some of which have a strong layering-parallel foliation, confirm a long-held belief that such rocks are a key constituent of the lower oceanic crust formed at fast-spreading ridges. Geochemical analysis of these primitive lower plutonic rocks--in combination with previous geochemical data for shallow-level plutonic rocks, sheeted dykes and lavas--provides the most completely constrained estimate of the bulk composition of fast-spreading oceanic crust so far. Simple crystallization models using this bulk crustal composition as the parental melt accurately predict the bulk composition of both the lavas and the plutonic rocks. However, the recovered plutonic rocks show early crystallization of orthopyroxene, which is not predicted by current models of melt extraction from the mantle and mid-ocean-ridge basalt differentiation. The simplest explanation of this observation is that compositionally diverse melts are extracted from the mantle and partly crystallize before mixing to produce the more homogeneous magmas that erupt.

  10. Primitive layered gabbros from fast-spreading lower oceanic crust.

    PubMed

    Gillis, Kathryn M; Snow, Jonathan E; Klaus, Adam; Abe, Natsue; Adrião, Alden B; Akizawa, Norikatsu; Ceuleneer, Georges; Cheadle, Michael J; Faak, Kathrin; Falloon, Trevor J; Friedman, Sarah A; Godard, Marguerite; Guerin, Gilles; Harigane, Yumiko; Horst, Andrew J; Hoshide, Takashi; Ildefonse, Benoit; Jean, Marlon M; John, Barbara E; Koepke, Juergen; Machi, Sumiaki; Maeda, Jinichiro; Marks, Naomi E; McCaig, Andrew M; Meyer, Romain; Morris, Antony; Nozaka, Toshio; Python, Marie; Saha, Abhishek; Wintsch, Robert P

    2014-01-01

    Three-quarters of the oceanic crust formed at fast-spreading ridges is composed of plutonic rocks whose mineral assemblages, textures and compositions record the history of melt transport and crystallization between the mantle and the sea floor. Despite the importance of these rocks, sampling them in situ is extremely challenging owing to the overlying dykes and lavas. This means that models for understanding the formation of the lower crust are based largely on geophysical studies and ancient analogues (ophiolites) that did not form at typical mid-ocean ridges. Here we describe cored intervals of primitive, modally layered gabbroic rocks from the lower plutonic crust formed at a fast-spreading ridge, sampled by the Integrated Ocean Drilling Program at the Hess Deep rift. Centimetre-scale, modally layered rocks, some of which have a strong layering-parallel foliation, confirm a long-held belief that such rocks are a key constituent of the lower oceanic crust formed at fast-spreading ridges. Geochemical analysis of these primitive lower plutonic rocks--in combination with previous geochemical data for shallow-level plutonic rocks, sheeted dykes and lavas--provides the most completely constrained estimate of the bulk composition of fast-spreading oceanic crust so far. Simple crystallization models using this bulk crustal composition as the parental melt accurately predict the bulk composition of both the lavas and the plutonic rocks. However, the recovered plutonic rocks show early crystallization of orthopyroxene, which is not predicted by current models of melt extraction from the mantle and mid-ocean-ridge basalt differentiation. The simplest explanation of this observation is that compositionally diverse melts are extracted from the mantle and partly crystallize before mixing to produce the more homogeneous magmas that erupt.

  11. Accurate free energy calculation along optimized paths.

    PubMed

    Chen, Changjun; Xiao, Yi

    2010-05-01

    The path-based methods of free energy calculation, such as thermodynamic integration and free energy perturbation, are simple in theory, but difficult in practice because in most cases smooth paths do not exist, especially for large molecules. In this article, we present a novel method to build the transition path of a peptide. We use harmonic potentials to restrain its nonhydrogen atom dihedrals in the initial state and set the equilibrium angles of the potentials as those in the final state. Through a series of steps of geometrical optimization, we can construct a smooth and short path from the initial state to the final state. This path can be used to calculate free energy difference. To validate this method, we apply it to a small 10-ALA peptide and find that the calculated free energy changes in helix-helix and helix-hairpin transitions are both self-convergent and cross-convergent. We also calculate the free energy differences between different stable states of beta-hairpin trpzip2, and the results show that this method is more efficient than the conventional molecular dynamics method in accurate free energy calculation.
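    The thermodynamic integration that this work builds on computes ΔF as the integral over λ of the ensemble average ⟨∂U/∂λ⟩ along the path. A toy sketch under stated assumptions: for a harmonic alchemical path U_λ(x) = ½ k(λ) x², the average ⟨x²⟩ = 1/(βk(λ)) is known analytically, so we replace the MD averages with their exact values and keep only the numerical quadrature over λ. This is not the authors' path-optimization method, just the TI formula it feeds into:

```python
import numpy as np

# Path: k(lam) = k0 + lam*(k1 - k0); dU/dlam = 0.5*(k1 - k0)*x^2, so its
# ensemble average is 0.5*(k1 - k0)/(beta*k(lam)).  Exact result for
# comparison: Delta F = (1/(2*beta)) * ln(k1/k0), from Z = sqrt(2*pi/(beta*k)).
beta, k0, k1 = 1.0, 1.0, 4.0
lam = np.linspace(0.0, 1.0, 201)
dU_dlam = 0.5 * (k1 - k0) / (beta * (k0 + lam * (k1 - k0)))

# trapezoidal quadrature over lambda
dF = 0.5 * np.sum((dU_dlam[1:] + dU_dlam[:-1]) * np.diff(lam))
dF_exact = 0.5 * np.log(k1 / k0) / beta
```

    In a real calculation each ⟨∂U/∂λ⟩ point comes from a converged simulation at fixed λ, which is why a smooth, short path matters so much.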

  12. A simple method for simulating gasdynamic systems

    NASA Technical Reports Server (NTRS)

    Hartley, Tom T.

    1991-01-01

    A simple method for performing digital simulation of gasdynamic systems is presented. The approach is somewhat intuitive and requires some knowledge of the physics of the problem as well as an understanding of finite difference theory. The method is shown explicitly in appendix A, which is taken from the book by P.J. Roache, 'Computational Fluid Dynamics,' Hermosa Publishers, 1982. The resulting method is relatively fast, though it sacrifices some accuracy.
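    The flavor of such fast-but-diffusive finite-difference simulation can be shown with the classic Lax scheme on the 1D linear advection equation u_t + a u_x = 0, standing in for the gasdynamic equations (this is a generic illustration, not the appendix-A method):

```python
import numpy as np

n, a, cfl = 200, 1.0, 0.8
dx = 1.0 / n
dt = cfl * dx / a                              # stable for cfl <= 1
x = np.arange(n) * dx
u = np.exp(-200.0 * (x - 0.5) ** 2)            # initial Gaussian pulse
mass0 = u.sum()

for _ in range(int(round(1.0 / (a * dt)))):    # advect one full period, periodic BCs
    up = np.roll(u, -1)                        # u[i+1]
    um = np.roll(u, 1)                         # u[i-1]
    # Lax update: average neighbors, then upwind-like correction
    u = 0.5 * (up + um) - 0.5 * cfl * (up - um)

mass = u.sum()
```

    The scheme conserves total mass exactly and stays stable, but the pulse visibly smears after one period: exactly the "fast while sacrificing some accuracy" trade-off described above.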

  13. Simple Bond Cleavage

    SciTech Connect

    Gary S. Groenewold

    2005-08-01

    Simple bond cleavage is a class of fragmentation reactions in which a single bond is broken, without formation of new bonds between previously unconnected atoms. Because no bond making is involved, simple bond cleavages are endothermic, and activation energies are generally higher than for rearrangement eliminations. The rate of simple bond cleavage reactions is a strong function of the internal energy of the molecular ion, which reflects a loose transition state that resembles reaction products, and has a high density of accessible states. For this reason, simple bond cleavages tend to dominate fragmentation reactions for highly energized molecular ions. Simple bond cleavages have negligible reverse activation energy, and hence they are used as valuable probes of ion thermochemistry, since the energy dependence of the reactions can be related to the bond energy. In organic mass spectrometry, simple bond cleavages of odd electron ions can be either homolytic or heterolytic, depending on whether the fragmentation is driven by the radical site or the charge site. Simple bond cleavages of even electron ions tend to be heterolytic, producing even electron product ions and neutrals.

  14. A CFD-based wind solver for a fast response transport and dispersion model

    SciTech Connect

    Gowardhan, Akshay A; Brown, Michael J; Pardyjak, Eric R; Senocak, Inanc

    2010-01-01

    In many cities, ambient air quality is deteriorating, leading to concerns about the health of city inhabitants. In urban areas with narrow streets surrounded by clusters of tall buildings, called street canyons, air pollution from traffic emissions and other sources is difficult to disperse and may accumulate, resulting in high pollutant concentrations. For various situations, including the evacuation of populated areas in the event of an accidental or deliberate release of chemical, biological and radiological agents, it is important to develop models that produce urban flow fields quickly. For these reasons it has become important to predict the flow field in urban street canyons. Various computational techniques have been used to calculate these flow fields, but they are often computationally intensive. Most fast response models currently in use are at a disadvantage in these cases, as they are unable to correlate highly heterogeneous urban structures with the diagnostic parameterizations on which they are based. In this paper, a fast and reasonably accurate computational fluid dynamics (CFD) technique that solves the Navier-Stokes equations for complex urban areas has been developed, called QUIC-CFD (Q-CFD). This technique represents an intermediate balance between fast (on the order of minutes for a several-block problem) and reasonably accurate solutions. The paper details the solution procedure and validates this model for various simple and complex urban geometries.

  15. A Simple Raman Spectrometer.

    ERIC Educational Resources Information Center

    Blond, J. P.; Boggett, D. M.

    1980-01-01

    Discusses some basic physical ideas about light scattering and describes a simple Raman spectrometer built from a single-prism monochromator and a photomultiplier detector. This discussion is intended for British undergraduate physics students. (HM)

  16. Early Childhood: Simple Science.

    ERIC Educational Resources Information Center

    Jones, Clare B.; Shafer, Kathryn E.

    1987-01-01

    Encourages teachers to take advantage of the natural curiosity of young children in enhancing their interest in science. Describes four simple activities involving water, living and non-living things, air pollution, and food. (TW)

  17. Simple Machines Simply Put.

    ERIC Educational Resources Information Center

    Kirkwood, James J.

    1994-01-01

    Students explore the workings of the lever, wheel and axle, and the inclined plane as they build simple toys--a bulldozer and a road grader. The project takes four weeks. Diagrams and procedures are included. (PR)

  18. A Simple Water Channel

    ERIC Educational Resources Information Center

    White, A. S.

    1976-01-01

    Describes a simple water channel, for use with an overhead projector. It is run from a water tap and may be used for flow visualization experiments, including the effect of streamlining and elementary building aerodynamics. (MLH)

  19. Simple Ontology Format (SOFT)

    SciTech Connect

    Sorokine, Alexandre

    2011-10-01

    Simple Ontology Format (SOFT) library and file format specification provides a set of simple tools for developing and maintaining ontologies. The library, implemented as a Perl module, supports parsing and verification of files in SOFT format, operations with ontologies (adding, removing, or filtering of entities), and conversion of ontologies into other formats. SOFT allows users to quickly create ontologies using only a basic text editor, verify them, and portray them in a graph layout system using customized styles.

  20. Fast batch injection analysis system for on-site determination of ethanol in gasohol and fuel ethanol.

    PubMed

    Pereira, Polyana F; Marra, Mariana C; Munoz, Rodrigo A A; Richter, Eduardo M

    2012-02-15

    A simple, accurate and fast (180 injections h(-1)) batch injection analysis (BIA) system with multiple-pulse amperometric detection has been developed for the selective determination of ethanol in gasohol and fuel ethanol. A sample aliquot (100 μL) was injected directly onto a gold electrode immersed in 0.5 mol L(-1) NaOH solution (the only reagent). The proposed BIA method requires minimal sample manipulation and can easily be used for on-site analysis. The results obtained with the BIA method were compared with those obtained by gas chromatography, and similar results were obtained (at the 95% confidence level).

  1. A fast and explicit algorithm for simulating the dynamics of small dust grains with smoothed particle hydrodynamics

    NASA Astrophysics Data System (ADS)

    Price, Daniel J.; Laibe, Guillaume

    2015-07-01

    We describe a simple method for simulating the dynamics of small grains in a dusty gas, relevant to micron-sized grains in the interstellar medium and grains of centimetre size and smaller in protoplanetary discs. The method involves solving one extra diffusion equation for the dust fraction in addition to the usual equations of hydrodynamics. This `diffusion approximation for dust' is valid when the dust stopping time is smaller than the computational timestep. We present a numerical implementation using smoothed particle hydrodynamics that is conservative, accurate and fast. It does not require any implicit timestepping and can be straightforwardly ported into existing 3D codes.
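    The "one extra diffusion equation for the dust fraction" reduces, in the simplest discrete setting, to an explicit diffusion update alongside the hydrodynamic step. The sketch below shows only that generic pattern on a fixed 1D grid with a constant coefficient D standing in for the stopping-time-dependent coefficient of the actual SPH scheme; it is not the paper's formulation:

```python
import numpy as np

n, D = 100, 1e-3
dx = 1.0 / n
dt = 0.4 * dx * dx / D                          # explicit stability: dt <= dx^2 / (2D)
eps = np.zeros(n)
eps[40:60] = 0.01                               # initial dust-fraction bump

total0 = eps.sum()
for _ in range(200):                            # diffuse the dust fraction, periodic BCs
    lap = np.roll(eps, -1) - 2 * eps + np.roll(eps, 1)
    eps = eps + dt * D * lap / dx**2
total = eps.sum()
```

    The update conserves the total dust fraction exactly and, at a stable timestep, keeps it non-negative, two properties the conservative SPH implementation also guarantees.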

  2. Simple heuristics in over-the-counter drug choices: a new hint for medical education and practice

    PubMed Central

    Riva, Silvia; Monti, Marco; Antonietti, Alessandro

    2011-01-01

    Introduction Over-the-counter (OTC) drugs are widely available and often purchased by consumers without advice from a health care provider. Many people rely on self-management of medications to treat common medical conditions. Although OTC medications are regulated by national and international health and drug administrations, many people are unaware of proper dosing, side effects, adverse drug reactions, and possible medication interactions. Purpose This study examined how subjects decide which OTC drug to select, evaluating the role of cognitive heuristics, the simple and adaptive rules that support people's everyday decision making. Subjects and methods By analyzing 70 subjects' information-search and decision-making behavior when selecting OTC drugs, we examined the heuristics they applied in order to assess whether simple decision-making processes were also accurate and relevant. Subjects were tested with a sequence of two experimental tests based on a computerized Java system devised to analyze participants' choices in a virtual environment. Results We found that subjects' information-search behavior reflected the use of fast and frugal heuristics. In addition, although the heuristics that correctly predicted subjects' decisions implied significantly fewer cues on average than the subjects used in the information-search task, they were accurate in describing the order of information search. A simple combination of a fast and frugal tree and a tallying rule predicted more than 78% of subjects' decisions. Conclusion The current emphasis in health care is to shift some responsibility onto the consumer through the expansion of self-medication. Knowing which cognitive mechanisms underlie the choice of OTC drugs is becoming a relevant aim of current medical education. These findings have implications both for the validity of simple heuristics describing information searches in the field of OTC drug choices and
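    The two heuristic families the study combines are easy to make concrete. A fast-and-frugal tree checks one cue per level and can exit at each level; a tallying rule counts positive cues without weighting them. The cue names and thresholds below are hypothetical illustrations, not the cues used in the study:

```python
def fft_choose(symptom_matches, price_ok, brand_known):
    """Fast-and-frugal tree: each level inspects one cue and may exit immediately."""
    if not symptom_matches:
        return "reject"          # exit 1: wrong indication
    if not price_ok:
        return "reject"          # exit 2: too expensive
    return "buy" if brand_known else "reject"   # final level decides

def tally_choose(cues, threshold=2):
    """Tallying: buy when at least `threshold` cues are positive; no weights."""
    return "buy" if sum(cues) >= threshold else "reject"

d1 = fft_choose(True, True, True)           # -> "buy"
d2 = fft_choose(True, False, True)          # -> "reject" (exits at level 2)
d3 = tally_choose([True, False, True])      # -> "buy" (2 of 3 cues positive)
```

    Both rules ignore most of the available information, which is exactly why they model the subjects' frugal information searches so well.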

  3. A simple and accurate equation of state for two-dimensional hard-body fluids

    NASA Astrophysics Data System (ADS)

    Maeso, M. J.; Solana, J. R.

    1995-06-01

    A model relating the equation of state of two-dimensional linear hard-body fluids to the equation of state of the hard disk fluid is derived from the pressure equation in a similar way to that previously described for three-dimensional hard-body fluids. The equation of state reproduces simulation data practically within their accuracy for fluids with a great variety of molecular shapes.

  4. A simple and reliable sensor for accurate measurement of angular speed for low speed rotating machinery

    NASA Astrophysics Data System (ADS)

    Kuosheng, Jiang; Guanghua, Xu; Tangfei, Tao; Lin, Liang; Yi, Wang; Sicong, Zhang; Ailing, Luo

    2014-01-01

    This paper presents the theory and implementation of a novel sensor system for measuring the angular speed (AS) of a shaft rotating at a very low speed range, nearly zero speed. The sensor system consists mainly of an eccentric sleeve rotating with the shaft whose angular speed is to be measured, and an eddy current displacement sensor that obtains the profile of the sleeve for AS calculation. When the shaft rotates at constant speed, the profile is a pure sinusoidal trace. However, when the shaft speed varies, the profile becomes a phase-modulated signal. By applying a demodulating procedure, the AS can be obtained in a straightforward manner. The sensor system was validated experimentally on a gearbox test rig, and the results show that the AS obtained is consistent with that obtained by a conventional encoder. However, the new sensor gives very smooth and stable traces of the AS, demonstrating its higher accuracy and reliability in obtaining the AS of low-speed operations with speed-up and slow-down transients. In addition, the experiments also show that the sensor is easy and cost-effective to realise in different applications such as condition monitoring and process control.
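    The demodulation step described above can be sketched as follows. This is an illustrative reconstruction, not the authors' algorithm: the sampling rate, speed profile, and the use of an FFT-based analytic signal are all assumptions.

```python
import numpy as np

def analytic_signal(x):
    # Analytic signal via the FFT (same construction as scipy.signal.hilbert):
    # zero the negative frequencies and double the positive ones.
    n = x.size
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(np.fft.fft(x) * h)

fs = 2000.0                       # sampling rate, Hz (illustrative)
t = np.arange(0, 4.0, 1 / fs)
# True angular speed: 10 rev/s with a slow fluctuation
omega = 2 * np.pi * (10.0 + 1.0 * np.sin(2 * np.pi * 0.5 * t))
theta = np.cumsum(omega) / fs     # integrate speed -> shaft angle
profile = 0.1 * np.cos(theta)     # eccentric-sleeve trace seen by the sensor

# Demodulate: the instantaneous phase of the analytic signal recovers the
# shaft angle, and its time derivative recovers the angular speed.
phase = np.unwrap(np.angle(analytic_signal(profile)))
omega_est = np.gradient(phase, 1 / fs)

# Away from the record ends the estimate tracks the true speed closely.
err = np.abs(omega_est[500:-500] - omega[500:-500]).mean()
print(err < 1.0)
```

    The same pipeline handles a constant-speed shaft as the special case of an unmodulated sinusoid, whose demodulated phase slope is simply constant.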

  5. A simple and reliable sensor for accurate measurement of angular speed for low speed rotating machinery.

    PubMed

    Kuosheng, Jiang; Guanghua, Xu; Tangfei, Tao; Lin, Liang; Yi, Wang; Sicong, Zhang; Ailing, Luo

    2014-01-01

    This paper presents the theory and implementation of a novel sensor system for measuring the angular speed (AS) of a shaft rotating at a very low speed range, nearly zero speed. The sensor system consists mainly of an eccentric sleeve rotating with the shaft whose angular speed is to be measured, and an eddy current displacement sensor that obtains the profile of the sleeve for AS calculation. When the shaft rotates at constant speed, the profile is a pure sinusoidal trace. However, when the shaft speed varies, the profile becomes a phase-modulated signal. By applying a demodulating procedure, the AS can be obtained in a straightforward manner. The sensor system was validated experimentally on a gearbox test rig, and the results show that the AS obtained is consistent with that obtained by a conventional encoder. However, the new sensor gives very smooth and stable traces of the AS, demonstrating its higher accuracy and reliability in obtaining the AS of low-speed operations with speed-up and slow-down transients. In addition, the experiments also show that the sensor is easy and cost-effective to realise in different applications such as condition monitoring and process control.

  6. Bronchoalveolar lavage cell differential on microscope glass cover. A simple and accurate technique

    SciTech Connect

    Laviolette, M.; Carreau, M.; Coulombe, R.

    1988-08-01

    We describe a quick and easy technique to perform cell differentials on bronchoalveolar lavage: the microscope glass cover. Lavage fluids of 72 subjects were analyzed by 3 techniques: glass cover, filter, and cytocentrifuge preparations. Seventy-seven other lavages were analyzed by glass cover and cytocentrifuge preparations alone. Data for the 72 subjects studied by all 3 techniques showed that the cell counts on glass cover and filter preparations were similar, e.g., lymphocytes, 19.2% (range, 0.5 to 94%) and 20.9% (range, 3 to 95%), respectively (Spearman's correlation coefficient, 0.98). However, on cytocentrifuge preparations, lymphocyte counts were lower (8.3%; range, 0 to 87%) and macrophage counts were higher (p < 0.005). Comparison of glass cover and cytocentrifuge preparation mixtures with varying amounts (20 to 80%) of purified blood leukocytes labeled with 51Cr (≥ 72% lymphocytes) showed that a significant amount of radioactive cells was lost during the cytocentrifuge technique in contrast to the glass cover technique. Because neutrophils represented a low proportion of lavage cells, we also evaluated cell suspensions with known neutrophil contents (10 to 70%); we found no difference in neutrophil counts obtained with the 3 techniques. Lavage data analysis of 40 young nonsmoking volunteers showed that glass cover lymphocyte count was also higher than counts on cytocentrifuge preparations: 16.5% (range, 3 to 45%) and 8.2% (range, 2.5 to 35%), respectively. In this group, the distribution of glass cover lymphocyte percentages was normal (p = 0.21, chi-square test), and the one-tailed 95% confidence interval was 18.6 to 34.7% (mean + 1.65 standard deviations).

  7. Chromatic Information and Feature Detection in Fast Visual Analysis

    PubMed Central

    Del Viva, Maria M.; Punzi, Giovanni; Shevell, Steven K.

    2016-01-01

    The visual system is able to recognize a scene based on a sketch made of very simple features. This ability is likely crucial for survival, when fast image recognition is necessary, and it is believed that a primal sketch is extracted very early in visual processing. Such highly simplified representations can be sufficient for accurate object discrimination, but an open question is the role played by color in this process. Rich color information is available in natural scenes, yet artists' sketches are usually monochromatic, and black-and-white movies provide compelling representations of real-world scenes. Also, the contrast sensitivity of color is low at fine spatial scales. We approach the question from the perspective of optimal information processing by a system endowed with limited computational resources. We show that when such limitations are taken into account, the intrinsic statistical properties of natural scenes imply that the most effective strategy is to ignore fine-scale color features and devote most of the bandwidth to gray-scale information. We find confirmation of these information-based predictions from psychophysical measurements of fast-viewing discrimination of natural scenes. We conclude that the lack of colored features in our visual representation, and our overall low sensitivity to high-frequency color components, are a consequence of an adaptation process, optimizing the size and power consumption of our brain for the visual world we live in. PMID:27478891

  8. Fast imaging of live organisms with sculpted light sheets.

    PubMed

    Chmielewski, Aleksander K; Kyrsting, Anders; Mahou, Pierre; Wayland, Matthew T; Muresan, Leila; Evers, Jan Felix; Kaminski, Clemens F

    2015-04-20

    Light-sheet microscopy is an increasingly popular technique in the life sciences due to its fast 3D imaging capability for fluorescent samples with low phototoxicity compared to confocal methods. In this work we present a new, fast, flexible, and simple-to-implement method to optimize the illumination light-sheet to the requirement at hand. A telescope composed of two electrically tuneable lenses enables us to define the thickness and position of the light-sheet independently but accurately within milliseconds, and therefore to optimize the image quality of the features of interest interactively. We demonstrated the practical benefit of this technique by 1) assembling large fields of view from tiled single exposures, each with individually optimized illumination settings; and 2) sculpting the light-sheet to trace complex sample shapes within single exposures. This technique proved compatible with confocal line-scanning detection, further improving image contrast and resolution. Finally, we determined the effect of light-sheet optimization in the context of scattering tissue, devising procedures for balancing image quality, field of view, and acquisition speed.

  9. Fast imaging of live organisms with sculpted light sheets

    NASA Astrophysics Data System (ADS)

    Chmielewski, Aleksander K.; Kyrsting, Anders; Mahou, Pierre; Wayland, Matthew T.; Muresan, Leila; Evers, Jan Felix; Kaminski, Clemens F.

    2015-04-01

    Light-sheet microscopy is an increasingly popular technique in the life sciences due to its fast 3D imaging capability for fluorescent samples with low phototoxicity compared to confocal methods. In this work we present a new, fast, flexible, and simple-to-implement method to optimize the illumination light-sheet to the requirement at hand. A telescope composed of two electrically tuneable lenses enables us to define the thickness and position of the light-sheet independently but accurately within milliseconds, and therefore to optimize the image quality of the features of interest interactively. We demonstrated the practical benefit of this technique by 1) assembling large fields of view from tiled single exposures, each with individually optimized illumination settings; and 2) sculpting the light-sheet to trace complex sample shapes within single exposures. This technique proved compatible with confocal line-scanning detection, further improving image contrast and resolution. Finally, we determined the effect of light-sheet optimization in the context of scattering tissue, devising procedures for balancing image quality, field of view, and acquisition speed.

  10. Chromatic Information and Feature Detection in Fast Visual Analysis.

    PubMed

    Del Viva, Maria M; Punzi, Giovanni; Shevell, Steven K

    2016-01-01

    The visual system is able to recognize a scene based on a sketch made of very simple features. This ability is likely crucial for survival, when fast image recognition is necessary, and it is believed that a primal sketch is extracted very early in visual processing. Such highly simplified representations can be sufficient for accurate object discrimination, but an open question is the role played by color in this process. Rich color information is available in natural scenes, yet artists' sketches are usually monochromatic, and black-and-white movies provide compelling representations of real-world scenes. Also, the contrast sensitivity of color is low at fine spatial scales. We approach the question from the perspective of optimal information processing by a system endowed with limited computational resources. We show that when such limitations are taken into account, the intrinsic statistical properties of natural scenes imply that the most effective strategy is to ignore fine-scale color features and devote most of the bandwidth to gray-scale information. We find confirmation of these information-based predictions from psychophysical measurements of fast-viewing discrimination of natural scenes. We conclude that the lack of colored features in our visual representation, and our overall low sensitivity to high-frequency color components, are a consequence of an adaptation process, optimizing the size and power consumption of our brain for the visual world we live in. PMID:27478891

  11. Fast imaging of live organisms with sculpted light sheets

    PubMed Central

    Chmielewski, Aleksander K.; Kyrsting, Anders; Mahou, Pierre; Wayland, Matthew T.; Muresan, Leila; Evers, Jan Felix; Kaminski, Clemens F.

    2015-01-01

    Light-sheet microscopy is an increasingly popular technique in the life sciences due to its fast 3D imaging capability for fluorescent samples with low phototoxicity compared to confocal methods. In this work we present a new, fast, flexible, and simple-to-implement method to optimize the illumination light-sheet to the requirement at hand. A telescope composed of two electrically tuneable lenses enables us to define the thickness and position of the light-sheet independently but accurately within milliseconds, and therefore to optimize the image quality of the features of interest interactively. We demonstrated the practical benefit of this technique by 1) assembling large fields of view from tiled single exposures, each with individually optimized illumination settings; and 2) sculpting the light-sheet to trace complex sample shapes within single exposures. This technique proved compatible with confocal line-scanning detection, further improving image contrast and resolution. Finally, we determined the effect of light-sheet optimization in the context of scattering tissue, devising procedures for balancing image quality, field of view, and acquisition speed. PMID:25893952

  12. Efficiency of current drive by fast waves

    SciTech Connect

    Karney, C.F.F.; Fisch, N.J.

    1984-08-01

    The Rosenbluth form for the collision operator for a weakly relativistic plasma is derived. The formalism adopted by Antonsen and Chu can then be used to calculate the efficiency of current drive by fast waves in a relativistic plasma. Accurate numerical results and analytic asymptotic limits for the efficiencies are given.

  13. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  14. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  15. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2011-07-01 2011-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  16. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2014-07-01 2014-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  17. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2012-07-01 2012-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  18. Strategy as simple rules.

    PubMed

    Eisenhardt, K M; Sull, D N

    2001-01-01

    The success of Yahoo!, eBay, Enron, and other companies that have become adept at morphing to meet the demands of changing markets can't be explained using traditional thinking about competitive strategy. These companies have succeeded by pursuing constantly evolving strategies in market spaces that were considered unattractive according to traditional measures. In this article--the third in an HBR series by Kathleen Eisenhardt and Donald Sull on strategy in the new economy--the authors ask, what are the sources of competitive advantage in high-velocity markets? The secret, they say, is strategy as simple rules. The companies know that the greatest opportunities for competitive advantage lie in market confusion, but they recognize the need for a few crucial strategic processes and a few simple rules. In traditional strategy, advantage comes from exploiting resources or stable market positions. In strategy as simple rules, advantage comes from successfully seizing fleeting opportunities. Key strategic processes, such as product innovation, partnering, or spinout creation, place the company where the flow of opportunities is greatest. Simple rules then provide the guidelines within which managers can pursue such opportunities. Simple rules, which grow out of experience, fall into five broad categories: how-to rules, boundary conditions, priority rules, timing rules, and exit rules. Companies with simple-rules strategies must follow the rules religiously and avoid the temptation to change them too frequently. A consistent strategy helps managers sort through opportunities and gain short-term advantage by exploiting the attractive ones. In stable markets, managers rely on complicated strategies built on detailed predictions of the future. But when business is complicated, strategy should be simple. PMID:11189455

  19. Predicting human walking gaits with a simple planar model.

    PubMed

    Martin, Anne E; Schmiedeler, James P

    2014-04-11

    Models of human walking with moderate complexity have the potential to accurately capture both joint kinematics and whole body energetics, thereby offering more simultaneous information than very simple models and less computational cost than very complex models. This work examines four- and six-link planar biped models with knees and rigid circular feet. The two differ in that the six-link model includes ankle joints. Stable periodic walking gaits are generated for both models using a hybrid zero dynamics-based control approach. To establish a baseline of how well the models can approximate normal human walking, gaits were optimized to match experimental human walking data, ranging in speed from very slow to very fast. The six-link model well matched the experimental step length, speed, and mean absolute power, while the four-link model did not, indicating that ankle work is a critical element in human walking models of this type. Beyond simply matching human data, the six-link model can be used in an optimization framework to predict normal human walking using a torque-squared objective function. The model well predicted experimental step length, joint motions, and mean absolute power over the full range of speeds.

  20. Accurately measuring dynamic coefficient of friction in ultraform finishing

    NASA Astrophysics Data System (ADS)

    Briggs, Dennis; Echaves, Samantha; Pidgeon, Brendan; Travis, Nathan; Ellis, Jonathan D.

    2013-09-01

    UltraForm Finishing (UFF) is a deterministic sub-aperture computer numerically controlled grinding and polishing platform designed by OptiPro Systems. UFF is used to grind and polish a variety of optics from simple spherical to fully freeform, and numerous materials from glasses to optical ceramics. The UFF system consists of an abrasive belt around a compliant wheel that rotates and contacts the part to remove material. This work aims to accurately measure the dynamic coefficient of friction (μ), how it changes as a function of belt wear, and how this ultimately affects material removal rates. The coefficient of friction has been examined in terms of contact mechanics and Preston's equation to determine accurate material removal rates. By accurately predicting changes in μ, polishing iterations can be more accurately predicted, reducing the total number of iterations required to meet specifications. We have established an experimental apparatus that can accurately measure μ by measuring triaxial forces during translating loading conditions or while manufacturing the removal spots used to calculate material removal rates. Using this system, we will demonstrate μ measurements for UFF belts during different states of their lifecycle and assess the material removal function from spot diagrams as a function of wear. Ultimately, we will use this system for qualifying belt-wheel-material combinations to develop a spot-morphing model to better predict instantaneous material removal functions.
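    The link between friction and removal rate runs through Preston's equation, dz/dt = Cp · P · V, where P is contact pressure and V is the relative surface speed. The sketch below assumes, following the abstract's premise, that the Preston coefficient scales with the measured μ; the constant k and all numeric values are illustrative, not UFF process data.

```python
# Preston's equation: removal rate dz/dt = Cp * P * V.
def preston_removal_rate(mu, pressure_pa, speed_mps, k=1.0e-13):
    """Material removal rate in m/s.

    mu: dynamic coefficient of friction of the belt-part contact
    k:  hypothetical process constant folding Cp's dependence on mu
    """
    cp = k * mu                    # assumed: Cp proportional to friction
    return cp * pressure_pa * speed_mps

# As the belt wears, mu drops, and the predicted removal rate drops with it,
# which is why polishing iterations must account for belt state.
fresh_belt = preston_removal_rate(mu=0.45, pressure_pa=5.0e4, speed_mps=3.0)
worn_belt = preston_removal_rate(mu=0.30, pressure_pa=5.0e4, speed_mps=3.0)
print(fresh_belt > worn_belt)  # lower friction -> lower removal rate
```

    Tracking μ over the belt lifecycle then amounts to re-evaluating this rate before each polishing iteration instead of assuming a fixed removal function.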

  1. Extracting Time-Accurate Acceleration Vectors From Nontrivial Accelerometer Arrangements.

    PubMed

    Franck, Jennifer A; Blume, Janet; Crisco, Joseph J; Franck, Christian

    2015-09-01

    Sports-related concussions are of significant concern in many impact sports, and their detection relies on accurate measurements of the head kinematics during impact. Among the most prevalent recording technologies are videography and, more recently, single-axis accelerometers mounted in a helmet, such as the HIT system. Successful extraction of the linear and angular impact accelerations depends on an accurate analysis methodology governed by the equations of motion. Current algorithms are able to estimate the magnitude of acceleration and hit location, but make assumptions about the hit orientation and are often limited in the position and/or orientation of the accelerometers. The newly formulated algorithm presented in this manuscript accurately extracts the full linear and rotational acceleration vectors from a broad arrangement of six single-axis accelerometers directly from the governing set of kinematic equations. The new formulation linearizes the nonlinear centripetal acceleration term with a finite-difference approximation and provides a fast and accurate solution for all six components of acceleration over long time periods (>250 ms). The approximation of the nonlinear centripetal acceleration term provides an accurate computation of the rotational velocity as a function of time and allows for reconstruction of a multiple-impact signal. Furthermore, the algorithm determines the impact location and orientation and can distinguish between glancing, high rotational velocity impacts, or direct impacts through the center of mass. Results are shown for ten simulated impact locations on a headform geometry computed with three different accelerometer configurations in varying degrees of signal noise.
Since the algorithm does not require simplifications of the actual impacted geometry, the impact vector, or a specific arrangement of accelerometer orientations, it can be easily applied to many impact investigations in which accurate kinematics need to
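    The linear solve at the heart of such an algorithm can be sketched as follows. This is a generic rigid-body reconstruction under the finite-difference assumption described above (angular velocity taken from the previous time step), not the authors' code; the sensor geometry and kinematic values are illustrative.

```python
import numpy as np

def solve_rigid_body(ns, rs, meas, w):
    """Recover center-of-mass acceleration a_c and angular acceleration alpha.

    Each single-axis accelerometer at position r (relative to the center of
    mass) with sensitive axis n measures n.(a_c + alpha x r + w x (w x r)).
    With w known from the previous time step, the centripetal term is known,
    and the six unknowns (a_c, alpha) satisfy a 6x6 linear system.
    """
    A = np.zeros((6, 6))
    b = np.zeros(6)
    for i in range(6):
        n, r = ns[i], rs[i]
        A[i, :3] = n                    # coefficients of a_c
        A[i, 3:] = np.cross(r, n)       # n.(alpha x r) = (r x n).alpha
        b[i] = meas[i] - n @ np.cross(w, np.cross(w, r))  # move known term
    x = np.linalg.solve(A, b)
    return x[:3], x[3:]

# Synthetic check: generate readings from known kinematics, then recover them.
ns = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
               [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)  # sensitive axes
rs = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0],
               [0, 0, 1], [1, 0, 0], [0, 1, 0]], float)  # sensor positions
a_c = np.array([1.0, -2.0, 0.5])
alpha = np.array([0.3, 0.1, -0.2])
w = np.array([2.0, 0.0, 1.0])
meas = np.array([n @ (a_c + np.cross(alpha, r) + np.cross(w, np.cross(w, r)))
                 for n, r in zip(ns, rs)])
a_c_est, alpha_est = solve_rigid_body(ns, rs, meas, w)
print(np.allclose(a_c_est, a_c) and np.allclose(alpha_est, alpha))
```

    Integrating the recovered alpha forward supplies the angular velocity for the next time step, which is how the finite-difference approximation closes the loop over a long record.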

  2. Accurate and automatic extrinsic calibration method for blade measurement system integrated by different optical sensors

    NASA Astrophysics Data System (ADS)

    He, Wantao; Li, Zhongwei; Zhong, Kai; Shi, Yusheng; Zhao, Can; Cheng, Xu

    2014-11-01

    Fast and precise 3D inspection systems are in great demand in modern manufacturing processes. At present, the available sensors have their own pros and cons, and hardly any single sensor can handle complex inspection tasks in an accurate and effective way. The prevailing solution is to integrate multiple sensors and take advantage of their strengths. To obtain a holistic 3D profile, the data from different sensors must be registered into a coherent coordinate system. However, some complex-shaped objects have thin-wall features, such as blades, for which the ICP registration method becomes unstable. Therefore, it is very important to calibrate the extrinsic parameters of each sensor in the integrated measurement system. This paper proposes an accurate and automatic extrinsic parameter calibration method for a blade measurement system integrated from different optical sensors. In this system, a fringe projection sensor (FPS) and a conoscopic holography sensor (CHS) are integrated into a multi-axis motion platform, and the sensors can be optimally moved to any desired position at the object's surface. In order to simplify the calibration process, a special calibration artifact is designed according to the characteristics of the two sensors. An automatic registration procedure based on correlation and segmentation achieves rough alignment of the artifact datasets obtained by FPS and CHS without any manual operation or data pre-processing, and then a generalized Gauss-Markov model is used to estimate the optimal transformation parameters. The experiments show the measurement result of a blade, where several sampled patches are merged into one point cloud, verifying the performance of the proposed method.
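    The final transformation-estimation step can be illustrated with a standard least-squares rigid registration (the Kabsch/SVD solution), used here as a stand-in for the generalized Gauss-Markov estimation in the paper; the synthetic point correspondences are illustrative.

```python
import numpy as np

def rigid_transform(P, Q):
    """Best-fit rotation R and translation t mapping point set P onto Q
    (rows are 3D points; Kabsch algorithm, a standard least-squares
    estimator for corresponding point sets)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)              # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Synthetic check: recover a known rotation/translation between "sensors".
rng = np.random.default_rng(0)
P = rng.normal(size=(50, 3))               # points seen in sensor 1's frame
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -1.0, 2.0])
Q = P @ R_true.T + t_true                  # same points in sensor 2's frame
R, t = rigid_transform(P, Q)
print(np.allclose(R, R_true) and np.allclose(t, t_true))
```

    In practice the correspondences come from the automatic correlation-and-segmentation alignment of the calibration artifact, after which a single closed-form solve like this yields the extrinsic parameters.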

  3. Simple Ontology Format (SOFT)

    2011-10-01

    Simple Ontology Format (SOFT) library and file format specification provides a set of simple tools for developing and maintaining ontologies. The library, implemented as a Perl module, supports parsing and verification of files in the SOFT format, operations with ontologies (adding, removing, or filtering of entities), and conversion of ontologies into other formats. SOFT allows users to quickly create ontologies using only a basic text editor, verify them, and portray them in a graph layout system using customized styles.

  4. On Simple Science.

    ERIC Educational Resources Information Center

    Cole, K.C.

    1982-01-01

    Discusses San Francisco's Exploratorium, a science teaching center with 500 exhibits focusing on human perception, but extending to everything from the mechanics of voice to the art of illusion, from holograms to harmonics. The Exploratorium emphasizes "simple science" (refractions/resonances, sounds/shadows) to tune in the senses and turn on the…

  5. Entropy Is Simple, Qualitatively.

    ERIC Educational Resources Information Center

    Lambert, Frank L.

    2002-01-01

    Suggests that qualitatively, entropy is simple. Entropy increase from a macro viewpoint is a measure of the dispersal of energy from localized to spread out at a temperature T. Fundamentally based on statistical and quantum mechanics, this approach is superior to the non-fundamental "disorder" as a descriptor of entropy change. (MM)

  6. Simple epibulbar cartilaginous choristoma.

    PubMed

    Alyahya, Ahmed; Alkhalidi, Hisham; Alsuhaibani, Adel H

    2011-02-01

    A 15-year-old boy was referred for management of a medial, pedunculated, subconjunctival epibulbar mass of 5 months' duration in the left eye. The lesion was removed without complication, and histopathology confirmed a cartilaginous choristoma. To our knowledge, this is the first reported case of a simple epibulbar cartilaginous choristoma.

  7. Simple Lookup Service

    SciTech Connect

    2013-05-01

    Simple Lookup Service (sLS) is a REST/JSON-based lookup service that allows users to publish information in the form of key-value pairs and search for the published information. The lookup service supports both pull and push models. This software can be used to create a distributed architecture/cloud.
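    The publish/search model can be illustrated with a toy in-memory stand-in. This is not the sLS API: the real service exposes these operations over REST/JSON, and the class, method names, and record fields below are hypothetical.

```python
import json

class ToyLookupService:
    """In-memory stand-in for a key-value publish/search service."""

    def __init__(self):
        self.records = []

    def publish(self, record):
        # Records are free-form key-value pairs; the JSON round-trip
        # mimics the service storing a JSON document.
        self.records.append(json.loads(json.dumps(record)))

    def search(self, **criteria):
        # Return records whose key-value pairs match all criteria.
        return [r for r in self.records
                if all(r.get(k) == v for k, v in criteria.items())]

ls = ToyLookupService()
ls.publish({"type": "service", "name": "perfsonar", "host": "a.example.org"})
ls.publish({"type": "host", "name": "b.example.org"})
print(len(ls.search(type="service")))  # one matching record
```

    A distributed deployment would replicate exactly this contract behind multiple HTTP endpoints, with clients either pulling via search or being pushed updates.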

  8. A Simple Hydrogen Electrode

    ERIC Educational Resources Information Center

    Eggen, Per-Odd

    2009-01-01

    This article describes the construction of an inexpensive, robust, and simple hydrogen electrode, as well as the use of this electrode to measure "standard" potentials. In the experiment described here the students can measure the reduction potentials of metal-metal ion pairs directly, without using a secondary reference electrode. Measurements…
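    Once metal-metal ion potentials are measured against the hydrogen electrode, cell potentials follow by subtraction. A small sketch using textbook standard reduction potentials (values in volts vs. SHE):

```python
# Standard reduction potentials measured against the hydrogen electrode,
# whose potential is defined as 0 V (textbook values, for illustration).
E0 = {"Zn2+/Zn": -0.76, "Cu2+/Cu": 0.34, "Ag+/Ag": 0.80}

def cell_potential(cathode, anode):
    """Standard cell potential: E0(cell) = E0(cathode) - E0(anode)."""
    return E0[cathode] - E0[anode]

# Daniell cell: copper cathode, zinc anode
print(round(cell_potential("Cu2+/Cu", "Zn2+/Zn"), 2))  # → 1.1
```

    With the simple electrode described in the article, students obtain the left-hand column of such a table directly, without a secondary reference electrode.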

  9. Working with Simple Machines

    ERIC Educational Resources Information Center

    Norbury, John W.

    2006-01-01

    A set of examples is provided that illustrate the use of work as applied to simple machines. The ramp, pulley, lever and hydraulic press are common experiences in the life of a student, and their theoretical analysis therefore makes the abstract concept of work more real. The mechanical advantage of each of these systems is also discussed so that…
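    The work-based analysis of these machines reduces to a few ratios: an ideal (frictionless) machine trades force for distance while conserving work W = F · d. A short sketch with illustrative numbers:

```python
# Ideal mechanical advantage (MA) of three simple machines.
def lever_ma(effort_arm, load_arm):
    return effort_arm / load_arm

def ramp_ma(slope_length, height):
    return slope_length / height

def hydraulic_ma(output_piston_area, input_piston_area):
    return output_piston_area / input_piston_area

# Lifting a 600 N load with a 3:1 lever needs only 200 N of effort,
# applied over three times the distance, so work in equals work out.
load = 600.0
ma = lever_ma(effort_arm=3.0, load_arm=1.0)
effort = load / ma
print(effort)                              # → 200.0
print(effort * 3.0 == load * 1.0)          # → True (ideal machine)
```

    The same check (effort × effort distance = load × load distance) applies unchanged to the ramp and the hydraulic press, which is the point of teaching them through work.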

  10. Not So "Simple Justice"

    ERIC Educational Resources Information Center

    Urban, Wayne

    2004-01-01

    In this article, the author provides his analyses on Richard Kluger's "Simple Justice," a book that portrays the major players involved in the landmark "Brown" decision. He comments generally on Kluger and highlights a few interesting aspects of his analysis, including his interpretation of the actions of then clerk and later justice and still…

  11. A Simple Wave Driver

    ERIC Educational Resources Information Center

    Temiz, Burak Kagan; Yavuz, Ahmet

    2015-01-01

    This study was done to develop a simple and inexpensive wave driver that can be used in experiments on string waves. The wave driver was made using a battery-operated toy car, and the apparatus can be used to produce string waves at a fixed frequency. The working principle of the apparatus is as follows: shortly after the car is turned on, the…

  12. Simple Library Bookkeeping.

    ERIC Educational Resources Information Center

    Hoffman, Herbert H.

    A simple and cheap manual double entry continuous transaction posting system with running balances is developed for bookkeeping by small libraries. A very small library may operate without any system of fiscal control but when a library's budget approaches three figures, some kind of bookkeeping must be introduced. To maintain control over his…

  13. Climate Change Made Simple

    ERIC Educational Resources Information Center

    Shallcross, Dudley E.; Harrison, Tim G.

    2007-01-01

    The newly revised specifications for GCSE science involve greater consideration of climate change. This topic appears in either the chemistry or biology section, depending on the examination board, and is a good example of "How Science Works." It is therefore timely that students are given an opportunity to conduct some simple climate modelling.…

  14. Fast food tips (image)

    MedlinePlus

    ... challenge to eat healthy when going to a fast food place. In general, avoiding items that are deep ...

  15. Fast food (image)

    MedlinePlus

    Fast foods are quick, reasonably priced, and readily available alternatives to home cooking. While convenient and economical for a busy lifestyle, fast foods are typically high in calories, fat, saturated fat, ...

  17. Is fast food addictive?

    PubMed

    Garber, Andrea K; Lustig, Robert H

    2011-09-01

    Studies of food addiction have focused on highly palatable foods. While fast food falls squarely into that category, it has several other attributes that may increase its salience. This review examines whether the nutrients present in fast food, the characteristics of fast food consumers or the presentation and packaging of fast food may encourage substance dependence, as defined by the American Psychiatric Association. The majority of fast food meals are accompanied by a soda, which increases the sugar content 10-fold. Sugar addiction, including tolerance and withdrawal, has been demonstrated in rodents but not humans. Caffeine is a "model" substance of dependence; coffee drinks are driving the recent increase in fast food sales. Limited evidence suggests that the high fat and salt content of fast food may increase addictive potential. Fast food restaurants cluster in poorer neighborhoods and obese adults eat more fast food than those who are normal weight. Obesity is characterized by resistance to insulin, leptin and other hormonal signals that would normally control appetite and limit reward. Neuroimaging studies in obese subjects provide evidence of altered reward and tolerance. Once obese, many individuals meet criteria for psychological dependence. Stress and dieting may sensitize an individual to reward. Finally, fast food advertisements, restaurants and menus all provide environmental cues that may trigger addictive overeating. While the concept of fast food addiction remains to be proven, these findings support the role of fast food as a potentially addictive substance that is most likely to create dependence in vulnerable populations. PMID:21999689

  18. Light Field Imaging Based Accurate Image Specular Highlight Removal.

    PubMed

    Wang, Haoqian; Xu, Chenxue; Wang, Xingzheng; Zhang, Yongbing; Peng, Bo

    2016-01-01

    Specular reflection removal is indispensable to many computer vision tasks. However, most existing methods fail or degrade in complex real scenarios because of their individual drawbacks. Benefiting from light field imaging technology, this paper proposes a novel and accurate approach to remove specularity and improve image quality. We first capture images with specularity by the light field camera (Lytro ILLUM). After accurately estimating the image depth, a simple and concise threshold strategy is adopted to cluster the specular pixels into "unsaturated" and "saturated" categories. Finally, a color variance analysis of multiple views and a local color refinement are individually conducted on the two categories to recover diffuse color information. Experimental evaluation by comparison with existing methods, based on our light field dataset together with the Stanford light field archive, verifies the effectiveness of our proposed algorithm. PMID:27253083

  19. Method for Accurately Calibrating a Spectrometer Using Broadband Light

    NASA Technical Reports Server (NTRS)

    Simmons, Stephen; Youngquist, Robert

    2011-01-01

    A novel method has been developed for performing very fine calibration of a spectrometer. This process is particularly useful for modern miniature charge-coupled device (CCD) spectrometers where a typical factory wavelength calibration has been performed and a finer, more accurate calibration is desired. Typically, the factory calibration is done with a spectral line source that generates light at known wavelengths, allowing specific pixels in the CCD array to be assigned wavelength values. This method is good to about 1 nm across the spectrometer's wavelength range. This new method appears to be accurate to about 0.1 nm, a factor of ten improvement. White light is passed through an unbalanced Michelson interferometer, producing an optical signal with significant spectral variation. A simple theory can be developed to describe this spectral pattern, so by comparing the actual spectrometer output against this predicted pattern, errors in the wavelength assignment made by the spectrometer can be determined.
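
    The comparison step can be illustrated with a toy model. The fringe law and every number below (path difference, pixel grid, simulated offset) are assumptions for illustration, not values from the abstract:

```python
import math

# An unbalanced Michelson with optical path difference delta_l imprints
# fringes I(lam) = 0.5*(1 + cos(2*pi*delta_l/lam)) on white light.
# Comparing the measured fringe pattern with the prediction reveals a
# wavelength-assignment error in the factory calibration.

delta_l = 20e-6          # 20 um path difference (assumed)
true_offset = 0.10e-9    # simulated calibration error of 0.10 nm

def fringe(lam):
    return 0.5 * (1.0 + math.cos(2.0 * math.pi * delta_l / lam))

# wavelengths the spectrometer *reports* for each pixel
reported = [500e-9 + i * 0.05e-9 for i in range(200)]
# measured intensities correspond to the *true* wavelengths
measured = [fringe(lam + true_offset) for lam in reported]

# grid-search the offset that best matches prediction to measurement
best = min((sum((fringe(lam + off) - m) ** 2
                for lam, m in zip(reported, measured)), off)
           for off in [k * 0.01e-9 for k in range(-50, 51)])
print(f"estimated offset: {best[1]*1e9:.2f} nm")  # estimated offset: 0.10 nm
```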

  20. Note-accurate audio segmentation based on MPEG-7

    NASA Astrophysics Data System (ADS)

    Wellhausen, Jens

    2003-12-01

    Segmenting audio data into the smallest musical components is the basis for many further meta data extraction algorithms. For example, an automatic music transcription system needs to know where the exact boundaries of each tone are. In this paper a note-accurate audio segmentation algorithm based on MPEG-7 low level descriptors is introduced. For a reliable detection of different notes, features in both the time and the frequency domain are used. Because of this, polyphonic instrument mixes and even melodies characterized by human voices can be examined with this algorithm. For testing and verification of the note-accurate segmentation, a simple music transcription system was implemented. The dominant frequency within each segment is used to build a MIDI file representing the processed audio data.
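
    The final transcription step described above is a one-liner. This sketch assumes only the standard MIDI pitch formula; the segmentation algorithm itself is not reproduced, and the segment frequencies are made up:

```python
import math

# Map each segment's dominant frequency to a MIDI note number via the
# standard formula n = 69 + 12*log2(f / 440 Hz).

def freq_to_midi(f_hz):
    return round(69 + 12 * math.log2(f_hz / 440.0))

# hypothetical dominant frequencies for three detected segments
segments = [261.63, 440.00, 523.25]   # C4, A4, C5
notes = [freq_to_midi(f) for f in segments]
print(notes)  # [60, 69, 72]
```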

  2. Benchmarking accurate spectral phase retrieval of single attosecond pulses

    NASA Astrophysics Data System (ADS)

    Wei, Hui; Le, Anh-Thu; Morishita, Toru; Yu, Chao; Lin, C. D.

    2015-02-01

    A single extreme-ultraviolet (XUV) attosecond pulse or pulse train in the time domain is fully characterized if its spectral amplitude and phase are both determined. The spectral amplitude can be easily obtained from photoionization of simple atoms where accurate photoionization cross sections have been measured from, e.g., synchrotron radiation. To determine the spectral phase, at present the standard method is to carry out XUV photoionization in the presence of a dressing infrared (IR) laser. In this work, we examine the accuracy of current phase retrieval methods (PROOF and iPROOF) where the dressing IR is relatively weak such that photoelectron spectra can be accurately calculated by second-order perturbation theory. We suggest a modified method named swPROOF (scattering wave phase retrieval by omega oscillation filtering) which utilizes accurate one-photon and two-photon dipole transition matrix elements and removes the approximations made in PROOF and iPROOF. We show that the swPROOF method can in general retrieve an accurate spectral phase compared to other simpler models that have been suggested. We benchmark the accuracy of these phase retrieval methods through simulating the spectrogram by solving the time-dependent Schrödinger equation numerically using several known single attosecond pulses with a fixed spectral amplitude but different spectral phases.

  3. Diagnostics for Fast Ignition Science

    SciTech Connect

    MacPhee, A; Akli, K; Beg, F; Chen, C; Chen, H; Clarke, R; Hey, D; Freeman, R; Kemp, A; Key, M; King, J; LePape, S; Link, A; Ma, T; Nakamura, N; Offermann, D; Ovchinnikov, V; Patel, P; Phillips, T; Stephens, R; Town, R; Wei, M; VanWoerkom, L; Mackinnon, A

    2008-05-06

    The concept for Electron Fast Ignition Inertial Confinement Fusion demands sufficient laser energy be transferred from the ignitor pulse to the assembled fuel core via ~MeV electrons. We have assembled a suite of diagnostics to characterize such transfer. Recent experiments have simultaneously fielded absolutely calibrated extreme ultraviolet multilayer imagers at 68 and 256 eV; spherically bent crystal imagers at 4 and 8 keV; multi-keV crystal spectrometers; MeV x-ray bremsstrahlung and electron and proton spectrometers (along the same line of sight); nuclear activation samples and a picosecond optical probe based interferometer. These diagnostics allow careful measurement of energy transport and deposition during and following laser-plasma interactions at extremely high intensities in both planar and conical targets. Augmented with accurate on-shot laser focal spot and pre-pulse characterization, these measurements are yielding new insight into energy coupling and are providing critical data for validating numerical PIC and hybrid PIC simulation codes in an area that is crucial for many applications, particularly fast ignition. Novel aspects of these diagnostics and how they are combined to extract quantitative data on ultra high intensity laser plasma interactions are discussed, together with implications for full-scale fast ignition experiments.

  4. Fast Offset Laser Phase-Locking System

    NASA Technical Reports Server (NTRS)

    Shaddock, Daniel; Ware, Brent

    2008-01-01

    Figure 1 shows a simplified block diagram of an improved optoelectronic system for locking the phase of one laser to that of another laser with an adjustable offset frequency specified by the user. In comparison with prior systems, this system exhibits higher performance (including higher stability) and is much easier to use. The system is based on a field-programmable gate array (FPGA) and operates almost entirely digitally; hence, it is easily adaptable to many different systems. The system achieves phase stability of less than a microcycle. It was developed to satisfy the phase-stability requirement for a planned spaceborne gravitational-wave-detecting heterodyne laser interferometer (LISA). The system has potential terrestrial utility in communications, lidar, and other applications. The present system includes a fast phasemeter that is a companion to the microcycle-accurate one described in High-Accuracy, High-Dynamic-Range Phase-Measurement System (NPO-41927), NASA Tech Briefs, Vol. 31, No. 6 (June 2007), page 22. In the present system (as in the previously reported one), beams from the two lasers (here denoted the master and slave lasers) interfere on a photodiode. The heterodyne photodiode output is digitized and fed to the fast phasemeter, which produces suitably conditioned, low-latency analog control signals which lock the phase of the slave laser to that of the master laser. These control signals are used to drive a thermal and a piezoelectric transducer that adjust the frequency and phase of the slave-laser output. The output of the photodiode is a heterodyne signal at the difference between the frequencies of the two lasers. (The difference is currently required to be less than 20 MHz due to the Nyquist limit of the current sampling rate. We foresee few problems in doubling this limit using current equipment.) Within the phasemeter, the photodiode-output signal is digitized to 15 bits at a sampling frequency of 40 MHz by use of the same analog

  5. Probabilistic simple splicing systems

    NASA Astrophysics Data System (ADS)

    Selvarajoo, Mathuri; Heng, Fong Wan; Sarmin, Nor Haniza; Turaev, Sherzod

    2014-06-01

    A splicing system, one of the early theoretical models for DNA computing, was introduced by Head in 1987. Splicing systems are based on the splicing operation which, informally, cuts two strings of DNA molecules at specific recognition sites and attaches the prefix of the first string to the suffix of the second string, and the prefix of the second string to the suffix of the first string, thus yielding new strings. For a specific type of splicing system, namely the simple splicing system, the recognition sites are the same for both strings of DNA molecules. It is known that splicing systems with finite sets of axioms and splicing rules only generate regular languages. Hence, different types of restrictions have been considered for splicing systems in order to increase their computational power. Recently, probabilistic splicing systems have been introduced, where the probabilities are initially associated with the axioms, and the probabilities of the generated strings are computed from the probabilities of the initial strings. In this paper, some properties of probabilistic simple splicing systems are investigated. We prove that probabilistic simple splicing systems can also increase the computational power of the splicing languages generated.
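
    The splicing operation itself is easy to state in code. A minimal sketch with illustrative strings and a simplified cut position (directly after the shared site), not the formal definition from the paper:

```python
# Simple splicing: both strings are cut at the same recognition site,
# and the resulting prefixes and suffixes are exchanged.

def simple_splice(x, y, site):
    cut_x = x.index(site) + len(site)
    cut_y = y.index(site) + len(site)
    return x[:cut_x] + y[cut_y:], y[:cut_y] + x[cut_x:]

# hypothetical DNA strings sharing the site GAATTC
w1, w2 = simple_splice("TTGAATTCAA", "CCGAATTCGG", "GAATTC")
print(w1, w2)  # TTGAATTCGG CCGAATTCAA
```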

  6. Accurate theoretical chemistry with coupled pair models.

    PubMed

    Neese, Frank; Hansen, Andreas; Wennmohs, Frank; Grimme, Stefan

    2009-05-19

    Quantum chemistry has found its way into the everyday work of many experimental chemists. Calculations can predict the outcome of chemical reactions, afford insight into reaction mechanisms, and be used to interpret structure and bonding in molecules. Thus, contemporary theory offers tremendous opportunities in experimental chemical research. However, even with present-day computers and algorithms, we cannot solve the many particle Schrödinger equation exactly; inevitably some error is introduced in approximating the solutions of this equation. Thus, the accuracy of quantum chemical calculations is of critical importance. The affordable accuracy depends on molecular size and particularly on the total number of atoms: for orientation, ethanol has 9 atoms, aspirin 21 atoms, morphine 40 atoms, sildenafil 63 atoms, paclitaxel 113 atoms, insulin nearly 800 atoms, and quaternary hemoglobin almost 12,000 atoms. Currently, molecules with up to approximately 10 atoms can be very accurately studied by coupled cluster (CC) theory, approximately 100 atoms with second-order Møller-Plesset perturbation theory (MP2), approximately 1000 atoms with density functional theory (DFT), and beyond that number with semiempirical quantum chemistry and force-field methods. The overwhelming majority of present-day calculations in the 100-atom range use DFT. Although these methods have been very successful in quantum chemistry, they do not offer a well-defined hierarchy of calculations that allows one to systematically converge to the correct answer. Recently a number of rather spectacular failures of DFT methods have been found, even for seemingly simple systems such as hydrocarbons, fueling renewed interest in wave function-based methods that incorporate the relevant physics of electron correlation in a more systematic way. Thus, it would be highly desirable to fill the gap between 10 and 100 atoms with highly correlated ab initio methods. We have found that one of the earliest (and now

  7. An accurate method for two-point boundary value problems

    NASA Technical Reports Server (NTRS)

    Walker, J. D. A.; Weigand, G. G.

    1979-01-01

    A second-order method for solving two-point boundary value problems on a uniform mesh is presented where the local truncation error is obtained for use with the deferred correction process. In this simple finite difference method the tridiagonal nature of the classical method is preserved but the magnitude of each term in the truncation error is reduced by a factor of two. The method is applied to a number of linear and nonlinear problems and it is shown to produce more accurate results than either the classical method or the technique proposed by Keller (1969).
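
    The classical scheme the method builds on is easy to sketch: central differences turn the two-point problem into a tridiagonal system, solved here by the Thomas algorithm. This is a sketch of the baseline method under assumed boundary conditions y(0) = y(1) = 0, not the paper's deferred-correction variant:

```python
# Solve y'' = f(x) on [0, 1] with y(0) = y(1) = 0 by central differences.
# The test problem y'' = 2 has exact solution y = x^2 - x, for which
# central differences are exact.

def solve_bvp(f, n):
    h = 1.0 / n
    # interior unknowns y_1..y_{n-1}: y_{i-1} - 2 y_i + y_{i+1} = h^2 f_i
    a, b, c = [1.0] * (n - 1), [-2.0] * (n - 1), [1.0] * (n - 1)
    d = [h * h * f(i * h) for i in range(1, n)]
    for i in range(1, n - 1):            # forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    y = [0.0] * (n - 1)
    y[-1] = d[-1] / b[-1]
    for i in range(n - 3, -1, -1):       # back substitution
        y[i] = (d[i] - c[i] * y[i + 1]) / b[i]
    return [0.0] + y + [0.0]

y = solve_bvp(lambda x: 2.0, 10)
print(y[5])  # y(0.5) = 0.5^2 - 0.5 = -0.25 up to roundoff
```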

  8. Accurate frequency noise measurement of free-running lasers.

    PubMed

    Schiemangk, Max; Spiessberger, Stefan; Wicht, Andreas; Erbert, Götz; Tränkle, Günther; Peters, Achim

    2014-10-20

    We present a simple method to accurately measure the frequency noise power spectrum of lasers. It relies on creating the beat note between two lasers, capturing the corresponding signal in the time domain, and appropriately postprocessing the data to derive the frequency noise power spectrum. In contrast to methods already established, it does not require stabilization of the laser to an optical reference, i.e., a second laser, to an optical cavity or to an atomic transition. It further omits a frequency discriminator and hence avoids bandwidth limitation and nonlinearity effects common to high-resolution frequency discriminators.
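
    The post-processing idea can be sketched in a few lines: digitize the beat note, mix it to baseband (I/Q demodulation), take the phase, and differentiate to get instantaneous frequency, the raw material for a noise spectrum. All parameters below are illustrative, and the noiseless test tone stands in for a real beat note:

```python
import math, cmath

fs, f_beat, n = 1_000_000.0, 50_000.0, 4000   # assumed sample/beat rates
t = [i / fs for i in range(n)]
beat = [math.cos(2 * math.pi * f_beat * ti) for ti in t]

# complex mix to baseband at the nominal beat frequency
z = [b * cmath.exp(-2j * math.pi * f_beat * ti) for b, ti in zip(beat, t)]

# crude moving-average low-pass to reject the image at 2*f_beat
w = 20
z = [sum(z[i:i + w]) / w for i in range(n - w)]

phase = [cmath.phase(zi) for zi in z]
# wrapped phase increments; their mean gives the residual offset in Hz
dphi = [(phase[i + 1] - phase[i] + math.pi) % (2 * math.pi) - math.pi
        for i in range(len(phase) - 1)]
f_res = sum(dphi) / len(dphi) * fs / (2 * math.pi)
print(f"residual frequency offset: {f_res:.6f} Hz")
```

    In practice one would compute the power spectral density of the instantaneous-frequency trace rather than its mean; the mean here simply checks that the demodulation recovers the beat frequency.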

  9. Beam Profile Monitor With Accurate Horizontal And Vertical Beam Profiles

    DOEpatents

    Havener, Charles C [Knoxville, TN; Al-Rejoub, Riad [Oak Ridge, TN

    2005-12-26

    A widely used scanner device that rotates a single helically shaped wire probe in and out of a particle beam at different beamline positions to give a pair of mutually perpendicular beam profiles is modified by the addition of a second wire probe. As a result, a pair of mutually perpendicular beam profiles is obtained at a first beamline position, and a second pair of mutually perpendicular beam profiles is obtained at a second beamline position. The simple modification not only provides more accurate beam profiles, but also provides a measurement of the beam divergence and quality in a single compact device.

  10. Data assimilation on the exponentially accurate slow manifold.

    PubMed

    Cotter, Colin

    2013-05-28

    I describe an approach to data assimilation making use of an explicit map that defines a coordinate system on the slow manifold in the semi-geostrophic scaling in Lagrangian coordinates, and apply the approach to a simple toy system that has previously been proposed as a low-dimensional model for the semi-geostrophic scaling. The method can be extended to Lagrangian particle methods such as Hamiltonian particle-mesh and smooth-particle hydrodynamics applied to the rotating shallow-water equations, and many of the properties will remain for more general Eulerian methods. Making use of Hamiltonian normal-form theory, it has previously been shown that, if initial conditions for the system are chosen as image points of the map, then the fast components of the system have exponentially small magnitude for exponentially long times as ε→0, and this property is preserved if one uses a symplectic integrator for the numerical time stepping. The map may then be used to parametrize initial conditions near the slow manifold, allowing data assimilation to be performed without introducing any fast degrees of motion (more generally, the precise amount of fast motion can be selected).

  11. Fast food: friendly?

    PubMed

    Rice, S; McAllister, E J; Dhurandhar, N V

    2007-06-01

    Fast food is routinely blamed for the obesity epidemic and consequently excluded from professional dietary recommendations. However, several sections of society, including senior citizens, low-income adults and children, minority and homeless children, and those pressed for time, appear to rely on fast food as an important source of meals. Considering the dependence of these nutritionally vulnerable population groups on fast food, we examined the possibility of imaginative selection of fast food, which would attenuate the potentially unfavorable nutrient composition. We present a sample menu to demonstrate that it is possible to design a fast food menu that provides a reasonable level of essential nutrients without exceeding the caloric recommendations. We would like to alert health-care professionals that fast food need not be forbidden under all circumstances, and that a fresh look at the role of fast food may enable its inclusion in meal planning for those who depend on it out of necessity, while adding flexibility.

  12. FAST User Guide

    NASA Technical Reports Server (NTRS)

    Walatka, Pamela P.; Clucas, Jean; McCabe, R. Kevin; Plessel, Todd; Potter, R.; Cooper, D. M. (Technical Monitor)

    1994-01-01

    The Flow Analysis Software Toolkit, FAST, is a software environment for visualizing data. FAST is a collection of separate programs (modules) that run simultaneously and allow the user to examine the results of numerical and experimental simulations. The user can load data files, perform calculations on the data, visualize the results of these calculations, construct scenes of 3D graphical objects, and plot, animate and record the scenes. Computational Fluid Dynamics (CFD) visualization is the primary intended use of FAST, but FAST can also assist in the analysis of other types of data. FAST combines the capabilities of such programs as PLOT3D, RIP, SURF, and GAS into one environment with modules that share data. Sharing data between modules eliminates the drudgery of transferring data between programs. All the modules in the FAST environment have a consistent, highly interactive graphical user interface. Most commands are entered by pointing and clicking. The modular construction of FAST makes it flexible and extensible. The environment can be custom configured and new modules can be developed and added as needed. The following modules have been developed for FAST: VIEWER, FILE IO, CALCULATOR, SURFER, TOPOLOGY, PLOTTER, TITLER, TRACER, ARCGRAPH, GQ, SURFERU, SHOTET, and ISOLEVU. A utility is also included to make the inclusion of user defined modules in the FAST environment easy. The VIEWER module is the central control for the FAST environment. From VIEWER, the user can change object attributes, interactively position objects in three-dimensional space, define and save scenes, create animations, spawn new FAST modules, add additional view windows, and save and execute command scripts. The FAST User Guide uses text and FAST MAPS (graphical representations of the entire user interface) to guide the user through the use of FAST. Chapters include: Maps, Overview, Tips, Getting Started Tutorial, a separate chapter for each module, file formats, and system

  13. Accurate whole human genome sequencing using reversible terminator chemistry.

    PubMed

    Bentley, David R; Balasubramanian, Shankar; Swerdlow, Harold P; Smith, Geoffrey P; Milton, John; Brown, Clive G; Hall, Kevin P; Evers, Dirk J; Barnes, Colin L; Bignell, Helen R; Boutell, Jonathan M; Bryant, Jason; Carter, Richard J; Keira Cheetham, R; Cox, Anthony J; Ellis, Darren J; Flatbush, Michael R; Gormley, Niall A; Humphray, Sean J; Irving, Leslie J; Karbelashvili, Mirian S; Kirk, Scott M; Li, Heng; Liu, Xiaohai; Maisinger, Klaus S; Murray, Lisa J; Obradovic, Bojan; Ost, Tobias; Parkinson, Michael L; Pratt, Mark R; Rasolonjatovo, Isabelle M J; Reed, Mark T; Rigatti, Roberto; Rodighiero, Chiara; Ross, Mark T; Sabot, Andrea; Sankar, Subramanian V; Scally, Aylwyn; Schroth, Gary P; Smith, Mark E; Smith, Vincent P; Spiridou, Anastassia; Torrance, Peta E; Tzonev, Svilen S; Vermaas, Eric H; Walter, Klaudia; Wu, Xiaolin; Zhang, Lu; Alam, Mohammed D; Anastasi, Carole; Aniebo, Ify C; Bailey, David M D; Bancarz, Iain R; Banerjee, Saibal; Barbour, Selena G; Baybayan, Primo A; Benoit, Vincent A; Benson, Kevin F; Bevis, Claire; Black, Phillip J; Boodhun, Asha; Brennan, Joe S; Bridgham, John A; Brown, Rob C; Brown, Andrew A; Buermann, Dale H; Bundu, Abass A; Burrows, James C; Carter, Nigel P; Castillo, Nestor; Chiara E Catenazzi, Maria; Chang, Simon; Neil Cooley, R; Crake, Natasha R; Dada, Olubunmi O; Diakoumakos, Konstantinos D; Dominguez-Fernandez, Belen; Earnshaw, David J; Egbujor, Ugonna C; Elmore, David W; Etchin, Sergey S; Ewan, Mark R; Fedurco, Milan; Fraser, Louise J; Fuentes Fajardo, Karin V; Scott Furey, W; George, David; Gietzen, Kimberley J; Goddard, Colin P; Golda, George S; Granieri, Philip A; Green, David E; Gustafson, David L; Hansen, Nancy F; Harnish, Kevin; Haudenschild, Christian D; Heyer, Narinder I; Hims, Matthew M; Ho, Johnny T; Horgan, Adrian M; Hoschler, Katya; Hurwitz, Steve; Ivanov, Denis V; Johnson, Maria Q; James, Terena; Huw Jones, T A; Kang, Gyoung-Dong; Kerelska, Tzvetana H; Kersey, Alan D; Khrebtukova, Irina; Kindwall, Alex P; Kingsbury, 
Zoya; Kokko-Gonzales, Paula I; Kumar, Anil; Laurent, Marc A; Lawley, Cynthia T; Lee, Sarah E; Lee, Xavier; Liao, Arnold K; Loch, Jennifer A; Lok, Mitch; Luo, Shujun; Mammen, Radhika M; Martin, John W; McCauley, Patrick G; McNitt, Paul; Mehta, Parul; Moon, Keith W; Mullens, Joe W; Newington, Taksina; Ning, Zemin; Ling Ng, Bee; Novo, Sonia M; O'Neill, Michael J; Osborne, Mark A; Osnowski, Andrew; Ostadan, Omead; Paraschos, Lambros L; Pickering, Lea; Pike, Andrew C; Pike, Alger C; Chris Pinkard, D; Pliskin, Daniel P; Podhasky, Joe; Quijano, Victor J; Raczy, Come; Rae, Vicki H; Rawlings, Stephen R; Chiva Rodriguez, Ana; Roe, Phyllida M; Rogers, John; Rogert Bacigalupo, Maria C; Romanov, Nikolai; Romieu, Anthony; Roth, Rithy K; Rourke, Natalie J; Ruediger, Silke T; Rusman, Eli; Sanches-Kuiper, Raquel M; Schenker, Martin R; Seoane, Josefina M; Shaw, Richard J; Shiver, Mitch K; Short, Steven W; Sizto, Ning L; Sluis, Johannes P; Smith, Melanie A; Ernest Sohna Sohna, Jean; Spence, Eric J; Stevens, Kim; Sutton, Neil; Szajkowski, Lukasz; Tregidgo, Carolyn L; Turcatti, Gerardo; Vandevondele, Stephanie; Verhovsky, Yuli; Virk, Selene M; Wakelin, Suzanne; Walcott, Gregory C; Wang, Jingwen; Worsley, Graham J; Yan, Juying; Yau, Ling; Zuerlein, Mike; Rogers, Jane; Mullikin, James C; Hurles, Matthew E; McCooke, Nick J; West, John S; Oaks, Frank L; Lundberg, Peter L; Klenerman, David; Durbin, Richard; Smith, Anthony J

    2008-11-01

    DNA sequence information underpins genetic research, enabling discoveries of important biological or medical benefit. Sequencing projects have traditionally used long (400-800 base pair) reads, but the existence of reference sequences for the human and many other genomes makes it possible to develop new, fast approaches to re-sequencing, whereby shorter reads are compared to a reference to identify intraspecies genetic variation. Here we report an approach that generates several billion bases of accurate nucleotide sequence per experiment at low cost. Single molecules of DNA are attached to a flat surface, amplified in situ and used as templates for synthetic sequencing with fluorescent reversible terminator deoxyribonucleotides. Images of the surface are analysed to generate high-quality sequence. We demonstrate application of this approach to human genome sequencing on flow-sorted X chromosomes and then scale the approach to determine the genome sequence of a male Yoruba from Ibadan, Nigeria. We build an accurate consensus sequence from >30x average depth of paired 35-base reads. We characterize four million single-nucleotide polymorphisms and four hundred thousand structural variants, many of which were previously unknown. Our approach is effective for accurate, rapid and economical whole-genome re-sequencing and many other biomedical applications.

  14. Dimensional analysis made simple

    NASA Astrophysics Data System (ADS)

    Lira, Ignacio

    2013-11-01

    An inductive strategy is proposed for teaching dimensional analysis to second- or third-year students of physics, chemistry, or engineering. In this strategy, Buckingham's theorem is seen as a consequence and not as the starting point. In order to concentrate on the basics, the mathematics is kept as elementary as possible. Simple examples are suggested for classroom demonstrations of the power of the technique and others are put forward for homework or experimentation, but instructors are encouraged to produce examples of their own.
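
    A standard classroom instance of the technique, the period of a simple pendulum, can be worked as a small linear system over the M, L, T exponents. This worked example is mine, not one from the paper:

```python
from fractions import Fraction as F

# Find exponents a, b, c with [length^a * gravity^b * mass^c] = time.
# Each quantity is a triple of (M, L, T) dimensional exponents.
length, gravity, mass, period = (0, 1, 0), (0, 1, -2), (1, 0, 0), (0, 0, 1)

# match exponents dimension by dimension:
c = F(period[0], mass[0])                 # M:  c * 1       = 0
b = F(period[2], gravity[2])              # T:  b * (-2)    = 1
a = (F(period[1]) - b * gravity[1]) / length[1]   # L:  a + b = 0
print(a, b, c)  # 1/2 -1/2 0  =>  T ~ sqrt(L/g)
```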

  15. Extremely simple holographic projection of color images

    NASA Astrophysics Data System (ADS)

    Makowski, Michal; Ducin, Izabela; Kakarenko, Karol; Suszek, Jaroslaw; Kolodziejczyk, Andrzej; Sypek, Maciej

    2012-03-01

    A very simple scheme of holographic projection is presented with some experimental results showing good quality image projection without any imaging lens. This technique can be regarded as an alternative to classic projection methods. It is based on the reconstruction of real images from three phase iterated Fourier holograms. The illumination is performed with three laser beams of primary colors. A divergent wavefront geometry is used to achieve an increased throw angle of the projection, compared to plane wave illumination. Light fibers are used for light guidance in order to keep the setup as simple as possible and to provide point-like sources of high quality divergent wavefronts at an optimized position against the light modulator. Absorbing spectral filters are implemented to multiplex three holograms on a single phase-only spatial light modulator. Hence color mixing occurs without any time-division methods, which cause rainbow effects and color flicker. The zero diffractive order with divergent illumination is practically invisible and the speckle field is effectively suppressed with phase optimization and time averaging techniques. The main advantages of the proposed concept are: a very simple and highly miniaturizable configuration; the lack of a lens; a single LCoS (Liquid Crystal on Silicon) modulator; a strong resistance to imperfections and obstructions of the spatial light modulator like dead pixels, dust, mud, fingerprints etc.; and simple calculations based on the Fast Fourier Transform (FFT), easily processed in real time on a GPU (graphics processing unit).
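
    A phase-iterated Fourier hologram of the kind mentioned above is usually computed with a Gerchberg-Saxton-style loop. Below is a hedged 1-D miniature: impose the target amplitude in the image plane, keep unit amplitude (phase-only) in the hologram plane, and iterate. A naive O(N^2) unitary DFT keeps the sketch dependency-free; sizes, seed, and target are illustrative:

```python
import math, cmath

N = 16

def dft(x, sign):
    # unitary discrete Fourier transform (sign = -1 forward, +1 inverse)
    return [sum(xk * cmath.exp(sign * 2j * math.pi * k * m / N)
                for k, xk in enumerate(x)) / math.sqrt(N)
            for m in range(N)]

target = [2.0 if 4 <= m < 8 else 0.0 for m in range(N)]  # desired image
field = [cmath.exp(2j * math.pi * ((m * 0.37) % 1)) for m in range(N)]

for _ in range(200):
    img = dft(field, -1)
    # keep the computed phase, impose the target amplitude
    img = [a * cmath.exp(1j * cmath.phase(v)) for a, v in zip(target, img)]
    # phase-only hologram: renormalize to unit amplitude
    field = [cmath.exp(1j * cmath.phase(v)) for v in dft(img, +1)]

err = sum((abs(v) - a) ** 2 for v, a in zip(dft(field, -1), target))
print(f"residual amplitude error: {err:.3f}")
```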

  16. A Simple Iterative Solution of Nonlinear Algebraic Systems

    NASA Astrophysics Data System (ADS)

    Gousidou, Maria; Koutitas, Christopher

    2009-09-01

    A simple, robust, easily programmable and efficient method for the iterative solution of nonlinear algebraic systems, commonly appearing in nonlinear mechanics, based on the Newton-Raphson method (without repeatedly solving linear algebraic systems), is proposed, synoptically described and experimentally investigated. Fast convergence and easy programming are its main strengths.
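
    One common way to apply Newton-Raphson without linear solves is nonlinear Gauss-Seidel: each equation is solved for its own unknown by a scalar Newton iteration. This is a hedged sketch of that general idea on a made-up 2x2 system with solution (1, 1), not the scheme from the paper:

```python
def scalar_newton(g, dg, x0, iters=20):
    # one-dimensional Newton iteration
    x = x0
    for _ in range(iters):
        x -= g(x) / dg(x)
    return x

x, y = 0.0, 0.0
for _ in range(30):
    # solve f1(x, y) = x^3 + x - y - 1 = 0 for x, holding y fixed
    x = scalar_newton(lambda s: s**3 + s - y - 1, lambda s: 3*s**2 + 1, x)
    # solve f2(x, y) = y^3 + y - x - 1 = 0 for y, holding x fixed
    y = scalar_newton(lambda s: s**3 + s - x - 1, lambda s: 3*s**2 + 1, y)
print(x, y)  # converges to (1, 1)
```

    No Jacobian matrix is ever assembled or factored; the price is that convergence depends on the system being sufficiently diagonally dominant.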

  17. Fast particle confinement with optimized coil currents in the W7-X stellarator

    NASA Astrophysics Data System (ADS)

    Drevlak, M.; Geiger, J.; Helander, P.; Turkin, Y.

    2014-07-01

    One of the principal goals of the W7-X stellarator is to demonstrate good confinement of energetic ions at finite β. This confinement, however, is sensitive to the magnetic field configuration and is thus vulnerable to design modifications of the coil geometry. The collisionless drift orbit losses for 60 keV protons in W7-X are studied using the ANTS code. Particles in this energy range will be produced by the neutral beam injection (NBI) system being constructed for W7-X, and are particularly important because protons at this energy accurately mimic the behaviour of 3.5 MeV α-particles in a HELIAS reactor. To investigate the possibility of improved fast particle confinement, several approaches to adjust the coil currents (5 main field coil currents + 2 auxiliary coil currents) were explored. These strategies include simple rules of thumb as well as computational optimization of various properties of the magnetic field. It is shown that significant improvement of collisionless fast particle confinement can be achieved in W7-X for particle populations similar to α particles produced in fusion reactions. Nevertheless, the experimental goal of demonstrating confinement improvement with rising plasma pressure using an NBI-generated population appears to be difficult based on optimization of the coil currents only. The principal reason for this difficulty is that the NBI deposition profile is broader than the region of good fast-ion confinement around the magnetic axis.

  18. fast-matmul

    SciTech Connect

    Grey Ballard, Austin Benson

    2014-11-26

    This software provides implementations of fast matrix multiplication algorithms. These algorithms perform fewer floating point operations than the classical cubic algorithm. The software uses code generation to automatically implement the fast algorithms based on high-level descriptions. The code serves two general purposes. The first is to demonstrate that these fast algorithms can outperform vendor matrix multiplication algorithms for modest problem sizes on a single machine. The second is to rapidly prototype many variations of fast matrix multiplication algorithms to encourage future research in this area. The implementations target sequential and shared memory parallel execution.
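
    As an illustration of the kind of algorithm such a package generates, here is the best-known fast scheme, Strassen's method, which forms 7 recursive products instead of 8 for matrices whose size is a power of two. This is a generic sketch, not code from the package:

```python
def add(A, B, s=1):
    # elementwise A + s*B for matrices stored as lists of rows
    return [[a + s * b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def strassen(A, B):
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    q = lambda M, i, j: [row[j*h:(j+1)*h] for row in M[i*h:(i+1)*h]]
    A11, A12, A21, A22 = q(A, 0, 0), q(A, 0, 1), q(A, 1, 0), q(A, 1, 1)
    B11, B12, B21, B22 = q(B, 0, 0), q(B, 0, 1), q(B, 1, 0), q(B, 1, 1)
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, add(B12, B22, -1))
    M4 = strassen(A22, add(B21, B11, -1))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(add(A21, A11, -1), add(B11, B12))
    M7 = strassen(add(A12, A22, -1), add(B21, B22))
    C11 = add(add(M1, M4), add(M7, M5, -1))
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(add(M1, M3), add(M6, M2, -1))
    return [r1 + r2 for r1, r2 in zip(C11, C12)] + \
           [r1 + r2 for r1, r2 in zip(C21, C22)]

print(strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

    In practice such codes recurse only down to a crossover size and then call a tuned classical kernel, which is exactly the trade-off the package explores.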

  19. Fast robust correlation.

    PubMed

    Fitch, Alistair J; Kadyrov, Alexander; Christmas, William J; Kittler, Josef

    2005-08-01

    A new, fast, statistically robust, exhaustive, translational image-matching technique is presented: fast robust correlation. Existing methods are either slow or non-robust, or rely on optimization. Fast robust correlation works by expressing a robust matching surface as a series of correlations. Speed is obtained by computing correlations in the frequency domain. Computational cost is analyzed and the method is shown to be fast. Speed is comparable to conventional correlation and, for large images, thousands of times faster than direct robust matching. Three experiments demonstrate the advantage of the technique over standard correlation.
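
    The frequency-domain step that provides the speed can be sketched directly: a circular correlation is an inverse transform of conjugated spectra. A naive O(n^2) DFT stands in for the FFT to keep the sketch dependency-free, and the paper's robust-kernel series expansion is not reproduced:

```python
import math, cmath

def dft(x, sign):
    # naive discrete Fourier transform (sign = -1 forward, +1 inverse)
    n = len(x)
    return [sum(xk * cmath.exp(sign * 2j * math.pi * j * k / n)
                for k, xk in enumerate(x)) for j in range(n)]

def circ_corr(a, b):
    # circular cross-correlation: corr = IDFT(conj(DFT(a)) * DFT(b)) / n
    fa, fb = dft(a, -1), dft(b, -1)
    prod = [u.conjugate() * v for u, v in zip(fa, fb)]
    return [v.real / len(a) for v in dft(prod, +1)]

a = [0, 1, 0, 0]
b = [0, 0, 1, 0]          # b is a shifted by one sample
c = circ_corr(a, b)
print(c.index(max(c)))    # peak at lag 1 recovers the shift
```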

  20. Mill profiler machines soft materials accurately

    NASA Technical Reports Server (NTRS)

    Rauschl, J. A.

    1966-01-01

    Mill profiler machines bevels, slots, and grooves in soft materials, such as styrofoam phenolic-filled cores, to any desired thickness. A single operator can accurately control cutting depths in contour or straight line work.