NASA Astrophysics Data System (ADS)
Vizireanu, D. N.; Halunga, S. V.
2012-04-01
A simple, fast, and accurate amplitude estimation algorithm for sinusoidal signals in DSP-based instrumentation is proposed. It is shown that eight samples, used in two steps, are sufficient. A practical analytical formula for amplitude estimation is obtained. Numerical results are presented. Simulations have been performed for the cases where the sampled signal is affected by white Gaussian noise and where the samples are quantized to a given number of bits.
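The paper's closed-form eight-sample formula is not reproduced in the abstract. As a generic illustration of amplitude estimation from a handful of samples, a least-squares fit of quadrature components (assuming the frequency is known, which the paper's method may not require) recovers the amplitude exactly in the noise-free case:

```python
import numpy as np

def estimate_amplitude(samples, t, freq):
    """Least-squares amplitude estimate for a sinusoid of known frequency.

    Fits samples ~ a*cos(2*pi*f*t) + b*sin(2*pi*f*t); amplitude = hypot(a, b).
    """
    A = np.column_stack([np.cos(2 * np.pi * freq * t),
                         np.sin(2 * np.pi * freq * t)])
    coef, *_ = np.linalg.lstsq(A, samples, rcond=None)
    return np.hypot(coef[0], coef[1])

# eight samples (at 1 kHz) of a 50 Hz sinusoid with amplitude 2.0
t = np.arange(8) / 1000.0
x = 2.0 * np.sin(2 * np.pi * 50 * t + 0.3)
amp = estimate_amplitude(x, t, 50.0)
```

With additive noise the same fit remains usable; the estimate then degrades gracefully with the noise variance.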
Mackie, David M.; Jahnke, Justin P.; Benyamin, Marcus S.; Sumner, James J.
2016-01-01
The standard methodologies for quantitative analysis (QA) of mixtures using Fourier transform infrared (FTIR) instruments have evolved until they are now more complicated than necessary for many users' purposes. We present a simpler methodology, suitable for widespread adoption of FTIR QA as a standard laboratory technique across disciplines by occasional users.
• The algorithm is straightforward and intuitive, yet it is also fast, accurate, and robust.
• It relies on component spectra, minimization of errors, and local adaptive mesh refinement.
• It was tested successfully on real mixtures of up to nine components.
We show that our methodology is robust to challenging experimental conditions such as similar substances, component percentages differing by three orders of magnitude, and imperfect (noisy) spectra. As examples, we analyze biological, chemical, and physical aspects of bio-hybrid fuel cells. PMID:26977411
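The "minimization of errors" over component spectra can be illustrated with non-negative least squares, a standard spectral-unmixing step. This is a sketch on synthetic spectra, not the authors' adaptive-mesh algorithm:

```python
import numpy as np
from scipy.optimize import nnls

# Columns of A are synthetic pure-component spectra (assumed known); b is the
# mixture spectrum. NNLS finds non-negative component weights minimizing the
# residual -- the error-minimization core of FTIR mixture analysis.
rng = np.random.default_rng(0)
A = np.abs(rng.normal(size=(200, 3)))   # 3 component spectra, 200 wavenumbers
true_w = np.array([0.7, 0.25, 0.05])    # weights spanning a wide dynamic range
b = A @ true_w                          # noise-free mixture spectrum
w, resid = nnls(A, b)
```

On noise-free data the weights are recovered exactly; with noisy spectra the non-negativity constraint keeps the estimates physically meaningful.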
Orio, Patricio; Soudry, Daniel
2012-01-01
Background The phenomena that emerge from the interaction of the stochastic opening and closing of ion channels (channel noise) with the non-linear neural dynamics are essential to our understanding of the operation of the nervous system. The effects that channel noise can have on neural dynamics are generally studied using numerical simulations of stochastic models. Algorithms based on discrete Markov Chains (MC) seem to be the most reliable and trustworthy, but even optimized algorithms come with a non-negligible computational cost. Diffusion Approximation (DA) methods use Stochastic Differential Equations (SDE) to approximate the behavior of a number of MCs, considerably speeding up simulation times. However, model comparisons have suggested that DA methods did not lead to the same results as in MC modeling in terms of channel noise statistics and effects on excitability. Recently, it was shown that the difference arose because MCs were modeled with coupled gating particles, while the DA was modeled using uncoupled gating particles. Implementations of DA with coupled particles, in the context of a specific kinetic scheme, yielded similar results to MC. However, it remained unclear how to generalize these implementations to different kinetic schemes, or whether they were faster than MC algorithms. Additionally, a steady state approximation was used for the stochastic terms, which, as we show here, can introduce significant inaccuracies. Main Contributions We derived the SDE explicitly for any given ion channel kinetic scheme. The resulting generic equations were surprisingly simple and interpretable – allowing an easy, transparent and efficient DA implementation, avoiding unnecessary approximations. The algorithm was tested in a voltage clamp simulation and in two different current clamp simulations, yielding the same results as MC modeling. Also, the simulation efficiency of this DA method demonstrated considerable superiority over MC methods, except when
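A minimal illustration of the Diffusion Approximation idea, for a single two-state (closed/open) channel population rather than a full kinetic scheme: the open fraction follows an SDE, integrated here with the Euler-Maruyama method, whose noise term shrinks as the channel count grows. This is a generic sketch, not the authors' derivation:

```python
import numpy as np

def simulate_channel_fraction(alpha, beta, n_channels, dt, steps, seed=0):
    """Euler-Maruyama integration of the diffusion approximation (SDE) for the
    open fraction x of N two-state channels:
        dx = [alpha*(1-x) - beta*x] dt + sqrt((alpha*(1-x) + beta*x)/N) dW
    """
    rng = np.random.default_rng(seed)
    x = np.empty(steps + 1)
    x[0] = alpha / (alpha + beta)            # start at the deterministic steady state
    for k in range(steps):
        drift = alpha * (1.0 - x[k]) - beta * x[k]
        diff2 = (alpha * (1.0 - x[k]) + beta * x[k]) / n_channels
        x[k + 1] = x[k] + drift * dt + np.sqrt(max(diff2, 0.0) * dt) * rng.normal()
        x[k + 1] = min(max(x[k + 1], 0.0), 1.0)   # keep the fraction in [0, 1]
    return x

traj = simulate_channel_fraction(alpha=1.0, beta=1.0, n_channels=1000,
                                 dt=0.01, steps=5000)
```

The trajectory fluctuates around the steady-state fraction alpha/(alpha+beta) with channel-noise amplitude of order 1/sqrt(N), which is what makes the SDE far cheaper than tracking N Markov chains.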
Fast and Provably Accurate Bilateral Filtering
NASA Astrophysics Data System (ADS)
Chaudhury, Kunal N.; Dabhade, Swapnil D.
2016-06-01
The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires $O(S)$ operations per pixel, where $S$ is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to $O(1)$ per pixel for any arbitrary $S$. The algorithm has a simple implementation involving $N+1$ spatial filterings, where $N$ is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order $N$ required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with state-of-the-art methods in terms of speed and accuracy.
Fast and Provably Accurate Bilateral Filtering.
Chaudhury, Kunal N; Dabhade, Swapnil D
2016-06-01
The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires O(S) operations per pixel, where S is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to O(1) per pixel for any arbitrary S . The algorithm has a simple implementation involving N+1 spatial filterings, where N is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order N required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with the state-of-the-art methods in terms of speed and accuracy. PMID:27093722
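For reference, the O(S)-per-pixel baseline that the proposed O(1) approximation accelerates is the direct bilateral filter, sketched here on a tiny image. This is the brute-force definition, not the authors' fast algorithm:

```python
import numpy as np

def bilateral_direct(img, sigma_s, sigma_r, radius):
    """Brute-force bilateral filter: every output pixel is a weighted mean of
    its neighborhood, with spatial (Gaussian in position) and range (Gaussian
    in intensity difference) weights. Cost is O(S) per pixel, S = window size."""
    H, W = img.shape
    out = np.empty((H, W), dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    pad = np.pad(img.astype(float), radius, mode='edge')
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            range_w = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            w = spatial * range_w
            out[i, j] = (w * patch).sum() / w.sum()
    return out

step = np.zeros((8, 8)); step[:, 4:] = 1.0     # an ideal vertical edge
smoothed = bilateral_direct(step, sigma_s=2.0, sigma_r=0.1, radius=3)
```

With a small range sigma, pixels across the edge receive near-zero weight, so the edge survives the smoothing; that edge-preserving behavior is what the Gaussian-range approximation must reproduce.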
Fast and accurate propagation of coherent light
Lewis, R. D.; Beylkin, G.; Monzón, L.
2013-01-01
We describe a fast algorithm to propagate, to any user-specified accuracy, a time-harmonic electromagnetic field between two parallel planes separated by a linear, isotropic and homogeneous medium. The analytical formulation of this problem (ca 1897) requires the evaluation of the so-called Rayleigh–Sommerfeld integral. If the distance between the planes is small, this integral can be accurately evaluated in the Fourier domain; if the distance is very large, it can be accurately approximated by asymptotic methods. In the large intermediate region of practical interest, where the oscillatory Rayleigh–Sommerfeld kernel must be applied directly, current numerical methods can be highly inaccurate without indicating this fact to the user. In our approach, for any user-specified accuracy ϵ>0, we approximate the kernel by a short sum of Gaussians with complex-valued exponents, and then efficiently apply the result to the input data using the unequally spaced fast Fourier transform. The resulting algorithm evaluates the solution on an N×N grid of output points given an M×M grid of input samples, at low computational cost. Our algorithm maintains its accuracy throughout the computational domain. PMID:24204184
Fast and accurate determination of the Wigner rotation matrices in the fast multipole method.
Dachsel, Holger
2006-04-14
In the rotation-based fast multipole method, the accurate determination of the Wigner rotation matrices is essential. The combination of two recurrence relations and the control of error accumulation allows a very precise determination of the Wigner rotation matrices. The recurrence formulas are simple, efficient, and numerically stable. The advantages over other recursions are documented. PMID:16626188
Fast and Accurate Exhaled Breath Ammonia Measurement
Solga, Steven F.; Mudalel, Matthew L.; Spacek, Lisa A.; Risby, Terence H.
2014-01-01
This exhaled breath ammonia method uses a fast and highly sensitive spectroscopic technique known as quartz-enhanced photoacoustic spectroscopy (QEPAS) with a quantum cascade laser. The monitor is coupled to a sampler that measures mouth pressure and carbon dioxide. The system is temperature controlled and specifically designed to address the reactivity of this compound. The sampler provides immediate feedback to the subject and the technician on the quality of the breath effort. Together with the quick response time of the monitor, this system is capable of accurately measuring exhaled breath ammonia representative of deep-lung systemic levels. Because the system is easy to use and produces real-time results, it has enabled experiments to identify factors that influence measurements. For example, mouth rinse and oral pH reproducibly and significantly affect results and therefore must be controlled; temperature and mode of breathing are other examples. As our understanding of these factors evolves, error is reduced and clinical studies become more meaningful. The system is very reliable and individual measurements are inexpensive. The sampler is relatively inexpensive and quite portable, but the monitor is neither, which limits options for some clinical studies and provides a rationale for future innovations. PMID:24962141
Reverse radiance: a fast accurate method for determining luminance
NASA Astrophysics Data System (ADS)
Moore, Kenneth E.; Rykowski, Ronald F.; Gangadhara, Sanjay
2012-10-01
Reverse ray tracing from a region of interest backward to the source has long been proposed as an efficient method of determining luminous flux. The idea is to trace rays only from where the final flux needs to be known back to the source, rather than tracing in the forward direction from the source outward to see where the light goes. Once the reverse ray reaches the source, the radiance the equivalent forward ray would have represented is determined and the resulting flux computed. Although reverse ray tracing is conceptually simple, the method critically depends upon an accurate source model in both the near and far field. An overly simplified source model, such as an ideal Lambertian surface, substantially detracts from the accuracy and thus the benefit of the method. This paper introduces an improved method of reverse ray tracing that we call Reverse Radiance, which avoids assumptions about the source properties. The new method uses measured data from a Source Imaging Goniometer (SIG) that simultaneously measures near- and far-field luminous data. Incorporating this data into a fast reverse ray tracing integration method yields fast, accurate data for a wide variety of illumination problems.
A Fast and Accurate Unconstrained Face Detector.
Liao, Shengcai; Jain, Anil K; Li, Stan Z
2016-02-01
We propose a method to address challenges in unconstrained face detection, such as arbitrary pose variations and occlusions. First, a new image feature called the Normalized Pixel Difference (NPD) is proposed. The NPD feature is computed as the difference-to-sum ratio between two pixel values, inspired by the Weber fraction in experimental psychology. The new feature is scale invariant, bounded, and able to reconstruct the original image. Second, we propose a deep quadratic tree to learn the optimal subset of NPD features and their combinations, so that complex face manifolds can be partitioned by the learned rules. This way, only a single soft-cascade classifier is needed to handle unconstrained face detection. Furthermore, we show that the NPD features can be efficiently obtained from a look-up table, and the detection template can be easily scaled, making the proposed face detector very fast. Experimental results on three public face datasets (FDDB, GENKI, and CMU-MIT) show that the proposed method achieves state-of-the-art performance in detecting unconstrained faces with arbitrary pose variations and occlusions in cluttered scenes. PMID:26761729
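The NPD feature itself is simple to state: the difference-to-sum ratio of two pixel values, conventionally taken as 0 when both pixels are 0. A small sketch illustrating its boundedness and scale invariance:

```python
import numpy as np

def npd(x, y):
    """Normalized Pixel Difference: (x - y) / (x + y), defined as 0 when both
    pixels are 0. The result is bounded in [-1, 1] and scale invariant."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    s = x + y
    with np.errstate(divide='ignore', invalid='ignore'):
        f = np.where(s == 0, 0.0, (x - y) / s)
    return f

# scale invariance: multiplying both pixel values by a constant (e.g. a
# global illumination change) leaves the feature unchanged
a = npd(120, 80)
b = npd(12, 8)
```

Scale invariance is what lets a single learned detection template be rescaled across image pyramids without recomputing features per scale.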
Fast and accurate estimation for astrophysical problems in large databases
NASA Astrophysics Data System (ADS)
Richards, Joseph W.
2010-10-01
A recent flood of astronomical data has created much demand for sophisticated statistical and machine learning tools that can rapidly draw accurate inferences from large databases of high-dimensional data. In this Ph.D. thesis, methods for statistical inference in such databases will be proposed, studied, and applied to real data. I use methods for low-dimensional parametrization of complex, high-dimensional data that are based on the notion of preserving the connectivity of data points in the context of a Markov random walk over the data set. I show how this simple parametrization of data can be exploited to: define appropriate prototypes for use in complex mixture models, determine data-driven eigenfunctions for accurate nonparametric regression, and find a set of suitable features to use in a statistical classifier. In this thesis, methods for each of these tasks are built up from simple principles, compared to existing methods in the literature, and applied to data from astronomical all-sky surveys. I examine several important problems in astrophysics, such as estimation of star formation history parameters for galaxies, prediction of redshifts of galaxies using photometric data, and classification of different types of supernovae based on their photometric light curves. Fast methods for high-dimensional data analysis are crucial in each of these problems because they all involve the analysis of complicated high-dimensional data in large, all-sky surveys. Specifically, I estimate the star formation history parameters for the nearly 800,000 galaxies in the Sloan Digital Sky Survey (SDSS) Data Release 7 spectroscopic catalog, determine redshifts for over 300,000 galaxies in the SDSS photometric catalog, and estimate the types of 20,000 supernovae as part of the Supernova Photometric Classification Challenge. Accurate predictions and classifications are imperative in each of these examples because these estimates are utilized in broader inference problems.
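The "Markov random walk over the data set" parametrization described above is the diffusion-map construction. A minimal sketch (a generic implementation, not the thesis code) builds a Gaussian affinity, normalizes it into a transition matrix, and embeds the data with its leading non-trivial eigenvectors:

```python
import numpy as np

def diffusion_map(X, eps, n_components=2):
    """Minimal diffusion-map embedding: Gaussian affinities are row-normalized
    into a Markov transition matrix over the data points; the leading
    non-trivial eigenvectors give connectivity-preserving coordinates."""
    D2 = ((X[:, None, :] - X[None, :, :])**2).sum(-1)   # pairwise squared distances
    K = np.exp(-D2 / eps)                                # connectivity kernel
    P = K / K.sum(axis=1, keepdims=True)                 # Markov random-walk matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    # skip the trivial eigenvector (constant, eigenvalue 1)
    return vecs[:, order[1:n_components + 1]].real

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 5))          # stand-in for high-dimensional survey data
Y = diffusion_map(X, eps=10.0)        # low-dimensional coordinates
```

The embedded coordinates can then feed the downstream uses named in the abstract: prototypes for mixture models, regression eigenfunctions, or classifier features.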
Fast, accurate, robust and Open Source Brain Extraction Tool (OSBET)
NASA Astrophysics Data System (ADS)
Namias, R.; Donnelly Kehoe, P.; D'Amato, J. P.; Nagel, J.
2015-12-01
The removal of non-brain regions in neuroimaging is a critical preprocessing task. Skull-stripping depends on several factors, including the noise level in the image, the anatomy of the subject being scanned, and the acquisition sequence. For these and other reasons, an ideal brain extraction method should be fast, accurate, user friendly, open source, and knowledge based (allowing interaction with the algorithm when the expected outcome is not obtained), producing stable results and making it possible to automate the process for large datasets. There are already a large number of validated tools for this task, but none of them meets all of the desired characteristics. In this paper we introduce an open-source brain extraction tool (OSBET), composed of four steps using simple, well-known operations (optimal thresholding, binary morphology, labeling, and geometrical analysis) that aims to combine all of the desired features. We present an experiment comparing OSBET with six other state-of-the-art techniques on a publicly available dataset consisting of 40 T1-weighted 3D scans and their corresponding manually segmented images. OSBET achieved both a short runtime and excellent accuracy, obtaining the best Dice coefficient. Further validation should be performed, for instance in unhealthy populations, to generalize its usage for clinical purposes.
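The four-step recipe (thresholding, binary morphology, labeling, geometrical analysis) can be sketched on a toy volume with standard `scipy.ndimage` operations. This is an illustrative pipeline, not the OSBET implementation:

```python
import numpy as np
from scipy import ndimage

def largest_bright_component(vol, threshold):
    """Toy skull-stripping pipeline: intensity threshold, binary opening to
    remove speckle, connected-component labeling, and keeping the largest
    component as the 'brain' mask."""
    mask = vol > threshold
    mask = ndimage.binary_opening(mask, iterations=1)
    labels, n = ndimage.label(mask)
    if n == 0:
        return np.zeros_like(mask)
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    return labels == (1 + int(np.argmax(sizes)))

vol = np.zeros((20, 20, 20))
vol[4:16, 4:16, 4:16] = 1.0      # a large bright blob standing in for the brain
vol[0, 0, 0] = 1.0               # an isolated speck of noise
mask = largest_bright_component(vol, threshold=0.5)
```

The opening step removes the isolated speck, and component selection keeps only the large blob; real pipelines add the geometrical analysis and user-tunable knowledge the paper describes.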
Simple and accurate optical height sensor for wafer inspection systems
NASA Astrophysics Data System (ADS)
Shimura, Kei; Nakai, Naoya; Taniguchi, Koichi; Itoh, Masahide
2016-02-01
An accurate method for measuring the wafer surface height is required in wafer inspection systems to adjust the focus of the inspection optics quickly and precisely. A method that projects a laser spot onto the wafer surface obliquely and detects its image displacement using a one-dimensional position-sensitive detector is known, and a variety of methods have been proposed for improving the accuracy by compensating for the measurement error due to surface patterns. We have developed a simple and accurate method in which an image of a reticle with eight slits is projected onto the wafer surface and its reflected image is detected using an image sensor. The surface height is calculated by averaging the coordinates of the slit images in both directions of the captured image. Pattern-related measurement error was reduced by applying this coordinate averaging to the multiple-slit-projection method. An accuracy of better than 0.35 μm was achieved for a patterned wafer at the reference height and at ±0.1 mm from the reference height, in a simple configuration.
Accurate molecular classification of cancer using simple rules
Wang, Xiaosheng; Gotoh, Osamu
2009-01-01
Background One intractable problem with using microarray data analysis for cancer classification is how to reduce the extremely high dimensionality of the gene feature data to remove the effects of noise. Feature selection is often used to address this problem by selecting informative genes from among thousands or tens of thousands of genes. However, most existing methods of microarray-based cancer classification utilize too many genes to achieve accurate classification, which often hampers the interpretability of the models. For a better understanding of the classification results, it is desirable to develop simpler rule-based models with as few marker genes as possible. Methods We screened a small number of informative single genes and gene pairs on the basis of their dependency degrees as defined in rough set theory. Applying the decision rules induced by the selected genes or gene pairs, we constructed cancer classifiers. We tested the efficacy of the classifiers by leave-one-out cross-validation (LOOCV) on training sets and by classification of independent test sets. Results We applied our methods to five cancerous gene expression datasets: leukemia (acute lymphoblastic leukemia [ALL] vs. acute myeloid leukemia [AML]), lung cancer, prostate cancer, breast cancer, and leukemia (ALL vs. mixed-lineage leukemia [MLL] vs. AML). Accurate classification outcomes were obtained by utilizing just one or two genes. Some genes that correlated closely with the pathogenesis of the relevant cancers were identified. In terms of both classification performance and algorithm simplicity, our approach outperformed or at least matched existing methods. Conclusion In cancerous gene expression datasets, a small number of genes, even one or two if selected correctly, is capable of achieving an ideal cancer classification effect. This finding also means that very simple rules may perform well for cancerous class prediction. PMID:19874631
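The conclusion that one well-chosen gene can classify accurately can be illustrated with an exhaustive single-gene threshold search. This plain sketch is in the spirit of the paper's one-gene decision rules, not its rough-set dependency-degree algorithm:

```python
import numpy as np

def best_single_gene_rule(X, y):
    """Search every gene for the threshold rule 'class 1 if expression > t'
    (or the reversed rule) that best separates the training labels.
    Returns (training accuracy, gene index, threshold, sign)."""
    best = (0.0, 0, 0.0, 1)
    for g in range(X.shape[1]):
        for t in X[:, g]:                       # candidate thresholds: observed values
            for sign in (1, -1):
                pred = (sign * (X[:, g] - t) > 0).astype(int)
                acc = (pred == y).mean()
                if acc > best[0]:
                    best = (acc, g, t, sign)
    return best

rng = np.random.default_rng(2)
X = rng.normal(size=(30, 5))                    # toy expression matrix: 30 samples, 5 genes
y = (X[:, 3] > 0.2).astype(int)                 # labels driven by one informative gene
acc, gene, thr, sign = best_single_gene_rule(X, y)
```

On real data one would score candidate rules with LOOCV rather than training accuracy, as the paper does, to avoid selecting an overfit threshold.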
Fast and accurate line scanner based on white light interferometry
NASA Astrophysics Data System (ADS)
Lambelet, Patrick; Moosburger, Rudolf
2013-04-01
White-light interferometry is a highly accurate technology for 3D measurements. The principle is widely utilized in surface metrology instruments but rarely adopted for in-line inspection systems. The main challenges in rolling out inspection systems based on white-light interferometry to the production floor are its sensitivity to environmental vibrations and its relatively long measurement times: a large quantity of data needs to be acquired and processed in order to obtain a single topographic measurement. Heliotis developed a smart-pixel CMOS camera (lock-in camera) that is specially suited for white-light interferometry. The demodulation of the interference signal is handled at the pixel level, which typically reduces the acquired data by one order of magnitude. Along with the high bandwidth of the dedicated lock-in camera, vertical scan speeds of more than 40 mm/s are reachable. The high scan speed allows for the realization of inspection systems that are rugged against the external vibrations present on the production floor. For many industrial applications, such as the inspection of wafer bumps, surfaces of mechanical parts, and solar panels, large areas need to be measured. In this case either the instrument or the sample is displaced laterally and several measurements are stitched together. The cycle time of such a system is mostly limited by the stepping time for multiple lateral displacements. A line scanner based on white-light interferometry would eliminate most of the stepping time while maintaining robustness and accuracy. A. Olszak proposed a simple geometry for realizing such a lateral scanning interferometer. We demonstrate that such inclined interferometers can benefit significantly from the fast in-pixel demodulation capabilities of the lock-in camera. One drawback of an inclined observation perspective is that its application is limited to objects with scattering surfaces. We therefore propose an alternate geometry where the incident light is
Accurate Anisotropic Fast Marching for Diffusion-Based Geodesic Tractography
Jbabdi, S.; Bellec, P.; Toro, R.; Daunizeau, J.; Pélégrini-Issac, M.; Benali, H.
2008-01-01
Using geodesics to infer white matter fibre tracts from diffusion-weighted MR data is an attractive method for at least two reasons: (i) the method optimises a global criterion, and hence is less sensitive to local perturbations such as noise or partial volume effects, and (ii) the method is fast, allowing inference on a large number of connections in a reasonable computational time. Here, we propose an improved fast marching algorithm to infer geodesic paths. Specifically, this procedure is designed to achieve accurate front propagation in an anisotropic elliptic medium, such as DTI data. We evaluate the numerical performance of this approach on simulated datasets, as well as its robustness to local perturbations induced by fibre crossing. On real data, we demonstrate the feasibility of extracting geodesics to connect an extended set of brain regions. PMID:18299703
An accurate and simple quantum model for liquid water.
Paesani, Francesco; Zhang, Wei; Case, David A; Cheatham, Thomas E; Voth, Gregory A
2006-11-14
The path-integral molecular dynamics and centroid molecular dynamics methods have been applied to investigate the behavior of liquid water at ambient conditions starting from a recently developed simple point charge/flexible (SPC/Fw) model. Several quantum structural, thermodynamic, and dynamical properties have been computed and compared to the corresponding classical values, as well as to the available experimental data. The path-integral molecular dynamics simulations show that the inclusion of quantum effects results in a less structured liquid with a reduced amount of hydrogen bonding in comparison to its classical analog. The nuclear quantization also leads to a smaller dielectric constant and a larger diffusion coefficient relative to the corresponding classical values. Collective and single molecule time correlation functions show a faster decay than their classical counterparts. Good agreement with the experimental measurements in the low-frequency region is obtained for the quantum infrared spectrum, which also shows a higher intensity and a redshift relative to its classical analog. A modification of the original parametrization of the SPC/Fw model is suggested and tested in order to construct an accurate quantum model, called q-SPC/Fw, for liquid water. The quantum results for several thermodynamic and dynamical properties computed with the new model are shown to be in a significantly better agreement with the experimental data. Finally, a force-matching approach was applied to the q-SPC/Fw model to derive an effective quantum force field for liquid water in which the effects due to the nuclear quantization are explicitly distinguished from those due to the underlying molecular interactions. Thermodynamic and dynamical properties computed using standard classical simulations with this effective quantum potential are found in excellent agreement with those obtained from significantly more computationally demanding full centroid molecular dynamics
A Simple and Accurate Method for Measuring Enzyme Activity.
ERIC Educational Resources Information Center
Yip, Din-Yan
1997-01-01
Presents methods commonly used for investigating enzyme activity using catalase, and a new method for measuring catalase activity that is more reliable and accurate. Provides results that are readily reproduced and quantified. Can also be used for investigations of enzyme properties such as the effects of temperature, pH, inhibitors,…
Learning accurate very fast decision trees from uncertain data streams
NASA Astrophysics Data System (ADS)
Liang, Chunquan; Zhang, Yang; Shi, Peng; Hu, Zhengguo
2015-12-01
Most existing work on data stream classification assumes that the streaming data are precise and definite. This assumption, however, does not always hold in practice, since data uncertainty is ubiquitous in data stream applications due to imprecise measurement, missing values, privacy protection, etc. The goal of this paper is to learn accurate decision tree models from uncertain data streams for classification analysis. On the basis of the very fast decision tree (VFDT) algorithm, we propose an algorithm for constructing an uncertain VFDT with classifiers at the tree leaves (uVFDTc). The uVFDTc algorithm can exploit uncertain information effectively and efficiently in both the learning and the classification phases. In the learning phase, it uses the Hoeffding bound to learn quickly from uncertain data streams and yield reasonable decision trees. In the classification phase, it uses uncertain naive Bayes (UNB) classifiers at the tree leaves to improve classification performance. Experimental results on both synthetic and real-life datasets demonstrate the strong ability of uVFDTc to classify uncertain data streams. The use of UNB at the tree leaves improves the performance of uVFDTc, especially its any-time property, the benefit it derives from exploiting uncertain information, and its robustness against uncertainty.
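The Hoeffding bound at the core of VFDT-style learners is a one-line formula: with probability 1 − δ, the true mean of a variable with range R lies within ε = sqrt(R² ln(1/δ) / (2n)) of the mean observed over n samples. A quick sketch of how ε shrinks as more stream examples arrive:

```python
import math

def hoeffding_bound(value_range, delta, n):
    """Hoeffding bound used by VFDT-style learners: with probability 1 - delta,
    the true mean of a random variable with range R lies within the returned
    epsilon of the mean observed over n samples."""
    return math.sqrt(value_range**2 * math.log(1.0 / delta) / (2.0 * n))

# the split decision: a leaf is grown once the observed gain difference
# between the best and second-best attribute exceeds epsilon
eps_small_n = hoeffding_bound(value_range=1.0, delta=1e-7, n=100)
eps_large_n = hoeffding_bound(value_range=1.0, delta=1e-7, n=100_000)
```

Because ε decays like 1/sqrt(n), a leaf that sees more examples can commit to a split with the same confidence at a finer gain margin, which is what makes the tree's decisions provably close to those of a batch learner.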
Learning fast accurate movements requires intact frontostriatal circuits
Shabbott, Britne; Ravindran, Roshni; Schumacher, Joseph W.; Wasserman, Paula B.; Marder, Karen S.; Mazzoni, Pietro
2013-01-01
The basal ganglia are known to play a crucial role in movement execution, but their importance for motor skill learning remains unclear. Obstacles to our understanding include the lack of a universally accepted definition of motor skill learning (definition confound), and difficulties in distinguishing learning deficits from execution impairments (performance confound). We studied how healthy subjects and subjects with a basal ganglia disorder learn fast accurate reaching movements. We addressed the definition and performance confounds by: (1) focusing on an operationally defined core element of motor skill learning (speed-accuracy learning), and (2) using normal variation in initial performance to separate movement execution impairment from motor learning abnormalities. We measured motor skill learning as performance improvement in a reaching task with a speed-accuracy trade-off. We compared the performance of subjects with Huntington's disease (HD), a neurodegenerative basal ganglia disorder, to that of premanifest carriers of the HD mutation and of control subjects. The initial movements of HD subjects were less skilled (slower and/or less accurate) than those of control subjects. To factor out these differences in initial execution, we modeled the relationship between learning and baseline performance in control subjects. Subjects with HD exhibited a clear learning impairment that was not explained by differences in initial performance. These results support a role for the basal ganglia in both movement execution and motor skill learning. PMID:24312037
A fast and accurate computational approach to protein ionization
Spassov, Velin Z.; Yan, Lisa
2008-01-01
We report a very fast and accurate physics-based method to calculate pH-dependent electrostatic effects in protein molecules and to predict the pK values of individual sites of titration. In addition, a CHARMm-based algorithm is included to construct and refine the spatial coordinates of all hydrogen atoms at a given pH. The present method combines electrostatic energy calculations based on the Generalized Born approximation with an iterative mobile clustering approach to calculate the equilibria of proton binding to multiple titration sites in protein molecules. The use of the GBIM (Generalized Born with Implicit Membrane) CHARMm module makes it possible to model not only water-soluble proteins but membrane proteins as well. The method includes a novel algorithm for preliminary refinement of hydrogen coordinates. Another difference from existing approaches is that, instead of monopeptides, a set of relaxed pentapeptide structures are used as model compounds. Tests on a set of 24 proteins demonstrate the high accuracy of the method. On average, the RMSD between predicted and experimental pK values is close to 0.5 pK units on this data set, and the accuracy is achieved at very low computational cost. The pH-dependent assignment of hydrogen atoms also shows very good agreement with protonation states and hydrogen-bond network observed in neutron-diffraction structures. The method is implemented as a computational protocol in Accelrys Discovery Studio and provides a fast and easy way to study the effect of pH on many important mechanisms such as enzyme catalysis, ligand binding, protein–protein interactions, and protein stability. PMID:18714088
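The single-site limit of the proton-binding equilibria that such methods solve is the Henderson-Hasselbalch relation. A minimal sketch of protonated fraction versus pH (not the paper's multi-site Generalized Born calculation):

```python
def protonated_fraction(pH, pK):
    """Henderson-Hasselbalch: fraction of an acidic site that is protonated at
    a given pH -- the single-site limit of multi-site titration equilibria."""
    return 1.0 / (1.0 + 10.0 ** (pH - pK))

f_low = protonated_fraction(pH=2.0, pK=4.0)   # well below pK: mostly protonated
f_at = protonated_fraction(pH=4.0, pK=4.0)    # at pK: exactly half protonated
f_high = protonated_fraction(pH=6.0, pK=4.0)  # well above pK: mostly deprotonated
```

In a protein, electrostatic coupling between sites shifts each effective pK, which is why the paper must solve the coupled equilibria iteratively rather than applying this formula site by site.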
Fast Monte Carlo Electron-Photon Transport Method and Application in Accurate Radiotherapy
NASA Astrophysics Data System (ADS)
Hao, Lijuan; Sun, Guangyao; Zheng, Huaqing; Song, Jing; Chen, Zhenping; Li, Gui
2014-06-01
The Monte Carlo (MC) method is the most accurate computational method for dose calculation, but its wide application in clinical accurate radiotherapy is hindered by its slow convergence and long computation times. In MC dose-calculation research, the main task is to speed up computation while maintaining high precision. The purpose of this paper is to increase the calculation speed of the MC method for electron-photon transport while preserving high precision, and ultimately to reduce accurate radiotherapy dose-calculation times on an ordinary computer to the level of several hours, which meets the requirement of clinical dose verification. Based on the existing Super Monte Carlo Simulation Program (SuperMC), developed by the FDS Team, a fast MC method for coupled electron-photon transport is presented, with focus on two aspects: first, by simplifying and optimizing the physical model of electron-photon transport, the calculation speed was increased with only a slight reduction in accuracy; second, a variety of MC acceleration methods were applied, for example, reusing information obtained in previous calculations to avoid repeated simulation of particles with identical histories, and applying proper variance-reduction techniques to accelerate the convergence rate of the MC method. The fast MC method was tested on many simple physical models and on clinical cases including nasopharyngeal carcinoma, peripheral lung tumor, and cervical carcinoma. The results show that the fast MC method for electron-photon transport is fast enough to meet the requirement of clinical accurate-radiotherapy dose verification. The method will later be applied to the Accurate/Advanced Radiation Therapy System (ARTS) as an MC dose-verification module.
Toward accurate and fast iris segmentation for iris biometrics.
He, Zhaofeng; Tan, Tieniu; Sun, Zhenan; Qiu, Xianchao
2009-09-01
Iris segmentation is an essential module in iris recognition because it defines the effective image region used for subsequent processing such as feature extraction. Traditional iris segmentation methods often involve an exhaustive search of a large parameter space, which is time consuming and sensitive to noise. To address these problems, this paper presents a novel algorithm for accurate and fast iris segmentation. After efficient reflection removal, an Adaboost-cascade iris detector is first built to extract a rough position of the iris center. Edge points of iris boundaries are then detected, and an elastic model named pulling and pushing is established. Under this model, the center and radius of the circular iris boundaries are iteratively refined in a way driven by the restoring forces of Hooke's law. Furthermore, a smoothing spline-based edge fitting scheme is presented to deal with noncircular iris boundaries. After that, eyelids are localized via edge detection followed by curve fitting. The novelty here is the adoption of a rank filter for noise elimination and a histogram filter for tackling the shape irregularity of eyelids. Finally, eyelashes and shadows are detected via a learned prediction model. This model provides an adaptive threshold for eyelash and shadow detection by analyzing the intensity distributions of different iris regions. Experimental results on three challenging iris image databases demonstrate that the proposed algorithm outperforms state-of-the-art methods in both accuracy and speed. PMID:19574626
Progress in fast, accurate multi-scale climate simulations
Collins, W. D.; Johansen, H.; Evans, K. J.; Woodward, C. S.; Caldwell, P. M.
2015-06-01
We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth with these computational improvements include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allowing more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures such as many-core processors and GPUs. As a result, approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.
A fast approach for accurate content-adaptive mesh generation.
Yang, Yongyi; Wernick, Miles N; Brankov, Jovan G
2003-01-01
Mesh modeling is an important problem with many applications in image processing. A key issue in mesh modeling is how to generate a mesh structure that well represents an image by adapting to its content. We propose a new approach to mesh generation, which is based on a theoretical result derived on the error bound of a mesh representation. In the proposed method, the classical Floyd-Steinberg error-diffusion algorithm is employed to place mesh nodes in the image domain so that their spatial density varies according to the local image content. Delaunay triangulation is next applied to connect the mesh nodes. The result of this approach is that fine mesh elements are placed automatically in regions of the image containing high-frequency features while coarse mesh elements are used to represent smooth areas. The proposed algorithm is noniterative, fast, and easy to implement. Numerical results demonstrate that, at very low computational cost, the proposed approach can produce mesh representations that are more accurate than those produced by several existing methods. Moreover, it is demonstrated that the proposed algorithm performs well with images of various kinds, even in the presence of noise. PMID:18237961
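The node-placement step can be sketched with the classical Floyd-Steinberg error-diffusion algorithm applied to a node-density map (the threshold value is an illustrative choice); in the full method the resulting nodes would then be connected by Delaunay triangulation.

```python
def place_nodes(density, threshold=0.5):
    """Floyd-Steinberg error diffusion over a node-density map:
    cells whose diffused value exceeds the threshold become mesh
    nodes, so the spatial density of nodes tracks the map values."""
    h, w = len(density), len(density[0])
    buf = [row[:] for row in density]
    nodes = []
    for y in range(h):
        for x in range(w):
            old = buf[y][x]
            new = 1.0 if old >= threshold else 0.0
            if new:
                nodes.append((x, y))
            err = old - new
            # Standard Floyd-Steinberg error-diffusion weights.
            if x + 1 < w:
                buf[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    buf[y + 1][x - 1] += err * 3 / 16
                buf[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    buf[y + 1][x + 1] += err * 1 / 16
    return nodes
```

Feeding this a density map derived from local image content (high near edges, low in smooth areas) yields the adaptive node distribution the paper describes; the node count approximately conserves the total "mass" of the density map.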
Fast and accurate Coulomb calculation with Gaussian functions.
Füsti-Molnár, László; Kong, Jing
2005-02-15
Coulomb interaction is one of the major time-consuming components in a density functional theory (DFT) calculation. In the last decade, dramatic progress has been made to improve the efficiency of the Coulomb calculation, including the continuous fast multipole method (CFMM) and the J-engine method, both first developed inside Q-Chem. The most recent development is the Fourier transform Coulomb method of Füsti-Molnár and Pulay, and an improved version of this method has recently been implemented in Q-Chem. It replaces the least efficient part of the previous Coulomb methods with an accurate numerical integration scheme that scales as O(N²) instead of O(N⁴) with the basis size. The result is a much smaller slope in the linear scaling with respect to molecular size, and we demonstrate through a series of benchmark calculations that it speeds up the calculation of the Coulomb energy severalfold over the existing efficient code, i.e., the combination of CFMM and the J-engine, without loss of accuracy. Furthermore, we show that it is complementary to the latter, and together the three methods offer the best performance for the Coulomb part of DFT calculations, making DFT calculations affordable for very large systems involving thousands of basis functions. PMID:15743222
IRIS: Towards an Accurate and Fast Stage Weight Prediction Method
NASA Astrophysics Data System (ADS)
Taponier, V.; Balu, A.
2002-01-01
The knowledge of the structural mass fraction (or mass ratio) of a given stage, which affects the performance of a rocket, is essential for the analysis of new or upgraded launchers or stages, a need heightened by the rapid evolution of space programs and the necessity of adapting them to market needs. The availability of this highly scattered variable, which ranges between 0.05 and 0.15, is of primary importance in the early steps of preliminary design studies. At the start of the staging and performance studies, the lack of frozen weight data (to be obtained later from propulsion, trajectory, and sizing studies) forces reliance on rough estimates, generally derived from printed sources and adapted. When needed, these estimates can be consolidated through a specific analysis activity involving several techniques, at the cost of additional effort and time. The present empirical approach thus yields approximate values (i.e., not necessarily accurate or consistent), which introduces inaccuracy in the results and, consequently, difficulty in ranking the performance of multiple options, as well as longer processing times. This is a classical harsh fact of preliminary design system studies, insufficiently discussed to date. It therefore appears highly desirable to have, for all evaluation activities, a reliable, fast, and easy-to-use weight or mass fraction prediction method. Additionally, such a method should allow a pre-selection of alternative preliminary configurations, making a global system approach possible. For that purpose, a modeling effort has been undertaken, whose objective is a parametric formulation of the mass fraction expressed from a limited number of parameters available in the early steps of a project. It is based on the innovative use of a statistical method applicable to a variable that is a function of several independent parameters. A specific polynomial generator
Massively Parallel Processing for Fast and Accurate Stamping Simulations
NASA Astrophysics Data System (ADS)
Gress, Jeffrey J.; Xu, Siguang; Joshi, Ramesh; Wang, Chuan-tao; Paul, Sabu
2005-08-01
The competitive automotive market drives automotive manufacturers to speed up vehicle development cycles and reduce lead time. Fast tooling development is one of the key areas supporting fast and short vehicle development programs (VDP). In the past ten years, stamping simulation has become the most effective validation tool for predicting and resolving potential formability and quality problems before the dies are physically made. Stamping simulation and formability analysis have become a critical business segment in GM's math-based die engineering process. As simulation has become one of the major production tools in the engineering factory, speed and accuracy are the two most important measures of stamping simulation technology. The speed and time-in-system of forming analysis become even more critical to supporting fast VDPs and tooling readiness. Since 1997, the General Motors Die Center has worked jointly with our software vendor to develop and implement a parallel version of the simulation software for mass-production analysis applications. By 2001, this technology had matured in the form of distributed memory processing (DMP) of draw die simulations in a networked distributed-memory computing environment. In 2004, the technology was refined to massively parallel processing (MPP) and extended to line die forming analysis (draw, trim, flange, and associated spring-back) running on a dedicated computing environment. The evolution of this technology, the insight gained through the implementation of DMP/MPP technology, and performance benchmarks are discussed in this publication.
Fast and Accurate Construction of Confidence Intervals for Heritability.
Schweiger, Regev; Kaufman, Shachar; Laaksonen, Reijo; Kleber, Marcus E; März, Winfried; Eskin, Eleazar; Rosset, Saharon; Halperin, Eran
2016-06-01
Estimation of heritability is fundamental in genetic studies. Recently, heritability estimation using linear mixed models (LMMs) has gained popularity because these estimates can be obtained from unrelated individuals collected in genome-wide association studies. Typically, heritability estimation under LMMs uses the restricted maximum likelihood (REML) approach. Existing methods for the construction of confidence intervals and estimators of SEs for REML rely on asymptotic properties. However, these assumptions are often violated because of the bounded parameter space, statistical dependencies, and limited sample size, leading to biased estimates and inflated or deflated confidence intervals. Here, we show that the estimation of confidence intervals by state-of-the-art methods is inaccurate, especially when the true heritability is relatively low or relatively high. We further show that these inaccuracies occur in datasets including thousands of individuals. Such biases are present, for example, in estimates of heritability of gene expression in the Genotype-Tissue Expression project and of lipid profiles in the Ludwigshafen Risk and Cardiovascular Health study. We also show that often the probability that the genetic component is estimated as 0 is high even when the true heritability is bounded away from 0, emphasizing the need for accurate confidence intervals. We propose a computationally efficient method, ALBI (accurate LMM-based heritability bootstrap confidence intervals), for estimating the distribution of the heritability estimator and for constructing accurate confidence intervals. Our method can be used as an add-on to existing methods for estimating heritability and variance components, such as GCTA, FaST-LMM, GEMMA, or EMMAX. PMID:27259052
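The bootstrap idea underlying accurate confidence-interval construction can be sketched generically: resimulate data at the point estimate, re-estimate on each replicate, and read off percentiles of the replicated estimates. This is a plain parametric bootstrap, not the ALBI algorithm itself, which works with the tailored distribution of the REML heritability estimator.

```python
import random

def bootstrap_ci(estimate_fn, simulate_fn, theta_hat,
                 n_boot=1000, alpha=0.05, seed=0):
    """Generic parametric-bootstrap confidence interval: simulate
    datasets at the point estimate theta_hat, re-apply the estimator
    to each, and take the alpha/2 and 1 - alpha/2 percentiles."""
    rng = random.Random(seed)
    reps = sorted(estimate_fn(simulate_fn(theta_hat, rng))
                  for _ in range(n_boot))
    lo = reps[int((alpha / 2) * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Toy model: estimating a normal mean from 100 observations.
def simulate(mu, rng):
    return [rng.gauss(mu, 1.0) for _ in range(100)]

def estimate(xs):
    return sum(xs) / len(xs)
```

For the toy normal-mean model the percentile interval recovers the familiar ±1.96/√n width; the same resimulate-and-re-estimate loop applies to any estimator whose sampling model can be simulated.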
Fast and simple fat grafting of the breast
Kristensen, Rasmus Nygård; Gunnarsson, Gudjon L.; Børsen-Koch, Mikkel; Reddy, Ashwin; Ømark, Henrik; Sørensen, Jens Ahm
2015-01-01
Fat grafting (FG) is being used at an escalating rate to correct shape and volume after all types of breast surgery, in order to optimize the aesthetic result, in spite of an ongoing debate about its oncologic safety. In this paper we demonstrate our simple and fast sedimentation-based FG technique in the attached video as visualized surgery. We have used this simple approach in 348 procedures in 176 women to optimize and correct the aesthetic result following all types of breast surgery. We prefer this simple technique because no technique has been shown to be superior to other, more costly techniques, and because there are still questions about the oncologic safety of using adipose-derived stem cells (ADSC). Simple fat harvesting using low vacuum, with preparation by sedimentation, is a fast and effective way to perform FG for correction of shape and volume deficits of the breast following both ablative surgery and benign conditions, with a high margin of safety. PMID:26645013
A fast and accurate decoder for underwater acoustic telemetry
NASA Astrophysics Data System (ADS)
Ingraham, J. M.; Deng, Z. D.; Li, X.; Fu, T.; McMichael, G. A.; Trumbo, B. A.
2014-07-01
The Juvenile Salmon Acoustic Telemetry System, developed by the U.S. Army Corps of Engineers, Portland District, has been used to monitor the survival of juvenile salmonids passing through hydroelectric facilities in the Federal Columbia River Power System. Cabled hydrophone arrays deployed at dams receive coded transmissions sent from acoustic transmitters implanted in fish. The signals' time of arrival on different hydrophones is used to track fish in 3D. In this article, a new algorithm that decodes the received transmissions is described and the results are compared to results for the previous decoding algorithm. In a laboratory environment, the new decoder was able to decode signals with lower signal strength than the previous decoder, effectively increasing decoding efficiency and range. In field testing, the new algorithm decoded significantly more signals than the previous decoder and three-dimensional tracking experiments showed that the new decoder's time-of-arrival estimates were accurate. At multiple distances from hydrophones, the new algorithm tracked more points more accurately than the previous decoder. The new algorithm was also more than 10 times faster, which is critical for real-time applications on an embedded system.
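Time-of-arrival estimation for a known transmitted code is commonly done with a matched filter; the sketch below illustrates that generic idea (it is not the JSATS decoder described in the article): slide the template over the received samples and take the lag with the largest correlation.

```python
def time_of_arrival(signal, template):
    """Matched-filter sketch: return the sample lag at which the
    known code template correlates most strongly with the received
    signal, a standard way to estimate time of arrival."""
    best_lag, best_score = 0, float("-inf")
    n, m = len(signal), len(template)
    for lag in range(n - m + 1):
        # Inner product of the template with the aligned signal window.
        score = sum(signal[lag + i] * template[i] for i in range(m))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag
```

Codes with low autocorrelation sidelobes (Barker-like sequences) make the correlation peak unambiguous; differences in the estimated lag across hydrophones are what feed the 3D tracking solve.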
BBMap: A Fast, Accurate, Splice-Aware Aligner
Bushnell, Brian
2014-03-17
Alignment of reads is one of the primary computational tasks in bioinformatics. Of paramount importance to resequencing, alignment is also crucial to other areas: quality control, scaffolding, string-graph assembly, homology detection, assembly evaluation, error-correction, expression quantification, and even as a tool to evaluate other tools. An optimal aligner would greatly improve virtually any sequencing process, but optimal alignment is prohibitively expensive for gigabases of data. Here, we will present BBMap [1], a fast splice-aware aligner for short and long reads. We will demonstrate that BBMap has superior speed, sensitivity, and specificity to alternative high-throughput aligners bowtie2 [2], bwa [3], smalt [4], GSNAP [5], and BLASR [6].
A new simple multidomain fast multipole boundary element method
NASA Astrophysics Data System (ADS)
Huang, S.; Liu, Y. J.
2016-09-01
A simple multidomain fast multipole boundary element method (BEM) for solving potential problems is presented in this paper, which can be applied to solve a true multidomain problem or a large-scale single domain problem using the domain decomposition technique. In this multidomain BEM, the coefficient matrix is formed simply by assembling the coefficient matrices of each subdomain and the interface conditions between subdomains without eliminating any unknown variables on the interfaces. Compared with other conventional multidomain BEM approaches, this new approach is more efficient with the fast multipole method, regardless of how the subdomains are connected. Instead of solving the linear system of equations directly, the entire coefficient matrix is partitioned and decomposed using a Schur complement in this new approach. Numerical results show that the new multidomain fast multipole BEM uses fewer iterations in most cases with the iterative equation solver and less CPU time than the traditional fast multipole BEM in solving large-scale BEM models. A large-scale fuel cell model with more than 6 million elements was solved successfully on a cluster within 3 h using the new multidomain fast multipole BEM.
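The Schur-complement partitioning mentioned above can be illustrated on a small blocked linear system: eliminate the first block of unknowns through S = D - C A⁻¹ B and back-substitute. This shows only the elimination idea, without any fast multipole machinery, and the helper `solve` is a plain Gaussian elimination written for the example.

```python
def solve(a, b):
    """Solve a dense linear system by Gaussian elimination with
    partial pivoting (adequate for the small illustrative blocks)."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            factor = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= factor * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def schur_solve(A, B, C, D, f, g):
    """Solve [[A, B], [C, D]] [x; y] = [f; g] by eliminating x with
    the Schur complement S = D - C A^{-1} B, the same partitioning
    idea applied to the assembled multidomain coefficient matrix."""
    nA, nD = len(A), len(D)
    # A^{-1} applied to f and to each column of B, via repeated solves.
    Aif = solve(A, f)
    AiB = [solve(A, [B[i][j] for i in range(nA)]) for j in range(nD)]
    S = [[D[i][j] - sum(C[i][k] * AiB[j][k] for k in range(nA))
          for j in range(nD)] for i in range(nD)]
    rhs = [g[i] - sum(C[i][k] * Aif[k] for k in range(nA)) for i in range(nD)]
    y = solve(S, rhs)
    # Back-substitute: x = A^{-1} f - (A^{-1} B) y.
    x = [Aif[i] - sum(AiB[j][i] * y[j] for j in range(nD)) for i in range(nA)]
    return x, y
```

In the paper's setting each diagonal block corresponds to a subdomain and the off-diagonal blocks carry the interface conditions; eliminating subdomain unknowns leaves a much smaller interface system.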
Fast and accurate automated cell boundary determination for fluorescence microscopy
NASA Astrophysics Data System (ADS)
Arce, Stephen Hugo; Wu, Pei-Hsun; Tseng, Yiider
2013-07-01
Detailed measurement of cell phenotype information from digital fluorescence images has the potential to greatly advance biomedicine in various disciplines such as patient diagnostics or drug screening. Yet, the complexity of cell conformations presents a major barrier preventing effective determination of cell boundaries, and introduces measurement error that propagates throughout subsequent assessment of cellular parameters and statistical analysis. State-of-the-art image segmentation techniques that require user-interaction, prolonged computation time and specialized training cannot adequately provide the support for high content platforms, which often sacrifice resolution to foster the speedy collection of massive amounts of cellular data. This work introduces a strategy that allows us to rapidly obtain accurate cell boundaries from digital fluorescent images in an automated format. Hence, this new method has broad applicability to promote biotechnology.
A fast and accurate FPGA based QRS detection system.
Shukla, Ashish; Macchiarulo, Luca
2008-01-01
An accurate Field Programmable Gate Array (FPGA) based ECG analysis system is described in this paper. The design, based on a popular software QRS detection algorithm, calculates the threshold value for the next peak detection cycle from the median of the eight previously detected peaks. The hardware design has accuracy in excess of 96% in correctly detecting beats when tested with a subset of five 30-minute data records obtained from the MIT-BIH Arrhythmia database. The design, implemented using a proprietary design tool (System Generator), is an extension of our previous work; it uses 76% of the resources available in a small-sized FPGA device (Xilinx Spartan xc3s500), has higher detection accuracy than our previous design, and takes almost half the analysis time of the software-based approach. PMID:19163797
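The adaptive-threshold rule, deriving the next detection threshold from the median of the eight most recent peaks, can be sketched in software as follows; the scale factor and the simple local-maximum test are hypothetical simplifications of the hardware design.

```python
from collections import deque
from statistics import median

def detect_peaks(samples, init_threshold, scale=0.5):
    """Adaptive-threshold peak detection sketch: keep the last eight
    detected peak amplitudes and set each new threshold as a fraction
    of their median, so the detector tracks slow amplitude drift."""
    recent = deque(maxlen=8)  # ring buffer of the last 8 peak heights
    threshold = init_threshold
    peaks = []
    for i in range(1, len(samples) - 1):
        s = samples[i]
        # Local maximum above the current adaptive threshold.
        if s > threshold and s >= samples[i - 1] and s > samples[i + 1]:
            peaks.append(i)
            recent.append(s)
            threshold = scale * median(recent)
    return peaks
```

The median (rather than the mean) of recent peaks makes the threshold robust to a single aberrant beat, which is what makes the rule attractive in fixed-point hardware.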
Orbital Advection by Interpolation: A Fast and Accurate Numerical Scheme for Super-Fast MHD Flows
Johnson, B M; Guan, X; Gammie, F
2008-04-11
In numerical models of thin astrophysical disks that use an Eulerian scheme, gas orbits supersonically through a fixed grid. As a result the timestep is sharply limited by the Courant condition. Also, because the mean flow speed with respect to the grid varies with position, the truncation error varies systematically with position. For hydrodynamic (unmagnetized) disks an algorithm called FARGO has been developed that advects the gas along its mean orbit using a separate interpolation substep. This relaxes the constraint imposed by the Courant condition, which now depends only on the peculiar velocity of the gas, and results in a truncation error that is more nearly independent of position. This paper describes a FARGO-like algorithm suitable for evolving magnetized disks. Our method is second order accurate on a smooth flow and preserves ∇ · B = 0 to machine precision. The main restriction is that B must be discretized on a staggered mesh. We give a detailed description of an implementation of the code and demonstrate that it produces the expected results on linear and nonlinear problems. We also point out how the scheme might be generalized to make the integration of other supersonic/super-fast flows more efficient. Although our scheme reduces the variation of truncation error with position, it does not eliminate it. We show that the residual position dependence leads to characteristic radial variations in the density over long integrations.
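The FARGO-like advection substep, transporting each ring by its mean orbital motion via interpolation, can be sketched in 1D: split the shift into an exact integer part plus a linear-interpolation step for the fractional remainder. The published scheme works on a staggered mesh with higher-order interpolation; this is only the core idea.

```python
def advect_periodic(u, shift):
    """Shift a periodic 1D profile by a possibly non-integer number
    of cells: an exact integer roll (no truncation error) followed by
    linear interpolation for the fractional remainder, mirroring the
    FARGO split between mean orbital motion and residual motion."""
    n = len(u)
    k = int(shift // 1)          # integer part of the shift (floor)
    f = shift - k                # fractional part in [0, 1)
    rolled = [u[(i - k) % n] for i in range(n)]
    # u(i - k - f) ~ (1 - f) * u(i - k) + f * u(i - k - 1)
    return [(1 - f) * rolled[i] + f * rolled[(i - 1) % n] for i in range(n)]
```

Because the integer part of the orbital shift is exact, only the small fractional remainder contributes truncation error, which is why the error becomes nearly independent of radius.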
Fast AT: A simple procedure for quasi direct orientation
NASA Astrophysics Data System (ADS)
Blázquez, M.; Colomina, I.
2012-07-01
Over the past two decades, the development of Global Navigation Satellite System (GNSS) technology, inertial navigation technology, and Inertial Navigation Systems (INS), and their application to sensor orientation in photogrammetry and remote sensing, has led to more precise, accurate, reliable and cost-efficient orientation and calibration methods and procedures. Today, most airborne photogrammetric and remote sensing systems are equipped with GNSS receivers and inertial sensors. To a large extent, and more or less independently of the imaging geometry and sensor type, orientation is performed with the "direct" and "integrated" methods. In this paper we introduce a new orientation method for frame images that we call "Fast AT". The new method combines image measurements, ground control and aerial control observations in novel quantitative and qualitative ways. Depending on project specifications, Fast AT can be a robust alternative to direct orientation and, at the very least, a fast quality control tool for any orientation task. We analyze the performance of Fast AT with analogue and digital frame imagery and draw conclusions on its general properties.
Robust, accurate and fast automatic segmentation of the spinal cord.
De Leener, Benjamin; Kadoury, Samuel; Cohen-Adad, Julien
2014-09-01
Spinal cord segmentation provides measures of atrophy and facilitates group analysis via inter-subject correspondence. Automating this procedure enables studies with large throughput and minimizes user bias. Although several automatic segmentation methods exist, they are often restricted in terms of image contrast and field-of-view. This paper presents a new automatic segmentation method (PropSeg) optimized for robustness, accuracy and speed. The algorithm is based on the propagation of a deformable model and is divided into three parts: firstly, an initialization step detects the spinal cord position and orientation using a circular Hough transform on multiple axial slices rostral and caudal to the starting plane and builds an initial elliptical tubular mesh. Secondly, a low-resolution deformable model is propagated along the spinal cord. To deal with highly variable contrast levels between the spinal cord and the cerebrospinal fluid, the deformation is coupled with a local contrast-to-noise adaptation at each iteration. Thirdly, a refinement process and a global deformation are applied on the propagated mesh to provide an accurate segmentation of the spinal cord. Validation was performed in 15 healthy subjects and two patients with spinal cord injury, using T1- and T2-weighted images of the entire spinal cord and on multiecho T2*-weighted images. Our method was compared against manual segmentation and against an active surface method. Results show high precision for all the MR sequences. Dice coefficients were 0.9 for the T1- and T2-weighted cohorts and 0.86 for the T2*-weighted images. The proposed method runs in less than 1 min on a normal computer and can be used to quantify morphological features such as cross-sectional area along the whole spinal cord. PMID:24780696
Fast and Accurate Detection of Multiple Quantitative Trait Loci
Nettelblad, Carl; Holmgren, Sverker
2013-01-01
We present a new computational scheme that enables efficient and reliable quantitative trait loci (QTL) scans for experimental populations. Using a standard brute-force exhaustive search effectively prohibits accurate QTL scans involving more than two loci to be performed in practice, at least if permutation testing is used to determine significance. Some more elaborate global optimization approaches, for example, DIRECT have been adopted earlier to QTL search problems. Dramatic speedups have been reported for high-dimensional scans. However, since a heuristic termination criterion must be used in these types of algorithms, the accuracy of the optimization process cannot be guaranteed. Indeed, earlier results show that a small bias in the significance thresholds is sometimes introduced. Our new optimization scheme, PruneDIRECT, is based on an analysis leading to a computable (Lipschitz) bound on the slope of a transformed objective function. The bound is derived for both infinite- and finite-size populations. Introducing a Lipschitz bound in DIRECT leads to an algorithm related to classical Lipschitz optimization. Regions in the search space can be permanently excluded (pruned) during the optimization process. Heuristic termination criteria can thus be avoided. Hence, PruneDIRECT has a well-defined error bound and can in practice be guaranteed to be equivalent to a corresponding exhaustive search. We present simulation results that show that for simultaneous mapping of three QTLs using permutation testing, PruneDIRECT is typically more than 50 times faster than exhaustive search. The speedup is higher for stronger QTL. This could be used to quickly detect strong candidate eQTL networks. PMID:23919387
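The pruning idea behind PruneDIRECT, permanently excluding regions using a Lipschitz bound on the objective's slope, can be sketched with a simple 1D branch-and-prune maximizer. The interval handling here is a toy version, not the DIRECT partitioning, and `tol` is an illustrative stopping parameter.

```python
def lipschitz_prune_max(f, lo, hi, L, tol=1e-3):
    """Branch-and-prune maximization sketch: with |f'| <= L, no point
    of an interval with midpoint value v and half-width h can exceed
    v + L * h, so intervals whose optimistic bound falls below the
    best value found are excluded permanently."""
    best = float("-inf")
    stack = [(lo, hi)]
    while stack:
        a, b = stack.pop()
        mid, half = (a + b) / 2, (b - a) / 2
        val = f(mid)
        best = max(best, val)
        # Prune: no point in [a, b] can beat `best` by more than tol.
        if val + L * half <= best + tol:
            continue
        stack.append((a, mid))
        stack.append((mid, b))
    return best
```

Because pruning uses a rigorous bound rather than a heuristic termination criterion, the returned value is guaranteed to be within `tol` of the true maximum, which mirrors the well-defined error bound claimed for PruneDIRECT.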
CT-Analyst: fast and accurate CBR emergency assessment
NASA Astrophysics Data System (ADS)
Boris, Jay; Fulton, Jack E., Jr.; Obenschain, Keith; Patnaik, Gopal; Young, Theodore, Jr.
2004-08-01
An urban-oriented emergency assessment system for airborne Chemical, Biological, and Radiological (CBR) threats, called CT-Analyst and based on new principles, gives greater accuracy and much greater speed than possible with current alternatives. This paper explains how this has been done. The increased accuracy derives from detailed, three-dimensional CFD computations including solar heating, buoyancy, complete building geometry specification, trees, wind fluctuations, and particle and droplet distributions (as appropriate). This paper shows how a finite number of such computations for a given area can be extended to all wind directions and speeds, and to all likely sources and source locations, using a new data structure called Dispersion Nomographs. Finally, we demonstrate a portable, entirely graphical software tool called CT-Analyst that embodies this new, high-resolution technology and runs effectively on small personal computers. Real-time users don't have to wait for results because accurate answers are available with near-zero latency (that is, 10 - 20 scenarios per second). Entire sequences of cases (e.g., a continuously changing source location or wind direction) can be computed and displayed as continuous-action movies. Since the underlying database has been precomputed, the door is wide open for important new real-time, zero-latency functions such as sensor data fusion, backtracking to an unknown source location, and even evacuation route planning. Extensions of the technology to sensor location optimization, buildings, tunnels, and integration with other advanced technologies, e.g., micrometeorology or detailed wind field measurements, are discussed briefly here.
Fast and accurate predictions of covalent bonds in chemical space.
Chang, K Y Samuel; Fias, Stijn; Ramakrishnan, Raghunathan; von Lilienfeld, O Anatole
2016-05-01
We assess the predictive accuracy of perturbation theory based estimates of changes in covalent bonding due to linear alchemical interpolations among molecules. We have investigated σ bonding to hydrogen, as well as σ and π bonding between main-group elements, occurring in small sets of iso-valence-electronic molecules with elements drawn from second to fourth rows in the p-block of the periodic table. Numerical evidence suggests that first order Taylor expansions of covalent bonding potentials can achieve high accuracy if (i) the alchemical interpolation is vertical (fixed geometry), (ii) it involves elements from the third and fourth rows of the periodic table, and (iii) an optimal reference geometry is used. This leads to near linear changes in the bonding potential, resulting in analytical predictions with chemical accuracy (∼1 kcal/mol). Second order estimates deteriorate the prediction. If initial and final molecules differ not only in composition but also in geometry, all estimates become substantially worse, with second order being slightly more accurate than first order. The independent particle approximation based second order perturbation theory performs poorly when compared to the coupled perturbed or finite difference approach. Taylor series expansions up to fourth order of the potential energy curve of highly symmetric systems indicate a finite radius of convergence, as illustrated for the alchemical stretching of H2+. Results are presented for (i) covalent bonds to hydrogen in 12 molecules with 8 valence electrons (CH4, NH3, H2O, HF, SiH4, PH3, H2S, HCl, GeH4, AsH3, H2Se, HBr); (ii) main-group single bonds in 9 molecules with 14 valence electrons (CH3F, CH3Cl, CH3Br, SiH3F, SiH3Cl, SiH3Br, GeH3F, GeH3Cl, GeH3Br); (iii) main-group double bonds in 9 molecules with 12 valence electrons (CH2O, CH2S, CH2Se, SiH2O, SiH2S, SiH2Se, GeH2O, GeH2S, GeH2Se); (iv) main-group triple bonds in 9 molecules with 10 valence electrons (HCN, HCP, HCAs, HSiN, HSi
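The first-order Taylor estimate assessed in the paper can be sketched generically: predict the value at the end of the interpolation (λ = 1) from the value and slope at λ = 0. The finite-difference slope and the sample energy profile below are illustrative stand-ins for the analytical alchemical derivatives used in the study.

```python
def first_order_prediction(E, dlam=1e-4):
    """First-order Taylor estimate of E(1) from the value and a
    forward-difference slope at lambda = 0:
        E(1) ~ E(0) + dE/dlambda|_0
    For a near-linear profile E(lambda), the neglected curvature term
    is small and the prediction is accurate."""
    e0 = E(0.0)
    slope = (E(dlam) - e0) / dlam
    return e0 + slope

# Near-linear illustrative profile: the quadratic term bounds the error.
linear_like = lambda lam: -1.0 + 0.5 * lam + 0.01 * lam ** 2
```

The prediction error equals the neglected curvature contribution (here 0.01), which mirrors the paper's finding that the first-order estimate is accurate precisely when the interpolation makes the bonding potential change nearly linearly.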
Fast and accurate predictions of covalent bonds in chemical space
NASA Astrophysics Data System (ADS)
Chang, K. Y. Samuel; Fias, Stijn; Ramakrishnan, Raghunathan; von Lilienfeld, O. Anatole
2016-05-01
We assess the predictive accuracy of perturbation theory based estimates of changes in covalent bonding due to linear alchemical interpolations among molecules. We have investigated σ bonding to hydrogen, as well as σ and π bonding between main-group elements, occurring in small sets of iso-valence-electronic molecules with elements drawn from second to fourth rows in the p-block of the periodic table. Numerical evidence suggests that first order Taylor expansions of covalent bonding potentials can achieve high accuracy if (i) the alchemical interpolation is vertical (fixed geometry), (ii) it involves elements from the third and fourth rows of the periodic table, and (iii) an optimal reference geometry is used. This leads to near linear changes in the bonding potential, resulting in analytical predictions with chemical accuracy (∼1 kcal/mol). Second order estimates deteriorate the prediction. If initial and final molecules differ not only in composition but also in geometry, all estimates become substantially worse, with second order being slightly more accurate than first order. The independent particle approximation based second order perturbation theory performs poorly when compared to the coupled perturbed or finite difference approach. Taylor series expansions up to fourth order of the potential energy curve of highly symmetric systems indicate a finite radius of convergence, as illustrated for the alchemical stretching of H2+. Results are presented for (i) covalent bonds to hydrogen in 12 molecules with 8 valence electrons (CH4, NH3, H2O, HF, SiH4, PH3, H2S, HCl, GeH4, AsH3, H2Se, HBr); (ii) main-group single bonds in 9 molecules with 14 valence electrons (CH3F, CH3Cl, CH3Br, SiH3F, SiH3Cl, SiH3Br, GeH3F, GeH3Cl, GeH3Br); (iii) main-group double bonds in 9 molecules with 12 valence electrons (CH2O, CH2S, CH2Se, SiH2O, SiH2S, SiH2Se, GeH2O, GeH2S, GeH2Se); (iv) main-group triple bonds in 9 molecules with 10 valence electrons (HCN, HCP, HCAs, HSiN, HSi
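The first-order estimate described in this abstract reduces, in the simplest setting, to a one-term Taylor expansion along the interpolation parameter. The sketch below illustrates that generic idea on a hypothetical smooth energy profile; the quadratic form and its coefficients are invented for illustration and are not taken from the paper.

```python
# Hypothetical smooth "energy" along an alchemical interpolation parameter
# lambda in [0, 1]; stands in for an actual electronic-structure calculation.
def energy(lmbda):
    return -1.0 + 0.8 * lmbda + 0.05 * lmbda ** 2  # mildly nonlinear


def first_order_estimate(e, h=1e-4):
    """Predict e(1) from e(0) and a finite-difference derivative at 0."""
    de = (e(h) - e(0.0)) / h   # dE/dlambda at lambda = 0
    return e(0.0) + de         # E(0) + 1 * dE/dlambda


estimate = first_order_estimate(energy)
exact = energy(1.0)
# For a near-linear profile the first-order error is just the small
# neglected curvature term (here the 0.05 quadratic coefficient).
error = abs(estimate - exact)
```

For a near-linear bonding potential the neglected curvature term is small, which is consistent with the vertical, heavy-element interpolations above reaching chemical accuracy.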
NASA Astrophysics Data System (ADS)
Seifert, S.; Steenbergen, J. H. L.; van Dam, H. T.; Schaart, D. R.
2012-09-01
In this work we present a measurement setup for the determination of scintillation pulse shapes of fast scintillators. It is based on a time-correlated single photon counting approach that utilizes the correlation between 511 keV annihilation photons to produce start and stop signals in two separate crystals. The measurement is potentially cost-effective and simple to set up while maintaining an excellent system timing resolution of 125 ps. As a proof of concept, the scintillation photon arrival time histograms were recorded for two well-known, fast scintillators: LYSO:Ce and LaBr3:5%Ce. The scintillation pulse shapes were modeled as a linear combination of exponentially distributed charge transfer and photon emission processes. Correcting for the system timing resolution, the exponential time constants were extracted from the recorded histograms. A decay time of 43 ns and a rise time of 72 ps were determined for LYSO:Ce, thus demonstrating the capability of the system to accurately measure very fast rise times. In the case of LaBr3:5%Ce two processes were observed to contribute to the rising edge of the scintillation pulse. The faster component (270 ps) contributes 72% to the rising edge of the scintillation pulse while the second, slower component (2.0 ns) contributes 27%. The decay of the LaBr3:5%Ce scintillation pulse was measured to be 15.4 ns with a small contribution (2%) of a component with a larger time constant (130 ns).
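The pulse-shape model referred to above is a linear combination of exponential rise and decay processes. A minimal single rise/decay sketch follows; the normalized bi-exponential form is a standard assumption, and the paper itself fits combinations of such terms convolved with the system response.

```python
import math


def pulse(t, tau_rise, tau_decay):
    """Normalized bi-exponential scintillation pulse (one rise/decay pair).
    Integrates to 1 over t >= 0; zero before the interaction."""
    if t <= 0:
        return 0.0
    return (math.exp(-t / tau_decay) - math.exp(-t / tau_rise)) / (tau_decay - tau_rise)


# Time constants reported above for LYSO:Ce, in ns
tau_r, tau_d = 0.072, 43.0
early = pulse(0.2, tau_r, tau_d)    # shortly after the fast rise
late = pulse(100.0, tau_r, tau_d)   # a couple of decay times later
```

The 72 ps rise time means the pulse reaches its peak almost immediately on the nanosecond scale, after which the 43 ns decay dominates.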
Effects of Fast Simple Numerical Calculation Training on Neural Systems
Takeuchi, Hikaru; Nagase, Tomomi; Taki, Yasuyuki; Sassa, Yuko; Hashizume, Hiroshi; Nouchi, Rui; Kawashima, Ryuta
2016-01-01
Cognitive training, including fast simple numerical calculation (FSNC), has been shown to improve performance on untrained processing speed and executive function tasks in the elderly. However, the effects of FSNC training on cognitive functions in the young and on neural mechanisms remain unknown. We investigated the effects of 1-week intensive FSNC training on cognitive function, regional gray matter volume (rGMV), and regional cerebral blood flow at rest (resting rCBF) in healthy young adults. FSNC training was associated with improvements in performance on simple processing speed, speeded executive functioning, and simple and complex arithmetic tasks. FSNC training was associated with a reduction in rGMV and an increase in resting rCBF in the frontopolar areas and a weak but widespread increase in resting rCBF in an anatomical cluster in the posterior region. These results provide direct evidence that FSNC training alone can improve performance on processing speed and executive function tasks as well as plasticity of brain structures and perfusion. Our results also indicate that changes in neural systems in the frontopolar areas may underlie these cognitive improvements. PMID:26881117
Weaver, Phoebe G; Jagow, Devin M; Portune, Cameron M; Kenney, John W
2016-01-01
The design and operation of a simple liquid nitrogen Dewar/cryostat apparatus based upon a small fused silica optical Dewar, a thermocouple assembly, and a CCD spectrograph are described. The experiments for which this Dewar/cryostat is designed require fast sample loading, fast sample freezing, fast alignment of the sample, accurate and stable sample temperatures, and small size and portability of the Dewar/cryostat cryogenic unit. When coupled with the fast data acquisition rates of the CCD spectrograph, this Dewar/cryostat is capable of supporting cryogenic luminescence spectroscopic measurements on luminescent samples at a series of known, stable temperatures in the 77-300 K range. A temperature-dependent study of the oxygen quenching of luminescence in a rhodium(III) transition metal complex is presented as an example of the type of investigation possible with this Dewar/cryostat. In the context of this apparatus, a stable temperature for cryogenic spectroscopy means a luminescent sample that is thermally equilibrated with either liquid nitrogen or gaseous nitrogen at a known measurable temperature that does not vary (ΔT < 0.1 K) during the short time scale (~1-10 sec) of the spectroscopic measurement by the CCD. The Dewar/cryostat works by taking advantage of the positive thermal gradient dT/dh that develops above the liquid nitrogen level in the Dewar, where h is the height of the sample above the liquid nitrogen level. The slow evaporation of the liquid nitrogen results in a slow increase in h over several hours and a consequent slow increase in the sample temperature T over this time period. A quickly acquired luminescence spectrum effectively catches the sample at a constant, thermally equilibrated temperature. PMID:27501355
FastRNABindR: Fast and Accurate Prediction of Protein-RNA Interface Residues.
El-Manzalawy, Yasser; Abbas, Mostafa; Malluhi, Qutaibah; Honavar, Vasant
2016-01-01
A wide range of biological processes, including regulation of gene expression, protein synthesis, and replication and assembly of many viruses are mediated by RNA-protein interactions. However, experimental determination of the structures of protein-RNA complexes is expensive and technically challenging. Hence, a number of computational tools have been developed for predicting protein-RNA interfaces. Some of the state-of-the-art protein-RNA interface predictors rely on position-specific scoring matrix (PSSM)-based encoding of the protein sequences. The computational efforts needed for generating PSSMs severely limits the practical utility of protein-RNA interface prediction servers. In this work, we experiment with two approaches, random sampling and sequence similarity reduction, for extracting a representative reference database of protein sequences from more than 50 million protein sequences in UniRef100. Our results suggest that random sampled databases produce better PSSM profiles (in terms of the number of hits used to generate the profile and the distance of the generated profile to the corresponding profile generated using the entire UniRef100 data as well as the accuracy of the machine learning classifier trained using these profiles). Based on our results, we developed FastRNABindR, an improved version of RNABindR for predicting protein-RNA interface residues using PSSM profiles generated using 1% of the UniRef100 sequences sampled uniformly at random. To the best of our knowledge, FastRNABindR is the only protein-RNA interface residue prediction online server that requires generation of PSSM profiles for query sequences and accepts hundreds of protein sequences per submission. Our approach for determining the optimal BLAST database for a protein-RNA interface residue classification task has the potential of substantially speeding up, and hence increasing the practical utility of, other amino acid sequence based predictors of protein-protein and protein
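The uniform random sampling approach can be sketched as a simple Bernoulli thinning of the sequence database. The function name and record format below are hypothetical; FastRNABindR itself samples 1% of UniRef100 before building its BLAST database.

```python
import random


def sample_fraction(records, fraction, seed=0):
    """Keep each record independently with the given probability --
    a simple stand-in for uniformly sampling ~1% of a huge database."""
    rng = random.Random(seed)  # fixed seed for a reproducible subset
    return [r for r in records if rng.random() < fraction]


# Hypothetical stand-in for tens of millions of UniRef100 sequences
db = [f"seq{i}" for i in range(100_000)]
subset = sample_fraction(db, 0.01)  # roughly 1% of the records
```

The resulting subset is then what a PSSM-generating BLAST search would run against, trading a small loss of profile depth for a large speedup.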
2013-01-01
Background Perturbations in intestinal microbiota composition have been associated with a variety of gastrointestinal tract-related diseases. The alleviation of symptoms has been achieved using treatments that alter the gastrointestinal tract microbiota toward that of healthy individuals. Identifying differences in microbiota composition through the use of 16S rRNA gene hypervariable tag sequencing has profound health implications. Current computational methods for comparing microbial communities are usually based on multiple alignments and phylogenetic inference, making them time consuming and requiring exceptional expertise and computational resources. As sequencing data rapidly grows in size, simpler analysis methods are needed to meet the growing computational burdens of microbiota comparisons. Thus, we have developed a simple, rapid, and accurate method, independent of multiple alignments and phylogenetic inference, to support microbiota comparisons. Results We create a metric, called compression-based distance (CBD) for quantifying the degree of similarity between microbial communities. CBD uses the repetitive nature of hypervariable tag datasets and well-established compression algorithms to approximate the total information shared between two datasets. Three published microbiota datasets were used as test cases for CBD as an applicable tool. Our study revealed that CBD recaptured 100% of the statistically significant conclusions reported in the previous studies, while achieving a decrease in computational time required when compared to similar tools without expert user intervention. Conclusion CBD provides a simple, rapid, and accurate method for assessing distances between gastrointestinal tract microbiota 16S hypervariable tag datasets. PMID:23617892
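CBD itself is not specified in detail here, but a distance in the same spirit can be sketched with a general-purpose compressor, following the well-known normalized compression distance; the exact formula used by CBD may differ.

```python
import zlib


def c(data: bytes) -> int:
    """Compressed size as a proxy for information content."""
    return len(zlib.compress(data, 9))


def compression_distance(x: bytes, y: bytes) -> float:
    """NCD-style distance: shared structure lets the concatenation
    compress nearly as well as the larger input alone."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)


# Hypothetical tag-like sequences: one self-comparison, one cross-comparison
a = b"ACGTACGTACGT" * 50
b = b"TTTTGGGGCCCC" * 50
d_same = compression_distance(a, a)
d_diff = compression_distance(a, b)
```

Similar communities share repeated hypervariable tags, so their concatenated datasets compress disproportionately well, driving the distance toward zero.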
Algorithms for Accurate and Fast Plotting of Contour Surfaces in 3D Using Hexahedral Elements
NASA Astrophysics Data System (ADS)
Singh, Chandan; Saini, Jaswinder Singh
2016-07-01
In the present study, fast and accurate algorithms for the generation of contour surfaces in 3D are described using hexahedral elements, which are popular in finite element analysis. The contour surfaces are described in the form of groups of boundaries of contour segments, and their interior points are derived using the contour equation. The locations of contour boundaries and the interior points on contour surfaces are as accurate as the interpolation results obtained by hexahedral elements, and thus there are no discrepancies between the analysis and visualization results.
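The interpolation underlying such contour algorithms is the standard trilinear shape-function expansion of a hexahedral element; a sketch follows. The node ordering is an assumption, and a real implementation would apply this element by element over the mesh.

```python
def trilinear(vals, u, v, w):
    """Interpolate 8 nodal values of a hexahedral element at local
    coordinates (u, v, w) in [0, 1]^3 (node ordering is an assumption)."""
    v000, v100, v110, v010, v001, v101, v111, v011 = vals
    return (v000 * (1-u)*(1-v)*(1-w) + v100 * u*(1-v)*(1-w)
          + v110 * u*v*(1-w)         + v010 * (1-u)*v*(1-w)
          + v001 * (1-u)*(1-v)*w     + v101 * u*(1-v)*w
          + v111 * u*v*w             + v011 * (1-u)*v*w)


def edge_crossing(f0, f1, level):
    """Parametric position where a contour level crosses an element edge
    with nodal values f0 and f1 (assumes f0 != f1 and level between them)."""
    return (level - f0) / (f1 - f0)


vals = [0, 1, 2, 1, 1, 2, 3, 2]            # illustrative nodal values
t = edge_crossing(0.0, 1.0, 0.25)          # crossing 25% along that edge
```

Because the contour boundary points come from the same shape functions the FE analysis uses, the visualized surface is consistent with the analysis, as the abstract notes.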
Fast and accurate circle detection using gradient-direction-based segmentation.
Wu, Jianping; Chen, Ke; Gao, Xiaohui
2013-06-01
We present what is to our knowledge the first-ever fitting-based circle detection algorithm, namely, the fast and accurate circle (FACILE) detection algorithm, based on gradient-direction-based edge clustering and direct least square fitting. Edges are segmented into sections based on gradient directions, and each section is validated separately; valid arcs are then fitted and further merged to extract more accurate circle information. We implemented the algorithm in C++ and compared it with four other algorithms. Testing on simulated data showed FACILE was far superior to the randomized Hough transform, standard Hough transform, and fast circle detection using gradient pair vectors with regard to processing speed and detection reliability. Testing on publicly available standard datasets showed FACILE outperformed robust and precise circular detection, a state-of-the-art arc detection method, by 35% with regard to recognition rate and is also a significant improvement over the latter in processing speed. PMID:24323106
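The direct least-squares fitting step can be illustrated with the classic Kasa algebraic circle fit, which reduces circle fitting to one linear solve; FACILE's exact fitting and arc-merging criteria are not reproduced here.

```python
import numpy as np


def fit_circle(x, y):
    """Kasa algebraic least-squares circle fit: rewrite
    (x-a)^2 + (y-b)^2 = r^2 as x^2 + y^2 = 2ax + 2by + c,
    solve the linear system, then recover r from c = r^2 - a^2 - b^2."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r = np.sqrt(c + a**2 + b**2)
    return a, b, r


# Points on a known circle: center (3, -1), radius 2
theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
x = 3.0 + 2.0 * np.cos(theta)
y = -1.0 + 2.0 * np.sin(theta)
cx, cy, r = fit_circle(x, y)
```

Because the fit is a single linear least-squares solve, it is far cheaper than Hough-style voting, which is the source of the speed advantage claimed for fitting-based detection.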
Simple accurate approximations for the optical properties of metallic nanospheres and nanoshells.
Schebarchov, Dmitri; Auguié, Baptiste; Le Ru, Eric C
2013-03-28
This work aims to provide simple and accurate closed-form approximations to predict the scattering and absorption spectra of metallic nanospheres and nanoshells supporting localised surface plasmon resonances. Particular attention is given to the validity and accuracy of these expressions in the range of nanoparticle sizes relevant to plasmonics, typically limited to around 100 nm in diameter. Using recent results on the rigorous radiative correction of electrostatic solutions, we propose a new set of long-wavelength polarizability approximations for both nanospheres and nanoshells. The improvement offered by these expressions is demonstrated with direct comparisons to other approximations previously obtained in the literature, and their absolute accuracy is tested against the exact Mie theory. PMID:23358525
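The long-wavelength idea behind such approximations can be sketched with the textbook quasi-static sphere polarizability plus the standard radiative correction, in normalized Gaussian-style units; this is a generic sketch, not the paper's improved expressions.

```python
# Quasi-static dipole polarizability of a small sphere with the standard
# radiative correction (normalized units; illustrative values only).
def polarizability(eps, a, k, eps_m=1.0):
    """eps: complex particle permittivity, a: radius, k: wavenumber,
    eps_m: embedding medium permittivity."""
    alpha0 = a**3 * (eps - eps_m) / (eps + 2 * eps_m)   # electrostatic dipole term
    return alpha0 / (1 - 2j / 3 * k**3 * alpha0)        # radiative correction


a, k = 25e-9, 2 * 3.141592653589793 / 500e-9  # 25 nm sphere, 500 nm light
# |alpha| peaks near the Froehlich condition Re(eps) = -2*eps_m
mags = {eps: abs(polarizability(eps + 0.5j, a, k)) for eps in (-6.0, -2.0, 2.0)}
```

The radiative correction term keeps the resonance finite and size-dependent, which is why such corrected closed forms stay usable up to the ~100 nm diameters discussed above.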
Fast and accurate image recognition algorithms for fresh produce food safety sensing
NASA Astrophysics Data System (ADS)
Yang, Chun-Chieh; Kim, Moon S.; Chao, Kuanglin; Kang, Sukwon; Lefcourt, Alan M.
2011-06-01
This research developed and evaluated the multispectral algorithms derived from hyperspectral line-scan fluorescence imaging under violet LED excitation for detection of fecal contamination on Golden Delicious apples. The algorithms utilized the fluorescence intensities at four wavebands, 680 nm, 684 nm, 720 nm, and 780 nm, for computation of simple functions for effective detection of contamination spots created on the apple surfaces using four concentrations of aqueous fecal dilutions. The algorithms detected more than 99% of the fecal spots. The effective detection of feces showed that a simple multispectral fluorescence imaging algorithm based on violet LED excitation may be appropriate to detect fecal contamination on fast-speed apple processing lines.
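A multispectral detection function of the kind described can be sketched as a band-ratio threshold on the four waveband intensities; the specific function and threshold below are illustrative, not the published algorithm.

```python
def fecal_score(i680, i684, i720, i780):
    """Hypothetical simple multispectral function: fecal spots fluoresce
    strongly in the red bands relative to the far-red bands."""
    return (i680 + i684) / (i720 + i780 + 1e-9)  # epsilon avoids divide-by-zero


def is_contaminated(pixel_bands, threshold=1.5):
    # Threshold is illustrative, not from the paper
    return fecal_score(*pixel_bands) > threshold


# Hypothetical per-pixel intensities at (680, 684, 720, 780) nm
clean = (0.2, 0.2, 0.3, 0.3)
spot = (0.9, 0.9, 0.3, 0.2)
```

Per-pixel functions this simple are what make the method viable on fast-moving apple processing lines, as the abstract argues.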
A simple and accurate resist parameter extraction method for sub-80-nm DRAM patterns
NASA Astrophysics Data System (ADS)
Lee, Sook; Hwang, Chan; Park, Dong-Woon; Kim, In-Sung; Kim, Ho-Chul; Woo, Sang-Gyun; Cho, Han-Ku; Moon, Joo-Tae
2004-05-01
Due to the polarization effect of high-NA lithography, the consideration of resist effects in lithography simulation becomes increasingly important. In spite of the importance of resist simulation, many process engineers are reluctant to consider resist effects in lithography simulation due to the time-consuming procedure to extract the required resist parameters and the uncertainty in the measurement of some parameters. Weiss suggested a simplified development model that does not require complex kinetic parameters. For device fabrication engineers, there is a simple and accurate parameter extraction and optimization method using the Weiss model. This method needs the refractive index, Dill's parameters, and development rate monitoring (DRM) data for parameter extraction. The parameters extracted using the referred sequence are not accurate, so we have to optimize the parameters to fit the critical dimension scanning electron microscopy (CD SEM) data of line and space patterns. Hence, FiRM of Sigma-C is utilized as a resist parameter-optimizing program. According to our study, the illumination shape, the aberration, and the pupil mesh points have a large effect on the accuracy of the resist parameters in optimization. To obtain the optimum parameters, we need to find the saturated mesh points in terms of normalized intensity log slope (NILS) prior to optimization. The simulation results using the parameters optimized by this method show good agreement with experiments for iso-dense bias, focus-exposure matrix data, and sub-80-nm device pattern simulation.
Ergül, Özgür; Gürel, Levent
2013-03-01
Accurate electromagnetic modeling of complicated optical structures poses several challenges. Optical metamaterial and plasmonic structures are composed of multiple coexisting dielectric and/or conducting parts. Such composite structures may possess diverse values of conductivities and dielectric constants, including negative permittivity and permeability. Further challenges are the large sizes of the structures with respect to wavelength and the complexities of the geometries. In order to overcome these challenges and to achieve rigorous and efficient electromagnetic modeling of three-dimensional optical composite structures, we have developed a parallel implementation of the multilevel fast multipole algorithm (MLFMA). Precise formulation of composite structures is achieved with the so-called "electric and magnetic current combined-field integral equation." Surface integral equations are carefully discretized with piecewise linear basis functions, and the ensuing dense matrix equations are solved iteratively with parallel MLFMA. The hierarchical strategy is used for the efficient parallelization of MLFMA on distributed-memory architectures. In this paper, fast and accurate solutions of large-scale canonical and complicated real-life problems, such as optical metamaterials, discretized with tens of millions of unknowns are presented in order to demonstrate the capabilities of the proposed electromagnetic solver. PMID:23456127
Fast and accurate analytical model to solve inverse problem in SHM using Lamb wave propagation
NASA Astrophysics Data System (ADS)
Poddar, Banibrata; Giurgiutiu, Victor
2016-04-01
Lamb wave propagation is at the center of attention of researchers for structural health monitoring (SHM) of thin-walled structures. This is due to the fact that Lamb wave modes are natural modes of wave propagation in these structures, with long travel distances and without much attenuation. This brings the prospect of monitoring large structures with few sensors/actuators. However, the problem of damage detection and identification is an "inverse problem" in which we do not have the luxury of knowing the exact mathematical model of the system. On top of that, the problem is made more challenging by the confounding factors of statistical variation of the material and geometric properties. Typically, this problem may also be ill-posed. Due to all these complexities, the direct solution of the problem of damage detection and identification in SHM is impossible. Therefore, an indirect method using the solution of the "forward problem" is popular for solving the "inverse problem". This requires a fast forward-problem solver. Due to the complexities involved in the forward problem of scattering of Lamb waves from damage, researchers rely primarily on numerical techniques such as FEM, BEM, etc. But these methods are too slow to be practical for structural health monitoring. We have developed a fast and accurate analytical forward-problem solver for this purpose. This solver, CMEP (complex modes expansion and vector projection), can simulate scattering of Lamb waves from all types of damage in thin-walled structures quickly and accurately to assist the inverse-problem solver.
A Simple yet Accurate Method for the Estimation of the Biovolume of Planktonic Microorganisms.
Saccà, Alessandro
2016-01-01
Determining the biomass of microbial plankton is central to the study of fluxes of energy and materials in aquatic ecosystems. This is typically accomplished by applying proper volume-to-carbon conversion factors to group-specific abundances and biovolumes. A critical step in this approach is the accurate estimation of biovolume from two-dimensional (2D) data such as those available through conventional microscopy techniques or flow-through imaging systems. This paper describes a simple yet accurate method for the assessment of the biovolume of planktonic microorganisms, which works with any image analysis system allowing for the measurement of linear distances and the estimation of the cross sectional area of an object from a 2D digital image. The proposed method is based on Archimedes' principle about the relationship between the volume of a sphere and that of a cylinder in which the sphere is inscribed, plus a coefficient of 'unellipticity' introduced here. Validation and careful evaluation of the method are provided using a variety of approaches. The new method proved to be highly precise with all convex shapes characterised by approximate rotational symmetry, and combining it with an existing method specific for highly concave or branched shapes allows covering the great majority of cases with good reliability. Thanks to its accuracy, consistency, and low resources demand, the new method can conveniently be used in substitution of any extant method designed for convex shapes, and can readily be coupled with automated cell imaging technologies, including state-of-the-art flow-through imaging devices. PMID:27195667
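The core geometric relation is Archimedes' sphere-in-cylinder principle, which for spheroid-like shapes gives V = (2/3) × (silhouette area) × (minor-axis width). A sketch with the unellipticity coefficient left at 1 follows; the paper's coefficient and its image-measurement pipeline are not reproduced here.

```python
import math


def biovolume(area, minor_axis, unellipticity=1.0):
    """Volume from a 2D silhouette via Archimedes' relation between a
    sphere and its circumscribing cylinder: V = (2/3) * area * width.
    Exact for spheres and spheroids; the 'unellipticity' coefficient
    (left at 1.0 here) corrects shapes that deviate from an ellipse."""
    return (2.0 / 3.0) * area * minor_axis * unellipticity


# Sphere of diameter 10 um: silhouette area pi * r^2 = pi * 25, width 10
v = biovolume(math.pi * 25.0, 10.0)
expected = (math.pi / 6.0) * 10.0**3   # (4/3) * pi * r^3
```

For a sphere the formula is exact, which is easy to verify: the circumscribing cylinder has volume (3/2) times the sphere's.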
Spinelli, Orietta; Rambaldi, Alessandro; Rigo, Francesca; Zanghì, Pamela; D'Agostini, Elena; Amicarelli, Giulia; Colotta, Francesco; Divona, Mariadomenica; Ciardi, Claudia; Coco, Francesco Lo; Minnucci, Giulia
2015-01-01
The diagnostic work-up of acute promyelocytic leukemia (APL) includes the cytogenetic demonstration of the t(15;17) translocation and/or of the PML-RARA chimeric transcript by RQ-PCR or RT-PCR. These latter assays provide suitable results in 3-6 hours. We describe here two new, rapid and specific assays that detect PML-RARA transcripts, based on the RT-QLAMP (Reverse Transcription-Quenching Loop-mediated Isothermal Amplification) technology, in which RNA retrotranscription and cDNA amplification are carried out in a single tube with one enzyme at one temperature, in a fluorescent, real-time format. A single-tube triplex assay detects bcr1 and bcr3 PML-RARA transcripts along with the GUSB housekeeping gene. A single-tube duplex assay detects bcr2 and GUSB. In 73 APL cases, these assays detected bcr1, bcr2 and bcr3 transcripts in 16 minutes. All 81 non-APL samples were negative by RT-QLAMP for chimeric transcripts, whereas GUSB was detectable. In 11 APL patients in which RT-PCR yielded equivocal breakpoint type results, the RT-QLAMP assays unequivocally and accurately defined the breakpoint type (as confirmed by sequencing). Furthermore, RT-QLAMP could amplify two bcr2 transcripts with particularly extended PML exon 6 deletions that were not amplified by RQ-PCR. The reproducible sensitivity of RT-QLAMP is 10^-3 for bcr1 and bcr3 and 10^-2 for bcr2, thus making this assay particularly attractive at diagnosis and leaving RQ-PCR for the molecular monitoring of minimal residual disease during follow-up. In conclusion, PML-RARA RT-QLAMP, compared to RT-PCR or RQ-PCR, is a valid improvement to perform rapid, simple and accurate molecular diagnosis of APL. PMID:25815362
NASA Astrophysics Data System (ADS)
Kandori, Akihiko; Sano, Yuko; Zhang, Yuhua; Tsuji, Toshio
2015-12-01
This paper describes a new method for calculating chest compression depth and a simple chest-compression gauge for validating the accuracy of the method. The chest-compression gauge has two plates incorporating two magnetic coils, a spring, and an accelerometer. The coils are located at both ends of the spring, and the accelerometer is set on the bottom plate. Waveforms obtained using the magnetic coils (hereafter, "magnetic waveforms"), which are proportional to compression-force waveforms and the acceleration waveforms were measured at the same time. The weight factor expressing the relationship between the second derivatives of the magnetic waveforms and the measured acceleration waveforms was calculated. An estimated-compression-displacement (depth) waveform was obtained by multiplying the weight factor and the magnetic waveforms. Displacements of two large springs (with similar spring constants) within a thorax and displacements of a cardiopulmonary resuscitation training manikin were measured using the gauge to validate the accuracy of the calculated waveform. A laser-displacement detection system was used to compare the real displacement waveform and the estimated waveform. Intraclass correlation coefficients (ICCs) between the real displacement using the laser system and the estimated displacement waveforms were calculated. The estimated displacement error of the compression depth was within 2 mm (<1 standard deviation). All ICCs (two springs and a manikin) were above 0.85 (0.99 in the case of one of the springs). The developed simple chest-compression gauge, based on a new calculation method, provides an accurate compression depth (estimation error < 2 mm).
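The calculation method can be sketched on synthetic data: the coil signal is proportional to displacement, so a scalar weight factor fitted between its second derivative and the accelerometer output rescales the coil waveform into depth. The signal shapes and the gain value below are invented for illustration.

```python
import numpy as np

# Synthetic compression cycle: true depth x(t); coil signal m = k * x
# (proportional, unknown gain k); accelerometer reads a = x''.
dt = 1e-3
t = np.arange(0.0, 1.0, dt)
omega = 2 * np.pi * 2                         # 2 Hz compressions
x = 0.05 * np.sin(omega * t)                  # 5 cm peak depth
m = 3.0 * x                                   # hypothetical coil gain k = 3
a = -0.05 * omega**2 * np.sin(omega * t)      # analytic second derivative x''

# Weight factor: least-squares match of d2m/dt2 to the measured acceleration
d2m = np.gradient(np.gradient(m, dt), dt)
w = np.dot(d2m, a) / np.dot(d2m, d2m)         # recovers 1/k
depth = w * m                                 # estimated displacement waveform
```

Because the weight factor is a single scalar, the estimate stays cheap enough for real-time feedback during CPR training.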
Three-dimensional shape measurement with a fast and accurate approach
Wang Zhaoyang; Du Hua; Park, Seungbae; Xie Huimin
2009-02-20
A noncontact, fast, accurate, low-cost, broad-range, full-field, easy-to-implement three-dimensional (3D) shape measurement technique is presented. The technique is based on a generalized fringe projection profilometry setup that allows each system component to be arbitrarily positioned. It employs random phase-shifting, multifrequency projection fringes, ultrafast direct phase unwrapping, and inverse self-calibration schemes to perform 3D shape determination with enhanced accuracy in a fast manner. The relative measurement accuracy can reach 1/10,000 or higher, and the acquisition speed is faster than two 3D views per second. The validity and practicability of the proposed technique have been verified by experiments. Because of its superior capability, the proposed 3D shape measurement technique is suitable for numerous applications in a variety of fields.
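The phase-shifting step recovers the fringe phase from several shifted intensity images; for equally spaced shifts the standard N-step estimator applies. This is a sketch of that textbook estimator, not the paper's random-shift, self-calibrating scheme.

```python
import math


def phase_from_shifts(intensities):
    """Wrapped fringe phase from N equally phase-shifted samples
    I_n = A + B*cos(phi + 2*pi*n/N), for N >= 3."""
    n_steps = len(intensities)
    s = sum(i * math.sin(2 * math.pi * n / n_steps)
            for n, i in enumerate(intensities))
    c = sum(i * math.cos(2 * math.pi * n / n_steps)
            for n, i in enumerate(intensities))
    return math.atan2(-s, c)   # sums reduce to -(N/2)B*sin(phi), (N/2)B*cos(phi)


# Synthetic 4-step acquisition at one pixel (A = 5 background, B = 2 modulation)
phi_true = 1.234
imgs = [5 + 2 * math.cos(phi_true + 2 * math.pi * n / 4) for n in range(4)]
phi = phase_from_shifts(imgs)
```

The recovered phase is wrapped to (-π, π]; the multifrequency fringes and direct unwrapping mentioned above turn it into absolute phase, and hence height.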
Automated Fast and Accurate Display Calibration Using ADT Compensated LCD for Mobile Phone
NASA Astrophysics Data System (ADS)
Han, Chan-Ho; Park, Kil-Houm
Gamma correction is an essential function and a time-consuming task in every display device, such as CRTs and LCDs, and the gray-scale CCT reproduction of most LCDs differs considerably from that of a standard CRT. An automated, fast, and accurate display adjustment method and system for gamma correction and for constant gray-scale CCT calibration of mobile phone LCDs is presented in this paper. We developed a test-pattern display and register-control program on the mobile phone, and an automatic measurement program on a computer using a spectroradiometer. The proposed system maintains the given gamma values and CCT values accurately. In addition, this system enables fast mobile phone LCD adjustment within one hour.
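Gamma correction itself reduces to applying a power-law lookup table to the 8-bit gray levels; a minimal sketch follows (the target gamma of 2.2 is a common choice, not necessarily the value used for this phone).

```python
def gamma_lut(gamma, levels=256):
    """8-bit gamma-correction lookup table:
    out = round(max * (in / max) ** (1 / gamma))."""
    top = levels - 1
    return [round(top * (i / top) ** (1.0 / gamma)) for i in range(levels)]


lut = gamma_lut(2.2)   # mid-gray maps above 128 because 1/gamma < 1
```

A calibration loop like the one described measures actual output luminance per gray level and adjusts the display registers until the measured curve matches the target power law.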
Introducing GAMER: A fast and accurate method for ray-tracing galaxies using procedural noise
Groeneboom, N. E.; Dahle, H.
2014-03-10
We developed a novel approach for fast and accurate ray-tracing of galaxies using procedural noise fields. Our method allows for efficient and realistic rendering of synthetic galaxy morphologies, where individual components such as the bulge, disk, stars, and dust can be synthesized in different wavelengths. These components follow empirically motivated overall intensity profiles but contain an additional procedural noise component that gives rise to complex natural patterns that mimic interstellar dust and star-forming regions. These patterns produce more realistic-looking galaxy images than using analytical expressions alone. The method is fully parallelized and creates accurate high- and low-resolution images that can be used, for example, in codes simulating strong and weak gravitational lensing. In addition to having a user-friendly graphical user interface, the C++ software package GAMER is easy to implement into an existing code.
Fast and accurate mock catalogue generation for low-mass galaxies
NASA Astrophysics Data System (ADS)
Koda, Jun; Blake, Chris; Beutler, Florian; Kazin, Eyal; Marin, Felipe
2016-06-01
We present an accurate and fast framework for generating mock catalogues including low-mass haloes, based on an implementation of the COmoving Lagrangian Acceleration (COLA) technique. Multiple realisations of mock catalogues are crucial for analyses of large-scale structure, but conventional N-body simulations are too computationally expensive for the production of thousands of realisations. We show that COLA simulations can produce accurate mock catalogues with moderate computational resources for low- to intermediate-mass galaxies in 10^12 M⊙ haloes, both in real and redshift space. COLA simulations have accurate peculiar velocities, without systematic errors in the velocity power spectra for k ≤ 0.15 h Mpc^-1, and with only 3 per cent error for k ≤ 0.2 h Mpc^-1. We use COLA with 10 time steps and a Halo Occupation Distribution to produce 600 mock galaxy catalogues of the WiggleZ Dark Energy Survey. Our parallelized code for efficient generation of accurate halo catalogues is publicly available at github.com/junkoda/cola_halo.
Fast and accurate dating of nuclear events using La-140/Ba-140 isotopic activity ratio.
Yamba, Kassoum; Sanogo, Oumar; Kalinowski, Martin B; Nikkinen, Mika; Koulidiati, Jean
2016-06-01
This study reports on a fast and accurate assessment of the zero time of certain nuclear events using the La-140/Ba-140 isotopic activity ratio. For a non-steady nuclear fission reaction, dating is not possible. For the hypothesis of a nuclear explosion and for a release from a steady-state nuclear fission reaction, the zero times differ. The assessment is fast because we propose constants that can be used directly for the calculation of zero time and its upper and lower age limits. It is accurate because zero time is calculated with a mathematical method, the weighted least-squares method, to evaluate an average value of the age of a nuclear event. This was done using two databases that exhibit differences between the values of some nuclear parameters. As an example, the calculation method is applied to the detection of the radionuclides La-140 and Ba-140 in May 2010 at the radionuclide station JPP37 (Okinawa Island, Japan). PMID:27058322
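Under the simplifying assumption of initially pure Ba-140 (the paper's actual treatment distinguishes explosion and steady-reactor scenarios and propagates nuclear-data uncertainties across two databases), the zero time follows from the measured activity ratio in closed form. A hedged sketch with approximate half-life values:

```python
import math

HL_BA140 = 12.75   # Ba-140 half-life in days (approximate nuclear-data value)
HL_LA140 = 1.678   # La-140 half-life in days (approximate nuclear-data value)

def zero_time_days(ratio):
    """Elapsed time since t0 at which A(La-140)/A(Ba-140) equals `ratio`,
    assuming no La-140 at t0: R(t) = R_eq * (1 - exp(-(lam_La - lam_Ba)*t)),
    with R_eq = lam_La / (lam_La - lam_Ba) the transient-equilibrium ratio."""
    lam_ba = math.log(2) / HL_BA140
    lam_la = math.log(2) / HL_LA140
    d = lam_la - lam_ba
    eq = lam_la / d                      # ~1.15, the datable upper limit
    if not 0 <= ratio < eq:
        raise ValueError("ratio outside the datable range")
    return -math.log(1.0 - ratio/eq) / d
```

Age limits then follow by evaluating the same expression at the ratio's measurement bounds, which is what makes constant-based dating fast.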
A Simple and Fast Spline Filtering Algorithm for Surface Metrology
Zhang, Hao; Ott, Daniel; Song, John; Tong, Mingsi; Chu, Wei
2015-01-01
Spline filters and their corresponding robust filters are commonly used filters recommended in ISO (the International Organization for Standardization) standards for surface evaluation. Generally, these linear and non-linear spline filters, composed of symmetric, positive-definite matrices, are solved in an iterative fashion based on a Cholesky decomposition. They have been demonstrated to be relatively efficient, but complicated and inconvenient to implement. A new spline-filter algorithm is proposed by means of the discrete cosine transform or the discrete Fourier transform. The algorithm is conceptually simple and very convenient to implement. PMID:26958443
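The paper's key observation can be sketched directly: with natural boundary conditions the pentadiagonal spline system (I + alpha^4 * D2^T D2) w = z is diagonalised by the discrete cosine transform, so filtering reduces to scaling DCT coefficients instead of iterating a Cholesky solve. The constant and eigenvalue conventions below are illustrative, not the ISO-exact ones, and the naive O(N^2) transforms stand in for an FFT-based DCT:

```python
import math

def dct2(x):
    """Naive (unnormalised) DCT-II."""
    n = len(x)
    return [sum(x[j]*math.cos(math.pi*k*(j + 0.5)/n) for j in range(n))
            for k in range(n)]

def idct2(c):
    """Inverse of dct2 above."""
    n = len(c)
    return [(c[0]/2 + sum(c[k]*math.cos(math.pi*k*(j + 0.5)/n)
                          for k in range(1, n))) * 2.0/n for j in range(n)]

def spline_filter(z, cutoff_samples):
    """Low-pass spline filter; cutoff given as samples per cutoff wavelength."""
    n = len(z)
    alpha = 1.0 / (2.0*math.sin(math.pi/cutoff_samples))
    c = dct2(z)
    for k in range(n):
        lam = 2.0 - 2.0*math.cos(math.pi*k/n)    # eigenvalue of D2^T D2
        c[k] /= 1.0 + alpha**4 * lam**2
    return idct2(c)
```

A constant profile passes through unchanged (the k = 0 eigenvalue is zero), which is the mean-preservation property expected of a profile filter.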
Fast and Accurate Semiautomatic Segmentation of Individual Teeth from Dental CT Images.
Kang, Ho Chul; Choi, Chankyu; Shin, Juneseuk; Lee, Jeongjin; Shin, Yeong-Gil
2015-01-01
In this paper, we propose a fast and accurate semiautomatic method to effectively distinguish individual teeth from the sockets of teeth in dental CT images. Parameter values of thresholding and shapes of the teeth are propagated to the neighboring slice, based on the separated teeth from reference images. After the propagation of threshold values and shapes of the teeth, the histogram of the current slice is analyzed. The individual teeth are automatically separated and segmented by using seeded region growing. Then, the newly generated separation information is iteratively propagated to the neighboring slice. Our method was validated on ten sets of dental CT scans, and the results were compared with manually segmented results and conventional methods. The average absolute volume-measurement error was 2.29 ± 0.56%, more accurate than conventional methods. With multicore processors, the method ran 2.4 times faster than on a single-core processor. The proposed method identified individual teeth accurately, demonstrating that it can give dentists substantial assistance during dental surgery. PMID:26413143
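The per-slice core of such a pipeline, seeded region growing within a propagated intensity window, can be sketched in a few lines (2D, single slice; the slice-to-slice propagation of thresholds and shapes is omitted, and the window bounds are illustrative):

```python
from collections import deque

def region_grow(img, seed, lo, hi):
    """Breadth-first region growing from `seed` over 4-connected pixels
    whose intensity lies in the window [lo, hi]."""
    rows, cols = len(img), len(img[0])
    seen = {seed}
    queue = deque([seed])
    region = []
    while queue:
        i, j = queue.popleft()
        region.append((i, j))
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if (0 <= ni < rows and 0 <= nj < cols
                    and (ni, nj) not in seen and lo <= img[ni][nj] <= hi):
                seen.add((ni, nj))
                queue.append((ni, nj))
    return region
```

In the full method the window [lo, hi] would come from the histogram analysis of the current slice rather than being fixed by hand.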
A simple backscattering microscope for fast tracking of biological molecules.
Sowa, Yoshiyuki; Steel, Bradley C; Berry, Richard M
2010-11-01
Recent developments in techniques for observing single molecules under light microscopes have helped reveal the mechanisms by which molecular machines work. A wide range of markers can be used to detect molecules, from single fluorophores to micron sized markers, depending on the research interest. Here, we present a new and simple objective-type backscattering microscope to track gold nanoparticles with nanometer and microsecond resolution. The total noise of our system in a 55 kHz bandwidth is ~0.6 nm per axis, sufficient to measure molecular movement. We found our backscattering microscopy to be useful not only for in vitro but also for in vivo experiments because of lower background scattering from cells than in conventional dark-field microscopy. We demonstrate the application of this technique to measuring the motion of a biological rotary molecular motor, the bacterial flagellar motor, in live Escherichia coli cells. PMID:21133475
Loco, Daniele; Jurinovich, Sandro; Di Bari, Lorenzo; Mennucci, Benedetta
2016-01-14
We present and discuss a simple and fast computational approach to the calculation of electronic circular dichroism spectra of nucleic acids. It is based on an exciton model in which the couplings are obtained in terms of the full transition-charge distributions, as resulting from TDDFT methods applied on the individual nucleobases. We validated the method on two systems, a DNA G-quadruplex and a RNA β-hairpin whose solution structures have been accurately determined by means of NMR. We have shown that the different characteristics of composition and structure of the two systems can lead to quite important differences in the dependence of the accuracy of the simulation on the excitonic parameters. The accurate reproduction of the CD spectra together with their interpretation in terms of the excitonic composition suggest that this method may lend itself as a general computational tool to both predict the spectra of hypothetical structures and define clear relationships between structural and ECD properties. PMID:26646952
Simple but accurate GCM-free approach for quantifying anthropogenic climate change
NASA Astrophysics Data System (ADS)
Lovejoy, S.
2014-12-01
We are so used to analysing the climate with the help of giant computer models (GCMs) that it is easy to get the impression that they are indispensable. Yet anthropogenic warming is so large (roughly 0.9 °C) that it turns out to be straightforward to quantify with more empirically based methodologies that can be readily understood by the layperson. The key is to use the CO2 forcing as a linear surrogate for all the anthropogenic effects from 1880 to the present (implicitly including all effects due to greenhouse gases, aerosols and land use changes). To a good approximation, double the economic activity, double the effects. The relationship between the forcing and global mean temperature is extremely linear, as can be seen graphically and understood without fancy statistics [Lovejoy, 2014a] (see the attached figure and http://www.physics.mcgill.ca/~gang/Lovejoy.htm). To an excellent approximation, the deviations from the linear forcing - temperature relation can be interpreted as the natural variability. For example, this direct yet accurate approach makes it graphically obvious that the "pause" or "hiatus" in the warming since 1998 is simply a natural cooling event that has roughly offset the anthropogenic warming [Lovejoy, 2014b]. Rather than trying to prove that the warming is anthropogenic, with a little extra work (and some nonlinear geophysics theory and pre-industrial multiproxies) we can disprove the competing theory that it is natural. This approach leads to the estimate that the probability of the industrial-scale warming being a giant natural fluctuation is ≈0.1%: it can be dismissed. This destroys the last climate-skeptic argument - that the models are wrong and the warming is natural. It finally allows for a closure of the debate. In this talk we argue that this new, direct, simple, intuitive approach provides an indispensable tool for communicating - and convincing - the public of both the reality and the amplitude of anthropogenic warming
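The approach described is, at heart, a single linear regression: temperature anomaly against the CO2-forcing surrogate (proportional to ln CO2), with the residuals read as natural variability. A sketch on a synthetic series (the CO2 trajectory, sensitivity of 2.3, and noise level below are invented for illustration, not the paper's data):

```python
import math, random

def linreg(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x)/n, sum(y)/n
    sxx = sum((xi - mx)**2 for xi in x)
    sxy = sum((xi - mx)*(yi - my) for xi, yi in zip(x, y))
    slope = sxy/sxx
    return slope, my - slope*mx

random.seed(1)
co2 = [280.0*math.exp(0.0035*i) for i in range(135)]      # toy 1880-2014 ppm
forcing = [math.log(c/280.0) for c in co2]                # surrogate ~ ln CO2
temp = [2.3*f + random.gauss(0.0, 0.1) for f in forcing]  # toy anomalies, K
sens, icept = linreg(forcing, temp)
resid = [t - (sens*f + icept) for t, f in zip(temp, forcing)]  # "natural" part
```

The regression recovers the imposed effective sensitivity, and the residual series is what would then be tested against pre-industrial natural variability.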
CoMOGrad and PHOG: From Computer Vision to Fast and Accurate Protein Tertiary Structure Retrieval
Karim, Rezaul; Aziz, Mohd. Momin Al; Shatabda, Swakkhar; Rahman, M. Sohel; Mia, Md. Abul Kashem; Zaman, Farhana; Rakin, Salman
2015-01-01
The number of entries in a structural database of proteins is increasing day by day. Methods for retrieving protein tertiary structures from such a large database have turned out to be key to the comparative analysis of structures, which plays an important role in understanding proteins and their functions. In this paper, we present fast and accurate methods for the retrieval of proteins having tertiary structures similar to a query protein from a large database. Our proposed methods borrow ideas from the field of computer vision. The speed and accuracy of our methods come from two newly introduced features, the co-occurrence matrix of the oriented gradient and the pyramid histogram of oriented gradient, and the use of Euclidean distance as the distance measure. Experimental results clearly indicate the superiority of our approach in both running time and accuracy. Our method is readily available for use from this website: http://research.buet.ac.bd:8080/Comograd/. PMID:26293226
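The retrieval idea borrowed from computer vision can be sketched as follows: summarise each structure by a fixed-length histogram of oriented gradients of some 2D representation, then rank database entries by Euclidean distance between feature vectors. The feature construction below is a simplified stand-in for CoMOGrad/PHOG, not the paper's exact descriptors:

```python
import math

def orientation_histogram(grid, bins=8):
    """Magnitude-weighted histogram of gradient orientations over a 2D grid,
    normalised to unit sum (a crude HOG-style feature)."""
    h = [0.0]*bins
    for i in range(1, len(grid) - 1):
        for j in range(1, len(grid[0]) - 1):
            gx = grid[i][j+1] - grid[i][j-1]
            gy = grid[i+1][j] - grid[i-1][j]
            ang = math.atan2(gy, gx) % (2*math.pi)
            h[int(ang/(2*math.pi)*bins) % bins] += math.hypot(gx, gy)
    total = sum(h) or 1.0
    return [v/total for v in h]

def euclid(u, v):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((a - b)**2 for a, b in zip(u, v)))
```

Because every structure maps to a short vector, a nearest-neighbour query is a linear scan of cheap distance computations, which is where the running-time advantage over direct structural alignment comes from.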
Fast and accurate quantum molecular dynamics of dense plasmas across temperature regimes
Sjostrom, Travis; Daligault, Jerome
2014-10-10
Here, we develop and implement a new quantum molecular dynamics approximation that allows fast and accurate simulations of dense plasmas from cold to hot conditions. The method is based on a carefully designed orbital-free implementation of density functional theory. The results for hydrogen and aluminum are in very good agreement with Kohn-Sham (orbital-based) density functional theory and path integral Monte Carlo calculations for microscopic features such as the electron density as well as the equation of state. The present approach does not scale with temperature and hence extends to higher temperatures than is accessible in the Kohn-Sham method and lower temperatures than is accessible by path integral Monte Carlo calculations, while being significantly less computationally expensive than either of those two methods.
Fast and accurate border detection in dermoscopy images using statistical region merging
NASA Astrophysics Data System (ADS)
Celebi, M. Emre; Kingravi, Hassan A.; Iyatomi, Hitoshi; Lee, JeongKyu; Aslandogan, Y. Alp; Van Stoecker, William; Moss, Randy; Malters, Joseph M.; Marghoob, Ashfaq A.
2007-03-01
As a result of advances in skin imaging technology and the development of suitable image processing techniques during the last decade, there has been a significant increase of interest in the computer-aided diagnosis of melanoma. Automated border detection is one of the most important steps in this procedure, since the accuracy of the subsequent steps crucially depends on it. In this paper, a fast and unsupervised approach to border detection in dermoscopy images of pigmented skin lesions based on the Statistical Region Merging algorithm is presented. The method is tested on a set of 90 dermoscopy images. The border detection error is quantified by a metric in which a set of dermatologist-determined borders is used as the ground-truth. The proposed method is compared to six state-of-the-art automated methods (optimized histogram thresholding, orientation-sensitive fuzzy c-means, gradient vector flow snakes, dermatologist-like tumor extraction algorithm, meanshift clustering, and the modified JSEG method) and borders determined by a second dermatologist. The results demonstrate that the presented method achieves both fast and accurate border detection in dermoscopy images.
Flexible, Fast and Accurate Sequence Alignment Profiling on GPGPU with PaSWAS
Warris, Sven; Yalcin, Feyruz; Jackson, Katherine J. L.; Nap, Jan Peter
2015-01-01
Motivation: To obtain large-scale sequence alignments in a fast and flexible way is an important step in the analyses of next generation sequencing data. Applications based on the Smith-Waterman (SW) algorithm are often either not fast enough, limited to dedicated tasks or not sufficiently accurate due to statistical issues. Current SW implementations that run on graphics hardware do not report the alignment details necessary for further analysis. Results: With the Parallel SW Alignment Software (PaSWAS) it is possible (a) to have easy access to the computational power of NVIDIA-based general purpose graphics processing units (GPGPUs) to perform high-speed sequence alignments, and (b) retrieve relevant information such as score, number of gaps and mismatches. The software reports multiple hits per alignment. The added value of the new SW implementation is demonstrated with two test cases: (1) tag recovery in next generation sequence data and (2) isotype assignment within an immunoglobulin 454 sequence data set. Both cases show the usability and versatility of the new parallel Smith-Waterman implementation. PMID:25830241
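The alignment details PaSWAS exposes (score, gap count, mismatch count) come straight out of the Smith-Waterman recurrence and traceback. A minimal single-threaded CPU sketch, with illustrative scoring values rather than the paper's:

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Local alignment: returns (best score, gaps, mismatches) for the
    highest-scoring local alignment of sequences a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0]*cols for _ in range(rows)]
    best, best_pos = 0, (0, 0)
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
            if H[i][j] > best:
                best, best_pos = H[i][j], (i, j)
    # Traceback from the best cell, counting gaps and mismatches.
    i, j = best_pos
    gaps = mism = 0
    while i > 0 and j > 0 and H[i][j] > 0:
        s = match if a[i-1] == b[j-1] else mismatch
        if H[i][j] == H[i-1][j-1] + s:
            mism += a[i-1] != b[j-1]
            i, j = i - 1, j - 1
        elif H[i][j] == H[i-1][j] + gap:
            gaps += 1
            i -= 1
        else:
            gaps += 1
            j -= 1
    return best, gaps, mism
```

The GPGPU versions parallelise the anti-diagonals of the matrix H; the recurrence itself is unchanged.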
ERIC Educational Resources Information Center
Beare, R. A.
2008-01-01
Professional astronomers use specialized software not normally available to students to determine the rotation periods of asteroids from fragmented light curve data. This paper describes a simple yet accurate method based on Microsoft Excel[R] that enables students to find periods in asteroid light curve and other discontinuous time series data of…
Application of the G-JF discrete-time thermostat for fast and accurate molecular simulations
NASA Astrophysics Data System (ADS)
Grønbech-Jensen, Niels; Hayre, Natha Robert; Farago, Oded
2014-02-01
A new Langevin-Verlet thermostat that preserves the fluctuation-dissipation relationship for discrete time steps is applied to molecular modeling and tested against several popular suites (AMBER, GROMACS, LAMMPS) using a small molecule as an example that can be easily simulated by all three packages. Contrary to existing methods, the new thermostat exhibits no detectable changes in the sampling statistics as the time step is varied in the entire numerical stability range. The simple form of the method, which we express in the three common forms (Velocity-Explicit, Störmer-Verlet, and Leap-Frog), allows for easy implementation within existing molecular simulation packages to achieve faster and more accurate results with no cost in either computing time or programming complexity.
Highly accurate and fast optical penetration-based silkworm gender separation system
NASA Astrophysics Data System (ADS)
Kamtongdee, Chakkrit; Sumriddetchkajorn, Sarun; Chanhorm, Sataporn
2015-07-01
Based on our research work over the last five years, this paper highlights our innovative optical sensing system that can identify and separate silkworms by gender, making it highly suitable for the sericulture industry. The key idea relies on our proposed optical penetration concepts, which, once combined with simple image processing operations, lead to high accuracy in identifying silkworm gender. Inside the system, electronic and mechanical parts assist in controlling the overall system operation, processing the optical signal, and separating the female from the male silkworm pupae. With the current system performance, we achieve accuracy higher than 95% in identifying the gender of silkworm pupae, with an average system operational speed of 30 silkworm pupae/minute. Three of our systems are already in operation at Thailand's Queen Sirikit Sericulture Centers.
A fast GNU method to draw accurate scientific illustrations for taxonomy.
Montesanto, Giuseppe
2015-01-01
Nowadays only digital figures are accepted by the most important journals of taxonomy. These may be produced by scanning conventional drawings, made with high precision technical ink-pens, which normally use capillary cartridge and various line widths. Digital drawing techniques that use vector graphics, have already been described in literature to support scientists in drawing figures and plates for scientific illustrations; these techniques use many different software and hardware devices. The present work gives step-by-step instructions on how to make accurate line drawings with a new procedure that uses bitmap graphics with the GNU Image Manipulation Program (GIMP). This method is noteworthy: it is very accurate, producing detailed lines at the highest resolution; the raster lines appear as realistic ink-made drawings; it is faster than the traditional way of making illustrations; everyone can use this simple technique; this method is completely free as it does not use expensive and licensed software and it can be used with different operating systems. The method has been developed drawing figures of terrestrial isopods and some examples are here given. PMID:26261449
RICO: A New Approach for Fast and Accurate Representation of the Cosmological Recombination History
NASA Astrophysics Data System (ADS)
Fendt, W. A.; Chluba, J.; Rubiño-Martín, J. A.; Wandelt, B. D.
2009-04-01
We present RICO, a code designed to compute the ionization fraction of the universe during the epoch of hydrogen and helium recombination with an unprecedented combination of speed and accuracy. This is accomplished by training the machine learning code PICO on the calculations of a multilevel cosmological recombination code which self-consistently includes several physical processes that were neglected previously. After training, RICO is used to fit the free electron fraction as a function of the cosmological parameters. While, for example, at low redshifts (z ≲ 900), much of the net change in the ionization fraction can be captured by lowering the hydrogen fudge factor in RECFAST by about 3%, RICO provides a means of effectively using the accurate ionization history of the full recombination code in the standard cosmological parameter estimation framework without the need to add new or refined fudge factors or functions to a simple recombination model. Within the new approach presented here, it is easy to update RICO whenever a more accurate full recombination code becomes available. Once trained, RICO computes the cosmological ionization history with negligible fitting error in ~10 ms, a speedup of at least 10^6 over the full recombination code that was used here. Also RICO is able to reproduce the ionization history of the full code to a level well below 0.1%, thereby ensuring that the theoretical power spectra of cosmic microwave background (CMB) fluctuations can be computed to sufficient accuracy and speed for analysis from upcoming CMB experiments like Planck. Furthermore, it will enable cross-checking different recombination codes across cosmological parameter space, a comparison that will be very important in order to assure the accurate interpretation of future CMB data.
FastME 2.0: A Comprehensive, Accurate, and Fast Distance-Based Phylogeny Inference Program.
Lefort, Vincent; Desper, Richard; Gascuel, Olivier
2015-10-01
FastME provides distance algorithms to infer phylogenies. FastME is based on balanced minimum evolution, which is the very principle of Neighbor Joining (NJ). FastME improves over NJ by performing topological moves using fast, sophisticated algorithms. The first version of FastME only included Nearest Neighbor Interchange. The new 2.0 version also includes Subtree Pruning and Regrafting, while remaining as fast as NJ and providing a number of facilities: Distance estimation for DNA and proteins with various models and options, bootstrapping, and parallel computations. FastME is available using several interfaces: Command-line (to be integrated in pipelines), PHYLIP-like, and a Web server (http://www.atgc-montpellier.fr/fastme/). PMID:26130081
Fast and Accurate Large-Scale Detection of β-Lactamase Genes Conferring Antibiotic Resistance.
Lee, Jae Jin; Lee, Jung Hun; Kwon, Dae Beom; Jeon, Jeong Ho; Park, Kwang Seung; Lee, Chang-Ro; Lee, Sang Hee
2015-10-01
Fast detection of β-lactamase (bla) genes allows improved surveillance studies and infection control measures, which can minimize the spread of antibiotic resistance. Although several molecular diagnostic methods have been developed to detect limited bla gene types, these methods have significant limitations, such as their failure to detect almost all clinically available bla genes. We developed a fast and accurate molecular method to overcome these limitations using 62 primer pairs, which were designed through elaborate optimization processes. To verify the ability of this large-scale bla detection method (large-scaleblaFinder), assays were performed on previously reported bacterial control isolates/strains. To confirm the applicability of the large-scaleblaFinder, the assays were performed on unreported clinical isolates. With perfect specificity and sensitivity in 189 control isolates/strains and 403 clinical isolates, the large-scaleblaFinder detected almost all clinically available bla genes. Notably, the large-scaleblaFinder detected 24 additional unreported bla genes in the isolates/strains that were previously studied, suggesting that previous methods detecting only limited types of bla genes can miss unexpected bla genes existing in pathogenic bacteria, and our method has the ability to detect almost all bla genes existing in a clinical isolate. The ability of large-scaleblaFinder to detect bla genes on a large scale enables prompt application to the detection of almost all bla genes present in bacterial pathogens. The widespread use of the large-scaleblaFinder in the future will provide an important aid for monitoring the emergence and dissemination of bla genes and minimizing the spread of resistant bacteria. PMID:26169415
[Fast and accurate extraction of ring-down time in cavity ring-down spectroscopy].
Wang, Dan; Hu, Ren-Zhi; Xie, Pin-Hua; Qin, Min; Ling, Liu-Yi; Duan, Jun
2014-10-01
Accurate and efficient algorithms for extracting the ring-down time (τ) in cavity ring-down spectroscopy (CRDS), used to measure the NO3 radical in the atmosphere, are investigated. Fast and accurate extraction of the ring-down time enables more precise and faster measurement. Five commonly used algorithms are selected for extracting the ring-down time: fast Fourier transform (FFT), discrete Fourier transform (DFT), linear regression of the sum (LRS), Levenberg-Marquardt (LM), and least squares (LS). Simulated ring-down signals with various levels of white noise are fitted with each of the five algorithms, and the fitting results are compared in four respects: vulnerability to noise, accuracy and precision of the fit, fitting speed, and the preferred fitting waveform length. The results show that the Levenberg-Marquardt and linear-regression-of-the-sum algorithms provide more precise results and higher noise immunity, although the fitting speed of the Levenberg-Marquardt algorithm is slower. In addition, analysis of simulated ring-down signals shows that five to ten ring-down times is the best fitting waveform length, because the standard deviation of the fitting results of all five algorithms is then at its minimum. An externally modulated diode laser and a cavity consisting of two high-reflectivity mirrors were used to construct a cavity ring-down spectroscopy detection system. Under our experimental conditions, in which the noise level is 0.2%, the linear-regression-of-the-sum and Levenberg-Marquardt algorithms were selected to process the experimental data. The experimental results show that the accuracy and precision of linear regression of
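The simplest member of the compared family is a log-linear least-squares fit: for a pure decay y(t) = A·exp(-t/τ), regressing ln y on t gives -1/τ as the slope. This is a hedged sketch of that baseline (it assumes a noise- and offset-free positive signal, and is not the paper's LRS or Levenberg-Marquardt implementation):

```python
import math

def ringdown_tau(ts, ys):
    """Ring-down time from a log-linear least-squares fit of y(t) = A*exp(-t/tau).
    Assumes y > 0 and no baseline offset."""
    ln = [math.log(y) for y in ys]
    n = len(ts)
    mt = sum(ts)/n
    ml = sum(ln)/n
    sxx = sum((t - mt)**2 for t in ts)
    sxy = sum((t - mt)*(l - ml) for t, l in zip(ts, ln))
    return -sxx/sxy          # tau = -1/slope
```

With additive noise the logarithm distorts the error weighting, which is exactly why the paper compares such direct fits against LRS and Levenberg-Marquardt.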
Pole Photogrammetry with an Action Camera for Fast and Accurate Surface Mapping
NASA Astrophysics Data System (ADS)
Gonçalves, J. A.; Moutinho, O. F.; Rodrigues, A. C.
2016-06-01
High resolution and high accuracy terrain mapping can provide height change detection for studies of erosion, subsidence or landslip. A UAV flying at low altitude above the ground with a compact camera acquires images with resolution appropriate for such change detection. However, there are situations where a different approach is needed, either because higher resolution is required or because operating a drone is not possible. Pole photogrammetry, where a camera is mounted on a pole and pointed at the ground, is an alternative. This paper describes a very simple system of this kind, created for topographic change detection and based on an action camera. These cameras offer high quality and very flexible image capture. Although radial distortion is normally high, it can be treated in an auto-calibration process. The system consists of a light aluminium pole, 4 meters long, carrying a 12 megapixel GoPro camera. The average ground sampling distance at the image centre is 2.3 mm. The user moves along a path, taking successive photos with a time lapse of 0.5 or 1 second and adjusting the walking speed to obtain an appropriate overlap, with enough redundancy for 3D coordinate extraction. Marked ground control points are surveyed with GNSS for precise georeferencing of the DSM and orthoimage created by structure-from-motion processing software. An average vertical accuracy of 1 cm could be achieved, which is enough for many applications, for example soil erosion studies. The GNSS survey in RTK mode with permanent stations is now very fast (5 seconds per point), which, together with the image collection, makes for very fast field work. If improved accuracy is needed, the image resolution of 1/4 cm allows it, using a total station for the control point survey, although the field work time increases.
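The reported 2.3 mm ground sampling distance can be sanity-checked with the standard nadir GSD formula GSD = H·p/f. The sensor parameters below (pixel pitch, focal length) are assumed GoPro-like values for a 12 MP 1/2.3" sensor, not figures from the paper.

```python
# Ground sampling distance (GSD) at nadir: GSD = H * p / f
# Assumed (illustrative) GoPro-like parameters: 12 MP 1/2.3" sensor,
# ~1.55 um pixel pitch, ~3 mm focal length; pole height 4 m.
H = 4.0          # camera height above ground [m]
p = 1.55e-6      # pixel pitch [m]
f = 3.0e-3       # focal length [m]
gsd = H * p / f  # [m per pixel]
print(f"GSD = {gsd * 1000:.2f} mm")  # about 2.07 mm under these assumptions,
                                     # consistent with the ~2.3 mm reported
```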
NASA Astrophysics Data System (ADS)
Grimminck, Dennis L. A. G.; Polman, Ben J. W.; Kentgens, Arno P. M.; Leo Meerts, W.
2011-08-01
A fast and accurate fit program is presented for deconvolution of one-dimensional solid-state quadrupolar NMR spectra of powdered materials. The computational cost of synthesizing theoretical spectra is reduced by the use of libraries containing simulated time/frequency domain data. These libraries are calculated once, using second-party simulation software readily available in the NMR community, to ensure maximum flexibility and accuracy with respect to experimental conditions. EASY-GOING deconvolution (EGdeconv) is equipped with evolutionary algorithms that provide robust many-parameter fitting, and offers efficient parallelised computing. The program supports quantification of relative chemical site abundances and (dis)order in the solid state by incorporating (extended) Czjzek and order-parameter models. To illustrate EGdeconv's current capabilities, we provide three case studies. Given the program's simple concept, it allows straightforward extension to include other NMR interactions. The program is available as is for 64-bit Linux operating systems.
Khoomrung, Sakda; Chumnanpuen, Pramote; Jansa-ard, Suwanee; Nookaew, Intawat; Nielsen, Jens
2012-06-01
We present a fast and accurate method for preparation of fatty acid methyl esters (FAMEs) using microwave-assisted derivatization of fatty acids present in yeast samples. The esterification of free/bound fatty acids to FAMEs was completed within 5 min, which is 24 times faster than with conventional heating methods. The developed method was validated in two ways: (1) through comparison with a conventional (hot plate) method and (2) through validation against standard reference material (SRM) 3275-2, omega-3 and omega-6 fatty acids in fish oil (from the National Institute of Standards and Technology, USA). There were no significant differences (P > 0.05) in FAME yields in either validation. By a simple modification of closed-vessel microwave heating, it was possible to carry out the esterification in Pyrex glass tubes kept inside the closed vessel. This allows throughput to be increased to several hundred sample preparations per day, as the time for preparing reused vessels is eliminated. Cell-disruption pretreatment steps are not required, since direct FAME preparation provides equally quantitative results. The new microwave-assisted derivatization method facilitates the preparation of FAMEs directly from yeast cells, but the method is likely to be applicable to other biological samples as well. PMID:22569641
NASA Astrophysics Data System (ADS)
Chae, Kyu-Hyun
2002-04-01
Fourier series solutions to the deflection and magnification by a family of three-dimensional cusped two-power-law ellipsoidal mass distributions are presented. The cusped two-power-law ellipsoidal mass distributions are characterized by inner and outer power-law radial indices and a break (or transition) radius. The model family includes mass models mimicking Jaffe, Hernquist, and η models and dark matter halo profiles from numerical simulations. The Fourier series solutions for the cusped two-power-law mass distributions are relatively simple and allow a very fast calculation, even for a chosen small fractional calculational error (e.g., 10-5). These results will be particularly useful for studying lensed systems that provide a number of accurate lensing constraints and for systematic analyses of large numbers of lenses. Subroutines employing these results for the two-power-law model and the results by Chae, Khersonsky, & Turnshek for the generalized single-power-law mass model are made publicly available.
Accurate, Fast and Cost-Effective Diagnostic Test for Monosomy 1p36 Using Real-Time Quantitative PCR
Cunha, Pricila da Silva; Pena, Heloisa B.; D'Angelo, Carla Sustek; Koiffmann, Celia P.; Rosenfeld, Jill A.; Shaffer, Lisa G.; Stofanko, Martin; Gonçalves-Dornelas, Higgor; Pena, Sérgio Danilo Junho
2014-01-01
Monosomy 1p36 is considered the most common subtelomeric deletion syndrome in humans and it accounts for 0.5–0.7% of all the cases of idiopathic intellectual disability. The molecular diagnosis is often made by microarray-based comparative genomic hybridization (aCGH), which has the drawback of being a high-cost technique. However, patients with classic monosomy 1p36 share some typical clinical characteristics that, together with its common prevalence, justify the development of a less expensive, targeted diagnostic method. In this study, we developed a simple, rapid, and inexpensive real-time quantitative PCR (qPCR) assay for targeted diagnosis of monosomy 1p36, easily accessible for low-budget laboratories in developing countries. For this, we have chosen two target genes which are deleted in the majority of patients with monosomy 1p36: PRKCZ and SKI. In total, 39 patients previously diagnosed with monosomy 1p36 by aCGH, fluorescent in situ hybridization (FISH), and/or multiplex ligation-dependent probe amplification (MLPA) all tested positive on our qPCR assay. By simultaneously using these two genes we have been able to detect 1p36 deletions with 100% sensitivity and 100% specificity. We conclude that qPCR of PRKCZ and SKI is a fast and accurate diagnostic test for monosomy 1p36, costing less than 10 US dollars in reagent costs. PMID:24839341
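Copy-number calls from qPCR data are commonly made with the 2^−ΔΔCt method: a heterozygous deletion shifts the target gene's Ct about one cycle later relative to a two-copy reference, giving a relative copy number near 0.5. The sketch below shows this arithmetic with hypothetical Ct values; the paper's exact quantification procedure is not given in the abstract, so this is a generic illustration.

```python
def relative_copy_number(ct_target_patient, ct_ref_patient,
                         ct_target_control, ct_ref_control):
    """Relative copy number by the 2^-ddCt method (assumes ~100% PCR efficiency)."""
    ddct = (ct_target_patient - ct_ref_patient) - (ct_target_control - ct_ref_control)
    return 2.0 ** (-ddct)

# Hypothetical Ct values: in a 1p36 deletion carrier, a target gene in the
# deleted region (e.g. PRKCZ or SKI) amplifies ~1 cycle later than in a control.
ratio = relative_copy_number(26.0, 24.0, 25.0, 24.0)
print(ratio)  # 0.5 -> one copy instead of two
```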
SMARTIES: User-friendly codes for fast and accurate calculations of light scattering by spheroids
NASA Astrophysics Data System (ADS)
Somerville, W. R. C.; Auguié, B.; Le Ru, E. C.
2016-05-01
We provide a detailed user guide for SMARTIES, a suite of MATLAB codes for calculating the optical properties of oblate and prolate spheroidal particles, with capabilities and ease of use comparable to Mie theory for spheres. SMARTIES is a MATLAB implementation of an improved T-matrix algorithm for the theoretical modelling of electromagnetic scattering by particles of spheroidal shape. The theory behind the improvements in numerical accuracy and convergence is briefly summarized, with reference to the original publications. Instructions for use, a detailed description of the code structure and its range of applicability, and guidelines for further developments by advanced users are given in separate sections of this user guide. The code may be useful to researchers seeking a fast, accurate and reliable tool to simulate the near-field and far-field optical properties of elongated particles, but will also appeal to other developers of light-scattering software seeking a reliable benchmark for non-spherical particles with a challenging aspect ratio and/or refractive index contrast.
Visell, Yon
2015-04-01
This paper proposes a fast, physically accurate method for synthesizing multimodal, acoustic and haptic, signatures of distributed fracture in quasi-brittle heterogeneous materials, such as wood, granular media, or other fiber composites. Fracture processes in these materials are challenging to simulate with existing methods, due to the prevalence of large numbers of disordered, quasi-random spatial degrees of freedom, representing the complex physical state of a sample over the geometric volume of interest. Here, I develop an algorithm for simulating such processes, building on a class of statistical lattice models of fracture that have been widely investigated in the physics literature. This algorithm is enabled through a recently published mathematical construction based on the inverse transform method of random number sampling. It yields a purely time domain stochastic jump process representing stress fluctuations in the medium. The latter can be readily extended by a mean field approximation that captures the averaged constitutive (stress-strain) behavior of the material. Numerical simulations and interactive examples demonstrate the ability of these algorithms to generate physically plausible acoustic and haptic signatures of fracture in complex, natural materials interactively at audio sampling rates. PMID:26357094
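The inverse transform method of random number sampling mentioned above can be shown in its simplest form: mapping uniform random numbers through an inverse CDF. Here it draws exponentially distributed inter-event waiting times for a time-domain jump process; the rate value is illustrative, not from the paper.

```python
import numpy as np

# Inverse transform sampling of inter-event times for a time-domain jump process:
# for a constant event rate lam, F(t) = 1 - exp(-lam * t), so t = -ln(1 - U) / lam.
rng = np.random.default_rng(2)
lam = 5.0                          # illustrative event rate [1/s]
u = rng.random(100_000)
waits = -np.log1p(-u) / lam        # inverse CDF applied to uniform samples
print(waits.mean())                # sample mean close to 1/lam = 0.2
```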
NASA Astrophysics Data System (ADS)
Chen, Yu; Mu, Chenpeng; Intes, Xavier; Blessington, Dana; Chance, Britton
2003-07-01
Near-infrared (NIR) diffuse optical imaging has become a promising method for noninvasive in vivo detection of breast cancer with intrinsic chromophores. Recent developments in molecular specific targeting fluorescent contrast agents offer high tumor to normal tissue contrast, and are capable of selectively labeling various precancer/cancer signatures, thus enhancing both the sensitivity and specificity of cancer detection. To detect a subsurface tumor labeled by fluorescent contrast agents, we have developed a phase cancellation imaging system for fast localization of fluorescent object embedded several centimeters deep inside the turbid media. The instrument is a frequency domain (50 MHz) phase modulation system with dual out-of-phase sources. The excitation wavelength is 780 nm and the fluorescence photons are collected through an 830±10 nm band-pass filter. Localization of fluorescent objects inside the scattering media is accurate using a phase cancellation device. The localization error for a 5 mm diameter sphere filled with 1 nanomole fluorescent dye and 3 cm deep inside the turbid media is about 2 mm. The accuracy of the localization suggests that this system could be helpful in guiding clinical fine-needle biopsy, and would benefit the early detection of breast tumors.
Linaro, Daniele; Storace, Marco; Giugliano, Michele
2011-01-01
Stochastic channel gating is the major source of intrinsic neuronal noise, whose functional consequences at the microcircuit and network levels have been only partly explored. A systematic study of this channel noise in large ensembles of biophysically detailed model neurons calls for the availability of fast numerical methods. In fact, exact techniques employ the microscopic simulation of the random opening and closing of individual ion channels, usually based on Markov models, whose computational loads are prohibitive for next-generation massive computer models of the brain. In this work, we operatively define a procedure for translating any Markov model describing voltage- or ligand-gated membrane ion conductances into an effective stochastic version whose computer simulation is efficient, without compromising accuracy. Our approximation is based on an improved Langevin-like approach, which employs stochastic differential equations and no Monte Carlo methods. As opposed to an earlier proposal recently debated in the literature, our approximation accurately reproduces the statistical properties of the exact microscopic simulations under a variety of conditions, from spontaneous to evoked response features. In addition, our method is not restricted to the Hodgkin-Huxley sodium and potassium currents and generalizes to a variety of voltage- and ligand-gated ion currents. As a by-product, the analysis of the properties emerging in exact Markov schemes by standard probability calculus enables us, for the first time, to analytically identify the sources of inaccuracy of the previous proposal, while providing solid ground for the modification and improvement we present here. PMID:21423712
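A minimal sketch of a Langevin-type (SDE) channel-noise simulation for a single two-state gate, integrated with the Euler-Maruyama method. This is a generic illustration of the class of approximation discussed above, not the authors' exact effective scheme; rates and channel count are illustrative.

```python
import numpy as np

# Euler-Maruyama integration of a Langevin approximation for a two-state
# (open/closed) gate with N channels:
#   dn = (alpha*(1-n) - beta*n) dt + sqrt((alpha*(1-n) + beta*n)/N) dW
rng = np.random.default_rng(1)
N = 1000                  # number of channels
alpha, beta = 0.5, 0.5    # opening/closing rates [1/ms] (illustrative)
dt, steps = 0.01, 20_000
n = alpha / (alpha + beta)          # start at the steady-state open fraction
trace = np.empty(steps)
for k in range(steps):
    drift = alpha * (1.0 - n) - beta * n
    diffusion = np.sqrt(max(alpha * (1.0 - n) + beta * n, 0.0) / N)
    n += drift * dt + diffusion * np.sqrt(dt) * rng.normal()
    n = min(max(n, 0.0), 1.0)       # keep the open fraction in [0, 1]
    trace[k] = n
# Stationary statistics approach the binomial values: mean n_inf = 0.5,
# variance n_inf*(1 - n_inf)/N = 2.5e-4.
print(trace.mean(), trace.var())
```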
Mester, David; Ronin, Yefim; Schnable, Patrick; Aluru, Srinivas; Korol, Abraham
2015-01-01
Our aim was to develop a fast and accurate algorithm for constructing consensus genetic maps for chip-based SNP genotyping data with a high proportion of shared markers between mapping populations. Chip-based genotyping of SNP markers allows producing high-density genetic maps with a relatively standardized set of marker loci for different mapping populations. The availability of a standard high-throughput mapping platform simplifies consensus analysis: unique markers can be ignored at the consensus-mapping stage, reducing the mathematical complexity of the problem and in turn allowing larger mapping data sets to be analyzed with global rather than local optimization criteria. Our three-phase analytical scheme includes automatic selection of ~100-300 of the most informative (resolvable by recombination) markers per linkage group; building a stable skeletal marker order for each data set and verifying it by jackknife re-sampling; and consensus mapping analysis based on a global optimization criterion. A novel Evolution Strategy optimization algorithm with a global optimization criterion, presented in this paper, is able to generate high-quality, ultra-dense consensus maps with many thousands of markers per genome. This algorithm utilizes "potentially good orders" both in the initial solution and in the new mutation procedures that generate trial solutions, making it possible to obtain a consensus order in reasonable time. The developed algorithm, tested on a wide range of simulated data and real-world data (Arabidopsis), outperformed two state-of-the-art algorithms in mapping accuracy and computation time. PMID:25867943
PRIMAL: Fast and Accurate Pedigree-based Imputation from Sequence Data in a Founder Population
Livne, Oren E.; Han, Lide; Alkorta-Aranburu, Gorka; Wentworth-Sheilds, William; Abney, Mark; Ober, Carole; Nicolae, Dan L.
2015-01-01
Founder populations and large pedigrees offer many well-known advantages for genetic mapping studies, including cost-efficient study designs. Here, we describe PRIMAL (PedigRee IMputation ALgorithm), a fast and accurate pedigree-based phasing and imputation algorithm for founder populations. PRIMAL incorporates both existing and original ideas, such as a novel indexing strategy of Identity-By-Descent (IBD) segments based on clique graphs. We were able to impute the genomes of 1,317 South Dakota Hutterites, who had genome-wide genotypes for ~300,000 common single nucleotide variants (SNVs), from 98 whole genome sequences. Using a combination of pedigree-based and LD-based imputation, we were able to assign 87% of genotypes with >99% accuracy over the full range of allele frequencies. Using the IBD cliques we were also able to infer the parental origin of 83% of alleles, and genotypes of deceased recent ancestors for whom no genotype information was available. This imputed data set will enable us to better study the relative contribution of rare and common variants on human phenotypes, as well as parental origin effect of disease risk alleles in >1,000 individuals at minimal cost. PMID:25735005
Fast and accurate computation of two-dimensional non-separable quadratic-phase integrals.
Koç, Aykut; Ozaktas, Haldun M; Hesselink, Lambertus
2010-06-01
We report a fast and accurate algorithm for numerical computation of two-dimensional non-separable linear canonical transforms (2D-NS-LCTs). Also known as quadratic-phase integrals, this class of integral transforms represents a broad class of optical systems including Fresnel propagation in free space, propagation in graded-index media, passage through thin lenses, and arbitrary concatenations of any number of these, including anamorphic/astigmatic/non-orthogonal cases. The general two-dimensional non-separable case poses several challenges which do not exist in the one-dimensional case and the separable two-dimensional case. The algorithm takes approximately N log N time, where N is the two-dimensional space-bandwidth product of the signal. Our method properly tracks and controls the space-bandwidth products in two dimensions, in order to achieve information theoretically sufficient, but not wastefully redundant, sampling required for the reconstruction of the underlying continuous functions at any stage of the algorithm. Additionally, we provide an alternative definition of general 2D-NS-LCTs that shows its kernel explicitly in terms of its ten parameters, and relate these parameters bidirectionally to conventional ABCD matrix parameters. PMID:20508697
Zhou, Nengji; Chen, Lipeng; Huang, Zhongkai; Sun, Kewei; Tanimura, Yoshitaka; Zhao, Yang
2016-03-10
By employing the Dirac-Frenkel time-dependent variational principle, we study the dynamical properties of the Holstein molecular crystal model with diagonal and off-diagonal exciton-phonon coupling. A linear combination of the Davydov D1 (D2) ansatz, referred to as the "multi-D1 ansatz" ("multi-D2 ansatz"), is used as the trial state with enhanced accuracy but without sacrificing efficiency. The time evolution of the exciton probability is found to be in perfect agreement with that of the hierarchy equations of motion, demonstrating the promise the multiple Davydov trial states hold as an efficient, robust description of dynamics of complex quantum systems. In addition to the linear absorption spectra computed for both diagonal and off-diagonal cases, for the first time, 2D spectra have been calculated for systems with off-diagonal exciton-phonon coupling by employing the multiple D2 ansatz to compute the nonlinear response function, testifying to the great potential of the multiple D2 ansatz for fast, accurate implementation of multidimensional spectroscopy. It is found that the signal exhibits a single peak for weak off-diagonal coupling, while a vibronic multipeak structure appears for strong off-diagonal coupling. PMID:26871592
Wang, Tianyun; Lu, Xinfei; Yu, Xiaofei; Xi, Zhendong; Chen, Weidong
2014-01-01
In recent years, various applications involving sparse continuous signal recovery, such as source localization, radar imaging and communication channel estimation, have been addressed from the perspective of compressive sensing (CS) theory. However, two major defects need to be tackled in any practical utilization. The first is the off-grid problem, caused by the basis mismatch between arbitrarily located unknowns and the pre-specified dictionary, which degrades conventional CS reconstruction methods considerably. The second is the urgent demand for low-complexity algorithms, especially when real-time implementation is required. In this paper, to deal with these two problems, we present three fast and accurate sparse reconstruction algorithms, termed HR-DCD, Hlog-DCD and Hlp-DCD, which are based on homotopy, dichotomous coordinate descent (DCD) iterations and non-convex regularizations, combined with a grid refinement technique. Experimental results and related analysis demonstrate the effectiveness of the proposed algorithms. PMID:24675758
Xu, Jing; Ding, Yunhong; Peucheret, Christophe; Xue, Weiqi; Seoane, Jorge; Zsigri, Beáta; Jeppesen, Palle; Mørk, Jesper
2011-01-01
Although patterning effects (PEs) are known to be a limiting factor of ultrafast photonic switches based on semiconductor optical amplifiers (SOAs), a simple approach for their evaluation in numerical simulations and experiments is missing. In this work, we experimentally investigate and verify a theoretical prediction of the pseudo random binary sequence (PRBS) length needed to capture the full impact of PEs. A wide range of SOAs and operation conditions are investigated. The very simple form of the PRBS length condition highlights the role of two parameters, i.e. the recovery time of the SOAs and the operation bit rate. Furthermore, a simple and effective method for probing the maximum PEs is demonstrated, which may relieve the computational effort or the experimental difficulties associated with the use of long PRBSs for the simulation or characterization of SOA-based switches. Good agreement with conventional PRBS characterization is obtained. The method is suitable for quick and systematic estimation and optimization of the switching performance. PMID:21263552
Boyle, John J.; Kume, Maiko; Wyczalkowski, Matthew A.; Taber, Larry A.; Pless, Robert B.; Xia, Younan; Genin, Guy M.; Thomopoulos, Stavros
2014-01-01
When mechanical factors underlie growth, development, disease or healing, they often function through local regions of tissue where deformation is highly concentrated. Current optical techniques to estimate deformation can lack precision and accuracy in such regions due to challenges in distinguishing a region of concentrated deformation from an error in displacement tracking. Here, we present a simple and general technique for improving the accuracy and precision of strain estimation and an associated technique for distinguishing a concentrated deformation from a tracking error. The strain estimation technique improves accuracy relative to other state-of-the-art algorithms by directly estimating strain fields without first estimating displacements, resulting in a very simple method and low computational cost. The technique for identifying local elevation of strain enables for the first time the successful identification of the onset and consequences of local strain concentrating features such as cracks and tears in a highly strained tissue. We apply these new techniques to demonstrate a novel hypothesis in prenatal wound healing. More generally, the analytical methods we have developed provide a simple tool for quantifying the appearance and magnitude of localized deformation from a series of digital images across a broad range of disciplines. PMID:25165601
Two fast and accurate heuristic RBF learning rules for data classification.
Rouhani, Modjtaba; Javan, Dawood S
2016-03-01
This paper presents new Radial Basis Function (RBF) learning methods for classification problems. The proposed methods use heuristics to determine the spreads, the centers and the number of hidden neurons of the network in such a way that higher efficiency is achieved with fewer neurons, while the learning algorithm remains fast and simple. To keep the network size limited, neurons are added to the network recursively until a termination condition is met; each neuron covers some of the training data. The termination condition is covering all training data or reaching the maximum number of neurons. In each step, the center and spread of the new neuron are selected to maximize its coverage. Maximizing the coverage of the neurons leads to a network with fewer neurons, and hence lower VC dimension and better generalization. Using a power exponential distribution function as the activation function of the hidden neurons, and in light of the new learning approaches, it is proved that all data become linearly separable in the space of hidden-layer outputs, which implies that linear output-layer weights with zero training error exist. The proposed methods are applied to some well-known datasets and the simulation results, compared with SVM and other leading RBF learning methods, show satisfactory and comparable performance. PMID:26797472
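A loose reconstruction of the greedy, coverage-driven construction described above, under stated assumptions: add a neuron centered at an uncovered training point, set its spread from the distance to the nearest opposite-class point, and repeat until all points are covered. This is an illustrative sketch, not the authors' exact rule, and it uses a Gaussian rather than a power exponential activation.

```python
import numpy as np

def greedy_rbf(X, y, max_neurons=50):
    """Greedily place RBF neurons until every training point is covered.
    Assumes a two-class problem with both classes present (illustrative)."""
    centers, spreads, labels = [], [], []
    covered = np.zeros(len(X), bool)
    while not covered.all() and len(centers) < max_neurons:
        i = int(np.flatnonzero(~covered)[0])            # first uncovered point
        d_opp = np.linalg.norm(X[y != y[i]] - X[i], axis=1).min()
        r = 0.5 * d_opp                                 # stay inside own class
        centers.append(X[i]); spreads.append(r); labels.append(y[i])
        covered |= (y == y[i]) & (np.linalg.norm(X - X[i], axis=1) <= r)
    return np.array(centers), np.array(spreads), np.array(labels)

def predict(X, centers, spreads, labels):
    # Gaussian activations; simple argmax readout over hidden neurons
    act = np.exp(-np.linalg.norm(X[:, None] - centers[None], axis=2) ** 2
                 / spreads[None] ** 2)
    return labels[act.argmax(axis=1)]

X = np.array([[0.0, 0.0], [0.2, 0.1], [3.0, 3.0], [3.1, 2.9]])
y = np.array([0, 0, 1, 1])
c, s, l = greedy_rbf(X, y)
print(predict(X, c, s, l))  # recovers the training labels [0 0 1 1]
```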
NASA Astrophysics Data System (ADS)
Rawlinson, N.; Sambridge, M.
2003-12-01
The accurate prediction of seismic traveltimes in layered media is required in many areas of seismology. In addition to simple refractions and reflections, complex phases comprising numerous transmission and reflection branches may exist; for instance, the so-called "multiples" frequently identified in marine reflection seismology. We present a grid-based method for the accurate determination of multi-phase traveltimes in layered media of significant complexity. A finite difference eikonal solver known as the Fast Marching Method (FMM) is used to track wavefronts within a layer. FMM is a fast and unconditionally stable upwind scheme that is well suited to complex models, and can be used sequentially to track the multiple refraction and/or reflection branches of virtually any required phase. Although FMM was initially introduced as a first-order scheme, higher-order operators can be used. A mixed-order scheme that preferentially uses second-order operators, but reverts to first-order operators when the required upwind traveltimes are unavailable, is one possibility. Despite improved accuracy, this scheme still suffers from first-order convergence due to high wavefront curvature and first-order accuracy in the vicinity of the source. To overcome this problem, we implement local grid refinement about the source. In order to retain stability, the edge of the refined grid conforms to the shape of the wavefront, so that information only flows out of the refined grid, and never back into it. Application of our new scheme to complex velocity media shows that grid refinement typically improves accuracy by an order of magnitude, with only a small increase in computation time (˜5%). Significantly, first-order convergence is replaced by near second-order convergence, even in media with velocity contrasts as large as 8:1. In one example, with a velocity grid defined by 257,121 nodes, reflection traveltimes from a strongly undulating interface were calculated with an error of
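A first-order Fast Marching sketch on a homogeneous 2D grid (no mixed-order operators and no source-grid refinement) illustrates the baseline scheme whose source-region error the abstract's refinement addresses: traveltimes along grid axes come out exact, while off-axis values carry a first-order error seeded near the source. Grid size and speed field are illustrative.

```python
import heapq
import numpy as np

def fast_marching(speed, src):
    """First-order FMM on a unit-spaced 2D grid: solves |grad T| = 1/speed.
    Illustrative baseline scheme; no higher-order operators or refinement."""
    ny, nx = speed.shape
    T = np.full((ny, nx), np.inf)
    frozen = np.zeros((ny, nx), bool)
    T[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if frozen[i, j]:
            continue
        frozen[i, j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if not (0 <= a < ny and 0 <= b < nx) or frozen[a, b]:
                continue
            # smallest upwind neighbor value along each axis
            tx = min(T[a, b - 1] if b > 0 else np.inf,
                     T[a, b + 1] if b < nx - 1 else np.inf)
            ty = min(T[a - 1, b] if a > 0 else np.inf,
                     T[a + 1, b] if a < ny - 1 else np.inf)
            f = 1.0 / speed[a, b]
            if abs(tx - ty) < f:           # both axes usable: solve the quadratic
                tnew = 0.5 * (tx + ty + np.sqrt(2.0 * f * f - (tx - ty) ** 2))
            else:                          # one-sided update
                tnew = min(tx, ty) + f
            if tnew < T[a, b]:
                T[a, b] = tnew
                heapq.heappush(heap, (tnew, (a, b)))
    return T

# Homogeneous medium: axis traveltime is exact (49.0); the diagonal value
# lies somewhat above the Euclidean 49*sqrt(2) ~ 69.3 (first-order source error).
T = fast_marching(np.ones((50, 50)), (0, 0))
print(T[0, 49], T[49, 49])
```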
Galescu, Ovidiu; George, Minu; Basetty, Sudhakar; Predescu, Iuliana; Mongia, Anil; Ten, Svetlana; Bhangoo, Amrit
2012-01-01
Background. Blood pressure (BP) percentiles in childhood are assessed according to age, gender, and height. Objective. To create a simple BP/height ratio for both systolic BP (SBP) and diastolic BP (DBP), and to study the relationship between BP/height ratios and the corresponding BP percentiles in children. Methods. We analyzed height and BP data from the 2006-2007 NHANES survey. BP percentiles were calculated for 3775 children. Receiver-operating characteristic (ROC) curve analyses were performed to calculate the sensitivity and specificity of BP/height ratios as diagnostic tests for elevated BP (>90th percentile). Correlation analysis was performed between BP percentiles and BP/height ratios. Results. The average age was 12.54 ± 2.67 years. SBP/height and DBP/height ratios strongly correlated with SBP and DBP percentiles in both boys (P < 0.001, R² = 0.85, R² = 0.86) and girls (P < 0.001, R² = 0.85, R² = 0.90). The cutoffs of the SBP/height and DBP/height ratios in boys were ≥0.75 and ≥0.46, respectively; in girls they were ≥0.75 and ≥0.48, respectively, with sensitivity and specificity in the range of 83-100%. Conclusion. BP/height ratios are simple, with high sensitivity and specificity to detect elevated BP in children, and can easily be used in routine medical care. PMID:22577400
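The reported cutoffs translate directly into a screening rule: flag a child when SBP/height or DBP/height meets the sex-specific threshold. The helper below encodes the abstract's cutoffs (BP in mmHg, height in cm); the function name and example values are illustrative.

```python
def bp_screen(sbp, dbp, height_cm, sex):
    """Screen for elevated BP using the BP/height cutoffs from the abstract.
    sex: 'M' for boys, 'F' for girls; sbp/dbp in mmHg, height in cm."""
    sbp_cut = 0.75                          # same SBP/height cutoff for both sexes
    dbp_cut = 0.46 if sex == 'M' else 0.48  # DBP/height cutoff differs by sex
    return sbp / height_cm >= sbp_cut or dbp / height_cm >= dbp_cut

# Illustrative examples for a 150 cm boy: the cutoffs correspond to
# SBP >= 112.5 mmHg or DBP >= 69 mmHg at this height.
print(bp_screen(110, 66, 150, 'M'))  # False: below both cutoffs
print(bp_screen(120, 70, 150, 'M'))  # True: SBP/height = 0.80 >= 0.75
```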
Frąc, Magdalena; Gryta, Agata; Oszust, Karolina; Kotowicz, Natalia
2016-01-01
Finding fungicides effective against Fusarium is a key step in chemical plant protection and in choosing appropriate chemical agents. Existing, conventional methods of evaluating the resistance of Fusarium isolates to fungicides are costly, time-consuming and potentially environmentally harmful due to the high amounts of potentially toxic chemicals used. Therefore, the development of fast, accurate and effective detection methods for Fusarium resistance to fungicides is urgently required. The MT2 microplate (Biolog™) method is traditionally used for bacterial identification and the evaluation of the ability of bacteria to utilize different carbon substrates. However, to the best of our knowledge, there are no reports concerning the use of this tool to determine the fungicide resistance of Fusarium isolates. For this reason, the objectives of this study were to develop a fast method for detecting Fusarium resistance to fungicides and to validate its effectiveness against the traditional hole-plate assay. In the present study, the MT2 microplate-based assay was evaluated for potential use as an alternative resistance detection method. This was carried out using three commercially available fungicides containing the following active substances: triazoles (tebuconazole), benzimidazoles (carbendazim) and strobilurins (azoxystrobin), at six concentrations (0, 0.0005, 0.005, 0.05, 0.1, 0.2%), for nine selected Fusarium isolates. Each concentration of each fungicide was loaded into MT2 microplate wells, and the wells were inoculated with Fusarium mycelium suspended in PM4-IF inoculating fluid. Before inoculation, the suspension was standardized for each isolate to 75% transmittance. The traditional hole-plate method was used as a control assay, with fungicide concentrations of 0, 0.0005, 0.005, 0.05, 0.5, 1, 2, 5, 10, 25, and 50%. Strong relationships between MT2 microplate and traditional hole
NASA Astrophysics Data System (ADS)
Rensonnet, Gaëtan; Jacobs, Damien; Macq, Benoît; Taquet, Maxime
2016-03-01
Diffusion-weighted magnetic resonance imaging (DW-MRI) is a powerful tool to probe the diffusion of water through tissues. Through the application of magnetic gradients of appropriate direction, intensity and duration constituting the acquisition parameters, information can be retrieved about the underlying microstructural organization of the brain. In this context, an important and open question is to determine an optimal sequence of such acquisition parameters for a specific purpose. The use of simulated DW-MRI data for a given microstructural configuration provides a convenient and efficient way to address this problem. We first present a novel hybrid method for the synthetic simulation of DW-MRI signals that combines analytic expressions in simple geometries such as spheres and cylinders and Monte Carlo (MC) simulations elsewhere. Our hybrid method remains valid for any acquisition parameters and provides identical levels of accuracy with a computational time that is 90% shorter than that required by MC simulations for commonly-encountered microstructural configurations. We apply our novel simulation technique to estimate the radius of axons under various noise levels with different acquisition protocols commonly used in the literature. The results of our comparison suggest that protocols favoring a large number of gradient intensities such as a Cube and Sphere (CUSP) imaging provide more accurate radius estimation than conventional single-shell HARDI acquisitions for an identical acquisition time.
Simple yet accurate noncontact device for measuring the radius of curvature of a spherical mirror
Spiridonov, Maxim; Toebaert, David
2006-09-10
An easily reproducible device is demonstrated to be capable of measuring the radii of curvature of spherical mirrors, both convex and concave, without resorting to high-end interferometric or tactile devices. The former are too elaborate for our purposes, and the latter cannot be used due to the delicate nature of the coatings applied to mirrors used in high-power CO2 laser applications. The proposed apparatus is accurate enough to be useful to anyone using curved optics and needing a quick way to assess the values of the radii of curvature, be it for entrance quality control or troubleshooting an apparently malfunctioning optical system. Specifically, the apparatus was designed for checking 50 mm diameter resonator (typically flat or tens of meters concave) and telescope (typically some meters convex and concave) mirrors for a high-power CO2 laser, but it can easily be adapted to any other type of spherical mirror by a straightforward resizing.
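The abstract does not disclose the device's measurement principle, so the following is only a generic geometric illustration of recovering a radius of curvature from simple measurements: the exact spherical-cap (sagitta) relation between the sag h measured over a chord of half-width r and the radius R.

```python
import math

def radius_from_sagitta(h, r):
    """Sphere radius from sag h measured over a chord of half-width r:
    R = r**2 / (2*h) + h / 2  (exact for a spherical cap; any length unit)."""
    return r * r / (2.0 * h) + h / 2.0
```

For the long radii quoted in the abstract (tens of metres over a 50 mm aperture) the sag is only a few microns, which is why such measurements are delicate in practice.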
A simple and accurate algorithm for path integral molecular dynamics with the Langevin thermostat.
Liu, Jian; Li, Dezhang; Liu, Xinzijian
2016-07-14
We introduce a novel simple algorithm for thermostatting path integral molecular dynamics (PIMD) with the Langevin equation. The staging transformation of path integral beads is employed for demonstration. The optimum friction coefficients for the staging modes in the free particle limit are used for all systems. In comparison to the path integral Langevin equation thermostat, the new algorithm exploits a different order of splitting for the phase space propagator associated with the Langevin equation. While the error analysis is made for both algorithms, they are also employed in the PIMD simulations of three realistic systems (the H2O molecule, liquid para-hydrogen, and liquid water) for comparison. It is shown that the new thermostat increases the time interval of PIMD by a factor of 4-6 or more for achieving the same accuracy. In addition, the supplementary material shows the error analysis made for the algorithms when the normal-mode transformation of path integral beads is used. PMID:27421393
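The central point, that the order in which the Langevin phase-space propagator is split matters, can be illustrated on a toy system. The sketch below is a generic BAOAB-split Langevin step for a 1D harmonic oscillator, not the authors' staged-PIMD scheme; all names and defaults are illustrative.

```python
import math, random

def baoab_step(x, p, dt, m=1.0, omega=1.0, gamma=1.0, beta=1.0, rng=random):
    """One BAOAB-split Langevin step for a 1D harmonic oscillator.
    B = half momentum kick, A = half position drift, O = exact
    Ornstein-Uhlenbeck update of the momentum. The ordering of these
    sub-steps is exactly the kind of splitting choice the paper analyzes."""
    force = lambda q: -m * omega ** 2 * q
    p += 0.5 * dt * force(x)              # B
    x += 0.5 * dt * p / m                 # A
    c1 = math.exp(-gamma * dt)            # O: exact OU solution
    c2 = math.sqrt((1.0 - c1 * c1) * m / beta)
    p = c1 * p + c2 * rng.gauss(0.0, 1.0)
    x += 0.5 * dt * p / m                 # A
    p += 0.5 * dt * force(x)              # B
    return x, p
```

Long trajectories of this step sample the canonical distribution, so the configurational average ⟨x²⟩ should approach 1/(mω²β).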
NASA Astrophysics Data System (ADS)
Balderas-López, J. A.; Mandelis, A.
2001-06-01
A simple methodology for the direct measurement of the thermal wavelength using a thermal-wave cavity, and its application to the evaluation of the thermal diffusivity of liquids is described. The simplicity and robustness of this technique lie in its relative measurement features for both the thermal-wave phase and cavity length, thus eliminating the need for taking into account difficult-to-quantify and time-consuming instrumental phase shifts. Two liquid samples were used: distilled water and ethylene glycol. Excellent agreement was found with reported results in the literature. The accuracy of the thermal diffusivity measurements using the new methodology originates in the use of only difference measurements in the thermal-wave phase and cavity length. Measurement precision is directly related to the corresponding precision on the measurement of the thermal wavelength.
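The underlying relation (a standard thermal-wave result, not quoted in the abstract) is that at modulation frequency f the thermal wavelength is λ = 2π·sqrt(α/(πf)), so a measured wavelength gives the diffusivity as α = fλ²/(4π). A one-line sketch in SI units:

```python
import math

def thermal_diffusivity(wavelength_m, freq_hz):
    """alpha = f * lambda**2 / (4*pi), inverted from the thermal-wave
    relation lambda = 2*pi*sqrt(alpha/(pi*f)); result in m**2/s."""
    return freq_hz * wavelength_m ** 2 / (4.0 * math.pi)
```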
Stam, H; van den Berg, B; Bogaard, J M; Versprille, A
1987-08-01
Energy expenditure and the amount of metabolised carbohydrate, protein and lipid can be calculated from the O2 consumption, CO2 production and nitrogen excretion using indirect calorimetry. A low-cost automatic system has been developed suitable for short- and long-term measurements during artificial ventilation, in which the gas analysers were calibrated automatically every 10 min and in which the desired variables were calculated and printed every 5 min. O2 and CO2 concentrations of mixed expired and inspiratory gas, the expired minute volume VE, and patient's rectal temperature, were sampled at regular time intervals and a simple programmable calculator with printer was used for the on-line data analysis. Tests on accuracy, stability, reproducibility and feasibility showed this system to be suitable for clinical application. PMID:3113814
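The abstract does not reproduce the calculation itself; one classical choice for deriving energy expenditure from O2 consumption, CO2 production and urinary nitrogen excretion is the Weir equation, sketched here as an illustration (the coefficients are the standard Weir values, not necessarily those programmed into this system's calculator):

```python
def weir_energy_expenditure(vo2_l, vco2_l, urinary_n_g=0.0):
    """Classical Weir equation: energy (kcal) = 3.941*VO2 + 1.106*VCO2 - 2.17*N,
    with VO2 and VCO2 in litres and urinary nitrogen N in grams."""
    return 3.941 * vo2_l + 1.106 * vco2_l - 2.17 * urinary_n_g
```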
NINJA-OPS: Fast Accurate Marker Gene Alignment Using Concatenated Ribosomes
Al-Ghalith, Gabriel A.; Montassier, Emmanuel; Ward, Henry N.; Knights, Dan
2016-01-01
The explosion of bioinformatics technologies in the form of next generation sequencing (NGS) has facilitated a massive influx of genomics data in the form of short reads. Short read mapping is therefore a fundamental component of next generation sequencing pipelines which routinely match these short reads against reference genomes for contig assembly. However, such techniques have seldom been applied to microbial marker gene sequencing studies, which have mostly relied on novel heuristic approaches. We propose NINJA Is Not Just Another OTU-Picking Solution (NINJA-OPS, or NINJA for short), a fast and highly accurate novel method enabling reference-based marker gene matching (picking Operational Taxonomic Units, or OTUs). NINJA takes advantage of the Burrows-Wheeler (BW) alignment using an artificial reference chromosome composed of concatenated reference sequences, the “concatesome,” as the BW input. Other features include automatic support for paired-end reads with arbitrary insert sizes. NINJA is also free and open source and implements several pre-filtering methods that elicit substantial speedup when coupled with existing tools. We applied NINJA to several published microbiome studies, obtaining accuracy similar to or better than previous reference-based OTU-picking methods while achieving an order of magnitude or more speedup and using a fraction of the memory footprint. NINJA is a complete pipeline that takes a FASTA-formatted input file and outputs a QIIME-formatted taxonomy-annotated BIOM file for an entire MiSeq run of human gut microbiome 16S genes in under 10 minutes on a dual-core laptop. PMID:26820746
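The concatesome idea can be sketched in a few lines: concatenate the reference sequences with padding into one artificial chromosome, and keep each sequence's start offset so alignment hits map back to their source. The padding scheme below is illustrative, not NINJA-OPS's exact format:

```python
def build_concatesome(refs, pad="N" * 10):
    """Concatenate reference sequences into one artificial chromosome,
    recording each sequence's start offset for reverse lookup."""
    offsets, parts, pos = {}, [], 0
    for name, seq in refs.items():
        offsets[name] = pos
        parts.append(seq)
        parts.append(pad)
        pos += len(seq) + len(pad)
    return "".join(parts), offsets

def locate(offsets, refs, hit_pos):
    """Map a position on the concatesome back to (reference, local offset),
    or None if the hit falls inside padding."""
    for name, start in offsets.items():
        if start <= hit_pos < start + len(refs[name]):
            return name, hit_pos - start
    return None
```

A Burrows-Wheeler aligner then needs to index only this single sequence rather than many separate references.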
Westendorp, Hendrik; Nuver, Tonnis T; Moerland, Marinus A; Minken, André W
2015-10-21
The geometry of a permanent prostate implant varies over time. Seeds can migrate, and edema of the prostate affects the position of seeds. Seed movements directly influence dosimetry, which relates to treatment quality. We present a method that tracks all individual seeds over time, allowing quantification of seed movements. This linking procedure was tested on transrectal ultrasound (TRUS) and cone-beam CT (CBCT) datasets of 699 patients. These datasets were acquired intraoperatively during a dynamic implantation procedure that combines both imaging modalities. The procedure was subdivided into four automatic linking steps. (I) The Hungarian algorithm was applied to initially link seeds in the CBCT and corresponding TRUS datasets. (II) Strands were identified and optimized based on curvature and line fits: non-optimal links were removed. (III) The positions of unlinked seeds were reviewed and linked to incomplete strands if within curvature and distance thresholds. (IV) Finally, seeds close to strands were linked, even if the curvature threshold was violated. After linking the seeds, an affine transformation was applied. The procedure was repeated until the results were stable or the 6th iteration ended. All results were visually reviewed for mismatches and uncertainties. Eleven implants showed a mismatch and in 12 cases an uncertainty was identified. On average the linking procedure took 42 ms per case. This accurate and fast method has the potential to be used for other time spans, such as Day 30, and other imaging modalities. It can potentially be used during a dynamic implantation procedure to evaluate the quality of the permanent prostate implant faster and better. PMID:26439900
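Step (I) is a linear assignment problem: pair each seed in one dataset with one seed in the other so that the total distance is minimal. The sketch below solves it by brute force for small seed sets; the paper uses the Hungarian algorithm, which solves the same minimum-cost matching in polynomial time:

```python
import itertools, math

def link_seeds(seeds_a, seeds_b):
    """Minimum-total-distance one-to-one linking of two equal-size 3D seed
    sets. Brute force over permutations (only viable for small n); returns
    (list of (index_a, index_b) links, total distance)."""
    n = len(seeds_a)
    dist = [[math.dist(a, b) for b in seeds_b] for a in seeds_a]
    best_cost, best_perm = float("inf"), None
    for perm in itertools.permutations(range(n)):
        cost = sum(dist[i][perm[i]] for i in range(n))
        if cost < best_cost:
            best_cost, best_perm = cost, perm
    return list(enumerate(best_perm)), best_cost
```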
Simple and fast determination of perfluorinated compounds in Taihu Lake by SPE-UHPLC-MS/MS.
Zhu, Pengfei; Ling, Xia; Liu, Wenwei; Kong, Lingcan; Yao, Yuyang
2016-09-15
A simple and fast analytical method for the determination of eleven polyfluorinated compounds (PFCs) in source water was developed in the present work. The water sample was prepared without filtration through a microfiltration membrane, and 500 mL of source water was enriched by solid phase extraction (SPE). The target compounds were analyzed by ultra-high-performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS). The optimized analytical method was validated in terms of recovery, precision and method detection limits (MDLs). The recovery values after correction with the corresponding labeled standard were between 97.3 and 113.0% for samples spiked at 5 ng/L, 10 ng/L and 20 ng/L. All PFCs showed good linearity, with linear correlation coefficients over 0.99. The precisions were 1.0-9.0% (n=6). As a result of the enrichment, the MDL values ranged from 0.03 to 1.9 ng/L, sufficient for analysis of the trace levels of PFCs in Taihu Lake. The method was further validated by determining PFCs in source water; the results showed that PFHxS, PFHxA, PFOA and PFOS were the primary PFCs in Taihu Lake, which may differ from other reports. The method can be used for the determination of PFCs in water with stable recovery, good reproducibility, low detection limits, less solvent consumption, and savings in time and labor. To our knowledge, this is the first method that describes the effect of the filter membrane on the determination of PFCs in water, which may yield more accurate concentrations of PFCs in Taihu Lake. PMID:27454901
NASA Astrophysics Data System (ADS)
Smith, R.; Flynn, C.; Candlish, G. N.; Fellhauer, M.; Gibson, B. K.
2015-04-01
We present accurate models of the gravitational potential produced by a radially exponential disc mass distribution. The models are produced by combining three separate Miyamoto-Nagai discs. Such models have been used previously to model the disc of the Milky Way, but here we extend this framework to allow its application to discs of any mass, scalelength, and a wide range of thickness from infinitely thin to near spherical (ellipticities from 0 to 0.9). The models have the advantage of simplicity of implementation, and we expect faster run speeds over a double exponential disc treatment. The potentials are fully analytical, and differentiable at all points. The mass distribution of our models deviates from the radial mass distribution of a pure exponential disc by <0.4 per cent out to 4 disc scalelengths, and <1.9 per cent out to 10 disc scalelengths. We tabulate fitting parameters which facilitate construction of exponential discs for any scalelength, and a wide range of disc thickness (a user-friendly, web-based interface is also available). Our recipe is well suited for numerical modelling of the tidal effects of a giant disc galaxy on star clusters or dwarf galaxies. We consider three worked examples; the Milky Way thin and thick disc, and a discy dwarf galaxy.
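A single Miyamoto-Nagai disc has the closed-form potential Φ(R, z) = −GM / sqrt(R² + (a + sqrt(z² + b²))²); the models above sum three such discs. A sketch with illustrative parameters (the paper tabulates fitted (M, a, b) triples, which are not reproduced here):

```python
import math

G = 4.30091e-6  # gravitational constant, kpc * (km/s)**2 / Msun

def miyamoto_nagai(R, z, M, a, b):
    """Potential of one Miyamoto-Nagai disc at cylindrical radius R, height z
    (lengths in kpc, mass in Msun; result in (km/s)**2)."""
    return -G * M / math.sqrt(R ** 2 + (a + math.sqrt(z ** 2 + b ** 2)) ** 2)

def triple_mn(R, z, components):
    """Sum of three MN discs; `components` holds (M, a, b) triples."""
    return sum(miyamoto_nagai(R, z, M, a, b) for M, a, b in components)
```

Because each term is analytic and smooth, forces follow by differentiation everywhere, which is what makes this family convenient for N-body and tidal-field work.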
A Simple and Accurate Model to Predict Responses to Multi-electrode Stimulation in the Retina
Maturana, Matias I.; Apollo, Nicholas V.; Hadjinicolaou, Alex E.; Garrett, David J.; Cloherty, Shaun L.; Kameneva, Tatiana; Grayden, David B.; Ibbotson, Michael R.; Meffin, Hamish
2016-01-01
Implantable electrode arrays are widely used in therapeutic stimulation of the nervous system (e.g. cochlear, retinal, and cortical implants). Currently, most neural prostheses use serial stimulation (i.e. one electrode at a time) despite this severely limiting the repertoire of stimuli that can be applied. Methods to reliably predict the outcome of multi-electrode stimulation have not been available. Here, we demonstrate that a linear-nonlinear model accurately predicts neural responses to arbitrary patterns of stimulation using in vitro recordings from single retinal ganglion cells (RGCs) stimulated with a subretinal multi-electrode array. In the model, the stimulus is projected onto a low-dimensional subspace and then undergoes a nonlinear transformation to produce an estimate of spiking probability. The low-dimensional subspace is estimated using principal components analysis, which gives the neuron’s electrical receptive field (ERF), i.e. the electrodes to which the neuron is most sensitive. Our model suggests that stimulation proportional to the ERF yields a higher efficacy given a fixed amount of power when compared to equal amplitude stimulation on up to three electrodes. We find that the model captures the responses of all the cells recorded in the study, suggesting that it will generalize to most cell types in the retina. The model is computationally efficient to evaluate and, therefore, appropriate for future real-time applications including stimulation strategies that make use of recorded neural activity to improve the stimulation strategy. PMID:27035143
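The model structure described above reduces to a linear projection onto a low-dimensional subspace followed by a static nonlinearity. In the sketch below, the component vectors, weights and the choice of a logistic nonlinearity are illustrative stand-ins for the fitted quantities (the paper estimates the subspace by principal components analysis):

```python
import math

def ln_spike_probability(stimulus, components, weights, bias):
    """Linear-nonlinear model: project the multi-electrode stimulus vector
    onto subspace rows in `components`, then map the weighted projection
    through a logistic nonlinearity to a spiking probability."""
    proj = [sum(c * s for c, s in zip(comp, stimulus)) for comp in components]
    drive = sum(w * p for w, p in zip(weights, proj)) + bias
    return 1.0 / (1.0 + math.exp(-drive))
```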
Simple and accurate quantification of BTEX in ambient air by SPME and GC-MS.
Baimatova, Nassiba; Kenessov, Bulat; Koziel, Jacek A; Carlsen, Lars; Bektassov, Marat; Demyanenko, Olga P
2016-07-01
Benzene, toluene, ethylbenzene and xylenes (BTEX) comprise one of the most ubiquitous and hazardous groups of ambient air pollutants of concern. Application of standard analytical methods for quantification of BTEX is limited by the complexity of sampling and sample preparation equipment, and by budget requirements. Methods based on SPME represent a simpler alternative, but still require complex calibration procedures. The objective of this research was to develop a simpler, low-budget, and accurate method for quantification of BTEX in ambient air based on SPME and GC-MS. Standard 20-mL headspace vials were used for field air sampling and calibration. To avoid the challenges of obtaining and working with 'zero' air, slope factors of external standard calibration were determined using standard addition and inherently polluted lab air. For the polydimethylsiloxane (PDMS) fiber, differences between the slope factors of calibration plots obtained using lab and outdoor air were below 14%. The PDMS fiber provided higher precision during calibration, while the use of a Carboxen/PDMS fiber resulted in lower detection limits for benzene and toluene. To provide sufficient accuracy, the use of 20-mL vials requires triplicate sampling and analysis. The method was successfully applied to the analysis of 108 ambient air samples from Almaty, Kazakhstan. Average concentrations of benzene, toluene, ethylbenzene and o-xylene were 53, 57, 11 and 14 µg m(-3), respectively. The developed method can be modified for further quantification of a wider range of volatile organic compounds in air. In addition, the new method is amenable to automation. PMID:27154647
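The standard-addition calibration step reduces to an ordinary least-squares slope of instrument response versus spiked amount: the ambient (nonzero) background is absorbed into the intercept, so the slope alone calibrates the method. A stdlib-only sketch (not the authors' data-processing code):

```python
def slope_by_standard_addition(added, response):
    """Least-squares slope and intercept of response vs. spiked amount.
    With standard addition the background contributes to the intercept,
    so the slope factor is valid even without 'zero' air."""
    n = len(added)
    mx = sum(added) / n
    my = sum(response) / n
    sxx = sum((x - mx) ** 2 for x in added)
    sxy = sum((x - mx) * (y - my) for x, y in zip(added, response))
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept
```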
Gu, S
2016-08-01
Despite its low accuracy and consistency, growing degree days (GDD) has been widely used to approximate growing heat summation (GHS) for regional classification and phenological prediction. GDD is usually calculated from the mean of daily minimum and maximum temperatures (GDDmm) above a growing base temperature (Tgb). To determine approximation errors and accuracy, daily and cumulative GDDmm were compared to GDD based on daily average temperature (GDDavg), growing degree hours (GDH) based on hourly temperatures, and growing degree minutes (GDM) based on minute-by-minute temperatures. Finite error, due to the difference between measured and true temperatures above Tgb, is large in GDDmm but is negligible in GDDavg, GDH, and GDM, depending only upon the number of measured temperatures used for daily approximation. Hidden negative error, due to temperatures below Tgb being averaged over approximation intervals larger than the measuring interval, is large in GDDmm and GDDavg but is negligible in GDH and GDM. Both GDH and GDM improve GHS approximation accuracy over GDDmm or GDDavg by summing multiple integration rectangles to reduce both finite and hidden negative errors. GDH is proposed as the standardized GHS approximation protocol, providing adequate accuracy and high precision independent of Tgb while requiring simple data recording and processing. PMID:26589826
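The hidden-negative-error argument is easy to reproduce numerically: a day spent half below and half above the base temperature yields zero GDDmm (the sub-base hours pull the daily mean down to the base) but a clearly nonzero GDH. A sketch, with function names invented here:

```python
def gdd_minmax(tmin, tmax, t_base):
    """GDDmm: degree-days from the daily min/max mean. Sub-base temperatures
    are averaged in before truncation, causing the 'hidden negative error'."""
    return max((tmin + tmax) / 2.0 - t_base, 0.0)

def gdh(hourly_temps, t_base):
    """GDH: sum hourly exceedances above t_base, expressed in degree-days.
    Truncation happens per hour, so sub-base hours contribute exactly zero."""
    return sum(max(t - t_base, 0.0) for t in hourly_temps) / 24.0
```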
Energy expenditure during level human walking: seeking a simple and accurate predictive solution.
Ludlow, Lindsay W; Weyand, Peter G
2016-03-01
Accurate prediction of the metabolic energy that walking requires can inform numerous health, bodily status, and fitness outcomes. We adopted a two-step approach to identifying a concise, generalized equation for predicting level human walking metabolism. Using literature-aggregated values we compared 1) the predictive accuracy of three literature equations: American College of Sports Medicine (ACSM), Pandolf et al., and Height-Weight-Speed (HWS); and 2) the goodness-of-fit possible from one- vs. two-component descriptions of walking metabolism. Literature metabolic rate values (n = 127; speed range = 0.4 to 1.9 m/s) were aggregated from 25 subject populations (n = 5-42) whose means spanned a 1.8-fold range of heights and a 4.2-fold range of weights. Population-specific resting metabolic rates (V̇o2 rest) were determined using standardized equations. Our first finding was that the ACSM and Pandolf et al. equations underpredicted nearly all 127 literature-aggregated values. Consequently, their standard errors of estimate (SEE) were nearly four times greater than those of the HWS equation (4.51 and 4.39 vs. 1.13 ml O2·kg(-1)·min(-1), respectively). For our second comparison, empirical best-fit relationships for walking metabolism were derived from the data set in one- and two-component forms for three V̇o2-speed model types: linear (∝V(1.0)), exponential (∝V(2.0)), and exponential/height (∝V(2.0)/Ht). We found that the proportion of variance (R(2)) accounted for, when averaged across the three model types, was substantially lower for one- vs. two-component versions (0.63 ± 0.1 vs. 0.90 ± 0.03) and the predictive errors were nearly twice as great (SEE = 2.22 vs. 1.21 ml O2·kg(-1)·min(-1)). Our final analysis identified the following concise, generalized equation for predicting level human walking metabolism: V̇o2 total = V̇o2 rest + 3.85 + 5.97·V(2)/Ht (where V is measured in m/s, Ht in meters, and V̇o2 in ml O2·kg(-1)·min(-1)). PMID:26679617
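The final generalized equation can be coded directly. The resting rate must be supplied by the caller, since the study derives population-specific V̇o2 rest values from standardized equations not reproduced in the abstract:

```python
def walking_vo2_total(speed_mps, height_m, vo2_rest):
    """Generalized equation identified by the study:
    VO2_total = VO2_rest + 3.85 + 5.97 * V**2 / Ht  (ml O2 kg^-1 min^-1),
    with walking speed V in m/s and height Ht in metres."""
    return vo2_rest + 3.85 + 5.97 * speed_mps ** 2 / height_m
```

Note the two-component structure the authors argue for: a speed-independent term (resting plus a fixed walking overhead) and an exponential speed term scaled by height.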
Bigdeli, T. Bernard; Lee, Donghyung; Webb, Bradley Todd; Riley, Brien P.; Vladimirov, Vladimir I.; Fanous, Ayman H.; Kendler, Kenneth S.; Bacanu, Silviu-Alin
2016-01-01
Motivation: For genetic studies, statistically significant variants explain far less trait variance than ‘sub-threshold’ association signals. To dimension follow-up studies, researchers need to accurately estimate ‘true’ effect sizes at each SNP, e.g. the true mean of odds ratios (ORs)/regression coefficients (RRs) or Z-score noncentralities. Naïve estimates of effect sizes incur winner’s curse biases, which are reduced only by laborious winner’s curse adjustments (WCAs). Given that Z-score estimates can be theoretically translated to other scales, we propose a simple method to compute the WCA for Z-scores, i.e. their true means/noncentralities. Results: WCA of Z-scores shrinks them towards zero while, on the P-value scale, multiple testing adjustment (MTA) shrinks P-values toward one, which corresponds to the zero Z-score value. Thus, WCA on the Z-score scale is a proxy for MTA on the P-value scale. Therefore, to estimate Z-score noncentralities for all SNPs in genome scans, we propose the FDR Inverse Quantile Transformation (FIQT). It (i) performs the simpler MTA of P-values using FDR and (ii) obtains noncentralities by back-transforming MTA P-values to the Z-score scale. Realistic simulations suggest that, compared to competitors, FIQT is more (i) accurate and (ii) computationally efficient by orders of magnitude. Practical application of FIQT to the Psychiatric Genetic Consortium schizophrenia cohort predicts a non-trivial fraction of sub-threshold signals which become significant in much larger supersamples. Conclusions: FIQT is a simple, yet accurate, WCA method for Z-scores (and ORs/RRs, via simple transformations). Availability and Implementation: A 10-line R function implementation is available at https://github.com/bacanusa/FIQT. Contact: sabacanu@vcu.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27187203
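As described, FIQT reduces to: Benjamini-Hochberg-adjust the two-sided p-values of the Z-scores, then map the adjusted p-values back to the Z scale with signs preserved. The sketch below mirrors that recipe in spirit; edge handling is simplified relative to the authors' R function:

```python
import math
from statistics import NormalDist

def fiqt(z_scores):
    """FDR Inverse Quantile Transformation sketch: (i) two-sided p-values,
    (ii) Benjamini-Hochberg step-up adjustment with monotonicity enforced,
    (iii) back-transform adjusted p-values to signed Z-scores."""
    nd = NormalDist()
    n = len(z_scores)
    p = [2.0 * (1.0 - nd.cdf(abs(z))) for z in z_scores]
    order = sorted(range(n), key=lambda i: p[i])
    p_adj = [1.0] * n
    running_min = 1.0
    for rank in range(n - 1, -1, -1):   # BH step-up, largest p first
        i = order[rank]
        running_min = min(running_min, p[i] * n / (rank + 1))
        p_adj[i] = min(running_min, 1.0)
    return [math.copysign(nd.inv_cdf(1.0 - pa / 2.0), z) if pa < 1.0 else 0.0
            for pa, z in zip(p_adj, z_scores)]
```

As the abstract notes, the adjustment only shrinks: every output noncentrality estimate is no larger in magnitude than the input Z-score.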
NASA Astrophysics Data System (ADS)
Guerlet, Sandrine; Spiga, A.; Sylvestre, M.; Fouchet, T.; Millour, E.; Wordsworth, R.; Leconte, J.; Forget, F.
2013-10-01
Recent observations of Saturn’s stratospheric thermal structure and composition revealed new phenomena: an equatorial oscillation in temperature, reminiscent of the Earth's Quasi-Biennial Oscillation; strong meridional contrasts of hydrocarbons; and a warm “beacon” associated with the powerful 2010 storm. These signatures cannot be reproduced by 1D photochemical and radiative models and suggest that atmospheric dynamics plays a key role. This motivated us to develop a complete 3D General Circulation Model (GCM) for Saturn, based on the LMDz hydrodynamical core, to explore the circulation, seasonal variability, and wave activity in Saturn's atmosphere. In order to closely reproduce Saturn's radiative forcing, particular emphasis was put on obtaining fast and accurate radiative transfer calculations. Our radiative model uses correlated-k distributions and a spectral discretization tailored for Saturn's atmosphere. We include internal heat flux, ring shadowing and aerosols. We will report on the sensitivity of the model to spectral discretization, spectroscopic databases, and aerosol scenarios (varying particle sizes, opacities and vertical structures). We will also discuss the radiative effect of the ring shadowing on Saturn's atmosphere. We will present a comparison of the temperature fields obtained with this new radiative equilibrium model to those inferred from Cassini/CIRS observations. In the troposphere, our model reproduces the observed temperature knee caused by heating at the top of the tropospheric aerosol layer. In the lower stratosphere (20mbar
Zaki, S.K.; Bretan, P.N.; Go, R.T.; Rehm, P.K.; Streem, S.B.; Novick, A.C.
1990-06-01
Orthoiodohippurate renal scanning has proved to be a reliable, noninvasive method for the evaluation and follow-up of renal allograft function. However, a standardized system for grading renal function with this test is not available. We propose a simple grading system to distinguish the different functional phases of hippurate scanning in renal transplant recipients. This grading system was studied in 138 patients who were evaluated 1 week after renal transplantation. There was a significant correlation between the isotope renographic functional grade and clinical correlates of allograft function such as the serum creatinine level (p = 0.0001), blood urea nitrogen level (p = 0.0001), urine output (p = 0.005) and need for hemodialysis (p = 0.007). We recommend this grading system as a simple and accurate method to interpret orthoiodohippurate renal scans in the evaluation and follow-up of renal allograft recipients.
Xin, Hongyi; Greth, John; Emmons, John; Pekhimenko, Gennady; Kingsford, Carl; Alkan, Can; Mutlu, Onur
2015-01-01
Motivation: Calculating the edit-distance (i.e. minimum number of insertions, deletions and substitutions) between short DNA sequences is the primary task performed by seed-and-extend based mappers, which compare billions of sequences. In practice, only sequence pairs with a small edit-distance provide useful scientific data. However, the majority of sequence pairs analyzed by seed-and-extend based mappers differ by significantly more errors than what is typically allowed. Such error-abundant sequence pairs needlessly waste resources and severely hinder the performance of read mappers. Therefore, it is crucial to develop a fast and accurate filter that can rapidly and efficiently detect error-abundant string pairs and remove them from consideration before more computationally expensive methods are used. Results: We present a simple and efficient algorithm, Shifted Hamming Distance (SHD), which accelerates the alignment verification procedure in read mapping, by quickly filtering out error-abundant sequence pairs using bit-parallel and SIMD-parallel operations. SHD only filters string pairs that contain more errors than a user-defined threshold, making it fully comprehensive. It also maintains high accuracy with moderate error threshold (up to 5% of the string length) while achieving a 3-fold speedup over the best previous algorithm (Gene Myers’s bit-vector algorithm). SHD is compatible with all mappers that perform sequence alignment for verification. Availability and implementation: We provide an implementation of SHD in C with Intel SSE instructions at: https://github.com/CMU-SAFARI/SHD. Contact: hxin@cmu.edu, calkan@cs.bilkent.edu.tr or onur@cmu.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25577434
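The mask-combination idea at the heart of SHD can be conveyed with a toy, pure-Python version: build a mismatch bit-mask for every shift within the error budget e, AND the masks so a bit survives only if that position mismatches under all shifts, and keep the pair only if few mismatches survive. This is an illustrative sketch only; the real SHD uses bit-parallel SSE instructions and additional refinements.

```python
def shd_filter(a, b, e):
    """Toy Shifted Hamming Distance filter: True means the pair is kept for
    full alignment verification; False means it is filtered out."""
    n = len(a)
    combined = None
    for s in range(-e, e + 1):
        mask = 0
        for i in range(n):
            j = i + s
            if not (0 <= j < len(b)) or a[i] != b[j]:
                mask |= 1 << i  # position i mismatches under shift s
        combined = mask if combined is None else combined & mask
    # a bit survives only if the position mismatches under every shift
    return bin(combined).count("1") <= e
```

With e = 0 this reduces to a plain Hamming-distance check; allowing shifts is what lets the filter tolerate insertions and deletions without full alignment.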
An accurate tool for the fast generation of dark matter halo catalogues
NASA Astrophysics Data System (ADS)
Monaco, P.; Sefusatti, E.; Borgani, S.; Crocce, M.; Fosalba, P.; Sheth, R. K.; Theuns, T.
2013-08-01
We present a new parallel implementation of the PINpointing Orbit Crossing-Collapsed HIerarchical Objects (PINOCCHIO) algorithm, a quick tool, based on Lagrangian Perturbation Theory, for the hierarchical build-up of dark matter (DM) haloes in cosmological volumes. To assess its ability to predict halo correlations on large scales, we compare its results with those of an N-body simulation of a 3 h-1 Gpc box sampled with 2048³ particles taken from the MICE suite, matching the same seeds for the initial conditions. Thanks to the Fastest Fourier Transforms in the West (FFTW) libraries and to the relatively simple design, the code shows very good scaling properties. The CPU time required by PINOCCHIO is a tiny fraction (~1/2000) of that required by the MICE simulation. Varying some of the PINOCCHIO numerical parameters allows one to produce a universal mass function that lies in the range allowed by published fits, although it underestimates the MICE mass function of Friends-of-Friends (FoF) haloes in the high-mass tail. We compare the matter-halo and the halo-halo power spectra with those of the MICE simulation and find that these two-point statistics are well recovered on large scales. In particular, when catalogues are matched in number density, agreement within 10 per cent is achieved for the halo power spectrum. At scales k > 0.1 h Mpc-1, the inaccuracy of the Zel'dovich approximation in locating halo positions causes an underestimate of the power spectrum that can be modelled as a Gaussian factor with a damping scale of d = 3 h-1 Mpc at z = 0, decreasing at higher redshift. Finally, a remarkable match is obtained for the reduced halo bispectrum, showing a good description of non-linear halo bias. Our results demonstrate the potential of PINOCCHIO as an accurate and flexible tool for generating large ensembles of mock galaxy surveys, with interesting applications for the analysis of large galaxy redshift surveys.
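The small-scale correction described above amounts to multiplying the model halo power spectrum by a Gaussian factor. A minimal sketch, assuming the conventional form exp(-k²d²) with the quoted z = 0 damping scale (the abstract gives d but not the exact functional convention, so the form here is an assumption):

```python
import math

def gaussian_damping(k, d=3.0):
    """Assumed Gaussian damping factor for the halo power spectrum at z = 0,
    with k in h/Mpc and the damping scale d = 3 Mpc/h quoted in the abstract."""
    return math.exp(-((k * d) ** 2))
```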
NASA Astrophysics Data System (ADS)
Lima, F. M. S.
2009-11-01
In a previous work, O'Connell (Phys. Teach. 40, 24 (2002)) investigated the time dependence of the tension in the string of a simple pendulum oscillating within the small-angle regime. In spite of the approximation sin θ ≈ θ being accurate only for amplitudes below 7°, his experimental results are for a pendulum oscillating with an amplitude of about 18°, and therefore beyond the small-angle regime. This lapse may also be found in some textbooks, laboratory manuals, and the internet. Noting that the exact analytical solution for this problem involves the so-called Jacobi elliptic functions, which are unknown to most students (and even to many instructors), I use the sinusoidal approximate solution for the pendulum equation I introduced in a recent work (Eur. J. Phys. 29 1091 (2008)) to derive a simple trigonometric approximation for the tension valid for all possible amplitudes. This approximation is compared to both O'Connell's and the exact results, revealing that it is accurate enough for analysing large-angle pendulum experiments.
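For reference, the tension itself follows from energy conservation alone: for release from rest at amplitude θ0, T = mg(3 cos θ - 2 cos θ0), exactly, for any amplitude. This is the standard textbook result, not the paper's trigonometric approximation of the time dependence. A minimal check:

```python
import math

def tension(theta, theta0, m=1.0, g=9.81):
    """Exact string tension from energy conservation for release from rest at
    amplitude theta0 (angles in radians): T = m*g*(3*cos(theta) - 2*cos(theta0))."""
    return m * g * (3 * math.cos(theta) - 2 * math.cos(theta0))
```

At the turning points T = mg cos θ0, while at the bottom T = mg(3 - 2 cos θ0) > mg, which is why treating the tension as constant visibly fails in an 18° experiment.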
Zhao, Minghua; Liu, Yonghong; Feng, Yaning; Zhang, Ming; He, Lifeng; Suzuki, Kenji
2016-01-01
Accurate lung segmentation is an essential step in developing a computer-aided lung disease diagnosis system. However, because of the high variability of computerized tomography (CT) images, it remains a difficult task to accurately segment lung tissue in CT slices using a simple strategy. Motivated by this, a novel CT lung segmentation method based on the integration of multiple strategies is proposed in this paper. First, to suppress noise, the input CT slice was smoothed using the guided filter. Then, the smoothed slice was transformed into a binary image using an optimized threshold. Next, a region growing strategy was employed to extract thorax regions, and lung regions were segmented from the thorax regions using a seed-based random walk algorithm. The segmented lung contour was then smoothed and corrected with a curvature-based correction method on each axial slice. Finally, with the lung masks, the lung region was automatically segmented from the CT slice. The proposed method was validated on a CT database of 23 scans comprising 883 2D slices (roughly 38 slices per scan), by comparing it to a commonly used lung segmentation method. Experimental results show that the proposed method accurately segmented lung regions in CT slices.
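Of the steps above, the region-growing stage is the easiest to make concrete. A toy 4-connected version on a binary image (the paper's pipeline also includes guided filtering, random-walk segmentation, and curvature-based contour correction, none of which are sketched here):

```python
from collections import deque

import numpy as np

def region_grow(binary, seed):
    """Toy 4-connected region growing from a seed pixel on a binary image,
    in the spirit of the thorax-extraction step (illustrative only)."""
    h, w = binary.shape
    grown = np.zeros_like(binary, dtype=bool)
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        if 0 <= r < h and 0 <= c < w and binary[r, c] and not grown[r, c]:
            grown[r, c] = True
            queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return grown
```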
NASA Astrophysics Data System (ADS)
Deb, S.; Maitra, K.; Roychoudhuri, A.
1985-06-01
In the wake of the energy crisis, attempts are being made to develop a variety of energy conversion devices, such as solar cells. The single most important operational characteristic of a conversion element generating electricity is the V against I curve. Three points on this characteristic curve are of paramount importance: the short-circuit point, the open-circuit point, and the maximum power point. The objective of the present paper is to propose a new, simple, and accurate method of determining the maximum power point (Vm, Im) of the V against I characteristic, based on a geometrical interpretation. The method is general enough to be applicable to any energy conversion device having a nonlinear V against I characteristic. The paper also provides a method for determining the fill factor (FF), the series resistance (Rs), and the diode ideality factor (A) from a single set of connected observations.
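As a point of comparison, the maximum power point of a sampled V against I curve can always be located by direct search, and the fill factor then follows from its definition FF = VmIm/(VocIsc). This brute-force baseline (with a hypothetical diode-like toy curve) is not the geometrical construction the paper proposes:

```python
import numpy as np

def max_power_point(v, i):
    """Brute-force (Vm, Im) on a sampled V-I curve; not the paper's geometric method."""
    p = np.asarray(v) * np.asarray(i)
    k = int(np.argmax(p))
    return v[k], i[k]

# hypothetical nonlinear V-I curve with Isc = 1 (at V = 0) and Voc = 1 (where I = 0)
v = np.linspace(0.0, 1.0, 501)
i = 1.0 - (np.exp(5 * v) - 1) / (np.exp(5.0) - 1)
vm, im = max_power_point(v, i)
ff = vm * im / (1.0 * 1.0)  # FF = Vm*Im / (Voc*Isc)
```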
Heath, S; Bain, R; Andrews, A; Chida, S; Kitchen, S; Walters, M
2003-01-01
Objective: To reduce the time between arrival at hospital of a patient with acute myocardial infarction and administration of thrombolytic therapy (door to needle time) by the introduction of nurse initiated thrombolysis in the accident and emergency department. Methods: Two acute chest pain nurse specialists (ACPNS) based in A&E for 62.5 hours of the week were responsible for initiating thrombolysis in the A&E department. The service reverts to a "fast track" system outside of these hours, with the on call medical team prescribing thrombolysis on the coronary care unit. Prospectively gathered data were analysed for a nine month period and a head to head comparison made between the mean and median door to needle times for both systems of thrombolysis delivery. Results: Data from 91 patients were analysed; 43 (47%) were thrombolysed in A&E by the ACPNS and 48 (53%) were thrombolysed in the coronary care unit by the on call medical team. The ACPNS achieved a median door to needle time of 23 minutes (IQR=17 to 32) compared with 56 minutes (IQR=34 to 79.5) for the fast track. The proportion of patients thrombolysed in 30 minutes by the ACPNS and fast track system was 72% (31 of 43) and 21% (10 of 48) respectively (difference=51%, 95% confidence intervals 34% to 69%, p<0.05). Conclusion: Diagnosis of acute myocardial infarction and administration of thrombolysis by experienced cardiology nurses in A&E is a safe and effective strategy for reducing door to needle times, even when compared with a conventional fast track system. PMID:12954678
Majaj, Najib J; Hong, Ha; Solomon, Ethan A; DiCarlo, James J
2015-09-30
database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. PMID:26424887
Hong, Ha; Solomon, Ethan A.; DiCarlo, James J.
2015-01-01
database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. PMID:26424887
Zeb, Alam; Ullah, Fareed
2016-01-01
A simple and highly sensitive spectrophotometric method was developed for the determination of thiobarbituric acid reactive substances (TBARS) as a marker for lipid peroxidation in fried fast foods. The method uses the reaction of malondialdehyde (MDA) with thiobarbituric acid (TBA) in a glacial acetic acid medium. The method was precise, sensitive, and highly reproducible for the quantitative determination of TBARS, and the precision of the extraction and analytical procedures was very high compared to reported methods. The method was used to determine the TBARS contents of fried fast foods such as Shami kebab, samosa, fried bread, and potato chips. Shami kebab, samosa, and potato chips showed higher amounts of TBARS in the glacial acetic acid-water extraction system than in pure glacial acetic acid, and vice versa for the fried bread samples. The method can successfully be used for the determination of TBARS in other food matrices, especially in the quality control of food industries. PMID:27123360
Nanoparticle film deposition using a simple and fast centrifuge sedimentation method
NASA Astrophysics Data System (ADS)
Markelonis, Andrew R.; Wang, Joanna S.; Ullrich, Bruno; Wai, Chien M.; Brown, Gail J.
2015-04-01
Colloidal nanoparticles (NPs) can be deposited uniformly on flat, rough, or uneven substrate surfaces employing a standard centrifuge and common solvents. This method is suitable for depositing different types of nanoparticles on a variety of substrates, including glass, silicon wafers, aluminum foil, copper sheet, polymer film, plastic, and paper. The thickness of the films can be controlled by the amount of colloidal nanoparticle solution used in the preparation. The method offers a fast and simple procedure compared to other currently known nanoparticle deposition techniques for studying the optical properties of nanoparticle films.
Woods: A fast and accurate functional annotator and classifier of genomic and metagenomic sequences.
Sharma, Ashok K; Gupta, Ankit; Kumar, Sanjiv; Dhakan, Darshan B; Sharma, Vineet K
2015-07-01
Functional annotation of gigantic metagenomic datasets is one of the most time-consuming and computationally demanding tasks, and is currently a bottleneck for efficient analysis. The commonly used homology-based methods to functionally annotate and classify proteins are extremely slow. Therefore, to achieve faster and accurate functional annotation, we have developed an orthology-based functional classifier, 'Woods', using a combination of machine learning and similarity-based approaches. Woods displayed a precision of 98.79% on an independent genomic dataset, 96.66% on a simulated metagenomic dataset and >97% on two real metagenomic datasets. In addition, it performed >87 times faster than BLAST on the two real metagenomic datasets. Woods can be used as a highly efficient and accurate classifier with high-throughput capability, which facilitates its usability on large metagenomic datasets. PMID:25863333
Simple and fast screening of G-quadruplex ligands with electrochemical detection system.
Fan, Qiongxuan; Li, Chao; Tao, Yaqin; Mao, Xiaoxia; Li, Genxi
2016-11-01
Small molecules that facilitate and stabilize the formation of G-quadruplexes can be used for cancer treatment, because the G-quadruplex structure can inhibit the activity of telomerase, an enzyme over-expressed in many cancer cells. Therefore, there is considerable interest in developing a simple and high-performance method for screening small molecules that bind to G-quadruplexes. Here, we have designed a simple electrochemical approach to screen such ligands, based on the fact that the formation and stabilization of a G-quadruplex by a ligand may inhibit electron transfer of redox species to the electrode surface. As a proof-of-concept study, two classical G-quadruplex ligands, TMPyP4 and BRACO-19, are studied in this work, which demonstrates that the method is fast and robust and may be applied to screen G-quadruplex ligands for anticancer drug testing and design in the future. PMID:27591598
Sozet, Martin; Neauport, Jérôme; Lavastre, Eric; Roquin, Nadja; Gallais, Laurent; Lamaignère, Laurent
2016-02-15
Standard test protocols need several laser shots to assess the laser-induced damage threshold of optics and, consequently, require large test areas. Taking into account the dominant intrinsic mechanisms of laser damage in the sub-picosecond regime, we present a simple, fast, and accurate method based on correlating the fluence distribution with the damage morphology after only one shot on the optic. Several materials and components have been tested using this method and compared to results obtained with the classical 1/1 method. Both lead to the same threshold value, with accuracies of the same order of magnitude. Therefore, this mono-shot test could be a straightforward protocol to evaluate damage thresholds in the short-pulse regime. PMID:26872193
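The mono-shot principle, reading the threshold off a single shot by overlaying the beam's fluence map on the observed damage sites, can be sketched as follows. The function name and the synthetic Gaussian beam are hypothetical; the actual protocol correlates fluence with damage morphology in far more detail:

```python
import numpy as np

def one_shot_threshold(fluence, damaged):
    """Toy mono-shot estimate: the damage threshold is taken as the lowest
    local fluence at which damage was observed in a single shot."""
    return float(np.min(fluence[damaged]))

# synthetic Gaussian beam profile, with damage wherever fluence exceeds 2 (arb. units)
x = np.linspace(-3, 3, 601)
fluence = 5.0 * np.exp(-x**2)
damaged = fluence > 2.0
```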
Accurate estimate of α variation and isotope shift parameters in Na and Mg+
NASA Astrophysics Data System (ADS)
Sahoo, B. K.
2010-12-01
We present accurate calculations of fine-structure-constant variation coefficients and isotope shifts in Na and Mg+ using the relativistic coupled-cluster method. In our approach, we are able to determine the roles of various correlation effects explicitly to all orders in these calculations. Most of the results, especially for the excited states, are reported for the first time. Using the above results, it is possible to ascertain suitable anchor and probe lines in the considered systems for studies of a possible variation in the fine-structure constant.
Mammalian choices: combining fast-but-inaccurate and slow-but-accurate decision-making systems
Trimmer, Pete C; Houston, Alasdair I; Marshall, James A.R; Bogacz, Rafal; Paul, Elizabeth S; Mendl, Mike T; McNamara, John M
2008-01-01
Empirical findings suggest that the mammalian brain has two decision-making systems that act at different speeds. We represent the faster system using standard signal detection theory. We represent the slower (but more accurate) cortical system as the integration of sensory evidence over time until a certain level of confidence is reached. We then consider how two such systems should be combined optimally for a range of information linkage mechanisms. We conclude with some performance predictions that will hold if our representation is realistic. PMID:18611852
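The slower system described above is an evidence accumulator in the drift-diffusion style; a minimal sketch (the faster system would instead threshold a single noisy sample, as in signal detection theory):

```python
import math
import random

def integrate_to_bound(drift=1.0, noise=1.0, bound=1.0, dt=0.001, rng=None):
    """Accumulate noisy evidence until a confidence bound is reached; returns
    (choice, decision_time). With positive drift, the 'correct' choice is True."""
    rng = rng or random.Random()
    x, t = 0.0, 0.0
    while abs(x) < bound:
        x += drift * dt + noise * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
    return x > 0, t
```

Raising the bound trades speed for accuracy, which is exactly the trade-off at stake when the two systems are combined optimally.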
RapGene: a fast and accurate strategy for synthetic gene assembly in Escherichia coli
Zampini, Massimiliano; Stevens, Pauline Rees; Pachebat, Justin A.; Kingston-Smith, Alison; Mur, Luis A. J.; Hayes, Finbarr
2015-01-01
The ability to assemble DNA sequences de novo through efficient and powerful DNA fabrication methods is one of the foundational technologies of synthetic biology. Gene synthesis, in particular, has been considered the main driver for the emergence of this new scientific discipline. Here we describe RapGene, a rapid gene assembly technique which was successfully tested for the synthesis and cloning of both prokaryotic and eukaryotic genes through a ligation independent approach. The method developed in this study is a complete bacterial gene synthesis platform for the quick, accurate and cost effective fabrication and cloning of gene-length sequences that employ the widely used host Escherichia coli. PMID:26062748
A localized basis that allows fast and accurate second order Møller-Plesset calculations
Subotnik, Joseph E.; Head-Gordon, Martin
2004-10-27
We present a method for computing a basis of localized orthonormal orbitals (both occupied and virtual), in whose representation the Fock matrix is extremely diagonal-dominant. The existence of these orbitals is shown empirically to be sufficient for achieving highly accurate MP2 energies, calculated according to Kapuy's method. This method, which we abbreviate KMP2, involves a different partitioning of the n-electron Hamiltonian and scales at most quadratically, with the potential for linear scaling, in the number of electrons. As such, we believe the KMP2 algorithm presented here could be the basis of a viable approach to local correlation calculations.
Barros, Ana I R N A; Silva, Ana P; Gonçalves, Berta; Nunes, Fernando M
2010-03-01
A reliable method for the determination of total vitamin C must be able to resolve ascorbic acid (AA) and its epimer isoascorbic acid (IAA), and determine the sum of AA and its oxidized form, dehydroascorbic acid. AA and IAA are polar molecules with low retention times in conventional reversed-phase systems, and are hence difficult to resolve. Hydrophilic interaction chromatography using a TSKgel Amide-80 stationary phase with isocratic elution was successful in resolving the two epimers. The column was compatible with injections of high concentrations of metaphosphoric acid, tris(2-carboxyethyl)phosphine, and EDTA without drift of the baseline or retention times. Total AA and IAA were extracted, stabilized, and reduced in one step at 40 °C, using 5% m-phosphoric acid, 2 mM EDTA, and 2 mM tris(2-carboxyethyl)phosphine as the reducing agent. This simple, fast, and robust hydrophilic interaction chromatography-DAD method was applied to the analysis of food products, namely fruit juices, chestnut, and ham, and also to pharmaceutical and multivitamin tablets. Method validation was performed on the food products, including precision, accuracy, linearity, and the limits of detection and quantification (LOQ). The absence of matrix interferences was assessed by the standard addition method and Youden calibration. The method was fast, accurate, and precise, with a LOQ(AA) of 1.5 mg/L and LOQ(IAA) of 3.7 mg/L. The simple experimental procedure, completed in 1 h, the possibility of using IAA as an internal standard, and the low probability of artifacts are the major advantages of the proposed method for the routine determination of these compounds in large numbers of samples. PMID:20091158
Rubino, Stefano; Akhtar, Sultan; Leifer, Klaus
2016-02-01
We present a simple, fast method for the thickness characterization of suspended graphene/graphite flakes based on transmission electron microscopy (TEM). We derive an analytical expression for the intensity of the transmitted electron beam I0(t) as a function of the specimen thickness t (t<λ, where λ is the absorption constant for graphite), and show that in thin graphite crystals the transmitted intensity is a linear function of t. Furthermore, high-resolution (HR) TEM simulations are performed to obtain λ for a [001] zone-axis orientation, in a two-beam case, and in a low-symmetry orientation. Subsequently, HR images (used to determine t) and bright-field images (to measure I0(0) and I0(t)) were acquired to experimentally determine λ. The experimental value measured in the low-symmetry orientation matches the calculated value (λ=225±9 nm). The simulations also show that the linear approximation is valid up to a sample thickness of 3-4 nm regardless of the orientation, and up to several tens of nanometers for a low-symmetry orientation. Compared with standard techniques for the thickness determination of graphene/graphite, the method we propose has the advantage of being simple and fast, requiring only the acquisition of bright-field images. PMID:26915000
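Assuming simple exponential attenuation, I0(t) = I0(0)·exp(-t/λ), which reduces to the linear law quoted above when t ≪ λ, the thickness follows from two bright-field intensities. The exponential form is an assumption consistent with the abstract (the paper derives its own expression); λ = 225 nm is the measured low-symmetry value quoted above:

```python
import math

LAMBDA_NM = 225.0  # measured absorption constant for graphite (low-symmetry orientation)

def thickness_nm(i0, it, lam=LAMBDA_NM):
    """Invert the assumed attenuation law I(t) = I0 * exp(-t/lam) for t in nm."""
    return lam * math.log(i0 / it)

def thickness_linear_nm(i0, it, lam=LAMBDA_NM):
    """Linear approximation t ~ lam * (1 - I(t)/I0), valid for t of a few nm."""
    return lam * (1.0 - it / i0)
```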
Fast and accurate sensitivity analysis of IMPT treatment plans using Polynomial Chaos Expansion
NASA Astrophysics Data System (ADS)
Perkó, Zoltán; van der Voort, Sebastian R.; van de Water, Steven; Hartman, Charlotte M. H.; Hoogeman, Mischa; Lathouwers, Danny
2016-06-01
The highly conformal planned dose distribution achievable in intensity modulated proton therapy (IMPT) can be severely compromised by uncertainties in patient setup and proton range. While several robust optimization approaches have been presented to address this issue, appropriate methods to accurately estimate the robustness of treatment plans are still lacking. To fill this gap we present Polynomial Chaos Expansion (PCE) techniques, which are easily applicable and create a meta-model of the dose engine by approximating the dose in every voxel with multidimensional polynomials. This Polynomial Chaos (PC) model can be built in an automated fashion relatively cheaply, and subsequently it can be used to perform comprehensive robustness analysis. We adapted PC to provide, among other quantities, the expected dose, the dose variance, accurate probability distributions of dose-volume histogram (DVH) metrics (e.g. minimum tumor or maximum organ dose), exact bandwidths of DVHs, and a separation of the effects of random and systematic errors. We present the outcome of our verification experiments based on 6 head-and-neck (HN) patients, and exemplify the usefulness of PCE by comparing a robust and a non-robust treatment plan for a selected HN case. The results suggest that PCE is highly valuable for both research and clinical applications.
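A one-dimensional illustration of the meta-model idea: sample a hypothetical (here exactly quadratic) dose response at one voxel over a standardized setup-error variable, fit probabilists' Hermite polynomials (the PC basis for Gaussian inputs), and read the mean and variance directly off the coefficients instead of re-running the dose engine:

```python
import math

import numpy as np
from numpy.polynomial import hermite_e as He

rng = np.random.default_rng(0)
xi = rng.standard_normal(4000)        # standardized setup-error samples
dose = 60.0 - 0.5 * xi**2 + 0.1 * xi  # hypothetical dose response at one voxel

coef = He.hermefit(xi, dose, deg=4)   # least-squares fit in the Hermite basis
mean_pc = coef[0]                     # E[He_n(xi)] = 0 for n >= 1, so coef[0] is the mean
var_pc = sum(c**2 * math.factorial(n) for n, c in enumerate(coef) if n > 0)
```

Once the coefficients are known, every statistic above is a cheap algebraic read-off, which is what makes the PC surrogate so much cheaper than sampling the dose engine itself.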
The SPECIES and ORGANISMS Resources for Fast and Accurate Identification of Taxonomic Names in Text.
Pafilis, Evangelos; Frankild, Sune P; Fanini, Lucia; Faulwetter, Sarah; Pavloudi, Christina; Vasileiadou, Aikaterini; Arvanitidis, Christos; Jensen, Lars Juhl
2013-01-01
The exponential growth of the biomedical literature is making the need for efficient, accurate text-mining tools increasingly clear. The identification of named biological entities in text is a central and difficult task. We have developed an efficient algorithm and implementation of a dictionary-based approach to named entity recognition, which we here use to identify names of species and other taxa in text. The tool, SPECIES, is more than an order of magnitude faster than and as accurate as existing tools. The precision and recall were assessed both on an existing gold-standard corpus and on a new corpus of 800 abstracts, which were manually annotated after the development of the tool. The corpus comprises abstracts from journals selected to represent many taxonomic groups, which gives insights into which types of organism names are hard to detect and which are easy. Finally, we have tagged organism names in the entire Medline database and developed a web resource, ORGANISMS, that makes the results accessible to the broad community of biologists. The SPECIES software is open source and can be downloaded from http://species.jensenlab.org along with dictionary files and the manually annotated gold-standard corpus. The ORGANISMS web resource can be found at http://organisms.jensenlab.org. PMID:23823062
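Dictionary-based named entity recognition is, at heart, longest-match lookup of text spans against a name dictionary. A toy tagger conveying the idea (the real SPECIES implementation is heavily optimized and handles tokenization and orthographic variants far more carefully):

```python
def tag_names(text, dictionary):
    """Toy dictionary-based tagger: greedy longest match, up to 3 tokens."""
    hits, words = [], text.split()
    i = 0
    while i < len(words):
        for n in (3, 2, 1):
            candidate = " ".join(words[i:i + n])
            if candidate in dictionary:
                hits.append(candidate)
                i += n
                break
        else:
            i += 1  # no dictionary name starts at this token
    return hits
```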
Parente2: a fast and accurate method for detecting identity by descent
Rodriguez, Jesse M.; Bercovici, Sivan; Huang, Lin; Frostig, Roy; Batzoglou, Serafim
2015-01-01
Identity-by-descent (IBD) inference is the problem of establishing a genetic connection between two individuals through a genomic segment that is inherited by both individuals from a recent common ancestor. IBD inference is an important preceding step in a variety of population genomic studies, ranging from demographic studies to linking genomic variation with phenotype and disease. The problem of accurate IBD detection has become increasingly challenging with the availability of large collections of human genotypes and genomes: Given a cohort’s size, a quadratic number of pairwise genome comparisons must be performed. Therefore, computation time and the false discovery rate can also scale quadratically. To enable accurate and efficient large-scale IBD detection, we present Parente2, a novel method for detecting IBD segments. Parente2 is based on an embedded log-likelihood ratio and uses a model that accounts for linkage disequilibrium by explicitly modeling haplotype frequencies. Parente2 operates directly on genotype data without the need to phase data prior to IBD inference. We evaluate Parente2’s performance through extensive simulations using real data, and we show that it provides substantially higher accuracy compared to previous state-of-the-art methods while maintaining high computational efficiency. PMID:25273070
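The embedded log-likelihood ratio idea can be conveyed with a deliberately simplified, LD-free version on phased haplotypes. Parente2 itself models haplotype frequencies to capture LD and works directly on unphased genotypes; here `eps`, a hypothetical per-site error/mutation rate, and the site-independence assumption are both simplifications:

```python
import math

def ibd_llr(h1, h2, freqs, eps=0.01):
    """Toy per-segment LLR of IBD vs non-IBD for two haplotypes at independent
    biallelic sites; freqs holds the allele-1 frequency at each site."""
    llr = 0.0
    for a, b, p in zip(h1, h2, freqs):
        q = 1.0 - p
        p_match_random = p * p + q * q      # P(match | haplotypes unrelated)
        if a == b:
            llr += math.log((1 - eps) / p_match_random)
        else:
            llr += math.log(eps / (1 - p_match_random))
    return llr
```

A long run of matching sites drives the LLR strongly positive, which is what flags a shared segment; a threshold on the LLR then trades sensitivity against false discoveries.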
Fast, Accurate and Precise Mid-Sagittal Plane Location in 3D MR Images of the Brain
NASA Astrophysics Data System (ADS)
Bergo, Felipe P. G.; Falcão, Alexandre X.; Yasuda, Clarissa L.; Ruppert, Guilherme C. S.
Extraction of the mid-sagittal plane (MSP) is a key step for brain image registration and asymmetry analysis. We present a fast MSP extraction method for 3D MR images, based on automatic segmentation of the brain and on heuristic maximization of the cerebro-spinal fluid within the MSP. The method is robust to severe anatomical asymmetries between the hemispheres, caused by surgical procedures and lesions. The method is also accurate with respect to MSP delineations done by a specialist. The method was evaluated on 64 MR images (36 pathological, 20 healthy, 8 synthetic), and it found a precise and accurate approximation of the MSP in all of them, with a mean time of 60.0 seconds per image, a mean angular variation within the same image (precision) of 1.26°, and a mean angular difference from specialist delineations (accuracy) of 1.64°.
Fast Geometric Method for Calculating Accurate Minimum Orbit Intersection Distances (MOIDs)
NASA Astrophysics Data System (ADS)
Wiźniowski, T.; Rickman, H.
2013-06-01
We present a new method to compute Minimum Orbit Intersection Distances (MOIDs) for arbitrary pairs of heliocentric orbits and compare it with Giovanni Gronchi's algebraic method. Our procedure is numerical and iterative, and the MOID configuration is found by geometric scanning and tuning. A basic element is the meridional plane, used for initial scanning, which contains one of the objects and is perpendicular to the orbital plane of the other. Our method also relies on an efficient tuning technique in order to zoom in on the MOID configuration, starting from the first approximation found by scanning. We work with high accuracy and take special care to avoid the risk of missing the MOID, which is inherent to our type of approach. We demonstrate that our method is fast, reliable, and flexible. It is freely available, and its Fortran source code can be downloaded via our web page.
Unbounded Binary Search for a Fast and Accurate Maximum Power Point Tracking
NASA Astrophysics Data System (ADS)
Kim, Yong Sin; Winston, Roland
2011-12-01
This paper presents a technique for maximum power point tracking (MPPT) of a concentrating photovoltaic system using cell-level power optimization. Perturb and observe (P&O) has been the standard for MPPT, but it introduces a tradeoff between the tracking speed and the accuracy of the maximum power delivered. The P&O algorithm is not suitable for rapid environmental changes caused by partial shading and self-shading, because its tracking time is linear in the length of the voltage range. Some research has addressed fast tracking, but these methods come with internal ad hoc parameters. In this paper, by using the proposed unbounded binary search algorithm for the MPPT, the tracking time becomes a logarithmic function of the voltage search range, without ad hoc parameters.
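The logarithmic behavior can be sketched as follows: an unbounded (exponential) search first brackets the maximum by repeated doubling, then a binary search bisects on the sign of the local slope of the unimodal power-voltage curve. This is an illustrative sketch of the general idea, not the paper's cell-level implementation; the function names and step parameters are assumptions.

```python
def mppt_unbounded_binary(power, v_step=0.01, tol=1e-4):
    """Locate the voltage maximizing a unimodal power(v) curve."""
    # Phase 1: unbounded search - double the voltage until power starts to fall.
    lo, hi = 0.0, v_step
    while power(hi + v_step) > power(hi):
        lo, hi = hi, 2 * hi
    # Phase 2: binary search on the sign of the local slope within [lo, hi].
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if power(mid + tol) > power(mid):   # still rising: peak lies to the right
            lo = mid
        else:                               # falling: peak lies to the left
            hi = mid
    return 0.5 * (lo + hi)

# Toy unimodal P-V curve peaking at 17.5 V.
vmpp = mppt_unbounded_binary(lambda v: -(v - 17.5) ** 2 + 300.0)
print(round(vmpp, 2))  # → 17.5
```

Both phases cost a number of power measurements logarithmic in the ratio of the search range to the tolerance; a real controller would additionally clamp the doubling phase to the panel's open-circuit voltage.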
Fast and accurate Monte Carlo sampling of first-passage times from Wiener diffusion models
Drugowitsch, Jan
2016-01-01
We present a new, fast approach for drawing boundary crossing samples from Wiener diffusion models. Diffusion models are widely applied to model choices and reaction times in two-choice decisions. Samples from these models can be used to simulate the choices and reaction times they predict. These samples, in turn, can be utilized to adjust the models’ parameters to match observed behavior from humans and other animals. Usually, such samples are drawn by simulating a stochastic differential equation in discrete time steps, which is slow and leads to biases in the reaction time estimates. Our method instead exploits known expressions for first-passage time densities, which results in unbiased, exact samples and a hundred- to thousand-fold speed increase in typical situations. In its most basic form it is restricted to diffusion models with symmetric boundaries and non-leaky accumulation, but our approach can be extended to also handle asymmetric boundaries or to approximate leaky accumulation. PMID:26864391
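For contrast, the slow discrete-time baseline the abstract refers to can be sketched in a few lines: simulate the diffusion dX = μ·dt + σ·dW in small steps until a boundary at ±a is hit. The names and parameter values below are illustrative, not from the paper's code; the paper's contribution is to replace this loop with exact draws from known first-passage time densities.

```python
import math
import random

def simulate_first_passage(mu=0.5, sigma=1.0, a=1.0, dt=1e-3, rng=None):
    """Euler-Maruyama simulation of a Wiener diffusion until |X| reaches a.

    Returns (first-passage time, choice), where choice is +1 for the upper
    boundary and -1 for the lower one. The step size dt biases the recorded
    time upward, since crossings are only detected at grid points.
    """
    rng = rng or random.Random(0)
    x, t = 0.0, 0.0
    sd = sigma * math.sqrt(dt)  # standard deviation of one increment
    while abs(x) < a:
        x += mu * dt + rng.gauss(0.0, sd)
        t += dt
    return t, (1 if x >= a else -1)

# With positive drift, most trials should terminate at the upper boundary.
rts = [simulate_first_passage(rng=random.Random(i)) for i in range(200)]
upper = sum(1 for _, c in rts if c == 1) / len(rts)
print(upper > 0.5)
```

Each sample costs on the order of a/(μ·dt) loop iterations, which is exactly the overhead (and discretization bias) that exact first-passage sampling removes.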
Towards fast and accurate algorithms for processing fuzzy data: interval computations revisited
NASA Astrophysics Data System (ADS)
Xiang, Gang; Kreinovich, Vladik
2013-02-01
In many practical applications, we need to process data, e.g. to predict the future values of different quantities based on their current values. Often, the only information that we have about the current values comes from experts, and is described in informal ('fuzzy') terms like 'small'. To process such data, it is natural to use fuzzy techniques, techniques specifically designed by Lotfi Zadeh to handle such informal information. In this survey, we start by revisiting the motivation behind Zadeh's formulae for processing fuzzy data, and explain how the algorithmic problem of processing fuzzy data can be described in terms of interval computations (α-cuts). Many fuzzy practitioners claim 'I tried interval computations, they did not work' - meaning that they got estimates which are much wider than the desired α-cuts. We show that such statements are usually based on a (widely spread) misunderstanding - that interval computations simply mean replacing each arithmetic operation with the corresponding operation with intervals. We show that while such straightforward interval techniques indeed often lead to over-wide estimates, the current advanced interval computations techniques result in estimates which are much more accurate. We overview such advanced interval computations techniques, and show that by using them, we can efficiently and accurately process fuzzy data. We wrote this survey with three audiences in mind. First, we want fuzzy researchers and practitioners to understand the current advanced interval computations techniques and to use them to come up with faster and more accurate algorithms for processing fuzzy data. For this 'fuzzy' audience, we explain these current techniques in detail. Second, we also want interval researchers to better understand this important application area for their techniques. For this 'interval' audience, we want to explain where fuzzy techniques come from, what are possible variants of these techniques, and what are the
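The "straightforward interval techniques" criticized above can be demonstrated in a few lines: evaluating f(x) = x - x² on [0, 1] by blindly replacing each arithmetic operation with its interval version ignores that both occurrences of x are the same variable (the dependency problem), so the naive estimate is far wider than what even a simple algebraic rewriting achieves. The true range is [0, 0.25]; the code is an illustrative sketch, not from the survey.

```python
def i_sub(a, b):
    """Interval subtraction: [a0,a1] - [b0,b1] = [a0-b1, a1-b0]."""
    return (a[0] - b[1], a[1] - b[0])

def i_mul(a, b):
    """Interval multiplication: min/max over the four endpoint products."""
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))

x = (0.0, 1.0)
naive = i_sub(x, i_mul(x, x))            # x - x*x: treats the two x's as independent
tight = i_mul(x, i_sub((1.0, 1.0), x))   # x*(1 - x): one occurrence of x per factor
print(naive)  # → (-1.0, 1.0)
print(tight)  # → (0.0, 1.0)
```

The rewritten form is still wider than the true range [0, 0.25]; the advanced interval techniques the survey points to (centered forms, subdivision of the domain) are what close the remaining gap.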
A fast and accurate method to predict 2D and 3D aerodynamic boundary layer flows
NASA Astrophysics Data System (ADS)
Bijleveld, H. A.; Veldman, A. E. P.
2014-12-01
A quasi-simultaneous interaction method is applied to predict 2D and 3D aerodynamic flows. This method is suitable for offshore wind turbine design software, as it is a very accurate and computationally reasonably cheap method. This study shows the results for a NACA 0012 airfoil. The two applied solvers converge to the experimental values when the grid is refined. We also show that the eigenvalues remain positive in separation, thus avoiding the Goldstein singularity. In 3D we show a flow over a dent in which separation occurs. A rotating flat plate is used to show the applicability of the method to rotating flows. These capabilities indicate that the quasi-simultaneous interaction method is suitable for design methods for offshore wind turbine blades.
NASA Astrophysics Data System (ADS)
Blackman, Jonathan; Field, Scott E.; Galley, Chad R.; Szilágyi, Béla; Scheel, Mark A.; Tiglio, Manuel; Hemberger, Daniel A.
2015-09-01
Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spin-weighted spherical-harmonic ₋₂Yℓm waveform modes resolved by the NR code up to ℓ=8. We compare our surrogate model to effective one body waveforms from 50 M⊙ to 300 M⊙ for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases).
A fast and accurate PCA based radiative transfer model: Extension to the broadband shortwave region
NASA Astrophysics Data System (ADS)
Kopparla, Pushkar; Natraj, Vijay; Spurr, Robert; Shia, Run-Lie; Crisp, David; Yung, Yuk L.
2016-04-01
Accurate radiative transfer (RT) calculations are necessary for many earth-atmosphere applications, from remote sensing retrieval to climate modeling. A Principal Component Analysis (PCA)-based spectral binning method has been shown to provide an order of magnitude increase in computational speed while maintaining an overall accuracy of 0.01% (compared to line-by-line calculations) over narrow spectral bands. In this paper, we have extended the PCA method for RT calculations over the entire shortwave region of the spectrum from 0.3 to 3 microns. The region is divided into 33 spectral fields covering all major gas absorption regimes. We find that RT runtimes are shorter by factors of 10 to 100, while root-mean-square errors are of order 0.01%.
SU-E-T-373: A Motorized Stage for Fast and Accurate QA of Machine Isocenter
Moore, J; Velarde, E; Wong, J
2014-06-01
Purpose: Precision delivery of radiation dose relies on accurate knowledge of the machine isocenter under a variety of machine motions. This is typically determined by performing a Winston-Lutz test, consisting of imaging a known object at multiple gantry/collimator/table angles and ensuring that the maximum offset is within a specified tolerance. The first step in the Winston-Lutz test is careful placement of a ball bearing at the machine isocenter, as determined by repeated imaging and shifting until accurate placement has been achieved. Conventionally this is performed by adjusting a stage manually using vernier scales, which carries the limitation that each adjustment must be done inside the treatment room, with the risks of inaccurate adjustment of the scale and physical bumping of the table. It is proposed to use a motorized system controlled from outside the room to improve the required time and accuracy of these tests. Methods: The three-dimensional vernier scales are replaced by three motors with an accuracy of 1 micron and a range of 25.4 mm, connected via USB to a computer in the control room. Software was designed that automatically detects the motors, assigns them to the proper axes, and allows small shifts to be entered and performed. Input values match calculated offsets in magnitude and sign to reduce conversion errors. Speed of setup, number of iterations to setup, and accuracy of final placement are assessed. Results: Automatic BB placement required 2.25 iterations and 13 minutes on average, while manual placement required 3.76 iterations and 37.5 minutes. The average final XYZ offsets are 0.02 cm, 0.01 cm, and 0.04 cm for automatic setup and 0.04 cm, 0.02 cm, and 0.04 cm for manual setup. Conclusion: Automatic placement decreased time and repeat iterations for setup while improving placement accuracy. Automatic placement greatly reduces the time required to perform QA.
Schwörer, Magnus; Lorenzen, Konstantin; Mathias, Gerald; Tavan, Paul
2015-03-14
Recently, a novel approach to hybrid quantum mechanics/molecular mechanics (QM/MM) molecular dynamics (MD) simulations has been suggested [Schwörer et al., J. Chem. Phys. 138, 244103 (2013)]. Here, the forces acting on the atoms are calculated by grid-based density functional theory (DFT) for a solute molecule and by a polarizable molecular mechanics (PMM) force field for a large solvent environment composed of several 10³-10⁵ molecules as negative gradients of a DFT/PMM hybrid Hamiltonian. The electrostatic interactions are efficiently described by a hierarchical fast multipole method (FMM). Adopting recent progress of this FMM technique [Lorenzen et al., J. Chem. Theory Comput. 10, 3244 (2014)], which particularly entails a strictly linear scaling of the computational effort with the system size, and adapting this revised FMM approach to the computation of the interactions between the DFT and PMM fragments of a simulation system, here, we show how one can further enhance the efficiency and accuracy of such DFT/PMM-MD simulations. The resulting gain of total performance, as measured for alanine dipeptide (DFT) embedded in water (PMM) by the product of the gains in efficiency and accuracy, amounts to about one order of magnitude. We also demonstrate that the jointly parallelized implementation of the DFT and PMM-MD parts of the computation enables the efficient use of high-performance computing systems. The associated software is available online. PMID:25770527
Fast and accurate inference on gravitational waves from precessing compact binaries
NASA Astrophysics Data System (ADS)
Smith, Rory; Field, Scott E.; Blackburn, Kent; Haster, Carl-Johan; Pürrer, Michael; Raymond, Vivien; Schmidt, Patricia
2016-08-01
Inferring astrophysical information from gravitational waves emitted by compact binaries is one of the key science goals of gravitational-wave astronomy. In order to reach the full scientific potential of gravitational-wave experiments, we require techniques to mitigate the cost of Bayesian inference, especially as gravitational-wave signal models and analyses become increasingly sophisticated and detailed. Reduced-order models (ROMs) of gravitational waveforms can significantly reduce the computational cost of inference by removing redundant computations. In this paper, we construct the first reduced-order models of gravitational-wave signals that include the effects of spin precession, inspiral, merger, and ringdown in compact object binaries and that are valid for component masses describing binary neutron star, binary black hole, and mixed binary systems. This work utilizes the waveform model known as "IMRPhenomPv2." Our ROM enables the use of a fast reduced-order quadrature (ROQ) integration rule which allows us to approximate Bayesian probability density functions at a greatly reduced computational cost. We find that the ROQ rule can be used to speed-up inference by factors as high as 300 without introducing systematic bias. This corresponds to a reduction in computational time from around half a year to half a day for the longest duration and lowest mass signals. The ROM and ROQ rules are available with the main inference library of the LIGO Scientific Collaboration, LALInference.
Accurate and Fast Convergent Initial-Value Belief Propagation for Stereo Matching
Wang, Xiaofeng; Liu, Yiguang
2015-01-01
The belief propagation (BP) algorithm has some limitations, including poor performance at ambiguous edges and in textureless regions, and slow convergence speed. To address these problems, we present a novel algorithm that intrinsically improves both the accuracy and the convergence speed of BP. First, traditional BP generally consumes time due to numerous iterations. To reduce the number of iterations, inspired by the crucial importance of the initial value in nonlinear problems, a novel initial-value belief propagation (IVBP) algorithm is presented, which can greatly improve both convergence speed and accuracy. Second, the majority of the existing research on BP concentrates on the smoothness term or other energy terms, neglecting the significance of the data term. In this study, a self-adapting dissimilarity data term (SDDT) is presented to improve the accuracy of the data term, which incorporates an additional gradient-based measure into the traditional data term, with the weight determined by a robust measure-based control function. Finally, this study explores the effective combination of local methods and global methods. The experimental results demonstrate that our method performs well compared with state-of-the-art BP variants and simultaneously achieves better edge-preserving smoothing with fast convergence on the Middlebury and new 2014 Middlebury datasets. PMID:26349063
Automated system for fast and accurate analysis of SF6 injected in the surface ocean.
Koo, Chul-Min; Lee, Kitack; Kim, Miok; Kim, Dae-Ok
2005-11-01
This paper describes an automated sampling and analysis system for the shipboard measurement of dissolved sulfur hexafluoride (SF6) in surface marine environments into which SF6 has been deliberately released. This underway system includes a gas chromatograph associated with an electron capture detector, a fast and highly efficient SF6-extraction device, a global positioning system, and a data acquisition system based on Visual Basic 6.0/C 6.0. This work is distinct from previous studies in that it quantifies the efficiency of the SF6-extraction device and its carryover effect and examines the effect of surfactant on the SF6-extraction efficiency. Measurements can be continuously performed on seawater samples taken from a seawater line installed onboard a research vessel. The system runs on an hourly cycle during which one set of four SF6 standards is measured and SF6 derived from the seawater stream is subsequently analyzed for the rest of each 1 h period. This state-of-the-art system was successfully used to trace a water mass carrying Cochlodinium polykrikoides, which causes harmful algal blooms (HAB) in the coastal waters of southern Korea. The successful application of this analysis system in tracing the HAB-infected water mass suggests that the SF6 detection method described in this paper will improve the quality of future studies of biogeochemical processes in the marine environment. PMID:16294883
EZ-Rhizo: integrated software for the fast and accurate measurement of root system architecture.
Armengaud, Patrick; Zambaux, Kevin; Hills, Adrian; Sulpice, Ronan; Pattison, Richard J; Blatt, Michael R; Amtmann, Anna
2009-03-01
The root system is essential for the growth and development of plants. In addition to anchoring the plant in the ground, it is the site of uptake of water and minerals from the soil. Plant root systems show an astonishing plasticity in their architecture, which allows for optimal exploitation of diverse soil structures and conditions. The signalling pathways that enable plants to sense and respond to changes in soil conditions, in particular nutrient supply, are a topic of intensive research, and root system architecture (RSA) is an important and obvious phenotypic output. At present, the quantitative description of RSA is labour intensive and time consuming, even using the currently available software, and the lack of a fast RSA measuring tool hampers forward and quantitative genetics studies. Here, we describe EZ-Rhizo: a Windows-integrated and semi-automated computer program designed to detect and quantify multiple RSA parameters from plants growing on a solid support medium. The method is non-invasive, enabling the user to follow RSA development over time. We have successfully applied EZ-Rhizo to evaluate natural variation in RSA across 23 Arabidopsis thaliana accessions, and have identified new RSA determinants as a basis for future quantitative trait locus (QTL) analysis. PMID:19000163
RRTMGP: A fast and accurate radiation code for the next decade
NASA Astrophysics Data System (ADS)
Mlawer, E. J.; Pincus, R.; Wehe, A.; Delamere, J.
2015-12-01
Atmospheric radiative processes are key drivers of the Earth's climate and must be accurately represented in global circulation models (GCMs) to allow faithful simulations of the planet's past, present, and future. The radiation code RRTMG is widely utilized by global modeling centers for both climate and weather predictions, but it has become increasingly out-of-date. The code's structure is not well suited to the current generation of computer architectures, and its stored absorption coefficients are not consistent with the most recent spectroscopic information. We are developing a new broadband radiation code for the current generation of computational architectures. This code, called RRTMGP, will be a completely restructured and modern version of RRTMG. The new code preserves the strengths of the existing RRTMG parameterization, especially the high accuracy of the k-distribution treatment of absorption by gases, but the entire code is being rewritten to provide highly efficient computation across a range of architectures. Our redesign includes refactoring the code into discrete kernels corresponding to fundamental computational elements (e.g. gas optics), optimizing the code for operating on multiple columns in parallel, simplifying the subroutine interface, revisiting the existing gas optics interpolation scheme to reduce branching, and adding flexibility with respect to run-time choices of streams, need for consideration of scattering, aerosol and cloud optics, etc. The result of the proposed development will be a single, well-supported and well-validated code amenable to optimization across a wide range of platforms. Our main emphasis is on highly parallel platforms including Graphical Processing Units (GPUs) and Many-Integrated-Core processors (MICs), which experience shows can accelerate broadband radiation calculations by as much as a factor of fifty. RRTMGP will provide highly efficient and accurate radiative flux calculations for coupled global
Poulin, Eric; Racine, Emmanuel; Beaulieu, Luc; Binnekamp, Dirk
2015-03-15
Purpose: In high dose rate brachytherapy (HDR-B), current catheter reconstruction protocols are relatively slow and error prone. The purpose of this technical note is to evaluate the accuracy and the robustness of an electromagnetic (EM) tracking system for automated and real-time catheter reconstruction. Methods: For this preclinical study, a total of ten catheters were inserted in gelatin phantoms with different trajectories. Catheters were reconstructed using an 18G biopsy needle, used as an EM stylet and equipped with a miniaturized sensor, and the second-generation Aurora® Planar Field Generator from Northern Digital Inc. The Aurora EM system provides position and orientation values with precisions of 0.7 mm and 0.2°, respectively. Phantoms were also scanned using a μCT (GE Healthcare) and a Philips Big Bore clinical computed tomography (CT) system with spatial resolutions of 89 μm and 2 mm, respectively. Reconstructions using the EM stylet were compared to μCT and CT. To assess the robustness of the EM reconstruction, five catheters were reconstructed twice and compared. Results: Reconstruction time for one catheter was 10 s, leading to a total reconstruction time of under 3 min for a typical 17-catheter implant. When compared to the μCT, the mean EM tip identification error was 0.69 ± 0.29 mm, while the CT error was 1.08 ± 0.67 mm. The mean 3D distance error was found to be 0.66 ± 0.33 mm and 1.08 ± 0.72 mm for the EM and CT, respectively. EM 3D catheter trajectories were found to be more accurate. A maximum difference of less than 0.6 mm was found between successive EM reconstructions. Conclusions: The EM reconstruction was found to be more accurate and precise than the conventional methods used for catheter reconstruction in HDR-B. This approach can be applied to any type of catheters and applicators.
Salaices Avila, Manuel Alejandro; Breiter, Roman; Mott, Henry
2007-01-01
Solid-phase microextraction (SPME) with gas chromatography is to be used for assay of effluent liquid samples from soil column experiments associated with VOC fate/transport studies. One goal of the fate/transport studies is to develop accurate, highly reproducible column breakthrough curves for 1,2-cis-dichloroethylene (cis-DCE) and trichloroethylene (TCE) to better understand interactions with selected natural solid phases. For SPME, the influences of the sample equilibration time, extraction temperature, and the ratio of the volume of the sample bottle to that of the liquid sample (V(T)/V(w)) are the critical factors that could influence accuracy and precision of the measured results. Equilibrium between the gas phase and liquid phase was attained after 200 min of equilibration time. The temperature must be carefully controlled due to variation of both the Henry's constant (K(h)) and the fibre/gas phase distribution coefficient (K(fg)): K(h) decreases with decreasing temperature while K(fg) increases. Low V(T)/V(w) yields better sensitivity but results in analyte losses and negative bias of the resultant assay. High V(T)/V(w) yields reduced sensitivity, but analyte losses were found to be minimal, leading to better accuracy and reproducibility. A fast SPME method was achieved: 5 min for SPME extraction and 3.10 min for GC analysis. A linear calibration function in the gas phase was developed to analyse the breakthrough curve data, linear over the range 0.9-236 μg l⁻¹, with a detection limit lower than 5 μg l⁻¹. PMID:16844196
The fast and accurate 3D-face scanning technology based on laser triangle sensors
NASA Astrophysics Data System (ADS)
Wang, Jinjiang; Chang, Tianyu; Ge, Baozhen; Tian, Qingguo; Chen, Yang; Kong, Bin
2013-08-01
A laser triangle scanning method and the structure of a 3D-face measurement system are introduced. In the presented system, a line laser source was selected as the optical indicator so that one line is scanned at a time. A CCD image sensor was used to capture the image of the laser line modulated by the human face. The system parameters were obtained by calibration: the lens parameters of the imaging part were calibrated with a machine-vision method, and the triangulation structure parameters were calibrated using parallel-arranged fine wires. The CCD image sensor and line laser indicator were mounted on a linear motor carriage, which scans the laser line from the top of the head to the neck. Because the nose protrudes and the eyes are recessed, a single CCD image sensor cannot capture a complete image of the laser line; in this system, two CCD image sensors were placed symmetrically on the two sides of the laser indicator. In effect, this structure comprises two laser triangulation measurement units. Another novel design choice is that three laser indicators were arranged in order to reduce the scanning time, since it is difficult for a person to stay still for a long time. The 3D data were calculated after scanning, and further data processing includes 3D coordinate refinement, mesh calculation, and surface display. Experiments show that this system has a simple structure, high scanning speed, and good accuracy. The scanning range covers the whole head of an adult, and the typical resolution is 0.5 mm.
WaveQ3D: Fast and accurate acoustic transmission loss (TL) eigenrays, in littoral environments
NASA Astrophysics Data System (ADS)
Reilly, Sean M.
This study defines a new 3D Gaussian ray bundling acoustic transmission loss model in geodetic coordinates: latitude, longitude, and altitude. This approach is designed to lower the computational burden of computing accurate environmental effects in sonar training applications by eliminating the need to transform the ocean environment into a collection of Nx2D Cartesian radials. This approach also improves model accuracy by incorporating real-world 3D effects, like horizontal refraction, into the model. This study starts with derivations for a 3D variant of Gaussian ray bundles in this coordinate system. To verify the accuracy of this approach, acoustic propagation predictions of transmission loss, time of arrival, and propagation direction are compared to analytic solutions and other models. To validate the model's ability to predict real-world phenomena, predictions of transmission loss and propagation direction are compared to at-sea measurements in an environment where strong horizontal refraction effects have been observed. This model has been integrated into U.S. Navy active sonar training system applications, where testing has demonstrated its ability to improve transmission loss calculation speed without sacrificing accuracy.
Blackman, Jonathan; Field, Scott E; Galley, Chad R; Szilágyi, Béla; Scheel, Mark A; Tiglio, Manuel; Hemberger, Daniel A
2015-09-18
Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spin-weighted spherical-harmonic ₋₂Yℓm waveform modes resolved by the NR code up to ℓ=8. We compare our surrogate model to effective one body waveforms from 50 M⊙ to 300 M⊙ for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases). PMID:26430979
Palm computer demonstrates a fast and accurate means of burn data collection.
Lal, S O; Smith, F W; Davis, J P; Castro, H Y; Smith, D W; Chinkes, D L; Barrow, R E
2000-01-01
Manual biomedical data collection and entry of the data into a personal computer is time-consuming and can be prone to errors. The purpose of this study was to compare data entry on a hand-held computer with handwritten data collection followed by entry of the data into a personal computer. A Palm (3Com Palm IIIx; Santa Clara, Calif) computer with a custom menu-driven program was used for the entry and retrieval of burn-related variables. These variables were also used to create an identical sheet that was filled in by hand. Identical data were retrieved twice from 110 charts 48 hours apart and then used to create an Excel (Microsoft, Redmond, Wash) spreadsheet. One time the data were recorded by the Palm entry method; the other time the data were handwritten. The method of retrieval was alternated between the Palm system and the handwritten system every 10 charts. The total time required to log data and to generate an Excel spreadsheet was recorded and used as a study endpoint. The total time for the Palm method of data collection and downloading to a personal computer was 23% faster than hand recording with personal computer entry (P < 0.05), and 58% fewer errors were generated with the Palm method. The Palm is a faster and more accurate means of data collection than a handwritten technique. PMID:11194811
Spectroscopic Method for Fast and Accurate Group A Streptococcus Bacteria Detection.
Schiff, Dillon; Aviv, Hagit; Rosenbaum, Efraim; Tischler, Yaakov R
2016-02-16
Rapid and accurate detection of pathogens is paramount to human health. Spectroscopic techniques have been shown to be viable methods for detecting various pathogens. Enhanced methods of Raman spectroscopy can discriminate unique bacterial signatures; however, many of these require precise conditions and do not have in vivo replicability. Common biological detection methods such as rapid antigen detection tests have high specificity but do not have high sensitivity. Here we developed a new method of bacteria detection that is both highly specific and highly sensitive by combining the specificity of antibody staining and the sensitivity of spectroscopic characterization. Bacteria samples, treated with a fluorescent antibody complex specific to Streptococcus pyogenes, were volumetrically normalized according to their Raman bacterial signal intensity and characterized for fluorescence, eliciting a positive result for samples containing Streptococcus pyogenes and a negative result for those without. The normalized fluorescence intensity of the Streptococcus pyogenes gave a signal that is up to 16.4 times higher than that of other bacteria samples for bacteria stained in solution and up to 12.7 times higher in solid state. This method can be very easily replicated for other bacteria species using suitable antibody-dye complexes. In addition, this method shows viability for in vivo detection as it requires minute amounts of bacteria, low laser excitation power, and short integration times in order to achieve high signal. PMID:26752013
A fast and accurate simulator for the design of birdcage coils in MRI.
Giovannetti, Giulio; Landini, Luigi; Santarelli, Maria Filomena; Positano, Vincenzo
2002-11-01
Birdcage coils are extensively used in MRI systems because they provide a high signal-to-noise ratio and high radiofrequency magnetic field homogeneity, which guarantee a large field of view. The present article describes the implementation of a birdcage coil simulator, operating in high-pass and low-pass modes, based on a magnetostatic analysis of the coil. Compared with other simulators described in the literature, our simulator quickly obtains not only the dominant frequency mode but also the complete resonant frequency spectrum and the corresponding magnetic field pattern with high accuracy. The simulator accounts for all the inductances, including the mutual inductances between conductors. Moreover, the inductance calculation includes an accurate description of the birdcage geometry and the effect of a radiofrequency shield. Knowledge of all the resonance modes introduced by a birdcage coil is doubly useful during coil design: higher-order modes should be pushed far from the fundamental one, and, for particular applications, it is necessary to localize other resonant modes (such as the Helmholtz mode) jointly with the dominant mode. Knowledge of the magnetic field pattern allows a priori verification of the field homogeneity created inside the coil as the coil dimensions and, above all, the number of coil legs are varied. The coil is analyzed using the equivalent circuit method. Finally, the simulator is validated by implementing a low-pass birdcage coil and comparing our data with the literature. PMID:12413563
LinkImpute: Fast and Accurate Genotype Imputation for Nonmodel Organisms.
Money, Daniel; Gardner, Kyle; Migicovsky, Zoë; Schwaninger, Heidi; Zhong, Gan-Yuan; Myles, Sean
2015-11-01
Obtaining genome-wide genotype data from a set of individuals is the first step in many genomic studies, including genome-wide association and genomic selection. All genotyping methods suffer from some level of missing data, and genotype imputation can be used to fill in the missing data and improve the power of downstream analyses. Model organisms such as humans and cattle benefit from high-quality reference genomes and panels of reference genotypes that aid in imputation accuracy. In nonmodel organisms, however, genetic and physical maps often are either of poor quality or are completely absent, and there are no panels of reference genotypes available. There is therefore a need for imputation methods designed specifically for nonmodel organisms in which genomic resources are poorly developed and marker order is unreliable or unknown. Here we introduce LinkImpute, a software package based on a k-nearest neighbor genotype imputation method, LD-kNNi, which is designed for unordered markers. No physical or genetic maps are required, and it is designed to work on unphased genotype data from heterozygous species. It exploits the fact that markers useful for imputation often are not physically close to the missing genotype but rather distributed throughout the genome. Using genotyping-by-sequencing data from diverse and heterozygous accessions of apples, grapes, and maize, we compare LD-kNNi with several genotype imputation methods and show that LD-kNNi is fast, comparable in accuracy to the best existing methods, and exhibits the least bias in allele frequency estimates. PMID:26377960
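The LD-kNNi idea above, imputing an unordered missing genotype from the k individuals most similar at the markers in strongest LD with the target marker, can be sketched in a few lines. This is a simplified illustration of the concept, not the published LinkImpute implementation: the genotype coding (0/1/2 with -1 for missing), the squared-correlation LD proxy, and the parameter defaults are all assumptions.

```python
import numpy as np

def ld_knni_impute(G, i, j, l=5, k=3):
    """Impute genotype G[i, j] (coded 0/1/2, -1 = missing) in the spirit of
    LD-kNNi: pick the l markers most correlated (in LD) with marker j, then
    average the genotypes of the k nearest individuals measured over those
    markers only.  A simplified sketch, not the published algorithm."""
    n, m = G.shape
    # Crude LD proxy: squared correlation with marker j over non-missing rows.
    ld = np.zeros(m)
    for c in range(m):
        if c == j:
            continue
        mask = (G[:, j] >= 0) & (G[:, c] >= 0)
        if mask.sum() > 2 and G[mask, j].std() > 0 and G[mask, c].std() > 0:
            ld[c] = np.corrcoef(G[mask, j], G[mask, c])[0, 1] ** 2
    top = np.argsort(ld)[-l:]                     # l most-linked markers
    # Distance from individual i to every candidate over those markers.
    dists, cands = [], []
    for r in range(n):
        if r == i or G[r, j] < 0:
            continue
        mask = (G[i, top] >= 0) & (G[r, top] >= 0)
        if mask.sum() == 0:
            continue
        dists.append(np.abs(G[i, top][mask] - G[r, top][mask]).mean())
        cands.append(r)
    # Impute from the k nearest neighbors' genotypes at marker j.
    order = np.argsort(dists)[:k]
    votes = [G[cands[o], j] for o in order]
    return int(round(np.mean(votes)))
```

A production implementation would precompute the LD matrix once rather than rescanning it per missing call, and could weight the neighbor votes by inverse distance.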
Fast and accurate nonenzymatic copying of an RNA-like synthetic genetic polymer.
Zhang, Shenglong; Blain, J Craig; Zielinska, Daria; Gryaznov, Sergei M; Szostak, Jack W
2013-10-29
Recent advances suggest that it may be possible to construct simple artificial cells from two subsystems: a self-replicating cell membrane and a self-replicating genetic polymer. Although multiple pathways for the growth and division of model protocell membranes have been characterized, no self-replicating genetic material is yet available. Nonenzymatic template-directed synthesis of RNA with activated ribonucleotide monomers has led to the copying of short RNA templates; however, these reactions are generally slow (taking days to weeks) and highly error prone. N3'-P5'-linked phosphoramidate DNA (3'-NP-DNA) is similar to RNA in its overall duplex structure, and is attractive as an alternative to RNA because the high reactivity of its corresponding monomers allows rapid and efficient copying of all four nucleobases on homopolymeric RNA and DNA templates. Here we show that both homopolymeric and mixed-sequence 3'-NP-DNA templates can be copied into complementary 3'-NP-DNA sequences. G:T and A:C wobble pairing leads to a high error rate, but the modified nucleoside 2-thiothymidine suppresses wobble pairing. We show that the 2-thiothymidine modification increases both polymerization rate and fidelity in the copying of a 3'-NP-DNA template into a complementary strand of 3'-NP-DNA. Our results suggest that 3'-NP-DNA has the potential to serve as the genetic material of artificial biological systems. PMID:24101473
NASA Astrophysics Data System (ADS)
Kopparla, P.; Natraj, V.; Spurr, R. J. D.; Shia, R. L.; Yung, Y. L.
2014-12-01
Radiative transfer (RT) computations are an essential component of energy budget calculations in climate models. However, full treatment of RT processes is computationally expensive, prompting usage of 2-stream approximations in operational climate models. This simplification introduces errors of the order of 10% in the top of the atmosphere (TOA) fluxes [Randles et al., 2013]. Natraj et al. [2005, 2010] and Spurr and Natraj [2013] demonstrated the ability of a technique using principal component analysis (PCA) to speed up RT simulations. In the PCA method for RT performance enhancement, empirical orthogonal functions are developed for binned sets of inherent optical properties that possess some redundancy; costly multiple-scattering RT calculations are only done for those (few) optical states corresponding to the most important principal components, and correction factors are applied to approximate radiation fields. Here, we extend the PCA method to a broadband spectral region from the ultraviolet to the shortwave infrared (0.3-3 micron), accounting for major gas absorptions in this region. Comparisons between the new model, called the Universal Principal Component Analysis model for Radiative Transfer (UPCART), 2-stream models (such as those used in climate applications) and line-by-line RT models are performed for spectral radiances, spectral fluxes and broadband fluxes. Each of these is calculated at the TOA for several scenarios with varying aerosol types, extinction and scattering optical depth profiles, and solar and viewing geometries. We demonstrate that very accurate radiative forcing estimates can be obtained, with better than 1% accuracy in all spectral regions and better than 0.1% in most cases as compared to an exact line-by-line RT model. The model is comparable in speed to 2-stream models, potentially rendering UPCART useful for operational General Circulation Models (GCMs). The operational speed and accuracy of UPCART can be further
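The PCA performance-enhancement loop described above, with empirical orthogonal functions of binned optical properties, exact multiple-scattering calls made only along the leading principal components, and a correction factor applied to fast (e.g. 2-stream) results, can be sketched generically. All function and variable names here are illustrative assumptions, not the UPCART code, and this first-order correction omits the second-order terms used in the published method.

```python
import numpy as np

def pca_rt_correction(optical_props, fast_model, exact_model, n_pc=2):
    """Sketch of the PCA speed-up for radiative transfer.  `optical_props`
    is an (n_profiles, n_vars) array of binned inherent optical properties;
    `fast_model` (e.g. a 2-stream code) and `exact_model` (e.g. full
    multiple scattering) each map one profile to a radiance.  The exact
    model is called only 2*n_pc + 1 times."""
    X = optical_props
    mean = X.mean(axis=0)
    # EOFs via SVD of the centred data matrix.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    eofs = Vt[:n_pc]                       # leading principal components
    coords = (X - mean) @ eofs.T           # PC coordinates of each profile
    # Expensive calls only at the mean state and +/- one EOF step.
    f0 = np.log(exact_model(mean) / fast_model(mean))
    grads = []
    for e in eofs:
        fp = np.log(exact_model(mean + e) / fast_model(mean + e))
        fm = np.log(exact_model(mean - e) / fast_model(mean - e))
        grads.append(0.5 * (fp - fm))      # first-order central difference
    grads = np.asarray(grads)
    # Fast model everywhere, scaled by the interpolated correction factor.
    fast = np.array([fast_model(x) for x in X])
    return fast * np.exp(f0 + coords @ grads)
```

When the exact/fast radiance ratio is well captured by the leading components, the corrected fast results closely track the expensive model at a fraction of the cost.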
Fast and accurate approximate inference of transcript expression from RNA-seq data
Hensman, James; Papastamoulis, Panagiotis; Glaus, Peter; Honkela, Antti; Rattray, Magnus
2015-01-01
Motivation: Assigning RNA-seq reads to their transcript of origin is a fundamental task in transcript expression estimation. Where ambiguities in assignments exist due to transcripts sharing sequence, e.g. alternative isoforms or alleles, the problem can be solved through probabilistic inference. Bayesian methods have been shown to provide accurate transcript abundance estimates compared with competing methods. However, exact Bayesian inference is intractable and approximate methods such as Markov chain Monte Carlo and Variational Bayes (VB) are typically used. While providing a high degree of accuracy and modelling flexibility, standard implementations can be prohibitively slow for large datasets and complex transcriptome annotations. Results: We propose a novel approximate inference scheme based on VB and apply it to an existing model of transcript expression inference from RNA-seq data. Recent advances in VB algorithmics are used to improve the convergence of the algorithm beyond the standard Variational Bayes Expectation Maximization algorithm. We apply our algorithm to simulated and biological datasets, demonstrating a significant increase in speed with only very small loss in accuracy of expression level estimation. We carry out a comparative study against seven popular alternative methods and demonstrate that our new algorithm provides excellent accuracy and inter-replicate consistency while remaining competitive in computation time. Availability and implementation: The methods were implemented in R and C++, and are available as part of the BitSeq project at github.com/BitSeq. The method is also available through the BitSeq Bioconductor package. The source code to reproduce all simulation results can be accessed via github.com/BitSeq/BitSeqVB_benchmarking. Contact: james.hensman@sheffield.ac.uk or panagiotis.papastamoulis@manchester.ac.uk or Magnus.Rattray@manchester.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online
Fast, accurate and easy-to-pipeline methods for amplicon sequence processing
NASA Astrophysics Data System (ADS)
Antonielli, Livio; Sessitsch, Angela
2016-04-01
Next-generation sequencing (NGS) technologies have been established for years as an essential resource in microbiology. While metagenomic studies benefit from the continuously increasing throughput of the Illumina (Solexa) technology, the spread of third-generation sequencing technologies (PacBio, Oxford Nanopore) is taking whole-genome sequencing beyond the assembly of fragmented draft genomes, making it now possible to finish bacterial genomes even without short-read correction. Besides (meta)genomic analysis, next-generation amplicon sequencing remains fundamental for microbial studies. Amplicon sequencing of the 16S rRNA gene and the ITS (Internal Transcribed Spacer) is a well-established, widespread method for a multitude of purposes concerning the identification and comparison of archaeal/bacterial (16S rRNA gene) and fungal (ITS) communities occurring in diverse environments. Numerous pipelines have been developed to process NGS-derived amplicon sequences, among which Mothur, QIIME and USEARCH are the best known and most cited. The entire process, from initial raw sequence data through read error correction, paired-end read assembly, primer stripping, quality filtering, clustering, OTU taxonomic classification and BIOM table rarefaction, as well as alternative "normalization" methods, will be addressed. An effective and accurate strategy will be presented using state-of-the-art bioinformatic tools, and the example of a straightforward one-script pipeline for 16S rRNA gene or ITS MiSeq amplicon sequencing will be provided. Finally, instructions on how to automatically retrieve nucleotide sequences from NCBI and thus apply the pipeline to targets other than the 16S rRNA gene (Greengenes, SILVA) and ITS (UNITE) will be discussed.
NASA Astrophysics Data System (ADS)
Gómez-Pedrero, José A.; Rodríguez-Ibañez, Diego; Alonso, José; Quirgoa, Juan A.
2015-09-01
With the advent in recent years of techniques for the mass production of optical components made with surfaces of arbitrary form (also known as free-form surfaces), the parallel development of measuring systems adapted to this new kind of surface has become a real necessity for the industry. Profilometry is one of the preferred methods for assessing the quality of a surface and is widely employed in the optical fabrication industry for the quality control of its products. In this work, we present the design, development and assembly of a new profilometer with five axes of motion, specifically suited to the measurement of medium-size (up to 150 mm in diameter) free-form optical surfaces with sub-micrometer accuracy and low measuring times. The apparatus comprises three linear motorized positioners (X, Y, Z), plus an additional angular positioner and a tilt positioner, employed to locate accurately the surface to be measured, and a probe that can be either mechanical or optical, the optical probe being a confocal sensor based on chromatic aberration. Both optical and mechanical probes guarantee an accuracy better than one micrometer in the determination of the surface height, thus ensuring an accuracy in the surface curvatures of the order of 0.01 D or better. An original calibration procedure based on the measurement of a precision sphere has been developed to correct the perpendicularity error between the axes of the linear positioners. To reduce the measuring time of the profilometer, custom electronics based on an Arduino™ controller have been designed and produced to synchronize the five motorized positioners and the optical and mechanical probes, so that a medium-size surface (around 10 cm in diameter) with a dynamic range in curvature of around 10 D can be measured in less than 300 seconds (using three axes) while keeping the resolution in height and curvature within the figures mentioned above.
Clark, Alex M; Bunin, Barry A; Litterman, Nadia K; Schürer, Stephan C; Visser, Ubbo
2014-01-01
Bioinformatics and computer aided drug design rely on the curation of a large number of protocols for biological assays that measure the ability of potential drugs to achieve a therapeutic effect. These assay protocols are generally published by scientists in the form of plain text, which needs to be more precisely annotated in order to be useful to software methods. We have developed a pragmatic approach to describing assays according to the semantic definitions of the BioAssay Ontology (BAO) project, using a hybrid of machine learning based on natural language processing, and a simplified user interface designed to help scientists curate their data with minimum effort. We have carried out this work based on the premise that pure machine learning is insufficiently accurate, and that expecting scientists to find the time to annotate their protocols manually is unrealistic. By combining these approaches, we have created an effective prototype for which annotation of bioassay text within the domain of the training set can be accomplished very quickly. Well-trained annotations require single-click user approval, while annotations from outside the training set domain can be identified using the search feature of a well-designed user interface, and subsequently used to improve the underlying models. By drastically reducing the time required for scientists to annotate their assays, we can realistically advocate for semantic annotation to become a standard part of the publication process. Once even a small proportion of the public body of bioassay data is marked up, bioinformatics researchers can begin to construct sophisticated and useful searching and analysis algorithms that will provide a diverse and powerful set of tools for drug discovery researchers. PMID:25165633
NASA Astrophysics Data System (ADS)
Kopparla, P.; Natraj, V.; Shia, R. L.; Spurr, R. J. D.; Crisp, D.; Yung, Y. L.
2015-12-01
Radiative transfer (RT) computations form the engine of atmospheric retrieval codes. However, full treatment of RT processes is computationally expensive, prompting usage of two-stream approximations in current exoplanetary atmospheric retrieval codes [Line et al., 2013]. Natraj et al. [2005, 2010] and Spurr and Natraj [2013] demonstrated the ability of a technique using principal component analysis (PCA) to speed up RT computations. In the PCA method for RT performance enhancement, empirical orthogonal functions are developed for binned sets of inherent optical properties that possess some redundancy; costly multiple-scattering RT calculations are only done for those few optical states corresponding to the most important principal components, and correction factors are applied to approximate radiation fields. Kopparla et al. [2015, in preparation] extended the PCA method to a broadband spectral region from the ultraviolet to the shortwave infrared (0.3-3 micron), accounting for major gas absorptions in this region. Here, we apply the PCA method to some typical (exo-)planetary retrieval problems. Comparisons between the new model, called the Universal Principal Component Analysis Radiative Transfer (UPCART) model, two-stream models and line-by-line RT models are performed for spectral radiances, spectral fluxes and broadband fluxes. Each of these is calculated at the top of the atmosphere for several scenarios with varying aerosol types, extinction and scattering optical depth profiles, and stellar and viewing geometries. We demonstrate that very accurate radiance and flux estimates can be obtained, with better than 1% accuracy in all spectral regions and better than 0.1% in most cases, as compared to a numerically exact line-by-line RT model. The accuracy is enhanced when the results are convolved to typical instrument resolutions. The operational speed and accuracy of UPCART can be further improved by optimizing binning schemes and parallelizing the codes, work
Fast and Accurate Discovery of Degenerate Linear Motifs in Protein Sequences
Levy, Emmanuel D.; Michnick, Stephen W.
2014-01-01
Linear motifs mediate a wide variety of cellular functions, which makes their characterization in protein sequences crucial to understanding cellular systems. However, the short length and degenerate nature of linear motifs make their discovery a difficult problem. Here, we introduce MotifHound, an algorithm particularly suited for the discovery of small and degenerate linear motifs. MotifHound performs an exact and exhaustive enumeration of all motifs present in proteins of interest, including all of their degenerate forms, and scores the overrepresentation of each motif based on its occurrence in proteins of interest relative to a background (e.g., proteome) using the hypergeometric distribution. To assess MotifHound, we benchmarked it together with state-of-the-art algorithms. The benchmark consists of 11,880 sets of proteins from S. cerevisiae; in each set, we artificially spiked in one motif, varying three key parameters: (i) the number of occurrences, (ii) the length, and (iii) the number of degenerate or “wildcard” positions. The benchmark enabled the evaluation of the impact of these three properties on the performance of the different algorithms. The results showed that MotifHound and SLiMFinder were the most accurate in detecting degenerate linear motifs. Interestingly, MotifHound was 15 to 20 times faster at comparable accuracy and performed best in the discovery of highly degenerate motifs. We complemented the benchmark by an analysis of proteins experimentally shown to bind the FUS1 SH3 domain from S. cerevisiae. Using the full-length protein partners as sole information, MotifHound recapitulated most experimentally determined motifs binding to the FUS1 SH3 domain. Moreover, these motifs exhibited properties typical of SH3 binding peptides, e.g., high intrinsic disorder and evolutionary conservation, despite the fact that none of these properties were used as prior information. MotifHound is available (http://michnick.bcm.umontreal.ca or http
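The overrepresentation score described above is the standard hypergeometric tail probability: given that K of N background proteins contain a candidate motif, how surprising is it to find it in k of the n proteins of interest? A minimal sketch of that scoring step (the exhaustive motif enumeration itself is omitted):

```python
from math import comb

def motif_enrichment_pvalue(k, n, K, N):
    """Hypergeometric tail P(X >= k): the probability that a random sample
    of n proteins from a background of N, of which K contain the motif,
    yields at least k motif-containing proteins."""
    return sum(comb(K, x) * comb(N - K, n - x)
               for x in range(k, min(n, K) + 1)) / comb(N, n)
```

Small p-values flag motifs that occur in the proteins of interest far more often than chance sampling from the proteome would predict.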
Simple, fast codebook training algorithm by entropy sequence for vector quantization
NASA Astrophysics Data System (ADS)
Pang, Chao-yang; Yao, Shaowen; Qi, Zhang; Sun, Shi-xin; Liu, Jingde
2001-09-01
Traditional training algorithms for vector quantization, such as the LBG algorithm, use the convergence of the distortion sequence as the stopping condition. In this paper we present a novel training algorithm for vector quantization in which the convergence of the entropy sequence of each region sequence is employed as the stopping condition. Compared with the well-known LBG algorithm, it is simple, fast, and easy to understand and control. We tested the performance of the algorithm on the standard test images Lena and Barb. The results show that the PSNR difference between our algorithm and LBG is less than 0.1 dB, while its running time is a small fraction of that of LBG.
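The idea in this abstract can be sketched as LBG/k-means-style codebook updates with the entropy of the region occupancy, rather than the distortion, as the convergence test. The initialization and the tolerance below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def train_codebook_entropy(X, n_codes=4, tol=1e-4, max_iter=100):
    """Codebook training stopped when the entropy sequence of the region
    (cluster) occupancy converges.  X is an (n_vectors, dim) training set."""
    step = max(1, len(X) // n_codes)
    codebook = X[::step][:n_codes].astype(float)   # spread initial codes
    prev_entropy = None
    labels = np.zeros(len(X), dtype=int)
    for _ in range(max_iter):
        # Nearest-code assignment defines the regions.
        d = ((X[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # Entropy of the occupancy distribution over regions.
        p = np.bincount(labels, minlength=n_codes) / len(X)
        entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
        # Centroid update; an empty region keeps its old code vector.
        for j in range(n_codes):
            if (labels == j).any():
                codebook[j] = X[labels == j].mean(axis=0)
        if prev_entropy is not None and abs(entropy - prev_entropy) < tol:
            break
        prev_entropy = entropy
    return codebook, labels
```

The entropy of the occupancy distribution typically settles in very few iterations, which is one way the method can terminate earlier than a distortion-based test.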
SERF: A Simple, Effective, Robust, and Fast Image Super-Resolver From Cascaded Linear Regression.
Hu, Yanting; Wang, Nannan; Tao, Dacheng; Gao, Xinbo; Li, Xuelong
2016-09-01
Example learning-based image super-resolution techniques estimate a high-resolution image from a low-resolution input image by relying on high- and low-resolution image pairs. An important issue for these techniques is how to model the relationship between high- and low-resolution image patches: most existing complex models either generalize poorly to diverse natural images or require a lot of time for model training, while simple models have limited representation capability. In this paper, we propose a simple, effective, robust, and fast (SERF) image super-resolver. The proposed super-resolver is based on a series of linear least-squares functions, namely cascaded linear regression. It has few parameters to control the model and is thus able to adapt robustly to different image data sets and experimental settings. The linear least-squares functions lead to closed-form solutions and therefore achieve computationally efficient implementations. To effectively decrease the gap in high-frequency detail between the estimated high-resolution patches and the ground truth, we group image patches into clusters via the k-means algorithm and learn a linear regressor for each cluster at each iteration. The cascaded learning process gradually decreases this gap and simultaneously obtains the linear regression parameters. Experimental results show that the proposed method achieves superior performance with lower time consumption than the state-of-the-art methods. PMID:27323364
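The cascaded-linear-regression core described above, namely k-means clustering of the current estimates, one closed-form least-squares regressor per cluster, repeated over stages, can be sketched on plain feature vectors. This toy version uses hypothetical names and omits the patch extraction and high/low-resolution pairing of the actual method:

```python
import numpy as np

def fit_cascade(X, Y, n_clusters=2, n_stages=3):
    """Cascaded linear regression: per stage, cluster the current
    estimates Z, fit an affine least-squares map Z -> Y per cluster,
    then refine Z.  Returns the list of (centers, regressors) stages."""
    stages, Z = [], X.astype(float)
    for _ in range(n_stages):
        # Tiny k-means on the current estimates.
        centers = Z[np.linspace(0, len(Z) - 1, n_clusters, dtype=int)].copy()
        for _ in range(10):
            lab = ((Z[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
            for j in range(n_clusters):
                if (lab == j).any():
                    centers[j] = Z[lab == j].mean(0)
        # One affine regressor per cluster, in closed form.
        Za = np.hstack([Z, np.ones((len(Z), 1))])
        regs = []
        for j in range(n_clusters):
            idx = lab == j
            if idx.any():
                regs.append(np.linalg.lstsq(Za[idx], Y[idx], rcond=None)[0])
            else:
                regs.append(np.zeros((Za.shape[1], Y.shape[1])))
        stages.append((centers, regs))
        Z = np.vstack([Za[i] @ regs[lab[i]] for i in range(len(Z))])
    return stages

def apply_cascade(stages, x):
    """Run a new feature vector through the trained cascade."""
    z = np.asarray(x, float)
    for centers, regs in stages:
        j = ((centers - z) ** 2).sum(1).argmin()
        z = np.append(z, 1.0) @ regs[j]
    return z
```

Each stage's regressors are obtained in closed form, which is what makes the overall training computationally cheap compared with iterative optimizers.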
Ultra-fast single-file transport of a simple liquid beyond the collective behavior zone.
Su, Jiaye; Yang, Keda; Huang, Decai
2016-07-27
We use molecular dynamics simulations to analyze the single-file transport of a simple liquid through a narrow membrane channel. As the liquid-channel interaction decreases, the liquid flow exhibits a remarkable maximum owing to the competition between liquid-liquid and liquid-channel interactions. Surprisingly, this maximum flow is coupled to a sudden reduction in liquid occupancy, where a liquid particle moves through the channel alone at nearly constant velocity rather than in a collective motion mode. Further investigation of the encountered energy barrier suggests that this maximum flow is induced by particles with large instantaneous velocities (thermal fluctuations) that overcome the liquid-liquid and liquid-channel interaction barriers. Decreasing the liquid-channel interaction further leads to a decrease and ultimately a stabilization of the liquid flow, since the energy barrier increases and then becomes steady. These results suggest that the breakdown of collective behavior can serve as a new rule for achieving fast single-file transport, especially for simple or nonpolar liquids with relatively weak liquid-liquid interactions, and should thus be helpful for the design of high-flux nanofluidic devices. PMID:27460013
Wang, Hui; Liu, Tao; Qiu, Quan; Ding, Peng; He, Yan-Hui; Chen, Wei-Qing
2015-02-01
This study aimed to develop and validate a simple risk score for detecting individuals with impaired fasting glucose (IFG) among the Southern Chinese population. A sample of participants aged ≥20 years and without known diabetes from the 2006-2007 Guangzhou diabetes cross-sectional survey was used to develop separate risk scores for men and women. The participants completed a self-administered structured questionnaire and underwent simple clinical measurements. The risk scores were developed by multiple logistic regression analysis. External validation was performed based on three other studies: the 2007 Zhuhai rural population-based study, the 2008-2010 Guangzhou diabetes cross-sectional study and the 2007 Tibet population-based study. Performance of the scores was measured with the Hosmer-Lemeshow goodness-of-fit test and the ROC c-statistic. Age, waist circumference, body mass index and family history of diabetes were included in the risk score for both men and women, with the additional factor of hypertension for men. The ROC c-statistic was 0.70 for both men and women in the derivation samples. Risk scores of ≥28 for men and ≥18 for women showed respective sensitivity, specificity, positive predictive value and negative predictive value of 56.6%, 71.7%, 13.0% and 96.0% for men and 68.7%, 60.2%, 11.0% and 96.0% for women in the derivation population. The scores performed comparably in the Zhuhai rural sample and the 2008-2010 Guangzhou urban sample but poorly in the Tibet sample. The performance of pre-existing USA, Shanghai, and Chengdu risk scores was poorer in our population than in their original study populations. These results suggest that the simple IFG risk scores developed here can be generalized to Guangzhou city and nearby rural regions and may help primary health care workers identify individuals with IFG in their practice. PMID:25625405
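The sensitivity, specificity, PPV and NPV reported above all follow from applying a "score ≥ cutoff" rule (28 for men, 18 for women) to labeled individuals and counting the four cells of the confusion matrix. A minimal sketch of that evaluation step:

```python
def screen_performance(scores, has_ifg, cutoff):
    """Sensitivity, specificity, PPV and NPV of a 'score >= cutoff' rule.
    `scores` are integer risk scores, `has_ifg` the true 1/0 labels."""
    tp = sum(1 for s, y in zip(scores, has_ifg) if s >= cutoff and y)
    fp = sum(1 for s, y in zip(scores, has_ifg) if s >= cutoff and not y)
    fn = sum(1 for s, y in zip(scores, has_ifg) if s < cutoff and y)
    tn = sum(1 for s, y in zip(scores, has_ifg) if s < cutoff and not y)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return sensitivity, specificity, ppv, npv
```

Sweeping the cutoff over the observed score range and plotting sensitivity against 1 - specificity yields the ROC curve whose c-statistic (0.70 here) summarizes discrimination.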
NASA Astrophysics Data System (ADS)
Zargoosh, Kiomars; Ghayeb, Yousef; Azmoon, Behnaz; Qandalee, Mohammad
2013-08-01
A simple and fast procedure is described for evaluating the antioxidant activity of hydrophilic and hydrophobic compounds using the peroxyoxalate-chemiluminescence (PO-CL) reaction of bis(2,4,6-trichlorophenyl) oxalate (TCPO) with hydrogen peroxide in the presence of di(tert-butyl) 2-(tert-butylamino)-5-[(E)-2-phenyl-1-ethenyl]-3,4-furandicarboxylate as a highly fluorescent fluorophore. The IC50 values of well-known antioxidants were calculated and the results expressed as gallic equivalent antioxidant capacity (GEAC). The proposed method was found to be free of physical quenching and oxidant interference; for this reason, it can accurately determine the scavenging activity of antioxidants toward free radicals. Finally, the proposed method was applied to the evaluation of the antioxidant activity of complex real samples such as soybean oil and sunflower oil (hydrophobic samples) and honey (a hydrophilic sample). To the best of our knowledge, this is the first time that total antioxidant activity can be determined directly in soybean oil, sunflower oil and honey (rather than in their extracts) using PO-CL reactions.
Kirchmair, Johannes; Williamson, Mark J; Afzal, Avid M; Tyzack, Jonathan D; Choy, Alison P K; Howlett, Andrew; Rydberg, Patrik; Glen, Robert C
2013-11-25
FAst MEtabolizer (FAME) is a fast and accurate predictor of sites of metabolism (SoMs). It is based on a collection of random forest models trained on diverse chemical data sets of more than 20 000 molecules annotated with their experimentally determined SoMs. Using a comprehensive set of available data, FAME aims to assess metabolic processes from a holistic point of view. It is not limited to a specific enzyme family or species. Besides a global model, dedicated models are available for human, rat, and dog metabolism; specific prediction of phase I and II metabolism is also supported. FAME is able to identify at least one known SoM among the top-1, top-2, and top-3 highest ranked atom positions in up to 71%, 81%, and 87% of all cases tested, respectively. These prediction rates are comparable to or better than SoM predictors focused on specific enzyme families (such as cytochrome P450s), despite the fact that FAME uses only seven chemical descriptors. FAME covers a very broad chemical space, which together with its inter- and extrapolation power makes it applicable to a wide range of chemicals. Predictions take less than 2.5 s per molecule in batch mode on an Ultrabook. Results are visualized using Jmol, with the most likely SoMs highlighted. PMID:24219364
Shayesteh, Tavakol Heidari; Khajavi, Farzad; Khosroshahi, Abolfazl Ghafuri; Mahjub, Reza
2016-01-01
The determination of blood lead levels is the most useful indicator of the amount of lead absorbed by the human body. Various methods, such as atomic absorption spectroscopy (AAS), have been used for the detection of lead in biological fluids, but most rely on complicated, expensive instruments that require highly trained operators. In this study, a simple and accurate spectroscopic method for the determination of lead was developed and applied to the investigation of lead concentration in biological samples. A silica gel column was used to extract lead and eliminate interfering agents in human serum samples. The column was washed with deionized water, the pH was adjusted to 8.2 using phosphate buffer, and tartrate and cyanide solutions were added as masking agents. The lead content was extracted into an organic phase containing dithizone as a complexing agent; the resulting dithizone-Pb(II) complex was detected by visible spectrophotometry at 538 nm. The recovery was found to be 84.6 %. To validate the method, a calibration curve was constructed over several concentration levels and shown to be linear in the range of 0.01-1.5 μg/ml, with an R² regression coefficient of 0.9968 by statistical analysis of linear model validation. The largest error values were -5.80 % and +11.6 % for intra-day and inter-day measurements, respectively. The largest RSD values were 6.54 % and 12.32 % for intra-day and inter-day measurements, respectively. Further, the limit of detection (LOD) was calculated to be 0.002 μg/ml. The developed method was applied to determine the lead content in the serum of volunteer miners, and no statistically significant difference was found between the data provided by this novel method and the data obtained from previously studied AAS. PMID:26631397
Xie, Minzhu; Wang, Jianxin; Chen, Xin
2015-01-01
Phased haplotype information is crucial in our complete understanding of differences between individuals at the genetic level. Given a collection of DNA fragments sequenced from a homologous pair of chromosomes, the problem of single individual haplotyping (SIH) aims to reconstruct a pair of haplotypes using a computer algorithm. In this paper, we encode the information of aligned DNA fragments into a two-locus linkage graph and approach the SIH problem by vertex labeling of the graph. In order to find a vertex labeling with the minimum sum of weights of incompatible edges, we develop a fast and accurate heuristic algorithm. It starts with detecting error-tolerant components by an adapted breadth-first search. A proper labeling of vertices is then identified for each component, with which sequencing errors are further corrected and edge weights are adjusted accordingly. After contracting each error-tolerant component into a single vertex, the above procedure is iterated on the resulting condensed linkage graph until error-tolerant components are no longer detected. The algorithm finally outputs a haplotype pair based on the vertex labeling. Extensive experiments on simulated and real data show that our algorithm is more accurate and faster than five existing algorithms for single individual haplotyping. PMID:26671798
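The vertex-labeling idea above can be illustrated with a generic greedy sketch; this is not the authors' exact algorithm (which includes error-tolerant component detection, contraction and iteration), and the graph encoding (`(u, v, weight, same)` tuples) and function name are hypothetical. Each vertex receives the 0/1 label best supported by the weights of edges to already-labeled neighbours, visited in breadth-first order.

```python
from collections import deque, defaultdict

def bfs_label(n, edges):
    """Greedy BFS labeling of a two-locus linkage graph.

    edges: list of (u, v, weight, same), where same=True means the edge
    is compatible when u and v receive equal labels. Greedily minimizes
    the total weight of incompatible edges (a sketch, not exact)."""
    adj = defaultdict(list)
    for u, v, w, same in edges:
        adj[u].append((v, w, same))
        adj[v].append((u, w, same))
    labels = {}
    for start in range(n):          # handle disconnected components
        if start in labels:
            continue
        labels[start] = 0
        q = deque([start])
        while q:
            u = q.popleft()
            for v, _w, _same in adj[u]:
                if v in labels:
                    continue
                # weigh the evidence from already-labeled neighbours of v
                score = {0: 0.0, 1: 0.0}
                for x, wx, sx in adj[v]:
                    if x in labels:
                        score[labels[x] if sx else 1 - labels[x]] += wx
                labels[v] = 0 if score[0] >= score[1] else 1
                q.append(v)
    return [labels[i] for i in range(n)]
```

In the full method, a proper labeling of this kind is computed per error-tolerant component, and the condensed graph is relabeled iteratively.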
NASA Astrophysics Data System (ADS)
Zheng, Chang-Jun; Gao, Hai-Feng; Du, Lei; Chen, Hai-Bo; Zhang, Chuanzeng
2016-01-01
An accurate numerical solver is developed in this paper for eigenproblems governed by the Helmholtz equation and formulated through the boundary element method. A contour integral method is used to convert the nonlinear eigenproblem into an ordinary eigenproblem, so that eigenvalues can be extracted accurately by solving a set of standard boundary element systems of equations. In order to accelerate the solution procedure, the parameters affecting the accuracy and efficiency of the method are studied and two contour paths are compared. Moreover, a wideband fast multipole method is implemented with a block IDR(s) solver to reduce the overall solution cost of the boundary element systems of equations with multiple right-hand sides. The Burton-Miller formulation is employed to identify the fictitious eigenfrequencies of the interior acoustic problems with multiply connected domains. The actual effect of the Burton-Miller formulation on tackling the fictitious eigenfrequency problem is investigated and the optimal choice of the coupling parameter as α = i/k is confirmed through exterior sphere examples. Furthermore, the numerical eigenvalues obtained by the developed method are compared with the results obtained by the finite element method to show the accuracy and efficiency of the developed method.
Fraisier, V; Clouvel, G; Jasaitis, A; Dimitrov, A; Piolot, T; Salamero, J
2015-09-01
Multiconfocal microscopy gives a good compromise between fast imaging and reasonable resolution. However, the low intensity of live fluorescent emitters is a major limitation to this technique. Aberrations induced by the optical setup, especially the mismatch of the refractive index and the biological sample itself, distort the point spread function and further reduce the amount of detected photons. Altogether, this leads to impaired image quality, preventing accurate analysis of molecular processes in biological samples and imaging deep in the sample. The amount of detected fluorescence can be improved with adaptive optics. Here, we used a compact adaptive optics module (adaptive optics box for sectioning optical microscopy), which was specifically designed for spinning disk confocal microscopy. The module overcomes undesired anomalies by correcting for most of the aberrations in confocal imaging. Existing aberration detection methods require prior illumination, which bleaches the sample. To avoid multiple exposures of the sample, we established an experimental model describing the depth dependence of major aberrations. This model allows us to correct for those aberrations when performing a z-stack, gradually increasing the amplitude of the correction with depth. It does not require illumination of the sample for aberration detection, thus minimizing photobleaching and phototoxicity. With this model, we improved both signal-to-background ratio and image contrast. Here, we present comparative studies on a variety of biological samples. PMID:25940062
Dissociative simple ionization of two active electron diatomic systems by fast electron impact
NASA Astrophysics Data System (ADS)
Lahmidi, N.; Joulakian, B.
2005-01-01
The dissociative (e, 2e) ionization of diatomic hydrogen and lithium by fast electrons is studied theoretically as a vertical transition from the lowest vibrational and rotational level of the ground electronic state 1Σg+ of H2 (and Li2) to the first dissociative 2Σu+ state of H2+ (and Li2+). After verification of the perturbative procedure in the non-dissociative case, for which experimental and theoretical results exist, the variation of the multiply differential cross section of the dissociative ionization is studied in a variety of situations to show the particularities of this process and to motivate currently realizable complete experiments, which can detect the scattered and ejected electrons in coincidence with the bare detached nucleus. Our results show that the dynamically well understood behaviour in the case of simple (e, 2e) ionization breaks down in the dissociative case, because of the increasing influence of the electron-electron correlation between the two target electrons.
A fast and simple method for the polymerase chain reaction-based sexing of livestock embryos.
Tavares, K C S; Carneiro, I S; Rios, D B; Feltrin, C; Ribeiro, A K C; Gaudêncio-Neto, S; Martins, L T; Aguiar, L H; Lazzarotto, C R; Calderón, C E M; Lopes, F E M; Teixeira, L P R; Bertolini, M; Bertolini, L R
2016-01-01
Embryo sexing is a powerful tool for livestock producers because it allows them to manage their breeding stocks more effectively. However, the cost of supplies and reagents, and the need for trained professionals to biopsy embryos by micromanipulation restrict the worldwide use of the technology to a limited number of specialized groups. The aim of this study was to couple a fast and inexpensive DNA extraction protocol with a practical biopsy approach to create a simple, quick, effective, and dependable embryo sexing procedure. From a total of 1847 sheep and cattle whole embryos or embryo biopsies, the sexing efficiency was 100% for embryo biopsies, 98% for sheep embryos, and 90.2% for cattle embryos. We used a primer pair that was common to both species and only 10% of the total extracted DNA. The whole protocol takes only 2 h to perform, which suggests that the proposed procedure can be readily applied to field conditions. Moreover, in addition to embryo sexing, the procedure can be used for further analyses, such as genotyping and molecular diagnosis in preimplantation embryos. PMID:27050974
A fast and accurate method for computing the Sunyaev-Zel'dovich signal of hot galaxy clusters
NASA Astrophysics Data System (ADS)
Chluba, Jens; Nagai, Daisuke; Sazonov, Sergey; Nelson, Kaylea
2012-10-01
New-generation ground- and space-based cosmic microwave background experiments have ushered in discoveries of massive galaxy clusters via the Sunyaev-Zel'dovich (SZ) effect, providing a new window for studying cluster astrophysics and cosmology. Many of the newly discovered, SZ-selected clusters contain hot intracluster plasma (kTe ≳ 10 keV) and exhibit disturbed morphology, indicative of frequent mergers with large peculiar velocity (v ≳ 1000 km s-1). It is well known that for the interpretation of the SZ signal from hot, moving galaxy clusters, relativistic corrections must be taken into account, and in this work, we present a fast and accurate method for computing these effects. Our approach is based on an alternative derivation of the Boltzmann collision term which provides new physical insight into the sources of different kinematic corrections in the scattering problem. In contrast to previous works, this allows us to obtain a clean separation of kinematic and scattering terms. We also briefly mention additional complications connected with kinematic effects that should be considered when interpreting future SZ data for individual clusters. One of the main outcomes of this work is SZPACK, a numerical library which allows very fast and precise (≲0.001 per cent at frequencies hν ≲ 20kTγ) computation of the SZ signals up to high electron temperature (kTe ≃ 25 keV) and large peculiar velocity (v/c ≃ 0.01). The accuracy is well beyond the current and future precision of SZ observations and practically eliminates uncertainties which are usually overcome with more expensive numerical evaluation of the Boltzmann collision term. Our new approach should therefore be useful for analysing future high-resolution, multifrequency SZ observations as well as computing the predicted SZ effect signals from numerical simulations.
Wang, Guotai; Zhang, Shaoting; Xie, Hongzhi; Metaxas, Dimitris N; Gu, Lixu
2015-01-01
Shape priors play an important role in accurate and robust liver segmentation. However, liver shapes have complex variations and accurate modeling of liver shapes is challenging. Using large-scale training data can improve the accuracy but limits the computational efficiency. In order to obtain accurate liver shape priors without sacrificing efficiency when dealing with large-scale training data, we investigated an effective and scalable shape prior modeling method that is more applicable in clinical liver surgical planning systems. We employed Sparse Shape Composition (SSC) to represent liver shapes by an optimized sparse combination of shapes in the repository, without any assumptions on parametric distributions of liver shapes. To leverage large-scale training data and improve the computational efficiency of SSC, we also introduced a homotopy-based method to quickly solve the L1-norm optimization problem in SSC. This method takes advantage of the sparsity of shape modeling and solves the original optimization problem in SSC by continuously transforming it into a series of simplified problems whose solutions are fast to compute. When new training shapes arrive gradually, the homotopy strategy updates the optimal solution on the fly and avoids re-computing it from scratch. Experiments showed that SSC had high accuracy and efficiency in dealing with complex liver shape variations, excluding gross errors and preserving local details on the input liver shape. The homotopy-based SSC had high computational efficiency, and its runtime increased very slowly as the repository's capacity and vertex number rose to a large degree. When the repository's capacity was 10,000, with 2000 vertices on each shape, the homotopy method took only about 11.29 s to solve the optimization problem in SSC, nearly 2000 times faster than the interior point method. The dice similarity coefficient (DSC), average symmetric surface distance (ASD), and maximum symmetric surface distance measurement
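The L1-regularized sparse fitting at the heart of SSC can be illustrated with a generic solver. The sketch below uses ISTA (iterative soft thresholding), a simpler alternative to the paper's homotopy method, to find a sparse coefficient vector `x` so that the dictionary combination `D @ x` approximates a target `y`; all names here are illustrative, not from the paper's implementation.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the L1 norm: shrink each entry toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(D, y, lam=0.1, n_iter=500):
    """Minimize 0.5*||D x - y||^2 + lam*||x||_1 by iterative soft thresholding."""
    L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)         # gradient of the smooth data term
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

Homotopy methods instead trace the solution path as the regularization weight decreases, which is what lets the paper's approach warm-start when new shapes arrive rather than re-solving from scratch.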
NASA Technical Reports Server (NTRS)
Goodwin, Sabine A.; Raj, P.
1999-01-01
Progress to date towards the development and validation of a fast, accurate and cost-effective aeroelastic method for advanced parallel computing platforms such as the IBM SP2 and the SGI Origin 2000 is presented in this paper. The ENSAERO code, developed at the NASA-Ames Research Center has been selected for this effort. The code allows for the computation of aeroelastic responses by simultaneously integrating the Euler or Navier-Stokes equations and the modal structural equations of motion. To assess the computational performance and accuracy of the ENSAERO code, this paper reports the results of the Navier-Stokes simulations of the transonic flow over a flexible aeroelastic wing body configuration. In addition, a forced harmonic oscillation analysis in the frequency domain and an analysis in the time domain are done on a wing undergoing a rigid pitch and plunge motion. Finally, to demonstrate the ENSAERO flutter-analysis capability, aeroelastic Euler and Navier-Stokes computations on an L-1011 wind tunnel model including pylon, nacelle and empennage are underway. All computational solutions are compared with experimental data to assess the level of accuracy of ENSAERO. As the computations described above are performed, a meticulous log of computational performance in terms of wall clock time, execution speed, memory and disk storage is kept. Code scalability is also demonstrated by studying the impact of varying the number of processors on computational performance on the IBM SP2 and the Origin 2000 systems.
NASA Astrophysics Data System (ADS)
Jiang, Xikai; Karpeev, Dmitry; Li, Jiyuan; de Pablo, Juan; Hernandez-Ortiz, Juan; Heinonen, Olle
Boundary integrals arise in many electrostatic and magnetostatic problems. In computational modeling of these problems, although the integral is performed only on the boundary of a domain, its direct evaluation needs O(N²) operations, where N is the number of unknowns on the boundary. The O(N²) scaling impedes a wider usage of the boundary integral method in the scientific and engineering communities. We have developed a parallel computational approach that utilizes the Fast Multipole Method to evaluate the boundary integral in O(N) operations. To demonstrate the accuracy, efficiency, and scalability of our approach, we consider two test cases. In the first case, we solve a boundary value problem for a ferroelectric/ferromagnetic volume in free space using a hybrid finite element-boundary integral method. In the second case, we solve an electrostatic problem involving the polarization of dielectric objects in free space using the boundary element method. The results from the test cases show that our parallel approach can enable highly efficient and accurate simulations of mesoscale electrostatic/magnetostatic problems. Computing resources were provided by Blues, a high-performance cluster operated by the Laboratory Computing Resource Center at Argonne National Laboratory. Work at Argonne was supported by U. S. DOE, Office of Science under Contract No. DE-AC02-06CH11357.
Zabaleta, I; Bizkarguenaga, E; Bilbao, D; Etxebarria, N; Prieto, A; Zuloaga, O
2016-05-15
A simple and fast analytical method for the determination of fourteen perfluorinated compounds (PFCs), including three perfluoroalkylsulfonates (PFSAs), seven perfluorocarboxylic acids (PFCAs), three perfluorophosphonic acids (PFPAs) and perfluorooctanesulfonamide (PFOSA), and ten potential precursors, including four polyfluoroalkyl phosphates (PAPs), four fluorotelomer saturated acids (FTCAs) and two fluorotelomer unsaturated acids (FTUCAs), in different packaging materials was developed in the present work. In order to achieve this objective, the optimization of an ultrasonic probe-assisted extraction (UPAE) method was carried out before the analysis of the target compounds by liquid chromatography-triple quadrupole-tandem mass spectrometry (LC-QqQ-MS/MS). 7 mL of 1 % acetic acid in methanol and a 2.5-min single extraction cycle were sufficient for the extraction of all the target analytes. The optimized analytical method was validated in terms of recovery, precision and method detection limits (MDLs). Apparent recovery values after correction with the corresponding labeled standard were in the 69-103 % and 62-98 % range for samples fortified at 25 ng/g and 50 ng/g concentration levels, respectively, and MDL values in the 0.6-2.2 ng/g range were obtained. The developed method was applied to the analysis of plastic materials (milk bottle, muffin cup, pre-cooked food wrapper and coffee cup) and cardboard materials (microwave popcorn bag, greaseproof paper for French fries, cardboard pizza box and cinema cardboard popcorn box). To the best of our knowledge, this is the first method that describes the determination of fourteen PFCs and ten potential precursors in packaging materials. Moreover, 6:2 FTCA, 6:2 FTUCA and 5:3 FTCA analytes were detected for the first time in microwave popcorn bags. PMID:26992531
NASA Astrophysics Data System (ADS)
Maloney, James G.; Smith, Glenn S.; Scott, Waymond R., Jr.
1990-07-01
Two antennas are considered, a cylindrical monopole and a conical monopole. Both are driven through an image plane from a coaxial transmission line. Each of these antennas corresponds to a well-posed theoretical electromagnetic boundary value problem and a realizable experimental model. These antennas are analyzed by a straightforward application of the time-domain finite-difference method. The computed results for these antennas are shown to be in excellent agreement with accurate experimental measurements for both the time domain and the frequency domain. The graphical displays presented for the transient near-zone and far-zone radiation from these antennas provide physical insight into the radiation process.
Babić, S; Barišić, J; Malev, O; Klobučar, G; Popović, N Topić; Strunjak-Perović, I; Krasnići, N; Čož-Rakovac, R; Klobučar, R Sauerborn
2016-06-01
Sewage sludge (SS) is a complex organic by-product of wastewater treatment plants. Deposition of large amounts of SS can increase the risk of soil contamination. Therefore, there is an increasing need for fast and accurate assessment of the toxic potential of SS. Toxic effects of SS were tested on earthworm Eisenia fetida tissue, at the subcellular and biochemical level. Earthworms were exposed to depot sludge (DS) at a concentration ratio of 30 or 70 %, and to undiluted as well as 100- and 10-fold diluted active sludge (AS). The exposure to DS lasted 24/48 h (acute exposure), 96 h (semi-acute exposure) and 7/14/28 days (sub-chronic exposure); the exposure to AS lasted 48 h. Toxic effects were assessed by measuring multixenobiotic resistance mechanism (MXR) activity and lipid peroxidation levels, as well as by observing morphological alterations and behavioural changes. Biochemical markers confirmed the presence of MXR inhibitors in the tested AS and DS and highlighted the presence of SS-induced oxidative stress. The MXR inhibition and thiobarbituric acid reactive substance (TBARS) concentration in the whole earthworm body were higher after exposure to the lower concentration of the DS. Furthermore, histopathological changes revealed damage to the earthworm body wall tissue layers as well as to the epithelial and chloragogen cells in the typhlosole region. These changes were proportional to the SS concentration in the tested soils and to the exposure duration. The obtained results may contribute to the understanding of SS-induced toxic effects on terrestrial invertebrates exposed through soil contact and help identify the defence mechanisms of earthworms. PMID:26971513
Wee, Eugene J.H.; Wang, Yuling; Tsao, Simon Chang-Hao; Trau, Matt
2016-01-01
Sensitive and accurate identification of specific DNA mutations can influence clinical decisions. However, accurate diagnosis from limiting samples such as circulating tumour DNA (ctDNA) is challenging. Current approaches based on fluorescence, such as quantitative PCR (qPCR) and, more recently, droplet digital PCR (ddPCR), have limitations in multiplex detection and sensitivity, and require expensive specialized equipment. Herein we describe an assay capitalizing on the multiplexing and sensitivity benefits of surface-enhanced Raman spectroscopy (SERS) with the simplicity of standard PCR to address the limitations of current approaches. This proof-of-concept method could reproducibly detect as few as 0.1% (10 copies, CV < 9%) of target sequences, demonstrating the high sensitivity of the method. The method was then applied to specifically detect three important melanoma mutations in multiplex. Finally, the PCR/SERS assay was used to genotype cell lines and ctDNA from serum samples, with results subsequently validated by ddPCR. With ddPCR-like sensitivity and accuracy, yet the convenience of standard PCR, we believe this multiplex PCR/SERS method could find wide applications in both diagnostics and research. PMID:27446486
Bellazzini, R.; Brez, A.; Massai, M.M.; Torquati, M.R.
1985-02-01
A fast and accurate algorithm to calculate the charge and current induced on all the electrodes of wire chambers (MWPC, 'pad chambers', TPC, etc.) is presented. The algorithm is fully three-dimensional, so it is possible to calculate the induced charge on anode wires and cathode strips or 'pads' regardless of their orientation in space.
Reynolds, Andrew M.; Lihoreau, Mathieu; Chittka, Lars
2013-01-01
Pollinating bees develop foraging circuits (traplines) to visit multiple flowers in a manner that minimizes overall travel distance, a task analogous to the travelling salesman problem. We report on an in-depth exploration of an iterative improvement heuristic model of bumblebee traplining previously found to accurately replicate the establishment of stable routes by bees between flowers distributed over several hectares. The critical test for a model is its predictive power for empirical data for which the model has not been specifically developed, and here the model is shown to be consistent with observations from different research groups made at several spatial scales and using multiple configurations of flowers. We refine the model to account for the spatial search strategy of bees exploring their environment, and test several previously unexplored predictions. We find that the model accurately predicts: 1) the increasing propensity of bees to optimize their foraging routes with increasing spatial scale; 2) that bees cannot establish stable optimal traplines for all spatial configurations of rewarding flowers; 3) the observed trade-off between travel distance and prioritization of high-reward sites (with a slight modification of the model); 4) the temporal pattern with which bees acquire approximate solutions to travelling salesman-like problems over several dozen foraging bouts; 5) the instability of visitation schedules in some spatial configurations of flowers; 6) the observation that in some flower arrays, bees' visitation schedules differ strongly between individuals; 7) the searching behaviour that leads to efficient location of flowers and routes between them. Our model constitutes a robust theoretical platform to generate novel hypotheses and refine our understanding of how small-brained insects develop a representation of space and use it to navigate in complex and dynamic environments. PMID:23505353
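The core of an iterative improvement heuristic can be sketched generically: over repeated foraging bouts, propose a small change to the visit order and keep it only if the route gets shorter. The sketch below is a minimal 2-opt-style illustration under that assumption, not the authors' bee model (which additionally includes learning rules and spatial search); all names are hypothetical.

```python
import math
import random

def route_length(route, flowers, nest=(0.0, 0.0)):
    """Total length of a closed foraging circuit: nest -> flowers -> nest."""
    pts = [nest] + [flowers[i] for i in route] + [nest]
    return sum(math.dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1))

def improve_trapline(flowers, n_bouts=2000, seed=0):
    """Iterative improvement: mutate the visit order each 'bout' and keep
    the change only if it shortens the circuit."""
    rng = random.Random(seed)
    route = list(range(len(flowers)))
    rng.shuffle(route)                       # arbitrary initial circuit
    best = route_length(route, flowers)
    for _ in range(n_bouts):
        cand = route[:]
        i, j = sorted(rng.sample(range(len(cand)), 2))
        cand[i:j + 1] = reversed(cand[i:j + 1])   # 2-opt style segment reversal
        cand_len = route_length(cand, flowers)
        if cand_len < best:
            route, best = cand, cand_len
    return route, best
```

With flowers at three corners of a unit square and the nest at the fourth, the heuristic settles on the perimeter circuit of length 4.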
Ovchinnikov, Victor; Nam, Kwangho; Karplus, Martin
2016-08-25
A method is developed to obtain simultaneously free energy profiles and diffusion constants from restrained molecular simulations in diffusive systems. The method is based on low-order expansions of the free energy and diffusivity as functions of the reaction coordinate. These expansions lead to simple analytical relationships between simulation statistics and model parameters. The method is tested on 1D and 2D model systems; its accuracy is found to be comparable to or better than that of the existing alternatives, which are briefly discussed. An important aspect of the method is that the free energy is constructed by integrating its derivatives, which can be computed without need for overlapping sampling windows. The implementation of the method in any molecular simulation program that supports external umbrella potentials (e.g., CHARMM) requires modification of only a few lines of code. As a demonstration of its applicability to realistic biomolecular systems, the method is applied to model the α-helix ↔ β-sheet transition in a 16-residue peptide in implicit solvent, with the reaction coordinate provided by the string method. Possible modifications of the method are briefly discussed; they include generalization to multidimensional reaction coordinates [in the spirit of the model of Ermak and McCammon (Ermak, D. L.; McCammon, J. A. J. Chem. Phys. 1978, 69, 1352-1360)], a higher-order expansion of the free energy surface, applicability in nonequilibrium systems, and a simple test for Markovianity. In view of the small overhead of the method relative to standard umbrella sampling, we suggest its routine application in the cases where umbrella potential simulations are appropriate. PMID:27135391
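The key step of constructing the free energy by integrating its derivative can be shown in a few lines. This sketch assumes mean-force estimates dA/dξ are already available at a grid of window centers (in the paper they come from restrained-simulation statistics via the low-order expansions) and simply applies trapezoidal integration, which needs no overlapping sampling windows; the function name is illustrative.

```python
import numpy as np

def free_energy_profile(xi, dA_dxi):
    """Integrate mean-force estimates dA/dxi at reaction-coordinate grid
    points xi into a free-energy profile A(xi), anchored at A(xi[0]) = 0."""
    # trapezoidal rule: each segment contributes 0.5*(f_i + f_{i+1})*dx
    segments = 0.5 * (dA_dxi[1:] + dA_dxi[:-1]) * np.diff(xi)
    return np.concatenate(([0.0], np.cumsum(segments)))
```

For a linear mean force dA/dξ = 2ξ the trapezoidal rule is exact, recovering A(ξ) = ξ² up to the arbitrary additive constant.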
Goldoni, Luca; Beringhelli, Tiziana; Rocchia, Walter; Realini, Natalia; Piomelli, Daniele
2016-05-15
Absolute analyte quantification by nuclear magnetic resonance (NMR) spectroscopy is rarely pursued in metabolomics, even though this would allow researchers to compare results obtained using different techniques. Here we report on a new protocol that permits, after pH-controlled serum protein removal, the sensitive quantification (limit of detection [LOD] = 5-25 μM) of hydrophilic nutrients and metabolites in the extracellular medium of cells in cultures. The method does not require the use of databases and uses PULCON (pulse length-based concentration determination) quantitative NMR to obtain results that are significantly more accurate and reproducible than those obtained by CPMG (Carr-Purcell-Meiboom-Gill) sequence or post-processing filtering approaches. Three practical applications of the method highlight its flexibility under different cell culture conditions. We identified and quantified (i) metabolic differences between genetically engineered human cell lines, (ii) alterations in cellular metabolism induced by differentiation of mouse myoblasts into myotubes, and (iii) metabolic changes caused by activation of neurotransmitter receptors in mouse myoblasts. Thus, the new protocol offers an easily implementable, efficient, and versatile tool for the investigation of cellular metabolism and signal transduction. PMID:26898303
Gupta, V; Wang, Y; Romero, A; Heijmen, B; Hoogeman, M; Myronenko, A; Jordan, P
2014-06-01
Purpose: Various studies have demonstrated that online adaptive radiotherapy by real-time re-optimization of the treatment plan can improve organs-at-risk (OARs) sparing in the abdominal region. Its clinical implementation, however, requires fast and accurate auto-segmentation of OARs in CT scans acquired just before each treatment fraction. Auto-segmentation is particularly challenging in the abdominal region due to the frequently observed large deformations. We present a clinical validation of a new auto-segmentation method that uses fully automated non-rigid registration for propagating abdominal OAR contours from planning to daily treatment CT scans. Methods: OARs were manually contoured by an expert panel to obtain ground truth contours for repeat CT scans (3 per patient) of 10 patients. For the non-rigid alignment, we used a new non-rigid registration method that estimates the deformation field by optimizing a local normalized correlation coefficient with smoothness regularization. This field was used to propagate planning contours to repeat CTs. To quantify the performance of the auto-segmentation, we compared the propagated and ground truth contours using two widely used metrics: the Dice coefficient (Dc) and the Hausdorff distance (Hd). The proposed method was benchmarked against translation and rigid alignment based auto-segmentation. Results: For all organs, the auto-segmentation performed better than the baseline (translation) with an average processing time of 15 s per fraction CT. The overall improvements ranged from 2% (heart) to 32% (pancreas) in Dc, and 27% (heart) to 62% (spinal cord) in Hd. For liver, kidneys, gall bladder, stomach, spinal cord and heart, a Dc above 0.85 was achieved. Duodenum and pancreas were the most challenging organs, both showing relatively larger spreads and medians of 0.79 and 2.1 mm for Dc and Hd, respectively. Conclusion: Based on the achieved accuracy and computational time we conclude that the investigated auto-segmentation method is suitable for online adaptive radiotherapy.
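For reference, the two reported metrics can be computed as follows. This is a generic sketch (binary voxel masks for the Dice coefficient, boundary point sets for the symmetric Hausdorff distance), not the study's evaluation code.

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff_distance(pts_a, pts_b):
    """Symmetric Hausdorff distance between two point sets of shape (N, d)."""
    # pairwise distance matrix via broadcasting
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    # worst-case nearest-neighbour distance, taken in both directions
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

In practice, segmentation studies often report a percentile Hausdorff distance (e.g. the 95th) to reduce sensitivity to single outlier points.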
NASA Astrophysics Data System (ADS)
Suzuki, Yasushi; Chen, Guo-Ping; Manna, Uttam; Vij, Jagdish K.; Fukuda, Atsuo
2009-07-01
Simple matrix antiferroelectric liquid crystal displays (SM-AFLCDs) are prototyped to realize field sequential color (FSC) by utilizing the fast pretransitional response. The developed FSC-SM-AFLCDs will lead to the replacement of existing static driven FSC-SM-nematic-LCDs. Bright and clear color can be given to already market-acquired, black-and-white SM-LCDs of up to 1/64-duty and 3-in. diagonal size. To optimize the display performance, we analyze two important factors, the large pretransitional effect and the appropriate reset pulse, in terms of the interlayer interaction potential used in describing the field-induced transition of the antiferroelectric smectic phase.
PS-Analysis Of COSMO SAR Data Stacks Through A Fast And Simple Technique
NASA Astrophysics Data System (ADS)
Riva, Davide; D'Aria, Davide; Giudici, Davide; Guarnieri, Andrea Monti; Recchia, Andrea; Tagliani, Nicolas; Tebaldini, Stefano; Mancon, Simone
2012-01-01
This paper shows the results obtained with a Persistent Scatterers (PS) analysis of an X-band satellite interferometric dataset acquired by three sensors of the COSMO-SkyMed constellation over Milan, Italy. The accurate Persistent Scatterers analysis technique, which leads to the estimation of the PS velocity, phase and displacement, is briefly introduced. The PS analysis method performs the estimation by exploiting the differential atmospheric correction of the phases and then estimating the velocity through a DFT.
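The velocity-estimation step can be sketched as a periodogram-style (DFT-like) grid search over candidate velocities, picking the one that best flattens the interferometric phase history. This sketch assumes a linear deformation model and an X-band wavelength of about 3.1 cm; both the model, the constant, and the function name are assumptions for illustration, not details taken from the paper:

```python
import numpy as np

WAVELENGTH = 0.031  # assumed X-band wavelength in metres

def estimate_ps_velocity(phases, t_years, v_grid):
    """Pick the velocity v maximizing the temporal coherence
    |mean exp(j * (phi - 4*pi/lambda * v * t))| over a candidate grid."""
    coh = [np.abs(np.mean(np.exp(1j * (phases - 4 * np.pi / WAVELENGTH * v * t_years))))
           for v in v_grid]
    return v_grid[int(np.argmax(coh))]
```

In a real PS chain this search runs per scatterer after atmospheric phase screen removal, and the grid must stay within the velocity ambiguity set by the revisit interval.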
Kapitonov, Vladimir V.; Tempel, Sébastien; Jurka, Jerzy
2009-01-01
The rapidly growing number of sequenced genomes requires fast and accurate computational tools for analysis of different transposable elements (TEs). In this paper we focus on a rapid and reliable procedure for classification of autonomous non-LTR retrotransposons based on alignment and clustering of their reverse transcriptase (RT) domains. Typically, the RT domain protein sequences encoded by different non-LTR retrotransposons are similar to each other in terms of significant BLASTP E-values. Therefore, they can be easily detected by routine BLASTP searches of genomic DNA sequences coding for proteins similar to the RT domains of known non-LTR retrotransposons. However, detailed classification of non-LTR retrotransposons, i.e. their assignment to specific clades, is a slow and complex procedure that is not formalized or integrated as a standard set of computational methods and data. Here we describe a tool (RTclass1) designed for the fast and accurate automated assignment of novel non-LTR retrotransposons to known or novel clades using phylogenetic analysis of the RT domain protein sequences. RTclass1 classifies a particular non-LTR retrotransposon based on its RT domain in less than 10 minutes on a standard desktop computer and achieves 99.5% accuracy. RTclass1 works either as a standalone program installed locally or as a web-server that can be accessed remotely by uploading sequence data through the internet (http://www.girinst.org/RTphylogeny/RTclass1). PMID:19651192
NASA Astrophysics Data System (ADS)
Fendley, Paul; Hagendorf, Christian
2010-10-01
We conjecture exact and simple formulas for some physical quantities in two quantum chains. A classic result of this type is Onsager, Kaufman and Yang's formula for the spontaneous magnetization in the Ising model, subsequently generalized to the chiral Potts models. We conjecture that analogous results occur in the XYZ chain when the couplings obey JxJy + JyJz + JxJz = 0, and in a related fermion chain with strong interactions and supersymmetry. We find exact formulas for the magnetization and gap in the former, and the staggered density in the latter, by exploiting the fact that certain quantities are independent of finite-size effects.
Fast mode-hop-free acousto-optically tuned laser with a simple laser diode.
Bösel, André; Salewski, Klaus-Dieter; Kinder, Thomas
2007-07-01
A mode-hop-free tunable external-cavity Littrow diode laser with intracavity acousto-optic modulators (AOMs) has been built. The modes of the red laser diode, which has no special antireflection coating, are shifted by varying the injection current. The external resonator modes and the grating selectivity are independently electrically alterable by two AOMs. Thus, a tuning of the external resonator over up to 1900 GHz is possible. A precise computer control of the laser diode and AOMs allowed a single-mode tuning of the whole laser with a tuning range of 225 GHz in 250 s. Additionally, we demonstrated fast tuning over 90 GHz in 190 μs and a repetition rate of 2.5 kHz. PMID:17603626
Bosse, Jens B.; Tanneti, Nikhila S.; Hogue, Ian B.; Enquist, Lynn W.
2015-01-01
Dual-color live cell fluorescence microscopy of fast intracellular trafficking processes, such as axonal transport, requires rapid switching of illumination channels. Typical broad-spectrum sources necessitate the use of mechanical filter switching, which introduces delays between acquisition of different fluorescence channels, impeding the interpretation and quantification of highly dynamic processes. Light Emitting Diodes (LEDs), however, allow modulation of excitation light in microseconds. Here we provide a step-by-step protocol to enable any scientist to build a research-grade LED illuminator for live cell microscopy, even without prior experience with electronics or optics. We quantify and compare components, discuss our design considerations, and demonstrate the performance of our LED illuminator by imaging axonal transport of herpes virus particles with high temporal resolution. PMID:26600461
Zhang, Yuan; Zhou, Wei-E; Li, Shao-Hui; Ren, Zhi-Qin; Li, Wei-Qing; Zhou, Yu; Feng, Xue-Song; Wu, Wen-Jie; Zhang, Feng
2016-02-01
An analytical method based on ultra-high performance supercritical fluid chromatography (UHPSFC) with photo-diode array detection (PDA) has been developed to quantify 15 sulfonamides and their N4-acetylation metabolites in serum. Under the optimized gradient elution conditions, it took only 7 min to separate all 15 sulfonamides, and the critical pairs of each parent drug and metabolite were completely separated. Variables affecting the UHPSFC were optimized to get a better separation. The performance of the developed method was evaluated. The UHPSFC method allowed the baseline separation and determination of 15 sulfonamides and metabolites with limits of detection ranging from 0.15 to 0.35 μg/mL. Recoveries between 90.1 and 102.2% were obtained with satisfactory precision, since relative standard deviations were always below 3%. The proposed method is simple, accurate, time-saving and green, and it is applicable to the detection of a variety of sulfonamides in serum samples. PMID:26780846
Simple and Fast Method for Fabrication of Endoscopic Implantable Sensor Arrays
Tahirbegi, I. Bogachan; Alvira, Margarita; Mir, Mònica; Samitier, Josep
2014-01-01
Here we have developed a simple method for the fabrication of disposable implantable all-solid-state ion-selective electrodes (ISEs) in an array format without using complex fabrication equipment or clean room facilities. The electrodes were designed in a needle shape instead of as planar electrodes for full contact with the tissue. The needle-shaped platform comprises 12 metallic pins which were functionalized with conductive inks and ISE membranes. The modified microelectrodes were characterized with cyclic voltammetry, scanning electron microscopy (SEM), and optical interferometry. The surface area and roughness factor of each microelectrode were determined, and reproducible values were obtained for all the microelectrodes on the array. In this work, the microelectrodes were modified with membranes for the detection of pH and nitrate ions to prove the reliability of the fabricated sensor array platform adapted to an endoscope. PMID:24971473
Fusion of microlitre water-in-oil droplets for simple, fast and green chemical assays.
Chiu, S-H; Urban, P L
2015-08-01
A simple format for microscale chemical assays is proposed. It does not require the use of test tubes, microchips or microtiter plates. Microlitre-range (ca. 0.7-5.0 μL) aqueous droplets are generated by a commercial micropipette in a non-polar matrix inside a Petri dish. When two droplets are pipetted nearby, they spontaneously coalesce within seconds, priming a chemical reaction. Detection of the reaction product is accomplished by colorimetry, spectrophotometry, or fluorimetry using simple light-emitting diode (LED) arrays as the sources of monochromatic light, while chemiluminescence detection of the analytes present in single droplets is conducted in the dark. A smartphone camera is used as the detector. The limits of detection obtained for the developed in-droplet assays are estimated to be: 1.4 nmol (potassium permanganate by colorimetry), 1.4 pmol (fluorescein by fluorimetry), and 580 fmol (sodium hypochlorite by chemiluminescence detection). The format has successfully been used to monitor the progress of chemical and biochemical reactions over time with sub-second resolution. A semi-quantitative analysis of ascorbic acid using Tillman's reagent is presented. A few tens of individual droplets can be scanned in parallel. Rapid switching of the LED light sources with different wavelengths enables a spectral analysis of multiple droplets. Very little solid waste is produced. The assay matrix is readily recycled, thus the volume of liquid waste produced each time is also very small (typically, 1-10 μL per analysis). Various water-immiscible translucent liquids can be used as the reaction matrix: including silicone oil, 1-octanol as well as soybean cooking oil. PMID:26040707
Clisby, Nathan
2010-02-01
We introduce a fast implementation of the pivot algorithm for self-avoiding walks, which we use to obtain large samples of walks on the cubic lattice of up to 33×10^6 steps. Consequently the critical exponent ν for three-dimensional self-avoiding walks is determined to great accuracy; the final estimate is ν = 0.587597(7). The method can be adapted to other models of polymers with short-range interactions, on the lattice or in the continuum. PMID:20366773
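A naive (unaccelerated) pivot move for a 2-D square-lattice self-avoiding walk can be sketched as below; the paper's contribution is a much faster implementation of this same Monte Carlo move, which this O(N)-per-step sketch does not attempt to reproduce:

```python
import random

# The eight lattice symmetries of Z^2 (rotations and reflections)
SYMMETRIES = [
    lambda x, y: (x, y), lambda x, y: (-y, x), lambda x, y: (-x, -y),
    lambda x, y: (y, -x), lambda x, y: (x, -y), lambda x, y: (-x, y),
    lambda x, y: (y, x), lambda x, y: (-y, -x),
]

def pivot_step(walk):
    """One pivot move: choose a pivot site, apply a random lattice symmetry
    to the tail about it, and accept iff the result is still self-avoiding."""
    k = random.randrange(1, len(walk) - 1)
    sym = random.choice(SYMMETRIES)
    px, py = walk[k]
    new_tail = []
    for x, y in walk[k + 1:]:
        dx, dy = sym(x - px, y - py)
        new_tail.append((px + dx, py + dy))
    proposal = walk[:k + 1] + new_tail
    # Reject the move (keep the old walk) if any two sites coincide
    return proposal if len(set(proposal)) == len(proposal) else walk
```

Lattice symmetries preserve unit steps, so accepted moves keep the walk a valid self-avoiding walk of the same length.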
A Simple and Fast Semiautomatic Procedure for the Atomistic Modeling of Complex DNA Polyhedra.
Alves, Cassio; Iacovelli, Federico; Falconi, Mattia; Cardamone, Francesca; Morozzo Della Rocca, Blasco; de Oliveira, Cristiano L P; Desideri, Alessandro
2016-05-23
A semiautomatic procedure to build complex atomistic covalently linked DNA nanocages has been implemented in a user-friendly, free, and fast program. As a test set, seven different truncated DNA polyhedra, composed by B-DNA double helices connected through short single-stranded linkers, have been generated. The atomistic structures, including a tetrahedron, a cube, an octahedron, a dodecahedron, a triangular prism, a pentagonal prism, and a hexagonal prism, have been probed through classical molecular dynamics and analyzed to evaluate their structural and dynamical properties and to highlight possible building faults. The analysis of the simulated trajectories also allows us to investigate the role of the different geometries in defining nanocages stability and flexibility. The data indicate that the cages are stable and that their structural and dynamical parameters measured along the trajectories are slightly affected by the different geometries. These results demonstrate that the constraints imposed by the covalent links induce an almost identical conformational variability independently of the three-dimensional geometry and that the program presented here is a reliable and valid tool to engineer DNA nanostructures. PMID:27050675
Qiu, Dong; Zhang, Mingxing
2014-08-15
A simple and inclusive method is proposed for accurate determination of the habit plane between bicrystals in a transmission electron microscope. Whilst this method can be regarded as a variant of surface trace analysis, the major innovation lies in the improved accuracy and efficiency of foil thickness measurement, which involves a simple tilt of the thin foil about a permanent tilting axis of the specimen holder, rather than a cumbersome tilt about the surface trace of the habit plane. An experimental study has been done to validate this proposed method in determining the habit plane between lamellar α₂ plates and the γ matrix in a Ti–Al–Nb alloy. Both high accuracy (± 1°) and high precision (± 1°) have been achieved using the new method. The sources of the experimental errors as well as the applicability of this method are discussed. Some tips to minimise the experimental errors are also suggested. Highlights: • An improved algorithm is formulated to measure the foil thickness. • The habit plane can be determined with a single-tilt holder based on the new algorithm. • Accuracy and precision within ± 1° are achievable using the proposed method. • The data for multi-facet determination can be collected simultaneously.
NASA Astrophysics Data System (ADS)
Mahmoud, Ahmed M.; Stapleton, Phoebe A.; Frisbee, Jefferson C.; D'Audiffret, Alexandre; Mukdadi, Osama M.
2009-02-01
Measurement of flow-mediated vasodilatation (FMD) in brachial and other conduit arteries has become a common method to assess the status of endothelial function in vivo. In spite of the direct relationship between the arterial wall multi-component strains and FMD responses, direct measurement of the wall strain tensor due to FMD has not yet been reported in the literature. In this work, a noninvasive, direct, ultrasound-based strain tensor measuring (STM) technique is presented to assess changes in the mechanical parameters of the vascular wall during FMD. The STM technique utilizes only sequences of B-mode ultrasound images, and starts with segmenting a region of interest within the artery and providing the acquisition parameters. Then a block-matching technique is employed to measure the frame-to-frame local velocities. Displacements, diameter change, the multi-component strain tensor and strain rates are then calculated by integrating or differentiating velocity components. The accuracy of the STM algorithm was assessed using a phantom study, and was further validated using in vivo data from human subjects. Results indicate the validity and versatility of the STM algorithm, and describe how parameters other than the diameter change are sensitive to pre- and post-occlusion, which can then be used for accurate assessment of atherosclerosis.
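Once a displacement (or velocity) field has been estimated by block matching, the 2-D infinitesimal strain components follow from spatial gradients; a minimal NumPy sketch of this differentiation step (illustrative only, not the STM implementation):

```python
import numpy as np

def strain_tensor(ux, uy, dx=1.0, dy=1.0):
    """2-D infinitesimal strain from gridded displacement components:
    e_xx = du_x/dx, e_yy = du_y/dy, e_xy = (du_x/dy + du_y/dx) / 2.
    Arrays are indexed [row, col] = [y, x]."""
    dux_dy, dux_dx = np.gradient(ux, dy, dx)
    duy_dy, duy_dx = np.gradient(uy, dy, dx)
    return dux_dx, duy_dy, 0.5 * (dux_dy + duy_dx)
```

Applying the same operator to a velocity field instead of a displacement field yields strain rates, mirroring the integrate-or-differentiate choice described above.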
Fast and accurate single-cell RNA-seq analysis by clustering of transcript-compatibility counts.
Ntranos, Vasilis; Kamath, Govinda M; Zhang, Jesse M; Pachter, Lior; Tse, David N
2016-01-01
Current approaches to single-cell transcriptomic analysis are computationally intensive and require assay-specific modeling, which limits their scope and generality. We propose a novel method that compares and clusters cells based on their transcript-compatibility read counts rather than on the transcript or gene quantifications used in standard analysis pipelines. In the reanalysis of two landmark yet disparate single-cell RNA-seq datasets, we show that our method is up to two orders of magnitude faster than previous approaches, provides accurate and in some cases improved results, and is directly applicable to data from a wide variety of assays. PMID:27230763
A Simple Technique for Fast Digital Background Calibration of A/D Converters
NASA Astrophysics Data System (ADS)
Centurelli, Francesco; Monsurrò, Pietro; Trifiletti, Alessandro
2007-12-01
A modification of the background digital calibration procedure for A/D converters by Li and Moon is proposed, based on a method to improve the speed of convergence and the accuracy of the calibration. The procedure exploits a colored random sequence in the calibration algorithm, and can be applied both for narrowband input signals and for baseband signals, with a slight penalty on the analog bandwidth of the converter. By improving the signal-to-calibration-noise ratio of the statistical estimation of the error parameters, our proposed technique can be employed either to improve linearity or to make the calibration procedure faster. A practical method to generate the random sequence with minimum overhead with respect to a simple PRBS is also presented. Simulations have been performed on a 14-bit pipeline A/D converter in which the first 4 stages have been calibrated, showing a 15 dB improvement in THD and SFDR for the same calibration time with respect to the original technique.
A very simple and fast way to access and validate algorithms in reproducible research.
Stegmayer, Georgina; Pividori, Milton; Milone, Diego H
2016-01-01
The reproducibility of research in bioinformatics refers to the notion that new methodologies/algorithms and scientific claims have to be published together with their data and source code, in a way that other researchers may verify the findings to further build more knowledge on them. The replication and corroboration of research results are key to the scientific process, and many journals are discussing the matter nowadays, taking concrete steps in this direction. In this journal itself, a recent opinion note has appeared highlighting the increasing importance of this topic in bioinformatics and computational biology, inviting the community to further discuss the matter. In agreement with that article, we would like to propose here another step in that direction with a tool that allows the automatic generation of a web interface, named web-demo, directly from source code in a simple and straightforward way. We believe this contribution can help make research not only reproducible but also more easily accessible. A web-demo associated with a published paper can accelerate an algorithm's validation with real data, spreading its use with just a few clicks. PMID:26223526
A simple device for sub-aperture stitching of fast convex surfaces
NASA Astrophysics Data System (ADS)
Aguirre-Aguirre, D.; Izazaga-Pérez, R.; Villalobos-Mendoza, B.; Carrasco-Licea, E.; Granados-Agustin, F. S.; Percino-Zacarías, M. E.; Salazar-Morales, M. F.; Cruz-Zavala, E.
2015-10-01
In this work, we show a simple device that helps in the use of the sub-aperture stitching method for testing convex surfaces with a large diameter and a small f/#. This device was designed at INAOE's optical workshop to solve the problem that exists when a Newton interferometer and the sub-aperture stitching method are used. It is well known that if the f/# of a surface is small, the slopes over the surface increase rapidly, and this is critical for points far from the vertex. Therefore, if we use a reference master in the Newton interferometer to test a convex surface with a large diameter and an area far from the vertex, the master tends to slide, causing scratches over the surface under test. To solve this problem, a device for mounting the surface under test with two degrees of freedom, a rotating axis and a lever to tilt the surface, was designed. As a result, the optical axis of the master can be placed in a vertical position, avoiding undesired movements of the master and making the sub-aperture stitching easier. We describe the proposed design and the results obtained with this device.
Ratcliff, Laura E; Grisanti, Luca; Genovese, Luigi; Deutsch, Thierry; Neumann, Tobias; Danilov, Denis; Wenzel, Wolfgang; Beljonne, David; Cornil, Jérôme
2015-05-12
A fast and accurate scheme has been developed to evaluate two key molecular parameters (on-site energies and transfer integrals) that govern charge transport in organic supramolecular architecture devices. The scheme is based on a constrained density functional theory (CDFT) approach implemented in the linear-scaling BigDFT code that exploits a wavelet basis set. The method has been applied to model disordered structures generated by force-field simulations. The role of the environment on the transport parameters has been taken into account by building large clusters around the active molecules involved in the charge transfer. PMID:26574411
NASA Astrophysics Data System (ADS)
Catarino, I.; Bonfait, G.
2000-01-01
A simple, cryogen-free and inexpensive experimental setup for fast heat capacity measurements of solids from 15 to 300 K is presented. It consists of a thermally controlled cell, coupled to the cold finger of a Gifford-McMahon cryocooler, containing two cheap Pt thin-film resistors as thermometers: one is simultaneously the sample holder, the sample heater and the sample thermometer; the other resistor is used for temperature control. This calorimeter allows adiabatic specific heat measurements in the whole temperature range in less than one hour. The heat capacity results for a 106 mg copper sample match the tabulated values within 2% for T > 20 K. This system was used to measure the specific heat of UFe_xAl_{12-x} with sample masses as low as 26 mg without performance degradation.
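The underlying adiabatic relation is elementary: a heat pulse Q = P·Δt applied to a thermally isolated sample of mass m raises its temperature by ΔT, giving specific heat c = Q/(m·ΔT). A minimal sketch with illustrative numbers (the pulse values below are assumptions, not measurements from the paper):

```python
def specific_heat(power_w, pulse_s, delta_t_k, mass_g):
    """Specific heat from an adiabatic heat pulse: c = P*t / (m * dT),
    returned in J/(kg*K); mass is given in grams."""
    return power_w * pulse_s / (mass_g / 1000.0 * delta_t_k)
```

For example, a 1 mW pulse applied for 10 s to a 106 mg sample producing a 0.25 K rise gives roughly 377 J/(kg·K), close to copper's room-temperature value.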
NASA Astrophysics Data System (ADS)
Bucci, Ovidio M.; Gennarelli, Claudio; Savarese, Catello
1991-01-01
An optimal sampling interpolation algorithm which allows the accurate recovery of plane-rectangular near-field samples from knowledge of the plane-polar ones is developed. This enables the standard near-field-far-field (NF-FF) transformation, which takes full advantage of the FFT algorithm, to be applied to plane-polar scanning. The maximum allowable sample spacing is also rigorously derived, and it is shown that it can be significantly greater than λ/2 as the measurement plane moves away from the source. This allows a remarkable reduction of both measurement time and memory storage requirements. The sampling approach is compared with that based on the bivariate Lagrange interpolation (BLI) method. The sampling reconstruction agrees with the exact results significantly better than the BLI, in spite of the significantly lower number of required measurements.
Niklasson, Markus; Ahlner, Alexandra; Andresen, Cecilia; Marsh, Joseph A; Lundström, Patrik
2015-01-01
The process of resonance assignment is fundamental to most NMR studies of protein structure and dynamics. Unfortunately, the manual assignment of residues is tedious and time-consuming, and can represent a significant bottleneck for further characterization. Furthermore, while automated approaches have been developed, they are often limited in their accuracy, particularly for larger proteins. Here, we address this by introducing the software COMPASS, which, by combining automated resonance assignment with manual intervention, is able to achieve accuracy approaching that from manual assignments at greatly accelerated speeds. Moreover, by including the option to compensate for isotope shift effects in deuterated proteins, COMPASS is far more accurate for larger proteins than existing automated methods. COMPASS is an open-source project licensed under GNU General Public License and is available for download from http://www.liu.se/forskning/foass/tidigare-foass/patrik-lundstrom/software?l=en. Source code and binaries for Linux, Mac OS X and Microsoft Windows are available. PMID:25569628
NASA Astrophysics Data System (ADS)
Shoaib, Mahbubul Alam; Cho, Soo Gyeong; Choi, Cheol Ho
2014-04-01
We propose a new parameterization scheme, G4MP2-SFM, for the prediction of heats of formation by combining the Systematic Fragmentation Method (SFM) with the high-accuracy G4MP2 theory. In an application to imidazole derivatives, we found that the overall MAD and RMSD of the particular G4MP2-SFM(opt) scheme are 1.9 and 2.2 kcal/mol, respectively, demonstrating its high prediction accuracy. In addition, our parameterization scheme replaces the ab initio computations with a set of simple arithmetic operations, allowing fast predictions. Our new computational scheme can be of practical use in the high-throughput search for new high-energy materials.
Protocol: a fast and simple in situ PCR method for localising gene expression in plant tissue
2014-01-01
Background An important step in characterising the function of a gene is identifying the cells in which it is expressed. Traditional methods to determine this include in situ hybridisation, gene promoter-reporter fusions or cell isolation/purification techniques followed by quantitative PCR. These methods, although frequently used, can have limitations including their time-consuming nature, limited specificity, reliance upon well-annotated promoters, high cost, and the need for specialized equipment. In situ PCR is a relatively simple and rapid method that involves the amplification of specific mRNA directly within plant tissue whilst incorporating labelled nucleotides that are subsequently detected by immunohistochemistry. Another notable advantage of this technique is that it can be used on plants that are not easily genetically transformed. Results An optimised workflow for in-tube and on-slide in situ PCR is presented that has been evaluated using multiple plant species and tissue types. The protocol includes optimised methods for: (i) fixing, embedding, and sectioning of plant tissue; (ii) DNase treatment; (iii) in situ RT-PCR with the incorporation of DIG-labelled nucleotides; (iv) signal detection using colourimetric alkaline phosphatase substrates; and (v) mounting and microscopy. We also provide advice on troubleshooting and the limitations of using fluorescence as an alternative detection method. Using our protocol, reliable results can be obtained within two days from harvesting plant material. This method requires limited specialized equipment and can be adopted by any laboratory with a vibratome (vibrating blade microtome), a standard thermocycler, and a microscope. We show that the technique can be used to localise gene expression with cell-specific resolution. Conclusions The in situ PCR method presented here is highly sensitive and specific. It reliably identifies the cellular expression pattern of even highly homologous and low abundance
Cruz, Rebeca; Casal, Susana
2013-11-15
Vitamin E analysis in green vegetables is performed by an array of different methods, making it difficult to compare published data or choosing the adequate one for a particular sample. Aiming to achieve a consistent method with wide applicability, the current study reports the development and validation of a fast micro-method for quantification of vitamin E in green leafy vegetables. The methodology uses solid-liquid extraction based on the Folch method, with tocol as internal standard, and normal-phase HPLC with fluorescence detection. A large linear working range was confirmed, being highly reproducible, with inter-day precisions below 5% (RSD). Method sensitivity was established (below 0.02 μg/g fresh weight), and accuracy was assessed by recovery tests (>96%). The method was tested in different green leafy vegetables, evidencing diverse tocochromanol profiles, with variable ratios and amounts of α- and γ-tocopherol, and other minor compounds. The methodology is adequate for routine analyses, with a reduced chromatographic run (<7 min) and organic solvent consumption, and requires only standard chromatographic equipment available in most laboratories. PMID:23790900
Osicka, Josef; Ilčiková, Marketa; Popelka, Anton; Filip, Jaroslav; Bertok, Tomas; Tkac, Jan; Kasak, Peter
2016-06-01
A simple fabrication method for the preparation of surfaces able to switch between superhydrophobic and superhydrophilic states in a reversible and fast way is described. A self-assembled monolayer (SAM) consisting of a quaternary ammonium group with an aliphatic tail bearing a terminal thiol functionality was created on gold nano/microstructured and gold planar surfaces, respectively. A rough nano/microstructured surface was prepared by galvanic reaction on a silicon wafer. The reversible counterion exchange on the rough surface resulted in a switchable contact angle between <5° and 151°. The prewetted rough surface with Cl(-) as the counterion possesses an underwater superoleophobic character. The kinetics of the counterion exchanges suggests a long hydration process and strong ion pairing between the quaternary ammonium group and the perfluorooctanoate counterion. Moreover, a wettability gradient from superhydrophobic to superhydrophilic can be formed on the modified rough gold surface in a robust and simple way by passive incubation of the substrate in a counterion solution, and controlled by ionic strength. Furthermore, adsorption of gold nanoparticles onto the modified planar gold surface can be controlled to a high extent by the counterions present on the SAM layer. PMID:27181793
NASA Astrophysics Data System (ADS)
Huang, Guo-Jiao; Bai, Chao-Ying; Greenhalgh, Stewart
2013-09-01
The traditional grid/cell-based wavefront expansion algorithms, such as the shortest path algorithm, can only find the first arrivals or multiply reflected (or mode converted) waves transmitted from subsurface interfaces, but cannot calculate the other later reflections/conversions having a minimax time path. In order to overcome the above limitations, we introduce the concept of a stationary minimax time path of Fermat's Principle into the multistage irregular shortest path method. Here we extend it from Cartesian coordinates for a flat earth model to global ray tracing of multiple phases in a 3-D complex spherical earth model. The ray tracing results for 49 different kinds of crustal, mantle and core phases show that the maximum absolute traveltime error is less than 0.12 s and the average absolute traveltime error is within 0.09 s when compared with the AK135 theoretical traveltime tables for a 1-D reference model. Numerical tests in terms of computational accuracy and CPU time consumption indicate that the new scheme is an accurate, efficient and a practical way to perform 3-D multiphase arrival tracking in regional or global traveltime tomography.
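The core of any shortest path traveltime scheme is Dijkstra's algorithm on a grid graph, with edge times given by slowness times edge length. A minimal flat 2-D sketch with uniform slowness and 8-connected neighbours follows; the paper's multistage irregular method for spherical earth models and later phases is far more elaborate, and this sketch only illustrates the first-arrival principle:

```python
import heapq
import math

def first_arrivals(nx, ny, slowness, src, h=1.0):
    """Dijkstra first-arrival traveltimes from src on an nx-by-ny grid;
    8-connected neighbours, edge time = slowness * Euclidean edge length."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        t, (i, j) = heapq.heappop(pq)
        if t > dist.get((i, j), math.inf):
            continue  # stale queue entry
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                if di == dj == 0:
                    continue
                ni, nj = i + di, j + dj
                if 0 <= ni < nx and 0 <= nj < ny:
                    nt = t + slowness * h * math.hypot(di, dj)
                    if nt < dist.get((ni, nj), math.inf):
                        dist[(ni, nj)] = nt
                        heapq.heappush(pq, (nt, (ni, nj)))
    return dist
```

A multistage version would rerun such a sweep from an interface, chaining stages to track reflected or converted phases along stationary minimax time paths.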
ICE-COLA: towards fast and accurate synthetic galaxy catalogues optimizing a quasi-N-body method
NASA Astrophysics Data System (ADS)
Izard, Albert; Crocce, Martin; Fosalba, Pablo
2016-07-01
Next generation galaxy surveys demand the development of massive ensembles of galaxy mocks to model the observables and their covariances, which is computationally prohibitive using N-body simulations. COmoving Lagrangian Acceleration (COLA) is a novel method designed to make this feasible by following an approximate dynamics, but with up to three orders of magnitude speed-ups when compared to an exact N-body. In this paper, we investigate the optimization of the code parameters in the compromise between computational cost and recovered accuracy in observables such as two-point clustering and halo abundance. We benchmark those observables against a state-of-the-art N-body run, the MICE Grand Challenge simulation. We find that using 40 time-steps linearly spaced since z_i ≈ 20, and a force mesh resolution three times finer than that of the number of particles, yields a matter power spectrum within 1 per cent for k ≲ 1 h Mpc^-1 and a halo mass function within 5 per cent of those in the N-body. In turn, the halo bias is accurate within 2 per cent for k ≲ 0.7 h Mpc^-1 whereas, in redshift space, the halo monopole and quadrupole are within 4 per cent for k ≲ 0.4 h Mpc^-1. These results hold for a broad range in redshift (0 < z < 1) and for all halo mass bins investigated (M > 10^12.5 h^-1 M⊙). To bring the accuracy in clustering to the one per cent level, we study various methods that re-calibrate halo masses and/or velocities. We thus propose an optimized choice of COLA code parameters as a powerful tool to optimally exploit future galaxy surveys.
NASA Astrophysics Data System (ADS)
Zhang, Bin; Liang, Chunlei
2015-08-01
This paper presents a simple, efficient, and high-order accurate sliding-mesh interface approach to the spectral difference (SD) method. We demonstrate the approach by solving the two-dimensional compressible Navier-Stokes equations on quadrilateral grids. This approach is an extension of the straight mortar method originally designed for stationary domains [7,8]. Our sliding method creates curved dynamic mortars on sliding-mesh interfaces to couple rotating and stationary domains. On the nonconforming sliding-mesh interfaces, the related variables are first projected from cell faces to mortars to compute common fluxes, and then the common fluxes are projected back from the mortars to the cell faces to ensure conservation. To verify the spatial order of accuracy of the sliding-mesh spectral difference (SSD) method, both inviscid and viscous flow cases are tested. It is shown that the SSD method preserves the high-order accuracy of the SD method. Meanwhile, the SSD method is found to be very efficient in terms of computational cost. This novel sliding-mesh interface method is very suitable for parallel processing with domain decomposition. It can be applied to a wide range of problems, such as the hydrodynamics of marine propellers, the aerodynamics of rotorcraft, wind turbines, and oscillating wing power generators, etc.
Kakiyama, Genta; Muto, Akina; Takei, Hajime; Nittono, Hiroshi; Murai, Tsuyoshi; Kurosawa, Takao; Hofmann, Alan F.; Pandak, William M.; Bajaj, Jasmohan S.
2014-01-01
We have developed a simple and accurate HPLC method for the measurement of fecal bile acids using phenacyl derivatives of unconjugated bile acids, and have applied it to the measurement of fecal bile acids in cirrhotic patients. The HPLC method has the following steps: 1) lyophilization of the stool sample; 2) reconstitution in buffer and enzymatic deconjugation using cholylglycine hydrolase/sulfatase; 3) incubation with 0.1 N NaOH in 50% isopropanol at 60°C to hydrolyze esterified bile acids; 4) extraction of bile acids from particulate material using 0.1 N NaOH; 5) isolation of deconjugated bile acids by solid phase extraction; 6) formation of phenacyl esters by derivatization using phenacyl bromide; and 7) HPLC separation, measuring eluted peaks at 254 nm. The method was validated by showing that results obtained by HPLC agreed with those obtained by LC-MS/MS and GC-MS. We then applied the method to measuring the total fecal bile acid concentration and the bile acid profile in samples from 38 patients with cirrhosis (17 early, 21 advanced) and 10 healthy subjects. Bile acid concentrations were significantly lower in patients with advanced cirrhosis, suggesting impaired bile acid synthesis. PMID:24627129
Karton, A.; Martin, J. M. L.; Ruscic, B.; Chemistry; Weizmann Institute of Science
2007-06-01
A benchmark calculation of the atomization energy of the 'simple' organic molecule C2H6 (ethane) has been carried out by means of W4 theory. While the molecule is straightforward in terms of one-particle and n-particle basis set convergence, its large zero-point vibrational energy (and anharmonic correction thereto) and nontrivial diagonal Born-Oppenheimer correction (DBOC) represent interesting challenges. For the W4 set of molecules and C2H6, we show that DBOCs to the total atomization energy are systematically overestimated at the SCF level, and that the correlation correction converges very rapidly with the basis set. Thus, even at the CISD/cc-pVDZ level, useful correlation corrections to the DBOC are obtained. When applying such a correction, overall agreement with experiment was only marginally improved, but a more significant improvement is seen when hydrogen-containing systems are considered in isolation. We conclude that for closed-shell organic molecules, the greatest obstacles to highly accurate computational thermochemistry may not lie in the solution of the clamped-nuclei Schrödinger equation, but rather in the zero-point vibrational energy and the diagonal Born-Oppenheimer correction.
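For reference, the diagonal Born-Oppenheimer correction discussed here is conventionally defined as the expectation value of the nuclear kinetic energy operator over the electronic wave function (the standard textbook expression, not a formula taken from this abstract):

```latex
\Delta E_{\mathrm{DBOC}}
  = \langle \psi_{\mathrm{e}} \,|\, \hat{T}_{\mathrm{n}} \,|\, \psi_{\mathrm{e}} \rangle
  = -\sum_{A} \frac{1}{2 M_A}\,
    \langle \psi_{\mathrm{e}} \,|\, \nabla_A^{2} \,|\, \psi_{\mathrm{e}} \rangle ,
```

where M_A are the nuclear masses. The correction vanishes in the infinite-nuclear-mass limit and is largest for light nuclei, which is consistent with the pronounced role of hydrogen-containing systems noted above.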
Fragoso, Margarida; Kawrakow, Iwan; Faddegon, Bruce A.; Solberg, Timothy D.; Chetty, Indrin J.
2009-12-15
electron splitting. When DBS was used with electron splitting and combined with augmented charged particle range rejection, a technique recently introduced in BEAMnrc, relative efficiencies were ~420 (~253 min on a single processor) and ~175 (~58 min on a single processor) for the 10×10 and 40×40 cm² field sizes, respectively. Calculations of the Siemens Primus treatment head with VMC++ produced relative efficiencies of ~1400 (~6 min on a single processor) and ~60 (~4 min on a single processor) for the 10×10 and 40×40 cm² field sizes, respectively. BEAMnrc PHSP calculations with DBS alone, or with DBS combined with charged particle range rejection, were more efficient than the other efficiency-enhancing techniques used. Using VMC++, accurate simulations of the entire linac treatment head were performed within minutes on a single processor. Noteworthy differences (±1%-3%) in the mean energy, planar fluence, and angular and spectral distributions were observed with the NIST bremsstrahlung cross sections compared with those of Bethe-Heitler (the BEAMnrc default bremsstrahlung cross section). However, MC-calculated dose distributions in water phantoms (using combinations of VRTs/AEITs and cross-section data) agreed within 2% of measurements. Furthermore, MC-calculated dose distributions in a simulated water/air/water phantom, using NIST cross sections, were within 2% of the BEAMnrc Bethe-Heitler default case.
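The relative efficiencies quoted above can be read against the standard Monte Carlo efficiency definition ε = 1/(s²T), where s² is the variance of the scored quantity and T the CPU time; the relative efficiency of a variance-reduction technique is its ε divided by that of a reference run. This formula is an assumption here, since the abstract does not state it. A minimal sketch:

```python
def efficiency(variance, cpu_time):
    """Standard Monte Carlo efficiency: eps = 1 / (s^2 * T)."""
    return 1.0 / (variance * cpu_time)

def relative_efficiency(var_vrt, t_vrt, var_ref, t_ref):
    """Efficiency of a variance-reduction run relative to a reference run."""
    return efficiency(var_vrt, t_vrt) / efficiency(var_ref, t_ref)

# Illustrative numbers only: equal variance, a run 420x faster than reference
print(relative_efficiency(1.0, 1.0, 1.0, 420.0))  # 420.0
```

A relative efficiency of ~420 thus corresponds, at fixed variance, to roughly a 420-fold reduction in CPU time for the same statistical uncertainty.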