Mackie, David M; Jahnke, Justin P; Benyamin, Marcus S; Sumner, James J
2016-01-01
The standard methodologies for quantitative analysis (QA) of mixtures using Fourier transform infrared (FTIR) instruments have evolved until they are now more complicated than necessary for many users' purposes. We present a simpler methodology, suitable for widespread adoption of FTIR QA as a standard laboratory technique across disciplines by occasional users. • The algorithm is straightforward and intuitive, yet also fast, accurate, and robust. • It relies on component spectra, minimization of errors, and local adaptive mesh refinement. • It was tested successfully on real mixtures of up to nine components. We show that our methodology is robust to challenging experimental conditions such as similar substances, component percentages differing by three orders of magnitude, and imperfect (noisy) spectra. As examples, we analyze biological, chemical, and physical aspects of bio-hybrid fuel cells.
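The core of such component-spectrum fitting can be illustrated with a minimal least-squares sketch. This is not the authors' algorithm (which adds error minimization with local adaptive mesh refinement); the synthetic spectra and fractions below are assumptions for illustration only.

```python
# Minimal sketch: quantify a mixture by least-squares fitting of known
# pure-component spectra (illustrative, not the paper's full method).
import numpy as np

def fit_components(mixture, components):
    """Estimate coefficients so that components @ coeffs approximates the
    mixture spectrum in the least-squares sense.
    components: (n_wavenumbers, n_components) array of pure spectra."""
    coeffs, *_ = np.linalg.lstsq(components, mixture, rcond=None)
    return coeffs

# Synthetic example: two Gaussian "absorption bands" as pure spectra.
x = np.linspace(0.0, 10.0, 500)
pure = np.stack([np.exp(-(x - 3.0)**2), np.exp(-(x - 7.0)**2)], axis=1)
true_fracs = np.array([0.7, 0.3])
noise = 0.001 * np.random.default_rng(0).normal(size=x.size)
mixture = pure @ true_fracs + noise
estimated = fit_components(mixture, pure)
```

With low noise the estimated coefficients recover the true fractions closely; real spectra additionally need baseline handling and non-negativity constraints.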
Simple and accurate sum rules for highly relativistic systems
NASA Astrophysics Data System (ADS)
Cohen, Scott M.
2005-03-01
In this paper, I consider the Bethe and Thomas-Reiche-Kuhn sum rules, which together form the foundation of Bethe's theory of energy loss from fast charged particles to matter. For nonrelativistic target systems, the use of closure leads directly to simple expressions for these quantities. In the case of relativistic systems, on the other hand, the calculation of sum rules is fraught with difficulties. Various perturbative approaches have been used over the years to obtain relativistic corrections, but these methods fail badly when the system in question is very strongly bound. Here, I present an approach that leads to relatively simple expressions yielding accurate sums, even for highly relativistic many-electron systems. I also offer an explanation for the difference between relativistic and nonrelativistic sum rules in terms of the Zitterbewegung of the electrons.
Fast and accurate exhaled breath ammonia measurement.
Solga, Steven F; Mudalel, Matthew L; Spacek, Lisa A; Risby, Terence H
2014-06-11
This exhaled breath ammonia method uses a fast and highly sensitive spectroscopic technique known as quartz-enhanced photoacoustic spectroscopy (QEPAS), based on a quantum cascade laser. The monitor is coupled to a sampler that measures mouth pressure and carbon dioxide. The system is temperature controlled and specifically designed to address the reactivity of this compound. The sampler provides immediate feedback to the subject and the technician on the quality of the breath effort. Together with the quick response time of the monitor, this system is capable of accurately measuring exhaled breath ammonia representative of deep lung systemic levels. Because the system is easy to use and produces real-time results, it has enabled experiments to identify factors that influence measurements. For example, mouth rinse and oral pH reproducibly and significantly affect results and therefore must be controlled. Temperature and mode of breathing are other examples. As our understanding of these factors evolves, error is reduced, and clinical studies become more meaningful. This system is very reliable and individual measurements are inexpensive. The sampler is relatively inexpensive and quite portable, but the monitor is neither. This limits options for some clinical studies and provides rationale for future innovations.
Accurate and simple calibration of DLP projector systems
NASA Astrophysics Data System (ADS)
Wilm, Jakob; Olesen, Oline V.; Larsen, Rasmus
2014-03-01
Much work has been devoted to the calibration of optical cameras, and accurate and simple methods are now available which require only a small number of calibration targets. The problem of obtaining these parameters for light projectors has not been studied as extensively, and most current methods require a camera and involve feature extraction from a known projected pattern. In this work we present a novel calibration technique for DLP projector systems based on phase-shifting profilometry projection onto a printed calibration target. In contrast to most current methods, the one presented here does not rely on an initial camera calibration, and so does not carry that error over into the projector calibration. A radial interpolation scheme is used to convert feature coordinates into projector space, thereby allowing for a very accurate procedure. This allows for highly accurate determination of parameters, including lens distortion. Our implementation acquires printed planar calibration scenes in less than 1 s. This makes our method both fast and convenient. We evaluate our method in terms of reprojection errors and structured light image reconstruction quality.
Fast and accurate estimation for astrophysical problems in large databases
NASA Astrophysics Data System (ADS)
Richards, Joseph W.
2010-10-01
A recent flood of astronomical data has created much demand for sophisticated statistical and machine learning tools that can rapidly draw accurate inferences from large databases of high-dimensional data. In this Ph.D. thesis, methods for statistical inference in such databases will be proposed, studied, and applied to real data. I use methods for low-dimensional parametrization of complex, high-dimensional data that are based on the notion of preserving the connectivity of data points in the context of a Markov random walk over the data set. I show how this simple parameterization of data can be exploited to: define appropriate prototypes for use in complex mixture models, determine data-driven eigenfunctions for accurate nonparametric regression, and find a set of suitable features to use in a statistical classifier. In this thesis, methods for each of these tasks are built up from simple principles, compared to existing methods in the literature, and applied to data from astronomical all-sky surveys. I examine several important problems in astrophysics, such as estimation of star formation history parameters for galaxies, prediction of redshifts of galaxies using photometric data, and classification of different types of supernovae based on their photometric light curves. Fast methods for high-dimensional data analysis are crucial in each of these problems because they all involve the analysis of complicated high-dimensional data in large, all-sky surveys. Specifically, I estimate the star formation history parameters for the nearly 800,000 galaxies in the Sloan Digital Sky Survey (SDSS) Data Release 7 spectroscopic catalog, determine redshifts for over 300,000 galaxies in the SDSS photometric catalog, and estimate the types of 20,000 supernovae as part of the Supernova Photometric Classification Challenge. Accurate predictions and classifications are imperative in each of these examples because these estimates are utilized in broader inference problems
An All-Fragments Grammar for Simple and Accurate Parsing
2012-03-21
We present a simple but accurate parser which exploits both large tree fragments and symbol refinement. We parse with all fragments of the training set, in contrast to much recent work on tree selection in data-oriented parsing and tree-substitution grammar learning. We require only simple...
Fast, accurate, robust and Open Source Brain Extraction Tool (OSBET)
NASA Astrophysics Data System (ADS)
Namias, R.; Donnelly Kehoe, P.; D'Amato, J. P.; Nagel, J.
2015-12-01
The removal of non-brain regions in neuroimaging is a critical preprocessing task. Skull-stripping depends on different factors, including the noise level in the image, the anatomy of the subject being scanned, and the acquisition sequence. For these and other reasons, an ideal brain extraction method should be fast, accurate, user friendly, open-source, and knowledge based (to allow for interaction with the algorithm in case the expected outcome is not obtained), producing stable results and making it possible to automate the process for large datasets. There are already a large number of validated tools to perform this task, but none of them meets all the desired characteristics. In this paper we introduce an open source brain extraction tool (OSBET), composed of four steps using simple, well-known operations such as optimal thresholding, binary morphology, labeling, and geometrical analysis, that aims to assemble all the desired features. We present an experiment comparing OSBET with six other state-of-the-art techniques on a publicly available dataset consisting of 40 T1-weighted 3D scans and their corresponding manually segmented images. OSBET achieved both a short runtime and excellent accuracy, obtaining the best Dice coefficient. Further validation should be performed, for instance in unhealthy populations, to generalize its usage for clinical purposes.
Fast and simple decycling and dismantling of networks
NASA Astrophysics Data System (ADS)
Zdeborová, Lenka; Zhang, Pan; Zhou, Hai-Jun
2016-11-01
Decycling and dismantling of complex networks underlie many important applications in network science. Recently these two closely related problems were tackled by several heuristic algorithms, simple but considerably sub-optimal on the one hand, and involved yet accurate message-passing algorithms that evaluate single-node marginal probabilities on the other. In this paper we propose a simple and extremely fast algorithm, CoreHD, which recursively removes nodes of the highest degree from the 2-core of the network. CoreHD performs much better than all existing simple algorithms. When applied to real-world networks, it achieves solutions as good as those obtained by state-of-the-art iterative message-passing algorithms at greatly reduced computational cost, suggesting that CoreHD should be the algorithm of choice for many practical purposes.
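The CoreHD loop is simple enough to sketch directly: strip the graph to its 2-core, delete the highest-degree node of the core, and repeat until no 2-core remains. A minimal pure-Python version (illustrative; the published implementation and its tie-breaking will differ):

```python
def two_core(adj):
    """Return the node set of the 2-core: iteratively strip nodes of
    degree <= 1 until none remain."""
    deg = {v: len(ns) for v, ns in adj.items()}
    alive = set(adj)
    queue = [v for v in alive if deg[v] <= 1]
    while queue:
        v = queue.pop()
        if v not in alive:
            continue
        alive.discard(v)
        for u in adj[v]:
            if u in alive:
                deg[u] -= 1
                if deg[u] <= 1:
                    queue.append(u)
    return alive

def corehd(adj):
    """CoreHD decycling sketch: repeatedly delete the highest-degree node
    of the current 2-core; when the 2-core is empty, no cycles remain.
    adj: dict mapping node -> iterable of neighbours (undirected)."""
    adj = {v: set(ns) for v, ns in adj.items()}   # work on a copy
    removed = []
    while True:
        core = two_core(adj)
        if not core:
            return removed
        # degree counted within the current 2-core
        v = max(core, key=lambda u: sum(1 for w in adj[u] if w in core))
        removed.append(v)
        for u in adj.pop(v):
            adj[u].discard(v)
```

On two triangles sharing no edge but joined through a hub, removing the single hub node decycles the whole graph, which is exactly what the sketch finds.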
Accurate and fast computation of transmission cross coefficients
NASA Astrophysics Data System (ADS)
Apostol, Štefan; Hurley, Paul; Ionescu, Radu-Cristian
2015-03-01
Precise and fast computation of aerial images is essential. Typical lithographic simulators employ a Köhler illumination system, for which aerial imagery is obtained using a large number of transmission cross coefficients (TCCs). These are generally computed by a slow numerical evaluation of a double integral. We review the general framework in which the 2D imagery is solved and then propose a fast and accurate method to obtain the TCCs. We derive analytical solutions and thus avoid the complexity-accuracy trade-off encountered with numerical integration. Compared to other analytical integration methods, the one presented is faster, more general, and more tractable.
Fast and accurate line scanner based on white light interferometry
NASA Astrophysics Data System (ADS)
Lambelet, Patrick; Moosburger, Rudolf
2013-04-01
White-light interferometry is a highly accurate technology for 3D measurements. The principle is widely utilized in surface metrology instruments but rarely adopted for in-line inspection systems. The main challenges for rolling out inspection systems based on white-light interferometry to the production floor are its sensitivity to environmental vibrations and relatively long measurement times: a large quantity of data needs to be acquired and processed in order to obtain a single topographic measurement. Heliotis developed a smart-pixel CMOS camera (lock-in camera) which is specially suited for white-light interferometry. The demodulation of the interference signal is treated at the level of the pixel, which typically reduces the acquired data by one order of magnitude. Along with the high bandwidth of the dedicated lock-in camera, vertical scan speeds of more than 40 mm/s are reachable. The high scan speed allows for the realization of inspection systems that are rugged against external vibrations as present on the production floor. For many industrial applications, such as the inspection of wafer bumps, surfaces of mechanical parts, and solar panels, large areas need to be measured. In this case either the instrument or the sample is displaced laterally and several measurements are stitched together. The cycle time of such a system is mostly limited by the stepping time for multiple lateral displacements. A line scanner based on white-light interferometry would eliminate most of the stepping time while maintaining robustness and accuracy. A. Olszak proposed a simple geometry to realize such a lateral scanning interferometer. We demonstrate that such inclined interferometers can benefit significantly from the fast in-pixel demodulation capabilities of the lock-in camera. One drawback of an inclined observation perspective is that its application is limited to objects with scattering surfaces. We therefore propose an alternate geometry where the incident light is
Accurate Anisotropic Fast Marching for Diffusion-Based Geodesic Tractography
Jbabdi, S.; Bellec, P.; Toro, R.; Daunizeau, J.; Pélégrini-Issac, M.; Benali, H.
2008-01-01
Using geodesics for inferring white matter fibre tracts from diffusion-weighted MR data is an attractive method for at least two reasons: (i) the method optimises a global criterion, and hence is less sensitive to local perturbations such as noise or partial volume effects, and (ii) the method is fast, allowing inference on a large number of connections in a reasonable computational time. Here, we propose an improved fast marching algorithm to infer geodesic paths. Specifically, this procedure is designed to achieve accurate front propagation in an anisotropic elliptic medium, such as DTI data. We evaluate the numerical performance of this approach on simulated datasets, as well as its robustness to local perturbation induced by fibre crossing. On real data, we demonstrate the feasibility of extracting geodesics to connect an extended set of brain regions. PMID:18299703
Simple, flexible, and accurate phase retrieval method for generalized phase-shifting interferometry.
Yatabe, Kohei; Ishikawa, Kenji; Oikawa, Yasuhiro
2017-01-01
This paper presents a non-iterative phase retrieval method from randomly phase-shifted fringe images. By combining the hyperaccurate least squares ellipse fitting method with the subspace method (usually called principal component analysis), a fast and accurate phase retrieval algorithm is realized. The proposed method is simple, flexible, and accurate. It can be easily coded without iteration, initial guess, or tuning parameters. Its flexibility comes from the fact that totally random phase-shifting steps and any number of fringe images greater than two are acceptable without any specific treatment. Finally, it is accurate because the hyperaccurate least squares method and the modified subspace method enable phase retrieval with a small error, as shown by the simulations. A MATLAB code, which is used in the experimental section, is provided within the paper to demonstrate its simplicity and ease of use.
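The subspace (PCA) step of such methods can be sketched in a few lines: after removing the per-pixel temporal mean, the two leading singular vectors of the frame stack span the quadrature pair B cos φ and B sin φ, so the phase follows from an arctangent. A minimal illustration with an assumed fringe model and shift count (not the authors' full algorithm, which adds the hyperaccurate ellipse-fitting correction):

```python
# Minimal subspace (PCA) phase-retrieval sketch for randomly
# phase-shifted fringes; the ellipse-fitting refinement is omitted.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 2*np.pi, 1000, endpoint=False)
true_phase = 4.0 * x                        # four fringes across the field
shifts = rng.uniform(0.0, 2*np.pi, 12)      # unknown random phase shifts
frames = 1.0 + 0.5 * np.cos(true_phase[None, :] + shifts[:, None])

# Remove the per-pixel temporal mean; the two leading right singular
# vectors then span {B cos(phi), B sin(phi)}.
centered = frames - frames.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)

# Recovered phase, up to a global sign and a constant offset.
recovered = np.arctan2(vt[1], vt[0])
```

The sign and offset ambiguity is inherent to the subspace step; real pipelines resolve it with prior knowledge or the ellipse-fitting stage.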
Braking of fast and accurate elbow flexions in the monkey.
Flament, D; Hore, J; Vilis, T
1984-01-01
The processes responsible for braking fast and accurate elbow movements were studied in the monkey. The movements studied were made over different amplitudes and against different inertias. All were made to the same end position. Only fast movements that showed the typical biphasic or triphasic pattern of activity in agonists and antagonists were analysed in detail. For movements made over different amplitudes and at different velocities there was symmetry between the acceleration and deceleration phases of the movements. For movements of the same amplitude performed at different velocities there was a direct linear relation between peak velocity and both the peak acceleration (and integrated agonist burst) and peak deceleration (and integrated antagonist burst). The slopes of these relations and their intercept with the peak velocity axis were a function of movement amplitude. This was such that for large and small movements of the same peak velocity and the same end position (i) peak acceleration and phasic agonist activity were larger for the small movements and (ii) peak deceleration and phasic antagonist activity were larger for the small movements. The slope of these relations and the symmetry between acceleration and deceleration were not affected by the addition of an inertial load to the handle held by the monkey. The results indicate that fast and accurate elbow movements in the monkey are braked by antagonist activity that is centrally programmed. As all movements were made to the same end position, the larger antagonist burst in small movements, made at the same peak velocity as large movements, cannot be due to differences in the viscoelastic contribution to braking (cf. Marsden, Obeso & Rothwell, 1983). (ABSTRACT TRUNCATED AT 250 WORDS) PMID:6737291
Method for Accurate Surface Temperature Measurements During Fast Induction Heating
NASA Astrophysics Data System (ADS)
Larregain, Benjamin; Vanderesse, Nicolas; Bridier, Florent; Bocher, Philippe; Arkinson, Patrick
2013-07-01
A robust method is proposed for the measurement of surface temperature fields during induction heating. It is based on the original coupling of temperature-indicating lacquers and a high-speed camera system. Image analysis tools have been implemented to automatically extract the temporal evolution of isotherms. This method was applied to the fast induction treatment of a 4340 steel spur gear, allowing the full history of surface isotherms to be accurately documented for a sequential heating, i.e., a medium frequency preheating followed by a high frequency final heating. Three isotherms, i.e., 704, 816, and 927°C, were acquired every 0.3 ms with a spatial resolution of 0.04 mm per pixel. The information provided by the method is described and discussed. Finally, the transformation temperature Ac1 is linked to the temperature on specific locations of the gear tooth.
Learning accurate very fast decision trees from uncertain data streams
NASA Astrophysics Data System (ADS)
Liang, Chunquan; Zhang, Yang; Shi, Peng; Hu, Zhengguo
2015-12-01
Most existing work on data stream classification assumes the streaming data is precise and definite. This assumption, however, does not always hold in practice, since data uncertainty is ubiquitous in data stream applications due to imprecise measurement, missing values, privacy protection, etc. The goal of this paper is to learn accurate decision tree models from uncertain data streams for classification analysis. On the basis of very fast decision tree (VFDT) algorithms, we propose an algorithm for constructing an uncertain VFDT tree with classifiers at tree leaves (uVFDTc). The uVFDTc algorithm can exploit uncertain information effectively and efficiently in both the learning and the classification phases. In the learning phase, it uses Hoeffding bound theory to learn from uncertain data streams and yield fast and reasonable decision trees. In the classification phase, at tree leaves it uses uncertain naive Bayes (UNB) classifiers to improve classification performance. Experimental results on both synthetic and real-life datasets demonstrate the strong ability of uVFDTc to classify uncertain data streams. The use of UNB at tree leaves has improved the performance of uVFDTc, especially the any-time property, the benefit of exploiting uncertain information, and the robustness against uncertainty.
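The Hoeffding-bound split test at the heart of VFDT-style learners is compact enough to sketch: a leaf splits once the observed advantage of the best attribute over the runner-up exceeds the bound. The delta and gain values below are illustrative assumptions, not figures from the paper:

```python
# Sketch of the Hoeffding-bound split test used by VFDT-style learners.
import math

def hoeffding_bound(value_range, delta, n):
    """With probability 1 - delta, the true mean of a random variable with
    range `value_range` lies within this epsilon of its n-sample mean."""
    return math.sqrt(value_range**2 * math.log(1.0 / delta) / (2.0 * n))

def should_split(best_gain, second_gain, value_range, delta, n):
    """Split once the best attribute beats the runner-up by more than epsilon."""
    return (best_gain - second_gain) > hoeffding_bound(value_range, delta, n)

# After 500 examples: a clear winner triggers a split, a close race does not.
clear = should_split(0.40, 0.10, value_range=1.0, delta=1e-7, n=500)
close = should_split(0.40, 0.35, value_range=1.0, delta=1e-7, n=500)
```

The uncertain-data variant changes how the sufficient statistics (and hence the gains) are accumulated, not this decision rule itself.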
Fast processing techniques for accurate ultrasonic range measurements
NASA Astrophysics Data System (ADS)
Barshan, Billur
2000-01-01
Four methods of range measurement for airborne ultrasonic systems - namely simple thresholding, curve-fitting, sliding-window, and correlation detection - are compared on the basis of bias error, standard deviation, total error, robustness to noise, and the difficulty/complexity of implementation. Whereas correlation detection is theoretically optimal, the other three methods can offer acceptable performance at much lower cost. Performances of all methods have been investigated as a function of target range, azimuth, and signal-to-noise ratio. Curve fitting, sliding window, and thresholding follow correlation detection in the order of decreasing complexity. Apart from correlation detection, minimum bias and total error is most consistently obtained with the curve-fitting method. On the other hand, the sliding-window method is always better than the thresholding and curve-fitting methods in terms of minimizing the standard deviation. The experimental results are in close agreement with the corresponding simulation results. Overall, the three simple and fast processing methods provide a variety of attractive compromises between measurement accuracy and system complexity. Although this paper concentrates on ultrasonic range measurement in air, the techniques described may also find application in underwater acoustics.
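Two of the four detection methods compared above, simple thresholding and correlation detection, can be contrasted on a synthetic echo. The sampling rate, burst frequency, noise level, and threshold below are assumed for illustration only:

```python
# Synthetic comparison of simple thresholding vs. correlation detection
# for ultrasonic time-of-flight ranging (parameters are illustrative).
import numpy as np

fs = 1.0e6                                 # sampling rate, Hz (assumed)
c = 343.0                                  # speed of sound in air, m/s
t = np.arange(2048) / fs
burst = np.sin(2*np.pi*40e3*t[:100]) * np.hanning(100)   # 40 kHz pulse

true_delay = 600                           # echo onset, in samples
echo = np.zeros_like(t)
echo[true_delay:true_delay + 100] = 0.5 * burst
echo += 0.01 * np.random.default_rng(0).normal(size=t.size)

# Simple thresholding: first sample whose magnitude exceeds a fixed level.
# Biased late, because the windowed echo rises slowly out of the noise.
thr_idx = int(np.argmax(np.abs(echo) > 0.1))

# Correlation detection: lag maximizing cross-correlation with the burst.
corr_idx = int(np.argmax(np.correlate(echo, burst, mode="valid")))

range_thr = c * thr_idx / fs / 2.0         # round trip -> halve
range_corr = c * corr_idx / fs / 2.0
```

On this example the correlator locates the true onset while the threshold detector fires several samples late, illustrating the bias error the comparison discusses.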
Simple tunnel diode circuit for accurate zero crossing timing
NASA Technical Reports Server (NTRS)
Metz, A. J.
1969-01-01
Tunnel diode circuit, capable of timing the zero crossing point of bipolar pulses, provides effective design for a fast crossing detector. It combines a nonlinear load line with the diode to detect the zero crossing of a wide range of input waveshapes.
Fast and Accurate Circuit Design Automation through Hierarchical Model Switching.
Huynh, Linh; Tagkopoulos, Ilias
2015-08-21
In computer-aided biological design, the trifecta of characterized part libraries, accurate models and optimal design parameters is crucial for producing reliable designs. As the number of parts and model complexity increase, however, it becomes exponentially more difficult for any optimization method to search the solution space, hence creating a trade-off that hampers efficient design. To address this issue, we present a hierarchical computer-aided design architecture that uses a two-step approach for biological design. First, a simple model of low computational complexity is used to predict circuit behavior and assess candidate circuit branches through branch-and-bound methods. Then, a complex, nonlinear circuit model is used for a fine-grained search of the reduced solution space, thus achieving more accurate results. Evaluation with a benchmark of 11 circuits and a library of 102 experimental designs with known characterization parameters demonstrates a speed-up of 3 orders of magnitude when compared to other design methods that provide optimality guarantees.
A Simple and Accurate Method for Measuring Enzyme Activity.
ERIC Educational Resources Information Center
Yip, Din-Yan
1997-01-01
Presents methods commonly used for investigating enzyme activity using catalase and presents a new method for measuring catalase activity that is more reliable and accurate. Provides results that are readily reproduced and quantified. Can also be used for investigations of enzyme properties such as the effects of temperature, pH, inhibitors,…
A Simple and Practical Algorithm for Accurate Gravitational Magnification Maps
NASA Astrophysics Data System (ADS)
Walters, S. J.; Forbes, L. K.
2017-01-01
In this brief communication, a new method is outlined for modelling magnification patterns on an observer's plane using a first-order approximation to the null geodesic path equations for a point mass lens. For each ray emitted from a source, an explicit calculation is made for the change in position on the observer's plane due to each lens mass. By counting the number of points in each small area of the observer's plane, the magnification at that point can be determined. This allows for a very simple and transparent algorithm. A short Matlab code sample for creating simple magnification maps due to multiple point lenses is included in an appendix.
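The ray-counting idea is transparent enough to sketch, here as a standard inverse-ray-shooting pass in Python rather than the paper's Matlab: deflect a uniform bundle of rays by each point lens and histogram the arrival positions; bin counts are proportional to magnification. Grid size, extent, and the softening constant are illustrative assumptions:

```python
# Inverse-ray-shooting sketch for point-mass lens magnification maps.
import numpy as np

def magnification_map(lenses, grid=400, span=2.0, bins=100):
    """lenses: iterable of (x, y, theta_E_squared) point masses.
    Returns a (bins, bins) array of ray counts on the source plane."""
    xs = np.linspace(-span, span, grid)
    X, Y = np.meshgrid(xs, xs)
    bx, by = X.copy(), Y.copy()
    for lx, ly, te2 in lenses:
        dx, dy = X - lx, Y - ly
        r2 = dx*dx + dy*dy + 1e-12          # soften to avoid division by zero
        bx -= te2 * dx / r2                 # point-lens deflection
        by -= te2 * dy / r2
    counts, _, _ = np.histogram2d(bx.ravel(), by.ravel(), bins=bins,
                                  range=[[-span, span], [-span, span]])
    return counts

# Single lens at the origin: rays pile up near the point caustic.
counts = magnification_map([(0.0, 0.0, 1.0)])
```

For multiple lenses the deflections simply add in the loop, which is the first-order superposition the paper exploits.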
Fast Monte Carlo Electron-Photon Transport Method and Application in Accurate Radiotherapy
NASA Astrophysics Data System (ADS)
Hao, Lijuan; Sun, Guangyao; Zheng, Huaqing; Song, Jing; Chen, Zhenping; Li, Gui
2014-06-01
The Monte Carlo (MC) method is the most accurate computational method for dose calculation, but its wide application in clinical accurate radiotherapy is hindered by its slow convergence and long computation times. In MC dose calculation research, the main task is to speed up computation while maintaining high precision. The purpose of this paper is to increase the calculation speed of the MC method for electron-photon transport with high precision, and ultimately to reduce accurate radiotherapy dose calculation times on an ordinary computer to the level of several hours, which meets the requirement of clinical dose verification. Based on the existing Super Monte Carlo Simulation Program (SuperMC), developed by the FDS Team, a fast MC method for electron-photon coupled transport is presented, with focus on two aspects: first, by simplifying and optimizing the physical model of electron-photon transport, the calculation speed is increased with only a slight reduction in accuracy; second, a variety of MC acceleration methods are applied, for example, reusing information obtained in previous calculations to avoid repeated simulation of particles with identical histories, and applying proper variance reduction techniques to accelerate the convergence rate of the MC method. The fast MC method was tested on many simple physical models and clinical cases, including nasopharyngeal carcinoma, peripheral lung tumor, and cervical carcinoma. The results show that the fast MC method for electron-photon transport is fast enough to meet the requirement of clinical accurate radiotherapy dose verification. Later, the method will be applied to the Accurate/Advanced Radiation Therapy System (ARTS) as an MC dose verification module.
Progress in fast, accurate multi-scale climate simulations
Collins, W. D.; Johansen, H.; Evans, K. J.; ...
2015-06-01
We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth with these computational improvements include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allowing more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures such as many-core processors and GPUs. As a result, approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.
Toward accurate and fast iris segmentation for iris biometrics.
He, Zhaofeng; Tan, Tieniu; Sun, Zhenan; Qiu, Xianchao
2009-09-01
Iris segmentation is an essential module in iris recognition because it defines the effective image region used for subsequent processing such as feature extraction. Traditional iris segmentation methods often involve an exhaustive search of a large parameter space, which is time consuming and sensitive to noise. To address these problems, this paper presents a novel algorithm for accurate and fast iris segmentation. After efficient reflection removal, an Adaboost-cascade iris detector is first built to extract a rough position of the iris center. Edge points of iris boundaries are then detected, and an elastic model named pulling and pushing is established. Under this model, the center and radius of the circular iris boundaries are iteratively refined in a way driven by the restoring forces of Hooke's law. Furthermore, a smoothing spline-based edge fitting scheme is presented to deal with noncircular iris boundaries. After that, eyelids are localized via edge detection followed by curve fitting. The novelty here is the adoption of a rank filter for noise elimination and a histogram filter for tackling the shape irregularity of eyelids. Finally, eyelashes and shadows are detected via a learned prediction model. This model provides an adaptive threshold for eyelash and shadow detection by analyzing the intensity distributions of different iris regions. Experimental results on three challenging iris image databases demonstrate that the proposed algorithm outperforms state-of-the-art methods in both accuracy and speed.
Very Fast and Accurate Azimuth Disambiguation of Vector Magnetograms
NASA Astrophysics Data System (ADS)
Rudenko, G. V.; Anfinogentov, S. A.
2014-05-01
We present a method for fast and accurate azimuth disambiguation of vector magnetogram data regardless of the location of the analyzed region on the solar disk. The direction of the transverse field is determined with the principle of minimum deviation of the field from the reference (potential) field. The new disambiguation (NDA) code is examined on the well-known models of Metcalf et al. ( Solar Phys. 237, 267, 2006) and Leka et al. ( Solar Phys. 260, 83, 2009), and on an artificial model based on the observed magnetic field of AR 10930 (Rudenko, Myshyakov, and Anfinogentov, Astron. Rep. 57, 622, 2013). We compare Hinode/SOT-SP vector magnetograms of AR 10930 disambiguated with three codes: the NDA code, the nonpotential magnetic-field calculation (NPFC: Georgoulis, Astrophys. J. Lett. 629, L69, 2005), and the spherical minimum-energy method (Rudenko, Myshyakov, and Anfinogentov, Astron. Rep. 57, 622, 2013). We then illustrate the performance of NDA on SDO/HMI full-disk magnetic-field observations. We show that our new algorithm is more than four times faster than the fastest algorithm that provides the disambiguation with a satisfactory accuracy (NPFC). At the same time, its accuracy is similar to that of the minimum-energy method (a very slow algorithm). In contrast to other codes, the NDA code maintains high accuracy when the region to be analyzed is very close to the limb.
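The minimum-deviation principle reduces, per pixel, to a sign choice against the reference potential field. A minimal sketch follows; the NDA code itself also handles disk geometry and limb effects, which are omitted here:

```python
import numpy as np

def disambiguate(bx, by, bx_ref, by_ref):
    """Resolve the 180-degree azimuth ambiguity of the transverse field.

    For each pixel, keep the measured transverse vector (bx, by) if it
    deviates from the reference (potential) field by less than 90 degrees;
    otherwise flip its sign. This is the minimum-deviation principle in
    its simplest per-pixel form.
    """
    dot = bx * bx_ref + by * by_ref           # sign of the projection
    flip = np.where(dot < 0, -1.0, 1.0)       # flip where deviation > 90 deg
    return bx * flip, by * flip
```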
Simple Mathematical Models Do Not Accurately Predict Early SIV Dynamics
Noecker, Cecilia; Schaefer, Krista; Zaccheo, Kelly; Yang, Yiding; Day, Judy; Ganusov, Vitaly V.
2015-01-01
Upon infection of a new host, human immunodeficiency virus (HIV) replicates in the mucosal tissues and is generally undetectable in circulation for 1–2 weeks post-infection. Several interventions against HIV including vaccines and antiretroviral prophylaxis target virus replication at this earliest stage of infection. Mathematical models have been used to understand how HIV spreads from mucosal tissues systemically and what impact vaccination and/or antiretroviral prophylaxis has on viral eradication. Because predictions of such models have rarely been compared to experimental data, it remains unclear which processes included in these models are critical for predicting early HIV dynamics. Here we modified the “standard” mathematical model of HIV infection to include two populations of infected cells: cells that are actively producing the virus and cells that are transitioning into virus production mode. We evaluated the effects of several poorly known parameters on infection outcomes in this model and compared model predictions to experimental data on infection of non-human primates with variable doses of simian immunodeficiency virus (SIV). First, we found that the mode of virus production by infected cells (budding vs. bursting) has a minimal impact on the early virus dynamics for a wide range of model parameters, as long as the parameters are constrained to provide the observed rate of SIV load increase in the blood of infected animals. Interestingly and in contrast with previous results, we found that the bursting mode of virus production generally results in a higher probability of viral extinction than the budding mode of virus production. Second, this mathematical model was not able to accurately describe the change in experimentally determined probability of host infection with increasing viral doses. Third and finally, the model was also unable to accurately explain the decline in the time to virus detection with increasing viral dose. These results
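The modified "standard" model with a transition (eclipse) compartment can be sketched as a small ODE system. All rate constants below are illustrative placeholders, not the parameters fitted in the study, and a plain Euler integrator stands in for whatever solver the authors used:

```python
def siv_model(t_end, dt=0.001, beta=1e-7, k=4.0, delta=1.0, p=5e4, c=23.0,
              T0=1e7, V0=1.0):
    """Euler integration of a 'standard' viral dynamics model extended
    with an eclipse phase: newly infected cells (I1) are not yet producing
    virus and transition into virus-producing cells (I2) at rate k.
    """
    T, I1, I2, V = T0, 0.0, 0.0, V0
    for _ in range(int(t_end / dt)):
        infection = beta * T * V          # new infections of target cells
        T  += dt * (-infection)
        I1 += dt * (infection - k * I1)   # eclipse-phase cells
        I2 += dt * (k * I1 - delta * I2)  # productively infected cells
        V  += dt * (p * I2 - c * V)       # free virus
    return T, I1, I2, V
```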
Stonehenge: A Simple and Accurate Predictor of Lunar Eclipses
NASA Astrophysics Data System (ADS)
Challener, S.
1999-12-01
Over the last century, much has been written about the astronomical significance of Stonehenge. The rage peaked in the mid to late 1960s when new computer technology enabled astronomers to make the first complete search for celestial alignments. Because there are hundreds of rocks or holes at Stonehenge and dozens of bright objects in the sky, the quest was fraught with obvious statistical problems. A storm of controversy followed and the subject nearly vanished from print. Only a handful of these alignments remain compelling. Today, few astronomers and still fewer archaeologists would argue that Stonehenge served primarily as an observatory. Instead, Stonehenge probably served as a sacred meeting place, which was consecrated by certain celestial events. These would include the sun's risings and settings at the solstices and possibly some lunar risings as well. I suggest that Stonehenge was also used to predict lunar eclipses. While Hawkins and Hoyle also suggested that Stonehenge was used in this way, their methods are complex and they make use of only early, minor, or outlying areas of Stonehenge. In contrast, I suggest a way that makes use of the imposing, central region of Stonehenge, the area built during the final phase of activity. To predict every lunar eclipse without predicting eclipses that do not occur, I use the less familiar lunar cycle of 47 lunar months. By moving markers about the Sarsen Circle, the Bluestone Circle, and the Bluestone Horseshoe, all umbral lunar eclipses can be predicted accurately.
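The arithmetic behind the 47-month cycle can be checked directly. This is only a numerical illustration of the cycle's near-commensurability, not a reconstruction of the author's marker-moving scheme; the month lengths are modern mean values:

```python
SYNODIC = 29.530589    # mean synodic month (full moon to full moon), days
DRACONIC = 27.212221   # mean draconic month (node to node), days

def cycle_mismatch(n_synodic, n_draconic):
    """Days by which n_synodic lunar months miss n_draconic node passages.

    A lunar eclipse requires a full moon near a lunar node, so eclipse
    possibilities repeat whenever whole numbers of the two month types
    nearly coincide: 47 synodic months match 51 draconic months to within
    a few hours, which is why a 47-month count can track eclipse seasons.
    """
    return n_synodic * SYNODIC - n_draconic * DRACONIC
```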
IRIS: Towards an Accurate and Fast Stage Weight Prediction Method
NASA Astrophysics Data System (ADS)
Taponier, V.; Balu, A.
2002-01-01
The knowledge of the structural mass fraction (or the mass ratio) of a given stage, which affects the performance of a rocket, is essential for the analysis of new or upgraded launchers or stages, a need heightened by the rapid evolution of space programs and by the necessity of adapting them to market needs. The availability of this highly scattered variable, ranging between 0.05 and 0.15, is of primary importance at the early steps of preliminary design studies. At the start of the staging and performance studies, the lack of frozen weight data (to be obtained later from propulsion, trajectory, and sizing studies) forces reliance on rough estimates, generally derived from printed sources and adapted. When needed, these can be consolidated through a specific analysis activity involving several techniques and implying additional effort and time. The present empirical approach thus yields only approximate values (i.e., not necessarily accurate or consistent), inducing inaccuracy in the results and, consequently, difficulties in ranking the performance of multiple options, as well as an increase in processing time. This is a classical harsh fact of preliminary design system studies, insufficiently discussed to date. It therefore appears highly desirable to have, for all evaluation activities, a reliable, fast, and easy-to-use weight or mass-fraction prediction method. Additionally, such a method should allow a preselection of alternative preliminary configurations, making a global system approach possible. For that purpose, an attempt at modeling has been undertaken, whose objective was the determination of a parametric formulation of the mass fraction, expressed from a limited number of parameters available at the early steps of a project. It is based on the innovative use of a statistical method applicable to a variable as a function of several independent parameters. A specific polynomial generator
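The kind of parametric mass-fraction formulation the abstract aims at can be illustrated with a generic least-squares polynomial surrogate. The feature set, degree, and parameter names are assumptions for illustration; the actual IRIS formulation is not given in the abstract:

```python
import numpy as np

def fit_mass_fraction(X, y, degree=2):
    """Fit a polynomial surrogate y ~ c0 + sum_{j,d} c_{jd} x_j^d.

    X: (n_samples, n_params) design parameters known early in a project;
    y: known structural mass fractions (typically 0.05-0.15).
    Returns a prediction function for new parameter sets.
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)

    def features(Z):
        Z = np.asarray(Z, dtype=float)
        cols = [np.ones(len(Z))]                      # constant term
        for d in range(1, degree + 1):
            cols.extend(Z[:, j] ** d for j in range(Z.shape[1]))
        return np.column_stack(cols)

    coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)
    return lambda Z: features(Z) @ coef
```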
NASA Astrophysics Data System (ADS)
Du, Qiang; Yang, Jiang
2017-03-01
This work is concerned with the Fourier spectral approximation of various integral differential equations associated with some linear nonlocal diffusion and peridynamic operators under periodic boundary conditions. For radially symmetric kernels, the nonlocal operators under consideration are diagonalizable in the Fourier space so that the main computational challenge is on the accurate and fast evaluation of their eigenvalues or Fourier symbols consisting of possibly singular and highly oscillatory integrals. For a large class of fractional power-like kernels, we propose a new approach based on reformulating the Fourier symbols both as coefficients of a series expansion and solutions of some simple ODE models. We then propose a hybrid algorithm that utilizes both truncated series expansions and high order Runge-Kutta ODE solvers to provide fast evaluation of Fourier symbols in both one and higher dimensional spaces. It is shown that this hybrid algorithm is robust, efficient and accurate. As applications, we combine this hybrid spectral discretization in the spatial variables and the fourth-order exponential time differencing Runge-Kutta for temporal discretization to offer high order approximations of some nonlocal gradient dynamics including nonlocal Allen-Cahn equations, nonlocal Cahn-Hilliard equations, and nonlocal phase-field crystal models. Numerical results show the accuracy and effectiveness of the fully discrete scheme and illustrate some interesting phenomena associated with the nonlocal models.
NASA Astrophysics Data System (ADS)
Zhang, Shunli; Zhang, Dinghua; Gong, Hao; Ghasemalizadeh, Omid; Wang, Ge; Cao, Guohua
2014-11-01
Iterative algorithms, such as the algebraic reconstruction technique (ART), are popular for image reconstruction. For iterative reconstruction, the area integral model (AIM) is more accurate for better reconstruction quality than the line integral model (LIM). However, the computation of the system matrix for AIM is more complex and time-consuming than that for LIM. Here, we propose a fast and accurate method to compute the system matrix for AIM. First, we calculate the intersection of each boundary line of a narrow fan-beam with pixels in a recursive and efficient manner. Then, by grouping the beam-pixel intersection area into six types according to the slopes of the two boundary lines, we analytically compute the intersection area of the narrow fan-beam with the pixels in a simple algebraic fashion. Overall, experimental results show that our method is about three times faster than the Siddon algorithm and about two times faster than the distance-driven model (DDM) in computation of the system matrix. The reconstruction speed of our AIM-based ART is also faster than the LIM-based ART that uses the Siddon algorithm and DDM-based ART, for one iteration. The fast reconstruction speed of our method was accomplished without compromising the image quality.
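The ART iteration itself (as opposed to the paper's fast area-integral system-matrix computation) can be sketched as the classic Kaczmarz row-action update, shown here as a minimal sketch:

```python
import numpy as np

def art_reconstruct(A, p, n_iter=50, relax=1.0):
    """Classic ART (Kaczmarz) scheme: for each ray i, project the current
    image estimate onto the hyperplane a_i . x = p_i.

    A: system matrix (n_rays x n_pixels), e.g. built from area integrals
    as in the paper; p: measured projections; relax: relaxation factor.
    """
    x = np.zeros(A.shape[1])
    row_norm2 = (A * A).sum(axis=1)            # precomputed ||a_i||^2
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            if row_norm2[i] > 0:
                x += relax * (p[i] - A[i] @ x) / row_norm2[i] * A[i]
    return x
```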
Note: Fast, small, accurate 90° rotator for a polarizer.
Shelton, David P; O'Donnell, William M; Norton, James L
2011-03-01
A permanent magnet stepper motor is modified to hold a dichroic polarizer inside the motor. Rotation of the polarizer by 90° ± 0.04° is accomplished within 80 ms. This device is used for measurements of the intensity ratio for two orthogonal linear polarized components of a light beam. The two selected polarizations can be rapidly alternated to allow for signal drift compensation, and the two selected polarizations are accurately orthogonal.
Interacting with image hierarchies for fast and accurate object segmentation
NASA Astrophysics Data System (ADS)
Beard, David V.; Eberly, David H.; Hemminger, Bradley M.; Pizer, Stephen M.; Faith, R. E.; Kurak, Charles; Livingston, Mark
1994-05-01
Object definition is an increasingly important area of medical image research. Accurate and fairly rapid object definition is essential for measuring the size and, perhaps more importantly, the change in size of anatomical objects such as kidneys and tumors. Rapid and fairly accurate object definition is essential for 3D real-time visualization including both surgery planning and Radiation oncology treatment planning. One approach to object definition involves the use of 3D image hierarchies, such as Eberly's Ridge Flow. However, the image hierarchy segmentation approach requires user interaction in selecting regions and subtrees. Further, visualizing and comprehending the anatomy and the selected portions of the hierarchy can be problematic. In this paper we will describe the Magic Crayon tool which allows a user to define rapidly and accurately various anatomical objects by interacting with image hierarchies such as those generated with Eberly's Ridge Flow algorithm as well as other 3D image hierarchies. Preliminary results suggest that fairly complex anatomical objects can be segmented in under a minute with sufficient accuracy for 3D surgery planning, 3D radiation oncology treatment planning, and similar applications. Potential modifications to the approach for improved accuracy are summarized.
Massively Parallel Processing for Fast and Accurate Stamping Simulations
NASA Astrophysics Data System (ADS)
Gress, Jeffrey J.; Xu, Siguang; Joshi, Ramesh; Wang, Chuan-tao; Paul, Sabu
2005-08-01
The competitive automotive market drives automotive manufacturers to speed up vehicle development cycles and reduce lead time. Fast tooling development is one of the key areas supporting fast, short vehicle development programs (VDPs). In the past ten years, stamping simulation has become the most effective validation tool for predicting and resolving potential formability and quality problems before the dies are physically made. Stamping simulation and formability analysis has become a critical business segment in GM's math-based die engineering process. As simulation has become one of the major production tools in the engineering factory, simulation speed and accuracy are two of the most important measures of stamping simulation technology. The speed and time-in-system of forming analysis become even more critical to support fast VDPs and tooling readiness. Since 1997, the General Motors Die Center has been working jointly with our software vendor to develop and implement a parallel version of simulation software for mass-production analysis applications. By 2001, this technology had matured in the form of distributed memory processing (DMP) of draw die simulations in a networked distributed-memory computing environment. In 2004, this technology was refined to massively parallel processing (MPP) and extended to line die forming analysis (draw, trim, flange, and associated spring-back) running on a dedicated computing environment. The evolution of this technology and the insight gained through the implementation of DMP/MPP technology, as well as performance benchmarks, are discussed in this publication.
Fast and accurate mapping of Complete Genomics reads.
Lee, Donghyuk; Hormozdiari, Farhad; Xin, Hongyi; Hach, Faraz; Mutlu, Onur; Alkan, Can
2015-06-01
Many recent advances in genomics and the expectations of personalized medicine are made possible by the power of high-throughput sequencing (HTS) in sequencing large collections of human genomes. Tens of different sequencing technologies are currently available, and each HTS platform has different strengths and biases. This diversity makes it possible to use different technologies to correct for each other's shortcomings, but it also requires developing different algorithms for each platform due to the differences in data types and error models. The first problem to tackle in analyzing HTS data for resequencing applications is the read mapping stage. Many tools have been developed for the most popular HTS methods, but publicly available, open-source aligners are still lacking for the Complete Genomics (CG) platform. Unfortunately, Burrows-Wheeler-based methods are not practical for CG data due to the gapped nature of the reads generated by this method. Here we provide a sensitive read mapper (sirFAST) for the CG technology based on the seed-and-extend paradigm that can quickly map CG reads to a reference genome. We evaluate the performance and accuracy of sirFAST using both simulated and publicly available real data sets, showing high precision and recall rates.
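The seed-and-extend paradigm underlying sirFAST can be sketched in a few lines. This toy version seeds only on the read's first k-mer and scores plain mismatches, ignoring the gapped structure of real CG reads and all of sirFAST's engineering:

```python
from collections import defaultdict

def build_index(ref, k=4):
    """Hash every k-mer of the reference to its positions (the seed table)."""
    index = defaultdict(list)
    for i in range(len(ref) - k + 1):
        index[ref[i:i + k]].append(i)
    return index

def map_read(read, ref, index, k=4, max_mismatch=2):
    """Seed-and-extend: look up the read's first k-mer, then verify each
    candidate location by counting mismatches over the full read length.
    Returns (position, mismatch_count) pairs for accepted locations.
    """
    hits = []
    for pos in index.get(read[:k], []):
        cand = ref[pos:pos + len(read)]
        if len(cand) == len(read):
            mismatches = sum(a != b for a, b in zip(read, cand))
            if mismatches <= max_mismatch:
                hits.append((pos, mismatches))
    return hits
```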
A fast and accurate image-based measuring system for isotropic reflection materials
NASA Astrophysics Data System (ADS)
Kim, Duck Bong; Kim, Kang Yeon; Park, Kang Su; Seo, Myoung Kook; Lee, Kwan H.
2008-08-01
We present a novel image-based BRDF (Bidirectional Reflectance Distribution Function) measurement system for materials that have isotropic reflectance properties. Our proposed system is fast due to its simple setup and automated operation. It also provides wide angular coverage and noise-reduction capability, achieving the accuracy needed for computer graphics applications. We test the uniformity and constancy of the light source and the reciprocity of the measurement system. We perform a photometric calibration of an HDR (High Dynamic Range) camera to recover an accurate radiance map from each HDR image. We verify our proposed system by comparing it with a previous image-based BRDF measurement system. We demonstrate the efficiency and accuracy of our proposed system by generating photorealistic images of the measured BRDF data, which include glossy blue and green plastics, gold-coated metal, and gold metallic paints.
A new simple multidomain fast multipole boundary element method
NASA Astrophysics Data System (ADS)
Huang, S.; Liu, Y. J.
2016-09-01
A simple multidomain fast multipole boundary element method (BEM) for solving potential problems is presented in this paper, which can be applied to solve a true multidomain problem or a large-scale single-domain problem using the domain decomposition technique. In this multidomain BEM, the coefficient matrix is formed simply by assembling the coefficient matrices of each subdomain and the interface conditions between subdomains, without eliminating any unknown variables on the interfaces. Compared with other conventional multidomain BEM approaches, this new approach is more efficient with the fast multipole method, regardless of how the subdomains are connected. Instead of solving the linear system of equations directly, the entire coefficient matrix is partitioned and decomposed using the Schur complement in this new approach. Numerical results show that the new multidomain fast multipole BEM uses fewer iterations in most cases with the iterative equation solver and less CPU time than the traditional fast multipole BEM in solving large-scale BEM models. A large-scale fuel cell model with more than 6 million elements was solved successfully on a cluster within 3 h using the new multidomain fast multipole BEM.
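The Schur-complement partitioning can be illustrated on a generic 2x2 block system with subdomain unknowns x and interface unknowns y. This is textbook block elimination, not the paper's BEM-specific assembly or fast multipole machinery:

```python
import numpy as np

def schur_solve(A, B, C, D, f, g):
    """Solve the partitioned system  [A B; C D] [x; y] = [f; g]
    by eliminating the subdomain block A with the Schur complement
    S = D - C A^{-1} B, so the interface system is solved first and
    the subdomain unknowns are recovered by back-substitution.
    """
    Ainv_B = np.linalg.solve(A, B)
    Ainv_f = np.linalg.solve(A, f)
    S = D - C @ Ainv_B                       # interface operator
    y = np.linalg.solve(S, g - C @ Ainv_f)   # interface unknowns first
    x = Ainv_f - Ainv_B @ y                  # back-substitute for interiors
    return x, y
```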
BBMap: A Fast, Accurate, Splice-Aware Aligner
Bushnell, Brian
2014-03-17
Alignment of reads is one of the primary computational tasks in bioinformatics. Of paramount importance to resequencing, alignment is also crucial to other areas - quality control, scaffolding, string-graph assembly, homology detection, assembly evaluation, error-correction, expression quantification, and even as a tool to evaluate other tools. An optimal aligner would greatly improve virtually any sequencing process, but optimal alignment is prohibitively expensive for gigabases of data. Here, we will present BBMap [1], a fast splice-aware aligner for short and long reads. We will demonstrate that BBMap has superior speed, sensitivity, and specificity to alternative high-throughput aligners bowtie2 [2], bwa [3], smalt, [4] GSNAP [5], and BLASR [6].
A fast and accurate decoder for underwater acoustic telemetry.
Ingraham, J M; Deng, Z D; Li, X; Fu, T; McMichael, G A; Trumbo, B A
2014-07-01
The Juvenile Salmon Acoustic Telemetry System, developed by the U.S. Army Corps of Engineers, Portland District, has been used to monitor the survival of juvenile salmonids passing through hydroelectric facilities in the Federal Columbia River Power System. Cabled hydrophone arrays deployed at dams receive coded transmissions sent from acoustic transmitters implanted in fish. The signals' time of arrival on different hydrophones is used to track fish in 3D. In this article, a new algorithm that decodes the received transmissions is described and the results are compared to results for the previous decoding algorithm. In a laboratory environment, the new decoder was able to decode signals with lower signal strength than the previous decoder, effectively increasing decoding efficiency and range. In field testing, the new algorithm decoded significantly more signals than the previous decoder and three-dimensional tracking experiments showed that the new decoder's time-of-arrival estimates were accurate. At multiple distances from hydrophones, the new algorithm tracked more points more accurately than the previous decoder. The new algorithm was also more than 10 times faster, which is critical for real-time applications on an embedded system.
A fast and accurate FPGA based QRS detection system.
Shukla, Ashish; Macchiarulo, Luca
2008-01-01
An accurate Field Programmable Gate Array (FPGA) based ECG analysis system is described in this paper. The design, based on a popular software-based QRS detection algorithm, calculates the threshold value for the next peak detection cycle from the median of eight previously detected peaks. The hardware design has accuracy in excess of 96% in detecting beats correctly when tested with a subset of five 30-minute data records obtained from the MIT-BIH Arrhythmia database. The design, implemented using a proprietary design tool (System Generator), is an extension of our previous work; it uses 76% of the resources available in a small-sized FPGA device (Xilinx Spartan xc3s500), has higher detection accuracy than our previous design, and takes almost half the analysis time of the software-based approach.
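The median-of-eight threshold rule can be sketched as follows. The scaling factor and the local-maximum peak test are assumptions for illustration; the abstract specifies only that the threshold comes from the median of eight previously detected peaks:

```python
from statistics import median

def update_threshold(peak_history, factor=0.75):
    """Threshold for the next QRS detection cycle, taken from the median
    of the last eight detected peak amplitudes (factor is illustrative)."""
    return factor * median(peak_history[-8:])

def detect_peaks(signal, init_threshold, history):
    """Tiny sliding detector: a sample counts as a QRS peak if it exceeds
    the current threshold and is a local maximum of its neighbors; the
    threshold is refreshed from the peak history after each detection."""
    peaks = []
    thr = init_threshold
    for i in range(1, len(signal) - 1):
        if signal[i] > thr and signal[i - 1] < signal[i] >= signal[i + 1]:
            peaks.append(i)
            history.append(signal[i])
            thr = update_threshold(history)
    return peaks
```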
Fast and accurate automated cell boundary determination for fluorescence microscopy
NASA Astrophysics Data System (ADS)
Arce, Stephen Hugo; Wu, Pei-Hsun; Tseng, Yiider
2013-07-01
Detailed measurement of cell phenotype information from digital fluorescence images has the potential to greatly advance biomedicine in various disciplines such as patient diagnostics or drug screening. Yet, the complexity of cell conformations presents a major barrier preventing effective determination of cell boundaries, and introduces measurement error that propagates throughout subsequent assessment of cellular parameters and statistical analysis. State-of-the-art image segmentation techniques that require user-interaction, prolonged computation time and specialized training cannot adequately provide the support for high content platforms, which often sacrifice resolution to foster the speedy collection of massive amounts of cellular data. This work introduces a strategy that allows us to rapidly obtain accurate cell boundaries from digital fluorescent images in an automated format. Hence, this new method has broad applicability to promote biotechnology.
Orbital Advection by Interpolation: A Fast and Accurate Numerical Scheme for Super-Fast MHD Flows
Johnson, B M; Guan, X; Gammie, F
2008-04-11
In numerical models of thin astrophysical disks that use an Eulerian scheme, gas orbits supersonically through a fixed grid. As a result the timestep is sharply limited by the Courant condition. Also, because the mean flow speed with respect to the grid varies with position, the truncation error varies systematically with position. For hydrodynamic (unmagnetized) disks an algorithm called FARGO has been developed that advects the gas along its mean orbit using a separate interpolation substep. This relaxes the constraint imposed by the Courant condition, which now depends only on the peculiar velocity of the gas, and results in a truncation error that is more nearly independent of position. This paper describes a FARGO-like algorithm suitable for evolving magnetized disks. Our method is second order accurate on a smooth flow and preserves ∇ · B = 0 to machine precision. The main restriction is that B must be discretized on a staggered mesh. We give a detailed description of an implementation of the code and demonstrate that it produces the expected results on linear and nonlinear problems. We also point out how the scheme might be generalized to make the integration of other supersonic/super-fast flows more efficient. Although our scheme reduces the variation of truncation error with position, it does not eliminate it. We show that the residual position dependence leads to characteristic radial variations in the density over long integrations.
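The orbital-advection (interpolation) substep can be sketched for a scalar field on a polar grid. This shows only the FARGO-style shift, with linear interpolation standing in for higher-order reconstruction and none of the constrained-transport machinery the paper needs for B:

```python
import numpy as np

def orbital_advect(q, shift_cells):
    """Advect each radial row of field q along the azimuthal direction by
    a (generally non-integer) number of cells: an exact integer roll for
    the whole-cell part, plus linear interpolation for the remainder.
    The integer roll is what removes the mean orbital speed from the
    Courant condition, leaving only the peculiar velocity.
    """
    out = np.empty_like(q, dtype=float)
    for j, s in enumerate(shift_cells):      # one mean-flow shift per radius
        n = int(np.floor(s))                 # integer part: exact roll
        frac = s - n                         # fractional part: interpolate
        out[j] = (1 - frac) * np.roll(q[j], n) + frac * np.roll(q[j], n + 1)
    return out
```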
Robust, accurate and fast automatic segmentation of the spinal cord.
De Leener, Benjamin; Kadoury, Samuel; Cohen-Adad, Julien
2014-09-01
Spinal cord segmentation provides measures of atrophy and facilitates group analysis via inter-subject correspondence. Automatizing this procedure enables studies with large throughput and minimizes user bias. Although several automatic segmentation methods exist, they are often restricted in terms of image contrast and field-of-view. This paper presents a new automatic segmentation method (PropSeg) optimized for robustness, accuracy and speed. The algorithm is based on the propagation of a deformable model and is divided into three parts: firstly, an initialization step detects the spinal cord position and orientation using a circular Hough transform on multiple axial slices rostral and caudal to the starting plane and builds an initial elliptical tubular mesh. Secondly, a low-resolution deformable model is propagated along the spinal cord. To deal with highly variable contrast levels between the spinal cord and the cerebrospinal fluid, the deformation is coupled with a local contrast-to-noise adaptation at each iteration. Thirdly, a refinement process and a global deformation are applied on the propagated mesh to provide an accurate segmentation of the spinal cord. Validation was performed in 15 healthy subjects and two patients with spinal cord injury, using T1- and T2-weighted images of the entire spinal cord and on multiecho T2*-weighted images. Our method was compared against manual segmentation and against an active surface method. Results show high precision for all the MR sequences. Dice coefficients were 0.9 for the T1- and T2-weighted cohorts and 0.86 for the T2*-weighted images. The proposed method runs in less than 1 min on a normal computer and can be used to quantify morphological features such as cross-sectional area along the whole spinal cord.
Fast and accurate predictions of covalent bonds in chemical space
NASA Astrophysics Data System (ADS)
Chang, K. Y. Samuel; Fias, Stijn; Ramakrishnan, Raghunathan; von Lilienfeld, O. Anatole
2016-05-01
We assess the predictive accuracy of perturbation theory based estimates of changes in covalent bonding due to linear alchemical interpolations among molecules. We have investigated σ bonding to hydrogen, as well as σ and π bonding between main-group elements, occurring in small sets of iso-valence-electronic molecules with elements drawn from second to fourth rows in the p-block of the periodic table. Numerical evidence suggests that first order Taylor expansions of covalent bonding potentials can achieve high accuracy if (i) the alchemical interpolation is vertical (fixed geometry), (ii) it involves elements from the third and fourth rows of the periodic table, and (iii) an optimal reference geometry is used. This leads to near linear changes in the bonding potential, resulting in analytical predictions with chemical accuracy (~1 kcal/mol). Second order estimates deteriorate the prediction. If initial and final molecules differ not only in composition but also in geometry, all estimates become substantially worse, with second order being slightly more accurate than first order. The independent particle approximation based second order perturbation theory performs poorly when compared to the coupled perturbed or finite difference approach. Taylor series expansions up to fourth order of the potential energy curve of highly symmetric systems indicate a finite radius of convergence, as illustrated for the alchemical stretching of H2+. Results are presented for (i) covalent bonds to hydrogen in 12 molecules with 8 valence electrons (CH4, NH3, H2O, HF, SiH4, PH3, H2S, HCl, GeH4, AsH3, H2Se, HBr); (ii) main-group single bonds in 9 molecules with 14 valence electrons (CH3F, CH3Cl, CH3Br, SiH3F, SiH3Cl, SiH3Br, GeH3F, GeH3Cl, GeH3Br); (iii) main-group double bonds in 9 molecules with 12 valence electrons (CH2O, CH2S, CH2Se, SiH2O, SiH2S, SiH2Se, GeH2O, GeH2S, GeH2Se); (iv) main-group triple bonds in 9 molecules with 10 valence electrons (HCN, HCP, HCAs, HSiN, HSi
NASA Astrophysics Data System (ADS)
Park, Byung-Kwan; Kim, Sung-Su; Chung, Dae-Su; Lee, Seong-Deok; Kim, Chang-Yeong
2008-02-01
This paper describes a new method for fast auto-focusing in image capturing devices, achieved using two defocused images. At two prefixed lens positions, two defocused images are taken, and the defocus blur level in each image is estimated using the Discrete Cosine Transform (DCT). These DCT values can be classified by the distance from the image capturing device to the main object, so we can build a classifier of distance versus defocus blur level. With this classifier, the relation between the two defocus blur levels gives the device the best focused lens step. Ordinary auto-focusing methods such as Depth from Focus (DFF) need several defocused images and compare the high-frequency components in each image. Also known as the hill-climbing method, this process generally requires about half the number of images across all focus lens steps. Since the new method requires only two defocused images, it can greatly shorten focusing and reduce shutter lag time. Compared to existing Depth from Defocus (DFD) methods, which also use two defocused images, the new algorithm is simple and as accurate as the DFF method. Because of this simplicity and accuracy, the method can also be applied to fast 3D depth-map construction.
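The DCT-based blur estimate at the heart of such a method can be sketched as follows. The specific metric (non-DC energy fraction) and block size are assumptions; the abstract says only that defocus blur levels are estimated from DCT values:

```python
import numpy as np

def dct_2d(block):
    """Orthonormal 2-D DCT-II built from a 1-D DCT matrix."""
    n = block.shape[0]
    k = np.arange(n)
    M = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] /= np.sqrt(2.0)                # normalize the DC row
    return M @ block @ M.T

def sharpness(image_block):
    """Blur metric: fraction of DCT energy outside the DC coefficient.

    A sharper (better focused) block keeps more energy in its
    high-frequency DCT coefficients, so comparing this metric at the two
    prefixed lens positions indexes a distance classifier.
    """
    c = dct_2d(image_block.astype(float))
    total = (c ** 2).sum()
    return ((c ** 2).sum() - c[0, 0] ** 2) / total if total > 0 else 0.0
```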
Simple, fast, and efficient process for producing and purifying trehalulose.
Wei, Yutuo; Liang, Jiayuan; Huang, Ying; Lei, Panxian; Du, Liqin; Huang, Ribo
2013-06-01
A new property of recombinant trehalose synthase (GTase) from Thermus thermophilus HB-8 (ATCC 27634) was found and described in this study. GTase can act on sucrose and catalyze trehalulose formation without producing isomaltose, isomaltulose, or isomelezitose, releasing only small amounts of glucose and fructose as byproducts. Maximum trehalulose yield (approximately 81%) was obtained at an optimum temperature of 65°C and was independent of substrate concentration. A simple, fast, and efficient method of producing and purifying trehalulose is then described. In the first step, GTase catalyzed trehalulose formation using a 20% sucrose substrate. Miscellaneous sugars were then rapidly removed by Saccharomyces cerevisiae cells, while the trehalulose was completely preserved. Finally, the cells were separated by centrifugation and salt ions were removed by an ion-exchange resin, yielding a high-purity trehalulose solution. A trehalulose recovery rate of over 95% was achieved with this process. The method involves a simple process, fast separation, and low investment in production equipment, and thus greatly improves production efficiency and reduces production costs.
Effects of Fast Simple Numerical Calculation Training on Neural Systems
Takeuchi, Hikaru; Nagase, Tomomi; Taki, Yasuyuki; Sassa, Yuko; Hashizume, Hiroshi; Nouchi, Rui; Kawashima, Ryuta
2016-01-01
Cognitive training, including fast simple numerical calculation (FSNC), has been shown to improve performance on untrained processing speed and executive function tasks in the elderly. However, the effects of FSNC training on cognitive functions in the young and on neural mechanisms remain unknown. We investigated the effects of 1-week intensive FSNC training on cognitive function, regional gray matter volume (rGMV), and regional cerebral blood flow at rest (resting rCBF) in healthy young adults. FSNC training was associated with improvements in performance on simple processing speed, speeded executive functioning, and simple and complex arithmetic tasks. FSNC training was associated with a reduction in rGMV and an increase in resting rCBF in the frontopolar areas and a weak but widespread increase in resting rCBF in an anatomical cluster in the posterior region. These results provide direct evidence that FSNC training alone can improve performance on processing speed and executive function tasks as well as plasticity of brain structures and perfusion. Our results also indicate that changes in neural systems in the frontopolar areas may underlie these cognitive improvements. PMID:26881117
Digitization of photographic slides: simple, effective, fast, and inexpensive method.
Camarena, Lázaro Cárdenas; Guerrero, María Teresa
2002-03-01
Technological evolution has changed multiple areas of plastic surgery, including photography. The photograph is one of the instruments most used by the plastic surgeon, and it cannot be eliminated by technological change. The principal change in photography is that images can now be captured with digital cameras instead of on slides. Despite the multiple advantages that digital photography represents, many surgeons are resisting the change. One of the main reasons for this resistance is the large quantity of photographic slides that need to be digitized for use at scientific conferences as well as in publications. The existing methods and techniques for digitizing slides are costly and time-consuming, and there is a risk of loss of definition and image brightness. The authors present a simple, effective, fast, and inexpensive method for digitizing slides. This method has been validated by various plastic surgeons and is effective for use in multimedia presentations and for paper printouts of publication quality.
Chon, K H
2001-06-01
We use a previously introduced fast orthogonal search algorithm to detect sinusoidal frequency components buried in either white or colored noise. We show that the method outperforms the correlogram, modified covariance autoregressive (MODCOVAR), and multiple-signal classification (MUSIC) methods. The fast orthogonal search method achieves accurate detection of sinusoids even with signal-to-noise ratios as low as -10 dB, and is superior at detecting sinusoids buried in 1/f noise. Since the method accurately detects sinusoids even under colored noise, it can be used to extract the 1/f noise process observed in physiological signals such as heart rate and renal blood pressure and flow data.
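The idea behind orthogonal search for sinusoid detection can be sketched with a greedy least-squares frequency picker. This is a simplified stand-in, not Korenberg's fast orthogonal search itself; the candidate grid, the component count, and the test signal below are assumptions for illustration.

```python
import numpy as np

def detect_sinusoids(x, fs, n_components, freqs=None):
    """Greedily pick frequencies whose sin/cos pair best fits the residual.

    A simplified orthogonal-search sketch: each step least-squares fits
    every candidate sin/cos pair to the current residual and keeps the
    frequency that removes the most energy. Returns frequencies in Hz.
    """
    n = len(x)
    t = np.arange(n) / fs
    if freqs is None:
        freqs = np.linspace(0.5, fs / 2 - 0.5, 400)  # candidate grid
    residual = x.astype(float).copy()
    found = []
    for _ in range(n_components):
        best = None
        for f in freqs:
            basis = np.column_stack([np.sin(2 * np.pi * f * t),
                                     np.cos(2 * np.pi * f * t)])
            coef, *_ = np.linalg.lstsq(basis, residual, rcond=None)
            fit = basis @ coef
            gain = fit @ fit           # energy removed from the residual
            if best is None or gain > best[0]:
                best = (gain, f, fit)
        found.append(best[1])
        residual = residual - best[2]  # orthogonalize against the pick
    return sorted(found)

rng = np.random.default_rng(1)
fs = 100.0
t = np.arange(512) / fs
# two sinusoids buried in noise (roughly 0 dB SNR)
x = np.sin(2 * np.pi * 10.0 * t) + 0.7 * np.sin(2 * np.pi * 23.0 * t)
x = x + rng.standard_normal(t.size)
print(detect_sinusoids(x, fs, 2))
```

The real fast orthogonal search avoids the per-candidate least-squares by building the orthogonal basis incrementally, which is what makes it fast; the greedy selection principle is the same.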
NASA Astrophysics Data System (ADS)
Abbas, Zulkifly; Yeow, You Kok; Shaari, Abdul Halim; Zakaria, Azmi; Hassan, Jumiah; Khalid, Kaida; Saion, Elias
2005-07-01
A simple, fast and accurate technique employing an open-ended coaxial sensor for the determination of the moisture content in oil palm fruit is presented. For this technique, a calibration equation has been developed based on the relationship between the moisture content measured by the oven drying method and the phase of the reflection coefficient of the sensor for 21 fruits. The moisture content predicted by the sensor was in good agreement with that obtained using the standard oven drying method, within ±5% accuracy, when tested on 145 different fruit samples.
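A calibration of this kind (moisture content regressed on reflection-coefficient phase) can be sketched as follows. The linear form, the synthetic data, and the coefficients are illustrative assumptions, not the paper's actual calibration equation.

```python
import numpy as np

def fit_calibration(phase_deg, moisture_pct, order=1):
    """Least-squares calibration of moisture content against sensor phase.

    The linear (order-1) form is an assumption for illustration.
    """
    return np.polyfit(phase_deg, moisture_pct, order)

def predict_moisture(coeffs, phase_deg):
    """Apply the fitted calibration polynomial to new phase readings."""
    return np.polyval(coeffs, phase_deg)

# synthetic calibration set: phase shift decreasing with moisture
rng = np.random.default_rng(3)
true_m = rng.uniform(20, 80, 21)                 # 21 calibration fruits
phase = 150.0 - 1.5 * true_m + rng.normal(0, 1.0, 21)

coeffs = fit_calibration(phase, true_m)
pred = predict_moisture(coeffs, phase)
print(np.max(np.abs(pred - true_m)) < 5.0)       # within the ±5% band
```

In practice the 21 oven-dried fruits play the role of `true_m`, and the fitted `coeffs` are then used to predict moisture for unseen fruits from phase alone.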
Fast, simple, and good pan-sharpening method
NASA Astrophysics Data System (ADS)
Palubinskas, Gintautas
2013-01-01
Pan-sharpening of optical remote sensing multispectral imagery aims to include spatial information from a high-resolution image (high frequencies) in a low-resolution image (low frequencies) while preserving the spectral properties of the low-resolution image. From a signal-processing point of view, a general fusion filtering framework (GFF) can be formulated, which is well suited to the fusion of multiresolution and multisensor data such as optical-optical and optical-radar imagery. To reduce computation time, a simple and fast variant of GFF, the high-pass filtering method (HPFM), is proposed, which performs filtering in the signal domain and thus avoids time-consuming FFT computations. A new joint quality measure, combining a spectral and a spatial measure through proper normalization of the ranges of the variables, is proposed for quality assessment. The quality and speed of six pan-sharpening methods (component substitution (CS), Gram-Schmidt (GS) sharpening, Ehlers fusion, Amélioration de la Résolution Spatiale par Injection de Structures, GFF, and HPFM) were evaluated on WorldView-2 satellite remote sensing data. Experiments showed that the HPFM method outperforms all the other fusion methods used in this study, including its parent method GFF. Moreover, it is more than four times faster than the GFF method and competitive with the CS and GS methods in speed.
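High-pass-filter fusion in the signal domain, the family HPFM belongs to, can be sketched as follows. The box low-pass, nearest-neighbour upsampling, and uniform detail injection are simplifying assumptions, not the published HPFM algorithm.

```python
import numpy as np

def box_lowpass(img, r=2):
    """Separable moving-average low-pass filter (signal domain, no FFT)."""
    k = 2 * r + 1
    pad = np.pad(img, r, mode="edge")
    # horizontal then vertical running mean
    h = np.stack([pad[:, i:i + img.shape[1]] for i in range(k)]).mean(0)
    return np.stack([h[i:i + img.shape[0], :] for i in range(k)]).mean(0)

def hpf_pansharpen(ms, pan, r=2):
    """Inject the pan image's high frequencies into each upsampled MS band.

    ms:  (bands, h, w) low-resolution multispectral image
    pan: (H, W) high-resolution panchromatic image, H = s*h, W = s*w
    """
    s = pan.shape[0] // ms.shape[1]
    detail = pan - box_lowpass(pan, r)            # high-frequency part of pan
    up = ms.repeat(s, axis=1).repeat(s, axis=2)   # nearest-neighbour upsample
    return up + detail[None, :, :]                # add detail to every band

rng = np.random.default_rng(2)
ms = rng.random((3, 16, 16))       # toy 3-band low-res image
pan = rng.random((64, 64))         # toy 4x-resolution pan image
fused = hpf_pansharpen(ms, pan)
print(fused.shape)                 # (3, 64, 64)
```

Because the injected detail has roughly zero mean, the per-band means (the spectral properties the abstract wants preserved) stay close to those of the original multispectral bands.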
FastRNABindR: Fast and Accurate Prediction of Protein-RNA Interface Residues.
El-Manzalawy, Yasser; Abbas, Mostafa; Malluhi, Qutaibah; Honavar, Vasant
2016-01-01
A wide range of biological processes, including regulation of gene expression, protein synthesis, and the replication and assembly of many viruses, are mediated by RNA-protein interactions. However, experimental determination of the structures of protein-RNA complexes is expensive and technically challenging. Hence, a number of computational tools have been developed for predicting protein-RNA interfaces. Some state-of-the-art protein-RNA interface predictors rely on position-specific scoring matrix (PSSM)-based encoding of the protein sequences. The computational effort needed for generating PSSMs severely limits the practical utility of protein-RNA interface prediction servers. In this work, we experiment with two approaches, random sampling and sequence similarity reduction, for extracting a representative reference database of protein sequences from the more than 50 million protein sequences in UniRef100. Our results suggest that randomly sampled databases produce better PSSM profiles (in terms of the number of hits used to generate the profile, the distance of the generated profile to the corresponding profile generated using the entire UniRef100 data, and the accuracy of the machine learning classifier trained using these profiles). Based on our results, we developed FastRNABindR, an improved version of RNABindR for predicting protein-RNA interface residues using PSSM profiles generated from 1% of the UniRef100 sequences sampled uniformly at random. To the best of our knowledge, FastRNABindR is the only online protein-RNA interface residue prediction server that requires generation of PSSM profiles for query sequences and yet accepts hundreds of protein sequences per submission. Our approach for determining the optimal BLAST database for a protein-RNA interface residue classification task has the potential of substantially speeding up, and hence increasing the practical utility of, other amino acid sequence based predictors of protein-protein and protein
Fast and accurate calculation of dilute quantum gas using Uehling-Uhlenbeck model equation
NASA Astrophysics Data System (ADS)
Yano, Ryosuke
2017-02-01
The Uehling-Uhlenbeck (U-U) model equation is studied for the fast and accurate calculation of a dilute quantum gas. In particular, the direct simulation Monte Carlo (DSMC) method is used to solve the U-U model equation. DSMC analysis based on the U-U model equation is expected to enable the thermalization to be accurately obtained using a small number of sample particles and the dilute quantum gas dynamics to be calculated in a practical time. Finally, the applicability of DSMC analysis based on the U-U model equation to the fast and accurate calculation of a dilute quantum gas is confirmed by calculating the viscosity coefficient of a Bose gas on the basis of the Green-Kubo expression and the shock layer of a dilute Bose gas around a cylinder.
Loco, Daniele; Jurinovich, Sandro; Di Bari, Lorenzo; Mennucci, Benedetta
2016-01-14
We present and discuss a simple and fast computational approach to the calculation of electronic circular dichroism (ECD) spectra of nucleic acids. It is based on an exciton model in which the couplings are obtained in terms of the full transition-charge distributions resulting from TDDFT methods applied to the individual nucleobases. We validated the method on two systems, a DNA G-quadruplex and an RNA β-hairpin, whose solution structures have been accurately determined by means of NMR. We show that the different characteristics of composition and structure of the two systems can lead to quite important differences in how the accuracy of the simulation depends on the excitonic parameters. The accurate reproduction of the CD spectra, together with their interpretation in terms of the excitonic composition, suggests that this method may serve as a general computational tool both to predict the spectra of hypothetical structures and to define clear relationships between structural and ECD properties.
Development of a fast and accurate color-encoded digital fringe projection profilometry
NASA Astrophysics Data System (ADS)
Liu, Z.; Quan, C.; Tay, C. J.
2013-06-01
In the past two decades, fringe projection profilometry (FPP) has been widely used in three-dimensional (3D) profile measurement for its fast speed and high accuracy. As a branch of FPP, color-encoded digital fringe projection profilometry (CDFPP) has been applied to surface profile measurement. CDFPP is fast, non-contact, and full-field, making it one of the most important dynamic 3D profile measurement techniques. However, due to color cross-talk and the gamma distortions of electro-optical devices, phase errors arise when conventional phase-shifting algorithms are used to retrieve the phase in CDFPP. It is therefore important to develop methods for phase error suppression in CDFPP to realize fast and accurate profile measurement. In this paper, a phase error suppression technique is proposed to overcome color cross-talk and gamma distortions, enabling fast and accurate surface profile measurement. Experimental results on real data show that the proposed method can effectively suppress phase errors and achieve accurate measurements in CDFPP.
Weaver, Phoebe G; Jagow, Devin M; Portune, Cameron M; Kenney, John W
2016-07-19
The design and operation of a simple liquid nitrogen Dewar/cryostat apparatus based upon a small fused silica optical Dewar, a thermocouple assembly, and a CCD spectrograph are described. The experiments for which this Dewar/cryostat is designed require fast sample loading, fast sample freezing, fast alignment of the sample, accurate and stable sample temperatures, and small size and portability of the Dewar/cryostat cryogenic unit. When coupled with the fast data acquisition rates of the CCD spectrograph, this Dewar/cryostat is capable of supporting cryogenic luminescence spectroscopic measurements on luminescent samples at a series of known, stable temperatures in the 77-300 K range. A temperature-dependent study of the oxygen quenching of luminescence in a rhodium(III) transition metal complex is presented as an example of the type of investigation possible with this Dewar/cryostat. In the context of this apparatus, a stable temperature for cryogenic spectroscopy means a luminescent sample that is thermally equilibrated with either liquid nitrogen or gaseous nitrogen at a known, measurable temperature that does not vary (ΔT < 0.1 K) during the short time scale (~1-10 sec) of the spectroscopic measurement by the CCD. The Dewar/cryostat works by taking advantage of the positive thermal gradient dT/dh that develops above the liquid nitrogen level in the Dewar, where h is the height of the sample above the liquid nitrogen level. The slow evaporation of the liquid nitrogen results in a slow increase in h over several hours and a consequent slow increase in the sample temperature T over this time period. A quickly acquired luminescence spectrum effectively catches the sample at a constant, thermally equilibrated temperature.
A rapid, simple, and accurate plaque assay for human respiratory syncytial virus (HRSV).
Kim, Kyung Sook; Kim, Ah Ra; Piao, Ying; Lee, Ju-Hie; Quan, Fu-Shi
2017-03-31
Plaque assays of human respiratory syncytial virus (HRSV) are time-consuming, requiring 4 to 7 days for plaque formation and several hours for dye staining. Here, we describe a simple method by which RSV plaques can be visualized and counted with the naked eye only 2 days after infection of HEp-2 cells. In this assay, the infected cells are stained with monoclonal antibodies and the plaques are developed using diaminobenzidine (DAB). We tested the accuracy of this new plaque assay by comparing the results obtained on days 1, 2, 3, 4, 5, and 6 post-infection. The whole procedure is significantly simpler than the traditional method, with an immunostaining process of around 1.5 h. Our method is rapid, accurate, and simple; thus, it has the potential to significantly contribute to studies related to RSV disease.
Magnetic gaps in organic tri-radicals: From a simple model to accurate estimates
NASA Astrophysics Data System (ADS)
Barone, Vincenzo; Cacelli, Ivo; Ferretti, Alessandro; Prampolini, Giacomo
2017-03-01
The calculation of the energy gap between the magnetic states of organic poly-radicals still represents a challenging playground for quantum chemistry, and high-level techniques are required to obtain accurate estimates. On these grounds, the aim of the present study is twofold. On the one hand, it shows that, thanks to recent algorithmic and technical improvements, we are able to compute reliable quantum mechanical results for systems of current fundamental and technological interest. On the other hand, proper parameterization of a simple Hubbard Hamiltonian allows for a sound rationalization of magnetic gaps in terms of basic physical effects, unraveling the role played by electron delocalization, Coulomb repulsion, and effective exchange in tuning the magnetic character of the ground state. As case studies, we have chosen three prototypical organic tri-radicals, namely, 1,3,5-trimethylenebenzene, 1,3,5-tridehydrobenzene, and 1,2,3-tridehydrobenzene, which differ in either geometric or electronic structure. After discussing the differences among the three species and their consequences for the magnetic properties in terms of the simple model mentioned above, accurate and reliable values for the energy gap between the lowest quartet and doublet states are computed by means of the so-called difference dedicated configuration interaction (DDCI) technique, and the final results are discussed and compared with available experimental and computational estimates.
Fast and spectrally accurate Ewald summation for 2-periodic electrostatic systems
NASA Astrophysics Data System (ADS)
Lindbo, Dag; Tornberg, Anna-Karin
2012-04-01
A new method for Ewald summation in planar/slablike geometry, i.e., systems where periodicity applies in two dimensions and the last dimension is "free" (2P), is presented. We employ a spectral representation in terms of both Fourier series and integrals. This allows us to concisely derive both the 2P Ewald sum and a fast particle mesh Ewald (PME)-type method suitable for large-scale computations. The primary results are: (i) close and illuminating connections between the 2P problem and the standard Ewald sum and associated fast methods for full periodicity; (ii) a fast, O(N log N), and spectrally accurate PME-type method for the 2P k-space Ewald sum that uses vastly less memory than traditional PME methods; (iii) errors that decouple, such that parameter selection is simplified. We give analytical and numerical results to support this.
Fast and accurate image recognition algorithms for fresh produce food safety sensing
NASA Astrophysics Data System (ADS)
Yang, Chun-Chieh; Kim, Moon S.; Chao, Kuanglin; Kang, Sukwon; Lefcourt, Alan M.
2011-06-01
This research developed and evaluated the multispectral algorithms derived from hyperspectral line-scan fluorescence imaging under violet LED excitation for detection of fecal contamination on Golden Delicious apples. The algorithms utilized the fluorescence intensities at four wavebands, 680 nm, 684 nm, 720 nm, and 780 nm, for computation of simple functions for effective detection of contamination spots created on the apple surfaces using four concentrations of aqueous fecal dilutions. The algorithms detected more than 99% of the fecal spots. The effective detection of feces showed that a simple multispectral fluorescence imaging algorithm based on violet LED excitation may be appropriate to detect fecal contamination on fast-speed apple processing lines.
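A band-based detection rule of this general shape can be sketched as follows. The specific two-band ratio (680/720 nm) and the threshold are illustrative assumptions, since the paper's exact waveband functions are not reproduced here.

```python
import numpy as np

def fecal_mask(cube, bands, thresh=1.2):
    """Flag pixels whose 680/720 nm fluorescence ratio exceeds a threshold.

    cube:  (n_bands, h, w) fluorescence intensities
    bands: band-center wavelengths matching cube's first axis
    The two-band ratio and the threshold are illustrative assumptions,
    not the published multispectral algorithm.
    """
    i680 = cube[bands.index(680)]
    i720 = cube[bands.index(720)]
    ratio = i680 / np.maximum(i720, 1e-9)   # avoid division by zero
    return ratio > thresh

bands = [680, 684, 720, 780]
cube = np.full((4, 8, 8), 10.0)
cube[0, 2:4, 2:4] = 30.0          # synthetic "contaminated" pixels at 680 nm
mask = fecal_mask(cube, bands)
print(int(mask.sum()))            # 4
```

Per-pixel functions of a handful of wavebands like this one are cheap enough to run on the fast apple-processing lines the abstract targets.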
Simple and surprisingly accurate approach to the chemical bond obtained from dimensional scaling.
Svidzinsky, Anatoly A; Scully, Marlan O; Herschbach, Dudley R
2005-08-19
We present a new dimensional scaling transformation of the Schrödinger equation for the two electron bond. This yields, for the first time, a good description of the bond via D scaling. There also emerges, in the large-D limit, an intuitively appealing semiclassical picture, akin to a molecular model proposed by Bohr in 1913. In this limit, the electrons are confined to specific orbits in the scaled space, yet the uncertainty principle is maintained. A first-order perturbation correction, proportional to 1/D, substantially improves the agreement with the exact ground state potential energy curve. The present treatment is very simple mathematically, yet provides a strikingly accurate description of the potential curves for the lowest singlet, triplet, and excited states of H2. We find the modified D-scaling method also gives good results for other molecules. It can be combined advantageously with Hartree-Fock and other conventional methods.
Troeltzsch, Matthias; Liedtke, Jan; Troeltzsch, Volker; Frankenberger, Roland; Steiner, Timm; Troeltzsch, Markus
2012-10-01
Odontomas account for the largest fraction of odontogenic tumors and are frequent causes of tooth impaction. A case of a 13-year-old female patient with an odontoma-associated impaction of a mandibular molar is presented with a review of the literature. Preoperative planning involved simple and convenient methods such as clinical examination and panoramic radiography, which led to a diagnosis of complex odontoma and warranted surgical removal. The clinical diagnosis was confirmed histologically. Multidisciplinary consultation may enable the clinician to find the accurate diagnosis and appropriate therapy based on the clinical and radiographic appearance. Modern radiologic methods such as cone-beam computed tomography or computed tomography should be applied only for special cases, to decrease radiation.
Fast and accurate analytical model to solve inverse problem in SHM using Lamb wave propagation
NASA Astrophysics Data System (ADS)
Poddar, Banibrata; Giurgiutiu, Victor
2016-04-01
Lamb wave propagation is at the center of attention of researchers in structural health monitoring (SHM) of thin-walled structures, because Lamb wave modes are natural modes of wave propagation in these structures, with long travel distances and little attenuation. This brings the prospect of monitoring large structures with few sensors/actuators. However, damage detection and identification is an "inverse problem" in which we do not have the luxury of knowing the exact mathematical model of the system. The problem is made more challenging by the confounding factors of statistical variation in material and geometric properties, and it may also be ill-posed. Due to all these complexities, a direct solution of the damage detection and identification problem in SHM is impossible. Therefore, an indirect method using the solution of the "forward problem" is popular for solving the "inverse problem"; this requires a fast forward-problem solver. Because of the complexities involved in the forward problem of the scattering of Lamb waves from damage, researchers rely primarily on numerical techniques such as FEM and BEM. But these methods are slow and practically unusable in structural health monitoring. We have developed a fast and accurate analytical forward-problem solver for this purpose. This solver, CMEP (complex modes expansion and vector projection), can quickly and accurately simulate the scattering of Lamb waves from all types of damage in thin-walled structures to assist the inverse-problem solver.
Fast and accurate roughness characterization techniques for wafers and hard disks
NASA Astrophysics Data System (ADS)
Rothe, Hendrik; Kasper, Andre
1996-11-01
For wafers, hard disks, and flat panel displays in particular, fast and accurate technical means for roughness characterization are needed. However, speed and accuracy are contradictory: generally speaking, fast roughness sensors are not accurate, and precise instruments are slow. It has turned out in recent years that multi-aperture fiber optic sensors which acquire ARS/TIS data allow a very fast estimation of surface roughness. But it is rather difficult to convince e.g. chip manufacturers that the results of such sensors are reliable, because there are no accepted international standards for these kinds of optical measurements. Therefore we decided to establish a setup of our ARS/TIS sensor for roughness characterization and an instrument for roughness measurement in a cleanroom, consisting of the following parts: (1) 200 X 200 mm stages, speed 0.4 m/s, +/- 1 micron accuracy, acceleration 1 g; (2) visual inspection head consisting of 50 X objective and CCD camera; (3) AFM scan head; (4) ARS/TIS fiber optic sensor; and (5) laminar box. Topics of the paper are the measurement philosophy, specs of the setup, the architecture of the fiber optic ARS/TIS head, as well as data processing algorithms and software.
A simple and accurate resist parameter extraction method for sub-80-nm DRAM patterns
NASA Astrophysics Data System (ADS)
Lee, Sook; Hwang, Chan; Park, Dong-Woon; Kim, In-Sung; Kim, Ho-Chul; Woo, Sang-Gyun; Cho, Han-Ku; Moon, Joo-Tae
2004-05-01
Due to the polarization effect of high-NA lithography, consideration of the resist effect in lithography simulation becomes increasingly important. In spite of the importance of resist simulation, many process engineers are reluctant to consider the resist effect in lithography simulation due to the time-consuming procedure to extract the required resist parameters and the uncertainty in the measurement of some parameters. Weiss suggested a simplified development model that does not require complex kinetic parameters. For device fabrication engineers, there is a simple and accurate parameter extraction and optimization method using the Weiss model. This method needs the refractive index, Dill's parameters, and development rate monitoring (DRM) data for parameter extraction. The parameters extracted by this sequence are not accurate enough, so we have to optimize them to fit the critical dimension scanning electron microscopy (CD SEM) data of line and space patterns. Hence, FiRM from Sigma-C is utilized as a resist parameter optimization program. According to our study, the illumination shape, the aberration, and the pupil mesh points have a large effect on the accuracy of the resist parameters in optimization. To obtain the optimum parameters, we need to find the saturated mesh points in terms of normalized intensity log slope (NILS) prior to optimization. Simulation results using the parameters optimized by this method show good agreement with experiments for iso-dense bias, focus-exposure matrix data, and sub-80 nm device pattern simulation.
Fast and accurate focusing analysis of large photon sieve using pinhole ring diffraction model.
Liu, Tao; Zhang, Xin; Wang, Lingjie; Wu, Yanxiong; Zhang, Jizhen; Qu, Hemeng
2015-06-10
In this paper, we developed a pinhole ring diffraction model for the focusing analysis of a large photon sieve. Instead of analyzing individual pinholes, we discuss the focusing of all of the pinholes in a single ring. An explicit equation for the diffracted field of an individual pinhole ring is proposed. We investigated the validity range of this generalized model and analytically described the sufficient conditions for its validity. A practical example and investigation reveal the high accuracy of the pinhole ring diffraction model. This simulation method can be used for fast and accurate focusing analysis of a large photon sieve.
Fast and accurate mock catalogue generation for low-mass galaxies
NASA Astrophysics Data System (ADS)
Koda, Jun; Blake, Chris; Beutler, Florian; Kazin, Eyal; Marin, Felipe
2016-06-01
We present an accurate and fast framework for generating mock catalogues including low-mass haloes, based on an implementation of the COmoving Lagrangian Acceleration (COLA) technique. Multiple realizations of mock catalogues are crucial for analyses of large-scale structure, but conventional N-body simulations are too computationally expensive for the production of thousands of realizations. We show that COLA simulations can produce accurate mock catalogues with moderate computational resources for low- to intermediate-mass galaxies in 10^12 M⊙ haloes, both in real and redshift space. COLA simulations have accurate peculiar velocities, without systematic errors in the velocity power spectra for k ≤ 0.15 h Mpc^-1, and with only 3 per cent error for k ≤ 0.2 h Mpc^-1. We use COLA with 10 time steps and a Halo Occupation Distribution to produce 600 mock galaxy catalogues for the WiggleZ Dark Energy Survey. Our parallelized code for efficient generation of accurate halo catalogues is publicly available at github.com/junkoda/cola_halo.
Spinelli, Orietta; Rambaldi, Alessandro; Rigo, Francesca; Zanghì, Pamela; D'Agostini, Elena; Amicarelli, Giulia; Colotta, Francesco; Divona, Mariadomenica; Ciardi, Claudia; Coco, Francesco Lo; Minnucci, Giulia
2015-01-01
The diagnostic work-up of acute promyelocytic leukemia (APL) includes the cytogenetic demonstration of the t(15;17) translocation and/or of the PML-RARA chimeric transcript by RQ-PCR or RT-PCR. These latter assays provide suitable results in 3-6 hours. We describe here two new, rapid and specific assays that detect PML-RARA transcripts, based on the RT-QLAMP (Reverse Transcription-Quenching Loop-mediated Isothermal Amplification) technology, in which RNA retrotranscription and cDNA amplification are carried out in a single tube with one enzyme at one temperature, in a fluorescence, real-time format. A single-tube triplex assay detects the bcr1 and bcr3 PML-RARA transcripts along with the GUSB housekeeping gene; a single-tube duplex assay detects bcr2 and GUSB. In 73 APL cases, these assays detected bcr1, bcr2 and bcr3 transcripts in 16 minutes. All 81 non-APL samples were negative by RT-QLAMP for chimeric transcripts, whereas GUSB was detectable. In 11 APL patients in which RT-PCR yielded equivocal breakpoint-type results, the RT-QLAMP assays unequivocally and accurately defined the breakpoint type (as confirmed by sequencing). Furthermore, RT-QLAMP could amplify two bcr2 transcripts with particularly extended PML exon 6 deletions that were not amplified by RQ-PCR. The reproducible sensitivity of RT-QLAMP is 10^-3 for bcr1 and bcr3 and 10^-2 for bcr2, making this assay particularly attractive at diagnosis and leaving RQ-PCR for the molecular monitoring of minimal residual disease during follow-up. In conclusion, PML-RARA RT-QLAMP, compared to RT-PCR or RQ-PCR, is a valid improvement to perform rapid, simple and accurate molecular diagnosis of APL. PMID:25815362
A Simple yet Accurate Method for the Estimation of the Biovolume of Planktonic Microorganisms
2016-01-01
Determining the biomass of microbial plankton is central to the study of fluxes of energy and materials in aquatic ecosystems. This is typically accomplished by applying proper volume-to-carbon conversion factors to group-specific abundances and biovolumes. A critical step in this approach is the accurate estimation of biovolume from two-dimensional (2D) data such as those available through conventional microscopy techniques or flow-through imaging systems. This paper describes a simple yet accurate method for the assessment of the biovolume of planktonic microorganisms, which works with any image analysis system allowing for the measurement of linear distances and the estimation of the cross-sectional area of an object from a 2D digital image. The proposed method is based on Archimedes' classical result relating the volume of a sphere to that of the cylinder in which the sphere is inscribed, plus a coefficient of 'unellipticity' introduced here. Validation and careful evaluation of the method are provided using a variety of approaches. The new method proved to be highly precise with all convex shapes characterised by approximate rotational symmetry, and combining it with an existing method specific for highly concave or branched shapes allows covering the great majority of cases with good reliability. Thanks to its accuracy, consistency, and low resource demand, the new method can conveniently be used in substitution of any extant method designed for convex shapes, and can readily be coupled with automated cell imaging technologies, including state-of-the-art flow-through imaging devices. PMID:27195667
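The sphere/cylinder relationship underlying the method can be sketched directly: a sphere fills two thirds of its circumscribed cylinder, so an estimator of the form V ≈ (2/3)·area·length·c follows, with c an unellipticity correction. The function below is a minimal sketch under that assumption; the paper's actual estimator and shape handling are more elaborate.

```python
import numpy as np

def biovolume(area, length, unellipticity=1.0):
    """Estimate cell volume from a 2D silhouette.

    area:   cross-sectional area of the object in the image plane
    length: linear extent along the axis of approximate rotational symmetry
    By Archimedes' sphere/cylinder relation, a sphere occupies 2/3 of its
    circumscribed cylinder, so V ~ (2/3) * area * length, scaled by an
    'unellipticity' correction for shapes departing from an ellipsoid.
    """
    return (2.0 / 3.0) * area * length * unellipticity

# sanity check on a sphere of radius r: projected area pi*r^2, length 2*r
r = 1.0
v = biovolume(np.pi * r**2, 2 * r)
print(np.isclose(v, 4.0 / 3.0 * np.pi * r**3))  # True
```

Both inputs are measurable on a single 2D image, which is why the estimator plugs directly into conventional microscopy or flow-through imaging pipelines.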
Kandori, Akihiko; Sano, Yuko; Zhang, Yuhua; Tsuji, Toshio
2015-12-01
This paper describes a new method for calculating chest compression depth and a simple chest-compression gauge for validating the accuracy of the method. The gauge has two plates incorporating two magnetic coils, a spring, and an accelerometer. The coils are located at both ends of the spring, and the accelerometer is set on the bottom plate. The waveforms obtained from the magnetic coils (hereafter, "magnetic waveforms"), which are proportional to the compression-force waveforms, and the acceleration waveforms were measured at the same time. A weight factor expressing the relationship between the second derivatives of the magnetic waveforms and the measured acceleration waveforms was calculated, and an estimated compression-displacement (depth) waveform was obtained by multiplying the magnetic waveforms by this weight factor. To validate the accuracy of the calculated waveform, displacements of two large springs (with spring constants similar to that of a thorax) and displacements of a cardiopulmonary resuscitation training manikin were measured using the gauge. A laser-displacement detection system was used to compare the real displacement waveform with the estimated waveform, and intraclass correlation coefficients (ICCs) between the real displacements measured by the laser system and the estimated displacement waveforms were calculated. The estimated error of the compression depth was within 2 mm (<1 standard deviation), and all ICCs (two springs and a manikin) were above 0.85 (0.99 for one of the springs). The developed simple chest-compression gauge, based on the new calculation method, thus provides an accurate compression depth (estimation error < 2 mm).
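Under the stated proportionalities (magnetic waveform proportional to force, and force proportional to spring displacement), the weight-factor step reduces to a least-squares scale between the waveform's second time derivative and the measured acceleration. A minimal sketch with synthetic data, not the authors' implementation:

```python
import numpy as np

def estimate_depth(magnetic, accel, dt):
    """Estimate a displacement waveform from a coil ('magnetic') waveform
    and a simultaneously measured acceleration waveform.

    Assumes the magnetic waveform is proportional to displacement, so its
    second time derivative is proportional to acceleration. A scalar
    weight fitted by least squares then converts the magnetic waveform
    into displacement units.
    """
    d2m = np.gradient(np.gradient(magnetic, dt), dt)   # second derivative
    w = np.dot(d2m, accel) / np.dot(d2m, d2m)          # least-squares scale
    return w * magnetic

# Synthetic check: 40 mm peak-to-peak compression at 2 Hz.
dt = 1e-3
t = np.arange(0.0, 2.0, dt)
depth = 0.02 * (1 - np.cos(2 * np.pi * 2 * t))         # metres
magnetic = 130.0 * depth                               # arbitrary coil gain
accel = np.gradient(np.gradient(depth, dt), dt)
est = estimate_depth(magnetic, accel, dt)
assert np.max(np.abs(est - depth)) < 1e-3              # within 1 mm
```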
Flight Research into Simple Adaptive Control on the NASA FAST Aircraft
NASA Technical Reports Server (NTRS)
Hanson, Curtis E.
2011-01-01
A series of simple adaptive controllers with varying levels of complexity were designed, implemented and flight tested on the NASA Full-Scale Advanced Systems Testbed (FAST) aircraft. Lessons learned from the development and flight testing are presented.
A Simple and Accurate Equation for Peak Capacity Estimation in Two Dimensional Liquid Chromatography
Li, Xiaoping; Stoll, Dwight R.; Carr, Peter W.
2009-01-01
Two dimensional liquid chromatography (2DLC) is a very powerful way to greatly increase the resolving power and overall peak capacity of liquid chromatography. The traditional “product rule” for peak capacity usually overestimates the true resolving power due to neglect of the often quite severe under-sampling effect and thus provides poor guidance for optimizing the separation and biases comparisons to optimized one dimensional gradient liquid chromatography. Here we derive a simple yet accurate equation for the effective two dimensional peak capacity that incorporates a correction for under-sampling of the first dimension. The results show that not only is the speed of the second dimension separation important for reducing the overall analysis time, but it plays a vital role in determining the overall peak capacity when the first dimension is under-sampled. A surprising subsidiary finding is that for relatively short 2DLC separations (much less than a couple of hours), the first dimension peak capacity is far less important than is commonly believed and need not be highly optimized, for example through use of long columns or very small particles. PMID:19053226
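A commonly quoted form of the corrected peak capacity divides the product rule by an average first-dimension broadening factor. The coefficient used below (3.35, with w1 the 4-sigma first-dimension peak width and ts the second-dimension sampling time) is the usual literature value and should be checked against the paper's exact expression; this is a sketch, not the authors' code.

```python
import math

def effective_peak_capacity(n1, n2, ts, w1):
    """Effective 2DLC peak capacity with a first-dimension under-sampling
    correction.

    The product-rule capacity n1*n2 is divided by an average broadening
    factor beta = sqrt(1 + 3.35*(ts/w1)**2), where ts is the second-
    dimension cycle (sampling) time and w1 the first-dimension 4-sigma
    peak width. The coefficient is an assumption of this sketch.
    """
    beta = math.sqrt(1.0 + 3.35 * (ts / w1) ** 2)
    return n1 * n2 / beta

# Sampling once per first-dimension peak width (ts = w1) already costs
# more than a factor of two relative to the product rule:
n_eff = effective_peak_capacity(100, 30, ts=10.0, w1=10.0)
assert n_eff < 0.5 * 100 * 30
```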
A Simple and Fast Spline Filtering Algorithm for Surface Metrology.
Zhang, Hao; Ott, Daniel; Song, John; Tong, Mingsi; Chu, Wei
2015-01-01
Spline filters and their corresponding robust filters are commonly used filters recommended in ISO (the International Organization for Standardization) standards for surface evaluation. Generally, these linear and non-linear spline filters, composed of symmetric, positive-definite matrices, are solved in an iterative fashion based on a Cholesky decomposition. They have been demonstrated to be relatively efficient, but complicated and inconvenient to implement. A new spline-filter algorithm is proposed by means of the discrete cosine transform or the discrete Fourier transform. The algorithm is conceptually simple and very convenient to implement.
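The essential non-iterative trick can be sketched with a penalized least-squares ("smoothing spline") filter: with reflective boundaries the problem is diagonalized by the DCT-II, so the solve becomes an elementwise division in the transform domain. This illustrates the transform-domain idea only; it does not reproduce the ISO spline filter's cutoff-based coefficient convention.

```python
import numpy as np

def dct2_matrix(n):
    """Orthonormal DCT-II matrix (rows are cosine basis vectors)."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * k * (2 * i + 1) / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C

def spline_filter_dct(z, lam):
    """Non-iterative smoothing-spline low-pass filter via the DCT.

    Solves min ||s - z||^2 + lam*||D2 s||^2 (D2 = second difference,
    reflective boundaries): the DCT-II diagonalizes D2^T D2, so the
    solution is an elementwise division by 1 + lam*eig in the transform
    domain.
    """
    n = len(z)
    C = dct2_matrix(n)
    eig = (2.0 - 2.0 * np.cos(np.pi * np.arange(n) / n)) ** 2
    return C.T @ ((C @ z) / (1.0 + lam * eig))

# Smoothing a noisy profile recovers the underlying waviness:
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 256)
truth = np.sin(2 * np.pi * x)
noisy = truth + 0.1 * rng.standard_normal(256)
smooth = spline_filter_dct(noisy, lam=1e4)
assert np.std(smooth - truth) < np.std(noisy - truth)
```

In production one would use a fast DCT rather than an explicit matrix; the point here is that the filter requires no iteration.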
Fast and Accurate Semiautomatic Segmentation of Individual Teeth from Dental CT Images.
Kang, Ho Chul; Choi, Chankyu; Shin, Juneseuk; Lee, Jeongjin; Shin, Yeong-Gil
2015-01-01
In this paper, we propose a fast and accurate semiautomatic method to effectively distinguish individual teeth from the tooth sockets in dental CT images. Threshold parameter values and tooth shapes are propagated to the neighboring slice, based on the teeth separated in reference images. After this propagation, the histogram of the current slice is analyzed, and the individual teeth are automatically separated and segmented by seeded region growing. The newly generated separation information is then iteratively propagated to the neighboring slice. Our method was validated on ten sets of dental CT scans, and the results were compared with manually segmented results and conventional methods. The average absolute error of the volume measurement was 2.29 ± 0.56%, more accurate than conventional methods. Running on multicore processors, the method was 2.4 times faster than on a single-core processor. The proposed method identified individual teeth accurately, demonstrating that it can give dentists substantial assistance during dental surgery.
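The seeded region growing step itself is simple to sketch. The toy version below collects 4-connected pixels within an intensity window starting from a seed; the paper's method adds the threshold and shape propagation across CT slices.

```python
import numpy as np
from collections import deque

def region_grow(image, seed, low, high):
    """Seeded region growing: starting from `seed`, collect 4-connected
    pixels whose intensity lies in [low, high]. Minimal stand-in for the
    growing step described above."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    q = deque([seed])
    while q:
        y, x = q.popleft()
        if not (0 <= y < h and 0 <= x < w) or mask[y, x]:
            continue
        if not (low <= image[y, x] <= high):
            continue
        mask[y, x] = True
        q.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return mask

# Toy "slice": a bright 4x4 square on a dark background.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
grown = region_grow(img, seed=(3, 3), low=0.5, high=1.5)
assert grown.sum() == 16
```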
A simple backscattering microscope for fast tracking of biological molecules
NASA Astrophysics Data System (ADS)
Sowa, Yoshiyuki; Steel, Bradley C.; Berry, Richard M.
2010-11-01
Recent developments in techniques for observing single molecules under light microscopes have helped reveal the mechanisms by which molecular machines work. A wide range of markers can be used to detect molecules, from single fluorophores to micron-sized markers, depending on the research interest. Here, we present a new and simple objective-type backscattering microscope to track gold nanoparticles with nanometer and microsecond resolution. The total noise of our system in a 55 kHz bandwidth is ~0.6 nm per axis, sufficient to measure molecular movement. We found backscattering microscopy useful not only for in vitro but also for in vivo experiments, because cells produce lower background scattering than in conventional dark-field microscopy. We demonstrate the application of this technique by measuring the motion of a biological rotary molecular motor, the bacterial flagellar motor, in live Escherichia coli cells.
CoMOGrad and PHOG: From Computer Vision to Fast and Accurate Protein Tertiary Structure Retrieval
Karim, Rezaul; Aziz, Mohd. Momin Al; Shatabda, Swakkhar; Rahman, M. Sohel; Mia, Md. Abul Kashem; Zaman, Farhana; Rakin, Salman
2015-01-01
The number of entries in structural databases of proteins is increasing day by day. Methods for retrieving protein tertiary structures from such large databases have turned out to be the key to comparative analysis of structures, which plays an important role in understanding proteins and their functions. In this paper, we present fast and accurate methods for retrieving proteins with tertiary structures similar to a query protein from a large database. Our proposed methods borrow ideas from the field of computer vision. The speed and accuracy of our methods come from two newly introduced features, the co-occurrence matrix of oriented gradients (CoMOGrad) and the pyramid histogram of oriented gradients (PHOG), and from the use of Euclidean distance as the distance measure. Experimental results clearly indicate the superiority of our approach in both running time and accuracy. Our method is readily available for use at http://research.buet.ac.bd:8080/Comograd/. PMID:26293226
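The feature idea can be pictured on a toy image: quantize gradient orientations, then count how often each pair of orientation bins occurs at adjacent positions. This is an illustrative sketch only; the paper defines its features on protein structure representations and its exact definitions differ. Retrieval then reduces to Euclidean nearest-neighbour search in feature space.

```python
import numpy as np

def comograd_feature(img, bins=8):
    """Toy co-occurrence matrix of oriented gradients: quantize gradient
    orientations into `bins`, count co-occurrences at horizontally
    adjacent pixels, and normalize. Illustrative stand-in for the
    paper's CoMOGrad feature."""
    gy, gx = np.gradient(img.astype(float))
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)
    q = np.minimum((ang / (2 * np.pi) * bins).astype(int), bins - 1)
    M = np.zeros((bins, bins))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        M[a, b] += 1
    M /= M.sum()
    return M.ravel()   # compare feature vectors with Euclidean distance

# Retrieval is nearest neighbour in feature space; identical inputs
# give identical features and hence zero distance:
f1 = comograd_feature(np.eye(16))
f2 = comograd_feature(np.eye(16))
assert np.linalg.norm(f1 - f2) == 0.0
```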
Fast and accurate quantum molecular dynamics of dense plasmas across temperature regimes
Sjostrom, Travis; Daligault, Jerome
2014-10-10
Here, we develop and implement a new quantum molecular dynamics approximation that allows fast and accurate simulations of dense plasmas from cold to hot conditions. The method is based on a carefully designed orbital-free implementation of density functional theory. The results for hydrogen and aluminum are in very good agreement with Kohn-Sham (orbital-based) density functional theory and path integral Monte Carlo calculations for microscopic features such as the electron density as well as the equation of state. The present approach does not scale with temperature and hence extends to higher temperatures than is accessible in the Kohn-Sham method and lower temperatures than is accessible by path integral Monte Carlo calculations, while being significantly less computationally expensive than either of those two methods.
A fast and accurate frequency estimation algorithm for sinusoidal signal with harmonic components
NASA Astrophysics Data System (ADS)
Hu, Jinghua; Pan, Mengchun; Zeng, Zhidun; Hu, Jiafei; Chen, Dixiang; Tian, Wugang; Zhao, Jianqiang; Du, Qingfa
2016-10-01
Frequency estimation is a fundamental problem in many applications, such as traditional vibration measurement, power system supervision, and microelectromechanical system sensor control. In this paper, a fast and accurate frequency estimation algorithm is proposed to address the low efficiency of traditional methods. The proposed algorithm consists of coarse and fine frequency estimation steps, and we demonstrate that applying a modified zero-crossing technique to locate the peak of the FFT amplitude spectrum is more efficient than conventional search methods for coarse frequency estimation. The proposed algorithm therefore requires fewer hardware and software resources, and its efficiency advantage grows as the amount of experimental data increases. Experimental results with a modulated magnetic signal show that the root mean square error of the frequency estimate is below 0.032 Hz with the proposed algorithm, which has lower computational complexity and better global performance than conventional frequency estimation methods.
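The coarse step can be illustrated with a plain zero-crossing counter, since a clean sinusoid crosses zero twice per cycle; the paper's modified technique and the subsequent fine search around the FFT peak are more elaborate.

```python
import numpy as np

def coarse_freq_zero_crossing(x, fs):
    """Coarse frequency estimate from zero crossings: for a clean
    sinusoid, crossings per second = 2*f. Illustrative stand-in for the
    coarse estimation step discussed above."""
    signs = np.signbit(x)
    crossings = np.count_nonzero(signs[1:] != signs[:-1])
    duration = (len(x) - 1) / fs
    return crossings / (2.0 * duration)

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 37.2 * t)
est = coarse_freq_zero_crossing(x, fs)
assert abs(est - 37.2) < 1.0   # coarse estimate, refined by a fine step
```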
Flexible, Fast and Accurate Sequence Alignment Profiling on GPGPU with PaSWAS
Warris, Sven; Yalcin, Feyruz; Jackson, Katherine J. L.; Nap, Jan Peter
2015-01-01
Motivation: Obtaining large-scale sequence alignments in a fast and flexible way is an important step in the analysis of next-generation sequencing data. Applications based on the Smith-Waterman (SW) algorithm are often either not fast enough, limited to dedicated tasks, or not sufficiently accurate due to statistical issues. Current SW implementations that run on graphics hardware do not report the alignment details necessary for further analysis. Results: With the Parallel SW Alignment Software (PaSWAS) it is possible (a) to have easy access to the computational power of NVIDIA-based general-purpose graphics processing units (GPGPUs) to perform high-speed sequence alignments, and (b) to retrieve relevant information such as score, number of gaps and mismatches. The software reports multiple hits per alignment. The added value of the new SW implementation is demonstrated with two test cases: (1) tag recovery in next-generation sequence data and (2) isotype assignment within an immunoglobulin 454 sequence data set. Both cases show the usability and versatility of the new parallel Smith-Waterman implementation. PMID:25830241
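For reference, the algorithm being parallelized is the classic Smith-Waterman local-alignment recurrence. A plain single-pair CPU version (scoring only, with illustrative scoring parameters) is:

```python
import numpy as np

def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Plain Smith-Waterman local alignment score with linear gap
    penalty. Each cell takes the best of: start fresh (0), extend the
    diagonal with a match/mismatch, or open/extend a gap."""
    H = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i, j] = max(0,
                          H[i - 1, j - 1] + s,
                          H[i - 1, j] + gap,
                          H[i, j - 1] + gap)
    return int(H.max())

assert smith_waterman("ACGT", "ACGT") == 8   # four matches at +2 each
assert smith_waterman("AAAA", "TTTT") == 0   # no positive local score
```

GPGPU implementations such as the one described above compute the anti-diagonals of H in parallel and additionally perform traceback to report alignment details.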
A fast and accurate algorithm for high-frequency trans-ionospheric path length determination
NASA Astrophysics Data System (ADS)
Wijaya, Dudy D.
2015-12-01
This paper presents a fast and accurate algorithm for high-frequency trans-ionospheric path length determination. The algorithm is based solely on the solution of the Eikonal equation, solved using the conformal theory of refraction. Its main advantages are as follows. First, the algorithm can determine the optical path length without iteratively adjusting the elevation and azimuth angles, which reduces the computational time. Second, for the same elevation and azimuth angles, the algorithm can simultaneously determine the phase and group optical path lengths, both ordinary and extraordinary, for different frequencies. Numerical simulations show that the computational time required by the proposed algorithm to accurately determine 8 different optical path lengths is almost 17 times shorter than that required by a 3D ionospheric ray-tracing algorithm, and that the time to determine multiple optical path lengths is the same as that for a single one. The proposed algorithm determines the optical path lengths with millimeter-level accuracy when the squared ratio of the plasma frequency to the transmitted frequency is less than 1.33 × 10^{-3}, and hence it is applicable for geodetic applications.
Fast and Accurate Prediction of Stratified Steel Temperature During Holding Period of Ladle
NASA Astrophysics Data System (ADS)
Deodhar, Anirudh; Singh, Umesh; Shukla, Rishabh; Gautham, B. P.; Singh, Amarendra K.
2017-04-01
Thermal stratification of liquid steel in a ladle during the holding period and the teeming operation has a direct bearing on the superheat available at the caster and hence on the caster set points such as casting speed and cooling rates. The changes in the caster set points are typically carried out based on temperature measurements at the end of tundish outlet. Thermal prediction models provide advance knowledge of the influence of process and design parameters on the steel temperature at various stages. Therefore, they can be used in making accurate decisions about the caster set points in real time. However, this requires both fast and accurate thermal prediction models. In this work, we develop a surrogate model for the prediction of thermal stratification using data extracted from a set of computational fluid dynamics (CFD) simulations, pre-determined using design of experiments technique. Regression method is used for training the predictor. The model predicts the stratified temperature profile instantaneously, for a given set of process parameters such as initial steel temperature, refractory heat content, slag thickness, and holding time. More than 96 pct of the predicted values are within an error range of ±5 K (±5 °C), when compared against corresponding CFD results. Considering its accuracy and computational efficiency, the model can be extended for thermal control of casting operations. This work also sets a benchmark for developing similar thermal models for downstream processes such as tundish and caster.
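The surrogate-modelling step can be sketched generically: fit a least-squares polynomial to a design-of-experiments set of (parameters, response) pairs, then evaluate it instantly in place of the CFD solver. The quadratic form and the parameter names below are illustrative assumptions, not the paper's actual regression model.

```python
import numpy as np
from itertools import combinations_with_replacement

def quad_design(X):
    """Design matrix with constant, linear and quadratic terms."""
    cols = [np.ones(len(X))]
    n = X.shape[1]
    for i in range(n):
        cols.append(X[:, i])
    for i, j in combinations_with_replacement(range(n), 2):
        cols.append(X[:, i] * X[:, j])
    return np.column_stack(cols)

def fit_surrogate(X, y):
    """Least-squares quadratic surrogate trained on sampled runs,
    the generic version of training a fast predictor on a
    design-of-experiments set of CFD simulations."""
    A = quad_design(X)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda Xq: quad_design(np.atleast_2d(np.asarray(Xq, float))) @ coef

# Recover a known quadratic response exactly from 50 sampled "runs"
# (columns could stand for scaled initial temperature, slag thickness,
# holding time):
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(50, 3))
y = 2 + X[:, 0] - 0.5 * X[:, 1] * X[:, 2]
predict = fit_surrogate(X, y)
assert np.allclose(predict(X), y, atol=1e-8)
```

Once trained, `predict` evaluates in microseconds, which is the property that makes such surrogates usable for real-time caster set-point decisions.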
Fast and accurate MAS-DNP simulations of large spin ensembles.
Mentink-Vigier, Frédéric; Vega, Shimon; De Paëpe, Gaël
2017-02-01
A deeper understanding of the parameters affecting Magic Angle Spinning Dynamic Nuclear Polarization (MAS-DNP), an emerging nuclear magnetic resonance hyperpolarization method, is crucial for the development of new polarizing agents and the successful implementation of the technique at higher magnetic fields (>10 T). Such progress is currently impeded by computational limitations that prevent the simulation of large spin ensembles (electron as well as nuclear spins) and an accurate description of the interplay between the many key parameters at play. In this work, we present an alternative to existing cross-effect and solid-effect MAS-DNP codes that yields fast and accurate simulations. More specifically, we describe the model, the associated Liouville-based formalism (Bloch-type derivation and/or Landau-Zener approximations), and the linear-time algorithm that computes MAS-DNP mechanisms with unprecedented time savings. As a result, one can easily scan through multiple parameters and disentangle their mutual influences. In addition, the simulation code can handle multiple electrons and protons, which allows probing the effect of (hyper)polarizing agent concentration and fully revealing the interplay between the polarizing agent structure and the hyperfine couplings, nuclear dipolar couplings, and nuclear relaxation times, in terms of both depolarization effects and polarization gain and buildup times.
Simple but accurate GCM-free approach for quantifying anthropogenic climate change
NASA Astrophysics Data System (ADS)
Lovejoy, S.
2014-12-01
We are so used to analysing the climate with the help of giant computer models (GCMs) that it is easy to get the impression that they are indispensable. Yet anthropogenic warming is so large (roughly 0.9 °C) that it turns out to be straightforward to quantify it with more empirically based methodologies that can be readily understood by the layperson. The key is to use the CO2 forcing as a linear surrogate for all the anthropogenic effects from 1880 to the present (implicitly including all effects due to greenhouse gases, aerosols and land use changes). To a good approximation, double the economic activity, double the effects. The relationship between the forcing and global mean temperature is extremely linear, as can be seen graphically and understood without fancy statistics [Lovejoy, 2014a] (see the attached figure and http://www.physics.mcgill.ca/~gang/Lovejoy.htm). To an excellent approximation, the deviations from the linear forcing-temperature relation can be interpreted as the natural variability. For example, this direct yet accurate approach makes it graphically obvious that the "pause" or "hiatus" in the warming since 1998 is simply a natural cooling event that has roughly offset the anthropogenic warming [Lovejoy, 2014b]. Rather than trying to prove that the warming is anthropogenic, with a little extra work (and some nonlinear geophysics theory and pre-industrial multiproxies) we can disprove the competing theory that it is natural. This approach leads to the estimate that the probability of the industrial-scale warming being a giant natural fluctuation is ≈0.1%: it can be dismissed. This removes the last climate-skeptic argument, that the models are wrong and the warming is natural, and finally allows for a closure of the debate. In this talk we argue that this new, direct, simple, intuitive approach provides an indispensable tool for communicating, and convincing, the public of both the reality and the amplitude of anthropogenic warming.
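The linear-surrogate regression is straightforward to sketch. Below, CO2 forcing is computed with the standard simplified expression 5.35 ln(C/C0) W/m^2 and regressed against a synthetic temperature series (made-up numbers, not observational data); the residual is the "natural variability" in the sense described above.

```python
import numpy as np

def anthropogenic_fit(co2_ppm, temp_anom):
    """Regress a temperature anomaly series on CO2 radiative forcing
    used as a linear surrogate for all anthropogenic effects. Forcing
    uses the standard simplified expression 5.35*ln(C/C0) W/m^2.
    Returns the climate sensitivity slope (K per W/m^2) and the
    residual series interpreted as natural variability."""
    forcing = 5.35 * np.log(co2_ppm / co2_ppm[0])
    A = np.column_stack([forcing, np.ones_like(forcing)])
    (slope, intercept), *_ = np.linalg.lstsq(A, temp_anom, rcond=None)
    residual = temp_anom - (slope * forcing + intercept)
    return slope, residual

# Synthetic illustration with an assumed 0.5 K per (W/m^2) sensitivity
# plus a small oscillatory "natural" component:
co2 = np.linspace(290, 400, 50)
temp = 0.5 * 5.35 * np.log(co2 / co2[0]) + 0.05 * np.sin(np.arange(50))
slope, resid = anthropogenic_fit(co2, temp)
assert abs(slope - 0.5) < 0.05
```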
Highly accurate and fast optical penetration-based silkworm gender separation system
NASA Astrophysics Data System (ADS)
Kamtongdee, Chakkrit; Sumriddetchkajorn, Sarun; Chanhorm, Sataporn
2015-07-01
Based on our research work over the last five years, this paper highlights our innovative optical sensing system that can identify and separate silkworms by gender, making it highly suitable for the sericulture industry. The key idea relies on our proposed optical-penetration concept, which, combined with simple image-processing operations, leads to high accuracy in identifying silkworm gender. Inside the system, electronic and mechanical parts control the overall system operation, process the optical signal, and separate the female from the male silkworm pupae. The current system identifies the gender of silkworm pupae with an accuracy of more than 95% at an average operating speed of 30 pupae/minute. Three of our systems are already in operation at Thailand's Queen Sirikit Sericulture Centers.
Fast and accurate search for non-coding RNA pseudoknot structures in genomes
Huang, Zhibin; Wu, Yong; Robertson, Joseph; Feng, Liang; Malmberg, Russell L.; Cai, Liming
2008-01-01
Motivation: Searching genomes for non-coding RNAs (ncRNAs) by their secondary structure has become an important goal for bioinformatics. For pseudoknot-free structures, ncRNA search can be effective based on the covariance model and CYK-type dynamic programming. However, the computational difficulty of aligning an RNA sequence to a pseudoknot has prohibited fast and accurate search of arbitrary RNA structures. Our previous work introduced a graph model for RNA pseudoknots and proposed to solve the structure-sequence alignment by graph optimization. Given k candidate regions in the target sequence for each of the n stems in the structure, we could compute a best alignment in time O(k^t n) based upon a tree decomposition of the structure graph with tree width t. However, to turn this method into programs that can routinely perform fast yet accurate RNA pseudoknot searches, we need novel heuristics to ensure that, without degrading the accuracy, only a small number of stem candidates need to be examined and a tree decomposition of small tree width can always be found for the structure graph. Results: The current work builds on the previous one with newly developed preprocessing algorithms that reduce the values of the parameters k and t, and implements the search method in a practical program, called RNATOPS, for RNA pseudoknot search. In particular, we introduce techniques, based on probabilistic profiling and distance penalty functions, that can identify for every stem just a small number k (e.g. k ≤ 10) of plausible regions in the target sequence to which the stem needs to align. We also devised a specialized tree decomposition algorithm that can yield tree decompositions of small tree width t (e.g. t ≤ 4) for almost all RNA structure graphs. Our experiments show that with RNATOPS it is possible to routinely search prokaryotic and eukaryotic genomes for specific RNA structures of medium to large sizes, including pseudoknots, with high sensitivity and high
ERIC Educational Resources Information Center
Beare, R. A.
2008-01-01
Professional astronomers use specialized software not normally available to students to determine the rotation periods of asteroids from fragmented light curve data. This paper describes a simple yet accurate method based on Microsoft Excel[R] that enables students to find periods in asteroid light curve and other discontinuous time series data of…
Pole Photogrammetry with AN Action Camera for Fast and Accurate Surface Mapping
NASA Astrophysics Data System (ADS)
Gonçalves, J. A.; Moutinho, O. F.; Rodrigues, A. C.
2016-06-01
High-resolution, high-accuracy terrain mapping can provide height change detection for studies of erosion, subsidence or land slip. A UAV flying at low altitude above the ground with a compact camera acquires images with resolution appropriate for these change detections. However, there may be situations where different approaches are needed, either because higher resolution is required or because operating a drone is not possible. Pole photogrammetry, where a camera is mounted on a pole pointing at the ground, is an alternative. This paper describes a very simple system of this kind, created for topographic change detection, based on an action camera. These cameras provide high-quality and very flexible image capture. Although radial distortion is normally high, it can be treated in an auto-calibration process. The system is composed of a light aluminium pole, 4 meters long, with a 12 megapixel GoPro camera. The average ground sampling distance at the image centre is 2.3 mm. The user moves along a path, taking successive photos with a time lapse of 0.5 or 1 second, adjusting the walking speed to obtain appropriate overlap, with enough redundancy for 3D coordinate extraction. Marked ground control points are surveyed with GNSS for precise georeferencing of the DSM and orthoimage, which are created by structure-from-motion processing software. An average vertical accuracy of 1 cm could be achieved, which is enough for many applications, for example soil erosion monitoring. The GNSS survey in RTK mode with permanent stations is now very fast (5 seconds per point), which, together with the image collection, makes for very fast field work. Since the image resolution is 1/4 cm, improved accuracy can be achieved by using a total station for the control point survey, although this increases the field work time.
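The quoted ~2.3 mm ground sampling distance is consistent with a simple nadir GSD calculation. The camera constants below are typical 12-megapixel action-camera values assumed for illustration, not figures taken from the paper.

```python
def ground_sampling_distance(height_m, focal_mm, sensor_width_mm, image_width_px):
    """Nadir ground sampling distance at the image centre: the ground
    footprint of one pixel, in metres. GSD = H * sensor_width / (f * n_px)."""
    return height_m * (sensor_width_mm / 1000.0) / ((focal_mm / 1000.0) * image_width_px)

# Assumed values: 4 m pole height, ~2.77 mm focal length,
# ~6.17 mm sensor width, 4000 px image width.
gsd = ground_sampling_distance(height_m=4.0, focal_mm=2.77,
                               sensor_width_mm=6.17, image_width_px=4000)
print(round(gsd * 1000, 2), "mm")   # prints: 2.23 mm
```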
Development of a Fast and Accurate PCRTM Radiative Transfer Model in the Solar Spectral Region
NASA Technical Reports Server (NTRS)
Liu, Xu; Yang, Qiguang; Li, Hui; Jin, Zhonghai; Wu, Wan; Kizer, Susan; Zhou, Daniel K.; Yang, Ping
2016-01-01
A fast and accurate principal component-based radiative transfer model in the solar spectral region (PCRTM-SOLAR) has been developed. The algorithm is capable of simulating reflected solar spectra in both clear sky and cloudy atmospheric conditions. Multiple scattering of the solar beam by the multilayer clouds and aerosols are calculated using a discrete ordinate radiative transfer scheme. The PCRTM-SOLAR model can be trained to simulate top-of-atmosphere radiance or reflectance spectra with spectral resolution ranging from 1 cm^{-1} resolution to a few nanometers. Broadband radiances or reflectance can also be calculated if desired. The current version of the PCRTM-SOLAR covers a spectral range from 300 to 2500 nm. The model is valid for solar zenith angles ranging from 0 to 80 deg, the instrument view zenith angles ranging from 0 to 70 deg, and the relative azimuthal angles ranging from 0 to 360 deg. Depending on the number of spectral channels, the speed of the current version of PCRTM-SOLAR is a few hundred to over one thousand times faster than the medium speed correlated-k option MODTRAN5. The absolute RMS error in channel radiance is smaller than 10^{-3} mW/cm^{2}/sr/cm^{-1} and the relative error is typically less than 0.2%.
Development of a fast and accurate PCRTM radiative transfer model in the solar spectral region.
Liu, Xu; Yang, Qiguang; Li, Hui; Jin, Zhonghai; Wu, Wan; Kizer, Susan; Zhou, Daniel K; Yang, Ping
2016-10-10
A fast and accurate principal component-based radiative transfer model in the solar spectral region (PCRTM-SOLAR) has been developed. The algorithm is capable of simulating reflected solar spectra in both clear sky and cloudy atmospheric conditions. Multiple scattering of the solar beam by the multilayer clouds and aerosols are calculated using a discrete ordinate radiative transfer scheme. The PCRTM-SOLAR model can be trained to simulate top-of-atmosphere radiance or reflectance spectra with spectral resolution ranging from 1 cm^{-1} resolution to a few nanometers. Broadband radiances or reflectance can also be calculated if desired. The current version of the PCRTM-SOLAR covers a spectral range from 300 to 2500 nm. The model is valid for solar zenith angles ranging from 0 to 80 deg, the instrument view zenith angles ranging from 0 to 70 deg, and the relative azimuthal angles ranging from 0 to 360 deg. Depending on the number of spectral channels, the speed of the current version of PCRTM-SOLAR is a few hundred to over one thousand times faster than the medium speed correlated-k option MODTRAN5. The absolute RMS error in channel radiance is smaller than 10^{-3} mW/cm^{2}/sr/cm^{-1} and the relative error is typically less than 0.2%.
PRIMAL: Fast and accurate pedigree-based imputation from sequence data in a founder population.
Livne, Oren E; Han, Lide; Alkorta-Aranburu, Gorka; Wentworth-Sheilds, William; Abney, Mark; Ober, Carole; Nicolae, Dan L
2015-03-01
Founder populations and large pedigrees offer many well-known advantages for genetic mapping studies, including cost-efficient study designs. Here, we describe PRIMAL (PedigRee IMputation ALgorithm), a fast and accurate pedigree-based phasing and imputation algorithm for founder populations. PRIMAL incorporates both existing and original ideas, such as a novel indexing strategy of Identity-By-Descent (IBD) segments based on clique graphs. We were able to impute the genomes of 1,317 South Dakota Hutterites, who had genome-wide genotypes for ~300,000 common single nucleotide variants (SNVs), from 98 whole genome sequences. Using a combination of pedigree-based and LD-based imputation, we were able to assign 87% of genotypes with >99% accuracy over the full range of allele frequencies. Using the IBD cliques we were also able to infer the parental origin of 83% of alleles, and genotypes of deceased recent ancestors for whom no genotype information was available. This imputed data set will enable us to better study the relative contribution of rare and common variants on human phenotypes, as well as parental origin effect of disease risk alleles in >1,000 individuals at minimal cost.
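The clique-based IBD indexing can be pictured with a toy propagation step: all haplotypes in one IBD clique share the segment identically by descent, so a single sequenced member determines the allele for the rest. A hypothetical sketch of the idea, not PRIMAL's actual data structures:

```python
def impute_from_clique(genotypes, clique):
    """Propagate an allele through an IBD clique: every haplotype in the
    clique carries the same inherited segment, so one known (sequenced)
    member fixes the allele for all unknown members. `genotypes` maps
    haplotype id -> allele or None (unknown)."""
    known = [genotypes[i] for i in clique if genotypes[i] is not None]
    if not known:
        return genotypes          # nothing to propagate from
    allele = known[0]
    for i in clique:
        genotypes[i] = allele
    return genotypes

# Haplotypes 1 and 4 are sequenced; 2 and 3 are imputed via the clique.
g = {1: "A", 2: None, 3: None, 4: "A"}
assert impute_from_clique(g, clique=[1, 2, 3])[2] == "A"
```

In the real algorithm the cliques are per-segment and genome-wide, and conflicts among sequenced members would need resolution; this sketch shows only the propagation principle.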
Wang, Tianyun; Lu, Xinfei; Yu, Xiaofei; Xi, Zhendong; Chen, Weidong
2014-03-26
In recent years, various applications involving sparse continuous signal recovery, such as source localization, radar imaging, and communication channel estimation, have been addressed from the perspective of compressive sensing (CS) theory. However, two major defects need to be tackled in any practical utilization. The first is the off-grid problem, caused by the basis mismatch between arbitrarily located unknowns and the pre-specified dictionary, which makes conventional CS reconstruction methods degrade considerably. The second is the urgent demand for low-complexity algorithms, especially when faced with the requirement of real-time implementation. In this paper, to deal with these two problems, we present three fast and accurate sparse reconstruction algorithms, termed HR-DCD, Hlog-DCD and Hlp-DCD, which are based on homotopy, dichotomous coordinate descent (DCD) iterations and non-convex regularizations, combined with a grid refinement technique. Experimental results are provided to demonstrate the effectiveness of the proposed algorithms and related analysis.
Linaro, Daniele; Storace, Marco; Giugliano, Michele
2011-03-01
Stochastic channel gating is the major source of intrinsic neuronal noise, whose functional consequences at the microcircuit and network levels have been only partly explored. A systematic study of this channel noise in large ensembles of biophysically detailed model neurons calls for the availability of fast numerical methods. In fact, exact techniques employ the microscopic simulation of the random opening and closing of individual ion channels, usually based on Markov models, whose computational loads are prohibitive for next generation massive computer models of the brain. In this work, we operatively define a procedure for translating any Markov model describing voltage- or ligand-gated membrane ion-conductances into an effective stochastic version, whose computer simulation is efficient, without compromising accuracy. Our approximation is based on an improved Langevin-like approach, which employs stochastic differential equations and no Monte Carlo methods. As opposed to an earlier proposal recently debated in the literature, our approximation accurately reproduces the statistical properties of the exact microscopic simulations, under a variety of conditions, from spontaneous to evoked response features. In addition, our method is not restricted to the Hodgkin-Huxley sodium and potassium currents and is general for a variety of voltage- and ligand-gated ion currents. As a by-product, the analysis of the properties emerging in exact Markov schemes by standard probability calculus enables us, for the first time, to analytically identify the sources of inaccuracy of the previous proposal, while providing solid ground for the modification and improvement we present here.
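A minimal Euler-Maruyama sketch of a Langevin-type channel-noise approximation for N identical two-state channels (a generic Fox-Lu-style scheme, not the improved approach of this abstract; rates and channel count are illustrative):

```python
import math
import random

def simulate_open_fraction(alpha, beta, n_channels, dt, n_steps, seed=1):
    """Effective Langevin approximation to N two-state channels with
    opening rate alpha and closing rate beta.  x is the open fraction;
    the noise variance scales as 1/N, so large ensembles are cheap to
    simulate compared with per-channel Markov (Monte Carlo) updates."""
    random.seed(seed)
    x = alpha / (alpha + beta)            # start at the deterministic steady state
    total = 0.0
    for _ in range(n_steps):
        drift = alpha * (1.0 - x) - beta * x
        diff = math.sqrt(max(alpha * (1.0 - x) + beta * x, 0.0) / n_channels)
        x += drift * dt + diff * math.sqrt(dt) * random.gauss(0.0, 1.0)
        x = min(max(x, 0.0), 1.0)         # keep the fraction physical
        total += x
    return total / n_steps

mean_open = simulate_open_fraction(0.5, 0.5, 1000, 0.01, 100000)
```

The long-run mean open fraction should approach alpha/(alpha + beta) = 0.5, which a per-channel Markov simulation would also give, at far higher cost.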
SMARTIES: User-friendly codes for fast and accurate calculations of light scattering by spheroids
NASA Astrophysics Data System (ADS)
Somerville, W. R. C.; Auguié, B.; Le Ru, E. C.
2016-05-01
We provide a detailed user guide for SMARTIES, a suite of MATLAB codes for the calculation of the optical properties of oblate and prolate spheroidal particles, with capabilities and ease of use comparable to those of Mie theory for spheres. SMARTIES is a MATLAB implementation of an improved T-matrix algorithm for the theoretical modelling of electromagnetic scattering by particles of spheroidal shape. The theory behind the improvements in numerical accuracy and convergence is briefly summarized, with reference to the original publications. Instructions for use, a detailed description of the code structure, its range of applicability, and guidelines for further developments by advanced users are discussed in separate sections of this user guide. The code may be useful to researchers seeking a fast, accurate and reliable tool to simulate the near-field and far-field optical properties of elongated particles, but will also appeal to other developers of light-scattering software seeking a reliable benchmark for non-spherical particles with a challenging aspect ratio and/or refractive index contrast.
Visell, Yon
2015-04-01
This paper proposes a fast, physically accurate method for synthesizing multimodal, acoustic and haptic, signatures of distributed fracture in quasi-brittle heterogeneous materials, such as wood, granular media, or other fiber composites. Fracture processes in these materials are challenging to simulate with existing methods, due to the prevalence of large numbers of disordered, quasi-random spatial degrees of freedom, representing the complex physical state of a sample over the geometric volume of interest. Here, I develop an algorithm for simulating such processes, building on a class of statistical lattice models of fracture that have been widely investigated in the physics literature. This algorithm is enabled through a recently published mathematical construction based on the inverse transform method of random number sampling. It yields a purely time domain stochastic jump process representing stress fluctuations in the medium. The latter can be readily extended by a mean field approximation that captures the averaged constitutive (stress-strain) behavior of the material. Numerical simulations and interactive examples demonstrate the ability of these algorithms to generate physically plausible acoustic and haptic signatures of fracture in complex, natural materials interactively at audio sampling rates.
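The inverse transform method mentioned above can be illustrated on the simplest jump process: exponentially distributed waiting times between jumps, obtained by inverting the CDF (a generic sketch, not the paper's specific construction for fracture avalanches):

```python
import math
import random

def sample_waiting_times(lam, n, seed=42):
    """Inverse transform sampling: if U ~ Uniform(0, 1), then
    -ln(1 - U) / lam follows the exponential distribution with rate lam,
    i.e. the waiting time between events of a Poisson jump process.
    A time-domain jump process is then built by accumulating these times."""
    random.seed(seed)
    return [-math.log(1.0 - random.random()) / lam for _ in range(n)]

times = sample_waiting_times(lam=2.0, n=200000)
mean_wait = sum(times) / len(times)   # should approach 1/lam = 0.5
```

For fracture synthesis, the same inversion idea is applied to the (non-exponential) avalanche statistics of the lattice model, yielding stress-fluctuation jumps directly at audio rates.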
Doulaverakis, Charalampos; Tsampoulatidis, Ioannis; Antoniadis, Antonios P; Chatzizisis, Yiannis S; Giannopoulos, Andreas; Kompatsiaris, Ioannis; Giannoglou, George D
2013-11-01
There is an ongoing research and clinical interest in the development of reliable and easily accessible software for the 3D reconstruction of coronary arteries. In this work, we present the architecture and validation of IVUSAngio Tool, an application which performs fast and accurate 3D reconstruction of the coronary arteries by using intravascular ultrasound (IVUS) and biplane angiography data. The 3D reconstruction is based on the fusion of the detected arterial boundaries in IVUS images with the 3D IVUS catheter path derived from the biplane angiography. The IVUSAngio Tool suite integrates all the intermediate processing and computational steps and provides a user-friendly interface. It also offers additional functionality, such as automatic selection of the end-diastolic IVUS images, semi-automatic and automatic IVUS segmentation, vascular morphometric measurements, graphical visualization of the 3D model and export in a format compatible with other computer-aided design applications. Our software was applied and validated in 31 human coronary arteries yielding quite promising results. Collectively, the use of IVUSAngio Tool significantly reduces the total processing time for 3D coronary reconstruction. IVUSAngio Tool is distributed as free software, publicly available to download and use.
Wang, Tianyun; Lu, Xinfei; Yu, Xiaofei; Xi, Zhendong; Chen, Weidong
2014-01-01
In recent years, various applications involving sparse continuous signal recovery, such as source localization, radar imaging, and communication channel estimation, have been addressed from the perspective of compressive sensing (CS) theory. However, two major defects need to be tackled in any practical utilization. The first is the off-grid problem, caused by the basis mismatch between arbitrarily located unknowns and the pre-specified dictionary, which degrades conventional CS reconstruction methods considerably. The second is the urgent demand for low-complexity algorithms, especially when real-time implementation is required. In this paper, to deal with these two problems, we present three fast and accurate sparse reconstruction algorithms, termed HR-DCD, Hlog-DCD and Hlp-DCD, which are based on homotopy, dichotomous coordinate descent (DCD) iterations, and non-convex regularizations, combined with a grid refinement technique. Experimental results and related analysis demonstrate the effectiveness of the proposed algorithms. PMID:24675758
FAPI: Fast and accurate P-value Imputation for genome-wide association study.
Kwan, Johnny S H; Li, Miao-Xin; Deng, Jia-En; Sham, Pak C
2016-05-01
Imputing individual-level genotypes (or genotype imputation) is now a standard procedure in genome-wide association studies (GWAS) to examine disease associations at untyped common genetic variants. Meta-analysis of publicly available GWAS summary statistics can allow more disease-associated loci to be discovered, but these data are usually provided for various variant sets. Thus, imputing these summary statistics of different variant sets into a common reference panel for meta-analyses is impossible using traditional genotype imputation methods. Here we develop a fast and accurate P-value imputation (FAPI) method that utilizes summary statistics of common variants only. Its computational cost is linear in the number of untyped variants, and it achieves accuracy similar to that of IMPUTE2 with prephasing, one of the leading methods in genotype imputation. In addition, based on the FAPI idea, we develop a metric to detect abnormal association at a variant and show that it has significantly greater power than LD-PAC, a method that quantifies the evidence of spurious associations based on likelihood ratios. Our method is implemented in a user-friendly software tool, which is available at http://statgenpro.psychiatry.hku.hk/fapi.
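The core idea behind summary-statistic imputation can be sketched as a conditional-normal prediction of an untyped variant's z-score from typed z-scores and an LD (correlation) matrix; FAPI's exact weighting and regularization are not reproduced here, and the numbers below are toy values:

```python
def impute_z(z_typed, R_tt, r_ut):
    """Impute the z-score of an untyped variant from two typed variants:
    E[z_u | z_t] = r_ut @ inv(R_tt) @ z_t under a multivariate normal
    model for z-scores (the general idea behind summary-statistic
    imputation; 2x2 case only, inverted in closed form)."""
    (a, b), (c, d) = R_tt
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    w = [r_ut[0] * inv[0][0] + r_ut[1] * inv[1][0],
         r_ut[0] * inv[0][1] + r_ut[1] * inv[1][1]]
    return w[0] * z_typed[0] + w[1] * z_typed[1]

# Perfect LD with the first typed variant reproduces its z-score.
z_u = impute_z([4.2, 1.0], [[1.0, 0.2], [0.2, 1.0]], [1.0, 0.2])
print(round(z_u, 6))  # -> 4.2
```

A P-value then follows from the imputed z-score via the normal CDF, which is why only summary statistics, never individual genotypes, are needed.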
NASA Astrophysics Data System (ADS)
Wu, Su-Yong; Long, Xing-Wu; Yang, Kai-Yong
2009-09-01
To overcome the low speed and poor efficiency of multilayer optical coating design when the number of layers is large, accurate and fast computation of the merit function's gradient and Hessian matrix is required. Based on the matrix method for calculating the spectral properties of a multilayer optical coating, an analytic model is established theoretically, and the corresponding accurate and fast computation is achieved by programming in Matlab. Theoretical and simulated results indicate that this model is mathematically strict and accurate; its precision is limited only by the computer's floating-point arithmetic, and it runs quickly. Thus it is very well suited to improving the search speed and efficiency of local optimization methods based on the derivatives of the merit function, and it performs outstandingly in multilayer optical coating design with a large number of layers.
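The matrix method this abstract builds on can be sketched as follows; the merit-function gradient and Hessian would then be obtained by differentiating these matrix products (indices and materials below are illustrative):

```python
import cmath
import math

def reflectance(n0, layers, ns, wavelength):
    """Characteristic-matrix method for a multilayer coating at normal
    incidence.  layers = [(refractive_index, physical_thickness), ...].
    Each layer contributes a 2x2 characteristic matrix; the stack is
    their ordered product, from which the amplitude reflectance follows."""
    M = [[1.0, 0.0], [0.0, 1.0]]
    for n, d in layers:
        delta = 2.0 * math.pi * n * d / wavelength   # phase thickness
        c, s = cmath.cos(delta), cmath.sin(delta)
        Ml = [[c, 1j * s / n], [1j * n * s, c]]
        M = [[M[0][0] * Ml[0][0] + M[0][1] * Ml[1][0],
              M[0][0] * Ml[0][1] + M[0][1] * Ml[1][1]],
             [M[1][0] * Ml[0][0] + M[1][1] * Ml[1][0],
              M[1][0] * Ml[0][1] + M[1][1] * Ml[1][1]]]
    B = M[0][0] + M[0][1] * ns
    C = M[1][0] + M[1][1] * ns
    r = (n0 * B - C) / (n0 * B + C)
    return abs(r) ** 2

# Single quarter-wave MgF2 layer (n = 1.38) on glass (n = 1.52):
# R = ((n0*ns - n1^2) / (n0*ns + n1^2))^2, about 1.26%.
R = reflectance(1.0, [(1.38, 550.0 / (4 * 1.38))], 1.52, 550.0)
print(round(R, 4))  # -> 0.0126
```

Because each layer enters the product through simple trigonometric factors, the derivative of R with respect to any layer thickness is itself a product of matrices, which is exactly what makes analytic gradients and Hessians fast for designs with many layers.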
Cunha, Pricila da Silva; Pena, Heloisa B; D'Angelo, Carla Sustek; Koiffmann, Celia P; Rosenfeld, Jill A; Shaffer, Lisa G; Stofanko, Martin; Gonçalves-Dornelas, Higgor; Pena, Sérgio Danilo Junho
2014-01-01
Monosomy 1p36 is considered the most common subtelomeric deletion syndrome in humans and it accounts for 0.5-0.7% of all the cases of idiopathic intellectual disability. The molecular diagnosis is often made by microarray-based comparative genomic hybridization (aCGH), which has the drawback of being a high-cost technique. However, patients with classic monosomy 1p36 share some typical clinical characteristics that, together with its common prevalence, justify the development of a less expensive, targeted diagnostic method. In this study, we developed a simple, rapid, and inexpensive real-time quantitative PCR (qPCR) assay for targeted diagnosis of monosomy 1p36, easily accessible for low-budget laboratories in developing countries. For this, we have chosen two target genes which are deleted in the majority of patients with monosomy 1p36: PRKCZ and SKI. In total, 39 patients previously diagnosed with monosomy 1p36 by aCGH, fluorescent in situ hybridization (FISH), and/or multiplex ligation-dependent probe amplification (MLPA) all tested positive on our qPCR assay. By simultaneously using these two genes we have been able to detect 1p36 deletions with 100% sensitivity and 100% specificity. We conclude that qPCR of PRKCZ and SKI is a fast and accurate diagnostic test for monosomy 1p36, costing less than 10 US dollars in reagent costs.
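The dosage logic of such a qPCR deletion assay can be sketched with the standard 2^-ddCt calculation (illustrative Ct values; the published assay's exact reference gene and cutoffs are not reproduced here):

```python
def relative_copy_number(ct_target_patient, ct_ref_patient,
                         ct_target_control, ct_ref_control):
    """Relative copy number by the 2^-ddCt method: a heterozygous
    deletion of the target (e.g. PRKCZ or SKI in monosomy 1p36) gives
    a ratio near 0.5, a normal sample near 1.0.  Assumes ~100% PCR
    efficiency for both target and reference amplicons."""
    ddct = (ct_target_patient - ct_ref_patient) - (ct_target_control - ct_ref_control)
    return 2.0 ** (-ddct)

# The deleted target crosses threshold one cycle later -> half the copies.
ratio = relative_copy_number(26.0, 24.0, 25.0, 24.0)
print(ratio)  # -> 0.5
```

Running the same calculation for both target genes, as the assay does, guards against a false call from a rare polymorphism under a single primer site.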
Accurate, Fast and Cost-Effective Diagnostic Test for Monosomy 1p36 Using Real-Time Quantitative PCR
Cunha, Pricila da Silva; Pena, Heloisa B.; D'Angelo, Carla Sustek; Koiffmann, Celia P.; Rosenfeld, Jill A.; Shaffer, Lisa G.; Stofanko, Martin; Gonçalves-Dornelas, Higgor; Pena, Sérgio Danilo Junho
2014-01-01
Monosomy 1p36 is considered the most common subtelomeric deletion syndrome in humans and it accounts for 0.5–0.7% of all the cases of idiopathic intellectual disability. The molecular diagnosis is often made by microarray-based comparative genomic hybridization (aCGH), which has the drawback of being a high-cost technique. However, patients with classic monosomy 1p36 share some typical clinical characteristics that, together with its common prevalence, justify the development of a less expensive, targeted diagnostic method. In this study, we developed a simple, rapid, and inexpensive real-time quantitative PCR (qPCR) assay for targeted diagnosis of monosomy 1p36, easily accessible for low-budget laboratories in developing countries. For this, we have chosen two target genes which are deleted in the majority of patients with monosomy 1p36: PRKCZ and SKI. In total, 39 patients previously diagnosed with monosomy 1p36 by aCGH, fluorescent in situ hybridization (FISH), and/or multiplex ligation-dependent probe amplification (MLPA) all tested positive on our qPCR assay. By simultaneously using these two genes we have been able to detect 1p36 deletions with 100% sensitivity and 100% specificity. We conclude that qPCR of PRKCZ and SKI is a fast and accurate diagnostic test for monosomy 1p36, costing less than 10 US dollars in reagent costs. PMID:24839341
dos Santos, Marcus V P; Proenza, Yaicel G; Longo, Ricardo L
2014-09-07
The generalization of the PICVib approach [M. V. P. dos Santos et al., J. Comput. Chem., 2013, 34, 611] for calculating infrared intensities is shown to be successful and to preserve all the attractive features of the procedure, such as ease of implementation and parallelization, flexibility, and the treatment of large systems at high theoretical levels. It was tested and validated for very diverse molecular systems: XH3 (D3h), YH4 (D4h), conformers of RDX, S(N)2 and E2 reaction product complexes, the [W(dppe)2(NNC5H10)] complex, carbon nanotubes, and hydrogen-bonded complexes (H2O···HOH, MeHO···HOH, MeOH···OH2, MeOH···OHMe) including the guanine-cytosine pair. PICVib shows excellent overall performance for calculating infrared intensities of localized normal modes and even mixed vibrations, whereas care must be taken for vibrations involving intermolecular interactions. DFT functionals are still the best combination with high-level ab initio methods such as CCSD and CCSD(T).
NASA Astrophysics Data System (ADS)
Rensonnet, Gaëtan; Jacobs, Damien; Macq, Benoît.; Taquet, Maxime
2016-03-01
Diffusion-weighted magnetic resonance imaging (DW-MRI) is a powerful tool to probe the diffusion of water through tissues. Through the application of magnetic gradients of appropriate direction, intensity and duration, constituting the acquisition parameters, information can be retrieved about the underlying microstructural organization of the brain. In this context, an important and open question is to determine an optimal sequence of such acquisition parameters for a specific purpose. The use of simulated DW-MRI data for a given microstructural configuration provides a convenient and efficient way to address this problem. We first present a novel hybrid method for the synthetic simulation of DW-MRI signals that combines analytic expressions in simple geometries such as spheres and cylinders with Monte Carlo (MC) simulations elsewhere. Our hybrid method remains valid for any acquisition parameters and provides identical levels of accuracy with a computational time that is 90% shorter than that required by MC simulations for commonly-encountered microstructural configurations. We apply our novel simulation technique to estimate the radius of axons under various noise levels with different acquisition protocols commonly used in the literature. The results of our comparison suggest that protocols favoring a large number of gradient intensities, such as Cube and Sphere (CUSP) imaging, provide more accurate radius estimation than conventional single-shell HARDI acquisitions for an identical acquisition time.
Maurício, R; Amaral, L; Santos Coelho, P; Santana, F
2013-12-01
Biofilms are present in several areas and are studied in microbiology, the medical sciences, biology and, of course, sanitary engineering. Biofilms are used for the treatment of municipal wastewater, and their application predates the invention of the activated sludge process. The main objective of this work was to develop a simple, fast and low-cost technique to evaluate the nature of the first decay in the concentration of an organic compound in the presence of a solid material. Though simple, the technique developed makes it possible to clarify whether the initial concentration decay is due to adsorption to the support material or is a result of biodegradation. The results show that, with two different support materials, adsorption does not take place, and biodegradation processes are responsible for the first decay in the organic concentration. The technique offers a fast and low-cost way of studying the existence of adsorption. Two feed concentration solutions and two different support materials were used.
NASA Astrophysics Data System (ADS)
Westendorp, Hendrik; Nuver, Tonnis T.; Moerland, Marinus A.; Minken, André W.
2015-10-01
The geometry of a permanent prostate implant varies over time. Seeds can migrate, and edema of the prostate affects the position of seeds. Seed movements directly influence dosimetry, which relates to treatment quality. We present a method that tracks all individual seeds over time, allowing quantification of seed movements. This linking procedure was tested on transrectal ultrasound (TRUS) and cone-beam CT (CBCT) datasets of 699 patients. These datasets were acquired intraoperatively during a dynamic implantation procedure that combines both imaging modalities. The procedure was subdivided into four automatic linking steps. (I) The Hungarian algorithm was applied to initially link seeds in CBCT and the corresponding TRUS datasets. (II) Strands were identified and optimized based on curvature and line fits: non-optimal links were removed. (III) The positions of unlinked seeds were reviewed and were linked to incomplete strands if within curvature and distance thresholds. (IV) Finally, seeds close to strands were linked, even if the curvature threshold was violated. After linking the seeds an affine transformation was applied. The procedure was repeated until the results were stable or the 6th iteration ended. All results were visually reviewed for mismatches and uncertainties. Eleven implants showed a mismatch and in 12 cases an uncertainty was identified. On average the linking procedure took 42 ms per case. This accurate and fast method has the potential to be used for other time spans, like Day 30, and other imaging modalities. It can potentially be used during a dynamic implantation procedure to evaluate the quality of the permanent prostate implant faster and better.
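Step (I) above is a minimum-cost assignment problem. A brute-force toy version of it, linking CBCT seeds to TRUS seeds by a distance matrix (the real procedure uses the Hungarian algorithm, which solves the same problem in polynomial time):

```python
from itertools import permutations

def best_assignment(cost):
    """Minimum-cost one-to-one assignment by exhaustive search over
    permutations.  cost[i][j] is the distance between CBCT seed i and
    TRUS seed j; fine for a toy illustration, factorial in n."""
    n = len(cost)
    best_total, best_perm = float("inf"), None
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_total:
            best_total, best_perm = total, perm
    return best_total, list(best_perm)

# Toy distances in mm: each CBCT seed is closest to its true TRUS partner.
cost = [[0.2, 5.1, 7.3],
        [4.9, 0.4, 6.0],
        [6.8, 5.5, 0.3]]
total, match = best_assignment(cost)
print(match)  # -> [0, 1, 2]
```

Steps (II)-(IV) then act as filters on this matching, accepting or rejecting links based on strand curvature and distance thresholds.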
NINJA-OPS: Fast Accurate Marker Gene Alignment Using Concatenated Ribosomes
Al-Ghalith, Gabriel A.; Montassier, Emmanuel; Ward, Henry N.; Knights, Dan
2016-01-01
The explosion of bioinformatics technologies in the form of next generation sequencing (NGS) has facilitated a massive influx of genomics data in the form of short reads. Short read mapping is therefore a fundamental component of next generation sequencing pipelines which routinely match these short reads against reference genomes for contig assembly. However, such techniques have seldom been applied to microbial marker gene sequencing studies, which have mostly relied on novel heuristic approaches. We propose NINJA Is Not Just Another OTU-Picking Solution (NINJA-OPS, or NINJA for short), a fast and highly accurate novel method enabling reference-based marker gene matching (picking Operational Taxonomic Units, or OTUs). NINJA takes advantage of the Burrows-Wheeler (BW) alignment using an artificial reference chromosome composed of concatenated reference sequences, the “concatesome,” as the BW input. Other features include automatic support for paired-end reads with arbitrary insert sizes. NINJA is also free and open source and implements several pre-filtering methods that elicit substantial speedup when coupled with existing tools. We applied NINJA to several published microbiome studies, obtaining accuracy similar to or better than previous reference-based OTU-picking methods while achieving an order of magnitude or more speedup and using a fraction of the memory footprint. NINJA is a complete pipeline that takes a FASTA-formatted input file and outputs a QIIME-formatted taxonomy-annotated BIOM file for an entire MiSeq run of human gut microbiome 16S genes in under 10 minutes on a dual-core laptop. PMID:26820746
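The "concatesome" trick can be sketched as follows: concatenate the reference sequences into one artificial chromosome, remember each sequence's offset, and map alignment coordinates back to individual references (a toy illustration with hypothetical sequence names; NINJA's actual padding scheme and Burrows-Wheeler machinery are not reproduced):

```python
def build_concatesome(refs, pad="N" * 10):
    """Concatenate reference sequences into one artificial chromosome
    and record each sequence's start offset, so a hit position on the
    concatesome maps back to (ref_id, local_pos).  The padding keeps
    an alignment from spanning two adjacent references."""
    offsets, chunks, pos = [], [], 0
    for ref_id, seq in refs:
        offsets.append((pos, ref_id))
        chunks.append(seq)
        pos += len(seq) + len(pad)
        chunks.append(pad)
    return "".join(chunks), offsets

def map_back(hit_pos, offsets):
    """Translate a concatesome coordinate into (ref_id, local coordinate)."""
    start, ref = offsets[0]
    for s, r in offsets:
        if s <= hit_pos:
            start, ref = s, r
        else:
            break
    return ref, hit_pos - start

refs = [("otu1", "ACGTACGT"), ("otu2", "GGGGCCCC")]
concat, offsets = build_concatesome(refs)
print(map_back(20, offsets))  # -> ('otu2', 2)
```

A single BW index over the concatenated string then lets one aligner call serve the whole marker-gene database, with OTU assignment reduced to this offset lookup.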
Boyle, John J.; Kume, Maiko; Wyczalkowski, Matthew A.; Taber, Larry A.; Pless, Robert B.; Xia, Younan; Genin, Guy M.; Thomopoulos, Stavros
2014-01-01
When mechanical factors underlie growth, development, disease or healing, they often function through local regions of tissue where deformation is highly concentrated. Current optical techniques to estimate deformation can lack precision and accuracy in such regions due to challenges in distinguishing a region of concentrated deformation from an error in displacement tracking. Here, we present a simple and general technique for improving the accuracy and precision of strain estimation and an associated technique for distinguishing a concentrated deformation from a tracking error. The strain estimation technique improves accuracy relative to other state-of-the-art algorithms by directly estimating strain fields without first estimating displacements, resulting in a very simple method and low computational cost. The technique for identifying local elevation of strain enables for the first time the successful identification of the onset and consequences of local strain concentrating features such as cracks and tears in a highly strained tissue. We apply these new techniques to demonstrate a novel hypothesis in prenatal wound healing. More generally, the analytical methods we have developed provide a simple tool for quantifying the appearance and magnitude of localized deformation from a series of digital images across a broad range of disciplines. PMID:25165601
A Simple and Efficient Parallel Implementation of the Fast Marching Method
NASA Astrophysics Data System (ADS)
Yang, Jianming; Stern, Frederick
2011-11-01
The fast marching method is a widely used numerical method for solving the Eikonal equation arising from a variety of applications. However, this method is inherently serial and does not lend itself to straightforward parallelization. In this study, we present a simple and efficient algorithm for the parallel implementation of the fast marching method using a domain decomposition approach. Properties of the Eikonal equation are explored to greatly relax the serial interdependence of neighboring sub-domains. Overlapping sub-domains are employed to reduce communication overhead and improve parallelism among sub-domains. There are no iterative procedures or rollback operations involved in the present algorithm, and the changes to the serial version of the fast marching method are minimized. Examples are performed to demonstrate the efficiency of our parallel fast marching method. This study was supported by ONR.
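For reference, the serial baseline being parallelized looks like this: a heap-driven fast marching sweep solving |grad T| = 1 from a point source on a uniform grid, with first-order upwind updates (a generic sketch, not the paper's code):

```python
import heapq
import math

def fast_marching(n, source, h=1.0):
    """Serial first-order fast marching for |grad T| = 1 on an n x n
    grid with a point source; only accepted ('frozen') values are used
    upwind.  This sequential front sweep is what domain decomposition
    approaches split across sub-domains."""
    INF = float("inf")
    T = [[INF] * n for _ in range(n)]
    frozen = [[False] * n for _ in range(n)]
    T[source[0]][source[1]] = 0.0
    heap = [(0.0, source)]
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if frozen[i][j]:
            continue
        frozen[i][j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if not (0 <= a < n and 0 <= b < n) or frozen[a][b]:
                continue
            # Upwind (accepted-only) neighbour values along x and y
            tx = min(T[a - 1][b] if a > 0 and frozen[a - 1][b] else INF,
                     T[a + 1][b] if a + 1 < n and frozen[a + 1][b] else INF)
            ty = min(T[a][b - 1] if b > 0 and frozen[a][b - 1] else INF,
                     T[a][b + 1] if b + 1 < n and frozen[a][b + 1] else INF)
            lo, hi = min(tx, ty), max(tx, ty)
            if hi - lo >= h:                 # one-sided update
                t_new = lo + h
            else:                            # two-sided quadratic update
                t_new = 0.5 * (lo + hi + math.sqrt(2 * h * h - (hi - lo) ** 2))
            if t_new < T[a][b]:
                T[a][b] = t_new
                heapq.heappush(heap, (t_new, (a, b)))
    return T

T = fast_marching(8, (0, 0))
print(T[0][5])  # exact along a grid axis: 5.0
```

The causality of the Eikonal solution (values only ever grow along the front) is the property the paper exploits to let overlapping sub-domains march nearly independently.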
Simple yet accurate noncontact device for measuring the radius of curvature of a spherical mirror
Spiridonov, Maxim; Toebaert, David
2006-09-10
An easily reproducible device is demonstrated to be capable of measuring the radii of curvature of spherical mirrors, both convex and concave, without resorting to high-end interferometric or tactile devices. The former are too elaborate for our purposes, and the latter cannot be used due to the delicate nature of the coatings applied to mirrors used in high-power CO2 laser applications. The proposed apparatus is accurate enough to be useful to anyone using curved optics and needing a quick way to assess the values of the radii of curvature, be it for incoming quality control or troubleshooting an apparently malfunctioning optical system. Specifically, the apparatus was designed for checking 50 mm diameter resonator (typically flat or tens of meters concave) and telescope (typically some meters convex and concave) mirrors for a high-power CO2 laser, but it can easily be adapted to any other type of spherical mirror by a straightforward resizing.
Fast and accurate techniques of treating the radiative transfer problem under cloudy conditions
NASA Astrophysics Data System (ADS)
Efremenko, Dmitry; Doicu, Adrian; Trautmann, Thomas; Loyola, Diego
As a massive amount of spectral information is expected from the new generation of European atmospheric sensors Sentinel 5 Precursor, Sentinel 4 and Sentinel 5, fast processing of the data in the UV-VIS spectral domain is required. Trace gas retrievals from nadir sounding instruments are hindered by the presence of clouds. Our research is focused on developing a robust and accurate algorithm for treating clouds in radiative transfer models (RTM). For this reason we have implemented an acceleration technique based on dimensionality reduction algorithms, obtaining a speed-up of about a factor of eight. For operational reasons clouds can be considered as an optically homogeneous layer. In the independent pixel approximation, radiative transfer computations involving cloudy scenes require two separate calls to the RTM: one for a clear-sky scenario, the other for an atmosphere containing clouds. We present two novel methods for RTM performance enhancement with particular application to trace gas retrievals under cloudy conditions. Both methods are based on reusing results from clear-sky RTM calculations to speed up the corresponding calculations for the cloud-filled scenario. Also, for satellite instruments with a high spatial resolution, it is important to account for sub-pixel cloud inhomogeneities, or at least to assess their effect on the radiances at the top of the atmosphere and, in particular, on the retrieval results. This assessment is probabilistic, since the detailed structure of the clouds is unknown and only a small number of statistical properties are given. In this regard, we have designed a stochastic model for the solar radiation problem and a molecular atmosphere with its underlying surface. The model allows the computation of the mean radiance at the top of the atmosphere as it is intended to be used for trace gas retrievals. The efficiency of the stochastic model is lower, because we have to solve a two-dimensional problem
Secular Orbit Evolution in Systems with a Strong External Perturber - A Simple and Accurate Model
NASA Astrophysics Data System (ADS)
Andrade-Ines, Eduardo; Eggl, Siegfried
2017-04-01
We present a semi-analytical correction to the seminal solution of Heppenheimer for the secular motion of a planet's orbit under the gravitational influence of an external perturber. A comparison between analytical predictions and numerical simulations allows us to determine corrective factors for the secular frequency and forced eccentricity in the coplanar restricted three-body problem. The correction is given in the form of a polynomial function of the system's parameters that can be applied to first-order forced eccentricity and secular frequency estimates. The resulting secular equations are simple, straightforward to use, and improve the fidelity of Heppenheimer's solution well beyond higher-order models. The quality and convergence of the corrected secular equations are tested for a wide range of parameters, and the limits of their applicability are given.
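The first-order estimate being corrected can be written down directly; the sketch below uses the widely quoted Heppenheimer forced-eccentricity formula, while the paper's polynomial corrective factors are not reproduced:

```python
def heppenheimer_forced_ecc(a_ratio, e_perturber):
    """First-order (Heppenheimer) forced eccentricity for a planet at
    semi-major-axis ratio a/a_b around one component of a binary whose
    perturber has eccentricity e_b:

        e_F = (5/4) * (a / a_b) * e_b / (1 - e_b**2)

    The semi-analytical correction of the paper multiplies estimates of
    this kind by a fitted polynomial in the system parameters, whose
    coefficients are not reproduced here."""
    return 1.25 * a_ratio * e_perturber / (1.0 - e_perturber ** 2)

e_f = heppenheimer_forced_ecc(a_ratio=0.1, e_perturber=0.3)
print(round(e_f, 4))  # -> 0.0412
```

The formula already shows the qualitative behaviour the correction refines: the forced eccentricity grows with the orbit ratio and diverges as the perturber's eccentricity approaches unity.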
A simple and accurate algorithm for path integral molecular dynamics with the Langevin thermostat.
Liu, Jian; Li, Dezhang; Liu, Xinzijian
2016-07-14
We introduce a novel simple algorithm for thermostatting path integral molecular dynamics (PIMD) with the Langevin equation. The staging transformation of path integral beads is employed for demonstration. The optimum friction coefficients for the staging modes in the free particle limit are used for all systems. In comparison to the path integral Langevin equation thermostat, the new algorithm exploits a different order of splitting for the phase space propagator associated with the Langevin equation. While the error analysis is made for both algorithms, they are also employed in the PIMD simulations of three realistic systems (the H2O molecule, liquid para-hydrogen, and liquid water) for comparison. It is shown that the new thermostat increases the time interval of PIMD by a factor of 4-6 or more for achieving the same accuracy. In addition, the supplementary material shows the error analysis made for the algorithms when the normal-mode transformation of path integral beads is used.
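The role of the splitting order can be illustrated on a single harmonic degree of freedom with a BAOAB-type Langevin step (a generic sketch of one thermostatted mode, not the paper's PIMD algorithm; parameters are illustrative):

```python
import math
import random

def baoab_variance(n_steps, dt, gamma=1.0, kT=1.0, m=1.0, omega=1.0, seed=7):
    """BAOAB-split Langevin dynamics for one harmonic oscillator, a toy
    stand-in for a single thermostatted staging mode.  B = half kick,
    A = half drift, O = exact Ornstein-Uhlenbeck update; the order of
    these sub-steps is precisely what distinguishes thermostat variants."""
    random.seed(seed)
    q, p = 0.0, 0.0
    c1 = math.exp(-gamma * dt)
    c2 = math.sqrt(kT * m * (1.0 - c1 * c1))
    acc = 0.0
    for _ in range(n_steps):
        p -= 0.5 * dt * m * omega ** 2 * q        # B: half kick
        q += 0.5 * dt * p / m                     # A: half drift
        p = c1 * p + c2 * random.gauss(0.0, 1.0)  # O: exact OU update
        q += 0.5 * dt * p / m                     # A: half drift
        p -= 0.5 * dt * m * omega ** 2 * q        # B: half kick
        acc += q * q
    return acc / n_steps

var_q = baoab_variance(500000, 0.05)  # target: kT / (m * omega**2) = 1
```

Permuting the sub-steps (e.g. to OBABO) changes the discretization error of sampled averages at fixed dt, which is the kind of difference the abstract's error analysis quantifies for the staging modes.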
A simple and accurate algorithm for path integral molecular dynamics with the Langevin thermostat
NASA Astrophysics Data System (ADS)
Liu, Jian; Li, Dezhang; Liu, Xinzijian
2016-07-01
We introduce a novel simple algorithm for thermostatting path integral molecular dynamics (PIMD) with the Langevin equation. The staging transformation of path integral beads is employed for demonstration. The optimum friction coefficients for the staging modes in the free particle limit are used for all systems. In comparison to the path integral Langevin equation thermostat, the new algorithm exploits a different order of splitting for the phase space propagator associated with the Langevin equation. While the error analysis is made for both algorithms, they are also employed in the PIMD simulations of three realistic systems (the H2O molecule, liquid para-hydrogen, and liquid water) for comparison. It is shown that the new thermostat increases the time interval of PIMD by a factor of 4-6 or more for achieving the same accuracy. In addition, the supplementary material shows the error analysis made for the algorithms when the normal-mode transformation of path integral beads is used.
A Simple and Accurate Model to Predict Responses to Multi-electrode Stimulation in the Retina.
Maturana, Matias I; Apollo, Nicholas V; Hadjinicolaou, Alex E; Garrett, David J; Cloherty, Shaun L; Kameneva, Tatiana; Grayden, David B; Ibbotson, Michael R; Meffin, Hamish
2016-04-01
Implantable electrode arrays are widely used in therapeutic stimulation of the nervous system (e.g. cochlear, retinal, and cortical implants). Currently, most neural prostheses use serial stimulation (i.e. one electrode at a time) despite this severely limiting the repertoire of stimuli that can be applied. Methods to reliably predict the outcome of multi-electrode stimulation have not been available. Here, we demonstrate that a linear-nonlinear model accurately predicts neural responses to arbitrary patterns of stimulation using in vitro recordings from single retinal ganglion cells (RGCs) stimulated with a subretinal multi-electrode array. In the model, the stimulus is projected onto a low-dimensional subspace and then undergoes a nonlinear transformation to produce an estimate of spiking probability. The low-dimensional subspace is estimated using principal components analysis, which gives the neuron's electrical receptive field (ERF), i.e. the electrodes to which the neuron is most sensitive. Our model suggests that stimulation proportional to the ERF yields a higher efficacy given a fixed amount of power when compared to equal amplitude stimulation on up to three electrodes. We find that the model captures the responses of all the cells recorded in the study, suggesting that it will generalize to most cell types in the retina. The model is computationally efficient to evaluate and, therefore, appropriate for future real-time applications including stimulation strategies that make use of recorded neural activity to improve the stimulation strategy.
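The linear-nonlinear pipeline can be sketched in a few lines: project the multi-electrode stimulus onto the cell's electrical receptive field (ERF), then apply a logistic nonlinearity to get a spiking probability (weights and parameters below are illustrative, not fitted values from the study):

```python
import math

def ln_response(stimulus, erf, bias=-2.0, gain=3.0):
    """Linear-nonlinear sketch: the linear stage projects the stimulus
    amplitudes (one per electrode) onto the ERF weights; the nonlinear
    stage squashes the drive into a spiking probability in (0, 1).
    In the study the subspace comes from PCA and the nonlinearity is
    fitted; both are hard-coded here for illustration."""
    drive = sum(s * w for s, w in zip(stimulus, erf))
    return 1.0 / (1.0 + math.exp(-(gain * drive + bias)))

erf = [0.7, 0.5, 0.1, 0.0]          # electrode weights (illustrative)
weak = ln_response([0.1, 0.1, 0.1, 0.1], erf)
strong = ln_response([1.0, 1.0, 0.0, 0.0], erf)
# Stimulation aligned with the ERF drives a higher spike probability.
print(weak < strong)  # -> True
```

This also illustrates the efficacy claim: at fixed total power, concentrating current on the high-weight ERF electrodes maximizes the linear drive, and hence the predicted spiking probability.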
A Simple and Accurate Model to Predict Responses to Multi-electrode Stimulation in the Retina
Maturana, Matias I.; Apollo, Nicholas V.; Hadjinicolaou, Alex E.; Garrett, David J.; Cloherty, Shaun L.; Kameneva, Tatiana; Grayden, David B.; Ibbotson, Michael R.; Meffin, Hamish
2016-01-01
Implantable electrode arrays are widely used in therapeutic stimulation of the nervous system (e.g. cochlear, retinal, and cortical implants). Currently, most neural prostheses use serial stimulation (i.e. one electrode at a time) despite this severely limiting the repertoire of stimuli that can be applied. Methods to reliably predict the outcome of multi-electrode stimulation have not been available. Here, we demonstrate that a linear-nonlinear model accurately predicts neural responses to arbitrary patterns of stimulation using in vitro recordings from single retinal ganglion cells (RGCs) stimulated with a subretinal multi-electrode array. In the model, the stimulus is projected onto a low-dimensional subspace and then undergoes a nonlinear transformation to produce an estimate of spiking probability. The low-dimensional subspace is estimated using principal components analysis, which gives the neuron’s electrical receptive field (ERF), i.e. the electrodes to which the neuron is most sensitive. Our model suggests that stimulation proportional to the ERF yields a higher efficacy given a fixed amount of power when compared to equal amplitude stimulation on up to three electrodes. We find that the model captures the responses of all the cells recorded in the study, suggesting that it will generalize to most cell types in the retina. The model is computationally efficient to evaluate and, therefore, appropriate for future real-time applications including stimulation strategies that make use of recorded neural activity to improve the stimulation strategy. PMID:27035143
NASA Astrophysics Data System (ADS)
Gu, S.
2016-08-01
Despite its low accuracy and consistency, growing degree days (GDD) has been widely used to approximate growing heat summation (GHS) for regional classification and phenological prediction. GDD is usually calculated from the mean of daily minimum and maximum temperatures (GDDmm) above a growing base temperature (Tgb). To determine approximation errors and accuracy, daily and cumulative GDDmm were compared to GDD based on daily average temperature (GDDavg), growing degree hours (GDH) based on hourly temperatures, and growing degree minutes (GDM) based on minute-by-minute temperatures. Finite error, due to the difference between measured and true temperatures above Tgb, is large in GDDmm but negligible in GDDavg, GDH, and GDM, depending only upon the number of measured temperatures used for daily approximation. Hidden negative error, due to temperatures below Tgb being included when averaging over intervals larger than the measuring interval, is large in GDDmm and GDDavg but negligible in GDH and GDM. Both GDH and GDM improve GHS approximation accuracy over GDDmm or GDDavg by summing multiple integration rectangles to reduce both finite and hidden negative errors. GDH is proposed as the standardized GHS approximation protocol, providing adequate accuracy and high precision independent of Tgb while requiring simple data recording and processing.
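The hidden negative error is easy to reproduce numerically. The sketch below uses an invented sinusoidal day of hourly temperatures and an illustrative base temperature of 10 °C; it demonstrates only the mechanism, not the paper's data.

```python
import math

T_GB = 10.0  # growing base temperature, deg C (illustrative)

# One synthetic day of hourly temperatures: a sinusoid swinging
# between 4 and 20 deg C, so several hours fall below the base.
temps = [12.0 + 8.0 * math.sin(2 * math.pi * (h - 9) / 24)
         for h in range(24)]

# GDDmm: average the daily min and max first, then truncate at the
# base. Sub-base hours are hidden inside the average, dragging the
# sum down (the "hidden negative error").
gdd_mm = max((min(temps) + max(temps)) / 2 - T_GB, 0.0)

# GDH: truncate each hour at the base *before* summing, then express
# the sum in day units; sub-base hours contribute exactly zero.
gdh = sum(max(t - T_GB, 0.0) for t in temps) / 24

print(gdd_mm, round(gdh, 2))  # GDH exceeds GDDmm on a day like this
```

The gap between the two numbers is exactly the error the abstract attributes to averaging before truncating.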
An accurate tool for the fast generation of dark matter halo catalogues
NASA Astrophysics Data System (ADS)
Monaco, P.; Sefusatti, E.; Borgani, S.; Crocce, M.; Fosalba, P.; Sheth, R. K.; Theuns, T.
2013-08-01
We present a new parallel implementation of the PINpointing Orbit Crossing-Collapsed HIerarchical Objects (PINOCCHIO) algorithm, a quick tool, based on Lagrangian Perturbation Theory, for the hierarchical build-up of dark matter (DM) haloes in cosmological volumes. To assess its ability to predict halo correlations on large scales, we compare its results with those of an N-body simulation of a 3 h⁻¹ Gpc box sampled with 2048³ particles taken from the MICE suite, matching the same seeds for the initial conditions. Thanks to the Fastest Fourier Transforms in the West (FFTW) libraries and to the relatively simple design, the code shows very good scaling properties. The CPU time required by PINOCCHIO is a tiny fraction (~1/2000) of that required by the MICE simulation. Varying some of PINOCCHIO's numerical parameters allows one to produce a universal mass function that lies in the range allowed by published fits, although it underestimates the MICE mass function of Friends-of-Friends (FoF) haloes in the high-mass tail. We compare the matter-halo and the halo-halo power spectra with those of the MICE simulation and find that these two-point statistics are well recovered on large scales. In particular, when catalogues are matched in number density, agreement within 10 per cent is achieved for the halo power spectrum. At scales k > 0.1 h Mpc⁻¹, the inaccuracy of the Zel'dovich approximation in locating halo positions causes an underestimate of the power spectrum that can be modelled as a Gaussian factor with a damping scale of d = 3 h⁻¹ Mpc at z = 0, decreasing at higher redshift. Finally, a remarkable match is obtained for the reduced halo bispectrum, showing a good description of non-linear halo bias. Our results demonstrate the potential of PINOCCHIO as an accurate and flexible tool for generating large ensembles of mock galaxy surveys, with interesting applications for the analysis of large galaxy redshift surveys.
Becker, Johanna; Luy, Burkhard
2015-11-01
Fast measurement of heteronuclear one-bond couplings, a class of NMR parameters valuable for structure elucidation, is highly desirable, especially if samples undergo chemical reactions or dynamic processes are observed. Methods presented so far face severe limitations in terms of resolution, accessible bandwidth, and sensitivity. We present the CLean InPhase-Acceleration by Sharing Adjacent Polarization-HSQC (CLIP-ASAP-HSQC) pulse sequence that allows fast acquisition of spectra with clean inphase multiplets in about 25 s. The performance in terms of accurate extraction of one-bond couplings is demonstrated on three test samples including partially aligned molecules.
Energy expenditure during level human walking: seeking a simple and accurate predictive solution.
Ludlow, Lindsay W; Weyand, Peter G
2016-03-01
Accurate prediction of the metabolic energy that walking requires can inform numerous health, bodily status, and fitness outcomes. We adopted a two-step approach to identifying a concise, generalized equation for predicting level human walking metabolism. Using literature-aggregated values we compared 1) the predictive accuracy of three literature equations: American College of Sports Medicine (ACSM), Pandolf et al., and Height-Weight-Speed (HWS); and 2) the goodness-of-fit possible from one- vs. two-component descriptions of walking metabolism. Literature metabolic rate values (n = 127; speed range = 0.4 to 1.9 m/s) were aggregated from 25 subject populations (n = 5-42) whose means spanned a 1.8-fold range of heights and a 4.2-fold range of weights. Population-specific resting metabolic rates (V̇o2 rest) were determined using standardized equations. Our first finding was that the ACSM and Pandolf et al. equations underpredicted nearly all 127 literature-aggregated values. Consequently, their standard errors of estimate (SEE) were nearly four times greater than those of the HWS equation (4.51 and 4.39 vs. 1.13 ml O2·kg⁻¹·min⁻¹, respectively). For our second comparison, empirical best-fit relationships for walking metabolism were derived from the data set in one- and two-component forms for three V̇o2-speed model types: linear (∝V), exponential (∝V²), and exponential/height (∝V²/Ht). We found that the proportion of variance (R²) accounted for, when averaged across the three model types, was substantially lower for one- vs. two-component versions (0.63 ± 0.1 vs. 0.90 ± 0.03) and the predictive errors were nearly twice as great (SEE = 2.22 vs. 1.21 ml O2·kg⁻¹·min⁻¹). Our final analysis identified the following concise, generalized equation for predicting level human walking metabolism: V̇o2 total = V̇o2 rest + 3.85 + 5.97·V²/Ht (where V is measured in m/s, Ht in meters, and V̇o2 in ml O2·kg⁻¹·min⁻¹).
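The final generalized equation translates directly into code. The equation itself is quoted from the abstract; the resting-rate value in the example is a placeholder, since the study derived population-specific resting rates from standardized equations.

```python
def walking_vo2(speed, height, vo2_rest):
    """Gross walking metabolic rate in ml O2 per kg per min:
        VO2_total = VO2_rest + 3.85 + 5.97 * V**2 / Ht
    with speed V in m/s and height Ht in meters (equation as given
    in the abstract)."""
    return vo2_rest + 3.85 + 5.97 * speed**2 / height

# Example: a 1.4 m/s walk by a 1.75 m tall person, with an assumed
# (placeholder) resting rate of 5.0 ml O2/kg/min.
print(round(walking_vo2(1.4, 1.75, 5.0), 2))
```

Note the V²/Ht term: doubling speed roughly quadruples the locomotor component, while taller stature reduces it.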
NASA Astrophysics Data System (ADS)
Lima, F. M. S.
2009-11-01
In a previous work, O'Connell (Phys. Teach. 40, 24 (2002)) investigated the time dependence of the tension in the string of a simple pendulum oscillating within the small-angle regime. In spite of the approximation sin θ ≈ θ being accurate only for amplitudes below 7°, his experimental results are for a pendulum oscillating with an amplitude of about 18°, and therefore beyond the small-angle regime. This lapse may also be found in some textbooks, laboratory manuals, and on the internet. Noting that the exact analytical solution for this problem involves the so-called Jacobi elliptic functions, which are unknown to most students (and even instructors), I take into account a sinusoidal approximate solution for the pendulum equation I introduced in a recent work (Eur. J. Phys. 29 1091 (2008)) to derive a simple trigonometric approximation for the tension valid for all possible amplitudes. This approximation is compared to both O'Connell's and the exact results, revealing that it is accurate enough for analysing large-angle pendulum experiments.
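One piece of this problem needs no elliptic functions at all: for a pendulum released from rest at amplitude θ₀, energy conservation plus the radial equation of motion give the exact tension as a function of angle, T/mg = 3 cos θ − 2 cos θ₀. The sketch below evaluates this standard textbook result (not the paper's sinusoidal time-domain approximation) at O'Connell's 18° amplitude.

```python
import math

def tension_over_mg(theta, theta0):
    """Exact string tension in units of the weight mg for a pendulum
    released from rest at amplitude theta0:
        T/mg = 3*cos(theta) - 2*cos(theta0),
    from energy conservation plus Newton's second law along the
    string; valid at any amplitude."""
    return 3.0 * math.cos(theta) - 2.0 * math.cos(theta0)

theta0 = math.radians(18)  # O'Connell's amplitude, beyond small angles
at_bottom = tension_over_mg(0.0, theta0)
at_turning = tension_over_mg(theta0, theta0)

# Tension exceeds the weight at the bottom of the swing and drops
# below it at the turning points.
print(round(at_bottom, 4), round(at_turning, 4))
```

The hard part, which the paper addresses, is expressing θ(t) accurately at large amplitude so that T(t) can be plotted against experiment.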
An accurate, fast and stable material model for shape memory alloys
NASA Astrophysics Data System (ADS)
Junker, Philipp
2014-10-01
Shape memory alloys possess several features that make them interesting for industrial applications. However, due to their complex and thermo-mechanically coupled behavior, direct use of shape memory alloys in engineering construction is problematic. There is thus a demand for tools to achieve realistic, predictive simulations that are numerically robust when computing complex, coupled load states, are fast enough to calculate geometries of industrial interest, and yield realistic and reliable results without the use of fitting curves. In this paper a new and numerically fast material model for shape memory alloys is presented. It is based solely on energetic quantities, which thus creates a quite universal approach. In the beginning, a short derivation is given before it is demonstrated how this model can be easily calibrated by means of tension tests. Then, several examples of engineering applications under mechanical and thermal loads are presented to demonstrate the numerical stability and high computation speed of the model.
Zeb, Alam; Ullah, Fareed
2016-01-01
A simple and highly sensitive spectrophotometric method was developed for the determination of thiobarbituric acid reactive substances (TBARS) as a marker for lipid peroxidation in fried fast foods. The method uses the reaction of malondialdehyde (MDA) and TBA in a glacial acetic acid medium. The method was precise, sensitive, and highly reproducible for the quantitative determination of TBARS, and the precision of the extractions and analytical procedure was very high compared to reported methods. The method was used to determine TBARS contents in fried fast foods such as Shami kebab, samosa, fried bread, and potato chips. Shami kebab, samosa, and potato chips showed higher TBARS contents with the glacial acetic acid-water extraction system than with pure glacial acetic acid, while the reverse held for the fried bread samples. The method can successfully be used for the determination of TBARS in other food matrices, especially in quality control in the food industry.
A fast and accurate algorithm for ℓ1 minimization problems in compressive sampling
NASA Astrophysics Data System (ADS)
Chen, Feishe; Shen, Lixin; Suter, Bruce W.; Xu, Yuesheng
2015-12-01
An accurate and efficient algorithm for solving the constrained ℓ1-norm minimization problem is highly needed and is crucial for the success of sparse signal recovery in compressive sampling. We tackle the constrained ℓ1-norm minimization problem by reformulating it via an indicator function which describes the constraints. The resulting model is solved efficiently and accurately by using an elegant proximity operator-based algorithm. Numerical experiments show that the proposed algorithm performs well for sparse signals with magnitudes over a high dynamic range. Furthermore, it performs significantly better than the well-known algorithms NESTA (a shorthand for Nesterov's algorithm) and DADM (dual alternating direction method) in terms of the quality of restored signals and the computational complexity measured in CPU time consumed.
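To make the role of the proximity operator concrete, here is a minimal sparse-recovery sketch built on the ℓ1 prox (soft thresholding). It uses plain ISTA, a textbook proximity-operator method applied to the unconstrained lasso form, not the authors' algorithm, and synthetic data.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximity operator of tau * ||.||_1 (soft thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, b, lam, n_iter=500):
    """Iterative shrinkage-thresholding for
        min 0.5*||Ax - b||^2 + lam*||x||_1
    (a textbook prox-gradient method, shown only to illustrate how
    the l1 proximity operator drives sparse recovery)."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant
    for _ in range(n_iter):
        x = soft_threshold(x - step * A.T @ (A @ x - b), step * lam)
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(60, 200)) / np.sqrt(60)  # 60 measurements, 200 unknowns
x_true = np.zeros(200)
x_true[[5, 70, 150]] = [3.0, -2.0, 1.5]       # 3-sparse signal
x_hat = ista(A, A @ x_true, lam=0.05)

print(round(float(np.linalg.norm(x_hat - x_true)), 3))
```

Despite having far fewer measurements than unknowns, the prox iteration recovers the sparse signal to within the small bias introduced by the ℓ1 penalty.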
NASA Astrophysics Data System (ADS)
Lin, Xue-lei; Lu, Xin; Ng, Michael K.; Sun, Hai-Wei
2016-10-01
A fast accurate approximation method with multigrid solver is proposed to solve a two-dimensional fractional sub-diffusion equation. Using the finite difference discretization of the fractional time derivative, a block lower triangular Toeplitz matrix is obtained where each main diagonal block contains a two-dimensional matrix for the Laplacian operator. Our idea is to make use of the block ɛ-circulant approximation via fast Fourier transforms, so that the resulting task is to solve a block diagonal system, where each diagonal block matrix is the sum of a complex scalar times the identity matrix and a Laplacian matrix. We show that the accuracy of the approximation scheme is of O(ɛ). Because of the special diagonal block structure, we employ the multigrid method to solve the resulting linear systems. The convergence of the multigrid method is studied. Numerical examples are presented to illustrate the accuracy of the proposed approximation scheme and the efficiency of the proposed solver.
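The mechanism that makes the ɛ-circulant approximation cheap is that the FFT diagonalizes circulant matrices, turning a coupled solve into independent scalar (here) or block (in the paper) solves. The sketch below shows only the scalar, ɛ = 1 case with an invented well-conditioned system.

```python
import numpy as np

def solve_circulant(c, b):
    """Solve C x = b, where C is the circulant matrix with first
    column c. Since C = F^-1 diag(fft(c)) F for the DFT matrix F,
    the solve costs O(n log n): transform, divide by eigenvalues,
    transform back."""
    return np.real(np.fft.ifft(np.fft.fft(b) / np.fft.fft(c)))

rng = np.random.default_rng(0)
n = 8
c = rng.normal(size=n)
c[0] += 5.0                                        # keep C well conditioned
C = np.array([np.roll(c, k) for k in range(n)]).T  # dense circulant, C[i,j]=c[(i-j) mod n]
b = rng.normal(size=n)

# FFT-based solve agrees with the dense direct solve.
print(np.allclose(solve_circulant(c, b), np.linalg.solve(C, b)))
```

In the paper's setting each "eigenvalue" is itself a shifted Laplacian block, which is why a multigrid solver is then applied to each diagonal block.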
Zhang, Xuedian; Liu, Zhaoqing; Jiang, Minshan; Chang, Min
2014-12-15
An auto-focus method for digital imaging systems is proposed that combines depth from focus (DFF) and improved depth from defocus (DFD). The traditional DFD method is improved to become more rapid, which achieves a fast initial focus. The defocus distance is first calculated by the improved DFD method. The result is then used as a search step in the searching stage of the DFF method. A dynamic focusing scheme is designed for the control software, which is able to eliminate environmental disturbances and other noises so that a fast and accurate focus can be achieved. An experiment is designed to verify the proposed focusing method and the results show that the method's efficiency is at least 3-5 times higher than that of the traditional DFF method.
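The two-stage idea, a coarse DFD estimate seeding a DFF search whose step shrinks as it climbs the focus measure, can be sketched with a synthetic sharpness curve. Both the sharpness function and the control loop below are invented for illustration; they are not the paper's algorithm.

```python
def focus_measure(z, z_best=3.2):
    """Synthetic stand-in for an image sharpness score (e.g. gradient
    energy), peaked at the in-focus lens position z_best."""
    return 1.0 / (1.0 + (z - z_best) ** 2)

def autofocus(z0, step, tol=1e-3):
    """DFF-style search seeded by a DFD estimate: start at z0, use the
    estimated defocus distance as the initial search step, and halve
    the step while climbing the focus measure."""
    z = z0
    while step > tol:
        for cand in (z - step, z + step):
            if focus_measure(cand) > focus_measure(z):
                z = cand
        step /= 2
    return z

# DFD stage (simulated): from a start of z = 5.0, the defocus is
# estimated as about 1.5 units, providing both the seed and the step.
z = autofocus(z0=5.0 - 1.5, step=1.5)
print(round(z, 2))
```

Seeding the search with the DFD estimate is what removes most of the blind scanning that makes pure DFF slow.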
Rashid, Mamoon; Pain, Arnab
2013-01-01
Summary: READSCAN is a highly scalable parallel program to identify non-host sequences (of potential pathogen origin) and estimate their genome relative abundance in high-throughput sequence datasets. READSCAN accurately classified human and viral sequences on a simulated dataset of 20.1 million reads in <27 min using a small Beowulf compute cluster with 16 nodes (Supplementary Material). Availability: http://cbrc.kaust.edu.sa/readscan Contact: arnab.pain@kaust.edu.sa or raeece.naeem@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23193222
Barros, Ana I R N A; Silva, Ana P; Gonçalves, Berta; Nunes, Fernando M
2010-03-01
A reliable method for the determination of total vitamin C must be able to resolve ascorbic acid (AA) and the epimeric isoascorbic acid (IAA) and determine the sum of AA and its oxidized form, dehydroascorbic acid. AA and IAA are polar molecules with low retention times in conventional reversed-phase systems, and hence are difficult to resolve. Hydrophilic interaction chromatography using a TSKgel Amide-80 stationary phase with isocratic elution was successful in resolving the two epimers. The column was compatible with injections of high concentrations of metaphosphoric acid, tris(2-carboxyethyl)phosphine, and EDTA without drift of baseline or retention time. Total AA and IAA were extracted, stabilized, and reduced in one step at 40 °C, using 5% m-phosphoric acid, 2 mM EDTA, and 2 mM tris(2-carboxyethyl)phosphine as reducing agent. This simple, fast, and robust hydrophilic interaction chromatography-DAD method was applied to the analysis of food products, namely fruit juices, chestnut, and ham, and also to pharmaceutical and multivitamin tablets. Method validation was performed on the food products, including parameters of precision, accuracy, linearity, limit of detection, and quantification (LOQ). The absence of matrix interferences was assessed by the standard addition method and Youden calibration. The method was fast, accurate, and precise, with a LOQ(AA) of 1.5 mg/L and LOQ(IAA) of 3.7 mg/L. The simple experimental procedure, completed in 1 h, the possibility of using IAA as an internal standard, and the low probability of artifacts are the major advantages of the proposed method for the routine determination of these compounds in large numbers of samples.
Rubino, Stefano; Akhtar, Sultan; Leifer, Klaus
2016-02-01
We present a simple, fast method for thickness characterization of suspended graphene/graphite flakes that is based on transmission electron microscopy (TEM). We derive an analytical expression for the intensity of the transmitted electron beam I 0(t), as a function of the specimen thickness t (t<λ; where λ is the absorption constant for graphite). We show that in thin graphite crystals the transmitted intensity is a linear function of t. Furthermore, high-resolution (HR) TEM simulations are performed to obtain λ for a 001 zone axis orientation, in a two-beam case and in a low symmetry orientation. Subsequently, HR (used to determine t) and bright-field (to measure I 0(0) and I 0(t)) images were acquired to experimentally determine λ. The experimental value measured in low symmetry orientation matches the calculated value (i.e., λ=225±9 nm). The simulations also show that the linear approximation is valid up to a sample thickness of 3-4 nm regardless of the orientation and up to several ten nanometers for a low symmetry orientation. When compared with standard techniques for thickness determination of graphene/graphite, the method we propose has the advantage of being simple and fast, requiring only the acquisition of bright-field images.
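The thickness relation can be sketched directly. Using the measured λ = 225 nm and an assumed Beer-Lambert-type attenuation I(t)/I(0) = exp(−t/λ), the first-order expansion reproduces the linear thin-sample regime reported in the abstract.

```python
import math

LAM = 225.0  # absorption constant for graphite, nm (value from the text)

def thickness_exact(i_ratio):
    """Invert the assumed attenuation law I(t)/I(0) = exp(-t/lam)
    for the thickness t in nm."""
    return -LAM * math.log(i_ratio)

def thickness_linear(i_ratio):
    """First-order (thin-sample) version, I(t)/I(0) ~ 1 - t/lam,
    i.e. transmitted intensity linear in t."""
    return LAM * (1.0 - i_ratio)

# The linear estimate tracks the exact one for t << lam and starts
# to underestimate for thicker crystals.
for t_true in (1.0, 4.0, 40.0):  # nm
    ratio = math.exp(-t_true / LAM)
    print(t_true, round(thickness_linear(ratio), 2))
```

This is why a single bright-field intensity ratio suffices for few-layer flakes, while thicker crystals need the logarithmic inversion.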
Hong, Ha; Solomon, Ethan A.; DiCarlo, James J.
2015-01-01
database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. PMID:26424887
Majaj, Najib J; Hong, Ha; Solomon, Ethan A; DiCarlo, James J
2015-09-30
database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior.
Fast and Accurate Accessible Surface Area Prediction Without a Sequence Profile.
Faraggi, Eshel; Kouza, Maksim; Zhou, Yaoqi; Kloczkowski, Andrzej
2017-01-01
A fast accessible surface area (ASA) predictor is presented. In this new approach, no residue mutation profiles generated by multiple sequence alignments are used as inputs. Instead, we use only single-sequence information and global features such as single-residue and two-residue compositions of the chain. The resulting predictor is both far more efficient than sequence-alignment-based predictors and of comparable accuracy to them. Introduction of the global inputs significantly helps achieve this comparable accuracy. The predictor, termed ASAquick, is found to perform similarly well for so-called easy and hard cases, indicating generalizability and possible usability for de novo protein structure prediction. The source code and Linux executables for ASAquick are available from Research and Information Systems at http://mamiris.com and from the Battelle Center for Mathematical Medicine at http://mathmed.org.
Using Interpolation for Fast and Accurate Calculation of Ion–Ion Interactions
2015-01-01
We perform extensive molecular dynamics (MD) simulations between pairs of ions of various diameters (2–5.5 Å in increments of 0.5 Å) and charge (+1 or −1) interacting in explicit water (TIP3P) under ambient conditions. We extract their potentials of mean force (PMFs). We develop an interpolation scheme, called i-PMF, that is capable of capturing the full set of PMFs for arbitrary combinations of ion sizes ranging from 2 to 5.5 Å. The advantage of the interpolation process is computational cost. Whereas it can take 100 h to simulate each PMF by MD, we can compute an equivalently accurate i-PMF in seconds. This process may be useful for rapid and accurate calculation of the strengths of salt bridges and the effects of bridging waters in biomolecular simulations. We also find that our data is consistent with Collins’ “law of matching affinities” of ion solubilities: small–small or large–large ion pairs are poorly soluble in water, whereas small–large are highly soluble. PMID:24625086
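The pattern behind i-PMF, tabulate an expensive quantity on the simulated grid of ion diameters and then interpolate for arbitrary size pairs, can be sketched with bilinear interpolation. The grid spacing matches the study (2-5.5 Å in 0.5 Å steps), but the tabulated values below are invented placeholders, and the actual i-PMF scheme interpolates full PMF curves rather than a single number per pair.

```python
import numpy as np

# Grid of ion diameters actually simulated: 2.0-5.5 A in 0.5 A steps.
diams = np.arange(2.0, 5.51, 0.5)

# Placeholder table of a PMF-derived quantity (e.g. a contact-well
# depth in kT) for each (d1, d2) pair -- invented values.
table = -0.3 * np.add.outer(diams, diams)

def interp(d1, d2):
    """Bilinear interpolation of the tabulated quantity for an
    arbitrary diameter pair inside the grid."""
    i = min(int((d1 - diams[0]) / 0.5), len(diams) - 2)
    j = min(int((d2 - diams[0]) / 0.5), len(diams) - 2)
    u = (d1 - diams[i]) / 0.5
    v = (d2 - diams[j]) / 0.5
    return ((1 - u) * (1 - v) * table[i, j] + u * (1 - v) * table[i + 1, j]
            + (1 - u) * v * table[i, j + 1] + u * v * table[i + 1, j + 1])

# Seconds of lookup instead of ~100 h of MD per off-grid pair.
print(round(float(interp(3.25, 4.1)), 3))
```

The speedup quoted in the abstract comes entirely from this precompute-then-interpolate trade: the MD cost is paid once per grid point, never per query.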
Fast and accurate sensitivity analysis of IMPT treatment plans using Polynomial Chaos Expansion
NASA Astrophysics Data System (ADS)
Perkó, Zoltán; van der Voort, Sebastian R.; van de Water, Steven; Hartman, Charlotte M. H.; Hoogeman, Mischa; Lathouwers, Danny
2016-06-01
The highly conformal planned dose distribution achievable in intensity modulated proton therapy (IMPT) can be severely compromised by uncertainties in patient setup and proton range. While several robust optimization approaches have been presented to address this issue, appropriate methods to accurately estimate the robustness of treatment plans are still lacking. To fill this gap we present Polynomial Chaos Expansion (PCE) techniques which are easily applicable and create a meta-model of the dose engine by approximating the dose in every voxel with multidimensional polynomials. This Polynomial Chaos (PC) model can be built in an automated fashion relatively cheaply, and subsequently it can be used to perform comprehensive robustness analysis. We adapted PC to provide, among other outputs, the expected dose, the dose variance, accurate probability distributions of dose-volume histogram (DVH) metrics (e.g. minimum tumor or maximum organ dose), exact bandwidths of DVHs, and a separation of the effects of random and systematic errors. We present the outcome of our verification experiments based on 6 head-and-neck (HN) patients, and exemplify the usefulness of PCE by comparing a robust and a non-robust treatment plan for a selected HN case. The results suggest that PCE is highly valuable for both research and clinical applications.
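The mechanics of a PC meta-model can be illustrated in one dimension: expand a toy "dose engine" output over a standard-normal uncertain input in probabilists' Hermite polynomials, then read the mean and variance straight off the coefficients without any sampling. The exponential toy function and the truncation order are arbitrary choices for this sketch, not anything from the paper.

```python
import math
import numpy as np

def hermite_e(k, x):
    """Probabilists' Hermite polynomial He_k(x) via the recurrence
    He_{n+1}(x) = x*He_n(x) - n*He_{n-1}(x)."""
    h_prev, h = np.ones_like(x), x
    if k == 0:
        return h_prev
    for n in range(1, k):
        h_prev, h = h, x * h - n * h_prev
    return h

f = lambda x: np.exp(0.3 * x)  # toy "dose engine" output

# Project f onto He_0..He_6 with Gauss-Hermite quadrature adapted to
# the standard normal weight; c_k = E[f(x) He_k(x)] / k!.
nodes, weights = np.polynomial.hermite_e.hermegauss(20)
weights = weights / np.sqrt(2 * np.pi)
coeffs = [float(np.sum(weights * f(nodes) * hermite_e(k, nodes)))
          / math.factorial(k) for k in range(7)]

# Statistics come straight from the coefficients:
# mean = c_0, variance = sum_{k>0} c_k^2 * k!.
mean = coeffs[0]
var = sum(c * c * math.factorial(k) for k, c in enumerate(coeffs) if k > 0)

# Analytic check for this f: mean = exp(0.045), var = exp(0.09)*(exp(0.09)-1).
print(round(mean, 6), round(var, 6))
```

The same bookkeeping, done per voxel over the multidimensional setup and range uncertainties, is what lets PCE deliver dose statistics and DVH distributions far more cheaply than rerunning the dose engine.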
Fast and Accurate Electronic Excitations in Cyanines with the Many-Body Bethe-Salpeter Approach.
Boulanger, Paul; Jacquemin, Denis; Duchemin, Ivan; Blase, Xavier
2014-03-11
The accurate prediction of the optical signatures of cyanine derivatives remains an important challenge in theoretical chemistry. Indeed, up to now, only the most expensive quantum chemical methods (CAS-PT2, CC, DMC, etc.) yield consistent and accurate data, impeding applications to real-life molecules. Here, we investigate the lowest-lying singlet excitation energies of increasingly long cyanine dyes within the GW and Bethe-Salpeter Green's function many-body perturbation theory. Our results are in remarkable agreement with available coupled-cluster (exCC3) data, bringing these two single-reference perturbation techniques within a 0.05 eV maximum discrepancy. By comparison, available TD-DFT calculations with various semilocal, global, or range-separated hybrid functionals overshoot the transition energies by a typical error of 0.3-0.6 eV. The obtained accuracy is achieved with a parameter-free formalism that offers similar accuracy for metallic or insulating, finite-size or extended systems.
Fast and accurate registration techniques for affine and nonrigid alignment of MR brain images.
Liu, Jia-Xiu; Chen, Yong-Sheng; Chen, Li-Fen
2010-01-01
Registration of magnetic resonance brain images is a geometric operation that determines point-wise correspondences between two brains. It remains a difficult task due to the highly convoluted structure of the brain. This paper presents novel methods, Brain Image Registration Tools (BIRT), that can rapidly and accurately register brain images by utilizing the brain structure information estimated from image derivatives. Source and target image spaces are related by affine transformation and non-rigid deformation. The deformation field is modeled by a set of Wendland's radial basis functions hierarchically deployed near the salient brain structures. In general, nonlinear optimization is heavily engaged in the parameter estimation for affine/non-rigid transformation and good initial estimates are thus essential to registration performance. In this work, the affine registration is initialized by a rigid transformation, which can robustly estimate the orientation and position differences of brain images. The parameters of the affine/non-rigid transformation are then hierarchically estimated in a coarse-to-fine manner by maximizing an image similarity measure, the correlation ratio, between the involved images. T1-weighted brain magnetic resonance images were utilized for performance evaluation. Our experimental results using four 3-D image sets demonstrated that BIRT can efficiently align images with high accuracy compared to several other algorithms, and thus is adequate to the applications which apply registration process intensively. Moreover, a voxel-based morphometric study quantitatively indicated that accurate registration can improve both the sensitivity and specificity of the statistical inference results.
Fast, Accurate and Precise Mid-Sagittal Plane Location in 3D MR Images of the Brain
NASA Astrophysics Data System (ADS)
Bergo, Felipe P. G.; Falcão, Alexandre X.; Yasuda, Clarissa L.; Ruppert, Guilherme C. S.
Extraction of the mid-sagittal plane (MSP) is a key step for brain image registration and asymmetry analysis. We present a fast MSP extraction method for 3D MR images, based on automatic segmentation of the brain and on heuristic maximization of the cerebro-spinal fluid within the MSP. The method is robust to severe anatomical asymmetries between the hemispheres, caused by surgical procedures and lesions. The method is also accurate with respect to MSP delineations done by a specialist. The method was evaluated on 64 MR images (36 pathological, 20 healthy, 8 synthetic), and it found a precise and accurate approximation of the MSP in all of them, with a mean time of 60.0 seconds per image, a mean angular variation within the same image (precision) of 1.26°, and a mean angular difference from specialist delineations (accuracy) of 1.64°.
NASA Astrophysics Data System (ADS)
Jin, Xuhon; Huang, Fei; Hu, Pengju; Cheng, Xiaoli
2016-11-01
A fundamental prerequisite for satellites operating in Low Earth Orbit (LEO) is the availability of fast and accurate predictions of non-gravitational aerodynamic forces, which arise in the free molecular flow regime. However, conventional computational methods such as the analytical integral method and the direct simulation Monte Carlo (DSMC) technique either fail to deal with flow shadowing and multiple reflections or are computationally expensive. This work develops a general computer program for the accurate calculation of aerodynamic forces in the free molecular flow regime using the test particle Monte Carlo (TPMC) method, and the non-gravitational aerodynamic forces acting on the Gravity field and steady-state Ocean Circulation Explorer (GOCE) satellite are calculated for different freestream conditions and gas-surface interaction models by the computer program.
A fast and accurate method to predict 2D and 3D aerodynamic boundary layer flows
NASA Astrophysics Data System (ADS)
Bijleveld, H. A.; Veldman, A. E. P.
2014-12-01
A quasi-simultaneous interaction method is applied to predict 2D and 3D aerodynamic flows. This method is suitable for offshore wind turbine design software as it is a very accurate and computationally reasonably cheap method. This study shows the results for a NACA 0012 airfoil. The two applied solvers converge to the experimental values when the grid is refined. We also show that in separation the eigenvalues remain positive, thus avoiding the Goldstein singularity at separation. In 3D we show a flow over a dent in which separation occurs. A rotating flat plate is used to show the applicability of the method for rotating flows. The demonstrated capabilities of the method indicate that the quasi-simultaneous interaction method is suitable for design methods for offshore wind turbine blades.
Baldrich, Eva; Gómez, Rodrigo; Gabriel, Gemma; Muñoz, Francesc Xavier
2011-01-15
Carbon nanotubes (CNT) have been exploited for an important number of electroanalytical and sensing purposes. Specifically, CNT incorporation into an electrode surface coating increases its roughness and area, provides electrocatalytic activity towards a variety of molecules, and improves electron transfer. This modification is generally based on the irreversible deposition of CNT on the surface. Nevertheless, CNT are highly porous materials that might promote non-specific adsorption and/or electrodeposition of molecules, which could induce sample-to-sample cross-contamination and affect measurement specificity and reproducibility. This drawback has often been circumvented by combining CNT with charged polymers able to repel molecules of opposed charge. We demonstrate that single-walled CNT (SWCNT) have a strong tendency to adsorb non-specifically onto the surface of protein-coated magnetic particles (MP). Magnetic capture of those MP generates CNT co-entrapment and allows extremely fast, simple, and reversible production of SWCNT electrodes. We have exploited this phenomenon for the production of modified screen-printed electrodes (MP/CNT-SPE), which have been characterized by Scanning Electron Microscopy. The surface has been additionally optimized by evaluating the electrochemical performance of SPE modified with different amounts and proportions of MP and CNT. The modified devices have then been used for dopamine detection. MP/CNT-SPE generated improved assay sensitivity, a lower limit of detection, and up to 500% higher current signals than bare electrodes. Magnetic entrapment is proposed as a promising strategy for the fast, simple, and reversible generation of nanostructured electrodes of enhanced performance within a few minutes, with electrode re-utilisation by simple magnet removal and surface washing.
SU-E-T-373: A Motorized Stage for Fast and Accurate QA of Machine Isocenter
Moore, J; Velarde, E; Wong, J
2014-06-01
Purpose: Precision delivery of radiation dose relies on accurate knowledge of the machine isocenter under a variety of machine motions. This is typically determined by performing a Winston-Lutz test, consisting of imaging a known object at multiple gantry/collimator/table angles and ensuring that the maximum offset is within a specified tolerance. The first step in the Winston-Lutz test is careful placement of a ball bearing (BB) at the machine isocenter, as determined by repeated imaging and shifting until accurate placement has been achieved. Conventionally this is performed by adjusting a stage manually using vernier scales, which carry the limitation that each adjustment must be done inside the treatment room, with the risks of inaccurate adjustment of the scale and physical bumping of the table. It is proposed to use a motorized system controlled from outside the room to improve the required time and accuracy of these tests. Methods: The three-dimensional vernier scales are replaced by three motors with an accuracy of 1 micron and a range of 25.4 mm, connected via USB to a computer in the control room. Software is designed which automatically detects the motors, assigns them to the proper axes, and allows small shifts to be entered and performed. Input values match calculated offsets in magnitude and sign to reduce conversion errors. Speed of setup, number of iterations to setup, and accuracy of final placement are assessed. Results: Automatic BB placement required 2.25 iterations and 13 minutes on average, while manual placement required 3.76 iterations and 37.5 minutes. The average final XYZ offsets were 0.02 cm, 0.01 cm, 0.04 cm for automatic setup and 0.04 cm, 0.02 cm, 0.04 cm for manual setup. Conclusion: Automatic placement decreased the time and repeat iterations for setup while improving placement accuracy, greatly reducing the time required to perform QA.
Fast and accurate detection of cancer cell using a versatile three-channel plasmonic sensor
NASA Astrophysics Data System (ADS)
Hoseinian, M.; Ahmadi, A. R.; Bolorizadeh, M. A.
2016-09-01
Surface Plasmon Resonance (SPR) optical fiber sensors can be used as cost-effective, small-sized biosensors that are relatively simple to operate. Additionally, these instruments are label-free, rendering them highly sensitive for biological measurements. In this study, a three-channel microstructured optical fiber plasmonic-based portable biosensor is designed and analyzed using the Finite Element Method. The proposed system is capable of determining changes in a sample's refractive index with a precision on the order of one thousandth. The biosensor measures three absorption resonance wavelengths of the analytes simultaneously. This property is one of the main advantages of the proposed biosensor, since it reduces the error in the measured wavelength and enhances the accuracy of the results up to 10^-5 m/RIU by reducing noise. In this paper, the Jurkat cell, an indicator cell for leukemia, is considered as the analyte, and its absorption resonance wavelengths as well as the sensitivity in each channel are determined.
Blasco, Antonio Javier; Crevillén, Agustín González; de la Fuente, Pedro; González, María Cristina; Escarpa, Alberto
2007-04-01
A novel strategy integrating methodological calibration and analysis on board a planar first-generation microfluidic system for the determination of total isoflavones in soy samples is proposed. The analytical strategy is conceptually proposed and successfully demonstrated on the basis of (i) the microchip design (with the possibility of using both reservoirs), (ii) the analytical characteristics of the developed method (statistically zero intercept and excellent robustness between calibration slopes, RSDs < 5%), (iii) the irreversible electrochemical behaviour of isoflavone oxidation (no significant electrode fouling effect was observed between calibration and analysis runs) and (iv) the inherent versatility of the electrochemical end-channel configurations (possibility of using different pumping and detection media). Repeatability obtained in both standard (calibration) and real soy samples (analysis), with RSD values of less than 1% for the migration times, indicated the stability of the electroosmotic flow (EOF) during both integrated operations. The accuracy (an error of less than 6%) is demonstrated for the first time in these microsystems using a documented secondary standard from the Drug Master File (SW/1211/03) as reference material. Ultra-fast calibration and analysis of total isoflavones in soy samples were integrated successfully, employing 60 s each, notably enhancing the analytical performance of these microdevices with an important decrease in overall analysis time (less than 120 s) and an increase in accuracy by a factor of 3.
EZ-Rhizo: integrated software for the fast and accurate measurement of root system architecture.
Armengaud, Patrick; Zambaux, Kevin; Hills, Adrian; Sulpice, Ronan; Pattison, Richard J; Blatt, Michael R; Amtmann, Anna
2009-03-01
The root system is essential for the growth and development of plants. In addition to anchoring the plant in the ground, it is the site of uptake of water and minerals from the soil. Plant root systems show an astonishing plasticity in their architecture, which allows for optimal exploitation of diverse soil structures and conditions. The signalling pathways that enable plants to sense and respond to changes in soil conditions, in particular nutrient supply, are a topic of intensive research, and root system architecture (RSA) is an important and obvious phenotypic output. At present, the quantitative description of RSA is labour intensive and time consuming, even using the currently available software, and the lack of a fast RSA measuring tool hampers forward and quantitative genetics studies. Here, we describe EZ-Rhizo: a Windows-integrated and semi-automated computer program designed to detect and quantify multiple RSA parameters from plants growing on a solid support medium. The method is non-invasive, enabling the user to follow RSA development over time. We have successfully applied EZ-Rhizo to evaluate natural variation in RSA across 23 Arabidopsis thaliana accessions, and have identified new RSA determinants as a basis for future quantitative trait locus (QTL) analysis.
Fast and accurate inference on gravitational waves from precessing compact binaries
NASA Astrophysics Data System (ADS)
Smith, Rory; Field, Scott E.; Blackburn, Kent; Haster, Carl-Johan; Pürrer, Michael; Raymond, Vivien; Schmidt, Patricia
2016-08-01
Inferring astrophysical information from gravitational waves emitted by compact binaries is one of the key science goals of gravitational-wave astronomy. In order to reach the full scientific potential of gravitational-wave experiments, we require techniques to mitigate the cost of Bayesian inference, especially as gravitational-wave signal models and analyses become increasingly sophisticated and detailed. Reduced-order models (ROMs) of gravitational waveforms can significantly reduce the computational cost of inference by removing redundant computations. In this paper, we construct the first reduced-order models of gravitational-wave signals that include the effects of spin precession, inspiral, merger, and ringdown in compact object binaries and that are valid for component masses describing binary neutron star, binary black hole, and mixed binary systems. This work utilizes the waveform model known as "IMRPhenomPv2." Our ROM enables the use of a fast reduced-order quadrature (ROQ) integration rule which allows us to approximate Bayesian probability density functions at a greatly reduced computational cost. We find that the ROQ rule can be used to speed-up inference by factors as high as 300 without introducing systematic bias. This corresponds to a reduction in computational time from around half a year to half a day for the longest duration and lowest mass signals. The ROM and ROQ rules are available with the main inference library of the LIGO Scientific Collaboration, LALInference.
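The core trick behind an ROQ rule can be shown in a few lines: once a reduced basis for the waveform family is built offline, the full-grid inner product between data and any model waveform collapses to a weighted sum over a handful of empirical interpolation nodes. The sketch below uses a random linear "waveform" basis purely for illustration, not actual IMRPhenomPv2 waveforms; sizes and names are assumptions.

```python
import numpy as np

# Toy ROQ: replace the O(L) inner product <d, h> = sum_f d(f) h(f) with an
# O(m) weighted sum over m interpolation nodes. Exact for h in the basis span.

rng = np.random.default_rng(0)
L, N = 1000, 5                            # frequency-grid size, reduced-basis size
basis = rng.standard_normal((N, L))       # offline reduced basis (rows = basis vectors)

m = N                                     # one node per basis element
nodes = rng.choice(L, size=m, replace=False)          # empirical interpolation nodes
# Interpolation matrix B (L x m) with B @ h[nodes] == h for h in the span:
B = np.linalg.solve(basis[:, nodes], basis).T

data = rng.standard_normal(L)
weights = data @ B                        # ROQ weights: computed once per dataset

coeff = rng.standard_normal(N)
h = coeff @ basis                         # any model waveform inside the basis span

full = data @ h                           # O(L) full-grid inner product
roq = weights @ h[nodes]                  # O(m) ROQ evaluation, m << L
```

Because the likelihood in Bayesian inference is built from such inner products, evaluating only `m` waveform samples per likelihood call is what produces the large speed-ups the abstract reports.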
NASA Astrophysics Data System (ADS)
Schwörer, Magnus; Lorenzen, Konstantin; Mathias, Gerald; Tavan, Paul
2015-03-01
Recently, a novel approach to hybrid quantum mechanics/molecular mechanics (QM/MM) molecular dynamics (MD) simulations has been suggested [Schwörer et al., J. Chem. Phys. 138, 244103 (2013)]. Here, the forces acting on the atoms are calculated by grid-based density functional theory (DFT) for a solute molecule and by a polarizable molecular mechanics (PMM) force field for a large solvent environment composed of several 10³-10⁵ molecules as negative gradients of a DFT/PMM hybrid Hamiltonian. The electrostatic interactions are efficiently described by a hierarchical fast multipole method (FMM). Adopting recent progress of this FMM technique [Lorenzen et al., J. Chem. Theory Comput. 10, 3244 (2014)], which particularly entails a strictly linear scaling of the computational effort with the system size, and adapting this revised FMM approach to the computation of the interactions between the DFT and PMM fragments of a simulation system, here, we show how one can further enhance the efficiency and accuracy of such DFT/PMM-MD simulations. The resulting gain of total performance, as measured for alanine dipeptide (DFT) embedded in water (PMM) by the product of the gains in efficiency and accuracy, amounts to about one order of magnitude. We also demonstrate that the jointly parallelized implementation of the DFT and PMM-MD parts of the computation enables the efficient use of high-performance computing systems. The associated software is available online.
Regular, Fast and Accurate Airborne In-Situ Methane Measurements Around the Tropopause
NASA Astrophysics Data System (ADS)
Dyroff, Christoph; Rauthe-Schöch, Armin; Schuck, Tanja J.; Zahn, Andreas
2013-04-01
We present a laser spectrometer for automated monthly measurements of methane (CH4) mixing ratios aboard the CARIBIC passenger aircraft. The instrument is based on a commercial fast methane analyzer (FMA, Los Gatos Res.), which was modified for fully unattended operation. A laboratory characterization was performed, and the results are presented with emphasis on the precision, cross sensitivity to H2O, and accuracy. An in-flight calibration strategy is described that utilizes CH4 measurements obtained from flask samples taken during the same flights. By statistical comparison of the in-situ measurements with the flask samples we derive a total uncertainty estimate of ~3.85 ppbv (1σ) around the tropopause, and ~12.4 ppbv (1σ) during aircraft ascent and descent. Data from the first two years of airborne operation are presented that span a large part of the northern hemispheric upper troposphere and lowermost stratosphere, with occasional crossings of the tropics on flights to southern Africa. With its high spatial resolution and high accuracy, this data set is unprecedented in the highly important atmospheric layer around the tropopause.
Automated system for fast and accurate analysis of SF6 injected in the surface ocean.
Koo, Chul-Min; Lee, Kitack; Kim, Miok; Kim, Dae-Ok
2005-11-01
This paper describes an automated sampling and analysis system for the shipboard measurement of dissolved sulfur hexafluoride (SF6) in surface marine environments into which SF6 has been deliberately released. This underway system includes a gas chromatograph associated with an electron capture detector, a fast and highly efficient SF6-extraction device, a global positioning system, and a data acquisition system based on Visual Basic 6.0/C 6.0. This work is distinct from previous studies in that it quantifies the efficiency of the SF6-extraction device and its carryover effect and examines the effect of surfactant on the SF6-extraction efficiency. Measurements can be continuously performed on seawater samples taken from a seawater line installed onboard a research vessel. The system runs on an hourly cycle during which one set of four SF6 standards is measured and SF6 derived from the seawater stream is subsequently analyzed for the rest of each 1 h period. This state-of-the-art system was successfully used to trace a water mass carrying Cochlodinium polykrikoides, which causes harmful algal blooms (HAB) in the coastal waters of southern Korea. The successful application of this analysis system in tracing the HAB-infected water mass suggests that the SF6 detection method described in this paper will improve the quality of future studies of biogeochemical processes in the marine environment.
Accurate and Fast Convergent Initial-Value Belief Propagation for Stereo Matching
Wang, Xiaofeng; Liu, Yiguang
2015-01-01
The belief propagation (BP) algorithm has some limitations, including ambiguity at edges and in textureless regions, and slow convergence. To address these problems, we present a novel algorithm that intrinsically improves both the accuracy and the convergence speed of BP. First, traditional BP generally consumes time due to numerous iterations. To reduce the number of iterations, inspired by the crucial importance of the initial value in nonlinear problems, a novel initial-value belief propagation (IVBP) algorithm is presented, which can greatly improve both convergence speed and accuracy. Second, the majority of the existing research on BP concentrates on the smoothness term or other energy terms, neglecting the significance of the data term. In this study, a self-adapting dissimilarity data term (SDDT) is presented to improve the accuracy of the data term, which incorporates an additional gradient-based measure into the traditional data term, with the weight determined by the robust measure-based control function. Finally, this study explores the effective combination of local methods and global methods. The experimental results have demonstrated that our method performs well compared with state-of-the-art BP methods and simultaneously achieves better edge-preserving smoothing with fast convergence on the Middlebury and new 2014 Middlebury datasets. PMID:26349063
Poulin, Eric; Racine, Emmanuel; Beaulieu, Luc; Binnekamp, Dirk
2015-03-15
Purpose: In high dose rate brachytherapy (HDR-B), current catheter reconstruction protocols are relatively slow and error prone. The purpose of this technical note is to evaluate the accuracy and the robustness of an electromagnetic (EM) tracking system for automated and real-time catheter reconstruction. Methods: For this preclinical study, a total of ten catheters were inserted in gelatin phantoms with different trajectories. Catheters were reconstructed using an 18G biopsy needle, used as an EM stylet and equipped with a miniaturized sensor, and the second generation Aurora® Planar Field Generator from Northern Digital Inc. The Aurora EM system provides position and orientation values with precisions of 0.7 mm and 0.2°, respectively. Phantoms were also scanned using a μCT (GE Healthcare) and a Philips Big Bore clinical computed tomography (CT) system with a spatial resolution of 89 μm and 2 mm, respectively. Reconstructions using the EM stylet were compared to μCT and CT. To assess the robustness of the EM reconstruction, five catheters were reconstructed twice and compared. Results: Reconstruction time for one catheter was 10 s, leading to a total reconstruction time of less than 3 min for a typical 17-catheter implant. When compared to the μCT, the mean EM tip identification error was 0.69 ± 0.29 mm while the CT error was 1.08 ± 0.67 mm. The mean 3D distance error was found to be 0.66 ± 0.33 mm and 1.08 ± 0.72 mm for the EM and CT, respectively. EM 3D catheter trajectories were found to be more accurate. A maximum difference of less than 0.6 mm was found between successive EM reconstructions. Conclusions: The EM reconstruction was found to be more accurate and precise than the conventional methods used for catheter reconstruction in HDR-B. This approach can be applied to any type of catheter and applicator.
RRTMGP: A fast and accurate radiation code for the next decade
NASA Astrophysics Data System (ADS)
Mlawer, E. J.; Pincus, R.; Wehe, A.; Delamere, J.
2015-12-01
Atmospheric radiative processes are key drivers of the Earth's climate and must be accurately represented in global circulation models (GCMs) to allow faithful simulations of the planet's past, present, and future. The radiation code RRTMG is widely utilized by global modeling centers for both climate and weather predictions, but it has become increasingly out-of-date. The code's structure is not well suited for the current generation of computer architectures and its stored absorption coefficients are not consistent with the most recent spectroscopic information. We are developing a new broadband radiation code for the current generation of computational architectures. This code, called RRTMGP, will be a completely restructured and modern version of RRTMG. The new code preserves the strengths of the existing RRTMG parameterization, especially the high accuracy of the k-distribution treatment of absorption by gases, but the entire code is being rewritten to provide highly efficient computation across a range of architectures. Our redesign includes refactoring the code into discrete kernels corresponding to fundamental computational elements (e.g. gas optics), optimizing the code for operating on multiple columns in parallel, simplifying the subroutine interface, revisiting the existing gas optics interpolation scheme to reduce branching, and adding flexibility with respect to run-time choices of streams, need for consideration of scattering, aerosol and cloud optics, etc. The result of the proposed development will be a single, well-supported and well-validated code amenable to optimization across a wide range of platforms. Our main emphasis is on highly-parallel platforms including Graphical Processing Units (GPUs) and Many-Integrated-Core processors (MICs), which experience shows can accelerate broadband radiation calculations by as much as a factor of fifty. RRTMGP will provide highly efficient and accurate radiative flux calculations for coupled global models.
NASA Technical Reports Server (NTRS)
Yang, Qiguang; Liu, Xu; Wu, Wan; Kizer, Susan; Baize, Rosemary R.
2016-01-01
A hybrid stream PCRTM-SOLAR model has been proposed for fast and accurate radiative transfer simulation. It calculates the reflected solar (RS) radiances in a fast, coarse pass and then, with the help of a pre-saved matrix, transforms the results to obtain the desired highly accurate RS spectrum. The methodology has been demonstrated with the hybrid stream discrete ordinate (HSDO) radiative transfer (RT) model. The HSDO method calculates the monochromatic radiances using a 4-stream discrete ordinate method, where only a small number of monochromatic radiances are simulated with both the 4-stream and a larger N-stream (N = 16) discrete ordinate RT algorithm. The accuracy of the obtained channel radiance is comparable to the result from the N-stream moderate resolution atmospheric transmission version 5 (MODTRAN5). The root-mean-square errors are usually less than 5×10^-4 mW/(cm² sr cm⁻¹). The computational speed is three to four orders of magnitude faster than the medium-speed correlated-k option of MODTRAN5. This method is very efficient for simulating thousands of RS spectra under multi-layer cloud/aerosol and solar radiation conditions for climate change studies and numerical weather prediction applications.
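The pre-saved-matrix idea above can be illustrated with a toy linear model: run a cheap low-stream solver everywhere, run the expensive high-stream solver on a few training states, and learn a transform that maps coarse spectra to accurate ones. The "radiative transfer" models below are synthetic linear maps, purely for illustration; the real HSDO correction is built from paired 4-stream/16-stream runs.

```python
import numpy as np

# Toy hybrid-stream correction: learn a matrix T such that T @ (coarse spectrum)
# approximates the accurate spectrum, then apply it cheaply online.

rng = np.random.default_rng(1)
n_chan, n_state = 50, 8

A16 = rng.standard_normal((n_chan, n_state))        # stand-in for 16-stream RT
A4 = A16 + 0.1 * rng.standard_normal(A16.shape)     # stand-in for 4-stream RT

# Offline: least-squares transform fitted on paired coarse/accurate training runs
train = rng.standard_normal((n_state, 200))         # training atmospheric states
T = (A16 @ train) @ np.linalg.pinv(A4 @ train)      # the "pre-saved matrix"

# Online: one cheap 4-stream run plus a matrix multiply per spectrum
s = rng.standard_normal(n_state)
approx = T @ (A4 @ s)
err = np.linalg.norm(approx - A16 @ s) / np.linalg.norm(A16 @ s)
```

Because both toy solvers are linear in the state, the learned transform recovers the accurate spectrum essentially exactly; for real scattering calculations the mapping is only approximate, which is why the abstract quotes residual RMS errors rather than zero.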
Hassouna, M Sabry; Farag, A A
2007-09-01
A wide range of computer vision applications require an accurate solution of a particular Hamilton-Jacobi (HJ) equation, known as the Eikonal equation. In this paper, we propose an improved version of the fast marching method (FMM) that is highly accurate for both 2D and 3D Cartesian domains. The new method is called multi-stencils fast marching (MSFM); it computes the solution at each grid point by solving the Eikonal equation along several stencils and then picks the solution that satisfies the upwind condition. The stencils are centered at each grid point and cover all of its nearest neighbors. In 2D space, 2 stencils cover the 8-neighbors of the point, while in 3D space, 6 stencils cover its 26-neighbors. For those stencils that are not aligned with the natural coordinate system, the Eikonal equation is derived using directional derivatives and then solved using higher-order finite difference schemes. The accuracy of the proposed method over state-of-the-art FMM-based techniques has been demonstrated through comprehensive numerical experiments.
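For readers unfamiliar with the baseline the paper improves on, here is a minimal 2D fast marching solver for |∇T| = 1/F using the standard single axis-aligned stencil and first-order upwind quadratic update; MSFM extends exactly this local solver with diagonal stencils and higher-order differences. This is a textbook-style sketch, not the paper's code.

```python
import heapq
import numpy as np

def fast_marching(speed, source):
    """First-order FMM on a unit-spaced 2D grid: returns arrival times T
    solving |grad T| = 1/speed with T[source] = 0."""
    ny, nx = speed.shape
    T = np.full((ny, nx), np.inf)
    T[source] = 0.0
    frozen = np.zeros((ny, nx), dtype=bool)
    heap = [(0.0, source)]
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if frozen[i, j]:
            continue
        frozen[i, j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if not (0 <= a < ny and 0 <= b < nx) or frozen[a, b]:
                continue
            # upwind (smallest) neighbor value along each axis
            tx = min(T[a, b - 1] if b > 0 else np.inf,
                     T[a, b + 1] if b < nx - 1 else np.inf)
            ty = min(T[a - 1, b] if a > 0 else np.inf,
                     T[a + 1, b] if a < ny - 1 else np.inf)
            f = 1.0 / speed[a, b]
            lo, hi = sorted((tx, ty))
            # solve (t - tx)^2 + (t - ty)^2 = f^2; fall back to a one-sided
            # update when the causality condition t >= hi would be violated
            if hi - lo >= f:
                t_new = lo + f
            else:
                t_new = 0.5 * ((lo + hi) + np.sqrt(2 * f * f - (hi - lo) ** 2))
            if t_new < T[a, b]:
                T[a, b] = t_new
                heapq.heappush(heap, (t_new, (a, b)))
    return T
```

On a unit-speed grid, arrival times along the axes are exact while diagonal distances are overestimated by the single-stencil update; reducing that diagonal error is precisely what the paper's extra stencils target.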
Blackman, Jonathan; Field, Scott E; Galley, Chad R; Szilágyi, Béla; Scheel, Mark A; Tiglio, Manuel; Hemberger, Daniel A
2015-09-18
Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spherical-harmonic _{-2}Y_{ℓm} waveform modes resolved by the NR code up to ℓ=8. We compare our surrogate model to effective one body waveforms from 50M_{⊙} to 300M_{⊙} for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases).
Accurate and fast multiple-testing correction in eQTL studies.
Sul, Jae Hoon; Raj, Towfique; de Jong, Simone; de Bakker, Paul I W; Raychaudhuri, Soumya; Ophoff, Roel A; Stranger, Barbara E; Eskin, Eleazar; Han, Buhm
2015-06-04
In studies of expression quantitative trait loci (eQTLs), it is of increasing interest to identify eGenes, the genes whose expression levels are associated with variation at a particular genetic variant. Detecting eGenes is important for follow-up analyses and prioritization because genes are the main entities in biological processes. To detect eGenes, one typically focuses on the genetic variant with the minimum p value among all variants in cis with a gene and corrects for multiple testing to obtain a gene-level p value. For performing multiple-testing correction, a permutation test is widely used. Because of growing sample sizes of eQTL studies, however, the permutation test has become a computational bottleneck in eQTL studies. In this paper, we propose an efficient approach for correcting for multiple testing and assessing eGene p values by utilizing a multivariate normal distribution. Our approach properly takes into account the linkage-disequilibrium structure among variants, and its time complexity is independent of sample size. By applying our small-sample correction techniques, our method achieves high accuracy in both small and large studies. We have shown that our method consistently produces extremely accurate p values (accuracy > 98%) for three human eQTL datasets with different sample sizes and SNP densities: the Genotype-Tissue Expression pilot dataset, the multi-region brain dataset, and the HapMap 3 dataset.
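The multivariate-normal idea can be demonstrated with a Monte Carlo stand-in: sample null association z-scores whose covariance is the local LD (correlation) matrix, and ask how often the strongest variant beats the observed best score. The paper evaluates this MVN probability more efficiently than brute-force sampling; the function name and the toy LD matrices below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)

def egene_pvalue(ld, z_obs, n_samples=200_000):
    """Gene-level p-value P(max_k |Z_k| >= z_obs) for Z ~ N(0, LD):
    the multiple-testing-corrected p-value for the best cis variant."""
    L = np.linalg.cholesky(ld)                       # LD matrix must be pos. def.
    Z = rng.standard_normal((n_samples, ld.shape[0])) @ L.T
    return float(np.mean(np.abs(Z).max(axis=1) >= z_obs))

# Five variants whose best score is |z| = 2.0: independent variants pay a full
# multiple-testing penalty, while near-perfect LD behaves like a single test.
p_indep = egene_pvalue(np.eye(5), 2.0)
p_ld = egene_pvalue(np.full((5, 5), 0.999) + 0.001 * np.eye(5), 2.0)
```

Here `p_indep` lands near the Šidák value 1 − (1 − 0.0455)⁵ ≈ 0.21, while `p_ld` stays close to the single-test p ≈ 0.05, showing why accounting for LD structure (rather than Bonferroni-style counting) matters.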
Fast and accurate pressure-drop prediction in straightened atherosclerotic coronary arteries.
Schrauwen, Jelle T C; Koeze, Dion J; Wentzel, Jolanda J; van de Vosse, Frans N; van der Steen, Anton F W; Gijsen, Frank J H
2015-01-01
Atherosclerotic disease progression in coronary arteries is influenced by wall shear stress. To compute patient-specific wall shear stress, computational fluid dynamics (CFD) is required. In this study we propose a method for computing the pressure-drop in regions proximal and distal to a plaque, which can serve as a boundary condition in CFD. As a first step towards exploring the proposed method we investigated ten straightened coronary arteries. First, the flow fields were calculated with CFD and velocity profiles were fitted to the results. Second, the Navier-Stokes equation was simplified and solved with the fitted velocity profiles to obtain a pressure-drop estimate (Δp_1). Next, Δp_1 was compared to the pressure-drop from CFD (Δp_CFD) as a validation step. Finally, the velocity profiles, and thus the pressure-drop, were predicted based on geometry and flow, resulting in Δp_geom. We found that Δp_1 adequately estimated Δp_CFD with velocity profiles that have one free parameter β. This β was successfully related to geometry and flow, resulting in excellent agreement between Δp_CFD and Δp_geom: 3.9 ± 4.9% difference at Re = 150. We showed that this method can quickly and accurately predict the pressure-drop on the basis of geometry and flow in straightened coronary arteries that are mildly diseased.
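The β-independent baseline behind such geometry-plus-flow estimates is the fully developed Poiseuille profile, for which the simplified Navier-Stokes equation gives Δp = 8 μ L Q / (π R⁴) in a straight circular segment. The paper fits a one-parameter (β) profile family instead; the function and the blood-like numbers below are an illustrative assumption, not the paper's fit.

```python
import math

def poiseuille_dp(mu_pa_s, length_m, flow_m3_s, radius_m):
    """Pressure drop (Pa) across a straight circular tube of constant radius,
    assuming steady, fully developed laminar (Poiseuille) flow."""
    return 8.0 * mu_pa_s * length_m * flow_m3_s / (math.pi * radius_m ** 4)

# Blood-like viscosity in a 3 mm diameter, 5 cm long segment at 1 mL/s
# (all values assumed for illustration): roughly 88 Pa, i.e. well under 1 mmHg.
dp = poiseuille_dp(mu_pa_s=3.5e-3, length_m=0.05, flow_m3_s=1.0e-6, radius_m=1.5e-3)
```

The strong R⁻⁴ dependence is the reason mildly stenosed segments can still dominate the proximal/distal pressure-drop, and why the fitted profile's shape parameter matters mainly where the lumen narrows.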
Simons, Craig J; Cobb, Loren; Davidson, Bradley S
2014-04-01
In vivo measurement of lumbar spine configuration is useful for constructing quantitative biomechanical models. Positional magnetic resonance imaging (MRI) accommodates a larger range of movement in most joints than conventional MRI and does not require a supine position. However, this is achieved at the expense of image resolution and contrast. As a result, quantitative research using positional MRI has required long reconstruction times and is sensitive to incorrectly identifying the vertebral boundary due to low contrast between bone and surrounding tissue in the images. We present a semi-automated method used to obtain digitized reconstructions of lumbar vertebrae in any posture of interest. This method combines a high-resolution reference scan with a low-resolution postural scan to provide a detailed and accurate representation of the vertebrae in the posture of interest. Compared to a criterion standard, translational reconstruction error ranged from 0.7 to 1.6 mm and rotational reconstruction error ranged from 0.3 to 2.6°. Intraclass correlation coefficients indicated high interrater reliability for measurements within the imaging plane (ICC 0.97-0.99). Computational efficiency indicates that this method may be used to compile data sets large enough to account for population variance, and potentially expand the use of positional MRI as a quantitative biomechanics research tool.
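The reference-to-postural alignment step described above, fitting a high-resolution vertebra model to a low-resolution postural scan, can be sketched with the classic Kabsch algorithm on matched landmark points. This is an illustrative stand-in under the assumption of known point correspondences; the paper's semi-automated boundary identification is not reproduced.

```python
import numpy as np

def kabsch(P, Q):
    """Best-fit rigid transform (rotation R, translation t) minimizing
    sum_i ||R @ P[i] + t - Q[i]||^2 for matched 3D point sets P, Q."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    # sign correction guarantees a proper rotation (det = +1, no reflection)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp
```

Applied to landmarks digitized on the reference scan and the corresponding points in the postural scan, the recovered (R, t) gives exactly the kind of per-vertebra translation and rotation whose reconstruction errors the abstract quantifies.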
WaveQ3D: Fast and accurate acoustic transmission loss (TL) eigenrays, in littoral environments
NASA Astrophysics Data System (ADS)
Reilly, Sean M.
This study defines a new 3D Gaussian ray bundling acoustic transmission loss model in geodetic coordinates: latitude, longitude, and altitude. This approach is designed to lower the computation burden of computing accurate environmental effects in sonar training application by eliminating the need to transform the ocean environment into a collection of Nx2D Cartesian radials. This approach also improves model accuracy by incorporating real world 3D effects, like horizontal refraction, into the model. This study starts with derivations for a 3D variant of Gaussian ray bundles in this coordinate system. To verify the accuracy of this approach, acoustic propagation predictions of transmission loss, time of arrival, and propagation direction are compared to analytic solutions and other models. To validate the model's ability to predict real world phenomena, predictions of transmission loss and propagation direction are compared to at-sea measurements, in an environment where strong horizontal refraction effect have been observed. This model has been integrated into U.S. Navy active sonar training system applications, where testing has demonstrated its ability to improve transmission loss calculation speed without sacrificing accuracy.
LinkImpute: Fast and Accurate Genotype Imputation for Nonmodel Organisms.
Money, Daniel; Gardner, Kyle; Migicovsky, Zoë; Schwaninger, Heidi; Zhong, Gan-Yuan; Myles, Sean
2015-09-15
Obtaining genome-wide genotype data from a set of individuals is the first step in many genomic studies, including genome-wide association and genomic selection. All genotyping methods suffer from some level of missing data, and genotype imputation can be used to fill in the missing data and improve the power of downstream analyses. Model organisms like human and cattle benefit from high-quality reference genomes and panels of reference genotypes that aid in imputation accuracy. In nonmodel organisms, however, genetic and physical maps often are either of poor quality or are completely absent, and there are no panels of reference genotypes available. There is therefore a need for imputation methods designed specifically for nonmodel organisms in which genomic resources are poorly developed and marker order is unreliable or unknown. Here we introduce LinkImpute, a software package based on a k-nearest neighbor genotype imputation method, LD-kNNi, which is designed for unordered markers. No physical or genetic maps are required, and it is designed to work on unphased genotype data from heterozygous species. It exploits the fact that markers useful for imputation often are not physically close to the missing genotype but rather distributed throughout the genome. Using genotyping-by-sequencing data from diverse and heterozygous accessions of apples, grapes, and maize, we compare LD-kNNi with several genotype imputation methods and show that LD-kNNi is fast, comparable in accuracy to the best existing methods, and exhibits the least bias in allele frequency estimates.
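A bare-bones sketch of the LD-kNNi idea: rank markers by LD (squared correlation) with the target marker, measure individual-to-individual distance over only the top-LD markers, and impute by a distance-weighted vote among the k nearest individuals. This toy assumes the target entry is the only missing value and uses simple Pearson r² as the LD proxy; the real LinkImpute implementation handles genome-wide missingness and is far more efficient.

```python
import numpy as np

def impute(G, i, m, k=5, top_l=10):
    """Impute genotype (0/1/2; -1 = missing) of individual i at marker m."""
    # candidate donors: other individuals with an observed genotype at m
    cand = np.array([r for r in range(G.shape[0]) if r != i and G[r, m] >= 0])
    sub = G[cand]
    # squared correlation of every marker with marker m (LD proxy)
    r2 = np.zeros(G.shape[1])
    for j in range(G.shape[1]):
        if j == m:
            continue
        c = np.corrcoef(sub[:, j], sub[:, m])[0, 1]
        r2[j] = 0.0 if np.isnan(c) else c * c
    ld_markers = np.argsort(r2)[::-1][:top_l]        # unordered: no map needed
    # Manhattan distance to each donor over the high-LD markers only
    d = np.abs(sub[:, ld_markers] - G[i, ld_markers]).sum(axis=1)
    votes = np.zeros(3)
    for idx in np.argsort(d)[:k]:                    # k nearest donors vote
        votes[G[cand[idx], m]] += 1.0 / (1.0 + d[idx])
    return int(np.argmax(votes))
```

Note that the top-LD markers are selected purely by correlation, never by position, which is exactly what makes the approach usable when marker order is unreliable or unknown.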
Fast, accurate and easy-to-pipeline methods for amplicon sequence processing
NASA Astrophysics Data System (ADS)
Antonielli, Livio; Sessitsch, Angela
2016-04-01
Next generation sequencing (NGS) technologies have been established for years as an essential resource in microbiology. While on the one hand metagenomic studies can benefit from the continuously increasing throughput of the Illumina (Solexa) technology, on the other hand the spread of third generation sequencing technologies (PacBio, Oxford Nanopore) is taking whole genome sequencing beyond the assembly of fragmented draft genomes, making it now possible to finish bacterial genomes even without short read correction. Besides (meta)genomic analysis, next-generation amplicon sequencing remains fundamental for microbial studies. Amplicon sequencing of the 16S rRNA gene and ITS (Internal Transcribed Spacer) remains a well-established, widespread method for a multitude of purposes concerning the identification and comparison of archaeal/bacterial (16S rRNA gene) and fungal (ITS) communities occurring in diverse environments. Numerous pipelines have been developed to process NGS-derived amplicon sequences, among which Mothur, QIIME and USEARCH are the best known and most cited. The entire process, from initial raw sequence data through read error correction, paired-end read assembly, primer stripping, quality filtering, clustering, OTU taxonomic classification and BIOM table rarefaction, as well as alternative "normalization" methods, will be addressed. An effective and accurate strategy will be presented using state-of-the-art bioinformatic tools, and the example of a straightforward one-script pipeline for 16S rRNA gene or ITS MiSeq amplicon sequencing will be provided. Finally, instructions on how to automatically retrieve nucleotide sequences from NCBI and therefore apply the pipeline to targets other than the 16S rRNA gene (Greengenes, SILVA) and ITS (UNITE) will be discussed.
NASA Astrophysics Data System (ADS)
Gómez-Pedrero, José A.; Rodríguez-Ibañez, Diego; Alonso, José; Quirgoa, Juan A.
2015-09-01
With the advent of techniques devised for the mass production of optical components made with surfaces of arbitrary form (also known as free-form surfaces) in recent years, a parallel development of measuring systems adapted for these new kinds of surfaces constitutes a real necessity for the industry. Profilometry is one of the preferred methods for the assessment of the quality of a surface, and is widely employed in the optical fabrication industry for the quality control of its products. In this work, we present the design, development and assembly of a new profilometer with five axes of movement, specifically suited to the measurement of medium-size (up to 150 mm in diameter) free-form optical surfaces with sub-micrometer accuracy and low measuring times. The apparatus is formed by three X, Y, Z linear motorized positioners plus an additional angular positioner and a tilt positioner, employed to accurately locate the surface to be measured and the probe, which can be mechanical or optical, the optical one being a confocal sensor based on chromatic aberration. Both optical and mechanical probes guarantee an accuracy better than one micrometer in the determination of the surface height, thus ensuring an accuracy in the surface curvatures of the order of 0.01 D or better. An original calibration procedure based on the measurement of a precision sphere has been developed in order to correct the perpendicularity error between the axes of the linear positioners. To reduce the measuring time of the profilometer, custom electronics based on an Arduino™ controller has been designed and produced in order to synchronize the five motorized positioners and the optical and mechanical probes, so that a medium-size surface (around 10 cm in diameter) with a dynamic range in curvature of around 10 D can be measured in less than 300 seconds (using three axes), keeping the resolution in height and curvature at the figures mentioned above.
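The calibration against a precision sphere starts from recovering the sphere's center and radius from sampled points. Writing the sphere equation as x² + y² + z² = 2ax + 2by + 2cz + d makes the fit linear in (a, b, c, d), so ordinary least squares suffices. The sketch below uses synthetic noise-free points; the subsequent perpendicularity-error correction of the axes is not modeled.

```python
import numpy as np

def fit_sphere(pts):
    """Least-squares sphere fit via the linear form
    x^2 + y^2 + z^2 = 2ax + 2by + 2cz + d; returns (center, radius),
    with radius recovered from d = R^2 - |center|^2."""
    A = np.column_stack([2.0 * pts, np.ones(len(pts))])
    b = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    return center, float(np.sqrt(sol[3] + center @ center))

# Synthetic profilometer samples on a sphere (center and radius are assumptions)
rng = np.random.default_rng(4)
u = rng.standard_normal((500, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)       # random unit directions
pts = np.array([10.0, -5.0, 30.0]) + 12.7 * u       # points on the sphere surface
center, radius = fit_sphere(pts)
```

In a real calibration, the systematic residuals of such a fit (rather than the fit itself) carry the axis perpendicularity errors, which is what the original procedure described in the abstract exploits.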
Clark, Alex M; Bunin, Barry A; Litterman, Nadia K; Schürer, Stephan C; Visser, Ubbo
2014-01-01
Bioinformatics and computer aided drug design rely on the curation of a large number of protocols for biological assays that measure the ability of potential drugs to achieve a therapeutic effect. These assay protocols are generally published by scientists in the form of plain text, which needs to be more precisely annotated in order to be useful to software methods. We have developed a pragmatic approach to describing assays according to the semantic definitions of the BioAssay Ontology (BAO) project, using a hybrid of machine learning based on natural language processing, and a simplified user interface designed to help scientists curate their data with minimum effort. We have carried out this work based on the premise that pure machine learning is insufficiently accurate, and that expecting scientists to find the time to annotate their protocols manually is unrealistic. By combining these approaches, we have created an effective prototype for which annotation of bioassay text within the domain of the training set can be accomplished very quickly. Well-trained annotations require single-click user approval, while annotations from outside the training set domain can be identified using the search feature of a well-designed user interface, and subsequently used to improve the underlying models. By drastically reducing the time required for scientists to annotate their assays, we can realistically advocate for semantic annotation to become a standard part of the publication process. Once even a small proportion of the public body of bioassay data is marked up, bioinformatics researchers can begin to construct sophisticated and useful searching and analysis algorithms that will provide a diverse and powerful set of tools for drug discovery researchers.
NASA Astrophysics Data System (ADS)
Kopparla, P.; Natraj, V.; Shia, R. L.; Spurr, R. J. D.; Crisp, D.; Yung, Y. L.
2015-12-01
Radiative transfer (RT) computations form the engine of atmospheric retrieval codes. However, full treatment of RT processes is computationally expensive, prompting usage of two-stream approximations in current exoplanetary atmospheric retrieval codes [Line et al., 2013]. Natraj et al. [2005, 2010] and Spurr and Natraj [2013] demonstrated the ability of a technique using principal component analysis (PCA) to speed up RT computations. In the PCA method for RT performance enhancement, empirical orthogonal functions are developed for binned sets of inherent optical properties that possess some redundancy; costly multiple-scattering RT calculations are only done for those few optical states corresponding to the most important principal components, and correction factors are applied to approximate radiation fields. Kopparla et al. [2015, in preparation] extended the PCA method to a broadband spectral region from the ultraviolet to the shortwave infrared (0.3-3 micron), accounting for major gas absorptions in this region. Here, we apply the PCA method to some typical (exo-)planetary retrieval problems. Comparisons between the new model, called the Universal Principal Component Analysis Radiative Transfer (UPCART) model, two-stream models and line-by-line RT models are performed for spectral radiances, spectral fluxes and broadband fluxes. Each of these is calculated at the top of the atmosphere for several scenarios with varying aerosol types, extinction and scattering optical depth profiles, and stellar and viewing geometries. We demonstrate that very accurate radiance and flux estimates can be obtained, with better than 1% accuracy in all spectral regions and better than 0.1% in most cases, as compared to a numerically exact line-by-line RT model. The accuracy is enhanced when the results are convolved to typical instrument resolutions. The operational speed and accuracy of UPCART can be further improved by optimizing binning schemes and parallelizing the codes.
Fast and accurate determination of K, Ca, and Mg in human serum by sector field ICP-MS.
Yu, Lee L; Davis, W Clay; Nuevo Ordonez, Yoana; Long, Stephen E
2013-11-01
Electrolytes in serum are important biomarkers for skeletal and cellular health. The levels of electrolytes are monitored by measuring the Ca, Mg, K, and Na in blood serum. Many reference methods have been developed for the determination of Ca, Mg, and K in clinical measurements; however, isotope dilution thermal ionization mass spectrometry (ID-TIMS) has traditionally been the primary reference method serving as an anchor for traceability and accuracy for these secondary reference methods. The sample matrix must be separated before ID-TIMS measurements, which is a slow and tedious process that has hindered the adoption of the technique in routine clinical measurements. We have developed a fast and accurate method for the determination of Ca, Mg, and K in serum by taking advantage of the higher mass resolution capability of modern sector field inductively coupled plasma mass spectrometry (SF-ICP-MS). Each serum sample was spiked with a mixture containing enriched (44)Ca, (26)Mg, and (41)K, and the (42)Ca(+):(44)Ca(+), (24)Mg(+):(26)Mg(+), and (39)K(+):(41)K(+) ratios were measured. The Ca and Mg ratios were measured in medium resolution mode (m/Δm ≈ 4 500), and the K ratio in high resolution mode (m/Δm ≈ 10 000). Residual (40)Ar(1)H(+) interference was still observed, but the deleterious effects of the interference were minimized by measuring the sample at K > 100 ng g(-1). The interferences of Sr(++) at the two Ca isotopes were less than 0.25 % of the analyte signal, and they were corrected with the (88)Sr(+) intensity by using the Sr(++):Sr(+) ratio. The sample preparation involved only simple dilutions, and the measurement using this sample preparation approach is known as dilution-and-shoot (DNS). The DNS approach was validated with samples prepared via the traditional acid digestion approach followed by ID-SF-ICP-MS measurement. DNS and digested samples of SRM 956c were measured with ID-SF-ICP-MS for quality assurance, and the results (mean
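The isotope-dilution principle behind such measurements can be illustrated with a small numeric sketch. This is a generic two-isotope mass balance, not the specific NIST procedure; the function and variable names are illustrative:

```python
def amount_from_isotope_dilution(n_spike, r_sample, r_spike, r_blend):
    """Amount of the reference isotope in the sample, from a closed
    two-isotope mass balance.  All ratios are (enriched):(reference).
    Derivation: r_blend*(n_sample + n_spike) equals the summed
    enriched-isotope amounts r_sample*n_sample + r_spike*n_spike."""
    return n_spike * (r_spike - r_blend) / (r_blend - r_sample)

# Synthetic check: the sample holds 3 units of the reference isotope
# (ratio 0.5) and the spike holds 2 units (ratio 10).
blend_ratio = (3 * 0.5 + 2 * 10) / (3 + 2)
print(round(amount_from_isotope_dilution(2, 0.5, 10, blend_ratio), 6))  # → 3.0
```

With the measured blend ratio and a well-characterized spike, the sample amount follows directly; matrix separation is unnecessary as long as the ratio itself is interference-free, which is what the higher mass resolution of SF-ICP-MS provides.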
A simple and fast representation space for classifying complex time series
NASA Astrophysics Data System (ADS)
Zunino, Luciano; Olivares, Felipe; Bariviera, Aurelio F.; Rosso, Osvaldo A.
2017-03-01
In the context of time series analysis, considerable effort has been directed towards the implementation of efficient discriminating statistical quantifiers. Very recently, a simple and fast representation space has been introduced, namely the number of turning points versus the Abbe value. It is able to separate time series generated by stationary and non-stationary processes with long-range dependences. In this work we show that this bidimensional approach is useful for distinguishing complex time series: different sets of financial and physiological data are efficiently discriminated. Additionally, a multiscale generalization that takes into account the multiple time scales often involved in complex systems has also been proposed. This multiscale analysis is essential to reach a higher discriminative power between physiological time series in health and disease.
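Both quantifiers are simple to compute. A minimal sketch, using one common definition of the Abbe value (the halved von Neumann ratio; the paper's exact normalization may differ):

```python
def turning_points(x):
    """Count interior points that are strict local maxima or minima."""
    return sum(
        1 for i in range(1, len(x) - 1)
        if (x[i] - x[i - 1]) * (x[i + 1] - x[i]) < 0
    )

def abbe_value(x):
    """Halved von Neumann ratio: mean squared successive difference
    over twice the variance (close to 1 for white noise, near 0 for
    smooth trends)."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    msd = sum((x[i + 1] - x[i]) ** 2 for i in range(n - 1)) / (n - 1)
    return msd / (2 * var)

print(turning_points([1, 2, 1, 2, 1]))  # alternating series → 3
print(turning_points([1, 2, 3, 4, 5]))  # monotone trend → 0
```

Plotting each series as the point (turning points, Abbe value) yields the bidimensional representation space the abstract describes.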
Fast and simple scheme for generating NOON states of photons in circuit QED.
Su, Qi-Ping; Yang, Chui-Ping; Zheng, Shi-Biao
2014-01-28
The generation, manipulation and fundamental understanding of entanglement lie at the very heart of quantum mechanics. Among various types of entangled states, NOON states are a special kind of quantum entangled state with two orthogonal component states in maximal superposition, and they have a wide range of potential applications in quantum communication and quantum information processing. Here, we propose a fast and simple scheme for generating NOON states of photons in two superconducting resonators by using a single superconducting transmon qutrit. Because only one superconducting qutrit and two resonators are used, the experimental setup for this scheme is much simplified compared with the previous proposals requiring a setup of two superconducting qutrits and three cavities. In addition, this scheme is easier and faster to implement than the previous proposals, which require using a complex microwave pulse, or a small pulse Rabi frequency in order to avoid nonresonant transitions.
Fast and simple method for Goss texture evaluation by neutron diffraction
NASA Astrophysics Data System (ADS)
Kucerakova, M.; Kolařík, K.; Čapek, J.; Vratislav, S.; Kalvoda, L.
2016-09-01
The requirement of low power losses is one of the crucial demands placed on the properties of electrical steel sheets used in the construction of various magnetic circuits. For cold-rolled grain-oriented (CRGO) Fe-3%Si sheets, used in the majority of power distribution transformers, the Goss texture {110}<001> is known to provide the best utility properties (low power losses, high magnetic permeability). Due to the coarse grain size of CRGO steel, neutron diffraction (ND) is predominantly used to characterize the sheets' texture in order to achieve statistically significant data. In this paper, we present a fast and simple method for characterizing the level of Goss texture perfection in CRGO steel sheets based on monochromatic ND. The method is tested on 8 samples differing in fabrication technology and magnetic properties. Satisfactory performance of the method and its suitability for detailed texture analysis are demonstrated by juxtaposing the obtained textural characteristics with the magnetic characteristics measured by the Barkhausen method.
Fast accurate MEG source localization using a multilayer perceptron trained with real brain noise
NASA Astrophysics Data System (ADS)
Jun, Sung Chan; Pearlmutter, Barak A.; Nolte, Guido
2002-07-01
Iterative gradient methods such as Levenberg-Marquardt (LM) are in widespread use for source localization from electroencephalographic (EEG) and magnetoencephalographic (MEG) signals. Unfortunately, LM depends sensitively on the initial guess, necessitating repeated runs. This, combined with LM's high per-step cost, makes its computational burden quite high. To reduce this burden, we trained a multilayer perceptron (MLP) as a real-time localizer. We used an analytical model of quasistatic electromagnetic propagation through a spherical head to map randomly chosen dipoles to sensor activities according to the sensor geometry of a 4D Neuroimaging Neuromag-122 MEG system, and trained an MLP to invert this mapping in the absence of noise or in the presence of various sorts of noise such as white Gaussian noise, correlated noise, or real brain noise. An MLP structure was chosen to trade off computation and accuracy. This MLP was trained four times, once with each type of noise. We measured the effects of initial guesses on LM performance, which motivated a hybrid MLP-start-LM method, in which the trained MLP initializes LM. We also compared the localization performance of LM, MLPs, and hybrid MLP-start-LMs for realistic brain signals. Trained MLPs are much faster than other methods, while the hybrid MLP-start-LMs are faster and more accurate than fixed-4-start-LM. In particular, the hybrid MLP-start-LM initialized by an MLP trained with the real brain noise dataset is 60 times faster and is comparable in accuracy to random-20-start-LM, and this hybrid system (localization error: 0.28 cm, computation time: 36 ms) shows almost as good performance as optimal-1-start-LM (localization error: 0.23 cm, computation time: 22 ms), which initializes LM with the correct dipole location. MLPs trained with noise perform better than the MLP trained without noise, and the MLP trained with real brain noise is almost as good an initial guesser for LM as the correct dipole location.
Fast and Accurate Discovery of Degenerate Linear Motifs in Protein Sequences
Levy, Emmanuel D.; Michnick, Stephen W.
2014-01-01
Linear motifs mediate a wide variety of cellular functions, which makes their characterization in protein sequences crucial to understanding cellular systems. However, the short length and degenerate nature of linear motifs make their discovery a difficult problem. Here, we introduce MotifHound, an algorithm particularly suited for the discovery of small and degenerate linear motifs. MotifHound performs an exact and exhaustive enumeration of all motifs present in proteins of interest, including all of their degenerate forms, and scores the overrepresentation of each motif based on its occurrence in proteins of interest relative to a background (e.g., proteome) using the hypergeometric distribution. To assess MotifHound, we benchmarked it together with state-of-the-art algorithms. The benchmark consists of 11,880 sets of proteins from S. cerevisiae; in each set, we artificially spiked-in one motif varying in terms of three key parameters, (i) number of occurrences, (ii) length and (iii) the number of degenerate or “wildcard” positions. The benchmark enabled the evaluation of the impact of these three properties on the performance of the different algorithms. The results showed that MotifHound and SLiMFinder were the most accurate in detecting degenerate linear motifs. Interestingly, MotifHound was 15 to 20 times faster at comparable accuracy and performed best in the discovery of highly degenerate motifs. We complemented the benchmark by an analysis of proteins experimentally shown to bind the FUS1 SH3 domain from S. cerevisiae. Using the full-length protein partners as sole information, MotifHound recapitulated most experimentally determined motifs binding to the FUS1 SH3 domain. Moreover, these motifs exhibited properties typical of SH3 binding peptides, e.g., high intrinsic disorder and evolutionary conservation, despite the fact that none of these properties were used as prior information. MotifHound is available (http://michnick.bcm.umontreal.ca or http
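MotifHound's overrepresentation score rests on the hypergeometric distribution. A minimal stdlib sketch of such a tail probability (illustrative of the statistic, not the authors' implementation):

```python
from math import comb

def hypergeom_pvalue(N, K, n, k):
    """P(X >= k) when drawing n proteins from a background of N,
    of which K contain the motif (hypergeometric upper tail)."""
    total = comb(N, n)
    return sum(
        comb(K, i) * comb(N - K, n - i) for i in range(k, min(n, K) + 1)
    ) / total

# All 4 sampled proteins carry a motif present in 5 of 10 overall:
p = hypergeom_pvalue(N=10, K=5, n=4, k=4)
print(round(p, 5))  # C(5,4)*C(5,0)/C(10,4) = 5/210 → 0.02381
```

A small p-value indicates the motif occurs in the proteins of interest far more often than the background proportion would predict, which is exactly the overrepresentation signal MotifHound ranks motifs by.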
NASA Astrophysics Data System (ADS)
Katata, Genki; Kajino, Mizuo; Hiraki, Takatoshi; Aikawa, Masahide; Kobayashi, Tomiki; Nagai, Haruyasu
2011-10-01
To apply a meteorological model to investigate fog occurrence, acidification and deposition in mountain forests, the meteorological model WRF was modified to calculate fog deposition accurately by using a simple linear function for fog deposition onto vegetation, derived from numerical experiments with the detailed multilayer atmosphere-vegetation-soil model (SOLVEG). The modified version of WRF that includes fog deposition (fog-WRF) was tested in a mountain forest on Mt. Rokko in Japan. fog-WRF provided a distinctly better prediction of the liquid water content of fog (LWC) than the original version of WRF. It also successfully simulated throughfall observations due to fog deposition inside the forest during the summer season that excluded the effect of forest edges. Using the linear relationship between fog deposition and altitude given by the fog-WRF calculations and the data from throughfall observations at a given altitude, the vertical distribution of fog deposition can be roughly estimated in mountain forests. A meteorological model that includes fog deposition will be useful in mapping fog deposition in mountain cloud forests.
Salaices Avila, Manuel Alejandro; Breiter, Roman; Mott, Henry
2007-01-01
Solid-phase microextraction (SPME) with gas chromatography is to be used for the assay of effluent liquid samples from soil column experiments associated with VOC fate/transport studies. One goal of the fate/transport studies is to develop accurate, highly reproducible column breakthrough curves for 1,2-cis-dichloroethylene (cis-DCE) and trichloroethylene (TCE) to better understand interactions with selected natural solid phases. For SPME, the sample equilibration time, the extraction temperature, and the ratio of the sample bottle volume to the liquid sample volume (V(T)/V(w)) are the critical factors that could influence the accuracy and precision of the measured results. Equilibrium between the gas phase and liquid phase was attained after 200 min of equilibration time. The temperature must be carefully controlled due to variation of both the Henry's constant (K(h)) and the fibre/gas-phase distribution coefficient (K(fg)): K(h) decreases with decreasing temperature while K(fg) increases. A low V(T)/V(w) yields better sensitivity but results in analyte losses and negative bias of the resultant assay. A high V(T)/V(w) ratio yields reduced sensitivity, but analyte losses were found to be minimal, leading to better accuracy and reproducibility. A fast SPME method was achieved: 5 min for SPME extraction and 3.10 min for GC analysis. A linear calibration function in the gas phase was developed to analyse the breakthrough curve data, linear over the range 0.9-236 µg l(-1) with a detection limit lower than 5 µg l(-1).
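The influence of the V(T)/V(w) ratio on headspace sensitivity follows from a simple closed-system mass balance. A sketch using a dimensionless Henry's constant K(h) = C(gas)/C(water); the numeric values are illustrative, not taken from the study:

```python
def headspace_fraction(kh, v_total, v_water):
    """Fraction of analyte in the gas phase at equilibrium in a
    closed vial.  Mass balance: C0*Vw = Cw*Vw + Kh*Cw*Vg, where the
    gas volume is Vg = Vt - Vw."""
    v_gas = v_total - v_water
    return kh * v_gas / (v_water + kh * v_gas)

# Illustrative dimensionless Kh ~ 0.4 (roughly TCE at room temperature):
print(round(headspace_fraction(0.4, 40.0, 20.0), 3))  # small headspace → 0.286
print(round(headspace_fraction(0.4, 40.0, 5.0), 3))   # large V(T)/V(w) → 0.737
```

The larger bottle-to-sample ratio pushes more analyte into the headspace per unit of liquid concentration, which is why high V(T)/V(w) trades sensitivity for the reduced losses and better reproducibility the abstract reports.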
SERF: A Simple, Effective, Robust, and Fast Image Super-Resolver From Cascaded Linear Regression.
Hu, Yanting; Wang, Nannan; Tao, Dacheng; Gao, Xinbo; Li, Xuelong
2016-09-01
Example learning-based image super-resolution techniques estimate a high-resolution image from a low-resolution input image by relying on high- and low-resolution image pairs. An important issue for these techniques is how to model the relationship between high- and low-resolution image patches: most existing complex models either generalize poorly to diverse natural images or require a lot of time for model training, while simple models have limited representation capability. In this paper, we propose a simple, effective, robust, and fast (SERF) image super-resolver. The proposed super-resolver is based on a series of linear least-squares functions, namely cascaded linear regression. It has few parameters to control the model and is thus able to robustly adapt to different image data sets and experimental settings. The linear least-squares functions lead to closed-form solutions and therefore achieve computationally efficient implementations. To effectively decrease the gap between the estimated high-resolution patches and the ground truth, we group image patches into clusters via the k-means algorithm and learn a linear regressor for each cluster at each iteration. The cascaded learning process gradually decreases the gap of high-frequency detail between the estimated high-resolution image patch and the ground truth image patch and simultaneously obtains the linear regression parameters. Experimental results show that the proposed method achieves superior performance with lower time consumption than the state-of-the-art methods.
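One cascade stage, cluster the inputs and then fit an independent linear least-squares regressor per cluster, can be sketched in miniature with scalar features. Fixed centroids stand in for k-means here, and all names are illustrative, not the SERF implementation:

```python
def fit_per_cluster(xs, ys, centroids):
    """Assign each (x, y) pair to its nearest centroid, then fit a
    closed-form least-squares line y = a*x + b within each cluster."""
    groups = {c: [] for c in centroids}
    for x, y in zip(xs, ys):
        nearest = min(centroids, key=lambda c: abs(x - c))
        groups[nearest].append((x, y))
    models = {}
    for c, pts in groups.items():
        n = len(pts)
        sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
        sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
        a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        models[c] = (a, sy / n - a * sx / n)  # (slope, intercept)
    return models

# Two clusters governed by different local linear maps:
xs = [0.0, 1.0, 2.0, 10.0, 11.0, 12.0]
ys = [0.0, 2.0, 4.0, 5.0, 5.5, 6.0]          # slope 2, then slope 0.5
models = fit_per_cluster(xs, ys, centroids=(1.0, 11.0))
print(models[1.0][0], models[11.0][0])       # → 2.0 0.5
```

A single global regressor would average the two slopes; clustering first lets each simple linear model capture its own regime, which is the design rationale behind SERF's per-cluster regressors.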
A fast and accurate method for detection of IBD shared haplotypes in genome-wide SNP data.
Bjelland, Douglas W; Lingala, Uday; Patel, Piyush S; Jones, Matt; Keller, Matthew C
2017-02-08
Identical by descent (IBD) segments are used to understand a number of fundamental issues in genetics. IBD segments are typically detected using long stretches of identical alleles between haplotypes in phased, whole-genome SNP data. Phase or SNP call errors in genomic data can degrade the accuracy of IBD detection and lead to false-positive/negative calls and to under/overextension of true IBD segments. Furthermore, the number of comparisons increases quadratically with sample size, requiring high computational efficiency. We developed a new IBD segment detection program, FISHR (Find IBD Shared Haplotypes Rapidly), in an attempt to accurately detect IBD segments and to better estimate their endpoints using an algorithm that is fast enough to be deployed on very large whole-genome SNP data sets. We compared the performance of FISHR to three leading IBD segment detection programs: GERMLINE, refined IBD, and HaploScore. Using simulated and real genomic sequence data, we show that FISHR is slightly more accurate than all programs at detecting long (>3 cM) IBD segments but slightly less accurate than refined IBD at detecting short (~1 cM) IBD segments. More centrally, FISHR outperforms all programs in determining the true endpoints of IBD segments, which is crucial for several applications of IBD information. FISHR takes two to three times longer than GERMLINE to run, whereas both GERMLINE and FISHR were orders of magnitude faster than refined IBD and HaploScore. Overall, FISHR provides accurate IBD detection in unrelated individuals and is computationally efficient enough to be utilized on large SNP data sets of more than 60 000 individuals. European Journal of Human Genetics advance online publication, 8 February 2017; doi:10.1038/ejhg.2017.6.
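The core operation, scanning a pair of phased haplotypes for long runs of identical alleles, can be sketched as follows. This is a naive scan for illustration, not FISHR's algorithm; in practice marker indices would be converted to genetic distance in cM and error tolerance added:

```python
def shared_segments(hap1, hap2, min_len):
    """Return (start, end) index pairs of runs where the two phased
    haplotypes carry identical alleles for at least min_len markers."""
    segments, start = [], None
    for i, (a, b) in enumerate(zip(hap1, hap2)):
        if a == b:
            if start is None:
                start = i
        else:
            if start is not None and i - start >= min_len:
                segments.append((start, i))
            start = None
    if start is not None and len(hap1) - start >= min_len:
        segments.append((start, len(hap1)))
    return segments

h1 = [0, 1, 1, 0, 1, 0, 0, 1, 1]
h2 = [1, 1, 1, 0, 1, 0, 1, 1, 1]
print(shared_segments(h1, h2, min_len=4))  # → [(1, 6)]
```

A single genotyping or phasing error splits one true segment into two short ones, which is why endpoint estimation under error, FISHR's focus, matters more than the raw scan.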
Yuan, Chao; Kosewick, Justin; Wang, Sihe
2013-08-01
The measurement of nicotine and its metabolites has been used to monitor tobacco use. A high-sensitivity method (<1 ng/mL) is necessary for the measurement in serum or plasma to differentiate nonsmokers from passive smokers. Here, we report a novel LC-MS/MS method to quantify nicotine, cotinine, and nornicotine in serum with high sensitivity. Sample preparation involved only protein precipitation, followed by online turbulent flow extraction and analysis on a porous graphitic carbon column in alkaline conditions. The chromatography time was 4 min. No significant matrix effects or interference were observed. The lower limit of quantification was 0.36, 0.32, and 0.38 ng/mL for nicotine, cotinine, and nornicotine, respectively, while accuracy was 91.6-117.1%. No carryover was observed up to a concentration of 48, 550, and 48 ng/mL for nicotine, cotinine, and nornicotine, respectively. Total CV was <6.5%. The measurement of nicotine and cotinine was compared with an independent LC-MS/MS method and concordant results were obtained. In conclusion, this new method was simple, fast, sensitive, and accurate. It was validated to measure nicotine, cotinine, and nornicotine in serum for monitoring tobacco use.
Roblin, Douglas; Joski, Peter; Ren, Junling; Farmer, Robert; Baldwin, David; Carrell, David; Hart, Gene; Pardee, Roy; Bachman, Donald
2010-01-01
Background and Aims: Individual-level race/ethnicity is important for research into causes and consequences of health disparities. For various non-research reasons, it has rarely been collected on enrollees in integrated delivery systems. Individual-level race/ethnicity can be found in medical record documentation. Manual abstraction on large numbers of medical records is costly. We developed a simple SAS algorithm for electronic abstraction of white and African American race from digitized progress notes and evaluated its accuracy by comparing electronically abstracted race with other data sources. Methods: A simple SAS algorithm, based on text search strings (e.g. white male, African American woman), scanned digitized progress notes for provider face-to-face visits from 2005 through July 2009 in Kaiser Permanente Georgia’s (KPG) and Group Health Cooperative’s (GHC) electronic medical record systems. White and African American race was abstracted. If the patient had more than 1 visit with abstracted race, the patient was classified using the earliest visit. Abstracted race was linked at the individual-level to survey datasets with self-reported race (2005 survey of working age adults, 2007 survey of adults with hypertension, 2000–2005 Medicare surveys) and mother’s race on 2000–2006 birth certificates. White and African American race was abstracted from GHC progress notes from 2005 through July 2009 using the same algorithm and compared to self-reported race on health risk appraisals. Accuracy of the SAS algorithm was assessed by overall proportion matching race from the other datasets, Cohen’s kappa, and McNemar’s test. Results: White or African American race was electronically abstracted for 56,261 KPG and 6,427 GHC enrollees. Abstracted race matched race from the other datasets in 97–99% of enrollees. Cohen’s kappas were highly significant (p<0.05), ranging from 0.939 ± 0.013 (N=657 matches with hypertension survey records) to 0.994 ± 0
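Agreement statistics like the Cohen's kappa values reported above can be computed directly from paired labels. A minimal sketch with illustrative data (not the study's records):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two label sequences:
    kappa = (p_observed - p_expected) / (1 - p_expected)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical abstracted race vs. self-reported race for 8 enrollees:
abstracted  = ["W", "W", "AA", "AA", "W", "AA", "W", "W"]
self_report = ["W", "W", "AA", "AA", "W", "AA", "W", "AA"]
print(round(cohens_kappa(abstracted, self_report), 3))  # → 0.75
```

Kappa discounts the agreement expected from the marginal label frequencies alone, which is why it accompanies the raw percent-match figures in the abstract.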
Rozovski, Uri; Verstovsek, Srdan; Manshouri, Taghi; Dembitz, Vilma; Bozinovic, Ksenija; Newberry, Kate; Zhang, Ying; Bove, Joseph E.; Pierce, Sherry; Kantarjian, Hagop; Estrov, Zeev
2017-01-01
In most patients with primary myelofibrosis, one of three mutually exclusive somatic mutations is detected. In approximately 60% of patients, the Janus kinase 2 gene is mutated, in 20%, the calreticulin gene is mutated, and in 5%, the myeloproliferative leukemia virus gene is mutated. Although patients with mutated calreticulin or myeloproliferative leukemia genes have a favorable outcome, and those with none of these mutations have an unfavorable outcome, prognostication based on mutation status is challenging due to the heterogeneous survival of patients with mutated Janus kinase 2. To develop a prognostic model based on mutation status, we screened primary myelofibrosis patients seen at the MD Anderson Cancer Center, Houston, USA, between 2000 and 2013 for the presence of Janus kinase 2, calreticulin, and myeloproliferative leukemia mutations. Of 344 primary myelofibrosis patients, Janus kinase 2V617F was detected in 226 (66%), calreticulin mutation in 43 (12%), and myeloproliferative leukemia mutation in 16 (5%); 59 patients (17%) were triple-negatives. A 50% cut-off dichotomized Janus kinase 2-mutated patients into those with high Janus kinase 2V617F allele burden and favorable survival and those with low Janus kinase 2V617F allele burden and unfavorable survival. Patients with a favorable mutation status (high Janus kinase 2V617F allele burden/myeloproliferative leukemia/calreticulin mutation) and aged 65 years or under had a median survival of 126 months. Patients with one risk factor (low Janus kinase 2V617F allele burden/triple-negative or age >65 years) had an intermediate survival duration, and patients aged over 65 years with an adverse mutation status (low Janus kinase 2V617F allele burden or triple-negative) had a median survival of only 35 months. Our simple and easily applied age- and mutation status-based scoring system accurately predicted the survival of patients with primary myelofibrosis. PMID:27686378
NASA Astrophysics Data System (ADS)
Lee, Jeongjin; Kim, Namkug; Lee, Ho; Seo, Joon Beom; Won, Hyung Jin; Shin, Yong Moon; Shin, Yeong Gil
2007-03-01
Automatic liver segmentation is still a challenging task due to the ambiguity of the liver boundary and the complex context of nearby organs. In this paper, we propose a faster and more accurate way of liver segmentation in CT images with an enhanced level set method. The speed image for level-set propagation is smoothly generated by increasing the number of iterations in anisotropic diffusion filtering. This prevents the level-set propagation from stopping in front of local minima, which prevail in liver CT images due to irregular intensity distributions of the interior liver region. The curvature term of the shape-modeling level-set method captures well the shape variations of the liver along the slice. Finally, a rolling ball algorithm is applied to include enhanced vessels near the liver boundary. Our approach is tested and compared to manual segmentation results of eight CT scans with 5 mm slice distance using the average distance and volume error. The average distance error between corresponding liver boundaries is 1.58 mm and the average volume error is 2.2%. The average processing time for the segmentation of each slice is 5.2 seconds, which is much faster than conventional methods. The accurate and fast results of our method will expedite the next stage of liver volume quantification for liver transplantations.
NASA Astrophysics Data System (ADS)
Zheng, Chang-Jun; Gao, Hai-Feng; Du, Lei; Chen, Hai-Bo; Zhang, Chuanzeng
2016-01-01
An accurate numerical solver is developed in this paper for eigenproblems governed by the Helmholtz equation and formulated through the boundary element method. A contour integral method is used to convert the nonlinear eigenproblem into an ordinary eigenproblem, so that eigenvalues can be extracted accurately by solving a set of standard boundary element systems of equations. In order to accelerate the solution procedure, the parameters affecting the accuracy and efficiency of the method are studied and two contour paths are compared. Moreover, a wideband fast multipole method is implemented with a block IDR(s) solver to reduce the overall solution cost of the boundary element systems of equations with multiple right-hand sides. The Burton-Miller formulation is employed to identify the fictitious eigenfrequencies of the interior acoustic problems with multiply connected domains. The actual effect of the Burton-Miller formulation on tackling the fictitious eigenfrequency problem is investigated and the optimal choice of the coupling parameter as α = i/k is confirmed through exterior sphere examples. Furthermore, the numerical eigenvalues obtained by the developed method are compared with the results obtained by the finite element method to show the accuracy and efficiency of the developed method.
Fraisier, V; Clouvel, G; Jasaitis, A; Dimitrov, A; Piolot, T; Salamero, J
2015-09-01
Multiconfocal microscopy gives a good compromise between fast imaging and reasonable resolution. However, the low intensity of live fluorescent emitters is a major limitation to this technique. Aberrations induced by the optical setup, especially the mismatch of the refractive index and the biological sample itself, distort the point spread function and further reduce the amount of detected photons. Altogether, this leads to impaired image quality, preventing accurate analysis of molecular processes in biological samples and imaging deep in the sample. The amount of detected fluorescence can be improved with adaptive optics. Here, we used a compact adaptive optics module (adaptive optics box for sectioning optical microscopy), which was specifically designed for spinning disk confocal microscopy. The module overcomes undesired anomalies by correcting for most of the aberrations in confocal imaging. Existing aberration detection methods require prior illumination, which bleaches the sample. To avoid multiple exposures of the sample, we established an experimental model describing the depth dependence of major aberrations. This model allows us to correct for those aberrations when performing a z-stack, gradually increasing the amplitude of the correction with depth. It does not require illumination of the sample for aberration detection, thus minimizing photobleaching and phototoxicity. With this model, we improved both signal-to-background ratio and image contrast. Here, we present comparative studies on a variety of biological samples.
A simple, fast and sensitive screening LC-ESI-MS/MS method for antibiotics in fish.
Guidi, Letícia Rocha; Santos, Flávio Alves; Ribeiro, Ana Cláudia S R; Fernandes, Christian; Silva, Luiza H M; Gloria, Maria Beatriz A
2017-01-15
The objective of this study was to develop and validate a fast, sensitive and simple liquid chromatography-electrospray ionization-tandem mass spectrometry (LC-ESI-MS/MS) method for the screening of six classes of antibiotics (aminoglycosides, beta-lactams, macrolides, quinolones, sulfonamides and tetracyclines) in fish. Samples were extracted with trichloroacetic acid. LC separation was achieved on a Zorbax Eclipse XDB C18 column with gradient elution using 0.1% heptafluorobutyric acid in water and acetonitrile as the mobile phase. Analysis was carried out in multiple reaction monitoring mode via an electrospray interface operated in positive ionization mode, with sulfaphenazole as internal standard. The method was suitable for routine screening of 40 antibiotics, according to EC Guidelines for the Validation of Screening Methods for Residues of Veterinary Medicines, taking into consideration threshold value, cut-off factor, detection capability, limit of detection, sensitivity and specificity. Real fish samples (n=193) from aquaculture were analyzed and 15% were positive for enrofloxacin (a quinolone), one of them at a concentration higher than the level of interest (50 µg kg(-1)), suggesting possible contamination or illegal use of that antibiotic.
A simple and fast kinetic assay for phytases using phytic acid-protein complex as substrate.
Tran, Thuy Thi; Hatti-Kaul, Rajni; Dalsgaard, Søren; Yu, Shukun
2011-03-15
Phytase (EC 3.1.3.-) hydrolyzes phytate (IP(6)) present in cereals and grains to release inorganic phosphate (P(i)), thereby making it bioavailable. The most commonly used method to assay phytase, developed nearly a century ago, measures the P(i) liberated from IP(6). This traditional endpoint assay is time-consuming and well known for its cumbersomeness, in addition to requiring extra caution when handling the toxic reagents used. This article reports a simple, fast, and nontoxic kinetic method, adaptable for high throughput, for assaying phytase using IP(6)-lysozyme as a substrate. The assay is based on the principle that IP(6) forms stable turbid complexes with positively charged lysozyme over a wide pH range, and hydrolysis of the IP(6) in the complex is accompanied by a decrease in turbidity monitored at 600 nm. The turbidity decrease correlates well with the P(i) released from IP(6). This kinetic method was found to be useful in assaying histidine acid phytases, including 3- and 6-phytases, a class representing all commercial phytases, as well as the alkaline β-propeller phytase from Bacillus sp. The influences of temperature, pH, phosphate, and other salts on the kinetic assay were examined. All salts, including NaCl, CaCl(2), and phosphate, showed concentration-dependent interference.
Handley, Chris M; Hawe, Glenn I; Kell, Douglas B; Popelier, Paul L A
2009-08-14
To model liquid water correctly and to reproduce its structural, dynamic and thermodynamic properties warrants models that account accurately for electronic polarisation. We have previously demonstrated that polarisation can be represented by fluctuating multipole moments (derived by quantum chemical topology) predicted by multilayer perceptrons (MLPs) in response to the local structure of the cluster. Here we further develop this methodology of modeling polarisation, enabling control of the balance between accuracy (in terms of errors in the Coulomb energy) and computing time. First, the predictive ability and speed of two additional machine learning methods, radial basis function neural networks (RBFNNs) and Kriging, are assessed with respect to our previous MLP-based polarisable water models, for water dimer, trimer, tetramer, pentamer and hexamer clusters. Compared to MLPs, we find that RBFNNs achieve a 14-26% decrease in median Coulomb energy error, with a factor 2.5-3 slowdown in speed, whilst Kriging achieves a 40-67% decrease in median energy error with a factor 6.5-8.5 slowdown in speed. Then, these compromises between accuracy and speed are improved upon through a simple multi-objective optimisation to identify Pareto-optimal combinations. Compared to the Kriging results, combinations are found that are no less accurate (at the 90th energy error percentile), yet are 58% faster for the dimer, and 26% faster for the pentamer.
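The Pareto-optimal selection step mentioned in this abstract can be sketched generically: given candidate models scored on two objectives (median energy error and relative runtime), keep every candidate that is not dominated by another. A minimal sketch in Python, with purely hypothetical (error, time) pairs rather than the paper's data:

```python
def pareto_front(candidates):
    """Return the subset of (error, time) pairs not dominated by any other.

    A candidate dominates another if it is no worse in both objectives
    and is a different point (here: strictly, a distinct tuple).
    """
    front = []
    for c in candidates:
        dominated = any(
            o != c and o[0] <= c[0] and o[1] <= c[1]
            for o in candidates
        )
        if not dominated:
            front.append(c)
    return front

# Hypothetical (median energy error, relative runtime) pairs for model variants
models = [(1.0, 1.0), (0.8, 2.7), (0.5, 7.5), (0.9, 1.5), (0.6, 9.0)]
front = pareto_front(models)
print(front)
```

Any multi-objective optimisation over a small set of trained models ultimately reduces to selecting such a non-dominated set; for a handful of candidates a brute-force scan like this suffices.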
Shayesteh, Tavakol Heidari; Khajavi, Farzad; Khosroshahi, Abolfazl Ghafuri; Mahjub, Reza
2016-01-01
The determination of blood lead levels is the most useful indicator of the amount of lead absorbed by the human body. Various methods, like atomic absorption spectroscopy (AAS), have already been used for the detection of lead in biological fluids, but most of them rely on complicated, expensive instruments that require trained operators. In this study, a simple and accurate spectroscopic method for the determination of lead was developed and applied to the investigation of lead concentrations in biological samples. A silica gel column was used to extract lead and eliminate interfering agents in human serum samples. The column was washed with deionized water. The pH was adjusted to 8.2 using phosphate buffer, and tartrate and cyanide solutions were then added as masking agents. The lead content was extracted into an organic phase containing dithizone as a complexing agent, and the resulting dithizone-Pb(II) complex was detected by visible spectrophotometry at 538 nm. The recovery was found to be 84.6%. To validate the method, a calibration curve covering various concentration levels was constructed and proven to be linear in the range of 0.01-1.5 μg/ml, with an R(2) regression coefficient of 0.9968. The largest error values were -5.80% and +11.6% for intra-day and inter-day measurements, respectively, and the largest RSD values were 6.54% and 12.32%, respectively. Further, the limit of detection (LOD) was calculated to be 0.002 μg/ml. The developed method was applied to determine the lead content in the serum of volunteer miners, and no statistically significant difference was found between the data provided by this novel method and the data obtained from previously studied AAS.
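The calibration-curve linearity and detection-limit figures reported above can be illustrated with a short sketch. The absorbance data below are hypothetical, and the LOD is computed with the common 3.3·σ(blank)/slope convention, which is an assumption since the abstract does not state the exact formula used:

```python
import statistics

def linear_fit(x, y):
    """Ordinary least-squares fit y = a*x + b; returns slope, intercept, R^2."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    b = my - a * mx
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1 - ss_res / ss_tot

# Hypothetical absorbance readings at 538 nm over the 0.01-1.5 ug/ml range
conc = [0.01, 0.1, 0.25, 0.5, 0.75, 1.0, 1.5]
absorb = [0.005, 0.050, 0.124, 0.248, 0.371, 0.495, 0.743]
slope, intercept, r2 = linear_fit(conc, absorb)

# LOD by the 3.3*sigma/slope convention; blank replicates are hypothetical
blank_sd = statistics.stdev([0.0031, 0.0028, 0.0035, 0.0030, 0.0029])
lod = 3.3 * blank_sd / slope
print(f"slope={slope:.3f}, R^2={r2:.4f}, LOD={lod:.4f} ug/ml")
```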
FAMBE-pH: a fast and accurate method to compute the total solvation free energies of proteins.
Vorobjev, Yury N; Vila, Jorge A; Scheraga, Harold A
2008-09-04
A fast and accurate method to compute the total solvation free energies of proteins as a function of pH is presented. The method makes use of a combination of approaches, some of which have already appeared in the literature: (i) the Poisson equation is solved with an optimized fast adaptive multigrid boundary element (FAMBE) method; (ii) the electrostatic free energies of the ionizable sites are calculated for their neutral and charged states by using a detailed model of atomic charges; (iii) a set of optimal atomic radii is used to define a precise dielectric surface interface; (iv) a multilevel adaptive tessellation of this dielectric surface interface is achieved by using multisized boundary elements; and (v) 1:1 salt effects are included. The equilibrium proton binding/release is calculated with the Tanford-Schellman integral if the protein contains more than approximately 20-25 ionizable groups; for a smaller number of ionizable groups, the ionization partition function is calculated directly. The FAMBE method is tested as a function of pH (FAMBE-pH) with three proteins, namely, bovine pancreatic trypsin inhibitor (BPTI), hen egg white lysozyme (HEWL), and bovine pancreatic ribonuclease A (RNaseA). The results are (a) the FAMBE-pH method reproduces the observed pKa values of the ionizable groups of these proteins within an average absolute deviation of 0.4 pK units and a maximum error of 1.2 pK units and (b) comparison of the calculated total pH-dependent solvation free energy for BPTI, between the exact calculation of the ionization partition function and the Tanford-Schellman integral method, shows agreement within 1.2 kcal/mol. These results indicate that calculation of total solvation free energies with the FAMBE-pH method can provide an accurate prediction of protein conformational stability at a given fixed pH and, if coupled with molecular mechanics or molecular dynamics methods, can also be used for more realistic studies of protein folding and unfolding.
NASA Technical Reports Server (NTRS)
Goodwin, Sabine A.; Raj, P.
1999-01-01
Progress to date towards the development and validation of a fast, accurate and cost-effective aeroelastic method for advanced parallel computing platforms such as the IBM SP2 and the SGI Origin 2000 is presented in this paper. The ENSAERO code, developed at the NASA-Ames Research Center has been selected for this effort. The code allows for the computation of aeroelastic responses by simultaneously integrating the Euler or Navier-Stokes equations and the modal structural equations of motion. To assess the computational performance and accuracy of the ENSAERO code, this paper reports the results of the Navier-Stokes simulations of the transonic flow over a flexible aeroelastic wing body configuration. In addition, a forced harmonic oscillation analysis in the frequency domain and an analysis in the time domain are done on a wing undergoing a rigid pitch and plunge motion. Finally, to demonstrate the ENSAERO flutter-analysis capability, aeroelastic Euler and Navier-Stokes computations on an L-1011 wind tunnel model including pylon, nacelle and empennage are underway. All computational solutions are compared with experimental data to assess the level of accuracy of ENSAERO. As the computations described above are performed, a meticulous log of computational performance in terms of wall clock time, execution speed, memory and disk storage is kept. Code scalability is also demonstrated by studying the impact of varying the number of processors on computational performance on the IBM SP2 and the Origin 2000 systems.
Alvarez, M Lucrecia
2014-01-01
Urinary exosomes are nanovesicles (40-100 nm) of endocytic origin that are secreted into the urine when a multivesicular body fuses with the membrane of cells from all nephron segments. Interest in urinary exosomes intensified after the discovery that they contain not only protein and mRNA but also microRNA (miRNA) markers of renal dysfunction and structural injury. Currently, the most widely used protocol for the isolation of urinary exosomes is based on ultracentrifugation, a method that is time consuming, requires expensive equipment, and has low scalability, which limits its applicability in the clinical practice. In this chapter, a simple, fast, and highly scalable step-by-step method for isolation of urinary exosomes is described. This method starts with a 10-min centrifugation of 10 ml urine, then the supernatant is saved (SN1), and the pellet is treated with dithiothreitol and heat to release and recover those exosomes entrapped by polymeric Tamm-Horsfall protein. The treated pellet is then resuspended and centrifuged, and the supernatant obtained (SN2) is combined with the first supernatant, SN1. Next, 3.3 ml of ExoQuick-TC, a commercial exosome precipitation reagent, is added to the total supernatant (SN1 + SN2), mixed well, and saved for at least 12 h at 4 °C. Finally, a pellet of exosomes is obtained after a 30-min centrifugation of the supernatant/ExoQuick-TC mix. We previously compared this method with five others used to isolate urinary exosomes and found that this is the simplest, fastest, and most effective alternative to ultracentrifugation-based protocols if the goal of the study is RNA profiling. A method for isolation and quantification of miRNAs and mRNAs from urinary exosomes is also described here. In addition, we provide a step-by-step description of exosomal miRNA profiling using universal reverse transcription and SYBR qPCR.
Pizarro, Oscar; Friedman, Ariell; Bryson, Mitch; Williams, Stefan B; Madin, Joshua
2017-03-01
Visual 3D reconstruction techniques provide rich ecological and habitat structural information from underwater imagery. However, an unaided swimmer or diver struggles to navigate precisely over larger extents with consistent image overlap needed for visual reconstruction. While underwater robots have demonstrated systematic coverage of areas much larger than the footprint of a single image, access to suitable robotic systems is limited and requires specialized operators. Furthermore, robots are poor at navigating hydrodynamic habitats such as shallow coral reefs. We present a simple approach that constrains the motion of a swimmer using a line unwinding from a fixed central drum. The resulting motion is the involute of a circle, a spiral-like path with constant spacing between revolutions. We test this survey method at a broad range of habitats and hydrodynamic conditions encircling Lizard Island in the Great Barrier Reef, Australia. The approach generates fast, structured, repeatable, and large-extent surveys (~110 m(2) in 15 min) that can be performed with two people and are superior to the commonly used "mow the lawn" method. The amount of image overlap is a design parameter, allowing for surveys that can then be reliably used in an automated processing pipeline to generate 3D reconstructions, orthographically projected mosaics, and structural complexity indices. The individual images or full mosaics can also be labeled for benthic diversity and cover estimates. The survey method we present can serve as a standard approach to repeatedly collecting underwater imagery for high-resolution 2D mosaics and 3D reconstructions covering spatial extents much larger than a single image footprint without requiring sophisticated robotic systems or lengthy deployment of visual guides. As such, it opens up cost-effective novel observations to inform studies relating habitat structure to ecological processes and biodiversity at scales and spatial resolutions not readily
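The geometry behind this survey method is compact enough to sketch: a point on the involute of a circle of radius r at unwinding angle θ is (r(cos θ + θ sin θ), r(sin θ − θ cos θ)), and the gap between successive revolutions approaches the drum circumference 2πr, which is what makes the image overlap a design parameter. A minimal sketch, with a hypothetical drum radius:

```python
import math

def involute_point(r, theta):
    """Point on the involute of a circle of radius r at unwinding angle theta.

    The unwound line has length r*theta and stays tangent to the drum.
    """
    x = r * (math.cos(theta) + theta * math.sin(theta))
    y = r * (math.sin(theta) - theta * math.cos(theta))
    return x, y

drum_radius = 0.05  # 5 cm drum (hypothetical)

# Distance from the drum centre after successive full revolutions
radii = [math.hypot(*involute_point(drum_radius, 2 * math.pi * n)) for n in (1, 2, 3)]
gaps = [b - a for a, b in zip(radii, radii[1:])]
# Each gap approaches the drum circumference 2*pi*r (about 0.314 m here)
print(gaps)
```

Choosing the drum radius therefore fixes the track spacing, and hence the overlap between images taken on adjacent revolutions.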
A Fast, Accurate and Sensitive GC-FID Method for the Analyses of Glycols in Water and Urine
NASA Technical Reports Server (NTRS)
Kuo, C. Mike; Alverson, James T.; Gazda, Daniel B.
2017-01-01
Glycols, specifically ethylene glycol and 1,2-propanediol, are some of the major organic compounds found in the humidity condensate samples collected on the International Space Station. The current analytical method for glycols is a GC/MS method with direct sample injection. This method is simple and fast, but it is not very sensitive: reporting limits for ethylene glycol and 1,2-propanediol are only 1 ppm. A much more sensitive GC/FID method was developed, in which glycols are derivatized with benzoyl chloride for 10 minutes before being extracted with hexane. Using 1,3-propanediol as an internal standard, the detection limits for the GC/FID method were determined to be 50 ppb, and the analysis takes only 7 minutes. Data from the GC/MS and the new GC/FID methods show excellent agreement with each other. Factors affecting the sensitivity, including sample volume, NaOH concentration and volume, volume of benzoyl chloride, and reaction time and temperature, were investigated. Interferences during derivatization and possible methods to reduce them were also investigated.
Koubar, Khodor; Bekaert, Virgile; Brasse, David; Laquerriere, Patrice
2015-06-01
Bone mineral density plays an important role in the determination of bone strength and fracture risk, so it is very important to obtain accurate bone mineral density measurements. The microcomputerized tomography system provides 3D information about the architectural properties of bone. Quantitative analysis accuracy is decreased by the presence of artefacts in the reconstructed images, mainly beam hardening artefacts (such as cupping artefacts). In this paper, we introduce a new beam hardening correction method based on a postreconstruction technique performed with off-line water and bone linearization curves calculated experimentally, aiming to take into account the nonhomogeneity of the scanned animal. In order to evaluate the mass correction rate, a calibration line was established to convert the reconstructed linear attenuation coefficients into bone mass. The correction method was then applied to a multimaterial cylindrical phantom and to mouse skeleton images. Mass correction rates of up to 18% between uncorrected and corrected images were obtained, along with a marked improvement in the calculated mouse femur mass. Results were also compared to those obtained with the simple water linearization technique, which does not take the nonhomogeneity of the object into account.
NASA Astrophysics Data System (ADS)
Zhang, Jianzhong; Huang, Yueqin; Song, Lin-Ping; Liu, Qing-Huo
2011-03-01
We propose a new ray tracing technique for 3-D heterogeneous isotropic media based on bilinear traveltime interpolation and wave front group marching. In this technique, the medium is discretized into a series of rectangular cells. Two steps are carried out: a forward step, in which the wave front expansion is evolved from the sources over the whole computational domain, and a subsequent backward step, in which ray paths are calculated for any source-receiver configuration as desired. In the forward step, we derive a closed-form expression to calculate the traveltime at an arbitrary point in a cell using bilinear interpolation of the known traveltimes on the cell's surface. The group marching method (GMM), a fast wave front advancing method, is then applied to expand the wave front from the source to all grid nodes. In the backward step, ray paths starting from receivers are traced by finding the intersection points of potential ray propagation vectors with the surfaces of relevant cells. The same traveltime interpolation scheme is used to compute the candidate intersection points on all surfaces of each relevant cell; the point with the minimum traveltime is selected as a ray point, from which the procedure is repeated back to the source. A number of numerical experiments demonstrate that our 3-D ray tracing technique achieves very accurate computation of traveltimes and ray paths while taking much less computer time than existing popular methods such as the finite-difference-based GMM combined with maximum-gradient ray tracing, and the shortest path method.
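The interpolation at the heart of the forward step can be sketched with the standard bilinear formula on a unit cell face (this is the generic bilinear form, not the paper's specific closed-form expression, and the corner traveltimes below are hypothetical):

```python
def bilinear_traveltime(t00, t10, t01, t11, u, v):
    """Bilinearly interpolate the traveltime at fractional position (u, v)
    in [0,1]^2 on a rectangular cell face, given times at the four corners."""
    return (t00 * (1 - u) * (1 - v) + t10 * u * (1 - v)
            + t01 * (1 - u) * v + t11 * u * v)

# Hypothetical corner traveltimes (seconds) on one cell face
t_centre = bilinear_traveltime(1.00, 1.10, 1.05, 1.18, 0.5, 0.5)
print(t_centre)  # at the face centre this is the average of the four corners
```

Because the formula reproduces the corner values exactly and varies linearly along each edge, traveltimes interpolated on shared faces are continuous between neighbouring cells, which is what lets the backward ray tracing step walk from face to face consistently.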
NASA Astrophysics Data System (ADS)
Oñativia, Jon; Schultz, Simon R.; Dragotti, Pier Luigi
2013-08-01
Objective. Inferring the times of sequences of action potentials (APs) (spike trains) from neurophysiological data is a key problem in computational neuroscience. The detection of APs from two-photon imaging of calcium signals offers certain advantages over traditional electrophysiological approaches, as up to thousands of spatially and immunohistochemically defined neurons can be recorded simultaneously. However, due to noise, dye buffering and the limited sampling rates in common microscopy configurations, accurate detection of APs from calcium time series has proved to be a difficult problem. Approach. Here we introduce a novel approach to the problem making use of finite rate of innovation (FRI) theory (Vetterli et al 2002 IEEE Trans. Signal Process. 50 1417-28). For calcium transients well fit by a single exponential, the problem is reduced to reconstructing a stream of decaying exponentials. Signals made of a combination of exponentially decaying functions with different onset times are a subclass of FRI signals, for which much theory has recently been developed by the signal processing community. Main results. We demonstrate for the first time the use of FRI theory to retrieve the timing of APs from calcium transient time series. The final algorithm is fast, non-iterative and parallelizable. Spike inference can be performed in real-time for a population of neurons and does not require any training phase or learning to initialize parameters. Significance. The algorithm has been tested with both real data (obtained by simultaneous electrophysiology and multiphoton imaging of calcium signals in cerebellar Purkinje cell dendrites), and surrogate data, and outperforms several recently proposed methods for spike train inference from calcium imaging data.
Babić, S; Barišić, J; Malev, O; Klobučar, G; Popović, N Topić; Strunjak-Perović, I; Krasnići, N; Čož-Rakovac, R; Klobučar, R Sauerborn
2016-06-01
Sewage sludge (SS) is a complex organic by-product of wastewater treatment plants. Deposition of large amounts of SS can increase the risk of soil contamination; therefore, there is an increasing need for fast and accurate assessment of its toxic potential. Toxic effects of SS were tested on earthworm Eisenia fetida tissue at the subcellular and biochemical level. Earthworms were exposed to depot sludge (DS) at concentration ratios of 30 or 70%, and to undiluted as well as 100- and 10-times diluted active sludge (AS). Exposure to DS lasted 24/48 h (acute exposure), 96 h (semi-acute exposure) or 7/14/28 days (sub-chronic exposure); exposure to AS lasted 48 h. Toxic effects were assessed by measuring multixenobiotic resistance mechanism (MXR) activity and lipid peroxidation levels, as well as by observing morphological alterations and behavioural changes. Biochemical markers confirmed the presence of MXR inhibitors in the tested AS and DS and highlighted the presence of SS-induced oxidative stress. MXR inhibition and the thiobarbituric acid reactive substance (TBARS) concentration in the whole earthworm body were higher after exposure to the lower concentration of DS. Furthermore, histopathological changes revealed damage to earthworm body wall tissue layers as well as to the epithelial and chloragogen cells in the typhlosole region. These changes were proportional to the SS concentration in the tested soils and to exposure duration. The obtained results may contribute to the understanding of SS-induced toxic effects on terrestrial invertebrates exposed through soil contact and help identify defence mechanisms of earthworms.
McDonnell, Mark D; Tissera, Migel D; Vladusich, Tony; van Schaik, André; Tapson, Jonathan
2015-01-01
Recent advances in training deep (multi-layer) architectures have inspired a renaissance in neural network use. For example, deep convolutional networks are becoming the default option for difficult tasks on large datasets, such as image and speech recognition. However, here we show that error rates below 1% on the MNIST handwritten digit benchmark can be replicated with shallow non-convolutional neural networks. This is achieved by training such networks using the 'Extreme Learning Machine' (ELM) approach, which also enables a very rapid training time (∼10 minutes). Adding distortions, as is common practice for MNIST, reduces error rates even further. Our methods are also shown to be capable of achieving less than 5.5% error rates on the NORB image database. To achieve these results, we introduce several enhancements to the standard ELM algorithm, which individually and in combination can significantly improve performance. The main innovation is to ensure each hidden unit operates only on a randomly sized and positioned patch of each image. This form of random 'receptive field' sampling of the input ensures the input weight matrix is sparse, with about 90% of weights equal to zero. Furthermore, combining our methods with a small number of iterations of a single-batch backpropagation method can significantly reduce the number of hidden units required to achieve a particular performance. Our close-to-state-of-the-art results for MNIST and NORB suggest that the ease of use and accuracy of the ELM algorithm for designing a single-hidden-layer neural network classifier should cause it to be given greater consideration, either as a standalone method for simpler problems or as the final classification stage in deep neural networks applied to more difficult problems.
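The core ELM idea, fixed random hidden weights with a one-shot closed-form least-squares readout, can be sketched on a toy problem (this is the plain ELM, not the paper's receptive-field variant; the layer size, regularization constant and dataset are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class problem: points inside vs outside a circle
X = rng.uniform(-1, 1, size=(400, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 0.5).astype(float)
Y = np.stack([1 - y, y], axis=1)          # one-hot targets

# Random, fixed input weights and biases; only the readout is trained
n_hidden = 200
W = rng.normal(size=(2, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                    # hidden-layer activations

# Ridge-regularized least-squares readout, solved in closed form
lam = 1e-3
beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ Y)

pred = (H @ beta).argmax(axis=1)
acc = (pred == y).mean()
print(f"training accuracy: {acc:.3f}")
```

The absence of iterative weight updates in the hidden layer is what makes ELM training so fast; all the optimisation collapses into one regularized linear solve.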
Min, Junwei; Yao, Baoli; Ketelhut, Steffi; Engwer, Christian; Greve, Burkhard; Kemper, Björn
2017-01-15
We present a simple and fast phase aberration compensation method in digital holographic microscopy (DHM) for quantitative phase imaging of living cells. By analyzing the frequency spectrum of an off-axis hologram, phase aberrations can be compensated for automatically without fitting or pre-knowledge of the setup and/or the object. Simple and effective computation makes the method suitable for quantitative online monitoring with highly variable DHM systems. Results from automated quantitative phase imaging of living NIH-3T3 mouse fibroblasts demonstrate the effectiveness and the feasibility of the method.
NASA Astrophysics Data System (ADS)
Fischer, A.; Hoffmann, K.-H.
2004-03-01
In this case study a complex Otto engine simulation provides data including, but not limited to, effects from losses due to heat conduction, exhaust losses and frictional losses. This data is used as a benchmark to test whether the Novikov engine with heat leak, a simple endoreversible model, can reproduce the complex engine behavior quantitatively by an appropriate choice of model parameters. The reproduction obtained proves to be of high quality.
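For reference, the ideal Novikov engine (without the heat leak studied in this paper) has the well-known efficiency at maximum power η = 1 − √(Tc/Th), which sits below the Carnot limit 1 − Tc/Th. A minimal sketch with illustrative temperatures:

```python
import math

def carnot(t_hot, t_cold):
    """Carnot efficiency: upper bound for any heat engine."""
    return 1 - t_cold / t_hot

def novikov(t_hot, t_cold):
    """Efficiency at maximum power of the ideal Novikov endoreversible engine."""
    return 1 - math.sqrt(t_cold / t_hot)

# Illustrative engine-like temperatures (K)
th, tc = 1200.0, 300.0
print(f"Carnot: {carnot(th, tc):.3f}, Novikov: {novikov(th, tc):.3f}")
```

With these numbers the Carnot limit is 0.75 while the Novikov value is 0.5, which is why endoreversible models are often closer to the observed efficiencies of real engines than the Carnot bound.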
Moriarty, Tom
2016-11-21
The NREL cell measurement lab measures the IV parameters of cells of multiple sizes and configurations. A large contributing factor to errors and uncertainty in Jsc, Imax, Pmax and efficiency can be the spatial nonuniformity of the irradiance. Correcting for this nonuniformity through precise and frequent measurement can be very time consuming. This paper explains a simple, fast and effective method, based on bicubic interpolation, for determining and correcting for spatial nonuniformity, and verifies the method's efficacy.
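One common way to realize bicubic interpolation of a sparsely measured nonuniformity map is a separable Catmull-Rom cubic. This is a generic sketch and an assumption, since the abstract does not specify which bicubic variant is used, and the map values and cell current below are hypothetical:

```python
def catmull_rom(p0, p1, p2, p3, t):
    """1-D Catmull-Rom cubic interpolation between p1 and p2 (0 <= t <= 1)."""
    return 0.5 * (2 * p1
                  + (p2 - p0) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                  + (3 * p1 - p0 - 3 * p2 + p3) * t ** 3)

def bicubic(patch, u, v):
    """Separable Catmull-Rom bicubic on a 4x4 patch of grid samples;
    (u, v) in [0,1]^2 spans the central cell of the patch."""
    rows = [catmull_rom(*row, u) for row in patch]
    return catmull_rom(*rows, v)

# Hypothetical 4x4 patch of a measured irradiance nonuniformity map (relative)
patch = [[0.98, 0.99, 1.00, 1.00],
         [0.99, 1.00, 1.01, 1.01],
         [1.00, 1.01, 1.02, 1.02],
         [1.00, 1.01, 1.02, 1.03]]

# Interpolated nonuniformity at the centre of the central cell; a measured
# current would then be corrected by dividing out the local nonuniformity
f = bicubic(patch, 0.5, 0.5)
corrected = 0.1234 / f  # hypothetical cell current, amps
print(f, corrected)
```

The interpolant passes exactly through the measured grid values, so the correction is anchored to the actual nonuniformity measurements while filling in smoothly between them.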
Swanson, Jon; Audie, Joseph
2017-01-16
A fundamental and unsolved problem in biophysical chemistry is the development of a computationally simple, physically intuitive, and generally applicable method for accurately predicting and physically explaining protein-protein binding affinities from protein-protein interaction (PPI) complex coordinates. Here, we propose that the simplification of a previously described six-term PPI scoring function to a four-term function yields a simple expression of all physically and statistically meaningful terms that can be used to accurately predict and explain binding affinities for a well-defined subset of PPIs characterized by (1) crystallographic coordinates, (2) rigid-body association, (3) normal interface size, hydrophobicity, and hydrophilicity, and (4) high-quality experimental binding affinity measurements. We further propose that the four-term scoring function could be regarded as a core expression for future development into a more general PPI scoring function. Our work has clear implications for PPI modeling and structure-based drug design.
Gupta, V; Wang, Y; Romero, A; Heijmen, B; Hoogeman, M; Myronenko, A; Jordan, P
2014-06-01
Purpose: Various studies have demonstrated that online adaptive radiotherapy by real-time re-optimization of the treatment plan can improve organs-at-risk (OARs) sparing in the abdominal region. Its clinical implementation, however, requires fast and accurate auto-segmentation of OARs in CT scans acquired just before each treatment fraction. Auto-segmentation is particularly challenging in the abdominal region due to the frequently observed large deformations. We present a clinical validation of a new auto-segmentation method that uses fully automated non-rigid registration for propagating abdominal OAR contours from planning to daily treatment CT scans. Methods: OARs were manually contoured by an expert panel to obtain ground truth contours for repeat CT scans (3 per patient) of 10 patients. For the non-rigid alignment, we used a new non-rigid registration method that estimates the deformation field by optimizing a local normalized correlation coefficient with smoothness regularization. This field was used to propagate planning contours to repeat CTs. To quantify the performance of the auto-segmentation, we compared the propagated and ground truth contours using two widely used metrics: the Dice coefficient (Dc) and the Hausdorff distance (Hd). The proposed method was benchmarked against translation and rigid alignment based auto-segmentation. Results: For all organs, the auto-segmentation performed better than the baseline (translation) with an average processing time of 15 s per fraction CT. The overall improvements ranged from 2% (heart) to 32% (pancreas) in Dc, and 27% (heart) to 62% (spinal cord) in Hd. For liver, kidneys, gall bladder, stomach, spinal cord and heart, a Dc above 0.85 was achieved. Duodenum and pancreas were the most challenging organs, both showing relatively larger spreads, with medians of 0.79 for Dc and 2.1 mm for Hd. Conclusion: Based on the achieved accuracy and computational time we conclude that the investigated auto
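The Dice coefficient used for validation has a one-line definition, 2|A∩B| / (|A| + |B|), sketched here on small hypothetical binary masks rather than the study's contours:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Two overlapping square "organ" masks on a small grid (hypothetical)
gt = np.zeros((10, 10), dtype=bool); gt[2:8, 2:8] = True     # ground truth
seg = np.zeros((10, 10), dtype=bool); seg[3:9, 3:9] = True   # propagated contour
print(f"Dice = {dice(gt, seg):.3f}")
```

Dice is an overlap measure (1.0 for identical masks, 0.0 for disjoint ones), whereas the Hausdorff distance reported alongside it captures the worst-case boundary disagreement; reporting both guards against masks that overlap well on average but have locally large contour errors.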
NASA Technical Reports Server (NTRS)
Kershaw, David S.; Prasad, Manoj K.; Beason, J. Douglas
1986-01-01
The Klein-Nishina differential cross section averaged over a relativistic Maxwellian electron distribution is analytically reduced to a single integral, which can then be rapidly evaluated in a variety of ways. A particularly fast method for numerically computing this single integral is presented. This is, to the authors' knowledge, the first correct computation of the Compton scattering kernel.
Fast converging exact power series for the time and period of the simple pendulum
NASA Astrophysics Data System (ADS)
Benacka, Jan
2017-03-01
A time-explicit, fast converging exact power series solution to the pendulum equation is derived in this paper. A novel series for the period results from it. The approximate formula that comprises the first three terms gives an accuracy of 99.99% up to an amplitude of 90°. The accuracy was compared with that of 11 other approximate period formulas.
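For comparison with such approximate formulas, the exact pendulum period can be computed from the complete elliptic integral K(k) with k = sin(θ0/2), evaluated via the arithmetic-geometric mean; at a 90° amplitude the exact period exceeds the small-angle value 2π/ω0 by about 18%. A minimal sketch (the paper's own series is not reproduced here):

```python
import math

def agm(a, b, tol=1e-15):
    """Arithmetic-geometric mean of a and b."""
    while abs(a - b) > tol:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return a

def pendulum_period(theta0, omega0=1.0):
    """Exact period of a simple pendulum with amplitude theta0 (radians).

    Uses T = (4/omega0) * K(k) with k = sin(theta0/2), and
    K(k) = pi / (2 * agm(1, sqrt(1 - k^2))).
    """
    k = math.sin(theta0 / 2)
    K = math.pi / (2 * agm(1.0, math.sqrt(1 - k * k)))
    return 4 * K / omega0

T_exact = pendulum_period(math.pi / 2)   # amplitude 90 degrees
T_small = 2 * math.pi                    # small-angle approximation
print(T_exact / T_small)                 # about 1.18
```

The AGM converges quadratically, so a handful of iterations gives the elliptic integral to machine precision; this makes it a convenient reference against which series approximations of the period can be benchmarked.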
Stiedl, Cathrin P; Weber, Karin
2017-03-01
Dogs with a 4-bp deletion in the MDR1 (or ABCB1) gene show intolerance to certain drugs routinely used in veterinary medicine, such as ivermectin, vincristine, and doxorubicin. The mutation leads to a dysfunctional P-glycoprotein drug transporter, which results in drug accumulation in the brain and severe neurotoxicity. A rapid and accurate in-house test to determine the genotype of patients in cases of acute neurotoxic signs or in tumor patients is desirable. We describe a cost-effective detection method with simple technical equipment for veterinary practice. Two allele-specific methods are presented, which allow discrimination of all genotypes, require little hands-on time, and show the results within ~1 h after DNA sampling. DNA from buccal swabs of 115 dogs with known genotype (no mutation, n = 54; heterozygous for the mutation, n = 37; homozygous for the mutation, n = 24) was extracted either by using a column-based extraction kit or by heating swabs in a simple NaOH-Tris buffer. Amplification was performed either by allele-specific fast polymerase chain reaction or by allele-specific loop-mediated isothermal amplification (LAMP). Analysis was done either on agarose gels, by simple endpoint visualization using ultraviolet light, or by measuring the increase of fluorescence and time to threshold crossing. Commercial master mixes reduced the preparation time and minimized sources of error in both methods. Both methods allowed the discrimination of all 3 genotypes, and the results of the new methods matched the results of the previous genotyping. The presented methods could be used for fast individual MDR1/ ABCB1 genotyping with less equipment than existing methods.
Simple and accurate determination of global tau(R) in proteins using (13)C or (15)N relaxation data.
Mispelter, J; Izadi-Pruneyre, N; Quiniou, E; Adjadj, E
2000-03-01
In the study of protein dynamics by (13)C or (15)N relaxation measurements, different models from the Lipari-Szabo formalism are used to determine the motion parameters. The global rotational correlation time tau(R) of the molecule must be estimated prior to the analysis. In this Communication, the authors propose a new approach for determining an accurate value of tau(R), one that yields the best fit of R(2) over the whole protein sequence regardless of the different types of motion the atoms may experience. The method first identifies the highly structured regions of the sequence. For each corresponding site, the Lipari-Szabo parameters are calculated from R(1) and NOE, using an arbitrary value of tau(R). The chi(2) for R(2), summed over the selected sites, shows a clear minimum as a function of tau(R). This minimum is used to estimate a proper value of tau(R).
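The core of the procedure above, scanning trial values of tau(R) and locating the minimum of the chi(2) of R(2) summed over rigid sites, can be sketched with a deliberately simplified toy model. A real implementation would compute R(2) from the fitted Lipari-Szabo spectral densities; here R(2) is taken to grow linearly with tau(R), and every number below is invented for illustration only.

```python
import random

# Toy "observed" R2 values for five rigid sites, generated at a known
# true tau_R with Gaussian noise. The per-site slopes stand in for the
# full Lipari-Szabo prediction fitted from R1 and NOE.
random.seed(0)
tau_true = 8.0                               # ns, assumed
slopes = [1.1, 0.9, 1.0, 1.05, 0.95]         # per-site sensitivity (toy)
sigma = 0.1                                  # measurement uncertainty (toy)
r2_obs = [s * tau_true + random.gauss(0.0, sigma) for s in slopes]

def chi2(tau):
    """Chi-squared of R2 over the selected rigid sites at a trial tau_R."""
    return sum(((obs - s * tau) / sigma) ** 2 for s, obs in zip(slopes, r2_obs))

# One-dimensional scan for the minimum, mirroring the paper's procedure.
grid = [5.0 + 0.01 * i for i in range(601)]
tau_best = min(grid, key=chi2)
print(f"estimated tau_R = {tau_best:.2f} ns")
```

The grid search recovers the true correlation time to within the noise level, which is the behavior the clear chi(2) minimum in the abstract relies on.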
NASA Astrophysics Data System (ADS)
Suzuki, Yasushi; Chen, Guo-Ping; Manna, Uttam; Vij, Jagdish K.; Fukuda, Atsuo
2009-07-01
Simple matrix antiferroelectric liquid crystal displays (SM-AFLCDs) are prototyped to realize field sequential color (FSC) by utilizing the fast pretransitional response. The developed FSC-SM-AFLCDs could replace existing statically driven FSC-SM-nematic-LCDs. Bright, clear color can be added to black-and-white SM-LCDs already on the market, with up to 1/64 duty and 3-in. diagonal size. To optimize the display performance, we analyze two important factors, the large pretransitional effect and the appropriate reset pulse, in terms of the interlayer interaction potential used to describe the field-induced transition of the antiferroelectric smectic phase.
NASA Astrophysics Data System (ADS)
Karpov, S.; Sokołowski, M.; Gorbovskoy, E.
Here we stress the necessity of cooperation between different wide-field monitoring projects (FAVOR/TORTORA, Pi of the Sky, MASTER, etc.), aimed at the independent detection of fast optical transients, in order to maximize the area of sky covered at any moment and to coordinate the monitoring of gamma-ray telescopes' fields of view. We review currently available solutions and propose a simple protocol with a dedicated service (ASCI) that lets such systems share their current status and pointing schedules.
Song, Botao
2016-12-15
Superhydrophobic metal wire mesh (SMWM) has frequently been applied for the selective and efficient separation of oil/water mixtures owing to its porous structure and special wettability. However, current methods for making metal wire mesh superhydrophobic suffer from complex experimental procedures or time-consuming processes. In this study, a very simple, time-saving, single-step electrospray method was proposed to fabricate SMWM; the whole procedure required only about 2 min. The morphology, surface composition, and wettability of the SMWM were all evaluated, and the oil/water separation ability was further investigated. In addition, a commercially available sponge covered with SMWM was fabricated as an oil adsorbent for the purpose of oil recovery. This study demonstrates a convenient and fast method to make metal wire mesh superhydrophobic, and such a simple method might find practical applications in the large-scale removal of oils.
Bosse, Jens B.; Tanneti, Nikhila S.; Hogue, Ian B.; Enquist, Lynn W.
2015-01-01
Dual-color live cell fluorescence microscopy of fast intracellular trafficking processes, such as axonal transport, requires rapid switching of illumination channels. Typical broad-spectrum sources necessitate the use of mechanical filter switching, which introduces delays between acquisition of different fluorescence channels, impeding the interpretation and quantification of highly dynamic processes. Light Emitting Diodes (LEDs), however, allow modulation of excitation light in microseconds. Here we provide a step-by-step protocol to enable any scientist to build a research-grade LED illuminator for live cell microscopy, even without prior experience with electronics or optics. We quantify and compare components, discuss our design considerations, and demonstrate the performance of our LED illuminator by imaging axonal transport of herpes virus particles with high temporal resolution. PMID:26600461
Park, Jongbok; Ryu, Yeontack; Kim, Hansoo; Yu, Choongho
2009-03-11
Wire- and belt-like single-crystalline titanium dioxide nanostructures were synthesized by using a simple thermal annealing method, which has often been avoided for the synthesis of metal oxide nanostructures from high melting point metals such as Ti. The synthesis method requires neither high reaction temperature nor complicated reaction processes, and can be used for producing dense nanomaterials with relatively short reaction time at temperatures much lower than the melting point of titanium and titanium dioxide. Key synthesis factors including the choice of eutectic catalyst, growth temperature, and annealing time were systematically investigated. The synthesis reaction was promoted by a copper eutectic catalyst, producing long nanostructures with short reaction times. For example, it was observed that only 30 min of annealing time at 850 degrees C was enough to produce densely grown approximately 10 microm long nanowires with diameters of approximately 100 nm, and longer reaction time brought about morphology changes from wires to belts as well as producing longer nanostructures up to approximately 30 microm. The nanostructures have the crystalline rutile structure along the [Formula: see text] growth direction. Finally, our simple and effective method for the synthesis of TiO2 nanostructures could be utilized for growing other metal oxide nanowires from high melting temperature metals.
A fast, simple and robust protocol for growing crystals in the lipidic cubic phase.
Aherne, Margaret; Lyons, Joseph A; Caffrey, Martin
2012-12-01
A simple and inexpensive protocol for producing crystals in the sticky and viscous mesophase used for membrane protein crystallization by the in meso method is described. It provides crystals that appear within 15-30 min of setup at 293 K. The protocol gives the experimenter a convenient way of gaining familiarity and a level of comfort with the lipidic cubic mesophase, which can be daunting as a material when first encountered. Having used the protocol to produce crystals of the test protein, lysozyme, the experimenter can proceed with confidence to apply the method to more valuable membrane (and soluble) protein targets. The glass sandwich plates prepared using this robust protocol can further be used to practice harvesting and snap-cooling of in meso-grown crystals, to explore diffraction data collection with mesophase-embedded crystals, and for an assortment of quality control and calibration applications when used in combination with a crystallization robot.
Simple and Fast Method for Fabrication of Endoscopic Implantable Sensor Arrays
Tahirbegi, I. Bogachan; Alvira, Margarita; Mir, Mònica; Samitier, Josep
2014-01-01
Here we have developed a simple method for the fabrication of disposable implantable all-solid-state ion-selective electrodes (ISE) in an array format without using complex fabrication equipment or clean room facilities. The electrodes were designed in a needle shape instead of planar electrodes for full contact with the tissue. The needle-shaped platform comprises 12 metallic pins which were functionalized with conductive inks and ISE membranes. The modified microelectrodes were characterized with cyclic voltammetry, scanning electron microscopy (SEM), and optical interferometry. The surface area and roughness factor of each microelectrode were determined, and reproducible values were obtained for all the microelectrodes on the array. In this work, the microelectrodes were modified with membranes for the detection of pH and nitrate ions to prove the reliability of the fabricated sensor array platform adapted to an endoscope. PMID:24971473
NASA Astrophysics Data System (ADS)
Jansen, Gunnar; Sohrabi, Reza; Miller, Stephen A.
2017-02-01
HULK, short for Hexahedra from Unique Location in (K)convex Polyhedra, is a simple and efficient algorithm to generate hexahedral meshes from generic STL files describing a geological model, for use in simulation tools based on the finite element, finite volume, or finite difference methods. Using binary space partitioning of the input geometry and octree refinement on the grid, the accuracy of the mesh is increased successively. We present the theoretical basis as well as the implementation procedure, along with three geological models of varying complexity that provide the basis on which the algorithm is evaluated. HULK generates high-accuracy discretizations with cell counts suitable for state-of-the-art subsurface simulators and provides a new method for hexahedral mesh generation in geological settings.
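The octree-refinement step described above can be sketched in a few lines: cells whose corners fall on both sides of the model surface are split into eight children, concentrating resolution at the boundary. Everything below is illustrative, with a unit sphere standing in for the STL geometry (HULK itself partitions the input triangles with binary space partitioning).

```python
import itertools

def inside(p):
    """Toy geometry test: unit sphere standing in for the STL model."""
    x, y, z = p
    return x * x + y * y + z * z <= 1.0

def straddles(origin, size):
    """True if the cell has corners both inside and outside the surface."""
    flags = [inside((origin[0] + dx * size,
                     origin[1] + dy * size,
                     origin[2] + dz * size))
             for dx, dy, dz in itertools.product((0, 1), repeat=3)]
    return any(flags) and not all(flags)

def refine(max_level):
    """Split boundary-straddling cells into 8 children, level by level."""
    # Start from the eight unit octants of the root cube [-1, 1]^3.
    leaves = [((x, y, z), 1.0)
              for x, y, z in itertools.product((-1.0, 0.0), repeat=3)]
    for _ in range(max_level):
        new_leaves = []
        for origin, size in leaves:
            if straddles(origin, size):
                half = size / 2.0
                for dx, dy, dz in itertools.product((0, 1), repeat=3):
                    new_leaves.append(((origin[0] + dx * half,
                                        origin[1] + dy * half,
                                        origin[2] + dz * half), half))
            else:
                new_leaves.append((origin, size))  # keep uniform cell
        leaves = new_leaves
    return leaves

leaves = refine(2)
print(len(leaves), "leaf cells after 2 refinement levels")
```

Cells entirely inside or outside the surface stay coarse, so the leaf count grows with the surface area rather than the volume, which is what keeps cell counts tractable for subsurface simulators.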
A Simple and Fast Semiautomatic Procedure for the Atomistic Modeling of Complex DNA Polyhedra.
Alves, Cassio; Iacovelli, Federico; Falconi, Mattia; Cardamone, Francesca; Morozzo Della Rocca, Blasco; de Oliveira, Cristiano L P; Desideri, Alessandro
2016-05-23
A semiautomatic procedure to build complex atomistic covalently linked DNA nanocages has been implemented in a user-friendly, free, and fast program. As a test set, seven different truncated DNA polyhedra, composed of B-DNA double helices connected through short single-stranded linkers, have been generated. The atomistic structures, including a tetrahedron, a cube, an octahedron, a dodecahedron, a triangular prism, a pentagonal prism, and a hexagonal prism, have been probed through classical molecular dynamics and analyzed to evaluate their structural and dynamical properties and to highlight possible building faults. The analysis of the simulated trajectories also allows us to investigate the role of the different geometries in defining nanocage stability and flexibility. The data indicate that the cages are stable and that their structural and dynamical parameters measured along the trajectories are only slightly affected by the different geometries. These results demonstrate that the constraints imposed by the covalent links induce an almost identical conformational variability independently of the three-dimensional geometry, and that the program presented here is a reliable and valid tool to engineer DNA nanostructures.
A simple, low-cost and fast Peltier thermoregulation set-up for electrophysiology.
Corrèges, P; Bugnard, E; Millerin, C; Masiero, A; Andrivet, J P; Bloc, A; Dunant, Y
1998-09-01
Most of the parameters recorded in electrophysiology are strongly temperature dependent. In order to control temperature fluctuations, we have built a system that ensures accurate thermoregulation of the recording chamber. The temperature of physiological preparations can be changed relatively quickly (about 8 degrees C/min) and with good accuracy (+/- 0.5 degrees C) without inducing thermal oscillations. Unlike other thermoregulating devices, temperature regulation is not carried out through the perfused medium but directly at the bottom of the chamber, where a 3-cm2 Peltier element has been placed. The element is driven by a dedicated electronic device which controls the amount and the direction of the current flowing across the Peltier thermocouple. All construction details and the appropriate electrical circuits are provided. Using this home-made device, the steady-state chamber temperature could be precisely monitored with a resolution of +/- 0.1 degrees C over a range of 0-40 degrees C. The set-up was tested in experiments designed to evaluate the temperature dependence of synaptic transmission in Torpedo nerve-electroplate synapses and of calcium currents recorded from isolated nerve cells. This low-cost method is suitable for a wide range of applications.
Teo, Hui Ling; Wong, Lingkai; Liu, Qinde; Teo, Tang Lin; Lee, Tong Kooi; Lee, Hian Kee
2016-03-17
To achieve fast and accurate analysis of carbamazepine in surface water, we developed a novel porous membrane-protected micro-solid-phase extraction (μ-SPE) method, followed by liquid chromatography-isotope dilution tandem mass spectrometry (LC-IDMS/MS) analysis. The μ-SPE device (∼0.8 × 1 cm) was fabricated by heat-sealing edges of a polypropylene membrane sheet to devise a bag enclosing the sorbent. The analytes (both carbamazepine and isotope-labelled carbamazepine) were first extracted by μ-SPE device in the sample (10 mL) via agitation, then desorbed in an organic solvent (1 mL) via ultrasonication. Several parameters such as organic solvent for pre-conditioning of μ-SPE device, amount of sorbent, adsorption time, and desorption solvent and time were investigated to optimize the μ-SPE efficiency. The optimized method has limits of detection and quantitation estimated to be 0.5 ng L(-1) and 1.6 ng L(-1), respectively. Surface water samples spiked with different amounts of carbamazepine (close to 20, 500, and 1600 ng L(-1), respectively) were analysed for the validation of method precision and accuracy. Good precision was obtained as demonstrated by relative standard deviations of 0.7% for the samples with concentrations of 500 and 1600 ng kg(-1), and 5.8% for the sample with concentration of 20 ng kg(-1). Good accuracy was also demonstrated by the relative recoveries in the range of 96.7%-103.5% for all samples with uncertainties of 1.1%-5.4%. Owing to the same chemical properties of carbamazepine and isotope-labelled carbamazepine, the isotope ratio in the μ-SPE procedure was accurately controlled. The use of μ-SPE coupled with IDMS analysis significantly facilitated the fast and accurate measurement of carbamazepine in surface water.
Zhang, Xiulan; Zhu, Yonggang; Li, Xie; Guo, Xuhong; Zhang, Bo; Jia, Xin; Dai, Bin
2016-11-09
A simple, fast, and low-cost method for dopamine (DA) detection, based on turn-on fluorescence using resorcinol, is developed. The rapid reaction between resorcinol and DA allows the detection to be performed within 5 min, and the reaction product (azamonardine), with its high quantum yield, generates a strong fluorescence signal for sensitive optical detection. The assay exhibits high sensitivity to DA, with a wide linear range of 10 nM-20 μM and a limit of detection estimated to be 1.8 nM (S/N = 3). This approach has been successfully applied to determine DA concentrations in human urine samples with satisfactory quantitative recovery of 97.84%-103.50%, which shows great potential in clinical diagnosis.
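For context, a 3-sigma limit of detection like the 1.8 nM figure quoted above is conventionally estimated from the calibration slope and the standard deviation of blank replicates. The sketch below shows the arithmetic with invented numbers, not the paper's data.

```python
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept of a calibration line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

conc = [0.0, 0.05, 0.1, 0.5, 1.0, 5.0]     # dopamine, micromolar (toy data)
signal = [2.0, 2.6, 3.2, 8.0, 14.0, 62.0]  # fluorescence, a.u. (toy data)
blank_sd = 0.04                            # std. dev. of blank replicates (toy)

slope, intercept = linear_fit(conc, signal)
lod = 3.0 * blank_sd / slope               # classic 3-sigma (S/N = 3) definition
print(f"slope = {slope:.2f} a.u./uM, LOD = {lod * 1000:.1f} nM")
```

The same slope, with a factor of 10 instead of 3, gives the corresponding limit of quantitation.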
Shoji, N; Sasano, T; Inukai, K; Satoh-Kuriwada, S; Iikubo, M; Furuuchi, T; Sakamoto, M
2003-11-01
The lack of published information about the minor salivary glands is due in part to the difficulties experienced in collecting and quantifying their secretions. In fact, no method exists for measuring their secretions that is both simple and accurate. This investigation examined the accuracy of our newly developed method (which simply employs the iodine-starch reaction) in 10 healthy non-medicated adults. A strip painted first with a solution of iodine in absolute alcohol and then with a fine starch powder mixed with castor oil was placed at a designated location on the lower-lip mucosa for 2 min to collect saliva. Black-stained spots of various sizes corresponding to the individual glands could be accurately visualized. After removal of the strip, the total stained area (mm2) was calculated by digitizing the spot areas using a computer system. The correlation coefficient (r) between known volumes of saliva and stain size was 0.995, indicating a close correlation. The correlation coefficient (r) between area values obtained in the first trial in each subject (Y) and the second (X; 10 min later) was 0.963, and the simple regression equation was close to Y=X, indicating good reproducibility. The mean flow rate (microl/cm2 per min), obtained by converting mean total area to volume and thence to flow rate, was 0.49+/-0.26, in good agreement with published values obtained by others. These results suggest that our newly developed method allows both the distribution and the secretion rate of the minor salivary glands to be observed, and that it should be of practical value due to its simplicity, accuracy, and reproducibility.
Qiu, Dong; Zhang, Mingxing
2014-08-15
A simple and inclusive method is proposed for accurate determination of the habit plane between bicrystals in the transmission electron microscope. Whilst this method can be regarded as a variant of surface trace analysis, the major innovation lies in the improved accuracy and efficiency of foil thickness measurement, which involves a simple tilt of the thin foil about a permanent tilting axis of the specimen holder, rather than a cumbersome tilt about the surface trace of the habit plane. An experimental study has been done to validate the proposed method by determining the habit plane between lamellar α2 plates and the γ matrix in a Ti–Al–Nb alloy. Both high accuracy (± 1°) and high precision (± 1°) have been achieved using the new method. The sources of experimental error as well as the applicability of this method are discussed. Some tips to minimise the experimental errors are also suggested. - Highlights: • An improved algorithm is formulated to measure the foil thickness. • The habit plane can be determined with a single-tilt holder based on the new algorithm. • Accuracy and precision within ± 1° are achievable using the proposed method. • The data for multi-facet determination can be collected simultaneously.
Cruz, Rebeca; Casal, Susana
2013-11-15
Vitamin E analysis in green vegetables is performed by an array of different methods, making it difficult to compare published data or to choose an adequate method for a particular sample. Aiming to achieve a consistent method with wide applicability, the current study reports the development and validation of a fast micro-method for quantification of vitamin E in green leafy vegetables. The methodology uses solid-liquid extraction based on the Folch method, with tocol as internal standard, and normal-phase HPLC with fluorescence detection. A large linear working range was confirmed, being highly reproducible, with inter-day precisions below 5% (RSD). Method sensitivity was established (below 0.02 μg/g fresh weight), and accuracy was assessed by recovery tests (>96%). The method was tested on different green leafy vegetables, evidencing diverse tocochromanol profiles, with variable ratios and amounts of α- and γ-tocopherol and other minor compounds. The methodology is adequate for routine analyses, with a reduced chromatographic run (<7 min) and organic solvent consumption, and requires only standard chromatographic equipment available in most laboratories.
3D FaceCam: a fast and accurate 3D facial imaging device for biometrics applications
NASA Astrophysics Data System (ADS)
Geng, Jason; Zhuang, Ping; May, Patrick; Yi, Steven; Tunnell, David
2004-08-01
Human faces are fundamentally three-dimensional (3D) objects, and each face has its unique 3D geometric profile. The 3D geometric features of a human face can be used, together with its 2D texture, for rapid and accurate face recognition purposes. Due to the lack of low-cost and robust 3D sensors and effective 3D facial recognition (FR) algorithms, almost all existing FR systems use 2D face images. Genex has developed 3D solutions that overcome the inherent problems in 2D while also addressing limitations in other 3D alternatives. One important aspect of our solution is a unique 3D camera (the 3D FaceCam) that combines multiple imaging sensors within a single compact device to provide instantaneous, ear-to-ear coverage of a human face. This 3D camera uses three high-resolution CCD sensors and a color-encoded pattern projection system. The RGB color information from each pixel is used to compute the range data and generate an accurate 3D surface map. The imaging system uses no moving parts and combines multiple 3D views to provide detailed and complete 3D coverage of the entire face. Images are captured within a fraction of a second, and full-frame 3D data is produced within a few seconds. The described method provides much better data coverage and accuracy in areas with sharp features or details (such as the nose and eyes). Using this 3D data, we have been able to demonstrate that a 3D approach can significantly improve the performance of facial recognition. We have conducted tests in which we varied the lighting conditions and angle of image acquisition in the "field." These tests have shown that matching results are significantly improved when enrolling a 3D image rather than a single 2D image. With its 3D solutions, Genex is working toward unlocking the promise of powerful 3D FR and transferring FR from a lab technology into a real-world biometric solution.
Krenning, Boudewijn J; Voormolen, Marco M; van Geuns, Robert-Jan; Vletter, W B; Lancée, Charles T; de Jong, Nico; Ten Cate, Folkert J; van der Steen, Anton F W; Roelandt, Jos R T C
2006-07-01
Measurement of left ventricular (LV) volume and function is the most common clinical referral question to the echocardiography laboratory. A fast, practical, and accurate method would offer important advantages in obtaining this information. Our aim was to validate a new practical method for rapid measurement of LV volume and function. We developed a continuously fast-rotating transducer, with second-harmonic capabilities, for three-dimensional echocardiography (3DE). Fifteen cardiac patients underwent both 3DE and magnetic resonance imaging (the reference method) on the same day. 3DE image acquisition was performed during a 10-second breath-hold with a frame rate of 100 frames/sec and a rotational speed of 6 rotations/sec. The individual images were postprocessed with Matlab software using multibeat data fusion. Subsequently, with these images, 12 datasets per cardiac cycle were reconstructed, each comprising seven equidistant cross-sectional images for analysis in the new TomTec 4DLV analysis software, which uses a semi-automated border detection (ABD) algorithm. The ABD requires an average analysis time of 15 minutes per patient. Strong correlations with the reference method were found for LV end-diastolic volume (r = 0.99; y = 0.95x - 1.14 ml; SEE = 6.5 ml), LV end-systolic volume (r = 0.96; y = 0.89x + 7.91 ml; SEE = 7.0 ml), and LV ejection fraction (r = 0.93; y = 0.69x + 13.36; SEE = 2.4%). Inter- and intraobserver agreement for all measurements was good. The fast-rotating transducer with the new ABD software is a dedicated tool for rapid and accurate analysis of LV volume and function.
Jagetic, Lydia J; Newhauser, Wayne D
2015-06-21
State-of-the-art radiotherapy treatment planning systems provide reliable estimates of the therapeutic radiation but are known to underestimate or neglect the stray radiation exposures. Most commonly, stray radiation exposures are reconstructed using empirical formulas or lookup tables. The purpose of this study was to develop the basic physics of a model capable of calculating the total absorbed dose both inside and outside of the therapeutic radiation beam for external beam photon therapy. The model was developed using measurements of total absorbed dose in a water-box phantom from a 6 MV medical linear accelerator to calculate dose profiles in both the in-plane and cross-plane direction for a variety of square field sizes and depths in water. The water-box phantom facilitated development of the basic physical aspects of the model. RMS discrepancies between measured and calculated total absorbed dose values in water were less than 9.3% for all fields studied. Computation times for 10 million dose points within a homogeneous phantom were approximately 4 min. These results suggest that the basic physics of the model are sufficiently simple, fast, and accurate to serve as a foundation for a variety of clinical and research applications, some of which may require that the model be extended or simplified based on the needs of the user. A potentially important advantage of a physics-based approach is that the model is more readily adaptable to a wide variety of treatment units and treatment techniques than with empirical models.
ICE-COLA: towards fast and accurate synthetic galaxy catalogues optimizing a quasi-N-body method
NASA Astrophysics Data System (ADS)
Izard, Albert; Crocce, Martin; Fosalba, Pablo
2016-07-01
Next-generation galaxy surveys demand the development of massive ensembles of galaxy mocks to model the observables and their covariances, which is computationally prohibitive with N-body simulations. COmoving Lagrangian Acceleration (COLA) is a novel method designed to make this feasible by following approximate dynamics, achieving speed-ups of up to three orders of magnitude compared to an exact N-body simulation. In this paper, we investigate the optimization of the code parameters in the compromise between computational cost and recovered accuracy in observables such as two-point clustering and halo abundance. We benchmark those observables against a state-of-the-art N-body run, the MICE Grand Challenge simulation. We find that using 40 time-steps linearly spaced since z_i ~ 20, and a force mesh resolution three times finer than that of the number of particles, yields a matter power spectrum within 1 per cent for k ≲ 1 h Mpc^-1 and a halo mass function within 5 per cent of those in the N-body. In turn, the halo bias is accurate within 2 per cent for k ≲ 0.7 h Mpc^-1 whereas, in redshift space, the halo monopole and quadrupole are within 4 per cent for k ≲ 0.4 h Mpc^-1. These results hold for a broad range in redshift (0 < z < 1) and for all halo mass bins investigated (M > 10^12.5 h^-1 M⊙). To bring the accuracy in clustering to the one per cent level, we study various methods that re-calibrate halo masses and/or velocities. We thus propose an optimized choice of COLA code parameters as a powerful tool to optimally exploit future galaxy surveys.
Fragoso, Margarida; Kawrakow, Iwan; Faddegon, Bruce A; Solberg, Timothy D; Chetty, Indrin J
2009-12-01
(DBS) with no electron splitting. When DBS was used with electron splitting and combined with augmented charged particle range rejection, a technique recently introduced in BEAMnrc, relative efficiencies were approximately 420 (approximately 253 min on a single processor) and approximately 175 (approximately 58 min on a single processor) for the 10 x 10 and 40 x 40 cm2 field sizes, respectively. Calculations of the Siemens Primus treatment head with VMC++ produced relative efficiencies of approximately 1400 (approximately 6 min on a single processor) and approximately 60 (approximately 4 min on a single processor) for the 10 x 10 and 40 x 40 cm2 field sizes, respectively. BEAMnrc PHSP calculations with DBS alone or DBS in combination with charged particle range rejection were more efficient than the other efficiency enhancing techniques used. Using VMC++, accurate simulations of the entire linac treatment head were performed within minutes on a single processor. Noteworthy differences (+/- 1%-3%) in the mean energy, planar fluence, and angular and spectral distributions were observed with the NIST bremsstrahlung cross sections compared with those of Bethe-Heitler (BEAMnrc default bremsstrahlung cross section). However, MC calculated dose distributions in water phantoms (using combinations of VRTs/AEITs and cross-section data) agreed within 2% of measurements. Furthermore, MC calculated dose distributions in a simulated water/air/water phantom, using NIST cross sections, were within 2% agreement with the BEAMnrc Bethe-Heitler default case.
Kakiyama, Genta; Muto, Akina; Takei, Hajime; Nittono, Hiroshi; Murai, Tsuyoshi; Kurosawa, Takao; Hofmann, Alan F.; Pandak, William M.; Bajaj, Jasmohan S.
2014-01-01
We have developed a simple and accurate HPLC method for measurement of fecal bile acids using phenacyl derivatives of unconjugated bile acids, and applied it to the measurement of fecal bile acids in cirrhotic patients. The HPLC method has the following steps: 1) lyophilization of the stool sample; 2) reconstitution in buffer and enzymatic deconjugation using cholylglycine hydrolase/sulfatase; 3) incubation with 0.1 N NaOH in 50% isopropanol at 60°C to hydrolyze esterified bile acids; 4) extraction of bile acids from particulate material using 0.1 N NaOH; 5) isolation of deconjugated bile acids by solid phase extraction; 6) formation of phenacyl esters by derivatization using phenacyl bromide; and 7) HPLC separation measuring eluted peaks at 254 nm. The method was validated by showing that results obtained by HPLC agreed with those obtained by LC-MS/MS and GC-MS. We then applied the method to measuring total fecal bile acid (concentration) and bile acid profile in samples from 38 patients with cirrhosis (17 early, 21 advanced) and 10 healthy subjects. Bile acid concentrations were significantly lower in patients with advanced cirrhosis, suggesting impaired bile acid synthesis. PMID:24627129
NASA Astrophysics Data System (ADS)
Zhang, Bin; Liang, Chunlei
2015-08-01
This paper presents a simple, efficient, and high-order accurate sliding-mesh interface approach to the spectral difference (SD) method. We demonstrate the approach by solving the two-dimensional compressible Navier-Stokes equations on quadrilateral grids. This approach is an extension of the straight mortar method originally designed for stationary domains [7,8]. Our sliding method creates curved dynamic mortars on sliding-mesh interfaces to couple rotating and stationary domains. On the nonconforming sliding-mesh interfaces, the related variables are first projected from cell faces to mortars to compute common fluxes, and then the common fluxes are projected back from the mortars to the cell faces to ensure conservation. To verify the spatial order of accuracy of the sliding-mesh spectral difference (SSD) method, both inviscid and viscous flow cases are tested. It is shown that the SSD method preserves the high-order accuracy of the SD method. Meanwhile, the SSD method is found to be very efficient in terms of computational cost. This novel sliding-mesh interface method is very suitable for parallel processing with domain decomposition. It can be applied to a wide range of problems, such as the hydrodynamics of marine propellers, the aerodynamics of rotorcraft, wind turbines, and oscillating wing power generators, etc.
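The mortar coupling described above, projecting face data onto mortars, forming a common flux, and projecting back conservatively, can be sketched in one dimension with piecewise-constant data. The partitions, values, and simple averaging flux rule below are invented for illustration; the actual SSD method projects polynomial solution data of spectral-difference order onto curved dynamic mortars.

```python
def overlaps(edges_a, edges_b):
    """Mortar cells: intersection intervals of two partitions of one span."""
    pts = sorted(set(edges_a) | set(edges_b))
    return list(zip(pts[:-1], pts[1:]))

def to_mortar(face_edges, face_vals, mortar):
    """Restrict piecewise-constant face data onto the finer mortar cells."""
    out = []
    for lo, hi in mortar:
        mid = 0.5 * (lo + hi)
        for i in range(len(face_vals)):
            if face_edges[i] <= mid <= face_edges[i + 1]:
                out.append(face_vals[i])
                break
    return out

def to_face(face_edges, mortar, mortar_vals):
    """Overlap-weighted (conservative) projection back to the face cells."""
    vals = []
    for i in range(len(face_edges) - 1):
        lo_f, hi_f = face_edges[i], face_edges[i + 1]
        num = den = 0.0
        for (lo, hi), v in zip(mortar, mortar_vals):
            w = max(0.0, min(hi, hi_f) - max(lo, lo_f))  # overlap length
            num += w * v
            den += w
        vals.append(num / den)
    return vals

edges_rot = [0.0, 0.4, 1.0]          # "rotating" side, 2 cells (toy)
edges_sta = [0.0, 0.25, 0.55, 1.0]   # "stationary" side, 3 cells (toy)
u_rot, u_sta = [1.0, 3.0], [0.5, 2.0, 4.0]

mortar = overlaps(edges_rot, edges_sta)
# Common flux on the mortar: here simply the average of the two projections.
flux_m = [0.5 * (a + b) for a, b in zip(to_mortar(edges_rot, u_rot, mortar),
                                        to_mortar(edges_sta, u_sta, mortar))]
flux_rot = to_face(edges_rot, mortar, flux_m)
```

The point of the back-projection weights is conservation: the integral of the flux over the face cells equals its integral over the mortar cells, mirroring the flux-conservation property the abstract highlights for the nonconforming interface.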
Ngo, Son Tung; Hung, Huynh Minh; Nguyen, Minh Tho
2016-12-05
Fast pulling of ligand (FPL) out of the binding cavity using non-equilibrium molecular dynamics (MD) simulations was demonstrated to be a rapid, accurate, and computationally cheap method for determining the relative binding affinities of a large number of HIV-1 protease (PR) inhibitors. In this approach, the ligand is pulled out of the binding cavity of the protein using external harmonic forces, and the work done by the pulling force corresponds to the relative binding affinity of the HIV-1 PR inhibitor. The correlation coefficient between the pulling work and the experimental binding free energy, R = -0.95, shows that FPL results are in good agreement with experiment. It is thus easier to rank the binding affinities of HIV-1 PR inhibitors that have similar binding affinities, because the mean error bar of the pulling work amounts to δW = 7%. The nature of the binding is elucidated using the FPL approach. © 2016 Wiley Periodicals, Inc.
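As a rough illustration of how pulling work accumulates in this kind of steered simulation, the sketch below integrates the force of a harmonic restraint moving at constant speed over the pulled distance. The spring constant, pulling speed, and the toy "ligand trails the restraint by a fixed offset" trajectory are all assumptions for illustration, not parameters from the paper.

```python
k = 600.0      # spring constant, kJ/mol/nm^2 (assumed)
v = 0.005      # pulling speed, nm/ps (assumed)
dt = 0.002     # sampling interval, ps (assumed)

def spring_force(t, x_lig):
    """Force on the ligand from the moving harmonic restraint."""
    return k * (v * t - x_lig)

# Toy trajectory: once the spring is loaded, the ligand follows the
# restraint with a constant 0.05 nm lag, as if sliding against friction.
steps = 20000
work = 0.0
f_prev = 0.0
for i in range(1, steps + 1):
    t = i * dt
    x_lig = max(0.0, v * t - 0.05)        # ligand position (toy model)
    f = spring_force(t, x_lig)
    work += 0.5 * (f + f_prev) * v * dt   # trapezoidal rule for dW = F*v*dt
    f_prev = f
print(f"pulled {v * steps * dt:.2f} nm, accumulated work = {work:.2f} kJ/mol")
```

In the real FPL protocol this integral is averaged over repeated non-equilibrium pulls, and it is that mean work which correlates with the experimental binding free energy.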
Pantuzzo, Fernando L; Silva, Julio César J; Ciminelli, Virginia S T
2009-09-15
A fast and accurate microwave-assisted digestion method for arsenic determination by flame atomic absorption spectrometry (FAAS) in typical, complex residues from gold mining is presented. Three digestion methods were evaluated: an open-vessel digestion using a mixture of HCl:HNO(3):HF acids (Method A) and two microwave digestion methods using a mixture of HCl:H(2)O(2):HNO(3) in high-pressure (Method B) and medium-pressure (Method C) vessels. The matrix effect was also investigated. Arsenic concentrations from the external and standard addition calibration curves (at a 95% confidence level) were statistically equal (p-value = 0.122) using microwave digestion in the high-pressure vessel. The results from the open-vessel digestion were statistically different (p-value = 0.007), whereas with microwave digestion in the medium-pressure vessel (Method C) the dissolution of the samples was incomplete.
Lobato, I; Van Dyck, D
2015-09-01
The main features and the GPU implementation of the MULTEM program are presented and described. This new program performs accurate and fast multislice simulations by including higher-order expansion of the multislice solution of the high-energy Schrödinger equation, correct subslicing of the three-dimensional potential, and top-bottom surfaces. The program implements different kinds of simulation for CTEM, STEM, ED, PED, CBED, ADF-TEM and ABF-HC with proper treatment of the spatial and temporal incoherences. The multislice approach described here treats the specimen as an amorphous material, which allows a straightforward implementation of the frozen phonon approximation. The generalized transmission function for each slice is calculated when it is needed and then discarded. This allows us to perform large simulations that can include millions of atoms while keeping the computer memory requirements at a reasonable level.
Yuan, Chao; Burgyan, Maria; Bunch, Dustin R; Reineks, Edmunds; Jackson, Raymond; Steinle, Roxanne; Wang, Sihe
2014-09-01
Vitamins A and E are fat-soluble vitamins that play important roles in several physiological processes. Monitoring their concentrations is needed to detect deficiency and guide therapy. In this study, we developed a high-performance liquid chromatography method to measure the major forms of vitamin A (retinol) and vitamin E (α-tocopherol and γ-tocopherol) in human blood plasma. Vitamins A and E were extracted with hexane and separated on a reversed-phase column using methanol as the mobile phase. Retinol was detected by ultraviolet absorption, whereas tocopherols were detected by fluorescence emission. The chromatographic cycle time was 4.0 min per sample. The analytical measurement range was 0.03-5.14, 0.32-36.02, and 0.10-9.99 mg/L for retinol, α-tocopherol, and γ-tocopherol, respectively. Intra-assay and total coefficients of variation were <6.0% for all compounds. This method was traceable to standard reference materials offered by the National Institute of Standards and Technology. Reference intervals were established using plasma samples collected from 51 healthy adult donors and were found to be 0.30-1.20, 6.0-23.0, and 0.3-3.2 mg/L for retinol, α-tocopherol, and γ-tocopherol, respectively. In conclusion, we developed and validated a fast, simple, and sensitive high-performance liquid chromatography method for measuring the major forms of vitamins A and E in human plasma.
Pouplana, S; Espargaro, A; Galdeano, C; Viayna, E; Sola, I; Ventura, S; Muñoz-Torrero, D; Sabate, R
2014-01-01
Amyloid aggregation is linked to a large number of human disorders, from neurodegenerative diseases such as Alzheimer's disease (AD) or spongiform encephalopathies to non-neuropathic localized diseases such as type II diabetes and cataracts. Because the formation of insoluble inclusion bodies (IBs) during recombinant protein production in bacteria has recently been shown to share mechanistic features with amyloid self-assembly, bacteria have emerged as a tool to study amyloid aggregation. Herein we present a fast, simple, inexpensive and quantitative method for the screening of potential anti-aggregating drugs. This method is based on monitoring the changes in the binding of thioflavin-S to intracellular IBs in intact Escherichia coli cells in the presence of small chemical compounds. This in vivo technique closely recapitulates previous in vitro data. Here we mainly use the Alzheimer's-related β-amyloid peptide as a model system, but the technique can easily be implemented for screening inhibitors relevant to other conformational diseases simply by changing the recombinant amyloid protein target. Indeed, we show that this methodology can also be applied to the evaluation of inhibitors of the aggregation of tau protein, another amyloidogenic protein with a key role in AD.
Selmeci, László; Seres, Leila; Antal, Magda; Lukács, Júlia; Regöly-Mérei, Andrea; Acsády, György
2005-01-01
Oxidative stress is known to be involved in many human pathological processes. Although there are numerous methods available for the assessment of oxidative stress, most of them are still not easily applicable in a routine clinical laboratory due to the complex methodology and/or lack of automation. In research into human oxidative stress, the simplification and automation of techniques represent a key issue from a laboratory point of view at present. In 1996 a novel oxidative stress biomarker, referred to as advanced oxidation protein products (AOPP), was detected in the plasma of chronic uremic patients. Here we describe in detail an automated version of the originally published microplate-based technique that we adapted for a Cobas Mira Plus clinical chemistry analyzer. AOPP reference values were measured in plasma samples from 266 apparently healthy volunteers (university students; 81 male and 185 female subjects) with a mean age of 21.3 years (range 18-33). Over a period of 18 months we determined AOPP concentrations in more than 300 patients in our department. Our experiences appear to demonstrate that this technique is especially suitable for monitoring oxidative stress in critically ill patients (sepsis, reperfusion injury, heart failure) even at daily intervals, since AOPP exhibited rapid responses in both directions. We believe that the well-established relationship between AOPP response and induced damage makes this simple, fast and inexpensive automated technique applicable in daily routine laboratory practice for assessing and monitoring oxidative stress in critically ill or other patients.
Mihály, Judith; Deák, Róbert; Szigyártó, Imola Csilla; Bóta, Attila; Beke-Somfai, Tamás; Varga, Zoltán
2017-03-01
Extracellular vesicles isolated by differential centrifugation from Jurkat T-cell line were investigated by attenuated total reflection Fourier-transform infrared spectroscopy (ATR-FTIR). Amide and CH stretching band intensity ratios calculated from IR bands, characteristic of protein and lipid components, proved to be distinctive for the different extracellular vesicle subpopulations. This proposed 'spectroscopic protein-to-lipid ratio', combined with the outlined spectrum-analysis protocol is valid also for low sample concentrations (0.15-0.05mg/ml total protein content) and can carry information about the presence of other non-vesicular formations such as aggregated proteins, lipoproteins and immune complexes. Detailed analysis of IR data reveals compositional changes of extracellular vesicles subpopulations: second derivative spectra suggest changes in protein composition from parent cell towards exosomes favoring proteins with β-turns and unordered motifs at the expense of intermolecular β-sheet structures. The IR-based protein-to-lipid assessment protocol was tested also for red blood cell derived microvesicles for which similar values were obtained. The potential applicability of this technique for fast and efficient characterization of vesicular components is high as the investigated samples require no further preparations and all the different molecular species can be determined in the same sample. The results indicate that ATR-FTIR measurements provide a simple and reproducible method for the screening of extracellular vesicle preparations. It is hoped that this sophisticated technique will have further impact in extracellular vesicle research.
Technology Transfer Automated Retrieval System (TEKTRAN)
A simple, fast, and cost-effective sample preparation method, previously developed and validated for the analysis of organic contaminants in fish using low-pressure gas chromatography tandem mass spectrometry (LPGC-MS/MS), was evaluated for analysis of polybrominated diphenyl ethers (PBDEs) and dich...
Gasperl, Anna; Morvan-Bertrand, Annette; Prud'homme, Marie-Pascale; van der Graaff, Eric; Roitsch, Thomas
2015-01-01
Despite the fact that fructans are the main constituent of water-soluble carbohydrates in forage grasses and cereal crops of temperate climates, little knowledge is available on the regulation of the enzymes involved in fructan metabolism. The analysis of enzyme activities involved in this process has been hampered by the low affinity of the fructan enzymes for sucrose and fructans used as fructosyl donors. Further, the analysis of fructan composition and enzyme activities is restricted to specialized labs with access to suitable HPLC equipment and appropriate fructan standards. The degradation of fructan polymers with a high degree of polymerization (DP) by fructan exohydrolases (FEHs) to fructosyloligomers is important to liberate energy stored in the form of fructan, but also under conditions where the generation of low-DP polymers is required. Based on published protocols employing enzyme-coupled endpoint reactions in single cuvettes, we developed a simple and fast kinetic 1-FEH assay. This assay can be performed in multi-well plate format using plate readers to determine the activity of 1-FEH against 1-kestotriose, resulting in a significant time reduction. Kinetic assays allow an optimal and more precise determination of enzyme activities compared to endpoint assays, and make it possible to check the quality of any reaction with respect to linearity of the assay. The enzyme-coupled kinetic 1-FEH assay was validated in a case study showing the expected increase in 1-FEH activity during cold treatment. This assay is cost-effective and could be performed by any lab with access to a plate reader suited for kinetic measurements and readings at 340 nm, and is highly suited to assess temporal changes and relative differences in 1-FEH activities. Thus, this enzyme-coupled kinetic 1-FEH assay is of high importance both to basic fructan research and to plant breeding.
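Since the coupled reactions are read out as NAD(P)H absorbance at 340 nm (ε340 ≈ 6220 M⁻¹ cm⁻¹), the rate falls out of a straight-line fit of A340 against time, and the fit's r² provides the linearity check the authors emphasize. A sketch with synthetic plate-reader data; the well path length and read interval are assumptions:

```python
import numpy as np

EPS_340 = 6220.0    # M^-1 cm^-1, molar absorptivity of NAD(P)H at 340 nm
PATH_CM = 0.6       # effective optical path of a filled well (assumed value)

def kinetic_rate(t_s, a340, path_cm=PATH_CM):
    """Fit A340 vs time with a straight line; return (slope in AU/s,
    rate in M/s via Beer-Lambert, and r^2 as a linearity check)."""
    slope, intercept = np.polyfit(t_s, a340, 1)
    resid = a340 - (slope * t_s + intercept)
    r2 = 1.0 - np.sum(resid**2) / np.sum((a340 - a340.mean())**2)
    return slope, slope / (EPS_340 * path_cm), r2

# Synthetic 10-minute kinetic read: A340 rising at 0.002 AU/s plus noise.
rng = np.random.default_rng(0)
t = np.arange(0.0, 600.0, 15.0)                   # one reading every 15 s
a = 0.1 + 0.002 * t + rng.normal(0.0, 0.003, t.size)

slope, rate, r2 = kinetic_rate(t, a)
print(f"dA/dt = {slope:.4f} AU/s, rate = {rate*1e6:.2f} uM/s, r^2 = {r2:.4f}")
```

A low r² flags a read that has left its linear range, which is exactly the quality check a kinetic assay offers over an endpoint measurement.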
A simple three-dimensional-focusing, continuous-flow mixer for the study of fast protein dynamics
Burke, Kelly S.; Parul, Dzmitry; Reddish, Michael J.; Dyer, R. Brian
2013-01-01
We present a simple, yet flexible microfluidic mixer with a demonstrated mixing time as short as 80 µs that is widely accessible because it is made of commercially available parts. To simplify the study of fast protein dynamics, we have developed an inexpensive continuous-flow microfluidic mixer, requiring no specialized equipment or techniques. The mixer uses three-dimensional, hydrodynamic focusing of a protein sample stream by a surrounding sheath solution to achieve rapid diffusional mixing between the sample and sheath. Mixing initiates the reaction of interest. Reactions can be spatially observed by fluorescence or absorbance spectroscopy. We characterized the pixel-to-time calibration and diffusional mixing experimentally and achieved a mixing time as short as 80 µs. We studied the kinetics of horse apomyoglobin (apoMb) unfolding from the intermediate (I) state to its completely unfolded (U) state, induced by a pH jump from the initial pH of 4.5 in the sample stream to a final pH of 2.0 in the sheath solution. The reaction time was probed using the fluorescence of 1-anilinonaphthalene-8-sulfonate (1,8-ANS) bound to the folded protein. We observed unfolding of apoMb within 760 µs, without populating additional intermediate states under these conditions. We also studied the reaction kinetics of the conversion of pyruvate to lactate catalyzed by lactate dehydrogenase using the intrinsic tryptophan emission of the enzyme. We observe sub-millisecond kinetics that we attribute to Michaelis complex formation and loop domain closure. These results demonstrate the utility of the three-dimensional focusing mixer for biophysical studies of protein dynamics. PMID:23760106
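In a continuous-flow mixer, position along the channel maps to reaction time through the mean flow velocity v = Q/A, which is the essence of the pixel-to-time calibration. A minimal sketch with illustrative numbers (none taken from the paper):

```python
# Map pixel position along a continuous-flow channel to reaction time.
# All numbers below are illustrative assumptions, not the mixer's actual specs.

FLOW_UL_MIN = 10.0        # total volumetric flow rate, uL/min
CHANNEL_W = 100e-6        # channel width, m
CHANNEL_H = 100e-6        # channel height, m
PIXEL_SIZE = 1e-6         # imaged pixel size along the flow axis, m

def pixel_to_time(pixel_index):
    """Reaction time (s) at a pixel downstream of the mixing point,
    assuming plug flow at the mean velocity v = Q / A."""
    q = FLOW_UL_MIN * 1e-9 / 60.0          # m^3/s
    v = q / (CHANNEL_W * CHANNEL_H)        # mean velocity, m/s
    return pixel_index * PIXEL_SIZE / v

print(f"{pixel_to_time(1)*1e6:.0f} us per pixel")   # 60 us per pixel here
```

A real calibration must also account for the parabolic (non-plug) velocity profile and for where the focused sample stream sits in that profile, which is why the authors characterize the calibration experimentally.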
NASA Astrophysics Data System (ADS)
Jagetic, Lydia J.; Newhauser, Wayne D.
2015-06-01
State-of-the-art radiotherapy treatment planning systems provide reliable estimates of the therapeutic radiation but are known to underestimate or neglect the stray radiation exposures. Most commonly, stray radiation exposures are reconstructed using empirical formulas or lookup tables. The purpose of this study was to develop the basic physics of a model capable of calculating the total absorbed dose both inside and outside of the therapeutic radiation beam for external beam photon therapy. The model was developed using measurements of total absorbed dose in a water-box phantom from a 6 MV medical linear accelerator to calculate dose profiles in both the in-plane and cross-plane direction for a variety of square field sizes and depths in water. The water-box phantom facilitated development of the basic physical aspects of the model. RMS discrepancies between measured and calculated total absorbed dose values in water were less than 9.3% for all fields studied. Computation times for 10 million dose points within a homogeneous phantom were approximately 4 min. These results suggest that the basic physics of the model are sufficiently simple, fast, and accurate to serve as a foundation for a variety of clinical and research applications, some of which may require that the model be extended or simplified based on the needs of the user. A potentially important advantage of a physics-based approach is that the model is more readily adaptable to a wide variety of treatment units and treatment techniques than with empirical models.
NASA Astrophysics Data System (ADS)
Tanguay, J.; Hou, X.; Esquinas, P.; Vuckovic, M.; Buckley, K.; Schaffer, P.; Bénard, F.; Ruth, T. J.; Celler, A.
2015-11-01
Cyclotron production of 99mTc through the 100Mo(p,2n)99mTc reaction channel is actively being investigated as an alternative to reactor-based 99Mo generation by nuclear fission of 235U. Like most radioisotope production methods, cyclotron production of 99mTc will result in the creation of unwanted impurities, including Tc and non-Tc isotopes. It is important to measure the amounts of these impurities before cyclotron-produced 99mTc (CPTc) can be released for clinical use. Detection of radioactive impurities will rely on measurements of their gamma (γ) emissions. Gamma spectroscopy is not suitable for this purpose because the overwhelming presence of 99mTc and the count-rate limitations of γ-spectroscopy systems preclude fast and accurate measurement of small amounts of impurities. In this article we describe a simple and fast method for measuring γ emission rates from radioactive impurities in CPTc. The proposed method is similar to that used to identify 99Mo breakthrough in generator-produced 99mTc: one dose calibrator (DC) reading of a CPTc source placed in a lead shield is followed by a second reading of the same source in air. Our experimental and theoretical analysis shows that the ratio of DC readings in lead to those in air is linearly related to γ emission rates from impurities per MBq of 99mTc over a large range of clinically relevant production conditions. We show that estimates of the γ emission rates from Tc impurities per MBq of 99mTc can be used to estimate increases in radiation dose (relative to pure 99mTc) to patients injected with CPTc-based radiopharmaceuticals. This enables establishing dosimetry-based clinical-release criteria that can be tested using commercially available dose calibrators. We show that our approach is highly sensitive to the presence of 93gTc, 93mTc, 94gTc, 94mTc
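The release test reduces to a straight line: calibrate the ratio of shielded to in-air dose-calibrator readings against known impurity emission rates, then invert the line for new sources. All numbers below are synthetic placeholders, not the paper's calibration data:

```python
import numpy as np

# Synthetic calibration of the release test: ratio of dose-calibrator (DC)
# readings (lead shield / air) against known impurity gamma-emission rates
# per MBq of 99mTc. All values are invented placeholders.
ratio = np.array([0.0102, 0.0135, 0.0171, 0.0204, 0.0240, 0.0273])
emission = np.array([0.0, 2.1, 3.9, 6.0, 8.1, 9.9])   # arbitrary units per MBq

slope, intercept = np.polyfit(ratio, emission, 1)      # the linear relation

def impurity_emission(r_lead_over_air):
    """Invert the calibration line: one shielded and one in-air DC reading
    of a CPTc source give the ratio, which maps to an impurity emission rate."""
    return slope * r_lead_over_air + intercept

print(f"ratio 0.020 -> impurity emission {impurity_emission(0.020):.2f} a.u./MBq")
```

The estimated emission rate would then be compared against a dosimetry-based release threshold, mirroring the 99Mo breakthrough test used for generator eluate.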
NASA Astrophysics Data System (ADS)
Mondini, S.; Ferretti, A. M.; Puglisi, A.; Ponti, A.
2012-08-01
Pebbles is a user-friendly software program which implements an accurate, unbiased, and fast method to measure the morphology of a population of nanoparticles (NPs) from TEM micrographs. The morphological parameters of the projected NP shape are obtained by fitting intensity models to the TEM micrograph. Pebbles can be used either in automatic mode, where both fitting and validation are reliably carried out with minimal human intervention, or in manual mode, where the user has full control over the fitting and validation steps. Accuracy in diameter measurement has been shown to be <~1%. When operated in automatic mode, Pebbles can be very fast. An effective speed of 1 NP s-1 has been achieved in favorable cases (packed monolayer of NPs). Since Pebbles is based on a local modeling procedure, it successfully treats cases such as low-contrast NPs, NPs with significant diffraction scattering, and inhomogeneous background, which often make conventional thresholding procedures fail. Pebbles is accompanied by PebbleJuggler, a software program for the statistical analysis of the sets of best-fit NP models created by Pebbles. Effort has been devoted to make Pebbles and PebbleJuggler the most user-friendly and the least user-tedious we could. Pebbles and PebbleJuggler are available at http://pebbles.istm.cnr.it.
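The local model-fitting idea (as opposed to global thresholding) can be sketched by fitting an intensity model to a single synthetic particle patch with SciPy. Pebbles itself uses physically motivated TEM intensity models; the 2D Gaussian below is only the simplest stand-in:

```python
import numpy as np
from scipy.optimize import curve_fit

def spot_model(coords, amp, x0, y0, sigma, bg):
    """2D Gaussian intensity model of one projected particle plus background.
    A stand-in for the physically motivated models Pebbles actually fits."""
    x, y = coords
    return (bg + amp * np.exp(-((x - x0)**2 + (y - y0)**2)
                              / (2.0 * sigma**2))).ravel()

# Synthetic micrograph patch: one dark particle on a noisy bright background.
rng = np.random.default_rng(1)
y, x = np.mgrid[0:32, 0:32].astype(float)
true = dict(amp=-80.0, x0=15.3, y0=16.8, sigma=4.2, bg=200.0)
img = spot_model((x, y), **true).reshape(32, 32) + rng.normal(0.0, 2.0, (32, 32))

# Local fit starting from a rough guess, as an automatic detector might supply.
p0 = (-50.0, 16.0, 16.0, 3.0, float(img.mean()))
popt, _ = curve_fit(spot_model, (x, y), img.ravel(), p0=p0)
amp, x0, y0, sigma, bg = popt
print(f"fitted centre ({x0:.2f}, {y0:.2f}), width sigma = {sigma:.2f} px")
```

Because the fit is local and includes its own background term, it tolerates low contrast and an inhomogeneous background, which is precisely where thresholding breaks down.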
NASA Astrophysics Data System (ADS)
Dumbser, Michael; Loubère, Raphaël
2016-08-01
In this paper we propose a simple, robust and accurate nonlinear a posteriori stabilization of the Discontinuous Galerkin (DG) finite element method for the solution of nonlinear hyperbolic PDE systems on unstructured triangular and tetrahedral meshes in two and three space dimensions. This novel a posteriori limiter, which has been recently proposed for the simple Cartesian grid case in [62], is able to resolve discontinuities at a sub-grid scale and is substantially extended here to general unstructured simplex meshes in 2D and 3D. It can be summarized as follows: At the beginning of each time step, an approximation of the local minimum and maximum of the discrete solution is computed for each cell, taking into account also the vertex neighbors of an element. Then, an unlimited discontinuous Galerkin scheme of approximation degree N is run for one time step to produce a so-called candidate solution. Subsequently, an a posteriori detection step checks the unlimited candidate solution at time tn+1 for positivity, absence of floating point errors and whether the discrete solution has remained within or at least very close to the bounds given by the local minimum and maximum computed in the first step. Elements that do not satisfy all the previously mentioned detection criteria are flagged as troubled cells. For these troubled cells, the candidate solution is discarded as inappropriate and consequently needs to be recomputed. Within these troubled cells the old discrete solution at the previous time tn is scattered onto small sub-cells (Ns = 2N + 1 sub-cells per element edge), in order to obtain a set of sub-cell averages at time tn. Then, a more robust second order TVD finite volume scheme is applied to update the sub-cell averages within the troubled DG cells from time tn to time tn+1. The new sub-grid data at time tn+1 are finally gathered back into a valid cell-centered DG polynomial of degree N by using a classical conservative and higher order
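The detection step can be sketched in 1D on cell averages: a candidate cell is troubled when it contains floating-point garbage or leaves a relaxed local min/max interval built from the previous solution and its neighbors. A minimal version (the relaxation parameter is an arbitrary choice, and the real method works on full DG polynomials, not averages):

```python
import numpy as np

def troubled_cells(u_old, u_new, eps=1e-4):
    """Flag cells whose candidate averages violate a relaxed discrete
    maximum principle, or contain non-finite values (1D sketch)."""
    n = u_old.size
    flags = np.zeros(n, dtype=bool)
    for i in range(n):
        nbrs = u_old[max(i - 1, 0): min(i + 2, n)]   # cell plus its neighbors
        lo, hi = nbrs.min(), nbrs.max()
        delta = max(eps, eps * (hi - lo))            # relaxation of the bounds
        ok = (lo - delta <= u_new[i] <= hi + delta) and np.isfinite(u_new[i])
        flags[i] = not ok
    return flags

u_old = np.sin(np.linspace(0.0, np.pi, 10))
u_new = u_old.copy()
u_new[4] += 0.5            # a spurious overshoot, as a detector would see it
print(troubled_cells(u_old, u_new))   # only cell 4 is flagged
```

Flagged cells would then be recomputed with the robust sub-cell TVD finite volume update described in the abstract, while all other cells keep the unlimited high-order DG solution.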
Menegotti, L.; Delana, A.; Martignano, A.
2008-07-15
Film dosimetry is an attractive tool for dose distribution verification in intensity modulated radiotherapy (IMRT). A critical aspect of radiochromic film dosimetry is the scanner used for the readout of the film: the output needs to be calibrated in dose response and corrected for pixel-value and spatially dependent nonuniformity caused by light scattering; these procedures can take a long time. A method for fast and accurate calibration and uniformity correction for radiochromic film dosimetry is presented: a single film exposure is used to do both calibration and correction. Gafchromic EBT films were read with two flatbed charge coupled device scanners (Epson V750 and 1680Pro). The accuracy of the method is investigated with specific dose patterns and an IMRT beam. Comparisons with a two-dimensional array of ionization chambers using an 18×18 cm² open field and an inverse-pyramid dose pattern show an increase in the percentage of points passing the gamma analysis (tolerance parameters of 3% and 3 mm): from 55% and 64% for the 1680Pro and V750 scanners, respectively, to 94% for both scanners for the 18×18 open field, and from 76% and 75% to 91% for the inverse-pyramid pattern. Application to an IMRT beam also gives better gamma index results, from 88% and 86% for the two scanners, respectively, to 94% for both. The number of points and dose range considered for correction and calibration appears to be appropriate for use in IMRT verification. The method proved to be fast, properly corrects the nonuniformity, and has been adopted for routine clinical IMRT dose verification.
NASA Astrophysics Data System (ADS)
Gliese, U.; Avanov, L. A.; Barrie, A.; Kujawski, J. T.; Mariano, A. J.; Tucker, C. J.; Chornay, D. J.; Cao, N. T.; Zeuch, M.; Pollock, C. J.; Jacques, A. D.
2013-12-01
The Fast Plasma Investigation (FPI) of the NASA Magnetospheric MultiScale (MMS) mission employs 16 Dual Electron Spectrometers (DESs) and 16 Dual Ion Spectrometers (DISs), with 4 of each type on each of 4 spacecraft, to enable fast (30 ms for electrons; 150 ms for ions) and spatially differentiated measurements of the full 3D particle velocity distributions. This approach presents a new and challenging aspect to the calibration and operation of these instruments on the ground and in flight. The response uniformity and reliability of their calibration and the approach to handling any temporal evolution of these calibrated characteristics all assume enhanced importance in this application, where we attempt to understand the meaning of particle distributions within the ion and electron diffusion regions. Traditionally, the micro-channel plate (MCP) based detection systems for electrostatic particle spectrometers have been calibrated by setting a fixed detection threshold and, subsequently, measuring a detection-system count-rate plateau curve to determine the MCP voltage that ensures the count rate has reached a constant value independent of further variation in the MCP voltage. This is achieved when most of the MCP pulse height distribution (PHD) is located at higher values (larger pulses) than the detection amplifier threshold. This method is adequate in single-channel detection systems and in multi-channel detection systems with very low crosstalk between channels. However, in dense multi-channel systems, it can be inadequate. Furthermore, it fails to fully and individually characterize each of the fundamental parameters of the detection system. We present a new detection system calibration method that enables accurate and repeatable measurement and calibration of MCP gain, MCP efficiency, signal loss due to variation in gain and efficiency, crosstalk from effects both above and below the MCP, noise margin, and stability margin in one single measurement. The fundamental
Esmaeilzadeh, Sara; Valizadeh, Hadi; Zakeri-Milani, Parvin
2016-01-01
Purpose: The main goal of this study was the development of a reverse-phase high-performance liquid chromatography (RP-HPLC) method for flutamide quantitation that is applicable to protein binding studies. Methods: Ultrafiltration was used for the protein binding study of flutamide. For sample analysis, flutamide was extracted by a simple and low-cost extraction method using diethyl ether and then determined by HPLC/UV. Acetanilide was used as an internal standard. The chromatographic system consisted of a reversed-phase C8 column with a C8 pre-column, and a mobile phase of 29% (v/v) methanol, 38% (v/v) acetonitrile and 33% (v/v) potassium dihydrogen phosphate buffer (50 mM) with pH adjusted to 3.2. Results: Acetanilide and flutamide were eluted at 1.8 and 2.9 min, respectively. The linearity of the method was confirmed in the range of 62.5-16000 ng/ml (r2 > 0.99). The limit of quantification was shown to be 62.5 ng/ml. Precision and accuracy ranges were found to be (0.2-1.4%, 90-105%) and (0.2-5.3%, 86.7-98.5%), respectively. Acetanilide and flutamide capacity factor values of 1.35 and 2.87, tailing factor values of 1.24 and 1.07, and resolution values of 1.8 and 3.22 were obtained in accordance with ICH guidelines. Conclusion: Based on the obtained results, a rapid, precise, accurate, sensitive and cost-effective analysis procedure was proposed for the quantitative determination of flutamide. PMID:27478788
Meng, Qingyong; Chen, Jun; Zhang, Dong H
2016-04-21
To fast and accurately compute rate coefficients of the H/D + CH4 → H2/HD + CH3 reactions, we propose a segmented strategy for fitting a suitable potential energy surface (PES), on which ring-polymer molecular dynamics (RPMD) simulations are performed. On the basis of the recently developed permutation invariant polynomial neural-network approach [J. Li et al., J. Chem. Phys. 142, 204302 (2015)], PESs in local configuration spaces are constructed. In this strategy, the global PES is divided into three parts, comprising asymptotic, intermediate, and interaction parts, along the reaction coordinate. Since fewer fitting parameters are involved in the local PESs, the computational efficiency of evaluating the PES routine is enhanced by a factor of ∼20 compared with that for the global PES. For the interaction part, the RPMD computational time for the transmission coefficient can be further reduced by cutting off the redundant part of the child trajectories. For H + CH4, good agreement is found among the present RPMD rates, those from previous simulations, and experimental results. For D + CH4, on the other hand, qualitative agreement between the present RPMD and experimental results is predicted.
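A segmented strategy of this kind needs the local fits blended smoothly along the reaction coordinate so the global surface stays continuous. A minimal 1D sketch with invented local potentials and tanh switching functions (everything here is illustrative, not the authors' fitted PES):

```python
import numpy as np

# Three invented local fits along a reaction coordinate s:
v_asym = lambda s: 0.0 * s                        # asymptotic region: flat
v_inter = lambda s: 0.5 * (s - 2.0)               # intermediate region: rising
v_strong = lambda s: 1.0 - 0.3 * (s - 4.0) ** 2   # interaction region: barrier

def switch(s, s0, w=0.25):
    """Smooth 0 -> 1 switch centred at s0 with width w."""
    return 0.5 * (1.0 + np.tanh((s - s0) / w))

def v_global(s):
    """Blend the local PESs; the weights form a partition of unity."""
    w1 = 1.0 - switch(s, 2.0)              # weight of the asymptotic fit
    w3 = switch(s, 4.0)                    # weight of the interaction fit
    w2 = 1.0 - w1 - w3                     # weight of the intermediate fit
    return w1 * v_asym(s) + w2 * v_inter(s) + w3 * v_strong(s)

s = np.linspace(0.0, 6.0, 601)
v = v_global(s)
```

Because each evaluation touches only the few parameters of one local fit (plus cheap switches), the per-call cost is much lower than for a single global fit, which is the source of the speedup the abstract reports.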
Sakaridis, Ioannis; Ganopoulos, Ioannis; Argiriou, Anagnostis; Tsaftaris, Athanasios
2013-05-01
The substitution of high-priced meat with low-cost ones and the fraudulent labeling of meat products make the identification and traceability of meat species and their processed products in the food chain important. A polymerase chain reaction followed by High Resolution Melting (HRM) analysis was developed for species-specific detection of buffalo and applied to six commercial meat products. A pair of specific 12S and universal 18S rRNA primers were employed, yielding DNA fragments of 220 bp and 77 bp, respectively. All tested products were found to contain buffalo meat and presented melting curves with at least two visible inflection points derived from the amplicons of the 12S specific and 18S universal primers. The presence of buffalo meat in meat products and the adulteration of buffalo products with unknown species were established down to a level of 0.1%. HRM was proven to be a fast and accurate technique for authentication testing of meat products.
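HRM analysis hinges on the negative derivative of fluorescence with respect to temperature, whose peaks mark the melting temperatures of the amplicons (two peaks here, one per primer pair). A minimal sketch on a synthetic single-transition melt curve; the transition position and width are arbitrary:

```python
import numpy as np

def melting_temperature(temp_c, fluorescence):
    """Tm from a melt curve: temperature at the peak of -dF/dT."""
    dfdt = -np.gradient(fluorescence, temp_c)
    return temp_c[np.argmax(dfdt)]

# Synthetic melt curve: sigmoidal loss of fluorescence around Tm = 80 C.
t = np.linspace(70.0, 90.0, 401)
f = 1.0 / (1.0 + np.exp((t - 80.0) / 0.8))

print(melting_temperature(t, f))   # ~80
```

With two amplicons, -dF/dT shows two peaks, and small sequence differences between species shift the peak positions and shapes, which is what the genotyping comparison exploits.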
Zeinali-Rafsanjani, B.; Mosleh-Shirazi, M. A.; Faghihi, R.; Karbasi, S.; Mosalaei, A.
2015-01-01
To accurately recompute dose distributions in chest-wall radiotherapy with 120 kVp kilovoltage X-rays, an MCNP4C Monte Carlo model is presented using a fast method that obviates the need to fully model the tube components. To validate the model, the half-value layer (HVL), percentage depth doses (PDDs) and beam profiles were measured. Dose measurements were performed for a more complex situation using thermoluminescence dosimeters (TLDs) placed within a Rando phantom. The measured and computed first and second HVLs were 3.8, 10.3 mm Al and 3.8, 10.6 mm Al, respectively. The differences between measured and calculated PDDs and beam profiles in water were within 2 mm/2% for all data points. In the Rando phantom, differences for the majority of data points were within 2%. The proposed model offered an approximately 9500-fold reduction in run time compared to the conventional full simulation. The acceptable agreement, based on international criteria, between the simulations and the measurements validates the accuracy of the model for use in treatment planning and radiobiological modeling studies of superficial therapies, including chest-wall irradiation using kilovoltage beams. PMID:26170553
Kim, Daniel; Jensen, Jens H.; Wu, Ed X.; Sheth, Sujit S.; Brittenham, Gary M.
2009-01-01
Measurement of proton transverse relaxation rates (R2) is a generally useful means for quantitative characterization of pathological changes in tissue with a variety of clinical applications. The most widely used R2 measurement method is the Carr-Purcell-Meiboom-Gill (CPMG) pulse sequence but its relatively long scan time requires respiratory gating for chest or body MRI, rendering this approach impractical for comprehensive assessment within a clinically acceptable examination time. The purpose of our study was to develop a breath-hold multi-echo fast spin-echo (FSE) sequence for accurate measurement of R2 in the liver and heart. Phantom experiments and studies of subjects in vivo were performed to compare the FSE data with the corresponding even-echo CPMG data. For pooled data, the R2 measurements were strongly correlated (Pearson correlation coefficient = 0.99) and in excellent agreement (mean difference [CPMG-FSE] = 0.10 s−1; 95% limits of agreement were 1.98 and −1.78 s−1) between the two pulse sequences. PMID:19526516
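The relaxometry underlying both pulse sequences can be illustrated with a monoexponential decay S(TE) = S0·exp(−R2·TE) fit to multi-echo magnitudes. The sketch below uses a log-linear least-squares fit on synthetic, noise-free data; the echo times and R2 value are illustrative, not taken from the study.

```python
import numpy as np

def fit_r2(echo_times_s, signals):
    """Estimate R2 (s^-1) and S0 from multi-echo magnitudes via a log-linear fit.

    Assumes a monoexponential decay S(TE) = S0 * exp(-R2 * TE), the model
    underlying both CPMG and multi-echo FSE relaxometry.
    """
    slope, intercept = np.polyfit(echo_times_s, np.log(signals), 1)
    return -slope, np.exp(intercept)

# Synthetic 8-echo decay with R2 = 40 s^-1 (illustrative values only)
te = np.arange(1, 9) * 0.010          # echo times: 10, 20, ..., 80 ms
s = 1000.0 * np.exp(-40.0 * te)
r2, s0 = fit_r2(te, s)
```

With noisy magnitude data a weighted or nonlinear fit would be preferable, but the log-linear form keeps the idea visible in a few lines.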
Zhang, Shen; Wu, Qi; Shan, Yichu; Zhao, Qun; Zhao, Baofeng; Weng, Yejing; Sui, Zhigang; Zhang, Lihua; Zhang, Yukui
2016-01-01
Most current proteomic studies use data-dependent acquisition with dynamic exclusion to identify and quantify the peptides generated by the digestion of biological samples. Although dynamic exclusion permits more identifications and a higher possibility of finding low-abundance proteins, the stochastic and irreproducible precursor ion selection it causes limits quantification capabilities, especially for MS/MS-based quantification. This is because a peptide is usually triggered for fragmentation only once due to dynamic exclusion, so the fragment ions used for quantification only reflect the peptide abundance at that given time point. Here, we propose a strategy of fast MS/MS acquisition without dynamic exclusion to enable precise and accurate quantification of the proteome by MS/MS fragment intensity. The results showed identification efficiency comparable to that of traditional data-dependent acquisition with dynamic exclusion, and better quantitative accuracy and reproducibility for both label-free and isobaric-labeling-based quantification. It provides new insights for fully exploring the potential of modern mass spectrometers. This strategy was applied to the relative quantification of two human disease cell lines, showing great promise for quantitative proteomic applications. PMID:27198003
Yuan, Yue; Jiang, Shenlong; Miao, Qingqing; Zhang, Jia; Wang, Mengjing; An, Linna; Cao, Qinjingwen; Guan, Yafeng; Zhang, Qun; Liang, Gaolin
2014-07-01
A water-soluble, biocompatible, and fluorescent chemosensor (1) for label-free, simple, and fast detection of mercury ions (Hg(2+)) in aqueous solutions and in HepG2 cells with high selectivity is reported herein. Chelation of 1 with Hg(2+) results in the disappearance of its fluorescence emission at 350 nm and the appearance of a new emission at 405 nm. Selectivity and interference studies indicated that 1 could be selectively chelated by Hg(2+) without interference from other metal ions. Insight into the mechanisms responsible for its fluorescence effect was gained from ultrafast transient absorption spectroscopy. With these properties, 1 was successfully applied for imaging Hg(2+) in living cells and for removing Hg(2+) from river water. Moreover, we also constructed a simple device for fast and effective removal of Hg(2+) from contaminated liquid samples.
Poleksic, Aleksandar; Yao, Yuan; Tong, Hanghang; Meng, Patrick; Xie, Lei
2016-01-01
Target-based screening is one of the major approaches in drug discovery. Besides the intended target, unexpected drug off-target interactions often occur, and many of them have not been recognized and characterized. The off-target interactions can be responsible for either therapeutic or side effects. Thus, identifying the genome-wide off-targets of lead compounds or existing drugs will be critical for designing effective and safe drugs, and will provide new opportunities for drug repurposing. Although many computational methods have been developed to predict drug-target interactions, they are either less accurate than the method proposed here or computationally too intensive, thereby limiting their capability for large-scale off-target identification. In addition, the performance of most machine learning based algorithms has been evaluated mainly on predicting off-target interactions within the same gene family for hundreds of chemicals. It is not clear how these algorithms perform in terms of detecting off-targets across gene families on a proteome scale. Here, we present a fast and accurate off-target prediction method, REMAP, which is based on a dual regularized one-class collaborative filtering algorithm, to explore continuous chemical space, protein space, and their interactome on a large scale. When tested on a reliable, extensive, cross-gene-family benchmark, REMAP outperforms the state-of-the-art methods. Furthermore, REMAP is highly scalable: it can screen a dataset of 200 thousand chemicals against 20 thousand proteins within 2 hours. Using the reconstructed genome-wide target profile as the fingerprint of a chemical compound, we predicted that seven FDA-approved drugs can be repurposed as novel anti-cancer therapies. The anti-cancer activity of six of them is supported by experimental evidence. Thus, REMAP is a valuable addition to the existing in silico toolbox for drug target identification, drug repurposing, phenotypic screening, and
NASA Astrophysics Data System (ADS)
Dutta, Ivy; Chowdhury, Anirban Roy; Kumbhakar, Dharmadas
2013-03-01
Using a Chebyshev power series approach, accurate descriptions of the first higher-order (LP11) mode of graded-index fibers having three different profile shape functions are presented in this paper and applied to predict their propagation characteristics. These characteristics include the fractional power guided through the core, the excitation efficiency, and the Petermann I and II spot sizes, together with their approximate analytic formulations. We show that whereas two and three Chebyshev points in the LP11 mode approximation give fairly accurate results, values based on our calculations involving four Chebyshev points match available exact numerical results excellently.
NASA Astrophysics Data System (ADS)
Ma, Peng-Cheng; Yan, Lei-Lei; Chen, Gui-Bin; Li, Xiao-Wei; Zhan, You-Bang
2016-12-01
The control of slow and fast light propagation is a challenging task. Here, we theoretically study the dynamics of a driven optomechanical cavity coupled to a charged nanomechanical resonator (NR) via Coulomb interaction. We find that a tunable switch between slow and fast light for two signal modes can be observed from the output field by adjusting the laser-cavity detuning in this system. Moreover, the frequencies of the two signal modes can be tuned by the Coulomb coupling strength. In comparison with previous schemes, the clear advantage of our scheme is that we can simply switch from fast to slow light in the two signal modes by only adjusting the laser-cavity detuning from Δ = ω1 to Δ = −ω1. The proposal may have potential applications in optical routers and quantum optomechanical memory.
Lee, Dohoon; Lee, Jinwoo; Kim, Jungbae; Kim, Jaeyun; Na, Hyon Bin; Kim, Bokie; Shin, Chae-Ho; Kwak, Ja Hun; Dohnalkova, Alice; Grate, Jay W.; Hyeon, Taeghwan; Kim, Hak Sung
2005-12-05
We fabricated a highly sensitive and fast glucose biosensor by simply immobilizing glucose oxidase (GOx) in mesocellular carbon foam (MSU-F-C). Due to its unique structure, the MSU-F-C enabled high enzyme loading without serious mass transfer limitation, resulting in high catalytic efficiency. As a result, the glucose biosensor fabricated with MSU-F-C/GOx showed high sensitivity and fast response. Given these results and its inherent electrical conductivity, we anticipate that MSU-F-C will make a useful matrix for enzyme immobilization in various biocatalytic and electrobiocatalytic applications.
McDonnell, Mark D.; Tissera, Migel D.; Vladusich, Tony; van Schaik, André; Tapson, Jonathan
2015-01-01
Recent advances in training deep (multi-layer) architectures have inspired a renaissance in neural network use. For example, deep convolutional networks are becoming the default option for difficult tasks on large datasets, such as image and speech recognition. However, here we show that error rates below 1% on the MNIST handwritten digit benchmark can be replicated with shallow non-convolutional neural networks. This is achieved by training such networks using the ‘Extreme Learning Machine’ (ELM) approach, which also enables a very rapid training time (∼ 10 minutes). Adding distortions, as is common practice for MNIST, reduces error rates even further. Our methods are also shown to be capable of achieving less than 5.5% error rates on the NORB image database. To achieve these results, we introduce several enhancements to the standard ELM algorithm, which individually and in combination can significantly improve performance. The main innovation is to ensure each hidden-unit operates only on a randomly sized and positioned patch of each image. This form of random ‘receptive field’ sampling of the input ensures the input weight matrix is sparse, with about 90% of weights equal to zero. Furthermore, combining our methods with a small number of iterations of a single-batch backpropagation method can significantly reduce the number of hidden-units required to achieve a particular performance. Our close to state-of-the-art results for MNIST and NORB suggest that the ease of use and accuracy of the ELM algorithm for designing a single-hidden-layer neural network classifier should cause it to be given greater consideration either as a standalone method for simpler problems, or as the final classification stage in deep neural networks applied to more difficult problems. PMID:26262687
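The core ELM recipe described above — fixed random input weights (made sparse, here about 90% zeros) followed by a single least-squares solve for the output weights — can be sketched in a few lines. This is a minimal illustration on a toy problem, not the authors' exact receptive-field scheme: the random mask is a crude stand-in for their patch-based sparsity.

```python
import numpy as np

def train_elm(X, Y, n_hidden=50, sparsity=0.9, seed=0):
    """Minimal ELM: random sparse input weights, least-squares output weights.

    `sparsity` zeroes ~90% of input weights, a crude stand-in for the paper's
    random receptive-field sampling (illustrative, not the authors' scheme).
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    W *= rng.random(W.shape) > sparsity      # sparse random input weight matrix
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                   # fixed random hidden layer
    beta = np.linalg.pinv(H) @ Y             # output weights: one LS solve
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

# Toy 2-class problem: 4 points with one-hot targets
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
Y = np.eye(2)[[0, 1, 1, 0]]
W, b, beta = train_elm(X, Y)
pred = predict_elm(X, W, b, beta)
```

The single pseudoinverse solve is what makes training so fast relative to iterative backpropagation; the paper's enhancements (receptive fields, distortions, a few backprop iterations) refine this basic template.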
Hwang, Eun Gu; Lee, Yunjung
2016-01-01
Simple radiography is the best diagnostic tool for rib fractures caused by chest trauma, but it has some limitations. Thus, other tools are also being used. The aims of this study were to investigate the effectiveness of ultrasonography (US) for identifying rib fractures and to identify factors influencing its effectiveness. Between October 2003 and August 2007, 201 patients with blunt chest trauma were available to undergo chest radiographic and US examinations for diagnosis of rib fractures. The two modalities were compared in terms of effectiveness based on simple radiographic readings and US examination results. We also investigated the factors that influenced the effectiveness of US examination. Rib fractures were detected on radiography in 69 patients (34.3%) but not in the other 132 patients. Rib fractures were diagnosed by US examination in 160 patients (84.6%). Of the 132 patients who showed no rib fractures on radiography, 92 showed rib fractures on US. Among the 69 patients with rib fractures detected on radiography, 33 had additional rib fractures detected on US. Of the patients, 76 (37.8%) had identical radiographic and US results, and 125 (62.2%) had fractures detected on US that were previously undetected on radiography or additional fractures detected on US. Age, duration until US examination, and fracture location were not significant influencing factors. However, US was significantly more effective in the group without fractures detected on radiography than in the group with fractures detected on radiography (P=0.003). US examination could detect rib fractures unnoticed on simple radiography, and it is especially effective in the group without detected fractures on radiography. More attention should be paid to patients with chest trauma who have no detected fractures on radiography. PMID:28119889
Gorrell, Jamieson C; Boutin, Stan; Raveh, Shirley; Neuhaus, Peter; Côté, Steeve D; Coltman, David W
2012-09-01
We determined the sequence of the male-specific minor histocompatibility complex antigen (Smcy) from the Y chromosome of seven squirrel species (Sciuridae, Rodentia). Based on conserved regions inside the Smcy intron sequence, we designed PCR primers for sex determination in these species that can be co-amplified with nuclear loci as controls. PCR co-amplification yields two products for males and one for females that are easily visualized as bands by agarose gel electrophoresis. Our method provides simple and reliable sex determination across a wide range of squirrel species.
de Oliveira, Marcelo Firmino; Vieira, Andressa Tironi; Batista, Antônio Carlos Ferreira; Rodrigues, Hugo de Souza; Stradiotto, Nelson Ramos
2011-01-01
A simple, fast, and complete route for the production of methylic and ethylic biodiesel from tucum oil is described. Aliquots of the oil obtained directly from pressed tucum (pulp and almonds) were treated with potassium methoxide or ethoxide at 40°C for 40 min. The biodiesel formed was removed from the reactor and washed with 0.1 M HCl aqueous solution. A simple distillation at 100°C was carried out in order to remove water and alcohol species from the biodiesel. The oxidative stability indices obtained for the tucum oil and for the methylic and ethylic biodiesels were 6.13, 2.90, and 2.80 h, respectively, for storage times higher than 8 days. Quality control of the original oil and of the methylic and ethylic biodiesels, such as the amount of glycerin produced during the transesterification process, was accomplished by the TLC, GC-MS, and FT-IR techniques. The results obtained in this study indicate a potential for biofuel production by simple treatment of tucum, an important Amazonian fruit. PMID:21629751
NASA Astrophysics Data System (ADS)
Pradhan, Susmita; Das, Rashmita; Bhar, Radhaballabh; Bandyopadhyay, Rajib; Pramanik, Panchanan
2017-02-01
A new simple chemical method for the synthesis of nanocrystalline bismuth telluride (Bi2Te3) has been developed by microwave-assisted reduction of homogeneous tartrate complexes of bismuth and tellurium metal ions with hydrazine. The reaction is performed at pH 10. The nano-crystallites have a rhombohedral phase, identified by XRD. The size distribution of the nanoparticles is narrow, ranging between 50 and 70 nm. FESEM shows that the fine powders are composed of small crystallites. The TEM micrographs show mostly deformed spherical particles, and the lattice fringes are found to be 0.137 nm. Energy dispersive X-ray spectroscopy (EDX) analysis shows that the atomic composition ratio between bismuth and tellurium is 2:3. Thermoelectric properties of the materials are studied after sintering by the spark plasma sintering (SPS) method. The grain size of the material after sintering is in the nanometer range. The material shows enhanced Seebeck coefficient and electrical conductivity values at 300 K. The figure of merit is found to be 1.18 at 300 K.
Gómez Ruiz, Braulio; Roux, Stéphanie; Courtois, Francis; Bonazzi, Catherine
2016-11-15
A simple, rapid and reliable method was developed for quantifying ascorbic (AA) and dehydroascorbic (DHAA) acids and validated in 20 mM malate buffer (pH 3.8). It consists of a spectrophotometric measurement of AA, either directly on the solution with added metaphosphoric acid or after reduction of DHAA into AA by dithiothreitol. This method was developed with real-time measurement of reaction kinetics in bulk reactors in mind, and was checked in terms of linearity, limits of detection and quantification, fidelity and accuracy. The linearity was found satisfactory over the range of 0-6.95 mM, with limits of detection and quantification of 0.236 mM and 0.467 mM, respectively. The method was found acceptable in terms of fidelity and accuracy, with a coefficient of variation for repeatability and reproducibility below 6% for AA and below 15% for DHAA, and with a recovery range of 97-102% for AA and 88-112% for DHAA.
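The DHAA arithmetic implied by the method — total ascorbate measured after dithiothreitol reduction, minus directly measured AA — can be sketched as follows, assuming a linear Beer-Lambert calibration. The slope and absorbance values below are hypothetical, purely for illustration.

```python
def concentration_mM(absorbance, slope, intercept=0.0):
    """Linear (Beer-Lambert) calibration: absorbance -> concentration in mM."""
    return (absorbance - intercept) / slope

def dhaa_mM(a_direct, a_after_dtt, slope, intercept=0.0):
    """DHAA by difference: total ascorbate (after DTT reduces DHAA to AA) minus AA."""
    aa = concentration_mM(a_direct, slope, intercept)
    total = concentration_mM(a_after_dtt, slope, intercept)
    return total - aa

# Hypothetical calibration slope of 0.1 AU/mM and absorbance readings
aa_only = concentration_mM(0.2, 0.1)      # direct AA measurement
dhaa = dhaa_mM(0.2, 0.5, 0.1)             # DHAA by difference
```

The by-difference design means DHAA inherits the uncertainty of both measurements, which is consistent with the wider repeatability and recovery ranges reported for DHAA than for AA.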
A new simple and fast thermally-solvent assisted method to bond PMMA-PMMA in micro-fluidics devices
NASA Astrophysics Data System (ADS)
Bamshad, Arshya; Nikfarjam, Alireza; Khaleghi, Hossein
2016-06-01
A rapid and simple thermally-solvent assisted method of bonding was introduced for poly(methyl methacrylate) (PMMA) based microfluidic substrates. The technique is a low-temperature (68 °C) and rapid (15 min) bonding technique; in addition, only a fan-assisted oven and some paper clamps are used. Two different solvents (ethanol and isopropyl alcohol) with two different methods of cooling (one-step and three-step) were employed to determine the best solvent and cooling method (residual stresses may be released differently by different cooling methods), considering bonding strength and quality. In this bonding technique, a thin film of solvent dispersed between two PMMA sheets dissolves a thin layer of the PMMA sheet surfaces, then evaporates, and finally reconnects monomers of the PMMA sheets at the specific operating temperature. The operating temperature of this method comes from the coincidence of the solubility parameter graph of PMMA with those of the solvents. Different tests, such as tensile strength, deformation, leakage, and surface characteristics tests, were performed to find the optimum conditions for this bonding strategy. The best bonding quality and the highest bonding strength (28.47 MPa) occurred when a 70% isopropyl alcohol solution was employed with the one-step cooling method. Furthermore, bonding reversibility was taken into account, and critical percentages for irreversible bonding were obtained for both solvents and both cooling methods. This method provides excellent bonding quality for PMMA substrates and can be used in laboratories without any expensive or special instruments, thanks to its merits such as lower bonding time, lower cost, and higher strength in comparison with the majority of other common bonding techniques.
Kassianov, Evgueni I.; Barnard, James C.; Flynn, Connor J.; Riihimaki, Laura D.; Michalsky, Joseph; Hodges, G. B.
2014-10-25
We introduce and evaluate a simple retrieval of areal-averaged surface albedo using ground-based measurements of atmospheric transmission alone at five wavelengths (415, 500, 615, 673 and 870 nm), under fully overcast conditions. Our retrieval is based on a one-line semi-analytical equation and widely accepted assumptions regarding the weak spectral dependence of cloud optical properties, such as cloud optical depth and asymmetry parameter, in the visible and near-infrared spectral range. To illustrate the performance of our retrieval, we use as input measurements of spectral atmospheric transmission from the Multi-Filter Rotating Shadowband Radiometer (MFRSR). These MFRSR data are collected at two well-established continental sites in the United States supported by the U.S. Department of Energy’s (DOE’s) Atmospheric Radiation Measurement (ARM) Program and the National Oceanic and Atmospheric Administration (NOAA). The areal-averaged albedos obtained from the MFRSR are compared with collocated and coincident Moderate Resolution Imaging Spectroradiometer (MODIS) white-sky albedo. In particular, these comparisons are made at four MFRSR wavelengths (500, 615, 673 and 870 nm) and for four seasons (winter, spring, summer and fall) at the ARM site using multi-year (2008-2013) MFRSR and MODIS data. Good agreement, on average, for these wavelengths results in small values (≤0.01) of the corresponding root mean square errors (RMSEs) for these two sites. The obtained RMSEs are comparable with those obtained previously for the shortwave albedos (MODIS-derived versus tower-measured) for these sites during growing seasons. We also demonstrate good agreement between tower-based daily-averaged surface albedos measured for “nearby” overcast and non-overcast days. Thus, our retrieval, originally developed for overcast conditions, likely can be extended to non-overcast days by interpolating between overcast retrievals.
Machado, Ignacio; Bergmann, Gabriela; Pistón, Mariela
2016-03-01
A simple and fast ultrasound-assisted procedure for the determination of iron and zinc in infant formulas is presented. The analytical determinations were carried out by flame atomic absorption spectrometry. Multivariate experiments were performed for optimization; in addition, a comparative study was carried out using two ultrasonic devices. A method using an ultrasonic bath was selected because several samples can be prepared simultaneously, and there is less contamination risk. Analytical precision (sr(%)) was 3.3% and 4.1% for iron and zinc, respectively. Trueness was assessed using a reference material and by comparison of the results obtained analyzing commercial samples using a reference method. The results were statistically equivalent to the certified values and in good agreement with those obtained using the reference method. The proposed method can be easily implemented in laboratories for routine analysis with the advantage of being rapid and in agreement with green chemistry.
Song, Weitao; Zhang, Yiqun; Li, Guijie; Chen, Haiyan; Wang, Hui; Zhao, Qi; He, Dong; Zhao, Chun; Ding, Lan
2014-01-15
This paper presents a fast, simple and green sample pretreatment method for the extraction of 8 carbamate pesticides in rice. The carbamate pesticides were extracted by a microwave-assisted water steam extraction method, and the extract obtained was immediately applied to a C18 solid phase extraction cartridge for clean-up and concentration. The eluate containing the target compounds was finally analysed by high performance liquid chromatography with mass spectrometry. The parameters affecting extraction efficiency were investigated and optimised. Limits of detection ranging from 1.1 to 4.2 ng g(-1) were obtained. The recoveries of the 8 carbamate pesticides ranged from 66% to 117% at three spiked levels, and the inter- and intra-day relative standard deviation values were less than 9.1%. Compared with traditional methods, the proposed method requires less extraction time and less organic solvent.
2014-01-01
We introduce a simple and fast approach for predicting RNA chemical shifts from interatomic distances that performs with an accuracy similar to existing predictors and enables the first chemical shift-restrained simulations of RNA to be carried out. Our analysis demonstrates that the applied restraints can effectively guide conformational sampling toward regions of space that are more consistent with chemical shifts than the initial coordinates used for the simulations. As such, our approach should be widely applicable in mapping the conformational landscape of RNAs via chemical shift-guided molecular dynamics simulations. The simplicity and demonstrated sensitivity to three-dimensional structure should also allow our method to be used in chemical shift-based RNA structure prediction, validation, and refinement. PMID:25255209
A fast and simple program for solving local Schrödinger equations in two and three dimensions
NASA Astrophysics Data System (ADS)
Janecek, S.; Krotscheck, E.
2008-06-01
We describe a simple and rapidly converging code for solving local Schrödinger equations in two and three dimensions. Our method utilizes a fourth-order factorization of the imaginary time evolution operator which improves the convergence rate by one to two orders of magnitude compared with a second-order Trotter factorization. We present the theory behind the method and strategies for assessing convergence and accuracy. Our code requires one user-defined function which specifies the local external potential. We describe the definition of this function as well as input and output functionalities.
Program summary
Program title: 3dsch/2dsch
Catalogue identifier: AEAQ_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAQ_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 40 085
No. of bytes in distributed program, including test data, etc.: 285 957
Distribution format: tar.gz
Programming language: Fortran 90
Computer: Tested on x86, amd64, Itanium2, and MIPS architectures. Should run on any architecture providing a Fortran 90 compiler
Operating system: So far tested under UNIX/Linux and Irix. Any OS with a Fortran 90 compiler available should suffice
RAM: 2 MB to 16 GB, depending on system size
Classification: 6.10, 7.3
External routines: FFTW3 (http://www.fftw.org/), Lapack (http://www.netlib.org/lapack/)
Nature of problem: Numerical calculation of low-lying states of 2D and 3D local Schrödinger equations in configuration space.
Solution method: 4th order factorization of the diffusion operator.
Restrictions: The code is at this time designed for up to 152 states in 3D and for up to 100 states in 2D. This number can easily be increased by generating more trial states in the initialization routine.
Additional comments: Sample input files for the 2D and the 3D
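The imaginary-time evolution idea behind the program can be illustrated with a minimal 1D sketch using the simpler second-order Trotter splitting that the paper's fourth-order factorization improves upon. Units with ħ = m = 1 are assumed, and the harmonic-oscillator test potential is our choice, not taken from the program's distribution.

```python
import numpy as np

def ground_state_energy(v, x, tau=0.001, steps=5000):
    """Lowest eigenvalue of -1/2 d^2/dx^2 + v(x) by imaginary-time propagation.

    Uses a second-order Trotter splitting exp(-tau T/2) exp(-tau V) exp(-tau T/2);
    the published code's 4th-order factorization converges much faster, and it
    treats 2D/3D rather than the 1D case sketched here.
    """
    n = x.size
    dx = x[1] - x[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    expT = np.exp(-0.5 * tau * 0.5 * k**2)   # half-step kinetic propagator
    expV = np.exp(-tau * v)                  # full-step potential propagator
    psi = np.exp(-x**2)                      # arbitrary starting wave function
    for _ in range(steps):
        psi = np.fft.ifft(expT * np.fft.fft(psi))
        psi *= expV
        psi = np.fft.ifft(expT * np.fft.fft(psi))
        psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # renormalize
    # Rayleigh quotient for the converged state's energy
    t_psi = np.fft.ifft(0.5 * k**2 * np.fft.fft(psi))
    return np.sum(np.conj(psi) * (t_psi + v * psi)).real * dx

x = np.linspace(-8, 8, 256)
e0 = ground_state_energy(0.5 * x**2, x)      # harmonic oscillator: exact E0 = 0.5
```

Excited states, as in the actual program, would be obtained by propagating several trial states and re-orthogonalizing them each step.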
Moester, Martiene J.C.; Schoeman, Monique A.E.; Oudshoorn, Ineke B.; Beusekom, Mara M. van; Mol, Isabel M.; Kaijzel, Eric L.; Löwik, Clemens W.G.M.; Rooij, Karien E. de
2014-01-03
Highlights:
•We validate a simple and fast method of quantification of in vitro mineralization.
•Fluorescently labeled agents can detect calcium deposits in the mineralized matrix of cell cultures.
•Fluorescent signals of the probes correlated with Alizarin Red S staining.
Abstract: Alizarin Red S staining is the standard method to indicate and quantify matrix mineralization during differentiation of osteoblast cultures. KS483 cells are multipotent mouse mesenchymal progenitor cells that can differentiate into chondrocytes, adipocytes and osteoblasts and are a well-characterized model for the study of bone formation. Matrix mineralization is the last step of differentiation of bone cells and is therefore a very important outcome measure in bone research. Fluorescently labeled calcium chelating agents, e.g. BoneTag and OsteoSense, are currently used for in vivo imaging of bone. The aim of the present study was to validate these probes for fast and simple detection and quantification of in vitro matrix mineralization by KS483 cells, thus enabling high-throughput screening experiments. KS483 cells were cultured under osteogenic conditions in the presence of compounds that either stimulate or inhibit osteoblast differentiation and thereby matrix mineralization. After 21 days of differentiation, fluorescence of stained cultures was quantified with a near-infrared imager and compared to Alizarin Red S quantification. Fluorescence of both probes closely correlated to Alizarin Red S staining in both inhibiting and stimulating conditions. In addition, both compounds displayed specificity for mineralized nodules. We therefore conclude that this method of quantification of bone mineralization using fluorescent compounds is a good alternative to the Alizarin Red S staining.
Yun, Myeong Gu; Kim, Ye Kyun; Ahn, Cheol Hyoun; Cho, Sung Woon; Kang, Won Jun; Cho, Hyung Koun; Kim, Yong-Hoon
2016-01-01
We have demonstrated that photo-thin film transistors (photo-TFTs) fabricated via a simple defect-generating process could achieve fast recovery, a high signal to noise (S/N) ratio, and high sensitivity. The photo-TFTs are inverted-staggered bottom-gate type indium-gallium-zinc-oxide (IGZO) TFTs fabricated using atomic layer deposition (ALD)-derived Al2O3 gate insulators. The surfaces of the Al2O3 gate insulators are damaged by ion bombardment during the deposition of the IGZO channel layers by sputtering and the damage results in the hysteresis behavior of the photo-TFTs. The hysteresis loops broaden as the deposition power density increases. This implies that we can easily control the amount of the interface trap sites and/or trap sites in the gate insulator near the interface. The photo-TFTs with large hysteresis-related defects have high S/N ratio and fast recovery in spite of the low operation voltages including a drain voltage of 1 V, positive gate bias pulse voltage of 3 V, and gate voltage pulse width of 3 V (0 to 3 V). In addition, through the hysteresis-related defect-generating process, we have achieved a high responsivity since the bulk defects that can be photo-excited and eject electrons also increase with increasing deposition power density. PMID:27553518
NASA Astrophysics Data System (ADS)
Yun, Myeong Gu; Kim, Ye Kyun; Ahn, Cheol Hyoun; Cho, Sung Woon; Kang, Won Jun; Cho, Hyung Koun; Kim, Yong-Hoon
2016-08-01
We have demonstrated that photo-thin film transistors (photo-TFTs) fabricated via a simple defect-generating process could achieve fast recovery, a high signal to noise (S/N) ratio, and high sensitivity. The photo-TFTs are inverted-staggered bottom-gate type indium-gallium-zinc-oxide (IGZO) TFTs fabricated using atomic layer deposition (ALD)-derived Al2O3 gate insulators. The surfaces of the Al2O3 gate insulators are damaged by ion bombardment during the deposition of the IGZO channel layers by sputtering and the damage results in the hysteresis behavior of the photo-TFTs. The hysteresis loops broaden as the deposition power density increases. This implies that we can easily control the amount of the interface trap sites and/or trap sites in the gate insulator near the interface. The photo-TFTs with large hysteresis-related defects have high S/N ratio and fast recovery in spite of the low operation voltages including a drain voltage of 1 V, positive gate bias pulse voltage of 3 V, and gate voltage pulse width of 3 V (0 to 3 V). In addition, through the hysteresis-related defect-generating process, we have achieved a high responsivity since the bulk defects that can be photo-excited and eject electrons also increase with increasing deposition power density.
NASA Astrophysics Data System (ADS)
Wu, Zefei; Guo, Yanqing; Guo, Yuzheng; Huang, Rui; Xu, Shuigang; Song, Jie; Lu, Huanhuan; Lin, Zhenxu; Han, Yu; Li, Hongliang; Han, Tianyi; Lin, Jiangxiazi; Wu, Yingying; Long, Gen; Cai, Yuan; Cheng, Chun; Su, Dangsheng; Robertson, John; Wang, Ning
2016-01-01
The transfer-free synthesis of high-quality, large-area graphene on a given dielectric substrate, which is highly desirable for device applications, remains a significant challenge. In this paper, we report on a simple rapid thermal treatment (RTT) method for the fast and direct growth of high-quality, large-scale monolayer graphene on a SiO2/Si substrate from solid carbon sources. The stack structure of a solid carbon layer/copper film/SiO2 is adopted in the RTT process. The inserted copper film not only acts as an active catalyst for the carbon precursor but also serves as a "filter" that prevents premature carbon dissolution and thus contributes to graphene growth on SiO2/Si. The produced graphene exhibits a high carrier mobility of up to 3000 cm2 V-1 s-1 at room temperature and standard half-integer quantum oscillations. Our work provides a promising, simple, transfer-free approach using solid carbon sources to obtain high-quality graphene for practical applications.
Borges, Ney Carter; Barrientos-Astigarraga, Rafael Eliseo; Sverdloff, Carlos Eduardo; Donato, José Luiz; Moreno, Patricia; Felix, Leila; Galvinas, Paulo Alexandre Rebelo; Moreno, Ronilson Agnaldo
2012-11-01
In the present study a simple, fast, sensitive and robust method to quantify mirtazapine in human plasma using quetiapine as the internal standard (IS) is described. The analyte and the IS were extracted from human plasma by a simple protein precipitation with methanol and were analyzed by high-performance liquid chromatography coupled to an electrospray tandem triple quadrupole mass spectrometer (HPLC-ESI-MS/MS). Chromatography was performed isocratically on a C(18), 5 µm analytical column and the run time was 1.8 min. The lower limit of quantitation was 0.5 ng/mL and a linear calibration curve over the range 0.5-150 ng/mL was obtained, showing acceptable accuracy and precision. This analytical method was applied in a relative bioavailability study in order to compare a test mirtazapine 30 mg single-dose formulation vs a reference formulation in 31 volunteers of both sexes. The study was conducted in an open randomized two-period crossover design and with a 14 day washout period. Since the 90% confidence interval for C(max) , AUC(last) and AUC(0-inf) were within the 80-125% interval proposed by the Food and Drug Administration and ANVISA (Brazilian Health Surveillance Agency), it was concluded that mirtazapine 30 mg/dose is bioequivalent to the reference formulation, according to both the rate and extent of absorption.
Essa, Khalid S.
2013-01-01
A new fast least-squares method is developed to estimate the shape factor (q-parameter) of a buried structure using normalized residual anomalies obtained from gravity data. The problem of shape factor estimation is transformed into the problem of finding a solution of a non-linear equation of the form f(q) = 0 by defining the anomaly value at the origin and at different points on the profile (N-value). Procedures are also formulated to estimate the depth (z-parameter) and the amplitude coefficient (A-parameter) of the buried structure. The method is simple and rapid for estimating the parameters that produce the gravity anomalies. This technique is applicable to a class of geometrically simple anomalous bodies, including the semi-infinite vertical cylinder, the infinitely long horizontal cylinder, and the sphere. The technique is tested and verified on theoretical models with and without random errors. It is also successfully applied to real data sets from Senegal and India, and the inverted parameters are in good agreement with the known actual values. PMID:25685472
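The key numerical step in the abstract above is recovering the shape factor by solving a one-dimensional non-linear equation f(q) = 0. A minimal sketch of that step, using simple bisection, is given below; the function f is a hypothetical stand-in, since the abstract does not give its closed form.

```python
# Sketch of the root-finding step: the shape factor q is recovered by
# solving f(q) = 0 on a physically meaningful bracket. The exact f(q) in
# Essa's method depends on the anomaly values at the origin and at points
# on the profile; the f used in the example below is a made-up stand-in
# chosen only so the bisection machinery can be demonstrated.

def solve_shape_factor(f, q_lo, q_hi, tol=1e-9, max_iter=200):
    """Find q in [q_lo, q_hi] with f(q) = 0 by bisection.

    Assumes f is continuous and changes sign on the bracket.
    """
    f_lo = f(q_lo)
    if f_lo == 0.0:
        return q_lo
    if f_lo * f(q_hi) > 0:
        raise ValueError("f must change sign on [q_lo, q_hi]")
    for _ in range(max_iter):
        q_mid = 0.5 * (q_lo + q_hi)
        f_mid = f(q_mid)
        if abs(f_mid) < tol or (q_hi - q_lo) < tol:
            return q_mid
        if f_lo * f_mid < 0:
            q_hi = q_mid
        else:
            q_lo, f_lo = q_mid, f_mid
    return 0.5 * (q_lo + q_hi)

# Illustration: a sphere corresponds to q = 1.5 in this family of models;
# the stand-in f below is constructed to have its root there.
q = solve_shape_factor(lambda q: q**3 - 3.375, 0.5, 2.0)
```

Any bracketing or Newton-type solver would serve equally well here; bisection is shown because it needs no derivative of f.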
Simple BCD circuit accurately counts to 24
NASA Technical Reports Server (NTRS)
Spafford, M. L.
1965-01-01
A ripple-through counter with a divide-by-24 output pulse is used in digital control clocks to register hours and give a daily output signal. It uses commercially available digital modules that incorporate AND gates and flip-flops.
Floden, Evan W.; Tommaso, Paolo D.; Chatzou, Maria; Magis, Cedrik; Notredame, Cedric; Chang, Jia-Ming
2016-01-01
The PSI/TM-Coffee web server performs multiple sequence alignment (MSA) of proteins by combining homology extension with a consistency based alignment approach. Homology extension is performed with Position Specific Iterative (PSI) BLAST searches against a choice of redundant and non-redundant databases. The main novelty of this server is to allow databases of reduced complexity to rapidly perform homology extension. This server also gives the possibility to use transmembrane proteins (TMPs) reference databases to allow even faster homology extension on this important category of proteins. Aside from an MSA, the server also outputs topological prediction of TMPs using the HMMTOP algorithm. Previous benchmarking of the method has shown this approach outperforms the most accurate alignment methods such as MSAProbs, Kalign, PROMALS, MAFFT, ProbCons and PRALINE™. The web server is available at http://tcoffee.crg.cat/tmcoffee. PMID:27106060
NASA Astrophysics Data System (ADS)
Liem, J. S.; Dong, F.; Owano, T. G.; Baer, D. S.
2010-12-01
Stable isotopes of water vapor are powerful tracers to investigate the hydrological cycle and ecological processes. Therefore, continuous, in-situ and accurate measurements of δ18O and δ2H are critical to advance the understanding of water-cycle dynamics worldwide. Furthermore, the combination of meteorological techniques and high-frequency isotopic water measurements can provide detailed time-resolved information on the eco-physiological performance of plants and enable improved understanding of water fluxes at ecosystem scales. In this work, we present recent development and field deployment of a novel Water Vapor Isotope Measurement System (WVIMS) capable of simultaneous in situ measurements of δ18O and δ2H and water mixing ratio (H2O) with high precision, accuracy and speed (up to 10 Hz measurement rate). The WVIMS consists of an Analyzer (Water Vapor Isotope Analyzer), based on cavity enhanced laser absorption spectroscopy, and a Standard Source (Water Vapor Isotope Standard Source), based on quantitative evaporation of a liquid water standard (with known isotopic content), and operates in a dual-inlet configuration. The WVIMS automatically controls the entire sample and data collection, data analysis and calibration process to allow for continuous, autonomous unattended long-term operation. The WVIMS has been demonstrated for accurate (i.e. fully calibrated) measurements ranging from 500 ppmv (typical of arctic environments) to over 30,000 ppmv (typical of tropical environments) in air. Dual-inlet operation, which involves regular calibration with isotopic water vapor reference standards, essentially eliminates measurement drift, ensures data reliability, and allows operation over an extremely wide ambient temperature range (5-45 °C). This presentation will include recent measurements recorded using the WVIMS in plant growth chambers and in arctic environments. The availability of this new instrumentation provides new opportunities for detailed continuous
Wan Chan Tseung, H; Ma, J; Beltran, C
2014-06-15
Purpose: To build a GPU-based Monte Carlo (MC) simulation of proton transport with detailed modeling of elastic and non-elastic (NE) proton-nucleus interactions, for use in a very fast and cost-effective proton therapy treatment plan verification system. Methods: Using the CUDA framework, we implemented kernels for the following tasks: (1) Simulation of beam spots from our possible scanning nozzle configurations, (2) Proton propagation through CT geometry, taking into account nuclear elastic and multiple scattering, as well as energy straggling, (3) Bertini-style modeling of the intranuclear cascade stage of NE interactions, and (4) Simulation of nuclear evaporation. To validate our MC, we performed: (1) Secondary particle yield calculations in NE collisions with therapeutically-relevant nuclei, (2) Pencil-beam dose calculations in homogeneous phantoms, (3) A large number of treatment plan dose recalculations, and compared with Geant4.9.6p2/TOPAS. A workflow was devised for calculating plans from a commercially available treatment planning system, with scripts for reading DICOM files and generating inputs for our MC. Results: Yields, energy and angular distributions of secondaries from NE collisions on various nuclei are in good agreement with the Geant4.9.6p2 Bertini and Binary cascade models. The 3D-gamma pass rate at 2%/2 mm for 70-230 MeV pencil-beam dose distributions in water, soft tissue, bone and Ti phantoms is 100%. The pass rate at 2%/2 mm for treatment plan calculations is typically above 98%. The net computational time on an NVIDIA GTX680 card, including all CPU-GPU data transfers, is around 20 s for 1×10⁷ proton histories. Conclusion: Our GPU-based proton transport MC is the first of its kind to include a detailed nuclear model to handle NE interactions on any nucleus. Dosimetric calculations demonstrate very good agreement with Geant4.9.6p2/TOPAS. Our MC is being integrated into a framework to perform fast routine clinical QA of pencil
Poulin, E; Racine, E; Beaulieu, L; Binnekamp, D
2014-06-15
Purpose: In high dose rate brachytherapy (HDR-B), current catheter reconstruction protocols are slow and error-prone. The purpose of this study was to evaluate the accuracy and robustness of an electromagnetic (EM) tracking system for improved catheter reconstruction in HDR-B protocols. Methods: For this proof of principle, a total of 10 catheters were inserted in gelatin phantoms with different trajectories. Catheters were reconstructed using a Philips-design 18G biopsy needle (used as an EM stylet) and the second generation Aurora Planar Field Generator from Northern Digital Inc. The Aurora EM system exploits alternating current technology and generates 3D points at 40 Hz. Phantoms were also scanned using a μCT (GE Healthcare) and a Philips Big Bore clinical CT system, with resolutions of 0.089 mm and 2 mm, respectively. Reconstructions using the EM stylet were compared to μCT and CT. To assess the robustness of the EM reconstruction, 5 catheters were reconstructed twice and compared. Results: Reconstruction time for one catheter was 10 seconds or less. This would imply that for a typical clinical implant of 17 catheters, the total reconstruction time would be less than 3 minutes. When compared to the μCT, the mean EM tip identification error was 0.69 ± 0.29 mm while the CT error was 1.08 ± 0.67 mm. The mean 3D distance error was found to be 0.92 ± 0.37 mm and 1.74 ± 1.39 mm for the EM and CT, respectively. EM 3D catheter trajectories were found to be significantly more accurate (unpaired t-test, p < 0.05). A mean difference of less than 0.5 mm was found between successive EM reconstructions. Conclusion: The EM reconstruction was found to be faster, more accurate and more robust than the conventional methods used for catheter reconstruction in HDR-B. This approach can be applied to any type of catheters and applicators. We would like to disclose that the equipment used in this study comes from a collaboration with Philips Medical.
Brounstein, Anna; Hacihaliloglu, Ilker; Guy, Pierre; Hodgson, Antony; Abugharbieh, Rafeef
2015-12-01
Automatic, accurate and real-time registration is an important step in providing effective guidance and successful anatomic restoration in ultrasound (US)-based computer assisted orthopedic surgery. We propose a method in which local phase-based bone surfaces, extracted from intra-operative US data, are registered to pre-operatively segmented computed tomography data. Extracted bone surfaces are downsampled and reinforced with high curvature features. A novel hierarchical simplification algorithm is used to further optimize the point clouds. The final point clouds are represented as Gaussian mixture models and iteratively matched by minimizing the dissimilarity between them using an L2 metric. For 44 clinical data sets from 25 pelvic fracture patients and 49 phantom data sets, we report mean surface registration accuracies of 0.31 and 0.77 mm, respectively, with an average registration time of 1.41 s. Our results suggest the viability and potential of the chosen method for real-time intra-operative registration in orthopedic surgery.
Alanio, A; Beretti, J-L; Dauphin, B; Mellado, E; Quesne, G; Lacroix, C; Amara, A; Berche, P; Nassif, X; Bougnoux, M-E
2011-05-01
New Aspergillus species have recently been described with the use of multilocus sequencing in refractory cases of invasive aspergillosis. The classical phenotypic identification methods routinely used in clinical laboratories failed to identify them adequately. Some of these Aspergillus species have specific patterns of susceptibility to antifungal agents, and misidentification may lead to inappropriate therapy. We developed a matrix-assisted laser desorption ionization time-of-flight (MALDI-TOF) mass spectrometry (MS)-based strategy to adequately identify Aspergillus species to the species level. A database including the reference spectra of 28 clinically relevant species from seven Aspergillus sections (five common and 23 unusual species) was engineered. The profiles of young and mature colonies were analysed for each reference strain, and species-specific spectral fingerprints were identified. The performance of the database was then tested on 124 clinical and 16 environmental isolates previously characterized by partial sequencing of the β-tubulin and calmodulin genes. One hundred and thirty-eight isolates of 140 (98.6%) were correctly identified. Two atypical isolates could not be identified, but no isolate was misidentified (specificity: 100%). The database, including species-specific spectral fingerprints of young and mature colonies of the reference strains, allowed identification regardless of the maturity of the clinical isolate. These results indicate that MALDI-TOF MS is a powerful tool for rapid and accurate identification of both common and unusual species of Aspergillus. It can give better results than morphological identification in clinical laboratories.
Tsargorodska, Anna; El Zubir, Osama; Darroch, Brice; Cartron, Michaël L; Basova, Tamara; Hunter, C Neil; Nabok, Alexei V; Leggett, Graham J
2014-08-26
We describe a fast, simple method for the fabrication of reusable, robust gold nanostructures over macroscopic (cm²) areas. A wide range of nanostructure morphologies is accessible in a combinatorial fashion. Self-assembled monolayers of alkylthiolates on chromium-primed polycrystalline gold films are patterned using a Lloyd's mirror interferometer and etched using mercaptoethylamine in ethanol in a rapid process that does not require access to clean-room facilities. The use of a Cr adhesion layer facilitates the cleaning of specimens by immersion in piranha solution, enabling their repeated reuse without significant change in their absorbance spectra over two years. A library of 200 different nanostructures was prepared and found to exhibit a range of optical behavior. Annealing yielded structures with a uniformly high degree of crystallinity that exhibited strong plasmon bands. Using a combinatorial approach, correlations were established between the preannealing morphologies (determined by the fabrication conditions) and the postannealing optical properties that enabled specimens to be prepared "to order" with a selected localized surface plasmon resonance. The refractive index sensitivity of gold nanostructures formed in this way was found to correlate closely with measurements reported for structures fabricated by other methods. Strong enhancements were observed in the Raman spectra of tetra-tert-butyl-substituted phthalocyanine. The shift in the position of the plasmon band after site-specific attachment of histidine-tagged green fluorescent protein (His-GFP) and bacteriochlorophyll a was measured for a range of nanostructured films, enabling the rapid identification of the one that yielded the largest shift. This approach offers a simple route to the production of durable, reusable, macroscopic arrays of gold nanostructures with precisely controllable morphologies.
Kawana, Shuichi; Nakagawa, Katsuhiro; Hasegawa, Yuki; Yamaguchi, Seiji
2010-11-15
A simple and rapid method for quantitative analysis of amino acids, including valine (Val), leucine (Leu), isoleucine (Ile), methionine (Met) and phenylalanine (Phe), in whole blood has been developed using GC/MS. In this method, whole blood was collected using a filter paper technique, and a 1/8 in. blood spot punch was used for sample preparation. Amino acids were extracted from the sample, and the extracts were purified using cation-exchange resins. The isotope dilution method using ²H₈-Val, ²H₃-Leu, ²H₃-Met and ²H₅-Phe as internal standards was applied. Following propyl chloroformate derivatization, the derivatives were analyzed using fast-GC/MS. The extraction recoveries using these techniques ranged from 69.8% to 87.9%, and analysis time for each sample was approximately 26 min. Calibration curves at concentrations from 0.0 to 1666.7 μmol/l for Val, Leu, Ile and Phe and from 0.0 to 333.3 μmol/l for Met showed good linearity with regression coefficients=1. The method detection limits for Val, Leu, Ile, Met and Phe were 24.2, 16.7, 8.7, 1.5 and 12.9 μmol/l, respectively. This method was applied to blood spot samples obtained from patients with phenylketonuria (PKU), maple syrup urine disease (MSUD), hypermethionine and neonatal intrahepatic cholestasis caused by citrin deficiency (NICCD), and the analysis results showed that the concentrations of amino acids that characterize these diseases were increased. These results indicate that this method provides a simple and rapid procedure for precise determination of amino acids in whole blood.
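The quantitation scheme described above, an isotope-dilution calibration curve relating the analyte-to-internal-standard response ratio to concentration, amounts to a linear least-squares fit and its inversion. A minimal stdlib-only sketch follows; all calibration numbers are invented, with only the concentration range echoing the Val/Leu/Ile/Phe range quoted in the abstract.

```python
# Hedged sketch of isotope-dilution quantitation: fit a line to the
# analyte / labelled-internal-standard response ratio vs. concentration,
# then invert it for unknown samples. Names and numbers are illustrative.

def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b (stdlib only)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    return a, b

# Hypothetical calibration: ratio = area(analyte) / area(2H-labelled IS)
conc = [0.0, 100.0, 400.0, 800.0, 1666.7]   # umol/l, as in the Val range
ratio = [0.00, 0.12, 0.48, 0.96, 2.00]      # made-up, near-linear responses

slope, intercept = fit_line(conc, ratio)

def quantify(sample_ratio):
    """Invert the calibration to get concentration in umol/l."""
    return (sample_ratio - intercept) / slope
```

Because the labelled internal standards co-extract and co-derivatize with the analytes, the ratio cancels most recovery and instrument drift, which is why the calibration stays linear over three orders of magnitude.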
Scovazzi, Guglielmo; Carnes, Brian; Zeng, Xianyi; Rossi, Simone
2015-11-12
Here, we propose a new approach for the stabilization of linear tetrahedral finite elements in the case of nearly incompressible transient solid dynamics computations. Our method is based on a mixed formulation, in which the momentum equation is complemented by a rate equation for the evolution of the pressure field, approximated with piece-wise linear, continuous finite element functions. The pressure equation is stabilized to prevent spurious pressure oscillations in computations. Incidentally, it is also shown that many stabilized methods previously developed for the static case do not generalize easily to transient dynamics. Extensive tests in the context of linear and nonlinear elasticity are used to corroborate the claim that the proposed method is robust, stable, and accurate.
NASA Astrophysics Data System (ADS)
Esfandiari, H.; Amiri, S.; Lichti, D. D.; Anglin, C.
2014-06-01
A C-arm is a mobile X-ray device that is frequently used during orthopaedic surgeries. It consists of a semi-circular, arc-shaped arm that holds an X-ray transmitter at one end and an X-ray detector at the other. Intramedullary nail (IM nail) fixation is a popular orthopaedic surgery in which a metallic rod is placed into the patient's fractured bone (femur or tibia) and fixed using metal screws. The main challenge of IM-nail fixation surgery is to achieve the X-ray shot in which the distal holes of the IM nail appear as circles (desired view) so that the surgeon can easily insert the screws. Although C-arm X-ray devices are routinely used in IM-nail fixation surgeries, the surgeons or radiation technologists (rad-techs) usually use them in a trial-and-error manner. This method increases both radiation exposure and surgery time. In this study, we have designed and developed an IM-nail distal locking navigation technique that leads to more accurate and faster screw placement with a lower radiation dose and a minimum number of added steps to the operation, to make it more accepted within the orthopaedic community. The specific purpose of this study was to develop and validate an automated technique for identifying the current pose of the IM nail relative to the C-arm. An accuracy assessment was performed to test the reliability of the navigation results. Translational accuracy was demonstrated to be better than 1 mm, roll and pitch rotations better than 2° and yaw rotational accuracy better than 2-5° depending on the separation angle. Computation time was less than 3.5 seconds.
Dubreil, Estelle; Gautier, Sophie; Fourmond, Marie-Pierre; Bessiral, Mélaine; Gaugain, Murielle; Verdon, Eric; Pessel, Dominique
2017-04-01
An approach is described to validate a fast and simple targeted screening method for antibiotic analysis in meat and aquaculture products by LC-MS/MS. The strategy of validation was applied for a panel of 75 antibiotics belonging to different families, i.e., penicillins, cephalosporins, sulfonamides, macrolides, quinolones and phenicols. The samples were extracted once with acetonitrile, concentrated by evaporation and injected into the LC-MS/MS system. The approach chosen for the validation was based on the Community Reference Laboratory (CRL) guidelines for the validation of screening qualitative methods. The aim of the validation was to prove sufficient sensitivity of the method to detect all the targeted antibiotics at the level of interest, generally the maximum residue limit (MRL). A robustness study was also performed to test the influence of different factors. The validation showed that the method is valid to detect and identify 73 antibiotics of the 75 antibiotics studied in meat and aquaculture products at the validation levels.
Jank, L; Hoff, R B; Tarouco, P C; Barreto, F; Pizzolato, T M
2012-01-01
This study presents the development and validation of a simple method for the detection and quantification of residues of six β-lactam antibiotics (ceftiofur, penicillin G, penicillin V, oxacillin, cloxacillin and dicloxacillin) in bovine milk, using a fast liquid-liquid extraction (LLE) for sample preparation followed by liquid chromatography-electrospray-tandem mass spectrometry (LC-MS/MS). LLE consisted of the addition of acetonitrile to the sample, followed by addition of sodium chloride, centrifugation and direct injection of an aliquot into the LC-MS/MS system. Separation was performed on a C18 column, using acetonitrile and water, both with 0.1% formic acid, as mobile phase. Method validation was performed according to the criteria of Commission Decision 2002/657/EC. Limits of detection ranged from 0.4 ng ml⁻¹ (penicillin G and penicillin V) to 10.0 ng ml⁻¹ (ceftiofur), and linearity was achieved. The decision limit (CCα), detection capability (CCβ), accuracy, and inter- and intra-day repeatability of the method are reported.
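For context, for substances with an established MRL, the decision limit (CCα) and detection capability (CCβ) reported above are commonly derived from the within-laboratory reproducibility s at the MRL as CCα = MRL + 1.64·s and CCβ = CCα + 1.64·s. The sketch below follows that convention with invented replicate data; it is not the authors' exact calculation.

```python
import statistics

# Hedged sketch of CCalpha/CCbeta in the spirit of Commission Decision
# 2002/657/EC for MRL substances: CCalpha = MRL + 1.64*s and
# CCbeta = CCalpha + 1.64*s, where s is the standard deviation of
# within-laboratory reproducibility at the MRL. The replicate values
# below are invented for illustration.

def decision_limits(mrl, replicate_results):
    s = statistics.stdev(replicate_results)
    cc_alpha = mrl + 1.64 * s
    cc_beta = cc_alpha + 1.64 * s
    return cc_alpha, cc_beta

# Hypothetical penicillin G replicates fortified at MRL = 4 ng/ml in milk
replicates = [3.8, 4.1, 4.0, 4.3, 3.9, 4.2, 4.1, 3.7]
cc_alpha, cc_beta = decision_limits(4.0, replicates)
```

A measured concentration above CCα triggers a non-compliant decision with 5% false-positive risk; CCβ is the lowest true concentration detected above CCα with 5% false-negative risk.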
Zhang, Chi; Liu, Song; Zhou, Yaoqi
2006-04-01
Molecular networks in cells are organized into functional modules, where genes in the same module interact densely with each other and participate in the same biological process. Thus, identification of modules from molecular networks is an important step toward a better understanding of how cells function through the molecular networks. Here, we propose a simple, automatic method, called MC(2), to identify functional modules by enumerating and merging cliques in the protein-interaction data from large-scale experiments. Application of MC(2) to the S. cerevisiae protein-interaction data produces 84 modules, whose sizes range from 4 to 69 genes. The majority of the discovered modules are significantly enriched with a highly specific process term (at least 4 levels below root) and a specific cellular component in Gene Ontology (GO) tree. The average fraction of genes with the most enriched GO term for all modules is 82% for specific biological processes and 78% for specific cellular components. In addition, the predicted modules are enriched with coexpressed proteins. These modules are found to be useful for annotating unknown genes and uncovering novel functions of known genes. MC(2) is efficient, and takes only about 5 min to identify modules from the current yeast gene interaction network with a typical PC (Intel Xeon 2.5 GHz CPU and 512 MB memory). The CPU time of MC(2) is affordable (12 h) even when the number of interactions is increased by a factor of 10. MC(2) and its results are publicly available on http://theory.med.buffalo.edu/MC2.
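The enumerate-and-merge idea behind MC(2) as described above can be illustrated with a stdlib-only sketch: Bron-Kerbosch enumerates maximal cliques, and cliques with sufficient overlap are greedily merged into modules. The overlap threshold and the toy graph below are illustrative assumptions, not MC(2)'s actual parameters.

```python
# Minimal sketch of clique enumeration and merging for module detection.
# This is the generic technique the abstract names, not MC(2) itself.

def maximal_cliques(adj):
    """Bron-Kerbosch enumeration of maximal cliques.

    adj: dict mapping node -> set of neighbouring nodes.
    """
    cliques = []

    def expand(r, p, x):
        if not p and not x:
            cliques.append(r)
            return
        for v in list(p):
            expand(r | {v}, p & adj[v], x & adj[v])
            p.remove(v)
            x.add(v)

    expand(set(), set(adj), set())
    return cliques

def merge_cliques(cliques, overlap=0.5):
    """Greedily merge cliques whose shared fraction exceeds `overlap`."""
    modules = [set(c) for c in cliques]
    merged = True
    while merged:
        merged = False
        for i in range(len(modules)):
            for j in range(i + 1, len(modules)):
                a, b = modules[i], modules[j]
                if len(a & b) / min(len(a), len(b)) > overlap:
                    modules[i] = a | b
                    del modules[j]
                    merged = True
                    break
            if merged:
                break
    return modules

# Toy interaction graph: two 4-cliques, {1,2,3,4} and {3,4,5,6},
# sharing half of their members.
edges = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4),
         (3, 5), (3, 6), (4, 5), (4, 6), (5, 6)]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

# With a 0.4 overlap threshold the two cliques fuse into one module.
modules = merge_cliques(maximal_cliques(adj), overlap=0.4)
```

On real interaction networks a pivoting variant of Bron-Kerbosch and a tuned overlap threshold would be needed; the sketch only shows the structure of the approach.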
Wang, Jianchang; Liu, Libing; Wang, Jinfeng; Sun, Xiaoxia; Yuan, Wanzhe
2017-01-01
Feline herpesvirus 1 (FHV-1), an enveloped dsDNA virus, is one of the major pathogens of feline upper respiratory tract disease (URTD) and ocular disease. Currently, polymerase chain reaction (PCR) remains the gold standard diagnostic tool for FHV-1 infection but is relatively expensive, requires well-equipped laboratories and is not suitable for field tests. Recombinase polymerase amplification (RPA), an isothermal gene amplification technology, has been explored for the molecular diagnosis of infectious diseases. In this study, an exo-RPA assay for FHV-1 detection was developed and validated. Primers targeting specifically the thymidine kinase (TK) gene of FHV-1 were designed. The RPA reaction was performed successfully at 39°C and the results were obtained within 20 min. Using different copy numbers of recombinant plasmid DNA containing the TK gene as template, we showed that the detection limit of exo-RPA was 10² copies of DNA per reaction, the same as that of real-time PCR. The exo-RPA assay did not cross-detect feline panleukopenia virus, feline calicivirus, bovine herpesvirus-1, pseudorabies virus or Chlamydia psittaci, a panel of pathogens important in feline URTD, or other viruses in Alphaherpesvirinae, demonstrating high specificity. The assay was validated by testing 120 nasal and ocular conjunctival swabs from cats, and the results were compared with those obtained with real-time PCR. Both assays gave the same results on the clinical samples. Compared with real-time PCR, the exo-RPA assay uses less-complex equipment that is portable, and the reaction is completed much faster. Additionally, commercial RPA reagents in vacuum-sealed pouches tolerate temperatures up to room temperature for days without loss of activity, making them suitable for shipment and storage for field tests. Taken together, the exo-RPA assay is a simple, fast and cost-effective alternative to real-time PCR, suitable for use in less advanced laboratories and for field detection of FHV-1.
Accurate monotone cubic interpolation
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
1991-01-01
Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema, where accuracy degenerates to second order due to the monotonicity constraint. Algorithms for piecewise cubic interpolants that preserve monotonicity as well as uniform third- and fourth-order accuracy are presented. The gain in accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.
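For reference, the conventional monotone piecewise-cubic scheme that this work improves on can be sketched as follows. This is the standard Fritsch-Carlson-style construction with a Fritsch-Butland slope choice, which flattens slopes at local extrema (the second-order behaviour the paper's median-based limiter avoids); the paper's uniformly third- and fourth-order variant is not reproduced here.

```python
import bisect

# Standard monotone cubic Hermite interpolation (not Huynh's scheme):
# secant slopes are limited at the nodes so the interpolant never
# overshoots monotone data.

def monotone_cubic(xs, ys):
    """Return an interpolant f(x) that preserves monotonicity of the data."""
    n = len(xs)
    h = [xs[i + 1] - xs[i] for i in range(n - 1)]
    d = [(ys[i + 1] - ys[i]) / h[i] for i in range(n - 1)]  # secant slopes
    m = [0.0] * n                                           # node slopes
    m[0], m[-1] = d[0], d[-1]
    for i in range(1, n - 1):
        if d[i - 1] * d[i] <= 0:
            m[i] = 0.0      # local extremum: flatten to stay monotone
        else:
            # harmonic mean of adjacent secants (Fritsch-Butland choice)
            m[i] = 2.0 * d[i - 1] * d[i] / (d[i - 1] + d[i])

    def f(x):
        # locate the interval, then evaluate the cubic Hermite basis
        i = min(max(bisect.bisect_right(xs, x) - 1, 0), n - 2)
        t = (x - xs[i]) / h[i]
        h00 = (1 + 2 * t) * (1 - t) ** 2
        h10 = t * (1 - t) ** 2
        h01 = t * t * (3 - 2 * t)
        h11 = t * t * (t - 1)
        return (h00 * ys[i] + h10 * h[i] * m[i]
                + h01 * ys[i + 1] + h11 * h[i] * m[i + 1])

    return f

# Monotone data: the interpolant must reproduce the nodes and never
# decrease between them.
f = monotone_cubic([0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 4.0, 9.0])
```

The harmonic-mean slope is bounded by twice the smaller adjacent secant, which keeps the Hermite cubic inside the monotonicity region; it is this clamping near extrema that costs an order of accuracy there.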
Highly accurate fast lung CT registration
NASA Astrophysics Data System (ADS)
Rühaak, Jan; Heldmann, Stefan; Kipshagen, Till; Fischer, Bernd
2013-03-01
Lung registration in thoracic CT scans has received much attention in the medical imaging community. Possible applications range from follow-up analysis, motion correction for radiation therapy, monitoring of air flow and pulmonary function to lung elasticity analysis. In a clinical environment, runtime is always a critical issue, ruling out quite a few excellent registration approaches. In this paper, a highly efficient variational lung registration method based on minimizing the normalized gradient fields distance measure with curvature regularization is presented. The method ensures diffeomorphic deformations by an additional volume regularization. Supplemental user knowledge, like a segmentation of the lungs, may be incorporated as well. The accuracy of our method was evaluated on 40 test cases from clinical routine. In the EMPIRE10 lung registration challenge, our scheme ranks third, with respect to various validation criteria, out of 28 algorithms with an average landmark distance of 0.72 mm. The average runtime is about 1:50 min on a standard PC, making it by far the fastest approach of the top-ranking algorithms. Additionally, the ten publicly available DIR-Lab inhale-exhale scan pairs were registered to subvoxel accuracy at computation times of only 20 seconds. Our method thus combines very attractive runtimes with state-of-the-art accuracy in a unique way.
NASA Astrophysics Data System (ADS)
Gliese, U.; Gershman, D. J.; Dorelli, J.; Avanov, L. A.; Barrie, A. C.; Clark, G. B.; Kujawski, J. T.; Mariano, A. J.; Coffey, V. N.; Tucker, C. J.; Chornay, D. J.; Cao, N. T.; Zeuch, M. A.; Dickson, C.; Smith, D. L.; Salo, C.; MacDonald, E.; Kreisler, S.; Jacques, A. D.; Giles, B. L.; Pollock, C. J.
2015-12-01
The Fast Plasma Investigation (FPI) on NASA's Magnetospheric MultiScale (MMS) mission employs 16 Dual Electron Spectrometers and 16 Dual Ion Spectrometers with 4 of each type on each of 4 spacecraft to enable fast (30 ms for electrons; 150 ms for ions) and spatially differentiated measurements of the full 3D particle velocity distributions. This approach presents a new and challenging aspect to the calibration and operation of these instruments on ground and in flight. The response uniformity, the reliability of their calibration and the approach to handling any temporal evolution of these calibrated characteristics all assume enhanced importance in this application, where we attempt to understand the meaning of particle distributions within the ion and electron diffusion regions of magnetically reconnecting plasmas. We have developed a detailed model of the spectrometer detection system, its behavior and its signal, crosstalk and noise sources. Based on this, we have devised a new calibration method that enables accurate and repeatable measurement of micro-channel plate (MCP) gain, signal loss due to variation in MCP gain and crosstalk effects in one single measurement. The foundational concepts of this new calibration method, named threshold scan, are presented. It is shown how this method has been successfully applied both on ground and in-flight to achieve highly accurate and precise calibration of all 64 spectrometers. Calibration parameters that will evolve in flight are determined daily providing a robust characterization of sensor suite performance, as a basis for both in-situ hardware adjustment and data processing to scientific units, throughout mission lifetime. This is shown to be very desirable as the instruments will produce higher quality raw science data that will require smaller post-acquisition data-corrections using results from in-flight derived pitch angle distribution measurements and ground calibration measurements. The practical application
NASA Astrophysics Data System (ADS)
Virtuani, A.; Rigamonti, G.; Friesen, G.; Chianese, D.; Beljean, P.
2012-11-01
Performance testing of highly efficient, highly capacitive c-Si modules with pulsed solar simulators requires particular care. These devices in fact usually require a steady-state solar simulator or pulse durations longer than 100-200 ms in order to avoid measurement artifacts. The aim of this work was to validate an alternative method for the testing of highly capacitive c-Si modules using a 10 ms single pulse solar simulator. Our approach attempts to reconstruct a quasi-steady-state I-V (current-voltage) curve of a highly capacitive device during one single 10 ms flash by applying customized voltage profiles, in place of a conventional V ramp, to the terminals of the device under test. The most promising results were obtained by using V profiles which we name 'dragon-back' (DB) profiles. When compared to the reference I-V measurement (obtained by using a multi-flash approach with approximately 20 flashes), the DB V profile method provides excellent results with differences in the estimation of Pmax (as well as of Isc, Voc and FF) below ±0.5%. For the testing of highly capacitive devices the method is accurate, fast (two flashes, possibly one, required), cost-effective and has proven its validity with several technologies, making it particularly interesting for in-line testing.
Fontaine, Johannes; Schirmer, Barbara; Hörr, Jutta
2002-07-03
Further NIRS calibrations were developed for the accurate and fast prediction of the total contents of methionine, cystine, lysine, threonine, tryptophan, and other essential amino acids, protein, and moisture in the most important cereals and brans or middlings for animal feed production. More than 1100 samples of global origin, collected over five years, were analyzed for amino acids following the Official Methods of the United States and European Union. Detailed data and graphics are given to characterize the obtained calibration equations. NIRS was validated with 98 independent samples for wheat and 78 samples for corn, and compared with amino acid predictions based on linear crude-protein regression equations. With a few exceptions, validation showed that 70-98% of the amino acid variance in the samples could be explained using NIRS. Especially for lysine and methionine, the most limiting amino acids for farm animals, NIRS can predict contents in cereals much better than crude-protein regressions. Through its low cost and high speed of analysis, NIRS enables the amino acid analysis of many samples, improving the accuracy of feed formulation and yielding better quality and lower production costs.
ERIC Educational Resources Information Center
St. Andre, Ralph E.
Simple machines have become a lost point of study in elementary schools as teachers continue to have more material to cover. This manual provides hands-on, cooperative learning activities for grades three through eight concerning the six simple machines: wheel and axle, inclined plane, screw, pulley, wedge, and lever. Most activities can be…
NASA Technical Reports Server (NTRS)
Gliese, U.; Avanov, L. A.; Barrie, A. C.; Kujawski, J. T.; Mariano, A. J.; Tucker, C. J.; Chornay, D. J.; Cao, N. T.; Gershman, D. J.; Dorelli, J. C.; Zeuch, M. A.; Pollock, C. J.; Jacques, A. D.
2015-01-01
The Fast Plasma Investigation (FPI) on NASA's Magnetospheric MultiScale (MMS) mission employs 16 Dual Electron Spectrometers (DESs) and 16 Dual Ion Spectrometers (DISs), with 4 of each type on each of 4 spacecraft, to enable fast (30 ms for electrons; 150 ms for ions) and spatially differentiated measurements of the full 3D particle velocity distributions. This approach presents a new and challenging aspect to the calibration and operation of these instruments on ground and in flight. The response uniformity, the reliability of their calibration and the approach to handling any temporal evolution of these calibrated characteristics all assume enhanced importance in this application, where we attempt to understand the meaning of particle distributions within the ion and electron diffusion regions of magnetically reconnecting plasmas. Traditionally, the micro-channel plate (MCP) based detection systems for electrostatic particle spectrometers have been calibrated using the plateau curve technique. In this, a fixed detection threshold is set. The detection system count rate is then measured as a function of MCP voltage to determine the MCP voltage that ensures the count rate has reached a constant value independent of further variation in the MCP voltage. This is achieved when most of the MCP pulse height distribution (PHD) is located at higher values (larger pulses) than the detection system discrimination threshold. This method is adequate in single-channel detection systems and in multi-channel detection systems with very low crosstalk between channels. However, in dense multi-channel systems, it can be inadequate. Furthermore, it fails to fully describe the behavior of the detection system and individually characterize each of its fundamental parameters. To improve this situation, we have developed a detailed phenomenological description of the detection system, its behavior and its signal, crosstalk and noise sources. Based on this, we have devised a new detection
Uitterdijk, André; Sneep, Stefan; van Duin, Richard W B; Krabbendam-Peters, Ilona; Gorsse-Bakker, Charlotte; Duncker, Dirk J; van der Giessen, Willem J; van Beusekom, Heleen M M
2013-10-01
The objective of this study was to compare heart-specific fatty acid binding protein (hFABP) and high-sensitivity troponin I (hsTnI) via serial measurements to identify early time points to accurately quantify infarct size and no-reflow in a preclinical swine model of ST-elevated myocardial infarction (STEMI). Myocardial necrosis, usually confirmed by hsTnI or TnT, takes several hours of ischemia before plasma levels rise in the absence of reperfusion. We evaluated the fast marker hFABP compared with hsTnI to estimate infarct size and no-reflow upon reperfused (2 h occlusion) and nonreperfused (8 h occlusion) STEMI in swine. In STEMI (n = 4) and STEMI + reperfusion (n = 8) induced in swine, serial blood samples were taken for hFABP and hsTnI and compared with triphenyl tetrazolium chloride and thioflavin-S staining for infarct size and no-reflow at the time of euthanasia. hFABP increased faster than hsTnI upon occlusion (82 ± 29 vs. 180 ± 73 min, P < 0.05) and increased immediately upon reperfusion while hsTnI release was delayed 16 ± 3 min (P < 0.05). Peak hFABP and hsTnI reperfusion values were reached at 30 ± 5 and 139 ± 21 min, respectively (P < 0.05). Infarct size (containing 84 ± 0.6% no-reflow) correlated well with area under the curve for hFABP (r(2) = 0.92) but less for hsTnI (r(2) = 0.53). At 50 and 60 min reperfusion, hFABP correlated best with infarct size (r(2) = 0.94 and 0.93) and no-reflow (r(2) = 0.96 and 0.94) and showed high sensitivity for myocardial necrosis (2.3 ± 0.6 and 0.4 ± 0.6 g). hFABP rises faster and correlates better with infarct size and no-reflow than hsTnI in STEMI + reperfusion when measured early after reperfusion. The highest sensitivity detecting myocardial necrosis, 0.4 ± 0.6 g at 60 min postreperfusion, provides an accurate and early measurement of infarct size and no-reflow.
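The AUC-versus-infarct-size correlation described above can be sketched numerically. All numbers below are synthetic placeholders chosen only to illustrate the computation (trapezoidal area under the release curve and a Pearson r²); they are not the study's data:

```python
import numpy as np

# Hypothetical serial marker samples (time in min, plasma level in ng/mL)
t = np.array([0.0, 15.0, 30.0, 60.0, 120.0, 240.0])
hfabp = np.array([1.0, 40.0, 90.0, 70.0, 30.0, 8.0])
auc = np.sum(0.5 * (hfabp[1:] + hfabp[:-1]) * np.diff(t))  # trapezoidal AUC

# Correlating AUC with infarct mass across animals (synthetic values)
auc_all = np.array([800.0, 1500.0, 2300.0, 3100.0, 4000.0])
infarct_g = np.array([4.1, 7.8, 12.2, 16.0, 20.5])
r2 = np.corrcoef(auc_all, infarct_g)[0, 1] ** 2  # coefficient of determination
```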
Accurate measurement of unsteady state fluid temperature
NASA Astrophysics Data System (ADS)
Jaremkiewicz, Magdalena
2017-03-01
In this paper, two accurate methods for determining transient fluid temperature are presented. Measurements were conducted in boiling water, since its temperature is known. Initially the thermometers are at ambient temperature; they are then immersed immediately into saturated water. The measurements were carried out with two thermometers of different construction but with the same housing outer diameter of 15 mm. One of them is a K-type industrial thermometer that is widely available commercially. The temperature indicated by this thermometer was corrected by treating the thermometer as a first- or second-order inertia device. A new thermometer design was also proposed and used to measure the temperature of boiling water. Its characteristic feature is a cylinder-shaped housing with a sheathed thermocouple located at its center. The fluid temperature was determined from measurements taken at the axis of the solid cylindrical housing using the inverse space-marching method. Measurements of the transient temperature of air flowing through a wind tunnel were also carried out with the same thermometers. The proposed measurement technique provides more accurate results than industrial thermometers combined with a simple temperature correction based on a first- or second-order inertia model. Comparison of the results demonstrated that the new thermometer yields the fluid temperature much faster and with higher accuracy than the industrial thermometer. Accurate measurement of fast-changing fluid temperature is possible thanks to the low-inertia thermometer and the fast space-marching method applied to solve the inverse heat conduction problem.
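The first-order inertia correction mentioned above has a simple closed form, T_fluid ≈ T_meas + τ·dT_meas/dt. A minimal sketch, with an assumed time constant and a synthetic step response (values illustrative, not the paper's):

```python
import numpy as np

def correct_first_order(t, t_meas, tau):
    """Recover fluid temperature from a first-order (single-time-constant)
    thermometer model: T_fluid = T_meas + tau * dT_meas/dt."""
    return t_meas + tau * np.gradient(t_meas, t)

# Synthetic check (assumed values): a step from 20 to 100 deg C seen
# through a sensor with time constant tau = 3 s.
tau = 3.0
t = np.linspace(0.0, 30.0, 3001)
t_meas = 100.0 - (100.0 - 20.0) * np.exp(-t / tau)  # exact first-order response
t_rec = correct_first_order(t, t_meas, tau)         # should recover ~100 throughout
```

The correction is exact for a true first-order sensor; in practice the accuracy is limited by noise amplified through the derivative, which is why the paper's inverse space-marching approach for the second thermometer design is more robust.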
Accurate Evaluation of Quantum Integrals
NASA Technical Reports Server (NTRS)
Galant, D. C.; Goorvitch, D.; Witteborn, Fred C. (Technical Monitor)
1995-01-01
Combining an appropriate finite difference method with Richardson's extrapolation results in a simple, highly accurate numerical method for solving the Schrödinger equation. Importantly, the method provides error estimates, and one can extrapolate expectation values rather than the wavefunctions to obtain highly accurate expectation values. We discuss the eigenvalues and the error growth in repeated Richardson's extrapolation, and show that expectation values calculated on a crude mesh can be extrapolated to high accuracy.
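The scheme can be illustrated on the harmonic oscillator, whose exact ground-state energy is 1/2 in atomic units. The sketch below uses a generic second-order finite-difference discretization with one Richardson step; the mesh sizes and domain are illustrative choices, not the paper's:

```python
import numpy as np

def ground_energy(n):
    """Lowest eigenvalue of -(1/2) u'' + (1/2) x^2 u = E u on [-10, 10],
    second-order central differences with Dirichlet boundary conditions."""
    x, h = np.linspace(-10.0, 10.0, n, retstep=True)
    x = x[1:-1]  # interior points
    main = 1.0 / h ** 2 + 0.5 * x ** 2
    off = np.full(x.size - 1, -0.5 / h ** 2)
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[0]

e_h = ground_energy(201)              # mesh spacing h = 0.1
e_h2 = ground_energy(401)             # mesh spacing h/2 = 0.05
e_rich = (4.0 * e_h2 - e_h) / 3.0     # Richardson: cancels the leading O(h^2) error
```

One extrapolation step turns two second-order results into a fourth-order estimate, which is the mechanism the abstract exploits for both eigenvalues and expectation values.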
Leśniewska, Barbara; Kisielewska, Katarzyna; Wiater, Józefa; Godlewska-Żyłkiewicz, Beata
2016-01-01
A new fast method for the determination of mobile zinc fractions in soil is proposed in this work. The three-stage modified BCR procedure used for fractionation of zinc in soil was accelerated by using ultrasound. The working parameters of the ultrasound probe, the power and the time of sonication, were optimized so that the analyte content in soil extracts obtained by ultrasound-assisted sequential extraction (USE) was consistent with that obtained by the conventional modified Community Bureau of Reference (BCR) procedure. The zinc content of the extracts was determined by flame atomic absorption spectrometry. The developed USE procedure shortened the total extraction time from 48 h to 27 min compared with the conventional modified BCR procedure. The method was fully validated, and the uncertainty budget was evaluated. The trueness and reproducibility of the developed method were confirmed by analysis of the certified reference material of lake sediment BCR-701. The applicability of the procedure for fast, low-cost, and reliable determination of the mobile zinc fraction in soil, which may be useful for assessing anthropogenic impacts on natural resources and for environmental monitoring, was proved by analysis of different types of soil collected from Podlaskie Province (Poland).
Huanca-Mamani, W; Rivera-Cabello, D; Maita-Maita, J
2015-07-17
In this study, we report a modified CTAB-PVP method combined with silicon dioxide (silica) treatment for the extraction of high quality genomic DNA from a single larva or pupa. This method efficiently obtains DNA from small specimens, which is difficult and challenging because of the small amount of starting tissue. Maceration with liquid nitrogen, phenol treatment, and the ethanol precipitation step are eliminated using this methodology. The A260/A280 absorbance ratios of the isolated DNA were approximately 1.8, suggesting that the DNA is pure and can be used for further molecular analysis. The quality of the isolated DNA permits molecular applications and represents a fast, cheap, and effective alternative method for laboratories with low budgets.
Benoist, P.; Carta, M.; Palmiotti, G.; Salvatores, M.
1989-11-01
A method to calculate the effectiveness of the control assembly in a fast neutron reactor is proposed. For each type of heterogeneous assembly (control or follower), a polar parameter, taking into account the assembly absorption and the axial leakage of neutrons inside the assembly, is defined. In a similar way, a bipolar parameter, taking into account the reaction of the assembly to a transverse flux gradient, is also defined. These two parameters, deduced from transport theory, are used to determine the absorption cross section and the diffusion coefficient of an equivalent homogeneous control or follower assembly. These new parameters are introduced in a one-group diffusion code, calculating the reactor as a whole with any number of control and follower assemblies. An approximate generalization to multigroup theory is proposed. Numerical comparisons show that this equivalent diffusion method gives results that are much closer to transport results than those obtained by the classical diffusion theory.
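As a toy illustration of the final step, solving a one-group diffusion problem with homogenized constants, here is a bare-slab sketch; the cross sections and slab width are invented for the example, and the geometry is far simpler than a real fast-reactor model with control and follower assemblies:

```python
import numpy as np

# One-group slab-reactor sketch with illustrative homogenized constants
# (values assumed for the example; they are not taken from the paper).
D, sig_a, nu_sig_f, L = 1.2, 0.030, 0.036, 200.0  # cm and cm^-1 units

n = 400      # mesh intervals
h = L / n
m = n - 1    # interior unknowns, zero-flux boundary conditions

# A phi = (1/k) F phi with A = -D d2/dx2 + sig_a and F = nu_sig_f * I,
# so k_eff = nu_sig_f / (smallest eigenvalue of A).
lap = (np.diag(np.full(m, -2.0)) + np.diag(np.ones(m - 1), 1)
       + np.diag(np.ones(m - 1), -1)) / h ** 2
A = -D * lap + sig_a * np.eye(m)
k_num = nu_sig_f / np.linalg.eigvalsh(A)[0]

# Analytic one-group result with geometric buckling B^2 = (pi/L)^2
k_ana = nu_sig_f / (sig_a + D * (np.pi / L) ** 2)
```

The paper's contribution lies in how D and the absorption cross section of a heterogeneous control assembly are derived from transport theory before a solve of this kind is performed.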
NASA Astrophysics Data System (ADS)
Hofto, Laura; Hofto, Meghan; Cross, Jessica; Cafiero, Mauricio
2007-09-01
Many diseases can be traced to point mutations in the DNA coding for specific enzymes. These point mutations result in the change of one amino acid residue in the enzyme. We have developed a model using simple molecular orbital calculations which can be used to quantitatively determine the change in interaction between the enzyme's active site and necessary ligands upon mutation. We have applied this model to three hydroxylase proteins: phenylalanine hydroxylase, tyrosine hydroxylase, and tryptophan hydroxylase, and we have obtained excellent correlation between our results and observed disease symptoms. Furthermore, we are able to use this agreement as a baseline to screen other mutations which may also cause onset of disease symptoms. Our focus is on systems where the binding is due largely to dispersion, which is much more difficult to model inexpensively than pure electrostatic interactions. Our calculations are run in parallel on a sixteen processor cluster of 64-bit Athlon processors.
Pallapothu, Leela Mohan Kumar; Batta, Neelima; Pigili, Ravi Kumar; Yejella, Rajendra Prasad
2015-02-01
A simple, rapid and sensitive analytical method using liquid chromatography coupled to tandem mass spectrometry (LC-MS/MS) detection with positive-ion electrospray ionization was developed for the determination of dienogest in human K2 EDTA plasma using levonorgestrel-d6 as an internal standard (IS). Dienogest and the IS were extracted from human plasma using simple liquid-liquid extraction. Chromatographic separation was achieved on a Zorbax XDB-Phenyl column (4.6 × 75 mm, 3.5 µm) under isocratic conditions using acetonitrile-5 mM ammonium acetate (70:30, v/v) at a flow rate of 0.60 mL/min. The protonated precursor-to-product ion transitions monitored for dienogest and the IS were at m/z 312.30 → 135.30 and 319.00 → 251.30, respectively. The method was validated over a linearity range of 1.003-200.896 ng/mL, with a total analysis time of 3.0 min per chromatogram. The method showed excellent reproducibility, with intra- and inter-day precision (coefficient of variation) <3.97 and 6.10%, respectively, and accuracy within ±4.0% of nominal values. The validated method was applied to a pharmacokinetic study of human plasma samples generated after administration of a single oral dose of 2.0 mg dienogest tablets to healthy female volunteers, and proved to be highly reliable for the analysis of clinical samples.
Chailapakul, Orawon; Korsrisakul, Sarawadee; Siangproh, Weena; Grudpan, Kate
2008-01-15
This paper reports, for the first time, the fast and simultaneous detection of prominent heavy metals, including lead, cadmium and copper, using microchip CE with electrochemical detection. The direct amperometric detection mode for microchip CE was successfully applied to these heavy metal ions. The influences of separation voltage, detection potential, and the concentration and pH value of the running buffer on the response of the detector were carefully assayed and optimized. The results clearly show that reliable electrophoretic separation and analysis of lead, cadmium, and copper occur in less than 3 min using a MES buffer (pH 7.0, 25 mM) with L-histidine, a 1.2 kV separation voltage and a -0.8 V detection potential. The detection limits for Pb(2+), Cd(2+), and Cu(2+) were 1.74, 0.73 and 0.13 microM (S/N = 3). The R.S.D. of each peak current was <6%, and of the migration times <2%, for prolonged operation. To demonstrate the potential and future role of microchip CE, analytical possibilities and a new route in raw sample analysis were presented. The results obtained allow the proposed microchip CE-ED to act as an alternative approach for metal analysis in foods.
Zeng, Li-Min; Wang, Hao-Yang; Guo, Yin-Long
2010-03-01
A fast, selective, and sensitive GC-MS method has been developed and validated for the determination of boric acid in drinking water by derivatization with triethanolamine. This analytical strategy quantitatively converts the inorganic, nonvolatile boric acid B(OH)3 present in the drinking water to the volatile triethanolamine borate B(OCH2CH2)3N, which facilitates the GC measurement. The SIM mode was applied in the analysis and showed high accuracy, specificity, and reproducibility, as well as effectively reducing the matrix effect. The calibration curve was obtained from 0.01 microg/mL to 10.0 microg/mL with a satisfactory correlation coefficient of 0.9988. The limit of detection for boric acid was 0.04 microg/L. The method was then applied to determine the amount of boric acid in bottled drinking water, and the results are in accordance with the reported concentration values of boric acid. This study offers a perspective on the utility of GC-MS as an alternative quantitative tool for the detection of B(OH)3, and even for the detection of boron in various other samples after digesting the boron compounds to boric acid.
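The calibration-curve step is ordinary linear least squares; a minimal sketch with invented standards and response values (the real method's slope, intercept, and scatter are not reported in the abstract):

```python
import numpy as np

# Hypothetical calibration standards (concentration in microg/mL vs. peak area)
conc = np.array([0.01, 0.05, 0.1, 0.5, 1.0, 5.0, 10.0])
area = 152.0 * conc + 3.0                                   # assumed linear response
area = area * (1.0 + 0.005 * np.sin(np.arange(conc.size)))  # small synthetic scatter

slope, intercept = np.polyfit(conc, area, 1)
r = np.corrcoef(conc, area)[0, 1]  # correlation coefficient of the curve

def quantify(a):
    """Back-calculate concentration from a measured peak area."""
    return (a - intercept) / slope

unknown = quantify(152.0 * 2.0 + 3.0)  # a sample expected near 2.0 microg/mL
```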
Tao, Chenyu; Zhang, Qingde; Feng, Na; Shi, Deshi; Liu, Bang
2016-03-01
The qualitative and quantitative declaration of food ingredients is important to consumers, especially for genetically modified food as it experiences a rapid increase in sales. In this study, we designed an accurate and rapid detection system using colloidal gold immunochromatographic strip assay (GICA) methods to detect genetically modified cow milk. First, we prepared 2 monoclonal antibodies for human α-lactalbumin (α-LA) and measured their antibody titers; the one with the higher titer was used for further experiments. Then, we found the optimal pH value and protein amount of GICA for detection of pure milk samples. The developed strips successfully detected genetically modified cow milk and non-modified cow milk. To determine the sensitivity of GICA, a quantitative ELISA system was used to determine the exact amount of α-LA, and then genetically modified milk was diluted at different rates to test the sensitivity of GICA; the sensitivity was 10 μg/mL. Our results demonstrated that the applied method was effective to detect human α-LA in cow milk.
NASA Astrophysics Data System (ADS)
Yoshida, Yutaka; Yokoyama, Kiyoko; Ishii, Naohiro
It has been reported that the roughly 0.25 Hz frequency component of the heart rate time series (respiratory sinus arrhythmia, RSA) corresponds to the respiratory frequency. In this paper, we propose a method for continuously estimating the respiratory frequency during sleep, in real time, from the number of extreme points of the heart rate time series. The equation underlying the method is very simple, and the method can continuously estimate the frequency from a window of only about 18 beats. To evaluate the accuracy of the proposed method, the RSA frequency was calculated from heart rate time series recorded during supine rest. The minimum error rate, about 13.8%, was observed when the RSA had a time lag of about 11 s. When the RSA frequency time series was estimated during sleep, it varied regularly during non-REM sleep and irregularly during REM sleep. This is consistent with previous reports on respiratory variability during sleep. We therefore consider that the proposed method can be applied to respiratory monitoring systems during sleep.
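The core idea, counting extreme points of the series over a window and converting the count to a frequency (two extrema per cycle), can be sketched as follows; the synthetic series and sampling rate are assumptions for the demonstration, not the paper's data:

```python
import numpy as np

def resp_freq_from_extrema(x, duration_s):
    """Estimate the dominant oscillation frequency of a series from its
    number of local extrema: each full cycle has one maximum and one minimum."""
    d = np.diff(x)
    n_extrema = np.sum(d[:-1] * d[1:] < 0)  # sign changes of the slope
    return n_extrema / (2.0 * duration_s)

# Synthetic RSA-like series: 0.25 Hz modulation sampled at 4 Hz for 40 s
fs, dur, f_true = 4.0, 40.0, 0.25
t = np.arange(0.0, dur, 1.0 / fs)
hr = 60.0 + 3.0 * np.sin(2.0 * np.pi * f_true * t)
f_est = resp_freq_from_extrema(hr, dur)  # should land near 0.25 Hz
```

On noisy real heart-rate series the raw series would first need smoothing, since noise adds spurious extrema and biases the count upward.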
Magiera, Sylwia; Kwietniowska, Ewelina
2016-11-15
In this study, an easy, simple and efficient method for the determination of naringenin enantiomers in fruit juices after salting-out-assisted liquid-liquid extraction (SALLE) and high-performance liquid chromatography (HPLC) with diode-array detection (DAD) was developed. The sample treatment is based on the use of water-miscible acetonitrile as the extractant and acetonitrile phase separation under high-salt conditions. After extraction, juice samples were incubated with hydrochloric acid in order to achieve hydrolysis of naringin to naringenin. The hydrolysis parameters were optimized by using a half-fraction factorial central composite design (CCD). After sample preparation, chromatographic separation was obtained on a Chiralcel® OJ-RH column using the mobile phase consisting of 10 mM aqueous ammonium acetate:methanol:acetonitrile (50:30:20; v/v/v) with detection at 288 nm. The average recovery of the analyzed compounds ranged from 85.6 to 97.1%. The proposed method was satisfactorily used for the determination of naringenin enantiomers in various fruit juice samples.
Accurate van der Waals coefficients from density functional theory
Tao, Jianmin; Perdew, John P.; Ruzsinszky, Adrienn
2012-01-01
The van der Waals interaction is a weak, long-range correlation, arising from quantum electronic charge fluctuations. This interaction affects many properties of materials. A simple and yet accurate estimate of this effect will facilitate computer simulation of complex molecular materials and drug design. Here we develop a fast approach for accurate evaluation of dynamic multipole polarizabilities and van der Waals (vdW) coefficients of all orders from the electron density and static multipole polarizabilities of each atom or other spherical object, without empirical fitting. Our dynamic polarizabilities (dipole, quadrupole, octupole, etc.) are exact in the zero- and high-frequency limits, and exact at all frequencies for a metallic sphere of uniform density. Our theory predicts dynamic multipole polarizabilities in excellent agreement with more expensive many-body methods, and yields therefrom vdW coefficients C6, C8, C10 for atom pairs with a mean absolute relative error of only 3%. PMID:22205765
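The vdW C6 coefficient follows from the Casimir-Polder integral over dynamic polarizabilities at imaginary frequency. A minimal numerical sketch using a one-term Lorentzian model polarizability (the model and its alpha0, w0 values are illustrative stand-ins, not the paper's actual construction), checked against the model's analytic result:

```python
import numpy as np

alpha0, w0 = 4.5, 0.43  # assumed illustrative values (atomic units)

def alpha_iw(w):
    """One-term Lorentzian model polarizability at imaginary frequency."""
    return alpha0 / (1.0 + (w / w0) ** 2)

# Casimir-Polder formula: C6 = (3/pi) * integral_0^inf alpha(iw)^2 dw
w = np.linspace(0.0, 100.0, 1_000_001)
f = alpha_iw(w) ** 2
c6_num = 3.0 / np.pi * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(w))  # trapezoid rule

# For this one-term model the integral is analytic: C6 = (3/4) * alpha0^2 * w0
c6_ana = 0.75 * alpha0 ** 2 * w0
```

The paper's approach replaces the crude one-term model with polarizabilities that are exact in the zero- and high-frequency limits, which is what drives the reported ~3% accuracy.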
Caballo, C; Sicilia, M D; Rubio, S
2014-02-01
A simple, sensitive, rapid and economic method was developed for the quantification of the enantiomers of chiral pesticides such as mecoprop (MCPP) and dichlorprop (DCPP) in soil samples using supramolecular solvent-based microextraction (SUSME) combined with liquid chromatography coupled to mass spectrometry (LC-MS/MS). SUSME has been described for the extraction of chiral pesticides from water, but this is its first application to soil samples. MCPP and DCPP are herbicides widely used in agriculture that have two enantiomeric forms (R- and S-) differing in environmental fate and toxicity. Therefore, it is essential to have analytical methods for monitoring individual DCPP and MCPP enantiomers in environmental samples. MCPP and DCPP were extracted into a supramolecular solvent (SUPRAS) made up of dodecanoic acid aggregates; the extract was dried under a nitrogen stream, the two herbicides were dissolved in acetate buffer, and the aqueous extract was directly injected into the LC-MS/MS system. The recoveries obtained were independent of soil composition and of the age of the herbicide residues. The detection and quantitation limits of the developed method for the determination of R- and S-MCPP and R- and S-DCPP in soils were 0.03 and 0.1 ng g(-1), respectively, and the precision, expressed as relative standard deviation (n = 6), for enantiomer concentrations of 5 and 100 ng g(-1) was in the ranges 4.1-6.1% and 2.9-4.1%. Recoveries for soil samples spiked with enantiomer concentrations within the interval 5-180 ng g(-1) and enantiomeric ratios (ERs) of 1, 3 and 9 ranged between 93 and 104%, with standard deviations of the percent recovery varying between 0.3% and 6.0%. Because the SUPRAS can solubilize analytes through different types of interactions (dispersion, dipole-dipole and hydrogen bonding), it could be used to extract a great variety of pesticides (both polar and non-polar) from soils.
NASA Technical Reports Server (NTRS)
Cezairliyan, Ared
1988-01-01
Design and operation of accurate millisecond- and microsecond-resolution optical pyrometers developed at the National Bureau of Standards during the last two decades are described. Results of tests are presented and estimates of uncertainties in temperature measurements are given. Calibration methods are discussed and examples of applications of fast pyrometry are given. Ongoing research in developing fast multiwavelength and spatial-scanning pyrometers is summarized.
Mareschal, Sylvain; Ruminy, Philippe; Bagacean, Cristina; Marchand, Vinciane; Cornic, Marie; Jais, Jean-Philippe; Figeac, Martin; Picquenot, Jean-Michel; Molina, Thierry Jo; Fest, Thierry; Salles, Gilles; Haioun, Corinne; Leroy, Karen; Tilly, Hervé; Jardin, Fabrice
2015-04-09
Diffuse large B-cell lymphoma, the most common non-Hodgkin lymphoma, is subdivided into germinal center B-cell-like and activated B-cell-like subtypes. Unfortunately, these lymphomas are difficult to differentiate in routine diagnosis, impeding the development of treatments. Patients with these lymphomas can benefit from specific therapies. We therefore developed a simple and rapid classifier based on a reverse transcriptase multiplex ligation-dependent probe amplification assay and 14 gene signatures. Compared with the Affymetrix U133+2 gold standard, all 46 samples (95% CI, 92%-100%) of a validation cohort classified by both techniques were attributed to the expected subtype. Similarly, 93% of the 55 samples (95% CI, 82%-98%) of a second independent series characterized with a mid-throughput gene expression profiling method were classified correctly. Unclassifiable sample proportions reached 13.2% and 13.8% in these cohorts, comparable with the frequency originally reported. The developed assay was also sensitive enough to obtain reliable results from formalin-fixed, paraffin-embedded samples and flexible enough to include prognostic factors such as MYC/BCL2 co-expression. Finally, in a series of 135 patients, both overall (P = 0.01) and progression-free (P = 0.004) survival differences between the two subtypes were confirmed. Because the multiplex ligation-dependent probe amplification method is already in use and requires only common instruments and reagents, it could easily be applied to clinical trial patient stratification to help in treatment decisions.
Choi, Sol Ji; Jung, Mun Yhung
2017-03-02
We have developed a simple and fast sample preparation technique in combination with gas chromatography-tandem mass spectrometry (GC-MS/MS) for the quantification of 2-methylimidazole (2-MeI) and 4-methylimidazole (4-MeI) in colas and dark beers. The conventional sample preparation technique for GC-MS requires laborious and time-consuming steps consisting of sample concentration, pH adjustment, ion-pair extraction, centrifugation, back-extraction, centrifugation, derivatization, and extraction. Our sample preparation technique consists of only 2 steps (in situ derivatization and extraction) and requires less than 3 min. This method provided high linearity, low limits of detection and quantification, high recovery, and high intra- and interday repeatability. It was found that the internal standard method with a diluted stable isotope (4-MeI-d6) and 2-ethylimidazole (2-EI) could not correctly compensate for the matrix effects. Thus, the standard addition technique was used for the quantification of 2- and 4-MeI. The established method was successfully applied to colas and dark beers for the determination of 2-MeI and 4-MeI. The 4-MeI contents in colas and dark beers ranged from 8 to 319 μg/L and from trace levels to 417 μg/L, respectively. Small quantities (0 to 8 μg/L) of 2-MeI were found only in dark beers. The 4-MeI contents (22 μg/L) in colas obtained from fast food restaurants were significantly lower than those (177 μg/L) in canned or bottled colas.
Simple, fast, bright, and stable light sources.
Tordera, Daniel; Meier, Sebastian; Lenes, Martijn; Costa, Rubén D; Ortí, Enrique; Sarfert, Wiebke; Bolink, Henk J
2012-02-14
In this work we show that solution-processed light-emitting electrochemical cells (LECs) based on only an ionic iridium complex and a small amount of ionic liquid exhibit exceptionally good performances when applying a pulsed current: sub-second turn-on times and almost constant high luminances (>600 cd m(-2) ) and power efficiencies over the first 600 h. This demonstrates the potential of LECs for applications in solid-state signage and lighting.
NASA Astrophysics Data System (ADS)
Chipot, Christophe; Rozanska, Xavier; Dixit, Surjit B.
2005-11-01
The usefulness of free-energy calculations in non-academic environments, in general, and in the pharmaceutical industry, in particular, is a long-debated issue, often considered from the angle of cost/performance criteria. In the context of the rational drug design of low-affinity, non-peptide inhibitors of the SH2 domain of the pp60src tyrosine kinase, the continuing difficulties encountered in attempts to obtain accurate free-energy estimates are addressed. Free-energy calculations can provide a convincing answer, assuming that two key requirements are fulfilled: (i) thorough sampling of the configurational space is necessary to minimize the statistical error, raising the question of to what extent the computational effort can be reduced without jeopardizing the precision of the free-energy calculation; (ii) the sensitivity of binding free energies to the parameters utilized imposes an appropriate parametrization of the potential energy function, especially for non-peptide molecules, which are usually poorly described by multipurpose macromolecular force fields. Employing the free-energy perturbation method, accurate ranking, within ±0.7 kcal/mol, is obtained for four non-peptide mimics of a sequence recognized by the pp60src SH2 domain.
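A minimal sketch of the free-energy perturbation (Zwanzig) estimator on a toy system where the exact answer is known: perturbing one harmonic well into another. The force constants and sample size are arbitrary choices for the demonstration, far simpler than the protein-ligand systems discussed above:

```python
import numpy as np

rng = np.random.default_rng(7)
kT = 1.0
k0, k1 = 1.0, 1.2   # force constants of reference and target wells (arbitrary)

# Exact sampling of the reference ensemble U0(x) = k0 x^2 / 2 (a Gaussian)
x = rng.normal(0.0, np.sqrt(kT / k0), size=200_000)
dU = 0.5 * (k1 - k0) * x ** 2   # U1 - U0 evaluated on reference configurations

# Zwanzig free-energy perturbation estimator:
# dF = -kT ln < exp(-dU / kT) >_0
dF_fep = -kT * np.log(np.mean(np.exp(-dU / kT)))

# Analytic result for harmonic wells: dF = (kT/2) ln(k1/k0)
dF_exact = 0.5 * kT * np.log(k1 / k0)
```

The estimator converges well here because the two ensembles overlap strongly; the sampling difficulty stressed in the abstract arises precisely when that overlap is poor.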
Sapozhnikova, Yelena; Simons, Tawana; Lehotay, Steven J
2015-05-13
A simple, fast, and cost-effective sample preparation method, previously developed and validated for the analysis of organic contaminants in fish using low-pressure gas chromatography-tandem mass spectrometry (LPGC-MS/MS), was evaluated for the analysis of polybrominated diphenyl ethers (PBDEs) and dichlorodiphenyltrichloroethane (DDT) pesticides using enzyme-linked immunosorbent assay (ELISA). The sample preparation technique was based on the quick, easy, cheap, rugged, effective, and safe (QuEChERS) approach with filter-vial dispersive solid phase extraction (d-SPE). Incurred PBDEs and DDTs were analyzed in three types of fish with 3-10% lipid content: Pacific croaker, salmon, and National Institute of Standards and Technology (NIST) Standard Reference Material 1947 (Lake Michigan fish tissue). LPGC-MS/MS and ELISA results were in agreement: 108-111 and 65-82% accuracy ELISA versus LPGC-MS/MS results for PBDEs and DDTs, respectively. Similar detection limits were achieved for ELISA and LPGC-MS/MS. Matrix effects (MEs) were significant (e.g., -60%) for PBDE measurement in ELISA, but not a factor in the case of DDT pesticides. This study demonstrated that the sample preparation method can be adopted for semiquantitative screening analysis of fish samples by commercial kits for PBDEs and DDTs.
Gleisner, Heike; Einax, Jürgen W; Morés, Silvane; Welz, Bernhard; Carasek, Eduardo
2011-04-05
A fast and reliable method has been developed for the determination of total and soluble fluorine in toothpaste, important quality control parameters in dentifrices. The method is based on the molecular absorption of gallium monofluoride, GaF, using a commercially available high-resolution continuum-source atomic absorption spectrometer. Transversely heated platform tubes with zirconium as permanent chemical modifier were used throughout. Before each sample injection, a palladium and zirconium modifier solution and a gallium reagent were deposited onto the graphite platform and thermally pretreated to transform them into their active forms. The samples were only diluted and introduced directly into the graphite tube together with additional gallium reagent. Under these conditions the fluoride was stable up to a pyrolysis temperature of 550 °C, and the optimum vaporization (molecule formation) temperature was 1550 °C. The GaF molecular absorption was measured at 211.248 nm, and the limits of detection and quantification were 5.2 pg and 17 pg, respectively, corresponding to a limit of quantification of about 30 μg g(-1) (ppm) F in the original toothpaste. The proposed method was used for the determination of the total and soluble fluorine content in toothpaste samples from different manufacturers. The samples contained different ionic fluoride species and sodium monofluorophosphate (MFP) with covalently bonded fluorine. The results for total fluorine were compared with those obtained with a modified conventional headspace gas chromatographic procedure. The accuracy and precision of the two procedures were comparable, but the proposed procedure was much less labor-intensive and about five times faster than the latter.
ERIC Educational Resources Information Center
Korn, Abe
1994-01-01
Presents an activity that enables students to answer for themselves the question of how fast a body must travel before the nonrelativistic expression must be replaced with the correct relativistic expression by deciding on the accuracy required in describing the kinetic energy of a body. (ZWH)
Accurate compressed look up table method for CGH in 3D holographic display.
Gao, Chuan; Liu, Juan; Li, Xin; Xue, Gaolei; Jia, Jia; Wang, Yongtian
2015-12-28
Computer generated holograms (CGHs) should be obtained with both high accuracy and high speed for 3D holographic display, yet most research focuses on speed alone. In this paper, a simple and effective computation method for CGH is proposed based on Fresnel diffraction theory and a look up table. Numerical simulations and optical experiments are performed to demonstrate its feasibility. The proposed method obtains more accurate reconstructed images with lower memory usage than the split look up table and compressed look up table methods, without sacrificing computational speed in hologram generation, so it is called the accurate compressed look up table (AC-LUT) method. It is believed that AC-LUT is an effective method for calculating the CGH of 3D objects for real-time 3D holographic display, where huge amounts of data must be processed, and it could provide fast and accurate digital transmission in various dynamic optical fields in the future.
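The Fresnel-diffraction look-up-table idea at the heart of such methods can be sketched in a few lines. The following is a hedged toy illustration, not the AC-LUT algorithm itself: all parameter values (wavelength, pitch, depth) and names are our assumptions. It exploits the separability of the Fresnel phase factor, so one 1-D complex table serves a whole depth plane.

```python
import numpy as np

# Illustrative (assumed) optical parameters, not taken from the paper.
wavelength = 532e-9      # m, green laser
pitch = 8e-6             # m, hologram pixel pitch
N = 256                  # hologram is N x N pixels
z = 0.1                  # m, object plane depth

# Under the Fresnel approximation the 2-D phase factor of a point source
# separates into a product of two 1-D factors:
#   exp(i*pi*((x-x0)^2 + (y-y0)^2) / (lambda*z)) = T(x-x0) * T(y-y0),
#   with T(u) = exp(i*pi*u^2 / (lambda*z)),
# so a single 1-D complex table over all pixel offsets suffices per plane.
offsets = np.arange(-N + 1, N) * pitch
T = np.exp(1j * np.pi * offsets**2 / (wavelength * z))

def point_hologram(ix0, iy0, amplitude=1.0):
    """Complex fringe pattern of one object point at pixel (ix0, iy0)."""
    tx = T[(np.arange(N) - ix0) + N - 1]   # 1-D row factor from the LUT
    ty = T[(np.arange(N) - iy0) + N - 1]   # 1-D column factor
    return amplitude * np.outer(ty, tx)    # outer product rebuilds the 2-D fringe

# A hologram of several points is the sum of their fringe patterns.
H = point_hologram(64, 64) + point_hologram(180, 100, 0.5)
```

Compressed-LUT methods build on exactly this separability to trade a large 2-D table for small 1-D ones.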
Simple scale interpolator facilitates reading of graphs
NASA Technical Reports Server (NTRS)
Fetterman, D. E., Jr.
1965-01-01
Simple transparent overlay with interpolation scale facilitates accurate, rapid reading of graph coordinate points. This device can be used for enlarging drawings and locating points on perspective drawings.
Fast support vector machines for continuous data.
Kramer, Kurt A; Hall, Lawrence O; Goldgof, Dmitry B; Remsen, Andrew; Luo, Tong
2009-08-01
Support vector machines (SVMs) can be trained to be very accurate classifiers and have been used in many applications. However, the training time and, to a lesser extent, prediction time of SVMs on very large data sets can be very long. This paper presents a fast compression method to scale up SVMs to large data sets. A simple bit-reduction method is applied to reduce the cardinality of the data by weighting representative examples. We then develop SVMs trained on the weighted data. Experiments indicate that bit-reduction SVM produces a significant reduction in the time required for both training and prediction with minimum loss in accuracy. It is also shown to typically be more accurate than random sampling when the data are not overcompressed.
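The bit-reduction idea can be sketched compactly. This is a hedged illustration under our own simplifying assumptions (min-max scaling, cell means as representatives), not the authors' exact procedure; in practice the grouping would be done per class so labels stay consistent.

```python
import numpy as np

def bit_reduce(X, bits=2):
    """Compress a dataset by quantizing each feature to `bits` bits,
    merging examples that fall in the same cell, and weighting each
    representative (the cell mean) by the cell's example count."""
    # Scale features to [0, 1), then keep only the top `bits` bits.
    lo, hi = X.min(axis=0), X.max(axis=0)
    scaled = (X - lo) / np.where(hi > lo, hi - lo, 1.0)
    cells = np.floor(scaled * (2 ** bits)).clip(0, 2 ** bits - 1).astype(int)

    # Group rows sharing a quantized cell.
    groups = {}
    for row, cell in zip(X, map(tuple, cells)):
        groups.setdefault(cell, []).append(row)

    centers = np.array([np.mean(rows, axis=0) for rows in groups.values()])
    counts = np.array([len(rows) for rows in groups.values()], dtype=float)
    return centers, counts
```

The weighted representatives would then be fed to an SVM trainer that accepts per-example weights (e.g. a `sample_weight` argument), so the compressed set stands in for the full data.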
ERIC Educational Resources Information Center
Coy, Mary
2008-01-01
With standardized English Language Arts exams on the horizon, the author thought a game of Antonyms would provide not only a quick language arts activity for her sixth graders, but also a nice segue to an art lesson in contrast. In this article, she describes a project, a simple saucer on a pedestal base, which required students to demonstrate…
Fast evaluation of polarizable forces.
Wang, Wei; Skeel, Robert D
2005-10-22
Polarizability is considered to be the single most significant development in the next generation of force fields for biomolecular simulations. However, the self-consistent computation of induced atomic dipoles in a polarizable force field is expensive due to the cost of solving a large dense linear system at each step of a simulation. This article introduces methods that reduce the cost of computing the electrostatic energy and force of a polarizable model from about 7.5 times the cost of computing those of a nonpolarizable model to less than twice the cost. This is probably sufficient for the routine use of polarizable forces in biomolecular simulations. The reduction in computing time is achieved by an efficient implementation of the particle-mesh Ewald method, an accurate and robust predictor based on least-squares fitting, and non-stationary iterative methods whose fast convergence is accelerated by a simple preconditioner. Furthermore, with these methods, the self-consistent approach with a larger timestep is shown to be faster than the extended Lagrangian approach. The use of dipole moments from previous timesteps to calculate an accurate initial guess for iterative methods leads to an energy drift, which can be made acceptably small. The use of a zero initial guess does not lead to perceptible energy drift if a reasonably strict convergence criterion for the iteration is imposed.
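The self-consistent dipole problem described above can be written as a linear system and solved iteratively. The following is a minimal sketch under toy assumptions: the coupling matrix is random rather than a physical dipole-dipole tensor, the polarizability is a single assumed scalar, and only a plain Jacobi-preconditioned iteration is shown (the paper's methods add a least-squares predictor and faster non-stationary iterations).

```python
import numpy as np

# mu = alpha * (E0 + T @ mu) rearranged as A mu = b, with A = I - alpha*T.
rng = np.random.default_rng(0)
n = 30                                   # number of polarizable sites (toy)
alpha = 0.05                             # isotropic polarizability (assumed)
T = rng.normal(size=(n, n))
T = 0.5 * (T + T.T)                      # symmetric toy coupling matrix
np.fill_diagonal(T, 0.0)
E0 = rng.normal(size=n)                  # permanent field at each site

A = np.eye(n) - alpha * T
b = alpha * E0

def solve_dipoles(A, b, tol=1e-10, max_iter=200):
    """Fixed-point iteration with a diagonal (Jacobi) preconditioner."""
    d = np.diag(A)                       # diagonal preconditioner
    mu = b / d                           # cheap initial guess
    for _ in range(max_iter):
        r = b - A @ mu                   # residual of the linear system
        mu = mu + r / d                  # preconditioned update
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
    return mu

mu = solve_dipoles(A, b)
```

Because the iteration avoids forming or factorizing the dense matrix, each step costs one matrix-vector product, which is the property that particle-mesh Ewald implementations exploit at scale.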
Gelman, Hannah; Gruebele, Martin
2014-01-01
Fast folding proteins have been a major focus of computational and experimental study because they are accessible to both techniques: they are small and fast enough to be reasonably simulated with current computational power, but have dynamics slow enough to be observed with specially developed experimental techniques. This coupled study of fast folding proteins has provided insight into the mechanisms which allow some proteins to find their native conformation in well under 1 ms and has uncovered examples of theoretically predicted phenomena such as downhill folding. The study of fast folders also informs our understanding of even “slow” folding processes: fast folders are small, relatively simple protein domains and the principles that govern their folding also govern the folding of more complex systems. This review summarizes the major theoretical and experimental techniques used to study fast folding proteins and provides an overview of the major findings of fast folding research. Finally, we examine the themes that have emerged from studying fast folders and briefly summarize their application to protein folding in general as well as some work that is left to do. PMID:24641816
Fast and accurate automated measurements in digitized stereophotogrammetric radiographs.
Vrooman, H A; Valstar, E R; Brand, G J; Admiraal, D R; Rozing, P M; Reiber, J H
1998-05-01
Until recently, Roentgen Stereophotogrammetric Analysis (RSA) required the manual definition of all markers using a high-resolution measurement table. To automate this tedious and time-consuming process and to eliminate observer variabilities, an analytical software package has been developed and validated for the detection, identification, and matching of markers in RSA radiographs. The digital analysis procedure consisted of the following steps: (1) the detection of markers using a variant of the Hough circle-finder technique; (2) the identification and labeling of the detected markers; (3) the reconstruction of the three-dimensional position of the bone markers and the prosthetic markers; and (4) the computation of micromotion. To assess the influence of film digitization, the measurements obtained from nine phantom radiographs using two different film scanners were compared with the results obtained by manual processing. All markers in the phantom radiographs were automatically detected and correctly labeled. The best results were obtained with a Vidar VXR-12 CCD scanner, for which the measurement errors were comparable to the errors associated with the manual approach. To assess the in vivo reproducibility, 30 patient radiographs were analyzed twice with the manual as well as with the automated procedure. Approximately 85% of all calibration markers and bone markers were automatically detected and correctly matched. The calibration errors and the rigid-body errors show that the accuracy of the automated procedure is comparable to the accuracy of the manual procedure. The rigid-body errors had comparable mean values for both techniques: 0.05 mm for the tibia and 0.06 mm for the prosthesis. The reproducibility of the automated procedure proved to be slightly better than that of the manual procedure.
The maximum errors in the computed translation and rotation of the tibial component were 0.11 mm and 0.24°, compared to 0.13 mm and 0.27° for the manual RSA procedure. The total processing time is less than 10 min per radiograph, including interactive corrections, compared to approximately 1 h for the manual approach. In conclusion, a new and widely applicable, computer-assisted technique has become available to detect, identify, and match markers in RSA radiographs and to assess the micromotion of endoprostheses. This new technique will be used in our clinic for our hip, knee, and elbow studies.
Fast and Accurate Estimates of Divergence Times from Big Data.
Mello, Beatriz; Tao, Qiqing; Tamura, Koichiro; Kumar, Sudhir
2017-01-01
Ongoing advances in sequencing technology have led to an explosive expansion in the molecular data available for building increasingly larger and more comprehensive timetrees. However, Bayesian relaxed-clock approaches frequently used to infer these timetrees impose a large computational burden and discourage critical assessment of the robustness of inferred times to model assumptions, influence of calibrations, and selection of optimal data subsets. We analyzed eight large, recently published, empirical datasets to compare time estimates produced by RelTime (a non-Bayesian method) with those reported by using Bayesian approaches. We find that RelTime estimates are very similar to Bayesian approaches, yet RelTime requires orders of magnitude less computational time. This means that the use of RelTime will enable greater rigor in molecular dating, because faster computational speeds encourage more extensive testing of the robustness of inferred timetrees to prior assumptions (models and calibrations) and data subsets. Thus, RelTime provides a reliable and computationally thrifty approach for dating the tree of life using large-scale molecular datasets.
Gaussianization for fast and accurate inference from cosmological data
NASA Astrophysics Data System (ADS)
Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.
2016-06-01
We present a method to transform multivariate unimodal non-Gaussian posterior probability densities into approximately Gaussian ones via non-linear mappings, such as Box-Cox transformations and generalizations thereof. This permits an analytical reconstruction of the posterior from a point sample, like a Markov chain, and simplifies the subsequent joint analysis with other experiments. This way, a multivariate posterior density can be reported efficiently, by compressing the information contained in Markov Chain Monte Carlo samples. Further, the model evidence integral (i.e. the marginal likelihood) can be computed analytically. This method is analogous to the search for normal parameters in the cosmic microwave background, but is more general. The search for the optimally Gaussianizing transformation is performed computationally through a maximum-likelihood formalism; its quality can be judged by how well the credible regions of the posterior are reproduced. We demonstrate that our method outperforms kernel density estimates in this objective. Further, we select marginal posterior samples from Planck data with several distinct strongly non-Gaussian features, and verify the reproduction of the marginal contours. To demonstrate evidence computation, we Gaussianize the joint distribution of data from weak lensing and baryon acoustic oscillations, for different cosmological models, and find a preference for flat Λ cold dark matter. Comparing to values computed with the Savage-Dickey density ratio, and Population Monte Carlo, we find good agreement of our method within the spread of the other two.
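A hedged one-dimensional illustration of the Gaussianization step: SciPy's `boxcox` already finds the maximum-likelihood Box-Cox parameter, which is the 1-D analogue of the transformation search described above (the paper works with multivariate generalizations). The "posterior" here is a synthetic log-normal sample, chosen because the log map Gaussianizes it exactly.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
samples = rng.lognormal(mean=0.0, sigma=0.7, size=20000)  # skewed toy "posterior"

# Maximum-likelihood estimate of the Box-Cox parameter, plus the transform.
transformed, lam = stats.boxcox(samples)

# A log-normal is exactly Gaussianized by the log map (lambda -> 0),
# so the sample skewness should drop to near zero after the transform.
skew_before = stats.skew(samples)
skew_after = stats.skew(transformed)
```

The fitted parameter and the residual skewness give exactly the kind of quality check the abstract describes: how close to Gaussian the mapped density really is.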
Fast and accurate automatic structure prediction with HHpred.
Hildebrand, Andrea; Remmert, Michael; Biegert, Andreas; Söding, Johannes
2009-01-01
Automated protein structure prediction is becoming a mainstream tool for biological research. This has been fueled by steady improvements of publicly available automated servers over the last decade, in particular their ability to build good homology models for an increasing number of targets by reliably detecting and aligning more and more remotely homologous templates. Here, we describe the three fully automated versions of the HHpred server that participated in the community-wide blind protein structure prediction competition CASP8. What makes HHpred unique is the combination of usability, short response times (typically under 15 min) and a model accuracy that is competitive with those of the best servers in CASP8.
Fast and accurate database searches with MS-GF+Percolator.
Granholm, Viktor; Kim, Sangtae; Navarro, José C F; Sjölund, Erik; Smith, Richard D; Käll, Lukas
2014-02-07
One can interpret fragmentation spectra stemming from peptides in mass-spectrometry-based proteomics experiments using so-called database search engines. Frequently, one also runs post-processors such as Percolator to assess the confidence, infer unique peptides, and increase the number of identifications. A recent search engine, MS-GF+, has shown promising results, due to a new and efficient scoring algorithm. However, MS-GF+ provides few statistical estimates about the peptide-spectrum matches, hence limiting the biological interpretation. Here, we enabled Percolator processing for MS-GF+ output and observed an increased number of identified peptides for a wide variety of data sets. In addition, Percolator directly reports p values and false discovery rate estimates, such as q values and posterior error probabilities, for peptide-spectrum matches, peptides, and proteins, functions that are useful for the whole proteomics community.
Fast and Accurate Learning When Making Discrete Numerical Estimates
Sanborn, Adam N.; Beierholm, Ulrik R.
2016-01-01
Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates. PMID:27070155
Fast and Accurate Support Vector Machines on Large Scale Systems
Vishnu, Abhinav; Narasimhan, Jayenthi; Holder, Larry; Kerbyson, Darren J.; Hoisie, Adolfy
2015-09-08
Support Vector Machines (SVM) is a supervised Machine Learning and Data Mining (MLDM) algorithm, which has become ubiquitous largely due to its high accuracy and obliviousness to dimensionality. The objective of SVM is to find an optimal boundary, also known as a hyperplane, which separates the samples (examples in a dataset) of different classes by a maximum margin. Usually, very few samples contribute to the definition of the boundary. However, existing parallel algorithms use the entire dataset for finding the boundary, which is sub-optimal for performance reasons. In this paper, we propose a novel distributed memory algorithm to eliminate the samples which do not contribute to the boundary definition in SVM. We propose several heuristics, which range from early (aggressive) to late (conservative) elimination of the samples, such that the overall time for generating the boundary is reduced considerably. In a few cases, a sample may be eliminated (shrunk) pre-emptively, potentially resulting in an incorrect boundary. We propose a scalable approach to synchronize the necessary data structures such that the proposed algorithm maintains its accuracy. We consider the necessary trade-offs of single/multiple synchronization using in-depth time-space complexity analysis. We implement the proposed algorithm using MPI and compare it with libsvm, the de facto sequential SVM software, which we enhance with OpenMP for multi-core/many-core parallelism. Our proposed approach shows excellent efficiency using up to 4096 processes on several large datasets such as the UCI HIGGS Boson dataset and the Offending URL dataset.
Accurate Finite Difference Algorithms
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1996-01-01
Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
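As a hedged, textbook-level illustration of what a high order of accuracy buys (this is the standard fourth-order central stencil, not one of the paper's specific algorithm families), halving the grid spacing should cut the error of a fourth-order scheme by roughly 2^4 = 16:

```python
import numpy as np

def d1_fourth_order(f, h):
    """Fourth-order accurate df/dx on a uniform grid (interior points):
    f'(x) ~ (f(x-2h) - 8 f(x-h) + 8 f(x+h) - f(x+2h)) / (12 h)."""
    return (f[:-4] - 8 * f[1:-3] + 8 * f[3:-1] - f[4:]) / (12 * h)

# Error on sin(x) at two resolutions; the ratio should be close to 16.
x1 = np.linspace(0, 2 * np.pi, 101); h1 = x1[1] - x1[0]
x2 = np.linspace(0, 2 * np.pi, 201); h2 = x2[1] - x2[0]
e1 = np.abs(d1_fourth_order(np.sin(x1), h1) - np.cos(x1[2:-2])).max()
e2 = np.abs(d1_fourth_order(np.sin(x2), h2) - np.cos(x2[2:-2])).max()
```

Eleventh-order schemes like those in the paper push this scaling much further, which is why so few points per wavelength suffice over very long propagation times.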
Simple Waveforms, Simply Described
NASA Technical Reports Server (NTRS)
Baker, John G.
2008-01-01
Since the first Lazarus Project calculations, it has been frequently noted that binary black hole merger waveforms are 'simple.' In this talk we examine some of the simple features of coalescence and merger waveforms from a variety of binary configurations. We suggest an interpretation of the waveforms in terms of an implicit rotating source. This allows a coherent description of both the inspiral waveforms, derivable from post-Newtonian (PN) calculations, and the numerically determined merger-ringdown. We focus particularly on similarities in the features of various multipolar waveform components generated by various systems. The late-time phase evolution of most of these waveform components is accurately described with a simple analytic fit. We also discuss apparent relationships among phase and amplitude evolution. Taken together with PN information, the features we describe can provide an approximate analytic description of full coalescence waveforms, complementary to other analytic waveform approaches.
Accurate quantum chemical calculations
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.
1989-01-01
An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.
NASA Astrophysics Data System (ADS)
Esposito, S.; Pisanti, O.
The following sections are included: * Elementary Considerations * The Integral Equation to the Neutron Distribution * The Critical Size for a Fast Reactor * Supercritical Reactors * Problems and Exercises
NASA Technical Reports Server (NTRS)
Martin, E. D.; Lomax, H.
1977-01-01
Revised and extended versions of a fast, direct (noniterative) numerical Cauchy-Riemann solver are presented for solving finite difference approximations of first order systems of partial differential equations. Although the difference operators treated are linear and elliptic, one significant application of these extended direct Cauchy-Riemann solvers is in the fast, semidirect (iterative) solution of fluid dynamic problems governed by the nonlinear mixed elliptic-hyperbolic equations of transonic flow. Different versions of the algorithms are derived and the corresponding FORTRAN computer programs for a simple example problem are described and listed. The algorithms are demonstrated to be efficient and accurate.
BIOACCESSIBILITY TESTS ACCURATELY ESTIMATE ...
Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from five Pb-contaminated Superfund sites had relative bioavailabilities from 33%-63%, with a mean of about 50%. Treatment of two of the soils with P significantly reduced the bioavailability of Pb. The bioaccessibility of the Pb in the test soils was then measured in six in vitro tests and regressed on bioavailability. They were: the “Relative Bioavailability Leaching Procedure” (RBALP) at pH 1.5, the same test conducted at pH 2.5, the “Ohio State University In vitro Gastrointestinal” method (OSU IVG), the “Urban Soil Bioaccessible Lead Test”, the modified “Physiologically Based Extraction Test” and the “Waterfowl Physiologically Based Extraction Test.” All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the RBALP pH 2.5 and OSU IVG tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%).
Accurate spectral color measurements
NASA Astrophysics Data System (ADS)
Hiltunen, Jouni; Jaeaeskelaeinen, Timo; Parkkinen, Jussi P. S.
1999-08-01
Surface color measurement is of importance in a very wide range of industrial applications including paint, paper, printing, photography, textiles, plastics and so on. For demanding color measurements, a spectral approach is often needed. One can measure a color spectrum with a spectrophotometer using calibrated standard samples as a reference. Because it is impossible to define absolute color values of a sample, we always work with approximations. The human eye can perceive color differences as small as 0.5 CIELAB units and thus distinguish millions of colors. This 0.5 unit difference should be the goal for precise color measurements. This limit is not a problem if we only want to measure the color difference between two samples, but if we also want to know exact color coordinate values at the same time, accuracy problems arise. The values from two instruments can be astonishingly different. The accuracy of the instrument used in color measurement may depend on various errors such as photometric non-linearity, wavelength error, integrating sphere dark level error, and integrating sphere error in both specular included and specular excluded modes. Thus correction formulas should be used to get more accurate results. Another question is how many channels, i.e. wavelengths, we use to measure a spectrum. It is obvious that the sampling interval should be short to get more precise results. Furthermore, the result we get is always a compromise of measuring time, conditions and cost. Sometimes we have to use a portable system, or the shape and size of the samples makes it impossible to use sensitive equipment. In this study a small set of calibrated color tiles measured with the Perkin Elmer Lambda 18 and the Minolta CM-2002 spectrophotometers are compared. In the paper we explain the typical error sources of spectral color measurements, and show which accuracy demands a good colorimeter should meet.
Fast determination of bioactive compounds from Lycopersicon esculentum Mill. leaves.
Taveira, Marcos; Ferreres, Federico; Gil-Izquierdo, Angel; Oliveira, Luísa; Valentão, Patrícia; Andrade, Paula B
2012-11-15
Lycopersicon esculentum leaves, usually considered as a by-product of tomato production, present several bioactive compounds of interest for industries like food, pharmaceutical and cosmetics. Nevertheless, before industrial application, suitable methods to identify and quantify those metabolites should be developed. In this study agitation with aqueous methanol was used for phenolic compounds extraction. Solid-phase extraction (SPE) was performed as the purification step before alkaloids analysis. Among the SPE sorbents tested, sulphonic acid bonded silica with H(+) counterion (SCX) proved to be the most efficient one for removing interfering components. Fifteen phenolics and four steroidic alkaloids were identified in 35 and 20 min analysis, respectively. The optimised methods were validated, proving to be accurate, fast, simple and sensitive. Thus, these methods represent an easy and fast analytical approach, using equipment available in almost any laboratory, which renders them appropriate for routine analysis.
Pendulum: Rich physics from a simple system
Nelson, R.A.; Olsson, M.G.
1986-02-01
We provide a comprehensive discussion of the corrections needed to accurately measure the acceleration of gravity using a plane pendulum. A simple laboratory experiment is described in which g was determined to four significant figures of accuracy.
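A minimal sketch of the kind of correction involved, assuming only the standard finite-amplitude series for the plane pendulum (the paper treats many further corrections). The exact period involves the complete elliptic integral K, and inverting the naive small-angle formula at a 10° amplitude already misses g at the fourth significant figure:

```python
import numpy as np
from scipy.special import ellipk

# Exact plane-pendulum period: T = 4*sqrt(L/g) * K(m), m = sin^2(theta0/2)
# (scipy's ellipk takes the parameter m = k^2, not the modulus k).
g_true, L, theta0 = 9.80665, 1.0, np.radians(10.0)
T_exact = 4.0 * np.sqrt(L / g_true) * ellipk(np.sin(theta0 / 2) ** 2)

# Naive small-angle inversion g = 4*pi^2*L/T^2 ...
g_naive = 4 * np.pi**2 * L / T_exact**2

# ... versus inversion with the standard amplitude series
#   T ~ 2*pi*sqrt(L/g) * (1 + theta0^2/16 + 11*theta0^4/3072 + ...).
corr = 1 + theta0**2 / 16 + 11 * theta0**4 / 3072
g_corr = 4 * np.pi**2 * L * corr**2 / T_exact**2
```

The corrected inversion recovers g to well beyond four significant figures, while the naive one is off by several parts in a thousand, which is exactly why the amplitude correction matters in the described experiment.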
Accurate tracking of high dynamic vehicles with translated GPS
NASA Astrophysics Data System (ADS)
Blankshain, Kenneth M.
The GPS concept and the translator processing system (TPS), which were developed for accurate and cost-effective tracking of various types of high dynamic expendable vehicles, are described. A technique used by the TPS to accomplish very accurate high dynamic tracking is presented. Automatic frequency control and fast Fourier transform processes are combined to track 100 g acceleration and 100 g/s jerk with a 1-sigma velocity measurement error of less than 1 ft/sec.
Generalized Gradient Approximation Made Simple
Perdew, J.P.; Burke, K.; Ernzerhof, M.
1996-10-01
Generalized gradient approximations (GGA's) for the exchange-correlation energy improve upon the local spin density (LSD) description of atoms, molecules, and solids. We present a simple derivation of a simple GGA, in which all parameters (other than those in LSD) are fundamental constants. Only general features of the detailed construction underlying the Perdew-Wang 1991 (PW91) GGA are invoked. Improvements over PW91 include an accurate description of the linear response of the uniform electron gas, correct behavior under uniform scaling, and a smoother potential. © 1996 The American Physical Society.
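For reference, the simple GGA presented here (the PBE functional) is commonly summarized by its exchange enhancement factor. The following standard published form is quoted as a hedged aside; it is not spelled out in the abstract itself:

```latex
% PBE exchange energy and enhancement factor (standard published form)
E_x^{\mathrm{PBE}} = \int d^3r \; n \,\epsilon_x^{\mathrm{unif}}(n)\, F_x(s),
\qquad s = \frac{|\nabla n|}{2 k_F n},
\qquad
F_x(s) = 1 + \kappa - \frac{\kappa}{1 + \mu s^2/\kappa},
\quad \kappa = 0.804,\ \mu \approx 0.21951.
```

Both constants are fixed by the general conditions the abstract mentions: μ by the linear response of the uniform electron gas, κ by a local bound on the exchange energy.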
Stout, Erik E; Beloozerova, Irina N
2013-01-01
Most movements need to be accurate. The neuronal mechanisms controlling accuracy during movements are poorly understood. In this study we compare the activity of fast- and slow-conducting pyramidal tract neurons (PTNs) of the motor cortex in cats as they walk both over a flat surface, a task that does not require accurate stepping and can be accomplished without the motor cortex, and along a horizontal ladder, a task that requires accuracy and the activity of the motor cortex to be successful. Fast- and slow-conducting PTNs are known to have distinct biophysical properties as well as different afferent and efferent connections. We found that while the activity of all PTNs changes substantially upon transition from simple locomotion to accurate stepping on the ladder, slow-conducting PTNs respond in a much more concerted manner than fast-conducting ones. As a group, slow-conducting PTNs increase discharge rate, especially during the late stance and early swing phases, decrease discharge variability, have a tendency to shift their preferred phase of the discharge into the swing phase, and almost always produce a single peak of activity per stride during ladder locomotion. In contrast, the fast-conducting PTNs do not display such concerted changes to their activity. In addition, upon transfer from simple locomotion to accurate stepping on the ladder, slow-conducting PTNs more profoundly increase the magnitude of their stride-related frequency modulation compared with fast-conducting PTNs. We suggest that slow-conducting PTNs are involved in the control of accuracy of locomotor movements to a greater degree than fast-conducting PTNs. PMID:23381901
Van Dyke, W.J.
1992-04-07
A fast valve is disclosed that can close on the order of 7 milliseconds. It is closed by the force of a compressed air spring, with the moving parts of the valve designed to be of very light weight and the valve gate being wedge shaped with O-ring sealed faces to provide sealing contact without metal to metal contact. The combination of the O-ring seal and an air cushion creates a soft final movement of the valve closure to prevent the fast-acting air valve from closing harshly. 4 figs.
Accurate Fission Data for Nuclear Safety
NASA Astrophysics Data System (ADS)
Solders, A.; Gorelov, D.; Jokinen, A.; Kolhinen, V. S.; Lantz, M.; Mattera, A.; Penttilä, H.; Pomp, S.; Rakopoulos, V.; Rinta-Antila, S.
2014-05-01
The Accurate fission data for nuclear safety (AlFONS) project aims at high precision measurements of fission yields, using the renewed IGISOL mass separator facility in combination with a new high current light ion cyclotron at the University of Jyväskylä. The 30 MeV proton beam will be used to create fast and thermal neutron spectra for the study of neutron induced fission yields. Thanks to a series of mass separating elements, culminating with the JYFLTRAP Penning trap, it is possible to achieve a mass resolving power on the order of a few hundred thousand. In this paper we present the experimental setup and the design of a neutron converter target for IGISOL. The goal is to have a flexible design. For studies of exotic nuclei far from stability a high neutron flux (10^12 neutrons/s) at energies of 1-30 MeV is desired, while for reactor applications neutron spectra that resemble those of thermal and fast nuclear reactors are preferred. It is also desirable to be able to produce (semi-)monoenergetic neutrons for benchmarking and to study the energy dependence of fission yields. The scientific program is extensive and is planned to start in 2013 with a measurement of isomeric yield ratios of proton induced fission in uranium. This will be followed by studies of independent yields of thermal and fast neutron induced fission of various actinides.
Robot navigation using simple sensor fusion
Jollay, D.M.; Ricks, R.E.
1988-01-01
Sensors on an autonomous mobile system are essential for determining the environment for navigation purposes. As is well documented in previous publications, sonar sensors alone are inadequate for depicting a real-world environment and therefore do not provide accurate information for navigation if not used in conjunction with another type of sensor. This paper describes a simple, inexpensive, and relatively fast navigation algorithm involving vision and sonar sensor fusion for use in navigating an autonomous robot in an unknown and potentially dynamic environment. Navigation of the mobile robot was accomplished by use of a TV camera as the primary sensor. Input data received from the camera were digitized through a video module and then processed using a dedicated vision system to enable detection of obstacles and to determine edge positions relative to the robot. Since 3D vision was not attempted, due to its complex and time-consuming nature, sonar sensors were then used as secondary sensors to determine the proximity of detected obstacles. By then fusing the sensor data, the robot was able to navigate quickly and collision-free to a given goal, achieving obstacle avoidance in real time.
A simple and efficient algorithm for connected component labeling in color images
NASA Astrophysics Data System (ADS)
Celebi, M. Emre
2012-03-01
Connected component labeling is a fundamental operation in binary image processing. A plethora of algorithms have been proposed for this low-level operation with the early ones dating back to the 1960s. However, very few of these algorithms were designed to handle color images. In this paper, we present a simple algorithm for labeling connected components in color images using an approximately linear-time seed fill algorithm. Experiments on a large set of photographic and synthetic images demonstrate that the proposed algorithm provides fast and accurate labeling without requiring excessive stack space.
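The seed-fill idea behind such labeling can be sketched in a few lines. The sketch below labels 4-connected regions of identical color with an iterative, stack-based flood fill; it illustrates the general technique only and is not Celebi's exact algorithm.

```python
from collections import deque

def label_components(img):
    """Label 4-connected regions of identical color using an iterative
    seed fill (flood fill). Labels start at 1; img is a 2-D grid of
    hashable color values."""
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for y in range(h):
        for x in range(w):
            if labels[y][x]:
                continue  # already part of a labeled component
            next_label += 1
            color = img[y][x]
            stack = deque([(y, x)])
            labels[y][x] = next_label
            while stack:
                cy, cx = stack.pop()
                # visit the four edge-adjacent neighbors
                for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                               (cy, cx - 1), (cy, cx + 1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and not labels[ny][nx]
                            and img[ny][nx] == color):
                        labels[ny][nx] = next_label
                        stack.append((ny, nx))
    return labels
```

Because each pixel is pushed at most once, the fill runs in time linear in the number of pixels, matching the approximately linear-time behavior the abstract describes.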
ERIC Educational Resources Information Center
Essexville-Hampton Public Schools, MI.
Described are components of Project FAST (Functional Analysis Systems Training) a nationally validated project to provide more effective educational and support services to learning disordered children and their regular elementary classroom teachers. The program is seen to be based on a series of modules of delivery systems ranging from mainstream…
Chiang, Wen-Chieh; Chen, Chao-Yu; Lee, Ting-Chen; Lee, Hui-Ling; Lin, Yu-Wen
2015-01-01
Recently, the International Agency for Research on Cancer classified outdoor air pollution and particulate matter from outdoor air pollution as carcinogenic to humans (IARC Group 1), based on sufficient evidence of carcinogenicity in humans and experimental animals and strong mechanistic evidence. In particular, a wide variety of volatile organic compounds (VOCs) are volatilized or released into the atmosphere and can become ubiquitous, as they originate from many different natural and anthropogenic sources, such as paints, pesticides, vehicle exhausts, cooking fumes, and tobacco smoke. Humans may be exposed to VOCs through inhalation, ingestion, or dermal contact, which may increase the risk of leukemia, birth defects, neurocognitive impairment, and cancer. Therefore, the focus of this study was the development of a simple, effective and rapid sample preparation method for the simultaneous determination of seven metabolites (6 mercapturic acids + t,t-muconic acid) derived from five VOCs (acrylamide, 1,3-butadiene, acrylonitrile, benzene, and xylene) in human urine by using automated on-line solid-phase extraction (SPE) coupled with liquid chromatography-electrospray tandem mass spectrometry (LC-MS/MS). An aliquot of each diluted urinary sample was directly injected into an autosampler through a trap column to reduce contamination, and then the retained target compounds were eluted by back-flush mode into an analytical column for separation. Negative electrospray ionization tandem mass spectrometry was utilized for quantification. The coefficients of correlation (r(2)) for the calibration curves were greater than 0.995. Reproducibility was assessed by the precision and accuracy of intra-day and inter-day measurements, which showed low coefficients of variation (CV) of 0.9 to 6.6% and 3.7 to 8.5%, respectively, and recoveries that ranged from 90.8 to 108.9% and 92.1 to 107.7%, respectively. The limits of detection (LOD) and limits of
Remane, Daniela; Meyer, Markus R; Peters, Frank T; Wissenbach, Dirk K; Maurer, Hans H
2010-07-01
In clinical and forensic toxicology, different extraction procedures as well as analytical methods are used to monitor different drug classes of interest in biosamples. Multi-analyte procedures are preferable because they make the analytical strategy much simpler and cheaper and allow monitoring of analytes of different drug classes in one single body sample. For development of such a multi-analyte liquid chromatography-tandem mass spectrometry approach, a rapid and simple method for the extraction of 136 analytes from the following drug classes has been established: antidepressants, neuroleptics, benzodiazepines, beta-blockers, oral antidiabetics, and analytes relevant in the context of brain death diagnosis. Recovery, matrix effects, and process efficiency were tested at two concentrations using six different lots of blank plasma. The recovery results obtained using absolute peak areas were compared with those calculated using area ratios of analyte to internal standard. The recoveries ranged from 8% to 84% for antidepressants, from 10% to 79% for neuroleptics, from 60% to 81% for benzodiazepines, from 1% to 71% for beta-blockers, from 10% to 73% for antidiabetics, and from 60% to 86% for analytes relevant in the context of brain death diagnosis. With the exception of 52 analytes at low concentration and 37 at high concentration, all compounds showed recoveries with acceptable variability, with coefficients of variation below 15% and 20%, respectively. Recovery results obtained by comparing peak area ratios were nearly the same, but 35 analytes at low concentration and 17 at high concentration exceeded the acceptance criteria. Matrix effects of more than 25% were observed for 18 analytes. The results were acceptable for 119 analytes at high concentrations.
Bayne, C.K.; Angelini, P.
1981-08-01
Theoretical and experimental studies compared the abilities of volumetric and gravimetric dispensers to dispense fissile and fertile fuel particles accurately. Such devices are being developed for the fabrication of sphere-pac fuel rods for high-temperature gas-cooled, light water, and fast breeder reactors. The theoretical examination suggests that, although the fuel particles are dispensed more accurately by the gravimetric dispenser, the amount of nuclear material in the fuel particles dispensed by the two methods is not significantly different. The experimental results demonstrated that the volumetric dispenser can dispense both fuel particles and nuclear materials that meet standards for fabricating fuel rods. Performance of the more complex gravimetric dispenser was not significantly better than that of the simple yet accurate volumetric dispenser.
NASA Astrophysics Data System (ADS)
Heyda, P. G.
2002-07-01
Underwater projectiles and vehicles may be able to travel at hundreds of miles per hour with the help of an attached cavity that is produced mainly by inertial forces and can greatly reduce fluid drag. A simple, idealized model explains the essentials of the phenomenon.
Ultrasonic system for accurate distance measurement in the air.
Licznerski, Tomasz J; Jaroński, Jarosław; Kosz, Dariusz
2011-12-01
This paper presents a system that accurately measures the distance travelled by ultrasound waves through the air. The simple design of the system and its obtained accuracy provide a tool for non-contact distance measurements required in the laser's optical system that investigates the surface of the eyeball.
Machine learning scheme for fast extraction of chemically interpretable interatomic potentials
NASA Astrophysics Data System (ADS)
Dolgirev, Pavel E.; Kruglov, Ivan A.; Oganov, Artem R.
2016-08-01
We present a new method for a fast, unbiased and accurate representation of interatomic interactions. It is a combination of an artificial neural network and our new approach for pair potential reconstruction. The potential reconstruction method is simple and computationally cheap and gives rich information about interactions in crystals. This method can be combined with structure prediction and molecular dynamics simulations, providing accuracy similar to ab initio methods, but at a small fraction of the cost. We present applications to real systems and discuss the insight provided by our method.
Mirador: A Simple, Fast Search Interface for Remote Sensing Data
NASA Technical Reports Server (NTRS)
Lynnes, Christopher; Strub, Richard; Seiler, Edward; Joshi, Talak; MacHarrie, Peter
2008-01-01
A major challenge for remote sensing science researchers is searching and acquiring relevant data files for their research projects based on content, space, and time constraints. Several structured query (SQ) and hierarchical navigation (HN) search interfaces have been developed to satisfy this requirement, yet the dominant search engines in the general domain are based on free-text search. The Goddard Earth Sciences Data and Information Services Center has developed a free-text search interface named Mirador that supports space-time queries, including a gazetteer and geophysical event gazetteer. In order to compensate for a slightly reduced search precision relative to SQ and HN techniques, Mirador uses several search optimizations to return results quickly. The quick response enables a more iterative search strategy than is available with many SQ and HN techniques.
Genuine Onion: Simple, Fast, Flexible, and Cheap Website Authentication
2015-05-21
…onion address by itself does not offer this. Making use of the traditional web trust infrastructure, DuckDuckGo and Facebook offer certificates for…make use of this in .onion space. By generating many keys whose hash had 'facebook' as initial string and then looking among the full hashes for an…adequately felicitous result, Facebook was able to obtain facebookcorewwwi.onion for its address. Whatever its value for Facebook, this is clearly not
Fast sweeping method for the factored eikonal equation
NASA Astrophysics Data System (ADS)
Fomel, Sergey; Luo, Songting; Zhao, Hongkai
2009-09-01
We develop a fast sweeping method for the factored eikonal equation, in which the solution of a general eikonal equation is decomposed as the product of two factors: the first factor is the solution to a simple eikonal equation (such as distance) or a previously computed solution to an approximate eikonal equation; the second factor is a necessary modification/correction. Appropriate discretization and a fast sweeping strategy are designed for the equation of the correction part. The key idea is to enforce the causality of the original eikonal equation during the Gauss-Seidel iterations. Using extensive numerical examples we demonstrate that (1) the convergence behavior of the fast sweeping method for the factored eikonal equation is the same as for the original eikonal equation, i.e., the number of Gauss-Seidel iterations is independent of the mesh size, and (2) the numerical solution from the factored eikonal equation is more accurate than the numerical solution computed directly from the original eikonal equation, especially for point sources.
High Frequency QRS ECG Accurately Detects Cardiomyopathy
NASA Technical Reports Server (NTRS)
Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds
2005-01-01
RAZ scoring is a simple, accurate and inexpensive screening technique for cardiomyopathy. Although HF QRS ECG is highly sensitive for cardiomyopathy, its specificity may be compromised in patients with cardiac pathologies other than cardiomyopathy, such as uncomplicated coronary artery disease or multiple coronary disease risk factors. Further studies are required to determine whether HF QRS might be useful for monitoring cardiomyopathy severity or the efficacy of therapy in a longitudinal fashion.
Accurate shear measurement with faint sources
Zhang, Jun; Foucaud, Sebastien; Luo, Wentao E-mail: walt@shao.ac.cn
2015-01-01
For cosmic shear to become an accurate cosmological probe, systematic errors in the shear measurement method must be unambiguously identified and corrected for. Previous work of this series has demonstrated that cosmic shears can be measured accurately in Fourier space in the presence of background noise and finite pixel size, without assumptions on the morphologies of galaxy and PSF. The remaining major source of error is source Poisson noise, due to the finiteness of source photon number. This problem is particularly important for faint galaxies in space-based weak lensing measurements, and for ground-based images of short exposure times. In this work, we propose a simple and rigorous way of removing the shear bias from the source Poisson noise. Our noise treatment can be generalized for images made of multiple exposures through MultiDrizzle. This is demonstrated with the SDSS and COSMOS/ACS data. With a large ensemble of mock galaxy images of unrestricted morphologies, we show that our shear measurement method can achieve sub-percent level accuracy even for images of signal-to-noise ratio less than 5 in general, making it the most promising technique for cosmic shear measurement in the ongoing and upcoming large scale galaxy surveys.
Gompertz kinetics model of fast chemical neurotransmission currents.
Easton, Dexter M
2005-10-01
At a chemical synapse, transmitter molecules ejected from presynaptic terminal(s) bind reversibly with postsynaptic receptors and trigger an increase in channel conductance to specific ions. This paper describes a simple but accurate predictive model for the time course of the synaptic conductance transient, based on Gompertz kinetics. In the model, two simple exponential decay terms set the rates of development and decline of transmitter action. The first, r, triggering conductance activation, is surrogate for the decelerated rate of growth of conductance, G. The second, r', responsible for deactivation of the conductance, Y, is surrogate for the decelerated rate of decline of transmitter action. Therefore, the differential equation for the net conductance change, g, triggered by the transmitter is dg/dt = g(r - r'). The solution of that equation yields the product of G(t), representing activation, and Y(t), which defines the proportional decline (deactivation) of the current. The model fits, over their full time course, published records of macroscopic ionic current associated with fast chemical transmission. The Gompertz model is a convenient and accurate method for routine analysis and comparison of records of synaptic current and putative transmitter time course. A Gompertz fit requiring only three independent rate constants plus initial current appears indistinguishable from a Markov fit using seven rate constants.
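The differential equation above integrates in closed form. The sketch below evaluates g(t) = g0·G(t)·Y(t) for exponentially decaying rates r(t) and r'(t); the parameter values are illustrative assumptions, not fitted synaptic data.

```python
import math

def gompertz_current(t, g0=1e-3, r0=10.0, k=5.0, rp0=6.0, kp=1.0):
    """Closed-form solution of dg/dt = g*(r - r') with exponentially
    decaying rates r(t) = r0*exp(-k*t) and r'(t) = rp0*exp(-kp*t).
    Integrating gives g(t) = g0 * G(t) * Y(t), a product of two
    Gompertz-type terms: G for activation, Y for deactivation.
    Parameter values are illustrative assumptions, not fitted data."""
    G = math.exp((r0 / k) * (1.0 - math.exp(-k * t)))       # activation
    Y = math.exp(-(rp0 / kp) * (1.0 - math.exp(-kp * t)))   # deactivation
    return g0 * G * Y
```

With r(0) > r'(0) the conductance first rises (activation dominates), then falls back toward baseline as the slower-decaying deactivation term takes over, reproducing the transient shape the model is fitted to.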
Accurate upwind methods for the Euler equations
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
1993-01-01
A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate, and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope steepening technique, which has no effect in smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one-intermediate-state model is employed. A modification for this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical at smooth regions, and yield high resolution at discontinuities.
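The median function mentioned in the abstract is a standard device for writing slope constraints compactly. The sketch below shows a generic MUSCL-type limited slope expressed through a median of three values; it is an illustrative standard limiter, not necessarily Huynh's exact constraint.

```python
def median3(a, b, c):
    """Median of three numbers."""
    return max(min(a, b), min(max(a, b), c))

def minmod(x, y):
    # minmod(x, y) == median(x, y, 0): zero if the signs differ,
    # otherwise the argument of smaller magnitude.
    return median3(x, y, 0.0)

def limited_slope(u_l, u_c, u_r):
    """Monotonicity-constrained slope for a piecewise-linear
    reconstruction: the central-difference slope is pulled into the
    interval bounded by twice the minmod of the one-sided differences.
    A common MUSCL-type limiter written via the median function."""
    dl, dr = u_c - u_l, u_r - u_c
    central = 0.5 * (dl + dr)
    return median3(central, 0.0, 2.0 * minmod(dl, dr))
```

On smooth monotone data the limiter returns the central difference unchanged (preserving second-order accuracy); at extrema and near discontinuities it clips the slope to zero, which is the monotonicity-preservation property the abstract describes.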
Accurate Encoding and Decoding by Single Cells: Amplitude Versus Frequency Modulation
Micali, Gabriele; Aquino, Gerardo; Richards, David M.; Endres, Robert G.
2015-01-01
Cells sense external concentrations and, via biochemical signaling, respond by regulating the expression of target proteins. Both in signaling networks and gene regulation there are two main mechanisms by which the concentration can be encoded internally: amplitude modulation (AM), where the absolute concentration of an internal signaling molecule encodes the stimulus, and frequency modulation (FM), where the period between successive bursts represents the stimulus. Although both mechanisms have been observed in biological systems, the question of when it is beneficial for cells to use either AM or FM is largely unanswered. Here, we first consider a simple model for a single receptor (or ion channel), which can either signal continuously whenever a ligand is bound, or produce a burst in signaling molecule upon receptor binding. We find that bursty signaling is more accurate than continuous signaling only for sufficiently fast dynamics. This suggests that modulation based on bursts may be more common in signaling networks than in gene regulation. We then extend our model to multiple receptors, where continuous and bursty signaling are equivalent to AM and FM respectively, finding that AM is always more accurate. This implies that the reason some cells use FM is related to factors other than accuracy, such as the ability to coordinate expression of multiple genes or to implement threshold crossing mechanisms. PMID:26030820
Multimodal spatial calibration for accurately registering EEG sensor positions.
Zhang, Jianhua; Chen, Jian; Chen, Shengyong; Xiao, Gang; Li, Xiaoli
2014-01-01
This paper proposes a fast and accurate calibration method for multiple multimodal sensors, using a novel photogrammetry system for fast localization of EEG sensors. The EEG sensors are placed on the human head and multimodal sensors are installed around the head to obtain all EEG sensor positions simultaneously. A multiple-view calibration process is implemented to obtain the transformations between views. We first develop an efficient local repair algorithm to improve the depth map, and then design a special calibration body. Based on these, accurate and robust calibration results can be achieved. We evaluate the proposed method using the corners of a chessboard calibration plate. Experimental results demonstrate that the proposed method achieves good performance and can be further applied to EEG source localization on the human brain.
Armstrong, April
2015-11-01
Simple elbow dislocation refers to those elbow dislocations that do not involve an osseous injury. A complex elbow dislocation refers to an elbow that has dislocated with an osseous injury. Most simple elbow dislocations are treated nonoperatively. Understanding the importance of the soft tissue injury following a simple elbow dislocation is a key to being successful with treatment.
ERIC Educational Resources Information Center
Endres, Frank L.
Symbolic Interactive Matrix Processing Language (SIMPLE) is a conversational matrix-oriented source language suited to a batch or a time-sharing environment. The two modes of operation of SIMPLE are conversational mode and programming mode. This program uses a TAURUS time-sharing system and cathode ray terminals or teletypes. SIMPLE performs all…
Fast Bayesian inference of optical trap stiffness and particle diffusion
Bera, Sudipta; Paul, Shuvojit; Singh, Rajesh; Ghosh, Dipanjan; Kundu, Avijit; Banerjee, Ayan; Adhikari, R.
2017-01-01
Bayesian inference provides a principled way of estimating the parameters of a stochastic process that is observed discretely in time. The overdamped Brownian motion of a particle confined in an optical trap is generally modelled by the Ornstein-Uhlenbeck process and can be observed directly in experiment. Here we present Bayesian methods for inferring the parameters of this process, the trap stiffness and the particle diffusion coefficient, that use exact likelihoods and sufficient statistics to arrive at simple expressions for the maximum a posteriori estimates. This obviates the need for Monte Carlo sampling and yields methods that are both fast and accurate. We apply these to experimental data and demonstrate their advantage over commonly used non-Bayesian fitting methods. PMID:28139705
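The closed-form estimates described above can be illustrated for a discretely sampled Ornstein-Uhlenbeck process. With flat priors the MAP estimates coincide with maximum likelihood and follow from a few sufficient statistics; the sketch below is a generic version of this idea, not the paper's exact expressions.

```python
import math, random

def infer_ou(x, dt):
    """Infer the OU relaxation rate lam and diffusion coefficient D from
    a discretely sampled trajectory x (spacing dt), via the exact
    discrete-time AR(1) likelihood: x[n+1] = a*x[n] + eta, where
    a = exp(-lam*dt) and Var(eta) = (D/lam)*(1 - a*a).  With flat
    priors the MAP estimate equals maximum likelihood and has closed
    form in the sufficient statistics below.  A sketch of the general
    approach, not the paper's exact expressions."""
    n = len(x) - 1
    sxx = sum(x[i] * x[i] for i in range(n))
    sxy = sum(x[i] * x[i + 1] for i in range(n))
    syy = sum(x[i + 1] * x[i + 1] for i in range(n))
    a = sxy / sxx                         # regression slope -> exp(-lam*dt)
    lam = -math.log(a) / dt               # relaxation rate (stiffness/drag)
    resid_var = (syy - 2 * a * sxy + a * a * sxx) / n
    D = resid_var * lam / (1.0 - a * a)   # diffusion coefficient
    return lam, D
```

Because only three running sums are needed, the estimate is computed in a single pass over the data, which is what makes this approach fast compared with Monte Carlo sampling or iterative fitting.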
The Fast Scattering Code (FSC): Validation Studies and Program Guidelines
NASA Technical Reports Server (NTRS)
Tinetti, Ana F.; Dunn, Mark H.
2011-01-01
The Fast Scattering Code (FSC) is a frequency domain noise prediction program developed at the NASA Langley Research Center (LaRC) to simulate the acoustic field produced by the interaction of known, time harmonic incident sound with bodies of arbitrary shape and surface impedance immersed in a potential flow. The code uses the equivalent source method (ESM) to solve an exterior 3-D Helmholtz boundary value problem (BVP) by expanding the scattered acoustic pressure field into a series of point sources distributed on a fictitious surface placed inside the actual scatterer. This work provides additional code validation studies and illustrates the range of code parameters that produce accurate results with minimal computational costs. Systematic noise prediction studies are presented in which monopole generated incident sound is scattered by simple geometric shapes - spheres (acoustically hard and soft surfaces), oblate spheroids, flat disk, and flat plates with various edge topologies. Comparisons between FSC simulations and analytical results and experimental data are presented.
ON THE EQUILIBRIUM STRUCTURE OF SIMPLE LIQUIDS
It is shown that the repulsive (not merely the positive) portion of the Lennard-Jones potential quantitatively dominates the equilibrium structure of the Lennard-Jones liquid. A simple and accurate approximation for the radial distribution function at high densities is presented.
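The standard way to isolate the repulsive portion is the Weeks-Chandler-Andersen split: truncate the Lennard-Jones potential at its minimum and shift it up by the well depth, leaving a purely repulsive reference potential. The sketch below implements that split; whether it matches this report's exact construction is an assumption.

```python
def lj(r, eps=1.0, sigma=1.0):
    """Full Lennard-Jones pair potential 4*eps*((sigma/r)^12 - (sigma/r)^6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

def wca_repulsive(r, eps=1.0, sigma=1.0):
    """Repulsive reference potential in the Weeks-Chandler-Andersen
    spirit: the LJ potential shifted up by eps and truncated at its
    minimum r_min = 2^(1/6)*sigma, so it is purely repulsive and zero
    beyond r_min."""
    r_min = 2.0 ** (1.0 / 6.0) * sigma
    return lj(r, eps, sigma) + eps if r < r_min else 0.0
```

The repulsive reference reproduces the steep short-range wall of the full potential, which is the part the abstract identifies as dominating the liquid's equilibrium structure at high density.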
Determining accurate distances to nearby galaxies
NASA Astrophysics Data System (ADS)
Bonanos, Alceste Zoe
2005-11-01
Determining accurate distances to nearby or distant galaxies is a conceptually simple, yet practically complicated, task. Presently, distances to nearby galaxies are known only to an accuracy of 10-15%. The current anchor galaxy of the extragalactic distance scale is the Large Magellanic Cloud, which has large (10-15%) systematic uncertainties associated with it, because of its morphology, its non-uniform reddening and the unknown metallicity dependence of the Cepheid period-luminosity relation. This work aims to determine accurate distances to some nearby galaxies, and subsequently help reduce the error in the extragalactic distance scale and the Hubble constant H0. In particular, this work presents the first distance determination of the DIRECT Project to M33 with detached eclipsing binaries. DIRECT aims to obtain a new anchor galaxy for the extragalactic distance scale by measuring direct, accurate (to 5%) distances to two Local Group galaxies, M31 and M33, with detached eclipsing binaries. It involves a massive variability survey of these galaxies and subsequent photometric and spectroscopic follow-up of the detached binaries discovered. In this work, I also present a catalog of variable stars discovered in one of the DIRECT fields, M31Y, which includes 41 eclipsing binaries. Additionally, we derive the distance to the Draco Dwarf Spheroidal galaxy, with ~100 RR Lyrae stars found in our first CCD variability study of this galaxy. A "hybrid" method of discovering Cepheids with ground-based telescopes is described next. It involves applying the image subtraction technique on the images obtained from ground-based telescopes and then following them up with the Hubble Space Telescope to derive Cepheid period-luminosity distances. By re-analyzing ESO Very Large Telescope data on M83 (NGC 5236), we demonstrate that this method is much more powerful for detecting variability, especially in crowded fields. I finally present photometry for the Wolf-Rayet binary WR 20a
Improved Ecosystem Predictions of the California Current System via Accurate Light Calculations
2011-09-30
Curtis D. Mobley, Sequoia Scientific, Inc., 2700 Richards Road, Suite 107, Bellevue, WA 98005. …incorporate extremely fast but accurate light calculations into coupled physical-biological-optical ocean ecosystem models as used for operational three-dimensional ecosystem predictions. Improvements in light calculations lead to improvements in predictions of chlorophyll concentrations and other…
Fast Image Texture Classification Using Decision Trees
NASA Technical Reports Server (NTRS)
Thompson, David R.
2011-01-01
Texture analysis would permit improved autonomous, onboard science data interpretation for adaptive navigation, sampling, and downlink decisions. These analyses would assist with terrain analysis and instrument placement in both macroscopic and microscopic image data products. Unfortunately, most state-of-the-art texture analysis demands computationally expensive convolutions of filters involving many floating-point operations. This makes them infeasible for radiation-hardened computers and spaceflight hardware. A new method approximates traditional texture classification of each image pixel with a fast decision-tree classifier. The classifier uses image features derived from simple filtering operations involving integer arithmetic. The texture analysis method is therefore amenable to implementation on FPGA (field-programmable gate array) hardware. Image features based on the "integral image" transform produce descriptive and efficient texture descriptors. Training the decision tree on a set of training data yields a classification scheme that produces reasonable approximations of optimal "texton" analysis at a fraction of the computational cost. A decision-tree learning algorithm employing the traditional k-means criterion of inter-cluster variance is used to learn tree structure from training data. The result is an efficient and accurate summary of surface morphology in images. This work is an evolutionary advance that unites several previous algorithms (k-means clustering, integral images, decision trees) and applies them to a new problem domain (morphology analysis for autonomous science during remote exploration). Advantages include order-of-magnitude improvements in runtime, feasibility for FPGA hardware, and significant improvements in texture classification accuracy.
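The "integral image" transform mentioned above reduces any rectangular box sum to four table lookups, using only integer arithmetic. A minimal sketch of the transform and its constant-time box-sum query:

```python
def integral_image(img):
    """Summed-area table with a leading row/column of zeros:
    ii[y+1][x+1] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def box_sum(ii, y0, x0, y1, x1):
    """Sum of img over rows y0..y1 and cols x0..x1 (inclusive) in O(1)
    using four lookups, the integer-only operation that makes such
    features cheap enough for FPGA-style hardware."""
    return (ii[y1 + 1][x1 + 1] - ii[y0][x1 + 1]
            - ii[y1 + 1][x0] + ii[y0][x0])
```

After one pass builds the table, every box feature costs the same regardless of box size, which is what makes these descriptors attractive compared with per-pixel filter convolutions.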
Fast and sensitive detection of indels induced by precise gene targeting
Yang, Zhang; Steentoft, Catharina; Hauge, Camilla; Hansen, Lars; Thomsen, Allan Lind; Niola, Francesco; Vester-Christensen, Malene B.; Frödin, Morten; Clausen, Henrik; Wandall, Hans H.; Bennett, Eric P.
2015-01-01
The nuclease-based gene editing tools are rapidly transforming capabilities for altering the genome of cells and organisms with great precision and in high throughput studies. A major limitation in application of precise gene editing lies in lack of sensitive and fast methods to detect and characterize the induced DNA changes. Precise gene editing induces double-stranded DNA breaks that are repaired by error-prone non-homologous end joining leading to introduction of insertions and deletions (indels) at the target site. These indels are often small and difficult and laborious to detect by traditional methods. Here we present a method for fast, sensitive and simple indel detection that accurately defines indel sizes down to ±1 bp. The method coined IDAA for Indel Detection by Amplicon Analysis is based on tri-primer amplicon labelling and DNA capillary electrophoresis detection, and IDAA is amenable for high throughput analysis. PMID:25753669
Operator Priming and Generalization of Practice in Adults' Simple Arithmetic
ERIC Educational Resources Information Center
Chen, Yalin; Campbell, Jamie I. D.
2016-01-01
There is a renewed debate about whether educated adults solve simple addition problems (e.g., 2 + 3) by direct fact retrieval or by fast, automatic counting-based procedures. Recent research testing adults' simple addition and multiplication showed that a 150-ms preview of the operator (+ or ×) facilitated addition, but not multiplication,…
No Generalization of Practice for Nonzero Simple Addition
ERIC Educational Resources Information Center
Campbell, Jamie I. D.; Beech, Leah C.
2014-01-01
Several types of converging evidence have suggested recently that skilled adults solve very simple addition problems (e.g., 2 + 1, 4 + 2) using a fast, unconscious counting algorithm. These results stand in opposition to the long-held assumption in the cognitive arithmetic literature that such simple addition problems normally are solved by fact…
On numerically accurate finite element solutions in the fully plastic range
NASA Technical Reports Server (NTRS)
Nagtegaal, J. C.; Parks, D. M.; Rice, J. R.
1974-01-01
A general criterion for testing a mesh with topologically similar repeat units is given, and the analysis shows that only a few conventional element types and arrangements are, or can be made, suitable for computations in the fully plastic range. Further, a new variational principle, which can easily and simply be incorporated into an existing finite element program, is presented. This allows accurate computations to be made even for element designs that would not normally be suitable. Numerical results are given for three plane strain problems, namely pure bending of a beam, a thick-walled tube under pressure, and a deep double edge cracked tensile specimen. The effects of various element designs and of the new variational procedure are illustrated. Elastic-plastic computations at finite strain are discussed.
Must Kohn-Sham oscillator strengths be accurate at threshold?
Yang Zenghui; Burke, Kieron; Faassen, Meta van
2009-09-21
The exact ground-state Kohn-Sham (KS) potential for the helium atom is known from accurate wave function calculations of the ground-state density. The threshold for photoabsorption from this potential matches the physical system exactly. By carefully studying its absorption spectrum, we show the answer to the title question is no. To address this problem in detail, we generate a highly accurate simple fit of a two-electron spectrum near the threshold, and apply the method to both the experimental spectrum and that of the exact ground-state Kohn-Sham potential.
Primitive layered gabbros from fast-spreading lower oceanic crust.
Gillis, Kathryn M; Snow, Jonathan E; Klaus, Adam; Abe, Natsue; Adrião, Alden B; Akizawa, Norikatsu; Ceuleneer, Georges; Cheadle, Michael J; Faak, Kathrin; Falloon, Trevor J; Friedman, Sarah A; Godard, Marguerite; Guerin, Gilles; Harigane, Yumiko; Horst, Andrew J; Hoshide, Takashi; Ildefonse, Benoit; Jean, Marlon M; John, Barbara E; Koepke, Juergen; Machi, Sumiaki; Maeda, Jinichiro; Marks, Naomi E; McCaig, Andrew M; Meyer, Romain; Morris, Antony; Nozaka, Toshio; Python, Marie; Saha, Abhishek; Wintsch, Robert P
2014-01-09
Three-quarters of the oceanic crust formed at fast-spreading ridges is composed of plutonic rocks whose mineral assemblages, textures and compositions record the history of melt transport and crystallization between the mantle and the sea floor. Despite the importance of these rocks, sampling them in situ is extremely challenging owing to the overlying dykes and lavas. This means that models for understanding the formation of the lower crust are based largely on geophysical studies and ancient analogues (ophiolites) that did not form at typical mid-ocean ridges. Here we describe cored intervals of primitive, modally layered gabbroic rocks from the lower plutonic crust formed at a fast-spreading ridge, sampled by the Integrated Ocean Drilling Program at the Hess Deep rift. Centimetre-scale, modally layered rocks, some of which have a strong layering-parallel foliation, confirm a long-held belief that such rocks are a key constituent of the lower oceanic crust formed at fast-spreading ridges. Geochemical analysis of these primitive lower plutonic rocks--in combination with previous geochemical data for shallow-level plutonic rocks, sheeted dykes and lavas--provides the most completely constrained estimate of the bulk composition of fast-spreading oceanic crust so far. Simple crystallization models using this bulk crustal composition as the parental melt accurately predict the bulk composition of both the lavas and the plutonic rocks. However, the recovered plutonic rocks show early crystallization of orthopyroxene, which is not predicted by current models of melt extraction from the mantle and mid-ocean-ridge basalt differentiation. The simplest explanation of this observation is that compositionally diverse melts are extracted from the mantle and partly crystallize before mixing to produce the more homogeneous magmas that erupt.
Delatorre, Carolina; Rodríguez, Ana; Rodríguez, Lucía; Majada, Juan P; Ordás, Ricardo J; Feito, Isabel
2017-01-01
Plant growth regulators (PGRs) are very different chemical compounds that play essential roles in plant development and the regulation of physiological processes. They exert their functions through a mechanism called cross-talk (involving either synergistic or antagonistic actions); thus, it is of great interest to study as many PGRs as possible in order to obtain accurate information about plant status. Much effort has been applied to developing methods capable of analyzing large numbers of these compounds, but these frequently exclude some chemical families or important PGRs within each family. In addition, most of the methods are designed specifically for matrices that are easy to work with. We therefore set out to develop a method that meets the requirements lacking in the literature while also being fast and reliable. Here we present a simple, fast and robust method for the extraction and quantification of 20 different PGRs using UHPLC-MS/MS, optimized in complex matrices.
A fast algorithm for treating dielectric discontinuities in charged spherical colloids.
Xu, Zhenli
2012-03-01
Electrostatic interactions between multiple colloids in ionic fluids are attracting much attention in studies of biological and soft-matter systems. The evaluation of the polarization surface charges due to the spherical dielectric discontinuities poses a challenging problem for highly efficient computer simulations. In this paper, we propose a new method for fast calculation of the electric field of spaced spheres using the multiple reflection expansion. The method uses a technique of recursive reflections among the spherical interfaces, based on a formula for the multiple image representation, resulting in a simple, accurate and closed-form expression for the surface polarization charges. Numerical calculations of the electric potential energies of charged spheres demonstrate that the method is highly accurate with a small number of reflections, and thus attractive for use in practical simulations of related problems such as colloid suspensions and macromolecular interactions.
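The recursive-reflection scheme above builds on the classical image-charge construction. As a minimal, hedged illustration (this is the textbook Kelvin image for a grounded conducting sphere, not the paper's multiple-image expansion for dielectric interfaces), a point charge outside a sphere can be replaced by a single image charge inside it:

```python
# Classical Kelvin image: a point charge q at distance d from the centre of
# a grounded conducting sphere of radius R is mimicked, outside the sphere,
# by an image charge q' = -q*R/d located at distance R**2/d from the centre.
# (Illustrative only; the paper treats dielectric spheres with recursive
# multiple reflections.)

def kelvin_image(q, R, d):
    """Return (image charge, image distance from the sphere centre)."""
    if d <= R:
        raise ValueError("source charge must lie outside the sphere")
    return -q * R / d, R * R / d

def axial_potential(q, R, d, r):
    """Potential (Gaussian units) at distance r from the centre, on the
    axis through the source charge, due to source plus image."""
    q_img, d_img = kelvin_image(q, R, d)
    return q / abs(r - d) + q_img / abs(r - d_img)

# On the sphere surface (r = R) the potential vanishes, as required for a
# grounded conductor:
print(axial_potential(1.0, 1.0, 2.0, 1.0))  # 0.0
```

Each reflection in the paper's method generalizes this single-image step to dielectric interfaces and iterates it among all spheres until the polarization charges converge.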
ERIC Educational Resources Information Center
Straulino, S.; Bonechi, L.
2010-01-01
Two lenses make it possible to create a simple telescope with quite large magnification. The set-up is very simple and can be reproduced in schools, provided the laboratory has a range of lenses with different focal lengths. In this article, the authors adopt the Keplerian configuration, which is composed of two converging lenses. This instrument,…
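For readers reproducing the set-up, the Keplerian configuration's magnification follows from the two focal lengths alone. A small sketch (the focal-length values are illustrative, not from the article):

```python
# Angular magnification and lens separation of a Keplerian telescope made
# from two converging lenses.

def keplerian_magnification(f_objective, f_eyepiece):
    """M = f_objective / f_eyepiece for a two-lens Keplerian telescope."""
    return f_objective / f_eyepiece

def lens_separation(f_objective, f_eyepiece):
    """The lenses sit one focal length on each side of a common focal
    plane, so the tube length is the sum of the focal lengths."""
    return f_objective + f_eyepiece

# A 500 mm objective with a 25 mm eyepiece:
print(keplerian_magnification(500, 25))  # 20.0
print(lens_separation(500, 25))          # 525
```

A school laboratory with a range of focal lengths can thus trade tube length against magnification directly.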
ERIC Educational Resources Information Center
Herald, Christine
2010-01-01
During the month of May, the author's eighth-grade physical science students study the six simple machines through hands-on activities, reading assignments, videos, and notes. At the end of the month, they can easily identify the six types of simple machine: inclined plane, wheel and axle, pulley, screw, wedge, and lever. To conclude this unit,…
A CFD-based wind solver for a fast response transport and dispersion model
Gowardhan, Akshay A; Brown, Michael J; Pardyjak, Eric R; Senocak, Inanc
2010-01-01
In many cities, ambient air quality is deteriorating, leading to concerns about the health of city inhabitants. In urban areas with narrow streets surrounded by clusters of tall buildings, called street canyons, air pollution from traffic emissions and other sources is difficult to disperse and may accumulate, resulting in high pollutant concentrations. For various situations, including the evacuation of populated areas in the event of an accidental or deliberate release of chemical, biological or radiological agents, it is important that models be developed that produce urban flow fields quickly. For these reasons it has become important to predict the flow field in urban street canyons. Various computational techniques have been used to calculate these flow fields, but these techniques are often computationally intensive. Most fast response models currently in use are at a disadvantage in these cases, as they are unable to correlate highly heterogeneous urban structures with the diagnostic parameterizations on which they are based. In this paper, a fast and reasonably accurate computational fluid dynamics (CFD) technique that solves the Navier-Stokes equations for complex urban areas, called QUIC-CFD (Q-CFD), has been developed. This technique represents an intermediate balance between fast (on the order of minutes for a several-block problem) and reasonably accurate solutions. The paper details the solution procedure and validates this model for various simple and complex urban geometries.
Simple formulae for the transmittance of strip gratings
NASA Astrophysics Data System (ADS)
Compton, R. C.; Whitbourn, L. B.; McPhedran, R. C.
1983-11-01
The simple, accurate formulas presented for transmittance through strip gratings of infinitesimal thickness are valid irrespective of the angle of incidence of the radiation on the gratings, and can take into account the effects of a dielectric substrate. The formulas' range of applicability is demonstrated through a comparison of their predictions with the results of rigorous calculations. The formulas are simple to implement on a computer or programmable calculator, and are accurate over a wide range of grid parameters and wavelengths.
Accurate ab Initio Spin Densities.
Boguslawski, Katharina; Marti, Konrad H; Legeza, Ors; Reiher, Markus
2012-06-12
We present an approach for the calculation of spin density distributions for molecules that require very large active spaces for a qualitatively correct description of their electronic structure. Our approach is based on the density-matrix renormalization group (DMRG) algorithm to calculate the spin density matrix elements as a basic quantity for the spatially resolved spin density distribution. The spin density matrix elements are directly determined from the second-quantized elementary operators optimized by the DMRG algorithm. As an analytic convergence criterion for the spin density distribution, we employ our recently developed sampling-reconstruction scheme [J. Chem. Phys. 2011, 134, 224101] to build an accurate complete-active-space configuration-interaction (CASCI) wave function from the optimized matrix product states. The spin density matrix elements can then also be determined as an expectation value employing the reconstructed wave function expansion. Furthermore, the explicit reconstruction of a CASCI-type wave function provides insight into chemically interesting features of the molecule under study such as the distribution of α and β electrons in terms of Slater determinants, CI coefficients, and natural orbitals. The methodology is applied to an iron nitrosyl complex which we have identified as a challenging system for standard approaches [J. Chem. Theory Comput. 2011, 7, 2740].
Accurate, reproducible measurement of blood pressure.
Campbell, N R; Chockalingam, A; Fodor, J G; McKay, D W
1990-01-01
The diagnosis of mild hypertension and the treatment of hypertension require accurate measurement of blood pressure. Blood pressure readings are altered by various factors that influence the patient, the techniques used and the accuracy of the sphygmomanometer. The variability of readings can be reduced if informed patients prepare in advance by emptying their bladder and bowel, by avoiding over-the-counter vasoactive drugs the day of measurement and by avoiding exposure to cold, caffeine consumption, smoking and physical exertion within half an hour before measurement. The use of standardized techniques to measure blood pressure will help to avoid large systematic errors. Poor technique can account for differences in readings of more than 15 mm Hg and ultimately misdiagnosis. Most of the recommended procedures are simple and, when routinely incorporated into clinical practice, require little additional time. The equipment must be appropriate and in good condition. Physicians should have a suitable selection of cuff sizes readily available; the use of the correct cuff size is essential to minimize systematic errors in blood pressure measurement. Semiannual calibration of aneroid sphygmomanometers and annual inspection of mercury sphygmomanometers and blood pressure cuffs are recommended. We review the methods recommended for measuring blood pressure and discuss the factors known to produce large differences in blood pressure readings. PMID:2192791
Gary S. Groenewold
2005-08-01
Simple bond cleavage is a class of fragmentation reactions in which a single bond is broken, without formation of new bonds between previously unconnected atoms. Because no bond making is involved, simple bond cleavages are endothermic, and activation energies are generally higher than for rearrangement eliminations. The rate of simple bond cleavage reactions is a strong function of the internal energy of the molecular ion, which reflects a loose transition state that resembles reaction products, and has a high density of accessible states. For this reason, simple bond cleavages tend to dominate fragmentation reactions for highly energized molecular ions. Simple bond cleavages have negligible reverse activation energy, and hence they are used as valuable probes of ion thermochemistry, since the energy dependence of the reactions can be related to the bond energy. In organic mass spectrometry, simple bond cleavages of odd electron ions can be either homolytic or heterolytic, depending on whether the fragmentation is driven by the radical site or the charge site. Simple bond cleavages of even electron ions tend to be heterolytic, producing even electron product ions and neutrals.
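The strong dependence of the cleavage rate on internal energy described above can be illustrated with the classical RRK expression for a unimolecular dissociation rate; this is a standard textbook approximation, not a formula given in the text, and the parameter values are illustrative:

```python
def rrk_rate(E, E0, nu=1.0e13, s=10):
    """Classical RRK estimate of a simple-bond-cleavage rate,
    k(E) = nu * ((E - E0) / E)**(s - 1), with E the internal energy, E0
    the critical (bond) energy, nu a frequency factor and s the number of
    effective oscillators. Parameter values are illustrative, not from
    the text."""
    if E <= E0:
        return 0.0  # channel closed below threshold
    return nu * ((E - E0) / E) ** (s - 1)

# The rate climbs steeply with internal energy above threshold, which is
# consistent with simple bond cleavages dominating for highly energized
# molecular ions:
for E in (1.1, 1.5, 2.0, 4.0):  # internal energies in units of E0 = 1
    print(f"E = {E:.1f}  k(E) = {rrk_rate(E, 1.0):.3e} s^-1")
```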
NASA Astrophysics Data System (ADS)
Price, Daniel J.; Laibe, Guillaume
2015-07-01
We describe a simple method for simulating the dynamics of small grains in a dusty gas, relevant to micron-sized grains in the interstellar medium and grains of centimetre size and smaller in protoplanetary discs. The method involves solving one extra diffusion equation for the dust fraction in addition to the usual equations of hydrodynamics. This `diffusion approximation for dust' is valid when the dust stopping time is smaller than the computational timestep. We present a numerical implementation using smoothed particle hydrodynamics that is conservative, accurate and fast. It does not require any implicit timestepping and can be straightforwardly ported into existing 3D codes.
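The core of the method is evolving one extra diffusion equation for the dust fraction. As a minimal grid-based stand-in (the authors' actual scheme is a smoothed particle hydrodynamics implementation whose diffusion coefficient is set by the dust stopping time; the constant coefficient, grid spacing and timestep below are illustrative assumptions):

```python
import numpy as np

def diffuse_dust_fraction(eps, D, dx, dt):
    """One explicit finite-difference step of d(eps)/dt = D * d2(eps)/dx2
    with zero-gradient boundaries: a minimal 1D stand-in for evolving the
    dust fraction by a single extra diffusion equation. (The paper's scheme
    is an SPH implementation with a stopping-time-dependent coefficient;
    D, dx and dt here are illustrative.)"""
    assert D * dt / dx ** 2 <= 0.5, "explicit step would be unstable"
    padded = np.pad(eps, 1, mode="edge")   # zero-gradient boundaries
    lap = (padded[2:] - 2 * padded[1:-1] + padded[:-2]) / dx ** 2
    return eps + dt * D * lap

eps = np.zeros(101)
eps[50] = 1.0                              # initial dust spike
for _ in range(200):
    eps = diffuse_dust_fraction(eps, D=1.0, dx=1.0, dt=0.4)
print(round(eps.sum(), 6))                 # total dust is conserved: 1.0
```

Conservation of the total dust fraction mirrors the conservative property the authors report for their SPH implementation.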
ERIC Educational Resources Information Center
Blond, J. P.; Boggett, D. M.
1980-01-01
Discusses some basic physical ideas about light scattering and describes a simple Raman spectrometer, a single prism monochromator and a multiplier detector. This discussion is intended for British undergraduate physics students. (HM)
ERIC Educational Resources Information Center
White, A. S.
1976-01-01
Describes a simple water channel, for use with an overhead projector. It is run from a water tap and may be used for flow visualization experiments, including the effect of streamlining and elementary building aerodynamics. (MLH)
Early Childhood: Simple Science.
ERIC Educational Resources Information Center
Jones, Clare B.; Shafer, Kathryn E.
1987-01-01
Encourages teachers to take advantage of the natural curiosity of young children in enhancing their interest in science. Describes four simple activities involving water, living and non-living things, air pollution, and food. (TW)
ERIC Educational Resources Information Center
Kirkwood, James J.
1994-01-01
Students explore the workings of the lever, wheel and axle, and the inclined plane as they build simple toys--a bulldozer and a road grader. The project takes four weeks. Diagrams and procedures are included. (PR)
Fast imaging of live organisms with sculpted light sheets
NASA Astrophysics Data System (ADS)
Chmielewski, Aleksander K.; Kyrsting, Anders; Mahou, Pierre; Wayland, Matthew T.; Muresan, Leila; Evers, Jan Felix; Kaminski, Clemens F.
2015-04-01
Light-sheet microscopy is an increasingly popular technique in the life sciences due to its fast 3D imaging capability for fluorescent samples with low phototoxicity compared to confocal methods. In this work we present a new, fast, flexible and simple-to-implement method to optimize the illumination light-sheet to the requirement at hand. A telescope composed of two electrically tuneable lenses enables us to define the thickness and position of the light-sheet independently but accurately within milliseconds, and therefore to optimize the image quality of the features of interest interactively. We demonstrate the practical benefit of this technique by (1) assembling large fields of view from tiled single exposures, each with individually optimized illumination settings; and (2) sculpting the light-sheet to trace complex sample shapes within single exposures. This technique proved compatible with confocal line-scanning detection, further improving image contrast and resolution. Finally, we determined the effect of light-sheet optimization in the context of scattering tissue, devising procedures for balancing image quality, field of view and acquisition speed.
Chromatic Information and Feature Detection in Fast Visual Analysis
Del Viva, Maria M.; Punzi, Giovanni; Shevell, Steven K.
2016-01-01
The visual system is able to recognize a scene based on a sketch made of very simple features. This ability is likely crucial for survival, when fast image recognition is necessary, and it is believed that a primal sketch is extracted very early in the visual processing. Such highly simplified representations can be sufficient for accurate object discrimination, but an open question is the role played by color in this process. Rich color information is available in natural scenes, yet artist's sketches are usually monochromatic; and, black-and-white movies provide compelling representations of real world scenes. Also, the contrast sensitivity of color is low at fine spatial scales. We approach the question from the perspective of optimal information processing by a system endowed with limited computational resources. We show that when such limitations are taken into account, the intrinsic statistical properties of natural scenes imply that the most effective strategy is to ignore fine-scale color features and devote most of the bandwidth to gray-scale information. We find confirmation of these information-based predictions from psychophysics measurements of fast-viewing discrimination of natural scenes. We conclude that the lack of colored features in our visual representation, and our overall low sensitivity to high-frequency color components, are a consequence of an adaptation process, optimizing the size and power consumption of our brain for the visual world we live in. PMID:27478891
Sorokine, Alexandre
2011-10-01
The Simple Ontology Format (SOFT) library and file-format specification provide a set of simple tools for developing and maintaining ontologies. The library, implemented as a Perl module, supports parsing and verification of files in SOFT format, operations on ontologies (adding, removing, or filtering entities), and conversion of ontologies into other formats. SOFT allows users to quickly create ontologies using only a basic text editor, verify them, and portray them in a graph layout system using customized styles.
Simple heuristics in over-the-counter drug choices: a new hint for medical education and practice
Riva, Silvia; Monti, Marco; Antonietti, Alessandro
2011-01-01
Introduction: Over-the-counter (OTC) drugs are widely available and often purchased by consumers without advice from a health care provider. Many people rely on self-management of medications to treat common medical conditions. Although OTC medications are regulated by national and international health and drug administrations, many people are unaware of proper dosing, side effects, adverse drug reactions, and possible medication interactions. Purpose: This study examined how subjects decide to select an OTC drug, evaluating the role of cognitive heuristics, which are simple and adaptive rules that aid people's decision-making in everyday contexts. Subjects and methods: By analyzing 70 subjects' information-search and decision-making behavior when selecting OTC drugs, we examined the heuristics they applied in order to assess whether simple decision-making processes were also accurate and relevant. Subjects were tested with a sequence of two experimental tests based on a computerized Java system devised to analyze participants' choices in a virtual environment. Results: We found that subjects' information-search behavior reflected the use of fast and frugal heuristics. In addition, although the heuristics that correctly predicted subjects' decisions implied significantly fewer cues on average than the subjects used in the information-search task, they were accurate in describing the order of information search. A simple combination of a fast and frugal tree and a tallying rule predicted more than 78% of subjects' decisions. Conclusion: The current emphasis in health care is to shift some responsibility onto the consumer through expansion of self-medication. Knowing which cognitive mechanisms are behind the choice of OTC drugs is becoming a relevant purpose of current medical education. These findings have implications both for the validity of simple heuristics describing information searches in the field of OTC drug choices and
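The two heuristics named in the results can be sketched in a few lines. The cue names, ordering, and thresholds below are hypothetical illustrations; the study's fitted tree is not reproduced here:

```python
def fast_frugal_tree(drug):
    """Fast-and-frugal tree: inspect one cue at a time; every cue has an
    exit branch that yields a decision immediately. Cue names and ordering
    are hypothetical, for illustration only."""
    if drug["known_interaction"]:
        return "reject"              # exit on the first cue
    if not drug["symptom_match"]:
        return "reject"              # exit on the second cue
    return "buy" if drug["price_acceptable"] else "reject"  # final cue

def tallying(drug, cues=("symptom_match", "price_acceptable", "brand_trusted")):
    """Tallying rule: count positive cues with unit weights and buy when a
    majority are positive."""
    score = sum(bool(drug[c]) for c in cues)
    return "buy" if score >= 2 else "reject"

candidate = {"known_interaction": False, "symptom_match": True,
             "price_acceptable": True, "brand_trusted": False}
print(fast_frugal_tree(candidate), tallying(candidate))  # buy buy
```

Both rules ignore most available information, which is exactly why they serve as models of frugal information search.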
NASA Astrophysics Data System (ADS)
Jägers, Aswin P. L.; Sliepen, Guus; Bettonvil, Felix C. M.; Hammerschlag, Robert H.
2008-07-01
In the near future, ELTs (Extremely Large Telescopes) will be built. Preferably, these telescopes should operate without obstructions in their near surroundings, to reach optimal seeing conditions and to avoid large turbulence and wind-gust accelerations around large obstacles. This also applies to future large solar telescopes. At present, two foldable dome prototypes have been built on the Canary Islands: the Dutch Open Telescope (DOT, La Palma) and the GREGOR telescope (Tenerife), with diameters of 7 and 9 meters, respectively. The domes are usually fully retracted during observations. The research consists of measurements on the two domes. New camera systems have been developed and placed inside the domes for precise dome-deformation measurements, to within 0.1 mm over the whole dome size. Simultaneously, a variety of wind-speed and wind-direction sensors measure the wind field around the dome. In addition, fast, sensitive air-pressure sensors placed on the supporting bows measure the wind pressure. The aim is to predict accurately the expected forces and deformations on up-scaled, fully retractable domes, to make their construction more economical. The dimensions of 7 and 9 meters are large enough for realistic on-site tests in gusty wind and will give much more information than wind-tunnel experiments.
Rapid and Accurate Identification of Candida albicans Isolates by Use of PNA FISHFlow▿
Trnovsky, Jan; Merz, William; Della-Latta, Phyllis; Wu, Fann; Arendrup, Maiken Cavling; Stender, Henrik
2008-01-01
We developed the simple, rapid (1 h), and accurate PNA FISHFlow method for the identification of Candida albicans. The method exploits unique in-solution in situ hybridization conditions under which the cells are simultaneously fixed and hybridized. This method facilitates the accurate identification of clinical yeast isolates using two scoring techniques: flow cytometry and fluorescence microscopy. PMID:18287325
1983-10-01
The basic equations for orthotropic circular cylindrical shells are deduced. Let a be the radius of the midsurface of the shell; x, y, z the axial, circumferential and radial coordinates; and let the dimensionless midsurface coordinates be taken along the lines of curvature. The components of strain at an arbitrary point of the shell are related to the midsurface displacements [8,15,16].
Simple and accurate theory for strong shock waves in a dense hard-sphere fluid.
Montanero, J M; López de Haro, M; Santos, A; Garzó, V
1999-12-01
Following an earlier work by Holian et al. [Phys. Rev. E 47, R24 (1993)] for a dilute gas, we present a theory for strong shock waves in a hard-sphere fluid described by the Enskog equation. The idea is to use the Navier-Stokes hydrodynamic equations but taking the temperature in the direction of shock propagation rather than the actual temperature in the computation of the transport coefficients. In general, for finite densities, this theory agrees much better with Monte Carlo simulations than the Navier-Stokes and (linear) Burnett theories, in contrast to the well-known superiority of the Burnett theory for dilute gases.
A simple, accurate, field-portable mixing ratio generator and Rayleigh distillation device
Technology Transfer Automated Retrieval System (TEKTRAN)
Routine field calibration of water vapor analyzers has always been a challenging problem for those making long-term flux measurements at remote sites. Automated sampling of standard gases from compressed tanks, the method of choice for CO2 calibration, cannot be used for H2O. Calibrations are typica...
Simple and accurate empirical absolute volume calibration of a multi-sensor fringe projection system
NASA Astrophysics Data System (ADS)
Gdeisat, Munther; Qudeisat, Mohammad; AlSa`d, Mohammed; Burton, David; Lilley, Francis; Ammous, Marwan M. M.
2016-05-01
This paper suggests a novel absolute empirical calibration method for a multi-sensor fringe projection system. The optical setup of the projector-camera sensor can be arbitrary. The term absolute calibration here means that the centre of the three-dimensional coordinates in the resultant calibrated volume coincides with a preset centre of the three-dimensional real-world coordinate system. The use of a zero-phase fringe marking spot is proposed to increase depth calibration accuracy, where the spot centre is determined with sub-pixel accuracy. Also, a new method is proposed for transversal calibration. The depth and transversal calibration methods have been tested using both single-sensor and three-sensor fringe projection systems. The standard deviation of the error produced by this system is 0.25 mm. The calibrated volume produced by this method is 400 mm × 400 mm × 140 mm.
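The sub-pixel localization of the marking-spot centre can be illustrated with an intensity-weighted centroid; the abstract does not specify which estimator is used, so this sketch is an assumption:

```python
import numpy as np

def subpixel_centroid(image):
    """Intensity-weighted centroid of a bright spot, in (row, col) pixels."""
    img = np.asarray(image, dtype=float)
    total = img.sum()
    rows, cols = np.indices(img.shape)
    return (rows * img).sum() / total, (cols * img).sum() / total

# A 5x5 spot whose true centre lies between pixel centres:
spot = np.zeros((5, 5))
spot[2, 2] = 4.0
spot[2, 3] = 4.0   # equal weights -> centroid at column 2.5
r, c = subpixel_centroid(spot)
```

Because the centroid averages over many pixels, its resolution is finer than the pixel grid, which is what makes sub-pixel spot location possible at all.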
A simple capacitive method to evaluate ethanol fuel samples
NASA Astrophysics Data System (ADS)
Vello, Tatiana P.; de Oliveira, Rafael F.; Silva, Gustavo O.; de Camargo, Davi H. S.; Bufon, Carlos C. B.
2017-02-01
Ethanol is a biofuel used worldwide. However, the presence of excessive water, introduced either during the distillation process or by fraudulent adulteration, is a major concern in the use of ethanol fuel. High water levels may cause engine malfunction, in addition to being considered illegal. Here, we describe the development of a simple, fast and accurate platform based on nanostructured sensors to evaluate ethanol samples. The device fabrication is facile, based on standard microfabrication and thin-film deposition methods. The sensor operation relies on capacitance measurements employing a parallel plate capacitor containing a conformal aluminum oxide (Al2O3) thin layer (15 nm). The sensor operates over the full range of water concentrations, i.e., from approximately 0% to 100% vol. of water in ethanol, with water traces being detectable down to 0.5% vol. These characteristics make the proposed device unique with respect to other platforms. Finally, the good agreement between the sensor response and analyses performed by gas chromatography of ethanol biofuel endorses the accuracy of the proposed method. Due to the full operation range, the reported sensor has the technological potential for use as a point-of-care analytical tool at gas stations or in the chemical, pharmaceutical, and beverage industries, to mention a few.
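The sensing principle rests on the dielectric contrast between water (relative permittivity near 80) and ethanol (near 25): more water raises the capacitance of a parallel-plate structure. A minimal sketch using a linear volume-fraction mixing rule; the mixing rule, plate area, and gap are illustrative assumptions, not the paper's device model:

```python
EPS0 = 8.854e-12                      # vacuum permittivity, F/m
EPS_ETHANOL, EPS_WATER = 25.0, 80.0   # approximate relative permittivities

def capacitance(water_frac, area=1e-4, gap=1e-6):
    """Parallel-plate capacitance with a linear volume-fraction mixing rule.

    water_frac: volume fraction of water in the ethanol-water dielectric.
    area, gap: placeholder geometry (m^2, m), not the paper's values.
    """
    eps_mix = (1 - water_frac) * EPS_ETHANOL + water_frac * EPS_WATER
    return EPS0 * eps_mix * area / gap

c_dry = capacitance(0.0)   # pure ethanol
c_wet = capacitance(1.0)   # pure water
```

With this rule the capacitance ratio between the two extremes is simply 80/25, which illustrates why the full 0-100% vol. range is resolvable in principle.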
Eisenhardt, K M; Sull, D N
2001-01-01
The success of Yahoo!, eBay, Enron, and other companies that have become adept at morphing to meet the demands of changing markets can't be explained using traditional thinking about competitive strategy. These companies have succeeded by pursuing constantly evolving strategies in market spaces that were considered unattractive according to traditional measures. In this article--the third in an HBR series by Kathleen Eisenhardt and Donald Sull on strategy in the new economy--the authors ask, what are the sources of competitive advantage in high-velocity markets? The secret, they say, is strategy as simple rules. The companies know that the greatest opportunities for competitive advantage lie in market confusion, but they recognize the need for a few crucial strategic processes and a few simple rules. In traditional strategy, advantage comes from exploiting resources or stable market positions. In strategy as simple rules, advantage comes from successfully seizing fleeting opportunities. Key strategic processes, such as product innovation, partnering, or spinout creation, place the company where the flow of opportunities is greatest. Simple rules then provide the guidelines within which managers can pursue such opportunities. Simple rules, which grow out of experience, fall into five broad categories: how-to rules, boundary conditions, priority rules, timing rules, and exit rules. Companies with simple-rules strategies must follow the rules religiously and avoid the temptation to change them too frequently. A consistent strategy helps managers sort through opportunities and gain short-term advantage by exploiting the attractive ones. In stable markets, managers rely on complicated strategies built on detailed predictions of the future. But when business is complicated, strategy should be simple.
38 CFR 4.46 - Accurate measurement.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...
38 CFR 4.46 - Accurate measurement.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2012-07-01 2012-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...
38 CFR 4.46 - Accurate measurement.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...
38 CFR 4.46 - Accurate measurement.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2014-07-01 2014-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...
38 CFR 4.46 - Accurate measurement.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2011-07-01 2011-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...
Accurately measuring dynamic coefficient of friction in ultraform finishing
NASA Astrophysics Data System (ADS)
Briggs, Dennis; Echaves, Samantha; Pidgeon, Brendan; Travis, Nathan; Ellis, Jonathan D.
2013-09-01
UltraForm Finishing (UFF) is a deterministic sub-aperture computer numerically controlled grinding and polishing platform designed by OptiPro Systems. UFF is used to grind and polish a variety of optics from simple spherical to fully freeform, and numerous materials from glasses to optical ceramics. The UFF system consists of an abrasive belt around a compliant wheel that rotates and contacts the part to remove material. This work aims to accurately measure the dynamic coefficient of friction (μ), how it changes as a function of belt wear, and how this ultimately affects material removal rates. The coefficient of friction has been examined in terms of contact mechanics and Preston's equation to determine accurate material removal rates. By accurately predicting changes in μ, polishing iterations can be more accurately predicted, reducing the total number of iterations required to meet specifications. We have established an experimental apparatus that can accurately measure μ by measuring triaxial forces during translating loading conditions or while manufacturing the removal spots used to calculate material removal rates. Using this system, we will demonstrate μ measurements for UFF belts during different states of their lifecycle and assess the material removal function from spot diagrams as a function of wear. Ultimately, we will use this system for qualifying belt-wheel-material combinations to develop a spot-morphing model to better predict instantaneous material removal functions.
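The quantity being tracked reduces to the ratio of tangential to normal force measured by the triaxial load cell; a minimal sketch of computing μ from a single force sample, assuming steady sliding (this is an illustration of the definition, not OptiPro's analysis code):

```python
import math

def dynamic_mu(fx, fy, fz):
    """Dynamic coefficient of friction from one triaxial force sample.

    fz is taken as the normal load; fx and fy are the in-plane
    (tangential) components. Axis assignment is an assumption.
    """
    return math.hypot(fx, fy) / abs(fz)

mu = dynamic_mu(3.0, 4.0, 10.0)  # tangential 5 N over normal 10 N -> 0.5
```

In practice μ would be averaged over many samples during a translating-load pass, and tracked against belt age to feed the removal-rate model via Preston's equation.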
Fast foods are quick, reasonably priced, and readily available alternatives to home cooking. While convenient and economical for a busy lifestyle, fast foods are typically high in calories, fat, saturated ...
Extracting Time-Accurate Acceleration Vectors From Nontrivial Accelerometer Arrangements.
Franck, Jennifer A; Blume, Janet; Crisco, Joseph J; Franck, Christian
2015-09-01
Sports-related concussions are of significant concern in many impact sports, and their detection relies on accurate measurement of head kinematics during impact. Among the most prevalent recording technologies are videography and, more recently, single-axis accelerometers mounted in a helmet, such as the HIT system. Successful extraction of the linear and angular impact accelerations depends on an accurate analysis methodology governed by the equations of motion. Current algorithms are able to estimate the magnitude of acceleration and hit location, but they make assumptions about the hit orientation and are often limited in the position and/or orientation of the accelerometers. The newly formulated algorithm presented in this manuscript accurately extracts the full linear and rotational acceleration vectors from a broad arrangement of six single-axis accelerometers, directly from the governing set of kinematic equations. The new formulation linearizes the nonlinear centripetal acceleration term with a finite-difference approximation and provides a fast and accurate solution for all six components of acceleration over long time periods (>250 ms). The approximation of the nonlinear centripetal acceleration term provides an accurate computation of the rotational velocity as a function of time and allows for reconstruction of a multiple-impact signal. Furthermore, the algorithm determines the impact location and orientation and can distinguish between glancing impacts, high-rotational-velocity impacts, and direct impacts through the center of mass. Results are shown for ten simulated impact locations on a headform geometry, computed with three different accelerometer configurations in varying degrees of signal noise. Since the algorithm does not require simplifications of the actual impacted geometry, the impact vector, or a specific arrangement of accelerometer orientations, it can be easily applied to many impact investigations in which accurate kinematics need to
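Once the centripetal term is evaluated with a previously estimated angular velocity, each sensor reading is linear in the unknown linear and angular accelerations, so six sensors give a solvable linear system. A simplified sketch of that core step; the sensor placement and function names are illustrative, not the paper's exact formulation:

```python
import numpy as np

def solve_accels(readings, positions, directions, omega):
    """Recover linear acceleration a and angular acceleration alpha.

    Each single-axis accelerometer at position r with unit direction n reads
        s = n . (a + alpha x r + omega x (omega x r)).
    With omega known (e.g. from a finite-difference estimate, as in the
    abstract), the centripetal term is moved to the right-hand side and the
    system is linear in the six unknowns (a, alpha).
    """
    A = np.zeros((len(readings), 6))
    b = np.array(readings, dtype=float)
    for i, (r, n) in enumerate(zip(positions, directions)):
        A[i, :3] = n                   # coefficient of a
        A[i, 3:] = np.cross(r, n)      # n.(alpha x r) = alpha.(r x n)
        b[i] -= n @ np.cross(omega, np.cross(omega, r))  # known centripetal term
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]

# Synthetic check: generate readings from a known state and recover it.
ex, ey, ez = np.eye(3)
positions = [0 * ex, 0 * ex, 0 * ex, ey, ez, ex]   # illustrative layout
directions = [ex, ey, ez, ez, ex, ey]
a_true = np.array([1.0, 2.0, 3.0])
alpha_true = np.array([0.1, -0.2, 0.3])
omega = np.array([0.5, 0.4, -0.3])
readings = [n @ (a_true + np.cross(alpha_true, r)
                 + np.cross(omega, np.cross(omega, r)))
            for r, n in zip(positions, directions)]
a_est, alpha_est = solve_accels(readings, positions, directions, omega)
```

The least-squares form also accommodates more than six sensors, in which case redundancy averages down measurement noise.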
NASA Astrophysics Data System (ADS)
He, Wantao; Li, Zhongwei; Zhong, Kai; Shi, Yusheng; Zhao, Can; Cheng, Xu
2014-11-01
Fast and precise 3D inspection systems are in great demand in modern manufacturing processes. At present, the available sensors have their own pros and cons, and an omnipotent sensor able to handle complex inspection tasks accurately and effectively hardly exists. The prevailing solution is to integrate multiple sensors and take advantage of their strengths. For obtaining a holistic 3D profile, the data from the different sensors should be registered into a coherent coordinate system. However, some complex-shaped objects, such as blades, have thin-wall features, for which the ICP registration method becomes unstable. Therefore, it is very important to calibrate the extrinsic parameters of each sensor in the integrated measurement system. This paper proposes an accurate and automatic extrinsic-parameter calibration method for a blade measurement system integrating different optical sensors. In this system, a fringe projection sensor (FPS) and a conoscopic holography sensor (CHS) are integrated into a multi-axis motion platform, and the sensors can be optimally moved to any desired position at the object's surface. In order to simplify the calibration process, a special calibration artifact is designed according to the characteristics of the two sensors. An automatic registration procedure based on correlation and segmentation is used to roughly align the artifact datasets obtained by the FPS and CHS without any manual operation or data pre-processing, and then the generalized Gauss-Markov model is used to estimate the optimal transformation parameters. The experiments show the measurement result of a blade, where several sampled patches are merged into one point cloud, and this verifies the performance of the proposed method.
Garber, Andrea K; Lustig, Robert H
2011-09-01
Studies of food addiction have focused on highly palatable foods. While fast food falls squarely into that category, it has several other attributes that may increase its salience. This review examines whether the nutrients present in fast food, the characteristics of fast food consumers or the presentation and packaging of fast food may encourage substance dependence, as defined by the American Psychiatric Association. The majority of fast food meals are accompanied by a soda, which increases the sugar content 10-fold. Sugar addiction, including tolerance and withdrawal, has been demonstrated in rodents but not humans. Caffeine is a "model" substance of dependence; coffee drinks are driving the recent increase in fast food sales. Limited evidence suggests that the high fat and salt content of fast food may increase addictive potential. Fast food restaurants cluster in poorer neighborhoods and obese adults eat more fast food than those who are normal weight. Obesity is characterized by resistance to insulin, leptin and other hormonal signals that would normally control appetite and limit reward. Neuroimaging studies in obese subjects provide evidence of altered reward and tolerance. Once obese, many individuals meet criteria for psychological dependence. Stress and dieting may sensitize an individual to reward. Finally, fast food advertisements, restaurants and menus all provide environmental cues that may trigger addictive overeating. While the concept of fast food addiction remains to be proven, these findings support the role of fast food as a potentially addictive substance that is most likely to create dependence in vulnerable populations.
A simple digital delay for nuclear physics experiments
NASA Astrophysics Data System (ADS)
Marques, J. G.; Cruz, C.
2014-05-01
A simple high precision digital delay for nuclear physics experiments was developed using fast ECL electronics. The circuit uses an oscillator synchronized with the signal to be delayed and a presettable counter. It is capable of delaying a negative NIM signal by 2 μs with a precision better than 50 ps. The circuit was developed for use in slow-fast coincidence units for Perturbed Angular Correlation spectrometers but it is not limited to this application.
ERIC Educational Resources Information Center
Temiz, Burak Kagan; Yavuz, Ahmet
2015-01-01
This study was done to develop a simple and inexpensive wave driver that can be used in experiments on string waves. The wave driver was made using a battery-operated toy car, and the apparatus can be used to produce string waves at a fixed frequency. The working principle of the apparatus is as follows: shortly after the car is turned on, the…
Simple Magnetometer for Autopilots
NASA Technical Reports Server (NTRS)
Garner, H. D.
1982-01-01
Simple, low-cost magnetometer is suitable for heading-reference applications in autopilots and other directional control systems. Sensing element utilizes commercially available transformer core; and supporting electronics consist of one transistor, two readily-available integrated-circuit chips, and associated resistors and capacitors.
NASA Technical Reports Server (NTRS)
Dix, M. G.; Harrison, D. R.; Edwards, T. M.
1982-01-01
Bubble vial with external aluminum-foil electrodes is sensing element for simple indicating tiltmeter. To measure bubble displacement, bridge circuit detects difference in capacitance between two sensing electrodes and reference electrode. Tiltmeter was developed for experiment on forecasting seismic events by changes in Earth's magnetic field.
ERIC Educational Resources Information Center
Norbury, John W.
2006-01-01
A set of examples is provided that illustrate the use of work as applied to simple machines. The ramp, pulley, lever and hydraulic press are common experiences in the life of a student, and their theoretical analysis therefore makes the abstract concept of work more real. The mechanical advantage of each of these systems is also discussed so that…
Entropy Is Simple, Qualitatively.
ERIC Educational Resources Information Center
Lambert, Frank L.
2002-01-01
Suggests that qualitatively, entropy is simple. Entropy increase from a macro viewpoint is a measure of the dispersal of energy from localized to spread out at a temperature T. Fundamentally based on statistical and quantum mechanics, this approach is superior to the non-fundamental "disorder" as a descriptor of entropy change. (MM)
ERIC Educational Resources Information Center
Shallcross, Dudley E.; Harrison, Tim G.
2007-01-01
The newly revised specifications for GCSE science involve greater consideration of climate change. This topic appears in either the chemistry or biology section, depending on the examination board, and is a good example of "How Science Works." It is therefore timely that students are given an opportunity to conduct some simple climate modelling.…
ERIC Educational Resources Information Center
Cole, K.C.
1982-01-01
Discusses San Francisco's Exploratorium, a science teaching center with 500 exhibits focusing on human perception, but extending to everything from the mechanics of voice to the art of illusion, from holograms to harmonics. The Exploratorium emphasizes "simple science" (refractions/resonances, sounds/shadows) to tune in the senses and turn on the…
2013-05-01
Simple Lookup Service (sLS) is a REST/JSON-based lookup service that allows users to publish information in the form of key-value pairs and to search for the published information. The lookup service supports both pull and push models. This software can be used to create a distributed architecture/cloud.
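The publish/search semantics can be mimicked in a few lines. Below is a hedged in-memory analogue; the real sLS speaks REST/JSON over HTTP, and the class and method names here are assumptions for illustration only:

```python
class SimpleLookup:
    """In-memory analogue of a key-value lookup service (illustrative only)."""

    def __init__(self):
        self.records = []

    def publish(self, record):
        """Store a record, i.e. a flat dict of key-value pairs."""
        self.records.append(dict(record))

    def search(self, **query):
        """Return records matching every key-value pair in the query."""
        return [r for r in self.records
                if all(r.get(k) == v for k, v in query.items())]

ls = SimpleLookup()
ls.publish({"type": "service", "name": "perfsonar-a", "site": "LBL"})
ls.publish({"type": "host", "name": "node1", "site": "LBL"})
hits = ls.search(type="service", site="LBL")
```

In the distributed setting described in the record, many such stores would be federated, with clients either pulling query results or being pushed updates.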
ERIC Educational Resources Information Center
Eggen, Per-Odd
2009-01-01
This article describes the construction of an inexpensive, robust, and simple hydrogen electrode, as well as the use of this electrode to measure "standard" potentials. In the experiment described here the students can measure the reduction potentials of metal-metal ion pairs directly, without using a secondary reference electrode. Measurements…
Mcfast, a Parameterized Fast Monte Carlo for Detector Studies
NASA Astrophysics Data System (ADS)
Boehnlein, Amber S.
McFast is a modularized and parameterized fast Monte Carlo program designed to generate physics analysis information for different detector configurations and subdetector designs. McFast is based on simple geometrical object definitions and includes hit generation, parameterized track generation, vertexing, a muon system, electromagnetic calorimetry, and a trigger framework for physics studies. Auxiliary tools include a geometry editor, visualization, and an I/O system.
NASA Astrophysics Data System (ADS)
Wang, Wan-ting; Guo, Jin; Fang, Chu; Jiang, Zhen-hua; Wang, Ting-feng
2016-11-01
To solve the rate-dependent hysteresis compensation problem in fast steering mirror (FSM) systems, an improved Prandtl-Ishlinskii (P-I) model is proposed in this paper. The proposed model is formulated by employing a linear density function in the stop operator. In this way, the proposed model has a relatively simple mathematical form and can be applied to compensate the rate-dependent hysteresis directly. An adaptive differential evolution algorithm is utilized to obtain accurate parameters for the proposed model. A fast steering mirror control system is established to demonstrate the validity and feasibility of the improved P-I model. Comparative experiments with different input signals are performed and analyzed, and the results show that the proposed model not only suppresses the rate-dependent hysteresis effectively, but also achieves high tracking precision.
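A stop-operator-based P-I model can be sketched as a weighted sum of clamped "elastic-perfectly-plastic" operators. The fixed weights below stand in for the paper's linear density function, whose actual values would come from the differential-evolution fit; all numbers are illustrative:

```python
def stop_operator(signal, r):
    """Discrete stop hysteresis operator with threshold r."""
    e = max(-r, min(r, signal[0]))   # standard initialisation
    out = [e]
    prev = signal[0]
    for v in signal[1:]:
        # increment follows the input, clamped to the band [-r, r]
        e = max(-r, min(r, e + v - prev))
        out.append(e)
        prev = v
    return out

def pi_model(signal, thresholds, weights):
    """Prandtl-Ishlinskii output: weighted sum of stop operators."""
    ops = [stop_operator(signal, r) for r in thresholds]
    return [sum(w * op[k] for w, op in zip(weights, ops))
            for k in range(len(signal))]

v = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5]                     # sample input ramp
y = pi_model(v, thresholds=[0.2, 0.6], weights=[1.0, 0.5])
```

Because each stop operator saturates at its own threshold, the superposition reproduces a hysteresis loop; rate dependence enters once the density (here, the weights) is made a function of the input rate, as the abstract describes.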
Du, Pei; Zhuang, Lifang; Wang, Yanzhi; Yuan, Li; Wang, Qing; Wang, Danrui; Dawadondup; Tan, Lijun; Shen, Jian; Xu, Haibin; Zhao, Han; Chu, Chenggen; Qi, Zengjun
2017-02-01
Compared with conventional FISH probe preparation in terms of time and cost, synthesized oligonucleotide (oligo hereafter) probes for FISH have many advantages, such as ease of design, synthesis, and labeling. The low cost and the high sensitivity and resolution of oligo probes greatly simplify the FISH procedure, making it a simple, fast, and efficient method of chromosome identification. In this study, we developed new oligo and oligo-multiplex probes to accurately and efficiently distinguish wheat (Triticum aestivum, 2n = 6x, AABBDD) and Thinopyrum bessarabicum (2n = 2x = 14, JJ) chromosomes. The oligo probes contained more nucleotides or more repeat units, which produced stronger signals for more efficient chromosome painting. Four Th. bessarabicum-specific oligo probes were developed based on genomic DNA sequences of Th. bessarabicum chromosome arm 4JL, and one of them (oligo DP4J27982) was pooled with oligo multiplex #1 to simultaneously detect wheat and Th. bessarabicum chromosomes for quick and accurate identification of Chinese Spring (CS) - Th. bessarabicum alien chromosome introgression lines. Oligo multiplex #4 revealed chromosome variations among CS and eight wheat cultivars in a single round of FISH analysis. This research demonstrates the high efficiency of using oligos and oligo multiplexes in chromosome identification and manipulation.
Diagnostics for Fast Ignition Science
MacPhee, A; Akli, K; Beg, F; Chen, C; Chen, H; Clarke, R; Hey, D; Freeman, R; Kemp, A; Key, M; King, J; LePape, S; Link, A; Ma, T; Nakamura, N; Offermann, D; Ovchinnikov, V; Patel, P; Phillips, T; Stephens, R; Town, R; Wei, M; VanWoerkom, L; Mackinnon, A
2008-05-06
The concept for Electron Fast Ignition Inertial Confinement Fusion demands that sufficient laser energy be transferred from the ignitor pulse to the assembled fuel core via ~MeV electrons. We have assembled a suite of diagnostics to characterize such transfer. Recent experiments have simultaneously fielded absolutely calibrated extreme-ultraviolet multilayer imagers at 68 and 256 eV; spherically bent crystal imagers at 4 and 8 keV; multi-keV crystal spectrometers; MeV x-ray bremsstrahlung, electron, and proton spectrometers (along the same line of sight); nuclear activation samples; and a picosecond optical-probe-based interferometer. These diagnostics allow careful measurement of energy transport and deposition during and following laser-plasma interactions at extremely high intensities in both planar and conical targets. Augmented with accurate on-shot laser focal-spot and pre-pulse characterization, these measurements are yielding new insight into energy coupling and are providing critical data for validating numerical PIC and hybrid-PIC simulation codes in an area that is crucial for many applications, particularly fast ignition. Novel aspects of these diagnostics, and how they are combined to extract quantitative data on ultra-high-intensity laser-plasma interactions, are discussed, together with implications for full-scale fast ignition experiments.
Fast Offset Laser Phase-Locking System
NASA Technical Reports Server (NTRS)
Shaddock, Daniel; Ware, Brent
2008-01-01
Figure 1 shows a simplified block diagram of an improved optoelectronic system for locking the phase of one laser to that of another laser with an adjustable offset frequency specified by the user. In comparison with prior systems, this system exhibits higher performance (including higher stability) and is much easier to use. The system is based on a field-programmable gate array (FPGA) and operates almost entirely digitally; hence, it is easily adaptable to many different systems. The system achieves phase stability of less than a microcycle. It was developed to satisfy the phase-stability requirement for a planned spaceborne gravitational-wave-detecting heterodyne laser interferometer (LISA). The system has potential terrestrial utility in communications, lidar, and other applications. The present system includes a fast phasemeter that is a companion to the microcycle-accurate one described in High-Accuracy, High-Dynamic-Range Phase-Measurement System (NPO-41927), NASA Tech Briefs, Vol. 31, No. 6 (June 2007), page 22. In the present system (as in the previously reported one), beams from the two lasers (here denoted the master and slave lasers) interfere on a photodiode. The heterodyne photodiode output is digitized and fed to the fast phasemeter, which produces suitably conditioned, low-latency analog control signals which lock the phase of the slave laser to that of the master laser. These control signals are used to drive a thermal and a piezoelectric transducer that adjust the frequency and phase of the slave-laser output. The output of the photodiode is a heterodyne signal at the difference between the frequencies of the two lasers. (The difference is currently required to be less than 20 MHz due to the Nyquist limit of the current sampling rate. We foresee few problems in doubling this limit using current equipment.) Within the phasemeter, the photodiode-output signal is digitized to 15 bits at a sampling frequency of 40 MHz by use of the same analog
Integrative Physiology of Fasting.
Secor, Stephen M; Carey, Hannah V
2016-03-15
Extended bouts of fasting are ingrained in the ecology of many organisms, characterizing aspects of reproduction, development, hibernation, estivation, migration, and infrequent feeding habits. The challenge of long fasting episodes is the need to maintain physiological homeostasis while relying solely on endogenous resources. To meet that challenge, animals utilize an integrated repertoire of behavioral, physiological, and biochemical responses that reduce metabolic rates, maintain tissue structure and function, and thus enhance survival. We have synthesized in this review the integrative physiological, morphological, and biochemical responses, and their stages, that characterize natural fasting bouts. Underlying the capacity to survive extended fasts are behaviors and mechanisms that reduce metabolic expenditure and shift the dependency to lipid utilization. Hormonal regulation and immune capacity are altered by fasting; hormones that trigger digestion, elevate metabolism, and support immune performance become depressed, whereas hormones that enhance the utilization of endogenous substrates are elevated. The negative energy budget that accompanies fasting leads to the loss of body mass as fat stores are depleted and tissues undergo atrophy (i.e., loss of mass). Absolute rates of body mass loss scale allometrically among vertebrates. Tissues and organs vary in the degree of atrophy and downregulation of function, depending on the degree to which they are used during the fast. Fasting affects the population dynamics and activities of the gut microbiota, an interplay that impacts the host's fasting biology. Fasting-induced gene expression programs underlie the broad spectrum of integrated physiological mechanisms responsible for an animal's ability to survive long episodes of natural fasting.
NASA Technical Reports Server (NTRS)
Rhodes, David B.; Franke, John M.; Jones, Stephen B.; Leighty, Bradley D.
1992-01-01
Simple light-meter circuit used to position knife edge of schlieren optical system to block exactly half light. Enables operator to check quickly position of knife edge between tunnel runs to ascertain whether or not in alignment. Permanent measuring system made part of each schlieren system. If placed in unused area of image plane, or in monitoring beam from mirror knife edge, provides real-time assessment of alignment of schlieren system.
Simple Finite Jordan Pseudoalgebras
NASA Astrophysics Data System (ADS)
Kolesnikov, Pavel
2009-01-01
We consider the structure of Jordan H-pseudoalgebras which are linearly finitely generated over a Hopf algebra H. There are two cases under consideration: H = U(h) and H = U(h) # C[Γ], where h is a finite-dimensional Lie algebra over C, Γ is an arbitrary group acting on U(h) by automorphisms. We construct an analogue of the Tits-Kantor-Koecher construction for finite Jordan pseudoalgebras and describe all simple ones.
NASA Astrophysics Data System (ADS)
Kulpa, Krzysztof; Misiurewicz, Jacek; Baranowski, Piotr; Wojdołowicz, Grzegorz
2008-01-01
In this paper we present a simple SAR radar demonstrator built using commercially available (COTS) components. For the microwave analog front end, a standard police-radar microwave head has been used. A Motorola DSP processor board, equipped with an ADC and a DAC, has been used for generating the modulating signal and for signal acquisition. The raw radar signal (I and Q components) has been recorded on a 2.5" HDD. The signal processing has been performed on a standard PC after copying the recorded data. The aim of constructing a simple and relatively cheap demonstrator was to provide students with real-life unclassified radar signals and to motivate them to test and develop various kinds of SAR and ISAR algorithms, including image formation, motion compensation, and autofocusing. The simple microwave front-end hardware has a lot of non-idealities, so to obtain a good SAR image it was necessary to develop a number of correction algorithms for the calibration stage. The SAR demonstrator has been tested using a car as a moving platform. Flight tests with a small airborne platform are planned for the summer.
Fast and Reliable Quantitative Peptidomics with labelpepmatch.
Verdonck, Rik; De Haes, Wouter; Cardoen, Dries; Menschaert, Gerben; Huhn, Thomas; Landuyt, Bart; Baggerman, Geert; Boonen, Kurt; Wenseleers, Tom; Schoofs, Liliane
2016-03-04
The use of stable isotope tags in quantitative peptidomics offers many advantages, but the laborious identification of matching sets of labeled peptide peaks is still a major bottleneck. Here we present labelpepmatch, an R-package for fast and straightforward analysis of LC-MS spectra of labeled peptides. This open-source tool offers fast and accurate identification of peak pairs alongside an appropriate framework for statistical inference on quantitative peptidomics data, based on techniques from other -omics disciplines. A relevant case study on the desert locust Schistocerca gregaria proves our pipeline to be a reliable tool for quick but thorough explorative analyses.
Light Field Imaging Based Accurate Image Specular Highlight Removal
Wang, Haoqian; Xu, Chenxue; Wang, Xingzheng; Zhang, Yongbing; Peng, Bo
2016-01-01
Specular reflection removal is indispensable to many computer vision tasks. However, most existing methods fail or degrade in complex real scenarios because of their individual drawbacks. Benefiting from light field imaging technology, this paper proposes a novel and accurate approach to remove specularity and improve image quality. We first capture images with specularity by the light field camera (Lytro ILLUM). After accurately estimating the image depth, a simple and concise threshold strategy is adopted to cluster the specular pixels into “unsaturated” and “saturated” categories. Finally, a color variance analysis of multiple views and a local color refinement are individually conducted on the two categories to recover diffuse color information. Experimental evaluation by comparison with existing methods, based on our light field dataset together with the Stanford light field archive, verifies the effectiveness of the proposed algorithm. PMID:27253083
Data assimilation on the exponentially accurate slow manifold.
Cotter, Colin
2013-05-28
I describe an approach to data assimilation making use of an explicit map that defines a coordinate system on the slow manifold in the semi-geostrophic scaling in Lagrangian coordinates, and apply the approach to a simple toy system that has previously been proposed as a low-dimensional model for the semi-geostrophic scaling. The method can be extended to Lagrangian particle methods such as Hamiltonian particle-mesh and smooth-particle hydrodynamics applied to the rotating shallow-water equations, and many of the properties will remain for more general Eulerian methods. Making use of Hamiltonian normal-form theory, it has previously been shown that, if initial conditions for the system are chosen as image points of the map, then the fast components of the system have exponentially small magnitude for exponentially long times as ε→0, and this property is preserved if one uses a symplectic integrator for the numerical time stepping. The map may then be used to parametrize initial conditions near the slow manifold, allowing data assimilation to be performed without introducing any fast degrees of motion (more generally, the precise amount of fast motion can be selected).
Beam Profile Monitor With Accurate Horizontal And Vertical Beam Profiles
Havener, Charles C [Knoxville, TN; Al-Rejoub, Riad [Oak Ridge, TN
2005-12-26
A widely used scanner device that rotates a single helically shaped wire probe in and out of a particle beam at different beamline positions to give a pair of mutually perpendicular beam profiles is modified by the addition of a second wire probe. As a result, a pair of mutually perpendicular beam profiles is obtained at a first beamline position, and a second pair of mutually perpendicular beam profiles is obtained at a second beamline position. The simple modification not only provides more accurate beam profiles, but also provides a measurement of the beam divergence and quality in a single compact device.
Accurate Energy Transaction Allocation using Path Integration and Interpolation
NASA Astrophysics Data System (ADS)
Bhide, Mandar Mohan
This thesis investigates many of the popular cost allocation methods that are based on actual usage of the transmission network. The Energy Transaction Allocation (ETA) method originally proposed by A. Fradi, S. Brigonne, and B. Wollenberg, which offers the unique advantage of accurately allocating transmission network usage, is discussed subsequently. A modified calculation of ETA based on a simple interpolation technique is then proposed. The proposed methodology not only increases the accuracy of the calculation but also reduces the number of calculations to less than half of that required by the original ETA.
3D-spectral CDIs: a fast alternative to 3D inversion?
NASA Astrophysics Data System (ADS)
Macnae, James
2015-09-01
Virtually all airborne electromagnetic (AEM) data is interpreted using stitched 1D conductivity sections, derived from constrained inversion or from fast but fairly accurate approximations. A small subset of this AEM data has recently been inverted using either block 3D models or thin plates; these processes have limitations in terms of cost and accuracy, and the results are in general strongly biased by the choice of starting models. Recent developments in spectral modelling have allowed fast 3D approximations of the EM response of both vortex induction and current gathering for simple geological target geometries. Fitting these spectral responses to AEM data should be sufficient to accurately locate current systems within the ground, and the behaviour of these local current systems can in theory approximately define a conductivity structure in 3D. This paper describes the results of initial testing of the algorithm in fitting vortex induction in a small target at the Forrestania test range, Western Australia, using results from a versatile time-domain electromagnetic (VTEM)-Max survey.
Evaluation of eddy-current probe signals due to cracks in ferromagnetic parts of fast reactor
NASA Astrophysics Data System (ADS)
Wu, Tao; Bowler, John R.
2017-02-01
Eddy current testing to evaluate the condition of metallic parts in a sodium cooled fast reactor under standby conditions is challenging due to the presence of liquid sodium at 250 °C. The eddy current test system should be sensitive enough to capture small signal changes, and hence an advanced inspection system is needed. We have developed new hardware and improved numerical models to predict the eddy current probe signal due to cracks in metallic fast reactor parts using the volume integral equation method. Analytical expressions are derived for the quasi-static time-harmonic electromagnetic fields of a circular eddy current coil interacting with a conductive plate. Naturally, the method of moments is used to approximate the integral equation and obtain a discrete approximation of the field in the crack domain. A simple and accurate analytical method for evaluating the hyper-singular elements is also provided. A carefully controlled experiment is carried out on a ferromagnetic stainless steel plate with a precision-made notch to obtain reference impedance changes for comparison with the theoretical model predictions. Good agreement between predictions and experiment is obtained.
Simple Autonomous Chaotic Circuits
NASA Astrophysics Data System (ADS)
Piper, Jessica; Sprott, J.
2010-03-01
Over the last several decades, numerous electronic circuits exhibiting chaos have been proposed. Non-autonomous circuits with as few as two components have been developed. However, the operation of such circuits relies on the non-ideal behavior of the devices used, and therefore the circuit equations can be quite complex. In this paper, we present two simple autonomous chaotic circuits using only opamps and linear passive components. The circuits each use one opamp as a comparator, to provide a signum nonlinearity. The chaotic behavior is robust, and independent of nonlinearities in the passive components. Moreover, the circuit equations are among the algebraically simplest chaotic systems yet constructed.
Fast phase unwrapping algorithm based on region partition for structured light vision measurement
NASA Astrophysics Data System (ADS)
Lu, Jun; Su, Hang
2014-04-01
Phase unwrapping is a key problem in phase-shifting profilometry vision measurement for complex object surface shapes. The simple path-following phase unwrapping algorithm is fast but produces serious unwrapping errors for complex shapes. The Goldstein+flood phase unwrapping algorithm can handle some complex object measurements; however, it is time consuming. We propose a fast phase unwrapping algorithm based on region partition according to a quality map of the wrapped phase. In this algorithm, the wrapped phase image is divided into several regions using partition thresholds, which are determined from the histogram of quality values. Each region is unwrapped using a simple path-following algorithm, and several groups with different priorities are generated. These groups are merged in order of priority from high to low, and a final absolute phase is obtained. The proposed method is applied to wrapped phase images of three objects with and without noise. Experiments show that the proposed method is much faster, more accurate, and more robust to noise than the Goldstein+flood algorithm in unwrapping complex phase images.
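As an illustration of the baseline the authors build on, here is a minimal sketch of simple path-following phase unwrapping along a one-dimensional path. This is our simplification for exposition only: the function name `unwrap_1d` is hypothetical, and the paper operates on 2-D phase images guided by a quality map, not a 1-D sequence.

```python
import math

def unwrap_1d(phases):
    """Path-following unwrapping along a 1-D path: whenever the wrapped
    phase jumps by more than pi between neighbours, add or subtract the
    multiple of 2*pi that makes the step small. Fast, but fragile for
    complex shapes -- the weakness the region-partition method addresses."""
    out = [phases[0]]
    for p in phases[1:]:
        d = p - out[-1]
        # Wrap the increment into (-pi, pi] by removing whole turns.
        d -= 2 * math.pi * round(d / (2 * math.pi))
        out.append(out[-1] + d)
    return out
```

Because each true increment here is below pi, the unwrapped result reproduces a smooth ramp exactly; real measurements violate that assumption near noise and discontinuities, which is where path-following fails.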
Goree, J.; Ono, M.; Colestock, P.; Horton, R.; McNeill, D.; Park, H.
1985-07-01
Fast wave current drive is demonstrated in the Princeton ACT-I toroidal device. The fast Alfven wave, in the range of high ion-cyclotron harmonics, produced 40 A of current from 1 kW of rf power coupled into the plasma by a fast wave loop antenna. This wave excites a steady current by damping on the energetic tail of the electron distribution function in the same way as lower-hybrid current drive, except that fast wave current drive is appropriate for higher plasma densities.
Grey Ballard, Austin Benson
2014-11-26
This software provides implementations of fast matrix multiplication algorithms. These algorithms perform fewer floating point operations than the classical cubic algorithm. The software uses code generation to automatically implement the fast algorithms based on high-level descriptions. The code serves two general purposes. The first is to demonstrate that these fast algorithms can out-perform vendor matrix multiplication algorithms for modest problem sizes on a single machine. The second is to rapidly prototype many variations of fast matrix multiplication algorithms to encourage future research in this area. The implementations target sequential and shared memory parallel execution.
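The generated code itself is not reproduced here, but the family of fast algorithms the software implements can be illustrated with one level of Strassen's scheme, which forms the product of two square matrices from 7 block multiplications instead of the classical 8. This is a hedged sketch: the function name and the even-dimension assumption are ours, and the actual package auto-generates many such variants.

```python
import numpy as np

def strassen_2x2_blocks(A, B):
    """One level of Strassen's algorithm on a 2x2 block partition.
    Assumes A and B are square with even side length; a full fast
    implementation would recurse on the 7 sub-products."""
    h = A.shape[0] // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # The 7 Strassen products (vs. 8 for the classical block algorithm).
    M1 = (A11 + A22) @ (B11 + B22)
    M2 = (A21 + A22) @ B11
    M3 = A11 @ (B12 - B22)
    M4 = A22 @ (B21 - B11)
    M5 = (A11 + A12) @ B22
    M6 = (A21 - A11) @ (B11 + B12)
    M7 = (A12 - A22) @ (B21 + B22)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C
```

Applied recursively, the 7-multiplication step gives the sub-cubic operation count that lets such algorithms out-perform classical multiplication for modest problem sizes.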
Extremely simple holographic projection of color images
NASA Astrophysics Data System (ADS)
Makowski, Michal; Ducin, Izabela; Kakarenko, Karol; Suszek, Jaroslaw; Kolodziejczyk, Andrzej; Sypek, Maciej
2012-03-01
A very simple scheme of holographic projection is presented, with experimental results showing good-quality image projection without any imaging lens. This technique can be regarded as an alternative to classic projection methods. It is based on the reconstruction of real images from three phase-iterated Fourier holograms. The illumination is performed with three laser beams of primary colors. A divergent wavefront geometry is used to achieve an increased throw angle of the projection, compared to plane wave illumination. Light fibers are used as light guides in order to keep the setup as simple as possible and to provide point-like sources of high-quality divergent wavefronts at an optimized position against the light modulator. Absorbing spectral filters are implemented to multiplex three holograms on a single phase-only spatial light modulator. Hence color mixing occurs without any time-division methods, which cause rainbow effects and color flicker. The zero diffractive order with divergent illumination is practically invisible, and the speckle field is effectively suppressed with phase optimization and time-averaging techniques. The main advantages of the proposed concept are: a very simple and highly miniaturizable configuration; the absence of any lens; a single LCoS (Liquid Crystal on Silicon) modulator; strong resistance to imperfections and obstructions of the spatial light modulator such as dead pixels, dust, mud, fingerprints, etc.; and simple calculations based on the Fast Fourier Transform (FFT), easily processed in real time on a GPU (Graphics Processing Unit).
Schilstra, Maria J; Martin, Stephen R
2009-01-01
Stochastic simulations may be used to describe changes with time of a reaction system in a way that explicitly accounts for the fact that molecules show a significant degree of randomness in their dynamic behavior. The stochastic approach is almost invariably used when small numbers of molecules or molecular assemblies are involved because this randomness leads to significant deviations from the predictions of the conventional deterministic (or continuous) approach to the simulation of biochemical kinetics. Advances in computational methods over the three decades that have elapsed since the publication of Daniel Gillespie's seminal paper in 1977 (J. Phys. Chem. 81, 2340-2361) have allowed researchers to produce highly sophisticated models of complex biological systems. However, these models are frequently highly specific for the particular application and their description often involves mathematical treatments inaccessible to the nonspecialist. For anyone completely new to the field to apply such techniques in their own work might seem at first sight to be a rather intimidating prospect. However, the fundamental principles underlying the approach are in essence rather simple, and the aim of this article is to provide an entry point to the field for a newcomer. It focuses mainly on these general principles, both kinetic and computational, which tend to be not particularly well covered in specialist literature, and shows that interesting information may even be obtained using very simple operations in a conventional spreadsheet.
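In the spirit of the article's entry-point aim, the core of Gillespie's direct method fits in a few lines. The sketch below simulates a single irreversible decay reaction A → ∅; this one-channel setting is a deliberately minimal assumption on our part, since real models track many reaction channels and choose among them in proportion to their propensities.

```python
import math
import random

def gillespie_decay(n0, k, t_max, seed=1):
    """Gillespie's direct method for the decay reaction A -> 0 with rate
    constant k. With one channel, each step draws an exponential waiting
    time from the total propensity and fires the reaction."""
    random.seed(seed)
    t, n = 0.0, n0
    history = [(t, n)]
    while n > 0 and t < t_max:
        a = k * n                               # total propensity
        t += -math.log(1.0 - random.random()) / a  # exponential waiting time
        n -= 1                                  # fire the reaction
        history.append((t, n))
    return history
```

With several channels, one extra draw selects which reaction fires, weighted by its share of the total propensity; that is the only structural addition the full algorithm needs.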
NASA Astrophysics Data System (ADS)
Khaled, N. E.; Attalla, E. M.; Ammar, H.; Khalil, W.
2011-12-01
This work focuses on the estimation of induced photoneutron energy, fluence, and strength using a nuclear track detector (NTD) (CR-39). Photoneutron energy was estimated for three different linear accelerators (LINACs), as examples of commonly used accelerators. For high-energy linear accelerators, neutrons are produced as a consequence of photonuclear reactions in the target nuclei, accelerator head, field-flattening filters, beam collimators, and other irradiated objects. The NTD (CR-39) is used to evaluate the energy and fluence of the fast neutrons. Track length is used to estimate fast photoneutron energies for the linear accelerators (Elekta 10 MV, Elekta 15 MV, and Varian 15 MV). Results for the three chosen LINACs reveal neutron energies in the range of 1-2 MeV for the 10 and 15 MV X-ray beams. The fluence of neutrons at the isocenter (Φtotal) is found to be 4×10⁶ n cm⁻² Gy⁻¹ for the Elekta 10 MV machine. The neutron source strength Q is calculated and found to be 0.2×10¹² n Gy⁻¹ X-ray at the isocenter. This work represents a simple, low-cost, and accurate method of measuring fast neutron doses and energies.
A Universal Fast Algorithm for Sensitivity-Based Structural Damage Detection
Yang, Q. W.; Liu, J. K.; Li, C. H.; Liang, C. F.
2013-01-01
Structural damage detection using measured response data has emerged as a new research area in civil, mechanical, and aerospace engineering communities in recent years. In this paper, a universal fast algorithm is presented for sensitivity-based structural damage detection, which can quickly improve the calculation accuracy of the existing sensitivity-based technique without any high-order sensitivity analysis or multi-iterations. The key formula of the universal fast algorithm is derived from the stiffness and flexibility matrix spectral decomposition theory. With the introduction of the key formula, the proposed method is able to quickly achieve more accurate results than that obtained by the original sensitivity-based methods, regardless of whether the damage is small or large. Three examples are used to demonstrate the feasibility and superiority of the proposed method. It has been shown that the universal fast algorithm is simple to implement and quickly gains higher accuracy over the existing sensitivity-based damage detection methods. PMID:24453815
A measurement of the fast luminescent decays of the MV-50 LED.
NASA Technical Reports Server (NTRS)
Sutton, J. F.
1972-01-01
The fast luminescent decay of the MV-50 GaAs doped Si light-emitting diode has been studied. This diode is found to provide a fast, inexpensive, bright, and convenient light source for the calibration of fast optical timing systems. A simple passive electronic module is described that allows driving the light source directly by a laboratory pulse generator.
Fast Steerable Principal Component Analysis
Zhao, Zhizhen; Shkolnisky, Yoel; Singer, Amit
2016-01-01
Cryo-electron microscopy nowadays often requires the analysis of hundreds of thousands of 2-D images as large as a few hundred pixels in each direction. Here, we introduce an algorithm that efficiently and accurately performs principal component analysis (PCA) for a large set of 2-D images, and, for each image, the set of its uniform rotations in the plane and their reflections. For a dataset consisting of n images of size L × L pixels, the computational complexity of our algorithm is O(nL3 + L4), while existing algorithms take O(nL4). The new algorithm computes the expansion coefficients of the images in a Fourier–Bessel basis efficiently using the nonuniform fast Fourier transform. We compare the accuracy and efficiency of the new algorithm with traditional PCA and existing algorithms for steerable PCA. PMID:27570801
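For context, the classical PCA baseline that the steerable algorithm accelerates can be sketched as follows. This is an illustrative reference implementation only: the function name and shapes are our choices, it ignores rotations and reflections entirely, and its dense SVD carries the higher cost that the O(nL³ + L⁴) algorithm avoids.

```python
import numpy as np

def classical_pca(images, k):
    """Classical PCA for a stack of n images of size L x L: flatten,
    centre, and take the top-k right singular vectors of the data
    matrix as principal components."""
    n = images.shape[0]
    X = images.reshape(n, -1).astype(float)   # n x L^2 data matrix
    X -= X.mean(axis=0)                       # centre each pixel
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:k], S[:k]   # components (k x L^2), singular values
```

Steerable PCA additionally treats every in-plane rotation and reflection of each image as a data point; doing that by brute force with the code above would blow up n, which is why the Fourier-Bessel expansion via the nonuniform FFT matters.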
A Fast and Accurate Algorithm for l1 Minimization Problems in Compressive Sampling (Preprint)
2013-01-22
The experiments compare the performance of the algorithms in terms of various error metrics, speed, and robustness to noise; all experiments are performed in MATLAB 7.11.
Fast, accurate evaluation of exact exchange: The occ-RI-K algorithm
Manzer, Samuel; Horn, Paul R.; Mardirossian, Narbe; Head-Gordon, Martin
2015-01-01
Construction of the exact exchange matrix, K, is typically the rate-determining step in hybrid density functional theory, and therefore, new approaches with increased efficiency are highly desirable. We present a framework with potential for greatly improved efficiency by computing a compressed exchange matrix that yields the exact exchange energy, gradient, and direct inversion of the iterative subspace (DIIS) error vector. The compressed exchange matrix is constructed with one index in the compact molecular orbital basis and the other index in the full atomic orbital basis. To illustrate the advantages, we present a practical algorithm that uses this framework in conjunction with the resolution of the identity (RI) approximation. We demonstrate that convergence using this method, referred to hereafter as occupied orbital RI-K (occ-RI-K), in combination with the DIIS algorithm is well-behaved, that the accuracy of computed energetics is excellent (identical to conventional RI-K), and that significant speedups can be obtained over existing integral-direct and RI-K methods. For a 4400 basis function C68H22 hydrogen-terminated graphene fragment, our algorithm yields a 14 × speedup over the conventional algorithm and a speedup of 3.3 × over RI-K. PMID:26178096
Unified treatment for accurate and fast evaluation of the Fermi-Dirac functions
NASA Astrophysics Data System (ADS)
Guseinov, I. I.; Mamedov, B. A.
2010-05-01
A new analytical approach to the computation of the Fermi-Dirac (FD) functions is presented, which was suggested by previous experience with various algorithms. Using the binomial expansion theorem, these functions are expressed through binomial coefficients and familiar incomplete Gamma functions. This simplification, together with keeping precomputed binomial coefficients in computer memory, may extend the limits to large arguments and result in faster calculation, should such limits be required in practice. Some numerical results are presented for significant mapping examples and are briefly discussed.
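As a brute-force reference point, and not the binomial-expansion method of the paper, the complete Fermi-Dirac integral F_j(x) = ∫₀^∞ tʲ / (1 + e^(t−x)) dt can be evaluated by plain quadrature. The step count and upper cutoff below are illustrative choices of ours.

```python
import math

def fermi_dirac(j, x, n=200000, t_max=60.0):
    """Complete Fermi-Dirac function F_j(x) by a plain Riemann sum over
    [0, t_max + max(x, 0)]; the integrand decays like e^(-(t-x)) so the
    truncation error is negligible for this cutoff."""
    upper = t_max + max(x, 0.0)
    h = upper / n
    total = 0.0
    for i in range(1, n):         # endpoints contribute ~0 for j >= 1
        t = i * h
        total += t ** j / (1.0 + math.exp(t - x))
    return h * total
```

Known closed forms give a quick sanity check: F₀(0) = ln 2 and F₁(0) = π²/12. Such direct quadrature is slow, which is exactly the motivation for closed-form expansions like the one proposed.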
A technique for fast and accurate measurement of hand volumes using Archimedes' principle.
Hughes, S; Lau, J
2008-03-01
A new technique for measuring hand volumes using Archimedes' principle is described. The technique involves the immersion of a hand in a water container placed on an electronic balance. The volume is given by the change in weight divided by the density of water. This technique was compared with the more conventional technique of immersing an object in a container with an overflow spout and collecting and weighing the volume of overflow water. The hand volume of two subjects was measured. Hand volumes were 494 +/- 6 ml and 312 +/- 7 ml for the immersion method and 476 +/- 14 ml and 302 +/- 8 ml for the overflow method for the two subjects respectively. Using plastic test objects, the mean difference between the actual and measured volume was -0.3% and 2.0% for the immersion and overflow techniques respectively. This study shows that hand volumes can be obtained more quickly with the immersion method than with the overflow method. The technique could find an application in clinics where frequent hand volumes are required.
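The arithmetic behind the immersion method is just Archimedes' principle: the balance reading increases by the weight of the displaced water, so volume equals the change in weight divided by the water density. A minimal sketch, in which the function name and the room-temperature density default are our assumptions:

```python
def volume_from_immersion(weight_before_g, weight_after_g,
                          water_density_g_per_ml=0.998):
    """Object volume in ml from the change in balance reading (grams)
    when the object is immersed, per Archimedes' principle. The default
    density assumes water near room temperature (about 21 C)."""
    return (weight_after_g - weight_before_g) / water_density_g_per_ml
```

For clinical use the only calibration needed is the water temperature, since density varies by well under 1% over ordinary room conditions.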
NASA Astrophysics Data System (ADS)
Samuel, Henri
2010-05-01
Advection is one of the major processes that commonly acts on various scales in nature (core formation, mantle convective stirring, multi-phase flows in magma chambers, salt diapirism ...). While this process can be modeled numerically by solving conservation equations, various geodynamic scenarios involve advection of quantities with sharp discontinuities. Unfortunately, in these cases modeling pure advection numerically becomes very challenging, in particular because sharp discontinuities lead to numerical instabilities, which prevent the local use of high-order numerical schemes. Several approaches have been used in computational geodynamics to overcome this difficulty, with variable amounts of success. Despite the use of correcting filters or non-oscillatory, shock-preserving schemes, Eulerian (fixed grid) techniques generally suffer from artificial numerical diffusion. Lagrangian approaches (dynamic grids or particles) tend to be more popular in computational geodynamics because they are not prone to excessive numerical diffusion. However, these approaches are generally computationally expensive, especially in 3D, and can suffer from spurious statistical noise. As an alternative to the aforementioned approaches, I have applied a relatively recent Particle Level Set method [Enright et al., 2002] for modeling advection of quantities in the presence of sharp discontinuities. I have adapted this improved method, which combines the best of the Eulerian and Lagrangian approaches, and tested it against well-known benchmarks and classical geodynamic flows. In each case the accuracy of the Particle Level Set method equals or exceeds that of other Eulerian and Lagrangian methods, at significantly smaller computational cost, in particular in three-dimensional flows, where the reduction of computational time for modeling advection processes is most needed.
A hybrid reconstruction algorithm for fast and accurate 4D cone-beam CT imaging
Yan, Hao; Folkerts, Michael; Jiang, Steve B. E-mail: steve.jiang@UTSouthwestern.edu; Jia, Xun E-mail: steve.jiang@UTSouthwestern.edu; Zhen, Xin; Li, Yongbao; Pan, Tinsu; Cervino, Laura
2014-07-15
Purpose: 4D cone beam CT (4D-CBCT) has been utilized in radiation therapy to provide 4D image guidance in the lung and upper abdomen area. However, clinical application of 4D-CBCT is currently limited due to the long scan time and low image quality. The purpose of this paper is to develop a new 4D-CBCT reconstruction method that restores volumetric images based on the 1-min scan data acquired with a standard 3D-CBCT protocol. Methods: The model optimizes a deformation vector field that deforms a patient-specific planning CT (p-CT), so that the calculated 4D-CBCT projections match measurements. A forward-backward splitting (FBS) method is developed to solve the optimization problem. It splits the original problem into two well-studied subproblems, i.e., image reconstruction and deformable image registration. By iteratively solving the two subproblems, FBS gradually yields correct deformation information, while maintaining high image quality. The whole workflow is implemented on a graphics processing unit to improve efficiency. Comprehensive evaluations have been conducted on a moving phantom and three real patient cases regarding the accuracy and quality of the reconstructed images, as well as the algorithm robustness and efficiency. Results: The proposed algorithm reconstructs 4D-CBCT images from highly under-sampled projection data acquired with 1-min scans. Regarding anatomical structure location accuracy, an average difference of 0.204 mm and a maximum difference of 0.484 mm are found for the phantom case, and maximum differences of 0.3–0.5 mm for patients 1–3 are observed. As for image quality, intensity errors below 5 and 20 HU compared to the planning CT are achieved for the phantom and the patient cases, respectively. Signal-to-noise ratio values are improved by 12.74 and 5.12 times compared to results from the FDK algorithm using the 1-min data and 4-min data, respectively. The computation time of the algorithm on an NVIDIA GTX590 card is 1–1.5 min per phase.
Conclusions: High-quality 4D-CBCT imaging based on the clinically standard 1-min 3D CBCT scanning protocol is feasible via the proposed hybrid reconstruction algorithm.
Enabling fast, stable and accurate peridynamic computations using multi-time-step integration
Lindsay, P.; Parks, M. L.; Prakash, A.
2016-04-13
Peridynamics is a nonlocal extension of classical continuum mechanics that is well-suited for solving problems with discontinuities such as cracks. This paper extends the peridynamic formulation to decompose a problem domain into a number of smaller overlapping subdomains and to enable the use of different time steps in different subdomains. This approach allows regions of interest to be isolated and solved at a small time step for increased accuracy while the rest of the problem domain can be solved at a larger time step for greater computational efficiency. Lastly, performance of the proposed method in terms of stability, accuracy, and computational cost is examined and several numerical examples are presented to corroborate the findings.
Fast and accurate numerical method for predicting gas chromatography retention time.
Claumann, Carlos Alberto; Wüst Zibetti, André; Bolzan, Ariovaldo; Machado, Ricardo A F; Pinto, Leonel Teixeira
2015-08-07
Predictive modeling of gas chromatography compound retention depends on the retention factor (ki) and on the flow of the mobile phase. Thus, different approaches for determining an analyte's ki in column chromatography have been developed. The main one is based on the thermodynamic properties of the component and on the characteristics of the stationary phase. These models can be used to estimate the parameters and to optimize the temperature programming, in gas chromatography, for the separation of compounds. Different authors have proposed the use of numerical methods for solving these models, but these methods demand greater computational time. Hence, a new method for solving the predictive modeling of analyte retention time is presented. This algorithm is an alternative to traditional methods because it transforms the task into root-determination problems within defined intervals. The proposed approach allows for retention time (tr) calculation, with accuracy determined by the user of the method, and significant reductions in computational time; it can also be used to evaluate the performance of other prediction methods.
Fast, Accurate and Automatic Ancient Nucleosome and Methylation Maps with epiPALEOMIX.
Hanghøj, Kristian; Seguin-Orlando, Andaine; Schubert, Mikkel; Madsen, Tobias; Pedersen, Jakob Skou; Willerslev, Eske; Orlando, Ludovic
2016-12-01
The first epigenomes from archaic hominins (AH) and ancient anatomically modern humans (AMH) have recently been characterized, based, however, on a limited number of samples. The extent to which ancient genome-wide epigenetic landscapes can be reconstructed thus remains contentious. Here, we present epiPALEOMIX, an open-source and user-friendly pipeline that exploits post-mortem DNA degradation patterns to reconstruct ancient methylomes and nucleosome maps from shotgun and/or capture-enrichment data. Applying epiPALEOMIX to the sequence data underlying 35 ancient genomes including AMH, AH, equids and aurochs, we investigate the temporal, geographical and preservation range of ancient epigenetic signatures. We first assess the quality of inferred ancient epigenetic signatures within well-characterized genomic regions. We find that tissue-specific methylation signatures can be obtained across a wider range of DNA preparation types than previously thought, including when no particular experimental procedures have been used to remove deaminated cytosines prior to sequencing. We identify a large subset of samples for which DNA associated with nucleosomes is protected from post-mortem degradation, and nucleosome positioning patterns can be reconstructed. Finally, we describe parameters and conditions such as DNA damage levels and sequencing depth that limit the preservation of epigenetic signatures in ancient samples. When such conditions are met, we propose that epigenetic profiles of CTCF binding regions can be used to help data authentication. Our work, including epiPALEOMIX, opens the way for further investigation of ancient epigenomes through time, especially aimed at tracking possible epigenetic changes during major evolutionary, environmental, socioeconomic, and cultural shifts.
ERIC Educational Resources Information Center
Steiger, James H.
1979-01-01
The program presented computes a chi-square statistic for testing pattern hypotheses on correlation matrices. The statistic is based on a multivariate generalization of the Fisher r-to-z transformation. This statistic has small sample performance which is superior to an analogous likelihood ratio statistic obtained via the analysis of covariance…
Accurate and Fast Localization of Prostate for External Beam Radiation Therapy
2009-03-01
1. J. Wang, A. Chai, L. Xing, "Noise correlation in CBCT projection data and its application for noise reduction in low-dose CBCT", poster presentation at the 2009 SPIE Medical Imaging conference, Orlando, FL. 2. J. Wang, T. Li, and L. Xing, "Low-dose CBCT Imaging for External Beam Radiotherapy…". 3. L. Zhu, J. Wang, and L. Xing, "Noise suppression… reconstruction for CBCT using edge-preserving prior", Medical Physics, vol. 36, pp. 252-260, 2009.
QuartetS: A Fast and Accurate Algorithm for Large-Scale Orthology Detection
2011-01-01
currently underway (1). In parallel, for particular model species, experimental studies are attempting to annotate and decode vast amounts of these… orthologs. We can determine if genes x and y have originated from the duplication event implied by z1 and z2 by reconstructing the evolutionary history
Fast, accurate evaluation of exact exchange: The occ-RI-K algorithm
Manzer, Samuel; Horn, Paul R.; Mardirossian, Narbe; Head-Gordon, Martin
2015-07-14
Construction of the exact exchange matrix, K, is typically the rate-determining step in hybrid density functional theory, and therefore, new approaches with increased efficiency are highly desirable. We present a framework with potential for greatly improved efficiency by computing a compressed exchange matrix that yields the exact exchange energy, gradient, and direct inversion of the iterative subspace (DIIS) error vector. The compressed exchange matrix is constructed with one index in the compact molecular orbital basis and the other index in the full atomic orbital basis. To illustrate the advantages, we present a practical algorithm that uses this framework in conjunction with the resolution of the identity (RI) approximation. We demonstrate that convergence using this method, referred to hereafter as occupied orbital RI-K (occ-RI-K), in combination with the DIIS algorithm is well-behaved, that the accuracy of computed energetics is excellent (identical to conventional RI-K), and that significant speedups can be obtained over existing integral-direct and RI-K methods. For a 4400 basis function C68H22 hydrogen-terminated graphene fragment, our algorithm yields a 14× speedup over the conventional algorithm and a speedup of 3.3× over RI-K.
Fast and Accurate Cell Tracking by a Novel Optical-Digital Hybrid Method
NASA Astrophysics Data System (ADS)
Torres-Cisneros, M.; Aviña-Cervantes, J. G.; Pérez-Careta, E.; Ambriz-Colín, F.; Tinoco, Verónica; Ibarra-Manzano, O. G.; Plascencia-Mora, H.; Aguilera-Gómez, E.; Ibarra-Manzano, M. A.; Guzman-Cabrera, R.; Debeir, Olivier; Sánchez-Mondragón, J. J.
2013-09-01
An innovative methodology to detect and track cells in microscope images, enhanced by optical cross-correlation techniques, is proposed in this paper. To increase the tracking sensitivity, image pre-processing was implemented as a morphological operator on the microscope image. Results show that the pre-processing allows cells to be tracked over additional frames, thereby increasing the method's robustness. The proposed methodology can be applied to problems such as mitosis, cell collisions, and cell overlapping, and is ultimately intended to help identify and treat illnesses and malignancies.
NIBBS-search for fast and accurate prediction of phenotype-biased metabolic systems.
Schmidt, Matthew C; Rocha, Andrea M; Padmanabhan, Kanchana; Shpanskaya, Yekaterina; Banfield, Jill; Scott, Kathleen; Mihelcic, James R; Samatova, Nagiza F
2012-01-01
Understanding of genotype-phenotype associations is important not only for furthering our knowledge on internal cellular processes, but also essential for providing the foundation necessary for genetic engineering of microorganisms for industrial use (e.g., production of bioenergy or biofuels). However, genotype-phenotype associations alone do not provide enough information to alter an organism's genome to either suppress or exhibit a phenotype. It is important to look at the phenotype-related genes in the context of the genome-scale network to understand how the genes interact with other genes in the organism. Identification of metabolic subsystems involved in the expression of the phenotype is one way of placing the phenotype-related genes in the context of the entire network. A metabolic system refers to a metabolic network subgraph; nodes are compounds and edges labels are the enzymes that catalyze the reaction. The metabolic subsystem could be part of a single metabolic pathway or span parts of multiple pathways. Arguably, comparative genome-scale metabolic network analysis is a promising strategy to identify these phenotype-related metabolic subsystems. Network Instance-Based Biased Subgraph Search (NIBBS) is a graph-theoretic method for genome-scale metabolic network comparative analysis that can identify metabolic systems that are statistically biased toward phenotype-expressing organismal networks. We set up experiments with target phenotypes like hydrogen production, TCA expression, and acid-tolerance. We show via extensive literature search that some of the resulting metabolic subsystems are indeed phenotype-related and formulate hypotheses for other systems in terms of their role in phenotype expression. NIBBS is also orders of magnitude faster than MULE, one of the most efficient maximal frequent subgraph mining algorithms that could be adjusted for this problem. 
Also, the set of phenotype-biased metabolic systems output by NIBBS comes very close to the set of phenotype-biased subgraphs output by an exact maximally-biased subgraph enumeration algorithm (MBS-Enum). The code (NIBBS and the module to visualize the identified subsystems) is available at http://freescience.org/cs/NIBBS.
Enabling fast, stable and accurate peridynamic computations using multi-time-step integration
Lindsay, P.; Parks, M. L.; Prakash, A.
2016-04-13
Peridynamics is a nonlocal extension of classical continuum mechanics that is well-suited for solving problems with discontinuities such as cracks. This paper extends the peridynamic formulation to decompose a problem domain into a number of smaller overlapping subdomains and to enable the use of different time steps in different subdomains. This approach allows regions of interest to be isolated and solved at a small time step for increased accuracy while the rest of the problem domain can be solved at a larger time step for greater computational efficiency. Lastly, performance of the proposed method in terms of stability, accuracy, and computational cost is examined and several numerical examples are presented to corroborate the findings.
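The subcycling idea behind multi-time-step integration can be illustrated with a minimal sketch, independent of peridynamics: a stiff component is advanced with many small substeps per coarse step, while a slow component takes one coarse step, holding the other component's last value at the interface. The coupling choice and the toy two-variable system here are assumptions of this sketch, not the paper's peridynamic formulation.

```python
def multi_timestep_euler(y1, y2, dt, substeps, t_end):
    """Subcycled explicit Euler for a toy coupled system:
        y1' = -y1 + y2      (slow, coarse step dt)
        y2' = -10*y2 + y1   (stiff, fine step dt/substeps)
    The stiff component is stabilized by its smaller step while the
    slow component avoids unnecessary work."""
    t = 0.0
    while t < t_end - 1e-12:
        y1_old = y1
        y1 = y1 + dt * (-y1 + y2)           # one coarse step for the slow part
        h = dt / substeps
        for _ in range(substeps):           # several fine steps for the stiff part
            y2 = y2 + h * (-10.0 * y2 + y1_old)
        t += dt
    return y1, y2
```

With dt = 0.1 the stiff equation alone would be near its explicit stability limit; subcycling with 10 substeps keeps it well inside the stable region without shrinking the global step.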
AN ACCURATE ALGORITHM FOR NONUNIFORM FAST FOURIER TRANSFORMS (NUFFT). (R825225)
The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...
Fast, Accurate and Shift-Varying Line Projections for Iterative Reconstruction Using the GPU
Pratx, Guillem; Chinn, Garry; Olcott, Peter D.; Levin, Craig S.
2013-01-01
List-mode processing provides an efficient way to deal with sparse projections in iterative image reconstruction for emission tomography. An issue often reported is the tremendous amount of computation required by such algorithms. Each recorded event requires several back- and forward line projections. We investigated the use of the programmable graphics processing unit (GPU) to accelerate the line-projection operations and implement fully-3D list-mode ordered-subsets expectation-maximization for positron emission tomography (PET). We designed a reconstruction approach that incorporates resolution kernels, which model the spatially-varying physical processes associated with photon emission, transport and detection. Our development is particularly suitable for applications where the projection data is sparse, such as high-resolution, dynamic, and time-of-flight PET reconstruction. The GPU approach runs more than 50 times faster than an equivalent CPU implementation while image quality and accuracy are virtually identical. This paper describes in detail how the GPU can be used to accelerate the line projection operations, even when the lines-of-response have arbitrary endpoint locations and shift-varying resolution kernels are used. A quantitative evaluation is included to validate the correctness of this new approach. PMID:19244015
Bonetto, Paola; Qi, Jinyi; Leahy, Richard M.
1999-10-01
We describe a method for computing linear observer statistics for maximum a posteriori (MAP) reconstructions of PET images. The method is based on a theoretical approximation for the mean and covariance of MAP reconstructions. In particular, we derive here a closed form for the channelized Hotelling observer (CHO) statistic applied to 2D MAP images. We show reasonably good correspondence between these theoretical results and Monte Carlo studies. The accuracy and low computational cost of the approximation allow us to analyze the observer performance over a wide range of operating conditions and parameter settings for the MAP reconstruction algorithm.
BALOO: A Fast and Versatile Code for Accurate Multireference Variational/Perturbative Calculations.
Cacelli, Ivo; Ferretti, Alessandro; Prampolini, Giacomo; Barone, Vincenzo
2015-05-12
We present the new BALOO package for performing multireference variational/perturbative computations for medium- to large-size systems. To this end we have introduced a number of conceptual and technical improvements, including full parallelization of the code, use and manipulation of a large panel of reference orbitals, implementation of a diagrammatic perturbation treatment, and computation of properties from the density matrix perturbed to first order. A number of test cases are analyzed, with special reference to electronic transitions and magnetic properties, to show the versatility, effectiveness, and accuracy of BALOO.
Fast, accurate 2D-MR relaxation exchange spectroscopy (REXSY): Beyond compressed sensing
NASA Astrophysics Data System (ADS)
Bai, Ruiliang; Benjamini, Dan; Cheng, Jian; Basser, Peter J.
2016-10-01
Previously, we showed that compressive or compressed sensing (CS) can be used to reduce significantly the data required to obtain 2D-NMR relaxation and diffusion spectra when they are sparse or well localized. In some cases, an order of magnitude fewer uniformly sampled data were required to reconstruct 2D-MR spectra of comparable quality. Nonetheless, this acceleration may still not be sufficient to make 2D-MR spectroscopy practicable for many important applications, such as studying time-varying exchange processes in swelling gels or drying paints, in living tissue in response to various biological or biochemical challenges, and particularly for in vivo MRI applications. A recently introduced framework, marginal distributions constrained optimization (MADCO), tremendously accelerates such 2D acquisitions by using a priori obtained 1D marginal distributions as powerful constraints when 2D spectra are reconstructed. Here we exploit one important intrinsic property of 2D-MR relaxation exchange spectra: the fact that the 1D marginal distributions of each 2D-MR relaxation exchange spectrum in both dimensions are equal and can be rapidly estimated from a single Carr-Purcell-Meiboom-Gill (CPMG) or inversion recovery prepared CPMG measurement. We extend the MADCO framework by further proposing to use the 1D marginal distributions to inform the subsequent 2D data-sampling scheme, concentrating measurements where spectral peaks are present and reducing them where they are not. In this way we achieve compression or acceleration that is an order of magnitude greater than that in our previous CS method while providing data in reconstructed 2D-MR spectral maps of comparable quality, demonstrated using several simulated and real 2D T2-T2 experimental datasets. This method, which can be called "informed compressed sensing," is extendable to other 2D- and even ND-MR exchange spectroscopy.
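The informed-sampling step described above can be sketched as follows: use the 1D marginal to weight which cells of the 2D acquisition grid are actually measured. The outer product of the two (identical) marginals as a proxy for the joint spectrum is an assumption of this sketch, not the paper's exact MADCO scheme, and `informed_sample_grid` is a hypothetical name.

```python
import numpy as np

def informed_sample_grid(marginal, n_samples, seed=0):
    """Pick n_samples distinct cells of a 2D acquisition grid with
    probability proportional to the outer product of the 1D marginal
    with itself, so measurements concentrate where spectral peaks
    are expected and are sparse elsewhere."""
    rng = np.random.default_rng(seed)
    marginal = np.asarray(marginal, dtype=float)
    joint = np.outer(marginal, marginal)        # crude proxy for the 2D spectrum
    p = joint.ravel() / joint.sum()
    flat = rng.choice(p.size, size=n_samples, replace=False, p=p)
    return np.unravel_index(flat, joint.shape)  # (row indices, col indices)
```

For a marginal with a single sharp peak, nearly all selected cells fall in the row and column of that peak, which is the intended concentration of measurements.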
Fast, Accurate and Automatic Ancient Nucleosome and Methylation Maps with epiPALEOMIX
Hanghøj, Kristian; Seguin-Orlando, Andaine; Schubert, Mikkel; Madsen, Tobias; Pedersen, Jakob Skou; Willerslev, Eske; Orlando, Ludovic
2016-01-01
The first epigenomes from archaic hominins (AH) and ancient anatomically modern humans (AMH) have recently been characterized, based, however, on a limited number of samples. The extent to which ancient genome-wide epigenetic landscapes can be reconstructed thus remains contentious. Here, we present epiPALEOMIX, an open-source and user-friendly pipeline that exploits post-mortem DNA degradation patterns to reconstruct ancient methylomes and nucleosome maps from shotgun and/or capture-enrichment data. Applying epiPALEOMIX to the sequence data underlying 35 ancient genomes including AMH, AH, equids and aurochs, we investigate the temporal, geographical and preservation range of ancient epigenetic signatures. We first assess the quality of inferred ancient epigenetic signatures within well-characterized genomic regions. We find that tissue-specific methylation signatures can be obtained across a wider range of DNA preparation types than previously thought, including when no particular experimental procedures have been used to remove deaminated cytosines prior to sequencing. We identify a large subset of samples for which DNA associated with nucleosomes is protected from post-mortem degradation, and nucleosome positioning patterns can be reconstructed. Finally, we describe parameters and conditions such as DNA damage levels and sequencing depth that limit the preservation of epigenetic signatures in ancient samples. When such conditions are met, we propose that epigenetic profiles of CTCF binding regions can be used to help data authentication. Our work, including epiPALEOMIX, opens the way for further investigation of ancient epigenomes through time, especially aimed at tracking possible epigenetic changes during major evolutionary, environmental, socioeconomic, and cultural shifts. PMID:27624717
Fast and accurate probability density estimation in large high dimensional astronomical datasets
NASA Astrophysics Data System (ADS)
Gupta, Pramod; Connolly, Andrew J.; Gardner, Jeffrey P.
2015-01-01
Astronomical surveys will generate measurements of hundreds of attributes (e.g. color, size, shape) on hundreds of millions of sources. Analyzing these large, high dimensional data sets will require efficient algorithms for data analysis. An example of this is probability density estimation, which is at the heart of many classification problems such as the separation of stars and quasars based on their colors. Popular density estimation techniques use binning or kernel density estimation. Kernel density estimation has a small memory footprint but often requires large computational resources. Binning has small computational requirements, but binning is usually implemented with multi-dimensional arrays, which leads to memory requirements that scale exponentially with the number of dimensions. Hence neither technique scales well to large data sets in high dimensions. We present an alternative approach: binning implemented with hash tables (BASH tables). This approach uses the sparseness of data in the high dimensional space to ensure that the memory requirements are small. However, hashing requires some extra computation, so a priori it is not clear whether the reduction in memory requirements will lead to increased computational requirements. Through an implementation of BASH tables in C++ we show that the additional computational requirements of hashing are negligible. Hence this approach has small memory and computational requirements. We apply our density estimation technique to photometric selection of quasars using non-parametric Bayesian classification and show that the accuracy of the classification is the same as that of earlier approaches. Since the BASH table approach is one to three orders of magnitude faster than the earlier approaches, it may be useful in various other applications of density estimation in astrostatistics.
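The core idea of hash-table binning can be sketched in a few lines: key each occupied bin by its tuple of bin indices, so memory scales with the number of occupied bins rather than with (bins per dimension) raised to the number of dimensions. This is a minimal illustration of the general technique, not the authors' C++ implementation, and `bash_table_density` is a hypothetical name.

```python
from collections import defaultdict

def bash_table_density(points, bin_width):
    """Histogram-style density estimate using a hash table keyed by
    multi-dimensional bin indices.  Only occupied bins consume memory,
    which is what makes the approach viable for sparse data in high
    dimensions."""
    counts = defaultdict(int)
    for p in points:
        key = tuple(int(x // bin_width) for x in p)  # bin index per dimension
        counts[key] += 1
    n = len(points)
    volume = bin_width ** len(points[0])             # volume of one bin
    # convert counts to a density estimate per occupied bin
    return {k: c / (n * volume) for k, c in counts.items()}
```

For 100-dimensional data a dense array of even 10 bins per axis would need 10^100 cells, while the dictionary above holds at most one entry per data point.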
Fast and accurate read-out of interferometric optical fiber sensors
NASA Astrophysics Data System (ADS)
Bartholsen, Ingebrigt; Hjelme, Dag R.
2016-03-01
We present results from an evaluation of phase and frequency estimation algorithms for read-out instrumentation of interferometric sensors. Tests interrogating a micro Fabry-Perot sensor, made of a semi-spherical stimuli-responsive hydrogel immobilized on a single-mode fiber end face, show that an iterative quadrature demodulation technique (IQDT) implemented on a 32-bit microcontroller unit can achieve an absolute length accuracy of ±50 nm and a length-change accuracy of ±3 nm, using an 80 nm SLED source and a grating spectrometer for interrogation. The mean absolute error of the frequency estimator is a factor of 3 larger than the theoretical lower bound for a maximum likelihood estimator; the corresponding factor for the phase estimator is 1.3. The computation time of the IQDT algorithm is reduced by a factor of 1000 compared to the full QDT at the same accuracy requirement.
Trueland, Jennifer
2013-12-18
The 5:2 diet involves two days of fasting each week. It is being promoted as the key to sustained weight loss, as well as wider health benefits, despite the lack of evidence on its long-term effects. Nurses need to support patients who wish to try intermittent fasting.
O'Brien, Travis A.; Kashinath, Karthik
2015-05-22
This software implements the fast, self-consistent probability density estimation described by O'Brien et al. (2014). It uses a non-uniform fast Fourier transform technique to reduce the computational cost of an objective and self-consistent kernel density estimation method.
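The FFT acceleration that underlies this class of estimators can be sketched on a uniform grid: bin the samples, then apply the smoothing kernel as a multiplication in Fourier space. This is a simplified, uniform-grid sketch; the software described above uses a *non-uniform* FFT and a self-consistent, data-driven kernel, both of which this sketch omits, and `fft_kde` with its fixed Gaussian bandwidth is a hypothetical stand-in.

```python
import numpy as np

def fft_kde(samples, grid_size=256, bandwidth=0.1, lo=-1.0, hi=1.0):
    """Grid-based kernel density estimate accelerated with the FFT.
    Binning costs O(N); the Gaussian smoothing becomes a pointwise
    product in Fourier space, O(M log M) for M grid points, instead
    of an O(N*M) direct kernel sum."""
    hist, edges = np.histogram(samples, bins=grid_size, range=(lo, hi), density=True)
    dx = edges[1] - edges[0]
    freqs = np.fft.fftfreq(grid_size, d=dx)
    # Fourier transform of a Gaussian kernel with std = bandwidth
    kernel_ft = np.exp(-2.0 * (np.pi * freqs * bandwidth) ** 2)
    smoothed = np.fft.ifft(np.fft.fft(hist) * kernel_ft).real
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, smoothed
```

Because the kernel's Fourier transform equals 1 at zero frequency, the circular convolution preserves the histogram's normalization, so the result still integrates to one over the grid.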
Graham, Peter W.; Horn, Bart; Kachru, Shamit; Rajendran, Surjeet; Torroba, Gonzalo; /Stanford U., ITP /SLAC
2011-12-14
We explore simple but novel bouncing solutions of general relativity that avoid singularities. These solutions require curvature k = +1, and are supported by a negative cosmological term and matter with -1 < w < -1/3. In the case of moderate bounces (where the ratio of the maximal scale factor a_+ to the minimal scale factor a_- is O(1)), the solutions are shown to be classically stable and cycle through an infinite set of bounces. For more extreme cases with large a_+/a_-, the solutions can still oscillate many times before classical instabilities take them out of the regime of validity of our approximations. In this regime, quantum particle production also leads eventually to a departure from the realm of validity of semiclassical general relativity, likely yielding a singular crunch. We briefly discuss possible applications of these models to realistic cosmology.
A Fast Radiative Transfer Parameterization Under Cloudy Condition in Solar Spectral Region
NASA Astrophysics Data System (ADS)
Yang, Q.; Liu, X.; Yang, P.; Wang, C.
2014-12-01
The Climate Absolute Radiance and Refractivity Observatory (CLARREO) system, which is proposed and developed by NASA, will directly measure the Earth's thermal infrared spectrum (IR), the spectrum of solar radiation reflected by the Earth and its atmosphere (RS), and radio occultation (RO). IR, RS, and RO measurements provide information on the most critical but least understood climate forcings, responses, and feedbacks associated with the vertical distribution of atmospheric temperature and water vapor, broadband reflected and emitted radiative fluxes, cloud properties, surface albedo, and surface skin temperature. To perform Observing System Simulation Experiments (OSSE) for long-term climate observations, accurate and fast radiative transfer models are needed. The principal component-based radiative transfer model (PCRTM) is one of the efforts devoted to the development of fast radiative transfer models for simulating radiances and reflectances observed by various hyperspectral instruments. A retrieval algorithm based on the PCRTM forward model has been developed for AIRS, NAST, IASI, and CrIS. It is very fast and very accurate relative to the training radiative transfer model. In this work, we are extending PCRTM to the UV-VIS-near IR spectral region. To implement faster cloudy radiative transfer calculations, we carefully investigated the radiative transfer process under cloudy conditions. The cloud bidirectional reflectance was parameterized based on off-line 36-stream multiple scattering calculations, while a few other lookup tables were generated to describe the effective transmittance and reflectance of the cloud-clear-sky coupling system in the solar spectral region. The bidirectional reflectance or the irradiance measured by satellite may be calculated using a simple fast radiative transfer model, given the type of cloud (ice or water), the optical depth of the cloud, the optical depths of atmospheric trace gases both above and below the cloud, and the particle size of the cloud, as well
Newly developed double neural network concept for reliable fast plasma position control
NASA Astrophysics Data System (ADS)
Jeon, Young-Mu; Na, Yong-Su; Kim, Myung-Rak; Hwang, Y. S.
2001-01-01
Neural networks are considered as parameter estimation tools in plasma control for next-generation tokamaks such as ITER. Neural networks have been reported to be so accurate and fast at plasma equilibrium identification that they may be applied to the control of complex tokamak plasmas. For this application, the reliability of the conventional neural network needs to be improved. In this study, a new double neural network concept is developed to achieve this. The new idea has been applied to simple plasma position identification in the KSTAR tokamak as a feasibility test. The concept shows higher reliability and fault tolerance even under severe fault conditions, which may make neural networks reliably and widely applicable to plasma control in future tokamaks.
Wedge factor dependence with depth and field size for fast neutron beams.
Popescu, Alina; Risler, Ruedi
2003-07-21
The dependence of wedge factors (WFs) on field size (FS) and depth for a fast neutron beam has been investigated. In a previous study (Popescu et al 1999 Med. Phys. 26 541), a method was presented that provides a simple and accurate way of calculating the wedge-factor dependence on FS and depth in the case of a photon beam. The validity of a similar approach is tested in the present study for neutron beam dosimetry. The clinical neutron therapy system at the University of Washington (UW) has a flattening filter assembly consisting of two filters: a small-field filter and a large-field filter. Despite this complication, the approach presented in Popescu et al (1999 Med. Phys. 26 541) can be used to describe the WF dependence on FS and depth (d).
Mill profiler machines soft materials accurately
NASA Technical Reports Server (NTRS)
Rauschl, J. A.
1966-01-01
Mill profiler machines bevels, slots, and grooves in soft materials, such as styrofoam phenolic-filled cores, to any desired thickness. A single operator can accurately control cutting depths in contour or straight line work.