Tao, Guohua; Miller, William H
2011-07-14
An efficient time-dependent importance sampling method is developed for the Monte Carlo calculation of time correlation functions via the initial value representation (IVR) of semiclassical (SC) theory. A prefactor-free time-dependent sampling function weights the importance of a trajectory based on the magnitude of its contribution to the time correlation function, and global trial moves are used to facilitate efficient sampling of the phase space of initial conditions. The method can be applied generally to sample rare events efficiently while avoiding becoming trapped in a local region of phase space. Results presented in the paper for two system-bath models demonstrate the efficiency of this new importance sampling method for full SC-IVR calculations.
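The weighting idea can be sketched on a toy model. The sketch below is not the SC-IVR method itself (no semiclassical phase or prefactor): it applies Metropolis sampling with a time-dependent weight, the magnitude of a trajectory's contribution to C(t), together with global trial moves, to a classical harmonic oscillator where C(t) = ⟨x(0)x(t)⟩ is known analytically (cos t at unit temperature). All function names are illustrative.

```python
import numpy as np

def contribution(q, p, t, beta=1.0):
    # Toy stand-in: classical harmonic oscillator (m = omega = 1).
    # A trajectory's contribution to <x(0) x(t)> is its Boltzmann
    # weight times x(0) * x(t).
    xt = q * np.cos(t) + p * np.sin(t)
    return np.exp(-0.5 * beta * (q * q + p * p)) * q * xt

def corr_importance_sampled(t, n_samples=60000, box=4.0, seed=1):
    """Metropolis chain whose stationary weight is |contribution|, with
    *global* trial moves (fresh uniform proposals over the whole box) so
    the walker cannot get trapped in one region of phase space.
    Self-normalized ratio estimator: C(t) = E[sign h] / E[1/|h|],
    where h = x(0) * x(t)."""
    rng = np.random.default_rng(seed)
    q, p = 1.0, 0.1
    w = abs(contribution(q, p, t))
    num = den = 0.0
    for _ in range(n_samples):
        qn, pn = rng.uniform(-box, box, size=2)
        wn = abs(contribution(qn, pn, t))
        if rng.random() * w < wn:     # Metropolis accept for |f|-sampling
            q, p, w = qn, pn, wn
        xt = q * np.cos(t) + p * np.sin(t)
        h = q * xt
        num += np.sign(h)             # -> integral of rho * h, up to Z
        den += 1.0 / abs(h)           # -> integral of rho, up to the same Z
    return num / den
```

Because the sampler visits trajectories in proportion to the magnitude of their contribution at the time of interest, effort is concentrated where the integrand actually matters, which is the essence of time-dependent importance sampling.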
NASA Astrophysics Data System (ADS)
Mundis, Nathan L.; Mavriplis, Dimitri J.
2017-09-01
The time-spectral method applied to the Euler and coupled aeroelastic equations theoretically offers significant computational savings for purely periodic problems when compared to standard time-implicit methods. However, attaining superior efficiency with time-spectral methods over traditional time-implicit methods hinges on the ability to rapidly solve the large non-linear system resulting from time-spectral discretizations, which becomes larger and stiffer as more time instances are employed or as the period of the flow becomes especially short (i.e., the maximum resolvable wave number increases). In order to increase the efficiency of these solvers, and to improve robustness, particularly for large numbers of time instances, the Generalized Minimal Residual Method (GMRES) is used to solve the implicit linear system over all coupled time instances. The use of GMRES as the linear solver makes time-spectral methods more robust, allows them to be applied to a far greater subset of time-accurate problems, including those with a broad range of harmonic content, and vastly improves the efficiency of time-spectral methods. In previous work, a wave-number-independent preconditioner was developed that mitigates the increased stiffness of the time-spectral method when applied to problems with large resolvable wave numbers. This preconditioner, however, directly inverts a large matrix whose size grows in proportion to the number of time instances; as a result, its computational time scales as the cube of the number of time instances. In the present work, this preconditioner has been reworked to take advantage of an approximate-factorization approach that effectively decouples the spatial and temporal systems. Once decoupled, the time-spectral matrix can be inverted in frequency space, where it has entries only on the main diagonal and can therefore be inverted quite efficiently.
This new GMRES/preconditioner combination is shown to be over an order of magnitude more efficient than the previous wave-number independent preconditioner for problems with large numbers of time instances and/or large reduced frequencies.
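The key structural fact, that the time-spectral (periodic time-derivative) operator is diagonal in frequency space, can be illustrated on a scalar model problem. This is a minimal sketch, not the Euler equations, and `solve_time_spectral` is a hypothetical name: a dense inversion of (alpha I + D) would cost O(N^3), while the FFT route costs O(N log N) per solve.

```python
import numpy as np

def solve_time_spectral(b, omega=1.0, alpha=2.0):
    """Solve (alpha*I + D) x = b, where D is the time-spectral
    time-derivative operator over one period of fundamental frequency
    omega. D is diagonalized by the DFT: D = F^{-1} diag(1j*k*omega) F,
    so the coupled solve reduces to a pointwise division in frequency
    space."""
    N = len(b)
    k = np.fft.fftfreq(N, d=1.0 / N)          # integer wave numbers
    bhat = np.fft.fft(b)
    xhat = bhat / (alpha + 1j * k * omega)    # diagonal solve
    return np.real(np.fft.ifft(xhat))
```

In the paper's setting the same trick is applied after approximate factorization has separated the temporal coupling from the spatial operator; here the spatial part is absent entirely.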
Semi-automating the manual literature search for systematic reviews increases efficiency.
Chapman, Andrea L; Morgan, Laura C; Gartlehner, Gerald
2010-03-01
To minimise retrieval bias, manual literature searches are a key part of the search process of any systematic review. Given the need for accurate information, valid results of the manual literature search are essential to ensure scientific standards; likewise, efficient approaches that minimise the amount of personnel time required to conduct a manual literature search are of great interest. The objective of this project was to determine the validity and efficiency of a new manual search method that utilises the Scopus database. We used the traditional manual search approach as the gold standard to determine the validity and efficiency of the proposed Scopus method. Outcome measures included completeness of article detection and personnel time involved. Using both methods independently, we compared the results on two criteria: accuracy of article detection (validity) and time spent conducting the search (efficiency). Regarding accuracy, the Scopus method identified the same studies as the traditional approach, indicating its validity. In terms of efficiency, using Scopus led to a time saving of 62.5% compared with the traditional approach (3 h versus 8 h). The Scopus method can significantly improve the efficiency of manual searches and thus of systematic reviews.
A time-spectral approach to numerical weather prediction
NASA Astrophysics Data System (ADS)
Scheffel, Jan; Lindvall, Kristoffer; Yik, Hiu Fai
2018-05-01
Finite difference methods are traditionally used for modelling the time domain in numerical weather prediction (NWP). Time-spectral solution is an attractive alternative for reasons of accuracy and efficiency, and because time-step limitations associated with causal CFL-like criteria, typical of explicit finite difference methods, are avoided. In this work, the Lorenz 1984 chaotic equations are solved using the time-spectral algorithm GWRM (Generalized Weighted Residual Method). Comparisons of accuracy and efficiency are carried out for both explicit and implicit time-stepping algorithms. It is found that the efficiency of the GWRM compares well with these methods, in particular at high accuracy. For perturbative scenarios, the GWRM was found to be as much as four times faster than the finite difference methods. A primary reason is that the GWRM time intervals are typically two orders of magnitude larger than those of the finite difference methods. The GWRM has the additional advantage of producing analytical solutions in the form of Chebyshev series expansions. The results are encouraging for further studies, including spatial dependence, of the relevance of time-spectral methods to NWP modelling.
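The flavor of a time-spectral ODE solve can be sketched with Chebyshev collocation, used here as a stand-in for the GWRM's weighted-residual projection (the GWRM proper uses Galerkin-type projections and subdomain decomposition, neither of which appears below). The payoff shown is the one the abstract describes: the solver returns the solution over a whole time interval as a Chebyshev series rather than as a sequence of small steps.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_collocation_decay(T=5.0, N=16):
    """Solve y' = -y, y(0) = 1 on [0, T] time-spectrally: represent y by
    a Chebyshev series of degree N and enforce the ODE residual at N
    collocation points plus the initial condition."""
    # Chebyshev-Gauss points on [-1, 1], mapped to physical time t
    s = np.cos(np.pi * (np.arange(N) + 0.5) / N)
    A = np.zeros((N + 1, N + 1))
    for k in range(N + 1):
        ck = np.zeros(N + 1)
        ck[k] = 1.0
        dk = C.chebder(ck) * (2.0 / T)       # d/dt via t = T*(s+1)/2
        A[:N, k] = C.chebval(s, dk) + C.chebval(s, ck)   # residual y' + y
        A[N, k] = C.chebval(-1.0, ck)        # initial condition at t = 0
    b = np.zeros(N + 1)
    b[N] = 1.0
    return np.linalg.solve(A, b)             # Chebyshev coefficients of y
```

Evaluating the returned series with `C.chebval(2*t/T - 1, coef)` gives y at any t in [0, T], with spectral accuracy for this smooth solution.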
Real time charge efficiency monitoring for nickel electrodes in NiCd and NiH2 cells
NASA Astrophysics Data System (ADS)
Zimmerman, A. H.
1987-09-01
The charge efficiency of nickel-cadmium and nickel-hydrogen battery cells is critical in spacecraft applications for determining the amount of time required for a battery to reach a full state of charge. As the nickel-cadmium or nickel-hydrogen batteries approach about 90 percent state of charge, the charge efficiency begins to drop towards zero, making estimation of the total amount of stored charge uncertain. Charge efficiency estimates are typically based on prior history of available capacity following standardized conditions for charge and discharge. These methods work well as long as performance does not change significantly. A relatively simple method for determining charge efficiencies during real time operation for these battery cells would be a tremendous advantage. Such a method was explored and appears to be quite well suited for application to nickel-cadmium and nickel-hydrogen battery cells. The charge efficiency is monitored in real time, using only voltage measurements as inputs. With further evaluation such a method may provide a means to better manage charge control of batteries, particularly in systems where a high degree of autonomy or system intelligence is required.
Geometric multigrid for an implicit-time immersed boundary method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guy, Robert D.; Philip, Bobby; Griffith, Boyce E.
2014-10-12
The immersed boundary (IB) method is an approach to fluid-structure interaction that uses Lagrangian variables to describe the deformations and resulting forces of the structure and Eulerian variables to describe the motion and forces of the fluid. Explicit time stepping schemes for the IB method require solvers only for Eulerian equations, for which fast Cartesian grid solution methods are available. Such methods are relatively straightforward to develop and are widely used in practice but often require very small time steps to maintain stability. Implicit-time IB methods permit the stable use of large time steps, but efficient implementations of such methods require significantly more complex solvers that effectively treat both Lagrangian and Eulerian variables simultaneously. Moreover, several different approaches to solving the coupled Lagrangian-Eulerian equations have been proposed, but a complete understanding of this problem is still emerging. This paper presents a geometric multigrid method for an implicit-time discretization of the IB equations. This multigrid scheme uses a generalization of box relaxation that is shown to handle problems in which the physical stiffness of the structure is very large. Numerical examples are provided to illustrate the effectiveness and efficiency of the algorithms described herein. Finally, these tests show that using multigrid as a preconditioner for a Krylov method yields improvements in both robustness and efficiency as compared to using multigrid as a solver. They also demonstrate that, with a time step 100-1000 times larger than that permitted by an explicit IB method, the multigrid-preconditioned implicit IB method is approximately 50-200 times more efficient than the explicit method.
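A geometric multigrid V-cycle is easiest to see on a model problem. The sketch below is a textbook two-level recursion for the 1-D Poisson equation with weighted-Jacobi smoothing; it is not the paper's box relaxation, and it contains no Lagrangian coupling. It only illustrates the smooth-restrict-correct-prolong structure that the paper builds on.

```python
import numpy as np

def v_cycle(u, f, h, nu=3):
    """One multigrid V-cycle for -u'' = f on [0,1], Dirichlet BCs,
    with (n+1) grid points, n a power of two."""
    n = len(u) - 1
    for _ in range(nu):   # pre-smoothing: weighted Jacobi, weight 2/3
        u[1:-1] = (1/3) * u[1:-1] + (2/3) * 0.5 * (u[:-2] + u[2:] + h*h*f[1:-1])
    if n <= 2:
        return u
    # Residual, then full-weighting restriction to the coarse grid
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2*u[1:-1] - u[:-2] - u[2:]) / h**2
    rc = np.zeros(n // 2 + 1)
    rc[1:-1] = 0.25 * (r[1:-2:2] + 2*r[2:-1:2] + r[3::2])
    # Coarse-grid correction (recursive), then linear interpolation back
    ec = v_cycle(np.zeros_like(rc), rc, 2*h, nu)
    e = np.zeros_like(u)
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    u += e
    for _ in range(nu):   # post-smoothing
        u[1:-1] = (1/3) * u[1:-1] + (2/3) * 0.5 * (u[:-2] + u[2:] + h*h*f[1:-1])
    return u
```

Each cycle reduces the algebraic error by a grid-independent factor, which is the property that makes multigrid attractive either as a solver or, as the paper concludes is more robust, as a preconditioner for a Krylov method.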
Development of efficient time-evolution method based on three-term recurrence relation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akama, Tomoko, E-mail: a.tomo---s-b-l-r@suou.waseda.jp; Kobayashi, Osamu; Nanbu, Shinkoh, E-mail: shinkoh.nanbu@sophia.ac.jp
The advantage of the real-time (RT) propagation method is the direct solution of the time-dependent Schrödinger equation, which describes frequency properties as well as the full dynamics of a molecular system composed of electrons and nuclei in quantum physics and chemistry. Its applications have been limited by computational feasibility, as the evaluation of the time-evolution operator is computationally demanding. In this article, a new efficient time-evolution method based on the three-term recurrence relation (3TRR) is proposed to reduce the time-consuming numerical procedure. The basic formula of this approach was derived by introducing a transformation of the operator using the arcsine function. Since this operator transformation causes a transformation of time, we derived the relation between the original and transformed time. The formula was applied to assess the performance of the RT time-dependent Hartree-Fock (RT-TDHF) method and time-dependent density functional theory. Compared to the commonly used fourth-order Runge-Kutta method, our new approach decreased the computational time of the RT-TDHF calculation by about a factor of four, showing the 3TRR formula to be an efficient time-evolution method for reducing computational cost.
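Three-term recurrences for time evolution predate this paper; a classic example, shown below for flavor, is the second-order-difference (Askar-Cakmak) propagator, which advances the wavefunction by the recurrence psi(t+dt) = psi(t-dt) - 2i dt H psi(t) and so needs only one matrix-vector product per step instead of a matrix exponential. This is not the authors' arcsine-transformed 3TRR formula, only an illustration of why recurrence-based propagation is cheap.

```python
import numpy as np

def propagate_3term(H, psi0, dt, n_steps):
    """Second-order-difference propagator for i d(psi)/dt = H psi:
    a three-term recurrence in time. H is a Hermitian matrix (atomic
    units assumed)."""
    psi_prev = psi0
    # Accurate startup step: truncated series for exp(-i H dt) psi0
    psi = psi0 - 1j * dt * (H @ psi0) - 0.5 * dt**2 * (H @ (H @ psi0))
    for _ in range(n_steps - 1):
        psi_prev, psi = psi, psi_prev - 2j * dt * (H @ psi)
    return psi
```

The local error is O(dt^3), so the global phase error decays quadratically with the step size; stability requires dt below 1/||H||.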
Efficient path-based computations on pedigree graphs with compact encodings
2012-01-01
A pedigree is a diagram of family relationships, and it is often used to determine the mode of inheritance (dominant, recessive, etc.) of genetic diseases. With rapidly growing knowledge of genetics and the accumulation of genealogy information, pedigree data are becoming increasingly important. In large pedigree graphs, path-based methods for efficiently computing genealogical measurements, such as the inbreeding and kinship coefficients of individuals, depend on efficient identification and processing of paths. In this paper, we propose a new compact path-encoding scheme for large pedigrees, accompanied by an efficient algorithm for identifying paths. We demonstrate the utility of the proposed method by applying it to inbreeding coefficient computation. We present time and space complexity analyses, and demonstrate the efficiency of our method for evaluating inbreeding coefficients, compared with previous methods, through experiments on pedigree graphs with real and synthetic data. Both theoretical and experimental results demonstrate that our method is more scalable and efficient than previous methods in terms of time and space requirements. PMID:22536898
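For reference, the quantity being computed can be defined without any path machinery. The sketch below is the classic recursive kinship/inbreeding computation, not the paper's compact path encoding; the pedigree is a plain dictionary, and individual ids are assumed numbered so that parents always have smaller ids than their children.

```python
def kinship(a, b, parents, memo=None):
    """Wright's kinship coefficient phi(a, b). `parents` maps a child id
    to (father, mother); founders are absent from the map and assumed
    unrelated and non-inbred."""
    if memo is None:
        memo = {}
    if a is None or b is None:
        return 0.0
    if a > b:
        a, b = b, a
    if (a, b) in memo:
        return memo[(a, b)]
    if a == b:
        f, m = parents.get(a, (None, None))
        val = 0.5 * (1.0 + kinship(f, m, parents, memo))
    else:
        # Recurse through the younger individual's parents
        f, m = parents.get(b, (None, None))
        val = 0.5 * (kinship(a, f, parents, memo) + kinship(a, m, parents, memo))
    memo[(a, b)] = val
    return val

def inbreeding(x, parents):
    # F(x) is the kinship coefficient of x's parents.
    f, m = parents.get(x, (None, None))
    return kinship(f, m, parents)
```

The path-based formulation the paper accelerates sums (1/2)^L (1 + F_A) over all paths of length L through each common ancestor A; the recursion above computes the same values but revisits shared sub-pedigrees, which is what efficient path encodings avoid.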
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurnik, Charles W; Stern, Frank; Spencer, Justin
Savings from electric energy efficiency measures and programs are often expressed in terms of annual energy and presented as kilowatt-hours per year (kWh/year). However, for a full assessment of the value of these savings, it is usually necessary to consider the measure or program's impact on peak demand as well as time-differentiated energy savings. This cross-cutting protocol describes methods for estimating the peak demand and time-differentiated energy impacts of measures implemented through energy efficiency programs.
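The protocol's estimation methods are not reproduced here; as a minimal illustration of the bookkeeping, assuming a hypothetical 8760-hour savings load shape and a utility-defined peak-period mask (both inputs are assumptions, not part of the protocol), annual kWh savings can be split into peak-period energy and an average peak-demand reduction:

```python
import numpy as np

def time_differentiated_savings(hourly_kwh, peak_hours):
    """hourly_kwh: length-8760 array of hourly energy savings (kWh).
    peak_hours: boolean mask marking the utility's peak-period hours.
    Returns annual energy, peak-period energy, and the average demand
    reduction over the peak period (kW = kWh per hour)."""
    return {
        "annual_kwh": float(hourly_kwh.sum()),
        "peak_kwh": float(hourly_kwh[peak_hours].sum()),
        "avg_peak_kw": float(hourly_kwh[peak_hours].mean()),
    }
```

Real evaluations weight these quantities by time-varying avoided costs and use measured or modeled load shapes; this sketch only shows why an annual kWh figure alone understates (or overstates) value when savings are concentrated on or off peak.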
NASA Astrophysics Data System (ADS)
Liao, Feng; Zhang, Luming; Wang, Shanshan
2018-02-01
In this article, we formulate an efficient and accurate numerical method for approximating the coupled Schrödinger-Boussinesq (SBq) system. The main features of our method are: (i) the application of a time-splitting Fourier spectral method to the Schrödinger-like equation in the SBq system, and (ii) the use of an exponential wave integrator Fourier pseudospectral method for the spatial derivatives in the Boussinesq-like equation. The scheme is fully explicit and efficient due to the fast Fourier transform. Numerical examples are presented to show the efficiency and accuracy of our method.
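The time-splitting Fourier spectral ingredient can be sketched on the standalone cubic Schrödinger equation (the Boussinesq coupling is omitted, so this is not the SBq scheme itself). For i u_t + u_xx + 2|u|^2 u = 0 on a periodic domain, Strang splitting alternates an exact nonlinear phase rotation with an exact linear step in Fourier space:

```python
import numpy as np

def split_step_nls(psi, L, dt, n_steps):
    """Strang-split Fourier method for i psi_t + psi_xx + 2|psi|^2 psi = 0
    on a periodic domain of length L. Both substeps are solved exactly,
    so the only error is the O(dt^2) splitting error."""
    N = len(psi)
    k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
    expL = np.exp(-1j * k**2 * dt)                    # exact linear step
    for _ in range(n_steps):
        psi = psi * np.exp(1j * np.abs(psi)**2 * dt)  # half nonlinear step
        psi = np.fft.ifft(expL * np.fft.fft(psi))     # full linear step
        psi = psi * np.exp(1j * np.abs(psi)**2 * dt)  # half nonlinear step
    return psi
```

The method is fully explicit, costs two FFTs per step, and conserves the discrete mass exactly, which can be checked against the soliton solution u(x, t) = e^{it} sech(x).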
hp-Adaptive time integration based on the BDF for viscous flows
NASA Astrophysics Data System (ADS)
Hay, A.; Etienne, S.; Pelletier, D.; Garon, A.
2015-06-01
This paper presents a procedure based on the Backward Differentiation Formulas of order 1 to 5 to obtain efficient time integration of the incompressible Navier-Stokes equations. The adaptive algorithm performs both stepsize and order selection to control, respectively, the solution accuracy and the computational efficiency of the time integration process. The stepsize selection (h-adaptivity) is based on a local error estimate and an error controller to guarantee that the numerical solution accuracy is within a user-prescribed tolerance. The order selection (p-adaptivity) relies on the idea that low-accuracy solutions can be computed efficiently by low-order time integrators, while accurate solutions require high-order time integrators to keep computational time low. The selection is based on a stability test that detects growing numerical noise and deems a method of order p stable if there is no method of lower order that delivers the same solution accuracy for a larger stepsize. Hence, it guarantees both that (1) the method of integration operates inside its stability region and (2) the time integration procedure is computationally efficient. The proposed time integration procedure also features time-step rejection and quarantine mechanisms, a modified Newton method with a predictor, and dense-output techniques to compute the solution at off-step points.
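The h-adaptive half of such a procedure can be sketched in isolation. The code below is an adaptive BDF1 (backward Euler) loop for a scalar ODE with a step-doubling error estimate and a standard controller; the paper's order adaptivity (BDF1-5 with the stability test), quarantine mechanism, and dense output are all omitted, and the function names are illustrative.

```python
import numpy as np

def adaptive_backward_euler(f, dfdy, y0, t0, t1, tol=1e-6, h=1e-2):
    """h-adaptive backward Euler for a scalar ODE y' = f(t, y), with
    Jacobian dfdy for the Newton solve. Local error is estimated by
    comparing one full step against two half steps."""
    def be_step(t, y, hh):
        z = y
        for _ in range(50):                 # Newton iteration
            g = z - y - hh * f(t + hh, z)
            z -= g / (1.0 - hh * dfdy(t + hh, z))
            if abs(g) < 1e-14:
                break
        return z

    t, y = t0, y0
    while t < t1:
        h = min(h, t1 - t)
        y1 = be_step(t, y, h)                               # one full step
        yh = be_step(t + h/2, be_step(t, y, h/2), h/2)      # two half steps
        err = abs(yh - y1)                                  # local error estimate
        if err <= tol or h < 1e-12:
            t, y = t + h, yh                                # accept refined value
        # Controller: local error of BDF1 is O(h^2), hence the 1/2 exponent
        h *= min(5.0, max(0.2, 0.9 * (tol / max(err, 1e-16))**0.5))
    return y
```

Rejected steps simply shrink h and retry, which is the same accept/reject pattern the paper's rejection mechanism generalizes.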
Feng, Shuo
2014-01-01
Parallel excitation (pTx) techniques with multiple transmit channels have been widely used in high-field MRI to shorten the RF pulse duration and/or reduce the specific absorption rate (SAR). However, the efficiency of pulse design still needs substantial improvement for practical real-time applications. In this paper, we present a detailed description of a fast pulse design method based on Fourier-domain gridding and a conjugate gradient method. Simulation results show that the proposed method can design pTx pulses at an efficiency 10 times higher than that of the conventional conjugate-gradient-based method, without reducing the accuracy of the desired excitation patterns. PMID:24834420
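The conjugate-gradient core of least-squares pulse design can be sketched generically. The code below is CG on the normal equations (CGNR) for a complex system matrix; it is not the paper's method, because the paper's speedup comes from evaluating the system matrix and its adjoint quickly via Fourier-domain gridding, which this sketch replaces with plain dense products.

```python
import numpy as np

def cgnr(A, b, n_iter=50, tol=1e-12):
    """Conjugate gradient on the normal equations A^H A x = A^H b for a
    complex least-squares problem min ||A x - b||. Only products with A
    and A^H are needed, which is what fast (gridded) operators exploit."""
    x = np.zeros(A.shape[1], dtype=complex)
    r = A.conj().T @ b          # normal-equations residual at x = 0
    p = r.copy()
    rs = np.vdot(r, r).real
    rs0 = rs
    for _ in range(n_iter):
        Ap = A.conj().T @ (A @ p)
        alpha = rs / np.vdot(p, Ap).real
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = np.vdot(r, r).real
        if rs_new < tol**2 * rs0:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

In small-tip-angle pulse design, A would encode the spatial excitation model and x the per-channel RF samples; here both are generic.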
Multiresolution molecular mechanics: Implementation and efficiency
NASA Astrophysics Data System (ADS)
Biyikli, Emre; To, Albert C.
2017-01-01
Atomistic/continuum coupling methods combine accurate atomistic methods and efficient continuum methods to simulate the behavior of highly ordered crystalline systems. Coupled methods utilize the advantages of both approaches to simulate systems at a lower computational cost, while retaining the accuracy associated with atomistic methods. Many concurrent atomistic/continuum coupling methods have been proposed in the past; however, their true computational efficiency has not been demonstrated. The present work presents an efficient implementation of a concurrent coupling method called Multiresolution Molecular Mechanics (MMM) for serial, parallel, and adaptive analysis. First, we present the features of the software implemented along with the associated technologies. The scalability of the software implementation is demonstrated, and the competing effects of multiscale modeling and parallelization are discussed. Then, the algorithms contributing to the efficiency of the software are presented. These include algorithms for eliminating latent ghost atoms from calculations and measurement-based dynamic balancing of parallel workload. The efficiency improvements made by these algorithms are demonstrated by benchmark tests. The efficiency of the software is found to be on par with LAMMPS, a state-of-the-art Molecular Dynamics (MD) simulation code, when performing full atomistic simulations. Speed-up of the MMM method is shown to be directly proportional to the reduction of the number of atoms visited in force computation. Finally, an adaptive MMM analysis on a nanoindentation problem, containing over a million atoms, is performed, yielding an improvement of 6.3-8.5 times in efficiency over the full atomistic MD method. For the first time, the efficiency of a concurrent atomistic/continuum coupling method is comprehensively investigated and demonstrated.
Rocha, C F D; Van Sluys, M; Hatano, F H; Boquimpani-Freitas, L; Marra, R V; Marques, R V
2004-11-01
Studies on anurans in restinga habitats are few and, as a result, there is little information on which methods are more efficient for sampling them in this environment. Ten methods are usually used for sampling anuran communities in tropical and sub-tropical areas. In this study we evaluate which methods are most appropriate for this purpose in the restinga environment of Parque Nacional da Restinga de Jurubatiba. We analyzed six of the methods usually used for anuran sampling. For each method, we recorded the total time spent (in min.), the number of researchers involved, and the number of species captured. We calculated a capture efficiency index (the time necessary for one researcher to capture one frog) to make the data obtained comparable. Of the methods analyzed, the species inventory (9.7 min/searcher/ind. [MSI]; richness = 6; abundance = 23) and the breeding site survey (9.5 MSI; richness = 4; abundance = 22) were the most efficient. The visual encounter inventory (45.0 MSI) and patch sampling (65.0 MSI) methods were of comparatively lower efficiency in the restinga, whereas the plot sampling and pit-fall traps with drift-fence methods resulted in no frog captures. We conclude that there is a considerable difference in the efficiency of the methods used in the restinga environment, that the complete species inventory method is highly efficient for sampling frogs in the restinga studied, and that it may be so in other restinga environments. Methods that are usually efficient in forested areas seem to be of little value in open restinga habitats.
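The capture efficiency index is simple arithmetic; a sketch follows, assuming (as the text describes) that the index is total person-minutes of effort divided by the number of individuals captured, with lower values meaning higher efficiency. The example numbers are hypothetical, not the study's raw data.

```python
def capture_efficiency_index(minutes, searchers, individuals):
    """Min/searcher/ind. (MSI): person-minutes of search effort per frog
    captured. Returns None when nothing was captured (as for the plot
    sampling and pit-fall methods in the study)."""
    if individuals == 0:
        return None
    return minutes * searchers / individuals
```

For example, two researchers searching for 120 minutes and capturing 24 frogs would score 10.0 MSI.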
NASA Astrophysics Data System (ADS)
Kwon, Deuk-Chul; Shin, Sung-Sik; Yu, Dong-Hun
2017-10-01
In order to reduce the computing time of simulations of radio frequency (rf) plasma sources, various numerical schemes have been developed. It is well known that the upwind, exponential, and power-law schemes can efficiently overcome the limitation on grid size in fluid transport simulations of high-density plasma discharges; likewise, the semi-implicit method is a well-known scheme for overcoming the limitation on the simulation time step. However, despite remarkable advances in numerical techniques and computing power over the last few decades, efficient multi-dimensional modeling of low-temperature plasma discharges has remained a considerable challenge. In particular, parallelization in time has been difficult for time-periodic steady-state problems, such as capacitively coupled plasma discharges and rf sheath dynamics, because the values of the plasma parameters at the previous time step are needed to compute the new values at each time step. We therefore present a parallelization method for time-periodic steady-state problems that uses period-slices. To evaluate the efficiency of the developed method, one-dimensional fluid simulations of rf sheath dynamics were conducted. The results show that speedup can be achieved by using a multithreading method.
Efficient biprediction decision scheme for fast high efficiency video coding encoding
NASA Astrophysics Data System (ADS)
Park, Sang-hyo; Lee, Seung-ho; Jang, Euee S.; Jun, Dongsan; Kang, Jung-Won
2016-11-01
An efficient biprediction decision scheme for high efficiency video coding (HEVC) is proposed for fast-encoding applications. For low-delay video applications, bidirectional prediction can be used to increase compression performance efficiently with previous reference frames. At the same time, however, the computational complexity of the HEVC encoder is significantly increased due to the additional biprediction search. Although some research has attempted to reduce this complexity, whether biprediction is strongly related to both motion complexity and prediction modes in a coding unit has not yet been investigated. A method that avoids most compression-inefficient search points is proposed so that the computational complexity of the motion estimation process can be dramatically decreased. To determine whether biprediction is critical, the proposed method exploits the stochastic correlation of the context of prediction units (PUs): the direction of a PU and the accuracy of a motion vector. Experimental results show that the time complexity of biprediction can be reduced to 30% on average, outperforming existing methods in terms of encoding time, number of function calls, and memory access.
Tensor-product preconditioners for higher-order space-time discontinuous Galerkin methods
NASA Astrophysics Data System (ADS)
Diosady, Laslo T.; Murman, Scott M.
2017-02-01
A space-time discontinuous-Galerkin spectral-element discretization is presented for direct numerical simulation of the compressible Navier-Stokes equations. An efficient solution technique based on a matrix-free Newton-Krylov method is developed in order to overcome the stiffness associated with high solution order. The use of tensor-product basis functions is key to maintaining efficiency at high-order. Efficient preconditioning methods are presented which can take advantage of the tensor-product formulation. A diagonalized Alternating-Direction-Implicit (ADI) scheme is extended to the space-time discontinuous Galerkin discretization. A new preconditioner for the compressible Euler/Navier-Stokes equations based on the fast-diagonalization method is also presented. Numerical results demonstrate the effectiveness of these preconditioners for the direct numerical simulation of subsonic turbulent flows.
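The fast-diagonalization idea mentioned above can be shown on a small tensor-product model problem (a Sylvester equation), without any of the Navier-Stokes machinery. If the 1-D operators are symmetric, each is eigendecomposed once, and the coupled solve collapses to a pointwise division; `fast_diag_solve` is an illustrative name.

```python
import numpy as np

def fast_diag_solve(A, B, F):
    """Solve the tensor-product (Sylvester-form) system A U + U B = F,
    assuming A and B are symmetric. After diagonalizing the small 1-D
    operators, the coupled problem is diagonal: W_ij = C_ij/(da_i+db_j)."""
    da, Va = np.linalg.eigh(A)
    db, Vb = np.linalg.eigh(B)
    C = Va.T @ F @ Vb
    W = C / (da[:, None] + db[None, :])   # requires da_i + db_j != 0
    return Va @ W @ Vb.T
```

The cost is two small eigendecompositions (done once, reusable across right-hand sides) plus a few dense products, rather than a solve with the full Kronecker-sum matrix; the same structure is what makes tensor-product preconditioners cheap to apply at high order.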
Studies on the laboratory diagnosis of human filariasis: Preliminary communication
Goldsmid, J. M.
1970-01-01
Five laboratory methods used for the recovery of microfilariae from the blood were compared for efficiency of recovery and time involved. The methods used were thin blood films, thick blood films, wet preparations, the Polyvidone technique, and the microhaematocrit technique. The last proved superior in both recovery efficiency and time saved. PMID:5529998
A novel scene management technology for complex virtual battlefield environment
NASA Astrophysics Data System (ADS)
Sheng, Changchong; Jiang, Libing; Tang, Bo; Tang, Xiaoan
2018-04-01
The efficient scene management of a virtual environment is an important research topic in real-time computer visualization and has a decisive influence on rendering efficiency. However, traditional scene management methods are not suitable for complex virtual battlefield environments. This paper combines the advantages of traditional scene graph technology and spatial data structure methods. Using the idea of separating management from rendering, a loose object-oriented scene graph structure is established to manage the entity model data in the scene, and a performance-based quad-tree structure is created for traversal and rendering. In addition, a collaborative update relationship between the two structural trees is designed to achieve efficient scene management. Compared with previous scene management methods, this method is more efficient and meets the needs of real-time visualization.
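The quad-tree side of such a design can be sketched minimally. The class below is a generic point-region quadtree with insertion and box queries (the kind of query a renderer issues for view culling); the scene-graph half, the entity data, and the collaborative update mechanism from the paper are all omitted.

```python
class Quadtree:
    """Minimal point-region quadtree: a node splits into four children
    once it exceeds its capacity. Coordinates use half-open intervals so
    every point lands in exactly one child."""
    def __init__(self, x, y, w, h, capacity=4):
        self.x, self.y, self.w, self.h = x, y, w, h
        self.capacity = capacity
        self.points = []
        self.children = None

    def _contains(self, px, py):
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h

    def insert(self, px, py):
        if not self._contains(px, py):
            return False
        if self.children is None:
            if len(self.points) < self.capacity:
                self.points.append((px, py))
                return True
            self._split()
        return any(c.insert(px, py) for c in self.children)

    def _split(self):
        hw, hh = self.w / 2, self.h / 2
        self.children = [
            Quadtree(self.x,      self.y,      hw, hh, self.capacity),
            Quadtree(self.x + hw, self.y,      hw, hh, self.capacity),
            Quadtree(self.x,      self.y + hh, hw, hh, self.capacity),
            Quadtree(self.x + hw, self.y + hh, hw, hh, self.capacity),
        ]
        for p in self.points:            # push stored points down
            any(c.insert(*p) for c in self.children)
        self.points = []

    def query(self, qx, qy, qw, qh, found=None):
        """Collect all points inside the axis-aligned box (e.g. a view
        region), skipping subtrees that cannot overlap it."""
        if found is None:
            found = []
        if (qx >= self.x + self.w or qx + qw <= self.x or
                qy >= self.y + self.h or qy + qh <= self.y):
            return found
        for (px, py) in self.points:
            if qx <= px < qx + qw and qy <= py < qy + qh:
                found.append((px, py))
        if self.children:
            for c in self.children:
                c.query(qx, qy, qw, qh, found)
        return found
```

Because non-overlapping subtrees are pruned, a query touches O(log n + k) nodes for k results in favorable cases, which is the efficiency argument behind quad-tree traversal for rendering.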
Liang, Xiao; Khaliq, Abdul Q. M.; Xing, Yulong
2015-01-23
In this paper, we study a local discontinuous Galerkin method combined with fourth-order exponential time differencing Runge-Kutta time discretization and a fourth-order conservative method for solving the nonlinear Schrödinger equations. Based on different choices of numerical fluxes, we propose both energy-conserving and energy-dissipative local discontinuous Galerkin methods, and prove error estimates for the semi-discrete methods applied to the linear Schrödinger equation. The numerical methods are shown to be highly efficient and stable for long-range soliton computations. Finally, extensive numerical examples are provided to illustrate the accuracy, efficiency and reliability of the proposed methods.
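The fourth-order ETDRK4 scheme used in the paper is lengthy; the first-order exponential time differencing step below illustrates the principle on a stiff scalar ODE u' = c*u + N(u). The stiff linear part is integrated exactly, so the step stays stable for |c·h| far beyond the explicit-Euler limit; function names are illustrative.

```python
import numpy as np

def etd1(c, nonlin, u0, h, n_steps):
    """First-order exponential time differencing for u' = c*u + nonlin(u):
    u_{n+1} = e^{ch} u_n + h*phi1(ch)*nonlin(u_n), with
    phi1(z) = (e^z - 1)/z. Exact for constant nonlin, and stable for any
    step size when Re(c) < 0."""
    E = np.exp(c * h)
    phi1 = (E - 1.0) / (c * h)
    u = u0
    for _ in range(n_steps):
        u = E * u + h * phi1 * nonlin(u)
    return u
```

For c = -1000 and h = 0.1, explicit Euler would diverge violently (its amplification factor is -99), while the ETD step tracks the exact solution; ETDRK4 applies the same exact-linear-part idea with a fourth-order treatment of the nonlinearity.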
Spectral difference Lanczos method for efficient time propagation in quantum control theory
NASA Astrophysics Data System (ADS)
Farnum, John D.; Mazziotti, David A.
2004-04-01
Spectral difference methods represent the real-space Hamiltonian of a quantum system as a banded matrix which possesses the accuracy of the discrete variable representation (DVR) and the efficiency of finite differences. When applied to time-dependent quantum mechanics, spectral differences enhance the efficiency of propagation methods for evolving the Schrödinger equation. We develop a spectral difference Lanczos method which is computationally more economical than the sinc-DVR Lanczos method, the split-operator technique, and even the fast-Fourier-Transform Lanczos method. Application of fast propagation is made to quantum control theory where chirped laser pulses are designed to dissociate both diatomic and polyatomic molecules. The specificity of the chirped laser fields is also tested as a possible method for molecular identification and discrimination.
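The Lanczos propagation step can be sketched independently of the spectral-difference Hamiltonian: given any Hermitian H (dense here, for simplicity; in the paper H is banded, which is what makes the matrix-vector products cheap), exp(-iHt)v is approximated in a small Krylov subspace built from products with H.

```python
import numpy as np

def lanczos_expm(H, v, t, m=30):
    """Approximate exp(-1j*t*H) @ v in an m-dimensional Krylov subspace.
    Assumes H is Hermitian; only matrix-vector products with H are used,
    plus the exponential of a small m-by-m tridiagonal matrix."""
    nrm = np.linalg.norm(v)
    V = np.zeros((len(v), m), dtype=complex)
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    V[:, 0] = v / nrm
    w = H @ V[:, 0]
    alpha[0] = np.vdot(V[:, 0], w).real
    w = w - alpha[0] * V[:, 0]
    for j in range(1, m):
        beta[j - 1] = np.linalg.norm(w)
        V[:, j] = w / beta[j - 1]
        w = H @ V[:, j]
        alpha[j] = np.vdot(V[:, j], w).real
        w = w - alpha[j] * V[:, j] - beta[j - 1] * V[:, j - 1]
    # Tridiagonal projection T = V^H H V; exponentiate it cheaply.
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    d, U = np.linalg.eigh(T)
    small = U @ (np.exp(-1j * t * d) * U[0, :])
    return nrm * (V @ small)
```

Convergence in m is superexponential once m exceeds t times the spectral radius, which is why short-time propagation with a banded H is so economical.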
Xu, Kedong; Huang, Xiaohui; Wu, Manman; Wang, Yan; Chang, Yunxia; Liu, Kun; Zhang, Ju; Zhang, Yi; Zhang, Fuli; Yi, Liming; Li, Tingting; Wang, Ruiyue; Tan, Guangxuan; Li, Chengwei
2014-01-01
Transient transformation is simpler, more efficient, and more economical for analyzing protein subcellular localization than stable transformation. Fluorescent fusion proteins are often used in transient transformation to follow the in vivo behavior of proteins. Onion epidermis, which has large, living, transparent cells in a monolayer, is well suited to visualizing fluorescent fusion proteins. Commonly used transient transformation methods include particle bombardment, protoplast transfection, and Agrobacterium-mediated transformation. Particle bombardment of onion epidermis has been successfully established; however, it is expensive, depends on biolistic equipment, and has low transformation efficiency. We developed a highly efficient in planta transient transformation method for onion epidermis using a special agroinfiltration method, which can be completed within 5 days from the pretreatment of the onion bulb to the optimal time point for analyzing gene expression. The transformation conditions were optimized to achieve 43.87% transformation efficiency in living onion epidermis. The developed method has advantages in cost, time consumption, equipment dependency, and transformation efficiency compared with particle bombardment of onion epidermal cells, protoplast transfection, and Agrobacterium-mediated transient transformation of leaf epidermal cells of other plants. It will facilitate the analysis of protein subcellular localization on a large scale.
Exponential Methods for the Time Integration of Schroedinger Equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cano, B.; Gonzalez-Pachon, A.
2010-09-30
We consider exponential methods of second order in time for integrating the cubic nonlinear Schroedinger equation. We aim to exploit the special structure of this equation, and therefore examine the symmetry, symplecticity, and invariant-approximation properties of the proposed methods, which allow integration over long times with reasonable accuracy. Computational efficiency is also our aim; we therefore perform numerical computations to compare the methods considered, and conclude that explicit Lawson schemes projected onto the norm of the solution are an efficient tool for integrating this equation.
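The Lawson approach can be sketched in a few lines: the linear part of the equation is propagated exactly in Fourier space, an explicit Runge-Kutta rule handles the nonlinearity in the twisted variable, and the result is projected back onto the initial norm. The sketch below (a midpoint Lawson-RK2 for i u_t + u_xx + |u|^2 u = 0 on a periodic grid) is an illustrative example of the scheme class studied, not the authors' code; the grid size, step size, and projection strategy are hypothetical choices.

```python
import numpy as np

def lawson_rk2_nls(u0, dt, steps, length=2*np.pi):
    """Second-order Lawson exponential scheme for the cubic NLS
        i u_t + u_xx + |u|^2 u = 0   (periodic domain),
    with projection onto the norm of the solution after each step.
    Illustrative sketch of the scheme class, not the authors' code."""
    n = u0.size
    k = 2*np.pi*np.fft.fftfreq(n, d=length/n)   # angular wavenumbers
    lin = -1j*k**2                              # u_t = i u_xx  =>  u_hat_t = -i k^2 u_hat
    E, Eh = np.exp(lin*dt), np.exp(lin*dt/2)    # exact linear propagators (full/half step)
    expL = lambda w, prop: np.fft.ifft(prop*np.fft.fft(w))
    nonlin = lambda w: 1j*np.abs(w)**2*w        # nonlinear term i|u|^2 u
    norm0 = np.linalg.norm(u0)
    u = u0.astype(complex)
    for _ in range(steps):
        # Lawson-RK2 (midpoint rule) applied to the twisted variable v = e^{-tL} u
        u_mid = expL(u + 0.5*dt*nonlin(u), Eh)
        u = expL(u, E) + dt*expL(nonlin(u_mid), Eh)
        u *= norm0/np.linalg.norm(u)            # projection onto the conserved L2 norm
    return u
```

A convenient check is the plane wave u = A exp(i(kx - (k^2 - A^2)t)), which for A = k = 1 is stationary, so the numerical solution should stay close to the initial data while the norm is preserved exactly by the projection.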
A Method of Efficient Inclination Changes for Low-thrust Spacecraft
NASA Technical Reports Server (NTRS)
Falck, Robert; Gefert, Leon
2002-01-01
The evolution of low-thrust propulsion technologies has reached a point where such systems have become an economical option for many space missions. The development of efficient, low trip time control laws has received increasing attention in recent years, though few studies have examined inclination-changing maneuvers in detail. A method for performing economical inclination changes through the use of an efficiency factor is derived from Lagrange's planetary equations. The efficiency factor can be used to regulate propellant expenditure at the expense of trip time. Such a method can be used for discontinuous-thrust transfers that offer reduced propellant masses and trip times in comparison to continuous-thrust transfers, while utilizing thrusters that operate at a lower specific impulse. Performance comparisons of transfers utilizing this approach with continuous-thrust transfers are generated through trajectory simulation and are presented in this paper.
NASA Astrophysics Data System (ADS)
MacDonald, Christopher L.; Bhattacharya, Nirupama; Sprouse, Brian P.; Silva, Gabriel A.
2015-09-01
Computing numerical solutions to fractional differential equations can be computationally intensive due to the effect of non-local derivatives, in which all previous time points contribute to the current iteration. In general, numerical approaches that truncate part of the system history, while efficient, can suffer from high degrees of error and inaccuracy. Here we present an adaptive time step memory method for smooth functions applied to the Grünwald-Letnikov fractional diffusion derivative. This method is computationally efficient and results in smaller errors during numerical simulations. Sampled points along the system's history at progressively longer intervals are assumed to reflect the values of neighboring time points. By including progressively fewer points backward in time, a temporally 'weighted' history is computed that includes contributions from the entire past of the system, maintaining accuracy with fewer points actually calculated and thereby greatly improving computational efficiency.
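The idea can be sketched directly on the Grünwald-Letnikov sum: recent history is kept point-by-point, older history is sampled at progressively doubling strides, and each retained point carries the lumped weight of the neighbors it stands in for. The stride-doubling rule and `base` parameter below are assumptions for illustration, not the published scheme; the saving is in function evaluations, since the weights themselves are cheap to compute recursively.

```python
import numpy as np

def gl_weights(alpha, n):
    # Recursive Grünwald-Letnikov binomial weights w_j = (-1)^j C(alpha, j)
    w = np.empty(n + 1)
    w[0] = 1.0
    for j in range(1, n + 1):
        w[j] = w[j-1]*(1.0 - (alpha + 1.0)/j)
    return w

def gl_derivative_full(f, t, h, alpha):
    """Full-memory GL fractional derivative of f at t = n*h (lower limit 0)."""
    n = int(round(t/h))
    w = gl_weights(alpha, n)
    hist = np.array([f(t - j*h) for j in range(n + 1)])
    return h**(-alpha)*np.dot(w, hist)

def gl_derivative_adaptive(f, t, h, alpha, base=10):
    """Adaptive-memory sketch: keep recent points individually, then double
    the sampling stride each 'decade' of history; each sampled point is
    weighted by the sum of the GL weights of the neighbors it represents.
    The stride rule is an illustrative assumption, not the published scheme."""
    n = int(round(t/h))
    w = gl_weights(alpha, n)
    total, j, stride = 0.0, 0, 1
    while j <= n:
        if j >= base*stride:
            stride *= 2                          # progressively longer intervals
        total += w[j:min(j + stride, n + 1)].sum()*f(t - j*h)
        j += stride
    return h**(-alpha)*total
```

For f(t) = t the fractional derivative is known in closed form, D^alpha t = t^(1-alpha)/Gamma(2-alpha), which makes a convenient accuracy check for both variants.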
Biological optimization systems for enhancing photosynthetic efficiency and methods of use
Hunt, Ryan W.; Chinnasamy, Senthil; Das, Keshav C.; de Mattos, Erico Rolim
2012-11-06
Biological optimization systems for enhancing photosynthetic efficiency and methods of use are disclosed. Specifically, the methods for enhancing photosynthetic efficiency include applying pulsed light to a photosynthetic organism, using a chlorophyll fluorescence feedback control system to determine one or more photosynthetic efficiency parameters, and adjusting one or more of those parameters to drive photosynthesis by delivering an amount of light that optimizes light absorption by the organism while providing enough dark time between light pulses to prevent oversaturation of the chlorophyll reaction centers.
A sub-space greedy search method for efficient Bayesian Network inference.
Zhang, Qing; Cao, Yong; Li, Yong; Zhu, Yanming; Sun, Samuel S M; Guo, Dianjing
2011-09-01
Bayesian network (BN) has been successfully used to infer the regulatory relationships of genes from microarray dataset. However, one major limitation of BN approach is the computational cost because the calculation time grows more than exponentially with the dimension of the dataset. In this paper, we propose a sub-space greedy search method for efficient Bayesian Network inference. Particularly, this method limits the greedy search space by only selecting gene pairs with higher partial correlation coefficients. Using both synthetic and real data, we demonstrate that the proposed method achieved comparable results with standard greedy search method yet saved ∼50% of the computational time. We believe that sub-space search method can be widely used for efficient BN inference in systems biology. Copyright © 2011 Elsevier Ltd. All rights reserved.
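The pre-filtering idea, restricting the greedy search space to gene pairs whose correlation survives conditioning on third genes, can be sketched as follows. This is an illustrative reading of the sub-space approach, not the authors' implementation; the threshold value and the use of first-order partial correlations only are assumptions.

```python
import numpy as np
from itertools import combinations

def candidate_edges(X, threshold=0.3):
    """Restrict a BN greedy search to promising gene pairs.

    X: (samples x genes) expression matrix. A pair (i, j) is kept only if
    its first-order partial correlation, controlling for every third gene z,
    stays above the (assumed) threshold; otherwise the dependence is
    'explained away' by some z and the pair is excluded from the search."""
    r = np.corrcoef(X, rowvar=False)
    p = X.shape[1]
    edges = []
    for i, j in combinations(range(p), 2):
        keep = True
        for z in range(p):
            if z in (i, j):
                continue
            denom = np.sqrt((1 - r[i, z]**2)*(1 - r[j, z]**2))
            pc = (r[i, j] - r[i, z]*r[j, z])/denom   # first-order partial corr.
            if abs(pc) < threshold:
                keep = False
                break
        if keep:
            edges.append((i, j))
    return edges
```

On a simple synthetic chain X0 -> X1 -> X2, the direct pairs survive while the indirect pair (X0, X2) is filtered out, which is exactly the reduction of the search space the method relies on.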
Using the entire history in the analysis of nested case cohort samples.
Rivera, C L; Lumley, T
2016-08-15
Countermatching designs can provide more efficient estimates than simple matching or case-cohort designs in certain situations, such as when good surrogate variables for an exposure of interest are available. We extend pseudolikelihood estimation for the Cox model under countermatching designs to models where time-varying covariates are considered. We also implement pseudolikelihood with calibrated weights to improve efficiency in nested case-control designs in the presence of time-varying variables. A simulation study is carried out, which considers four different scenarios: a binary time-dependent variable, a continuous time-dependent variable, and each case including interactions. Simulation results show that pseudolikelihood with calibrated weights under countermatching offers large gains in efficiency compared to case-cohort designs. Pseudolikelihood with calibrated weights yielded more efficient estimators than ordinary pseudolikelihood estimators. Additionally, estimators were more efficient under countermatching than under case-cohort sampling for the situations considered. The methods are illustrated using the Colorado Plateau uranium miners cohort. Furthermore, we present a general method to generate survival times with time-varying covariates. Copyright © 2016 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Kholil, Muhammad; Nurul Alfa, Bonitasari; Hariadi, Madjumsyah
2018-04-01
Network planning is one of the management techniques used to plan and control the implementation of a project, showing the relationships between activities. The objective of this research is to construct network planning for a house construction project at CV. XYZ and to determine the role of network planning in increasing time efficiency, so that the optimal project completion period can be obtained. This research uses a descriptive method, with data collected through direct observation of the company, interviews, and literature study. The result of this research is an optimal time plan for the project work. Based on the results, it can be concluded that the use of both methods in scheduling the house construction project has a very significant effect on the completion time of the project. The CPM (Critical Path Method) can complete the project in 131 days, while the PERT (Program Evaluation and Review Technique) method takes 136 days. The PERT calculation yielded Z = -0.66, corresponding to 0.2546 from the normal distribution table, and a completion probability of 74.54%. This means the likelihood that the house construction project activities can be completed on time is quite high. Without either method, project completion takes 173 days, so by using the CPM method the company can save up to 42 days and gains time efficiency through network planning.
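The two scheduling techniques reduce to simple computations: CPM is a longest-path (forward-pass) calculation over the activity network, and PERT's on-time probability is a normal-distribution lookup. The sketch below uses an invented toy activity network, not the CV. XYZ schedule, and assumes an activity-on-node representation.

```python
import math

def critical_path(activities):
    """CPM forward pass on an activity-on-node network (assumes a DAG).
    activities: {name: (duration, [predecessor names])}.
    Returns (project duration, one critical path)."""
    earliest, pending = {}, dict(activities)
    while pending:                      # repeated sweeps = simple topological pass
        for name, (dur, preds) in list(pending.items()):
            if all(p in earliest for p in preds):
                earliest[name] = max((earliest[p] for p in preds), default=0) + dur
                del pending[name]
    duration = max(earliest.values())
    # Walk back along predecessors that realise the earliest-finish maximum.
    path, node = [], max(earliest, key=earliest.get)
    while node is not None:
        path.append(node)
        dur, preds = activities[node]
        node = next((p for p in preds if earliest[p] == earliest[node] - dur), None)
    return duration, list(reversed(path))

def pert_on_time_probability(expected, deadline, sigma):
    """P(completion <= deadline) under PERT's normal approximation."""
    z = (deadline - expected)/sigma
    return 0.5*(1.0 + math.erf(z/math.sqrt(2.0)))
```

For example, with activities A(3), B(2), C(4, after A), D(1, after B and C), the critical path is A-C-D with a duration of 8; the PERT helper then converts any assumed schedule variance into an on-time probability.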
Wavelet and adaptive methods for time dependent problems and applications in aerosol dynamics
NASA Astrophysics Data System (ADS)
Guo, Qiang
Time-dependent partial differential equations (PDEs) are widely used as mathematical models of environmental problems. Aerosols are now clearly identified as an important factor in many environmental aspects of climate and radiative forcing processes, as well as in the health effects of air quality. The mathematical models for aerosol dynamics with respect to size distribution are nonlinear partial differential and integral equations, which describe the processes of condensation, coagulation, and deposition. Simulating the general aerosol dynamic equations over time, particle size, and space presents serious difficulties because the size dimension ranges from a few nanometers to several micrometers, while the spatial dimension is usually described in kilometers. It is therefore an important and challenging task to develop efficient techniques for solving time-dependent dynamic equations. In this thesis, we develop and analyze efficient wavelet and adaptive methods for the time-dependent dynamic equations on particle size and further apply them to spatial aerosol dynamic systems. A wavelet Galerkin method is proposed to solve the aerosol dynamic equations over time and particle size, because the aerosol distribution changes strongly along the size direction and the wavelet technique can resolve it very efficiently. Daubechies' wavelets are considered in the study because they possess useful properties such as orthogonality, compact support, and exact representation of polynomials up to a certain degree. Another problem encountered in the solution of the aerosol dynamic equations results from the hyperbolic form due to the condensation growth term. We propose a new characteristic-based fully adaptive multiresolution numerical scheme for solving the aerosol dynamic equation, which combines the attractive advantages of the adaptive multiresolution technique and the method of characteristics.
On the theoretical side, the global existence and uniqueness of solutions of continuous-time wavelet numerical methods for the nonlinear aerosol dynamics are proved using Schauder's fixed point theorem and the variational technique. Optimal error estimates are derived for both continuous- and discrete-time wavelet Galerkin schemes. We further derive a reliable and efficient a posteriori error estimate based on stable multiresolution wavelet bases, together with an adaptive space-time algorithm for the efficient solution of linear parabolic differential equations. The adaptive space refinement strategies, based on the locality of the corresponding multiresolution processes, are proved to converge. Finally, we develop efficient numerical methods by combining the wavelet methods proposed in the previous parts with a splitting technique to solve the spatial aerosol dynamic equations. Wavelet methods along the particle size direction and the upstream finite difference method along the spatial direction are used alternately in each time interval. Numerical experiments are conducted to show the effectiveness of the developed methods.
Adaptive Implicit Non-Equilibrium Radiation Diffusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Philip, Bobby; Wang, Zhen; Berrill, Mark A
2013-01-01
We describe methods for accurate and efficient long-term time integration of non-equilibrium radiation diffusion systems: implicit time integration for efficient long-term integration of stiff multiphysics systems, local control theory based step size control to minimize the required global number of time steps while controlling accuracy, dynamic 3D adaptive mesh refinement (AMR) to minimize memory and computational costs, Jacobian-Free Newton-Krylov methods on AMR grids for efficient nonlinear solution, and optimal multilevel preconditioner components that provide level-independent solver convergence.
Downdating a time-varying square root information filter
NASA Technical Reports Server (NTRS)
Muellerschoen, Ronald J.
1990-01-01
A new method to efficiently downdate an estimate and covariance generated by a discrete time Square Root Information Filter (SRIF) is presented. The method combines the QR factor downdating algorithm of Gill and the decentralized SRIF algorithm of Bierman. Efficient removal of either measurements or a priori information is possible without loss of numerical integrity. Moreover, the method includes features for detecting potential numerical degradation. Performance on a 300 parameter system with 5800 data points shows that the method can be used in real time and hence is a promising tool for interactive data analysis. Additionally, updating a time-varying SRIF filter with either additional measurements or a priori information proceeds analogously.
Jensen, Scott A; Blumberg, Sean; Browning, Megan
2017-09-01
Although time-out has been demonstrated to be effective across multiple settings, little research exists on effective methods for training others to implement time-out. The present set of studies is an exploratory analysis of a structured feedback method for training time-out using repeated role-plays. The three studies examined (a) a between-subjects comparison to a more traditional didactic/video modeling method of time-out training, (b) a within-subjects comparison to traditional didactic/video modeling training for another skill, and (c) the impact of structured feedback training on in-home time-out implementation. Though the findings are only preliminary and more research is needed, the structured feedback method appears across studies to be an efficient, effective method that demonstrates good maintenance of skill up to 3 months post training. Findings suggest, though do not confirm, a benefit of the structured feedback method over a more traditional didactic/video training model. Implications and further research on the method are discussed.
Quantitative Method for Simultaneous Analysis of Acetaminophen and 6 Metabolites.
Lammers, Laureen A; Achterbergh, Roos; Pistorius, Marcel C M; Romijn, Johannes A; Mathôt, Ron A A
2017-04-01
Hepatotoxicity after ingestion of high-dose acetaminophen [N-acetyl-para-aminophenol (APAP)] is caused by the metabolites of the drug. To gain more insight into factors influencing susceptibility to APAP hepatotoxicity, quantification of APAP and its metabolites is important. A few methods have been developed to simultaneously quantify APAP and its most important metabolites; however, these methods require comprehensive sample preparation and long run times. The aim of this study was to develop and validate a simplified but sensitive method for the simultaneous quantification of acetaminophen, its main metabolites acetaminophen glucuronide and acetaminophen sulfate, and 4 cytochrome P450-mediated metabolites by liquid chromatography with mass spectrometric (LC-MS) detection. The method was developed and validated for human plasma and entails a single sample-preparation method, enabling quick processing of the samples, followed by an LC-MS method with a chromatographic run time of 9 minutes. The method was validated for selectivity, linearity, accuracy, imprecision, dilution integrity, recovery, process efficiency, ionization efficiency, and carryover effect. The method showed good selectivity without matrix interferences. For all analytes, the mean process efficiency was >86%, and the mean ionization efficiency was >94%. Furthermore, the accuracy was between 90.3% and 112% for all analytes, and the within- and between-run imprecision were <20% at the lower limit of quantification and <14.3% at the middle level and upper limit of quantification. The method presented here enables the simultaneous quantification of APAP and 6 of its metabolites. It is less time consuming than previously reported methods because it requires only a single, simple sample-preparation step followed by an LC-MS method with a short run time. It therefore provides a useful method for both clinical and research purposes.
40 CFR Appendix D to Part 60 - Required Emission Inventory Information
Code of Federal Regulations, 2010 CFR
2010-07-01
... one device operates in series. The method of control efficiency determination shall be indicated (e.g., design efficiency, measured efficiency, estimated efficiency). (iii) Annual average control efficiency, in percent, taking into account control equipment down time. This shall be a combined efficiency when...
Bürger, Raimund; Diehl, Stefan; Mejías, Camilo
2016-01-01
The main purpose of the recently introduced Bürger-Diehl simulation model for secondary settling tanks was to resolve spatial discretization problems when both hindered settling and the phenomena of compression and dispersion are included. Straightforward time integration unfortunately means long computational times. The next step in the development is to introduce and investigate time-integration methods for more efficient simulations, while equally considering other aspects such as implementation complexity and robustness. This is done for batch settling simulations. The key contributions are a new time-discretization method and its comparison with other specially tailored and standard methods. Several advantages and disadvantages of each method are given. One conclusion is that the new linearly implicit method is easier to implement than the semi-implicit method, but less efficient based on two types of batch sedimentation tests.
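A linearly implicit (IMEX) Euler step of the general kind discussed, with the stiff linear part taken implicitly and the remainder explicitly so that each step costs one linear solve, can be sketched as follows. This is a generic illustration under assumed notation, not the Bürger-Diehl discretization itself.

```python
import numpy as np

def linearly_implicit_euler(u0, A, nonlin, dt, steps):
    """Linearly implicit (IMEX) Euler for u' = A u + N(u).

    The stiff linear operator A is treated implicitly and the nonlinear
    term N explicitly, so each step solves
        (I - dt*A) u_{n+1} = u_n + dt*N(u_n),
    giving unconditional linear stability at the cost of one linear
    solve per step (generic sketch, not the settling-tank model)."""
    I = np.eye(u0.size)
    u = u0.astype(float).copy()
    for _ in range(steps):
        u = np.linalg.solve(I - dt*A, u + dt*nonlin(u))
    return u
```

The appeal is robustness: for a stiff decay u' = -1000u, explicit Euler diverges at dt = 0.1, while this scheme decays stably, and on mild problems it retains the usual first-order accuracy.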
High-throughput real-time quantitative reverse transcription PCR.
Bookout, Angie L; Cummins, Carolyn L; Mangelsdorf, David J; Pesola, Jean M; Kramer, Martha F
2006-02-01
Extensive detail on the application of the real-time quantitative polymerase chain reaction (QPCR) for the analysis of gene expression is provided in this unit. The protocols are designed for high-throughput, 384-well-format instruments, such as the Applied Biosystems 7900HT, but may be modified to suit any real-time PCR instrument. QPCR primer and probe design and validation are discussed, and three relative quantitation methods are described: the standard curve method, the efficiency-corrected DeltaCt method, and the comparative cycle time, or DeltaDeltaCt method. In addition, a method is provided for absolute quantification of RNA in unknown samples. RNA standards are subjected to RT-PCR in the same manner as the experimental samples, thus accounting for the reaction efficiencies of both procedures. This protocol describes the production and quantitation of synthetic RNA molecules for real-time and non-real-time RT-PCR applications.
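The comparative cycle time (DeltaDeltaCt) calculation described above reduces to a few lines: normalize the target gene's Ct to a reference gene within each sample, difference the treated and control conditions, and exponentiate. The sketch below is the standard textbook form of the calculation; the function and argument names are illustrative, and efficiency=2.0 assumes perfect doubling per cycle.

```python
def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control,
                     efficiency=2.0):
    """Comparative-Ct (DeltaDeltaCt) relative quantitation.

    efficiency=2.0 assumes perfect amplification (doubling per cycle);
    passing a measured efficiency gives an efficiency-corrected estimate,
    assuming target and reference amplify with the same efficiency."""
    d_ct_treated = ct_target_treated - ct_ref_treated    # DeltaCt, treated
    d_ct_control = ct_target_control - ct_ref_control    # DeltaCt, control
    dd_ct = d_ct_treated - d_ct_control                  # DeltaDeltaCt
    return efficiency ** (-dd_ct)                        # relative expression
```

For example, a target whose normalized Ct drops by two cycles under treatment corresponds to a fourfold increase in relative expression.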
New analytical exact solutions of time fractional KdV-KZK equation by Kudryashov methods
NASA Astrophysics Data System (ADS)
Saha Ray, S.
2016-04-01
In this paper, new exact solutions of the time fractional KdV-Khokhlov-Zabolotskaya-Kuznetsov (KdV-KZK) equation are obtained by the classical Kudryashov method and modified Kudryashov method respectively. For this purpose, the modified Riemann-Liouville derivative is used to convert the nonlinear time fractional KdV-KZK equation into the nonlinear ordinary differential equation. In the present analysis, the classical Kudryashov method and modified Kudryashov method are both used successively to compute the analytical solutions of the time fractional KdV-KZK equation. As a result, new exact solutions involving the symmetrical Fibonacci function, hyperbolic function and exponential function are obtained for the first time. The methods under consideration are reliable and efficient, and can be used as an alternative to establish new exact solutions of different types of fractional differential equations arising from mathematical physics. The obtained results are exhibited graphically in order to demonstrate the efficiencies and applicabilities of these proposed methods of solving the nonlinear time fractional KdV-KZK equation.
McEvoy, Eamon; Donegan, Sheila; Power, Joe; Altria, Kevin
2007-05-09
A rapid and efficient oil-in-water microemulsion liquid chromatographic (MELC) method has been optimised and validated for the analysis of paracetamol in a suppository formulation. Excellent linearity, accuracy, precision and assay results were obtained. Lengthy sample pre-treatment/extraction procedures were eliminated due to the solubilising power of the microemulsion, and rapid analysis times were achieved. The method was optimised to achieve rapid analysis time and relatively high peak efficiencies. A standard microemulsion composition of 33 g SDS, 66 g butan-1-ol and 8 g n-octane in 1 L of 0.05% TFA, modified with acetonitrile, has been shown to be suitable for the rapid analysis of paracetamol in highly hydrophobic preparations under isocratic conditions. Validated assay results and the overall analysis time of the optimised method were compared to British Pharmacopoeia reference methods. Sample preparation and analysis times for the MELC analysis of paracetamol in a suppository were extremely rapid compared to the reference method, and similar assay results were achieved. A gradient MELC method using the same microemulsion has been optimised for the resolution of paracetamol and five of its related substances in approximately 7 min.
Real-time simulation of large-scale floods
NASA Astrophysics Data System (ADS)
Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.
2016-08-01
Given the complexity of real-time water situations, the real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional, shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance numerical stability, and an adaptive method is proposed to improve running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.
NASA Astrophysics Data System (ADS)
Carraro, F.; Valiani, A.; Caleffi, V.
2018-03-01
Within the framework of the de Saint Venant equations coupled with the Exner equation for morphodynamic evolution, this work presents a new efficient implementation of the Dumbser-Osher-Toro (DOT) scheme for non-conservative problems. The DOT path-conservative scheme is a robust upwind method based on a complete Riemann solver, but it has the drawback of requiring expensive numerical computations. Indeed, to compute the non-linear time evolution in each time step, the DOT scheme requires numerical computation of the flux matrix eigenstructure (the totality of eigenvalues and eigenvectors) several times at each cell edge. In this work, an analytical and compact formulation of the eigenstructure for the de Saint Venant-Exner (dSVE) model is introduced and tested in terms of numerical efficiency and stability. Using the original DOT and PRICE-C (a very efficient FORCE-type method) as reference methods, we present a convergence analysis (error against CPU time) to study the performance of the DOT method with our new analytical implementation of eigenstructure calculations (A-DOT). In particular, the numerical performance of the three methods is tested in three test cases: a movable bed Riemann problem with analytical solution; a problem with smooth analytical solution; a test in which the water flow is characterised by subcritical and supercritical regions. For a given target error, the A-DOT method is always the most efficient choice. Finally, two experimental data sets and different transport formulae are considered to test the A-DOT model in more practical case studies.
A Computationally Efficient Method for Polyphonic Pitch Estimation
NASA Astrophysics Data System (ADS)
Zhou, Ruohua; Reiss, Joshua D.; Mattavelli, Marco; Zoia, Giorgio
2009-12-01
This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimation is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. This spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then, incorrect estimations are removed according to spectral irregularity and knowledge of the harmonic structures of notes played on commonly used musical instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and the results demonstrate the high performance and computational efficiency of the approach.
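Harmonic grouping followed by peak picking can be sketched as below. The RTFI itself is replaced here by an ordinary one-sided magnitude spectrum, and the threshold ratio and harmonic count are assumed parameters, so this is a simplified stand-in for the first stage only, not the paper's full method.

```python
import numpy as np

def pitch_energy_spectrum(mag, sr, f0_grid, n_harmonics=5):
    """Harmonic grouping: for each candidate f0, sum the spectral energy at
    its first few harmonics. mag is a one-sided magnitude spectrum of an
    N-point transform, so bin b corresponds to frequency b*sr/N (a plain-FFT
    stand-in for the RTFI energy spectrum)."""
    n_fft = 2*(mag.size - 1)
    energy = np.zeros(len(f0_grid))
    for i, f0 in enumerate(f0_grid):
        for h in range(1, n_harmonics + 1):
            b = int(round(h*f0*n_fft/sr))    # bin of the h-th harmonic
            if b < mag.size:
                energy[i] += mag[b]**2
    return energy

def pick_peaks(energy, f0_grid, threshold_ratio=0.5):
    """Preliminary pitch estimates: local maxima of the pitch energy
    spectrum above an (assumed) fraction of the global maximum."""
    thr = threshold_ratio*energy.max()
    return [f0_grid[i] for i in range(1, len(energy) - 1)
            if energy[i] > thr and energy[i] >= energy[i-1] and energy[i] > energy[i+1]]
```

On a synthetic spectrum containing a pure 200 Hz harmonic series, the grouped energy peaks at the true fundamental while subharmonic candidates collect only part of the energy and fall below the threshold.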
Feeding methods and efficiencies of selected frugivorous birds
Foster, M.S.
1987-01-01
I report on handling methods and efficiencies of 26 species of Paraguayan birds feeding on fruits of Allophyllus edulis (Sapindaceae). A bird may swallow fruits whole (Type I: pluck and swallow feeders), hold a fruit and cut the pulp from the seed with the edge of the bill, swallowing the pulp but not the seed (Type II: cut or mash feeders), or take bites of pulp from a fruit that hangs from the tree or that is held and manipulated against a branch (Type III: push and bite feeders). In terms of the absolute amount of pulp obtained from a fruit, and the amount obtained per unit time, Type I species are far more efficient than Type II and III species. Bill morphology influences feeding methods but is not the only important factor; diet breadth does not appear to be significant. Consideration of feeding efficiency relative to the needs of the birds indicates that these species need to spend relatively little time feeding to meet their estimated energetic needs, and that handling time has a relatively trivial effect on the time/energy budgets of the bird species observed.
Geldsetzer, Pascal; Fink, Günther; Vaikath, Maria; Bärnighausen, Till
2018-02-01
(1) To evaluate the operational efficiency of various sampling methods for patient exit interviews; (2) to discuss under what circumstances each method yields an unbiased sample; and (3) to propose a new, operationally efficient, and unbiased sampling method. Literature review, mathematical derivation, and Monte Carlo simulations. Our simulations show that in patient exit interviews it is most operationally efficient if the interviewer, after completing an interview, selects the next patient exiting the clinical consultation. We demonstrate mathematically that this method yields a biased sample: patients who spend a longer time with the clinician are overrepresented. This bias can be removed by selecting the next patient who enters, rather than exits, the consultation room. We show that this sampling method is operationally more efficient than alternative methods (systematic and simple random sampling) in most primary health care settings. Under the assumption that the order in which patients enter the consultation room is unrelated to the length of time spent with the clinician and the interviewer, selecting the next patient entering the consultation room tends to be the operationally most efficient unbiased sampling method for patient exit interviews. © 2016 The Authors. Health Services Research published by Wiley Periodicals, Inc. on behalf of Health Research and Educational Trust.
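The bias argument can be checked with a small Monte Carlo experiment mirroring the one described: in a toy single-room clinic, sampling the next patient to exit overrepresents long consultations (the inspection paradox), while sampling the next patient to enter does not. The distributions and parameters below are illustrative assumptions, not the paper's simulation setup.

```python
import random

def mean_sampled_consultation(n_patients, interview_time, rule, seed=0):
    """Toy single-room clinic: consultations run back-to-back with Exp(1)
    durations (mean 1). Once free, the interviewer samples either the next
    patient to *exit* or the next patient to *enter* the room, then is busy
    for the interview. Returns the mean consultation length of the sampled
    patients (all parameters are illustrative)."""
    rng = random.Random(seed)
    durations = [rng.expovariate(1.0) for _ in range(n_patients)]
    starts, t = [], 0.0
    for d in durations:                 # back-to-back consultation schedule
        starts.append(t)
        t += d
    sampled, free_at = [], 0.0
    for start, d in zip(starts, durations):
        if rule == "exit" and start + d >= free_at:
            sampled.append(d)           # patient in the room when interviewer frees up
            free_at = start + d + interview_time
        elif rule == "enter" and start >= free_at:
            sampled.append(d)           # next patient to walk into the room
            free_at = start + d + interview_time   # interview follows the consult
    return sum(sampled)/len(sampled)
```

For Exp(1) consultations the length-biased mean is 2, so the exit rule's sampled mean sits well above the true mean of 1, while the enter rule recovers it.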
NASA Astrophysics Data System (ADS)
Zhan, Aibin; Bao, Zhenmin; Hu, Xiaoli; Lu, Wei; Hu, Jingjie
2009-06-01
Microsatellite markers have become one of the most important classes of molecular tools used in a wide range of research. A large number of microsatellite markers are required for whole-genome surveys in molecular ecology, quantitative genetics and genomics. It is therefore essential to select versatile, low-cost, efficient, and time- and labor-saving methods to develop a large panel of microsatellite markers. In this study, we used the Zhikong scallop (Chlamys farreri) as the target species to compare the efficiency of five methods derived from three strategies for microsatellite marker development. The results showed that the strategy of constructing a small-insert genomic DNA library had poor efficiency, while the microsatellite-enriched strategy greatly improved isolation efficiency. Although the public-database mining strategy is time- and cost-saving, it is difficult to obtain a large number of microsatellite markers this way, mainly because of the limited sequence data from non-model species deposited in public databases. Based on these results, we recommend two methods, microsatellite-enriched library construction and FIASCO-colony hybridization, for large-scale microsatellite marker development. Both methods derive from the microsatellite-enriched strategy. The experimental results from the Zhikong scallop also provide a reference for microsatellite marker development in other species with large genomes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Groth, R.H.; Calabro, D.S.
1969-11-01
The two methods normally used for the analysis of NOx are the Saltzman and the phenoldisulfonic acid techniques. This paper describes an evaluation of these wet chemical methods to determine their practical application to engine exhaust gas analysis. Parameters considered for the Saltzman method included bubbler collection efficiency, NO to NO2 conversion efficiency, the masking effect of other contaminants usually present in exhaust gases, and the time-temperature effect of these contaminants on stored developed solutions. Collection efficiency and the effects of contaminants were also considered for the phenoldisulfonic acid method. Test results indicated satisfactory collection and conversion efficiencies for the Saltzman method, but contaminants seriously affected the measurement accuracy, particularly if the developed solution was stored for a number of hours at room temperature before analysis. Storage at 32°F minimized this effect. The standard procedure for the phenoldisulfonic acid method gave good results, but the process was found to be too time consuming for routine analysis and measured only total NOx. 3 references, 9 tables.
Xie, Xiurui; Qu, Hong; Yi, Zhang; Kurths, Jurgen
2017-06-01
The spiking neural network (SNN) is the third generation of neural networks and performs remarkably well in cognitive tasks such as pattern recognition. The temporal encoding mechanism found in the biological hippocampus enables SNNs to possess more powerful computational capability than networks with other encoding schemes. However, this temporal encoding approach requires neurons to process information serially in time, which reduces learning efficiency significantly. To keep the powerful computational capability of the temporal encoding mechanism and to overcome its low efficiency in the training of SNNs, a new training algorithm, the accurate synaptic-efficiency adjustment method, is proposed in this paper. Inspired by the selective attention mechanism of the primate visual system, our algorithm selects only the target spike times as attention areas and ignores the voltage states of the untargeted ones, resulting in a significant reduction of training time. In addition, our algorithm employs a cost function based on the difference between the membrane potential of the output neuron and the firing threshold of the SNN, instead of the traditional precise firing-time distance. A normalized spike-timing-dependent-plasticity learning window is applied to assign this error to different synapses to guide their training. Comprehensive simulations are conducted to investigate the learning properties of our algorithm, with input neurons emitting both single and multiple spikes. Simulation results indicate that our algorithm achieves higher learning performance than existing methods and state-of-the-art efficiency in the training of SNNs.
NASA Technical Reports Server (NTRS)
Jothiprasad, Giridhar; Mavriplis, Dimitri J.; Caughey, David A.; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
The efficiency gains obtained using higher-order implicit Runge-Kutta schemes, as compared with second-order accurate backward difference schemes, for the unsteady Navier-Stokes equations are investigated. Three different algorithms for solving the nonlinear system of equations arising at each timestep are presented. The first algorithm (NMG) is a pseudo-time-stepping scheme which employs a non-linear full approximation storage (FAS) agglomeration multigrid method to accelerate convergence. The other two algorithms are based on inexact Newton methods. The linear system arising at each Newton step is solved using iterative/Krylov techniques, and left preconditioning is used to accelerate convergence of the linear solvers. One of the methods (LMG) uses Richardson's iterative scheme for solving the linear system at each Newton step, while the other (PGMRES) uses the Generalized Minimal Residual method. Results demonstrating the relative superiority of these Newton-based schemes are presented. Efficiency gains as high as 10 are obtained by combining the higher-order time integration schemes with the more efficient nonlinear solvers.
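The core structure discussed here, a nonlinear equation at each implicit timestep solved by Newton iteration, can be illustrated in miniature with a scalar ODE and a BDF2 step (the paper's baseline scheme). This is a sketch only: the paper's solvers are multigrid and Krylov generalizations of this idea, and the test problem and step counts below are illustrative.

```python
import math

def newton(g, dg, x, tol=1e-12, maxit=50):
    """Solve g(x) = 0 by Newton's method with an analytic derivative."""
    for _ in range(maxit):
        dx = g(x) / dg(x)
        x -= dx
        if abs(dx) < tol:
            break
    return x

def bdf2(f, dfdy, y0, t0, t1, n):
    """Advance y' = f(t, y) with the two-step BDF2 formula
    (3*y[k+1] - 4*y[k] + y[k-1]) / (2h) = f(t[k+1], y[k+1]),
    solving the nonlinear stage equation at each step by Newton's method."""
    h = (t1 - t0) / n
    # bootstrap the two-step formula with one backward-Euler step
    y_prev = y0
    y = newton(lambda z: z - y0 - h * f(t0 + h, z),
               lambda z: 1.0 - h * dfdy(t0 + h, z), y0)
    for k in range(1, n):
        t_new = t0 + (k + 1) * h
        yp, yc = y_prev, y
        g = lambda z: 3.0 * z - 4.0 * yc + yp - 2.0 * h * f(t_new, z)
        dg = lambda z: 3.0 - 2.0 * h * dfdy(t_new, z)
        y_prev, y = y, newton(g, dg, y)
    return y

# demo: y' = -y on [0, 1], exact solution exp(-1) at t = 1
y_end = bdf2(lambda t, y: -y, lambda t, y: -1.0, 1.0, 0.0, 1.0, 200)
```

In the PDE setting, the scalar Newton update becomes a large linear system per Newton step, which is exactly where the LMG and PGMRES inner solvers of the abstract enter.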
Word aligned bitmap compression method, data structure, and apparatus
Wu, Kesheng; Shoshani, Arie; Otoo, Ekow
2004-12-14
The Word-Aligned Hybrid (WAH) bitmap compression method and data structure is a relatively efficient method for searching and performing logical, counting, and pattern location operations upon large datasets. The technique is comprised of a data structure and methods that are optimized for computational efficiency by using the WAH compression method, which typically takes advantage of the target computing system's native word length. WAH is particularly apropos to infrequently varying databases, including those found in the on-line analytical processing (OLAP) industry, due to the increased computational efficiency of the WAH compressed bitmap index. Some commercial database products already include some version of a bitmap index, which could possibly be replaced by the WAH bitmap compression techniques for potentially increased operation speed, as well as increased efficiencies in constructing compressed bitmaps. Combined together, this technique may be particularly useful for real-time business intelligence. Additional WAH applications may include scientific modeling, such as climate and combustion simulations, to minimize search time for analysis and subsequent data visualization.
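The word-aligned idea can be sketched in a few lines. This is a simplified illustration of the scheme described above, not the patented implementation: it assumes 32-bit words and an input bit sequence whose length is a multiple of 31.

```python
def wah_compress(bits):
    """Encode a bit sequence into 32-bit words: each 31-bit group becomes a
    literal word (MSB 0, payload in the low 31 bits) or extends a fill word
    (MSB 1, next bit = fill value, low 30 bits = run length in groups)."""
    words = []
    for i in range(0, len(bits), 31):
        val = 0
        for b in bits[i:i + 31]:
            val = (val << 1) | b
        if val == 0 or val == (1 << 31) - 1:        # all-zero or all-one group
            fill = val & 1
            if words and (words[-1] >> 31) and ((words[-1] >> 30) & 1) == fill:
                words[-1] += 1                      # extend the current run
            else:
                words.append((1 << 31) | (fill << 30) | 1)
        else:
            words.append(val)                       # literal word
    return words

def wah_decompress(words):
    bits = []
    for w in words:
        if w >> 31:                                 # fill word
            fill, count = (w >> 30) & 1, w & ((1 << 30) - 1)
            bits.extend([fill] * (31 * count))
        else:                                       # literal word
            bits.extend((w >> (30 - j)) & 1 for j in range(31))
    return bits
```

The computational advantage claimed in the abstract comes from operating directly on this representation: a bitwise AND or OR, or a bit count, can process an entire fill word (many thousands of bits) in one machine-word operation.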
High-efficiency power transfer for silicon-based photonic devices
NASA Astrophysics Data System (ADS)
Son, Gyeongho; Yu, Kyoungsik
2018-02-01
We demonstrate an efficient coupling of guided light of 1550 nm from a standard single-mode optical fiber to a silicon waveguide using the finite-difference time-domain method and propose a fabrication method of tapered optical fibers for efficient power transfer to silicon-based photonic integrated circuits. Adiabatically-varying fiber core diameters with a small tapering angle can be obtained using the tube etching method with hydrofluoric acid and standard single-mode fibers covered by plastic jackets. The optical power transmission of the fundamental HE11 and TE-like modes between the fiber tapers and the inversely-tapered silicon waveguides was calculated with the finite-difference time-domain method to be more than 99% at a wavelength of 1550 nm. The proposed method for adiabatic fiber tapering can be applied in quantum optics, silicon-based photonic integrated circuits, and nanophotonics. Furthermore, efficient coupling within the telecommunication C-band is a promising approach for quantum networks in the future.
Enhanced analysis of real-time PCR data by using a variable efficiency model: FPK-PCR
Lievens, Antoon; Van Aelst, S.; Van den Bulcke, M.; Goetghebeur, E.
2012-01-01
Current methodology in real-time polymerase chain reaction (PCR) analysis performs well provided PCR efficiency remains constant over reactions. Yet small changes in efficiency can lead to large quantification errors. Particularly in biological samples, the possible presence of inhibitors poses a challenge. We present a new approach to single-reaction efficiency calculation, called Full Process Kinetics-PCR (FPK-PCR). It combines a kinetically more realistic model with flexible adaptation to the full range of data. By reconstructing the entire chain of cycle efficiencies, rather than restricting the focus to a ‘window of application’, one extracts additional information and removes a level of arbitrariness. The maximal efficiency estimates returned by the model are comparable in accuracy and precision to both the gold standard of serial dilution and other single-reaction efficiency methods. The cycle-to-cycle changes in efficiency, as described by the FPK-PCR procedure, stay considerably closer to the data than those from other S-shaped models. The assessment of individual cycle efficiencies returns more information than other single-efficiency methods. It allows in-depth interpretation of real-time PCR data and reconstruction of the fluorescence data, providing quality control. Finally, by implementing a global efficiency model, reproducibility is improved as the selection of a window of application is avoided. PMID:22102586
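Why cycle-to-cycle efficiency matters can be seen with a toy amplification model (illustrative only, not the authors' FPK-PCR model): if the per-cycle efficiency decays as product accumulates, the fluorescence curve takes the familiar S shape, and any method that assumes one constant efficiency must pick a window in which that assumption roughly holds.

```python
def amplify(f0, e_max, k, cycles=40):
    """Cycle-by-cycle amplification with a variable efficiency
    E_n = e_max / (1 + F_n / k): efficiency falls as fluorescence F_n grows
    (a made-up saturation law used purely for illustration).
    Returns the fluorescence curve and the per-cycle efficiencies."""
    f, curve, effs = f0, [f0], []
    for _ in range(cycles):
        e = e_max / (1.0 + f / k)
        f *= (1.0 + e)
        effs.append(e)
        curve.append(f)
    return curve, effs
```

Fitting such a chain of efficiencies to the whole curve, rather than a fixed-efficiency exponential to a hand-picked window, is the general idea the abstract describes.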
Application of work sampling technique to analyze logging operations.
Edwin S. Miyata; Helmuth M. Steinhilb; Sharon A. Winsauer
1981-01-01
Discusses the advantages and disadvantages of various time study methods for determining efficiency and productivity in logging. The work sampling method is compared with the continuous time-study method. Gives the feasibility, capability, and limitation of the work sampling method.
Khoo, E H; Ahmed, I; Goh, R S M; Lee, K H; Hung, T G G; Li, E P
2013-03-11
The dynamic-thermal electron-quantum medium finite-difference time-domain (DTEQM-FDTD) method is used for efficient analysis of the mode profile in an elliptical microcavity. The resonance peak of the elliptical microcavity is studied by varying the length ratio. It is observed that at some length ratios, a cavity mode is excited instead of a whispering gallery mode, showing that the mode profile depends on the length ratio. Through the implementation of the DTEQM-FDTD method on a graphics processing unit (GPU), the simulation time is reduced by a factor of 300 compared to the CPU. This leads to an efficient optimization approach for designing microcavity lasers for a wide range of applications in photonic integrated circuits.
Data-Driven Benchmarking of Building Energy Efficiency Utilizing Statistical Frontier Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kavousian, A; Rajagopal, R
2014-01-01
Frontier methods quantify the energy efficiency of buildings by forming an efficient frontier (best-practice technology) and by comparing all buildings against that frontier. Because energy consumption fluctuates over time, the efficiency scores are stochastic random variables. Existing applications of frontier methods in energy efficiency either treat efficiency scores as deterministic values or estimate their uncertainty by resampling from one set of measurements. The availability of smart meter data (repeated measurements of the energy consumption of buildings) enables using actual data to estimate the uncertainty in efficiency scores. Additionally, existing applications assume a linear form for the efficient frontier; i.e., they assume that the best-practice technology scales up and down proportionally with building characteristics. However, previous research shows that buildings are nonlinear systems. This paper proposes a statistical method called the stochastic energy efficiency frontier (SEEF) to estimate a bias-corrected efficiency score and its confidence intervals from measured data. The paper proposes an algorithm to specify the functional form of the frontier, identify the probability distribution of the efficiency score of each building using measured data, and rank buildings based on their energy efficiency. To illustrate the power of SEEF, this paper presents the results from applying SEEF to a smart meter data set of 307 residential buildings in the United States. SEEF efficiency scores are used to rank individual buildings based on energy efficiency, to compare subpopulations of buildings, and to identify irregular behavior of buildings across different time-of-use periods. SEEF is an improvement over the energy-intensity method (comparing kWh/sq.ft.): whereas SEEF identifies efficient buildings across the entire spectrum of building sizes, the energy-intensity method showed bias toward smaller buildings.
The results of this research are expected to assist researchers and practitioners in comparing and ranking (i.e., benchmarking) buildings more robustly and over a wider range of building types and sizes. Eventually, doing so is expected to result in improved resource allocation in energy-efficiency programs.
Sampling Methods for Detection and Monitoring of the Asian Citrus Psyllid (Hemiptera: Psyllidae).
Monzo, C; Arevalo, H A; Jones, M M; Vanaclocha, P; Croxton, S D; Qureshi, J A; Stansly, P A
2015-06-01
The Asian citrus psyllid (ACP), Diaphorina citri Kuwayama is a key pest of citrus due to its role as vector of citrus greening disease or "huanglongbing." ACP monitoring is considered an indispensable tool for management of vector and disease. In the present study, datasets collected between 2009 and 2013 from 245 citrus blocks were used to evaluate precision, sensitivity for detection, and efficiency of five sampling methods. The number of samples needed to reach a 0.25 standard error-mean ratio was estimated using Taylor's power law and used to compare precision among sampling methods. Comparison of detection sensitivity and time expenditure (cost) between stem-tap and other sampling methodologies conducted consecutively at the same location were also assessed. Stem-tap sampling was the most efficient sampling method when ACP densities were moderate to high and served as the basis for comparison with all other methods. Protocols that grouped trees near randomly selected locations across the block were more efficient than sampling trees at random across the block. Sweep net sampling was similar to stem-taps in number of captures per sampled unit, but less precise at any ACP density. Yellow sticky traps were 14 times more sensitive than stem-taps but much more time consuming and thus less efficient except at very low population densities. Visual sampling was efficient for detecting and monitoring ACP at low densities. Suction sampling was time consuming and taxing but the most sensitive of all methods for detection of sparse populations. This information can be used to optimize ACP monitoring efforts. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
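The precision criterion used above has a closed form: with Taylor's power law relating variance to mean density, s² = a·mᵇ, the number of samples needed for a standard error equal to a fraction D of the mean is n = s²/(D·m)² = a·m^(b−2)/D². A sketch (the coefficients a and b below are made-up values, not the parameters fitted to the ACP data):

```python
def samples_needed(a, b, mean_density, d=0.25):
    """Sample size so that SE/mean = d, given Taylor's power law
    variance = a * mean**b (a and b here are illustrative placeholders)."""
    variance = a * mean_density ** b
    return variance / (d * mean_density) ** 2
```

For b < 2, fewer samples are needed at higher densities, which is consistent with the abstract's finding that the most efficient method depends on population density.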
Efficiently Scheduling Multi-core Guest Virtual Machines on Multi-core Hosts in Network Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoginath, Srikanth B; Perumalla, Kalyan S
2011-01-01
Virtual machine (VM)-based simulation is a method used by network simulators to incorporate realistic application behaviors by executing actual VMs as high-fidelity surrogates for simulated end-hosts. A critical requirement in such a method is the simulation time-ordered scheduling and execution of the VMs. Prior approaches such as time dilation are less efficient due to the high degree of multiplexing possible when multiple multi-core VMs are simulated on multi-core host systems. We present a new simulation time-ordered scheduler to efficiently schedule multi-core VMs on multi-core real hosts, with a virtual clock realized on each virtual core. The distinguishing features of our approach are: (1) customizable granularity of the VM scheduling time unit on the simulation time axis, (2) ability to take arbitrary leaps in virtual time by VMs to maximize the utilization of host (real) cores when guest virtual cores idle, and (3) empirically determinable optimality in the tradeoff between total execution (real) time and time-ordering accuracy levels. Experiments show that it is possible to get nearly perfect time-ordered execution, with a slight cost in total run time, relative to optimized non-simulation VM schedulers. Interestingly, with our time-ordered scheduler, it is also possible to reduce the time-ordering error from over 50% with a non-simulation scheduler to less than 1% with ours, with almost the same run-time efficiency as that of the highly efficient non-simulation VM schedulers.
Hornikx, Maarten; Dragna, Didier
2015-07-01
The Fourier pseudospectral time-domain method is an efficient wave-based method to model sound propagation in inhomogeneous media. One of the limitations of the method for atmospheric sound propagation purposes is its restriction to a Cartesian grid, confining it to staircase-like geometries. A transform from the physical coordinate system to a curvilinear coordinate system has been applied to handle more arbitrary geometries. For applicability of this method near the boundaries, the acoustic velocity variables are solved for their curvilinear components. The performance of the curvilinear Fourier pseudospectral method is investigated in the free field and for outdoor sound propagation over an impedance strip for various types of shapes. Accuracy is shown to be related to the maximum grid stretching ratio and the deformation of the boundary shape, and computational efficiency is limited by the smallest grid cell in the physical domain. The applicability of the curvilinear Fourier pseudospectral time-domain method is demonstrated by investigating the effect of sound propagation over a hill in a nocturnal boundary layer. With the proposed method, accurate and efficient results for sound propagation over smoothly varying ground surfaces with high impedances can be obtained.
Unifying time evolution and optimization with matrix product states
NASA Astrophysics Data System (ADS)
Haegeman, Jutho; Lubich, Christian; Oseledets, Ivan; Vandereycken, Bart; Verstraete, Frank
2016-10-01
We show that the time-dependent variational principle provides a unifying framework for time-evolution methods and optimization methods in the context of matrix product states. In particular, we introduce a new integration scheme for studying time evolution, which can cope with arbitrary Hamiltonians, including those with long-range interactions. Rather than a Suzuki-Trotter splitting of the Hamiltonian, which is the idea behind the adaptive time-dependent density matrix renormalization group method or time-evolving block decimation, our method is based on splitting the projector onto the matrix product state tangent space as it appears in the Dirac-Frenkel time-dependent variational principle. We discuss how the resulting algorithm resembles the density matrix renormalization group (DMRG) algorithm for finding ground states so closely that it can be implemented by changing just a few lines of code and it inherits the same stability and efficiency. In particular, our method is compatible with any Hamiltonian for which ground-state DMRG can be implemented efficiently. In fact, DMRG is obtained as a special case of our scheme for imaginary time evolution with infinite time step.
Adaptive mesh strategies for the spectral element method
NASA Technical Reports Server (NTRS)
Mavriplis, Catherine
1992-01-01
An adaptive spectral method was developed for the efficient solution of time dependent partial differential equations. Adaptive mesh strategies that include resolution refinement and coarsening by three different methods are illustrated on solutions to the 1-D viscous Burger equation and the 2-D Navier-Stokes equations for driven flow in a cavity. Sharp gradients, singularities, and regions of poor resolution are resolved optimally as they develop in time using error estimators which indicate the choice of refinement to be used. The adaptive formulation presents significant increases in efficiency, flexibility, and general capabilities for high order spectral methods.
Bannerman, J A; Costamagna, A C; McCornack, B P; Ragsdale, D W
2015-06-01
Generalist natural enemies play an important role in controlling soybean aphid, Aphis glycines (Hemiptera: Aphididae), in North America. Several sampling methods are used to monitor natural enemy populations in soybean, but there has been little work investigating their relative bias, precision, and efficiency. We compare five sampling methods: quadrats, whole-plant counts, sweep-netting, walking transects, and yellow sticky cards to determine the most practical methods for sampling the three most prominent species, which included Harmonia axyridis (Pallas), Coccinella septempunctata L. (Coleoptera: Coccinellidae), and Orius insidiosus (Say) (Hemiptera: Anthocoridae). We show an important time-by-sampling-method interaction, indicated by diverging community similarities within and between sampling methods as the growing season progressed. Similarly, correlations between sampling methods for the three most abundant species over multiple time periods indicated differences in relative bias between sampling methods and suggest that bias is not consistent throughout the growing season, particularly for sticky cards and whole-plant samples. Furthermore, we show that sticky cards produce strongly biased capture rates relative to the other four sampling methods. Precision and efficiency differed between sampling methods: sticky cards produced the most precise (but highly biased) results for adult natural enemies, while walking transects and whole-plant counts were the most efficient methods for detecting coccinellids and O. insidiosus, respectively. Based on bias, precision, and efficiency considerations, the most practical sampling methods for monitoring in soybean include walking transects for coccinellid detection and whole-plant counts for detection of small predators like O. insidiosus. Sweep-netting and quadrat samples are also useful for some applications when efficiency is not paramount. © The Authors 2015.
Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Efficiency and flexibility using implicit methods within atmosphere dycores
NASA Astrophysics Data System (ADS)
Evans, K. J.; Archibald, R.; Norman, M. R.; Gardner, D. J.; Woodward, C. S.; Worley, P.; Taylor, M.
2016-12-01
A suite of explicit and implicit methods is evaluated for a range of configurations of the shallow water dynamical core within the spectral-element Community Atmosphere Model (CAM-SE) to explore their relative computational performance. The configurations are designed to explore the attributes of each method under different but relevant model usage scenarios, including varied spectral order within an element, static regional refinement, and scaling to large problem sizes. The limitations and benefits of explicit versus implicit methods, with different discretizations and parameters, are discussed in light of trade-offs such as MPI communication, memory, and inherent efficiency bottlenecks. For the regionally refined shallow water configurations, the implicit BDF2 method is about as efficient as an explicit Runge-Kutta method, without including a preconditioner. Performance of the implicit methods with the residual function executed on a GPU is also presented; the residual evaluation is faster than on a CPU, but overwhelming transfer costs motivate moving more of the solver to the device. Given the performance behavior of implicit methods within the shallow water dynamical core, the recommendation for future work using implicit solvers is conditional on scale separation and the stiffness of the problem. The strong growth of linear iterations with increasing resolution or time step size is the main bottleneck to computational efficiency. Within the hydrostatic dynamical core of CAM-SE, we present results utilizing approximate block factorization preconditioners implemented using the Trilinos library of solvers. They reduce the cost of linear system solves and improve parallel scalability.
We provide a summary of the remaining efficiency considerations within the preconditioner and utilization of the GPU, as well as a discussion about the benefits of a time stepping method that provides converged and stable solutions for a much wider range of time step sizes. As more complex model components, for example new physics and aerosols, are connected in the model, having flexibility in the time stepping will enable more options for combining and resolving multiple scales of behavior.
Yao, Yao; Sun, Ke-Wei; Luo, Zhen; Ma, Haibo
2018-01-18
The accurate theoretical interpretation of ultrafast time-resolved spectroscopy experiments relies on full quantum dynamics simulations of the investigated system, which are nevertheless computationally prohibitive for realistic molecular systems with a large number of electronic and/or vibrational degrees of freedom. In this work, we propose a unitary transformation approach for realistic vibronic Hamiltonians, which can then be treated with the adaptive time-dependent density matrix renormalization group (t-DMRG) method to efficiently evolve the nonadiabatic dynamics of a large molecular system. We demonstrate the accuracy and efficiency of this approach with an example of simulating the exciton dissociation process within an oligothiophene/fullerene heterojunction, indicating that t-DMRG can be a promising method for full quantum dynamics simulation in large chemical systems. Moreover, it is also shown that the proper vibronic features in the ultrafast electronic process can be obtained by simulating the two-dimensional (2D) electronic spectrum by virtue of the high computational efficiency of the t-DMRG method.
NASA Astrophysics Data System (ADS)
Song, Chi; Zhang, Xuejun; Zhang, Xin; Hu, Haifei; Zeng, Xuefeng
2017-06-01
A rigid conformal (RC) lap can smooth mid-spatial-frequency (MSF) errors, which are naturally smaller than the tool size, while still removing large-scale errors in a short time. However, RC-lap smoothing efficiency is poorer than expected, and existing smoothing models cannot explicitly specify how to improve it. We present an explicit time-dependent smoothing evaluation model containing specific smoothing parameters derived directly from the parametric smoothing model and the Preston equation. Based on the time-dependent model, we propose a strategy to improve RC-lap smoothing efficiency that incorporates the theoretical model, tool optimization, and efficiency limit determination. Two sets of smoothing experiments were performed to demonstrate the smoothing efficiency achieved using the time-dependent smoothing model. A high, theory-like tool influence function and a limiting tool speed of 300 RPM were obtained.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pelt, Daniël M.; Gürsoy, Dogˇa; Palenstijn, Willem Jan
2016-04-28
The processing of tomographic synchrotron data requires advanced and efficient software to be able to produce accurate results in reasonable time. In this paper, the integration of two software toolboxes, TomoPy and the ASTRA toolbox, which, together, provide a powerful framework for processing tomographic data, is presented. The integration combines the advantages of both toolboxes, such as the user-friendliness and CPU-efficient methods of TomoPy and the flexibility and optimized GPU-based reconstruction methods of the ASTRA toolbox. It is shown that both toolboxes can be easily installed and used together, requiring only minor changes to existing TomoPy scripts. Furthermore, it is shown that the efficient GPU-based reconstruction methods of the ASTRA toolbox can significantly decrease the time needed to reconstruct large datasets, and that advanced reconstruction methods can improve reconstruction quality compared with TomoPy's standard reconstruction method.
Pelt, Daniël M.; Gürsoy, Doǧa; Palenstijn, Willem Jan; Sijbers, Jan; De Carlo, Francesco; Batenburg, Kees Joost
2016-01-01
The processing of tomographic synchrotron data requires advanced and efficient software to be able to produce accurate results in reasonable time. In this paper, the integration of two software toolboxes, TomoPy and the ASTRA toolbox, which, together, provide a powerful framework for processing tomographic data, is presented. The integration combines the advantages of both toolboxes, such as the user-friendliness and CPU-efficient methods of TomoPy and the flexibility and optimized GPU-based reconstruction methods of the ASTRA toolbox. It is shown that both toolboxes can be easily installed and used together, requiring only minor changes to existing TomoPy scripts. Furthermore, it is shown that the efficient GPU-based reconstruction methods of the ASTRA toolbox can significantly decrease the time needed to reconstruct large datasets, and that advanced reconstruction methods can improve reconstruction quality compared with TomoPy’s standard reconstruction method. PMID:27140167
Comparison of the Efficiency of Two Flashcard Drill Methods on Children's Reading Performance
ERIC Educational Resources Information Center
Joseph, Laurice; Eveleigh, Elisha; Konrad, Moira; Neef, Nancy; Volpe, Robert
2012-01-01
The purpose of this study was to extend prior flashcard drill and practice research by holding instructional time constant and allowing learning trials to vary. Specifically, the authors aimed to determine whether an incremental rehearsal method or a traditional drill and practice method was most efficient in helping 5 first-grade children read,…
A third-order approximation method for three-dimensional wheel-rail contact
NASA Astrophysics Data System (ADS)
Negretti, Daniele
2012-03-01
Multibody train analysis is increasingly used by railway operators, who need a reliable and time-efficient method to evaluate the contact between wheel and rail; in particular, wheel-rail contact is one of the most important aspects affecting a reliable and time-efficient vehicle dynamics computation. The focus of the approach proposed here is to carry out such tasks by means of online wheel-rail elastic contact detection. To improve efficiency and save time, an analytical approach is used for the definition of the wheel and rail surfaces as well as for contact detection, and a final numerical evaluation locates the contact. The final numerical procedure consists of finding the zeros of a nonlinear function of a single variable. The overall method is based on an approximation of the wheel surface that does not significantly influence the contact location, as shown in the paper.
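The final numerical step, finding the zeros of a nonlinear function of a single variable, can be sketched with a simple bracketing solver. The profile functions below are toy shapes chosen for illustration, not real wheel or rail geometry.

```python
import math

def bisect(g, lo, hi, tol=1e-10):
    """Bracketing (bisection) root finder for a scalar nonlinear function,
    assuming g(lo) and g(hi) have opposite signs."""
    glo = g(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        gm = g(mid)
        if glo * gm <= 0.0:
            hi = mid
        else:
            lo, glo = mid, gm
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Toy profiles: wheel tread z_w(y) = 0.05*y**2, rail head z_r(y) = 0.3*cos(y - 0.5).
# A contact candidate sits where the gap z_w - z_r is stationary: gap'(y) = 0.
def gap_slope(y):
    return 0.1 * y + 0.3 * math.sin(y - 0.5)

y_contact = bisect(gap_slope, -1.0, 1.0)
```

In practice a derivative-based or hybrid scalar solver would converge faster; the point is only that once the surfaces are handled analytically, contact location reduces to a cheap one-dimensional root find.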
Geometric Heat Engines Featuring Power that Grows with Efficiency.
Raz, O; Subaşı, Y; Pugatch, R
2016-04-22
Thermodynamics places a limit on the efficiency of heat engines, but not on their output power or on how the power and efficiency change with the engine's cycle time. In this Letter, we develop a geometrical description of the power and efficiency as a function of the cycle time, applicable to an important class of heat engine models. This geometrical description is used to design engine protocols that attain both the maximal power and maximal efficiency at the fast driving limit. Furthermore, using this method, we also prove that no protocol can exactly attain the Carnot efficiency at nonzero power.
NASA Technical Reports Server (NTRS)
Kurtz, L. A.; Smith, R. E.; Parks, C. L.; Boney, L. R.
1978-01-01
Steady state solutions to two time dependent partial differential systems have been obtained by the Method of Lines (MOL) and compared to those obtained by efficient standard finite difference methods: (1) Burgers' equation over a finite space domain by a forward time, central space explicit method, and (2) the stream function - vorticity form of viscous incompressible fluid flow in a square cavity by an alternating direction implicit (ADI) method. The standard techniques were far more computationally efficient when applicable. In the second example, converged solutions at very high Reynolds numbers were obtained by MOL, whereas solution by ADI was either unattainable or impractical. With regard to 'set up' time, solution by MOL is an attractive alternative to techniques with complicated algorithms, as much of the programming difficulty is eliminated.
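The MOL idea in (1) above can be sketched in a few lines: discretize Burgers' equation in space only, then hand the resulting ODE system to a general-purpose integrator. This is a minimal illustration (viscosity, grid size, and boundary/initial conditions are assumed here, not taken from the report):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Method of Lines sketch for the viscous Burgers' equation
# u_t + u u_x = nu u_xx on [0, 1], with u = 0 at both ends (assumed setup).
nu = 0.05
N = 101
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]

def rhs(t, u):
    du = np.zeros_like(u)
    # central differences in space; boundary values stay fixed at 0
    du[1:-1] = (-u[1:-1] * (u[2:] - u[:-2]) / (2 * dx)
                + nu * (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2)
    return du

u0 = np.sin(np.pi * x)                  # smooth initial profile
sol = solve_ivp(rhs, (0.0, 2.0), u0, method="BDF", rtol=1e-6, atol=1e-8)
u_final = sol.y[:, -1]
print(u_final.max())  # viscosity makes the profile decay toward u = 0
```

The implicit BDF integrator absorbs the stiffness of the diffusive term, which is exactly the "set up" convenience the abstract attributes to MOL.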
Effect of positive pulse charge waveforms on the energy efficiency of lead-acid traction cells
NASA Technical Reports Server (NTRS)
Smithrick, J. J.
1981-01-01
The effects of four different charge methods on the energy conversion efficiency of 300 ampere-hour lead-acid traction cells were investigated. Three of the methods were positive pulse charge waveforms; the fourth, a constant current method, was used as a baseline for comparison. The positive pulse charge waveforms were: 120 Hz full wave rectified sinusoidal, 120 Hz silicon controlled rectified, and 1 kHz square wave. The constant current charger was set at the time-averaged pulse current of each pulse waveform, which was 150 amperes. The energy efficiency does not include charger losses. The lead-acid traction cells were charged to 70 percent of rated ampere-hour capacity in each case. The results of charging the cells using the three different pulse charge waveforms indicate that there was no significant difference in energy conversion efficiency compared to constant current charging at the time-averaged pulse current value.
An Efficient G-XML Data Management Method using XML Spatial Index for Mobile Devices
NASA Astrophysics Data System (ADS)
Tamada, Takashi; Momma, Kei; Seo, Kazuo; Hijikata, Yoshinori; Nishida, Shogo
This paper presents an efficient G-XML data management method for mobile devices. G-XML is an XML-based encoding for the transport of geographic information. The performance of mobile devices, such as PDAs and mobile phones, trails that of desktop machines, so special techniques are needed for processing G-XML data on mobile devices. In this method, an XML-format spatial index file is used to improve the initial display time of G-XML data. This index file contains an XML pointer to each feature in the G-XML data and classifies these features using multi-dimensional data structures. The experimental results show that this method speeds up the initial display of G-XML data on mobile devices by a factor of about 3-7.
Miao, Zhidong; Liu, Dake; Gong, Chen
2017-10-01
Inductive wireless power transfer (IWPT) is a promising power technology for implantable biomedical devices, where the power consumption is low and efficiency is the most important consideration. In this paper, we propose an optimization method for impedance matching networks (IMN) to maximize the IWPT efficiency. The IMN at the load side is designed to achieve the optimal load, and the IMN at the source side is designed to deliver the required amount of power (no more, no less) from the power source to the load. The theoretical analysis and design procedure are given. An IWPT system for an implantable glaucoma therapeutic prototype is designed as an example. Compared with the efficiency of the resonant IWPT system, the efficiency of our optimized system increases by a factor of 1.73. Moreover, the efficiency of our optimized IWPT system is 1.97 times higher than that of the IWPT system optimized by the traditional maximum power transfer method. All the discussions indicate that the optimization method proposed in this paper can achieve high efficiency and long working time when the system is powered by a battery.
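The role of the optimal load can be illustrated numerically on a generic two-coil link (mesh analysis of a series-series resonant circuit; all component values below are assumptions, not the authors' prototype). Sweeping the load resistance reproduces the classical efficiency-maximizing load RL* = R2*sqrt(1 + k^2 Q1 Q2):

```python
import numpy as np

# Generic two-coil resonant link: solve the two mesh equations and sweep the
# load resistance to locate the efficiency-maximizing load (assumed values).
f = 200e3; w = 2 * np.pi * f
L1 = L2 = 24e-6; k = 0.1; M = k * np.sqrt(L1 * L2)
R1 = R2 = 0.5                                  # coil ESRs (assumed)
C1 = 1 / (w**2 * L1); C2 = 1 / (w**2 * L2)     # tune both sides to resonance
Vs = 1.0

def efficiency(RL):
    Z1 = R1 + 1j * (w * L1 - 1 / (w * C1))
    Z2 = R2 + RL + 1j * (w * L2 - 1 / (w * C2))
    A = np.array([[Z1, -1j * w * M], [-1j * w * M, Z2]])
    I1, I2 = np.linalg.solve(A, np.array([Vs, 0.0]))
    Pin = (Vs * np.conj(I1)).real              # power drawn from the source
    return (abs(I2)**2 * RL) / Pin             # power delivered to the load

RLs = np.geomspace(0.01, 1000, 2000)
etas = np.array([efficiency(r) for r in RLs])
RL_best = RLs[etas.argmax()]
# classical closed-form optimum for this topology
Q1, Q2 = w * L1 / R1, w * L2 / R2
RL_theory = R2 * np.sqrt(1 + k**2 * Q1 * Q2)
print(RL_best, RL_theory, etas.max())
```

The load-side matching network in the paper effectively presents such an optimal load to the secondary coil; this sketch only shows why that load exists.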
A conjugate gradient method with descent properties under strong Wolfe line search
NASA Astrophysics Data System (ADS)
Zull, N.; ‘Aini, N.; Shoid, S.; Ghani, N. H. A.; Mohamed, N. S.; Rivaie, M.; Mamat, M.
2017-09-01
The conjugate gradient (CG) method is one of the optimization methods most often used in practical applications. The continuous and numerous studies conducted on the CG method have led to vast improvements in its convergence properties and efficiency. In this paper, a new CG method possessing the sufficient descent and global convergence properties is proposed. The efficiency of the new CG algorithm relative to existing CG methods is evaluated by testing them all on a set of test functions using MATLAB. The tests are measured in terms of iteration counts and CPU time under the strong Wolfe line search. Overall, the new method performs efficiently and is comparable to the other well-known methods.
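For readers unfamiliar with the ingredients, here is a generic nonlinear CG loop (the classical Fletcher-Reeves update, not the paper's new coefficient) under SciPy's Wolfe line search, applied to a convex quadratic whose minimizer is known:

```python
import numpy as np
from scipy.optimize import line_search

# Generic Fletcher-Reeves CG with a Wolfe line search (illustration only).
# Test problem: f(x) = 0.5 x^T A x - b^T x, whose unique minimizer solves A x = b.
rng = np.random.default_rng(0)
Q = rng.standard_normal((20, 20))
A = Q @ Q.T + 20 * np.eye(20)              # symmetric positive definite
b = rng.standard_normal(20)

f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b

x = np.zeros(20)
g = grad(x)
d = -g
for _ in range(200):
    alpha = line_search(f, grad, x, d, gfk=g, c1=1e-4, c2=0.1)[0]
    if alpha is None:                      # line search failed: restart along -g
        d = -g
        alpha = line_search(f, grad, x, d, gfk=g)[0]
    x = x + alpha * d
    g_new = grad(x)
    if np.linalg.norm(g_new) < 1e-8:
        break
    beta = (g_new @ g_new) / (g @ g)       # Fletcher-Reeves coefficient
    d = -g_new + beta * d
    g = g_new

x_star = np.linalg.solve(A, b)
print(np.linalg.norm(x - x_star))
```

A new CG method of the kind the abstract describes would replace the `beta` formula while keeping the same line-search scaffolding.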
A time-space domain stereo finite difference method for 3D scalar wave propagation
NASA Astrophysics Data System (ADS)
Chen, Yushu; Yang, Guangwen; Ma, Xiao; He, Conghui; Song, Guojie
2016-11-01
The time-space domain finite difference methods reduce numerical dispersion effectively by minimizing the error in the joint time-space domain. However, their interpolation coefficients are related to the Courant numbers, leading to significant extra time costs for loading the coefficients consecutively according to velocity in heterogeneous models. In the present study, we develop a time-space domain stereo finite difference (TSSFD) method for the 3D scalar wave equation. The method propagates both the displacements and their gradients simultaneously to keep more information about the wavefields, and minimizes the maximum phase velocity error directly using constant interpolation coefficients for different Courant numbers. We obtain the optimal constant coefficients by combining the truncated Taylor series approximation and the time-space domain optimization, and adjust the coefficients to improve the stability condition. Subsequent investigation shows that the TSSFD can suppress numerical dispersion effectively with high computational efficiency. The maximum phase velocity error of the TSSFD is just 3.09% even with only 2 sampling points per minimum wavelength when the Courant number is 0.4. Numerical experiments show that to generate wavefields with no visible numerical dispersion, the computational efficiency of the TSSFD is 576.9%, 193.5%, 699.0%, and 191.6% of that of the 4th-order and 8th-order Lax-Wendroff correction (LWC) methods, the 4th-order staggered grid (SG) method, and the 8th-order optimal finite difference (OFD) method, respectively. Meanwhile, the TSSFD is compatible with the unsplit convolutional perfectly matched layer (CPML) boundary condition for absorbing artificial boundaries. Its efficiency and capability to handle complex velocity models make it an attractive tool in imaging methods such as acoustic reverse time migration (RTM).
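The Courant number that the TSSFD coefficients are optimized over appears already in the simplest scheme of this family. The sketch below is the standard 2nd-order scheme for the 1D scalar wave equation (a baseline illustration, not the TSSFD itself); a Gaussian pulse with zero initial velocity splits into two halves travelling at speed c:

```python
import numpy as np

# Standard 2nd-order time-space FD scheme for u_tt = c^2 u_xx (illustration).
c = 1.0
N = 401
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]
C = 0.5                       # Courant number c*dt/dx (stable for C <= 1 in 1D)
dt = C * dx / c
steps = round(0.25 / dt)      # propagate to t = 0.25

u0 = np.exp(-((x - 0.5) / 0.02) ** 2)    # Gaussian pulse, zero initial velocity
u_prev = u0.copy()
u = u0.copy()
u[1:-1] = u0[1:-1] + 0.5 * C**2 * (u0[2:] - 2 * u0[1:-1] + u0[:-2])  # first step
for _ in range(steps - 1):
    u_next = np.zeros_like(u)
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + C**2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_prev, u = u, u_next

# the pulse splits into two halves moving at speed c: peaks near x = 0.25 and 0.75
left_peak = x[np.argmax(u[: N // 2])]
right_peak = x[N // 2 + np.argmax(u[N // 2:])]
print(left_peak, right_peak)
```

Time-space domain methods such as the TSSFD replace the plain C**2 stencil weights with coefficients optimized to keep the numerical phase velocity accurate at coarse sampling.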
High accurate interpolation of NURBS tool path for CNC machine tools
NASA Astrophysics Data System (ADS)
Liu, Qiang; Liu, Huan; Yuan, Songmei
2016-09-01
Feedrate fluctuation caused by the approximation errors of interpolation methods strongly affects machining quality in NURBS interpolation, but at present few methods can efficiently eliminate it or reduce it to a satisfactory level without sacrificing computing efficiency. In order to solve this problem, a highly accurate interpolation method for NURBS tool paths is proposed. The proposed method efficiently reduces the feedrate fluctuation by forming a quartic equation with respect to the curve parameter increment, which can be solved by analytic methods in real time. Theoretically, the proposed method can totally eliminate the feedrate fluctuation for any 2nd-degree NURBS curve and can interpolate 3rd-degree NURBS curves with minimal feedrate fluctuation. Moreover, a smooth feedrate planning algorithm is also proposed to generate smooth tool motion, considering multiple constraints and scheduling errors via an efficient planning strategy. Experiments are conducted to verify the feasibility and applicability of the proposed method. This research presents a novel NURBS interpolation method with not only high accuracy but also satisfactory computing efficiency.
NASA Astrophysics Data System (ADS)
Senegačnik, Jure; Tavčar, Gregor; Katrašnik, Tomaž
2015-03-01
The paper presents a computationally efficient method for solving the time-dependent diffusion equation in a granule of the Li-ion battery's granular solid electrode. The method, called the Discrete Temporal Convolution (DTC) method, is based on a discrete temporal convolution of the analytical solution of the step-function boundary value problem. This approach enables modelling the concentration distribution in the granular particles for arbitrary time-dependent exchange fluxes that need not be known a priori. It is demonstrated in the paper that the proposed method features faster computational times than finite volume/difference methods and the Padé approximation at the same accuracy. It is also demonstrated that all three addressed methods feature higher accuracy than the quasi-steady polynomial approaches when applied to simulate the current density variations typical of mobile/automotive applications. The proposed approach can thus be considered one of the key innovative methods enabling real-time capability of multi-particle electrochemical battery models featuring spatially and temporally resolved particle concentration profiles.
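The core trick, superposing analytic step responses by discrete convolution, can be shown on a scalar analogue (a first-order linear system, not the diffusion-in-a-granule problem; values below are assumptions). Approximating the input by a staircase, the output is the convolution of input increments with the known step response:

```python
import numpy as np

# Discrete temporal convolution of an analytic step response (scalar analogue).
# For dy/dt = -k*y + u(t), y(0) = 0, the unit-step response is
# s(t) = (1 - exp(-k t)) / k; a staircase input then gives y as a convolution.
k = 1.5
dt = 0.001
t = np.arange(0, 5, dt)
u = np.sin(t)                              # arbitrary time-dependent input
s = (1 - np.exp(-k * t)) / k               # analytic step response

du = np.diff(u, prepend=0.0)               # staircase increments of the input
y = np.convolve(du, s)[: t.size]           # discrete temporal convolution

# closed-form solution for u = sin(t), used for validation
y_exact = (k * np.sin(t) - np.cos(t) + np.exp(-k * t)) / (1 + k**2)
print(np.abs(y - y_exact).max())
```

The DTC method applies the same superposition idea with the analytic step response of the diffusion problem in a spherical granule.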
Electrosprayed chitosan nanoparticles: facile and efficient approach for bacterial transformation
NASA Astrophysics Data System (ADS)
Abyadeh, Morteza; Sadroddiny, Esmaeil; Ebrahimi, Ammar; Esmaeili, Fariba; Landi, Farzaneh Saeedi; Amani, Amir
2017-12-01
A rapid and efficient procedure for DNA transformation is a key prerequisite for successful cloning and genomic studies. While there have been efforts to develop a facile method, the efficiencies obtained so far for alternative methods have been unsatisfactory (i.e. 10⁵-10⁶ CFU/μg plasmid) compared with the conventional method (up to 10⁸ CFU/μg plasmid). In this work, for the first time, we prepared chitosan/pDNA nanoparticles by the electrospray method to improve the transformation process and investigated the transformation efficiency of non-competent bacteria; the effects of chitosan molecular weight, N/P ratio and nanoparticle size on this efficiency were also evaluated. The results showed that transformation efficiency increased with decreasing molecular weight, N/P ratio and nanoparticle size. A transformation efficiency of 1.7 × 10⁸ CFU/μg plasmid was obtained with a chitosan molecular weight of 30 kDa, an N/P ratio of 1 and a nanoparticle size of 125 nm. In total, we present a facile and rapid method for bacterial transformation whose efficiency is comparable with that of the common method.
Efficiency analysis of diffusion on T-fractals in the sense of random walks.
Peng, Junhao; Xu, Guoai
2014-04-07
Efficiently controlling the diffusion process is crucial in the study of diffusion problems in complex systems. In the sense of random walks with a single trap, the mean trapping time (MTT) and mean diffusing time (MDT) are good measures of trapping efficiency and diffusion efficiency, respectively. Both vary with the location of the node. In this paper, we analyze the effects of a node's location on the trapping efficiency and diffusion efficiency of T-fractals measured by the MTT and MDT. First, we provide methods to calculate the MTT for any target node and the MDT for any source node of T-fractals. The methods can also be used to calculate the mean first-passage time between any pair of nodes. Then, using the MTT and the MDT as the measures of trapping efficiency and diffusion efficiency, respectively, we compare the trapping efficiency and diffusion efficiency among all nodes of the T-fractal and find the best (or worst) trapping sites and the best (or worst) diffusing sites. Our results show that the hub node of the T-fractal is the best trapping site, but it is also the worst diffusing site; and that the three boundary nodes are the worst trapping sites, but they are also the best diffusing sites. Comparing the maximum of the MTT and MDT with their minimums, we find that the maximum of the MTT is almost 6 times its minimum, whereas the maximum of the MDT is almost equal to its minimum. Thus, the location of the target node has a large effect on trapping efficiency, but the location of the source node has almost no effect on diffusion efficiency. We also simulate random walks on T-fractals; the simulation results are consistent with the derived results.
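The quantities involved are mean first-passage times of an unweighted random walk, which on any finite graph can be obtained from a linear system (the paper derives them analytically for T-fractals; the sketch below uses a path graph, whose hitting times have the classical closed form k(2n - k), purely as a sanity check):

```python
import numpy as np

def mean_hitting_times(adj, trap):
    """Mean first-passage time to `trap` from every node of an unweighted graph.

    Solves T(trap) = 0 and T(i) = 1 + (1/deg(i)) * sum over neighbors j of T(j).
    """
    n = len(adj)
    A = np.eye(n)
    b = np.ones(n)
    for i in range(n):
        if i == trap:
            b[i] = 0.0
            continue
        for j in adj[i]:
            A[i, j] -= 1.0 / len(adj[i])
    return np.linalg.solve(A, b)

# sanity check on a path graph 0-1-...-n with the trap at node 0:
# from the far end the classical answer is exactly n**2 steps.
n = 10
adj = {i: [j for j in (i - 1, i + 1) if 0 <= j <= n] for i in range(n + 1)}
T = mean_hitting_times(adj, trap=0)
print(T[n])                # 100.0 for n = 10
MTT = T[1:].mean()         # trapping-efficiency measure: average over start nodes
```

Averaging T over start nodes gives the MTT of the chosen trap; repeating with each node as trap locates the best and worst trapping sites, exactly the comparison the paper carries out analytically on T-fractals.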
Hesselmann, Andreas; Görling, Andreas
2011-01-21
A recently introduced time-dependent exact-exchange (TDEXX) method, i.e., a response method based on time-dependent density-functional theory that treats the frequency-dependent exchange kernel exactly, is reformulated. In the reformulated version of the TDEXX method electronic excitation energies can be calculated by solving a linear generalized eigenvalue problem while in the original version of the TDEXX method a laborious frequency iteration is required in the calculation of each excitation energy. The lowest eigenvalues of the new TDEXX eigenvalue equation corresponding to the lowest excitation energies can be efficiently obtained by, e.g., a version of the Davidson algorithm appropriate for generalized eigenvalue problems. Alternatively, with the help of a series expansion of the new TDEXX eigenvalue equation, standard eigensolvers for large regular eigenvalue problems, e.g., the standard Davidson algorithm, can be used to efficiently calculate the lowest excitation energies. With the help of the series expansion as well, the relation between the TDEXX method and time-dependent Hartree-Fock is analyzed. Several ways to take into account correlation in addition to the exact treatment of exchange in the TDEXX method are discussed, e.g., a scaling of the Kohn-Sham eigenvalues, the inclusion of (semi)local approximate correlation potentials, or hybrids of the exact-exchange kernel with kernels within the adiabatic local density approximation. The lowest lying excitations of the molecules ethylene, acetaldehyde, and pyridine are considered as examples.
The use of Whatman-31ET paper for an efficient method for radiochemical purity test of 131I-Hippuran
NASA Astrophysics Data System (ADS)
Rezka Putra, Amal; Maskur; Sugiharto, Yono; Chairuman; Hardi Gunawan, Adang; Awaludin, Rohadi
2018-01-01
The current chromatography method used for the radiochemical purity test of 131I-Hippuran is time consuming. Therefore, in this study we explored several stationary and mobile phases in order to obtain a chromatography method that is accurate and efficient, i.e. less time consuming. Stationary phases (Whatman-1, 31ET, and 3MM papers) and several mobile phases were explored to separate 131I-Hippuran from its impurity (131I iodide ion). The results showed that the most efficient chromatography system for measuring the radiochemical purity of 131I-Hippuran used Whatman-31ET paper as the stationary phase and n-butanol:acetic acid:water (4:1:1) as the mobile phase. The developing time for this method was approximately 75.7 ± 2.7 minutes. The radiochemical purity (%RCP) of 131I-Hippuran measured with this chromatography system, using either Whatman-1 or Whatman-31ET paper strips, was 98.7%. A short Whatman-31ET paper strip (1 x 8 cm) was found to have a shorter developing time than the long strip; this system showed a good separation of 131I-Hippuran from its impurities and gave a %RCP of 98.1% ± 0.04% with a developing time of approximately 44.3 ± 9.4 minutes. The short Whatman-31ET paper strips were thus found to be more efficient than the Whatman-1 and Whatman-3MM paper strips in terms of developing time.
Efficient Algorithms for Estimating the Absorption Spectrum within Linear Response TDDFT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brabec, Jiri; Lin, Lin; Shao, Meiyue
We present two iterative algorithms for approximating the absorption spectrum of molecules within the linear response time-dependent density functional theory (TDDFT) framework. These methods do not attempt to compute eigenvalues or eigenvectors of the linear response matrix. They are designed to approximate the absorption spectrum as a function directly, and they take advantage of the special structure of the linear response matrix. Neither method requires the linear response matrix to be constructed explicitly; they only require a procedure that performs the multiplication of the linear response matrix with a vector. These methods can also be easily modified to efficiently estimate the density of states (DOS) of the linear response matrix without computing its eigenvalues. We show by computational experiments that the methods proposed in this paper can be much more efficient than methods based on the exact diagonalization of the linear response matrix. We show that they can also be more efficient than real-time TDDFT simulations. We compare the pros and cons of these methods in terms of their accuracy as well as their computational and storage cost.
Efficient multidimensional regularization for Volterra series estimation
NASA Astrophysics Data System (ADS)
Birpoutsoukis, Georgios; Csurcsia, Péter Zoltán; Schoukens, Johan
2018-05-01
This paper presents an efficient nonparametric time-domain nonlinear system identification method. It is shown how truncated Volterra series models can be efficiently estimated without the need for long, transient-free measurements. The method is a novel extension of the regularization methods that have been developed for impulse response estimates of linear time-invariant systems. To avoid excessive memory needs in the case of long measurements or a large number of estimated parameters, a practical gradient-based estimation method is also provided, leading to the same numerical results as the proposed Volterra estimation method. Moreover, the transient effects in the simulated output are removed by a special regularization method based on novel ideas of transient removal for Linear Time-Varying (LTV) systems. Combining the proposed methodologies, nonparametric Volterra models of the cascaded water tanks benchmark are presented in this paper. The results for different scenarios, varying from a simple Finite Impulse Response (FIR) model to a 3rd-degree Volterra series with and without transient removal, are compared and studied. It is clear that the obtained models capture the system dynamics when tested on a validation dataset, and their performance is comparable with the white-box (physical) models.
Error analysis of multipoint flux domain decomposition methods for evolutionary diffusion problems
NASA Astrophysics Data System (ADS)
Arrarás, A.; Portero, L.; Yotov, I.
2014-01-01
We study space and time discretizations for mixed formulations of parabolic problems. The spatial approximation is based on the multipoint flux mixed finite element method, which reduces to an efficient cell-centered pressure system on general grids, including triangles, quadrilaterals, tetrahedra, and hexahedra. The time integration is performed by using a domain decomposition time-splitting technique combined with multiterm fractional step diagonally implicit Runge-Kutta methods. The resulting scheme is unconditionally stable and computationally efficient, as it reduces the global system to a collection of uncoupled subdomain problems that can be solved in parallel without the need for Schwarz-type iteration. Convergence analysis for both the semidiscrete and fully discrete schemes is presented.
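The flavor of such time-splitting can be conveyed by the classical Peaceman-Rachford ADI scheme on the 2D heat equation (an illustration of directional splitting generally, not the multipoint flux domain decomposition scheme of the paper). Each half-step is implicit in one coordinate direction only, so only one-dimensional systems are solved:

```python
import numpy as np

# Peaceman-Rachford ADI for u_t = u_xx + u_yy on the unit square, u = 0 on the
# boundary (assumed model problem). The sin*sin mode decays as exp(-2 pi^2 t).
n = 31                                   # interior grid points per direction
h = 1.0 / (n + 1)
xi = np.linspace(h, 1 - h, n)
L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2     # 1D Dirichlet Laplacian

dt = 0.005
r = dt / 2.0
A = np.eye(n) - r * L                    # implicit half-step operator

X, Y = np.meshgrid(xi, xi, indexing="ij")
U = np.sin(np.pi * X) * np.sin(np.pi * Y)

t = 0.0
for _ in range(20):                      # advance to t = 0.1
    Ustar = np.linalg.solve(A, U + r * (U @ L))            # implicit in x
    U = np.linalg.solve(A, (Ustar + r * (L @ Ustar)).T).T  # implicit in y
    t += dt

U_exact = np.exp(-2 * np.pi**2 * t) * np.sin(np.pi * X) * np.sin(np.pi * Y)
print(np.abs(U - U_exact).max())
```

The domain decomposition splitting in the paper plays an analogous role: it reduces the global implicit system to small uncoupled subdomain solves, here caricatured by the two one-directional solves.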
Shokoohi, Reza; Torkshavand, Zahra; Zolghadnasab, Hassan; Alikhani, Mohammad Yousef; Hemmat, Meisam Sedighi
2017-04-01
Detergents are considered one of the important pollutants in hospital wastewater. Finding efficient and bio-friendly methods for the removal of these pollutants is a concern for environmental researchers. This study examines the efficiency of a moving bed biofilm reactor (MBBR) system for removing linear alkyl benzene sulfonate (LAS) from hospital wastewater, using response surface methodology (RSM). The study was carried out on a pilot-scale reactor with continuous hydraulic flow using K1 carrier media. The effects of the independent variables, including contact time, media filling percentage, and mixed liquor suspended solids (MLSS) concentration of 1000-3000 mg/l, on system efficiency were assessed. The methylene blue active substances (MBAS) method was used, by the closed laboratory method, to measure the LAS concentration, with chemical oxygen demand (COD) in the range 750-850 mg/l. The results revealed that the removal efficiencies of LAS detergent and COD, using K1 media, a retention time of 24 hours, and an MLSS concentration of around 3000 mg/l, were 92.3% and 95.8%, respectively. The results showed that the MBBR system, as a bio-friendly method, has high efficiency in removing detergents from hospital wastewater and can achieve standard output effluent in acceptable time.
NASA Astrophysics Data System (ADS)
Wang, Jinting; Lu, Liqiao; Zhu, Fei
2018-01-01
The finite element (FE) method is a powerful tool that investigators have applied to real-time hybrid simulations (RTHSs). This study focuses on the computational efficiency, including the computational time and accuracy, of numerical integrations in solving the FE numerical substructure in RTHSs. First, sparse matrix storage schemes are adopted to decrease the computational time of the FE numerical substructure. In this way, the task execution time (TET) decreases, so that the scale of the numerical substructure model can increase. Subsequently, several commonly used explicit numerical integration algorithms, including the central difference method (CDM), the Newmark explicit method, the Chang method and the Gui-λ method, are comprehensively compared to evaluate their computational time in solving the FE numerical substructure. The CDM is better than the other explicit integration algorithms when the damping matrix is diagonal, while the Gui-λ (λ = 4) method is advantageous when the damping matrix is non-diagonal. Finally, the effect of time delay on the computational accuracy of RTHSs is investigated by simulating structure-foundation systems. Simulation results show that the influence of time delay on the displacement response becomes pronounced as the mass ratio increases, and delay compensation methods can reduce the relative error of the displacement peak value to less than 5% even with a large time step and large time delay.
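The CDM mentioned above is the simplest of the compared integrators. A generic sketch for M*a + C*v + K*u = f(t) on a small 2-DOF system (assumed matrices, not the FE substructure model; f = 0 here) shows why a diagonal damping matrix makes it cheap, the effective system solved each step involves only M and C:

```python
import numpy as np

# Central difference method for M*a + C*v + K*u = f(t), free vibration (f = 0).
M = np.diag([1.0, 1.0])
C = np.zeros((2, 2))                     # undamped, so energy should be conserved
K = np.array([[400.0, -200.0], [-200.0, 400.0]])

dt = 0.002
steps = 5000
u = np.array([1.0, 0.0])                 # initial displacement, zero velocity
u_prev = u - 0.5 * dt**2 * np.linalg.solve(M, K @ u)   # standard starting step

Aeff = M / dt**2 + C / (2 * dt)          # diagonal whenever M and C are diagonal
drift = 0.0
E0 = None
for _ in range(steps):
    rhs = -(K - 2 * M / dt**2) @ u - (M / dt**2 - C / (2 * dt)) @ u_prev
    u_next = np.linalg.solve(Aeff, rhs)
    v = (u_next - u_prev) / (2 * dt)     # central velocity estimate at time t_n
    E = 0.5 * v @ M @ v + 0.5 * u @ K @ u
    E0 = E if E0 is None else E0
    drift = max(drift, abs(E - E0) / E0)
    u_prev, u = u, u_next

print(drift)   # relative energy drift stays small for an undamped system
```

With a non-diagonal C, `Aeff` is no longer trivially invertible, which is exactly the regime where the abstract reports the Gui-λ method becoming advantageous.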
Prakash, Jaya; Yalavarthy, Phaneendra K
2013-03-01
The aim was to develop a computationally efficient automated method for the optimal choice of regularization parameter in diffuse optical tomography. The least-squares QR (LSQR)-type method that uses Lanczos bidiagonalization is known to be computationally efficient in performing the reconstruction procedure in diffuse optical tomography. The same is effectively deployed via an optimization procedure that uses the simplex method to find the optimal regularization parameter. The proposed LSQR-type method is compared with traditional methods such as the L-curve, generalized cross-validation (GCV), and the recently proposed minimal residual method (MRM)-based choice of regularization parameter, using numerical and experimental phantom data. The results indicate that the proposed LSQR-type and MRM-based methods perform similarly in terms of reconstructed image quality, and both are superior to the L-curve and GCV-based methods. The proposed method's computational complexity is at least five times lower than that of the MRM-based method, making it an optimal technique. The LSQR-type method thus overcomes the computationally expensive nature of the MRM-based automated way of finding the optimal regularization parameter in diffuse optical tomographic imaging, making it more suitable for real-time deployment.
Spectral analysis for GNSS coordinate time series using chirp Fourier transform
NASA Astrophysics Data System (ADS)
Feng, Shengtao; Bo, Wanju; Ma, Qingzun; Wang, Zifan
2017-12-01
Spectral analysis for global navigation satellite system (GNSS) coordinate time series provides a principal tool for understanding the intrinsic mechanisms that affect tectonic movements. Spectral analysis methods such as the fast Fourier transform, the Lomb-Scargle spectrum, the evolutionary power spectrum, the wavelet power spectrum, etc. are used to find periodic characteristics in time series. Among these, the chirp Fourier transform (CFT), which has less stringent requirements, is tested with synthetic and actual GNSS coordinate time series, which proves the accuracy and efficiency of the method. With the series length limited only to even numbers, CFT provides a convenient tool for windowed spectral analysis. The results on ideal synthetic data prove CFT accurate and efficient, while the results on actual data show that CFT can be used to derive periodic information from GNSS coordinate time series.
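The kind of periodicity being extracted can be shown with the simplest of the listed baselines, a plain FFT periodogram, on a synthetic daily "GNSS-like" series (the CFT itself is not reimplemented here; amplitudes and noise level are assumptions). Annual and semiannual terms stand out clearly above white noise:

```python
import numpy as np

# FFT periodogram on a synthetic daily coordinate series with annual and
# semiannual terms plus white noise (baseline illustration, not the CFT).
rng = np.random.default_rng(42)
N = 2922                                   # 8 years of daily solutions
t = np.arange(N)
annual, semiannual = N / 8.0, N / 16.0     # exactly 8 and 16 cycles in the window
x = (5.0 * np.sin(2 * np.pi * t / annual)        # mm
     + 2.0 * np.sin(2 * np.pi * t / semiannual)
     + 1.0 * rng.standard_normal(N))

X = np.fft.rfft(x - x.mean())
power = np.abs(X) ** 2
freqs = np.fft.rfftfreq(N, d=1.0)          # cycles per day

# the two strongest non-DC peaks sit at the annual and semiannual frequencies
order = np.argsort(power[1:])[::-1] + 1
periods = 1.0 / freqs[order[:2]]
print(sorted(periods))                     # ~ [182.6, 365.25] days
```

The FFT is restricted to the harmonic grid of the window; the appeal of the CFT discussed in the abstract is precisely that it relaxes such restrictions while remaining efficient.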
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurnik, Charles W; Violette, Daniel M.
Addressing other evaluation issues that have been raised in the context of energy efficiency programs, this chapter focuses on methods used to address the persistence of energy savings, which is an important input to the benefit/cost analysis of energy efficiency programs and portfolios. In addition to discussing 'persistence' (which refers to the stream of benefits over time from an energy efficiency measure or program), this chapter provides a summary treatment of these issues: synergies across programs; rebound; dual baselines; and errors in variables (the measurement and/or accuracy of input variables to the evaluation).
An efficient quantum algorithm for spectral estimation
NASA Astrophysics Data System (ADS)
Steffens, Adrian; Rebentrost, Patrick; Marvian, Iman; Eisert, Jens; Lloyd, Seth
2017-03-01
We develop an efficient quantum implementation of an important signal processing algorithm for line spectral estimation: the matrix pencil method, which determines the frequencies and damping factors of signals consisting of finite sums of exponentially damped sinusoids. Our algorithm provides a quantum speedup in a natural regime where the sampling rate is much higher than the number of sinusoid components. Along the way, we develop techniques that are expected to be useful for other quantum algorithms as well—consecutive phase estimations to efficiently make products of asymmetric low rank matrices classically accessible and an alternative method to efficiently exponentiate non-Hermitian matrices. Our algorithm features an efficient quantum-classical division of labor: the time-critical steps are implemented in quantum superposition, while an interjacent step, requiring much fewer parameters, can operate classically. We show that frequencies and damping factors can be obtained in time logarithmic in the number of sampling points, exponentially faster than known classical algorithms.
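The classical algorithm being accelerated is compact enough to sketch: in the (noiseless) matrix pencil method, the signal poles are eigenvalues of a pair of shifted Hankel data matrices. This is the standard textbook formulation (Hua-Sarkar style), not the quantum version, with assumed test parameters:

```python
import numpy as np

def matrix_pencil(x, n_poles, L=None):
    """Estimate poles z_k from samples x[n] = sum_k a_k * z_k**n (noiseless case)."""
    N = x.size
    L = L or N // 3                       # pencil parameter
    # Hankel data matrices: Y1[i, j] = x[i + j], Y2[i, j] = x[i + j + 1]
    Y1 = np.array([x[i:i + L] for i in range(N - L)])
    Y2 = np.array([x[i + 1:i + L + 1] for i in range(N - L)])
    vals = np.linalg.eigvals(np.linalg.pinv(Y1) @ Y2)
    # keep the n_poles eigenvalues of largest magnitude (the rest are ~ 0)
    return vals[np.argsort(-np.abs(vals))[:n_poles]]

# one exponentially damped real sinusoid = a conjugate pole pair exp(-d +/- i*w)
d, w = 0.02, 0.5
n = np.arange(64)
x = np.exp(-d * n) * np.cos(w * n)

poles = matrix_pencil(x, n_poles=2)
damping = -np.log(np.abs(poles[0]))
freq = abs(np.angle(poles[0]))
print(damping, freq)   # ~ 0.02, 0.5
```

The quantum algorithm in the abstract targets exactly these linear-algebra steps (products of low-rank matrices and the exponentiation of the non-Hermitian pencil), which dominate the classical cost.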
Liu, Ping; Li, Guodong; Liu, Xinggao; Xiao, Long; Wang, Yalin; Yang, Chunhua; Gui, Weihua
2018-02-01
A high-quality control method is essential for the implementation of an aircraft autopilot system. An optimal control problem model considering the safe aerodynamic envelope is therefore established to improve the control quality of aircraft flight level tracking. A novel non-uniform control vector parameterization (CVP) method with time grid refinement is then proposed for solving the optimal control problem. By introducing Hilbert-Huang transform (HHT) analysis, an efficient time grid refinement approach is presented and an adaptive time grid is automatically obtained. With this refinement, the proposed method needs fewer optimization parameters to achieve better control quality than the uniform refinement CVP method, at lower computational cost. Two well-known flight level altitude tracking problems and one minimum time cost problem are tested as illustrations, with the uniform refinement CVP method adopted as the comparative baseline. Numerical results show that the proposed method achieves better performance in terms of optimization accuracy and computational cost; meanwhile, the control quality is efficiently improved. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Kiss, Gellért Zsolt; Borbély, Sándor; Nagy, Ladislau
2017-12-01
We present an efficient numerical approach for the ab initio solution of the time-dependent Schrödinger equation describing diatomic molecules interacting with ultrafast laser pulses. In constructing the model we assumed a frozen nuclear configuration and a single active electron. In order to increase efficiency, the system was described using prolate spheroidal coordinates, and the wave function was discretized using the finite-element discrete variable representation (FE-DVR) method. The discretized wave functions were efficiently propagated in time using the short-iterative Lanczos algorithm. As a first test we studied how the laser-induced bound-state dynamics in H2+ is influenced by the strength of the driving laser field.
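The short-iterative Lanczos propagator itself is generic and small enough to sketch: build a Krylov basis of the Hermitian Hamiltonian with the Lanczos recurrence, then exponentiate the small tridiagonal projection exactly. The toy tight-binding Hamiltonian below is an assumption for validation only, not the FE-DVR molecular Hamiltonian:

```python
import numpy as np
from scipy.linalg import expm

def lanczos_step(H, psi, dt, m=12):
    """One short-iterative-Lanczos step: psi(t + dt) ~= exp(-i H dt) psi."""
    n = psi.size
    V = np.zeros((n, m), dtype=complex)
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    nrm = np.linalg.norm(psi)
    V[:, 0] = psi / nrm
    for j in range(m):
        w = H @ V[:, j]
        alpha[j] = np.real(np.vdot(V[:, j], w))
        w -= alpha[j] * V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            if beta[j] < 1e-14:          # invariant subspace found: basis is exact
                m = j + 1
                V, alpha, beta = V[:, :m], alpha[:m], beta[:m - 1]
                break
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    e1 = np.zeros(len(alpha)); e1[0] = 1.0
    return nrm * (V @ (expm(-1j * dt * T) @ e1))

# toy Hermitian "Hamiltonian": 1D tight-binding chain (assumed test system)
n = 40
H = (np.diag(np.linspace(-1, 1, n))
     + np.diag(-0.5 * np.ones(n - 1), 1) + np.diag(-0.5 * np.ones(n - 1), -1))
psi0 = np.zeros(n, dtype=complex); psi0[n // 2] = 1.0
psi = lanczos_step(H, psi0, dt=0.5)
exact = expm(-1j * 0.5 * H) @ psi0
print(np.linalg.norm(psi - exact))
```

Because only matrix-vector products with H are needed, the same step applies unchanged to large sparse FE-DVR Hamiltonians, which is what makes the propagator efficient in practice.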
Singh, Gurpreet; Ravi, Koustuban; Wang, Qian; Ho, Seng-Tiong
2012-06-15
A complex-envelope (CE) alternating-direction-implicit (ADI) finite-difference time-domain (FDTD) approach that treats light-matter interaction self-consistently with electromagnetic field evolution, for efficient simulations of active photonic devices, is presented for the first time (to our best knowledge). The active medium (AM) is modeled using an efficient multilevel system of carrier rate equations to yield the correct carrier distributions, suitable for modeling semiconductor/solid-state media accurately. To include the AM in the CE-ADI-FDTD method, a first-order differential system involving the CE fields in the AM is first set up. The system matrix that includes the AM parameters is then split into two time-dependent submatrices that are used in an efficient ADI splitting formula. The proposed CE-ADI-FDTD approach with the AM takes 22% of the time of the corresponding explicit FDTD approach, as validated by semiconductor microdisk laser simulations.
NASA Technical Reports Server (NTRS)
Kowalski, Marc Edward
2009-01-01
A method for the prediction of time-domain signatures of chafed coaxial cables is presented. The method is quasi-static in nature, and is thus efficient enough to be included in inference and inversion routines. Unlike previous models proposed, no restriction on the geometry or size of the chafe is required in the present approach. The model is validated and its speed is illustrated via comparison to simulations from a commercial, three-dimensional electromagnetic simulator.
NASA Astrophysics Data System (ADS)
Apreyan, R. A.; Fleck, M.; Atanesyan, A. K.; Sukiasyan, R. P.; Petrosyan, A. M.
2015-12-01
L-Nitroargininium picrate has been obtained from an aqueous solution containing equimolar quantities of L-nitroarginine and picric acid by slow evaporation. Single crystals were grown by the evaporation method, and the crystal structure was determined at room temperature. The salt crystallizes in the monoclinic crystal system (space group P21). Vibrational spectra and thermal properties were studied. The second harmonic generation efficiency measured by the powder method is found to be four times higher than that of L-nitroarginine, which in turn is ten times more efficient than KDP (KH2PO4).
NASA Technical Reports Server (NTRS)
White, C. W.
1981-01-01
The computational efficiency of the impedance type loads prediction method was studied. Three goals were addressed: devise a method to make the impedance method operate more efficiently in the computer; assess the accuracy and convenience of the method for determining the effect of design changes; and investigate the use of the method to identify design changes for reduction of payload loads. The method is suitable for calculation of dynamic response in either the frequency or time domain. It is concluded that: the choice of an orthogonal coordinate system will allow the impedance method to operate more efficiently in the computer; the approximate mode impedance technique is adequate for determining the effect of design changes, and is applicable for both statically determinate and statically indeterminate payload attachments; and beneficial design changes to reduce payload loads can be identified by the combined application of impedance techniques and energy distribution review techniques.
[Economic efficiency of computer monitoring of health].
Il'icheva, N P; Stazhadze, L L
2001-01-01
Presents a method of computer-based health monitoring that draws on modern information technologies in public health. The method helps an outpatient clinic organize preventive activities at a high level and substantially reduces losses of time and money. The efficiency of such preventive measures and the growing number of computer and Internet users suggest that these methods are promising and that further studies in this field are needed.
Apparatus and method for investigation of energy consumption of microwave assisted drying systems.
Göllei, Attila; Vass, András; Magyar, Attila; Pallai, Elisabeth
2009-10-01
Convective hot-air drying by itself is relatively efficient at removing water from the surface of agricultural seed products. However, moving internal moisture to the surface typically takes a long time. The major research aim of the authors was to decrease the processing time and costs, improve the quality of the dried product, and increase drying efficiency. For this reason, their research activities focused on the development of a special drying apparatus and a method suitable for measuring the energy conditions in a hybrid (microwave and convective) dryer. Experimental investigations were made with moistened wheat as the model material. Experiments were carried out in microwave, convective, and hybrid drying systems. Microwave drying alone was more efficient than the convective method. The lowest energy consumption and shortest drying time were obtained with a hybrid method in which the waste energy of the magnetron was utilized and the temperature was controlled. In this way, it was possible to keep the temperature of the dried product at a constant, safe value and to considerably decrease the energy consumption.
An efficient unstructured WENO method for supersonic reactive flows
NASA Astrophysics Data System (ADS)
Zhao, Wen-Geng; Zheng, Hong-Wei; Liu, Feng-Jun; Shi, Xiao-Tian; Gao, Jun; Hu, Ning; Lv, Meng; Chen, Si-Cong; Zhao, Hong-Da
2018-03-01
An efficient high-order numerical method for supersonic reactive flows is proposed in this article. The reactive source term and the convection term are solved separately by a splitting scheme. In the reaction step, an adaptive time-step method is presented, which greatly improves efficiency. In the convection step, a third-order accurate weighted essentially non-oscillatory (WENO) method is adopted to reconstruct the solution on unstructured grids. Numerical results show that the new method captures the correct propagation speed of the detonation wave even on coarse grids, while high-order accuracy is achieved in smooth regions. In addition, the proposed adaptive splitting method greatly reduces the computational cost compared with the traditional splitting method.
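The split treatment of convection and a stiff reaction term can be sketched in one dimension. Here a hypothetical linear decay law stands in for the paper's chemistry, and first-order upwind convection stands in for the WENO reconstruction; the adaptive part is the stability-capped sub-stepping of the reaction:

```python
import numpy as np

def react_adaptive(u, dt, rate):
    """Sub-step the stiff reaction du/dt = -rate * u adaptively: the
    sub-step size is capped by a stability limit set by the reaction rate.
    (Linear decay is a stand-in for the actual chemistry.)"""
    t = 0.0
    while t < dt:
        h = min(dt - t, 0.5 / rate)     # adaptive, stability-limited sub-step
        u = u + h * (-rate * u)
        t += h
    return u

def advect_upwind(u, c, dx, dt):
    """First-order upwind convection on a periodic grid (c > 0); a WENO
    reconstruction would replace this in the actual method."""
    return u - c * dt / dx * (u - np.roll(u, 1))

def split_step(u, c, rate, dx, dt):
    """Godunov splitting: convection step, then adaptive reaction step."""
    return react_adaptive(advect_upwind(u, c, dx, dt), dt, rate)

x = np.linspace(0.0, 1.0, 100, endpoint=False)
u = np.exp(-100.0 * (x - 0.5) ** 2)
dx, c, rate = x[1] - x[0], 1.0, 500.0
dt = 0.5 * dx / c                        # CFL-limited convection step
for _ in range(20):
    u = split_step(u, c, rate, dx, dt)
```

The convection step runs at the CFL limit while the much stiffer reaction is integrated with its own, smaller sub-steps, so the stiff source never forces the global time step down.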
NASA Astrophysics Data System (ADS)
Suzuki, Yoshinari; Sato, Hikaru; Hiyoshi, Katsuhiro; Furuta, Naoki
2012-10-01
A new calibration system for real-time determination of trace elements in airborne particulates was developed. Airborne particulates were directly introduced into an inductively coupled plasma mass spectrometer, and the concentrations of 15 trace elements were determined by means of an external calibration method. External standard solutions were nebulized by an ultrasonic nebulizer (USN) coupled with a desolvation system, and the resulting aerosol was introduced into the plasma. The efficiency of sample introduction via the USN was calculated by two methods: (1) the introduction of a Cr standard solution via the USN was compared with the introduction of a Cr(CO)6 standard gas via a standard gas generator, and (2) the aerosol generated by the USN was trapped on filters and then analyzed. The Cr introduction efficiencies obtained by the two methods were the same, and the introduction efficiencies of the other elements were equal to that of Cr. Our results indicated that our calibration method for introduction efficiency worked well for the 15 elements (Ti, V, Cr, Mn, Co, Ni, Cu, Zn, As, Mo, Sn, Sb, Ba, Tl and Pb). The real-time data and the filter-collection data agreed well for elements with low-melting oxides (V, Co, As, Mo, Sb, Tl and Pb). In contrast, the real-time data were smaller than the filter-collection data for elements with high-melting oxides (Ti, Cr, Mn, Ni, Cu, Zn, Sn and Ba). This result implies that the oxides of these 8 elements were not completely fused, vaporized, atomized and ionized in the initial radiation zone of the inductively coupled plasma. However, quantitative real-time monitoring can be realized after correcting for the element recoveries, which can be calculated as the ratio of the real-time data to the filter-collection data.
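The recovery correction amounts to dividing each real-time reading by that element's recovery ratio. A minimal sketch with hypothetical numbers (the concentrations and recoveries below are illustrative, not values from the study):

```python
# Element recoveries from a calibration period, defined as
# (real-time reading) / (filter-collection reference); hypothetical values.
# V has a low-melting oxide (recovery near 1), Ti a high-melting oxide.
recovery = {"V": 0.95, "Ti": 0.50}

# Later real-time readings (ng/m^3, hypothetical), corrected by recovery
readings = {"V": 1.9, "Ti": 0.8}
corrected = {el: readings[el] / recovery[el] for el in readings}
# corrected: V ~ 2.0, Ti ~ 1.6
```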
Lok, U-Wai; Li, Pai-Chi
2016-03-01
Graphics processing unit (GPU)-based software beamforming offers easier programmability and a faster design cycle than hardware-based beamforming, since complicated imaging algorithms can be efficiently programmed and modified. However, the need for a high data rate when transferring ultrasound radio-frequency (RF) data from the hardware front end to the software back end limits the real-time performance. Data compression methods can be applied at the hardware front end to mitigate the data transfer issue. Nevertheless, most decompression processes cannot be performed efficiently on a GPU, thus becoming another bottleneck for real-time imaging. Moreover, lossless (or nearly lossless) compression is desirable to avoid image quality degradation. In a previous study, we proposed a real-time lossless compression-decompression algorithm and demonstrated that it can reduce the overall processing time because the reduction in data transfer time is greater than the computation time required for compression/decompression. This paper analyzes the lossless compression method in order to understand the factors limiting the compression efficiency. Based on the analytical results, a nearly lossless compression is proposed to further enhance the compression efficiency. The proposed method comprises a transformation coding method involving modified lossless compression that aims at suppressing amplitude data. The simulation results indicate that the compression ratio (CR) of the proposed approach can be enhanced from nearly 1.8 to 2.5, thus allowing a higher data acquisition rate at the front end. The spatial and contrast resolutions with and without compression were almost identical, and decompressing the data of a single frame on a GPU took only several milliseconds. Moreover, the proposed method has been implemented in a 64-channel system that we built in-house to demonstrate the feasibility of the proposed algorithm in a real system.
It was found that channel data from a 64-channel system can be transferred using the standard USB 3.0 interface in most practical imaging applications.
Dynamic response analysis of structure under time-variant interval process model
NASA Astrophysics Data System (ADS)
Xia, Baizhan; Qin, Yuan; Yu, Dejie; Jiang, Chao
2016-10-01
Due to aggressive environmental factors, variation of dynamic loads, degradation of material properties and wear of machine surfaces, parameters related to a structure are distinctly time-variant. The typical model for time-variant uncertainties is the random process model, which is constructed on the basis of a large number of samples. In this work, we propose a time-variant interval process model that can effectively deal with time-variant uncertainties under limited information. Two methods are then presented for the dynamic response analysis of structures under the time-variant interval process model. The first is the direct Monte Carlo method (DMCM), whose computational burden is relatively high. The second is the Monte Carlo method based on Chebyshev polynomial expansion (MCM-CPE), whose computational efficiency is high. In MCM-CPE, the dynamic response of the structure is approximated by Chebyshev polynomials, which can be evaluated efficiently, and the variational range of the dynamic response is then estimated from the samples yielded by the Monte Carlo method. To address the dependency phenomenon of interval arithmetic, affine arithmetic is integrated into the Chebyshev polynomial expansion. The effectiveness and efficiency of MCM-CPE are verified by two numerical examples: a spring-mass-damper system and a shell structure.
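The MCM-CPE idea, approximating an expensive response with a Chebyshev expansion and then bounding it by cheap Monte Carlo sampling over the interval variable, can be sketched as follows. The oscillator response function here is a hypothetical stand-in for the true dynamic analysis, and the affine-arithmetic refinement is omitted:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def response(k):
    """Hypothetical 'expensive' dynamic response: peak amplitude of a
    unit-forced oscillator as the interval stiffness parameter k varies."""
    return 1.0 / np.sqrt((k - 2.0) ** 2 + 0.5)

# Build a degree-8 Chebyshev surrogate of the response for k in [1, 3],
# i.e. for the normalized variable xi = k - 2 in [-1, 1].
xi_nodes = C.chebpts1(9)
coeffs = C.chebfit(xi_nodes, response(2.0 + xi_nodes), 8)

# Monte Carlo over the interval variable, evaluated on the cheap surrogate
rng = np.random.default_rng(0)
vals = C.chebval(rng.uniform(-1.0, 1.0, 10_000), coeffs)
lo, hi = float(vals.min()), float(vals.max())   # estimated response bounds
```

The expensive model is evaluated only at the 9 Chebyshev nodes; the 10,000 Monte Carlo samples hit the polynomial surrogate, which is where the efficiency gain over direct Monte Carlo comes from.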
Kandasamy, Ganesan; Shaleh, Sitti Raehanah Muhamad
2018-01-01
A new approach to recovering microalgae from aqueous medium using a bio-flotation method is reported. The method utilizes a Moringa protein extract-oil emulsion (MPOE) for flotation removal of Nannochloropsis sp. The effects of various operating parameters, including pH, MPOE dose, algae concentration and mixing time, were assessed. A maximum flotation efficiency of 86.5% was achieved without changing the pH of the algal medium. Moreover, zeta potential analysis showed a marked difference in zeta potential values as the MPOE dose was increased. Under the optimum conditions of 50 ml/L MPOE, pH 8 and 4 min mixing time, a flotation efficiency greater than 86% was accomplished. The morphology of the algal flocs produced by the protein-oil emulsion flocculant was characterized by microscopy. This flotation method is not only simple but also efficient for harvesting microalgae from culture medium.
USDA-ARS?s Scientific Manuscript database
Enhancement of irrigation water use efficiency and water productivity in arid wine grape production regions is hindered by a lack of automated, real-time methods for monitoring and interpreting vine water status. A normalized, water stress index calculated from real-time vine canopy temperature meas...
Time and Learning Efficiency in Internet-Based Learning: A Systematic Review and Meta-Analysis
ERIC Educational Resources Information Center
Cook, David A.; Levinson, Anthony J.; Garside, Sarah
2010-01-01
Authors have claimed that Internet-based instruction promotes greater learning efficiency than non-computer methods. Objectives Determine, through a systematic synthesis of evidence in health professions education, how Internet-based instruction compares with non-computer instruction in time spent learning, and what features of Internet-based…
Tavakoli, Mohammad Mahdi; Gu, Leilei; Gao, Yuan; Reckmeier, Claas; He, Jin; Rogach, Andrey L.; Yao, Yan; Fan, Zhiyong
2015-01-01
Organometallic trihalide perovskites are promising materials for photovoltaic applications and have demonstrated a rapid rise in photovoltaic performance in a short period of time. We report a facile one-step method to fabricate planar heterojunction perovskite solar cells by chemical vapor deposition (CVD), with a solar power conversion efficiency of up to 11.1%. We performed a systematic optimization of CVD parameters such as temperature and growth time to obtain high-quality films of CH3NH3PbI3 and CH3NH3PbI3-xClx perovskite. Scanning electron microscopy and time-resolved photoluminescence data showed that the perovskite films have a large grain size of more than 1 micrometer, and carrier lifetimes of 10 ns and 120 ns for CH3NH3PbI3 and CH3NH3PbI3-xClx, respectively. This is the first demonstration of a highly efficient perovskite solar cell using one-step CVD, and there is likely room for significant improvement of device efficiency. PMID:26392200
Wolff, Sebastian; Bucher, Christian
2013-01-01
This article presents asynchronous collision integrators and a simple asynchronous method for treating nodal restraints. Asynchronous discretizations allow individual time step sizes for each spatial region, improving the efficiency of explicit time stepping for finite element meshes with heterogeneous element sizes. The article first introduces asynchronous variational integration expressed by drift and kick operators. Linear nodal restraint conditions are solved by a simple projection of the forces that is shown to be equivalent to RATTLE. Unilateral contact is solved by an asynchronous variant of decomposition contact response, in which velocities are modified to avoid penetrations. Although decomposition contact response solves a large system of linear equations (critical for the numerical efficiency of explicit time stepping schemes) and needs special treatment of overconstraint and linear dependency of the contact constraints (for example, from double-sided node-to-surface contact or self-contact), the asynchronous strategy handles these situations efficiently and robustly. Only a single constraint, involving a very small number of degrees of freedom, is considered at once, leading to a very efficient solution. The treatment of friction is exemplified for the Coulomb model. The contact of nodes that are subject to restraints needs special care; together with the aforementioned projection for restraints, a novel, efficient solution scheme is presented. The collision integrator does not influence the critical time step, so the time step can be chosen independently of the underlying time-stepping scheme and may be fixed or time-adaptive. New demands on global collision detection are discussed, exemplified by position codes and node-to-segment integration. Numerical examples illustrate the convergence and efficiency of the new contact algorithm. Copyright © 2013 The Authors.
International Journal for Numerical Methods in Engineering published by John Wiley & Sons, Ltd. PMID:23970806
Dynamical analysis of the avian-human influenza epidemic model using the semi-analytical method
NASA Astrophysics Data System (ADS)
Jabbari, Azizeh; Kheiri, Hossein; Bekir, Ahmet
2015-03-01
In this work, we present the dynamic behavior of an avian-human influenza epidemic model by using an efficient computational algorithm, namely the multistage differential transform method (MsDTM). The MsDTM is used here as an algorithm for approximating the solutions of the avian-human influenza epidemic model in a sequence of time intervals. In order to show the efficiency of the method, the obtained numerical results are compared with fourth-order Runge-Kutta method (RK4M) and differential transform method (DTM) solutions. It is shown that the MsDTM has the advantage of giving an analytical form of the solution within each time interval, which is not possible with purely numerical techniques like the RK4M.
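The multistage idea, building a truncated Taylor (differential-transform) series on each subinterval and restarting from its endpoint value, can be sketched on the scalar logistic equation y' = y(1 - y), a hypothetical stand-in for the influenza model's system of ODEs:

```python
def msdtm_logistic(y0, t_end, n_stages=20, order=8):
    """Multistage differential transform method for y' = y(1 - y):
    on each subinterval, build the Taylor coefficients Y[k] from the DTM
    recurrence (k+1) Y[k+1] = Y[k] - sum_m Y[m] Y[k-m], evaluate the
    truncated series at the subinterval end, and restart from there."""
    h = t_end / n_stages
    y = y0
    for _ in range(n_stages):
        Y = [y]                                  # Y[k] ~ y^(k)(t0) / k!
        for k in range(order):
            conv = sum(Y[m] * Y[k - m] for m in range(k + 1))
            Y.append((Y[k] - conv) / (k + 1))
        y = sum(Yk * h**k for k, Yk in enumerate(Y))
    return y
```

Within each stage the truncated series is an analytical (polynomial) expression in time, which is the property the abstract highlights; accuracy can be checked against the closed form y(t) = 1 / (1 + (1/y0 - 1) e^(-t)).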
Kim, Minjin; Kim, Gi-Hwan; Oh, Kyoung Suk; Jo, Yimhyun; Yoon, Hyun; Kim, Ka-Hyun; Lee, Heon; Kim, Jin Young; Kim, Dong Suk
2017-06-27
Organic-inorganic hybrid metal halide perovskite solar cells (PSCs) are attracting tremendous research interest due to their high solar-to-electric power conversion efficiency, a high possibility of cost-effective fabrication, and certified power conversion efficiencies now exceeding 22%. Although many effective methods for their application have been developed over the past decade, their practical transition to large-size devices has been restricted by difficulties in achieving high performance. Here we report on the development of a simple and cost-effective production method with high-temperature, short-time annealing to obtain uniform, smooth, large-grain domains of perovskite films over large areas. With high-temperature, short-time annealing at 400 °C for 4 s, which causes fast solvent evaporation, a perovskite film with an average domain size of 1 μm was obtained. Solar cells fabricated using this processing technique had a maximum power conversion efficiency exceeding 20% over a 0.1 cm2 active area and 18% over a 1 cm2 active area. We believe our approach will enable the realization of highly efficient, large-area PSCs for practical development with a very simple and short-time procedure. This simple method should lead the field toward the fabrication of uniform large-scale perovskite films, which are necessary for the production of high-efficiency solar cells and may also be applicable to several other material systems for more widespread practical deployment.
Newmark-Beta-FDTD method for super-resolution analysis of time reversal waves
NASA Astrophysics Data System (ADS)
Shi, Sheng-Bing; Shao, Wei; Ma, Jing; Jin, Congjun; Wang, Xiao-Hua
2017-09-01
In this work, a new unconditionally stable finite-difference time-domain (FDTD) method with a split-field perfectly matched layer (PML) is proposed for the analysis of time reversal (TR) waves. The proposed method is well suited to multiscale problems involving microstructures. The spatial and temporal derivatives are discretized by the central difference technique and the Newmark-Beta algorithm, respectively, and the derivation results in a banded-sparse matrix equation. Since the coefficient matrix remains unchanged throughout the simulation, the lower-upper (LU) decomposition of the matrix needs to be performed only once, at the beginning of the calculation. Moreover, the reverse Cuthill-McKee (RCM) technique, an effective preprocessing technique for bandwidth compression of sparse matrices, is used to improve computational efficiency. Super-resolution focusing of TR wave propagation in two- and three-dimensional spaces is included to validate the accuracy and efficiency of the proposed method.
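The factor-once strategy, RCM reordering followed by a single LU decomposition whose factors are reused at every time step, can be sketched with SciPy. The banded matrix below is a generic stand-in for the scheme's actual system matrix:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu
from scipy.sparse.csgraph import reverse_cuthill_mckee

n = 200
# Generic banded-sparse stand-in for the Newmark-Beta-FDTD system matrix
A = sp.diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n), format="csr")

# Reverse Cuthill-McKee reordering compresses the bandwidth before factoring
perm = reverse_cuthill_mckee(A, symmetric_mode=True)
Ap = A[perm, :][:, perm].tocsc()

lu = splu(Ap)                     # LU decomposition performed only once

def solve(b):
    """Reuse the stored factors for every new right-hand side (time step)."""
    x = np.empty(n)
    x[perm] = lu.solve(b[perm])   # solve in the reordered space, then undo
    return x

x = solve(np.ones(n))
```

Because the coefficient matrix never changes, every subsequent time step costs only a cheap triangular back-substitution instead of a fresh factorization.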
Deconvolution of mixing time series on a graph
Blocker, Alexander W.; Airoldi, Edoardo M.
2013-01-01
In many applications we are interested in making inference on latent time series from indirect measurements, which are often low-dimensional projections resulting from mixing or aggregation. Positron emission tomography, super-resolution, and network traffic monitoring are some examples. Inference in such settings requires solving a sequence of ill-posed inverse problems, yt = Axt, where the projection mechanism provides information on A. We consider problems in which A specifies mixing on a graph of time series that are bursty and sparse. We develop a multilevel state-space model for mixing time series and an efficient approach to inference. A simple model is used to calibrate regularization parameters that lead to efficient inference in the multilevel state-space model. We apply this method to the problem of estimating point-to-point traffic flows on a network from aggregate measurements. Our solution outperforms existing methods for this problem, and our two-stage approach suggests an efficient inference strategy for multilevel models of multivariate time series. PMID:25309135
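Each time step poses an ill-posed system y_t = A x_t. A minimal regularized least-squares sketch of one such solve (the routing matrix and flows below are hypothetical, and the paper's multilevel state-space model replaces this simple ridge penalty):

```python
import numpy as np

# Hypothetical routing matrix: 3 aggregate link measurements of 5 latent flows
A = np.array([[1.0, 1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 1.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0, 1.0]])
x_true = np.array([4.0, 0.0, 7.0, 0.0, 2.0])    # bursty, sparse flows
y = A @ x_true                                   # observed aggregates

# Ridge-regularized estimate: argmin ||y - A x||^2 + lam * ||x||^2.
# Regularization is what makes the underdetermined solve well-posed.
lam = 0.1
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(5), A.T @ y)
```

With 3 equations in 5 unknowns the unregularized problem has infinitely many solutions; the penalty selects a unique small-norm estimate, the role played in the paper by the calibrated prior.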
Front panel engineering with CAD simulation tool
NASA Astrophysics Data System (ADS)
Delacour, Jacques; Ungar, Serge; Mathieu, Gilles; Hasna, Guenther; Martinez, Pascal; Roche, Jean-Christophe
1999-04-01
The progress made recently in display technology covers many fields of application. The specification of radiance, colorimetry and lighting efficiency creates new challenges for designers. Photometric design is limited by the ability to correctly predict the result of a lighting system, in order to save the costs and time of building multiple prototypes or breadboard benches. The second step of the research carried out by the company OPTIS is to propose an optimization method for lighting systems, developed in the software SPEOS. The main features required of the tool include a CAD interface, to enable fast and efficient transfer between mechanical and light design software, source modeling, a light transfer model and an optimization tool. The CAD interface is mainly a transfer prototype, which is not the subject here. Photometric simulation is efficiently achieved by using measured source encoding and simulation by the Monte Carlo method. Today, the advantages and limitations of the Monte Carlo method are well known. Noise reduction requires a long calculation time, which increases with the complexity of the display panel. A successful optimization is difficult to achieve, because each optimization pass includes a Monte Carlo simulation and therefore a long calculation time. The problem was initially defined as an engineering method of study. Experience shows that understanding and mastering the phenomenon of light transfer is limited by the complexity of non-sequential propagation, so the engineer must call on a simulation and optimization tool. The main requirement for efficient optimization is a quick method for simulating light transfer. Much work has been done in this area and some interesting results can be observed.
It must be said that the Monte Carlo method wastes time calculating results and information that are not required for the simulation, and a low-efficiency transfer system costs a great deal of time. More generally, light transfer can be simulated efficiently when the integrated result is composed of elementary sub-results that involve quick, analytically calculated intersections. Two axes of research thus appear: quick integration, and quick calculation of geometric intersections. The first brings some general solutions that are also valid for multi-reflection systems; the second requires deep thinking about the intersection calculation. An interesting approach is the subdivision of space into voxels, a method of 3D division of space adapted to the objects and their locations. Experimental software has been developed to validate the method. The gain is particularly high in complex systems, and an important reduction in calculation time has been achieved.
NASA Technical Reports Server (NTRS)
Kwak, Dochan; Kiris, C.; Smith, Charles A. (Technical Monitor)
1998-01-01
The performance of two commonly used numerical procedures, one based on the artificial compressibility method and the other on the pressure projection method, is compared. These formulations are selected primarily because they are designed for three-dimensional applications. The computational procedures are compared by obtaining steady-state solutions of a wake vortex and unsteady solutions of a curved duct flow. For steady computations, artificial compressibility was very efficient in terms of computing time and robustness. For an unsteady flow requiring a small physical time step, the pressure projection method was found to be computationally more efficient than the artificial compressibility method. This comparison is intended to give some basis for selecting a method or a flow solution code for large three-dimensional applications where computing resources become a critical issue.
NASA Astrophysics Data System (ADS)
Schwarz, Karsten; Rieger, Heiko
2013-03-01
We present an efficient Monte Carlo method to simulate reaction-diffusion processes with spatially varying particle annihilation or transformation rates as it occurs for instance in the context of motor-driven intracellular transport. Like Green's function reaction dynamics and first-passage time methods, our algorithm avoids small diffusive hops by propagating sufficiently distant particles in large hops to the boundaries of protective domains. Since for spatially varying annihilation or transformation rates the single particle diffusion propagator is not known analytically, we present an algorithm that generates efficiently either particle displacements or annihilations with the correct statistics, as we prove rigorously. The numerical efficiency of the algorithm is demonstrated with an illustrative example.
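For context, the small-hop baseline that such methods accelerate can be sketched directly: a Brownian particle with a position-dependent annihilation rate k(x), where annihilation is decided at each step by thinning. All parameters below are illustrative, and the paper's protective-domain propagation replaces exactly these many tiny hops:

```python
import math, random

def survive_small_steps(k, x0=0.0, D=1.0, dt=1e-3, t_max=1.0,
                        rng=random.Random(7)):
    """Small-hop baseline: a Brownian particle with spatially varying
    annihilation rate k(x); each step it is annihilated with probability
    1 - exp(-k(x) * dt) (thinning). Returns (survived, time)."""
    x, t = x0, 0.0
    while t < t_max:
        if rng.random() < 1.0 - math.exp(-k(x) * dt):
            return False, t                      # annihilated at time t
        x += math.sqrt(2.0 * D * dt) * rng.gauss(0.0, 1.0)
        t += dt
    return True, t_max                           # particle survived

# Sanity check: for constant k = 1 the survival probability over t_max = 1
# should approach exp(-1) ~ 0.368.
p = sum(survive_small_steps(lambda x: 1.0)[0] for _ in range(2000)) / 2000
```

The cost of this baseline scales with 1/dt per particle, which is the inefficiency the large-hop, protective-domain algorithm in the text is designed to remove.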
A fast sequence assembly method based on compressed data structures.
Liang, Peifeng; Zhang, Yancong; Lin, Kui; Hu, Jinglu
2014-01-01
Assembling a large genome from next-generation sequencing reads requires large computer memory and a long execution time. To reduce these requirements, a memory- and time-efficient assembler is presented that applies the FM-index in JR-Assembler; it is called FMJ-Assembler, where FM stands for the FM-index (built on the Burrows-Wheeler transform, BWT) and J for jumping extension. The FMJ-Assembler uses an expanded FM-index and the BWT to compress the read data to save memory, while the jumping extension method makes it faster in CPU time. An extensive comparison of the FMJ-Assembler with current assemblers shows that it achieves better or comparable overall assembly quality while requiring less memory and CPU time. These advantages indicate that the FMJ-Assembler will be an efficient assembly method for next-generation sequencing technology.
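The core FM-index operation, backward search over the BWT, can be sketched in a few lines. This is a naive, uncompressed version for clarity; real assemblers use suffix-array construction and sampled occurrence tables rather than sorted rotations and repeated counting:

```python
def bwt(s):
    """Burrows-Wheeler transform via sorted rotations ('$' = sentinel)."""
    s += "$"
    rots = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rots)

def fm_count(bwt_str, pattern):
    """Backward search on the FM-index: count occurrences of pattern."""
    # C[c]: number of characters in the text strictly smaller than c
    chars = sorted(set(bwt_str))
    C, total = {}, 0
    for c in chars:
        C[c] = total
        total += bwt_str.count(c)

    def occ(c, i):
        """Occurrences of c in bwt_str[:i] (naive; sampled in practice)."""
        return bwt_str[:i].count(c)

    lo, hi = 0, len(bwt_str)
    for c in reversed(pattern):          # extend the match right-to-left
        if c not in C:
            return 0
        lo = C[c] + occ(c, lo)
        hi = C[c] + occ(c, hi)
        if lo >= hi:
            return 0
    return hi - lo
```

For example, `fm_count(bwt("banana"), "ana")` returns 2: the search narrows a suffix-array interval one pattern character at a time, touching only the compressed index, never the original text.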
Kieran, Maríosa; Cleary, Mary; De Brún, Aoife; Igoe, Aileen
2017-10-01
To improve efficiency, reduce interruptions and reduce the time taken to complete oral drug rounds. Lean Six Sigma methods were applied to improve drug round efficiency using a pre- and post-intervention design. A 20-bed orthopaedic ward in a large teaching hospital in Ireland. Pharmacy, nursing and quality improvement staff. A multifaceted intervention was designed, including changes in processes related to drug trolley organization and drug supply planning. A communications campaign aimed at reducing interruptions during nurse-led drug rounds was also developed and implemented. Outcome measures were the average number of interruptions, the average drug round time and the variation in time taken to complete the drug round. At baseline, the oral drug round took an average of 125 min. Following application of Lean Six Sigma methods, the average drug round time decreased by 51 min. The average number of interruptions per drug round fell from 12 at baseline to 11 following the intervention, with a 75% reduction in drug supply interruptions. Lean Six Sigma methodology was successfully employed to reduce interruptions and the time taken to complete the oral drug round.
New Diamond Color Center for Quantum Communication
NASA Astrophysics Data System (ADS)
Huang, Ding; Rose, Brendon; Tyryshkin, Alexei; Sangtawesin, Sorawis; Srinivasan, Srikanth; Twitchen, Daniel; Markham, Matthew; Edmonds, Andrew; Gali, Adam; Stacey, Alastair; Wang, Wuyi; D'Haenens-Johansson, Ulrika; Zaitsev, Alexandre; Lyon, Stephen; de Leon, Nathalie
2017-04-01
Color centers in diamond are attractive for quantum communication applications because of their long electron spin coherence times and efficient optical transitions. Previous demonstrations of color centers as solid-state spin qubits focused primarily on centers that exhibit either long coherence times or highly efficient optical interfaces, but not both. Recently, we developed a method to stabilize the neutral charge state of the silicon-vacancy center in diamond (SiV0) with high conversion efficiency. We observe spin relaxation times exceeding 1 minute and spin coherence times of 1 ms for SiV0 centers. Additionally, the SiV0 center has >90% of its emission in its zero-phonon line and a narrow inhomogeneous optical linewidth. The combination of a long spin coherence time and an efficient optical interface makes the SiV0 center a promising candidate for applications in long-distance quantum communication.
Spectrometer Sensitivity Investigations on the Spectrometric Oil Analysis Program.
1983-04-22
Table of contents (excerpt): Acid Dissolution Method (ADM); Analysis of Samples; Particle Transport Efficiency of the Rotating Disk; A/E35U-3 Acid Dissolution Method; Burn Time; Effect of Burn Time; Direct Sample Introduction.
Gauss-Seidel Iterative Method as a Real-Time Pile-Up Solver of Scintillation Pulses
NASA Astrophysics Data System (ADS)
Novak, Roman; Vencelj, Matjaž
2009-12-01
The pile-up rejection in nuclear spectroscopy has been confronted recently by several pile-up correction schemes that compensate for distortions of the signal and subsequent energy spectra artifacts as the counting rate increases. We study here a real-time capability of the event-by-event correction method, which at the core translates to solving many sets of linear equations. Tight time limits and constrained front-end electronics resources make well-known direct solvers inappropriate. We propose a novel approach based on the Gauss-Seidel iterative method, which turns out to be a stable and cost-efficient solution to improve spectroscopic resolution in the front-end electronics. We show the method convergence properties for a class of matrices that emerge in calorimetric processing of scintillation detector signals and demonstrate the ability of the method to support the relevant resolutions. The sole iteration-based error component can be brought below the sliding window induced errors in a reasonable number of iteration steps, thus allowing real-time operation. An area-efficient hardware implementation is proposed that fully utilizes the method's inherent parallelism.
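A minimal Gauss-Seidel solver illustrates the core iteration: sweep through the equations, always using the freshest values, which is cheap and maps well onto front-end hardware. The 3×3 overlap matrix below is a hypothetical pile-up system, not taken from the paper:

```python
import numpy as np

def gauss_seidel(A, b, iters=25):
    """Gauss-Seidel iterations for A x = b: update each unknown in turn
    using the most recently computed values of the others."""
    x = np.zeros_like(b)
    for _ in range(iters):
        for i in range(len(b)):
            s = A[i] @ x - A[i, i] * x[i]   # off-diagonal contribution
            x[i] = (b[i] - s) / A[i, i]
    return x

# Hypothetical pile-up system: diagonally dominant overlap matrix for
# three partially overlapping scintillation pulses.
A = np.array([[1.0, 0.3, 0.1],
              [0.3, 1.0, 0.3],
              [0.1, 0.3, 1.0]])
b = np.array([1.2, 1.9, 0.8])
energies = gauss_seidel(A, b)
```

Diagonal dominance of the overlap matrix guarantees convergence, and each sweep needs only multiply-accumulate operations and one division per unknown, which is what makes an area-efficient hardware implementation feasible.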
Strategies for improved efficiency when implementing plant vitrification techniques
USDA-ARS?s Scientific Manuscript database
Cryopreservation technologies allow vegetatively propagated genetic resources to be preserved for extended lengths of time. Once successful methods have been established, there is a significant time investment to cryopreserve gene bank collections. Our research seeks to identify methods that could i...
Ren, Qiang; Nagar, Jogender; Kang, Lei; Bian, Yusheng; Werner, Ping; Werner, Douglas H
2017-05-18
A highly efficient numerical approach for simulating the wideband optical response of nano-architectures comprised of Drude-Critical Points (DCP) media (e.g., gold and silver) is proposed and validated through comparing with commercial computational software. The kernel of this algorithm is the subdomain level discontinuous Galerkin time domain (DGTD) method, which can be viewed as a hybrid of the spectral-element time-domain method (SETD) and the finite-element time-domain (FETD) method. An hp-refinement technique is applied to decrease the Degrees-of-Freedom (DoFs) and computational requirements. The collocated E-J scheme facilitates solving the auxiliary equations by converting the inversions of matrices to simpler vector manipulations. A new hybrid time stepping approach, which couples the Runge-Kutta and Newmark methods, is proposed to solve the temporal auxiliary differential equations (ADEs) with a high degree of efficiency. The advantages of this new approach, in terms of computational resource overhead and accuracy, are validated through comparison with well-known commercial software for three diverse cases, which cover both near-field and far-field properties with plane wave and lumped port sources. The presented work provides the missing link between DCP dispersive models and FETD and/or SETD based algorithms. It is a competitive candidate for numerically studying the wideband plasmonic properties of DCP media.
NASA Astrophysics Data System (ADS)
Sulc, Miroslav; Hernandez, Henar; Martinez, Todd J.; Vanicek, Jiri
2014-03-01
We recently showed that the Dephasing Representation (DR) provides an efficient tool for computing ultrafast electronic spectra and that cellularization yields further acceleration [M. Šulc and J. Vaníček, Mol. Phys. 110, 945 (2012)]. Here we focus on increasing its accuracy by first implementing an exact Gaussian basis method (GBM) combining the accuracy of quantum dynamics and efficiency of classical dynamics. The DR is then derived together with ten other methods for computing time-resolved spectra with intermediate accuracy and efficiency. These include the Gaussian DR (GDR), an exact generalization of the DR, in which trajectories are replaced by communicating frozen Gaussians evolving classically with an average Hamiltonian. The methods are tested numerically on time correlation functions and time-resolved stimulated emission spectra in the harmonic potential, pyrazine S0/S1 model, and quartic oscillator. Both the GBM and the GDR are shown to increase the accuracy of the DR. Surprisingly, in chaotic systems the GDR can outperform the presumably more accurate GBM, in which the two bases evolve separately. This research was supported by the Swiss NSF Grant No. 200021_124936/1 and NCCR Molecular Ultrafast Science & Technology (MUST), and by the EPFL.
Fluorescence imaging of the nanoparticles modified with indocyanine green
NASA Astrophysics Data System (ADS)
Gareev, K. G.; Babikova, K. Y.; Postnov, V. N.; Naumisheva, E. B.; Korolev, D. V.
2017-11-01
A comparative study of silica, magnetite, and magnetite-silica nanoparticles modified with fluorescent dyes by gas-phase and liquid-phase methods was conducted. At comparable fluorescent-dye content, as determined by a spectrophotometric method, nanoparticles modified with fluorescein exhibited up to 1000 times greater overall luminous efficiency. It was found that magnetic nanoparticles are characterized by a lower luminous efficiency than silica particles; at the same time, magnetite particles are most effective when modified with fluorescein, and magnetite-silica particles when modified with indocyanine green.
Convergence Acceleration of a Navier-Stokes Solver for Efficient Static Aeroelastic Computations
NASA Technical Reports Server (NTRS)
Obayashi, Shigeru; Guruswamy, Guru P.
1995-01-01
New capabilities have been developed for a Navier-Stokes solver to perform steady-state simulations more efficiently. The flow solver for solving the Navier-Stokes equations is based on a combination of the lower-upper factored symmetric Gauss-Seidel implicit method and the modified Harten-Lax-van Leer-Einfeldt upwind scheme. A numerically stable and efficient pseudo-time-marching method is also developed for computing steady flows over flexible wings. Results are demonstrated for transonic flows over rigid and flexible wings.
Implementation of Preconditioned Dual-Time Procedures in OVERFLOW
NASA Technical Reports Server (NTRS)
Pandya, Shishir A.; Venkateswaran, Sankaran; Pulliam, Thomas H.; Kwak, Dochan (Technical Monitor)
2003-01-01
Preconditioning methods have become the method of choice for the solution of flowfields involving the simultaneous presence of low Mach and transonic regions. It is well known that these methods are important for ensuring accurate numerical discretization as well as convergence efficiency over various operating conditions such as low Mach number, low Reynolds number and high Strouhal numbers. For unsteady problems, the preconditioning is introduced within a dual-time framework wherein the physical time-derivatives are used to march the unsteady equations and the preconditioned time-derivatives are used for purposes of numerical discretization and iterative solution. In this paper, we describe the implementation of the preconditioned dual-time methodology in the OVERFLOW code. To demonstrate the performance of the method, we employ both simple and practical unsteady flowfields, including vortex propagation in a low Mach number flow, the flowfield of an impulsively started plate (Stokes' first problem) and a cylindrical jet in a low Mach number crossflow with ground effect. All the results demonstrate that the preconditioning algorithm is responsible for improvements to both numerical accuracy and convergence efficiency and, thereby, enables low Mach number unsteady computations to be performed at a fraction of the cost of traditional time-marching methods.
Mixed Reality Meets Pharmaceutical Development.
Forrest, William P; Mackey, Megan A; Shah, Vivek M; Hassell, Kerry M; Shah, Prashant; Wylie, Jennifer L; Gopinath, Janakiraman; Balderhaar, Henning; Li, Li; Wuelfing, W Peter; Helmy, Roy
2017-12-01
As science evolves, the need for more efficient and innovative knowledge transfer capabilities becomes evident. Advances in drug discovery and delivery sciences have directly impacted the pharmaceutical industry, though the added complexities have not shortened the development process. These added complexities also make it difficult for scientists to rapidly and effectively transfer knowledge to offset the lengthened drug development timelines. While webcams, camera phones, and iPads have been explored as potential new methods of real-time information sharing, their non-hands-free nature and lack of viewer and observer points of view render them unsuitable for the R&D laboratory or manufacturing setting. As an alternative solution, the Microsoft HoloLens mixed-reality headset was evaluated as a more efficient, hands-free method of knowledge transfer and information sharing. After completing a traditional method transfer between 3 R&D sites (Rahway, NJ; West Point, PA; and Schnachen, Switzerland), a retrospective analysis of efficiency gain was performed through the comparison of a mock method transfer between the NJ and PA sites using the HoloLens. The results demonstrated a minimum 10-fold gain in efficiency, reflecting savings in time and cost and the ability to conduct real-time data analysis and discussion. In addition, other use cases were evaluated involving vendor and contract research/manufacturing organizations. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
Methods of Efficient Study Habits and Physics Learning
NASA Astrophysics Data System (ADS)
Zettili, Nouredine
2010-02-01
We want to discuss the methods of efficient study habits and how they can be used by students to help them improve learning physics. In particular, we deal with the most efficient techniques needed to help students improve their study skills. We focus on topics such as the skills of how to develop long term memory, how to improve concentration power, how to take class notes, how to prepare for and take exams, how to study scientific subjects such as physics. We argue that the students who conscientiously use the methods of efficient study habits achieve higher results than those students who do not; moreover, a student equipped with the proper study skills will spend much less time to learn a subject than a student who has no good study habits. The underlying issue here is not the quantity of time allocated to the study efforts by the students, but the efficiency and quality of actions so that the student can function at peak efficiency. These ideas were developed as part of Project IMPACTSEED (IMproving Physics And Chemistry Teaching in SEcondary Education), an outreach grant funded by the Alabama Commission on Higher Education. This project is motivated by a major pressing local need: A large number of high school physics teachers teach out of field.
Stable and Spectrally Accurate Schemes for the Navier-Stokes Equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jia, Jun; Liu, Jie
2011-01-01
In this paper, we present an accurate, efficient and stable numerical method for the incompressible Navier-Stokes equations (NSEs). The method is based on (1) an equivalent pressure Poisson equation formulation of the NSE with proper pressure boundary conditions, which facilitates the design of high-order and stable numerical methods, and (2) the Krylov deferred correction (KDC) accelerated method of lines transpose (MoL^T), which is very stable, efficient, and of arbitrary order in time. Numerical tests with known exact solutions in three dimensions show that the new method is spectrally accurate in time, and a numerical order of convergence of 9 was observed. Two-dimensional computational results of flow past a cylinder and flow in a bifurcated tube are also reported.
Characterization of Fissile Assemblies Using Low-Efficiency Detection Systems
Chapline, George F.; Verbeke, Jerome M.
2017-02-02
Here, we have investigated the possibility that the amount, chemical form, multiplication, and shape of the fissile material in an assembly can be passively assayed using scintillator detection systems by measuring only the fast neutron pulse height distribution and the distribution of time intervals Δt between fast neutrons. We have previously demonstrated that the alpha-ratio can be obtained from the observed pulse height distribution for fast neutrons. In this paper we report that when the distribution of time intervals is plotted as a function of logΔt, the position of the correlated neutron peak is nearly independent of detector efficiency and determines the internal relaxation rate for fast neutrons. If this information is combined with knowledge of the alpha-ratio, then the position of the minimum between the correlated and uncorrelated peaks can be used to rapidly estimate the mass, multiplication, and shape of fissile material. This method does not require a priori knowledge of either the efficiency for neutron detection or the alpha-ratio. Although our method neglects 3-neutron correlations, we have used previously obtained experimental data for metallic and oxide forms of Pu to demonstrate that our method yields good estimates for multiplications as large as 2, and that the only constraint on detector efficiency/observation time is that a peak in the interval time distribution due to correlated neutrons is visible.
Multiscale Space-Time Computational Methods for Fluid-Structure Interactions
2015-09-13
prescribed fully or partially, is from an actual locust, extracted from high-speed, multi-camera video recordings of the locust in a wind tunnel. ... With creative methods for coupling the fluid and structure, we can increase the scope and efficiency of the FSI modeling. Multiscale methods, which now play an important role in computational mathematics, can also increase the accuracy and efficiency of the computer modeling techniques. The main ...
Distributed collaborative response surface method for mechanical dynamic assembly reliability design
NASA Astrophysics Data System (ADS)
Bai, Guangchen; Fei, Chengwei
2013-11-01
Because of the randomness of the many impact factors influencing the dynamic assembly relationships of complex machinery, the reliability analysis of dynamic assembly relationships needs to be carried out from a probabilistic perspective. To improve the accuracy and efficiency of dynamic assembly relationship reliability analysis, a mechanical dynamic assembly reliability (MDAR) theory and a distributed collaborative response surface method (DCRSM) are proposed. The mathematical model of the DCRSM is established based on the quadratic response surface function and verified by the assembly relationship reliability analysis of aeroengine high pressure turbine (HPT) blade-tip radial running clearance (BTRRC). Through comparison of the DCRSM, the traditional response surface method (RSM), and the Monte Carlo method (MCM), the results show that the DCRSM is not only able to accomplish the computational task that is impossible for the other methods when the number of simulations exceeds 100,000, but its computational precision is also basically consistent with the MCM and improved by 0.40-4.63% relative to the RSM; furthermore, the computational efficiency of the DCRSM is about 188 times that of the MCM and 55 times that of the RSM under 10,000 simulations. The DCRSM is demonstrated to be a feasible and effective approach for markedly improving the computational efficiency and accuracy of MDAR analysis. Thus, the proposed research provides a promising theory and method for MDAR design and optimization, and opens a novel research direction of probabilistic analysis for developing high-performance, high-reliability aeroengines.
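The quadratic-response-surface idea underlying such methods, fitting a cheap polynomial surrogate to a handful of expensive runs and then performing the Monte Carlo sampling on the surrogate, can be sketched as follows; the response function, sample counts, and failure threshold are illustrative assumptions, not the BTRRC model:

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_response(X):
    # Stand-in for an expensive dynamic analysis (here itself quadratic).
    return (1.0 + 0.5 * X[:, 0] - 0.3 * X[:, 1]
            + 0.2 * X[:, 0] * X[:, 1] + 0.1 * X[:, 0] ** 2)

# Design points: a small number of "expensive" runs.
X = rng.normal(size=(50, 2))
y = expensive_response(X)

def basis(X):
    # Quadratic response surface basis: 1, x1, x2, x1^2, x2^2, x1*x2.
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

# Least-squares fit of the surrogate coefficients.
coef, *_ = np.linalg.lstsq(basis(X), y, rcond=None)

# Cheap Monte Carlo on the surrogate: 100 000 samples cost almost nothing.
Xmc = rng.normal(size=(100_000, 2))
y_surr = basis(Xmc) @ coef
p_fail = np.mean(y_surr < 0.0)  # probability the response drops below a threshold
```

Because the surrogate evaluation is a single matrix-vector product, very large Monte Carlo sample counts become affordable, which is the efficiency mechanism the abstract quantifies.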
NASA Astrophysics Data System (ADS)
Clark, Martyn P.; Kavetski, Dmitri
2010-10-01
A major neglected weakness of many current hydrological models is the numerical method used to solve the governing model equations. This paper thoroughly evaluates several classes of time stepping schemes in terms of numerical reliability and computational efficiency in the context of conceptual hydrological modeling. Numerical experiments are carried out using 8 distinct time stepping algorithms and 6 different conceptual rainfall-runoff models, applied in a densely gauged experimental catchment, as well as in 12 basins with diverse physical and hydroclimatic characteristics. Results show that, over vast regions of the parameter space, the numerical errors of fixed-step explicit schemes commonly used in hydrology routinely dwarf the structural errors of the model conceptualization. This substantially degrades model predictions, but also, disturbingly, generates fortuitously adequate performance for parameter sets where numerical errors compensate for model structural errors. Simply running fixed-step explicit schemes with shorter time steps provides a poor balance between accuracy and efficiency: in some cases daily-step adaptive explicit schemes with moderate error tolerances achieved comparable or higher accuracy than 15 min fixed-step explicit approximations but were nearly 10 times more efficient. From the range of simple time stepping schemes investigated in this work, the fixed-step implicit Euler method and the adaptive explicit Heun method emerge as good practical choices for the majority of simulation scenarios. In combination with the companion paper, where impacts on model analysis, interpretation, and prediction are assessed, this two-part study vividly highlights the impact of numerical errors on critical performance aspects of conceptual hydrological models and provides practical guidelines for robust numerical implementation.
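The trade-off the study quantifies, fixed-step explicit schemes versus adaptive explicit schemes with error control, can be illustrated on a toy linear-reservoir ODE; the model, rates, and tolerances here are illustrative assumptions, not the paper's rainfall-runoff models:

```python
import numpy as np

def f(t, s, k=5.0, p=1.0):
    # Toy linear reservoir: dS/dt = P - k*S.
    return p - k * s

def fixed_euler(s0, t_end, dt):
    t, s = 0.0, s0
    while t < t_end - 1e-12:
        s += dt * f(t, s)
        t += dt
    return s

def adaptive_heun(s0, t_end, tol=1e-6, dt=0.1):
    """Heun's method with step control from the Euler/Heun discrepancy."""
    t, s = 0.0, s0
    while t < t_end - 1e-12:
        dt = min(dt, t_end - t)
        k1 = f(t, s)
        k2 = f(t + dt, s + dt * k1)
        err = 0.5 * dt * abs(k2 - k1)        # local error estimate
        if err <= tol:
            s += 0.5 * dt * (k1 + k2)        # accept the Heun step
            t += dt
        dt *= min(2.0, max(0.1, 0.9 * (tol / (err + 1e-15)) ** 0.5))
    return s

exact = 0.2 + 0.8 * np.exp(-5.0)   # analytic S(1) for S(0)=1, P=1, k=5
s_adapt = adaptive_heun(1.0, 1.0)
```

With a coarse fixed step the explicit Euler result is badly wrong (the step violates the stability limit), while the adaptive scheme concentrates small steps only where the solution changes fast, which mirrors the accuracy/efficiency balance the abstract describes.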
NASA Astrophysics Data System (ADS)
Zeng, Lang; He, Yu; Povolotskyi, Michael; Liu, XiaoYan; Klimeck, Gerhard; Kubis, Tillmann
2013-06-01
In this work, the low rank approximation concept is extended to the non-equilibrium Green's function (NEGF) method to achieve a very efficient approximated algorithm for coherent and incoherent electron transport. This new method is applied to inelastic transport in various semiconductor nanodevices. Detailed benchmarks with exact NEGF solutions show (1) very good agreement between approximated and exact NEGF results, (2) a significant reduction of the required memory, and (3) a large reduction of the computational time (speed-ups as high as 150 times are observed). A non-recursive solution of the inelastic NEGF transport equations for a 1000 nm long resistor on standard hardware nicely illustrates the capability of this new method.
Goeman, Valerie R; Tinkler, Stacy H; Hammac, G Kenitra; Ruple, Audrey
2018-04-01
Environmental surveillance for Salmonella enterica can be used for early detection of contamination; thus routine sampling is an integral component of infection control programs in hospital environments. At the Purdue University Veterinary Teaching Hospital (PUVTH), the technique regularly employed in the large animal hospital for sample collection uses sterile gauze sponges for environmental sampling, which has proven labor-intensive and time-consuming. Alternative sampling methods use Swiffer brand electrostatic wipes for environmental sample collection, which are reportedly effective and efficient. It was hypothesized that use of Swiffer wipes for sample collection would be more efficient and less costly than the use of gauze sponges. A head-to-head comparison between the 2 sampling methods was conducted in the PUVTH large animal hospital and relative agreement, cost-effectiveness, and sampling efficiency were compared. There was fair agreement in culture results between the 2 sampling methods, but Swiffer wipes required less time and less physical effort to collect samples and were more cost-effective.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Y; Huang, Z; Lo, S
2015-06-15
Purpose: To improve Gamma Knife SRS treatment efficiency for brain metastases and to compare the differences in treatment time and radiobiological effects between two planning methods: automatic filling and manual placement of shots with inverse planning. Methods: T1-weighted MRI images with gadolinium contrast from five patients with a single brain metastatic lesion were used in this retrospective study. Among them, two were from primary breast cancer, two from primary melanoma, and one from primary prostate cancer. For each patient, two plans were generated in Leksell GammaPlan 10.1.1 for radiosurgical treatment with a Leksell Gamma Knife Perfexion machine: one with automatic filling, automatic sector configuration, and inverse optimization (Method 1); and the other with manual placement of shots, manual setup of collimator sizes, manual setup of sector blocking, and inverse optimization (Method 2). Dosimetric quality of the plans was evaluated with the parameters Coverage, Selectivity, Gradient-Index, and DVH. Beam-on Time, Number-of-Shots, and Tumor Control Probability (TCP) were compared for the two plans while keeping their dosimetric quality very similar. Relative reductions of Beam-on Time and Number-of-Shots were calculated as ratios between the two plans and used for quantitative analysis. Results: With very similar dosimetric and radiobiological plan quality, plans created with Method 2 had significantly reduced treatment time. Relative reduction of Beam-on Time ranged from 20% to 51% (median: 29%, p=0.001), and reduction of Number-of-Shots ranged from 5% to 67% (median: 40%, p=0.0002), respectively. Time of plan creation for Methods 1 and 2 was similar, approximately 20 minutes, excluding the time for tumor delineation. TCP calculated for the tumors from differential DVHs did not show a significant difference between the two plans (p=0.35).
Conclusion: The method of manual setup combined with inverse optimization in LGP for treatment of brain metastatic lesions with the Perfexion can achieve significantly higher time efficiency without degrading treatment quality.
Snipas, Mindaugas; Pranevicius, Henrikas; Pranevicius, Mindaugas; Pranevicius, Osvaldas; Paulauskas, Nerijus; Bukauskas, Feliksas F
2015-01-01
The primary goal of this work was to study the advantages of numerical methods used for the creation of continuous-time Markov chain (CTMC) models of voltage gating of gap junction (GJ) channels composed of connexin protein. This task was accomplished by describing the gating of GJs using the formalism of stochastic automata networks (SANs), which allowed for very efficient building and storing of the infinitesimal generator of the CTMC and produced model matrices with a distinct block structure. This, in turn, allowed us to develop efficient numerical methods for the steady-state solution of the CTMC models and reduced the CPU time needed to solve them by a factor of ~20.
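The steady-state computation at the heart of such CTMC models, solving pi Q = 0 subject to sum(pi) = 1, can be sketched on a toy three-state chain; the generator entries are illustrative, not connexin-specific rates:

```python
import numpy as np

# Infinitesimal generator of a toy 3-state gating chain (rows sum to zero).
Q = np.array([[-2.0,  2.0,  0.0],
              [ 1.0, -3.0,  2.0],
              [ 0.0,  4.0, -4.0]])

# Steady state: pi @ Q = 0 with sum(pi) = 1. Replace one balance
# equation of Q^T by the normalization row and solve the linear system.
A = Q.T.copy()
A[-1, :] = 1.0
b = np.zeros(3)
b[-1] = 1.0
pi = np.linalg.solve(A, b)
```

For generators with the block structure the SAN formalism produces, the same linear system can be attacked blockwise, which is where the reported efficiency gains come from.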
Streaming fragment assignment for real-time analysis of sequencing experiments
Roberts, Adam; Pachter, Lior
2013-01-01
We present eXpress, a software package for highly efficient probabilistic assignment of ambiguously mapping sequenced fragments. eXpress uses a streaming algorithm with linear run time and constant memory use. It can determine abundances of sequenced molecules in real time, and can be applied to ChIP-seq, metagenomics and other large-scale sequencing data. We demonstrate its use on RNA-seq data, showing greater efficiency than other quantification methods. PMID:23160280
NASA Astrophysics Data System (ADS)
Paprocka, I.; Kempa, W. M.; Grabowik, C.; Kalinowski, K.; Krenczyk, D.
2016-08-01
In this paper, a survey of predictive and reactive scheduling methods is carried out in order to evaluate how the ability to predict reliability characteristics influences robustness criteria. The most important reliability characteristics are Mean Time to Failure and Mean Time to Repair. The survey analysis is done for a job shop scheduling problem. The paper answers the question: which method generates robust schedules in the case of a bottleneck failure occurring before, at the beginning of, or after planned maintenance actions? Efficiency of predictive schedules is evaluated using the criteria makespan, total tardiness, flow time, and idle time. Efficiency of reactive schedules is evaluated using a solution robustness criterion and a quality robustness criterion. This paper is a continuation of the research conducted in [1], where the survey of predictive and reactive scheduling methods was done only for small-size scheduling problems.
Development of a novel and highly efficient method of isolating bacteriophages from water.
Liu, Weili; Li, Chao; Qiu, Zhi-Gang; Jin, Min; Wang, Jing-Feng; Yang, Dong; Xiao, Zhong-Hai; Yuan, Zhao-Kang; Li, Jun-Wen; Xu, Qun-Ying; Shen, Zhi-Qiang
2017-08-01
Bacteriophages are widely used in the treatment of drug-resistant bacteria and in the improvement of food safety through bacterial lysis. However, limited investigation of bacteriophages restricts their further application. In this study, a novel and highly efficient method was developed for isolating bacteriophages from water based on electropositive silica gel particles (ESPs). To optimize the ESPs method, we evaluated the eluent type, flow rate, pH, temperature, and inoculation concentration of bacteriophage using bacteriophage f2. Quantitative detection showed that the recovery of the ESPs method reached over 90%. Qualitative detection demonstrated that the ESPs method effectively isolated 70% of extremely low-concentration bacteriophage (10^0 PFU/100 L). Based on host bacteria comprising 33 standard strains and 10 isolated strains, the bacteriophages in 18 water samples collected from three sites in the Tianjin Haihe River Basin were isolated by the ESPs and traditional methods. Results showed that the ESPs method was significantly superior to the traditional method: it isolated 32 strains of bacteriophage, whereas the traditional method isolated 15 strains. The sample isolation efficiency and bacteriophage isolation efficiency of the ESPs method were 3.28 and 2.13 times higher than those of the traditional method. The developed ESPs method is characterized by high isolation efficiency, efficient handling of large water sample volumes, and low requirements on water quality. Copyright © 2017. Published by Elsevier B.V.
Improving the efficiency of a chemotherapy day unit: applying a business approach to oncology.
van Lent, Wineke A M; Goedbloed, N; van Harten, W H
2009-03-01
To improve the efficiency of a hospital-based chemotherapy day unit (CDU). The CDU was benchmarked with two other CDUs to identify their attainable performance levels for efficiency, and causes for differences. Furthermore, an in-depth analysis using a business approach, called lean thinking, was performed. An integrated set of interventions was implemented, among them a new planning system. The results were evaluated using pre- and post-measurements. We observed 24% growth of treatments and bed utilisation, a 12% increase of staff member productivity and an 81% reduction of overtime. The used method improved process design and led to increased efficiency and a more timely delivery of care. Thus, the business approaches, which were adapted for healthcare, were successfully applied. The method may serve as an example for other oncology settings with problems concerning waiting times, patient flow or lack of beds.
GPU-accelerated element-free reverse-time migration with Gauss points partition
NASA Astrophysics Data System (ADS)
Zhou, Zhen; Jia, Xiaofeng; Qiang, Xiaodong
2018-06-01
An element-free method (EFM) has been demonstrated successfully in elasticity, heat conduction and fatigue crack growth problems. We present the theory of EFM and its numerical applications in seismic modelling and reverse time migration (RTM). Compared with the finite difference method and the finite element method, the EFM has unique advantages: (1) independence of grids in computation and (2) lower expense and more flexibility (because only the information of the nodes and the boundary of the concerned area is required). However, in EFM, due to improper computation and storage of some large sparse matrices, such as the mass matrix and the stiffness matrix, the method is difficult to apply to seismic modelling and RTM for a large velocity model. To solve the problem of storage and computation efficiency, we propose a concept of Gauss points partition and utilise the graphics processing unit to improve the computational efficiency. We employ the compressed sparse row format to compress the intermediate large sparse matrices and attempt to simplify the operations by solving the linear equations with CULA solver. To improve the computation efficiency further, we introduce the concept of the lumped mass matrix. Numerical experiments indicate that the proposed method is accurate and more efficient than the regular EFM.
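The compressed sparse row (CSR) storage mentioned above keeps only the nonzeros of the large matrices; a minimal hand-rolled sketch, using a tridiagonal matrix as a generic stand-in for the mass/stiffness matrices:

```python
import numpy as np

def to_csr(A):
    """Compress a dense matrix into CSR arrays (data, indices, indptr)."""
    data, indices, indptr = [], [], [0]
    for row in A:
        nz = np.nonzero(row)[0]
        data.extend(row[nz])
        indices.extend(nz)
        indptr.append(len(indices))
    return np.array(data), np.array(indices), np.array(indptr)

def csr_matvec(data, indices, indptr, x):
    """y = A @ x touching only the stored nonzeros."""
    y = np.zeros(len(indptr) - 1)
    for i in range(len(y)):
        lo, hi = indptr[i], indptr[i + 1]
        y[i] = data[lo:hi] @ x[indices[lo:hi]]
    return y

# Sparse stiffness-like matrix: tridiagonal, so only 3n - 2 nonzeros.
n = 6
A = (np.diag(np.full(n, 2.0))
     + np.diag(np.full(n - 1, -1.0), 1)
     + np.diag(np.full(n - 1, -1.0), -1))
data, indices, indptr = to_csr(A)
x = np.arange(1.0, n + 1.0)
y = csr_matvec(data, indices, indptr, x)
```

The row-contiguous layout is also what makes CSR matvecs map well onto GPU threads, one row (or row block) per thread, which is the setting of the abstract.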
A wavefront orientation method for precise numerical determination of tsunami travel time
NASA Astrophysics Data System (ADS)
Fine, I. V.; Thomson, R. E.
2013-04-01
We present a highly accurate and computationally efficient method (herein, the "wavefront orientation method") for determining the travel time of oceanic tsunamis. Based on Huygens principle, the method uses an eight-point grid-point pattern and the most recent information on the orientation of the advancing wave front to determine the time for a tsunami to travel to a specific oceanic location. The method is shown to provide improved accuracy and reduced anisotropy compared with the conventional multiple grid-point method presently in widespread use.
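A simple grid-based travel-time computation in the same spirit, expanding a front from a source over an eight-point neighborhood, can be sketched with Dijkstra's algorithm; this is an illustrative stand-in and does not reproduce the paper's wavefront-orientation correction:

```python
import heapq
import numpy as np

def travel_time(speed, src):
    """First-arrival times over an 8-connected grid (Dijkstra),
    a basic stand-in for Huygens-principle wavefront expansion."""
    ny, nx = speed.shape
    t = np.full((ny, nx), np.inf)
    t[src] = 0.0
    pq = [(0.0, src)]
    moves = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
             if (di, dj) != (0, 0)]
    while pq:
        ti, (i, j) = heapq.heappop(pq)
        if ti > t[i, j]:
            continue                      # stale queue entry
        for di, dj in moves:
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx:
                dist = np.hypot(di, dj)   # grid spacing of 1
                tn = ti + dist * 2.0 / (speed[i, j] + speed[ni, nj])
                if tn < t[ni, nj]:
                    t[ni, nj] = tn
                    heapq.heappush(pq, (tn, (ni, nj)))
    return t

c = np.ones((20, 20))                     # uniform wave speed
t = travel_time(c, (0, 0))
```

Restricting rays to the eight lattice directions is precisely the source of the anisotropy error that the wavefront-orientation method is designed to reduce.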
Toward textbook multigrid efficiency for fully implicit resistive magnetohydrodynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Mark F.; Samtaney, Ravi, E-mail: samtaney@pppl.go; Brandt, Achi
2010-09-01
Multigrid methods can solve some classes of elliptic and parabolic equations to accuracy below the truncation error with a work-cost equivalent to a few residual calculations - so-called 'textbook' multigrid efficiency. We investigate methods to solve the system of equations that arise in time dependent magnetohydrodynamics (MHD) simulations with textbook multigrid efficiency. We apply multigrid techniques such as geometric interpolation, full approximate storage, Gauss-Seidel smoothers, and defect correction for fully implicit, nonlinear, second-order finite volume discretizations of MHD. We apply these methods to a standard resistive MHD benchmark problem, the GEM reconnection problem, and add a strong magnetic guide field, which is a critical characteristic of magnetically confined fusion plasmas. We show that our multigrid methods can achieve near textbook efficiency on fully implicit resistive MHD simulations.
n-Dimensional Discrete Cat Map Generation Using Laplace Expansions.
Wu, Yue; Hua, Zhongyun; Zhou, Yicong
2016-11-01
Different from existing methods that use matrix multiplications and have high computation complexity, this paper proposes an efficient generation method for n-dimensional ([Formula: see text]) Cat maps using Laplace expansions. New parameters are also introduced to control the spatial configurations of the [Formula: see text] Cat matrix. Thus, the proposed method provides an efficient way to mix dynamics of all dimensions at one time. To investigate its implementations and applications, we further introduce a fast implementation algorithm of the proposed method with time complexity O(n^4) and a pseudorandom number generator using the Cat map generated by the proposed method. The experimental results show that, compared with existing generation methods, the proposed method has a larger parameter space and simpler algorithm complexity, generates [Formula: see text] Cat matrices with a lower inner correlation, and thus yields more random and unpredictable outputs of [Formula: see text] Cat maps.
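The building block of such constructions, an integer matrix with determinant 1 acting modulo N, can be illustrated with the classic 2-D cat map; the paper's Laplace-expansion generation of higher-dimensional matrices is not reproduced here:

```python
import numpy as np

# Classic 2-D cat map matrix: any integer matrix with determinant 1
# maps an N x N pixel grid bijectively onto itself.
C = np.array([[1, 1],
              [1, 2]])
C_inv = np.array([[2, -1],
                  [-1, 1]])   # integer inverse, since det(C) = 1

def cat_map(M, points, N, steps=1):
    """Iterate p -> M p (mod N) on grid points (2 x m integer array)."""
    p = points.copy()
    for _ in range(steps):
        p = (M @ p) % N
    return p

N = 101
p0 = np.array([[3], [7]])
p1 = cat_map(C, p0, N)
p_back = cat_map(C_inv, p1, N)   # the inverse map recovers the point
```

The unit determinant is what makes the map invertible modulo N, the property a pseudorandom number generator built on Cat matrices relies on.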
T-L Plane Abstraction-Based Energy-Efficient Real-Time Scheduling for Multi-Core Wireless Sensors.
Kim, Youngmin; Lee, Ki-Seong; Pham, Ngoc-Son; Lee, Sun-Ro; Lee, Chan-Gun
2016-07-08
Energy efficiency is considered as a critical requirement for wireless sensor networks. As more wireless sensor nodes are equipped with multi-cores, there are emerging needs for energy-efficient real-time scheduling algorithms. The T-L plane-based scheme is known to be an optimal global scheduling technique for periodic real-time tasks on multi-cores. Unfortunately, there has been a scarcity of studies on extending T-L plane-based scheduling algorithms to exploit energy-saving techniques. In this paper, we propose a new T-L plane-based algorithm enabling energy-efficient real-time scheduling on multi-core sensor nodes with dynamic power management (DPM). Our approach addresses the overhead of processor mode transitions and reduces fragmentations of the idle time, which are inherent in T-L plane-based algorithms. Our experimental results show the effectiveness of the proposed algorithm compared to other energy-aware scheduling methods on T-L plane abstraction.
NASA Astrophysics Data System (ADS)
Mohebbi, Akbar
2018-02-01
In this paper we propose two fast and accurate numerical methods for the solution of the multidimensional space-fractional Ginzburg-Landau equation (FGLE). To avoid solving a nonlinear system of algebraic equations and to increase accuracy and efficiency, we split the complex problem into simpler sub-problems using the split-step idea. For the homogeneous FGLE, we propose a method with fourth-order accuracy in time and spectral accuracy in space; for the nonhomogeneous case, we introduce another scheme, based on the Crank-Nicolson approach, with second-order accuracy in time. Because the Fourier spectral method is used for the fractional Laplacian operator, the resulting schemes are fully diagonal and easy to code. Numerical results are reported in terms of accuracy, computational order and CPU time to demonstrate the accuracy and efficiency of the proposed methods and to compare the results with analytical solutions. The results show that the present methods are accurate, require little CPU time, and agree well with the theoretical solutions.
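The "fully diagonal" property can be illustrated with a minimal 1D split-step sketch. The model form below (linear growth plus cubic saturation plus a fractional Laplacian with Fourier symbol |k|^alpha) and all coefficient values are assumptions for illustration; the nonlinear substep uses a common frozen-coefficient approximation rather than the paper's fourth-order scheme.

```python
import numpy as np

# Strang split-step sketch for an assumed 1D fractional Ginzburg-Landau model:
#   u_t = u - (1 + i*beta)|u|^2 u - (1 + i*eps)(-Laplacian)^(alpha/2) u.
# The fractional Laplacian is diagonal in Fourier space (symbol |k|^alpha),
# so each linear half-step is a pointwise multiplication.
L, n, alpha, beta, eps = 20.0, 128, 1.5, 0.5, 0.5
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
lin_symbol = 1.0 - (1 + 1j * eps) * np.abs(k) ** alpha

def step(u, dt):
    # half linear step, exact in Fourier space
    u = np.fft.ifft(np.exp(0.5 * dt * lin_symbol) * np.fft.fft(u))
    # full nonlinear step, frozen-coefficient approximation per grid point
    u = u * np.exp(-dt * (1 + 1j * beta) * np.abs(u) ** 2)
    # half linear step
    return np.fft.ifft(np.exp(0.5 * dt * lin_symbol) * np.fft.fft(u))

u = np.exp(-x ** 2).astype(complex)
for _ in range(100):
    u = step(u, 1e-3)
```

No linear or nonlinear algebraic system is ever assembled, which is the efficiency argument the abstract makes.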
Division of methods for counting helminths' eggs and the problem of efficiency of these methods.
Jaromin-Gleń, Katarzyna; Kłapeć, Teresa; Łagód, Grzegorz; Karamon, Jacek; Malicki, Jacek; Skowrońska, Agata; Bieganowski, Andrzej
2017-03-21
From the sanitary and epidemiological standpoint, information concerning the developmental forms of intestinal parasites, especially the eggs of helminths present in the environment (in water, soil, sandpits, sewage sludge and crops watered with wastewater), is very important. The methods described in the relevant literature may be classified in various ways: primarily by the methodology for preparing samples from environmental matrices for analysis, and by the counting methods and the chambers/instruments used for this purpose. In addition, the methods may be classified by how and when the counted individuals are identified, or by whether staining is necessary. Standard methods for the identification of helminths' eggs from environmental matrices are usually characterized by low efficiency, i.e. from 30% to approximately 80%. The efficiency of the method applied may be measured in two ways: using an internal standard, or the 'split/spike' method. When both the efficiency of the method and the number of eggs are measured simultaneously in an examined object, the 'actual' number of eggs may be calculated by multiplying the number of discovered helminth eggs by the inverse of the efficiency.
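The final correction step is simple arithmetic; a minimal sketch, with the function name and example values chosen for illustration only:

```python
def corrected_egg_count(counted, recovery_efficiency):
    """Estimate the 'actual' egg count by multiplying the observed count
    by the inverse of the method's recovery efficiency (standard methods
    recover roughly 0.30-0.80 of spiked eggs)."""
    if not 0.0 < recovery_efficiency <= 1.0:
        raise ValueError("efficiency must lie in (0, 1]")
    return counted / recovery_efficiency

# e.g. 24 eggs counted with a method recovering 60% of spiked eggs
estimate = corrected_egg_count(24, 0.60)
```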
NASA Astrophysics Data System (ADS)
Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.
2017-12-01
We propose efficient single-step formulations for reinitialization and extending algorithms, which are critical components of level-set-based interface-tracking methods. The level-set field is reinitialized with a single-step (non-iterative) "forward tracing" algorithm. A minimum set of cells is defined that describes the interface, and reinitialization employs only data from these cells. Fluid states are extrapolated or extended across the interface by a single-step "backward tracing" algorithm. Both algorithms, which are motivated by analogy to ray tracing, avoid the multiple block-boundary data exchanges that are inevitable for iterative reinitialization and extending approaches within a parallel-computing environment. The single-step algorithms are combined with a multi-resolution conservative sharp-interface method and validated by a wide range of benchmark test cases. We demonstrate that the proposed reinitialization method achieves second-order accuracy in conserving the volume of each phase. The interface location is invariant to reapplication of the single-step reinitialization. Generally, we observe smaller absolute errors than for standard iterative reinitialization on the same grid. The computational efficiency is higher than for standard and typical high-order iterative reinitialization methods: we observe a 2- to 6-times efficiency improvement over the standard method for serial execution. The proposed single-step extending algorithm, which is commonly employed for assigning data to ghost cells with ghost-fluid or conservative interface-interaction methods, shows about a 10-times efficiency improvement over the standard method while maintaining the same accuracy. Despite their simplicity, the proposed algorithms offer an efficient and robust alternative to iterative reinitialization and extending methods for level-set-based multi-phase simulations.
Tao, Guohua; Miller, William H
2012-09-28
An efficient time-dependent (TD) Monte Carlo (MC) importance sampling method has recently been developed [G. Tao and W. H. Miller, J. Chem. Phys. 135, 024104 (2011)] for the evaluation of time correlation functions using the semiclassical (SC) initial value representation (IVR) methodology. In this TD-SC-IVR method, the MC sampling uses information from both the time-evolved phase points and their initial values, and only the "important" trajectories are sampled frequently. Even though the TD-SC-IVR method was shown in some benchmark examples to be much more efficient than the traditional time-independent sampling method (which uses only initial conditions), the calculation of the SC prefactor, which is computationally expensive, especially for large systems, is still required for accepted trajectories. In the present work, we present an approximate implementation of the TD-SC-IVR method that is completely prefactor-free; it gives the time correlation function as a classical-like magnitude function multiplied by a phase function. Application of this approach to flux-flux correlation functions (which yield reaction rate constants) for the benchmark H + H2 system shows very good agreement with exact quantum results. Limitations of the approximate approach are also discussed.
Time-invariant component-based normalization for a simultaneous PET-MR scanner.
Belzunce, M A; Reader, A J
2016-05-07
Component-based normalization is a method used to compensate for the sensitivity of each of the lines of response acquired in positron emission tomography. It models the sensitivity of each line of response as a product of multiple factors, which can be classified as time-invariant, time-variant and acquisition-dependent components. Typical time-variant factors are the intrinsic crystal efficiencies, which need to be updated by a regular normalization scan; failure to do so would in principle generate artifacts in the reconstructed images due to the use of out-of-date time-variant factors. For this reason, an assessment of the variability and impact of the crystal efficiencies on the reconstructed images is important, both to determine how frequently normalization scans are needed and to estimate the error incurred when an inappropriate normalization is used. Furthermore, if the fluctuations of these components are low enough, they could be neglected and nearly artifact-free reconstructions become achievable without performing a regular normalization scan. In this work, we analyse the impact of the time-variant factors in the component-based normalization used in the Biograph mMR scanner, although the analysis is applicable to other PET scanners. These factors are the intrinsic crystal efficiencies and the axial factors. For the latter, we propose a new method to obtain fixed axial factors, validated with simulated data. Regarding the crystal efficiencies, we assessed their fluctuations over a period of 230 days and found that they had good stability and low dispersion. We then studied the impact of not including the intrinsic crystal efficiencies in the normalization when reconstructing simulated and real data.
Based on this assessment, and using the fixed axial factors, we propose a time-invariant normalization that achieves results comparable to the standard, daily updated, normalization factors used in this scanner. Moreover, to extend the analysis to other scanners, we generated distributions of crystal efficiencies with greater fluctuations than those found in the Biograph mMR scanner and evaluated their impact in simulations over a wide range of noise levels. An important finding of this work is that a regular normalization scan is not needed in scanners whose photodetectors have relatively low dispersion in their efficiencies.
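The "product of factors" model can be sketched in a few lines. The factor names, sizes and the Gaussian spread of crystal efficiencies below are illustrative assumptions, not the Biograph mMR implementation.

```python
import numpy as np

# Component-based normalization sketch: the sensitivity of the line of
# response (LOR) between crystals i and j is modelled as a product of
# time-invariant (geometry, axial) and time-variant (crystal efficiency)
# components. All values here are synthetic.
rng = np.random.default_rng(0)
n_crystals = 64
crystal_eff = rng.normal(1.0, 0.02, n_crystals)   # time-variant factors
geom = np.ones((n_crystals, n_crystals))          # time-invariant geometry
axial = 1.0                                       # fixed axial factor

def lor_norm_factor(i, j):
    """Normalization factor for LOR (i, j) as a product of components."""
    return geom[i, j] * axial * crystal_eff[i] * crystal_eff[j]

f = lor_norm_factor(3, 40)
```

The paper's finding amounts to saying that when the spread of crystal_eff is small enough, freezing it (a time-invariant normalization) changes the product negligibly.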
NASA Astrophysics Data System (ADS)
Zheng, Maoteng; Zhang, Yongjun; Zhou, Shunping; Zhu, Junfeng; Xiong, Xiaodong
2016-07-01
In recent years, new platforms and sensors have become available in the photogrammetry, remote sensing and computer vision communities, such as Unmanned Aerial Vehicles (UAVs), oblique camera systems, consumer digital cameras and even mobile phone cameras. Images collected by all these kinds of sensors can be used as remote sensing data sources, yielding large-scale datasets that consist of a great number of images. Bundle block adjustment of such large-scale data with a conventional algorithm is extremely time- and memory-consuming due to the very large normal matrix that arises. In this paper, an efficient Block-based Sparse Matrix Compression (BSMC) method combined with the Preconditioned Conjugate Gradient (PCG) algorithm is used to develop a stable and efficient bundle block adjustment system for large-scale remote sensing data. The main contribution of this work is the BSMC-based PCG algorithm, which is more efficient in time and memory than the traditional algorithm without compromising accuracy. Eight real datasets are used to test the proposed method. Preliminary results show that the BSMC method can efficiently decrease the time and memory requirements of large-scale bundle adjustment.
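A minimal PCG sketch on a sparse SPD stand-in for the normal matrix follows. The block-Jacobi preconditioner here only stands in for the paper's BSMC-compressed blocks; sizes, density and block size are illustrative assumptions.

```python
import numpy as np
from scipy.sparse import random as sprandom, identity, csr_matrix
from scipy.sparse.linalg import cg, LinearOperator

# Preconditioned conjugate gradients on a synthetic sparse SPD system,
# with a simple block-Jacobi preconditioner (not the BSMC algorithm).
rng = np.random.default_rng(1)
n, bs = 120, 6                               # system size, block size
A = sprandom(n, n, density=0.05, random_state=1)
A = csr_matrix(A @ A.T + n * identity(n))    # make the matrix SPD
b = rng.standard_normal(n)

# invert each diagonal block once, then apply as a preconditioner
blocks = [np.linalg.inv(A[i:i + bs, i:i + bs].toarray())
          for i in range(0, n, bs)]

def apply_prec(r):
    return np.concatenate([blk @ r[i * bs:(i + 1) * bs]
                           for i, blk in enumerate(blocks)])

M = LinearOperator((n, n), matvec=apply_prec)
x, info = cg(A, b, M=M)
```

The key memory property PCG shares with BSMC-style solvers is that A is only ever applied, never factorized.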
Highly efficient method for production of radioactive silver seed cores for brachytherapy.
Cardoso, Roberta Mansini; de Souza, Carla Daruich; Rostelato, Maria Elisa Chuery Martins; Araki, Koiti
2017-02-01
A simple and highly efficient (shorter reaction time and almost no rework) method for the production of iodine-based radioactive silver seed cores for brachytherapy is described. The method allows almost quantitative deposition of iodine-131 on dozens of silver substrates at once, with even distribution of activity per core and insignificant amounts of liquid and solid radioactive waste, allowing the fabrication of cheaper radioactive iodine seeds for brachytherapy.
A new ChainMail approach for real-time soft tissue simulation.
Zhang, Jinao; Zhong, Yongmin; Smith, Julian; Gu, Chengfan
2016-07-03
This paper presents a new ChainMail method for real-time soft tissue simulation. This method enables the use of different material properties for chain elements to accommodate various materials. Based on the ChainMail bounding region, a new time-saving scheme is developed to improve computational efficiency for isotropic materials. The proposed method also conserves volume and strain energy. Experimental results demonstrate that the proposed ChainMail method can not only accommodate isotropic, anisotropic and heterogeneous materials but also model incompressibility and relaxation behaviors of soft tissues. Further, the proposed method can achieve real-time computational performance.
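The core ChainMail idea is easiest to see in one dimension: a moved element drags its neighbours only as far as needed to keep each link within stretch/compression bounds, and propagation stops as soon as a link is already valid. The parameter names and bounds below are illustrative; the paper extends this idea to heterogeneous, anisotropic 3D tissue.

```python
# Minimal 1D ChainMail sketch (illustrative, not the paper's algorithm).
def chainmail_1d(positions, idx, new_pos, min_len=0.5, max_len=1.5):
    """Move element idx to new_pos and propagate the disturbance outward
    until every link length lies in [min_len, max_len]."""
    p = list(positions)
    p[idx] = new_pos
    for i in range(idx + 1, len(p)):        # propagate to the right
        gap = p[i] - p[i - 1]
        if gap > max_len:
            p[i] = p[i - 1] + max_len
        elif gap < min_len:
            p[i] = p[i - 1] + min_len
        else:
            break                           # link already valid: stop early
    for i in range(idx - 1, -1, -1):        # propagate to the left
        gap = p[i + 1] - p[i]
        if gap > max_len:
            p[i] = p[i + 1] - max_len
        elif gap < min_len:
            p[i] = p[i + 1] - min_len
        else:
            break
    return p

chain = chainmail_1d([0.0, 1.0, 2.0, 3.0], idx=0, new_pos=2.0)
```

The early stop is what gives ChainMail its real-time character: the cost is proportional to the size of the disturbed region, not the whole mesh.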
Wang, Chen; Ouyang, Jun; Ye, De-Kai; Xu, Jing-Juan; Chen, Hong-Yuan; Xia, Xing-Hua
2012-08-07
Fluorescence analysis has proved to be a powerful detection technique for achieving single-molecule analysis. However, it usually requires labeling targets with bright fluorescent tags, since most chemicals and biomolecules lack intrinsic fluorescence. Conventional fluorescence labeling methods require a considerable quantity of biomolecule sample, long reaction times and extensive chromatographic purification procedures. Herein, a micro/nanofluidic device integrating a nanochannel in a microfluidic chip has been designed and fabricated, which achieves rapid protein concentration, fluorescence labeling, and efficient purification of the product in a miniaturized and continuous manner. As a demonstration, labeling of the proteins bovine serum albumin (BSA) and IgG with fluorescein isothiocyanate (FITC) is presented. Compared to conventional methods, the present micro/nanofluidic device performs BSA labeling about 10^4-10^6 times faster, with 1.6 times higher yields, due to the efficient nanoconfinement effect and improved mass and heat transfer in the chip device. The results demonstrate that the present micro/nanofluidic device promises rapid and facile fluorescence labeling of small amounts of reagents such as proteins, nucleic acids and other biomolecules with high efficiency.
An exponential time-integrator scheme for steady and unsteady inviscid flows
NASA Astrophysics Data System (ADS)
Li, Shu-Jie; Luo, Li-Shi; Wang, Z. J.; Ju, Lili
2018-07-01
An exponential time-integrator scheme of second-order accuracy based on the predictor-corrector methodology, denoted PCEXP, is developed to solve multi-dimensional nonlinear partial differential equations pertaining to fluid dynamics. An effective and efficient implementation of PCEXP is realized by means of the Krylov method. The linear stability and truncation error are analyzed through a one-dimensional model equation. The proposed PCEXP scheme is applied to the Euler equations discretized with a discontinuous Galerkin method in both two and three dimensions. Its effectiveness and efficiency are demonstrated for both steady and unsteady inviscid flows, and its accuracy and efficiency are verified and validated through comparisons with the explicit third-order total variation diminishing Runge-Kutta scheme (TVDRK3), the implicit backward Euler (BE) scheme and the implicit second-order backward difference formula (BDF2). For unsteady flows, the PCEXP scheme generates a temporal error much smaller than the BDF2 scheme while maintaining the expected acceleration; for steady flows, it achieves computational efficiency comparable to the implicit schemes.
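The flavor of a second-order exponential predictor-corrector can be sketched with the well-known ETD2RK scheme of Cox and Matthews for u' = A u + N(u); this is in the same spirit as, but not identical to, PCEXP. For a small dense A the phi-functions are formed directly, where a Krylov method would replace them at scale.

```python
import numpy as np
from scipy.linalg import expm, solve

def etd2rk_step(u, h, A, N):
    """One ETD2RK step for u' = A u + N(u) (Cox-Matthews, 2nd order).
    phi1(z) = (e^z - 1)/z, phi2(z) = (e^z - 1 - z)/z^2, applied to hA.
    Requires A nonsingular in this direct (non-Krylov) sketch."""
    I = np.eye(len(u))
    E = expm(h * A)
    hA = h * A
    phi1 = solve(hA, E - I)                # (e^{hA} - I)(hA)^{-1}
    phi2 = solve(hA, phi1 - I)             # (e^{hA} - I - hA)(hA)^{-2}
    a = E @ u + h * (phi1 @ N(u))          # predictor: exponential Euler
    return a + h * (phi2 @ (N(a) - N(u)))  # corrector

# sanity check: with N = 0 the step reproduces the matrix exponential
u_lin = etd2rk_step(np.array([1.0]), 0.1, np.array([[-1.0]]),
                    lambda v: np.zeros_like(v))
```

The attraction for stiff fluid-dynamics systems is that the linear part is treated exactly, so the step size is limited by the nonlinearity rather than by the stiff linear spectrum.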
Spectral methods in time for a class of parabolic partial differential equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ierley, G.; Spencer, B.; Worthing, R.
1992-09-01
In this paper, we introduce a fully spectral solution for the partial differential equation u_t + u u_x + ν u_xx + μ u_xxx + λ u_xxxx = 0. For periodic boundary conditions in space, the use of a Fourier expansion in x admits a particularly efficient algorithm with respect to the expansion of the time dependence in a Chebyshev series. Boundary conditions other than periodic may still be treated with reasonable, though lesser, efficiency. For all cases, very high accuracy is attainable at moderate computational cost relative to the expense of variable-order finite difference methods in time.
NASA Astrophysics Data System (ADS)
Zaini, H.; Abubakar, S.; Rihayat, T.; Suryani, S.
2018-03-01
Removal of heavy metal content in wastewater has largely been done by various methods; one effective and efficient approach is adsorption. This study aims to reduce the manganese(II) content in wastewater using a column adsorption method with an adsorbent prepared from bagasse. The fixed variables consisted of 50 g of adsorbent, a 10-liter adsorbate volume and a flow rate of 7 liters/min. The independent variables were particle size (10-30 mesh) and contact time (0-240 min); the response variables were adsorbate concentration (ppm), pH and conductivity. The results showed that the adsorption of manganese is influenced by particle size and contact time. The adsorption kinetics follows pseudo-second-order kinetics, with equilibrium adsorption capacities (qe, mg/g) of 0.8947 for the 10 mesh adsorbent particles, 0.4332 for the 20 mesh particles and 1.0161 for the 30 mesh particles. The highest removal efficiencies were 49.22% for the 10 mesh particles at a contact time of 60 min, 35.25% for the 20 mesh particles at 180 min and 51.95% for the 30 mesh particles at 150 min.
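The pseudo-second-order model dq/dt = k2(qe - q)^2 is usually fitted in its linearized form t/q_t = 1/(k2 qe^2) + t/qe, so qe and k2 follow from a straight-line fit. A minimal sketch with synthetic data (the numeric values below are illustrative, not the paper's measurements):

```python
import numpy as np

def fit_pseudo_second_order(t, q):
    """Fit t/q against t; slope = 1/qe, intercept = 1/(k2*qe^2),
    so qe = 1/slope and k2 = slope^2/intercept."""
    slope, intercept = np.polyfit(t, t / q, 1)
    qe = 1.0 / slope
    k2 = slope ** 2 / intercept
    return qe, k2

# synthetic data from the closed-form solution q(t) = qe^2*k2*t/(1 + qe*k2*t)
t = np.linspace(10, 240, 24)
qe_true, k2_true = 0.89, 0.05
q = qe_true ** 2 * k2_true * t / (1 + qe_true * k2_true * t)
qe_fit, k2_fit = fit_pseudo_second_order(t, q)
```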
Computational efficiency for the surface renewal method
NASA Astrophysics Data System (ADS)
Kelley, Jason; Higgins, Chad
2018-04-01
Measuring surface fluxes using the surface renewal (SR) method requires programmatic algorithms for tabulation, algebraic calculation, and data quality control. A number of different methods have been published describing automated calibration of SR parameters. Because the SR method utilizes high-frequency (10 Hz+) measurements, some steps in the flux calculation are computationally expensive, especially when automating SR to perform many iterations of these calculations. Several new algorithms were written that perform the required calculations more efficiently and rapidly; they were tested for sensitivity to the length of the flux-averaging period, the ability to measure over a large range of lag timescales, and overall computational efficiency. These algorithms use signal processing techniques and algebraic simplifications, demonstrating simple modifications that dramatically improve computational efficiency. The results here complement efforts by other authors to standardize a robust and accurate computational SR method. The increased computational speed grants flexibility in implementing the SR method, opening new avenues for SR to be used in research, applied monitoring, and novel field deployments.
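One of the lag-dependent quantities at the heart of SR analysis is the structure function of the scalar time series. A vectorized sketch (the specific structure-function orders and lags used in SR calibration vary by author; this is a generic illustration, not the paper's algorithm):

```python
import numpy as np

def structure_function(x, max_lag, order=2):
    """S(r) = mean of (x_i - x_{i-r})**order for lags r = 1..max_lag,
    computed with array slicing instead of per-sample Python loops."""
    return np.array([np.mean((x[r:] - x[:-r]) ** order)
                     for r in range(1, max_lag + 1)])

rng = np.random.default_rng(0)
temp = rng.standard_normal(6000)          # stand-in for 10 Hz temperature data
S = structure_function(temp, max_lag=50)
```

For white noise the second-order structure function is flat near twice the variance, a quick sanity check when validating faster implementations against a reference.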
Nogly, Przemyslaw; Panneels, Valerie; Nelson, Garrett; ...
2016-08-22
Serial femtosecond crystallography (SFX) using X-ray free-electron laser sources is an emerging method with considerable potential for time-resolved pump-probe experiments. Here we present a lipidic cubic phase SFX structure of the light-driven proton pump bacteriorhodopsin (bR) to 2.3 Å resolution and a method to investigate protein dynamics with modest sample requirement. Time-resolved SFX (TR-SFX) with a pump-probe delay of 1 ms yields difference Fourier maps compatible with the dark to M state transition of bR. Importantly, the method is very sample efficient and reduces sample consumption to about 1 mg per collected time point. Accumulation of the M intermediate within the crystal lattice is confirmed by time-resolved visible absorption spectroscopy. Furthermore, this study provides an important step towards characterizing the complete photocycle dynamics of retinal proteins and demonstrates the feasibility of a sample efficient viscous medium jet for TR-SFX.
On computing the global time-optimal motions of robotic manipulators in the presence of obstacles
NASA Technical Reports Server (NTRS)
Shiller, Zvi; Dubowsky, Steven
1991-01-01
A method for computing the time-optimal motions of robotic manipulators is presented that considers the nonlinear manipulator dynamics, actuator constraints, joint limits, and obstacles. The optimization problem is reduced to a search for the time-optimal path in the n-dimensional position space. A small set of near-optimal paths is first efficiently selected from a grid, using a branch and bound search and a series of lower bound estimates on the traveling time along a given path. These paths are further optimized with a local path optimization to yield the global optimal solution. Obstacles are considered by eliminating the collision points from the tessellated space and by adding a penalty function to the motion time in the local optimization. The computational efficiency of the method stems from the reduced dimensionality of the searched spaced and from combining the grid search with a local optimization. The method is demonstrated in several examples for two- and six-degree-of-freedom manipulators with obstacles.
NASA Astrophysics Data System (ADS)
Dimitrakopoulos, Panagiotis
2018-03-01
The calculation of polytropic efficiencies is a very important task, especially during the development of new compression units, such as compressor impellers, stages and stage groups. Such calculations are also crucial for determining the performance of a whole compressor. As processors and computational capacities have improved substantially in recent years, the need has emerged for a new, rigorous, robust, accurate and at the same time standardized method for computing polytropic efficiencies, especially one based on the thermodynamics of real gases. The proposed method is based on the rigorous definition of the polytropic efficiency. The input consists of pressure and temperature values at the end points of the compression path (suction and discharge) for a given working fluid. The average relative error for the studied cases was 0.536%. This high-accuracy method is therefore proposed for efficiency calculations related to turbocompressors and their compression units, especially when operating at high power levels, for example in jet engines and high-power plants.
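For orientation, the textbook ideal-gas special case of the polytropic efficiency takes exactly the same end-point inputs. The paper's method targets real-gas thermodynamics, so the formula below is only the limiting case for illustration; the numeric operating point is an assumption.

```python
import math

def polytropic_efficiency(p1, T1, p2, T2, gamma=1.4):
    """Ideal-gas polytropic efficiency from suction (p1, T1) and
    discharge (p2, T2) end points:
        eta_p = ((gamma - 1)/gamma) * ln(p2/p1) / ln(T2/T1)."""
    return ((gamma - 1.0) / gamma) * math.log(p2 / p1) / math.log(T2 / T1)

# illustrative operating point: 4:1 pressure ratio in air
eta = polytropic_efficiency(p1=1e5, T1=293.15, p2=4e5, T2=470.0)
```

An isentropic compression (T2 equal to the isentropic discharge temperature) returns an efficiency of exactly 1, a convenient unit test for any real-gas generalization.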
USDA-ARS?s Scientific Manuscript database
The biolistic method is reliable for delivering genes of interest into various species. Low transformation efficiency has been a limiting factor for its application. The DNA coating agent protamine was shown to improve transformation efficiency in rice, while a reduction of plasmid DNA in the bomb...
Efficient solution of a multi objective fuzzy transportation problem
NASA Astrophysics Data System (ADS)
Vidhya, V.; Ganesan, K.
2018-04-01
In this paper we present a methodology for the solution of the multi-objective fuzzy transportation problem in which all the cost and time coefficients are trapezoidal fuzzy numbers and the supply and demand are crisp numbers. Using a new fuzzy arithmetic on the parametric form of trapezoidal fuzzy numbers and a new ranking method, all efficient solutions are obtained. The proposed method is illustrated with an example.
Jafar-Zanjani, Samad; Cheng, Jierong; Mosallaei, Hossein
2016-04-10
An efficient auxiliary differential equation method for incorporating 2D inhomogeneous dispersive impedance sheets in the finite-difference time-domain solver is presented. The proposed method can successfully solve optical problems of current interest involving 2D sheets. It eliminates the need for ultrafine meshing in the thickness direction, resulting in a significant reduction of computation time and memory requirements. We apply the method to characterize a novel broad-beam leaky-wave antenna created by cascading three sinusoidally modulated reactance surfaces, and to study the effect of curvature on the radiation characteristics of a conformal impedance-sheet holographic antenna. Considerable improvement in simulation time relative to the traditional volumetric model is reported. Both applications are of great interest in the field of antennas and 2D sheets.
Hu, Peijun; Wu, Fa; Peng, Jialin; Bao, Yuanyuan; Chen, Feng; Kong, Dexing
2017-03-01
Multi-organ segmentation from CT images is an essential step for computer-aided diagnosis and surgery planning. However, manual delineation of the organs by radiologists is tedious, time-consuming and poorly reproducible. We therefore propose a fully automatic method for the segmentation of multiple organs from three-dimensional abdominal CT images. The proposed method employs deep fully convolutional neural networks (CNNs) for organ detection and segmentation, which is further refined by a time-implicit multi-phase evolution method. First, a 3D CNN is trained to automatically localize and delineate the organs of interest with a probability prediction map. The learned probability map provides both subject-specific spatial priors and an initialization for the subsequent fine segmentation. Then, for the refinement of the multi-organ segmentation, image intensity models, probability priors and a disjoint-region constraint are incorporated into a unified energy functional. Finally, a novel time-implicit multi-phase level-set algorithm is utilized to efficiently optimize the proposed energy functional. Our method has been evaluated on 140 abdominal CT scans for the segmentation of four organs (liver, spleen and both kidneys). With respect to the ground truth, average Dice overlap ratios for the liver, spleen and kidneys are 96.0%, 94.2% and 95.4%, respectively, and the average symmetric surface distance is less than 1.3 mm for all segmented organs. The computation time for a CT volume is 125 s on average. The achieved accuracy compares well to state-of-the-art methods, with much higher efficiency. In summary, a fully automatic method for multi-organ segmentation from abdominal CT images was developed and evaluated, and the results demonstrate its potential for clinical use with high effectiveness, robustness and efficiency.
NASA Astrophysics Data System (ADS)
Bhrawy, A. H.; Zaky, M. A.
2015-01-01
In this paper, we propose and analyze an efficient operational formulation of spectral tau method for multi-term time-space fractional differential equation with Dirichlet boundary conditions. The shifted Jacobi operational matrices of Riemann-Liouville fractional integral, left-sided and right-sided Caputo fractional derivatives are presented. By using these operational matrices, we propose a shifted Jacobi tau method for both temporal and spatial discretizations, which allows us to present an efficient spectral method for solving such problem. Furthermore, the error is estimated and the proposed method has reasonable convergence rates in spatial and temporal discretizations. In addition, some known spectral tau approximations can be derived as special cases from our algorithm if we suitably choose the corresponding special cases of Jacobi parameters θ and ϑ. Finally, in order to demonstrate its accuracy, we compare our method with those reported in the literature.
NASA Astrophysics Data System (ADS)
Yu, Z.; Bedig, A.; Quigley, M.; Montalto, F. A.
2017-12-01
In-situ field monitoring can help to improve the design and management of decentralized Green Infrastructure (GI) systems in urban areas. Because of the vast quantity of continuous data generated by multi-site sensor systems, cost-effective post-construction opportunities for real-time control are limited, and the physical processes that influence the observed phenomena (e.g. soil moisture) are hard to track and control. To derive knowledge efficiently from real-time monitoring data, more efficient approaches to data quality control are needed. In this paper, we employ the dynamic time warping (DTW) method to compare the similarity of two soil-moisture patterns without ignoring their inherent autocorrelation. We also use a rule-based machine learning method to investigate the feasibility of detecting anomalous responses from soil-moisture probes. The data were generated by both individual probes and clusters of probes deployed at a GI site in Milwaukee, WI. In contrast to traditional QA/QC methods, which seek to detect outliers at individual time steps, the new method presented here converts the continuous time series into event-based symbolic sequences from which unusual response patterns can be detected. Different matching rules are developed for different physical characteristics and seasons. The results suggest that this method could be used to detect sensor failure, identify extreme events, and flag abnormal change patterns relative to intra-probe and inter-probe historical observations. Though this algorithm was developed for soil-moisture probes, the same approach could easily be extended to improve QA/QC efficiency for any continuous environmental dataset.
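The DTW distance itself is a short dynamic program; a minimal sketch of the standard algorithm (absolute-difference local cost, no windowing, which the paper may or may not have used):

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-programming DTW distance between two 1D sequences:
    D[i, j] = |a_i - b_j| + min over (insert, delete, match) predecessors."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

d_same = dtw_distance([0, 1, 2, 1, 0], [0, 1, 2, 1, 0])
# a time-shifted copy also warps to zero cost, which Euclidean distance cannot do
d_shift = dtw_distance([0, 1, 2, 1, 0], [0, 0, 1, 2, 1, 0])
```

This invariance to timing offsets is exactly what makes DTW suitable for comparing soil-moisture responses whose wetting fronts arrive at slightly different times.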
Adaptation and Promotion of Emergency Medical Service Transportation for Climate Change
Pan, Chih-Long; Chiu, Chun-Wen; Wen, Jet-Chau
2014-01-01
The purpose of this study is to develop proper prehospital transportation scenario planning for an emergency medical service (EMS) system facing possible burdensome casualties resulting from extreme climate events. This project focuses on one of the worst natural catastrophic events in Taiwan, the 88 Wind-caused Disasters caused by Typhoon Morakot; the case of EMS transportation in Xiaolin village is reviewed and analyzed. The sequential-conveyance method is designed to promote the efficiency of all ambulance services with respect to transportation time and distance. Initially, a proposed mobile emergency medical center (MEMC) is constructed in a safe location near the disaster area. The ambulances are classified into 2 categories: first-line ambulances, which reciprocate between the MEMC and the disaster area to save time and shorten working distances; and second-line ambulances, which transfer patients in critical condition from the MEMC to the requested hospitals for further treatment. According to the results, the sequential-conveyance method is more efficient than the conventional method for EMS transportation in a mass-casualty incident (MCI), improving time efficiency by 52.15% and distance efficiency by 56.02%. This case study concentrates on Xiaolin, a mountain village that was heavily destroyed by a devastating mudslide during Typhoon Morakot. The sequential-conveyance method for EMS transportation in this research is not only more advantageous but also more rational as an adaptation to climate change. Therefore, the findings are also important to all decision-making with respect to improved EMS transportation, especially in an MCI. PMID:25501065
Yu, Lu; Shi, Jing; Cao, Lianlian; Zhang, Guoping; Wang, Wenli; Hu, Deyu; Song, Baoan
2017-08-15
Southern rice black-streaked dwarf virus (SRBSDV) has spread from southern China to northern Vietnam in the past few years and has severely affected rice production. However, in previous laboratory studies of the traditional SRBSDV transmission method using the natural virus vector, the white-backed planthopper (WBPH, Sogatella furcifera), researchers were frequently confronted with a shortage of viral samples due to the limited life span of infected vectors and rice plants and the low virus acquisition and inoculation efficiency of the vector. Meanwhile, traditional mechanical inoculation of viruses applies only to dicotyledons because of the higher lignin content of monocot leaves. Therefore, establishing an efficient and persistent-transmission model, with a shorter virus transmission time and a higher virus transmission efficiency, for screening novel anti-SRBSDV drugs is an urgent need. In this study, we report for the first time a novel method for transmitting SRBSDV in rice by bud-cutting. The transmission efficiency of SRBSDV in rice was investigated via the polymerase chain reaction (PCR), and the replication of SRBSDV in rice was also investigated via proteomics analysis. Rice infected with SRBSDV using the bud-cutting method exhibited symptoms similar to those of plants infected by the WBPH, and the transmission efficiency (>80.00%), determined by PCR, and the virus transmission time (30 min) were superior to those achieved by WBPH transmission. Proteomics analysis confirmed that the SRBSDV P1, P2, P3, P4, P5-1, P5-2, P6, P8, P9-1, P9-2, and P10 proteins were present in rice seedlings infected via the bud-cutting method. The results showed that SRBSDV can be successfully transmitted via the bud-cutting method and that infected plants exhibit symptoms similar to those of plants infected by the WBPH.
Therefore, the use of the bud-cutting method to generate a cheap, efficient, and reliable supply of SRBSDV-infected rice seedlings should aid the development of disease control strategies. This method may also suggest a new approach for transmitting other viruses in monocots.
Character recognition from trajectory by recurrent spiking neural networks.
Jiangrong Shen; Kang Lin; Yueming Wang; Gang Pan
2017-07-01
Spiking neural networks are biologically plausible and power-efficient on neuromorphic hardware, while recurrent neural networks have proven efficient on time series data. However, how to use the recurrent property to improve the performance of spiking neural networks remains an open problem. This paper proposes a recurrent spiking neural network for character recognition using trajectories. In the network, a new encoding method is designed in which varying time ranges of the input streams are used in different recurrent layers. This improves the generalization ability of our model compared with general encoding methods. The experiments are conducted on four groups of the character data set from the University of Edinburgh. The results show that our method achieves a higher average recognition accuracy than existing methods.
Application of kernel functions for accurate similarity search in large chemical databases.
Wang, Xiaohong; Huan, Jun; Smalter, Aaron; Lushington, Gerald H
2010-04-29
Similarity search in chemical structure databases is an important problem with many applications in chemical genomics, drug design, and efficient chemical probe screening, among others. It is widely believed that structure-based methods provide an efficient way to perform such queries. Recently, various graph kernel functions have been designed to capture the intrinsic similarity of graphs. Though successful in constructing accurate predictive and classification models, graph kernel functions cannot be applied to large chemical compound databases due to their high computational complexity and the difficulty of indexing similarity search for large databases. To bridge graph kernel functions and similarity search in chemical databases, we applied a novel kernel-based similarity measurement, developed in our team, to measure the similarity of graph-represented chemicals. In our method, we utilize a hash table to support the new graph kernel function definition, efficient storage, and fast search. We have applied our method, named G-hash, to large chemical databases. Our results show that G-hash achieves state-of-the-art performance for k-nearest neighbor (k-NN) classification. Moreover, the similarity measurement and the index structure are scalable to large chemical databases, with a smaller index size and faster query processing time compared to state-of-the-art indexing methods such as Daylight fingerprints, C-tree, and GraphGrep. Efficient similarity query processing for large chemical databases is challenging because running-time efficiency must be balanced against similarity search accuracy. Our similarity search method, G-hash, provides a new way to perform similarity search in chemical databases, and our experimental study validates its utility.
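The abstract does not give the G-hash kernel itself, but the general idea it is benchmarked against, hash-indexed fingerprint similarity with Tanimoto (Jaccard) scoring, can be sketched as follows. The compound names and feature sets are invented for illustration; this is not the authors' method:

```python
from collections import defaultdict

def tanimoto(fp1, fp2):
    """Tanimoto (Jaccard) similarity between two feature sets."""
    union = len(fp1 | fp2)
    return len(fp1 & fp2) / union if union else 1.0

def build_index(db):
    """Inverted hash index: feature -> compounds containing it."""
    index = defaultdict(set)
    for name, fp in db.items():
        for feat in fp:
            index[feat].add(name)
    return index

def knn(query, db, index, k=1):
    """Score only candidates sharing a feature, instead of scanning the database."""
    candidates = set().union(*(index.get(f, set()) for f in query))
    ranked = sorted(candidates, key=lambda n: tanimoto(query, db[n]), reverse=True)
    return ranked[:k]

db = {"benzene": {"c", "ring6"},
      "toluene": {"c", "ring6", "ch3"},
      "ethanol": {"c", "oh"}}
print(knn({"c", "ring6", "cl"}, db, build_index(db), k=2))
```

The inverted index is what makes the query sublinear in practice: compounds sharing no feature with the query are never scored.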
NASA Astrophysics Data System (ADS)
Ping, Ping; Zhang, Yu; Xu, Yixian; Chu, Risheng
2016-12-01
In order to improve perfectly matched layer (PML) efficiency in viscoelastic media, we first propose a split multi-axial PML (M-PML) and an unsplit convolutional PML (C-PML) for the second-order viscoelastic wave equations with displacement as the only unknown. The advantage of these formulations is that existing second-order spectral element method (SEM) or finite element method (FEM) codes can be revised easily and efficiently to include absorbing boundaries in a uniform equation, and they are more economical than the auxiliary-differential-equation PML. Three models that are prone to late-time instabilities are considered to validate our approaches. By comparing the absorption efficiency and stability of the M-PML and the C-PML in long-time simulations, it can be concluded that: (1) for an isotropic viscoelastic medium with a high Poisson's ratio, the C-PML is a sufficient choice for long-time simulation because of its weak reflections and superior stability; (2) unlike the M-PML with a high-order damping profile, the M-PML with a second-order damping profile loses its stability in long-time simulation for an isotropic viscoelastic medium; (3) in an anisotropic viscoelastic medium, the C-PML suffers from instabilities, while the M-PML with a second-order damping profile is a better choice for its superior stability and weaker, more acceptable reflections compared with the M-PML with a high-order damping profile. The comparative analysis of the developed methods offers meaningful guidance for long-time seismic wave modeling with second-order viscoelastic wave equations.
Reduction of capacity decay in vanadium flow batteries by an electrolyte-reflow method
NASA Astrophysics Data System (ADS)
Wang, Ke; Liu, Le; Xi, Jingyu; Wu, Zenghua; Qiu, Xinping
2017-01-01
Electrolyte imbalance is a major issue with vanadium flow batteries (VFBs), as it has a significant impact on electrolyte utilization and cycle life over extended charge-discharge cycling. This work seeks to reduce capacity decay and prolong the cycle life of VFBs by adopting a novel electrolyte-reflow method. Different current densities and various start-up times of the method are investigated in charge-discharge tests. The results show that the capacity decay rate is reduced markedly and the cycle life is prolonged substantially by this method. In addition, the coulombic efficiency, voltage efficiency, and energy efficiency remain stable during the whole cycle life test, which indicates that the method has little impact on long-term VFB performance. The method is low-cost, simple, effective, and applicable to industrial VFB production.
Mixed time integration methods for transient thermal analysis of structures, appendix 5
NASA Technical Reports Server (NTRS)
Liu, W. K.
1982-01-01
Mixed time integration methods for transient thermal analysis of structures are studied. An efficient solution procedure for predicting the thermal behavior of aerospace vehicle structures was developed. A 2D finite element computer program incorporating these methodologies is being implemented. The performance of these mixed time finite element algorithms can then be evaluated employing the proposed example problem.
Efficient Optimization of Low-Thrust Spacecraft Trajectories
NASA Technical Reports Server (NTRS)
Lee, Seungwon; Fink, Wolfgang; Russell, Ryan; Terrile, Richard; Petropoulos, Anastassios; vonAllmen, Paul
2007-01-01
A paper describes a computationally efficient method of optimizing trajectories of spacecraft driven by propulsion systems that generate low thrusts and, hence, must be operated for long times. A common goal in trajectory-optimization problems is to find minimum-time, minimum-fuel, or Pareto-optimal trajectories (here, Pareto-optimality signifies that no other solutions are superior with respect to both flight time and fuel consumption). The present method utilizes genetic and simulated-annealing algorithms to search for globally Pareto-optimal solutions. These algorithms are implemented in parallel form to reduce computation time. They are coupled with either of two traditional trajectory-design approaches called "direct" and "indirect." In the direct approach, thrust control is discretized in either arc time or arc length, and the resulting discrete thrust vectors are optimized. The indirect approach involves the primer-vector theory (introduced in 1963), in which the thrust control problem is transformed into a co-state control problem and the initial values of the co-state vector are optimized. In application to two example orbit-transfer problems, this method was found to generate solutions comparable to those of other state-of-the-art trajectory-optimization methods while requiring much less computation time.
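As a rough illustration of the stochastic-search component only (not the paper's parallel genetic/annealing implementation), a minimal simulated-annealing loop on an invented one-variable scalarized objective might look like this:

```python
import math
import random

def anneal(cost, x0, step, t0=1.0, cooling=0.995, iters=2000, seed=0):
    """Generic simulated annealing: accept worse moves with Boltzmann probability."""
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    t = t0
    for _ in range(iters):
        y = step(x, rng)
        fy = cost(y)
        # Always accept improvements; occasionally accept uphill moves.
        if fy < fx or rng.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
        t *= cooling  # geometric cooling schedule
    return x, fx

# Invented scalarized "flight time + fuel" objective; minimum 1.0 at x = 2.
cost = lambda x: (x - 2.0) ** 2 + 1.0
step = lambda x, rng: x + rng.uniform(-0.5, 0.5)
x_best, f_best = anneal(cost, x0=0.0, step=step)
print(round(x_best, 1), round(f_best, 2))
```

In the paper's setting the objective is a vector (flight time, fuel), and a population-based search keeps a whole Pareto front rather than a single scalarized optimum; this sketch shows only the annealing acceptance rule.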
Kim, Min Woo; Sun, Gwanggyu; Lee, Jung Hyuk; Kim, Byung-Gee
2018-06-01
Ribozymes (Rz) are attractive RNA molecules in metabolic engineering and synthetic biology, where RNA processing is required as a control unit or an ON/OFF signal driven by their cleavage reaction. To use an Rz for such RNA processing, it must have highly active and specific catalytic activity. However, current methods for assessing the intracellular activity of Rz have limitations, such as difficulty in handling and inaccuracy in evaluating the true cleavage activity. In this paper, we propose a simple method to accurately measure the "intracellular cleavage efficiency" of an Rz. The method deactivates unwanted Rz activity, which may continue after cell lysis, using a DNA quenching step, and calculates the cleavage efficiency by quantifying, via quantitative real-time PCR (qPCR), the fraction of mRNA cleaved by the Rz out of the total amount of Rz-containing mRNA. The proposed method was applied to measure the "intracellular cleavage efficiency" of sTRSV, a representative Rz, and its mutant; their intracellular cleavage efficiencies were calculated as 89% and 93%, respectively. Copyright © 2018 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Ravi Kanth, A. S. V.; Aruna, K.
2016-12-01
In this paper, we present the fractional differential transform method (FDTM) and the modified fractional differential transform method (MFDTM) for the solution of the time-fractional Black-Scholes European option pricing equation. The methods find the solution without any discretization, transformation, or restrictive assumptions, given appropriate initial or boundary conditions. The efficiency and accuracy of the proposed methods are tested on three examples.
Method for providing real-time control of a gaseous propellant rocket propulsion system
NASA Technical Reports Server (NTRS)
Morris, Brian G. (Inventor)
1991-01-01
The new and improved methods and apparatus disclosed provide effective real-time management of a spacecraft rocket engine powered by gaseous propellants. Real-time measurements representative of the engine performance are compared with predetermined standards to selectively control the supply of propellants to the engine for optimizing its performance as well as efficiently managing the consumption of propellants. A priority system is provided for achieving effective real-time management of the propulsion system by first regulating the propellants to keep the engine operating at an efficient level and thereafter regulating the consumption ratio of the propellants. A lower priority level is provided to balance the consumption of the propellants so significant quantities of unexpended propellants will not be left over at the end of the scheduled mission of the engine.
Pranevicius, Henrikas; Pranevicius, Mindaugas; Pranevicius, Osvaldas; Bukauskas, Feliksas F.
2015-01-01
The primary goal of this work was to study the advantages of numerical methods used to create continuous-time Markov chain (CTMC) models of the voltage gating of gap junction (GJ) channels composed of connexin protein. This task was accomplished by describing GJ gating with the formalism of stochastic automata networks (SANs), which allowed very efficient construction and storage of the infinitesimal generator of the CTMC and produced model matrices with a distinct block structure. This, in turn, allowed us to develop efficient numerical methods for the steady-state solution of the CTMC models, accelerating the CPU time needed to solve them by a factor of ∼20. PMID:25705700
Combining Accuracy and Efficiency: An Incremental Focal-Point Method Based on Pair Natural Orbitals.
Fiedler, Benjamin; Schmitz, Gunnar; Hättig, Christof; Friedrich, Joachim
2017-12-12
In this work, we present a new pair natural orbital (PNO)-based incremental scheme to calculate CCSD(T) and CCSD(T0) reaction, interaction, and binding energies. We perform an extensive analysis, which shows small incremental errors similar to previous non-PNO calculations. Furthermore, slight PNO errors are obtained by using T_PNO = T_TNO with appropriate values of 10^-7 to 10^-8 for reactions and 10^-8 for interaction or binding energies. The combination with the efficient MP2 focal-point approach yields chemical accuracy relative to the complete basis-set (CBS) limit. In this method, small basis sets (cc-pVDZ, def2-TZVP) for the CCSD(T) part are sufficient in the case of reactions or interactions, while somewhat larger ones (e.g., (aug)-cc-pVTZ) are necessary for molecular clusters. For these larger basis sets, we show the very high efficiency of our scheme. We obtain not only tremendous decreases in wall times (i.e., factors >10^2) due to the parallelization of the increment calculations, as well as in total times due to the application of PNOs (i.e., compared to the normal incremental scheme), but also smaller total times with respect to the standard PNO method. In this way, our new method combines excellent accuracy with very high efficiency, and gives access to larger systems by separating the full computation into several small increments.
Parareal in time 3D numerical solver for the LWR Benchmark neutron diffusion transient model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baudron, Anne-Marie, E-mail: anne-marie.baudron@cea.fr; CEA-DRN/DMT/SERMA, CEN-Saclay, 91191 Gif sur Yvette Cedex; Lautard, Jean-Jacques, E-mail: jean-jacques.lautard@cea.fr
2014-12-15
In this paper we present a time-parallel algorithm for the 3D neutron calculation of a transient model in a nuclear reactor core. The neutron calculation consists in numerically solving the time-dependent diffusion approximation equation, a simplified transport equation. The numerical resolution is done with a finite element method based on a tetrahedral meshing of the computational domain representing the reactor core, and time discretization is achieved using a θ-scheme. The transient model features control rods that move during the reaction; therefore, cross-sections (piecewise constant) are taken into account by interpolation with respect to the velocity of the control rods. Parallelism across time is achieved by applying the parareal-in-time algorithm to the problem at hand. This parallel method is a predictor-corrector scheme that iteratively combines two kinds of numerical propagators, one coarse and one fine. Our method is made efficient by means of a coarse solver defined with a large time step and a fixed-position control rod model, while the fine propagator is a high-order numerical approximation of the full model. The parallel implementation of our method provides good scalability of the algorithm. Numerical results show the efficiency of the parareal method on a large light water reactor transient model corresponding to the Langenbuch-Maurer-Werner benchmark.
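The predictor-corrector structure of the parareal iteration can be sketched on a scalar model problem. This is a generic textbook illustration, not the authors' 3D neutron diffusion solver; here the coarse propagator is a single backward-Euler step and the fine propagator is the exact exponential:

```python
import numpy as np

def parareal(coarse, fine, y0, n_slices, n_iter):
    """Parareal: coarse sweep predicts; fine solves (parallelizable) correct."""
    U = np.zeros(n_slices + 1)
    U[0] = y0
    for n in range(n_slices):              # initial coarse prediction
        U[n + 1] = coarse(U[n])
    for _ in range(n_iter):
        # Fine and coarse propagations from the OLD iterate; the fine solves
        # are independent per slice, hence the parallelism across time.
        F = [fine(U[n]) for n in range(n_slices)]
        G_old = [coarse(U[n]) for n in range(n_slices)]
        for n in range(n_slices):          # sequential correction sweep
            U[n + 1] = coarse(U[n]) + F[n] - G_old[n]
    return U

# Model problem y' = -y on [0, 1], split into 10 slices of length dT = 0.1.
lam, dT = -1.0, 0.1
coarse = lambda y: y / (1 - lam * dT)      # one backward-Euler step
fine = lambda y: y * np.exp(lam * dT)      # exact propagator
U = parareal(coarse, fine, y0=1.0, n_slices=10, n_iter=5)
print(U[-1], np.exp(lam))                  # converges toward exp(-1)
```

After k iterations the first k slices are exact, and in practice a few iterations suffice when the coarse propagator is reasonably accurate, which is the source of the speedup.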
A GPU-accelerated implicit meshless method for compressible flows
NASA Astrophysics Data System (ADS)
Zhang, Jia-Le; Ma, Zhi-Hua; Chen, Hong-Quan; Cao, Cheng
2018-05-01
This paper develops a recently proposed GPU-based two-dimensional explicit meshless method (Ma et al., 2014) by devising and implementing an efficient parallel LU-SGS implicit algorithm to further improve computational efficiency. The capability of the original 2D meshless code is extended to deal with 3D complex compressible flow problems. To resolve the inherent data dependency of the standard LU-SGS method, which causes thread-race conditions that destabilize the numerical computation, a generic rainbow coloring method is presented and applied to organize the computational points into different groups by painting neighboring points with different colors. The original LU-SGS method is modified and parallelized accordingly to perform calculations in a color-by-color manner. The CUDA Fortran programming model is employed to develop the key kernel functions that apply boundary conditions, calculate time steps, evaluate residuals, and advance and update the solution in time. A series of two- and three-dimensional test cases, including compressible flows over single- and multi-element airfoils and an M6 wing, are carried out to verify the developed code. The obtained solutions agree well with experimental data and other computational results reported in the literature. Detailed analysis of the performance of the developed code reveals that the CPU-based implicit meshless method is at least four to eight times faster than its explicit counterpart, and the computational efficiency of the implicit method can be improved by a further ten to fifteen times on the GPU.
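The rainbow-coloring idea, painting neighboring points different colors so that same-color points share no data dependency and can be updated in parallel, can be illustrated with a simple greedy coloring on an invented adjacency list (a sketch only, not the authors' CUDA implementation):

```python
def greedy_coloring(adjacency):
    """Greedy coloring: each point takes the smallest color unused by neighbors."""
    colors = {}
    for node in sorted(adjacency):
        used = {colors[nb] for nb in adjacency[node] if nb in colors}
        c = 0
        while c in used:
            c += 1
        colors[node] = c
    return colors

# Tiny point cloud: neighboring points must differ in color; points sharing
# a color can be swept in parallel without thread races.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
cols = greedy_coloring(adj)
groups = {}
for point, c in cols.items():
    groups.setdefault(c, []).append(point)
print(groups)  # color-by-color sweep order
```

A color-by-color LU-SGS sweep then loops over colors sequentially but processes all points of one color concurrently, which is exactly the data layout a GPU kernel needs.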
A new desorption method for removing organic solvents from activated carbon using surfactant
Hinoue, Mitsuo; Ishimatsu, Sumiyo; Fueta, Yukiko; Hori, Hajime
2017-01-01
Objectives: A new desorption method was investigated that does not require toxic organic solvents. Efficient desorption of organic solvents from activated carbon was achieved with an anionic surfactant solution, exploiting its washing and emulsifying action. Methods: Isopropyl alcohol (IPA) and methyl ethyl ketone (MEK) were used as test solvents. Lauryl benzene sulfonic acid sodium salt (LAS) and sodium dodecyl sulfate (SDS) were used as the surfactants. Activated carbon (100 mg) was placed in a vial and a predetermined amount of organic solvent was added. After leaving for about 24 h, a predetermined amount of the surfactant solution was added. After leaving for another 72 h, the vial was heated in an incubator at 60°C for a predetermined time. The organic vapor concentration was then determined with a flame ionization detector (FID) gas chromatograph, and the desorption efficiency was calculated. Results: A high desorption efficiency was obtained with a 10% surfactant solution (LAS 8%, SDS 2%), 5 ml of desorption solution, a 60°C desorption temperature, and a desorption time of over 24 h; the desorption efficiency was 72% for IPA and 9% for MEK. Under identical conditions, the desorption efficiencies for another five organic solvents were investigated: 36%, 3%, 32%, 2%, and 3% for acetone, ethyl acetate, dichloromethane, toluene, and m-xylene, respectively. Conclusions: A combination of two anionic surfactants exhibited a relatively high desorption efficiency for IPA. For toluene, the desorption efficiency was low due to poor detergency and emulsification power. PMID:28132972
Survey of methods for calculating sensitivity of general eigenproblems
NASA Technical Reports Server (NTRS)
Murthy, Durbha V.; Haftka, Raphael T.
1987-01-01
A survey of methods for sensitivity analysis of the algebraic eigenvalue problem for non-Hermitian matrices is presented. In addition, a modification of one method based on a better normalizing condition is proposed. Methods are classified as Direct or Adjoint and are evaluated for efficiency. Operation counts are presented in terms of matrix size, number of design variables and number of eigenvalues and eigenvectors of interest. The effect of the sparsity of the matrix and its derivatives is also considered, and typical solution times are given. General guidelines are established for the selection of the most efficient method.
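For context, the standard first-order sensitivity relation for a simple eigenvalue of a non-Hermitian matrix A(p), with right eigenvector x and left eigenvector y, is the following textbook result (stated here for orientation, not quoted from the survey):

```latex
A\,x = \lambda\,x, \qquad y^{H} A = \lambda\, y^{H},
\qquad
\frac{\partial \lambda}{\partial p}
  = \frac{y^{H}\,\dfrac{\partial A}{\partial p}\,x}{\,y^{H} x\,}.
```

The denominator y^H x is the normalizing condition the survey's methods treat differently; a better-conditioned choice of normalization is precisely the kind of modification the abstract proposes.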
Pulse charging of lead-acid traction cells
NASA Technical Reports Server (NTRS)
Smithrick, J. J.
1980-01-01
Pulse charging was investigated as a method of rapidly and efficiently charging 300 amp-hour lead-acid traction cells for an electric vehicle application. A wide range of square-wave charge pulse current waveforms was investigated, and the results were compared to constant-current charging at the time-averaged pulse current values. Representative pulse current waveforms were: (1) positive waveform: peak charge pulse current of 300 amperes (amps), discharge pulse current of zero amps, and a duty cycle of about 50%; (2) Romanov waveform: peak charge pulse current of 300 amps, peak discharge pulse current of 15 amps, and a duty cycle of 50%; and (3) McCulloch waveform: peak charge pulse current of 193 amps, peak discharge pulse current of about 575 amps, and a duty cycle of 94%. Experimental results indicate that, on the basis of amp-hour efficiency, pulse charging offered no significant advantage as a method of rapidly charging 300 amp-hour lead-acid traction cells compared to constant-current charging at the time-averaged pulse current value. There were, however, some disadvantages of pulse charging, in particular a decrease in charge amp-hour and energy efficiencies and an increase in cell electrolyte temperature. The constant-current charge method resulted in the best energy efficiency with no significant sacrifice of charge time or amp-hour output. Whether pulse charging offers an advantage over constant-current charging with regard to cell charge/discharge cycle life is unknown at this time.
Efficient iterative method for solving the Dirac-Kohn-Sham density functional theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Lin; Shao, Sihong; E, Weinan
2012-11-06
We present for the first time an efficient iterative method to directly solve the four-component Dirac-Kohn-Sham (DKS) density functional theory. Due to the existence of the negative energy continuum in the DKS operator, the existing iterative techniques for solving Kohn-Sham systems cannot be efficiently applied to DKS systems. The key component of our method is a novel filtering step (F) which acts as a preconditioner in the framework of the locally optimal block preconditioned conjugate gradient (LOBPCG) method. The resulting method, dubbed LOBPCG-F, is able to compute the desired eigenvalues and eigenvectors in the positive energy band without computing any state in the negative energy band. The LOBPCG-F method introduces mild extra cost compared to the standard LOBPCG method and can be easily implemented. We demonstrate our method in the pseudopotential framework with a planewave basis set, which naturally satisfies the kinetic balance prescription. Numerical results for Pt2, Au2, TlF, and Bi2Se3 indicate that LOBPCG-F is a robust and efficient method for investigating relativistic effects in systems containing heavy elements.
Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr
2012-01-01
Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via the efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. Determining the uncertainty of the efficiency gain arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate the uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap-based algorithm was used to simulate the probability distribution of the efficiency gain estimates, and the shortest 95% confidence interval was estimated from this distribution. The corresponding relative uncertainty was found to be as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however, its main disadvantage was that the uncertainties of the input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights, which made extremely large contributions to the scored absorbed-dose difference. The mechanism by which high statistical weights arise in the fixed-collision correlated sampling method is explained, and a mitigation strategy is proposed. Copyright © 2011 Elsevier Ltd. All rights reserved.
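A percentile-bootstrap confidence interval of the kind described can be sketched as follows. The "gain" samples are invented for illustration (a few heavy-weight contributions mimic the large-weight photons the abstract mentions); this is not the authors' data or exact algorithm:

```python
import random
import statistics

def bootstrap_ci(sample, stat, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap confidence interval for an arbitrary statistic."""
    rng = random.Random(seed)
    n = len(sample)
    # Resample with replacement, recompute the statistic, sort the replicates.
    reps = sorted(stat([sample[rng.randrange(n)] for _ in range(n)])
                  for _ in range(n_boot))
    lo = reps[int((alpha / 2) * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Skewed efficiency-gain samples: one heavy-weight outlier drives the skew.
gains = [1.1, 1.2, 0.9, 1.0, 1.3, 1.1, 4.8, 1.0, 1.2, 1.1]
lo, hi = bootstrap_ci(gains, statistics.mean)
print(lo, hi)  # interval is asymmetric around the mean
```

Because the replicate distribution is skewed by the outlier, the percentile interval is asymmetric, which is exactly the situation in which a normal-theory (or F-distribution) interval can misestimate the coverage.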
Tsai, Yung-Yu; Ohashi, Takao; Kanazawa, Takenori; Polburee, Pirapan; Misaki, Ryo; Limtong, Savitree; Fujiyama, Kazuhito
2017-05-01
Rhodosporidium toruloides DMKU3-TK16 (TK16), a basidiomycetous yeast isolated in Thailand, can produce a large amount of oil, corresponding to approximately 70% of its dry cell weight. However, the lack of an efficient transformation method makes further genetic manipulation of this organism difficult. We here developed a new transformation system for R. toruloides using a lithium acetate method with the Sh ble gene as a selective marker under the control of the R. toruloides ATCC 10657 GPD1 promoter. A linear DNA fragment containing the Sh ble gene expression cassette was integrated into the genome, and its integration was confirmed by colony PCR and Southern blot. We then optimized the parameters affecting the transformation efficiency, such as the amount of linear DNA, the growth phase, the incubation time in the transformation mixture, the heat shock temperature, the addition of DMSO and carrier DNA, and the recovery incubation time. With the developed method, a transformation efficiency of approximately 25 transformants/μg DNA was achieved, a 417-fold enhancement over the initial trial. We further demonstrated heterologous production of EGFP in TK16 by microscopic observation and immunoblot analysis, and used the technique to disrupt the endogenous URA3 gene. The newly developed method is simple and time-saving, making it useful for efficiently introducing exogenous genes into R. toruloides strains. Accordingly, this new practical approach should facilitate molecular manipulation, such as target gene introduction and deletion, of TK16 and other R. toruloides strains as a major source of biodiesel.
Efficient quantum algorithm for computing n-time correlation functions.
Pedernales, J S; Di Candia, R; Egusquiza, I L; Casanova, J; Solano, E
2014-07-11
We propose a method for computing n-time correlation functions of arbitrary spinorial, fermionic, and bosonic operators, consisting of an efficient quantum algorithm that encodes these correlations in an initially added ancillary qubit for probe and control tasks. For spinorial and fermionic systems, the reconstruction of arbitrary n-time correlation functions requires the measurement of two ancilla observables, while for bosonic variables time derivatives of the same observables are needed. Finally, we provide examples applicable to different quantum platforms in the framework of linear response theory.
Puett, Chloe; Salpéteur, Cécile; Houngbe, Freddy; Martínez, Karen; N'Diaye, Dieynaba S; Tonguet-Papucci, Audrey
2018-01-01
This study assessed the costs and cost-efficiency of a mobile cash transfer implemented in Tapoa Province, Burkina Faso in the MAM'Out randomized controlled trial from June 2013 to December 2014, using mixed methods and taking a societal perspective by including costs to implementing partners and beneficiary households. Data were collected via interviews with implementing staff from the humanitarian agency and the private partner delivering the mobile money, focus group discussions with beneficiaries, and review of accounting databases. Costs were analyzed by input category and activity-based cost centers. Cost-efficiency was analyzed by cost-transfer ratios (CTR) and cost per beneficiary. Qualitative analysis was conducted to identify themes related to implementing electronic cash transfers, and barriers to efficient implementation. The CTR was 0.82 from a societal perspective, within the same range as other humanitarian transfer programs; however, the intervention did not achieve the same degree of cost-efficiency as other mobile transfer programs specifically. Challenges in coordination between humanitarian and private partners resulted in long wait times for beneficiaries, particularly in the first year of implementation. Sensitivity analyses indicated a potential 6% reduction in CTR through reducing beneficiary wait time by one-half. Actors reported that coordination challenges improved during the project; therefore, inefficiencies would likely be resolved, and cost-efficiency improved, as the program passed the pilot phase. Despite the time required to establish trusting relationships among actors, and to set up a network of cash points in remote areas, this analysis showed that mobile transfers hold promise as a cost-efficient method of delivering cash in this setting.
Implementation by local government would likely reduce costs greatly compared to those found in this study context, and improve cost-efficiency especially by subsidizing expansion of mobile money network coverage and increasing cash distribution points in remote areas which are unprofitable for private partners.
Supercomputing Aspects for Simulating Incompressible Flow
NASA Technical Reports Server (NTRS)
Kwak, Dochan; Kiris, Cetin C.
2000-01-01
The primary objective of this research is to support the design of liquid rocket systems for the Advanced Space Transportation System. Since the space launch systems of the near future are likely to rely on liquid rocket engines, increasing the efficiency and reliability of the engine components is an important task. One of the major problems in the liquid rocket engine is understanding the fluid dynamics of fuel and oxidizer flows from the fuel tank to the plume. Understanding the flow through the entire turbo-pump geometry through numerical simulation will be of significant value toward design. One of the milestones of this effort is to develop, apply and demonstrate the capability and accuracy of 3D CFD methods as efficient design analysis tools on high performance computer platforms. The development of the Message Passing Interface (MPI) and Multi-Level Parallel (MLP) versions of the INS3D code is currently underway. The serial version of the INS3D code is a multidimensional incompressible Navier-Stokes solver based on overset grid technology. INS3D-MPI is based on explicit message passing across processors and is primarily suited for distributed-memory systems. INS3D-MLP is based on the multi-level parallel method and is suitable for distributed-shared-memory systems. For the entire turbo-pump simulations, moving-boundary capability and efficient time-accurate integration methods are built into the flow solver. To handle the geometric complexity and moving-boundary problems, an overset grid scheme is incorporated with the solver so that new connectivity data are obtained at each time step. The Chimera overlapped grid scheme allows subdomains to move relative to each other, and provides great flexibility when the boundary movement creates large displacements.
Two numerical procedures, one based on the artificial compressibility method and the other on the pressure projection method, are outlined for obtaining time-accurate solutions of the incompressible Navier-Stokes equations. The performance of the two methods is compared by obtaining unsteady solutions for the evolution of twin vortices behind a flat plate. Calculated results are compared with experimental and other numerical results. For an unsteady flow, which requires a small physical time step, the pressure projection method was found to be computationally efficient, since it does not require any subiteration procedure. It was observed that the artificial compressibility method requires a fast convergence scheme at each physical time step in order to satisfy the incompressibility condition. This was obtained by using a GMRES-ILU(0) solver in the present computations. When a line-relaxation scheme was used, the time accuracy was degraded and time-accurate computations became very expensive.
General linear methods and friends: Toward efficient solutions of multiphysics problems
NASA Astrophysics Data System (ADS)
Sandu, Adrian
2017-07-01
Time-dependent multiphysics partial differential equations are of great practical importance, as they model diverse phenomena that appear in mechanical and chemical engineering, aeronautics, astrophysics, meteorology and oceanography, financial modeling, environmental sciences, etc. There is no single best time discretization for the complex multiphysics systems of practical interest. We discuss "multimethod" approaches that combine different time steps and discretizations using the rigorous frameworks provided by Partitioned General Linear Methods and Generalized-Structure Additive Runge-Kutta methods.
A Q-Ising model application for linear-time image segmentation
NASA Astrophysics Data System (ADS)
Bentrem, Frank W.
2010-10-01
A computational method is presented which efficiently segments digital grayscale images by directly applying the Q-state Ising (or Potts) model. Since the Potts model was first proposed in 1952, physicists have studied lattice models to gain deep insights into magnetism and other disordered systems. For some time, researchers have realized that digital images may be modeled in much the same way as these physical systems (i.e., as a square lattice of numerical values). A major drawback of Potts model methods for image segmentation is that conventional methods require exponential processing time. Advances have been made via certain approximations to reduce the segmentation process to power-law time. However, in many applications (such as sonar imagery), real-time processing requires much greater efficiency. This article describes an energy minimization technique that applies four Potts (Q-Ising) models directly to the image and processes in linear time. The result is analogous to partitioning the system into regions of four classes of magnetism. This direct Potts segmentation technique is demonstrated on photographic, medical, and acoustic images.
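As a rough illustration of Potts-based segmentation (not the article's exact linear-time algorithm), a Q-state Potts energy with a data term and a neighbor-agreement term can be minimized by iterated conditional modes (ICM), where each full sweep costs time linear in the number of pixels; the quantile initialization and parameter values below are illustrative assumptions:

```python
import numpy as np

def potts_icm_segment(image, n_classes=4, beta=1.0, n_sweeps=5):
    """Segment a grayscale image by minimizing a Q-state Potts energy with
    iterated conditional modes (ICM); each sweep is linear in the number of
    pixels. A generic Potts/ICM sketch, not the article's exact algorithm."""
    # Initialize class means by evenly spaced quantiles of the intensities.
    levels = np.quantile(image, np.linspace(0.1, 0.9, n_classes))
    labels = np.argmin(np.abs(image[..., None] - levels), axis=-1)
    H, W = image.shape
    for _ in range(n_sweeps):
        for i in range(H):
            for j in range(W):
                # Data term: squared distance of the pixel to each class mean.
                cost = (image[i, j] - levels) ** 2
                # Potts term: penalty beta for each disagreeing neighbor.
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        cost += beta * (np.arange(n_classes) != labels[ni, nj])
                labels[i, j] = int(np.argmin(cost))
        # Re-estimate class means from the current labeling.
        for q in range(n_classes):
            if np.any(labels == q):
                levels[q] = image[labels == q].mean()
    return labels
```

On a clean two-region image, a few sweeps with two classes recover the regions exactly; real imagery would need the article's more efficient formulation.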
Quantum and electromagnetic propagation with the conjugate symmetric Lanczos method.
Acevedo, Ramiro; Lombardini, Richard; Turner, Matthew A; Kinsey, James L; Johnson, Bruce R
2008-02-14
The conjugate symmetric Lanczos (CSL) method is introduced for the solution of the time-dependent Schrodinger equation. This remarkably simple and efficient time-domain algorithm is a low-order polynomial expansion of the quantum propagator for time-independent Hamiltonians and derives from the time-reversal symmetry of the Schrodinger equation. The CSL algorithm gives forward solutions by simply complex conjugating backward polynomial expansion coefficients. Interestingly, the expansion coefficients are the same for each uniform time step, a fact that is only spoiled by basis incompleteness and finite precision. This is true for the Krylov basis and, with further investigation, is also found to be true for the Lanczos basis, important for efficient orthogonal projection-based algorithms. The CSL method errors roughly track those of the short iterative Lanczos method while requiring fewer matrix-vector products than the Chebyshev method. With the CSL method, only a few vectors need to be stored at a time, there is no need to estimate the Hamiltonian spectral range, and only matrix-vector and vector-vector products are required. Applications using localized wavelet bases are made to harmonic oscillator and anharmonic Morse oscillator systems as well as electrodynamic pulse propagation using the Hamiltonian form of Maxwell's equations. For gold with a Drude dielectric function, the latter is non-Hermitian, requiring consideration of corrections to the CSL algorithm.
Li, Rundong; Li, Yanlong; Yang, Tianhua; Wang, Lei; Wang, Weiyun
2015-05-30
Evaluations of technologies for heavy metal control mainly examine the residual and leaching rates of a single heavy metal, so the resulting evaluation methods lack coordination and uniqueness and are therefore unsuitable for evaluating hazard control effects. An overall pollution toxicity index (OPTI) was established in this paper; based on the developed index, an integrated evaluation method for heavy metal pollution control was established. Application of this method to the melting and sintering of fly ash revealed the following results: the integrated control efficiency of the melting process was higher in all instances than that of the sintering process. The lowest integrated control efficiency of melting was 56.2%, and the highest integrated control efficiency of sintering was 46.6%. Using the same technology, higher integrated control efficiencies were all achieved at lower temperatures and shorter times. This study demonstrated the unification and consistency of this method. Copyright © 2015 Elsevier B.V. All rights reserved.
Bai, Yalong; Cui, Yan; Paoli, George C; Shi, Chunlei; Wang, Dapeng; Zhou, Min; Zhang, Lida; Shi, Xianming
2016-09-01
Magnetic separation has great advantages over traditional bio-separation methods and has become popular in the development of methods for the detection of bacterial pathogens, viruses, and transgenic crops. Functionalization of magnetic nanoparticles is a key factor for efficient capture of target analytes. In this paper, we report the synthesis of amino-rich silica-coated magnetic nanoparticles using a one-pot method. This type of magnetic nanoparticle has a rough surface and a higher density of amino groups than nanoparticles prepared by a post-modification method. Furthermore, the results of hydrochloric acid treatment indicated that the magnetic nanoparticles were stably coated. The developed amino-rich silica-coated magnetic nanoparticles were used to directly adsorb DNA. After magnetic separation and blocking, the magnetic nanoparticle-DNA complexes were used directly for the polymerase chain reaction (PCR), without onerous and time-consuming purification and elution steps. The results of real-time quantitative PCR showed that the nanoparticles with higher amino group density yielded improved DNA capture efficiency. These results suggest that amino-rich silica-coated magnetic nanoparticles hold great potential for efficient bio-separation of DNA prior to detection by PCR. Copyright © 2016. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Mohan Negi, Lalit; Jaggi, Manu; Talegaonkar, Sushama
2013-01-01
Development of an effective formulation involves careful optimization of a number of excipient and process variables. Sometimes the number of variables is so large that even the most efficient optimization designs require a very large number of trials, which strains both costs and time. A creative combination of several design methods leads to a smaller number of trials. This study aimed to develop nanostructured lipid carriers (NLCs) using a combination of different optimization methods. A total of 11 variables were first screened using the Plackett-Burman design for their effects on formulation characteristics such as size and entrapment efficiency. Four of the 11 variables were found to have insignificant effects on the formulation parameters and hence were screened out. Of the remaining seven variables, four (concentrations of tween-80, lecithin, sodium taurocholate, and total lipid) were found to have significant effects on the size of the particles, while the other three (phase ratio, drug-to-lipid ratio, and sonication time) had a greater influence on entrapment efficiency. The first four variables were optimized for their effect on size using the Taguchi L9 orthogonal array. The optimized values of the surfactants and lipids were kept constant for the next stage, where the sonication time, phase ratio, and drug-to-lipid ratio were varied using the Box-Behnken response surface design to optimize entrapment efficiency. Finally, by performing only 38 trials, we optimized 11 variables for the development of NLCs with a size of 143.52 ± 1.2 nm, zeta potential of -32.6 ± 0.54 mV, and 98.22 ± 2.06% entrapment efficiency.
Paraskevopoulou, Sivylla E; Barsakcioglu, Deren Y; Saberi, Mohammed R; Eftekhar, Amir; Constandinou, Timothy G
2013-04-30
Next-generation neural interfaces aspire to achieve real-time multi-channel systems by integrating spike sorting on chip to overcome limitations in communication channel capacity. The feasibility of this approach relies on developing highly efficient algorithms for feature extraction and clustering with the potential for low-power hardware implementation. We propose a feature extraction method, requiring no calibration, based on first- and second-derivative features of the spike waveform. The accuracy and computational complexity of the proposed method are quantified and compared against commonly used feature extraction methods, through simulation across four datasets (with different single units) at multiple noise levels (ranging from 5 to 20% of the signal amplitude). The average classification error is shown to be below 7% with a computational complexity of 2N-3, where N is the number of sample points of each spike. Overall, this method presents a good trade-off between accuracy and computational complexity and is thus particularly well-suited for hardware-efficient implementation. Copyright © 2013 Elsevier B.V. All rights reserved.
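As a minimal sketch of this kind of feature vector (assuming a uniformly sampled waveform; this is not the authors' code), the first and second finite differences of an N-sample spike yield exactly (N-1) + (N-2) = 2N-3 values, matching the stated complexity:

```python
import numpy as np

def derivative_features(spike):
    """First- and second-derivative features of a spike waveform, in the
    spirit of the calibration-free method described above. For N samples
    this yields (N-1) + (N-2) = 2N-3 values."""
    d1 = np.diff(spike)        # first derivative, N-1 points
    d2 = np.diff(spike, n=2)   # second derivative, N-2 points
    return np.concatenate([d1, d2])
```

The feature vector would then feed a clustering stage; the simplicity (subtractions only) is what makes the method attractive for on-chip implementation.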
Genome editing of Ralstonia eutropha using an electroporation-based CRISPR-Cas9 technique.
Xiong, Bin; Li, Zhongkang; Liu, Li; Zhao, Dongdong; Zhang, Xueli; Bi, Changhao
2018-01-01
Ralstonia eutropha is an important bacterium for the study of polyhydroxyalkanoate (PHA) synthesis and CO2 fixation, which makes it a potential strain for industrial PHA production and an attractive host for CO2 conversion. Although the bacterium is not recalcitrant to genetic manipulation, current methods for genome editing based on group II introns or single-crossover integration of a suicide plasmid are inefficient and time-consuming, which limits the genetic engineering of this organism. Thus, developing an efficient and convenient method for R. eutropha genome editing is imperative. An efficient genome editing method for R. eutropha was developed using an electroporation-based CRISPR-Cas9 technique. In our study, the electroporation efficiency of R. eutropha was found to be limited by its restriction-modification (RM) systems. By searching for putative RM systems in R. eutropha H16 using the REBASE database and comparing with those in E. coli MG1655, five putative restriction endonuclease genes related to the RM systems in R. eutropha were predicted and disrupted. Deletion of H16_A0006 and H16_A0008-9 increased the electroporation efficiency 1658-fold and 4-fold, respectively. Fructose was found to reduce the leaky expression of the arabinose-inducible pBAD promoter, which was used to optimize the expression of cas9, enabling genome editing via homologous recombination based on CRISPR-Cas9 in R. eutropha. A total of five genes were edited, with efficiencies ranging from 78.3 to 100%. The CRISPR-Cpf1 system and the non-homologous end joining mechanism were also investigated, but failed to yield edited strains. We present the first genome editing method for R. eutropha using an electroporation-based CRISPR-Cas9 approach, which significantly increases the efficiency and decreases the time needed to manipulate this facultative chemolithoautotrophic microbe. The novel technique will facilitate more advanced research on, and applications of, R. eutropha for PHA production and CO2 conversion.
Advanced time integration algorithms for dislocation dynamics simulations of work hardening
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sills, Ryan B.; Aghaei, Amin; Cai, Wei
2016-04-25
Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high-order explicit method with time step subcycling and a newly developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank–Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly outperform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relative to traditional schemes. As a result, subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.
Ishihara, Koji; Morimoto, Jun
2018-03-01
Humans use multiple muscles to generate joint movements such as elbow motion. With multiple lightweight and compliant actuators, joint movements can also be generated efficiently. Similarly, robots can use multiple actuators to efficiently generate a one-degree-of-freedom movement. For this movement, the desired joint torque must be properly distributed to each actuator. One approach to this torque distribution problem is an optimal control method. However, solving the optimal control problem at each control time step has not been deemed practical due to its large computational burden. In this paper, we propose a computationally efficient method to derive an optimal control strategy for a hybrid actuation system composed of multiple actuators, where each actuator has different dynamical properties. We investigated a singularly perturbed system of the hybrid actuator model that subdivides the original large-scale control problem into smaller subproblems, so that the optimal control outputs for each actuator can be derived at each control time step, and applied the proposed method to our pneumatic-electric hybrid actuator system. Our method derived a torque distribution strategy for the hybrid actuator while dealing with the difficulty of solving real-time optimal control problems. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
High-speed extended-term time-domain simulation for online cascading analysis of power system
NASA Astrophysics Data System (ADS)
Fu, Chuan
A high-speed extended-term (HSET) time domain simulator (TDS), intended to become part of an energy management system (EMS), has been newly developed for use in online extended-term dynamic cascading analysis of power systems. HSET-TDS includes the following attributes for providing situational awareness of high-consequence events: (i) online analysis, including n-1 and n-k events, (ii) the ability to simulate both fast and slow dynamics 1-3 hours in advance, (iii) inclusion of rigorous protection-system modeling, (iv) intelligence for corrective action identification, storage, and fast retrieval, and (v) high-speed execution. Very fast online computational capability is the most desired attribute of this simulator. Based on the process of solving the differential-algebraic equations describing power system dynamics, HSET-TDS seeks computational efficiency at each of the following hierarchical levels: (i) hardware, (ii) strategies, (iii) integration methods, (iv) nonlinear solvers, and (v) linear solver libraries. This thesis first describes the Hammer-Hollingsworth 4 (HH4) implicit integration method. Like the trapezoidal rule, HH4 is symmetrically A-stable, but it possesses greater high-order precision (h^4) than the trapezoidal rule. Such precision enables larger integration steps and therefore improves simulation efficiency for variable step size implementations. This thesis provides the underlying theory on which we advocate use of HH4 over other numerical integration methods for power system time-domain simulation. Second, motivated by the need to perform high-speed extended-term time domain simulation (HSET-TDS) for online purposes, this thesis presents principles for designing numerical solvers of the differential-algebraic systems associated with power system time-domain simulation, including DAE construction strategies (Direct Solution Method), integration methods (HH4), nonlinear solvers (Very Dishonest Newton), and linear solvers (SuperLU).
We have implemented a design appropriate for HSET-TDS and compare it to various solvers, including the commercial-grade PSSE program, with respect to computational efficiency and accuracy, using as examples the New England 39-bus system, the expanded 8775-bus system, and the 13029-bus PJM system. Third, we have explored a stiffness-decoupling method, intended to be part of a parallel design of time-domain simulation software for supercomputers. The stiffness-decoupling method combines the advantages of implicit methods (A-stability) and explicit methods (less computation). With the new stiffness detection method proposed herein, the stiffness can be captured. The expanded 975-bus system is used to test simulation efficiency. Finally, several parallel strategies for supercomputer deployment to simulate power system dynamics are proposed and compared. Design A partitions the task by scale using the stiffness-decoupling method, waveform relaxation, and a parallel linear solver. Design B partitions the task along the time axis using a highly precise integration method, the order-8 Kuntzmann-Butcher method (KB8). The strategy of partitioning events is designed to partition the whole simulation along the time axis through a simulated sequence of cascading events. Among all the strategies proposed, partitioning cascading events is recommended, since the sub-tasks for each processor are totally independent and therefore require minimal communication time.
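The "Very Dishonest Newton" solver named above amounts to reusing a Jacobian (and, in practice, its factorization) across several Newton iterations or time steps, refreshing it only occasionally. A minimal generic sketch, with a periodic-refresh policy assumed for illustration:

```python
import numpy as np

def dishonest_newton(f, jac, x0, tol=1e-10, max_iter=50, refresh_every=5):
    """Newton iteration that reuses one Jacobian for several iterations
    ('dishonest' updates), refreshing it only periodically. A generic
    sketch of the idea, not the thesis' implementation."""
    x = np.asarray(x0, dtype=float)
    J = jac(x)
    for k in range(max_iter):
        if k > 0 and k % refresh_every == 0:
            J = jac(x)                  # occasional honest Jacobian refresh
        dx = np.linalg.solve(J, -f(x))  # reuse J (stale) between refreshes
        x = x + dx
        if np.linalg.norm(f(x)) < tol:
            break
    return x
```

Between refreshes the convergence is only linear, but each step skips the Jacobian evaluation and factorization, which is where the time goes in large power-system DAE solves.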
Efficient ICCG on a shared memory multiprocessor
NASA Technical Reports Server (NTRS)
Hammond, Steven W.; Schreiber, Robert
1989-01-01
Different approaches are discussed for exploiting parallelism in the ICCG (Incomplete Cholesky Conjugate Gradient) method for solving large sparse symmetric positive definite systems of equations on a shared memory parallel computer. Techniques for efficiently solving triangular systems and computing sparse matrix-vector products are explored. Three methods for scheduling the tasks in solving triangular systems are implemented on the Sequent Balance 21000. Sample problems that are representative of a large class of problems solved using iterative methods are used. We show that a static analysis to determine data dependences in the triangular solve can greatly improve its parallel efficiency. We also show that ignoring symmetry and storing the whole matrix can reduce solution time substantially.
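One standard outcome of such a static dependence analysis is a level schedule for the triangular solve: rows are grouped so that each group depends only on earlier groups and can be solved in parallel. A minimal sketch assuming a row-wise dependence list (this is a generic illustration, not the paper's Sequent Balance implementation):

```python
def level_schedule(rows):
    """Group the rows of a sparse lower-triangular system into 'levels' so
    that all rows within a level are mutually independent and can be solved
    in parallel. rows[i] lists the column indices j < i where L[i, j] is
    nonzero. A generic level-scheduling sketch."""
    level = [0] * len(rows)
    for i, deps in enumerate(rows):
        # Row i may start only after every row it reads from has finished.
        level[i] = 1 + max((level[j] for j in deps), default=-1)
    # Group rows by level for parallel dispatch.
    schedule = {}
    for i, lv in enumerate(level):
        schedule.setdefault(lv, []).append(i)
    return [schedule[lv] for lv in sorted(schedule)]
```

Each returned group can then be assigned to processors; the number of levels bounds the solve's critical path.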
Methods for Computationally Efficient Structured CFD Simulations of Complex Turbomachinery Flows
NASA Technical Reports Server (NTRS)
Herrick, Gregory P.; Chen, Jen-Ping
2012-01-01
This research presents more efficient computational methods by which to perform multi-block structured Computational Fluid Dynamics (CFD) simulations of turbomachinery, thus facilitating higher-fidelity solutions of complicated geometries and their associated flows. This computational framework offers flexibility in allocating resources to balance process count and wall-clock computation time, while facilitating research interests of simulating axial compressor stall inception with more complete gridding of the flow passages and rotor tip clearance regions than is typically practiced with structured codes. The paradigm presented herein facilitates CFD simulation of previously impractical geometries and flows. These methods are validated and demonstrate improved computational efficiency when applied to complicated geometries and flows.
Efficiencies of joint non-local update moves in Monte Carlo simulations of coarse-grained polymers
NASA Astrophysics Data System (ADS)
Austin, Kieran S.; Marenz, Martin; Janke, Wolfhard
2018-03-01
In this study, four update methods are compared in their performance in a Monte Carlo simulation of polymers in continuum space. The efficiencies of the update methods and combinations thereof are compared with the aid of the autocorrelation time at a fixed (optimal) acceptance ratio. Results are obtained for polymer lengths N = 14, 28 and 42 and temperatures below, at and above the collapse transition. In terms of autocorrelation, the optimal acceptance ratio is approximately 0.4. Furthermore, an overview is given of the step sizes of the update methods that correspond to this optimal acceptance ratio. This should serve as a guide for future studies that rely on efficient computer simulations.
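The autocorrelation-time comparison above can be sketched with a standard windowed estimator (Sokal's self-consistent truncation is assumed here for illustration; the study's exact estimator may differ):

```python
import numpy as np

def integrated_autocorrelation_time(x, c=5.0, max_lag=200):
    """Integrated autocorrelation time tau = 1/2 + sum_t rho(t), truncated
    with a self-consistent window (stop once t >= c * tau). A standard
    estimator sketch, not the paper's analysis code."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    m = min(n // 2, max_lag)
    # Empirical autocovariance up to lag m, then normalize to rho(t).
    acf = np.array([np.dot(x[: n - t], x[t:]) / (n - t) for t in range(m)])
    rho = acf / acf[0]
    tau = 0.5
    for t in range(1, m):
        tau += rho[t]
        if t >= c * tau:   # window grows with the estimate itself
            break
    return tau
```

The update move with the smallest tau decorrelates the chain fastest per sweep, which is the efficiency criterion used in the study.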
Moore, Tyler M.; Reise, Steven P.; Roalf, David R.; Satterthwaite, Theodore D.; Davatzikos, Christos; Bilker, Warren B.; Port, Allison M.; Jackson, Chad T.; Ruparel, Kosha; Savitt, Adam P.; Baron, Robert B.; Gur, Raquel E.; Gur, Ruben C.
2016-01-01
Traditional “paper-and-pencil” testing is imprecise in measuring speed and hence limited in assessing performance efficiency, but computerized testing permits precise measurement of itemwise response time. We present a method of scoring performance efficiency (combining information from accuracy and speed) at the item level. Using a community sample of 9,498 youths aged 8-21, we calculated item-level efficiency scores on four neurocognitive tests and compared the concurrent, convergent, discriminant, and predictive validity of these scores to simple averaging of standardized speed and accuracy scores. Concurrent validity was measured by the scores' ability to distinguish men from women and by their correlations with age; convergent and discriminant validity were measured by correlations with other scores inside and outside their neurocognitive domains; predictive validity was measured by correlations with brain volume in regions associated with the specific neurocognitive abilities. The results support the ability of itemwise efficiency scoring to detect signals as strong as those detected by standard efficiency scoring methods. We find no evidence of superior validity of the itemwise scores over traditional scores, but point out several advantages of the former. The itemwise efficiency scoring method shows promise as an alternative to standard efficiency scoring methods, with overall moderate support from tests of four different types of validity. This method allows the use of existing item analysis methods and provides the convenient ability to adjust the overall emphasis of accuracy versus speed in the efficiency score, thus adapting the scoring to the real-world demands the test aims to fulfill. PMID:26866796
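A hedged sketch of what an itemwise accuracy-plus-speed efficiency score with an adjustable emphasis might look like; the weighting and z-scoring below are illustrative assumptions, not the paper's scoring model:

```python
import numpy as np

def itemwise_efficiency(correct, rt, w=0.5):
    """Itemwise performance-efficiency scores combining accuracy and speed.
    Each item gets w * correctness plus (1 - w) * z-scored negated log
    response time, so w tunes the emphasis between accuracy (w -> 1) and
    speed (w -> 0). Illustrative sketch only; the paper's model may differ."""
    correct = np.asarray(correct, dtype=float)   # 1 = correct, 0 = incorrect
    log_rt = np.log(np.asarray(rt, dtype=float))
    z_speed = -(log_rt - log_rt.mean()) / log_rt.std()  # faster => larger
    return w * correct + (1.0 - w) * z_speed
```

The single weight `w` is the "convenient ability to adjust the overall emphasis of accuracy versus speed" the abstract describes, applied per item rather than per test.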
Real-Time and Post-Processed Orbit Determination and Positioning
NASA Technical Reports Server (NTRS)
Harvey, Nathaniel E. (Inventor); Lu, Wenwen (Inventor); Miller, Mark A. (Inventor); Bar-Sever, Yoaz E. (Inventor); Miller, Kevin J. (Inventor); Romans, Larry J. (Inventor); Dorsey, Angela R. (Inventor); Sibthorpe, Anthony J. (Inventor); Weiss, Jan P. (Inventor); Bertiger, William I. (Inventor);
2015-01-01
Novel methods and systems for the accurate and efficient processing of real-time and latent global navigation satellite systems (GNSS) data are described. Such methods and systems can perform orbit determination of GNSS satellites, orbit determination of satellites carrying GNSS receivers, positioning of GNSS receivers, and environmental monitoring with GNSS data.
Real-Time and Post-Processed Orbit Determination and Positioning
NASA Technical Reports Server (NTRS)
Bar-Sever, Yoaz E. (Inventor); Romans, Larry J. (Inventor); Weiss, Jan P. (Inventor); Gross, Jason (Inventor); Harvey, Nathaniel E. (Inventor); Lu, Wenwen (Inventor); Dorsey, Angela R. (Inventor); Miller, Mark A. (Inventor); Sibthorpe, Anthony J. (Inventor); Bertiger, William I. (Inventor);
2016-01-01
Novel methods and systems for the accurate and efficient processing of real-time and latent global navigation satellite systems (GNSS) data are described. Such methods and systems can perform orbit determination of GNSS satellites, orbit determination of satellites carrying GNSS receivers, positioning of GNSS receivers, and environmental monitoring with GNSS data.
Non-iterative Voltage Stability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makarov, Yuri V.; Vyakaranam, Bharat; Hou, Zhangshuan
2014-09-30
This report demonstrates promising capabilities and performance characteristics of the proposed method using several power systems models. The new method will help to develop a new generation of highly efficient tools suitable for real-time parallel implementation. The ultimate benefit obtained will be early detection of system instability and prevention of system blackouts in real time.
Thamareerat, N; Luadsong, A; Aschariyaphotha, N
2016-01-01
In this paper, we present a numerical scheme for solving the nonlinear time-fractional Navier-Stokes equations in two dimensions. We first employ the meshless local Petrov-Galerkin (MLPG) method based on a local weak formulation to form the system of discretized equations, and then approximate the time-fractional derivative, interpreted in the sense of Caputo, by a simple quadrature formula. The moving Kriging interpolation, which possesses the Kronecker delta property, is applied to construct the shape functions. This research aims to extend and further develop the applicability of the truly meshless MLPG method to the generalized incompressible Navier-Stokes equations. Two numerical examples are provided to illustrate the accuracy and efficiency of the proposed algorithm. Very good agreement between the numerically and analytically computed solutions is observed in the verification. The present MLPG method has proved its efficiency and reliability for solving the two-dimensional time-fractional Navier-Stokes equations arising in fluid dynamics, as well as several other problems in science and engineering.
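The "simple quadrature formula" for the Caputo derivative can be illustrated by the widely used L1 scheme (an assumption here; the paper does not spell out which quadrature it uses):

```python
import math
import numpy as np

def caputo_l1(u, dt, alpha):
    """L1 quadrature for the Caputo fractional derivative of order
    0 < alpha < 1 at the last grid point, given samples u[0..n] on a
    uniform grid of spacing dt. A standard simple quadrature; the
    authors' exact formula may differ."""
    n = len(u) - 1
    coef = dt ** (-alpha) / math.gamma(2.0 - alpha)
    # Weights b_k = (k+1)^(1-alpha) - k^(1-alpha), k = 0 .. n-1.
    b = np.array([(k + 1) ** (1 - alpha) - k ** (1 - alpha) for k in range(n)])
    du = np.diff(u)               # increments u[k+1] - u[k]
    # b_0 multiplies the most recent increment, hence the reversal.
    return coef * np.dot(b, du[::-1])
```

The scheme is exact for linear u(t): for u(t) = t the Caputo derivative is t^(1-alpha)/Gamma(2-alpha), which the quadrature reproduces by telescoping.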
NASA Technical Reports Server (NTRS)
Aftosmis, M. J.; Berger, M. J.; Murman, S. M.; Kwak, Dochan (Technical Monitor)
2002-01-01
The proposed paper will present recent extensions in the development of an efficient Euler solver for adaptively-refined Cartesian meshes with embedded boundaries. The paper will focus on extensions of the basic method to include solution adaptation, time-dependent flow simulation, and arbitrary rigid domain motion. The parallel multilevel method makes use of on-the-fly parallel domain decomposition to achieve extremely good scalability on large numbers of processors, and is coupled with an automatic coarse mesh generation algorithm for efficient processing by a multigrid smoother. Numerical results are presented demonstrating parallel speed-ups of up to 435 on 512 processors. Solution-based adaptation may be keyed off truncation error estimates using tau-extrapolation or a variety of feature-detection-based refinement parameters. The multigrid method is extended to time-dependent flows through the use of a dual-time approach. The extension to rigid domain motion uses an Arbitrary Lagrangian-Eulerian (ALE) formulation, and results will be presented for a variety of two- and three-dimensional example problems with both simple and complex geometry.
NASA Astrophysics Data System (ADS)
Cai, Xiaohui; Liu, Yang; Ren, Zhiming
2018-06-01
Reverse-time migration (RTM) is a powerful tool for imaging geologically complex structures such as steep-dip and subsalt features. However, its implementation is quite computationally expensive. Recently, as a low-cost solution, the graphics processing unit (GPU) was introduced to improve the efficiency of RTM. In this paper, we develop three ameliorative strategies to implement RTM on a GPU card. First, given the high accuracy and efficiency of the adaptive optimal finite-difference (FD) method based on least squares (LS) on the central processing unit (CPU), we study the optimal LS-based FD method on the GPU. Second, we adapt the CPU-based hybrid absorbing boundary condition (ABC) to the GPU by addressing two issues that arise in porting it: excessive run time and chaotic thread behavior. Third, for large-scale data, a combined strategy of optimal checkpointing and efficient boundary storage is introduced to balance memory use against recomputation. To hide the time spent communicating between host and disk, a portable operating system interface (POSIX) thread is used to dedicate another CPU core to this work at the checkpoints. Applications of the three strategies, implemented in the compute unified device architecture (CUDA) programming language, demonstrate their efficiency and validity for RTM.
NASA Astrophysics Data System (ADS)
Dai, Jun; Zhou, Haigang; Zhao, Shaoquan
2017-01-01
This paper considers a multi-scale futures hedge strategy that minimizes lower partial moments (LPM). To do this, wavelet analysis is adopted to decompose time series data into different components. Next, different parametric estimation methods with known distributions are applied to calculate the LPM of hedged portfolios, which is the key to determining multi-scale hedge ratios over different time scales. Then these parametric methods are compared with the prevailing nonparametric kernel metric method. Empirical results indicate that in the China Securities Index 300 (CSI 300) index futures and spot markets, hedge ratios and hedge efficiency estimated by the nonparametric kernel metric method are inferior to those estimated by parametric hedging models based on the features of sequence distributions. In addition, if minimum-LPM is selected as the hedge target, the hedging periods, degree of risk aversion, and target returns can each affect the multi-scale hedge ratios and hedge efficiency.
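The empirical lower partial moment and a minimum-LPM hedge ratio can be computed directly from return series; the brute-force grid search below is an illustrative stand-in for the parametric and kernel estimators the paper compares, with made-up data and grid bounds.

```python
def lpm(returns, target=0.0, order=2):
    """Empirical lower partial moment of a given order about a target return."""
    m = len(returns)
    return sum(max(target - r, 0.0) ** order for r in returns) / m

def min_lpm_hedge_ratio(spot, fut, target=0.0, order=2, grid=None):
    """Grid-search the hedge ratio h minimizing the LPM of the hedged
    portfolio spot - h*fut.  Illustrative only; the paper estimates this
    ratio parametrically at each wavelet scale."""
    if grid is None:
        grid = [i / 100.0 for i in range(0, 201)]  # h in [0, 2]
    return min(grid, key=lambda h: lpm(
        [s - h * f for s, f in zip(spot, fut)], target, order))
```

With perfectly correlated spot and futures returns the search recovers the full hedge h = 1, which removes all downside risk.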
An observational assessment method for aging laboratory rats
The growth of the aging population highlights the need for laboratory animal models to study the basic biological processes of aging and susceptibility to toxic chemicals and disease. Methods to evaluate the health of aging animals over time are needed, especially efficient methods for...
Crawford, Forrest W.; Suchard, Marc A.
2011-01-01
A birth-death process is a continuous-time Markov chain that counts the number of particles in a system over time. In the general process with n current particles, a new particle is born with instantaneous rate λn and a particle dies with instantaneous rate μn. Currently no robust and efficient method exists to evaluate the finite-time transition probabilities in a general birth-death process with arbitrary birth and death rates. In this paper, we first revisit the theory of continued fractions to obtain expressions for the Laplace transforms of these transition probabilities and make explicit an important derivation connecting transition probabilities and continued fractions. We then develop an efficient algorithm for computing these probabilities that analyzes the error associated with approximations in the method. We demonstrate that this error-controlled method agrees with known solutions and outperforms previous approaches to computing these probabilities. Finally, we apply our novel method to several important problems in ecology, evolution, and genetics. PMID:21984359
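The continued-fraction algorithm itself is involved, but the finite-time transition probabilities it targets can be cross-checked on a truncated state space by uniformization; the sketch below uses assumed rate functions and an assumed truncation level, and is a generic alternative, not the authors' method.

```python
import math

def bd_transition_probs(birth, death, m, t, nmax=200, tol=1e-10):
    """P(X(t) = n | X(0) = m) for a general birth-death process with rates
    birth(n), death(n), computed by uniformization on states {0,...,nmax}."""
    lam = max(birth(n) + death(n) for n in range(nmax + 1)) * 1.05

    def step(p):
        # one step of the uniformized discrete-time chain
        q = [0.0] * (nmax + 1)
        for n in range(nmax + 1):
            b = birth(n) if n < nmax else 0.0
            d = death(n) if n > 0 else 0.0
            q[n] += p[n] * (1.0 - (b + d) / lam)
            if n < nmax:
                q[n + 1] += p[n] * b / lam
            if n > 0:
                q[n - 1] += p[n] * d / lam
        return q

    p = [0.0] * (nmax + 1)
    p[m] = 1.0
    out = [0.0] * (nmax + 1)
    weight = math.exp(-lam * t)   # Poisson(k = 0) term
    k, acc = 0, 0.0
    while acc < 1.0 - tol and k < 100000:
        for n in range(nmax + 1):
            out[n] += weight * p[n]
        acc += weight
        k += 1
        p = step(p)
        weight *= lam * t / k
    return out
```

For a pure birth (Poisson) process with unit rate, the result matches the known Poisson distribution of X(t).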
NASA Astrophysics Data System (ADS)
Wang, Xiaoqiang; Ju, Lili; Du, Qiang
2016-07-01
The Willmore flow formulated by phase field dynamics based on the elastic bending energy model has been widely used to describe the shape transformation of biological lipid vesicles. In this paper, we develop and investigate some efficient and stable numerical methods for simulating the unconstrained phase field Willmore dynamics and the phase field Willmore dynamics with fixed volume and surface area constraints. The proposed methods can be high-order accurate and are completely explicit in nature, by combining exponential time differencing Runge-Kutta approximations for time integration with spectral discretizations for spatial operators on regular meshes. We also incorporate novel linear operator splitting techniques into the numerical schemes to improve the discrete energy stability. In order to avoid extra numerical instability brought by use of large penalty parameters in solving the constrained phase field Willmore dynamics problem, a modified augmented Lagrange multiplier approach is proposed and adopted. Various numerical experiments are performed to demonstrate accuracy and stability of the proposed methods.
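An exponential time differencing Runge-Kutta step of the kind combined here with spectral discretizations can be sketched for a scalar stiff ODE u' = c·u + F(u); this is the generic Cox-Matthews ETD2RK scheme, not the authors' full phase-field integrator.

```python
import math

def etdrk2(c, F, u0, h, nsteps):
    """Second-order exponential time differencing Runge-Kutta (ETD2RK)
    for u' = c*u + F(u), treating the stiff linear part c exactly."""
    e = math.exp(c * h)
    phi1 = (e - 1.0) / (c * h)                  # (e^{ch} - 1)/(ch)
    phi2 = (e - 1.0 - c * h) / (c * h) ** 2     # (e^{ch} - 1 - ch)/(ch)^2
    u = u0
    for _ in range(nsteps):
        a = e * u + h * phi1 * F(u)             # predictor (ETD1 step)
        u = a + h * phi2 * (F(a) - F(u))        # corrector
    return u
```

The linear part is integrated exactly, so stiffness in c imposes no step-size restriction; for the logistic equation u' = u - u² the scheme converges at second order to the analytic solution.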
Compressive Spectral Method for the Simulation of the Nonlinear Gravity Waves
Bayındır, Cihan
2016-01-01
In this paper an approach for decreasing the computational effort required for the spectral simulations of the fully nonlinear ocean waves is introduced. The proposed approach utilizes the compressive sampling algorithm and depends on the idea of using a smaller number of spectral components compared to the classical spectral method. After performing the time integration with a smaller number of spectral components and using the compressive sampling technique, it is shown that the ocean wave field can be reconstructed with a significantly better efficiency compared to the classical spectral method. For the sparse ocean wave model in the frequency domain, fully nonlinear ocean waves with a JONSWAP spectrum are considered. By implementation of a high-order spectral method it is shown that the proposed methodology can simulate the linear and the fully nonlinear ocean waves with negligible difference in the accuracy and with a great efficiency by reducing the computation time significantly, especially for large time evolutions. PMID:26911357
Zheng, Qianwang; Mikš-Krajnik, Marta; Yang, Yishan; Xu, Wang; Yuk, Hyun-Gyun
2014-09-01
Conventional culture detection methods are time consuming and labor-intensive. For this reason, an alternative rapid method combining real-time PCR and immunomagnetic separation (IMS) was investigated in this study to detect both healthy and heat-injured Salmonella Typhimurium on raw duck wings. Firstly, the IMS method was optimized by determining the capture efficiency of Dynabeads® on Salmonella cells on raw duck wings with different bead incubation (10, 30 and 60 min) and magnetic separation (3, 10 and 30 min) times. Secondly, three Taqman primer sets, Sal, invA and ttr, were evaluated to optimize the real-time PCR protocol by comparing five parameters: inclusivity, exclusivity, PCR efficiency, detection probability and limit of detection (LOD). Thirdly, the optimized real-time PCR, in combination with IMS (PCR-IMS) assay, was compared with a standard ISO and a real-time PCR (PCR) method by analyzing artificially inoculated raw duck wings with healthy and heat-injured Salmonella cells at 10^1 and 10^0 CFU/25 g. Finally, the optimized PCR-IMS assay was validated for Salmonella detection in naturally contaminated raw duck wing samples. Under optimal IMS conditions (30 min bead incubation and 3 min magnetic separation times), approximately 85 and 64% of S. Typhimurium cells were captured by Dynabeads® from pure culture and inoculated raw duck wings, respectively. Although Sal and ttr primers exhibited 100% inclusivity and exclusivity for 16 Salmonella spp. and 36 non-Salmonella strains, the Sal primer showed a lower LOD (10^3 CFU/ml) and higher PCR efficiency (94.1%) than the invA and ttr primers. Moreover, for Sal and invA primers, 100% detection probability on raw duck wings suspension was observed at 10^3 and 10^4 CFU/ml with and without IMS, respectively. Thus, the Sal primer was chosen for further experiments.
The optimized PCR-IMS method was significantly (P=0.0011) better at detecting healthy Salmonella cells after 7-h enrichment than the traditional PCR method. However, there was no significant difference between the two methods with a longer enrichment time (14 h). The diagnostic accuracy of PCR-IMS was shown to be 98.3% through the validation study. These results indicate that the optimized PCR-IMS method could provide a sensitive, specific and rapid detection method for Salmonella on raw duck wings, enabling 10-h detection. However, a longer enrichment time could be needed for resuscitation and reliable detection of heat-injured cells. Copyright © 2014 Elsevier B.V. All rights reserved.
Optimization methods and silicon solar cell numerical models
NASA Technical Reports Server (NTRS)
Girardini, K.
1986-01-01
The goal of this project is the development of an optimization algorithm for use with a solar cell model. It is possible to simultaneously vary design variables such as impurity concentrations, front junction depth, back junction depth, and cell thickness to maximize the predicted cell efficiency. An optimization algorithm has been developed and interfaced with the Solar Cell Analysis Program in 1 Dimension (SCAPID). SCAPID uses finite difference methods to solve the differential equations which, along with several relations from the physics of semiconductors, describe mathematically the operation of a solar cell. A major obstacle is that the numerical methods used in SCAPID require a significant amount of computer time, and during an optimization the model is called iteratively until the design variables converge to the value associated with the maximum efficiency. This problem has been alleviated by designing an optimization code specifically for use with numerically intensive simulations, to reduce the number of times the efficiency has to be calculated to achieve convergence to the optimal solution. Adapting SCAPID so that it could be called iteratively by the optimization code provided another means of reducing the CPU time required to complete an optimization. Instead of calculating the entire I-V curve, as is usually done in SCAPID, only the efficiency is calculated (maximum power voltage and current) and the solution from previous calculations is used to initiate the next solution.
Kim, Eunwoo; Lee, Minsik; Choi, Chong-Ho; Kwak, Nojun; Oh, Songhwai
2015-02-01
Low-rank matrix approximation plays an important role in the area of computer vision and image processing. Most of the conventional low-rank matrix approximation methods are based on the l2-norm (Frobenius norm) with principal component analysis (PCA) being the most popular among them. However, this can give a poor approximation for data contaminated by outliers (including missing data), because the l2-norm exaggerates the negative effect of outliers. Recently, to overcome this problem, various methods based on the l1-norm, such as robust PCA methods, have been proposed for low-rank matrix approximation. Despite the robustness of the methods, they require heavy computational effort and substantial memory for high-dimensional data, which is impractical for real-world problems. In this paper, we propose two efficient low-rank factorization methods based on the l1-norm that find proper projection and coefficient matrices using the alternating rectified gradient method. The proposed methods are applied to a number of low-rank matrix approximation problems to demonstrate their efficiency and robustness. The experimental results show that our proposals are efficient in both execution time and reconstruction performance unlike other state-of-the-art methods.
Mondal, Nagendra Nath
2009-01-01
This study presents Monte Carlo simulation (MCS) results for the detection efficiencies, spatial resolutions and resolving powers of time-of-flight (TOF) PET detector systems. Cerium-activated lutetium oxyorthosilicate (Lu2SiO5:Ce, in short LSO), barium fluoride (BaF2) and BriLanCe 380 (cerium-doped lanthanum tri-bromide, in short LaBr3) scintillation crystals are studied in view of their good time and energy resolutions and shorter decay times. The results of MCS based on GEANT show that the spatial resolution, detection efficiency and resolving power of LSO are better than those of BaF2 and LaBr3, although it possesses inferior time and energy resolutions. Instead of the conventional position reconstruction method, the image reconstruction method established in previous work is applied to produce high-quality images. Validation is an important step to ensure that this imaging method fulfills all the purposes of its motivation, carried out here by reconstructing images of two tumors in a brain phantom. PMID:20098551
T-L Plane Abstraction-Based Energy-Efficient Real-Time Scheduling for Multi-Core Wireless Sensors
Kim, Youngmin; Lee, Ki-Seong; Pham, Ngoc-Son; Lee, Sun-Ro; Lee, Chan-Gun
2016-01-01
Energy efficiency is considered as a critical requirement for wireless sensor networks. As more wireless sensor nodes are equipped with multi-cores, there are emerging needs for energy-efficient real-time scheduling algorithms. The T-L plane-based scheme is known to be an optimal global scheduling technique for periodic real-time tasks on multi-cores. Unfortunately, there has been a scarcity of studies on extending T-L plane-based scheduling algorithms to exploit energy-saving techniques. In this paper, we propose a new T-L plane-based algorithm enabling energy-efficient real-time scheduling on multi-core sensor nodes with dynamic power management (DPM). Our approach addresses the overhead of processor mode transitions and reduces fragmentations of the idle time, which are inherent in T-L plane-based algorithms. Our experimental results show the effectiveness of the proposed algorithm compared to other energy-aware scheduling methods on T-L plane abstraction. PMID:27399722
Sorokin, Anatoly; Selkov, Gene; Goryanin, Igor
2012-07-16
The volume of the experimentally measured time series data is rapidly growing, while storage solutions offering better data types than simple arrays of numbers or opaque blobs for keeping series data are sorely lacking. A number of indexing methods have been proposed to provide efficient access to time series data, but none has so far been integrated into a tried-and-proven database system. To explore the possibility of such integration, we have developed a data type for time series storage in PostgreSQL, an object-relational database system, and equipped it with an access method based on SAX (Symbolic Aggregate approXimation). This new data type has been successfully tested in a database supporting a large-scale plant gene expression experiment, and it was additionally tested on a very large set of simulated time series data. Copyright © 2011 Elsevier B.V. All rights reserved.
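SAX itself reduces a z-normalized series to piecewise segment means and then to symbols via equiprobable Gaussian breakpoints; the sketch below shows only that generic symbolization, since the internals of the PostgreSQL data type are not described here.

```python
import math

def sax(series, word_len, alphabet_size):
    """Convert a numeric time series to a SAX word: z-normalize, reduce to
    word_len segment means (PAA), then map each mean to a letter using
    equiprobable Gaussian breakpoints.  Assumes word_len divides len(series)."""
    n = len(series)
    mu = sum(series) / n
    sd = math.sqrt(sum((x - mu) ** 2 for x in series) / n) or 1.0
    z = [(x - mu) / sd for x in series]
    seg = n // word_len
    # piecewise aggregate approximation (PAA)
    paa = [sum(z[i * seg:(i + 1) * seg]) / seg for i in range(word_len)]

    def norm_ppf(p):
        # inverse standard normal CDF by bisection
        lo, hi = -10.0, 10.0
        for _ in range(80):
            mid = (lo + hi) / 2
            if 0.5 * (1 + math.erf(mid / math.sqrt(2))) < p:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    cuts = [norm_ppf(i / alphabet_size) for i in range(1, alphabet_size)]
    letters = "abcdefghijklmnopqrstuvwxyz"
    return "".join(letters[sum(1 for c in cuts if v > c)] for v in paa)
```

A steadily increasing series maps to a nondecreasing word, e.g. a 16-point ramp with word length 4 and alphabet size 4 yields "abcd".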
Computer-based learning: interleaving whole and sectional representation of neuroanatomy.
Pani, John R; Chariker, Julia H; Naaz, Farah
2013-01-01
The large volume of material to be learned in biomedical disciplines requires optimizing the efficiency of instruction. In prior work with computer-based instruction of neuroanatomy, it was relatively efficient for learners to master whole anatomy and then transfer to learning sectional anatomy. It may, however, be more efficient to continuously integrate learning of whole and sectional anatomy. A study of computer-based learning of neuroanatomy was conducted to compare a basic transfer paradigm for learning whole and sectional neuroanatomy with a method in which the two forms of representation were interleaved (alternated). For all experimental groups, interactive computer programs supported an approach to instruction called adaptive exploration. Each learning trial consisted of time-limited exploration of neuroanatomy, self-timed testing, and graphical feedback. The primary result of this study was that interleaved learning of whole and sectional neuroanatomy was more efficient than the basic transfer method, without cost to long-term retention or generalization of knowledge to recognizing new images (Visible Human and MRI). Copyright © 2012 American Association of Anatomists.
Printing method for organic light emitting device lighting
NASA Astrophysics Data System (ADS)
Ki, Hyun Chul; Kim, Seon Hoon; Kim, Doo-Gun; Kim, Tae-Un; Kim, Snag-Gi; Hong, Kyung-Jin; So, Soon-Yeol
2013-03-01
An organic light-emitting device (OLED) converts electrical energy into light when an electric field is applied to the organic material. OLEDs are currently employed as light sources for lighting because research has extensively improved their luminance, efficiency, and lifetime. OLEDs are also widely used in flat-panel displays because of a simple manufacturing process and high emission efficiency. Most OLED lighting projects, however, have used vacuum (thermal) evaporation of low-molecular-weight materials. Although printing methods yield lower OLED efficiency and lifetime than vacuum evaporation, printed-OLED projects are actively progressing because printing can be combined with flexible substrates; printing technologies include ink-jet, screen printing, and slot coating. These printing methods allow low-cost, mass-production techniques and large substrates. In this research, we propose inkjet printing as the method of thick-film deposition for organic light-emitting devices because of its low cost and simple processing. The fabrication of a passive-matrix OLED is achieved by inkjet printing using a polymer phosphorescent ink, and the optical and electrical characteristics of the OLED are measured.
Comparing multiple imputation methods for systematically missing subject-level data.
Kline, David; Andridge, Rebecca; Kaizar, Eloise
2017-06-01
When conducting research synthesis, the studies to be combined often do not measure the same set of variables, which creates missing data. When the studies to combine are longitudinal, missing data can occur on the observation-level (time-varying) or the subject-level (non-time-varying). Traditionally, the focus of missing data methods for longitudinal data has been on missing observation-level variables. In this paper, we focus on missing subject-level variables and compare two multiple imputation approaches: a joint modeling approach and a sequential conditional modeling approach. We find the joint modeling approach to be preferable to the sequential conditional approach, except when the covariance structure of the repeated outcome for each individual has homogenous variance and exchangeable correlation. Specifically, the regression coefficient estimates from an analysis incorporating imputed values based on the sequential conditional method are attenuated and less efficient than those from the joint method. Remarkably, the estimates from the sequential conditional method are often less efficient than a complete case analysis, which, in the context of research synthesis, implies that we lose efficiency by combining studies. Copyright © 2015 John Wiley & Sons, Ltd.
De Geeter, Nele; Crevecoeur, Guillaume; Dupre, Luc
2011-02-01
In many important bioelectromagnetic problem settings, eddy-current simulations are required. Examples are the reduction of eddy-current artifacts in magnetic resonance imaging and techniques whereby the eddy currents interact with the biological system, like the alteration of the neurophysiology due to transcranial magnetic stimulation (TMS). TMS has become an important tool for the diagnosis and treatment of neurological diseases and psychiatric disorders. A widely applied method for simulating the eddy currents is the impedance method (IM). However, this method has to contend with an ill-conditioned problem and consequently a long convergence time. When dealing with optimal design problems and sensitivity control, the convergence rate becomes even more crucial since the eddy-current solver needs to be evaluated in an iterative loop. Therefore, we introduce an independent IM (IIM), which improves the conditioning and speeds up the numerical convergence. This paper shows how IIM is based on IM and describes its advantages. Moreover, the method is applied to the efficient simulation of TMS. The proposed IIM achieves superior convergence properties with high time efficiency, compared to the traditional IM, and is therefore a useful tool for accurate and fast TMS simulations.
Shariat, M H; Gazor, S; Redfearn, D
2015-08-01
Atrial fibrillation (AF), the most common sustained cardiac arrhythmia, is an extremely costly public health problem. Catheter-based ablation is a common minimally invasive procedure to treat AF. Contemporary mapping methods are highly dependent on the accuracy of anatomic localization of rotor sources within the atria. In this paper, using simulated atrial intracardiac electrograms (IEGMs) during AF, we propose a computationally efficient method for localizing the tip of the electrical rotor with an Archimedean/arithmetic spiral wavefront. The proposed method deploys the locations of electrodes of a catheter and their IEGMs activation times to estimate the unknown parameters of the spiral wavefront including its tip location. The proposed method is able to localize the spiral as soon as the wave hits three electrodes of the catheter. Our simulation results show that the method can efficiently localize the spiral wavefront that rotates either clockwise or counterclockwise.
Lim, Ji Young; Kim, Mi Ja; Park, Chang Gi
2011-08-01
Time-driven activity-based costing was applied to analyze the nursing activity cost and efficiency of a medical unit. Data were collected at a medical unit of a general hospital. Nursing activities were measured using a nursing activities inventory and classified into 6 domains using the Easley-Storfjell Instrument. Descriptive statistics were used to identify general characteristics of the unit, nursing activities and activity time, and a stochastic frontier model was adopted to estimate true activity time. The average efficiency of the medical unit using theoretical resource capacity was 77%, whereas the efficiency using practical resource capacity was 96%. According to these results, the portions of non-value-added time were estimated at 23% and 4%, respectively. Total nursing activity costs were estimated at 109,860,977 won under traditional activity-based costing and 84,427,126 won under time-driven activity-based costing, a difference of 25,433,851 won. These results indicate that time-driven activity-based costing provides useful and more realistic information about the efficiency of unit operation compared to traditional activity-based costing, so time-driven activity-based costing is recommended as a performance evaluation framework for nursing departments based on cost management.
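The time-driven calculation multiplies a single capacity cost rate (cost per minute of practical capacity) by each activity's unit time; the toy sketch below uses invented figures, not the unit's actual data.

```python
def tdabc_cost(cost_of_capacity, practical_minutes, activities):
    """Time-driven activity-based costing sketch: one capacity cost rate
    times the time of each activity; efficiency is used time over capacity.
    activities maps activity name -> minutes consumed."""
    rate = cost_of_capacity / practical_minutes   # cost per minute
    costs = {name: rate * minutes for name, minutes in activities.items()}
    efficiency = sum(activities.values()) / practical_minutes
    return costs, efficiency
```

With 1000 units of cost, 100 minutes of practical capacity, and 80 minutes of recorded activity, the rate is 10 per minute and efficiency is 80%.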
NASA Astrophysics Data System (ADS)
Ahn, Surl-Hee; Grate, Jay W.; Darve, Eric F.
2017-08-01
Molecular dynamics simulations are useful in obtaining thermodynamic and kinetic properties of bio-molecules, but they are limited by the time scale barrier. That is, we may not obtain properties efficiently because we need to run microsecond or longer simulations using femtosecond time steps. To overcome this time scale barrier, we can use the weighted ensemble (WE) method, a powerful enhanced sampling method that efficiently samples thermodynamic and kinetic properties. However, the WE method requires an appropriate partitioning of phase space into discrete macrostates, which can be problematic when we have a high-dimensional collective space or when little is known a priori about the molecular system. Hence, we developed a new WE-based method, called the "Concurrent Adaptive Sampling (CAS) algorithm," to tackle these issues. The CAS algorithm is not constrained to use only one or two collective variables, unlike most reaction coordinate-dependent methods. Instead, it can use a large number of collective variables and adaptive macrostates to enhance the sampling in the high-dimensional space. This is especially useful for systems in which we do not know what the right reaction coordinates are, in which case we can use many collective variables to sample conformations and pathways. In addition, a clustering technique based on the committor function is used to accelerate sampling the slowest process in the molecular system. In this paper, we introduce the new method and show results from two-dimensional models and bio-molecules, specifically penta-alanine and a triazine trimer.
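One common way to realize the WE idea is a resampling step that equalizes the number of walkers per macrostate bin while conserving total probability weight; the sketch below is a generic resampling variant with an assumed binning function, not the CAS algorithm's adaptive macrostates.

```python
import random

def we_resample(walkers, target_per_bin, binner, rng=random):
    """One weighted-ensemble resampling step.  walkers is a list of
    (state, weight) pairs; binner maps a state to a macrostate bin.
    Each occupied bin ends up with target_per_bin walkers, sampled in
    proportion to weight, and the bin's total weight is conserved."""
    bins = {}
    for state, w in walkers:
        bins.setdefault(binner(state), []).append((state, w))
    new = []
    for members in bins.values():
        wtot = sum(w for _, w in members)
        states = [s for s, _ in members]
        weights = [w for _, w in members]
        # sample proportionally to weight, then share the bin weight equally
        picks = rng.choices(states, weights=weights, k=target_per_bin)
        new.extend((s, wtot / target_per_bin) for s in picks)
    return new
```

Because each bin redistributes exactly its own weight, the total probability carried by the ensemble is unchanged regardless of which walkers are split or merged.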
Riesgo, Ana; Pérez-Porro, Alicia R; Carmona, Susana; Leys, Sally P; Giribet, Gonzalo
2012-03-01
Transcriptome sequencing with next-generation sequencing technologies has the potential for addressing many long-standing questions about the biology of sponges. Transcriptome sequence quality depends on good cDNA libraries, which requires high-quality mRNA. Standard protocols for preserving and isolating mRNA often require optimization for unusual tissue types. Our aim was to assess the efficiency of two preservation modes, (i) flash freezing with liquid nitrogen (LN₂) and (ii) immersion in RNAlater, for the recovery of high-quality mRNA from sponge tissues. We also tested whether the long-term storage of samples at -80 °C affects the quantity and quality of mRNA. We extracted mRNA from nine sponge species and analysed the quantity and quality (A260/230 and A260/280 ratios) of mRNA according to preservation method, storage time, and taxonomy. The quantity and quality of mRNA depended significantly on the preservation method used (LN₂ outperforming RNAlater), the sponge species, and the interaction between them. When the preservation was analysed in combination with either storage time or species, the quantity and A260/230 ratio were both significantly higher for LN₂-preserved samples. Interestingly, individual comparisons for each preservation method over time indicated that both methods performed equally efficiently during the first month, but RNAlater lost efficiency in storage times longer than 2 months compared with flash-frozen samples. In summary, we find that for long-term preservation of samples, flash freezing is the preferred method. If LN₂ is not available, RNAlater can be used, but mRNA extraction during the first month of storage is advised. © 2011 Blackwell Publishing Ltd.
A new OTDR based on probe frequency multiplexing
NASA Astrophysics Data System (ADS)
Lu, Lidong; Liang, Yun; Li, Binglin; Guo, Jinghong; Zhang, Xuping
2013-12-01
Two signal multiplexing methods are proposed and experimentally demonstrated in optical time domain reflectometry (OTDR) for fault location of optical fiber transmission lines to obtain high measurement efficiency. Probe signal multiplexing is obtained by phase modulation, generating multi-frequency and time-sequential-frequency probe pulses. The backscattered Rayleigh light of the multiplexed probe signals is transferred to corresponding heterodyne intermediate frequencies (IF) through heterodyning with the single-frequency local oscillator (LO). The IFs are then simultaneously acquired by a data acquisition card (DAQ) with a sampling rate of 100 Msps, and the obtained data are processed by a digital band pass filtering (BPF), digital down conversion (DDC) and digital low pass filtering (LPF) procedure. For each probe frequency of the detected signals, the extraction of the time domain reflected signal power is performed by a parallel computing method. For a comprehensive performance comparison with conventional coherent OTDR on the probe frequency multiplexing methods, the potential for enhancement of dynamic range, spatial resolution and measurement time is analyzed and discussed. Experimental results show that by use of the probe frequency multiplexing method, the measurement efficiency of coherent OTDR can be enhanced by nearly 40 times.
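The digital down-conversion chain (mix the IF signal to baseband with quadrature oscillators, low-pass filter, take the envelope) can be sketched as follows; the moving-average filter and all parameters are illustrative, not the authors' filter design.

```python
import math

def digital_down_convert(samples, fs, f_if, taps=101):
    """Digital down-conversion sketch: mix a real IF signal to baseband
    with quadrature oscillators, low-pass with a moving average, and
    return the recovered envelope (backscatter power would be its square)."""
    i_mix = [s * math.cos(2 * math.pi * f_if * k / fs)
             for k, s in enumerate(samples)]
    q_mix = [-s * math.sin(2 * math.pi * f_if * k / fs)
             for k, s in enumerate(samples)]

    def lowpass(x):
        # crude moving-average low-pass filter (growing window at startup)
        out = []
        for k in range(len(x)):
            w = x[max(0, k - taps + 1):k + 1]
            out.append(sum(w) / len(w))
        return out

    i_bb, q_bb = lowpass(i_mix), lowpass(q_mix)
    return [2 * math.hypot(i, q) for i, q in zip(i_bb, q_bb)]
```

Feeding in a unit-amplitude tone at the IF recovers an envelope of about 1 once the filter window is full.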
Automatic EEG spike detection.
Harner, Richard
2009-10-01
Since the 1970s, advances in science and technology during each succeeding decade have renewed the expectation of efficient, reliable automatic epileptiform spike detection (AESD). But even when reinforced with better, faster tools, clinically reliable unsupervised spike detection remains beyond our reach. Expert-selected spike parameters were the first and are still the most widely used for AESD. Thresholds for amplitude, duration, sharpness, rise-time, fall-time, after-coming slow waves, background frequency, and more have been used. It is still unclear which of these wave parameters are essential, beyond peak-peak amplitude and duration. Wavelet parameters are very appropriate to AESD but need to be combined with other parameters to achieve desired levels of spike detection efficiency. Artificial Neural Network (ANN) and expert-system methods may have reached peak efficiency. Support Vector Machine (SVM) technology focuses on outliers rather than centroids of spike and nonspike data clusters and should improve AESD efficiency. An exemplary spike/nonspike database is suggested as a tool for assessing parameters and methods for AESD and is available in CSV or Matlab formats from the author at brainvue@gmail.com. Exploratory Data Analysis (EDA) is presented as a graphic method for finding better spike parameters and for the step-wise evaluation of the spike detection process.
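A parameter-threshold detector of the expert-selected kind can be sketched with amplitude and duration criteria; the threshold values below are illustrative placeholders, not clinically validated settings.

```python
def detect_spikes(eeg, fs, amp_thresh=50.0, min_ms=20.0, max_ms=70.0):
    """Toy threshold-based spike detector: flag deflections from the mean
    baseline whose peak amplitude exceeds amp_thresh (signal units) and
    whose duration lies in [min_ms, max_ms].  Returns (start, end) sample
    index pairs."""
    base = sum(eeg) / len(eeg)
    spikes = []
    start = None
    for k, v in enumerate(list(eeg) + [base]):   # sentinel closes a final event
        if abs(v - base) > amp_thresh / 2:
            if start is None:
                start = k                        # event onset
        elif start is not None:
            dur_ms = (k - start) * 1000.0 / fs
            amp = max(abs(x - base) for x in eeg[start:k])
            if min_ms <= dur_ms <= max_ms and amp >= amp_thresh:
                spikes.append((start, k))
            start = None
    return spikes
```

A 30 ms, high-amplitude deflection is flagged, while slow waves fail the duration criterion; real detectors add sharpness, rise/fall-time, and background-frequency tests on top of this.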
An Efficient Pattern Mining Approach for Event Detection in Multivariate Temporal Data
Batal, Iyad; Cooper, Gregory; Fradkin, Dmitriy; Harrison, James; Moerchen, Fabian; Hauskrecht, Milos
2015-01-01
This work proposes a pattern mining approach to learn event detection models from complex multivariate temporal data, such as electronic health records. We present Recent Temporal Pattern mining, a novel approach for efficiently finding predictive patterns for event detection problems. This approach first converts the time series data into time-interval sequences of temporal abstractions. It then constructs more complex time-interval patterns backward in time using temporal operators. We also present the Minimal Predictive Recent Temporal Patterns framework for selecting a small set of predictive and non-spurious patterns. We apply our methods for predicting adverse medical events in real-world clinical data. The results demonstrate the benefits of our methods in learning accurate event detection models, which is a key step for developing intelligent patient monitoring and decision support systems. PMID:26752800
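The first step of the approach, converting a numeric time series into a time-interval sequence of temporal abstractions, can be sketched as follows. The low/normal/high value abstraction and the toy lab-value series are assumptions for illustration, not the paper's clinical data:

```python
def abstract_series(times, values, low, high):
    """Map a numeric time series to (state, start, end) intervals,
    merging consecutive samples that share the same abstract state."""
    def state(v):
        return 'low' if v < low else ('high' if v > high else 'normal')
    intervals = []
    for t, v in zip(times, values):
        s = state(v)
        if intervals and intervals[-1][0] == s:
            intervals[-1] = (s, intervals[-1][1], t)  # extend last interval
        else:
            intervals.append((s, t, t))               # open a new interval
    return intervals

# A hypothetical lab-value series sampled at times 0..4
print(abstract_series([0, 1, 2, 3, 4], [5, 6, 12, 13, 4], low=5.5, high=10))
# → [('low', 0, 0), ('normal', 1, 1), ('high', 2, 3), ('low', 4, 4)]
```

Pattern mining then operates on these interval sequences with temporal operators (e.g. before, co-occurs), extending candidate patterns backward in time as the abstract describes.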
NASA Astrophysics Data System (ADS)
Solimun, Fernandes, Adji Achmad Rinaldo; Arisoesilaningsih, Endang
2017-12-01
Research in various fields generally investigates systems that involve latent variables. One method to analyze a model representing such a system is path analysis. Latent variables measured using questionnaires with an attitude-scale model yield data in the form of scores, which should be transformed into scaled data before analysis. Path coefficients (the parameter estimators) are calculated from scaled data produced by the method of successive intervals (MSI) and the summated rating scale (SRS). This research identifies which data transformation method is better: path coefficients with smaller variances are said to be more efficient, so the transformation method whose scaled data yield path coefficients with smaller variances is considered the better one. Analysis of real data shows that, for the influence of the Attitude variable on Entrepreneurship Intention, the relative efficiency is RE = 1, indicating that analyses using the MSI and SRS transformations are equally efficient. For simulation data with high correlation between items (0.7-0.9), on the other hand, the MSI method is about 1.3 times more efficient than the SRS method.
An efficient numerical algorithm for transverse impact problems
NASA Technical Reports Server (NTRS)
Sankar, B. V.; Sun, C. T.
1985-01-01
Transverse impact problems in which the elastic and plastic indentation effects are considered involve a nonlinear integral equation for the contact force, which, in practice, is usually solved by an iterative scheme with small increments in time. In this paper, a numerical method is proposed wherein the iterations of the nonlinear problem are separated from the structural response computations. This makes the numerical procedures much simpler and more efficient. The proposed method is applied to some impact problems for which solutions are available, and the results are found to be in good agreement. The effect of the magnitude of the time increment on the results is also discussed.
Recommender engine for continuous-time quantum Monte Carlo methods
NASA Astrophysics Data System (ADS)
Huang, Li; Yang, Yi-feng; Wang, Lei
2017-03-01
Recommender systems play an essential role in the modern business world. They recommend favorable items such as books, movies, and search queries to users based on their past preferences. Applying similar ideas and techniques to Monte Carlo simulations of physical systems boosts their efficiency without sacrificing accuracy. Exploiting the quantum to classical mapping inherent in the continuous-time quantum Monte Carlo methods, we construct a classical molecular gas model to reproduce the quantum distributions. We then utilize powerful molecular simulation techniques to propose efficient quantum Monte Carlo updates. The recommender engine approach provides a general way to speed up the quantum impurity solvers.
Design of an Energy Efficient Hydraulic Regenerative Circuit
NASA Astrophysics Data System (ADS)
Ramesh, S.; Ashok, S. Denis; Nagaraj, Shanmukha; Adithyakumar, C. R.; Reddy, M. Lohith Kumar; Naulakha, Niranjan Kumar
2018-02-01
Increasing costs and power demand lead to the evaluation of new methods to improve productivity and help meet power demands. Many researchers have attempted to increase the efficiency of a hydraulic power pack; one promising approach is the regenerative circuit concept. The objective of this research work is to increase the efficiency of a hydraulic circuit by introducing a regenerative circuit. A regenerative circuit is a system used to speed up the extension stroke of a double-acting single-rod hydraulic cylinder: the rod-end output is connected back to the input at the directional control valve, which increases the velocity of the piston and decreases the cycle time. For this research, a basic hydraulic circuit and a regenerative circuit were designed and compared, based on the time taken for extension and retraction of the piston. From the detailed analysis of both hydraulic circuits, it is found that introducing the hydraulic regenerative circuit increased efficiency by 5.3%. The obtained results lead to the conclusion that implementing a hydraulic regenerative circuit in a hydraulic power pack decreases power consumption, reduces cycle time, and increases productivity in the long run.
Numerical solution of the time fractional reaction-diffusion equation with a moving boundary
NASA Astrophysics Data System (ADS)
Zheng, Minling; Liu, Fawang; Liu, Qingxia; Burrage, Kevin; Simpson, Matthew J.
2017-06-01
A fractional reaction-diffusion model with a moving boundary is presented in this paper. An efficient numerical method is constructed to solve this moving boundary problem. Our method makes use of a finite difference approximation for the temporal discretization, and spectral approximation for the spatial discretization. The stability and convergence of the method is studied, and the errors of both the semi-discrete and fully-discrete schemes are derived. Numerical examples, motivated by problems from developmental biology, show a good agreement with the theoretical analysis and illustrate the efficiency of our method.
Efficient propagation of the hierarchical equations of motion using the matrix product state method
NASA Astrophysics Data System (ADS)
Shi, Qiang; Xu, Yang; Yan, Yaming; Xu, Meng
2018-05-01
We apply the matrix product state (MPS) method to propagate the hierarchical equations of motion (HEOM). It is shown that the MPS approximation works well in different types of problems, including both boson and fermion baths. The MPS method based on the time-dependent variational principle is also found to be applicable to HEOM with more than one thousand effective modes. Combining the flexibility of the HEOM in defining the effective modes with the efficiency of the MPS method may thus provide a promising tool for simulating quantum dynamics in condensed phases.
Efficient scatter model for simulation of ultrasound images from computed tomography data
NASA Astrophysics Data System (ADS)
D'Amato, J. P.; Lo Vercio, L.; Rubi, P.; Fernandez Vera, E.; Barbuzza, R.; Del Fresno, M.; Larrabide, I.
2015-12-01
Background and motivation: Real-time ultrasound simulation refers to the process of computationally creating fully synthetic ultrasound images instantly. Due to the high value of specialized low cost training for healthcare professionals, there is a growing interest in the use of this technology and the development of high fidelity systems that simulate the acquisition of echographic images. The objective is to create an efficient and reproducible simulator that can run on either notebooks or desktops using low cost devices. Materials and methods: We present an interactive ultrasound simulator based on CT data. This simulator is based on ray-casting and provides real-time interaction capabilities. The simulation of scattering that is coherent with the transducer position in real time is also introduced. Such noise is produced using a simplified model of multiplicative noise and convolution with point spread functions (PSF) tailored for this purpose. Results: The generation of scattering maps was revised for improved computational efficiency. This allowed a more efficient simulation of coherent scattering in the synthetic echographic images while providing highly realistic results. We describe some quality and performance metrics to validate these results, achieving a performance of up to 55 fps. Conclusion: The proposed technique for real-time scattering modeling provides realistic yet computationally efficient scatter distributions. The error between the original image and the simulated scattering image was compared for the proposed method and the state-of-the-art, showing negligible differences in its distribution.
Development of iterative techniques for the solution of unsteady compressible viscous flows
NASA Technical Reports Server (NTRS)
Sankar, Lakshmi N.; Hixon, Duane
1991-01-01
Efficient iterative solution methods are being developed for the numerical solution of two- and three-dimensional compressible Navier-Stokes equations. Iterative time marching methods have several advantages over classical multi-step explicit time marching schemes and non-iterative implicit time marching schemes. Iterative schemes have better stability characteristics than non-iterative explicit and implicit schemes, and the extra work they require can be designed to perform efficiently on current and future generation scalable, massively parallel machines. An obvious candidate for iteratively solving the system of coupled nonlinear algebraic equations arising in CFD applications is the Newton method. Newton's method was implemented in existing finite difference and finite volume methods. Depending on the complexity of the problem, the number of Newton iterations needed per step to solve the discretized system of equations can, however, vary dramatically from a few to several hundred. Another popular approach based on the classical conjugate gradient method, known as the GMRES (Generalized Minimum Residual) algorithm, is investigated. The GMRES algorithm was used in the past by a number of researchers for solving steady viscous and inviscid flow problems with considerable success. Here, the suitability of this algorithm is investigated for solving the system of nonlinear equations that arise in unsteady Navier-Stokes solvers at each time step. Unlike the Newton method, which attempts to drive the error in the solution at each and every node down to zero, the GMRES algorithm only seeks to minimize the L2 norm of the error. In the GMRES algorithm the changes in the flow properties from one time step to the next are assumed to be the sum of a set of orthogonal vectors.
By choosing the number of vectors to a reasonably small value N (between 5 and 20) the work required for advancing the solution from one time step to the next may be kept to (N+1) times that of a noniterative scheme. Many of the operations required by the GMRES algorithm such as matrix-vector multiplies, matrix additions and subtractions can all be vectorized and parallelized efficiently.
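The small-N Krylov idea described above can be sketched with a minimal, unpreconditioned, non-restarted GMRES in NumPy. This is the generic textbook formulation (Arnoldi basis plus a small least-squares problem), not the solver from the report:

```python
import numpy as np

def gmres(A, b, x0, n_vectors=5):
    """Minimal GMRES sketch: build an N-vector Krylov basis with Arnoldi,
    then minimize the L2 norm of the residual over that subspace."""
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    Q = [r0 / beta]
    H = np.zeros((n_vectors + 1, n_vectors))
    m = n_vectors
    for j in range(n_vectors):
        w = A @ Q[j]
        for i in range(j + 1):              # modified Gram-Schmidt
            H[i, j] = Q[i] @ w
            w -= H[i, j] * Q[i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:             # happy breakdown: exact subspace
            m = j + 1
            break
        Q.append(w / H[j + 1, j])
    H = H[:m + 1, :m]
    e1 = np.zeros(m + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H, e1, rcond=None)   # min ||beta*e1 - H y||
    return x0 + np.column_stack(Q[:m]) @ y

A = np.array([[4., 1, 0, 0], [1, 4, 1, 0], [0, 1, 4, 1], [0, 0, 1, 4]])
b = np.ones(4)
x = gmres(A, b, np.zeros(4), n_vectors=4)
print(np.linalg.norm(A @ x - b))   # essentially zero: exact in n steps for n x n
```

With N between 5 and 20, only the matrix-vector products and inner products above are needed per time step, which is why the cost stays near (N+1) times a non-iterative scheme and vectorizes well.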
NASA Astrophysics Data System (ADS)
Simoni, L.; Secchi, S.; Schrefler, B. A.
2008-12-01
This paper analyses the numerical difficulties commonly encountered in solving fully coupled numerical models and proposes a numerical strategy apt to overcome them. The proposed procedure is based on space refinement and time adaptivity. The latter, which is mainly studied here, is based on the use of a finite element approach in the space domain and a Discontinuous Galerkin approximation within each time span. Error measures are defined for the jump of the solution at each time station. These constitute the parameters allowing for the time adaptivity. Some care is, however, needed for a useful definition of the jump measures. Numerical tests are presented firstly to demonstrate the advantages and shortcomings of the method over the more traditional use of finite differences in time, then to assess the efficiency of the proposed procedure for adapting the time step. The proposed method reveals its efficiency and simplicity in adapting the time step in the solution of coupled field problems.
SGFSC: speeding the gene functional similarity calculation based on hash tables.
Tian, Zhen; Wang, Chunyu; Guo, Maozu; Liu, Xiaoyan; Teng, Zhixia
2016-11-04
In recent years, many measures of gene functional similarity have been proposed and widely used in all kinds of essential research. These methods are mainly divided into two categories: pairwise approaches and group-wise approaches. However, a common problem with these methods is their time consumption, especially when measuring the gene functional similarities of a large number of gene pairs. The problem of computational efficiency for pairwise approaches is even more prominent because they depend on combining semantic similarities. Therefore, the efficient measurement of gene functional similarity remains a challenging problem. To speed up current gene functional similarity calculation methods, a novel two-step computing strategy is proposed: (1) establish a hash table for each method to store essential information obtained from the Gene Ontology (GO) graph and (2) measure gene functional similarity based on the corresponding hash table. There is no need to traverse the GO graph repeatedly for each method with the help of the hash table. The analysis of time complexity shows that the computational efficiency of these methods is significantly improved. We also implement a novel Speeding Gene Functional Similarity Calculation tool, namely SGFSC, which is bundled with seven typical measures using our proposed strategy. Further experiments show the great advantage of SGFSC in measuring gene functional similarity on the whole genomic scale. The proposed strategy is successful in speeding up current gene functional similarity calculation methods. SGFSC is an efficient tool that is freely available at http://nclab.hit.edu.cn/SGFSC . The source code of SGFSC can be downloaded from http://pan.baidu.com/s/1dFFmvpZ .
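The two-step strategy, precompute a hash table from the GO graph once, then answer similarity queries from the table, can be sketched as follows. The Jaccard-of-ancestors measure and the tiny DAG are simplifying assumptions for illustration, not one of SGFSC's seven bundled measures:

```python
def build_ancestor_table(parents):
    """One-time pass: hash table term -> set of ancestors (incl. itself),
    so later similarity queries never re-traverse the GO graph."""
    table = {}
    def ancestors(term):
        if term not in table:
            anc = {term}
            for p in parents.get(term, ()):
                anc |= ancestors(p)
            table[term] = anc
        return table[term]
    for term in parents:
        ancestors(term)
    return table

def term_similarity(t1, t2, table):
    """Simplified graph-based similarity (Jaccard of ancestor sets),
    a stand-in for the real semantic similarity measures."""
    a, b = table[t1], table[t2]
    return len(a & b) / len(a | b)

# Tiny hypothetical DAG: root <- A, root <- B, A <- C
parents = {'root': (), 'A': ('root',), 'B': ('root',), 'C': ('A',)}
table = build_ancestor_table(parents)
print(term_similarity('C', 'B', table))  # |{root}| / |{root, A, B, C}| = 0.25
```

After the one-time table construction, each of the many gene-pair queries is a constant-time set operation rather than a graph traversal, which is the source of the speedup the abstract reports.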
Vojkovska, H; Kubikova, I; Kralik, P
2015-03-01
Epidemiological data indicate that raw vegetables are associated with outbreaks of Listeria monocytogenes. Therefore, there is a demand for rapid and sensitive methods, such as PCR assays, for the detection and accurate discrimination of L. monocytogenes. However, the efficiency of PCR methods can be negatively affected by inhibitory compounds commonly found in vegetable matrices that may cause false-negative results. Therefore, the sample processing and DNA isolation steps must be carefully evaluated prior to the introduction of such methods into routine practice. In this study, we compared the ability of three column-based and four magnetic bead-based commercial DNA isolation kits to extract DNA of the model micro-organism L. monocytogenes from raw vegetables. The DNA isolation efficiency of all isolation kits was determined using a triplex real-time qPCR assay designed to specifically detect L. monocytogenes. The best-performing kit, the PowerSoil(™) Microbial DNA Isolation Kit, is suitable for the extraction of amplifiable DNA from L. monocytogenes cells in vegetables, with efficiencies ranging between 29.6 and 70.3%. Coupled with the triplex real-time qPCR assay, this DNA isolation kit is applicable to samples with bacterial loads of 10^3 L. monocytogenes cells per gram. Several recent outbreaks of Listeria monocytogenes have been associated with the consumption of fruits and vegetables. Real-time PCR assays allow fast detection and accurate quantification of microbes. However, the success of real-time PCR depends on the success with which template DNA can be extracted. The results of this study suggest that the PowerSoil(™) Microbial DNA Isolation Kit can be used for the extraction of amplifiable DNA from L. monocytogenes cells in vegetables, with efficiencies ranging between 29.6 and 70.3%. This method is applicable to samples with bacterial loads of 10^3 L. monocytogenes cells per gram.
© 2014 The Society for Applied Microbiology.
Design of A Cyclone Separator Using Approximation Method
NASA Astrophysics Data System (ADS)
Sin, Bong-Su; Choi, Ji-Won; Lee, Kwon-Hee
2017-12-01
A separator is a device installed in industrial applications to separate mixed objects. The separator of interest in this research is a cyclone type, which is used to separate a steam-brine mixture in a geothermal plant. The most important performance measure of the cyclone separator is the collection efficiency. The collection efficiency in this study is predicted by performing CFD (Computational Fluid Dynamics) analysis. This research defines six shape design variables to maximize the collection efficiency; thus, the collection efficiency is set up as the objective function in the optimization process. Since the CFD analysis requires considerable calculation time, it is impractical to obtain the optimal solution by directly coupling it with a gradient-based optimization algorithm. Thus, two approximation methods are introduced to obtain an optimum design. In this process, an L18 orthogonal array is adopted as the DOE method, and the kriging interpolation method is adopted to generate the metamodel for the collection efficiency. Based on the 18 analysis results, the relative importance of each variable to the collection efficiency is obtained through ANOVA (analysis of variance). The final design is suggested considering the results obtained from the two optimization methods. The fluid flow analysis of the cyclone separator is conducted using the commercial CFD software ANSYS-CFX.
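The metamodel step can be sketched with a minimal kriging (Gaussian-kernel) interpolator. The one-dimensional design variable and the sine-shaped response below are illustrative stand-ins for the six shape variables and the CFD-computed collection efficiency:

```python
import numpy as np

def kriging_fit(X, y, theta=1.0):
    """Simple kriging sketch: fit an exact Gaussian-kernel interpolant
    to a handful of DOE samples of an expensive objective, so that an
    optimizer can search the cheap metamodel instead of the CFD code."""
    K = np.exp(-theta * (X[:, None] - X[None, :]) ** 2)
    w = np.linalg.solve(K + 1e-12 * np.eye(len(X)), y)   # tiny nugget
    return lambda x: np.exp(-theta * (x - X) ** 2) @ w

# Hypothetical 1-D design variable and sampled responses
X = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
y = np.sin(X)                       # toy "collection efficiency" samples
model = kriging_fit(X, y)
print(abs(model(0.5) - np.sin(0.5)))   # interpolates the samples (tiny)
```

The metamodel can then be evaluated millions of times by any optimizer at negligible cost, which is the whole point of the approximation approach.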
Efficient sensitivity analysis method for chaotic dynamical systems
NASA Astrophysics Data System (ADS)
Liao, Haitao
2016-05-01
The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time averaged quantities for chaotic dynamical systems. The key idea is to recast the time averaged integration term in the form of a differential equation before applying the sensitivity analysis method. An additional constraint-based equation, which forms the augmented equations of motion, is proposed to calculate the time averaged integration variable, and the sensitivity coefficients are obtained by solving the augmented differential equations. The application of the least squares shadowing formulation to the augmented equations results in an explicit expression for the sensitivity coefficient which is dependent on the final state of the Lagrange multipliers. Using LU factorization to calculate the Lagrange multipliers improves both convergence behavior and computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches, and some short impulsive sensitivity coefficients are observed by using the direct differentiation sensitivity analysis method.
Efficient multiscale magnetic-domain analysis of iron-core material under mechanical stress
NASA Astrophysics Data System (ADS)
Nishikubo, Atsushi; Ito, Shumpei; Mifune, Takeshi; Matsuo, Tetsuji; Kaido, Chikara; Takahashi, Yasuhito; Fujiwara, Koji
2018-05-01
For an efficient analysis of magnetization, a partial-implicit solution method is improved using an assembled domain structure model with six-domain mesoscopic particles exhibiting pinning-type hysteresis. The quantitative analysis of non-oriented silicon steel succeeds in predicting the stress dependence of hysteresis loss with computation times greatly reduced by using the improved partial-implicit method. The effect of cell division along the thickness direction is also evaluated.
Hara-Kudo, Yukiko; Konishi, Noriko; Ohtsuka, Kayoko; Iwabuchi, Kaori; Kikuchi, Rie; Isobe, Junko; Yamazaki, Takumiko; Suzuki, Fumie; Nagai, Yuhki; Yamada, Hiroko; Tanouchi, Atsuko; Mori, Tetsuya; Nakagawa, Hiroshi; Ueda, Yasufumi; Terajima, Jun
2016-08-02
To establish an efficient detection method for Shiga toxin (Stx)-producing Escherichia coli (STEC) O26, O103, O111, O121, O145, and O157 in food, an interlaboratory study using all the serogroups of the detection targets was conducted for the first time. We employed a series of tests including enrichment, real-time PCR assays, and concentration by immunomagnetic separation, followed by plating onto selective agar media (IMS-plating methods). This study focused in particular on the efficiencies of real-time PCR assays in detecting stx and O-antigen genes of the six serogroups and of IMS-plating methods onto selective agar media, including chromogenic agar. Ground beef and radish sprouts samples were inoculated with the six STEC serogroups either at 4-6 CFU/25 g (low levels) or at 22-29 CFU/25 g (high levels). The sensitivity of stx detection in ground beef at both levels of inoculation with all six STEC serogroups was 100%. The sensitivity of stx detection was also 100% in radish sprouts at high levels of inoculation with all six STEC serogroups, and 66.7%-91.7% at low levels of inoculation. The sensitivity of detection of O-antigen genes was 100% in both ground beef and radish sprouts at high inoculation levels, while at low inoculation levels, it was 95.8%-100% in ground beef and 66.7%-91.7% in radish sprouts. The sensitivity of detection with IMS-plating was either the same as or lower than that of the real-time PCR assays targeting stx and O-antigen genes. The relationship between the results of the IMS-plating methods and the Ct values of the real-time PCR assays was analyzed in detail for the first time. Ct values in most samples that tested negative in the IMS-plating method were higher than the maximum Ct values in samples that tested positive in the IMS-plating method. This study indicates that all six STEC serogroups in food contaminated with more than 29 CFU/25 g were detected by real-time PCR assays targeting stx and O-antigen genes and by IMS-plating onto selective agar media.
Therefore, screening of stx and O-antigen genes followed by isolation of STECs by IMS-plating methods may be an efficient method to detect the six STEC serogroups. Copyright © 2016 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurnik, Charles W; Gowans, Dakers; Telarico, Chad
The Commercial and Industrial Lighting Evaluation Protocol (the protocol) describes methods to account for gross energy savings resulting from the programmatic installation of efficient lighting equipment in large populations of commercial, industrial, and other nonresidential facilities. This protocol does not address savings resulting from changes in codes and standards, or from education and training activities. A separate Uniform Methods Project (UMP) protocol, Chapter 3: Commercial and Industrial Lighting Controls Evaluation Protocol, addresses methods for evaluating savings resulting from lighting control measures such as adding time clocks, tuning energy management system commands, and adding occupancy sensors.
Applications of multigrid software in the atmospheric sciences
NASA Technical Reports Server (NTRS)
Adams, J.; Garcia, R.; Gross, B.; Hack, J.; Haidvogel, D.; Pizzo, V.
1992-01-01
Elliptic partial differential equations from different areas in the atmospheric sciences are efficiently and easily solved utilizing the multigrid software package named MUDPACK. It is demonstrated that the multigrid method is more efficient than other commonly employed techniques, such as Gaussian elimination and fixed-grid relaxation. The efficiency relative to other techniques, both in terms of storage requirement and computational time, increases quickly with grid size.
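To illustrate why multigrid scales so well with grid size, here is a toy recursive V-cycle for the 1-D Poisson equation. It is a generic textbook scheme (weighted Jacobi smoothing, full-weighting restriction, linear interpolation), not MUDPACK itself:

```python
import numpy as np

def v_cycle(u, f, n_pre=2, n_post=2):
    """Recursive V-cycle sketch for -u'' = f on [0,1] with zero boundary
    values: smooth, restrict the residual, recurse, prolong, smooth."""
    n = len(u) - 1                      # number of intervals, h = 1/n
    h = 1.0 / n

    def relax(u, f, sweeps):            # weighted Jacobi (omega = 0.8)
        for _ in range(sweeps):
            u[1:-1] += 0.4 * (f[1:-1] * h**2 + u[:-2] + u[2:] - 2 * u[1:-1])
        return u

    u = relax(u, f, n_pre)
    if n <= 2:
        return u
    r = np.zeros_like(u)                # residual r = f - A u
    r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2
    rc = np.zeros(n // 2 + 1)           # full-weighting restriction
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    ec = v_cycle(np.zeros(n // 2 + 1), rc)       # coarse-grid correction
    u += np.interp(np.linspace(0, 1, n + 1),     # linear prolongation
                   np.linspace(0, 1, n // 2 + 1), ec)
    return relax(u, f, n_post)

n = 64
x = np.linspace(0, 1, n + 1)
f = np.pi**2 * np.sin(np.pi * x)        # exact solution: sin(pi x)
u = np.zeros(n + 1)
for _ in range(20):
    u = v_cycle(u, f)
print(np.max(np.abs(u - np.sin(np.pi * x))))   # small: discretization error only
```

Each V-cycle costs O(n) work and reduces the algebraic error by a grid-independent factor, which is the advantage over Gaussian elimination and fixed-grid relaxation that the abstract highlights.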
A fast marching algorithm for the factored eikonal equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Treister, Eran, E-mail: erantreister@gmail.com; Haber, Eldad, E-mail: haber@math.ubc.ca; Department of Mathematics, The University of British Columbia, Vancouver, BC
The eikonal equation is instrumental in many applications in several fields ranging from computer vision to geoscience. This equation can be efficiently solved using the iterative Fast Sweeping (FS) methods and the direct Fast Marching (FM) methods. However, when used for a point source, the original eikonal equation is known to yield inaccurate numerical solutions, because of a singularity at the source. In this case, the factored eikonal equation is often preferred, and is known to yield a more accurate numerical solution. One application that requires the solution of the eikonal equation for point sources is travel time tomography. This inverse problem may be formulated using the eikonal equation as a forward problem. While this problem has been solved using FS in the past, the more recent choice for applying it involves FM methods because of the efficiency in which sensitivities can be obtained using them. However, while several FS methods are available for solving the factored equation, the FM method is available only for the original eikonal equation. In this paper we develop a Fast Marching algorithm for the factored eikonal equation, using both first and second order finite-difference schemes. Our algorithm follows the same lines as the original FM algorithm and requires the same computational effort. In addition, we show how to obtain sensitivities using this FM method and apply travel time tomography, formulated as an inverse factored eikonal equation. Numerical results in two and three dimensions show that our algorithm solves the factored eikonal equation efficiently, and demonstrate the achieved accuracy for computing the travel time. We also demonstrate a recovery of a 2D and 3D heterogeneous medium by travel time tomography using the eikonal equation for forward modeling and inversion by Gauss–Newton.
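The original (unfactored) first-order Fast Marching algorithm that the paper builds on can be sketched as follows: accept grid points in increasing travel-time order from a heap, updating each neighbor with an upwind finite-difference solve. The unit-speed grid and point source are illustrative; note the diagonal overestimate near the source, which is the singularity-induced inaccuracy motivating the factored formulation:

```python
import heapq
import numpy as np

def fast_marching(speed, src, h=1.0):
    """First-order Fast Marching sketch for |grad T| = 1/speed on a 2-D
    grid with a point source at `src` (original, unfactored equation)."""
    ny, nx = speed.shape
    T = np.full((ny, nx), np.inf)
    T[src] = 0.0
    frozen = np.zeros((ny, nx), dtype=bool)
    heap = [(0.0, src)]
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if frozen[i, j]:
            continue                       # lazy deletion of stale entries
        frozen[i, j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < ny and 0 <= b < nx and not frozen[a, b]:
                # upwind neighbor values in x and y
                tx = min(T[a, b - 1] if b > 0 else np.inf,
                         T[a, b + 1] if b < nx - 1 else np.inf)
                ty = min(T[a - 1, b] if a > 0 else np.inf,
                         T[a + 1, b] if a < ny - 1 else np.inf)
                f = h / speed[a, b]
                if abs(tx - ty) >= f:      # one-sided update
                    t_new = min(tx, ty) + f
                else:                      # two-sided (quadratic) update
                    t_new = 0.5 * (tx + ty + np.sqrt(2 * f * f - (tx - ty) ** 2))
                if t_new < T[a, b]:
                    T[a, b] = t_new
                    heapq.heappush(heap, (t_new, (a, b)))
    return T

T = fast_marching(np.ones((11, 11)), (5, 5))
print(T[5, 9], T[9, 9])   # exact (4.0) along the axis; diagonal overestimates sqrt(32)
```

The factored variant replaces T by a known singular factor times a smooth unknown, removing exactly the diagonal error visible here while keeping the same heap-driven structure.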
NASA Astrophysics Data System (ADS)
Wang, N.; Shen, Y.; Yang, D.; Bao, X.; Li, J.; Zhang, W.
2017-12-01
Accurate and efficient forward modeling methods are important for high resolution full waveform inversion. Compared with the elastic case, solving the anelastic wave equation requires more computational time, because of the need to compute additional material-independent anelastic functions. A numerical scheme with a large Courant-Friedrichs-Lewy (CFL) condition number enables us to use a large time step to simulate wave propagation, which improves computational efficiency. In this work, we apply the fourth-order strong stability preserving Runge-Kutta method with an optimal CFL coefficient to solve the anelastic wave equation. We use a fourth order DRP/opt MacCormack scheme for the spatial discretization, and we approximate the rheological behaviors of the Earth by using the generalized Maxwell body model. With a larger CFL condition number, we find that the computational efficiency is significantly improved compared with the traditional fourth-order Runge-Kutta method. Then, we apply the scattering-integral method for calculating travel time and amplitude sensitivity kernels with respect to velocity and attenuation structures. For each source, we carry out one forward simulation and save the time-dependent strain tensor. For each station, we carry out three `backward' simulations for the three components and save the corresponding strain tensors. The sensitivity kernels at each point in the medium are the convolution of the two sets of the strain tensors. Finally, we show several synthetic tests to verify the effectiveness of the strong stability preserving Runge-Kutta method in generating accurate synthetics in full waveform modeling, and in generating accurate strain tensors for calculating sensitivity kernels at regional and global scales.
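The strong-stability-preserving Runge-Kutta idea can be illustrated with the classic three-stage, third-order Shu-Osher scheme (the paper uses a fourth-order variant with an optimized CFL coefficient). Each stage is a convex combination of forward-Euler steps, which is what preserves the stability of the underlying spatial discretization; the damped-free oscillator below is a toy stand-in for a semi-discretized wave equation:

```python
import numpy as np

def ssprk3_step(f, u, t, dt):
    """One step of the three-stage, third-order SSP Runge-Kutta
    (Shu-Osher) scheme: convex combinations of forward-Euler steps."""
    u1 = u + dt * f(t, u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * f(t + dt, u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * f(t + 0.5 * dt, u2))

# harmonic oscillator: u1' = u2, u2' = -u1, exact solution (cos t, -sin t)
def rhs(t, u):
    return np.array([u[1], -u[0]])

u, t, dt = np.array([1.0, 0.0]), 0.0, 0.01
for _ in range(628):                      # integrate to t = 6.28, ~one period
    u = ssprk3_step(rhs, u, t, dt)
    t += dt
print(u)   # close to the initial state [1, 0] after roughly one period
```

An optimized-CFL fourth-order SSP scheme follows the same convex-combination pattern with more stages, trading a few extra right-hand-side evaluations for a much larger stable time step.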
DOE Office of Scientific and Technical Information (OSTI.GOV)
Norman, Matthew R
2014-01-01
The novel ADER-DT time discretization is applied to two-dimensional transport in a quadrature-free, WENO- and FCT-limited, Finite-Volume context. Emphasis is placed on (1) the serial and parallel computational properties of ADER-DT and this framework and (2) the flexibility of ADER-DT and this framework in efficiently balancing accuracy with other constraints important to transport applications. This study demonstrates a range of choices for the user when approaching their specific application while maintaining good parallel properties. In this method, genuine multi-dimensionality, single-step and single-stage time stepping, strict positivity, and a flexible range of limiting are all achieved with only one parallel synchronization and data exchange per time step. In terms of parallel data transfers per simulated time interval, this improves upon multi-stage time stepping and post-hoc filtering techniques such as hyperdiffusion. This method is evaluated with standard transport test cases over a range of limiting options to demonstrate quantitatively and qualitatively what a user should expect when employing this method in their application.
A time-efficient algorithm for implementing the Catmull-Clark subdivision method
NASA Astrophysics Data System (ADS)
Ioannou, G.; Savva, A.; Stylianou, V.
2015-10-01
Splines are the most popular methods in Figure Modeling and CAGD (Computer Aided Geometric Design) for generating smooth surfaces from a number of control points. The control points define the shape of a figure, and splines calculate the required number of points which, when displayed on a computer screen, result in a smooth surface. However, spline methods are based on a rectangular topological structure of points, i.e., a two-dimensional table of vertices, and thus cannot generate complex figures, such as human and animal bodies, whose complex structure cannot be defined by a regular rectangular grid. On the other hand, surface subdivision methods, which are derived from splines, generate surfaces defined by an arbitrary topology of control points. This is the reason that during the last fifteen years subdivision methods have taken the lead over regular spline methods in all areas of modeling, in both industry and research. The cost of executing software that reads control points and calculates the surface lies in its run time, because the surface structure required for handling arbitrary topological grids is very complicated. Many software programs have been developed to implement subdivision surfaces; however, few algorithms are documented in the literature to support developers in writing efficient code. This paper aims to assist programmers by presenting a time-efficient algorithm for implementing subdivision splines. The Catmull-Clark scheme, the most popular of the subdivision methods, has been employed to illustrate the algorithm.
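The Catmull-Clark point rules themselves are compact; the run-time cost discussed above lies in rebuilding the arbitrary-topology connectivity, which the sketch below deliberately omits. A minimal version of the face-point, edge-point, and vertex rules for a closed quad mesh, with the unit cube as the standard test input:

```python
def catmull_clark_points(verts, faces):
    """Sketch of the point rules in one Catmull-Clark step for a closed
    quad mesh: face points, edge points, and repositioned vertices."""
    def avg(pts):
        n = len(pts)
        return tuple(sum(c) / n for c in zip(*pts))

    # face point: centroid of the face's vertices
    face_pt = [avg([verts[i] for i in f]) for f in faces]

    # edge -> the (two) faces sharing it
    edge_faces = {}
    for fi, f in enumerate(faces):
        for a, b in zip(f, f[1:] + f[:1]):
            edge_faces.setdefault(frozenset((a, b)), []).append(fi)

    # edge point: average of the two endpoints and two adjacent face points
    edge_pt = {e: avg([verts[v] for v in e] + [face_pt[fi] for fi in adj])
               for e, adj in edge_faces.items()}

    # vertex rule: (F + 2R + (n-3)P) / n for a vertex of valence n
    new_verts = []
    for vi, p in enumerate(verts):
        adj_faces = [face_pt[fi] for fi, f in enumerate(faces) if vi in f]
        adj_mids = [avg([verts[a] for a in e]) for e in edge_faces if vi in e]
        n = len(adj_faces)
        F, R = avg(adj_faces), avg(adj_mids)
        new_verts.append(tuple((F[c] + 2 * R[c] + (n - 3) * p[c]) / n
                               for c in range(3)))
    return face_pt, edge_pt, new_verts

# unit cube: 8 vertices, 6 quad faces, 12 edges
verts = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
faces = [(0, 1, 3, 2), (4, 6, 7, 5), (0, 4, 5, 1),
         (2, 3, 7, 6), (0, 2, 6, 4), (1, 5, 7, 3)]
fp, ep, nv = catmull_clark_points(verts, faces)
print(fp[0])   # centroid of the x = 0 face: (0.0, 0.5, 0.5)
```

A production implementation spends most of its time not on these formulas but on the connectivity queries (incident faces and edges per vertex), which is exactly the data-structure cost the paper's algorithm targets.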
Quantification of chaotic strength and mixing in a micro fluidic system
NASA Astrophysics Data System (ADS)
Kim, Ho Jun; Beskok, Ali
2007-11-01
Comparative studies of five different techniques commonly employed to identify the chaotic strength and mixing efficiency in micro fluidic systems are presented to demonstrate the competitive advantages and shortcomings of each method. The 'chaotic electroosmotic stirrer' of Qian and Bau (2002 Anal. Chem. 74 3616-25) is utilized as the benchmark case due to its well-defined flow kinematics. Lagrangian particle tracking methods are utilized to study particle dispersion in the conceptual device using spectral element and fourth-order Runge-Kutta discretizations in space and time, respectively. Stirring efficiency is predicted using the stirring index based on the box counting method, and Poincaré sections are utilized to identify the chaotic and regular regions under various actuation conditions. Finite time Lyapunov exponents are calculated to quantify the chaotic strength, while the probability density function of the stretching field is utilized as an alternative method for the statistical analysis of chaotic and partially chaotic cases. The inverse mixing index, based on the standard deviation of the scalar species distribution, is utilized as a metric to quantify the mixing efficiency. A series of numerical simulations is performed by varying the Peclet number (Pe) at fixed kinematic conditions. The mixing time (t_m) is characterized as a function of Pe, and t_m ~ ln(Pe) scaling is demonstrated for fully chaotic cases, while t_m ~ Pe^α scaling with α ≈ 0.33 and α = 0.5 is observed for partially chaotic and regular cases, respectively. Employing the aforementioned techniques, the optimum kinematic conditions and the actuation frequency of the stirrer that result in the highest mixing/stirring efficiency are identified.
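Of the techniques compared, the finite-time Lyapunov exponent has the most compact definition: the logarithm of the largest singular value of the composed flow-map Jacobian, per unit time. A sketch using Arnold's cat map, chosen because its constant Jacobian gives a closed-form answer (it is not the electroosmotic stirrer flow of the paper):

```python
import numpy as np

def ftle(jacobian, n_steps):
    """Finite-time Lyapunov exponent from the Cauchy-Green tensor of the
    composed flow map: lambda = ln(sigma_max(J^n)) / n."""
    J = np.linalg.matrix_power(jacobian, n_steps)
    C = J.T @ J                        # Cauchy-Green deformation tensor
    return np.log(np.max(np.linalg.eigvalsh(C))) / (2 * n_steps)

# Arnold cat map: constant Jacobian [[2, 1], [1, 1]],
# so the FTLE equals ln((3 + sqrt(5)) / 2) at every point
cat = np.array([[2.0, 1.0], [1.0, 1.0]])
print(ftle(cat, 50))   # → ln((3 + sqrt(5)) / 2) ≈ 0.9624
```

For a real stirrer flow the Jacobian varies along each trajectory, so it is accumulated by integrating the variational equations or by finite differences of neighboring tracer particles, but the singular-value formula above is unchanged.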
Increasing the computational efficiency of digital cross correlation by a vectorization method
NASA Astrophysics Data System (ADS)
Chang, Ching-Yuan; Ma, Chien-Ching
2017-08-01
This study presents a vectorization method for use in MATLAB programming aimed at increasing the computational efficiency of digital cross correlation in sound and images, resulting in speedups of 6.387 and 36.044 times compared with performance values obtained from looped expressions. This work bridges the gap between matrix operations and loop iteration, preserving flexibility and efficiency in program testing. This paper uses numerical simulation to verify the speedup of the proposed vectorization method, as well as experiments to measure the quantitative transient displacement response under dynamic impact loading. The experiment involved the use of a high-speed camera as well as a fiber optic system to measure the transient displacement of a cantilever beam under impact from a steel ball. Experimental measurement data obtained from the two methods are in excellent agreement in both the time and frequency domains, with discrepancies of only 0.68%. Numerical and experimental results demonstrate the efficacy of the proposed vectorization method with regard to computational speed in signal processing and high precision in the correlation algorithm. We also present the source code with which to build MATLAB-executable functions on Windows as well as Linux platforms, and provide a series of examples to demonstrate the application of the proposed vectorization method.
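The paper's MATLAB source is not reproduced here, but the same looped-versus-vectorized contrast can be sketched in NumPy (an illustrative translation, not the authors' code): the vectorized form expresses full cross-correlation as a convolution with the reversed signal, replacing two nested Python loops with a single library call.

```python
import numpy as np

def xcorr_loop(a, b):
    """Full cross-correlation via explicit lag loops (the slow reference)."""
    n, m = len(a), len(b)
    out = np.zeros(n + m - 1)
    for lag in range(n + m - 1):
        s = 0.0
        for i in range(n):
            j = i - lag + m - 1
            if 0 <= j < m:
                s += a[i] * b[j]
        out[lag] = s
    return out

def xcorr_vec(a, b):
    """Vectorized form: cross-correlation equals convolution with b reversed."""
    return np.convolve(a, b[::-1])

sig = np.array([1.0, 2.0, 3.0, 4.0])
tpl = np.array([0.0, 1.0, 0.5])
agree = np.allclose(xcorr_loop(sig, tpl), xcorr_vec(sig, tpl))
```

The identity corr(a, b) = conv(a, reverse(b)) holds for real signals and is what makes FFT-based and matrix-based cross correlation interchangeable with the loop form.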
A Robust and Efficient Method for Steady State Patterns in Reaction-Diffusion Systems
Lo, Wing-Cheong; Chen, Long; Wang, Ming; Nie, Qing
2012-01-01
An inhomogeneous steady state pattern of nonlinear reaction-diffusion equations with no-flux boundary conditions is usually computed by solving the corresponding time-dependent reaction-diffusion equations using temporal schemes. Nonlinear solvers (e.g., Newton's method) take less CPU time in direct computation of the steady state; however, their convergence is sensitive to the initial guess, often leading to divergence or convergence to a spatially homogeneous solution. Systematic numerical exploration of spatial patterns of reaction-diffusion equations under different parameter regimes requires that the numerical method be efficient and robust to the initial condition or initial guess, with a better likelihood of convergence to an inhomogeneous pattern. Here, a new approach that combines the advantages of temporal schemes in robustness and of Newton's method in fast convergence for solving steady states of reaction-diffusion equations is proposed. In particular, an adaptive implicit Euler with inexact solver (AIIE) method is found to be much more efficient than temporal schemes and more robust in convergence than typical nonlinear solvers (e.g., Newton's method) in finding inhomogeneous patterns. Application of this new approach to two reaction-diffusion equations in one, two, and three spatial dimensions, along with direct comparisons to several other existing methods, demonstrates that AIIE is a more desirable method for searching inhomogeneous spatial patterns of reaction-diffusion equations in a large parameter space. PMID:22773849
NASA Astrophysics Data System (ADS)
Yu, Jieqing; Wu, Lixin; Hu, Qingsong; Yan, Zhigang; Zhang, Shaoliang
2017-12-01
Visibility computation is of great interest to location optimization, environmental planning, ecology, and tourism. Many algorithms have been developed for visibility computation. In this paper, we propose a novel method of visibility computation, called synthetic visual plane (SVP), to achieve better performance with respect to efficiency, accuracy, or both. The method uses a global horizon, which is a synthesis of the line-of-sight information of all nearer points, to determine the visibility of a point, which makes it an accurate visibility method. We used discretization of the horizon to gain good performance in efficiency. After discretization, the accuracy and efficiency of SVP depend on the scale of discretization (i.e., zone width). The method is more accurate at smaller zone widths, but this requires a longer operating time. Users must strike a balance between accuracy and efficiency at their discretion. According to our experiments, SVP is less accurate but more efficient than R2 if the zone width is set to one grid. However, SVP becomes more accurate than R2 when the zone width is set to 1/24 grid, while it continues to perform as fast as or faster than R2. Although SVP performs worse than the reference plane and depth map algorithms with respect to efficiency, it is superior to these two algorithms in accuracy.
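The line-of-sight bookkeeping that SVP generalises, a running "horizon" of the maximum elevation angle over all nearer points, reduces in one dimension to a few lines. This is a didactic sketch with made-up elevations, not the SVP algorithm itself:

```python
def visible_along_profile(elev, viewer_h=0.0):
    """Visibility of each cell along a 1-D terrain profile seen from cell 0.
    A cell is visible iff its elevation angle from the viewer exceeds the
    horizon (maximum angle) accumulated over all nearer cells."""
    eye = elev[0] + viewer_h
    horizon = float("-inf")
    vis = [True]                       # the viewer's own cell
    for d in range(1, len(elev)):
        ang = (elev[d] - eye) / d      # tangent of the elevation angle
        vis.append(ang > horizon)
        horizon = max(horizon, ang)    # the horizon only ever rises
    return vis

vis = visible_along_profile([0, 1, 0, 4, 1, 10])
```

SVP's contribution is to maintain one discretized global horizon over 2-D terrain instead of recomputing a profile per target cell; the zone width discussed above is the angular bin size of that horizon.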
Naveros, Francisco; Luque, Niceto R; Garrido, Jesús A; Carrillo, Richard R; Anguita, Mancia; Ros, Eduardo
2015-07-01
Time-driven simulation methods in traditional CPU architectures perform well and precisely when simulating small-scale spiking neural networks. Nevertheless, they still have drawbacks when simulating large-scale systems. Conversely, event-driven simulation methods in CPUs and time-driven simulation methods in graphic processing units (GPUs) can outperform CPU time-driven methods under certain conditions. With this performance improvement in mind, we have developed an event-and-time-driven spiking neural network simulator suitable for a hybrid CPU-GPU platform. Our neural simulator is able to efficiently simulate bio-inspired spiking neural networks consisting of different neural models, which can be distributed heterogeneously in both small layers and large layers or subsystems. For the sake of efficiency, the low-activity parts of the neural network can be simulated in CPU using event-driven methods while the high-activity subsystems can be simulated in either CPU (a few neurons) or GPU (thousands or millions of neurons) using time-driven methods. In this brief, we have undertaken a comparative study of these different simulation methods. For benchmarking the different simulation methods and platforms, we have used a cerebellar-inspired neural-network model consisting of a very dense granular layer and a Purkinje layer with a smaller number of cells (according to biological ratios). Thus, this cerebellar-like network includes a dense diverging neural layer (increasing the dimensionality of its internal representation and sparse coding) and a converging neural layer (integration) similar to many other biologically inspired and also artificial neural networks.
Tensor-product preconditioners for a space-time discontinuous Galerkin method
NASA Astrophysics Data System (ADS)
Diosady, Laslo T.; Murman, Scott M.
2014-10-01
A space-time discontinuous Galerkin spectral element discretization is presented for direct numerical simulation of the compressible Navier-Stokes equations. An efficient solution technique based on a matrix-free Newton-Krylov method is presented. A diagonalized alternating direction implicit preconditioner is extended to a space-time formulation using entropy variables. The effectiveness of this technique is demonstrated for the direct numerical simulation of turbulent flow in a channel.
Two-micron (Thulium) Laser Prostatectomy: An Effective Method for BPH Treatment.
Jiang, Qi; Xia, Shujie
2014-01-01
The two-micron (thulium) laser is the newest laser technique for treatment of bladder outlet obstruction resulting from benign prostatic hyperplasia (BPH). It takes less operative time than standard techniques and provides a clear surgical view, lower blood loss, and shorter catheterization and hospitalization times. It has been shown to be a safe and efficient method for BPH treatment regardless of prostate size.
Efficiency of endoscopy units can be improved with use of discrete event simulation modeling.
Sauer, Bryan G; Singh, Kanwar P; Wagner, Barry L; Vanden Hoek, Matthew S; Twilley, Katherine; Cohn, Steven M; Shami, Vanessa M; Wang, Andrew Y
2016-11-01
Background and study aims: The projected increased demand for health services obligates healthcare organizations to operate efficiently. Discrete event simulation (DES) is a modeling method that allows for optimization of systems through virtual testing of different configurations before implementation. The objective of this study was to identify strategies to improve the daily efficiencies of an endoscopy center with the use of DES. Methods: We built a DES model of a five procedure room endoscopy unit at a tertiary-care university medical center. After validating the baseline model, we tested alternate configurations to run the endoscopy suite and evaluated outcomes associated with each change. The main outcome measures included adequate number of preparation and recovery rooms, blocked inflow, delay times, blocked outflows, and patient cycle time. Results: Based on a sensitivity analysis, the adequate number of preparation rooms is eight and recovery rooms is nine for a five procedure room unit (total 3.4 preparation and recovery rooms per procedure room). Simple changes to procedure scheduling and patient arrival times led to a modest improvement in efficiency. Increasing the preparation/recovery rooms based on the sensitivity analysis led to significant improvements in efficiency. Conclusions: By applying tools such as DES, we can model changes in an environment with complex interactions and find ways to improve the medical care we provide. DES is applicable to any endoscopy unit and would be particularly valuable to those who are trying to improve on the efficiency of care and patient experience.
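For readers unfamiliar with the mechanics of DES, the event-calendar idea can be sketched with a toy model: patients queue for a limited number of procedure rooms, and the simulation jumps from event to event rather than ticking through time. This illustrates the method only; the study's validated model of a five-room unit with prep and recovery stages is far richer.

```python
import heapq

def simulate(arrivals, service, n_rooms):
    """Toy discrete event simulation: patients wait for one of n_rooms
    procedure rooms; returns each patient's cycle time (arrival -> finish)."""
    events = [(t, "arrive", i) for i, t in enumerate(arrivals)]
    heapq.heapify(events)
    free, queue, cycle = n_rooms, [], {}
    while events:
        t, kind, i = heapq.heappop(events)   # jump to the next event
        if kind == "arrive":
            queue.append(i)
        else:                                # "finish": a room frees up
            free += 1
            cycle[i] = t - arrivals[i]
        if free and queue:                   # seize a room if one is idle
            free -= 1
            nxt = queue.pop(0)
            heapq.heappush(events, (t + service[nxt], "finish", nxt))
    return [cycle[i] for i in range(len(arrivals))]

# three simultaneous arrivals, two rooms, 10-minute procedures
times = simulate(arrivals=[0, 0, 0], service=[10, 10, 10], n_rooms=2)
```

Testing a configuration change (say, a third room) is then a one-argument re-run, which is exactly the "virtual testing before implementation" the abstract describes.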
High-efficiency non-uniformity correction for wide dynamic linear infrared radiometry system
NASA Astrophysics Data System (ADS)
Li, Zhou; Yu, Yi; Tian, Qi-Jie; Chang, Song-Tao; He, Feng-Yun; Yin, Yan-He; Qiao, Yan-Feng
2017-09-01
Several different integration times are typically set for a wide-dynamic-range, linear, continuously variable integration time infrared radiometry system; traditional calibration-based non-uniformity corrections (NUC) are therefore usually conducted one integration time at a time and require several calibration sources, which makes the calibration and NUC process time-consuming. In this paper, the difference in NUC coefficients between different integration times is discussed, and a novel method called high-efficiency NUC, which builds on traditional calibration-based NUC, is proposed. It obtains the correction coefficients for all integration times over the whole linear dynamic range from only three images of a standard blackbody. The mathematical procedure of the proposed non-uniformity correction method is first validated, and its performance is then demonstrated on a 400 mm diameter ground-based infrared radiometry system. Experimental results show that the mean Normalized Root Mean Square (NRMS) value is reduced from 3.78% to 0.24% by the proposed method. In addition, the results at 4 ms and 70 °C show that this method is more accurate than traditional calibration-based NUC, while a good correction effect is maintained at other integration times and temperatures. Moreover, it greatly reduces the number of correction times and temperature sampling points, offers good real-time performance, and is suitable for field measurement.
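For context, the traditional two-point calibration-based NUC that the proposed method builds on fits a per-pixel gain and offset from two uniform blackbody frames; a sketch with synthetic fixed-pattern noise (illustrative values only, not the paper's three-image scheme):

```python
import numpy as np

def two_point_nuc(low_img, high_img, low_ref, high_ref):
    """Classic two-point calibration NUC: per-pixel gain and offset chosen
    so the two blackbody frames map onto their uniform reference levels."""
    gain = (high_ref - low_ref) / (high_img - low_img)
    offset = low_ref - gain * low_img
    return gain, offset

# synthetic fixed-pattern noise: the true scene is scaled/offset per pixel
rng = np.random.default_rng(0)
g_true = rng.uniform(0.8, 1.2, (4, 4))
o_true = rng.uniform(-5.0, 5.0, (4, 4))
raw = lambda scene: g_true * scene + o_true          # detector model

gain, offset = two_point_nuc(raw(20.0), raw(80.0), 20.0, 80.0)
corrected = gain * raw(50.0) + offset                # uniform 50.0 recovered
```

Repeating this per integration time is what makes the traditional procedure slow; the high-efficiency method replaces that repetition with a single three-frame calibration.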
Driving at Night Can Be Deadly
DOT National Transportation Integrated Search
1996-06-17
Efficient Commercial Vehicle Operations (CVO) is imperative as manufacturers and distributors move to new technologies, faster production methods and "Just in Time" delivery. CVO must offer more reliable travel times, as well as safety and flexibility.
Generalized Buneman Pruning for Inferring the Most Parsimonious Multi-state Phylogeny
NASA Astrophysics Data System (ADS)
Misra, Navodit; Blelloch, Guy; Ravi, R.; Schwartz, Russell
Accurate reconstruction of phylogenies remains a key challenge in evolutionary biology. Most biologically plausible formulations of the problem are formally NP-hard, with no known efficient solution. The standard in practice are fast heuristic methods that are empirically known to work very well in general, but can yield results arbitrarily far from optimal. Practical exact methods, which yield exponential worst-case running times but generally much better times in practice, provide an important alternative. We report progress in this direction by introducing a provably optimal method for the weighted multi-state maximum parsimony phylogeny problem. The method is based on generalizing the notion of the Buneman graph, a construction key to efficient exact methods for binary sequences, so as to apply to sequences with arbitrary finite numbers of states with arbitrary state transition weights. We implement an integer linear programming (ILP) method for the multi-state problem using this generalized Buneman graph and demonstrate that the resulting method is able to solve data sets that are intractable by prior exact methods in run times comparable with popular heuristics. Our work provides the first method for provably optimal maximum parsimony phylogeny inference that is practical for multi-state data sets of more than a few characters.
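For orientation, the classical dynamic program for the *small* parsimony problem with arbitrary state-transition weights (Sankoff's algorithm, which scores a fixed tree; the paper's ILP tackles the far harder problem of inferring the tree itself) fits in a few lines:

```python
def sankoff(tree, leaf_state, cost, n_states):
    """Sankoff dynamic programming for weighted multi-state small parsimony:
    minimum total state-transition cost of labelling a fixed binary tree."""
    INF = float("inf")
    def score(node):
        if node in leaf_state:      # leaf: only its observed state is free
            return [0 if s == leaf_state[node] else INF
                    for s in range(n_states)]
        left, right = tree[node]
        L, R = score(left), score(right)
        # cost of assigning state s: best transition into each child
        return [min(cost[s][t] + L[t] for t in range(n_states)) +
                min(cost[s][t] + R[t] for t in range(n_states))
                for s in range(n_states)]
    return min(score("root"))

# tiny example: root -> (A, inner), inner -> (B, C); unit transition costs
tree = {"root": ("A", "inner"), "inner": ("B", "C")}
leaves = {"A": 0, "B": 0, "C": 1}
unit = [[0, 1], [1, 0]]
best = sankoff(tree, leaves, unit, 2)   # one state change suffices
```

The generalized Buneman graph in the paper restricts which internal labellings (Steiner vertices) the ILP must consider, which is what makes exact search over *trees* tractable.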
A numerical study of adaptive space and time discretisations for Gross–Pitaevskii equations
Thalhammer, Mechthild; Abhau, Jochen
2012-01-01
As a basic principle, the benefits of adaptive discretisations are an improved balance between required accuracy and efficiency as well as an enhancement of the reliability of numerical computations. In this work, the capacity of locally adaptive space and time discretisations for the numerical solution of low-dimensional nonlinear Schrödinger equations is investigated. The considered model equation is related to the time-dependent Gross–Pitaevskii equation arising in the description of Bose–Einstein condensates in dilute gases. The performance of the Fourier pseudo-spectral method constrained to uniform meshes versus the locally adaptive finite element method, and of higher-order exponential operator splitting methods with variable time stepsizes, is studied. Numerical experiments confirm that a local time stepsize control, based on a posteriori local error estimators or embedded splitting pairs, respectively, is effective in different situations, with an enhancement in either efficiency or reliability. As expected, adaptive time-splitting schemes combined with fast Fourier transform techniques are favourable regarding accuracy and efficiency when applied to Gross–Pitaevskii equations with a defocusing nonlinearity and a mildly varying regular solution. However, the numerical solution of nonlinear Schrödinger equations in the semi-classical regime becomes a demanding task. Due to the highly oscillatory and nonlinear nature of the problem, the spatial mesh size and the time increments need to be of the size of the decisive parameter 0<ε≪1, especially when it is desired to capture correctly the quantitative behaviour of the wave function itself. The required high resolution in space constricts the feasibility of numerical computations for both the Fourier pseudo-spectral and the finite element method.
Nevertheless, for smaller parameter values, locally adaptive time discretisations make it possible to choose time stepsizes sufficiently small that the numerical approximation correctly captures the behaviour of the analytical solution. Further illustrations for Gross–Pitaevskii equations with a focusing nonlinearity or a sharp Gaussian as initial condition, respectively, complement the numerical study. PMID:25550676
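The local-error-driven stepsize control evaluated in the study can be illustrated in miniature with a step-doubling estimator on a scalar ODE. An explicit midpoint integrator stands in for the splitting schemes; all constants are illustrative:

```python
import math

def integrate_adaptive(f, u0, t_end, tol=1e-8, dt=0.1):
    """Variable-stepsize integration with a step-doubling local error
    estimator (a stand-in for embedded splitting pairs): compare one
    explicit-midpoint step against two half steps and adapt dt."""
    def step(u, t, h):                         # explicit midpoint, order 2
        return u + h * f(t + h / 2, u + h / 2 * f(t, u))
    t, u = 0.0, u0
    while t < t_end:
        h = min(dt, t_end - t)
        big = step(u, t, h)
        small = step(step(u, t, h / 2), t + h / 2, h / 2)
        err = abs(big - small)                 # local error estimate
        if err < tol:                          # accept the finer result
            t, u = t + h, small
            dt = h * min(2.0, 0.9 * (tol / max(err, 1e-16)) ** (1 / 3))
        else:                                  # reject and shrink the step
            dt = 0.9 * h * (tol / err) ** (1 / 3)
    return u

u = integrate_adaptive(lambda t, u: -u, 1.0, 1.0)   # u(1) of u' = -u
```

In the semi-classical regime discussed above, such a controller automatically drives the stepsizes down to the ε scale where a fixed-step scheme would need manual tuning.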
NASA Astrophysics Data System (ADS)
Khalqihi, K. I.; Rahayu, M.; Rendra, M.
2017-12-01
PT Perkebunan Nusantara VIII Ciater is a company that produces roughly 4 tons of orthodox black tea every day. In the production section, PT Perkebunan Nusantara VIII will install local exhaust ventilation (LEV), specifically at the sortation area on the sieve machines. To maintain the quality of the orthodox black tea, every machine must be scheduled for maintenance once a month, which takes 2 hours of working time; the additional local exhaust ventilation increases the time needed for maintenance, and if maintenance takes more than 2 hours the production process is delayed. To support the maintenance process at PT Perkebunan Nusantara VIII Ciater, the local exhaust ventilation is designed using a design for assembly (DFA) approach with the Boothroyd-Dewhurst method; DFA is chosen to simplify the maintenance process, which requires assembly and disassembly. Two LEV designs were produced in this research: Design 1, with 94 components, an assembly time of 647.88 seconds, and an assembly efficiency of 23.62%; and Design 2, with 82 components, an assembly time of 567.84 seconds, and an assembly efficiency of 24.83%. Design 2 is chosen based on the DFA goals: the minimum number of parts used, optimized assembly time, and assembly efficiency.
Time-variant random interval natural frequency analysis of structures
NASA Astrophysics Data System (ADS)
Wu, Binhua; Wu, Di; Gao, Wei; Song, Chongmin
2018-02-01
This paper presents a new robust method, namely the unified interval Chebyshev-based random perturbation method, to tackle the hybrid random interval structural natural frequency problem. In the proposed approach, the random perturbation method is implemented to furnish the statistical features (i.e., mean and standard deviation), and a Chebyshev surrogate model strategy is incorporated to formulate the statistical information of the natural frequency with regard to the interval inputs. The comprehensive analysis framework combines the advantages of both methods in a way that dramatically reduces the computational cost. The presented method is thus capable of investigating the day-to-day time-variant natural frequency of structures accurately and efficiently under concrete intrinsic creep effects with probabilistic and interval uncertain variables. The extreme bounds of the mean and standard deviation of the natural frequency are captured through the optimization strategy embedded within the analysis procedure. Three numerical examples, with a progressive relationship in terms of both structure type and uncertainty variables, are demonstrated to justify the applicability, accuracy and efficiency of the proposed method.
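The Chebyshev-surrogate ingredient can be sketched in isolation: interpolate an expensive response at Chebyshev nodes over the interval variable, then bound the response by scanning the cheap surrogate. The one-degree-of-freedom frequency model below is a hypothetical stand-in for a structural solver:

```python
import numpy as np

def cheb_surrogate(func, lo, hi, deg=8):
    """Chebyshev surrogate of func on [lo, hi], built by interpolating at
    Chebyshev nodes; cheap to evaluate in place of the full model."""
    k = np.arange(deg + 1)
    nodes = np.cos((2 * k + 1) * np.pi / (2 * (deg + 1)))   # on [-1, 1]
    x = lo + (nodes + 1) * (hi - lo) / 2                    # mapped nodes
    coeffs = np.polynomial.chebyshev.chebfit(nodes, func(x), deg)
    return lambda xx: np.polynomial.chebyshev.chebval(
        2 * (xx - lo) / (hi - lo) - 1, coeffs)

# hypothetical 1-dof model: frequency vs interval stiffness k in [900, 1100]
freq = lambda k: np.sqrt(k / 10.0) / (2 * np.pi)
surr = cheb_surrogate(freq, 900.0, 1100.0)

grid = np.linspace(900.0, 1100.0, 1001)
lo_bound, hi_bound = surr(grid).min(), surr(grid).max()     # interval bounds
```

In the paper this surrogate is wrapped around the perturbation-based mean and standard deviation rather than the frequency directly, so the interval optimization runs on polynomial evaluations instead of full structural analyses.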
Solution of nonlinear time-dependent PDEs through componentwise approximation of matrix functions
NASA Astrophysics Data System (ADS)
Cibotarica, Alexandru; Lambers, James V.; Palchak, Elisabeth M.
2016-09-01
Exponential propagation iterative (EPI) methods provide an efficient approach to the solution of large stiff systems of ODEs, compared to standard integrators. However, the bulk of the computational effort in these methods is due to products of matrix functions and vectors, which can become very costly at high resolution due to an increase in the number of Krylov projection steps needed to maintain accuracy. In this paper, it is proposed to modify EPI methods by using Krylov subspace spectral (KSS) methods, instead of standard Krylov projection methods, to compute products of matrix functions and vectors. Numerical experiments demonstrate that this modification causes the number of Krylov projection steps to become bounded independently of the grid size, thus dramatically improving efficiency and scalability. As a result, for each test problem featured, as the total number of grid points increases, the growth in computation time is just below linear, while other methods achieved this only on selected test problems or not at all.
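The kernel operation in EPI methods, approximating a matrix function times a vector from a Krylov subspace, can be sketched as follows. This is a generic Arnoldi projection with a truncated-Taylor exponential for the small projected matrix; KSS methods themselves work quite differently, componentwise in frequency space:

```python
import numpy as np

def expm_small(H, s=10, terms=12):
    """exp(H) for a small matrix via scaling-and-squaring plus Taylor."""
    X = H / 2.0**s
    E, term = np.eye(len(H)), np.eye(len(H))
    for k in range(1, terms):
        term = term @ X / k
        E = E + term
    for _ in range(s):
        E = E @ E
    return E

def expm_v_krylov(A, v, m=15):
    """Approximate exp(A) @ v by Arnoldi projection onto K_m(A, v)."""
    n = len(v)
    m = min(m, n)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):          # orthogonalize against the basis
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:         # happy breakdown: subspace is exact
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    # exp(A) v  ~=  beta * V_m exp(H_m) e_1
    return beta * V[:, :m] @ expm_small(H[:m, :m])[:, 0]

rng = np.random.default_rng(1)
M = rng.standard_normal((6, 6))
A = (M + M.T) / 2                       # small symmetric test matrix
v = rng.standard_normal(6)
approx = expm_v_krylov(A, v)
```

The cost the paper targets is the number of Arnoldi steps m needed for a given accuracy, which for standard Krylov projection grows with the grid resolution, whereas the KSS variant keeps it bounded.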
How fast do stock prices adjust to market efficiency? Evidence from a detrended fluctuation analysis
NASA Astrophysics Data System (ADS)
Reboredo, Juan C.; Rivera-Castro, Miguel A.; Miranda, José G. V.; García-Rubio, Raquel
2013-04-01
In this paper we analyse price fluctuations with the aim of measuring how long the market takes to adjust prices to weak-form efficiency, i.e., how long it takes for prices to adjust to a fractional Brownian motion with a Hurst exponent of 0.5. The Hurst exponent is estimated for different time horizons using detrended fluctuation analysis, a method suitable for non-stationary series with trends, in order to identify the time scale at which the Hurst exponent is consistent with the efficient market hypothesis. Using high-frequency share price, exchange rate and stock data, we show that price dynamics exhibited important deviations from efficiency for time periods of up to 15 min; thereafter, price dynamics were consistent with a geometric Brownian motion. The intraday behaviour of the series also indicated that price dynamics at trade opening and close were hardly consistent with efficiency, which would enable investors to exploit price deviations from fundamental values. This result is consistent with intraday volume, volatility and transaction time duration patterns.
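A compact version of the DFA estimator used above, with first-order detrending (the scale range and the white-noise test series are illustrative, not the paper's data):

```python
import numpy as np

def dfa_hurst(x, scales=(8, 16, 32, 64, 128)):
    """Detrended fluctuation analysis: integrate the series, detrend it
    linearly in windows of size s, and regress log F(s) on log s; the
    slope estimates the Hurst exponent."""
    y = np.cumsum(x - np.mean(x))               # the "profile"
    F = []
    for s in scales:
        n = len(y) // s
        f2 = []
        for k in range(n):
            seg = y[k * s:(k + 1) * s]
            t = np.arange(s)
            c = np.polyfit(t, seg, 1)           # local linear trend
            f2.append(np.mean((seg - np.polyval(c, t)) ** 2))
        F.append(np.sqrt(np.mean(f2)))
    slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return slope

rng = np.random.default_rng(0)
h = dfa_hurst(rng.standard_normal(20000))       # white noise: H near 0.5
```

Running the estimator over returns aggregated at increasing horizons, as the paper does, shows at which horizon H settles to the efficient-market value of 0.5.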
Mixed time integration methods for transient thermal analysis of structures
NASA Technical Reports Server (NTRS)
Liu, W. K.
1982-01-01
The computational methods used to predict and optimize the thermal structural behavior of aerospace vehicle structures are reviewed. In general, two classes of algorithms, implicit and explicit, are used in transient thermal analysis of structures. Each of these two methods has its own merits. Due to the different time scales of the mechanical and thermal responses, the selection of a time integration method can be a difficult yet critical factor in the efficient solution of such problems. Therefore mixed time integration methods for transient thermal analysis of structures are being developed. The computer implementation aspects and numerical evaluation of these mixed time implicit-explicit algorithms in thermal analysis of structures are presented. A computationally useful method of estimating the critical time step for linear quadrilateral elements is also given. Numerical tests confirm the stability criterion and accuracy characteristics of the methods. The superiority of these mixed time methods to the fully implicit method or the fully explicit method is also demonstrated.
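As a one-dimensional analogue of such an estimate, the critical explicit step follows from the largest eigenvalue of the discrete conduction operator and approaches the familiar Δx²/(2α) bound; a sketch for the 1-D case, not the paper's quadrilateral-element formula:

```python
import numpy as np

def critical_dt_1d(alpha, dx, n=50):
    """Critical explicit time step for 1-D conduction: dt_crit = 2 / lam_max
    of the matrix alpha * tridiag(-1, 2, -1) / dx^2; the textbook estimate
    dx^2 / (2 * alpha) is its large-n limit."""
    K = (np.diag(np.full(n, 2.0))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) * alpha / dx**2
    lam_max = np.linalg.eigvalsh(K).max()
    return 2.0 / lam_max

dt = critical_dt_1d(alpha=1.0, dx=0.1)   # slightly above dx^2 / (2 alpha)
```

An element-level estimate of this kind lets a mixed scheme decide, mesh region by mesh region, whether the explicit update is stable or the implicit one must be used.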
SLIC superpixels compared to state-of-the-art superpixel methods.
Achanta, Radhakrishna; Shaji, Appu; Smith, Kevin; Lucchi, Aurelien; Fua, Pascal; Süsstrunk, Sabine
2012-11-01
Computer vision applications have come to rely increasingly on superpixels in recent years, but it is not always clear what constitutes a good superpixel algorithm. In an effort to understand the benefits and drawbacks of existing methods, we empirically compare five state-of-the-art superpixel algorithms for their ability to adhere to image boundaries, speed, memory efficiency, and their impact on segmentation performance. We then introduce a new superpixel algorithm, simple linear iterative clustering (SLIC), which adapts a k-means clustering approach to efficiently generate superpixels. Despite its simplicity, SLIC adheres to boundaries as well as or better than previous methods. At the same time, it is faster and more memory efficient, improves segmentation performance, and is straightforward to extend to supervoxel generation.
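A compact grayscale rendition of the SLIC idea, k-means in a joint (intensity, x, y) space with the combined distance D = sqrt(dc² + m²·(ds/S)²) and a search window limited to 2S around each centre, can be sketched as follows (simplified from the five-dimensional CIELAB original):

```python
import numpy as np

def slic(img, n_seg=4, m=10.0, iters=5):
    """Compact SLIC on a grayscale image: localized k-means in (l, x, y)."""
    h, w = img.shape
    S = int(np.sqrt(h * w / n_seg))                 # superpixel grid spacing
    centers = [[img[y, x], float(y), float(x)]
               for y in np.arange(S // 2, h, S)
               for x in np.arange(S // 2, w, S)]
    yy, xx = np.mgrid[0:h, 0:w]
    labels = np.zeros((h, w), int)
    for _ in range(iters):
        dist = np.full((h, w), np.inf)
        for k, (c, cy, cx) in enumerate(centers):
            # only search a 2S x 2S window around the centre
            y0, y1 = max(0, int(cy) - S), min(h, int(cy) + S + 1)
            x0, x1 = max(0, int(cx) - S), min(w, int(cx) + S + 1)
            dc = img[y0:y1, x0:x1] - c
            ds = np.hypot(yy[y0:y1, x0:x1] - cy, xx[y0:y1, x0:x1] - cx)
            D = np.sqrt(dc ** 2 + (m * ds / S) ** 2)
            better = D < dist[y0:y1, x0:x1]
            dist[y0:y1, x0:x1][better] = D[better]
            labels[y0:y1, x0:x1][better] = k
        for k in range(len(centers)):               # recompute the centres
            mask = labels == k
            if mask.any():
                centers[k] = [img[mask].mean(), yy[mask].mean(), xx[mask].mean()]
    return labels

# four flat quadrants; SLIC should recover them as four superpixels
img = np.zeros((20, 20))
img[:10, 10:], img[10:, :10], img[10:, 10:] = 1.0, 2.0, 3.0
labels = slic(img)
```

The bounded search window is what makes SLIC O(N) in the number of pixels, the property the comparison above credits for its speed and memory efficiency.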
Norzagaray-Valenzuela, Claudia D; Germán-Báez, Lourdes J; Valdez-Flores, Marco A; Hernández-Verdugo, Sergio; Shelton, Luke M; Valdez-Ortiz, Angel
2018-05-16
Microalgae are photosynthetic microorganisms widely used for the production of highly valued compounds, and recently they have been shown to be promising as a system for the heterologous expression of proteins. Several transformation methods have been developed successfully, of which the Agrobacterium tumefaciens-mediated method remains the most promising. However, microalgal transformation efficiency by A. tumefaciens has been shown to vary depending on several transformation conditions. The present study aimed to establish an efficient genetic transformation system in the green microalga Dunaliella tertiolecta using the A. tumefaciens method. The parameters assessed were the infection medium, the concentration of A. tumefaciens and the co-culture time. As a preliminary screening, the expression of the gusA gene and the viability of transformed cells were evaluated and used to calculate a novel parameter called the Transformation Efficiency Index (TEI). The statistical analysis of TEI values identified five treatments with the highest gusA gene expression. To ensure stable transformation, transformed colonies were cultured on selective medium containing hygromycin B, and the DNA of resistant colonies was extracted after five subcultures and analyzed molecularly by PCR. The results revealed that treatments using solid infection medium, A. tumefaciens at OD600 = 0.5 and a co-culture time of 72 h exhibited the highest percentage of stable gusA expression. Overall, this study established an efficient, optimized A. tumefaciens-mediated genetic transformation of D. tertiolecta, a relatively easy procedure requiring no expensive equipment. This simple and efficient protocol opens the possibility of further genetic manipulation of this commercially important microalga for biotechnological applications.
Oikawa, Hiroyuki; Takahashi, Takumi; Kamonprasertsuk, Supawich; Takahashi, Satoshi
2018-01-31
Single-molecule (sm) fluorescence time series measurements based on the line confocal optical system are a powerful strategy for the investigation of the structure, dynamics, and heterogeneity of biological macromolecules. This method enables the detection of more than several thousands of fluorescence photons per millisecond from single fluorophores, implying that the potential time resolution for measurements of the fluorescence resonance energy transfer (FRET) efficiency is 10 μs. However, the necessity of using imaging photodetectors in the method limits the time resolution in the FRET efficiency measurements to approximately 100 μs. In this investigation, a new photodetector called a hybrid photodetector (HPD) was incorporated into the line confocal system to improve the time resolution without sacrificing the length of the time series detection. Among several settings examined, the system based on a slit width of 10 μm and a high-speed counting device made the best of the features of the line confocal optical system and the HPD. This method achieved a time resolution of 10 μs and an observation time of approximately 5 ms in the sm-FRET time series measurements. The developed device was used for the native state of the B domain of protein A.
A "hydrokinematic" method of measuring the glide efficiency of a human swimmer.
Naemi, Roozbeh; Sanders, Ross H
2008-12-01
The aim of this study was to develop and test a method of quantifying glide efficiency, defined as the ability of the body to maintain its velocity over time and to minimize deceleration through a rectilinear glide. Glide efficiency should be determined in a way that accounts for both the inertial and resistive characteristics of the gliding body as well as the instantaneous velocity. A displacement function (parametric curve) was obtained from the equation of motion of the body during a horizontal rectilinear glide. The parameter values of the displacement curve that best fit the displacement-time data of a body during a rectilinear horizontal glide represent the glide factor and the initial velocity of the particular glide interval. The glide factor is a measure of glide efficiency and indicates the ability of the body to minimize deceleration at each corresponding velocity. Glide efficiency depends on the hydrodynamic characteristics of the body, which are influenced by the body's shape as well as its size. To distinguish the effects of size and shape on glide efficiency, a size-related glide constant and a shape-related glide coefficient were determined as separate entities; the glide factor is the product of these two parameters. The goodness-of-fit statistics indicated that the displacement function found for each glide interval closely represents the real displacement data of a body in a rectilinear horizontal glide. The accuracy of the method was indicated by a relative standard error of calculation of less than 2.5%, and the method was able to distinguish between subjects in their glide efficiency. The glide factor was found to increase with decreasing velocity, and the glide coefficient increased with decreasing Reynolds number. The method is sufficiently accurate to distinguish between individual swimmers in terms of their glide efficiency.
The separation of the glide factor into a size-related glide constant and a shape-related glide coefficient enabled the effects of size and shape to be quantified.
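The curve-fitting step can be sketched as follows. If the resistive force is assumed to scale with the square of velocity, the equation of motion yields a logarithmic displacement function whose two parameters are the glide factor and the initial velocity. The functional form and the brute-force fitting routine below are our assumptions for illustration, not the authors' published procedure:

```python
import math

def glide_displacement(t, G, v0):
    # s(t) = G*ln(1 + v0*t/G): solution of m*dv/dt = -c*v^2 with glide factor G = m/c
    return G * math.log(1.0 + v0 * t / G)

def fit_glide(times, disps, G_range, v0_range, steps=60):
    """Brute-force least squares over a (G, v0) grid, refined once around the best cell."""
    def sse(G, v0):
        return sum((glide_displacement(t, G, v0) - s) ** 2 for t, s in zip(times, disps))
    for _ in range(2):
        gs = [G_range[0] + i * (G_range[1] - G_range[0]) / steps for i in range(steps + 1)]
        vs = [v0_range[0] + i * (v0_range[1] - v0_range[0]) / steps for i in range(steps + 1)]
        best = min((sse(G, v), G, v) for G in gs for v in vs)
        dG = (G_range[1] - G_range[0]) / steps
        dv = (v0_range[1] - v0_range[0]) / steps
        G_range = (best[1] - dG, best[1] + dG)
        v0_range = (best[2] - dv, best[2] + dv)
    return best[1], best[2]
```

With noise-free synthetic data the grid search recovers the generating parameters; real glide data would call for a proper nonlinear least-squares routine.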
Quadratic adaptive algorithm for solving cardiac action potential models.
Chen, Min-Hung; Chen, Po-Yuan; Luo, Ching-Hsing
2016-10-01
An adaptive integration method is proposed for computing cardiac action potential models accurately and efficiently. Time steps are adaptively chosen by solving a quadratic formula involving the first and second derivatives of the membrane action potential. To improve the numerical accuracy, we devise an extremum-locator (el) function to predict the local extremum when approaching the peak amplitude of the action potential. In addition, a time step restriction (tsr) technique is designed to limit the growth of time steps, and thus prevent the membrane potential from changing abruptly. The performance of the proposed method is tested using the Luo-Rudy phase 1 (LR1), dynamic (LR2), and human O'Hara-Rudy dynamic (ORd) ventricular action potential models, and the Courtemanche atrial model incorporating a Markov sodium channel model. Numerical experiments demonstrate that the action potential generated using the proposed method is more accurate than that using the traditional Hybrid method, especially near the peak region. The traditional Hybrid method may choose large time steps near the peak region, which sometimes causes the action potential to become distorted. In contrast, the proposed method chooses very fine time steps in the peak region but large time steps in the smooth region, and the resulting profiles are smoother and closer to the reference solution. In the test on the stiff Markov ionic channel model, the Hybrid method blows up if the allowable time step is set greater than 0.1 ms, whereas our method adjusts the time step size automatically and remains stable. Overall, the proposed method is more accurate than, and as efficient as, the traditional Hybrid method, especially for the human ORd model. The proposed method shows improvement for action potentials with a non-smooth morphology, and further investigation is needed to determine whether the method is helpful during propagation of the action potential. Copyright © 2016 Elsevier Ltd. All rights reserved.
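The step-selection idea can be sketched as follows: given a tolerance on the predicted change in membrane potential over one step, bound |V'|Δt + ½|V''|Δt² by tol and take the positive root of the quadratic. This is a minimal sketch of the idea, using a simple absolute-value bound rather than the paper's exact formula, and omitting the el and tsr refinements:

```python
import math

def quadratic_step(dV, d2V, tol, dt_min=1e-4, dt_max=0.5):
    """Largest dt with |dV|*dt + 0.5*|d2V|*dt^2 <= tol (positive quadratic root)."""
    a, b = 0.5 * abs(d2V), abs(dV)
    if a < 1e-12:                       # second derivative negligible: linear bound
        dt = tol / b if b > 1e-12 else dt_max
    else:
        dt = (-b + math.sqrt(b * b + 4.0 * a * tol)) / (2.0 * a)
    return max(dt_min, min(dt, dt_max))
```

During a fast upstroke (large |V'| and |V''|) the step shrinks automatically; in the plateau it relaxes toward dt_max, mirroring the fine-near-peak, coarse-in-smooth-region behavior described above.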
NASA Astrophysics Data System (ADS)
Chen, Ying; Lowengrub, John; Shen, Jie; Wang, Cheng; Wise, Steven
2018-07-01
We develop efficient energy-stable numerical methods for solving isotropic and strongly anisotropic Cahn-Hilliard systems with the Willmore regularization. The scheme, which involves adaptive mesh refinement and a nonlinear multigrid finite difference method, is constructed based on a convex splitting approach. We prove that, for the isotropic Cahn-Hilliard system with the Willmore regularization, the total free energy of the system is non-increasing for any time step and mesh size. A straightforward modification of the scheme is then used to solve the regularized strongly anisotropic Cahn-Hilliard system, and it is numerically verified that the discrete energy of the anisotropic system is also non-increasing and that the system can be solved efficiently using the modified stable method. We present numerical results in both two and three dimensions that are in good agreement with those of earlier work on these topics. Numerical simulations are presented to demonstrate the accuracy and efficiency of the proposed methods.
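The convex splitting idea behind the scheme's unconditional energy stability can be illustrated on a scalar toy problem (our reduction, not the paper's PDE system): for the double-well energy F(u) = ¼(u² − 1)², write F = Fc − Fe with convex Fc = ¼u⁴ and Fe = ½u² (up to a constant), treat Fc implicitly and Fe explicitly in the gradient flow, and the discrete energy is non-increasing for any time step:

```python
def energy(u):
    # double-well free energy F(u) = (1/4)(u^2 - 1)^2
    return 0.25 * (u * u - 1.0) ** 2

def convex_split_step(u, dt, iters=50):
    """Implicit convex part u^3, explicit concave part -u:
    solve u_new + dt*u_new**3 = u + dt*u by scalar Newton iteration."""
    rhs = u + dt * u
    x = u
    for _ in range(iters):
        g = x + dt * x ** 3 - rhs
        x -= g / (1.0 + 3.0 * dt * x * x)
    return x

# even with a very large step the discrete energy never increases
u, dt = 2.0, 10.0
energies = [energy(u)]
for _ in range(20):
    u = convex_split_step(u, dt)
    energies.append(energy(u))
```

Even with Δt = 10 the energy decreases monotonically toward the well at u = 1; a fully explicit scheme at this step size would diverge. The full scheme applies the same splitting to the Cahn-Hilliard free energy on a mesh.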
Finite-Element Methods for Real-Time Simulation of Surgery
NASA Technical Reports Server (NTRS)
Basdogan, Cagatay
2003-01-01
Two finite-element methods have been developed for mathematical modeling of the time-dependent behaviors of deformable objects and, more specifically, the mechanical responses of soft tissues and organs in contact with surgical tools. These methods may afford the computational efficiency needed to satisfy the requirement to obtain computational results in real time for simulating surgical procedures as described in Simulation System for Training in Laparoscopic Surgery (NPO-21192) on page 31 in this issue of NASA Tech Briefs. Simulation of the behavior of soft tissue in real time is a challenging problem because of the complexity of soft-tissue mechanics. The responses of soft tissues are characterized by nonlinearities and by spatial inhomogeneities and rate and time dependences of material properties. Finite-element methods seem promising for integrating these characteristics of tissues into computational models of organs, but they demand much central-processing-unit (CPU) time and memory, and the demand increases with the number of nodes and degrees of freedom in a given finite-element model. Hence, as finite-element models become more realistic, it becomes more difficult to compute solutions in real time. In both of the present methods, one uses approximate mathematical models trading some accuracy for computational efficiency and thereby increasing the feasibility of attaining real-time update rates. The first of these methods is based on modal analysis. In this method, one reduces the number of differential equations by selecting only the most significant vibration modes of an object (typically, a suitable number of the lowest-frequency modes) for computing deformations of the object in response to applied forces.
Newmark local time stepping on high-performance computing architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rietmann, Max, E-mail: max.rietmann@erdw.ethz.ch; Institute of Geophysics, ETH Zurich; Grote, Marcus, E-mail: marcus.grote@unibas.ch
In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100x). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large-scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.
Technical Efficiency and Organ Transplant Performance: A Mixed-Method Approach
de-Pablos-Heredero, Carmen; Fernández-Renedo, Carlos; Medina-Merodio, Jose-Amelio
2015-01-01
Mixed methods research is useful for understanding complex processes. Organ transplants are complex processes in need of improved final performance in times of budgetary restriction. The main objective of this article is to use a mixed-method approach to quantify the technical efficiency and the excellence achieved in organ transplant systems and to demonstrate the influence of organizational structures and internal processes on the observed technical efficiency. The results show that it is possible to implement mechanisms for the measurement of the different components by making use of quantitative and qualitative methodologies. The analysis shows a positive relationship between the levels of the Baldrige indicators and the observed technical efficiency in the donation and transplant units of the 11 analyzed hospitals. It is therefore possible to conclude that high levels on the Baldrige indexes are a necessary condition for reaching an increased level of the service offered. PMID:25950653
Sparse Contextual Activation for Efficient Visual Re-Ranking.
Bai, Song; Bai, Xiang
2016-03-01
In this paper, we propose an extremely efficient algorithm for visual re-ranking. By considering the original pairwise distances in the contextual space, we develop a feature vector called sparse contextual activation (SCA) that encodes the local distribution of an image. Hence, the re-ranking task can be accomplished simply by vector comparison under the generalized Jaccard metric, which has a theoretical interpretation in fuzzy set theory. To improve the time efficiency of the re-ranking procedure, an inverted index is introduced to speed up the computation of the generalized Jaccard metric; as a result, the average re-ranking time for a given query can be kept within 1 ms. Furthermore, inspired by query expansion, we develop an additional method called local consistency enhancement on top of SCA to improve retrieval performance in an unsupervised manner. Since retrieval with a single feature may not be satisfactory, we also exploit a robust feature fusion algorithm based on SCA that combines multiple complementary features while preserving high time efficiency. We assess the proposed method in various visual re-ranking tasks. Experimental results on the Princeton shape benchmark (3D object), WM-SRHEC07 (3D competition), YAEL data set B (face), MPEG-7 data set (shape), and Ukbench data set (image) demonstrate the effectiveness and efficiency of SCA.
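The scoring step can be sketched as follows. For nonnegative vectors the generalized Jaccard similarity is Σmin(xᵢ, yᵢ) / Σmax(xᵢ, yᵢ), and for sparse vectors the max-sum can be rewritten as |x|₁ + |y|₁ − Σmin, so an inverted index only needs to accumulate the min-sum over shared dimensions. The data layout below is our assumption for illustration, not the authors' implementation:

```python
def generalized_jaccard(x, y):
    # for nonnegative dense vectors: sum(min) / sum(max)
    num = sum(min(a, b) for a, b in zip(x, y))
    den = sum(max(a, b) for a, b in zip(x, y))
    return num / den if den else 0.0

def build_inverted_index(database):
    # database: {image_id: {dim: weight}} sparse SCA-style vectors
    index = {}
    for img, vec in database.items():
        for dim, w in vec.items():
            index.setdefault(dim, []).append((img, w))
    return index

def rerank(query, index, database):
    """Accumulate sum(min) via postings; sum(max) = |q|_1 + |d|_1 - sum(min)."""
    q_sum = sum(query.values())
    mins = {}
    for dim, qw in query.items():
        for img, w in index.get(dim, []):
            mins[img] = mins.get(img, 0.0) + min(qw, w)
    scores = {img: m / (q_sum + sum(database[img].values()) - m)
              for img, m in mins.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])
```

Only images sharing at least one nonzero dimension with the query are ever touched, which is what keeps the per-query cost in the millisecond range for sparse activations.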
Real-Time Stability Margin Measurements for X-38 Robustness Analysis
NASA Technical Reports Server (NTRS)
Bosworth, John T.; Stachowiak, Susan J.
2005-01-01
A method has been developed for real-time stability margin measurement calculations. The method relies on a tailored forced excitation targeted to a specific frequency range, with computation of the frequency response matched to the specific frequencies contained in the excitation. A recursive Fourier transformation is used to make the method compatible with real-time calculation. The method was incorporated into the X-38 nonlinear simulation and applied to an X-38 robustness test. X-38 stability margins were calculated for different variations in aerodynamic and mass properties over the vehicle flight trajectory. The new method produced results comparable to more traditional stability analysis techniques while providing more complete coverage and increased efficiency.
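The recursive Fourier transformation can be sketched as a sliding DFT that tracks one frequency bin of the last N samples with O(1) work per new sample: the update S ← (S − x_oldest + x_newest)·e^{j2πk/N} keeps the bin exact as the window slides. This is a generic sliding-DFT sketch, not the X-38 flight code:

```python
import cmath, math

def direct_bin(window, k):
    # reference: one bin of the standard DFT of the window
    N = len(window)
    return sum(x * cmath.exp(-2j * cmath.pi * k * m / N)
               for m, x in enumerate(window))

class SlidingDFT:
    """Recursively tracks one DFT bin of the last N samples in O(1) per sample."""
    def __init__(self, N, k):
        self.N = N
        self.twiddle = cmath.exp(2j * cmath.pi * k / N)
        self.buf = [0.0] * N   # circular buffer of the current window
        self.S = 0 + 0j
        self.i = 0
    def push(self, x):
        slot = self.i % self.N
        oldest = self.buf[slot]
        self.buf[slot] = x
        self.i += 1
        self.S = (self.S - oldest + x) * self.twiddle
        return self.S

# demo: track bin k of a cosine sitting exactly on that bin frequency
N, k = 8, 2
sd = SlidingDFT(N, k)
xs = [math.cos(2 * math.pi * k * n / N) for n in range(16)]
for x in xs:
    S = sd.push(x)
ref = direct_bin(xs[-N:], k)   # exact DFT of the current window
```

Tracking only the bins matched to the frequencies actually present in the forced excitation, as the abstract describes, avoids computing a full FFT every frame.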
DOE Office of Scientific and Technical Information (OSTI.GOV)
Formanek, Martin; Vana, Martin; Houfek, Karel
2010-09-30
We compare the efficiency of two methods for the numerical solution of the time-dependent Schrödinger equation, namely the Chebyshev method and the recently introduced generalized Crank-Nicolson method. The free propagation of a particle in one dimension is used as a test system. The space discretization is based on high-order finite differences to approximate the kinetic energy operator in the Hamiltonian accurately. We show that the choice of the more effective method depends on how many wave functions must be calculated during the given time interval to obtain relevant and reasonably accurate information about the system, i.e. on the choice of the time step.
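For reference, a minimal Crank-Nicolson step for the free-particle case is sketched below: the textbook scheme with a second-order finite-difference Laplacian and a tridiagonal Thomas solve (ħ = m = 1), not the paper's generalized variant or its high-order discretization. Because the Crank-Nicolson propagator is unitary for Hermitian H, the wave-function norm is preserved to roundoff:

```python
import cmath

def crank_nicolson_step(psi, dx, dt):
    """One CN step for i dpsi/dt = -(1/2) psi_xx (hbar = m = 1), Dirichlet ends."""
    N = len(psi)
    r = 1j * dt / (4.0 * dx * dx)
    # implicit side: (1 + 2r) psi_j - r (psi_{j-1} + psi_{j+1}); explicit side mirrored
    d = []
    for j in range(N):
        left = psi[j - 1] if j > 0 else 0j
        right = psi[j + 1] if j < N - 1 else 0j
        d.append((1 - 2 * r) * psi[j] + r * (left + right))
    # Thomas algorithm for the constant-coefficient tridiagonal system
    a, b, c = -r, 1 + 2 * r, -r
    cp, dp = [0j] * N, [0j] * N
    cp[0], dp[0] = c / b, d[0] / b
    for j in range(1, N):
        m = b - a * cp[j - 1]
        cp[j] = c / m
        dp[j] = (d[j] - a * dp[j - 1]) / m
    out = [0j] * N
    out[-1] = dp[-1]
    for j in range(N - 2, -1, -1):
        out[j] = dp[j] - cp[j] * out[j + 1]
    return out

# demo: a Gaussian wave packet; the CN propagator is unitary, so the norm is preserved
dx, dt = 0.1, 0.01
grid = [(j - 100) * dx for j in range(200)]
psi = [cmath.exp(-x * x + 1j * x) for x in grid]
norm0 = sum(abs(p) ** 2 for p in psi) * dx
for _ in range(10):
    psi = crank_nicolson_step(psi, dx, dt)
norm1 = sum(abs(p) ** 2 for p in psi) * dx
```

The Chebyshev method, by contrast, expands the full propagator e^{-iHt} in Chebyshev polynomials and pays off when few intermediate wave functions are needed, which is the trade-off the abstract quantifies.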
Efficient Power Network Analysis with Modeling of Inductive Effects
NASA Astrophysics Data System (ADS)
Zeng, Shan; Yu, Wenjian; Hong, Xianlong; Cheng, Chung-Kuan
In this paper, an efficient method is proposed to accurately analyze large-scale power/ground (P/G) networks in which inductive parasitics are modeled with partial reluctances. The method is based on frequency-domain circuit analysis and the vector fitting technique [14], and obtains the time-domain voltage response at given P/G nodes. The frequency-domain circuit equation including partial reluctances is derived and then solved with the GMRES algorithm using rescaling, preconditioning, and recycling techniques. Owing to the sparsified reluctance matrix and the iterative solution of the frequency-domain circuit equations, the proposed method is able to handle large-scale P/G networks with complete inductive modeling. Numerical results show that the proposed method is orders of magnitude faster than HSPICE and several times faster than INDUCTWISE [4], and that it is capable of handling inductive P/G structures with more than 100,000 wire segments.
Rahbar, Mohammad H; Choi, Sangbum; Hong, Chuan; Zhu, Liang; Jeon, Sangchoon; Gardiner, Joseph C
2018-01-01
We propose a nonparametric shrinkage estimator for the median survival times from several independent samples of right-censored data, which combines the samples with hypothesis information to improve efficiency. We compare the efficiency of the proposed shrinkage estimation procedure to that of the unrestricted estimator and the combined estimator through extensive simulation studies. Our results indicate that the performance of these estimators depends on the strength of homogeneity of the medians. When homogeneity holds, the combined estimator is the most efficient; however, it becomes inconsistent when homogeneity fails. The proposed shrinkage estimator, in contrast, remains efficient: its efficiency decreases as the survival medians deviate from equality, but it is expected to remain at least as good as that of the unrestricted estimator. Our simulation studies also indicate that the proposed shrinkage estimator is robust to moderate levels of censoring. We demonstrate the application of these methods by estimating the median time for trauma patients to receive red blood cells in the Prospective Observational Multi-center Major Trauma Transfusion (PROMMTT) study.
Protein immobilization onto various surfaces using a polymer-bound isocyanate
NASA Astrophysics Data System (ADS)
Kang, Hyun-Jin; Cha, Eun Ji; Park, Hee-Deung
2015-01-01
Silane coupling agents have been widely used for immobilizing proteins onto inorganic surfaces. However, the immobilization method using silane coupling agents requires several treatment steps, and its application is limited to surfaces bearing hydroxyl groups. The aim of this study was to develop a novel method that overcomes these limitations using a polymer-bound isocyanate. The polymer-bound isocyanate was first dissolved in organic solvent and used to dip-coat inorganic surfaces. Proteins were then immobilized onto the dip-coated surfaces through the formation of urea bonds between the isocyanate groups of the polymer and the amine groups of the protein. The reaction was verified by FT-IR, in which the NCO stretching peaks disappeared and the CO and NH stretching peaks appeared after immobilization. The immobilization efficiency of the newly developed method was insensitive to reaction temperature (4-50 °C), but increased with reaction time, reaching a maximum after 4 h. Furthermore, the method showed immobilization efficiency comparable to that of the silane-based method and was applicable to surfaces that cannot form hydroxyl groups. Taken together, the newly developed method provides a simple and efficient platform for immobilizing proteins onto surfaces.
Exploiting Efficient Transpacking for One-Sided Communication and MPI-IO
NASA Astrophysics Data System (ADS)
Mir, Faisal Ghias; Träff, Jesper Larsson
Based on a construction of so-called input-output datatypes that define a mapping between non-consecutive input and output buffers, we outline an efficient method for copying structured data. We term this operation transpacking, and show how it can be applied in the MPI implementation of one-sided communication and MPI-IO. For one-sided communication via shared memory, we demonstrate the expected performance improvements of up to a factor of two. For individual MPI-IO, the time to read from or write to file dominates the overall time, but even here efficient transpacking can reduce file I/O time considerably in some scenarios. The reported results were achieved on a single NEC SX-8 vector node.
Simplified dichromated gelatin hologram recording process
NASA Technical Reports Server (NTRS)
Georgekutty, Tharayil G.; Liu, Hua-Kuang
1987-01-01
A simplified method for making dichromated gelatin (DCG) holographic optical elements (HOE) has been discovered. The method is much less tedious and it requires a period of processing time comparable with that for processing a silver halide hologram. HOE characteristics including diffraction efficiency (DE), linearity, and spectral sensitivity have been quantitatively investigated. The quality of the holographic grating is very high. Ninety percent or higher diffraction efficiency has been achieved in simple plane gratings made by this process.
Iterated unscented Kalman filter for phase unwrapping of interferometric fringes.
Xie, Xianming
2016-08-22
A novel phase unwrapping algorithm based on an iterated unscented Kalman filter is proposed to estimate the unambiguous unwrapped phase of interferometric fringes. The method combines an iterated unscented Kalman filter with a robust phase gradient estimator based on an amended matrix pencil model and an efficient quality-guided strategy based on heap sort. The iterated unscented Kalman filter, among the most robust methods in the Bayesian framework for nonlinear signal processing, is applied for the first time to perform noise suppression and phase unwrapping of interferometric fringes simultaneously, which simplifies, and can even eliminate, the pre-filtering procedure that normally precedes phase unwrapping. The robust phase gradient estimator efficiently and accurately obtains from the interferometric fringes the phase gradient information required by the filtering model. The quality-guided strategy ensures that the proposed method rapidly unwraps pixels along a path from the high-quality to the low-quality areas of the wrapped phase image, which greatly improves the efficiency of phase unwrapping. Results obtained from synthetic and real data show that the proposed method obtains better solutions with acceptable time consumption, compared with some of the most widely used algorithms.
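For orientation, the elementary operation that any unwrapper generalizes is Itoh's one-dimensional method: rewrap the first differences into (−π, π] and accumulate them. The sketch below is this textbook baseline, not the iterated-UKF algorithm of the paper, which additionally suppresses noise and follows a 2-D quality-guided path:

```python
import math

def wrap(phi):
    # principal value in (-pi, pi]
    return math.atan2(math.sin(phi), math.cos(phi))

def unwrap_1d(wrapped):
    """Itoh's method: accumulate the rewrapped first differences."""
    out = [wrapped[0]]
    for k in range(1, len(wrapped)):
        out.append(out[-1] + wrap(wrapped[k] - wrapped[k - 1]))
    return out
```

Itoh's method fails when the true phase changes by more than π between samples or when noise corrupts the differences, which is precisely the regime the Kalman-filter formulation with quality guidance is designed to handle.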
Swab Protocol for Rapid Laboratory Diagnosis of Cutaneous Anthrax
Marston, Chung K.; Bhullar, Vinod; Baker, Daniel; Rahman, Mahmudur; Hossain, M. Jahangir; Chakraborty, Apurba; Khan, Salah Uddin; Hoffmaster, Alex R.
2012-01-01
The clinical laboratory diagnosis of cutaneous anthrax is generally established by conventional microbiological methods, such as culture and direct staining of smears of clinical specimens. However, these methods rely on the recovery of viable Bacillus anthracis cells from swabs of cutaneous lesions and often yield negative results. This study developed a rapid protocol for the detection of B. anthracis on clinical swabs. Three types of swabs, flocked-nylon, rayon, and polyester, were evaluated with 3 extraction methods: the swab extraction tube system (SETS), sonication, and vortexing. Swabs were spiked with virulent B. anthracis cells, and the methods were compared for their efficiency over time by culture and real-time PCR. Viability testing indicated that SETS yielded greater recovery of B. anthracis from 1-day-old swabs; however, reduced viability was consistent across the 3 extraction methods after 7 days, and nonviability was consistent by 28 days. Real-time PCR analysis showed that PCR amplification was not impacted by time for any swab extraction method and that the SETS method provided the lowest limit of detection. When evaluated using lesion swabs from cutaneous anthrax outbreaks, SETS yielded culture-negative, PCR-positive results. This study demonstrated that swab extraction methods differ in their efficiency of recovery of viable B. anthracis cells. Furthermore, the results indicated that culture is not reliable for isolation of B. anthracis from swabs at ≥7 days. Thus, we recommend the use of the SETS method with subsequent testing by culture and real-time PCR for the diagnosis of cutaneous anthrax from clinical swabs of cutaneous lesions. PMID:23035192
Liu, Zaizhi; Mo, Kailin; Fei, Shimin; Zu, Yuangang; Yang, Lei
2017-08-01
Proanthocyanidins were separated for the first time from Cinnamomum longepaniculatum leaves. An experiment-based extraction strategy was used to investigate the efficiency of an ultrasound-assisted method for proanthocyanidin extraction. The Plackett-Burman design revealed that the ultrasonication time, ultrasonic power, and liquid/solid ratio were the most significant of the six variables in the extraction process. Upon further optimization using a Box-Behnken design, the optimal conditions were obtained as follows: extraction temperature, 100°C; ethanol concentration, 70%; pH 5; ultrasonication power, 660 W; ultrasonication time, 44 min; liquid/solid ratio, 20 mL/g. Under these conditions, the extraction yield of proanthocyanidins using the ultrasound-assisted method was 7.88 ± 0.21 mg/g, higher than that obtained using traditional methods. The phloroglucinolysis products of the proanthocyanidins, including the terminal units and derivatives of the extension units, were tentatively identified using liquid chromatography with tandem mass spectrometry. Cinnamomum longepaniculatum proanthocyanidins have promising antioxidant and anti-nutritional properties. In summary, an ultrasound-assisted method combined with a response surface experimental design is an efficient methodology for the isolation of proanthocyanidins from Cinnamomum longepaniculatum leaves, and this method could be used for the separation of other bioactive compounds. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Assessing performance of alternative pavement marking materials.
DOT National Transportation Integrated Search
2010-01-01
Pavement markings need to be restriped from time to time to maintain retroreflectivity. Knowing which material provides the most economically efficient solution is important. Currently, no agreed upon method by which to evaluate the use of altern...
Algorithms and software for nonlinear structural dynamics
NASA Technical Reports Server (NTRS)
Belytschko, Ted; Gilbertsen, Noreen D.; Neal, Mark O.
1989-01-01
The objective of this research is to develop efficient methods for explicit time integration in nonlinear structural dynamics for computers which utilize both concurrency and vectorization. As a framework for these studies, the program WHAMS, which is described in Explicit Algorithms for the Nonlinear Dynamics of Shells (T. Belytschko, J. I. Lin, and C.-S. Tsay, Computer Methods in Applied Mechanics and Engineering, Vol. 42, 1984, pp 225 to 251), is used. There are two factors which make the development of efficient concurrent explicit time integration programs a challenge in a structural dynamics program: (1) the need for a variety of element types, which complicates the scheduling-allocation problem; and (2) the need for different time steps in different parts of the mesh, which is here called mixed delta t integration, so that a few stiff elements do not reduce the time steps throughout the mesh.
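The bookkeeping behind mixed-Δt integration can be sketched as follows: an explicit central-difference scheme is stable for an element only when Δt ≤ 2/ω_max for that element, so elements are grouped into classes that subcycle at Δt, Δt/2, Δt/4, … rather than forcing the whole mesh to the smallest stable step. The grouping rule below is a generic power-of-two assignment, our illustration rather than the WHAMS scheduler:

```python
def time_step_classes(element_freqs, dt_max):
    """Assign each element the class k whose step dt_max / 2**k first satisfies
    the central-difference stability bound dt <= 2/omega for that element."""
    classes = {}
    for elem, omega in element_freqs.items():
        critical = 2.0 / omega
        k = 0
        while dt_max / (2 ** k) > critical:
            k += 1
        classes[elem] = k
    return classes
```

Elements in class k take 2^k substeps per global step, so a few stiff elements no longer throttle the time step of the entire mesh.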
The time resolved SBS and SRS research in heavy water and its application in CARS
NASA Astrophysics Data System (ADS)
Liu, Jinbo; Gai, Baodong; Yuan, Hong; Sun, Jianfeng; Zhou, Xin; Liu, Di; Xia, Xusheng; Wang, Pengyuan; Hu, Shu; Chen, Ying; Guo, Jingwei; Jin, Yuqi; Sang, Fengting
2018-05-01
We present the time-resolved characteristics of stimulated Brillouin scattering (SBS) and backward stimulated Raman scattering (BSRS) in heavy water and their application in the coherent anti-Stokes Raman scattering (CARS) technique. A nanosecond pulse from a frequency-doubled Nd:YAG laser is introduced into a heavy water cell to generate SBS and BSRS beams. The SBS and BSRS beams are collinear, and their time-resolved characteristics are studied with a streak camera. Experiments show that they are an ideal source for an alignment-free CARS system, and that the time-resolved properties of the SBS and BSRS beams significantly affect the CARS efficiency. By inserting a dye cuvette into the collinear beams, the temporal overlap of the SBS and BSRS could be improved and the CARS efficiency thereby increased, even though the SBS energy was decreased. Possible methods to further improve the efficiency of this CARS system are also discussed.
Recurrence time statistics: versatile tools for genomic DNA sequence analysis.
Cao, Yinhe; Tung, Wen-Wen; Gao, J B
2004-01-01
With the completion of the human and a few model organisms' genomes, and with the genomes of many other organisms waiting to be sequenced, it has become increasingly important to develop faster computational tools capable of easily identifying structures and extracting features from DNA sequences. One of the more important structures in a DNA sequence is repeat-related; such repeats often have to be masked before protein-coding regions along a DNA sequence are identified or redundant expressed sequence tags (ESTs) are sequenced. Here we report a novel recurrence-time-based method for sequence analysis. The method can conveniently study all kinds of periodicity and exhaustively find all repeat-related features in a genomic DNA sequence. An efficient codon index is also derived from the recurrence time statistics; it has the salient features of being largely species-independent and working well on very short sequences. Efficient codon indices are key elements of successful gene-finding algorithms, and are particularly useful for determining whether a suspected EST belongs to a coding or non-coding region. We illustrate the power of the method by studying the genomes of E. coli, the yeast S. cerevisiae, the nematode worm C. elegans, and the human, Homo sapiens. Computationally, our method is very efficient; it allows us to carry out analysis of genomes on the whole-genome scale with a PC.
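The core computation is simple enough to sketch: record, for each k-mer, the gap since its previous occurrence, and histogram those gaps. Periodicities such as the 3-base codon structure of coding regions appear as peaks in the histogram. This is a minimal illustration of recurrence-time statistics, not the authors' full codon index:

```python
def recurrence_times(seq, kmer):
    # gaps between successive occurrences of one k-mer
    hits = [i for i in range(len(seq) - len(kmer) + 1) if seq[i:i + len(kmer)] == kmer]
    return [b - a for a, b in zip(hits, hits[1:])]

def recurrence_spectrum(seq, k):
    # histogram of recurrence times pooled over all k-mers;
    # periodicities (e.g. the 3-base codon period) show up as peaks
    last, hist = {}, {}
    for i in range(len(seq) - k + 1):
        word = seq[i:i + k]
        if word in last:
            gap = i - last[word]
            hist[gap] = hist.get(gap, 0) + 1
        last[word] = i
    return hist
```

A single pass with a dictionary of last-seen positions gives the full spectrum in linear time, which is what makes whole-genome analysis on a PC feasible.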
NASA Technical Reports Server (NTRS)
Oh, K. S.; Schutt-Aine, J.
1995-01-01
Modeling of interconnects and associated discontinuities has gained considerable interest over the last decade with the recent advances in high-speed digital circuits, although the theoretical bases for analyzing these structures were well established as early as the 1960s. Ongoing research at the present time is focused on devising methods that can be applied to more general geometries than those considered in earlier days while improving computational efficiency and accuracy. In this thesis, numerically efficient methods to compute the transmission line parameters of a multiconductor system and the equivalent capacitances of various strip discontinuities are presented based on the quasi-static approximation. The presented techniques are applicable to conductors embedded in an arbitrary number of dielectric layers, with two possible locations of ground planes at the top and bottom of the dielectric layers. The cross-sections of the conductors can be arbitrary as long as they can be described by polygons. An integral equation approach in conjunction with the collocation method is used in the presented methods. A closed-form Green's function is derived based on weighted real images, thus avoiding the nested infinite summations of the exact Green's function; the closed-form Green's function is therefore numerically more efficient than the exact one. All elements of the moment matrix are computed using closed-form formulas. Various numerical examples are considered to verify the presented methods, and a comparison of the computed results with other published results shows good agreement.
NASA Astrophysics Data System (ADS)
Ma, Pengcheng; Li, Daye; Li, Shuo
2016-02-01
Using one-minute high-frequency data for the Shanghai Composite Index (SHCI) and the Shenzhen Composite Index (SZCI) (2007-2008), we employ detrended fluctuation analysis (DFA) and detrended cross-correlation analysis (DCCA) with a rolling-window approach to observe the evolution of market efficiency and cross-correlation in the pre-crisis and crisis periods. Considering the fat-tailed distribution of the return time series, a statistical test based on a shuffling method is conducted to verify the null hypothesis of no long-term dependence. Our empirical research yields three main findings. First, Shanghai equity market efficiency deteriorated while Shenzhen equity market efficiency improved with the advent of the financial crisis. Second, the highly positive dependence between the SHCI and SZCI varies with time scale. Third, the financial crisis saw a significant increase in dependence between the SHCI and SZCI at shorter time scales but no significant change at longer time scales, providing evidence of contagion and an absence of interdependence during the crisis.
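The DFA step applied inside each rolling window can be sketched compactly (generic DFA-1, not the authors' exact pipeline): integrate the demeaned series into a profile, detrend it linearly in non-overlapping boxes of size w, and read the scaling exponent α off the log-log slope of the fluctuation function. α ≈ 0.5 signals an efficient (uncorrelated) market, α > 0.5 persistence:

```python
import math, random

def dfa(series, scales):
    """DFA-1: returns the scaling exponent alpha of F(w) ~ w^alpha."""
    n = len(series)
    mean = sum(series) / n
    profile, s = [], 0.0
    for x in series:
        s += x - mean
        profile.append(s)                      # integrated (cumulative) profile
    logw, logF = [], []
    for w in scales:
        sq_sum, count = 0.0, 0
        for start in range(0, n - w + 1, w):   # non-overlapping boxes
            seg = profile[start:start + w]
            t_mean = (w - 1) / 2.0
            y_mean = sum(seg) / w
            cov = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(seg))
            var = sum((t - t_mean) ** 2 for t in range(w))
            slope = cov / var                  # least-squares linear trend in the box
            sq_sum += sum((y - y_mean - slope * (t - t_mean)) ** 2
                          for t, y in enumerate(seg))
            count += w
        logw.append(math.log(w))
        logF.append(0.5 * math.log(sq_sum / count))
    lx, ly = sum(logw) / len(logw), sum(logF) / len(logF)
    num = sum((a - lx) * (b - ly) for a, b in zip(logw, logF))
    return num / sum((a - lx) ** 2 for a in logw)

# demo: uncorrelated returns should give alpha near 0.5 (the efficient-market benchmark)
random.seed(7)
alpha = dfa([random.gauss(0.0, 1.0) for _ in range(4096)], [8, 16, 32, 64, 128])
```

DCCA follows the same profile-and-detrend template but correlates the residuals of two series box by box, which is how the SHCI-SZCI dependence is measured per time scale.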
Sensitivity analysis of dynamic biological systems with time-delays.
Wu, Wu Hsiung; Wang, Feng Sheng; Chang, Maw Shang
2010-10-15
Mathematical modeling has long been applied to the study and analysis of complex biological systems. Some processes in biological systems, such as gene expression and feedback control in signal transduction networks, involve a time delay. These systems are represented as delay differential equation (DDE) models. Numerical sensitivity analysis of a DDE model by the direct method requires the solution of the model and sensitivity equations with time delays. The major effort is the computation of the Jacobian matrix when computing the solution of the sensitivity equations. The computation of partial derivatives of complex equations, either analytically or by symbolic manipulation, is time consuming, inconvenient, and prone to human error. To address this problem, an automatic approach to obtain the derivatives of complex functions efficiently and accurately is necessary. We previously proposed an efficient algorithm with adaptive step size control to compute the solution and dynamic sensitivities of biological systems described by ordinary differential equations (ODEs). Here, the adaptive direct-decoupled algorithm is extended to compute the solution and dynamic sensitivities of time-delay systems described by DDEs. To save human effort and avoid human error in the computation of partial derivatives, an automatic differentiation technique is embedded in the extended algorithm to evaluate the Jacobian matrix. The extended algorithm is implemented and applied to two realistic models with time delays: the cardiovascular control system and the TNF-α signal transduction network. The results show that the extended algorithm is a good tool for dynamic sensitivity analysis of DDE models with little user intervention. Compared with direct-coupled methods in theory, the extended algorithm is efficient, accurate, and easy to use for end users without a programming background to perform dynamic sensitivity analysis on complex biological systems with time delays.
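The automatic-differentiation idea used here to evaluate the Jacobian can be illustrated with a minimal forward-mode dual-number class. This is a generic sketch, not the authors' implementation, and the example function is hypothetical:

```python
class Dual:
    """Forward-mode automatic differentiation with dual numbers.

    A Dual carries a value and a vector of partial derivatives, so the
    Jacobian of a model right-hand side can be evaluated exactly
    (to machine precision) without symbolic manipulation."""
    def __init__(self, val, grad):
        self.val, self.grad = val, list(grad)

    def _coerce(self, o):
        return o if isinstance(o, Dual) else Dual(o, [0.0] * len(self.grad))

    def __add__(self, o):
        o = self._coerce(o)
        return Dual(self.val + o.val,
                    [a + b for a, b in zip(self.grad, o.grad)])
    __radd__ = __add__

    def __mul__(self, o):
        o = self._coerce(o)
        return Dual(self.val * o.val,
                    [self.val * b + o.val * a
                     for a, b in zip(self.grad, o.grad)])
    __rmul__ = __mul__

def jacobian(f, x):
    """Jacobian of f: R^n -> R^m at x via one dual-number sweep."""
    n = len(x)
    seeds = [Dual(xi, [1.0 if i == j else 0.0 for j in range(n)])
             for i, xi in enumerate(x)]
    return [fi.grad for fi in f(seeds)]
```

For example, `jacobian(lambda v: [v[0] * v[1], v[0] + v[1]], [2.0, 3.0])` returns the exact Jacobian `[[3.0, 2.0], [1.0, 1.0]]`, with no finite-difference truncation error.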
NASA Astrophysics Data System (ADS)
Hu, Shou-Cun; Ji, Jiang-Hui
2017-12-01
In asteroid rendezvous missions, the dynamical environment near an asteroid’s surface should be made clear prior to launch of the mission. However, most asteroids have irregular shapes, which lowers the efficiency of calculating their gravitational fields with the traditional polyhedral method. In this work, we propose a method to partition the space near an asteroid adaptively along three spherical coordinates and use Chebyshev polynomial interpolation to represent the gravitational acceleration in each cell. Moreover, we compare four different interpolation schemes to obtain the best precision with identical initial parameters. An error-adaptive octree division is incorporated to improve the interpolation precision near the surface. As an example, we take the typical irregularly shaped near-Earth asteroid 4179 Toutatis to demonstrate the advantage of this method, and we show that the efficiency can be increased by hundreds to thousands of times. Our results indicate that this method is applicable to other irregularly shaped asteroids and can greatly improve evaluation efficiency.
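A one-dimensional sketch of the cell-wise Chebyshev interpolation idea: fit the expensive field once at Chebyshev nodes inside a cell, then evaluate the cheap polynomial thereafter. The actual method interpolates the three acceleration components over 3-D spherical cells; the 1/r² profile, the cell bounds, and the degree below are illustrative assumptions:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def accel(r):
    """Stand-in for an expensive gravity evaluation (e.g. one
    polyhedral-method call); here a simple 1/r^2 radial profile."""
    return 1.0 / r**2

deg = 12
a, b = 1.0, 2.0                       # radial extent of one cell
# Chebyshev nodes of the first kind, mapped from [-1, 1] to [a, b]
nodes = np.cos(np.pi * (np.arange(deg + 1) + 0.5) / (deg + 1))
r_nodes = 0.5 * (b - a) * nodes + 0.5 * (b + a)
coeffs = C.chebfit(nodes, accel(r_nodes), deg)  # fit once per cell

def accel_interp(r):
    """Fast per-cell evaluation: map r back to [-1, 1], sum the series."""
    t = (2.0 * r - (a + b)) / (b - a)
    return C.chebval(t, coeffs)
```

Because the field is smooth inside each cell, the interpolation error decays geometrically with the degree, which is what allows the reported speed-ups without sacrificing precision.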
The Application of Sensors on Guardrails for the Purpose of Real Time Impact Detection
2012-03-01
collection methods; however, there are major differences in the measures of performance for policy goals and objectives (U.S. DOT, 2002). The goal here is...seriousness of this issue has motivated the US Department of Transportation and Transportation Research Board to develop and deploy new methods and... methods to integrate new sensing capabilities into existing Intelligent Transportation Systems in a time-efficient and cost-effective manner. In
Cesarean section using the Misgav Ladach method.
Federici, D; Lacelli, B; Muggiasca, L; Agarossi, A; Cipolla, L; Conti, M
1997-06-01
To stress the advantages of the Misgav Ladach method for cesarean section. In this study, the operative details and postoperative course of 139 patients who underwent cesarean section according to the Misgav Ladach method in 1995-96 are presented. The Misgav Ladach method reduces operation time, time to child delivery, and time of recovery. The rates of febrile morbidity, wound infection, and wound dehiscence are not affected by the new technique. Our study highlights the efficiency and safety of the Misgav Ladach method, and points out the speedy recovery, with early ambulation and resumption of drinking and eating, that brings cesarean delivery closer to natural childbirth.
İbiş, Birol
2014-01-01
This paper aims to obtain the approximate solution of the time-fractional advection-dispersion equation (FADE), involving Jumarie's modification of the Riemann-Liouville derivative, by the fractional variational iteration method (FVIM). The FVIM provides an analytical approximate solution in the form of a convergent series. Some examples are given, and the results indicate that the FVIM is highly accurate, efficient, and convenient for solving time-fractional advection-dispersion equations. PMID:24578662
Light manipulation for organic optoelectronics using bio-inspired moth's eye nanostructures.
Zhou, Lei; Ou, Qing-Dong; Chen, Jing-De; Shen, Su; Tang, Jian-Xin; Li, Yan-Qing; Lee, Shuit-Tong
2014-02-10
Organic-based optoelectronic devices, including organic light-emitting diodes (OLEDs) and organic solar cells (OSCs), hold great promise as low-cost, large-area electro-optical devices and renewable energy sources. However, further improvement in efficiency remains a daunting challenge due to limited light extraction or absorption in conventional device architectures. Here we report a universal method of light manipulation by integrating a dual-side bio-inspired moth's-eye nanostructure with broadband anti-reflective and quasi-omnidirectional properties. The light out-coupling efficiency of OLEDs with stacked triple emission units is over 2 times that of a conventional device, resulting in a drastic increase in external quantum efficiency and current efficiency to 119.7% and 366 cd A⁻¹ without introducing spectral distortion or directionality. Similarly, the light in-coupling efficiency of OSCs is increased by 20%, yielding an enhanced power conversion efficiency of 9.33%. We anticipate this method will offer a convenient and scalable route to inexpensive, high-efficiency organic optoelectronic designs.
Solving the Coupled System Improves Computational Efficiency of the Bidomain Equations
Southern, James A.; Plank, Gernot; Vigmond, Edward J.; Whiteley, Jonathan P.
2017-01-01
The bidomain equations are frequently used to model the propagation of cardiac action potentials across cardiac tissue. At the whole organ level, the size of the computational mesh required makes their solution a significant computational challenge. As the accuracy of the numerical solution cannot be compromised, efficiency of the solution technique is important to ensure that the results of the simulation can be obtained in a reasonable time whilst still encapsulating the complexities of the system. In an attempt to increase efficiency of the solver, the bidomain equations are often decoupled into one parabolic equation that is computationally very cheap to solve and an elliptic equation that is much more expensive to solve. In this study the performance of this uncoupled solution method is compared with an alternative strategy in which the bidomain equations are solved as a coupled system. This seems counter-intuitive, as the alternative method requires the solution of a much larger linear system at each time step. However, in tests on two 3-D rabbit ventricle benchmarks it is shown that the coupled method is up to 80% faster than the conventional uncoupled method, and that parallel performance is better for the larger coupled problem. PMID:19457741
Acceleration of FDTD mode solver by high-performance computing techniques.
Han, Lin; Xi, Yanping; Huang, Wei-Ping
2010-06-21
A two-dimensional (2D) compact finite-difference time-domain (FDTD) mode solver is developed based on a wave equation formalism in combination with the matrix pencil method (MPM). The method is validated for the calculation of both real guided and complex leaky modes of typical optical waveguides against a benchmark finite-difference (FD) eigenmode solver. By taking advantage of the inherent parallel nature of the FDTD algorithm, the mode solver is implemented on graphics processing units (GPUs) using the compute unified device architecture (CUDA). It is demonstrated that the high-performance computing technique leads to significant acceleration of the FDTD mode solver, with more than a 30-fold improvement in computational efficiency in comparison with the conventional FDTD mode solver running on the CPU of a standard desktop computer. The computational efficiency of the accelerated FDTD method is of the same order of magnitude as that of the standard finite-difference eigenmode solver, yet it requires much less memory (e.g., less than 10%). Therefore, the new method may serve as an efficient, accurate, and robust tool for mode calculation of optical waveguides, even when conventional eigenvalue mode solvers are no longer applicable due to memory limitations.
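For a flavor of the leapfrog FDTD update that a GPU implementation parallelizes per grid point, here is a bare-bones 1-D Yee loop in normalized units. This is a generic textbook sketch, not the 2-D compact mode solver of the paper; grid size, step count, and the Gaussian soft source are arbitrary assumptions:

```python
import numpy as np

def fdtd_1d(n_cells=400, n_steps=300, src=100):
    """1-D FDTD (Yee) leapfrog in normalized units at Courant number 1:
    update H from the spatial difference of E, then E from H, and add
    a soft Gaussian source. The untouched end cells act as PEC walls."""
    ez = np.zeros(n_cells)
    hy = np.zeros(n_cells - 1)
    for n in range(n_steps):
        hy += ez[1:] - ez[:-1]                  # H update (half step)
        ez[1:-1] += hy[1:] - hy[:-1]            # E update (half step)
        ez[src] += np.exp(-0.5 * ((n - 30) / 8.0) ** 2)  # soft source
    return ez
```

Every grid point in each update line is independent of the others, which is exactly the data parallelism that maps onto CUDA threads in the accelerated solver.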
Exact solutions to the time-fractional differential equations via local fractional derivatives
NASA Astrophysics Data System (ADS)
Guner, Ozkan; Bekir, Ahmet
2018-01-01
This article utilizes the local fractional derivative and the exp-function method to construct exact solutions of nonlinear time-fractional differential equations (FDEs). To illustrate the validity of the method, it is applied to the time-fractional Camassa-Holm equation and the time-fractional generalized fifth-order KdV equation. Moreover, exact solutions are obtained for equations formed by different parameter values related to the time-fractional generalized fifth-order KdV equation. This method is a reliable and efficient mathematical tool for solving FDEs, and it can be applied to other nonlinear FDEs.
NASA Astrophysics Data System (ADS)
He, Yang; Sun, Yajuan; Zhang, Ruili; Wang, Yulei; Liu, Jian; Qin, Hong
2016-09-01
We construct high-order symmetric volume-preserving methods for the relativistic dynamics of a charged particle by the splitting technique with processing. By expanding the phase space to include the time t, we give a more general construction of volume-preserving methods that can be applied to systems with time-dependent electromagnetic fields. The newly derived methods provide numerical solutions with good accuracy and conservation properties over long simulation times. Furthermore, because of the use of an accuracy-enhancing processing technique, the explicit methods achieve high-order accuracy and are more efficient than methods derived from standard compositions. The results are verified by numerical experiments. Linear stability analysis of the methods shows that the high-order processed method allows a larger time step size in numerical integrations.
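The paper builds high-order volume-preserving integrators by splitting. As a minimal, second-order illustration of a structure-preserving splitting scheme for charged-particle motion, here is the classic Boris push, which is not the authors' method; the nonrelativistic setting and folding q/m into a single parameter `q_m` are simplifying assumptions:

```python
import numpy as np

def boris_step(x, v, dt, q_m, E, B):
    """One step of the Boris scheme: split the Lorentz force into a
    half electric kick, an exact magnetic rotation, and a second half
    kick. The rotation preserves |v| exactly in a pure magnetic field,
    which is the kind of structural property splitting methods keep."""
    v_minus = v + 0.5 * dt * q_m * E
    t = 0.5 * dt * q_m * B
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)
    v_new = v_plus + 0.5 * dt * q_m * E
    return x + dt * v_new, v_new
```

With E = 0 and a uniform B, the particle speed is conserved to round-off over arbitrarily long runs, even at moderately large time steps, mirroring the long-time conservation behavior reported above.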
Joda, Tim; Lenherr, Patrik; Dedem, Philipp; Kovaltschuk, Irina; Bragger, Urs; Zitzmann, Nicola U
2017-10-01
The aim of this randomized controlled trial was to analyze implant impression techniques, applying intraoral scanning (IOS) and the conventional method, with respect to time efficiency, difficulty, and operator preference. One hundred participants (n = 100) with diverse levels of dental experience were included and randomly assigned to Group A, performing digital scanning (TRIOS Pod) first, or Group B, conducting conventional impressions (open tray with elastomer) first; the second method was performed consecutively. A customized maxillary model with a bone-level-type implant in the right canine position (FDI position 13) was mounted on a phantom training unit, realizing a standardized situation for all participants. The outcome parameter was time efficiency; the potential influence of clinical experience, the operator's perception of the level of difficulty, the applicability of each method, and subjective preferences were analyzed with Wilcoxon-Mann-Whitney and Kruskal-Wallis tests. Mean total work time varied between 5.01 ± 1.56 min (students) and 4.53 ± 1.34 min (dentists) for IOS, and between 12.03 ± 2.00 min (students) and 10.09 ± 1.15 min (dentists) for conventional impressions, with significant differences between the two methods. Neither assignment to Group A or B, nor gender, nor the number of impression-taking procedures influenced working time. The difficulty and applicability of IOS were perceived more favorably compared with conventional impressions, and the effectiveness of IOS was rated better by the majority of students (88%) and dentists (64%). While 76% of the students preferred IOS, 48% of the dentists favored conventional impressions, and 26% each preferred IOS or either technique. For single-implant sites, quadrant-like intraoral scanning (IOS) was more time efficient than the conventional full-arch impression technique in a phantom head simulating standardized optimal conditions. A high level of acceptance of IOS was observed among students and dentists.
© 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Evaluation of Simulated Clinical Breast Exam Motion Patterns Using Marker-Less Video Tracking
Azari, David P.; Pugh, Carla M.; Laufer, Shlomi; Kwan, Calvin; Chen, Chia-Hsiung; Yen, Thomas Y.; Hu, Yu Hen; Radwin, Robert G.
2016-01-01
Objective This study investigates using marker-less video tracking to evaluate hands-on clinical skills during simulated clinical breast examinations (CBEs). Background There are currently no standardized and widely accepted CBE screening techniques. Methods Experienced physicians attending a national conference conducted simulated CBEs presenting different pathologies with distinct tumorous lesions. Single-hand exam motion was recorded and analyzed using marker-less video tracking. Four kinematic measures were developed to describe temporal (time pressing and time searching) and spatial (area covered and distance explored) patterns. Results Mean differences between time pressing, area covered, and distance explored varied across the simulated lesions. Exams were objectively categorized as sporadic, localized, thorough, or efficient for both temporal and spatial categories based on spatiotemporal characteristics. The majority of trials were temporally or spatially thorough (78% and 91%), exhibiting proportionally greater time pressing and time searching (temporally thorough) and greater area probed with greater distance explored (spatially thorough). More efficient exams exhibited proportionally more time pressing with less time searching (temporally efficient) and greater area probed with less distance explored (spatially efficient). Just two (5.9%) of the trials exhibited both high temporal and spatial efficiency. Conclusions Marker-less video tracking was used to discriminate different examination techniques and measure when an exam changes from general searching to specific probing. The majority of participants exhibited more thorough than efficient patterns. Application Marker-less video kinematic tracking may be useful for quantifying clinical skills for training and assessment. PMID:26546381
Gram-Schmidt algorithms for covariance propagation
NASA Technical Reports Server (NTRS)
Thornton, C. L.; Bierman, G. J.
1977-01-01
This paper addresses the time propagation of triangular covariance factors. Attention is focused on the square-root-free factorization P = UDU^T, where U is unit upper triangular and D is diagonal. An efficient and reliable algorithm for U-D propagation is derived which employs Gram-Schmidt orthogonalization. Partitioning the state vector to distinguish bias and colored process noise parameters increases mapping efficiency. Cost comparisons of the U-D, Schmidt square-root covariance, and conventional covariance propagation methods are made using weighted arithmetic operation counts. The U-D time update is shown to be less costly than the Schmidt method; and, except in unusual circumstances, it is within 20% of the cost of conventional propagation.
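The U-D factorization at the heart of this propagation can be sketched as follows. This is the standard square-root-free (Bierman-style) factorization shown for context; it is the decomposition being propagated, not the paper's Gram-Schmidt time-update itself:

```python
import numpy as np

def udu_factor(P):
    """Square-root-free factorization P = U D U^T with U unit upper
    triangular and D diagonal (a backward Cholesky variant). Works
    column by column from the last row/column toward the first."""
    n = P.shape[0]
    P = P.copy()
    U = np.eye(n)
    d = np.zeros(n)
    for j in range(n - 1, -1, -1):
        d[j] = P[j, j]
        for i in range(j):
            U[i, j] = P[i, j] / d[j]
        # deflate the leading submatrix by the rank-one term just removed
        for i in range(j):
            for k in range(i + 1):
                P[k, i] -= U[k, j] * d[j] * U[i, j]
    return U, d
```

Because D carries the "squared" magnitudes, U-D filtering achieves square-root-level numerical robustness without any explicit square roots, which is why the operation counts compared in the paper favor it.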
Gram-Schmidt algorithms for covariance propagation
NASA Technical Reports Server (NTRS)
Thornton, C. L.; Bierman, G. J.
1975-01-01
This paper addresses the time propagation of triangular covariance factors. Attention is focused on the square-root-free factorization, P = UDU^T, where U is unit upper triangular and D is diagonal. An efficient and reliable algorithm for U-D propagation is derived which employs Gram-Schmidt orthogonalization. Partitioning the state vector to distinguish bias and colored process noise parameters increases mapping efficiency. Cost comparisons of the U-D, Schmidt square-root covariance and conventional covariance propagation methods are made using weighted arithmetic operation counts. The U-D time update is shown to be less costly than the Schmidt method; and, except in unusual circumstances, it is within 20% of the cost of conventional propagation.
NASA Astrophysics Data System (ADS)
Elgohary, T.; Kim, D.; Turner, J.; Junkins, J.
2014-09-01
Several methods exist for integrating the motion in high-order gravity fields. Some recent methods use an approximate starting orbit, and an efficient method is needed for generating warm starts that account for specific low-order gravity approximations. By introducing two scalar Lagrange-like invariants and employing the Leibniz product rule, the perturbed motion is integrated by a novel recursive formulation. The Lagrange-like invariants allow exact time derivatives of arbitrary order. Restricting attention to the perturbations due to the zonal harmonics J2 through J6, we illustrate the approach. The recursively generated vector-valued time derivatives of the trajectory are used to develop a continuation-series-based solution for propagating position and velocity. Numerical comparisons indicate performance improvements of ~70X over existing explicit Runge-Kutta methods while maintaining mm accuracy for the orbit predictions. The Modified Chebyshev Picard Iteration (MCPI) is an iterative path-approximation method for solving nonlinear ordinary differential equations. MCPI uses Picard iteration with orthogonal Chebyshev polynomial basis functions to recursively update the states. The key advantages of MCPI are as follows: 1) large segments of a trajectory can be approximated by evaluating the forcing function at multiple nodes along the current approximation during each iteration; 2) it can readily handle general gravity perturbations as well as non-conservative forces; 3) parallel implementations are possible. The Picard sequence converges to the solution over large time intervals when the forces are continuous and differentiable. Depending on the accuracy of the starting solution, however, MCPI may require a significant number of iterations and function evaluations compared to other integrators.
In this work, we provide an efficient methodology to establish good starting solutions from the continuation series method; this warm start improves the performance of the MCPI significantly and will likely be useful for other applications where efficiently computed approximate orbit solutions are needed.
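The Picard iteration underlying MCPI can be illustrated on a scalar problem. This sketch uses a plain time grid with trapezoidal quadrature rather than MCPI's Chebyshev nodes and polynomial basis, and the test equation y' = -y is an arbitrary choice; a warm start simply means seeding `y` with a good approximation instead of the constant initial guess used here:

```python
import numpy as np

def picard_solve(f, y0, t, iters=40):
    """Picard iteration on a fixed time grid:
        y_{k+1}(t) = y0 + integral_0^t f(y_k(tau)) dtau,
    with the integral evaluated by cumulative trapezoidal quadrature.
    MCPI replaces this grid/quadrature pair with Chebyshev nodes and
    an orthogonal polynomial representation of the iterates."""
    y = np.full_like(t, y0)          # cold start: constant initial guess
    for _ in range(iters):
        g = f(y)
        dt = np.diff(t)
        cum = np.concatenate(([0.0],
                              np.cumsum(0.5 * dt * (g[:-1] + g[1:]))))
        y = y0 + cum
    return y
```

Each iteration evaluates the forcing function at every node of the current path approximation at once, which is the property that makes the method parallelizable and makes a good warm start pay off directly in fewer iterations.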
Visualizing frequent patterns in large multivariate time series
NASA Astrophysics Data System (ADS)
Hao, M.; Marwah, M.; Janetzko, H.; Sharma, R.; Keim, D. A.; Dayal, U.; Patnaik, D.; Ramakrishnan, N.
2011-01-01
The detection of previously unknown, frequently occurring patterns in time series, often called motifs, has been recognized as an important task. However, it is difficult to discover and visualize these motifs as their numbers increase, especially in large multivariate time series. To find frequent motifs, we use several temporal data mining and event encoding techniques to cluster and convert a multivariate time series to a sequence of events. Then we quantify the efficiency of the discovered motifs by linking them with a performance metric. To visualize frequent patterns in a large time series with potentially hundreds of nested motifs on a single display, we introduce three novel visual analytics methods: (1) motif layout, using colored rectangles for visualizing the occurrences and hierarchical relationships of motifs in a multivariate time series, (2) motif distortion, for enlarging or shrinking motifs as appropriate for easy analysis and (3) motif merging, to combine a number of identical adjacent motif instances without cluttering the display. Analysts can interactively optimize the degree of distortion and merging to get the best possible view. A specific motif (e.g., the most efficient or least efficient motif) can be quickly detected from a large time series for further investigation. We have applied these methods to two real-world data sets: data center cooling and oil well production. The results provide important new insights into the recurring patterns.
Formulation of a dynamic analysis method for a generic family of hoop-mast antenna systems
NASA Technical Reports Server (NTRS)
Gabriele, A.; Loewy, R.
1981-01-01
Analytical studies of mast-cable-hoop-membrane type antennas were conducted using a transfer matrix numerical analysis approach. This method, by virtue of its specialization and the inherently easy compartmentalization of the formulation and numerical procedures, can be significantly more efficient in computer time required and in the time needed to review and interpret the results.
Zambrano, Eduardo; Šulc, Miroslav; Vaníček, Jiří
2013-08-07
Time-resolved electronic spectra can be obtained as the Fourier transform of a special type of time correlation function known as the fidelity amplitude, which, in turn, can be evaluated approximately and efficiently with the dephasing representation. Here we improve both the accuracy of this approximation, with an amplitude correction derived from the phase-space propagator, and its efficiency, with an improved cellular scheme employing the inverse Weierstrass transform and optimal scaling of the cell size. We demonstrate the advantages of the new methodology by computing dispersed time-resolved stimulated emission spectra in the harmonic potential, pyrazine, and the NCO molecule. In contrast, we show that in strongly chaotic systems, such as the quartic oscillator, the original dephasing representation is more appropriate than either the cellular or prefactor-corrected methods.
The influence of liquidity on informational efficiency: The case of the Thai Stock Market
NASA Astrophysics Data System (ADS)
Bariviera, Aurelio Fernández
2011-11-01
The presence of long-range memory in financial time series is a puzzling fact that challenges established financial theory. We study the effect of liquidity on the efficiency (measured by the Hurst exponent) of the Thai Stock Market. We find that (i) the R/S method can generate spurious long-range dependence, making the DFA method the more reliable estimator of the Hurst exponent, and (ii) there is a weak relationship between market capitalization and the efficiency of the market, and the latter is not significantly affected by the presence of foreign investors.
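A minimal version of the classic rescaled-range (R/S) estimator discussed above; the window sizes are an arbitrary choice. Its well-known small-sample bias (estimates above 0.5 even for uncorrelated data) is the kind of spurious dependence that motivates the authors' preference for DFA:

```python
import numpy as np

def rescaled_range(x, window_sizes=(32, 64, 128, 256, 512)):
    """Classic R/S estimate of the Hurst exponent: the slope of
    log(R/S) versus log(window size), where R is the range of the
    cumulative mean-adjusted series and S its standard deviation."""
    rs_vals = []
    for w in window_sizes:
        rs = []
        for i in range(0, len(x) - w + 1, w):
            seg = x[i:i + w]
            dev = np.cumsum(seg - seg.mean())
            r = dev.max() - dev.min()
            s = seg.std()
            if s > 0:
                rs.append(r / s)
        rs_vals.append(np.mean(rs))
    slope, _ = np.polyfit(np.log(window_sizes), np.log(rs_vals), 1)
    return slope
```

For an uncorrelated series the true exponent is 0.5, yet finite-sample R/S estimates typically come out noticeably higher, illustrating finding (i) of the paper.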
NASA Astrophysics Data System (ADS)
Rodionov, A. A.; Turchin, V. I.
2017-06-01
We propose a new method of signal processing in antenna arrays, called Maximum-Likelihood Signal Classification. The proposed method is based on a model in which the interference includes a component with a rank-deficient correlation matrix. Using numerical simulation, we show that the proposed method achieves a variance of the estimated arrival angle of a plane wave close to the Cramér-Rao lower bound, and that it is more efficient than the well-known MUSIC method. It is also shown that the proposed technique can be used efficiently to estimate the time dependence of the useful signal.
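For context, the MUSIC baseline against which the proposed method is compared can be sketched for a half-wavelength-spaced uniform linear array. This is generic textbook MUSIC, not the proposed maximum-likelihood method; the array size and covariance construction in the usage below are illustrative:

```python
import numpy as np

def music_spectrum(R, n_sources, angles_deg):
    """MUSIC pseudospectrum: steer over candidate angles, project each
    steering vector onto the noise subspace of the array covariance R,
    and invert the residual power so sources appear as sharp peaks."""
    n_elem = R.shape[0]
    w, V = np.linalg.eigh(R)              # eigenvalues in ascending order
    En = V[:, :n_elem - n_sources]        # noise-subspace eigenvectors
    p = []
    for th in np.deg2rad(angles_deg):
        a = np.exp(-1j * np.pi * np.arange(n_elem) * np.sin(th))
        resid = En.conj().T @ a
        p.append(1.0 / (np.real(resid.conj() @ resid) + 1e-12))
    return np.array(p)
```

With a covariance built from a single source at 20 degrees plus weak white noise, the pseudospectrum peaks at the true bearing; rank-deficient interference, the regime targeted by the paper, is exactly where this subspace split degrades.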
A space-time lower-upper symmetric Gauss-Seidel scheme for the time-spectral method
NASA Astrophysics Data System (ADS)
Zhan, Lei; Xiong, Juntao; Liu, Feng
2016-05-01
The time-spectral method (TSM) offers the advantage of increased order of accuracy compared to methods using finite-difference in time for periodic unsteady flow problems. Explicit Runge-Kutta pseudo-time marching and implicit schemes have been developed to solve iteratively the space-time coupled nonlinear equations resulting from TSM. Convergence of the explicit schemes is slow because of the stringent time-step limit. Many implicit methods have been developed for TSM. Their computational efficiency is, however, still limited in practice because of delayed implicit temporal coupling, multiple iterative loops, costly matrix operations, or a lack of strong diagonal dominance of the implicit operator matrix. To overcome these shortcomings, an efficient space-time lower-upper symmetric Gauss-Seidel (ST-LU-SGS) implicit scheme with multigrid acceleration is presented. In this scheme, the implicit temporal coupling term is split as one additional dimension of space in the LU-SGS sweeps. To improve numerical stability for periodic flows with high frequency, a modification to the ST-LU-SGS scheme is proposed. Numerical results show that fast convergence is achieved using large or even infinite Courant-Friedrichs-Lewy (CFL) numbers for unsteady flow problems with moderately high frequency and with the use of moderately high numbers of time intervals. The ST-LU-SGS implicit scheme is also found to work well in calculating periodic flow problems where the frequency is not known a priori and needs to be determined by using a combined Fourier analysis and gradient-based search algorithm.
Shape prior modeling using sparse representation and online dictionary learning.
Zhang, Shaoting; Zhan, Yiqiang; Zhou, Yan; Uzunbas, Mustafa; Metaxas, Dimitris N
2012-01-01
The recently proposed sparse shape composition (SSC) opens a new avenue for shape prior modeling. Instead of assuming any parametric model of shape statistics, SSC incorporates shape priors on the fly by approximating a shape instance (usually derived from appearance cues) by a sparse combination of shapes in a training repository. Theoretically, one can increase the modeling capability of SSC by including as many training shapes as possible in the repository. However, this strategy confronts two limitations in practice. First, since SSC involves an iterative sparse optimization at run time, the more shape instances the repository contains, the lower the run-time efficiency of SSC. Therefore, a compact and informative shape dictionary is preferable to a large shape repository. Second, in medical imaging applications, training shapes seldom come in one batch. It is very time consuming, and sometimes infeasible, to reconstruct the shape dictionary every time new training shapes appear. In this paper, we propose an online learning method to address these two limitations. Our method starts by constructing an initial shape dictionary using the K-SVD algorithm. When new training shapes come, instead of reconstructing the dictionary from the ground up, we update the existing one using a block-coordinate descent approach. Using the dynamically updated dictionary, sparse shape composition can be gracefully scaled up to model shape priors from a large number of training shapes without sacrificing run-time efficiency. Our method is validated on lung localization in X-ray and cardiac segmentation in MRI time series. Compared to the original SSC, it shows comparable performance while being significantly more efficient.
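The block-coordinate descent dictionary update can be sketched in the style of online dictionary learning from accumulated sufficient statistics; this is a generic sketch under that assumption, not the authors' exact update rule:

```python
import numpy as np

def update_dictionary(D, A, B, n_passes=1):
    """Block-coordinate descent dictionary refresh: given sufficient
    statistics A = sum(alpha alpha^T) and B = sum(x alpha^T)
    accumulated over past sparse codes alpha of training samples x,
    update each atom in turn, without revisiting the raw samples.
    Atoms are kept inside the unit ball to fix the scale ambiguity."""
    D = D.copy()
    for _ in range(n_passes):
        for j in range(D.shape[1]):
            if A[j, j] < 1e-12:       # unused atom: nothing to update
                continue
            u = D[:, j] + (B[:, j] - D @ A[:, j]) / A[j, j]
            D[:, j] = u / max(1.0, np.linalg.norm(u))
    return D
```

Because only the small accumulators A and B are needed, new training shapes can be folded in incrementally, which is the property that avoids rebuilding the dictionary from the ground up; each atom update is an exact coordinate minimization, so the surrogate reconstruction error never increases.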
Brownian dynamics of confined rigid bodies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Delong, Steven; Balboa Usabiaga, Florencio; Donev, Aleksandar, E-mail: donev@courant.nyu.edu
2015-10-14
We introduce numerical methods for simulating the diffusive motion of rigid bodies of arbitrary shape immersed in a viscous fluid. We parameterize the orientation of the bodies using normalized quaternions, which are numerically robust, space efficient, and easy to accumulate. We construct a system of overdamped Langevin equations in the quaternion representation that accounts for hydrodynamic effects, preserves the unit-norm constraint on the quaternion, and is time reversible with respect to the Gibbs-Boltzmann distribution at equilibrium. We introduce two schemes for temporal integration of the overdamped Langevin equations of motion, one based on the Fixman midpoint method and the other based on a random finite difference approach, both of which ensure that the correct stochastic drift term is captured in a computationally efficient way. We study several examples of rigid colloidal particles diffusing near a no-slip boundary and demonstrate the importance of the choice of tracking point on the measured translational mean square displacement (MSD). We examine the average short-time as well as the long-time quasi-two-dimensional diffusion coefficient of a rigid particle sedimented near a bottom wall due to gravity. For several particle shapes, we find a choice of tracking point that makes the MSD essentially linear with time, allowing us to estimate the long-time diffusion coefficient efficiently using a Monte Carlo method. However, in general, such a special choice of tracking point does not exist, and numerical techniques for simulating long trajectories, such as the ones we introduce here, are necessary to study diffusion on long time scales.
[Mechanical Shimming Method and Implementation for Permanent Magnet of MRI System].
Xue, Tingqiang; Chen, Jinjun
2015-03-01
A mechanical shimming method and device for the permanent magnet of an MRI system have been developed to meet its stringent homogeneity requirement without time-consuming passive shimming on site; installation and adjustment efficiency has been increased.
NASA Astrophysics Data System (ADS)
Silaev, A. A.; Romanov, A. A.; Vvedenskii, N. V.
2018-03-01
In the numerical solution of the time-dependent Schrödinger equation by grid methods, an important problem is the reflection and wrap-around of wave packets at the grid boundaries. Non-optimal absorption of the wave function can lead to large artifacts in the results of numerical simulations. We propose a new method for constructing complex absorbing potentials for wave suppression at the grid boundaries. The method is based on a multi-hump imaginary potential containing a sequence of smooth, symmetric humps whose widths and amplitudes are optimized for wave absorption in different spectral intervals. We show that this ensures highly efficient absorption over a wide range of de Broglie wavelengths, including wavelengths comparable to the width of the absorbing layer. The method can therefore be used for high-precision simulations of phenomena in which strong spreading of the wave function takes place, including those accompanying the interaction of strong fields with atoms and molecules. The efficiency of the proposed method is demonstrated by calculating the spectrum of high-order harmonics generated during the interaction of hydrogen atoms with an intense infrared laser pulse.
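A sketch of constructing such a multi-hump imaginary potential on a 1-D grid. The cos²-shaped humps and the simple geometric amplitude ramp are illustrative assumptions; the paper optimizes the widths and amplitudes per spectral interval rather than using a fixed rule:

```python
import numpy as np

def multihump_cap(x, x_abs, n_humps=4, amplitudes=None):
    """Multi-hump complex absorbing potential: fill an absorbing layer
    of width x_abs at the right grid edge with n_humps smooth,
    symmetric cos^2 humps of equal width. Small early humps absorb
    slow (long-wavelength) components with little reflection; taller
    later humps absorb the fast components."""
    if amplitudes is None:
        amplitudes = [0.5 * 2.0**k for k in range(n_humps)]  # assumed ramp
    V = np.zeros_like(x)
    x0 = x[-1] - x_abs                 # start of the absorbing layer
    w = x_abs / n_humps
    for k in range(n_humps):
        lo = x0 + k * w
        inside = (x >= lo) & (x < lo + w)
        # hump vanishes smoothly at both edges, peaks at its center
        V[inside] = amplitudes[k] * np.cos(
            np.pi * ((x[inside] - lo) / w - 0.5)) ** 2
    return -1j * V   # purely imaginary (absorbing) potential
```

Added to the Hamiltonian, the negative imaginary part damps the wave function inside the layer at every propagation step while leaving the interior of the grid untouched.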
Ates, Hatice Ceren; Ozgur, Ebru; Kulah, Haluk
2018-03-23
Methods for the isolation and quantification of circulating tumor cells (CTCs) are attracting more attention every day, as the data for their unprecedented clinical utility continue to grow. The challenge is that CTCs are extremely rare (as low as 1 in a billion blood cells), and a highly sensitive and specific technology is required to isolate CTCs from blood cells. Methods utilizing microfluidic systems for immunoaffinity-based CTC capture are preferred, especially when purity is the prime requirement. However, the antibody immobilization strategy significantly affects the efficiency of such systems. In this study, two covalent and two bioaffinity antibody immobilization methods were assessed with respect to their CTC capture efficiency and selectivity, using an anti-epithelial cell adhesion molecule (anti-EpCAM) antibody for capture. Surface functionalization was realized on plain SiO2 surfaces, as well as in microfluidic channels. Surfaces functionalized with the different antibody immobilization methods were physically and chemically characterized at each step of functionalization. MCF-7 breast cancer and CCRF-CEM acute lymphoblastic leukemia cell lines were used as EpCAM-positive and EpCAM-negative cell models, respectively, to assess CTC capture efficiency and selectivity. Comparisons reveal that bioaffinity-based antibody immobilization involving streptavidin attachment with a glutaraldehyde linker gave the highest cell capture efficiency. On the other hand, a covalent antibody immobilization method involving direct antibody binding by N-(3-dimethylaminopropyl)-N'-ethylcarbodiimide hydrochloride (EDC)-N-hydroxysuccinimide (NHS) reaction was found to be more time- and cost-efficient, with a similar cell capture efficiency. All methods provided very high selectivity for CTCs with EpCAM expression. It was also demonstrated that antibody immobilization via the EDC-NHS reaction in a microfluidic channel leads to high capture efficiency and selectivity.
An Efficient Algorithm for the Detection of Infrequent Rapid Bursts in Time Series Data
NASA Astrophysics Data System (ADS)
Giles, A. B.
1997-01-01
Searching through data for infrequent rapid bursts is a common requirement in many areas of scientific research. In this paper, we present a powerful and flexible analysis method that, in a single pass through the data, searches for statistically significant bursts on a set of specified short timescales. The input data are binned, if necessary, and then quantified in terms of probabilities rather than rates or ratios. Using probability as the measure makes the method relatively independent of count rate. The method has been made computationally efficient by the use of lookup tables and cyclic buffers, and it is therefore particularly well suited to real-time applications. The technique has been developed specifically for an X-ray astronomy application: searching for millisecond bursts from black hole candidates such as Cyg X-1. We briefly review the few observations of these types of features reported in the literature, as well as the variety of ways in which their statistical reliability has been challenged. The technique, termed the burst expectation search (BES) method, is illustrated using data simulations and archived data obtained during ground testing of the proportional counter array (PCA) experiment detectors on the Rossi X-Ray Timing Explorer (RXTE). A potential application of a real-time BES method on board RXTE is also examined.
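A minimal sketch of the single-pass idea, assuming Poisson-distributed background counts. The function names, the threshold, and the specific lookup-table layout (one critical count per window size) are illustrative choices, not the published BES implementation:

```python
import math

def poisson_sf(k, mu):
    """P(X >= k) for X ~ Poisson(mu), via the complementary CDF."""
    cdf = sum(math.exp(-mu) * mu**i / math.factorial(i) for i in range(k))
    return max(0.0, 1.0 - cdf)

def burst_search(counts, mean_rate, window_sizes, p_threshold=1e-6):
    """Single pass over binned counts. For each window size a lookup value
    (the smallest count whose Poisson tail probability is below threshold)
    is precomputed once, then a running sum flags significant excesses."""
    detections = []
    for w in window_sizes:
        mu = mean_rate * w
        k_crit = 0
        while poisson_sf(k_crit, mu) > p_threshold:
            k_crit += 1
        running = sum(counts[:w])
        for start in range(len(counts) - w + 1):
            if start > 0:  # cyclic-buffer style update: O(1) per step
                running += counts[start + w - 1] - counts[start - 1]
            if running >= k_crit:
                detections.append((start, w, running))
    return detections

# quiet background with one injected short burst
data = [1] * 200
data[100:103] = [15, 20, 15]
hits = burst_search(data, mean_rate=1.0, window_sizes=[1, 2, 4])
```

The per-window critical count plays the role of the lookup table: the expensive tail-probability computation happens once, so the scan itself is cheap enough for real-time use.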
Motor Fault Diagnosis Based on Short-time Fourier Transform and Convolutional Neural Network
NASA Astrophysics Data System (ADS)
Wang, Li-Hua; Zhao, Xiao-Ping; Wu, Jia-Xin; Xie, Yang-Yang; Zhang, Yong-Hong
2017-11-01
With the rapid development of mechanical equipment, the mechanical health monitoring field has entered the era of big data. However, manual feature extraction suffers from low efficiency and poor accuracy when handling big data. In this study, the research object was the asynchronous motor in a drivetrain diagnostics simulator system. The vibration signals of motors with different faults were collected. Each raw signal was pretreated using the short-time Fourier transform (STFT) to obtain the corresponding time-frequency map. Features of the time-frequency map were then adaptively extracted using a convolutional neural network (CNN). The effects of the pretreatment method and of the network hyperparameters on diagnostic accuracy were investigated experimentally. The experimental results showed that the influence of the preprocessing method is small, and that the batch size is the main factor affecting accuracy and training efficiency. Feature visualization showed that, in the case of big data, the extracted CNN features can represent complex mapping relationships between signal and health status, and can dispense with the prior knowledge and engineering experience that feature extraction in traditional diagnosis methods requires. This paper thus proposes a new method, based on STFT and CNN, which can complete motor fault diagnosis tasks more intelligently and accurately.
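The STFT pretreatment step can be sketched in a few lines of NumPy. The frame length, hop size, and two-tone test signal below are arbitrary illustrative choices, not the paper's settings:

```python
import numpy as np

def stft_magnitude(signal, frame_len=256, hop=64):
    """Hann-windowed short-time Fourier transform magnitude: the kind of
    time-frequency map that can be fed to a CNN as an image."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop: i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))  # shape: (time, frequency)

fs = 1024.0                      # hypothetical sampling rate, Hz
t = np.arange(4096) / fs
# mock vibration signal: a strong 50 Hz line plus a weaker 200 Hz harmonic
vib = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 200 * t)
tf_map = stft_magnitude(vib)
```

Each row of `tf_map` is one time slice of the spectrum; stacking the rows gives the 2-D map on which the CNN learns its features.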
Multiple-time-stepping generalized hybrid Monte Carlo methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Escribano, Bruno, E-mail: bescribano@bcamath.org; Akhmatskaya, Elena; IKERBASQUE, Basque Foundation for Science, E-48013 Bilbao
2015-01-01
Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2-4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multiple-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications not only improve the performance of GSHMC itself but also allow it to beat the best-performing methods that use similar force-splitting schemes. In addition, we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo (GHMC) method. The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC additionally uses a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on water and protein systems. Results were compared to those obtained using the Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy, and sampling efficiency. This suggests that placing the MTS approach in the framework of hybrid Monte Carlo, and exploiting the natural stochasticity offered by generalized hybrid Monte Carlo, improves the stability of MTS and allows larger step sizes in the simulation of complex systems.
Pre-capture multiplexing improves efficiency and cost-effectiveness of targeted genomic enrichment.
Shearer, A Eliot; Hildebrand, Michael S; Ravi, Harini; Joshi, Swati; Guiffre, Angelica C; Novak, Barbara; Happe, Scott; LeProust, Emily M; Smith, Richard J H
2012-11-14
Targeted genomic enrichment (TGE) is a widely used method for isolating and enriching specific genomic regions prior to massively parallel sequencing. To make effective use of sequencer output, barcoding and sample pooling (multiplexing) after TGE and prior to sequencing (post-capture multiplexing) has become routine. While previous reports have indicated that multiplexing prior to capture (pre-capture multiplexing) is feasible, no thorough examination of the effect of this method had been completed on a large number of samples. Here we compare standard post-capture TGE to two levels of pre-capture multiplexing: 12 or 16 samples per pool. We evaluated these methods using standard TGE metrics and determined the ability to identify several classes of genetic mutations in three sets of 96 samples, including 48 controls. Our overall goal was to maximize cost reduction and minimize experimental time while maintaining a high percentage of reads on target and a high depth of coverage at thresholds required for variant detection. We adapted the standard post-capture TGE method for pre-capture TGE with several protocol modifications, including redesign of blocking oligonucleotides and optimization of enzymatic and amplification steps. Pre-capture multiplexing reduced costs for TGE by at least 38% and significantly reduced hands-on time during the TGE protocol. We found that pre-capture multiplexing reduced capture efficiency by 23 or 31% for pre-capture pools of 12 and 16, respectively. However, efficiency losses at this step can be compensated for by reducing the number of simultaneously sequenced samples. Pre-capture multiplexing and post-capture TGE performed similarly with respect to variant detection of positive control mutations. In addition, we detected no instances of sample switching due to aberrant barcode identification. Pre-capture multiplexing improves the efficiency of TGE experiments with respect to hands-on time and reagent use compared to standard post-capture TGE.
A decrease in capture efficiency is observed when using pre-capture multiplexing; however, it does not negatively impact variant detection and can be accommodated by the experimental design.
NASA Astrophysics Data System (ADS)
Shcherbakov, V.; Ahlkrona, J.
2016-12-01
In this work we develop a highly efficient meshfree approach to ice sheet modeling. Traditionally, mesh-based methods such as finite element methods are employed to simulate glacier and ice sheet dynamics. These methods are mature and well developed. However, despite numerous advantages, they suffer from drawbacks such as the necessity to remesh the computational domain every time it changes its shape, which significantly complicates the implementation on moving domains, and a costly assembly procedure for nonlinear problems. We introduce a novel meshfree approach that frees us from all these issues. The approach is built upon a radial basis function (RBF) method that, thanks to its meshfree nature, allows for efficient handling of moving margins and a free ice surface. RBF methods are also accurate and easy to implement. Since the formulation is stated in strong form, it allows for a substantial reduction of the computational cost associated with assembling the linear system inside the nonlinear solver. We implement a global RBF method that defines an approximation on the entire computational domain. This method exhibits high accuracy; however, its coefficient matrix is dense, so the computational efficiency decreases. To overcome this issue, we also implement a localized RBF method that rests upon a partition of unity approach to subdivide the domain into several smaller subdomains. The radial basis function partition of unity method (RBF-PUM) inherits the high approximation characteristics of the global RBF method while resulting in a sparse system of equations, which substantially increases the computational efficiency. To demonstrate the usefulness of the RBF methods we model the velocity field of ice flow in the Haut Glacier d'Arolla, assuming that the flow is governed by the nonlinear Blatter-Pattyn equations.
We test the methods for different basal conditions and for a freely moving surface. Both RBF methods are compared with a classical finite element method in terms of accuracy and efficiency. We find that the RBF methods are more efficient than the finite element method and well suited for ice dynamics modeling, especially the partition of unity approach.
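A one-dimensional sketch of the global RBF method makes the dense collocation system concrete. The Gaussian basis, the shape parameter, and the sine test function are illustrative assumptions, far simpler than the Blatter-Pattyn setting:

```python
import numpy as np

def rbf_interpolate(x_centers, f_values, x_eval, eps=10.0):
    """Global Gaussian RBF interpolation: solve A c = f on the centers,
    then evaluate s(x) = sum_j c_j * phi(|x - x_j|) at new points."""
    phi = lambda r: np.exp(-(eps * r) ** 2)
    # dense symmetric collocation matrix: the global method's main cost
    A = phi(np.abs(x_centers[:, None] - x_centers[None, :]))
    c = np.linalg.solve(A, f_values)
    B = phi(np.abs(x_eval[:, None] - x_centers[None, :]))
    return B @ c

x = np.linspace(0.0, 1.0, 25)
xe = np.linspace(0.0, 1.0, 200)
s = rbf_interpolate(x, np.sin(2 * np.pi * x), xe)
err = np.max(np.abs(s - np.sin(2 * np.pi * xe)))
```

The matrix `A` is full because every basis function overlaps every center; RBF-PUM replaces it with many small local systems, which is exactly the sparsity gain the abstract describes.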
NASA Astrophysics Data System (ADS)
Tanaka, T.; Tachikawa, Y.; Ichikawa, Y.; Yorozu, K.
2017-12-01
Flood is one of the most hazardous disasters, causing serious damage to people and property around the world. To prevent or mitigate flood damage through early warning systems and river management planning, numerical modelling of flood-inundation processes is essential. In the literature, flood-inundation models have been extensively developed and improved to achieve flood flow simulation with complex topography at high resolution. With increasing demands on flood-inundation modelling, its computational burden is now one of the key issues. Improvements to the computational efficiency of the full shallow water equations have been made from various perspectives, such as approximations of the momentum equations, parallelization techniques, and coarsening approaches. To complement these techniques and further improve the computational efficiency of flood-inundation simulations, this study proposes an Automatic Domain Updating (ADU) method for 2-D flood-inundation simulation. The ADU method traces the wet-dry interface and automatically updates the simulation domain in response to the progress and recession of flood propagation. The updating algorithm is as follows: first, register the simulation cells potentially flooded at the initial stage (such as floodplains near river channels); then, whenever a registered cell is flooded, register its surrounding cells. The overhead of this additional process is kept small by checking only cells at the wet-dry interface, and computation time is reduced by skipping the non-flooded area. This algorithm is easily applied to any type of 2-D flood-inundation model. The proposed ADU method is implemented with the 2-D local inertial equations for the Yodo River basin, Japan. Case studies for two flood events show that the simulation finishes two to ten times faster while producing the same results as the simulation without the ADU method.
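The register-and-spread logic can be sketched with a toy diffusion-style update on a grid. This is a minimal stand-in for the idea only; the actual ADU method wraps the local inertial equations, and the spreading rule below is a hypothetical placeholder:

```python
import numpy as np

def simulate_with_adu(depth, seed_cells, n_steps, spread=0.25):
    """Toy flood spread that only processes 'registered' (active) cells;
    neighbours of a wet cell are registered on the fly, mimicking the
    wet/dry-interface tracking of the ADU method."""
    ny, nx = depth.shape
    active = set(seed_cells)          # cells potentially flooded at the start
    for _ in range(n_steps):
        new_active = set()
        for (i, j) in active:
            if depth[i, j] <= 0.0:    # dry but registered: nothing to do
                continue
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < ny and 0 <= nj < nx:
                    flux = spread * depth[i, j] / 4.0   # uses current depth
                    depth[i, j] -= flux
                    depth[ni, nj] += flux
                    new_active.add((ni, nj))            # register neighbour
        active |= new_active
    return depth, active

d = np.zeros((20, 20))
d[10, 10] = 1.0                       # initial flooded cell near the channel
d, active = simulate_with_adu(d, [(10, 10)], n_steps=5)
```

Cells far from the flood front are never visited, which is the source of the speed-up: work scales with the wetted area, not the whole grid.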
Optimization methods and silicon solar cell numerical models
NASA Technical Reports Server (NTRS)
Girardini, K.; Jacobsen, S. E.
1986-01-01
An optimization algorithm for use with numerical silicon solar cell models was developed. By coupling an optimization algorithm with a solar cell model, it is possible to simultaneously vary design variables such as impurity concentrations, front junction depth, back junction depth, and cell thickness to maximize the predicted cell efficiency. An optimization algorithm was developed and interfaced with the Solar Cell Analysis Program in 1 Dimension (SCAP1D). SCAP1D uses finite difference methods to solve the differential equations which, along with several relations from the physics of semiconductors, describe mathematically the performance of a solar cell. A major obstacle is that the numerical methods used in SCAP1D require a significant amount of computer time, and during an optimization the model is called iteratively until the design variables converge to the values associated with the maximum efficiency. This problem was alleviated by designing an optimization code specifically for use with numerically intensive simulations, to reduce the number of times the efficiency has to be calculated to achieve convergence to the optimal solution.
Efficient numerical method for analyzing optical bistability in photonic crystal microcavities.
Yuan, Lijun; Lu, Ya Yan
2013-05-20
Nonlinear optical effects can be enhanced by photonic crystal microcavities and be used to develop practical ultra-compact optical devices with low power requirements. The finite-difference time-domain method is the standard numerical method for simulating nonlinear optical devices, but it has limitations in terms of accuracy and efficiency. In this paper, a rigorous and efficient frequency-domain numerical method is developed for analyzing nonlinear optical devices where the nonlinear effect is concentrated in the microcavities. The method replaces the linear problem outside the microcavities by a rigorous and numerically computed boundary condition, then solves the nonlinear problem iteratively in a small region around the microcavities. Convergence of the iterative method is much easier to achieve since the size of the problem is significantly reduced. The method is presented for a specific two-dimensional photonic crystal waveguide-cavity system with a Kerr nonlinearity, using numerical methods that can take advantage of the geometric features of the structure. The method is able to calculate multiple solutions exhibiting the optical bistability phenomenon in the strongly nonlinear regime.
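The multiple steady states such a method must resolve can be illustrated on the textbook dimensionless Kerr-cavity model, not the paper's waveguide-cavity system; the detuning and input power below are chosen to land inside the bistable window:

```python
import numpy as np

def cavity_branches(p_in, detuning):
    """Steady states of the standard dimensionless Kerr-cavity relation
    p_out * (1 + (p_out - detuning)**2) = p_in, found as the real roots
    of a cubic in p_out; three real roots signal optical bistability."""
    # expand: p_out^3 - 2*d*p_out^2 + (1 + d^2)*p_out - p_in = 0
    coeffs = [1.0, -2.0 * detuning, 1.0 + detuning**2, -p_in]
    roots = np.roots(coeffs)
    return sorted(r.real for r in roots if abs(r.imag) < 1e-8)

# detuning above the bistability threshold sqrt(3), input in the bistable window
branches = cavity_branches(p_in=1.9, detuning=2.0)
```

For detunings below the classical threshold of sqrt(3) the cubic has a single real root and the response is single-valued; above it, a window of input powers yields three coexisting states, the middle one unstable.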
IMPROVEMENT OF EFFICIENCY OF CUT AND OVERLAY ASPHALT WORKS BY USING MOBILE MAPPING SYSTEM
NASA Astrophysics Data System (ADS)
Yabuki, Nobuyoshi; Nakaniwa, Kazuhide; Kidera, Hiroki; Nishi, Daisuke
When cut-and-overlay asphalt work is done to improve road pavement, the conventional road surface elevation survey with levels often requires traffic regulation and takes much time and effort. Recently, new surveying methods using non-prismatic total stations or fixed 3D laser scanners have been proposed in industry, but they have not been widely adopted due to their high cost. In this research, we propose a new method using Mobile Mapping Systems (MMS) in order to increase efficiency and reduce cost. In this method, small white marks are painted at 10 m intervals along the road to identify cross sections, and the elevations of the white marks are corrected with accurate survey data. To verify the proposed method, we conducted an experiment on a road at Osaka University comparing it with the conventional level survey method and the fixed 3D laser scanning method. The result showed that the proposed method had accuracy similar to the other methods while being more efficient.
NASA Astrophysics Data System (ADS)
Wang, Bin; Wu, Xinyuan
2014-11-01
In this paper we consider multi-frequency highly oscillatory second-order differential equations x''(t) + Mx(t) = f(t, x(t), x'(t)), where the high-frequency oscillations are generated by the linear part Mx(t), and M is positive semi-definite (not necessarily nonsingular). It is known that Filon-type methods are an effective approach to numerically solving highly oscillatory problems. Unfortunately, existing Filon-type asymptotic methods fail to apply to highly oscillatory second-order differential equations when M is singular. We propose an efficient improvement of the existing Filon-type asymptotic methods so that they can numerically solve this class of multi-frequency highly oscillatory systems with a singular matrix M. The improved Filon-type asymptotic methods are designed by combining Filon-type methods with asymptotic methods based on the variation-of-constants formula. We also present one efficient and practical improved Filon-type asymptotic method that can be performed at lower cost. Accompanying numerical results show the remarkable efficiency.
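The defining idea of Filon-type methods, interpolating the slowly varying factor and integrating the oscillatory kernel exactly, can be sketched for the scalar integral of f(x) e^{i omega x}. This is plain piecewise-linear Filon quadrature, a simplification for illustration, not the paper's second-order ODE scheme:

```python
import numpy as np

def filon_linear(f, a, b, omega, n=16):
    """Filon-type quadrature for I = int_a^b f(x) e^{i*omega*x} dx:
    interpolate f piecewise-linearly and integrate the oscillatory
    factor exactly, so accuracy does not collapse as omega grows."""
    x = np.linspace(a, b, n + 1)
    fx = f(x)
    iw = 1j * omega
    E = np.exp(iw * x)
    # exact antiderivatives: int e^{iwx} dx = e^{iwx}/(iw)
    #                        int x e^{iwx} dx = e^{iwx} (x/(iw) + 1/omega**2)
    I0 = (E[1:] - E[:-1]) / iw
    I1 = (E[1:] * (x[1:] / iw + 1.0 / omega**2)
          - E[:-1] * (x[:-1] / iw + 1.0 / omega**2))
    slope = (fx[1:] - fx[:-1]) / (x[1:] - x[:-1])
    # on each panel f(x) is replaced by fx0 + slope*(x - x0)
    return np.sum((fx[:-1] - slope * x[:-1]) * I0 + slope * I1)

omega = 200.0
approx = filon_linear(lambda x: x, 0.0, 1.0, omega)
exact = (np.exp(1j * omega) / (1j * omega)
         - (np.exp(1j * omega) - 1.0) / (1j * omega) ** 2)
```

Because the oscillation is handled analytically, only the smooth factor needs resolving; a standard quadrature rule would instead need a step size proportional to 1/omega.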
NASA Astrophysics Data System (ADS)
Ishbulatov, Yu. M.; Karavaev, A. S.; Kiselev, A. R.; Semyachkina-Glushkovskaya, O. V.; Postnov, D. E.; Bezruchko, B. P.
2018-04-01
A method for the reconstruction of time-delayed feedback systems is investigated, based on the detection of the synchronous response of a slave time-delay system to driving from the master system under study. The structure of the driven system is similar to that of the studied time-delay system, but the feedback circuit of the driven system is broken. The method's efficiency is tested using short and noisy data obtained from an electronic chaotic oscillator with time-delayed feedback.
Taxi Time Prediction at Charlotte Airport Using Fast-Time Simulation and Machine Learning Techniques
NASA Technical Reports Server (NTRS)
Lee, Hanbong
2016-01-01
Accurate taxi time prediction is required to enable efficient runway scheduling that can increase runway throughput and reduce taxi times and fuel consumption on the airport surface. Currently, NASA and American Airlines are jointly developing a decision-support tool called Spot and Runway Departure Advisor (SARDA) that assists airport ramp controllers in making gate pushback decisions and improving the overall efficiency of airport surface traffic. In this presentation, we propose to use Linear Optimized Sequencing (LINOS), a discrete-event fast-time simulation tool, to predict taxi times and provide the estimates to the runway scheduler in real-time airport operations. To assess its prediction accuracy, we also introduce a data-driven analytical method using machine learning techniques. These two taxi time prediction methods are evaluated, using various performance metrics, against actual taxi time data obtained from the SARDA human-in-the-loop (HITL) simulation for Charlotte Douglas International Airport (CLT). Based on the taxi time prediction results, we also discuss how prediction accuracy is affected by the operational complexity at this airport and how the fast-time simulation model can be improved before implementing it with an airport scheduling algorithm in a real-time environment.
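As a toy illustration of the data-driven approach (not LINOS, and not the actual SARDA data), a linear model can be fitted to synthetic surface-traffic features with ordinary least squares; the feature names and coefficients are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
# hypothetical surface features: unimpeded taxi distance (km), number of
# aircraft in the departure queue, and count of runway crossings
distance = rng.uniform(1.0, 5.0, n)
queue = rng.integers(0, 12, n).astype(float)
crossings = rng.integers(0, 3, n).astype(float)
# synthetic ground truth (minutes): a noise-free linear combination
taxi_time = 2.0 * distance + 1.5 * queue + 0.8 * crossings + 3.0

# design matrix with an intercept column; fit by ordinary least squares
X = np.column_stack([distance, queue, crossings, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, taxi_time, rcond=None)
pred = X @ coef
```

Real taxi-time models would add noise, nonlinear learners, and held-out evaluation; the point here is only the shape of the pipeline: features in, predicted taxi time out, compared against recorded times.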
NASA Astrophysics Data System (ADS)
Nemoto, Takahiro; Alexakis, Alexandros
2018-02-01
The fluctuations of turbulence intensity in a pipe flow around the critical Reynolds number are difficult to study but important because they are related to turbulent-laminar transitions. We propose a rare-event sampling method to study such fluctuations in order to measure the time scale of the transition efficiently. The method is composed of two parts: (i) the measurement of typical fluctuations (the bulk of the cumulative probability function) and (ii) the measurement of rare fluctuations (the tail of the probability function) by employing dynamics in which a feedback control of the Reynolds number is implemented. We apply this method to a chaotic model of turbulent puffs proposed by Barkley and confirm that the time scale of turbulence decay increases super-exponentially even for Reynolds numbers up to Re = 2500, where obtaining sufficient statistics by brute-force calculation is difficult. The method uses a simple procedure for changing the Reynolds number that can be applied even in experiments.
Adaptive θ-methods for pricing American options
NASA Astrophysics Data System (ADS)
Khaliq, Abdul Q. M.; Voss, David A.; Kazmi, Kamran
2008-12-01
We develop adaptive θ-methods for solving the Black-Scholes PDE for American options. By adding a small, continuous term, the Black-Scholes PDE becomes an advection-diffusion-reaction equation on a fixed spatial domain. Standard implementation of θ-methods would require a Newton-type iterative procedure at each time step, thereby increasing the computational complexity of the methods. Our linearly implicit approach avoids such complications. We establish a general framework under which θ-methods satisfy a discrete version of the positivity constraint characteristic of American options, and numerically demonstrate the sensitivity of the constraint. The positivity results are established for the single-asset and independent two-asset models. In addition, we have incorporated and analyzed an adaptive time-step control strategy to increase the computational efficiency. Numerical experiments are presented for one- and two-asset American options, using adaptive exponential splitting for two-asset problems. The approach is compared with an iterative solution of the two-asset problem in terms of computational efficiency.
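A minimal single-asset sketch of a θ-scheme for an American put, with a simple projection onto the payoff standing in for the paper's linearly implicit treatment of the constraint; all parameter values are illustrative:

```python
import numpy as np

def american_put_theta(K=10.0, r=0.05, sigma=0.3, T=1.0,
                       s_max=30.0, ns=120, nt=200, theta=0.5):
    """Theta-method for the Black-Scholes PDE in time-to-maturity, with a
    projection step enforcing the American constraint V >= payoff."""
    ds, dt = s_max / ns, T / nt
    s = np.linspace(0.0, s_max, ns + 1)
    payoff = np.maximum(K - s, 0.0)
    V = payoff.copy()
    # discrete Black-Scholes operator L on interior nodes S_i = i*ds
    i = np.arange(1, ns)
    a = 0.5 * sigma**2 * i**2 - 0.5 * r * i       # sub-diagonal
    b = -(sigma**2 * i**2 + r)                    # diagonal
    c = 0.5 * sigma**2 * i**2 + 0.5 * r * i       # super-diagonal
    L = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
    I = np.eye(ns - 1)
    A = I - theta * dt * L                        # implicit part
    B = I + (1.0 - theta) * dt * L                # explicit part
    for _ in range(nt):
        rhs = B @ V[1:-1]
        rhs[0] += dt * a[0] * K                   # boundary V(0, t) = K
        V[1:-1] = np.linalg.solve(A, rhs)
        V = np.maximum(V, payoff)                 # obstacle/positivity projection
    return s, V

s, V = american_put_theta()
```

With theta = 0.5 this is Crank-Nicolson; theta = 1 gives the fully implicit scheme discussed in the positivity framework. A production solver would use a tridiagonal solve instead of the dense one shown here.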
Li, Wei; Yi, Huangjian; Zhang, Qitan; Chen, Duofang; Liang, Jimin
2012-01-01
An extended finite element method (XFEM) for the forward model of 3D optical molecular imaging is developed with the simplified spherical harmonics approximation (SPN). In the XFEM scheme for the SPN equations, a signed distance function is employed to accurately represent the internal tissue boundary, and is then used to construct the enriched basis functions of the finite element scheme. The finite element calculation can therefore be carried out without time-consuming internal boundary mesh generation, and the overly fine mesh otherwise required to conform to the complex tissue boundary, which leads to excess time cost, can be avoided. XFEM facilitates application to tissues with complex internal structure and improves computational efficiency. Phantom and digital mouse experiments were carried out to validate the efficiency of the proposed method. Compared with the standard finite element method and the classical Monte Carlo (MC) method, the validation results show the merits and potential of XFEM for optical imaging. PMID:23227108
NASA Astrophysics Data System (ADS)
Chen, Gang; Yang, Bing; Zhang, Xiaoyun; Gao, Zhiyong
2017-07-01
The latest high efficiency video coding (HEVC) standard significantly increases encoding complexity in exchange for improved coding efficiency. Due to the limited computational capability of handheld devices, complexity-constrained video coding has drawn great attention in recent years. A complexity control algorithm based on adaptive mode selection is proposed for interframe coding in HEVC. Given the direct proportionality between encoding time and computational complexity, computational complexity is measured in terms of encoding time. First, complexity is mapped to a target in terms of prediction modes. Then, an adaptive mode selection algorithm is proposed for the mode decision process. Specifically, the optimal mode combination scheme, chosen through offline statistics, is applied at low complexity. If the complexity budget has not been used up, an adaptive mode sorting method is employed to further improve coding efficiency. The experimental results show that the proposed algorithm achieves a very large complexity control range (down to 10%) for the HEVC encoder while maintaining good rate-distortion performance. For the low-delay P condition, compared with the direct resource allocation method and the state-of-the-art method, average gains of 0.63 and 0.17 dB in BD-PSNR are observed for 18 sequences when the target complexity is around 40%.
An efficient algorithm for the retarded time equation for noise from rotating sources
NASA Astrophysics Data System (ADS)
Loiodice, S.; Drikakis, D.; Kokkalis, A.
2018-01-01
This study concerns modelling of noise emanating from rotating sources such as helicopter rotors. We present an accurate and efficient algorithm for the solution of the retarded time equation, which can be used both in subsonic and supersonic flow regimes. A novel approach for the search of the roots of the retarded time function was developed based on considerations of the kinematics of rotating sources and of the bifurcation analysis of the retarded time function. It is shown that the proposed algorithm is faster than the classical Newton and Brent methods, especially in the presence of sources rotating supersonically.
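For a subsonic rotating point source the retarded time equation has a unique root and a plain Newton iteration suffices; the geometry, speeds, and initial guess below are illustrative, and this simple scheme is a baseline rather than the paper's bifurcation-informed root search (which also handles the supersonic, multi-root case):

```python
import numpy as np

def retarded_time(t, x_obs, radius, omega, c=340.0, tol=1e-12):
    """Newton iteration for the retarded time tau of a rotating point
    source: g(tau) = tau - t + |x_obs - y(tau)| / c = 0, where
    y(tau) = radius * (cos(omega*tau), sin(omega*tau), 0)."""
    def pos(tau):
        return np.array([radius * np.cos(omega * tau),
                         radius * np.sin(omega * tau), 0.0])
    def vel(tau):
        return np.array([-radius * omega * np.sin(omega * tau),
                         radius * omega * np.cos(omega * tau), 0.0])
    tau = t - np.linalg.norm(x_obs - pos(t)) / c   # subsonic initial guess
    for _ in range(50):
        d = x_obs - pos(tau)
        r = np.linalg.norm(d)
        g = tau - t + r / c
        # chain rule: d|x - y|/dtau = -(d . y'(tau)) / r
        dg = 1.0 - np.dot(d, vel(tau)) / (r * c)
        step = g / dg
        tau -= step
        if abs(step) < tol:
            break
    return tau

tau = retarded_time(t=0.1, x_obs=np.array([10.0, 0.0, 0.0]),
                    radius=1.0, omega=100.0)
```

When the source tip moves subsonically (radius*omega < c), the derivative dg stays strictly positive, so Newton converges from this guess; supersonic rotation breaks that monotonicity, which is exactly why the bifurcation analysis in the paper is needed.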
Techniques for Increasing the Efficiency of Automation Systems in School Library Media Centers.
ERIC Educational Resources Information Center
Caffarella, Edward P.
1996-01-01
Discusses methods of managing queues (waiting lines) to optimize the use of student computer stations in school library media centers and to make searches more efficient and effective. The three major factors in queue management are arrival interval of the patrons, service time, and number of stations. (Author/LRW)
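The interplay of the three factors (arrival interval, service time, number of stations) is captured by the classical M/M/c queueing model; a sketch using the standard Erlang C formula, with illustrative rates that are not from the article:

```python
import math

def erlang_c(arrival_rate, service_rate, stations):
    """M/M/c queue: probability an arriving patron must wait (Erlang C)
    and the mean time spent waiting in the queue."""
    a = arrival_rate / service_rate            # offered load in Erlangs
    if a >= stations:
        raise ValueError("unstable queue: load must be below capacity")
    term = (a**stations / math.factorial(stations)) * stations / (stations - a)
    p_wait = term / (sum(a**k / math.factorial(k)
                         for k in range(stations)) + term)
    mean_wait = p_wait / (stations * service_rate - arrival_rate)
    return p_wait, mean_wait

# e.g. 30 students/hour arriving, 20 searches/hour per station, 2 stations
p, w = erlang_c(30.0, 20.0, 2)
```

Raising the number of stations or shortening the service time (faster, more effective searches) drives both outputs down, which is the quantitative version of the article's three-factor framing.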
Highly efficient preparation of sphingoid bases from glucosylceramides by chemoenzymatic method
Gowda, Siddabasave Gowda B.; Usuki, Seigo; Hammam, Mostafa A. S.; Murai, Yuta; Igarashi, Yasuyuki; Monde, Kenji
2016-01-01
Sphingoid base derivatives have attracted increasing attention as promising chemotherapeutic candidates against lifestyle diseases such as diabetes and cancer. Natural sphingoid bases can be a potential resource, instead of those derived by time-consuming total organic synthesis. In particular, glucosylceramides (GlcCers) in food plants are enriched sources of sphingoid bases, differing from those of animals. Several chemical methodologies to transform GlcCers into sphingoid bases have already been investigated; however, these conventional methods using acid or alkaline hydrolysis are not efficient, owing to poor reaction yields, complex by-products, and the resulting separation problems. In this study, an extremely efficient and practical chemoenzymatic transformation method has been developed using microwave-enhanced butanolysis of GlcCers and a large amount of readily available almond β-glucosidase for the deglycosylation of lysoGlcCers. The method is superior to conventional acid/base hydrolysis methods in its rapidity and reaction cleanness (no isomerization, no rearrangement), with excellent overall yield. PMID:26667669
NASA Astrophysics Data System (ADS)
Xu, Jiuping; Ma, Ning; Lv, Chengwei
2016-08-01
Efficient water transfer and allocation are critical for disaster mitigation in drought emergencies. This is especially important when the differing interests of multiple decision makers and fluctuating water supply and demand simultaneously cause conflicts in space and time. To achieve more effective and efficient water transfers and allocations, this paper proposes a novel optimization method with an integrated bi-level structure and a dynamic strategy, in which the bi-level structure deals with spatial conflicts in drought emergencies and the dynamic strategy deals with temporal conflicts. Combining these two optimization methods, however, makes the calculation complex, so an integrated interactive fuzzy program and a PSO-POA are combined into a hybrid heuristic algorithm. The successful application of the proposed model to a real-world case region demonstrates its practicality and efficiency. Dynamic cooperation between multiple reservoirs under the coordination of a global regulator reflects the model's efficiency and effectiveness in drought emergency water transfer and allocation, especially in a fluctuating environment. On this basis, corresponding management recommendations are proposed to improve practical operations.
Pijuan-Galitó, Sara; Tamm, Christoffer; Schuster, Jens; Sobol, Maria; Forsberg, Lars; Merry, Catherine L. R.; Annerén, Cecilia
2016-01-01
Reliable, scalable and time-efficient culture methods are required to fully realize the clinical and industrial applications of human pluripotent stem (hPS) cells. Here we present a completely defined, xeno-free medium that supports long-term propagation of hPS cells on uncoated tissue culture plastic. The medium consists of the Essential 8 (E8) formulation supplemented with inter-α-inhibitor (IαI), a human serum-derived protein, recently demonstrated to activate key pluripotency pathways in mouse PS cells. IαI efficiently induces attachment and long-term growth of both embryonic and induced hPS cell lines when added as a soluble protein to the medium at seeding. IαI supplementation efficiently supports adaptation of feeder-dependent hPS cells to xeno-free conditions, clonal growth as well as single-cell survival in the absence of Rho-associated kinase inhibitor (ROCKi). This time-efficient and simplified culture method paves the way for large-scale, high-throughput hPS cell culture, and will be valuable for both basic research and commercial applications. PMID:27405751
Effective Subcritical Butane Extraction of Bifenthrin Residue in Black Tea.
Zhang, Yating; Gu, Lingbiao; Wang, Fei; Kong, Lingjun; Qin, Guangyong
2017-03-30
As a natural and healthy beverage, tea is widely enjoyed; however, pesticide residues in tea leaves affect its quality and food safety. To develop a highly selective and efficient method for the facile removal of pesticide residues, the subcritical butane extraction (SBE) technique was employed, and three variables, temperature, time, and number of extraction cycles, were studied. The optimum SBE conditions were found to be an extraction temperature of 45 °C, an extraction time of 30 min, and a single extraction cycle, under which the extraction efficiency reached 92%. The catechins, theanine, caffeine and aroma components, which determine the quality of the tea, fluctuated after SBE treatment. Compared with uncrushed leaves, pesticide residues were more easily removed from crushed leaves, for which the practical extraction efficiency was 97%. These results indicate that SBE is a useful method for efficiently removing bifenthrin; since leaf appearance is not relevant in the production process, tea leaves should first be crushed and then extracted so that residual pesticides are thoroughly removed.
A transient FETI methodology for large-scale parallel implicit computations in structural mechanics
NASA Technical Reports Server (NTRS)
Farhat, Charbel; Crivelli, Luis; Roux, Francois-Xavier
1992-01-01
Explicit codes are often used to simulate the nonlinear dynamics of large-scale structural systems, even for low frequency response, because the storage and CPU requirements entailed by the repeated factorizations traditionally found in implicit codes rapidly overwhelm the available computing resources. With the advent of parallel processing, this trend is accelerating because explicit schemes are also easier to parallelize than implicit ones. However, the time step restriction imposed by the Courant stability condition on all explicit schemes cannot yet -- and perhaps will never -- be offset by the speed of parallel hardware. Therefore, it is essential to develop efficient and robust alternatives to direct methods that are also amenable to massively parallel processing because implicit codes using unconditionally stable time-integration algorithms are computationally more efficient when simulating low-frequency dynamics. Here we present a domain decomposition method for implicit schemes that requires significantly less storage than factorization algorithms, that is several times faster than other popular direct and iterative methods, that can be easily implemented on both shared and local memory parallel processors, and that is both computationally and communication-wise efficient. The proposed transient domain decomposition method is an extension of the method of Finite Element Tearing and Interconnecting (FETI) developed by Farhat and Roux for the solution of static problems. Serial and parallel performance results on the CRAY Y-MP/8 and the iPSC-860/128 systems are reported and analyzed for realistic structural dynamics problems. These results establish the superiority of the FETI method over both the serial/parallel conjugate gradient algorithm with diagonal scaling and the serial/parallel direct method, and contrast the computational power of the iPSC-860/128 parallel processor with that of the CRAY Y-MP/8 system.
Lumia, Margaret E.; Gentile, Charles; Gochfeld, Michael; Efthimion, Philip; Robson, Mark
2015-01-01
This study evaluates a new decontamination technique for the mitigation and abatement of hazardous particulates. The traditional decontamination methods used to clean facilities and equipment are time-consuming (prolonging workers' exposure), may generate airborne hazards, and can be expensive. The use of a removable thin-film coating as a decontamination technique for surface contamination proved to be a more efficient method of decontamination. This method was tested at three different sites on different hazardous metals. One application of the coating reduced the levels of these metals by 90% and achieved an average reduction of one order of magnitude. Paired t-tests performed for each metal demonstrated a statistically significant reduction of the metal after use of the coating: lead (p = 0.03), beryllium (p = 0.05), aluminum (p = 0.006), iron (p = 0.0001), and copper (p = 0.004). The Kendall tau-b correlation coefficient demonstrates a positive correlation between the initial levels of contamination and the removal efficiency for all samples taken from different locations on the floor at each of the three sites. This new decontamination technique worked efficiently, requiring only one application, which decreased exposure time and did not generate any airborne dust. PMID:19437305
Dynamic video encryption algorithm for H.264/AVC based on a spatiotemporal chaos system.
Xu, Hui; Tong, Xiao-Jun; Zhang, Miao; Wang, Zhu; Li, Ling-Hao
2016-06-01
Video encryption schemes mostly employ selective encryption to encrypt only the important and sensitive parts of the video information, aiming to ensure real-time performance and encryption efficiency. The classic block cipher is not applicable to video encryption due to its high computational overhead. In this paper, we propose an encryption selection control module, controlled by a chaotic pseudorandom sequence, that dynamically encrypts video syntax elements. A novel spatiotemporal chaos system and binarization method are used to generate a key stream for encrypting the chosen syntax elements. The proposed scheme enhances the resistance against attacks through the dynamic encryption process and a high-security stream cipher. Experimental results show that the proposed method exhibits high security and high efficiency with little effect on the compression ratio and time cost.
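The structure of such a scheme, a chaos-derived keystream XORed onto only the selected syntax elements, can be sketched as follows. As a loud assumption, a plain logistic map stands in here for the paper's spatiotemporal chaos system, the selection rule is a toy, and the sketch is for illustration only, not a cryptographically secure cipher:

```python
def logistic_keystream(x0, r, n):
    """Binarize a logistic-map orbit into a pseudorandom byte stream.
    (A simple stand-in for the paper's spatiotemporal chaos system.)"""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1 - x)          # chaotic iteration
        out.append(int(x * 256) & 0xFF)
    return bytes(out)

def selective_xor(data, selected, key):
    """Encrypt only the 'selected' positions (standing in for sensitive
    syntax elements); XOR makes decryption the same operation."""
    data = bytearray(data)
    for ks, i in zip(key, selected):
        data[i] ^= ks
    return bytes(data)

plain = b"intra-prediction modes + motion vectors + residuals"
sel = [i for i in range(len(plain)) if i % 3 == 0]   # toy selection rule
key = logistic_keystream(0.3141, 3.99, len(sel))
cipher = selective_xor(plain, sel, key)
```

Because only selected positions are touched, the bulk of the bitstream is untouched, which is what keeps the impact on compression ratio and time cost small.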
Proposal of Heuristic Algorithm for Scheduling of Print Process in Auto Parts Supplier
NASA Astrophysics Data System (ADS)
Matsumoto, Shimpei; Okuhara, Koji; Ueno, Nobuyuki; Ishii, Hiroaki
We consider the print process in the manufacturing operations of an auto parts supplier as a practical problem. The purpose of this research is to apply a scheduling technique developed at our university to an actual print process in a mass customization environment. Rationalization of the print process depends on lot sizing. The manufacturing lead time of the print process is long, and in the present method production is planned according to workers' experience and intuition, so the construction of an efficient production system is an urgent problem. Therefore, in this paper, in order to shorten the overall manufacturing lead time and reduce stock, we reexamine the usual heuristic lot-sizing rule and propose an improved method that can produce a more efficient schedule.
Efficient Kriging via Fast Matrix-Vector Products
NASA Technical Reports Server (NTRS)
Memarsadeghi, Nargess; Raykar, Vikas C.; Duraiswami, Ramani; Mount, David M.
2008-01-01
Interpolating scattered data points is a problem of wide-ranging interest. Ordinary kriging is an optimal scattered-data estimator, widely used in geosciences and remote sensing. A generalized version of this technique, called cokriging, can be used for image fusion of remotely sensed data. However, it is computationally very expensive for large data sets. We demonstrate the time efficiency and accuracy of approximating ordinary kriging through the use of fast matrix-vector products combined with iterative methods. We used methods based on the fast multipole method and nearest-neighbor searching techniques to implement the fast matrix-vector products.
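The computational point can be sketched briefly: iterative solvers such as conjugate gradients touch the covariance matrix only through matrix-vector products, so a fast (e.g. multipole-accelerated) product can stand in for the dense matrix. In the Python sketch below the Gaussian covariance, the nugget value, and the random data are illustrative assumptions, and a dense product stands in for the fast one:

```python
import numpy as np

def conjugate_gradient(matvec, b, tol=1e-10, max_iter=500):
    """Solve A x = b for SPD A using only matrix-vector products with A."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(0)
pts = rng.random((40, 2))                     # scattered 2D sample sites
obs = rng.random(40)                          # observed values at the sites
# Gaussian covariance with a small "nugget" for numerical stability
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
K = np.exp(-d2 / 0.1) + 1e-3 * np.eye(len(pts))

# The dense product below is where a fast multipole / fast Gauss
# transform product would be substituted for large data sets.
w = conjugate_gradient(lambda v: K @ v, obs)
```

The solver never forms or factors K explicitly; replacing `lambda v: K @ v` with an O(n log n) approximate product is what turns this into a scalable kriging solver.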
Wang, Wentao; Meng, Bingjun; Lu, Xiaoxia; Liu, Yu; Tao, Shu
2007-10-29
Methods for the simultaneous extraction of polycyclic aromatic hydrocarbons (PAHs) and organochlorine pesticides (OCPs) from soils using Soxhlet extraction, microwave-assisted extraction (MAE) and accelerated solvent extraction (ASE) were established, and the extraction efficiencies of the three methods were systematically compared in terms of procedural blank, limits of detection and quantification, method recovery and reproducibility, method chromatogram and other factors. In addition, soils with different total organic carbon (TOC) contents were used to test the extraction efficiencies of the three methods. The results showed that the values obtained in this study were comparable with those reported by other studies. In some respects, such as method recovery and reproducibility, there were no significant differences among the three methods for the extraction of PAHs and OCPs; in others, such as procedural blank and limits of detection and quantification, there were significant differences. Overall, ASE had the best extraction efficiency compared to MAE and Soxhlet extraction, and the extraction efficiencies of MAE and Soxhlet extraction were comparable to each other, depending on properties of the studied soil such as its TOC content. Considering other factors such as solvent consumption and extraction time, ASE and MAE are preferable to Soxhlet extraction.
Implicit–explicit (IMEX) Runge–Kutta methods for non-hydrostatic atmospheric models
Gardner, David J.; Guerra, Jorge E.; Hamon, François P.; ...
2018-04-17
The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit–explicit (IMEX) additive Runge–Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit – vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.
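The motivation for treating stiff (acoustic-like) dynamics implicitly can be illustrated with the simplest IMEX scheme, forward-backward Euler, on a stiff scalar model problem. This toy split is an assumption for illustration and is not one of the ARK methods evaluated in the paper:

```python
import numpy as np

# Stiff test problem: y' = -lam*(y - cos t) - sin t, exact solution y = cos t.
# Split: the stiff linear relaxation -lam*y is treated implicitly,
# everything else explicitly.
lam = 1000.0
f_explicit = lambda t: lam * np.cos(t) - np.sin(t)   # non-stiff part
h, steps = 0.01, 100                                 # h is 10x the explicit limit 2/lam

y_imex, t = 1.0, 0.0
for _ in range(steps):
    # IMEX (forward-backward) Euler:
    #   y_{n+1} = y_n - h*lam*y_{n+1} + h*f_explicit(t_n)
    y_imex = (y_imex + h * f_explicit(t)) / (1.0 + h * lam)
    t += h

# Fully explicit Euler at the same step size violates h < 2/lam and diverges.
y_exp, t = 1.0, 0.0
for _ in range(steps):
    y_exp = y_exp + h * (-lam * (y_exp - np.cos(t)) - np.sin(t))
    t += h
```

The implicit treatment of the stiff term requires only a scalar division here; in an atmospheric model it requires the (vertically implicit or globally coupled) solves discussed in the abstract, which is exactly the cost/stability trade-off the paper evaluates.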
NASA Astrophysics Data System (ADS)
Wang, Yuan; Chen, Zhidong; Sang, Xinzhu; Li, Hui; Zhao, Linmin
2018-03-01
Holographic displays can provide the complete optical wave field of a three-dimensional (3D) scene, including depth perception. However, producing traditional computer-generated holograms (CGHs) often takes a long computation time, even without complex, photorealistic rendering. The backward ray-tracing technique can render photorealistic, high-quality images, and its high degree of parallelism noticeably reduces the computation time. Here, a high-efficiency photorealistic computer-generated hologram method based on the ray-tracing technique is presented. Rays are launched and traced in parallel under different illuminations and circumstances. Experimental results demonstrate the effectiveness of the proposed method. Compared with the traditional point-cloud CGH, the computation time is decreased to 24 s to reconstruct a 3D object of 100×100 rays with continuous depth change.
Simplified Least Squares Shadowing sensitivity analysis for chaotic ODEs and PDEs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chater, Mario, E-mail: chaterm@mit.edu; Ni, Angxiu, E-mail: niangxiu@mit.edu; Wang, Qiqi, E-mail: qiqi@mit.edu
This paper develops a variant of the Least Squares Shadowing (LSS) method, which has successfully computed derivatives for several chaotic ODEs and PDEs. The development in this paper aims to simplify the LSS method by improving how time dilation is treated: instead of adding an explicit time dilation term as in the original method, the new variant uses windowing, which can be more efficient and simpler to implement, especially for PDEs.
Accelerating Spaceborne SAR Imaging Using Multiple CPU/GPU Deep Collaborative Computing
Zhang, Fan; Li, Guojun; Li, Wei; Hu, Wei; Hu, Yuxin
2016-01-01
With the development of synthetic aperture radar (SAR) technologies in recent years, the huge amount of remote sensing data brings challenges for real-time imaging processing. Therefore, high-performance computing (HPC) methods have been presented to accelerate SAR imaging, especially GPU-based methods. In the classical GPU-based imaging algorithm, the GPU is employed to accelerate image processing by massively parallel computing, while the CPU is only used for auxiliary work such as data input/output (IO); the computing capability of the CPU is thus ignored and underestimated. In this work, a new deep collaborative SAR imaging method based on multiple CPUs/GPUs is proposed to achieve real-time SAR imaging. Through the proposed task partitioning and scheduling strategy, the whole image can be generated with deep collaborative multi-CPU/GPU computing. For CPU parallel imaging, the advanced vector extension (AVX) method is introduced into the multi-core CPU parallel method for higher efficiency. For GPU parallel imaging, the bottlenecks of memory limitation and frequent data transfers are removed, and several optimization strategies, such as streaming and parallel pipelining, are applied. Experimental results demonstrate that the deep CPU/GPU collaborative imaging method improves the efficiency of SAR imaging by 270 times relative to a single-core CPU and realizes real-time imaging in that the imaging rate outperforms the raw-data generation rate. PMID:27070606
Comparison of algorithms to generate event times conditional on time-dependent covariates.
Sylvestre, Marie-Pierre; Abrahamowicz, Michal
2008-06-30
The Cox proportional hazards model with time-dependent covariates (TDC) is now a part of the standard statistical analysis toolbox in medical research. As new methods involving more complex modeling of time-dependent variables are developed, simulations could often be used to systematically assess the performance of these models. Yet, generating event times conditional on TDC requires well-designed and efficient algorithms. We compare two classes of such algorithms: permutational algorithms (PAs) and algorithms based on a binomial model. We also propose a modification of the PA to incorporate a rejection sampler. We performed a simulation study to assess the accuracy, stability, and speed of these algorithms in several scenarios. Both classes of algorithms generated data sets that, once analyzed, provided virtually unbiased estimates with comparable variances. In terms of computational efficiency, the PA with the rejection sampler reduced the time necessary to generate data by more than 50 per cent relative to alternative methods. The PAs also allowed more flexibility in the specification of the marginal distributions of event times and required less calibration.
Reuse of imputed data in microarray analysis increases imputation efficiency
Kim, Ki-Yeol; Kim, Byoung-Jin; Yi, Gwan-Su
2004-01-01
Background The imputation of missing values is necessary for the efficient use of DNA microarray data, because many clustering algorithms and some statistical analyses require a complete data set. A few imputation methods for DNA microarray data have been introduced, but their efficiency was low and the validity of the imputed values had not been fully checked. Results We developed a new cluster-based imputation method called the sequential K-nearest neighbor (SKNN) method. It imputes missing values sequentially, starting from the gene with the fewest missing values, and uses the imputed values for later imputation. Although it reuses imputed values, the new method greatly improves on the conventional KNN-based method and on methods based on maximum likelihood estimation in both accuracy and computational complexity. The performance of SKNN was particularly strong relative to other imputation methods for data with high missing rates and large numbers of experiments. Applying Expectation Maximization (EM) to the SKNN method improved the accuracy, but increased computational time in proportion to the number of iterations. The Multiple Imputation (MI) method, which is well known but had not previously been applied to microarray data, showed accuracy similar to that of the SKNN method, with slightly higher dependency on the type of data set. Conclusions Sequential reuse of imputed data in KNN-based imputation greatly increases imputation efficiency. The SKNN method should be practically useful for saving the data of microarray experiments that have many missing entries, and it generates reliable imputed values that can be used for further cluster-based analysis of microarray data. PMID:15504240
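The sequential-reuse idea can be sketched in a few lines: genes are imputed in order of increasing missingness, and each newly imputed row joins the neighbor pool for later rows. This is a simplified sketch of the SKNN idea (unweighted mean over k neighbors, Euclidean distance on the columns observed in the target row), not the authors' exact implementation:

```python
import numpy as np

def sknn_impute(X, k=3):
    """Sequential KNN imputation: rows (genes) are imputed in order of
    increasing missingness, and already-imputed rows are reused as
    neighbor candidates for the rows that follow."""
    X = X.copy()
    order = np.argsort(np.isnan(X).sum(axis=1))          # least missing first
    complete = [i for i in order if not np.isnan(X[i]).any()]
    for i in order:
        miss = np.isnan(X[i])
        if not miss.any():
            continue
        obs = ~miss
        # distance to candidate neighbors on the columns observed in row i
        dists = [(np.linalg.norm(X[j, obs] - X[i, obs]), j) for j in complete]
        dists.sort()
        neighbors = [j for _, j in dists[:k]]
        X[i, miss] = X[neighbors][:, miss].mean(axis=0)  # fill from neighbors
        complete.append(i)                               # reuse downstream
    return X

# Tiny illustration: the fourth row matches the first three, so its
# missing entry should be recovered from them.
X = np.array([[1., 2., 3., 4.],
              [1., 2., 3., 4.],
              [1., 2., 3., 4.],
              [1., 2., np.nan, 4.]])
R = sknn_impute(X, k=3)
```

Ordering by missingness means the rows imputed first rest on the most observed data, which is what keeps the reused values reliable.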
Efficient method for assessing channel instability near bridges
Robinson, Bret A.; Thompson, R.E.
1993-01-01
Efficient methods for data collection and processing are required to complete channel-instability assessments at 5,600 bridge sites in Indiana at an affordable cost and within a reasonable time frame while maintaining the quality of the assessments. To provide this needed efficiency and quality control, a data-collection form was developed that specifies the data to be collected and the order of data collection. This form represents a modification of previous forms that grouped variables according to type rather than by order of collection. Assessments completed during two field seasons showed that greater efficiency was achieved by using a fill-in-the-blank form that organizes the data to be recorded in a specified order: in the vehicle, from the roadway, in the upstream channel, under the bridge, and in the downstream channel.
ERIC Educational Resources Information Center
Hopwood, Christopher J.; Morey, Leslie C.; Edelen, Maria Orlando; Shea, M. Tracie; Grilo, Carlos M.; Sanislow, Charles A.; McGlashan, Thomas H.; Daversa, Maria T.; Gunderson, John G.; Zanarini, Mary C.; Markowitz, John C.; Skodol, Andrew E.
2008-01-01
Interview methods are widely regarded as the standard for the diagnosis of borderline personality disorder (BPD), whereas self-report methods are considered a time-efficient alternative. However, the relative validity of these methods has not been sufficiently tested. The current study used data from the Collaborative Longitudinal Personality…
Umoquit, Muriah J; Dobrow, Mark J; Lemieux-Charles, Louise; Ritvo, Paul G; Urbach, David R; Wodchis, Walter P
2008-08-08
This paper focuses on measuring the efficiency and effectiveness of two diagramming methods employed in key informant interviews with clinicians and health care administrators. The two methods are 'participatory diagramming', where the respondent creates a diagram that assists in their communication of answers, and 'graphic elicitation', where a researcher-prepared diagram is used to stimulate data collection. These two diagramming methods were applied in key informant interviews and their value in efficiently and effectively gathering data was assessed based on quantitative measures and qualitative observations. Assessment of the two diagramming methods suggests that participatory diagramming is an efficient method for collecting data in graphic form, but may not generate the depth of verbal response that many qualitative researchers seek. In contrast, graphic elicitation was more intuitive, better understood and preferred by most respondents, and often provided more contemplative verbal responses, however this was achieved at the expense of more interview time. Diagramming methods are important for eliciting interview data that are often difficult to obtain through traditional verbal exchanges. Subject to the methodological limitations of the study, our findings suggest that while participatory diagramming and graphic elicitation have specific strengths and weaknesses, their combined use can provide complementary information that would not likely occur with the application of only one diagramming method. The methodological insights gained by examining the efficiency and effectiveness of these diagramming methods in our study should be helpful to other researchers considering their incorporation into qualitative research designs.
An efficient method for hybrid density functional calculation with spin-orbit coupling
NASA Astrophysics Data System (ADS)
Wang, Maoyuan; Liu, Gui-Bin; Guo, Hong; Yao, Yugui
2018-03-01
In first-principles calculations, hybrid functional is often used to improve accuracy from local exchange correlation functionals. A drawback is that evaluating the hybrid functional needs significantly more computing effort. When spin-orbit coupling (SOC) is taken into account, the non-collinear spin structure increases computing effort by at least eight times. As a result, hybrid functional calculations with SOC are intractable in most cases. In this paper, we present an approximate solution to this problem by developing an efficient method based on a mixed linear combination of atomic orbital (LCAO) scheme. We demonstrate the power of this method using several examples and we show that the results compare very well with those of direct hybrid functional calculations with SOC, yet the method only requires a computing effort similar to that without SOC. The presented technique provides a good balance between computing efficiency and accuracy, and it can be extended to magnetic materials.
Xie, Shao-Lin; Bian, Wan-Ping; Wang, Chao; Junaid, Muhammad; Zou, Ji-Xing; Pei, De-Sheng
2016-01-01
Contemporary improvements in the type II clustered regularly interspaced short palindromic repeats/CRISPR-associated protein 9 (CRISPR/Cas9) system offer a convenient way for genome editing in zebrafish. However, the low efficiencies of genome editing and germline transmission require time-intensive and laborious screening work. Here, we report a method based on in vitro oocyte storage, in which oocytes are injected in advance and incubated in oocyte storage medium, that significantly improves the efficiencies of genome editing and germline transmission by in vitro fertilization (IVF) in zebrafish. Compared to conventional methods, prior micro-injection of zebrafish oocytes improved the efficiency of genome editing, especially for sgRNAs with low targeting efficiency. Due to its high throughput, simplicity and flexible design, this novel strategy provides an efficient alternative for increasing the speed of generating heritable mutants in zebrafish using the CRISPR/Cas9 system. PMID:27680290
Khripunov, Sergey; Kobtsev, Sergey; Radnatarov, Daba
2016-01-20
This work presents, for the first time to the best of our knowledge, a comparative efficiency analysis of various techniques for extra-cavity second harmonic generation (SHG) of continuous-wave single-frequency radiation in nonperiodically poled nonlinear crystals over a broad range of power levels. The efficiency of nonlinear radiation transformation at powers from 1 W to 10 kW was studied in three configurations: with an external power-enhancement cavity, and without the cavity in the case of a single or double radiation pass through a nonlinear crystal. It is demonstrated that at power levels exceeding 1 kW the efficiencies of the methods with and without external power-enhancement cavities become comparable, whereas at even higher powers SHG by a single or double pass through a nonlinear crystal becomes preferable because of its relatively high transformation efficiency and fairly simple implementation.
NASA Astrophysics Data System (ADS)
Borazjani, Iman; Asgharzadeh, Hafez
2015-11-01
Flow simulations involving complex geometries and moving boundaries suffer from time-step size restrictions and low convergence rates with explicit and semi-implicit schemes. Implicit schemes can be used to overcome these restrictions. However, implementing an implicit solver for nonlinear equations such as the Navier-Stokes equations is not straightforward. Newton-Krylov subspace methods (NKMs) are among the most advanced iterative methods for solving nonlinear equations such as implicit discretizations of the Navier-Stokes equations. The efficiency of NKMs depends heavily on the Jacobian formation method: automatic differentiation is very expensive, and matrix-free methods slow down as the mesh is refined. An analytical Jacobian is inexpensive, but deriving an analytical Jacobian for the Navier-Stokes equations on a staggered grid is challenging. An NKM with a novel analytical Jacobian was developed and validated against the Taylor-Green vortex and pulsatile flow in a 90-degree bend. The developed method successfully handled complex geometries such as an intracranial aneurysm with multiple overset grids and immersed boundaries. It is shown that the NKM with an analytical Jacobian is 3 to 25 times faster than the fixed-point implicit Runge-Kutta method, and more than 100 times faster than automatic differentiation, depending on the grid size and the flow problem. The developed methods are fully parallelized, with a parallel efficiency of 80-90% on the problems tested.
Dynamic implicit 3D adaptive mesh refinement for non-equilibrium radiation diffusion
NASA Astrophysics Data System (ADS)
Philip, B.; Wang, Z.; Berrill, M. A.; Birke, M.; Pernice, M.
2014-04-01
The time-dependent non-equilibrium radiation diffusion equations are important for solving the transport of energy through radiation in optically thick regimes and find applications in several fields, including astrophysics and inertial confinement fusion. The associated initial boundary value problems often exhibit a wide range of scales in space and time and are extremely challenging to solve. To simulate these systems efficiently and accurately, we describe our research on combining techniques that will also find use more broadly for long-term time integration of nonlinear multi-physics systems: implicit time integration for stiff systems, local-control-theory-based step size control to minimize the global number of time steps while controlling accuracy, dynamic 3D adaptive mesh refinement (AMR) to minimize memory and computational costs, Jacobian-free Newton-Krylov methods on AMR grids for efficient nonlinear solution, and optimal multilevel preconditioner components that provide level-independent solver convergence.
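The Jacobian-free Newton-Krylov idea, approximating Jacobian-vector products with finite differences of the residual so that no Jacobian matrix is ever formed, can be sketched on a small nonlinear system. As assumptions for illustration: the inner Krylov solver here is plain conjugate gradients, which relies on the symmetric positive-definite Jacobian of this toy problem, whereas production codes typically use GMRES with a preconditioner:

```python
import numpy as np

def cg(matvec, b, tol=1e-10, iters=200):
    """Plain conjugate gradients, accessing the operator only via matvec."""
    x = np.zeros_like(b)
    r = b - matvec(x); p = r.copy(); rs = r @ r
    for _ in range(iters):
        Ap = matvec(p); a = rs / (p @ Ap)
        x += a * p; r -= a * Ap
        rs2 = r @ r
        if rs2 < tol ** 2:
            break
        p = r + (rs2 / rs) * p; rs = rs2
    return x

def jfnk_solve(F, x0, newton_tol=1e-8, eps=1e-7):
    """Jacobian-free Newton-Krylov: each Newton system J dx = -F(x) is
    solved using only products J*v, approximated by the finite
    difference (F(x + eps*v) - F(x)) / eps."""
    x = x0.copy()
    for _ in range(50):
        Fx = F(x)
        if np.linalg.norm(Fx) < newton_tol:
            break
        jv = lambda v: (F(x + eps * v) - Fx) / eps   # matrix-free J*v
        x = x + cg(jv, -Fx)
    return x

# Toy nonlinear system with an SPD Jacobian: F(x) = A x + x^3 - b.
rng = np.random.default_rng(2)
M = rng.random((5, 5))
A = M @ M.T + 5 * np.eye(5)          # symmetric positive definite
b = rng.random(5)
F = lambda x: A @ x + x ** 3 - b
root = jfnk_solve(F, np.zeros(5))
```

The appeal for AMR radiation-diffusion codes is that `F` is just the discrete residual evaluation, so the nonlinear solver needs no Jacobian storage; the multilevel preconditioners mentioned in the abstract would enter as a preconditioner on the inner Krylov solve.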
Time-Efficient High-Rate Data Flooding in One-Dimensional Acoustic Underwater Sensor Networks
Kwon, Jae Kyun; Seo, Bo-Min; Yun, Kyungsu; Cho, Ho-Shin
2015-01-01
Because underwater communication environments have poor characteristics, such as severe attenuation, large propagation delays and narrow bandwidths, data are normally transmitted at low rates through acoustic waves. On the other hand, as high traffic has recently been required in diverse areas, high-rate transmission has become necessary. In this paper, transmission/reception timing schemes that maximize time-axis use efficiency, improving resource efficiency for high-rate transmission, are proposed. The merit of the proposed scheme is demonstrated by examining the power distributions by node, rate bounds, power levels depending on the rates and number of nodes, and network split gains through mathematical analysis and numerical results. In addition, simulation results show that the proposed scheme outperforms the existing packet-train method. PMID:26528983
Time for Genome Editing: Next-Generation Attenuated Malaria Parasites.
Singer, Mirko; Frischknecht, Friedrich
2017-03-01
Immunization with malaria parasites that developmentally arrest in or immediately after the liver stage is the only way currently known to confer sterilizing immunity in both humans and rodent models. There are various ways to attenuate parasite development resulting in different timings of arrest, which has a significant impact on vaccination efficiency. To understand what most impacts vaccination efficiency, newly developed gain-of-function methods can now be used to generate a wide array of differently attenuated parasites. The combination of multiple attenuation approaches offers the potential to engineer efficiently attenuated Plasmodium parasites and learn about their fascinating biology at the same time. Here we discuss recent studies and the potential of targeted parasite manipulation using genome editing to develop live attenuated malaria vaccines.
Efficiency and Accuracy of Time-Accurate Turbulent Navier-Stokes Computations
NASA Technical Reports Server (NTRS)
Rumsey, Christopher L.; Sanetrik, Mark D.; Biedron, Robert T.; Melson, N. Duane; Parlette, Edward B.
1995-01-01
The accuracy and efficiency of two types of subiterations in both explicit and implicit Navier-Stokes codes are explored for unsteady laminar circular-cylinder flow and unsteady turbulent flow over an 18-percent-thick circular-arc (biconvex) airfoil. Grid and time-step studies are used to assess the numerical accuracy of the methods. Nonsubiterative time-stepping schemes and schemes with physical time subiterations are subject to time-step limitations in practice that are removed by pseudo-time subiterations. Computations for the circular-arc airfoil indicate that a one-equation turbulence model predicts the unsteady separated flow better than an algebraic turbulence model; also, the hysteresis with Mach number of the self-excited unsteadiness due to shock and boundary-layer separation is well predicted.
NASA Astrophysics Data System (ADS)
Kifonidis, K.; Müller, E.
2012-08-01
Aims: We describe and study a family of new multigrid iterative solvers for the multidimensional, implicitly discretized equations of hydrodynamics. Schemes of this class are free of the Courant-Friedrichs-Lewy condition. They are intended for simulations in which widely differing wave propagation timescales are present. A preferred solver in this class is identified. Applications to some simple stiff test problems that are governed by the compressible Euler equations are presented to evaluate the convergence behavior and the stability properties of this solver. Areas are identified where further algorithmic work is required to make the method sufficiently efficient and robust for future application to difficult astrophysical flow problems. Methods: The basic equations are formulated and discretized on non-orthogonal, structured curvilinear meshes. Roe's approximate Riemann solver and a second-order accurate reconstruction scheme are used for spatial discretization. Implicit Runge-Kutta (ESDIRK) schemes are employed for temporal discretization. The resulting discrete equations are solved with a full-coarsening, non-linear multigrid method. Smoothing is performed with multistage-implicit smoothers, applied here to the time-dependent equations by means of dual time stepping. Results: For steady-state problems, our results show that the efficiency of the present approach is comparable to the best implicit solvers for conservative discretizations of the compressible Euler equations that can be found in the literature. The use of red-black as opposed to symmetric Gauss-Seidel iteration in the multistage smoother is found to have only a minor impact on multigrid convergence. This should enable scalable parallelization without having to seriously compromise the method's algorithmic efficiency. For time-dependent test problems, our results reveal that the multigrid convergence rate degrades with increasing Courant numbers (i.e. time step sizes).
Beyond a Courant number of nine thousand, even complete multigrid breakdown is observed. Local Fourier analysis indicates that the degradation of the convergence rate is associated with the coarse-grid correction algorithm. An implicit scheme for the Euler equations that makes use of the present method was, nevertheless, able to outperform a standard explicit scheme on a time-dependent problem with a Courant number of order 1000. Conclusions: For steady-state problems, the described approach enables the construction of parallelizable, efficient, and robust implicit hydrodynamics solvers. The applicability of the method to time-dependent problems is presently restricted to cases with moderately high Courant numbers. This is due to an insufficient coarse-grid correction of the employed multigrid algorithm for large time steps. Further research will be required to help us to understand and overcome the observed multigrid convergence difficulties for time-dependent problems.
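Dual time stepping, by which the multistage-implicit smoothers above are applied to the time-dependent equations, can be illustrated on a scalar model problem. The sketch below is a deliberate simplification, a plain explicit pseudo-time march on du/dt = -lam*u rather than a multigrid smoother; the function name and parameters are assumptions, not the paper's:

```python
def implicit_step_dual_time(u_n, lam, dt, dtau=0.1, sweeps=500):
    """One backward-Euler step of du/dt = -lam*u. Instead of inverting
    (1/dt + lam) directly, the unsteady residual
        R(u) = (u - u_n)/dt + lam*u
    is driven to zero by marching in pseudo-time: u <- u - dtau*R(u)."""
    u = u_n  # initial guess for u at the new time level
    for _ in range(sweeps):
        u -= dtau * ((u - u_n) / dt + lam * u)
    return u
```

When the pseudo-time iteration converges, the result matches the direct backward-Euler update u_n / (1 + lam*dt); the point of the construction is that any convergent steady-state iteration (here a single explicit update, in the paper a multigrid cycle) can serve as the inner solver.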
On computational methods for crashworthiness
NASA Technical Reports Server (NTRS)
Belytschko, T.
1992-01-01
The evolution of computational methods for crashworthiness and related fields is described and linked with the decreasing cost of computational resources and with improvements in computation methodologies. The latter includes more effective time integration procedures and more efficient elements. Some recent developments in methodologies and future trends are also summarized. These include multi-time step integration (or subcycling), further improvements in elements, adaptive meshes, and the exploitation of parallel computers.
Real-Time Measurement of Machine Efficiency during Inertia Friction Welding.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tung, Daniel Joseph; Mahaffey, David; Senkov, Oleg
Process efficiency is a crucial parameter for inertia friction welding (IFW) that is largely unknown at the present time. A new method has been developed to determine the transient profile of the IFW process efficiency by comparing the workpiece torque used to heat and deform the joint region to the total torque. Particularly, the former is measured by a torque load cell attached to the non-rotating workpiece while the latter is calculated from the deceleration rate of flywheel rotation. The experimentally-measured process efficiency for IFW of AISI 1018 steel rods is validated independently by the upset length estimated from an analytical equation of heat balance and the flash profile calculated from a finite element based thermal stress model. The transient behaviors of torque and efficiency during IFW are discussed based on the energy loss to machine bearings and the bond formation at the joint interface.
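The torque comparison described above amounts to dividing the measured workpiece torque by the total torque recovered from the flywheel deceleration, T_total = -I * domega/dt. A minimal sketch of that bookkeeping; the function name and numbers are illustrative, not data from the study:

```python
def process_efficiency(omega, dt, inertia, weld_torque):
    """Transient IFW process efficiency per sampling interval: the weld
    torque measured at the non-rotating workpiece, divided by the total
    torque inferred from flywheel deceleration, T_total = -I * domega/dt."""
    eff = []
    for k in range(len(omega) - 1):
        total = -inertia * (omega[k + 1] - omega[k]) / dt  # flywheel torque
        eff.append(weld_torque[k] / total)
    return eff
```

With a flywheel of inertia 2 decelerating by 10 rad/s each second, the total torque is 20 in consistent units, so weld torques of 10 and 15 give efficiencies of 0.5 and 0.75 over the two intervals.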
A robust component mode synthesis method for stochastic damped vibroacoustics
NASA Astrophysics Data System (ADS)
Tran, Quang Hung; Ouisse, Morvan; Bouhaddi, Noureddine
2010-01-01
In order to reduce vibrations or sound levels in industrial vibroacoustic problems, a low-cost and efficient approach is to introduce visco- and poro-elastic materials either on the structure or on the cavity walls. Depending on the frequency range of interest, several numerical approaches can be used to estimate the behavior of the coupled problem. In the context of low frequency applications related to acoustic cavities with surrounding vibrating structures, the finite element method (FEM) is one of the most efficient techniques. Nevertheless, industrial problems lead to large FE models which are time-consuming in updating or optimization processes. A classical way to reduce calculation time is the component mode synthesis (CMS) method, whose classical formulation is not always efficient at predicting the dynamical behavior of structures including visco-elastic and/or poro-elastic patches. To ensure an efficient prediction, the fluid and structural bases used for the model reduction then need to be updated as a result of changes in a parametric optimization procedure. For complex models, this leads to prohibitive numerical costs in the optimization phase or for the management and propagation of uncertainties in the stochastic vibroacoustic problem. In this paper, the formulation of an alternative CMS method is proposed and compared to the classical (u, p) CMS method: the Ritz basis is completed with static residuals associated with visco-elastic and poro-elastic behaviors. This basis is also enriched by the static response of residual forces due to structural modifications, resulting in a so-called robust basis, also adapted to Monte Carlo simulations for uncertainty propagation using reduced models.
Real-Time Frequency Response Estimation Using Joined-Wing SensorCraft Aeroelastic Wind-Tunnel Data
NASA Technical Reports Server (NTRS)
Grauer, Jared A; Heeg, Jennifer; Morelli, Eugene A
2012-01-01
A new method is presented for estimating frequency responses and their uncertainties from wind-tunnel data in real time. The method uses orthogonal phase-optimized multisine excitation inputs and a recursive Fourier transform with a least-squares estimator. The method was first demonstrated with an F-16 nonlinear flight simulation and results showed that accurate short period frequency responses were obtained within 10 seconds. The method was then applied to wind-tunnel data from a previous aeroelastic test of the Joined-Wing SensorCraft. Frequency responses describing bending strains from simultaneous control surface excitations were estimated in a time-efficient manner.
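The core of such an estimator is evaluating input and output Fourier coefficients at the known excitation frequencies and forming their ratio. The sketch below accumulates the Fourier sums sample-by-sample as a batch stand-in for the recursive transform and least-squares step of the paper; the function name and test signals are assumptions:

```python
import cmath

def freq_response(u, y, omegas):
    """Frequency responses H(jw) = Y(w)/U(w) of output y to input u at the
    given excitation frequencies (rad/sample), built from Fourier sums
    accumulated one sample at a time, as a recursive estimator would."""
    H = []
    for w in omegas:
        U = 0j
        Y = 0j
        for n, (un, yn) in enumerate(zip(u, y)):
            ph = cmath.exp(-1j * w * n)  # running Fourier accumulation
            U += un * ph
            Y += yn * ph
        H.append(Y / U)
    return H
```

For a pure static gain (y = 2u) the ratio recovers H = 2 exactly at any excited frequency; a dynamic system would instead show gain and phase varying with frequency.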
Alam, Pravej; Khan, Zainul Abdeen; Abdin, Malik Zainul; Khan, Jawaid A; Ahmad, Parvaiz; Elkholy, Shereen F; Sharaf-Eldin, Mahmoud A
2017-05-01
Catharanthus roseus is an important medicinal plant known for its pharmacological qualities, such as antimicrobial, anticancer, antifeedant, antisterility and antidiabetic activities. More than 130 bioactive compounds, including vinblastine, vindoline and vincristine, are synthesized in this plant. Extensive studies have been carried out to optimize regeneration and transformation protocols, but most of the protocols described are laborious and time-consuming. Because of these sophisticated regeneration and genetic transformation protocols, production of these bioactive molecules is low and not feasible to commercialize worldwide. Here we have optimized an efficient protocol for regeneration and transformation that minimizes the time scale and enhances the transformation frequency through the Agrobacterium- and sonication-assisted transformation (SAAT) method. In this study, hypocotyl explants responded best for maximal production of transformed shoots. The callus induction percentage was 52% with 1.0 mg L-1 BAP and 0.5 mg L-1 NAA, while 80% shoot regeneration was obtained with 4.0 mg L-1 BAP and 0.05 mg L-1 NAA. Microscopic studies revealed that GFP expression was clearly localized in leaf tissue of C. roseus after transformation with the pRepGFP0029 construct; transformation efficiency was therefore assessed on the basis of GFP localization. The transformation efficiency of the SAAT method was 6.0%, compared to 3.5% for the conventional method. Further, PCR analysis confirmed the integration of the nptII gene in the transformed plantlets of C. roseus.
A high-order Lagrangian-decoupling method for the incompressible Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Ho, Lee-Wing; Maday, Yvon; Patera, Anthony T.; Ronquist, Einar M.
1989-01-01
A high-order Lagrangian-decoupling method is presented for the unsteady convection-diffusion and incompressible Navier-Stokes equations. The method is based upon: (1) Lagrangian variational forms that reduce the convection-diffusion equation to a symmetric initial value problem; (2) implicit high-order backward-differentiation finite-difference schemes for integration along characteristics; (3) finite element or spectral element spatial discretizations; and (4) mesh-invariance procedures and high-order explicit time-stepping schemes for deducing function values at convected space-time points. The method improves upon previous finite element characteristic methods through the systematic and efficient extension to high order accuracy, and the introduction of a simple structure-preserving characteristic-foot calculation procedure which is readily implemented on modern architectures. The new method is significantly more efficient than explicit-convection schemes for the Navier-Stokes equations due to the decoupling of the convection and Stokes operators and the attendant increase in temporal stability. Numerous numerical examples are given for the convection-diffusion and Navier-Stokes equations for the particular case of a spectral element spatial discretization.
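The characteristic-foot calculation at the heart of such Lagrangian-decoupling schemes can be sketched for periodic 1-D advection: trace each grid point back along its characteristic and interpolate at the departure point. This toy version uses first-order linear interpolation, whereas the paper's schemes are high-order; all names are illustrative:

```python
import math

def semi_lagrangian_step(u, c, dt, dx):
    """One semi-Lagrangian step for periodic 1-D advection u_t + c*u_x = 0:
    trace each node back along its characteristic and linearly interpolate
    the old solution at the departure point (the 'characteristic foot')."""
    n = len(u)
    out = []
    for i in range(n):
        foot = (i * dx - c * dt) / dx   # departure point, in index units
        j = math.floor(foot)            # lower neighbouring node
        frac = foot - j
        out.append((1.0 - frac) * u[j % n] + frac * u[(j + 1) % n])
    return out
```

Because the update only evaluates data at the foot of the characteristic, the time step is not bound by a convective CFL limit, which is the temporal-stability advantage the abstract refers to.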
A Brightness-Referenced Star Identification Algorithm for APS Star Trackers
Zhang, Peng; Zhao, Qile; Liu, Jingnan; Liu, Ning
2014-01-01
Star trackers are currently the most accurate spacecraft attitude sensors. As a result, they are widely used in remote sensing satellites. Since traditional charge-coupled device (CCD)-based star trackers have a limited sensitivity range and dynamic range, the matching process for a star tracker is typically not very sensitive to star brightness. For active pixel sensor (APS) star trackers, the intensity of an imaged star is valuable information that can be used in the star identification process. In this paper an improved brightness-referenced star identification algorithm is presented. This algorithm utilizes the k-vector search theory and adds imaged stars' intensities to narrow the search scope and therefore increase the efficiency of the matching process. Based on different imaging conditions (slew, bright bodies, etc.) the developed matching algorithm operates in one of two identification modes: a three-star mode and a four-star mode. If the reference bright stars (stars brighter than third magnitude) show up, the algorithm runs the three-star mode and efficiency is further improved. The proposed method was compared with two other distinctive methods, the pyramid and geometric voting methods. All three methods were tested with simulation data and actual in-orbit data from the APS star tracker of ZY-3. Using a catalog composed of 1500 stars, the results show that without false stars the efficiency of this new method is 4∼5 times that of the pyramid method and 35∼37 times that of the geometric method. PMID:25299950
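The narrowing step that a k-vector search performs, restricting candidate catalogue pairs to those whose angular distance falls inside a measurement window, can be sketched with ordinary binary search. A real k-vector replaces the searches with a precomputed linear index, so this is only an illustration of the narrowing idea; the function name and numbers are assumptions:

```python
import bisect

def pair_candidates(sorted_dists, d_measured, tol):
    """Return (lo, hi) such that catalogue slice sorted_dists[lo:hi] holds
    all star pairs whose angular distance lies within d_measured +/- tol.
    The k-vector method achieves the same narrowing with a precomputed
    linear index instead of binary search."""
    lo = bisect.bisect_left(sorted_dists, d_measured - tol)
    hi = bisect.bisect_right(sorted_dists, d_measured + tol)
    return lo, hi
```

Adding a brightness criterion, as the paper does, simply shrinks the candidate slice further before the geometric matching modes run.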
NASA Astrophysics Data System (ADS)
Berthias, F.; Feketeová, L.; Della Negra, R.; Dupasquier, T.; Fillol, R.; Abdoul-Carime, H.; Farizon, B.; Farizon, M.; Märk, T. D.
2018-01-01
The combination of the Dispositif d'Irradiation d'Agrégats Moléculaire with the correlated ion and neutral time of flight-velocity map imaging technique provides a new way to explore processes occurring subsequent to the excitation of charged nano-systems. The present contribution describes in detail the methods developed for the quantitative measurement of branching ratios and cross sections for collision-induced dissociation processes of water cluster nano-systems. These methods are based on measurements of the detection efficiency of neutral fragments produced in these dissociation reactions. Moreover, measured detection efficiencies are used here to extract the number of neutral fragments produced for a given charged fragment.
Towards developing robust algorithms for solving partial differential equations on MIMD machines
NASA Technical Reports Server (NTRS)
Saltz, Joel H.; Naik, Vijay K.
1988-01-01
Methods for efficient computation of numerical algorithms on a wide variety of MIMD machines are proposed. These techniques reorganize the data dependency patterns to improve the processor utilization. The model problem finds the time-accurate solution to a parabolic partial differential equation discretized in space and implicitly marched forward in time. The algorithms are extensions of Jacobi and SOR. The extensions consist of iterating over a window of several timesteps, allowing efficient overlap of computation with communication. The methods increase the degree to which work can be performed while data are communicated between processors. The effect of the window size and of domain partitioning on the system performance is examined by implementing the algorithm on a simulated multiprocessor system.
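The windowed iteration described above can be sketched for the 1-D backward-Euler heat equation with a Jacobi-type extension: each sweep updates every timestep in the window from the latest iterate of the step before it. This serial reduction only illustrates the dependency structure; the actual benefit comes from overlapping communication with computation on a multiprocessor. Names and parameters are assumptions:

```python
def windowed_jacobi(u0, r, steps, sweeps):
    """Backward-Euler heat equation on a periodic grid,
        (1 + 2r)*u_i - r*(u_{i-1} + u_{i+1}) = prev_i,
    iterated with Jacobi over a *window* of several timesteps at once: one
    sweep visits every step in the window, using the newest iterate of the
    preceding step as its right-hand side. In parallel, this lets work on
    one step proceed while halo data for another is in flight."""
    n = len(u0)
    window = [list(u0) for _ in range(steps)]  # initial guesses per step
    for _ in range(sweeps):
        prev = u0
        for k in range(steps):
            uk = window[k]
            window[k] = [(prev[i] + r * (uk[(i - 1) % n] + uk[(i + 1) % n]))
                         / (1.0 + 2.0 * r) for i in range(n)]
            prev = window[k]
    return window[-1]
```

Constant data is reproduced exactly (the scheme is consistent), and for a single timestep the sweeps converge to the direct backward-Euler solution, since the system is strictly diagonally dominant.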
Highly efficient and autocatalytic H2O dissociation for CO2 reduction into formic acid with zinc.
Jin, Fangming; Zeng, Xu; Liu, Jianke; Jin, Yujia; Wang, Lunying; Zhong, Heng; Yao, Guodong; Huo, Zhibao
2014-03-28
Artificial photosynthesis, specifically H2O dissociation for CO2 reduction with solar energy, is regarded as one of the most promising methods for sustainable energy and utilisation of environmental resources. However, a highly efficient conversion still remains extremely challenging. The hydrogenation of CO2 is regarded as the most commercially feasible method, but this method requires either exotic catalysts or high-purity hydrogen and hydrogen storage, which are regarded as an energy-intensive process. Here we report a highly efficient method of H2O dissociation for reducing CO2 into chemicals with Zn powder that produces formic acid with a high yield of approximately 80%, and this reaction is revealed for the first time as an autocatalytic process in which an active intermediate, ZnH(-) complex, serves as the active hydrogen. The proposed process can assist in developing a new concept for improving artificial photosynthetic efficiency by coupling geochemistry, specifically the metal-based reduction of H2O and CO2, with solar-driven thermochemistry for reducing metal oxide into metal.
Estimating the Efficiency of Therapy Groups in a College Counseling Center
ERIC Educational Resources Information Center
Weatherford, Ryan D.
2017-01-01
College counseling centers are facing rapidly increasing demands for services and are tasked to find efficient ways of providing adequate services while managing limited space. The use of therapy groups has been proposed as a method of managing demand. This brief report examines the clinical time savings of a traditional group therapy program in a…
Self-learning Monte Carlo method
Liu, Junwei; Qi, Yang; Meng, Zi Yang; ...
2017-01-04
Monte Carlo simulation is an unbiased numerical tool for studying classical and quantum many-body systems. One of its bottlenecks is the lack of a general and efficient update algorithm for large size systems close to the phase transition, for which local updates perform badly. In this Rapid Communication, we propose a general-purpose Monte Carlo method, dubbed self-learning Monte Carlo (SLMC), in which an efficient update algorithm is first learned from the training data generated in trial simulations and then used to speed up the actual simulation. Lastly, we demonstrate the efficiency of SLMC in a spin model at the phase transition point, achieving a 10–20 times speedup.
Yamada, Akira; Terakawa, Mitsuhiro
2015-04-10
We present a design method for a bull's eye structure with asymmetric grooves for focusing obliquely incident light. The method can tune transmission peaks to a desired oblique angle while collecting light from a wider range of angles. The bull's eye groove geometry for oblique incidence is designed from the electric field intensity pattern around an isolated subwavelength aperture on a thin gold film at oblique incidence, calculated by the finite-difference time-domain method. Wide angular transmission efficiency is successfully achieved by overlapping two different bull's eye groove patterns designed with different peak angles. Our design method would overcome the angular limitations of conventional methods.
Numerical approximations for fractional diffusion equations via a Chebyshev spectral-tau method
NASA Astrophysics Data System (ADS)
Doha, Eid H.; Bhrawy, Ali H.; Ezz-Eldien, Samer S.
2013-10-01
In this paper, a class of fractional diffusion equations with variable coefficients is considered. An accurate and efficient spectral tau technique for solving the fractional diffusion equations numerically is proposed. This method is based upon Chebyshev tau approximation together with Chebyshev operational matrix of Caputo fractional differentiation. Such approach has the advantage of reducing the problem to the solution of a system of algebraic equations, which may then be solved by any standard numerical technique. We apply this general method to solve four specific examples. In each of the examples considered, the numerical results show that the proposed method is of high accuracy and is efficient for solving the time-dependent fractional diffusion equations.
Research on interpolation methods in medical image processing.
Pan, Mei-Sen; Yang, Xiao-Li; Tang, Jing-Tian
2012-04-01
Image interpolation is widely used in the field of medical image processing. In this paper, interpolation methods are divided into three groups: filter interpolation, ordinary interpolation and general partial volume interpolation. Some commonly used filter methods for image interpolation are introduced, but their interpolation effects need to be further improved. In analyzing and discussing ordinary interpolation, many asymmetrical kernel interpolation methods are proposed; compared with symmetrical kernel methods, the former have some advantages. After analyzing the partial volume and generalized partial volume estimation interpolations, the new concept and constraint conditions of the general partial volume interpolation are defined, and several new partial volume interpolation functions are derived. By performing experiments on image scaling, rotation and self-registration, the interpolation methods mentioned in this paper are compared in terms of entropy, peak signal-to-noise ratio, cross entropy, normalized cross-correlation coefficient and running time. Among the filter interpolation methods, the median and B-spline filter interpolations have relatively better interpolating performance. Among the ordinary interpolation methods, on the whole, the symmetrical cubic kernel interpolations demonstrate a strong advantage, especially the symmetrical cubic B-spline interpolation; however, they are time-consuming and thus have lower time efficiency. As for the general partial volume interpolation methods, in terms of the total error of image self-registration, the symmetrical interpolations provide a certain superiority; but considering processing efficiency, the asymmetrical interpolations are better.
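The accuracy-versus-cost trade-off between linear and symmetric cubic kernel interpolation can be made concrete in one dimension. A sketch comparing piecewise-linear interpolation with a symmetric cubic (Catmull-Rom) kernel on unit-spaced samples; the kernel choice and test function are illustrative stand-ins, not the paper's kernels:

```python
import math

def linear_interp(s, x):
    """Piecewise-linear interpolation of unit-spaced samples s at x."""
    j = int(x)
    t = x - j
    return (1.0 - t) * s[j] + t * s[j + 1]

def catmull_rom(s, x):
    """Symmetric cubic kernel (Catmull-Rom) interpolation at x, using the
    four samples surrounding the evaluation point."""
    j = int(x)
    t = x - j
    p0, p1, p2, p3 = s[j - 1], s[j], s[j + 1], s[j + 2]
    return 0.5 * (2.0 * p1 + (p2 - p0) * t
                  + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t ** 2
                  + (3.0 * p1 - p0 - 3.0 * p2 + p3) * t ** 3)
```

On smooth data the cubic kernel's error is markedly smaller than the linear kernel's, at the cost of touching four samples per evaluation instead of two, which mirrors the accuracy/runtime trade-off the paper measures.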
The calculation of viscosity of liquid n-decane and n-hexadecane by the Green-Kubo method
NASA Astrophysics Data System (ADS)
Cui, S. T.; Cummings, P. T.; Cochran, H. D.
This short commentary presents the results of long molecular dynamics simulations of the shear viscosity of liquid n-decane and n-hexadecane using the Green-Kubo integration method. The relaxation time of the stress-stress correlation function is compared with those of rotation and diffusion. The rotational and diffusional relaxation times, which are easy to calculate, provide useful guides for the required simulation time in viscosity calculations. Also, the computational time required for viscosity calculations of these systems by the Green-Kubo method is compared with the time required for previous non-equilibrium molecular dynamics calculations of the same systems. The method of choice for a particular calculation is determined largely by the properties of interest, since the efficiencies of the two methods are comparable for calculation of the zero strain rate viscosity.
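The Green-Kubo route computes the shear viscosity as a time integral of the stress autocorrelation function, eta = (V / k_B T) * integral of <sigma(0) sigma(t)> dt. A minimal sketch of the correlation-and-integrate step for a scalar series; the prefactor and all MD details are omitted, and the names are illustrative:

```python
def autocorrelation(s, max_lag):
    """Time-averaged correlation C(t) = <s(0)*s(t)> of a mean-zero series,
    averaging over all available time origins for each lag."""
    n = len(s)
    return [sum(s[i] * s[i + t] for i in range(n - t)) / (n - t)
            for t in range(max_lag + 1)]

def green_kubo_integral(corr, dt):
    """Trapezoidal time integral of C(t); multiplying by V/(k_B*T) would
    give the Green-Kubo shear viscosity from the stress correlation."""
    return dt * (0.5 * corr[0] + sum(corr[1:-1]) + 0.5 * corr[-1])
```

In practice the hard part is exactly what the commentary discusses: the correlation must be accumulated long enough, relative to the stress relaxation time, for the running integral to plateau.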
A new desorption method for removing organic solvents from activated carbon using surfactant.
Hinoue, Mitsuo; Ishimatsu, Sumiyo; Fueta, Yukiko; Hori, Hajime
2017-03-28
A new desorption method was investigated, which does not require toxic organic solvents. Efficient desorption of organic solvents from activated carbon was achieved with an anionic surfactant solution, focusing on its washing and emulsifying action. Isopropyl alcohol (IPA) and methyl ethyl ketone (MEK) were used as test solvents. Lauryl benzene sulfonic acid sodium salt (LAS) and sodium dodecyl sulfate (SDS) were used as the surfactants. Activated carbon (100 mg) was placed in a vial and a predetermined amount of organic solvent was added. After leaving for about 24 h, a predetermined amount of the surfactant solution was added. After leaving for another 72 h, the vial was heated in an incubator at 60°C for a predetermined time. The organic vapor concentration was then determined with a flame ionization detector (FID)-gas chromatograph and the desorption efficiency was calculated. A high desorption efficiency was obtained with a 10% surfactant solution (LAS 8%, SDS 2%), 5 ml of desorption solution, a 60°C desorption temperature, and a desorption time of over 24 h; the desorption efficiency was 72% for IPA and 9% for MEK. Under identical conditions, the desorption efficiencies for another five organic solvents were investigated: 36%, 3%, 32%, 2%, and 3% for acetone, ethyl acetate, dichloromethane, toluene, and m-xylene, respectively. A combination of two anionic surfactants exhibited a relatively high desorption efficiency for IPA. For toluene, the desorption efficiency was low due to poor detergency and emulsification power.
Technical efficiency and resources allocation in university hospitals in Tehran, 2009-2012
Rezapour, Aziz; Ebadifard Azar, Farbod; Yousef Zadeh, Negar; Roumiani, YarAllah; Bagheri Faradonbeh, Saeed
2015-01-01
Background: Assessment of a hospital's performance in achieving its goals is a basic necessity. Measuring the efficiency of hospitals in order to boost resource productivity in healthcare organizations is extremely important. The aim of this study was to measure technical efficiency and determine the status of resource allocation in some university hospitals in Tehran, Iran. Methods: This study was conducted in 2012; the research population consisted of all hospitals affiliated with the Iran and Tehran Universities of Medical Sciences. Required data, such as human and capital resources information and production variables (hospital outputs), were collected from the data centers of the studied hospitals. Data were analyzed using the data envelopment analysis (DEA) method with DEAP 2.1 software, and the stochastic frontier analysis (SFA) method with Frontier 4.1 software. Results: According to the DEA method, the average technical, management (pure) and scale efficiencies of the studied hospitals during the study period were calculated as 0.87, 0.971, and 0.907, respectively. All kinds of efficiency did not follow a fixed trend over the study time and were constantly changing. In the stochastic frontier production function analysis, the technical efficiency of the studied industry during the study period was estimated to be 0.389. Conclusion: This study identified the hospitals with the highest and lowest efficiency. Reference hospitals (more efficient states) were indicated for the inefficient centers. According to the findings, in the hospitals that do not operate efficiently there is capacity to improve technical efficiency by removing excess inputs without changing the level of outputs. Moreover, through the optimal allocation of resources in most of the studied hospitals, very important economies of scale can be achieved. PMID:26793657
Need for speed: An optimized gridding approach for spatially explicit disease simulations.
Sellman, Stefan; Tsao, Kimberly; Tildesley, Michael J; Brommesson, Peter; Webb, Colleen T; Wennergren, Uno; Keeling, Matt J; Lindström, Tom
2018-04-01
Numerical models for simulating outbreaks of infectious diseases are powerful tools for informing surveillance and control strategy decisions. However, large-scale spatially explicit models can be limited by the amount of computational resources they require, which poses a problem when multiple scenarios need to be explored to provide policy recommendations. We introduce an easily implemented method that can reduce computation time in a standard Susceptible-Exposed-Infectious-Removed (SEIR) model without introducing any further approximations or truncations. It is based on a hierarchical infection process that operates on entire groups of spatially related nodes (cells in a grid) in order to efficiently filter out large volumes of susceptible nodes that would otherwise have required expensive calculations. After the filtering of the cells, only a subset of the nodes that were originally at risk are then evaluated for actual infection. The increase in efficiency is sensitive to the exact configuration of the grid, and we describe a simple method to find an estimate of the optimal configuration of a given landscape as well as a method to partition the landscape into a grid configuration. To investigate its efficiency, we compare the introduced methods to other algorithms and evaluate computation time, focusing on simulated outbreaks of foot-and-mouth disease (FMD) on the farm population of the USA, the UK and Sweden, as well as on three randomly generated populations with varying degrees of clustering. The introduced method provided up to 500 times faster calculations than pairwise computation, and consistently performed as well or better than other available methods. This enables large scale, spatially explicit simulations such as for the entire continental USA without sacrificing realism or predictive power.
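The exactness of such hierarchical filtering rests on a thinning identity: firing a node under a cell-wide upper bound p_max and then accepting it with probability p/p_max leaves the marginal infection probability exactly p. The sketch below keeps only that per-node thinning step; the published method additionally draws the number of first-stage hits per cell from a single binomial, which is where the bulk of the savings comes from. All names and numbers are illustrative:

```python
import random

def filtered_infection(cells, rng):
    """Two-stage draw per node: fire with the cell-wide bound p_max, then
    accept with the exact ratio p/p_max, so the marginal probability is
    p_max * (p / p_max) = p. Each cell is a list of (node, p) pairs."""
    infected = []
    for cell in cells:
        p_max = max(p for _, p in cell)
        for node, p in cell:
            if rng.random() < p_max and rng.random() < p / p_max:
                infected.append(node)
    return infected
```

Because the first-stage draw is identical for every node in a cell, it can be batched into one cheap cell-level computation, rejecting most cells without ever visiting their nodes, while the thinning step keeps the per-node probabilities exact.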
Studies on Tasar Cocoon Cooking Using Permeation Method
NASA Astrophysics Data System (ADS)
Javali, Uday C.; Malali, Kiran B.; Ramya, H. G.; Naik, Subhas V.; Padaki, Naveen V.
2018-02-01
Cocoon cooking is an important process before reeling of tasar silk yarn. Cooking ensures loosening of the filaments in the tasar cocoons, thereby easing the process of yarn withdrawal during the reeling process. Tasar cocoons have a very hard shell, and hence these cocoons need a chemical cooking process to loosen the silk filaments. An attempt has been made in this article to study the effect of using a vacuum permeation chamber for tasar cocoon cooking in order to reduce the cooking time and improve the quality of tasar silk yarn. The vacuum-assisted permeation cooking method has been studied in this article on tasar daba cocoons for cooking efficiency, deflossing and reelability. Its efficiency has been evaluated with respect to different cooking methods, viz. traditional and open pan cooking methods. The tasar silk produced after the reeling process has been tested for fineness, strength and cohesion properties. Results indicate that the permeation method of tasar cooking ensures uniform cooking with higher efficiency along with better reeling performance and improved yarn properties.
One shot methods for optimal control of distributed parameter systems 1: Finite dimensional control
NASA Technical Reports Server (NTRS)
Taasan, Shlomo
1991-01-01
The efficient numerical treatment of optimal control problems governed by elliptic partial differential equations (PDEs) and systems of elliptic PDEs, where the control is finite dimensional, is discussed. Both distributed control and boundary control cases are considered. The main characteristic of the new methods is that they are designed to solve the full optimization problem directly, rather than accelerating a descent method with an efficient multigrid solver for the equations involved. The methods use the adjoint state to construct an efficient smoother and a robust coarsening strategy. The main idea is the treatment of the control variables on appropriate scales, i.e., control variables that correspond to smooth functions are solved for on coarse grids, depending on the smoothness of these functions. Solution of the control problems is achieved at the cost of solving the constraint equations about two to three times (by a multigrid solver). Numerical examples demonstrate the effectiveness of the proposed method on distributed control, pointwise control, and boundary control problems.
Jeon, Dae-Woo; Jang, Lee-Woon; Jeon, Ju-Won; Park, Jae-Woo; Song, Young Ho; Jeon, Seong-Ran; Ju, Jin-Woo; Baek, Jong Hyeob; Lee, In-Hwan
2013-05-01
In this study, we have fabricated 375-nm-wavelength InGaN/AlInGaN nanopillar light emitting diode (LED) structures on c-plane sapphire. A uniform and highly vertical nanopillar structure was fabricated by dry etching using a self-organized Ni/SiO2 nano-sized mask. To minimize the dry etching damage, the samples were subjected to high-temperature annealing with subsequent chemical passivation in KOH solution. Prior to annealing and passivation, the UV nanopillar LEDs showed photoluminescence (PL) efficiency about 2.5 times higher than that of conventional UV LED structures, which is attributed to better light extraction efficiency and possibly some improvement of internal quantum efficiency due to partially relieved strain. Annealing alone further increased the PL efficiency by about 4.5 times compared to the conventional UV LEDs, while KOH passivation led to an overall PL efficiency improvement of more than 7 times. Combined results of Raman spectroscopy and X-ray photoelectron spectroscopy (XPS) suggest that annealing decreases the number of lattice defects and relieves the strain in the surface region of the nanopillars, whereas the KOH treatment removes the surface oxide from the nanopillar surface.
Umoquit, Muriah J; Dobrow, Mark J; Lemieux-Charles, Louise; Ritvo, Paul G; Urbach, David R; Wodchis, Walter P
2008-01-01
Background This paper focuses on measuring the efficiency and effectiveness of two diagramming methods employed in key informant interviews with clinicians and health care administrators. The two methods are 'participatory diagramming', where the respondent creates a diagram that assists in their communication of answers, and 'graphic elicitation', where a researcher-prepared diagram is used to stimulate data collection. Methods These two diagramming methods were applied in key informant interviews and their value in efficiently and effectively gathering data was assessed based on quantitative measures and qualitative observations. Results Assessment of the two diagramming methods suggests that participatory diagramming is an efficient method for collecting data in graphic form, but may not generate the depth of verbal response that many qualitative researchers seek. In contrast, graphic elicitation was more intuitive, better understood and preferred by most respondents, and often provided more contemplative verbal responses; however, this was achieved at the expense of more interview time. Conclusion Diagramming methods are important for eliciting interview data that are often difficult to obtain through traditional verbal exchanges. Subject to the methodological limitations of the study, our findings suggest that while participatory diagramming and graphic elicitation have specific strengths and weaknesses, their combined use can provide complementary information that would not likely occur with the application of only one diagramming method. The methodological insights gained by examining the efficiency and effectiveness of these diagramming methods in our study should be helpful to other researchers considering their incorporation into qualitative research designs. PMID:18691410
Preparation of biodiesel with the help of ultrasonic and hydrodynamic cavitation.
Ji, Jianbing; Wang, Jianli; Li, Yongchao; Yu, Yunliang; Xu, Zhichao
2006-12-22
An alkali-catalyzed biodiesel production method with power ultrasound (19.7 kHz) has been developed that allows a short reaction time and high yield because of emulsification and cavitation of the liquid-liquid immiscible system. Orthogonality experiments were employed to evaluate the effects of synthesis parameters. Furthermore, hydrodynamic cavitation was used for biodiesel production in comparison to the ultrasonic method. Both methods proved to be efficient, time-saving, and energy-saving for the preparation of biodiesel by transesterification of soybean oil.
NASA Technical Reports Server (NTRS)
Rosenfeld, Moshe
1990-01-01
The main goals are the development, validation, and application of a fractional step solution method of the time-dependent incompressible Navier-Stokes equations in generalized coordinate systems. A solution method that combines a finite volume discretization with a novel choice of the dependent variables and a fractional step splitting to obtain accurate solutions in arbitrary geometries is extended to include more general situations, including cases with moving grids. The numerical techniques are enhanced to gain efficiency and generality.
Estimating the number of people in crowded scenes
NASA Astrophysics Data System (ADS)
Kim, Minjin; Kim, Wonjun; Kim, Changick
2011-01-01
This paper presents a method to estimate the number of people in crowded scenes without using explicit object segmentation or tracking. The proposed method consists of three steps as follows: (1) extracting space-time interest points using eigenvalues of the local spatio-temporal gradient matrix, (2) generating crowd regions based on space-time interest points, and (3) estimating the crowd density based on the multiple regression. In experimental results, the efficiency and robustness of our proposed method are demonstrated by using PETS 2009 dataset.
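Step (3) can be sketched as an ordinary least-squares fit. The paper regresses on multiple crowd-region features; this minimal stand-in uses a single hypothetical feature (crowd-region area) purely for illustration.

```python
def fit_density_model(areas, counts):
    """Least-squares fit of count ≈ a * crowd_area + b.

    A one-feature sketch of the regression step: given training pairs of
    crowd-region area and ground-truth people count, recover the slope a
    and intercept b by the closed-form simple-regression formulas.
    """
    n = len(areas)
    mean_x = sum(areas) / n
    mean_y = sum(counts) / n
    sxx = sum((x - mean_x) ** 2 for x in areas)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(areas, counts))
    a = sxy / sxx          # slope: people per unit crowd area
    b = mean_y - a * mean_x  # intercept
    return a, b
```

At prediction time, the count for a new frame is simply `a * area + b`, clipped to be non-negative.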
Method of chaotic mixing and improved stirred tank reactors
Muzzio, F.J.; Lamberto, D.J.
1999-07-13
The invention provides a method and apparatus for efficiently achieving a homogeneous mixture of fluid components by introducing said components, having a Reynolds number of between about ≤1 and about 500, into a vessel and continuously perturbing the mixing flow by altering the flow speed and mixing time until homogeneity is reached. This method prevents the components from aggregating into non-homogeneous segregated regions within said vessel during mixing and substantially reduces the time in which the admixed components reach homogeneity. 19 figs.
Zhou, Lianjie; Chen, Nengcheng; Yuan, Sai; Chen, Zeqiang
2016-10-29
The efficient sharing of spatio-temporal trajectory data is important to understand traffic congestion in mass data. However, the data volumes of bus networks in urban cities are growing rapidly, reaching daily volumes of one hundred million datapoints. Accessing and retrieving mass spatio-temporal trajectory data in any field is hard and inefficient due to limited computational capabilities and incomplete data organization mechanisms. Therefore, we propose an optimized and efficient spatio-temporal trajectory data retrieval method based on the Cloudera Impala query engine, called ESTRI, to enhance the efficiency of mass data sharing. As an excellent query tool for mass data, Impala can be applied for mass spatio-temporal trajectory data sharing. In ESTRI we extend the spatio-temporal trajectory data retrieval function of Impala and design a suitable data partitioning method. In our experiments, the Taiyuan BeiDou (BD) bus network is selected, containing 2300 buses with BD positioning sensors and producing 20 million records every day, resulting in the two difficulties described in the Introduction section. In addition, ESTRI and MongoDB are applied in the experiments. The experiments show that ESTRI achieves the most efficient data retrieval compared to retrieval using MongoDB for data volumes of fifty million, one hundred million, one hundred and fifty million, and two hundred million records. The performance of ESTRI is approximately seven times higher than that of MongoDB. The experiments show that ESTRI is an effective method for retrieving mass spatio-temporal trajectory data. Finally, bus distribution mapping in Taiyuan city is achieved, describing the bus density in different regions at different times throughout the day, which can be applied in future studies of transport, such as traffic scheduling, traffic planning and traffic behavior management in intelligent public transportation systems.
Meng, Yuguang; Lei, Hao
2010-06-01
An efficient iterative gridding reconstruction method with correction of off-resonance artifacts was developed, which is especially tailored for multiple-shot non-Cartesian imaging. The novelty of the method lies in that the transformation matrix for gridding (T) is constructed as the convolution of two sparse matrices, of which the former is determined by the sampling interval and the spatial distribution of the off-resonance frequencies, and the latter by the sampling trajectory and the target grid in the Cartesian space. The resulting T matrix is also sparse and can be solved efficiently with the iterative conjugate gradient algorithm. It was shown that, with the proposed method, the reconstruction speed in multiple-shot non-Cartesian imaging can be improved significantly while retaining high reconstruction fidelity. More importantly, the proposed method allows a tradeoff between the accuracy and the computation time of reconstruction, making it possible to customize the use of such a method in different applications. The performance of the proposed method was demonstrated by numerical simulation and multiple-shot spiral imaging on rat brain at 4.7 T. (c) 2010 Wiley-Liss, Inc.
Samak, M. Mosleh E. Abu; Bakar, A. Ashrif A.; Kashif, Muhammad; Zan, Mohd Saiful Dzulkifly
2016-01-01
This paper discusses numerical analysis methods for different geometrical features that have limited interval values for typically used sensor wavelengths. Compared with existing Finite Difference Time Domain (FDTD) methods, the alternating direction implicit (ADI)-FDTD method reduces the number of sub-steps by a factor of two to three, which represents a 33% time saving in each single run. The local one-dimensional (LOD)-FDTD method has similar numerical equation properties, which are calculated as in the previous method. Generally, a small number of arithmetic processes, which results in a shorter simulation time, is desired. The alternating direction implicit technique can be considered a significant step forward for improving the efficiency of unconditionally stable FDTD schemes. This comparative study shows that the local one-dimensional method had minimum relative error ranges of less than 40% for analytical frequencies above 42.85 GHz, and the same accuracy was generated by both methods.
NASA Technical Reports Server (NTRS)
Kiris, Cetin; Kwak, Dochan
2001-01-01
Two numerical procedures, one based on the artificial compressibility method and the other on the pressure projection method, are outlined for obtaining time-accurate solutions of the incompressible Navier-Stokes equations. The performance of the two methods is compared by obtaining unsteady solutions for the evolution of twin vortices behind a flat plate. Calculated results are compared with experimental and other numerical results. For an unsteady flow which requires a small physical time step, the pressure projection method was found to be computationally efficient since it does not require any subiteration procedure. It was observed that the artificial compressibility method requires a fast convergence scheme at each physical time step in order to satisfy the incompressibility condition. This was obtained by using a GMRES-ILU(0) solver in our computations. When a line-relaxation scheme was used, the time accuracy was degraded and time-accurate computations became very expensive.
NASA Astrophysics Data System (ADS)
Fulton, J. W.; Bjerklie, D. M.; Jones, J. W.; Minear, J. T.
2015-12-01
Measuring streamflow and developing and maintaining rating curves at new streamgaging stations are both time-consuming and problematic. Hydro 21 was an initiative by the U.S. Geological Survey to provide vision and leadership in identifying and evaluating new technologies and methods that had the potential to change the way in which streamgaging is conducted. Since 2014, additional trials have been conducted to evaluate some of the methods promoted by the Hydro 21 Committee. Emerging technologies such as continuous-wave radars and computationally efficient methods such as the Probability Concept require significantly less field time, promote real-time velocity and streamflow measurements, and apply to unsteady flow conditions such as looped ratings and unsteady flood flows. Portable and fixed-mount radars have advanced beyond the development phase, are cost effective, and are readily available in the marketplace. The Probability Concept is based on an alternative velocity-distribution equation developed by C.-L. Chiu, who pioneered the concept. By measuring the surface-water velocity and correcting for environmental influences such as wind drift, radars offer a reliable alternative for measuring and computing real-time streamflow for a variety of hydraulic conditions. If successful, these tools may allow us to establish ratings more efficiently, assess unsteady flow conditions, and report real-time streamflow at new streamgaging stations.
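Chiu's entropy-based velocity distribution relates the section mean velocity to the maximum velocity through the entropy parameter M, via phi(M) = e^M / (e^M - 1) - 1/M. A minimal sketch of turning a radar-measured maximum surface velocity into a discharge estimate follows; the parameter values are illustrative assumptions, and the real workflow also corrects the surface velocity for wind drift.

```python
import math

def chiu_phi(m):
    """Ratio of mean to maximum velocity in Chiu's velocity
    distribution; m is the entropy parameter of the channel section,
    calibrated once per site."""
    return math.exp(m) / (math.exp(m) - 1.0) - 1.0 / m

def discharge(u_max, m, area):
    """Streamflow estimate Q = phi(M) * u_max * A from a single
    maximum velocity measurement and the cross-sectional area A.
    All inputs here are hypothetical example values."""
    return chiu_phi(m) * u_max * area
```

Because phi(M) is a fixed property of the cross section, a single non-contact velocity measurement suffices per reading, which is what makes the approach attractive for rapid real-time streamgaging.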
Chen, Zhongchuan Will; Kohan, Jessica; Perkins, Sherrie L.; Hussong, Jerry W.; Salama, Mohamed E.
2014-01-01
Background: Whole slide imaging (WSI) is widely used for education and research, but is increasingly being used to streamline clinical workflow. We present our experience with regard to satisfaction and time utilization using oil immersion WSI for presentation of blood/marrow aspirate smears, core biopsies, and tissue sections in hematology/oncology tumor board/treatment planning conferences (TPC). Methods: Lymph nodes and bone marrow core biopsies were scanned at ×20 magnification and blood/marrow smears at 83X under oil immersion and uploaded to an online library with areas of interest to be displayed annotated digitally via web browser. Pathologist time required to prepare slides for scanning was compared to that required to prepare for microscope projection (MP). Time required to present cases during TPC was also compared. A 10-point evaluation survey was used to assess clinician satisfaction with each presentation method. Results: There was no significant difference in hematopathologist preparation time between WSI and MP. However, presentation time was significantly less for WSI compared to MP as selection and annotation of slides was done prior to TPC with WSI, enabling more efficient use of TPC presentation time. Survey results showed a significant increase in satisfaction by clinical attendees with regard to image quality, efficiency of presentation of pertinent findings, aid in clinical decision-making, and overall satisfaction regarding pathology presentation. A majority of respondents also noted decreased motion sickness with WSI. Conclusions: Whole slide imaging, particularly with the ability to use oil scanning, provides higher quality images compared to MP and significantly increases clinician satisfaction. WSI streamlines preparation for TPC by permitting prior slide selection, resulting in greater efficiency during TPC presentation. PMID:25379347
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamashita, G.; Nagai, M., E-mail: mnagai@mp.es.osaka-u.ac.jp, E-mail: ashida@mp.es.osaka-u.ac.jp; Ashida, M., E-mail: mnagai@mp.es.osaka-u.ac.jp, E-mail: ashida@mp.es.osaka-u.ac.jp
We estimated the carrier multiplication efficiency in the most common solar-cell material, Si, by using optical-pump/terahertz-probe spectroscopy. Through close analysis of time-resolved data, we extracted the exact number of photoexcited carriers from the sheet carrier density 10 ps after photoexcitation, excluding the influences of spatial diffusion and surface recombination in the time domain. For incident photon energies greater than 4.0 eV, we observed enhanced internal quantum efficiency due to carrier multiplication. The evaluated value of internal quantum efficiency agrees well with the results of photocurrent measurements. This optical method allows us to estimate the carrier multiplication and surface recombination of carriers quantitatively, which are crucial for the design of solar cells.
NASA Astrophysics Data System (ADS)
Hosseini, Kamyar; Mayeli, Peyman; Ansari, Reza
2018-07-01
Finding the exact solutions of nonlinear fractional differential equations has gained considerable attention, during the past two decades. In this paper, the conformable time-fractional Klein-Gordon equations with quadratic and cubic nonlinearities are studied. Several exact soliton solutions, including the bright (non-topological) and singular soliton solutions are formally extracted by making use of the ansatz method. Results demonstrate that the method can efficiently handle the time-fractional Klein-Gordon equations with different nonlinearities.
A maintenance time prediction method considering ergonomics through virtual reality simulation.
Zhou, Dong; Zhou, Xin-Xin; Guo, Zi-Yue; Lv, Chuan
2016-01-01
Maintenance time is a critical quantitative index in maintainability prediction. An efficient maintenance time measurement methodology plays an important role in the early stage of maintainability design. However, the traditional way to measure maintenance time ignores the differences between line production and maintenance actions. This paper proposes a corrective MOD method that considers several important ergonomics factors to predict maintenance time. With the help of the DELMIA analysis tools, the influence coefficients of several factors are discussed to correct the MOD value, and designers can measure maintenance time by calculating the sum of the corrected MOD times of each maintenance therblig. Finally, a case study is introduced: by maintaining the virtual prototype of an APU motor starter in DELMIA, the designer obtains the actual maintenance time with the proposed method, and the result verifies the effectiveness and accuracy of the proposed method.
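The summation idea can be sketched as follows. The 0.129 s value per MOD is the standard MODAPTS convention; the structure of the correction (multiplying each therblig's MOD value by a product of ergonomics influence coefficients) follows the abstract, but the coefficient values themselves are hypothetical, not the paper's calibrated ones.

```python
MOD_UNIT_S = 0.129  # seconds per MOD (MODAPTS convention)

def maintenance_time(therblig_mods, factors):
    """Corrective-MOD sketch: total time = sum over therbligs of
    (product of ergonomics influence coefficients) * MOD value,
    converted to seconds via the MOD unit time.

    therblig_mods: MOD value of each maintenance therblig.
    factors: ergonomics influence coefficients (posture, visibility,
    accessibility, ...) applied uniformly here for simplicity.
    """
    k = 1.0
    for f in factors:
        k *= f  # combined correction coefficient
    total_mods = sum(k * mods for mods in therblig_mods)
    return total_mods * MOD_UNIT_S
```

With all coefficients equal to 1.0 the sketch reduces to the uncorrected MOD estimate, which is the baseline the paper improves on.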
Liu, Xu; Chen, Haiping; Xue, Chen
2018-01-01
Objectives Emergency medical systems for mass casualty incidents (EMS-MCIs) are a global issue. However, such studies are extremely scarce in China, which cannot meet the requirement for a rapid decision-support system. This study aims to model EMS-MCIs in Shanghai, to improve mass casualty incident (MCI) rescue efficiency in China, and to provide a possible method for making rapid rescue decisions during MCIs. Methods This study established a system dynamics (SD) model of EMS-MCIs using the Vensim DSS program. Intervention scenarios were designed as adjusting the scale of MCIs, the allocation of ambulances, the allocation of emergency medical staff, and the efficiency of organization and command. Results Mortality increased with the increasing scale of MCIs; the medical rescue capability of hospitals was relatively good, but the efficiency of organization and command was poor, and the prehospital time was too long. Mortality declined significantly when increasing ambulances and improving the efficiency of organization and command; triage and on-site first-aid times were shortened by increasing the availability of emergency medical staff. The effect was most evident when 2,000 people were involved in MCIs; however, the influence was very small at the scale of 5,000 people. Conclusion The keys to decreasing the mortality of MCIs were shortening the prehospital time and improving the efficiency of organization and command. For small-scale MCIs, improving the utilization rate of health resources was important in decreasing mortality. For large-scale MCIs, increasing the number of ambulances and emergency medical professionals was the core to decreasing prehospital time and mortality. For super-large-scale MCIs, increasing health resources was the premise. PMID:29440876
Unconventional Hamilton-type variational principle in phase space and symplectic algorithm
NASA Astrophysics Data System (ADS)
Luo, En; Huang, Weijiang; Zhang, Hexin
2003-06-01
Using a novel approach proposed by Luo, the unconventional Hamilton-type variational principle in phase space for the elastodynamics of multi-degree-of-freedom systems is established in this paper. It not only fully characterizes the initial-value problem of these dynamics, but also has a natural symplectic structure. Based on this variational principle, a symplectic algorithm, called the symplectic time-subdomain method, is proposed. A non-difference scheme is constructed by applying Lagrange interpolation polynomials to the time subdomain. Furthermore, it is proved that the presented symplectic algorithm is unconditionally stable. The results of two numerical examples of different types show that the accuracy and computational efficiency of the new method clearly exceed those of the widely used Wilson-θ and Newmark-β methods. Therefore, this new algorithm is a highly efficient one with better computational performance.
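The practical benefit of a symplectic scheme can be seen in a minimal stand-in: semi-implicit (symplectic) Euler for a single-degree-of-freedom oscillator, whose energy stays bounded over long runs instead of drifting as it does for non-symplectic explicit schemes. This is only an illustrative toy, not the paper's time-subdomain method.

```python
def symplectic_euler(q, p, omega, dt, steps):
    """Semi-implicit Euler for the oscillator q'' = -omega**2 * q:
    update the momentum using the current position, then the position
    using the NEW momentum. This one-sided ordering is what makes the
    map symplectic, so the energy error stays O(dt) forever rather
    than accumulating."""
    for _ in range(steps):
        p -= omega ** 2 * q * dt  # kick with current position
        q += p * dt               # drift with updated momentum
    return q, p
```

Running many periods and checking that the energy 0.5*(p^2 + omega^2 q^2) stays near its initial value is the standard sanity check for such integrators.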
NASA Astrophysics Data System (ADS)
Han, Yishi; Luo, Zhixiao; Wang, Jianhua; Min, Zhixuan; Qin, Xinyu; Sun, Yunlong
2014-09-01
In general, context-based adaptive variable length coding (CAVLC) decoding in the H.264/AVC standard requires frequent access to unstructured variable length coding tables (VLCTs), consuming significant memory bandwidth. Heavy memory access causes high power consumption and time delays, which are serious problems for applications in portable multimedia devices. We propose a method for high-efficiency CAVLC decoding that uses a program instead of the VLCTs. The decoded codeword can be obtained without any table look-up or memory access. The experimental results show that the proposed algorithm achieves 100% memory access saving and 40% decoding time saving without degrading video quality. Additionally, the proposed algorithm shows better performance than conventional CAVLC decoding approaches, such as table look-up by sequential search, table look-up by binary search, Moon's method, and Kim's method.
NASA Astrophysics Data System (ADS)
Song, Wanjun; Zhang, Hou
2017-11-01
By introducing the alternating direction implicit (ADI) technique and a memory-optimized algorithm into the shift operator (SO) finite difference time domain (FDTD) method, a memory-optimized SO-ADI FDTD method for nonmagnetized collisional plasma is proposed, and the corresponding formulae for programming are deduced. To further improve computational efficiency, an iterative method rather than Gaussian elimination is employed to solve the equation set in the derivation of the formulae. Complicated transformations and convolutions are avoided in the proposed method compared with the Z-transform (ZT) ADI FDTD method and the piecewise linear JE recursive convolution (PLJERC) ADI FDTD method. The numerical dispersion of the SO-ADI FDTD method with different plasma frequencies and electron collision frequencies is analyzed and an appropriate ratio of grid size to the minimum wavelength is given. The accuracy of the proposed method is validated by a reflection coefficient test on a nonmagnetized collisional plasma sheet. The testing results show that the proposed method is advantageous for improving computational efficiency and saving computer memory. The reflection coefficient of a perfect electric conductor (PEC) sheet covered by multilayer plasma and the RCS of objects coated by plasma are calculated by the proposed method and the simulation results are analyzed.
Zhao, Peng; Zhao, Hongping
2012-09-10
The enhancement of light extraction efficiency for thin-film flip-chip (TFFC) InGaN quantum well (QW) light-emitting diodes (LEDs) with GaN micro-domes on the n-GaN layer was studied. The light extraction efficiency of TFFC InGaN QW LEDs with GaN micro-domes was calculated and compared to that of conventional TFFC InGaN QW LEDs with a flat surface. The three-dimensional finite difference time domain (3D-FDTD) method was used to calculate the light extraction efficiency for InGaN QW LEDs emitting at 460 nm and 550 nm, respectively. The effects of the GaN micro-dome feature size and the p-GaN layer thickness on the light extraction efficiency were studied systematically. The studies indicate that the p-GaN layer thickness is critical for optimizing the TFFC LED light extraction efficiency. Significant enhancement of the light extraction efficiency (2.5-2.7 times for λ(peak) = 460 nm and 2.7-2.8 times for λ(peak) = 550 nm) is achievable from TFFC InGaN QW LEDs with optimized GaN micro-dome diameter and height.
NASA Astrophysics Data System (ADS)
Tumakov, Dmitry A.; Telnov, Dmitry A.; Maltsev, Ilia A.; Plunien, Günter; Shabaev, Vladimir M.
2017-10-01
We develop an efficient numerical implementation of the relativistic time-dependent density functional theory (RTDDFT) to study multielectron highly-charged ions subject to intense linearly-polarized laser fields. The interaction with the electromagnetic field is described within the electric dipole approximation. The resulting time-dependent relativistic Kohn-Sham (RKS) equations possess an axial symmetry and are solved accurately and efficiently with the help of the time-dependent generalized pseudospectral method. As a case study, we calculate multiphoton ionization probabilities of the neutral argon atom and argon-like xenon ion. Relativistic effects are assessed by comparison of our present results with existing non-relativistic data.
Study of vesicle size distribution dependence on pH value based on nanopore resistive pulse method
NASA Astrophysics Data System (ADS)
Lin, Yuqing; Rudzevich, Yauheni; Wearne, Adam; Lumpkin, Daniel; Morales, Joselyn; Nemec, Kathleen; Tatulian, Suren; Lupan, Oleg; Chow, Lee
2013-03-01
Vesicles are low-micron to sub-micron spheres formed by a lipid bilayer shell and serve as potential vehicles for drug delivery. Vesicle size is proposed to be one of the instrumental variables affecting delivery efficiency, since size is correlated with factors like circulation and residence time in blood, the rate of cell endocytosis, and efficiency in cell targeting. In this work, we demonstrate accessible and reliable detection and size distribution measurement employing a glass nanopore device based on the resistive pulse method. This novel method enables us to investigate how the size distribution depends on the pH difference across the vesicle membrane, using a very small sample volume and at rapid speed. This provides useful information for optimizing the efficiency of drug delivery in a pH-sensitive environment.
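The sizing principle behind resistive pulse (Coulter-type) sensing is that, in the small-particle limit, the relative current blockade scales with the ratio of particle volume to the pore's sensing volume, delta_I/I ≈ d^3 / (D^2 L). A minimal inversion sketch follows; the pore geometry numbers in the test are illustrative assumptions, and real devices apply a calibrated shape factor.

```python
def particle_diameter(delta_i_over_i, pore_diameter, pore_length):
    """Invert the small-particle resistive-pulse relation
    delta_I / I ≈ d**3 / (D**2 * L) to get the particle diameter d
    from the measured relative current blockade. All lengths must be
    in the same unit (e.g. nanometres)."""
    return (delta_i_over_i * pore_diameter ** 2 * pore_length) ** (1.0 / 3.0)
```

Histogramming the diameters recovered from many pulses gives the size distribution that the abstract compares across pH conditions.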
NASA Astrophysics Data System (ADS)
Hino, Hisato; Hoshino, Satoshi; Fujisawa, Tomoharu; Maruyama, Shigehisa; Ota, Jun
Currently, container ships move cargo with minimal participation from external trucks. However, there is slack time between the completion of cargo handling performed without the participation of external trucks and the departure of the container ship; therefore, external trucks can be used to move cargo without delaying the departure time. In this paper, we propose a solution involving the control algorithms of transfer cranes (TCs), because the efficiency of yard operations depends largely on the productivity of TCs. The TCs work according to heuristic rules using the forecasted arrival times of internal and external trucks. Simulation results show that the proposed method can reduce the waiting time of external trucks and meet the departure time of container ships.
Modified microplate method for rapid and efficient estimation of siderophore produced by bacteria.
Arora, Naveen Kumar; Verma, Maya
2017-12-01
In this study, siderophore production by various plant-growth-promoting rhizobacteria was quantified by a rapid and efficient method. In total, 23 siderophore-producing bacterial isolates/strains were tested to estimate their siderophore-producing ability by the standard method (chrome azurol sulphonate assay) as well as a 96-well microplate method. Siderophore production was estimated in percent siderophore units by both methods. Data obtained by the two methods correlated positively with each other, supporting the validity of the microplate method. With the modified microplate method, siderophore production by several bacterial strains can be estimated both qualitatively and quantitatively in one go, saving time and chemicals and making the assay less tedious and cheaper than the method currently in use. The modified microtiter plate method proposed here makes it far easier to screen plant-associated bacteria for this plant-growth-promoting character.
NASA Astrophysics Data System (ADS)
Wilde-Piorko, M.; Polkowski, M.
2016-12-01
Seismic travel time calculation is the most common numerical operation in seismology. Calculation in a 1D velocity model is the most efficient: for a given source depth, receiver depth, and angular distance, the travel time is computed within a fraction of a second. Unfortunately, in most cases a 1D model is not sufficient to capture differentiating local and regional structures, and whenever possible travel times must be calculated through a 3D velocity model. This can be achieved by ray tracing or by propagating times through space. While a single ray path is quick to calculate, finding the ray path that connects a source with a receiver is complicated. Time propagation in space using the Fast Marching Method is more efficient in most cases, especially when there are multiple receivers. In this presentation the final release of the Python module pySeismicFMM is presented: a simple and very efficient tool for calculating travel times from sources to receivers. The calculation requires a regular 2D or 3D velocity grid in either Cartesian or geographic coordinates. On a desktop-class computer, the calculation speed is 200k grid cells per second. The calculation has to be performed once for every source location and provides travel times to all receivers. pySeismicFMM is free and open source. Development of this tool is part of the author's PhD thesis. The source code of pySeismicFMM will be published before the Fall Meeting. The National Science Centre Poland provided financial support for this work via NCN grant DEC-2011/02/A/ST10/00284.
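Time propagation in space, as implemented by fast-marching codes such as pySeismicFMM, can be illustrated with a minimal pure-Python 2D solver. This is a sketch only: the pySeismicFMM source was not yet published at the time of the abstract, and a production implementation would be compiled and vastly faster.

```python
import heapq
import math

def fmm_travel_time(speed, h=1.0, src=(0, 0)):
    """First-order fast marching on a 2D grid: solves |grad T| = 1/speed.
    speed: 2D list of positive wave speeds; h: grid spacing; src: source cell.
    Returns a 2D list of travel times (a sketch of time propagation in space,
    not the pySeismicFMM implementation)."""
    ny, nx = len(speed), len(speed[0])
    INF = float("inf")
    T = [[INF] * nx for _ in range(ny)]
    done = [[False] * nx for _ in range(ny)]
    T[src[0]][src[1]] = 0.0
    heap = [(0.0, src[0], src[1])]
    while heap:
        t, i, j = heapq.heappop(heap)
        if done[i][j]:
            continue
        done[i][j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < ny and 0 <= b < nx and not done[a][b]:
                # smallest known upwind neighbours in x and y
                tx = min(T[a][b2] for b2 in (b - 1, b + 1) if 0 <= b2 < nx)
                ty = min(T[a2][b] for a2 in (a - 1, a + 1) if 0 <= a2 < ny)
                f = h / speed[a][b]
                lo, hi = min(tx, ty), max(tx, ty)
                if hi == INF or hi - lo >= f:    # one-sided update
                    t_new = lo + f
                else:                            # two-sided quadratic update
                    t_new = 0.5 * (lo + hi + math.sqrt(2 * f * f - (hi - lo) ** 2))
                if t_new < T[a][b]:
                    T[a][b] = t_new
                    heapq.heappush(heap, (t_new, a, b))
    return T
```

For a uniform unit-speed medium, times along a grid axis are exact, while diagonal times carry the well-known first-order overestimate that finer grids reduce.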
Processing time tolerance-based ACO algorithm for solving job-shop scheduling problem
NASA Astrophysics Data System (ADS)
Luo, Yabo; Waden, Yongo P.
2017-06-01
The Job Shop Scheduling Problem (JSSP) is an NP-hard problem whose uncertainty and complexity cannot be handled by linear methods, so current studies concentrate mainly on improving heuristics for optimizing the JSSP. However, many obstacles to efficient optimization remain, namely low efficiency and poor reliability, which can easily trap the optimization process in local optima. To address this problem, this paper studies an Ant Colony Optimization (ACO) algorithm combined with constraint-handling tactics. The problem is subdivided into three parts: (1) analysis of processing time tolerance-based constraint features in the JSSP, performed with a constraint-satisfaction model; (2) satisfaction of the constraints using consistency technology and a constraint-spreading algorithm to improve the performance of the ACO algorithm, from which the JSSP model based on the improved ACO algorithm is constructed; and (3) demonstration of the effectiveness of the proposed method, in terms of reliability and efficiency, through comparative experiments on benchmark problems. The results obtained by the proposed method are better, and the technique can be applied to optimizing the JSSP.
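The pheromone mechanics at the heart of ACO can be sketched on a toy traveling-salesman instance, chosen only because it is compact; the paper's method additionally embeds constraint-satisfaction and constraint-spreading machinery specific to the JSSP, which is not reproduced here.

```python
import math
import random

def aco_tour(points, n_ants=20, n_iter=30, alpha=1.0, beta=2.0, rho=0.5, seed=1):
    """Minimal Ant Colony Optimization on a toy TSP instance, illustrating
    only the pheromone-update mechanics (tour construction, evaporation,
    quality-proportional deposit)."""
    rng = random.Random(seed)
    n = len(points)
    dist = [[math.dist(p, q) for q in points] for p in points]
    tau = [[1.0] * n for _ in range(n)]          # pheromone trails
    best_tour, best_len = None, float("inf")
    for _ in range(n_iter):
        tours = []
        for _ant in range(n_ants):
            tour = [rng.randrange(n)]
            while len(tour) < n:
                i = tour[-1]
                cand = [j for j in range(n) if j not in tour]
                # transition probability ~ tau^alpha * (1/d)^beta
                w = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta for j in cand]
                tour.append(rng.choices(cand, weights=w)[0])
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((length, tour))
            if length < best_len:
                best_len, best_tour = length, tour
        # evaporation, then deposit proportional to tour quality
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1.0 - rho)
        for length, tour in tours:
            for k in range(n):
                i, j = tour[k], tour[(k + 1) % n]
                tau[i][j] += 1.0 / length
                tau[j][i] += 1.0 / length
    return best_tour, best_len
```

On the four corners of a unit square, the colony quickly settles on the perimeter tour of length 4.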
Jeong, Kyung Min; Zhao, Jing; Jin, Yan; Heo, Seong Rok; Han, Se Young; Yoo, Da Eun; Lee, Jeongmi
2015-12-01
Deep eutectic solvents (DESs) were investigated as tunable, environmentally benign, yet superior extraction media to enhance the extraction of anthocyanins from grape skin, which is usually discarded as waste. Ten DESs containing choline chloride as hydrogen bond acceptor combined with different hydrogen bond donors were screened for high extraction efficiencies based on the anthocyanin extraction yields. As a result, citric acid, D-(+)-maltose, and fructose were selected as the effective DES components, and the newly designed DES, CM-6, composed of citric acid and D-(+)-maltose at a 4:1 molar ratio, exhibited significantly higher anthocyanin extraction yields than conventional extraction solvents such as 80% aqueous methanol. The final extraction method was established based on ultrasound-assisted extraction under conditions optimized using response surface methodology. Its extraction yields were at least double those of conventional methods, which are time-consuming and use volatile organic solvents. Ours is truly a green method for anthocyanin extraction with great extraction efficiency using a minimal amount of time and solvent. Moreover, this study suggests that grape skin, a by-product of grape juice processing, could serve as a valuable source of safe, natural colorants or antioxidants through the use of the eco-friendly extraction solvent CM-6.
A multigrid nonoscillatory method for computing high speed flows
NASA Technical Reports Server (NTRS)
Li, C. P.; Shieh, T. H.
1993-01-01
A multigrid method using different smoothers has been developed to solve the Euler equations discretized by a nonoscillatory scheme of up to fourth order accuracy. The best smoothing property is provided by a five-stage Runge-Kutta technique with optimized coefficients, yet the most efficient smoother is a backward Euler technique in factored and diagonalized form. The single-grid solution for a hypersonic, viscous conic flow is in excellent agreement with the solution obtained by the third order MUSCL and Roe's method. Mach 8 inviscid flow computations for a complete entry probe have shown that the accuracy is at least as good as that of the symmetric TVD scheme of Yee and Harten. The implicit multigrid method is four times more efficient than the explicit multigrid technique and 3.5 times faster than the single-grid implicit technique. For a Mach 8.7 inviscid flow over a blunt delta wing at 30 deg incidence, the CPU reduction factor from the three-level multigrid computation is 2.2 on a grid of 37 x 41 x 73 nodes.
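The structure of a multigrid solver (smooth, restrict the residual, recurse, prolong the correction) can be sketched for the 1D Poisson model problem. This is a structural illustration only: the paper's smoothers are Runge-Kutta and backward Euler schemes for the Euler equations, whereas weighted Jacobi is used here for brevity.

```python
def v_cycle(u, f, h, n_smooth=3, omega=2.0 / 3.0):
    """One multigrid V-cycle for -u'' = f on (0,1) with zero Dirichlet BCs.
    u, f hold interior values only (len = 2^k - 1). Weighted Jacobi is the
    smoother; a structural sketch only, not the paper's Euler-equation solver."""
    n = len(u)

    def smooth(u, f):
        for _ in range(n_smooth):
            un = u[:]
            for i in range(n):
                left = u[i - 1] if i > 0 else 0.0
                right = u[i + 1] if i < n - 1 else 0.0
                un[i] = (1 - omega) * u[i] + omega * 0.5 * (left + right + h * h * f[i])
            u[:] = un
        return u

    if n == 1:                                   # coarsest grid: solve exactly
        u[0] = 0.5 * h * h * f[0]
        return u
    u = smooth(u, f)
    # residual of the discrete operator
    r = []
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        r.append(f[i] - (2 * u[i] - left - right) / (h * h))
    # full-weighting restriction to the coarse grid
    rc = [0.25 * r[2 * i] + 0.5 * r[2 * i + 1] + 0.25 * r[2 * i + 2]
          for i in range((n - 1) // 2)]
    ec = v_cycle([0.0] * len(rc), rc, 2 * h)
    # linear-interpolation prolongation and coarse-grid correction
    for i, e in enumerate(ec):
        u[2 * i + 1] += e
        u[2 * i] += 0.5 * e
        u[2 * i + 2] += 0.5 * e
    return smooth(u, f)
```

The characteristic multigrid property is that the residual drops by a grid-independent factor per cycle, which is what makes the implicit multigrid variants in the paper so much faster than single-grid iteration.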
Martin, Guillaume; Magne, Marie-Angélina; Cristobal, Magali San
2017-01-01
The need to adapt to decrease farm vulnerability to adverse contextual events has been extensively discussed on a theoretical basis. We developed an integrated and operational method to assess farm vulnerability to multiple and interacting contextual changes and explain how this vulnerability can best be reduced according to farm configurations and farmers' technical adaptations over time. Our method considers farm vulnerability as a function of the raw measurements of vulnerability variables (e.g., economic efficiency of production), the slope of the linear regression of these measurements over time, and the residuals of this linear regression. The last two are extracted from linear mixed models considering a random regression coefficient (an intercept common to all farms), a global trend (a slope common to all farms), a random deviation from the general mean for each farm, and a random deviation from the general trend for each farm. Among all possible combinations, the lowest farm vulnerability is obtained through a combination of high values of measurements, a stable or increasing trend and low variability for all vulnerability variables considered. Our method enables relating the measurements, trends and residuals of vulnerability variables to explanatory variables that illustrate farm exposure to climatic and economic variability, initial farm configurations and farmers' technical adaptations over time. We applied our method to 19 cattle (beef, dairy, and mixed) farms over the period 2008-2013. Selected vulnerability variables, i.e., farm productivity and economic efficiency, varied greatly among cattle farms and across years, with means ranging from 43.0 to 270.0 kg protein/ha and 29.4-66.0% efficiency, respectively. No farm had a high level, stable or increasing trend and low residuals for both farm productivity and economic efficiency of production. 
Thus, the least vulnerable farms represented a compromise among measurement value, trend, and variability of both performances. No specific combination of farmers' practices emerged for reducing cattle farm vulnerability to climatic and economic variability. In the least vulnerable farms, the practices implemented (stocking rate, input use…) were more consistent with the objective of developing the properties targeted (efficiency, robustness…). Our method can be used to support farmers with sector-specific and local insights about most promising farm adaptations.
PMID:28900435
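The three ingredients of the vulnerability assessment (level, trend, and variability of a performance variable) can be sketched per farm with ordinary least squares. This is a simplified stand-in for the paper's linear mixed models, which additionally share information across farms through random intercepts and slopes.

```python
def trend_and_residuals(years, values):
    """Per-farm ordinary least squares of a vulnerability variable on time:
    returns (mean level, slope, residual standard deviation). A simplified
    stand-in for the paper's linear mixed models."""
    n = len(years)
    my = sum(years) / n
    mv = sum(values) / n
    sxx = sum((y - my) ** 2 for y in years)
    sxy = sum((y - my) * (v - mv) for y, v in zip(years, values))
    slope = sxy / sxx                     # trend over time
    resid = [v - (mv + slope * (y - my)) for y, v in zip(years, values)]
    sd = (sum(r * r for r in resid) / (n - 2)) ** 0.5   # variability
    return mv, slope, sd
```

In the paper's terms, low vulnerability corresponds to a high mean level, a stable or positive slope, and a small residual standard deviation across all variables considered.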
NASA Astrophysics Data System (ADS)
Ishigaki, Tsukasa; Yamamoto, Yoshinobu; Nakamura, Yoshiyuki; Akamatsu, Motoyuki
At many hospitals, patients who receive a health service from a doctor must wait a long time; according to patient questionnaires, the long waiting time is the worst factor in patients' dissatisfaction with hospital service. The present paper describes a method for estimating the waiting time of each patient without an electronic medical chart system. The method applies a portable RFID system to data acquisition and robust estimation of the probability distribution of doctors' consultation and examination times for highly accurate waiting time estimation. We carried out data acquisition during health service delivery at a real hospital and verified the efficiency of the proposed method. The proposed system can be widely used for data acquisition in various fields such as marketing, entertainment, or human behavior measurement.
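One simple way to turn logged consultation times into a waiting-time estimate is to resample the empirical service-time distribution for the patients ahead in the queue. This is a hedged sketch only; the paper's robust distribution estimation from RFID data is more elaborate.

```python
import random

def estimate_wait(queue_ahead, service_samples, n_sim=2000, seed=0):
    """Monte Carlo waiting-time estimate: draw one consultation time per
    patient ahead from empirically observed samples (here, the RFID-derived
    service times would be used) and average the totals."""
    rng = random.Random(seed)
    totals = [sum(rng.choice(service_samples) for _ in range(queue_ahead))
              for _ in range(n_sim)]
    return sum(totals) / n_sim
```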
Efficient Learning of Continuous-Time Hidden Markov Models for Disease Progression
Liu, Yu-Ying; Li, Shuang; Li, Fuxin; Song, Le; Rehg, James M.
2016-01-01
The Continuous-Time Hidden Markov Model (CT-HMM) is an attractive approach to modeling disease progression due to its ability to describe noisy observations arriving irregularly in time. However, the lack of an efficient parameter learning algorithm for CT-HMM restricts its use to very small models or requires unrealistic constraints on the state transitions. In this paper, we present the first complete characterization of efficient EM-based learning methods for CT-HMM models. We demonstrate that the learning problem consists of two challenges: the estimation of posterior state probabilities and the computation of end-state conditioned statistics. We solve the first challenge by reformulating the estimation problem in terms of an equivalent discrete time-inhomogeneous hidden Markov model. The second challenge is addressed by adapting three approaches from the continuous time Markov chain literature to the CT-HMM domain. We demonstrate the use of CT-HMMs with more than 100 states to visualize and predict disease progression using a glaucoma dataset and an Alzheimer’s disease dataset. PMID:27019571
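The discrete-time reformulation rests on the fact that a CT-HMM generator matrix Q yields transition probabilities P(t) = exp(Qt) over any interval t. A minimal scaling-and-squaring matrix exponential for small dense generators is sketched below; production code would use a Padé-based expm rather than a plain Taylor series.

```python
def expm(Q, t=1.0, n_terms=24, n_square=10):
    """Matrix exponential P(t) = exp(Q t) by scaling and squaring with a
    Taylor series, for small dense matrices. For a CT-HMM, Q is the generator
    of the continuous-time Markov chain and P(t) gives state-transition
    probabilities over an interval t."""
    n = len(Q)
    # scale so the series converges rapidly
    A = [[Q[i][j] * t / (2 ** n_square) for j in range(n)] for i in range(n)]

    def matmul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]

    P = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    term = [row[:] for row in P]
    for k in range(1, n_terms):
        term = [[v / k for v in row] for row in matmul(term, A)]
        P = [[P[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    for _ in range(n_square):                    # undo the scaling
        P = matmul(P, P)
    return P
```

Each row of the resulting P(t) sums to one, as required of a stochastic transition matrix derived from a generator.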
Efficient detection of a CW signal with a linear frequency drift
NASA Technical Reports Server (NTRS)
Swarztrauber, Paul N.; Bailey, David H.
1989-01-01
An efficient method is presented for the detection of a continuous wave (CW) signal with a frequency drift that is linear in time. Signals of this type occur in transmissions between any two locations that are accelerating relative to one another, e.g., transmissions from the Voyager spacecraft. We assume that both the frequency and the drift are unknown, and that the signal is weak compared to the Gaussian noise. The signal is partitioned into subsequences whose discrete Fourier transforms provide a sequence of instantaneous spectra at equal time intervals. These spectra are then accumulated with a shift that is proportional to time. When the shift equals the frequency drift, the signal-to-noise ratio increases and detection occurs. Here, we show how to compute these accumulations for many shifts in an efficient manner using a variety of fast Fourier transform (FFT) techniques. Computing time is proportional to L log L, where L is the length of the time series.
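The shift-and-accumulate idea can be sketched directly. Naive DFTs stand in for the FFTs of the paper, and noise is omitted for clarity; note that the spectrum of a real-valued tone also has a mirrored peak, so the detected drift may appear with opposite sign at the mirrored bin.

```python
import cmath
import math

def power_spectrum(x):
    """Naive O(N^2) DFT power spectrum (an FFT would be used in practice)."""
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N))) ** 2 for k in range(N)]

def detect_drift(segments):
    """Accumulate per-segment spectra with a shift proportional to time; the
    (shift, bin) pair with the largest accumulated power estimates the
    frequency drift per segment and the starting frequency bin."""
    L = len(segments[0])
    spectra = [power_spectrum(s) for s in segments]
    best = (-1.0, 0, 0)
    for shift in range(-L // 2, L // 2):
        for k in range(L):
            acc = sum(P[(k + shift * m) % L] for m, P in enumerate(spectra))
            if acc > best[0]:
                best = (acc, shift, k)
    return best[1], best[2]   # (drift in bins per segment, starting bin)
```

For a tone whose frequency increases by exactly one bin per segment, the accumulation peaks at shift 1 (or at shift -1 on the mirrored bin of the real spectrum).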
Algorithms for Efficient Computation of Transfer Functions for Large Order Flexible Systems
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Giesy, Daniel P.
1998-01-01
An efficient and robust computational scheme is given for the calculation of the frequency response function of a large order, flexible system implemented with a linear, time-invariant control system. Advantage is taken of the highly structured sparsity of the system matrix of the plant, based on a model of the structure using normal mode coordinates. The computational time per frequency point of the new scheme is a linear function of system size, a significant improvement over traditional full-matrix techniques, whose computational times per frequency point range from quadratic to cubic functions of system size. This permits the practical frequency domain analysis of systems of much larger order than by traditional full-matrix techniques. Formulations are given for both open- and closed-loop systems. Numerical examples are presented showing the advantages of the present formulation over traditional approaches, both in speed and in accuracy. Using a model with 703 structural modes, the present method was up to two orders of magnitude faster than a traditional method. The present method generally showed good to excellent accuracy throughout the range of test frequencies, while traditional methods gave adequate accuracy for lower frequencies but generally deteriorated at higher frequencies, with worst-case errors many orders of magnitude larger than the correct values.
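The source of the linear-in-size cost can be illustrated: in normal-mode coordinates the plant dynamics are (block-)diagonal, so the frequency response is a sum of independent second-order modal terms rather than a full-matrix solve per frequency point. The modal data below are hypothetical, and closed-loop coupling is omitted.

```python
def modal_frf(omegas, freqs_nat, zetas, gains):
    """Frequency response of a structure in normal-mode coordinates:
    H(w) = sum_i g_i / (w_i^2 - w^2 + 2j z_i w_i w), costing O(n_modes)
    per frequency point instead of a dense linear solve. The modal
    frequencies, damping ratios, and gains here are hypothetical."""
    H = []
    for w in omegas:
        h = sum(g / complex(wn * wn - w * w, 2 * z * wn * w)
                for wn, z, g in zip(freqs_nat, zetas, gains))
        H.append(h)
    return H
```

For a lightly damped mode the magnitude peaks essentially at the natural frequency, which provides a quick sanity check of the evaluation.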
An efficient photogrammetric stereo matching method for high-resolution images
NASA Astrophysics Data System (ADS)
Li, Yingsong; Zheng, Shunyi; Wang, Xiaonan; Ma, Hao
2016-12-01
Stereo matching of high-resolution images is a great challenge in photogrammetry. The main difficulty is the enormous processing workload, which involves substantial computing time and memory consumption. In recent years, the semi-global matching (SGM) method has been a promising approach for solving stereo problems on different data sets. However, the time complexity and memory demand of SGM are proportional to the scale of the images involved, which leads to very high consumption when dealing with large images. To address this, this paper presents an efficient hierarchical matching strategy based on the SGM algorithm using single instruction multiple data (SIMD) instructions and structured parallelism in the central processing unit. The proposed method can significantly reduce the computational time and memory required for large-scale stereo matching. The three-dimensional (3D) surface is reconstructed by triangulating and fusing redundant reconstruction information from multi-view matching results. Finally, three high-resolution aerial data sets are used to evaluate our improvement, and precise airborne laser scanner data from one data set are used to measure the accuracy of our reconstruction. Experimental results demonstrate that our method achieves remarkable time and memory savings while maintaining the density and precision of the derived 3D point cloud.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tavakoli, Rouhollah, E-mail: rtavakoli@sharif.ir
An unconditionally energy stable time stepping scheme is introduced to solve Cahn–Morral-like equations in the present study. It is constructed based on the combination of David Eyre's time stepping scheme and a Schur complement approach. Although the presented method is general and independent of the choice of the homogeneous free energy density function term, logarithmic and polynomial energy functions are specifically considered in this paper. The method is applied to study spinodal decomposition in multi-component systems and optimal space tiling problems. A penalization strategy is developed, in the case of the latter problem, to avoid trivial solutions. Extensive numerical experiments demonstrate the success and performance of the presented method. According to the numerical results, the method is convergent and energy stable, independent of the choice of time stepsize. Its MATLAB implementation is included in the appendix for the numerical evaluation of the algorithm and reproduction of the presented results. -- Highlights: •Extension of Eyre's convex–concave splitting scheme to multiphase systems. •Efficient solution of spinodal decomposition in multi-component systems. •Efficient solution of the least perimeter periodic space partitioning problem. •Development of a penalization strategy to avoid trivial solutions. •Presentation of a MATLAB implementation of the introduced algorithm.
Park, Eunyoung; Lee, Cheonghoon; Bisesi, Michael; Lee, Jiyoung
2014-03-01
The disinfection efficiency of peracetic acid (PAA) was investigated on three microbial types using three different methods (filtration-based ATP (adenosine-triphosphate) bioluminescence, quantitative polymerase chain reaction (qPCR), culture-based method). Fecal indicator bacteria (Enterococcus faecium), virus indicator (male-specific (F(+)) coliphages (coliphages)), and protozoa disinfection surrogate (Bacillus subtilis spores (spores)) were tested. The mode of action for spore disinfection was visualized using scanning electron microscopy. The results indicated that PAA concentrations of 5 ppm (contact time: 5 min), 50 ppm (10 min), and 3,000 ppm (5 min) were needed to achieve 3-log reduction of E. faecium, coliphages, and spores, respectively. Scanning electron microscopy observation showed that PAA targets the external layers of spores. The lower reduction rates of tested microbes measured with qPCR suggest that qPCR may overestimate the surviving microbes. Collectively, PAA showed broad disinfection efficiency (susceptibility: E. faecium > coliphages > spores). For E. faecium and spores, ATP bioluminescence was substantially faster (∼5 min) than culture-based method (>24 h) and qPCR (2-3 h). This study suggests PAA as an effective alternative to inactivate broad types of microbial contaminants in water. Together with the use of rapid detection methods, this approach can be useful for urgent situations when timely response is needed for ensuring water quality.
Cost-of-illness studies based on massive data: a prevalence-based, top-down regression approach.
Stollenwerk, Björn; Welchowski, Thomas; Vogl, Matthias; Stock, Stephanie
2016-04-01
Despite the increasing availability of routine data, no analysis method has yet been presented for cost-of-illness (COI) studies based on massive data. We aim, first, to present such a method and, second, to assess the relevance of the associated gain in numerical efficiency. We propose a prevalence-based, top-down regression approach consisting of five steps: aggregating the data; fitting a generalized additive model (GAM); predicting costs via the fitted GAM; comparing predicted costs between prevalent and non-prevalent subjects; and quantifying the stochastic uncertainty via error propagation. To demonstrate the method, it was applied, in the context of chronic lung disease, to aggregated German sickness fund data (from 1999) covering over 7.3 million insured. To assess the gain in numerical efficiency, the computational time of the innovative approach was compared with that of corresponding GAMs applied to simulated individual-level data. Furthermore, the probability of model failure was modeled via logistic regression. Applying the innovative method was reasonably fast (19 min). In contrast, with patient-level data, computational time increased disproportionately with sample size. Furthermore, using patient-level data was accompanied by a substantial risk of model failure (about 80 % for 6 million subjects). The gain in computational efficiency of the innovative COI method thus seems to be of practical relevance, and it may yield more precise cost estimates.
Chen, Siyuan; Epps, Julien
2014-12-01
Monitoring pupil and blink dynamics has applications in cognitive load measurement during human-machine interaction. However, accurate, efficient, and robust pupil size and blink estimation pose significant challenges to the efficacy of real-time applications due to the variability of eye images, hence to date, require manual intervention for fine tuning of parameters. In this paper, a novel self-tuning threshold method, which is applicable to any infrared-illuminated eye images without a tuning parameter, is proposed for segmenting the pupil from the background images recorded by a low cost webcam placed near the eye. A convex hull and a dual-ellipse fitting method are also proposed to select pupil boundary points and to detect the eyelid occlusion state. Experimental results on a realistic video dataset show that the measurement accuracy using the proposed methods is higher than that of widely used manually tuned parameter methods or fixed parameter methods. Importantly, it demonstrates convenience and robustness for an accurate and fast estimate of eye activity in the presence of variations due to different users, task types, load, and environments. Cognitive load measurement in human-machine interaction can benefit from this computationally efficient implementation without requiring a threshold calibration beforehand. Thus, one can envisage a mini IR camera embedded in a lightweight glasses frame, like Google Glass, for convenient applications of real-time adaptive aiding and task management in the future.
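A well-known example of a parameter-free, self-tuning threshold is Otsu's method, which picks the threshold maximizing the between-class variance of the intensity histogram. It is shown here only as a familiar stand-in; the authors' self-tuning criterion for pupil segmentation differs in detail.

```python
def otsu_threshold(values, n_bins=256):
    """Parameter-free threshold by maximizing between-class variance
    (Otsu's method) -- a familiar stand-in for the paper's self-tuning
    threshold, not the authors' algorithm."""
    lo, hi = min(values), max(values)
    hist = [0] * n_bins
    scale = (n_bins - 1) / (hi - lo)
    for v in values:
        hist[int((v - lo) * scale)] += 1
    total = len(values)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w0 = sum0 = 0
    best_t, best_var = 0, -1.0
    for t in range(n_bins):
        w0 += hist[t]
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        m0 = sum0 / w0                      # mean of the lower class
        m1 = (sum_all - sum0) / (total - w0)  # mean of the upper class
        var_between = w0 * (total - w0) * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return lo + best_t / scale
```

On a strongly bimodal intensity distribution (dark pupil versus bright background), the selected threshold separates the two clusters without any manually tuned parameter.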
Recombination in liquid-filled ionization chambers beyond the Boag limit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brualla-González, L.; Roselló, J.
Purpose: The high mass density and low mobilities of charge carriers can cause important recombination in liquid-filled ionization chambers (LICs). Saturation correction methods have been proposed for LICs. Correction methods for pulsed irradiation are based on Boag equation. However, Boag equation assumes that the charge ionized by one pulse is fully collected before the arrival of the next pulse. This condition does not hold in many clinical beams where the pulse repetition period may be shorter than the charge collection time, causing overlapping between charge carriers ionized by different pulses, and Boag equation is not applicable there. In this work, the authors present an experimental and numerical characterization of collection efficiencies in LICs beyond the Boag limit, with overlapping between charge carriers ionized by different pulses. Methods: The authors have studied recombination in a LIC array for different dose-per-pulse, pulse repetition frequency, and polarization voltage values. Measurements were performed in a Truebeam Linac using FF and FFF modalities. Dose-per-pulse and pulse repetition frequency have been obtained by monitoring the target current with an oscilloscope. Experimental collection efficiencies have been obtained by using a combination of the two-dose-rate method and ratios to the readout of a reference chamber (CC13, IBA). The authors have also used numerical simulation to complement the experimental data. Results: The authors have found that overlap significantly increases recombination in LICs, as expected. However, the functional dependence of collection efficiencies on the dose-per-pulse does not change (a linear dependence has been observed in the near-saturation region for different degrees of overlapping, the same dependence observed in the nonoverlapping scenario). 
On the other hand, the dependence of collection efficiencies on the polarization voltage changes in the overlapping scenario and does not follow that of Boag equation, the reason being that changing the polarization voltage also affects the charge collection time, thus changing the amount of overlapping. Conclusions: These results have important consequences for saturation correction methods for LICs. On one hand, the two-dose-rate method, which relies on the functional dependence of the collection efficiencies on dose-per-pulse, can also be used in the overlapping situation, provided that the two measurements needed to feed the method are performed at the same pulse repetition frequency (monitor unit rate). This result opens the door to computing collection efficiencies in LICs in many clinical setups where charge overlap in the LIC exists. On the other hand, correction methods based on the voltage-dependence of Boag equation like the three-voltage method or the modified two-voltage method will not work in the overlapping scenario due to the different functional dependence of collection efficiencies on the polarization voltage.
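For reference, the Boag expression whose voltage dependence breaks down under pulse overlap is f(u) = ln(1+u)/u, where u is proportional to the dose per pulse and inversely proportional to the square of the polarization voltage:

```python
import math

def boag_efficiency(u):
    """Boag collection efficiency for pulsed irradiation, f(u) = ln(1+u)/u.
    Valid only when each pulse is fully collected before the next arrives
    (the non-overlapping regime discussed in the abstract)."""
    return math.log1p(u) / u if u > 0 else 1.0
```

The function tends to 1 as u goes to 0 (negligible recombination) and decreases monotonically as the dose per pulse grows.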
Wang, Jianhua; Wong, Jessica X. H.; Kwok, Honoria; Li, Xiaochun; Yu, Hua-Zhong
2016-01-01
In this paper, we present a facile and cost-effective method to obtain superhydrophobic filter paper and demonstrate its application for efficient water/oil separation. By coupling structurally distinct organosilane precursors (e.g., octadecyltrichlorosilane and methyltrichlorosilane) to paper fibers under controlled reaction conditions, we have formulated a simple, inexpensive, and efficient protocol to achieve a desirable superhydrophobic and superoleophilic surface on conventional filter paper. The silanized superhydrophobic filter paper showed nanostructured morphology and demonstrated great separation efficiency (up to 99.4%) for water/oil mixtures. The modified filter paper is stable in both aqueous solutions and organic solvents, and can be reused multiple times. The present study shows that our newly developed binary silanization is a promising method of modifying cellulose-based materials for practical applications, in particular the treatment of industrial waste water and ecosystem recovery. PMID:26982055
Method and apparatus for efficient photodetachment and purification of negative ion beams
Beene, James R [Oak Ridge, TN; Liu, Yuan [Knoxville, TN; Havener, Charles C [Knoxville, TN
2008-02-26
Methods and apparatus are described for efficient photodetachment and purification of negative ion beams. A method of purifying an ion beam includes: inputting the ion beam into a gas-filled multipole ion guide, the ion beam including a plurality of ions; increasing a laser-ion interaction time by collisional cooling the plurality of ions using the gas-filled multipole ion guide, the plurality of ions including at least one contaminant; and suppressing the at least one contaminant by selectively removing the at least one contaminant from the ion beam by electron photodetaching at least a portion of the at least one contaminant using a laser beam.
Real-time depth camera tracking with geometrically stable weight algorithm
NASA Astrophysics Data System (ADS)
Fu, Xingyin; Zhu, Feng; Qi, Feng; Wang, Mingming
2017-03-01
We present an approach for real-time camera tracking with a depth stream. Existing methods are prone to drift in scenes without sufficient geometric information. First, we propose a new weighting method for the iterative closest point algorithm commonly used in real-time dense mapping and tracking systems. By detecting uncertainty in pose and increasing the weight of points that constrain unstable transformations, our system achieves accurate and robust trajectory estimation results. Our pipeline can be fully parallelized on a GPU and incorporated into current real-time depth camera tracking systems seamlessly. Second, we compare the state-of-the-art weighting algorithms and propose a weight degradation algorithm matched to the measurement characteristics of a consumer depth camera. Third, we use Nvidia Kepler shuffle instructions during warp and block reduction to improve the efficiency of our system. Results on the public TUM RGB-D benchmark demonstrate that our camera tracking system achieves state-of-the-art results in both accuracy and efficiency.
A Fast Framework for Abrupt Change Detection Based on Binary Search Trees and Kolmogorov Statistic
Qi, Jin-Peng; Qi, Jie; Zhang, Qing
2016-01-01
Change-Point (CP) detection has attracted considerable attention in the fields of data mining and statistics; it is very meaningful to discuss how to quickly and efficiently detect abrupt change from large-scale bioelectric signals. Currently, most of the existing methods, like Kolmogorov-Smirnov (KS) statistic and so forth, are time-consuming, especially for large-scale datasets. In this paper, we propose a fast framework for abrupt change detection based on binary search trees (BSTs) and a modified KS statistic, named BSTKS (binary search trees and Kolmogorov statistic). In this method, first, two binary search trees, termed as BSTcA and BSTcD, are constructed by multilevel Haar Wavelet Transform (HWT); second, three search criteria are introduced in terms of the statistic and variance fluctuations in the diagnosed time series; last, an optimal search path is detected from the root to leaf nodes of two BSTs. The studies on both the synthetic time series samples and the real electroencephalograph (EEG) recordings indicate that the proposed BSTKS can detect abrupt change more quickly and efficiently than KS, t-statistic (t), and Singular-Spectrum Analyses (SSA) methods, with the shortest computation time, the highest hit rate, the smallest error, and the highest accuracy out of four methods. This study suggests that the proposed BSTKS is very helpful for useful information inspection on all kinds of bioelectric time series signals. PMID:27413364
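The two-sample KS statistic at the heart of the framework above can be sketched directly: slide a candidate split point through the series and keep the split with the largest distance between the empirical CDFs of the two segments. This is only the brute-force KS core, not the BST/Haar-wavelet acceleration the paper contributes; function names and parameters here are illustrative.

```python
import bisect

def ks_distance(left, right):
    """Two-sample Kolmogorov-Smirnov distance between two samples."""
    sl, sr = sorted(left), sorted(right)
    d = 0.0
    for x in sorted(set(left) | set(right)):
        cl = bisect.bisect_right(sl, x) / len(sl)  # empirical CDF of left at x
        cr = bisect.bisect_right(sr, x) / len(sr)  # empirical CDF of right at x
        d = max(d, abs(cl - cr))
    return d

def detect_change_point(series, min_seg=5):
    """Scan all split points; return the index maximizing the KS distance."""
    best_i, best_d = None, -1.0
    for i in range(min_seg, len(series) - min_seg):
        d = ks_distance(series[:i], series[i:])
        if d > best_d:
            best_i, best_d = i, d
    return best_i, best_d
```

This scan is O(n) KS evaluations per series; the BSTKS contribution is precisely to avoid this linear scan by searching down a tree of wavelet coefficients instead.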
IRB Process Improvements: A Machine Learning Analysis.
Shoenbill, Kimberly; Song, Yiqiang; Cobb, Nichelle L; Drezner, Marc K; Mendonca, Eneida A
2017-06-01
Clinical research involving humans is critically important, but it is a lengthy and expensive process. Most studies require institutional review board (IRB) approval. Our objective is to identify predictors of delays or accelerations in the IRB review process and apply this knowledge to inform process change in an effort to improve IRB efficiency, transparency, consistency and communication. We analyzed timelines of protocol submissions to determine protocol or IRB characteristics associated with different processing times. Our evaluation included single-variable analysis to identify significant predictors of IRB processing time and machine learning methods to predict processing times through the IRB review system. Based on initially identified predictors, changes to IRB workflow and staffing procedures were instituted and we repeated our analysis. Our analysis identified several predictors of delays in the IRB review process, including the type of IRB review to be conducted, whether a protocol falls under Veterans Administration purview, and the specific staff in charge of a protocol's review. We have identified several predictors of delays in IRB protocol review processing times using statistical and machine learning methods. Application of this knowledge to process improvement efforts in two IRBs has led to increased efficiency in protocol review. The workflow and system enhancements that are being made support our four-part goal of improving IRB efficiency, consistency, transparency, and communication.
NASA Technical Reports Server (NTRS)
Jothiprasad, Giridhar; Mavriplis, Dimitri J.; Caughey, David A.
2002-01-01
The rapid increase in available computational power over the last decade has enabled higher resolution flow simulations and more widespread use of unstructured grid methods for complex geometries. While much of this effort has been focused on steady-state calculations in the aerodynamics community, the need to accurately predict off-design conditions, which may involve substantial amounts of flow separation, points to the need to efficiently simulate unsteady flow fields. Accurate unsteady flow simulations can easily require several orders of magnitude more computational effort than a corresponding steady-state simulation. For this reason, techniques for improving the efficiency of unsteady flow simulations are required in order to make such calculations feasible in the foreseeable future. The purpose of this work is to investigate possible reductions in computer time due to the choice of an efficient time-integration scheme from a series of schemes differing in the order of time-accuracy, and by the use of more efficient techniques to solve the nonlinear equations which arise while using implicit time-integration schemes. This investigation is carried out in the context of a two-dimensional unstructured mesh laminar Navier-Stokes solver.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wan, Hui; Rasch, Philip J.; Zhang, Kai
2014-09-08
This paper explores the feasibility of an experimentation strategy for investigating sensitivities in fast components of atmospheric general circulation models. The basic idea is to replace the traditional serial-in-time long-term climate integrations by representative ensembles of shorter simulations. The key advantage of the proposed method lies in its efficiency: since fewer days of simulation are needed, the computational cost is less, and because individual realizations are independent and can be integrated simultaneously, the new dimension of parallelism can dramatically reduce the turnaround time in benchmark tests, sensitivity studies, and model tuning exercises. The strategy is not appropriate for exploring the sensitivity of all model features, but it is very effective in many situations. Two examples are presented using the Community Atmosphere Model version 5. The first example demonstrates that the method is capable of characterizing the model cloud and precipitation sensitivity to time step length. A nudging technique is also applied to an additional set of simulations to help understand the contribution of physics-dynamics interaction to the detected time step sensitivity. In the second example, multiple empirical parameters related to cloud microphysics and aerosol lifecycle are perturbed simultaneously in order to explore which parameters have the largest impact on the simulated global mean top-of-atmosphere radiation balance. Results show that in both examples, short ensembles are able to correctly reproduce the main signals of model sensitivities revealed by traditional long-term climate simulations for fast processes in the climate system. The efficiency of the ensemble method makes it particularly useful for the development of high-resolution, costly and complex climate models.
Zhang, Yonghong; Wang, Bin; Zhang, Xiaomei; Huang, Jianbin; Liu, Chenjiang
2015-02-26
We report here an efficient and green method for the Biginelli condensation of aldehydes, β-ketoesters, and urea or thiourea catalyzed by the Brønsted acidic ionic liquid [Btto][p-TSA] under solvent-free conditions. Compared to classical Biginelli reaction conditions, the present method offers good yields, short reaction times, near-room-temperature operation, and avoidance of organic solvents and metal catalysts.
An acoustic on-chip goniometer for room temperature macromolecular crystallography.
Burton, C G; Axford, D; Edwards, A M J; Gildea, R J; Morris, R H; Newton, M I; Orville, A M; Prince, M; Topham, P D; Docker, P T
2017-12-05
This paper describes the design, development and successful use of an on-chip goniometer for room-temperature macromolecular crystallography via acoustically induced rotations. We present for the first time a low cost, rate-tunable, acoustic actuator for gradual in-fluid sample reorientation about varying axes and its utilisation for protein structure determination on a synchrotron beamline. The device enables the efficient collection of diffraction data via a rotation method from a sample within a surface confined droplet. This method facilitates efficient macromolecular structural data acquisition in fluid environments for dynamical studies.
High performance computation of radiative transfer equation using the finite element method
NASA Astrophysics Data System (ADS)
Badri, M. A.; Jolivet, P.; Rousseau, B.; Favennec, Y.
2018-05-01
This article deals with an efficient strategy for numerically simulating radiative transfer phenomena using distributed computing. The finite element method alongside the discrete ordinate method is used for the spatio-angular discretization of the monochromatic steady-state radiative transfer equation in an anisotropically scattering medium. Two very different parallelization methods, angular and spatial decomposition, are presented. To do so, the finite element method is used in a vectorial way. A detailed comparison of scalability, performance, and efficiency on thousands of processors is established for two- and three-dimensional heterogeneous test cases. Timings show that both algorithms scale well when using proper preconditioners. We also observe that our angular decomposition scheme outperforms our domain decomposition method. Overall, we perform numerical simulations at scales that were previously unattainable by standard radiative transfer equation solvers.
NASA Astrophysics Data System (ADS)
Li, W.; Shao, H.
2017-12-01
For geospatial-cyberinfrastructure-enabled web services, the ability to rapidly transmit and share spatial data over the Internet plays a critical role in meeting the demands of real-time change detection, response, and decision-making. Especially for vector datasets, which serve as irreplaceable and concrete material in data-driven geospatial applications, rich geometry and property information facilitates the development of interactive, efficient, and intelligent data analysis and visualization applications. However, big-data issues have hindered the wide adoption of vector datasets in web services. In this research, we propose a comprehensive optimization strategy to enhance the performance of vector data transmission and processing. This strategy combines: 1) pre-computed and on-the-fly generalization, which automatically determines the proper simplification level by introducing an appropriate distance tolerance (ADT) to meet various visualization requirements while speeding up simplification; 2) a progressive attribute transmission method to reduce data size and therefore service response time; 3) compressed data transmission and dynamic selection of a compression method to maximize service efficiency under different computing and network environments. A cyberinfrastructure web portal was developed to implement the proposed technologies. After applying our optimization strategies, substantial performance enhancement is achieved. We expect this work to widen the use of web services providing vector data to support real-time spatial feature sharing, visual analytics, and decision-making.
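Generalization with a distance tolerance, as described above, is commonly realized with the Douglas-Peucker algorithm; the abstract does not name the paper's exact simplification method, so the sketch below uses Douglas-Peucker purely as a representative example, with the tolerance playing the role of the ADT.

```python
def perp_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    return abs(dy * px - dx * py + bx * ay - by * ax) / (dx * dx + dy * dy) ** 0.5

def simplify(points, tol):
    """Douglas-Peucker line simplification with distance tolerance tol."""
    if len(points) < 3:
        return list(points)
    # find the vertex farthest from the chord joining the endpoints
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perp_dist(points[i], points[0], points[-1])
        if d > dmax:
            dmax, idx = d, i
    if dmax > tol:  # keep the far vertex and recurse on both halves
        left = simplify(points[:idx + 1], tol)
        right = simplify(points[idx:], tol)
        return left[:-1] + right
    return [points[0], points[-1]]
```

Raising the tolerance drops more vertices, which is how a server can trade geometric detail for a smaller payload at coarse zoom levels.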
Menezes, Helvécio Costa; de Barcelos, Stella Maris Resende; Macedo, Damiana Freire Dias; Purceno, Aluir Dias; Machado, Bruno Fernades; Teixeira, Ana Paula Carvalho; Lago, Rochel Monteiro; Serp, Philippe; Cardeal, Zenilda Lourdes
2015-05-11
This paper describes a new, efficient and versatile method for the sampling and preconcentration of PAH in environmental water matrices using special hybrid magnetic carbon nanotubes. These N-doped amphiphilic CNT can be easily dispersed in any aqueous matrix due to the N-containing hydrophilic part and at the same time show high efficiency for the adsorption of different PAH contaminants due to the very hydrophobic surface. After adsorption, the CNT can be easily removed from the medium by simple magnetic separation. GC/MS analyses showed that the CNT method is more efficient than the use of polydimethylsiloxane (PDMS), with much lower solvent consumption, greater technical simplicity, and shorter analysis time, showing good linearity (range 0.18-80.00 μg L(-1)) and determination coefficients (R(2) > 0.9810). The limit of detection ranged from 0.05 to 0.42 μg L(-1), with limits of quantification from 0.18 to 1.40 μg L(-1). Recovery (n=9) ranged from 80.50 ± 10 to 105.40 ± 12%. Intraday precision (RSD, n=9) ranged from 1.91 to 9.01%, whereas interday precision (RSD, n=9) ranged from 7.02 to 17.94%. The method was applied to the analysis of PAH in four lake water samples collected in Belo Horizonte, Brazil.
PRESEE: An MDL/MML Algorithm to Time-Series Stream Segmenting
Xu, Kaikuo; Jiang, Yexi; Tang, Mingjie; Yuan, Changan; Tang, Changjie
2013-01-01
Time-series streams are among the most common data types in the data mining field, prevalent in areas such as the stock market, ecology, and medical care. Segmentation is a key step to accelerate the processing of time-series stream mining. Previous segmentation algorithms mainly focused on improving precision rather than efficiency, and their performance depends heavily on parameters that are hard for users to set. In this paper, we propose PRESEE (parameter-free, real-time, and scalable time-series stream segmenting algorithm), which greatly improves the efficiency of time-series stream segmentation. PRESEE is based on both the MDL (minimum description length) and MML (minimum message length) methods, which allow it to segment data automatically. To evaluate the performance of PRESEE, we conduct several experiments on time-series streams of different types and compare it with the state-of-the-art algorithm. The empirical results show that PRESEE is very efficient for real-time stream datasets, improving segmentation speed nearly tenfold. The novelty of this algorithm is further demonstrated by applying PRESEE to segment real-time stream datasets from ChinaFLUX sensor network data streams. PMID:23956693
Detection of main tidal frequencies using least squares harmonic estimation method
NASA Astrophysics Data System (ADS)
Mousavian, R.; Hossainali, M. Mashhadi
2012-11-01
In this paper the efficiency of the method of Least Squares Harmonic Estimation (LS-HE) for detecting the main tidal frequencies is investigated. Using this method, the tidal spectrum of sea level data is evaluated at two tidal stations: Bandar Abbas in the south of Iran and Workington on the eastern coast of the UK. The amplitudes of the tidal constituents at these two stations are not the same. Moreover, in contrast to the Workington station, the Bandar Abbas tidal record is not an equispaced time series. Therefore, the analysis of the hourly tidal observations at Bandar Abbas and Workington provides reasonable insight into the efficiency of this method for analyzing the frequency content of tidal time series. Furthermore, applying the Fourier transform to the Workington tidal record provides an independent source of information for evaluating the tidal spectrum produced by the LS-HE method. According to the obtained results, the spectra of these two tidal records contain the components with the maximum amplitudes among those expected in this time span, plus some frequencies new to the list of known constituents. In addition, in terms of the frequencies with maximum amplitude, the power spectra derived from the two aforementioned methods are the same. These results demonstrate the ability of LS-HE to identify the frequencies with maximum amplitude in both tidal records.
Absolute and angular efficiencies of a microchannel-plate position-sensitive detector
NASA Technical Reports Server (NTRS)
Gao, R. S.; Gibner, P. S.; Newman, J. H.; Smith, K. A.; Stebbings, R. F.
1984-01-01
This paper presents a characterization of a commercially available position-sensitive detector of energetic ions and neutrals. The detector consists of two microchannel plates followed by a resistive position-encoding anode. The work includes measurement of absolute detection efficiencies for H(+), He(+), and O(+) ions in the energy range between 250 and 5000 eV, measurement of relative detection efficiencies as a function of particle impact angle, and a simple method for accurately measuring the time at which a particle strikes the detector.
GPU-accelerated Modeling and Element-free Reverse-time Migration with Gauss Points Partition
NASA Astrophysics Data System (ADS)
Zhen, Z.; Jia, X.
2014-12-01
The element-free method (EFM) has been applied to seismic modeling and migration. Compared with the finite element method (FEM) and the finite difference method (FDM), it is much cheaper and more flexible because only information about the nodes and the boundary of the study area is required in the computation. In the EFM, the number of Gauss points should be consistent with the number of model nodes; otherwise the accuracy of the intermediate coefficient matrices suffers. Thus when we increase the number of nodes in the velocity model to obtain higher resolution, the size of the computer's memory becomes a bottleneck: the original EFM can deal with at most 81×81 nodes with 2 GB of memory, as tested by Jia and Hu (2006). To address storage and computational efficiency, we propose the concept of Gauss points partition (GPP) and utilize GPUs to improve computational efficiency. Owing to the characteristics of the Gauss points, the GPP method does not influence the propagation of the seismic wave in the velocity model. To overcome the time-consuming computation of the stiffness matrix (K) and the mass matrix (M), we also use GPUs in our computation program. We employ the compressed sparse row (CSR) format to compress the intermediate sparse matrices and simplify the operations by solving the linear equations with the CULA Sparse Conjugate Gradient (CG) solver instead of the linear sparse solver PARDISO. We observe that our strategy significantly reduces the computational time of K and M compared with the CPU-based algorithm. The model tested is the Marmousi model, 7425 m long and 2990 m deep, discretized with 595×298 nodes, 300×300 Gauss cells, and 3×3 Gauss points in each cell.
In contrast to the computational time of the conventional EFM, the GPU-GPP approach substantially improves efficiency: the speedup ratio for computing K and M is 120, and that for RTM is 11.5. At the same time, imaging accuracy is not harmed. Another advantage of the GPU-GPP method is its easy application to other numerical methods such as the FEM. Finally, in the GPU-GPP method, the arrays require quite limited memory storage, which makes the method promising for large-scale 3D problems.
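The CSR-plus-CG combination mentioned above is easy to illustrate in miniature. The sketch below is a plain serial implementation (the paper's GPU/CULA specifics are not reproduced); the CSR layout is the standard data/indices/indptr triple.

```python
def csr_matvec(data, indices, indptr, x):
    """y = A @ x for a matrix stored in compressed sparse row (CSR) form."""
    y = [0.0] * (len(indptr) - 1)
    for row in range(len(y)):
        s = 0.0
        for k in range(indptr[row], indptr[row + 1]):
            s += data[k] * x[indices[k]]
        y[row] = s
    return y

def cg(data, indices, indptr, b, tol=1e-10, maxit=1000):
    """Conjugate gradient for a symmetric positive-definite matrix in CSR form."""
    n = len(b)
    x = [0.0] * n
    r = list(b)          # residual for x0 = 0
    p = list(b)          # initial search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(maxit):
        Ap = csr_matvec(data, indices, indptr, p)
        alpha = rs / sum(pi * ai for pi, ai in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol * tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x
```

Because both the matvec and the vector updates are embarrassingly parallel, this is exactly the kind of kernel that maps well onto a GPU, which is the point of the GPU-GPP strategy.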
Singh, Brajesh K; Srivastava, Vineet K
2015-04-01
The main goal of this paper is to present a new approximate series solution of the multi-dimensional (heat-like) diffusion equation with time-fractional derivative in Caputo form using a semi-analytical approach: fractional-order reduced differential transform method (FRDTM). The efficiency of FRDTM is confirmed by considering four test problems of the multi-dimensional time fractional-order diffusion equation. FRDTM is a very efficient, effective and powerful mathematical tool which provides exact or very close approximate solutions for a wide range of real-world problems arising in engineering and natural sciences, modelled in terms of differential equations.
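For intuition, the textbook RDTM recurrence for the one-dimensional heat-like case of the equation above can be written out; the notation here is generic and assumed, not drawn from the paper itself.

```latex
% Caputo time-fractional diffusion: D_t^{\alpha} u = u_{xx}, \quad u(x,0) = f(x).
% RDTM expands u(x,t) = \sum_{k \ge 0} U_k(x)\, t^{k\alpha} and yields the recurrence
U_0(x) = f(x), \qquad
U_{k+1}(x) = \frac{\Gamma(k\alpha + 1)}{\Gamma\big((k+1)\alpha + 1\big)}
             \, \frac{\partial^2 U_k}{\partial x^2},
```

so each coefficient function follows from the previous one by a spatial derivative and a ratio of Gamma functions, which is why the method is cheap compared with discretizing the Caputo derivative directly.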
Efficient self-consistent viscous-inviscid solutions for unsteady transonic flow
NASA Technical Reports Server (NTRS)
Howlett, J. T.
1985-01-01
An improved method is presented for coupling a boundary layer code with an unsteady inviscid transonic computer code in a quasi-steady fashion. At each fixed time step, the boundary layer and inviscid equations are successively solved until the process converges. An explicit coupling of the equations is described which greatly accelerates the convergence process. Computer times for converged viscous-inviscid solutions are about 1.8 times the comparable inviscid values. Comparison of the results obtained with experimental data on three airfoils are presented. These comparisons demonstrate that the explicitly coupled viscous-inviscid solutions can provide efficient predictions of pressure distributions and lift for unsteady two-dimensional transonic flows.
System Design for Ocean Sensor Data Transmission Based on Inductive Coupling
NASA Astrophysics Data System (ADS)
Xu, Ming; Liu, Fei; Zong, Yuan; Hong, Feng
Ocean observation is a precondition for exploring and utilizing the ocean, and acquiring ocean data in a precise, efficient, and real-time way is the key question of ocean surveillance. Traditionally, there are three types of methods for ocean data transmission: underwater acoustics, GPRS via mobile networks, and satellite communication. However, none of them meets the requirements of efficiency, accuracy, real-time operation, and low cost at the same time. In this paper, we propose a new wireless transmission system for underwater sensors, built on FGR wireless modules combined with inductive coupling. Lab and offshore experiments confirmed the feasibility and effectiveness of the proposed wireless transmission system.
Efficient algorithms and implementations of entropy-based moment closures for rarefied gases
NASA Astrophysics Data System (ADS)
Schaerer, Roman Pascal; Bansal, Pratyuksh; Torrilhon, Manuel
2017-07-01
We present efficient algorithms and implementations of the 35-moment system equipped with the maximum-entropy closure in the context of rarefied gases. While closures based on the principle of entropy maximization have been shown to yield very promising results for moderately rarefied gas flows, the computational cost of these closures is in general much higher than for closure theories with explicit closed-form expressions of the closing fluxes, such as Grad's classical closure. Following a similar approach as Garrett et al. (2015) [13], we investigate efficient implementations of the computationally expensive numerical quadrature method used for the moment evaluations of the maximum-entropy distribution by exploiting its inherent fine-grained parallelism with the parallelism offered by multi-core processors and graphics cards. We show that using a single graphics card as an accelerator allows speed-ups of two orders of magnitude when compared to a serial CPU implementation. To accelerate the time-to-solution for steady-state problems, we propose a new semi-implicit time discretization scheme. The resulting nonlinear system of equations is solved with a Newton type method in the Lagrange multipliers of the dual optimization problem in order to reduce the computational cost. Additionally, fully explicit time-stepping schemes of first and second order accuracy are presented. We investigate the accuracy and efficiency of the numerical schemes for several numerical test cases, including a steady-state shock-structure problem.
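The Newton-in-the-dual idea described above can be shown on a toy 1D problem: find the Lagrange multipliers of a maximum-entropy ansatz f(v) = exp(λ0 + λ1 v) whose quadrature moments match prescribed targets. This is a deliberately simplified sketch (two moments, a fixed uniform quadrature grid, undamped Newton), nothing like the paper's 35-moment system; all names and the grid are assumptions.

```python
import math

# fixed quadrature grid on a truncated velocity interval (a toy stand-in for
# the adaptive quadratures used in real maximum-entropy closures)
NODES = [-5.0 + 10.0 * i / 400 for i in range(401)]
W = 10.0 / 400  # uniform weight; crude, but consistent between target and solve

def moments(lam):
    """Zeroth/first moments of f(v) = exp(lam0 + lam1*v) plus the 2x2 Hessian."""
    m0 = m1 = h01 = h11 = 0.0
    for v in NODES:
        f = math.exp(lam[0] + lam[1] * v) * W
        m0 += f
        m1 += v * f
        h01 += v * f
        h11 += v * v * f
    # Hessian of the dual is the matrix of moments <p_i p_j f>
    return (m0, m1), ((m0, h01), (h01, h11))

def solve_dual(target, iters=100):
    """Newton iteration on the (convex) dual: match moments to the target."""
    lam = [0.0, 0.0]
    for _ in range(iters):
        (m0, m1), H = moments(lam)
        r0, r1 = m0 - target[0], m1 - target[1]
        if abs(r0) + abs(r1) < 1e-12:
            break
        det = H[0][0] * H[1][1] - H[0][1] * H[0][1]
        lam[0] -= (H[1][1] * r0 - H[0][1] * r1) / det
        lam[1] -= (H[0][0] * r1 - H[0][1] * r0) / det
    return lam
```

The expensive part in practice is exactly the moment quadrature inside the loop, which is why the paper offloads it to fine-grained parallel hardware.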
Multigrid Computations of 3-D Incompressible Internal and External Viscous Rotating Flows
NASA Technical Reports Server (NTRS)
Sheng, Chunhua; Taylor, Lafayette K.; Chen, Jen-Ping; Jiang, Min-Yee; Whitfield, David L.
1996-01-01
This report presents multigrid methods for solving the 3-D incompressible viscous rotating flows in a NASA low-speed centrifugal compressor and a marine propeller 4119. Numerical formulations are given in both the rotating reference frame and the absolute frame. Comparisons are made for the accuracy, efficiency, and robustness between the steady-state scheme and the time-accurate scheme for simulating viscous rotating flows for complex internal and external flow applications. Prospects for further increase in efficiency and accuracy of unsteady time-accurate computations are discussed.
Improving real-time efficiency of case-based reasoning for medical diagnosis.
Park, Yoon-Joo
2014-01-01
Conventional case-based reasoning (CBR) does not perform efficiently on high-volume datasets because of case-retrieval time. Some previous studies overcome this problem by clustering a case base into several small groups and retrieving neighbors only within the group corresponding to a target case. However, this approach generally produces less accurate predictions than conventional CBR. This paper suggests a new case-based reasoning method called Clustering-Merging CBR (CM-CBR), which produces a similar level of predictive performance to conventional CBR while incurring significantly less computational cost.
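The cluster-then-retrieve scheme that CM-CBR builds on can be sketched as below: cluster the case base once, then search only the cluster nearest the target. This shows only the baseline scheme the abstract describes (the "merging" refinement that distinguishes CM-CBR is not specified there); the deterministic k-means initialization and all names are illustrative.

```python
def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(cases, k, iters=20):
    """Plain k-means; deterministic init from the first k cases, for the sketch."""
    centers = [tuple(c) for c in cases[:k]]
    assign = [0] * len(cases)
    for _ in range(iters):
        for i, c in enumerate(cases):
            assign[i] = min(range(k), key=lambda j: dist2(c, centers[j]))
        for j in range(k):
            members = [cases[i] for i in range(len(cases)) if assign[i] == j]
            if members:
                centers[j] = tuple(sum(xs) / len(members) for xs in zip(*members))
    return centers, assign

def retrieve(target, cases, centers, assign):
    """Search only the cluster whose center is nearest the target case."""
    j = min(range(len(centers)), key=lambda j: dist2(target, centers[j]))
    members = [cases[i] for i in range(len(cases)) if assign[i] == j]
    return min(members, key=lambda c: dist2(target, c))
```

Retrieval cost drops from O(n) distance computations to roughly O(k + n/k), which is the efficiency gain the abstract is referring to; the accuracy risk arises when the true nearest neighbor sits in a different cluster.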
TopicLens: Efficient Multi-Level Visual Topic Exploration of Large-Scale Document Collections.
Kim, Minjeong; Kang, Kyeongpil; Park, Deokgun; Choo, Jaegul; Elmqvist, Niklas
2017-01-01
Topic modeling, which reveals underlying topics of a document corpus, has been actively adopted in visual analytics for large-scale document collections. However, due to its significant processing time and non-interactive nature, topic modeling has so far not been tightly integrated into a visual analytics workflow. Instead, most such systems are limited to utilizing a fixed, initial set of topics. Motivated by this gap in the literature, we propose a novel interaction technique called TopicLens that allows a user to dynamically explore data through a lens interface where topic modeling and the corresponding 2D embedding are efficiently computed on the fly. To support this interaction in real time while maintaining view consistency, we propose a novel efficient topic modeling method and a semi-supervised 2D embedding algorithm. Our work is based on improving state-of-the-art methods such as nonnegative matrix factorization and t-distributed stochastic neighbor embedding. Furthermore, we have built a web-based visual analytics system integrated with TopicLens. We use this system to measure the performance and the visualization quality of our proposed methods. We provide several scenarios showcasing the capability of TopicLens using real-world datasets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffman, Adam J., E-mail: adamhoff@umich.edu; Lee, John C., E-mail: jcl@umich.edu
2016-02-15
A new time-dependent Method of Characteristics (MOC) formulation for nuclear reactor kinetics was developed utilizing angular flux time-derivative propagation. This method avoids the requirement of storing the angular flux at previous points in time to represent a discretized time derivative; instead, an equation for the angular flux time derivative along 1D spatial characteristics is derived and solved concurrently with the 1D transport characteristic equation. This approach allows the angular flux time derivative to be recast principally in terms of the neutron source time derivatives, which are approximated to high-order accuracy using the backward differentiation formula (BDF). This approach, called Source Derivative Propagation (SDP), drastically reduces the memory requirements of time-dependent MOC relative to methods that require storing the angular flux. An SDP method was developed for 2D and 3D applications and implemented in the computer code DeCART in 2D. DeCART was used to model two reactor transient benchmarks: a modified TWIGL problem and a C5G7 transient. The SDP method accurately and efficiently replicated the solution of the conventional time-dependent MOC method using two orders of magnitude less memory.
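The BDF time discretization mentioned above is a standard implicit multistep scheme. As a minimal, self-contained illustration (not the SDP method itself, and not DeCART), here is second-order BDF applied to a scalar decay equation, where the implicit solve is closed-form because the problem is linear:

```python
import math

def bdf2_decay(lam, y0, h, steps):
    """BDF2 time stepping for y' = lam*y:
       y_{n+1} - (4/3)*y_n + (1/3)*y_{n-1} = (2h/3)*lam*y_{n+1},
       so y_{n+1} = (4*y_n - y_{n-1}) / (3 - 2*h*lam)."""
    y_prev = y0
    y = y0 * math.exp(lam * h)  # bootstrap the first step with the exact solution
    for _ in range(steps - 1):
        y_prev, y = y, (4 * y - y_prev) / (3 - 2 * h * lam)
    return y
```

BDF2 is A-stable, which is what makes BDF-family formulas attractive for the stiff source-derivative terms in reactor kinetics; the trade-off is the history storage that SDP is designed to keep small.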
NASA Technical Reports Server (NTRS)
Vanderplaats, Garrett; Townsend, James C. (Technical Monitor)
2002-01-01
The purpose of this research under the NASA Small Business Innovative Research program was to develop algorithms and associated software to solve very large nonlinear, constrained optimization tasks. Key issues included efficiency, reliability, memory, and gradient calculation requirements. This report describes the general optimization problem, ten candidate methods, and detailed evaluations of four candidates. The algorithm chosen for final development is a modern recreation of a 1960s external penalty function method that uses very limited computer memory and computational time. Although of lower efficiency, the new method can solve problems orders of magnitude larger than current methods. The resulting BIGDOT software has been demonstrated on problems with 50,000 variables and about 50,000 active constraints. For unconstrained optimization, it has solved a problem in excess of 135,000 variables. The method includes a technique for solving discrete variable problems that finds a "good" design, although a theoretical optimum cannot be guaranteed. It is very scalable in that the number of function and gradient evaluations does not change significantly with increased problem size. Test cases are provided to demonstrate the efficiency and reliability of the methods and software.
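An exterior penalty method of the kind the abstract describes replaces the constrained problem with a sequence of unconstrained ones, penalizing constraint violation ever more heavily. The sketch below is a generic textbook version on a tiny 2-variable problem (minimize x0²+x1² subject to x0+x1 ≥ 1, whose solution is (0.5, 0.5)); it is not the BIGDOT algorithm, and all names are illustrative.

```python
def solve2(H, g):
    """Solve the 2x2 system H d = g."""
    det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
    return [(H[1][1] * g[0] - H[0][1] * g[1]) / det,
            (H[0][0] * g[1] - H[1][0] * g[0]) / det]

def exterior_penalty(x0, r=1.0, outer=10, inner=20):
    """Minimize f(x)=x0^2+x1^2 s.t. g(x)=1-x0-x1 <= 0 via the penalty
       P(x, r) = f(x) + r * max(0, g(x))^2, increasing r each outer pass."""
    x = list(x0)
    for _ in range(outer):
        for _ in range(inner):  # Newton steps on the (piecewise quadratic) penalty
            g = 1.0 - x[0] - x[1]
            gp = max(0.0, g)
            grad = [2 * x[0] - 2 * r * gp, 2 * x[1] - 2 * r * gp]
            if abs(grad[0]) + abs(grad[1]) < 1e-12:
                break
            if gp > 0.0:  # penalty active: Hessian picks up the rank-1 term
                H = [[2 + 2 * r, 2 * r], [2 * r, 2 + 2 * r]]
            else:
                H = [[2.0, 0.0], [0.0, 2.0]]
            d = solve2(H, grad)
            x = [x[0] - d[0], x[1] - d[1]]
        r *= 10.0
    return x
```

The iterates approach the constrained optimum from the infeasible side as r grows, which is the characteristic (and the main caveat) of exterior penalty formulations; their appeal for very large problems is that each subproblem is unconstrained.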
A fast mass spring model solver for high-resolution elastic objects
NASA Astrophysics Data System (ADS)
Zheng, Mianlun; Yuan, Zhiyong; Zhu, Weixu; Zhang, Guian
2017-03-01
Real-time simulation of elastic objects is of great importance for computer graphics and virtual reality applications. The fast mass spring model solver can achieve visually realistic simulation in an efficient way. Unfortunately, this method suffers from resolution limitations and a lack of mechanical realism for a surface geometry model, which greatly restricts its application. To tackle these problems, in this paper we propose a fast mass spring model solver for high-resolution elastic objects. First, we project the complex surface geometry model into a set of uniform grid cells serving as cages via the mean value coordinates method to reflect its internal structure and mechanical properties. Then, we replace the original Cholesky decomposition method in the fast mass spring model solver with a conjugate gradient method, which makes the fast mass spring model solver more efficient for detailed surface geometry models. Finally, we propose a graphics processing unit accelerated parallel algorithm for the conjugate gradient method. Experimental results show that our method can realize efficient deformation simulation of 3D elastic objects with visual realism and physical fidelity, which has great potential for applications in computer animation.
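The Cholesky-to-conjugate-gradient swap described above applies because the fast mass spring solver's global system matrix is symmetric positive definite (SPD) and fixed. A self-contained CG sketch (a textbook serial version, not the paper's GPU-parallel implementation; the small test matrix stands in for the mass-plus-stiffness system):

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=1000):
    """Solve the SPD system A x = b by the conjugate gradient method,
    needing only matrix-vector products rather than a factorization."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x                 # initial residual
    p = r.copy()                  # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Small SPD stand-in for the mass-spring global matrix.
A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x = conjugate_gradient(A, b)
```

Because CG touches the matrix only through products `A @ p`, it avoids storing a dense factor, which is what makes it scale to high-resolution meshes and map naturally onto the GPU.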
NASA Astrophysics Data System (ADS)
Lai, Wencong; Ogden, Fred L.; Steinke, Robert C.; Talbot, Cary A.
2015-03-01
We have developed a one-dimensional numerical method to simulate infiltration and redistribution in the presence of a shallow dynamic water table. This method builds upon the Green-Ampt infiltration with Redistribution (GAR) model and incorporates features from the Talbot-Ogden (T-O) infiltration and redistribution method in a discretized moisture content domain. The redistribution scheme is more physically meaningful than the capillary-weighted redistribution scheme in the T-O method. Groundwater dynamics are considered in this new method instead of assuming a hydrostatic groundwater front, and it is also computationally more efficient than the T-O method. Motion of water in the vadose zone due to infiltration, redistribution, and interactions with capillary groundwater is described by ordinary differential equations. Numerical solutions to these equations are computationally less expensive than solutions of the highly nonlinear Richards' (1931) partial differential equation. We present results from numerical tests on 11 soil types using multiple rain pulses with different boundary conditions, with and without a shallow water table, and compare against the numerical solution of Richards' equation (RE). Results from the new method are in satisfactory agreement with RE solutions in terms of ponding time, deponding time, infiltration rate, and cumulative infiltrated depth. The new method, which we call "GARTO", can be used as an alternative to the RE for 1-D coupled surface and groundwater models in general situations with homogeneous soils and a dynamic water table. The GARTO method represents a significant advance in simulating groundwater-surface water interactions because it very closely matches the RE solution while being computationally efficient, with guaranteed mass conservation and none of the stability limitations that can affect RE solvers in the case of a near-surface water table.
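The Green-Ampt building block underlying GAR-type methods reduces, under ponded conditions, to an implicit algebraic relation for cumulative infiltration rather than a PDE. A minimal sketch assuming ponded-surface Green-Ampt only (fixed-point solve of the classical relation; GARTO's redistribution and water-table coupling are omitted, and the soil parameters below are illustrative):

```python
import numpy as np

def green_ampt_F(Ks, psi_f, dtheta, t, iters=200):
    """Cumulative Green-Ampt infiltration depth F(t) [cm] under ponding,
    from the implicit relation Ks*t = F - S*ln(1 + F/S), where
    S = psi_f * dtheta (wetting-front suction head times moisture deficit),
    solved by the contractive fixed-point iteration F <- Ks*t + S*ln(1 + F/S)."""
    S = psi_f * dtheta
    F = Ks * t + S                       # initial guess
    for _ in range(iters):
        F = Ks * t + S * np.log(1.0 + F / S)
    return F

# Silty-clay-like parameters: Ks = 0.1 cm/h, psi_f = 29 cm, dtheta = 0.3.
F24 = green_ampt_F(Ks=0.1, psi_f=29.0, dtheta=0.3, t=24.0)
rate = 0.1 * (1.0 + 29.0 * 0.3 / F24)   # infiltration capacity f = Ks*(1 + S/F)
```

The iteration is a contraction (its derivative S/(S+F) is below 1), so it converges without the step-size restrictions that make fully discretized RE solvers expensive; this is the kind of inexpensive ODE/algebraic solve the abstract contrasts with Richards' equation.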
Efficient light absorption by plasmonic metallic nanostructures in photovoltaic application
NASA Astrophysics Data System (ADS)
Roy, Rhombik; Datta, Debasish
2018-04-01
This article reports a way to trap light efficiently inside a tri-layered Cu(Zn,Sn)S2 (CZTS) and zinc oxide (ZnO) based solar cell module using Ag nanoparticles as light concentrators by virtue of their plasmonic properties. The passage of electromagnetic radiation within the cell has been simulated using the finite-difference time-domain (FDTD) method.
The Army Communications Objectives Measurement System (ACOMS): Survey Methods
1988-07-01
ACOMS employs scientific survey methodology and is being used for Army (1) assessments of advertising program effectiveness in a timely fashion; (2) assessments of advertising strategy efficiencies in an integrated framework; (3) management of the advertising program; and (4) planning and development of new marketing strategies.
The Organization of Group Care Environments: The Infant Day Care Center.
ERIC Educational Resources Information Center
Cataldo, Michael F.; Risley, Todd R.
In designing group day care for infants, special attention has been given to efficient care practices, so that all the children's health needs can be met and so that the staff will have ample time to interact with the children. One efficient method is to assign each staff member the responsibility of a particular area rather than a particular…
High-Quality T2-Weighted 4-Dimensional Magnetic Resonance Imaging for Radiation Therapy Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Du, Dongsu; Caruthers, Shelton D.; Glide-Hurst, Carri
2015-06-01
Purpose: The purpose of this study was to improve the triggering efficiency of the prospective respiratory amplitude-triggered 4-dimensional magnetic resonance imaging (4DMRI) method and to develop a 4DMRI imaging protocol that could offer T2 weighting for better tumor visualization, good spatial coverage and spatial resolution, and respiratory motion sampling within a reasonable amount of time for radiation therapy applications. Methods and Materials: The respiratory state splitting (RSS) and multi-shot acquisition (MSA) methods were analytically compared and validated in a simulation study using the respiratory signals from 10 healthy human subjects. The RSS method was more effective in improving triggering efficiency. It was implemented in prospective respiratory amplitude-triggered 4DMRI. 4DMRI image datasets were acquired from 5 healthy human subjects. Liver motion was estimated using the acquired 4DMRI image datasets. Results: The simulation study showed the RSS method was more effective for improving triggering efficiency than the MSA method. The average reductions in 4DMRI acquisition times were 36% and 10% for the RSS and MSA methods, respectively. The human subject study showed that T2-weighted 4DMRI with 10 respiratory states and 60 slices at a spatial resolution of 1.5 × 1.5 × 3.0 mm³ could be acquired in 9 to 18 minutes, depending on the individual's breathing pattern. Based on the acquired 4DMRI image datasets, the ranges of peak-to-peak liver displacements among the 5 human subjects were 9.0 to 12.9 mm, 2.5 to 3.9 mm, and 0.5 to 2.3 mm in the superior-inferior, anterior-posterior, and left-right directions, respectively. Conclusions: We demonstrated that with the RSS method, it was feasible to acquire high-quality T2-weighted 4DMRI within a reasonable amount of time for radiation therapy applications.
NASA Astrophysics Data System (ADS)
Soni, V.; Hadjadj, A.; Roussel, O.
2017-12-01
In this paper, a fully adaptive multiresolution (MR) finite difference scheme with a time-varying tolerance is developed to study compressible fluid flows containing shock waves in interaction with solid obstacles. To ensure adequate resolution near rigid bodies, the MR algorithm is combined with an immersed boundary method based on a direct-forcing approach in which the solid object is represented by a continuous solid-volume fraction. The resulting algorithm forms an efficient tool capable of solving linear and nonlinear waves on arbitrary geometries. Using a one-dimensional scalar wave equation, the accuracy of the MR computation is shown, as expected, to decrease in time when a constant MR tolerance is used, owing to the accumulation of error. To overcome this problem, a variable-tolerance formulation is proposed and assessed through a new quality criterion, ensuring a time-convergent solution of suitable resolution quality. The newly developed algorithm, coupled with high-resolution spatial and temporal approximations, is successfully applied to shock-bluff body and shock-diffraction problems solving the Euler and Navier-Stokes equations. Results show excellent agreement with the available numerical and experimental data, demonstrating the efficiency and performance of the proposed method.
Liu, Xing; Hou, Kun Mean; de Vaulx, Christophe; Shi, Hongling; Gholami, Khalid El
2014-01-01
Operating system (OS) technology is significant for the proliferation of the wireless sensor network (WSN). With an outstanding OS, the constrained WSN resources (processor, memory and energy) can be utilized efficiently and user application development is well supported. In this article, a new hybrid, real-time, memory-efficient, energy-efficient, user-friendly and fault-tolerant WSN OS, MIROS, is designed and implemented. MIROS implements a hybrid scheduler and a dynamic memory allocator, so real-time scheduling can be achieved with low memory consumption. In addition, it implements a mid-layer software EMIDE (Efficient Mid-layer Software for User-Friendly Application Development Environment) to decouple the WSN application from the low-level system, simplifying the application programming process and improving application reprogramming performance. Moreover, it combines software and multi-core hardware techniques to conserve energy resources, improve node reliability, and enable a new debugging method. To evaluate the performance of MIROS, it is compared with other WSN OSes (TinyOS, Contiki, SOS, openWSN and mantisOS) from different OS concerns. The final evaluation results prove that MIROS is suitable for use even on tightly resource-constrained WSN nodes. It can support real-time WSN applications, and it is energy efficient, user friendly and fault tolerant. PMID:25248069
Nakagawa, Yoshiko; Sakuma, Tetsushi; Nishimichi, Norihisa; Yokosaki, Yasuyuki; Takeo, Toru; Nakagata, Naomi; Yamamoto, Takashi
2017-05-15
Robust reproductive engineering techniques are required for the efficient and rapid production of genetically modified mice. We have reported the efficient production of genome-edited mice using reproductive engineering techniques, such as ultra-superovulation, in vitro fertilization (IVF) and vitrification/warming of zygotes. We usually use vitrified/warmed fertilized oocytes created by IVF for microinjection because of work efficiency and flexible scheduling. Here, we investigated whether the culture time of zygotes before microinjection influences the efficiency of producing knock-in mice. Knock-in mice were generated using clustered regularly interspaced short palindromic repeats (CRISPR)-CRISPR-associated protein 9 (Cas9) system and single-stranded oligodeoxynucleotide (ssODN) or PITCh (Precise Integration into Target Chromosome) system, a method of integrating a donor vector assisted by microhomology-mediated end-joining. The cryopreserved fertilized oocytes were warmed, cultured for several hours and microinjected at different timings. Microinjection was performed with Cas9 protein, guide RNA(s), and an ssODN or PITCh donor plasmid for the ssODN knock-in and the PITCh knock-in, respectively. Different production efficiencies of knock-in mice were observed by changing the timing of microinjection. Our study provides useful information for the CRISPR-Cas9-based generation of knock-in mice. © 2017. Published by The Company of Biologists Ltd.
NASA Astrophysics Data System (ADS)
Kacem, S.; Eichwald, O.; Ducasse, O.; Renon, N.; Yousfi, M.; Charrada, K.
2012-01-01
Streamer dynamics are characterized by the fast propagation of ionized shock waves at the nanosecond scale under very sharp space-charge variations. Modelling streamer dynamics requires solving the charged-particle transport equations coupled to the elliptic Poisson equation, which has to be solved at each time step of the streamer evolution in order to follow the propagation of the resulting space-charge electric field. In the present paper, full multigrid (FMG) and multigrid (MG) methods have been adapted to solve Poisson's equation for streamer discharge simulations between asymmetric electrodes. The validity of the FMG method for the computation of the potential field is first shown by direct comparisons with the analytic solution of the Laplacian potential in a point-to-plane geometry. The efficiency of the method is also compared with the classical successive over-relaxation (SOR) method and the MUltifrontal Massively Parallel Solver (MUMPS). The MG method is then applied to the simulation of positive streamer propagation, and its efficiency is evaluated by comparison with the SOR and MUMPS methods in the chosen point-to-plane configuration. Very good agreement is obtained between the three methods for all electro-hydrodynamic characteristics of the streamer during its propagation in the inter-electrode gap. However, with the MG method, the computational time to solve Poisson's equation is at least 2 times shorter under our simulation conditions.
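The multigrid idea being compared against SOR and MUMPS can be illustrated on the simplest possible case. A textbook V-cycle sketch for the 1D Poisson problem with zero Dirichlet boundaries (weighted-Jacobi smoothing, full-weighting restriction, linear prolongation; this shows the MG mechanics only, not the paper's 2D FMG streamer solver):

```python
import numpy as np

def v_cycle(u, f, h, n_smooth=3):
    """One multigrid V-cycle for -u'' = f on a uniform 1D grid with
    homogeneous Dirichlet boundaries. n = u.size - 1 must be a power of 2."""
    n = u.size - 1
    for _ in range(n_smooth):             # pre-smoothing (weighted Jacobi)
        u[1:-1] += (2.0 / 3.0) * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
    if n <= 2:
        return u
    r = np.zeros_like(u)                  # residual r = f + u''
    r[1:-1] = f[1:-1] + (u[:-2] - 2.0 * u[1:-1] + u[2:]) / (h * h)
    rc = np.zeros(n // 2 + 1)             # full-weighting restriction
    rc[1:-1] = 0.25 * (r[1:-2:2] + 2.0 * r[2:-1:2] + r[3::2])
    ec = v_cycle(np.zeros(n // 2 + 1), rc, 2.0 * h, n_smooth)
    e = np.zeros_like(u)                  # linear-interpolation prolongation
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    u += e                                # apply coarse-grid correction
    for _ in range(n_smooth):             # post-smoothing
        u[1:-1] += (2.0 / 3.0) * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
    return u

# -u'' = pi^2 sin(pi x) on [0, 1]; the exact solution is u(x) = sin(pi x).
n = 64
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi ** 2 * np.sin(np.pi * x)
u = np.zeros(n + 1)
for _ in range(10):
    u = v_cycle(u, f, 1.0 / n)
```

The error contraction per V-cycle is grid-size independent, which is why MG outpaces SOR (whose convergence degrades as the grid is refined) when the Poisson solve must be repeated at every time step of the streamer simulation.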