Sample records for efficient general method

  1. Measuring Efficiency of Secondary Healthcare Providers in Slovenia

    PubMed Central

    Blatnik, Patricia; Bojnec, Štefan; Tušak, Matej

    2017-01-01

    Abstract The chief aim of this study was to analyze the efficiency of secondary healthcare providers, focusing on Slovene general hospitals. We intended to present a complete picture of the technical, allocative, and cost (economic) efficiency of general hospitals. Methods We researched these aspects of efficiency with two econometric methods. First, we calculated the necessary efficiency quotients with stochastic frontier analysis (SFA), which is realized by econometric estimation of stochastic frontier functions; then, with data envelopment analysis (DEA), we calculated the quotients based on linear programming. Results The two chosen methods produced two different conclusions: the SFA method identified Celje General Hospital as the most efficient general hospital, whereas the DEA method identified Brežice General Hospital as the most efficient. Conclusion Our results are a useful tool that can aid managers, payers, and designers of healthcare policy to better understand how general hospitals operate. With the best practices of general hospitals at their disposal, these participants can more easily decide on the further business operations of general hospitals. PMID:28730180
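
    As a rough illustration of the DEA building block used above (a minimal input-oriented CCR linear program, not the authors' exact model, with invented toy hospital data), the efficiency score of one unit can be computed with scipy:

      import numpy as np
      from scipy.optimize import linprog

      def dea_ccr_input(X, Y, k):
          """Input-oriented CCR efficiency of unit k.
          X: (n_units, n_inputs), Y: (n_units, n_outputs)."""
          n, m = X.shape
          s = Y.shape[1]
          c = np.zeros(1 + n)                  # variables: [theta, lambda_1..n]
          c[0] = 1.0                           # minimize theta
          A_ub, b_ub = [], []
          for i in range(m):                   # sum_j lam_j * x_ji <= theta * x_ki
              A_ub.append(np.r_[-X[k, i], X[:, i]])
              b_ub.append(0.0)
          for r in range(s):                   # sum_j lam_j * y_jr >= y_kr
              A_ub.append(np.r_[0.0, -Y[:, r]])
              b_ub.append(-Y[k, r])
          bounds = [(None, None)] + [(0, None)] * n
          res = linprog(c, A_ub=np.vstack(A_ub), b_ub=b_ub,
                        bounds=bounds, method="highs")
          return res.x[0]                      # 1.0 means DEA-efficient

      # toy data: 4 hospitals, inputs (beds, staff), output (cases treated)
      X = np.array([[120, 300], [80, 200], [150, 420], [60, 180]], float)
      Y = np.array([[5000], [4200], [5400], [3000]], float)
      print([round(dea_ccr_input(X, Y, k), 3) for k in range(len(X))])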

  2. Generalized Gilat-Raubenheimer method for density-of-states calculation in photonic crystals

    NASA Astrophysics Data System (ADS)

    Liu, Boyuan; Johnson, Steven G.; Joannopoulos, John D.; Lu, Ling

    2018-04-01

    An efficient numerical algorithm is key to the accurate evaluation of the density of states (DOS) in band theory. The Gilat-Raubenheimer (GR) method proposed in 1966 is an efficient linear extrapolation method, but it was limited to specific lattices. Here, using an affine transformation, we provide a new generalization of the original GR method to any Bravais lattice and show that it is superior to the tetrahedron method and the adaptive Gaussian broadening method. Finally, we apply our generalized GR method to compute the DOS of various gyroid photonic crystals hosting topological degeneracies.
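
    The linear-extrapolation idea is easy to sketch: instead of binning each k-point energy as a delta function, every k-cell spreads its weight over an energy window set by the local band gradient. The top-hat spreading below is a crude stand-in for the exact in-cell plane-area integral of the GR construction, and the free-electron band is a made-up test case:

      import numpy as np

      def dos_gradient_broadened(e0, grad, h, edges):
          """Each k-cell spreads unit weight uniformly over
          [e0 - w/2, e0 + w/2] with w = |grad(e)| * h, a top-hat stand-in
          for the Gilat-Raubenheimer in-cell linear extrapolation."""
          dos = np.zeros(len(edges) - 1)
          w = np.linalg.norm(grad, axis=-1) * h
          for a, b in zip((e0 - w / 2).ravel(), (e0 + w / 2).ravel()):
              b = max(b, a + 1e-12)            # guard zero-gradient cells
              left = np.clip(edges[:-1], a, b)
              right = np.clip(edges[1:], a, b)
              dos += (right - left) / (b - a)  # overlap with each energy bin
          return dos / e0.size / np.diff(edges)

      # free-electron test band e(k) = |k|^2 / 2, so grad e = k
      h = 2 * np.pi / 20
      k = np.mgrid[-10:10, -10:10, -10:10] * h
      e0 = 0.5 * (k ** 2).sum(axis=0)
      grad = np.moveaxis(k, 0, -1)
      dos = dos_gradient_broadened(e0, grad, h, np.linspace(0, 3, 121))
      # dos comes out smooth and roughly proportional to sqrt(E), as expected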

  3. New algorithms for solving third- and fifth-order two point boundary value problems based on nonsymmetric generalized Jacobi Petrov–Galerkin method

    PubMed Central

    Doha, E.H.; Abd-Elhameed, W.M.; Youssri, Y.H.

    2014-01-01

    Two families of certain nonsymmetric generalized Jacobi polynomials with negative integer indices are employed for solving third- and fifth-order two-point boundary value problems governed by homogeneous and nonhomogeneous boundary conditions using a dual Petrov–Galerkin method. The idea behind our method is to use trial functions satisfying the underlying boundary conditions of the differential equations and test functions satisfying the dual boundary conditions. The resulting linear systems from the application of our method are specially structured and can be efficiently inverted. The use of generalized Jacobi polynomials simplifies the theoretical and numerical analysis of the method and also leads to accurate and efficient numerical algorithms. The presented numerical results indicate that the proposed numerical algorithms are reliable and very efficient. PMID:26425358

  4. Constrained Total Generalized p-Variation Minimization for Few-View X-Ray Computed Tomography Image Reconstruction.

    PubMed

    Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen

    2016-01-01

    Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct way of promoting a sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we then present an efficient iterative algorithm based on alternating minimization of the augmented Lagrangian function. All of the resulting subproblems decoupled by variable splitting admit explicit solutions by applying the alternating minimization method and the generalized p-shrinkage mapping. In addition, approximate solutions that can be easily performed and quickly calculated through the fast Fourier transform are derived using the proximal point method to reduce the cost of the inner subproblems. The method is evaluated qualitatively and quantitatively on simulated and real data to validate its accuracy, efficiency, and feasibility. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems.
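
    The inner step named here is the generalized p-shrinkage mapping. One widely used form (Chartrand's p-shrinkage, assumed here; the paper may use a variant) reduces to ordinary soft thresholding at p = 1:

      import numpy as np

      def p_shrink(x, lam, p):
          """Generalized p-shrinkage: sign(x) * max(|x| - lam**(2-p) * |x|**(p-1), 0).
          For p = 1 this is the soft-thresholding prox of the l1 norm."""
          mag = np.abs(x)
          out = np.zeros_like(x)
          nz = mag > 0                         # avoid 0 ** (p - 1) for p < 1
          shrunk = np.maximum(mag[nz] - lam ** (2 - p) * mag[nz] ** (p - 1), 0.0)
          out[nz] = x[nz] / mag[nz] * shrunk
          return out

      x = np.linspace(-3, 3, 7)
      print(p_shrink(x, lam=1.0, p=1.0))   # soft threshold
      print(p_shrink(x, lam=1.0, p=0.5))   # weaker bias on large entries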

  5. 10 CFR 429.70 - Alternative methods for determining energy efficiency or energy use.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    Title 10 (Energy), Section 429.70, DEPARTMENT OF ENERGY, ENERGY CONSERVATION, CERTIFICATION: Alternative methods for determining energy efficiency or energy use. (a) General. A manufacturer...

  6. General method for extracting the quantum efficiency of dispersive qubit readout in circuit QED

    NASA Astrophysics Data System (ADS)

    Bultink, C. C.; Tarasinski, B.; Haandbæk, N.; Poletto, S.; Haider, N.; Michalak, D. J.; Bruno, A.; DiCarlo, L.

    2018-02-01

    We present and demonstrate a general three-step method for extracting the quantum efficiency of dispersive qubit readout in circuit QED. We use active depletion of post-measurement photons and optimal integration weight functions on two quadratures to maximize the signal-to-noise ratio of the non-steady-state homodyne measurement. We derive analytically and demonstrate experimentally that the method robustly extracts the quantum efficiency for arbitrary readout conditions in the linear regime. We use the proven method to optimally bias a Josephson traveling-wave parametric amplifier and to quantify different noise contributions in the readout amplification chain.

  7. Constrained Total Generalized p-Variation Minimization for Few-View X-Ray Computed Tomography Image Reconstruction

    PubMed Central

    Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen

    2016-01-01

    Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct way of promoting a sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we then present an efficient iterative algorithm based on alternating minimization of the augmented Lagrangian function. All of the resulting subproblems decoupled by variable splitting admit explicit solutions by applying the alternating minimization method and the generalized p-shrinkage mapping. In addition, approximate solutions that can be easily performed and quickly calculated through the fast Fourier transform are derived using the proximal point method to reduce the cost of the inner subproblems. The method is evaluated qualitatively and quantitatively on simulated and real data to validate its accuracy, efficiency, and feasibility. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems. PMID:26901410

  8. Automatic computation and solution of generalized harmonic balance equations

    NASA Astrophysics Data System (ADS)

    Peyton Jones, J. C.; Yaser, K. S. A.; Stevenson, J.

    2018-02-01

    Generalized methods are presented for generating and solving the harmonic balance equations for a broad class of nonlinear differential or difference equations and for a general set of harmonics chosen by the user. In particular, a new algorithm for automatically generating the Jacobian of the balance equations enables efficient solution of these equations using continuation methods. Efficient numeric validation techniques are also presented, and the combined algorithm is applied to the analysis of dc, fundamental, second and third harmonic response of a nonlinear automotive damper.
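
    As a toy illustration of harmonic balance (a Duffing oscillator, not the paper's automotive damper, and with fsolve's finite-difference Jacobian standing in for the automatically generated analytic one), the periodic ansatz is substituted into the ODE and the residual is projected onto each retained harmonic:

      import numpy as np
      from scipy.optimize import fsolve

      # Duffing oscillator: x'' + c x' + k x + beta x^3 = F cos(w t)
      c, k, beta, F, w = 0.1, 1.0, 0.5, 0.3, 1.2
      H, N = 5, 128                             # harmonics kept, samples per period
      t = np.arange(N) * 2 * np.pi / (w * N)
      hs = np.arange(1, H + 1)[:, None]         # harmonic indices, shape (H, 1)
      C, S = np.cos(hs * w * t), np.sin(hs * w * t)

      def residual(z):
          a0, a, b = z[0], z[1:H + 1, None], z[H + 1:, None]
          x = a0 + (a * C + b * S).sum(0)
          xd = (-a * hs * w * S + b * hs * w * C).sum(0)
          xdd = (-(hs * w) ** 2 * (a * C + b * S)).sum(0)
          r = xdd + c * xd + k * x + beta * x ** 3 - F * np.cos(w * t)
          # Galerkin projection of the time-domain residual on each harmonic
          return np.r_[r.mean(), (r * C).mean(1), (r * S).mean(1)]

      z0 = np.zeros(2 * H + 1)
      z0[1] = F / (k - w ** 2)                  # linear-response starting guess
      z = fsolve(residual, z0)                  # Fourier coefficients of x(t)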

  9. The application of midbond basis sets in efficient and accurate ab initio calculations on electron-deficient systems

    NASA Astrophysics Data System (ADS)

    Choi, Chu Hwan

    2002-09-01

    Ab initio chemistry has shown great promise in reproducing experimental results and in its predictive power. The many complicated computational models and methods seem impenetrable to an inexperienced scientist, and the reliability of the results is not easily interpreted. The application of midbond orbitals is used to determine a general method for use in calculating weak intermolecular interactions, especially those involving electron-deficient systems. Using the criteria of consistency, flexibility, accuracy and efficiency we propose a supermolecular method of calculation using the full counterpoise (CP) method of Boys and Bernardi, coupled with Moller-Plesset (MP) perturbation theory as an efficient electron-correlative method. We also advocate the use of the highly efficient and reliable correlation-consistent polarized valence basis sets of Dunning. To these basis sets, we add a general set of midbond orbitals and demonstrate greatly enhanced efficiency in the calculation. The H2-H2 dimer is taken as a benchmark test case for our method, and details of the computation are elaborated. Our method reproduces with great accuracy the dissociation energies of other previous theoretical studies. The added efficiency of extending the basis sets with conventional means is compared with the performance of our midbond-extended basis sets. The improvement found with midbond functions is notably superior in every case tested. Finally, a novel application of midbond functions to the BH5 complex is presented. The system is an unusual van der Waals complex. The interaction potential curves are presented for several standard basis sets and midbond-enhanced basis sets, as well as for two popular, alternative correlation methods. We report that MP theory appears to be superior to coupled-cluster (CC) in speed, while it is more stable than B3LYP, a widely-used density functional theory (DFT). Application of our general method yields excellent results for the midbond basis sets. Again they prove superior to conventional extended basis sets. Based on these results, we recommend our general approach as a highly efficient, accurate method for calculating weakly interacting systems.
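
    The counterpoise correction at the center of this approach is one line of arithmetic once the three energies are in hand; the numbers below are illustrative placeholders, not values from this work (each monomer energy must come from a calculation in the full dimer basis, with ghost centers carrying the partner's basis functions):

      def cp_interaction_energy(e_dimer, e_mon_a_dimer_basis, e_mon_b_dimer_basis):
          """Boys-Bernardi counterpoise-corrected interaction energy (hartree)."""
          return e_dimer - e_mon_a_dimer_basis - e_mon_b_dimer_basis

      # hypothetical MP2 energies for one H2-H2 configuration
      e_int = cp_interaction_energy(-2.335601, -1.167795, -1.167792)
      print(f"E_int = {e_int * 627.509:.4f} kcal/mol")  # weak vdW-scale attraction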

  10. A general method for radio spectrum efficiency defining

    NASA Astrophysics Data System (ADS)

    Ramadanovic, Ljubomir M.

    1986-08-01

    A general method for defining radio spectrum efficiency is proposed. Although simple, it can be applied to various radio services. The concept of spectral elements, as information carriers, is introduced to enable the organization of larger spectral spaces (radio network models) characteristic of a particular radio network. The method is applied to some radio network models, concerning cellular radio telephone systems and digital radio relay systems, to verify its unified approach capability. All discussed radio services operate continuously.

  11. A general method for the catalytic Nazarov cyclization of heteroaromatic compounds.

    PubMed

    Malona, John A; Colbourne, Jessica M; Frontier, Alison J

    2006-11-23

    A general, catalytic method for efficient Nazarov cyclization of systems containing heteroaromatic components has been developed. Scandium triflate was identified as the most reactive promoter, and it was found that addition of lithium perchlorate was necessary for synthetically useful catalytic cyclizations. The method was used to synthesize a range of cyclopentanone-fused heteroaromatic systems in 36-97% yield, and the reactivity trends observed demonstrate the impact of polarization on cyclization efficiency.

  12. Survey of methods for calculating sensitivity of general eigenproblems

    NASA Technical Reports Server (NTRS)

    Murthy, Durbha V.; Haftka, Raphael T.

    1987-01-01

    A survey of methods for sensitivity analysis of the algebraic eigenvalue problem for non-Hermitian matrices is presented. In addition, a modification of one method based on a better normalizing condition is proposed. Methods are classified as Direct or Adjoint and are evaluated for efficiency. Operation counts are presented in terms of matrix size, number of design variables and number of eigenvalues and eigenvectors of interest. The effect of the sparsity of the matrix and its derivatives is also considered, and typical solution times are given. General guidelines are established for the selection of the most efficient method.
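
    For a simple eigenvalue of a general (non-Hermitian) matrix, the adjoint sensitivity formula surveyed here is d(lambda_i)/dp = y_i^H (dA/dp) x_i / (y_i^H x_i), with x_i and y_i the right and left eigenvectors. A minimal numpy/scipy sketch on random matrices (not a structural model), with a finite-difference check:

      import numpy as np
      from scipy.linalg import eig, eigvals

      def eig_sensitivities(A, dA):
          """All first-order eigenvalue sensitivities of A(p) at once."""
          w, vl, vr = eig(A, left=True, right=True)
          num = np.einsum('ki,kl,li->i', vl.conj(), dA, vr)   # y^H dA x
          den = np.einsum('ki,ki->i', vl.conj(), vr)          # y^H x
          return w, num / den

      rng = np.random.default_rng(0)
      A0, dA = rng.standard_normal((5, 5)), rng.standard_normal((5, 5))
      w, dw = eig_sensitivities(A0, dA)
      eps = 1e-6
      err = np.sort_complex(w + eps * dw) - np.sort_complex(eigvals(A0 + eps * dA))
      print(np.abs(err).max())    # tiny when the eigenvalues are well separated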

  13. Navier-Stokes and viscous-inviscid interaction

    NASA Technical Reports Server (NTRS)

    Steger, Joseph L.; Vandalsem, William R.

    1989-01-01

    Some considerations toward developing numerical procedures for simulating viscous compressible flows are discussed. Both Navier-Stokes and boundary layer field methods are considered. Because efficient viscous-inviscid interaction methods have been difficult to extend to complex 3-D flow simulations, Navier-Stokes procedures are more frequently being utilized even though they require considerably more work per grid point. It would seem a mistake, however, not to make use of the more efficient approximate methods in those regions in which they are clearly valid. Ideally, a general purpose compressible flow solver that can optionally take advantage of approximate solution methods would suffice, both to improve accuracy and efficiency. Some potentially useful steps toward this goal are described: a generalized 3-D boundary layer formulation and the fortified Navier-Stokes procedure.

  14. Multiple-time-stepping generalized hybrid Monte Carlo methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Escribano, Bruno, E-mail: bescribano@bcamath.org; Akhmatskaya, Elena; IKERBASQUE, Basque Foundation for Science, E-48013 Bilbao

    2015-01-01

    Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multiple time stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications of the method not only lead to better performance of GSHMC itself but also outperform the best-performing methods that use similar force-splitting schemes. In addition, we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on water and protein systems. Results were compared to those obtained using the Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy, and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo, and using the natural stochasticity offered by generalized hybrid Monte Carlo, improves the stability of MTS and allows larger step sizes in the simulation of complex systems.

  15. Effectiveness and Efficiency of Flashcard Drill Instructional Methods on Urban First-Graders' Word Recognition, Acquisition, Maintenance, and Generalization

    ERIC Educational Resources Information Center

    Nist, Lindsay; Joseph, Laurice M.

    2008-01-01

    This investigation built upon previous studies that compared effectiveness and efficiency among instructional methods. Instructional effectiveness and efficiency were compared among three conditions: an incremental rehearsal, a more challenging ratio of known to unknown interspersal word procedure, and a traditional drill and practice flashcard…

  16. Some problems of the calculation of three-dimensional boundary layer flows on general configurations

    NASA Technical Reports Server (NTRS)

    Cebeci, T.; Kaups, K.; Mosinskis, G. J.; Rehn, J. A.

    1973-01-01

    An accurate solution of the three-dimensional boundary layer equations over general configurations, such as those encountered in aircraft and space shuttle design, requires a very efficient, fast, and accurate numerical method with suitable turbulence models for the Reynolds stresses. The efficiency, speed, and accuracy of a three-dimensional numerical method are examined together with the turbulence models for the Reynolds stresses. The numerical method is the implicit two-point finite difference approach (Box Method) developed by Keller and applied to the boundary layer equations by Keller and Cebeci. In addition, some of the problems that may arise in the solution of these equations for three-dimensional boundary layer flows over general configurations are studied.

  17. Efficient segmentation of 3D fluoroscopic datasets from mobile C-arm

    NASA Astrophysics Data System (ADS)

    Styner, Martin A.; Talib, Haydar; Singh, Digvijay; Nolte, Lutz-Peter

    2004-05-01

    The emerging mobile fluoroscopic 3D technology linked with a navigation system combines the advantages of CT-based and C-arm-based navigation. The intra-operative, automatic segmentation of 3D fluoroscopy datasets enables the combined visualization of surgical instruments and anatomical structures for enhanced planning, surgical eye-navigation and landmark digitization. We performed a thorough evaluation of several segmentation algorithms using a large set of data from different anatomical regions and man-made phantom objects. The analyzed segmentation methods include automatic thresholding, morphological operations, an adapted region growing method and an implicit 3D geodesic snake method. In regard to computational efficiency, all methods performed within acceptable limits on a standard desktop PC (30 s to 5 min). In general, the best results were obtained with datasets from long bones, followed by extremities. The segmentations of spine, pelvis and shoulder datasets were generally of poorer quality. As expected, the threshold-based methods produced the worst results. The combined thresholding and morphological operations methods were considered appropriate for a smaller set of clean images. The region growing method performed generally much better in regard to computational efficiency and segmentation correctness, especially for datasets of joints, and lumbar and cervical spine regions. The less efficient implicit snake method was able to additionally remove wrongly segmented skin tissue regions. This study presents a step towards efficient intra-operative segmentation of 3D fluoroscopy datasets, but there is room for improvement. Next, we plan to study model-based approaches for datasets from the knee and hip joint region, which would thenceforth be applied to all anatomical regions in our continuing development of an ideal segmentation procedure for 3D fluoroscopic images.

  18. Numerical study of a multigrid method with four smoothing methods for the incompressible Navier-Stokes equations in general coordinates

    NASA Technical Reports Server (NTRS)

    Zeng, S.; Wesseling, P.

    1993-01-01

    The performance of a linear multigrid method using four smoothing methods, called SCGS (Symmetrical Coupled Gauss-Seidel), CLGS (Collective Line Gauss-Seidel), SILU (Scalar ILU), and CILU (Collective ILU), is investigated for the incompressible Navier-Stokes equations in general coordinates, in association with Galerkin coarse grid approximation. Robustness and efficiency are measured and compared by application to test problems. The numerical results show that CILU is the most robust, SILU the least, with CLGS and SCGS in between. CLGS is the best in efficiency, SCGS and CILU follow, and SILU is the worst.

  19. Generalized Buneman Pruning for Inferring the Most Parsimonious Multi-state Phylogeny

    NASA Astrophysics Data System (ADS)

    Misra, Navodit; Blelloch, Guy; Ravi, R.; Schwartz, Russell

    Accurate reconstruction of phylogenies remains a key challenge in evolutionary biology. Most biologically plausible formulations of the problem are formally NP-hard, with no known efficient solution. The standard in practice is fast heuristic methods that are empirically known to work very well in general, but can yield results arbitrarily far from optimal. Practical exact methods, which yield exponential worst-case running times but generally much better times in practice, provide an important alternative. We report progress in this direction by introducing a provably optimal method for the weighted multi-state maximum parsimony phylogeny problem. The method is based on generalizing the notion of the Buneman graph, a construction key to efficient exact methods for binary sequences, so as to apply to sequences with arbitrary finite numbers of states with arbitrary state transition weights. We implement an integer linear programming (ILP) method for the multi-state problem using this generalized Buneman graph and demonstrate that the resulting method is able to solve data sets that are intractable by prior exact methods in run times comparable with popular heuristics. Our work provides the first method for provably optimal maximum parsimony phylogeny inference that is practical for multi-state data sets of more than a few characters.

  20. Periodic Pulay method for robust and efficient convergence acceleration of self-consistent field iterations

    DOE PAGES

    Banerjee, Amartya S.; Suryanarayana, Phanish; Pask, John E.

    2016-01-21

    Pulay's Direct Inversion in the Iterative Subspace (DIIS) method is one of the most widely used mixing schemes for accelerating the self-consistent solution of electronic structure problems. In this work, we propose a simple generalization of DIIS in which Pulay extrapolation is performed at periodic intervals rather than on every self-consistent field iteration, and linear mixing is performed on all other iterations. Lastly, we demonstrate through numerical tests on a wide variety of materials systems in the framework of density functional theory that the proposed generalization of Pulay's method significantly improves its robustness and efficiency.
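
    The scheme is easy to state in code: damped linear mixing on most iterations, and a Pulay (DIIS) extrapolation over the stored residual history every period-th iteration. The contractive tanh fixed point below is an invented stand-in for a real SCF map, which would mix densities or potentials:

      import numpy as np

      def periodic_pulay(g, x0, beta=0.2, period=4, m=6, tol=1e-10, maxit=500):
          """Solve x = g(x) by periodic Pulay mixing (history depth m)."""
          x, X, F = x0.copy(), [], []
          for it in range(1, maxit + 1):
              f = g(x) - x                        # residual
              if np.linalg.norm(f) < tol:
                  return x, it
              X.append(x.copy()); F.append(f.copy())
              X, F = X[-m:], F[-m:]
              if it % period == 0 and len(F) >= 2:
                  # DIIS: minimize ||sum_i c_i f_i|| subject to sum_i c_i = 1
                  Fm, n_h = np.array(F), len(F)
                  M = np.zeros((n_h + 1, n_h + 1))
                  M[:n_h, :n_h] = Fm @ Fm.T
                  M[-1, :n_h] = M[:n_h, -1] = 1.0
                  rhs = np.zeros(n_h + 1); rhs[-1] = 1.0
                  c = np.linalg.lstsq(M, rhs, rcond=None)[0][:n_h]
                  x = c @ (np.array(X) + beta * Fm)
              else:
                  x = x + beta * f                # plain damped linear mixing
          return x, maxit

      rng = np.random.default_rng(1)
      W = 0.4 * rng.standard_normal((50, 50)) / np.sqrt(50)
      b = rng.standard_normal(50)
      x, iters = periodic_pulay(lambda x: np.tanh(W @ x + b), np.zeros(50))
      print(iters)    # typically far fewer iterations than plain mixing alone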

  1. A general and efficient palladium-catalyzed carbonylative synthesis of 2-aryloxazolines and 2-aryloxazines from aryl bromides.

    PubMed

    Wu, Xiao-Feng; Neumann, Helfried; Neumann, Stephan; Beller, Matthias

    2012-10-22

    Oxazoline is OK! A general and efficient method for the synthesis of oxazolines has been developed. This allowed the preparation of 27 five-membered-ring heterocycles and 11 six-membered-ring heterocycles in moderate to good yields.

  2. Efficient computation of kinship and identity coefficients on large pedigrees.

    PubMed

    Cheng, En; Elliott, Brendan; Ozsoyoglu, Z Meral

    2009-06-01

    With the rapidly expanding field of medical genetics and genetic counseling, genealogy information is becoming increasingly abundant. An important computation on pedigree data is the calculation of identity coefficients, which provide a complete description of the degree of relatedness of a pair of individuals. The areas of application of identity coefficients are numerous and diverse, from genetic counseling to disease tracking, and thus the computation of identity coefficients merits special attention. However, identity coefficients are not computed directly, but rather as the final step after computing a set of generalized kinship coefficients. In this paper, we first propose a novel Path-Counting Formula for calculating generalized kinship coefficients, motivated by Wright's path-counting method for computing the inbreeding coefficient. We then present an efficient and scalable scheme for calculating generalized kinship coefficients on large pedigrees using NodeCodes, a special encoding scheme for expediting the evaluation of queries on pedigree graph structures. Furthermore, we propose an improved scheme using Family NodeCodes for the computation of generalized kinship coefficients, motivated by the significant improvement Family NodeCodes provide over NodeCodes for the inbreeding coefficient. We also perform experiments to evaluate the efficiency of our method, and compare it with the performance of the traditional recursive algorithm for three individuals. Experimental results demonstrate that the resulting scheme is more scalable and efficient than the traditional recursive methods for computing generalized kinship coefficients.
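
    The kinship coefficient these schemes accelerate obeys a classic recursion (assuming individuals are numbered so parents precede children); a memoized toy version shows the quantity being computed, though not the paper's NodeCodes indexing:

      from functools import lru_cache

      # pedigree: id -> (father, mother); None marks a founder
      PED = {1: (None, None), 2: (None, None), 3: (1, 2),
             4: (1, 2), 5: (3, 4)}          # 5 results from full-sib mating

      @lru_cache(maxsize=None)
      def kinship(i, j):
          """Recursive kinship coefficient phi(i, j)."""
          if i is None or j is None:
              return 0.0                     # founders are assumed unrelated
          if i == j:
              f, m = PED[i]
              return 0.5 * (1.0 + kinship(f, m))
          if i < j:
              i, j = j, i                    # recurse on the later-born individual
          f, m = PED[i]
          return 0.5 * (kinship(f, j) + kinship(m, j))

      print(kinship(3, 4))   # full sibs: 0.25
      print(kinship(5, 5))   # self-kinship of the inbred child: 0.625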

  3. Essential energy space random walk via energy space metadynamics method to accelerate molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Li, Hongzhi; Min, Donghong; Liu, Yusong; Yang, Wei

    2007-09-01

    To overcome possible pseudoergodicity problems, molecular dynamics simulations can be accelerated via the realization of an energy space random walk. To achieve this, a biased free energy function (BFEF) needs to be obtained a priori. Although the quality of the BFEF is essential for sampling efficiency, its generation is usually tedious and nontrivial. In this work, we present an energy space metadynamics algorithm to efficiently and robustly obtain BFEFs. Moreover, in order to deal with the associated diffusion sampling problem caused by the random walk in the total energy space, the idea in the original umbrella sampling method is generalized to a random walk in the essential energy space, which only includes the energy terms determining the conformation of a region of interest. This essential energy space generalization allows the realization of efficient localized enhanced sampling and also offers the possibility of further sampling efficiency improvement when high frequency energy terms irrelevant to the target events are free of activation. The energy space metadynamics method and its generalization in the essential energy space for molecular dynamics acceleration are demonstrated in the simulation of a pentanelike system, the blocked alanine dipeptide model, and the leucine model.
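
    A stripped-down 1-D Metropolis version (not the authors' molecular dynamics implementation; parameters invented, energies in units of kT) shows the core idea: deposit Gaussian hills in total-energy space as the walk proceeds, so the accumulated bias gradually flattens the energy distribution:

      import numpy as np

      rng = np.random.default_rng(0)
      U = lambda x: 8.0 * (x ** 2 - 1.0) ** 2    # double well, barrier ~8 kT
      hills, w, sig = [], 0.25, 0.5              # hill centers, height, width

      def bias(e):                               # accumulated energy-space bias
          if not hills:
              return 0.0
          return w * np.exp(-(e - np.asarray(hills)) ** 2 / (2 * sig ** 2)).sum()

      x = 1.0
      e = U(x)
      for step in range(50_000):
          xn = x + 0.1 * rng.standard_normal()
          en = U(xn)
          # Metropolis test on the biased energy U(x) + bias(U(x))
          if rng.random() < np.exp(min(0.0, (e + bias(e)) - (en + bias(en)))):
              x, e = xn, en
          if step % 250 == 0:
              hills.append(e)                    # grow the bias where we sit
      # -bias(E) now approximates the biased free energy function (BFEF)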

  4. Self-learning Monte Carlo method

    DOE PAGES

    Liu, Junwei; Qi, Yang; Meng, Zi Yang; ...

    2017-01-04

    Monte Carlo simulation is an unbiased numerical tool for studying classical and quantum many-body systems. One of its bottlenecks is the lack of a general and efficient update algorithm for large size systems close to the phase transition, for which local updates perform badly. In this Rapid Communication, we propose a general-purpose Monte Carlo method, dubbed self-learning Monte Carlo (SLMC), in which an efficient update algorithm is first learned from the training data generated in trial simulations and then used to speed up the actual simulation. Lastly, we demonstrate the efficiency of SLMC in a spin model at the phase transition point, achieving a 10–20 times speedup.

  5. Convergence and Efficiency of Adaptive Importance Sampling Techniques with Partial Biasing

    NASA Astrophysics Data System (ADS)

    Fort, G.; Jourdain, B.; Lelièvre, T.; Stoltz, G.

    2018-04-01

    We propose a new Monte Carlo method to efficiently sample a multimodal distribution (known up to a normalization constant). We consider a generalization of the discrete-time Self Healing Umbrella Sampling method, which can also be seen as a generalization of well-tempered metadynamics. The dynamics is based on an adaptive importance technique. The importance function relies on the weights (namely the relative probabilities) of disjoint sets which form a partition of the space. These weights are unknown but are learnt on the fly yielding an adaptive algorithm. In the context of computational statistical physics, the logarithm of these weights is, up to an additive constant, the free-energy, and the discrete valued function defining the partition is called the collective variable. The algorithm falls into the general class of Wang-Landau type methods, and is a generalization of the original Self Healing Umbrella Sampling method in two ways: (i) the updating strategy leads to a larger penalization strength of already visited sets in order to escape more quickly from metastable states, and (ii) the target distribution is biased using only a fraction of the free-energy, in order to increase the effective sample size and reduce the variance of importance sampling estimators. We prove the convergence of the algorithm and analyze numerically its efficiency on a toy example.

  6. Design of spur gears for improved efficiency

    NASA Technical Reports Server (NTRS)

    Anderson, N. E.; Loewenthal, S. H.

    1981-01-01

    A method to calculate spur gear system power loss for a wide range of gear geometries and operating conditions is used to determine design requirements for an efficient gearset. The effects of spur gear size, pitch, ratio, pitch-line-velocity and load on efficiency are shown. A design example is given to illustrate how the method is to be applied. In general, peak efficiencies were found to be greater for larger diameter and fine pitched gears and tare (no-load) losses were found to be significant.

  7. Generalized non-equilibrium vertex correction method in coherent medium theory for quantum transport simulation of disordered nanoelectronics

    NASA Astrophysics Data System (ADS)

    Yan, Jiawei; Ke, Youqi

    In realistic nanoelectronics, disordered impurities/defects are inevitable and play important roles in electron transport. However, due to the lack of an effective quantum transport method, the important effects of disorder remain poorly understood. Here, we report a generalized non-equilibrium vertex correction (NVC) method with the coherent potential approximation to treat disorder effects in quantum transport simulation. With this generalized NVC method, any averaged product of two single-particle Green's functions can be obtained by solving a set of simple linear equations. As a result, the averaged non-equilibrium density matrix and various important transport properties, including the averaged current, the disorder-induced current fluctuation and the averaged shot noise, can all be efficiently computed in a unified scheme. Moreover, a generalized form of the conditionally averaged non-equilibrium Green's function is derived to incorporate with density functional theory to enable first-principles simulation. We prove that the non-equilibrium coherent potential equals the non-equilibrium vertex correction. Our approach provides a unified, efficient and self-consistent method for simulating non-equilibrium quantum transport through disordered nanoelectronics. Work supported by a ShanghaiTech start-up fund.

  8. Measurement of Energy Performances for General-Structured Servers

    NASA Astrophysics Data System (ADS)

    Liu, Ren; Chen, Lili; Li, Pengcheng; Liu, Meng; Chen, Haihong

    2017-11-01

    Energy consumption of servers in data centers is increasing rapidly with the wide application of the Internet and connected devices. To improve the energy efficiency of servers, voluntary or mandatory energy efficiency programs for servers, including voluntary labeling programs and mandatory energy performance standards, have been adopted or are being prepared in the US, the EU and China. However, the energy performance of servers and the corresponding testing methods are not well defined. This paper presents metrics to measure the energy performance of general-structured servers. The impacts of various server components on energy performance are also analyzed. Based on a set of normalized workloads, the author proposes a standard method for testing the energy efficiency of servers. Pilot tests are conducted to assess the energy performance testing methods. The findings of the tests are discussed in the paper.

  9. ENVIRONMENTAL TECHNOLOGY VERIFICATION TEST PROTOCOL, GENERAL VENTILATION FILTERS

    EPA Science Inventory

    The Environmental Technology Verification Test Protocol, General Ventilation Filters provides guidance for verification tests.

    Reference is made in the protocol to the ASHRAE 52.2P "Method of Testing General Ventilation Air-cleaning Devices for Removal Efficiency by P...

  10. Efficiency trade-offs of steady-state methods using FEM and FDM. [iterative solutions for nonlinear flow equations

    NASA Technical Reports Server (NTRS)

    Gartling, D. K.; Roache, P. J.

    1978-01-01

    The efficiency characteristics of finite element and finite difference approximations for the steady-state solution of the Navier-Stokes equations are examined. The finite element method discussed is a standard Galerkin formulation of the incompressible, steady-state Navier-Stokes equations. The finite difference formulation uses simple centered differences that are O(Δx²). Operation counts indicate that a rapidly converging Newton-Raphson-Kantorovitch iteration scheme is generally preferable to a Picard method. A split NOS Picard iterative algorithm for the finite difference method was most efficient.

  11. General linear methods and friends: Toward efficient solutions of multiphysics problems

    NASA Astrophysics Data System (ADS)

    Sandu, Adrian

    2017-07-01

    Time-dependent multiphysics partial differential equations are of great practical importance, as they model diverse phenomena that appear in mechanical and chemical engineering, aeronautics, astrophysics, meteorology and oceanography, financial modeling, environmental sciences, etc. There is no single best time discretization for the complex multiphysics systems of practical interest. We discuss "multimethod" approaches that combine different time steps and discretizations using the rigorous frameworks provided by Partitioned General Linear Methods and Generalized-Structure Additive Runge-Kutta Methods.

  12. Finite basis representations with nondirect product basis functions having structure similar to that of spherical harmonics.

    PubMed

    Czakó, Gábor; Szalay, Viktor; Császár, Attila G

    2006-01-07

    The currently most efficient finite basis representation (FBR) method [Corey et al., in Numerical Grid Methods and Their Applications to Schrödinger Equation, NATO ASI Series C, edited by C. Cerjan (Kluwer Academic, New York, 1993), Vol. 412, p. 1; Bramley et al., J. Chem. Phys. 100, 6175 (1994)] designed specifically to deal with nondirect product bases of structures phi_n^(l)(s) f_l(u), chi_m^(l)(t) phi_n^(l)(s) f_l(u), etc., employs very special l-independent grids and results in a symmetric FBR. While highly efficient, this method is not general enough. For instance, it cannot deal efficiently with nondirect product bases of the above structure if the functions phi_n^(l)(s) [and/or chi_m^(l)(t)] are discrete variable representation (DVR) functions of the infinite type. The optimal-generalized FBR(DVR) method [V. Szalay, J. Chem. Phys. 105, 6940 (1996)] is designed to deal with general, i.e., direct and/or nondirect product, bases and grids. This robust method, however, is too general, and its direct application can result in inefficient computer codes [Czakó et al., J. Chem. Phys. 122, 024101 (2005)]. It is shown here how the optimal-generalized FBR method can be simplified in the case of nondirect product bases of structures phi_n^(l)(s) f_l(u), chi_m^(l)(t) phi_n^(l)(s) f_l(u), etc. As a result, the commonly used symmetric FBR is recovered, and simplified nonsymmetric FBRs utilizing very special l-dependent grids are obtained. The nonsymmetric FBRs are more general than the symmetric FBR in that they can be employed efficiently even when the functions phi_n^(l)(s) [and/or chi_m^(l)(t)] are DVR functions of the infinite type. Arithmetic operation counts and a simple numerical example show unambiguously that setting up the Hamiltonian matrix requires significantly less computer time when using one of the proposed nonsymmetric FBRs than in the symmetric FBR. Therefore, application of a nonsymmetric FBR is more efficient than that of the symmetric FBR when one wants to diagonalize the Hamiltonian matrix either directly or via a basis-set contraction method. An enormous decrease of computer time can be achieved, with respect to a direct application of the optimal-generalized FBR, by employing one of the simplified nonsymmetric FBRs, as is demonstrated in noniterative calculations of the low-lying vibrational energy levels of the H3+ molecular ion. The arithmetic operation counts of the Hamiltonian matrix-vector products and the properties of a recently developed diagonalization method [Andreozzi et al., J. Phys. A: Math. Gen. 35, L61 (2002)] suggest that the nonsymmetric FBR applied along with this particular diagonalization method is suitable for large-scale iterative calculations. Whether or not the nonsymmetric FBR is competitive with the symmetric FBR in large-scale iterative calculations still has to be investigated numerically.

  13. Suppressing molecular motions for enhanced room-temperature phosphorescence of metal-free organic materials

    PubMed Central

    Kwon, Min Sang; Yu, Youngchang; Coburn, Caleb; Phillips, Andrew W.; Chung, Kyeongwoon; Shanker, Apoorv; Jung, Jaehun; Kim, Gunho; Pipe, Kevin; Forrest, Stephen R.; Youk, Ji Ho; Gierschner, Johannes; Kim, Jinsang

    2015-01-01

    Metal-free organic phosphorescent materials are attractive alternatives to the predominantly used organometallic phosphors but are generally dimmer and are relatively rare, as, without heavy-metal atoms, spin–orbit coupling is less efficient and phosphorescence usually cannot compete with radiationless relaxation processes. Here we present a general design rule and a method to effectively reduce radiationless transitions and hence greatly enhance phosphorescence efficiency of metal-free organic materials in a variety of amorphous polymer matrices, based on the restriction of molecular motions in the proximity of embedded phosphors. Covalent cross-linking between phosphors and polymer matrices via Diels–Alder click chemistry is devised as a method. A sharp increase in phosphorescence quantum efficiency is observed in a variety of polymer matrices with this method, which is ca. two to five times higher than that of phosphor-doped polymer systems having no such covalent linkage. PMID:26626796

  14. Development of a time-dependent incompressible Navier-Stokes solver based on a fractional-step method

    NASA Technical Reports Server (NTRS)

    Rosenfeld, Moshe

    1990-01-01

    The main goals are the development, validation, and application of a fractional step solution method of the time-dependent incompressible Navier-Stokes equations in generalized coordinate systems. A solution method that combines a finite volume discretization with a novel choice of the dependent variables and a fractional step splitting to obtain accurate solutions in arbitrary geometries is extended to include more general situations, including cases with moving grids. The numerical techniques are enhanced to gain efficiency and generality.

  15. General Formalism of Mass Scaling Approach for Replica-Exchange Molecular Dynamics and its Application

    NASA Astrophysics Data System (ADS)

    Nagai, Tetsuro

    2017-01-01

    Replica-exchange molecular dynamics (REMD) has demonstrated its efficiency by combining trajectories of a wide range of temperatures. As an extension of the method, the author formalizes the mass-manipulating replica-exchange molecular dynamics (MMREMD) method that allows for arbitrary mass scaling with respect to temperature and individual particles. The formalism enables the versatile application of mass-scaling approaches to the REMD method. The key change introduced in the novel formalism is the generalized rules for the velocity and momentum scaling after accepted replica-exchange attempts. As an application of this general formalism, the refinement of the viscosity-REMD (V-REMD) method [P. H. Nguyen, J. Chem. Phys. 132, 144109 (2010)] is presented. Numerical results are provided using a pilot system, demonstrating easier and more optimized applicability of the new version of V-REMD as well as the importance of adherence to the generalized velocity scaling rules. With the new formalism, more sound and efficient simulations will be performed.

  16. General method for simultaneous optimization of light trapping and carrier collection in an ultra-thin film organic photovoltaic cell

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsai, Cheng-Chia, E-mail: ct2443@columbia.edu; Grote, Richard R.; Beck, Jonathan H.

    2014-07-14

    We describe a general method for maximizing the short-circuit current in thin planar organic photovoltaic (OPV) heterojunction cells by simultaneous optimization of light absorption and carrier collection. Based on the experimentally obtained complex refractive indices of the OPV materials and the thickness dependence of the internal quantum efficiency of the OPV active layer, we analyze the potential benefits of light trapping strategies for maximizing the overall power conversion efficiency of the cell. This approach provides a general strategy for optimizing the power conversion efficiency of a wide range of OPV structures. In particular, as an experimental trial system, the approach is applied here to an ultra-thin-film solar cell with a SubPc/C60 photovoltaic structure. Using a patterned indium tin oxide (ITO) top contact, the numerically optimized designs achieve short-circuit currents of 0.790 and 0.980 mA/cm² for 30 nm and 45 nm SubPc/C60 heterojunction layer thicknesses, respectively. These values correspond to a power conversion efficiency enhancement of 78% for the 30 nm thick cell, but only of 32% for the 45 nm thick cell, for which the overall photocurrent is actually higher. Applied to other material systems, the general optimization method can elucidate whether light trapping strategies can improve a given cell architecture.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ortiz-Ramírez, Pablo, E-mail: rapeitor@ug.uchile.cl; Ruiz, Andrés

    Monte Carlo simulation of gamma spectroscopy systems is common practice these days, the most popular software packages being the MCNP and Geant4 codes. The intrinsic spatial efficiency method is a general and absolute method to determine the absolute efficiency of a spectroscopy system for any extended source, but it had only been demonstrated experimentally for cylindrical sources. Because preparing sources of arbitrary shape is difficult, the simplest way to do this is by simulating the spectroscopy system and the source. In this work we present the validation of the intrinsic spatial efficiency method for sources with different geometries and for photons with an energy of 661.65 keV. In the simulation the matrix effects (the self-attenuation effect) are not considered, so these results are only preliminary. The MC simulation is carried out using the FLUKA code, and the absolute efficiency of the detector is determined using two methods: the statistical count of the Full Energy Peak (FEP) area (the traditional method) and the intrinsic spatial efficiency method. The obtained results show total agreement between the absolute efficiencies determined by the traditional method and the intrinsic spatial efficiency method, with a relative bias of less than 1% in all cases.

  18. An applicable method for efficiency estimation of operating tray distillation columns and its comparison with the methods utilized in HYSYS and Aspen Plus

    NASA Astrophysics Data System (ADS)

    Sadeghifar, Hamidreza

    2015-10-01

    Developing general methods that rely on column data for the efficiency estimation of operating (existing) distillation columns has been overlooked in the literature. Most of the available methods are based on empirical mass transfer and hydraulic relations correlated to laboratory data, and may therefore not be sufficiently accurate when applied to industrial columns. In this paper, an applicable and accurate method was developed for the efficiency estimation of distillation columns filled with trays. This method can calculate efficiency as well as mass and heat transfer coefficients without using any empirical mass transfer or hydraulic correlations and without the need to estimate operational or hydraulic parameters of the column. For example, the method does not need to estimate the tray interfacial area, which may be its most important advantage over all the available methods. The method can be used for efficiency prediction of any tray in a distillation column. For the efficiency calculation, the method employs the column data and uses the true rates of the mass and heat transfers occurring inside the operating column. It must be emphasized that estimating the efficiency of an operating column is to be distinguished from that of a column being designed.

  19. A parallel overset-curvilinear-immersed boundary framework for simulating complex 3D incompressible flows

    PubMed Central

    Borazjani, Iman; Ge, Liang; Le, Trung; Sotiropoulos, Fotis

    2013-01-01

    We develop an overset-curvilinear immersed boundary (overset-CURVIB) method in a general non-inertial frame of reference to simulate a wide range of challenging biological flow problems. The method incorporates overset-curvilinear grids to efficiently handle multi-connected geometries and increase the resolution locally near immersed boundaries. Complex bodies undergoing arbitrarily large deformations may be embedded within the overset-curvilinear background grid and treated as sharp interfaces using the curvilinear immersed boundary (CURVIB) method (Ge and Sotiropoulos, Journal of Computational Physics, 2007). The incompressible flow equations are formulated in a general non-inertial frame of reference to enhance the overall versatility and efficiency of the numerical approach. Efficient search algorithms to identify areas requiring blanking, donor cells, and interpolation coefficients for constructing the boundary conditions at grid interfaces of the overset grid are developed and implemented using efficient parallel computing communication strategies to transfer information among sub-domains. The governing equations are discretized using a second-order accurate finite-volume approach and integrated in time via an efficient fractional-step method. Various strategies for ensuring globally conservative interpolation at grid interfaces suitable for incompressible flow fractional step methods are implemented and evaluated. The method is verified and validated against experimental data, and its capabilities are demonstrated by simulating the flow past multiple aquatic swimmers and the systolic flow in an anatomic left ventricle with a mechanical heart valve implanted in the aortic position. PMID:23833331

  20. Character recognition from trajectory by recurrent spiking neural networks.

    PubMed

    Jiangrong Shen; Kang Lin; Yueming Wang; Gang Pan

    2017-07-01

    Spiking neural networks are biologically plausible and power-efficient on neuromorphic hardware, while recurrent neural networks have been proven to be efficient on time series data. However, how to use the recurrent property to improve the performance of spiking neural networks is still an open problem. This paper proposes a recurrent spiking neural network for character recognition using trajectories. In the network, a new encoding method is designed, in which varying time ranges of input streams are used in different recurrent layers. This improves the generalization ability of our model compared with general encoding methods. The experiments are conducted on four groups of the character data set from the University of Edinburgh. The results show that our method can achieve a higher average recognition accuracy than existing methods.

  1. Vectorized Jiles-Atherton hysteresis model

    NASA Astrophysics Data System (ADS)

    Szymański, Grzegorz; Waszak, Michał

    2004-01-01

    This paper deals with vector hysteresis modeling. A vector model consisting of individual Jiles-Atherton components placed along the principal axes is proposed. Cross-axis coupling ensures general vector model properties. Minor loops are obtained using a scaling method. The model is intended for efficient finite element method computations defined in terms of the magnetic vector potential. Numerical efficiency is ensured by a differential susceptibility approach.
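
    One axis of such a vector model is the scalar Jiles-Atherton equation. The sketch below integrates the common M_irr formulation with explicit Euler over an H sweep, using dimensionless made-up parameters; the paper's cross-axis coupling and finite element embedding are not shown:

      import numpy as np

      Ms, a, alpha, k, c = 1.0, 0.02, 1e-3, 0.02, 0.1   # toy J-A parameters

      def langevin(x):
          return x / 3.0 if abs(x) < 1e-4 else 1.0 / np.tanh(x) - 1.0 / x

      def ja_sweep(H_path):
          """Euler-integrate dM_irr/dH; M = (1-c)*M_irr + c*M_an."""
          M_irr, M, out = 0.0, 0.0, []
          for H0, H1 in zip(H_path[:-1], H_path[1:]):
              dH = H1 - H0
              delta = np.sign(dH) if dH != 0 else 1.0   # sweep direction
              M_an = Ms * langevin((H1 + alpha * M) / a)
              dMirr_dH = (M_an - M_irr) / (delta * k - alpha * (M_an - M_irr))
              M_irr += dMirr_dH * dH
              M = (1.0 - c) * M_irr + c * M_an
              out.append(M)
          return np.array(out)

      H = np.r_[np.linspace(0, 0.15, 500),
                np.linspace(0.15, -0.15, 1000),
                np.linspace(-0.15, 0.15, 1000)]
      M = ja_sweep(H)            # traces out the major hysteresis loop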

  2. Self-Learning Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Liu, Junwei; Qi, Yang; Meng, Zi Yang; Fu, Liang

    Monte Carlo simulation is an unbiased numerical tool for studying classical and quantum many-body systems. One of its bottlenecks is the lack of a general and efficient update algorithm for large size systems close to a phase transition or with strong frustration, for which local updates perform badly. In this work, we propose a new general-purpose Monte Carlo method, dubbed self-learning Monte Carlo (SLMC), in which an efficient update algorithm is first learned from the training data generated in trial simulations and then used to speed up the actual simulation. We demonstrate the efficiency of SLMC in a spin model at the phase transition point, achieving a 10-20 times speedup. This work is supported by the DOE Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under Award DE-SC0010526.

  3. Transition probabilities for general birth-death processes with applications in ecology, genetics, and evolution

    PubMed Central

    Crawford, Forrest W.; Suchard, Marc A.

    2011-01-01

    A birth-death process is a continuous-time Markov chain that counts the number of particles in a system over time. In the general process with n current particles, a new particle is born with instantaneous rate λn and a particle dies with instantaneous rate μn. Currently no robust and efficient method exists to evaluate the finite-time transition probabilities in a general birth-death process with arbitrary birth and death rates. In this paper, we first revisit the theory of continued fractions to obtain expressions for the Laplace transforms of these transition probabilities and make explicit an important derivation connecting transition probabilities and continued fractions. We then develop an efficient algorithm for computing these probabilities that analyzes the error associated with approximations in the method. We demonstrate that this error-controlled method agrees with known solutions and outperforms previous approaches to computing these probabilities. Finally, we apply our novel method to several important problems in ecology, evolution, and genetics. PMID:21984359
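
    For intuition about what the continued-fraction machinery delivers, the transition probabilities of a truncated birth-death chain can be brute-forced as the matrix exponential of its tridiagonal generator (the truncation at N is exactly the limitation the paper's method avoids; the rates below are illustrative):

      import numpy as np
      from scipy.linalg import expm

      def bd_transition_matrix(lam, mu, t, N):
          """P(t) = expm(Q t) for a birth-death chain truncated at N."""
          Q = np.zeros((N + 1, N + 1))
          for n in range(N + 1):
              if n < N:
                  Q[n, n + 1] = lam(n)       # birth rate in state n
              if n > 0:
                  Q[n, n - 1] = mu(n)        # death rate in state n
              Q[n, n] = -Q[n].sum()
          return expm(Q * t)

      # linear (Kendall) birth-death process: lambda_n = b*n, mu_n = d*n
      b, d = 0.5, 0.3
      P = bd_transition_matrix(lambda n: b * n, lambda n: d * n, t=1.0, N=200)
      print(P[10, :5])    # P(X_t = j | X_0 = 10) for j = 0..4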

  4. Efficient Robust Regression via Two-Stage Generalized Empirical Likelihood

    PubMed Central

    Bondell, Howard D.; Stefanski, Leonard A.

    2013-01-01

    Large- and finite-sample efficiency and resistance to outliers are the key goals of robust statistics. Although often not simultaneously attainable, we develop and study a linear regression estimator that comes close. Efficiency obtains from the estimator’s close connection to generalized empirical likelihood, and its favorable robustness properties are obtained by constraining the associated sum of (weighted) squared residuals. We prove maximum attainable finite-sample replacement breakdown point, and full asymptotic efficiency for normal errors. Simulation evidence shows that compared to existing robust regression estimators, the new estimator has relatively high efficiency for small sample sizes, and comparable outlier resistance. The estimator is further illustrated and compared to existing methods via application to a real data set with purported outliers. PMID:23976805

  5. [Influence of contractual medical association on inpatient service performance].

    PubMed

    Wu, Zhi-jun; Jian, Wei-yan

    2015-06-18

    To study the influence of contractual medical associations on inpatient service performance. The data came from the "Database of Inpatient Records" administered by the Department of Medical Insurance. Using diagnosis related groups (DRG) as the risk-adjustment tool, the third-tier and second-tier general hospitals in the medical alliance as the intervention group, and the average level of local hospitals of the same grade as the control group, the influence of the medical alliance on inpatient service performance was evaluated. The difference-in-differences (DID) method was used for the data analysis. The assessed indicators included the number of DRG groups, the case mix index (CMI), the total weight, the cost efficiency index and the time efficiency index. After the establishment of the medical association, compared with the average level of local hospitals of the same grade, the growth rate of the total weight declined and the cost efficiency index increased in the third-tier general hospitals of the medical alliance, while in the second-tier general hospitals the CMI value declined and the cost efficiency index increased. The contractual medical association played a role in triaging patients, and improved the service levels and management efficiency of the second-tier general hospitals.

  6. Managing for efficiency in health care: the case of Greek public hospitals.

    PubMed

    Mitropoulos, Panagiotis; Mitropoulos, Ioannis; Sissouras, Aris

    2013-12-01

    This paper evaluates the efficiency of public hospitals with two alternative conceptual models. One model targets resource usage directly to assess production efficiency, while the other model incorporates financial results to assess economic efficiency. Performance analysis of these models was conducted in two stages. In stage one, we utilized data envelopment analysis to obtain the efficiency score of each hospital, while in stage two we took into account the influence of the operational environment on efficiency by regressing those scores on explanatory variables that concern the performance of hospital services. We applied these methods to evaluate 96 general hospitals in the Greek national health system. The results indicate that, although the average efficiency scores in both models have remained relatively stable compared to past assessments, internal changes in hospital performances do exist. This study provides a clear framework for policy implications to increase the overall efficiency of general hospitals.

  7. General image method in a plane-layered elastostatic medium

    NASA Technical Reports Server (NTRS)

    Fares, N.; Li, V. C.

    1988-01-01

    The general-image method presently used to obtain the elastostatic fields in plane-layered media relies on the use of potentials to represent elastic fields. For the case of a single interface, this method yields the displacement field in closed form, and is applicable to antiplane, plane, and three-dimensional problems. In the case of multiple interfaces, the image method generates the displacement fields in terms of infinite series whose convergence can be accelerated to improve the method's efficiency.

  8. Development of a time-dependent incompressible Navier-Stokes solver based on a fractional-step method

    NASA Technical Reports Server (NTRS)

    Rosenfeld, Moshe

    1990-01-01

    The development, validation and application of a fractional step solution method of the time-dependent incompressible Navier-Stokes equations in generalized coordinate systems are discussed. A solution method that combines a finite-volume discretization with a novel choice of the dependent variables and a fractional step splitting to obtain accurate solutions in arbitrary geometries was previously developed for fixed-grids. In the present research effort, this solution method is extended to include more general situations, including cases with moving grids. The numerical techniques are enhanced to gain efficiency and generality.

  9. Generalized Path Analysis and Generalized Simultaneous Equations Model for Recursive Systems with Responses of Mixed Types

    ERIC Educational Resources Information Center

    Tsai, Tien-Lung; Shau, Wen-Yi; Hu, Fu-Chang

    2006-01-01

    This article generalizes linear path analysis (PA) and simultaneous equations models (SiEM) to deal with mixed responses of different types in a recursive or triangular system. An efficient instrumental variable (IV) method for estimating the structural coefficients of a 2-equation partially recursive generalized path analysis (GPA) model and…

  10. Efficient calculation of general Voigt profiles

    NASA Astrophysics Data System (ADS)

    Cope, D.; Khoury, R.; Lovett, R. J.

    1988-02-01

    An accurate and efficient program is presented for the computation of OIL profiles, generalizations of the Voigt profile resulting from the one-interacting-level model of Ward et al. (1974). These profiles have speed dependent shift and width functions and have asymmetric shapes. The program contains an adjustable error control parameter and includes the Voigt profile as a special case, although the general nature of this program renders it slower than a specialized Voigt profile method. Results on accuracy and computation time are presented for a broad set of test parameters, and a comparison is made with previous work on the asymptotic behavior of general Voigt profiles.
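
    The OIL program itself is not reproduced here, but its special case, the ordinary Voigt profile, is routinely evaluated via the Faddeeva function w(z). A minimal sketch using SciPy (the speed-dependent shift and width functions of the full OIL profile are omitted):

      import numpy as np
      from scipy.special import wofz  # Faddeeva function w(z)

      def voigt(x, sigma, gamma):
          """Voigt profile: Gaussian (std sigma) convolved with a Lorentzian
          (HWHM gamma), via V(x) = Re[w(z)] / (sigma * sqrt(2*pi))."""
          z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
          return np.real(wofz(z)) / (sigma * np.sqrt(2.0 * np.pi))

      x = np.linspace(-5.0, 5.0, 11)
      print(voigt(x, sigma=1.0, gamma=0.5))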

  11. Research on interpolation methods in medical image processing.

    PubMed

    Pan, Mei-Sen; Yang, Xiao-Li; Tang, Jing-Tian

    2012-04-01

    Image interpolation is widely used in the field of medical image processing. In this paper, interpolation methods are divided into three groups: filter interpolation, ordinary interpolation and general partial volume interpolation. Some commonly used filter methods for image interpolation are introduced, but their interpolation effects need to be further improved. In analyzing and discussing ordinary interpolation, many asymmetrical kernel interpolation methods are proposed; compared with symmetrical kernel ones, the former have some advantages. After analyzing the partial volume and generalized partial volume estimation interpolations, the new concept and constraint conditions of general partial volume interpolation are defined, and several new partial volume interpolation functions are derived. Through experiments on image scaling, rotation and self-registration, the interpolation methods discussed in this paper are compared in terms of entropy, peak signal-to-noise ratio, cross entropy, normalized cross-correlation coefficient and running time. Among the filter interpolation methods, the median and B-spline filter interpolations have relatively good interpolating performance. Among the ordinary interpolation methods, on the whole, the symmetrical cubic kernel interpolations demonstrate a strong advantage, especially the symmetrical cubic B-spline interpolation; however, they are very time-consuming and have low time efficiency. As for the general partial volume interpolation methods, in terms of the total error of image self-registration the symmetrical interpolations provide some superiority, but considering processing efficiency the asymmetrical interpolations are better.
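
    The rotation experiment can be outlined with SciPy's spline interpolators. A rough sketch assuming a synthetic test image and PSNR as the quality measure (not the authors' code):

      import numpy as np
      from scipy import ndimage

      rng = np.random.default_rng(0)
      img = ndimage.gaussian_filter(rng.random((64, 64)), 2.0)  # smooth test image

      def psnr(a, b):
          mse = np.mean((a - b) ** 2)
          return 10.0 * np.log10(a.max() ** 2 / mse)

      # Rotate forward and back; the residual error reflects interpolation quality.
      for order, name in [(1, "linear"), (3, "cubic B-spline")]:
          back = ndimage.rotate(ndimage.rotate(img, 10.0, reshape=False, order=order),
                                -10.0, reshape=False, order=order)
          core = (slice(16, 48), slice(16, 48))  # ignore boundary artifacts
          print(f"{name}: PSNR = {psnr(img[core], back[core]):.2f} dB")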

  12. Tild-CRISPR Allows for Efficient and Precise Gene Knockin in Mouse and Human Cells.

    PubMed

    Yao, Xuan; Zhang, Meiling; Wang, Xing; Ying, Wenqin; Hu, Xinde; Dai, Pengfei; Meng, Feilong; Shi, Linyu; Sun, Yun; Yao, Ning; Zhong, Wanxia; Li, Yun; Wu, Keliang; Li, Weiping; Chen, Zi-Jiang; Yang, Hui

    2018-05-21

    The targeting efficiency of knockin sequences via homologous recombination (HR) is generally low. Here we describe a method we call Tild-CRISPR (targeted integration with linearized dsDNA-CRISPR), a targeting strategy in which a PCR-amplified or precisely enzyme-cut transgene donor with 800-bp homology arms is injected with Cas9 mRNA and single guide RNA into mouse zygotes. Compared with existing targeting strategies, this method achieved much higher knockin efficiency in mouse embryos, as well as brain tissue. Importantly, the Tild-CRISPR method also yielded up to 12-fold higher knockin efficiency than HR-based methods in human embryos, making it suitable for studying gene functions in vivo and developing potential gene therapies. Copyright © 2018 Elsevier Inc. All rights reserved.

  13. Applicability of the polynomial chaos expansion method for personalization of a cardiovascular pulse wave propagation model.

    PubMed

    Huberts, W; Donders, W P; Delhaas, T; van de Vosse, F N

    2014-12-01

    Patient-specific modeling requires model personalization, which can be achieved in an efficient manner by parameter fixing and parameter prioritization. An efficient variance-based method uses generalized polynomial chaos expansion (gPCE), but it has not been applied in the context of model personalization, nor has it ever been compared with standard variance-based methods for models with many parameters. In this work, we apply the gPCE method to a previously reported pulse wave propagation model and compare the conclusions for model personalization with those of a reference analysis performed with Saltelli's efficient Monte Carlo method. We furthermore differentiate two approaches for obtaining the expansion coefficients: one based on spectral projection (gPCE-P) and one based on least squares regression (gPCE-R). It was found that in general the gPCE yields similar conclusions as the reference analysis but at much lower cost, as long as the polynomial metamodel does not contain unnecessary high-order terms. Furthermore, the gPCE-R approach generally yielded better results than gPCE-P. The weak performance of gPCE-P can be attributed to the assessment of the expansion coefficients using the Smolyak algorithm, which might be hampered by the high number of model parameters and/or by possible non-smoothness in the output space. Copyright © 2014 John Wiley & Sons, Ltd.
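
    A minimal one-parameter sketch of the regression variant (gPCE-R) with a probabilists' Hermite basis; the model function below is an invented stand-in for the pulse wave propagation model:

      import math
      import numpy as np
      from numpy.polynomial import hermite_e as He  # probabilists' Hermite

      def model(xi):                   # stand-in for the expensive simulator
          return np.exp(0.3 * xi) + 0.1 * xi**2

      rng = np.random.default_rng(1)
      xi = rng.standard_normal(200)    # samples of the standard-normal input
      y = model(xi)

      P = 6                            # expansion order
      V = He.hermevander(xi, P)        # columns He_k(xi), k = 0..P
      coef, *_ = np.linalg.lstsq(V, y, rcond=None)

      # E[He_k^2] = k! under a standard normal, so the metamodel variance is
      var_pce = sum(coef[k]**2 * math.factorial(k) for k in range(1, P + 1))
      print("PCE mean:", coef[0], " PCE variance:", var_pce, " MC variance:", y.var())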

  14. An efficient and general numerical method to compute steady uniform vortices

    NASA Astrophysics Data System (ADS)

    Luzzatto-Fegiz, Paolo; Williamson, Charles H. K.

    2011-07-01

    Steady uniform vortices are widely used to represent high Reynolds number flows, yet their efficient computation still presents some challenges. Existing Newton iteration methods become inefficient as the vortices develop fine-scale features; in addition, these methods cannot, in general, find solutions with specified Casimir invariants. On the other hand, available relaxation approaches are computationally inexpensive, but can fail to converge to a solution. In this paper, we overcome these limitations by introducing a new discretization, based on an inverse-velocity map, which radically increases the efficiency of Newton iteration methods. In addition, we introduce a procedure to prescribe Casimirs and remove the degeneracies in the steady vorticity equation, thus ensuring convergence for general vortex configurations. We illustrate our methodology by considering several unbounded flows involving one or two vortices. Our method enables the computation, for the first time, of steady vortices that do not exhibit any geometric symmetry. In addition, we discover that, as the limiting vortex state for each flow is approached, each family of solutions traces a clockwise spiral in a bifurcation plot consisting of a velocity-impulse diagram. By the recently introduced "IVI diagram" stability approach [Phys. Rev. Lett. 104 (2010) 044504], each turn of this spiral is associated with a loss of stability for the steady flows. Such spiral structure is suggested to be a universal feature of steady, uniform-vorticity flows.

  15. Efficient parameter estimation in longitudinal data analysis using a hybrid GEE method.

    PubMed

    Leung, Denis H Y; Wang, You-Gan; Zhu, Min

    2009-07-01

    The method of generalized estimating equations (GEEs) provides consistent estimates of the regression parameters in a marginal regression model for longitudinal data, even when the working correlation model is misspecified (Liang and Zeger, 1986). However, the efficiency of a GEE estimate can be seriously affected by the choice of the working correlation model. This study addresses this problem by proposing a hybrid method that combines multiple GEEs based on different working correlation models, using the empirical likelihood method (Qin and Lawless, 1994). Analyses show that this hybrid method is more efficient than a GEE using a misspecified working correlation model. Furthermore, if one of the working correlation structures correctly models the within-subject correlations, then this hybrid method provides the most efficient parameter estimates. In simulations, the hybrid method's finite-sample performance is superior to a GEE under any of the commonly used working correlation models and is almost fully efficient in all scenarios studied. The hybrid method is illustrated using data from a longitudinal study of the respiratory infection rates in 275 Indonesian children.
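
    The ingredient GEE fits under competing working correlation models are available in statsmodels. A minimal sketch on simulated clustered data (the empirical-likelihood combination step of the hybrid method is not part of the library and is not shown):

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      rng = np.random.default_rng(2)
      n_sub, n_rep = 100, 4
      ids = np.repeat(np.arange(n_sub), n_rep)
      x = rng.standard_normal(n_sub * n_rep)
      u = np.repeat(rng.standard_normal(n_sub), n_rep)  # within-subject correlation
      y = 1.0 + 0.5 * x + u + rng.standard_normal(n_sub * n_rep)
      df = pd.DataFrame({"y": y, "x": x, "id": ids})

      # The efficiency loss from a poor working correlation shows up in the SEs.
      for cov in [sm.cov_struct.Independence(), sm.cov_struct.Exchangeable()]:
          res = sm.GEE.from_formula("y ~ x", groups="id", data=df,
                                    cov_struct=cov).fit()
          print(type(cov).__name__, float(res.params["x"]), float(res.bse["x"]))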

  16. A Hybrid Generalized Hidden Markov Model-Based Condition Monitoring Approach for Rolling Bearings

    PubMed Central

    Liu, Jie; Hu, Youmin; Wu, Bo; Wang, Yan; Xie, Fengyun

    2017-01-01

    The operating condition of rolling bearings affects productivity and quality in the rotating machine process. Developing an effective rolling bearing condition monitoring approach is critical to accurately identify the operating condition. In this paper, a hybrid generalized hidden Markov model-based condition monitoring approach for rolling bearings is proposed, in which interval-valued features are used to efficiently recognize and classify machine states in the machine process. In the proposed method, vibration signals are decomposed into multiple modes with variational mode decomposition (VMD). Parameters of the VMD, in the form of generalized intervals, provide a concise representation of aleatory and epistemic uncertainty and improve the robustness of identification. The multi-scale permutation entropy method is applied to extract state features from the decomposed signals in different operating conditions. Traditional principal component analysis is adopted to reduce feature size and computational cost. With the extracted feature information, the generalized hidden Markov model, based on generalized interval probability, is used to recognize and classify the fault types and fault severity levels. Finally, the experimental results show that the proposed method is effective at recognizing and classifying the fault types and fault severity levels of rolling bearings. This monitoring method is also efficient enough to quantify the two uncertainty components. PMID:28524088
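
    The ordinal-pattern feature at the core of multi-scale permutation entropy is easy to sketch at a single scale (Bandt-Pompe permutation entropy; the VMD, PCA and hidden Markov stages of the full pipeline are not shown):

      import math
      import numpy as np

      def permutation_entropy(x, order=3, delay=1):
          """Normalized permutation entropy of a 1-D signal."""
          counts = {}
          for i in range(len(x) - (order - 1) * delay):
              pattern = tuple(np.argsort(x[i:i + order * delay:delay]))
              counts[pattern] = counts.get(pattern, 0) + 1
          p = np.array(list(counts.values()), dtype=float)
          p /= p.sum()
          return -np.sum(p * np.log(p)) / math.log(math.factorial(order))

      rng = np.random.default_rng(7)
      t = np.linspace(0.0, 10.0, 1000)
      print("sine :", permutation_entropy(np.sin(2 * np.pi * t)))     # low complexity
      print("noise:", permutation_entropy(rng.standard_normal(1000)))  # close to 1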

  17. Multilayer Extreme Learning Machine With Subnetwork Nodes for Representation Learning.

    PubMed

    Yang, Yimin; Wu, Q M Jonathan

    2016-11-01

    The extreme learning machine (ELM), which was originally proposed for "generalized" single-hidden-layer feedforward neural networks, provides efficient unified learning solutions for the applications of clustering, regression, and classification. It presents competitive accuracy with superb efficiency in many applications. However, the ELM with a subnetwork-nodes architecture has not attracted much research attention. Recently, many methods have been proposed for supervised/unsupervised dimension reduction or representation learning, but these methods normally only work for one type of problem. This paper studies the general architecture of the multilayer ELM (ML-ELM) with subnetwork nodes, showing that: 1) the proposed method provides a representation learning platform with unsupervised/supervised and compressed/sparse representation learning and 2) experimental results on ten image datasets and 16 classification datasets show that, compared to other conventional feature learning methods, the proposed ML-ELM with subnetwork nodes performs competitively or much better.
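
    The basic ELM building block is a random, untrained hidden layer followed by a linear least-squares readout. A minimal regression sketch (the paper's subnetwork-node and multilayer architecture is not shown):

      import numpy as np

      rng = np.random.default_rng(3)
      X = rng.standard_normal((500, 5))
      y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(500)

      L = 200                             # hidden nodes
      W = rng.standard_normal((5, L))     # random input weights, never trained
      b = rng.standard_normal(L)
      H = np.tanh(X @ W + b)              # hidden-layer output matrix

      lam = 1e-3                          # small ridge term for stability
      beta = np.linalg.solve(H.T @ H + lam * np.eye(L), H.T @ y)
      print("train RMSE:", np.sqrt(np.mean((H @ beta - y) ** 2)))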

  18. Approximation concepts for efficient structural synthesis

    NASA Technical Reports Server (NTRS)

    Schmit, L. A., Jr.; Miura, H.

    1976-01-01

    It is shown that efficient structural synthesis capabilities can be created by using approximation concepts to mesh finite element structural analysis methods with nonlinear mathematical programming techniques. The history of the application of mathematical programming techniques to structural design optimization problems is reviewed. Several rather general approximation concepts are described along with the technical foundations of the ACCESS 1 computer program, which implements several approximation concepts. A substantial collection of structural design problems involving truss and idealized wing structures is presented. It is concluded that since the basic ideas employed in creating the ACCESS 1 program are rather general, its successful development supports the contention that the introduction of approximation concepts will lead to the emergence of a new generation of practical and efficient, large scale, structural synthesis capabilities in which finite element analysis methods and mathematical programming algorithms will play a central role.

  19. Time-dependent importance sampling in semiclassical initial value representation calculations for time correlation functions.

    PubMed

    Tao, Guohua; Miller, William H

    2011-07-14

    An efficient time-dependent importance sampling method is developed for the Monte Carlo calculation of time correlation functions via the initial value representation (IVR) of semiclassical (SC) theory. A prefactor-free time-dependent sampling function weights the importance of a trajectory based on the magnitude of its contribution to the time correlation function, and global trial moves are used to facilitate efficient sampling of the phase space of initial conditions. The method can generally be applied to sampling rare events efficiently while avoiding becoming trapped in a local region of phase space. Results presented in the paper for two system-bath models demonstrate the efficiency of this new importance sampling method for full SC-IVR calculations.

  20. An Efficient Numerical Method for Computing Synthetic Seismograms for a Layered Half-space with Sources and Receivers at Close or Same Depths

    NASA Astrophysics Data System (ADS)

    Zhang, H.-m.; Chen, X.-f.; Chang, S.

    - It is difficult to compute synthetic seismograms for a layered half-space with sources and receivers at close or equal depths using the generalized R/T coefficient method (Kennett, 1983; Luco and Apsel, 1983; Yao and Harkrider, 1983; Chen, 1993), because the wavenumber integration converges very slowly. A semi-analytic method for accelerating the convergence, in which part of the integration is implemented analytically, was adopted by some authors (Apsel and Luco, 1983; Hisada, 1994, 1995). In this study, based on the principle of the Repeated Averaging Method (Dahlquist and Björck, 1974; Chang, 1988), we propose an alternative, efficient, numerical method, the peak-trough averaging method (PTAM), to overcome the difficulty mentioned above. Compared with the semi-analytic method, PTAM is not only much simpler mathematically and easier to implement in practice, but also more efficient. Using numerical examples, we illustrate the validity, accuracy and efficiency of the new method.
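
    The averaging idea behind PTAM can be illustrated on a textbook slowly convergent oscillatory integral, the integral of sin(x)/x from 0 to infinity (= π/2): truncate at successive zeros of the integrand and repeatedly average the resulting partial sums (a toy analogue, not the seismogram wavenumber integral):

      import numpy as np
      from scipy.integrate import quad

      f = lambda x: np.sin(x) / x            # slowly decaying oscillatory integrand
      nodes = np.pi * np.arange(1, 12)       # successive zeros of the integrand
      pieces = [quad(f, a, b)[0]
                for a, b in zip(np.r_[1e-12, nodes[:-1]], nodes)]
      S = np.cumsum(pieces)                  # partial integrals: peaks and troughs

      T = S.copy()
      for _ in range(8):                     # repeated averaging of neighbours
          T = 0.5 * (T[:-1] + T[1:])
      print("last partial sum:", S[-1], " averaged:", T[-1], " exact:", np.pi / 2)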

  1. Dispersion of speckle suppression efficiency for binary DOE structures: spectral domain and coherent matrix approaches.

    PubMed

    Lapchuk, Anatoliy; Prygun, Olexandr; Fu, Minglei; Le, Zichun; Xiong, Qiyuan; Kryuchyn, Andriy

    2017-06-26

    We present the first general theoretical description of speckle suppression efficiency based on an active diffractive optical element (DOE). The approach is based on spectral analysis of diffracted beams and a coherent matrix. Analytical formulae are obtained for the dispersion of speckle suppression efficiency using different DOE structures and different DOE activation methods. We show that a one-sided 2D DOE structure has a smaller speckle suppression range than a two-sided 1D DOE structure. Both DOE structures have a sufficient speckle suppression range to suppress low-order speckles in the entire visible range, but only the two-sided 1D DOE can suppress higher-order speckles. We also show that a linearly shifted 2D DOE in a laser projector with a large numerical aperture has higher effective speckle suppression efficiency than methods using switching or step-wise shifted DOE structures. The generalized theoretical models elucidate the mechanism and practical realization of speckle suppression.

  2. High-frequency asymptotic methods for analyzing the EM scattering by open-ended waveguide cavities

    NASA Technical Reports Server (NTRS)

    Burkholder, R. J.; Pathak, P. H.

    1989-01-01

    Four high-frequency methods are described for analyzing the electromagnetic (EM) scattering by electrically large open-ended cavities. They are: (1) a hybrid combination of waveguide modal analysis and high-frequency asymptotics, (2) geometrical optics (GO) ray shooting, (3) Gaussian beam (GB) shooting, and (4) the generalized ray expansion (GRE) method. The hybrid modal method gives very accurate results but is limited to cavities which are made up of sections of uniform waveguides for which the modal fields are known. The GO ray shooting method can be applied to much more arbitrary cavity geometries and can handle absorber treated interior walls, but it generally only predicts the major trends of the RCS pattern and not the details. Also, a very large number of rays need to be tracked for each new incidence angle. Like the GO ray shooting method, the GB shooting method can handle more arbitrary cavities, but it is much more efficient and generally more accurate than the GO method because it includes the fields diffracted by the rim at the open end which enter the cavity. However, due to beam divergence effects the GB method is limited to cavities which are not very long compared to their width. The GRE method overcomes the length-to-width limitation of the GB method by replacing the GB's with GO ray tubes which are launched in the same manner as the GB's to include the interior rim diffracted field. This method gives good accuracy and is generally more efficient than the GO method, but a large number of ray tubes needs to be tracked.

  3. Enhancement of biocompatibility of 316LVM stainless steel by cyclic potentiodynamic passivation.

    PubMed

    Shahryari, Arash; Omanovic, Sasha; Szpunar, Jerzy A

    2009-06-15

    Passivation of stainless steel implants is a common procedure used to increase their biocompatibility. The results presented in this work demonstrate that the electrochemical cyclic potentiodynamic polarization (CPP) of a biomedical grade 316LVM stainless steel surface is a very efficient passivation method that can be used to significantly improve the material's general corrosion resistance and thus its biocompatibility. The influence of a range of experimental parameters on the passivation/corrosion protection efficiency is discussed. The passive film formed on a 316LVM surface by using the CPP method offers a significantly higher general corrosion resistance than the naturally grown passive film. The corresponding relative corrosion protection efficiency measured in saline during a 2-month period was 97% +/- 1%, which demonstrates a very high stability of the CPP-formed passive film. Its high corrosion protection efficiency was confirmed also at temperatures and chloride concentrations well above normal physiological levels. It was also shown that the CPP is a significantly more effective passivation method than some other surface-treatment methods commonly used to passivate biomedical grade stainless steels. In addition, the CPP-passivated 316LVM surface showed an enhanced biocompatibility in terms of preosteoblast (MC3T3) cells attachment. An increased thickness of the CPP-formed passive film and its enrichment with Cr(VI) and oxygen was determined to be the origin of the material's increased general corrosion resistance, whereas the increased surface roughness and surface (Volta) potential were suggested to be the origin of the enhanced preosteoblast cells attachment. Copyright 2008 Wiley Periodicals, Inc.

  4. Highly Efficient Production of Soluble Proteins from Insoluble Inclusion Bodies by a Two-Step-Denaturing and Refolding Method

    PubMed Central

    Zhang, Yan; Zhang, Ting; Feng, Yanye; Lu, Xiuxiu; Lan, Wenxian; Wang, Jufang; Wu, Houming; Cao, Chunyang; Wang, Xiaoning

    2011-01-01

    The production of recombinant proteins on a large scale is important for protein functional and structural studies, particularly using Escherichia coli over-expression systems; however, approximately 70% of recombinant proteins are over-expressed as insoluble inclusion bodies. Here we present an efficient method for generating soluble proteins from inclusion bodies by using two steps of denaturation and one step of refolding. We first demonstrated the advantages of this method over a conventional procedure with one denaturation step and one refolding step using three proteins with different folding properties. The refolded proteins were found to be active using in vitro tests and a bioassay. We then tested the general applicability of this method by analyzing 88 proteins from human and other organisms, all of which were expressed as inclusion bodies. We found that about 76% of these proteins were refolded with an average of >75% yield of soluble proteins. This “two-step-denaturing and refolding” (2DR) method is simple, highly efficient and generally applicable; it can be utilized to obtain active recombinant proteins for both basic research and industrial purposes. PMID:21829569

  5. The Improvement of Efficiency in the Numerical Computation of Orbit Trajectories

    NASA Technical Reports Server (NTRS)

    Dyer, J.; Danchick, R.; Pierce, S.; Haney, R.

    1972-01-01

    An analysis, system design, programming, and evaluation of results are described for numerical computation of orbit trajectories. Evaluation of generalized methods, interaction of different formulations for satellite motion, transformation of equations of motion and integrator loads, and development of efficient integrators are also considered.

  6. Efficient generalized cross-validation with applications to parametric image restoration and resolution enhancement.

    PubMed

    Nguyen, N; Milanfar, P; Golub, G

    2001-01-01

    In many image restoration/resolution enhancement applications, the blurring process, i.e., the point spread function (PSF) of the imaging system, is not known or is known only to within a set of parameters. We estimate these PSF parameters for this ill-posed class of inverse problems from raw data, along with the regularization parameters required to stabilize the solution, using the generalized cross-validation (GCV) method. We propose efficient approximation techniques based on the Lanczos algorithm and Gauss quadrature theory, reducing the computational complexity of the GCV. Data-driven PSF and regularization parameter estimation experiments with synthetic and real image sequences are presented to demonstrate the effectiveness and robustness of our method.
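
    For the Tikhonov/ridge special case, the GCV function can be evaluated from a single SVD. A minimal sketch of data-driven selection of the regularization parameter (the Lanczos/Gauss-quadrature acceleration proposed in the paper, which avoids the full SVD, is not shown):

      import numpy as np

      rng = np.random.default_rng(4)
      A = rng.standard_normal((100, 20))
      b = A @ rng.standard_normal(20) + 0.1 * rng.standard_normal(100)

      U, s, Vt = np.linalg.svd(A, full_matrices=False)
      c = U.T @ b
      n = len(b)

      def gcv(lam):
          h = s**2 / (s**2 + lam)                 # hat-matrix eigenvalues
          resid2 = np.sum(((1 - h) * c) ** 2) + (b @ b - c @ c)
          return n * resid2 / (n - h.sum()) ** 2

      lams = np.logspace(-6, 2, 50)
      print("GCV-optimal lambda:", lams[np.argmin([gcv(l) for l in lams])])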

  7. Novel transform for image description and compression with implementation by neural architectures

    NASA Astrophysics Data System (ADS)

    Ben-Arie, Jezekiel; Rao, Raghunath K.

    1991-10-01

    A general method is described for signal representation using nonorthogonal basis functions that are composed of Gaussians. The Gaussians can be combined into groups with a predetermined configuration that can approximate any desired basis function. The same configuration at different scales forms a set of self-similar wavelets. The general scheme is demonstrated by representing a natural signal with an arbitrary basis function. The basic methodology is demonstrated by two novel schemes for efficient representation of 1-D and 2-D signals using Gaussian basis functions (BFs). Special methods are required here since the Gaussian functions are nonorthogonal. The first method employs a paradigm of maximum energy reduction interlaced with the A* heuristic search. The second method uses an adaptive lattice system to find the minimum-squared-error projection of the BFs onto the signal, and a lateral-vertical suppression network to select the most efficient representation in terms of data compression.

  8. A POSTERIORI ERROR ANALYSIS OF TWO STAGE COMPUTATION METHODS WITH APPLICATION TO EFFICIENT DISCRETIZATION AND THE PARAREAL ALGORITHM.

    PubMed

    Chaudhry, Jehanzeb Hameed; Estep, Don; Tavener, Simon; Carey, Varis; Sandelin, Jeff

    2016-01-01

    We consider numerical methods for initial value problems that employ a two-stage approach consisting of solution on a relatively coarse discretization followed by solution on a relatively fine discretization. Examples include adaptive error control, parallel-in-time solution schemes, and efficient solution of adjoint problems for computing a posteriori error estimates. We describe a general formulation of two-stage computations and then perform a general a posteriori error analysis based on computable residuals and solution of an adjoint problem. The analysis accommodates numerous variations in the two-stage computation and in the formulation of the adjoint problems. We apply the analysis to compute "dual-weighted" a posteriori error estimates, to develop novel algorithms for efficient solution that take into account cancellation of error, and to the Parareal Algorithm. We test the various results using several numerical examples.

  9. Tag-KEM from Set Partial Domain One-Way Permutations

    NASA Astrophysics Data System (ADS)

    Abe, Masayuki; Cui, Yang; Imai, Hideki; Kurosawa, Kaoru

    Recently a framework called Tag-KEM/DEM was introduced to construct efficient hybrid encryption schemes. Although it is known that the generic encode-then-encrypt construction of chosen-ciphertext-secure public-key encryption also applies to secure Tag-KEM construction, and that known encoding methods like OAEP can be used for this purpose, it is worth pursuing more efficient encoding methods dedicated to Tag-KEM construction. This paper proposes an encoding method that yields efficient Tag-KEM schemes when combined with set partial domain one-way permutations such as RSA and Rabin's encryption scheme. To our knowledge, this leads to the most practical hybrid encryption scheme of this type. We also present an efficient Tag-KEM which is CCA-secure under the general factoring assumption rather than the Blum factoring assumption.

  10. Generalized source Finite Volume Method for radiative transfer equation in participating media

    NASA Astrophysics Data System (ADS)

    Zhang, Biao; Xu, Chuan-Long; Wang, Shi-Min

    2017-03-01

    Temperature monitoring is very important in a combustion system. In recent years, non-intrusive temperature reconstruction has been explored intensively on the basis of calculating arbitrary directional radiative intensities. In this paper, a new method named the Generalized Source Finite Volume Method (GSFVM) is proposed. It is based on the radiative transfer equation and the Finite Volume Method (FVM). This method can be used to calculate arbitrary directional radiative intensities and is proven to be accurate and efficient. To verify the performance of this method, six test cases of 1D, 2D, and 3D radiative transfer problems were investigated. The numerical results show that the efficiency of this method is close to that of the radial basis function interpolation method, but its accuracy and stability are higher than those of the interpolation method. The accuracy of the GSFVM is similar to that of the Backward Monte Carlo (BMC) algorithm, while the time required by the GSFVM is much shorter than that of the BMC algorithm. Therefore, the GSFVM can be used in temperature reconstruction and to improve the accuracy of the FVM.

  11. Efficient and robust relaxation procedures for multi-component mixtures including phase transition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, Ee, E-mail: eehan@math.uni-bremen.de; Hantke, Maren, E-mail: maren.hantke@ovgu.de; Müller, Siegfried, E-mail: mueller@igpm.rwth-aachen.de

    We consider a thermodynamically consistent multi-component model in multiple dimensions that is a generalization of the classical two-phase flow model of Baer and Nunziato. The exchange of mass, momentum and energy between the phases is described by additional source terms. Typically these terms are handled by relaxation procedures. Available relaxation procedures suffer from poor efficiency and robustness, resulting in very costly computations that in general only allow for one-dimensional simulations. We therefore focus on the development of new efficient and robust numerical methods for relaxation processes. We derive exact procedures to determine mechanical and thermal equilibrium states. Further, we introduce a novel iterative method to treat the mass transfer for a three-component mixture. All new procedures can be extended to an arbitrary number of inert ideal gases. We prove existence, uniqueness and physical admissibility of the resulting states and convergence of our new procedures. Efficiency and robustness of the procedures are verified by means of numerical computations in one and two space dimensions. Highlights: • We develop novel relaxation procedures for a generalized, thermodynamically consistent Baer–Nunziato type model. • Exact procedures for mechanical and thermal relaxation avoid artificial parameters. • Existence, uniqueness and physical admissibility of the equilibrium states are proven for special mixtures. • A novel iterative method for mass transfer is introduced for a three-component mixture, providing a unique and admissible equilibrium state.

  12. Sensitivity analysis and approximation methods for general eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Murthy, D. V.; Haftka, R. T.

    1986-01-01

    Optimization of dynamic systems involving complex non-hermitian matrices is often computationally expensive. Major contributors to the computational expense are the sensitivity analysis and reanalysis of a modified design. The present work seeks to alleviate this computational burden by identifying efficient sensitivity analysis and approximate reanalysis methods. For the algebraic eigenvalue problem involving non-hermitian matrices, algorithms for sensitivity analysis and approximate reanalysis are classified, compared and evaluated for efficiency and accuracy. Proper eigenvector normalization is discussed. An improved method for calculating derivatives of eigenvectors is proposed based on a more rational normalization condition and taking advantage of matrix sparsity. Important numerical aspects of this method are also discussed. To alleviate the problem of reanalysis, various approximation methods for eigenvalues are proposed and evaluated. Linear and quadratic approximations are based directly on the Taylor series. Several approximation methods are developed based on the generalized Rayleigh quotient for the eigenvalue problem. Approximation methods based on trace theorem give high accuracy without needing any derivatives. Operation counts for the computation of the approximations are given. General recommendations are made for the selection of appropriate approximation technique as a function of the matrix size, number of design variables, number of eigenvalues of interest and the number of design points at which approximation is sought.
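
    The first-order eigenvalue sensitivity underlying such analyses combines left and right eigenvectors. A minimal sketch with a finite-difference check (the matrix and the perturbation direction are random stand-ins for a structural model):

      import numpy as np
      from scipy.linalg import eig

      rng = np.random.default_rng(5)
      A = rng.standard_normal((6, 6))    # general non-hermitian matrix
      dA = rng.standard_normal((6, 6))   # perturbation direction dA/dp

      w, vl, vr = eig(A, left=True, right=True)
      k = 0                              # eigenvalue of interest
      y, x = vl[:, k], vr[:, k]
      dlam = (y.conj() @ dA @ x) / (y.conj() @ x)   # d(lambda_k)/dp

      eps = 1e-7                         # finite-difference verification
      w2 = np.linalg.eigvals(A + eps * dA)
      fd = (w2[np.argmin(abs(w2 - w[k]))] - w[k]) / eps
      print("analytic:", dlam, " finite difference:", fd)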

  13. A Generalized Method for the Comparable and Rigorous Calculation of the Polytropic Efficiencies of Turbocompressors

    NASA Astrophysics Data System (ADS)

    Dimitrakopoulos, Panagiotis

    2018-03-01

    The calculation of polytropic efficiencies is a very important task, especially during the development of new compression units, such as compressor impellers, stages and stage groups. Such calculations are also crucial for determining the performance of a whole compressor. As processors and computational capacities have improved substantially in recent years, the need has emerged for a new, rigorous, robust, accurate and at the same time standardized method for computing polytropic efficiencies, especially one based on the thermodynamics of real gases. The proposed method is based on the rigorous definition of the polytropic efficiency. The input consists of pressure and temperature values at the end points of the compression path (suction and discharge) for a given working fluid. The average relative error for the studied cases was 0.536%. Thus, this high-accuracy method is proposed for efficiency calculations related to turbocompressors and their compression units, especially when operating at high power levels, for example in jet engines and high-power plants.
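
    For orientation, the classical perfect-gas closed form computed from the same endpoint data is sketched below; the paper's method replaces this with a rigorous real-gas evaluation along the compression path, and the endpoint states used here are invented:

      import numpy as np

      def polytropic_efficiency_perfect_gas(p1, T1, p2, T2, gamma=1.4):
          """eta_p = ((gamma - 1)/gamma) * ln(p2/p1) / ln(T2/T1); perfect gas only."""
          return (gamma - 1.0) / gamma * np.log(p2 / p1) / np.log(T2 / T1)

      # Hypothetical endpoints: air compressed from 1 to 4 bar, 300 K -> 470 K.
      print(polytropic_efficiency_perfect_gas(1.0e5, 300.0, 4.0e5, 470.0))  # ~0.88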

  14. Accompanying coordinate expansion and recurrence relation method using a transfer relation scheme for electron repulsion integrals with high angular momenta and long contractions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hayami, Masao; Seino, Junji; Nakai, Hiromi, E-mail: nakai@waseda.jp

    An efficient algorithm for the rapid evaluation of electron repulsion integrals is proposed. The present method, denoted by accompanying coordinate expansion and transferred recurrence relation (ACE-TRR), is constructed using a transfer relation scheme based on the accompanying coordinate expansion and recurrence relation method. Furthermore, the ACE-TRR algorithm is extended for the general-contraction basis sets. Numerical assessments clarify the efficiency of the ACE-TRR method for the systems including heavy elements, whose orbitals have long contractions and high angular momenta, such as f- and g-orbitals.

  15. Adaptive mesh strategies for the spectral element method

    NASA Technical Reports Server (NTRS)

    Mavriplis, Catherine

    1992-01-01

    An adaptive spectral method was developed for the efficient solution of time-dependent partial differential equations. Adaptive mesh strategies that include resolution refinement and coarsening by three different methods are illustrated on solutions to the 1-D viscous Burgers equation and the 2-D Navier-Stokes equations for driven flow in a cavity. Sharp gradients, singularities, and regions of poor resolution are resolved optimally as they develop in time using error estimators which indicate the choice of refinement to be used. The adaptive formulation presents significant increases in efficiency, flexibility, and general capabilities for high-order spectral methods.

  16. Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, Qifeng, E-mail: liaoqf@shanghaitech.edu.cn; Lin, Guang, E-mail: guanglin@purdue.edu

    2016-07-15

    In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy and demonstrate its efficiency with numerical experiments.

  17. Acoustic prediction methods for the NASA generalized advanced propeller analysis system (GAPAS)

    NASA Technical Reports Server (NTRS)

    Padula, S. L.; Block, P. J. W.

    1984-01-01

    Classical methods of propeller performance analysis are coupled with state-of-the-art Aircraft Noise Prediction Program (ANOPP) techniques to yield a versatile design tool, the NASA Generalized Advanced Propeller Analysis System (GAPAS), for novel quiet and efficient propellers. ANOPP is a collection of modular specialized programs. GAPAS as a whole addresses blade geometry and aerodynamics, rotor performance and loading, and subsonic propeller noise.

  18. Testing alternative ground water models using cross-validation and other methods

    USGS Publications Warehouse

    Foglia, L.; Mehl, S.W.; Hill, M.C.; Perona, P.; Burlando, P.

    2007-01-01

    Many methods can be used to test alternative ground water models. Of concern in this work are methods able to (1) rank alternative models (also called model discrimination) and (2) identify observations important to parameter estimates and predictions (equivalent to the purpose served by some types of sensitivity analysis). Some of the measures investigated are computationally efficient; others are computationally demanding. The latter are generally needed to account for model nonlinearity. The efficient model discrimination methods investigated include the information criteria: the corrected Akaike information criterion, Bayesian information criterion, and generalized cross-validation. The efficient sensitivity analysis measures used are dimensionless scaled sensitivity (DSS), composite scaled sensitivity, and parameter correlation coefficient (PCC); the other statistics are DFBETAS, Cook's D, and observation-prediction statistic. Acronyms are explained in the introduction. Cross-validation (CV) is a computationally intensive nonlinear method that is used for both model discrimination and sensitivity analysis. The methods are tested using up to five alternative parsimoniously constructed models of the ground water system of the Maggia Valley in southern Switzerland. The alternative models differ in their representation of hydraulic conductivity. A new method for graphically representing CV and sensitivity analysis results for complex models is presented and used to evaluate the utility of the efficient statistics. The results indicate that for model selection, the information criteria produce similar results at much smaller computational cost than CV. For identifying important observations, the only obviously inferior linear measure is DSS; the poor performance was expected because DSS does not include the effects of parameter correlation and PCC reveals large parameter correlations. © 2007 National Ground Water Association.
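
    Once each alternative model has been fitted by least squares, the efficient information criteria are one-line formulas. A sketch with invented fit statistics (standard regression forms of AICc and BIC, assumed comparable to those used in the paper):

      import numpy as np

      def regression_ic(sse, n, k):
          """Corrected AIC and BIC for a least-squares model with k parameters."""
          aic = n * np.log(sse / n) + 2 * k
          aicc = aic + 2 * k * (k + 1) / (n - k - 1)
          bic = n * np.log(sse / n) + k * np.log(n)
          return aicc, bic

      # Hypothetical alternative hydraulic-conductivity models of one system:
      for name, sse, k in [("1 zone", 12.4, 3), ("3 zones", 9.1, 5),
                           ("5 zones", 8.8, 7)]:
          aicc, bic = regression_ic(sse, n=50, k=k)
          print(f"{name}: AICc = {aicc:.2f}, BIC = {bic:.2f}")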

  19. An Efficient Synthesis of 2-Substituted Benzimidazoles via Photocatalytic Condensation of o-Phenylenediamines and Aldehydes.

    PubMed

    Kovvuri, Jeshma; Nagaraju, Burri; Kamal, Ahmed; Srivastava, Ajay K

    2016-10-10

    A photocatalytic method has been developed for the efficient synthesis of functionalized benzimidazoles. This protocol involves the photocatalytic condensation of o-phenylenediamines with various aldehydes using Rose Bengal as the photocatalyst. The method was found to be general and was successfully employed for accessing pharmaceutically important benzimidazoles by the condensation of aromatic, heteroaromatic and aliphatic aldehydes with o-phenylenediamines in good-to-excellent yields. Notably, the method was found to be effective for the condensation of less reactive heterocyclic aldehydes with o-phenylenediamines.

  20. General Methodology Combining Engineering Optimization of Primary HVAC and R Plants with Decision Analysis Methods--Part II: Uncertainty and Decision Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Wei; Reddy, T. A.; Gurian, Patrick

    2007-01-31

    A companion paper to Jiang and Reddy presenting a general and computationally efficient methodology for dynamic scheduling and optimal control of complex primary HVAC&R plants using a deterministic engineering optimization approach.

  1. An efficient method to compute spurious end point contributions in PO solutions. [Physical Optics

    NASA Technical Reports Server (NTRS)

    Gupta, Inder J.; Burnside, Walter D.; Pistorius, Carl W. I.

    1987-01-01

    A method is given to compute the spurious endpoint contributions in the physical optics solution for electromagnetic scattering from conducting bodies. The method is applicable to general three-dimensional structures. The only information required to use the method is the radius of curvature of the body at the shadow boundary. Thus, the method is very efficient for numerical computations. As an illustration, the method is applied to several bodies of revolution to compute the endpoint contributions for backscattering in the case of axial incidence. It is shown that in high-frequency situations, the endpoint contributions obtained using the method are equal to the true endpoint contributions.

  2. Hamiltonian Monte Carlo acceleration using surrogate functions with random bases.

    PubMed

    Zhang, Cheng; Shahbaba, Babak; Zhao, Hongkai

    2017-11-01

    For big data analysis, the high computational cost of Bayesian methods often limits their application in practice. In recent years, there have been many attempts to improve the computational efficiency of Bayesian inference. Here we propose an efficient and scalable computational technique for a state-of-the-art Markov chain Monte Carlo method, namely, Hamiltonian Monte Carlo. The key idea is to explore and exploit the structure and regularity in parameter space of the underlying probabilistic model to construct an effective approximation of its geometric properties. To this end, we build a surrogate function to approximate the target distribution using properly chosen random bases and an efficient optimization process. The resulting method provides a flexible, scalable, and efficient sampling algorithm, which converges to the correct target distribution. We show that by choosing the basis functions and optimization process differently, our method can be related to other approaches for the construction of surrogate functions such as generalized additive models or Gaussian process models. Experiments based on simulated and real data show that our approach leads to substantially more efficient sampling algorithms compared to existing state-of-the-art methods.
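
    For reference, the baseline sampler that the surrogate technique accelerates, Hamiltonian Monte Carlo with a leapfrog integrator, can be sketched as follows (plain HMC only; the random-basis surrogate construction is not shown):

      import numpy as np

      def hmc(logp_grad, x0, n_samples, eps=0.1, n_leap=20, seed=0):
          """Basic HMC; logp_grad(x) must return (log density, gradient)."""
          rng = np.random.default_rng(seed)
          x = np.asarray(x0, dtype=float)
          lp, g = logp_grad(x)
          samples = []
          for _ in range(n_samples):
              p = rng.standard_normal(x.shape)
              x_new, g_new, p_new = x, g, p
              h0 = -lp + 0.5 * p @ p
              for _ in range(n_leap):              # leapfrog integration
                  p_new = p_new + 0.5 * eps * g_new
                  x_new = x_new + eps * p_new
                  lp_new, g_new = logp_grad(x_new)
                  p_new = p_new + 0.5 * eps * g_new
              h1 = -lp_new + 0.5 * p_new @ p_new
              if np.log(rng.uniform()) < h0 - h1:  # Metropolis accept/reject
                  x, lp, g = x_new, lp_new, g_new
              samples.append(x)
          return np.array(samples)

      # Standard normal target: log p(x) = -x.x/2 (up to a constant).
      draws = hmc(lambda x: (-0.5 * x @ x, -x), np.zeros(2), 2000)
      print(draws.mean(axis=0), draws.var(axis=0))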

  3. Three-dimensional implicit lambda methods

    NASA Technical Reports Server (NTRS)

    Napolitano, M.; Dadone, A.

    1983-01-01

    This paper derives the three dimensional lambda-formulation equations for a general orthogonal curvilinear coordinate system and provides various block-explicit and block-implicit methods for solving them, numerically. Three model problems, characterized by subsonic, supersonic and transonic flow conditions, are used to assess the reliability and compare the efficiency of the proposed methods.

  4. A computationally efficient modelling of laminar separation bubbles

    NASA Technical Reports Server (NTRS)

    Dini, Paolo; Maughmer, Mark D.

    1990-01-01

    In predicting the aerodynamic characteristics of airfoils operating at low Reynolds numbers, it is often important to account for the effects of laminar (transitional) separation bubbles. Previous approaches to the modelling of this viscous phenomenon range from fast but sometimes unreliable empirical correlations for the length of the bubble and the associated increase in momentum thickness, to more accurate but significantly slower displacement-thickness iteration methods employing inverse boundary-layer formulations in the separated regions. Since the penalty in computational time associated with the more general methods is unacceptable for airfoil design applications, use of an accurate yet computationally efficient model is highly desirable. To this end, a semi-empirical bubble model was developed and incorporated into the Eppler and Somers airfoil design and analysis program. The generality and the efficiency was achieved by successfully approximating the local viscous/inviscid interaction, the transition location, and the turbulent reattachment process within the framework of an integral boundary-layer method. Comparisons of the predicted aerodynamic characteristics with experimental measurements for several airfoils show excellent and consistent agreement for Reynolds numbers from 2,000,000 down to 100,000.

  5. A new fast direct solver for the boundary element method

    NASA Astrophysics Data System (ADS)

    Huang, S.; Liu, Y. J.

    2017-09-01

    A new fast direct linear equation solver for the boundary element method (BEM) is presented in this paper. The idea of the new fast direct solver stems from the concept of the hierarchical off-diagonal low-rank matrix. The hierarchical off-diagonal low-rank matrix can be decomposed into the multiplication of several diagonal block matrices. The inverse of the hierarchical off-diagonal low-rank matrix can be calculated efficiently with the Sherman-Morrison-Woodbury formula. In this paper, a more general and efficient approach to approximate the coefficient matrix of the BEM with the hierarchical off-diagonal low-rank matrix is proposed. Compared to the current fast direct solver based on the hierarchical off-diagonal low-rank matrix, the proposed method is suitable for solving general 3-D boundary element models. Several numerical examples of 3-D potential problems with the total number of unknowns up to above 200,000 are presented. The results show that the new fast direct solver can be applied to solve large 3-D BEM models accurately and with better efficiency compared with the conventional BEM.
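
    The Sherman-Morrison-Woodbury identity used by the solver turns a solve with a low-rank-corrected matrix into one small dense solve. A minimal dense sketch (the hierarchical off-diagonal low-rank machinery itself is not shown):

      import numpy as np

      rng = np.random.default_rng(6)
      n, k = 500, 10
      d = rng.uniform(1.0, 2.0, n)           # easy-to-invert diagonal block A
      U = rng.standard_normal((n, k))        # low-rank correction U @ V
      V = rng.standard_normal((k, n))
      b = rng.standard_normal(n)

      # (A + U V)^{-1} b = A^{-1} b - A^{-1} U (I + V A^{-1} U)^{-1} V A^{-1} b
      Ainv_b = b / d
      Ainv_U = U / d[:, None]
      S = np.eye(k) + V @ Ainv_U             # small k-by-k capacitance matrix
      x = Ainv_b - Ainv_U @ np.linalg.solve(S, V @ Ainv_b)

      print(np.allclose((np.diag(d) + U @ V) @ x, b))  # True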

  6. Mathematical programming for the efficient allocation of health care resources.

    PubMed

    Stinnett, A A; Paltiel, A D

    1996-10-01

    Previous discussions of methods for the efficient allocation of health care resources subject to a budget constraint have relied on unnecessarily restrictive assumptions. This paper makes use of established optimization techniques to demonstrate that a general mathematical programming framework can accommodate much more complex information regarding returns to scale, partial and complete indivisibility and program interdependence. Methods are also presented for incorporating ethical constraints into the resource allocation process, including explicit identification of the cost of equity.
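
    In the divisible special case, the allocation problem is a linear program. A minimal sketch with invented program data (the paper's framework additionally handles indivisibility, interdependence and equity constraints, typically requiring integer programming):

      import numpy as np
      from scipy.optimize import linprog

      benefit = np.array([30.0, 24.0, 18.0, 9.0])  # health gain if fully funded
      cost = np.array([100.0, 60.0, 40.0, 20.0])   # cost of full funding
      budget = 150.0

      # Maximize benefit @ x  s.t.  cost @ x <= budget and 0 <= x_j <= 1,
      # where x_j is the funded fraction of program j (programs divisible).
      res = linprog(-benefit, A_ub=cost[None, :], b_ub=[budget],
                    bounds=[(0, 1)] * 4)
      print("funding fractions:", res.x, " total benefit:", -res.fun)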

  7. On the Use of "Green" Metrics in the Undergraduate Organic Chemistry Lecture and Lab to Assess the Mass Efficiency of Organic Reactions

    ERIC Educational Resources Information Center

    Andraos, John; Sayed, Murtuzaali

    2007-01-01

    A general analysis of reaction mass efficiency and raw material cost is developed using an Excel spreadsheet format which can be applied to any chemical transformation. These new methods can be easily incorporated into standard laboratory exercises.

  8. Efficient Regressions via Optimally Combining Quantile Information*

    PubMed Central

    Zhao, Zhibiao; Xiao, Zhijie

    2014-01-01

    We develop a generally applicable framework for constructing efficient estimators of regression models via quantile regressions. The proposed method is based on optimally combining information over multiple quantiles and can be applied to a broad range of parametric and nonparametric settings. When combining information over a fixed number of quantiles, we derive an upper bound on the distance between the efficiency of the proposed estimator and the Fisher information. As the number of quantiles increases, this upper bound decreases and the asymptotic variance of the proposed estimator approaches the Cramér-Rao lower bound under appropriate conditions. In the case of non-regular statistical estimation, the proposed estimator leads to super-efficient estimation. We illustrate the proposed method for several widely used regression models. Both asymptotic theory and Monte Carlo experiments show the superior performance over existing methods. PMID:25484481

  9. An efficient method for generalized linear multiplicative programming problem with multiplicative constraints.

    PubMed

    Zhao, Yingfeng; Liu, Sanyang

    2016-01-01

    We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation problem equivalent to a linear program is proposed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving a sequence of linear relaxation problems. Global convergence has been proved, and the results of some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient.

  10. A new generalized exponential rational function method to find exact special solutions for the resonance nonlinear Schrödinger equation

    NASA Astrophysics Data System (ADS)

    Ghanbari, Behzad; Inc, Mustafa

    2018-04-01

    The present paper suggests a novel technique to acquire exact solutions of nonlinear partial differential equations. The main idea of the method is to generalize the exponential rational function method. In order to examine the ability of the method, we consider the resonant nonlinear Schrödinger equation (R-NLSE). Many variants of exact soliton solutions for the equation are derived by the proposed method. Physical interpretations of some of the obtained solutions are also included. One can easily conclude that the new proposed method is very efficient and finds exact solutions of the equation in a relatively easy way.

  11. A hybrid method for transient wave propagation in a multilayered solid

    NASA Astrophysics Data System (ADS)

    Tian, Jiayong; Xie, Zhoumin

    2009-08-01

    We present a hybrid method for the evaluation of transient elastic-wave propagation in a multilayered solid, integrating the reverberation matrix method with the theory of generalized rays. Adopting the reverberation matrix formulation, Laplace-Fourier domain solutions of elastic waves in the multilayered solid are expanded into the sum of a series of generalized-ray group integrals. Each generalized-ray group integral, containing the Kth power of the reverberation matrix R, represents the set of K-times reflections and refractions of source waves arriving at receivers in the multilayered solid, and was previously computed by fast inverse Laplace transform (FILT) and fast Fourier transform (FFT) algorithms. However, the computational burden and low precision of the FILT-FFT algorithm limit the application of the reverberation matrix method. In this paper, we expand each of the generalized-ray group integrals into the sum of a series of generalized-ray integrals, each of which is accurately evaluated by the Cagniard-De Hoop method from the theory of generalized rays. The numerical examples demonstrate that the proposed method makes it possible to calculate the early-time transient response in complex multilayered-solid configurations efficiently.

  12. Comparative study of high-resolution shock-capturing schemes for a real gas

    NASA Technical Reports Server (NTRS)

    Montagne, J.-L.; Yee, H. C.; Vinokur, M.

    1987-01-01

    Recently developed second-order explicit shock-capturing methods, in conjunction with generalized flux-vector splittings, and a generalized approximate Riemann solver for a real gas are studied. The comparisons are made on different one-dimensional Riemann (shock-tube) problems for equilibrium air with various ranges of Mach numbers, densities and pressures. Six different Riemann problems are considered. These tests provide a check on the validity of the generalized formulas, since theoretical prediction of their properties appears to be difficult because of the non-analytical form of the state equation. The numerical results in the supersonic and low-hypersonic regimes indicate that these produce good shock-capturing capability and that the shock resolution is only slightly affected by the state equation of equilibrium air. The difference in shock resolution between the various methods varies slightly from one Riemann problem to the other, but the overall accuracy is very similar. For the one-dimensional case, the relative efficiency in terms of operation count for the different methods is within 30%. The main difference between the methods lies in their versatility in being extended to multidimensional problems with efficient implicit solution procedures.

  13. An Analysis of Polynomial Chaos Approximations for Modeling Single-Fluid-Phase Flow in Porous Medium Systems

    PubMed Central

    Rupert, C.P.; Miller, C.T.

    2008-01-01

    We examine a variety of polynomial-chaos-motivated approximations to a stochastic form of a steady state groundwater flow model. We consider approaches for truncating the infinite dimensional problem and producing decoupled systems. We discuss conditions under which such decoupling is possible and show that to generalize the known decoupling by numerical cubature, it would be necessary to find new multivariate cubature rules. Finally, we use the acceleration of Monte Carlo to compare the quality of polynomial models obtained for all approaches and find that in general the methods considered are more efficient than Monte Carlo for the relatively small domains considered in this work. A curse of dimensionality in the series expansion of the log-normal stochastic random field used to represent hydraulic conductivity provides a significant impediment to efficient approximations for large domains for all methods considered in this work, other than the Monte Carlo method. PMID:18836519

  14. Fundamentals of bipolar high-frequency surgery.

    PubMed

    Reidenbach, H D

    1993-04-01

    In endoscopic surgery a very precise surgical dissection technique and an efficient hemostasis are of decisive importance. The bipolar technique may be regarded as a method which satisfies both requirements, especially regarding a high safety standard in application. In this context the biophysical and technical fundamentals of this method, which have been known in principle for a long time, are described with regard to the special demands of a newly developed field of modern surgery. After classification of this method into a general and a quasi-bipolar mode, various technological solutions of specific bipolar probes, in a strict and in a generalized sense, are characterized in terms of indication. Experimental results obtained with different bipolar instruments and probes are given. The application of modern microprocessor-controlled high-frequency surgery equipment and, wherever necessary, the integration of additional ancillary technology into the specialized bipolar instruments may result in most useful and efficient tools of a key technology in endoscopic surgery.

  15. The Precision Efficacy Analysis for Regression Sample Size Method.

    ERIC Educational Resources Information Center

    Brooks, Gordon P.; Barcikowski, Robert S.

    The general purpose of this study was to examine the efficiency of the Precision Efficacy Analysis for Regression (PEAR) method for choosing appropriate sample sizes in regression studies used for precision. The PEAR method, which is based on the algebraic manipulation of an accepted cross-validity formula, essentially uses an effect size to…

  16. General Catalytic Enantioselective Access to Monohalomethyl and Trifluoromethyl Cyclopropanes.

    PubMed

    Huang, Wei-Sheng; Schlinquer, Claire; Poisson, Thomas; Pannecoucke, Xavier; Charette, André B; Jubault, Philippe

    2018-05-29

    An efficient catalytic enantioselective access to chiral functionalized trifluoromethyl cyclopropanes from two classes of diazo compounds and alpha-trifluoromethyl styrenes using Rh2((S)-BTPCP)4 as a catalyst is described. This method provides an efficient and practical strategy for the synthesis of highly functionalized CF3-cyclopropanes with excellent diastereoselectivities (up to 20:1) and enantioselectivities (up to 99% ee). The depicted methodology represents, to date, the most efficient catalytic enantioselective method to access highly decorated chiral CF3-cyclopropanes. Extension to chiral monohalomethyl cyclopropanes in high ee is also reported. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. GSTAR-SUR Modeling With Calendar Variations And Intervention To Forecast Outflow Of Currencies In Java Indonesia

    NASA Astrophysics Data System (ADS)

    Akbar, M. S.; Setiawan; Suhartono; Ruchjana, B. N.; Riyadi, M. A. A.

    2018-03-01

    Ordinary least squares (OLS) is the standard method for estimating the parameters of the Generalized Space Time Autoregressive (GSTAR) model. In some cases, however, the GSTAR residuals are correlated between locations, and OLS then yields inefficient estimators. Generalized least squares (GLS) is the method used in the Seemingly Unrelated Regression (SUR) model; it estimates the parameters of several models whose residuals are correlated across equations. A simulation study shows that GSTAR with GLS parameter estimation (GSTAR-SUR) is more efficient than GSTAR-OLS. The purpose of this research is to apply GSTAR-SUR with calendar variation and intervention as exogenous variables (GSTARX-SUR) to forecast the outflow of currencies in Java, Indonesia. As a result, GSTARX-SUR provides better performance than GSTARX-OLS.
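
    For readers unfamiliar with why GLS outperforms OLS under cross-location correlation, the following sketch contrasts the two estimators on synthetic data with a known error covariance Ω. It is a generic GLS illustration, not the GSTAR-SUR implementation; in practice Ω must itself be estimated.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 3
X = rng.standard_normal((n, p))
beta_true = np.array([1.0, -2.0, 0.5])

# Known error covariance Omega with strong correlation across "locations".
idx = np.arange(n)
Omega = 0.9 ** np.abs(np.subtract.outer(idx, idx))
y = X @ beta_true + np.linalg.cholesky(Omega) @ rng.standard_normal(n)

# OLS: unbiased but inefficient when errors are correlated.
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# GLS (the core of the SUR estimator): beta = (X' O^-1 X)^-1 X' O^-1 y.
Oi = np.linalg.inv(Omega)
beta_gls = np.linalg.solve(X.T @ Oi @ X, X.T @ Oi @ y)

print("OLS:", beta_ols)
print("GLS:", beta_gls)
```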

  18. An efficient computational method for solving nonlinear stochastic Itô integral equations: Application for stochastic problems in physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heydari, M.H., E-mail: heydari@stu.yazd.ac.ir; The Laboratory of Quantum Information Processing, Yazd University, Yazd; Hooshmandasl, M.R., E-mail: hooshmandasl@yazd.ac.ir

    Because of the nonlinearity, closed-form solutions of many important stochastic functional equations are virtually impossible to obtain. Thus, numerical solutions are a viable alternative. In this paper, a new computational method based on the generalized hat basis functions together with their stochastic operational matrix of Itô-integration is proposed for solving nonlinear stochastic Itô integral equations in large intervals. In the proposed method, a new technique for computing nonlinear terms in such problems is presented. The main advantage of the proposed method is that it transforms problems under consideration into nonlinear systems of algebraic equations which can be simply solved. Error analysis of the proposed method is investigated and also the efficiency of this method is shown on some concrete examples. The obtained results reveal that the proposed method is very accurate and efficient. As two useful applications, the proposed method is applied to obtain approximate solutions of the stochastic population growth models and stochastic pendulum problem.

  19. Simulated Tempering Distributed Replica Sampling, Virtual Replica Exchange, and Other Generalized-Ensemble Methods for Conformational Sampling.

    PubMed

    Rauscher, Sarah; Neale, Chris; Pomès, Régis

    2009-10-13

    Generalized-ensemble algorithms in temperature space have become popular tools to enhance conformational sampling in biomolecular simulations. A random walk in temperature leads to a corresponding random walk in potential energy, which can be used to cross over energetic barriers and overcome the problem of quasi-nonergodicity. In this paper, we introduce two novel methods: simulated tempering distributed replica sampling (STDR) and virtual replica exchange (VREX). These methods are designed to address the practical issues inherent in the replica exchange (RE), simulated tempering (ST), and serial replica exchange (SREM) algorithms. RE requires a large, dedicated, and homogeneous cluster of CPUs to function efficiently when applied to complex systems. ST and SREM both have the drawback of requiring extensive initial simulations, possibly adaptive, for the calculation of weight factors or potential energy distribution functions. STDR and VREX alleviate the need for lengthy initial simulations, and for synchronization and extensive communication between replicas. Both methods are therefore suitable for distributed or heterogeneous computing platforms. We perform an objective comparison of all five algorithms in terms of both implementation issues and sampling efficiency. We use disordered peptides in explicit water as test systems, for a total simulation time of over 42 μs. Efficiency is defined in terms of both structural convergence and temperature diffusion, and we show that these definitions of efficiency are in fact correlated. Importantly, we find that ST-based methods exhibit faster temperature diffusion and correspondingly faster convergence of structural properties compared to RE-based methods. Within the RE-based methods, VREX is superior to both SREM and RE. On the basis of our observations, we conclude that ST is ideal for simple systems, while STDR is well-suited for complex systems.
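
    The temperature-exchange step common to RE- and ST-style methods reduces to a simple Metropolis criterion. The sketch below shows that criterion in isolation (reduced units, kB = 1); it is a generic illustration, not the STDR or VREX bookkeeping described in the abstract.

```python
import numpy as np

def swap_probability(E_i, E_j, T_i, T_j, kB=1.0):
    """Metropolis probability of exchanging the configurations held at
    temperatures T_i and T_j, given their potential energies E_i and E_j."""
    delta = (1.0 / (kB * T_i) - 1.0 / (kB * T_j)) * (E_i - E_j)
    return min(1.0, float(np.exp(delta)))

# A configuration that is energetically unfavorable at low T is likely
# to be passed up the temperature ladder, enabling barrier crossing.
print(swap_probability(E_i=-95.0, E_j=-100.0, T_i=1.0, T_j=1.2))  # 1.0
```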

  20. Efficient integration method for fictitious domain approaches

    NASA Astrophysics Data System (ADS)

    Duczek, Sascha; Gabbert, Ulrich

    2015-10-01

    In the current article, we present an efficient and accurate numerical method for the integration of the system matrices in fictitious domain approaches such as the finite cell method (FCM). In the framework of the FCM, the physical domain is embedded in a geometrically larger domain of simple shape which is discretized using a regular Cartesian grid of cells. Therefore, a spacetree-based adaptive quadrature technique is normally deployed to resolve the geometry of the structure. Depending on the complexity of the structure under investigation this method accounts for most of the computational effort. To reduce the computational costs for computing the system matrices an efficient quadrature scheme based on the divergence theorem (Gauß-Ostrogradsky theorem) is proposed. Using this theorem the dimension of the integral is reduced by one, i.e. instead of solving the integral for the whole domain only its contour needs to be considered. In the current paper, we present the general principles of the integration method and its implementation. The results to several two-dimensional benchmark problems highlight its properties. The efficiency of the proposed method is compared to conventional spacetree-based integration techniques.
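
    The dimension-reduction idea can be seen in two dimensions with Green's theorem, where an area integral over an embedded domain collapses to a sum over its boundary. The sketch below computes the area of an L-shaped cell purely from its contour; the FCM implementation generalizes this to system-matrix integrands in three dimensions.

```python
import numpy as np

def area_via_contour(xs, ys):
    """Area of a polygon from boundary data only, using Green's theorem:
    area = 1/2 * closed integral of (x dy - y dx)  (the shoelace formula)."""
    return 0.5 * abs(np.dot(xs, np.roll(ys, -1)) - np.dot(ys, np.roll(xs, -1)))

# L-shaped cell: a 2x2 square with a 1x1 corner removed (area 3).
xs = np.array([0.0, 2.0, 2.0, 1.0, 1.0, 0.0])
ys = np.array([0.0, 0.0, 1.0, 1.0, 2.0, 2.0])
print(area_via_contour(xs, ys))  # 3.0
```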

  1. An algebraic equation solution process formulated in anticipation of banded linear equations.

    DOT National Transportation Integrated Search

    1971-01-01

    A general method for the solution of large, sparsely banded, positive-definite, coefficient matrices is presented. The goal in developing the method was to produce an efficient and reliable solution process and to provide the user-programmer with a p...

  2. Theoretical and software considerations for general dynamic analysis using multilevel substructured models

    NASA Technical Reports Server (NTRS)

    Schmidt, R. J.; Dodds, R. H., Jr.

    1985-01-01

    The dynamic analysis of complex structural systems using the finite element method and multilevel substructured models is presented. The fixed-interface method is selected for substructure reduction because of its efficiency, accuracy, and adaptability to restart and reanalysis. This method is extended to the reduction of substructures which are themselves composed of reduced substructures. The implementation and performance of the method in a general purpose software system is emphasized. Solution algorithms consistent with the chosen data structures are presented. It is demonstrated that successful finite element software requires the use of software executives to supplement the algorithmic language. The complexity of the implementation of restart and reanalysis procedures illustrates the need for executive systems to support the noncomputational aspects of the software. It is shown that significant computational efficiencies can be achieved through proper use of substructuring and reduction techniques without sacrificing solution accuracy. The restart and reanalysis capabilities and the flexible procedures for multilevel substructured modeling give economical yet accurate analyses of complex structural systems.

  3. VOFTools - A software package of calculation tools for volume of fluid methods using general convex grids

    NASA Astrophysics Data System (ADS)

    López, J.; Hernández, J.; Gómez, P.; Faura, F.

    2018-02-01

    The VOFTools library includes efficient analytical and geometrical routines for (1) area/volume computation, (2) truncation operations that typically arise in VOF (volume of fluid) methods, (3) area/volume conservation enforcement (VCE) in PLIC (piecewise linear interface calculation) reconstruction and (4) computation of the distance from a given point to the reconstructed interface. The computation of a polyhedron volume uses an efficient formula based on a quadrilateral decomposition and a 2D projection of each polyhedron face. The analytical VCE method is based on coupling an interpolation procedure to bracket the solution with an improved final calculation step based on the above volume computation formula. Although the library was originally created to help develop highly accurate advection and reconstruction schemes in the context of VOF methods, it may have more general applications. To assess the performance of the supplied routines, different tests, which are provided in FORTRAN and C, were implemented for several 2D and 3D geometries.

  4. A General Catalytic Method for Highly Cost- and Atom-Efficient Nucleophilic Substitutions.

    PubMed

    Huy, Peter H; Filbrich, Isabel

    2018-05-23

    A general formamide-catalyzed protocol for the efficient transformation of alcohols into alkyl chlorides, which is promoted by substoichiometric amounts (down to 34 mol %) of inexpensive trichlorotriazine (TCT), is introduced. This is the first example of a TCT-mediated dihydroxychlorination of an OH-containing substrate (e.g., alcohols and carboxylic acids) in which all three chlorine atoms of TCT are transferred to the starting material. The consequently enhanced atom economy facilitates a significantly improved waste balance (E-factors down to 4), cost efficiency, and scalability (>50 g). Furthermore, the current procedure is distinguished by high levels of functional-group compatibility and stereoselectivity, as only weakly acidic cyanuric acid is released as exclusive byproduct. Finally, a one-pot protocol for the preparation of amines, azides, ethers, and sulfides enabled the synthesis of the drug rivastigmine with twofold SN2 inversion, which demonstrates the high practical value of the presented method. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Efficient Kriging via Fast Matrix-Vector Products

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Raykar, Vikas C.; Duraiswami, Ramani; Mount, David M.

    2008-01-01

    Interpolating scattered data points is a problem of wide ranging interest. Ordinary kriging is an optimal scattered data estimator, widely used in geosciences and remote sensing. A generalized version of this technique, called cokriging, can be used for image fusion of remotely sensed data. However, it is computationally very expensive for large data sets. We demonstrate the time efficiency and accuracy of approximating ordinary kriging through the use of fast matrix-vector products combined with iterative methods. We used methods based on the fast multipole method and nearest-neighbor searching techniques to implement the fast matrix-vector products.
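
    The essential point — that an iterative kriging solve only needs matrix-vector products, never a factored covariance matrix — can be sketched with a conjugate-gradient solve wrapped around a black-box matvec. Here the Gaussian covariance is formed densely for simplicity; a fast multipole or fast Gauss transform would replace `K @ v` for large data, and the unbiasedness constraint of ordinary kriging is omitted for brevity.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(2)
pts = rng.random((500, 2))
z = np.sin(4 * pts[:, 0]) + np.cos(3 * pts[:, 1])  # observed values

# Gaussian covariance with a small nugget; formed densely here only for
# simplicity. A fast summation scheme would replace the product K @ v.
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(axis=-1)
K = np.exp(-d2 / 0.01) + 1e-3 * np.eye(len(pts))

Kop = LinearOperator(K.shape, matvec=lambda v: K @ v)
weights, info = cg(Kop, z)   # kriging weights obtained from matvecs alone
print("CG converged:", info == 0)
```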

  6. Frontier-based techniques in measuring hospital efficiency in Iran: a systematic review and meta-regression analysis

    PubMed Central

    2013-01-01

    Background In recent years, there has been growing interest in measuring the efficiency of hospitals in Iran and several studies have been conducted on the topic. The main objective of this paper was to review studies in the field of hospital efficiency and examine the estimated technical efficiency (TE) of Iranian hospitals. Methods Persian and English databases were searched for studies related to measuring hospital efficiency in Iran. Ordinary least squares (OLS) regression models were applied for statistical analysis. The PRISMA guidelines were followed in the search process. Results A total of 43 efficiency scores from 29 studies were retrieved and used to approach the research question. Data envelopment analysis was the principal frontier efficiency method in the estimation of efficiency scores. The pooled estimate of mean TE was 0.846 (±0.134). There was a considerable variation in the efficiency scores between the different studies performed in Iran. There were no differences in efficiency scores between data envelopment analysis (DEA) and stochastic frontier analysis (SFA) techniques. The reviewed studies are generally similar and suffer from similar methodological deficiencies, such as no adjustment for case mix and quality of care differences. The results of OLS regression revealed that studies that included more variables and more heterogeneous hospitals generally reported higher TE. Larger sample size was associated with reporting lower TE. Conclusions The features of frontier-based techniques had a profound impact on the efficiency scores among Iranian hospital studies. These studies suffer from major methodological deficiencies and were of sub-optimal quality, limiting their validity and reliability. It is suggested that improving data collection and processing in Iranian hospital databases may have a substantial impact on promoting the quality of research in this field. PMID:23945011

  7. Some efficient methods for obtaining infinite series solutions of n-th order linear ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Allen, G.

    1972-01-01

    The use of the theta-operator method and generalized hypergeometric functions in obtaining solutions to nth-order linear ordinary differential equations is explained. For completeness, the analysis of the differential equation to determine whether the point of expansion is an ordinary point or a regular singular point is included. The superiority of the two methods shown over the standard method is demonstrated by using all three of the methods to work out several examples. Also included is a compendium of formulae and properties of the theta operator and generalized hypergeometric functions which is complete enough to make the report self-contained.

  8. PRIFIRA: General regularization using prior-conditioning for fast radio interferometric imaging†

    NASA Astrophysics Data System (ADS)

    Naghibzadeh, Shahrzad; van der Veen, Alle-Jan

    2018-06-01

    Image formation in radio astronomy is a large-scale inverse problem that is inherently ill-posed. We present a general algorithmic framework based on a Bayesian-inspired regularized maximum likelihood formulation of the radio astronomical imaging problem with a focus on diffuse emission recovery from limited noisy correlation data. The algorithm is dubbed PRIor-conditioned Fast Iterative Radio Astronomy (PRIFIRA) and is based on a direct embodiment of the regularization operator into the system by right preconditioning. The resulting system is then solved using an iterative method based on projections onto Krylov subspaces. We motivate the use of a beamformed image (which includes the classical "dirty image") as an efficient prior-conditioner. Iterative reweighting schemes generalize the algorithmic framework and can account for different regularization operators that encourage sparsity of the solution. The performance of the proposed method is evaluated based on simulated one- and two-dimensional array arrangements as well as actual data from the core stations of the Low Frequency Array radio telescope antenna configuration, and compared to state-of-the-art imaging techniques. We show the generality of the proposed method in terms of regularization schemes while maintaining a competitive reconstruction quality with the current reconstruction techniques. Furthermore, we show that exploiting Krylov subspace methods together with the proper noise-based stopping criteria results in a great improvement in imaging efficiency.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Jin; Chavana-Bryant, Cecilia; Prohaska, Neill

    Leaf age structures the phenology and development of plants, as well as the evolution of leaf traits over life histories. However, a general method for efficiently estimating leaf age across forests and canopy environments is lacking.

  10. Retrieval of spheroid particle size distribution from spectral extinction data in the independent mode using PCA approach

    NASA Astrophysics Data System (ADS)

    Tang, Hong; Lin, Jian-Zhong

    2013-01-01

    First, an improved anomalous diffraction approximation (ADA) method is presented for calculating the extinction efficiency of spheroids. In this approach, the extinction efficiency of spheroid particles can be calculated with good accuracy and high efficiency over a wider size range by combining the Latimer method and the ADA theory, and the method yields a more general expression for the extinction efficiency of spheroid particles with various complex refractive indices and aspect ratios. Meanwhile, the visible spectral extinction with varied spheroid particle size distributions and complex refractive indices is surveyed. Furthermore, a selection principle for the spectral extinction data is developed based on PCA (principal component analysis) of the first-derivative spectral extinction. By calculating the contribution rate of the first-derivative spectral extinction, the spectral extinction with more significant features can be selected as the input data, while that with fewer features is removed from the inversion data. In addition, we propose an improved Tikhonov iteration method to retrieve the spheroid particle size distributions in the independent mode. Simulation experiments indicate that the spheroid particle size distributions obtained with the proposed method coincide fairly well with the given distributions, and this inversion method provides a simple, reliable and efficient way to retrieve spheroid particle size distributions from spectral extinction data.
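
    A hypothetical version of the selection step might look as follows: compute first-derivative spectra, run PCA, and keep the wavelengths with the largest leading-component loadings. The synthetic spectra, the two-component cutoff, and the contribution-rate definition are all illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(3)
wl = np.linspace(400.0, 800.0, 200)  # wavelength grid in nm (illustrative)
centers = rng.uniform(500.0, 700.0, 60)
spectra = np.exp(-((wl[None, :] - centers[:, None]) / 30.0) ** 2)  # 60 spectra

# First-derivative spectra emphasize regions with significant features.
dspec = np.gradient(spectra, wl, axis=1)

# PCA via SVD of the centered derivative matrix; the "contribution" of a
# wavelength is taken here as its loading power on the two leading PCs.
Xc = dspec - dspec.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
contribution = (Vt[:2] ** 2).sum(axis=0)

keep = np.argsort(contribution)[-50:]        # retain 50 informative channels
print("selected wavelength range:", wl[keep].min(), "-", wl[keep].max())
```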

  11. The efficiency of parameter estimation of latent path analysis using summated rating scale (SRS) and method of successive interval (MSI) for transformation of score to scale

    NASA Astrophysics Data System (ADS)

    Solimun, Fernandes, Adji Achmad Rinaldo; Arisoesilaningsih, Endang

    2017-12-01

    Research in various fields generally investigates systems involving latent variables. One method to analyze a model representing such a system is path analysis. Latent variables measured with questionnaires that apply an attitude-scale model yield data in the form of scores, which should be transformed into scale data before analysis. Path coefficients, the parameter estimators, are calculated from scale data obtained by the method of successive interval (MSI) or the summated rating scale (SRS). This research identifies which data transformation method is better. Path coefficients with smaller variances are said to be more efficient, so the transformation method that produces scale data yielding path coefficients with smaller variances is judged the better one. Analysis of real data shows that, for the influence of the Attitude variable on Entrepreneurship Intention, the relative efficiency is ER = 1, indicating that the MSI and SRS transformations are equally efficient. For simulated data with high correlation between items (0.7-0.9), on the other hand, the MSI method is 1.3 times more efficient than the SRS method.

  12. Generalized spin filtering and an improved derivative-sign binary image method for the extraction of fringe skeletons

    NASA Astrophysics Data System (ADS)

    Yu, Qifeng; Liu, Xiaolin; Sun, Xiangyi

    1998-07-01

    Generalized spin filters, including several directional filters such as the directional median filter and the directional binary filter, are proposed for removing the noise of fringe patterns and extracting fringe skeletons with the help of fringe-orientation maps (FOMs). The generalized spin filters can filter out noise in fringe patterns and binary fringe patterns efficiently, without distortion of fringe features. A quadrantal angle filter is developed to filter the FOM itself. With these new filters, the derivative-sign binary image (DSBI) method for extraction of fringe skeletons is improved considerably. The improved DSBI method can extract high-density skeletons as well as common-density skeletons.

  13. A general method to analyze the thermal performance of multi-cavity concentrating solar power receivers

    DOE PAGES

    Fleming, Austin; Folsom, Charles; Ban, Heng; ...

    2015-11-13

    Concentrating solar power (CSP) with thermal energy storage has the potential to provide grid-scale, on-demand, dispatchable renewable energy. As higher solar receiver output temperatures are necessary for higher thermal cycle efficiency, current CSP research is focused on high outlet temperature and high efficiency receivers. Here, the objective of this study is to provide a simplified model to analyze the thermal efficiency of multi-cavity concentrating solar power receivers.

  14. A three-dimensional, compressible, laminar boundary-layer method for general fuselages. Volume 1: Numerical method

    NASA Technical Reports Server (NTRS)

    Wie, Yong-Sun

    1990-01-01

    A procedure for calculating 3-D, compressible laminar boundary layer flow on general fuselage shapes is described. The boundary layer solutions can be obtained in either nonorthogonal 'body oriented' coordinates or orthogonal streamline coordinates. The numerical procedure is 'second order' accurate, efficient and independent of the cross flow velocity direction. Numerical results are presented for several test cases, including a sharp cone, an ellipsoid of revolution, and a general aircraft fuselage at angle of attack. Comparisons are made between numerical results obtained using nonorthogonal curvilinear 'body oriented' coordinates and streamline coordinates.

  15. Study of flutter related computational procedures for minimum weight structural sizing of advanced aircraft, supplemental data

    NASA Technical Reports Server (NTRS)

    Oconnell, R. F.; Hassig, H. J.; Radovcich, N. A.

    1975-01-01

    Computational aspects of (1) flutter optimization (minimization of structural mass subject to specified flutter requirements), (2) methods for solving the flutter equation, and (3) efficient methods for computing generalized aerodynamic force coefficients in the repetitive analysis environment of computer-aided structural design are discussed. Specific areas included: a two-dimensional Regula Falsi approach to solving the generalized flutter equation; method of incremented flutter analysis and its applications; the use of velocity potential influence coefficients in a five-matrix product formulation of the generalized aerodynamic force coefficients; options for computational operations required to generate generalized aerodynamic force coefficients; theoretical considerations related to optimization with one or more flutter constraints; and expressions for derivatives of flutter-related quantities with respect to design variables.

  16. An accurate and efficient method for piezoelectric coated functional devices based on the two-dimensional Green’s function for a normal line force and line charge

    NASA Astrophysics Data System (ADS)

    Hou, Peng-Fei; Zhang, Yang

    2017-09-01

    Because most piezoelectric functional devices, including sensors, actuators and energy harvesters, are in the form of a piezoelectric coated structure, it is valuable to present an accurate and efficient method for obtaining the electro-mechanical coupling fields of this coated structure under mechanical and electrical loads. With this aim, the two-dimensional Green’s function for a normal line force and line charge on the surface of coated structure, which is a combination of an orthotropic piezoelectric coating and orthotropic elastic substrate, is presented in the form of elementary functions based on the general solution method. The corresponding electro-mechanical coupling fields of this coated structure under arbitrary mechanical and electrical loads can then be obtained by the superposition principle and Gauss integration. Numerical results show that the presented method has high computational precision, efficiency and stability. It can be used to design the best coating thickness in functional devices, improve the sensitivity of sensors, and improve the efficiency of actuators and energy harvesters. This method could be an efficient tool for engineers in engineering applications.

  17. An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 1. Theory

    USGS Publications Warehouse

    Yen, Chung-Cheng; Guymon, Gary L.

    1990-01-01

    An efficient probabilistic model is developed and cascaded with a deterministic model for predicting water table elevations in regional aquifers. The objective is to quantify model uncertainty where precise estimates of water table elevations may be required. The probabilistic model is based on the two-point probability method, which only requires prior knowledge of the uncertain variables' means and coefficients of variation. The two-point estimate method is theoretically developed and compared with the Monte Carlo simulation method. The results of comparisons using hypothetical deterministic problems indicate that the two-point estimate method is only generally valid for linear problems where the coefficients of variation of uncertain parameters (for example, storage coefficient and hydraulic conductivity) are small. The two-point estimate method may be applied to slightly nonlinear problems with good results, provided coefficients of variation are small. In such cases, the two-point estimate method is much more efficient than the Monte Carlo method provided the number of uncertain variables is less than eight.

  18. An Efficient Deterministic-Probabilistic Approach to Modeling Regional Groundwater Flow: 1. Theory

    NASA Astrophysics Data System (ADS)

    Yen, Chung-Cheng; Guymon, Gary L.

    1990-07-01

    An efficient probabilistic model is developed and cascaded with a deterministic model for predicting water table elevations in regional aquifers. The objective is to quantify model uncertainty where precise estimates of water table elevations may be required. The probabilistic model is based on the two-point probability method, which only requires prior knowledge of the uncertain variables' means and coefficients of variation. The two-point estimate method is theoretically developed and compared with the Monte Carlo simulation method. The results of comparisons using hypothetical deterministic problems indicate that the two-point estimate method is only generally valid for linear problems where the coefficients of variation of uncertain parameters (for example, storage coefficient and hydraulic conductivity) are small. The two-point estimate method may be applied to slightly nonlinear problems with good results, provided coefficients of variation are small. In such cases, the two-point estimate method is much more efficient than the Monte Carlo method provided the number of uncertain variables is less than eight.
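
    A minimal sketch of the two-point estimate idea for uncorrelated, symmetrically distributed inputs is shown below: the model is evaluated at mu ± sigma for each uncertain variable (2^n equally weighted points for n variables) and the output moments are read off directly. The linear toy model and parameter values are illustrative only, not the paper's groundwater model.

```python
import itertools
import numpy as np

def two_point_estimate(model, means, cvs):
    """Rosenblueth-style two-point estimate for uncorrelated, symmetrically
    distributed inputs: evaluate the model at mu +/- sigma for every
    variable (2**n equally weighted points) and read off output moments."""
    means = np.asarray(means, dtype=float)
    sigmas = means * np.asarray(cvs, dtype=float)
    outs = [model(means + np.array(s) * sigmas)
            for s in itertools.product((-1.0, 1.0), repeat=len(means))]
    outs = np.asarray(outs)
    return outs.mean(), outs.var()

# Illustrative (hypothetical) head response to storage coefficient S and
# hydraulic conductivity K; nearly linear, so the method should do well.
head = lambda x: 10.0 + 2000.0 * x[0] + 0.5 * x[1]
mean, var = two_point_estimate(head, means=[1e-3, 5.0], cvs=[0.1, 0.1])
print(mean, var)   # exact for a linear model: var = 0.2**2 + 0.25**2
```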

  19. Fast Fourier transform-based solution of 2D and 3D magnetization problems in type-II superconductivity

    NASA Astrophysics Data System (ADS)

    Prigozhin, Leonid; Sokolovsky, Vladimir

    2018-05-01

    We consider the fast Fourier transform (FFT) based numerical method for thin film magnetization problems (Vestgården and Johansen 2012 Supercond. Sci. Technol. 25 104001), compare it with the finite element methods, and evaluate its accuracy. Proposed modifications of this method implementation ensure stable convergence of iterations and enhance its efficiency. A new method, also based on the FFT, is developed for 3D bulk magnetization problems. This method is based on a magnetic field formulation, different from the popular h-formulation of eddy current problems typically employed with the edge finite elements. The method is simple, easy to implement, and can be used with a general current–voltage relation; its efficiency is illustrated by numerical simulations.

  20. Computationally efficient control allocation

    NASA Technical Reports Server (NTRS)

    Durham, Wayne (Inventor)

    2001-01-01

    A computationally efficient method for calculating near-optimal solutions to the three-objective, linear control allocation problem is disclosed. The control allocation problem is that of distributing the effort of redundant control effectors to achieve some desired set of objectives. The problem is deemed linear if control effectiveness is affine with respect to the individual control effectors. The optimal solution is that which exploits the collective maximum capability of the effectors within their individual physical limits. Computational efficiency is measured by the number of floating-point operations required for solution. The method presented returned optimal solutions in more than 90% of the cases examined; non-optimal solutions returned by the method were typically much less than 1% different from optimal, and the errors tended to become smaller than 0.01% as the number of controls was increased. The magnitude of the errors returned by the present method was much smaller than those that resulted from either pseudoinverse or cascaded generalized inverse solutions. The computational complexity of the method presented varied linearly with increasing numbers of controls; it ran from roughly 5.5 to seven times faster than the minimum-norm solution (the pseudoinverse), and at about the same rate as the cascaded generalized inverse solution. The computational requirements of the method presented were much better than those of previously described facet-searching methods, which increase in proportion to the square of the number of controls.

  1. Efficient exact-exchange time-dependent density-functional theory methods and their relation to time-dependent Hartree-Fock.

    PubMed

    Hesselmann, Andreas; Görling, Andreas

    2011-01-21

    A recently introduced time-dependent exact-exchange (TDEXX) method, i.e., a response method based on time-dependent density-functional theory that treats the frequency-dependent exchange kernel exactly, is reformulated. In the reformulated version of the TDEXX method electronic excitation energies can be calculated by solving a linear generalized eigenvalue problem while in the original version of the TDEXX method a laborious frequency iteration is required in the calculation of each excitation energy. The lowest eigenvalues of the new TDEXX eigenvalue equation corresponding to the lowest excitation energies can be efficiently obtained by, e.g., a version of the Davidson algorithm appropriate for generalized eigenvalue problems. Alternatively, with the help of a series expansion of the new TDEXX eigenvalue equation, standard eigensolvers for large regular eigenvalue problems, e.g., the standard Davidson algorithm, can be used to efficiently calculate the lowest excitation energies. With the help of the series expansion as well, the relation between the TDEXX method and time-dependent Hartree-Fock is analyzed. Several ways to take into account correlation in addition to the exact treatment of exchange in the TDEXX method are discussed, e.g., a scaling of the Kohn-Sham eigenvalues, the inclusion of (semi)local approximate correlation potentials, or hybrids of the exact-exchange kernel with kernels within the adiabatic local density approximation. The lowest lying excitations of the molecules ethylene, acetaldehyde, and pyridine are considered as examples.
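
    The reformulated problem is an ordinary generalized eigenvalue problem of the form A x = ω B x. For a dense toy problem, the lowest eigenvalues can be obtained directly, as sketched below; production TDEXX calculations would instead use a Davidson-type iterative solver, as the abstract notes. The matrices here are random stand-ins, not response matrices.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(4)
n = 50
A = rng.standard_normal((n, n)); A = A + A.T                   # symmetric stand-in
B = rng.standard_normal((n, n)); B = B @ B.T + n * np.eye(n)   # SPD metric

# Lowest five eigenvalues of the generalized problem A x = omega B x.
omega = eigh(A, B, eigvals_only=True, subset_by_index=[0, 4])
print(omega)
```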

  2. 40 CFR Table 1 to Subpart Hhhhhh... - Applicability of General Provisions to Subpart HHHHHH of Part 63

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .../Reporting Waiver Yes § 63.11 Control Device Requirements/Flares No Subpart HHHHHH does not require the use... Control Agencies and EPA Regional Offices Yes § 63.14 Incorporation by Reference Yes Test methods for measuring paint booth filter efficiency and spray gun transfer efficiency in § 63.11173(e)(2) and (3) are...

  3. 40 CFR Table 1 to Subpart Hhhhhh... - Applicability of General Provisions to Subpart HHHHHH of Part 63

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    .../Reporting Waiver Yes § 63.11 Control Device Requirements/Flares No Subpart HHHHHH does not require the use... Control Agencies and EPA Regional Offices Yes § 63.14 Incorporation by Reference Yes Test methods for measuring paint booth filter efficiency and spray gun transfer efficiency in § 63.11173(e)(2) and (3) are...

  4. 40 CFR Table 1 to Subpart Hhhhhh... - Applicability of General Provisions to Subpart HHHHHH of Part 63

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    .../Reporting Waiver Yes § 63.11 Control Device Requirements/Flares No Subpart HHHHHH does not require the use... Control Agencies and EPA Regional Offices Yes § 63.14 Incorporation by Reference Yes Test methods for measuring paint booth filter efficiency and spray gun transfer efficiency in § 63.11173(e)(2) and (3) are...

  5. 40 CFR Table 1 to Subpart Hhhhhh... - Applicability of General Provisions to Subpart HHHHHH of Part 63

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    .../Reporting Waiver Yes § 63.11 Control Device Requirements/Flares No Subpart HHHHHH does not require the use... Control Agencies and EPA Regional Offices Yes § 63.14 Incorporation by Reference Yes Test methods for measuring paint booth filter efficiency and spray gun transfer efficiency in § 63.11173(e)(2) and (3) are...

  6. Center for Efficient Exascale Discretizations Software Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kolev, Tzanio; Dobrev, Veselin; Tomov, Vladimir

    The CEED Software suite is a collection of generally applicable software tools focusing on the following computational motifs: PDE discretizations on unstructured meshes, high-order finite element and spectral element methods and unstructured adaptive mesh refinement. All of this software is being developed as part of CEED, a co-design Center for Efficient Exascale Discretizations, within DOE's Exascale Computing Project (ECP) program.

  7. New conditions for obtaining the exact solutions of the general Riccati equation.

    PubMed

    Bougoffa, Lazhar

    2014-01-01

    We propose a direct method for solving the general Riccati equation y' = f(x) + g(x)y + h(x)y². We first reduce it into an equivalent equation, and then we formulate the relations between the coefficient functions f(x), g(x), and h(x) of the equation to obtain an equivalent separable equation from which the previous equation can be solved in closed form. Several examples are presented to demonstrate the efficiency of this method.

  8. Eigenmode computation of cavities with perturbed geometry using matrix perturbation methods applied on generalized eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Gorgizadeh, Shahnam; Flisgen, Thomas; van Rienen, Ursula

    2018-07-01

    Generalized eigenvalue problems are standard problems in computational sciences. They may arise in electromagnetic field computations from the discretization of the Helmholtz equation by, for example, the finite element method (FEM). Geometrical perturbations of the structure under concern lead to new generalized eigenvalue problems with different system matrices. Geometrical perturbations may arise from manufacturing tolerances, harsh operating conditions or during shape optimization. Directly solving the eigenvalue problem for each perturbation is computationally costly. The perturbed eigenpairs can be approximated using eigenpair derivatives. Two common approaches for the calculation of eigenpair derivatives, namely the modal superposition method and direct algebraic methods, are discussed in this paper. Based on the direct algebraic methods, an iterative algorithm is developed for efficiently calculating the eigenvalues and eigenvectors of the perturbed geometry from the eigenvalues and eigenvectors of the unperturbed geometry.
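
    The algebraic starting point can be checked in a few lines: for A v = λ B v with B-orthonormal eigenvectors, first-order perturbation theory gives dλ = vᵀ(dA − λ dB)v. The sketch below verifies this against a direct re-solve on random symmetric stand-in matrices; it illustrates the principle only, not the paper's full iterative algorithm.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(5)
n = 40
A = rng.standard_normal((n, n)); A = A + A.T
M = rng.standard_normal((n, n)); B = M @ M.T + n * np.eye(n)   # SPD
dA = rng.standard_normal((n, n)); dA = 1e-3 * (dA + dA.T)      # small symmetric
dB = rng.standard_normal((n, n)); dB = 1e-3 * (dB + dB.T)      # perturbations

lam, V = eigh(A, B)            # scipy returns B-orthonormal eigenvectors
v, l0 = V[:, 0], lam[0]

# First-order eigenvalue derivative for A v = lambda B v (with v' B v = 1):
l0_first_order = l0 + v @ (dA - l0 * dB) @ v
l0_resolved = eigh(A + dA, B + dB, eigvals_only=True)[0]
print(l0_first_order, l0_resolved)   # agree to O(||perturbation||^2)
```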

  9. A symmetric version of the generalized alternating direction method of multipliers for two-block separable convex programming.

    PubMed

    Liu, Jing; Duan, Yongrui; Sun, Min

    2017-01-01

    This paper introduces a symmetric version of the generalized alternating direction method of multipliers for two-block separable convex programming with linear equality constraints, which inherits the superiorities of the classical alternating direction method of multipliers (ADMM), and which extends the feasible set of the relaxation factor α of the generalized ADMM to the infinite interval [Formula: see text]. Under the conditions that the objective function is convex and the solution set is nonempty, we establish the convergence results of the proposed method, including the global convergence, the worst-case [Formula: see text] convergence rate in both the ergodic and the non-ergodic senses, where k denotes the iteration counter. Numerical experiments to decode a sparse signal arising in compressed sensing are included to illustrate the efficiency of the new method.

  10. A metal-free general procedure for oxidation of secondary amines to nitrones.

    PubMed

    Gella, Carolina; Ferrer, Eric; Alibés, Ramon; Busqué, Félíx; de March, Pedro; Figueredo, Marta; Font, Josep

    2009-08-21

    An efficient and metal-free protocol for direct oxidation of secondary amines to nitrones has been developed, using Oxone in a biphasic basic medium as the sole oxidant. The method is general and tolerant with other functional groups or existing stereogenic centers, providing rapid access to enantiomerically pure compounds in good yields.

  11. A general solution to the hidden-line problem. [to graphically represent aerodynamic stability derivatives

    NASA Technical Reports Server (NTRS)

    Hedgley, D. R., Jr.

    1982-01-01

    The requirements for computer-generated perspective projections of three-dimensional objects have escalated. A general solution was developed. The theoretical solution to this problem is presented. The method is very efficient, as it minimizes the selection of points and comparison of line segments and hence avoids the devastation of square-law growth.

  12. Doubly robust estimation of generalized partial linear models for longitudinal data with dropouts.

    PubMed

    Lin, Huiming; Fu, Bo; Qin, Guoyou; Zhu, Zhongyi

    2017-12-01

    We develop a doubly robust estimation of generalized partial linear models for longitudinal data with dropouts. Our method extends the highly efficient aggregate unbiased estimating function approach proposed in Qu et al. (2010) to a doubly robust one in the sense that under missing at random (MAR), our estimator is consistent when either the linear conditional mean condition is satisfied or a model for the dropout process is correctly specified. We begin with a generalized linear model for the marginal mean, and then move forward to a generalized partial linear model, allowing for nonparametric covariate effect by using the regression spline smoothing approximation. We establish the asymptotic theory for the proposed method and use simulation studies to compare its finite sample performance with that of Qu's method, the complete-case generalized estimating equation (GEE) and the inverse-probability weighted GEE. The proposed method is finally illustrated using data from a longitudinal cohort study. © 2017, The International Biometric Society.

  13. Green Business and White Smokestacks.

    ERIC Educational Resources Information Center

    Fromont, Michel

    1993-01-01

    The Ciba chemical plant in Monthey, Switzerland, follows the Ciba-Geigy Industrial Group's general environment policy. They apply economic, less polluting production methods, dispose of waste efficiently, and train staff in ecological responsibility. (JOW)

  14. Correlated histogram representation of Monte Carlo derived medical accelerator photon-output phase space

    DOEpatents

    Schach Von Wittenau, Alexis E.

    2003-01-01

    A method is provided to represent the calculated phase space of photons emanating from medical accelerators used in photon teletherapy. The method reproduces the energy distributions and trajectories of the photons originating in the bremsstrahlung target and of photons scattered by components within the accelerator head. The method reproduces the energy and directional information from sources up to several centimeters in radial extent, so it is expected to generalize well to accelerators made by different manufacturers. The method is computationally both fast and efficient, with an overall sampling efficiency of 80% or higher for most field sizes. The computational cost is independent of the number of beams used in the treatment plan.

  15. Numerical approximations for fractional diffusion equations via a Chebyshev spectral-tau method

    NASA Astrophysics Data System (ADS)

    Doha, Eid H.; Bhrawy, Ali H.; Ezz-Eldien, Samer S.

    2013-10-01

    In this paper, a class of fractional diffusion equations with variable coefficients is considered. An accurate and efficient spectral tau technique for solving the fractional diffusion equations numerically is proposed. This method is based upon Chebyshev tau approximation together with Chebyshev operational matrix of Caputo fractional differentiation. Such approach has the advantage of reducing the problem to the solution of a system of algebraic equations, which may then be solved by any standard numerical technique. We apply this general method to solve four specific examples. In each of the examples considered, the numerical results show that the proposed method is of high accuracy and is efficient for solving the time-dependent fractional diffusion equations.

  16. Efficient solution of the simplified P N equations

    DOE PAGES

    Hamilton, Steven P.; Evans, Thomas M.

    2014-12-23

    We show new solver strategies for the multigroup SPN equations for nuclear reactor analysis. By forming the complete matrix over space, moments, and energy, a robust set of solution strategies may be applied. Power iteration, shifted power iteration, Rayleigh quotient iteration, Arnoldi's method, and a generalized Davidson method, each using algebraic and physics-based multigrid preconditioners, have been compared on the C5G7 MOX test problem as well as an operational PWR model. These results show that the most efficient approach is the generalized Davidson method, which is 30-40 times faster than traditional power iteration and 6-10 times faster than Arnoldi's method.
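
    For orientation, the baseline the authors compare against — power iteration for a generalized eigenvalue problem — can be sketched in a few lines on a 1-D diffusion stand-in of the k-eigenvalue problem M φ = (1/k) F φ. The operators below are illustrative, not the multigroup SPN matrices; the slow convergence of power iteration when eigenvalues cluster is exactly what Davidson-type methods avoid.

```python
import numpy as np

n = 100
M = 2.0 * np.eye(n) - 0.5 * np.eye(n, k=1) - 0.5 * np.eye(n, k=-1)  # loss
F = 0.9 * np.eye(n)                                                  # fission

# Power iteration for M phi = (1/k) F phi: k is the dominant
# eigenvalue of M^{-1} F.
phi = np.ones(n)
phi /= np.linalg.norm(phi)
for _ in range(200):
    w = np.linalg.solve(M, F @ phi)   # one "inner" solve per outer iteration
    k = np.linalg.norm(w)             # eigenvalue estimate (phi has unit norm)
    phi = w / k

print("k-effective estimate:", k)
```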

  17. Efficient numerical method of freeform lens design for arbitrary irradiance shaping

    NASA Astrophysics Data System (ADS)

    Wojtanowski, Jacek

    2018-05-01

    A computational method to design a lens with a flat entrance surface and a freeform exit surface that can transform a collimated, generally non-uniform input beam into a beam with a desired irradiance distribution of arbitrary shape is presented. The methodology is based on non-linear elliptic partial differential equations, known as Monge-Ampère PDEs. This paper describes an original numerical algorithm to solve this problem by applying the Gauss-Seidel method with simplified boundary conditions. A joint MATLAB-ZEMAX environment is used to implement and verify the method. To prove the efficiency of the proposed approach, an exemplary study in which the designed lens is faced with a challenging illumination task is shown. An analysis of solution stability, iteration-to-iteration ray mapping evolution (attached in video format), depth of focus and non-zero étendue efficiency is performed.

  18. Accelerated weight histogram method for exploring free energy landscapes

    NASA Astrophysics Data System (ADS)

    Lindahl, V.; Lidmar, J.; Hess, B.

    2014-07-01

    Calculating free energies is an important and notoriously difficult task for molecular simulations. The rapid increase in computational power has made it possible to probe increasingly complex systems, yet extracting accurate free energies from these simulations remains a major challenge. Fully exploring the free energy landscape of, say, a biological macromolecule typically requires sampling large conformational changes and slow transitions. Often, the only feasible way to study such a system is to simulate it using an enhanced sampling method. The accelerated weight histogram (AWH) method is a new, efficient extended ensemble sampling technique which adaptively biases the simulation to promote exploration of the free energy landscape. The AWH method uses a probability weight histogram which allows for efficient free energy updates and results in an easy discretization procedure. A major advantage of the method is its general formulation, making it a powerful platform for developing further extensions and analyzing its relation to already existing methods. Here, we demonstrate its efficiency and general applicability by calculating the potential of mean force along a reaction coordinate for both a single dimension and multiple dimensions. We make use of a non-uniform, free energy dependent target distribution in reaction coordinate space so that computational efforts are not wasted on physically irrelevant regions. We present numerical results for molecular dynamics simulations of lithium acetate in solution and chignolin, a 10-residue long peptide that folds into a β-hairpin. We further present practical guidelines for setting up and running an AWH simulation.

  19. Accelerated weight histogram method for exploring free energy landscapes.

    PubMed

    Lindahl, V; Lidmar, J; Hess, B

    2014-07-28

    Calculating free energies is an important and notoriously difficult task for molecular simulations. The rapid increase in computational power has made it possible to probe increasingly complex systems, yet extracting accurate free energies from these simulations remains a major challenge. Fully exploring the free energy landscape of, say, a biological macromolecule typically requires sampling large conformational changes and slow transitions. Often, the only feasible way to study such a system is to simulate it using an enhanced sampling method. The accelerated weight histogram (AWH) method is a new, efficient extended ensemble sampling technique which adaptively biases the simulation to promote exploration of the free energy landscape. The AWH method uses a probability weight histogram which allows for efficient free energy updates and results in an easy discretization procedure. A major advantage of the method is its general formulation, making it a powerful platform for developing further extensions and analyzing its relation to already existing methods. Here, we demonstrate its efficiency and general applicability by calculating the potential of mean force along a reaction coordinate for both a single dimension and multiple dimensions. We make use of a non-uniform, free energy dependent target distribution in reaction coordinate space so that computational efforts are not wasted on physically irrelevant regions. We present numerical results for molecular dynamics simulations of lithium acetate in solution and chignolin, a 10-residue long peptide that folds into a β-hairpin. We further present practical guidelines for setting up and running an AWH simulation.

  20. Impact of Advanced Propeller Technology on Aircraft/Mission Characteristics of Several General Aviation Aircraft

    NASA Technical Reports Server (NTRS)

    Keiter, I. D.

    1982-01-01

    Studies of several General Aviation aircraft indicated that the application of advanced technologies to General Aviation propellers can reduce fuel consumption in future aircraft by a significant amount. Propeller blade weight reductions achieved through the use of composites, propeller efficiency and noise improvements achieved through the use of advanced concepts and improved propeller analytical design methods result in aircraft with lower operating cost, acquisition cost and gross weight.

  1. Automatic Generalizability Method of Urban Drainage Pipe Network Considering Multi-Features

    NASA Astrophysics Data System (ADS)

    Zhu, S.; Yang, Q.; Shao, J.

    2018-05-01

    Urban drainage systems are an indispensable dataset for storm-flooding simulation. Given data availability and current computing power, the structure and complexity of urban drainage systems need to be simplified. To date, however, the simplification procedure has depended mainly on manual operation, which leads to mistakes and low work efficiency. This work references the classification methodology of road systems and proposes the concept of a pipeline stroke. Pipeline length, the angle between two pipelines, the road level to which a pipeline belongs, and pipeline diameter are chosen as the similarity criteria for generating pipeline strokes. Finally, an automatic method is designed to generalize drainage systems with multiple features taken into account. This technique improves the efficiency and accuracy of the generalization of drainage systems. In addition, it is beneficial to the study of urban storm floods.

  2. Efficient high-order structure-preserving methods for the generalized Rosenau-type equation with power law nonlinearity

    NASA Astrophysics Data System (ADS)

    Cai, Jiaxiang; Liang, Hua; Zhang, Chun

    2018-06-01

    Based on the multi-symplectic Hamiltonian formula of the generalized Rosenau-type equation, a multi-symplectic scheme and an energy-preserving scheme are proposed. To improve the accuracy of the solution, we apply the composition technique to the obtained schemes to develop high-order schemes which are also multi-symplectic and energy-preserving respectively. Discrete fast Fourier transform makes a significant improvement to the computational efficiency of schemes. Numerical results verify that all the proposed schemes have satisfactory performance in providing accurate solution and preserving the discrete mass and energy invariants. Numerical results also show that although each basic time step is divided into several composition steps, the computational efficiency of the composition schemes is much higher than that of the non-composite schemes.

  3. A spectral dynamic stiffness method for free vibration analysis of plane elastodynamic problems

    NASA Astrophysics Data System (ADS)

    Liu, X.; Banerjee, J. R.

    2017-03-01

    A highly efficient and accurate analytical spectral dynamic stiffness (SDS) method for modal analysis of plane elastodynamic problems based on both plane stress and plane strain assumptions is presented in this paper. First, the general solution satisfying the governing differential equation exactly is derived by applying two types of one-dimensional modified Fourier series. Then the SDS matrix for an element is formulated symbolically using the general solution. The SDS matrices are assembled directly in a similar way to that of the finite element method, demonstrating the method's capability to model complex structures. Any arbitrary boundary conditions are represented accurately in the form of the modified Fourier series. The Wittrick-Williams algorithm is then used as the solution technique where the mode count problem (J0) of a fully-clamped element is resolved. The proposed method gives highly accurate solutions with remarkable computational efficiency, covering low, medium and high frequency ranges. The method is applied to both plane stress and plane strain problems with simple as well as complex geometries. All results from the theory in this paper are accurate up to the last figures quoted to serve as benchmarks.

  4. 10 CFR Appendix B to Subpart B of... - Uniform Test Method for Measuring Nominal Full Load Efficiency of Electric Motors

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... vertical solid shaft normal thrust general purpose electric motor (subtype II), in which case it shall be... solid shaft shall be inserted, bolted to the non-drive end of the motor and welded on the drive end... Efficiency of Electric Motors B Appendix B to Subpart B of Part 431 Energy DEPARTMENT OF ENERGY ENERGY...

  5. A LSQR-type method provides a computationally efficient automated optimal choice of regularization parameter in diffuse optical tomography.

    PubMed

    Prakash, Jaya; Yalavarthy, Phaneendra K

    2013-03-01

    The aim is to develop a computationally efficient automated method for the optimal choice of the regularization parameter in diffuse optical tomography. The least-squares QR (LSQR)-type method that uses Lanczos bidiagonalization is known to be computationally efficient in performing the reconstruction procedure in diffuse optical tomography. It is deployed here via an optimization procedure that uses the simplex method to find the optimal regularization parameter. The proposed LSQR-type method is compared with traditional methods such as the L-curve, generalized cross-validation (GCV), and the recently proposed minimal residual method (MRM)-based choice of regularization parameter using numerical and experimental phantom data. The results indicate that the proposed LSQR-type and MRM-based methods perform similarly in terms of reconstructed image quality, and both are superior to the L-curve and GCV-based methods. The computational complexity of the proposed method is at least five times lower than that of the MRM-based method, making it the preferable technique. The LSQR-type method thus overcomes the computationally expensive nature of the MRM-based automated choice of the optimal regularization parameter in diffuse optical tomographic imaging, making it more suitable for real-time deployment.
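
    A hypothetical reconstruction of the idea — LSQR's built-in damping acting as the regularization parameter, tuned by a simplex (Nelder-Mead) search — is sketched below on a synthetic ill-conditioned problem. The discrepancy-principle objective is an illustrative stand-in for the paper's actual selection criterion, and the known noise norm is used only to make the sketch self-contained.

```python
import numpy as np
from scipy.sparse.linalg import lsqr
from scipy.optimize import minimize

rng = np.random.default_rng(7)
m, n = 120, 80
A = rng.standard_normal((m, n)) * np.exp(-np.arange(n) / 10.0)  # ill-conditioned
x_true = rng.standard_normal(n)
noise = 0.01 * rng.standard_normal(m)
b = A @ x_true + noise

# LSQR solves min ||Ax - b||^2 + damp^2 ||x||^2 via Lanczos bidiagonalization;
# 'damp' plays the role of the regularization parameter.
target = np.linalg.norm(noise)   # discrepancy-principle target (illustrative)

def objective(p):
    x = lsqr(A, b, damp=float(np.exp(p[0])))[0]
    return abs(np.linalg.norm(A @ x - b) - target)

res = minimize(objective, x0=[-4.0], method="Nelder-Mead")  # simplex search
print("optimal damp:", float(np.exp(res.x[0])))
```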

  6. Comparison of the Chebyshev Method and the Generalized Crank-Nicholson Method for time Propagation in Quantum Mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Formanek, Martin; Vana, Martin; Houfek, Karel

    2010-09-30

    We compare the efficiency of two methods for the numerical solution of the time-dependent Schroedinger equation, namely the Chebyshev method and the recently introduced generalized Crank-Nicholson method. As a testing system, the free propagation of a particle in one dimension is used. The space discretization is based on high-order finite differences to approximate accurately the kinetic energy operator in the Hamiltonian. We show that the choice of the more effective method depends on how many wave functions must be calculated during the given time interval to obtain relevant and reasonably accurate information about the system, i.e. on the choice of the time step.
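
    At lowest order, the Crank-Nicholson family reduces to the classical Cayley-form step (I + iΔtH/2)ψⁿ⁺¹ = (I − iΔtH/2)ψⁿ, which is unitary for Hermitian H. The sketch below propagates a free Gaussian wave packet with a second-order finite-difference Laplacian (ħ = m = 1); it is a minimal stand-in, since the paper uses high-order differences and a generalized scheme.

```python
import numpy as np

# Lowest-order Crank-Nicolson (Cayley) step for a free particle, hbar = m = 1:
#   (I + i*dt/2 * H) psi^{n+1} = (I - i*dt/2 * H) psi^n,  H = -(1/2) d^2/dx^2.
n, dx, dt = 400, 0.1, 0.01
x = (np.arange(n) - n // 2) * dx
lap = (np.eye(n, k=1) - 2.0 * np.eye(n) + np.eye(n, k=-1)) / dx**2  # 2nd-order FD
H = -0.5 * lap

A_plus = np.eye(n) + 0.5j * dt * H
A_minus = np.eye(n) - 0.5j * dt * H

psi = np.exp(-x**2 + 5j * x)        # Gaussian wave packet with momentum k = 5
psi /= np.linalg.norm(psi)
for _ in range(100):
    psi = np.linalg.solve(A_plus, A_minus @ psi)

print("norm after propagation:", np.linalg.norm(psi))  # stays 1: CN is unitary
```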

  7. The 'number needed to sample' in primary care research. Comparison of two primary care sampling frames for chronic back pain.

    PubMed

    Smith, Blair H; Hannaford, Philip C; Elliott, Alison M; Smith, W Cairns; Chambers, W Alastair

    2005-04-01

    Sampling for primary care research must strike a balance between efficiency and external validity. For most conditions, even a large population sample will yield a small number of cases, yet other sampling techniques risk problems with extrapolation of findings. To compare the efficiency and external validity of two sampling methods for both an intervention study and epidemiological research in primary care--a convenience sample and a general population sample--comparing the response and follow-up rates, the demographic and clinical characteristics of each sample, and calculating the 'number needed to sample' (NNS) for a hypothetical randomized controlled trial. In 1996, we selected two random samples of adults from 29 general practices in Grampian, for an epidemiological study of chronic pain. One sample of 4175 was identified by an electronic questionnaire that listed patients receiving regular analgesic prescriptions--the 'repeat prescription sample'. The other sample of 5036 was identified from all patients on practice lists--the 'general population sample'. Questionnaires, including demographic, pain and general health measures, were sent to all. A similar follow-up questionnaire was sent in 2000 to all those agreeing to participate in further research. We identified a potential group of subjects for a hypothetical trial in primary care based on a recently published trial (those aged 25-64, with severe chronic back pain, willing to participate in further research). The repeat prescription sample produced better response rates than the general sample overall (86% compared with 82%, P < 0.001), from both genders and from the oldest and youngest age groups. The NNS using convenience sampling was 10 for each member of the final potential trial sample, compared with 55 using general population sampling. There were important differences between the samples in age, marital and employment status, social class and educational level. However, among the potential trial sample, there were no demographic differences. Those from the repeat prescription sample had poorer indices than the general population sample in all pain and health measures. The repeat prescription sampling method was approximately five times more efficient than the general population method. However demographic and clinical differences in the repeat prescription sample might hamper extrapolation of findings to the general population, particularly in an epidemiological study, and demonstrate that simple comparison with age and gender of the target population is insufficient.

  8. Microwave assisted synthesis of bridgehead alkenes.

    PubMed

    Cleary, Leah; Yoo, Hoseong; Shea, Kenneth J

    2011-04-01

    A new, concise method to synthesize triene precursors for the type 2 intramolecular Diels-Alder reaction has been developed. Microwave irradiation of the trienes provides a convenient method for the synthesis of bridgehead alkenes. Higher yields, shorter reaction times, and lower reaction temperatures provide a general and efficient route to this interesting class of molecules.

  9. Microwave Assisted Synthesis of Bridgehead Alkenes

    PubMed Central

    Cleary, Leah; Yoo, Hoseong; Shea, Kenneth J.

    2011-01-01

    A new, concise method to synthesize triene precursors for the type 2 intramolecular Diels–Alder reaction has been developed. Microwave irradiation of the trienes provides a convenient method for the synthesis of bridgehead alkenes. Higher yields, shorter reaction times and lower reaction temperatures provide a general and efficient route to this interesting class of molecules. PMID:21384818

  10. Comparison of Implicit Schemes for the Incompressible Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Rogers, Stuart E.

    1995-01-01

    For a computational flow simulation tool to be useful in a design environment, it must be very robust and efficient. To develop such a tool for incompressible flow applications, a number of different implicit schemes are compared for several two-dimensional flow problems in the current study. The schemes include Point-Jacobi relaxation, Gauss-Seidel line relaxation, incomplete lower-upper decomposition, and the generalized minimum residual method preconditioned with each of the three other schemes. The efficiency of the schemes is measured in terms of the computing time required to obtain a steady-state solution for the laminar flow over a backward-facing step, the flow over a NACA 4412 airfoil, and the flow over a three-element airfoil using overset grids. The flow solver used in the study is the INS2D code that solves the incompressible Navier-Stokes equations using the method of artificial compressibility and upwind differencing of the convective terms. The results show that the generalized minimum residual method preconditioned with the incomplete lower-upper factorization outperforms all other methods by at least a factor of 2.
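
    A hedged sketch of the winning combination, GMRES preconditioned with an incomplete LU factorization, is shown below using SciPy. The 2D Poisson matrix is a stand-in test problem; the artificial-compressibility Jacobians of INS2D are not reproduced.

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import gmres, spilu, LinearOperator

    # 2D Poisson matrix on an n-by-n grid as an illustrative linear system
    n = 64
    A = sp.kron(sp.eye(n), sp.diags([-1, 4, -1], [-1, 0, 1], shape=(n, n))) \
        + sp.kron(sp.diags([-1, -1], [-1, 1], shape=(n, n)), sp.eye(n))
    A = A.tocsc()
    b = np.ones(A.shape[0])

    ilu = spilu(A, drop_tol=1e-4, fill_factor=10)   # incomplete LU factors
    M = LinearOperator(A.shape, ilu.solve)          # applies M^{-1} v

    x, info = gmres(A, b, M=M, restart=30)
    print("converged" if info == 0 else f"info={info}",
          "| residual:", np.linalg.norm(b - A @ x))
    ```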

  11. Higher Order Time Integration Schemes for the Unsteady Navier-Stokes Equations on Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Jothiprasad, Giridhar; Mavriplis, Dimitri J.; Caughey, David A.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    The efficiency gains obtained using higher-order implicit Runge-Kutta schemes as compared with the second-order accurate backward difference schemes for the unsteady Navier-Stokes equations are investigated. Three different algorithms for solving the nonlinear system of equations arising at each timestep are presented. The first algorithm (NMG) is a pseudo-time-stepping scheme which employs a non-linear full approximation storage (FAS) agglomeration multigrid method to accelerate convergence. The other two algorithms are based on inexact Newton methods. The linear system arising at each Newton step is solved using iterative/Krylov techniques, and left preconditioning is used to accelerate convergence of the linear solvers. One of the methods (LMG) uses Richardson's iterative scheme for solving the linear system at each Newton step while the other (PGMRES) uses the Generalized Minimal Residual method. Results demonstrating the relative superiority of these Newton-based schemes are presented. Efficiency gains as high as 10 are obtained by combining the higher-order time integration schemes with the more efficient nonlinear solvers.
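
    The core nonlinear solve can be sketched with SciPy's Jacobian-free Newton-Krylov routine: one implicit Runge-Kutta stage for a stiff ODE u' = f(u) is a nonlinear system g(u) = u - u_n - dt*a_ii*f(u) = 0. The right-hand side and the diagonal coefficient below are illustrative assumptions; the multigrid-preconditioned variants of the paper are not reproduced.

    ```python
    import numpy as np
    from scipy.optimize import newton_krylov

    def f(u):
        # Brusselator-like reaction terms as an illustrative stiff RHS
        a, b = 1.0, 3.0
        x, y = u
        return np.array([a + x * x * y - (b + 1.0) * x, b * x - x * x * y])

    u_n = np.array([1.2, 3.1])
    dt, a_ii = 0.05, 0.25          # diagonal coefficient of a hypothetical stage

    def stage_residual(u):
        # g(u) = 0 defines the implicit stage value
        return u - u_n - dt * a_ii * f(u)

    # Newton outer iteration with GMRES as the inner Krylov solver
    u_stage = newton_krylov(stage_residual, u_n, method="gmres", f_tol=1e-10)
    print("stage value:", u_stage)
    ```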

  12. Interaction Entropy: A New Paradigm for Highly Efficient and Reliable Computation of Protein-Ligand Binding Free Energy.

    PubMed

    Duan, Lili; Liu, Xiao; Zhang, John Z H

    2016-05-04

    Efficient and reliable calculation of protein-ligand binding free energy is a grand challenge in computational biology and is of critical importance in drug design and many other molecular recognition problems. The main challenge lies in the calculation of entropic contribution to protein-ligand binding or interaction systems. In this report, we present a new interaction entropy method which is theoretically rigorous, computationally efficient, and numerically reliable for calculating entropic contribution to free energy in protein-ligand binding and other interaction processes. Drastically different from the widely employed but extremely expensive normal mode method for calculating entropy change in protein-ligand binding, the new method calculates the entropic component (interaction entropy or -TΔS) of the binding free energy directly from molecular dynamics simulation without any extra computational cost. Extensive study of over a dozen randomly selected protein-ligand binding systems demonstrated that this interaction entropy method is both computationally efficient and numerically reliable and is vastly superior to the standard normal mode approach. This interaction entropy paradigm introduces a novel and intuitive conceptual understanding of the entropic effect in protein-ligand binding and other general interaction systems as well as a practical method for highly efficient calculation of this effect.
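
    A commonly quoted statement of the interaction-entropy estimator is -T*dS = kT * ln< exp(beta * dE_int) >, where dE_int is the fluctuation of the gas-phase protein-ligand interaction energy about its trajectory mean. The sketch below assumes that form; the per-frame energies are synthetic stand-ins for MD output.

    ```python
    import numpy as np

    kT = 0.593  # kcal/mol at ~298 K

    rng = np.random.default_rng(0)
    E_int = -45.0 + 2.0 * rng.standard_normal(5000)   # per-frame interaction energies

    # -T*dS = kT * ln < exp(dE/kT) >, with dE the fluctuation about the mean
    dE = E_int - E_int.mean()
    minus_TdS = kT * np.log(np.mean(np.exp(dE / kT)))
    print(f"-T*dS = {minus_TdS:.2f} kcal/mol")
    ```

    The estimator needs only quantities already produced by an ordinary MD run, which is why it adds essentially no computational cost compared with a normal mode analysis.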

  13. A General Formulation for Robust and Efficient Integration of Finite Differences and Phase Unwrapping on Sparse Multidimensional Domains

    NASA Astrophysics Data System (ADS)

    Costantini, Mario; Malvarosa, Fabio; Minati, Federico

    2010-03-01

    Phase unwrapping and integration of finite differences are key problems in several technical fields. In SAR interferometry and in differential and persistent scatterer interferometry, digital elevation models and displacement measurements can be obtained after unambiguously determining the phase values and reconstructing the mean velocities and elevations of the observed targets, which can be performed by integrating differential estimates of these quantities (finite differences between neighboring points). In this paper we propose a general formulation for robust and efficient integration of finite differences and phase unwrapping, which includes standard methods as sub-cases. The proposed approach obtains more reliable and accurate solutions by exploiting redundant differential estimates (not only between nearest neighboring points), multi-dimensional information (e.g., multi-temporal, multi-frequency, multi-baseline observations), or external data (e.g., GPS measurements). The approach requires the solution of linear or quadratic programming problems, for which computationally efficient algorithms exist. Validation tests on real SAR data confirm the validity of the method, which has been integrated into our production chain and successfully used in massive production runs.
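
    The core integration step can be sketched as a weighted least-squares problem ||W (D x - d)||^2, where D is the sparse incidence matrix of the (possibly redundant) point pairs and d holds the differential estimates. This is a simplification under a quadratic cost; the paper's robust linear/quadratic-programming formulation is richer.

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import lsqr

    rng = np.random.default_rng(1)
    n_pts = 200
    truth = np.cumsum(rng.standard_normal(n_pts))      # synthetic target values

    # redundant arcs: nearest neighbours plus random longer-range pairs
    arcs = [(i, i + 1) for i in range(n_pts - 1)]
    arcs += [tuple(sorted(rng.choice(n_pts, 2, replace=False))) for _ in range(400)]

    rows, cols, vals, d, w = [], [], [], [], []
    for k, (i, j) in enumerate(arcs):
        rows += [k, k]; cols += [i, j]; vals += [-1.0, 1.0]
        d.append(truth[j] - truth[i] + 0.05 * rng.standard_normal())  # noisy estimate
        w.append(1.0)                                                 # per-arc weight

    D = sp.csr_matrix((vals, (rows, cols)), shape=(len(arcs), n_pts))
    W = sp.diags(w)
    x = lsqr(W @ D, W @ np.array(d))[0]

    x -= x[0] - truth[0]                  # fix the free additive constant
    print("rms error:", np.sqrt(np.mean((x - truth) ** 2)))
    ```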

  14. Efficient visibility encoding for dynamic illumination in direct volume rendering.

    PubMed

    Kronander, Joel; Jönsson, Daniel; Löw, Joakim; Ljung, Patric; Ynnerman, Anders; Unger, Jonas

    2012-03-01

    We present an algorithm that enables real-time dynamic shading in direct volume rendering using general lighting, including directional lights, point lights, and environment maps. Real-time performance is achieved by encoding local and global volumetric visibility using spherical harmonic (SH) basis functions stored in an efficient multiresolution grid over the extent of the volume. Our method enables high-frequency shadows in the spatial domain, but is limited to a low-frequency approximation of visibility and illumination in the angular domain. In a first pass, level of detail (LOD) selection in the grid is based on the current transfer function setting. This enables rapid online computation and SH projection of the local spherical distribution of visibility information. Using a piecewise integration of the SH coefficients over the local regions, the global visibility within the volume is then computed. By representing the light sources using their SH projections, the integral over lighting, visibility, and isotropic phase functions can be efficiently computed during rendering. The utility of our method is demonstrated in several examples showing the generality and interactive performance of the approach.
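
    The SH projection at the heart of such methods can be sketched with a Monte Carlo estimate c_lm = (4*pi/N) * sum_i V(w_i) * conj(Y_lm(w_i)) over uniform sphere samples. Real SH bases are more common in rendering; this illustration uses SciPy's complex spherical harmonics and a toy horizon occluder, and does not reproduce the paper's multiresolution grid or piecewise global integration.

    ```python
    import numpy as np
    from scipy.special import sph_harm

    rng = np.random.default_rng(2)
    N = 20000
    u, v = rng.random(N), rng.random(N)
    theta = 2.0 * np.pi * u                 # azimuth in [0, 2*pi)
    phi = np.arccos(1.0 - 2.0 * v)          # polar angle, uniform on the sphere

    def visibility(theta, phi):
        # toy occluder: everything below the horizon is blocked
        return (phi < np.pi / 2).astype(float)

    V = visibility(theta, phi)
    order = 2                               # bands 0..2 -> 9 coefficients
    coeffs = {}
    for l in range(order + 1):
        for m in range(-l, l + 1):
            # scipy convention: sph_harm(m, l, azimuthal, polar)
            Y = sph_harm(m, l, theta, phi)
            coeffs[(l, m)] = 4.0 * np.pi * np.mean(V * np.conj(Y))
    print(coeffs[(0, 0)])   # ~ sqrt(pi) for the open upper hemisphere
    ```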

  15. Sparse Contextual Activation for Efficient Visual Re-Ranking.

    PubMed

    Bai, Song; Bai, Xiang

    2016-03-01

    In this paper, we propose an extremely efficient algorithm for visual re-ranking. By considering the original pairwise distance in the contextual space, we develop a feature vector called sparse contextual activation (SCA) that encodes the local distribution of an image. Hence, the re-ranking task can be accomplished simply by vector comparison under the generalized Jaccard metric, which has its theoretical meaning in fuzzy set theory. In order to improve the time efficiency of the re-ranking procedure, an inverted index is introduced to speed up the computation of the generalized Jaccard metric. As a result, the average time cost of re-ranking for a certain query can be controlled within 1 ms. Furthermore, inspired by query expansion, we also develop an additional method called local consistency enhancement on the proposed SCA to improve the retrieval performance in an unsupervised manner. On the other hand, the retrieval performance using a single feature may not be satisfactory enough, which inspires us to fuse multiple complementary features for accurate retrieval. Based on SCA, a robust feature fusion algorithm is exploited that also preserves the characteristic of high time efficiency. We assess our proposed method in various visual re-ranking tasks. Experimental results on Princeton shape benchmark (3D object), WM-SRHEC07 (3D competition), YAEL data set B (face), MPEG-7 data set (shape), and Ukbench data set (image) manifest the effectiveness and efficiency of SCA.
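
    The generalized Jaccard similarity named in the abstract has the standard form J(x, y) = sum_i min(x_i, y_i) / sum_i max(x_i, y_i); a minimal sketch follows. An inverted index accelerates this for sparse vectors because the min-sum only ranges over dimensions where both vectors are nonzero.

    ```python
    import numpy as np

    def generalized_jaccard(x, y):
        # assumes nonnegative activation vectors
        return np.minimum(x, y).sum() / np.maximum(x, y).sum()

    x = np.array([0.5, 0.0, 0.3, 0.2, 0.0])
    y = np.array([0.4, 0.1, 0.0, 0.5, 0.0])
    print(generalized_jaccard(x, y))  # 1.0 when x == y, 0.0 for disjoint supports
    ```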

  16. Hybrid preconditioning for iterative diagonalization of ill-conditioned generalized eigenvalue problems in electronic structure calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, Yunfeng, E-mail: yfcai@math.pku.edu.cn; Department of Computer Science, University of California, Davis 95616; Bai, Zhaojun, E-mail: bai@cs.ucdavis.edu

    2013-12-15

    The iterative diagonalization of a sequence of large ill-conditioned generalized eigenvalue problems is a computational bottleneck in quantum mechanical methods employing a nonorthogonal basis for ab initio electronic structure calculations. We propose a hybrid preconditioning scheme to effectively combine global and locally accelerated preconditioners for rapid iterative diagonalization of such eigenvalue problems. In partition-of-unity finite-element (PUFE) pseudopotential density-functional calculations, employing a nonorthogonal basis, we show that the hybrid preconditioned block steepest descent method is a cost-effective eigensolver, outperforming current state-of-the-art global preconditioning schemes, and comparably efficient for the ill-conditioned generalized eigenvalue problems produced by PUFE as the locally optimal block preconditioned conjugate-gradient method for the well-conditioned standard eigenvalue problems produced by planewave methods.
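
    For orientation, a preconditioned block iteration for a generalized eigenproblem A x = lambda B x can be sketched with SciPy's LOBPCG. The stiffness/mass-like matrices and the simple ILU preconditioner below are assumptions standing in for the paper's hybrid global/locally-accelerated preconditioner.

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import lobpcg, spilu, LinearOperator

    n = 400
    A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc") * n**2
    B = sp.diags([1.0 / 6, 4.0 / 6, 1.0 / 6], [-1, 0, 1], shape=(n, n), format="csc")

    ilu = spilu(A, drop_tol=1e-3)
    M = LinearOperator((n, n), ilu.solve)        # preconditioner ~ A^{-1}

    rng = np.random.default_rng(3)
    X = rng.standard_normal((n, 5))              # block of 5 trial eigenvectors
    vals, vecs = lobpcg(A, X, B=B, M=M, largest=False, tol=1e-8, maxiter=500)
    print("smallest eigenvalues:", vals)
    ```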

  17. Design of efficient stiffened shells of revolution

    NASA Technical Reports Server (NTRS)

    Majumder, D. K.; Thornton, W. A.

    1976-01-01

    A method to produce efficient piecewise uniform stiffened shells of revolution is presented. The approach uses a first order differential equation formulation for the shell prebuckling and buckling analyses and the necessary conditions for an optimum design are derived by a variational approach. A variety of local yielding and buckling constraints and the general buckling constraint are included in the design process. The local constraints are treated by means of an interior penalty function and the general buckling load is treated by means of an exterior penalty function. This allows the general buckling constraint to be included in the design process only when it is violated. The self-adjoint nature of the prebuckling and buckling formulations is used to reduce the computational effort. Results for four conical shells and one spherical shell are given.
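
    The mixed penalty treatment described can be sketched in miniature: local constraints g_i <= 0 enter through an interior (barrier) penalty that keeps iterates strictly feasible, while the general buckling constraint h <= 0 enters through an exterior penalty that is nonzero only when violated. The objective and constraint functions below are toy stand-ins, not the shell analysis.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def weight(t):                      # objective: "mass" of the design t
        return t.sum()

    def g_local(t):                     # local constraints, feasible when <= 0
        return 1.0 - 2.0 * t            # e.g. minimum-thickness-type limits

    def h_general(t):                   # general buckling margin, feasible <= 0
        return 1.5 - t @ t

    def penalized(t, r_in=1e-2, r_out=1e3):
        g = g_local(t)
        if np.any(g >= 0):              # barrier is defined only strictly inside
            return np.inf
        interior = -r_in * np.sum(1.0 / g)              # blows up near g = 0
        exterior = r_out * max(h_general(t), 0.0) ** 2  # zero unless violated
        return weight(t) + interior + exterior

    t0 = np.full(4, 2.0)                # strictly feasible start for the barrier
    res = minimize(penalized, t0, method="Nelder-Mead")
    print(res.x, weight(res.x))
    ```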

  18. Privacy-preserving data cube for electronic medical records: An experimental evaluation.

    PubMed

    Kim, Soohyung; Lee, Hyukki; Chung, Yon Dohn

    2017-01-01

    The aim of this study is to evaluate the effectiveness and efficiency of privacy-preserving data cubes of electronic medical records (EMRs). An EMR data cube is a complex of EMR statistics that are summarized or aggregated by all possible combinations of attributes. Data cubes are widely utilized for efficient big data analysis and also have great potential for EMR analysis. For safe data analysis without privacy breaches, we must consider the privacy preservation characteristics of the EMR data cube. In this paper, we introduce a design for a privacy-preserving EMR data cube and the anonymization methods needed to achieve data privacy. We further focus on changes in efficiency and effectiveness that are caused by the anonymization process for privacy preservation. Thus, we experimentally evaluate various types of privacy-preserving EMR data cubes using several practical metrics and discuss the applicability of each anonymization method with consideration for the EMR analysis environment. We construct privacy-preserving EMR data cubes from anonymized EMR datasets. A real EMR dataset and demographic dataset are used for the evaluation. There are a large number of anonymization methods to preserve EMR privacy, and the methods are classified into three categories (i.e., global generalization, local generalization, and bucketization) by anonymization rules. According to this classification, three types of privacy-preserving EMR data cubes were constructed for the evaluation. We perform a comparative analysis by measuring the data size, cell overlap, and information loss of the EMR data cubes. Global generalization considerably reduced the size of the EMR data cube and did not cause the data cube cells to overlap, but incurred a large amount of information loss. Local generalization maintained the data size and generated only moderate information loss, but there were cell overlaps that could decrease the search performance. Bucketization did not cause cells to overlap and generated little information loss; however, the method considerably inflated the size of the EMR data cubes. The utility of anonymized EMR data cubes varies widely according to the anonymization method, and the applicability of the anonymization method depends on the features of the EMR analysis environment. The findings help to adopt the optimal anonymization method considering the EMR analysis environment and goal of the EMR analysis. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  19. A time-spectral approach to numerical weather prediction

    NASA Astrophysics Data System (ADS)

    Scheffel, Jan; Lindvall, Kristoffer; Yik, Hiu Fai

    2018-05-01

    Finite difference methods are traditionally used for modelling the time domain in numerical weather prediction (NWP). Time-spectral solution is an attractive alternative for reasons of accuracy and efficiency and because time step limitations associated with causal CFL-like criteria, typical for explicit finite difference methods, are avoided. In this work, the Lorenz 1984 chaotic equations are solved using the time-spectral algorithm GWRM (Generalized Weighted Residual Method). Comparisons of accuracy and efficiency are carried out for both explicit and implicit time-stepping algorithms. It is found that the efficiency of the GWRM compares well with these methods, in particular at high accuracy. For perturbative scenarios, the GWRM was found to be as much as four times faster than the finite difference methods. A primary reason is that the GWRM time intervals typically are two orders of magnitude larger than those of the finite difference methods. The GWRM has the additional advantage to produce analytical solutions in the form of Chebyshev series expansions. The results are encouraging for pursuing further studies, including spatial dependence, of the relevance of time-spectral methods for NWP modelling.
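
    For reference, the Lorenz 1984 testbed can be integrated with a conventional adaptive finite-difference scheme as below; the parameter values are the commonly used ones and are an assumption here. The time-spectral GWRM itself (Chebyshev expansions over long time intervals) is not reproduced.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    a, b, F, G = 0.25, 4.0, 8.0, 1.0    # standard Lorenz-84 parameters

    def lorenz84(t, s):
        x, y, z = s
        return [-y * y - z * z - a * x + a * F,
                x * y - b * x * z - y + G,
                b * x * y + x * z - z]

    sol = solve_ivp(lorenz84, (0.0, 100.0), [1.0, 1.0, 1.0],
                    method="RK45", rtol=1e-8, atol=1e-10, dense_output=True)
    print("state at t=100:", sol.y[:, -1])
    ```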

  20. Toward cost-efficient sampling methods

    NASA Astrophysics Data System (ADS)

    Luo, Peng; Li, Yongli; Wu, Chong; Zhang, Guijie

    2015-09-01

    The sampling method has been paid much attention in the field of complex networks in general and statistical physics in particular. This paper proposes two new sampling methods based on the idea that a small number of vertices with high node degree can carry most of the structural information of a complex network. The two proposed sampling methods are efficient in sampling high-degree nodes, so they remain useful even when the sampling rate is low, which makes them cost-efficient. The first new sampling method is developed on the basis of the widely used stratified random sampling (SRS) method, and the second improves the well-known snowball sampling (SBS) method. In order to demonstrate the validity and accuracy of the two new sampling methods, we compare them with existing sampling methods in three commonly used simulated networks (scale-free, random, and small-world), and also in two real networks. The experimental results illustrate that the two proposed sampling methods perform much better than the existing sampling methods in terms of recovering the true network structure characteristics reflected by clustering coefficient, Bonacich centrality, and average path length, especially when the sampling rate is low.
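
    A hedged sketch of the idea behind a degree-biased snowball expansion follows: grow the sample preferentially through high-degree neighbours so hub nodes are reached even at low sampling rates. This is an illustration of the principle, not the exact procedure of the paper.

    ```python
    import networkx as nx

    def degree_biased_snowball(G, seed, budget):
        sampled, frontier = {seed}, set(G.neighbors(seed))
        while frontier and len(sampled) < budget:
            v = max(frontier, key=G.degree)          # highest-degree candidate
            frontier.remove(v)
            sampled.add(v)
            frontier |= set(G.neighbors(v)) - sampled
        return G.subgraph(sampled)

    G = nx.barabasi_albert_graph(2000, 3, seed=42)     # scale-free test network
    S = degree_biased_snowball(G, seed=0, budget=100)  # 5% sampling rate
    print(len(S), "nodes; mean degree in sample:",
          sum(dict(G.degree(S.nodes)).values()) / len(S))
    ```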

  1. Compatibility of Segments of Thermoelectric Generators

    NASA Technical Reports Server (NTRS)

    Snyder, G. Jeffrey; Ursell, Tristan

    2009-01-01

    A method of calculating (usually for the purpose of maximizing) the power-conversion efficiency of a segmented thermoelectric generator is based on equations derived from the fundamental equations of thermoelectricity. Because it is directly traceable to first principles, the method provides physical explanations in addition to predictions of phenomena involved in segmentation. In comparison with the finite-element method used heretofore to predict (without being able to explain) the behavior of a segmented thermoelectric generator, this method is much simpler to implement in practice: in particular, the efficiency of a segmented thermoelectric generator can be estimated by evaluating equations using only a hand-held calculator. In addition, the method provides for determination of cascading ratios. The concept of cascading is illustrated in the figure, and the cascading ratio is defined in the figure caption. An important aspect of the method is its approach to the issue of compatibility among segments, in combination with introduction of the concept of compatibility within a segment. Prior approaches involved the use of only averaged material properties. Two materials in direct contact could be examined for compatibility with each other, but there was no general framework for analysis of compatibility. The present method establishes such a framework. The mathematical derivation of the method begins with the definition of reduced efficiency of a thermoelectric generator as the ratio between (1) its thermal-to-electric power-conversion efficiency and (2) its Carnot efficiency (the maximum efficiency theoretically attainable, given its hot- and cold-side temperatures). The derivation involves calculation of the reduced efficiency of a model thermoelectric generator for which the hot-side temperature is only infinitesimally greater than the cold-side temperature. The derivation includes consideration of the ratio (u) between the electric current and heat-conduction power and leads to the concept of compatibility factor (s) for a given thermoelectric material, defined as the value of u that maximizes the reduced efficiency of the aforementioned model thermoelectric generator.
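
    A commonly quoted closed form for the compatibility factor from this line of work is s = (sqrt(1 + zT) - 1) / (alpha * T); the sketch below assumes that expression, and the material values are illustrative only. A frequently cited rule of thumb is that two segments whose compatibility factors differ by more than about a factor of two cannot both operate near their optimal reduced current.

    ```python
    import numpy as np

    def compatibility_factor(alpha, z, T):
        """alpha: Seebeck coefficient [V/K], z: figure of merit [1/K], T: [K]."""
        return (np.sqrt(1.0 + z * T) - 1.0) / (alpha * T)

    # e.g. a Bi2Te3-like segment at 400 K vs a skutterudite-like segment at 700 K
    s_cold = compatibility_factor(alpha=220e-6, z=2.5e-3, T=400.0)
    s_hot = compatibility_factor(alpha=180e-6, z=1.4e-3, T=700.0)
    print(f"s_cold = {s_cold:.2f} 1/V, s_hot = {s_hot:.2f} 1/V")
    ```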

  2. Efficient preparation of shuffled DNA libraries through recombination (Gateway) cloning.

    PubMed

    Lehtonen, Soili I; Taskinen, Barbara; Ojala, Elina; Kukkurainen, Sampo; Rahikainen, Rolle; Riihimäki, Tiina A; Laitinen, Olli H; Kulomaa, Markku S; Hytönen, Vesa P

    2015-01-01

    Efficient and robust subcloning is essential for the construction of high-diversity DNA libraries in the field of directed evolution. We have developed a more efficient method for the subcloning of DNA-shuffled libraries by employing recombination cloning (Gateway). The Gateway cloning procedure was performed directly after the gene reassembly reaction, without additional purification and amplification steps, thus simplifying the conventional DNA shuffling protocols. Recombination-based cloning, directly from the heterologous reassembly reaction, conserved the high quality of the library and reduced the time required for the library construction. The described method is generally applicable to the construction of DNA-shuffled gene libraries. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  3. Efficient matrix approach to optical wave propagation and Linear Canonical Transforms.

    PubMed

    Shakir, Sami A; Fried, David L; Pease, Edwin A; Brennan, Terry J; Dolash, Thomas M

    2015-10-05

    The Fresnel diffraction integral form of optical wave propagation and the more general Linear Canonical Transforms (LCT) are cast into a matrix transformation form. Taking advantage of recent efficient matrix multiply algorithms, this approach promises an efficient computational and analytical tool that is competitive with FFT-based methods but behaves better with respect to aliasing and transparent boundary conditions, and allows the number of sampling points and the computational window sizes of the input and output planes to be chosen independently. This flexibility makes the method significantly faster than FFT-based propagators when only a single point, as in Strehl metrics, or a limited number of points, as in power-in-the-bucket metrics, are needed in the output observation plane.
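
    A hedged sketch of the 1D Fresnel integral as a dense matrix transform U_out = K @ U_in follows, with independently chosen input and output grids, which is the flexibility highlighted above. Fast structured-matrix multiplies and the general LCT machinery are not reproduced, and all parameter values are illustrative.

    ```python
    import numpy as np

    wl, z = 1.0e-6, 0.5                      # wavelength [m], distance [m]
    n_in, n_out = 512, 64                    # grids need not match
    x_in = np.linspace(-2e-3, 2e-3, n_in)    # input window
    x_out = np.linspace(-0.5e-3, 0.5e-3, n_out)  # smaller output window
    dx = x_in[1] - x_in[0]

    # discretized Fresnel kernel: U(x2) = C * sum_x1 U(x1) exp(ik(x2-x1)^2/(2z)) dx1
    k = 2.0 * np.pi / wl
    pref = np.exp(1j * k * z) / np.sqrt(1j * wl * z)
    K = pref * np.exp(1j * k * (x_out[:, None] - x_in[None, :]) ** 2 / (2 * z)) * dx

    u_in = np.exp(-(x_in / 0.4e-3) ** 2)     # Gaussian input field
    u_out = K @ u_in                         # one matrix multiply per plane
    print("on-axis intensity:", abs(u_out[n_out // 2]) ** 2)
    ```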

  4. High order volume-preserving algorithms for relativistic charged particles in general electromagnetic fields

    NASA Astrophysics Data System (ADS)

    He, Yang; Sun, Yajuan; Zhang, Ruili; Wang, Yulei; Liu, Jian; Qin, Hong

    2016-09-01

    We construct high order symmetric volume-preserving methods for the relativistic dynamics of a charged particle by the splitting technique with processing. By expanding the phase space to include the time t, we give a more general construction of volume-preserving methods that can be applied to systems with time-dependent electromagnetic fields. The newly derived methods provide numerical solutions with good accuracy and conservative properties over long time of simulation. Furthermore, because of the use of an accuracy-enhancing processing technique, the explicit methods obtain high-order accuracy and are more efficient than the methods derived from standard compositions. The results are verified by the numerical experiments. Linear stability analysis of the methods shows that the high order processed method allows larger time step size in numerical integrations.

  5. A computational method for solving stochastic Itô–Volterra integral equations based on stochastic operational matrix for generalized hat basis functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heydari, M.H., E-mail: heydari@stu.yazd.ac.ir; The Laboratory of Quantum Information Processing, Yazd University, Yazd; Hooshmandasl, M.R., E-mail: hooshmandasl@yazd.ac.ir

    2014-08-01

    In this paper, a new computational method based on the generalized hat basis functions is proposed for solving stochastic Itô–Volterra integral equations. In this way, a new stochastic operational matrix for generalized hat functions on the finite interval [0,T] is obtained. By using these basis functions and their stochastic operational matrix, such problems can be transformed into linear lower triangular systems of algebraic equations which can be directly solved by forward substitution. Also, the rate of convergence of the proposed method is considered, and it is shown to be O(1/n^2). Further, in order to show the accuracy and reliability of the proposed method, the new approach is compared with the block pulse functions method on several examples. The obtained results reveal that the proposed method is more accurate and efficient in comparison with the block pulse functions method.

  6. Data decomposition method for parallel polygon rasterization considering load balancing

    NASA Astrophysics Data System (ADS)

    Zhou, Chen; Chen, Zhenjie; Liu, Yongxue; Li, Feixue; Cheng, Liang; Zhu, A.-xing; Li, Manchun

    2015-12-01

    It is essential to adopt parallel computing technology to rapidly rasterize massive polygon data. In parallel rasterization, it is difficult to design an effective data decomposition method. Conventional methods ignore load balancing of polygon complexity in parallel rasterization and thus fail to achieve high parallel efficiency. In this paper, a novel data decomposition method based on polygon complexity (DMPC) is proposed. First, four factors that possibly affect the rasterization efficiency were investigated. Then, a metric represented by the boundary number and raster pixel number in the minimum bounding rectangle was developed to calculate the complexity of each polygon. Using this metric, polygons were rationally allocated according to the polygon complexity, and each process could achieve balanced loads of polygon complexity. To validate the efficiency of DMPC, it was used to parallelize different polygon rasterization algorithms and tested on different datasets. Experimental results showed that DMPC could effectively parallelize polygon rasterization algorithms. Furthermore, the implemented parallel algorithms with DMPC could achieve good speedup ratios of at least 15.69 and generally outperformed conventional decomposition methods in terms of parallel efficiency and load balancing. In addition, the results showed that DMPC exhibited consistently better performance for different spatial distributions of polygons.
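
    The allocation step of such a scheme can be sketched as a greedy longest-processing-time assignment: score each polygon by a complexity metric combining its boundary count and the raster-pixel count of its minimum bounding rectangle, then deal polygons (largest first) to the process with the smallest accumulated load. The metric weights below are assumptions, not the paper's calibrated values.

    ```python
    import heapq

    def complexity(n_boundary, mbr_pixels, w1=1.0, w2=0.01):
        # hypothetical weighting of the two factors
        return w1 * n_boundary + w2 * mbr_pixels

    def allocate(polygons, n_procs):
        """polygons: list of (polygon_id, n_boundary, mbr_pixels)."""
        heap = [(0.0, p) for p in range(n_procs)]   # (accumulated load, proc id)
        heapq.heapify(heap)
        buckets = [[] for _ in range(n_procs)]
        scored = sorted(polygons, key=lambda p: -complexity(p[1], p[2]))
        for pid, nb, px in scored:                  # largest first
            load, proc = heapq.heappop(heap)
            buckets[proc].append(pid)
            heapq.heappush(heap, (load + complexity(nb, px), proc))
        return buckets

    polys = [(i, 10 + 7 * i % 50, 1000 + 137 * i % 9000) for i in range(20)]
    print(allocate(polys, n_procs=4))
    ```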

  7. New Operational Matrices for Solving Fractional Differential Equations on the Half-Line

    PubMed Central

    2015-01-01

    In this paper, the fractional-order generalized Laguerre operational matrices (FGLOM) of fractional derivatives and fractional integration are derived. These operational matrices are used together with spectral tau method for solving linear fractional differential equations (FDEs) of order ν (0 < ν < 1) on the half line. An upper bound of the absolute errors is obtained for the approximate and exact solutions. Fractional-order generalized Laguerre pseudo-spectral approximation is investigated for solving nonlinear initial value problem of fractional order ν. The extension of the fractional-order generalized Laguerre pseudo-spectral method is given to solve systems of FDEs. We present the advantages of using the spectral schemes based on fractional-order generalized Laguerre functions and compare them with other methods. Several numerical examples are implemented for FDEs and systems of FDEs including linear and nonlinear terms. We demonstrate the high accuracy and the efficiency of the proposed techniques. PMID:25996369

  8. New operational matrices for solving fractional differential equations on the half-line.

    PubMed

    Bhrawy, Ali H; Taha, Taha M; Alzahrani, Ebraheem O; Alzahrani, Ebrahim O; Baleanu, Dumitru; Alzahrani, Abdulrahim A

    2015-01-01

    In this paper, the fractional-order generalized Laguerre operational matrices (FGLOM) of fractional derivatives and fractional integration are derived. These operational matrices are used together with spectral tau method for solving linear fractional differential equations (FDEs) of order ν (0 < ν < 1) on the half line. An upper bound of the absolute errors is obtained for the approximate and exact solutions. Fractional-order generalized Laguerre pseudo-spectral approximation is investigated for solving nonlinear initial value problem of fractional order ν. The extension of the fractional-order generalized Laguerre pseudo-spectral method is given to solve systems of FDEs. We present the advantages of using the spectral schemes based on fractional-order generalized Laguerre functions and compare them with other methods. Several numerical examples are implemented for FDEs and systems of FDEs including linear and nonlinear terms. We demonstrate the high accuracy and the efficiency of the proposed techniques.

  9. Estimation of group means when adjusting for covariates in generalized linear models.

    PubMed

    Qu, Yongming; Luo, Junxiang

    2015-01-01

    Generalized linear models are commonly used to analyze categorical data such as binary, count, and ordinal outcomes. Adjusting for important prognostic factors or baseline covariates in generalized linear models may improve estimation efficiency. The model-based mean for a treatment group produced by most software packages estimates the response at the mean covariate, not the mean response for this treatment group in the studied population. Although this is not an issue for linear models, the model-based group mean estimates in generalized linear models can be seriously biased with respect to the true group means. We propose a new method to estimate the group means consistently, with a corresponding variance estimator. Simulations showed that the proposed method produces an unbiased estimator of the group means and provides the correct coverage probability. The proposed method was applied to analyze hypoglycemia data from clinical trials in diabetes. Copyright © 2014 John Wiley & Sons, Ltd.

  10. Design criteria for sediment basins.

    DOT National Transportation Integrated Search

    1972-01-01

    The need for controlling construction induced sediment to keep it from entering the nation's waterways is generally accepted. Efficient means and methods for sediment control, however, are not simple, and in some cases have not been developed to a hi...

  11. Regioselective, borinic acid-catalyzed monoacylation, sulfonylation and alkylation of diols and carbohydrates: expansion of substrate scope and mechanistic studies.

    PubMed

    Lee, Doris; Williamson, Caitlin L; Chan, Lina; Taylor, Mark S

    2012-05-16

    Synthetic and mechanistic aspects of the diarylborinic acid-catalyzed regioselective monofunctionalization of 1,2- and 1,3-diols are presented. Diarylborinic acid catalysis is shown to be an efficient and general method for monotosylation of pyranoside derivatives bearing three secondary hydroxyl groups (7 examples, 88% average yield). In addition, the scope of the selective acylation, sulfonylation, and alkylation is extended to 1,2- and 1,3-diols not derived from carbohydrates (28 examples); the efficiency, generality, and operational simplicity of this method are competitive with those of state-of-the-art protocols including the broadly applied organotin-catalyzed or -mediated reactions. Mechanistic details of the organoboron-catalyzed processes are explored using competition experiments, kinetics, and catalyst structure-activity relationships. These experiments are consistent with a mechanism in which a tetracoordinate borinate complex reacts with the electrophilic species in the turnover-limiting step of the catalytic cycle.

  12. Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology.

    PubMed

    Schaff, James C; Gao, Fei; Li, Ye; Novak, Igor L; Slepchenko, Boris M

    2016-12-01

    Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium 'sparks' as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell.

  13. The use of Lanczos's method to solve the large generalized symmetric definite eigenvalue problem

    NASA Technical Reports Server (NTRS)

    Jones, Mark T.; Patrick, Merrell L.

    1989-01-01

    The generalized eigenvalue problem, Kx = λMx, is of significant practical importance, especially in structural engineering, where it arises as the vibration and buckling problem. A new algorithm, LANZ, based on Lanczos's method is developed. LANZ uses a technique called dynamic shifting to improve the efficiency and reliability of the Lanczos algorithm. A new algorithm for solving the tridiagonal matrices that arise when using Lanczos's method is described. A modification of Parlett and Scott's selective orthogonalization algorithm is proposed. Results from an implementation of LANZ on a Convex C-220 show it to be superior to a subspace iteration code.
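
    For orientation, the shifted generalized symmetric problem can be sketched with SciPy's eigsh, which wraps ARPACK's implicitly restarted Lanczos; the shift-invert transformation selected by sigma is loosely analogous to the shifting used to reach eigenvalues near a target efficiently. The stiffness/mass-like matrices below are stand-ins, not LANZ itself.

    ```python
    import scipy.sparse as sp
    from scipy.sparse.linalg import eigsh

    n = 500
    K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc") * n**2
    M = sp.diags([1 / 6, 4 / 6, 1 / 6], [-1, 0, 1], shape=(n, n), format="csc")

    # shift-invert about sigma = 0 finds the 6 modes nearest the shift
    vals, vecs = eigsh(K, k=6, M=M, sigma=0.0, which="LM")
    print(vals)
    ```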

  14. Exact solutions to the time-fractional differential equations via local fractional derivatives

    NASA Astrophysics Data System (ADS)

    Guner, Ozkan; Bekir, Ahmet

    2018-01-01

    This article utilizes the local fractional derivative and the exp-function method to construct the exact solutions of nonlinear time-fractional differential equations (FDEs). For illustrating the validity of the method, it is applied to the time-fractional Camassa-Holm equation and the time-fractional generalized fifth-order KdV equation. Moreover, exact solutions are obtained for equations formed by different parameter values of the time-fractional generalized fifth-order KdV equation. This method is a reliable and efficient mathematical tool for solving FDEs and it can be applied to other nonlinear FDEs.

  15. Dual initiation strip charge apparatus and methods for making and implementing the same

    DOEpatents

    Jakaboski, Juan-Carlos [Albuquerque, NM]; Todd, Steven N [Rio Rancho, NM]; Polisar, Stephen [Albuquerque, NM]; Hughs, Chance [Tijeras, NM]

    2011-03-22

    A Dual Initiation Strip Charge (DISC) apparatus is initiated by a single initiation source and detonates a strip of explosive charge at two separate contacts. The reflections of the explosively induced stresses meet, creating a fracture that breaches a target along a generally single fracture contour and produces generally fragment-free scattering with no spallation. Methods for making and implementing a DISC apparatus provide numerous advantages over previous methods of creating explosive charges by utilizing steps for rapid prototyping; by implementing efficient steps and designs for metering consistent, repeatable, and controlled amounts of high explosive; and by utilizing readily available materials.

  16. Mean-Reverting Portfolio With Budget Constraint

    NASA Astrophysics Data System (ADS)

    Zhao, Ziping; Palomar, Daniel P.

    2018-05-01

    This paper considers the mean-reverting portfolio design problem arising from statistical arbitrage in the financial markets. We first propose a general problem formulation aimed at finding a portfolio of underlying component assets by optimizing a mean-reversion criterion characterizing the mean-reversion strength, taking into consideration the variance of the portfolio and an investment budget constraint. Then several specific problems are considered based on the general formulation, and efficient algorithms are proposed. Numerical results on both synthetic and market data show that our proposed mean-reverting portfolio design methods can generate consistent profits and outperform the traditional design methods and the benchmark methods in the literature.

  17. Parallel O(N) Stokes’ solver towards scalable Brownian dynamics of hydrodynamically interacting objects in general geometries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Xujun; Li, Jiyuan; Jiang, Xikai

    An efficient parallel Stokes’s solver is developed towards the complete inclusion of hydrodynamic interactions of Brownian particles in any geometry. A Langevin description of the particle dynamics is adopted, where the long-range interactions are included using a Green’s function formalism. We present a scalable parallel computational approach, where the general geometry Stokeslet is calculated following a matrix-free algorithm using the General geometry Ewald-like method. Our approach employs a highly-efficient iterative finite element Stokes’ solver for the accurate treatment of long-range hydrodynamic interactions within arbitrary confined geometries. A combination of mid-point time integration of the Brownian stochastic differential equation, the parallel Stokes’ solver, and a Chebyshev polynomial approximation for the fluctuation-dissipation theorem result in an O(N) parallel algorithm. We also illustrate the new algorithm in the context of the dynamics of confined polymer solutions in equilibrium and non-equilibrium conditions. Our method is extended to treat suspended finite size particles of arbitrary shape in any geometry using an Immersed Boundary approach.

  18. Parallel O(N) Stokes’ solver towards scalable Brownian dynamics of hydrodynamically interacting objects in general geometries

    DOE PAGES

    Zhao, Xujun; Li, Jiyuan; Jiang, Xikai; ...

    2017-06-29

    An efficient parallel Stokes’s solver is developed towards the complete inclusion of hydrodynamic interactions of Brownian particles in any geometry. A Langevin description of the particle dynamics is adopted, where the long-range interactions are included using a Green’s function formalism. We present a scalable parallel computational approach, where the general geometry Stokeslet is calculated following a matrix-free algorithm using the General geometry Ewald-like method. Our approach employs a highly-efficient iterative finite element Stokes’ solver for the accurate treatment of long-range hydrodynamic interactions within arbitrary confined geometries. A combination of mid-point time integration of the Brownian stochastic differential equation, the parallel Stokes’ solver, and a Chebyshev polynomial approximation for the fluctuation-dissipation theorem result in an O(N) parallel algorithm. We also illustrate the new algorithm in the context of the dynamics of confined polymer solutions in equilibrium and non-equilibrium conditions. Our method is extended to treat suspended finite size particles of arbitrary shape in any geometry using an Immersed Boundary approach.

  19. Recursive Newton-Euler formulation of manipulator dynamics

    NASA Technical Reports Server (NTRS)

    Nasser, M. G.

    1989-01-01

    A recursive Newton-Euler procedure is presented for the formulation and solution of manipulator dynamical equations. The procedure includes rotational and translational joints and a topological tree. This model was verified analytically using a planar two-link manipulator. Also, the model was tested numerically against the Walker-Orin model using the Shuttle Remote Manipulator System data. The hinge accelerations obtained from both models were identical. The computational requirements of the model vary linearly with the number of joints. The computational efficiency of this method exceeds that of Walker-Orin methods. This procedure may be viewed as a considerable generalization of Armstrong's method. A six-by-six formulation is adopted which enhances both the computational efficiency and simplicity of the model.

  20. Error analysis of multipoint flux domain decomposition methods for evolutionary diffusion problems

    NASA Astrophysics Data System (ADS)

    Arrarás, A.; Portero, L.; Yotov, I.

    2014-01-01

    We study space and time discretizations for mixed formulations of parabolic problems. The spatial approximation is based on the multipoint flux mixed finite element method, which reduces to an efficient cell-centered pressure system on general grids, including triangles, quadrilaterals, tetrahedra, and hexahedra. The time integration is performed by using a domain decomposition time-splitting technique combined with multiterm fractional step diagonally implicit Runge-Kutta methods. The resulting scheme is unconditionally stable and computationally efficient, as it reduces the global system to a collection of uncoupled subdomain problems that can be solved in parallel without the need for Schwarz-type iteration. Convergence analysis for both the semidiscrete and fully discrete schemes is presented.

  1. Parallel iterative methods for sparse linear and nonlinear equations

    NASA Technical Reports Server (NTRS)

    Saad, Youcef

    1989-01-01

    As three-dimensional models are gaining importance, iterative methods will become almost mandatory. Among these, preconditioned Krylov subspace methods have been viewed as the most efficient and reliable, when solving linear as well as nonlinear systems of equations. There have been several different approaches taken to adapt iterative methods for supercomputers. Some of these approaches are discussed, and the methods that deal more specifically with general unstructured sparse matrices, such as those arising from finite element methods, are emphasized.

  2. Experiences with online consultation systems in primary care: case study of one early adopter site

    PubMed Central

    Casey, Michael; Shaw, Sara; Swinglehurst, Deborah

    2017-01-01

    Background There is a strong policy drive towards implementing alternatives to face-to-face consultations in general practice to improve access, efficiency, and cost-effectiveness. These alternatives embrace novel technologies that are assumed to offer potential to improve care. Aim To explore the introduction of one online consultation system (Tele-Doc) and how it shapes working practices. Design and setting Mixed methods case study in an inner-city general practice. Method The study was conducted through interviews with IT developers, clinicians, and administrative staff, and scrutiny of documents, websites, and demonstrator versions of Tele-Doc, followed by thematic analysis and discourse analysis. Results Three interrelated themes were identified: online consultation systems as innovation, managing the ‘messiness’ of general practice consultations, and redistribution of the work of general practice. These themes raise timely questions about what it means to consult in contemporary general practice. Uptake of Tele-Doc by patients was low. Much of the work of the consultation was redistributed to patients and administrators, sometimes causing misunderstandings. The ‘messiness’ of consultations was hard to eliminate. In-house training focused on the technical application rather than associated transformations to practice work that were not anticipated. GPs welcomed varied modes of consulting, but the aspiration of improved efficiency was not realised in practice. Conclusion Tele-Doc offers a new kind of consultation that is still being worked out in practice. It may offer convenience for patients with discrete, single problems, and a welcome variation to GPs’ workload. Tele-Doc’s potential for addressing more complex problems and achieving efficiency is less clear, and its adoption may involve unforeseeable consequences. PMID:28993306

  3. Chapter 6: Residential Lighting Evaluation Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurnik, Charles W; Dimetrosky, Scott; Parkinson, Katie

    Given new regulations, increased complexity in the market, and the general shift from CFLs to LEDs, this evaluation protocol was updated in 2017 to shift the focus of the protocols toward LEDs and away from CFLs and to resolve evaluation uncertainties affecting residential lighting incentive programs.

  4. Rational design of aptazyme riboswitches for efficient control of gene expression in mammalian cells

    PubMed Central

    Zhong, Guocai; Wang, Haimin; Bailey, Charles C; Gao, Guangping; Farzan, Michael

    2016-01-01

    Efforts to control mammalian gene expression with ligand-responsive riboswitches have been hindered by lack of a general method for generating efficient switches in mammalian systems. Here we describe a rational-design approach that enables rapid development of efficient cis-acting aptazyme riboswitches. We identified communication-module characteristics associated with aptazyme functionality through analysis of a 32-aptazyme test panel. We then developed a scoring system that predicts an aptazyme's activity by integrating three characteristics of communication-module bases: hydrogen bonding, base stacking, and distance to the enzymatic core. We validated the power and generality of this approach by designing aptazymes responsive to three distinct ligands, each with markedly wider dynamic ranges than any previously reported. These aptazymes efficiently regulated adeno-associated virus (AAV)-vectored transgene expression in cultured mammalian cells and mice, highlighting one application of these broadly usable regulatory switches. Our approach enables efficient, protein-independent control of gene expression by a range of small molecules. DOI: http://dx.doi.org/10.7554/eLife.18858.001 PMID:27805569

  5. Deep imitation learning for 3D navigation tasks.

    PubMed

    Hussein, Ahmed; Elyan, Eyad; Gaber, Mohamed Medhat; Jayne, Chrisina

    2018-01-01

    Deep learning techniques have shown success in learning from raw high-dimensional data in various applications. While deep reinforcement learning is recently gaining popularity as a method to train intelligent agents, utilizing deep learning in imitation learning has been scarcely explored. Imitation learning can be an efficient method to teach intelligent agents by providing a set of demonstrations to learn from. However, generalizing to situations that are not represented in the demonstrations can be challenging, especially in 3D environments. In this paper, we propose a deep imitation learning method to learn navigation tasks from demonstrations in a 3D environment. The supervised policy is refined using active learning in order to generalize to unseen situations. This approach is compared to two popular deep reinforcement learning techniques: deep Q-networks and asynchronous advantage actor-critic (A3C). The proposed method as well as the reinforcement learning methods employ deep convolutional neural networks and learn directly from raw visual input. Methods for combining learning from demonstrations and experience are also investigated. This combination aims to join the generalization ability of learning by experience with the efficiency of learning by imitation. The proposed methods are evaluated on 4 navigation tasks in a 3D simulated environment. Navigation tasks are a typical problem that is relevant to many real applications. They pose the challenge of requiring demonstrations of long trajectories to reach the target and only providing delayed rewards (usually terminal) to the agent. The experiments show that the proposed method can successfully learn navigation tasks from raw visual input, while the methods that learn from experience fail to learn an effective policy. Moreover, it is shown that active learning can significantly improve the performance of the initially learned policy using a small number of active samples.

  6. Very Large Scale Optimization

    NASA Technical Reports Server (NTRS)

    Vanderplaats, Garrett; Townsend, James C. (Technical Monitor)

    2002-01-01

    The purpose of this research under the NASA Small Business Innovative Research program was to develop algorithms and associated software to solve very large nonlinear, constrained optimization tasks. Key issues included efficiency, reliability, memory, and gradient calculation requirements. This report describes the general optimization problem, ten candidate methods, and detailed evaluations of four candidates. The algorithm chosen for final development is a modern recreation of a 1960s external penalty function method that uses very limited computer memory and computational time. Although of lower efficiency, the new method can solve problems orders of magnitude larger than current methods. The resulting BIGDOT software has been demonstrated on problems with 50,000 variables and about 50,000 active constraints. For unconstrained optimization, it has solved a problem in excess of 135,000 variables. The method includes a technique for solving discrete variable problems that finds a "good" design, although a theoretical optimum cannot be guaranteed. It is very scalable in that the number of function and gradient evaluations does not change significantly with increased problem size. Test cases are provided to demonstrate the efficiency and reliability of the methods and software.

  7. Numerical Study of Boundary Layer Interaction with Shocks: Method Improvement and Test Computation

    NASA Technical Reports Server (NTRS)

    Adams, N. A.

    1995-01-01

    The objective is the development of a high-order and high-resolution method for the direct numerical simulation of shock turbulent-boundary-layer interaction. Details concerning the spatial discretization of the convective terms can be found in Adams and Shariff (1995). The computer code based on this method as introduced in Adams (1994) was formulated in Cartesian coordinates and thus has been limited to simple rectangular domains. For more general two-dimensional geometries, as a compression corner, an extension to generalized coordinates is necessary. To keep the requirements or limitations for grid generation low, the extended formulation should allow for non-orthogonal grids. Still, for simplicity and cost efficiency, periodicity can be assumed in one cross-flow direction. For easy vectorization, the compact-ENO coupling algorithm as used in Adams (1994) treated whole planes normal to the derivative direction with the ENO scheme whenever at least one point of this plane satisfied the detection criterion. This is apparently too restrictive for more general geometries and more complex shock patterns. Here we introduce a localized compact-ENO coupling algorithm, which is efficient as long as the overall number of grid points treated by the ENO scheme is small compared to the total number of grid points. Validation and test computations with the final code are performed to assess the efficiency and suitability of the computer code for the problems of interest. We define a set of parameters where a direct numerical simulation of a turbulent boundary layer along a compression corner with reasonably fine resolution is affordable.

  8. An efficient and general approach for implementing thermodynamic phase equilibria information in geophysical and geodynamic studies

    NASA Astrophysics Data System (ADS)

    Afonso, Juan Carlos; Zlotnik, Sergio; Díez, Pedro

    2015-10-01

    We present a flexible, general, and efficient approach for implementing thermodynamic phase equilibria information (in the form of sets of physical parameters) into geophysical and geodynamic studies. The approach is based on Tensor Rank Decomposition methods, which transform the original multidimensional discrete information into a separated representation that contains significantly fewer terms, thus drastically reducing the amount of information to be stored in memory during a numerical simulation or geophysical inversion. Accordingly, the amount and resolution of the thermodynamic information that can be used in a simulation or inversion increases substantially. In addition, the method is independent of the actual software used to obtain the primary thermodynamic information, and therefore, it can be used in conjunction with any thermodynamic modeling program and/or database. Also, the errors associated with the decomposition procedure are readily controlled by the user, depending on her/his actual needs (e.g., preliminary runs versus full resolution runs). We illustrate the benefits, generality, and applicability of our approach with several examples of practical interest for both geodynamic modeling and geophysical inversion/modeling. Our results demonstrate that the proposed method is a competitive and attractive candidate for implementing thermodynamic constraints into a broad range of geophysical and geodynamic studies. MATLAB implementations of the method and examples are provided as supporting information and can be downloaded from the journal's website.
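
    A hedged 2D illustration of the separated-representation idea: a dense table rho(T, P) of a thermophysical property is replaced by a short sum of rank-1 terms, rho(T, P) ~ sum_r s_r * a_r(T) * b_r(P). For two variables a truncated SVD gives exactly such a decomposition; the tensor-rank methods used in the paper generalize this to more dimensions (temperature, pressure, composition, ...). The property function below is synthetic.

    ```python
    import numpy as np

    nT, nP = 400, 300
    T = np.linspace(300.0, 1900.0, nT)[:, None]
    P = np.linspace(0.1, 10.0, nP)[None, :]
    rho = 3300.0 + 0.2 * P * np.exp(-T / 900.0) + 5.0 * np.sin(T / 200.0) / P

    U, s, Vt = np.linalg.svd(rho, full_matrices=False)
    r = 6                                            # kept rank, user-controlled
    approx = (U[:, :r] * s[:r]) @ Vt[:r, :]          # separated representation

    stored = r * (nT + nP + 1)                       # factors instead of full table
    print("max abs error:", np.abs(rho - approx).max())
    print(f"storage: {stored} vs {nT * nP} values ({stored / (nT * nP):.1%})")
    ```

    The truncation rank plays the role of the user-controlled error knob mentioned above: coarser representations for preliminary runs, finer ones for full-resolution inversions.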

  9. Local error estimates for adaptive simulation of the Reaction–Diffusion Master Equation via operator splitting

    PubMed Central

    Hellander, Andreas; Lawson, Michael J; Drawert, Brian; Petzold, Linda

    2015-01-01

    The efficiency of exact simulation methods for the reaction-diffusion master equation (RDME) is severely limited by the large number of diffusion events if the mesh is fine or if diffusion constants are large. Furthermore, inherent properties of exact kinetic-Monte Carlo simulation methods limit the efficiency of parallel implementations. Several approximate and hybrid methods have appeared that enable more efficient simulation of the RDME. A common feature to most of them is that they rely on splitting the system into its reaction and diffusion parts and updating them sequentially over a discrete timestep. This use of operator splitting enables more efficient simulation but it comes at the price of a temporal discretization error that depends on the size of the timestep. So far, existing methods have not attempted to estimate or control this error in a systematic manner. This makes the solvers hard to use for practitioners since they must guess an appropriate timestep. It also makes the solvers potentially less efficient than if the timesteps are adapted to control the error. Here, we derive estimates of the local error and propose a strategy to adaptively select the timestep when the RDME is simulated via a first order operator splitting. While the strategy is general and applicable to a wide range of approximate and hybrid methods, we exemplify it here by extending a previously published approximate method, the Diffusive Finite-State Projection (DFSP) method, to incorporate temporal adaptivity. PMID:26865735

  10. Local error estimates for adaptive simulation of the Reaction-Diffusion Master Equation via operator splitting.

    PubMed

    Hellander, Andreas; Lawson, Michael J; Drawert, Brian; Petzold, Linda

    2014-06-01

    The efficiency of exact simulation methods for the reaction-diffusion master equation (RDME) is severely limited by the large number of diffusion events if the mesh is fine or if diffusion constants are large. Furthermore, inherent properties of exact kinetic-Monte Carlo simulation methods limit the efficiency of parallel implementations. Several approximate and hybrid methods have appeared that enable more efficient simulation of the RDME. A common feature to most of them is that they rely on splitting the system into its reaction and diffusion parts and updating them sequentially over a discrete timestep. This use of operator splitting enables more efficient simulation but it comes at the price of a temporal discretization error that depends on the size of the timestep. So far, existing methods have not attempted to estimate or control this error in a systematic manner. This makes the solvers hard to use for practitioners since they must guess an appropriate timestep. It also makes the solvers potentially less efficient than if the timesteps are adapted to control the error. Here, we derive estimates of the local error and propose a strategy to adaptively select the timestep when the RDME is simulated via a first order operator splitting. While the strategy is general and applicable to a wide range of approximate and hybrid methods, we exemplify it here by extending a previously published approximate method, the Diffusive Finite-State Projection (DFSP) method, to incorporate temporal adaptivity.
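
    The timestep-control idea can be sketched in a deterministic analogue: estimate the first-order splitting's local error by comparing one full step with two half steps (step doubling), then choose the next dt from the controller dt_new = dt * (tol/err)^(1/2). The reaction and diffusion updates below are toy explicit maps; the RDME-specific error estimates derived in the paper are not reproduced.

    ```python
    import numpy as np

    def reaction(u, dt):  return u + dt * (-2.0 * u)        # simple decay
    def diffusion(u, dt): return u + dt * 0.5 * (np.roll(u, 1) - 2*u + np.roll(u, -1))

    def lie_step(u, dt):                                    # first-order splitting
        return diffusion(reaction(u, dt), dt)

    def adaptive_split(u, t_end, dt=0.1, tol=1e-4):
        t = 0.0
        while t < t_end:
            dt = min(dt, t_end - t)
            full = lie_step(u, dt)
            half = lie_step(lie_step(u, dt / 2), dt / 2)
            err = np.max(np.abs(full - half)) + 1e-16       # local error estimate
            if err <= tol:                                  # accept the step
                u, t = half, t + dt
            # shrink on rejection, grow cautiously on easy steps
            dt *= min(2.0, max(0.2, 0.9 * np.sqrt(tol / err)))
        return u

    u0 = np.exp(-np.linspace(-3, 3, 64) ** 2)
    print(adaptive_split(u0, 1.0).max())
    ```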

  11. Comparison of Predicted Thermoelectric Energy Conversion Efficiency by Cumulative Properties and Reduced Variables Approaches

    NASA Astrophysics Data System (ADS)

    Linker, Thomas M.; Lee, Glenn S.; Beekman, Matt

    2018-06-01

    The semi-analytical methods of thermoelectric energy conversion efficiency calculation based on the cumulative properties approach and reduced variables approach are compared for 21 high performance thermoelectric materials. Both approaches account for the temperature dependence of the material properties as well as the Thomson effect; thus, the predicted conversion efficiencies are generally lower than those based on the conventional thermoelectric figure of merit ZT for nearly all of the materials evaluated. The two methods also predict material energy conversion efficiencies that are in very good agreement with each other, even for large temperature differences (average percent difference of 4% with maximum observed deviation of 11%). The tradeoff between obtaining a reliable assessment of a material's potential for thermoelectric applications and the complexity of implementation of the three models, as well as the advantages of using more accurate modeling approaches in evaluating new thermoelectric materials, are highlighted.
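
    For reference, the conventional estimate that both methods refine is the constant-property efficiency formula built on the figure of merit. A minimal sketch (the standard textbook expression, evaluated with ZT at the mean temperature; not code from the paper):

    ```python
    import numpy as np

    def efficiency_zt(zt_mean, t_hot, t_cold):
        """Constant-property efficiency; ignores the Thomson effect and
        temperature-dependent properties, hence tends to be optimistic."""
        carnot = (t_hot - t_cold) / t_hot
        m = np.sqrt(1.0 + zt_mean)
        return carnot * (m - 1.0) / (m + t_cold / t_hot)

    print(efficiency_zt(1.0, 600.0, 300.0))   # ~0.11 for ZT = 1, dT = 300 K
    ```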

  12. Efficient computation of the Grünwald-Letnikov fractional diffusion derivative using adaptive time step memory

    NASA Astrophysics Data System (ADS)

    MacDonald, Christopher L.; Bhattacharya, Nirupama; Sprouse, Brian P.; Silva, Gabriel A.

    2015-09-01

    Computing numerical solutions to fractional differential equations can be computationally intensive due to the effect of non-local derivatives, in which all previous time points contribute to the current iteration. In general, numerical approaches that depend on truncating part of the system history, while efficient, can suffer from high degrees of error and inaccuracy. Here we present an adaptive time step memory method for smooth functions applied to the Grünwald-Letnikov fractional diffusion derivative. This method is computationally efficient and results in smaller errors during numerical simulations. Sampled points along the system's history at progressively longer intervals are assumed to reflect the values of neighboring time points. By including progressively fewer points backward in time, a temporally 'weighted' history is computed that includes contributions from the entire past of the system, maintaining accuracy, but with fewer points actually calculated, greatly improving computational efficiency.
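
    The full-memory baseline that the adaptive-memory method accelerates is compact to state. A sketch of the Grünwald-Letnikov derivative on a uniform grid (illustrative, not the paper's code); every past point contributes, giving the O(N^2) cost the adaptive scheme reduces:

    ```python
    import numpy as np
    from scipy.special import binom

    def gl_derivative(f, h, alpha):
        """Order-alpha Grünwald-Letnikov derivative of samples f with step h."""
        n = len(f)
        w = np.array([(-1) ** k * binom(alpha, k) for k in range(n)])
        return np.array([np.dot(w[: i + 1], f[i::-1]) for i in range(n)]) / h ** alpha

    t = np.linspace(0.01, 1.0, 100)
    # half-derivative of f(t) = t at t = 1 is 2/sqrt(pi) ~ 1.13
    print(gl_derivative(t, t[1] - t[0], 0.5)[-1])
    ```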

  13. Increasing the Energy Efficiency of the Cyclic Action Mechanisms in Rolling for a Roller Bed Used as an Example

    NASA Astrophysics Data System (ADS)

    Andreev, A. N.; Kolesnichenko, D. A.

    2017-12-01

    The possibility of increasing the energy efficiency of the production cycle in a roller bed is briefly reviewed and justified. The sequence diagram of operation of the electrical drive in a roller bed is analyzed, and the possible increase in the energy efficiency is calculated. A method for energy saving is described for the application of a frequency-controlled asynchronous electrical drive of drive rollers in a roller bed with an increased capacitor capacity in a dc link. A refined mathematical model is developed to describe the behavior of the electrical drive during the deceleration of a roller bed. An experimental setup is created and computer simulation and physical modeling are performed. The basic information flows of the general hierarchical automatic control system of an enterprise are described and determined with allowance for the proposed method of increasing the energy efficiency.

  14. Axioms of adaptivity

    PubMed Central

    Carstensen, C.; Feischl, M.; Page, M.; Praetorius, D.

    2014-01-01

    This paper aims first at a simultaneous axiomatic presentation of the proof of optimal convergence rates for adaptive finite element methods and second at some refinements of particular questions like the avoidance of (discrete) lower bounds, inexact solvers, inhomogeneous boundary data, or the use of equivalent error estimators. Solely four axioms guarantee the optimality in terms of the error estimators. Compared to the state of the art in the contemporary literature, the improvements of this article can be summarized as follows: First, a general framework is presented which covers the existing literature on optimality of adaptive schemes. The abstract analysis covers linear as well as nonlinear problems and is independent of the underlying finite element or boundary element method. Second, efficiency of the error estimator is neither needed to prove convergence nor quasi-optimal convergence behavior of the error estimator. In this paper, efficiency exclusively characterizes the approximation classes involved in terms of the best-approximation error and data resolution and so the upper bound on the optimal marking parameters does not depend on the efficiency constant. Third, some general quasi-Galerkin orthogonality is not only sufficient, but also necessary for the R-linear convergence of the error estimator, which is a fundamental ingredient in the current quasi-optimality analysis due to Stevenson 2007. Finally, the general analysis allows for equivalent error estimators and inexact solvers as well as different non-homogeneous and mixed boundary conditions. PMID:25983390
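
    As a concrete instance of the adaptive loop (solve, estimate, mark, refine) that such axiomatic frameworks analyze, the sketch below shows Dörfler (bulk) marking, a standard marking step in optimality proofs; this is a generic illustration, not code from the paper:

    ```python
    import numpy as np

    def doerfler_mark(eta, theta=0.5):
        """Select a minimal set of elements whose squared error indicators
        sum to at least theta times the total."""
        order = np.argsort(eta ** 2)[::-1]        # largest indicators first
        cumulative = np.cumsum(eta[order] ** 2)
        m = np.searchsorted(cumulative, theta * cumulative[-1]) + 1
        return order[:m]                          # indices of marked elements

    eta = np.array([0.5, 0.1, 0.3, 0.05, 0.2])
    print(doerfler_mark(eta, 0.6))
    ```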

  15. Efficient methods for overlapping group lasso.

    PubMed

    Yuan, Lei; Liu, Jun; Ye, Jieping

    2013-09-01

    The group Lasso is an extension of the Lasso for feature selection on (predefined) nonoverlapping groups of features. The nonoverlapping group structure limits its applicability in practice. There have been several recent attempts to study a more general formulation where groups of features are given, potentially with overlaps between the groups. The resulting optimization is, however, much more challenging to solve due to the group overlaps. In this paper, we consider the efficient optimization of the overlapping group Lasso penalized problem. We reveal several key properties of the proximal operator associated with the overlapping group Lasso, and compute the proximal operator by solving the smooth and convex dual problem, which allows the use of the gradient descent type of algorithms for the optimization. Our methods and theoretical results are then generalized to tackle the general overlapping group Lasso formulation based on the l_q norm. We further extend our algorithm to solve a nonconvex overlapping group Lasso formulation based on the capped norm regularization, which reduces the estimation bias introduced by the convex penalty. We have performed empirical evaluations using both a synthetic and the breast cancer gene expression dataset, which consists of 8,141 genes organized into (overlapping) gene sets. Experimental results show that the proposed algorithm is more efficient than existing state-of-the-art algorithms. Results also demonstrate the effectiveness of the nonconvex formulation for overlapping group Lasso.
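
    The building block that the dual approach generalizes is the proximal operator of the non-overlapping group Lasso, which has a closed form: blockwise soft-thresholding. A minimal sketch of that standard result (not the paper's overlapping-group algorithm):

    ```python
    import numpy as np

    def prox_group_lasso(v, groups, lam):
        """Blockwise soft-thresholding: shrink each group's norm by lam,
        zeroing groups whose norm falls below lam."""
        out = np.zeros_like(v)
        for g in groups:                    # each g is a list of indices
            norm = np.linalg.norm(v[g])
            if norm > lam:
                out[g] = (1.0 - lam / norm) * v[g]
        return out

    v = np.array([3.0, 4.0, 0.5, -0.2])
    print(prox_group_lasso(v, [[0, 1], [2, 3]], lam=1.0))  # second group zeroed
    ```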

  16. Nonimaging optical concentrators using graded-index dielectric.

    PubMed

    Zitelli, M

    2014-04-01

    A new generation of inhomogeneous nonimaging optical concentrators is proposed, able to achieve simultaneously high optical efficiency and acceptance solid angle at a given geometrical concentration factor. General design methods are given, and concentrators are numerically investigated and optimized.

  17. NASA aeronautics

    NASA Technical Reports Server (NTRS)

    Anderton, D. A.

    1982-01-01

    Aeronautical research programs are discussed in relation to research methods and the status of the programs. The energy efficient aircraft, STOL aircraft and general aviation aircraft are considered. Aerodynamic concepts, rotary wing aircraft, aircraft safety, noise reduction, and aircraft configurations are among the topics included.

  18. Optimization of blade motion of vertical axis turbine

    NASA Astrophysics Data System (ADS)

    Ma, Yong; Zhang, Liang; Zhang, Zhi-yang; Han, Duan-feng

    2016-04-01

    In this paper, a method is proposed to improve the energy efficiency of the vertical axis turbine. First of all, a single disk multiple stream-tube model is used to calculate individual fitness. Genetic algorithm is adopted to optimize blade pitch motion of vertical axis turbine with the maximum energy efficiency being selected as the optimization objective. Then, a particular data processing method is proposed, fitting the result data into a cosine-like curve. After that, a general formula calculating the blade motion is developed. Finally, CFD simulation is used to validate the blade pitch motion formula. The results show that the turbine's energy efficiency becomes higher after the optimization of blade pitch motion; compared with the fixed pitch turbine, the efficiency of variable-pitch turbine is significantly improved by the active blade pitch control; the energy efficiency declines gradually with the growth of speed ratio; besides, compactness has a larger effect on the blade motion, while the number of blades has little effect on it.
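
    The data-processing step, fitting optimized pitch angles to a cosine-like curve of the azimuth angle, amounts to a least-squares fit. In the sketch below, the model form, coefficients, and synthetic data are hypothetical placeholders, not the paper's formula:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def pitch_model(theta, a, b, phi):
        """Cosine-like pitch curve: offset a, amplitude b, phase phi."""
        return a + b * np.cos(theta + phi)

    theta = np.linspace(0.0, 2.0 * np.pi, 36)          # azimuth positions
    noise = np.random.default_rng(1).normal(0.0, 0.5, 36)
    pitch = 2.0 + 10.0 * np.cos(theta + 0.3) + noise   # synthetic "optimized" data
    params, _ = curve_fit(pitch_model, theta, pitch, p0=(0.0, 5.0, 0.0))
    print(params)   # recovers roughly (2.0, 10.0, 0.3)
    ```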

  19. Studying the Global Bifurcation Involving Wada Boundary Metamorphosis by a Method of Generalized Cell Mapping with Sampling-Adaptive Interpolation

    NASA Astrophysics Data System (ADS)

    Liu, Xiao-Ming; Jiang, Jun; Hong, Ling; Tang, Dafeng

    In this paper, a new method of Generalized Cell Mapping with Sampling-Adaptive Interpolation (GCMSAI) is presented in order to enhance the efficiency of the computation of the one-step probability transition matrix of the Generalized Cell Mapping method (GCM). Integrations with one mapping step are replaced by sampling-adaptive interpolations of third order. An explicit formula of interpolation error is derived for a sampling-adaptive control to switch on integrations for the accuracy of computations with GCMSAI. By applying the proposed method to a two-dimensional forced damped pendulum system, global bifurcations are investigated with observations of boundary metamorphoses including full to partial and partial to partial as well as the birth of fully Wada boundary. Moreover, GCMSAI requires only one-thirtieth to one-fiftieth of the computational time of the previous GCM.
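
    The object being accelerated is the one-step probability transition matrix over cells. A minimal sketch for a 1-D map (illustrative only; for flows, each sample requires the short-time integration that GCMSAI replaces with sampling-adaptive interpolation):

    ```python
    import numpy as np

    f = lambda x: 3.9 * x * (1.0 - x)       # logistic map as a toy system
    n_cells, n_samples = 50, 100
    edges = np.linspace(0.0, 1.0, n_cells + 1)

    P = np.zeros((n_cells, n_cells))
    for i in range(n_cells):
        x = np.random.default_rng(i).uniform(edges[i], edges[i + 1], n_samples)
        images = np.clip(np.digitize(f(x), edges) - 1, 0, n_cells - 1)
        for j in images:
            P[i, j] += 1.0 / n_samples      # row-stochastic by construction

    print(P.sum(axis=1)[:5])                # each row sums to 1
    ```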

  20. Testing the Predictive Capability of the High-Fidelity Generalized Method of Cells Using an Efficient Reformulation

    NASA Technical Reports Server (NTRS)

    Arnold, Steven M. (Technical Monitor); Bansal, Yogesh; Pindera, Marek-Jerzy

    2004-01-01

    The High-Fidelity Generalized Method of Cells is a new micromechanics model for unidirectionally reinforced periodic multiphase materials that was developed to overcome the original model's shortcomings. The high-fidelity version predicts the local stress and strain fields with dramatically greater accuracy relative to the original model through the use of a better displacement field representation. Herein, we test the high-fidelity model's predictive capability in estimating the elastic moduli of periodic composites characterized by repeating unit cells obtained by rotation of an infinite square fiber array through an angle about the fiber axis. Such repeating unit cells may contain a few or many fibers, depending on the rotation angle. In order to analyze such multi-inclusion repeating unit cells efficiently, the high-fidelity micromechanics model's framework is reformulated using the local/global stiffness matrix approach. The excellent agreement with the corresponding results obtained from the standard transformation equations confirms the new model's predictive capability for periodic composites characterized by multi-inclusion repeating unit cells lacking planes of material symmetry. Comparison of the effective moduli and local stress fields with the corresponding results obtained from the original Generalized Method of Cells dramatically highlights the original model's shortcomings for certain classes of unidirectional composites.

  1. Algebraic solution for the forward displacement analysis of the general 6-6 Stewart mechanism

    NASA Astrophysics Data System (ADS)

    Wei, Feng; Wei, Shimin; Zhang, Ying; Liao, Qizheng

    2016-01-01

    The solution for the forward displacement analysis (FDA) of the general 6-6 Stewart mechanism (i.e., the connection points of the moving and fixed platforms are not restricted to lying in a plane) has been extensively studied, but the efficiency of the solution remains to be effectively addressed. To this end, an algebraic elimination method is proposed for the FDA of the general 6-6 Stewart mechanism. The kinematic constraint equations are built using conformal geometric algebra (CGA). The kinematic constraint equations are transformed by a substitution of variables into seven equations with seven unknown variables. According to the characteristic of anti-symmetric matrices, the aforementioned seven equations can be further transformed into seven equations with four unknown variables by a substitution of variables using the Gröbner basis. Its elimination weight is increased through changing the degree of one variable, and sixteen equations with four unknown variables can be obtained using the Gröbner basis. A 40th-degree univariate polynomial equation is derived by constructing a relatively small-sized 9×9 Sylvester resultant matrix. Finally, two numerical examples are employed to verify the proposed method. The results indicate that the proposed method can effectively improve the efficiency of the solution and reduce the computational burden because of the small-sized resultant matrix.

  2. Online learning control using adaptive critic designs with sparse kernel machines.

    PubMed

    Xu, Xin; Hou, Zhongsheng; Lian, Chuanqiang; He, Haibo

    2013-05-01

    In the past decade, adaptive critic designs (ACDs), including heuristic dynamic programming (HDP), dual heuristic programming (DHP), and their action-dependent ones, have been widely studied to realize online learning control of dynamical systems. However, because neural networks with manually designed features are commonly used to deal with continuous state and action spaces, the generalization capability and learning efficiency of previous ACDs still need to be improved. In this paper, a novel framework of ACDs with sparse kernel machines is presented by integrating kernel methods into the critic of ACDs. To improve the generalization capability as well as the computational efficiency of kernel machines, a sparsification method based on the approximately linear dependence analysis is used. Using the sparse kernel machines, two kernel-based ACD algorithms, that is, kernel HDP (KHDP) and kernel DHP (KDHP), are proposed and their performance is analyzed both theoretically and empirically. Because of the representation learning and generalization capability of sparse kernel machines, KHDP and KDHP can obtain much better performance than previous HDP and DHP with manually designed neural networks. Simulation and experimental results of two nonlinear control problems, that is, a continuous-action inverted pendulum problem and a ball and plate control problem, demonstrate the effectiveness of the proposed kernel ACD methods.
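
    The approximately-linear-dependence (ALD) sparsification admits a compact sketch: a new sample joins the kernel dictionary only if representing it by the current dictionary in feature space leaves a residual above a threshold. A generic illustration with an assumed Gaussian kernel and tolerance, not the KHDP/KDHP code:

    ```python
    import numpy as np

    kernel = lambda x, y: np.exp(-np.sum((x - y) ** 2))   # Gaussian kernel

    def build_dictionary(samples, nu=0.1):
        dictionary = [samples[0]]
        for x in samples[1:]:
            K = np.array([[kernel(a, b) for b in dictionary] for a in dictionary])
            k = np.array([kernel(a, x) for a in dictionary])
            c = np.linalg.solve(K + 1e-10 * np.eye(len(dictionary)), k)
            delta = kernel(x, x) - k @ c       # ALD residual
            if delta > nu:                     # not well represented: keep it
                dictionary.append(x)
        return dictionary

    samples = np.random.default_rng(0).uniform(-1.0, 1.0, (200, 2))
    print(len(build_dictionary(samples)))      # far fewer than 200 samples
    ```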

  3. Modeling open nanophotonic systems using the Fourier modal method: generalization to 3D Cartesian coordinates.

    PubMed

    Häyrynen, Teppo; Osterkryger, Andreas Dyhl; de Lasson, Jakob Rosenkrantz; Gregersen, Niels

    2017-09-01

    Recently, an open geometry Fourier modal method based on a new combination of an open boundary condition and a non-uniform k-space discretization was introduced for rotationally symmetric structures, providing a more efficient approach for modeling nanowires and micropillar cavities [J. Opt. Soc. Am. A 33, 1298 (2016), doi:10.1364/JOSAA.33.001298]. Here, we generalize the approach to three-dimensional (3D) Cartesian coordinates, allowing for the modeling of rectangular geometries in open space. The open boundary condition is a consequence of having an infinite computational domain described using basis functions that expand the whole space. The strength of the method lies in discretizing the Fourier integrals using a non-uniform circular "dartboard" sampling of the Fourier k space. We show that our sampling technique leads to a more accurate description of the continuum of the radiation modes that leak out from the structure. We also compare our approach to conventional discretization with direct and inverse factorization rules commonly used in established Fourier modal methods. We apply our method to a variety of optical waveguide structures and demonstrate that the method leads to a significantly improved convergence, enabling more accurate and efficient modeling of open 3D nanophotonic structures.

  4. Highly efficient and exact method for parallelization of grid-based algorithms and its implementation in DelPhi

    PubMed Central

    Li, Chuan; Li, Lin; Zhang, Jie; Alexov, Emil

    2012-01-01

    The Gauss-Seidel method is a standard iterative numerical method widely used to solve a system of equations and, in general, is more efficient compared to other iterative methods, such as the Jacobi method. However, standard implementation of the Gauss-Seidel method restricts its utilization in parallel computing due to its requirement of using updated neighboring values (i.e., in the current iteration) as soon as they are available. Here we report an efficient and exact (not requiring assumptions) method to parallelize iterations and to reduce the computational time as a linear/nearly linear function of the number of CPUs. In contrast to other existing solutions, our method does not require any assumptions and is equally applicable for solving linear and nonlinear equations. This approach is implemented in the DelPhi program, which is a finite difference Poisson-Boltzmann equation solver to model electrostatics in molecular biology. This development makes the iterative procedure on obtaining the electrostatic potential distribution in the parallelized DelPhi severalfold faster than that in the serial code. Further, we demonstrate the advantages of the new parallelized DelPhi by computing the electrostatic potential and the corresponding energies of large supramolecular structures. PMID:22674480
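
    The serial dependency that complicates parallelization is visible in a textbook sketch of the iteration (not DelPhi code): within one sweep, each unknown is updated using the newest values of its neighbors.

    ```python
    import numpy as np

    def gauss_seidel(A, b, iterations=100):
        x = np.zeros_like(b)
        for _ in range(iterations):
            for i in range(len(b)):
                s = A[i] @ x - A[i, i] * x[i]   # uses already-updated entries
                x[i] = (b[i] - s) / A[i, i]
        return x

    A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
    b = np.array([1.0, 2.0, 3.0])
    print(gauss_seidel(A, b), np.linalg.solve(A, b))   # the two should agree
    ```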

  5. Numerical simulation using vorticity-vector potential formulation

    NASA Technical Reports Server (NTRS)

    Tokunaga, Hiroshi

    1993-01-01

    An accurate and efficient computational method is needed for three-dimensional incompressible viscous flows in engineering applications. When solving turbulent shear flows directly or with a subgrid-scale model, it is indispensable to resolve the small-scale fluid motions as well as the large-scale motions. From this point of view, the pseudo-spectral method has so far been used as the computational method. However, finite difference and finite element methods are widely applied to flows of practical importance, since these methods are easily adapted to complex geometric configurations. Several problems nevertheless arise in applying the finite difference method to direct and large eddy simulations. Accuracy is one of the most important; this point was already addressed by the present author in direct simulations of the instability of plane Poiseuille flow and of the transition to turbulence. In order to obtain high efficiency, the multi-grid Poisson solver is combined with the higher-order, accurate finite difference method. The formulation is also one of the most important problems in applying the finite difference method to incompressible turbulent flows. The three-dimensional Navier-Stokes equations have so far been solved in the primitive variables formulation. One of the major difficulties of this formulation is the rigorous satisfaction of the equation of continuity. In general, a staggered grid is used to satisfy the solenoidal condition for the velocity field at the wall boundary. In the vorticity-vector potential formulation, however, the velocity field satisfies the equation of continuity automatically; for this reason, the vorticity-vector potential method was extended to the generalized coordinate system. In the present article, we adopt the vorticity-vector potential formulation, the generalized coordinate system, and the 4th-order accurate difference method as the computational method. We present the computational method and apply it to computations of flows in a square cavity at large Reynolds number in order to investigate its effectiveness.

  6. Coupled wave-packets for non-adiabatic molecular dynamics: a generalization of Gaussian wave-packet dynamics to multiple potential energy surfaces

    DOE PAGES

    White, Alexander James; Tretiak, Sergei; Mozyrsky, Dima V.

    2016-04-25

    Accurate simulation of the non-adiabatic dynamics of molecules in excited electronic states is key to understanding molecular photo-physical processes. Here we present a novel method, based on a semiclassical approximation, that is as efficient as the commonly used mean field Ehrenfest or ad hoc surface hopping methods and properly accounts for interference and decoherence effects. This novel method is an extension of Heller's thawed Gaussian wave-packet dynamics that includes coupling between potential energy surfaces. By studying several standard test problems we demonstrate that the accuracy of the method can be systematically improved while maintaining high efficiency. The method is suitable for investigating the role of quantum coherence in the non-adiabatic dynamics of many-atom molecules.

  7. Self-learning Monte Carlo method and cumulative update in fermion systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Junwei; Shen, Huitao; Qi, Yang

    2017-06-07

    In this study, we develop the self-learning Monte Carlo (SLMC) method, a general-purpose numerical method recently introduced to simulate many-body systems, for studying interacting fermion systems. Our method uses a highly efficient update algorithm, which we design and dub "cumulative update", to generate new candidate configurations in the Markov chain based on a self-learned bosonic effective model. From a general analysis and a numerical study of the double exchange model as an example, we find that the SLMC with cumulative update drastically reduces the computational cost of the simulation, while remaining statistically exact. Remarkably, its computational complexity is far less than that of the conventional algorithm with local updates.
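
    Statistical exactness follows from a Metropolis correction of proposals generated with the learned effective model. The sketch below shows a generic SLMC-style acceptance step (an assumption based on the general SLMC construction, not code from this study):

    ```python
    import numpy as np

    def accept(logw_new, logw_old, logw_eff_new, logw_eff_old, rng):
        """Detailed balance for proposals drawn from the effective model:
        true-weight ratio times the inverse effective-weight ratio."""
        log_ratio = (logw_new - logw_old) - (logw_eff_new - logw_eff_old)
        return np.log(rng.uniform()) < min(0.0, log_ratio)

    rng = np.random.default_rng(0)
    print(accept(-1.0, -1.2, -1.1, -1.15, rng))
    ```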

  8. A general panel method for the analysis and design of arbitrary configurations in incompressible flows. [boundary value problem

    NASA Technical Reports Server (NTRS)

    Johnson, F. T.

    1980-01-01

    A method for solving the linear integral equations of incompressible potential flow in three dimensions is presented. Both analysis (Neumann) and design (Dirichlet) boundary conditions are treated in a unified approach to the general flow problem. The method is an influence coefficient scheme which employs source and doublet panels as boundary surfaces. Curved panels possessing singularity strengths, which vary as polynomials are used, and all influence coefficients are derived in closed form. These and other features combine to produce an efficient scheme which is not only versatile but eminently suited to the practical realities of a user-oriented environment. A wide variety of numerical results demonstrating the method is presented.

  9. A general CPL-AdS methodology for fixing dynamic parameters in dual environments.

    PubMed

    Huang, De-Shuang; Jiang, Wen

    2012-10-01

    The algorithm of the Continuous Point Location with Adaptive d-ary Search (CPL-AdS) strategy exhibits its efficiency in solving stochastic point location (SPL) problems. However, there is one bottleneck of this CPL-AdS strategy: when the dimension of the feature, or the number of divided subintervals for each iteration, d, is large, the decision table for the elimination process is practically unobtainable. On the other hand, a larger dimension of the features d can generally make this CPL-AdS strategy avoid oscillation and converge faster. This paper presents a generalized universal decision formula to solve this bottleneck problem. As a matter of fact, this decision formula has a wider usage beyond handling SPL problems, such as dealing with deterministic point location problems and searching data on a Single Instruction Stream-Multiple Data Stream machine based on the Concurrent Read, Exclusive Write parallel computer model. Meanwhile, we generalized the CPL-AdS strategy with an extended formula, which is capable of tracking an unknown dynamic parameter λ in both informative and deceptive environments. Furthermore, we employed different learning automata in the generalized CPL-AdS method to find out whether a faster learning algorithm leads to a better realization of the generalized CPL-AdS method. All of these contributions are vitally important both in theory and in practical applications. Finally, extensive experiments show that our proposed approaches are efficient and feasible.

  10. Integrated LED-based luminaire for general lighting

    DOEpatents

    Dowling, Kevin J.; Lys, Ihor A.; Williamson, Ryan C.; Roberge, Brian; Roberts, Ron; Morgan, Frederick; Datta, Michael Jay; Mollnow, Tomas Jonathan

    2016-08-30

    Lighting apparatus and methods employing LED light sources are described. The LED light sources are integrated with other components in the form of a luminaire or other general purpose lighting structure. Some of the lighting structures are formed as Parabolic Aluminum Reflector (PAR) luminaires, allowing them to be inserted into conventional sockets. The lighting structures display beneficial operating characteristics, such as efficient operation, high thermal dissipation, high output, and good color mixing.

  11. Integrated LED-based luminaire for general lighting

    DOEpatents

    Dowling, Kevin J.; Lys, Ihor A.; Roberge, Brian; Williamson, Ryan C.; Roberts, Ron; Datta, Michael; Mollnow, Tomas; Morgan, Frederick M.

    2013-03-05

    Lighting apparatus and methods employing LED light sources are described. The LED light sources are integrated with other components in the form of a luminaire or other general purpose lighting structure. Some of the lighting structures are formed as Parabolic Aluminum Reflector (PAR) luminaires, allowing them to be inserted into conventional sockets. The lighting structures display beneficial operating characteristics, such as efficient operation, high thermal dissipation, high output, and good color mixing.

  12. Generalized image charge solvation model for electrostatic interactions in molecular dynamics simulations of aqueous solutions

    PubMed Central

    Deng, Shaozhong; Xue, Changfeng; Baumketner, Andriy; Jacobs, Donald; Cai, Wei

    2013-01-01

    This paper extends the image charge solvation model (ICSM) [J. Chem. Phys. 131, 154103 (2009)], a hybrid explicit/implicit method to treat electrostatic interactions in computer simulations of biomolecules formulated for spherical cavities, to prolate spheroidal and triaxial ellipsoidal cavities, designed to better accommodate non-spherical solutes in molecular dynamics (MD) simulations. In addition to the utilization of a general truncated octahedron as the MD simulation box, central to the proposed extension is an image approximation method to compute the reaction field for a point charge placed inside such a non-spherical cavity by using a single image charge located outside the cavity. The resulting generalized image charge solvation model (GICSM) is tested in simulations of liquid water, and the results are analyzed in comparison with those obtained from the ICSM simulations as a reference. We find that, for improved computational efficiency due to smaller simulation cells and consequently a smaller number of explicit solvent molecules, the generalized model can still faithfully reproduce known static and dynamic properties of liquid water, at least for the systems considered in the present paper, indicating its great potential to become an accurate but more efficient alternative to the ICSM when bio-macromolecules of irregular shapes are to be simulated. PMID:23913979

  13. Local coding based matching kernel method for image classification.

    PubMed

    Song, Yan; McLoughlin, Ian Vince; Dai, Li-Rong

    2014-01-01

    This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Word (BoV) techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK) method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines advantages of both BoV and kernel based metrics, and achieves a linear computational complexity. This enables efficient and scalable visual matching to be performed on large scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including 15-Scenes, Caltech101/256, PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method.

  14. Simulation Methods for Design of Networked Power Electronics and Information Systems

    DTIC Science & Technology

    2014-07-01

    Insertion of latency in every branch and at every node permits the system model to be efficiently distributed across many separate computing cores. An... the system. We demonstrated extensibility and generality of the Virtual Test Bed (VTB) framework to support multiple solvers and their associated... Information Systems. Objectives: The overarching objective of this program is to develop methods for fast

  15. Detecting Lower Bounds to Quantum Channel Capacities.

    PubMed

    Macchiavello, Chiara; Sacchi, Massimiliano F

    2016-04-08

    We propose a method to detect lower bounds to quantum capacities of a noisy quantum communication channel by means of a few measurements. The method is easily implementable and does not require any knowledge about the channel. We test its efficiency by studying its performance for most well-known single-qubit noisy channels and for the generalized Pauli channel in an arbitrary finite dimension.

  16. Efficient Synthesis of γ-Lactams by a Tandem Reductive Amination/Lactamization Sequence

    PubMed Central

    Nöth, Julica; Frankowski, Kevin J.; Neuenswander, Benjamin; Aubé, Jeffrey; Reiser, Oliver

    2009-01-01

    A three-component method for synthesizing highly-substituted γ-lactams from readily available maleimides, aldehydes and amines is described. A new reductive amination/intramolecular lactamization sequence provides a straightforward route to the lactam products in a single manipulation. The general utility of this method is demonstrated by the parallel synthesis of a γ-lactam library. PMID:18338857

  17. Efficient synthesis of gamma-lactams by a tandem reductive amination/lactamization sequence.

    PubMed

    Nöth, Julica; Frankowski, Kevin J; Neuenswander, Benjamin; Aubé, Jeffrey; Reiser, Oliver

    2008-01-01

    A three-component method for the synthesis of highly substituted gamma-lactams from readily available maleimides, aldehydes, and amines is described. A new reductive amination/intramolecular lactamization sequence provides a straightforward route to the lactam products in a single manipulation. The general utility of this method is demonstrated by the parallel synthesis of a gamma-lactam library.

  18. Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology

    PubMed Central

    Gao, Fei; Li, Ye; Novak, Igor L.; Slepchenko, Boris M.

    2016-01-01

    Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium ‘sparks’ as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell. PMID:27959915

  19. Measurement and removal of cladding light in high power fiber systems

    NASA Astrophysics Data System (ADS)

    Walbaum, Till; Liem, Andreas; Schreiber, Thomas; Eberhardt, Ramona; Tünnermann, Andreas

    2018-02-01

    The amount of cladding light is important to ensure longevity of high power fiber components. However, it is usually measured either by adding a cladding light stripper (and thus permanently modifying the fiber) or by using a pinhole to only transmit the core light (ignoring that there may be cladding mode content in the core area). We present a novel noninvasive method to measure the cladding light content in double-clad fibers based on extrapolation from a cladding region of constant average intensity. The method can be extended to general multi-layer radially symmetric fibers, e.g. to evaluate light content in refractive index pedestal structures. To effectively remove cladding light in high power systems, cladding light strippers are used. We show that the stripping efficiency can be significantly improved by bending the fiber in such a device and present respective experimental data. Measurements were performed with respect to the numerical aperture as well, showing the dependency of the CLS efficiency on the NA of the cladding light and implying that efficiency data cannot reliably be given for a certain fiber in general without regard to the properties of the guided light.

  20. Continuation of probability density functions using a generalized Lyapunov approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baars, S., E-mail: s.baars@rug.nl; Viebahn, J.P., E-mail: viebahn@cwi.nl; Mulder, T.E., E-mail: t.e.mulder@uu.nl

    Techniques from numerical bifurcation theory are very useful to study transitions between steady fluid flow patterns and the instabilities involved. Here, we provide computational methodology to use parameter continuation in determining probability density functions of systems of stochastic partial differential equations near fixed points, under a small noise approximation. The key innovation is the efficient solution of a generalized Lyapunov equation using an iterative method involving low-rank approximations. We apply and illustrate the capabilities of the method using a problem in physical oceanography, i.e., the occurrence of multiple steady states of the Atlantic Ocean circulation.
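
    For small systems the Lyapunov solve can be done densely, which makes the role of the equation concrete. A sketch with made-up matrices (the paper's contribution is the low-rank iterative replacement for exactly this dense solve):

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    # Near a stable fixed point with Jacobian A and noise forcing B, the
    # small-noise stationary covariance C solves A C + C A^T = -B B^T.
    A = np.array([[-1.0, 0.3], [0.0, -2.0]])   # hypothetical stable Jacobian
    B = np.array([[0.5], [1.0]])               # hypothetical noise forcing
    C = solve_continuous_lyapunov(A, -B @ B.T)
    print(C)
    ```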

  1. Random search optimization based on genetic algorithm and discriminant function

    NASA Technical Reports Server (NTRS)

    Kiciman, M. O.; Akgul, M.; Erarslanoglu, G.

    1990-01-01

    The general problem of optimization with arbitrary merit and constraint functions, which could be convex, concave, monotonic, or non-monotonic, is treated using stochastic methods. To improve the efficiency of the random search methods, a genetic algorithm for the search phase and a discriminant function for the constraint-control phase were utilized. The validity of the technique is demonstrated by comparing the results to published test problem results. Numerical experimentation indicated that for cases where a quick near-optimum solution is desired, a general, user-friendly optimization code can be developed without serious penalties in both total computer time and accuracy.

  2. Nanoshells for photothermal therapy: a Monte-Carlo based numerical study of their design tolerance

    PubMed Central

    Grosges, Thomas; Barchiesi, Dominique; Kessentini, Sameh; Gréhan, Gérard; de la Chapelle, Marc Lamy

    2011-01-01

    The optimization of coated metallic nanoparticles and nanoshells is a current challenge for biological applications, especially for cancer photothermal therapy, considering both the continuous improvement of their fabrication and the increasing requirement of efficiency. The efficiency of the coupling between the illumination and such nanostructures for burning purposes depends unevenly on their geometrical parameters (radius, thickness of the shell) and material parameters (permittivities, which depend on the illumination wavelength). Through a Monte-Carlo method, we propose a numerical study of such a nanodevice, to evaluate tolerances (or uncertainty) on these parameters, given a threshold of efficiency, to facilitate the design of nanoparticles. The results could help to focus on the relevant parameters of the engineering process on which the absorbed energy is the most dependent. The Monte-Carlo method confirms that the best burning efficiencies are obtained for hollow nanospheres and exhibits the sensitivity of the absorbed electromagnetic energy as a function of each parameter. The proposed method is general and could be applied in the design and development of new embedded coated nanomaterials used in biomedicine applications. PMID:21698021

  3. Efficient robust doubly adaptive regularized regression with applications.

    PubMed

    Karunamuni, Rohana J; Kong, Linglong; Tu, Wei

    2018-01-01

    We consider the problem of estimation and variable selection for general linear regression models. Regularized regression procedures have been widely used for variable selection, but most existing methods perform poorly in the presence of outliers. We construct a new penalized procedure that simultaneously attains full efficiency and maximum robustness. Furthermore, the proposed procedure satisfies the oracle properties. The new procedure is designed to achieve sparse and robust solutions by imposing adaptive weights on both the decision loss and the penalty function. The proposed method of estimation and variable selection attains full efficiency when the model is correct and, at the same time, achieves maximum robustness when outliers are present. We examine the robustness properties using the finite-sample breakdown point and an influence function. We show that the proposed estimator attains the maximum breakdown point. Furthermore, there is no loss in efficiency when there are no outliers or the error distribution is normal. For practical implementation of the proposed method, we present a computational algorithm. We examine the finite-sample and robustness properties using Monte Carlo studies. Two datasets are also analyzed.

  4. Improving multivariate Horner schemes with Monte Carlo tree search

    NASA Astrophysics Data System (ADS)

    Kuipers, J.; Plaat, A.; Vermaseren, J. A. M.; van den Herik, H. J.

    2013-11-01

    Optimizing the cost of evaluating a polynomial is a classic problem in computer science. For polynomials in one variable, Horner's method provides a scheme for producing a computationally efficient form. For multivariate polynomials it is possible to generalize Horner's method, but this leaves freedom in the order of the variables. Traditionally, greedy schemes like most-occurring variable first are used. This simple textbook algorithm has given remarkably efficient results. Finding better algorithms has proved difficult. In trying to improve upon the greedy scheme we have implemented Monte Carlo tree search, a recent search method from the field of artificial intelligence. This results in better Horner schemes and reduces the cost of evaluating polynomials, sometimes by factors up to two.
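
    For a univariate polynomial the scheme is a few lines; the paper's search concerns the multivariate case, where the order in which variables are factored out matters. A minimal sketch of the univariate baseline:

    ```python
    def horner(coeffs, x):
        """Evaluate a polynomial given coefficients from the highest degree
        down to the constant, using n multiplications and n additions."""
        result = 0.0
        for a in coeffs:
            result = result * x + a
        return result

    # 2x^3 - 6x^2 + 2x - 1 at x = 3 gives 5.0
    print(horner([2.0, -6.0, 2.0, -1.0], 3.0))
    ```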

  5. Multigrid Method for Modeling Multi-Dimensional Combustion with Detailed Chemistry

    NASA Technical Reports Server (NTRS)

    Zheng, Xiaoqing; Liu, Chaoqun; Liao, Changming; Liu, Zhining; McCormick, Steve

    1996-01-01

    A highly accurate and efficient numerical method is developed for modeling 3-D reacting flows with detailed chemistry. A contravariant velocity-based governing system is developed for general curvilinear coordinates to maintain simplicity of the continuity equation and compactness of the discretization stencil. A fully-implicit backward Euler technique and a third-order monotone upwind-biased scheme on a staggered grid are used for the respective temporal and spatial terms. An efficient semi-coarsening multigrid method based on line-distributive relaxation is used as the flow solver. The species equations are solved in a fully coupled way and the chemical reaction source terms are treated implicitly. Example results are shown for a 3-D gas turbine combustor with strong swirling inflows.

  6. The effect of zinc (Zn) content to cell potential value and efficiency aluminium sacrificial anode in 0.2 M sulphuric acid environment

    NASA Astrophysics Data System (ADS)

    Akranata, Ahmad Ridho; Sulistijono, Awali, Jatmoko

    2018-04-01

    A sacrificial anode is a sacrificial component used to protect steel from corrosion. Generally, such components are made of aluminium and zinc for water environments. A sacrificial anode makes the protected metal structure cathodic by supplying current. The advantages of aluminium are corrosion resistance, non-toxicity, and ease of forming. Zinc is generally used as a coating on steel to prevent corrosion. This research was conducted to analyze the effect of zinc content on the cell potential value and the efficiency of aluminium sacrificial anodes made with the sand casting method in a 0.2 M sulphuric acid environment. The sacrificial anodes were fabricated by alloying aluminium and zinc in composition variations of pure Al, Al-3Zn, Al-6Zn, and Al-9Zn using an open-die sand casting process. The anodes were installed on ASTM A36 steel. The results showed that increasing the zinc content increases the cell potential, protection efficiency, and anode efficiency of the steel plate. Cell potential and weight loss measurements showed that increasing the zinc content shifts the cell potential to more positive values, protecting the ASTM A36 steel more efficiently; the protection efficiency and anodic efficiency of the Al-9Zn sacrificial anode were better than those of pure Al. The highest protection efficiency was obtained with the Al-9Zn alloy.

  7. Investigation of methods for sterilization of potting compounds and mated surfaces

    NASA Technical Reports Server (NTRS)

    Tulius, J. J.; Daley, D. J.; Phillips, G. B.

    1972-01-01

    The feasibility of using formaldehyde-liberating synthetic resins or polymers for the sterilization of potting compounds, mated and occluded areas, and spacecraft surfaces was demonstrated. The detailed study of interrelated parameters of formaldehyde gas sterilization revealed that efficient cycle conditions can be developed for the sterilization of spacecraft components. It was determined that certain parameters were more important than others in the development of cycles for specific applications. The use of formaldehyde gas for the sterilization of spacecraft components provides NASA with a highly efficient method which is inexpensive, reproducible, easily quantitated, materials compatible, operationally simple, generally non-hazardous and not thermally destructive.

  8. Application of fast Fourier transforms to the direct solution of a class of two-dimensional separable elliptic equations on the sphere

    NASA Technical Reports Server (NTRS)

    Moorthi, Shrinivas; Higgins, R. W.

    1993-01-01

    An efficient, direct, second-order solver for the discrete solution of a class of two-dimensional separable elliptic equations on the sphere (which generally arise in implicit and semi-implicit atmospheric models) is presented. The method involves a Fourier transformation in longitude and a direct solution of the resulting coupled second-order finite-difference equations in latitude. The solver is made efficient by vectorizing over longitudinal wave-number and by using a vectorized fast Fourier transform routine. It is evaluated using a prescribed solution method and compared with a multigrid solver and the standard direct solver from FISHPAK.
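
    The core idea can be shown on a one-dimensional periodic model problem (an illustration, not the paper's spherical solver, which transforms in longitude and then solves coupled finite-difference equations in latitude): a Fourier transform turns u'' = f into independent algebraic equations per wavenumber.

    ```python
    import numpy as np

    n = 64
    x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    f = -np.sin(3.0 * x)                    # exact solution is sin(3x)/9

    k = np.fft.fftfreq(n, d=2.0 * np.pi / n) * 2.0 * np.pi  # integer wavenumbers
    f_hat = np.fft.fft(f)
    u_hat = np.zeros_like(f_hat)
    nonzero = k != 0
    u_hat[nonzero] = -f_hat[nonzero] / k[nonzero] ** 2      # -k^2 u_hat = f_hat
    u = np.real(np.fft.ifft(u_hat))

    print(np.max(np.abs(u - np.sin(3.0 * x) / 9.0)))        # ~machine precision
    ```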

  9. Recommender engine for continuous-time quantum Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Huang, Li; Yang, Yi-feng; Wang, Lei

    2017-03-01

    Recommender systems play an essential role in the modern business world. They recommend favorable items such as books, movies, and search queries to users based on their past preferences. Applying similar ideas and techniques to Monte Carlo simulations of physical systems boosts their efficiency without sacrificing accuracy. Exploiting the quantum to classical mapping inherent in the continuous-time quantum Monte Carlo methods, we construct a classical molecular gas model to reproduce the quantum distributions. We then utilize powerful molecular simulation techniques to propose efficient quantum Monte Carlo updates. The recommender engine approach provides a general way to speed up the quantum impurity solvers.

  10. Computationally Efficient Nonlinear Bell Inequalities for Quantum Networks

    NASA Astrophysics Data System (ADS)

    Luo, Ming-Xing

    2018-04-01

    The correlations in quantum networks have attracted strong interest with new types of violations of locality. The standard Bell inequalities cannot characterize the multipartite correlations that are generated by multiple sources. The main problem is that no computationally efficient method is available for constructing useful Bell inequalities for general quantum networks. In this work, we show a significant improvement by presenting new, explicit Bell-type inequalities for general networks including cyclic networks. These nonlinear inequalities are related to the matching problem of an equivalent unweighted bipartite graph that allows constructing a polynomial-time algorithm. For the quantum resources consisting of bipartite entangled pure states and generalized Greenberger-Horne-Zeilinger (GHZ) states, we prove the generic nonmultilocality of quantum networks with multiple independent observers using new Bell inequalities. The violations are maximal with respect to the presented Tsirelson's bound for Einstein-Podolsky-Rosen states and GHZ states. Moreover, these violations hold for Werner states or some general noisy states. Our results suggest that the presented Bell inequalities can be used to characterize experimental quantum networks.

  11. The cost of quality: Implementing generalization and suppression for anonymizing biomedical data with minimal information loss.

    PubMed

    Kohlmayer, Florian; Prasser, Fabian; Kuhn, Klaus A

    2015-12-01

    With the ARX data anonymization tool structured biomedical data can be de-identified using syntactic privacy models, such as k-anonymity. Data is transformed with two methods: (a) generalization of attribute values, followed by (b) suppression of data records. The former method results in data that is well suited for analyses by epidemiologists, while the latter method significantly reduces loss of information. Our tool uses an optimal anonymization algorithm that maximizes output utility according to a given measure. To achieve scalability, existing optimal anonymization algorithms exclude parts of the search space by predicting the outcome of data transformations regarding privacy and utility without explicitly applying them to the input dataset. These optimizations cannot be used if data is transformed with generalization and suppression. As optimal data utility and scalability are important for anonymizing biomedical data, we had to develop a novel method. In this article, we first confirm experimentally that combining generalization with suppression significantly increases data utility. Next, we prove that, within this coding model, the outcome of data transformations regarding privacy and utility cannot be predicted. As a consequence, existing algorithms fail to deliver optimal data utility. We confirm this finding experimentally. The limitation of previous work can be overcome at the cost of increased computational complexity. However, scalability is important for anonymizing data with user feedback. Consequently, we identify properties of datasets that may be predicted in our context and propose a novel and efficient algorithm. Finally, we evaluate our solution with multiple datasets and privacy models. This work presents the first thorough investigation of which properties of datasets can be predicted when data is anonymized with generalization and suppression. Our novel approach adopts existing optimization strategies to our context and combines different search methods. The experiments show that our method is able to efficiently solve a broad spectrum of anonymization problems. Our work shows that implementing syntactic privacy models is challenging and that existing algorithms are not well suited for anonymizing data with transformation models which are more complex than generalization alone. As such models have been recommended for use in the biomedical domain, our results are of general relevance for de-identifying structured biomedical data. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
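
    The privacy model itself reduces to a counting condition: after generalization, every combination of quasi-identifier values among the non-suppressed records must occur at least k times. A minimal check (column names and data are hypothetical, not ARX code):

    ```python
    import pandas as pd

    def is_k_anonymous(df, quasi_identifiers, k):
        """True if every quasi-identifier combination occurs at least k times."""
        counts = df.groupby(quasi_identifiers).size()
        return bool((counts >= k).all())

    records = pd.DataFrame({
        "age": ["20-30", "20-30", "20-30", "30-40", "30-40"],  # generalized
        "zip": ["130**", "130**", "130**", "148**", "148**"],  # generalized
        "diagnosis": ["flu", "cold", "flu", "flu", "cold"],    # sensitive
    })
    print(is_k_anonymous(records, ["age", "zip"], k=2))        # True
    ```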

  12. Progress Toward an Efficient and General CFD Tool for Propulsion Design/Analysis

    NASA Technical Reports Server (NTRS)

    Cox, C. F.; Cinnella, P.; Westmoreland, S.

    1996-01-01

    The simulation of propulsive flows inherently involves chemical activity. Recent years have seen substantial strides made in the development of numerical schemes for reacting flowfields, in particular those involving finite-rate chemistry. However, finite-rate calculations are computationally intensive and require knowledge of the actual kinetics, which are not always known with sufficient accuracy. Alternatively, flow simulations based on the assumption of local chemical equilibrium are capable of obtaining physically reasonable results at far less computational cost. The present study summarizes the development of efficient numerical techniques for the simulation of flows in local chemical equilibrium, whereby a 'Black Box' chemical equilibrium solver is coupled to the usual gasdynamic equations. The generalization of the methods enables the modelling of any arbitrary mixture of thermally perfect gases, including air, combustion mixtures and plasmas. As a demonstration of the potential of the methodologies, several solutions, involving reacting and perfect gas flows, are presented. Included is a preliminary simulation of the SSME startup transient. Future enhancements to the proposed techniques are discussed, including more efficient finite-rate and hybrid (partial equilibrium) schemes. The algorithms that have been developed and are being optimized provide an efficient and general tool for the design and analysis of propulsion systems.

  13. MODELING A MIXTURE: PBPK/PD APPROACHES FOR PREDICTING CHEMICAL INTERACTIONS.

    EPA Science Inventory

    Since environmental chemical exposures generally involve multiple chemicals, there are both regulatory and scientific drivers to develop methods to predict outcomes of these exposures. Even using efficient statistical and experimental designs, it is not possible to test in vivo a...

  14. Endophytic Phytoaugmentation: Treating Wastewater and Runoff Through Augmented Phytoremediation

    PubMed Central

    Redfern, Lauren K.

    2016-01-01

    Abstract Limited options exist for efficiently and effectively treating water runoff from agricultural fields and landfills. Traditional treatments include excavation, transport to landfills, incineration, stabilization, and vitrification. In general, treatment options relying on biological methods such as bioremediation have the ability to be applied in situ and offer a sustainable remedial option with a lower environmental impact and reduced long-term operating expenses. These methods are generally considered ecologically friendly, particularly when compared to traditional physicochemical cleanup options. Phytoremediation, which relies on plants to take up and/or transform the contaminant of interest, is another alternative treatment method which has been developed. However, phytoremediation is not widely used, largely due to its low treatment efficiency. Endophytic phytoaugmentation is a variation on phytoremediation that relies on augmenting the phytoremediating plants with exogenous strains to stimulate associated plant-microbe interactions to facilitate and improve remediation efficiency. In this review, we offer a summary of the current knowledge as well as developments in endophytic phytoaugmentation and present some potential future applications for this technology. There has been a limited number of published endophytic phytoaugmentation case studies and much remains to be done to transition lab-scale results to field applications. Future research needs include large-scale endophytic phytoaugmentation experiments as well as the development of more exhaustive tools for monitoring plant-microbe-pollutant interactions. PMID:27158249

  15. More efficient parameter estimates for factor analysis of ordinal variables by ridge generalized least squares.

    PubMed

    Yuan, Ke-Hai; Jiang, Ge; Cheng, Ying

    2017-11-01

    Data in psychology are often collected using Likert-type scales, and it has been shown that factor analysis of Likert-type data is better performed on the polychoric correlation matrix than on the product-moment covariance matrix, especially when the distributions of the observed variables are skewed. In theory, factor analysis of the polychoric correlation matrix is best conducted using generalized least squares with an asymptotically correct weight matrix (AGLS). However, simulation studies showed that both least squares (LS) and diagonally weighted least squares (DWLS) perform better than AGLS, and thus LS or DWLS is routinely used in practice. In either LS or DWLS, the associations among the polychoric correlation coefficients are completely ignored. To mend such a gap between statistical theory and empirical work, this paper proposes new methods, called ridge GLS, for factor analysis of ordinal data. Monte Carlo results show that, for a wide range of sample sizes, ridge GLS methods yield uniformly more accurate parameter estimates than existing methods (LS, DWLS, AGLS). A real-data example indicates that estimates by ridge GLS are 9-20% more efficient than those by existing methods. Rescaled and adjusted test statistics as well as sandwich-type standard errors following the ridge GLS methods also perform reasonably well. © 2017 The British Psychological Society.

  16. A new Newton-like method for solving nonlinear equations.

    PubMed

    Saheya, B; Chen, Guo-Qing; Sui, Yun-Kang; Wu, Cai-Ying

    2016-01-01

    This paper presents an iterative scheme for solving nonlinear equations. We establish a new rational approximation model with linear numerator and denominator, which generalizes the local linear model. We then employ the new approximation for nonlinear equations and propose an improved Newton's method to solve them. The new method revises the Jacobian matrix by a rank-one matrix at each iteration and obtains the quadratic convergence property. The numerical performance and comparison show that the proposed method is efficient.
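
    Revising the Jacobian by a rank-one matrix each iteration is the hallmark of Broyden-type quasi-Newton schemes. The sketch below shows the classic Broyden update as a generic illustration (not the paper's rational-model update):

    ```python
    import numpy as np

    def fd_jacobian(F, x, h=1e-6):
        """Initial Jacobian approximation by forward finite differences."""
        n, fx = len(x), F(x)
        J = np.zeros((n, n))
        for j in range(n):
            e = np.zeros(n)
            e[j] = h
            J[:, j] = (F(x + e) - fx) / h
        return J

    def broyden(F, x, tol=1e-10, max_iter=50):
        J = fd_jacobian(F, x)
        for _ in range(max_iter):
            fx = F(x)
            if np.linalg.norm(fx) < tol:
                break
            s = np.linalg.solve(J, -fx)              # quasi-Newton step
            x = x + s
            y = F(x) - fx
            J += np.outer(y - J @ s, s) / (s @ s)    # rank-one Jacobian update
        return x

    F = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 4.0, x[0] - x[1]])
    print(broyden(F, np.array([1.0, 2.0])))          # ~ (sqrt(2), sqrt(2))
    ```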

  17. Theoretical and software considerations for nonlinear dynamic analysis

    NASA Technical Reports Server (NTRS)

    Schmidt, R. J.; Dodds, R. H., Jr.

    1983-01-01

    In the finite element method for structural analysis, it is generally necessary to discretize the structural model into a very large number of elements to accurately evaluate displacements, strains, and stresses. As the complexity of the model increases, the number of degrees of freedom can easily exceed the capacity of present-day software systems. Improvements to structural analysis software, including more efficient use of existing hardware and improved structural modeling techniques, are discussed. One modeling technique that is used successfully in static linear and nonlinear analysis is multilevel substructuring. This research extends the use of multilevel substructure modeling to include dynamic analysis and defines the requirements for a general purpose software system capable of efficient nonlinear dynamic analysis. The multilevel substructuring technique is presented, the analytical formulations and computational procedures for dynamic analysis and nonlinear mechanics are reviewed, and an approach to the design and implementation of a general purpose structural software system is presented.

  18. A Novel General Imaging Formation Algorithm for GNSS-Based Bistatic SAR.

    PubMed

    Zeng, Hong-Cheng; Wang, Peng-Bo; Chen, Jie; Liu, Wei; Ge, LinLin; Yang, Wei

    2016-02-26

    Global Navigation Satellite System (GNSS)-based bistatic Synthetic Aperture Radar (SAR) has recently come to play an increasingly significant role in remote sensing applications, owing to its low cost and real-time global coverage capability. In this paper, a general imaging formation algorithm is proposed for accurately and efficiently focusing GNSS-based bistatic SAR data, which avoids the interpolation processing of traditional back projection algorithms (BPAs). A two-dimensional point-target spectrum model is first presented, and the bulk range cell migration correction (RCMC) is then derived for reducing range cell migration (RCM) and coarse focusing. As the bulk RCMC seriously changes the range history of the radar signal, a modified and much more efficient hybrid correlation operation is introduced for compensating residual phase errors. Simulation results are presented for a general geometric topology with non-parallel trajectories and unequal velocities for the transmitter and receiver platforms, showing the satisfactory performance of the proposed method.

  19. A Novel General Imaging Formation Algorithm for GNSS-Based Bistatic SAR

    PubMed Central

    Zeng, Hong-Cheng; Wang, Peng-Bo; Chen, Jie; Liu, Wei; Ge, LinLin; Yang, Wei

    2016-01-01

    Global Navigation Satellite System (GNSS)-based bistatic Synthetic Aperture Radar (SAR) has recently come to play an increasingly significant role in remote sensing applications, owing to its low cost and real-time global coverage capability. In this paper, a general imaging formation algorithm is proposed for accurately and efficiently focusing GNSS-based bistatic SAR data, which avoids the interpolation processing of traditional back projection algorithms (BPAs). A two-dimensional point-target spectrum model is first presented, and the bulk range cell migration correction (RCMC) is then derived for reducing range cell migration (RCM) and coarse focusing. As the bulk RCMC seriously changes the range history of the radar signal, a modified and much more efficient hybrid correlation operation is introduced for compensating residual phase errors. Simulation results are presented for a general geometric topology with non-parallel trajectories and unequal velocities for the transmitter and receiver platforms, showing the satisfactory performance of the proposed method. PMID:26927117

  20. Complexity analysis based on generalized deviation for financial markets

    NASA Astrophysics Data System (ADS)

    Li, Chao; Shang, Pengjian

    2018-03-01

    In this paper, a new modified method, complexity analysis based on generalized deviation, is proposed as a measure of the correlation between past price and future volatility in financial time series. In comparison with the earlier retarded volatility model, the new approach is both simple and computationally efficient. The method, based on the generalized deviation function, offers a thorough means of quantifying the rules of financial markets. The robustness of the method is verified by numerical experiments with both artificial and financial time series. Results show that the generalized deviation complexity analysis method not only identifies the volatility of financial time series, but also provides a comprehensive way of distinguishing the different characteristics of stock indices and individual stocks. Exponential functions can be used to fit the volatility curves and quantify the changes of complexity in stock market data. We then study the influence of the negative domain of the deviation coefficient and the differences between volatile periods and calm periods. From the analysis of the experimental model, we find that the generalized deviation model has definite advantages in exploring the relationship between historical returns and future volatility.

  1. Identification of unmeasured variables in the set of model constraints of the data reconciliation in a power unit

    NASA Astrophysics Data System (ADS)

    Szega, Marcin; Nowak, Grzegorz Tadeusz

    2013-12-01

    In the generalized method of data reconciliation, the condition equations may include, besides substance and energy balances, equations that do not strictly have the status of conservation laws; the empirical coefficients in these equations are treated as unknown values. In the application of the generalized method of data reconciliation to a supercritical power unit, this class of equations includes: the steam flow capacity of a turbine stage group, the internal adiabatic efficiency of a stage group, equations for pressure drops in pipelines, and equations for heat transfer in regenerative heat exchangers. A mathematical model of the power unit was developed in the Thermoflex code. Using this model, off-design calculations were performed at several load points of the power unit. From these calculations, the unknown values and empirical coefficients of the generalized data reconciliation method applied to the power unit were identified. The additional condition equations will be used in the generalized method of data reconciliation for optimizing the placement of measurements in the redundant measurement system of a power unit for new control systems.
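
    For orientation, the classical linear special case of data reconciliation (condition equations that are exact linear balances, with no unknown empirical coefficients) has a closed-form solution, sketched below with invented measurements. The generalized method described in the record extends this by treating empirical coefficients in the condition equations as additional unknowns.

```python
import numpy as np

# Classical linear data reconciliation: adjust raw measurements x so that
# the condition equations A @ x_hat = 0 hold exactly, moving each
# measurement no more than its variance allows.
x = np.array([100.3, 60.2, 39.1])       # measured flows, e.g. kg/s (invented)
Sigma = np.diag([1.0, 0.5, 0.5])**2     # measurement variances
A = np.array([[1.0, -1.0, -1.0]])       # mass balance: x0 = x1 + x2

residual = A @ x
x_hat = x - Sigma @ A.T @ np.linalg.solve(A @ Sigma @ A.T, residual)

print("raw:        ", x)
print("reconciled: ", x_hat)            # satisfies the balance exactly
print("balance:    ", A @ x_hat)        # ~ 0
```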

  2. Assessment of Li/SOCL2 Battery Technology; Reserve, Thin-Cell Design. Volume 3

    DTIC Science & Technology

    1990-06-01

    power density and efficiency of an operating electrochemical system. The method is general - the examples to illustrate the selected points pertain to... System: Design, Manufacturing and QC Considerations), S. Szpak, P. A. Mosier-Boss, and J. J. Smith, 34th International Power Sources Symposium, Cherry... (i) the computer time required to evaluate the integral in Eqn. III, and (ii) the lack of generality in the attainable lineshapes. However, since this

  3. Rapid iterative reanalysis for automated design

    NASA Technical Reports Server (NTRS)

    Bhatia, K. G.

    1973-01-01

    A method for iterative reanalysis in automated structural design is presented for a finite-element analysis using the direct stiffness approach. A basic feature of the method is that the generalized stiffness and inertia matrices are expressed as functions of structural design parameters, and these generalized matrices are expanded in Taylor series about the initial design. Only the linear terms are retained in the expansions. The method is approximate because it uses static condensation, modal reduction, and the linear Taylor series expansions. The exact linear representation of the expansions of the generalized matrices is also described and a basis for the present method is established. Results of applications of the present method to the recalculation of the natural frequencies of two simple platelike structural models are presented and compared with results obtained using a commonly applied analysis procedure as a reference. In general, the results are in good agreement. A comparison of the computer times required for the present method and the reference method indicated that the present method required substantially less time for reanalysis. Although the results presented are for relatively small-order problems, the present method will become more efficient relative to the reference method as the problem size increases. An extension of the present method to static reanalysis is described, and a basis for unifying the static and dynamic reanalysis procedures is presented.
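
    A minimal sketch of the linearization idea, on an invented two-degree-of-freedom model: expand the generalized stiffness matrix to first order in a design parameter and recompute natural frequencies from the linearized matrices rather than reassembling the full model.

```python
import numpy as np
from scipy.linalg import eigh

# Illustrative matrices; a real application would obtain K0, M0, and the
# sensitivities from a finite-element model about the initial design p0.
K0 = np.array([[4.0, -2.0], [-2.0, 4.0]])     # stiffness at p0
M0 = np.eye(2)                                # mass at p0
dK_dp = np.array([[1.0, -0.5], [-0.5, 1.0]])  # sensitivity dK/dp at p0
dM_dp = np.zeros((2, 2))

def frequencies(dp):
    K = K0 + dK_dp * dp        # linear Taylor expansion, higher terms dropped
    M = M0 + dM_dp * dp
    w2 = eigh(K, M, eigvals_only=True)        # generalized eigenproblem
    return np.sqrt(w2)

print("initial design  :", frequencies(0.0))
print("perturbed dp=0.2:", frequencies(0.2))
```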

  4. Efficient Analysis of Complex Structures

    NASA Technical Reports Server (NTRS)

    Kapania, Rakesh K.

    2000-01-01

    The various accomplishments achieved during this project are: (1) a survey of Neural Network (NN) applications using the MATLAB NN Toolbox in structural engineering, especially on equivalent continuum models (Appendix A); (2) application of NNs and GAs to simulate and synthesize substructures: 1-D and 2-D beam problems (Appendix B); (3) development of an equivalent plate-model analysis method (EPA) for static and vibration analysis of general trapezoidal built-up wing structures composed of skins, spars and ribs, with calculation of numerous test cases and comparison with measurements or FEA results (Appendix C); (4) basic work on using second-order sensitivities to simulate wing modal response, discussion of sensitivity evaluation approaches, and some results (Appendix D); (5) a general methodology for simulating modal responses by direct application of NNs and by sensitivity techniques, in a design space composed of a number of design points, with the two methods compared through examples (Appendix E); (6) a general methodology for efficient analysis of complex wing structures by indirect application of NNs, the NN-aided Equivalent Plate Analysis, with training of the neural networks for several design spaces applicable to the actual design of complex wings (Appendix F).

  5. Multifractality, efficiency analysis of Chinese stock market and its cross-correlation with WTI crude oil price

    NASA Astrophysics Data System (ADS)

    Zhuang, Xiaoyang; Wei, Yu; Ma, Feng

    2015-07-01

    In this paper, the multifractality and efficiency degrees of ten important Chinese sectoral indices are evaluated using MF-DFA and generalized Hurst exponents. The study also scrutinizes the dynamics of the efficiency of the Chinese sectoral stock market with a rolling-window approach. The overall empirical findings reveal that all the sectoral indices of the Chinese stock market exhibit different degrees of multifractality. The different efficiency measures agree that the 300 Materials index is the least efficient index, although they differ slightly on the most efficient one. The 300 Information Technology, 300 Telecommunication Services and 300 Health Care indices are comparatively efficient. We also investigate the cross-correlations between the ten sectoral indices and the WTI crude oil price based on Multifractal Detrended Cross-correlation Analysis. Finally, some relevant discussions and implications of the empirical results are presented.
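
    MF-DFA itself follows a standard recipe (profile, segment-wise detrending, q-th order fluctuation averages, log-log slopes). A compact sketch with illustrative scales and moments; white noise should give h(q) close to 0.5 for all q.

```python
import numpy as np

def mfdfa_hurst(x, scales, qs, order=1):
    """Generalized Hurst exponents h(q) via MF-DFA (standard recipe;
    the scales, moments, and series below are illustrative)."""
    Y = np.cumsum(x - np.mean(x))                  # profile
    h = []
    for q in qs:
        logF = []
        for s in scales:
            n_seg = len(Y) // s
            F2 = []
            for v in range(n_seg):
                seg = Y[v*s:(v+1)*s]
                t = np.arange(s)
                trend = np.polyval(np.polyfit(t, seg, order), t)
                F2.append(np.mean((seg - trend)**2))
            F2 = np.asarray(F2)
            if q == 0:                             # logarithmic average for q=0
                Fq = np.exp(0.5 * np.mean(np.log(F2)))
            else:
                Fq = np.mean(F2**(q/2))**(1.0/q)
            logF.append(np.log(Fq))
        h.append(np.polyfit(np.log(scales), logF, 1)[0])  # log-log slope
    return np.array(h)

rng = np.random.default_rng(1)
x = rng.normal(size=4096)                          # uncorrelated noise
scales = [16, 32, 64, 128, 256]
qs = [-3, -1, 0, 1, 3]
print(mfdfa_hurst(x, scales, qs).round(2))         # all ~0.5 for white noise
```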

  6. Integrated pest management and allocation of control efforts for vector-borne diseases

    USGS Publications Warehouse

    Ginsberg, H.S.

    2001-01-01

    Applications of various control methods were evaluated to determine how to integrate methods so as to minimize the number of human cases of vector-borne diseases. These diseases can be controlled by lowering the number of vector-human contacts (e.g., by pesticide applications or use of repellents), or by lowering the proportion of vectors infected with pathogens (e.g., by lowering or vaccinating reservoir host populations). Control methods should be combined in such a way as to most efficiently lower the probability of human encounter with an infected vector. Simulations using a simple probabilistic model of pathogen transmission suggest that the most efficient way to integrate different control methods is to combine methods that have the same effect (e.g., combine treatments that lower the vector population; or combine treatments that lower pathogen prevalence in vectors). Combining techniques that have different effects (e.g., a technique that lowers vector populations with a technique that lowers pathogen prevalence in vectors) will be less efficient than combining two techniques that both lower vector populations or combining two techniques that both lower pathogen prevalence, costs being the same. Costs of alternative control methods generally differ, so the efficiency of various combinations at lowering human contact with infected vectors should be estimated at available funding levels. Data should be collected from initial trials to improve the effects of subsequent interventions on the number of human cases.

  7. Increased glycosylation efficiency of recombinant proteins in Escherichia coli by auto-induction.

    PubMed

    Ding, Ning; Yang, Chunguang; Sun, Shenxia; Han, Lichi; Ruan, Yao; Guo, Longhua; Hu, Xuejun; Zhang, Jianing

    2017-03-25

    Escherichia coli cells have been considered as promising hosts for producing N-glycosylated proteins since the successful production of N-glycosylated protein in E. coli with the pgl (N-linked protein glycosylation) locus from Campylobacter jejuni. However, one hurdle to producing N-glycosylated proteins at large scale in E. coli is inefficient glycosylation. In this study, we developed a strategy for the production of N-glycosylated proteins with high efficiency via an optimized auto-induction method. The 10th human fibronectin type III domain (FN3) was engineered with the native glycosylation sequon DFNRSK and the optimized sequon DQNAT at the C-terminus, attached via a flexible linker, as acceptor protein models. The resulting glycosylation efficiencies were confirmed by Western blots with the anti-FLAG M1 antibody. Increased glycosylation efficiency was obtained by changing from conventional IPTG induction to the auto-induction method, which raised the glycosylation efficiencies from 60% and 75% up to 90% and 100%, respectively. Moreover, when the glycosylation sequon was inserted in a loop of FN3 (an acceptor sequon within a local structural conformation), the glycosylation efficiency was increased from 35% to 80% by our optimized auto-induction procedure. To demonstrate the general applicability of the optimized auto-induction method, the reconstituted lsg locus from Haemophilus influenzae and PglB from C. jejuni were utilized, and this led to 100% glycosylation efficiency. Our studies provide quantitative evidence that the optimized auto-induction method will facilitate the large-scale production of pure exogenous N-glycosylated proteins in E. coli cells. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Iterative Most-Likely Point Registration (IMLP): A Robust Algorithm for Computing Optimal Shape Alignment

    PubMed Central

    Billings, Seth D.; Boctor, Emad M.; Taylor, Russell H.

    2015-01-01

    We present a probabilistic registration algorithm that robustly solves the problem of rigid-body alignment between two shapes with high accuracy, by aptly modeling measurement noise in each shape, whether isotropic or anisotropic. For point-cloud shapes, the probabilistic framework additionally enables modeling locally-linear surface regions in the vicinity of each point to further improve registration accuracy. The proposed Iterative Most-Likely Point (IMLP) algorithm is formed as a variant of the popular Iterative Closest Point (ICP) algorithm, which iterates between point-correspondence and point-registration steps. IMLP’s probabilistic framework is used to incorporate a generalized noise model into both the correspondence and the registration phases of the algorithm, hence its name as a most-likely point method rather than a closest-point method. To efficiently compute the most-likely correspondences, we devise a novel search strategy based on a principal direction (PD)-tree search. We also propose a new approach to solve the generalized total-least-squares (GTLS) sub-problem of the registration phase, wherein the point correspondences are registered under a generalized noise model. Our GTLS approach has improved accuracy, efficiency, and stability compared to prior methods presented for this problem and offers a straightforward implementation using standard least squares. We evaluate the performance of IMLP relative to a large number of prior algorithms including ICP, a robust variant on ICP, Generalized ICP (GICP), and Coherent Point Drift (CPD), as well as drawing close comparison with the prior anisotropic registration methods of GTLS-ICP and A-ICP. The performance of IMLP is shown to be superior with respect to these algorithms over a wide range of noise conditions, outliers, and misalignments using both mesh and point-cloud representations of various shapes. PMID:25748700
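
    For context, the registration sub-problem that ICP-family methods solve at each iteration has, in the isotropic equal-noise case, the classical weighted SVD (Kabsch) solution sketched below; IMLP's GTLS step generalizes this to anisotropic noise models. The point cloud, rotation, and weights are invented for the demonstration.

```python
import numpy as np

def weighted_rigid_fit(P, Q, w):
    """Weighted least-squares rigid alignment: find R, t minimizing
    sum_i w_i ||R p_i + t - q_i||^2 (Kabsch algorithm). This is the
    isotropic special case of the registration sub-problem; IMLP's GTLS
    step generalizes it to anisotropic noise."""
    w = w / w.sum()
    p_bar, q_bar = w @ P, w @ Q
    H = (P - p_bar).T @ ((Q - q_bar) * w[:, None])   # weighted covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = q_bar - R @ p_bar
    return R, t

rng = np.random.default_rng(2)
P = rng.normal(size=(50, 3))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([0.5, -0.2, 1.0])
R, t = weighted_rigid_fit(P, Q, np.ones(len(P)))
print(np.allclose(R, R_true), t.round(3))            # True [ 0.5 -0.2  1. ]
```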

  9. Geometric multigrid for an implicit-time immersed boundary method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guy, Robert D.; Philip, Bobby; Griffith, Boyce E.

    2014-10-12

    The immersed boundary (IB) method is an approach to fluid-structure interaction that uses Lagrangian variables to describe the deformations and resulting forces of the structure and Eulerian variables to describe the motion and forces of the fluid. Explicit time stepping schemes for the IB method require solvers only for Eulerian equations, for which fast Cartesian grid solution methods are available. Such methods are relatively straightforward to develop and are widely used in practice but often require very small time steps to maintain stability. Implicit-time IB methods permit the stable use of large time steps, but efficient implementations of such methods require significantly more complex solvers that effectively treat both Lagrangian and Eulerian variables simultaneously. Moreover, several different approaches to solving the coupled Lagrangian-Eulerian equations have been proposed, but a complete understanding of this problem is still emerging. This paper presents a geometric multigrid method for an implicit-time discretization of the IB equations. This multigrid scheme uses a generalization of box relaxation that is shown to handle problems in which the physical stiffness of the structure is very large. Numerical examples are provided to illustrate the effectiveness and efficiency of the algorithms described herein. Finally, these tests show that using multigrid as a preconditioner for a Krylov method yields improvements in both robustness and efficiency as compared to using multigrid as a solver. They also demonstrate that with a time step 100–1000 times larger than that permitted by an explicit IB method, the multigrid-preconditioned implicit IB method is approximately 50–200 times more efficient than the explicit method.
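
    The headline finding, multigrid as a preconditioner for a Krylov method rather than as a standalone solver, can be illustrated on a much simpler problem. Below, a two-grid V-cycle for the 1D Poisson equation is wrapped as a preconditioner for conjugate gradients; this stands in for (and is far simpler than) the coupled Lagrangian-Eulerian multigrid of the paper.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, LinearOperator

n = 255                                    # fine grid (odd, so coarsening is clean)
A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n)).tocsr()

def jacobi(A, x, b, sweeps=2, omega=2/3):
    """Weighted Jacobi smoothing."""
    Dinv = 1.0 / A.diagonal()
    for _ in range(sweeps):
        x = x + omega * Dinv * (b - A @ x)
    return x

def two_grid(b):
    """One symmetric two-grid V-cycle, linear in b."""
    x = jacobi(A, np.zeros(n), b)                        # pre-smooth
    r = b - A @ x
    rc = 0.25 * (r[0:-2:2] + 2*r[1:-1:2] + r[2::2])      # full-weighting restriction
    nc = rc.size
    Ac = diags([-1, 2, -1], [-1, 0, 1], shape=(nc, nc)).toarray() * 0.25
    ec = np.linalg.solve(Ac, rc)                         # coarse (Galerkin) solve
    e = np.zeros(n)                                      # linear interpolation
    e[1:-1:2] = ec
    e[0:-2:2] += 0.5 * ec
    e[2::2] += 0.5 * ec
    return jacobi(A, x + e, b)                           # post-smooth

b = np.ones(n)
M = LinearOperator((n, n), matvec=two_grid)              # multigrid as preconditioner
x, info = cg(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))                   # 0, small residual
```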

  10. Intertransaction Class Association Rule Mining Based on Genetic Network Programming and Its Application to Stock Market Prediction

    NASA Astrophysics Data System (ADS)

    Yang, Yuchen; Mabu, Shingo; Shimada, Kaoru; Hirasawa, Kotaro

    Intertransaction association rules have been reported to be useful in many fields such as stock market prediction, yet few efficient methods exist for extracting them from large data sets. Furthermore, how to use and evaluate these more complex rules must be considered carefully. In this paper, we propose a new intertransaction class association rule mining method based on Genetic Network Programming (GNP), which is able to overcome some shortcomings of Apriori-like intertransaction association methods. Moreover, a general classifier model for intertransaction rules is also introduced. In experiments on the real-world application of stock market prediction, the method demonstrates its efficiency and ability to obtain good results, and it can bring greater benefits with a suitable classifier that considers a larger interval span.

  11. [Methods of substances and organelles introduction in living cell for cell engineering technologies].

    PubMed

    Nikitin, V A

    2007-01-01

    We have presented a classification of more than 40 methods for introducing genetic material, substances and organelles into a living cell. Each of them has its characteristic advantages, disadvantages and limitations with respect to cell viability, transfer efficiency, general applicability, and technical requirements. In this article we have elaborated on the description of our development of several new and improved approaches, methods and devices for direct microinjection into a single cell and for cell microsurgery with the help of glass micropipettes. The problem of the low efficiency of mammalian cloning is discussed, with emphasis on the necessity of scrutinizing each step of single-cell reconstruction, beginning with microsurgical manipulations, and of developing methods of single-cell reconstruction that minimize possible damage to the cell.

  12. Lagrangian methods in the analysis of nonlinear wave interactions in plasma

    NASA Technical Reports Server (NTRS)

    Galloway, J. J.

    1972-01-01

    An averaged-Lagrangian method is developed for obtaining the equations which describe the nonlinear interactions of the wave (oscillatory) and background (nonoscillatory) components which comprise a continuous medium. The method applies to monochromatic waves in any continuous medium that can be described by a Lagrangian density, but is demonstrated in the context of plasma physics. The theory is presented in a more general and unified form by way of a new averaged-Lagrangian formalism which simplifies the perturbation ordering procedure. Earlier theory is extended to deal with a medium distributed in velocity space and to account for the interaction of the background with the waves. The analytic steps are systematized, so as to maximize calculational efficiency. An assessment of the applicability and limitations of the method shows that it has some definite advantages over other approaches in efficiency and versatility.

  13. Hydration Free Energy from Orthogonal Space Random Walk and Polarizable Force Field.

    PubMed

    Abella, Jayvee R; Cheng, Sara Y; Wang, Qiantao; Yang, Wei; Ren, Pengyu

    2014-07-08

    The orthogonal space random walk (OSRW) method has shown enhanced sampling efficiency in free energy calculations in previous studies. In this study, the implementation of OSRW with the polarizable AMOEBA force field in the TINKER molecular modeling software package is discussed and subsequently applied to the hydration free energy calculation of 20 small organic molecules, of which 15 are positively charged and 5 are neutral. The calculated hydration free energies of these molecules are compared with the results obtained from the Bennett acceptance ratio (BAR) method using the same force field, and overall an excellent agreement is obtained. The convergence and efficiency of OSRW are also discussed and compared with BAR. Combining enhanced sampling techniques such as OSRW with polarizable force fields is very promising for achieving both accuracy and efficiency in general free energy calculations.

  14. Pot economy and one-pot synthesis.

    PubMed

    Hayashi, Yujiro

    2016-02-01

    The one-pot synthesis of a target molecule in the same reaction vessel is widely considered to be an efficient approach in synthetic organic chemistry. In this review, the characteristics and limitations of various one-pot syntheses of biologically active molecules are explained, primarily involving organocatalytic methods as key tactics. Besides catalysis, the pot-economy concepts presented herein are also applicable to organometallic and organic reaction methods in general.

  15. Software sensors for bioprocesses.

    PubMed

    Bogaerts, Ph; Vande Wouwer, A

    2003-10-01

    State estimation is a significant problem in biotechnological processes, due to the general lack of hardware sensor measurements of the variables describing the process dynamics. The objective of this paper is to review a number of software sensor design methods, including extended Kalman filters, receding-horizon observers, asymptotic observers, and hybrid observers, which can be efficiently applied to bioprocesses. These methods are illustrated with simulation and real-life case studies.
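
    The simplest software sensor is a Kalman filter that reconstructs an unmeasured state from a measured one. The sketch below uses an invented two-state linear growth model (biomass estimated from substrate measurements); practical bioprocess observers use nonlinear kinetics, such as Monod growth, and the extended or asymptotic variants named above.

```python
import numpy as np

dt = 0.1
mu, k = 0.2, 0.5                      # growth and consumption rates (invented)
A = np.array([[1 + mu*dt, 0.0],       # biomass x1 grows
              [-k*dt,     1.0]])      # substrate x2 is consumed
H = np.array([[0.0, 1.0]])            # only substrate is measured
Q = np.diag([1e-4, 1e-4])             # process noise covariance
R = np.array([[1e-2]])                # measurement noise covariance

rng = np.random.default_rng(3)
x_true = np.array([0.1, 5.0])
x_est, P = np.array([0.5, 5.0]), np.eye(2)   # poor initial biomass guess

for _ in range(100):
    # simulate the "real" process and its noisy substrate measurement
    x_true = A @ x_true + rng.multivariate_normal([0.0, 0.0], Q)
    z = H @ x_true + rng.normal(0.0, np.sqrt(R[0, 0]), 1)
    # predict
    x_est = A @ x_est
    P = A @ P @ A.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_est = x_est + K @ (z - H @ x_est)
    P = (np.eye(2) - K @ H) @ P

print("true biomass:", x_true[0].round(3), " estimated:", x_est[0].round(3))
```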

  16. Scaling Support Vector Machines On Modern HPC Platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    You, Yang; Fu, Haohuan; Song, Shuaiwen

    2015-02-01

    We designed and implemented MIC-SVM, a highly efficient parallel SVM for x86 based multicore and many-core architectures, such as the Intel Ivy Bridge CPUs and Intel Xeon Phi co-processor (MIC). We propose various novel analysis methods and optimization techniques to fully utilize the multilevel parallelism provided by these architectures and serve as general optimization methods for other machine learning tools.

  17. Robust stability of linear systems: Some computational considerations

    NASA Technical Reports Server (NTRS)

    Laub, A. J.

    1979-01-01

    The cases of both additive and multiplicative perturbations are discussed and a number of relationships between the two cases are given. A number of computational aspects of the theory are also discussed, including a proposed new method for evaluating general transfer or frequency response matrices. The new method is numerically stable and efficient, requiring only O(n²) operations to update the solution for new values of the frequency parameter.
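
    The idea we take the abstract to be describing (it matches Laub's published approach) is to pay the O(n³) Hessenberg reduction of A once, then evaluate G(jω) = C(jωI − A)⁻¹B + D at each frequency by solving the cheaper Hessenberg system. A sketch with random state-space data; the generic solver below does not exploit the Hessenberg structure, so the O(n²) figure is stated in a comment rather than achieved by this code.

```python
import numpy as np
from scipy.linalg import hessenberg, solve

rng = np.random.default_rng(4)
n, m, p = 6, 2, 2
A = rng.normal(size=(n, n)) - 3*np.eye(n)    # stable-ish test matrix (invented)
B = rng.normal(size=(n, m))
C = rng.normal(size=(p, n))
D = np.zeros((p, m))

H, Qmat = hessenberg(A, calc_q=True)         # one-time O(n^3) reduction: A = Q H Q^T
Bh, Ch = Qmat.T @ B, C @ Qmat                # transformed inputs/outputs

def freq_response(w):
    # per-frequency solve; with a Hessenberg-aware solver this costs O(n^2)
    return Ch @ solve(1j*w*np.eye(n) - H, Bh) + D

for w in (0.1, 1.0, 10.0):
    G_direct = C @ np.linalg.solve(1j*w*np.eye(n) - A, B) + D
    print(w, np.allclose(freq_response(w), G_direct))   # True at each frequency
```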

  18. Marketers hone their skills to reach target markets.

    PubMed

    White, D; Christensen, M

    1990-08-05

    New challenges lie ahead for health care marketers. Marketers must communicate with several important groups: their own administration, the general public, employers, hospital employees, and, of course, physicians. And as budgets tighten, the methods used to communicate must become more creative and more efficient.

  19. An extended basis inexact shift-invert Lanczos for the efficient solution of large-scale generalized eigenproblems

    NASA Astrophysics Data System (ADS)

    Rewieński, M.; Lamecki, A.; Mrozowski, M.

    2013-09-01

    This paper proposes a technique, based on the Inexact Shift-Invert Lanczos (ISIL) method with Inexact Jacobi Orthogonal Component Correction (IJOCC) refinement, and a preconditioned conjugate-gradient (PCG) linear solver with multilevel preconditioner, for finding several eigenvalues for generalized symmetric eigenproblems. Several eigenvalues are found by constructing (with the ISIL process) an extended projection basis. Presented results of numerical experiments confirm the technique can be effectively applied to challenging, large-scale problems characterized by very dense spectra, such as resonant cavities with spatial dimensions which are large with respect to wavelengths of the resonating electromagnetic fields. It is also shown that the proposed scheme based on inexact linear solves delivers superior performance, as compared to methods which rely on exact linear solves, indicating tremendous potential of the 'inexact solve' concept. Finally, the scheme which generates an extended projection basis is found to provide a cost-efficient alternative to classical deflation schemes when several eigenvalues are computed.
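
    As a runnable point of reference, exact shift-invert Lanczos for a generalized symmetric eigenproblem is available off the shelf; the ISIL scheme of the paper replaces the exact factorization used below with inexact iterative solves. The toy K and M are a 1D finite-element Laplacian and mass matrix, whose eigenvalues approximate (kπ)².

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

n = 500
h = 1.0 / (n + 1)
K = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n)).tocsc() / h**2   # stiffness
M = diags([1/6, 4/6, 1/6], [-1, 0, 1], shape=(n, n)).tocsc() * h  # mass

sigma = 50.0                       # look for eigenvalues near the shift
# shift-invert Lanczos: SciPy factorizes (K - sigma*M) exactly; ISIL would
# replace this factorization with inexact (preconditioned CG) solves
vals, vecs = eigsh(K, k=4, M=M, sigma=sigma, which='LM')
print(vals.round(2))               # ~ (k*pi)^2 for k = 1..4
print(((np.arange(1, 5) * np.pi)**2).round(2))   # analytic comparison
```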

  20. Pabon Lasso and Data Envelopment Analysis: A Complementary Approach to Hospital Performance Measurement

    PubMed Central

    Mehrtak, Mohammad; Yusefzadeh, Hasan; Jaafaripooyan, Ebrahim

    2014-01-01

    Background: Performance measurement is essential to the management of health care organizations, for which efficiency is per se a vital indicator. The present study accordingly aims to measure the efficiency of hospitals using two distinct methods. Methods: Data Envelopment Analysis (DEA) and the Pabon Lasso model were jointly applied to calculate the efficiency of all general hospitals located in Iran's East Azerbaijan Province. Data were collected from the hospitals' monthly performance forms and analyzed and displayed with MS Visio and DEAP software. Results: According to the Pabon Lasso model, 44.5% of the hospitals were entirely efficient, whilst DEA revealed 61% to be efficient. As such, 39% of the hospitals were wholly inefficient by the Pabon Lasso model, whereas based on DEA the relevant figure was only 22.2%. Finally, 16.5% of hospitals as calculated by Pabon Lasso and 16.7% by DEA were relatively efficient. DEA appeared to classify more hospitals as efficient than the Pabon Lasso model. Conclusion: Simultaneous use of the two models rendered complementary and corroborative results, as both evidently reveal efficient hospitals. However, their results should be compared with prudence. Whilst the Pabon Lasso inefficient zone is fully clear, DEA does not provide such a crystal-clear limit for inefficiency. PMID:24999147
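
    The DEA side of the comparison reduces, for each hospital, to a small linear program (the input-oriented CCR model): find the smallest θ such that a composite of peer hospitals produces at least the hospital's outputs while using at most θ times its inputs. A sketch with invented hospital data follows.

```python
import numpy as np
from scipy.optimize import linprog

# invented data: inputs (rows) x hospitals (cols), outputs x hospitals
X = np.array([[100., 80., 120., 90.],     # e.g. beds
              [ 30., 20.,  40., 25.]])    # e.g. staff
Y = np.array([[500., 450., 480., 520.]])  # e.g. patients treated

n_in, n_units = X.shape
n_out = Y.shape[0]

def ccr_score(o):
    """Input-oriented CCR efficiency of unit o; variables z = [theta, lambdas]."""
    c = np.r_[1.0, np.zeros(n_units)]                  # minimize theta
    A_in = np.hstack([-X[:, [o]], X])                  # sum lam*x <= theta*x_o
    A_out = np.hstack([np.zeros((n_out, 1)), -Y])      # sum lam*y >= y_o
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(n_in), -Y[:, o]],
                  bounds=[(None, None)] + [(0, None)] * n_units)
    return res.fun

for o in range(n_units):
    print(f"hospital {o}: efficiency = {ccr_score(o):.3f}")   # 1.0 = efficient
```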

  1. Carotenoid coloration is related to fat digestion efficiency in a wild bird

    NASA Astrophysics Data System (ADS)

    Madonia, Christina; Hutton, Pierce; Giraudeau, Mathieu; Sepp, Tuul

    2017-12-01

    Some of the most spectacular visual signals found in the animal kingdom are based on dietarily derived carotenoid pigments (which cannot be produced de novo), with a general assumption that carotenoids are limited resources for wild organisms, causing trade-offs in allocation of carotenoids to different physiological functions and ornamentation. This resource trade-off view has been recently questioned, since the efficiency of carotenoid processing may relax the trade-off between allocation toward condition or ornamentation. This hypothesis has so far received little exploratory support, since studies of digestive efficiency of wild animals are limited due to methodological difficulties. Recently, a method for quantifying the percentage of fat in fecal samples to measure digestive efficiency has been developed in birds. Here, we use this method to test if the intensity of the carotenoid-based coloration predicts digestive efficiency in a wild bird, the house finch (Haemorhous mexicanus). The redness of carotenoid feather coloration (hue) positively predicted digestion efficiency, with redder birds being more efficient at absorbing fats from seeds. We show for the first time in a wild species that digestive efficiency predicts ornamental coloration. Though not conclusive due to the correlative nature of our study, these results strongly suggest that fat extraction might be a crucial but overlooked process behind many ornamental traits.

  2. A Dictionary Learning Method with Total Generalized Variation for MRI Reconstruction

    PubMed Central

    Lu, Hongyang; Wei, Jingbo; Wang, Yuhao; Deng, Xiaohua

    2016-01-01

    Reconstructing images from noisy and incomplete measurements is always a challenge, especially for medical MR images with important details and features. This work proposes a novel dictionary learning model that integrates two sparse regularization methods: the total generalized variation (TGV) approach and adaptive dictionary learning (DL). In the proposed method, the TGV selectively regularizes different image regions at different levels, largely avoiding oil-painting artifacts. At the same time, the dictionary learning adaptively represents the image features sparsely and effectively recovers image details. The proposed model is solved by a variable splitting technique and the alternating direction method of multipliers. Extensive simulation results demonstrate that the proposed method consistently recovers MR images efficiently and outperforms the current state-of-the-art approaches in terms of higher PSNR and lower HFEN values. PMID:27110235

  3. A Dictionary Learning Method with Total Generalized Variation for MRI Reconstruction.

    PubMed

    Lu, Hongyang; Wei, Jingbo; Liu, Qiegen; Wang, Yuhao; Deng, Xiaohua

    2016-01-01

    Reconstructing images from noisy and incomplete measurements is always a challenge, especially for medical MR images with important details and features. This work proposes a novel dictionary learning model that integrates two sparse regularization methods: the total generalized variation (TGV) approach and adaptive dictionary learning (DL). In the proposed method, the TGV selectively regularizes different image regions at different levels, largely avoiding oil-painting artifacts. At the same time, the dictionary learning adaptively represents the image features sparsely and effectively recovers image details. The proposed model is solved by a variable splitting technique and the alternating direction method of multipliers. Extensive simulation results demonstrate that the proposed method consistently recovers MR images efficiently and outperforms the current state-of-the-art approaches in terms of higher PSNR and lower HFEN values.

  4. SPIP: A computer program implementing the Interaction Picture method for simulation of light-wave propagation in optical fibre

    NASA Astrophysics Data System (ADS)

    Balac, Stéphane; Fernandez, Arnaud

    2016-02-01

    The computer program SPIP solves the Generalized Non-Linear Schrödinger Equation (GNLSE), which arises in optics, e.g. in the modelling of light-wave propagation in an optical fibre, using the Interaction Picture method, a new and efficient alternative to the symmetric split-step method. The SPIP program implements a dedicated, low-cost adaptive step-size control based on a 4th-order embedded Runge-Kutta method in order to speed up the computation.
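
    For contrast with the Interaction Picture approach, the symmetric split-step baseline for the basic fibre NLSE is only a few lines: an exact linear half-step in the Fourier domain, a pointwise nonlinear step, and another linear half-step. Parameters below are illustrative; a fundamental soliton should propagate essentially unchanged.

```python
import numpy as np

# Symmetric split-step Fourier method for the basic fibre NLSE
#   dA/dz = -i*(beta2/2)*d2A/dt2 + i*gamma*|A|^2*A
nt, T = 1024, 40.0
t = np.linspace(-T/2, T/2, nt, endpoint=False)
w = 2 * np.pi * np.fft.fftfreq(nt, d=T/nt)        # angular frequencies

beta2, gamma = -1.0, 1.0                          # anomalous dispersion
A = 1.0 / np.cosh(t)                              # N=1 soliton initial field
dz, nz = 1e-3, 5000

half_linear = np.exp(1j * (beta2/2) * w**2 * dz/2)    # exact linear half-step
for _ in range(nz):
    A = np.fft.ifft(half_linear * np.fft.fft(A))      # linear half step
    A = A * np.exp(1j * gamma * np.abs(A)**2 * dz)    # nonlinear full step
    A = np.fft.ifft(half_linear * np.fft.fft(A))      # linear half step

print("peak power:", np.abs(A).max()**2)              # ~1 for the soliton
```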

  5. Three-Dimensional Navier-Stokes Method with Two-Equation Turbulence Models for Efficient Numerical Simulation of Hypersonic Flows

    NASA Technical Reports Server (NTRS)

    Bardina, J. E.

    1994-01-01

    A new computationally efficient 3-D compressible Reynolds-averaged implicit Navier-Stokes method with advanced two-equation turbulence models for high-speed flows is presented. All convective terms are modeled using an entropy-satisfying higher-order Total Variation Diminishing (TVD) scheme based on implicit upwind flux-difference split approximations and an arithmetic averaging procedure for the primitive variables. This method combines the best features of the data management and computational efficiency of space-marching procedures with the generality and stability of time-dependent Navier-Stokes procedures to solve flows with mixed supersonic and subsonic zones, including streamwise separated flows. Its robust stability derives from a combination of conservative implicit upwind flux-difference splitting with Roe's property U to provide the accurate shock-capturing capability that non-conservative schemes do not guarantee, an alternating symmetric Gauss-Seidel 'method of planes' relaxation procedure coupled with a three-dimensional two-factor diagonal-dominant approximate factorization scheme, TVD flux limiters of higher-order flux differences satisfying realizability, and well-posed characteristic-based implicit boundary-point approximations consistent with the local characteristic domain of dependence. The efficiency of the method is greatly increased by Newton-Raphson acceleration, which allows convergence in essentially one forward sweep for supersonic flows. The method is verified by comparison with experiment and with other Navier-Stokes methods. Here, results for adiabatic and cooled flat-plate flows, compression corner flow, and 3-D hypersonic shock-wave/turbulent boundary layer interaction flows are presented. The robust 3-D method achieves a computational efficiency at least one order of magnitude better than the CNS Navier-Stokes code. It provides cost-effective aerodynamic predictions in agreement with experiment, and the capability of predicting complex flow structures in complex geometries with good accuracy.

  6. Single-step reinitialization and extending algorithms for level-set based multi-phase flow simulations

    NASA Astrophysics Data System (ADS)

    Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2017-12-01

    We propose efficient single-step formulations for reinitialization and extending algorithms, which are critical components of level-set based interface-tracking methods. The level-set field is reinitialized with a single-step (non-iterative) "forward tracing" algorithm. A minimum set of cells is defined that describes the interface, and reinitialization employs only data from these cells. Fluid states are extrapolated or extended across the interface by a single-step "backward tracing" algorithm. Both algorithms, which are motivated by analogy to ray-tracing, avoid multiple block-boundary data exchanges that are inevitable for iterative reinitialization and extending approaches within a parallel-computing environment. The single-step algorithms are combined with a multi-resolution conservative sharp-interface method and validated by a wide range of benchmark test cases. We demonstrate that the proposed reinitialization method achieves second-order accuracy in conserving the volume of each phase. The interface location is invariant to reapplication of the single-step reinitialization. Generally, we observe smaller absolute errors than for standard iterative reinitialization on the same grid. The computational efficiency is higher than for the standard and typical high-order iterative reinitialization methods. We observe a 2- to 6-times efficiency improvement over the standard method for serial execution. The proposed single-step extending algorithm, which is commonly employed for assigning data to ghost cells with ghost-fluid or conservative interface interaction methods, shows about a 10-times efficiency improvement over the standard method while maintaining the same accuracy. Despite their simplicity, the proposed algorithms offer an efficient and robust alternative to iterative reinitialization and extending methods for level-set based multi-phase simulations.

  7. Additive Runge-Kutta Schemes for Convection-Diffusion-Reaction Equations

    NASA Technical Reports Server (NTRS)

    Kennedy, Christopher A.; Carpenter, Mark H.

    2001-01-01

    Additive Runge-Kutta (ARK) methods are investigated for application to the spatially discretized one-dimensional convection-diffusion-reaction (CDR) equations. First, accuracy, stability, conservation, and dense output are considered for the general case when N different Runge-Kutta methods are grouped into a single composite method. Then, implicit-explicit, N = 2, additive Runge-Kutta ARK2 methods from third- to fifth-order are presented that allow for integration of stiff terms by an L-stable, stiffly-accurate explicit, singly diagonally implicit Runge-Kutta (ESDIRK) method while the nonstiff terms are integrated with a traditional explicit Runge-Kutta method (ERK). Coupling error terms are of equal order to those of the elemental methods. Derived ARK2 methods have vanishing stability functions for very large values of the stiff scaled eigenvalue, z^{[I]} → ∞, and retain high stability efficiency in the absence of stiffness, z^{[I]} → 0. Extrapolation-type stage-value predictors are provided based on dense-output formulae. Optimized methods minimize both leading order ARK2 error terms and Butcher coefficient magnitudes as well as maximize conservation properties. Numerical tests of the new schemes on a CDR problem show negligible stiffness leakage and near classical order convergence rates. However, tests on three simple singular-perturbation problems reveal generally predictable order reduction. Error control is best managed with a PID-controller. While results for the fifth-order method are disappointing, both the new third- and fourth-order methods are at least as efficient as existing ARK2 methods while offering error control and stage-value predictors.
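
    The additive idea is easiest to see in its first-order member, IMEX Euler: treat the stiff term implicitly and the nonstiff terms explicitly within the same step. A sketch on an invented 1D convection-diffusion-reaction problem, with diffusion implicit so the time step is limited only by the convective CFL condition; the ARK2 schemes of the record are its third- to fifth-order, L-stable relatives.

```python
import numpy as np

n, L = 100, 1.0
dx = L / n
x = np.linspace(0, L, n, endpoint=False)
nu, a, r = 0.1, 1.0, 0.5                 # diffusion, advection, reaction (invented)
dt = 0.4 * dx / a                        # convective CFL only; explicit diffusion
                                         # would need dt <= dx^2/(2*nu) = 5e-4

# implicit operator (I - dt*nu*D2), periodic second-difference matrix D2
D2 = (np.diag(np.full(n-1, 1.0), -1) + np.diag(np.full(n-1, 1.0), 1)
      + np.diag(np.full(n, -2.0)))
D2[0, -1] = D2[-1, 0] = 1.0
D2 /= dx**2
A_impl = np.eye(n) - dt * nu * D2

u = np.exp(-100 * (x - 0.5)**2)          # initial pulse
for _ in range(200):
    conv = -a * (u - np.roll(u, 1)) / dx # upwind convection (explicit)
    reac = -r * u                        # reaction (explicit)
    u = np.linalg.solve(A_impl, u + dt * (conv + reac))  # implicit diffusion

print("mass:", (u.sum() * dx).round(4), " max:", u.max().round(4))
```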

  8. Synthesis of metal oxide nanoparticles via a robust ``solvent-deficient'' method

    NASA Astrophysics Data System (ADS)

    Smith, Stacey J.; Huang, Baiyu; Liu, Shengfeng; Liu, Qingyuan; Olsen, Rebecca E.; Boerio-Goates, Juliana; Woodfield, Brian F.

    2014-11-01

    We report an efficient, general methodology for producing high-surface-area metal oxide nanomaterials for a vast range of metal oxides, including at least one metal oxide nanomaterial from nearly every transition metal and semi-metal group in the periodic table (groups 3-4 and 6-15) as well as several from the lanthanide group (see Table 1). The method requires only 2-3 simple steps; a hydrated metal salt (usually a nitrate or chloride salt) is ground with a bicarbonate (usually NH4HCO3) for 10-30 minutes to form a precursor that is then either untreated or rinsed before being calcined at relatively low temperatures (220-550 °C) for 1-3 hours. The method is thus similar to surfactant-free aqueous methods such as co-precipitation but is unique in that no solvents are added. The resulting "solvent-deficient" environment has interesting and unique consequences, including increased crystallinity of the products relative to other aqueous methods and a mesoporous character in the inevitable agglomerates. The products are chemically pure and phase pure, with crystallites generally 3-30 nm in average size that aggregate into high-surface-area, mesoporous agglomerates 50-300 nm in size that would be useful for catalyst and gas-sensing applications. The versatility of products and the efficiency of the method lend it unique potential for improving the industrial viability of a broad family of useful metal oxide nanomaterials. In this paper, we outline the methodology of the solvent-deficient method using our understanding of its mechanism, and we describe the range and quality of nanomaterials it has produced thus far. Electronic supplementary information (ESI) available: (1) preliminary Netzsch milling results for Al2O3 and CeO2, (2) XRD patterns/analyses of the dried and rinsed precursors plotted with the ICDD standard patterns of the materials they contain, (3) all TG/DTA-MS data. See DOI: 10.1039/c4nr04964k

  9. Accuracy, precision, and economic efficiency for three methods of thrips (Thysanoptera: Thripidae) population density assessment.

    PubMed

    Sutherland, Andrew M; Parrella, Michael P

    2011-08-01

    Western flower thrips, Frankliniella occidentalis (Pergande) (Thysanoptera: Thripidae), is a major horticultural pest and an important vector of plant viruses in many parts of the world. Methods for assessing thrips population density for pest management decision support are often inaccurate or imprecise due to thrips' positive thigmotaxis, small size, and naturally aggregated populations. Two established methods, flower tapping and an alcohol wash, were compared with a novel method, plant desiccation coupled with passive trapping, using accuracy, precision and economic efficiency as comparative variables. Observed accuracy was statistically similar and low (37.8-53.6%) for all three methods. Flower tapping was the least expensive method, in terms of person-hours, whereas the alcohol wash method was the most expensive. Precision, expressed by relative variation, depended on location within the greenhouse, location on greenhouse benches, and the sampling week, but it was generally highest for the flower tapping and desiccation methods. Economic efficiency, expressed by relative net precision, was highest for the flower tapping method and lowest for the alcohol wash method. Advantages and disadvantages are discussed for all three methods used. If relative density assessment methods such as these can all be assumed to accurately estimate a constant proportion of absolute density, then high precision becomes the methodological goal in terms of measuring insect population density, decision making for pest management, and pesticide efficacy assessments.

  10. A convergent diffusion and social marketing approach for disseminating proven approaches to physical activity promotion.

    PubMed

    Dearing, James W; Maibach, Edward W; Buller, David B

    2006-10-01

    Approaches from diffusion of innovations and social marketing are used here to propose efficient means to promote and enhance the dissemination of evidence-based physical activity programs. While both approaches have traditionally been conceptualized as top-down, center-to-periphery, centralized efforts at social change, their operational methods have usually differed. The operational methods of diffusion theory have a strong relational emphasis, while the operational methods of social marketing have a strong transactional emphasis. Here, we argue for a convergence of diffusion of innovation and social marketing principles to stimulate the efficient dissemination of proven-effective programs. In general terms, we are encouraging a focus on societal sectors as a logical and efficient means for enhancing the impact of dissemination efforts. This requires an understanding of complex organizations and the functional roles played by different individuals in such organizations. In specific terms, ten principles are provided for working effectively within societal sectors and enhancing user involvement in the processes of adoption and implementation.

  11. Transferring and generalizing deep-learning-based neural encoding models across subjects.

    PubMed

    Wen, Haiguang; Shi, Junxing; Chen, Wei; Liu, Zhongming

    2018-08-01

    Recent studies have shown the value of using deep learning models for mapping and characterizing how the brain represents and organizes information for natural vision. However, modeling the relationship between deep learning models and the brain (via encoding models) requires measuring cortical responses to large and diverse sets of natural visual stimuli from single subjects. This requirement limits prior studies to few subjects, making it difficult to generalize findings across subjects or for a population. In this study, we developed new methods to transfer and generalize encoding models across subjects. To train encoding models specific to a target subject, the models trained for other subjects were used as the prior models and were refined efficiently using Bayesian inference with a limited amount of data from the target subject. To train encoding models for a population, the models were progressively trained and updated with incremental data from different subjects. For the proof of principle, we applied these methods to functional magnetic resonance imaging (fMRI) data from three subjects watching tens of hours of naturalistic videos, while a deep residual neural network driven by image recognition was used to model visual cortical processing. Results demonstrate that the methods developed herein provide an efficient and effective strategy to establish both subject-specific and population-wide predictive models of cortical representations of high-dimensional and hierarchical visual features. Copyright © 2018 Elsevier Inc. All rights reserved.
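
    One simplified reading of the Bayesian refinement step, a Gaussian prior centered on the weights learned from other subjects and updated with limited target-subject data, reduces to ridge regression toward the prior mean, as sketched below with synthetic data. The real models are voxel-wise maps from deep-network features to fMRI responses; everything below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(5)
d = 50                                        # feature dimension
w_true = rng.normal(size=d)                   # target subject's "true" weights
w_prior = w_true + 0.3 * rng.normal(size=d)   # weights from other subjects

n_small = 30                                  # limited target-subject data
X = rng.normal(size=(n_small, d))
y = X @ w_true + 0.1 * rng.normal(size=n_small)

lam = 5.0                                     # prior precision / ridge strength
# posterior mean: argmin ||y - Xw||^2 + lam*||w - w_prior||^2
w_post = np.linalg.solve(X.T @ X + lam * np.eye(d),
                         X.T @ y + lam * w_prior)
w_scratch = np.linalg.lstsq(X, y, rcond=None)[0]   # no-transfer baseline

print("error with transfer prior:", np.linalg.norm(w_post - w_true).round(3))
print("error from scratch:       ", np.linalg.norm(w_scratch - w_true).round(3))
```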

  12. A Class of High-Resolution Explicit and Implicit Shock-Capturing Methods

    NASA Technical Reports Server (NTRS)

    Yee, H. C.

    1994-01-01

    The development of shock-capturing finite difference methods for hyperbolic conservation laws has been a rapidly growing area for the last decade. Many of the fundamental concepts, state-of-the-art developments and applications to fluid dynamics problems can only be found in meeting proceedings, scientific journals and internal reports. This paper attempts to give a unified and generalized formulation of a class of high-resolution, explicit and implicit shock-capturing methods, and to illustrate their versatility in various steady and unsteady complex shock wave, perfect gas, equilibrium real gas and nonequilibrium flow computations. These numerical methods are formulated for ease of efficient implementation in a practical computer code. The various constructions of high-resolution shock-capturing methods fall nicely into the present framework, and a computer code can be implemented with the various methods as separate modules. Included is a systematic overview of the basic design principles of the various related numerical methods. Special emphasis is on the construction of the basic nonlinear, spatially second- and third-order schemes for nonlinear scalar hyperbolic conservation laws and the methods of extending these nonlinear scalar schemes to nonlinear systems via approximate Riemann solvers and flux-vector splitting approaches. Generalization of these methods to efficiently include real gases and large systems of nonequilibrium flows is discussed. Extensions of these methods to problems containing stiff source terms and shock waves are also included. The performance of some of these schemes is illustrated by numerical examples for one-, two- and three-dimensional gas-dynamics problems. The use of the Lax-Friedrichs numerical flux to obtain high-resolution shock-capturing schemes is generalized. This method can be extended to nonlinear systems of equations without the use of Riemann solvers or flux-vector splitting approaches and thus provides large savings for multidimensional, equilibrium real gas and nonequilibrium flow computations.
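
    A minimal member of the class surveyed here is a second-order MUSCL scheme with a minmod limiter and a local Lax-Friedrichs (Rusanov) numerical flux. The sketch below applies it to inviscid Burgers' equation, where the shock from step initial data should sit near x = 0.65 by t = 0.3.

```python
import numpy as np

def minmod(a, b):
    """TVD slope limiter: smallest-magnitude slope, zero at extrema."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

n = 400
x = np.linspace(0, 1, n, endpoint=False)
dx = 1.0 / n
u = np.where(x < 0.5, 1.0, 0.0)        # Riemann data -> right-moving shock
f = lambda u: 0.5 * u**2               # Burgers flux

t, t_end = 0.0, 0.3
while t < t_end:
    dt = min(0.4 * dx / max(np.abs(u).max(), 1e-12), t_end - t)
    s = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)   # limited slopes
    uL = u + 0.5 * s                   # left state at interface i+1/2
    uR = np.roll(u - 0.5 * s, -1)      # right state at interface i+1/2
    alpha = np.maximum(np.abs(uL), np.abs(uR))          # local wave speed
    F = 0.5 * (f(uL) + f(uR)) - 0.5 * alpha * (uR - uL) # Rusanov flux
    u = u - dt / dx * (F - np.roll(F, 1))               # conservative update
    t += dt

mask = x > 0.4                         # look away from the periodic rarefaction
print("shock position ~", x[mask][np.argmin(np.abs(u[mask] - 0.5))], "(exact 0.65)")
```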

  13. SU-D-206-03: Segmentation Assisted Fast Iterative Reconstruction Method for Cone-Beam CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, P; Mao, T; Gong, S

    2016-06-15

    Purpose: Total Variation (TV) based iterative reconstruction (IR) methods enable accurate CT image reconstruction from low-dose measurements with sparse projection acquisition, due to the sparsifiability of most CT images under the gradient operator. However, conventional solutions require a large number of iterations to generate a decent reconstructed image. One major reason is that the expected piecewise constant property is not taken into consideration at the optimization starting point. In this work, we propose an iterative reconstruction method for cone-beam CT (CBCT) that uses image segmentation to guide the optimization path more efficiently on the regularization term at the beginning of the optimization trajectory. Methods: Our method applies the general knowledge that one tissue component in a CT image contains a relatively uniform distribution of CT numbers. This general knowledge is incorporated into the proposed reconstruction using an image segmentation technique to generate a piecewise constant template from the first-pass low-quality CT image reconstructed with an analytical algorithm. The template image is then used as the initial value in the optimization process. Results: The proposed method is evaluated on the Shepp-Logan phantom at low and high noise levels, and on a head patient. The number of iterations is reduced by 40% overall. Moreover, our proposed method tends to generate a smoother reconstructed image with the same TV value. Conclusion: We propose a computationally efficient iterative reconstruction method for CBCT imaging. Our method achieves a better optimization trajectory and faster convergence behavior. It does not rely on prior information and can be readily incorporated into existing iterative reconstruction frameworks. Our method is thus practical and attractive as a general solution to CBCT iterative reconstruction. This work is supported by the Zhejiang Provincial Natural Science Foundation of China (Grant No. LR16F010001), National High-tech R&D Program for Young Scientists by the Ministry of Science and Technology of China (Grant No. 2015AA020917).

  14. Using the entire history in the analysis of nested case cohort samples.

    PubMed

    Rivera, C L; Lumley, T

    2016-08-15

    Countermatching designs can provide more efficient estimates than simple matching or case-cohort designs in certain situations, such as when good surrogate variables for an exposure of interest are available. We extend pseudolikelihood estimation for the Cox model under countermatching designs to models where time-varying covariates are considered. We also implement pseudolikelihood with calibrated weights to improve efficiency in nested case-control designs in the presence of time-varying variables. A simulation study is carried out, which considers four different scenarios: a binary time-dependent variable, a continuous time-dependent variable, and the cases including interactions in each. Simulation results show that pseudolikelihood with calibrated weights under countermatching offers large gains in efficiency compared to the case-cohort design. Pseudolikelihood with calibrated weights yielded more efficient estimators than pseudolikelihood estimators. Additionally, estimators were more efficient under countermatching than under case-cohort sampling for the situations considered. The methods are illustrated using the Colorado Plateau uranium miners cohort. Furthermore, we present a general method to generate survival times with time-varying covariates. Copyright © 2016 John Wiley & Sons, Ltd.
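
    The generation of survival times with a time-varying covariate, which the authors mention presenting in general form, is easy to sketch in the piecewise-constant-hazard special case: sample a unit-exponential cumulative hazard and invert it piece by piece. The hazard, effect size, and switch time below are invented.

```python
import numpy as np

rng = np.random.default_rng(6)

def draw_time(h0, beta, t0, rng):
    """Survival time under hazard h0 before t0 and h0*exp(beta) after
    (a binary covariate that switches on at t0), by inverse transform."""
    E = rng.exponential()                  # target cumulative hazard
    H_t0 = h0 * t0                         # cumulative hazard up to t0
    if E < H_t0:
        return E / h0                      # event before exposure starts
    return t0 + (E - H_t0) / (h0 * np.exp(beta))

h0, beta, t0 = 0.1, np.log(2.0), 5.0       # exposure doubles the hazard
times = np.array([draw_time(h0, beta, t0, rng) for _ in range(100000)])

# sanity check: P(T > t0) should equal exp(-h0*t0)
print("empirical:", (times > t0).mean().round(3),
      " theoretical:", np.exp(-h0 * t0).round(3))
```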

  15. ACCESS 1: Approximation Concepts Code for Efficient Structural Synthesis program documentation and user's guide

    NASA Technical Reports Server (NTRS)

    Miura, H.; Schmit, L. A., Jr.

    1976-01-01

    The program documentation and user's guide for the ACCESS-1 computer program is presented. ACCESS-1 is a research oriented program which implements a collection of approximation concepts to achieve excellent efficiency in structural synthesis. The finite element method is used for structural analysis and general mathematical programming algorithms are applied in the design optimization procedure. Implementation of the computer program, preparation of input data and basic program structure are described, and three illustrative examples are given.

  16. Applied Use Value of Scientific Information for Management of Ecosystem Services

    NASA Astrophysics Data System (ADS)

    Raunikar, R. P.; Forney, W.; Bernknopf, R.; Mishra, S.

    2012-12-01

    The U.S. Geological Survey has developed and applied methods for quantifying the value of scientific information (VOI) that are based on the applied use value of the information. In particular, the applied use value of U.S. Geological Survey information often includes the efficient management of ecosystem services. The economic nature of U.S. Geological Survey scientific information is largely equivalent to that of any information, but we focus the application of our VOI quantification methods on the information products provided freely to the public by the U.S. Geological Survey. We describe VOI economics in general and illustrate it by referring to previous studies that use the evolving applied use value methods, including the siting of landfills in Loudoun County, the mineral exploration efficiencies of finer-resolution geologic maps in Canada, and improved agricultural production and groundwater protection in eastern Iowa made possible by Landsat moderate-resolution satellite imagery. Finally, we describe the adaptation of the applied use value method to the case of streamgage information used to improve the efficiency of water markets in New Mexico.

  17. Simple-MSSM: a simple and efficient method for simultaneous multi-site saturation mutagenesis.

    PubMed

    Cheng, Feng; Xu, Jian-Miao; Xiang, Chao; Liu, Zhi-Qiang; Zhao, Li-Qing; Zheng, Yu-Guo

    2017-04-01

    To develop a practically simple and robust multi-site saturation mutagenesis (MSSM) method that enables simultaneous recombination of amino acid positions for focused mutant library generation, we describe a general restriction-enzyme-free and ligase-free MSSM method (Simple-MSSM) based on prolonged overlap extension PCR (POE-PCR) and Simple Cloning techniques. As a proof of principle of Simple-MSSM, the eGFP (enhanced green fluorescent protein) gene was used as a template for simultaneous mutagenesis of five codons. Forty-eight randomly selected clones were sequenced. Sequencing revealed that all 48 clones showed at least one mutant codon (mutation efficiency = 100%), and 46 out of the 48 clones had mutations at all five codons. The obtained diversities at these five codons are 27, 24, 26, 26 and 22, respectively, which correspond to 84, 75, 81, 81, and 69% of the theoretical diversity offered by NNK-degeneration (32 codons; NNK, K = T or G). The enzyme-free Simple-MSSM method can simultaneously and efficiently saturate five codons within one day, and therefore avoids missing interactions between residues in interacting amino acid networks.

  18. Computer-based learning: interleaving whole and sectional representation of neuroanatomy.

    PubMed

    Pani, John R; Chariker, Julia H; Naaz, Farah

    2013-01-01

    The large volume of material to be learned in biomedical disciplines requires optimizing the efficiency of instruction. In prior work with computer-based instruction of neuroanatomy, it was relatively efficient for learners to master whole anatomy and then transfer to learning sectional anatomy. It may, however, be more efficient to continuously integrate learning of whole and sectional anatomy. A study of computer-based learning of neuroanatomy was conducted to compare a basic transfer paradigm for learning whole and sectional neuroanatomy with a method in which the two forms of representation were interleaved (alternated). For all experimental groups, interactive computer programs supported an approach to instruction called adaptive exploration. Each learning trial consisted of time-limited exploration of neuroanatomy, self-timed testing, and graphical feedback. The primary result of this study was that interleaved learning of whole and sectional neuroanatomy was more efficient than the basic transfer method, without cost to long-term retention or generalization of knowledge to recognizing new images (Visible Human and MRI). Copyright © 2012 American Association of Anatomists.

  19. Computer-Based Learning: Interleaving Whole and Sectional Representation of Neuroanatomy

    PubMed Central

    Pani, John R.; Chariker, Julia H.; Naaz, Farah

    2015-01-01

    The large volume of material to be learned in biomedical disciplines requires optimizing the efficiency of instruction. In prior work with computer-based instruction of neuroanatomy, it was relatively efficient for learners to master whole anatomy and then transfer to learning sectional anatomy. It may, however, be more efficient to continuously integrate learning of whole and sectional anatomy. A study of computer-based learning of neuroanatomy was conducted to compare a basic transfer paradigm for learning whole and sectional neuroanatomy with a method in which the two forms of representation were interleaved (alternated). For all experimental groups, interactive computer programs supported an approach to instruction called adaptive exploration. Each learning trial consisted of time-limited exploration of neuroanatomy, self-timed testing, and graphical feedback. The primary result of this study was that interleaved learning of whole and sectional neuroanatomy was more efficient than the basic transfer method, without cost to long-term retention or generalization of knowledge to recognizing new images (Visible Human and MRI). PMID:22761001

  20. Spectroscopic and laser characterization of emerald. Final report, April 1983-April 1986

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lai, S.T.; Chai, B.H.

    1986-08-01

    The spectroscopic characteristics and laser properties of emerald were investigated. The laser measurements showed that the emerald-laser tuning range was 720-842 nm and exhibited a high gain and high efficiency in the 760-790 nm range. Under a crystal growth development program, the laser loss was reduced from 11%/cm to 0.4%/cm. The limiting factor in the laser efficiency is the excited-state absorption (ESA). The ESA was measured by two methods: a laser-pumped single-pass gain method, which is generally applicable to all tunable laser materials, and a laser-pumped laser method. A 76% laser quantum yield was obtained in high-optical-quality emerald. The maximum yield is estimated to be 83%, based on the ESA measurements.

  1. An easily implemented static condensation method for structural sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Gangadharan, S. N.; Haftka, R. T.; Nikolaidis, E.

    1990-01-01

    A black-box approach to static condensation for sensitivity analysis is presented, with illustrative examples of a cube and a car structure. The sensitivity of the structural response with respect to a joint stiffness parameter is calculated using the direct method and forward-difference and central-difference schemes. The efficiency of the various methods for identifying joint stiffness parameters from measured static deflections of these structures is compared. The results indicate that the use of static condensation can reduce computation times significantly and that the black-box approach is only slightly less efficient than the standard implementation of static condensation. The ease of implementation of the black-box approach recommends it for use with general-purpose finite element codes that do not have a built-in facility for static condensation.
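
    The forward- and central-difference sensitivity schemes compared in the abstract amount to one and two extra analyses per parameter, respectively. A generic sketch (the `response` callable stands in for the condensed static solve and is hypothetical):

        def forward_diff(response, k, dk):
            # first-order accurate in dk; one extra analysis
            return (response(k + dk) - response(k)) / dk

        def central_diff(response, k, dk):
            # second-order accurate in dk; two extra analyses
            return (response(k + dk) - response(k - dk)) / (2.0 * dk)

        # analytic stand-in: deflection u = F/k of a single spring with F = 1
        u = lambda k: 1.0 / k
        print(forward_diff(u, 2.0, 1e-4), central_diff(u, 2.0, 1e-4))  # both approach -0.25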

  2. An indirect approach to the extensive calculation of relationship coefficients

    PubMed Central

    Colleau, Jean-Jacques

    2002-01-01

    A method was described for calculating population statistics on relationship coefficients without using corresponding individual data. It relied on the structure of the inverse of the numerator relationship matrix between individuals under investigation and ancestors. Computation times were observed on simulated populations and were compared to those incurred with a conventional direct approach. The indirect approach turned out to be very efficient for multiplying the relationship matrix corresponding to planned matings (full design) by any vector. Efficiency was generally still good or very good for calculating statistics on these simulated populations. An extreme implementation of the method is the calculation of inbreeding coefficients themselves. Relative performances of the indirect method were good except when many full-sibs during many generations existed in the population. PMID:12270102

  3. Efficiency of reactant site sampling in network-free simulation of rule-based models for biochemical systems

    PubMed Central

    Yang, Jin; Hlavacek, William S.

    2011-01-01

    Rule-based models, which are typically formulated to represent cell signaling systems, can now be simulated via various network-free simulation methods. In a network-free method, reaction rates are calculated for rules that characterize molecular interactions, and these rule rates, which each correspond to the cumulative rate of all reactions implied by a rule, are used to perform a stochastic simulation of reaction kinetics. Network-free methods, which can be viewed as generalizations of Gillespie’s method, are so named because these methods do not require that a list of individual reactions implied by a set of rules be explicitly generated, which is a requirement of other methods for simulating rule-based models. This requirement is impractical for rule sets that imply large reaction networks (i.e., long lists of individual reactions), as reaction network generation is expensive. Here, we compare the network-free simulation methods implemented in RuleMonkey and NFsim, general-purpose software tools for simulating rule-based models encoded in the BioNetGen language. The method implemented in NFsim uses rejection sampling to correct overestimates of rule rates, which introduces null events (i.e., time steps that do not change the state of the system being simulated). The method implemented in RuleMonkey uses iterative updates to track rule rates exactly, which avoids null events. To ensure a fair comparison of the two methods, we developed implementations of the rejection and rejection-free methods specific to a particular class of kinetic models for multivalent ligand-receptor interactions. These implementations were written with the intention of making them as much alike as possible, minimizing the contribution of irrelevant coding differences to efficiency differences. Simulation results show that performance of the rejection method is equal to or better than that of the rejection-free method over wide parameter ranges. However, when parameter values are such that ligand-induced aggregation of receptors yields a large connected receptor cluster, the rejection-free method is more efficient. PMID:21832806
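
    Neither RuleMonkey nor NFsim is reproduced here, but the rejection idea can be illustrated on a bare Gillespie simulation in which time steps are drawn from a fixed overestimate of the total rate and the surplus is discarded as null events. A toy sketch with a hypothetical birth-death system:

        import numpy as np

        def rejection_ssa(x0, t_end, k1, k2, rate_bound, rng):
            """Stochastic simulation of A -> 0 (rate k1*x) and 0 -> A (rate k2),
            stepping with an upper bound on the total rate; steps in which the
            bound exceeds the true total rate become null events."""
            t, x = 0.0, x0
            while True:
                t += rng.exponential(1.0 / rate_bound)
                if t >= t_end:
                    return x
                a1, a2 = k1 * x, k2                  # true propensities
                assert a1 + a2 <= rate_bound         # the bound must dominate
                u = rng.uniform() * rate_bound
                if u < a1:
                    x -= 1                           # decay fires
                elif u < a1 + a2:
                    x += 1                           # birth fires
                # else: null event (rejected step)

        print(rejection_ssa(50, 10.0, k1=0.1, k2=5.0, rate_bound=25.0,
                            rng=np.random.default_rng(1)))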

  4. SPAR improved structure-fluid dynamic analysis capability, phase 2

    NASA Technical Reports Server (NTRS)

    Pearson, M. L.

    1984-01-01

    An efficient and general method of analyzing a coupled dynamic system of fluid flow and elastic structures is investigated. The improvement of the Structural Performance Analysis and Redesign (SPAR) code is summarized. All error codes are documented, and the SPAR processor/subroutine cross-reference is included.

  5. Convergence in relationships between leaf traits, spectra and age across diverse canopy environments and two contrasting tropical forests

    DOE PAGES

    Wu, Jin; Chavana-Bryant, Cecilia; Prohaska, Neill; ...

    2016-07-06

    Leaf age structures the phenology and development of plants, as well as the evolution of leaf traits over life histories. However, a general method for efficiently estimating leaf age across forests and canopy environments is lacking.

  6. 40 CFR 21.4 - Review of application.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 1 2011-07-01 2011-07-01 false Review of application. 21.4 Section 21.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY GENERAL SMALL BUSINESS § 21.4 Review of..., efficiency, or technological standpoint. (c) An application which proposes additions, alterations, or methods...

  7. 40 CFR 21.4 - Review of application.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 1 2013-07-01 2013-07-01 false Review of application. 21.4 Section 21.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY GENERAL SMALL BUSINESS § 21.4 Review of..., efficiency, or technological standpoint. (c) An application which proposes additions, alterations, or methods...

  8. 40 CFR 21.4 - Review of application.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 1 2014-07-01 2014-07-01 false Review of application. 21.4 Section 21.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY GENERAL SMALL BUSINESS § 21.4 Review of..., efficiency, or technological standpoint. (c) An application which proposes additions, alterations, or methods...

  9. 40 CFR 21.4 - Review of application.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 1 2012-07-01 2012-07-01 false Review of application. 21.4 Section 21.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY GENERAL SMALL BUSINESS § 21.4 Review of..., efficiency, or technological standpoint. (c) An application which proposes additions, alterations, or methods...

  10. 40 CFR 21.4 - Review of application.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 1 2010-07-01 2010-07-01 false Review of application. 21.4 Section 21.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY GENERAL SMALL BUSINESS § 21.4 Review of..., efficiency, or technological standpoint. (c) An application which proposes additions, alterations, or methods...

  11. Improving Generalization Based on l1-Norm Regularization for EEG-Based Motor Imagery Classification

    PubMed Central

    Zhao, Yuwei; Han, Jiuqi; Chen, Yushu; Sun, Hongji; Chen, Jiayun; Ke, Ang; Han, Yao; Zhang, Peng; Zhang, Yi; Zhou, Jin; Wang, Changyong

    2018-01-01

    Multichannel electroencephalography (EEG) is widely used in typical brain-computer interface (BCI) systems. In general, a number of parameters are essential for an EEG classification algorithm due to the redundant features involved in EEG signals. However, the generalization of an EEG method is often adversely affected by the model complexity, which is closely tied to the number of undetermined parameters, further leading to heavy overfitting. To decrease the complexity and improve the generalization of the EEG method, we present a novel l1-norm-based approach that combines the decision values obtained from each EEG channel directly. By extracting the information from different channels on independent frequency bands (FB) with l1-norm regularization, the proposed method fits the training data with far fewer parameters than common spatial pattern (CSP) methods, in order to reduce overfitting. Moreover, an effective and efficient solution to minimize the optimization objective is proposed. The experimental results on dataset IVa of BCI competition III and dataset I of BCI competition IV show that the proposed method yields high classification accuracy and increases generalization performance for the classification of motor imagery (MI) EEG. As the training set ratio decreases from 80 to 20%, the average classification accuracy on the two datasets changes from 85.86 and 86.13% to 84.81 and 76.59%, respectively. The classification performance and generalization of the proposed method contribute to the practical application of MI-based BCI systems. PMID:29867307

  12. Recruitment strategies for a clinical trial of community-based water therapy for osteoarthritis.

    PubMed

    Davey, Rachel; Edwards, Sarah Matthes; Cochrane, Tom

    2003-04-01

    This study compares the efficiency of two methods of recruitment into a randomised controlled trial examining the cost-effectiveness of water therapy for elderly people with lower limb osteoarthritis. The direct cost of recruiting patients via general practice was £27.66 per patient (1.1 personnel hours/patient). The cost per recruited patient from a local newspaper article was £2.72 (0.2 personnel hours/patient). The cost differential between the two recruitment methods was largely owing to poor administration practices, difficulties in accessing patient information, and difficulties in contacting patients from the general practice computer database.

  13. Data Content and Exchange in General Practice: a Review

    PubMed Central

    Kalankesh, Leila R; Farahbakhsh, Mostafa; Rahimi, Niloofar

    2014-01-01

    Background: Efficient communication of data is an essential requirement for general practice; any issue in data content and its exchange among GPs and other related entities hinders continuity of patient care. Methods: The literature search for this review was conducted on three electronic databases: Medline, Scopus, and Science Direct. Results: Through reviewing the papers, we extracted information on GP data content, use cases of GP information exchange, its participants, tools and methods, and incentives and barriers. Conclusion: Considering the importance of data content and exchange for GP systems, more research needs to be conducted toward providing a comprehensive framework for data content and exchange in GP systems. PMID:25648317

  14. Stability and bifurcation analysis of oscillators with piecewise-linear characteristics - A general approach

    NASA Technical Reports Server (NTRS)

    Noah, S. T.; Kim, Y. B.

    1991-01-01

    A general approach is developed for determining the periodic solutions and their stability of nonlinear oscillators with piecewise-smooth characteristics. A modified harmonic balance/Fourier transform procedure is devised for the analysis. The procedure avoids certain numerical differentiation employed previously in determining the periodic solutions, thereby enhancing the reliability and efficiency of the method. Stability of the solutions is determined via perturbations of their state variables. The method is applied to a forced oscillator interacting with a stop of finite stiffness. Flip and fold bifurcations are found to occur, leading to the identification of parameter ranges in which chaotic response occurs.

  15. The status of silicon ribbon growth technology for high-efficiency silicon solar cells

    NASA Technical Reports Server (NTRS)

    Ciszek, T. F.

    1985-01-01

    More than a dozen methods have been applied to the growth of silicon ribbons, beginning as early as 1963. The ribbon geometry has been particularly intriguing for photovoltaic applications, because it might provide large-area, damage-free, nearly continuous substrates without the material loss or cost of ingot wafering. In general, the efficiency of silicon ribbon solar cells has been lower than that of ingot cells. The status of some ribbon growth techniques that have achieved laboratory efficiencies greater than 13.5% is reviewed, i.e., edge-defined, film-fed growth (EFG), edge-supported pulling (ESP), ribbon against a drop (RAD), and dendritic web growth (web).

  16. Efficient quantum circuits for dense circulant and circulant like operators

    PubMed Central

    Zhou, S. S.

    2017-01-01

    Circulant matrices are an important family of operators, which have a wide range of applications in science and engineering-related fields. They are, in general, non-sparse and non-unitary. In this paper, we present efficient quantum circuits to implement circulant operators using fewer resources and with lower complexity than existing methods. Moreover, our quantum circuits can be readily extended to the implementation of Toeplitz, Hankel and block circulant matrices. Efficient quantum algorithms to implement the inverses and products of circulant operators are also provided, and an example application in solving the equation of motion for cyclic systems is discussed. PMID:28572988
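
    The quantum constructions are beyond a short example, but the classical fact they build on is that every circulant matrix is diagonalized by the discrete Fourier transform, so applying one costs O(n log n). A minimal NumPy sketch of that property:

        import numpy as np

        def circulant_matvec(c, x):
            """Apply the circulant matrix with first column c, using
            C = F^{-1} diag(F c) F, i.e., circular convolution via the FFT."""
            return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

        c = np.array([4.0, 1.0, 0.0, 1.0])
        x = np.array([1.0, 2.0, 3.0, 4.0])
        C = np.array([[c[(i - j) % 4] for j in range(4)] for i in range(4)])  # dense check
        assert np.allclose(circulant_matvec(c, x), C @ x)

    The same diagonalization gives inverses and products of circulant operators by inverting or multiplying the eigenvalue vector np.fft.fft(c) elementwise.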

  17. Has the 2008 financial crisis affected stock market efficiency? The case of Eurozone

    NASA Astrophysics Data System (ADS)

    Anagnostidis, P.; Varsakelis, C.; Emmanouilides, C. J.

    2016-04-01

    In this paper, the impact of the 2008 financial crisis on the weak-form efficiency of twelve Eurozone stock markets is investigated empirically. Efficiency is tested via the Generalized Hurst Exponent method, while dynamic Hurst exponents are estimated by means of the rolling window technique. To account for biases associated with the finite sample size and the leptokurtosis of the financial data, the statistical significance of the Hurst exponent estimates is assessed through a series of Monte-Carlo simulations drawn from the class of α-stable distributions. According to our results, the 2008 crisis has adversely affected stock price efficiency in most of the Eurozone capital markets, leading to the emergence of significant mean-reverting patterns in stock price movements.
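
    In outline, the Generalized Hurst Exponent method regresses the q-th order structure function against the lag, E|x(t+τ) − x(t)|^q ∝ τ^(qH(q)). A rough sketch for a single window (rolling-window handling and the Monte-Carlo significance testing are omitted):

        import numpy as np

        def generalized_hurst(x, q=1, taus=range(1, 20)):
            """Estimate H(q) from the scaling of the q-th order structure function."""
            taus = np.asarray(list(taus))
            kq = np.array([np.mean(np.abs(x[tau:] - x[:-tau]) ** q) for tau in taus])
            slope = np.polyfit(np.log(taus), np.log(kq), 1)[0]
            return slope / q

        rng = np.random.default_rng(2)
        walk = np.cumsum(rng.standard_normal(10_000))  # Brownian motion: expect H near 0.5
        print(generalized_hurst(walk))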

  18. Automated optimization techniques for aircraft synthesis

    NASA Technical Reports Server (NTRS)

    Vanderplaats, G. N.

    1976-01-01

    Application of numerical optimization techniques to automated conceptual aircraft design is examined. These methods are shown to be a general and efficient way to obtain quantitative information for evaluating alternative new vehicle projects. Fully automated design is compared with traditional point design methods and time and resource requirements for automated design are given. The NASA Ames Research Center aircraft synthesis program (ACSYNT) is described with special attention to calculation of the weight of a vehicle to fly a specified mission. The ACSYNT procedures for automatically obtaining sensitivity of the design (aircraft weight, performance and cost) to various vehicle, mission, and material technology parameters are presented. Examples are used to demonstrate the efficient application of these techniques.

  19. When Does Changing Representation Improve Problem-Solving Performance?

    NASA Technical Reports Server (NTRS)

    Holte, Robert; Zimmer, Robert; MacDonald, Alan

    1992-01-01

    The aim of changing representation is the improvement of problem-solving efficiency. For the most widely studied family of methods of change of representation, it is shown that the value of a single parameter, called the expansion factor, is critical in determining (1) whether the change of representation will improve or degrade problem-solving efficiency and (2) whether the solutions produced using the change of representation will or will not be exponentially longer than the shortest solution. A method of computing the expansion factor for a given change of representation is sketched in general and described in detail for homomorphic changes of representation. The results are illustrated with homomorphic decompositions of the Towers of Hanoi problem.

  20. An Integrated Method to Analyze Farm Vulnerability to Climatic and Economic Variability According to Farm Configurations and Farmers' Adaptations.

    PubMed

    Martin, Guillaume; Magne, Marie-Angélina; Cristobal, Magali San

    2017-01-01

    The need to adapt to decrease farm vulnerability to adverse contextual events has been extensively discussed on a theoretical basis. We developed an integrated and operational method to assess farm vulnerability to multiple and interacting contextual changes and explain how this vulnerability can best be reduced according to farm configurations and farmers' technical adaptations over time. Our method considers farm vulnerability as a function of the raw measurements of vulnerability variables (e.g., economic efficiency of production), the slope of the linear regression of these measurements over time, and the residuals of this linear regression. The last two are extracted from linear mixed models considering a random regression coefficient (an intercept common to all farms), a global trend (a slope common to all farms), a random deviation from the general mean for each farm, and a random deviation from the general trend for each farm. Among all possible combinations, the lowest farm vulnerability is obtained through a combination of high values of measurements, a stable or increasing trend and low variability for all vulnerability variables considered. Our method enables relating the measurements, trends and residuals of vulnerability variables to explanatory variables that illustrate farm exposure to climatic and economic variability, initial farm configurations and farmers' technical adaptations over time. We applied our method to 19 cattle (beef, dairy, and mixed) farms over the period 2008-2013. Selected vulnerability variables, i.e., farm productivity and economic efficiency, varied greatly among cattle farms and across years, with means ranging from 43.0 to 270.0 kg protein/ha and 29.4-66.0% efficiency, respectively. No farm had a high level, stable or increasing trend and low residuals for both farm productivity and economic efficiency of production. Thus, the least vulnerable farms represented a compromise among measurement value, trend, and variability of both performances. No specific combination of farmers' practices emerged for reducing cattle farm vulnerability to climatic and economic variability. In the least vulnerable farms, the practices implemented (stocking rate, input use…) were more consistent with the objective of developing the properties targeted (efficiency, robustness…). Our method can be used to support farmers with sector-specific and local insights about most promising farm adaptations.

  1. An Integrated Method to Analyze Farm Vulnerability to Climatic and Economic Variability According to Farm Configurations and Farmers’ Adaptations

    PubMed Central

    Martin, Guillaume; Magne, Marie-Angélina; Cristobal, Magali San

    2017-01-01

    The need to adapt to decrease farm vulnerability to adverse contextual events has been extensively discussed on a theoretical basis. We developed an integrated and operational method to assess farm vulnerability to multiple and interacting contextual changes and explain how this vulnerability can best be reduced according to farm configurations and farmers’ technical adaptations over time. Our method considers farm vulnerability as a function of the raw measurements of vulnerability variables (e.g., economic efficiency of production), the slope of the linear regression of these measurements over time, and the residuals of this linear regression. The last two are extracted from linear mixed models considering a random regression coefficient (an intercept common to all farms), a global trend (a slope common to all farms), a random deviation from the general mean for each farm, and a random deviation from the general trend for each farm. Among all possible combinations, the lowest farm vulnerability is obtained through a combination of high values of measurements, a stable or increasing trend and low variability for all vulnerability variables considered. Our method enables relating the measurements, trends and residuals of vulnerability variables to explanatory variables that illustrate farm exposure to climatic and economic variability, initial farm configurations and farmers’ technical adaptations over time. We applied our method to 19 cattle (beef, dairy, and mixed) farms over the period 2008–2013. Selected vulnerability variables, i.e., farm productivity and economic efficiency, varied greatly among cattle farms and across years, with means ranging from 43.0 to 270.0 kg protein/ha and 29.4–66.0% efficiency, respectively. No farm had a high level, stable or increasing trend and low residuals for both farm productivity and economic efficiency of production. Thus, the least vulnerable farms represented a compromise among measurement value, trend, and variability of both performances. No specific combination of farmers’ practices emerged for reducing cattle farm vulnerability to climatic and economic variability. In the least vulnerable farms, the practices implemented (stocking rate, input use…) were more consistent with the objective of developing the properties targeted (efficiency, robustness…). Our method can be used to support farmers with sector-specific and local insights about most promising farm adaptations. PMID:28900435

  2. Efficient parallel implicit methods for rotary-wing aerodynamics calculations

    NASA Astrophysics Data System (ADS)

    Wissink, Andrew M.

    Euler/Navier-Stokes Computational Fluid Dynamics (CFD) methods are commonly used for prediction of the aerodynamics and aeroacoustics of modern rotary-wing aircraft. However, their widespread application to large complex problems is limited by the lack of adequate computing power. Parallel processing offers the potential for dramatic increases in computing power, but most conventional implicit solution methods are inefficient in parallel and new techniques must be adopted to realize this potential. This work proposes alternative implicit schemes for Euler/Navier-Stokes rotary-wing calculations which are robust and efficient in parallel. The first part of this work proposes an efficient parallelizable modification of the Lower-Upper Symmetric Gauss-Seidel (LU-SGS) implicit operator used in the well-known Transonic Unsteady Rotor Navier-Stokes (TURNS) code. The new hybrid LU-SGS scheme couples the point-relaxation approach of the Data Parallel-Lower Upper Relaxation (DP-LUR) algorithm for inter-processor communication with the Symmetric Gauss-Seidel algorithm of LU-SGS for on-processor computations. With the modified operator, TURNS is implemented in parallel using the Message Passing Interface (MPI) for communication. Numerical performance and parallel efficiency are evaluated on the IBM SP2 and Thinking Machines CM-5 multiprocessors for a variety of steady-state and unsteady test cases. The hybrid LU-SGS scheme maintains the numerical performance of the original LU-SGS algorithm in all cases and shows a good degree of parallel efficiency. It exhibits a higher degree of robustness than DP-LUR for third-order upwind solutions. The second part of this work examines the use of Krylov subspace iterative solvers for the nonlinear CFD solutions. The hybrid LU-SGS scheme is used as a parallelizable preconditioner. Two iterative methods are tested, Generalized Minimum Residual (GMRES) and Orthogonal s-Step Generalized Conjugate Residual (OSGCR). The Newton method demonstrates good parallel performance on the IBM SP2, with OSGCR giving slightly better performance than GMRES on large numbers of processors. For steady and quasi-steady calculations, the convergence rate is accelerated but the overall solution time remains about the same as the standard hybrid LU-SGS scheme. For unsteady calculations, however, the Newton method maintains a higher degree of time-accuracy, which allows the use of larger timesteps and results in CPU savings of 20-35%.

  3. Front panel engineering with CAD simulation tool

    NASA Astrophysics Data System (ADS)

    Delacour, Jacques; Ungar, Serge; Mathieu, Gilles; Hasna, Guenther; Martinez, Pascal; Roche, Jean-Christophe

    1999-04-01

    The progress made recently in display technology covers many fields of application. The specification of radiance, colorimetry, and lighting efficiency creates new challenges for designers. Photometric design is limited by the capability of correctly predicting the result of a lighting system, so as to save on the costs and time taken to build multiple prototypes or breadboard benches. The second step of the research carried out by the company OPTIS is to propose an optimization method to be applied to the lighting system, developed in the software SPEOS. The main features required of the tool include a CAD interface, to enable fast and efficient transfer between mechanical and light design software, source modeling, a light transfer model, and an optimization tool. The CAD interface is mainly a matter of data transfer and is not the subject here. Photometric simulation is efficiently achieved by using measured source encodings and a simulation by the Monte Carlo method. Today, the advantages and the limitations of the Monte Carlo method are well known: noise reduction requires a long calculation time, which increases with the complexity of the display panel. A successful optimization is difficult to achieve, because each optimization pass includes a Monte Carlo simulation and therefore a long calculation time. The problem was initially defined as an engineering method of study. Experience shows that good understanding and mastery of the phenomenon of light transfer is limited by the complexity of non-sequential propagation, so the engineer must call on the help of a simulation and optimization tool. The main requirement for an efficient optimization is a quick method for simulating light transfer. Much work has been done in this area and some interesting results can be observed. It must be said that the Monte Carlo method wastes time calculating results and information that are not required for the needs of the simulation, and low-efficiency transfer systems cost a great deal of time. More generally, light transfer simulation can be treated efficiently when the integrated result is composed of elementary sub-results that admit quick analytical calculation of intersections. Two axes of research thus appear: fast integration, and fast calculation of geometric intersections. The first brings some general solutions that are also valid for multi-reflection systems; the second requires some deep thinking on intersection calculation. An interesting approach is the subdivision of space into voxels, an adapted method of 3D division of space according to the objects and their location. An experimental software package has been developed to validate the method. The gain is particularly high in complex systems, where an important reduction in calculation time has been achieved.

  4. Efficient free energy calculations by combining two complementary tempering sampling methods.

    PubMed

    Xie, Liangxu; Shen, Lin; Chen, Zhe-Ning; Yang, Mingjun

    2017-01-14

    Although energy barriers can be efficiently crossed in the reaction coordinate (RC) guided sampling, this type of method suffers from identification of the correct RCs or requirements of high dimensionality of the defined RCs for a given system. If only the approximate RCs with significant barriers are used in the simulations, hidden energy barriers with small to medium height would exist in other degrees of freedom (DOFs) relevant to the target process and consequently cause the problem of insufficient sampling. To address the sampling in this so-called hidden barrier situation, here we propose an effective approach to combine temperature accelerated molecular dynamics (TAMD), an efficient RC-guided sampling method, with the integrated tempering sampling (ITS), a generalized ensemble sampling method. In this combined ITS-TAMD method, the sampling along the major RCs with high energy barriers is guided by TAMD and the sampling of the rest of the DOFs with lower but not negligible barriers is enhanced by ITS. The performance of ITS-TAMD to three systems in the processes with hidden barriers has been examined. In comparison to the standalone TAMD or ITS approach, the present hybrid method shows three main improvements. (1) Sampling efficiency can be improved at least five times even if in the presence of hidden energy barriers. (2) The canonical distribution can be more accurately recovered, from which the thermodynamic properties along other collective variables can be computed correctly. (3) The robustness of the selection of major RCs suggests that the dimensionality of necessary RCs can be reduced. Our work shows more potential applications of the ITS-TAMD method as the efficient and powerful tool for the investigation of a broad range of interesting cases.

  5. Efficient free energy calculations by combining two complementary tempering sampling methods

    NASA Astrophysics Data System (ADS)

    Xie, Liangxu; Shen, Lin; Chen, Zhe-Ning; Yang, Mingjun

    2017-01-01

    Although energy barriers can be efficiently crossed in the reaction coordinate (RC) guided sampling, this type of method suffers from identification of the correct RCs or requirements of high dimensionality of the defined RCs for a given system. If only the approximate RCs with significant barriers are used in the simulations, hidden energy barriers with small to medium height would exist in other degrees of freedom (DOFs) relevant to the target process and consequently cause the problem of insufficient sampling. To address the sampling in this so-called hidden barrier situation, here we propose an effective approach to combine temperature accelerated molecular dynamics (TAMD), an efficient RC-guided sampling method, with the integrated tempering sampling (ITS), a generalized ensemble sampling method. In this combined ITS-TAMD method, the sampling along the major RCs with high energy barriers is guided by TAMD and the sampling of the rest of the DOFs with lower but not negligible barriers is enhanced by ITS. The performance of ITS-TAMD to three systems in the processes with hidden barriers has been examined. In comparison to the standalone TAMD or ITS approach, the present hybrid method shows three main improvements. (1) Sampling efficiency can be improved at least five times even if in the presence of hidden energy barriers. (2) The canonical distribution can be more accurately recovered, from which the thermodynamic properties along other collective variables can be computed correctly. (3) The robustness of the selection of major RCs suggests that the dimensionality of necessary RCs can be reduced. Our work shows more potential applications of the ITS-TAMD method as the efficient and powerful tool for the investigation of a broad range of interesting cases.

  6. Highly Efficient Compression Algorithms for Multichannel EEG.

    PubMed

    Shaw, Laxmi; Rahman, Daleef; Routray, Aurobinda

    2018-05-01

    The difficulty associated with processing and understanding the high dimensionality of electroencephalogram (EEG) data requires developing efficient and robust compression algorithms. In this paper, different lossless compression techniques of single and multichannel EEG data, including Huffman coding, arithmetic coding, Markov predictor, linear predictor, context-based error modeling, multivariate autoregression (MVAR), and a low complexity bivariate model have been examined and their performances have been compared. Furthermore, a high compression algorithm named general MVAR and a modified context-based error modeling for multichannel EEG have been proposed. The resulting compression algorithm produces a higher relative compression ratio of 70.64% on average compared with the existing methods, and in some cases, it goes up to 83.06%. The proposed methods are designed to compress a large amount of multichannel EEG data efficiently so that the data storage and transmission bandwidth can be effectively used. These methods have been validated using several experimental multichannel EEG recordings of different subjects and publicly available standard databases. The satisfactory parametric measures of these methods, namely percent-root-mean square distortion, peak signal-to-noise ratio, root-mean-square error, and cross correlation, show their superiority over the state-of-the-art compression methods.

  7. Analysis of Nonequivalent Assessments across Different Linguistic Groups Using a Mixed Methods Approach: Understanding the Causes of Differential Item Functioning by Cognitive Interviewing

    ERIC Educational Resources Information Center

    Benítez, Isabel; Padilla, José-Luis

    2014-01-01

    Differential item functioning (DIF) can undermine the validity of cross-lingual comparisons. While many efficient statistics for detecting DIF are available, few general findings have been established to explain DIF results. The objective of the article was to study DIF sources by using a mixed methods design. The design involves a quantitative phase…

  8. Fast Multiscale Algorithms for Wave Propagation in Heterogeneous Environments

    DTIC Science & Technology

    2016-01-07

    [Garbled extraction from the report's front matter; recoverable items: a presentation, "Nonlinear solvers for high-intensity focused ultrasound with application to cancer treatment" (AIMS, Palo Alto, 2012); a figure caption, "Density µ(t) at mode 0 for scattering of a plane Gaussian pulse from a sphere"; and the stated goal of a highly efficient, general-purpose wave simulator whose crucial components include reliable, low-cost methods for truncating the computational domain.]

  9. Development of Methods for Carrier-Mediated Targeted Delivery of Antiviral Compounds Using Monoclonal Antibodies

    DTIC Science & Technology

    1987-04-01

    [Garbled extraction from the report; recoverable items: a figure caption, "Comparison of immunofluorescent staining in formaldehyde-fixed Pichinde virus-infected cells that had been either dried prior to reaction or not"; and fragments of the Experimental Methods section (general procedures and instrumentation; a reaction during which the mixture turned red and efficient stirring became very difficult).]

  10. A General Simulation Method for Multiple Bodies in Proximate Flight

    NASA Technical Reports Server (NTRS)

    Meakin, Robert L.

    2003-01-01

    Methods of unsteady aerodynamic simulation for an arbitrary number of independent bodies flying in close proximity are considered. A novel method to efficiently detect collision contact points is described. A method to compute body trajectories in response to aerodynamic loads, applied loads, and inter-body collisions is also given. The physical correctness of the methods is verified by comparison to a set of analytic solutions. The methods, combined with a Navier-Stokes solver, are used to demonstrate the possibility of predicting the unsteady aerodynamics and flight trajectories of moving bodies that involve rigid-body collisions.

  11. In silico concurrent multisite pH titration in proteins.

    PubMed

    Hu, Hao; Shen, Lin

    2014-07-30

    The concurrent proton binding at multiple sites in macromolecules such as proteins and nucleic acids is an important yet challenging problem in biochemistry. We develop an efficient generalized Hamiltonian approach to attack this issue. Based on the previously developed generalized-ensemble methods, an effective potential energy is constructed which combines the contributions of all (relevant) protonation states of the molecule. The effective potential preserves important phase regions of all states and, thus, allows efficient sampling of these regions in one simulation. The need for intermediate states in alchemical free energy simulations is greatly reduced. Free energy differences between different protonation states can be determined accurately and enable one to construct the grand canonical partition function. Therefore, the complicated concurrent multisite proton titration process of protein molecules can be satisfactorily simulated. Application of this method to the simulation of the pKa of Glu49, Asp50, and C-terminus of bovine pancreatic trypsin inhibitor shows reasonably good agreement with published experimental work. This method provides an unprecedented vivid picture of how different protonation states change their relative population upon pH titration. We believe that the method will be very useful in deciphering the molecular mechanism of pH-dependent biomolecular processes in terms of a detailed atomistic description. Copyright © 2014 Wiley Periodicals, Inc.
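
    The multisite machinery itself is elaborate, but any such method must reduce, for a single site, to the Henderson-Hasselbalch population curve. A sketch of that limiting case (the pKa value is a placeholder, not a result from the paper):

        import numpy as np

        def protonated_fraction(pH, pKa):
            """Single-site limit: f = 1 / (1 + 10**(pH - pKa))."""
            return 1.0 / (1.0 + 10.0 ** (np.asarray(pH, dtype=float) - pKa))

        print(protonated_fraction([2.0, 4.0, 6.0], pKa=4.0))  # ~0.99, 0.5, ~0.01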

  12. Algorithms for Efficient Computation of Transfer Functions for Large Order Flexible Systems

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Giesy, Daniel P.

    1998-01-01

    An efficient and robust computational scheme is given for the calculation of the frequency response function of a large-order flexible system implemented with a linear, time-invariant control system. Advantage is taken of the highly structured sparsity of the system matrix of the plant, based on a model of the structure using normal mode coordinates. The computational time per frequency point of the new computational scheme is a linear function of system size, a significant improvement over traditional full-matrix techniques whose computational times per frequency point range from quadratic to cubic functions of system size. This permits the practical frequency domain analysis of systems of much larger order than by traditional full-matrix techniques. Formulations are given for both open- and closed-loop systems. Numerical examples are presented showing the advantages of the present formulation over traditional approaches, both in speed and in accuracy. Using a model with 703 structural modes, the present method was up to two orders of magnitude faster than a traditional method. The present method generally showed good to excellent accuracy throughout the range of test frequencies, while traditional methods gave adequate accuracy for lower frequencies but generally deteriorated in performance at higher frequencies, with worst-case errors many orders of magnitude larger than the correct values.
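
    The linear-per-frequency-point cost comes from the normal-mode coordinates: the open-loop plant decouples into independent second-order modes, so the frequency response is a sum over modes rather than a dense matrix solve. A schematic sketch (modal data are hypothetical):

        import numpy as np

        def modal_frf(w, wn, zeta, phi_in, phi_out):
            """Receptance of a modally decoupled structure:
            H(w) = sum_k phi_out[k]*phi_in[k] / (wn[k]**2 - w**2 + 2j*zeta[k]*wn[k]*w),
            costing O(number of modes) per frequency point."""
            den = wn**2 - w**2 + 2j * zeta * wn * w
            return np.sum(phi_out * phi_in / den)

        wn = np.array([10.0, 25.0, 60.0])       # modal frequencies (rad/s)
        zeta = np.full(3, 0.01)                 # modal damping ratios
        phi_in = np.array([0.3, 0.5, 0.2])      # mode shapes at the actuator
        phi_out = np.array([0.4, 0.1, 0.6])     # mode shapes at the sensor
        print(abs(modal_frf(9.8, wn, zeta, phi_in, phi_out)))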

  13. FastMag: Fast micromagnetic simulator for complex magnetic structures (invited)

    NASA Astrophysics Data System (ADS)

    Chang, R.; Li, S.; Lubarda, M. V.; Livshitz, B.; Lomakin, V.

    2011-04-01

    A fast micromagnetic simulator (FastMag) for general problems is presented. FastMag solves the Landau-Lifshitz-Gilbert equation and can handle multiscale problems with a high computational efficiency. The simulator derives its high performance from efficient methods for evaluating the effective field and from implementations on massively parallel graphics processing unit (GPU) architectures. FastMag discretizes the computational domain into tetrahedral elements and therefore is highly flexible for general problems. The magnetostatic field is computed via the superposition principle for both volume and surface parts of the computational domain. This is accomplished by implementing efficient quadrature rules and analytical integration for overlapping elements in which the integral kernel is singular. Thus, discretized superposition integrals are computed using a nonuniform grid interpolation method, which evaluates the field from N sources at N collocated observers in O(N) operations. This approach allows handling objects of arbitrary shape, allows easy calculation of the field outside the magnetized domains, does not require solving a linear system of equations, and requires little memory. FastMag is implemented on GPUs, with GPU-CPU speed-ups of two orders of magnitude. Simulations are shown of a large array of magnetic dots and a recording head fully discretized down to the exchange length, with over a hundred million tetrahedral elements, on an inexpensive desktop computer.

  14. Phenomenological modeling of nonlinear holograms based on metallic geometric metasurfaces.

    PubMed

    Ye, Weimin; Li, Xin; Liu, Juan; Zhang, Shuang

    2016-10-31

    Benefiting from efficient local phase and amplitude control at the subwavelength scale, metasurfaces offer a new platform for computer-generated holography with high spatial resolution. Three-dimensional and highly efficient holograms have been realized by metasurfaces constituted of subwavelength meta-atoms with spatially varying geometries or orientations. Metasurfaces have recently been extended to the nonlinear optical regime to generate holographic images in harmonic generation waves. Thus far, there has been no vector field simulation of nonlinear metasurface holograms, because of the tremendous computational challenge in numerically calculating the collective nonlinear responses of the large number of different subwavelength meta-atoms in a hologram. Here, we propose a general phenomenological method to model nonlinear metasurface holograms, based on the assumption that every meta-atom can be described by a localized nonlinear polarizability tensor. Applied to geometric nonlinear metasurfaces, we numerically model the holographic images formed by the second-harmonic waves of different spins. We show that, in contrast to metasurface holograms operating in the linear optical regime, the wavelength of the incident fundamental light should be slightly detuned from the fundamental resonant wavelength to optimize the efficiency and quality of nonlinear holographic images. The proposed modeling provides a general method to simulate nonlinear optical devices based on metallic metasurfaces.

  15. Approximating the Basset force by optimizing the method of van Hinsberg et al.

    NASA Astrophysics Data System (ADS)

    Casas, G.; Ferrer, A.; Oñate, E.

    2018-01-01

    In this work we put the method proposed by van Hinsberg et al. [29] to the test, highlighting its accuracy and efficiency in a sequence of benchmarks of increasing complexity. Furthermore, we explore the possibility of systematizing the way in which the method's free parameters are determined by generalizing the optimization problem that was considered originally. Finally, we provide a list of worked-out values, ready for implementation in large-scale particle-laden flow simulations.
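
    The essence of the van Hinsberg et al. approach is to replace the slowly decaying Basset kernel by a short sum of exponentials, so the history integral can be updated recursively instead of being re-integrated at every step. A toy sketch of that recursion (the amplitudes and time scales below are placeholders; choosing them optimally is the optimization problem the paper generalizes):

        import numpy as np

        def step_history(I, f_new, dt, a, tau):
            """Advance each exponential-kernel history term
            I_i(t) = int_0^t a[i]*exp(-(t-s)/tau[i]) f(s) ds
            by one step, closing the newest window with a one-point rule."""
            decay = np.exp(-dt / tau)
            return decay * I + a * tau * (1.0 - decay) * f_new

        a = np.array([0.6, 0.3, 0.1])        # placeholder amplitudes
        tau = np.array([0.1, 1.0, 10.0])     # placeholder time scales
        I, dt = np.zeros(3), 0.01
        for n in range(1, 101):              # toy forcing f(t) = sin(t)
            I = step_history(I, np.sin(n * dt), dt, a, tau)
        print(I.sum())                       # surrogate history integral at t = 1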

  16. Finite element area and line integral transforms for generalization of aperture function and geometry in Kirchhoff scalar diffraction theory

    NASA Astrophysics Data System (ADS)

    Kraus, Hal G.

    1993-02-01

    Two finite element-based methods for calculating Fresnel region and near-field region intensities resulting from diffraction of light by two-dimensional apertures are presented. The first is derived using the Kirchhoff area diffraction integral and the second is derived using a displaced vector potential to achieve a line integral transformation. The specific form of each of these formulations is presented for incident spherical waves and for Gaussian laser beams. The geometry of the two-dimensional diffracting aperture(s) is based on biquadratic isoparametric elements, which are used to define apertures of complex geometry. These elements are also used to build complex amplitude and phase functions across the aperture(s), which may be of continuous or discontinuous form. The finite element transform integrals are accurately and efficiently integrated numerically using Gaussian quadrature. The power of these methods is illustrated in several examples which include secondary obstructions, secondary spider supports, multiple mirror arrays, synthetic aperture arrays, apertures covered by screens, apodization, phase plates, and off-axis apertures. Typically, the finite element line integral transform results in significant gains in computational efficiency over the finite element Kirchhoff transform method, but is also subject to some loss in generality.

  17. Rolling bearing fault feature learning using improved convolutional deep belief network with compressed sensing

    NASA Astrophysics Data System (ADS)

    Shao, Haidong; Jiang, Hongkai; Zhang, Haizhou; Duan, Wenjing; Liang, Tianchen; Wu, Shuaipeng

    2018-02-01

    The vibration signals collected from rolling bearings are usually complex and non-stationary, with heavy background noise. Therefore, it is a great challenge to efficiently learn representative fault features from the collected vibration signals. In this paper, a novel method called improved convolutional deep belief network (CDBN) with compressed sensing (CS) is developed for feature learning and fault diagnosis of rolling bearings. Firstly, CS is adopted to reduce the vibration data amount and improve analysis efficiency. Secondly, a new CDBN model is constructed with Gaussian visible units to enhance the feature learning ability for the compressed data. Finally, the exponential moving average (EMA) technique is employed to improve the generalization performance of the constructed deep model. The developed method is applied to analyze experimental rolling bearing vibration signals. The results confirm that the developed method is more effective than traditional methods.
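
    The EMA step named in the abstract is the standard shadow-parameter update; a minimal sketch (the decay constant is a typical placeholder, not the paper's value):

        import numpy as np

        def ema_update(shadow, params, decay=0.999):
            """shadow <- decay*shadow + (1 - decay)*params, elementwise."""
            return [decay * s + (1.0 - decay) * p for s, p in zip(shadow, params)]

        shadow = [np.zeros(4)]
        for _ in range(1000):                # pretend these are training updates
            shadow = ema_update(shadow, [np.ones(4)])
        print(shadow[0])                     # drifts smoothly toward the live weights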

  18. Transfer matrix method for dynamics modeling and independent modal space vibration control design of linear hybrid multibody system

    NASA Astrophysics Data System (ADS)

    Rong, Bao; Rui, Xiaoting; Lu, Kun; Tao, Ling; Wang, Guoping; Ni, Xiaojun

    2018-05-01

    In this paper, an efficient method for dynamics modeling and vibration control design of a linear hybrid multibody system (MS) is studied based on the transfer matrix method. The natural vibration characteristics of a linear hybrid MS are solved by using low-order transfer equations. Then, by constructing a brand-new body dynamics equation, augmented operator, and augmented eigenvector, the orthogonality of the augmented eigenvectors of a linear hybrid MS is satisfied, and its state space model expressed in each independent modal space is obtained easily. Based on this dynamics model, a robust independent modal space fuzzy controller is designed for vibration control of a general MS, and the genetic optimization of some critical control parameters of the fuzzy tuners is also presented. Two illustrative examples are performed, the results of which show that this method is computationally efficient and achieves excellent control performance.

  19. Bayesian data analysis in observational comparative effectiveness research: rationale and examples.

    PubMed

    Olson, William H; Crivera, Concetta; Ma, Yi-Wen; Panish, Jessica; Mao, Lian; Lynch, Scott M

    2013-11-01

    Many comparative effectiveness research and patient-centered outcomes research studies will need to be observational for one or both of two reasons: first, randomized trials are expensive and time-consuming; and second, only observational studies can answer some research questions. It is generally recognized that there is a need to increase the scientific validity and efficiency of observational studies. Bayesian methods for the design and analysis of observational studies are scientifically valid and offer many advantages over frequentist methods, including, importantly, the ability to conduct comparative effectiveness research/patient-centered outcomes research more efficiently. Bayesian data analysis is being introduced into outcomes studies that we are conducting. Our purpose here is to describe our view of some of the advantages of Bayesian methods for observational studies and to illustrate both realized and potential advantages by describing studies we are conducting in which various Bayesian methods have been or could be implemented.

  20. Massively parallel multicanonical simulations

    NASA Astrophysics Data System (ADS)

    Gross, Jonathan; Zierenberg, Johannes; Weigel, Martin; Janke, Wolfhard

    2018-03-01

    Generalized-ensemble Monte Carlo simulations such as the multicanonical method and similar techniques are among the most efficient approaches for simulations of systems undergoing discontinuous phase transitions or with rugged free-energy landscapes. As Markov chain methods, they are inherently serial computationally. It was demonstrated recently, however, that a combination of independent simulations that communicate weight updates at variable intervals allows for the efficient utilization of parallel computational resources for multicanonical simulations. Implementing this approach for the many-thread architecture provided by current generations of graphics processing units (GPUs), we show how it can be efficiently employed with of the order of 10^4 parallel walkers and beyond, thus constituting a versatile tool for Monte Carlo simulations in the era of massively parallel computing. We provide the fully documented source code for the approach applied to the paradigmatic example of the two-dimensional Ising model as a starting point and reference for practitioners in the field.
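
    The serial kernel that the parallel scheme accelerates is the multicanonical weight iteration, in which the sampling weight of each energy bin is divided by the histogram accumulated under the current weights. A bare-bones, model-agnostic sketch (in the parallel version, `hist` would be the sum over all walkers since the last weight exchange):

        import numpy as np

        def update_weights(log_w, hist):
            """One multicanonical iteration: w(E) <- w(E)/H(E) on visited bins,
            suppressing frequently visited energies in the next round."""
            visited = hist > 0
            log_w = log_w.copy()
            log_w[visited] -= np.log(hist[visited])
            return log_w - log_w.max()       # shift for numerical safety

        log_w = np.zeros(10)                 # start from flat (canonical) weights
        hist = np.array([0, 1, 5, 20, 40, 30, 8, 2, 0, 0], dtype=float)
        print(update_weights(log_w, hist))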

  1. Composite SAR imaging using sequential joint sparsity

    NASA Astrophysics Data System (ADS)

    Sanders, Toby; Gelb, Anne; Platte, Rodrigo B.

    2017-06-01

    This paper investigates accurate and efficient ℓ1 regularization methods for generating synthetic aperture radar (SAR) images. Although ℓ1 regularization algorithms are already employed in SAR imaging, practical and efficient implementation in terms of real-time imaging remains a challenge. Here we demonstrate that fast numerical operators can be used to robustly implement ℓ1 regularization methods that are as efficient as or more efficient than traditional approaches such as back projection, while providing superior image quality. In particular, we develop a sequential joint sparsity model for composite SAR imaging which naturally combines the joint sparsity methodology with composite SAR. Our technique, which can be implemented using standard, fractional, or higher-order total variation regularization, is able to reduce the effects of speckle and other noisy artifacts with little additional computational cost. Finally, we show that generalizing total variation regularization to non-integer and higher orders provides improved flexibility and robustness for SAR imaging.
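
    The authors' sequential joint-sparsity scheme is more elaborate than can be shown here, but the core ℓ1 machinery underneath such reconstructions is proximal gradient descent (ISTA) for min_x ||Ax − b||² + λ||x||₁. A generic sketch with a random forward operator standing in for the SAR operator:

        import numpy as np

        def ista(A, b, lam, step, n_iter=200):
            """Iterative shrinkage-thresholding; step should not exceed
            1 / (2 * ||A||_2^2), the inverse Lipschitz constant of the data term."""
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                grad = 2.0 * A.T @ (A @ x - b)   # gradient of ||Ax - b||^2
                z = x - step * grad
                x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
            return x

        rng = np.random.default_rng(3)
        A = rng.standard_normal((40, 100))
        x_true = np.zeros(100)
        x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]   # sparse ground truth
        x_hat = ista(A, A @ x_true, lam=0.1,
                     step=1.0 / (2.0 * np.linalg.norm(A, 2) ** 2))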

  2. Gradient-based Optimization for Poroelastic and Viscoelastic MR Elastography

    PubMed Central

    Tan, Likun; McGarry, Matthew D.J.; Van Houten, Elijah E.W.; Ji, Ming; Solamen, Ligin; Weaver, John B.

    2017-01-01

    We describe an efficient gradient computation for solving inverse problems arising in magnetic resonance elastography (MRE). The algorithm can be considered as a generalized ‘adjoint method’ based on a Lagrangian formulation. One requirement for the classic adjoint method is assurance of the self-adjoint property of the stiffness matrix in the elasticity problem. In this paper, we show this property is no longer a necessary condition in our algorithm, but the computational performance can be as efficient as the classic method, which involves only two forward solutions and is independent of the number of parameters to be estimated. The algorithm is developed and implemented in material property reconstructions using poroelastic and viscoelastic modeling. Various gradient- and Hessian-based optimization techniques have been tested on simulation, phantom and in vivo brain data. The numerical results show the feasibility and the efficiency of the proposed scheme for gradient calculation. PMID:27608454
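
    The Lagrangian construction behind such gradients can be stated compactly. With forward problem R(u, p) = 0 and objective J(u, p), the standard derivation (not specific to this paper's poroelastic or viscoelastic models) gives, in LaTeX:

        \mathcal{L}(u, p, \lambda) = J(u, p) + \lambda^{T} R(u, p), \qquad
        \Big(\frac{\partial R}{\partial u}\Big)^{T} \lambda
            = -\Big(\frac{\partial J}{\partial u}\Big)^{T}, \qquad
        \frac{\mathrm{d}J}{\mathrm{d}p} = \frac{\partial J}{\partial p}
            + \lambda^{T} \frac{\partial R}{\partial p}.

    One forward solve for u and one adjoint solve for λ therefore yield the full gradient, independently of the number of parameters p, which is the efficiency property highlighted in the abstract.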

  3. Visual versus mechanised leucocyte differential counts: costing and evaluation of traditional and Hemalog D methods.

    PubMed

    Hudson, M J; Green, A E

    1980-11-01

    Visual differential counts were examined for efficiency, cost effectiveness, and staff acceptability within our laboratory. A comparison with the Hemalog D system was attempted. The advantages and disadvantages of each system are enumerated and discussed in the context of a large general hospital.

  4. AN ACCURATE AND EFFICIENT ALGORITHM FOR NUMERICAL SIMULATION OF CONDUCTION-TYPE PROBLEMS. (R824801)

    EPA Science Inventory

    Abstract

    A modification of the finite analytic numerical method for conduction-type (diffusion) problems is presented. The finite analytic discretization scheme is derived by means of the Fourier series expansion for the most general case of nonuniform grid and variabl...

  5. An efficient nontraditional method of directly converting a cotton fibrous material into a woven-like hydroentangled nonwoven cotton fabric

    USDA-ARS?s Scientific Manuscript database

    The traditional technology of producing cotton woven fabrics is comprised of about 20 mechanical and chemical processes that generally are costly, slow, inefficient, and environmentally somewhat unfriendly. A modern system, using fewer preparatory processes, of fabricating hydro-entangled cotton and...

  6. A SCILAB Program for Computing General-Relativistic Models of Rotating Neutron Stars by Implementing Hartle's Perturbation Method

    NASA Astrophysics Data System (ADS)

    Papasotiriou, P. J.; Geroyannis, V. S.

    We apply Hartle's perturbation method to the computation of relativistic, rigidly rotating neutron star models. The program has been written in SCILAB (© INRIA ENPC), a matrix-oriented high-level programming language. The numerical method is described in great detail and is applied to many models in slow or fast rotation. We show that, although the method is perturbative, it gives accurate results for all practical purposes and should prove an efficient tool for computing rapidly rotating pulsars.

  7. LASER APPLICATIONS AND OTHER TOPICS IN QUANTUM ELECTRONICS: Study of photodissociation parameters of carboxyhemoglobin

    NASA Astrophysics Data System (ADS)

    Kuz'min, V. V.; Salmin, V. V.; Salmina, A. B.; Provorov, A. S.

    2008-07-01

    The general properties of photodissociation of carboxyhemoglobin (HbCO) in buffer solutions of whole human blood are studied by the flash photolysis method on a setup with intersecting beams. It is shown that the efficiency of photoinduced dissociation of the HbCO complex depends virtually linearly on the photolytic irradiation intensity for average power densities not exceeding 45 mW cm⁻². The general dissociation of the HbCO complex under native conditions occurs in a narrower range of saturation-degree values than in model experiments with the hemoglobin solution. The dependence of the pulse photolysis efficiency of HbCO on the photolytic radiation wavelength in the range from 550 to 585 nm has a broad bell shape. The efficiency maximum corresponds to the electronic Q transition (porphyrin π-π* absorption) in HbCO at a wavelength of 570 nm. No dissociation of the complex was observed under the given experimental conditions upon irradiation of solutions by photolytic radiation at wavelengths above 585 nm.

  8. Exact consideration of data redundancies for spiral cone-beam CT

    NASA Astrophysics Data System (ADS)

    Lauritsch, Guenter; Katsevich, Alexander; Hirsch, Michael

    2004-05-01

    In multi-slice spiral computed tomography (CT) there is an obvious trend toward adding more and more detector rows. The goals are numerous: volume coverage, isotropic spatial resolution, and speed. Consequently, there will be a variety of scan protocols optimized for clinical applications. Flexibility in table feed requires consideration of data redundancies to ensure efficient detector usage. Until recently this was achieved by approximate reconstruction algorithms only. However, due to the increasing cone angles there is a need for exact treatment of the cone-beam geometry. A new, exact and efficient 3-PI algorithm for considering three-fold data redundancies was derived from a general theoretical framework based on 3D Radon inversion using Grangeat's formula. The 3-PI algorithm possesses a structure as simple and efficient as that of the previously proposed 1-PI method for non-redundant data. Filtering is one-dimensional, performed along lines with variable tilt on the detector. This talk presents a thorough evaluation of the performance of the 3-PI algorithm in comparison with the 1-PI method. The image quality of the 3-PI algorithm is superior: the prominent spiral artifacts and other discretization artifacts are significantly reduced due to averaging effects when redundant data are taken into account, and the signal-to-noise ratio is increased. The computational expense is comparable even to that of approximate algorithms. The 3-PI algorithm thus proves its practicability for applications in medical imaging. Other exact n-PI methods for n-fold data redundancies (n odd) can be deduced from the same general theoretical framework.

  9. Phylogenetic studies of transmission dynamics in generalized HIV epidemics: An essential tool where the burden is greatest?

    PubMed Central

    Dennis, Ann M.; Herbeck, Joshua T.; Brown, Andrew Leigh; Kellam, Paul; de Oliveira, Tulio; Pillay, Deenan; Fraser, Christophe; Cohen, Myron S.

    2014-01-01

    Efficient and effective HIV prevention measures for generalized epidemics in sub-Saharan Africa have not yet been validated at the population-level. Design and impact evaluation of such measures requires fine-scale understanding of local HIV transmission dynamics. The novel tools of HIV phylogenetics and molecular epidemiology may elucidate these transmission dynamics. Such methods have been incorporated into studies of concentrated HIV epidemics to identify proximate and determinant traits associated with ongoing transmission. However, applying similar phylogenetic analyses to generalized epidemics, including the design and evaluation of prevention trials, presents additional challenges. Here we review the scope of these methods and present examples of their use in concentrated epidemics in the context of prevention. Next, we describe the current uses for phylogenetics in generalized epidemics, and discuss their promise for elucidating transmission patterns and informing prevention trials. Finally, we review logistic and technical challenges inherent to large-scale molecular epidemiological studies of generalized epidemics, and suggest potential solutions. PMID:24977473

  10. Fully interferometric controllable anomalous refraction efficiency using cross modulation with plasmonic metasurfaces.

    PubMed

    Liu, Zhaocheng; Chen, Shuqi; Li, Jianxiong; Cheng, Hua; Li, Zhancheng; Liu, Wenwei; Yu, Ping; Xia, Ji; Tian, Jianguo

    2014-12-01

    We present a method for fully interferometric, controllable anomalous refraction efficiency obtained by introducing cross-modulated incident light on plasmonic metasurfaces. Theoretical analyses and numerical simulations indicate that the anomalous and ordinary refracted beams generated from two opposite-helicity incident beams, following the generalized Snell's law, superpose at certain incident angles, and that the anomalous refraction efficiency can be dynamically controlled by changing the relative phase of the incident sources. When the incident wavelength is near the resonant wavelength of the plasmonic metasurfaces, two equal-amplitude incident beams with opposite helicity can be used to control the anomalous refraction efficiency; otherwise, two unequal-amplitude incident beams with opposite helicity can be used to fully control it. This Letter may offer a further step in the development of controllable anomalous refraction.

  11. A fast efficient implicit scheme for the gasdynamic equations using a matrix reduction technique

    NASA Technical Reports Server (NTRS)

    Barth, T. J.; Steger, J. L.

    1985-01-01

    An efficient implicit finite-difference algorithm for the gasdynamic equations utilizing matrix reduction techniques is presented. A significant reduction in arithmetic operations is achieved without loss of the stability characteristics or generality found in the Beam and Warming approximate-factorization algorithm. Steady-state solutions to the conservative Euler equations in generalized coordinates are obtained for transonic flows and used to show that the method offers computational advantages over the conventional Beam and Warming scheme. Existing Beam and Warming codes can be retrofitted with minimal effort. The theoretical extension of the matrix reduction technique to the full Navier-Stokes equations in Cartesian coordinates is presented in detail. Linear stability, using a Fourier stability analysis, is demonstrated and discussed for the one-dimensional Euler equations.

  12. The density-matrix renormalization group: a short introduction.

    PubMed

    Schollwöck, Ulrich

    2011-07-13

    The density-matrix renormalization group (DMRG) method has established itself over the last decade as the leading method for the simulation of the statics and dynamics of one-dimensional strongly correlated quantum lattice systems. The DMRG is a method that shares features of a renormalization group procedure (which here generates a flow in the space of reduced density operators) and of a variational method that operates on a highly interesting class of quantum states, so-called matrix product states (MPSs). The DMRG method is presented here entirely in the MPS language. While the DMRG generally fails in larger two-dimensional systems, the MPS picture suggests a straightforward generalization to higher dimensions in the framework of tensor network states. The resulting algorithms, however, suffer from difficulties absent in one dimension, in addition to a much less favourable efficiency, so that their ultimate success remains far from clear at the moment.

  13. Reliability-based design optimization using a generalized subset simulation method and posterior approximation

    NASA Astrophysics Data System (ADS)

    Ma, Yuan-Zhuo; Li, Hong-Shuang; Yao, Wei-Xing

    2018-05-01

    The evaluation of the probabilistic constraints in reliability-based design optimization (RBDO) problems has always been significant and challenging work, which strongly affects the performance of RBDO methods. This article deals with RBDO problems using a recently developed generalized subset simulation (GSS) method and a posterior approximation approach. The posterior approximation approach is used to transform all the probabilistic constraints into ordinary constraints as in deterministic optimization. The assessment of multiple failure probabilities required by the posterior approximation approach is achieved by GSS in a single run at all supporting points, which are selected by a proper experimental design scheme combining Sobol' sequences and Bucher's design. Sequentially, the transformed deterministic design optimization problem can be solved by optimization algorithms, for example, the sequential quadratic programming method. Three optimization problems are used to demonstrate the efficiency and accuracy of the proposed method.

  14. Applying Nyquist's method for stability determination to solar wind observations

    NASA Astrophysics Data System (ADS)

    Klein, Kristopher G.; Kasper, Justin C.; Korreck, K. E.; Stevens, Michael L.

    2017-10-01

    The role instabilities play in governing the evolution of solar and astrophysical plasmas is a matter of considerable scientific interest. The large number of sources of free energy accessible to such nearly collisionless plasmas makes general modeling of unstable behavior, accounting for the temperatures, densities, anisotropies, and relative drifts of a large number of populations, analytically difficult. We therefore seek a general method of stability determination that may be automated for future analysis of solar wind observations. This work describes an efficient application of the Nyquist instability method to the Vlasov dispersion relation appropriate for hot, collisionless, magnetized plasmas, including the solar wind. The algorithm recovers the familiar proton temperature anisotropy instabilities, as well as instabilities that had been previously identified using fits extracted from in situ observations in Gary et al. (2016). Future proposed applications of this method are discussed.
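    The Nyquist method counts the zeros of a dispersion function inside a closed contour via the winding number of its image around the origin. The toy below assumes an analytic dispersion function with known roots, standing in for the hot-plasma Vlasov dispersion relation the record treats.

```python
import numpy as np

# Minimal Nyquist-criterion sketch: the number of zeros of D(omega)
# enclosed by a closed contour equals the winding number of D along it.
def D(w):
    # toy dispersion function with one unstable root (Im > 0)
    return (w - (1.0 + 0.5j)) * (w - (2.0 - 0.3j))

R = 50.0
t = np.linspace(0.0, np.pi, 20000)
contour = np.concatenate([np.linspace(-R, R, 20000),  # real omega axis
                          R * np.exp(1j * t)])         # upper semicircle
phase = np.unwrap(np.angle(D(contour)))
winding = (phase[-1] - phase[0]) / (2 * np.pi)
print("unstable roots:", round(winding))               # -> 1
```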

  15. Protocol vulnerability detection based on network traffic analysis and binary reverse engineering.

    PubMed

    Wen, Shameng; Meng, Qingkun; Feng, Chao; Tang, Chaojing

    2017-01-01

    Network protocol vulnerability detection plays an important role in many domains, including protocol security analysis, application security, and network intrusion detection. In this study, by analyzing the general fuzzing method of network protocols, we propose a novel approach that combines network traffic analysis with the binary reverse engineering method. For network traffic analysis, the block-based protocol description language is introduced to construct test scripts, while the binary reverse engineering method employs the genetic algorithm with a fitness function designed to focus on code coverage. This combination leads to a substantial improvement in fuzz testing for network protocols. We build a prototype system and use it to test several real-world network protocol implementations. The experimental results show that the proposed approach detects vulnerabilities more efficiently and effectively than general fuzzing methods such as SPIKE.

  16. Total generalized variation-regularized variational model for single image dehazing

    NASA Astrophysics Data System (ADS)

    Shu, Qiao-Ling; Wu, Chuan-Sheng; Zhong, Qiu-Xiang; Liu, Ryan Wen

    2018-04-01

    Imaging quality is often significantly degraded under hazy weather conditions. The purpose of this paper is to recover the latent sharp image from its hazy version. It is well known that accurate estimation of depth information can assist in improving dehazing performance. In this paper, a detail-preserving variational model is proposed to simultaneously estimate the haze-free image and the depth map. In particular, total variation (TV) and total generalized variation (TGV) regularizers are introduced to constrain the haze-free image and the depth map, respectively. The resulting nonsmooth optimization problem is efficiently solved using the alternating direction method of multipliers (ADMM). Comprehensive experiments have been conducted on realistic datasets to compare the proposed method with several state-of-the-art dehazing methods. The results illustrate the superior performance of the proposed method in terms of visual quality.
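    The ADMM machinery the record relies on can be sketched on the simplest TV problem. The toy below assumes 1D TV denoising with a first-difference operator; the paper's coupled TV/TGV model for image and depth map is substantially more involved.

```python
import numpy as np

# Minimal ADMM sketch for  min_x 0.5*||x - b||^2 + lam*||D x||_1 ,
# with the splitting z = D x (scaled dual variable u).
rng = np.random.default_rng(3)
n = 200
x_true = np.concatenate([np.zeros(n // 2), np.ones(n - n // 2)])
b = x_true + 0.1 * rng.standard_normal(n)

D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]       # (n-1) x n difference operator
lam, rho = 0.5, 1.0
x, z, u = b.copy(), D @ b, np.zeros(n - 1)
M = np.eye(n) + rho * D.T @ D                  # x-update system matrix
for _ in range(200):
    x = np.linalg.solve(M, b + rho * D.T @ (z - u))
    w = D @ x + u
    z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)  # soft threshold
    u += D @ x - z                             # dual update

print("largest remaining jump:", np.abs(D @ x).max())
```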

  17. Model's sparse representation based on reduced mixed GMsFE basis methods

    NASA Astrophysics Data System (ADS)

    Jiang, Lijian; Li, Qiuqi

    2017-06-01

    In this paper, we propose a model's sparse representation based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application for the elliptic PDEs is the flow in heterogeneous random porous media. The mixed generalized multiscale finite element method (GMsFEM) is one of the accurate and efficient approaches for solving the flow problem on a coarse grid and obtaining the velocity with local mass conservation. When the inputs of the PDEs are parameterized by random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a large number of degrees of freedom for the mixed GMsFEM and substantially impacts the computational efficiency. In order to overcome the difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed based on the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than the dimension of the original full order model, the online computation still depends on the number of coarse degrees of freedom. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation for the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs. In particular, a two-phase flow model in random porous media is simulated by the proposed sparse representation method.

  18. Bio-Orthogonal Mediated Nucleic Acid Transfection of Cells via Cell Surface Engineering.

    PubMed

    O'Brien, Paul J; Elahipanah, Sina; Rogozhnikov, Dmitry; Yousaf, Muhammad N

    2017-05-24

    The efficient delivery of foreign nucleic acids (transfection) into cells is a critical tool for fundamental biomedical research and a pillar of several biotechnology industries. There are currently three main strategies for transfection: reagent-, instrument-, and viral-based methods. Each technology has significantly advanced cell transfection; however, reagent-based methods have captured the majority of the transfection market due to their relatively low cost and ease of use. This general method relies on the efficient packaging of a reagent with nucleic acids to form a stable complex that is subsequently associated with and delivered to cells via nonspecific electrostatic targeting. Reagent transfection methods generally use various polyamine cationic molecules to condense with negatively charged nucleic acids into a highly positively charged complex, which is subsequently delivered to negatively charged cells in culture for association, internalization, release, and expression. Although this appears to be a straightforward procedure, there are several major issues, including toxicity, low efficiency, sorting of viable transfected from nontransfected cells, and the limited scope of transfectable cell types. Herein, we report a new strategy (SnapFect) for nucleic acid transfection that does not rely on electrostatic interactions but instead uses an integrated approach combining bio-orthogonal liposome fusion, click chemistry, and cell surface engineering. We show that a target cell population is rapidly and efficiently engineered to present a bio-orthogonal functional group on its cell surface through nanoparticle liposome delivery and fusion. A complementary bio-orthogonal nucleic acid complex is then formed and delivered to the primed cell, where chemoselective click chemistry induces transfection. This new strategy requires minimal time, steps, and reagents and leads to superior transfection results for a broad range of cell types. Moreover, the transfection is efficient, with high cell viability, and does not require a post-sorting step to separate transfected from nontransfected cells in the cell population. We also show, for the first time, a precision transfection strategy in which a single cell type in a coculture is selectively transfected via bio-orthogonal click chemistry.

  19. Computation of Reacting Flows in Combustion Processes

    NASA Technical Reports Server (NTRS)

    Keith, Theo G., Jr.; Chen, Kuo-Huey

    1997-01-01

    The main objective of this research was to develop an efficient three-dimensional computer code for chemically reacting flows. The main computer code developed is ALLSPD-3D, a program for the calculation of three-dimensional, chemically reacting flows with sprays. The ALLSPD code employs a coupled, strongly implicit solution procedure for turbulent spray combustion flows. A stochastic droplet model and an efficient method for the treatment of the spray source terms in the gas-phase equations are used to calculate evaporating liquid sprays. The chemistry treatment in the code is general enough that an arbitrary number of reactions and species can be defined by the user. Also, it is written in generalized curvilinear coordinates with both multi-block and flexible internal blockage capabilities to handle complex geometries. In addition, for general industrial combustion applications, the code provides both dilution and transpiration cooling capabilities. The ALLSPD algorithm, which employs preconditioning and eigenvalue rescaling techniques, is capable of providing efficient solutions for flows with a wide range of Mach numbers. Although written for three-dimensional flows in general, the code can be used for two-dimensional and axisymmetric flow computations as well. The code is written in such a way that it can be run on various computer platforms (supercomputers, workstations and parallel processors), and the GUI (Graphical User Interface) provides a user-friendly tool for setting up and running the code.

  1. Development of numerical methods for overset grids with applications for the integrated Space Shuttle vehicle

    NASA Technical Reports Server (NTRS)

    Chan, William M.

    1995-01-01

    Algorithms and computer code developments were performed for the overset grid approach to solving computational fluid dynamics problems. The techniques developed are applicable to compressible Navier-Stokes flow for any general complex configurations. The computer codes developed were tested on different complex configurations with the Space Shuttle launch vehicle configuration as the primary test bed. General, efficient and user-friendly codes were produced for grid generation, flow solution and force and moment computation.

  2. Monte Carlo sampling in diffusive dynamical systems

    NASA Astrophysics Data System (ADS)

    Tapias, Diego; Sanders, David P.; Altmann, Eduardo G.

    2018-05-01

    We introduce a Monte Carlo algorithm to efficiently compute transport properties of chaotic dynamical systems. Our method exploits the importance sampling technique that favors trajectories in the tail of the distribution of displacements, where deviations from a diffusive process are most prominent. We search for initial conditions using a proposal that correlates states in the Markov chain constructed via a Metropolis-Hastings algorithm. We show that our method outperforms the direct sampling method and also Metropolis-Hastings methods with alternative proposals. We test our general method through numerical simulations in 1D (box-map) and 2D (Lorentz gas) systems.
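    The importance-sampling idea can be sketched as follows, assuming the "climbing sine" diffusive map and an exponential bias on the final displacement; the paper's correlated proposal for initial conditions is more carefully constructed than the simple local move used here.

```python
import numpy as np

# Minimal Metropolis-Hastings sketch biasing initial conditions x0
# toward large trajectory displacements |x_T(x0) - x0| of the
# diffusive map x_{n+1} = x_n + p*sin(2*pi*x_n).
rng = np.random.default_rng(4)
p, T, beta = 1.2, 20, 2.0

def displacement(x0):
    x = x0
    for _ in range(T):
        x = x + p * np.sin(2 * np.pi * x)
    return abs(x - x0)

x0, d0 = rng.random(), 0.0
d0 = displacement(x0)
samples = []
for _ in range(20000):
    x1 = x0 + 0.05 * rng.standard_normal()           # local proposal
    d1 = displacement(x1)
    if np.log(rng.random()) < beta * (d1 - d0):      # favor the tail
        x0, d0 = x1, d1
    samples.append(d0)

print("mean displacement under the biased measure:", np.mean(samples))
```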

  3. The Use of Generalized Laguerre Polynomials in Spectral Methods for Solving Fractional Delay Differential Equations.

    PubMed

    Khader, M M

    2013-10-01

    In this paper, an efficient numerical method for solving the fractional delay differential equations (FDDEs) is considered. The fractional derivative is described in the Caputo sense. The proposed method is based on the derived approximate formula of the Laguerre polynomials. The properties of Laguerre polynomials are utilized to reduce FDDEs to a linear or nonlinear system of algebraic equations. Special attention is given to study the error and the convergence analysis of the proposed method. Several numerical examples are provided to confirm that the proposed method is in excellent agreement with the exact solution.

  4. Adaptive radial basis function mesh deformation using data reduction

    NASA Astrophysics Data System (ADS)

    Gillebaart, T.; Blom, D. S.; van Zuijlen, A. H.; Bijl, H.

    2016-09-01

    Radial Basis Function (RBF) mesh deformation is one of the most robust mesh deformation methods available. Using the greedy (data reduction) method in combination with an explicit boundary correction results in an efficient method, as shown in the literature. However, to ensure the method remains robust, two issues are addressed: 1) how to ensure that the set of control points remains an accurate representation of the geometry in time, and 2) how to use/automate the explicit boundary correction while ensuring a high mesh quality. In this paper, we propose an adaptive RBF mesh deformation method which ensures that the set of control points always represents the geometry/displacement up to a certain (user-specified) criterion, by keeping track of the boundary error throughout the simulation and re-selecting when needed. As opposed to the unit displacement and prescribed displacement selection methods, the adaptive method is more robust, user-independent and efficient for the cases considered. Secondly, the analysis of a single high-aspect-ratio cell is used to formulate an equation for the correction radius needed, depending on the characteristics of the correction function used, maximum aspect ratio, minimum first cell height and boundary error. Based on this analysis, two new radial basis correction functions are derived and proposed. The proposed automated procedure is verified while varying the correction function, Reynolds number (and thus first cell height and aspect ratio) and boundary error. Finally, the parallel efficiency is studied for the two adaptive methods, unit displacement and prescribed displacement, for both the CPU and the memory formulations, with a 2D oscillating and translating airfoil with oscillating flap, a 3D flexible locally deforming tube and a deforming wind turbine blade. Generally, the memory formulation requires less work (due to the large amount of work required for evaluating RBFs), but its parallel efficiency is reduced by the limited bandwidth available between CPU and memory. In terms of parallel efficiency/scaling, the different methods studied perform similarly, with the greedy algorithm being the bottleneck. In terms of absolute computational work, the adaptive methods are better for the cases studied due to their more efficient selection of the control points. By automating most of the RBF mesh deformation, a robust, efficient and almost user-independent mesh deformation method is presented.

  5. [Analysis of cost and efficiency of a medical nursing unit using time-driven activity-based costing].

    PubMed

    Lim, Ji Young; Kim, Mi Ja; Park, Chang Gi

    2011-08-01

    Time-driven activity-based costing was applied to analyze the nursing activity cost and efficiency of a medical unit. Data were collected at a medical unit of a general hospital. Nursing activities were measured using a nursing activities inventory and classified into 6 domains using the Easley-Storfjell Instrument. Descriptive statistics were used to identify general characteristics of the unit, nursing activities and activity time, and a stochastic frontier model was adopted to estimate true activity time. The average efficiency of the medical unit using theoretical resource capacity was 77%; however, the efficiency using practical resource capacity was 96%. According to these results, the portion of non-value-added time was estimated at 23% and 4%, respectively. The total nursing activity costs were estimated at 109,860,977 won under traditional activity-based costing and 84,427,126 won under time-driven activity-based costing, a difference of 25,433,851 won between the two costing methods. These results indicate that time-driven activity-based costing provides useful and more realistic information about the efficiency of unit operation compared to traditional activity-based costing, and it is therefore recommended as a performance evaluation framework for nursing departments based on cost management.

  6. The Multidimensional Efficiency of Pension System: Definition and Measurement in Cross-Country Studies.

    PubMed

    Chybalski, Filip

    The existing literature on the efficiency of pension systems usually addresses the choice between different theoretical models, or concerns one or a few empirical pension systems. In this paper a quite different approach to the measurement of pension system efficiency is proposed. It is dedicated mainly to cross-country studies of empirical pension systems; however, it may also be employed in the analysis of a given pension system on the basis of time series. I identify four dimensions of pension system efficiency, referring to: GDP distribution, adequacy of pensions, influence on the labour market, and administrative costs. Consequently, I propose four sets of static and one set of dynamic efficiency indicators. In the empirical part of the paper, I use Spearman's rank correlation coefficient and cluster analysis to verify the proposed method on statistical data covering 28 European countries in the years 2007-2011. I prove that the method works and enables comparisons as well as clustering of the analyzed pension systems. The study also delivers some interesting empirical findings. The main goal of pension systems seems to have become poverty alleviation, since the efficiency of ensuring protection against poverty, as well as the efficiency of reducing poverty, is very resistant to the efficiency of GDP distribution. The opposite situation characterizes the efficiency of consumption smoothing: this is generally sensitive to the efficiency of GDP distribution, and its dynamics are sensitive to the dynamics of GDP-distribution efficiency. The results of the study indicate the Norwegian and the Icelandic pension systems to be the most efficient in the analyzed group.

  7. Development of Composite Materials with High Passive Damping Properties

    DTIC Science & Technology

    2006-05-15

    Sound transmission through sandwich panels was studied using statistical energy analysis (SEA), complemented by frequency response function analysis. Finite element models are generally only efficient for problems at low and middle frequencies.

  8. Selection of regularization parameter in total variation image restoration.

    PubMed

    Liao, Haiyong; Li, Fang; Ng, Michael K

    2009-11-01

    We consider and study total variation (TV) image restoration. In the literature there are several regularization parameter selection methods for Tikhonov regularization problems (e.g., the discrepancy principle and the generalized cross-validation method). However, to our knowledge, these selection methods have not been applied to TV regularization problems. The main aim of this paper is to develop a fast TV image restoration method with automatic selection of the regularization parameter to restore blurred and noisy images. The method exploits the generalized cross-validation (GCV) technique to determine inexpensively how much regularization to use in each restoration step. By updating the regularization parameter in each iteration, the restored image can be obtained. Our experimental results for different kinds of noise show that the visual quality and SNRs of images restored by the proposed method are promising. We also demonstrate that the method is efficient, as it can restore images of size 256 x 256 in approximately 20 s in the MATLAB computing environment.
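    For reference, the GCV criterion itself is easy to evaluate in the Tikhonov/ridge setting via the SVD. The sketch below assumes a small synthetic linear inverse problem; the paper applies GCV inside a TV restoration iteration, which this toy does not reproduce.

```python
import numpy as np

# Minimal GCV sketch:
#   GCV(lam) = n * ||(I - A_lam) b||^2 / trace(I - A_lam)^2,
# where A_lam = A (A^T A + lam I)^{-1} A^T is the influence matrix.
rng = np.random.default_rng(5)
n, k = 80, 40
A = rng.standard_normal((n, k))
b = A @ rng.standard_normal(k) + 0.5 * rng.standard_normal(n)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
beta = U.T @ b
def gcv(lam):
    f = s**2 / (s**2 + lam)                       # Tikhonov filter factors
    resid = np.sum(((1 - f) * beta) ** 2) + (b @ b - beta @ beta)
    return n * resid / (n - np.sum(f)) ** 2

lams = np.logspace(-4, 2, 200)
best = lams[np.argmin([gcv(l) for l in lams])]
print("GCV-selected lambda:", best)
```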

  9. αAMG based on Weighted Matching for Systems of Elliptic PDEs Arising From Displacement and Mixed Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D'Ambra, P.; Vassilevski, P. S.

    2014-05-30

    Adaptive Algebraic Multigrid (or Multilevel) Methods (αAMG) are introduced to improve the robustness and efficiency of classical algebraic multigrid methods in dealing with problems where no a-priori knowledge of, or assumptions on, the near-null kernel of the underlying matrix are available. Recently we proposed an adaptive (bootstrap) AMG method, αAMG, aimed at obtaining a composite solver with a desired convergence rate. Each new multigrid component relies on a current (general) smooth vector and exploits pairwise aggregation based on weighted matching in a matrix graph to define a new automatic, general-purpose coarsening process, which we refer to as “the compatible weighted matching”. In this work, we present results that broaden the applicability of our method to different finite element discretizations of elliptic PDEs. In particular, we consider systems arising from displacement methods in linear elasticity problems and saddle-point systems that appear in the application of the mixed method to Darcy problems.

  10. A systematic and efficient method for modeling acoustic response of multilayered media

    NASA Astrophysics Data System (ADS)

    Feng, Shi-Jin; Chen, Zhang-Long; Chen, Hong-Xin

    2017-12-01

    A generalized transmission and reflection matrix (GTRM) method for determining the acoustic response of multilayered media is developed in this paper. The principle of the method is to determine the wave vectors by constructing generalized T/R matrices and recursive formulas. The generalized T/R matrices are introduced to account for the contributions of multiple reflection and transmission in a global manner. Three types of interface T/R matrices are developed to accommodate the coupling between any type of physical medium. The method is not only stable for high-frequency and large-thickness cases, by excluding the growing terms, but also explicitly exhibits the physical mechanism of wave propagation. Acoustic responses in both the frequency and time domains can be obtained with the method. Moreover, the method has a simple framework and is easy to implement in numerical tools. The method is successfully validated on several typical acoustic problems, including a poroelastic medium immersed in fluid, aluminum-glass-aluminum immersed in fluid, and a poroelastic sediment sandwiched between water and bedrock. In fact, the present method can be extended to any physical problem involving a multilayered structure. When the wave frequency is high, the layered medium is very thick, or the material combination is highly complex, the advantage of the GTRM method is more conspicuous.

  11. Improving real-time efficiency of case-based reasoning for medical diagnosis.

    PubMed

    Park, Yoon-Joo

    2014-01-01

    Conventional case-based reasoning (CBR) does not perform efficiently on high-volume datasets because of case-retrieval time. Some previous studies overcome this problem by clustering a case base into several small groups and retrieving neighbors within the group corresponding to a target case. However, this approach generally produces less accurate predictive performance than conventional CBR. This paper suggests a new case-based reasoning method, called Clustering-Merging CBR (CM-CBR), which produces a similar level of predictive performance to conventional CBR while incurring significantly lower computational cost.
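    The cluster-then-retrieve idea described above can be sketched with off-the-shelf tools. The toy below assumes k-means partitioning and neighbor search within the target's cluster only; the CM-CBR cluster-merging step itself is not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

# Minimal cluster-based case retrieval: restrict the neighbor search
# to the target case's cluster, trading a little accuracy for a large
# reduction in retrieval time on big case bases.
rng = np.random.default_rng(6)
cases = rng.standard_normal((50_000, 8))      # case-base feature vectors
target = rng.standard_normal(8)

km = KMeans(n_clusters=50, n_init=4, random_state=0).fit(cases)
cid = km.predict(target[None, :])[0]          # cluster of the target case
members = np.where(km.labels_ == cid)[0]

nn = NearestNeighbors(n_neighbors=5).fit(cases[members])
dist, idx = nn.kneighbors(target[None, :])
print("retrieved cases:", members[idx[0]])
```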

  12. Toward an optimal solver for time-spectral fluid-dynamic and aeroelastic solutions on unstructured meshes

    NASA Astrophysics Data System (ADS)

    Mundis, Nathan L.; Mavriplis, Dimitri J.

    2017-09-01

    The time-spectral method applied to the Euler and coupled aeroelastic equations theoretically offers significant computational savings for purely periodic problems when compared to standard time-implicit methods. However, attaining superior efficiency with time-spectral methods over traditional time-implicit methods hinges on the ability rapidly to solve the large non-linear system resulting from time-spectral discretizations which become larger and stiffer as more time instances are employed or the period of the flow becomes especially short (i.e. the maximum resolvable wave-number increases). In order to increase the efficiency of these solvers, and to improve robustness, particularly for large numbers of time instances, the Generalized Minimal Residual Method (GMRES) is used to solve the implicit linear system over all coupled time instances. The use of GMRES as the linear solver makes time-spectral methods more robust, allows them to be applied to a far greater subset of time-accurate problems, including those with a broad range of harmonic content, and vastly improves the efficiency of time-spectral methods. In previous work, a wave-number independent preconditioner that mitigates the increased stiffness of the time-spectral method when applied to problems with large resolvable wave numbers has been developed. This preconditioner, however, directly inverts a large matrix whose size increases in proportion to the number of time instances. As a result, the computational time of this method scales as the cube of the number of time instances. In the present work, this preconditioner has been reworked to take advantage of an approximate-factorization approach that effectively decouples the spatial and temporal systems. Once decoupled, the time-spectral matrix can be inverted in frequency space, where it has entries only on the main diagonal and therefore can be inverted quite efficiently. This new GMRES/preconditioner combination is shown to be over an order of magnitude more efficient than the previous wave-number independent preconditioner for problems with large numbers of time instances and/or large reduced frequencies.
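    The GMRES-plus-preconditioner pattern the record describes can be illustrated with SciPy. The sketch below assumes a generic sparse system and uses an exact LU factorization as a stand-in preconditioner; the paper's approximate-factorization preconditioner, which decouples space and frequency, is not reproduced here.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, gmres, splu

# Minimal preconditioned-GMRES sketch on a 1D Laplacian-like system.
n = 500
A = diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

lu = splu(A)                                  # stand-in for an approximate factorization
M = LinearOperator((n, n), matvec=lu.solve)   # preconditioner as a linear operator

x, info = gmres(A, b, M=M)
print("converged:", info == 0,
      "residual:", np.linalg.norm(A @ x - b))
```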

  13. Comparison of non-toxic methods for creating beta-carotene encapsulated in PMMA nanoparticles

    NASA Astrophysics Data System (ADS)

    Dobrzanski, Christopher D.

    Nano/microcapsules are becoming more prevalent in various industries such as drug delivery, cosmetics, etc. Current methods of particle formation often use toxic or carcinogenic/mutagenic/reprotoxic (CMR) chemicals. This study intends to improve upon existing methods of particle formation and compare their effectiveness in terms of entrapment efficiency, mean particle size, and yield utilizing only non-toxic chemicals. In this study, the solvent evaporation (SE), spontaneous emulsification, and spontaneous emulsion solvent diffusion (SESD) methods were compared in systems containing green solvents ethyl acetate, dimethyl carbonate or acetone. PMMA particles containing encapsulated beta carotene, an ultraviolet sensitive substance, were synthesized. It was desired to produce particles with minimum mean size and maximum yield and entrapment of beta carotene. The mass of the water phase, the mass of the polymer and the pumping or blending rate were varied for each synthesis method. The smallest particle sizes for SE and SESD both were obtained from the middle water phase sizes, 200 g and 100 g respectively. The particles obtained from the larger water phase in SESD were much bigger, about 5 microns in diameter, even larger than the ones obtained from SE. When varying the mass of PMMA used in each synthesis method, as expected, more PMMA led to larger particles. Increasing the blending rate in SE from 6,500 to 13,500 rpm had a minimal effect on average particle size, but the higher shear resulted in highly polydisperse particles (PDI = 0.87). By decreasing the pump rate in SESD, particles became smaller and had lower entrapment efficiency. The entrapment efficiencies of the particles were generally higher for the larger particles within a mode. Therefore, we found that minimizing the particle size while maximizing entrapment were somewhat contradictory goals. The solvent evaporation method was very consistent in terms of the values of mean particle size, yield, and entrapment efficiency. Comparing the synthesis methods, the smallest particles with the highest yield and entrapment efficiency were generated by the spontaneous emulsification method.

  14. Generalized analog thresholding for spike acquisition at ultralow sampling rates

    PubMed Central

    He, Bryan D.; Wein, Alex; Varshney, Lav R.; Kusuma, Julius; Richardson, Andrew G.

    2015-01-01

    Efficient spike acquisition techniques are needed to bridge the divide from creating large multielectrode arrays (MEA) to achieving whole-cortex electrophysiology. In this paper, we introduce generalized analog thresholding (gAT), which achieves millisecond temporal resolution with sampling rates as low as 10 Hz. Consider the torrent of data from a single 1,000-channel MEA, which would generate more than 3 GB/min using standard 30-kHz Nyquist sampling. Recent neural signal processing methods based on compressive sensing still require Nyquist sampling as a first step and use iterative methods to reconstruct spikes. Analog thresholding (AT) remains the best existing alternative, where spike waveforms are passed through an analog comparator and sampled at 1 kHz, with instant spike reconstruction. By generalizing AT, the new method reduces sampling rates another order of magnitude, detects more than one spike per interval, and reconstructs spike width. Unlike compressive sensing, the new method reveals a simple closed-form solution to achieve instant (noniterative) spike reconstruction. The base method is already robust to hardware nonidealities, including realistic quantization error and integration noise. Because it achieves these considerable specifications using hardware-friendly components like integrators and comparators, generalized AT could translate large-scale MEAs into implantable devices for scientific investigation and medical technology. PMID:25904712

  15. Designing scalable product families by the radial basis function-high-dimensional model representation metamodelling technique

    NASA Astrophysics Data System (ADS)

    Pirmoradi, Zhila; Haji Hajikolaei, Kambiz; Wang, G. Gary

    2015-10-01

    Product family design is cost-efficient for achieving the best trade-off between commonalization and diversification. However, for computationally intensive design functions which are viewed as black boxes, the family design would be challenging. A two-stage platform configuration method with generalized commonality is proposed for a scale-based family with unknown platform configuration. Unconventional sensitivity analysis and information on variation in the individual variants' optimal design are used for platform configuration design. Metamodelling is employed to provide the sensitivity and variable correlation information, leading to significant savings in function calls. A family of universal electric motors is designed for product performance and the efficiency of this method is studied. The impact of the employed parameters is also analysed. Then, the proposed method is modified for obtaining higher commonality. The proposed method is shown to yield design solutions with better objective function values, allowable performance loss and higher commonality than the previously developed methods in the literature.

  16. Adaptive θ-methods for pricing American options

    NASA Astrophysics Data System (ADS)

    Khaliq, Abdul Q. M.; Voss, David A.; Kazmi, Kamran

    2008-12-01

    We develop adaptive θ-methods for solving the Black-Scholes PDE for American options. By adding a small, continuous term, the Black-Scholes PDE becomes an advection-diffusion-reaction equation on a fixed spatial domain. Standard implementation of θ-methods would require a Newton-type iterative procedure at each time step, thereby increasing the computational complexity of the methods. Our linearly implicit approach avoids such complications. We establish a general framework under which θ-methods satisfy a discrete version of the positivity constraint characteristic of American options, and numerically demonstrate the sensitivity of the constraint. The positivity results are established for the single-asset and independent two-asset models. In addition, we have incorporated and analyzed an adaptive time-step control strategy to increase the computational efficiency. Numerical experiments are presented for one- and two-asset American options, using adaptive exponential splitting for two-asset problems. The approach is compared with an iterative solution of the two-asset problem in terms of computational efficiency.
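    A plain (non-adaptive) θ-method step for an American put can be sketched as one linear solve per time step. The toy below replaces the paper's small continuous penalty term with an explicit projection onto the payoff, and omits adaptivity, so it is only a rough illustration of the scheme's structure.

```python
import numpy as np

# Minimal theta-method sketch for an American put under Black-Scholes:
#   (I - theta*dt*L) u^{n+1} = (I + (1-theta)*dt*L) u^n + dt*bc .
sigma, r, K, T = 0.3, 0.05, 100.0, 1.0
theta, n, m = 0.5, 400, 400                   # theta=0.5: Crank-Nicolson
S = np.linspace(0.0, 4 * K, n + 2)[1:-1]      # interior grid points
h, dt = S[1] - S[0], T / m

Lop = np.zeros((n, n))                        # central-difference operator
for i, s in enumerate(S):
    a, c = 0.5 * sigma**2 * s**2 / h**2, 0.5 * r * s / h
    if i > 0:
        Lop[i, i - 1] = a - c
    Lop[i, i] = -2 * a - r
    if i < n - 1:
        Lop[i, i + 1] = a + c

payoff = np.maximum(K - S, 0.0)
bc = np.zeros(n)
bc[0] = (0.5 * sigma**2 * S[0]**2 / h**2 - 0.5 * r * S[0] / h) * K  # u(0)=K
u = payoff.copy()
Aimp = np.eye(n) - theta * dt * Lop
Aexp = np.eye(n) + (1 - theta) * dt * Lop
for _ in range(m):
    u = np.linalg.solve(Aimp, Aexp @ u + dt * bc)  # one solve per step
    u = np.maximum(u, payoff)                      # early-exercise constraint

print("American put price at S=K:", np.interp(K, S, u))
```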

  17. Comparison of three newton-like nonlinear least-squares methods for estimating parameters of ground-water flow models

    USGS Publications Warehouse

    Cooley, R.L.; Hill, M.C.

    1992-01-01

    Three methods of solving nonlinear least-squares problems were compared for robustness and efficiency using a series of hypothetical and field problems. A modified Gauss-Newton/full Newton hybrid method (MGN/FN) and an analogous method for which part of the Hessian matrix was replaced by a quasi-Newton approximation (MGN/QN) solved some of the problems with appreciably fewer iterations than required using only a modified Gauss-Newton (MGN) method. In these problems, model nonlinearity and a large variance for the observed data apparently caused MGN to converge more slowly than MGN/FN or MGN/QN after the sum of squared errors had almost stabilized. Other problems were solved as efficiently with MGN as with MGN/FN or MGN/QN. Because MGN/FN can require significantly more computer time per iteration and more computer storage for transient problems, it is less attractive for a general purpose algorithm than MGN/QN.
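    The Gauss-Newton core shared by all three compared methods reduces to repeatedly solving the linearized normal equations. The sketch below assumes a toy exponential-decay model rather than a ground-water flow model.

```python
import numpy as np

# Minimal Gauss-Newton sketch: each iteration solves
#   J^T J dp = -J^T r   for the parameter update dp.
rng = np.random.default_rng(7)
t = np.linspace(0, 4, 30)
p_true = np.array([2.0, 1.3])
y = p_true[0] * np.exp(-p_true[1] * t) + 0.02 * rng.standard_normal(t.size)

p = np.array([1.0, 0.5])                      # initial guess
for _ in range(20):
    model = p[0] * np.exp(-p[1] * t)
    r = model - y                             # residuals
    J = np.column_stack([np.exp(-p[1] * t),               # d model / d p0
                         -p[0] * t * np.exp(-p[1] * t)])  # d model / d p1
    dp = np.linalg.solve(J.T @ J, -J.T @ r)
    p += dp
    if np.linalg.norm(dp) < 1e-10:
        break

print("estimated parameters:", p)
```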

  18. Work Measurement as a Generalized Quantum Measurement

    NASA Astrophysics Data System (ADS)

    Roncaglia, Augusto J.; Cerisola, Federico; Paz, Juan Pablo

    2014-12-01

    We present a new method to measure the work w performed on a driven quantum system and to sample its probability distribution P(w). The method is based on a simple fact that remained unnoticed until now: work on a quantum system can be measured by performing a generalized quantum measurement at a single time. Such a measurement, technically known as a positive operator-valued measure, reduces to an ordinary projective measurement on an enlarged system. This observation not only demystifies work measurement but also suggests a new quantum algorithm to efficiently sample the distribution P(w). This can be used, in combination with fluctuation theorems, to estimate free energies of quantum states on a quantum computer.
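    The distribution P(w) being sampled can be written down explicitly for small systems. The sketch below computes it for a driven qubit using the conventional two-point projective measurement scheme, not the single-time generalized measurement that is the paper's actual contribution; the drive unitary is an arbitrary stand-in.

```python
import numpy as np
from scipy.linalg import expm

# Minimal two-point-measurement sketch of the work distribution:
#   P(w = E1[j] - E0[i]) = p0[i] * |<j|U|i>|^2 .
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H0, H1 = sz.copy(), sz + 0.8 * sx        # initial and final Hamiltonians
U = expm(-1j * H1)                       # crude stand-in for the drive

beta = 1.0
E0, V0 = np.linalg.eigh(H0)
E1, V1 = np.linalg.eigh(H1)
p0 = np.exp(-beta * E0); p0 /= p0.sum()  # initial thermal populations

for i in range(2):
    for j in range(2):
        prob = p0[i] * abs(V1[:, j].conj() @ U @ V0[:, i]) ** 2
        print(f"w = {E1[j] - E0[i]:+.3f}  P = {prob:.3f}")
```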

  19. Large space structure damping design

    NASA Technical Reports Server (NTRS)

    Pilkey, W. D.; Haviland, J. K.

    1983-01-01

    Several FORTRAN subroutines and programs were developed which compute complex eigenvalues of a damped system using different approaches, and which rescale mode shapes to unit generalized mass and make rigid-body modes orthogonal to each other. An analytical proof of a Minimum Constrained Frequency Criterion (MCFC) for a single damper is presented. A method to minimize the effect of control spillover for large space structures is proposed. The characteristic equation of an undamped system with a generalized control law is derived using reanalysis theory. This equation can be implemented in computer programs for efficient eigenvalue analysis or control quasi-synthesis. Methods to control vibrations in large space structures are reviewed and analyzed. The resulting prototype, using an electromagnetic actuator, is described.

  20. Alkali-templated surface nanopatterning of chalcogenide thin films: a novel approach toward solar cells with enhanced efficiency.

    PubMed

    Reinhard, Patrick; Bissig, Benjamin; Pianezzi, Fabian; Hagendorfer, Harald; Sozzi, Giovanna; Menozzi, Roberto; Gretener, Christina; Nishiwaki, Shiro; Buecheler, Stephan; Tiwari, Ayodhya N

    2015-05-13

    Concepts of localized contacts and junctions through surface passivation layers are already advantageously applied in Si wafer-based photovoltaic technologies. For Cu(In,Ga)Se2 thin film solar cells, such concepts are generally not applied, especially at the heterojunction, because of the lack of a simple method yielding features with the required size and distribution. Here, we show a novel surface nanopatterning approach to form homogeneously distributed nanostructures (<30 nm) on the faceted, rough surface of polycrystalline chalcogenide thin films. The method, based on the selective dissolution in water of self-assembled and well-defined alkali condensates, opens up new research opportunities toward the development of thin film solar cells with enhanced efficiency.

  1. Calibration of 4π NaI(Tl) detectors with coincidence summing correction using new numerical procedure and ANGLE4 software

    NASA Astrophysics Data System (ADS)

    Badawi, Mohamed S.; Jovanovic, Slobodan I.; Thabet, Abouzeid A.; El-Khatib, Ahmed M.; Dlabac, Aleksandar D.; Salem, Bohaysa A.; Gouda, Mona M.; Mihaljevic, Nikola N.; Almugren, Kholud S.; Abbas, Mahmoud I.

    2017-03-01

    4π NaI(Tl) γ-ray detectors consist of a well cavity with a cylindrical cross section and an enclosing measurement geometry with a large detection angle. This leads to an exceptionally high efficiency level and a significant coincidence summing effect, much larger than for a single cylindrical or coaxial detector, especially in very-low-activity measurements. In the present work, the detection effective solid angle, in addition to both the full-energy peak and total efficiencies of well-type detectors, was calculated mainly by the new numerical simulation method (NSM) and by the ANGLE4 software. To obtain the coincidence summing correction factors through these methods, the coincident emission of photons was modeled mathematically, based on analytical equations and complex integrations over the radioactive volumetric sources, including the self-attenuation factor. The full-energy peak efficiencies and correction factors were measured using 152Eu; an exact adjustment of the detector efficiency curve is required here, because neglecting the coincidence summing effect renders the results inconsistent. These phenomena arise jointly from the efficiency calibration process and the coincidence summing corrections. The full-energy peak and total efficiencies from the two methods typically agree to within a discrepancy of 10%. The discrepancy between the simulated (NSM and ANGLE4) and measured full-energy peak efficiencies after correction for the coincidence summing effect did not exceed 14% on average. Therefore, this technique can readily be applied in establishing the efficiency calibration curves of well-type detectors.

  2. Eco-efficiency improvements in industrial water-service systems: assessing options with stakeholders.

    PubMed

    Levidow, Les; Lindgaard-Jørgensen, Palle; Nilsson, Asa; Skenhall, Sara Alongi; Assimacopoulos, Dionysis

    2014-01-01

    The well-known eco-efficiency concept helps to assess the economic value and resource burdens of potential improvements by comparison with the baseline situation. But eco-efficiency assessments have generally focused on a specific site, while neglecting wider effects, for example, through interactions between water users and wastewater treatment (WWT) providers. To address the methodological gap, the EcoWater project has developed a method and online tools for meso-level analysis of the entire water-service value chain. This study investigated improvement options in two large manufacturing companies which have significant potential for eco-efficiency gains. They have been considering investment in extra processes which can lower resource burdens from inputs and wastewater, as well as internalising WWT processes. In developing its methodology, the EcoWater project obtained the necessary information from many agents, involved them in the meso-level assessment and facilitated their discussion on alternative options. Prior discussions with stakeholders stimulated their attendance at a workshop to discuss a comparative eco-efficiency assessment for whole-system improvement. Stakeholders expressed interest in jointly extending the EcoWater method to more options and in discussing investment strategies. In such ways, optimal solutions will depend on stakeholders overcoming fragmentation by sharing responsibility and knowledge.

  3. Efficient least angle regression for identification of linear-in-the-parameters models

    PubMed Central

    Beach, Thomas H.; Rezgui, Yacine

    2017-01-01

    Least angle regression, as a promising model selection method, differentiates itself from conventional stepwise and stagewise methods in that it is neither too greedy nor too slow. It is closely related to L1-norm optimization, which has the advantage of low prediction variance, sacrificing part of the model's bias in order to enhance its generalization capability. In this paper, we propose an efficient least angle regression algorithm for model selection for a large class of linear-in-the-parameters models, with the purpose of accelerating the model selection process. The entire algorithm works completely in a recursive manner, where the correlations between model terms and residuals, the evolving directions and other pertinent variables are derived explicitly and updated successively at every subset selection step. The model coefficients are only computed when the algorithm finishes; direct matrix inversion is thereby avoided. A detailed computational complexity analysis indicates that the proposed algorithm possesses significant computational efficiency, compared with the original approach in which the well-known efficient Cholesky decomposition is involved in solving least angle regression. Three artificial and real-world examples are employed to demonstrate the effectiveness, efficiency and numerical stability of the proposed algorithm. PMID:28293140
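    For orientation, least angle regression is available off the shelf. The sketch below uses scikit-learn's Lars estimator on synthetic data; the paper derives its own recursive implementation rather than using this library, so this is only a usage illustration of the underlying selection method.

```python
import numpy as np
from sklearn.linear_model import Lars

# Minimal LARS usage sketch: select a small subset of model terms.
rng = np.random.default_rng(8)
n, k = 200, 50
X = rng.standard_normal((n, k))
coef = np.zeros(k); coef[:4] = [3.0, -2.0, 1.5, 0.8]
y = X @ coef + 0.1 * rng.standard_normal(n)

model = Lars(n_nonzero_coefs=4).fit(X, y)
print("selected terms:", np.nonzero(model.coef_)[0])
print("coefficients:  ", model.coef_[model.coef_ != 0])
```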

  4. Determination of Thermoelectric Module Efficiency A Survey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Hsin; McCarty, Robin; Salvador, James R.

    2014-01-01

    The development of thermoelectrics (TE) for energy conversion is in the transition phase from laboratory research to device development. There is an increasing demand to accurately determine module efficiency, especially for the power generation mode. For many thermoelectrics, the figure of merit, ZT, of the material sometimes cannot be fully realized at the device level. Reliable efficiency testing of thermoelectric modules is important to assess the device ZT and provide end-users with realistic values of how much power can be generated under specific conditions. We conducted a general survey of efficiency testing devices and their performance. The results indicated a lack of industry standards and test procedures. This study included a commercial test system and several laboratory systems. Most systems are based on the heat flow meter method and some are based on the Harman method. They are usually reproducible in evaluating thermoelectric modules. However, cross-checking among different systems often showed large errors that are likely caused by unaccounted heat loss and thermal resistance. Efficiency testing is an important area for the thermoelectric community to focus on. A follow-up international standardization effort is planned.

  5. Short ensembles: An Efficient Method for Discerning Climate-relevant Sensitivities in Atmospheric General Circulation Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wan, Hui; Rasch, Philip J.; Zhang, Kai

    2014-09-08

    This paper explores the feasibility of an experimentation strategy for investigating sensitivities in fast components of atmospheric general circulation models. The basic idea is to replace the traditional serial-in-time long-term climate integrations by representative ensembles of shorter simulations. The key advantage of the proposed method lies in its efficiency: since fewer days of simulation are needed, the computational cost is lower, and because individual realizations are independent and can be integrated simultaneously, the new dimension of parallelism can dramatically reduce the turnaround time in benchmark tests, sensitivity studies, and model tuning exercises. The strategy is not appropriate for exploring sensitivity of all model features, but it is very effective in many situations. Two examples are presented using the Community Atmosphere Model version 5. The first example demonstrates that the method is capable of characterizing the model cloud and precipitation sensitivity to time step length. A nudging technique is also applied to an additional set of simulations to help understand the contribution of physics-dynamics interaction to the detected time step sensitivity. In the second example, multiple empirical parameters related to cloud microphysics and aerosol lifecycle are perturbed simultaneously in order to explore which parameters have the largest impact on the simulated global mean top-of-atmosphere radiation balance. Results show that in both examples, short ensembles are able to correctly reproduce the main signals of model sensitivities revealed by traditional long-term climate simulations for fast processes in the climate system. The efficiency of the ensemble method makes it particularly useful for the development of high-resolution, costly and complex climate models.
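
    Because ensemble members are independent, the statistics of a detected sensitivity follow directly from standard error propagation. A sketch of how a time-step sensitivity might be estimated from two short ensembles (all numbers are fabricated placeholders, not model output):

    ```python
    import numpy as np

    # Hypothetical global-mean diagnostic from N independent short runs
    # for a control and a perturbed time-step configuration (W m-2).
    rng = np.random.default_rng(1)
    control = rng.normal(loc=-21.0, scale=0.8, size=60)
    perturbed = rng.normal(loc=-20.4, scale=0.8, size=60)

    diff = perturbed.mean() - control.mean()
    # Independent members -> standard error of the difference of means
    se = np.sqrt(control.var(ddof=1) / 60 + perturbed.var(ddof=1) / 60)
    print(f"sensitivity: {diff:.2f} +/- {2 * se:.2f} W m-2 (~95% CI)")
    ```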

  6. Vector critical points and generalized quasi-efficient solutions in nonsmooth multi-objective programming.

    PubMed

    Wang, Zhen; Li, Ru; Yu, Guolin

    2017-01-01

    In this work, several extended approximately invex vector-valued functions of higher order involving a generalized Jacobian are introduced, and some examples are presented to illustrate their existence. The notions of higher-order (weak) quasi-efficiency with respect to a function are proposed for multi-objective programming problems. Under the introduced higher-order approximate invexity assumptions, we prove that the solutions of generalized vector variational-like inequalities in terms of the generalized Jacobian are generalized quasi-efficient solutions of nonsmooth multi-objective programming problems. Moreover, equivalent conditions are presented under which a vector critical point is a weakly quasi-efficient solution of higher order with respect to a function.

  7. Toyota Prius HEV neurocontrol and diagnostics.

    PubMed

    Prokhorov, Danil V

    2008-01-01

    A neural network controller for improved fuel efficiency of the Toyota Prius hybrid electric vehicle is proposed. A new method to detect and mitigate a battery fault is also presented. The approach is based on recurrent neural networks and includes the extended Kalman filter. The proposed approach is quite general and applicable to other control systems.

  8. Visual versus mechanised leucocyte differential counts: costing and evaluation of traditional and Hemalog D methods.

    PubMed Central

    Hudson, M J; Green, A E

    1980-01-01

    Visual differential counts were examined for efficiency, cost effectiveness, and staff acceptability within our laboratory. A comparison with the Hemalog D system was attempted. The advantages and disadvantages of each system are enumerated and discussed in the context of a large general hospital. PMID:7440760

  9. System Concept in Education. Professional Paper No. 20-74.

    ERIC Educational Resources Information Center

    Smith, Robert G., Jr.

    In its most general sense, a system is a group of components integrated to accomplish a purpose. The heart of an educational system is the instructional system. An instructional system is an integrated set of media, equipment, methods, and personnel performing efficiently those functions required to accomplish one or more learning objectives. An…

  10. The Qtracer2 Program for Tracer-Breakthrough Curve Analysis for Tracer Tests in Karstic Aquifers and Other Hydrologic Systems (2002)

    EPA Science Inventory

    Tracer testing is generally regarded as the most reliable and efficient method of gathering surface and subsurface hydraulic information. This is especially true for karstic and fractured-rock aquifers. Qualitative tracing tests have been conventionally employed in most karst s...

  11. New hybrid conjugate gradient methods with the generalized Wolfe line search.

    PubMed

    Xu, Xiao; Kong, Fan-Yu

    2016-01-01

    The conjugate gradient method is an efficient technique for solving unconstrained optimization problems. In this paper, we form a linear combination, with parameter β_k, of the DY method and the HS method, and put forward a hybrid of DY and HS. We also propose a hybrid of FR and PRP by the same means. Additionally, to support the two hybrid methods, we generalize the Wolfe line search to compute the step size α_k for each. With the new Wolfe line search, the two hybrid methods possess the descent property, and their global convergence can also be proved.
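
    The structure of such a hybrid is easy to see in code: compute both the DY and HS betas, blend them, and take a Wolfe-condition step. A generic sketch with a fixed blending weight and SciPy's strong-Wolfe line search (the blending rule and all parameters are illustrative; the paper's generalized Wolfe rule is not reproduced here):

    ```python
    import numpy as np
    from scipy.optimize import line_search, rosen, rosen_der

    def hybrid_cg(f, grad, x0, lam=0.5, tol=1e-6, max_iter=500):
        """Nonlinear CG with beta = lam*beta_DY + (1-lam)*beta_HS."""
        x = np.asarray(x0, dtype=float)
        g = grad(x)
        d = -g
        for _ in range(max_iter):
            if np.linalg.norm(g) < tol:
                break
            alpha = line_search(f, grad, x, d)[0]  # strong Wolfe step
            if alpha is None:          # line search failed: restart
                d, alpha = -g, 1e-4
            x_new = x + alpha * d
            g_new = grad(x_new)
            y = g_new - g
            dy = d @ y + 1e-16         # guard against division by zero
            beta_dy = (g_new @ g_new) / dy   # Dai-Yuan
            beta_hs = (g_new @ y) / dy       # Hestenes-Stiefel
            d = -g_new + (lam * beta_dy + (1 - lam) * beta_hs) * d
            x, g = x_new, g_new
        return x

    print(hybrid_cg(rosen, rosen_der, np.zeros(5)))  # converges to all ones
    ```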

  12. Efficient and robust model-to-image alignment using 3D scale-invariant features.

    PubMed

    Toews, Matthew; Wells, William M

    2013-04-01

    This paper presents feature-based alignment (FBA), a general method for efficient and robust model-to-image alignment. Volumetric images, e.g. CT scans of the human body, are modeled probabilistically as a collage of 3D scale-invariant image features within a normalized reference space. Features are incorporated as a latent random variable and marginalized out in computing a maximum a posteriori alignment solution. The model is learned from features extracted in pre-aligned training images, then fit to features extracted from a new image to identify a globally optimal locally linear alignment solution. Novel techniques are presented for determining local feature orientation and efficiently encoding feature intensity in 3D. Experiments involving difficult magnetic resonance (MR) images of the human brain demonstrate FBA achieves alignment accuracy similar to widely-used registration methods, while requiring a fraction of the memory and computation resources and offering a more robust, globally optimal solution. Experiments on CT human body scans demonstrate FBA as an effective system for automatic human body alignment where other alignment methods break down.

  13. Family-Based Rare Variant Association Analysis: A Fast and Efficient Method of Multivariate Phenotype Association Analysis.

    PubMed

    Wang, Longfei; Lee, Sungyoung; Gim, Jungsoo; Qiao, Dandi; Cho, Michael; Elston, Robert C; Silverman, Edwin K; Won, Sungho

    2016-09-01

    Family-based designs have been repeatedly shown to be powerful in detecting the significant rare variants associated with human diseases. Furthermore, human diseases are often defined by the outcomes of multiple phenotypes, and thus we expect multivariate family-based analyses may be very efficient in detecting associations with rare variants. However, few statistical methods implementing this strategy have been developed for family-based designs. In this report, we describe one such implementation: the multivariate family-based rare variant association tool (mFARVAT). mFARVAT is a quasi-likelihood-based score test for rare variant association analysis with multiple phenotypes, and tests both homogeneous and heterogeneous effects of each variant on multiple phenotypes. Simulation results show that the proposed method is generally robust and efficient for various disease models, and we identify some promising candidate genes associated with chronic obstructive pulmonary disease. The software of mFARVAT is freely available at http://healthstat.snu.ac.kr/software/mfarvat/, implemented in C++ and supported on Linux and MS Windows.

  14. Efficient and Robust Model-to-Image Alignment using 3D Scale-Invariant Features

    PubMed Central

    Toews, Matthew; Wells, William M.

    2013-01-01

    This paper presents feature-based alignment (FBA), a general method for efficient and robust model-to-image alignment. Volumetric images, e.g. CT scans of the human body, are modeled probabilistically as a collage of 3D scale-invariant image features within a normalized reference space. Features are incorporated as a latent random variable and marginalized out in computing a maximum a-posteriori alignment solution. The model is learned from features extracted in pre-aligned training images, then fit to features extracted from a new image to identify a globally optimal locally linear alignment solution. Novel techniques are presented for determining local feature orientation and efficiently encoding feature intensity in 3D. Experiments involving difficult magnetic resonance (MR) images of the human brain demonstrate FBA achieves alignment accuracy similar to widely-used registration methods, while requiring a fraction of the memory and computation resources and offering a more robust, globally optimal solution. Experiments on CT human body scans demonstrate FBA as an effective system for automatic human body alignment where other alignment methods break down. PMID:23265799

  15. Solving search problems by strongly simulating quantum circuits

    PubMed Central

    Johnson, T. H.; Biamonte, J. D.; Clark, S. R.; Jaksch, D.

    2013-01-01

    Simulating quantum circuits using classical computers lets us analyse the inner workings of quantum algorithms. The most complete type of simulation, strong simulation, is believed to be generally inefficient. Nevertheless, several efficient strong simulation techniques are known for restricted families of quantum circuits and we develop an additional technique in this article. Further, we show that strong simulation algorithms perform another fundamental task: solving search problems. Efficient strong simulation techniques allow solutions to a class of search problems to be counted and found efficiently. This enhances the utility of strong simulation methods, known or yet to be discovered, and extends the class of search problems known to be efficiently simulable. Relating strong simulation to search problems also bounds the computational power of efficiently strongly simulable circuits; if they could solve all problems in P this would imply that all problems in NP and #P could be solved in polynomial time. PMID:23390585

  16. US general aviation: The ingredients for a renaissance. A vision and technology strategy for US industry, NASA, FAA, universities

    NASA Technical Reports Server (NTRS)

    Holmes, Bruce

    1993-01-01

    General aviation today is a vital component in the nation's air transportation system. It is threatened for survival but has enormous potential for expansion in utility and use. This potential for expansion is fueled by new satellite navigation and communication systems, small computers, flat panel displays, and advanced aerodynamics, materials and manufacturing methods, and propulsion technologies which create opportunities for new levels of environmental and economic acceptability. Expanded general aviation utility and use could have a large impact on the nation's jobs, commerce, industry, airspace capacity, trade balance, and quality of life. This paper presents, in viewgraph form, a general overview of U.S. general aviation. Topics covered include general aviation shipment and billings; airport and general aviation infrastructure; cockpit, airplane, and airspace technologies; market demand; air traffic operations and aviation accidents; fuel efficiency comparisons; and general aviation goals and strategy.

  17. Estimation and modeling of electrofishing capture efficiency for fishes in wadeable warmwater streams

    USGS Publications Warehouse

    Price, A.; Peterson, James T.

    2010-01-01

    Stream fish managers often use fish sample data to inform management decisions affecting fish populations. Fish sample data, however, can be biased by the same factors affecting fish populations. To minimize the effect of sample biases on decision making, biologists need information on the effectiveness of fish sampling methods. We evaluated single-pass backpack electrofishing and seining combined with electrofishing by following a dual-gear, mark–recapture approach in 61 blocknetted sample units within first- to third-order streams. We also estimated fish movement out of unblocked units during sampling. Capture efficiency and fish abundances were modeled for 50 fish species by use of conditional multinomial capture–recapture models. The best-approximating models indicated that capture efficiencies were generally low and differed among species groups based on family or genus. Efficiencies of single-pass electrofishing and seining combined with electrofishing were greatest for Catostomidae and lowest for Ictaluridae. Fish body length and stream habitat characteristics (mean cross-sectional area, wood density, mean current velocity, and turbidity) also were related to capture efficiency of both methods, but the effects differed among species groups. We estimated that, on average, 23% of fish left the unblocked sample units, but net movement varied among species. Our results suggest that (1) common warmwater stream fish sampling methods have low capture efficiency and (2) failure to adjust for incomplete capture may bias estimates of fish abundance. We suggest that managers minimize bias from incomplete capture by adjusting data for site- and species-specific capture efficiency and by choosing sampling gear that provide estimates with minimal bias and variance. Furthermore, if block nets are not used, we recommend that managers adjust the data based on unconditional capture efficiency.
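
    The paper's modeling machinery is multinomial and covariate-driven, but the core idea of estimating capture efficiency from repeated passes can be illustrated with the much simpler classical two-pass removal estimator (not the authors' model):

    ```python
    def two_pass_removal(c1, c2):
        """Classical two-pass removal (Zippin/Seber) estimates under a
        constant capture probability p: the simplest illustration of
        inferring capture efficiency and abundance from repeated
        passes with equal effort."""
        if c1 <= c2:
            raise ValueError("estimator undefined when c1 <= c2")
        p_hat = 1.0 - c2 / c1        # estimated capture efficiency
        n_hat = c1 ** 2 / (c1 - c2)  # estimated abundance
        return p_hat, n_hat

    # Hypothetical catches: 120 fish on pass 1, 48 on pass 2
    print(two_pass_removal(120, 48))   # -> (0.6, 200.0)
    ```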

  18. An efficient genome-wide association test for multivariate phenotypes based on the Fisher combination function.

    PubMed

    Yang, James J; Li, Jia; Williams, L Keoki; Buu, Anne

    2016-01-05

    In genome-wide association studies (GWAS) for complex diseases, the association between a SNP and each phenotype is usually weak. Combining multiple related phenotypic traits can increase the power of gene search and thus is a practically important area that requires methodology work. This study provides a comprehensive review of existing methods for conducting GWAS on complex diseases with multiple phenotypes, including the multivariate analysis of variance (MANOVA), the principal component analysis (PCA), generalized estimating equations (GEE), the trait-based association test involving the extended Simes procedure (TATES), and the classical Fisher combination test. We propose a new method that relaxes the unrealistic independence assumption of the classical Fisher combination test and is computationally efficient. To demonstrate applications of the proposed method, we also present the results of statistical analysis on the Study of Addiction: Genetics and Environment (SAGE) data. Our simulation study shows that the proposed method has higher power than existing methods while controlling the type I error rate. The GEE and the classical Fisher combination test, on the other hand, do not control the type I error rate and thus are not recommended. In general, the power of the competing methods decreases as the correlation between phenotypes increases. All the methods tend to have lower power when the multivariate phenotypes come from long-tailed distributions. The real data analysis also demonstrates that the proposed method allows us to compare the marginal results with the multivariate results and to specify which SNPs are specific to a particular phenotype or contribute to the common construct. The proposed method outperforms existing methods in most settings and also has broad applications in GWAS on complex diseases with multiple phenotypes such as the substance abuse disorders.
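
    The classical baseline the paper builds on is compact: Fisher's statistic -2 Σ ln p_i follows a chi-square distribution with 2k degrees of freedom when the k p-values are independent; the paper's contribution is relaxing that independence assumption for correlated phenotypes. A sketch of the classical test only:

    ```python
    import numpy as np
    from scipy import stats

    def fisher_combination(pvals):
        """Classical Fisher combination test for independent p-values:
        X = -2 * sum(log p_i) ~ chi-square with 2k degrees of freedom.
        Anti-conservative when the p-values are correlated, which is
        exactly the case the paper addresses."""
        pvals = np.asarray(pvals, dtype=float)
        x = -2.0 * np.log(pvals).sum()
        return stats.chi2.sf(x, df=2 * len(pvals))

    # p-values of one SNP against four phenotypes (hypothetical)
    print(fisher_combination([0.04, 0.01, 0.20, 0.03]))
    ```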

  19. A rapid, efficient, and economic device and method for the isolation and purification of mouse islet cells

    PubMed Central

    Zongyi, Yin; Funian, Zou; Hao, Li; Ying, Cheng; Jialin, Zhang

    2017-01-01

    A rapid, efficient, and economical method for the isolation and purification of islets has long been pursued by islet researchers. In this study, we compared the advantages and disadvantages of our patented method with those of commonly used conventional methods (the Ficoll-400, 1077, and handpicking methods). Cell viability was assayed using Trypan blue, cell purity and yield were assayed using diphenylthiocarbazone, and islet function was assayed using acridine orange/ethidium bromide staining and enzyme-linked immunosorbent assay-glucose stimulation testing 4 days after cultivation. The results showed that our islet isolation and purification method required 12 ± 3 min, which was significantly shorter than the times required in the Ficoll-400, 1077, and HPU groups (34 ± 3, 41 ± 4, and 30 ± 4 min, respectively; P < 0.05). There was no significant difference in islet viability among the four groups. The islet purity, function, yield, and cost of our method were superior to those of the Ficoll-400 and 1077 methods, but inferior to the handpicking method. However, the handpicking method may cause wrist injury and visual impairment in researchers during large-scale islet isolation (>1000 islets). In summary, the MCT method is a rapid, efficient, and economical method for isolating and purifying murine islet cell clumps. It overcomes some of the shortcomings of conventional methods, yielding islets of higher quality in a shorter time at a lower cost. The current method therefore provides researchers with an alternative option for islet isolation and deserves to be widely adopted. PMID:28207765

  20. A rapid, efficient, and economic device and method for the isolation and purification of mouse islet cells.

    PubMed

    Zongyi, Yin; Funian, Zou; Hao, Li; Ying, Cheng; Jialin, Zhang; Baifeng, Li

    2017-01-01

    A rapid, efficient, and economical method for the isolation and purification of islets has long been pursued by islet researchers. In this study, we compared the advantages and disadvantages of our patented method with those of commonly used conventional methods (the Ficoll-400, 1077, and handpicking methods). Cell viability was assayed using Trypan blue, cell purity and yield were assayed using diphenylthiocarbazone, and islet function was assayed using acridine orange/ethidium bromide staining and enzyme-linked immunosorbent assay-glucose stimulation testing 4 days after cultivation. The results showed that our islet isolation and purification method required 12 ± 3 min, which was significantly shorter than the times required in the Ficoll-400, 1077, and HPU groups (34 ± 3, 41 ± 4, and 30 ± 4 min, respectively; P < 0.05). There was no significant difference in islet viability among the four groups. The islet purity, function, yield, and cost of our method were superior to those of the Ficoll-400 and 1077 methods, but inferior to the handpicking method. However, the handpicking method may cause wrist injury and visual impairment in researchers during large-scale islet isolation (>1000 islets). In summary, the MCT method is a rapid, efficient, and economical method for isolating and purifying murine islet cell clumps. It overcomes some of the shortcomings of conventional methods, yielding islets of higher quality in a shorter time at a lower cost. The current method therefore provides researchers with an alternative option for islet isolation and deserves to be widely adopted.

  1. On extracting design principles from biology: I. Method-General answers to high-level design questions for bioinspired robots.

    PubMed

    Haberland, M; Kim, S

    2015-02-02

    When millions of years of evolution suggest a particular design solution, we may be tempted to abandon traditional design methods and copy the biological example. However, biological solutions do not often translate directly into the engineering domain, and even when they do, copying eliminates the opportunity to improve. A better approach is to extract design principles relevant to the task of interest, incorporate them in engineering designs, and vet these candidates against others. This paper presents the first general framework for determining whether biologically inspired relationships between design input variables and output objectives and constraints are applicable to a variety of engineering systems. Using optimization and statistics to generalize the results beyond a particular system, the framework overcomes shortcomings observed of ad hoc methods, particularly those used in the challenging study of legged locomotion. The utility of the framework is demonstrated in a case study of the relative running efficiency of rotary-kneed and telescoping-legged robots.

  2. Generalized ISAR--part II: interferometric techniques for three-dimensional location of scatterers.

    PubMed

    Given, James A; Schmidt, William R

    2005-11-01

    This paper is the second part of a study dedicated to optimizing diagnostic inverse synthetic aperture radar (ISAR) studies of large naval vessels. The method developed here provides accurate determination of the position of important radio-frequency scatterers by combining accurate knowledge of ship position and orientation with specialized signal processing. The method allows for the simultaneous presence of substantial Doppler returns from both change of roll angle and change of aspect angle by introducing generalized ISAR coordinates. The first paper provides two modes of interpreting ISAR plots, one valid when roll Doppler is dominant, the other valid when the aspect-angle Doppler is dominant. Here, we provide, for each type of ISAR plot technique, a corresponding interferometric ISAR (InSAR) technique. The former, aspect-angle dominated InSAR, is a generalization of standard InSAR; the latter, roll-angle dominated InSAR, appears to be new to this work. Both methods are shown to be efficient at identifying localized scatterers under simulation conditions.

  3. Efficient Stochastic Inversion Using Adjoint Models and Kernel-PCA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thimmisetty, Charanraj A.; Zhao, Wenju; Chen, Xiao

    2017-10-18

    Performing stochastic inversion on a computationally expensive forward simulation model with a high-dimensional uncertain parameter space (e.g. a spatial random field) is computationally prohibitive even when gradient information can be computed efficiently. Moreover, the ‘nonlinear’ mapping from parameters to observables generally gives rise to non-Gaussian posteriors even with Gaussian priors, thus hampering the use of efficient inversion algorithms designed for models with Gaussian assumptions. In this paper, we propose a novel Bayesian stochastic inversion methodology, which is characterized by a tight coupling between the gradient-based Langevin Markov Chain Monte Carlo (LMCMC) method and a kernel principal component analysis (KPCA). This approach addresses the ‘curse-of-dimensionality’ via KPCA to identify a low-dimensional feature space within the high-dimensional and nonlinearly correlated parameter space. In addition, non-Gaussian posterior distributions are estimated via an efficient LMCMC method on the projected low-dimensional feature space. We will demonstrate this computational framework by integrating and adapting our recent data-driven statistics-on-manifolds constructions and reduction-through-projection techniques to a linear elasticity model.
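
    The dimension-reduction half of this coupling can be sketched with scikit-learn's KernelPCA: learn a low-dimensional feature space from prior samples, run the chain on the latent coordinates, and map proposals back to full fields for the forward model. All sizes and kernel choices below are illustrative assumptions, and the MCMC step itself is omitted:

    ```python
    import numpy as np
    from sklearn.decomposition import KernelPCA

    # Hypothetical prior ensemble: 1000 samples of a 10000-cell field.
    rng = np.random.default_rng(2)
    prior_samples = rng.normal(size=(1000, 10_000))

    # KPCA identifies a low-dimensional feature space; a Langevin MCMC
    # chain would then operate on 20 latent coordinates instead of the
    # 10000-dimensional field.
    kpca = KernelPCA(n_components=20, kernel="rbf",
                     fit_inverse_transform=True)
    latent = kpca.fit_transform(prior_samples)

    # Map a proposed latent point back to a full field for the forward
    # simulation (the pre-image problem).
    proposal = latent[0] + 0.1 * rng.normal(size=20)
    field = kpca.inverse_transform(proposal.reshape(1, -1))
    print(field.shape)  # (1, 10000)
    ```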

  4. Comparison of numerical techniques for integration of stiff ordinary differential equations arising in combustion chemistry

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, K.

    1984-01-01

    The efficiency and accuracy of several algorithms recently developed for the efficient numerical integration of stiff ordinary differential equations are compared. The methods examined include two general-purpose codes, EPISODE and LSODE, and three codes (CHEMEQ, CREK1D, and GCKP84) developed specifically to integrate chemical kinetic rate equations. The codes are applied to two test problems drawn from combustion kinetics. The comparisons show that LSODE is the fastest code currently available for the integration of combustion kinetic rate equations. An important finding is that an iterative solution of the algebraic energy conservation equation to compute the temperature does not result in significant errors. In addition, this method is more efficient than evaluating the temperature by integrating its time derivative. Significant reductions in computational work are realized by updating the rate constants (k = A T^N exp(-E/RT)) only when the temperature change exceeds an amount delta T that is problem dependent. An approximate expression for the automatic evaluation of delta T is derived and is shown to result in increased efficiency.
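
    LSODE's stiff option belongs to the BDF family, which is available today in SciPy, so the flavor of these comparisons is easy to reproduce on a standard stiff test case. A sketch on the Robertson kinetics problem (a common benchmark, not one of the paper's two combustion problems):

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def robertson(t, y):
        """Robertson chemical kinetics, a classic stiff test problem:
        three species with rate constants spanning nine decades."""
        y1, y2, y3 = y
        return [-0.04 * y1 + 1e4 * y2 * y3,
                 0.04 * y1 - 1e4 * y2 * y3 - 3e7 * y2 ** 2,
                 3e7 * y2 ** 2]

    # BDF is the method family behind LSODE's stiff option
    sol = solve_ivp(robertson, (0.0, 1e5), [1.0, 0.0, 0.0],
                    method="BDF", rtol=1e-6, atol=1e-10)
    print(sol.y[:, -1])  # long-time composition
    ```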

  5. Applications of He's semi-inverse method, ITEM and GGM to the Davey-Stewartson equation

    NASA Astrophysics Data System (ADS)

    Zinati, Reza Farshbaf; Manafian, Jalil

    2017-04-01

    We investigate the Davey-Stewartson (DS) equation and find travelling wave solutions. In this paper, we demonstrate the effectiveness of the analytical methods, namely, He's semi-inverse variational principle method (SIVPM), the improved tan(φ/2)-expansion method (ITEM) and the generalized G'/G-expansion method (GGM), for seeking more exact solutions of the DS equation. These methods are direct, concise and simple to implement compared to other existing methods. Exact solutions of four types have been obtained. The results demonstrate that the aforementioned methods are more efficient than the Ansatz method applied by Mirzazadeh (2015). Abundant exact travelling wave solutions, including soliton, kink, periodic and rational solutions, have been found by the improved tan(φ/2)-expansion and generalized G'/G-expansion methods. By He's semi-inverse variational principle we have obtained dark and bright soliton wave solutions; the semi-inverse variational principle also carries physical insight. These solutions may play an important role in engineering and physics. Moreover, using Matlab, some graphical simulations were done to illustrate the behavior of these solutions.

  6. An efficient method to compute microlensed light curves for point sources

    NASA Technical Reports Server (NTRS)

    Witt, Hans J.

    1993-01-01

    We present a method to compute microlensed light curves for point sources. This method has the general advantage that all microimages contributing to the light curve are found. While a source moves along a straight line, all microimages are located either on the primary image track or on the secondary image tracks (loops). The primary image track extends from - infinity to + infinity and is made of many segments which are continuously connected. All the secondary image tracks (loops) begin and end on the lensing point masses. The method can be applied to any microlensing situation with point masses in the deflector plane, even for the overcritical case and surface densities close to the critical value. Furthermore, we present general rules to evaluate the light curve for a straight track arbitrarily placed in the caustic network of a sample of many point masses.
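
    For orientation, the single point-mass case has a closed-form light curve; the paper's image-track machinery exists because nothing this simple survives once many point masses interact. A sketch of the standard single-lens magnification (textbook formula, not the paper's method):

    ```python
    import numpy as np

    def point_lens_lightcurve(t, t0, tE, u0):
        """Magnification of a point source by a single point-mass lens,
        A(u) = (u^2 + 2) / (u * sqrt(u^2 + 4)), for a source moving on
        a straight line with impact parameter u0 (Einstein radii) and
        Einstein crossing time tE."""
        u = np.sqrt(u0 ** 2 + ((t - t0) / tE) ** 2)  # lens-source separation
        return (u ** 2 + 2) / (u * np.sqrt(u ** 2 + 4))

    t = np.linspace(-30.0, 30.0, 601)                # days
    amp = point_lens_lightcurve(t, t0=0.0, tE=10.0, u0=0.1)
    print(amp.max())                                 # ~10 at peak
    ```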

  7. A quantum–quantum Metropolis algorithm

    PubMed Central

    Yung, Man-Hong; Aspuru-Guzik, Alán

    2012-01-01

    The classical Metropolis sampling method is a cornerstone of many statistical modeling applications that range from physics, chemistry, and biology to economics. This method is particularly suitable for sampling the thermal distributions of classical systems. The challenge of extending this method to the simulation of arbitrary quantum systems is that, in general, eigenstates of quantum Hamiltonians cannot be obtained efficiently with a classical computer. However, this challenge can be overcome by quantum computers. Here, we present a quantum algorithm which fully generalizes the classical Metropolis algorithm to the quantum domain. The meaning of quantum generalization is twofold: The proposed algorithm is not only applicable to both classical and quantum systems, but also offers a quantum speedup relative to the classical counterpart. Furthermore, unlike the classical method of quantum Monte Carlo, this quantum algorithm does not suffer from the negative-sign problem associated with fermionic systems. Applications of this algorithm include the study of low-temperature properties of quantum systems, such as the Hubbard model, and preparing the thermal states of sizable molecules to simulate, for example, chemical reactions at an arbitrary temperature. PMID:22215584
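
    As a reference point for what is being generalized, the classical algorithm fits in a few lines: propose a move, accept with probability min(1, exp(-βΔE)). A minimal sketch for a thermal distribution of a classical system (the potential and parameters are arbitrary choices):

    ```python
    import numpy as np

    def metropolis_thermal(energy, x0, beta, n_steps=10_000, step=0.5):
        """Minimal classical Metropolis sampler of exp(-beta * E(x)).
        The quantum algorithm described above replaces this chain with
        operations a classical computer cannot perform efficiently."""
        rng = np.random.default_rng(3)
        x, e = float(x0), energy(x0)
        samples = []
        for _ in range(n_steps):
            x_new = x + step * rng.normal()
            e_new = energy(x_new)
            # Accept with probability min(1, exp(-beta * dE))
            if e_new <= e or rng.random() < np.exp(-beta * (e_new - e)):
                x, e = x_new, e_new
            samples.append(x)
        return np.array(samples)

    # Thermal samples from a double-well potential at beta = 2
    samples = metropolis_thermal(lambda x: (x ** 2 - 1.0) ** 2, 0.0, 2.0)
    print(samples.mean(), samples.var())
    ```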

  8. Testing the Stability of 2-D Recursive QP, NSHP and General Digital Filters of Second Order

    NASA Astrophysics Data System (ADS)

    Rathinam, Ananthanarayanan; Ramesh, Rengaswamy; Reddy, P. Subbarami; Ramaswami, Ramaswamy

    Several methods for testing the stability of first-quadrant quarter-plane two-dimensional (2-D) recursive digital filters were suggested in the 1970s and 80s. Although Jury's row and column algorithms and the row- and column-concatenation stability tests have been considered highly efficient mapping methods, they still fall short of accuracy, as they need an infinite number of steps to decide the exact stability of the filters, and the computational time required is enormous. In this paper, we present a procedurally very simple algebraic method requiring only two steps when applied to the second-order 2-D quarter-plane filter. We extend the same method to second-order Non-Symmetric Half-plane (NSHP) filters. Enough examples are given for both these types of filters as well as some lower-order general recursive 2-D digital filters. We applied our method to barely stable or barely unstable filter examples available in the literature and obtained the same decisions, showing that our method is sufficiently accurate.
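
    As a point of comparison for such tests, a crude numeric screen checks the necessary condition that the denominator B(z1, z2) has no zeros on the unit bicircle; full quarter-plane stability additionally requires B ≠ 0 for |z1| ≥ 1, |z2| ≥ 1. The sketch below is this generic grid check, not the paper's two-step algebraic test, and the example coefficients are arbitrary:

    ```python
    import numpy as np

    def unit_bicircle_min(b, n=256):
        """Minimum of |B(z1, z2)| on the unit bicircle for a 2-D IIR
        denominator coefficient array b; a value near zero flags
        (near-)instability. Necessary condition only."""
        w = np.exp(-2j * np.pi * np.arange(n) / n)
        z1 = w[:, None] ** np.arange(b.shape[0])[None, :]  # (n, M)
        z2 = w[:, None] ** np.arange(b.shape[1])[None, :]  # (n, N)
        B = z1 @ b @ z2.T          # B evaluated on the whole grid
        return np.abs(B).min()

    b = np.array([[1.0, -0.5],     # arbitrary example coefficients
                  [-0.5, 0.3]])
    print(unit_bicircle_min(b))
    ```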

  9. A class of high resolution explicit and implicit shock-capturing methods

    NASA Technical Reports Server (NTRS)

    Yee, H. C.

    1989-01-01

    An attempt is made to give a unified and generalized formulation of a class of high-resolution, explicit and implicit shock-capturing methods, and to illustrate their versatility in various steady and unsteady complex shock wave computations. Included is a systematic review of the basic design principles of the various related numerical methods. Special emphasis is on the construction of the basic nonlinear, spatially second- and third-order schemes for nonlinear scalar hyperbolic conservation laws, and on the methods of extending these nonlinear scalar schemes to nonlinear systems via approximate Riemann solvers and flux-vector splitting approaches. Generalization of these methods to efficiently include equilibrium real gases and large systems of nonequilibrium flows is discussed. Some issues concerning the applicability of these methods, which were designed for homogeneous hyperbolic conservation laws, to problems containing stiff source terms and shock waves are also included. The performance of some of these schemes is illustrated by numerical examples for 1-, 2- and 3-dimensional gas dynamics problems.
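
    The ingredients named here -- a nonlinear limiter plus a numerical flux -- can be seen in a minimal high-resolution scheme for a scalar conservation law. A sketch of a MUSCL/minmod scheme with a local Lax-Friedrichs flux for the inviscid Burgers equation (a generic illustration, not a scheme taken from the paper):

    ```python
    import numpy as np

    def minmod(a, b):
        cond = a * b > 0
        return np.where(cond, np.sign(a) * np.minimum(abs(a), abs(b)), 0.0)

    def burgers_tvd(u, dx, dt, n_steps):
        """Second-order MUSCL/minmod scheme with a local Lax-Friedrichs
        (Rusanov) flux for u_t + (u^2/2)_x = 0 on a periodic grid."""
        for _ in range(n_steps):
            um, up = np.roll(u, 1), np.roll(u, -1)
            slope = minmod(u - um, up - u)          # limited slope
            uL = u + 0.5 * slope                    # state left of i+1/2
            uR = np.roll(u - 0.5 * slope, -1)       # state right of i+1/2
            a = np.maximum(abs(uL), abs(uR))        # local wave speed
            flux = 0.25 * (uL**2 + uR**2) - 0.5 * a * (uR - uL)
            u = u - dt / dx * (flux - np.roll(flux, 1))
        return u

    x = np.linspace(0.0, 2 * np.pi, 400, endpoint=False)
    u0 = np.sin(x) + 0.5
    u = burgers_tvd(u0, x[1] - x[0], dt=0.5 * (x[1] - x[0]), n_steps=400)
    ```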

  10. One-step leapfrog ADI-FDTD method for simulating electromagnetic wave propagation in general dispersive media.

    PubMed

    Wang, Xiang-Hua; Yin, Wen-Yan; Chen, Zhi Zhang David

    2013-09-09

    The one-step leapfrog alternating-direction-implicit finite-difference time-domain (ADI-FDTD) method is reformulated for simulating general electrically dispersive media. It models material dispersive properties with equivalent polarization currents. These currents are then solved with the auxiliary differential equation (ADE) and then incorporated into the one-step leapfrog ADI-FDTD method. The final equations are presented in the form similar to that of the conventional FDTD method but with second-order perturbation. The adapted method is then applied to characterize (a) electromagnetic wave propagation in a rectangular waveguide loaded with a magnetized plasma slab, (b) transmission coefficient of a plane wave normally incident on a monolayer graphene sheet biased by a magnetostatic field, and (c) surface plasmon polaritons (SPPs) propagation along a monolayer graphene sheet biased by an electrostatic field. The numerical results verify the stability, accuracy and computational efficiency of the proposed one-step leapfrog ADI-FDTD algorithm in comparison with analytical results and the results obtained with the other methods.

  11. Efficient Implementation of an Optimal Interpolator for Large Spatial Data Sets

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mount, David M.

    2007-01-01

    Interpolating scattered data points is a problem of wide-ranging interest. A number of approaches for interpolation have been proposed both in theoretical domains such as computational geometry and in applied fields such as geostatistics. Our motivation arises from geological and mining applications. In many instances data can be costly to compute and are available only at nonuniformly scattered positions. Because of the high cost of collecting measurements, high accuracy is required in the interpolants. One of the most popular interpolation methods in this field is called ordinary kriging. It is popular because it is a best linear unbiased estimator. The price for its statistical optimality is that the estimator is computationally very expensive, because the value of each interpolant is given by the solution of a large dense linear system. In practice, kriging problems have been solved approximately by restricting the domain to a small local neighborhood of points that lie near the query point. Determining the proper size for this neighborhood is solved by ad hoc methods, and it has been shown that this approach leads to undesirable discontinuities in the interpolant. Recently a more principled approach to approximating kriging has been proposed based on a technique called covariance tapering, which achieves its efficiency by replacing the large dense kriging system with a much sparser linear system. This technique has been applied to a restriction of our problem, called simple kriging, which is not unbiased for general data sets. In this paper we generalize these results by showing how to apply covariance tapering to the more general problem of ordinary kriging. Through experimentation we demonstrate the space and time efficiency and accuracy of approximating ordinary kriging through the use of covariance tapering combined with iterative methods for solving large sparse systems. We demonstrate our approach on large data sizes arising both from synthetic sources and from real applications.
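
    The ordinary kriging system is small to write down: covariances among the samples, bordered by the unbiasedness constraint; tapering multiplies the covariance by a compactly supported function so the system becomes sparse and amenable to iterative solvers. A dense sketch for a single query point (the exponential covariance and all parameters are illustrative assumptions, not the paper's setup):

    ```python
    import numpy as np

    def ordinary_kriging(coords, values, query, range_=1.0, taper=None):
        """Ordinary kriging at one query point with an exponential
        covariance. `taper`, if given, multiplies the covariance by a
        compactly supported function -- where covariance tapering
        enters; a proper taper (e.g. Wendland) must itself be a valid
        covariance to keep the system positive definite."""
        def cov(h):
            c = np.exp(-h / range_)
            return c if taper is None else c * taper(h)

        n = len(values)
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        # Kriging system bordered by the unbiasedness constraint
        A = np.zeros((n + 1, n + 1))
        A[:n, :n] = cov(d)
        A[n, :n] = A[:n, n] = 1.0
        b = np.append(cov(np.linalg.norm(coords - query, axis=-1)), 1.0)
        w = np.linalg.solve(A, b)[:n]
        return w @ values

    rng = np.random.default_rng(4)
    coords = rng.uniform(size=(200, 2))
    values = np.sin(3 * coords[:, 0]) + rng.normal(scale=0.05, size=200)
    print(ordinary_kriging(coords, values, np.array([0.5, 0.5])))
    ```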

  12. Copper-Mediated Formation of Aryl, Heteroaryl, Vinyl and Alkynyl Difluoromethylphosphonates: A General Approach to Fluorinated Phosphate Mimics.

    PubMed

    Ivanova, Maria V; Bayle, Alexandre; Besset, Tatiana; Poisson, Thomas; Pannecoucke, Xavier

    2015-11-02

    A general and efficient access to aryl, heteroaryl, vinyl and alkynyl difluoromethylphosphonates is described. The developed methodology using TMSCF2PO(OEt)2, iodonium salts and a copper salt provided a straightforward manifold to reach these highly relevant products. The reaction proved to be highly functional group tolerant and proceeded under mild conditions, giving the corresponding products in good to excellent yields. This method represents the first general synthetic route to this important class of fluorinated scaffolds, which are well-recognized as in vivo stable phosphate surrogates.

  13. On the generalized VIP time integral methodology for transient thermal problems

    NASA Technical Reports Server (NTRS)

    Mei, Youping; Chen, Xiaoqin; Tamma, Kumar K.; Sha, Desong

    1993-01-01

    The paper describes the development and applicability of a generalized VIrtual-Pulse (VIP) time integral method of computation for thermal problems. With the advent of high-speed computing technology and the importance of parallel computation for efficient use of computing environments, a major motivation for the developments described in this paper, unlike past approaches for general heat transfer computations, is the need for explicit computational procedures with improved accuracy and stability characteristics. As a consequence, a new and effective VIP methodology is described which inherits these improved characteristics. Numerical illustrative examples are provided to demonstrate the developments and validate the results obtained for thermal problems.

  14. Generalized Correlation Coefficient for Non-Parametric Analysis of Microarray Time-Course Data.

    PubMed

    Tan, Qihua; Thomassen, Mads; Burton, Mark; Mose, Kristian Fredløv; Andersen, Klaus Ejner; Hjelmborg, Jacob; Kruse, Torben

    2017-06-06

    Modeling complex time-course patterns is a challenging issue in microarray studies due to the complex gene expression patterns elicited by time-course experiments. We introduce the generalized correlation coefficient and propose a combinatory approach for detecting, testing and clustering heterogeneous time-course gene expression patterns. Application of the method identified nonlinear time-course patterns in high agreement with parametric analysis. We conclude that the non-parametric nature of the generalized correlation analysis makes it a useful and efficient tool for analyzing microarray time-course data and for exploring complex relationships in omics data, for studying their association with disease and health.

  15. Socio-economic and cultural differentials in contraceptive usage among Ghanaian women.

    PubMed

    Oheneba-sakyi, Y

    1990-01-01

    Data from the Ghana Fertility Survey of 2001 married women in 1979-1980 were subjected to logistic regression to determine the factors influencing contraceptive use. In this Ghanaian sample only 22 women and no men were sterilized, 11% used an efficient contraceptive method and 8% were using an inefficient method. The most prevalent methods were abstinence, by 6%, and the pill, by 5%. The variables analyzed were birth cohort, age at 1st marriage, education, occupation, religion, ethnicity, rural/urban residence, northern/southern residence, and desired number of children relative to the number of living children. All these factors were dichotomized, e.g., cohort: born before or after 1950. Factors positively significant for contraceptive use were younger age (20% more likely), marriage at age 20 or older (82% more), education (150% for any method, 67% for an efficient method), professional occupation, Protestant religion, urban residence, southern residence, and desiring fewer children. Factors negatively associated with contraception were agricultural work (50% as likely), non-Christian religion, both traditional and Moslem (75%), desiring more children, and living in the north. Unexpectedly, living in the northern undeveloped region was strongly linked with use of an efficient contraceptive. A factor without significant effect was ethnicity, Akan or non-Akan. These results were discussed with a general review of the literature on determinants of contraceptive use.

  16. Dynamic frame resizing with convolutional neural network for efficient video compression

    NASA Astrophysics Data System (ADS)

    Kim, Jaehwan; Park, Youngo; Choi, Kwang Pyo; Lee, JongSeok; Jeon, Sunyoung; Park, JeongHoon

    2017-09-01

    In the past, video codecs such as VC-1 and H.263 used a technique of encoding reduced-resolution video and restoring the original resolution at the decoder to improve coding efficiency. The techniques of VC-1 and H.263 Annex Q are called dynamic frame resizing and reduced-resolution update mode, respectively. However, these techniques have not been widely used due to limited performance improvements that appear only under specific conditions. In this paper, a video frame resizing (reduction/restoration) technique based on machine learning is proposed to improve coding efficiency. The proposed method produces low-resolution video with a convolutional neural network (CNN) at the encoder and reconstructs the original resolution with a CNN at the decoder. The proposed method shows improved subjective performance across the high-resolution videos that dominate current consumption. To assess the subjective quality of the proposed method, Video Multi-method Assessment Fusion (VMAF), which has shown high reliability among subjective measurement tools, was used as the subjective metric. Moreover, diverse bitrates were tested to assess general performance. Experimental results showed that the BD-rate based on VMAF was improved by about 51% compared to conventional HEVC. VMAF values were especially improved at low bitrates. Also, in subjective testing the method delivered better visual quality at similar bit rates.
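
    The encode/decode pairing described here can be sketched with two small convolutional networks: a stride-2 downscaler applied before encoding and a PixelShuffle upscaler applied after decoding. The layer counts and shapes below are hypothetical placeholders, not the paper's architecture:

    ```python
    import torch
    import torch.nn as nn

    class DownscaleCNN(nn.Module):
        """Learned 2x downscaler applied before encoding."""
        def __init__(self):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 3, 3, stride=2, padding=1),  # 2x smaller
            )
        def forward(self, x):
            return self.body(x)

    class UpscaleCNN(nn.Module):
        """Learned 2x restorer applied after decoding."""
        def __init__(self):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 3 * 4, 3, padding=1), nn.PixelShuffle(2),
            )
        def forward(self, x):
            return self.body(x)

    frame = torch.rand(1, 3, 1080, 1920)   # one frame, N x C x H x W
    small = DownscaleCNN()(frame)          # this is what gets encoded
    restored = UpscaleCNN()(small)         # decoder-side restoration
    print(small.shape, restored.shape)     # 540x960 -> 1080x1920
    ```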

  17. Sparse Covariance Matrix Estimation With Eigenvalue Constraints

    PubMed Central

    LIU, Han; WANG, Lie; ZHAO, Tuo

    2014-01-01

    We propose a new approach for estimating high-dimensional, positive-definite covariance matrices. Our method extends the generalized thresholding operator by adding an explicit eigenvalue constraint. The estimated covariance matrix simultaneously achieves sparsity and positive definiteness. The estimator is rate optimal in the minimax sense and we develop an efficient iterative soft-thresholding and projection algorithm based on the alternating direction method of multipliers. Empirically, we conduct thorough numerical experiments on simulated datasets as well as real data examples to illustrate the usefulness of our method. Supplementary materials for the article are available online. PMID:25620866
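
    The two properties are enforced by alternating two cheap operations: elementwise soft-thresholding for sparsity, and an eigendecomposition that clips eigenvalues from below for positive definiteness. An ADMM-style sketch in this spirit (an illustrative loop, not the authors' exact estimator):

    ```python
    import numpy as np

    def soft_threshold(a, t):
        return np.sign(a) * np.maximum(np.abs(a) - t, 0.0)

    def sparse_pd_cov(S, lam, eps=1e-3, rho=1.0, n_iter=500):
        """ADMM loop for min 0.5*||X - S||_F^2 + lam*||X||_1 subject to
        eigenvalues(X) >= eps: soft-thresholding alternated with an
        eigenvalue projection."""
        Z = S.copy()
        U = np.zeros_like(S)
        for _ in range(n_iter):
            # Prox of the data-fit + l1 term
            X = soft_threshold((S + rho * (Z - U)) / (1 + rho),
                               lam / (1 + rho))
            # Projection onto the eigenvalue-constrained set
            w, V = np.linalg.eigh(X + U)
            Z = (V * np.maximum(w, eps)) @ V.T
            U += X - Z
        return Z

    rng = np.random.default_rng(5)
    S = np.cov(rng.normal(size=(50, 20)), rowvar=False)
    Sigma = sparse_pd_cov(S, lam=0.05)
    print(np.linalg.eigvalsh(Sigma).min())  # >= eps: positive definite
    ```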

  18. An efficient computer based wavelets approximation method to solve Fuzzy boundary value differential equations

    NASA Astrophysics Data System (ADS)

    Alam Khan, Najeeb; Razzaq, Oyoon Abdul

    2016-03-01

    In the present work a wavelets approximation method is employed to solve fuzzy boundary value differential equations (FBVDEs). Essentially, a truncated Legendre wavelets series together with the Legendre wavelets operational matrix of derivative are utilized to convert FB- VDE into a simple computational problem by reducing it into a system of fuzzy algebraic linear equations. The capability of scheme is investigated on second order FB- VDE considered under generalized H-differentiability. Solutions are represented graphically showing competency and accuracy of this method.

  19. The effectiveness of Nurse Practitioners working at a GP cooperative: a study protocol

    PubMed Central

    2012-01-01

    Background In many countries out-of-hours care faces serious challenges, including a shortage of general practitioners, a high workload, reduced motivation to work out of hours, and increased demand for out-of-hours care. One response to these challenges is the introduction of nurse practitioners as doctor substitutes, in order to maintain the (high) accessibility and safety of out-of-hours care. Although nurse practitioners have proven to provide equally safe and efficient care during daytime primary care, it is unclear whether substitution is effective and efficient in the more complex out-of-hours primary care. This study aims to assess the effects of substitution of care from general practitioners to nurse practitioners in an out-of-hours primary care setting. Design A quasi-experimental study is undertaken at one “general practitioner cooperative” offering out-of-hours care for 304,000 people in the South East of the Netherlands. In the experimental condition patient care is provided by a team of one nurse practitioner and four general practitioners, where the nurse practitioner replaces one general practitioner during one day of the weekend from 10 am to 5 pm. In the control condition patient care is provided by a team of five general practitioners during the other day of the weekend, also from 10 am to 5 pm. The study period lasts 15 months, from April 2011 to July 2012. Methods Data will be collected on a number of different outcomes using a range of methods. Our primary outcome is substitution of care. This is calculated using the number and characteristics of patients that have a consultation at the GP cooperative. We compare the number of patients seen by both professionals, type of complaints, resource utilization (e.g. prescription, tests, investigations, referrals) and waiting times in the experimental condition and control condition. These data are derived from patient electronic medical records. Secondary outcomes are: patient satisfaction; general practitioners' workload; quality and safety of care; and barriers and facilitators. Discussion The study will provide evidence on whether substitution of care in the out-of-hours setting is safe and efficient, and give insight into barriers and facilitators related to the introduction of nurse practitioners in the out-of-hours setting. Trial registration ClinicalTrials.gov ID NCT01388374 PMID:22870898

  20. Efficient determination of the Markovian time-evolution towards a steady-state of a complex open quantum system

    NASA Astrophysics Data System (ADS)

    Jonsson, Thorsteinn H.; Manolescu, Andrei; Goan, Hsi-Sheng; Abdullah, Nzar Rauf; Sitek, Anna; Tang, Chi-Shung; Gudmundsson, Vidar

    2017-11-01

    Master equations are commonly used to describe time evolution of open systems. We introduce a general computationally efficient method for calculating a Markovian solution of the Nakajima-Zwanzig generalized master equation. We do so for a time-dependent transport of interacting electrons through a complex nano scale system in a photon cavity. The central system, described by 120 many-body states in a Fock space, is weakly coupled to the external leads. The efficiency of the approach allows us to place the bias window defined by the external leads high into the many-body spectrum of the cavity photon-dressed states of the central system revealing a cascade of intermediate transitions as the system relaxes to a steady state. The very diverse relaxation times present in the open system, reflecting radiative or non-radiative transitions, require information about the time evolution through many orders of magnitude. In our approach, the generalized master equation is mapped from a many-body Fock space of states to a Liouville space of transitions. We show that this results in a linear equation which is solved exactly through an eigenvalue analysis, which supplies information on the steady state and the time evolution of the system.
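
    The mapping from density-matrix space to a linear equation in Liouville space is generic and can be shown on a small example. The sketch below builds the superoperator matrix of an ordinary Lindblad master equation for a two-level system and reads the steady state off the zero eigenvalue; it illustrates the vectorized-Liouvillian idea, not the paper's Nakajima-Zwanzig treatment of the 120-state cavity system:

    ```python
    import numpy as np

    def liouvillian(H, jump_ops):
        """Lindblad generator as a matrix acting on row-major vec(rho):
        d vec(rho)/dt = L @ vec(rho)."""
        d = H.shape[0]
        I = np.eye(d)
        L = -1j * (np.kron(H, I) - np.kron(I, H.T))
        for A in jump_ops:
            AdA = A.conj().T @ A
            L += (np.kron(A, A.conj())
                  - 0.5 * np.kron(AdA, I) - 0.5 * np.kron(I, AdA.T))
        return L

    H = np.diag([0.0, 1.0])                                     # two-level system
    jump = [np.sqrt(0.1) * np.array([[0.0, 1.0], [0.0, 0.0]])]  # decay
    L = liouvillian(H, jump)

    # Eigenvalue analysis gives both the relaxation time scales
    # (nonzero eigenvalues) and the steady state (zero eigenvalue).
    w, V = np.linalg.eig(L)
    rho_ss = V[:, np.argmin(np.abs(w))].reshape(2, 2)
    rho_ss /= np.trace(rho_ss)
    print(rho_ss.real)   # relaxes to the ground state
    ```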

  1. Automated sizing of large structures by mixed optimization methods

    NASA Technical Reports Server (NTRS)

    Sobieszczanski, J.; Loendorf, D.

    1973-01-01

    A procedure for automating the sizing of wing-fuselage airframes was developed and implemented in the form of an operational program. The program combines fully stressed design to determine an overall material distribution with mass-strength and mathematical programming methods to design structural details accounting for realistic design constraints. The practicality and efficiency of the procedure is demonstrated for transport aircraft configurations. The methodology is sufficiently general to be applicable to other large and complex structures.

  2. Method Evaluations for Adsorption Free Energy Calculations at the Solid/Water Interface through Metadynamics, Umbrella Sampling, and Jarzynski's Equality.

    PubMed

    Wei, Qichao; Zhao, Weilong; Yang, Yang; Cui, Beiliang; Xu, Zhijun; Yang, Xiaoning

    2018-03-19

    Considerable interest in characterizing protein/peptide-surface interactions has prompted extensive computational studies on calculations of adsorption free energy. However, in many cases, each individual study has focused on the application of free energy calculations to a specific system; therefore, it is difficult to combine the results into a general picture for choosing an appropriate strategy for the system of interest. Herein, three well-established computational algorithms are systematically compared and evaluated to compute the adsorption free energy of small molecules on two representative surfaces. The results clearly demonstrate that the characteristics of the studied interfacial systems have crucial effects on the accuracy and efficiency of adsorption free energy calculations. For the hydrophobic surface, steered molecular dynamics exhibits the highest efficiency, which makes it a favorable method of choice for enhanced sampling simulations. However, for the charged surface, only the umbrella sampling method has the ability to accurately explore the adsorption free energy surface. The affinity of the water layer to the surface significantly affects the performance of free energy calculation methods, especially in the region close to the surface. Therefore, a general principle of how to discriminate between methodological and sampling issues, based on the interfacial characteristics of the system under investigation, is proposed.
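
    Of the three approaches compared, Jarzynski's equality is the most compact to state: exp(-ΔF/kT) equals the average of exp(-W/kT) over repeated nonequilibrium pulling runs, such as those produced by steered molecular dynamics. A sketch of the estimator with a numerically stable log-mean-exp (the work values are fabricated placeholders, not data from the paper):

    ```python
    import numpy as np

    def jarzynski_free_energy(work, kT=1.0):
        """Free-energy difference from nonequilibrium work samples via
        Jarzynski's equality, exp(-dF/kT) = <exp(-W/kT)>. `work` would
        hold the pulling work from repeated steered-MD trajectories."""
        w = np.asarray(work) / kT
        m = w.min()
        log_mean = -m + np.log(np.mean(np.exp(-(w - m))))  # stable
        return -kT * log_mean

    # Hypothetical work values (kcal/mol) from 50 pulling runs
    rng = np.random.default_rng(6)
    work = rng.normal(loc=8.0, scale=1.5, size=50)
    print(jarzynski_free_energy(work, kT=0.593))  # kT at 298 K, kcal/mol
    ```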

  3. Assessment of masticatory performance by means of a color-changeable chewing gum.

    PubMed

    Tarkowska, Agnieszka; Katzer, Lukasz; Ahlers, Marcus Oliver

    2017-01-01

    Previous research determined the relevance of masticatory performance with regard to nutritional status, cognitive function, and stress management. In addition, the measurement of masticatory efficiency contributes to the evaluation of therapeutic success within the stomatognathic system. However, the question remains unanswered as to what extent modern techniques are able to reproduce the subtle differences in masticatory efficiency within various patient groups. The purpose of this review is to provide an extensive summary of the evaluation of masticatory performance by means of a color-changeable chewing gum with regard to its clinical relevance and applicability. A general overview describing the various methods available for this task has already been published; this review focuses in depth on the research findings available on the technique of measuring masticatory performance by means of color-changeable chewing gum. Described are the mechanism and differentiability of the color change and the methods used to evaluate it. Subsequently, research on masticatory performance is reviewed with regard to patient age groups, the impact of general diseases, and the effects of prosthetic and surgical treatment. The studies indicate that color-changeable chewing gum is a valid and reliable method for the evaluation of masticatory function. Alongside other methods, in clinical practice this technique can enhance dental diagnostics as well as the assessment of therapy outcomes.

  4. Comparison of the Computational Efficiency of the Original Versus Reformulated High-Fidelity Generalized Method of Cells

    NASA Technical Reports Server (NTRS)

    Arnold, Steven M; Bednarcyk, Brett; Aboudi, Jacob

    2004-01-01

    The High-Fidelity Generalized Method of Cells (HFGMC) micromechanics model has recently been reformulated by Bansal and Pindera (in the context of elastic phases with perfect bonding) to maximize its computational efficiency. This reformulated version of HFGMC has now been extended to include both inelastic phases and imperfect fiber-matrix bonding. The present paper presents an overview of the HFGMC theory in both its original and reformulated forms and a comparison of the results of the two implementations. The objective is to establish the correlation between the two HFGMC formulations and document the improved efficiency offered by the reformulation. The results compare the macro and micro scale predictions of the continuous reinforcement (doubly-periodic) and discontinuous reinforcement (triply-periodic) versions of both formulations into the inelastic regime, and, in the case of the discontinuous reinforcement version, with both perfect and weak interfacial bonding. The results demonstrate that identical predictions are obtained using either the original or reformulated implementations of HFGMC aside from small numerical differences in the inelastic regime due to the different implementation schemes used for the inelastic terms present in the two formulations. Finally, a direct comparison of execution times is presented for the original formulation and reformulation code implementations. It is shown that as the discretization employed in representing the composite repeating unit cell becomes increasingly refined (requiring a larger number of sub-volumes), the reformulated implementation becomes significantly (approximately an order of magnitude at best) more computationally efficient in both the continuous reinforcement (doubly-periodic) and discontinuous reinforcement (triply-periodic) cases.

  5. Evaluation of Microwave Steam Bags for the Decontamination of Filtering Facepiece Respirators

    PubMed Central

    Fisher, Edward M.; Williams, Jessica L.; Shaffer, Ronald E.

    2011-01-01

    Reusing filtering facepiece respirators (FFRs) has been suggested as a strategy to conserve available supplies for home and healthcare environments during an influenza pandemic. For reuse to be possible, used FFRs must be decontaminated before redonning to reduce the risk of virus transmission; however, there are no approved methods for FFR decontamination. An effective method must reduce the microbial threat, maintain the function of the FFR, and present no residual chemical hazard. The method should be readily available, inexpensive and easily implemented by healthcare workers and the general public. Many of the general decontamination protocols used in healthcare and home settings are unable to address all of the desired qualities of an efficient FFR decontamination protocol. The goal of this study is to evaluate the use of two commercially available steam bags, marketed to the public for disinfecting infant feeding equipment, for FFR decontamination. The FFRs were decontaminated with microwave generated steam following the manufacturers' instructions then evaluated for water absorption and filtration efficiency for up to three steam exposures. Water absorption of the FFR was found to be model specific as FFRs constructed with hydrophilic materials absorbed more water. The steam had little effect on FFR performance as filtration efficiency of the treated FFRs remained above 95%. The decontamination efficacy of the steam bag was assessed using bacteriophage MS2 as a surrogate for a pathogenic virus. The tested steam bags were found to be 99.9% effective for inactivating MS2 on FFRs; however, more research is required to determine the effectiveness against respiratory pathogens. PMID:21525995

  6. Efficient modeling of interconnects and capacitive discontinuities in high-speed digital circuits. Thesis

    NASA Technical Reports Server (NTRS)

    Oh, K. S.; Schutt-Aine, J.

    1995-01-01

    With the recent advances in high-speed digital circuits, the modeling of interconnects and associated discontinuities has gained considerable interest over the last decade, although the theoretical bases for analyzing these structures were well established as early as the 1960s. Ongoing research at the present time is focused on devising methods which can be applied to more general geometries than the ones considered in earlier days and, at the same time, improving the computational efficiency and accuracy of these methods. In this thesis, numerically efficient methods to compute the transmission line parameters of a multiconductor system and the equivalent capacitances of various strip discontinuities are presented based on the quasi-static approximation. The presented techniques are applicable to conductors embedded in an arbitrary number of dielectric layers with two possible locations of ground planes at the top and bottom of the dielectric layers. The cross-sections of conductors can be arbitrary as long as they can be described with polygons. An integral equation approach in conjunction with the collocation method is used in the presented methods. A closed-form Green's function is derived based on weighted real images, thus avoiding the nested infinite summations of the exact Green's function; this closed-form Green's function is therefore numerically more efficient than the exact Green's function. All elements associated with the moment matrix are computed using the closed-form formulas. Various numerical examples are considered to verify the presented methods, and a comparison of the computed results with other published results showed good agreement.

  7. PLUM: Parallel Load Balancing for Unstructured Adaptive Meshes. Degree awarded by Colorado Univ.

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid

    1998-01-01

    Dynamic mesh adaption on unstructured grids is a powerful tool for computing large-scale problems that require grid modifications to efficiently resolve solution features. By locally refining and coarsening the mesh to capture physical phenomena of interest, such procedures make standard computational methods more cost effective. Unfortunately, an efficient parallel implementation of these adaptive methods is rather difficult to achieve, primarily due to the load imbalance created by the dynamically-changing nonuniform grid. This requires significant communication at runtime, leading to idle processors and adversely affecting the total execution time. Nonetheless, it is generally thought that unstructured adaptive-grid techniques will constitute a significant fraction of future high-performance supercomputing. Various dynamic load balancing methods have been reported to date; however, most of them either lack a global view of loads across processors or do not apply their techniques to realistic large-scale applications.

  8. An asymptotic-preserving stochastic Galerkin method for the radiative heat transfer equations with random inputs and diffusive scalings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Shi, E-mail: sjin@wisc.edu; Institute of Natural Sciences, Department of Mathematics, MOE-LSEC and SHL-MAC, Shanghai Jiao Tong University, Shanghai 200240; Lu, Hanqing, E-mail: hanqing@math.wisc.edu

    2017-04-01

    In this paper, we develop an Asymptotic-Preserving (AP) stochastic Galerkin scheme for the radiative heat transfer equations with random inputs and diffusive scalings. In this problem the random inputs arise due to uncertainties in cross section, initial data or boundary data. We use the generalized polynomial chaos based stochastic Galerkin (gPC-SG) method, which is combined with the micro–macro decomposition based deterministic AP framework in order to handle the diffusive regime efficiently. For the linearized problem we prove the regularity of the solution in the random space and, consequently, the spectral accuracy of the gPC-SG method. We also prove the uniform (in the mean free path) linear stability for the space-time discretizations. Several numerical tests are presented to show the efficiency and accuracy of the proposed scheme, especially in the diffusive regime.
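
    As a minimal illustration of the gPC-SG step described above (the notation below is assumed here, not taken from the paper), the solution u of the stochastic system is expanded in polynomials \Phi_k(z) orthonormal with respect to the distribution \pi(z) of the random input z,

        u(t, x, z) \approx \sum_{k=1}^{K} u_k(t, x)\, \Phi_k(z),
        \qquad \int \Phi_j(z)\, \Phi_k(z)\, \pi(z)\, dz = \delta_{jk},

    and Galerkin projection onto each \Phi_j turns the stochastic equation into a coupled deterministic system for the coefficients u_k, to which the micro-macro AP discretization is then applied.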

  9. Non-tables look-up search algorithm for efficient H.264/AVC context-based adaptive variable length coding decoding

    NASA Astrophysics Data System (ADS)

    Han, Yishi; Luo, Zhixiao; Wang, Jianhua; Min, Zhixuan; Qin, Xinyu; Sun, Yunlong

    2014-09-01

    In general, context-based adaptive variable length coding (CAVLC) decoding in the H.264/AVC standard requires frequent access to unstructured variable length coding tables (VLCTs), which consumes a significant number of memory accesses. Heavy memory access causes high power consumption and time delays, which are serious problems for applications in portable multimedia devices. We propose a method for high-efficiency CAVLC decoding by using a program instead of all the VLCTs. The decoded codeword can be obtained without any table look-up or memory access. The experimental results show that the proposed algorithm achieves 100% memory access saving and 40% decoding time saving without degrading video quality. Additionally, the proposed algorithm shows better performance than conventional CAVLC decoding approaches, such as table look-up by sequential search, table look-up by binary search, Moon's method, and Kim's method.

  10. Eigenmodes of Multilayer Slit Structures

    NASA Astrophysics Data System (ADS)

    Kovalenko, A. N.

    2017-12-01

    We generalize the high-efficiency numerical-analytical method of calculating the eigenmodes of a microstrip line, which was proposed in [1], to multilayer slit structures. The obtained relationships make it possible to allow for the multilayer nature of the medium on the basis of solving the electrodynamic problem for a two-layer structure. The algebraic models of a single line and coupled slit lines in a multilayer dielectric medium are constructed. The matrix elements of the system of linear algebraic equations, which is used to determine the expansion coefficients of the electric field inside the slits in a Chebyshev basis, are converted to rapidly convergent series. The constructed models allow one to use computer simulation to obtain numerical results with high speed and accuracy, regardless of the number of dielectric layers. The presented results of a numerical study of the method convergence confirm high efficiency of the method.

  11. Community-based recruitment strategies for a longitudinal interventional study: the VECAT experience.

    PubMed

    Garrett, S K; Thomas, A P; Cicuttini, F; Silagy, C; Taylor, H R; McNeil, J J

    2000-05-01

    This article examines different recruitment strategies for the VECAT Study, a 4-year, double-masked, placebo-controlled, randomized clinical trial of vitamin E in the prevention of cataract and age-related maculopathy. Five recruitment methods were employed: newspaper advertising, radio advertising, approaches to community groups, approaches via general practices, and an electoral roll mail-out. Participants (1204) from the community in Melbourne, Australia, were recruited and enrolled within 15 months (age range: 55-80 years, mean 66 years; gender ratio: 57% female, 43% male). The electoral roll mail-out and newspaper advertising were the most efficient methods of recruitment in terms of absolute numbers of participants recruited and cost per participant. Recruitment for the VECAT study was successfully completed within the planned period. Although the electoral roll mail-out and newspaper advertising were the most efficient for this study, other methods may be of value for studies with different subject selection criteria.

  12. Generalized characteristic ratios assignment for commensurate fractional order systems with one zero.

    PubMed

    Tabatabaei, Mohammad

    2017-07-01

    In this paper, a new method for determination of the desired characteristic equation and zero location of commensurate fractional order systems is presented. The concept of the characteristic ratio is extended to zero-including commensurate fractional order systems. The generalized version of characteristic ratios is defined such that the time-scaling property of characteristic ratios is preserved. The monotonicity of the magnitude frequency response is employed to assign the generalized characteristic ratios for commensurate fractional order transfer functions with one zero. A simple pattern for characteristic ratios is proposed to reach a non-overshooting step response. The proposed pattern is then revisited to reach a low-overshoot (for example, 2%) step response. Finally, zero-including controllers such as fractional order PI or lag (lead) controllers are designed using the generalized characteristic ratios assignment method. Numerical simulations are provided to show the efficiency of the designed controllers. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
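
    For background, a hedged sketch of the integer-order definitions being generalized (standard in the characteristic-ratio literature, not quoted from the paper): for a characteristic polynomial a_n s^n + ... + a_1 s + a_0,

        \alpha_i = \frac{a_i^2}{a_{i-1}\, a_{i+1}}, \quad i = 1, \dots, n-1,
        \qquad \tau = \frac{a_1}{a_0},

    where the characteristic ratios \alpha_i shape the overshoot of the step response and the generalized time constant \tau sets its speed; the paper extends these quantities to commensurate fractional orders and to transfer functions with one zero.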

  13. Kernel-based least squares policy iteration for reinforcement learning.

    PubMed

    Xu, Xin; Hu, Dewen; Lu, Xicheng

    2007-07-01

    In this paper, we present a kernel-based least squares policy iteration (KLSPI) algorithm for reinforcement learning (RL) in large or continuous state spaces, which can be used to realize adaptive feedback control of uncertain dynamic systems. By using KLSPI, near-optimal control policies can be obtained without much a priori knowledge on dynamic models of control plants. In KLSPI, Mercer kernels are used in the policy evaluation of a policy iteration process, where a new kernel-based least squares temporal-difference algorithm called KLSTD-Q is proposed for efficient policy evaluation. To keep the sparsity and improve the generalization ability of KLSTD-Q solutions, a kernel sparsification procedure based on approximate linear dependency (ALD) is performed. Compared to previous work on approximate RL methods, KLSPI makes two advances that eliminate the main difficulties of existing results. One is the better convergence and (near) optimality guarantee by using the KLSTD-Q algorithm for policy evaluation with high precision. The other is the automatic feature selection using the ALD-based kernel sparsification. Therefore, the KLSPI algorithm provides a general RL method with generalization performance and convergence guarantee for large-scale Markov decision problems (MDPs). Experimental results on a typical RL task for a stochastic chain problem demonstrate that KLSPI can consistently achieve better learning efficiency and policy quality than the previous least squares policy iteration (LSPI) algorithm. Furthermore, the KLSPI method was also evaluated on two nonlinear feedback control problems, including a ship heading control problem and the swing-up control of a double-link underactuated pendulum called the acrobot. Simulation results illustrate that the proposed method can optimize controller performance using little a priori information of uncertain dynamic systems. It is also demonstrated that KLSPI can be applied to online learning control by incorporating an initial controller to ensure online performance.
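
    A minimal sketch of the ALD-based sparsification step described above, assuming a Gaussian Mercer kernel and an illustrative novelty threshold nu (names and defaults are not from the paper):

        import numpy as np

        def rbf(x, y, sigma=1.0):
            # Gaussian (Mercer) kernel used to build the feature dictionary
            return np.exp(-np.linalg.norm(np.asarray(x) - np.asarray(y)) ** 2
                          / (2 * sigma ** 2))

        def ald_dictionary(samples, nu=1e-3, sigma=1.0):
            # Keep a sample only if it is not approximately linearly
            # dependent (in feature space) on the current dictionary.
            dictionary = [samples[0]]
            for x in samples[1:]:
                K = np.array([[rbf(a, b, sigma) for b in dictionary]
                              for a in dictionary])
                k_vec = np.array([rbf(a, x, sigma) for a in dictionary])
                c = np.linalg.solve(K + 1e-10 * np.eye(len(dictionary)), k_vec)
                delta = rbf(x, x, sigma) - k_vec @ c  # ALD residual
                if delta > nu:
                    dictionary.append(x)
            return dictionary

    The retained dictionary defines the sparse kernel features on which the KLSTD-Q linear system is then solved.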

  14. Enhanced conformational sampling using replica exchange with concurrent solute scaling and hamiltonian biasing realized in one dimension.

    PubMed

    Yang, Mingjun; Huang, Jing; MacKerell, Alexander D

    2015-06-09

    Replica exchange (REX) is a powerful computational tool for overcoming the quasi-ergodic sampling problem of complex molecular systems. Recently, several multidimensional extensions of this method have been developed to realize exchanges in both temperature and biasing potential space or the use of multiple biasing potentials to improve sampling efficiency. However, increased computational cost due to the multidimensionality of exchanges becomes challenging for use on complex systems under explicit solvent conditions. In this study, we develop a one-dimensional (1D) REX algorithm to concurrently combine the advantages of overall enhanced sampling from Hamiltonian solute scaling and the specific enhancement of collective variables using Hamiltonian biasing potentials. In the present Hamiltonian replica exchange method, termed HREST-BP, Hamiltonian solute scaling is applied to the solute subsystem, and its interactions with the environment to enhance overall conformational transitions and biasing potentials are added along selected collective variables associated with specific conformational transitions, thereby balancing the sampling of different hierarchical degrees of freedom. The two enhanced sampling approaches are implemented concurrently allowing for the use of a small number of replicas (e.g., 6 to 8) in 1D, thus greatly reducing the computational cost in complex system simulations. The present method is applied to conformational sampling of two nitrogen-linked glycans (N-glycans) found on the HIV gp120 envelope protein. Considering the general importance of the conformational sampling problem, HREST-BP represents an efficient procedure for the study of complex saccharides, and, more generally, the method is anticipated to be of general utility for the conformational sampling in a wide range of macromolecular systems.
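
    For orientation, exchanges in such Hamiltonian replica exchange schemes are typically accepted with the standard Metropolis criterion (generic form, not the paper's specific HREST-BP expressions): a swap of configurations X_i and X_j between replicas with potentials U_i and U_j at inverse temperature \beta is accepted with probability

        P_{\mathrm{acc}} = \min\{ 1,\; \exp(-\beta [\, U_i(X_j) + U_j(X_i) - U_i(X_i) - U_j(X_j)\, ]) \}.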

  15. Estimating ice particle scattering properties using a modified Rayleigh-Gans approximation

    NASA Astrophysics Data System (ADS)

    Lu, Yinghui; Clothiaux, Eugene E.; Aydin, Kültegin; Verlinde, Johannes

    2014-09-01

    A modification to the Rayleigh-Gans approximation is made that includes self-interactions between different parts of an ice crystal, which both improves the accuracy of the Rayleigh-Gans approximation and extends its applicability to polarization-dependent parameters. This modified Rayleigh-Gans approximation is both efficient and reasonably accurate for particles with at least one dimension much smaller than the wavelength (e.g., dendrites at millimeter or longer wavelengths) or particles with sparse structures (e.g., low-density aggregates). Relative to the Generalized Multiparticle Mie method, backscattering reflectivities at horizontal transmit and receive polarization (HH) (ZHH) computed with this modified Rayleigh-Gans approach are about 3 dB more accurate than with the traditional Rayleigh-Gans approximation. For realistic particle size distributions and pristine ice crystals the modified Rayleigh-Gans approach agrees with the Generalized Multiparticle Mie method to within 0.5 dB for ZHH whereas for the polarimetric radar observables differential reflectivity (ZDR) and specific differential phase (KDP) agreement is generally within 0.7 dB and 13%, respectively. Compared to the A-DDA code, the modified Rayleigh-Gans approximation is several to tens of times faster if scattering properties for different incident angles and particle orientations are calculated. These accuracies and computational efficiencies are sufficient to make this modified Rayleigh-Gans approach a viable alternative to the Rayleigh-Gans approximation in some applications such as millimeter to centimeter wavelength radars and to other methods that assume simpler, less accurate shapes for ice crystals. This method should not be used on materials with dielectric properties much different from ice and on compact particles much larger than the wavelength.

  16. A multifractal detrended fluctuation analysis of financial market efficiency: Comparison using Dow Jones sector ETF indices

    NASA Astrophysics Data System (ADS)

    Tiwari, Aviral Kumar; Albulescu, Claudiu Tiberiu; Yoon, Seong-Min

    2017-10-01

    This study challenges the efficient market hypothesis, relying on the Dow Jones sector Exchange-Traded Fund (ETF) indices. For this purpose, we use the generalized Hurst exponent and multifractal detrended fluctuation analysis (MF-DFA) methods, using daily data over the timespan from 2000 to 2015. We compare the sector ETF indices in terms of market efficiency between short- and long-run horizons, small and large fluctuations, and before and after the global financial crisis (GFC). Our findings can be summarized as follows. First, there is clear evidence that the sector ETF markets are multifractal in nature. We also find a crossover in the multifractality of sector ETF market dynamics. Second, the utilities and consumer goods sector ETF markets are more efficient compared with the financial and telecommunications sector ETF markets, in terms of price prediction. Third, there are noteworthy discrepancies in terms of market efficiency, between the short- and long-term horizons. Fourth, the ETF market efficiency is considerably diminished after the global financial crisis.
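
    A minimal MF-DFA sketch under the usual definitions (profile, local polynomial detrending, q-th order fluctuation function); parameter names are illustrative and the q = 0 special case is omitted:

        import numpy as np

        def mfdfa_hurst(x, scales, q=2.0, order=1):
            # Generalized Hurst exponent h(q) from the scaling F_q(s) ~ s^h(q).
            y = np.cumsum(x - np.mean(x))  # profile of the series
            fq = []
            for s in scales:
                n_seg = len(y) // s
                sq_res = []
                for v in range(n_seg):
                    seg = y[v * s:(v + 1) * s]
                    t = np.arange(s)
                    coeffs = np.polyfit(t, seg, order)  # local detrending
                    sq_res.append(np.mean((seg - np.polyval(coeffs, t)) ** 2))
                fq.append(np.mean(np.asarray(sq_res) ** (q / 2.0)) ** (1.0 / q))
            # slope of log F_q(s) against log s estimates h(q)
            return np.polyfit(np.log(scales), np.log(fq), 1)[0]

    Series whose h(2) stays near 0.5 across scales behave closer to the efficient (random walk) benchmark, while a strong q-dependence of h(q) signals multifractality.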

  17. Improving membrane protein expression by optimizing integration efficiency

    PubMed Central

    2017-01-01

    The heterologous overexpression of integral membrane proteins in Escherichia coli often yields insufficient quantities of purifiable protein for applications of interest. The current study leverages a recently demonstrated link between co-translational membrane integration efficiency and protein expression levels to predict protein sequence modifications that improve expression. Membrane integration efficiencies, obtained using a coarse-grained simulation approach, robustly predicted effects on expression of the integral membrane protein TatC for a set of 140 sequence modifications, including loop-swap chimeras and single-residue mutations distributed throughout the protein sequence. Mutations that improve simulated integration efficiency were 4-fold enriched with respect to improved experimentally observed expression levels. Furthermore, the effects of double mutations on both simulated integration efficiency and experimentally observed expression levels were cumulative and largely independent, suggesting that multiple mutations can be introduced to yield higher levels of purifiable protein. This work provides a foundation for a general method for the rational overexpression of integral membrane proteins based on computationally simulated membrane integration efficiencies. PMID:28918393

  18. CCOMP: An efficient algorithm for complex roots computation of determinantal equations

    NASA Astrophysics Data System (ADS)

    Zouros, Grigorios P.

    2018-01-01

    In this paper a free Python algorithm, entitled CCOMP (Complex roots COMPutation), is developed for the efficient computation of complex roots of determinantal equations inside a prescribed complex domain. The key to the method presented is the efficient determination of the candidate points inside the domain in whose close neighborhood a complex root may lie. Once these points are detected, the algorithm proceeds to a two-dimensional minimization problem with respect to the minimum modulus eigenvalue of the system matrix. In the core of CCOMP exist three sub-algorithms whose tasks are the efficient estimation of the minimum modulus eigenvalues of the system matrix inside the prescribed domain, the efficient computation of candidate points which guarantee the existence of minima, and finally, the computation of minima via bound constrained minimization algorithms. Theoretical results and heuristics support the development and the performance of the algorithm, which is discussed in detail. CCOMP supports general complex matrices, and its efficiency, applicability, and validity are demonstrated on a variety of microwave applications.
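
    A hedged sketch of the overall strategy (a grid scan of the minimum-modulus eigenvalue followed by local refinement), not of CCOMP's actual sub-algorithms; matrix_fn, the decile cutoff, and the tolerance are illustrative choices:

        import numpy as np
        from scipy.optimize import minimize

        def min_modulus_eig(matrix_fn, z):
            # smallest |eigenvalue| of the system matrix at complex point z
            return np.abs(np.linalg.eigvals(matrix_fn(z))).min()

        def find_roots(matrix_fn, re_grid, im_grid, tol=1e-8):
            pts = [(x, y) for x in re_grid for y in im_grid]
            vals = [min_modulus_eig(matrix_fn, x + 1j * y) for x, y in pts]
            cut = np.percentile(vals, 10)  # lowest decile = candidate points
            roots = []
            for (x, y), v in zip(pts, vals):
                if v > cut:
                    continue
                res = minimize(lambda p: min_modulus_eig(matrix_fn, p[0] + 1j * p[1]),
                               x0=[x, y], method="Nelder-Mead")
                if res.fun < tol:
                    roots.append(complex(res.x[0], res.x[1]))
            return roots  # nearby duplicates should be merged in practice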

  19. A multiple maximum scatter difference discriminant criterion for facial feature extraction.

    PubMed

    Song, Fengxi; Zhang, David; Mei, Dayong; Guo, Zhongwei

    2007-12-01

    The maximum scatter difference (MSD) discriminant criterion is a recently presented binary discriminant criterion for pattern classification that utilizes the generalized scatter difference rather than the generalized Rayleigh quotient as a class separability measure, thereby avoiding the singularity problem when addressing small-sample-size problems. MSD classifiers based on this criterion have been quite effective on face-recognition tasks, but as they are binary classifiers, they are not as efficient on large-scale classification tasks. To address the problem, this paper generalizes the classification-oriented binary criterion to its multiple counterpart: the multiple MSD (MMSD) discriminant criterion for facial feature extraction. The MMSD feature-extraction method, which is based on this novel discriminant criterion, is a new subspace-based feature-extraction method. Unlike most other subspace-based feature-extraction methods, the MMSD computes its discriminant vectors from both the range of the between-class scatter matrix and the null space of the within-class scatter matrix. The MMSD is theoretically elegant and easy to calculate. Extensive experimental studies conducted on the benchmark database, FERET, show that the MMSD outperforms state-of-the-art facial feature-extraction methods such as the null space method, direct linear discriminant analysis (LDA), eigenface, Fisherface, and complete LDA.
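
    In the notation assumed here, the binary MSD criterion that MMSD generalizes selects projections w maximizing the scatter difference

        J(w) = w^{\mathsf{T}} S_b\, w - C\, w^{\mathsf{T}} S_w\, w, \qquad \|w\| = 1,

    with S_b and S_w the between- and within-class scatter matrices and C a balance constant; the maximizers are leading eigenvectors of S_b - C S_w, so no inversion of a possibly singular S_w is required.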

  20. Weierstrass method for quaternionic polynomial root-finding

    NASA Astrophysics Data System (ADS)

    Falcão, M. Irene; Miranda, Fernando; Severino, Ricardo; Soares, M. Joana

    2018-01-01

    Quaternions, introduced by Hamilton in 1843 as a generalization of complex numbers, have found, in more recent years, a wealth of applications in a number of different areas which motivated the design of efficient methods for numerically approximating the zeros of quaternionic polynomials. In fact, one can find in the literature recent contributions to this subject based on the use of complex techniques, but numerical methods relying on quaternion arithmetic remain scarce. In this paper we propose a Weierstrass-like method for finding simultaneously all the zeros of unilateral quaternionic polynomials. The convergence analysis and several numerical examples illustrating the performance of the method are also presented.
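
    For intuition, a complex-coefficient Weierstrass (Durand-Kerner) iteration is sketched below; the paper's contribution is the quaternionic version, which this complex illustration does not attempt:

        import numpy as np

        def weierstrass_roots(coeffs, iters=200, tol=1e-12):
            # Simultaneous iteration toward all roots of a polynomial
            # given by coefficients in decreasing degree order.
            c = np.asarray(coeffs, dtype=complex)
            c = c / c[0]  # normalize to a monic polynomial
            p = np.poly1d(c)
            n = len(c) - 1
            roots = np.array([(0.4 + 0.9j) ** k for k in range(1, n + 1)])
            for _ in range(iters):
                new = roots.copy()
                for k in range(n):
                    denom = np.prod([new[k] - new[j] for j in range(n) if j != k])
                    new[k] = new[k] - p(new[k]) / denom
                if np.max(np.abs(new - roots)) < tol:
                    return new
                roots = new
            return roots

        # usage: roots of z^3 - 6z^2 + 11z - 6 = (z - 1)(z - 2)(z - 3)
        print(np.sort_complex(weierstrass_roots([1, -6, 11, -6])))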

  1. Methods for genetic transformation of filamentous fungi.

    PubMed

    Li, Dandan; Tang, Yu; Lin, Jun; Cai, Weiwen

    2017-10-03

    Filamentous fungi have been of great interest because of their excellent ability as cell factories to manufacture useful products for human beings. The development of genetic transformation techniques is a precondition that enables scientists to target and modify genes efficiently and may reveal the function of target genes. The method used to deliver foreign nucleic acid into cells is the sticking point for fungal genome modification. To date, there are several general methods of genetic transformation for fungi, including protoplast-mediated transformation, Agrobacterium-mediated transformation, electroporation, the biolistic method, and shock-wave-mediated transformation. This article reviews the basic protocols and principles of these transformation methods, as well as their advantages and disadvantages.

  2. The constraint method: A new finite element technique. [applied to static and dynamic loads on plates

    NASA Technical Reports Server (NTRS)

    Tsai, C.; Szabo, B. A.

    1973-01-01

    An approach to the finite element method which utilizes families of conforming finite elements based on complete polynomials is presented. Finite element approximations based on this method converge with respect to progressively reduced element sizes as well as with respect to progressively increasing orders of approximation. Numerical results of static and dynamic applications of plates are presented to demonstrate the efficiency of the method. Comparisons are made with plate elements in NASTRAN and the high-precision plate element developed by Cowper and his co-workers. Some consideration is given to the implementation of the constraint method in general purpose computer programs such as NASTRAN.

  3. Semiempirical Quantum Mechanical Methods for Noncovalent Interactions for Chemical and Biochemical Applications

    PubMed Central

    2016-01-01

    Semiempirical (SE) methods can be derived from either Hartree–Fock or density functional theory by applying systematic approximations, leading to efficient computational schemes that are several orders of magnitude faster than ab initio calculations. Such numerical efficiency, in combination with modern computational facilities and linear scaling algorithms, allows application of SE methods to very large molecular systems with extensive conformational sampling. To reliably model the structure, dynamics, and reactivity of biological and other soft matter systems, however, good accuracy for the description of noncovalent interactions is required. In this review, we analyze popular SE approaches in terms of their ability to model noncovalent interactions, especially in the context of describing biomolecules, water solution, and organic materials. We discuss the most significant errors and proposed correction schemes, and we review their performance using standard test sets of molecular systems for quantum chemical methods and several recent applications. The general goal is to highlight both the value and limitations of SE methods and stimulate further developments that allow them to effectively complement ab initio methods in the analysis of complex molecular systems. PMID:27074247

  4. Anderson acceleration of the Jacobi iterative method: An efficient alternative to Krylov methods for large, sparse linear systems

    DOE PAGES

    Pratapa, Phanisri P.; Suryanarayana, Phanish; Pask, John E.

    2015-12-01

    We employ Anderson extrapolation to accelerate the classical Jacobi iterative method for large, sparse linear systems. Specifically, we utilize extrapolation at periodic intervals within the Jacobi iteration to develop the Alternating Anderson–Jacobi (AAJ) method. We verify the accuracy and efficacy of AAJ in a range of test cases, including nonsymmetric systems of equations. We demonstrate that AAJ possesses a favorable scaling with system size that is accompanied by a small prefactor, even in the absence of a preconditioner. In particular, we show that AAJ is able to accelerate the classical Jacobi iteration by over four orders of magnitude, with speed-ups that increase as the system gets larger. Moreover, we find that AAJ significantly outperforms the Generalized Minimal Residual (GMRES) method in the range of problems considered here, with the relative performance again improving with size of the system. As a result, the proposed method represents a simple yet efficient technique that is particularly attractive for large-scale parallel solutions of linear systems of equations.
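
    A minimal dense-matrix sketch of the AAJ idea (weighted Jacobi sweeps with Anderson extrapolation every p-th iteration); omega, m, and p are illustrative defaults, not the paper's tuned values:

        import numpy as np

        def aaj_solve(A, b, omega=0.5, m=5, p=6, tol=1e-10, max_iter=100000):
            D_inv = 1.0 / np.diag(A)
            x = np.zeros_like(b, dtype=float)
            dx_hist, df_hist = [], []  # iterate and residual differences
            x_prev = f_prev = None
            for k in range(max_iter):
                f = D_inv * (b - A @ x)  # Jacobi-preconditioned residual
                if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
                    return x
                if f_prev is not None:
                    dx_hist.append(x - x_prev)
                    df_hist.append(f - f_prev)
                    dx_hist, df_hist = dx_hist[-m:], df_hist[-m:]
                x_prev, f_prev = x.copy(), f.copy()
                if k % p == p - 1 and df_hist:
                    # Anderson extrapolation over the stored history
                    F = np.column_stack(df_hist)
                    gamma, *_ = np.linalg.lstsq(F, f, rcond=None)
                    x = x + omega * f - (np.column_stack(dx_hist) + omega * F) @ gamma
                else:
                    x = x + omega * f  # weighted Jacobi sweep
            return x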

  5. Anderson acceleration of the Jacobi iterative method: An efficient alternative to Krylov methods for large, sparse linear systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pratapa, Phanisri P.; Suryanarayana, Phanish; Pask, John E.

    We employ Anderson extrapolation to accelerate the classical Jacobi iterative method for large, sparse linear systems. Specifically, we utilize extrapolation at periodic intervals within the Jacobi iteration to develop the Alternating Anderson–Jacobi (AAJ) method. We verify the accuracy and efficacy of AAJ in a range of test cases, including nonsymmetric systems of equations. We demonstrate that AAJ possesses a favorable scaling with system size that is accompanied by a small prefactor, even in the absence of a preconditioner. In particular, we show that AAJ is able to accelerate the classical Jacobi iteration by over four orders of magnitude, with speed-ups that increase as the system gets larger. Moreover, we find that AAJ significantly outperforms the Generalized Minimal Residual (GMRES) method in the range of problems considered here, with the relative performance again improving with size of the system. As a result, the proposed method represents a simple yet efficient technique that is particularly attractive for large-scale parallel solutions of linear systems of equations.

  6. The Ethics behind Efficiency

    ERIC Educational Resources Information Center

    Wight, Jonathan B.

    2017-01-01

    The normative elements underlying efficiency are more complex than generally portrayed and rely upon ethical frameworks that are generally absent from classroom discussions. Most textbooks, for example, ignore the ethical differences between Pareto efficiency (based on voluntary win-win outcomes) and the modern Kaldor-Hicks efficiency used in…

  7. Merging K-means with hierarchical clustering for identifying general-shaped groups.

    PubMed

    Peterson, Anna D; Ghosh, Arka P; Maitra, Ranjan

    2018-01-01

    Clustering partitions a dataset such that observations placed together in a group are similar but different from those in other groups. Hierarchical and K-means clustering are two approaches but have different strengths and weaknesses. For instance, hierarchical clustering identifies groups in a tree-like structure but suffers from computational complexity in large datasets while K-means clustering is efficient but designed to identify homogeneous spherically-shaped clusters. We present a hybrid non-parametric clustering approach that amalgamates the two methods to identify general-shaped clusters and that can be applied to larger datasets. Specifically, we first partition the dataset into spherical groups using K-means. We next merge these groups using hierarchical methods with a data-driven distance measure as a stopping criterion. Our proposal has the potential to reveal groups with general shapes and structure in a dataset. We demonstrate good performance on several simulated and real datasets.
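
    A rough scikit-learn sketch of the two-stage idea, with a fixed number of final groups standing in for the paper's data-driven stopping criterion:

        import numpy as np
        from sklearn.cluster import KMeans, AgglomerativeClustering

        def kmeans_then_merge(X, n_spherical=50, n_final=5):
            # Stage 1: over-segment the data into small spherical groups.
            km = KMeans(n_clusters=n_spherical, n_init=10).fit(X)
            # Stage 2: merge the group centers hierarchically; single
            # linkage lets chains of spherical groups form general shapes.
            merge = AgglomerativeClustering(n_clusters=n_final, linkage="single")
            group_of_center = merge.fit_predict(km.cluster_centers_)
            return group_of_center[km.labels_]  # final label for each point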

  8. High Fidelity CFD Analysis and Validation of Rotorcraft Gearbox Aerodynamics Under Operational and Oil-Out Conditions

    NASA Technical Reports Server (NTRS)

    Kunz, Robert F.

    2014-01-01

    This document represents the evolving formal documentation of the NPHASE-PSU computer code. Version 3.15 is being delivered along with the software to NASA in 2013. Significant upgrades to NPHASE-PSU have been made since the first delivery of draft documentation to DARPA and USNRC in 2006. These include a much lighter, faster, and more memory-efficient face-based front end; support for arbitrary polyhedra in the front end, flow solver, and back end; a generalized homogeneous multiphase capability; and several two-fluid modelling and algorithmic elements. Specific capabilities installed for the NASA Gearbox Windage Aerodynamics NRA are included in this version: the Hybrid Immersed Overset Boundary Method (HOIBM) [Noack et al. (2009)]; periodic boundary conditions for multiple frames of reference; a fully generalized immersed boundary method; fully generalized conjugate heat transfer; droplet deposition, bouncing, and splashing models; and film transport and breakup.

  9. Non-Markovian generalization of the Lindblad theory of open quantum systems

    NASA Astrophysics Data System (ADS)

    Breuer, Heinz-Peter

    2007-02-01

    A systematic approach to the non-Markovian quantum dynamics of open systems is given by the projection operator techniques of nonequilibrium statistical mechanics. Combining these methods with concepts from quantum information theory and from the theory of positive maps, we derive a class of correlated projection superoperators that take into account in an efficient way statistical correlations between the open system and its environment. The result is used to develop a generalization of the Lindblad theory to the regime of highly non-Markovian quantum processes in structured environments.
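
    For reference, the standard (Markovian) Lindblad generator that the correlated-projection technique generalizes is, in conventional notation,

        \frac{d\rho}{dt} = -i\, [H, \rho] + \sum_k \gamma_k \left( L_k \rho L_k^{\dagger} - \tfrac{1}{2} \{ L_k^{\dagger} L_k, \rho \} \right),

    with Hamiltonian H, jump operators L_k, and rates \gamma_k \ge 0; the non-Markovian generalization replaces the single projection onto the reduced system state by a family of correlated projection superoperators.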

  10. Williams Element with Generalized Degrees of Freedom for Fracture Analysis of Multiple-Cracked Beam

    NASA Astrophysics Data System (ADS)

    Xu, Hua; Wei, Quyang; Yang, Lufeng

    2017-10-01

    In this paper, the method of finite elements with generalized degrees of freedom (FEDOFs) is used to calculate the stress intensity factor (SIF) of a multiple-cracked beam and to analyse the effect of minor cracks on the SIF of the main crack in different cases. The Williams element is insensitive to the size of the singular region, so that calculation efficiency is greatly improved. Example analyses validate that the SIF near the crack tip can be obtained directly through FEDOFs, and the results are consistent with ANSYS solutions with satisfactory accuracy.

  11. Generalized recursion relations for correlators in the gauge-gravity correspondence.

    PubMed

    Raju, Suvrat

    2011-03-04

    We show that a generalization of the Britto-Cachazo-Feng-Witten recursion relations gives a new and efficient method of computing correlation functions of the stress tensor or conserved currents in conformal field theories with a (d+1)-dimensional anti-de Sitter space dual, for d≥4, in the limit where the bulk theory is approximated by tree-level Yang-Mills theory or gravity. In supersymmetric theories, additional correlators of operators that live in the same multiplet as a conserved current or stress tensor can be computed by these means.

  12. A synchronized computational architecture for generalized bilateral control of robot arms

    NASA Technical Reports Server (NTRS)

    Bejczy, Antal K.; Szakaly, Zoltan

    1987-01-01

    This paper describes a computational architecture for an interconnected high speed distributed computing system for generalized bilateral control of robot arms. The key method of the architecture is the use of fully synchronized, interrupt driven software. Since an objective of the development is to utilize the processing resources efficiently, the synchronization is done at the hardware level to reduce system software overhead. The architecture also achieves a balanced load on the communication channel. The paper also describes some architectural relations to trading or sharing manual and automatic control.

  13. EVALUATING THE EFFICIENCY AND EFFECTIVENESS OF SELF-INSTRUCTIONAL METHODS FOR SELECTED AREAS OF VOCATIONAL EDUCATION. FINAL REPORT.

    ERIC Educational Resources Information Center

    COFFEY, JOHN L.; AND OTHERS

    The two major phases of this research were (1) analyzing trade and industrial education to identify and describe primary vocational skills, and (2) developing and evaluating nine self-instructional units. Three instruments were used in analyzing vocational content sources to identify and describe general behaviors as well as trade-specific…

  14. Sanitation in School Housekeeping, A Training Course for School Custodians.

    ERIC Educational Resources Information Center

    Florida State Dept. of Education, Tallahassee. School Plant Management Section.

    The most efficient and modern methods for cleaning and sanitizing school facilities are presented for the benefit of school custodians. Careful attention to the total school environment can be supportive of the general education program and at the same time make a sound contribution to health and health education. Topics discussed include--(1)…

  15. Novel Methods for Electromagnetic Simulation and Design

    DTIC Science & Technology

    2016-08-03

    The resulting discretized integral equations are compatible with fast multipole-accelerated solvers and will form the basis for high fidelity…expansion”) which are high-order, efficient, and easy to use on arbitrarily triangulated surfaces. …created a user interface compatible with both low- and high-order discretizations, and implemented the generalized Debye approach of [4].

  16. Algorithm Diversity for Resilent Systems

    DTIC Science & Technology

    2016-06-27

    …data structures. Subject terms: computer security, software diversity, program transformation. …systematic method for transforming Datalog rules with general universal and existential quantification into efficient algorithms with precise complexity…worst case in the size of the ground rules. There are numerous choices during the transformation that lead to diverse algorithms and different…

  17. Fisher classifier and its probability of error estimation

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B.

    1979-01-01

    Computationally efficient expressions are derived for estimating the probability of error using the leave-one-out method. The optimal threshold for the classification of patterns projected onto Fisher's direction is derived. A simple generalization of the Fisher classifier to multiple classes is presented. Computational expressions are developed for estimating the probability of error of the multiclass Fisher classifier.
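
    As a hedged reminder of the underlying classifier (standard Fisher discriminant formulas; the cited work's contribution is the threshold and error estimation), patterns x are projected onto the direction

        w \propto S_w^{-1} (\mu_1 - \mu_2),

    where S_w is the pooled within-class scatter matrix and \mu_1, \mu_2 are the class means; classification then thresholds the scalar w^{\mathsf{T}} x, and the leave-one-out expressions estimate the probability of error of this rule.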

  18. 40 CFR Appendix D to Part 61 - Methods for Estimating Radionuclide Emissions

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    Table excerpt (control method; radionuclide; adjustment factor; notes): … for decay; xenon; 0.5/wk; based on xenon half-life of 5.3 days. Douglas bags, released within one week; xenon; 1; provides no reduction of exposure to general public. Venturi scrubbers; particulates, gases; 0.051… precipitators; particulates; 0.05; not applicable for gaseous radionuclides. Xenon traps; xenon; 0.1; efficiency is…

  19. 40 CFR Appendix D to Part 61 - Methods for Estimating Radionuclide Emissions

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    Table excerpt (control method; radionuclide; adjustment factor; notes): … for decay; xenon; 0.5/wk; based on xenon half-life of 5.3 days. Douglas bags, released within one week; xenon; 1; provides no reduction of exposure to general public. Venturi scrubbers; particulates, gases; 0.051… precipitators; particulates; 0.05; not applicable for gaseous radionuclides. Xenon traps; xenon; 0.1; efficiency is…

  20. Optimal Least-Squares Unidimensional Scaling: Improved Branch-and-Bound Procedures and Comparison to Dynamic Programming

    ERIC Educational Resources Information Center

    Brusco, Michael J.; Stahl, Stephanie

    2005-01-01

    There are two well-known methods for obtaining a guaranteed globally optimal solution to the problem of least-squares unidimensional scaling of a symmetric dissimilarity matrix: (a) dynamic programming, and (b) branch-and-bound. Dynamic programming is generally more efficient than branch-and-bound, but the former is limited to matrices with…

  1. Error regions in quantum state tomography: computational complexity caused by geometry of quantum states

    NASA Astrophysics Data System (ADS)

    Suess, Daniel; Rudnicki, Łukasz; Maciel, Thiago O.; Gross, David

    2017-09-01

    The outcomes of quantum mechanical measurements are inherently random. It is therefore necessary to develop stringent methods for quantifying the degree of statistical uncertainty about the results of quantum experiments. For the particularly relevant task of quantum state tomography, it has been shown that a significant reduction in uncertainty can be achieved by taking the positivity of quantum states into account. However—the large number of partial results and heuristics notwithstanding—no efficient general algorithm is known that produces an optimal uncertainty region from experimental data, while making use of the prior constraint of positivity. Here, we provide a precise formulation of this problem and show that the general case is NP-hard. Our result leaves room for the existence of efficient approximate solutions, and therefore does not in itself imply that the practical task of quantum uncertainty quantification is intractable. However, it does show that there exists a non-trivial trade-off between optimality and computational efficiency for error regions. We prove two versions of the result: one for frequentist and one for Bayesian statistics.

  2. General form of a cooperative gradual maximal covering location problem

    NASA Astrophysics Data System (ADS)

    Bagherinejad, Jafar; Bashiri, Mahdi; Nikzad, Hamideh

    2018-07-01

    Cooperative and gradual covering are two new methods for developing covering location models. In this paper, a cooperative maximal covering location-allocation model is developed (CMCLAP). In addition, both cooperative and gradual covering concepts are applied to the maximal covering location simultaneously (CGMCLP). Then, we develop an integrated form of a cooperative gradual maximal covering location problem, which is called a general CGMCLP. By setting the model parameters, the proposed general model can easily be transformed into other existing models, facilitating general comparisons. The proposed models are developed without allocation for physical signals and with allocation for non-physical signals in discrete location space. Comparison of the previously introduced gradual maximal covering location problem (GMCLP) and cooperative maximal covering location problem (CMCLP) models with our proposed CGMCLP model on similar data sets shows that the proposed model can cover more demand and act more efficiently. Sensitivity analyses are performed to show the effect of related parameters and the model's validity. Simulated annealing (SA) and tabu search (TS) are proposed as solution algorithms for the developed models for large-sized instances. The results show that the proposed algorithms are efficient solution approaches, considering solution quality and running time.

  3. FBC: a flat binary code scheme for fast Manhattan hash retrieval

    NASA Astrophysics Data System (ADS)

    Kong, Yan; Wu, Fuzhang; Gao, Lifa; Wu, Yanjun

    2018-04-01

    Hash coding is a widely used technique in approximate nearest neighbor (ANN) search, especially in document search and multimedia (such as image and video) retrieval. Based on the difference in distance measurement, hash methods are generally classified into two categories: Hamming hashing and Manhattan hashing. Benefiting from better neighborhood structure preservation, Manhattan hashing methods outperform earlier methods in search effectiveness. However, due to using decimal arithmetic operations instead of bit operations, Manhattan hashing becomes a more time-consuming process, which significantly decreases the whole search efficiency. To solve this problem, we present an intuitive hash scheme which uses a Flat Binary Code (FBC) to encode the data points. As a result, the decimal arithmetic used in previous Manhattan hashing can be replaced by the more efficient XOR operator. The final experiments show that, with a reasonable growth in memory space, our FBC speeds up retrieval by more than 80% on average, without any loss of search accuracy, when compared to state-of-the-art Manhattan hashing methods.
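
    One way to realize the XOR property described above is a unary-style flat code, sketched here under stated assumptions; the actual FBC layout in the paper may differ:

        def unary_code(value, levels):
            # Encode an integer in [0, levels - 1] as `value` low bits set;
            # then popcount(XOR) of two codes equals |v1 - v2|.
            assert 0 <= value < levels
            return (1 << value) - 1

        def hamming(a, b):
            return bin(a ^ b).count("1")

        v1, v2, levels = 5, 2, 8
        c1, c2 = unary_code(v1, levels), unary_code(v2, levels)
        assert hamming(c1, c2) == abs(v1 - v2)  # XOR recovers Manhattan distance

    With such a code, Manhattan distances between quantized projections reduce to XOR plus popcount, which is what removes the decimal arithmetic from the search loop.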

  4. Explicit symplectic algorithms based on generating functions for charged particle dynamics.

    PubMed

    Zhang, Ruili; Qin, Hong; Tang, Yifa; Liu, Jian; He, Yang; Xiao, Jianyuan

    2016-07-01

    Dynamics of a charged particle in the canonical coordinates is a Hamiltonian system, and the well-known symplectic algorithm has been regarded as the de facto method for numerical integration of Hamiltonian systems due to its long-term accuracy and fidelity. For long-term simulations with high efficiency, explicit symplectic algorithms are desirable. However, it is generally believed that explicit symplectic algorithms are only available for sum-separable Hamiltonians, and this restriction limits the application of explicit symplectic algorithms to charged particle dynamics. To overcome this difficulty, we combine the familiar sum-split method and a generating function method to construct second- and third-order explicit symplectic algorithms for charged particle dynamics. The generating function method is designed to generate explicit symplectic algorithms for product-separable Hamiltonians of the form H(x,p)=p_{i}f(x) or H(x,p)=x_{i}g(p). Applied to the simulations of charged particle dynamics, the explicit symplectic algorithms based on generating functions demonstrate superiorities in conservation and efficiency.
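
    A generic illustration of the sum-split step mentioned above (standard Strang composition, not the authors' specific generating-function construction): if H = H_1 + H_2 and each sub-flow \varphi^{H_i}_{\tau} is exactly solvable and symplectic, as for the product-separable pieces above, then

        \Phi_{\tau} = \varphi^{H_1}_{\tau/2} \circ \varphi^{H_2}_{\tau} \circ \varphi^{H_1}_{\tau/2}

    is an explicit, symplectic, second-order accurate integrator, and higher orders follow from longer compositions.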

  5. Explicit symplectic algorithms based on generating functions for charged particle dynamics

    NASA Astrophysics Data System (ADS)

    Zhang, Ruili; Qin, Hong; Tang, Yifa; Liu, Jian; He, Yang; Xiao, Jianyuan

    2016-07-01

    Dynamics of a charged particle in the canonical coordinates is a Hamiltonian system, and the well-known symplectic algorithm has been regarded as the de facto method for numerical integration of Hamiltonian systems due to its long-term accuracy and fidelity. For long-term simulations with high efficiency, explicit symplectic algorithms are desirable. However, it is generally believed that explicit symplectic algorithms are only available for sum-separable Hamiltonians, and this restriction limits the application of explicit symplectic algorithms to charged particle dynamics. To overcome this difficulty, we combine the familiar sum-split method and a generating function method to construct second- and third-order explicit symplectic algorithms for charged particle dynamics. The generating function method is designed to generate explicit symplectic algorithms for product-separable Hamiltonians of the form H(x,p) = p_i f(x) or H(x,p) = x_i g(p). Applied to the simulations of charged particle dynamics, the explicit symplectic algorithms based on generating functions demonstrate superiorities in conservation and efficiency.

  6. Efficiently detecting outlying behavior in video-game players.

    PubMed

    Kim, Young Bin; Kang, Shin Jin; Lee, Sang Hyeok; Jung, Jang Young; Kam, Hyeong Ryeol; Lee, Jung; Kim, Young Sun; Lee, Joonsoo; Kim, Chang Hun

    2015-01-01

    In this paper, we propose a method for automatically detecting the times during which game players exhibit specific behavior, such as when players commonly show excitement, concentration, immersion, and surprise. The proposed method detects such outlying behavior based on the game players' characteristics. These characteristics are captured non-invasively in a general game environment. In this paper, cameras were used to analyze observed data such as facial expressions and player movements. Moreover, multimodal data from the game players (i.e., data regarding adjustments to the volume and the use of the keyboard and mouse) was used to analyze high-dimensional game-player data. A support vector machine was used to efficiently detect outlying behaviors. We verified the effectiveness of the proposed method using games from several genres. The recall rate of the outlying behavior pre-identified by industry experts was approximately 70%. The proposed method can also be used for feedback analysis of various interactive content provided in PC environments.

  7. A diagonal algorithm for the method of pseudocompressibility. [for steady-state solution to incompressible Navier-Stokes equation

    NASA Technical Reports Server (NTRS)

    Rogers, S. E.; Kwak, D.; Chang, J. L. C.

    1986-01-01

    The method of pseudocompressibility has been shown to be an efficient method for obtaining a steady-state solution to the incompressible Navier-Stokes equations. Recent improvements to this method include the use of a diagonal scheme for the inversion of the equations at each iteration. The necessary transformations have been derived for the pseudocompressibility equations in generalized coordinates. The diagonal algorithm reduces the computing time necessary to obtain a steady-state solution by a factor of nearly three. Implicit viscous terms are maintained in the equations, and it has become possible to use fourth-order implicit dissipation. The steady-state solution is unchanged by the approximations resulting from the diagonalization of the equations. Computed results for flow over a two-dimensional backward-facing step and a three-dimensional cylinder mounted normal to a flat plate are presented for both the old and new algorithms. The accuracy and computing efficiency of these algorithms are compared.
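
    For context, the standard artificial-compressibility formulation assumed here replaces the divergence-free constraint with a pseudo-time evolution of the pressure,

        \frac{\partial p}{\partial \tau} + \beta\, \nabla \cdot \mathbf{u} = 0,

    with \beta the pseudocompressibility parameter; marching the coupled system to a pseudo-steady state recovers \nabla \cdot \mathbf{u} = 0, and the diagonal scheme accelerates the implicit inversion performed at each pseudo-time step.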

  8. Efficiently detecting outlying behavior in video-game players

    PubMed Central

    Kim, Young Bin; Kang, Shin Jin; Lee, Sang Hyeok; Jung, Jang Young; Kam, Hyeong Ryeol; Lee, Jung; Kim, Young Sun; Lee, Joonsoo

    2015-01-01

    In this paper, we propose a method for automatically detecting the times during which game players exhibit specific behavior, such as when players commonly show excitement, concentration, immersion, and surprise. The proposed method detects such outlying behavior based on the game players’ characteristics. These characteristics are captured non-invasively in a general game environment. In this paper, cameras were used to analyze observed data such as facial expressions and player movements. Moreover, multimodal data from the game players (i.e., data regarding adjustments to the volume and the use of the keyboard and mouse) was used to analyze high-dimensional game-player data. A support vector machine was used to efficiently detect outlying behaviors. We verified the effectiveness of the proposed method using games from several genres. The recall rate of the outlying behavior pre-identified by industry experts was approximately 70%. The proposed method can also be used for feedback analysis of various interactive content provided in PC environments. PMID:26713250

  9. Adaptive transmission disequilibrium test for family trio design.

    PubMed

    Yuan, Min; Tian, Xin; Zheng, Gang; Yang, Yaning

    2009-01-01

    The transmission disequilibrium test (TDT) is a standard method to detect association using a family trio design. It is optimal for an additive genetic model. Other TDT-type tests optimal for recessive and dominant models have also been developed. Association tests using family data, including the TDT-type statistics, have been unified into a class of more comprehensive and flexible family-based association tests (FBAT). TDT-type tests have high efficiency when the genetic model is known or correctly specified, but may lose power if the model is mis-specified. Hence tests that are robust to genetic model mis-specification yet efficient are preferred. The constrained likelihood ratio test (CLRT) and MAX-type test have been shown to be efficiency robust. In this paper we propose a new efficiency robust procedure, referred to as adaptive TDT (aTDT). It uses the Hardy-Weinberg disequilibrium coefficient to identify the potential genetic model underlying the data and then applies the TDT-type test (or FBAT for general applications) corresponding to the selected model. Simulation demonstrates that aTDT is efficiency robust to model mis-specification and generally outperforms the MAX test and CLRT in terms of power. We also show that aTDT has power close to that of the optimal TDT-type test based on a single genetic model, but is much more robust. Applications to real and simulated data from the Genetic Analysis Workshop (GAW) illustrate the use of our adaptive TDT.
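
    For reference, the classical TDT statistic that these TDT-type procedures build on is, in its standard form,

        \mathrm{TDT} = \frac{(b - c)^2}{b + c},

    where b and c count heterozygous parents transmitting and not transmitting the candidate allele to affected offspring; it is asymptotically \chi^2_1 under the null hypothesis. aTDT first infers the likely genetic model from the Hardy-Weinberg disequilibrium coefficient and then applies the corresponding model-specific variant.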

  10. A Numerical Method for Solving the 3D Unsteady Incompressible Navier-Stokes Equations in Curvilinear Domains with Complex Immersed Boundaries.

    PubMed

    Ge, Liang; Sotiropoulos, Fotis

    2007-08-01

    A novel numerical method is developed that integrates boundary-conforming grids with a sharp interface, immersed boundary methodology. The method is intended for simulating internal flows containing complex, moving immersed boundaries such as those encountered in several cardiovascular applications. The background domain (e.g., the empty aorta) is discretized efficiently with a curvilinear boundary-fitted mesh while the complex moving immersed boundary (say a prosthetic heart valve) is treated with the sharp-interface, hybrid Cartesian/immersed-boundary approach of Gilmanov and Sotiropoulos [1]. To facilitate the implementation of this novel modeling paradigm in complex flow simulations, an accurate and efficient numerical method is developed for solving the unsteady, incompressible Navier-Stokes equations in generalized curvilinear coordinates. The method employs a novel, fully-curvilinear staggered grid discretization approach, which does not require either the explicit evaluation of the Christoffel symbols or the discretization of all three momentum equations at cell interfaces as done in previous formulations. The equations are integrated in time using an efficient, second-order accurate fractional step methodology coupled with a Jacobian-free, Newton-Krylov solver for the momentum equations and a GMRES solver enhanced with multigrid as preconditioner for the Poisson equation. Several numerical experiments are carried out on fine computational meshes to demonstrate the accuracy and efficiency of the proposed method for standard benchmark problems as well as for unsteady, pulsatile flow through a curved pipe bend. To demonstrate the ability of the method to simulate flows with complex, moving immersed boundaries we apply it to calculate pulsatile, physiological flow through a mechanical, bileaflet heart valve mounted in a model straight aorta with an anatomical-like triple sinus.
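
    A hedged outline of a generic fractional-step (projection) update of the kind referred to above, written in simplified Cartesian notation rather than the paper's curvilinear staggered-grid form: an intermediate velocity \mathbf{u}^{*} is advanced with the momentum terms and then projected onto the divergence-free space,

        \nabla^2 \phi = \frac{\nabla \cdot \mathbf{u}^{*}}{\Delta t},
        \qquad \mathbf{u}^{n+1} = \mathbf{u}^{*} - \Delta t\, \nabla \phi,

    where the Poisson solve for \phi is the step handled by the multigrid-preconditioned GMRES solver mentioned in the abstract.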

  11. A Numerical Method for Solving the 3D Unsteady Incompressible Navier-Stokes Equations in Curvilinear Domains with Complex Immersed Boundaries

    PubMed Central

    Ge, Liang; Sotiropoulos, Fotis

    2008-01-01

    A novel numerical method is developed that integrates boundary-conforming grids with a sharp interface, immersed boundary methodology. The method is intended for simulating internal flows containing complex, moving immersed boundaries such as those encountered in several cardiovascular applications. The background domain (e.g., the empty aorta) is discretized efficiently with a curvilinear boundary-fitted mesh while the complex moving immersed boundary (say a prosthetic heart valve) is treated with the sharp-interface, hybrid Cartesian/immersed-boundary approach of Gilmanov and Sotiropoulos [1]. To facilitate the implementation of this novel modeling paradigm in complex flow simulations, an accurate and efficient numerical method is developed for solving the unsteady, incompressible Navier-Stokes equations in generalized curvilinear coordinates. The method employs a novel, fully-curvilinear staggered grid discretization approach, which does not require either the explicit evaluation of the Christoffel symbols or the discretization of all three momentum equations at cell interfaces as done in previous formulations. The equations are integrated in time using an efficient, second-order accurate fractional step methodology coupled with a Jacobian-free, Newton-Krylov solver for the momentum equations and a GMRES solver enhanced with multigrid as preconditioner for the Poisson equation. Several numerical experiments are carried out on fine computational meshes to demonstrate the accuracy and efficiency of the proposed method for standard benchmark problems as well as for unsteady, pulsatile flow through a curved pipe bend. To demonstrate the ability of the method to simulate flows with complex, moving immersed boundaries we apply it to calculate pulsatile, physiological flow through a mechanical, bileaflet heart valve mounted in a model straight aorta with an anatomical-like triple sinus. PMID:19194533

  12. Accurate first-principles structures and energies of diversely bonded systems from an efficient density functional

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Jianwei; Remsing, Richard C.; Zhang, Yubo

    2016-06-13

    One atom or molecule binds to another through various types of bond, the strengths of which range from several meV to several eV. Although some computational methods can provide accurate descriptions of all bond types, those methods are not efficient enough for many studies (for example, large systems, ab initio molecular dynamics and high-throughput searches for functional materials). Here, we show that the recently developed non-empirical strongly constrained and appropriately normed (SCAN) meta-generalized gradient approximation (meta-GGA) within the density functional theory framework predicts accurate geometries and energies of diversely bonded molecules and materials (including covalent, metallic, ionic, hydrogen and van der Waals bonds). This represents a significant improvement at comparable efficiency over its predecessors, the GGAs that currently dominate materials computation. Often, SCAN matches or improves on the accuracy of a computationally expensive hybrid functional, at almost-GGA cost. SCAN is therefore expected to have a broad impact on chemistry and materials science.

  13. Efficient Implementations of the Quadrature-Free Discontinuous Galerkin Method

    NASA Technical Reports Server (NTRS)

    Lockard, David P.; Atkins, Harold L.

    1999-01-01

    The efficiency of the quadrature-free form of the discontinuous Galerkin method in two dimensions, and briefly in three dimensions, is examined. Most of the work for constant-coefficient, linear problems involves the volume and edge integrations, and the transformation of information from the volume to the edges. These operations can be viewed as matrix-vector multiplications. Many of the matrices are sparse as a result of symmetry, and blocking and specialized multiplication routines are used to account for the sparsity. By optimizing these operations, a 35% reduction in total CPU time is achieved. For nonlinear problems, the calculation of the flux becomes dominant because of the cost associated with polynomial products and inversion. This component of the work can be reduced by up to 75% when the products are approximated by truncating terms. Because the cost is high for nonlinear problems on general elements, it is suggested that simplified physics and the most efficient element types be used over most of the domain.

  14. Efficient anharmonic vibrational spectroscopy for large molecules using local-mode coordinates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Xiaolu; Steele, Ryan P., E-mail: ryan.steele@utah.edu

    This article presents a general computational approach for efficient simulations of anharmonic vibrational spectra in chemical systems. An automated local-mode vibrational approach is presented, which borrows techniques from localized molecular orbitals in electronic structure theory. This approach generates spatially localized vibrational modes, in contrast to the delocalization exhibited by canonical normal modes. The method is rigorously tested across a series of chemical systems, ranging from small molecules to large water clusters and a protonated dipeptide. It is interfaced with exact, grid-based approaches, as well as vibrational self-consistent field methods. Most significantly, this new set of reference coordinates exhibits a well-behaved spatial decay of mode couplings, which allows for a systematic, a priori truncation of mode couplings and increased computational efficiency. Convergence can typically be reached by including modes within only about 4 Å. The local nature of this truncation suggests particular promise for the ab initio simulation of anharmonic vibrational motion in large systems, where connection to experimental spectra is currently most challenging.

  15. Developing Discontinuous Galerkin Methods for Solving Multiphysics Problems in General Relativity

    NASA Astrophysics Data System (ADS)

    Kidder, Lawrence; Field, Scott; Teukolsky, Saul; Foucart, Francois; SXS Collaboration

    2016-03-01

    Multi-messenger observations of the merger of black hole-neutron star and neutron star-neutron star binaries, and of supernova explosions will probe fundamental physics inaccessible to terrestrial experiments. Modeling these systems requires a relativistic treatment of hydrodynamics, including magnetic fields, as well as neutrino transport and nuclear reactions. The accuracy, efficiency, and robustness of current codes that treat all of these problems is not sufficient to keep up with the observational needs. We are building a new numerical code that uses the Discontinuous Galerkin method with a task-based parallelization strategy, a promising combination that will allow multiphysics applications to be treated both accurately and efficiently on petascale and exascale machines. The code will scale to more than 100,000 cores for efficient exploration of the parameter space of potential sources and allowed physics, and the high-fidelity predictions needed to realize the promise of multi-messenger astronomy. I will discuss the current status of the development of this new code.

  16. Deep Learning for Flow Sculpting: Insights into Efficient Learning using Scientific Simulation Data

    NASA Astrophysics Data System (ADS)

    Stoecklein, Daniel; Lore, Kin Gwn; Davies, Michael; Sarkar, Soumik; Ganapathysubramanian, Baskar

    2017-04-01

    A new technique for shaping microfluidic flow, known as flow sculpting, offers an unprecedented level of passive fluid flow control, with potential breakthrough applications in advancing manufacturing, biology, and chemistry research at the microscale. However, efficiently solving the inverse problem of designing a flow sculpting device for a desired fluid flow shape remains a challenge. Current approaches struggle with the many-to-one design space, requiring substantial user interaction and the necessity of building intuition, all of which are time and resource intensive. Deep learning has emerged as an efficient function approximation technique for high-dimensional spaces, and presents a fast solution to the inverse problem, yet the science of its implementation in similarly defined problems remains largely unexplored. We propose that deep learning methods can completely outpace current approaches for scientific inverse problems while delivering comparable designs. To this end, we show how intelligent sampling of the design space inputs can make deep learning methods more competitive in accuracy, while illustrating their generalization capability to out-of-sample predictions.

  17. Efficient design of gain-flattened multi-pump Raman fiber amplifiers using least squares support vector regression

    NASA Astrophysics Data System (ADS)

    Chen, Jing; Qiu, Xiaojie; Yin, Cunyi; Jiang, Hao

    2018-02-01

    An efficient method to design the broadband gain-flattened Raman fiber amplifier with multiple pumps is proposed based on least squares support vector regression (LS-SVR). A multi-input multi-output LS-SVR model is introduced to replace the complicated solving process of the nonlinear coupled Raman amplification equation. The proposed approach contains two stages: an offline training stage and an online optimization stage. During the offline stage, the LS-SVR model is trained. Owing to the good generalization capability of LS-SVR, the net gain spectrum can be directly and accurately obtained when inputting any combination of the pump wavelength and power to the well-trained model. During the online stage, we incorporate the LS-SVR model into the particle swarm optimization algorithm to find the optimal pump configuration. The design results demonstrate that the proposed method greatly shortens the computation time and enhances the efficiency of the pump parameter optimization for Raman fiber amplifier design.
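
    For illustration, a minimal sketch of the offline stage follows, assuming an RBF kernel and the standard LS-SVR dual formulation; the kernel width sigma, the regularization gamma, and the toy training data are placeholders, not values from the paper. In the online stage, the returned predictor would simply be evaluated inside the particle swarm loop in place of the coupled Raman equations.

        import numpy as np

        def rbf_kernel(A, B, sigma=1.0):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2 * sigma ** 2))

        def lssvr_fit(X, y, gamma=100.0, sigma=1.0):
            # Standard LS-SVR dual: solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
            n = len(y)
            K = rbf_kernel(X, X, sigma)
            A = np.zeros((n + 1, n + 1))
            A[0, 1:] = 1.0
            A[1:, 0] = 1.0
            A[1:, 1:] = K + np.eye(n) / gamma
            sol = np.linalg.solve(A, np.r_[0.0, y])
            b, alpha = sol[0], sol[1:]
            return lambda Xq: rbf_kernel(Xq, X, sigma) @ alpha + b

        # Toy usage: (pump wavelength, pump power) pairs -> gain at one probe wavelength
        X = np.random.rand(50, 2)               # placeholder pump configurations
        y = np.sin(3 * X[:, 0]) + X[:, 1]       # placeholder gain response
        predict = lssvr_fit(X, y)
        print(predict(np.array([[0.5, 0.5]])))  # fast surrogate evaluation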

  18. Efficient Isolation Protocol for B and T Lymphocytes from Human Palatine Tonsils

    PubMed Central

    Assadian, Farzaneh; Sandström, Karl; Laurell, Göran; Svensson, Catharina; Akusjärvi, Göran; Punga, Tanel

    2015-01-01

    Tonsils form a part of the immune system providing the first line of defense against inhaled pathogens. Usually the term “tonsils” refers to the palatine tonsils situated at the lateral walls of the oral part of the pharynx. Surgically removed palatine tonsils provide a convenient accessible source of B and T lymphocytes to study the interplay between foreign pathogens and the host immune system. This video protocol describes the dissection and processing of surgically removed human palatine tonsils, followed by the isolation of the individual B and T cell populations from the same tissue sample. We present a method, which efficiently separates tonsillar B and T lymphocytes using an antibody-dependent affinity protocol. Further, we use the method to demonstrate that human adenovirus infects specifically the tonsillar T cell fraction. The established protocol is generally applicable to efficiently and rapidly isolate tonsillar B and T cell populations to study the role of different types of pathogens in tonsillar immune responses. PMID:26650582

  19. Mechanism of Electro-Coagulation with Al/Fe Periodically Reversing Treating Berberine Pharmaceutical Wastewater

    NASA Astrophysics Data System (ADS)

    Sun, Zhaonan; Liu, Zheng; Hu, Xiaomin

    2017-05-01

    The method of treating pharmaceutical wastewater by electro-coagulation with Al/Fe periodically reversing (ECPR) was proposed based on the traditional electrochemical method. The principle of ECPR was generalized, and the mechanism of ECPR in treating berberine pharmaceutical wastewater was investigated. Treatability and mechanism studies were conducted under laboratory conditions. For berberine wastewater (800 mg/L), the decolourization efficiency and COD removal efficiency reached maxima of 98% and 95%, respectively, when the voltage was 8 V, the reaction time was 60 min, the alternating period was 10 s, the electrolyte concentration was 0.015 mol/L, the stirring speed was 750 rpm, the pH value was 3-10, and the distance between the two plates was 0.6 cm. For berberine removal, flocculation, floatation and oxidation provided 73%, 8% and 18% removal efficiency, respectively, as inferred by analysing the UV-visible absorption spectrum, an acidification experiment, an EDTA shielding experiment, the structure-activity relationship, oxidation and floatation. Meanwhile, decolourization and COD removal conformed to apparent pseudo-first-order and zero-order kinetics for 200 mg/L and 400-1000 mg/L berberine wastewater, respectively.
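
    As a minimal sketch (with hypothetical concentration data, not measurements from the paper), the two rate laws quoted above are usually fitted as straight lines: pseudo-first-order kinetics are linear in ln(C0/C) versus t, and zero-order kinetics are linear in (C0 - C) versus t.

        import numpy as np

        t = np.array([0.0, 10, 20, 30, 40, 50, 60])      # time, min (assumed)
        C = np.array([200.0, 138, 95, 66, 45, 31, 22])   # berberine, mg/L (assumed)

        k1 = np.polyfit(t, np.log(C[0] / C), 1)[0]   # pseudo-first-order rate constant
        k0 = np.polyfit(t, C[0] - C, 1)[0]           # zero-order rate constant
        print(f"k1 = {k1:.4f} 1/min, k0 = {k0:.2f} mg/(L*min)")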

  20. Accurate first-principles structures and energies of diversely bonded systems from an efficient density functional.

    PubMed

    Sun, Jianwei; Remsing, Richard C; Zhang, Yubo; Sun, Zhaoru; Ruzsinszky, Adrienn; Peng, Haowei; Yang, Zenghui; Paul, Arpita; Waghmare, Umesh; Wu, Xifan; Klein, Michael L; Perdew, John P

    2016-09-01

    One atom or molecule binds to another through various types of bond, the strengths of which range from several meV to several eV. Although some computational methods can provide accurate descriptions of all bond types, those methods are not efficient enough for many studies (for example, large systems, ab initio molecular dynamics and high-throughput searches for functional materials). Here, we show that the recently developed non-empirical strongly constrained and appropriately normed (SCAN) meta-generalized gradient approximation (meta-GGA) within the density functional theory framework predicts accurate geometries and energies of diversely bonded molecules and materials (including covalent, metallic, ionic, hydrogen and van der Waals bonds). This represents a significant improvement at comparable efficiency over its predecessors, the GGAs that currently dominate materials computation. Often, SCAN matches or improves on the accuracy of a computationally expensive hybrid functional, at almost-GGA cost. SCAN is therefore expected to have a broad impact on chemistry and materials science.

  1. Parallel implementation of geometrical shock dynamics for two dimensional converging shock waves

    NASA Astrophysics Data System (ADS)

    Qiu, Shi; Liu, Kuang; Eliasson, Veronica

    2016-10-01

    Geometrical shock dynamics (GSD) theory is an appealing method to predict the shock motion in the sense that it is more computationally efficient than solving the traditional Euler equations, especially for converging shock waves. However, to solve and optimize large scale configurations, the main bottleneck is the computational cost. Among the existing numerical GSD schemes, there is only one that has been implemented on parallel computers, with the purpose of analyzing detonation waves. To extend the computational advantage of the GSD theory to more general applications such as converging shock waves, a numerical implementation using a spatial decomposition method has been coupled with a front tracking approach on parallel computers. In addition, an efficient tridiagonal system solver for massively parallel computers has been applied to resolve the most expensive function in this implementation, resulting in an efficiency of 0.93 while using 32 HPCC cores. Moreover, symmetric boundary conditions have been developed to further reduce the computational cost, achieving a speedup of 19.26 for a 12-sided polygonal converging shock.
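
    In serial form, the tridiagonal solve mentioned above is the classic Thomas algorithm, sketched below; the paper's massively parallel variant partitions this recurrence across cores, which the sketch does not attempt. Array names and the test system are illustrative.

        import numpy as np

        def thomas(a, b, c, d):
            # Solve a tridiagonal system with sub-, main- and super-diagonals
            # a, b, c and right-hand side d (a[0] and c[-1] are unused).
            n = len(d)
            cp, dp = np.empty(n), np.empty(n)
            cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
            for i in range(1, n):
                m = b[i] - a[i] * cp[i - 1]
                cp[i] = c[i] / m
                dp[i] = (d[i] - a[i] * dp[i - 1]) / m
            x = np.empty(n)
            x[-1] = dp[-1]
            for i in range(n - 2, -1, -1):
                x[i] = dp[i] - cp[i] * x[i + 1]
            return x

        n = 8
        x = thomas(np.full(n, -1.0), np.full(n, 2.0), np.full(n, -1.0), np.ones(n))
        print(x)   # solution of a standard 1D Poisson-like test system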

  2. Optimization of a Small Scale Linear Reluctance Accelerator

    NASA Astrophysics Data System (ADS)

    Barrera, Thor; Beard, Robby

    2011-11-01

    Reluctance accelerators are extremely promising future methods of transportation. Several problems still plague these devices, most prominently low efficiency. The variables involved in overcoming the efficiency problem are numerous, and it is difficult to correlate how each affects the accelerator. The study examined several differing variables that present potential challenges in optimizing the efficiency of reluctance accelerators. These include coil and projectile design, power supplies, switching, and the elusive gradient inductance problem. Extensive research in these areas, ranging from computational and theoretical to experimental, has been performed. Findings show that these parameters share significant similarity to transformer design elements; the currently optimized parameters are therefore suggested as a baseline for further research and design. Demonstration of these findings will be offered at the time of presentation.

  3. Simple formalism for efficient derivatives and multi-determinant expansions in quantum Monte Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Filippi, Claudia, E-mail: c.filippi@utwente.nl; Assaraf, Roland, E-mail: assaraf@lct.jussieu.fr; Moroni, Saverio, E-mail: moroni@democritos.it

    2016-05-21

    We present a simple and general formalism to compute efficiently the derivatives of a multi-determinant Jastrow-Slater wave function, the local energy, the interatomic forces, and similar quantities needed in quantum Monte Carlo. Through a straightforward manipulation of matrices evaluated on the occupied and virtual orbitals, we obtain an efficiency equivalent to algorithmic differentiation in the computation of the interatomic forces and the optimization of the orbital parameters. Furthermore, for a large multi-determinant expansion, the significant computational gain afforded by a recently introduced table method is here extended to the local value of any one-body operator and to its derivatives, in both all-electron and pseudopotential calculations.

  4. Coverage-maximization in networks under resource constraints.

    PubMed

    Nandi, Subrata; Brusch, Lutz; Deutsch, Andreas; Ganguly, Niloy

    2010-06-01

    Efficient coverage algorithms are essential for information search or dispersal in all kinds of networks. We define an extended coverage problem which accounts for constrained resources of consumed bandwidth B and time T. Our solution to the network challenge is here studied for regular grids only. Using methods from statistical mechanics, we develop a coverage algorithm with proliferating message packets and temporally modulated proliferation rate. The algorithm performs as efficiently as a single random walker but O(B^((d-2)/d)) times faster, resulting in significant service speed-up on a regular grid of dimension d. The algorithm is numerically compared to a class of generalized proliferating random walk strategies and on regular grids shown to perform best in terms of the product metric of speed and efficiency.
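
    A toy version of the proliferating-walker idea on a 2D periodic grid is sketched below; the grid size, the proliferation rate (constant here, temporally modulated in the paper), and the step budget are illustrative assumptions. Coverage is the fraction of distinct sites visited, and the returned move count plays the role of the consumed bandwidth B.

        import random

        def coverage(n=50, rate=0.02, steps=300):
            pos = [(n // 2, n // 2)]          # one initial walker
            seen = set(pos)
            moves = 0
            for _ in range(steps):
                nxt = []
                for (x, y) in pos:
                    dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
                    p = ((x + dx) % n, (y + dy) % n)
                    seen.add(p)
                    nxt.append(p)
                    moves += 1
                    if random.random() < rate:
                        nxt.append(p)         # proliferation: spawn a copy
                pos = nxt
            return len(seen) / n ** 2, moves

        print(coverage())   # (covered fraction, bandwidth consumed)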

  5. Efficient scheme for parametric fitting of data in arbitrary dimensions.

    PubMed

    Pang, Ning-Ning; Tzeng, Wen-Jer; Kao, Hisen-Ching

    2008-07-01

    We propose an efficient scheme for parametric fitting expressed in terms of the Legendre polynomials. For continuous systems, our scheme is exact and the derived explicit expression is very helpful for further analytical studies. For discrete systems, our scheme is almost as accurate as the method of singular value decomposition. Through a few numerical examples, we show that our algorithm costs much less CPU time and memory space than the method of singular value decomposition. Thus, our algorithm is very suitable for a large amount of data fitting. In addition, the proposed scheme can also be used to extract the global structure of fluctuating systems. We then derive the exact relation between the correlation function and the detrended variance function of fluctuating systems in arbitrary dimensions and give a general scaling analysis.
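
    As a generic illustration of least-squares fitting in a Legendre basis (not the authors' exact scheme), numpy provides the fit directly; the data here are synthetic.

        import numpy as np
        from numpy.polynomial import legendre as L

        x = np.linspace(-1, 1, 200)
        y = np.sin(3 * x) + 0.05 * np.random.randn(x.size)   # synthetic noisy data

        coef = L.legfit(x, y, deg=8)     # least-squares Legendre coefficients
        yfit = L.legval(x, coef)
        print("max abs residual:", np.abs(y - yfit).max())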

  6. Effective distances for epidemics spreading on complex networks.

    PubMed

    Iannelli, Flavio; Koher, Andreas; Brockmann, Dirk; Hövel, Philipp; Sokolov, Igor M

    2017-01-01

    We show that the recently introduced logarithmic metrics used to predict disease arrival times on complex networks are approximations of more general network-based measures derived from random walk theory. Using the daily air-traffic transportation data, we perform numerical experiments to compare the infection arrival time with this alternative metric that is obtained by accounting for multiple walks instead of only the most probable path. The comparison with direct simulations reveals a higher correlation compared to the shortest-path approach used previously. In addition, our method allows one to connect fundamental observables in epidemic spreading with the cumulant-generating function of the hitting time for a Markov chain. Our results provide a general and computationally efficient approach using only algebraic methods.

  7. Effective distances for epidemics spreading on complex networks

    NASA Astrophysics Data System (ADS)

    Iannelli, Flavio; Koher, Andreas; Brockmann, Dirk; Hövel, Philipp; Sokolov, Igor M.

    2017-01-01

    We show that the recently introduced logarithmic metrics used to predict disease arrival times on complex networks are approximations of more general network-based measures derived from random walk theory. Using the daily air-traffic transportation data, we perform numerical experiments to compare the infection arrival time with this alternative metric that is obtained by accounting for multiple walks instead of only the most probable path. The comparison with direct simulations reveals a higher correlation compared to the shortest-path approach used previously. In addition, our method allows one to connect fundamental observables in epidemic spreading with the cumulant-generating function of the hitting time for a Markov chain. Our results provide a general and computationally efficient approach using only algebraic methods.
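
    The Markov-chain connection quoted above can be made concrete in a few lines: for a random walk with transition matrix P, the mean hitting times h to a target node satisfy the linear system (I - Q) h = 1, where Q is P restricted to the non-target states. The sketch below uses a toy adjacency matrix, not the air-traffic data, and is not the authors' full effective-distance metric.

        import numpy as np

        def mean_hitting_times(P, target):
            n = P.shape[0]
            keep = [i for i in range(n) if i != target]
            Q = P[np.ix_(keep, keep)]
            h = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
            full = np.zeros(n)
            full[keep] = h
            return full

        A = np.array([[0, 1, 1, 0],
                      [1, 0, 1, 1],
                      [1, 1, 0, 1],
                      [0, 1, 1, 0]], float)    # toy network
        P = A / A.sum(1, keepdims=True)        # row-stochastic transition matrix
        print(mean_hitting_times(P, target=3))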

  8. New-Sum: A Novel Online ABFT Scheme For General Iterative Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tao, Dingwen; Song, Shuaiwen; Krishnamoorthy, Sriram

    Emerging high-performance computing platforms, with large component counts and lower power margins, are anticipated to be more susceptible to soft errors in both logic circuits and memory subsystems. We present an online algorithm-based fault tolerance (ABFT) approach to efficiently detect and recover soft errors for general iterative methods. We design a novel checksum-based encoding scheme for matrix-vector multiplication that is resilient to both arithmetic and memory errors. Our design decouples the checksum updating process from the actual computation, and allows adaptive checksum overhead control. Building on this new encoding mechanism, we propose two online ABFT designs that can effectively recover from errors when combined with a checkpoint/rollback scheme.
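
    A minimal sketch of the checksum idea for the matrix-vector kernel described above (a generic textbook ABFT scheme, not the paper's exact encoding): a column checksum w^T A is computed once offline, and after each product y = Av the identity (w^T A) v = w^T y is verified, so a silent corruption of A, v, or y shows up as a mismatch.

        import numpy as np

        def abft_matvec(A, v, c, w, tol=1e-9):
            y = A @ v
            if abs(c @ v - w @ y) > tol * (1.0 + abs(c @ v)):
                raise RuntimeError("soft error detected in matvec")
            return y

        A = np.random.rand(5, 5)
        w = np.ones(5)                # checksum weights
        c = w @ A                     # encoded once, reused every iteration
        v = np.random.rand(5)
        y = abft_matvec(A, v, c, w)   # passes silently when fault-free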

  9. Physical-geometric optics method for large size faceted particles.

    PubMed

    Sun, Bingqiang; Yang, Ping; Kattawar, George W; Zhang, Xiaodong

    2017-10-02

    A new physical-geometric optics method is developed to compute the single-scattering properties of faceted particles. It incorporates a general absorption vector to accurately account for inhomogeneous wave effects, and subsequently yields analytical formulas that are effective and computationally efficient for absorptive scattering particles. A bundle of rays incident on a certain facet can be traced as a single beam. For a beam incident on multiple facets, a systematic beam-splitting technique based on computer graphics is used to split the original beam into several sub-beams so that each sub-beam is incident only on an individual facet. The new beam-splitting technique significantly reduces the computational burden. The present physical-geometric optics method can be generalized to arbitrary faceted particles with either convex or concave shapes and with a homogeneous or an inhomogeneous (e.g., a particle with a core) composition. The single-scattering properties of irregular convex homogeneous and inhomogeneous hexahedra are simulated and compared to their counterparts from two other methods, including a numerically rigorous method.

  10. A general method to determine sampling windows for nonlinear mixed effects models with an application to population pharmacokinetic studies.

    PubMed

    Foo, Lee Kien; McGree, James; Duffull, Stephen

    2012-01-01

    Optimal design methods have been proposed to determine the best sampling times when sparse blood sampling is required in clinical pharmacokinetic studies. However, the optimal blood sampling time points may not be feasible in clinical practice. Sampling windows, a time interval for blood sample collection, have been proposed to provide flexibility in blood sampling times while preserving efficient parameter estimation. Because of the complexity of the population pharmacokinetic models, which are generally nonlinear mixed effects models, there is no analytical solution available to determine sampling windows. We propose a method for determination of sampling windows based on MCMC sampling techniques. The proposed method attains a stationary distribution rapidly and provides time-sensitive windows around the optimal design points. The proposed method is applicable to determine sampling windows for any nonlinear mixed effects model although our work focuses on an application to population pharmacokinetic models. Copyright © 2012 John Wiley & Sons, Ltd.

  11. Probabilistic inference using linear Gaussian importance sampling for hybrid Bayesian networks

    NASA Astrophysics Data System (ADS)

    Sun, Wei; Chang, K. C.

    2005-05-01

    Probabilistic inference for Bayesian networks is in general NP-hard using either exact algorithms or approximate methods. However, for very complex networks, only approximate methods such as stochastic sampling can provide a solution given any time constraint. There are several simulation methods currently available. They include logic sampling (the first proposed stochastic method for Bayesian networks), the likelihood weighting algorithm (the most commonly used simulation method because of its simplicity and efficiency), the Markov blanket scoring method, and the importance sampling algorithm. In this paper, we first briefly review and compare these available simulation methods, then we propose an improved importance sampling algorithm called the linear Gaussian importance sampling algorithm for general hybrid models (LGIS). LGIS is aimed at hybrid Bayesian networks consisting of both discrete and continuous random variables with arbitrary distributions. It uses a linear function and Gaussian additive noise to approximate the true conditional probability distribution for a continuous variable given both its parents and evidence in a Bayesian network. One of the most important features of the newly developed method is that it can adaptively learn the optimal importance function from the previous samples. We test the inference performance of LGIS using a 16-node linear Gaussian model and a 6-node general hybrid model. The performance comparison with other well-known methods such as junction tree (JT) and likelihood weighting (LW) shows that LGIS-GHM is very promising.
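
    For readers unfamiliar with the underlying idea, a generic importance sampling sketch follows (a 1D Gaussian proposal for an unnormalized target; this is not the LGIS algorithm itself): draws from the proposal q are reweighted by p/q, and the self-normalized weights estimate expectations under p.

        import numpy as np

        rng = np.random.default_rng(0)
        log_p = lambda x: -0.5 * ((x - 2.0) / 0.5) ** 2   # unnormalized target
        mu, sigma = 0.0, 2.0                              # Gaussian proposal q

        x = rng.normal(mu, sigma, 10000)
        log_q = -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma)
        w = np.exp(log_p(x) - log_q)
        w /= w.sum()                                      # self-normalize
        print("posterior mean estimate:", (w * x).sum())  # close to 2.0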

  12. An advanced probabilistic structural analysis method for implicit performance functions

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.; Millwater, H. R.; Cruse, T. A.

    1989-01-01

    In probabilistic structural analysis, the performance or response functions usually are implicitly defined and must be solved by numerical analysis methods such as finite element methods. In such cases, the most commonly used probabilistic analysis tool is the mean-based, second-moment method which provides only the first two statistical moments. This paper presents a generalized advanced mean value (AMV) method which is capable of establishing the distributions to provide additional information for reliability design. The method requires slightly more computations than the second-moment method but is highly efficient relative to the other alternative methods. In particular, the examples show that the AMV method can be used to solve problems involving non-monotonic functions that result in truncated distributions.

  13. Self-learning Monte Carlo with deep neural networks

    NASA Astrophysics Data System (ADS)

    Shen, Huitao; Liu, Junwei; Fu, Liang

    2018-05-01

    The self-learning Monte Carlo (SLMC) method is a general algorithm to speed up MC simulations. Its efficiency has been demonstrated in various systems by introducing an effective model to propose global moves in the configuration space. In this paper, we show that deep neural networks can be naturally incorporated into SLMC, and without any prior knowledge can learn the original model accurately and efficiently. Demonstrated in quantum impurity models, we reduce the complexity for a local update from O(β²) in the Hirsch-Fye algorithm to O(β ln β), which is a significant speedup, especially for systems at low temperatures.

  14. Efficient computation of PDF-based characteristics from diffusion MR signal.

    PubMed

    Assemlal, Haz-Edine; Tschumperlé, David; Brun, Luc

    2008-01-01

    We present a general method for the computation of PDF-based characteristics of the tissue micro-architecture in MR imaging. The approach relies on the approximation of the MR signal by a series expansion based on Spherical Harmonics and Laguerre-Gaussian functions, followed by a simple projection step that is efficiently done in a finite dimensional space. The resulting algorithm is generic, flexible and is able to compute a large set of useful characteristics of the local tissues structure. We illustrate the effectiveness of this approach by showing results on synthetic and real MR datasets acquired in a clinical time-frame.

  15. Atomic force microscopy analysis of human cornea surface after UV (λ=266 nm) laser irradiation

    NASA Astrophysics Data System (ADS)

    Spyratou, E.; Makropoulou, M.; Moutsouris, K.; Bacharis, C.; Serafetinides, A. A.

    2009-07-01

    Efficient cornea reshaping by laser irradiation for correcting refractive errors is still a major issue of interest and study. Although the excimer laser wavelength of 193 nm is generally recognized as successful in ablating corneal tissue for myopia correction, complications in excimer refractive surgery have motivated the study of alternative laser sources and methods for efficient cornea treatment. In this work, ablation experiments on human donor cornea flaps were conducted with the 4th harmonic of an Nd:YAG laser, with different laser pulses. AFM analysis was performed for examination of the ablated cornea flap morphology and surface roughness.

  16. SCI Identification (SCIDNT) program user's guide. [maximum likelihood method for linear rotorcraft models

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The computer program Linear SCIDNT which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.

  17. Efficient hybrid-symbolic methods for quantum mechanical calculations

    NASA Astrophysics Data System (ADS)

    Scott, T. C.; Zhang, Wenxing

    2015-06-01

    We present hybrid symbolic-numerical tools to generate optimized numerical code for rapid prototyping and fast numerical computation, starting from a computer algebra system (CAS) and tailored to any given quantum mechanical problem. Although a major focus concerns the quantum chemistry methods of H. Nakatsuji, which have yielded successful and very accurate eigensolutions for small atoms and molecules, the tools are general and may be applied to any basis set calculation with a variational principle applied to its linear and non-linear parameters.

  18. On unified modeling, theory, and method for solving multi-scale global optimization problems

    NASA Astrophysics Data System (ADS)

    Gao, David Yang

    2016-10-01

    A unified model is proposed for general optimization problems in multi-scale complex systems. Based on this model and necessary assumptions in physics, the canonical duality theory is presented in a precise way to include traditional duality theories and popular methods as special applications. Two conjectures on NP-hardness are proposed, which should play important roles for correctly understanding and efficiently solving challenging real-world problems. Applications are illustrated for both nonconvex continuous optimization and mixed integer nonlinear programming.

  19. Automatic item generation implemented for measuring artistic judgment aptitude.

    PubMed

    Bezruczko, Nikolaus

    2014-01-01

    Automatic item generation (AIG) is a broad class of methods that are being developed to address psychometric issues arising from internet and computer-based testing. In general, issues emphasize efficiency, validity, and diagnostic usefulness of large scale mental testing. Rapid prominence of AIG methods and their implicit perspective on mental testing is bringing painful scrutiny to many sacred psychometric assumptions. This report reviews basic AIG ideas, then presents conceptual foundations, image model development, and operational application to artistic judgment aptitude testing.

  20. The Structural Insulated Panel SIP Hut: Preliminary Evaluation of Energy Efficiency and Indoor Air Quality

    DTIC Science & Technology

    2015-08-19

    laboratory analysis using EPA TO-15, and collection of gas samples in sorbent tubes for later analysis of aldehydes using NIOSH Method 2016. Total VOCs...measurement can be a general qualitative indicator of IAQ problems; formaldehyde and other aldehydes are common organic gases emitted from OSB; and...table in the middle of the hut. 5.1.2.3 Formaldehyde and other aldehydes Aldehydes were measured using both Dräger-tubes and by NIOSH Method 2016. The

  1. Advanced scoring method of eco-efficiency in European cities.

    PubMed

    Moutinho, Victor; Madaleno, Mara; Robaina, Margarita; Villar, José

    2018-01-01

    This paper analyzes a set of selected German and French cities' performance in terms of the relative behavior of their eco-efficiencies, computed as the ratio of their gross domestic product (GDP) over their CO2 emissions. For this analysis, eco-efficiency scores of the selected cities are computed using the data envelopment analysis (DEA) technique, taking the eco-efficiencies as outputs, and the inputs being the energy consumption, the population density, the labor productivity, the resource productivity, and the patents per inhabitant. Once DEA results are analyzed, the Malmquist productivity indexes (MPI) are used to assess the time evolution of the technical efficiency, technological efficiency, and productivity of the cities over the window periods 2000 to 2005 and 2005 to 2008. Some of the main conclusions are that (1) most of the analyzed cities seem to have suboptimal scales, being one of the causes of their inefficiency; (2) there is evidence that high GDP over CO2 emissions does not imply high eco-efficiency scores, meaning that DEA-like approaches are useful to complement more simplistic ranking procedures, pointing out potential inefficiencies at the input levels; (3) efficiencies performed worse during the period 2000-2005 than during the period 2005-2008, suggesting the possibility of corrective actions taken during or at the end of the first period but impacting only on the second period, probably due to an increasing environmental awareness of policymakers and governors; and (4) MPI analysis shows a positive technological evolution of all cities, according to the general technological evolution of the reference cities, reflecting a generalized convergence of most cities to their technological frontier and therefore an evolution in the right direction.
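
    An input-oriented CCR DEA model of the kind used above reduces to one small linear program per city; the sketch below uses scipy and synthetic data (two inputs, one output) and is a generic illustration, not the paper's exact specification.

        import numpy as np
        from scipy.optimize import linprog

        def dea_ccr_input(X, Y, j):
            # Radial input efficiency of unit j; X is (inputs x units), Y is (outputs x units).
            m, n = X.shape
            s = Y.shape[0]
            c = np.r_[1.0, np.zeros(n)]        # minimize theta over z = [theta, lambda]
            A = np.zeros((m + s, n + 1))
            A[:m, 0] = -X[:, j]                # X @ lam <= theta * x_j
            A[:m, 1:] = X
            A[m:, 1:] = -Y                     # Y @ lam >= y_j
            b = np.r_[np.zeros(m), -Y[:, j]]
            res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * (n + 1))
            return res.fun

        X = np.array([[4.0, 7.0, 8.0, 4.0], [3.0, 3.0, 1.0, 2.0]])   # synthetic inputs
        Y = np.array([[1.0, 1.0, 1.0, 1.0]])                         # synthetic output
        print([round(dea_ccr_input(X, Y, j), 3) for j in range(4)])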

  2. General Synthetic Method for Si-Fluoresceins and Si-Rhodamines

    PubMed Central

    2017-01-01

    The century-old fluoresceins and rhodamines persist as flexible scaffolds for fluorescent and fluorogenic compounds. Extensive exploration of these xanthene dyes has yielded general structure–activity relationships where the development of new probes is limited only by imagination and organic chemistry. In particular, replacement of the xanthene oxygen with silicon has resulted in new red-shifted Si-fluoresceins and Si-rhodamines, whose high brightness and photostability enable advanced imaging experiments. Nevertheless, efforts to tune the chemical and spectral properties of these dyes have been hindered by difficult synthetic routes. Here, we report a general strategy for the efficient preparation of Si-fluoresceins and Si-rhodamines from readily synthesized bis(2-bromophenyl)silane intermediates. These dibromides undergo metal/bromide exchange to give bis-aryllithium or bis(aryl Grignard) intermediates, which can then add to anhydride or ester electrophiles to afford a variety of Si-xanthenes. This strategy enabled efficient (3–5 step) syntheses of known and novel Si-fluoresceins, Si-rhodamines, and related dye structures. In particular, we discovered that previously inaccessible tetrafluorination of the bottom aryl ring of the Si-rhodamines resulted in dyes with improved visible absorbance in solution, and a convenient derivatization through fluoride-thiol substitution. This modular, divergent synthetic method will expand the palette of accessible xanthenoid dyes across the visible spectrum, thereby pushing further the frontiers of biological imaging. PMID:28979939

  3. Transverse vibrations of non-uniform beams. [combined finite element and Rayleigh-Ritz methods

    NASA Technical Reports Server (NTRS)

    Klein, L.

    1974-01-01

    The free vibrations of elastic beams with nonuniform characteristics are investigated theoretically by a new method. The new method is seen to combine the advantages of a finite element approach and of a Rayleigh-Ritz analysis. Comparison with the known analytical results for uniform beams shows good convergence of the method for natural frequencies and modes. For internal shear forces and bending moments, the rate of convergence is less rapid. Results from experiments conducted with a cantilevered helicopter blade with strong nonuniformities and also from alternative theoretical methods, indicate that the theory adequately predicts natural frequencies and mode shapes. General guidelines for efficient use of the method are presented.

  4. Some recent developments of the immersed interface method for flow simulation

    NASA Astrophysics Data System (ADS)

    Xu, Sheng

    2017-11-01

    The immersed interface method is a general methodology for solving PDEs subject to interfaces. In this talk, I will give an overview of some recent developments of the method toward the enhancement of its robustness for flow simulation. In particular, I will present with numerical results how to capture boundary conditions on immersed rigid objects, how to adopt interface triangulation in the method, and how to parallelize the method for flow with moving objects. With these developments, the immersed interface method can achieve accurate and efficient simulation of a flow involving multiple moving complex objects. Thanks to NSF for the support of this work under Grant NSF DMS 1320317.

  5. Towards a framework for testing general relativity with extreme-mass-ratio-inspiral observations

    NASA Astrophysics Data System (ADS)

    Chua, A. J. K.; Hee, S.; Handley, W. J.; Higson, E.; Moore, C. J.; Gair, J. R.; Hobson, M. P.; Lasenby, A. N.

    2018-07-01

    Extreme-mass-ratio-inspiral observations from future space-based gravitational-wave detectors such as LISA will enable strong-field tests of general relativity with unprecedented precision, but at prohibitive computational cost if existing statistical techniques are used. In one such test that is currently employed for LIGO black hole binary mergers, generic deviations from relativity are represented by N deformation parameters in a generalized waveform model; the Bayesian evidence for each of its 2^N combinatorial submodels is then combined into a posterior odds ratio for modified gravity over relativity in a null-hypothesis test. We adapt and apply this test to a generalized model for extreme-mass-ratio inspirals constructed on deformed black hole spacetimes, and focus our investigation on how computational efficiency can be increased through an evidence-free method of model selection. This method is akin to the algorithm known as product-space Markov chain Monte Carlo, but uses nested sampling and improved error estimates from a rethreading technique. We perform benchmarking and robustness checks for the method, and find order-of-magnitude computational gains over regular nested sampling in the case of synthetic data generated from the null model.
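
    As a toy sketch of the combinatorial bookkeeping above (assuming equal prior odds across submodels, which the actual analysis need not do), the 2^N submodel evidences can be combined into a single Bayes factor for deviation versus the null model as follows.

        import numpy as np

        def odds_deviation_vs_null(Z):
            # Z maps a frozenset of deformation-parameter indices to its evidence;
            # Z[frozenset()] is the null (pure general relativity) model.
            alt = [z for k, z in Z.items() if k]
            return np.mean(alt) / Z[frozenset()]   # equal submodel priors assumed

        Z = {frozenset(): 1.0, frozenset({0}): 0.4,
             frozenset({1}): 0.7, frozenset({0, 1}): 0.2}   # toy evidences, N = 2
        print(odds_deviation_vs_null(Z))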

  6. Continuing medical education for general practitioners: a practice format

    PubMed Central

    VanNieuwenborg, Lena; Goossens, Martine; De Lepeleire, Jan; Schoenmakers, Birgitte

    2016-01-01

    Introduction Our current knowledge-based society and the many actualisations within the medical profession require a great responsibility of physicians to continuously develop and refine their skills. In this article, we reflect on some recent findings in the field of continuing education for professional doctors (continuing medical education, CME). Second, we describe the development of a CME from the Academic Center for General Practice (ACHG) of the KU Leuven. Methods First, we performed a literature study and we used unpublished data from a needs assessment performed in 2013 in a selected group of general practitioners. Second, we describe the development of a proposal to establish a CME programme for general practitioners. Results CME should go beyond the sheer acquisition of knowledge, and also seek changes in practice, attitudes and behaviours of physicians. The continuing education offerings are subject to the goals of the organising institution, but even more to the needs and desires of the end user. Conclusions Integrated education is crucial to meet the conditions for efficient and effective continuing education. The ACHG KU Leuven decided to offer a postgraduate programme consisting of a combination of teaching methods: online courses (self-study), contact courses (traditional method) and a materials database. PMID:26850504

  7. LSENS, a general chemical kinetics and sensitivity analysis code for gas-phase reactions: User's guide

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan; Bittker, David A.

    1993-01-01

    A general chemical kinetics and sensitivity analysis code for complex, homogeneous, gas-phase reactions is described. The main features of the code, LSENS, are its flexibility, efficiency and convenience in treating many different chemical reaction models. The models include static system, steady, one-dimensional, inviscid flow, shock initiated reaction, and a perfectly stirred reactor. In addition, equilibrium computations can be performed for several assigned states. An implicit numerical integration method, which works efficiently for the extremes of very fast and very slow reaction, is used for solving the 'stiff' differential equation systems that arise in chemical kinetics. For static reactions, sensitivity coefficients of all dependent variables and their temporal derivatives with respect to the initial values of dependent variables and/or the rate coefficient parameters can be computed. This paper presents descriptions of the code and its usage, and includes several illustrative example problems.

  8. ACCOUNTING FOR CALIBRATION UNCERTAINTIES IN X-RAY ANALYSIS: EFFECTIVE AREAS IN SPECTRAL FITTING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Hyunsook; Kashyap, Vinay L.; Drake, Jeremy J.

    2011-04-20

    While considerable advances have been made to account for statistical uncertainties in astronomical analyses, systematic instrumental uncertainties have been generally ignored. This can be crucial to a proper interpretation of analysis results because instrumental calibration uncertainty is a form of systematic uncertainty. Ignoring it can underestimate error bars and introduce bias into the fitted values of model parameters. Accounting for such uncertainties currently requires extensive case-specific simulations if using existing analysis packages. Here, we present general statistical methods that incorporate calibration uncertainties into spectral analysis of high-energy data. We first present a method based on multiple imputation that can be applied with any fitting method, but is necessarily approximate. We then describe a more exact Bayesian approach that works in conjunction with a Markov chain Monte Carlo based fitting. We explore methods for improving computational efficiency, and in particular detail a method of summarizing calibration uncertainties with a principal component analysis of samples of plausible calibration files. This method is implemented using recently codified Chandra effective area uncertainties for low-resolution spectral analysis and is verified using both simulated and actual Chandra data. Our procedure for incorporating effective area uncertainty is easily generalized to other types of calibration uncertainties.

  9. Rise time analysis of pulsed klystron-modulator for efficiency improvement of linear colliders

    NASA Astrophysics Data System (ADS)

    Oh, J. S.; Cho, M. H.; Namkung, W.; Chung, K. H.; Shintake, T.; Matsumoto, H.

    2000-04-01

    In linear accelerators, the periods during the rise and fall of a klystron-modulator pulse cannot be used to generate RF power. Thus, these periods need to be minimized to get high efficiency, especially in large-scale machines. In this paper, we present a simplified and generalized voltage rise time function of a pulsed modulator with a high-power klystron load using the equivalent circuit analysis method. The optimum pulse waveform is generated when this pulsed power system is tuned with a damping factor of ~0.85. The normalized rise time chart presented in this paper allows one to predict the rise time and pulse shape of the pulsed power system in general. The results can be summarized as follows: The large distributed capacitance in the pulse tank and the operating parameters, Vs × Tp, where Vs is the load voltage and Tp is the pulse width, are the main factors determining the pulse rise time in the high-power RF system. With an RF pulse compression scheme, up to ±3% ripple of the modulator voltage is allowed without serious loss of compressor efficiency, which allows the modulator efficiency to be improved as well. The wiring inductance should be minimized to get the fastest rise time.
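
    The trade-off behind the ~0.85 damping figure quoted above can be reproduced with a normalized second-order step response: lower damping rises faster but rings, while higher damping is slow. The sketch below uses scipy.signal and a generic unit-frequency model, not the paper's equivalent circuit.

        import numpy as np
        from scipy.signal import step

        for zeta in (0.5, 0.7, 0.85, 1.0):
            t, y = step(([1.0], [1.0, 2 * zeta, 1.0]), T=np.linspace(0, 30, 5000))
            rise = t[np.argmax(y >= 0.9)] - t[np.argmax(y >= 0.1)]   # 10-90% rise time
            overshoot = max(y.max() - 1.0, 0.0)
            print(f"zeta={zeta}: rise={rise:.2f}, overshoot={overshoot:.1%}")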

  10. Dynamics of vascular volume and hemodilution of lactated Ringer’s solution in patients during induction of general and epidural anesthesia*

    PubMed Central

    Li, Yu-hong; Lou, Xian-feng; Bao, Fang-ping

    2006-01-01

    Objective: To investigate the dynamics of vascular volume and the plasma dilution of lactated Ringer’s solution in patients during the induction of general and epidural anesthesia. Methods: The hemodilution of i.v. infusion of 1000 ml of lactated Ringer’s solution over 60 min was studied in patients undergoing general (n=31) and epidural (n=22) anesthesia. Heart rate, arterial blood pressure and hemoglobin (Hb) concentration were measured every 5 min during the study. Surgery was not started until the study period had been completed. Results: General anesthesia caused a greater decrease in mean arterial blood pressure (MAP) (mean 15% versus 9%; P<0.01), followed by a more pronounced plasma dilution, blood volume expansion (VE) and blood volume expansion efficiency (VEE). A strong linear correlation between hemodilution and the reduction in MAP (r=−0.50; P<0.01) was found. At the end of infusion, patients undergoing general anesthesia retained 47% (SD 19%) of the infused fluid in the circulation, while those undergoing epidural anesthesia retained 29% (SD 13%) (P<0.001). Correspondingly, a lower urine output (mean 89 ml versus 156 ml; P<0.05) and less extravascular expansion (454 ml versus 551 ml; P<0.05) were found during general anesthesia. Conclusion: We concluded that the induction of general anesthesia caused more hemodilution, volume expansion and volume expansion efficiency than epidural anesthesia, which was triggered only by the lower MAP. PMID:16909476

  11. Protein–protein docking by fast generalized Fourier transforms on 5D rotational manifolds

    PubMed Central

    Padhorny, Dzmitry; Kazennov, Andrey; Zerbe, Brandon S.; Porter, Kathryn A.; Xia, Bing; Mottarella, Scott E.; Kholodov, Yaroslav; Ritchie, David W.; Vajda, Sandor; Kozakov, Dima

    2016-01-01

    Energy evaluation using fast Fourier transforms (FFTs) enables sampling billions of putative complex structures and hence revolutionized rigid protein–protein docking. However, in current methods, efficient acceleration is achieved only in either the translational or the rotational subspace. Developing an efficient and accurate docking method that expands FFT-based sampling to five rotational coordinates is an extensively studied but still unsolved problem. The algorithm presented here retains the accuracy of earlier methods but yields at least 10-fold speedup. The improvement is due to two innovations. First, the search space is treated as the product manifold SO(3)×(SO(3)∖S1), where SO(3) is the rotation group representing the space of the rotating ligand, and (SO(3)∖S1) is the space spanned by the two Euler angles that define the orientation of the vector from the center of the fixed receptor toward the center of the ligand. This representation enables the use of efficient FFT methods developed for SO(3). Second, we select the centers of highly populated clusters of docked structures, rather than the lowest energy conformations, as predictions of the complex, and hence there is no need for very high accuracy in energy evaluation. Therefore, it is sufficient to use a limited number of spherical basis functions in the Fourier space, which increases the efficiency of sampling while retaining the accuracy of docking results. A major advantage of the method is that, in contrast to classical approaches, increasing the number of correlation function terms is computationally inexpensive, which enables using complex energy functions for scoring. PMID:27412858

  12. General chemical kinetics computer program for static and flow reactions, with application to combustion and shock-tube kinetics

    NASA Technical Reports Server (NTRS)

    Bittker, D. A.; Scullin, V. J.

    1972-01-01

    A general chemical kinetics program is described for complex, homogeneous ideal-gas reactions in any chemical system. Its main features are flexibility and convenience in treating many different reaction conditions. The program solves numerically the differential equations describing complex reaction in either a static system or one-dimensional inviscid flow. Applications include ignition and combustion, shock wave reactions, and general reactions in a flowing or static system. An implicit numerical solution method is used which works efficiently for the extreme conditions of a very slow or a very fast reaction. The theory is described, and the computer program and users' manual are included.

  13. A general panel sizing computer code and its application to composite structural panels

    NASA Technical Reports Server (NTRS)

    Anderson, M. S.; Stroud, W. J.

    1978-01-01

    A computer code for obtaining the dimensions of optimum (least mass) stiffened composite structural panels is described. The procedure, which is based on nonlinear mathematical programming and a rigorous buckling analysis, is applicable to general cross sections under general loading conditions causing buckling. A simplified method of accounting for bow-type imperfections is also included. Design studies in the form of structural efficiency charts for axial compression loading are made with the code for blade and hat stiffened panels. The effects on panel mass of imperfections, material strength limitations, and panel stiffness requirements are also examined. Comparisons with previously published experimental data show that accounting for imperfections improves correlation between theory and experiment.

  14. Automatic movie skimming with general tempo analysis

    NASA Astrophysics Data System (ADS)

    Lee, Shih-Hung; Yeh, Chia-Hung; Kuo, C. C. J.

    2003-11-01

    In this research, story units are extracted by general tempo analysis, including the tempos of audio and visual information. Although many schemes have been proposed to successfully segment video data into shots using basic low-level features, how to group shots into meaningful units called story units is still a challenging problem. By focusing on a certain type of video such as sport or news, we can explore models with the specific application domain knowledge. For movie contents, many heuristic rules based on audiovisual clues have been proposed with limited success. We propose a method to extract story units using general tempo analysis. Experimental results are given to demonstrate the feasibility and efficiency of the proposed technique.

  15. Decentralised control of continuous Petri nets

    NASA Astrophysics Data System (ADS)

    Wang, Liewei; Wang, Xu

    2017-05-01

    This paper focuses on decentralised control of systems modelled by continuous Petri nets, in which a target marking control problem is discussed. In some previous works, an efficient ON/OFF strategy-based minimum-time controller was developed. Nevertheless, the convergence is only proved for subclasses like Choice-Free nets. For a general net, the pre-conditions of applying the ON/OFF strategy are not given; therefore, the application scope of the method is unclear. In this work, we provide two sufficient conditions of applying the ON/OFF strategy-based controller to general nets. Furthermore, an extended algorithm for general nets is proposed, in which control laws are computed based on some limited information, without knowing the detailed structure of subsystems.

  16. Multiconfigurational short-range density-functional theory for open-shell systems

    NASA Astrophysics Data System (ADS)

    Hedegård, Erik Donovan; Toulouse, Julien; Jensen, Hans Jørgen Aagaard

    2018-06-01

    Many chemical systems cannot be described by quantum chemistry methods based on a single-reference wave function. Accurate predictions of energetic and spectroscopic properties require a delicate balance between describing the most important configurations (static correlation) and obtaining dynamical correlation efficiently. The former is most naturally done through a multiconfigurational (MC) wave function, whereas the latter can be done by, e.g., perturbation theory. We have employed a different strategy, namely, a hybrid between multiconfigurational wave functions and density-functional theory (DFT) based on range separation. The method is denoted by MC short-range DFT (MC-srDFT) and is more efficient than perturbative approaches as it capitalizes on the efficient treatment of the (short-range) dynamical correlation by DFT approximations. In turn, the method also improves DFT with standard approximations through the ability of multiconfigurational wave functions to recover large parts of the static correlation. Until now, our implementation was restricted to closed-shell systems, and to lift this restriction, we present here the generalization of MC-srDFT to open-shell cases. The additional terms required to treat open-shell systems are derived and implemented in the DALTON program. This new method for open-shell systems is illustrated on dioxygen and [Fe(H2O)6]3+.

  17. Relation of exact Gaussian basis methods to the dephasing representation: Theory and application to time-resolved electronic spectra

    NASA Astrophysics Data System (ADS)

    Sulc, Miroslav; Hernandez, Henar; Martinez, Todd J.; Vanicek, Jiri

    2014-03-01

    We recently showed that the Dephasing Representation (DR) provides an efficient tool for computing ultrafast electronic spectra and that cellularization yields further acceleration [M. Šulc and J. Vaníček, Mol. Phys. 110, 945 (2012)]. Here we focus on increasing its accuracy by first implementing an exact Gaussian basis method (GBM) combining the accuracy of quantum dynamics and efficiency of classical dynamics. The DR is then derived together with ten other methods for computing time-resolved spectra with intermediate accuracy and efficiency. These include the Gaussian DR (GDR), an exact generalization of the DR, in which trajectories are replaced by communicating frozen Gaussians evolving classically with an average Hamiltonian. The methods are tested numerically on time correlation functions and time-resolved stimulated emission spectra in the harmonic potential, pyrazine S0/S1 model, and quartic oscillator. Both the GBM and the GDR are shown to increase the accuracy of the DR. Surprisingly, in chaotic systems the GDR can outperform the presumably more accurate GBM, in which the two bases evolve separately. This research was supported by the Swiss NSF Grant No. 200021_124936/1 and NCCR Molecular Ultrafast Science & Technology (MUST), and by the EPFL.

  18. GIRAF: a method for fast search and flexible alignment of ligand binding interfaces in proteins at atomic resolution

    PubMed Central

    Kinjo, Akira R.; Nakamura, Haruki

    2012-01-01

    Comparison and classification of protein structures are fundamental means to understand protein functions. Due to the computational difficulty and the ever-increasing amount of structural data, however, it is in general not feasible to perform exhaustive all-against-all structure comparisons necessary for comprehensive classifications. To efficiently handle such situations, we have previously proposed a method, now called GIRAF. We herein describe further improvements in the GIRAF protein structure search and alignment method. The GIRAF method achieves extremely efficient search of similar structures of ligand binding sites of proteins by exploiting database indexing of structural features of local coordinate frames. In addition, it produces refined atom-wise alignments by iterative applications of the Hungarian method to the bipartite graph defined for a pair of superimposed structures. By combining the refined alignments based on different local coordinate frames, it is made possible to align structures involving domain movements. We provide detailed accounts for the database design, the search and alignment algorithms as well as some benchmark results. PMID:27493524
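
    The atom-wise refinement step described above is a bipartite matching problem, for which scipy exposes the Hungarian method directly; the sketch below matches two already-superimposed point sets by Euclidean distance (synthetic coordinates, not GIRAF's actual scoring).

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        rng = np.random.default_rng(1)
        A = rng.random((6, 3))                                   # atoms of structure 1
        B = A[rng.permutation(6)] + 0.01 * rng.random((6, 3))    # shuffled, perturbed copy

        cost = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
        rows, cols = linear_sum_assignment(cost)                 # Hungarian method
        print(list(zip(rows, cols)), "total distance:", cost[rows, cols].sum())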

  19. Comparative analysis of autofocus functions in digital in-line phase-shifting holography.

    PubMed

    Fonseca, Elsa S R; Fiadeiro, Paulo T; Pereira, Manuela; Pinheiro, António

    2016-09-20

    Numerical reconstruction of digital holograms relies on a precise knowledge of the original object position. However, there are a number of relevant applications where this parameter is not known in advance and an efficient autofocusing method is required. This paper addresses the problem of finding optimal focusing methods for use in reconstruction of digital holograms of macroscopic amplitude and phase objects, using digital in-line phase-shifting holography in transmission mode. Fifteen autofocus measures, including spatial-, spectral-, and sparsity-based methods, were evaluated for both synthetic and experimental holograms. The Fresnel transform and the angular spectrum reconstruction methods were compared. Evaluation criteria included unimodality, accuracy, resolution, and computational cost. Autofocusing under angular spectrum propagation tends to perform better with respect to accuracy and unimodality criteria. Phase objects are, generally, more difficult to focus than amplitude objects. The normalized variance, the standard correlation, and the Tenenbaum gradient are the most reliable spatial-based metrics, combining computational efficiency with good accuracy and resolution. A good trade-off between focus performance and computational cost was found for the Fresnelet sparsity method.
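
    Two of the better-performing focus measures named above have compact textbook definitions, sketched here on a synthetic image: the normalized variance (variance over mean intensity) and the Tenenbaum gradient (sum of squared Sobel gradients).

        import numpy as np
        from scipy import ndimage

        img = np.random.rand(128, 128)    # stand-in for a reconstructed hologram plane

        def normalized_variance(I):
            mu = I.mean()
            return ((I - mu) ** 2).mean() / mu

        def tenenbaum_gradient(I):
            gx = ndimage.sobel(I, axis=0)
            gy = ndimage.sobel(I, axis=1)
            return (gx ** 2 + gy ** 2).sum()

        print(normalized_variance(img), tenenbaum_gradient(img))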

  20. Testing for genetic association taking into account phenotypic information of relatives.

    PubMed

    Uh, Hae-Won; Wijk, Henk Jan van der; Houwing-Duistermaat, Jeanine J

    2009-12-15

    We investigated efficient case-control association analysis using family data. The outcome of interest was coronary heart disease. We employed existing and new methods that take into account the correlations among related individuals to obtain the proper type I error rates. The methods considered for autosomal single-nucleotide polymorphisms were: 1) generalized estimating equations-based methods, 2) variance-modified Cochran-Armitage (MCA) trend test incorporating kinship coefficients, and 3) genotypic modified quasi-likelihood score test. Additionally, for X-linked single-nucleotide polymorphisms we proposed a two-degrees-of-freedom test. Performance of these methods was tested using Framingham Heart Study 500k array data.
