Sample records for requires efficient methods

  1. A Comparative Investigation of the Efficiency of Two Classroom Observational Methods.

    ERIC Educational Resources Information Center

    Kissel, Mary Ann

    The problem of this study was to determine whether Method A is a more efficient observational method for obtaining activity type behaviors in an individualized classroom than Method B. Method A requires the observer to record the activities of the entire class at given intervals while Method B requires only the activities of selected individuals…

  2. Impact of microbial efficiency to predict MP supply when estimating protein requirements of growing beef cattle from performance.

    PubMed

    Watson, A K; Klopfenstein, T J; Erickson, G E; MacDonald, J C; Wilkerson, V A

    2017-07-01

    Data from 16 trials were compiled to calculate microbial CP (MCP) production and MP requirements of growing cattle on high-forage diets. All cattle were individually fed diets with 28% to 72% corn cobs in addition to either alfalfa, corn silage, or sorghum silage at 18% to 60% of the diet (DM basis). The remainder of the diet consisted of protein supplement. Source of protein within the supplement varied and included urea, blood meal, corn gluten meal, dry distillers grains, feather meal, meat and bone meal, poultry by-product meal, soybean meal, and wet distillers grains. All trials included a urea-only treatment. Intake of all cattle within an experiment was held constant, as a percentage of BW, established by the urea-supplemented group. In each trial the base diet (forage and urea supplement) was MP deficient. Treatments consisted of increasing amounts of test protein replacing the urea supplement. As protein in the diet increased, ADG plateaued. Among experiments, ADG ranged from 0.11 to 0.73 kg. Three methods of calculating microbial efficiency were used to determine MP supply. Gain was then regressed against calculated MP supply to determine MP requirement for maintenance and gain. Method 1 (based on a constant 13% microbial efficiency as used by the beef NRC model) predicted an MP maintenance requirement of 3.8 g/kg BW and 385 g MP/kg gain. Method 2 calculated microbial efficiency using low-quality forage diets and predicted MP requirements of 3.2 g/kg BW for maintenance and 448 g/kg for gain. Method 3 (based on an equation predicting MCP yield from TDN intake, proposed by the Beef Cattle Nutrient Requirements Model [BCNRM]) predicted MP requirements of 3.1 g/kg BW for maintenance and 342 g/kg for gain. The factorial method of calculating MP maintenance requirements accounts for scurf, endogenous urinary, and metabolic fecal protein losses and averaged 4.2 g/kg BW. Cattle performance data demonstrate that formulating diets to meet the beef NRC model recommended MP maintenance requirement (3.8 g/kg BW) works well when using 13% microbial efficiency. Therefore, a change in how microbial efficiency is calculated necessitates a change in the proposed MP maintenance requirement to not oversupply or undersupply RUP. Using the 2016 BCNRM to predict MCP production and formulate diets to meet MP requirements also requires changing the MP maintenance requirement to 3.1 g/kg BW.
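
    A minimal numerical sketch of the regression step described above, assuming MP supply is partitioned into a maintenance term (per kg BW) and a gain term (per kg ADG). The data, coefficients, and variable names below are hypothetical illustrations, not the trial data or the NRC/BCNRM equations.

```python
# Illustrative sketch only: recover MP maintenance (g/kg BW) and gain
# (g MP/kg ADG) requirements by least squares, assuming the simple partition
# MP supply ~ maint_req*BW + gain_req*ADG. All numbers are synthetic.
import numpy as np

rng = np.random.default_rng(0)
bw = rng.uniform(200.0, 350.0, 50)            # body weight, kg (hypothetical)
adg = rng.uniform(0.11, 0.73, 50)             # average daily gain, kg/d
mp_supply = 3.8 * bw + 385.0 * adg + rng.normal(0.0, 20.0, 50)  # g/d, synthetic

X = np.column_stack([bw, adg])
(maint_req, gain_req), *_ = np.linalg.lstsq(X, mp_supply, rcond=None)
print(f"MP maintenance: {maint_req:.2f} g/kg BW, MP gain: {gain_req:.0f} g/kg ADG")
```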

  3. Efficient linear algebra routines for symmetric matrices stored in packed form.

    PubMed

    Ahlrichs, Reinhart; Tsereteli, Kakha

    2002-01-30

    Quantum chemistry methods require various linear algebra routines for symmetric matrices, for example, diagonalization or Cholesky decomposition for positive matrices. We present a small set of these basic routines that are efficient and minimize memory requirements.
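
    As a loose illustration of the storage scheme named in the title (a sketch, not the authors' routines), a symmetric matrix can be held in packed lower-triangular form and addressed by simple index arithmetic, roughly halving the memory requirement.

```python
# Minimal sketch of packed (lower-triangular, row-wise) storage for a
# symmetric matrix; not the published routines, just the indexing idea.
import numpy as np

def pack_lower(a: np.ndarray) -> np.ndarray:
    """Lower triangle of symmetric matrix a as a 1-D packed array (row-wise)."""
    return a[np.tril_indices(a.shape[0])]

def packed_get(ap: np.ndarray, i: int, j: int) -> float:
    """Element (i, j) of the symmetric matrix stored in packed form."""
    if i < j:
        i, j = j, i                      # symmetry: a[i, j] == a[j, i]
    return ap[i * (i + 1) // 2 + j]      # offset of row i plus column j

a = np.array([[4.0, 1.0, 2.0],
              [1.0, 3.0, 0.5],
              [2.0, 0.5, 6.0]])
ap = pack_lower(a)                       # 6 stored numbers instead of 9
assert packed_get(ap, 0, 2) == a[0, 2]
```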

  4. Computational efficiency for the surface renewal method

    NASA Astrophysics Data System (ADS)

    Kelley, Jason; Higgins, Chad

    2018-04-01

    Measuring surface fluxes using the surface renewal (SR) method requires programmatic algorithms for tabulation, algebraic calculation, and data quality control. A number of different methods have been published describing automated calibration of SR parameters. Because the SR method utilizes high-frequency (10 Hz+) measurements, some steps in the flux calculation are computationally expensive, especially when automating SR to perform many iterations of these calculations. Several new algorithms were written that perform the required calculations more efficiently and rapidly; these algorithms were tested for sensitivity to the length of the flux averaging period, the ability to measure over a large range of lag timescales, and overall computational efficiency. The algorithms use signal processing techniques and algebraic simplifications, demonstrating simple modifications that dramatically improve computational efficiency. The results here complement efforts by other authors to standardize a robust and accurate computational SR method. Increased speed of computation grants flexibility in implementing the SR method, opening new avenues for SR to be used in research, for applied monitoring, and in novel field deployments.
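
    For orientation only: surface renewal analyses are commonly built on lagged structure functions of a high-frequency scalar series, and vectorizing that computation is the kind of algebraic/signal-processing shortcut alluded to above. The sketch below uses hypothetical data and is not the authors' algorithms.

```python
# Hedged illustration: vectorized lagged structure functions of a 10 Hz scalar
# series, avoiding per-sample Python loops. Data and lag are hypothetical.
import numpy as np

def structure_functions(t: np.ndarray, lag: int):
    """2nd, 3rd and 5th order structure functions of series t at a given lag."""
    d = t[lag:] - t[:-lag]
    return (d ** 2).mean(), (d ** 3).mean(), (d ** 5).mean()

temp = np.random.default_rng(1).normal(25.0, 0.3, 36000)   # 10 Hz for 1 hour
s2, s3, s5 = structure_functions(temp, lag=5)               # 0.5 s lag at 10 Hz
print(s2, s3, s5)
```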

  5. 40 CFR 136.6 - Method modifications and analytical requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... person or laboratory using a test procedure (analytical method) in this Part. (2) Chemistry of the method... (analytical method) provided that the chemistry of the method or the determinative technique is not changed... prevent efficient recovery of organic pollutants and prevent the method from meeting QC requirements, the...

  6. 40 CFR Appendix D to Part 60 - Required Emission Inventory Information

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... one device operates in series. The method of control efficiency determination shall be indicated (e.g., design efficiency, measured efficiency, estimated efficiency). (iii) Annual average control efficiency, in percent, taking into account control equipment down time. This shall be a combined efficiency when...

  7. 40 CFR Appendix D to Part 60 - Required Emission Inventory Information

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... one device operates in series. The method of control efficiency determination shall be indicated (e.g., design efficiency, measured efficiency, estimated efficiency). (iii) Annual average control efficiency, in percent, taking into account control equipment down time. This shall be a combined efficiency when...

  8. 40 CFR Appendix D to Part 60 - Required Emission Inventory Information

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... one device operates in series. The method of control efficiency determination shall be indicated (e.g., design efficiency, measured efficiency, estimated efficiency). (iii) Annual average control efficiency, in percent, taking into account control equipment down time. This shall be a combined efficiency when...

  9. 40 CFR Appendix D to Part 60 - Required Emission Inventory Information

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... one device operates in series. The method of control efficiency determination shall be indicated (e.g., design efficiency, measured efficiency, estimated efficiency). (iii) Annual average control efficiency, in percent, taking into account control equipment down time. This shall be a combined efficiency when...

  10. 40 CFR Appendix D to Part 60 - Required Emission Inventory Information

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... one device operates in series. The method of control efficiency determination shall be indicated (e.g., design efficiency, measured efficiency, estimated efficiency). (iii) Annual average control efficiency, in percent, taking into account control equipment down time. This shall be a combined efficiency when...

  11. The study of combining Latin Hypercube Sampling method and LU decomposition method (LULHS method) for constructing spatial random field

    NASA Astrophysics Data System (ADS)

    WANG, P. T.

    2015-12-01

    Groundwater modeling requires assigning hydrogeological properties to every numerical grid cell. Due to the lack of detailed information and the inherent spatial heterogeneity, geological properties can be treated as random variables. The hydrogeological properties are assumed to follow a multivariate distribution with spatial correlations. By sampling random numbers from a given statistical distribution and assigning a value to each grid cell, a random field for modeling can be constructed. Therefore, statistical sampling plays an important role in the efficiency of the modeling procedure. Latin Hypercube Sampling (LHS) is a stratified random sampling procedure that provides an efficient way to sample variables from their multivariate distributions. This study combines the stratified random procedure from LHS with simulation using LU decomposition to form LULHS. Both conditional and unconditional simulations of LULHS were developed. The simulation efficiency and spatial correlation of LULHS are compared to three other simulation methods. The results show that for both conditional and unconditional simulation, the LULHS method is more efficient in terms of computational effort: fewer realizations are required to achieve the required statistical accuracy and spatial correlation.
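
    A sketch of the general "LHS marginals plus LU/Cholesky correlation" idea on a 1-D grid with an assumed exponential covariance model; the grid size, correlation length, and sample counts are hypothetical, and this is not the paper's exact LULHS implementation.

```python
# Sketch under stated assumptions: draw Latin Hypercube samples per grid cell,
# map them to standard normals, then impose spatial correlation with the
# lower-triangular (Cholesky) factor of an assumed exponential covariance.
import numpy as np
from scipy.stats import norm, qmc

n_cells, n_real = 100, 50                                   # hypothetical sizes
x = np.arange(n_cells, dtype=float)
cov = np.exp(-np.abs(x[:, None] - x[None, :]) / 10.0)       # exponential covariance
L = np.linalg.cholesky(cov)                                 # lower-triangular factor

sampler = qmc.LatinHypercube(d=n_cells, seed=0)
u = sampler.random(n_real)                                  # stratified uniforms
z = norm.ppf(u)                                             # independent standard normals
fields = z @ L.T                                            # correlated random fields
print(fields.shape)                                         # (n_real, n_cells)
```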

  12. Real time charge efficiency monitoring for nickel electrodes in NICD and NIH2 cells

    NASA Astrophysics Data System (ADS)

    Zimmerman, A. H.

    1987-09-01

    The charge efficiency of nickel-cadmium and nickel-hydrogen battery cells is critical in spacecraft applications for determining the amount of time required for a battery to reach a full state of charge. As the nickel-cadmium or nickel-hydrogen batteries approach about 90 percent state of charge, the charge efficiency begins to drop towards zero, making estimation of the total amount of stored charge uncertain. Charge efficiency estimates are typically based on prior history of available capacity following standardized conditions for charge and discharge. These methods work well as long as performance does not change significantly. A relatively simple method for determining charge efficiencies during real time operation for these battery cells would be a tremendous advantage. Such a method was explored and appears to be quite well suited for application to nickel-cadmium and nickel-hydrogen battery cells. The charge efficiency is monitored in real time, using only voltage measurements as inputs. With further evaluation such a method may provide a means to better manage charge control of batteries, particularly in systems where a high degree of autonomy or system intelligence is required.

  13. Real time charge efficiency monitoring for nickel electrodes in NICD and NIH2 cells

    NASA Technical Reports Server (NTRS)

    Zimmerman, A. H.

    1987-01-01

    The charge efficiency of nickel-cadmium and nickel-hydrogen battery cells is critical in spacecraft applications for determining the amount of time required for a battery to reach a full state of charge. As the nickel-cadmium or nickel-hydrogen batteries approach about 90 percent state of charge, the charge efficiency begins to drop towards zero, making estimation of the total amount of stored charge uncertain. Charge efficiency estimates are typically based on prior history of available capacity following standardized conditions for charge and discharge. These methods work well as long as performance does not change significantly. A relatively simple method for determining charge efficiencies during real time operation for these battery cells would be a tremendous advantage. Such a method was explored and appears to be quite well suited for application to nickel-cadmium and nickel-hydrogen battery cells. The charge efficiency is monitored in real time, using only voltage measurements as inputs. With further evaluation such a method may provide a means to better manage charge control of batteries, particularly in systems where a high degree of autonomy or system intelligence is required.

  14. Preparation of dart tags for use in the field

    USGS Publications Warehouse

    Higham, Joseph R.

    1966-01-01

    Tagging in the field requires an efficient method of preparing the tags for dispensation under a wide range of conditions. The method described here was very efficient in an extensive tagging program on Oahe Reservoir, South Dakota.

  15. A comparison of several methods of solving nonlinear regression groundwater flow problems

    USGS Publications Warehouse

    Cooley, Richard L.

    1985-01-01

    Computational efficiency and computer memory requirements for four methods of minimizing functions were compared for four test nonlinear-regression steady state groundwater flow problems. The fastest methods were the Marquardt and quasi-linearization methods, which required almost identical computer times and numbers of iterations; the next fastest was the quasi-Newton method, and last was the Fletcher-Reeves method, which did not converge in 100 iterations for two of the problems. The fastest method per iteration was the Fletcher-Reeves method, and this was followed closely by the quasi-Newton method. The Marquardt and quasi-linearization methods were slower. For all four methods the speed per iteration was directly related to the number of parameters in the model. However, this effect was much more pronounced for the Marquardt and quasi-linearization methods than for the other two. Hence the quasi-Newton (and perhaps Fletcher-Reeves) method might be more efficient than either the Marquardt or quasi-linearization methods if the number of parameters in a particular model were large, although this remains to be proven. The Marquardt method required somewhat less central memory than the quasi-linearization method for three of the four problems. For all four problems the quasi-Newton method required roughly two thirds to three quarters of the memory required by the Marquardt method, and the Fletcher-Reeves method required slightly less memory than the quasi-Newton method. Memory requirements were not excessive for any of the four methods.
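
    To make the Marquardt comparison concrete, here is a hedged sketch of a single damped Gauss-Newton (Marquardt) update applied to a toy curve-fitting problem. It is not Cooley's groundwater regression code; the model, data, and damping value are hypothetical.

```python
# One Marquardt (Levenberg-Marquardt) step: solve (J^T J + lam*I) dp = -J^T r.
# Toy exponential fit for illustration only.
import numpy as np

def marquardt_step(residual, jacobian, p, lam):
    r = residual(p)
    J = jacobian(p)
    A = J.T @ J + lam * np.eye(p.size)
    return p + np.linalg.solve(A, -J.T @ r)

t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * t)                                   # synthetic "observations"
residual = lambda p: p[0] * np.exp(p[1] * t) - y
jacobian = lambda p: np.column_stack([np.exp(p[1] * t),
                                      p[0] * t * np.exp(p[1] * t)])

p = np.array([1.0, -1.0])                                    # starting guess
for _ in range(20):
    p = marquardt_step(residual, jacobian, p, lam=1e-3)
print(p)                                                     # approaches [2.0, -1.5]
```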

  16. Research on Matching Method of Power Supply Parameters for Dual Energy Source Electric Vehicles

    NASA Astrophysics Data System (ADS)

    Jiang, Q.; Luo, M. J.; Zhang, S. K.; Liao, M. W.

    2018-03-01

    A new type of power source is proposed, based on the traffic signal matching method for a dual energy source power supply composed of batteries and supercapacitors. First, the power characteristics required to meet the dynamic performance of the EV are analyzed, the energy characteristics required to meet the mileage requirements are studied, and the physical boundary characteristics required to meet the physical conditions of the power supply are investigated. Secondly, the parameter matching design with the highest energy efficiency is adopted to select the optimal parameter group using the method of matching deviation. Finally, simulation analysis of the vehicle is carried out in MATLAB/Simulink; the mileage and energy efficiency of the dual energy sources are analyzed under different parameter models, and the rationality of the matching method is verified.

  17. Highly efficient and exact method for parallelization of grid-based algorithms and its implementation in DelPhi

    PubMed Central

    Li, Chuan; Li, Lin; Zhang, Jie; Alexov, Emil

    2012-01-01

    The Gauss-Seidel method is a standard iterative numerical method widely used to solve a system of equations and, in general, is more efficient compared to other iterative methods, such as the Jacobi method. However, the standard implementation of the Gauss-Seidel method restricts its utilization in parallel computing due to its requirement of using updated neighboring values (i.e., in the current iteration) as soon as they are available. Here we report an efficient and exact (not requiring assumptions) method to parallelize iterations and to reduce the computational time as a linear/nearly linear function of the number of CPUs. In contrast to other existing solutions, our method does not require any assumptions and is equally applicable to solving linear and nonlinear equations. This approach is implemented in the DelPhi program, which is a finite difference Poisson-Boltzmann equation solver used to model electrostatics in molecular biology. This development makes the iterative procedure for obtaining the electrostatic potential distribution in the parallelized DelPhi severalfold faster than in the serial code. Further, we demonstrate the advantages of the new parallelized DelPhi by computing the electrostatic potential and the corresponding energies of large supramolecular structures. PMID:22674480
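
    For reference, a plain serial Gauss-Seidel sweep is sketched below to show the data dependence (each update uses values already refreshed in the same sweep) that any parallelization has to work around. This sketch is generic and is not the DelPhi implementation.

```python
# Baseline serial Gauss-Seidel iteration for A x = b (illustration only).
import numpy as np

def gauss_seidel(A, b, iters=200):
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            # Uses the already-updated entries x[:i] from the current sweep,
            # the dependence that complicates naive parallelization.
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
    return x

A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([2.0, 4.0, 10.0])
print(gauss_seidel(A, b))          # agrees with np.linalg.solve(A, b)
```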

  18. How to Compare the Security Quality Requirements Engineering (SQUARE) Method with Other Methods

    DTIC Science & Technology

    2007-08-01

    Attack Trees for Modeling and Analysis 10 2.8 Misuse and Abuse Cases 10 2.9 Formal Methods 11 2.9.1 Software Cost Reduction 12 2.9.2 Common...modern or efficient techniques. • Requirements analysis typically is either not performed at all (identified requirements are directly specified without...any analysis or modeling) or analysis is restricted to functional re- quirements and ignores quality requirements, other nonfunctional requirements

  19. Efficient Methods of Estimating Switchgrass Biomass Supplies

    USDA-ARS?s Scientific Manuscript database

    Switchgrass (Panicum virgatum L.) is being developed as a biofuel feedstock for the United States. Efficient and accurate methods to estimate switchgrass biomass feedstock supply within a production area will be required by biorefineries. Our main objective was to determine the effectiveness of in...

  20. Geometric multigrid for an implicit-time immersed boundary method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guy, Robert D.; Philip, Bobby; Griffith, Boyce E.

    2014-10-12

    The immersed boundary (IB) method is an approach to fluid-structure interaction that uses Lagrangian variables to describe the deformations and resulting forces of the structure and Eulerian variables to describe the motion and forces of the fluid. Explicit time stepping schemes for the IB method require solvers only for Eulerian equations, for which fast Cartesian grid solution methods are available. Such methods are relatively straightforward to develop and are widely used in practice but often require very small time steps to maintain stability. Implicit-time IB methods permit the stable use of large time steps, but efficient implementations of such methods require significantly more complex solvers that effectively treat both Lagrangian and Eulerian variables simultaneously. Moreover, several different approaches to solving the coupled Lagrangian-Eulerian equations have been proposed, but a complete understanding of this problem is still emerging. This paper presents a geometric multigrid method for an implicit-time discretization of the IB equations. This multigrid scheme uses a generalization of box relaxation that is shown to handle problems in which the physical stiffness of the structure is very large. Numerical examples are provided to illustrate the effectiveness and efficiency of the algorithms described herein. Finally, these tests show that using multigrid as a preconditioner for a Krylov method yields improvements in both robustness and efficiency as compared to using multigrid as a solver. They also demonstrate that with a time step 100–1000 times larger than that permitted by an explicit IB method, the multigrid-preconditioned implicit IB method is approximately 50–200 times more efficient than the explicit method.

  1. Integration of TomoPy and the ASTRA toolbox for advanced processing and reconstruction of tomographic synchrotron data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pelt, Daniël M.; Gürsoy, Dogˇa; Palenstijn, Willem Jan

    2016-04-28

    The processing of tomographic synchrotron data requires advanced and efficient software to be able to produce accurate results in reasonable time. In this paper, the integration of two software toolboxes, TomoPy and the ASTRA toolbox, which, together, provide a powerful framework for processing tomographic data, is presented. The integration combines the advantages of both toolboxes, such as the user-friendliness and CPU-efficient methods of TomoPy and the flexibility and optimized GPU-based reconstruction methods of the ASTRA toolbox. It is shown that both toolboxes can be easily installed and used together, requiring only minor changes to existing TomoPy scripts. Furthermore, it is shown that the efficient GPU-based reconstruction methods of the ASTRA toolbox can significantly decrease the time needed to reconstruct large datasets, and that advanced reconstruction methods can improve reconstruction quality compared with TomoPy's standard reconstruction method.

  2. Integration of TomoPy and the ASTRA toolbox for advanced processing and reconstruction of tomographic synchrotron data

    PubMed Central

    Pelt, Daniël M.; Gürsoy, Doǧa; Palenstijn, Willem Jan; Sijbers, Jan; De Carlo, Francesco; Batenburg, Kees Joost

    2016-01-01

    The processing of tomographic synchrotron data requires advanced and efficient software to be able to produce accurate results in reasonable time. In this paper, the integration of two software toolboxes, TomoPy and the ASTRA toolbox, which, together, provide a powerful framework for processing tomographic data, is presented. The integration combines the advantages of both toolboxes, such as the user-friendliness and CPU-efficient methods of TomoPy and the flexibility and optimized GPU-based reconstruction methods of the ASTRA toolbox. It is shown that both toolboxes can be easily installed and used together, requiring only minor changes to existing TomoPy scripts. Furthermore, it is shown that the efficient GPU-based reconstruction methods of the ASTRA toolbox can significantly decrease the time needed to reconstruct large datasets, and that advanced reconstruction methods can improve reconstruction quality compared with TomoPy’s standard reconstruction method. PMID:27140167

  3. An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 1. Theory

    USGS Publications Warehouse

    Yen, Chung-Cheng; Guymon, Gary L.

    1990-01-01

    An efficient probabilistic model is developed and cascaded with a deterministic model for predicting water table elevations in regional aquifers. The objective is to quantify model uncertainty where precise estimates of water table elevations may be required. The probabilistic model is based on the two-point probability method, which only requires prior knowledge of the uncertain variables' means and coefficients of variation. The two-point estimate method is theoretically developed and compared with the Monte Carlo simulation method. The results of comparisons using hypothetical deterministic problems indicate that the two-point estimate method is only generally valid for linear problems where the coefficients of variation of uncertain parameters (for example, storage coefficient and hydraulic conductivity) are small. The two-point estimate method may be applied to slightly nonlinear problems with good results, provided coefficients of variation are small. In such cases, the two-point estimate method is much more efficient than the Monte Carlo method provided the number of uncertain variables is less than eight.
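
    A hedged sketch of the two-point estimate idea alongside a Monte Carlo reference, using a toy two-variable response in place of the groundwater model; the response function, means, and coefficients of variation are hypothetical.

```python
# Rosenblueth-style two-point estimate (2^n model runs at mean +/- one standard
# deviation) versus brute-force Monte Carlo for a toy response. Illustration
# only; not the regional aquifer model from the paper.
import itertools
import numpy as np

def head(storage, conductivity):
    """Hypothetical response standing in for a water-table elevation."""
    return 10.0 + 2.0 * storage + 0.5 / conductivity

means = np.array([0.2, 1.0])
cvs = np.array([0.1, 0.1])                   # small coefficients of variation
sds = means * cvs

# Two-point estimate: evaluate the model at every +/- 1 sd combination.
pts = [head(*(means + np.array(s) * sds))
       for s in itertools.product((-1, 1), repeat=2)]
tpe_mean, tpe_sd = np.mean(pts), np.std(pts)

# Monte Carlo reference with many more model runs.
rng = np.random.default_rng(0)
samples = rng.normal(means, sds, size=(100000, 2))
mc = head(samples[:, 0], samples[:, 1])
print(tpe_mean, mc.mean(), tpe_sd, mc.std())
```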

  4. An Efficient Deterministic-Probabilistic Approach to Modeling Regional Groundwater Flow: 1. Theory

    NASA Astrophysics Data System (ADS)

    Yen, Chung-Cheng; Guymon, Gary L.

    1990-07-01

    An efficient probabilistic model is developed and cascaded with a deterministic model for predicting water table elevations in regional aquifers. The objective is to quantify model uncertainty where precise estimates of water table elevations may be required. The probabilistic model is based on the two-point probability method, which only requires prior knowledge of the uncertain variables' means and coefficients of variation. The two-point estimate method is theoretically developed and compared with the Monte Carlo simulation method. The results of comparisons using hypothetical deterministic problems indicate that the two-point estimate method is only generally valid for linear problems where the coefficients of variation of uncertain parameters (for example, storage coefficient and hydraulic conductivity) are small. The two-point estimate method may be applied to slightly nonlinear problems with good results, provided coefficients of variation are small. In such cases, the two-point estimate method is much more efficient than the Monte Carlo method provided the number of uncertain variables is less than eight.

  5. Calorimetric Measurement for Internal Conversion Efficiency of Photovoltaic Cells/Modules Based on Electrical Substitution Method

    NASA Astrophysics Data System (ADS)

    Saito, Terubumi; Tatsuta, Muneaki; Abe, Yamato; Takesawa, Minato

    2018-02-01

    We have succeeded in the direct measurement of solar cell/module internal conversion efficiency based on a calorimetric method, or electrical substitution method, by which the absorbed radiant power is determined by replacing the heat absorbed in the cell/module with electrical power. The technique is advantageous in that the reflectance and transmittance measurements, which are required in the conventional methods, are not necessary. Also, the internal quantum efficiency can be derived from conversion efficiencies by using the average photon energy. Agreement of the measured data with the values estimated from the nominal values supports the validity of this technique.

  6. Prefield methods: streamlining forest or nonforest determinations to increase inventory efficiency

    Treesearch

    Sara Goeking; Gretchen Moisen; Kevin Megown; Jason Toombs

    2009-01-01

    Interior West Forest Inventory and Analysis has developed prefield protocols to distinguish forested plots that require field visits from nonforested plots that do not require field visits. Recent innovations have increased the efficiency of the prefield process. First, the incorporation of periodic inventory data into a prefield database increased the amount of...

  7. Design of spur gears for improved efficiency

    NASA Technical Reports Server (NTRS)

    Anderson, N. E.; Loewenthal, S. H.

    1981-01-01

    A method to calculate spur gear system power loss for a wide range of gear geometries and operating conditions is used to determine design requirements for an efficient gearset. The effects of spur gear size, pitch, ratio, pitch-line-velocity and load on efficiency are shown. A design example is given to illustrate how the method is to be applied. In general, peak efficiencies were found to be greater for larger diameter and fine pitched gears and tare (no-load) losses were found to be significant.

  8. Efficient Algorithms for Estimating the Absorption Spectrum within Linear Response TDDFT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brabec, Jiri; Lin, Lin; Shao, Meiyue

    We present two iterative algorithms for approximating the absorption spectrum of molecules within the linear response time-dependent density functional theory (TDDFT) framework. These methods do not attempt to compute eigenvalues or eigenvectors of the linear response matrix. They are designed to approximate the absorption spectrum as a function directly. They take advantage of the special structure of the linear response matrix. Neither method requires the linear response matrix to be constructed explicitly. They only require a procedure that performs the multiplication of the linear response matrix with a vector. These methods can also be easily modified to efficiently estimate the density of states (DOS) of the linear response matrix without computing the eigenvalues of this matrix. We show by computational experiments that the methods proposed in this paper can be much more efficient than methods that are based on the exact diagonalization of the linear response matrix. We show that they can also be more efficient than real-time TDDFT simulations. We compare the pros and cons of these methods in terms of their accuracy as well as their computational and storage cost.
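
    As a generic illustration of the matrix-vector-product-only philosophy (this is not the authors' algorithms), a Lanczos recursion touches the matrix only through matvecs and yields a small tridiagonal matrix whose Ritz values give a cheap, broadened picture of the spectrum or density of states.

```python
# Matvec-only Lanczos sketch: build a k x k tridiagonal matrix, then broaden
# its Ritz values with Lorentzians to approximate a spectral density.
# Toy symmetric matrix; not a TDDFT linear response matrix.
import numpy as np

def lanczos_ritz(matvec, n, k, rng):
    v = rng.normal(size=n)
    v /= np.linalg.norm(v)
    V, alphas, betas = [v], [], []
    for j in range(k):
        w = matvec(V[j])
        a = V[j] @ w
        alphas.append(a)
        w = w - a * V[j] - (betas[-1] * V[j - 1] if betas else 0.0)
        b = np.linalg.norm(w)
        betas.append(b)
        if b < 1e-12:
            break
        V.append(w / b)
    T = (np.diag(alphas)
         + np.diag(betas[:len(alphas) - 1], 1)
         + np.diag(betas[:len(alphas) - 1], -1))
    return np.linalg.eigvalsh(T)

rng = np.random.default_rng(0)
A = rng.normal(size=(300, 300))
A = (A + A.T) / 2.0                                  # toy symmetric matrix
ritz = lanczos_ritz(lambda x: A @ x, 300, 40, rng)

grid = np.linspace(ritz.min() - 1.0, ritz.max() + 1.0, 400)
eta = 0.2                                            # Lorentzian broadening
dos = ((eta / np.pi) / ((grid[:, None] - ritz[None, :]) ** 2 + eta ** 2)).sum(axis=1)
print(grid[dos.argmax()])                            # location of the strongest peak
```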

  9. 40 CFR 63.9322 - How do I determine the emission capture system efficiency?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... capture system efficiency? 63.9322 Section 63.9322 Protection of Environment ENVIRONMENTAL PROTECTION... capture system efficiency? You must use the procedures and test methods in this section to determine capture efficiency as part of the performance test required by § 63.9310. (a) Assuming 100 percent capture...

  10. 40 CFR 63.9322 - How do I determine the emission capture system efficiency?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... capture system efficiency? 63.9322 Section 63.9322 Protection of Environment ENVIRONMENTAL PROTECTION... capture system efficiency? You must use the procedures and test methods in this section to determine capture efficiency as part of the performance test required by § 63.9310. (a) Assuming 100 percent capture...

  11. Method and apparatus for improved efficiency in a pulse-width-modulated alternating current motor drive

    DOEpatents

    Konrad, C.E.; Boothe, R.W.

    1994-02-15

    A scheme for optimizing the efficiency of an AC motor drive operated in a pulse-width-modulated mode provides that the modulation frequency of the power furnished to the motor is a function of commanded motor torque and is higher at lower torque requirements than at higher torque requirements. 6 figures.
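
    A hypothetical sketch of the control idea described in the abstract: the PWM switching frequency is chosen as a decreasing function of commanded torque, so it is higher at low torque than at high torque. The frequency range and torque limits below are illustration values, not figures from the patent.

```python
# Hypothetical torque-to-PWM-frequency schedule (linear interpolation used only
# for illustration; the patent does not specify these numbers).
def pwm_frequency_hz(torque_cmd_nm: float,
                     t_min: float = 0.0, t_max: float = 200.0,
                     f_low_torque: float = 8000.0, f_high_torque: float = 2000.0) -> float:
    t = min(max(torque_cmd_nm, t_min), t_max)        # clamp the torque command
    frac = (t - t_min) / (t_max - t_min)
    return f_low_torque + frac * (f_high_torque - f_low_torque)

print(pwm_frequency_hz(20.0), pwm_frequency_hz(180.0))   # higher Hz at low torque
```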

  12. Method and apparatus for improved efficiency in a pulse-width-modulated alternating current motor drive

    DOEpatents

    Konrad, C.E.; Boothe, R.W.

    1996-01-23

    A scheme for optimizing the efficiency of an AC motor drive operated in a pulse-width-modulated mode provides that the modulation frequency of the power furnished to the motor is a function of commanded motor torque and is higher at lower torque requirements than at higher torque requirements. 6 figs.

  13. Method and apparatus for improved efficiency in a pulse-width-modulated alternating current motor drive

    DOEpatents

    Konrad, Charles E.; Boothe, Richard W.

    1996-01-01

    A scheme for optimizing the efficiency of an AC motor drive operated in a pulse-width-modulated mode provides that the modulation frequency of the power furnished to the motor is a function of commanded motor torque and is higher at lower torque requirements than at higher torque requirements.

  14. Method and apparatus for improved efficiency in a pulse-width-modulated alternating current motor drive

    DOEpatents

    Konrad, Charles E.; Boothe, Richard W.

    1994-01-01

    A scheme for optimizing the efficiency of an AC motor drive operated in a pulse-width-modulated mode provides that the modulation frequency of the power furnished to the motor is a function of commanded motor torque and is higher at lower torque requirements than at higher torque requirements.

  15. A modified indirect mathematical model for evaluation of ethanol production efficiency in industrial-scale continuous fermentation processes.

    PubMed

    Canseco Grellet, M A; Castagnaro, A; Dantur, K I; De Boeck, G; Ahmed, P M; Cárdenas, G J; Welin, B; Ruiz, R M

    2016-10-01

    To calculate fermentation efficiency in a continuous ethanol production process, we aimed to develop a robust mathematical method based on the analysis of metabolic by-product formation. This method is in contrast to the traditional way of calculating ethanol fermentation efficiency, where the ratio between the ethanol produced and the sugar consumed is expressed as a percentage of the theoretical conversion yield. Comparison between the two methods, at industrial scale and in sensitivity studies, showed that the indirect method was more robust and gave slightly higher fermentation efficiency values, although fermentation efficiency of the industrial process was found to be low (~75%). The traditional calculation method is simpler than the indirect method as it only requires a few chemical determinations in samples collected. However, a minor error in any measured parameter will have an important impact on the calculated efficiency. In contrast, the indirect method of calculation requires a greater number of determinations but is much more robust since an error in any parameter will only have a minor effect on the fermentation efficiency value. The application of the indirect calculation methodology in order to evaluate the real situation of the process and to reach an optimum fermentation yield for an industrial-scale ethanol production is recommended. Once a high fermentation yield has been reached the traditional method should be used to maintain the control of the process. Upon detection of lower yields in an optimized process the indirect method should be employed as it permits a more accurate diagnosis of causes of yield losses in order to correct the problem rapidly. The low fermentation efficiency obtained in this study shows an urgent need for industrial process optimization where the indirect calculation methodology will be an important tool to determine process losses. © 2016 The Society for Applied Microbiology.

  16. Comparative study on antibody immobilization strategies for efficient circulating tumor cell capture.

    PubMed

    Ates, Hatice Ceren; Ozgur, Ebru; Kulah, Haluk

    2018-03-23

    Methods for isolation and quantification of circulating tumor cells (CTCs) are attracting more attention every day, as the data for their unprecedented clinical utility continue to grow. However, the challenge is that CTCs are extremely rare (as low as 1 in a billion of blood cells) and a highly sensitive and specific technology is required to isolate CTCs from blood cells. Methods utilizing microfluidic systems for immunoaffinity-based CTC capture are preferred, especially when purity is the prime requirement. However, antibody immobilization strategy significantly affects the efficiency of such systems. In this study, two covalent and two bioaffinity antibody immobilization methods were assessed with respect to their CTC capture efficiency and selectivity, using an anti-epithelial cell adhesion molecule (EpCAM) as the capture antibody. Surface functionalization was realized on plain SiO 2 surfaces, as well as in microfluidic channels. Surfaces functionalized with different antibody immobilization methods are physically and chemically characterized at each step of functionalization. MCF-7 breast cancer and CCRF-CEM acute lymphoblastic leukemia cell lines were used as EpCAM positive and negative cell models, respectively, to assess CTC capture efficiency and selectivity. Comparisons reveal that bioaffinity based antibody immobilization involving streptavidin attachment with glutaraldehyde linker gave the highest cell capture efficiency. On the other hand, a covalent antibody immobilization method involving direct antibody binding by N-(3-dimethylaminopropyl)-N'-ethylcarbodiimide hydrochloride (EDC)-N-hydroxysuccinimide (NHS) reaction was found to be more time and cost efficient with a similar cell capture efficiency. All methods provided very high selectivity for CTCs with EpCAM expression. It was also demonstrated that antibody immobilization via EDC-NHS reaction in a microfluidic channel leads to high capture efficiency and selectivity.

  17. An opportunity cost approach to sample size calculation in cost-effectiveness analysis.

    PubMed

    Gafni, A; Walter, S D; Birch, S; Sendi, P

    2008-01-01

    The inclusion of economic evaluations as part of clinical trials has led to concerns about the adequacy of trial sample size to support such analysis. The analytical tool of cost-effectiveness analysis is the incremental cost-effectiveness ratio (ICER), which is compared with a threshold value (lambda) as a method to determine the efficiency of a health-care intervention. Accordingly, many of the methods suggested for calculating the sample size requirements for the economic component of clinical trials are based on the properties of the ICER. However, use of the ICER and a threshold value as a basis for determining efficiency has been shown to be inconsistent with the economic concept of opportunity cost. As a result, the validity of the ICER-based approaches to sample size calculations can be challenged. Alternative methods for determining improvements in efficiency that do not depend upon ICER values have been presented in the literature. In this paper, we develop an opportunity cost approach to calculating sample size for economic evaluations alongside clinical trials, and illustrate the approach using a numerical example. We compare the sample size requirement of the opportunity cost method with the ICER threshold method. In general, either method may yield the larger required sample size. However, the opportunity cost approach, although simple to use, has additional data requirements. We believe that the additional data requirements represent a small price to pay for being able to perform an analysis consistent with both the concept of opportunity cost and the problem faced by decision makers. Copyright (c) 2007 John Wiley & Sons, Ltd.
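
    For readers unfamiliar with the quantities being debated, the conventional ICER threshold rule referred to above can be written in a few lines; the costs, effects, and threshold below are hypothetical, and this is the standard rule rather than the authors' opportunity cost alternative.

```python
# Hypothetical numbers illustrating the ICER threshold decision rule.
delta_cost = 12000.0          # incremental cost of the new intervention
delta_effect = 0.4            # incremental effectiveness (e.g., QALYs gained)
lam = 50000.0                 # threshold value (lambda) per unit of effectiveness

icer = delta_cost / delta_effect
net_monetary_benefit = lam * delta_effect - delta_cost
adopt = icer < lam            # equivalently, net_monetary_benefit > 0
print(icer, net_monetary_benefit, adopt)
```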

  18. Application of the conjugate-gradient method to ground-water models

    USGS Publications Warehouse

    Manteuffel, T.A.; Grove, D.B.; Konikow, Leonard F.

    1984-01-01

    The conjugate-gradient method can efficiently and accurately solve finite-difference approximations to the ground-water flow equation. An aquifer-simulation model using the conjugate-gradient method was applied to a problem of ground-water flow in an alluvial aquifer at the Rocky Mountain Arsenal, Denver, Colorado. For this application, the accuracy and efficiency of the conjugate-gradient method compared favorably with other available methods for steady-state flow. However, its efficiency relative to other available methods depends on the nature of the specific problem. The main advantage of the conjugate-gradient method is that it does not require the use of iteration parameters, thereby eliminating this partly subjective procedure. (USGS)
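
    A textbook conjugate-gradient loop for a symmetric positive-definite system is sketched below to illustrate the "no iteration parameters" point; it is a generic sketch, not the USGS aquifer-simulation model.

```python
# Generic conjugate-gradient solver for A x = b with A symmetric positive
# definite; no relaxation or iteration parameter has to be chosen.
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))        # matches np.linalg.solve(A, b)
```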

  19. Hubless satellite communications networks

    NASA Technical Reports Server (NTRS)

    Robinson, Peter Alan

    1994-01-01

    Frequency Comb Multiple Access (FCMA) is a new combined modulation and multiple access method which will allow cheap hubless Very Small Aperture Terminal (VSAT) networks to be constructed. Theoretical results show bandwidth efficiency and power efficiency improvements over other modulation and multiple access methods. Costs of the VSAT network are reduced dramatically since a hub station is not required.

  20. Multiscale Reactive Molecular Dynamics

    DTIC Science & Technology

    2012-08-15

    biology cannot be described without considering electronic and nuclear-level dynamics and their coupling to slower, cooperative motions of the system ...coupling to slower, cooperative motions of the system . These inherently multiscale problems require computationally efficient and accurate methods to...condensed phase systems with computational efficiency orders of magnitudes greater than currently possible with ab initio simulation methods, thus

  1. Establishment of a new method to quantitatively evaluate hyphal fusion ability in Aspergillus oryzae.

    PubMed

    Tsukasaki, Wakako; Maruyama, Jun-Ichi; Kitamoto, Katsuhiko

    2014-01-01

    Hyphal fusion is involved in the formation of an interconnected colony in filamentous fungi, and it is the first process in sexual/parasexual reproduction. However, it was difficult to evaluate hyphal fusion efficiency due to the low frequency in Aspergillus oryzae in spite of its industrial significance. Here, we established a method to quantitatively evaluate the hyphal fusion ability of A. oryzae with mixed culture of two different auxotrophic strains, where the ratio of heterokaryotic conidia growing without the auxotrophic requirements reflects the hyphal fusion efficiency. By employing this method, it was demonstrated that AoSO and AoFus3 are required for hyphal fusion, and that hyphal fusion efficiency of A. oryzae was increased by depleting nitrogen source, including large amounts of carbon source, and adjusting pH to 7.0.

  2. Efficient path-based computations on pedigree graphs with compact encodings

    PubMed Central

    2012-01-01

    A pedigree is a diagram of family relationships, and it is often used to determine the mode of inheritance (dominant, recessive, etc.) of genetic diseases. Along with rapidly growing knowledge of genetics and accumulation of genealogy information, pedigree data is becoming increasingly important. In large pedigree graphs, path-based methods for efficiently computing genealogical measurements, such as inbreeding and kinship coefficients of individuals, depend on efficient identification and processing of paths. In this paper, we propose a new compact path encoding scheme on large pedigrees, accompanied by an efficient algorithm for identifying paths. We demonstrate the utilization of our proposed method by applying it to the inbreeding coefficient computation. We present time and space complexity analysis, and also manifest the efficiency of our method for evaluating inbreeding coefficients as compared to previous methods by experimental results using pedigree graphs with real and synthetic data. Both theoretical and experimental results demonstrate that our method is more scalable and efficient than previous methods in terms of time and space requirements. PMID:22536898
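
    For orientation (this is not the paper's compact path-encoding scheme), the quantities involved can be computed on a tiny hypothetical pedigree with the standard recursive kinship definition; the inbreeding coefficient of an individual equals the kinship coefficient of its parents.

```python
# Standard recursive kinship on a toy pedigree (hypothetical individuals);
# illustration of the measurements, not the paper's path-based algorithm.
from functools import lru_cache

# child -> (parent1, parent2); individuals absent from the dict are founders.
parents = {"C": ("A", "B"), "D": ("A", "B"), "E": ("C", "D")}

def depth(i):
    """Generation depth: founders are 0, children one more than their parents."""
    return 0 if i not in parents else 1 + max(depth(p) for p in parents[i])

@lru_cache(maxsize=None)
def kinship(i, j):
    if i == j:
        f, m = parents.get(i, (None, None))
        return 0.5 * (1.0 + (kinship(f, m) if f is not None else 0.0))
    # Recurse through the later-generation individual so an ancestor is never
    # expanded in terms of its own descendant.
    if depth(i) < depth(j):
        i, j = j, i
    if i not in parents:
        return 0.0                     # two unrelated founders
    f, m = parents[i]
    return 0.5 * (kinship(f, j) + kinship(m, j))

print(kinship("C", "D"))               # 0.25 for full sibs
print(2.0 * kinship("E", "E") - 1.0)   # inbreeding coefficient of E = 0.25
```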

  3. Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.

    PubMed

    You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary

    2011-02-01

    The statistical power of cluster randomized trials depends on two sample size components, the number of clusters per group and the numbers of individuals within clusters (cluster size). Variable cluster sizes are common and this variation alone may have significant impact on study power. Previous approaches have taken this into account by either adjusting total sample size using a designated design effect or adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using t-test, and use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes and show that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for an unequal cluster sizes trial to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret. It connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirements with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment in mean cluster size alone or simultaneous adjustment in mean cluster size and number of clusters, and is a flexible alternative to and a useful complement to existing methods. Comparison indicated that we have defined a relative efficiency that is greater than the relative efficiency in the literature under some conditions. Our measure of relative efficiency might be less than the measure in the literature under some conditions, underestimating the relative efficiency. The relative efficiency of unequal versus equal cluster sizes defined using the noncentrality parameter suggests a sample size approach that is a flexible alternative and a useful complement to existing methods.
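
    As a rough companion to the discussion above, the widely used design-effect approximation for variable cluster sizes can be computed in a few lines. This is the conventional approximation (not the authors' noncentrality-parameter-based measure), and all inputs below are hypothetical.

```python
# Conventional design-effect inflation for a cluster randomized trial with
# variable cluster sizes; shown only to illustrate the kind of sample-size
# adjustment under discussion, not the noncentrality-parameter approach.
def required_n_per_arm(n_individual: float, icc: float,
                       mean_cluster_size: float, cv_cluster_size: float) -> float:
    deff = 1.0 + ((cv_cluster_size ** 2 + 1.0) * mean_cluster_size - 1.0) * icc
    return n_individual * deff

# Hypothetical inputs: 64 subjects per arm if individually randomized,
# ICC 0.05, mean cluster size 20, 40% coefficient of variation in cluster size.
print(required_n_per_arm(64, 0.05, 20, 0.0))   # equal cluster sizes
print(required_n_per_arm(64, 0.05, 20, 0.4))   # unequal sizes require more
```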

  4. An efficient multilevel optimization method for engineering design

    NASA Technical Reports Server (NTRS)

    Vanderplaats, G. N.; Yang, Y. J.; Kim, D. S.

    1988-01-01

    An efficient multilevel deisgn optimization technique is presented. The proposed method is based on the concept of providing linearized information between the system level and subsystem level optimization tasks. The advantages of the method are that it does not require optimum sensitivities, nonlinear equality constraints are not needed, and the method is relatively easy to use. The disadvantage is that the coupling between subsystems is not dealt with in a precise mathematical manner.

  5. 40 CFR 63.4566 - How do I determine the add-on control device emission destruction or removal efficiency?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... device emission destruction or removal efficiency? 63.4566 Section 63.4566 Protection of Environment... efficiency? You must use the procedures and test methods in this section to determine the add-on control device emission destruction or removal efficiency as part of the performance test required by § 63.4560...

  6. 40 CFR 63.4566 - How do I determine the add-on control device emission destruction or removal efficiency?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... device emission destruction or removal efficiency? 63.4566 Section 63.4566 Protection of Environment... efficiency? You must use the procedures and test methods in this section to determine the add-on control device emission destruction or removal efficiency as part of the performance test required by § 63.4560...

  7. On the enhanced sampling over energy barriers in molecular dynamics simulations.

    PubMed

    Gao, Yi Qin; Yang, Lijiang

    2006-09-21

    We present here calculations of free energies of multidimensional systems using an efficient sampling method. The method uses a transformed potential energy surface, which allows an efficient sampling of both low and high energy spaces and accelerates transitions over barriers. It allows efficient sampling of the configuration space over and only over the desired energy range(s). It does not require predetermined or selected reaction coordinate(s). We apply this method to study the dynamics of slow barrier crossing processes in a disaccharide and a dipeptide system.

  8. 40 CFR Table 1 to Subpart Hhhhhh... - Applicability of General Provisions to Subpart HHHHHH of Part 63

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .../Reporting Waiver Yes § 63.11 Control Device Requirements/Flares No Subpart HHHHHH does not require the use... Control Agencies and EPA Regional Offices Yes § 63.14 Incorporation by Reference Yes Test methods for measuring paint booth filter efficiency and spray gun transfer efficiency in § 63.11173(e)(2) and (3) are...

  9. 40 CFR Table 1 to Subpart Hhhhhh... - Applicability of General Provisions to Subpart HHHHHH of Part 63

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    .../Reporting Waiver Yes § 63.11 Control Device Requirements/Flares No Subpart HHHHHH does not require the use... Control Agencies and EPA Regional Offices Yes § 63.14 Incorporation by Reference Yes Test methods for measuring paint booth filter efficiency and spray gun transfer efficiency in § 63.11173(e)(2) and (3) are...

  10. 40 CFR Table 1 to Subpart Hhhhhh... - Applicability of General Provisions to Subpart HHHHHH of Part 63

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    .../Reporting Waiver Yes § 63.11 Control Device Requirements/Flares No Subpart HHHHHH does not require the use... Control Agencies and EPA Regional Offices Yes § 63.14 Incorporation by Reference Yes Test methods for measuring paint booth filter efficiency and spray gun transfer efficiency in § 63.11173(e)(2) and (3) are...

  11. 40 CFR Table 1 to Subpart Hhhhhh... - Applicability of General Provisions to Subpart HHHHHH of Part 63

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    .../Reporting Waiver Yes § 63.11 Control Device Requirements/Flares No Subpart HHHHHH does not require the use... Control Agencies and EPA Regional Offices Yes § 63.14 Incorporation by Reference Yes Test methods for measuring paint booth filter efficiency and spray gun transfer efficiency in § 63.11173(e)(2) and (3) are...

  12. Stratified Diffractive Optic Approach for Creating High Efficiency Gratings

    NASA Technical Reports Server (NTRS)

    Chambers, Diana M.; Nordin, Gregory P.

    1998-01-01

    Gratings with high efficiency in a single diffracted order can be realized with both volume holographic and diffractive optical elements. However, each method has limitations that restrict the applications in which they can be used. For example, high efficiency volume holographic gratings require an appropriate combination of thickness and permittivity modulation throughout the bulk of the material. Possible combinations of those two characteristics are limited by properties of currently available materials, thus restricting the range of applications for volume holographic gratings. Efficiency of a diffractive optic grating is dependent on its approximation of an ideal analog profile using discrete features. The size of constituent features and, consequently, the number that can be used within a required grating period restricts the applications in which diffractive optic gratings can be used. These limitations imply that there are applications which cannot be addressed by either technology. In this paper we propose to address a number of applications in this category with a new method of creating high efficiency gratings which we call stratified diffractive optic gratings. In this approach diffractive optic techniques are used to create an optical structure that emulates volume grating behavior. To illustrate the stratified diffractive optic grating concept we consider a specific application, a scanner for a space-based coherent wind lidar, with requirements that would be difficult to meet by either volume holographic or diffractive optic methods. The lidar instrument design specifies a transmissive scanner element with the input beam normally incident and the exiting beam deflected at a fixed angle from the optical axis. The element will be rotated about the optical axis to produce a conical scan pattern. The wavelength of the incident beam is 2.06 microns and the required deflection angle is 30 degrees, implying a grating period of approximately 4 microns. Creating a high efficiency volume grating with these parameters would require a grating thickness that cannot be attained with current photosensitive materials. For a diffractive optic grating, the number of binary steps necessary to produce high efficiency combined with the grating period requires feature sizes and alignment tolerances that are also unattainable with current techniques. Rotation of the grating and integration into a space-based lidar system impose the additional requirements that it be insensitive to polarization orientation, that its mass be minimized and that it be able to withstand launch and space environments.

  13. Research on the technical requirements standards of high efficiency precipitator in power industries for assessment

    NASA Astrophysics Data System (ADS)

    Jin, Huang; Ling, Lin; Jun, Guo; Jianguo, Li; Yongzhong, Wang

    2017-11-01

    Facing an increasingly severe air pollution situation, China is now actively promoting the evaluation of high-efficiency air pollution control equipment and research into the related national standards. This paper shows the significance and effect of formulating national standards on the technical requirements of high-efficiency precipitator equipment for assessment in the power industry, as well as the research approach and principles behind these standards. It introduces the qualitative and quantitative evaluation requirements for high-efficiency precipitators used in the power industry and the core technical content, such as testing, calculation, and evaluation methods. A series of national standards is being implemented to guide and promote the production and application of high-efficiency precipitator equipment for air pollution prevention in the national power industry.

  14. A fast sequence assembly method based on compressed data structures.

    PubMed

    Liang, Peifeng; Zhang, Yancong; Lin, Kui; Hu, Jinglu

    2014-01-01

    Assembling a large genome using next generation sequencing reads requires large computer memory and a long execution time. To reduce these requirements, a memory- and time-efficient assembler is presented that applies the FM-index in JR-Assembler, called FMJ-Assembler, where FM stands for the FMR-index derived from the FM-index and BWT, and J for jumping extension. The FMJ-Assembler uses an expanded FM-index and BWT to compress read data to save memory, and the jumping extension method makes it faster in CPU time. An extensive comparison of the FMJ-Assembler with current assemblers shows that the FMJ-Assembler achieves a better or comparable overall assembly quality and requires lower memory use and less CPU time. All these advantages indicate that the FMJ-Assembler will be an efficient assembly method for next generation sequencing technology.
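
    A toy sketch of the two data structures named in the abstract (not the FMJ-Assembler code): a Burrows-Wheeler transform built from sorted rotations, plus an FM-index-style backward search that counts pattern occurrences. The naive quadratic construction is for illustration only.

```python
# Burrows-Wheeler transform and FM-index-style backward search (toy scale).
def bwt(text: str) -> str:
    text += "$"                                        # unique, smallest end marker
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(rot[-1] for rot in rotations)

def count_occurrences(bwt_str: str, pattern: str) -> int:
    first_col = sorted(bwt_str)
    # C[c] = number of symbols in the text strictly smaller than c
    C = {c: first_col.index(c) for c in set(bwt_str)}
    occ = lambda c, k: bwt_str[:k].count(c)            # naive rank query
    lo, hi = 0, len(bwt_str)                           # half-open interval
    for c in reversed(pattern):                        # backward search
        if c not in C:
            return 0
        lo, hi = C[c] + occ(c, lo), C[c] + occ(c, hi)
        if lo >= hi:
            return 0
    return hi - lo

b = bwt("abracadabra")
print(b, count_occurrences(b, "abra"))                 # "abra" occurs twice
```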

  15. Optimization of the multi-turn injection efficiency for a medical synchrotron

    NASA Astrophysics Data System (ADS)

    Kim, J.; Yoon, M.; Yim, H.

    2016-09-01

    We present a method for optimizing the multi-turn injection efficiency for a medical synchrotron. We show that for a given injection energy, the injection efficiency can be greatly enhanced by choosing transverse tunes appropriately and by optimizing the injection bump and the number of turns required for beam injection. We verify our study by applying the method to the Korea Heavy Ion Medical Accelerator (KHIMA) synchrotron which is currently being built at the campus of Dongnam Institute of Radiological and Medical Sciences (DIRAMS) in Busan, Korea. First the frequency map analysis was performed with the help of the ELEGANT and the ACCSIM codes. The tunes that yielded good injection efficiency were then selected. With these tunes, the injection bump and the number of turns required for injection were then optimized by tracking a number of particles for up to one thousand turns after injection, beyond which no further beam loss occurred. Results for the optimization of the injection efficiency for proton ions are presented.

  16. An efficient numerical technique for calculating thermal spreading resistance

    NASA Technical Reports Server (NTRS)

    Gale, E. H., Jr.

    1977-01-01

    An efficient numerical technique for solving the equations resulting from finite difference analyses of fields governed by Poisson's equation is presented. The method is direct (noniterative) and the computer work required varies with the square of the order of the coefficient matrix. The computational work required varies with the cube of this order for standard inversion techniques, e.g., Gaussian elimination, Jordan, Doolittle, etc.

  17. Front panel engineering with CAD simulation tool

    NASA Astrophysics Data System (ADS)

    Delacour, Jacques; Ungar, Serge; Mathieu, Gilles; Hasna, Guenther; Martinez, Pascal; Roche, Jean-Christophe

    1999-04-01

    The progress made recently in display technology covers many fields of application. The specification of radiance, colorimetry and lighting efficiency creates some new challenges for designers. Photometric design is limited by the capability of correctly predicting the result of a lighting system, which is needed to save the costs and time taken to build multiple prototypes or breadboard benches. The second step of the research carried out by the company OPTIS is to propose an optimization method to be applied to the lighting system, developed in the software SPEOS. The main features required of the tool include the CAD interface, to enable fast and efficient transfer between mechanical and light design software, the source modeling, the light transfer model and an optimization tool. The CAD interface is mainly a prototype of transfer, which is not the subject here. Photometric simulation is efficiently achieved by using the measured source encoding and a simulation by the Monte Carlo method. Today, the advantages and the limitations of the Monte Carlo method are well known. The noise reduction requires a long calculation time, which increases with the complexity of the display panel. A successful optimization is difficult to achieve, due to the long calculation time required for each optimization pass including a Monte Carlo simulation. The problem was initially defined as an engineering method of study. Experience shows that good understanding and mastering of the phenomenon of light transfer is limited by the complexity of non-sequential propagation. The engineer must call for the help of a simulation and optimization tool. The main point needed to be able to perform an efficient optimization is a quick method for simulating light transfer. Much work has been done in this area and some interesting results can be observed. It must be said that the Monte Carlo method wastes time calculating results and information which are not required for the needs of the simulation. Low-efficiency transfer systems cost a lot of computation time. More generally, light transfer simulation can be treated efficiently when the integrated result is composed of elementary sub-results that allow quick analytical calculation of intersections. Two axes of research thus appear: quick integration, and quick calculation of geometric intersections. The first axis brings some general solutions also valid for multi-reflection systems. The second axis requires some deep thinking on the intersection calculation. An interesting way is the subdivision of space into VOXELS, an adapted method of 3D division of space according to the objects and their location. Experimental software has been developed to provide a validation of the method. The gain is particularly high in complex systems. An important reduction in the calculation time has been achieved.

  18. Agarose droplet microfluidics for highly parallel and efficient single molecule emulsion PCR.

    PubMed

    Leng, Xuefei; Zhang, Wenhua; Wang, Chunming; Cui, Liang; Yang, Chaoyong James

    2010-11-07

    An agarose droplet method was developed for highly parallel and efficient single molecule emulsion PCR. The method capitalizes on the unique thermoresponsive sol-gel switching property of agarose for highly efficient DNA amplification and amplicon trapping. Uniform agarose solution droplets generated via a microfluidic chip serve as robust and inert nanolitre PCR reactors for single copy DNA molecule amplification. After PCR, agarose droplets are gelated to form agarose beads, trapping all amplicons in each reactor to maintain the monoclonality of each droplet. This method does not require co-encapsulation of primer-labeled microbeads, allows high-throughput generation of uniform droplets and enables high PCR efficiency, making it a promising platform for many single copy genetic studies.

  19. 40 CFR Table 3 to Subpart Lllll of... - Requirements for Performance Tests a,b

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ..., THC destruction efficiency, THC outlet concentration, or combustion efficiency standards, the sampling... combustion efficiency or THC standards a. Measure the concentration of carbon dioxideb. Measure the... method 25A in appendix A to part 60 of this chapter 10. Each control device used to comply with the THC...

  20. 40 CFR Table 3 to Subpart Lllll of... - Requirements for Performance Tests a,b

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ..., THC destruction efficiency, THC outlet concentration, or combustion efficiency standards, the sampling... combustion efficiency or THC standards a. Measure the concentration of carbon dioxideb. Measure the... method 25A in appendix A to part 60 of this chapter 10. Each control device used to comply with the THC...

  1. 40 CFR Table 3 to Subpart Lllll of... - Requirements for Performance Tests a,b

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., THC destruction efficiency, THC outlet concentration, or combustion efficiency standards, the sampling... combustion efficiency or THC standards a. Measure the concentration of carbon dioxideb. Measure the... method 25A in appendix A to part 60 of this chapter 10. Each control device used to comply with the THC...

  2. 40 CFR Table 3 to Subpart Lllll of... - Requirements for Performance Tests a b

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ..., THC destruction efficiency, THC outlet concentration, or combustion efficiency standards, the sampling... combustion efficiency or THC standards a. Measure the concentration of carbon dioxideb. Measure the... method 25A in appendix A to part 60 of this chapter 10. Each control device used to comply with the THC...

  3. 40 CFR Table 3 to Subpart Lllll of... - Requirements for Performance Tests a,b

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ..., THC destruction efficiency, THC outlet concentration, or combustion efficiency standards, the sampling... combustion efficiency or THC standards a. Measure the concentration of carbon dioxideb. Measure the... method 25A in appendix A to part 60 of this chapter 10. Each control device used to comply with the THC...

  4. Indirect synthesis of multi-degree of freedom transient systems. [linear programming for a kinematically linear system

    NASA Technical Reports Server (NTRS)

    Pilkey, W. D.; Chen, Y. H.

    1974-01-01

    An indirect synthesis method is used in the efficient optimal design of multi-degree of freedom, multi-design element, nonlinear, transient systems. A limiting performance analysis which requires linear programming for a kinematically linear system is presented. The system is selected using system identification methods such that the designed system responds as closely as possible to the limiting performance. The efficiency is a result of the method avoiding the repetitive systems analyses accompanying other numerical optimization methods.

  5. Single Spore Isolation as a Simple and Efficient Technique to obtain fungal pure culture

    NASA Astrophysics Data System (ADS)

    Noman, E.; Al-Gheethi, AA; Rahman, N. K.; Talip, B.; Mohamed, R.; H, N.; Kadir, O. A.

    2018-04-01

    The successful identification of fungi by phenotypic methods or molecular techniques depends mainly on using an advanced technique for purifying the isolates. The most efficient is the single spore technique, owing to its simple requirements and its efficiency in preventing contamination by yeast, mites or bacteria. The method described in the present work depends on using a light microscope to transfer one spore into a new culture medium. The present work describes a simple and efficient single spore isolation procedure to purify fungi recovered from clinical wastes.

  6. Research on Generating Method of Embedded Software Test Document Based on Dynamic Model

    NASA Astrophysics Data System (ADS)

    Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Liu, Ying

    2018-03-01

    This paper presents a dynamic model-based test document generation method for embedded software that automatically generates two documents: the test requirements specification and the configuration item test documentation. The method allows dynamic test requirements to be captured in dynamic models, so that dynamic test requirement tracking is easily generated. It can automatically produce standardized test requirements and test documentation, addresses issues such as inconsistency and incompleteness of document-related content, and improves efficiency.

  7. An efficient and flexible Abel-inversion method for noisy data

    NASA Astrophysics Data System (ADS)

    Antokhin, Igor I.

    2016-12-01

    We propose an efficient and flexible method for solving the Abel integral equation of the first kind, frequently appearing in many fields of astrophysics, physics, chemistry, and applied sciences. This equation represents an ill-posed problem, thus solving it requires some kind of regularization. Our method is based on solving the equation on a so-called compact set of functions and/or using Tikhonov's regularization. A priori constraints on the unknown function, defining a compact set, are very loose and can be set using simple physical considerations. Tikhonov's regularization in itself does not require any explicit a priori constraints on the unknown function and can be used independently of such constraints or in combination with them. Various target degrees of smoothness of the unknown function may be set, as required by the problem at hand. The advantage of the method, apart from its flexibility, is that it gives uniform convergence of the approximate solution to the exact solution, as the errors of input data tend to zero. The method is illustrated on several simulated models with known solutions. An example of astrophysical application of the method is also given.
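    As a minimal sketch of the Tikhonov-regularized route mentioned above (not the author's compact-set algorithm), the Abel equation F(y) = 2 ∫_y^R f(r) r / sqrt(r² − y²) dr can be discretized with a simple onion-peeling scheme and solved as a damped least-squares problem; the grid, regularization parameter and smoothing operator below are illustrative assumptions.

```python
import numpy as np

def abel_matrix(r):
    """Forward Abel operator for a function that is piecewise constant on
    the annuli [r_j, r_{j+1}] (onion-peeling discretization); y-grid = r-grid."""
    n = len(r) - 1
    A = np.zeros((n, n))
    for i in range(n):              # row: projection at y = r[i]
        y2 = r[i] ** 2
        for j in range(i, n):       # only annuli with r >= y contribute
            A[i, j] = 2.0 * (np.sqrt(max(r[j + 1] ** 2 - y2, 0.0))
                             - np.sqrt(max(r[j] ** 2 - y2, 0.0)))
    return A

def tikhonov_invert(F, A, alpha):
    """Solve min ||A f - F||^2 + alpha ||L f||^2 with a second-difference L."""
    n = A.shape[1]
    L = np.diff(np.eye(n), 2, axis=0)           # discrete second derivative
    return np.linalg.solve(A.T @ A + alpha * L.T @ L, A.T @ F)

# synthetic test: f(r) = 1 - r^2 on [0, 1], noisy projections
r = np.linspace(0.0, 1.0, 101)
mid = 0.5 * (r[:-1] + r[1:])                    # annulus midpoints
f_true = 1.0 - mid ** 2
A = abel_matrix(r)
F = A @ f_true + 1e-3 * np.random.randn(len(f_true))
f_rec = tikhonov_invert(F, A, alpha=1e-4)
```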

  8. Evaluation of counting methods for oceanic radium-228

    NASA Astrophysics Data System (ADS)

    Orr, James C.

    1988-07-01

    Measurement of open ocean 228Ra is difficult, typically requiring at least 200 L of seawater. The burden of collecting and processing these large-volume samples severely limits the widespread use of this promising tracer. To use smaller-volume samples, a more sensitive means of analysis is required. To seek out new and improved counting method(s), conventional 228Ra counting methods have been compared with some promising techniques which are currently used for other radionuclides. Of the conventional methods, α spectrometry possesses the highest efficiency (3-9%) and lowest background (0.0015 cpm), but it suffers from the need for complex chemical processing after sampling and the need to allow about 1 year for adequate ingrowth of the 228Th granddaughter. The other two conventional counting methods measure the short-lived 228Ac daughter while it remains supported by 228Ra, thereby avoiding the complex sample processing and the long delay before counting. The first of these, high-resolution γ spectrometry, offers the simplest processing and an efficiency (4.8%) comparable to α spectrometry; yet its high background (0.16 cpm) and substantial equipment cost (~30,000) limit its widespread use. The second no-wait method, β-γ coincidence spectrometry, also offers comparable efficiency (5.3%), but it possesses both lower background (0.0054 cpm) and lower initial cost (~12,000). Three new (i.e., untried for 228Ra) techniques all seem to promise about a fivefold increase in efficiency over conventional methods. By employing liquid scintillation methods, both α spectrometry and β-γ coincidence spectrometry can improve their counter efficiency while retaining low background. The third new 228Ra counting method could be adapted from a technique which measures 224Ra by 220Rn emanation. After allowing for ingrowth and then counting the 224Ra great-granddaughter, 228Ra could be back-calculated, yielding a method with high efficiency that requires no sample processing. The efficiency and background of each of the three new methods have been estimated and are compared with those of the three methods currently employed to measure oceanic 228Ra. From efficiency and background, the relative figure of merit and the detection limit have been determined for each of the six counters. These data suggest that the new counting methods have the potential to measure most 228Ra samples with just 30 L of seawater, to better than 5% precision. Not only would this reduce the time, effort, and expense involved in sample collection, but 228Ra could then be measured on many small-volume samples (20-30 L) previously collected with only 226Ra in mind. By measuring 228Ra quantitatively on such small-volume samples, three analyses (large-volume 228Ra, large-volume 226Ra, and small-volume 226Ra) could be reduced to one, thereby dramatically improving analytical precision.
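    To make the comparison concrete, the snippet below computes a counting figure of merit from the efficiencies and backgrounds quoted above. The definition FOM = efficiency² / background is a common convention assumed here rather than a formula stated in the record, and the midpoint of the 3-9% alpha-spectrometry range is likewise an assumption.

```python
# efficiency (counts per decay) and background (counts per minute) from the abstract
counters = {
    "alpha spectrometry":     (0.06,  0.0015),   # midpoint of the 3-9% range (assumed)
    "high-resolution gamma":  (0.048, 0.16),
    "beta-gamma coincidence": (0.053, 0.0054),
}

for name, (eff, bkg) in counters.items():
    fom = eff ** 2 / bkg          # standard background-limited figure of merit (assumed)
    print(f"{name:25s} FOM = {fom:.3f}")
```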

  9. Computationally efficient control allocation

    NASA Technical Reports Server (NTRS)

    Durham, Wayne (Inventor)

    2001-01-01

    A computationally efficient method for calculating near-optimal solutions to the three-objective, linear control allocation problem is disclosed. The control allocation problem is that of distributing the effort of redundant control effectors to achieve some desired set of objectives. The problem is deemed linear if control effectiveness is affine with respect to the individual control effectors. The optimal solution is that which exploits the collective maximum capability of the effectors within their individual physical limits. Computational efficiency is measured by the number of floating-point operations required for solution. The method presented returned optimal solutions in more than 90% of the cases examined; non-optimal solutions returned by the method were typically much less than 1% different from optimal, and the errors tended to become smaller than 0.01% as the number of controls was increased. The magnitude of the errors returned by the present method was much smaller than those that resulted from either pseudoinverse or cascaded generalized inverse solutions. The computational complexity of the method presented varied linearly with increasing numbers of controls; the number of required floating-point operations increased from 5.5 to seven times faster than did the minimum-norm solution (the pseudoinverse), and at about the same rate as did the cascaded generalized inverse solution. The computational requirements of the method presented were much better than those of previously described facet-searching methods, which increase in proportion to the square of the number of controls.
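    The patented algorithm itself is not given in the record. For context, the sketch below shows the minimum-norm (pseudoinverse) baseline that the method is compared against, with effector limits enforced by simple clipping; the effectiveness matrix, limits and clipping strategy are illustrative assumptions.

```python
import numpy as np

def pseudoinverse_allocation(B, m_des, u_min, u_max):
    """Minimum-norm control allocation: solve B u = m_des with the
    pseudoinverse, then clip to effector limits (baseline, not optimal)."""
    u = np.linalg.pinv(B) @ m_des
    return np.clip(u, u_min, u_max)

# 3 desired moments (roll, pitch, yaw) distributed over 5 redundant effectors
B = np.array([[ 1.0, -1.0, 0.2,  0.0, 0.3],
              [ 0.5,  0.5, 1.0, -0.8, 0.0],
              [ 0.1,  0.1, 0.0,  1.0, 1.0]])
m_des = np.array([0.4, -0.2, 0.6])
u = pseudoinverse_allocation(B, m_des, u_min=-1.0, u_max=1.0)
print(u, B @ u)   # clipping may leave a residual that better allocators avoid
```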

  10. Solving groundwater flow problems by conjugate-gradient methods and the strongly implicit procedure

    USGS Publications Warehouse

    Hill, Mary C.

    1990-01-01

    The performance of the preconditioned conjugate-gradient method with three preconditioners is compared with the strongly implicit procedure (SIP) using a scalar computer. The preconditioners considered are the incomplete Cholesky (ICCG) and the modified incomplete Cholesky (MICCG), which require the same computer storage as SIP as programmed for a problem with a symmetric matrix, and a polynomial preconditioner (POLCG), which requires less computer storage than SIP. Although POLCG is usually used on vector computers, it is included here because of its small storage requirements. In this paper, published comparisons of the solvers are evaluated, all four solvers are compared for the first time, and new test cases are presented to provide a more complete basis by which the solvers can be judged for typical groundwater flow problems. Based on nine test cases, the following conclusions are reached: (1) SIP is actually as efficient as ICCG for some of the published, linear, two-dimensional test cases that were reportedly solved much more efficiently by ICCG; (2) SIP is more efficient than other published comparisons would indicate when common convergence criteria are used; and (3) for problems that are three-dimensional, nonlinear, or both, and for which common convergence criteria are used, SIP is often more efficient than ICCG, and is sometimes more efficient than MICCG.
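    As an illustration of the preconditioned conjugate-gradient approach discussed above, the sketch below solves a 2-D five-point model problem with SciPy, using an incomplete-LU factorization as a stand-in for the incomplete-Cholesky (ICCG/MICCG) preconditioners of the paper; the grid size, source term and drop tolerance are assumptions for the example.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, spilu, LinearOperator

n = 50                                            # 50 x 50 cell model grid
I = sp.identity(n)
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()       # 2-D 5-point operator (SPD)
b = np.ones(n * n)                                # uniform recharge/source term

ilu = spilu(A, drop_tol=1e-4)                     # incomplete factorization preconditioner
M = LinearOperator(A.shape, ilu.solve)

x, info = cg(A, b, M=M)
print("converged" if info == 0 else f"info = {info}")
```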

  11. Rapid one-step recombinational cloning

    PubMed Central

    Fu, Changlin; Wehr, Daniel R.; Edwards, Janice; Hauge, Brian

    2008-01-01

    As an increasing number of genes and open reading frames of unknown function are discovered, expression of the encoded proteins is critical toward establishing function. Accordingly, there is an increased need for highly efficient, high-fidelity methods for directional cloning. Among the available methods, site-specific recombination-based cloning techniques, which eliminate the use of restriction endonucleases and ligase, have been widely used for high-throughput (HTP) procedures. We have developed a recombination cloning method, which uses truncated recombination sites to clone PCR products directly into destination/expression vectors, thereby bypassing the requirement for first producing an entry clone. Cloning efficiencies in excess of 80% are obtained providing a highly efficient method for directional HTP cloning. PMID:18424799

  12. Quantitative Method for Simultaneous Analysis of Acetaminophen and 6 Metabolites.

    PubMed

    Lammers, Laureen A; Achterbergh, Roos; Pistorius, Marcel C M; Romijn, Johannes A; Mathôt, Ron A A

    2017-04-01

    Hepatotoxicity after ingestion of high-dose acetaminophen [N-acetyl-para-aminophenol (APAP)] is caused by the metabolites of the drug. To gain more insight into factors influencing susceptibility to APAP hepatotoxicity, quantification of APAP and metabolites is important. A few methods have been developed to simultaneously quantify APAP and its most important metabolites. However, these methods require comprehensive sample preparation and long run times. The aim of this study was to develop and validate a simplified but sensitive method for the simultaneous quantification of acetaminophen, the main metabolites acetaminophen glucuronide and acetaminophen sulfate, and 4 cytochrome P450-mediated metabolites by using liquid chromatography with mass spectrometric (LC-MS) detection. The method was developed and validated for human plasma and entailed a single sample-preparation procedure, enabling quick processing of the samples, followed by an LC-MS method with a chromatographic run time of 9 minutes. The method was validated for selectivity, linearity, accuracy, imprecision, dilution integrity, recovery, process efficiency, ionization efficiency, and carryover effect. The method showed good selectivity without matrix interferences. For all analytes, the mean process efficiency was >86%, and the mean ionization efficiency was >94%. Furthermore, the accuracy was between 90.3% and 112% for all analytes, and the within- and between-run imprecision were <20% for the lower limit of quantification and <14.3% for the middle level and upper limit of quantification. The method presented here enables the simultaneous quantification of APAP and 6 of its metabolites. It is less time-consuming than previously reported methods because it requires only a single, simple sample-preparation step followed by an LC-MS method with a short run time. Therefore, this analytical method is useful for both clinical and research purposes.

  13. Shuttle Ground Operations Efficiencies/Technologies (SGOE/T) study. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    Scholz, A. L.; Hart, M. T.; Lowry, D. J.

    1987-01-01

    Methods and technology were defined to reduce the overall operations cost of a major space program. Space Shuttle processing at Kennedy Space Center (KSC) served as the working model that would be the source of the operational information. Methods of improving the efficiency of ground operations were assessed, and technology elements that could reduce cost were identified. Emphasis is on: (1) specific technology items and (2) management approaches required to develop and support efficient ground operations. Prime study results are to be recommendations on how to achieve more efficient operations and identification of existing or new technology that would make vehicle processing in both the current program and future programs more efficient and, therefore, less costly.

  14. Design of Spur Gears for Improved Efficiency

    NASA Technical Reports Server (NTRS)

    Anderson, N. E.; Loewenthal, S. H.

    1981-01-01

    A method to calculate spur gear system loss for a wide range of gear geometries and operating conditions was used to determine design requirements for an efficient gearset. The effects of spur gear size, pitch, ratio, pitch line velocity and load on efficiency were determined. Peak efficiencies were found to be greater for large diameter and fine pitched gears and tare (no-load) losses were found to be significant.

  15. A modified form of conjugate gradient method for unconstrained optimization problems

    NASA Astrophysics Data System (ADS)

    Ghani, Nur Hamizah Abdul; Rivaie, Mohd.; Mamat, Mustafa

    2016-06-01

    Conjugate gradient (CG) methods have been recognized as an interesting technique to solve optimization problems, due to their numerical efficiency, simplicity and low memory requirements. In this paper, we propose a new CG method based on the study of Rivaie et al. [7] (Comparative study of conjugate gradient coefficient for unconstrained Optimization, Aus. J. Bas. Appl. Sci. 5(2011) 947-951). Then, we show that our method satisfies the sufficient descent condition and converges globally with exact line search. Numerical results show that our proposed method is efficient on the given standard test problems compared to other existing CG methods.

  16. Efficient analytical implementation of the DOT Riemann solver for the de Saint Venant-Exner morphodynamic model

    NASA Astrophysics Data System (ADS)

    Carraro, F.; Valiani, A.; Caleffi, V.

    2018-03-01

    Within the framework of the de Saint Venant equations coupled with the Exner equation for morphodynamic evolution, this work presents a new efficient implementation of the Dumbser-Osher-Toro (DOT) scheme for non-conservative problems. The DOT path-conservative scheme is a robust upwind method based on a complete Riemann solver, but it has the drawback of requiring expensive numerical computations. Indeed, to compute the non-linear time evolution in each time step, the DOT scheme requires numerical computation of the flux matrix eigenstructure (the totality of eigenvalues and eigenvectors) several times at each cell edge. In this work, an analytical and compact formulation of the eigenstructure for the de Saint Venant-Exner (dSVE) model is introduced and tested in terms of numerical efficiency and stability. Using the original DOT and PRICE-C (a very efficient FORCE-type method) as reference methods, we present a convergence analysis (error against CPU time) to study the performance of the DOT method with our new analytical implementation of eigenstructure calculations (A-DOT). In particular, the numerical performance of the three methods is tested in three test cases: a movable bed Riemann problem with analytical solution; a problem with smooth analytical solution; a test in which the water flow is characterised by subcritical and supercritical regions. For a given target error, the A-DOT method is always the most efficient choice. Finally, two experimental data sets and different transport formulae are considered to test the A-DOT model in more practical case studies.

  17. Chapter 11: Sample Design Cross-Cutting Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurnik, Charles W; Khawaja, M. Sami; Rushton, Josh

    Evaluating an energy efficiency program requires assessing the total energy and demand saved through all of the energy efficiency measures provided by the program. For large programs, the direct assessment of savings for each participant would be cost-prohibitive. Even if a program is small enough that a full census could be managed, such an undertaking would almost always be an inefficient use of evaluation resources. The bulk of this chapter describes methods for minimizing and quantifying sampling error. Measurement error and regression error are discussed in various contexts in other chapters.

  18. Parallel scalability and efficiency of vortex particle method for aeroelasticity analysis of bluff bodies

    NASA Astrophysics Data System (ADS)

    Tolba, Khaled Ibrahim; Morgenthal, Guido

    2018-01-01

    This paper presents an analysis of the scalability and efficiency of a simulation framework based on the vortex particle method. The code is applied for the numerical aerodynamic analysis of line-like structures. The numerical code runs on multicore CPU and GPU architectures using OpenCL framework. The focus of this paper is the analysis of the parallel efficiency and scalability of the method being applied to an engineering test case, specifically the aeroelastic response of a long-span bridge girder at the construction stage. The target is to assess the optimal configuration and the required computer architecture, such that it becomes feasible to efficiently utilise the method within the computational resources available for a regular engineering office. The simulations and the scalability analysis are performed on a regular gaming type computer.

  19. Semi-automating the manual literature search for systematic reviews increases efficiency.

    PubMed

    Chapman, Andrea L; Morgan, Laura C; Gartlehner, Gerald

    2010-03-01

    To minimise retrieval bias, manual literature searches are a key part of the search process of any systematic review. Considering the need to have accurate information, valid results of the manual literature search are essential to ensure scientific standards; likewise, efficient approaches that minimise the amount of personnel time required to conduct a manual literature search are of great interest. The objective of this project was to determine the validity and efficiency of a new manual search method that utilises the scopus database. We used the traditional manual search approach as the gold standard to determine the validity and efficiency of the proposed scopus method. Outcome measures included completeness of article detection and personnel time involved. Using both methods independently, we compared the results on accuracy of article detection (validity) and on time spent conducting the search (efficiency). Regarding accuracy, the scopus method identified the same studies as the traditional approach, indicating its validity. In terms of efficiency, using scopus led to a time saving of 62.5% compared with the traditional approach (3 h versus 8 h). The scopus method can significantly improve the efficiency of manual searches and thus of systematic reviews.

  20. Recommendations for Developing Alternative Test Methods for Developmental Neurotoxicity

    EPA Science Inventory

    There is great interest in developing alternative methods for developmental neurotoxicity testing (DNT) that are cost-efficient, use fewer animals and are based on current scientific knowledge of the developing nervous system. Alternative methods will require demonstration of the...

  1. Efficiency determination of an electrostatic lunar dust collector by discrete element method

    NASA Astrophysics Data System (ADS)

    Afshar-Mohajer, Nima; Wu, Chang-Yu; Sorloaica-Hickman, Nicoleta

    2012-07-01

    Lunar grains become charged by the sun's radiation in the tenuous atmosphere of the moon. This leads to lunar dust levitation and particle deposition, which often create serious problems in the costly systems deployed in lunar exploration. In this study, an electrostatic lunar dust collector (ELDC) is proposed to address the issue and the discrete element method (DEM) is used to investigate the effects of electrical particle-particle interactions, non-uniformity of the electrostatic field, and characteristics of the ELDC. The simulations on 20-μm-sized lunar particles reveal that the electrical particle-particle interactions of the dust particles within the ELDC plates require 29% higher electrostatic field strength than that without the interactions for 100% collection efficiency. For the given ELDC geometry, consideration of non-uniformity of the electrostatic field along with electrical interactions between particles on the same ELDC geometry leads to a higher requirement of ~3.5 kV/m to ensure 100% particle collection. Notably, such an electrostatic field is about 10^3 times less than required for electrodynamic self-cleaning methods. Finally, it is shown for a "half-size" system that the DEM model predicts greater collection efficiency than the Eulerian-based model at all voltages less than required for 100% efficiency. Halving the ELDC dimensions boosts the particle concentration inside the ELDC, as well as the resulting field strength for a given voltage. Though a lunar photovoltaic system was the subject, the results of this study are useful for evaluation of any system for collecting charged particles in other high-vacuum environments using an electrostatic field.

  2. Geospatial Representation, Analysis and Computing Using Bandlimited Functions

    DTIC Science & Technology

    2010-02-19

    navigation of aircraft and missiles require detailed representations of gravity and efficient methods for determining orbits and trajectories. However, many...efficient on today’s computers. Under this grant new, computationally efficient, localized representations of gravity have been developed and tested. As a...step in developing a new approach to estimating gravitational potentials, a multiresolution representation for gravity estimation has been proposed

  3. 34 CFR 361.12 - Methods of administration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 34 Education 2 2012-07-01 2012-07-01 false Methods of administration. 361.12 Section 361.12... State Plan and Other Requirements for Vocational Rehabilitation Services Administration § 361.12 Methods... applicable, employs methods of administration found necessary by the Secretary for the proper and efficient...

  4. An incremental block-line-Gauss-Seidel method for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Napolitano, M.; Walters, R. W.

    1985-01-01

    A block-line-Gauss-Seidel (LGS) method is developed for solving the incompressible and compressible Navier-Stokes equations in two dimensions. The method requires only one block-tridiagonal solution process per iteration and is consequently faster per step than the linearized block-ADI methods. Results are presented for both incompressible and compressible separated flows: in all cases the proposed block-LGS method is more efficient than the block-ADI methods. Furthermore, for high Reynolds number weakly separated incompressible flow in a channel, which proved to be an impossible task for a block-ADI method, solutions have been obtained very efficiently by the new scheme.

  5. Analysis of entropy extraction efficiencies in random number generation systems

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Wang, Shuang; Chen, Wei; Yin, Zhen-Qiang; Han, Zheng-Fu

    2016-05-01

    Random numbers (RNs) have applications in many areas: lottery games, gambling, computer simulation, and, most importantly, cryptography [N. Gisin et al., Rev. Mod. Phys. 74 (2002) 145]. In cryptography theory, the theoretical security of the system calls for high quality RNs. Therefore, developing methods for producing unpredictable RNs with adequate speed is an attractive topic. Early on, despite the lack of theoretical support, pseudo RNs generated by algorithmic methods performed well and satisfied reasonable statistical requirements. However, as implemented, those pseudorandom sequences were completely determined by mathematical formulas and initial seeds, which cannot introduce extra entropy or information. In these cases, “random” bits are generated that are not at all random. Physical random number generators (RNGs), which, in contrast to algorithmic methods, are based on unpredictable physical random phenomena, have attracted considerable research interest. However, the way that we extract random bits from those physical entropy sources has a large influence on the efficiency and performance of the system. In this manuscript, we will review and discuss several randomness extraction schemes that are based on radiation or photon arrival times. We analyze the robustness, post-processing requirements and, in particular, the extraction efficiency of those methods to aid in the construction of efficient, compact and robust physical RNG systems.

  6. A new method for flight test determination of propulsive efficiency and drag coefficient

    NASA Technical Reports Server (NTRS)

    Bull, G.; Bridges, P. D.

    1983-01-01

    A flight test method is described from which propulsive efficiency as well as parasite and induced drag coefficients can be directly determined using relatively simple instrumentation and analysis techniques. The method uses information contained in the transient response in airspeed for a small power change in level flight in addition to the usual measurement of power required for level flight. Measurements of pitch angle and longitudinal and normal acceleration are eliminated. The theoretical basis for the method, the analytical techniques used, and the results of application of the method to flight test data are presented.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newhouse, P. F.; Guevarra, D.; Umehara, M.

    Energy technologies are enabled by materials innovations, requiring efficient methods to search high dimensional parameter spaces, such as multi-element alloying for enhancing solar fuels photoanodes.

  8. A Method for Increasing the Training Effectiveness of Marine Corps Tactical Exercises: A Pilot Study.

    ERIC Educational Resources Information Center

    Rocklyn, Eugene H.; And Others

    Methods for better utilizing simulated combat systems for training officers are required by the Marine Corps to ensure efficient acquisition of combat decision-making skills. In support of this requirement, a review and analysis of several combat training systems helped to identify a set of major training problems. These included the small number…

  9. Navier-Stokes and viscous-inviscid interaction

    NASA Technical Reports Server (NTRS)

    Steger, Joseph L.; Vandalsem, William R.

    1989-01-01

    Some considerations toward developing numerical procedures for simulating viscous compressible flows are discussed. Both Navier-Stokes and boundary layer field methods are considered. Because efficient viscous-inviscid interaction methods have been difficult to extend to complex 3-D flow simulations, Navier-Stokes procedures are more frequently being utilized even though they require considerably more work per grid point. It would seem a mistake, however, not to make use of the more efficient approximate methods in those regions in which they are clearly valid. Ideally, a general purpose compressible flow solver that can optionally take advantage of approximate solution methods would suffice, both to improve accuracy and efficiency. Some potentially useful steps toward this goal are described: a generalized 3-D boundary layer formulation and the fortified Navier-Stokes procedure.

  10. Tensor Factorization for Low-Rank Tensor Completion.

    PubMed

    Zhou, Pan; Lu, Canyi; Lin, Zhouchen; Zhang, Chao

    2018-03-01

    Recently, a tensor nuclear norm (TNN) based method was proposed to solve the tensor completion problem, which has achieved state-of-the-art performance on image and video inpainting tasks. However, it requires computing the tensor singular value decomposition (t-SVD), which costs much computation and thus cannot efficiently handle tensor data, due to its naturally large scale. Motivated by TNN, we propose a novel low-rank tensor factorization method for efficiently solving the 3-way tensor completion problem. Our method preserves the low-rank structure of a tensor by factorizing it into the product of two tensors of smaller sizes. In the optimization process, our method only needs to update two smaller tensors, which can be more efficiently conducted than computing t-SVD. Furthermore, we prove that the proposed alternating minimization algorithm can converge to a Karush-Kuhn-Tucker point. Experimental results on synthetic data recovery and image and video inpainting tasks clearly demonstrate the superior performance and efficiency of our developed method over the state of the art, including the TNN and matricization methods.
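    The paper's method factorizes a 3-way tensor under the t-product, which is beyond a short sketch; as a heavily simplified matrix analogue of the same idea (factor into two small factors and update them alternately instead of computing an SVD each iteration), the code below completes a low-rank matrix by alternating least squares. The rank, damping and iteration count are illustrative assumptions.

```python
import numpy as np

def als_complete(X, mask, rank=5, lam=1e-3, iters=50):
    """Alternating minimization for low-rank completion (matrix analogue).
    X: data with arbitrary values at unobserved entries; mask: 1 = observed."""
    m, n = X.shape
    U = np.random.randn(m, rank)
    V = np.random.randn(n, rank)
    for _ in range(iters):
        for i in range(m):                      # update each row of U
            idx = mask[i] > 0
            Vi = V[idx]
            U[i] = np.linalg.solve(Vi.T @ Vi + lam * np.eye(rank), Vi.T @ X[i, idx])
        for j in range(n):                      # update each row of V
            idx = mask[:, j] > 0
            Uj = U[idx]
            V[j] = np.linalg.solve(Uj.T @ Uj + lam * np.eye(rank), Uj.T @ X[idx, j])
    return U @ V.T
```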

  11. Method for exfoliation of hexagonal boron nitride

    NASA Technical Reports Server (NTRS)

    Lin, Yi (Inventor); Connell, John W. (Inventor)

    2012-01-01

    A new method is disclosed for the exfoliation of hexagonal boron nitride into mono- and few-layered nanosheets (or nanoplatelets, nanomesh, nanoribbons). The method does not necessarily require high temperature or vacuum, but uses commercially available h-BN powders (or those derived from these materials, bulk crystals) and only requires wet chemical processing. The method is facile, cost efficient, and scalable. The resultant exfoliated h-BN is dispersible in an organic solvent or water thus amenable for solution processing for unique microelectronic or composite applications.

  12. 40 CFR 60.747 - Reporting and recordkeeping requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... efficiency falls below the applicable level as follows: (A) For those affected facilities demonstrating... efficiency falls below the applicable level as follows: (A) For those affected facilities demonstrating..., demonstrating compliance by the test methods described in § 60.743(a)(3) (liquid-liquid material balance) shall...

  13. Development of a novel and highly efficient method of isolating bacteriophages from water.

    PubMed

    Liu, Weili; Li, Chao; Qiu, Zhi-Gang; Jin, Min; Wang, Jing-Feng; Yang, Dong; Xiao, Zhong-Hai; Yuan, Zhao-Kang; Li, Jun-Wen; Xu, Qun-Ying; Shen, Zhi-Qiang

    2017-08-01

    Bacteriophages are widely used in the treatment of drug-resistant bacteria and the improvement of food safety through bacterial lysis. However, limited investigations of bacteriophages restrict their further application. In this study, a novel and highly efficient method was developed for isolating bacteriophages from water, based on electropositive silica gel particles (ESPs). To optimize the ESPs method, we evaluated the eluent type, flow rate, pH, temperature, and inoculation concentration of bacteriophage using bacteriophage f2. The quantitative detection reported that the recovery of the ESPs method reached over 90%. The qualitative detection demonstrated that the ESPs method effectively isolated 70% of extremely low-concentration bacteriophage (10^0 PFU/100 L). Using host bacteria composed of 33 standard strains and 10 isolated strains, the bacteriophages in 18 water samples collected from three sites in the Tianjin Haihe River Basin were isolated by the ESPs and traditional methods. Results showed that the ESPs method was significantly superior to the traditional method. The ESPs method isolated 32 strains of bacteriophage, whereas the traditional method isolated 15 strains. The sample isolation efficiency and bacteriophage isolation efficiency of the ESPs method were 3.28 and 2.13 times higher than those of the traditional method. The developed ESPs method was characterized by high isolation efficiency, efficient handling of large water sample volumes and low requirements on water quality. Copyright © 2017. Published by Elsevier B.V.

  14. On Efficient Multigrid Methods for Materials Processing Flows with Small Particles

    NASA Technical Reports Server (NTRS)

    Thomas, James (Technical Monitor); Diskin, Boris; Harik, VasylMichael

    2004-01-01

    Multiscale modeling of materials requires simulations of multiple levels of structural hierarchy. The computational efficiency of numerical methods becomes a critical factor for simulating large physical systems with highly disparate length scales. Multigrid methods are known for their superior efficiency in representing/resolving different levels of physical details. The efficiency is achieved by employing different discretizations interactively on different scales (grids). To assist optimization of manufacturing conditions for materials processing with numerous particles (e.g., dispersion of particles, controlling flow viscosity and clusters), a new multigrid algorithm has been developed for a case of multiscale modeling of flows with small particles that have various length scales. The optimal efficiency of the algorithm is crucial for accurate predictions of the effect of processing conditions (e.g., pressure and velocity gradients) on the local flow fields that control the formation of various microstructures or clusters.

  15. Adaptive radial basis function mesh deformation using data reduction

    NASA Astrophysics Data System (ADS)

    Gillebaart, T.; Blom, D. S.; van Zuijlen, A. H.; Bijl, H.

    2016-09-01

    Radial Basis Function (RBF) mesh deformation is one of the most robust mesh deformation methods available. Using the greedy (data reduction) method in combination with an explicit boundary correction, results in an efficient method as shown in literature. However, to ensure the method remains robust, two issues are addressed: 1) how to ensure that the set of control points remains an accurate representation of the geometry in time and 2) how to use/automate the explicit boundary correction, while ensuring a high mesh quality. In this paper, we propose an adaptive RBF mesh deformation method, which ensures the set of control points always represents the geometry/displacement up to a certain (user-specified) criteria, by keeping track of the boundary error throughout the simulation and re-selecting when needed. Opposed to the unit displacement and prescribed displacement selection methods, the adaptive method is more robust, user-independent and efficient, for the cases considered. Secondly, the analysis of a single high aspect ratio cell is used to formulate an equation for the correction radius needed, depending on the characteristics of the correction function used, maximum aspect ratio, minimum first cell height and boundary error. Based on the analysis two new radial basis correction functions are derived and proposed. This proposed automated procedure is verified while varying the correction function, Reynolds number (and thus first cell height and aspect ratio) and boundary error. Finally, the parallel efficiency is studied for the two adaptive methods, unit displacement and prescribed displacement for both the CPU as well as the memory formulation with a 2D oscillating and translating airfoil with oscillating flap, a 3D flexible locally deforming tube and deforming wind turbine blade. Generally, the memory formulation requires less work (due to the large amount of work required for evaluating RBF's), but the parallel efficiency reduces due to the limited bandwidth available between CPU and memory. In terms of parallel efficiency/scaling the different studied methods perform similarly, with the greedy algorithm being the bottleneck. In terms of absolute computational work the adaptive methods are better for the cases studied due to their more efficient selection of the control points. By automating most of the RBF mesh deformation, a robust, efficient and almost user-independent mesh deformation method is presented.
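    A bare-bones version of the greedy control-point selection described above (omitting the boundary-error tracking over time, the explicit correction step and the parallel aspects of the paper) might look like the sketch below; the Wendland C2 kernel, support radius and stopping rule are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(r, R=1.0):
    # Wendland C2 compactly supported RBF (an assumed choice)
    x = np.clip(r / R, 0.0, 1.0)
    return (1 - x) ** 4 * (4 * x + 1)

def greedy_rbf_select(pts, disp, tol, R=1.0):
    """Greedily add boundary points as RBF control points until the interpolant
    reproduces the boundary displacement 'disp' (n x dim) within 'tol'."""
    sel = [int(np.argmax(np.linalg.norm(disp, axis=1)))]   # start from largest displacement
    while True:
        d = np.linalg.norm(pts[:, None, :] - pts[None, sel, :], axis=2)
        Phi = rbf_kernel(d, R)                  # n x m evaluation matrix
        A = Phi[sel, :]                         # m x m interpolation matrix
        w = np.linalg.solve(A, disp[sel])       # RBF weights per displacement component
        err = np.linalg.norm(Phi @ w - disp, axis=1)
        if err.max() < tol or len(sel) == len(pts):
            return sel, w
        sel.append(int(err.argmax()))           # add the worst-approximated boundary point
```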

  16. Applications of multigrid software in the atmospheric sciences

    NASA Technical Reports Server (NTRS)

    Adams, J.; Garcia, R.; Gross, B.; Hack, J.; Haidvogel, D.; Pizzo, V.

    1992-01-01

    Elliptic partial differential equations from different areas in the atmospheric sciences are efficiently and easily solved utilizing the multigrid software package named MUDPACK. It is demonstrated that the multigrid method is more efficient than other commonly employed techniques, such as Gaussian elimination and fixed-grid relaxation. The efficiency relative to other techniques, both in terms of storage requirement and computational time, increases quickly with grid size.

  17. Formal Verification Toolkit for Requirements and Early Design Stages

    NASA Technical Reports Server (NTRS)

    Badger, Julia M.; Miller, Sheena Judson

    2011-01-01

    Efficient flight software development from natural language requirements needs an effective way to test designs earlier in the software design cycle. A method to automatically derive logical safety constraints and the design state space from natural language requirements is described. The constraints can then be checked using a logical consistency checker and also be used in a symbolic model checker to verify the early design of the system. This method was used to verify a hybrid control design for the suit ports on NASA Johnson Space Center's Space Exploration Vehicle against safety requirements.

  18. An accurate and efficient reliability-based design optimization using the second order reliability method and improved stability transformation method

    NASA Astrophysics Data System (ADS)

    Meng, Zeng; Yang, Dixiong; Zhou, Huanlin; Yu, Bo

    2018-05-01

    The first order reliability method has been extensively adopted for reliability-based design optimization (RBDO), but it shows inaccuracy in calculating the failure probability with highly nonlinear performance functions. Thus, the second order reliability method is required to evaluate the reliability accurately. However, its application to RBDO is quite challenging owing to the expensive computational cost incurred by the repeated reliability evaluation and Hessian calculation of probabilistic constraints. In this article, a new improved stability transformation method is proposed to search the most probable point efficiently, and the Hessian matrix is calculated by the symmetric rank-one update. The computational capability of the proposed method is illustrated and compared to the existing RBDO approaches through three mathematical and two engineering examples. The comparison results indicate that the proposed method is very efficient and accurate, providing an alternative tool for RBDO of engineering structures.
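    The symmetric rank-one update mentioned above is a standard quasi-Newton formula; a minimal sketch, with the usual safeguard against a near-zero denominator and not tied to the paper's RBDO workflow, is:

```python
import numpy as np

def sr1_update(B, s, y, eps=1e-8):
    """Symmetric rank-one (SR1) update of a Hessian approximation B,
    given step s = x_new - x_old and gradient change y = g_new - g_old."""
    r = y - B @ s
    denom = r @ s
    if abs(denom) < eps * np.linalg.norm(r) * np.linalg.norm(s):
        return B                      # skip the update when it would be unstable
    return B + np.outer(r, r) / denom
```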

  19. Method and apparatus for improving the quality and efficiency of ultrashort-pulse laser machining

    DOEpatents

    Stuart, Brent C.; Nguyen, Hoang T.; Perry, Michael D.

    2001-01-01

    A method and apparatus for improving the quality and efficiency of machining of materials with laser pulse durations shorter than 100 picoseconds, by orienting and maintaining the polarization of the laser light such that the electric field vector is perpendicular to the edges of the material being processed. It can be used in any machining operation requiring remote delivery and/or high precision with minimal collateral damage.

  20. Design, Development, and Testing of a Water Vapor Exchanger for Spacecraft Life Support Systems

    NASA Technical Reports Server (NTRS)

    Izenson, Michael G.; Micka, Daniel J.; Chepko, Ariane B.; Rule, Kyle C.; Anderson, Molly S.

    2016-01-01

    Thermal and environmental control systems for future exploration spacecraft must meet challenging requirements for efficient operation and conservation of resources. Maximizing the use of regenerative systems and conserving water are critical considerations. This paper describes the design, development, and testing of an innovative water vapor exchanger (WVX) that can minimize the amount of water absorbed in, and vented from, regenerative CO2 removal systems. Key design requirements for the WVX are high air flow capacity (suitable for a crew of six), very high water recovery, and very low pressure losses. We developed fabrication and assembly methods that enable high-efficiency mass transfer in a uniform and stable array of Nafion tubes. We also developed analysis and design methods to compute mass transfer and pressure losses. We built and tested subscale units sized for flow rates of 2 and 5 cu ft/min (3.4–8.5 cu m/hr). Durability testing demonstrated that a stable core geometry was sustained over many humid/dry cycles. Pressure losses were very low (less than 0.5 in. H2O (125 Pa) total) and met requirements at prototypical flow rates. We measured water recovery efficiency across a range of flow rates and humidity levels that simulate the range of possible cabin conditions. We measured water recovery efficiencies in the range of 80 to 90%, with the best efficiency at lower flow rates and higher cabin humidity levels. We compared performance of the WVX with similar units built using an unstructured Nafion tube bundle. The WVX achieves higher water recovery efficiency with nearly an order of magnitude lower pressure drop than unstructured tube bundles. These results show that the WVX provides uniform flow through flow channels for both the humid and dry streams and can meet requirements for service on future exploration spacecraft. The WVX technology will be best suited for long-duration exploration vehicles that require regenerative CO2 removal systems while needing to conserve water.

  1. A rapid and efficient branched DNA hybridization assay to titer lentiviral vectors.

    PubMed

    Nair, Ayyappan; Xie, Jinger; Joshi, Sarasijam; Harden, Paul; Davies, Joan; Hermiston, Terry

    2008-11-01

    A robust assay to titer lentiviral vectors is imperative to qualifying their use in drug discovery, target validation and clinical applications. In this study, a novel branched DNA based hybridization assay was developed to titer lentiviral vectors by quantifying viral RNA genome copy numbers from viral lysates without having to purify viral RNA, and this approach was compared with other non-functional (p24 protein ELISA and viral RT-qPCR) and a functional method (reporter gene expression) used commonly. The RT-qPCR method requires purification of viral RNA and the accuracy of titration therefore depends on the efficiency of purification; this requirement is ameliorated in the hybridization assay as RNA is measured directly in viral lysates. The present study indicates that the hybridization based titration assay performed on viral lysates was more accurate and has additional advantages of being rapid, robust and not dependent on transduction efficiency in different cell types.

  2. Efficient high-quality volume rendering of SPH data.

    PubMed

    Fraedrich, Roland; Auer, Stefan; Westermann, Rüdiger

    2010-01-01

    High quality volume rendering of SPH data requires a complex order-dependent resampling of particle quantities along the view rays. In this paper we present an efficient approach to perform this task using a novel view-space discretization of the simulation domain. Our method draws upon recent work on GPU-based particle voxelization for the efficient resampling of particles into uniform grids. We propose a new technique that leverages a perspective grid to adaptively discretize the view-volume, giving rise to a continuous level-of-detail sampling structure and reducing memory requirements compared to a uniform grid. In combination with a level-of-detail representation of the particle set, the perspective grid allows effectively reducing the amount of primitives to be processed at run-time. We demonstrate the quality and performance of our method for the rendering of fluid and gas dynamics SPH simulations consisting of many millions of particles.

  3. Development of Quenching-qPCR (Q-Q) assay for measuring absolute intracellular cleavage efficiency of ribozyme.

    PubMed

    Kim, Min Woo; Sun, Gwanggyu; Lee, Jung Hyuk; Kim, Byung-Gee

    2018-06-01

    Ribozyme (Rz) is a very attractive RNA molecule in metabolic engineering and synthetic biology fields where RNA processing is required as a control unit or ON/OFF signal for its cleavage reaction. In order to use Rz for such RNA processing, Rz must have highly active and specific catalytic activity. However, current methods for assessing the intracellular activity of Rz have limitations such as difficulty in handling and inaccuracies in the evaluation of correct cleavage activity. In this paper, we proposed a simple method to accurately measure the "intracellular cleavage efficiency" of Rz. This method deactivates unwanted activity of Rz which may consistently occur after cell lysis using DNA quenching method, and calculates the cleavage efficiency by analyzing the cleaved fraction of mRNA by Rz from the total amount of mRNA containing Rz via quantitative real-time PCR (qPCR). The proposed method was applied to measure "intracellular cleavage efficiency" of sTRSV, a representative Rz, and its mutant, and their intracellular cleavage efficiencies were calculated as 89% and 93%, respectively. Copyright © 2018 Elsevier Inc. All rights reserved.

  4. Pollutant Emissions and Energy Efficiency under Controlled Conditions for Household Biomass Cookstoves and Implications for Metrics Useful in Setting International Test Standards

    EPA Science Inventory

    Realistic metrics and methods for testing household biomass cookstoves are required to develop standards needed by international policy makers, donors, and investors. Application of consistent test practices allows emissions and energy efficiency performance to be benchmarked and...

  5. Design and Development of a Regenerative Blower for EVA Suit Ventilation

    NASA Technical Reports Server (NTRS)

    Izenson, Michael G.; Chen, Weibo; Hill, Roger W.; Phillips, Scott D.; Paul, Heather L.

    2011-01-01

    Ventilation subsystems in future space suits require a dedicated ventilation fan. The unique requirements for the ventilation fan - including stringent safety requirements and the ability to increase output to operate in buddy mode - combine to make a regenerative blower an attractive choice. This paper describes progress in the design, development, and testing of a regenerative blower designed to meet requirements for ventilation subsystems in future space suits. We have developed analysis methods for the blower s complex, internal flows and identified impeller geometries that enable significant improvements in blower efficiency. We verified these predictions by test, measuring aerodynamic efficiencies of 45% at operating conditions that correspond to the ventilation fan s design point. We have developed a compact motor/controller to drive the blower efficiently at low rotating speed (4500 rpm). Finally, we have assembled a low-pressure oxygen test loop to demonstrate the blower s reliability under prototypical conditions.

  6. A highly efficient bead extraction technique with low bead number for digital microfluidic immunoassay

    PubMed Central

    Tsai, Po-Yen; Lee, I-Chin; Hsu, Hsin-Yun; Huang, Hong-Yuan; Fan, Shih-Kang; Liu, Cheng-Hsien

    2016-01-01

    Here, we describe a technique to manipulate a low number of beads to achieve high washing efficiency with zero bead loss in the washing process of a digital microfluidic (DMF) immunoassay. Previously, two magnetic bead extraction methods were reported in the DMF platform: (1) single-side electrowetting method and (2) double-side electrowetting method. The first approach could provide high washing efficiency, but it required a large number of beads. The second approach could reduce the required number of beads, but it was inefficient where multiple washes were required. More importantly, bead loss during the washing process was unavoidable in both methods. Here, an improved double-side electrowetting method is proposed for bead extraction by utilizing a series of unequal electrodes. It is shown that, with proper electrode size ratio, only one wash step is required to achieve 98% washing rate without any bead loss at bead number less than 100 in a droplet. It allows using only about 25 magnetic beads in DMF immunoassay to increase the number of captured analytes on each bead effectively. In our human soluble tumor necrosis factor receptor I (sTNF-RI) model immunoassay, the experimental results show that, comparing to our previous results without using the proposed bead extraction technique, the immunoassay with low bead number significantly enhances the fluorescence signal to provide a better limit of detection (3.14 pg/ml) with smaller reagent volumes (200 nl) and shorter analysis time (<1 h). This improved bead extraction technique not only can be used in the DMF immunoassay but also has great potential to be used in any other bead-based DMF systems for different applications. PMID:26858807

  7. A superhydrophobic cone to facilitate the xenomonitoring of filarial parasites, malaria, and trypanosomes using mosquito excreta/feces.

    PubMed

    Cook, Darren A N; Pilotte, Nils; Minetti, Corrado; Williams, Steven A; Reimer, Lisa J

    2017-11-06

    Background: Molecular xenomonitoring (MX), the testing of insect vectors for the presence of human pathogens, has the potential to provide a non-invasive and cost-effective method for monitoring the prevalence of disease within a community. Current MX methods require the capture and processing of large numbers of mosquitoes, particularly in areas of low endemicity, increasing the time, cost and labour required. Screening the excreta/feces (E/F) released from mosquitoes, rather than whole carcasses, improves the throughput by removing the need to discriminate vector species since non-vectors release ingested pathogens in E/F. It also enables larger numbers of mosquitoes to be processed per pool. However, this new screening approach requires a method of efficiently collecting E/F. Methods: We developed a cone with a superhydrophobic surface to allow for the efficient collection of E/F. Using mosquitoes exposed to either Plasmodium falciparum , Brugia malayi or Trypanosoma brucei brucei, we tested the performance of the superhydrophobic cone alongside two other collection methods. Results: All collection methods enabled the detection of DNA from the three parasites. Using the superhydrophobic cone to deposit E/F into a small tube provided the highest number of positive samples (16 out of 18) and facilitated detection of parasite DNA in E/F from individual mosquitoes. Further tests showed that following a simple washing step, the cone can be reused multiple times, further improving its cost-effectiveness. Conclusions: Incorporating the superhydrophobic cone into mosquito traps or holding containers could provide a simple and efficient method for collecting E/F. Where this is not possible, swabbing the container or using the washing method facilitates the detection of the three parasites used in this study.

  8. Storage and computationally efficient permutations of factorized covariance and square-root information arrays

    NASA Technical Reports Server (NTRS)

    Muellerschoen, R. J.

    1988-01-01

    A unified method to permute vector stored Upper triangular Diagonal factorized covariance and vector stored upper triangular Square Root Information arrays is presented. The method involves cyclic permutation of the rows and columns of the arrays and retriangularization with fast (slow) Givens rotations (reflections). Minimal computation is performed, and a one dimensional scratch array is required. To make the method efficient for large arrays on a virtual memory machine, computations are arranged so as to avoid expensive paging faults. This method is potentially important for processing large volumes of radio metric data in the Deep Space Network.
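    The record describes re-triangularization with Givens rotations after a cyclic row/column permutation. The dense sketch below shows only that elementary step, ignoring the vector-stored UD/SRIF packing and the paging-aware ordering that the method is actually about.

```python
import numpy as np

def givens(a, b):
    """Return c, s so that [[c, s], [-s, c]] @ [a, b] = [r, 0]."""
    if b == 0.0:
        return 1.0, 0.0
    r = np.hypot(a, b)
    return a / r, b / r

def retriangularize(R):
    """Zero the subdiagonal of an almost-upper-triangular matrix (e.g. after a
    cyclic row/column permutation) with Givens rotations, column by column."""
    R = R.copy()
    n = R.shape[0]
    for j in range(n - 1):
        for i in range(n - 1, j, -1):          # eliminate R[i, j] against R[i-1, j]
            if R[i, j] != 0.0:
                c, s = givens(R[i - 1, j], R[i, j])
                G = np.array([[c, s], [-s, c]])
                R[[i - 1, i], j:] = G @ R[[i - 1, i], j:]
    return R
```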

  9. An efficient method to compute spurious end point contributions in PO solutions. [Physical Optics]

    NASA Technical Reports Server (NTRS)

    Gupta, Inder J.; Burnside, Walter D.; Pistorius, Carl W. I.

    1987-01-01

    A method is given to compute the spurious endpoint contributions in the physical optics solution for electromagnetic scattering from conducting bodies. The method is applicable to general three-dimensional structures. The only information required to use the method is the radius of curvature of the body at the shadow boundary. Thus, the method is very efficient for numerical computations. As an illustration, the method is applied to several bodies of revolution to compute the endpoint contributions for backscattering in the case of axial incidence. It is shown that in high-frequency situations, the endpoint contributions obtained using the method are equal to the true endpoint contributions.

  10. Estimating Energy Consumption of Mobile Fluid Power in the United States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lynch, Lauren; Zigler, Bradley T.

    This report estimates the market size and energy consumption of mobile off-road applications utilizing hydraulic fluid power, and summarizes technology gaps and implementation barriers. Mobile fluid power is the use of hydraulic fluids under pressure to transmit power in mobile equipment applications. The mobile off-road fluid power sector includes various uses of hydraulic fluid power equipment with fundamentally diverse end-use application and operational requirements, such as a skid steer loader, a wheel loader or an agriculture tractor. The agriculture and construction segments dominate the mobile off-road fluid power market in component unit sales volume. An estimated range of energy consumed by the mobile off-road fluid power sector is 0.36-1.8 quads per year, which was 1.3-6.5 percent of the total energy consumed in 2016 by the transportation sector. Opportunities for efficiency improvements within the fluid power system result from needs to level and reduce the peak system load requirements and develop new technologies to reduce fluid power system level losses, both of which may be facilitated by characterizing duty cycles to define standardized performance test methods. There are currently no commonly accepted standardized test methods for evaluating equipment level efficiency over a duty cycle. The off-road transportation sector currently meets criteria emissions requirements, and there are no efficiency regulations requiring original equipment manufacturers (OEM) to invest in new architecture development to improve the fuel economy of mobile off-road fluid power systems. In addition, the end-user efficiency interests are outweighed by low equipment purchase or lease price concerns, required payback periods, and reliability and durability requirements of new architecture. Current economics, low market volumes with high product diversity, and regulation compliance challenge OEM investment in commercialization of new architecture development.

  11. An approach for aerodynamic optimization of transonic fan blades

    NASA Astrophysics Data System (ADS)

    Khelghatibana, Maryam

    Aerodynamic design optimization of transonic fan blades is a highly challenging problem due to the complexity of the flow field inside the fan, the conflicting design requirements and the high-dimensional design space. In order to address all these challenges, an aerodynamic design optimization method is developed in this study. This method automates the design process by integrating a geometrical parameterization method, a CFD solver and numerical optimization methods that can be applied to both single and multi-point optimization design problems. A multi-level blade parameterization is employed to modify the blade geometry. Numerical analyses are performed by solving 3D RANS equations combined with the SST turbulence model. Genetic algorithms and hybrid optimization methods are applied to solve the optimization problem. In order to verify the effectiveness and feasibility of the optimization method, a single-point optimization problem aiming to maximize design efficiency is formulated and applied to redesign a test case. However, transonic fan blade design is inherently a multi-faceted problem that deals with several objectives such as efficiency, stall margin, and choke margin. The proposed multi-point optimization method in the current study is formulated as a bi-objective problem to maximize design and near-stall efficiencies while maintaining the required design pressure ratio. Enhancing these objectives significantly deteriorates the choke margin, specifically at high rotational speeds. Therefore, another constraint is embedded in the optimization problem in order to prevent the reduction of choke margin at high speeds. Since capturing stall inception is numerically very expensive, stall margin has not been considered as an objective in the problem statement. However, improving near-stall efficiency results in better performance at the stall condition, which could enhance the stall margin. An investigation is therefore performed on the Pareto-optimal solutions to demonstrate the relation between near-stall efficiency and stall margin. The proposed method is applied to redesign NASA rotor 67 for single and multiple operating conditions. The single-point design optimization showed a +0.28-point improvement in isentropic efficiency at the design point, while the design pressure ratio and mass flow are, respectively, within 0.12% and 0.11% of the reference blade. Two cases of multi-point optimization are performed. First, the proposed multi-point optimization problem is relaxed by removing the choke margin constraint in order to demonstrate the relation between near-stall efficiency and stall margin. An investigation on the Pareto-optimal solutions of this optimization shows that the stall margin increases as near-stall efficiency improves. The second multi-point optimization case is performed considering all the objectives and constraints. One selected optimized design on the Pareto front presents improvements of +0.41, +0.56 and +0.9 points in near-peak efficiency, near-stall efficiency and stall margin, respectively. The design pressure ratio and mass flow are, respectively, within 0.3% and 0.26% of the reference blade. Moreover, the optimized design maintains the required choke margin. Detailed aerodynamic analyses are performed to investigate the effect of shape optimization on shock occurrence, secondary flows, tip leakage and shock/tip-leakage interactions in both single and multi-point optimizations.

  12. [Mechanical Shimming Method and Implementation for Permanent Magnet of MRI System].

    PubMed

    Xue, Tingqiang; Chen, Jinjun

    2015-03-01

    A mechanical shimming method and device for the permanent magnet of an MRI system have been developed to meet its stringent homogeneity requirement without time-consuming passive shimming on site; as a result, installation and adjustment efficiency has been increased.

  13. The development of strategy use in elementary school children: working memory and individual differences.

    PubMed

    Imbo, Ineke; Vandierendonck, André

    2007-04-01

    The current study tested the development of working memory involvement in children's arithmetic strategy selection and strategy efficiency. To this end, an experiment in which the dual-task method and the choice/no-choice method were combined was administered to 10- to 12-year-olds. Working memory was needed in retrieval, transformation, and counting strategies, but the ratio between available working memory resources and arithmetic task demands changed across development. More frequent retrieval use, more efficient memory retrieval, and more efficient counting processes reduced the working memory requirements. Strategy efficiency and strategy selection were also modified by individual differences such as processing speed, arithmetic skill, gender, and math anxiety. Short-term memory capacity, in contrast, was not related to children's strategy selection or strategy efficiency.

  14. A rapid, efficient, and economic device and method for the isolation and purification of mouse islet cells

    PubMed Central

    Zongyi, Yin; Funian, Zou; Hao, Li; Ying, Cheng; Jialin, Zhang

    2017-01-01

    A rapid, efficient, and economic method for the isolation and purification of islets has been pursued by numerous islet-related researchers. In this study, we compared the advantages and disadvantages of our developed patented method with those of commonly used conventional methods (Ficoll-400, 1077, and handpicking methods). Cell viability was assayed using Trypan blue, cell purity and yield were assayed using diphenylthiocarbazone, and islet function was assayed using acridine orange/ethidium bromide staining and enzyme-linked immunosorbent assay-glucose stimulation testing 4 days after cultivation. The results showed that our islet isolation and purification method required 12 ± 3 min, which was significantly shorter than the time required in the Ficoll-400, 1077, and HPU groups (34 ± 3, 41 ± 4, and 30 ± 4 min, respectively; P < 0.05). There was no significant difference in islet viability among the four groups. The islet purity, function, yield, and cost of our method were superior to those of the Ficoll-400 and 1077 methods, but inferior to those of the handpicking method. However, the handpicking method may cause wrist injury and visual impairment in researchers during large-scale islet isolation (>1000 islets). In summary, the MCT method is a rapid, efficient, and economic method for isolating and purifying murine islet cell clumps. This method overcomes some of the shortcomings of conventional methods, showing a relatively higher quality and yield of islets within a shorter duration at a lower cost. Therefore, the current method provides researchers with an alternative option for islet isolation and should be widely generalized. PMID:28207765

  15. A rapid, efficient, and economic device and method for the isolation and purification of mouse islet cells.

    PubMed

    Zongyi, Yin; Funian, Zou; Hao, Li; Ying, Cheng; Jialin, Zhang; Baifeng, Li

    2017-01-01

    A rapid, efficient, and economic method for the isolation and purification of islets has been pursued by numerous islet-related researchers. In this study, we compared the advantages and disadvantages of our developed patented method with those of commonly used conventional methods (Ficoll-400, 1077, and handpicking methods). Cell viability was assayed using Trypan blue, cell purity and yield were assayed using diphenylthiocarbazone, and islet function was assayed using acridine orange/ethidium bromide staining and enzyme-linked immunosorbent assay-glucose stimulation testing 4 days after cultivation. The results showed that our islet isolation and purification method required 12 ± 3 min, which was significantly shorter than the time required in the Ficoll-400, 1077, and HPU groups (34 ± 3, 41 ± 4, and 30 ± 4 min, respectively; P < 0.05). There was no significant difference in islet viability among the four groups. The islet purity, function, yield, and cost of our method were superior to those of the Ficoll-400 and 1077 methods, but inferior to those of the handpicking method. However, the handpicking method may cause wrist injury and visual impairment in researchers during large-scale islet isolation (>1000 islets). In summary, the MCT method is a rapid, efficient, and economic method for isolating and purifying murine islet cell clumps. This method overcomes some of the shortcomings of conventional methods, showing a relatively higher quality and yield of islets within a shorter duration at a lower cost. Therefore, the current method provides researchers with an alternative option for islet isolation and should be widely generalized.

  16. Extracting Communities from Complex Networks by the k-Dense Method

    NASA Astrophysics Data System (ADS)

    Saito, Kazumi; Yamada, Takeshi; Kazama, Kazuhiro

    To understand the structural and functional properties of large-scale complex networks, it is crucial to efficiently extract a set of cohesive subnetworks as communities. Several such community extraction methods have been proposed in the literature, including the classical k-core decomposition method and, more recently, the k-clique based community extraction method. The k-core method, although computationally efficient, is often not powerful enough for uncovering a detailed community structure, and it produces only coarse-grained and loosely connected communities. The k-clique method, on the other hand, can extract fine-grained and tightly connected communities but requires a substantial computational load for large-scale complex networks. In this paper, we present a new notion of a subnetwork called k-dense, and propose an efficient algorithm for extracting k-dense communities. We applied our method to three different types of networks assembled from real data, namely, blog trackbacks, word associations and Wikipedia references, and demonstrated that the k-dense method could extract communities almost as efficiently as the k-core method, while the qualities of the extracted communities are comparable to those obtained by the k-clique method.
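
    As a rough illustration of the k-dense idea, the sketch below iteratively prunes edges whose endpoints share fewer than k-2 common neighbours and keeps the connected components of what remains. The pruning rule is our reading of the k-dense condition, and the naive loop is far less efficient than the algorithm proposed in the paper; the NetworkX calls and the karate-club test graph are illustrative choices.

    ```python
    import networkx as nx

    def k_dense_communities(G, k):
        """Repeatedly remove edges whose endpoints share fewer than (k - 2)
        common neighbours, drop isolated nodes, and return the connected
        components of what is left (a naive fixpoint loop, not the paper's
        efficient algorithm)."""
        H = G.copy()
        while True:
            weak = [(u, v) for u, v in H.edges()
                    if len(list(nx.common_neighbors(H, u, v))) < k - 2]
            if not weak:
                break
            H.remove_edges_from(weak)
        H.remove_nodes_from(list(nx.isolates(H)))
        return list(nx.connected_components(H))

    G = nx.karate_club_graph()
    print("4-core nodes:  ", sorted(nx.k_core(G, 4).nodes()))   # classical k-core, for contrast
    print("4-dense groups:", k_dense_communities(G, 4))
    ```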

  17. Unified commutation-pruning technique for efficient computation of composite DFTs

    NASA Astrophysics Data System (ADS)

    Castro-Palazuelos, David E.; Medina-Melendrez, Modesto Gpe.; Torres-Roman, Deni L.; Shkvarko, Yuriy V.

    2015-12-01

    An efficient computation of a composite length discrete Fourier transform (DFT), as well as a fast Fourier transform (FFT) of both time and space data sequences in uncertain (non-sparse or sparse) computational scenarios, requires specific processing algorithms. Traditional algorithms typically employ some pruning methods without any commutations, which prevents them from attaining the potential computational efficiency. In this paper, we propose an alternative unified approach with automatic commutations between three computational modalities aimed at efficient computations of the pruned DFTs adapted for variable composite lengths of the non-sparse input-output data. The first modality is an implementation of the direct computation of a composite length DFT, the second one employs the second-order recursive filtering method, and the third one performs the new pruned decomposed transform. The pruned decomposed transform algorithm performs decimation in time or space (DIT) in the data acquisition domain and, then, decimation in frequency (DIF). The unified combination of these three algorithms is addressed as the DFTCOMM technique. By treating the choice among all feasible commuting-pruning modalities as a combinational hypothesis-testing optimization problem, we have found the global optimal solution to the pruning problem, which always requires fewer arithmetic operations than, or at most the same number as, the other feasible modalities. The DFTCOMM method outperforms the existing competing pruning techniques in the sense of attainable savings in the number of required arithmetic operations. Its execution requires fewer arithmetic operations than, or at most the same number as, any of the competing pruning methods reported in the literature. Finally, we provide a comparison of DFTCOMM with the recently developed sparse fast Fourier transform (SFFT) algorithmic family. We show that, in sensing scenarios with a sparse or non-sparse data Fourier spectrum, the DFTCOMM technique is robust against such model uncertainties, in the sense of being insensitive to sparsity/non-sparsity restrictions and to the variability of the operating parameters.
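
    Output pruning in its simplest form, evaluating only the DFT bins that are actually needed, can be sketched as follows. This shows only the first of the three modalities the paper commutes between (direct computation of selected bins); the recursive-filtering and decomposed DIT/DIF modalities and the commutation logic are not represented.

    ```python
    import numpy as np

    def pruned_dft(x, bins):
        """Directly evaluate only the requested DFT bins instead of the full
        transform; worthwhile when the number of needed bins is small."""
        n = len(x)
        k = np.asarray(bins).reshape(-1, 1)
        w = np.exp(-2j * np.pi * k * np.arange(n) / n)
        return w @ x

    x = np.random.default_rng(1).standard_normal(360)    # composite length N = 360
    bins = [0, 3, 7, 45]                                 # only these outputs are needed
    np.testing.assert_allclose(pruned_dft(x, bins), np.fft.fft(x)[bins], atol=1e-9)
    ```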

  18. Technique for Very High Order Nonlinear Simulation and Validation

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.

    2001-01-01

    Finding the sources of sound in large nonlinear fields via direct simulation currently requires excessive computational cost. This paper describes a simple technique for efficiently solving the multidimensional nonlinear Euler equations that significantly reduces this cost and demonstrates a useful approach for validating high order nonlinear methods. Up to 15th order accuracy in space and time methods were compared and it is shown that an algorithm with a fixed design accuracy approaches its maximal utility and then its usefulness exponentially decays unless higher accuracy is used. It is concluded that at least a 7th order method is required to efficiently propagate a harmonic wave using the nonlinear Euler equations to a distance of 5 wavelengths while maintaining an overall error tolerance that is low enough to capture both the mean flow and the acoustics.

  19. Simplified dichromated gelatin hologram recording process

    NASA Technical Reports Server (NTRS)

    Georgekutty, Tharayil G.; Liu, Hua-Kuang

    1987-01-01

    A simplified method for making dichromated gelatin (DCG) holographic optical elements (HOE) has been discovered. The method is much less tedious and it requires a period of processing time comparable with that for processing a silver halide hologram. HOE characteristics including diffraction efficiency (DE), linearity, and spectral sensitivity have been quantitatively investigated. The quality of the holographic grating is very high. Ninety percent or higher diffraction efficiency has been achieved in simple plane gratings made by this process.

  20. Methods comparison for microsatellite marker development: Different isolation methods, different yield efficiency

    NASA Astrophysics Data System (ADS)

    Zhan, Aibin; Bao, Zhenmin; Hu, Xiaoli; Lu, Wei; Hu, Jingjie

    2009-06-01

    Microsatellite markers have become one of the most important kinds of molecular tools used in various research fields. A large number of microsatellite markers are required for whole genome surveys in the fields of molecular ecology, quantitative genetics and genomics. Therefore, it is extremely necessary to select several versatile, low-cost, efficient and time- and labor-saving methods to develop a large panel of microsatellite markers. In this study, we used the Zhikong scallop (Chlamys farreri) as the target species to compare the efficiency of five methods derived from three strategies for microsatellite marker development. The results showed that the strategy of constructing a small-insert genomic DNA library resulted in poor efficiency, while the microsatellite-enriched strategy greatly improved the isolation efficiency. Although the public database mining strategy is time- and cost-saving, it is difficult to obtain a large number of microsatellite markers, mainly due to the limited sequence data of non-model species deposited in public databases. Based on the results of this study, we recommend two methods, the microsatellite-enriched library construction method and the FIASCO-colony hybridization method, for large-scale microsatellite marker development. Both methods were derived from the microsatellite-enriched strategy. The experimental results obtained from the Zhikong scallop also provide a reference for microsatellite marker development in other species with large genomes.

  1. Solving the Coupled System Improves Computational Efficiency of the Bidomain Equations

    PubMed Central

    Southern, James A.; Plank, Gernot; Vigmond, Edward J.; Whiteley, Jonathan P.

    2017-01-01

    The bidomain equations are frequently used to model the propagation of cardiac action potentials across cardiac tissue. At the whole organ level the size of the computational mesh required makes their solution a significant computational challenge. As the accuracy of the numerical solution cannot be compromised, efficiency of the solution technique is important to ensure that the results of the simulation can be obtained in a reasonable time whilst still encapsulating the complexities of the system. In an attempt to increase efficiency of the solver, the bidomain equations are often decoupled into one parabolic equation that is computationally very cheap to solve and an elliptic equation that is much more expensive to solve. In this study the performance of this uncoupled solution method is compared with an alternative strategy in which the bidomain equations are solved as a coupled system. This seems counter-intuitive as the alternative method requires the solution of a much larger linear system at each time step. However, in tests on two 3-D rabbit ventricle benchmarks it is shown that the coupled method is up to 80% faster than the conventional uncoupled method — and that parallel performance is better for the larger coupled problem. PMID:19457741

  2. Comparison of three newton-like nonlinear least-squares methods for estimating parameters of ground-water flow models

    USGS Publications Warehouse

    Cooley, R.L.; Hill, M.C.

    1992-01-01

    Three methods of solving nonlinear least-squares problems were compared for robustness and efficiency using a series of hypothetical and field problems. A modified Gauss-Newton/full Newton hybrid method (MGN/FN) and an analogous method for which part of the Hessian matrix was replaced by a quasi-Newton approximation (MGN/QN) solved some of the problems with appreciably fewer iterations than required using only a modified Gauss-Newton (MGN) method. In these problems, model nonlinearity and a large variance for the observed data apparently caused MGN to converge more slowly than MGN/FN or MGN/QN after the sum of squared errors had almost stabilized. Other problems were solved as efficiently with MGN as with MGN/FN or MGN/QN. Because MGN/FN can require significantly more computer time per iteration and more computer storage for transient problems, it is less attractive for a general purpose algorithm than MGN/QN.
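
    For readers unfamiliar with the baseline that the hybrid methods modify, a bare-bones Gauss-Newton iteration with simple step halving might look like the sketch below. It is not the MGN/FN or MGN/QN algorithms compared in the study, and the exponential-fit toy problem is purely illustrative.

    ```python
    import numpy as np

    def gauss_newton(residual, jacobian, p0, tol=1e-10, max_iter=100):
        """Bare Gauss-Newton with step halving to keep the sum of squared
        errors from increasing; the hybrid methods in the study add full- or
        quasi-Newton Hessian terms when plain Gauss-Newton converges slowly."""
        p = np.asarray(p0, dtype=float)
        for _ in range(max_iter):
            r = residual(p)
            J = jacobian(p)
            step = np.linalg.solve(J.T @ J, J.T @ r)
            t = 1.0
            while t > 1e-4 and np.sum(residual(p - t * step) ** 2) > np.sum(r ** 2):
                t *= 0.5                       # halve the step until the SSE decreases
            p = p - t * step
            if np.linalg.norm(t * step) < tol:
                break
        return p

    # toy problem: fit y = a * exp(b * x) to synthetic, noise-free data
    x = np.linspace(0.0, 1.0, 20)
    y = 2.0 * np.exp(-1.3 * x)
    res = lambda p: p[0] * np.exp(p[1] * x) - y
    jac = lambda p: np.column_stack([np.exp(p[1] * x), p[0] * x * np.exp(p[1] * x)])
    print(gauss_newton(res, jac, [1.0, 0.0]))        # should approach [2.0, -1.3]
    ```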

  3. Space Radiation Transport Methods Development

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.; Tripathi, R. K.; Qualls, G. D.; Cucinotta, F. A.; Prael, R. E.; Norbury, J. W.; Heinbockel, J. H.; Tweed, J.

    2002-01-01

    Improved spacecraft shield design requires early entry of radiation constraints into the design process to maximize performance and minimize costs. As a result, we have been investigating high-speed computational procedures to allow shield analysis from the preliminary design concepts to the final design. In particular, we will discuss the progress towards a full three-dimensional and computationally efficient deterministic code for which the current HZETRN evaluates the lowest order asymptotic term. HZETRN is the first deterministic solution to the Boltzmann equation allowing field mapping within the International Space Station (ISS) in tens of minutes using standard Finite Element Method (FEM) geometry common to engineering design practice enabling development of integrated multidisciplinary design optimization methods. A single ray trace in ISS FEM geometry requires 14 milliseconds and severely limits application of Monte Carlo methods to such engineering models. A potential means of improving the Monte Carlo efficiency in coupling to spacecraft geometry is given in terms of reconfigurable computing and could be utilized in the final design as verification of the deterministic method optimized design.

  4. Application of genetic algorithms to focal mechanism determination

    NASA Astrophysics Data System (ADS)

    Kobayashi, Reiji; Nakanishi, Ichiro

    1994-04-01

    Genetic algorithms are a new class of methods for global optimization. They resemble Monte Carlo techniques, but search for solutions more efficiently than uniform Monte Carlo sampling. In the field of geophysics, genetic algorithms have recently been used to solve some non-linear inverse problems (e.g., earthquake location, waveform inversion, migration velocity estimation). We present an application of genetic algorithms to focal mechanism determination from first-motion polarities of P-waves and apply our method to two recent large events, the Kushiro-oki earthquake of January 15, 1993 and the SW Hokkaido (Japan Sea) earthquake of July 12, 1993. Initial solution and curvature information of the objective function that gradient methods need are not required in our approach. Moreover globally optimal solutions can be efficiently obtained. Calculation of polarities based on double-couple models is the most time-consuming part of the source mechanism determination. The amount of calculations required by the method designed in this study is much less than that of previous grid search methods.

  5. Economics of hardwood silviculture using skyline and conventional logging

    Treesearch

    John E. Baumgras; Gary W. Miller; Chris B. LeDoux

    1995-01-01

    Managing Appalachian hardwood forests to satisfy the growing and diverse demands on this resource will require alternatives to traditional silvicultural methods and harvesting systems. Determining the relative economic efficiency of these alternative methods and systems with respect to harvest cash flows is essential. The effects of silvicultural methods and roundwood...

  6. Comparing three sampling techniques for estimating fine woody down dead biomass

    Treesearch

    Robert E. Keane; Kathy Gray

    2013-01-01

    Designing woody fuel sampling methods that quickly, accurately and efficiently assess biomass at relevant spatial scales requires extensive knowledge of each sampling method's strengths, weaknesses and tradeoffs. In this study, we compared various modifications of three common sampling methods (planar intercept, fixed-area microplot and photoload) for estimating...

  7. Computational methods for aerodynamic design using numerical optimization

    NASA Technical Reports Server (NTRS)

    Peeters, M. F.

    1983-01-01

    Five methods to increase the computational efficiency of aerodynamic design using numerical optimization, by reducing the computer time required to perform gradient calculations, are examined. The most promising method consists of drastically reducing the size of the computational domain on which aerodynamic calculations are made during gradient calculations. Since a gradient calculation requires the solution of the flow about an airfoil whose geometry was slightly perturbed from a base airfoil, the flow about the base airfoil is used to determine boundary conditions on the reduced computational domain. This method worked well in subcritical flow.

  8. 40 CFR 52.2220 - Identification of plan.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Coke Battery Underfire (combustion) Stacks 06/07/92 08/15/97, 62 FR 43643 CHAPTER 1200-3-6 NON-PROCESS... Destruction or Removal Efficiency and Monitoring Requirements 05/18/93 02/27/95, 60 FR 10504 Section 1200-3-18-.84 Test Methods and Compliance Procedures: Determining the Destruction or Removal Efficiency of a...

  9. Rapid protein concentration, efficient fluorescence labeling and purification on a micro/nanofluidics chip.

    PubMed

    Wang, Chen; Ouyang, Jun; Ye, De-Kai; Xu, Jing-Juan; Chen, Hong-Yuan; Xia, Xing-Hua

    2012-08-07

    Fluorescence analysis has proved to be a powerful detection technique for achieving single molecule analysis. However, it usually requires the labeling of targets with bright fluorescent tags since most chemicals and biomolecules lack fluorescence. Conventional fluorescence labeling methods require a considerable quantity of biomolecule samples, long reaction times and extensive chromatographic purification procedures. Herein, a micro/nanofluidics device integrating a nanochannel in a microfluidics chip has been designed and fabricated, which achieves rapid protein concentration, fluorescence labeling, and efficient purification of the product in a miniaturized and continuous manner. As a demonstration, labeling of the proteins bovine serum albumin (BSA) and IgG with fluorescein isothiocyanate (FITC) is presented. Compared to conventional methods, the present micro/nanofluidics device performs BSA labeling about 10^4-10^6 times faster with 1.6 times higher yields due to the efficient nanoconfinement effect and improved mass and heat transfer in the chip device. The results demonstrate that the present micro/nanofluidics device promises rapid and facile fluorescence labeling of small amounts of reagents such as proteins, nucleic acids and other biomolecules with high efficiency.

  10. Efficient l1-norm-based low-rank matrix approximations for large-scale problems using alternating rectified gradient method.

    PubMed

    Kim, Eunwoo; Lee, Minsik; Choi, Chong-Ho; Kwak, Nojun; Oh, Songhwai

    2015-02-01

    Low-rank matrix approximation plays an important role in the area of computer vision and image processing. Most of the conventional low-rank matrix approximation methods are based on the l2-norm (Frobenius norm) with principal component analysis (PCA) being the most popular among them. However, this can give a poor approximation for data contaminated by outliers (including missing data), because the l2-norm exaggerates the negative effect of outliers. Recently, to overcome this problem, various methods based on the l1-norm, such as robust PCA methods, have been proposed for low-rank matrix approximation. Despite the robustness of the methods, they require heavy computational effort and substantial memory for high-dimensional data, which is impractical for real-world problems. In this paper, we propose two efficient low-rank factorization methods based on the l1-norm that find proper projection and coefficient matrices using the alternating rectified gradient method. The proposed methods are applied to a number of low-rank matrix approximation problems to demonstrate their efficiency and robustness. The experimental results show that our proposals are efficient in both execution time and reconstruction performance, unlike other state-of-the-art methods.
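
    A crude stand-in for this family of methods is an l1-motivated low-rank factorization computed by iteratively reweighted alternating least squares. The sketch below is not the paper's alternating rectified gradient algorithm and does not scale the way it does; it only illustrates why an l1-type loss tolerates sparse outliers better than a plain l2 fit.

    ```python
    import numpy as np

    def robust_low_rank(X, rank, n_iter=50, eps=1e-6, ridge=1e-8):
        """Fit X ~ U @ V.T under an approximate l1 loss via iteratively
        reweighted alternating least squares (illustrative only)."""
        m, n = X.shape
        rng = np.random.default_rng(0)
        U = rng.standard_normal((m, rank))
        V = rng.standard_normal((n, rank))
        reg = ridge * np.eye(rank)
        for _ in range(n_iter):
            W = 1.0 / np.maximum(np.abs(X - U @ V.T), eps)   # IRLS weights
            for j in range(n):                               # refit each row of V
                A = U * W[:, j][:, None]
                V[j] = np.linalg.solve(A.T @ U + reg, A.T @ X[:, j])
            for i in range(m):                               # refit each row of U
                A = V * W[i][:, None]
                U[i] = np.linalg.solve(A.T @ V + reg, A.T @ X[i])
        return U, V

    rng = np.random.default_rng(2)
    L = rng.standard_normal((60, 3)) @ rng.standard_normal((3, 40))  # rank-3 ground truth
    S = np.where(rng.random(L.shape) < 0.05, 10.0, 0.0)              # sparse outliers
    U, V = robust_low_rank(L + S, rank=3)
    print(np.linalg.norm(U @ V.T - L) / np.linalg.norm(L))           # relative recovery error
    ```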

  11. Efficient calculation of the polarizability: a simplified effective-energy technique

    NASA Astrophysics Data System (ADS)

    Berger, J. A.; Reining, L.; Sottile, F.

    2012-09-01

    In a recent publication [J.A. Berger, L. Reining, F. Sottile, Phys. Rev. B 82, 041103(R) (2010)] we introduced the effective-energy technique to calculate in an accurate and numerically efficient manner the GW self-energy as well as the polarizability, which is required to evaluate the screened Coulomb interaction W. In this work we show that the effective-energy technique can be used to further simplify the expression for the polarizability without a significant loss of accuracy. In contrast to standard sum-over-state methods where huge summations over empty states are required, our approach only requires summations over occupied states. The three simplest approximations we obtain for the polarizability are explicit functionals of an independent- or quasi-particle one-body reduced density matrix. We provide evidence of the numerical accuracy of this simplified effective-energy technique as well as an analysis of our method.

  12. Genetic engineering of stem cells for enhanced therapy.

    PubMed

    Nowakowski, Adam; Andrzejewska, Anna; Janowski, Miroslaw; Walczak, Piotr; Lukomska, Barbara

    2013-01-01

    Stem cell therapy is a promising strategy for overcoming the limitations of current treatment methods. The modification of stem cell properties may be necessary to fully exploit their potential. Genetic engineering, with an abundance of methodology to induce gene expression in a precise and well-controllable manner, is particularly attractive for this purpose. There are virus-based and non-viral methods of genetic manipulation. Genome-integrating viral vectors are usually characterized by highly efficient and long-term transgene expression, at a cost of safety. Non-integrating viruses are also highly efficient in transduction, and, while safer, offer only a limited duration of transgene expression. There is a great diversity of transfectable forms of nucleic acids; however, for efficient shuttling across cell membranes, additional manipulation is required. Both physical and chemical methods have been employed for this purpose. Stem cell engineering for clinical applications is still in its infancy and requires further research. There are two main strategies for inducing transgene expression in therapeutic cells: transient and permanent expression. In many cases, including stem cell trafficking and using cell therapy for the treatment of rapid-onset disease with a short healing process, transient transgene expression may be a sufficient and optimal approach. For that purpose, mRNA-based methods seem ideally suited, as they are characterized by a rapid, highly efficient transfection, with outstanding safety. Permanent transgene expression is primarily based on the application of viral vectors, and, due to safety concerns, these methods are more challenging. There is active, ongoing research toward the development of non-viral methods that would induce permanent expression, such as transposons and mammalian artificial chromosomes.

  13. Hybrid surrogate-model-based multi-fidelity efficient global optimization applied to helicopter blade design

    NASA Astrophysics Data System (ADS)

    Ariyarit, Atthaphon; Sugiura, Masahiko; Tanabe, Yasutada; Kanazaki, Masahiro

    2018-06-01

    A multi-fidelity optimization technique based on an efficient global optimization process using a hybrid surrogate model is investigated for solving real-world design problems. The model constructs the local deviation using the kriging method and the global model using a radial basis function. The expected improvement is computed to select additional samples that can improve the model. The approach was first investigated by solving mathematical test problems. The results were compared with optimization results from an ordinary kriging method and a co-kriging method, and the proposed method produced the best solution. The proposed method was also applied to aerodynamic design optimization of helicopter blades to obtain the maximum blade efficiency. The optimal shape obtained by the proposed method achieved performance almost equivalent to that obtained using single-fidelity optimization based solely on high-fidelity evaluations. Comparing all three methods, the proposed method required the lowest total number of high-fidelity evaluation runs to obtain a converged solution.
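
    The skeleton of efficient global optimization with an expected-improvement criterion can be sketched as below, using a plain single-fidelity kriging surrogate from scikit-learn rather than the paper's hybrid kriging/RBF multi-fidelity model; the one-dimensional objective, kernel, and sample counts are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def expected_improvement(mu, sigma, f_best):
        """EI for minimization; points with zero predicted variance get zero EI."""
        sigma = np.maximum(sigma, 1e-12)
        z = (f_best - mu) / sigma
        return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

    f = lambda x: np.sin(3.0 * x) + 0.5 * x       # cheap stand-in for an expensive solver
    X = np.array([[0.2], [1.0], [2.0]])           # initial design
    y = f(X).ravel()
    cand = np.linspace(0.0, 2.5, 201).reshape(-1, 1)

    for _ in range(10):                           # EGO loop: fit, score EI, add a sample
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5),
                                      alpha=1e-8, normalize_y=True).fit(X, y)
        mu, sd = gp.predict(cand, return_std=True)
        x_new = cand[np.argmax(expected_improvement(mu, sd, y.min()))]
        X = np.vstack([X, x_new])
        y = np.append(y, f(x_new))

    print("best sample:", X[np.argmin(y)], "objective:", y.min())
    ```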

  14. Methods for understanding super-efficient data envelopment analysis results with an application to hospital inpatient surgery.

    PubMed

    O'Neill, Liam; Dexter, Franklin

    2005-11-01

    We compare two techniques for increasing the transparency and face validity of Data Envelopment Analysis (DEA) results for managers at a single decision-making unit: multifactor efficiency (MFE) and non-radial super-efficiency (NRSE). Both methods incorporate the slack values from the super-efficient DEA model to provide a more robust performance measure than radial super-efficiency scores. MFE and NRSE are equivalent for unique optimal solutions and a single output. MFE incorporates the slack values from multiple output variables, whereas NRSE does not. MFE can be more transparent to managers since it involves no additional optimization steps beyond the DEA, whereas NRSE requires several. We compare results for operating room managers at an Iowa hospital evaluating its growth potential for multiple surgical specialties. In addition, we address the problem of upward bias of the slack values of the super-efficient DEA model.

  15. UAS remote sensing for precision agriculture: An independent assessment

    USDA-ARS?s Scientific Manuscript database

    Small Unmanned Aircraft Systems (sUAS) are recognized as potentially important remote-sensing platforms for precision agriculture. However, research is required to determine which sensors and data processing methods are required to use sUAS in an efficient and cost-effective manner. Oregon State U...

  16. Particle and Photon Detection: Counting and Energy Measurement

    PubMed Central

    Janesick, James; Tower, John

    2016-01-01

    Fundamental limits for photon counting and photon energy measurement are reviewed for CCD and CMOS imagers. The challenges of extending photon counting into the visible/nIR wavelengths and achieving energy measurement in the UV with specific read noise requirements are discussed. Pixel flicker and random telegraph noise sources are highlighted along with various methods used to reduce their contribution to the sensor’s read noise floor. Practical requirements for quantum efficiency, charge collection efficiency, and charge transfer efficiency that interfere with photon counting performance are discussed. Lastly, we review current efforts to reduce flicker noise head-on, in hopes of driving read noise substantially below 1 carrier rms. PMID:27187398

  17. Development of the Next Generation of Biogeochemistry Simulations Using EMSL's NWChem Molecular Modeling Software

    NASA Astrophysics Data System (ADS)

    Bylaska, E. J.; Kowalski, K.; Apra, E.; Govind, N.; Valiev, M.

    2017-12-01

    Methods of directly simulating the behavior of complex strongly interacting atomic systems (molecular dynamics, Monte Carlo) have provided important insight into the behavior of nanoparticles, biogeochemical systems, mineral/fluid systems, actinide systems and geofluids. The main limitation to even wider application of these methods is the difficulty of developing accurate potential interactions in these systems at the molecular level that capture their complex chemistry. The well-developed tools of quantum chemistry and physics have been shown to approach the accuracy required. However, despite the continuous effort being put into improving their accuracy and efficiency, these tools will be of little value to condensed matter problems without continued improvements in techniques to traverse and sample the high-dimensional phase space needed to span the ~10^12 time scale differences between molecular simulation and chemical events. In recent years, we have made considerable progress in developing electronic structure and AIMD methods tailored to treat biochemical and geochemical problems, including very efficient implementations of many-body methods, fast exact exchange methods, electron-transfer methods, excited state methods, QM/MM, and new parallel algorithms that scale to more than 100,000 cores. The poster will focus on the fundamentals of these methods and the realities in terms of system size, computational requirements and simulation times that are required for their application to complex biogeochemical systems.

  18. Structural reliability analysis under evidence theory using the active learning kriging model

    NASA Astrophysics Data System (ADS)

    Yang, Xufeng; Liu, Yongshou; Ma, Panke

    2017-11-01

    Structural reliability analysis under evidence theory is investigated. It is rigorously proved that a surrogate model providing only correct sign prediction of the performance function can meet the accuracy requirement of evidence-theory-based reliability analysis. Accordingly, a method based on the active learning kriging model which only correctly predicts the sign of the performance function is proposed. Interval Monte Carlo simulation and a modified optimization method based on Karush-Kuhn-Tucker conditions are introduced to make the method more efficient in estimating the bounds of failure probability based on the kriging model. Four examples are investigated to demonstrate the efficiency and accuracy of the proposed method.

  19. Adaptive Implicit Non-Equilibrium Radiation Diffusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Philip, Bobby; Wang, Zhen; Berrill, Mark A

    2013-01-01

    We describe methods for accurate and efficient long-term time integration of non-equilibrium radiation diffusion systems: implicit time integration for efficient long-term integration of stiff multiphysics systems; local control theory based step size control to minimize the required global number of time steps while controlling accuracy; dynamic 3D adaptive mesh refinement (AMR) to minimize memory and computational costs; Jacobian-free Newton-Krylov methods on AMR grids for efficient nonlinear solution; and optimal multilevel preconditioner components that provide level-independent solver convergence.
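
    As a toy illustration of the implicit time integration and Jacobian-free Newton-Krylov ingredients (without AMR, step-size control, or preconditioning), one backward-Euler step of a one-dimensional nonlinear diffusion problem can be solved with SciPy's newton_krylov; the equation, grid, and step size below are arbitrary stand-ins for the radiation diffusion systems described above.

    ```python
    import numpy as np
    from scipy.optimize import newton_krylov

    # One backward-Euler step of u_t = d/dx (u^3 du/dx) on a 1-D grid with the
    # end values held fixed, solved by a Jacobian-free Newton-Krylov method.
    n, dx, dt = 64, 1.0 / 64, 1.0e-3
    u_old = 1.0 + 0.5 * np.sin(2.0 * np.pi * np.arange(n) * dx)

    def residual(u):
        flux = 0.5 * (u[1:] ** 3 + u[:-1] ** 3) * np.diff(u) / dx   # face fluxes
        div = np.zeros_like(u)
        div[1:-1] = np.diff(flux) / dx                              # interior divergence
        return u - u_old - dt * div                                 # boundary rows pin u to u_old

    u_new = newton_krylov(residual, u_old.copy(), f_tol=1e-8)
    print(np.max(np.abs(u_new - u_old)))                            # size of the implicit update
    ```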

  20. A Simple and Efficient Computational Approach to Chafed Cable Time-Domain Reflectometry Signature Prediction

    NASA Technical Reports Server (NTRS)

    Kowalski, Marc Edward

    2009-01-01

    A method for the prediction of time-domain signatures of chafed coaxial cables is presented. The method is quasi-static in nature, and is thus efficient enough to be included in inference and inversion routines. Unlike previous models proposed, no restriction on the geometry or size of the chafe is required in the present approach. The model is validated and its speed is illustrated via comparison to simulations from a commercial, three-dimensional electromagnetic simulator.

  1. Design of A Cyclone Separator Using Approximation Method

    NASA Astrophysics Data System (ADS)

    Sin, Bong-Su; Choi, Ji-Won; Lee, Kwon-Hee

    2017-12-01

    A separator is a device installed in industrial applications to separate mixed objects. The separator of interest in this research is a cyclone type, which is used to separate a steam-brine mixture in a geothermal plant. The most important performance measure of the cyclone separator is the collection efficiency. The collection efficiency in this study is predicted by performing CFD (computational fluid dynamics) analysis. This research defines six shape design variables to maximize the collection efficiency. Thus, the collection efficiency is set up as the objective function in the optimization process. Since the CFD analysis requires a lot of calculation time, it is impractical to obtain the optimal solution by directly coupling a gradient-based optimization algorithm to the CFD analysis. Thus, two approximation methods are introduced to obtain an optimum design. In this process, an L18 orthogonal array is adopted as the DOE method, and the kriging interpolation method is adopted to generate the metamodel for the collection efficiency. Based on the 18 analysis results, the relative importance of each variable to the collection efficiency is obtained through ANOVA (analysis of variance). The final design is suggested considering the results obtained from the two optimization methods. The fluid flow analysis of the cyclone separator is conducted by using the commercial CFD software ANSYS-CFX.

  2. Simulated Tempering Distributed Replica Sampling, Virtual Replica Exchange, and Other Generalized-Ensemble Methods for Conformational Sampling.

    PubMed

    Rauscher, Sarah; Neale, Chris; Pomès, Régis

    2009-10-13

    Generalized-ensemble algorithms in temperature space have become popular tools to enhance conformational sampling in biomolecular simulations. A random walk in temperature leads to a corresponding random walk in potential energy, which can be used to cross over energetic barriers and overcome the problem of quasi-nonergodicity. In this paper, we introduce two novel methods: simulated tempering distributed replica sampling (STDR) and virtual replica exchange (VREX). These methods are designed to address the practical issues inherent in the replica exchange (RE), simulated tempering (ST), and serial replica exchange (SREM) algorithms. RE requires a large, dedicated, and homogeneous cluster of CPUs to function efficiently when applied to complex systems. ST and SREM both have the drawback of requiring extensive initial simulations, possibly adaptive, for the calculation of weight factors or potential energy distribution functions. STDR and VREX alleviate the need for lengthy initial simulations, and for synchronization and extensive communication between replicas. Both methods are therefore suitable for distributed or heterogeneous computing platforms. We perform an objective comparison of all five algorithms in terms of both implementation issues and sampling efficiency. We use disordered peptides in explicit water as test systems, for a total simulation time of over 42 μs. Efficiency is defined in terms of both structural convergence and temperature diffusion, and we show that these definitions of efficiency are in fact correlated. Importantly, we find that ST-based methods exhibit faster temperature diffusion and correspondingly faster convergence of structural properties compared to RE-based methods. Within the RE-based methods, VREX is superior to both SREM and RE. On the basis of our observations, we conclude that ST is ideal for simple systems, while STDR is well-suited for complex systems.
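
    The bookkeeping shared by these temperature-space methods is the Metropolis test for moving a configuration between two temperatures. The sketch below shows only that standard acceptance rule; the distributed-replica machinery of STDR and VREX is not represented, and the units and energies are illustrative.

    ```python
    import numpy as np

    def accept_exchange(beta_i, beta_j, energy_i, energy_j, rng):
        """Metropolis test for swapping the configurations held at inverse
        temperatures beta_i and beta_j: accept with probability
        min(1, exp[(beta_i - beta_j) * (E_i - E_j)])."""
        delta = (beta_i - beta_j) * (energy_i - energy_j)
        return delta >= 0.0 or rng.random() < np.exp(delta)

    rng = np.random.default_rng(0)
    k_b = 0.0019872                                # kcal/(mol K), illustrative units
    beta = 1.0 / (k_b * np.array([300.0, 310.0]))
    print(accept_exchange(beta[0], beta[1], -110.0, -105.0, rng))
    ```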

  3. Invariant Imbedded T-Matrix Method for Axial Symmetric Hydrometeors with Extreme Aspect Ratios

    NASA Technical Reports Server (NTRS)

    Pelissier, Craig; Kuo, Kwo-Sen; Clune, Thomas; Adams, Ian; Munchak, Stephen

    2017-01-01

    The single-scattering properties (SSPs) of hydrometeors are the fundamental quantities for physics-based precipitation retrievals. Thus, efficient computation of their electromagnetic scattering is of great value. Whereas the semi-analytical T-Matrix methods are likely the most efficient for nonspherical hydrometeors with axial symmetry, they are not suitable for arbitrarily shaped hydrometeors absent of any significant symmetry, for which volume integral methods such as those based on Discrete Dipole Approximation (DDA) are required. Currently the two leading T-matrix methods are the Extended Boundary Condition Method (EBCM) and the Invariant Imbedding T-matrix Method incorporating Lorentz-Mie Separation of Variables (IITM+SOV). EBCM is known to outperform IITM+SOV for hydrometeors with modest aspect ratios. However, in cases when aspect ratios become extreme, such as needle-like particles with large height-to-diameter ratios, EBCM fails to converge. Such hydrometeors with extreme aspect ratios are known to be present in solid precipitation and their SSPs are required to model the radiative responses accurately. In these cases, IITM+SOV is shown to converge. An efficient, parallelized C++ implementation for both EBCM and IITM+SOV has been developed to conduct a performance comparison between EBCM, IITM+SOV, and DDSCAT (a popular implementation of DDA). We present the comparison results and discuss details. Our intent is to release the combined EBCM and IITM+SOV software to the community under an open source license.

  4. Invariant Imbedding T-Matrix Method for Axial Symmetric Hydrometeors with Extreme Aspect Ratios

    NASA Astrophysics Data System (ADS)

    Pelissier, C.; Clune, T.; Kuo, K. S.; Munchak, S. J.; Adams, I. S.

    2017-12-01

    The single-scattering properties (SSPs) of hydrometeors are the fundamental quantities for physics-based precipitation retrievals. Thus, efficient computation of their electromagnetic scattering is of great value. Whereas the semi-analytical T-Matrix methods are likely the most efficient for nonspherical hydrometeors with axial symmetry, they are not suitable for arbitrarily shaped hydrometeors absent of any significant symmetry, for which volume integral methods such as those based on Discrete Dipole Approximation (DDA) are required. Currently the two leading T-matrix methods are the Extended Boundary Condition Method (EBCM) and the Invariant Imbedding T-matrix Method incorporating Lorentz-Mie Separation of Variables (IITM+SOV). EBCM is known to outperform IITM+SOV for hydrometeors with modest aspect ratios. However, in cases when aspect ratios become extreme, such as needle-like particles with large height-to-diameter ratios, EBCM fails to converge. Such hydrometeors with extreme aspect ratios are known to be present in solid precipitation and their SSPs are required to model the radiative responses accurately. In these cases, IITM+SOV is shown to converge. An efficient, parallelized C++ implementation for both EBCM and IITM+SOV has been developed to conduct a performance comparison between EBCM, IITM+SOV, and DDSCAT (a popular implementation of DDA). We present the comparison results and discuss details. Our intent is to release the combined EBCM & IITM+SOV software to the community under an open source license.

  5. A synthetic visual plane algorithm for visibility computation in consideration of accuracy and efficiency

    NASA Astrophysics Data System (ADS)

    Yu, Jieqing; Wu, Lixin; Hu, Qingsong; Yan, Zhigang; Zhang, Shaoliang

    2017-12-01

    Visibility computation is of great interest to location optimization, environmental planning, ecology, and tourism. Many algorithms have been developed for visibility computation. In this paper, we propose a novel method of visibility computation, called synthetic visual plane (SVP), to achieve better performance with respect to efficiency, accuracy, or both. The method uses a global horizon, which is a synthesis of line-of-sight information of all nearer points, to determine the visibility of a point, which makes it an accurate visibility method. We discretized the horizon to improve efficiency. After discretization, the accuracy and efficiency of SVP depend on the scale of discretization (i.e., zone width). The method is more accurate at smaller zone widths, but this requires a longer operating time. Users must strike a balance between accuracy and efficiency at their discretion. According to our experiments, SVP is less accurate but more efficient than R2 if the zone width is set to one grid. However, SVP becomes more accurate than R2 when the zone width is set to 1/24 grid, while it continues to perform as fast or faster than R2. Although SVP performs worse than the reference plane and depth map algorithms with respect to efficiency, it is superior in accuracy to these other two algorithms.
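
    The horizon idea behind SVP can be illustrated in one dimension: along a profile away from the observer, a cell is visible only if its elevation angle exceeds the running maximum (the horizon) accumulated over all nearer cells. The NumPy sketch below shows that one-dimensional case only; the actual method synthesises such horizons over the full 2-D grid and discretises them into zones.

    ```python
    import numpy as np

    def visible_along_profile(elev, observer_height=1.7):
        """Line-of-sight visibility along a 1-D elevation profile: a cell is
        visible if its elevation angle from the observer exceeds the maximum
        angle (the horizon) of every nearer cell."""
        z0 = elev[0] + observer_height
        dist = np.arange(1, len(elev))
        tangents = (elev[1:] - z0) / dist            # tangent of the elevation angle
        horizon = np.maximum.accumulate(tangents)    # running horizon
        vis = np.ones(len(elev), dtype=bool)         # observer cell and nearest cell visible
        vis[2:] = tangents[1:] >= horizon[:-1]
        return vis

    profile = np.array([100.0, 102.0, 101.0, 105.0, 103.0, 104.0, 110.0, 108.0])
    print(visible_along_profile(profile))
    ```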

  6. Efficient method for the calculation of mean extinction. II. Analyticity of the complex extinction efficiency of homogeneous spheroids and finite cylinders.

    PubMed

    Xing, Z F; Greenberg, J M

    1994-08-20

    The analyticity of the complex extinction efficiency is examined numerically in the size-parameter domain for homogeneous prolate and oblate spheroids and finite cylinders. The T-matrix code, which is the most efficient program available to date, is employed to calculate the individual particle-extinction efficiencies. Because of its computational limitations in the size-parameter range, a slightly modified Hilbert-transform algorithm is required to establish the analyticity numerically. The findings concerning analyticity that we reported for spheres (Astrophys. J. 399, 164-175, 1992) apply equally to these nonspherical particles.

  7. Scaling of ratings: Concepts and methods

    Treesearch

    Thomas C. Brown; Terry C. Daniel

    1990-01-01

    Rating scales provide an efficient and widely used means of recording judgments. This paper reviews scaling issues within the context of a psychometric model of the rating process, describes several methods of scaling rating data, and compares the methods in terms of the assumptions they require about the rating process and the information they provide about the...

  8. Progressive sampling-based Bayesian optimization for efficient and automatic machine learning model selection.

    PubMed

    Zeng, Xueqiang; Luo, Gang

    2017-12-01

    Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters termed hyper-parameters must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method. We show that compared to a state of the art automatic selection method, our method can significantly reduce search time, classification error rate, and standard deviation of error rate due to randomization. This is major progress towards enabling fast turnaround in identifying high-quality solutions required by many machine learning-based clinical data analysis tasks.
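
    The progressive-sampling idea, spending cheap evaluations on small data subsets and reserving the full data set for the few surviving candidates, can be sketched as below. A plain elimination rule over a handful of regularization values stands in for the paper's Bayesian optimization over algorithms and hyper-parameters; the data set, model, and subset sizes are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=20000, n_features=20, random_state=0)
    candidates = [10.0 ** k for k in range(-3, 3)]          # hyper-parameter values to try
    rng = np.random.default_rng(0)

    for n in (500, 2000, 8000):                             # progressively larger samples
        idx = rng.choice(len(X), size=n, replace=False)
        scores = {C: cross_val_score(LogisticRegression(C=C, max_iter=1000),
                                     X[idx], y[idx], cv=3).mean()
                  for C in candidates}
        # keep only the better half of the candidates for the next, costlier round
        keep = max(1, len(candidates) // 2)
        candidates = sorted(scores, key=scores.get, reverse=True)[:keep]

    print("selected C:", candidates[0])
    ```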

  9. PLUM: Parallel Load Balancing for Unstructured Adaptive Meshes. Degree awarded by Colorado Univ.

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid

    1998-01-01

    Dynamic mesh adaption on unstructured grids is a powerful tool for computing large-scale problems that require grid modifications to efficiently resolve solution features. By locally refining and coarsening the mesh to capture physical phenomena of interest, such procedures make standard computational methods more cost effective. Unfortunately, an efficient parallel implementation of these adaptive methods is rather difficult to achieve, primarily due to the load imbalance created by the dynamically-changing nonuniform grid. This requires significant communication at runtime, leading to idle processors and adversely affecting the total execution time. Nonetheless, it is generally thought that unstructured adaptive-grid techniques will constitute a significant fraction of future high-performance supercomputing. Various dynamic load balancing methods have been reported to date; however, most of them either lack a global view of loads across processors or do not apply their techniques to realistic large-scale applications.

  10. Detection of nitrogen deficiency in potatoes using small unmanned aircraft systems

    USDA-ARS?s Scientific Manuscript database

    Small Unmanned Aircraft Systems (sUAS) are recognized as potentially important remote-sensing platforms for precision agriculture. However, research is required to determine which sensors and data processing methods are required to use sUAS in an efficient and cost-effective manner. We set up a ni...

  11. Preservation of live cultures of basidiomycetes - recent methods.

    PubMed

    Homolka, Ladislav

    2014-02-01

    Basidiomycetes are used in industrial processes, in basic or applied research, teaching, systematic and biodiversity studies. Efficient work with basidiomycete cultures requires their reliable source, which is ensured by their safe long-term storage. Repeated subculturing, frequently used for the preservation, is time-consuming, prone to contamination, and does not prevent genetic and physiological changes during long-term maintenance. Various storage methods have been developed in order to eliminate these disadvantages. Besides lyophilization (unsuitable for the majority of basidiomycetes), cryopreservation at low temperatures seems to be a very efficient way to attain this goal. Besides survival, another requirement for successful maintenance of fungal strains is the ability to preserve their features unchanged. An ideal method has not been created so far. Therefore it is highly desirable to develop new or improve the current preservation methods, combining advantages and eliminate disadvantages of individual techniques. Many reviews on preservation of microorganisms including basidiomycetes have been published, but the progress in the field requires an update. Although herbaria specimens of fungi (and of basidiomycetes in particular) are very important for taxonomic and especially typological studies, this review is limited to live fungal cultures. Copyright © 2013 The British Mycological Society. Published by Elsevier Ltd. All rights reserved.

  12. Reliability-based trajectory optimization using nonintrusive polynomial chaos for Mars entry mission

    NASA Astrophysics Data System (ADS)

    Huang, Yuechen; Li, Haiyang

    2018-06-01

    This paper presents the reliability-based sequential optimization (RBSO) method to settle the trajectory optimization problem with parametric uncertainties in entry dynamics for Mars entry mission. First, the deterministic entry trajectory optimization model is reviewed, and then the reliability-based optimization model is formulated. In addition, the modified sequential optimization method, in which the nonintrusive polynomial chaos expansion (PCE) method and the most probable point (MPP) searching method are employed, is proposed to solve the reliability-based optimization problem efficiently. The nonintrusive PCE method contributes to the transformation between the stochastic optimization (SO) and the deterministic optimization (DO) and to the approximation of trajectory solution efficiently. The MPP method, which is used for assessing the reliability of constraints satisfaction only up to the necessary level, is employed to further improve the computational efficiency. The cycle including SO, reliability assessment and constraints update is repeated in the RBSO until the reliability requirements of constraints satisfaction are satisfied. Finally, the RBSO is compared with the traditional DO and the traditional sequential optimization based on Monte Carlo (MC) simulation in a specific Mars entry mission to demonstrate the effectiveness and the efficiency of the proposed method.

  13. Protein immobilization onto various surfaces using a polymer-bound isocyanate

    NASA Astrophysics Data System (ADS)

    Kang, Hyun-Jin; Cha, Eun Ji; Park, Hee-Deung

    2015-01-01

    Silane coupling agents have been widely used for immobilizing proteins onto inorganic surfaces. However, the immobilization method using silane coupling agents requires several treatment steps, and its application is limited to only surfaces containing hydroxyl groups. The aim of this study was to develop a novel method to overcome the limitations of the silane-based immobilization method using a polymer-bound isocyanate. Initially, polymer-bound isocyanate was dissolved in organic solvent and then was used to dip-coat inorganic surfaces. Proteins were then immobilized onto the dip-coated surfaces by the formation of urea bonds between the isocyanate groups of the polymer and the amine groups of the protein. The reaction was verified by FT-IR in which NCO stretching peaks disappeared, and CO and NH stretching peaks appeared after immobilization. The immobilization efficiency of the newly developed method was insensitive to reaction temperatures (4-50 °C), but the efficiency increased with reaction time and reached a maximum after 4 h. Furthermore, the method showed comparable immobilization efficiency to the silane-based immobilization method and was applicable to surfaces that cannot form hydroxyl groups. Taken together, the newly developed method provides a simple and efficient platform for immobilizing proteins onto surfaces.

  14. Efficient implementation of neural network deinterlacing

    NASA Astrophysics Data System (ADS)

    Seo, Guiwon; Choi, Hyunsoo; Lee, Chulhee

    2009-02-01

    Interlaced scanning has been widely used in most broadcasting systems. However, it suffers from undesirable artifacts such as jagged patterns, flickering, and line twitter. Moreover, most recent TV monitors use flat-panel display technologies such as LCD or PDP, and these monitors require progressive formats. Consequently, converting interlaced video into progressive video is required in many applications, and a number of deinterlacing methods have been proposed. Recently, deinterlacing methods based on neural networks have been proposed with good results. On the other hand, with high-resolution video content such as HDTV, the amount of video data to be processed is very large. As a result, processing time and hardware complexity become important issues. In this paper, we propose an efficient implementation of neural network deinterlacing using polynomial approximation of the sigmoid function. Experimental results show that these approximations provide equivalent performance with a considerable reduction in complexity. This implementation of neural network deinterlacing can be efficiently incorporated into hardware implementations.
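
    The core trick can be illustrated with a least-squares polynomial fit to the sigmoid over a bounded activation range; the degree and interval below are illustrative assumptions, not the approximation used in the paper.

        import numpy as np

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        # Fit a low-degree polynomial to the sigmoid on a bounded input range.
        x = np.linspace(-6.0, 6.0, 2001)
        deg = 5                                   # assumed degree, for illustration
        coeffs = np.polyfit(x, sigmoid(x), deg)

        # Evaluating the polynomial needs only multiply-adds (Horner's rule),
        # which is far cheaper in hardware than computing exp().
        approx = np.polyval(coeffs, x)
        print("max abs error on [-6, 6]:", np.max(np.abs(approx - sigmoid(x))))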

  15. Efficiency Enhancement for an Inductive Wireless Power Transfer System by Optimizing the Impedance Matching Networks.

    PubMed

    Miao, Zhidong; Liu, Dake; Gong, Chen

    2017-10-01

    Inductive wireless power transfer (IWPT) is a promising power technology for implantable biomedical devices, where the power consumption is low and efficiency is the most important consideration. In this paper, we propose an optimization method for impedance matching networks (IMN) to maximize the IWPT efficiency. The IMN at the load side is designed to achieve the optimal load, and the IMN at the source side is designed to deliver the required amount of power (no more, no less) from the power source to the load. The theoretical analysis and design procedure are given. An IWPT system for an implantable glaucoma therapeutic prototype is designed as an example. Compared with the efficiency of the resonant IWPT system, the efficiency of our optimized system increases by a factor of 1.73. Moreover, the efficiency of our optimized IWPT system is 1.97 times higher than that of the IWPT system optimized by the traditional maximum power transfer method. All the discussions indicate that the optimization method proposed in this paper can achieve high efficiency and a long working time when the system is powered by a battery.

  16. Inferring Biological Structures from Super-Resolution Single Molecule Images Using Generative Models

    PubMed Central

    Maji, Suvrajit; Bruchez, Marcel P.

    2012-01-01

    Localization-based super-resolution imaging is presently limited by sampling requirements for dynamic measurements of biological structures. Generating an image requires serial acquisition of individual molecular positions at sufficient density to define a biological structure, increasing the acquisition time. Efficient analysis of biological structures from sparse localization data could substantially improve the dynamic imaging capabilities of these methods. Using a feature extraction technique called the Hough transform, simple biological structures are identified from both simulated and real localization data. We demonstrate that these generative models can efficiently infer biological structures in the data from far fewer localizations than are required for complete spatial sampling. Analysis at partial data densities revealed efficient recovery of clathrin vesicle size distributions and microtubule orientation angles with as little as 10% of the localization data. This approach significantly increases the temporal resolution for dynamic imaging and provides quantitatively useful biological information. PMID:22629348
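
    A brute-force, Hough-style parameter-space vote conveys how a circular structure (for example, a vesicle outline) can be recovered from sparse localizations. The grid resolution, noise level, and 10% subsampling below are illustrative assumptions, not the paper's settings.

        import numpy as np

        rng = np.random.default_rng(1)

        # Simulated localizations on a circle (center (5, 5), radius 2) plus noise,
        # keeping only a sparse 10% subset to mimic partial data density.
        theta = rng.uniform(0, 2 * np.pi, 300)
        pts = np.c_[5 + 2 * np.cos(theta), 5 + 2 * np.sin(theta)]
        pts += rng.normal(0, 0.05, pts.shape)
        pts = pts[rng.random(len(pts)) < 0.10]

        # Hough-style accumulator over (cx, cy, r): each candidate circle is scored
        # by how many localizations fall within a tolerance of its circumference.
        cxs = cys = np.linspace(3, 7, 41)
        rs = np.linspace(1, 3, 21)
        tol, best = 0.1, (None, -1)
        for cx in cxs:
            for cy in cys:
                d = np.hypot(pts[:, 0] - cx, pts[:, 1] - cy)
                for r in rs:
                    votes = int(np.sum(np.abs(d - r) < tol))
                    if votes > best[1]:
                        best = ((cx, cy, r), votes)
        print("estimated (cx, cy, r):", best[0], "supporting localizations:", best[1])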

  17. Efficient method for assessing channel instability near bridges

    USGS Publications Warehouse

    Robinson, Bret A.; Thompson, R.E.

    1993-01-01

    Efficient methods for data collection and processing are required to complete channel-instability assessments at 5,600 bridge sites in Indiana at an affordable cost and within a reasonable time frame while maintaining the quality of the assessments. To provide this needed efficiency and quality control, a data-collection form was developed that specifies the data to be collected and the order of data collection. This form represents a modification of previous forms that grouped variables according to type rather than by order of collection. Assessments completed during two field seasons showed that greater efficiency was achieved by using a fill-in-the-blank form that organizes the data to be recorded in a specified order: in the vehicle, from the roadway, in the upstream channel, under the bridge, and in the downstream channel.

  18. In search of standards to support circularity in product policies: A systematic approach.

    PubMed

    Tecchio, Paolo; McAlister, Catriona; Mathieux, Fabrice; Ardente, Fulvio

    2017-12-01

    The aspiration of a circular economy is to shift material flows toward a zero waste and pollution production system. The process of shifting to a circular economy has been initiated by the European Commission in its action plan for the circular economy. The EU Ecodesign Directive is a key policy in this transition. However, to date the focus of access-to-market requirements on products has primarily been upon energy efficiency. The absence of adequate metrics and standards has been a key barrier to the inclusion of resource efficiency requirements. This paper proposes a framework to boost sustainable engineering and resource use by systematically identifying standardization needs and features. Standards can then support the setting of appropriate material efficiency requirements in EU product policy. Three high-level policy goals concerning material efficiency of products were identified: embodied impact reduction, lifetime extension, and residual waste reduction. Through a lifecycle perspective, a matrix of interactions among material efficiency topics (recycled content, re-used content, relevant material content, durability, upgradability, reparability, re-manufacturability, reusability, recyclability, recoverability, relevant material separability) and policy goals was created. The framework was tested on case studies for electronic displays and washing machines. For potential material efficiency requirements, specific standardization needs were identified, such as adequate metrics for performance measurements, reliable and repeatable tests, and calculation procedures. The proposed novel framework aims to provide a method by which to identify key material efficiency considerations within the policy context and to map out the generic and product-specific standardization needs that support ecodesign. Via such an approach, many different stakeholders (industry, academia, policy makers, non-governmental organizations, etc.) can be involved in material efficiency standards and regulations. Requirements and standards concerning material efficiency would place obligations on product manufacturers, but would also help designers and other interested parties address sustainable resource use.

  19. Combinatorial alloying improves bismuth vanadate photoanodes via reduced monoclinic distortion

    DOE PAGES

    Newhouse, P. F.; Guevarra, D.; Umehara, M.; ...

    2018-01-01

    Energy technologies are enabled by materials innovations, requiring efficient methods to search high dimensional parameter spaces, such as multi-element alloying for enhancing solar fuels photoanodes.

  20. Estimation of the effective heating systems radius as a method of the reliability improving and energy efficiency

    NASA Astrophysics Data System (ADS)

    Akhmetova, I. G.; Chichirova, N. D.

    2017-11-01

    When conducting an energy survey of a heat supply enterprise that operates several boilers located close to one another, it is advisable to assess the efficiency of heat supply from each individual boiler and the possibility of reducing energy consumption across the enterprise by switching consumers to a more efficient source and closing inefficient boilers. The temporal dynamics of connecting prospective loads and changing market conditions must also be considered. To solve this problem, the radius of effective heat supply from a thermal energy source can be calculated. The disadvantage of existing methods is their high complexity and the need to collect large amounts of source data and perform a significant amount of computation. When an energy survey covers a heat supply enterprise operating a large number of thermal energy sources, a rapid assessment of the effective heating radius is required. Given the specifics and objectives of an energy survey, a method for calculating the effective heating radius to be used during an energy audit should rely on data that the heat supply organization makes openly available and should minimize effort, while still matching the results obtained by other methods. To determine the efficiency radius of the Kazan heat supply system, the shares of cost for generation and transmission of thermal energy and the capital investment required to connect new consumers were determined. The results were compared with the values obtained with previously known methods. The suggested express method makes it possible to determine the effective radius of centralized heat supply from heat sources during energy audits with minimum effort and the required accuracy.

  1. Efficiency of reactant site sampling in network-free simulation of rule-based models for biochemical systems

    PubMed Central

    Yang, Jin; Hlavacek, William S.

    2011-01-01

    Rule-based models, which are typically formulated to represent cell signaling systems, can now be simulated via various network-free simulation methods. In a network-free method, reaction rates are calculated for rules that characterize molecular interactions, and these rule rates, which each correspond to the cumulative rate of all reactions implied by a rule, are used to perform a stochastic simulation of reaction kinetics. Network-free methods, which can be viewed as generalizations of Gillespie’s method, are so named because these methods do not require that a list of individual reactions implied by a set of rules be explicitly generated, which is a requirement of other methods for simulating rule-based models. This requirement is impractical for rule sets that imply large reaction networks (i.e., long lists of individual reactions), as reaction network generation is expensive. Here, we compare the network-free simulation methods implemented in RuleMonkey and NFsim, general-purpose software tools for simulating rule-based models encoded in the BioNetGen language. The method implemented in NFsim uses rejection sampling to correct overestimates of rule rates, which introduces null events (i.e., time steps that do not change the state of the system being simulated). The method implemented in RuleMonkey uses iterative updates to track rule rates exactly, which avoids null events. To ensure a fair comparison of the two methods, we developed implementations of the rejection and rejection-free methods specific to a particular class of kinetic models for multivalent ligand-receptor interactions. These implementations were written with the intention of making them as much alike as possible, minimizing the contribution of irrelevant coding differences to efficiency differences. Simulation results show that performance of the rejection method is equal to or better than that of the rejection-free method over wide parameter ranges. However, when parameter values are such that ligand-induced aggregation of receptors yields a large connected receptor cluster, the rejection-free method is more efficient. PMID:21832806
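
    The difference between the two strategies can be sketched with a single reversible binding rule. In the toy Python simulation below (the one-rule system and all parameter values are invented; RuleMonkey and NFsim handle far richer rule sets), the "rejection" variant proposes events at fixed upper-bound rates and discards null events, while the "exact" variant recomputes the rule rates at every step; both sample the same kinetics.

        import numpy as np

        rng = np.random.default_rng(2)
        N, k_on, k_off, t_end = 100, 0.5, 1.0, 20.0   # invented toy parameters

        def simulate(method):
            bound = np.zeros(N, dtype=bool)           # state of each receptor site
            t, null_events = 0.0, 0
            while t < t_end:
                n_b = bound.sum()
                if method == "exact":                 # exact rule rates each step
                    a_bind, a_unbind = k_on * (N - n_b), k_off * n_b
                else:                                 # rejection: fixed upper bounds
                    a_bind, a_unbind = k_on * N, k_off * N
                a_tot = a_bind + a_unbind
                t += rng.exponential(1.0 / a_tot)
                do_bind = rng.random() < a_bind / a_tot
                if method == "exact":
                    # pick a site guaranteed to be in the right state
                    idx = rng.choice(np.flatnonzero(bound != do_bind))
                    bound[idx] = do_bind
                else:
                    idx = rng.integers(N)             # any site; may be a null event
                    if bound[idx] == do_bind:
                        null_events += 1              # wrong state: reject, time still advances
                    else:
                        bound[idx] = do_bind
            return int(bound.sum()), null_events

        print("exact     (bound, null events):", simulate("exact"))
        print("rejection (bound, null events):", simulate("rejection"))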

  2. Multiconfigurational short-range density-functional theory for open-shell systems

    NASA Astrophysics Data System (ADS)

    Hedegård, Erik Donovan; Toulouse, Julien; Jensen, Hans Jørgen Aagaard

    2018-06-01

    Many chemical systems cannot be described by quantum chemistry methods based on a single-reference wave function. Accurate predictions of energetic and spectroscopic properties require a delicate balance between describing the most important configurations (static correlation) and obtaining dynamical correlation efficiently. The former is most naturally done through a multiconfigurational (MC) wave function, whereas the latter can be done by, e.g., perturbation theory. We have employed a different strategy, namely, a hybrid between multiconfigurational wave functions and density-functional theory (DFT) based on range separation. The method is denoted by MC short-range DFT (MC-srDFT) and is more efficient than perturbative approaches as it capitalizes on the efficient treatment of the (short-range) dynamical correlation by DFT approximations. In turn, the method also improves DFT with standard approximations through the ability of multiconfigurational wave functions to recover large parts of the static correlation. Until now, our implementation was restricted to closed-shell systems, and to lift this restriction, we present here the generalization of MC-srDFT to open-shell cases. The additional terms required to treat open-shell systems are derived and implemented in the DALTON program. This new method for open-shell systems is illustrated on dioxygen and [Fe(H2O)6]3+.

  3. Mining Quality Phrases from Massive Text Corpora

    PubMed Central

    Liu, Jialu; Shang, Jingbo; Wang, Chi; Ren, Xiang; Han, Jiawei

    2015-01-01

    Text data are ubiquitous and play an essential role in big data applications. However, text data are mostly unstructured. Transforming unstructured text into structured units (e.g., semantically meaningful phrases) will substantially reduce semantic ambiguity and enhance the power and efficiency at manipulating such data using database technology. Thus mining quality phrases is a critical research problem in the field of databases. In this paper, we propose a new framework that extracts quality phrases from text corpora integrated with phrasal segmentation. The framework requires only limited training but the quality of phrases so generated is close to human judgment. Moreover, the method is scalable: both computation time and required space grow linearly as corpus size increases. Our experiments on large text corpora demonstrate the quality and efficiency of the new method. PMID:26705375

  4. Accurate measurement of transgene copy number in crop plants using droplet digital PCR

    USDA-ARS?s Scientific Manuscript database

    Technical abstract: Genetic transformation is a powerful means for the improvement of crop plants, but requires labor and resource intensive methods. An efficient method for identifying single copy transgene insertion events from a population of independent transgenic lines is desirable. Currently ...

  5. Accurate measure of transgene copy number in crop plants using droplet digital PCR

    USDA-ARS?s Scientific Manuscript database

    Genetic transformation is a powerful means for the improvement of crop plants, but requires labor- and resource-intensive methods. An efficient method for identifying single-copy transgene insertion events from a population of independent transgenic lines is desirable. Currently, transgene copy numb...

  6. Efficient numerical method for analyzing optical bistability in photonic crystal microcavities.

    PubMed

    Yuan, Lijun; Lu, Ya Yan

    2013-05-20

    Nonlinear optical effects can be enhanced by photonic crystal microcavities and be used to develop practical ultra-compact optical devices with low power requirements. The finite-difference time-domain method is the standard numerical method for simulating nonlinear optical devices, but it has limitations in terms of accuracy and efficiency. In this paper, a rigorous and efficient frequency-domain numerical method is developed for analyzing nonlinear optical devices where the nonlinear effect is concentrated in the microcavities. The method replaces the linear problem outside the microcavities by a rigorous and numerically computed boundary condition, then solves the nonlinear problem iteratively in a small region around the microcavities. Convergence of the iterative method is much easier to achieve since the size of the problem is significantly reduced. The method is presented for a specific two-dimensional photonic crystal waveguide-cavity system with a Kerr nonlinearity, using numerical methods that can take advantage of the geometric features of the structure. The method is able to calculate multiple solutions exhibiting the optical bistability phenomenon in the strongly nonlinear regime.
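
    The bistable behaviour itself can be previewed with a textbook zero-dimensional model rather than the full waveguide-cavity solver described here: in a standard dimensionless form of dispersive Kerr bistability, the cavity intensity I_t for a drive I_in satisfies I_in = I_t [1 + (theta - I_t)^2], a cubic with three coexisting solutions over a range of inputs once the detuning theta exceeds sqrt(3). The detuning and drive values below are arbitrary illustrative choices, not parameters from the paper.

        import numpy as np

        theta = 3.0   # cavity detuning (dimensionless); bistability requires theta > sqrt(3)

        def cavity_intensities(I_in):
            # Real, non-negative roots of I_t^3 - 2*theta*I_t^2 + (1 + theta^2)*I_t - I_in = 0
            roots = np.roots([1.0, -2.0 * theta, 1.0 + theta**2, -I_in])
            return sorted(r.real for r in roots if abs(r.imag) < 1e-9 and r.real >= 0)

        for I_in in (2.0, 4.0, 8.0):   # the middle value lies inside the bistable window
            print(f"I_in = {I_in:4.1f} -> I_t solutions: {cavity_intensities(I_in)}")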

  7. IMPROVEMENT OF EFFICIENCY OF CUT AND OVERLAY ASPHALT WORKS BY USING MOBILE MAPPING SYSTEM

    NASA Astrophysics Data System (ADS)

    Yabuki, Nobuyoshi; Nakaniwa, Kazuhide; Kidera, Hiroki; Nishi, Daisuke

    When cut-and-overlay asphalt work is done to improve road pavement, the conventional road-surface elevation survey with levels often requires traffic regulation and takes much time and effort. Recently, new surveying methods using non-prismatic total stations or fixed 3D laser scanners have been proposed in industry, but they have not been widely adopted because of their high cost. In this research, we propose a new method using Mobile Mapping Systems (MMS) in order to increase efficiency and reduce cost. In this method, small white marks are painted at intervals of 10 m along the road to identify cross sections, and the elevations of the white marks are corrected with accurate survey data. To verify the proposed method, we carried out an experiment on a road at Osaka University, comparing this method with the conventional level survey and the fixed 3D laser scanning method. The results showed that the proposed method achieved accuracy similar to the other methods while being more efficient.

  8. An efficient method for hybrid density functional calculation with spin-orbit coupling

    NASA Astrophysics Data System (ADS)

    Wang, Maoyuan; Liu, Gui-Bin; Guo, Hong; Yao, Yugui

    2018-03-01

    In first-principles calculations, hybrid functional is often used to improve accuracy from local exchange correlation functionals. A drawback is that evaluating the hybrid functional needs significantly more computing effort. When spin-orbit coupling (SOC) is taken into account, the non-collinear spin structure increases computing effort by at least eight times. As a result, hybrid functional calculations with SOC are intractable in most cases. In this paper, we present an approximate solution to this problem by developing an efficient method based on a mixed linear combination of atomic orbital (LCAO) scheme. We demonstrate the power of this method using several examples and we show that the results compare very well with those of direct hybrid functional calculations with SOC, yet the method only requires a computing effort similar to that without SOC. The presented technique provides a good balance between computing efficiency and accuracy, and it can be extended to magnetic materials.

  9. An efficient quantum algorithm for spectral estimation

    NASA Astrophysics Data System (ADS)

    Steffens, Adrian; Rebentrost, Patrick; Marvian, Iman; Eisert, Jens; Lloyd, Seth

    2017-03-01

    We develop an efficient quantum implementation of an important signal processing algorithm for line spectral estimation: the matrix pencil method, which determines the frequencies and damping factors of signals consisting of finite sums of exponentially damped sinusoids. Our algorithm provides a quantum speedup in a natural regime where the sampling rate is much higher than the number of sinusoid components. Along the way, we develop techniques that are expected to be useful for other quantum algorithms as well—consecutive phase estimations to efficiently make products of asymmetric low rank matrices classically accessible and an alternative method to efficiently exponentiate non-Hermitian matrices. Our algorithm features an efficient quantum-classical division of labor: the time-critical steps are implemented in quantum superposition, while an interjacent step, requiring much fewer parameters, can operate classically. We show that frequencies and damping factors can be obtained in time logarithmic in the number of sampling points, exponentially faster than known classical algorithms.
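
    For orientation, a compact classical implementation of the matrix pencil step (the quantum algorithm itself is not reproduced here; the synthetic signal, pencil parameter, and number of components are illustrative choices) recovers frequencies and damping factors from a short noiseless record:

        import numpy as np

        # Synthetic signal: two exponentially damped sinusoids (invented values).
        dt, n = 0.01, 200
        t = dt * np.arange(n)
        y = (1.0 * np.exp(-1.5 * t) * np.cos(2 * np.pi * 12 * t)
             + 0.6 * np.exp(-4.0 * t) * np.cos(2 * np.pi * 33 * t))

        M = 4                       # number of poles (two per real sinusoid)
        L = n // 2                  # pencil parameter

        # Hankel data matrix and the shifted pencil (Y0, Y1).
        Y = np.array([y[i:i + L + 1] for i in range(n - L)])
        Y0, Y1 = Y[:, :-1], Y[:, 1:]

        # Signal poles z_k = exp((-alpha_k + 1j*omega_k) * dt) are the dominant
        # eigenvalues of pinv(Y0) @ Y1; with noisy data an SVD truncation to
        # rank M would be applied first.
        eigs = np.linalg.eigvals(np.linalg.pinv(Y0) @ Y1)
        z = eigs[np.argsort(-np.abs(eigs))][:M]

        freq_hz = np.abs(np.angle(z)) / (2 * np.pi * dt)
        damping = -np.log(np.abs(z)) / dt
        print("frequencies (Hz):", np.round(np.sort(freq_hz), 2))
        print("damping (1/s):   ", np.round(np.sort(damping), 2))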

  10. A novel technique based on in vitro oocyte injection to improve CRISPR/Cas9 gene editing in zebrafish

    PubMed Central

    Xie, Shao-Lin; Bian, Wan-Ping; Wang, Chao; Junaid, Muhammad; Zou, Ji-Xing; Pei, De-Sheng

    2016-01-01

    Contemporary improvements in the type II clustered regularly interspaced short palindromic repeats/CRISPR-associated protein 9 (CRISPR/Cas9) system offer a convenient way to perform genome editing in zebrafish. However, the low efficiencies of genome editing and germline transmission require time-intensive and laborious screening work. Here, we report a method based on in vitro oocyte storage, in which oocytes are injected in advance and incubated in oocyte storage medium, that significantly improves the efficiencies of genome editing and germline transmission by in vitro fertilization (IVF) in zebrafish. Compared to conventional methods, prior micro-injection of zebrafish oocytes improved the efficiency of genome editing, especially for sgRNAs with low targeting efficiency. Owing to its high throughput, simplicity, and flexible design, this novel strategy provides an efficient alternative for increasing the speed of generating heritable mutants in zebrafish using the CRISPR/Cas9 system. PMID:27680290

  11. Framework for Architecture Trade Study Using MBSE and Performance Simulation

    NASA Technical Reports Server (NTRS)

    Ryan, Jessica; Sarkani, Shahram; Mazzuchi, Thomas

    2012-01-01

    Increasing complexity in modern systems, together with cost and schedule constraints, requires a new paradigm of systems engineering to fulfill stakeholder needs. Challenges facing efficient trade studies include poor tool interoperability, lack of simulation coordination (design parameters), and requirements flowdown. A recent trend toward Model Based System Engineering (MBSE) includes flexible architecture definition, program documentation, requirements traceability, and systems engineering reuse. As a new domain, MBSE still lacks governing standards and commonly accepted frameworks. This paper proposes a framework for efficient architecture definition using MBSE in conjunction with domain-specific simulation to evaluate trade studies. A general framework is provided, followed by a specific example that includes a method for designing a trade study, defining candidate architectures, planning simulations to fulfill requirements, and finally performing a weighted decision analysis to optimize system objectives.
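
    The closing weighted decision analysis amounts to a weighted-sum scoring of candidate architectures against stakeholder criteria; the criteria, weights, and scores in the sketch below are invented placeholders rather than values from the paper, and in practice the scores would come from the domain-specific simulations.

        from typing import Dict

        # Hypothetical criteria weights (summing to 1) and per-architecture scores
        # on a 1-10 scale.
        weights: Dict[str, float] = {"performance": 0.4, "cost": 0.3, "risk": 0.2, "schedule": 0.1}
        scores = {
            "Architecture A": {"performance": 8, "cost": 5, "risk": 7, "schedule": 6},
            "Architecture B": {"performance": 6, "cost": 9, "risk": 6, "schedule": 8},
            "Architecture C": {"performance": 9, "cost": 4, "risk": 5, "schedule": 5},
        }

        def weighted_score(arch_scores: Dict[str, float]) -> float:
            return sum(weights[c] * arch_scores[c] for c in weights)

        for arch in sorted(scores, key=lambda a: weighted_score(scores[a]), reverse=True):
            print(f"{arch}: {weighted_score(scores[arch]):.2f}")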

  12. 77 FR 28790 - Medical Loss Ratio Requirements Under the Patient Protection and Affordable Care Act

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-16

    ... information will be available on the HHS Web site, HealthCare.gov , providing an efficient method of public... Sources, Methods, and Limitations On December 1, 2010, we published an interim final rule (75 FR 74864... impacts of the MLR rule, the data contain certain limitations; we developed imputation methods to account...

  13. Efficient and robust pupil size and blink estimation from near-field video sequences for human-machine interaction.

    PubMed

    Chen, Siyuan; Epps, Julien

    2014-12-01

    Monitoring pupil and blink dynamics has applications in cognitive load measurement during human-machine interaction. However, accurate, efficient, and robust pupil size and blink estimation pose significant challenges to the efficacy of real-time applications due to the variability of eye images; hence, to date, such methods have required manual intervention for fine-tuning of parameters. In this paper, a novel self-tuning threshold method, which is applicable to any infrared-illuminated eye images without a tuning parameter, is proposed for segmenting the pupil from the background in images recorded by a low-cost webcam placed near the eye. A convex hull and a dual-ellipse fitting method are also proposed to select pupil boundary points and to detect the eyelid occlusion state. Experimental results on a realistic video dataset show that the measurement accuracy of the proposed methods is higher than that of widely used manually tuned or fixed-parameter methods. Importantly, the approach is convenient and robust, giving an accurate and fast estimate of eye activity in the presence of variations due to different users, task types, load, and environments. Cognitive load measurement in human-machine interaction can benefit from this computationally efficient implementation, which does not require a threshold calibration beforehand. Thus, one can envisage a mini IR camera embedded in a lightweight glasses frame, like Google Glass, for convenient real-time adaptive aiding and task management applications in the future.

  14. A Simulation Approach to Assessing Sampling Strategies for Insect Pests: An Example with the Balsam Gall Midge

    PubMed Central

    Carleton, R. Drew; Heard, Stephen B.; Silk, Peter J.

    2013-01-01

    Estimation of pest density is a basic requirement for integrated pest management in agriculture and forestry, and efficiency in density estimation is a common goal. Sequential sampling techniques promise efficient sampling, but their application can involve cumbersome mathematics and/or intensive warm-up sampling when pests have complex within- or between-site distributions. We provide tools for assessing the efficiency of sequential sampling and of alternative, simpler sampling plans, using computer simulation with “pre-sampling” data. We illustrate our approach using data for balsam gall midge (Paradiplosis tumifex) attack in Christmas tree farms. Paradiplosis tumifex proved recalcitrant to sequential sampling techniques. Midge distributions could not be fit by a common negative binomial distribution across sites. Local parameterization, using warm-up samples to estimate the clumping parameter k for each site, performed poorly: k estimates were unreliable even for samples of n∼100 trees. These methods were further confounded by significant within-site spatial autocorrelation. Much simpler sampling schemes, involving random or belt-transect sampling to preset sample sizes, were effective and efficient for P. tumifex. Sampling via belt transects (through the longest dimension of a stand) was the most efficient, with sample means converging on true mean density for sample sizes of n∼25–40 trees. Pre-sampling and simulation techniques provide a simple method for assessing sampling strategies for estimating insect infestation. We suspect that many pests will resemble P. tumifex in challenging the assumptions of sequential sampling methods. Our software will allow practitioners to optimize sampling strategies before they are brought to real-world applications, while potentially avoiding the need for the cumbersome calculations required for sequential sampling methods. PMID:24376556
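
    The pre-sampling/simulation idea can be reduced to a few lines: starting from a census of gall counts over a grid of trees, repeatedly draw either random samples or belt transects of a given size and record how far the sample mean falls from the true mean. The clumped field below is synthetic and the numbers are invented; in practice the census would come from pre-sampling data, and the comparison procedure, not the particular outcome, is the point of the sketch.

        import numpy as np

        rng = np.random.default_rng(3)

        # Synthetic "pre-sampling" census: gall counts on a 20 x 60 grid of trees,
        # with patchy infestation built from a few hotspots.
        rows, cols = 20, 60
        field = rng.poisson(0.5, (rows, cols)).astype(float)
        for _ in range(5):
            r, c = rng.integers(rows), rng.integers(cols)
            rr, cc = np.ogrid[:rows, :cols]
            field += 8 * np.exp(-((rr - r) ** 2 + (cc - c) ** 2) / 20.0)
        true_mean = field.mean()

        def random_sample_mean(n):
            idx = rng.choice(field.size, n, replace=False)
            return field.ravel()[idx].mean()

        def transect_sample_mean(n):
            # Belt transect along the longest dimension: n consecutive trees in one row.
            r = rng.integers(rows)
            start = rng.integers(cols - n + 1)
            return field[r, start:start + n].mean()

        for n in (10, 25, 40):
            reps = 2000
            rmse_rand = np.sqrt(np.mean([(random_sample_mean(n) - true_mean) ** 2 for _ in range(reps)]))
            rmse_tran = np.sqrt(np.mean([(transect_sample_mean(n) - true_mean) ** 2 for _ in range(reps)]))
            print(f"n={n:3d}  RMSE random={rmse_rand:.2f}  transect={rmse_tran:.2f}")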

  15. A space radiation transport method development

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.; Tripathi, R. K.; Qualls, G. D.; Cucinotta, F. A.; Prael, R. E.; Norbury, J. W.; Heinbockel, J. H.; Tweed, J.

    2004-01-01

    Improved spacecraft shield design requires early entry of radiation constraints into the design process to maximize performance and minimize costs. As a result, we have been investigating high-speed computational procedures to allow shield analysis from the preliminary design concepts to the final design. In particular, we will discuss the progress towards a full three-dimensional and computationally efficient deterministic code for which the current HZETRN evaluates the lowest-order asymptotic term. HZETRN is the first deterministic solution to the Boltzmann equation allowing field mapping within the International Space Station (ISS) in tens of minutes using standard finite element method (FEM) geometry common to engineering design practice enabling development of integrated multidisciplinary design optimization methods. A single ray trace in ISS FEM geometry requires 14 ms and severely limits application of Monte Carlo methods to such engineering models. A potential means of improving the Monte Carlo efficiency in coupling to spacecraft geometry is given in terms of re-configurable computing and could be utilized in the final design as verification of the deterministic method optimized design. Published by Elsevier Ltd on behalf of COSPAR.

  16. Model-based optimal design of experiments - semidefinite and nonlinear programming formulations

    PubMed Central

    Duarte, Belmiro P.M.; Wong, Weng Kee; Oliveira, Nuno M.C.

    2015-01-01

    We use mathematical programming tools, such as Semidefinite Programming (SDP) and Nonlinear Programming (NLP)-based formulations to find optimal designs for models used in chemistry and chemical engineering. In particular, we employ local design-based setups in linear models and a Bayesian setup in nonlinear models to find optimal designs. In the latter case, Gaussian Quadrature Formulas (GQFs) are used to evaluate the optimality criterion averaged over the prior distribution for the model parameters. Mathematical programming techniques are then applied to solve the optimization problems. Because such methods require the design space be discretized, we also evaluate the impact of the discretization scheme on the generated design. We demonstrate the techniques for finding D–, A– and E–optimal designs using design problems in biochemical engineering and show the method can also be directly applied to tackle additional issues, such as heteroscedasticity in the model. Our results show that the NLP formulation produces highly efficient D–optimal designs but is computationally less efficient than that required for the SDP formulation. The efficiencies of the generated designs from the two methods are generally very close and so we recommend the SDP formulation in practice. PMID:26949279

  17. Model-based optimal design of experiments - semidefinite and nonlinear programming formulations.

    PubMed

    Duarte, Belmiro P M; Wong, Weng Kee; Oliveira, Nuno M C

    2016-02-15

    We use mathematical programming tools, such as Semidefinite Programming (SDP) and Nonlinear Programming (NLP)-based formulations to find optimal designs for models used in chemistry and chemical engineering. In particular, we employ local design-based setups in linear models and a Bayesian setup in nonlinear models to find optimal designs. In the latter case, Gaussian Quadrature Formulas (GQFs) are used to evaluate the optimality criterion averaged over the prior distribution for the model parameters. Mathematical programming techniques are then applied to solve the optimization problems. Because such methods require the design space be discretized, we also evaluate the impact of the discretization scheme on the generated design. We demonstrate the techniques for finding D-, A- and E-optimal designs using design problems in biochemical engineering and show the method can also be directly applied to tackle additional issues, such as heteroscedasticity in the model. Our results show that the NLP formulation produces highly efficient D-optimal designs but is computationally less efficient than that required for the SDP formulation. The efficiencies of the generated designs from the two methods are generally very close and so we recommend the SDP formulation in practice.

  18. A Single-Lap Joint Adhesive Bonding Optimization Method Using Gradient and Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Smeltzer, Stanley S., III; Finckenor, Jeffrey L.

    1999-01-01

    A natural process for any engineer, scientist, educator, etc., is to seek the most efficient method for accomplishing a given task. In the case of structural design, an area that has a significant impact on structural efficiency is joint design. Unless the structure is machined from a solid block of material, the individual components which compose the overall structure must be joined together. The method for joining a structure varies depending on the applied loads, material, assembly and disassembly requirements, service life, environment, etc. Using both metallic and fiber-reinforced plastic materials limits the user to two methods, or a combination of these methods, for joining the components into one structure. The first is mechanical fastening and the second is adhesive bonding. Mechanical fastening is by far the most popular joining technique; however, in terms of structural efficiency, adhesive bonding provides a superior joint since the load is distributed uniformly across the joint. The purpose of this paper is to develop a method for optimizing single-lap joint adhesive bonded structures using both gradient and genetic algorithms and to compare the solution process for each method. The goal of the single-lap joint optimization is to find the most efficient structure that meets the imposed requirements while remaining as lightweight, economical, and reliable as possible. For the single-lap joint, an optimum joint is determined by minimizing the weight of the overall joint based on constraints from adhesive strengths as well as empirically derived rules. The analytical solution of the single-lap joint is determined using the classical Goland-Reissner technique for case 2 type adhesive joints. Joint weight minimization is achieved using a commercially available routine, Design Optimization Tool (DOT), for the gradient solution, while an author-developed method is used for the genetic algorithm solution. Results illustrate the critical design variables as a function of adhesive properties and the convergence of different joints under the two optimization methods.
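
    A toy genetic-algorithm loop illustrates only the optimization side of the genetic solution; the weight and stress expressions are crude placeholders (average shear stress over the bond area), not the Goland-Reissner analysis or the DOT gradient solver, and every number is invented.

        import numpy as np

        rng = np.random.default_rng(4)
        P, width = 5000.0, 25.0           # applied load (N) and joint width (mm), invented
        tau_allow, rho = 15.0, 1.2e-3     # allowable shear stress (MPa), weight coefficient

        def objective(overlap, thickness):
            # Placeholder model: weight ~ bond volume; constraint on average shear stress.
            tau = P / (width * overlap)                 # crude average shear stress (MPa)
            weight = rho * width * overlap * thickness
            penalty = 1e3 * max(0.0, tau - tau_allow)   # penalize constraint violation
            return weight + penalty

        # Genetic algorithm over (overlap length mm, bond-line thickness mm).
        pop = np.c_[rng.uniform(5, 60, 40), rng.uniform(0.1, 1.0, 40)]
        for _ in range(200):
            f = np.array([objective(*ind) for ind in pop])
            parents = pop[np.argsort(f)[:20]]                        # keep the best half
            moms = parents[rng.integers(20, size=20)]
            dads = parents[rng.integers(20, size=20)]
            kids = np.where(rng.random((20, 2)) < 0.5, moms, dads)   # uniform crossover
            kids += rng.normal(0.0, [1.0, 0.02], kids.shape)         # mutation
            kids[:, 0] = np.clip(kids[:, 0], 5, 60)
            kids[:, 1] = np.clip(kids[:, 1], 0.1, 1.0)
            pop = np.vstack([parents, kids])
        best = pop[np.argmin([objective(*ind) for ind in pop])]
        print("best overlap (mm), bond-line thickness (mm):", np.round(best, 3))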

  19. Predicting internal red oak (Quercus rubra) log defect features using surface defect measurements

    Treesearch

    R. Edward Thomas

    2013-01-01

    Determining the defects located within a log is crucial to understanding the tree/log resource for efficient processing. However, existing means of doing this non-destructively requires the use of expensive x-ray/CT (computerized tomography), MRI (magnetic resonance imaging), or microwave technology. These methods do not lend themselves to fast, efficient, and cost-...

  20. Predicting internal yellow-poplar log defect features using surface indicators

    Treesearch

    R. Edward Thomas

    2008-01-01

    Determining the defects that are located within the log is crucial to understanding the tree/log resource for efficient processing. However, existing means of doing this non-destructively requires the use of expensive X-ray/CT, MRI, or microwave technology. These methods do not lend themselves to fast, efficient, and cost-effective analysis of logs and tree stems in...

  1. 40 CFR 63.2262 - How do I conduct performance tests and establish operating requirements?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... method detection limit is less than or equal to 1 parts per million by volume, dry basis (ppmvd..., percent (determined for reconstituted wood product presses and board coolers as required in Table 4 to... = capture efficiency, percent (determined for reconstituted wood product presses and board coolers as...

  2. 40 CFR 63.2262 - How do I conduct performance tests and establish operating requirements?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... method detection limit is less than or equal to 1 parts per million by volume, dry basis (ppmvd..., percent (determined for reconstituted wood product presses and board coolers as required in Table 4 to... = capture efficiency, percent (determined for reconstituted wood product presses and board coolers as...

  3. 40 CFR 63.2262 - How do I conduct performance tests and establish operating requirements?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... method detection limit is less than or equal to 1 parts per million by volume, dry basis (ppmvd..., percent (determined for reconstituted wood product presses and board coolers as required in Table 4 to... = capture efficiency, percent (determined for reconstituted wood product presses and board coolers as...

  4. Quantum and electromagnetic propagation with the conjugate symmetric Lanczos method.

    PubMed

    Acevedo, Ramiro; Lombardini, Richard; Turner, Matthew A; Kinsey, James L; Johnson, Bruce R

    2008-02-14

    The conjugate symmetric Lanczos (CSL) method is introduced for the solution of the time-dependent Schrodinger equation. This remarkably simple and efficient time-domain algorithm is a low-order polynomial expansion of the quantum propagator for time-independent Hamiltonians and derives from the time-reversal symmetry of the Schrodinger equation. The CSL algorithm gives forward solutions by simply complex conjugating backward polynomial expansion coefficients. Interestingly, the expansion coefficients are the same for each uniform time step, a fact that is only spoiled by basis incompleteness and finite precision. This is true for the Krylov basis and, with further investigation, is also found to be true for the Lanczos basis, important for efficient orthogonal projection-based algorithms. The CSL method errors roughly track those of the short iterative Lanczos method while requiring fewer matrix-vector products than the Chebyshev method. With the CSL method, only a few vectors need to be stored at a time, there is no need to estimate the Hamiltonian spectral range, and only matrix-vector and vector-vector products are required. Applications using localized wavelet bases are made to harmonic oscillator and anharmonic Morse oscillator systems as well as electrodynamic pulse propagation using the Hamiltonian form of Maxwell's equations. For gold with a Drude dielectric function, the latter is non-Hermitian, requiring consideration of corrections to the CSL algorithm.
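
    For context, a short-iterative-Lanczos-style propagation step, a close relative of the CSL scheme discussed here rather than the CSL algorithm itself, can be written in a few lines; the toy Hamiltonian, Krylov dimension, and time step below are arbitrary illustrative choices.

        import numpy as np
        from scipy.linalg import expm

        def lanczos_propagate(H, psi, dt, m=12):
            """Advance psi by exp(-i H dt) using an m-dimensional Krylov subspace."""
            n = psi.size
            V = np.zeros((n, m), dtype=complex)
            alpha, beta = np.zeros(m), np.zeros(m - 1)
            nrm = np.linalg.norm(psi)
            V[:, 0] = psi / nrm
            w = H @ V[:, 0]
            alpha[0] = np.vdot(V[:, 0], w).real
            w = w - alpha[0] * V[:, 0]
            for j in range(1, m):
                beta[j - 1] = np.linalg.norm(w)
                V[:, j] = w / beta[j - 1]
                w = H @ V[:, j] - beta[j - 1] * V[:, j - 1]
                alpha[j] = np.vdot(V[:, j], w).real
                w = w - alpha[j] * V[:, j]
            T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
            return nrm * (V @ expm(-1j * T * dt)[:, 0])   # propagate in the small basis

        # Toy Hermitian Hamiltonian (tridiagonal) and Gaussian initial state.
        n = 200
        H = (np.diag(np.linspace(0.0, 1.0, n))
             + np.diag(0.1 * np.ones(n - 1), 1) + np.diag(0.1 * np.ones(n - 1), -1))
        psi0 = np.exp(-np.linspace(-5, 5, n) ** 2).astype(complex)
        psi1 = lanczos_propagate(H, psi0, dt=0.5)
        print("norm ratio (should be ~1):", np.linalg.norm(psi1) / np.linalg.norm(psi0))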

  5. Optimization methods and silicon solar cell numerical models

    NASA Technical Reports Server (NTRS)

    Girardini, K.

    1986-01-01

    The goal of this project is the development of an optimization algorithm for use with a solar cell model. It is possible to simultaneously vary design variables such as impurity concentrations, front junction depth, back junction depth, and cell thickness to maximize the predicted cell efficiency. An optimization algorithm has been developed and interfaced with the Solar Cell Analysis Program in 1 Dimension (SCAPID). SCAPID uses finite difference methods to solve the differential equations which, along with several relations from the physics of semiconductors, describe mathematically the operation of a solar cell. A major obstacle is that the numerical methods used in SCAPID require a significant amount of computer time, and during an optimization the model is called iteratively until the design variables converge to the values associated with the maximum efficiency. This problem has been alleviated by designing an optimization code specifically for use with numerically intensive simulations, to reduce the number of times the efficiency has to be calculated to achieve convergence to the optimal solution. Adapting SCAPID so that it can be called iteratively by the optimization code provided another means of reducing the CPU time required to complete an optimization. Instead of calculating the entire I-V curve, as is usually done in SCAPID, only the efficiency is calculated (maximum power voltage and current), and the solution from previous calculations is used to initiate the next solution.

  6. A novel way to establish fertilization recommendations based on agronomic efficiency and a sustainable yield index for rice crops.

    PubMed

    Liu, Chuang; Liu, Yi; Li, Zhiguo; Zhang, Guoshi; Chen, Fang

    2017-04-24

    A simpler approach for establishing fertilizer recommendations for major crops is urgently required to improve the application efficiency of commercial fertilizers in China. To address this need, we developed a method based on field data drawn from the China Program of the International Plant Nutrition Institute (IPNI) rice experiments and investigations carried out in southeastern China during 2001 to 2012. Our results show that, using agronomic efficiencies and a sustainable yield index (SYI), this new method for establishing fertilizer recommendations robustly estimated the mean rice yield (7.6 t/ha) and mean nutrient supply capacities (186, 60, and 96 kg/ha of N, P2O5, and K2O, respectively) of fertilizers in the study region. In addition, there were significant differences in rice yield response, economic cost/benefit ratio, and nutrient-use efficiencies associated with agronomic efficiencies ranked as high, medium and low. Thus, ranking agronomic efficiency could strengthen linear models relating rice yields and SYI. Our results also indicate that the new method provides better recommendations in terms of rice yield, SYI, and profitability than previous methods. Hence, we believe it is an effective approach for improving recommended applications of commercial fertilizers to rice (and potentially other crops).

  7. Sampling bee communities using pan traps: alternative methods increase sample size

    USDA-ARS?s Scientific Manuscript database

    Monitoring of the status of bee populations and inventories of bee faunas require systematic sampling. Efficiency and ease of implementation has encouraged the use of pan traps to sample bees. Efforts to find an optimal standardized sampling method for pan traps have focused on pan trap color. Th...

  8. Constructing I[subscript h] Symmetrical Fullerenes from Pentagons

    ERIC Educational Resources Information Center

    Gan, Li-Hua

    2008-01-01

    Twelve pentagons are sufficient and necessary to form a fullerene cage. According to this structural feature of fullerenes, we propose a simple and efficient method for the construction of I[subscript h] symmetrical fullerenes from pentagons. This method does not require complicated mathematical knowledge; yet it provides an excellent paradigm for…

  9. Method for fabricating pixelated silicon device cells

    DOEpatents

    Nielson, Gregory N.; Okandan, Murat; Cruz-Campa, Jose Luis; Nelson, Jeffrey S.; Anderson, Benjamin John

    2015-08-18

    A method, apparatus and system for flexible, ultra-thin, and high efficiency pixelated silicon or other semiconductor photovoltaic solar cell array fabrication is disclosed. A structure and method of creation for a pixelated silicon or other semiconductor photovoltaic solar cell array with interconnects is described using a manufacturing method that is simplified compared to previous versions of pixelated silicon photovoltaic cells that require more microfabrication steps.

  10. Method and apparatus for energy efficient self-aeration in chemical, biochemical, and wastewater treatment processes

    DOEpatents

    Gao, Johnway [Richland, WA; Skeen, Rodney S [Pendleton, OR

    2002-05-28

    The present invention is a pulse spilling self-aerator (PSSA) that has the potential to greatly lower the installation, operation, and maintenance cost associated with aerating and mixing aqueous solutions. Currently, large quantities of low-pressure air are required in aeration systems to support many biochemical production processes and wastewater treatment plants. Oxygen is traditionally supplied and mixed by a compressor or blower and a mechanical agitator. These systems have high-energy requirements and high installation and maintenance costs. The PSSA provides a mixing and aeration capability that can increase operational efficiency and reduce overall cost.

  11. High efficiency direct thermal to electric energy conversion from radioisotope decay using selective emitters and spectrally tuned solar cells

    NASA Technical Reports Server (NTRS)

    Chubb, Donald L.; Flood, Dennis J.; Lowe, Roland A.

    1993-01-01

    Thermophotovoltaic (TPV) systems are attractive possibilities for direct thermal-to-electric energy conversion, but have typically required the use of black body radiators operating at high temperatures. Recent advances in both the understanding and performance of solid rare-earth oxide selective emitters make possible the use of TPV at temperatures as low as 1200 K. Both selective emitter and filter system TPV systems are feasible. However, requirements on the filter system are severe in order to attain high efficiency. A thin film of a rare-earth oxide is one method for producing an efficient, rugged selective emitter. An efficiency of 0.14 and a power density of 9.2 W/kg at 1200 K are calculated for a hypothetical thin-film neodymia (Nd2O3) selective emitter TPV system that uses radioisotope decay as the thermal energy source.

  12. Multiplex SNaPshot-a new simple and efficient CYP2D6 and ADRB1 genotyping method.

    PubMed

    Ben, Songtao; Cooper-DeHoff, Rhonda M; Flaten, Hanna K; Evero, Oghenero; Ferrara, Tracey M; Spritz, Richard A; Monte, Andrew A

    2016-04-23

    Reliable, inexpensive, high-throughput genotyping methods are required for clinical trials. Traditional assays require numerous enzyme digestions or are too expensive for large sample volumes. Our objective was to develop an inexpensive, efficient, and reliable assay for CYP2D6 and ADRB1 accounting for numerous polymorphisms including gene duplications. We utilized the multiplex SNaPshot® custom genotype method to genotype CYP2D6 and ADRB1. We compared the method to reference standards genotyped using the Taqman Copy Number Variant Assay followed by pyrosequencing quantification and determined assigned genotype concordance. We genotyped 119 subjects. Seven (5.9 %) were found to be CYP2D6 poor metabolizers (PMs), 18 (15.1 %) intermediate metabolizers (IMs), 89 (74.8 %) extensive metabolizers (EMs), and 5 (4.2 %) ultra-rapid metabolizers (UMs). We genotyped two variants in the β1-adrenoreceptor, rs1801253 (Gly389Arg) and rs1801252 (Ser49Gly). The Gly389Arg genotype is Gly/Gly 18 (15.1 %), Gly/Arg 58 (48.7 %), and Arg/Arg 43 (36.1 %). The Ser49Gly genotype is Ser/Ser 82 (68.9 %), Ser/Gly 32 (26.9), and Gly/Gly 5 (4.2 %). The multiplex SNaPshot method was concordant with genotypes in reference samples. The multiplex SNaPshot method allows for specific and accurate detection of CYP2D6 genotypes and ADRB1 genotypes and haplotypes. This platform is simple and efficient and suited for high throughput.

  13. Easi-CRISPR for creating knock-in and conditional knockout mouse models using long ssDNA donors.

    PubMed

    Miura, Hiromi; Quadros, Rolen M; Gurumurthy, Channabasavaiah B; Ohtsuka, Masato

    2018-01-01

    CRISPR/Cas9-based genome editing can easily generate knockout mouse models by disrupting the gene sequence, but its efficiency for creating models that require either insertion of exogenous DNA (knock-in) or replacement of genomic segments is very poor. The majority of mouse models used in research involve knock-in (reporters or recombinases) or gene replacement (e.g., conditional knockout alleles containing exons flanked by LoxP sites). A few methods for creating such models have been reported that use double-stranded DNA as donors, but their efficiency is typically 1-10% and therefore not suitable for routine use. We recently demonstrated that long single-stranded DNAs (ssDNAs) serve as very efficient donors, both for insertion and for gene replacement. We call this method efficient additions with ssDNA inserts-CRISPR (Easi-CRISPR) because it is a highly efficient technology (efficiency is typically 30-60% and reaches as high as 100% in some cases). The protocol takes ∼2 months to generate the founder mice.

  14. A methodology for designing aircraft to low sonic boom constraints

    NASA Technical Reports Server (NTRS)

    Mack, Robert J.; Needleman, Kathy E.

    1991-01-01

    A method for designing conceptual supersonic cruise aircraft to meet low sonic boom requirements is outlined and described. The aircraft design is guided through a systematic evolution from initial three view drawing to a final numerical model description, while the designer using the method controls the integration of low sonic boom, high supersonic aerodynamic efficiency, adequate low speed handling, and reasonable structure and materials technologies. Some experience in preliminary aircraft design and in the use of various analytical and numerical codes is required for integrating the volume and lift requirements throughout the design process.

  15. An Assessment of Artificial Compressibility and Pressure Projection Methods for Incompressible Flow Simulations

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan; Kiris, C.; Smith, Charles A. (Technical Monitor)

    1998-01-01

    The performance of two commonly used numerical procedures, one based on the artificial compressibility method and the other on the pressure projection method, is compared. These formulations are selected primarily because they are designed for three-dimensional applications. The computational procedures are compared by obtaining steady-state solutions of a wake vortex and unsteady solutions of a curved duct flow. For steady computations, artificial compressibility was very efficient in terms of computing time and robustness. For an unsteady flow, which requires a small physical time step, the pressure projection method was found to be computationally more efficient than the artificial compressibility method. This comparison is intended to give some basis for selecting a method or a flow solution code for large three-dimensional applications where computing resources become a critical issue.

  16. Efficient Homodifunctional Bimolecular Ring-Closure Method for Cyclic Polymers by Combining RAFT and Self-Accelerating Click Reaction.

    PubMed

    Qu, Lin; Sun, Peng; Wu, Ying; Zhang, Ke; Liu, Zhengping

    2017-08-01

    An efficient metal-free homodifunctional bimolecular ring-closure method is developed for the formation of cyclic polymers by combining reversible addition-fragmentation chain transfer (RAFT) polymerization and a self-accelerating click reaction. In this approach, α,ω-homodifunctional linear polymers with azide terminals are prepared by RAFT polymerization and post-modification of the polymer chain end groups. With sym-dibenzo-1,5-cyclooctadiene-3,7-diyne (DBA) as a small-molecule linker, well-defined cyclic polymers are then prepared using the self-accelerating double strain-promoted azide-alkyne click (DSPAAC) reaction to ring-close the azide end-functionalized homodifunctional linear polymer precursors. Owing to the self-accelerating nature of the DSPAAC ring-closing reaction, this novel method eliminates the requirement of equimolar amounts of telechelic polymers and small linkers in traditional bimolecular ring-closure methods. This allows the method to efficiently and conveniently produce a variety of pure cyclic polymers by employing a molar excess of the DBA linker. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Highly efficient and autocatalytic H2O dissociation for CO2 reduction into formic acid with zinc.

    PubMed

    Jin, Fangming; Zeng, Xu; Liu, Jianke; Jin, Yujia; Wang, Lunying; Zhong, Heng; Yao, Guodong; Huo, Zhibao

    2014-03-28

    Artificial photosynthesis, specifically H2O dissociation for CO2 reduction with solar energy, is regarded as one of the most promising methods for sustainable energy and utilisation of environmental resources. However, a highly efficient conversion still remains extremely challenging. The hydrogenation of CO2 is regarded as the most commercially feasible method, but this method requires either exotic catalysts or high-purity hydrogen and hydrogen storage, which are regarded as an energy-intensive process. Here we report a highly efficient method of H2O dissociation for reducing CO2 into chemicals with Zn powder that produces formic acid with a high yield of approximately 80%, and this reaction is revealed for the first time as an autocatalytic process in which an active intermediate, ZnH(-) complex, serves as the active hydrogen. The proposed process can assist in developing a new concept for improving artificial photosynthetic efficiency by coupling geochemistry, specifically the metal-based reduction of H2O and CO2, with solar-driven thermochemistry for reducing metal oxide into metal.

  18. Highly efficient and autocatalytic H2O dissociation for CO2 reduction into formic acid with zinc

    PubMed Central

    Jin, Fangming; Zeng, Xu; Liu, Jianke; Jin, Yujia; Wang, Lunying; Zhong, Heng; Yao, Guodong; Huo, Zhibao

    2014-01-01

    Artificial photosynthesis, specifically H2O dissociation for CO2 reduction with solar energy, is regarded as one of the most promising methods for sustainable energy and utilisation of environmental resources. However, a highly efficient conversion still remains extremely challenging. The hydrogenation of CO2 is regarded as the most commercially feasible method, but this method requires either exotic catalysts or high-purity hydrogen and hydrogen storage, which are regarded as an energy-intensive process. Here we report a highly efficient method of H2O dissociation for reducing CO2 into chemicals with Zn powder that produces formic acid with a high yield of approximately 80%, and this reaction is revealed for the first time as an autocatalytic process in which an active intermediate, ZnH− complex, serves as the active hydrogen. The proposed process can assist in developing a new concept for improving artificial photosynthetic efficiency by coupling geochemistry, specifically the metal-based reduction of H2O and CO2, with solar-driven thermochemistry for reducing metal oxide into metal. PMID:24675820

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, K. A.; Schoefer, V.; Tomizawa, M.

    The new accelerator complex at J-PARC will operate with both high energy and very high intensity proton beams. With a design slow extraction efficiency of greater than 99%, this facility will still be depositing significant beam power onto accelerator components [2]. To achieve even higher efficiencies requires some new ideas. The design of the extraction system and the accelerator lattice structure leaves little room for improvement using conventional techniques. In this report we present one method for improving the slow extraction efficiency at J-PARC by adding duodecapoles or octupoles to the slow extraction system. We review the theory of resonant extraction, describe simulation methods, and present the results of detailed simulations. From our investigations we find that we can improve extraction efficiency and thereby reduce the level of residual activation in the accelerator components and surrounding shielding.

  20. Optimization of the efficiency of search operations in the relational database of radio electronic systems

    NASA Astrophysics Data System (ADS)

    Wajszczyk, Bronisław; Biernacki, Konrad

    2018-04-01

    The increase of interoperability of radio electronic systems used in the Armed Forces requires the processing of very large amounts of data. Requirements for the integration of information from many systems and sensors, including radar recognition, electronic and optical recognition, force to look for more efficient methods to support information retrieval in even-larger database resources. This paper presents the results of research on methods of improving the efficiency of databases using various types of indexes. The data structure indexing technique is a solution used in RDBMS systems (relational database management system). However, the analysis of the performance of indices, the description of potential applications, and in particular the presentation of a specific scale of performance growth for individual indices are limited to few studies in this field. This paper contains analysis of methods affecting the work efficiency of a relational database management system. As a result of the research, a significant increase in the efficiency of operations on data was achieved through the strategy of indexing data structures. The presentation of the research topic discussed in this paper mainly consists of testing the operation of various indexes against the background of different queries and data structures. The conclusions from the conducted experiments allow to assess the effectiveness of the solutions proposed and applied in the research. The results of the research indicate the existence of a real increase in the performance of operations on data using indexation of data structures. In addition, the level of this growth is presented, broken down by index types.

  1. Table-top job analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1994-12-01

    The purpose of this Handbook is to establish general training program guidelines for training personnel in developing training for operation, maintenance, and technical support personnel at Department of Energy (DOE) nuclear facilities. TTJA is not the only method of job analysis; however, when conducted properly, TTJA can be cost effective, efficient, and self-validating, and represents an effective method of defining job requirements. The table-top job analysis is suggested in the DOE Training Accreditation Program manuals as an acceptable alternative to traditional methods of analyzing job requirements. DOE 5480-20A strongly endorses and recommends it as the preferred method for analyzing jobs for positions addressed by the Order.

  2. [Development method of healthcare information system integration based on business collaboration model].

    PubMed

    Li, Shasha; Nie, Hongchao; Lu, Xudong; Duan, Huilong

    2015-02-01

    Integration of heterogeneous systems is the key to hospital information construction due to the complexity of the healthcare environment. Currently, during the process of healthcare information system integration, people participating in an integration project usually communicate through free-format documents, which impairs the efficiency and adaptability of integration. A method utilizing business process model and notation (BPMN) to model integration requirements and automatically transform them into an executable integration configuration was proposed in this paper. Based on the method, a tool was developed to model integration requirements and transform them into integration configurations. In addition, an integration case in a radiology scenario was used to verify the method.

  3. Photoswitchable method for the ordered attachment of proteins to surfaces

    DOEpatents

    Camarero, Julio A [Livermore, CA; DeYoreo, James J [Clayton, CA; Kwon, Youngeun [Livermore, CA

    2011-07-05

    Described herein is a method for the attachment of proteins to any solid support with control over the orientation of the attachment. The method is extremely efficient, not requiring the previous purification of the protein to be attached, and can be activated by UV-light. Spatially addressable arrays of multiple protein components can be generated by using standard photolithographic techniques.

  4. Photoswitchable method for the ordered attachment of proteins to surfaces

    DOEpatents

    Camarero, Julio A.; De Yoreo, James J.; Kwon, Youngeun

    2010-04-20

    Described herein is a method for the attachment of proteins to any solid support with control over the orientation of the attachment. The method is extremely efficient, not requiring the previous purification of the protein to be attached, and can be activated by UV-light. Spatially addressable arrays of multiple protein components can be generated by using standard photolithographic techniques.

  5. The calculation of viscosity of liquid n-decane and n-hexadecane by the Green-Kubo method

    NASA Astrophysics Data System (ADS)

    Cui, S. T.; Cummings, P. T.; Cochran, H. D.

    This short commentary presents the result of long molecular dynamics simulation calculations of the shear viscosity of liquid n-decane and n-hexadecane using the Green-Kubo integration method. The relaxation time of the stress-stress correlation function is compared with those of rotation and diffusion. The rotational and diffusional relaxation times, which are easy to calculate, provide useful guides for the required simulation time in viscosity calculations. Also, the computational time required for viscosity calculations of these systems by the Green-Kubo method is compared with the time required for previous non-equilibrium molecular dynamics calculations of the same systems. The method of choice for a particular calculation is determined largely by the properties of interest, since the efficiencies of the two methods are comparable for calculation of the zero strain rate viscosity.
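
    As a concrete illustration of the bookkeeping behind the Green-Kubo route described above, the following minimal Python sketch estimates a shear viscosity from the running integral of a stress autocorrelation function, eta = V/(kB*T) * integral of <sigma_xy(0) sigma_xy(t)> dt. The stress series, temperature, volume and time step are all assumed placeholder values (an AR(1) surrogate stands in for real molecular dynamics output), so the printed number only demonstrates the procedure.

        import numpy as np

        # Assumed placeholder state point; not values from the paper.
        kB = 1.380649e-23      # J/K
        T = 300.0              # K
        V = 1.0e-26            # m^3, simulation box volume
        dt = 1.0e-15           # s between stored stress samples

        rng = np.random.default_rng(0)
        n = 20000
        sigma_xy = np.zeros(n)
        for i in range(1, n):  # AR(1) surrogate for the shear stress, in Pa
            sigma_xy[i] = 0.99 * sigma_xy[i - 1] + rng.normal(scale=1e5)

        def autocorrelation(a, max_lag):
            a = a - a.mean()
            return np.array([np.mean(a[: len(a) - k] * a[k:]) for k in range(max_lag)])

        acf = autocorrelation(sigma_xy, max_lag=2000)
        running_integral = np.cumsum(acf) * dt             # rectangle rule
        eta = (V / (kB * T)) * running_integral[-1]        # Green-Kubo shear viscosity
        print(f"viscosity estimate: {eta:.3e} Pa*s")

    In practice the integral is truncated once the autocorrelation has decayed, which is exactly why the stress relaxation time discussed above sets the required simulation length.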

  6. Discriminative graph embedding for label propagation.

    PubMed

    Nguyen, Canh Hao; Mamitsuka, Hiroshi

    2011-09-01

    In many applications, the available information is encoded in graph structures. This is a common problem in biological networks, social networks, web communities and document citations. We investigate the problem of classifying nodes' labels on a similarity graph given only a graph structure on the nodes. Conventional machine learning methods usually require data to reside in some Euclidean space or to have a kernel representation. Applying these methods to nodes on graphs would require embedding the graphs into these spaces. By embedding and then learning the nodes on graphs, most methods are either not flexible enough to handle different learning objectives or not efficient enough for large-scale applications. We propose a method to embed a graph into a feature space for a discriminative purpose. Our idea is to include label information in the embedding process, making the space representation tailored to the task. We design embedding objective functions such that the subsequent learning formulations become spectral transforms. We then reformulate these spectral transforms into multiple kernel learning problems. Our method, while being tailored to discriminative tasks, is efficient and can scale to massive data sets. We show the need for discriminative embedding in simulations. Applied to biological network problems, our method is shown to outperform baselines.

  7. Efficient statistically accurate algorithms for the Fokker-Planck equation in large dimensions

    NASA Astrophysics Data System (ADS)

    Chen, Nan; Majda, Andrew J.

    2018-02-01

    Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat tailed highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. Particularly, the parametric method provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace and is therefore computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Different from traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF. Therefore a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. It is shown in a stringent set of test problems that the method only requires an order of O (100) ensembles to successfully recover the highly non-Gaussian transient PDFs in up to 6 dimensions with only small errors.

  8. Reuse of imputed data in microarray analysis increases imputation efficiency

    PubMed Central

    Kim, Ki-Yeol; Kim, Byoung-Jin; Yi, Gwan-Su

    2004-01-01

    Background The imputation of missing values is necessary for the efficient use of DNA microarray data, because many clustering algorithms and some statistical analyses require a complete data set. A few imputation methods for DNA microarray data have been introduced, but their efficiency was low and the validity of the imputed values had not been fully checked. Results We developed a new cluster-based imputation method called the sequential K-nearest neighbor (SKNN) method. It imputes the missing values sequentially, starting from the gene with the fewest missing values, and uses the imputed values for later imputations. Although it reuses imputed values, this new method is greatly improved in both accuracy and computational complexity over the conventional KNN-based method and other methods based on maximum likelihood estimation. The performance of SKNN was in particular higher than that of other imputation methods for data with high missing rates and large numbers of experiments. Application of Expectation Maximization (EM) to the SKNN method improved the accuracy, but increased computational time in proportion to the number of iterations. The Multiple Imputation (MI) method, which is well known but had not previously been applied to microarray data, showed accuracy similarly high to the SKNN method, with slightly higher dependency on the types of data sets. Conclusions Sequential reuse of imputed data in KNN-based imputation greatly increases the efficiency of imputation. The SKNN method should be practically useful for salvaging the data of microarray experiments which have large numbers of missing entries. The SKNN method generates reliable imputed values which can be used for further cluster-based analysis of microarray data. PMID:15504240
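
    The sequential reuse of imputed rows is easy to express in a few lines. The sketch below is a simplified reading of the SKNN idea, not the authors' implementation: rows (genes) are imputed in order of increasing missingness, and each newly completed row joins the neighbour pool for later rows. The toy matrix and the value of k are assumptions for illustration only.

        import numpy as np

        def sknn_impute(data, k=2):
            data = data.copy()
            n_missing = np.isnan(data).sum(axis=1)
            pool = list(np.where(n_missing == 0)[0])     # complete rows usable as neighbours
            for row in np.argsort(n_missing):
                if n_missing[row] == 0:
                    continue
                miss = np.isnan(data[row])
                obs = ~miss
                cand = np.array(pool)
                # distance to candidate neighbours over the observed columns only
                d = np.sqrt(((data[cand][:, obs] - data[row, obs]) ** 2).mean(axis=1))
                nearest = cand[np.argsort(d)[:k]]
                data[row, miss] = data[nearest][:, miss].mean(axis=0)
                pool.append(row)                          # reuse the freshly imputed row
            return data

        X = np.array([[1.0, 2.0, 3.0, 4.0],
                      [1.1, 2.1, 2.9, 4.2],
                      [0.9, np.nan, 3.1, 3.8],
                      [np.nan, np.nan, 3.0, 4.1]])
        print(sknn_impute(X))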

  9. 40 CFR Appendix B to Part 50 - Reference Method for the Determination of Suspended Particulate Matter in the Atmosphere (High...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... filters used are specified to have a minimum collection efficiency of 99 percent for 0.3 µm (DOP... electronic timers have much better set-point resolution than mechanical timers, but require a battery backup... Collection efficiency: 99 percent minimum as measured by the DOP test (ASTM-2986) for particles of 0.3 µm...

  10. 40 CFR Appendix B to Part 50 - Reference Method for the Determination of Suspended Particulate Matter in the Atmosphere (High...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... filters used are specified to have a minimum collection efficiency of 99 percent for 0.3 µm (DOP... electronic timers have much better set-point resolution than mechanical timers, but require a battery backup... Collection efficiency: 99 percent minimum as measured by the DOP test (ASTM-2986) for particles of 0.3 µm...

  11. Fast Computation and Assessment Methods in Power System Analysis

    NASA Astrophysics Data System (ADS)

    Nagata, Masaki

    Power system analysis is essential for efficient and reliable power system operation and control. Recently, online security assessment systems have become important, as more efficient use of power networks is increasingly required. In this article, fast power system analysis techniques such as contingency screening, parallel processing and intelligent systems application are briefly surveyed from the viewpoint of their application to online dynamic security assessment.

  12. Nonnegative least-squares image deblurring: improved gradient projection approaches

    NASA Astrophysics Data System (ADS)

    Benvenuto, F.; Zanella, R.; Zanni, L.; Bertero, M.

    2010-02-01

    The least-squares approach to image deblurring leads to an ill-posed problem. The addition of the nonnegativity constraint, when appropriate, does not provide regularization, even if, as far as we know, a thorough investigation of the ill-posedness of the resulting constrained least-squares problem has still to be done. Iterative methods, converging to nonnegative least-squares solutions, have been proposed. Some of them have the 'semi-convergence' property, i.e. early stopping of the iteration provides 'regularized' solutions. In this paper we consider two of these methods: the projected Landweber (PL) method and the iterative image space reconstruction algorithm (ISRA). Even if they work well in many instances, they are not frequently used in practice because, in general, they require a large number of iterations before providing a sensible solution. Therefore, the main purpose of this paper is to refresh these methods by increasing their efficiency. Starting from the remark that PL and ISRA require only the computation of the gradient of the functional, we propose the application to these algorithms of special acceleration techniques that have been recently developed in the area of the gradient methods. In particular, we propose the application of efficient step-length selection rules and line-search strategies. Moreover, remarking that ISRA is a scaled gradient algorithm, we evaluate its behaviour in comparison with a recent scaled gradient projection (SGP) method for image deblurring. Numerical experiments demonstrate that the accelerated methods still exhibit the semi-convergence property, with a considerable gain both in the number of iterations and in the computational time; in particular, SGP appears definitely the most efficient one.
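
    The projected Landweber iteration referred to above is a gradient step on the least-squares functional followed by projection onto the nonnegative orthant, i.e. x_{k+1} = max(0, x_k - tau * A^T(A x_k - b)). The sketch below shows that basic, unaccelerated iteration with a fixed step length; the step-length selection rules and line searches proposed in the paper are not reproduced, and the problem data are synthetic assumptions.

        import numpy as np

        rng = np.random.default_rng(4)
        m, n = 100, 50
        A = rng.normal(size=(m, n))
        x_true = np.maximum(rng.normal(size=n), 0.0)       # nonnegative ground truth
        b = A @ x_true + 0.01 * rng.normal(size=m)

        tau = 1.0 / np.linalg.norm(A, 2) ** 2              # fixed step length < 2/||A||^2
        x = np.zeros(n)
        for _ in range(2000):
            grad = A.T @ (A @ x - b)                       # gradient of 0.5*||Ax - b||^2
            x = np.maximum(x - tau * grad, 0.0)            # projection onto x >= 0
        print(np.linalg.norm(A @ x - b), np.linalg.norm(x - x_true))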

  13. A time-efficient algorithm for implementing the Catmull-Clark subdivision method

    NASA Astrophysics Data System (ADS)

    Ioannou, G.; Savva, A.; Stylianou, V.

    2015-10-01

    Splines are the most popular methods in figure modeling and CAGD (Computer Aided Geometric Design) for generating smooth surfaces from a set of control points. The control points define the shape of a figure, and splines calculate the required number of surface points which, when displayed on a computer screen, produce a smooth surface. However, spline methods are based on a rectangular topological structure of points, i.e., a two-dimensional table of vertices, and thus cannot generate complex figures, such as human and animal bodies, whose structure cannot be defined by a regular rectangular grid. Surface subdivision methods, which are derived from splines, instead generate surfaces defined by an arbitrary topology of control points. This is why, during the last fifteen years, subdivision methods have taken the lead over regular spline methods in all areas of modeling in both industry and research. The cost of software developed to read control points and calculate the surface lies in its run-time, because the surface structure required for handling arbitrary topological grids is very complicated. Many software programs implementing subdivision surfaces have been developed; however, few algorithms are documented in the literature to support developers in writing efficient code. This paper aims to assist programmers by presenting a time-efficient algorithm for implementing subdivision splines. The Catmull-Clark scheme, the most popular of the subdivision methods, is employed to illustrate the algorithm.

  14. Improvement of seawater salt quality by hydro-extraction and re-crystallization methods

    NASA Astrophysics Data System (ADS)

    Sumada, K.; Dewati, R.; Suprihatin

    2018-01-01

    Indonesia is one of the salt-producing countries that use sea water as a source of raw material, and the quality of the salt produced is influenced by the quality of the sea water. The salt produced contains on average 85-90% NaCl. The Indonesian National Standard (SNI) requires a sodium chloride content of 94.7% (dry basis) for human consumption salt and 98.5% for industrial salt. In this study, a re-crystallization method without chemicals and a hydro-extraction method were developed. The objective of this research was to choose the best method based on efficiency. The results showed that the re-crystallization method can produce salt with an NaCl content of 99.21%, while the hydro-extraction method produced 99.34% NaCl. The salt produced through both methods can be used as consumption and industrial salt. The hydro-extraction method is more efficient than the re-crystallization method because re-crystallization requires heat energy.

  15. A new technique for thermodynamic engine modeling

    NASA Astrophysics Data System (ADS)

    Matthews, R. D.; Peters, J. E.; Beckel, S. A.; Shizhi, M.

    1983-12-01

    Reference is made to the equations given by Matthews (1983) for piston engine performance, which show that this performance depends on four fundamental engine efficiencies (combustion, thermodynamic cycle or indicated thermal, volumetric, and mechanical) as well as on engine operation and design parameters. This set of equations is seen to suggest a different technique for engine modeling; that is, that each efficiency should be modeled individually and the efficiency submodels then combined to obtain an overall engine model. A simple method for predicting the combustion efficiency of piston engines is therefore required. Various methods are proposed here and compared with experimental results. These combustion efficiency models are then combined with various models for the volumetric, mechanical, and indicated thermal efficiencies to yield three different engine models of varying degrees of sophistication. Comparisons are then made of the predictions of the resulting engine models with experimental data. It is found that combustion efficiency is almost independent of load, speed, and compression ratio and is not strongly dependent on fuel type, at least so long as the hydrogen-to-carbon ratio is reasonably close to that for isooctane.
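
    The factorised structure described above can be made concrete with a short worked example: brake power is the product of the combustion, thermal and mechanical efficiencies with the fuel energy flow, where the air (and hence fuel) flow follows from the volumetric efficiency. All numbers below are assumed, generic values for a four-stroke gasoline engine, not data from Matthews (1983).

        # Assumed engine and operating-point values (illustrative only).
        rho_air = 1.2            # kg/m^3 intake air density
        V_d = 2.0e-3             # m^3 displaced volume (2.0 L)
        N = 3000.0 / 60.0        # rev/s (3000 rpm)
        AFR = 14.7               # stoichiometric air-fuel ratio
        LHV = 44.0e6             # J/kg lower heating value

        eta_vol, eta_comb, eta_thermal, eta_mech = 0.85, 0.97, 0.38, 0.85   # assumed efficiencies

        m_dot_air = eta_vol * rho_air * V_d * N / 2.0      # four-stroke: one intake per two revs
        m_dot_fuel = m_dot_air / AFR
        brake_power = eta_comb * eta_thermal * eta_mech * m_dot_fuel * LHV
        print(f"fuel flow = {m_dot_fuel * 1000:.2f} g/s, brake power = {brake_power / 1000:.1f} kW")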

  16. Does the use of automated fetal biometry improve clinical work flow efficiency?

    PubMed

    Espinoza, Jimmy; Good, Sara; Russell, Evie; Lee, Wesley

    2013-05-01

    This study was designed to compare the work flow efficiency of manual measurements of 5 fetal parameters with a novel technique that automatically measures these parameters from 2-dimensional sonograms. This prospective study included 200 singleton pregnancies between 15 and 40 weeks' gestation. Patients were randomly allocated to either manual (n = 100) or automatic (n = 100) fetal biometry. The automatic measurement was performed using a commercially available software application. A digital video recorder captured all on-screen activity associated with the sonographic examination. The examination time and number of steps required to obtain fetal measurements were compared between manual and automatic methods. The mean time required to obtain the biometric measurements was significantly shorter using the automated technique than the manual approach (P < .001 for all comparisons). Similarly, the mean number of steps required to perform these measurements was significantly fewer with automatic measurements compared to the manual technique (P < .001). In summary, automated biometry reduced the examination time required for standard fetal measurements. This approach may improve work flow efficiency in busy obstetric sonography practices.

  17. A Modified Method for Isolation of Rhein from Senna

    PubMed Central

    Mehta, Namita; Laddha, K. S.

    2009-01-01

    A simple and efficient method for the isolation of rhein from Cassia angustifolia (senna) leaves is described in which the hydrolysis of the sennosides and extraction of the hydrolysis products (free anthraquinones) is carried out in one step. Further isolation of rhein is achieved from the anthraquinone mixture. This method reduces the number of steps required for isolation of rhein as compared to conventional methods. PMID:20336207

  18. Finite difference time domain (FDTD) method for modeling the effect of switched gradients on the human body in MRI.

    PubMed

    Zhao, Huawei; Crozier, Stuart; Liu, Feng

    2002-12-01

    Numerical modeling of the eddy currents induced in the human body by the pulsed field gradients in MRI presents a difficult computational problem. It requires an efficient and accurate computational method for high spatial resolution analyses with a relatively low input frequency. In this article, a new technique is described which allows the finite difference time domain (FDTD) method to be efficiently applied over a very large frequency range, including low frequencies. This is not the case in conventional FDTD-based methods. A method of implementing streamline gradients in FDTD is presented, as well as comparative analyses which show that the correct source injection in the FDTD simulation plays a crucial role in obtaining accurate solutions. In particular, making use of the derivative of the input source waveform is shown to provide distinct benefits in accuracy over direct source injection. In the method, no alterations to the properties of either the source or the transmission media are required. The method is essentially frequency independent and the source injection method has been verified against examples with analytical solutions. Results are presented showing the spatial distribution of gradient-induced electric fields and eddy currents in a complete body model.

  19. Efficient Bayesian mixed model analysis increases association power in large cohorts

    PubMed Central

    Loh, Po-Ru; Tucker, George; Bulik-Sullivan, Brendan K; Vilhjálmsson, Bjarni J; Finucane, Hilary K; Salem, Rany M; Chasman, Daniel I; Ridker, Paul M; Neale, Benjamin M; Berger, Bonnie; Patterson, Nick; Price, Alkes L

    2014-01-01

    Linear mixed models are a powerful statistical tool for identifying genetic associations and avoiding confounding. However, existing methods are computationally intractable in large cohorts, and may not optimize power. All existing methods require time cost O(MN²) (where N = #samples and M = #SNPs) and implicitly assume an infinitesimal genetic architecture in which effect sizes are normally distributed, which can limit power. Here, we present a far more efficient mixed model association method, BOLT-LMM, which requires only a small number of O(MN)-time iterations and increases power by modeling more realistic, non-infinitesimal genetic architectures via a Bayesian mixture prior on marker effect sizes. We applied BOLT-LMM to nine quantitative traits in 23,294 samples from the Women’s Genome Health Study (WGHS) and observed significant increases in power, consistent with simulations. Theory and simulations show that the boost in power increases with cohort size, making BOLT-LMM appealing for GWAS in large cohorts. PMID:25642633
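
    The O(MN) cost per iteration quoted above comes from never forming the N x N covariance matrix: a matrix-vector product with V = (sigma_g^2 / M) X X^T + sigma_e^2 I only needs two genotype-matrix products. The sketch below illustrates that idea with an iterative conjugate-gradient solve; it is a generic mixed-model illustration with made-up sizes and variance components, not the BOLT-LMM algorithm itself.

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, cg

        rng = np.random.default_rng(0)
        N, M = 500, 2000                        # samples, SNPs (assumed sizes)
        X = rng.integers(0, 3, size=(N, M)).astype(float)
        X -= X.mean(axis=0)                     # centred genotypes
        sigma_g2, sigma_e2 = 0.5, 0.5           # assumed variance components
        y = rng.normal(size=N)

        def matvec(v):
            # V @ v in O(MN) without ever forming the N x N matrix V
            return (sigma_g2 / M) * (X @ (X.T @ v)) + sigma_e2 * v

        V = LinearOperator((N, N), matvec=matvec)
        x, info = cg(V, y)                      # conjugate-gradient solve of V x = y
        print(info, np.linalg.norm(matvec(x) - y))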

  20. A methodology for quadrilateral finite element mesh coarsening

    DOE PAGES

    Staten, Matthew L.; Benzley, Steven; Scott, Michael

    2008-03-27

    High fidelity finite element modeling of continuum mechanics problems often requires using all quadrilateral or all hexahedral meshes. The efficiency of such models is often dependent upon the ability to adapt a mesh to the physics of the phenomena. Adapting a mesh requires the ability to both refine and/or coarsen the mesh. The algorithms available to refine and coarsen triangular and tetrahedral meshes are very robust and efficient. However, the ability to locally and conformally refine or coarsen all quadrilateral and all hexahedral meshes presents many difficulties. Some research has been done on localized conformal refinement of quadrilateral and hexahedral meshes. However, little work has been done on localized conformal coarsening of quadrilateral and hexahedral meshes. A general method which provides both localized conformal coarsening and refinement for quadrilateral meshes is presented in this paper. This method is based on restructuring the mesh with simplex manipulations to the dual of the mesh. Finally, this method appears to be extensible to hexahedral meshes in three dimensions.

  1. A fast Fourier transform on multipoles (FFTM) algorithm for solving Helmholtz equation in acoustics analysis.

    PubMed

    Ong, Eng Teo; Lee, Heow Pueh; Lim, Kian Meng

    2004-09-01

    This article presents a fast algorithm for the efficient solution of the Helmholtz equation. The method is based on the translation theory of the multipole expansions. Here, the speedup comes from the convolution nature of the translation operators, which can be evaluated rapidly using fast Fourier transform algorithms. Also, the computations of the translation operators are accelerated by using the recursive formulas developed recently by Gumerov and Duraiswami [SIAM J. Sci. Comput. 25, 1344-1381 (2003)]. It is demonstrated that the algorithm can produce good accuracy with a relatively low order of expansion. Efficiency analyses of the algorithm reveal that it has computational complexities of O(N^a), where a ranges from 1.05 to 1.24. However, this method requires substantially more memory to store the translation operators as compared to the fast multipole method. Hence, despite its simplicity in implementation, this memory requirement issue may limit the application of this algorithm to solving very large-scale problems.
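
    The speedup claimed above rests on the fact that the translation step is a convolution, and convolutions become pointwise products after an FFT. The short sketch below demonstrates that equivalence on arbitrary 1-D data (synthetic, purely illustrative), comparing an O(n^2) direct circular convolution with its O(n log n) FFT counterpart.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 256
        a = rng.normal(size=n)
        b = rng.normal(size=n)

        # Direct O(n^2) circular convolution for reference.
        direct = np.array([sum(a[k] * b[(i - k) % n] for k in range(n)) for i in range(n)])

        # FFT-based O(n log n) circular convolution: multiply spectra, then invert.
        fast = np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

        print(np.allclose(direct, fast))        # True: same result, far fewer operations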

  2. Efficient preparation of shuffled DNA libraries through recombination (Gateway) cloning.

    PubMed

    Lehtonen, Soili I; Taskinen, Barbara; Ojala, Elina; Kukkurainen, Sampo; Rahikainen, Rolle; Riihimäki, Tiina A; Laitinen, Olli H; Kulomaa, Markku S; Hytönen, Vesa P

    2015-01-01

    Efficient and robust subcloning is essential for the construction of high-diversity DNA libraries in the field of directed evolution. We have developed a more efficient method for the subcloning of DNA-shuffled libraries by employing recombination cloning (Gateway). The Gateway cloning procedure was performed directly after the gene reassembly reaction, without additional purification and amplification steps, thus simplifying the conventional DNA shuffling protocols. Recombination-based cloning, directly from the heterologous reassembly reaction, conserved the high quality of the library and reduced the time required for the library construction. The described method is generally compatible for the construction of DNA-shuffled gene libraries.

  3. Research on key technology of the verification system of steel rule based on vision measurement

    NASA Astrophysics Data System (ADS)

    Jia, Siyuan; Wang, Zhong; Liu, Changjie; Fu, Luhua; Li, Yiming; Lu, Ruijun

    2018-01-01

    The steel rule plays an important role in quantity transmission. However, the traditional verification method for steel rules, based on manual operation and reading, yields low precision and low efficiency. A machine vision based verification system for steel rules is designed with reference to JJG 1-1999, Verification Regulation of Steel Rule [1]. What differentiates this system is that it uses a new calibration method for the pixel equivalent and decontaminates the surface of the steel rule. Experiments show that these two methods fully meet the requirements of the verification system. Measurement results strongly indicate that these methods not only meet the precision required by the verification regulation, but also improve the reliability and efficiency of the verification system.

  4. Numerical method of lines for the relaxational dynamics of nematic liquid crystals.

    PubMed

    Bhattacharjee, A K; Menon, Gautam I; Adhikari, R

    2008-08-01

    We propose an efficient numerical scheme, based on the method of lines, for solving the Landau-de Gennes equations describing the relaxational dynamics of nematic liquid crystals. Our method is computationally easy to implement, balancing requirements of efficiency and accuracy. We benchmark our method through the study of the following problems: the isotropic-nematic interface, growth of nematic droplets in the isotropic phase, and the kinetics of coarsening following a quench into the nematic phase. Our results, obtained through solutions of the full coarse-grained equations of motion with no approximations, provide a stringent test of the de Gennes ansatz for the isotropic-nematic interface, illustrate the anisotropic character of droplets in the nucleation regime, and validate dynamical scaling in the coarsening regime.
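
    The method of lines named above simply discretises space and hands the resulting coupled ODEs to a standard time integrator. The sketch below applies that recipe to a much simpler one-dimensional relaxational model (an Allen-Cahn-type equation with an assumed double-well potential), not to the full Landau-de Gennes tensor equations, to show the structure of such a solver.

        import numpy as np
        from scipy.integrate import solve_ivp

        L, nx = 10.0, 101
        x = np.linspace(0.0, L, nx)
        dx = x[1] - x[0]
        phi0 = np.tanh(x - L / 2)                  # initial interface profile

        def rhs(t, phi):
            # dphi/dt = d2phi/dx2 - (phi^3 - phi), end points held (approximately) fixed
            lap = np.zeros_like(phi)
            lap[1:-1] = (phi[2:] - 2 * phi[1:-1] + phi[:-2]) / dx**2
            return lap - (phi**3 - phi)

        # Method of lines: space discretised above, time integration left to a stiff ODE solver.
        sol = solve_ivp(rhs, (0.0, 5.0), phi0, method="BDF", rtol=1e-6)
        print(phi0[:3], sol.y[:3, -1])             # relaxed profile stays tanh-like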

  5. Low rank approximation method for efficient Green's function calculation of dissipative quantum transport

    NASA Astrophysics Data System (ADS)

    Zeng, Lang; He, Yu; Povolotskyi, Michael; Liu, XiaoYan; Klimeck, Gerhard; Kubis, Tillmann

    2013-06-01

    In this work, the low rank approximation concept is extended to the non-equilibrium Green's function (NEGF) method to achieve a very efficient approximate algorithm for coherent and incoherent electron transport. This new method is applied to inelastic transport in various semiconductor nanodevices. Detailed benchmarks with exact NEGF solutions show (1) a very good agreement between approximated and exact NEGF results, (2) a significant reduction of the required memory, and (3) a large reduction of the computational time (a speed-up factor as high as 150 is observed). A non-recursive solution of the inelastic NEGF transport equations of a 1000 nm long resistor on standard hardware illustrates nicely the capability of this new method.

  6. Some problems of the calculation of three-dimensional boundary layer flows on general configurations

    NASA Technical Reports Server (NTRS)

    Cebeci, T.; Kaups, K.; Mosinskis, G. J.; Rehn, J. A.

    1973-01-01

    An accurate solution of the three-dimensional boundary layer equations over general configurations such as those encountered in aircraft and space shuttle design requires a very efficient, fast, and accurate numerical method with suitable turbulence models for the Reynolds stresses. The efficiency, speed, and accuracy of a three-dimensional numerical method together with the turbulence models for the Reynolds stresses are examined. The numerical method is the implicit two-point finite difference approach (Box Method) developed by Keller and applied to the boundary layer equations by Keller and Cebeci. In addition, some of the problems that may arise in the solution of these equations for three-dimensional boundary layer flows over general configurations are studied.

  7. Global Search Capabilities of Indirect Methods for Impulsive Transfers

    NASA Astrophysics Data System (ADS)

    Shen, Hong-Xin; Casalino, Lorenzo; Luo, Ya-Zhong

    2015-09-01

    An optimization method which combines an indirect method with homotopic approach is proposed and applied to impulsive trajectories. Minimum-fuel, multiple-impulse solutions, with either fixed or open time are obtained. The homotopic approach at hand is relatively straightforward to implement and does not require an initial guess of adjoints, unlike previous adjoints estimation methods. A multiple-revolution Lambert solver is used to find multiple starting solutions for the homotopic procedure; this approach can guarantee to obtain multiple local solutions without relying on the user's intuition, thus efficiently exploring the solution space to find the global optimum. The indirect/homotopic approach proves to be quite effective and efficient in finding optimal solutions, and outperforms the joint use of evolutionary algorithms and deterministic methods in the test cases.

  8. Storage and computationally efficient permutations of factorized covariance and square-root information matrices

    NASA Technical Reports Server (NTRS)

    Muellerschoen, R. J.

    1988-01-01

    A unified method to permute vector-stored upper-triangular diagonal factorized covariance (UD) and vector stored upper-triangular square-root information filter (SRIF) arrays is presented. The method involves cyclical permutation of the rows and columns of the arrays and retriangularization with appropriate square-root-free fast Givens rotations or elementary slow Givens reflections. A minimal amount of computation is performed and only one scratch vector of size N is required, where N is the column dimension of the arrays. To make the method efficient for large SRIF arrays on a virtual memory machine, three additional scratch vectors each of size N are used to avoid expensive paging faults. The method discussed is compared with the methods and routines of Bierman's Estimation Subroutine Library (ESL).

  9. Methods to estimate irrigated reference crop evapotranspiration - a review.

    PubMed

    Kumar, R; Jat, M K; Shankar, V

    2012-01-01

    Efficient water management of crops requires accurate irrigation scheduling which, in turn, requires the accurate measurement of crop water requirement. Irrigation is applied to replenish depleted moisture for optimum plant growth. Reference evapotranspiration plays an important role in the determination of water requirements for crops and irrigation scheduling. Various models/approaches, ranging from empirical to physically based distributed ones, are available for the estimation of reference evapotranspiration. Mathematical models are useful tools to estimate the evapotranspiration and water requirement of crops, which is essential information required to design or choose best water management practices. In this paper the most commonly used models/approaches, which are suitable for the estimation of daily water requirement for agricultural crops grown in different agro-climatic regions, are reviewed. Further, an effort has been made to compare the accuracy of various widely used methods under different climatic conditions.
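
    As one concrete example of the empirical end of the spectrum reviewed above, the Hargreaves-Samani equation estimates reference evapotranspiration from temperature and extraterrestrial radiation alone. The sketch below implements that standard formula; the input values are assumed for illustration, not data from the review.

        import math

        def hargreaves_et0(tmin_c, tmax_c, ra_mj_m2_day):
            """Hargreaves-Samani reference evapotranspiration (mm/day).

            ET0 = 0.0023 * Ra * (Tmean + 17.8) * sqrt(Tmax - Tmin),
            with Ra converted from MJ m-2 day-1 to its mm/day evaporation
            equivalent by dividing by 2.45 MJ/kg.
            """
            tmean = (tmin_c + tmax_c) / 2.0
            ra_mm_day = ra_mj_m2_day / 2.45
            return 0.0023 * ra_mm_day * (tmean + 17.8) * math.sqrt(tmax_c - tmin_c)

        # Assumed mid-latitude summer day.
        print(round(hargreaves_et0(tmin_c=15.0, tmax_c=30.0, ra_mj_m2_day=40.0), 2), "mm/day")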

  10. Systematically Retrieving Research: A Case Study Evaluating Seven Databases

    ERIC Educational Resources Information Center

    Taylor, Brian; Wylie, Emma; Dempster, Martin; Donnelly, Michael

    2007-01-01

    Objective: Developing the scientific underpinnings of social welfare requires effective and efficient methods of retrieving relevant items from the increasing volume of research. Method: We compared seven databases by running the nearest equivalent search on each. The search topic was chosen for relevance to social work practice with older people.…

  11. Global challenges/chemistry solutions: Promoting personal safety and national security

    USDA-ARS?s Scientific Manuscript database

    Joe Alper: Can you provide a little background about why there is a need for this type of assay? Mark Carter: Ricin is considered a biosecurity threat agent. A more efficient detection method was required. Joe Alper: How are these type of assays done today, or are current methods unsuitable for ...

  12. Assessment of Air Emissions from Oil and Natural Gas Well Pads Using Mobile Remote and Onsite Direct Measurements

    EPA Science Inventory

    An enhanced ability to efficiently detect large maintenance related emissions is required to ensure sustainable oil and gas development. To help achieve this goal, a new remote inspection method, Other Test Method (OTM) 33A, was developed and utilized to quantify short-term metha...

  13. Solar-powered unmanned aerial vehicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reinhardt, K.C.; Lamp, T.R.; Geis, J.W.

    1996-12-31

    An analysis was performed to determine the impact of various power system components and mission requirements on the size of solar-powered high altitude long endurance (HALE)-type aircraft. The HALE unmanned aerial vehicle (UAV) has good potential for use in many military and civil applications. The primary power system components considered in this study were photovoltaic (PV) modules for power generation and regenerative fuel cells for energy storage. The impact of relevant component performance on UAV size and capability was considered, including PV module efficiency and mass, power electronics efficiency, and fuel cell specific energy. Mission parameters such as time of year, flight altitude, flight latitude, and payload mass and power were also varied to determine impact on UAV size. The aircraft analysis method used determines the required aircraft wing aspect ratio, wing area, and total mass based on maximum endurance or minimum required power calculations. The results indicate that the capacity of the energy storage system employed, fuel cells in this analysis, greatly impacts aircraft size, whereas the impact of PV module efficiency and mass is much less important. It was concluded that an energy storage specific energy (total system) of 250-500 Whr/kg is required to enable most useful missions, and that PV cells with efficiencies greater than approximately 12% are suitable for use.

  14. Live minimal path for interactive segmentation of medical images

    NASA Astrophysics Data System (ADS)

    Chartrand, Gabriel; Tang, An; Chav, Ramnada; Cresson, Thierry; Chantrel, Steeve; De Guise, Jacques A.

    2015-03-01

    Medical image segmentation is nowadays required for medical device development and in a growing number of clinical and research applications. Since dedicated automatic segmentation methods are not always available, generic and efficient interactive tools can alleviate the burden of manual segmentation. In this paper we propose an interactive segmentation tool based on image warping and minimal path segmentation that is efficient for a wide variety of segmentation tasks. While the user roughly delineates the desired organ's boundary, a narrow band along the cursor's path is straightened, providing an ideal subspace for feature-aligned filtering and the minimal path algorithm. Once the segmentation is performed on the narrow band, the path is warped back onto the original image, precisely delineating the desired structure. This tool was found to have a highly intuitive dynamic behavior. It is especially efficient against misleading edges and requires only coarse interaction from the user to achieve good precision. The proposed segmentation method was tested for 10 difficult liver segmentations on CT and MRI images, and the resulting 2D overlap Dice coefficient was 99% on average.
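
    The 2D overlap Dice coefficient quoted above is a simple set-overlap measure between the interactive segmentation and a reference mask; a minimal implementation is given below. The toy masks are assumptions purely to show the computation.

        import numpy as np

        def dice_coefficient(mask_a, mask_b):
            """Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
            a = mask_a.astype(bool)
            b = mask_b.astype(bool)
            total = a.sum() + b.sum()
            return 2.0 * np.logical_and(a, b).sum() / total if total else 1.0

        ref = np.zeros((5, 5), dtype=int)
        ref[1:4, 1:4] = 1
        seg = ref.copy()
        seg[3, 3] = 0                                  # one pixel disagrees
        print(round(dice_coefficient(ref, seg), 3))    # 0.941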

  15. Optimisation of intradermal DNA electrotransfer for immunisation.

    PubMed

    Vandermeulen, Gaëlle; Staes, Edith; Vanderhaeghen, Marie Lise; Bureau, Michel Francis; Scherman, Daniel; Préat, Véronique

    2007-12-04

    The development of DNA vaccines requires appropriate delivery technologies. Electrotransfer is one of the most efficient methods of non-viral gene transfer. In the present study, intradermal DNA electrotransfer was first optimised. Strong effects of the injection method and the dose of DNA on luciferase expression were demonstrated. Pre-treatments were evaluated to enhance DNA diffusion in the skin, but neither hyaluronidase injection nor iontophoresis improved the efficiency of intradermal DNA electrotransfer. Then, DNA immunisation with a weakly immunogenic model antigen, luciferase, was investigated. After intradermal injection of the plasmid encoding luciferase, electrotransfer (HV 700 V/cm 100 µs, LV 200 V/cm 400 ms) was required to induce an immune response. The response was Th1-shifted compared to immunisation with the luciferase recombinant protein. Finally, DNA electrotransfer in the skin, the muscle or the ear pinna was compared. Muscle DNA electrotransfer resulted in the highest luciferase expression and the best IgG response. Nevertheless, electrotransfer into the skin, the muscle and the ear pinna all resulted in IFN-gamma secretion by luciferase-stimulated splenocytes, suggesting that an efficient Th1 response was induced in all cases.

  16. Optimizing edible fungal growth and biodegradation of inedible crop residues using various cropping methods.

    PubMed

    Nyochembeng, Leopold M; Beyl, Caula A; Pacumbaba, R P

    2008-09-01

    Long-term manned space flights to Mars require the development of an advanced life support (ALS) ecosystem including efficient food crop production, processing and recycling waste products thereof. Using edible white rot fungi (EWRF) to achieve effective biomass transformation in ALS requires optimal and rapid biodegradative activity on lignocellulosic wastes. We investigated the mycelial growth of Lentinula edodes and Pleurotus ostreatus on processed residues of various crops under various cropping patterns. In single cropping, mycelial growth and fruiting in all strains were significantly repressed on sweet potato and basil. However, growth of the strains was improved when sweet potato and basil residues were paired with rice or wheat straw. Oyster mushroom (Pleurotus) strains were better than shiitake (L. edodes) strains under single, paired, and mixed cropping patterns. Mixed cropping further eliminated the inherent inhibitory effect of sweet potato, basil, or lettuce on fungal growth. Co-cropping fungal species had a synergistic effect on rate of fungal growth, substrate colonization, and fruiting. Use of efficient cropping methods may enhance fungal growth, fruiting, biodegradation of crop residues, and efficiency of biomass recycling.

  17. An efficient method for removing point sources from full-sky radio interferometric maps

    NASA Astrophysics Data System (ADS)

    Berger, Philippe; Oppermann, Niels; Pen, Ue-Li; Shaw, J. Richard

    2017-12-01

    A new generation of wide-field radio interferometers designed for 21-cm surveys is being built as drift scan instruments allowing them to observe large fractions of the sky. With large numbers of antennas and frequency channels, the enormous instantaneous data rates of these telescopes require novel, efficient, data management and analysis techniques. The m-mode formalism exploits the periodicity of such data with the sidereal day, combined with the assumption of statistical isotropy of the sky, to achieve large computational savings and render optimal analysis methods computationally tractable. We present an extension to that work that allows us to adopt a more realistic sky model and treat objects such as bright point sources. We develop a linear procedure for deconvolving maps, using a Wiener filter reconstruction technique, which simultaneously allows filtering of these unwanted components. We construct an algorithm, based on the Sherman-Morrison-Woodbury formula, to efficiently invert the data covariance matrix, as required for any optimal signal-to-noise ratio weighting. The performance of our algorithm is demonstrated using simulations of a cylindrical transit telescope.
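
    The Sherman-Morrison-Woodbury step mentioned above lets one apply the inverse of a "large matrix plus low-rank update" using only a small dense inverse. The sketch below verifies the identity (A + U C V)^-1 = A^-1 - A^-1 U (C^-1 + V A^-1 U)^-1 V A^-1 for an assumed diagonal A and random low-rank term; it is a generic illustration, not the telescope pipeline itself.

        import numpy as np

        rng = np.random.default_rng(2)
        n, k = 2000, 5                         # large diagonal part, rank-k update (assumed sizes)
        d = rng.uniform(1.0, 2.0, size=n)      # A = diag(d)
        U = rng.normal(size=(n, k))
        C = np.eye(k)
        V = U.T

        Ainv_U = U / d[:, None]                                   # A^-1 U, O(nk)
        small = np.linalg.inv(np.linalg.inv(C) + V @ Ainv_U)      # only a k x k inverse

        def apply_inverse(b):
            # apply (A + U C V)^-1 to b without forming the n x n inverse
            Ainv_b = b / d
            return Ainv_b - Ainv_U @ (small @ (V @ Ainv_b))

        b = rng.normal(size=n)
        x = apply_inverse(b)
        print(np.allclose((np.diag(d) + U @ C @ V) @ x, b))       # True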

  18. Gaussian process based intelligent sampling for measuring nano-structure surfaces

    NASA Astrophysics Data System (ADS)

    Sun, L. J.; Ren, M. J.; Yin, Y. H.

    2016-09-01

    Nanotechnology is the science and engineering of manipulating matter at the nanoscale, which can be used to create many new materials and devices with a vast range of applications. As nanotech products increasingly enter the commercial marketplace, nanometrology becomes a stringent and enabling technology for the manipulation and quality control of nanotechnology. However, many measuring instruments, for instance scanning probe microscopy, are limited to relatively small areas of hundreds of micrometers and have very low efficiency. Therefore, intelligent sampling strategies are required to improve the scanning efficiency when measuring large areas. This paper presents a Gaussian process based intelligent sampling method to address this problem. The method makes use of Gaussian process based Bayesian regression as a mathematical foundation to represent the surface geometry, and the posterior estimate of the Gaussian process is computed by combining the prior probability distribution with the maximum likelihood function. Each sampling point is then adaptively selected by determining the candidate position most likely to lie outside the required tolerance zone, and it is inserted to update the model iteratively. Simulations on both nominal and manufactured nano-structure surfaces have been conducted to verify the validity of the proposed method. The results imply that the proposed method significantly improves the measurement efficiency in measuring large-area structured surfaces.
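
    The selection rule described above (sample where the surface is most likely to fall outside the tolerance zone) can be sketched with a hand-rolled Gaussian-process regression. Everything below is an assumed one-dimensional toy: the "true" surface, kernel length scale, tolerance and candidate grid are illustrative choices, not values from the paper.

        import numpy as np
        from math import erf, sqrt

        def rbf(a, b, length=0.4, var=1.0):
            return var * np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

        def prob_outside(mean, std, tol):
            # P(|f| > tol) under a Gaussian posterior N(mean, std^2)
            hi = (tol - mean) / (std * sqrt(2.0))
            lo = (-tol - mean) / (std * sqrt(2.0))
            return 1.0 - 0.5 * (erf(hi) - erf(lo))

        surface = lambda x: 0.2 * np.sin(6 * x) + 0.05 * x    # assumed "true" surface
        X = np.array([0.0, 0.5, 1.0])                         # initial sample positions
        y = surface(X)
        candidates = np.linspace(0.0, 1.0, 101)
        tol, noise = 0.15, 1e-6

        for _ in range(5):
            K = rbf(X, X) + noise * np.eye(len(X))
            Ks = rbf(candidates, X)
            Kinv = np.linalg.inv(K)
            mean = Ks @ (Kinv @ y)
            var = np.clip(1.0 - np.einsum("ij,jk,ik->i", Ks, Kinv, Ks), 1e-12, None)
            p_out = np.array([prob_outside(m, s, tol) for m, s in zip(mean, np.sqrt(var))])
            x_next = candidates[np.argmax(p_out)]             # most likely out of tolerance
            X = np.append(X, x_next)
            y = np.append(y, surface(x_next))

        print(np.sort(np.round(X, 2)))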

  19. Selection of bi-level image compression method for reduction of communication energy in wireless visual sensor networks

    NASA Astrophysics Data System (ADS)

    Khursheed, Khursheed; Imran, Muhammad; Ahmad, Naeem; O'Nils, Mattias

    2012-06-01

    Wireless Visual Sensor Network (WVSN) is an emerging field which combines image sensor, on board computation unit, communication component and energy source. Compared to the traditional wireless sensor network, which operates on one dimensional data, such as temperature, pressure values etc., WVSN operates on two dimensional data (images) which requires higher processing power and communication bandwidth. Normally, WVSNs are deployed in areas where installation of wired solutions is not feasible. The energy budget in these networks is limited to the batteries, because of the wireless nature of the application. Due to the limited availability of energy, the processing at Visual Sensor Nodes (VSN) and communication from VSN to server should consume as low energy as possible. Transmission of raw images wirelessly consumes a lot of energy and requires higher communication bandwidth. Data compression methods reduce data efficiently and hence will be effective in reducing communication cost in WVSN. In this paper, we have compared the compression efficiency and complexity of six well known bi-level image compression methods. The focus is to determine the compression algorithms which can efficiently compress bi-level images and their computational complexity is suitable for computational platform used in WVSNs. These results can be used as a road map for selection of compression methods for different sets of constraints in WVSN.

  20. Discontinuous Spectral Difference Method for Conservation Laws on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Liu, Yen; Vinokur, Marcel

    2004-01-01

    A new, high-order, conservative, and efficient discontinuous spectral finite difference (SD) method for conservation laws on unstructured grids is developed. The concept of discontinuous and high-order local representations to achieve conservation and high accuracy is utilized in a manner similar to the Discontinuous Galerkin (DG) and the Spectral Volume (SV) methods, but while these methods are based on the integrated forms of the equations, the new method is based on the differential form to attain a simpler formulation and higher efficiency. Conventional unstructured finite-difference and finite-volume methods require data reconstruction based on the least-squares formulation using neighboring point or cell data. Since each unknown employs a different stencil, one must repeat the least-squares inversion for every point or cell at each time step, or to store the inversion coefficients. In a high-order, three-dimensional computation, the former would involve impractically large CPU time, while for the latter the memory requirement becomes prohibitive. In addition, the finite-difference method does not satisfy the integral conservation in general. By contrast, the DG and SV methods employ a local, universal reconstruction of a given order of accuracy in each cell in terms of internally defined conservative unknowns. Since the solution is discontinuous across cell boundaries, a Riemann solver is necessary to evaluate boundary flux terms and maintain conservation. In the DG method, a Galerkin finite-element method is employed to update the nodal unknowns within each cell. This requires the inversion of a mass matrix, and the use of quadratures of twice the order of accuracy of the reconstruction to evaluate the surface integrals and additional volume integrals for nonlinear flux functions. In the SV method, the integral conservation law is used to update volume averages over subcells defined by a geometrically similar partition of each grid cell. As the order of accuracy increases, the partitioning for 3D requires the introduction of a large number of parameters, whose optimization to achieve convergence becomes increasingly more difficult. Also, the number of interior facets required to subdivide non-planar faces, and the additional increase in the number of quadrature points for each facet, increases the computational cost greatly.

  1. An efficient method for variable region assembly in the construction of scFv phage display libraries using independent strand amplification

    PubMed Central

    Sotelo, Pablo H.; Collazo, Noberto; Zuñiga, Roberto; Gutiérrez-González, Matías; Catalán, Diego; Ribeiro, Carolina Hager; Aguillón, Juan Carlos; Molina, María Carmen

    2012-01-01

    Phage display library technology is a common method to produce human antibodies. In this technique, the immunoglobulin variable regions are displayed in a bacteriophage in a way that each filamentous virus displays the product of a single antibody gene on its surface. From the collection of different phages, it is possible to isolate the virus that recognizes specific targets. The most common form in which to display antibody variable regions in the phage is the single chain variable fragment format (scFv), which requires assembly of the heavy and light immunoglobulin variable regions in a single gene. In this work, we describe a simple and efficient method for the assembly of immunoglobulin heavy and light chain variable regions in a scFv format. This procedure involves a two-step reaction: (1) DNA amplification to produce the single strand form of the heavy or light chain gene required for the fusion; and (2) mixture of both single strand products followed by an assembly reaction to construct a complete scFv gene. Using this method, we produced 6-fold more scFv encoding DNA than the commonly used splicing by overlap extension PCR (SOE-PCR) approach. The scFv gene produced by this method also proved to be efficient in generating a diverse scFv phage display library. From this scFv library, we obtained phages that bound several non-related antigens, including recombinant proteins and rotavirus particles. PMID:22692130

  2. Evaluation of MODFLOW-LGR in connection with a synthetic regional-scale model

    USGS Publications Warehouse

    Vilhelmsen, T.N.; Christensen, S.; Mehl, S.W.

    2012-01-01

    This work studies costs and benefits of utilizing local-grid refinement (LGR) as implemented in MODFLOW-LGR to simulate groundwater flow in a buried tunnel valley interacting with a regional aquifer. Two alternative LGR methods were used: the shared-node (SN) method and the ghost-node (GN) method. To conserve flows the SN method requires correction of sources and sinks in cells at the refined/coarse-grid interface. We found that the optimal correction method is case dependent and difficult to identify in practice. However, the results showed little difference and suggest that identifying the optimal method was of minor importance in our case. The GN method does not require corrections at the models' interface, and it uses a simpler head interpolation scheme than the SN method. The simpler scheme is faster but less accurate so that more iterations may be necessary. However, the GN method solved our flow problem more efficiently than the SN method. The MODFLOW-LGR results were compared with the results obtained using a globally coarse (GC) grid. The LGR simulations required one to two orders of magnitude longer run times than the GC model. However, the improvements of the numerical resolution around the buried valley substantially increased the accuracy of simulated heads and flows compared with the GC simulation. Accuracy further increased locally around the valley flanks when improving the geological resolution using the refined grid. Finally, comparing MODFLOW-LGR simulation with a globally refined (GR) grid showed that the refinement proportion of the model should not exceed 10% to 15% in order to secure method efficiency.

  3. Technical efficiency and resources allocation in university hospitals in Tehran, 2009-2012

    PubMed Central

    Rezapour, Aziz; Ebadifard Azar, Farbod; Yousef Zadeh, Negar; Roumiani, YarAllah; Bagheri Faradonbeh, Saeed

    2015-01-01

    Background: Assessment of a hospital's performance in achieving its goals is a basic necessity. Measuring the efficiency of hospitals in order to boost resource productivity in healthcare organizations is extremely important. The aim of this study was to measure technical efficiency and determine the status of resource allocation in some university hospitals in Tehran, Iran. Methods: This study was conducted in 2012; the research population consisted of all hospitals affiliated with the Iran and Tehran universities of medical sciences. Required data, such as human and capital resources information and production variables (hospital outputs), were collected from the data centers of the studied hospitals. Data were analyzed using the data envelopment analysis (DEA) method, with DEAP 2.1 software, and the stochastic frontier analysis (SFA) method, with Frontier 4.1 software. Results: According to the DEA method, the average technical, managerial (pure) and scale efficiencies of the studied hospitals during the study period were 0.87, 0.971, and 0.907, respectively. None of the efficiency measures followed a fixed trend over the study period; they were constantly changing. In the stochastic frontier production function analysis, the technical efficiency of the studied hospitals during the study period was estimated to be 0.389. Conclusion: This study identified the hospitals with the highest and lowest efficiency. Reference hospitals (more efficient peers) were indicated for the inefficient centers. According to the findings, the hospitals that do not operate efficiently have the capacity to improve technical efficiency by removing excess inputs without changes in the level of outputs. Moreover, through optimal allocation of resources, most of the studied hospitals can achieve substantial economies of scale. PMID:26793657
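
    For readers unfamiliar with the DEA side of the analysis, the sketch below solves the standard input-oriented CCR (constant returns to scale) envelopment linear programme for each decision-making unit; it is a generic textbook formulation with made-up hospital data, not the model or data of the study.

        import numpy as np
        from scipy.optimize import linprog

        # Assumed illustrative data: columns are hospitals (DMUs).
        X = np.array([[20.0, 30.0, 40.0, 25.0],       # input 1, e.g. staff
                      [5.0,  8.0,  9.0,  6.0]])       # input 2, e.g. beds
        Y = np.array([[100.0, 150.0, 160.0, 120.0]])  # output, e.g. treated patients

        def ccr_efficiency(o):
            m, n = X.shape
            s = Y.shape[0]
            c = np.concatenate(([1.0], np.zeros(n)))               # minimise theta
            A_ub = np.vstack([np.hstack([-X[:, [o]], X]),          # X @ lam <= theta * x_o
                              np.hstack([np.zeros((s, 1)), -Y])])  # Y @ lam >= y_o
            b_ub = np.concatenate([np.zeros(m), -Y[:, o]])
            bounds = [(None, None)] + [(0.0, None)] * n
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
            return res.x[0]

        for o in range(X.shape[1]):
            print(f"DMU {o}: technical efficiency = {ccr_efficiency(o):.3f}")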

  4. Very Large Scale Optimization

    NASA Technical Reports Server (NTRS)

    Vanderplaats, Garrett; Townsend, James C. (Technical Monitor)

    2002-01-01

    The purpose of this research under the NASA Small Business Innovative Research program was to develop algorithms and associated software to solve very large nonlinear, constrained optimization tasks. Key issues included efficiency, reliability, memory, and gradient calculation requirements. This report describes the general optimization problem, ten candidate methods, and detailed evaluations of four candidates. The algorithm chosen for final development is a modern recreation of a 1960s external penalty function method that uses very limited computer memory and computational time. Although of lower efficiency, the new method can solve problems orders of magnitude larger than current methods. The resulting BIGDOT software has been demonstrated on problems with 50,000 variables and about 50,000 active constraints. For unconstrained optimization, it has solved a problem in excess of 135,000 variables. The method includes a technique for solving discrete variable problems that finds a "good" design, although a theoretical optimum cannot be guaranteed. It is very scalable in that the number of function and gradient evaluations does not change significantly with increased problem size. Test cases are provided to demonstrate the efficiency and reliability of the methods and software.
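
    The external penalty idea named above converts the constrained problem into a sequence of unconstrained ones whose penalty weight grows. The tiny sketch below shows the mechanism on an assumed two-variable problem; it is only the textbook quadratic exterior penalty, not the BIGDOT implementation.

        import numpy as np
        from scipy.optimize import minimize

        # Assumed toy problem: minimise f(x) subject to g(x) <= 0.
        f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2
        g = lambda x: x[0] + x[1] - 2.0

        def penalised(x, r):
            return f(x) + r * max(0.0, g(x)) ** 2     # quadratic exterior penalty

        x = np.array([0.0, 0.0])
        for r in [1.0, 10.0, 100.0, 1000.0]:          # gradually tighten the penalty
            x = minimize(lambda z: penalised(z, r), x, method="BFGS").x
        print(x)                                       # approaches the constrained optimum (0.5, 1.5)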

  5. Improved Quasi-Newton method via PSB update for solving systems of nonlinear equations

    NASA Astrophysics Data System (ADS)

    Mamat, Mustafa; Dauda, M. K.; Waziri, M. Y.; Ahmad, Fadhilah; Mohamad, Fatma Susilawati

    2016-10-01

    The Newton method has some shortcomings: the Jacobian matrix may be difficult or even impossible to compute, and the Newton system must be solved at every iteration. A further common drawback of quasi-Newton methods is that they need to compute and store an n × n matrix at each iteration, which is computationally costly for large-scale problems. To overcome these drawbacks, an improved method for solving systems of nonlinear equations via the PSB (Powell-Symmetric-Broyden) update is proposed. In the proposed method, the approximate inverse Jacobian Hk of the PSB scheme is updated, improving efficiency and requiring only low memory storage, which is the main aim of this paper. Preliminary numerical results show that the proposed method is practically efficient when applied to benchmark problems.
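
    For reference, the standard PSB update of the Jacobian approximation takes the form below (the paper works with the corresponding inverse approximation Hk, which is not reproduced here), with s_k = x_{k+1} - x_k and y_k = F(x_{k+1}) - F(x_k):

```latex
B_{k+1} = B_k
  + \frac{(y_k - B_k s_k)\, s_k^{\mathsf{T}} + s_k\, (y_k - B_k s_k)^{\mathsf{T}}}{s_k^{\mathsf{T}} s_k}
  - \frac{s_k^{\mathsf{T}} (y_k - B_k s_k)}{\bigl(s_k^{\mathsf{T}} s_k\bigr)^{2}}\, s_k s_k^{\mathsf{T}}
```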

  6. Performance of quantitative vegetation sampling methods across gradients of cover in Great Basin plant communities

    USGS Publications Warehouse

    Pilliod, David S.; Arkle, Robert S.

    2013-01-01

    Resource managers and scientists need efficient, reliable methods for quantifying vegetation to conduct basic research, evaluate land management actions, and monitor trends in habitat conditions. We examined three methods for quantifying vegetation in 1-ha plots among different plant communities in the northern Great Basin: photography-based grid-point intercept (GPI), line-point intercept (LPI), and point-quarter (PQ). We also evaluated each method for within-plot subsampling adequacy and effort requirements relative to information gain. We found that, for most functional groups, percent cover measurements collected with the use of LPI, GPI, and PQ methods were strongly correlated. These correlations were even stronger when we used data from the upper canopy only (i.e., top “hit” of pin flags) in LPI to estimate cover. PQ was best at quantifying cover of sparse plants such as shrubs in early successional habitats. As cover of a given functional group decreased within plots, the variance of the cover estimate increased substantially, which required more subsamples per plot (i.e., transect lines, quadrats) to achieve reliable precision. For GPI, we found that six to nine quadrats per hectare were sufficient to characterize the vegetation in most of the plant communities sampled. All three methods reasonably characterized the vegetation in our plots, and each has advantages depending on characteristics of the vegetation, such as cover or heterogeneity, study goals, precision of measurements required, and efficiency needed.

  7. Efficient Design in a DC to DC Converter Unit

    NASA Technical Reports Server (NTRS)

    Bruemmer, Joel E.; Williams, Fitch R.; Schmitz, Gregory V.

    2002-01-01

    Space Flight hardware requires high power conversion efficiencies due to limited power availability and weight penalties of cooling systems. The International Space Station (ISS) Electric Power System (EPS) DC-DC Converter Unit (DDCU) power converter is no exception. This paper explores the design methods and tradeoffs that were utilized to accomplish high efficiency in the DDCU. An isolating DC to DC converter was selected for the ISS power system because of requirements for separate primary and secondary grounds and for a well-regulated secondary output voltage derived from a widely varying input voltage. A flyback-current-fed push-pull topology or improved Weinberg circuit was chosen for this converter because of its potential for high efficiency and reliability. To enhance efficiency, a non-dissipative snubber circuit for the very-low-Rds-on Field Effect Transistors (FETs) was utilized, redistributing the energy that could be wasted during the switching cycle of the power FETs. A unique, low-impedance connection system was utilized to improve contact resistance over a bolted connection. For improved consistency in performance and to lower internal wiring inductance and losses, a planar bus system is employed. All of these choices contributed to the design of a 6.25 kW regulated dc to dc converter that is 95 percent efficient. The methodology used in the design of this DC to DC Converter Unit may be directly applicable to other systems that require a conservative approach to efficient power conversion and distribution.

  8. Recursive Newton-Euler formulation of manipulator dynamics

    NASA Technical Reports Server (NTRS)

    Nasser, M. G.

    1989-01-01

    A recursive Newton-Euler procedure is presented for the formulation and solution of manipulator dynamical equations. The procedure includes rotational and translational joints and a topological tree. This model was verified analytically using a planar two-link manipulator. Also, the model was tested numerically against the Walker-Orin model using the Shuttle Remote Manipulator System data. The hinge accelerations obtained from both models were identical. The computational requirements of the model vary linearly with the number of joints. The computational efficiency of this method exceeds that of Walker-Orin methods. This procedure may be viewed as a considerable generalization of Armstrong's method. A six-by-six formulation is adopted which enhances both the computational efficiency and simplicity of the model.

  9. Justification of Estimates for Fiscal Year 1984 Submitted to Congress.

    DTIC Science & Technology

    1983-01-01

    sponsoring different aspects related to unique manufacturing methods than those pursued by DARPA, and duplication of effort is prevented by direct...weapons systems. Rapid and economical methods of satisfying these requirements must significantly precede weapons systems developments to prevent... methods for obtaining accurate and efficient geodetic measurements. Also, a major advanced sensor/G&G data collection capability is being undertaken by DNA

  10. Two pass method and radiation interchange processing when applied to thermal-structural analysis of large space truss structures

    NASA Technical Reports Server (NTRS)

    Warren, Andrew H.; Arelt, Joseph E.; Lalicata, Anthony L.; Rogers, Karen M.

    1993-01-01

    A method of efficient and automated thermal-structural processing of very large space structures is presented. The method interfaces the finite element and finite difference techniques. It also results in a pronounced reduction of the quantity of computations, computer resources and manpower required for the task, while assuring the desired accuracy of the results.

  11. A frequency dependent preconditioned wavelet method for atmospheric tomography

    NASA Astrophysics Data System (ADS)

    Yudytskiy, Mykhaylo; Helin, Tapio; Ramlau, Ronny

    2013-12-01

    Atmospheric tomography, i.e. the reconstruction of the turbulence in the atmosphere, is a main task for the adaptive optics systems of the next generation telescopes. For extremely large telescopes, such as the European Extremely Large Telescope, this problem becomes overly complex and an efficient algorithm is needed to reduce numerical costs. Recently, a conjugate gradient method based on wavelet parametrization of turbulence layers was introduced [5]. An iterative algorithm can only be numerically efficient when the number of iterations required for a sufficient reconstruction is low. A way to achieve this is to design an efficient preconditioner. In this paper we propose a new frequency-dependent preconditioner for the wavelet method. In the context of a multi conjugate adaptive optics (MCAO) system simulated on the official end-to-end simulation tool OCTOPUS of the European Southern Observatory we demonstrate robustness and speed of the preconditioned algorithm. We show that three iterations are sufficient for a good reconstruction.
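
    Because the reconstruction hinges on a preconditioned conjugate gradient iteration, the generic PCG template is sketched below with a simple Jacobi (diagonal) preconditioner on a small dense test matrix. The frequency-dependent wavelet preconditioner of the paper is not reproduced; the matrix, preconditioner, and tolerance are illustrative.

```python
# Generic preconditioned conjugate gradient (PCG) template for A x = b with a
# symmetric positive-definite A. A Jacobi preconditioner is used for illustration;
# the paper's frequency-dependent wavelet preconditioner is not reproduced.
import numpy as np

def pcg(A_apply, b, M_solve, tol=1e-8, max_iter=200):
    x = np.zeros_like(b)
    r = b - A_apply(x)
    z = M_solve(r)
    p = z.copy()
    rz = r @ z
    for k in range(max_iter):
        Ap = A_apply(p)
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, k + 1
        z = M_solve(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# Small SPD test problem with a Jacobi (diagonal) preconditioner.
rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n))
A = A @ A.T + n * np.eye(n)
b = rng.standard_normal(n)
x, iters = pcg(lambda v: A @ v, b, lambda r: r / np.diag(A))
print(iters, np.linalg.norm(A @ x - b))
```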

  12. A fast and accurate frequency estimation algorithm for sinusoidal signal with harmonic components

    NASA Astrophysics Data System (ADS)

    Hu, Jinghua; Pan, Mengchun; Zeng, Zhidun; Hu, Jiafei; Chen, Dixiang; Tian, Wugang; Zhao, Jianqiang; Du, Qingfa

    2016-10-01

    Frequency estimation is a fundamental problem in many applications, such as traditional vibration measurement, power system supervision, and microelectromechanical system sensor control. In this paper, a fast and accurate frequency estimation algorithm is proposed to address the low efficiency of traditional methods. The proposed algorithm consists of coarse and fine frequency estimation steps, and we demonstrate that applying a modified zero-crossing technique makes the coarse frequency estimation (locating the peak of the FFT amplitude) more efficient than conventional searching methods. Thus, the proposed estimation algorithm requires fewer hardware and software resources and becomes even more efficient as the amount of experimental data increases. Experimental results with a modulated magnetic signal show that the root mean square error of the frequency estimate is below 0.032 Hz with the proposed algorithm, which has lower computational complexity and better overall performance than conventional frequency estimation methods.
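
    The sketch below illustrates the general idea in the abstract: a rough frequency estimate from zero crossings confines the search for the FFT amplitude peak to a narrow band, so the coarse stage avoids scanning the whole spectrum. It is only an illustration under assumed parameters (signal, band width), not the authors' exact algorithm.

```python
# Sketch: a rough zero-crossing estimate narrows the FFT peak search for the
# coarse frequency estimate. Signal and parameters are hypothetical.
import numpy as np

fs = 1000.0                                    # sampling rate, Hz (assumed)
t = np.arange(0.0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * 48.3 * t) + 0.3 * np.sin(2 * np.pi * 96.6 * t)   # fundamental + harmonic

# Rough guess from the zero crossings of the waveform.
crossings = np.where(x[:-1] * x[1:] < 0)[0]
elapsed = (crossings[-1] - crossings[0]) / fs
f_rough = 0.5 * (len(crossings) - 1) / elapsed

# Coarse estimate: FFT amplitude peak, searched only near the rough guess.
spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
band = (freqs > 0.8 * f_rough) & (freqs < 1.2 * f_rough)
f_coarse = freqs[band][np.argmax(spectrum[band])]

print(f_rough, f_coarse)                       # both close to the true 48.3 Hz
```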

  13. Efficient engineering of marker-free synthetic allotetraploids of Saccharomyces.

    PubMed

    Alexander, William G; Peris, David; Pfannenstiel, Brandon T; Opulente, Dana A; Kuang, Meihua; Hittinger, Chris Todd

    2016-04-01

    Saccharomyces interspecies hybrids are critical biocatalysts in the fermented beverage industry, including in the production of lager beers, Belgian ales, ciders, and cold-fermented wines. Current methods for making synthetic interspecies hybrids are cumbersome and/or require genome modifications. We have developed a simple, robust, and efficient method for generating allotetraploid strains of prototrophic Saccharomyces without sporulation or nuclear genome manipulation. S. cerevisiae×S. eubayanus, S. cerevisiae×S. kudriavzevii, and S. cerevisiae×S. uvarum designer hybrid strains were created as synthetic lager, Belgian, and cider strains, respectively. The ploidy and hybrid nature of the strains were confirmed using flow cytometry and PCR-RFLP analysis, respectively. This method provides an efficient means for producing novel synthetic hybrids for beverage and biofuel production, as well as for constructing tetraploids to be used for basic research in evolutionary genetics and genome stability. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. Turtle: identifying frequent k-mers with cache-efficient algorithms.

    PubMed

    Roy, Rajat Shuvro; Bhattacharya, Debashish; Schliep, Alexander

    2014-07-15

    Counting the frequencies of k-mers in read libraries is often a first step in the analysis of high-throughput sequencing data. Infrequent k-mers are assumed to be a result of sequencing errors. The frequent k-mers constitute a reduced but error-free representation of the experiment, which can inform read error correction or serve as the input to de novo assembly methods. Ideally, the memory requirement for counting should be linear in the number of frequent k-mers and not in the, typically much larger, total number of k-mers in the read library. We present a novel method that balances time, space, and accuracy requirements to efficiently extract frequent k-mers even for high-coverage libraries and large genomes such as human. Our method is designed to minimize cache misses by using a pattern-blocked Bloom filter to remove infrequent k-mers from consideration, in combination with a novel sort-and-compact scheme, instead of a hash, for the actual counting. Although this increases theoretical complexity, the savings in cache misses reduce the empirical running times. A variant of the method can resort to a counting Bloom filter for even larger savings in memory, at the expense of false-negative rates in addition to the false-positive rates common to all Bloom filter-based approaches. A comparison with the state of the art shows reduced memory requirements and running times. The tools are freely available for download at http://bioinformatics.rutgers.edu/Software/Turtle and http://figshare.com/articles/Turtle/791582. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
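
    As a toy illustration of the general prefiltering pattern, the sketch below uses a small Bloom filter to screen out k-mers seen only once, so exact counts are kept only for repeated k-mers. It is not Turtle's pattern-blocked filter or its sort-and-compact counter; the filter size, hash choice, and data are invented.

```python
# Toy Bloom-filter prefilter for k-mer counting: a k-mer is counted only after it
# has (probably) been seen before, so most singleton k-mers never enter the table.
# This is not Turtle's pattern-blocked filter or sort-and-compact scheme.
from collections import Counter
import hashlib

class TinyBloom:
    def __init__(self, bits=1 << 20, hashes=3):
        self.bits, self.hashes = bits, hashes
        self.bitarray = bytearray(bits // 8)

    def _positions(self, item):
        for i in range(self.hashes):
            digest = hashlib.blake2b(item.encode(), digest_size=8, salt=bytes([i])).digest()
            yield int.from_bytes(digest, "little") % self.bits

    def add_and_test(self, item):
        """Insert item and return True if it was (probably) already present."""
        seen = True
        for pos in self._positions(item):
            byte, bit = divmod(pos, 8)
            if not (self.bitarray[byte] >> bit) & 1:
                seen = False
                self.bitarray[byte] |= 1 << bit
        return seen

def count_repeated_kmers(reads, k=5):
    bloom, counts = TinyBloom(), Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            if bloom.add_and_test(kmer):       # second or later occurrence
                counts[kmer] += 1
    return counts                              # occurrences beyond the first

reads = ["ACGTACGTACGT", "TTTTTACGTACG"]
print(count_repeated_kmers(reads).most_common(3))
```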

  15. Data compression for satellite images

    NASA Technical Reports Server (NTRS)

    Chen, P. H.; Wintz, P. A.

    1976-01-01

    An efficient data compression system is presented for satellite pictures and two grey-level pictures derived from satellite pictures. The compression techniques take advantage of the correlation between adjacent picture elements. Several source coding methods are investigated. Double delta coding is presented and shown to be the most efficient. Both the predictive differential quantizing technique and double delta coding can be significantly improved by applying a background skipping technique. An extension code is constructed. This code requires very little storage space and operates efficiently. Simulation results are presented for various coding schemes and source codes.
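
    The sketch below shows only the basic "delta of deltas" idea behind double delta coding on a single image row: second differences of smoothly varying pixel values cluster near zero and are therefore cheap to encode. The quantization, background skipping, and extension code of the paper are not reproduced; the data are made up.

```python
# "Delta of deltas": store second differences of a pixel row; smooth rows yield
# mostly near-zero symbols (apart from the start-up values), which compress well.
import numpy as np

def double_delta_encode(row):
    first = np.diff(row, prepend=0)            # first differences (keeps row[0])
    return np.diff(first, prepend=0)           # second differences

def double_delta_decode(dd):
    return np.cumsum(np.cumsum(dd))            # invert both difference passes

row = np.array([10, 12, 15, 19, 24, 30, 37], dtype=np.int64)
dd = double_delta_encode(row)
assert np.array_equal(double_delta_decode(dd), row)
print(dd)                                      # small values after the first two entries
```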

  16. Flight test evaluation of a method to determine the level flight performance of propeller-driven aircraft

    NASA Technical Reports Server (NTRS)

    Cross, E. J., Jr.

    1976-01-01

    A procedure is developed for deriving the level flight drag and propulsive efficiency of propeller-driven aircraft. This is a method in which the overall drag of the aircraft is expressed in terms of the measured increment of power required to overcome a corresponding known increment of drag. The aircraft is flown in unaccelerated, straight and level flight, and thus includes the effects of the propeller drag and slipstream. Propeller efficiency and airplane drag are computed on the basis of data obtained during flight test and do not rely on the analytical calculations of inadequate theory.

  17. Charge-transfer excited states: Seeking a balanced and efficient wave function ansatz in variational Monte Carlo

    DOE PAGES

    Blunt, Nick S.; Neuscamman, Eric

    2017-11-16

    We present a simple and efficient wave function ansatz for the treatment of excited charge-transfer states in real-space quantum Monte Carlo methods. Using the recently-introduced variation-after-response method, this ansatz allows a crucial orbital optimization step to be performed beyond a configuration interaction singles expansion, while only requiring calculation of two Slater determinant objects. As a result, we demonstrate this ansatz for the illustrative example of the stretched LiF molecule, for a range of excited states of formaldehyde, and finally for the more challenging ethylene-tetrafluoroethylene molecule.

  18. An Efficient Index Dissemination in Unstructured Peer-to-Peer Networks

    NASA Astrophysics Data System (ADS)

    Takahashi, Yusuke; Izumi, Taisuke; Kakugawa, Hirotsugu; Masuzawa, Toshimitsu

    Using Bloom filters is one of the most popular and efficient lookup methods in P2P networks. A Bloom filter is a representation of data item indices which achieves a small memory requirement by allowing one-sided errors (false positives). In the lookup scheme based on the Bloom filter, each peer disseminates in advance a Bloom filter representing the indices of the data items it owns. Using the disseminated Bloom filters as a clue, each query can find a short path to its destination. In this paper, we propose an efficient extension of the Bloom filter, called a Deterministic Decay Bloom Filter (DDBF), and an index dissemination method based on it. While index dissemination based on a standard Bloom filter suffers performance degradation from containing information about too many data items when its dissemination radius is large, the DDBF can circumvent such degradation by limiting information according to the distance between the filter holder and the item holders, i.e., a DDBF contains less information about faraway items and more information about nearby items. Interestingly, the construction of DDBFs requires no extra cost above that of standard filters. We also show by simulation that our method can achieve better lookup performance than existing ones.

  19. Knowledge representation by connection matrices: A method for the on-board implementation of large expert systems

    NASA Technical Reports Server (NTRS)

    Kellner, A.

    1987-01-01

    Extremely large knowledge sources and efficient knowledge access, which will characterize future real-life artificial intelligence applications, represent crucial requirements for on-board artificial intelligence systems because of the obvious computer time and storage constraints on spacecraft. A type of knowledge representation and a corresponding reasoning mechanism are proposed which are particularly suited to the efficient processing of such large knowledge bases in expert systems.

  20. An Automatic Method for Geometric Segmentation of Masonry Arch Bridges for Structural Engineering Purposes

    NASA Astrophysics Data System (ADS)

    Riveiro, B.; DeJong, M.; Conde, B.

    2016-06-01

    Despite the tremendous advantages of laser scanning technology for the geometric characterization of built constructions, there are important limitations preventing more widespread implementation in the structural engineering domain. Even though the technology provides extensive and accurate information to perform structural assessment and health monitoring, many people are resistant to the technology due to the processing times involved. Thus, new methods that can automatically process LiDAR data and subsequently provide an automatic and organized interpretation are required. This paper presents a new method for fully automated point cloud segmentation of masonry arch bridges. The method efficiently creates segmented, spatially related, and organized point clouds, each of which contains the relevant geometric data for a particular component (pier, arch, spandrel wall, etc.) of the structure. The segmentation procedure comprises a heuristic approach for the separation of the different vertical walls, followed by image processing tools adapted to voxel structures that allow the efficient segmentation of the main structural elements of the bridge. The proposed methodology provides the essential processed data required for structural assessment of masonry arch bridges based on geometric anomalies. The method is validated using a representative sample of masonry arch bridges in Spain.

  1. Gas dynamic design of the pipe line compressor with 90% efficiency. Model test approval

    NASA Astrophysics Data System (ADS)

    Galerkin, Y.; Rekstin, A.; Soldatova, K.

    2015-08-01

    Gas dynamic design of the 32 MW pipeline compressor was made for PAO SMPO (Sumy, Ukraine). The technical specification required a compressor efficiency of 90%. The customer offered a favorable scheme: a single-stage design with a console impeller and an axial inlet. The authors used the standard optimization methodology for 2D impellers. The original methodology of internal scroll profiling was used to minimize efficiency losses. The radically improved 5th version of the Universal modeling method computer programs was used for precise calculation of the expected performances. The customer performed model tests at a 1:2 scale. The tests confirmed the calculated parameters at the design point (maximum efficiency of 90%) and over the whole range of flow rates. As far as the authors know, no other compressor has achieved such efficiency. The principles and methods of gas-dynamic design are presented below. The data for the 32 MW compressor were presented by the customer in their report at the 16th International Compressor Conference (September 2014, Saint Petersburg) and later transferred to the authors.

  2. Time-Accurate Solutions of Incompressible Navier-Stokes Equations for Potential Turbopump Applications

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Kwak, Dochan

    2001-01-01

    Two numerical procedures, one based on the artificial compressibility method and the other on the pressure projection method, are outlined for obtaining time-accurate solutions of the incompressible Navier-Stokes equations. The performance of the two methods is compared by obtaining unsteady solutions for the evolution of twin vortices behind a flat plate. Calculated results are compared with experimental and other numerical results. For an unsteady flow which requires a small physical time step, the pressure projection method was found to be computationally efficient since it does not require any subiteration procedure. It was observed that the artificial compressibility method requires a fast convergence scheme at each physical time step in order to satisfy the incompressibility condition. This was obtained by using a GMRES-ILU(0) solver in our computations. When a line-relaxation scheme was used, the time accuracy was degraded and time-accurate computations became very expensive.

  3. Evaluation of environmental sampling methods for detection of Salmonella enterica in a large animal veterinary hospital.

    PubMed

    Goeman, Valerie R; Tinkler, Stacy H; Hammac, G Kenitra; Ruple, Audrey

    2018-04-01

    Environmental surveillance for Salmonella enterica can be used for early detection of contamination; thus routine sampling is an integral component of infection control programs in hospital environments. At the Purdue University Veterinary Teaching Hospital (PUVTH), the technique regularly employed in the large animal hospital for sample collection uses sterile gauze sponges for environmental sampling, which has proven labor-intensive and time-consuming. Alternative sampling methods use Swiffer brand electrostatic wipes for environmental sample collection, which are reportedly effective and efficient. It was hypothesized that use of Swiffer wipes for sample collection would be more efficient and less costly than the use of gauze sponges. A head-to-head comparison between the 2 sampling methods was conducted in the PUVTH large animal hospital and relative agreement, cost-effectiveness, and sampling efficiency were compared. There was fair agreement in culture results between the 2 sampling methods, but Swiffer wipes required less time and less physical effort to collect samples and were more cost-effective.

  4. A New Method for Control of the Efficiency of Gear Reducers

    NASA Astrophysics Data System (ADS)

    E Kozlov, K.; Egorov, A. V.; Belogusev, V. N.

    2017-04-01

    This article proposes a new method to control the energy efficiency of gear reducers. The method allows evaluating the friction losses in the drive motor, the drive motor bearing assemblies, and the toothing, both at the stage of control of the finished product and in the course of its operation, maintenance, and repair. The proposed method, unlike currently used methods for control of the efficiency of gear reducers, allows determining the friction losses without the use of strain measurement, which requires calibration of tensometric sensors and expensive equipment. The method is based on the idea of invariability of the mechanical characteristics of an induction motor at constant voltage, winding resistance, and mains frequency, regardless of the driven inertia mass. This paper presents experimental results which verify the theoretical predictions. The proposed method can be implemented in the acceptance-test procedure at companies that manufacture gear reducers, thereby assessing their effectiveness and the level of degradation processes that significantly affect the service life of the research object. The method can be implemented with either universal or specialized hardware and software complexes. Either the increment of the moment of inertia or the acceleration time of a gear reducer may serve as the performance criterion.

  5. Efficient Numerical Methods for Nonlinear-Facilitated Transport and Exchange in a Blood-Tissue Exchange Unit

    PubMed Central

    Poulain, Christophe A.; Finlayson, Bruce A.; Bassingthwaighte, James B.

    2010-01-01

    The analysis of experimental data obtained by the multiple-indicator method requires complex mathematical models for which capillary blood-tissue exchange (BTEX) units are the building blocks. This study presents a new, nonlinear, two-region, axially distributed, single capillary, BTEX model. A facilitated transporter model is used to describe mass transfer between plasma and intracellular spaces. To provide fast and accurate solutions, numerical techniques suited to nonlinear convection-dominated problems are implemented. These techniques are the random choice method, an explicit Euler-Lagrange scheme, and the MacCormack method with and without flux correction. The accuracy of the numerical techniques is demonstrated, and their efficiencies are compared. The random choice, Euler-Lagrange, and plain MacCormack methods are the best numerical techniques for BTEX modeling. However, the random choice and Euler-Lagrange methods are preferred over the MacCormack method because they allow for the derivation of a heuristic criterion that makes the numerical methods stable without degrading their efficiency. Numerical solutions are also used to illustrate some nonlinear behaviors of the model and to show how the new BTEX model can be used to estimate parameters from experimental data. PMID:9146808

  6. Reliable use of determinants to solve nonlinear structural eigenvalue problems efficiently

    NASA Technical Reports Server (NTRS)

    Williams, F. W.; Kennedy, D.

    1988-01-01

    The analytical derivation, numerical implementation, and performance of a multiple-determinant parabolic interpolation method (MDPIM) for use in solving transcendental eigenvalue (critical buckling or undamped free vibration) problems in structural mechanics are presented. The overall bounding, eigenvalue-separation, qualified parabolic interpolation, accuracy-confirmation, and convergence-recovery stages of the MDPIM are described in detail, and the numbers of iterations required to solve sample plane-frame problems using the MDPIM are compared with those for a conventional bisection method and for the Newtonian method of Simpson (1984) in extensive tables. The MDPIM is shown to use 31 percent less computation time than bisection when accuracy of 0.0001 is required, but 62 percent less when accuracy of 10^-8 is required; the time savings over the Newtonian method are about 10 percent.

  7. Remote health monitoring of heart failure with data mining via CART method on HRV features.

    PubMed

    Pecchia, Leandro; Melillo, Paolo; Bracale, Marcello

    2011-03-01

    Disease management programs, which use no advanced information and computer technology, are as effective as telemedicine but more efficient because they are less costly. We proposed a platform to enhance the effectiveness and efficiency of home monitoring using data mining for early detection of any worsening in a patient's condition. If not recognized, such worsening could require more complex and expensive care. In this letter, we briefly describe the remote health monitoring platform we designed and realized, which supports heart failure (HF) severity assessment and offers data mining functions based on the classification and regression tree (CART) method. The system achieved an accuracy of 96.39% and a precision of 100.00% in detecting HF, and an accuracy of 79.31% and a precision of 82.35% in distinguishing severe from mild HF. These preliminary results were achieved on public databases of signals to improve their reproducibility. Clinical trials involving local patients are still running and will require longer experimentation.

  8. A Bitslice Implementation of Anderson's Attack on A5/1

    NASA Astrophysics Data System (ADS)

    Bulavintsev, Vadim; Semenov, Alexander; Zaikin, Oleg; Kochemazov, Stepan

    2018-03-01

    The A5/1 keystream generator is a part of Global System for Mobile Communications (GSM) protocol, employed in cellular networks all over the world. Its cryptographic resistance was extensively analyzed in dozens of papers. However, almost all corresponding methods either employ a specific hardware or require an extensive preprocessing stage and significant amounts of memory. In the present study, a bitslice variant of Anderson's Attack on A5/1 is implemented. It requires very little computer memory and no preprocessing. Moreover, the attack can be made even more efficient by harnessing the computing power of modern Graphics Processing Units (GPUs). As a result, using commonly available GPUs this method can quite efficiently recover the secret key using only 64 bits of keystream. To test the performance of the implementation, a volunteer computing project was launched. 10 instances of A5/1 cryptanalysis have been successfully solved in this project in a single week.

  9. Process for CO2 capture using zeolites from high pressure and moderate temperature gas streams

    DOEpatents

    Siriwardane, Ranjani V [Morgantown, WV; Stevens, Robert W [Morgantown, WV

    2012-03-06

    A method for separating CO2 from a gas stream comprised of CO2 and other gaseous constituents using a zeolite sorbent in a swing-adsorption process, producing a high-temperature CO2 stream at a higher CO2 pressure than the input gas stream. The method utilizes CO2 desorption in a CO2 atmosphere and effectively integrates heat transfers to optimize overall efficiency. H2O adsorption does not preclude effective operation of the sorbent. The cycle may be incorporated in an IGCC for efficient pre-combustion CO2 capture. A particular application operates on shifted syngas at a temperature exceeding 200 °C and produces a dry CO2 stream at low temperature and high CO2 pressure, greatly reducing any compression energy that may subsequently be required.

  10. Integrated aerodynamic-structural design of a forward-swept transport wing

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.; Grossman, Bernard; Kao, Pi-Jen; Polen, David M.; Sobieszczanski-Sobieski, Jaroslaw

    1989-01-01

    The introduction of composite materials is having a profound effect on aircraft design. Since these materials permit the designer to tailor material properties to improve structural, aerodynamic and acoustic performance, they require an integrated multidisciplinary design process. Furthermore, because of the complexity of the design process, numerical optimization methods are required. The utilization of integrated multidisciplinary design procedures for improving aircraft design is not currently feasible because of software coordination problems and the enormous computational burden. Even with the expected rapid growth of supercomputers and parallel architectures, these tasks will not be practical without the development of efficient methods for cross-disciplinary sensitivities and efficient optimization procedures. The present research is part of an on-going effort which is focused on the processes of simultaneous aerodynamic and structural wing design as a prototype for design integration. A sequence of integrated wing design procedures has been developed in order to investigate various aspects of the design process.

  11. Methods of Phase and Power Control in Magnetron Transmitters for Superconducting Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kazadevich, G.; Johnson, R.; Neubauer, M.

    Various methods of phase and power control in magnetron RF sources of superconducting accelerators intended for ADS-class projects were recently developed and studied with conventional 2.45 GHz, 1 kW, CW magnetrons operating in pulsed and CW regimes. Magnetron transmitters excited by a resonant (injection-locking) phase-modulated signal can provide phase and power control with the rates required for precise stabilization of phase and amplitude of the accelerating field in Superconducting RF (SRF) cavities of the intensity-frontier accelerators. An innovative technique that can significantly increase the magnetron transmitter efficiency at the wide-range power control required for superconducting accelerators was developed and verified with the 2.45 GHz magnetrons operating in CW and pulsed regimes. High-efficiency magnetron transmitters of this type can significantly reduce the capital and operation costs of ADS-class accelerator projects.

  12. Cavity-Dumped Communication Laser Design

    NASA Technical Reports Server (NTRS)

    Roberts, W. T.

    2003-01-01

    Cavity-dumped lasers have significant advantages over more conventional Q-switched lasers for high-rate operation with pulse position modulation communications, including the ability to emit laser pulses at 1- to 10-megahertz rates, with pulse widths of 0.5 to 5 nanoseconds. A major advantage of cavity dumping is the potential to vary the cavity output percentage from pulse to pulse, maintaining the remainder of the energy in reserve for the next pulse. This article presents the results of a simplified cavity-dumped laser model, establishing the requirements for cavity efficiency and projecting the ultimate laser efficiency attainable in normal operation. In addition, a method of reducing or eliminating laser dead time is suggested that could significantly enhance communication capacity. The design of a laboratory demonstration laser is presented with estimates of required cavity efficiency and demonstration potential.

  13. Local coding based matching kernel method for image classification.

    PubMed

    Song, Yan; McLoughlin, Ian Vince; Dai, Li-Rong

    2014-01-01

    This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Word (BoV) techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK) method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines advantages of both BoV and kernel based metrics, and achieves a linear computational complexity. This enables efficient and scalable visual matching to be performed on large scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including 15-Scenes, Caltech101/256, PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method.

  14. A parallel overset-curvilinear-immersed boundary framework for simulating complex 3D incompressible flows

    PubMed Central

    Borazjani, Iman; Ge, Liang; Le, Trung; Sotiropoulos, Fotis

    2013-01-01

    We develop an overset-curvilinear immersed boundary (overset-CURVIB) method in a general non-inertial frame of reference to simulate a wide range of challenging biological flow problems. The method incorporates overset-curvilinear grids to efficiently handle multi-connected geometries and increase the resolution locally near immersed boundaries. Complex bodies undergoing arbitrarily large deformations may be embedded within the overset-curvilinear background grid and treated as sharp interfaces using the curvilinear immersed boundary (CURVIB) method (Ge and Sotiropoulos, Journal of Computational Physics, 2007). The incompressible flow equations are formulated in a general non-inertial frame of reference to enhance the overall versatility and efficiency of the numerical approach. Efficient search algorithms to identify areas requiring blanking, donor cells, and interpolation coefficients for constructing the boundary conditions at grid interfaces of the overset grid are developed and implemented using efficient parallel computing communication strategies to transfer information among sub-domains. The governing equations are discretized using a second-order accurate finite-volume approach and integrated in time via an efficient fractional-step method. Various strategies for ensuring globally conservative interpolation at grid interfaces suitable for incompressible flow fractional step methods are implemented and evaluated. The method is verified and validated against experimental data, and its capabilities are demonstrated by simulating the flow past multiple aquatic swimmers and the systolic flow in an anatomic left ventricle with a mechanical heart valve implanted in the aortic position. PMID:23833331

  15. Strategies for efficient resolution analysis in full-waveform inversion

    NASA Astrophysics Data System (ADS)

    Fichtner, A.; van Leeuwen, T.; Trampert, J.

    2016-12-01

    Full-waveform inversion is developing into a standard method in the seismological toolbox. It combines numerical wave propagation for heterogeneous media with adjoint techniques in order to improve tomographic resolution. However, resolution becomes increasingly difficult to quantify because of the enormous computational requirements. Here we present two families of methods that can be used for efficient resolution analysis in full-waveform inversion. They are based on the targeted extraction of resolution proxies from the Hessian matrix, which is too large to store and to compute explicitly. Fourier methods rest on the application of the Hessian to Earth models with harmonic oscillations. This yields the Fourier spectrum of the Hessian for few selected wave numbers, from which we can extract properties of the tomographic point-spread function for any point in space. Random probing methods use uncorrelated, random test models instead of harmonic oscillations. Auto-correlating the Hessian-model applications for sufficiently many test models also characterises the point-spread function. Both Fourier and random probing methods provide a rich collection of resolution proxies. These include position- and direction-dependent resolution lengths, and the volume of point-spread functions as indicator of amplitude recovery and inter-parameter trade-offs. The computational requirements of these methods are equivalent to approximately 7 conjugate-gradient iterations in full-waveform inversion. This is significantly less than the optimisation itself, which may require tens to hundreds of iterations to reach convergence. In addition to the theoretical foundations of the Fourier and random probing methods, we show various illustrative examples from real-data full-waveform inversion for crustal and mantle structure.
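
    The random probing idea can be made concrete with a generic sketch in the spirit of the Hutchinson estimator: the diagonal of an implicitly defined Hessian is estimated by auto-correlating Hessian-vector products with random test models. A small explicit matrix stands in for the Hessian so the example runs; in full-waveform inversion each Hessian application would instead require forward and adjoint simulations.

```python
# Generic random-probing sketch (Hutchinson-style) for estimating the diagonal of
# an implicitly defined Hessian from Hessian-vector products with random models.
import numpy as np

rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n))
H = np.diag(np.linspace(1.0, 10.0, n)) + 0.1 * (A + A.T)   # symmetric stand-in Hessian

def hessian_apply(m):
    return H @ m              # placeholder for one forward plus one adjoint simulation

num_probes = 50
diag_estimate = np.zeros(n)
for _ in range(num_probes):
    v = rng.choice([-1.0, 1.0], size=n)        # random (Rademacher) test model
    diag_estimate += v * hessian_apply(v)      # auto-correlate model and Hessian response
diag_estimate /= num_probes

# Correlation with the true diagonal approaches 1 as the number of probes grows.
print(np.corrcoef(diag_estimate, np.diag(H))[0, 1])
```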

  16. Semi-automated solid phase extraction method for the mass spectrometric quantification of 12 specific metabolites of organophosphorus pesticides, synthetic pyrethroids, and select herbicides in human urine.

    PubMed

    Davis, Mark D; Wade, Erin L; Restrepo, Paula R; Roman-Esteva, William; Bravo, Roberto; Kuklenyik, Peter; Calafat, Antonia M

    2013-06-15

    Organophosphate and pyrethroid insecticides and phenoxyacetic acid herbicides represent important classes of pesticides applied in commercial and residential settings. Interest in assessing the extent of human exposure to these pesticides exists because of their widespread use and their potential adverse health effects. An analytical method for measuring 12 biomarkers of several of these pesticides in urine has been developed. The target analytes were extracted from one milliliter of urine by a semi-automated solid phase extraction technique, separated from each other and from other urinary biomolecules by reversed-phase high performance liquid chromatography, and detected using tandem mass spectrometry with isotope dilution quantitation. This method can be used to measure all the target analytes in one injection, with repeatability and detection limits similar to those of previous methods that required more than one injection. Each step of the procedure was optimized to produce a robust, reproducible, accurate, precise, and efficient method. The required selectivity and sensitivity for trace-level analysis (e.g., limits of detection below 0.5 ng/mL) were achieved using a narrow-diameter analytical column, higher than unit mass resolution for certain analytes, and stable isotope labeled internal standards. The method was applied to the analysis of 55 samples collected from adult anonymous donors with no known exposure to the target pesticides. This efficient and cost-effective method is adequate to handle the large number of samples required for national biomonitoring surveys. Published by Elsevier B.V.

  17. An Efficient Method for the Retrieval of Objects by Topological Relations in Spatial Database Systems.

    ERIC Educational Resources Information Center

    Lin, P. L.; Tan, W. H.

    2003-01-01

    Presents a new method to improve the performance of query processing in a spatial database. Experiments demonstrated that performance of database systems can be improved because both the number of objects accessed and number of objects requiring detailed inspection are much less than those in the previous approach. (AEF)

  18. 77 FR 77159 - Self-Regulatory Organizations; ICE Clear Europe Limited; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-31

    ... those customers. Section 17A(b)(3)(F) of the Act \\6\\ requires, among other things, that the rules of a... submitted by any of the following methods: Electronic Comments Use the Commission's Internet comment form... efficiently, please use only one method. The Commission will post all comments on the Commission's Internet...

  19. Energy Efficient Real-Time Scheduling Using DPM on Mobile Sensors with a Uniform Multi-Cores

    PubMed Central

    Kim, Youngmin; Lee, Chan-Gun

    2017-01-01

    In wireless sensor networks (WSNs), sensor nodes are deployed for collecting and analyzing data. These nodes use limited-energy batteries for easy deployment and low cost, and battery capacity directly limits the lifetime of the sensor nodes. Efficient energy management is therefore important for extending node lifetime. Most efforts to improve power efficiency in tiny sensor nodes have focused mainly on reducing the power consumed during data transmission. However, the recent emergence of sensor nodes equipped with multiple cores strongly requires attention to the problem of reducing power consumption in multi-core processors. In this paper, we propose an energy-efficient scheduling method for sensor nodes with uniform multi-core processors. We extend the proposed T-Ler plane based scheduling, which provides globally optimal scheduling of uniform multi-core and multi-processor systems, to enable power management using dynamic power management (DPM). In the proposed approach, a processor selection and task-to-processor mapping method is proposed to utilize dynamic power management efficiently. Experiments show the effectiveness of the proposed approach compared to other existing methods. PMID:29240695

  20. Structural analysis for preliminary design of High Speed Civil Transport (HSCT)

    NASA Technical Reports Server (NTRS)

    Bhatia, Kumar G.

    1992-01-01

    In the preliminary design environment, there is a need for quick evaluation of configuration and material concepts. The simplified beam representations used for subsonic, high aspect ratio wing planforms are not applicable to the low aspect ratio configurations typical of supersonic transports. There is a requirement to develop methods for efficient generation of structural arrangements and finite element representations to support multidisciplinary analysis and optimization. In addition, the empirical databases required to validate prediction methods need to be improved for high speed civil transport (HSCT) type configurations.

  1. Toward better drug repositioning: prioritizing and integrating existing methods into efficient pipelines.

    PubMed

    Jin, Guangxu; Wong, Stephen T C

    2014-05-01

    Recycling old drugs, rescuing shelved drugs and extending patents' lives make drug repositioning an attractive form of drug discovery. Drug repositioning accounts for approximately 30% of the newly US Food and Drug Administration (FDA)-approved drugs and vaccines in recent years. The prevalence of drug-repositioning studies has resulted in a variety of innovative computational methods for the identification of new opportunities for the use of old drugs. Questions often arise when customizing or optimizing these methods into efficient drug-repositioning pipelines for alternative applications. Doing so requires a comprehensive understanding of the available methods, gained by evaluating both biological and pharmaceutical knowledge and the elucidated mechanisms of action of drugs. Here, we provide guidance for prioritizing and integrating drug-repositioning methods for specific drug-repositioning pipelines. Copyright © 2013 Elsevier Ltd. All rights reserved.

  2. CB4-03: An Eye on the Future: A Review of Data Virtualization Techniques to Improve Research Analytics

    PubMed Central

    Richter, Jack; McFarland, Lela; Bredfeldt, Christine

    2012-01-01

    Background/Aims Integrating data across systems can be a daunting process. The traditional method of moving data to a common location, mapping fields with different formats and meanings, and performing data cleaning activities to ensure valid and reliable integration across systems can be both expensive and extremely time consuming. As the scope of needed research data increases, the traditional methodology may not be sustainable. Data Virtualization provides an alternative to traditional methods that may reduce the effort required to integrate data across disparate systems. Objective Our goal was to survey new methods in data integration, cloud computing, enterprise data management and virtual data management for opportunities to increase the efficiency of producing VDW and similar data sets. Methods Kaiser Permanente Information Technology (KPIT), in collaboration with the Mid-Atlantic Permanente Research Institute (MAPRI), reviewed methodologies in the burgeoning field of Data Virtualization. We identified potential strengths and weaknesses of new approaches to data integration. For each method, we evaluated its potential application for producing effective research data sets. Results Data Virtualization provides opportunities to reduce the amount of data movement required to integrate data sources on different platforms in order to produce research data sets. Additionally, Data Virtualization also includes methods for managing “fuzzy” matching used to match fields known to have poor reliability such as names, addresses and social security numbers. These methods could improve the efficiency of integrating state and federal data such as patient race, death, and tumors with internal electronic health record data. Discussion The emerging field of Data Virtualization has considerable potential for increasing the efficiency of producing research data sets. An important next step will be to develop a proof of concept project that will help us understand the benefits and drawbacks of these techniques.

  3. A transient FETI methodology for large-scale parallel implicit computations in structural mechanics

    NASA Technical Reports Server (NTRS)

    Farhat, Charbel; Crivelli, Luis; Roux, Francois-Xavier

    1992-01-01

    Explicit codes are often used to simulate the nonlinear dynamics of large-scale structural systems, even for low frequency response, because the storage and CPU requirements entailed by the repeated factorizations traditionally found in implicit codes rapidly overwhelm the available computing resources. With the advent of parallel processing, this trend is accelerating because explicit schemes are also easier to parallelize than implicit ones. However, the time step restriction imposed by the Courant stability condition on all explicit schemes cannot yet -- and perhaps will never -- be offset by the speed of parallel hardware. Therefore, it is essential to develop efficient and robust alternatives to direct methods that are also amenable to massively parallel processing because implicit codes using unconditionally stable time-integration algorithms are computationally more efficient when simulating low-frequency dynamics. Here we present a domain decomposition method for implicit schemes that requires significantly less storage than factorization algorithms, that is several times faster than other popular direct and iterative methods, that can be easily implemented on both shared and local memory parallel processors, and that is both computationally and communication-wise efficient. The proposed transient domain decomposition method is an extension of the method of Finite Element Tearing and Interconnecting (FETI) developed by Farhat and Roux for the solution of static problems. Serial and parallel performance results on the CRAY Y-MP/8 and the iPSC-860/128 systems are reported and analyzed for realistic structural dynamics problems. These results establish the superiority of the FETI method over both the serial/parallel conjugate gradient algorithm with diagonal scaling and the serial/parallel direct method, and contrast the computational power of the iPSC-860/128 parallel processor with that of the CRAY Y-MP/8 system.

  4. Laser Doppler velocimetry primer

    NASA Technical Reports Server (NTRS)

    Bachalo, William D.

    1985-01-01

    Advanced research in experimental fluid dynamics requires familiarity with sophisticated measurement techniques. In some cases, the development and application of new techniques is required for difficult measurements. Optical methods, and in particular the laser Doppler velocimeter (LDV), are now recognized as the most reliable means of performing measurements in complex turbulent flows. As such, the experimental fluid dynamicist should be familiar with the principles of operation of the method and the details associated with its application. Thus, the goals of this primer are to efficiently transmit the basic concepts of the LDV method to potential users and to provide references that describe the specific areas in greater detail.

  5. A Flexible and Efficient Method for Solving Ill-Posed Linear Integral Equations of the First Kind for Noisy Data

    NASA Astrophysics Data System (ADS)

    Antokhin, I. I.

    2017-06-01

    We propose an efficient and flexible method for solving Fredholm and Abel integral equations of the first kind, frequently appearing in astrophysics. These equations present an ill-posed problem. Our method is based on solving them on a so-called compact set of functions and/or using Tikhonov's regularization. Both approaches are non-parametric and do not require any theoretic model, apart from some very loose a priori constraints on the unknown function. The two approaches can be used independently or in a combination. The advantage of the method, apart from its flexibility, is that it gives uniform convergence of the approximate solution to the exact one, as the errors of input data tend to zero. Simulated and astrophysical examples are presented.
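
    A minimal sketch of the Tikhonov branch of such an approach is given below: a discretized Fredholm equation of the first kind with a smoothing kernel is solved with zeroth-order Tikhonov regularization. The kernel, grid, noise level, and regularization parameter are illustrative assumptions, not values from the paper.

```python
# Zeroth-order Tikhonov regularization for a discretized Fredholm equation of the
# first kind, K u = f, with noisy data. All parameters here are illustrative.
import numpy as np

n = 100
s = np.linspace(0.0, 1.0, n)
t = np.linspace(0.0, 1.0, n)
h = t[1] - t[0]
K = np.exp(-((s[:, None] - t[None, :]) ** 2) / 0.01) * h   # smoothing (Gaussian) kernel

u_true = np.sin(2 * np.pi * t) + 0.5 * np.sin(6 * np.pi * t)
rng = np.random.default_rng(1)
f_noisy = K @ u_true + 1e-3 * rng.standard_normal(n)

print(f"cond(K) = {np.linalg.cond(K):.1e}")                # huge: the problem is ill-posed

alpha = 1e-4      # regularization parameter; in practice chosen e.g. by the discrepancy principle
u_reg = np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ f_noisy)
print(np.linalg.norm(u_reg - u_true) / np.linalg.norm(u_true))   # modest relative error
```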

  6. Precise Composition Tailoring of Mixed-Cation Hybrid Perovskites for Efficient Solar Cells by Mixture Design Methods.

    PubMed

    Li, Liang; Liu, Na; Xu, Ziqi; Chen, Qi; Wang, Xindong; Zhou, Huanping

    2017-09-26

    Mixed anion/cation perovskite absorbers have recently been implemented to construct highly efficient single-junction solar cells and tandem devices. However, considerable effort is still required to map the composition-property relationship of the mixed perovskite absorber, which is essential to facilitate device design. Here we report the intensive exploration of mixed-cation perovskites in their compositional space with the assistance of rational mixture design (MD) methods. Different from the previous linear search of cation ratios, it is found that by employing the MD methods the ternary composition can be tuned simultaneously, following simplex lattice designs or simplex-centroid designs, which enables a significantly reduced experiment/sampling size to unveil the composition-property relationship for mixed perovskite materials and to boost the resultant device efficiency. We illustrated the composition-property relationship of the mixed perovskites in multiple dimensions and achieved an optimized power conversion efficiency of 20.99% in the corresponding device. Moreover, the method is demonstrated to be feasible for helping adjust the bandgap through rational materials design, and it can be further extended to other materials systems, not limited to polycrystalline perovskite films for photovoltaic applications.

  7. Efficient Blockwise Permutation Tests Preserving Exchangeability

    PubMed Central

    Zhou, Chunxiao; Zwilling, Chris E.; Calhoun, Vince D.; Wang, Michelle Y.

    2014-01-01

    In this paper, we present a new blockwise permutation test approach based on the moments of the test statistic. The method is of importance to neuroimaging studies. In order to preserve the exchangeability condition required in permutation tests, we divide the entire set of data into certain exchangeability blocks. In addition, computationally efficient moments-based permutation tests are performed by approximating the permutation distribution of the test statistic with the Pearson distribution series. This involves the calculation of the first four moments of the permutation distribution within each block and then over the entire set of data. The accuracy and efficiency of the proposed method are demonstrated through a simulated experiment on magnetic resonance imaging (MRI) brain data, specifically the multi-site voxel-based morphometry analysis from structural MRI (sMRI). PMID:25289113
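
    To make the exchangeability-block idea concrete, a brute-force blockwise permutation test is sketched below: labels are shuffled only within blocks. The moments/Pearson-series approximation that makes the paper's approach efficient is not reproduced; the data and block structure are hypothetical.

```python
# Brute-force blockwise permutation test: labels are permuted only within
# exchangeability blocks (e.g., sites), preserving exchangeability.
import numpy as np

rng = np.random.default_rng(0)

def block_permutation_test(values, labels, blocks, n_perm=2000):
    """Two-sample mean-difference test with permutations restricted to blocks."""
    observed = values[labels == 1].mean() - values[labels == 0].mean()
    count = 0
    for _ in range(n_perm):
        permuted = labels.copy()
        for b in np.unique(blocks):
            idx = np.where(blocks == b)[0]
            permuted[idx] = rng.permutation(permuted[idx])   # shuffle within block only
        stat = values[permuted == 1].mean() - values[permuted == 0].mean()
        if abs(stat) >= abs(observed):
            count += 1
    return (count + 1) / (n_perm + 1)

# Hypothetical data: 3 blocks (e.g., sites), binary group label, small group effect.
blocks = np.repeat([0, 1, 2], 20)
labels = np.tile([0, 1], 30)
values = rng.standard_normal(60) + 0.8 * labels + 0.5 * blocks
print(block_permutation_test(values, labels, blocks))
```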

  8. Computing the Baker-Campbell-Hausdorff series and the Zassenhaus product

    NASA Astrophysics Data System (ADS)

    Weyrauch, Michael; Scholz, Daniel

    2009-09-01

    The Baker-Campbell-Hausdorff (BCH) series and the Zassenhaus product are of fundamental importance for the theory of Lie groups and their applications in physics and physical chemistry. Standard methods for the explicit construction of the BCH and Zassenhaus terms yield polynomial representations, which must be translated into the usually required commutator representation. We prove that a new translation proposed recently yields a correct representation of the BCH and Zassenhaus terms. This representation entails fewer terms than the well-known Dynkin-Specht-Wever representation, which is of relevance for practical applications. Furthermore, various methods for the computation of the BCH and Zassenhaus terms are compared, and a new efficient approach for the calculation of the Zassenhaus terms is proposed. Mathematica implementations for the most efficient algorithms are provided together with comparisons of efficiency.
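
    For orientation, the leading terms of the BCH series in commutator form (a standard result, reproduced here only for reference) are:

```latex
\log\!\left(e^{X} e^{Y}\right)
  = X + Y
  + \tfrac{1}{2}\,[X,Y]
  + \tfrac{1}{12}\bigl([X,[X,Y]] + [Y,[Y,X]]\bigr)
  - \tfrac{1}{24}\,[Y,[X,[X,Y]]]
  + \cdots
```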

  9. Trading efficiency for effectiveness in similarity-based indexing for image databases

    NASA Astrophysics Data System (ADS)

    Barros, Julio E.; French, James C.; Martin, Worthy N.; Kelly, Patrick M.

    1995-11-01

    Image databases typically manage feature data that can be viewed as points in a feature space. Some features, however, can be better expressed as a collection of points or described by a probability distribution function (PDF) rather than as a single point. In earlier work we introduced a similarity measure and a method for indexing and searching the PDF descriptions of these items that guarantees an answer equivalent to sequential search. Unfortunately, certain properties of the data can restrict the efficiency of that method. In this paper we extend that work and examine trade-offs between efficiency and answer quality or effectiveness. These trade-offs reduce the amount of work required during a search by reducing the number of undesired items fetched without excluding an excessive number of the desired ones.

  10. Synthesis of energy-efficient FSMs implemented in PLD circuits

    NASA Astrophysics Data System (ADS)

    Nawrot, Radosław; Kulisz, Józef; Kania, Dariusz

    2017-11-01

    The paper presents an outline of a simple synthesis method of energy-efficient FSMs. The idea consists in using local clock gating to selectively block the clock signal, if no transition of a state of a memory element is required. The research was dedicated to logic circuits using Programmable Logic Devices as the implementation platform, but the conclusions can be applied to any synchronous circuit. The experimental section reports a comparison of three methods of implementing sequential circuits in PLDs with respect to clock distribution: the classical fully synchronous structure, the structure exploiting the Enable Clock inputs of memory elements, and the structure using clock gating. The results show that the approach based on clock gating is the most efficient one, and it leads to significant reduction of dynamic power consumed by the FSM.

  11. New configuration for efficient and durable copper coating on the outer surface of a tube

    DOE PAGES

    Ahmad, Irfan; Chapman, Steven F.; Velas, Katherine M.; ...

    2017-03-27

    A well-adhered copper coating on stainless steel power coupler parts is required in superconducting radio frequency (SRF) accelerators. Radio frequency power coupler parts are complex, tubelike stainless steel structures, which require copper coating on their outer and inner surfaces. Conventional copper electroplating sometimes produces films with inadequate adhesion strength for SRF applications. Electroplating also requires a thin nickel strike layer under the copper coating, whose magnetic properties can be detrimental to SRF applications. Coaxial energetic deposition (CED) and sputtering methods have demonstrated efficient conformal coating on the inner surfaces of tubes, but coating the outer surface of a tube is challenging because these coating methods are line of sight. When the substrate is off axis and the plasma source is on axis, only a small section of the substrate’s outer surface is exposed to the source cathode. The conventional approach is to rotate the tube to achieve uniformity across the outer surface. This method results in poor film thickness uniformity and wastes most of the source plasma. Alameda Applied Sciences Corporation (AASC) has developed a novel configuration called hollow external cathode CED (HEC-CED) to overcome these issues. HEC-CED produces a film with uniform thickness and efficiently uses all eroded source material. Furthermore, the Cu film deposited on the outside of a stainless steel tube using the new HEC-CED configuration survived a high pressure water rinse adhesion test. HEC-CED can be used to coat the outside of any cylindrical structure.

  12. New configuration for efficient and durable copper coating on the outer surface of a tube

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahmad, Irfan; Chapman, Steven F.; Velas, Katherine M.

    A well-adhered copper coating on stainless steel power coupler parts is required in superconducting radio frequency (SRF) accelerators. Radio frequency power coupler parts are complex, tubelike stainless steel structures, which require copper coating on their outer and inner surfaces. Conventional copper electroplating sometimes produces films with inadequate adhesion strength for SRF applications. Electroplating also requires a thin nickel strike layer under the copper coating, whose magnetic properties can be detrimental to SRF applications. Coaxial energetic deposition (CED) and sputtering methods have demonstrated efficient conformal coating on the inner surfaces of tubes, but coating the outer surface of a tube is challenging because these coating methods are line of sight. When the substrate is off axis and the plasma source is on axis, only a small section of the substrate’s outer surface is exposed to the source cathode. The conventional approach is to rotate the tube to achieve uniformity across the outer surface. This method results in poor film thickness uniformity and wastes most of the source plasma. Alameda Applied Sciences Corporation (AASC) has developed a novel configuration called hollow external cathode CED (HEC-CED) to overcome these issues. HEC-CED produces a film with uniform thickness and efficiently uses all eroded source material. Furthermore, the Cu film deposited on the outside of a stainless steel tube using the new HEC-CED configuration survived a high pressure water rinse adhesion test. HEC-CED can be used to coat the outside of any cylindrical structure.

  13. Efficient Parallel Formulations of Hierarchical Methods and Their Applications

    NASA Astrophysics Data System (ADS)

    Grama, Ananth Y.

    1996-01-01

    Hierarchical methods such as the Fast Multipole Method (FMM) and Barnes-Hut (BH) are used for rapid evaluation of potential (gravitational, electrostatic) fields in particle systems. They are also used for solving integral equations using boundary element methods. The linear systems arising from these methods are dense and are solved iteratively. Hierarchical methods reduce the complexity of the core matrix-vector product from O(n^2) to O(n log n) and the memory requirement from O(n^2) to O(n). We have developed highly scalable parallel formulations of a hybrid FMM/BH method that are capable of handling arbitrarily irregular distributions. We apply these formulations to astrophysical simulations of Plummer and Gaussian galaxies. We have used our parallel formulations to solve the integral form of the Laplace equation. We show that our parallel hierarchical mat-vecs yield high efficiency and overall performance even on relatively small problems. A problem containing approximately 200K nodes takes under a second to compute on 256 processors and yet yields over 85% efficiency. The efficiency and raw performance are expected to increase for bigger problems. For the 200K node problem, our code delivers about 5 GFLOPS of performance on a 256 processor T3D. This is impressive considering that the problem has floating point divides and roots, and very little locality, resulting in poor cache performance. A dense matrix-vector product of the same dimensions would require about 0.5 TeraBytes of memory and about 770 TeraFLOPS of computing speed. Clearly, if the loss in accuracy resulting from the use of hierarchical methods is acceptable, our code yields significant savings in time and memory. We also study the convergence of a GMRES solver built around this mat-vec. We accelerate the convergence of the solver using three preconditioning techniques: diagonal scaling, block-diagonal preconditioning, and inner-outer preconditioning. We study the performance and parallel efficiency of these preconditioned solvers. Using this solver, we solve dense linear systems with hundreds of thousands of unknowns. Solving a 105K unknown problem takes about 10 minutes on a 64 processor T3D. Until very recently, boundary element problems of this magnitude could not even be generated, let alone solved.

  14. Adaptive efficient compression of genomes

    PubMed Central

    2012-01-01

    Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever increasing rate. In parallel to the decreasing experimental time and cost necessary to produce DNA sequences, computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology to deal with this challenge. Recently, referential compression schemes, storing only the differences between a to-be-compressed input and a known reference sequence, gained a lot of interest in this field. However, memory requirements of the current algorithms are high and run times are often slow. In this paper, we propose an adaptive, parallel and highly efficient referential sequence compression method which allows fine-tuning of the trade-off between required memory and compression speed. When using 12 MB of memory, our method is on par with the best previous algorithms for human genomes in terms of compression ratio (400:1) and compression speed. In contrast, it compresses a complete human genome in just 11 seconds when provided with 9 GB of main memory, which is almost three times faster than the best competitor while using less main memory. PMID:23146997
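
    The core idea of referential compression can be sketched in a few lines: encode the target as copy operations against the reference plus literals for mismatches. The toy `referential_compress` function below is illustrative only (a greedy k-mer matcher, not the adaptive, parallel algorithm described in the record).

```python
def referential_compress(target: str, reference: str, k: int = 16):
    """Encode `target` as ("ref", pos, length) copies from `reference`
    plus ("lit", base) literals for unmatched positions."""
    # Index every k-mer of the reference for constant-time anchor lookup.
    index = {}
    for i in range(len(reference) - k + 1):
        index.setdefault(reference[i:i + k], i)

    ops, i = [], 0
    while i < len(target):
        anchor = index.get(target[i:i + k])
        if anchor is None:
            ops.append(("lit", target[i]))       # no anchor: emit a literal base
            i += 1
            continue
        j = 0                                    # extend the match greedily
        while (i + j < len(target) and anchor + j < len(reference)
               and target[i + j] == reference[anchor + j]):
            j += 1
        ops.append(("ref", anchor, j))           # copy j bases from the reference
        i += j
    return ops

# Example: a target that differs from the reference by one substitution.
ref = "ACGT" * 10
tgt = ref[:17] + "A" + ref[18:]
print(referential_compress(tgt, ref, k=8))
```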

  15. Increasing Efficiency of Fecal Coliform Testing Through EPA-Approved Alternate Method Colilert*-18

    NASA Technical Reports Server (NTRS)

    Cornwell, Brian

    2017-01-01

    The 21 SM 9221 E multiple-tube fermentation method for fecal coliform analysis requires a large time and reagent investment for the performing laboratory. In late 2010, the EPA approved an alternative procedure for the determination of fecal coliforms designated as Colilert*-18. However, as of late 2016, only two VELAP-certified laboratories in the Commonwealth of Virginia have been certified in this method.

  16. An adaptive sparse-grid high-order stochastic collocation method for Bayesian inference in groundwater reactive transport modeling

    NASA Astrophysics Data System (ADS)

    Zhang, Guannan; Lu, Dan; Ye, Ming; Gunzburger, Max; Webster, Clayton

    2013-10-01

    Bayesian analysis has become vital to uncertainty quantification in groundwater modeling, but its application has been hindered by the computational cost associated with numerous model executions required by exploring the posterior probability density function (PPDF) of model parameters. This is particularly the case when the PPDF is estimated using Markov Chain Monte Carlo (MCMC) sampling. In this study, a new approach is developed to improve the computational efficiency of Bayesian inference by constructing a surrogate of the PPDF, using an adaptive sparse-grid high-order stochastic collocation (aSG-hSC) method. Unlike previous works using first-order hierarchical basis, this paper utilizes a compactly supported higher-order hierarchical basis to construct the surrogate system, resulting in a significant reduction in the number of required model executions. In addition, using the hierarchical surplus as an error indicator allows locally adaptive refinement of sparse grids in the parameter space, which further improves computational efficiency. To efficiently build the surrogate system for the PPDF with multiple significant modes, optimization techniques are used to identify the modes, for which high-probability regions are defined and components of the aSG-hSC approximation are constructed. After the surrogate is determined, the PPDF can be evaluated by sampling the surrogate system directly without model execution, resulting in improved efficiency of the surrogate-based MCMC compared with conventional MCMC. The developed method is evaluated using two synthetic groundwater reactive transport models. The first example involves coupled linear reactions and demonstrates the accuracy of our high-order hierarchical basis approach in approximating the high-dimensional posterior distribution. The second example is highly nonlinear because of the reactions of uranium surface complexation, and demonstrates how the iterative aSG-hSC method is able to capture multimodal and non-Gaussian features of the PPDF caused by model nonlinearity. Both experiments show that aSG-hSC is an effective and efficient tool for Bayesian inference.
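
    A stripped-down version of the surrogate idea is sketched below: tabulate the log-posterior on a grid (cheap here, expensive in practice), interpolate it, and run Metropolis sampling against the interpolant so that no model executions occur inside the chain. The tensor-grid interpolant and the toy `log_post` function are stand-ins; the record uses adaptive sparse grids with high-order hierarchical bases rather than a full grid.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def log_post(theta):
    """Hypothetical 2-parameter log-posterior (stand-in for an expensive
    groundwater model run followed by a likelihood evaluation)."""
    return -0.5 * ((theta[0] - 1.0) ** 2 + 4.0 * (theta[1] + 0.5) ** 2)

# Build the surrogate once, from a fixed budget of "model runs" on a grid.
g1 = np.linspace(-3.0, 3.0, 41)
g2 = np.linspace(-3.0, 3.0, 41)
vals = np.array([[log_post((a, b)) for b in g2] for a in g1])
interp = RegularGridInterpolator((g1, g2), vals, bounds_error=False,
                                 fill_value=-np.inf)

def surrogate(theta):
    return float(interp(theta[None, :])[0])

# Random-walk Metropolis on the surrogate: no model executions in the loop.
rng = np.random.default_rng(0)
theta, chain = np.array([0.0, 0.0]), []
for _ in range(20000):
    prop = theta + 0.3 * rng.standard_normal(2)
    if np.log(rng.random()) < surrogate(prop) - surrogate(theta):
        theta = prop
    chain.append(theta.copy())
print("posterior mean estimate:", np.mean(chain, axis=0))
```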

  17. Single Anisotropic 3-D MR Image Upsampling via Overcomplete Dictionary Trained From In-Plane High Resolution Slices.

    PubMed

    Jia, Yuanyuan; He, Zhongshi; Gholipour, Ali; Warfield, Simon K

    2016-11-01

    In magnetic resonance (MR), hardware limitations, scanning time, and patient comfort often result in the acquisition of anisotropic 3-D MR images. Enhancing image resolution is desired but has been very challenging in medical image processing. Super resolution reconstruction based on sparse representation and overcomplete dictionaries has lately been employed to address this problem; however, these methods require extra training sets, which may not always be available. This paper proposes a novel single anisotropic 3-D MR image upsampling method via sparse representation and an overcomplete dictionary that is trained from in-plane high resolution slices to upsample in the out-of-plane dimensions. The proposed method, therefore, does not require extra training sets. Extensive experiments, conducted on simulated and clinical brain MR images, show that the proposed method is more accurate than classical interpolation. When compared to a recent upsampling method based on the nonlocal means approach, the proposed method did not show improved results at low upsampling factors with simulated images, but generated comparable results with much better computational efficiency in clinical cases. Therefore, the proposed approach can be efficiently implemented and routinely used to upsample MR images in the out-of-plane views for radiologic assessment and postacquisition processing.

  18. Application of PLE for the determination of essential oil components from Thymus vulgaris L.

    PubMed

    Dawidowicz, Andrzej L; Rado, Ewelina; Wianowska, Dorota; Mardarowicz, Marek; Gawdzik, Jan

    2008-08-15

    Essential plants, due to their long presence in human history, their status in culinary arts, their use in medicine and perfume manufacture, belong to frequently examined stock materials in scientific and industrial laboratories. Because of a large number of freshly cut, dried or frozen plant samples requiring the determination of essential oil amount and composition, a fast, safe, simple, efficient and highly automatic sample preparation method is needed. Five sample preparation methods (steam distillation, extraction in the Soxhlet apparatus, supercritical fluid extraction, solid phase microextraction and pressurized liquid extraction) used for the isolation of aroma-active components from Thymus vulgaris L. are compared in the paper. The methods are mainly discussed with regard to the recovery of components which typically exist in essential oil isolated by steam distillation. According to the obtained data, PLE is the most efficient sample preparation method in determining the essential oil from the thyme herb. Although co-extraction of non-volatile ingredients is the main drawback of this method, it is characterized by the highest yield of essential oil components and the shortest extraction time required. Moreover, the relative peak amounts of essential components revealed by PLE are comparable with those obtained by steam distillation, which is recognized as standard sample preparation method for the analysis of essential oils in aromatic plants.

  19. An efficient graph theory based method to identify every minimal reaction set in a metabolic network

    PubMed Central

    2014-01-01

    Background Development of cells with minimal metabolic functionality is gaining importance due to their efficiency in producing chemicals and fuels. Existing computational methods to identify minimal reaction sets in metabolic networks are computationally expensive. Further, they identify only one of the several possible minimal reaction sets. Results In this paper, we propose an efficient graph theory based recursive optimization approach to identify all minimal reaction sets. Graph theoretical insights offer systematic methods to not only reduce the number of variables in math programming and increase its computational efficiency, but also provide efficient ways to find multiple optimal solutions. The efficacy of the proposed approach is demonstrated using case studies from Escherichia coli and Saccharomyces cerevisiae. In case study 1, the proposed method identified three minimal reaction sets, each containing 38 reactions, in the Escherichia coli central metabolic network with 77 reactions. Analysis of these three minimal reaction sets revealed that one of them is more suitable for developing a minimal metabolism cell than the other two due to its practically achievable internal flux distribution. In case study 2, the proposed method identified 256 minimal reaction sets from the Saccharomyces cerevisiae genome scale metabolic network with 620 reactions. The proposed method required only 4.5 hours to identify all 256 minimal reaction sets and showed a significant reduction (approximately 80%) in the solution time when compared to the existing methods for finding minimal reaction sets. Conclusions Identification of all minimal reaction sets in metabolic networks is essential since different minimal reaction sets have different properties that affect bioprocess development. The proposed method correctly identified all minimal reaction sets in both case studies. The proposed method is computationally efficient compared to other methods for finding minimal reaction sets and is useful to employ with genome-scale metabolic networks. PMID:24594118

  20. Extraction efficiency and implications for absolute quantitation of propranolol in mouse brain, liver and kidney thin tissue sections using droplet-based liquid microjunction surface sampling-HPLC ESI-MS/MS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kertesz, Vilmos; Weiskittel, Taylor M.; Vavek, Marissa

    Currently, absolute quantitation aspects of droplet-based surface sampling for thin tissue analysis using a fully automated autosampler/HPLC-ESI-MS/MS system are not fully evaluated. Knowledge of extraction efficiency and its reproducibility is required to judge the potential of the method for absolute quantitation of analytes from thin tissue sections. Methods: Adjacent thin tissue sections of propranolol dosed mouse brain (10-μm-thick), kidney (10-μm-thick) and liver (8-, 10-, 16- and 24-μm-thick) were obtained. Absolute concentration of propranolol was determined in tissue punches from serial sections using standard bulk tissue extraction protocols and subsequent HPLC separations and tandem mass spectrometric analysis. These values were used to determine propranolol extraction efficiency from the tissues with the droplet-based surface sampling approach. Results: Extraction efficiency of propranolol using 10-μm-thick brain, kidney and liver thin tissues using droplet-based surface sampling varied between ~45-63%. Extraction efficiency decreased from ~65% to ~36% with liver thickness increasing from 8 μm to 24 μm. Randomly selecting half of the samples as standards, precision and accuracy of propranolol concentrations obtained for the other half of samples as quality control metrics were determined. Resulting precision (±15%) and accuracy (±3%) values, respectively, were within acceptable limits. In conclusion, comparative quantitation of adjacent mouse thin tissue sections of different organs and of various thicknesses by droplet-based surface sampling and by bulk extraction of tissue punches showed that extraction efficiency was incomplete using the former method, and that it depended on the organ and tissue thickness. However, once extraction efficiency was determined and applied, the droplet-based approach provided the required quantitation accuracy and precision for assay validations. Furthermore, this means that once the extraction efficiency was calibrated for a given tissue type and drug, the droplet-based approach provides a non-labor intensive and high-throughput means to acquire spatially resolved quantitative analysis of multiple samples of the same type.

  1. Extraction efficiency and implications for absolute quantitation of propranolol in mouse brain, liver and kidney thin tissue sections using droplet-based liquid microjunction surface sampling-HPLC ESI-MS/MS

    DOE PAGES

    Kertesz, Vilmos; Weiskittel, Taylor M.; Vavek, Marissa; ...

    2016-06-22

    Currently, absolute quantitation aspects of droplet-based surface sampling for thin tissue analysis using a fully automated autosampler/HPLC-ESI-MS/MS system are not fully evaluated. Knowledge of extraction efficiency and its reproducibility is required to judge the potential of the method for absolute quantitation of analytes from thin tissue sections. Methods: Adjacent thin tissue sections of propranolol dosed mouse brain (10-μm-thick), kidney (10-μm-thick) and liver (8-, 10-, 16- and 24-μm-thick) were obtained. Absolute concentration of propranolol was determined in tissue punches from serial sections using standard bulk tissue extraction protocols and subsequent HPLC separations and tandem mass spectrometric analysis. These values were used to determine propranolol extraction efficiency from the tissues with the droplet-based surface sampling approach. Results: Extraction efficiency of propranolol using 10-μm-thick brain, kidney and liver thin tissues using droplet-based surface sampling varied between ~45-63%. Extraction efficiency decreased from ~65% to ~36% with liver thickness increasing from 8 μm to 24 μm. Randomly selecting half of the samples as standards, precision and accuracy of propranolol concentrations obtained for the other half of samples as quality control metrics were determined. Resulting precision (±15%) and accuracy (±3%) values, respectively, were within acceptable limits. In conclusion, comparative quantitation of adjacent mouse thin tissue sections of different organs and of various thicknesses by droplet-based surface sampling and by bulk extraction of tissue punches showed that extraction efficiency was incomplete using the former method, and that it depended on the organ and tissue thickness. However, once extraction efficiency was determined and applied, the droplet-based approach provided the required quantitation accuracy and precision for assay validations. Furthermore, this means that once the extraction efficiency was calibrated for a given tissue type and drug, the droplet-based approach provides a non-labor intensive and high-throughput means to acquire spatially resolved quantitative analysis of multiple samples of the same type.

  2. A new method for determining which stars are near a star sensor field-of-view

    NASA Technical Reports Server (NTRS)

    Yates, Russell E., Jr.; Vedder, John D.

    1991-01-01

    A new method is described for determining which stars in a navigation star catalog are near a star sensor field of view (FOV). This method assumes that an estimate of spacecraft inertial attitude is known. Vector component ranges for the star sensor FOV are computed, so that stars whose vector components lie within these ranges are near the star sensor FOV. This method requires no presorting of the navigation star catalog, and is more efficient than traditional methods.
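
    A hedged sketch of the component-range idea: because two unit vectors separated by at most the FOV half-angle θ differ in each component by at most 2 sin(θ/2), a cheap per-axis box test can reject most catalog stars before the exact dot-product test. Names such as `stars_near_fov` are illustrative, not from the report.

```python
import numpy as np

def radec_to_unit(ra, dec):
    """Catalog right ascension/declination (radians) to inertial unit vectors."""
    return np.column_stack([np.cos(dec) * np.cos(ra),
                            np.cos(dec) * np.sin(ra),
                            np.sin(dec)])

def stars_near_fov(star_vecs, boresight, half_angle):
    """Box prefilter followed by the exact angular test.

    If the angle between a star vector u and the boresight b is <= half_angle,
    then |u - b| <= 2 sin(half_angle / 2), so each component of u must lie
    within +/- 2 sin(half_angle / 2) of the corresponding boresight component.
    """
    delta = 2.0 * np.sin(half_angle / 2.0)
    in_box = np.all(np.abs(star_vecs - boresight) <= delta, axis=1)
    in_cone = star_vecs @ boresight >= np.cos(half_angle)
    return in_box & in_cone

# Example: boresight along +x, 5-degree half-angle.
stars = radec_to_unit(np.array([0.0, 0.02, 1.0]), np.array([0.0, 0.03, 0.5]))
print(stars_near_fov(stars, np.array([1.0, 0.0, 0.0]), np.radians(5.0)))
```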

  3. 40 CFR 63.694 - Testing methods and procedures.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... determine treatment process required HAP biodegradation efficiency (Rbio) for compliance with standards... procedures to minimize the loss of compounds due to volatilization, biodegradation, reaction, or sorption... compounds due to volatilization, biodegradation, reaction, or sorption during the sample collection, storage...

  4. 40 CFR 63.694 - Testing methods and procedures.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... determine treatment process required HAP biodegradation efficiency (Rbio) for compliance with standards... procedures to minimize the loss of compounds due to volatilization, biodegradation, reaction, or sorption... compounds due to volatilization, biodegradation, reaction, or sorption during the sample collection, storage...

  5. 40 CFR 63.694 - Testing methods and procedures.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... determine treatment process required HAP biodegradation efficiency (Rbio) for compliance with standards... procedures to minimize the loss of compounds due to volatilization, biodegradation, reaction, or sorption... compounds due to volatilization, biodegradation, reaction, or sorption during the sample collection, storage...

  6. Nanoshells for photothermal therapy: a Monte-Carlo based numerical study of their design tolerance

    PubMed Central

    Grosges, Thomas; Barchiesi, Dominique; Kessentini, Sameh; Gréhan, Gérard; de la Chapelle, Marc Lamy

    2011-01-01

    The optimization of coated metallic nanoparticles and nanoshells is a current challenge for biological applications, especially for cancer photothermal therapy, considering both the continuous improvement of their fabrication and the increasing requirement of efficiency. The efficiency of the coupling between illumination and such nanostructures for burning purposes depends unevenly on their geometrical parameters (radius, thickness of the shell) and material parameters (permittivities, which depend on the illumination wavelength). Through a Monte-Carlo method, we propose a numerical study of such a nanodevice to evaluate tolerances (or uncertainty) on these parameters, given a threshold of efficiency, to facilitate the design of nanoparticles. The results could help to focus the engineering process on the parameters on which the absorbed energy depends most. The Monte-Carlo method confirms that the best burning efficiencies are obtained for hollow nanospheres and exhibits the sensitivity of the absorbed electromagnetic energy as a function of each parameter. The proposed method is general and could be applied in the design and development of new embedded coated nanomaterials used in biomedical applications. PMID:21698021
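
    The tolerance-analysis loop itself is straightforward to sketch: sample the geometrical parameters around their nominal values and estimate the probability that a (here entirely hypothetical) efficiency model stays above a threshold. The Gaussian stand-in for the absorption model is illustrative only; the study evaluates the actual electromagnetic response of the coated particle.

```python
import numpy as np

rng = np.random.default_rng(1)

def absorption_efficiency(radius_nm, shell_nm):
    """Hypothetical smooth stand-in for the electromagnetic absorption model
    of a coated nanosphere (peaked at a 60 nm core with a 10 nm shell)."""
    return np.exp(-((radius_nm - 60.0) / 15.0) ** 2
                  - ((shell_nm - 10.0) / 4.0) ** 2)

# Monte-Carlo tolerance study: perturb the design parameters and estimate the
# probability that the efficiency stays above a chosen threshold.
n = 100_000
radius = rng.normal(60.0, 5.0, n)    # nominal 60 nm core, 5 nm fabrication spread
shell = rng.normal(10.0, 1.5, n)     # nominal 10 nm shell, 1.5 nm spread
eff = absorption_efficiency(radius, shell)
print("fraction above 80% of peak efficiency:", np.mean(eff > 0.8))
```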

  7. Runge-Kutta Methods for Linear Ordinary Differential Equations

    NASA Technical Reports Server (NTRS)

    Zingg, David W.; Chisholm, Todd T.

    1997-01-01

    Three new Runge-Kutta methods are presented for numerical integration of systems of linear inhomogeneous ordinary differential equations (ODEs) with constant coefficients. Such ODEs arise in the numerical solution of the partial differential equations governing linear wave phenomena. The restriction to linear ODEs with constant coefficients reduces the number of conditions which the coefficients of the Runge-Kutta method must satisfy. This freedom is used to develop methods which are more efficient than conventional Runge-Kutta methods. A fourth-order method is presented which uses only two memory locations per dependent variable, while the classical fourth-order Runge-Kutta method uses three. This method is an excellent choice for simulations of linear wave phenomena if memory is a primary concern. In addition, fifth- and sixth-order methods are presented which require five and six stages, respectively, one fewer than their conventional counterparts, and are therefore more efficient. These methods are an excellent option for use with high-order spatial discretizations.
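
    For orientation, one step of the classical fourth-order Runge-Kutta method for a linear, constant-coefficient system u' = Au + f(t) is shown below; for such systems each stage is just a matrix-vector product, which is what the reduced-storage variants described above exploit. This is the textbook scheme, not the paper's two-register method.

```python
import numpy as np

def rk4_step_linear(u, t, dt, A, f):
    """One classical RK4 step for u' = A u + f(t) (three work arrays per
    unknown in a naive implementation, versus two for the low-storage method)."""
    k1 = A @ u + f(t)
    k2 = A @ (u + 0.5 * dt * k1) + f(t + 0.5 * dt)
    k3 = A @ (u + 0.5 * dt * k2) + f(t + 0.5 * dt)
    k4 = A @ (u + dt * k3) + f(t + dt)
    return u + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Example: damped oscillator written as a first-order linear system.
A = np.array([[0.0, 1.0], [-4.0, -0.1]])
u, t, dt = np.array([1.0, 0.0]), 0.0, 0.01
for _ in range(100):
    u = rk4_step_linear(u, t, dt, A, lambda s: np.zeros(2))
    t += dt
print(u)
```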

  8. GPU-accelerated element-free reverse-time migration with Gauss points partition

    NASA Astrophysics Data System (ADS)

    Zhou, Zhen; Jia, Xiaofeng; Qiang, Xiaodong

    2018-06-01

    An element-free method (EFM) has been demonstrated successfully in elasticity, heat conduction and fatigue crack growth problems. We present the theory of EFM and its numerical applications in seismic modelling and reverse time migration (RTM). Compared with the finite difference method and the finite element method, the EFM has unique advantages: (1) independence of grids in computation and (2) lower expense and more flexibility (because only the information of the nodes and the boundary of the concerned area is required). However, in EFM, due to improper computation and storage of some large sparse matrices, such as the mass matrix and the stiffness matrix, the method is difficult to apply to seismic modelling and RTM for a large velocity model. To solve the problem of storage and computation efficiency, we propose a concept of Gauss points partition and utilise the graphics processing unit to improve the computational efficiency. We employ the compressed sparse row format to compress the intermediate large sparse matrices and attempt to simplify the operations by solving the linear equations with CULA solver. To improve the computation efficiency further, we introduce the concept of the lumped mass matrix. Numerical experiments indicate that the proposed method is accurate and more efficient than the regular EFM.
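
    The compressed sparse row (CSR) storage mentioned above can be illustrated with a toy stiffness system; the sketch below uses SciPy on the CPU purely for clarity, whereas the record assembles the matrices on the GPU and hands the solve to the CULA library.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import spsolve

# Toy 3x3 "stiffness" matrix stored in CSR form: only the nonzeros, their
# column indices, and row extents are kept, which is what makes large
# element-free systems fit in memory.
rows = np.array([0, 0, 1, 1, 1, 2, 2])
cols = np.array([0, 1, 0, 1, 2, 1, 2])
vals = np.array([4.0, -1.0, -1.0, 4.0, -1.0, -1.0, 4.0])
K = csr_matrix((vals, (rows, cols)), shape=(3, 3))

b = np.array([1.0, 2.0, 3.0])
u = spsolve(K, b)                     # sparse direct solve of K u = b
print(u, K.indptr, K.indices)         # solution plus the raw CSR arrays
```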

  9. Partition of unity finite element method for quantum mechanical materials calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pask, J. E.; Sukumar, N.

    The current state of the art for large-scale quantum-mechanical simulations is the planewave (PW) pseudopotential method, as implemented in codes such as VASP, ABINIT, and many others. However, since the PW method uses a global Fourier basis, with strictly uniform resolution at all points in space, it suffers from substantial inefficiencies in calculations involving atoms with localized states, such as first-row and transition-metal atoms, and requires significant nonlocal communications, which limit parallel efficiency. Real-space methods such as finite-differences (FD) and finite-elements (FE) have partially addressed both resolution and parallel-communications issues but have been plagued by one key disadvantage relative to PW: excessive number of degrees of freedom (basis functions) needed to achieve the required accuracies. In this paper, we present a real-space partition of unity finite element (PUFE) method to solve the Kohn–Sham equations of density functional theory. In the PUFE method, we build the known atomic physics into the solution process using partition-of-unity enrichment techniques in finite element analysis. The method developed herein is completely general, applicable to metals and insulators alike, and particularly efficient for deep, localized potentials, as occur in calculations at extreme conditions of pressure and temperature. Full self-consistent Kohn–Sham calculations are presented for LiH, involving light atoms, and CeAl, involving heavy atoms with large numbers of atomic-orbital enrichments. We find that the new PUFE approach attains the required accuracies with substantially fewer degrees of freedom, typically by an order of magnitude or more, than the PW method. As a result, we compute the equation of state of LiH and show that the computed lattice constant and bulk modulus are in excellent agreement with reference PW results, while requiring an order of magnitude fewer degrees of freedom to obtain.

  10. Partition of unity finite element method for quantum mechanical materials calculations

    DOE PAGES

    Pask, J. E.; Sukumar, N.

    2016-11-09

    The current state of the art for large-scale quantum-mechanical simulations is the planewave (PW) pseudopotential method, as implemented in codes such as VASP, ABINIT, and many others. However, since the PW method uses a global Fourier basis, with strictly uniform resolution at all points in space, it suffers from substantial inefficiencies in calculations involving atoms with localized states, such as first-row and transition-metal atoms, and requires significant nonlocal communications, which limit parallel efficiency. Real-space methods such as finite-differences (FD) and finite-elements (FE) have partially addressed both resolution and parallel-communications issues but have been plagued by one key disadvantage relative to PW: excessive number of degrees of freedom (basis functions) needed to achieve the required accuracies. In this paper, we present a real-space partition of unity finite element (PUFE) method to solve the Kohn–Sham equations of density functional theory. In the PUFE method, we build the known atomic physics into the solution process using partition-of-unity enrichment techniques in finite element analysis. The method developed herein is completely general, applicable to metals and insulators alike, and particularly efficient for deep, localized potentials, as occur in calculations at extreme conditions of pressure and temperature. Full self-consistent Kohn–Sham calculations are presented for LiH, involving light atoms, and CeAl, involving heavy atoms with large numbers of atomic-orbital enrichments. We find that the new PUFE approach attains the required accuracies with substantially fewer degrees of freedom, typically by an order of magnitude or more, than the PW method. As a result, we compute the equation of state of LiH and show that the computed lattice constant and bulk modulus are in excellent agreement with reference PW results, while requiring an order of magnitude fewer degrees of freedom to obtain.
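
    In any Galerkin basis {φ_i}, including the enriched PUFE basis described above, the Kohn–Sham problem reduces to a generalized eigenvalue problem; the notation below is standard and not specific to this paper:

```latex
\Bigl(-\tfrac{1}{2}\nabla^{2} + v_{\mathrm{eff}}(\mathbf{r})\Bigr)\psi_{k} = \varepsilon_{k}\,\psi_{k},
\qquad
\psi_{k} = \sum_{j} c_{jk}\,\varphi_{j}
\;\Longrightarrow\;
\sum_{j} H_{ij}\,c_{jk} = \varepsilon_{k}\sum_{j} S_{ij}\,c_{jk},
\qquad
H_{ij} = \int \varphi_{i}\Bigl(-\tfrac{1}{2}\nabla^{2} + v_{\mathrm{eff}}\Bigr)\varphi_{j}\,d\mathbf{r},
\quad
S_{ij} = \int \varphi_{i}\,\varphi_{j}\,d\mathbf{r}.
```

    The point of the enrichment is to reach a target accuracy with far fewer basis functions φ_i than a planewave expansion requires.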

  11. Influence of sequence and size of DNA on packaging efficiency of parvovirus MVM-based vectors.

    PubMed

    Brandenburger, A; Coessens, E; El Bakkouri, K; Velu, T

    1999-05-01

    We have derived a vector from the autonomous parvovirus MVM(p), which expresses human IL-2 specifically in transformed cells (Russell et al., J. Virol 1992;66:2821-2828). Testing the therapeutic potential of these vectors in vivo requires high-titer stocks. Stocks with a titer of 10(9) can be obtained after concentration and purification (Avalosse et al., J. Virol. Methods 1996;62:179-183), but this method requires large culture volumes and cannot easily be scaled up. We wanted to increase the production of recombinant virus at the initial transfection step. Poor vector titers could be due to inadequate genome amplification or to inefficient packaging. Here we show that intracellular amplification of MVM vector genomes is not the limiting factor for vector production. Several vector genomes of different size and/or structure were amplified to an equal extent. Their amplification was also equivalent to that of a cotransfected wild-type genome. We did not observe any interference between vector and wild-type genomes at the level of DNA amplification. Despite equivalent genome amplification, vector titers varied greatly between the different genomes, presumably owing to differences in packaging efficiency. Genomes with a size close to 100% that of wild type were packaged most efficiently with loss of efficiency at lower and higher sizes. However, certain genomes of identical size showed different packaging efficiencies, illustrating the importance of the DNA sequence, and probably its structure.

  12. Phased Array Excitations For Efficient Near Field Wireless Power Transmission

    DTIC Science & Technology

    2016-09-01

    ...relating to the improvement of wireless power transfer (WPT) in the near field. Improvement to power reception in the near field requires that excitation correction methods...

  13. Constellation Commodities Studies Summary

    NASA Technical Reports Server (NTRS)

    Dirschka, Eric

    2011-01-01

    Constellation program was NASA's long-term program for space exploration. The goal of the commodities studies was to solicit industry expertise in production, storage, and transportation required for future use and to improve efficiency and life cycle cost over legacy methods. Objectives were to consolidate KSC, CCAFS and other requirements; extract available industry expertise; identify commercial opportunities; and establish synergy with State of Florida partnerships. Study results are reviewed.

  14. Efficient Computation Of Manipulator Inertia Matrix

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1991-01-01

    Improved method for computation of manipulator inertia matrix developed, based on concept of spatial inertia of composite rigid body. Required for implementation of advanced dynamic-control schemes as well as dynamic simulation of manipulator motion. Motivated by increasing demand for fast algorithms to provide real-time control and simulation capability and, particularly, need for faster-than-real-time simulation capability, required in many anticipated space teleoperation applications.

  15. Supercomputing Aspects for Simulating Incompressible Flow

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan; Kris, Cetin C.

    2000-01-01

    The primary objective of this research is to support the design of liquid rocket systems for the Advanced Space Transportation System. Since the space launch systems in the near future are likely to rely on liquid rocket engines, increasing the efficiency and reliability of the engine components is an important task. One of the major problems in the liquid rocket engine is to understand the fluid dynamics of fuel and oxidizer flows from the fuel tank to the plume. Understanding the flow through the entire turbo-pump geometry through numerical simulation will be of significant value toward design. One of the milestones of this effort is to develop, apply and demonstrate the capability and accuracy of 3D CFD methods as efficient design analysis tools on high performance computer platforms. The development of the Message Passing Interface (MPI) and Multi Level Parallel (MLP) versions of the INS3D code is currently underway. The serial version of the INS3D code is a multidimensional incompressible Navier-Stokes solver based on overset grid technology. INS3D-MPI is based on the explicit message-passing interface across processors and is primarily suited for distributed memory systems. INS3D-MLP is based on the multi-level parallel method and is suitable for distributed-shared memory systems. For the entire turbo-pump simulations, moving boundary capability and efficient time-accurate integration methods are built into the flow solver. To handle the geometric complexity and moving boundary problems, an overset grid scheme is incorporated with the solver so that new connectivity data will be obtained at each time step. The Chimera overlapped grid scheme allows subdomains to move relative to each other, and provides great flexibility when the boundary movement creates large displacements. Two numerical procedures, one based on the artificial compressibility method and the other on the pressure projection method, are outlined for obtaining time-accurate solutions of the incompressible Navier-Stokes equations. The performance of the two methods is compared by obtaining unsteady solutions for the evolution of twin vortices behind a flat plate. Calculated results are compared with experimental and other numerical results. For an unsteady flow, which requires a small physical time step, the pressure projection method was found to be computationally efficient since it does not require any subiteration procedure. It was observed that the artificial compressibility method requires a fast convergence scheme at each physical time step in order to satisfy the incompressibility condition. This was obtained by using a GMRES-ILU(0) solver in the present computations. When a line-relaxation scheme was used, the time accuracy was degraded and time-accurate computations became very expensive.

  16. Experimental study on removals of SO2 and NO(x) using adsorption of activated carbon/microwave desorption.

    PubMed

    Ma, Shuang-Chen; Yao, Juan-Juan; Gao, Li; Ma, Xiao-Ying; Zhao, Yi

    2012-09-01

    Experimental studies on desulfurization and denitrification were carried out using activated carbon irradiated by microwave. The influences of the concentrations of nitric oxide (NO) and sulfur dioxide (SO2), and of the coexisting flue gas compositions, on the adsorption properties of activated carbon and on the efficiencies of desulfurization and denitrification were investigated. The results show that the adsorption capacity and removal efficiency of NO decrease with increasing SO2 concentration in flue gas; the adsorption capacity of NO increases slightly at first and then drops to 12.79 mg/g, and desulfurization efficiency decreases with increasing SO2 concentration. The adsorption capacity of SO2 declines with increasing O2 content in flue gas, but the adsorption capacity of NO increases, and removal efficiencies of NO and SO2 can be larger than 99%. The adsorption capacity of NO declines with increasing moisture in the flue gas, but the adsorption capacity of SO2 increases, and removal efficiencies of NO and SO2 remain relatively stable. Adsorption capacities of both NO and SO2 decrease with increasing CO2 content; efficiencies of desulfurization and denitrification increase at the beginning stage, then start to fall when CO2 content exceeds 12.4%. The mechanisms of this process are also discussed. The prominent SO2 and NOx treatment techniques in power plants are wet flue gas desulfurization (FGD) and catalytic decomposition methods such as selective catalytic reduction (SCR) or nonselective catalytic reduction (NSCR). However, these processes have difficulties in commercial application due to their high investment, the requirement of expensive catalysts and large-scale equipment, and so on. A simpler SO2 and NOx reduction method utilizing decomposition by microwave energy can be used instead. Controlling flue gas pollutants in power plants by microwave-induced decomposition, using adsorption on activated carbon with microwave desorption, can meet the requirements of environmental protection, which will become stricter in the future.

  17. International Experience in Developing Low-Emission Combustors for Land-Based, Large Gas-Turbine Units: Mitsubishi Heavy Industries' Equipment

    NASA Astrophysics Data System (ADS)

    Bulysova, L. A.; Vasil'ev, V. D.; Berne, A. L.; Gutnik, M. N.; Ageev, A. V.

    2018-05-01

    This is the second paper in a series of publications summarizing the international experience in the development of low-emission combustors (LEC) for land-based, large (above 250 MW) gas-turbine units (GTU). The purpose of this series is to generalize and analyze the approaches used by various manufacturers in designing flowpaths for fuel and air in LECs, managing fuel combustion, and controlling the fuel flow. The efficiency of advanced GTUs can be as high as 43% (with an output of 350-500 MW) while the efficiency of 600-800 MW combined-cycle units with these GTUs can attain 63.5%. These high efficiencies require a compression ratio of 20-24 and a temperature as high as 1600°C at the combustor outlet. Accordingly, the temperature in the combustion zone also rises. All the requirements for the control of harmful emissions from these GTUs are met. All the manufacturers and designers of LECs for modern GTUs encounter similar problems, such as emissions control, combustion instability, and reliable cooling of hot path parts. Methods of their elimination are different and interesting from the standpoint of science and practice. One more essential requirement is that the efficiency and environmental performance indices must be maintained irrespective of the fuel composition or heating value and also in operation at part loads below 40% of rated. This paper deals with Mitsubishi Series M701 GTUs, F, G, or J class, which have gained a good reputation in the power equipment market. A design of a burner for LECs and a control method providing stable low-emission fuel combustion are presented. The advantages and disadvantages of the use of air bypass valves installed in each liner to maintain a nearly constant air to fuel ratio within a wide range of GTU loads are described. Methods for controlling low- and high-frequency combustion instabilities are outlined. Upgrading of the cooling system for the wall of a liner and a transition piece is of great interest. Change over from effusion (or film) cooling to convective steam cooling and convective air cooling has considerably increased the GTU efficiency.

  18. Adaptive Stress Testing of Airborne Collision Avoidance Systems

    NASA Technical Reports Server (NTRS)

    Lee, Ritchie; Kochenderfer, Mykel J.; Mengshoel, Ole J.; Brat, Guillaume P.; Owen, Michael P.

    2015-01-01

    This paper presents a scalable method to efficiently search for the most likely state trajectory leading to an event given only a simulator of a system. Our approach uses a reinforcement learning formulation and solves it using Monte Carlo Tree Search (MCTS). The approach places very few requirements on the underlying system, requiring only that the simulator provide some basic controls, the ability to evaluate certain conditions, and a mechanism to control the stochasticity in the system. Access to the system state is not required, allowing the method to support systems with hidden state. The method is applied to stress test a prototype aircraft collision avoidance system to identify trajectories that are likely to lead to near mid-air collisions. We present results for both single and multi-threat encounters and discuss their relevance. Compared with direct Monte Carlo search, this MCTS method performs significantly better both in finding events and in maximizing their likelihood.
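
    The search component can be sketched, in a deliberately simplified form, as scoring stochastic rollouts by the log-likelihood of their disturbances plus a bonus when the target event occurs; the toy dynamics and the plain random search below stand in for the simulator interface and the MCTS used in the paper.

```python
import numpy as np

def toy_simulate(disturbances):
    """Hypothetical scalar dynamics driven by unit-normal disturbances; the
    'event' is the state exceeding a threshold (stand-in for a near mid-air
    collision condition)."""
    x, loglik = 0.0, 0.0
    for d in disturbances:
        loglik += -0.5 * d ** 2            # log-density of the disturbance
        x = 0.95 * x + d
        if x > 8.0:
            return True, loglik
    return False, loglik

def stress_search(simulate, n_rollouts=10000, horizon=50, seed=0):
    """Keep the most likely disturbance sequence that triggers the event."""
    rng = np.random.default_rng(seed)
    best_score, best_traj = -np.inf, None
    for _ in range(n_rollouts):
        disturbances = rng.standard_normal(horizon)
        event, loglik = simulate(disturbances)
        score = loglik + (0.0 if event else -1e6)   # heavy penalty if no event
        if score > best_score:
            best_score, best_traj = score, disturbances
    return best_traj, best_score

traj, score = stress_search(toy_simulate)
print("best score:", score)
```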

  19. Efficient storage, computation, and exposure of computer-generated holograms by electron-beam lithography.

    PubMed

    Newman, D M; Hawley, R W; Goeckel, D L; Crawford, R D; Abraham, S; Gallagher, N C

    1993-05-10

    An efficient storage format was developed for computer-generated holograms for use in electron-beam lithography. This method employs run-length encoding and Lempel-Ziv-Welch compression and succeeds in exposing holograms that were previously infeasible owing to the hologram's tremendous pattern-data file size. These holograms also require significant computation; thus the algorithm was implemented on a parallel computer, which improved performance by 2 orders of magnitude. The decompression algorithm was integrated into the Cambridge electron-beam machine's front-end processor. Although this provides much-needed ability, some hardware enhancements will be required in the future to overcome inadequacies in the current front-end processor that result in a lengthy exposure time.
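
    Run-length encoding is the simpler half of the storage format and is easy to sketch; the fragment below is generic and omits the LZW stage that the record layers on top of it.

```python
def run_length_encode(bits):
    """Collapse a binary exposure row into (value, run_length) pairs."""
    runs, current, count = [], bits[0], 0
    for b in bits:
        if b == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = b, 1
    runs.append((current, count))
    return runs

# A mostly-empty hologram row compresses to a handful of runs.
row = [0] * 500 + [1] * 20 + [0] * 480
print(run_length_encode(row))   # [(0, 500), (1, 20), (0, 480)]
```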

  20. Power processing for electric propulsion

    NASA Technical Reports Server (NTRS)

    Finke, R. C.; Herron, B. G.; Gant, G. D.

    1975-01-01

    The inclusion of electric thruster systems in spacecraft design is considered. The propulsion requirements of such spacecraft dictate a wide range of thruster power levels and operational lifetimes, which must be matched by lightweight, efficient, and reliable thruster power processing systems. Electron bombardment ion thruster requirements are presented, and the performance characteristics of present power processing systems are reviewed. Design philosophies and alternatives in areas such as inverter type, arc protection, and control methods are discussed along with future performance potentials for meeting goals in the areas of power processor weight (10 kg/kW), efficiency (approaching 92 percent), reliability (0.96 for 15,000 hr), and thermal control capability (0.3 to 5 AU).

  1. IGDS/TRAP Interface Program (ITIP). Software Design Document

    NASA Technical Reports Server (NTRS)

    Jefferys, Steve; Johnson, Wendell

    1981-01-01

    The preliminary design of the IGDS/TRAP Interface Program (ITIP) is described. The ITIP is implemented on the PDP 11/70 and interfaces directly with the Interactive Graphics Design System and the Data Management and Retrieval System. The program provides an efficient method for developing a network flow diagram. Performance requirements, operational requirements, and design requirements are discussed along with sources and types of input and destinations and types of output. Information processing functions and data base requirements are also covered.

  2. A practically unconditionally gradient stable scheme for the N-component Cahn-Hilliard system

    NASA Astrophysics Data System (ADS)

    Lee, Hyun Geun; Choi, Jeong-Whan; Kim, Junseok

    2012-02-01

    We present a practically unconditionally gradient stable conservative nonlinear numerical scheme for the N-component Cahn-Hilliard system modeling the phase separation of an N-component mixture. The scheme is based on a nonlinear splitting method and is solved by an efficient and accurate nonlinear multigrid method. The scheme allows us to convert the N-component Cahn-Hilliard system into a system of N-1 binary Cahn-Hilliard equations and significantly reduces the required computer memory and CPU time. We observe that our numerical solutions are consistent with the linear stability analysis results. We also demonstrate the efficiency of the proposed scheme with various numerical experiments.
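
    For context, the binary Cahn-Hilliard system to which each of the N-1 equations reduces has the standard form below (generic notation: M is a mobility, ε the gradient-energy coefficient, F a double-well free energy); because the phase fractions satisfy c_1 + ... + c_N = 1, only N-1 of them are independent.

```latex
\frac{\partial c}{\partial t} = \nabla \cdot \bigl( M \,\nabla \mu \bigr),
\qquad
\mu = F'(c) - \epsilon^{2}\nabla^{2} c,
\qquad
F(c) = \tfrac{1}{4}\,c^{2}\,(1-c)^{2}.
```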

  3. [Hepatic hemostasis with packing in complex abdominal traumatic lesions: indications and postoperative outcomes].

    PubMed

    Mazilu, O; Cnejevici, S; Stef, D; Istodor, A; Dabelea, C; Fluture, V

    2009-01-01

    The purpose of this study is to review our postoperative outcomes with liver packing in complex abdominal trauma. Of 76 liver trauma cases admitted for operative procedures in the Surgical Department of City Hospital Timisoara between April 1994 and September 2009, 16 cases were identified in our series as requiring liver packing. In all cases, this method was efficient, with no postoperative bleeding. At the same time, there were specific complications such as bile leak or abdominal collections. Despite a second procedure for pack removal and the possibility of specific complications, liver packing is an efficient method for severe liver trauma or complex abdominal lesions.

  4. Efficient implementation of a 3-dimensional ADI method on the iPSC/860

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van der Wijngaart, R.F.

    1993-12-31

    A comparison is made between several domain decomposition strategies for the solution of three-dimensional partial differential equations on a MIMD distributed memory parallel computer. The grids used are structured, and the numerical algorithm is ADI. Important implementation issues regarding load balancing, storage requirements, network latency, and overlap of computations and communications are discussed. Results of the solution of the three-dimensional heat equation on the Intel iPSC/860 are presented for the three most viable methods. It is found that the Bruno-Cappello decomposition delivers optimal computational speed through an almost complete elimination of processor idle time, while providing good memory efficiency.

  5. New fiber optics illumination system for application to electronics holography

    NASA Astrophysics Data System (ADS)

    Sciammarella, Cesar A.

    1995-08-01

    The practical application of electronic holography requires the use of fiber optics. The need to employ coherent fiber optics imposes restrictions on the efficient use of laser light. This paper proposes a new solution to this problem. The proposed method increases the efficiency in the use of the laser light and simplifies the interface between the laser source and the fiber optics. The paper presents the theory behind the proposed method. A discussion of the effect of the different parameters that influence the formation of interference fringes is presented. Limitations and achievable results are given. An example of application is presented.

  6. Research on liquid impact forming technology of double-layered tubes

    NASA Astrophysics Data System (ADS)

    Sun, Changying; Liu, Jianwei; Yao, Xinqi; Huang, Beixing; Li, Yuhan

    2018-03-01

    Double-layered tubes are widely used and developed in various fields because of their excellent overall performance and design flexibility. As their use grows, the requirements for forming quality, manufacturing cost and forming efficiency become more demanding, so new forming methods for double-layered tubes continue to emerge and hold great potential for the future. Liquid impact forming technology is a combination of stamping technology and hydroforming technology, and for double-layered tubes it offers substantial advantages in production cost, quality and efficiency.

  7. Computationally efficient method for Fourier transform of highly chirped pulses for laser and parametric amplifier modeling.

    PubMed

    Andrianov, Alexey; Szabo, Aron; Sergeev, Alexander; Kim, Arkady; Chvykov, Vladimir; Kalashnikov, Mikhail

    2016-11-14

    We developed an improved approach to calculate the Fourier transform of signals with arbitrary large quadratic phase which can be efficiently implemented in numerical simulations utilizing Fast Fourier transform. The proposed algorithm significantly reduces the computational cost of Fourier transform of a highly chirped and stretched pulse by splitting it into two separate transforms of almost transform limited pulses, thereby reducing the required grid size roughly by a factor of the pulse stretching. The application of our improved Fourier transform algorithm in the split-step method for numerical modeling of CPA and OPCPA shows excellent agreement with standard algorithms.

  8. Development of iterative techniques for the solution of unsteady compressible viscous flows

    NASA Technical Reports Server (NTRS)

    Sankar, Lakshmi N.; Hixon, Duane

    1991-01-01

    Efficient iterative solution methods are being developed for the numerical solution of two- and three-dimensional compressible Navier-Stokes equations. Iterative time marching methods have several advantages over classical multi-step explicit time marching schemes, and non-iterative implicit time marching schemes. Iterative schemes have better stability characteristics than non-iterative explicit and implicit schemes. Thus, the extra work required by iterative schemes can also be designed to perform efficiently on current and future generation scalable, massively parallel machines. An obvious candidate for iteratively solving the system of coupled nonlinear algebraic equations arising in CFD applications is the Newton method. Newton's method was implemented in existing finite difference and finite volume methods. Depending on the complexity of the problem, the number of Newton iterations needed per step to solve the discretized system of equations can, however, vary dramatically from a few to several hundred. Another popular approach based on the classical conjugate gradient method, known as the GMRES (Generalized Minimum Residual) algorithm, is investigated. The GMRES algorithm was used in the past by a number of researchers for solving steady viscous and inviscid flow problems with considerable success. Here, the suitability of this algorithm is investigated for solving the system of nonlinear equations that arise in unsteady Navier-Stokes solvers at each time step. Unlike the Newton method which attempts to drive the error in the solution at each and every node down to zero, the GMRES algorithm only seeks to minimize the L2 norm of the error. In the GMRES algorithm the changes in the flow properties from one time step to the next are assumed to be the sum of a set of orthogonal vectors. By choosing the number of vectors to be a reasonably small value N (between 5 and 20) the work required for advancing the solution from one time step to the next may be kept to (N+1) times that of a noniterative scheme. Many of the operations required by the GMRES algorithm such as matrix-vector multiplies, matrix additions and subtractions can all be vectorized and parallelized efficiently.
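
    As a hedged, minimal illustration of the restarted-GMRES step described above, the sketch below solves a single implicit update (I - dt J) du = r with a small Krylov basis, using a stand-in tridiagonal Jacobian; the matrix, sizes and iteration limits are placeholders, not values from the paper.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, gmres

n, dt = 200, 1e-2
J = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) * 1.0e2  # stand-in Jacobian
A = LinearOperator((n, n), matvec=lambda v: v - dt * (J @ v))  # matrix-free (I - dt J)
r = np.ones(n)                                                 # right-hand side / residual

# Restarted GMRES with a small Krylov basis, as in the N = 5-20 range discussed above.
du, info = gmres(A, r, restart=20, maxiter=50)
print("converged" if info == 0 else f"gmres returned info={info}")
```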

  9. A Method to Determine Supply Voltage of Permanent Magnet Motor at Optimal Design Stage

    NASA Astrophysics Data System (ADS)

    Matustomo, Shinya; Noguchi, So; Yamashita, Hideo; Tanimoto, Shigeya

    Permanent magnet motors (PM motors) are widely used in electrical machinery, such as air conditioners and refrigerators. In recent years, from the point of view of energy saving, it has become necessary to improve the efficiency of PM motors by optimization. However, the efficiency optimization of a PM motor involves many design variables and many constraints. In this paper, the efficiency optimization of a PM motor with many design variables is performed using voltage-driven finite element analysis with a rotating simulation of the motor and a genetic algorithm.
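
    A minimal real-coded genetic algorithm of the kind referred to above can be sketched as follows; the `motor_efficiency` function is a smooth hypothetical stand-in for the voltage-driven finite element evaluation, and the operators (tournament selection, blend crossover, Gaussian mutation) are generic choices rather than those of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def motor_efficiency(x):
    """Hypothetical stand-in for the FE-evaluated efficiency; x holds
    normalized design variables (e.g. magnet width, slot depth, ...)."""
    return 0.9 - 0.05 * np.sum((x - 0.6) ** 2, axis=-1)

pop = rng.random((40, 5))                       # 40 candidate designs, 5 variables in [0, 1]
for generation in range(100):
    fitness = motor_efficiency(pop)             # one (expensive) model call per individual
    a = rng.integers(0, 40, 40)                 # tournament selection: keep the better
    b = rng.integers(0, 40, 40)                 # of two randomly drawn individuals
    parents = pop[np.where(fitness[a] > fitness[b], a, b)]
    alpha = rng.random((40, 5))                 # blend crossover between consecutive parents
    children = alpha * parents + (1.0 - alpha) * np.roll(parents, 1, axis=0)
    children += 0.02 * rng.standard_normal((40, 5))   # Gaussian mutation
    pop = np.clip(children, 0.0, 1.0)

best = pop[np.argmax(motor_efficiency(pop))]
print("best normalized design:", best)
```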

  10. [A cost-benefit analysis of different therapeutic methods in menorrhagia].

    PubMed

    Kirschner, R

    1995-02-20

    When deciding the right forms of treatment for various medical conditions it has been usual to consider medical knowledge, norms and experience. Increasingly, economic factors and principles are being introduced by the management, in the form of health economics and pharmaco-economic analyses, enforced as budgetary cuts and demands for rationalisation and measures to increase efficiency. Economic evaluations require construction of models for analyses. We have used DRG-information, National Health reimbursements and pharmacological retail prices to make a cost-efficiency analysis of treatments of menorrhagia. The analysis showed better cost-efficiency for certain pharmacological treatments than for surgery.

  11. Palladium-Catalyzed Dehydrogenative Coupling: An Efficient Synthetic Strategy for the Construction of the Quinoline Core

    PubMed Central

    Carral-Menoyo, Asier; Ortiz-de-Elguea, Verónica; Martinez-Nunes, Mikel; Sotomayor, Nuria; Lete, Esther

    2017-01-01

    Palladium-catalyzed dehydrogenative coupling is an efficient synthetic strategy for the construction of quinoline scaffolds, a privileged structure and prevalent motif in many natural and biologically active products, in particular marine alkaloids. Thus, quinolines and 1,2-dihydroquinolines can be selectively obtained in moderate-to-good yields via intramolecular C–H alkenylation reactions by choosing the reaction conditions. This methodology provides a direct route to this type of quinoline through an efficient and atom-economical procedure, and constitutes a significant advance over existing procedures that require preactivated reaction partners. PMID:28867803

  12. Formulation of a dynamic analysis method for a generic family of hoop-mast antenna systems

    NASA Technical Reports Server (NTRS)

    Gabriele, A.; Loewy, R.

    1981-01-01

    Analytical studies of mast-cable-hoop-membrane type antennas were conducted using a transfer matrix numerical analysis approach. This method, by virtue of its specialization and the inherently easy compartmentalization of the formulation and numerical procedures, can be significantly more efficient in computer time required and in the time needed to review and interpret the results.

  13. A Comparative Study on Electronic versus Traditional Data Collection in a Special Education Setting

    ERIC Educational Resources Information Center

    Ruf, Hernan Dennis

    2012-01-01

    The purpose of the current study was to determine the efficiency of an electronic data collection method compared to a traditional paper-based method in the educational field, in terms of the accuracy of data collected and the time required to do it. In addition, data were collected to assess users' preference and system usability. The study…

  14. Detecting Lower Bounds to Quantum Channel Capacities.

    PubMed

    Macchiavello, Chiara; Sacchi, Massimiliano F

    2016-04-08

    We propose a method to detect lower bounds to quantum capacities of a noisy quantum communication channel by means of a few measurements. The method is easily implementable and does not require any knowledge about the channel. We test its efficiency by studying its performance for most well-known single-qubit noisy channels and for the generalized Pauli channel in an arbitrary finite dimension.

  15. Ceramic oxygen transport membrane array reactor and reforming method

    DOEpatents

    Kelly, Sean M.; Christie, Gervase Maxwell; Rosen, Lee J.; Robinson, Charles; Wilson, Jamie R.; Gonzalez, Javier E.; Doraswami, Uttam R.

    2016-09-27

    A commercially viable modular ceramic oxygen transport membrane reforming reactor for producing a synthesis gas that improves the thermal coupling of reactively-driven oxygen transport membrane tubes and catalyst reforming tubes required to efficiently and effectively produce synthesis gas.

  16. 77 FR 66209 - Self-Regulatory Organizations; ICE Clear Europe Limited; Notice of Filing of Proposed Rule Change...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-02

    ... trades and a separate margin calculation using a Monte Carlo simulation. The initial margin requirement... Commission process and review your comments more efficiently, please use only one method. The Commission will...

  17. Constructing Pairing-Friendly Elliptic Curves under Embedding Degree 1 for Securing Critical Infrastructures.

    PubMed

    Wang, Maocai; Dai, Guangming; Choo, Kim-Kwang Raymond; Jayaraman, Prem Prakash; Ranjan, Rajiv

    2016-01-01

    Information confidentiality is an essential requirement for cyber security in critical infrastructure. Identity-based cryptography, an increasingly popular branch of cryptography, is widely used to protect information confidentiality in the critical infrastructure sector due to the ability to compute a user's public key directly from the user's identity. However, computational requirements complicate the practical application of identity-based cryptography. In order to improve the efficiency of identity-based cryptography, this paper presents an effective method to construct pairing-friendly elliptic curves with low Hamming weight 4 under embedding degree 1. Based on an analysis of the Complex Multiplication (CM) method, the soundness of our method for calculating the characteristic of the finite field is proved. Then, three related algorithms for constructing pairing-friendly elliptic curves are put forward. Ten elliptic curves with low Hamming weight 4 under 160 bits are presented to demonstrate the utility of our approach. Finally, the evaluation also indicates that it is more efficient to compute the Tate pairing with our curves than with those of Bertoni et al.

  18. A multiresolution halftoning algorithm for progressive display

    NASA Astrophysics Data System (ADS)

    Mukherjee, Mithun; Sharma, Gaurav

    2005-01-01

    We describe and implement an algorithmic framework for memory efficient, 'on-the-fly' halftoning in a progressive transmission environment. Instead of a conventional approach which repeatedly recalls the continuous tone image from memory and subsequently halftones it for display, the proposed method achieves significant memory efficiency by storing only the halftoned image and updating it in response to additional information received through progressive transmission. Thus the method requires only a single frame-buffer of bits for storage of the displayed binary image and no additional storage is required for the contone data. The additional image data received through progressive transmission is accommodated through in-place updates of the buffer. The method is thus particularly advantageous for high resolution bi-level displays where it can result in significant savings in memory. The proposed framework is implemented using a suitable multi-resolution, multi-level modification of error diffusion that is motivated by the presence of a single binary frame-buffer. Aggregates of individual display bits constitute the multiple output levels at a given resolution. This creates a natural progression of increasing resolution with decreasing bit-depth.
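
    A hedged sketch of plain Floyd-Steinberg error diffusion, the baseline step that the multiresolution, in-place frame-buffer variant described above builds on; it is not the paper's progressive algorithm, and the test image is illustrative.

```python
# Classic error diffusion: threshold each pixel, then push the quantization
# error onto not-yet-processed neighbors with the Floyd-Steinberg weights.
import numpy as np

def floyd_steinberg(img):
    """Halftone a float image in [0, 1] to a binary image."""
    work = img.astype(float).copy()
    out = np.zeros_like(work)
    h, w = work.shape
    for y in range(h):
        for x in range(w):
            old = work[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:               work[y, x + 1]     += err * 7 / 16
            if y + 1 < h and x > 0:     work[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:               work[y + 1, x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w: work[y + 1, x + 1] += err * 1 / 16
    return out

gradient = np.tile(np.linspace(0, 1, 64), (32, 1))
print(floyd_steinberg(gradient).mean())   # mean ink coverage ~ mean gray level
```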

  19. Mesoscopic-microscopic spatial stochastic simulation with automatic system partitioning.

    PubMed

    Hellander, Stefan; Hellander, Andreas; Petzold, Linda

    2017-12-21

    The reaction-diffusion master equation (RDME) is a model that allows for efficient on-lattice simulation of spatially resolved stochastic chemical kinetics. Compared to off-lattice hard-sphere simulations with Brownian dynamics or Green's function reaction dynamics, the RDME can be orders of magnitude faster if the lattice spacing can be chosen coarse enough. However, strongly diffusion-controlled reactions mandate a very fine mesh resolution for acceptable accuracy. It is common that reactions in the same model differ in their degree of diffusion control and therefore require different degrees of mesh resolution. This renders mesoscopic simulation inefficient for systems with multiscale properties. Mesoscopic-microscopic hybrid methods address this problem by resolving the most challenging reactions with a microscale, off-lattice simulation. However, all methods to date require manual partitioning of a system, effectively limiting their usefulness as "black-box" simulation codes. In this paper, we propose a hybrid simulation algorithm with automatic system partitioning based on indirect a priori error estimates. We demonstrate the accuracy and efficiency of the method on models of diffusion-controlled networks in 3D.

  20. Constructing Pairing-Friendly Elliptic Curves under Embedding Degree 1 for Securing Critical Infrastructures

    PubMed Central

    Dai, Guangming

    2016-01-01

    Information confidentiality is an essential requirement for cyber security in critical infrastructure. Identity-based cryptography, an increasingly popular branch of cryptography, is widely used to protect information confidentiality in the critical infrastructure sector due to the ability to compute a user's public key directly from the user's identity. However, computational requirements complicate the practical application of identity-based cryptography. In order to improve the efficiency of identity-based cryptography, this paper presents an effective method to construct pairing-friendly elliptic curves with low Hamming weight 4 under embedding degree 1. Based on an analysis of the Complex Multiplication (CM) method, the soundness of our method for calculating the characteristic of the finite field is proved. Then, three related algorithms for constructing pairing-friendly elliptic curves are put forward. Ten elliptic curves with low Hamming weight 4 under 160 bits are presented to demonstrate the utility of our approach. Finally, the evaluation also indicates that it is more efficient to compute the Tate pairing with our curves than with those of Bertoni et al. PMID:27564373

  1. Simulation of silicon thin-film solar cells for oblique incident waves

    NASA Astrophysics Data System (ADS)

    Jandl, Christine; Hertel, Kai; Pflaum, Christoph; Stiebig, Helmut

    2011-05-01

    To optimize the quantum efficiency (QE) and short-circuit current density (JSC) of silicon thin-film solar cells, one has to study the behavior of sunlight in these solar cells. Simulations are an adequate and economic method to analyze the optical properties of light caused by absorption and reflection. To this end a simulation tool is developed to take several demands into account. These include the analysis of perpendicular and oblique incident waves under E-, H- and circularly polarized light. Furthermore, the topology of the nanotextured interfaces influences the efficiency and therefore also the short-circuit current density. It is well known that a rough transparent conductive oxide (TCO) layer increases the efficiency of solar cells. Therefore, it is indispensable that various roughness profiles at the interfaces of the solar cell layers can be modeled in such a way that atomic force microscope (AFM) scan data can be integrated. Numerical calculations of Maxwell's equations based on the finite integration technique (FIT) and Finite Difference Time Domain (FDTD) method are necessary to incorporate all these requirements. The simulations are performed in parallel on high performance computers (HPC) to meet the large computational requirements.
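
    A hedged one-dimensional sketch of the FDTD (Yee) leapfrog update that full three-dimensional FIT/FDTD solar-cell solvers are built on; the grid size, source, and Courant number below are illustrative and carry none of the layer or texture modeling described above.

```python
# 1D vacuum FDTD: staggered E and H fields advanced in a leapfrog loop.
import numpy as np

nz, nt = 400, 1000
S = 0.5                      # Courant number (normalized units, dt = S * dz / c)
Ex = np.zeros(nz)
Hy = np.zeros(nz - 1)

for n in range(nt):
    Hy += S * (Ex[1:] - Ex[:-1])                 # update H from curl of E
    Ex[1:-1] += S * (Hy[1:] - Hy[:-1])           # update E from curl of H
    Ex[50] += np.exp(-((n - 80) / 25.0) ** 2)    # soft Gaussian source

print("peak |Ex| after propagation:", np.abs(Ex).max())
```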

  2. Improving Quality and Reducing Waste in Allied Health Workplace Education Programs: A Pragmatic Operational Education Framework Approach.

    PubMed

    Golder, Janet; Farlie, Melanie K; Sevenhuysen, Samantha

    2016-01-01

    Efficient utilisation of education resources is required for the delivery of effective learning opportunities for allied health professionals. This study aimed to develop an education framework to support the delivery of high-quality education within existing education resources. The study was conducted in a large metropolitan health service. Homogeneous and purposive sampling methods were utilised in the Phase 1 (n=43) and Phase 2 (n=14) consultation stages. Participants included 25 allied health professionals, 22 managers, 1 educator, and 3 executives. Field notes taken during 43 semi-structured interviews and 4 focus groups were member-checked, and semantic thematic analysis methods were utilised. Framework design was informed by existing published framework development guides. The framework model contains governance, planning, delivery, and evaluation and research elements and identifies performance indicators, practice examples, and support tools for a range of stakeholders. Themes integrated into framework content include improving the quality of education and training provided, improving delivery efficiency, greater understanding of education role requirements, and workforce support for education-specific knowledge and skill development. This framework supports efficient delivery of allied health workforce education and training to the highest standard, whilst pragmatically considering current allied health education workforce demands.

  3. An efficient computational method for characterizing the effects of random surface errors on the average power pattern of reflectors

    NASA Technical Reports Server (NTRS)

    Rahmat-Samii, Y.

    1983-01-01

    Based on the works of Ruze (1966) and Vu (1969), a novel mathematical model has been developed to determine efficiently the average power pattern degradations caused by random surface errors. In this model, both nonuniform root mean square (rms) surface errors and nonuniform illumination functions are employed. In addition, the model incorporates the dependence on F/D in the construction of the solution. The mathematical foundation of the model rests on the assumption that in each prescribed annular region of the antenna, the geometrical rms surface value is known. It is shown that closed-form expressions can then be derived, which result in a very efficient computational method for the average power pattern. Detailed parametric studies are performed with these expressions to determine the effects of different random errors and illumination tapers on parameters such as gain loss and sidelobe levels. The results clearly demonstrate that as sidelobe levels decrease, their dependence on the surface rms/wavelength becomes much stronger and, for a specified tolerance level, a considerably smaller rms/wavelength is required to maintain the low sidelobes within the required bounds.
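
    A hedged sketch of the classical Ruze relation for uniform random surface error, which the model above generalizes to nonuniform rms errors and illumination tapers; the wavelength and rms values are illustrative.

```python
# Ruze gain loss: G/G0 = exp(-(4*pi*eps/lambda)^2), reported here in dB.
import numpy as np

wavelength_mm = 10.0                     # e.g. ~30 GHz operation (assumed)
rms_mm = np.array([0.1, 0.3, 0.5, 1.0])  # surface rms error values (assumed)

gain_loss_db = -10.0 * np.log10(np.exp(-(4.0 * np.pi * rms_mm / wavelength_mm) ** 2))
for eps, loss in zip(rms_mm, gain_loss_db):
    print(f"rms/lambda = {eps / wavelength_mm:.3f}  ->  gain loss = {loss:.2f} dB")
```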

  4. Site-specific gene transfer into the rat spinal cord by photomechanical waves

    NASA Astrophysics Data System (ADS)

    Ando, Takahiro; Sato, Shunichi; Toyooka, Terushige; Uozumi, Yoichi; Nawashiro, Hiroshi; Ashida, Hiroshi; Obara, Minoru

    2011-10-01

    Nonviral, site-specific gene delivery to deep tissue is required for gene therapy of a spinal cord injury. However, an efficient method satisfying these requirements has not been established. This study demonstrates efficient and targeted gene transfer into the spinal cord by using photomechanical waves (PMWs), which were generated by irradiating a black laser absorbing rubber with 532-nm nanosecond Nd:YAG laser pulses. After a solution of plasmid DNA coding for enhanced green fluorescent protein (EGFP) or luciferase was intraparenchymally injected into the spinal cord, PMWs were applied to the target site. In the PMW application group, we observed significant EGFP gene expression in the white matter and remarkably high luciferase activity only in the spinal cord segment exposed to the PMWs. We also assessed hind limb movements 24 h after the application of PMWs based on the Basso-Beattie-Bresnahan (BBB) score to evaluate the noninvasiveness of this method. Locomotor evaluation showed no significant decrease in BBB score under optimum laser irradiation conditions. These findings demonstrated that exogenous genes can be efficiently and site-selectively delivered into the spinal cord by applying PMWs without significant locomotive damage.

  5. Self-calibration method for rotating laser positioning system using interscanning technology and ultrasonic ranging.

    PubMed

    Wu, Jun; Yu, Zhijing; Zhuge, Jingchang

    2016-04-01

    A rotating laser positioning system (RLPS) is an efficient measurement method for large-scale metrology. Because multiple transmitter stations make up the measurement network, the position relationship of these stations must first be calibrated. However, with auxiliary devices such as a laser tracker and scale bar, and a complex calibration process, the traditional calibration methods greatly reduce measurement efficiency. This paper proposes a self-calibration method for the RLPS that can automatically obtain the position relationship. The method is implemented through interscanning technology using a calibration bar mounted on each transmitter station. Each bar is composed of three RLPS receivers and one ultrasonic sensor whose coordinates are known in advance. The calibration algorithm is mainly based on multiplane and distance constraints and is introduced in detail through a two-station mathematical model. Repeated experiments demonstrate that the coordinate measurement uncertainty of spatial points using this method is about 0.1 mm, and the accuracy experiments show that the average coordinate measurement deviation is about 0.3 mm compared with a laser tracker. The accuracy can meet the requirements of most applications, while the calibration efficiency is significantly improved.

  6. New Automotive Air Conditioning System Simulation Tool Developed in MATLAB/Simulink

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kiss, T.; Chaney, L.; Meyer, J.

    Further improvements in vehicle fuel efficiency require accurate evaluation of the vehicle's transient total power requirement. When operated, the air conditioning (A/C) system is the largest auxiliary load on a vehicle; therefore, accurate evaluation of the load it places on the vehicle's engine and/or energy storage system is especially important. Vehicle simulation software, such as 'Autonomie,' has been used by OEMs to evaluate vehicles' energy performance. A transient A/C simulation tool incorporated into vehicle simulation models would also provide a tool for developing more efficient A/C systems through a thorough consideration of the transient A/C system performance. The dynamic system simulation software MATLAB/Simulink was used to develop new and more efficient vehicle energy system controls. The various modeling methods used for the new simulation tool are described in detail. Comparison with measured data is provided to demonstrate the validity of the model.

  7. Thermal design of a Mars oxygen production plant

    NASA Technical Reports Server (NTRS)

    Sridhar, K. R.; Iyer, Venkatesh A.

    1991-01-01

    The optimal design of the thermal components of a system that uses carbon dioxide from the Martian atmosphere to produce oxygen for spacecraft propulsion and/or life support is discussed. The gases are pressurized, heated and passed through an electrochemical cell. Carbon dioxide is reduced to carbon monoxide and oxygen due to thermal dissociation and electrocatalysis. The oxygen thus formed is separated from the gas mixture by the electrochemical cell. The objective of the design is to optimize both the overall mass and the power consumption of the system. The analysis shows that at electrochemical cell efficiencies of about 50 percent and lower, the optimal system would require unspent carbon dioxide in the exhaust gases to be separated and recycled. Various methods of efficiently compressing the intake gases to system pressures of 0.1 MPa are investigated. The total power requirements for oxygen production rates of 1, 5, and 10 kg/day at various cell efficiencies are presented.

  8. Need for speed: An optimized gridding approach for spatially explicit disease simulations.

    PubMed

    Sellman, Stefan; Tsao, Kimberly; Tildesley, Michael J; Brommesson, Peter; Webb, Colleen T; Wennergren, Uno; Keeling, Matt J; Lindström, Tom

    2018-04-01

    Numerical models for simulating outbreaks of infectious diseases are powerful tools for informing surveillance and control strategy decisions. However, large-scale spatially explicit models can be limited by the amount of computational resources they require, which poses a problem when multiple scenarios need to be explored to provide policy recommendations. We introduce an easily implemented method that can reduce computation time in a standard Susceptible-Exposed-Infectious-Removed (SEIR) model without introducing any further approximations or truncations. It is based on a hierarchical infection process that operates on entire groups of spatially related nodes (cells in a grid) in order to efficiently filter out large volumes of susceptible nodes that would otherwise have required expensive calculations. After the filtering of the cells, only a subset of the nodes that were originally at risk are then evaluated for actual infection. The increase in efficiency is sensitive to the exact configuration of the grid, and we describe a simple method to find an estimate of the optimal configuration of a given landscape as well as a method to partition the landscape into a grid configuration. To investigate its efficiency, we compare the introduced methods to other algorithms and evaluate computation time, focusing on simulated outbreaks of foot-and-mouth disease (FMD) on the farm population of the USA, the UK and Sweden, as well as on three randomly generated populations with varying degree of clustering. The introduced method provided up to 500 times faster calculations than pairwise computation, and consistently performed as well or better than other available methods. This enables large scale, spatially explicit simulations such as for the entire continental USA without sacrificing realism or predictive power.
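
    A hedged sketch of the core idea described above (not the published algorithm): a per-cell upper bound on infection probability lets a single uniform draw discard most susceptible nodes cheaply, and only the survivors get the exact pairwise kernel evaluation, leaving the outcome statistically identical to full pairwise computation. Kernel, rates, and coordinates below are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
beta, dt, d0 = 0.02, 1.0, 2.0
kernel = lambda d: 1.0 / (1.0 + (d / d0) ** 2)

sus = rng.uniform(0, 100, size=(20000, 2))     # susceptible farm coordinates
inf = rng.uniform(0, 100, size=(30, 2))        # infectious farm coordinates

cell_size = 10.0
cell_idx = np.floor(sus / cell_size).astype(int)
cells = set(map(tuple, cell_idx))

# Upper bound per cell: every node in the cell is at least as far from each
# infectious farm as the nearest point of the cell rectangle.
p_max = {}
for cx, cy in cells:
    lo_corner = np.array([cx, cy]) * cell_size
    nearest = np.clip(inf, lo_corner, lo_corner + cell_size)
    d_min = np.linalg.norm(inf - nearest, axis=1)
    p_max[(cx, cy)] = 1.0 - np.exp(-beta * dt * kernel(d_min).sum())

exact_evals, infected = 0, 0
u = rng.random(len(sus))
for j, (x, y) in enumerate(sus):
    if u[j] >= p_max[tuple(cell_idx[j])]:
        continue                                # filtered out without exact sum
    exact_evals += 1
    d = np.linalg.norm(inf - np.array([x, y]), axis=1)
    p = 1.0 - np.exp(-beta * dt * kernel(d).sum())
    infected += u[j] < p

print(f"exact evaluations: {exact_evals}/{len(sus)}, new infections: {infected}")
```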

  9. Need for speed: An optimized gridding approach for spatially explicit disease simulations

    PubMed Central

    Tildesley, Michael J.; Brommesson, Peter; Webb, Colleen T.; Wennergren, Uno; Lindström, Tom

    2018-01-01

    Numerical models for simulating outbreaks of infectious diseases are powerful tools for informing surveillance and control strategy decisions. However, large-scale spatially explicit models can be limited by the amount of computational resources they require, which poses a problem when multiple scenarios need to be explored to provide policy recommendations. We introduce an easily implemented method that can reduce computation time in a standard Susceptible-Exposed-Infectious-Removed (SEIR) model without introducing any further approximations or truncations. It is based on a hierarchical infection process that operates on entire groups of spatially related nodes (cells in a grid) in order to efficiently filter out large volumes of susceptible nodes that would otherwise have required expensive calculations. After the filtering of the cells, only a subset of the nodes that were originally at risk are then evaluated for actual infection. The increase in efficiency is sensitive to the exact configuration of the grid, and we describe a simple method to find an estimate of the optimal configuration of a given landscape as well as a method to partition the landscape into a grid configuration. To investigate its efficiency, we compare the introduced methods to other algorithms and evaluate computation time, focusing on simulated outbreaks of foot-and-mouth disease (FMD) on the farm population of the USA, the UK and Sweden, as well as on three randomly generated populations with varying degree of clustering. The introduced method provided up to 500 times faster calculations than pairwise computation, and consistently performed as well or better than other available methods. This enables large scale, spatially explicit simulations such as for the entire continental USA without sacrificing realism or predictive power. PMID:29624574

  10. A fast marching algorithm for the factored eikonal equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Treister, Eran, E-mail: erantreister@gmail.com; Haber, Eldad, E-mail: haber@math.ubc.ca; Department of Mathematics, The University of British Columbia, Vancouver, BC

    The eikonal equation is instrumental in many applications in several fields ranging from computer vision to geoscience. This equation can be efficiently solved using the iterative Fast Sweeping (FS) methods and the direct Fast Marching (FM) methods. However, when used for a point source, the original eikonal equation is known to yield inaccurate numerical solutions, because of a singularity at the source. In this case, the factored eikonal equation is often preferred, and is known to yield a more accurate numerical solution. One application that requires the solution of the eikonal equation for point sources is travel time tomography. This inverse problem may be formulated using the eikonal equation as a forward problem. While this problem has been solved using FS in the past, the more recent choice for applying it involves FM methods because of the efficiency in which sensitivities can be obtained using them. However, while several FS methods are available for solving the factored equation, the FM method is available only for the original eikonal equation. In this paper we develop a Fast Marching algorithm for the factored eikonal equation, using both first and second order finite-difference schemes. Our algorithm follows the same lines as the original FM algorithm and requires the same computational effort. In addition, we show how to obtain sensitivities using this FM method and apply travel time tomography, formulated as an inverse factored eikonal equation. Numerical results in two and three dimensions show that our algorithm solves the factored eikonal equation efficiently, and demonstrate the achieved accuracy for computing the travel time. We also demonstrate a recovery of a 2D and 3D heterogeneous medium by travel time tomography using the eikonal equation for forward modeling and inversion by Gauss–Newton.

  11. Automatic drawing for traffic marking with MMS LIDAR intensity

    NASA Astrophysics Data System (ADS)

    Takahashi, G.; Takeda, H.; Shimano, Y.

    2014-05-01

    Upgrading the database of CYBER JAPAN has been strategically promoted since the "Basic Act on Promotion of Utilization of Geographical Information" was enacted in May 2007. In particular, there is high demand for the road information that forms a framework in this database. Therefore, road inventory mapping work has to be accurate and must eliminate variation caused by individual human operators. Further, the large number of traffic markings that are periodically maintained and possibly changed requires an efficient method for updating spatial data. Currently, we apply manual photogrammetric drawing for mapping traffic markings. However, this method is not sufficiently efficient in terms of the required productivity, and data variation can arise from individual operators. In contrast, Mobile Mapping Systems (MMS) and high-density Laser Imaging Detection and Ranging (LIDAR) scanners are rapidly gaining popularity. The aim of this study is to build an efficient method for automatically drawing traffic markings using MMS LIDAR data. The key idea in this method is extracting lines using a Hough transform strategically focused on changes in local reflection intensity along scan lines; note, however, that this method must process every traffic marking. In this paper, we discuss a highly accurate and non-human-operator-dependent method that applies the following steps: (1) binarizing LIDAR points by intensity and extracting higher intensity points; (2) generating a Triangulated Irregular Network (TIN) from the higher intensity points; (3) deleting arcs by length and generating outline polygons on the TIN; (4) generating buffers from the outline polygons; (5) extracting points from the buffers using the original LIDAR points; (6) extracting local-intensity-changing points along scan lines using the extracted points; (7) extracting lines from the intensity-changing points through a Hough transform; and (8) connecting lines to generate automated traffic marking mapping data.
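
    A hedged, bare-bones sketch of a Hough transform over a set of 2D points, of the kind used in step (7) above on the intensity-change points; the synthetic points, bin widths, and thresholds are illustrative and not the paper's tuned pipeline.

```python
# Accumulate votes in (rho, theta) space for each point and report the
# dominant line; rho = x*cos(theta) + y*sin(theta).
import numpy as np

rng = np.random.default_rng(2)
# Synthetic edge points: a noisy marking edge y = 0.5*x + 3, plus clutter.
xs = rng.uniform(0, 50, 200)
pts = np.c_[xs, 0.5 * xs + 3 + rng.normal(0, 0.1, xs.size)]
pts = np.vstack([pts, rng.uniform(0, 50, size=(60, 2))])

thetas = np.deg2rad(np.arange(0, 180, 1.0))
rho_max = np.hypot(50, 50)
rho_bins = np.arange(-rho_max, rho_max, 0.5)

acc = np.zeros((len(rho_bins), len(thetas)), dtype=int)
for x, y in pts:
    rhos = x * np.cos(thetas) + y * np.sin(thetas)
    acc[np.digitize(rhos, rho_bins) - 1, np.arange(len(thetas))] += 1

i, j = np.unravel_index(acc.argmax(), acc.shape)
print(f"dominant line: rho ~ {rho_bins[i]:.1f}, "
      f"theta ~ {np.degrees(thetas[j]):.0f} deg, votes = {acc[i, j]}")
```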

  12. Evaluation of immunoturbidimetric rheumatoid factor method from Diagam on Abbott c8000 analyzer: comparison with immunonephelemetric method.

    PubMed

    Dupuy, Anne Marie; Hurstel, Rémy; Bargnoux, Anne Sophie; Badiou, Stéphanie; Cristol, Jean Paul

    2014-01-01

    Rheumatoid factor (RF) consists of autoantibodies, and because of its heterogeneity its determination is not easy. Currently, nephelometry and ELISA methods are considered the reference methods. Due to consolidation, many laboratories have fully automated turbidimetric instruments, and specific nephelometric systems are not always available. In addition, nephelometry is more accurate, but it is time consuming and expensive and requires a specific device, resulting in lower efficiency. Turbidimetry could be an attractive alternative. The turbidimetric RF test from Diagam meets the requirements of accuracy and precision for optimal clinical use, with an acceptable measuring range, and could be an alternative for the determination of RF without the cost of a dedicated instrument, making consolidation and blood saving possible.

  13. Advanced Extra-Vehicular Activity Pressure Garment Requirements Development

    NASA Technical Reports Server (NTRS)

    Ross, Amy; Aitchison, Lindsay; Rhodes, Richard

    2015-01-01

    The NASA Johnson Space Center advanced pressure garment technology development team is addressing requirements development for exploration missions. Lessons learned from the Z-2 high fidelity prototype development have reiterated that clear low-level requirements and verification methods reduce risk to the government, improve efficiency in pressure garment design efforts, and enable the government to be a smart buyer. The expectation is to provide requirements at the specification level that are validated so that their impact on pressure garment design is understood. Additionally, the team will provide defined verification protocols for the requirements. However, in reviewing exploration space suit high level requirements there are several gaps in the team's ability to define and verify related lower level requirements. This paper addresses the efforts in requirement areas such as mobility/fit/comfort and environmental protection (dust, radiation, plasma, secondary impacts) to determine the method by which the requirements can be defined and use of those methods for verification. Gaps exist at various stages. In some cases component level work is underway, but no system level effort has begun; in other cases no effort has been initiated to close the gap. Status of on-going efforts and potential approaches to open gaps are discussed.

  14. Disk space and load time requirements for eye movement biometric databases

    NASA Astrophysics Data System (ADS)

    Kasprowski, Pawel; Harezlak, Katarzyna

    2016-06-01

    Biometric identification is a very popular area of interest nowadays. Problems with the so-called physiological methods, like fingerprint or iris recognition, have resulted in increased attention being paid to methods measuring behavioral patterns. Eye movement based biometric (EMB) identification is one of the interesting behavioral methods, and due to the intensive development of eye tracking devices it has become possible to define new methods for eye movement signal processing. Such methods should be supported by an efficient storage solution used to collect eye movement data and provide it for further analysis. The aim of the research was to check various setups enabling such a storage choice. Various aspects were taken into consideration, such as disk space usage and the time required for loading and saving the whole data set or chosen parts of it.

  15. Development of a probabilistic analysis methodology for structural reliability estimation

    NASA Technical Reports Server (NTRS)

    Torng, T. Y.; Wu, Y.-T.

    1991-01-01

    The novel probabilistic analysis method presented for the assessment of structural reliability combines fast convolution with an efficient structural reliability analysis. After identifying the most important point of a limit state, it establishes a quadratic performance function, transforms the quadratic function into a linear one, and applies fast convolution. The method is applicable to problems requiring computer-intensive structural analysis. Five illustrative examples of the method's application are given.

  16. A Coding Method for Efficient Subgraph Querying on Vertex- and Edge-Labeled Graphs

    PubMed Central

    Zhu, Lei; Song, Qinbao; Guo, Yuchen; Du, Lei; Zhu, Xiaoyan; Wang, Guangtao

    2014-01-01

    Labeled graphs are widely used to model complex data in many domains, so subgraph querying has been attracting more and more attention from researchers around the world. Unfortunately, subgraph querying is very time consuming since it involves subgraph isomorphism testing that is known to be an NP-complete problem. In this paper, we propose a novel coding method for subgraph querying that is based on Laplacian spectrum and the number of walks. Our method follows the filtering-and-verification framework and works well on graph databases with frequent updates. We also propose novel two-step filtering conditions that can filter out most false positives and prove that the two-step filtering conditions satisfy the no-false-negative requirement (no dismissal in answers). Extensive experiments on both real and synthetic graphs show that, compared with six existing counterpart methods, our method can effectively improve the efficiency of subgraph querying. PMID:24853266

  17. Electron linac for medical isotope production with improved energy efficiency and isotope recovery

    DOEpatents

    Noonan, John; Walters, Dean; Virgo, Matt; Lewellen, John

    2015-09-08

    A method and isotope linac system are provided for producing radio-isotopes and for recovering isotopes. The isotope linac is an energy recovery linac (ERL) with an electron beam being transmitted through an isotope-producing target. The electron beam energy is recollected and re-injected into an accelerating structure. The ERL provides improved efficiency with reduced power requirements and provides improved thermal management of an isotope target and an electron-to-x-ray converter.

  18. Extraction and concentration of phenolic compounds from water and sediment

    USGS Publications Warehouse

    Goldberg, Marvin C.; Weiner, Eugene R.

    1980-01-01

    Continuous liquid-liquid extractors are used to concentrate phenols at the µg l-1 level from water into dichloromethane; this is followed by Kuderna-Danish evaporative concentration and gas chromatography. The procedure requires 5 h for 18 l of sample water. Overall concentration factors around 1000 are obtained. Overall concentration efficiencies vary from 23.1 to 87.1%. Concentration efficiencies determined by a batch method suitable for sediments range from 18.9 to 73.8%. © 1980.

  19. Efficiency improvement by navigated safety inspection involving visual clutter based on the random search model.

    PubMed

    Sun, Xinlu; Chong, Heap-Yih; Liao, Pin-Chao

    2018-06-25

    Navigated inspection seeks to improve hazard identification (HI) accuracy. With tight inspection schedules, HI also requires efficiency. However, lacking a quantification of HI efficiency, navigated inspection strategies cannot be comprehensively assessed. This work aims to determine inspection efficiency in navigated safety inspection while controlling for HI accuracy. Based on a cognitive method, the random search model (RSM), an experiment was conducted to observe HI efficiency under navigation for a variety of visual clutter (VC) scenarios, while using eye-tracking devices to record the search process and analyze search performance. The results show that the RSM is an appropriate instrument, and VC serves as a hazard classifier for navigated inspection in improving inspection efficiency. This suggests a new and effective solution for addressing the low accuracy and efficiency of manual inspection through navigated inspection involving VC and the RSM. It also provides insights into inspectors' safety inspection ability.

  20. Efficient Statistically Accurate Algorithms for the Fokker-Planck Equation in Large Dimensions

    NASA Astrophysics Data System (ADS)

    Chen, N.; Majda, A.

    2017-12-01

    Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat tailed highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. Particularly, the parametric method, which is based on an effective data assimilation framework, provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace. Therefore, it is computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Different from the traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF. Therefore a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has a significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. It is shown in a stringent set of test problems that the method only requires an order of O(100) ensembles to successfully recover the highly non-Gaussian transient PDFs in up to 6 dimensions with only small errors.

  1. Improving the sludge disintegration efficiency of sonication by combining with alkalization and thermal pre-treatment methods.

    PubMed

    Şahinkaya, S; Sevimli, M F; Aygün, A

    2012-01-01

    One of the most serious problems encountered in biological wastewater treatment processes is the production of waste activated sludge (WAS). Sonication, which is an energy-intensive process, is the most powerful sludge pre-treatment method. Because information about combined pre-treatment methods involving sonication is lacking, this study investigated such combinations with the aim of improving the disintegration efficiency of sonication by combining it with alkalization and thermal pre-treatment. Process performances were evaluated based on the increases in soluble chemical oxygen demand (COD), protein, and carbohydrate. The releases of soluble COD, carbohydrate, and protein by the combined methods were higher than those by sonication, alkalization, and thermal pre-treatment alone. Degrees of sludge disintegration in the various sonication options were in the following descending order: sono-alkalization > sono-thermal pre-treatment > sonication. It was therefore determined that combining sonication with alkalization significantly improves sludge disintegration and decreases the energy required to reach the same yield as sonication alone. In addition, the effects of these methods on sludge settleability and dewaterability, and kinetic mathematical modelling of their pre-treatment performances, were investigated. It was shown that the proposed model accurately predicts the efficiencies of the ultrasonic pre-treatment methods.

  2. [Methods of substances and organelles introduction in living cell for cell engineering technologies].

    PubMed

    Nikitin, V A

    2007-01-01

    We present a classification of more than 40 methods for introducing genetic material, substances, and organelles into a living cell. Each of them has its characteristic advantages, disadvantages, and limitations with respect to cell viability, transfer efficiency, general applicability, and technical requirements. In this article we describe in more detail several new and improved approaches, methods, and devices that we have developed for direct microinjection into a single cell and for cell microsurgery with glass micropipettes. The problem of the low efficiency of mammalian cloning is discussed, with emphasis on the need to assess each step of single-cell reconstruction, beginning with the microsurgical manipulations, and on the need to develop single-cell reconstruction methods that minimize possible damage to the cell.

  3. Replica exchange with solute tempering: A method for sampling biological systems in explicit water

    NASA Astrophysics Data System (ADS)

    Liu, Pu; Kim, Byungchan; Friesner, Richard A.; Berne, B. J.

    2005-09-01

    An innovative replica exchange (parallel tempering) method called replica exchange with solute tempering (REST) for the efficient sampling of aqueous protein solutions is presented here. The method bypasses the poor scaling with system size of standard replica exchange and thus reduces the number of replicas (parallel processes) that must be used. This reduction is accomplished by deforming the Hamiltonian function for each replica in such a way that the acceptance probability for the exchange of replica configurations does not depend on the number of explicit water molecules in the system. For proof of concept, REST is compared with standard replica exchange for an alanine dipeptide molecule in water. The comparisons confirm that REST greatly reduces the number of CPUs required by regular replica exchange and increases the sampling efficiency. This method reduces the CPU time required for calculating thermodynamic averages and for the ab initio folding of proteins in explicit water.
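
    A hedged sketch of the generic replica-exchange swap test, acc = min(1, exp[(beta_i - beta_j)(E_i - E_j)]), which REST modifies by using a solute-scaled Hamiltonian so the water-water terms drop out of the exponent; the energies and temperatures below are invented.

```python
# Standard parallel-tempering exchange step between neighboring replicas.
import numpy as np

rng = np.random.default_rng(3)
k_B = 0.0019872041                 # kcal/(mol K)
temps = np.array([300.0, 330.0, 363.0, 400.0])
betas = 1.0 / (k_B * temps)
energies = np.array([-1205.0, -1192.0, -1181.0, -1166.0])   # kcal/mol, illustrative

for i in range(len(temps) - 1):
    delta = (betas[i] - betas[i + 1]) * (energies[i] - energies[i + 1])
    p_acc = min(1.0, np.exp(delta))
    if rng.random() < p_acc:
        energies[[i, i + 1]] = energies[[i + 1, i]]    # swap configurations
    print(f"pair ({i},{i+1}): acceptance probability = {p_acc:.3f}")
```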

  4. Spline Approximation of Thin Shell Dynamics

    NASA Technical Reports Server (NTRS)

    delRosario, R. C. H.; Smith, R. C.

    1996-01-01

    A spline-based method for approximating thin shell dynamics is presented here. While the method is developed in the context of the Donnell-Mushtari thin shell equations, it can be easily extended to the Byrne-Flugge-Lur'ye equations or other models for shells of revolution as warranted by applications. The primary requirements for the method include accuracy, flexibility and efficiency in smart material applications. To accomplish this, the method was designed to be flexible with regard to boundary conditions, material nonhomogeneities due to sensors and actuators, and inputs from smart material actuators such as piezoceramic patches. The accuracy of the method was also of primary concern, both to guarantee full resolution of structural dynamics and to facilitate the development of PDE-based controllers which ultimately require real-time implementation. Several numerical examples provide initial evidence demonstrating the efficacy of the method.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winkler, Jon; Booten, Chuck

    Residential building codes and voluntary labeling programs are continually increasing the energy efficiency requirements of residential buildings. Improving a building's thermal enclosure and installing energy-efficient appliances and lighting can result in significant reductions in sensible cooling loads, leading to smaller air conditioners and shorter cooling seasons. However, due to fresh air ventilation requirements and internal gains, latent cooling loads are not reduced by the same proportion. Thus, it is becoming more challenging for conventional cooling equipment to control indoor humidity at part-load cooling conditions, and using conventional cooling equipment in a non-conventional building poses the potential risk of high indoor humidity. The objective of this project was to investigate the impact the chosen design condition has on the calculated part-load cooling moisture load, and to compare calculated moisture loads and the required dehumidification capacity to whole-building simulations. Procedures for sizing whole-house supplemental dehumidification equipment have yet to be formalized; however, minor modifications to current Air Conditioning Contractors of America (ACCA) Manual J load calculation procedures are appropriate for calculating residential part-load cooling moisture loads. Though ASHRAE 1% DP design conditions are commonly used to determine the dehumidification requirements for commercial buildings, an appropriate DP design condition for residential buildings has not been investigated. Two methods for sizing supplemental dehumidification equipment were developed and tested. The first method closely followed Manual J cooling load calculations, whereas the second method made more conservative assumptions impacting both sensible and latent loads.
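
    A hedged sketch of the textbook latent-load estimate for outdoor-air ventilation, Q_latent [Btu/h] ≈ 0.68 × CFM × ΔW (grains/lb), which illustrates why latent loads do not shrink with the envelope; the airflow, indoor setpoint, and outdoor condition below are assumptions, not values from the study.

```python
# Ventilation latent load and the rough daily moisture it represents.
ventilation_cfm = 75.0        # assumed fresh-air requirement
w_indoor_grains = 64.0        # ~75 F, 50% RH indoor condition (assumed)
w_outdoor_grains = 105.0      # humid part-load outdoor condition (assumed)

q_latent_btuh = 0.68 * ventilation_cfm * (w_outdoor_grains - w_indoor_grains)
pints_per_day = q_latent_btuh * 24.0 / 1061.0 / 1.04   # Btu -> lb water -> pints
print(f"ventilation latent load ~ {q_latent_btuh:.0f} Btu/h "
      f"(~{pints_per_day:.0f} pints/day of moisture)")
```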

  6. Comments on localized and integral localized approximations in spherical coordinates

    NASA Astrophysics Data System (ADS)

    Gouesbet, Gérard; Lock, James A.

    2016-08-01

    Localized approximation procedures are efficient ways to evaluate beam shape coefficients of laser beams, and are particularly useful when other methods are ineffective or inefficient. Comments on these procedures are, however, required in order to help researchers make correct decisions concerning their use. This paper has the flavor of a short review and takes the opportunity to attract the attention of the readers to a required refinement of terminology.

  7. Factors that influence the efficiency of beef and dairy cattle recording system in Kenya: A SWOT-AHP analysis.

    PubMed

    Wasike, Chrilukovian B; Magothe, Thomas M; Kahi, Alexander K; Peters, Kurt J

    2011-01-01

    Animal recording in Kenya is characterised by erratic producer participation and high drop-out rates from the national recording scheme. This study evaluates factors influencing the efficiency of the beef and dairy cattle recording system. Factors influencing the efficiency of animal identification and registration, pedigree and performance recording, and genetic evaluation and information utilisation were generated using qualitative and participatory methods. Pairwise comparison of the factors was done using a strengths, weaknesses, opportunities and threats - analytic hierarchy process (SWOT-AHP) analysis, and priority scores expressing their relative importance to the system were calculated using the eigenvalue method. For identification and registration, and for evaluation and information utilisation, external factors had high priority scores. For pedigree and performance recording, threats and weaknesses had the highest priority scores. The strength factors could not sustain the required efficiency of the system. The weaknesses of the system predisposed it to threats. Available opportunities could be explored as interventions to restore efficiency in the system. Defensive strategies, such as reorienting the system to offer utility benefits to recording, forming symbiotic and binding collaboration between recording organisations and NARS, and developing institutions to support recording, are feasible.
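
    A hedged sketch of the eigenvalue method used in AHP to turn a pairwise-comparison matrix into priority scores; the 3x3 reciprocal matrix below is an invented example, not the study's data.

```python
# Priority weights = normalized principal eigenvector of the comparison matrix.
import numpy as np

# a[i, j] = judged importance of factor i relative to factor j (reciprocal matrix)
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                              # normalized priority scores

# Saaty consistency index: CI = (lambda_max - n) / (n - 1)
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)
print("priorities:", np.round(w, 3), " CI:", round(ci, 3))
```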

  8. GAPIT version 2: an enhanced integrated tool for genomic association and prediction

    USDA-ARS?s Scientific Manuscript database

    Most human diseases and agriculturally important traits are complex. Dissecting their genetic architecture requires continued development of innovative and powerful statistical methods. Corresponding advances in computing tools are critical to efficiently use these statistical innovations and to enh...

  9. Method for simple and rapid concentration of Zika virus particles from infected cell-culture supernatants.

    PubMed

    Richard, Vaea; Aubry, Maite

    2018-05-01

    Experimental studies on Zika virus (ZIKV) may require improvement of infectious titers in viral stocks obtained by cell culture amplification. The use of centrifugal filter devices to increase infectious titers of ZIKV from cell-culture supernatants is highlighted here. A mean gain of 2.33 ± 0.12 log10 DICT50/mL was easily and rapidly obtained with this process. This efficient method of ultrafiltration may be applied to other viruses and be useful in various experimental studies requiring high viral titers. Copyright © 2018 Elsevier B.V. All rights reserved.

  10. On eco-efficient technologies to minimize industrial water consumption

    NASA Astrophysics Data System (ADS)

    Amiri, Mohammad C.; Mohammadifard, Hossein; Ghaffari, Ghasem

    2016-07-01

    Purpose - Water scarcity will further stress available water systems and decrease the security of water in many areas. Therefore, innovative methods to minimize industrial water usage and waste production are of paramount importance in the process of extending fresh water resources, which happen to be the main life support systems in many arid regions of the world. This paper demonstrates that there are good opportunities for many industries to save water and decrease waste water in the softening process by substituting traditional methods with eco-friendly ones. The patented puffing method is an eco-efficient and viable technology for water saving and waste reduction in the lime softening process. Design/methodology/approach - The lime softening process (LSP) is very sensitive to chemical reactions. In addition, optimal monitoring not only minimizes the sludge that must be disposed of but also reduces the operating costs of water conditioning. The weakness of the current (regular) control of LSP based on chemical analysis has been demonstrated experimentally and compared with the eco-efficient puffing method. Findings - This paper demonstrates that there is a good opportunity for many industries to save water and decrease waste water in the softening process by substituting the traditional method with the puffing method, a patented eco-efficient technology. Originality/value - Details of the innovative work required to minimize industrial water usage and waste production are outlined in this paper. Employing the novel puffing method for monitoring of the lime softening process results in saving a considerable amount of water while reducing chemical sludge.

  11. A convergent diffusion and social marketing approach for disseminating proven approaches to physical activity promotion.

    PubMed

    Dearing, James W; Maibach, Edward W; Buller, David B

    2006-10-01

    Approaches from diffusion of innovations and social marketing are used here to propose efficient means to promote and enhance the dissemination of evidence-based physical activity programs. While both approaches have traditionally been conceptualized as top-down, center-to-periphery, centralized efforts at social change, their operational methods have usually differed. The operational methods of diffusion theory have a strong relational emphasis, while the operational methods of social marketing have a strong transactional emphasis. Here, we argue for a convergence of diffusion of innovation and social marketing principles to stimulate the efficient dissemination of proven-effective programs. In general terms, we are encouraging a focus on societal sectors as a logical and efficient means for enhancing the impact of dissemination efforts. This requires an understanding of complex organizations and the functional roles played by different individuals in such organizations. In specific terms, ten principles are provided for working effectively within societal sectors and enhancing user involvement in the processes of adoption and implementation.

  12. An efficient reliability algorithm for locating design point using the combination of importance sampling concepts and response surface method

    NASA Astrophysics Data System (ADS)

    Shayanfar, Mohsen Ali; Barkhordari, Mohammad Ali; Roudak, Mohammad Amin

    2017-06-01

    Monte Carlo simulation (MCS) is a useful tool for computing the probability of failure in reliability analysis. However, the large number of required random samples makes it time-consuming. The response surface method (RSM) is another common method in reliability analysis. Although RSM is widely used for its simplicity, it cannot be trusted in highly nonlinear problems due to its linear nature. In this paper, a new efficient algorithm employing the combination of importance sampling, as a class of MCS, and RSM is proposed. In the proposed algorithm, the analysis starts with importance sampling concepts and uses a presented two-step rule for updating the design point. This part finishes after a small number of samples have been generated. Then RSM starts to work using the Bucher experimental design, with the last design point and a presented effective length as the center point and radius of Bucher's approach, respectively. Through illustrative numerical examples, the simplicity and efficiency of the proposed algorithm and the effectiveness of the presented rules are shown.
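
    A hedged sketch of importance sampling for a failure probability with the sampling density recentred at a known design point, the generic idea behind the first stage described above; the limit-state function and design point are a toy example, not the paper's structural model.

```python
# Importance sampling: sample around the design point, reweight by the ratio
# of the true standard-normal density to the shifted sampling density.
import numpy as np
from scipy.stats import multivariate_normal as mvn

rng = np.random.default_rng(4)
g = lambda u: 3.0 - (u[:, 0] + u[:, 1]) / np.sqrt(2.0)   # failure when g <= 0
u_star = np.array([3.0 / np.sqrt(2.0)] * 2)              # design point of this g

n = 5000
samples = rng.normal(loc=u_star, scale=1.0, size=(n, 2))
w = mvn.pdf(samples, mean=[0, 0]) / mvn.pdf(samples, mean=u_star)
pf = np.mean((g(samples) <= 0) * w)

print(f"importance-sampling Pf ~ {pf:.2e} (exact Phi(-3) ~ 1.35e-03)")
```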

  13. Theoretical study of the accuracy of the elution by characteristic points method for bi-langmuir isotherms.

    PubMed

    Ravald, L; Fornstedt, T

    2001-01-26

    The bi-Langmuir equation has recently been proven essential for describing chiral chromatographic surfaces, and we therefore investigated the accuracy of the elution by characteristic points (ECP) method for the estimation of bi-Langmuir isotherm parameters. The ECP calculations were done on elution profiles generated by the equilibrium-dispersive model of chromatography for five different sets of bi-Langmuir parameters. The ECP method generates two different errors: (i) the error of the ECP-calculated isotherm and (ii) the model error of the fit to the ECP isotherm. Both errors decreased with increasing column efficiency. Moreover, the model error was strongly affected by the weight of the bi-Langmuir function fitted. For some bi-Langmuir compositions the error of the ECP-calculated isotherm is too large even at high column efficiencies. Guidelines are given on surface types to be avoided and on the column efficiencies and loading factors required for adequate parameter estimation with ECP.
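
    A hedged sketch of the model-fitting step whose error the study quantifies: fitting the bi-Langmuir isotherm q(C) = qs1*b1*C/(1 + b1*C) + qs2*b2*C/(1 + b2*C) to adsorption data. The synthetic "measured" data and starting guesses are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def bi_langmuir(C, qs1, b1, qs2, b2):
    # Two independent Langmuir terms, e.g. nonselective + enantioselective sites.
    return qs1 * b1 * C / (1 + b1 * C) + qs2 * b2 * C / (1 + b2 * C)

rng = np.random.default_rng(5)
C = np.linspace(0.01, 10.0, 40)                       # mobile-phase concentration
q_true = bi_langmuir(C, 50.0, 0.05, 5.0, 2.0)
q_meas = q_true * (1 + rng.normal(0, 0.01, C.size))   # 1% noise

params, _ = curve_fit(bi_langmuir, C, q_meas, p0=[40.0, 0.1, 4.0, 1.0], maxfev=10000)
print("fitted (qs1, b1, qs2, b2):", np.round(params, 3))
```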

  14. Simultaneous and rapid determination of multiple component concentrations in a Kraft liquor process stream

    DOEpatents

    Li, Jian [Marietta, GA]; Chai, Xin Sheng [Atlanta, GA]; Zhu, Junyoung [Marietta, GA]

    2008-06-24

    The present invention is a rapid method of determining the concentration of the major components in a chemical stream. The present invention is also a simple, low-cost device for determining the in-situ concentration of the major components in a chemical stream. In particular, the present invention provides a useful method for simultaneously determining the concentrations of sodium hydroxide, sodium sulfide, and sodium carbonate in aqueous kraft pulping liquors through the use of an attenuated total reflectance (ATR) tunnel flow cell or optical probe capable of producing an ultraviolet absorbance spectrum over a wavelength range of 190 to 300 nm. In addition, the present invention eliminates the need for the manual sampling and dilution previously required to generate analyzable samples. The inventive method can be used in kraft pulping operations to control white liquor causticizing efficiency, sulfate reduction efficiency in green liquor, oxidation efficiency for oxidized white liquor, and the active and effective alkali charge to kraft pulping operations.

  15. Improving Engine Efficiency Through Core Developments

    NASA Technical Reports Server (NTRS)

    Heidmann, James D.

    2011-01-01

    The NASA Environmentally Responsible Aviation (ERA) Project and Fundamental Aeronautics Projects are supporting compressor and turbine research with the goal of reducing aircraft engine fuel burn and greenhouse gas emissions. The primary goals of this work are to increase aircraft propulsion system fuel efficiency for a given mission by increasing the overall pressure ratio (OPR) of the engine while maintaining or improving aerodynamic efficiency of these components. An additional area of work involves reducing the amount of cooling air required to cool the turbine blades while increasing the turbine inlet temperature. This is complicated by the fact that the cooling air is becoming hotter due to the increases in OPR. Various methods are being investigated to achieve these goals, ranging from improved compressor three-dimensional blade designs to improved turbine cooling hole shapes and methods. Finally, a complementary effort in improving the accuracy, range, and speed of computational fluid mechanics (CFD) methods is proceeding to better capture the physical mechanisms underlying all these problems, for the purpose of improving understanding and future designs.

  16. Technical efficiency and resources allocation in university hospitals in Tehran, 2009-2012.

    PubMed

    Rezapour, Aziz; Ebadifard Azar, Farbod; Yousef Zadeh, Negar; Roumiani, YarAllah; Bagheri Faradonbeh, Saeed

    2015-01-01

    Assessment of hospitals' performance in achieving their goals is a basic necessity. Measuring the efficiency of hospitals in order to boost resource productivity in healthcare organizations is extremely important. The aim of this study was to measure technical efficiency and determine the status of resource allocation in selected university hospitals in Tehran, Iran. This study was conducted in 2012; the research population consisted of all hospitals affiliated with the Iran and Tehran universities of medical sciences. Required data, such as human and capital resource information as well as production variables (hospital outputs), were collected from the data centers of the studied hospitals. Data were analyzed using the data envelopment analysis (DEA) method with DEAP 2.1 software and the stochastic frontier analysis (SFA) method with Frontier 4.1 software. According to the DEA method, the average technical, managerial (pure), and scale efficiencies of the studied hospitals during the study period were 0.87, 0.971, and 0.907, respectively. None of the efficiency measures followed a fixed trend over the study period; all were constantly changing. In the stochastic frontier production function analysis, the technical efficiency of the studied hospitals during the study period was estimated at 0.389. This study identified the hospitals with the highest and lowest efficiency, and reference hospitals (more efficient units) were indicated for the inefficient centers. According to the findings, the hospitals that do not operate efficiently have the capacity to improve their technical efficiency by removing excess inputs without changing the level of outputs. Moreover, optimal allocation of resources in most of the studied hospitals could achieve substantial economies of scale.

  17. A fast semi-discrete Kansa method to solve the two-dimensional spatiotemporal fractional diffusion equation

    NASA Astrophysics Data System (ADS)

    Sun, HongGuang; Liu, Xiaoting; Zhang, Yong; Pang, Guofei; Garrard, Rhiannon

    2017-09-01

    Fractional-order diffusion equations (FDEs) extend classical diffusion equations by quantifying anomalous diffusion frequently observed in heterogeneous media. Real-world diffusion can be multi-dimensional, requiring efficient numerical solvers that can handle long-term memory embedded in mass transport. To address this challenge, a semi-discrete Kansa method is developed to approximate the two-dimensional spatiotemporal FDE, where the Kansa approach first discretizes the FDE, then the Gauss-Jacobi quadrature rule solves the corresponding matrix, and finally the Mittag-Leffler function provides an analytical solution for the resultant time-fractional ordinary differential equation. Numerical experiments are then conducted to check how the accuracy and convergence rate of the numerical solution are affected by the distribution mode and number of spatial discretization nodes. Applications further show that the numerical method can efficiently solve two-dimensional spatiotemporal FDE models with either a continuous or discrete mixing measure. Hence this study provides an efficient and fast computational method for modeling super-diffusive, sub-diffusive, and mixed diffusive processes in large, two-dimensional domains with irregular shapes.
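
    The record above relies on radial-basis-function (Kansa-type) collocation on scattered nodes. The sketch below shows only that basic RBF machinery, building a multiquadric collocation matrix and interpolating a test function on scattered 2-D nodes; the fractional operators, Gauss-Jacobi quadrature, and Mittag-Leffler step of the paper are not reproduced, and the node count and shape parameter are illustrative assumptions.

```python
# Minimal sketch of the RBF (Kansa-type) machinery on scattered 2-D nodes:
# build a multiquadric collocation matrix and interpolate a test function.
import numpy as np

rng = np.random.default_rng(1)
nodes = rng.uniform(0.0, 1.0, size=(200, 2))          # scattered collocation nodes
c = 0.2                                               # multiquadric shape parameter (assumed)

def mq(r):
    return np.sqrt(r**2 + c**2)                       # multiquadric basis function

# Pairwise distances and the interpolation system A w = f
r = np.linalg.norm(nodes[:, None, :] - nodes[None, :, :], axis=-1)
A = mq(r)
f = np.sin(np.pi * nodes[:, 0]) * np.cos(np.pi * nodes[:, 1])
w = np.linalg.solve(A, f)

# Evaluate the RBF expansion at a new point
x_new = np.array([0.3, 0.7])
phi = mq(np.linalg.norm(nodes - x_new, axis=1))
print("interpolated:", phi @ w, " exact:", np.sin(np.pi * 0.3) * np.cos(np.pi * 0.7))
```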

  18. Feature extraction using first and second derivative extrema (FSDE) for real-time and hardware-efficient spike sorting.

    PubMed

    Paraskevopoulou, Sivylla E; Barsakcioglu, Deren Y; Saberi, Mohammed R; Eftekhar, Amir; Constandinou, Timothy G

    2013-04-30

    Next generation neural interfaces aspire to achieve real-time multi-channel systems by integrating spike sorting on chip to overcome limitations in communication channel capacity. The feasibility of this approach relies on developing highly efficient algorithms for feature extraction and clustering with the potential of low-power hardware implementation. We propose a feature extraction method, requiring no calibration, based on the first and second derivatives of the spike waveform. The accuracy and computational complexity of the proposed method are quantified and compared against commonly used feature extraction methods, through simulation across four datasets (with different single units) at multiple noise levels (ranging from 5 to 20% of the signal amplitude). The average classification error is shown to be below 7% with a computational complexity of 2N-3, where N is the number of sample points of each spike. Overall, this method presents a good trade-off between accuracy and computational complexity and is thus particularly well-suited for hardware-efficient implementation. Copyright © 2013 Elsevier B.V. All rights reserved.
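
    A minimal sketch of the derivative-extrema idea described above: the first and second discrete derivatives of a spike cost N-1 and N-2 subtractions (2N-3 in total), and their extrema serve as compact features for clustering. The exact feature set of the paper may differ; the array sizes and random spikes below are assumptions.

```python
# Rough sketch of derivative-extrema feature extraction for spike waveforms.
import numpy as np

def fsde_features(spike):
    d1 = np.diff(spike)          # first derivative, N-1 subtractions
    d2 = np.diff(d1)             # second derivative, N-2 subtractions
    return np.array([d1.max(), d1.min(), d2.max(), d2.min()])

# Example: features for a batch of spikes, ready for clustering
spikes = np.random.default_rng(0).normal(size=(100, 64))   # 100 spikes, N = 64 samples
features = np.vstack([fsde_features(s) for s in spikes])
print(features.shape)            # (100, 4)
```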

  19. Compact Water Vapor Exchanger for Regenerative Life Support Systems

    NASA Technical Reports Server (NTRS)

    Izenson, Michael G.; Chen, Weibo; Anderson, Molly; Hodgson, Edward

    2012-01-01

    Thermal and environmental control systems for future exploration spacecraft must meet challenging requirements for efficient operation and conservation of resources. Regenerative CO2 removal systems are attractive for these missions because they do not use consumable CO2 absorbers. However, these systems also absorb and vent water to space along with carbon dioxide. This paper describes an innovative device designed to minimize water lost from regenerative CO2 control systems. Design studies and proof-of-concept testing have shown the feasibility of a compact, efficient membrane water vapor exchanger (WVX) that will conserve water while meeting challenging requirements for operation on future spacecraft. Compared to conventional WVX designs, the innovative membrane WVX described here has the potential for high water recovery efficiency, compact size, and very low pressure losses. The key innovation is a method for maintaining highly uniform flow channels in a WVX core built from water-permeable membranes. The proof-of-concept WVX incorporates all the key design features of a prototypical unit, except that it is relatively small scale (1/23 relative to a unit sized for a crew of six) and some components were fabricated using non-prototypical methods. The proof-of-concept WVX achieved over 90% water recovery efficiency in a compact core in good agreement with analysis models. Furthermore the overall pressure drop is very small (less than 0.5 in. H2O, total for both flow streams) and meets requirements for service in environmental control and life support systems on future spacecraft. These results show that the WVX provides very uniform flow through flow channels for both the humid and dry streams. Measurements also show that CO2 diffusion through the water-permeable membranes will have negligible effect on the CO2 partial pressure in the spacecraft atmosphere.

  20. Web-based oil immersion whole slide imaging increases efficiency and clinical team satisfaction in hematopathology tumor board

    PubMed Central

    Chen, Zhongchuan Will; Kohan, Jessica; Perkins, Sherrie L.; Hussong, Jerry W.; Salama, Mohamed E.

    2014-01-01

    Background: Whole slide imaging (WSI) is widely used for education and research, but is increasingly being used to streamline clinical workflow. We present our experience with regard to satisfaction and time utilization using oil immersion WSI for presentation of blood/marrow aspirate smears, core biopsies, and tissue sections in hematology/oncology tumor board/treatment planning conferences (TPC). Methods: Lymph nodes and bone marrow core biopsies were scanned at ×20 magnification and blood/marrow smears at ×83 under oil immersion and uploaded to an online library, with areas of interest to be displayed annotated digitally via web browser. Pathologist time required to prepare slides for scanning was compared to that required to prepare for microscope projection (MP). Time required to present cases during TPC was also compared. A 10-point evaluation survey was used to assess clinician satisfaction with each presentation method. Results: There was no significant difference in hematopathologist preparation time between WSI and MP. However, presentation time was significantly less for WSI compared to MP, as selection and annotation of slides were done prior to TPC with WSI, enabling more efficient use of TPC presentation time. Survey results showed a significant increase in satisfaction by clinical attendees with regard to image quality, efficiency of presentation of pertinent findings, aid in clinical decision-making, and overall satisfaction regarding pathology presentation. A majority of respondents also noted decreased motion sickness with WSI. Conclusions: Whole slide imaging, particularly with the ability to use oil scanning, provides higher quality images compared to MP and significantly increases clinician satisfaction. WSI streamlines preparation for TPC by permitting prior slide selection, resulting in greater efficiency during TPC presentation. PMID:25379347

  1. Computer-based learning: interleaving whole and sectional representation of neuroanatomy.

    PubMed

    Pani, John R; Chariker, Julia H; Naaz, Farah

    2013-01-01

    The large volume of material to be learned in biomedical disciplines requires optimizing the efficiency of instruction. In prior work with computer-based instruction of neuroanatomy, it was relatively efficient for learners to master whole anatomy and then transfer to learning sectional anatomy. It may, however, be more efficient to continuously integrate learning of whole and sectional anatomy. A study of computer-based learning of neuroanatomy was conducted to compare a basic transfer paradigm for learning whole and sectional neuroanatomy with a method in which the two forms of representation were interleaved (alternated). For all experimental groups, interactive computer programs supported an approach to instruction called adaptive exploration. Each learning trial consisted of time-limited exploration of neuroanatomy, self-timed testing, and graphical feedback. The primary result of this study was that interleaved learning of whole and sectional neuroanatomy was more efficient than the basic transfer method, without cost to long-term retention or generalization of knowledge to recognizing new images (Visible Human and MRI). Copyright © 2012 American Association of Anatomists.

  2. An efficient algorithm to compute marginal posterior genotype probabilities for every member of a pedigree with loops

    PubMed Central

    2009-01-01

    Background Marginal posterior genotype probabilities need to be computed for genetic analyses such as genetic counseling in humans and selective breeding in animal and plant species. Methods In this paper, we describe a peeling-based, deterministic, exact algorithm to efficiently compute genotype probabilities for every member of a pedigree with loops without recourse to junction-tree methods from graph theory. The efficiency in computing the likelihood by peeling comes from storing intermediate results in multidimensional tables called cutsets. Computing marginal genotype probabilities for individual i requires recomputing the likelihood for each of the possible genotypes of individual i. This can be done efficiently by storing intermediate results in two types of cutsets called anterior and posterior cutsets and reusing these intermediate results to compute the likelihood. Examples A small example is used to illustrate the theoretical concepts discussed in this paper, and marginal genotype probabilities are computed at a monogenic disease locus for every member in a real cattle pedigree. PMID:19958551

  3. Multiscale Methods for Accurate, Efficient, and Scale-Aware Models of the Earth System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldhaber, Steve; Holland, Marika

    The major goal of this project was to contribute improvements to the infrastructure of an Earth System Model in order to support research in the Multiscale Methods for Accurate, Efficient, and Scale-Aware models of the Earth System project. In support of this, the NCAR team accomplished two main tasks: improving input/output performance of the model and improving atmospheric model simulation quality. Improvement of the performance and scalability of data input and diagnostic output within the model required a new infrastructure which can efficiently handle the unstructured grids common in multiscale simulations. This allows for a more computationally efficient model, enabling more years of Earth System simulation. The quality of the model simulations was improved by reducing grid-point noise in the spectral element version of the Community Atmosphere Model (CAM-SE). This was achieved by running the physics of the model using grid-cell data on a finite-volume grid.

  4. Computer-Based Learning: Interleaving Whole and Sectional Representation of Neuroanatomy

    PubMed Central

    Pani, John R.; Chariker, Julia H.; Naaz, Farah

    2015-01-01

    The large volume of material to be learned in biomedical disciplines requires optimizing the efficiency of instruction. In prior work with computer-based instruction of neuroanatomy, it was relatively efficient for learners to master whole anatomy and then transfer to learning sectional anatomy. It may, however, be more efficient to continuously integrate learning of whole and sectional anatomy. A study of computer-based learning of neuroanatomy was conducted to compare a basic transfer paradigm for learning whole and sectional neuroanatomy with a method in which the two forms of representation were interleaved (alternated). For all experimental groups, interactive computer programs supported an approach to instruction called adaptive exploration. Each learning trial consisted of time-limited exploration of neuroanatomy, self-timed testing, and graphical feedback. The primary result of this study was that interleaved learning of whole and sectional neuroanatomy was more efficient than the basic transfer method, without cost to long-term retention or generalization of knowledge to recognizing new images (Visible Human and MRI). PMID:22761001

  5. A new desorption method for removing organic solvents from activated carbon using surfactant

    PubMed Central

    Hinoue, Mitsuo; Ishimatsu, Sumiyo; Fueta, Yukiko; Hori, Hajime

    2017-01-01

    Objectives: A new desorption method was investigated, which does not require toxic organic solvents. Efficient desorption of organic solvents from activated carbon was achieved with an anionic surfactant solution, focusing on its washing and emulsifying action. Methods: Isopropyl alcohol (IPA) and methyl ethyl ketone (MEK) were used as test solvents. Lauryl benzene sulfonic acid sodium salt (LAS) and sodium dodecyl sulfate (SDS) were used as the surfactants. Activated carbon (100 mg) was placed in a vial and a predetermined amount of organic solvent was added. After leaving for about 24 h, a predetermined amount of the surfactant solution was added. After leaving for another 72 h, the vial was heated in an incubator at 60°C for a predetermined time. The organic vapor concentration was then determined with a flame ionization detector (FID)-gas chromatograph and the desorption efficiency was calculated. Results: A high desorption efficiency was obtained with a 10% surfactant solution (LAS 8%, SDS 2%), 5 ml desorption solution, 60°C desorption temperature, and a desorption time of over 24 h; the desorption efficiency was 72% for IPA and 9% for MEK. Under identical conditions, the desorption efficiencies for another five organic solvents were investigated, which were 36%, 3%, 32%, 2%, and 3% for acetone, ethyl acetate, dichloromethane, toluene, and m-xylene, respectively. Conclusions: A combination of two anionic surfactants exhibited a relatively high desorption efficiency for IPA. For toluene, the desorption efficiency was low due to poor detergency and emulsification power. PMID:28132972

  6. Reducing the computational footprint for real-time BCPNN learning

    PubMed Central

    Vogginger, Bernhard; Schüffny, René; Lansner, Anders; Cederström, Love; Partzsch, Johannes; Höppner, Sebastian

    2015-01-01

    The implementation of synaptic plasticity in neural simulation or neuromorphic hardware is usually very resource-intensive, often requiring a compromise between efficiency and flexibility. A versatile, but computationally-expensive plasticity mechanism is provided by the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm. Building upon Bayesian statistics, and having clear links to biological plasticity processes, the BCPNN learning rule has been applied in many fields, ranging from data classification, associative memory, reward-based learning, probabilistic inference to cortical attractor memory networks. In the spike-based version of this learning rule the pre-, postsynaptic and coincident activity is traced in three low-pass-filtering stages, requiring a total of eight state variables, whose dynamics are typically simulated with the fixed step size Euler method. We derive analytic solutions allowing an efficient event-driven implementation of this learning rule. Further speedup is achieved by first rewriting the model which reduces the number of basic arithmetic operations per update to one half, and second by using look-up tables for the frequently calculated exponential decay. Ultimately, in a typical use case, the simulation using our approach is more than one order of magnitude faster than with the fixed step size Euler method. Aiming for a small memory footprint per BCPNN synapse, we also evaluate the use of fixed-point numbers for the state variables, and assess the number of bits required to achieve same or better accuracy than with the conventional explicit Euler method. All of this will allow a real-time simulation of a reduced cortex model based on BCPNN in high performance computing. More important, with the analytic solution at hand and due to the reduced memory bandwidth, the learning rule can be efficiently implemented in dedicated or existing digital neuromorphic hardware. PMID:25657618
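
    The event-driven idea mentioned above can be illustrated with a single generic low-pass trace: between spikes the trace decays analytically, so it only needs to be updated when an event arrives rather than at every fixed Euler step. This is a hedged sketch, not the full eight-variable BCPNN rule; the time constant and spike times are assumed values.

```python
# Illustrative sketch of event-driven trace updating (not the full BCPNN rule).
import math

tau = 20.0          # time constant in ms (assumed)
z, t_last = 0.0, 0.0

def on_spike(t_spike, increment=1.0):
    """Advance the trace analytically to t_spike, then add the spike increment."""
    global z, t_last
    z = z * math.exp(-(t_spike - t_last) / tau) + increment
    t_last = t_spike

for t in [5.0, 12.0, 40.0]:      # presynaptic spike times in ms (assumed)
    on_spike(t)
    print(f"t = {t:5.1f} ms, trace z = {z:.4f}")
```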

  7. Reducing the computational footprint for real-time BCPNN learning.

    PubMed

    Vogginger, Bernhard; Schüffny, René; Lansner, Anders; Cederström, Love; Partzsch, Johannes; Höppner, Sebastian

    2015-01-01

    The implementation of synaptic plasticity in neural simulation or neuromorphic hardware is usually very resource-intensive, often requiring a compromise between efficiency and flexibility. A versatile, but computationally-expensive plasticity mechanism is provided by the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm. Building upon Bayesian statistics, and having clear links to biological plasticity processes, the BCPNN learning rule has been applied in many fields, ranging from data classification, associative memory, reward-based learning, probabilistic inference to cortical attractor memory networks. In the spike-based version of this learning rule the pre-, postsynaptic and coincident activity is traced in three low-pass-filtering stages, requiring a total of eight state variables, whose dynamics are typically simulated with the fixed step size Euler method. We derive analytic solutions allowing an efficient event-driven implementation of this learning rule. Further speedup is achieved by first rewriting the model which reduces the number of basic arithmetic operations per update to one half, and second by using look-up tables for the frequently calculated exponential decay. Ultimately, in a typical use case, the simulation using our approach is more than one order of magnitude faster than with the fixed step size Euler method. Aiming for a small memory footprint per BCPNN synapse, we also evaluate the use of fixed-point numbers for the state variables, and assess the number of bits required to achieve same or better accuracy than with the conventional explicit Euler method. All of this will allow a real-time simulation of a reduced cortex model based on BCPNN in high performance computing. More important, with the analytic solution at hand and due to the reduced memory bandwidth, the learning rule can be efficiently implemented in dedicated or existing digital neuromorphic hardware.

  8. Efficient free energy calculations by combining two complementary tempering sampling methods.

    PubMed

    Xie, Liangxu; Shen, Lin; Chen, Zhe-Ning; Yang, Mingjun

    2017-01-14

    Although energy barriers can be efficiently crossed in the reaction coordinate (RC) guided sampling, this type of method suffers from identification of the correct RCs or requirements of high dimensionality of the defined RCs for a given system. If only the approximate RCs with significant barriers are used in the simulations, hidden energy barriers with small to medium height would exist in other degrees of freedom (DOFs) relevant to the target process and consequently cause the problem of insufficient sampling. To address the sampling in this so-called hidden barrier situation, here we propose an effective approach to combine temperature accelerated molecular dynamics (TAMD), an efficient RC-guided sampling method, with the integrated tempering sampling (ITS), a generalized ensemble sampling method. In this combined ITS-TAMD method, the sampling along the major RCs with high energy barriers is guided by TAMD and the sampling of the rest of the DOFs with lower but not negligible barriers is enhanced by ITS. The performance of ITS-TAMD to three systems in the processes with hidden barriers has been examined. In comparison to the standalone TAMD or ITS approach, the present hybrid method shows three main improvements. (1) Sampling efficiency can be improved at least five times even if in the presence of hidden energy barriers. (2) The canonical distribution can be more accurately recovered, from which the thermodynamic properties along other collective variables can be computed correctly. (3) The robustness of the selection of major RCs suggests that the dimensionality of necessary RCs can be reduced. Our work shows more potential applications of the ITS-TAMD method as the efficient and powerful tool for the investigation of a broad range of interesting cases.

  9. Efficient free energy calculations by combining two complementary tempering sampling methods

    NASA Astrophysics Data System (ADS)

    Xie, Liangxu; Shen, Lin; Chen, Zhe-Ning; Yang, Mingjun

    2017-01-01

    Although energy barriers can be efficiently crossed in the reaction coordinate (RC) guided sampling, this type of method suffers from identification of the correct RCs or requirements of high dimensionality of the defined RCs for a given system. If only the approximate RCs with significant barriers are used in the simulations, hidden energy barriers with small to medium height would exist in other degrees of freedom (DOFs) relevant to the target process and consequently cause the problem of insufficient sampling. To address the sampling in this so-called hidden barrier situation, here we propose an effective approach to combine temperature accelerated molecular dynamics (TAMD), an efficient RC-guided sampling method, with the integrated tempering sampling (ITS), a generalized ensemble sampling method. In this combined ITS-TAMD method, the sampling along the major RCs with high energy barriers is guided by TAMD and the sampling of the rest of the DOFs with lower but not negligible barriers is enhanced by ITS. The performance of ITS-TAMD to three systems in the processes with hidden barriers has been examined. In comparison to the standalone TAMD or ITS approach, the present hybrid method shows three main improvements. (1) Sampling efficiency can be improved at least five times even if in the presence of hidden energy barriers. (2) The canonical distribution can be more accurately recovered, from which the thermodynamic properties along other collective variables can be computed correctly. (3) The robustness of the selection of major RCs suggests that the dimensionality of necessary RCs can be reduced. Our work shows more potential applications of the ITS-TAMD method as the efficient and powerful tool for the investigation of a broad range of interesting cases.

  10. A Two-Stage Estimation Method for Random Coefficient Differential Equation Models with Application to Longitudinal HIV Dynamic Data.

    PubMed

    Fang, Yun; Wu, Hulin; Zhu, Li-Xing

    2011-07-01

    We propose a two-stage estimation method for random coefficient ordinary differential equation (ODE) models. A maximum pseudo-likelihood estimator (MPLE) is derived based on a mixed-effects modeling approach and its asymptotic properties for population parameters are established. The proposed method does not require repeatedly solving ODEs, and is computationally efficient although it does pay a price with the loss of some estimation efficiency. However, the method does offer an alternative approach when the exact likelihood approach fails due to model complexity and high-dimensional parameter space, and it can also serve as a method to obtain the starting estimates for more accurate estimation methods. In addition, the proposed method does not need to specify the initial values of state variables and preserves all the advantages of the mixed-effects modeling approach. The finite sample properties of the proposed estimator are studied via Monte Carlo simulations and the methodology is also illustrated with application to an AIDS clinical data set.

  11. The efficiency frontier approach to economic evaluation of health-care interventions.

    PubMed

    Caro, J Jaime; Nord, Erik; Siebert, Uwe; McGuire, Alistair; McGregor, Maurice; Henry, David; de Pouvourville, Gérard; Atella, Vincenzo; Kolominsky-Rabas, Peter

    2010-10-01

    IQWiG commissioned an international panel of experts to develop methods for the assessment of the relation of benefits to costs in the German statutory health-care system. The panel recommended that IQWiG inform German decision makers of the net costs and value of additional benefits of an intervention in the context of relevant other interventions in that indication. To facilitate guidance regarding maximum reimbursement, this information is presented in an efficiency plot with costs on the horizontal axis and value of benefits on the vertical. The efficiency frontier links the interventions that are not dominated and provides guidance. A technology that places on the frontier or to the left is reasonably efficient, while one falling to the right requires further justification for reimbursement at that price. This information does not automatically give the maximum reimbursement, as other considerations may be relevant. Given that the estimates are for a specific indication, they do not address priority setting across the health-care system. This approach informs decision makers about efficiency of interventions, conforms to the mandate and is consistent with basic economic principles. Empirical testing of its feasibility and usefulness is required.
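
    The frontier construction described above can be sketched in a few lines: interventions are plotted as (cost, value-of-benefit) points and the non-dominated ones are linked. The data below are invented purely for illustration and do not reflect any IQWiG assessment.

```python
# Hedged sketch: find the interventions on the efficiency frontier.
interventions = {
    "A": (100.0, 2.0),
    "B": (250.0, 3.5),
    "C": (300.0, 3.0),   # dominated by B (costs more, yields less benefit)
    "D": (500.0, 5.0),
}

def frontier(points):
    """Return interventions on the efficiency frontier (non-dominated in cost and benefit)."""
    ordered = sorted(points.items(), key=lambda kv: kv[1][0])  # sort by cost
    out, best_benefit = [], float("-inf")
    for name, (cost, benefit) in ordered:
        if benefit > best_benefit:       # keep only points that improve on all cheaper ones
            out.append(name)
            best_benefit = benefit
    return out

print(frontier(interventions))   # ['A', 'B', 'D']
```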

  12. hp-Adaptive time integration based on the BDF for viscous flows

    NASA Astrophysics Data System (ADS)

    Hay, A.; Etienne, S.; Pelletier, D.; Garon, A.

    2015-06-01

    This paper presents a procedure based on the Backward Differentiation Formulas of order 1 to 5 to obtain efficient time integration of the incompressible Navier-Stokes equations. The adaptive algorithm performs both stepsize and order selections to control, respectively, the solution accuracy and the computational efficiency of the time integration process. The stepsize selection (h-adaptivity) is based on a local error estimate and an error controller to guarantee that the numerical solution accuracy is within a user prescribed tolerance. The order selection (p-adaptivity) relies on the idea that low-accuracy solutions can be computed efficiently by low order time integrators while accurate solutions require high order time integrators to keep computational time low. The selection is based on a stability test that detects growing numerical noise and deems a method of order p stable if there is no method of lower order that delivers the same solution accuracy for a larger stepsize. Hence, it guarantees both that (1) the used method of integration operates inside of its stability region and (2) the time integration procedure is computationally efficient. The proposed time integration procedure also features time-step rejection and quarantine mechanisms, a modified Newton method with a predictor, and dense output techniques to compute the solution at off-step points.
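
    The stepsize-selection step (h-adaptivity) described above can be illustrated with the elementary controller below, which scales the step by (tol/err)^(1/(p+1)) subject to a safety factor and clipping. The paper's actual error controller is more elaborate; the constants here are common textbook choices rather than the authors' values.

```python
# Minimal sketch of an elementary stepsize controller for a method of order p.
def next_stepsize(h, err, tol, p, safety=0.9, fac_min=0.2, fac_max=5.0):
    """Return the new stepsize given a local error estimate err and tolerance tol."""
    factor = safety * (tol / max(err, 1e-16)) ** (1.0 / (p + 1))
    return h * min(fac_max, max(fac_min, factor))

print(next_stepsize(h=0.01, err=2e-5, tol=1e-6, p=2))
```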

  13. An Optimal Bahadur-Efficient Method in Detection of Sparse Signals with Applications to Pathway Analysis in Sequencing Association Studies.

    PubMed

    Dai, Hongying; Wu, Guodong; Wu, Michael; Zhi, Degui

    2016-01-01

    Next-generation sequencing data pose a severe curse of dimensionality, complicating traditional "single marker-single trait" analysis. We propose a two-stage combined p-value method for pathway analysis. The first stage is at the gene level, where we integrate effects within a gene using the Sequence Kernel Association Test (SKAT). The second stage is at the pathway level, where we perform a correlated Lancaster procedure to detect joint effects from multiple genes within a pathway. We show that the Lancaster procedure is optimal in Bahadur efficiency among all combined p-value methods. The Bahadur efficiency, [Formula: see text], compares sample sizes among different statistical tests when signals become sparse in sequencing data, i.e., ε → 0. The optimal Bahadur efficiency ensures that the Lancaster procedure asymptotically requires a minimal sample size to detect sparse signals ([Formula: see text]). The Lancaster procedure can also be applied to meta-analysis. Extensive empirical assessments of exome sequencing data show that the proposed method outperforms Gene Set Enrichment Analysis (GSEA). We applied the competitive Lancaster procedure to meta-analysis data generated by the Global Lipids Genetics Consortium to identify pathways significantly associated with high-density lipoprotein cholesterol, low-density lipoprotein cholesterol, triglycerides, and total cholesterol.
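
    For orientation, the sketch below shows a combined p-value test: Fisher's method, which is the equal-weight special case of the Lancaster procedure. The weighted, correlation-adjusted Lancaster statistic used in the paper is not reproduced, and the gene-level p-values are invented.

```python
# Sketch of a combined p-value test (Fisher's method, a special case of Lancaster's).
import math
from scipy.stats import chi2

def fisher_combined(pvalues):
    """Combine independent p-values: T = sum(-2 ln p_i) ~ chi^2 with 2k degrees of freedom."""
    t = sum(-2.0 * math.log(p) for p in pvalues)
    return chi2.sf(t, df=2 * len(pvalues))

# Gene-level p-values within one pathway (illustrative numbers)
print(fisher_combined([0.04, 0.20, 0.07]))
```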

  14. 40 CFR Appendix A to Subpart Dddd... - Alternative Procedure To Determine Capture Efficiency From Enclosures Around Hot Presses in the...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... of average velocity during the run and using these data, in conjunction with the pre- and post-test..., you may choose to follow the post-test calibration procedures of Method 320 in appendix A to 40 CFR... of hazardous air pollutants during the press process. This test method requires a minimum of three...

  15. Solving Ordinary Differential Equations

    NASA Technical Reports Server (NTRS)

    Krogh, F. T.

    1987-01-01

    Initial-value ordinary differential equation solution via variable order Adams method (SIVA/DIVA) package is a collection of subroutines for solution of nonstiff ordinary differential equations. There are versions for single-precision and double-precision arithmetic. Requires fewer evaluations of derivatives than other variable-order Adams predictor/corrector methods. Option for direct integration of second-order equations makes integration of trajectory problems significantly more efficient. Written in FORTRAN 77.

  16. The role of finite-difference methods in design and analysis for supersonic cruise

    NASA Technical Reports Server (NTRS)

    Townsend, J. C.

    1976-01-01

    Finite-difference methods for analysis of steady, inviscid supersonic flows are described, and their present state of development is assessed with particular attention to their applicability to vehicles designed for efficient cruise flight. Current work is described which will allow greater geometric latitude, improve treatment of embedded shock waves, and relax the requirement that the axial velocity must be supersonic.

  17. Gallium Nitride Direct Energy Conversion Betavoltaic Modeling and Optimization

    DTIC Science & Technology

    2017-03-01

    require high energy density battery systems. Radioisotopes are the most energy dense materials that can be converted into electrical energy. Pure...beta radioisotopes can be used towards making a long-lasting battery. However, the process to convert the energy provided by a pure beta radioisotope ...betavoltaic. Each energy conversion method has different challenges to overcome to improve the system efficiency. These energy conversion methods that are

  18. Robust stability of linear systems: Some computational considerations

    NASA Technical Reports Server (NTRS)

    Laub, A. J.

    1979-01-01

    The cases of both additive and multiplicative perturbations were discussed and a number of relationships between the two cases were given. A number of computational aspects of the theory were also discussed, including a proposed new method for evaluating general transfer or frequency response matrices. The new method is numerically stable and efficient, requiring only operations to update for new values of the frequency parameter.

  19. U.S. Army Public Affairs Officers and Social Media Training Requirements

    DTIC Science & Technology

    2016-06-10

    ABSTRACT Social media platforms have become an effective and efficient method used by U.S. Army organizations to deliver and communicate messages to...standards. 15. SUBJECT TERMS Public Affairs Officer, Social Media Training, Communications, Social Media Platforms, Training 16. SECURITY...methods to directly communicate with various audiences. The dramatic impact of social media in the information environment has created a shift, and caused

  20. Simple and efficient production of embryonic stem cell-embryo chimeras by coculture.

    PubMed Central

    Wood, S A; Pascoe, W S; Schmidt, C; Kemler, R; Evans, M J; Allen, N D

    1993-01-01

    A method for the production of embryonic stem (ES) cell-embryo chimeras was developed that involves the simple coculture of eight-cell embryos on a lawn of ES cells. After coculture, the embryos with ES cells attached are transferred to normal embryo culture medium and allowed to develop to the blastocyst stage before reimplantation into foster mothers. Although the ES cells initially attach to the outside of the embryos, they primarily colonize the inner cell mass and its derivatives. This method results in the efficient production of chimeras with high levels of chimerism including the germ line. As embryos are handled en masse and manipulative steps are minimal, this method should greatly reduce the time and effort required to produce chimeric mice. Images Fig. 1 Fig. 2 PMID:8506303

  1. Physical methods for genetic transformation of fungi and yeast

    NASA Astrophysics Data System (ADS)

    Rivera, Ana Leonor; Magaña-Ortíz, Denis; Gómez-Lim, Miguel; Fernández, Francisco; Loske, Achim M.

    2014-06-01

    The production of transgenic fungi is a routine process. Currently, it is possible to insert genes from other fungi, viruses, bacteria and even animals, albeit with low efficiency, into the genomes of a number of fungal species. Genetic transformation requires the penetration of the transgene through the fungal cell wall, a process that can be facilitated by biological or physical methods. Novel methodologies for the efficient introduction of specific genes and stronger promoters are needed to increase production levels. A possible solution to this problem is the recently discovered shock-wave-mediated transformation. The objective of this article is to review the state of the art of the physical methods used for genetic transformation of fungi and to describe some of the basic physics and molecular biology behind them.

  2. Teleform scannable data entry: an efficient method to update a community-based medical record? Community care coordination network Database Group.

    PubMed Central

    Guerette, P.; Robinson, B.; Moran, W. P.; Messick, C.; Wright, M.; Wofford, J.; Velez, R.

    1995-01-01

    Community-based multi-disciplinary care of chronically ill individuals frequently requires the efforts of several agencies and organizations. The Community Care Coordination Network (CCCN) is an effort to establish a community-based clinical database and electronic communication system to facilitate the exchange of pertinent patient data among primary care, community-based and hospital-based providers. In developing a primary care based electronic record, a method is needed to update records from the field or remote sites and agencies and yet maintain data quality. Scannable data entry with fixed fields, optical character recognition and verification was compared to traditional keyboard data entry to determine the relative efficiency of each method in updating the CCCN database. PMID:8563414

  3. Bypassing the malfunction junction in warm dense matter simulations

    NASA Astrophysics Data System (ADS)

    Cangi, Attila; Pribram-Jones, Aurora

    2015-03-01

    Simulation of warm dense matter requires computational methods that capture both quantum and classical behavior efficiently under high-temperature and high-density conditions. The state-of-the-art approach to model electrons and ions under those conditions is density functional theory molecular dynamics, but this method's computational cost skyrockets as temperatures and densities increase. We propose finite-temperature potential functional theory as an in-principle-exact alternative that suffers no such drawback. In analogy to the zero-temperature theory developed previously, we derive an orbital-free free energy approximation through a coupling-constant formalism. Our density approximation and its associated free energy approximation demonstrate the method's accuracy and efficiency. A.C. has been partially supported by NSF Grant CHE-1112442. A.P.J. is supported by DOE Grant DE-FG02-97ER25308.

  4. Second derivatives for approximate spin projection methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, Lee M.; Hratchian, Hrant P., E-mail: hhratchian@ucmerced.edu

    2015-02-07

    The use of broken-symmetry electronic structure methods is required in order to obtain correct behavior of electronically strained open-shell systems, such as transition states, biradicals, and transition metals. This approach often has issues with spin contamination, which can lead to significant errors in predicted energies, geometries, and properties. Approximate projection schemes are able to correct for spin contamination and can often yield improved results. To fully make use of these methods and to carry out exploration of the potential energy surface, it is desirable to develop an efficient second energy derivative theory. In this paper, we formulate the analytical second derivatives for the Yamaguchi approximate projection scheme, building on recent work that has yielded an efficient implementation of the analytical first derivatives.

  5. Accelerator-driven transmutation of spent fuel elements

    DOEpatents

    Venneri, Francesco; Williamson, Mark A.; Li, Ning

    2002-01-01

    An apparatus and method are described for transmuting higher actinides, plutonium and selected fission products in a liquid-fuel subcritical assembly. Uranium may also be enriched, thereby providing new fuel for use in conventional nuclear power plants. An accelerator provides the additional neutrons required to perform the processes. The size of the accelerator needed to complete fuel cycle closure depends on the neutron efficiency of the supported reactors and on the neutron spectrum of the actinide transmutation apparatus. Treatment of spent fuel from light water reactors (LWRs) using uranium-based fuel will require the largest accelerator power, whereas neutron-efficient high temperature gas reactors (HTGRs) or CANDU reactors will require the smallest accelerator power, especially if thorium is introduced into the newly generated fuel according to the teachings of the present invention. Fast spectrum actinide transmutation apparatus (based on liquid-metal fuel) will take full advantage of the accelerator-produced source neutrons and provide maximum utilization of the actinide-generated fission neutrons. However, near-thermal transmutation apparatus will require lower standing

  6. YAMAT-seq: an efficient method for high-throughput sequencing of mature transfer RNAs

    PubMed Central

    Shigematsu, Megumi; Honda, Shozo; Loher, Phillipe; Telonis, Aristeidis G.; Rigoutsos, Isidore

    2017-01-01

    Abstract Besides translation, transfer RNAs (tRNAs) play many non-canonical roles in various biological pathways and exhibit highly variable expression profiles. To unravel the emerging complexities of tRNA biology and the molecular mechanisms underlying them, an efficient tRNA sequencing method is required. However, the rigid structure of tRNA has been presenting a challenge to the development of such methods. We report the development of Y-shaped Adapter-ligated MAture TRNA sequencing (YAMAT-seq), an efficient and convenient method for high-throughput sequencing of mature tRNAs. YAMAT-seq circumvents the issue of inefficient adapter ligation, a characteristic of conventional RNA sequencing methods for mature tRNAs, by employing the efficient and specific ligation of a Y-shaped adapter to mature tRNAs using T4 RNA Ligase 2. Subsequent cDNA amplification and next-generation sequencing successfully yield numerous mature tRNA sequences. YAMAT-seq has high specificity for mature tRNAs and high sensitivity to detect most isoacceptors from minute amounts of total RNA. Moreover, YAMAT-seq shows quantitative capability to estimate expression levels of mature tRNAs, and has high reproducibility and broad applicability for various cell lines. YAMAT-seq thus provides a high-throughput technique for identifying tRNA profiles and their regulation in various transcriptomes, which could play important regulatory roles in translation and other biological processes. PMID:28108659

  7. Chapter 15: Commercial New Construction Evaluation Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurnik, Charles W.; Keates, Steven

    This protocol is intended to describe the recommended method when evaluating the whole-building performance of new construction projects in the commercial sector. The protocol focuses on energy conservation measures (ECMs) or packages of measures where evaluators can analyze impacts using building simulation. These ECMs typically require the use of calibrated building simulations under Option D of the International Performance Measurement and Verification Protocol (IPMVP).

  8. A rapid and cost-effective method for sequencing pooled cDNA clones by using a combination of transposon insertion and Gateway technology.

    PubMed

    Morozumi, Takeya; Toki, Daisuke; Eguchi-Ogawa, Tomoko; Uenishi, Hirohide

    2011-09-01

    Large-scale cDNA-sequencing projects require an efficient strategy for mass sequencing. Here we describe a method for sequencing pooled cDNA clones using a combination of transposon insertion and Gateway technology. Our method reduces the number of shotgun clones that are unsuitable for reconstruction of cDNA sequences, and has the advantage of reducing the total costs of the sequencing project.

  9. A feasible DY conjugate gradient method for linear equality constraints

    NASA Astrophysics Data System (ADS)

    LI, Can

    2017-09-01

    In this paper, we propose a feasible conjugate gradient method for solving linear equality constrained optimization problems. The method is an extension of the Dai-Yuan conjugate gradient method, proposed by Dai and Yuan, to linear equality constrained optimization problems. It can be applied to solve large linear equality constrained problems owing to its low storage requirement. An attractive property of the method is that the generated direction is always a feasible descent direction. Under mild conditions, the global convergence of the proposed method with exact line search is established. Numerical experiments are also given which show the efficiency of the method.
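
    A hedged sketch of the idea is given below: gradients are projected onto the null space of A so every iterate stays feasible for Ax = b, and the Dai-Yuan formula supplies the conjugate-gradient update. It is written for a convex quadratic with exact line search and illustrates the mechanism only, not the authors' exact algorithm or its safeguards.

```python
# Sketch: Dai-Yuan conjugate gradient kept feasible by null-space projection.
import numpy as np

def projected_dy_cg(Q, c, A, b, x0, iters=50, tol=1e-10):
    """Minimize 0.5*x'Qx - c'x subject to Ax = b, starting from a feasible x0."""
    assert np.allclose(A @ x0, b), "x0 must satisfy the constraints"
    P = np.eye(A.shape[1]) - A.T @ np.linalg.solve(A @ A.T, A)   # projector onto null(A)
    x = x0.astype(float).copy()
    g = P @ (Q @ x - c)                      # projected gradient
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        alpha = -(g @ d) / (d @ (Q @ d))     # exact line search for a quadratic
        x = x + alpha * d                    # stays feasible, since A d = 0
        g_new = P @ (Q @ x - c)
        beta = (g_new @ g_new) / (d @ (g_new - g))   # Dai-Yuan formula
        d = -g_new + beta * d
        g = g_new
    return x

Q = np.diag([1.0, 2.0, 3.0]); c = np.array([1.0, 1.0, 1.0])
A = np.array([[1.0, 1.0, 1.0]]); b = np.array([1.0])
print(projected_dy_cg(Q, c, A, b, x0=np.array([1.0, 0.0, 0.0])))
```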

  10. Highly Efficient Compression Algorithms for Multichannel EEG.

    PubMed

    Shaw, Laxmi; Rahman, Daleef; Routray, Aurobinda

    2018-05-01

    The difficulty associated with processing and understanding the high dimensionality of electroencephalogram (EEG) data requires developing efficient and robust compression algorithms. In this paper, different lossless compression techniques of single and multichannel EEG data, including Huffman coding, arithmetic coding, Markov predictor, linear predictor, context-based error modeling, multivariate autoregression (MVAR), and a low complexity bivariate model have been examined and their performances have been compared. Furthermore, a high compression algorithm named general MVAR and a modified context-based error modeling for multichannel EEG have been proposed. The resulting compression algorithm produces a higher relative compression ratio of 70.64% on average compared with the existing methods, and in some cases, it goes up to 83.06%. The proposed methods are designed to compress a large amount of multichannel EEG data efficiently so that the data storage and transmission bandwidth can be effectively used. These methods have been validated using several experimental multichannel EEG recordings of different subjects and publicly available standard databases. The satisfactory parametric measures of these methods, namely percent-root-mean square distortion, peak signal-to-noise ratio, root-mean-square error, and cross correlation, show their superiority over the state-of-the-art compression methods.
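
    The lossless schemes listed above share a predictive-coding step: a simple linear predictor turns the signal into small residuals, which an entropy coder (Huffman or arithmetic, not shown) then compresses. The sketch below uses a fixed second-order predictor on a toy integer series; the predictor order and the data are assumptions, not the paper's models.

```python
# Sketch of the predictor stage of a lossless compressor: residuals are small.
import numpy as np

def predictor_residuals(x):
    """Second-order linear predictor: the prediction of x[n] is 2*x[n-1] - x[n-2]."""
    x = np.asarray(x, dtype=np.int64)
    pred = 2 * x[1:-1] - x[:-2]
    return x[2:] - pred                   # residuals to be entropy-coded

eeg = np.cumsum(np.random.default_rng(0).integers(-3, 4, size=1000))  # toy EEG-like series
res = predictor_residuals(eeg)
print("signal range:", np.ptp(eeg), "residual range:", np.ptp(res))
```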

  11. Assessment of disinfection of hospital surfaces using different monitoring methods

    PubMed Central

    Ferreira, Adriano Menis; de Andrade, Denise; Rigotti, Marcelo Alessandro; de Almeida, Margarete Teresa Gottardo; Guerra, Odanir Garcia; dos Santos, Aires Garcia

    2015-01-01

    OBJECTIVE: to assess the efficiency of cleaning/disinfection of surfaces of an Intensive Care Unit. METHOD: descriptive-exploratory study with a quantitative approach conducted over the course of four weeks. Visual inspection, adenosine triphosphate bioluminescence and microbiological indicators were used to assess cleanliness/disinfection. Five surfaces (bed rails, bedside tables, infusion pumps, nurses' counter, and medical prescription table) were assessed before and after the use of rubbing alcohol at 70% (w/v), totaling 160 samples for each method. Non-parametric tests were used, considering statistically significant differences at p<0.05. RESULTS: after the cleaning/disinfection process, 87.5, 79.4 and 87.5% of the surfaces were considered clean using the visual inspection, adenosine triphosphate bioluminescence and microbiological analyses, respectively. A statistically significant decrease was observed in the disapproval rates after the cleaning process considering the three assessment methods; the visual inspection was the least reliable. CONCLUSION: the cleaning/disinfection method was efficient in reducing the microbial load and organic matter on surfaces; however, these findings require further study to clarify aspects related to the efficiency of friction, its frequency, and whether or not there is association with other inputs to achieve improved results of the cleaning/disinfection process. PMID:26312634

  12. An efficient 3-D eddy-current solver using an independent impedance method for transcranial magnetic stimulation.

    PubMed

    De Geeter, Nele; Crevecoeur, Guillaume; Dupre, Luc

    2011-02-01

    In many important bioelectromagnetic problem settings, eddy-current simulations are required. Examples are the reduction of eddy-current artifacts in magnetic resonance imaging and techniques whereby the eddy currents interact with the biological system, like the alteration of the neurophysiology due to transcranial magnetic stimulation (TMS). TMS has become an important tool for the diagnosis and treatment of neurological diseases and psychiatric disorders. A widely applied method for simulating the eddy currents is the impedance method (IM). However, this method has to contend with an ill-conditioned problem and consequently a long convergence time. When dealing with optimal design problems and sensitivity control, the convergence rate becomes even more crucial since the eddy-current solver needs to be evaluated in an iterative loop. Therefore, we introduce an independent IM (IIM), which improves the conditioning and speeds up the numerical convergence. This paper shows how IIM is based on IM and what its advantages are. Moreover, the method is applied to the efficient simulation of TMS. The proposed IIM achieves superior convergence properties with high time efficiency, compared to the traditional IM, and is therefore a useful tool for accurate and fast TMS simulations.

  13. 41 CFR 102-34.40 - Who must comply with motor vehicle fuel efficiency requirements?

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... motor vehicle fuel efficiency requirements? 102-34.40 Section 102-34.40 Public Contracts and Property... with motor vehicle fuel efficiency requirements? (a) Executive agencies operating domestic fleets must comply with motor vehicle fuel efficiency requirements for such fleets. (b) This subpart does not apply...

  14. Bridge-in-a-Backpack(TM). Task 2 : reduction of costs through design modifications and optimization.

    DOT National Transportation Integrated Search

    2011-09-01

    The cost effective use of FRP composites in infrastructure requires the efficient use of the composite materials in the design. Previous work during the development phase and demonstration phase illustrated the need to refine the design methods f...

  15. Clinical application of fully digital Cerec surgical guides made in-house.

    PubMed

    Bindl, A

    2015-01-01

    It is now possible to produce fully digital drilling templates with Cerec Guide 2 (Sirona) in the dental practice relatively quickly, efficiently, and economically. Here, a patient case is used to describe the procedure and method. Compared with other systems presently on the market, the solution described herein is advantageously efficient because it does not require external production of the drilling template in a laboratory or manufacturing center.

  16. Sonochemical enzyme-catalyzed regioselective acylation of flavonoid glycosides.

    PubMed

    Ziaullah; Rupasinghe, H P Vasantha

    2016-04-01

    This work describes a highly efficient alternative method: sonication-assisted, lipase-catalyzed acylation of quercetin-3-O-glucoside and phloretin-2'-glucoside with a range of fatty acids, using Candida antarctica lipase B (Novozyme 435(®)). In this study, sonication-assisted irradiation coupled with stirring was found to be more efficient and economical than conventional reaction conditions. Sonication-assisted acylation accelerated the reactions and reduced the time required by four- to five-fold. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Anaerobic digestion of municipal solid waste: Technical developments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rivard, C.J.

    1996-01-01

    The anaerobic biogasification of organic wastes generates two useful products: a medium-Btu fuel gas and a compost-quality organic residue. Although commercial-scale digestion systems are used to treat municipal sewage wastes, the disposal of solid organic wastes, including municipal solid wastes (MSW), requires a more cost-efficient process. Modern biogasification systems employ high-rate, high-solids fermentation methods to improve process efficiency and reduce capital costs. The design criteria and development stages are discussed. These systems are also compared with conventional low-solids fermentation technology.

  18. Perspectives in astrophysical databases

    NASA Astrophysics Data System (ADS)

    Frailis, Marco; de Angelis, Alessandro; Roberto, Vito

    2004-07-01

    Astrophysics has become a domain extremely rich of scientific data. Data mining tools are needed for information extraction from such large data sets. This asks for an approach to data management emphasizing the efficiency and simplicity of data access; efficiency is obtained using multidimensional access methods and simplicity is achieved by properly handling metadata. Moreover, clustering and classification techniques on large data sets pose additional requirements in terms of computation and memory scalability and interpretability of results. In this study we review some possible solutions.

  19. Efficient Transition Probability Computation for Continuous-Time Branching Processes via Compressed Sensing.

    PubMed

    Xu, Jason; Minin, Vladimir N

    2015-07-01

    Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes.

  20. ODEion--a software module for structural identification of ordinary differential equations.

    PubMed

    Gennemark, Peter; Wedelin, Dag

    2014-02-01

    In the systems biology field, algorithms for structural identification of ordinary differential equations (ODEs) have mainly focused on fixed model spaces like S-systems and/or on methods that require sufficiently good data so that derivatives can be accurately estimated. There is therefore a lack of methods and software that can handle more general models and realistic data. We present ODEion, a software module for structural identification of ODEs. Main characteristic features of the software are: • The model space is defined by arbitrary user-defined functions that can be nonlinear in both variables and parameters, such as for example chemical rate reactions. • ODEion implements computationally efficient algorithms that have been shown to efficiently handle sparse and noisy data. It can run a range of realistic problems that previously required a supercomputer. • ODEion is easy to use and provides SBML output. We describe the mathematical problem, the ODEion system itself, and provide several examples of how the system can be used. Available at: http://www.odeidentification.org.

  1. Efficient Transition Probability Computation for Continuous-Time Branching Processes via Compressed Sensing

    PubMed Central

    Xu, Jason; Minin, Vladimir N.

    2016-01-01

    Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes. PMID:26949377

  2. Brain blood vessel segmentation using line-shaped profiles

    NASA Astrophysics Data System (ADS)

    Babin, Danilo; Pižurica, Aleksandra; De Vylder, Jonas; Vansteenkiste, Ewout; Philips, Wilfried

    2013-11-01

    Segmentation of cerebral blood vessels is of great importance in diagnostic and clinical applications, especially for embolization of cerebral aneurysms and arteriovenous malformations (AVMs). In order to perform embolization of the AVM, the structural and geometric information of blood vessels from 3D images is of utmost importance. For this reason, the in-depth segmentation of cerebral blood vessels is usually done as a fusion of different segmentation techniques, often requiring extensive user interaction. In this paper we introduce the idea of line-shaped profiling with an application to brain blood vessel and AVM segmentation, efficient both in terms of resolving details and in terms of computation time. Our method takes into account both the local and the wider neighbourhood of the processed pixel, which makes it effective for segmenting large blood vessel tree structures as well as the fine structures of AVMs. Another advantage of our method is that it requires the selection of only one parameter to perform segmentation, so very little user interaction is needed.

  3. A nanostructure based on metasurfaces for optical interconnects

    NASA Astrophysics Data System (ADS)

    Lin, Shulang; Gu, Huarong

    2017-08-01

    The optical-electronic integrated neural co-processor plays a vital part in optical neural networks, which are mainly realized through optical interconnects. Because of accuracy requirements and the long-term goal of integration, optical interconnects should be both efficient and compact. Traditional solutions have used holography recorded in crystals, or Fresnel diffraction exploited in zone plates. However, the holographic method cannot meet the efficiency requirement, and a zone plate is too bulky to allow miniaturization of the optical neural unit. This paper therefore seeks a replacement for the holographic method and the zone plate that provides sufficient diffraction efficiency in a smaller size. Metasurfaces are composed of subwavelength-spaced phase shifters at the interface of a medium; they allow unprecedented control of light properties and enable versatile functionalities in a planar structure. In this paper, a metasurface-based nanostructure is presented for optical interconnects, and its light-splitting ability and simulated crosstalk are compared with those of a zone plate.

  4. Solving large-scale PDE-constrained Bayesian inverse problems with Riemann manifold Hamiltonian Monte Carlo

    NASA Astrophysics Data System (ADS)

    Bui-Thanh, T.; Girolami, M.

    2014-11-01

    We consider the Riemann manifold Hamiltonian Monte Carlo (RMHMC) method for solving statistical inverse problems governed by partial differential equations (PDEs). The Bayesian framework is employed to cast the inverse problem into the task of statistical inference whose solution is the posterior distribution in infinite dimensional parameter space conditional upon observation data and Gaussian prior measure. We discretize both the likelihood and the prior using the H1-conforming finite element method together with a matrix transfer technique. The power of the RMHMC method is that it exploits the geometric structure induced by the PDE constraints of the underlying inverse problem. Consequently, each RMHMC posterior sample is almost uncorrelated/independent from the others providing statistically efficient Markov chain simulation. However this statistical efficiency comes at a computational cost. This motivates us to consider computationally more efficient strategies for RMHMC. At the heart of our construction is the fact that for Gaussian error structures the Fisher information matrix coincides with the Gauss-Newton Hessian. We exploit this fact in considering a computationally simplified RMHMC method combining state-of-the-art adjoint techniques and the superiority of the RMHMC method. Specifically, we first form the Gauss-Newton Hessian at the maximum a posteriori point and then use it as a fixed constant metric tensor throughout RMHMC simulation. This eliminates the need for the computationally costly differential geometric Christoffel symbols, which in turn greatly reduces computational effort at a corresponding loss of sampling efficiency. We further reduce the cost of forming the Fisher information matrix by using a low rank approximation via a randomized singular value decomposition technique. This is efficient since a small number of Hessian-vector products are required. The Hessian-vector product in turn requires only two extra PDE solves using the adjoint technique. Various numerical results up to 1025 parameters are presented to demonstrate the ability of the RMHMC method in exploring the geometric structure of the problem to propose (almost) uncorrelated/independent samples that are far away from each other, and yet the acceptance rate is almost unity. The results also suggest that for the PDE models considered the proposed fixed metric RMHMC can attain almost as high a quality performance as the original RMHMC, i.e. generating (almost) uncorrelated/independent samples, while being two orders of magnitude less computationally expensive.
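
    One ingredient of the abstract above that translates directly into code is the randomized low-rank approximation of the Gauss-Newton Hessian from matrix-vector products alone. The sketch below is a generic randomized eigendecomposition, with an explicit toy matrix standing in for Hessian-vector products that, in the paper's setting, would each cost two adjoint PDE solves; it illustrates the technique and is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

n, rank = 200, 10
# Toy symmetric positive semi-definite "Gauss-Newton Hessian" with a rapidly
# decaying spectrum; in practice each product H @ v would come from adjoint solves.
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigvals = np.exp(-np.arange(n) / 3.0)
H = (U * eigvals) @ U.T

def hessian_vector_product(v):
    return H @ v          # placeholder for an adjoint-based matvec

def randomized_eig(matvec, n, rank, oversample=5):
    """Randomized low-rank eigendecomposition using only matrix-vector products."""
    omega = rng.standard_normal((n, rank + oversample))
    Y = np.column_stack([matvec(omega[:, j]) for j in range(omega.shape[1])])
    Q, _ = np.linalg.qr(Y)                       # orthonormal basis for the range
    B = Q.T @ np.column_stack([matvec(Q[:, j]) for j in range(Q.shape[1])])
    w, V = np.linalg.eigh(B)                     # small dense eigenproblem
    return w[-rank:], Q @ V[:, -rank:]           # leading approximate eigenpairs

w, V = randomized_eig(hessian_vector_product, n, rank)
print("largest exact vs. approximate eigenvalue:", eigvals[0], w[-1])
```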

  5. Event-driven processing for hardware-efficient neural spike sorting

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Pereira, João L.; Constandinou, Timothy G.

    2018-02-01

    Objective. The prospect of real-time and on-node spike sorting provides a genuine opportunity to push the envelope of large-scale integrated neural recording systems. In such systems the hardware resources, power requirements and data bandwidth increase linearly with channel count. Event-based (or data-driven) processing can provide an efficient new means of hardware implementation that is completely activity-dependent. In this work, we investigate using continuous-time level-crossing sampling for efficient data representation and subsequent spike processing. Approach. (1) We first compare signals (synthetic neural datasets) encoded with this technique against conventional sampling. (2) We then show how such a representation can be directly exploited by extracting simple time domain features from the bitstream to perform neural spike sorting. (3) The proposed method is implemented on a low-power FPGA platform to demonstrate its hardware viability. Main results. It is observed that considerably lower data rates are achievable when using 7 bits or less to represent the signals, whilst maintaining the signal fidelity. Results obtained using both MATLAB and reconfigurable logic hardware (FPGA) indicate that feature extraction and spike sorting can be achieved with comparable or better accuracy than reference methods whilst also requiring relatively low hardware resources. Significance. By effectively exploiting continuous-time data representation, neural signal processing can be achieved in a completely event-driven manner, reducing both the required resources (memory, complexity) and computations (operations). This will see future large-scale neural systems integrating on-node processing in real-time hardware.
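
    As a hypothetical illustration of the level-crossing representation described above (applied here to a uniformly sampled toy waveform rather than true continuous-time hardware), a minimal encoder might look like the following; the signal, threshold step, and event format are assumptions.

```python
import numpy as np

def level_crossing_encode(signal, delta):
    """Encode a sampled waveform as (index, +1/-1) events emitted whenever the
    signal moves one quantization step `delta` away from the last event level."""
    events, last_level = [], signal[0]
    for i, x in enumerate(signal):
        while x - last_level >= delta:
            last_level += delta
            events.append((i, +1))
        while last_level - x >= delta:
            last_level -= delta
            events.append((i, -1))
    return events

# Toy "extracellular recording": a noisy baseline with one spike-like transient.
rng = np.random.default_rng(0)
sig = 0.05 * rng.standard_normal(1000)
sig[400:420] += np.hanning(20) * 1.0          # the spike

events = level_crossing_encode(sig, delta=0.1)
print(f"{len(events)} events for 1000 samples; first few: {events[:5]}")
```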

  6. Wavelet-based adaptation methodology combined with finite difference WENO to solve ideal magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Do, Seongju; Li, Haojun; Kang, Myungjoo

    2017-06-01

    In this paper, we present an accurate and efficient wavelet-based adaptive weighted essentially non-oscillatory (WENO) scheme for the hydrodynamics and ideal magnetohydrodynamics (MHD) equations arising from hyperbolic conservation systems. The proposed method works with the finite difference weighted essentially non-oscillatory (FD-WENO) method in space and the third order total variation diminishing (TVD) Runge-Kutta (RK) method in time. The philosophy of this work is to use lifted interpolating wavelets not only as a detector for singularities but also as an interpolator. In particular, flexible interpolations can be performed by an inverse wavelet transformation. When the divergence cleaning method, which introduces an auxiliary scalar field ψ, is applied to the base numerical schemes to impose the divergence-free condition on the magnetic field in the MHD equations, the approximations to the derivatives of ψ require the neighboring points. Moreover, the fifth order WENO interpolation requires a large stencil to reconstruct a high order polynomial. In such cases, an efficient interpolation method is necessary. The adaptive spatial differentiation method is considered as well as the adaptation of grid resolutions. In order to avoid the heavy computation of FD-WENO, a fixed-stencil approximation without computing the non-linear WENO weights is used in the smooth regions, and the characteristic decomposition method is replaced by a component-wise approach. Numerical results demonstrate that with the adaptive method we are able to resolve solutions that agree well with the solution on the corresponding fine grid.
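
    The detector role of interpolating wavelets mentioned above can be illustrated with a very small sketch: on a dyadic 1D grid, the detail coefficient at a fine-grid point is the difference between its value and the value interpolated from the coarse grid, and large details flag cells for refinement. This simplified example uses linear interpolation and is not the authors' lifted, multi-dimensional scheme.

```python
import numpy as np

def interpolating_wavelet_details(u_fine):
    """Detail coefficients of a 1D signal on a dyadic grid: the difference between
    each odd-indexed fine-grid value and its interpolation from even-indexed
    (coarse) neighbours. Large details flag non-smooth regions."""
    coarse = u_fine[::2]
    predicted = 0.5 * (coarse[:-1] + coarse[1:])     # linear interpolation from coarse grid
    return u_fine[1::2][: predicted.size] - predicted

# Smooth profile with an embedded discontinuity (a crude stand-in for a shock).
x = np.linspace(0.0, 1.0, 257)
u = np.sin(2 * np.pi * x)
u[x > 0.6] += 1.0

details = interpolating_wavelet_details(u)
flagged = np.where(np.abs(details) > 1e-3)[0]
print("fine cells flagged for refinement (detail indices):", flagged)
```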

  7. Biodesulfurization of refractory organic sulfur compounds in fossil fuels.

    PubMed

    Soleimani, Mehran; Bassi, Amarjeet; Margaritis, Argyrios

    2007-01-01

    The stringent new regulations to lower sulfur content in fossil fuels require new economic and efficient methods for desulfurization of recalcitrant organic sulfur. Hydrodesulfurization of such compounds is very costly and requires high operating temperature and pressure. Biodesulfurization is a non-invasive approach that can specifically remove sulfur from refractory hydrocarbons under mild conditions and it can be potentially used in industrial desulfurization. Intensive research has been conducted in microbiology and molecular biology of the competent strains to increase their desulfurization activity; however, even the highest activity obtained is still insufficient to fulfill the industrial requirements. To improve the biodesulfurization efficiency, more work is needed in areas such as increasing specific desulfurization activity, hydrocarbon phase tolerance, sulfur removal at higher temperature, and isolating new strains for desulfurizing a broader range of sulfur compounds. This article comprehensively reviews and discusses key issues, advances and challenges for a competitive biodesulfurization process.

  8. A variational eigenvalue solver on a photonic quantum processor

    PubMed Central

    Peruzzo, Alberto; McClean, Jarrod; Shadbolt, Peter; Yung, Man-Hong; Zhou, Xiao-Qi; Love, Peter J.; Aspuru-Guzik, Alán; O’Brien, Jeremy L.

    2014-01-01

    Quantum computers promise to efficiently solve important problems that are intractable on a conventional computer. For quantum systems, where the physical dimension grows exponentially, finding the eigenvalues of certain operators is one such intractable problem and remains a fundamental challenge. The quantum phase estimation algorithm efficiently finds the eigenvalue of a given eigenvector but requires fully coherent evolution. Here we present an alternative approach that greatly reduces the requirements for coherent evolution and combines this method with a new approach to state preparation based on ansätze and classical optimization. We implement the algorithm by combining a highly reconfigurable photonic quantum processor with a conventional computer. We experimentally demonstrate the feasibility of this approach with an example from quantum chemistry: calculating the ground-state molecular energy for He–H+. The proposed approach drastically reduces the coherence time requirements, enhancing the potential of quantum resources available today and in the near future. PMID:25055053
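
    The variational loop described above (a trial state prepared from parameters, an expectation value estimated on the device, and a classical optimizer closing the loop) can be mimicked classically on a tiny Hamiltonian. The sketch below uses a 2x2 matrix as a stand-in and a simple parameter scan as the "optimizer"; it only illustrates the structure of the method and has nothing to do with the photonic implementation.

```python
import numpy as np

# Toy two-level "molecular Hamiltonian" (a stand-in, not He-H+).
H = np.array([[1.0, 0.4],
              [0.4, -0.3]])

def ansatz(theta):
    """Single-parameter trial state |psi(theta)> = [cos(theta), sin(theta)]."""
    return np.array([np.cos(theta), np.sin(theta)])

def energy(theta):
    psi = ansatz(theta)
    return psi @ H @ psi        # <psi|H|psi>, the quantity a quantum device would estimate

# Classical outer loop: here a simple scan; a real implementation would use a
# derivative-free optimizer driving repeated expectation-value estimates.
thetas = np.linspace(0.0, np.pi, 2001)
energies = np.array([energy(t) for t in thetas])

print("variational ground-state energy:", energies.min())
print("exact ground-state energy      :", np.linalg.eigvalsh(H)[0])
```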

  9. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3; A Recursive Maximum Likelihood Decoding

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc

    1998-01-01

    The Viterbi algorithm is indeed a very simple and efficient method of implementing maximum likelihood decoding. However, if we take advantage of the structural properties in a trellis section, other efficient trellis-based decoding algorithms can be devised. Recently, an efficient trellis-based recursive maximum likelihood decoding (RMLD) algorithm for linear block codes has been proposed. This algorithm is more efficient than the conventional Viterbi algorithm in both computation and hardware requirements. Most importantly, the implementation of this algorithm does not require the construction of the entire code trellis; only some special one-section trellises of relatively small state and branch complexities are needed for constructing path (or branch) metric tables recursively. In the end, there is only one table, which contains the most likely codeword and its metric for a given received sequence r = (r(sub 1), r(sub 2),...,r(sub n)). This algorithm basically uses the divide and conquer strategy. Furthermore, it allows parallel/pipeline processing of received sequences to speed up decoding.

  10. Energy-Performance-Based Design-Build Process: Strategies for Procuring High-Performance Buildings on Typical Construction Budgets: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scheib, J.; Pless, S.; Torcellini, P.

    NREL experienced a significant increase in employees and facilities on our 327-acre main campus in Golden, Colorado over the past five years. To support this growth, researchers developed and demonstrated a new building acquisition method that successfully integrates energy efficiency requirements into the design-build requests for proposals and contracts. We piloted this energy-performance-based design-build process with our first new construction project in 2008. We have since replicated and evolved the process for large office buildings, a smart grid research laboratory, a supercomputer, a parking structure, and a cafeteria. Each project incorporated aggressive efficiency strategies using contractual energy use requirements in the design-build contracts, all on typical construction budgets. We have found that when energy efficiency is a core project requirement as defined at the beginning of a project, innovative design-build teams can integrate the most cost effective and high performance efficiency strategies on typical construction budgets. When the design-build contract includes measurable energy requirements and is set up to incentivize design-build teams to focus on achieving high performance in actual operations, owners can now expect their facilities to perform. As NREL completed the new construction in 2013, we have documented our best practices in training materials and a how-to guide so that other owners and owner's representatives can replicate our successes and learn from our experiences in attaining market viable, world-class energy performance in the built environment.

  11. Machine learning action parameters in lattice quantum chromodynamics

    NASA Astrophysics Data System (ADS)

    Shanahan, Phiala E.; Trewartha, Daniel; Detmold, William

    2018-05-01

    Numerical lattice quantum chromodynamics studies of the strong interaction are important in many aspects of particle and nuclear physics. Such studies require significant computing resources to undertake. A number of proposed methods promise improved efficiency of lattice calculations, and access to regions of parameter space that are currently computationally intractable, via multi-scale action-matching approaches that necessitate parametric regression of generated lattice datasets. The applicability of machine learning to this regression task is investigated, with deep neural networks found to provide an efficient solution even in cases where approaches such as principal component analysis fail. The high information content and complex symmetries inherent in lattice QCD datasets require custom neural network layers to be introduced and present opportunities for further development.

  12. Desktop supercomputer: what can it do?

    NASA Astrophysics Data System (ADS)

    Bogdanov, A.; Degtyarev, A.; Korkhov, V.

    2017-12-01

    The paper addresses the issues of solving complex problems that require using supercomputers or multiprocessor clusters available for most researchers nowadays. Efficient distribution of high performance computing resources according to actual application needs has been a major research topic since high-performance computing (HPC) technologies became widely introduced. At the same time, comfortable and transparent access to these resources was a key user requirement. In this paper we discuss approaches to build a virtual private supercomputer available at user's desktop: a virtual computing environment tailored specifically for a target user with a particular target application. We describe and evaluate possibilities to create the virtual supercomputer based on light-weight virtualization technologies, and analyze the efficiency of our approach compared to traditional methods of HPC resource management.

  13. Efficiency Evaluation of Handling of Geologic-Geophysical Information by Means of Computer Systems

    NASA Astrophysics Data System (ADS)

    Nuriyahmetova, S. M.; Demyanova, O. V.; Zabirova, L. M.; Gataullin, I. I.; Fathutdinova, O. A.; Kaptelinina, E. A.

    2018-05-01

    Development of oil and gas resources under difficult geological, geographical, and economic conditions involves considerable financial costs; careful justification of planned activities, together with the use of the most promising directions and modern technologies, is therefore necessary from the standpoint of cost efficiency. Ensuring high precision of regional and local forecasts and of reservoir modeling for hydrocarbon fields requires analyzing huge arrays of distributed, continually changing spatial information. Solving this task requires modern remote methods for investigating prospective oil and gas territories, integrated use of remote, non-destructive geologic-geophysical data and satellite Earth-sounding methods, and the most advanced technologies for processing them. In the article, the authors consider the experience of Russian and foreign companies in processing geologic-geophysical information with computer systems. They conclude that multidimensional analysis of the geologic-geophysical information space and effective planning and monitoring of exploration work require broad use of geoinformation technologies, one of the most promising directions for achieving high profitability in the oil and gas industry.

  14. A high-performance transcutaneous battery charger for medical implants.

    PubMed

    Artan, N; Vanjani, Hitesh; Vashist, Gurudath; Fu, Zhen; Bhakthavatsala, Santosh; Ludvig, Nandor; Medveczky, Geza; Chao, H

    2010-01-01

    As new functionality is added to implantable devices, their power requirements also increase. Such power requirements make it hard to keep such implants operational for long periods using non-rechargeable batteries. This results in a need for frequent surgeries to replace these batteries. Rechargeable batteries can satisfy the long-term power requirements of these new functions. To minimize the discomfort to the patients, the recharging of the batteries should be as infrequent as possible. Traditional battery charging methods have low battery charging efficiency. This means they may limit the amount of charge that can be delivered to the device, speeding up the depletion of the battery and forcing frequent recharging. In this paper, we evaluate the suitability of a state-of-the-art general-purpose charging method called the current-pumped battery charger (CPBC) for implant applications. Using off-the-shelf components and with minimum optimization, we prototyped a proof-of-concept transcutaneous battery charger based on the CPBC and show that the CPBC can charge a 100 mAh battery transcutaneously within 137 minutes with at most a 2.1°C increase in tissue temperature, even with a misalignment of 1.3 cm between the coils, while keeping the battery charging efficiency at 85%.

  15. Optimization of preservation and storage time of sponge tissues to obtain quality mRNA for next-generation sequencing.

    PubMed

    Riesgo, Ana; Pérez-Porro, Alicia R; Carmona, Susana; Leys, Sally P; Giribet, Gonzalo

    2012-03-01

    Transcriptome sequencing with next-generation sequencing technologies has the potential for addressing many long-standing questions about the biology of sponges. Transcriptome sequence quality depends on good cDNA libraries, which requires high-quality mRNA. Standard protocols for preserving and isolating mRNA often require optimization for unusual tissue types. Our aim was to assess the efficiency of two preservation modes, (i) flash freezing with liquid nitrogen (LN₂) and (ii) immersion in RNAlater, for the recovery of high-quality mRNA from sponge tissues. We also tested whether the long-term storage of samples at -80 °C affects the quantity and quality of mRNA. We extracted mRNA from nine sponge species and analysed the quantity and quality (A260/230 and A260/280 ratios) of mRNA according to preservation method, storage time, and taxonomy. The quantity and quality of mRNA depended significantly on the preservation method used (LN₂ outperforming RNAlater), the sponge species, and the interaction between them. When the preservation was analysed in combination with either storage time or species, the quantity and A260/230 ratio were both significantly higher for LN₂-preserved samples. Interestingly, individual comparisons for each preservation method over time indicated that both methods performed equally efficiently during the first month, but RNAlater lost efficiency for storage times longer than 2 months compared with flash-frozen samples. In summary, we find that for long-term preservation of samples, flash freezing is the preferred method. If LN₂ is not available, RNAlater can be used, but mRNA extraction during the first month of storage is advised. © 2011 Blackwell Publishing Ltd.

  16. Novel patch modelling method for efficient simulation and prediction uncertainty analysis of multi-scale groundwater flow and transport processes

    NASA Astrophysics Data System (ADS)

    Sreekanth, J.; Moore, Catherine

    2018-04-01

    The application of global sensitivity and uncertainty analysis techniques to groundwater models of deep sedimentary basins is typically challenged by large computational burdens combined with associated numerical stability issues. The highly parameterized approaches required for exploring the predictive uncertainty associated with the heterogeneous hydraulic characteristics of multiple aquifers and aquitards in these sedimentary basins exacerbate these issues. A novel Patch Modelling Methodology is proposed for improving the computational feasibility of stochastic modelling analysis of large-scale and complex groundwater models. The method incorporates a nested groundwater modelling framework that enables efficient simulation of groundwater flow and transport across multiple spatial and temporal scales. The method also allows different processes to be simulated within different model scales. Existing nested model methodologies are extended by employing 'joining predictions' for extrapolating prediction-salient information from one model scale to the next. This establishes a feedback mechanism supporting the transfer of information from child models to parent models as well as parent models to child models in a computationally efficient manner. This feedback mechanism is simple and flexible and ensures that while the salient small scale features influencing larger scale prediction are transferred back to the larger scale, this does not require the live coupling of models. This method allows the modelling of multiple groundwater flow and transport processes using separate groundwater models that are built for the appropriate spatial and temporal scales, within a stochastic framework, while also removing the computational burden associated with live model coupling. The utility of the method is demonstrated by application to an actual large-scale aquifer injection scheme in Australia.

  17. [Automated analyser of organ cultured corneal endothelial mosaic].

    PubMed

    Gain, P; Thuret, G; Chiquet, C; Gavet, Y; Turc, P H; Théillère, C; Acquart, S; Le Petit, J C; Maugery, J; Campos, L

    2002-05-01

    Until now, the organ-cultured corneal endothelial mosaic has been assessed in France by cell counting using a calibrated graticule, or by drawing cells on a computerized image. The former method is unsatisfactory because it lacks objective evaluation of cell surface and hexagonality and it requires an experienced technician. The latter method is time-consuming and requires careful attention. We aimed to make an efficient, fast, and easy-to-use automated digital analyzer of video images of the corneal endothelium. The hardware included a Pentium III(R) 800 MHz PC with 256 MB of RAM, a Data Translation 3155 acquisition card, a Sony SC 75 CE CCD camera, and a 22-inch screen. Special functions for automated cell boundary determination consisted of plug-in programs included in the ImageTool software. Calibration was performed using a calibrated micrometer. Cell densities of 40 organ-cultured corneas measured by both manual and automated counting were compared using parametric tests (Student's t test for paired variables and the Pearson correlation coefficient). All steps were made more ergonomic, i.e., endothelial image capture, image selection, thresholding of multiple areas of interest, automated cell count, automated detection of errors in cell boundary drawing, and presentation of the results in an HTML file including the number of counted cells, cell density, coefficient of variation of cell area, cell surface histogram, and cell hexagonality. The device was efficient: the global process took on average 7 minutes and did not require an experienced technician. The correlation between cell densities obtained with both methods was high (r=+0.84, p<0.001). The results showed an under-estimation with manual counting compared with the automated method (2191+/-322 vs. 2273+/-457 cells/mm², p=0.046). Our automated endothelial cell analyzer is efficient and gives reliable results quickly and easily. A multicentric validation would allow us to standardize cell counts among cornea banks in our country.

  18. An efficient soil water balance model based on hybrid numerical and statistical methods

    NASA Astrophysics Data System (ADS)

    Mao, Wei; Yang, Jinzhong; Zhu, Yan; Ye, Ming; Liu, Zhao; Wu, Jingwei

    2018-04-01

    Most soil water balance models only consider downward soil water movement driven by gravitational potential, and thus cannot simulate upward soil water movement driven by evapotranspiration, especially in agricultural areas. In addition, the models cannot be used for simulating soil water movement in heterogeneous soils, and usually require many empirical parameters. To resolve these problems, this study derives a new one-dimensional water balance model for simulating both downward and upward soil water movement in heterogeneous unsaturated zones. The new model is based on a hybrid of numerical and statistical methods, and only requires four physical parameters. The model uses three governing equations to consider three terms that impact soil water movement, including the advective term driven by gravitational potential, the source/sink term driven by external forces (e.g., evapotranspiration), and the diffusive term driven by matric potential. The three governing equations are solved separately by using the hybrid numerical and statistical methods (e.g., linear regression method) that consider soil heterogeneity. The four soil hydraulic parameters required by the new model are as follows: saturated hydraulic conductivity, saturated water content, field capacity, and residual water content. The strengths and weaknesses of the new model are evaluated by using two published studies, three hypothetical examples and a real-world application. The evaluation is performed by comparing the simulation results of the new model with corresponding results presented in the published studies, obtained using HYDRUS-1D and observation data. The evaluation indicates that the new model is accurate and efficient for simulating upward soil water flow in heterogeneous soils with complex boundary conditions. The new model is used for evaluating different drainage functions, and the square drainage function and the power drainage function are recommended. Computational efficiency of the new model makes it particularly suitable for large-scale simulation of soil water movement, because the new model can be used with coarse discretization in space and time.

  19. Automated optimization techniques for aircraft synthesis

    NASA Technical Reports Server (NTRS)

    Vanderplaats, G. N.

    1976-01-01

    Application of numerical optimization techniques to automated conceptual aircraft design is examined. These methods are shown to be a general and efficient way to obtain quantitative information for evaluating alternative new vehicle projects. Fully automated design is compared with traditional point design methods and time and resource requirements for automated design are given. The NASA Ames Research Center aircraft synthesis program (ACSYNT) is described with special attention to calculation of the weight of a vehicle to fly a specified mission. The ACSYNT procedures for automatically obtaining sensitivity of the design (aircraft weight, performance and cost) to various vehicle, mission, and material technology parameters are presented. Examples are used to demonstrate the efficient application of these techniques.

  20. Demonstration Of Ultra HI-FI (UHF) Methods

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.

    2004-01-01

    Computational aero-acoustics (CAA) requires efficient, high-resolution simulation tools. Most current techniques utilize finite-difference approaches because high order accuracy is considered too difficult or expensive to achieve with finite volume or finite element methods. However, a novel finite volume approach (Ultra HI-FI, or UHF) is presented that utilizes Hermite fluxes and can achieve arbitrary accuracy and fidelity in both space and time. The technique can be applied to unstructured grids with some loss of fidelity, or with multi-block structured grids for maximum efficiency and resolution. In either paradigm, it is possible to resolve ultra-short waves (less than 2 PPW). This is demonstrated here by solving the 4th CAA workshop Category 1 Problem 1.

  1. Efficient generalized cross-validation with applications to parametric image restoration and resolution enhancement.

    PubMed

    Nguyen, N; Milanfar, P; Golub, G

    2001-01-01

    In many image restoration/resolution enhancement applications, the blurring process, i.e., the point spread function (PSF) of the imaging system, is not known or is known only to within a set of parameters. We estimate these PSF parameters for this ill-posed class of inverse problems from raw data, along with the regularization parameters required to stabilize the solution, using the generalized cross-validation method (GCV). We propose efficient approximation techniques based on the Lanczos algorithm and Gauss quadrature theory, reducing the computational complexity of the GCV. Data-driven PSF and regularization parameter estimation experiments with synthetic and real image sequences are presented to demonstrate the effectiveness and robustness of our method.
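
    For small problems the GCV function can be evaluated directly from an SVD rather than with the Lanczos and Gauss-quadrature approximations the abstract targets at large scale. The following sketch selects a Tikhonov regularization parameter for a toy 1D deblurring problem this way; the blur model, noise level, and grid are assumptions, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D blurring problem: Gaussian convolution matrix A, smooth true signal x.
n = 200
t = np.arange(n)
A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 3.0) ** 2)
A /= A.sum(axis=1, keepdims=True)
x_true = np.sin(2 * np.pi * t / n) + 0.5 * np.sin(6 * np.pi * t / n)
b = A @ x_true + 0.01 * rng.standard_normal(n)

# GCV function for Tikhonov regularization, evaluated via the SVD of A.
U, s, Vt = np.linalg.svd(A)
beta = U.T @ b

def gcv(lam):
    f = s**2 / (s**2 + lam**2)                 # Tikhonov filter factors
    resid = np.sum(((1.0 - f) * beta) ** 2)    # squared residual norm
    denom = (n - np.sum(f)) ** 2               # squared trace of (I - influence matrix)
    return resid / denom

lams = np.logspace(-6, 0, 200)
lam_best = lams[np.argmin([gcv(l) for l in lams])]
x_reg = Vt.T @ (s / (s**2 + lam_best**2) * beta)

print("GCV-selected lambda:", lam_best)
print("relative error of regularized solution:",
      np.linalg.norm(x_reg - x_true) / np.linalg.norm(x_true))
```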

  2. Efficient storage and management of radiographic images using a novel wavelet-based multiscale vector quantizer

    NASA Astrophysics Data System (ADS)

    Yang, Shuyu; Mitra, Sunanda

    2002-05-01

    Due to the huge volumes of radiographic images to be managed in hospitals, efficient compression techniques yielding no perceptual loss in the reconstructed images are becoming a requirement in the storage and management of such datasets. A wavelet-based multi-scale vector quantization scheme that generates a global codebook for efficient storage and transmission of medical images is presented in this paper. The results obtained show that even at low bit rates one is able to obtain reconstructed images with perceptual quality higher than that of the state-of-the-art scalar quantization method, the set partitioning in hierarchical trees.

  3. Caseload management methods for use within district nursing teams: a literature review.

    PubMed

    Roberson, Carole

    2016-05-01

    Effective and efficient caseload management requires extensive skills to ensure that patients receive the right care by the right person at the right time. District nursing caseloads are continually increasing in size and complexity, which requires specialist district nursing knowledge and skills. This article reviews the literature related to caseload management with the aim of identifying the most effective method for district nursing teams. The findings from this review are that there are different styles and methods of caseload management. The literature review was unable to identify a single validated tool or method, but identified themes for implementing effective caseload management, specifically caseload analysis; workload measurement; work allocation; service and practice development and workforce planning. This review also identified some areas for further research.

  4. Modeling of Melt-Infiltrated SiC/SiC Composite Properties

    NASA Technical Reports Server (NTRS)

    Mital, Subodh K.; Bednarcyk, Brett A.; Arnold, Steven M.; Lang, Jerry

    2009-01-01

    The elastic properties of a two-dimensional five-harness melt-infiltrated silicon carbide fiber reinforced silicon carbide matrix (MI SiC/SiC) ceramic matrix composite (CMC) were predicted using several methods. Methods used in this analysis are multiscale laminate analysis, micromechanics-based woven composite analysis, a hybrid woven composite analysis, and two- and three-dimensional finite element analyses. The elastic properties predicted are in good agreement with each other as well as with the available measured data. However, the various methods differ from each other in three key areas: (1) the fidelity provided, (2) the effort required for input data preparation, and (3) the computational resources required. Results also indicate that the efficient methods are able to provide a reasonable estimate of local stress fields.

  5. The holy grail of soil metal contamination site assessment: reducing risk and increasing confidence of decision making using infield portable X-ray Fluorescence (pXRF) technology

    NASA Astrophysics Data System (ADS)

    Rouillon, M.; Taylor, M. P.; Dong, C.

    2016-12-01

    This research assesses the advantages of integrating field portable X-ray Fluorescence (pXRF) technology for reducing the risk and increasing the confidence of decision making in metal-contaminated site assessments. Metal-contaminated sites are often highly heterogeneous and require a high sampling density to accurately characterize the distribution and concentration of contaminants. The current regulatory assessment approaches rely on a small number of samples processed using standard wet-chemistry methods. In New South Wales (NSW), Australia, the current notification trigger for characterizing metal-contaminated sites requires the upper 95% confidence interval of the site mean to equal or exceed the relevant guidelines. The method's low 'minimum' sampling requirements can misclassify sites due to the heterogeneous nature of soil contamination, leading to inaccurate decision making. To address this issue, we propose integrating infield pXRF analysis with the established sampling method to overcome sampling limitations. This approach increases the minimum sampling resolution and reduces the 95% CI of the site mean. Infield pXRF analysis at contamination hotspots enhances sample resolution efficiently and without the need to return to the site. In this study, the current and proposed pXRF site assessment methods are compared at five heterogeneous metal-contaminated sites by analysing the spatial distribution of contaminants, the 95% confidence intervals of site means, and the sampling and analysis uncertainty associated with each method. Finally, an analysis of costs associated with both the current and proposed methods is presented to demonstrate the advantages of incorporating pXRF into metal-contaminated site assessments. The data show that pXRF-integrated site assessments allow faster, cost-efficient characterisation of metal-contaminated sites with greater confidence for decision making.
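
    The notification trigger described above reduces to a simple calculation: compare the upper 95% confidence limit of the site mean with the guideline value. A minimal sketch with made-up pXRF lead concentrations follows; the guideline value and data are purely illustrative.

```python
import numpy as np
from scipy import stats

# Hypothetical lead concentrations (mg/kg) from pXRF measurements across a site.
conc = np.array([120, 310, 95, 480, 260, 150, 700, 220, 180, 330, 90, 410], dtype=float)
guideline = 300.0          # assumed investigation level, mg/kg

n = conc.size
mean, se = conc.mean(), conc.std(ddof=1) / np.sqrt(n)
# One-sided upper 95% confidence limit of the site mean (Student's t).
ucl95 = mean + stats.t.ppf(0.95, df=n - 1) * se

print(f"site mean = {mean:.0f} mg/kg, 95% UCL = {ucl95:.0f} mg/kg")
print("exceeds guideline" if ucl95 >= guideline else "below guideline")
```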

  6. Proximal sensing for soil carbon accounting

    NASA Astrophysics Data System (ADS)

    England, Jacqueline R.; Viscarra Rossel, Raphael A.

    2018-05-01

    Maintaining or increasing soil organic carbon (C) is vital for securing food production and for mitigating greenhouse gas (GHG) emissions, climate change, and land degradation. Some land management practices in cropping, grazing, horticultural, and mixed farming systems can be used to increase organic C in soil, but to assess their effectiveness, we need accurate and cost-efficient methods for measuring and monitoring the change. To determine the stock of organic C in soil, one requires measurements of soil organic C concentration, bulk density, and gravel content, but using conventional laboratory-based analytical methods is expensive. Our aim here is to review the current state of proximal sensing for the development of new soil C accounting methods for emissions reporting and in emissions reduction schemes. We evaluated sensing techniques in terms of their rapidity, cost, accuracy, safety, readiness, and their state of development. The most suitable method for measuring soil organic C concentrations appears to be visible-near-infrared (vis-NIR) spectroscopy and, for bulk density, active gamma-ray attenuation. Sensors for measuring gravel have not been developed, but an interim solution with rapid wet sieving and automated measurement appears useful. Field-deployable, multi-sensor systems are needed for cost-efficient soil C accounting. Proximal sensing can be used for soil organic C accounting, but the methods need to be standardized and procedural guidelines need to be developed to ensure proficient measurement and accurate reporting and verification. These are particularly important if the schemes use financial incentives for landholders to adopt management practices to sequester soil organic C. We list and discuss requirements for developing new soil C accounting methods based on proximal sensing, including requirements for recording, verification, and auditing.
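
    The stock calculation implied above combines the three measured quantities in one standard formula: organic C stock (t C/ha) = OC concentration (%) x bulk density (g/cm^3) x layer depth (cm) x (1 - gravel fraction). A small sketch with assumed values:

```python
def soil_carbon_stock(oc_percent, bulk_density_g_cm3, depth_cm, gravel_fraction):
    """Soil organic carbon stock (t C/ha) for a single layer:
    OC (%) x bulk density (g/cm^3) x depth (cm) x (1 - gravel fraction)."""
    return oc_percent * bulk_density_g_cm3 * depth_cm * (1.0 - gravel_fraction)

# Assumed example: 1.2 % OC, 1.35 g/cm^3, 0-30 cm layer, 5 % gravel by volume.
stock = soil_carbon_stock(1.2, 1.35, 30.0, 0.05)
print(f"organic C stock: {stock:.1f} t C/ha")   # about 46 t C/ha
```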

  7. Two efficient label-equivalence-based connected-component labeling algorithms for 3-D binary images.

    PubMed

    He, Lifeng; Chao, Yuyan; Suzuki, Kenji

    2011-08-01

    Whenever one wants to distinguish, recognize, and/or measure objects (connected components) in binary images, labeling is required. This paper presents two efficient label-equivalence-based connected-component labeling algorithms for 3-D binary images. One is voxel based and the other is run based. For the voxel-based one, we present an efficient method of deciding the order for checking voxels in the mask. For the run-based one, instead of assigning a provisional label to each foreground voxel, we assign one to each run. Moreover, we use run data to label foreground voxels without scanning any background voxel in the second scan. Experimental results have demonstrated that our voxel-based algorithm is efficient for 3-D binary images with complicated connected components, that our run-based one is efficient for those with simple connected components, and that both are much more efficient than conventional 3-D labeling algorithms.
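
    A minimal, illustrative version of the label-equivalence idea (two passes plus a union-find table of provisional labels) for a 3-D binary array with 6-connectivity is sketched below; it omits the voxel-ordering and run-based optimizations that make the authors' algorithms fast.

```python
import numpy as np

def label_3d(binary):
    """Two-pass connected-component labeling of a 3D binary array (6-connectivity)
    using a union-find structure to record label equivalences."""
    labels = np.zeros(binary.shape, dtype=np.int32)
    parent = [0]                                  # parent[0] is the background label

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]         # path halving
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[max(ra, rb)] = min(ra, rb)

    # First pass: assign provisional labels, record equivalences with prior neighbours.
    for z, y, x in zip(*np.nonzero(binary)):
        neighbours = [labels[z - 1, y, x] if z else 0,
                      labels[z, y - 1, x] if y else 0,
                      labels[z, y, x - 1] if x else 0]
        neighbours = [n for n in neighbours if n]
        if not neighbours:
            parent.append(len(parent))            # new provisional label
            labels[z, y, x] = len(parent) - 1
        else:
            labels[z, y, x] = min(neighbours)
            for n in neighbours:
                union(labels[z, y, x], n)

    # Second pass: replace provisional labels by their equivalence-class representative.
    lut = np.array([find(i) for i in range(len(parent))], dtype=np.int32)
    return lut[labels]

vol = np.zeros((4, 4, 4), dtype=bool)
vol[0, 0, :2] = True          # one component
vol[3, 3, 3] = True           # another component
print(np.unique(label_3d(vol)))   # background plus two component labels
```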

  8. High efficiency x-ray nanofocusing by the blazed stacking of binary zone plates

    NASA Astrophysics Data System (ADS)

    Mohacsi, I.; Karvinen, P.; Vartiainen, I.; Diaz, A.; Somogyi, A.; Kewish, C. M.; Mercere, P.; David, C.

    2013-09-01

    The focusing efficiency of binary Fresnel zone plate lenses is fundamentally limited and higher efficiency requires a multi step lens profile. To overcome the manufacturing problems of high resolution and high efficiency multistep zone plates, we investigate the concept of stacking two different binary zone plates in each other's optical near-field. We use a coarse zone plate with π phase shift and a double density fine zone plate with π/2 phase shift to produce an effective 4- step profile. Using a compact experimental setup with piezo actuators for alignment, we demonstrated 47.1% focusing efficiency at 6.5 keV using a pair of 500 μm diameter and 200 nm smallest zone width. Furthermore, we present a spatially resolved characterization method using multiple diffraction orders to identify manufacturing errors, alignment errors and pattern distortions and their effect on diffraction efficiency.

  9. A comparison of three methods for estimating the requirements for medical specialists: the case of otolaryngologists.

    PubMed Central

    Anderson, G F; Han, K C; Miller, R H; Johns, M E

    1997-01-01

    OBJECTIVE: To compare three methods of computing the national requirements for otolaryngologists in 1994 and 2010. DATA SOURCES: Three large HMOs, a Delphi panel, the Bureau of Health Professions (BHPr), and published sources. STUDY DESIGN: Three established methods of computing requirements for otolaryngologists were compared: managed care, demand-utilization, and adjusted needs assessment. Under the managed care model, a published method based on reviewing staffing patterns in HMOs was modified to estimate the number of otolaryngologists. We obtained from BHPr estimates of work force projections from their demand model. To estimate the adjusted needs model, we convened a Delphi panel of otolaryngologists using the methodology developed by the Graduate Medical Education National Advisory Committee (GMENAC). DATA COLLECTION/EXTRACTION METHODS: Not applicable. PRINCIPAL FINDINGS: Wide variation in the estimated number of otolaryngologists required occurred across the three methods. Within each model it was possible to alter the requirements for otolaryngologists significantly by changing one or more of the key assumptions. The managed care model has a potential to obtain the most reliable estimates because it reflects actual staffing patterns in institutions that are attempting to use physicians efficiently. CONCLUSIONS: Estimates of work force requirements can vary considerably if one or more assumptions are changed. In order for the managed care approach to be useful for actual decision making concerning the appropriate number of otolaryngologists required, additional research on the methodology used to extrapolate the results to the general population is necessary. PMID:9180613

  10. A Short Take-Off/Vertical Landing (STOVL) Aircraft Carrier (S-CVX).

    DTIC Science & Technology

    1998-05-01

    distance for the S-CVX can be covered in less than 20 days at 25 kts even with a half-day stop for refueling. Transit time differences for Norfolk to...efficiency concerns will result in a rather limited selection of basic weapon types with variants to accommodate different mission requirements. This...components shall be decontaminated through use of a countermeasure wash down system and portable Decontamination (DECON) methods. This requirement

  11. Future needs for biomedical transducers

    NASA Technical Reports Server (NTRS)

    Wooten, F. T.

    1971-01-01

    In summary, there are three major classes of transducer improvements required: improvements in existing transducers, needs for unexploited physical science phenomena in transducer design, and needs for unutilized physiological phenomena in transducer design. During the next decade, increasing emphasis will be placed on noninvasive measurement in all of these areas. Patient safety, patient comfort, and the need for efficient utilization of the time of both patient and physician require that noninvasive methods of monitoring be developed.

  12. Tuning charge balance in PHOLEDs with ambipolar host materials to achieve high efficiency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Padmaperuma, Asanga B.; Koech, Phillip K.; Cosimbescu, Lelia

    2009-08-27

    The efficiency and stability of blue organic light emitting devices (OLEDs) continue to be a primary roadblock to developing organic solid state white lighting. For OLEDs to meet the high power conversion efficiency goal, they will require both close to 100% internal quantum efficiency and low operating voltage in a white light emitting device.1 It is generally accepted that such high quantum efficiency can only be achieved with the use of organometallic phosphor doped OLEDs. Blue OLEDs are particularly important for solid state lighting. The simplest (and therefore likely the lowest cost) method of generating white light is to down-convert part of the emission from a blue light source with a system of external phosphors.2 A second method of generating white light requires the superposition of the light from red, green and blue OLEDs in the correct ratio. Either of these two methods (and indeed any method of generating white light with a high color rendering index) critically depends on a high efficiency blue light component.3 A simple OLED generally consists of a hole-injecting anode, a preferentially hole transporting organic layer (HTL), an emissive layer that contains the recombination zone and ideally transports both holes and electrons, a preferentially electron-transporting layer (ETL) and an electron-injecting cathode. Color in state-of-the-art OLEDs is generated by an organometallic phosphor incorporated by co-sublimation into the emissive layer (EML).4 New materials functioning as hosts, emitters, charge transporting, and charge blocking layers have been developed along with device architectures leading to electrophosphorescent based OLEDs with high quantum efficiencies near the theoretical limit. However, the layers added to the device architecture to enable high quantum efficiencies lead to higher operating voltages and correspondingly lower power efficiencies. Achievement of target luminance power efficiencies will require new strategies for lowering operating voltages, particularly if this is to be achieved in a device that can be manufactured at low cost. To avoid the efficiency losses associated with phosphorescence quenching by back-energy transfer from the dopant onto the host, the triplet excited states of the host material must be higher in energy than the triplet excited state of the dopant.5 This must be accomplished without sacrificing the charge transporting properties of the composite.6 Similar problems limit the efficiency of OLED-based displays, where blue light emitters are the least efficient and least stable. We previously demonstrated the utility of organic phosphine oxide (PO) materials as electron transporting HMs for FIrpic in blue OLEDs.7 However, the high reluctance of PO materials to oxidation, and thus to hole injection, limits the ability to balance charge injection and transport in the EML without relying on charge transport by the phosphorescent dopant. PO host materials were engineered to transport both electrons and holes in the EML and still maintain high triplet exciton energy to ensure efficient energy transfer to the dopant (Figure 1). There are examples of combining hole transporting moieties (mainly aromatic amines) with electron transport moieties (e.g., oxadiazoles, triazines, boranes)8 to develop new emitter and host materials for small molecule and polymer9 OLEDs. The challenge is to combine the two moieties without lowering the triplet energy of the target molecule. For example, coupling of a dimesitylphenylboryl moiety with a tertiary aromatic amine (FIAMBOT) results in intramolecular electron transfer from the amine to the boron atom through the bridging phenyl. The mesomeric effect of the dimesitylphenylboryl unit acts to extend conjugation and lowers triplet exciton energies (< 2.8 eV), rendering such systems inadequate as ambipolar hosts for blue phosphors.

  13. Data Management System (DMS) Evolution Analysis

    NASA Technical Reports Server (NTRS)

    Douglas, Katherine

    1990-01-01

    The all encompassing goal for the Data Management System (DMS) Evolution Analysis task is to develop an advocacy for ensuring that growth and technology insertion issues are properly and adequately addressed during DMS requirements specification, design, and development. The most efficient methods of addressing those issues are via planned and graceful evolution, technology transparency, and system growth margins. It is necessary that provisions, such as those previously mentioned, are made to accommodate advanced mission requirements (e.g., Human Space Exploration Programs) in addition to evolving Space Station Freedom operations and user requirements.

  14. An Efficient Method for the Isolation of Highly Purified RNA from Seeds for Use in Quantitative Transcriptome Analysis.

    PubMed

    Kanai, Masatake; Mano, Shoji; Nishimura, Mikio

    2017-01-11

    Plant seeds accumulate large amounts of storage reserves comprising biodegradable organic matter. Humans rely on seed storage reserves for food and as industrial materials. Gene expression profiles are powerful tools for investigating metabolic regulation in plant cells. Therefore, detailed, accurate gene expression profiles during seed development are required for crop breeding. Acquiring highly purified RNA is essential for producing these profiles. Efficient methods are needed to isolate highly purified RNA from seeds. Here, we describe a method for isolating RNA from seeds containing large amounts of oils, proteins, and polyphenols, which have inhibitory effects on high-purity RNA isolation. Our method enables highly purified RNA to be obtained from seeds without the use of phenol, chloroform, or additional processes for RNA purification. This method is applicable to Arabidopsis, rapeseed, and soybean seeds. Our method will be useful for monitoring the expression patterns of low level transcripts in developing and mature seeds.

  15. Autonomous Byte Stream Randomizer

    NASA Technical Reports Server (NTRS)

    Paloulian, George K.; Woo, Simon S.; Chow, Edward T.

    2013-01-01

    Net-centric networking environments are often faced with limited resources and must utilize bandwidth as efficiently as possible. In networking environments that span wide areas, the data transmission has to be efficient without any redundant or exuberant metadata. The Autonomous Byte Stream Randomizer software provides an extra level of security on top of existing data encryption methods. Randomizing the data's byte stream adds an extra layer to existing data protection methods, thus making it harder for an attacker to decrypt protected data. Based on a generated cryptographically secure random seed, a random sequence of numbers is used to intelligently and efficiently swap the organization of bytes in data using the unbiased and memory-efficient in-place Fisher-Yates shuffle method. Swapping bytes and reorganizing the crucial structure of the byte data renders the data file unreadable and leaves the data in a deconstructed state. This deconstruction adds an extra level of security, requiring the byte stream to be reconstructed with the random seed in order to be readable. Once the data byte stream has been randomized, the software enables the data to be distributed to N nodes in an environment. Each piece of the data in randomized and distributed form is a separate entity, unreadable in its own right, but when combined with all N pieces it can be reconstructed back into one. Reconstruction requires possession of the key used for randomizing the bytes, leading to the generation of the same cryptographically secure random sequence of numbers used to randomize the data. This software is a cornerstone capability, able to generate the same cryptographically secure sequence on different machines and at different times, thus allowing this software to be used more heavily in net-centric environments where data transfer bandwidth is limited.
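
    A toy sketch of the core mechanism described above: derive a deterministic swap sequence from a seed (here via SHA-256 in counter mode, as a stand-in for a cryptographically secure generator), apply an in-place, unbiased Fisher-Yates shuffle, and invert it with the same seed. This is an illustration of the idea, not the NASA software itself.

```python
import hashlib

def keyed_stream(seed: bytes):
    """Deterministic byte stream derived from a seed via SHA-256 in counter mode
    (a stand-in for the cryptographically secure generator the abstract describes)."""
    counter = 0
    while True:
        block = hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        yield from block
        counter += 1

def randbelow(stream, n):
    """Unbiased integer in [0, n) drawn from the byte stream (rejection sampling)."""
    nbytes = (n.bit_length() + 7) // 8
    limit = (256 ** nbytes // n) * n
    while True:
        v = int.from_bytes(bytes(next(stream) for _ in range(nbytes)), "big")
        if v < limit:
            return v % n

def shuffle_indices(length, seed):
    """Swap sequence of an in-place Fisher-Yates shuffle, reproducible from the seed."""
    stream = keyed_stream(seed)
    return [(i, randbelow(stream, i + 1)) for i in range(length - 1, 0, -1)]

def randomize(data: bytearray, seed: bytes):
    for i, j in shuffle_indices(len(data), seed):
        data[i], data[j] = data[j], data[i]

def reconstruct(data: bytearray, seed: bytes):
    for i, j in reversed(shuffle_indices(len(data), seed)):
        data[i], data[j] = data[j], data[i]

msg = bytearray(b"sensitive payload for a bandwidth-limited link")
seed = b"\x00" * 32                      # in practice: a securely generated random seed
randomize(msg, seed)
print("randomized :", bytes(msg))
reconstruct(msg, seed)
print("recovered  :", bytes(msg))
```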

  16. Efficient and Controlled Generation of 2D and 3D Bile Duct Tissue from Human Pluripotent Stem Cell-Derived Spheroids.

    PubMed

    Tian, Lipeng; Deshmukh, Abhijeet; Ye, Zhaohui; Jang, Yoon-Young

    2016-08-01

    While in vitro liver tissue engineering has been increasingly studied during the last several years, presently engineered liver tissues lack the bile duct system. The lack of bile drainage not only hinders essential digestive functions of the liver, but also leads to accumulation of bile that is toxic to hepatocytes and known to cause liver cirrhosis. Clearly, generation of bile duct tissue is essential for engineering a functional and healthy liver. Differentiation of human induced pluripotent stem cells (iPSCs) to bile duct tissue requires long and/or complex culture conditions, and has been inefficient so far. Towards generating a fully functional liver containing a biliary system, we have developed defined and controlled conditions for efficient 2D and 3D bile duct epithelial tissue generation. EpCAM, a marker of multipotent liver progenitors in adult human liver and of the ductal plate in human fetal liver, is highly expressed in hepatic spheroids generated from human iPSCs. The EpCAM-high hepatic spheroids can not only efficiently generate a monolayer of biliary epithelial cells (cholangiocytes) in a 2D differentiation condition, but also form functional ductal structures in a 3D condition. Importantly, this EpCAM-high spheroid based biliary tissue generation is significantly faster than other existing methods and does not require cell sorting. In addition, we show that a knock-in CK7 reporter human iPSC line generated by CRISPR/Cas9 genome editing technology greatly facilitates the analysis of biliary differentiation. This new ductal differentiation method will provide a more efficient way of obtaining bile duct cells and tissues, which may facilitate engineering of complete and functional liver tissue in the future.

  17. Laser photolysis of caged compounds at 405 nm: photochemical advantages, localisation, phototoxicity and methods for calibration.

    PubMed

    Trigo, Federico F; Corrie, John E T; Ogden, David

    2009-05-30

    Rapid, localised photolytic release of neurotransmitters from caged precursors at synaptic regions in the extracellular space is greatly hampered at irradiation wavelengths in the near-UV, close to the wavelength of maximum absorption of the caged precursor, because of inner-filtering by strong absorption of light in the cage solution between the objective and cell. For this reason two-photon excitation is commonly used for photolysis, particularly at multiple points distributed over large fields; or, with near-UV, if combined with local perfusion of the cage. These methods each have problems: the small cross-sections of common cages with two-photon excitation require high cage concentrations and light intensities near the phototoxic limit, while local perfusion gives non-uniform cage concentrations over the field of view. Single-photon photolysis at 405 nm, although less efficient than at 330-350 nm, with present cages is more efficient than two-photon photolysis. The reduced light absorption in the bulk cage solution permits efficient wide-field uncaging at non-toxic intensities with uniform cage concentration. Full photolysis of MNI-glutamate with 100 μs pulses required intensities of 2 mW μm⁻² at the preparation, shown to be non-toxic with repeated exposures. Light scattering at 405 nm was estimated as 50% at 18 μm depth in 21-day rat cerebellum. Methods are described for: (1) varying the laser spot size; (2) photolysis calibration in the microscope with the caged fluorophore NPE-HPTS over the wavelength range 347-405 nm; and (3) determining the point-spread function of excitation. Furthermore, DM-Nitrophen photolysis at 405 nm was efficient for intracellular investigations of Ca2+-dependent processes.

  18. Reproducing Quantum Probability Distributions at the Speed of Classical Dynamics: A New Approach for Developing Force-Field Functors.

    PubMed

    Sundar, Vikram; Gelbwaser-Klimovsky, David; Aspuru-Guzik, Alán

    2018-04-05

    Modeling nuclear quantum effects is required for accurate molecular dynamics (MD) simulations of molecules. The community has paid special attention to water and other biomolecules that show hydrogen bonding. Standard methods of modeling nuclear quantum effects like Ring Polymer Molecular Dynamics (RPMD) are computationally costlier than running classical trajectories. A force-field functor (FFF) is an alternative method that computes an effective force field that replicates quantum properties of the original force field. In this work, we propose an efficient method of computing FFF using the Wigner-Kirkwood expansion. As a test case, we calculate a range of thermodynamic properties of Neon, obtaining the same level of accuracy as RPMD, but with the shorter runtime of classical simulations. By modifying existing MD programs, the proposed method could be used in the future to increase the efficiency and accuracy of MD simulations involving water and proteins.

  19. Assessment of Intralaminar Progressive Damage and Failure Analysis Using an Efficient Evaluation Framework

    NASA Technical Reports Server (NTRS)

    Hyder, Imran; Schaefer, Joseph; Justusson, Brian; Wanthal, Steve; Leone, Frank; Rose, Cheryl

    2017-01-01

    Reducing the timeline for development and certification of composite structures has been a long-standing objective of the aerospace industry. This timeline can be further exacerbated when attempting to integrate new fiber-reinforced composite materials due to the large amount of testing required at every level of design. Computational progressive damage and failure analysis (PDFA) attempts to mitigate this effect; however, new PDFA methods have been slow to be adopted in industry since material model evaluation techniques have not been fully defined. This study presents an efficient evaluation framework which uses a piecewise verification and validation (V&V) approach for PDFA methods. Specifically, the framework is applied to evaluate PDFA research codes within the context of intralaminar damage. Methods are incrementally taken through various V&V exercises specifically tailored to study PDFA intralaminar damage modeling capability. Finally, methods are evaluated against a defined set of success criteria to highlight successes and limitations.

  20. A Fast Gradient Method for Nonnegative Sparse Regression With Self-Dictionary

    NASA Astrophysics Data System (ADS)

    Gillis, Nicolas; Luce, Robert

    2018-01-01

    A nonnegative matrix factorization (NMF) can be computed efficiently under the separability assumption, which asserts that all the columns of the given input data matrix belong to the cone generated by a (small) subset of them. The provably most robust methods to identify these conic basis columns are based on nonnegative sparse regression and self dictionaries, and require the solution of large-scale convex optimization problems. In this paper we study a particular nonnegative sparse regression model with self dictionary. As opposed to previously proposed models, this model yields a smooth optimization problem where the sparsity is enforced through linear constraints. We show that the Euclidean projection on the polyhedron defined by these constraints can be computed efficiently, and propose a fast gradient method to solve our model. We compare our algorithm with several state-of-the-art methods on synthetic data sets and real-world hyperspectral images.
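    To make the separability setting concrete, the sketch below implements the successive projection algorithm (SPA), a simple greedy baseline for picking conic basis columns; it is not the fast gradient, self-dictionary model proposed in the paper, and the data matrix in the usage comment is hypothetical.

        # Illustrative SPA baseline for separable NMF column selection
        # (not the paper's fast gradient / self-dictionary method).
        import numpy as np

        def spa(M, r):
            """Return indices of r columns of M that approximately generate the others."""
            R = M.astype(float).copy()                     # residual matrix
            selected = []
            for _ in range(r):
                j = int(np.argmax(np.sum(R * R, axis=0)))  # column with largest norm
                selected.append(j)
                u = R[:, j] / np.linalg.norm(R[:, j])
                R = R - np.outer(u, u @ R)                 # project onto u's orthogonal complement
            return selected

        # Hypothetical usage: M = np.random.rand(50, 200); print(spa(M, 5))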

  1. Novel transform for image description and compression with implementation by neural architectures

    NASA Astrophysics Data System (ADS)

    Ben-Arie, Jezekiel; Rao, Raghunath K.

    1991-10-01

    A general method for signal representation using nonorthogonal basis functions that are composed of Gaussians is described. The Gaussians can be combined into groups with a predetermined configuration that can approximate any desired basis function. The same configuration at different scales forms a set of self-similar wavelets. The general scheme is demonstrated by representing a natural signal employing an arbitrary basis function. The basic methodology is demonstrated by two novel schemes for efficient representation of 1-D and 2-D signals using Gaussian basis functions (BFs). Special methods are required here since the Gaussian functions are nonorthogonal. The first method employs a paradigm of maximum energy reduction interlaced with the A* heuristic search. The second method uses an adaptive lattice system to find the minimum-squared-error projection of the BFs onto the signal, and a lateral-vertical suppression network to select the most efficient representation in terms of data compression.
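    The "maximum energy reduction" idea lends itself to a short illustration. The sketch below is a plain matching-pursuit-style greedy loop over a grid of Gaussian atoms; the A* search and the adaptive lattice/suppression networks described above are not reproduced, and the grid and names are illustrative.

        # Greedy decomposition of a 1-D signal over nonorthogonal Gaussian atoms
        # (matching-pursuit-style "maximum energy reduction"); illustration only.
        import numpy as np

        def gaussian_atom(t, center, width):
            g = np.exp(-0.5 * ((t - center) / width) ** 2)
            return g / np.linalg.norm(g)

        def greedy_gaussian_fit(signal, t, centers, widths, n_atoms):
            residual = signal.astype(float).copy()
            atoms = []
            for _ in range(n_atoms):
                best = None
                for c in centers:
                    for w in widths:
                        g = gaussian_atom(t, c, w)
                        coeff = residual @ g               # projection on the unit-norm atom
                        if best is None or abs(coeff) > abs(best[0]):
                            best = (coeff, c, w, g)
                coeff, c, w, g = best
                residual -= coeff * g                      # remove the captured energy
                atoms.append((coeff, c, w))
            return atoms, residual

        # t = np.linspace(0, 1, 256); atoms, res = greedy_gaussian_fit(sig, t, t[::16], [0.01, 0.05], 6)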

  2. Algal cell disruption using microbubbles to localize ultrasonic energy

    PubMed Central

    Krehbiel, Joel D.; Schideman, Lance C.; King, Daniel A.; Freund, Jonathan B.

    2015-01-01

    Microbubbles were added to an algal solution with the goal of improving cell disruption efficiency and the net energy balance for algal biofuel production. Experimental results showed that disruption increases with increasing peak rarefaction ultrasound pressure over the range studied: 1.90 to 3.07 MPa. Additionally, ultrasound cell disruption increased by up to 58% by adding microbubbles, with peak disruption occurring in the range of 10^8 microbubbles/ml. The localization of energy in space and time provided by the bubbles improves efficiency: energy requirements for such a process were estimated to be one-fourth of the available heat of combustion of algal biomass and one-fifth of those of currently used cell disruption methods. This increase in energy efficiency could make microbubble-enhanced ultrasound viable for bioenergy applications and is expected to integrate well with current cell harvesting methods based upon dissolved air flotation. PMID:25311188

  3. Deterministic binary vectors for efficient automated indexing of MEDLINE/PubMed abstracts.

    PubMed

    Wahle, Manuel; Widdows, Dominic; Herskovic, Jorge R; Bernstam, Elmer V; Cohen, Trevor

    2012-01-01

    The need to maintain accessibility of the biomedical literature has led to development of methods to assist human indexers by recommending index terms for newly encountered articles. Given the rapid expansion of this literature, it is essential that these methods be scalable. Document vector representations are commonly used for automated indexing, and Random Indexing (RI) provides the means to generate them efficiently. However, RI is difficult to implement in real-world indexing systems, as (1) efficient nearest-neighbor search requires retaining all document vectors in RAM, and (2) it is necessary to maintain a store of randomly generated term vectors to index future documents. Motivated by these concerns, this paper documents the development and evaluation of a deterministic binary variant of RI. The increased capacity demonstrated by binary vectors has implications for information retrieval, and the elimination of the need to retain term vectors facilitates distributed implementations, enhancing the scalability of RI.
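    The following sketch illustrates the general idea of deterministic binary indexing, assuming a hash-seeded construction: each term's elemental vector is regenerated on demand from a hash of the term, so no store of term vectors is needed, and document vectors are binarized sums compared by Hamming similarity. It is a simplified stand-in, not the authors' exact construction, and the dimensionality is an arbitrary illustrative choice.

        # Sketch of deterministic "random" indexing with binary vectors; the
        # elemental vectors are derived from a hash of the term itself, so they
        # never need to be stored. Illustration only, not the authors' method.
        import hashlib
        import numpy as np

        DIM = 1024  # illustrative dimensionality

        def term_vector(term):
            """Deterministic +/-1 vector seeded by a hash of the term."""
            seed = int.from_bytes(hashlib.sha256(term.encode()).digest()[:8], "big")
            rng = np.random.default_rng(seed)
            return rng.choice([-1, 1], size=DIM)

        def document_vector(terms):
            """Sum elemental term vectors, then binarize by sign."""
            total = np.sum([term_vector(t) for t in terms], axis=0)
            return np.where(total >= 0, 1, 0).astype(np.uint8)

        def hamming_similarity(a, b):
            return 1.0 - np.count_nonzero(a != b) / DIM

        # doc1 = document_vector("efficient automated indexing of abstracts".split())
        # doc2 = document_vector("automated indexing of medline abstracts".split())
        # print(hamming_similarity(doc1, doc2))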

  4. Deterministic Binary Vectors for Efficient Automated Indexing of MEDLINE/PubMed Abstracts

    PubMed Central

    Wahle, Manuel; Widdows, Dominic; Herskovic, Jorge R.; Bernstam, Elmer V.; Cohen, Trevor

    2012-01-01

    The need to maintain accessibility of the biomedical literature has led to development of methods to assist human indexers by recommending index terms for newly encountered articles. Given the rapid expansion of this literature, it is essential that these methods be scalable. Document vector representations are commonly used for automated indexing, and Random Indexing (RI) provides the means to generate them efficiently. However, RI is difficult to implement in real-world indexing systems, as (1) efficient nearest-neighbor search requires retaining all document vectors in RAM, and (2) it is necessary to maintain a store of randomly generated term vectors to index future documents. Motivated by these concerns, this paper documents the development and evaluation of a deterministic binary variant of RI. The increased capacity demonstrated by binary vectors has implications for information retrieval, and the elimination of the need to retain term vectors facilitates distributed implementations, enhancing the scalability of RI. PMID:23304369

  5. Gradient-based Optimization for Poroelastic and Viscoelastic MR Elastography

    PubMed Central

    Tan, Likun; McGarry, Matthew D.J.; Van Houten, Elijah E.W.; Ji, Ming; Solamen, Ligin; Weaver, John B.

    2017-01-01

    We describe an efficient gradient computation for solving inverse problems arising in magnetic resonance elastography (MRE). The algorithm can be considered as a generalized ‘adjoint method’ based on a Lagrangian formulation. One requirement for the classic adjoint method is assurance of the self-adjoint property of the stiffness matrix in the elasticity problem. In this paper, we show this property is no longer a necessary condition in our algorithm, but the computational performance can be as efficient as the classic method, which involves only two forward solutions and is independent of the number of parameters to be estimated. The algorithm is developed and implemented in material property reconstructions using poroelastic and viscoelastic modeling. Various gradient- and Hessian-based optimization techniques have been tested on simulation, phantom and in vivo brain data. The numerical results show the feasibility and the efficiency of the proposed scheme for gradient calculation. PMID:27608454
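    For readers unfamiliar with the adjoint approach referenced above, a generic statement of the classic adjoint gradient (standard notation, not the specific poroelastic/viscoelastic MRE formulation) shows why the cost is independent of the number of parameters. For a discretized forward problem $K(\theta)\,u = f$ and objective $J(u,\theta)$, define the Lagrangian

        \mathcal{L}(u,\lambda,\theta) = J(u,\theta) + \lambda^{T}\left(K(\theta)\,u - f\right).

    Choosing the adjoint state $\lambda$ from $K(\theta)^{T}\lambda = -\,\partial J/\partial u$ removes the dependence on $\partial u/\partial\theta$, leaving

        \frac{dJ}{d\theta_{k}} = \frac{\partial J}{\partial \theta_{k}} + \lambda^{T}\,\frac{\partial K}{\partial \theta_{k}}\,u,

    so one forward solve (for $u$) and one adjoint solve (for $\lambda$) suffice regardless of how many parameters $\theta_{k}$ are estimated; when $K$ is self-adjoint, the adjoint solve reuses the same operator as the forward solve.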

  6. High efficiency laser-assisted H⁻ charge exchange for microsecond duration beams

    DOE PAGES

    Cousineau, Sarah; Rakhman, Abdurahim; Kay, Martin; ...

    2017-12-26

    Laser-assisted stripping is a novel approach to H⁻ charge exchange that overcomes long-standing limitations associated with the traditional, foil-based method of producing high-intensity, time-structured beams of protons. This paper reports on the first successful demonstration of the laser stripping technique for microsecond duration beams. The experiment represents a factor of 1000 increase in the stripped pulse duration compared with the previous proof-of-principle demonstration. The central theme of the experiment is the implementation of methods to reduce the required average laser power such that high efficiency stripping can be accomplished for microsecond duration beams using conventional laser technology. In conclusion, the experiment was performed on the Spallation Neutron Source 1 GeV H⁻ beam using a 1 MW peak power UV laser and resulted in ~95% stripping efficiency.

  7. High efficiency laser-assisted H⁻ charge exchange for microsecond duration beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cousineau, Sarah; Rakhman, Abdurahim; Kay, Martin

    Laser-assisted stripping is a novel approach to H⁻ charge exchange that overcomes long-standing limitations associated with the traditional, foil-based method of producing high-intensity, time-structured beams of protons. This paper reports on the first successful demonstration of the laser stripping technique for microsecond duration beams. The experiment represents a factor of 1000 increase in the stripped pulse duration compared with the previous proof-of-principle demonstration. The central theme of the experiment is the implementation of methods to reduce the required average laser power such that high efficiency stripping can be accomplished for microsecond duration beams using conventional laser technology. In conclusion, the experiment was performed on the Spallation Neutron Source 1 GeV H⁻ beam using a 1 MW peak power UV laser and resulted in ~95% stripping efficiency.

  8. Reducing the impact of speed dispersion on subway corridor flow.

    PubMed

    Qiao, Jing; Sun, Lishan; Liu, Xiaoming; Rong, Jian

    2017-11-01

    The rapid increase in the volume of subway passengers in Beijing has necessitated higher requirements for the safety and efficiency of subway corridors. Speed dispersion is an important factor that affects safety and efficiency. This paper aims to analyze management control methods for reducing pedestrian speed dispersion in subways. The characteristics of the speed dispersion of pedestrian flow were analyzed according to field videos. Control measures consisting of traffic signs, yellow markings, and guardrails were proposed to alleviate speed dispersion. The results showed that placing traffic signs, yellow markings, and a guardrail improved safety and efficiency for all four volumes of pedestrian traffic flow, and the best-performing measure was the guardrail. Furthermore, the guardrail's optimal position and design were explored. The research findings provide a rationale for subway managers in optimizing pedestrian traffic flow in subway corridors. Copyright © 2017. Published by Elsevier Ltd.

  9. Computational Issues in Damping Identification for Large Scale Problems

    NASA Technical Reports Server (NTRS)

    Pilkey, Deborah L.; Roe, Kevin P.; Inman, Daniel J.

    1997-01-01

    Two damping identification methods are tested for efficiency in large-scale applications. One is an iterative routine, and the other a least squares method. Numerical simulations have been performed on multiple degree-of-freedom models to test the effectiveness of the algorithm and the usefulness of parallel computation for the problems. High Performance Fortran is used to parallelize the algorithm. Tests were performed using the IBM-SP2 at NASA Ames Research Center. The least squares method tested incurs high communication costs, which reduces the benefit of high performance computing. This method's memory requirement grows at a very rapid rate, meaning that larger problems can quickly exceed available computer memory. The iterative method's memory requirement grows at a much slower pace and is able to handle problems with 500+ degrees of freedom on a single processor. This method benefits from parallelization, and significant speedup can be seen for problems of 100+ degrees of freedom.

  10. Fast algorithms for Quadrature by Expansion I: Globally valid expansions

    NASA Astrophysics Data System (ADS)

    Rachh, Manas; Klöckner, Andreas; O'Neil, Michael

    2017-09-01

    The use of integral equation methods for the efficient numerical solution of PDE boundary value problems requires two main tools: quadrature rules for the evaluation of layer potential integral operators with singular kernels, and fast algorithms for solving the resulting dense linear systems. Classically, these tools were developed separately. In this work, we present a unified numerical scheme based on coupling Quadrature by Expansion, a recent quadrature method, to a customized Fast Multipole Method (FMM) for the Helmholtz equation in two dimensions. The method allows the evaluation of layer potentials in linear-time complexity, anywhere in space, with a uniform, user-chosen level of accuracy as a black-box computational method. Providing this capability requires geometric and algorithmic considerations beyond the needs of standard FMMs as well as careful consideration of the accuracy of multipole translations. We illustrate the speed and accuracy of our method with various numerical examples.

  11. Focus on Efficient Management.

    ERIC Educational Resources Information Center

    Kentucky State Dept. of Education, Frankfort. Office of Resource Management.

    Compiled as a workshop handbook, this guide presents information to help food service program administrators comply with federal regulations and evaluate and upgrade their operations. Part I discusses requirements of the National School Lunch Program, focusing on the "offer versus serve" method of service enacted in 1976 to reduce waste.…

  12. Effects of Toluene, Acrolein and Vinyl Chloride on Motor Activity of Drosophila Melanogaster

    EPA Science Inventory

    The data generated by current high-throughput assays for chemical toxicity require information to link effects at molecular targets to adverse outcomes in whole animals. In addition, more efficient methods for testing volatile chemicals are needed. Here we begin to address these ...

  13. Really big data: Processing and analysis of large datasets

    USDA-ARS?s Scientific Manuscript database

    Modern animal breeding datasets are large and getting larger, due in part to the recent availability of DNA data for many animals. Computational methods for efficiently storing and analyzing those data are under development. The amount of storage space required for such datasets is increasing rapidl...

  14. Partition method and experimental validation for impact dynamics of flexible multibody system

    NASA Astrophysics Data System (ADS)

    Wang, J. Y.; Liu, Z. Y.; Hong, J. Z.

    2018-06-01

    The impact problem of a flexible multibody system is a non-smooth, high-transient, and strong-nonlinear dynamic process with variable boundary. How to model the contact/impact process accurately and efficiently is one of the main difficulties in many engineering applications. The numerical approaches being used widely in impact analysis are mainly from two fields: multibody system dynamics (MBS) and computational solid mechanics (CSM). Approaches based on MBS provide a more efficient yet less accurate analysis of the contact/impact problems, while approaches based on CSM are well suited for particularly high accuracy needs, yet require very high computational effort. To bridge the gap between accuracy and efficiency in the dynamic simulation of a flexible multibody system with contacts/impacts, a partition method is presented considering that the contact body is divided into two parts, an impact region and a non-impact region. The impact region is modeled using the finite element method to guarantee the local accuracy, while the non-impact region is modeled using the modal reduction approach to raise the global efficiency. A three-dimensional rod-plate impact experiment is designed and performed to validate the numerical results. The principle for how to partition the contact bodies is proposed: the maximum radius of the impact region can be estimated by an analytical method, and the modal truncation orders of the non-impact region can be estimated by the highest frequency of the signal measured. The simulation results using the presented method are in good agreement with the experimental results. It shows that this method is an effective formulation considering both accuracy and efficiency. Moreover, a more complicated multibody impact problem of a crank slider mechanism is investigated to strengthen this conclusion.

  15. Generalized source Finite Volume Method for radiative transfer equation in participating media

    NASA Astrophysics Data System (ADS)

    Zhang, Biao; Xu, Chuan-Long; Wang, Shi-Min

    2017-03-01

    Temperature monitoring is very important in a combustion system. In recent years, non-intrusive temperature reconstruction has been explored intensively on the basis of calculating arbitrary directional radiative intensities. In this paper, a new method named the Generalized Source Finite Volume Method (GSFVM) was proposed. It is based on the radiative transfer equation and the Finite Volume Method (FVM). This method can be used to calculate arbitrary directional radiative intensities and is proven to be accurate and efficient. To verify the performance of this method, six test cases of 1D, 2D, and 3D radiative transfer problems were investigated. The numerical results show that the efficiency of this method is close to that of the radial basis function interpolation method, but its accuracy and stability are higher than those of the interpolation method. The accuracy of the GSFVM is similar to that of the Backward Monte Carlo (BMC) algorithm, while the time required by the GSFVM is much shorter than that of the BMC algorithm. Therefore, the GSFVM can be used for temperature reconstruction and for improving the accuracy of the FVM.

  16. Weighted Geometric Dilution of Precision Calculations with Matrix Multiplication

    PubMed Central

    Chen, Chien-Sheng

    2015-01-01

    To enhance the performance of location estimation in wireless positioning systems, the geometric dilution of precision (GDOP) is widely used as a criterion for selecting measurement units. Since GDOP represents the geometric effect on the relationship between measurement error and positioning determination error, the smallest GDOP of the measurement unit subset is usually chosen for positioning. The conventional GDOP calculation using the matrix inversion method requires many operations. Because more and more measurement units can be chosen nowadays, an efficient calculation should be designed to decrease the complexity. Since the performance of each measurement unit is different, the weighted GDOP (WGDOP), instead of GDOP, is used to select the measurement units to improve the accuracy of location. To calculate WGDOP effectively and efficiently, a closed-form solution for WGDOP calculation is proposed for the case when more than four measurements are available. In this paper, an efficient WGDOP calculation method applying matrix multiplication that is easy for hardware implementation is proposed. In addition, the proposed method can be used when more than or exactly four measurements are available. Even when using the all-in-view method for positioning, the proposed method still can reduce the computational overhead. The proposed WGDOP methods with less computation are compatible with global positioning system (GPS), wireless sensor networks (WSN) and cellular communication systems. PMID:25569755
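    For context, the sketch below shows the conventional matrix-inversion definitions of GDOP and WGDOP that the multiplication-based method is designed to avoid; the geometry matrix and weights in the usage comment are placeholders.

        # Conventional matrix-inversion forms of GDOP and WGDOP, shown for
        # reference only; the paper's contribution is an inversion-free,
        # multiplication-based evaluation of the same quantities.
        import numpy as np

        def gdop(H):
            """H: n-by-4 geometry matrix (unit line-of-sight components plus a column of ones)."""
            return float(np.sqrt(np.trace(np.linalg.inv(H.T @ H))))

        def wgdop(H, w):
            """w: per-measurement weights, e.g. inverse error variances."""
            W = np.diag(w)
            return float(np.sqrt(np.trace(np.linalg.inv(H.T @ W @ H))))

        # Placeholder usage: H = np.hstack([unit_vectors, np.ones((n, 1))]); wgdop(H, weights)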

  17. Taming Log Files from Game/Simulation-Based Assessments: Data Models and Data Analysis Tools. Research Report. ETS RR-16-10

    ERIC Educational Resources Information Center

    Hao, Jiangang; Smith, Lawrence; Mislevy, Robert; von Davier, Alina; Bauer, Malcolm

    2016-01-01

    Extracting information efficiently from game/simulation-based assessment (G/SBA) logs requires two things: a well-structured log file and a set of analysis methods. In this report, we propose a generic data model specified as an extensible markup language (XML) schema for the log files of G/SBAs. We also propose a set of analysis methods for…

  18. Process research of non-Cz material

    NASA Astrophysics Data System (ADS)

    Campbell, R. B.

    1985-06-01

    Efforts were aimed at achieving a simultaneous front and back junction. Lasers and other heat sources were tried. Successful results were gained by two different methods: laser and flash lamp. Polymer dopants were applied to both sides of dendritic web cells. Rapid heating and cooling avoided any cross contamination between two junctions after removal of the dendrites. Both methods required subsequent thermal annealing in an oven to produce maximum efficiency cells.

  19. Development of Methods for Carrier-Mediated Targeted Delivery of Antiviral Compounds Using Monoclonal Antibodies

    DTIC Science & Technology

    1987-04-01

    2. Comparison of Immunofluorescent Staining in Formaldehyde-Fixed Pichinde Virus-Infected Cells That Had Been either Dried prior to Reaction with...was undertaken. B. Experimental Methods: General Procedures and Instrumentation. When required, reactions and...period, the reaction mixture was red and efficient stirring became very difficult. After the addition was complete, the reaction mixture was allowed

  20. Clustering methods applied in the detection of Ki67 hot-spots in whole tumor slide images: an efficient way to characterize heterogeneous tissue-based biomarkers.

    PubMed

    Lopez, Xavier Moles; Debeir, Olivier; Maris, Calliope; Rorive, Sandrine; Roland, Isabelle; Saerens, Marco; Salmon, Isabelle; Decaestecker, Christine

    2012-09-01

    Whole-slide scanners allow the digitization of an entire histological slide at very high resolution. This new acquisition technique opens a wide range of possibilities for addressing challenging image analysis problems, including the identification of tissue-based biomarkers. In this study, we use whole-slide scanner technology for imaging the proliferating activity patterns in tumor slides based on Ki67 immunohistochemistry. Faced with large images, pathologists require tools that can help them identify tumor regions that exhibit high proliferating activity, called "hot-spots" (HSs). Pathologists need tools that can quantitatively characterize these HS patterns. To respond to this clinical need, the present study investigates various clustering methods with the aim of identifying Ki67 HSs in whole tumor slide images. This task requires a method capable of identifying an unknown number of clusters, which may be highly variable in terms of shape, size, and density. We developed a hybrid clustering method, referred to as Seedlink. Compared to manual HS selections by three pathologists, we show that Seedlink provides an efficient way of detecting Ki67 HSs and improves the agreement among pathologists when identifying HSs. Copyright © 2012 International Society for Advancement of Cytometry.

  1. Chapter 17: Residential Behavior Evaluation Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurnik, Charles W.; Stewart, James; Todd, Annika

    Residential behavior-based (BB) programs use strategies grounded in the behavioral and social sciences to influence household energy use. These may include providing households with real-time or delayed feedback about their energy use; supplying energy efficiency education and tips; rewarding households for reducing their energy use; comparing households to their peers; and establishing games, tournaments, and competitions. BB programs often target multiple energy end uses and encourage energy savings, demand savings, or both. Savings from BB programs are usually a small percentage of energy use, typically less than 5 percent. Utilities will continue to implement residential BB programs as large-scale, randomized control trials (RCTs); however, some are now experimenting with alternative program designs that are smaller scale; involve new communication channels such as the web, social media, and text messaging; or that employ novel strategies for encouraging behavior change (for example, Facebook competitions). These programs will create new evaluation challenges and may require different evaluation methods than those currently employed to verify any savings they generate. Quasi-experimental methods, however, require stronger assumptions to yield valid savings estimates and may not measure savings with the same degree of validity and accuracy as randomized experiments.

  2. Groundwater management under uncertainty using a stochastic multi-cell model

    NASA Astrophysics Data System (ADS)

    Joodavi, Ata; Zare, Mohammad; Ziaei, Ali Naghi; Ferré, Ty P. A.

    2017-08-01

    The optimization of spatially complex groundwater management models over long time horizons requires the use of computationally efficient groundwater flow models. This paper presents a new stochastic multi-cell lumped-parameter aquifer model that explicitly considers uncertainty in groundwater recharge. To achieve this, the multi-cell model is combined with the constrained-state formulation method. In this method, the lower and upper bounds of groundwater heads are incorporated into the mass balance equation using indicator functions. This provides expressions for the means, variances and covariances of the groundwater heads, which can be included in the constraint set in an optimization model. This method was used to formulate two separate stochastic models: (i) groundwater flow in a two-cell aquifer model with normal and non-normal distributions of groundwater recharge; and (ii) groundwater management in a multiple cell aquifer in which the differences between groundwater abstractions and water demands are minimized. The comparison between the results obtained from the proposed modeling technique with those from Monte Carlo simulation demonstrates the capability of the proposed models to approximate the means, variances and covariances. Significantly, considering covariances between the heads of adjacent cells allows a more accurate estimate of the variances of the groundwater heads. Moreover, this modeling technique requires no discretization of state variables, thus offering an efficient alternative to computationally demanding methods.

  3. Capillary electrophoresis method to determine siRNA complexation with cationic liposomes.

    PubMed

    Furst, Tania; Bettonville, Virginie; Farcas, Elena; Frere, Antoine; Lechanteur, Anna; Evrard, Brigitte; Fillet, Marianne; Piel, Géraldine; Servais, Anne-Catherine

    2016-10-01

    Small interfering RNA (siRNA) inducing gene silencing has great potential to treat many human diseases. To ensure effective siRNA delivery, it must be complexed with an appropriate vector, generally nanoparticles. The nanoparticulate complex requires optimal physicochemical characterization, and the complexation efficiency has to be precisely determined. The methods usually used to measure complexation are gel electrophoresis and the RiboGreen® fluorescence-based assay. However, those approaches are not automated and present some drawbacks such as low throughput and the use of carcinogenic reagents. The aim of this study is to develop a new simple and fast method to accurately quantify the complexation efficiency. In this study, capillary electrophoresis (CE) was used to determine the siRNA complexation with cationic liposomes. The short-end injection mode applied enabled siRNA detection in less than 5 min. Moreover, the CE technique offers many advantages compared with the other classical methods. It is automated and does not require sample preparation or expensive reagents. Moreover, no mutagenic risk is associated with the CE approach since no carcinogenic product is used. Finally, this methodology can also be extended to the characterization of other types of nanoparticles encapsulating siRNA, such as cationic polymeric nanoparticles. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Nonparametric Methods in Astronomy: Think, Regress, Observe—Pick Any Three

    NASA Astrophysics Data System (ADS)

    Steinhardt, Charles L.; Jermyn, Adam S.

    2018-02-01

    Telescopes are much more expensive than astronomers, so it is essential to minimize required sample sizes by using the most data-efficient statistical methods possible. However, the most commonly used model-independent techniques for finding the relationship between two variables in astronomy are flawed. In the worst case they can lead without warning to subtly yet catastrophically wrong results, and even in the best case they require more data than necessary. Unfortunately, there is no single best technique for nonparametric regression. Instead, we provide a guide for how astronomers can choose the best method for their specific problem and provide a python library with both wrappers for the most useful existing algorithms and implementations of two new algorithms developed here.
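    As a concrete, deliberately generic example of model-independent regression, the sketch below implements Nadaraya-Watson kernel smoothing with a Gaussian kernel; it is not the authors' python library nor either of their two new algorithms, and the bandwidth choice is illustrative.

        # Nadaraya-Watson kernel regression with a Gaussian kernel; a standard
        # nonparametric baseline, not the authors' library or new algorithms.
        import numpy as np

        def nadaraya_watson(x_train, y_train, x_query, bandwidth):
            d = (x_query[:, None] - x_train[None, :]) / bandwidth   # pairwise scaled distances
            w = np.exp(-0.5 * d ** 2)                               # Gaussian kernel weights
            return (w @ y_train) / np.sum(w, axis=1)

        # x = np.random.uniform(0, 10, 200); y = np.sin(x) + 0.1 * np.random.randn(200)
        # y_hat = nadaraya_watson(x, y, np.linspace(0, 10, 100), bandwidth=0.5)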

  5. Parametric State Space Structuring

    NASA Technical Reports Server (NTRS)

    Ciardo, Gianfranco; Tilgner, Marco

    1997-01-01

    Structured approaches based on Kronecker operators for the description and solution of the infinitesimal generator of continuous-time Markov chains are receiving increasing interest. However, their main advantage, a substantial reduction in the memory requirements during the numerical solution, comes at a price. Methods based on the "potential state space" allocate a probability vector that might be much larger than actually needed. Methods based on the "actual state space", instead, have an additional logarithmic overhead. We present an approach that realizes the advantages of both methods with none of their disadvantages, by partitioning the local state spaces of each submodel. We apply our results to a model of software rendezvous, and show how they reduce memory requirements while, at the same time, improving the efficiency of the computation.
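    The memory advantage of Kronecker-structured generators comes from never assembling the full matrix. A minimal sketch of the standard identity used for this, (A ⊗ B) vec(X) = vec(B X Aᵀ), is given below with arbitrary small matrices; actual descriptor-based solvers are considerably more involved.

        # Apply (A ⊗ B) to a vector without forming the Kronecker product,
        # using the column-major vec identity (A ⊗ B) vec(X) = vec(B X A^T).
        import numpy as np

        def kron_matvec(A, B, x):
            """Compute (A ⊗ B) @ x without building the (na*nb) x (na*nb) matrix."""
            na, nb = A.shape[0], B.shape[0]
            X = x.reshape(na, nb).T          # reinterpret x as vec(X), X is nb-by-na
            return (B @ X @ A.T).T.reshape(-1)

        # A = np.random.rand(3, 3); B = np.random.rand(4, 4); x = np.random.rand(12)
        # assert np.allclose(kron_matvec(A, B, x), np.kron(A, B) @ x)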

  6. Spatial recurrence analysis: A sensitive and fast detection tool in digital mammography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prado, T. L.; Galuzio, P. P.; Lopes, S. R.

    Efficient diagnostics of breast cancer requires fast digital mammographic image processing. Many breast lesions, both benign and malignant, are barely visible to the untrained eye and require accurate and reliable methods of image processing. We propose a new method of digital mammographic image analysis that meets both needs. It uses the concept of spatial recurrence as the basis of a spatial recurrence quantification analysis, which is the spatial extension of the well-known time recurrence analysis. The recurrence-based quantifiers are able to evidence breast lesions as well as the best standard image processing methods available, but with better control over the spurious fragments in the image.

  7. First- and Second-Order Sensitivity Analysis of a P-Version Finite Element Equation Via Automatic Differentiation

    NASA Technical Reports Server (NTRS)

    Hou, Gene

    1998-01-01

    Sensitivity analysis is a technique for determining derivatives of system responses with respect to design parameters. Among the many methods available for sensitivity analysis, automatic differentiation has been proven through many applications in fluid dynamics and structural mechanics to be an accurate and easy method for obtaining derivatives. Nevertheless, the method can be computationally expensive and can require a large amount of memory. This project will apply an automatic differentiation tool, ADIFOR, to a p-version finite element code to obtain first- and second-order thermal derivatives, respectively. The focus of the study is on the implementation process and the performance of the ADIFOR-enhanced codes for sensitivity analysis in terms of memory requirement, computational efficiency, and accuracy.
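    ADIFOR itself performs source transformation on Fortran code and is not reproduced here, but the principle it relies on can be shown with a minimal forward-mode (dual-number) example in Python; the function being differentiated is arbitrary.

        # Forward-mode automatic differentiation with dual numbers: exact
        # derivatives by operator overloading, no finite differences.
        # Illustrates the AD principle only; not ADIFOR.
        import math

        class Dual:
            def __init__(self, value, deriv=0.0):
                self.value, self.deriv = value, deriv
            def __add__(self, other):
                other = other if isinstance(other, Dual) else Dual(other)
                return Dual(self.value + other.value, self.deriv + other.deriv)
            __radd__ = __add__
            def __mul__(self, other):
                other = other if isinstance(other, Dual) else Dual(other)
                return Dual(self.value * other.value,
                            self.deriv * other.value + self.value * other.deriv)
            __rmul__ = __mul__

        def dsin(x):
            return Dual(math.sin(x.value), math.cos(x.value) * x.deriv)

        # d/dx [ x*sin(x) + 3x ] at x = 2.0:
        # x = Dual(2.0, 1.0); y = x * dsin(x) + 3 * x; print(y.value, y.deriv)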

  8. DNA methods: critical review of innovative approaches.

    PubMed

    Kok, Esther J; Aarts, Henk J M; Van Hoef, A M Angeline; Kuiper, Harry A

    2002-01-01

    The presence of ingredients derived from genetically modified organisms (GMOs) in food products in the market place is subject to a number of European regulations that stipulate which product consisting of or containing GMO-derived ingredients should be labeled as such. In order to maintain these labeling requirements, a variety of different GMO detection methods have been developed to screen for either the presence of DNA or protein derived from (approved) GM varieties. Recent incidents where unapproved GM varieties entered the European market show that more powerful GMO detection and identification methods will be needed to maintain European labeling requirements in an adequate, efficient, and cost-effective way. This report discusses the current state-of-the-art as well as future developments in GMO detection.

  9. Portable brine evaporator unit, process, and system

    DOEpatents

    Hart, Paul John; Miller, Bruce G.; Wincek, Ronald T.; Decker, Glenn E.; Johnson, David K.

    2009-04-07

    The present invention discloses a comprehensive, efficient, and cost-effective portable evaporator unit, method, and system for the treatment of brine. The evaporator unit, method, and system require a pretreatment process that removes heavy metals, crude oil, and other contaminants in preparation for the evaporator unit. The pretreatment and the evaporator unit, method, and system process metals and brine at the site where they are generated (the well site), saving significant money for producers, who can avoid present and future increases in transportation costs.

  10. Rapid Assembly of DNA via Ligase Cycling Reaction (LCR).

    PubMed

    Chandran, Sunil

    2017-01-01

    The assembly of multiple DNA parts into a larger DNA construct is a requirement in most synthetic biology laboratories. Here we describe a method for the efficient, high-throughput, assembly of DNA utilizing the ligase chain reaction (LCR). The LCR method utilizes non-overlapping DNA parts that are ligated together with the guidance of bridging oligos. Using this method, we have successfully assembled up to 20 DNA parts in a single reaction or DNA constructs up to 26 kb in size.

  11. Bio-Orthogonal Mediated Nucleic Acid Transfection of Cells via Cell Surface Engineering.

    PubMed

    O'Brien, Paul J; Elahipanah, Sina; Rogozhnikov, Dmitry; Yousaf, Muhammad N

    2017-05-24

    The efficient delivery of foreign nucleic acids (transfection) into cells is a critical tool for fundamental biomedical research and a pillar of several biotechnology industries. There are currently three main strategies for transfection including reagent, instrument, and viral based methods. Each technology has significantly advanced cell transfection; however, reagent based methods have captured the majority of the transfection market due to their relatively low cost and ease of use. This general method relies on the efficient packaging of a reagent with nucleic acids to form a stable complex that is subsequently associated and delivered to cells via nonspecific electrostatic targeting. Reagent transfection methods generally use various polyamine cationic type molecules to condense with negatively charged nucleic acids into a highly positively charged complex, which is subsequently delivered to negatively charged cells in culture for association, internalization, release, and expression. Although this appears to be a straightforward procedure, there are several major issues including toxicity, low efficiency, sorting of viable transfected from nontransfected cells, and limited scope of transfectable cell types. Herein, we report a new strategy (SnapFect) for nucleic acid transfection to cells that does not rely on electrostatic interactions but instead uses an integrated approach combining bio-orthogonal liposome fusion, click chemistry, and cell surface engineering. We show that a target cell population is rapidly and efficiently engineered to present a bio-orthogonal functional group on its cell surface through nanoparticle liposome delivery and fusion. A complementary bio-orthogonal nucleic acid complex is then formed and delivered to the primed cells, where chemoselective click chemistry induces transfection. This new strategy requires minimal time, steps, and reagents and leads to superior transfection results for a broad range of cell types. Moreover, the transfection is efficient with high cell viability and does not require a postsorting step to separate transfected from nontransfected cells in the cell population. We also show for the first time a precision transfection strategy where a single cell type in a coculture is target-transfected via bio-orthogonal click chemistry.

  12. Application of augmented-Lagrangian methods in meteorology: Comparison of different conjugate-gradient codes for large-scale minimization

    NASA Technical Reports Server (NTRS)

    Navon, I. M.

    1984-01-01

    A Lagrange multiplier method using techniques developed by Bertsekas (1982) was applied to solving the problem of enforcing simultaneous conservation of the nonlinear integral invariants of the shallow water equations on a limited area domain. This application of nonlinear constrained optimization is of the large dimensional type and the conjugate gradient method was found to be the only computationally viable method for the unconstrained minimization. Several conjugate-gradient codes were tested and compared for increasing accuracy requirements. Robustness and computational efficiency were the principal criteria.
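    As a point of reference for the method of multipliers used above (stated here in generic notation rather than for the shallow-water invariants themselves), the augmented Lagrangian for equality constraints $c(x) = 0$ is

        L_{\rho}(x,\lambda) = f(x) + \lambda^{T} c(x) + \tfrac{\rho}{2}\,\lVert c(x) \rVert^{2},

    and each outer iteration performs an unconstrained minimization of $L_{\rho}(\cdot,\lambda^{k})$ (here by conjugate gradients), followed by the multiplier update $\lambda^{k+1} = \lambda^{k} + \rho\, c(x^{k})$, with the penalty parameter $\rho$ possibly increased between iterations.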

  13. A Survey of Symplectic and Collocation Integration Methods for Orbit Propagation

    NASA Technical Reports Server (NTRS)

    Jones, Brandon A.; Anderson, Rodney L.

    2012-01-01

    Demands on numerical integration algorithms for astrodynamics applications continue to increase. Common methods, like explicit Runge-Kutta, meet the orbit propagation needs of most scenarios, but more specialized scenarios require new techniques to meet both computational efficiency and accuracy needs. This paper provides an extensive survey on the application of symplectic and collocation methods to astrodynamics. Both of these methods benefit from relatively recent theoretical developments, which improve their applicability to artificial satellite orbit propagation. This paper also details their implementation, with several tests demonstrating their advantages and disadvantages.
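    A minimal example of the class of integrator surveyed is the leapfrog (velocity-Verlet) scheme, sketched below for a normalized two-body problem; the step size and gravitational parameter are illustrative, and this is not one of the specific codes examined in the survey.

        # Leapfrog (velocity-Verlet) propagation of a normalized two-body orbit;
        # symplectic schemes keep the energy error bounded over long propagations.
        import numpy as np

        MU = 1.0  # normalized gravitational parameter GM (illustrative)

        def accel(r):
            return -MU * r / np.linalg.norm(r) ** 3

        def leapfrog(r, v, dt, steps):
            a = accel(r)
            for _ in range(steps):
                v = v + 0.5 * dt * a        # half kick
                r = r + dt * v              # drift
                a = accel(r)
                v = v + 0.5 * dt * a        # half kick
            return r, v

        # Circular orbit of radius 1: r0 = np.array([1.0, 0.0]); v0 = np.array([0.0, 1.0])
        # r, v = leapfrog(r0, v0, dt=0.01, steps=10000)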

  14. A Robust and Efficient Method for Steady State Patterns in Reaction-Diffusion Systems

    PubMed Central

    Lo, Wing-Cheong; Chen, Long; Wang, Ming; Nie, Qing

    2012-01-01

    An inhomogeneous steady state pattern of nonlinear reaction-diffusion equations with no-flux boundary conditions is usually computed by solving the corresponding time-dependent reaction-diffusion equations using temporal schemes. Nonlinear solvers (e.g., Newton’s method) take less CPU time in direct computation for the steady state; however, their convergence is sensitive to the initial guess, often leading to divergence or convergence to a spatially homogeneous solution. Systematic numerical exploration of spatial patterns of reaction-diffusion equations under different parameter regimes requires that the numerical method be efficient and robust to the initial condition or initial guess, with better likelihood of convergence to an inhomogeneous pattern. Here, a new approach that combines the advantages of temporal schemes in robustness and Newton’s method in fast convergence in solving steady states of reaction-diffusion equations is proposed. In particular, an adaptive implicit Euler with inexact solver (AIIE) method is found to be much more efficient than temporal schemes and more robust in convergence than typical nonlinear solvers (e.g., Newton’s method) in finding the inhomogeneous pattern. Application of this new approach to two reaction-diffusion equations in one, two, and three spatial dimensions, along with direct comparisons to several other existing methods, demonstrates that AIIE is a more desirable method for searching inhomogeneous spatial patterns of reaction-diffusion equations in a large parameter space. PMID:22773849
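    The flavor of the approach can be conveyed with a much-simplified sketch: a fixed-step implicit Euler iteration toward a steady state of a 1-D reaction-diffusion equation, with an exact dense Newton solve at each step. The adaptive step control and inexact solver that make AIIE efficient, and the 2-D/3-D cases, are omitted; the kinetics and grid are illustrative.

        # Simplified fixed-step implicit Euler toward a steady state of
        # u_t = D u_xx + f(u) with no-flux boundaries (dense Newton solve).
        # Not the adaptive, inexact-solver AIIE scheme of the paper.
        import numpy as np

        def laplacian(n, h):
            L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
                 + np.diag(np.ones(n - 1), -1)) / h**2
            L[0, 1] = L[-1, -2] = 2.0 / h**2      # ghost-point no-flux boundaries
            return L

        def implicit_euler_step(u_old, L, D, f, fprime, dt, newton_iters=8):
            u = u_old.copy()
            for _ in range(newton_iters):
                G = u - u_old - dt * (D * (L @ u) + f(u))                # residual of the step
                J = np.eye(u.size) - dt * (D * L + np.diag(fprime(u)))   # Jacobian
                u = u - np.linalg.solve(J, G)
            return u

        # n = 100; h = 1.0 / (n - 1); L = laplacian(n, h)
        # f = lambda u: u * (1.0 - u); fp = lambda u: 1.0 - 2.0 * u   # logistic kinetics
        # u = np.random.rand(n)
        # for _ in range(50): u = implicit_euler_step(u, L, 0.01, f, fp, dt=1.0)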

  15. A time-domain finite element boundary integral approach for elastic wave scattering

    NASA Astrophysics Data System (ADS)

    Shi, F.; Lowe, M. J. S.; Skelton, E. A.; Craster, R. V.

    2018-04-01

    The response of complex scatterers, such as rough or branched cracks, to incident elastic waves is required in many areas of industrial importance such as those in non-destructive evaluation and related fields; we develop an approach to generate accurate and rapid simulations. To achieve this we develop, in the time domain, an implementation to efficiently couple the finite element (FE) method within a small local region, and the boundary integral (BI) globally. The FE explicit scheme is run in a local box to compute the surface displacement of the scatterer, by giving forcing signals to excitation nodes, which can lie on the scatterer itself. The required input forces on the excitation nodes are obtained with a reformulated FE equation, according to the incident displacement field. The surface displacements computed by the local FE are then projected, through time-domain BI formulae, to calculate the scattering signals with different modes. This new method yields huge improvements in the efficiency of FE simulations for scattering from complex scatterers. We present results using different shapes and boundary conditions, all simulated using this approach in both 2D and 3D, and then compare with full FE models and theoretical solutions to demonstrate the efficiency and accuracy of this numerical approach.

  16. FastMag: Fast micromagnetic simulator for complex magnetic structures (invited)

    NASA Astrophysics Data System (ADS)

    Chang, R.; Li, S.; Lubarda, M. V.; Livshitz, B.; Lomakin, V.

    2011-04-01

    A fast micromagnetic simulator (FastMag) for general problems is presented. FastMag solves the Landau-Lifshitz-Gilbert equation and can handle multiscale problems with a high computational efficiency. The simulator derives its high performance from efficient methods for evaluating the effective field and from implementations on massively parallel graphics processing unit (GPU) architectures. FastMag discretizes the computational domain into tetrahedral elements and therefore is highly flexible for general problems. The magnetostatic field is computed via the superposition principle for both volume and surface parts of the computational domain. This is accomplished by implementing efficient quadrature rules and analytical integration for overlapping elements in which the integral kernel is singular. Thus, discretized superposition integrals are computed using a nonuniform grid interpolation method, which evaluates the field from N sources at N collocated observers in O(N) operations. This approach allows handling objects of arbitrary shape, allows easily calculating the field outside the magnetized domains, does not require solving a linear system of equations, and requires little memory. FastMag is implemented on GPUs, with GPU-to-CPU speed-ups of 2 orders of magnitude. Simulations are shown of a large array of magnetic dots and a recording head fully discretized down to the exchange length, with over a hundred million tetrahedral elements on an inexpensive desktop computer.

  17. Synergistic effect of electrical and chemical factors on endocytosis in micro-discharge plasma gene transfection

    NASA Astrophysics Data System (ADS)

    Jinno, M.; Ikeda, Y.; Motomura, H.; Isozaki, Y.; Kido, Y.; Satoh, S.

    2017-06-01

    We have developed a new micro-discharge plasma (MDP)-based gene transfection method, which transfers genes into cells with high efficiency and low cytotoxicity; however, the mechanism underlying the method is still unknown. Studies revealed that the N-acetylcysteine-mediated inhibition of reactive oxygen species (ROS) activity completely abolished gene transfer. In this study, we used laser-produced plasma to demonstrate that gene transfer does not occur in the absence of electrical factors. Our results show that both electrical and chemical factors are necessary for gene transfer inside cells by microplasma irradiation. This indicates that plasma-mediated gene transfection utilizes the synergy between electrical and chemical factors. The electric field threshold required for transfection was approximately 1 kV m-1 in our MDP system. This indicates that MDP irradiation supplies sufficient concentrations of ROS, and the stimulation intensity of the electric field determines the transfection efficiency in our system. Gene transfer by plasma irradiation depends mainly on endocytosis, which accounts for at least 80% of the transfer, and clathrin-mediated endocytosis is a dominant endocytosis. In plasma-mediated gene transfection, alterations in electrical and chemical factors can independently regulate plasmid DNA adhesion and triggering of endocytosis, respectively. This implies that plasma characteristics can be adjusted according to target cell requirements, and the transfection process can be optimized with minimum damage to cells and maximum efficiency. This may explain how MDP simultaneously achieves high transfection efficiency with minimal cell damage.

  18. Confine Clay in an Alternating Multilayered Structure through Injection Molding: A Simple and Efficient Route to Improve Barrier Performance of Polymeric Materials.

    PubMed

    Yu, Feilong; Deng, Hua; Bai, Hongwei; Zhang, Qin; Wang, Ke; Chen, Feng; Fu, Qiang

    2015-05-20

    Various methods have been used to trigger the formation of multilayered structures for a wide range of applications. These methods are often complicated, with low production efficiency, or require complex equipment. Herein, we demonstrate a simple and efficient method for the fabrication of polymeric sheets containing a multilayered structure with enhanced barrier property through high speed thin-wall injection molding (HSIM). To achieve this, montmorillonite (MMT) is added into PE first, then blended with PP to fabricate PE-MMT/PP ternary composites. It is demonstrated that an alternating multilayer structure could be obtained in the ternary composites because of low interfacial tension and good viscosity match between the different polymer components. MMT is selectively dispersed in the PE phase with a partially exfoliated/partially intercalated microstructure. 2D-WAXD analysis indicates that the clay tactoids in PE-MMT/PP exhibit a uniplanar-axial orientation with their surface parallel to the molded part surface, while the tactoids in binary PE-MMT composites with the same overall MMT contents show less orientation. The enhanced orientation of nanoclay in PE-MMT/PP could be attributed to the confinement of the alternating multilayer structure, which prohibits the tumbling and rotation of nanoplatelets. Therefore, the oxygen barrier property of PE-MMT/PP is superior to that of PE-MMT because of the increased gas permeation pathway. Compared with the results obtained for PE-based composites in the literature, outstanding barrier performance (45.7% and 58.2% improvement with 1.5 and 2.5 wt % MMT content, respectively) is achieved in the current study. Two factors are considered responsible for such improvement: enhanced MMT orientation caused by the confinement in the layered structure, and the higher local density of MMT in the layered structure, which induces denser assembly. Finally, enhancement of barrier properties by confining impermeable fillers into an alternating multilayer structure through such a simple and efficient method could provide a novel route toward high-performance packaging materials and other functional materials requiring layered structures.

  19. Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.; Dyson, Rodger W.

    1999-01-01

    The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time-dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems are being done to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that have resulted from this work. A review of computational aeroacoustics has recently been given by Lele.

  20. Efficient SRAM yield optimization with mixture surrogate modeling

    NASA Astrophysics Data System (ADS)

    Zhongjian, Jiang; Zuochang, Ye; Yan, Wang

    2016-12-01

    Largely repeated cells such as SRAM cells usually require an extremely low failure rate to ensure a moderate chip yield. Though fast Monte Carlo methods such as importance sampling and its variants can be used for yield estimation, they are still very expensive if one needs to perform optimization based on such estimations. Typically the yield calculation requires a great deal of SPICE simulation, and the circuit simulations account for the largest proportion of the time spent in the yield calculation. In this paper, a new method is proposed to address this issue. The key idea is to establish an efficient mixture surrogate model built on the design variables and process variables. The model is constructed by running SPICE simulations to obtain a set of sample points and then training the mixture surrogate model on these points with the lasso algorithm. Experimental results show that the proposed model calculates the yield accurately and brings significant speed-ups to the calculation of the failure rate. Based on the model, we developed a further accelerated algorithm to enhance the speed of the yield calculation. It is suitable for high-dimensional process variables and multi-performance applications.
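    The surrogate-plus-Monte-Carlo idea can be sketched with generic tools: fit a sparse (lasso) regression on a modest set of simulator samples, then estimate the failure rate by cheap Monte Carlo on the surrogate instead of on SPICE. The polynomial feature map, threshold, and the run_spice_samples step in the usage comment are placeholders, and this single lasso model is a simplified stand-in for the paper's mixture surrogate.

        # Sparse-regression surrogate plus Monte Carlo failure-rate estimation;
        # a simplified stand-in for the paper's mixture surrogate approach.
        import numpy as np
        from sklearn.linear_model import Lasso
        from sklearn.preprocessing import PolynomialFeatures

        def train_surrogate(X_train, y_train, degree=2, alpha=1e-3):
            """X_train: sampled process/design variables; y_train: simulated performance metric."""
            phi = PolynomialFeatures(degree)
            model = Lasso(alpha=alpha, max_iter=10000).fit(phi.fit_transform(X_train), y_train)
            return lambda X: model.predict(phi.transform(X))

        def failure_rate(surrogate, n_mc, n_vars, threshold, seed=0):
            """Fraction of Monte Carlo samples whose predicted metric violates the spec."""
            rng = np.random.default_rng(seed)
            samples = rng.standard_normal((n_mc, n_vars))   # normalized process variations
            return float(np.mean(surrogate(samples) > threshold))

        # X, y = run_spice_samples(...)   # hypothetical SPICE sampling step
        # rate = failure_rate(train_surrogate(X, y), 1_000_000, X.shape[1], threshold=spec)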
