Sample records for "methods provide efficient"

  1. Measuring Efficiency of Secondary Healthcare Providers in Slovenia

    PubMed Central

    Blatnik, Patricia; Bojnec, Štefan; Tušak, Matej

    2017-01-01

    Abstract The chief aim of this study was to analyze secondary healthcare providers' efficiency, focusing on the efficiency analysis of Slovene general hospitals. We intended to present a complete picture of the technical, allocative, and cost (economic) efficiency of general hospitals. Methods We researched these aspects of efficiency with two econometric methods. First, we calculated the necessary efficiency quotients with stochastic frontier analysis (SFA), which are obtained by econometric estimation of stochastic frontier functions; then, with data envelopment analysis (DEA), we calculated the quotients based on the linear programming method. Results The two chosen methods produced two different conclusions: the SFA method identified Celje General Hospital as the most efficient general hospital, whereas the DEA method identified Brežice General Hospital as the most efficient. Conclusion Our results are a useful tool that can help managers, payers, and designers of healthcare policy better understand how general hospitals operate. With the best practices of general hospitals at their disposal, these participants can decide with less difficulty on further business operations of general hospitals. PMID:28730180
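
The DEA side of this comparison reduces to a small linear program per hospital. Below is a minimal sketch of the input-oriented, constant-returns (CCR) envelopment model using scipy; the data shapes and model details are illustrative, not the study's actual specification.

```python
import numpy as np
from scipy.optimize import linprog

def dea_input_efficiency(X, Y):
    """Input-oriented CRS (CCR) DEA efficiency for each decision-making unit.

    X: (n_units, n_inputs), Y: (n_units, n_outputs). Returns theta in (0, 1].
    """
    X, Y = np.atleast_2d(np.asarray(X, float)), np.atleast_2d(np.asarray(Y, float))
    n, k = X.shape
    m = Y.shape[1]
    scores = []
    for o in range(n):
        # decision variables: [theta, lambda_1 .. lambda_n]; minimize theta
        c = np.r_[1.0, np.zeros(n)]
        # input constraints:  sum_j lambda_j * x_jk - theta * x_ok <= 0
        A_in = np.hstack([-X[o].reshape(k, 1), X.T])
        # output constraints: -sum_j lambda_j * y_jm <= -y_om
        A_out = np.hstack([np.zeros((m, 1)), -Y.T])
        res = linprog(c,
                      A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(k), -Y[o]],
                      bounds=[(0, None)] * (n + 1))
        scores.append(res.x[0])
    return np.array(scores)
```

With one input and one output, the score for each unit equals its output/input ratio relative to the best ratio in the sample, which makes the sketch easy to sanity-check by hand.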

  2. Efficient parameter estimation in longitudinal data analysis using a hybrid GEE method.

    PubMed

    Leung, Denis H Y; Wang, You-Gan; Zhu, Min

    2009-07-01

    The method of generalized estimating equations (GEEs) provides consistent estimates of the regression parameters in a marginal regression model for longitudinal data, even when the working correlation model is misspecified (Liang and Zeger, 1986). However, the efficiency of a GEE estimate can be seriously affected by the choice of the working correlation model. This study addresses this problem by proposing a hybrid method that combines multiple GEEs based on different working correlation models, using the empirical likelihood method (Qin and Lawless, 1994). Analyses show that this hybrid method is more efficient than a GEE using a misspecified working correlation model. Furthermore, if one of the working correlation structures correctly models the within-subject correlations, then this hybrid method provides the most efficient parameter estimates. In simulations, the hybrid method's finite-sample performance is superior to a GEE under any of the commonly used working correlation models and is almost fully efficient in all scenarios studied. The hybrid method is illustrated using data from a longitudinal study of the respiratory infection rates in 275 Indonesian children.
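
The basic GEE machinery behind this record can be illustrated with a simplified numpy implementation for a linear marginal model under an exchangeable working correlation (a toy sketch, not the hybrid empirical-likelihood estimator the paper proposes):

```python
import numpy as np

def gee_exchangeable(X_groups, y_groups, n_iter=20):
    """Toy GEE for a linear marginal model with an exchangeable working
    correlation (identity link, Gaussian variance). X_groups and y_groups
    are lists of per-subject design matrices and response vectors."""
    p = X_groups[0].shape[1]
    beta = np.zeros(p)
    for _ in range(n_iter):
        # moment estimates of the dispersion and the common correlation alpha
        res = [y - X @ beta for X, y in zip(X_groups, y_groups)]
        sigma2 = np.mean(np.concatenate(res) ** 2)
        num, den = 0.0, 0.0
        for r in res:
            n_i = len(r)
            num += (r.sum() ** 2 - (r ** 2).sum()) / 2.0   # sum over pairs j<k
            den += n_i * (n_i - 1) / 2.0
        alpha = num / (den * sigma2)
        # GEE update: beta = (sum X' V^-1 X)^-1 sum X' V^-1 y
        A, b = np.zeros((p, p)), np.zeros(p)
        for X, y in zip(X_groups, y_groups):
            n_i = len(y)
            R = (1 - alpha) * np.eye(n_i) + alpha * np.ones((n_i, n_i))
            Vinv = np.linalg.inv(sigma2 * R)
            A += X.T @ Vinv @ X
            b += X.T @ Vinv @ y
        beta = np.linalg.solve(A, b)
    return beta, alpha
```

On data simulated with a true exchangeable correlation, the sketch recovers both the regression coefficients and the within-subject correlation, which is the setting in which a correctly specified working correlation yields the efficient GEE estimate.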

  3. Biological optimization systems for enhancing photosynthetic efficiency and methods of use

    DOEpatents

    Hunt, Ryan W.; Chinnasamy, Senthil; Das, Keshav C.; de Mattos, Erico Rolim

    2012-11-06

    Biological optimization systems for enhancing photosynthetic efficiency and methods of use are disclosed. Specifically, the methods for enhancing photosynthetic efficiency include applying pulsed light to a photosynthetic organism, using a chlorophyll fluorescence feedback control system to determine one or more photosynthetic efficiency parameters, and adjusting one or more of those parameters to drive photosynthesis by delivering an amount of light that optimizes light absorption by the photosynthetic organism while providing enough dark time between light pulses to prevent oversaturation of the chlorophyll reaction centers.

  4. Efficiency and factors influencing efficiency of Community Health Strategy in providing Maternal and Child Health services in Mwingi District, Kenya: an expert opinion perspective

    PubMed Central

    Nzioki, Japheth Mativo; Onyango, Rosebella Ogutu; Ombaka, James Herbert

    2015-01-01

    Introduction The Community Health Strategy (CHS) is a new Primary Health Care (PHC) model designed to provide PHC services in Kenya. In 2011, the CHS was initiated in Mwingi district as one of the components of the APHIA plus kamili program. The objectives of this study were to evaluate the efficiency of the CHS in providing MCH services in Mwingi district and to establish the factors influencing that efficiency. Methods This was a qualitative study. Fifteen key informants were sampled from key stakeholders, using purposive and maximum variation sampling methods. Semi-structured in-depth interviews were used for data collection. Data were managed and analyzed using NVivo; framework analysis and quasi-statistics were used in the analysis. Results Expert opinion data indicated that the CHS was efficient in providing MCH services. Factors influencing its efficiency were the challenges facing Community Health Workers (CHWs), sociocultural and economic factors influencing MCH in the district, and motivation among CHWs. Conclusion Although the CHS was found to be efficient in providing MCH services, this was an expert opinion perspective; a quantitative cost-effectiveness analysis (CEA) to confirm these findings is recommended. To improve the efficiency of the CHS in the district, the challenges facing CHWs and the sociocultural and economic factors that influence efficiency need to be addressed. PMID:26090046

  5. Compositions and Methods for Inhibiting Gene Expressions

    NASA Technical Reports Server (NTRS)

    Williams, Loren D. (Inventor); Hsiao, Chiaolong (Inventor); Fang, Po-Yu (Inventor); Williams, Justin (Inventor)

    2018-01-01

    A combined packing and assembly method that efficiently packs ribonucleic acid (RNA) into virus-like particles (VLPs) has been developed. The VLPs can spontaneously assemble and load RNA in vivo, efficiently packaging specifically designed RNAs at high densities and with high purity. In some embodiments the RNA is capable of interference activity, or is a precursor of an RNA capable of causing interference activity. Compositions and methods for the efficient expression, production, and purification of VLP-RNAs are provided. VLP-RNAs can be used to store RNA for long periods and provide the ability to deliver RNA in a stable form that is readily taken up by cells.

  6. Development of an Itemwise Efficiency Scoring Method: Concurrent, Convergent, Discriminant, and Neuroimaging-Based Predictive Validity Assessed in a Large Community Sample

    PubMed Central

    Moore, Tyler M.; Reise, Steven P.; Roalf, David R.; Satterthwaite, Theodore D.; Davatzikos, Christos; Bilker, Warren B.; Port, Allison M.; Jackson, Chad T.; Ruparel, Kosha; Savitt, Adam P.; Baron, Robert B.; Gur, Raquel E.; Gur, Ruben C.

    2016-01-01

    Traditional “paper-and-pencil” testing is imprecise in measuring speed and hence limited in assessing performance efficiency, but computerized testing permits precision in measuring itemwise response time. We present a method of scoring performance efficiency (combining information from accuracy and speed) at the item level. Using a community sample of 9,498 youths aged 8-21, we calculated item-level efficiency scores on four neurocognitive tests, and compared the concurrent, convergent, discriminant, and predictive validity of these scores to that of simple averaging of standardized speed and accuracy scores. Concurrent validity was measured by the scores' abilities to distinguish men from women and their correlations with age; convergent and discriminant validity were measured by correlations with other scores inside and outside of their neurocognitive domains; predictive validity was measured by correlations with brain volume in regions associated with the specific neurocognitive abilities. Results provide support for the ability of itemwise efficiency scoring to detect signals as strong as those detected by standard efficiency scoring methods. We find no evidence of superior validity of the itemwise scores over traditional scores, but point out several advantages of the former. The itemwise efficiency scoring method shows promise as an alternative to standard efficiency scoring methods, with overall moderate support from tests of four different types of validity. This method allows the use of existing item analysis methods and provides the convenient ability to adjust the overall emphasis of accuracy versus speed in the efficiency score, thus adjusting the scoring to the real-world demands the test is aiming to fulfill. PMID:26866796
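
One plausible way to combine item-level accuracy and speed into a per-person efficiency score, with the adjustable accuracy-versus-speed emphasis the abstract describes, can be sketched as follows; the specific scoring formula here is an assumption for illustration, not the authors' method:

```python
import numpy as np

def itemwise_efficiency(correct, rt, speed_weight=0.5):
    """Illustrative itemwise efficiency scoring (assumed formula).

    correct: (n_people, n_items) array in {0, 1}; rt: response times (seconds).
    speed_weight sets the accuracy-vs-speed emphasis (0 = accuracy only)."""
    log_rt = np.log(rt)
    # standardize per item; negate log-RT so that faster responses score higher
    z_speed = -(log_rt - log_rt.mean(0)) / (log_rt.std(0) + 1e-12)
    z_acc = (correct - correct.mean(0)) / (correct.std(0) + 1e-12)
    item_scores = (1 - speed_weight) * z_acc + speed_weight * z_speed
    return item_scores.mean(axis=1)   # one efficiency score per person
```

A respondent who is both more accurate and faster than another should receive the higher score at any weight setting.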

  7. Chapter 1: Introduction. The Uniform Methods Project: Methods for Determining Energy-Efficiency Savings for Specific Measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Michael; Haeri, Hossein; Reynolds, Arlis

    This chapter provides a set of model protocols for determining energy and demand savings that result from specific energy efficiency measures implemented through state and utility efficiency programs. The methods described here are among the most commonly used and accepted in the energy efficiency industry for certain measures or programs. As such, they draw from the existing body of research and best practices for energy efficiency program evaluation, measurement, and verification (EM&V). These protocols were developed as part of the Uniform Methods Project (UMP), funded by the U.S. Department of Energy (DOE). The principal objective for the project was to establish easy-to-follow protocols based on commonly accepted methods for a core set of widely deployed energy efficiency measures.

  8. Flash memory management system and method utilizing multiple block list windows

    NASA Technical Reports Server (NTRS)

    Chow, James (Inventor); Gender, Thomas K. (Inventor)

    2005-01-01

    The present invention provides a flash memory management system and method with increased performance. The flash memory management system provides the ability to efficiently manage and allocate flash memory use in a way that improves reliability and longevity, while maintaining good performance levels. The flash memory management system includes a free block mechanism, a disk maintenance mechanism, a bad block detection mechanism, a flash status mechanism, and a new bank detection mechanism. The free block mechanism provides efficient sorting of free blocks to facilitate selecting low-use blocks for writing. The disk maintenance mechanism provides the ability to efficiently clean flash memory blocks during processor idle times. The bad block detection mechanism provides the ability to better detect when a block of flash memory is likely to go bad. The flash status mechanism stores information in fast access memory that describes the content and status of the data in the flash disk. The new bank detection mechanism provides the ability to automatically detect when new banks of flash memory are added to the system. Together, these mechanisms provide a flash memory management system that can improve the operational efficiency of systems that utilize flash memory.
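
The "free block mechanism" described above, which sorts free blocks so that low-use blocks are selected for writing, can be sketched with a min-heap keyed on erase count (classic wear leveling). The class and its interface are hypothetical illustrations, not the patented design:

```python
import heapq

class FreeBlockPool:
    """Sketch of a free block mechanism: hand out the least-worn free
    flash block first, so erase cycles spread evenly across the device."""

    def __init__(self):
        self._heap = []   # entries are (erase_count, block_id)

    def release(self, block_id, erase_count):
        # a block has been erased and returned to the free pool
        heapq.heappush(self._heap, (erase_count, block_id))

    def allocate(self):
        # pick the free block with the lowest erase count for the next write
        erase_count, block_id = heapq.heappop(self._heap)
        return block_id, erase_count
```

Allocation and release are both O(log n), so the sorting cost stays negligible even with large free lists.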

  9. Techniques for evaluating optimum data center operation

    DOEpatents

    Hamann, Hendrik F.; Rodriguez, Sergio Adolfo Bermudez; Wehle, Hans-Dieter

    2017-06-14

    Techniques for modeling a data center are provided. In one aspect, a method for determining data center efficiency is provided. The method includes the following steps. Target parameters for the data center are obtained. Technology pre-requisite parameters for the data center are obtained. An optimum data center efficiency is determined given the target parameters for the data center and the technology pre-requisite parameters for the data center.

  10. Nano-patterned superconducting surface for high quantum efficiency cathode

    DOEpatents

    Hannon, Fay; Musumeci, Pietro

    2017-03-07

    A method for providing a superconducting surface on a laser-driven niobium cathode in order to increase the effective quantum efficiency. The enhanced surface increases the effective quantum efficiency by improving the laser absorption of the surface and enhancing the local electric field. The surface preparation method makes feasible the construction of superconducting radio frequency injectors with niobium as the photocathode. An array of nano-structures is provided on a flat surface of niobium. The nano-structures are dimensionally tailored to interact with a laser of specific wavelength and thereby increase the electron yield of the surface.

  11. An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 1. Theory

    USGS Publications Warehouse

    Yen, Chung-Cheng; Guymon, Gary L.

    1990-01-01

    An efficient probabilistic model is developed and cascaded with a deterministic model for predicting water table elevations in regional aquifers. The objective is to quantify model uncertainty where precise estimates of water table elevations may be required. The probabilistic model is based on the two-point probability method, which requires only prior knowledge of the means and coefficients of variation of the uncertain variables. The two-point estimate method is theoretically developed and compared with the Monte Carlo simulation method. The results of comparisons using hypothetical deterministic problems indicate that the two-point estimate method is generally valid only for linear problems where the coefficients of variation of uncertain parameters (for example, storage coefficient and hydraulic conductivity) are small. The two-point estimate method may be applied to slightly nonlinear problems with good results, provided coefficients of variation are small. In such cases, the two-point estimate method is much more efficient than the Monte Carlo method, provided the number of uncertain variables is less than eight.
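
The two-point (point-estimate) idea can be sketched for two uncertain inputs: evaluate the model at every mean-plus-or-minus-one-standard-deviation corner with equal weights, then compare the resulting moments against Monte Carlo. The model function and parameter values below are illustrative, not from the paper:

```python
import itertools
import numpy as np

def two_point_estimate(g, means, stds):
    """Rosenblueth-style two-point estimate for independent, symmetrically
    distributed inputs: evaluate g at all mu +/- sigma corners (2^n points,
    equal weights) and form the first two moments of the output."""
    corners = itertools.product(*[(m - s, m + s) for m, s in zip(means, stds)])
    vals = np.array([g(*c) for c in corners])
    mean = vals.mean()
    var = (vals ** 2).mean() - mean ** 2
    return mean, var

# compare with Monte Carlo on a mildly nonlinear model (hypothetical values)
g = lambda T, S: T / S                    # e.g. a transmissivity/storage ratio
mT, sT, mS, sS = 100.0, 10.0, 0.2, 0.02   # 10% coefficients of variation
pe_mean, pe_var = two_point_estimate(g, [mT, mS], [sT, sS])

rng = np.random.default_rng(0)
mc = g(rng.normal(mT, sT, 200_000), rng.normal(mS, sS, 200_000))
```

For this mildly nonlinear model with small coefficients of variation, the four-point estimate reproduces the Monte Carlo mean and variance closely, at a tiny fraction of the cost, which is exactly the regime the abstract identifies.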

  12. An Efficient Deterministic-Probabilistic Approach to Modeling Regional Groundwater Flow: 1. Theory

    NASA Astrophysics Data System (ADS)

    Yen, Chung-Cheng; Guymon, Gary L.

    1990-07-01

    An efficient probabilistic model is developed and cascaded with a deterministic model for predicting water table elevations in regional aquifers. The objective is to quantify model uncertainty where precise estimates of water table elevations may be required. The probabilistic model is based on the two-point probability method, which requires only prior knowledge of the means and coefficients of variation of the uncertain variables. The two-point estimate method is theoretically developed and compared with the Monte Carlo simulation method. The results of comparisons using hypothetical deterministic problems indicate that the two-point estimate method is generally valid only for linear problems where the coefficients of variation of uncertain parameters (for example, storage coefficient and hydraulic conductivity) are small. The two-point estimate method may be applied to slightly nonlinear problems with good results, provided coefficients of variation are small. In such cases, the two-point estimate method is much more efficient than the Monte Carlo method, provided the number of uncertain variables is less than eight.

  13. Power and spectrally efficient M-ARY QAM schemes for future mobile satellite communications

    NASA Technical Reports Server (NTRS)

    Sreenath, K.; Feher, K.

    1990-01-01

    An effective method to compensate for the nonlinear phase distortion caused by the mobile amplifier is proposed. As a first step toward the future use of spectrally efficient modulation schemes for mobile satellite applications, we have investigated the effects of nonlinearities and the phase compensation method on 16-QAM. The new method provides about 2 dB savings in power for 16-QAM operation with cost-effective amplifiers near saturation, thereby promising the use of spectrally efficient linear modulation schemes for future mobile satellite applications.

  14. Compatibility of Segments of Thermoelectric Generators

    NASA Technical Reports Server (NTRS)

    Snyder, G. Jeffrey; Ursell, Tristan

    2009-01-01

    A method of calculating (usually for the purpose of maximizing) the power-conversion efficiency of a segmented thermoelectric generator is based on equations derived from the fundamental equations of thermoelectricity. Because it is directly traceable to first principles, the method provides physical explanations in addition to predictions of phenomena involved in segmentation. In comparison with the finite-element method used heretofore to predict (without being able to explain) the behavior of a segmented thermoelectric generator, this method is much simpler to implement in practice: in particular, the efficiency of a segmented thermoelectric generator can be estimated by evaluating equations using only a hand-held calculator. In addition, the method provides for determination of cascading ratios. The concept of cascading is illustrated in the figure, and the cascading ratio is defined in the figure caption. An important aspect of the method is its approach to the issue of compatibility among segments, in combination with introduction of the concept of compatibility within a segment. Prior approaches involved the use of only averaged material properties. Two materials in direct contact could be examined for compatibility with each other, but there was no general framework for analysis of compatibility. The present method establishes such a framework. The mathematical derivation of the method begins with the definition of reduced efficiency of a thermoelectric generator as the ratio between (1) its thermal-to-electric power-conversion efficiency and (2) its Carnot efficiency (the maximum efficiency theoretically attainable, given its hot- and cold-side temperatures). The derivation involves calculation of the reduced efficiency of a model thermoelectric generator for which the hot-side temperature is only infinitesimally greater than the cold-side temperature.
The derivation includes consideration of the ratio (u) between the electric current and heat-conduction power and leads to the concept of compatibility factor (s) for a given thermoelectric material, defined as the value of u that maximizes the reduced efficiency of the aforementioned model thermoelectric generator.
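
The compatibility factor s can be computed directly from material properties, and a numerical scan confirms that it maximizes the reduced efficiency of the infinitesimal-stage model. The material values below are hypothetical, and the reduced-efficiency expression is one standard form consistent with s = (sqrt(1 + zT) - 1)/(S*T):

```python
import numpy as np

# Hypothetical thermoelectric leg properties (not values from the article):
S   = 200e-6   # Seebeck coefficient, V/K
rho = 1e-5     # electrical resistivity, ohm*m
kap = 1.5      # thermal conductivity, W/(m*K)
T   = 600.0    # absolute temperature, K

z = S**2 / (rho * kap)                   # figure of merit, 1/K
s = (np.sqrt(1 + z * T) - 1) / (S * T)   # compatibility factor

# reduced efficiency of the infinitesimal-stage model as a function of the
# relative current density u; it is maximized when u equals s
def eta_r(u):
    return u * (S - u * rho * kap) / (u * S + 1.0 / T)

u_grid = np.linspace(0.1 * s, 5 * s, 200_001)
u_star = u_grid[np.argmax(eta_r(u_grid))]
```

At the optimum the reduced efficiency equals (sqrt(1 + zT) - 1)/(sqrt(1 + zT) + 1), the familiar single-stage maximum, which makes the scan easy to verify.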

  15. Human Movement Potential: Its Ideokinetic Facilitation.

    ERIC Educational Resources Information Center

    Sweigard, Lulu E.

    This book focuses on the interdependence of postural alignment and the performance of movement. It provides an educational method (ideokinesis), which stresses the inherent capacity of the nervous system to determine the most efficient neuromuscular coordination for each movement. This method of teaching body balance and efficient movement has…

  16. Including quality attributes in efficiency measures consistent with net benefit: creating incentives for evidence based medicine in practice.

    PubMed

    Eckermann, Simon; Coelli, Tim

    2013-01-01

    Evidence based medicine supports net benefit maximising therapies and strategies in processes of health technology assessment (HTA) for reimbursement and subsidy decisions internationally. However, translation of evidence based medicine to practice is impeded by efficiency measures such as cost per case-mix adjusted separation in hospitals, which ignore health effects of care. In this paper we identify a correspondence method that allows quality variables under control of providers to be incorporated in efficiency measures consistent with maximising net benefit. Including effects framed from a disutility bearing (utility reducing) perspective (e.g. mortality, morbidity or reduction in life years) as inputs and minimising quality inclusive costs on the cost-disutility plane is shown to enable efficiency measures consistent with maximising net benefit under a one to one correspondence. The method combines advantages of radial properties with an appropriate objective of maximising net benefit to overcome problems of inappropriate objectives implicit with alternative methods, whether specifying quality variables with utility bearing output (e.g. survival, reduction in morbidity or life years), hyperbolic or exogenous variables. This correspondence approach is illustrated in undertaking efficiency comparison at a clinical activity level for 45 Australian hospitals allowing for their costs and mortality rates per admission. Explicit coverage and comparability conditions of the underlying correspondence method are also shown to provide a robust framework for preventing cost-shifting and cream-skimming incentives, with appropriate qualification of analysis and support for data linkage and risk adjustment where these conditions are not satisfied. 
Comparison on the cost-disutility plane has previously been shown to have distinct advantages in comparing multiple strategies in HTA, which this paper naturally extends to a robust method and framework for comparing efficiency of health care providers in practice. Consequently, the proposed approach provides a missing link between HTA and practice, to allow active incentives for evidence based net benefit maximisation in practice. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. The efficiency and effectiveness of utilizing diagrams in interviews: an assessment of participatory diagramming and graphic elicitation.

    PubMed

    Umoquit, Muriah J; Dobrow, Mark J; Lemieux-Charles, Louise; Ritvo, Paul G; Urbach, David R; Wodchis, Walter P

    2008-08-08

    This paper focuses on measuring the efficiency and effectiveness of two diagramming methods employed in key informant interviews with clinicians and health care administrators. The two methods are 'participatory diagramming', where the respondent creates a diagram that assists in their communication of answers, and 'graphic elicitation', where a researcher-prepared diagram is used to stimulate data collection. These two diagramming methods were applied in key informant interviews and their value in efficiently and effectively gathering data was assessed based on quantitative measures and qualitative observations. Assessment of the two diagramming methods suggests that participatory diagramming is an efficient method for collecting data in graphic form, but may not generate the depth of verbal response that many qualitative researchers seek. In contrast, graphic elicitation was more intuitive, better understood and preferred by most respondents, and often provided more contemplative verbal responses; however, this was achieved at the expense of more interview time. Diagramming methods are important for eliciting interview data that are often difficult to obtain through traditional verbal exchanges. Subject to the methodological limitations of the study, our findings suggest that while participatory diagramming and graphic elicitation have specific strengths and weaknesses, their combined use can provide complementary information that would not likely occur with the application of only one diagramming method. The methodological insights gained by examining the efficiency and effectiveness of these diagramming methods in our study should be helpful to other researchers considering their incorporation into qualitative research designs.

  18. Chapter 13: Assessing Persistence and Other Evaluation Issues Cross-Cutting Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurnik, Charles W; Violette, Daniel M.

    Addressing other evaluation issues that have been raised in the context of energy efficiency programs, this chapter focuses on methods used to address the persistence of energy savings, which is an important input to the benefit/cost analysis of energy efficiency programs and portfolios. In addition to discussing 'persistence' (which refers to the stream of benefits over time from an energy efficiency measure or program), this chapter provides a summary treatment of the following issues: synergies across programs; rebound; dual baselines; and errors in variables (the measurement and/or accuracy of input variables to the evaluation).

  19. Probabilistic methods for rotordynamics analysis

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.; Torng, T. Y.; Millwater, H. R.; Fossum, A. F.; Rheinfurth, M. H.

    1991-01-01

    This paper summarizes the development of the methods and a computer program to compute the probability of instability of dynamic systems that can be represented by a system of second-order ordinary linear differential equations. Two instability criteria based upon the eigenvalues or Routh-Hurwitz test functions are investigated. Computational methods based on a fast probability integration concept and an efficient adaptive importance sampling method are proposed to perform efficient probabilistic analysis. A numerical example is provided to demonstrate the methods.
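
The efficiency gain from importance sampling for small failure probabilities, the core idea behind the adaptive importance sampling mentioned above, can be shown on a toy tail-probability problem (a standard normal with a hypothetical instability threshold, not the rotordynamic system itself):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
t = 4.0          # hypothetical instability threshold
N = 50_000

# naive Monte Carlo: almost no samples land in the rare failure region
x = rng.standard_normal(N)
p_naive = np.mean(x > t)

# importance sampling: draw from a proposal centered on the failure region,
# then reweight by the density ratio f/q to keep the estimator unbiased
y = rng.normal(t, 1.0, N)
w = norm.pdf(y) / norm.pdf(y, loc=t)
p_is = np.mean((y > t) * w)
```

With the same 50,000 samples, the naive estimate is dominated by noise (roughly one or two hits in expectation), while the importance-sampled estimate lands within a few percent of the true tail probability 1 - Phi(4).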

  20. Apparatus and method for measuring single cell and sub-cellular photosynthetic efficiency

    DOEpatents

    Davis, Ryan Wesley; Singh, Seema; Wu, Huawen

    2013-07-09

    Devices for measuring single-cell changes in photosynthetic efficiency in algal aquaculture are disclosed that combine modulated LED trans-illumination of different intensities with synchronized through-objective laser illumination and confocal detection. Synchronization and intensity modulation of the dual illumination scheme are provided by a custom microcontroller driving a laser beam block and a constant-current LED driver. Both whole-cell and subcellular (diffraction-limited) photosynthetic efficiency measurement modes are therefore supported. Wide-field rapid light scanning actinic illumination is provided for both by an intensity-modulated 470 nm LED. For the whole-cell photosynthetic efficiency measurement, the same LED provides saturating pulses for generating photosynthetic induction curves; for the subcellular measurement, a switched through-objective 488 nm laser provides the saturating pulses. A second near-IR LED is employed to generate dark-adapted states in the system under study.

  1. Recurrent neural network based virtual detection line

    NASA Astrophysics Data System (ADS)

    Kadikis, Roberts

    2018-04-01

    The paper proposes an efficient method for detecting moving objects in video. Objects are detected when they cross a virtual detection line; only the pixels of the detection line are processed, which makes the method computationally efficient. A recurrent neural network processes these pixels. The machine learning approach allows one to train a model that works in different and changing outdoor conditions, and the same network can be trained for various detection tasks, which is demonstrated by tests on vehicle and people counting. In addition, the paper proposes a method for semi-automatic acquisition of labeled training data. The labeling method is used to create training and testing datasets, which in turn are used to train and evaluate the accuracy and efficiency of the detection method. The method shows accuracy similar to that of alternative efficient methods but provides greater adaptability and usability for different tasks.
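
The wiring of such a detector, a recurrent cell consuming the detection-line pixels frame by frame and emitting a per-frame crossing score, can be sketched with untrained numpy weights; the sizes and the Elman-style cell are assumptions for illustration, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, hidden, T = 64, 32, 100   # line width, state size, number of frames

# hypothetical Elman-style cell: h_t = tanh(Wx x_t + Wh h_{t-1} + b)
Wx = rng.normal(0, 0.1, (hidden, n_pixels))
Wh = rng.normal(0, 0.1, (hidden, hidden))
b  = np.zeros(hidden)
Wo = rng.normal(0, 0.1, hidden)     # readout: one crossing score per frame

def detect(frames):
    """frames: (T, n_pixels) intensities of the virtual line over time.
    Returns a per-frame crossing probability (untrained weights)."""
    h = np.zeros(hidden)
    scores = []
    for x in frames:
        h = np.tanh(Wx @ x + Wh @ h + b)
        scores.append(1.0 / (1.0 + np.exp(-(Wo @ h))))   # sigmoid readout
    return np.array(scores)
```

The key efficiency property is visible in the shapes: each frame costs work proportional to the line width, not to the full image resolution.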

  2. An efficient multilevel optimization method for engineering design

    NASA Technical Reports Server (NTRS)

    Vanderplaats, G. N.; Yang, Y. J.; Kim, D. S.

    1988-01-01

    An efficient multilevel design optimization technique is presented. The proposed method is based on the concept of providing linearized information between the system-level and subsystem-level optimization tasks. The advantages of the method are that it does not require optimum sensitivities, nonlinear equality constraints are not needed, and the method is relatively easy to use. The disadvantage is that the coupling between subsystems is not dealt with in a precise mathematical manner.

  3. Alternative Methods in the Evaluation of School District Cash Management Programs.

    ERIC Educational Resources Information Center

    Dembowski, Frederick L.

    1980-01-01

    Empirically evaluates three measures of effectiveness of school district cash management: the rate of return method in common use and two new measures--efficiency rating and Net Present Value (NPV). The NPV approach allows examination of efficiency and provides a framework for evaluating other areas of educational policy. (Author/IRT)

  4. Cost effectiveness and efficiency in assistive technology service delivery.

    PubMed

    Warren, C G

    1993-01-01

    In order to develop and maintain a viable service delivery program, the realities of cost effectiveness and cost efficiency in providing assistive technology must be addressed. Cost effectiveness relates to value of the outcome compared to the expenditures. Cost efficiency analyzes how a provider uses available resources to supply goods and services. This paper describes how basic business principles of benefit/cost analysis can be used to determine cost effectiveness. In addition, basic accounting principles are used to illustrate methods of evaluating a program's cost efficiency. Service providers are encouraged to measure their own program's effectiveness and efficiency (and potential viability) in light of current trends. This paper is meant to serve as a catalyst for continued dialogue on this topic.

  5. The efficiency and effectiveness of utilizing diagrams in interviews: an assessment of participatory diagramming and graphic elicitation

    PubMed Central

    Umoquit, Muriah J; Dobrow, Mark J; Lemieux-Charles, Louise; Ritvo, Paul G; Urbach, David R; Wodchis, Walter P

    2008-01-01

    Background This paper focuses on measuring the efficiency and effectiveness of two diagramming methods employed in key informant interviews with clinicians and health care administrators. The two methods are 'participatory diagramming', where the respondent creates a diagram that assists in their communication of answers, and 'graphic elicitation', where a researcher-prepared diagram is used to stimulate data collection. Methods These two diagramming methods were applied in key informant interviews and their value in efficiently and effectively gathering data was assessed based on quantitative measures and qualitative observations. Results Assessment of the two diagramming methods suggests that participatory diagramming is an efficient method for collecting data in graphic form, but may not generate the depth of verbal response that many qualitative researchers seek. In contrast, graphic elicitation was more intuitive, better understood and preferred by most respondents, and often provided more contemplative verbal responses; however, this was achieved at the expense of more interview time. Conclusion Diagramming methods are important for eliciting interview data that are often difficult to obtain through traditional verbal exchanges. Subject to the methodological limitations of the study, our findings suggest that while participatory diagramming and graphic elicitation have specific strengths and weaknesses, their combined use can provide complementary information that would not likely occur with the application of only one diagramming method. The methodological insights gained by examining the efficiency and effectiveness of these diagramming methods in our study should be helpful to other researchers considering their incorporation into qualitative research designs. PMID:18691410

  6. Generalized Gilat-Raubenheimer method for density-of-states calculation in photonic crystals

    NASA Astrophysics Data System (ADS)

    Liu, Boyuan; Johnson, Steven G.; Joannopoulos, John D.; Lu, Ling

    2018-04-01

    An efficient numerical algorithm is the key for accurate evaluation of density of states (DOS) in band theory. The Gilat-Raubenheimer (GR) method proposed in 1966 is an efficient linear extrapolation method that was limited to specific lattices. Here, using an affine transformation, we provide a new generalization of the original GR method to any Bravais lattice and show that it is superior to the tetrahedron method and the adaptive Gaussian broadening method. Finally, we apply our generalized GR method to compute the DOS of various gyroid photonic crystals with topological degeneracies.
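    The linear-extrapolation idea behind the GR method can be illustrated for a scalar band sampled on a uniform 2-D k-grid: each k-point's delta-function contribution to the DOS is spread over an energy interval whose width follows from the local gradient (the group velocity). The Python sketch below uses a uniform-box broadening and a simple normalization; the function name and the box form are illustrative assumptions, not the paper's generalized algorithm for arbitrary Bravais lattices.

```python
import numpy as np

def dos_linear_extrapolation(omega, dk, energies):
    """Approximate the DOS of a band omega(k) sampled on a uniform 2-D k-grid.

    Each k-point's delta contribution is spread over an energy box of
    half-width |grad omega| * dk / 2 -- the linear-extrapolation idea of the
    GR method (illustrative toy, not the generalized algorithm).
    """
    gx, gy = np.gradient(omega, dk)
    half = np.maximum(0.5 * dk * np.sqrt(gx**2 + gy**2), 1e-12)
    dE = energies[1] - energies[0]
    dos = np.zeros_like(energies)
    for E0, Ew in zip(omega.ravel(), half.ravel()):
        # overlap of the box [E0 - Ew, E0 + Ew] with each energy bin
        lo = np.clip(energies - dE / 2, E0 - Ew, E0 + Ew)
        hi = np.clip(energies + dE / 2, E0 - Ew, E0 + Ew)
        dos += np.maximum(hi - lo, 0.0) / (2.0 * Ew)
    return dos / (omega.size * dE)   # normalized so that sum(dos) * dE == 1
```

    Because every k-point's box contributes total weight one, the returned DOS integrates to unity over an energy window that covers the band.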

  7. FRACTIONAL AEROSOL FILTRATION EFFICIENCY OF IN-DUCT VENTILATION AIR CLEANERS

    EPA Science Inventory

    The filtration efficiency of ventilation air cleaners is highly particle-size dependent over the 0.01 to 3 μm diameter size range. Current standardized test methods, which determine only overall efficiencies for ambient aerosol or other test aerosols, provide data of limited util...

  8. Enhanced analysis of real-time PCR data by using a variable efficiency model: FPK-PCR

    PubMed Central

    Lievens, Antoon; Van Aelst, S.; Van den Bulcke, M.; Goetghebeur, E.

    2012-01-01

    Current methodology in real-time polymerase chain reaction (PCR) analysis performs well provided that PCR efficiency remains constant over reactions. Yet small changes in efficiency can lead to large quantification errors. Particularly in biological samples, the possible presence of inhibitors poses a challenge. We present a new approach to single-reaction efficiency calculation, called Full Process Kinetics-PCR (FPK-PCR). It combines a kinetically more realistic model with flexible adaptation to the full range of data. By reconstructing the entire chain of cycle efficiencies, rather than restricting the focus to a ‘window of application’, one extracts additional information and removes a level of arbitrariness. The maximal efficiency estimates returned by the model are comparable in accuracy and precision to both the gold standard of serial dilution and other single-reaction efficiency methods. The cycle-to-cycle changes in efficiency, as described by the FPK-PCR procedure, stay considerably closer to the data than those from other S-shaped models. The assessment of individual cycle efficiencies returns more information than other single-efficiency methods. It allows in-depth interpretation of real-time PCR data and reconstruction of the fluorescence data, providing quality control. Finally, by implementing a global efficiency model, reproducibility is improved, as the selection of a window of application is avoided. PMID:22102586

  9. Coding For Compression Of Low-Entropy Data

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu

    1994-01-01

    Improved method of encoding digital data provides for efficient lossless compression of partially or even mostly redundant data from low-information-content source. Method of coding implemented in relatively simple, high-speed arithmetic and logic circuits. Also increases coding efficiency beyond that of established Huffman coding method in that average number of bits per code symbol can be less than 1, which is the lower bound for Huffman code.
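    The sub-1-bit-per-symbol point is easy to demonstrate with run-length coding of a sparse binary source. The sketch below (Elias-gamma-coded zero-run lengths) is only an illustration of how a low-entropy source can be coded below the 1-bit Huffman floor; it is not the patented circuit-oriented scheme.

```python
def elias_gamma(n):
    """Elias gamma code for an integer n >= 1, as a bit string."""
    b = bin(n)[2:]
    return "0" * (len(b) - 1) + b

def rle_encode(bits):
    """Encode the lengths of zero-runs (each terminated by a 1)."""
    out, run = [], 0
    for b in bits:
        if b == 0:
            run += 1
        else:
            out.append(elias_gamma(run + 1))
            run = 0
    if run:                      # trailing zeros; a real format needs a flag
        out.append(elias_gamma(run + 1))
    return "".join(out)

def rle_decode(code):
    """Inverse of rle_encode for sequences that end in a 1."""
    bits, i = [], 0
    while i < len(code):
        z = 0
        while code[i] == "0":
            z += 1
            i += 1
        n = int(code[i:i + z + 1], 2)
        i += z + 1
        bits.extend([0] * (n - 1) + [1])
    return bits
```

    With ones occurring about 1% of the time, typical runs are near 100 zeros, and each run costs roughly 13 output bits, far below one output bit per input symbol.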

  10. A general method to analyze the thermal performance of multi-cavity concentrating solar power receivers

    DOE PAGES

    Fleming, Austin; Folsom, Charles; Ban, Heng; ...

    2015-11-13

    Concentrating solar power (CSP) with thermal energy storage has the potential to provide grid-scale, on-demand, dispatchable renewable energy. As higher solar receiver output temperatures are necessary for higher thermal cycle efficiency, current CSP research is focused on high outlet temperature and high efficiency receivers. Here, the objective of this study is to provide a simplified model to analyze the thermal efficiency of multi-cavity concentrating solar power receivers.

  11. Apparatus and method for enabling quantum-defect-limited conversion efficiency in cladding-pumped Raman fiber lasers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heebner, John E.; Sridharan, Arun K.; Dawson, Jay Walter

    Cladding-pumped Raman fiber lasers and amplifiers provide high conversion efficiency together with high brightness enhancement. Differential loss is applied both in single-pass configurations appropriate for pulsed amplification and in laser oscillator configurations for high-average-power cw source generation.

  12. A tool to measure nurse efficiency and value.

    PubMed

    Landry, M T; Landry, H T; Hebert, W

    2001-07-01

    Home care nurses who have multiple roles can increase their value by validating their contributions and work efficiency. This article presents a method for tracking nurse efficiency for those who are paid on an hourly basis, and provides a mechanism to document their contributions to the home care agency.

  13. Scaling of ratings: Concepts and methods

    Treesearch

    Thomas C. Brown; Terry C. Daniel

    1990-01-01

    Rating scales provide an efficient and widely used means of recording judgments. This paper reviews scaling issues within the context of a psychometric model of the rating process, describes several methods of scaling rating data, and compares the methods in terms of the assumptions they require about the rating process and the information they provide about the...

  14. Chapter 21: Estimating Net Savings - Common Practices. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurnik, Charles W; Violette, Daniel M.; Rathbun, Pamela

    This chapter focuses on the methods used to estimate net energy savings in evaluation, measurement, and verification (EM and V) studies for energy efficiency (EE) programs. The chapter provides a definition of net savings, which remains an unsettled topic both within the EE evaluation community and across the broader public policy evaluation community, particularly in the context of attribution of savings to a program. The chapter differs from the measure-specific Uniform Methods Project (UMP) chapters in both its approach and work product. Unlike other UMP resources that provide recommended protocols for determining gross energy savings, this chapter describes and compares the current industry practices for determining net energy savings but does not prescribe methods.

  15. Energy Efficiency Building Code for Commercial Buildings in Sri Lanka

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Busch, John; Greenberg, Steve; Rubinstein, Francis

    2000-09-30

    1.1.1 To encourage energy efficient design or retrofit of commercial buildings so that they may be constructed, operated, and maintained in a manner that reduces the use of energy without constraining the building function, the comfort, health, or the productivity of the occupants and with appropriate regard for economic considerations. 1.1.2 To provide criteria and minimum standards for energy efficiency in the design or retrofit of commercial buildings and provide methods for determining compliance with them. 1.1.3 To encourage energy efficient designs that exceed these criteria and minimum standards.

  16. A CLASS OF RECONSTRUCTED DISCONTINUOUS GALERKIN METHODS IN COMPUTATIONAL FLUID DYNAMICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong Luo; Yidong Xia; Robert Nourgaliev

    2011-05-01

    A class of reconstructed discontinuous Galerkin (DG) methods is presented to solve compressible flow problems on arbitrary grids. The idea is to combine the efficiency of the reconstruction methods in finite volume methods and the accuracy of the DG methods to obtain a better numerical algorithm in computational fluid dynamics. The beauty of the resulting reconstructed discontinuous Galerkin (RDG) methods is that they provide a unified formulation for both finite volume and DG methods, contain both classical finite volume and standard DG methods as two special cases, and thus allow for a direct efficiency comparison. Both Green-Gauss and least-squares reconstruction methods and a least-squares recovery method are presented to obtain a quadratic polynomial representation of the underlying linear discontinuous Galerkin solution on each cell via a so-called in-cell reconstruction process. The devised in-cell reconstruction is aimed at augmenting the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution. These three reconstructed discontinuous Galerkin methods are used to compute a variety of compressible flow problems on arbitrary meshes to assess their accuracy. The numerical experiments demonstrate that all three reconstructed discontinuous Galerkin methods can significantly improve the accuracy of the underlying second-order DG method, and the least-squares reconstructed DG method provides the best performance in terms of accuracy, efficiency, and robustness.
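    The in-cell reconstruction idea can be sketched in one dimension: augment each cell's linear DG solution with a quadratic term whose coefficient is found by least squares against the neighboring cells' means and slopes. This is an illustrative 1-D toy (uniform grid, scalar field), not the paper's 3-D Green-Gauss/least-squares machinery.

```python
import numpy as np

def rdg_reconstruct(ubar, slope, h):
    """In-cell least-squares reconstruction (1-D sketch of the RDG idea).

    On each interior cell, augment the linear DG solution
    u_i(x) = ubar_i + slope_i * (x - x_i) with a quadratic term
    c * ((x - x_i)**2 - h**2 / 12) (zero cell mean, so ubar_i is preserved),
    choosing c by least squares so the reconstructed polynomial matches the
    means and slopes of the two neighboring cells.
    """
    c = np.zeros_like(ubar)
    for i in range(1, len(ubar) - 1):
        # residual r(c) = A*c - d for the four matching conditions
        A = np.array([h * h, h * h, 2 * h, -2 * h])
        d = np.array([ubar[i + 1] - ubar[i] - slope[i] * h,
                      ubar[i - 1] - ubar[i] + slope[i] * h,
                      slope[i + 1] - slope[i],
                      slope[i - 1] - slope[i]])
        c[i] = A @ d / (A @ A)           # scalar least-squares solution
    return c
```

    When the underlying field is exactly quadratic, the least-squares system is consistent and the reconstruction recovers the quadratic coefficient exactly, which is the sense in which the reconstruction raises the order of the linear DG solution.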

  17. Rapid one-step recombinational cloning

    PubMed Central

    Fu, Changlin; Wehr, Daniel R.; Edwards, Janice; Hauge, Brian

    2008-01-01

    As an increasing number of genes and open reading frames of unknown function are discovered, expression of the encoded proteins is critical to establishing function. Accordingly, there is an increased need for highly efficient, high-fidelity methods for directional cloning. Among the available methods, site-specific recombination-based cloning techniques, which eliminate the use of restriction endonucleases and ligase, have been widely used for high-throughput (HTP) procedures. We have developed a recombination cloning method which uses truncated recombination sites to clone PCR products directly into destination/expression vectors, thereby bypassing the requirement for first producing an entry clone. Cloning efficiencies in excess of 80% are obtained, providing a highly efficient method for directional HTP cloning. PMID:18424799

  18. Method and apparatus for diamond wire cutting of metal structures

    DOEpatents

    Parsells, Robert; Gettelfinger, Geoff; Perry, Erik; Rule, Keith

    2005-04-19

    A method and apparatus for diamond wire cutting of metal structures, such as nuclear reactor vessels, is provided. A diamond wire saw having a plurality of diamond beads with beveled or chamfered edges is provided for sawing into the walls of the metal structure. The diamond wire is guided by a plurality of support structures allowing for a multitude of different cuts. The diamond wire is cleaned and cooled by CO.sub.2 during the cutting process to prevent breakage of the wire and provide efficient cutting. Concrete can be provided within the metal structure to enhance cutting efficiency and reduce airborne contaminants. The invention can be remotely controlled to reduce exposure of workers to radioactivity and other hazards.

  19. Efficient subtle motion detection from high-speed video for sound recovery and vibration analysis using singular value decomposition-based approach

    NASA Astrophysics Data System (ADS)

    Zhang, Dashan; Guo, Jie; Jin, Yi; Zhu, Chang'an

    2017-09-01

    High-speed cameras provide full-field measurement of structure motions and have been applied in nondestructive testing and noncontact structure monitoring. Recently, a phase-based method has been proposed to extract sound-induced vibrations from phase variations in videos, and this method provides insights into the study of remote sound surveillance and material analysis. Here, an efficient singular value decomposition (SVD)-based approach is introduced to detect sound-induced subtle motions from pixel intensities in silent high-speed videos. A high-speed camera is first applied to capture a video of the vibrating objects stimulated by sound fluctuations. Then, subimages collected from a small region of the captured video are reshaped into vectors and assembled into a matrix. Orthonormal image bases (OIBs) are obtained from the SVD of the matrix; the vibration signal can then be obtained by projecting subsequent subimages onto specific OIBs. A simulation test is conducted to validate the effectiveness and efficiency of the proposed method, and two experiments demonstrate potential applications in sound recovery and material analysis. Results show that the proposed method efficiently detects subtle motions from the video.
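    On synthetic data, the pipeline of the abstract (reshape subimages into vectors, form a matrix, take its SVD, and read the motion off the projections onto the orthonormal image bases) reduces to a few lines of numpy; the rank-1 motion model and all sizes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
T, H, W = 200, 16, 16
t = np.arange(T)
signal = np.sin(2 * np.pi * 0.05 * t)          # "sound-induced" vibration
base = rng.random((H, W))                      # static scene
pattern = rng.standard_normal((H, W))          # how motion modulates pixels
frames = base[None] + 0.01 * signal[:, None, None] * pattern[None]

# Reshape each subimage into a column vector and stack them into a matrix
M = frames.reshape(T, -1).T                    # shape: (pixels, frames)
M = M - M.mean(axis=1, keepdims=True)          # remove the static background
U, s, Vt = np.linalg.svd(M, full_matrices=False)

# Columns of U are orthonormal image bases (OIBs); the frame-wise weights of
# the dominant OIB recover the vibration signal up to sign and scale
recovered = Vt[0]
```

    After background removal the data matrix is (numerically) rank one, so the leading right singular vector tracks the vibration waveform exactly, up to sign and scale.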

  20. Efficient Online Learning Algorithms Based on LSTM Neural Networks.

    PubMed

    Ergen, Tolga; Kozat, Suleyman Serdar

    2017-09-13

    We investigate online nonlinear regression and introduce novel regression structures based on the long short-term memory (LSTM) network. For the introduced structures, we also provide highly efficient and effective online training methods. To train these novel LSTM-based structures, we put the underlying architecture in a state-space form and introduce highly efficient and effective particle filtering (PF)-based updates. We also provide stochastic gradient descent and extended Kalman filter-based updates. Our PF-based training method guarantees convergence to the optimal parameter estimate in the mean square error sense, provided that we have a sufficient number of particles and satisfy certain technical conditions. More importantly, we achieve this performance with a computational complexity on the order of first-order gradient-based methods by controlling the number of particles. Since our approach is generic, we also introduce a gated recurrent unit (GRU)-based approach by directly replacing the LSTM architecture with the GRU architecture, and demonstrate the superiority of our LSTM-based approach in the sequential prediction task on different real-life data sets. In addition, the experimental results illustrate significant performance improvements achieved by the introduced algorithms over conventional methods on several benchmark real-life data sets.

  1. Seismic data restoration with a fast L1 norm trust region method

    NASA Astrophysics Data System (ADS)

    Cao, Jingjie; Wang, Yanfei

    2014-08-01

    Seismic data restoration is a major strategy for providing a reliable wavefield when field data do not satisfy the Shannon sampling theorem. Recovery by sparsity-promoting inversion often seeks sparse solutions of seismic data in a transformed domain; however, most methods for sparsity-promoting inversion are line-search methods, which are efficient but inclined to find local solutions. Using a trust region method, which can provide globally convergent solutions, is a good way to overcome this shortcoming. A trust region method for sparse inversion has been proposed previously; however, its efficiency must be improved to make it suitable for large-scale computation. In this paper, a new L1 norm trust region model is proposed for seismic data restoration, and a robust gradient projection method is used to solve the sub-problem. Numerical results on synthetic and field data demonstrate that the proposed trust region method achieves excellent computational speed and is a viable alternative for large-scale computation.
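    For context, the line-search style of sparsity-promoting inversion that the trust-region method is contrasted with can be sketched as iterative soft-thresholding (ISTA) applied to min_x 0.5*||Ax - b||^2 + lam*||x||_1; this is an illustrative baseline, not the paper's L1 trust region algorithm.

```python
import numpy as np

def ista(A, b, lam, n_iter=500):
    """Iterative soft-thresholding for 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - b) / L    # gradient (line-search-type) step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
    return x
```

    Each iteration is a fixed-step gradient move followed by a soft-threshold, which monotonically decreases the objective and drives most coefficients exactly to zero.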

  2. Three-Dimensional Navier-Stokes Method with Two-Equation Turbulence Models for Efficient Numerical Simulation of Hypersonic Flows

    NASA Technical Reports Server (NTRS)

    Bardina, J. E.

    1994-01-01

    A new computationally efficient 3-D compressible Reynolds-averaged implicit Navier-Stokes method with advanced two-equation turbulence models for high speed flows is presented. All convective terms are modeled using an entropy-satisfying higher-order Total Variation Diminishing (TVD) scheme based on implicit upwind flux-difference split approximations and an arithmetic averaging procedure for primitive variables. This method combines the best features of the data management and computational efficiency of space-marching procedures with the generality and stability of time-dependent Navier-Stokes procedures to solve flows with mixed supersonic and subsonic zones, including streamwise separated flows. Its robust stability derives from a combination of conservative implicit upwind flux-difference splitting with Roe's property U to provide accurate shock-capturing capability that non-conservative schemes do not guarantee, an alternating symmetric Gauss-Seidel 'method of planes' relaxation procedure coupled with a three-dimensional two-factor diagonal-dominant approximate factorization scheme, TVD flux limiters of higher-order flux differences satisfying realizability, and well-posed characteristic-based implicit boundary-point approximations consistent with the local characteristic domain of dependence. The efficiency of the method is greatly increased with Newton-Raphson acceleration, which allows convergence in essentially one forward sweep for supersonic flows. The method is verified by comparison with experiment and other Navier-Stokes methods. Here, results for adiabatic and cooled flat plate flows, compression corner flow, and 3-D hypersonic shock-wave/turbulent boundary layer interaction flows are presented. The robust 3-D method achieves a computational efficiency at least one order of magnitude better than the CNS Navier-Stokes code. It provides cost-effective aerodynamic predictions in agreement with experiment, and the capability of predicting complex flow structures in complex geometries with good accuracy.

  3. A FAST ITERATIVE METHOD FOR SOLVING THE EIKONAL EQUATION ON TRIANGULATED SURFACES*

    PubMed Central

    Fu, Zhisong; Jeong, Won-Ki; Pan, Yongsheng; Kirby, Robert M.; Whitaker, Ross T.

    2012-01-01

    This paper presents an efficient, fine-grained parallel algorithm for solving the Eikonal equation on triangular meshes. The Eikonal equation, and the broader class of Hamilton–Jacobi equations to which it belongs, have a wide range of applications from geometric optics and seismology to biological modeling and analysis of geometry and images. The ability to solve such equations accurately and efficiently provides new capabilities for exploring and visualizing parameter spaces and for solving inverse problems that rely on such equations in the forward model. Efficient solvers on state-of-the-art, parallel architectures require new algorithms that are not, in many cases, optimal, but are better suited to synchronous updates of the solution. In previous work [W. K. Jeong and R. T. Whitaker, SIAM J. Sci. Comput., 30 (2008), pp. 2512–2534], the authors proposed the fast iterative method (FIM) to efficiently solve the Eikonal equation on regular grids. In this paper we extend the fast iterative method to solve Eikonal equations efficiently on triangulated domains on the CPU and on parallel architectures, including graphics processors. We propose a new local update scheme that provides solutions of first-order accuracy for both architectures. We also propose a novel triangle-based update scheme and its corresponding data structure for efficient irregular data mapping to parallel single-instruction multiple-data (SIMD) processors. We provide detailed descriptions of the implementations on a single CPU, a multicore CPU with shared memory, and SIMD architectures with comparative results against state-of-the-art Eikonal solvers. PMID:22641200
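    A compact regular-grid version of the fast iterative method conveys the active-list idea (nodes are updated with a Godunov upwind solver, kept active while they change, and wake their neighbors when they converge); the triangulated-surface and SIMD variants in the paper build on this same loop.

```python
import numpy as np
from math import sqrt

def fim_eikonal(speed, src, h=1.0, eps=1e-9):
    """Fast iterative method for |grad u| = 1/speed on a regular 2-D grid.

    Compact CPU version of the Jeong-Whitaker active-list scheme for the
    regular-grid case (the paper extends the idea to triangulated surfaces
    and SIMD hardware).
    """
    n, m = speed.shape
    u = np.full((n, m), np.inf)
    u[src] = 0.0
    nbrs = ((1, 0), (-1, 0), (0, 1), (0, -1))

    def solve(i, j):
        # Godunov upwind update: use the smaller neighbor along each axis
        a = min(u[i - 1, j] if i > 0 else np.inf,
                u[i + 1, j] if i < n - 1 else np.inf)
        b = min(u[i, j - 1] if j > 0 else np.inf,
                u[i, j + 1] if j < m - 1 else np.inf)
        if a > b:
            a, b = b, a
        f = h / speed[i, j]
        if b - a >= f:                     # one-sided update
            return a + f
        return 0.5 * (a + b + sqrt(2.0 * f * f - (b - a) ** 2))

    active = {(src[0] + di, src[1] + dj) for di, dj in nbrs
              if 0 <= src[0] + di < n and 0 <= src[1] + dj < m}
    while active:
        nxt = set()
        for i, j in active:
            new = solve(i, j)
            if abs(u[i, j] - new) > eps:   # still changing: keep it active
                u[i, j] = new
                nxt.add((i, j))
            else:                          # converged: wake up neighbors
                u[i, j] = new
                for di, dj in nbrs:
                    p, q = i + di, j + dj
                    if 0 <= p < n and 0 <= q < m and (p, q) not in active:
                        v = solve(p, q)
                        if v < u[p, q] - eps:
                            u[p, q] = v
                            nxt.add((p, q))
        active = nxt
    return u
```

    With unit speed and a point source, the result approximates the Euclidean distance field; the first-order scheme is exact along the grid axes and overestimates modestly along diagonals.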

  4. Advanced Unstructured Grid Generation for Complex Aerodynamic Applications

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2008-01-01

    A new approach for distribution of grid points on the surface and in the volume has been developed and implemented in the NASA unstructured grid generation code VGRID. In addition to the point and line sources of prior work, the new approach utilizes surface and volume sources for automatic curvature-based grid sizing and convenient point distribution in the volume. A new exponential growth function produces smoother and more efficient grids and provides superior control over distribution of grid points in the field. All types of sources support anisotropic grid stretching which not only improves the grid economy but also provides more accurate solutions for certain aerodynamic applications. The new approach does not require a three-dimensional background grid as in the previous methods. Instead, it makes use of an efficient bounding-box auxiliary medium for storing grid parameters defined by surface sources. The new approach is less memory-intensive and more efficient computationally. The grids generated with the new method either eliminate the need for adaptive grid refinement for certain class of problems or provide high quality initial grids that would enhance the performance of many adaptation methods.
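    The effect of an exponential growth function on point distribution can be seen in one dimension, where each spacing is a fixed multiple of the previous one; the exact VGRID growth function is not reproduced here, this is just the generic geometric form.

```python
import numpy as np

def stretched_points(h0, rate, n):
    """1-D point distribution whose spacing grows exponentially,
    h_i = h0 * rate**i. Illustrates the kind of smooth growth function used
    for grid sizing away from a source (assumed generic form, not VGRID's)."""
    spacings = h0 * rate ** np.arange(n)
    return np.concatenate(([0.0], np.cumsum(spacings)))
```

    A rate slightly above 1 packs points near the source and coarsens smoothly into the field, which is what gives the grid its economy.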

  5. Advanced Unstructured Grid Generation for Complex Aerodynamic Applications

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar

    2010-01-01

    A new approach for distribution of grid points on the surface and in the volume has been developed. In addition to the point and line sources of prior work, the new approach utilizes surface and volume sources for automatic curvature-based grid sizing and convenient point distribution in the volume. A new exponential growth function produces smoother and more efficient grids and provides superior control over distribution of grid points in the field. All types of sources support anisotropic grid stretching which not only improves the grid economy but also provides more accurate solutions for certain aerodynamic applications. The new approach does not require a three-dimensional background grid as in the previous methods. Instead, it makes use of an efficient bounding-box auxiliary medium for storing grid parameters defined by surface sources. The new approach is less memory-intensive and more efficient computationally. The grids generated with the new method either eliminate the need for adaptive grid refinement for certain class of problems or provide high quality initial grids that would enhance the performance of many adaptation methods.

  6. Chapter 11: Sample Design Cross-Cutting Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurnik, Charles W; Khawaja, M. Sami; Rushton, Josh

    Evaluating an energy efficiency program requires assessing the total energy and demand saved through all of the energy efficiency measures provided by the program. For large programs, the direct assessment of savings for each participant would be cost-prohibitive. Even if a program is small enough that a full census could be managed, such an undertaking would almost always be an inefficient use of evaluation resources. The bulk of this chapter describes methods for minimizing and quantifying sampling error. Measurement error and regression error are discussed in various contexts in other chapters.
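    A minimal version of the sample-size arithmetic behind such designs is the standard relative-precision formula with an optional finite-population correction; the 90/10 criterion and the cv = 0.5 planning default shown here are common conventions, not a prescription from the chapter.

```python
from math import ceil

def sample_size(cv=0.5, z=1.645, precision=0.10, population=None):
    """Simple-random-sample size for a target relative precision.

    n0 = (z * cv / precision)**2 is the familiar 90/10 criterion used in
    EE evaluation planning (cv = 0.5 is a conventional default guess); a
    finite-population correction is applied when the frame is small.
    Generic textbook sketch, not the chapter's full protocol.
    """
    n0 = (z * cv / precision) ** 2
    if population is not None:
        n0 /= 1.0 + n0 / population
    return ceil(n0)
```

    For a small program the correction matters: the classic 68-site requirement shrinks to 41 when only 100 participants exist.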

  7. Highly efficient water-mediated approach to access benzazoles: metal catalyst and base-free synthesis of 2-substituted benzimidazoles, benzoxazoles, and benzothiazoles.

    PubMed

    Bala, Manju; Verma, Praveen Kumar; Sharma, Deepika; Kumar, Neeraj; Singh, Bikram

    2015-05-01

    An efficient and versatile water-mediated method has been developed for the one-step synthesis of 2-substituted benzimidazoles, benzoxazoles, and benzothiazoles. The protocol excludes the use of toxic metal catalysts, bases, and other additives, and provides good to excellent yields and selectivities with high functional-group tolerance for the synthesis of 2-arylated benzimidazoles, benzoxazoles, and benzothiazoles. Benzazolones were also synthesized using a similar reaction protocol.

  8. Comparison of Spatiotemporal Mapping Techniques for Enormous ETL and Exploitation Patterns

    NASA Astrophysics Data System (ADS)

    Deiotte, R.; La Valley, R.

    2017-10-01

    The need to extract, transform, and exploit enormous volumes of spatiotemporal data has exploded with the rise of social media, advanced military sensors, wearables, automotive tracking, etc. However, current methods of spatiotemporal encoding and exploitation simultaneously limit the use of that information and increase computing complexity. Current spatiotemporal encoding methods from Niemeyer and Usher rely on a Z-order space-filling curve, a relative of Peano's 1890 space-filling curve, for spatial hashing, interleaving temporal hashes to generate a spatiotemporal encoding. However, other space-filling curves exist that provide different manifold coverings; these could promote better hashing techniques for spatial data and have the potential to map spatiotemporal data without interleaving. The concatenation of Niemeyer's and Usher's techniques provides a highly efficient space-time index, but other methods have advantages and disadvantages regarding computational cost, efficiency, and utility. This paper explores several such methods using data sets ranging in size from 1K to 10M observations and provides a comparison of the methods.
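    The Z-order spatial hashing step referred to above is plain bit interleaving; a minimal Morton encode/decode pair looks like this (the coordinate width is an assumption):

```python
def morton_encode(x, y, bits=16):
    """Interleave the bits of two non-negative coordinates into a Z-order
    (Morton) index, the spatial-hashing step behind Niemeyer/Usher-style
    encodings."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)        # x bits -> even positions
        code |= ((y >> i) & 1) << (2 * i + 1)    # y bits -> odd positions
    return code

def morton_decode(code, bits=16):
    """Recover (x, y) from a Morton index."""
    x = y = 0
    for i in range(bits):
        x |= ((code >> (2 * i)) & 1) << i
        y |= ((code >> (2 * i + 1)) & 1) << i
    return x, y
```

    Nearby (x, y) pairs tend to share Morton-code prefixes, which is what makes the curve useful as a spatial hash; a spatiotemporal key can then be built by concatenating or interleaving a temporal hash.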

  9. Efficient semiconductor light-emitting device and method

    DOEpatents

    Choquette, Kent D.; Lear, Kevin L.; Schneider, Jr., Richard P.

    1996-01-01

    A semiconductor light-emitting device and method. The semiconductor light-emitting device is provided with at least one control layer or control region which includes an annular oxidized portion thereof to channel an injection current into the active region, and to provide a lateral refractive index profile for index guiding the light generated within the device. A periodic composition grading of at least one of the mirror stacks in the device provides a reduced operating voltage of the device. The semiconductor light-emitting device has a high efficiency for light generation, and may be formed either as a resonant-cavity light-emitting diode (RCLED) or as a vertical-cavity surface-emitting laser (VCSEL).

  10. Efficient semiconductor light-emitting device and method

    DOEpatents

    Choquette, K.D.; Lear, K.L.; Schneider, R.P. Jr.

    1996-02-20

    A semiconductor light-emitting device and method are disclosed. The semiconductor light-emitting device is provided with at least one control layer or control region which includes an annular oxidized portion thereof to channel an injection current into the active region, and to provide a lateral refractive index profile for index guiding the light generated within the device. A periodic composition grading of at least one of the mirror stacks in the device provides a reduced operating voltage of the device. The semiconductor light-emitting device has a high efficiency for light generation, and may be formed either as a resonant-cavity light-emitting diode (RCLED) or as a vertical-cavity surface-emitting laser (VCSEL). 12 figs.

  11. Electron linac for medical isotope production with improved energy efficiency and isotope recovery

    DOEpatents

    Noonan, John; Walters, Dean; Virgo, Matt; Lewellen, John

    2015-09-08

    A method and isotope linac system are provided for producing radio-isotopes and for recovering isotopes. The isotope linac is an energy recovery linac (ERL) with an electron beam being transmitted through an isotope-producing target. The electron beam energy is recollected and re-injected into an accelerating structure. The ERL provides improved efficiency with reduced power requirements and provides improved thermal management of an isotope target and an electron-to-x-ray converter.

  12. Study on a New Combination Method and High Efficiency Outer Rotor Type Permanent Magnet Motors

    NASA Astrophysics Data System (ADS)

    Enomoto, Yuji; Kitamura, Masashi; Motegi, Yasuaki; Andoh, Takashi; Ochiai, Makoto; Abukawa, Toshimi

    The segment stator core, high-space-factor coil, and high-efficiency magnet are indispensable technologies in the development of compact, high-efficiency motors. However, adoption of the segment stator core and high-space-factor coil has not progressed in the field of outer rotor type motors, because the inner components cannot be laser welded together. Therefore, we have examined a segment stator core combination technology with the aims of greatly increasing efficiency and realizing miniaturization. We have also developed a characteristic estimation method that provides the most suitable performance for segment stator core motors.

  13. Real time charge efficiency monitoring for nickel electrodes in NICD and NIH2 cells

    NASA Astrophysics Data System (ADS)

    Zimmerman, A. H.

    1987-09-01

    The charge efficiency of nickel-cadmium and nickel-hydrogen battery cells is critical in spacecraft applications for determining the amount of time required for a battery to reach a full state of charge. As the nickel-cadmium or nickel-hydrogen batteries approach about 90 percent state of charge, the charge efficiency begins to drop towards zero, making estimation of the total amount of stored charge uncertain. Charge efficiency estimates are typically based on prior history of available capacity following standardized conditions for charge and discharge. These methods work well as long as performance does not change significantly. A relatively simple method for determining charge efficiencies during real time operation for these battery cells would be a tremendous advantage. Such a method was explored and appears to be quite well suited for application to nickel-cadmium and nickel-hydrogen battery cells. The charge efficiency is monitored in real time, using only voltage measurements as inputs. With further evaluation such a method may provide a means to better manage charge control of batteries, particularly in systems where a high degree of autonomy or system intelligence is required.

  14. Real time charge efficiency monitoring for nickel electrodes in NICD and NIH2 cells

    NASA Technical Reports Server (NTRS)

    Zimmerman, A. H.

    1987-01-01

    The charge efficiency of nickel-cadmium and nickel-hydrogen battery cells is critical in spacecraft applications for determining the amount of time required for a battery to reach a full state of charge. As the nickel-cadmium or nickel-hydrogen batteries approach about 90 percent state of charge, the charge efficiency begins to drop towards zero, making estimation of the total amount of stored charge uncertain. Charge efficiency estimates are typically based on prior history of available capacity following standardized conditions for charge and discharge. These methods work well as long as performance does not change significantly. A relatively simple method for determining charge efficiencies during real time operation for these battery cells would be a tremendous advantage. Such a method was explored and appears to be quite well suited for application to nickel-cadmium and nickel-hydrogen battery cells. The charge efficiency is monitored in real time, using only voltage measurements as inputs. With further evaluation such a method may provide a means to better manage charge control of batteries, particularly in systems where a high degree of autonomy or system intelligence is required.

  15. 40 CFR 136.6 - Method modifications and analytical requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... person or laboratory using a test procedure (analytical method) in this Part. (2) Chemistry of the method... (analytical method) provided that the chemistry of the method or the determinative technique is not changed... prevent efficient recovery of organic pollutants and prevent the method from meeting QC requirements, the...

  16. Application of kernel functions for accurate similarity search in large chemical databases.

    PubMed

    Wang, Xiaohong; Huan, Jun; Smalter, Aaron; Lushington, Gerald H

    2010-04-29

Similarity search in chemical structure databases is an important problem with many applications in chemical genomics, drug design, and efficient chemical probe screening, among others. It is widely believed that structure-based methods provide an efficient way to perform such queries. Recently, various graph kernel functions have been designed to capture the intrinsic similarity of graphs. Though successful in constructing accurate predictive and classification models, graph kernel functions cannot be applied to large chemical compound databases due to their high computational complexity and the difficulty of indexing similarity search for large databases. To bridge graph kernel functions and similarity search in chemical databases, we applied a novel kernel-based similarity measurement, developed by our team, to measure the similarity of graph-represented chemicals. In our method, we utilize a hash table to support a new graph kernel function definition, efficient storage, and fast search. We have applied our method, named G-hash, to large chemical databases. Our results show that the G-hash method achieves state-of-the-art performance for k-nearest neighbor (k-NN) classification. Moreover, the similarity measurement and the index structure are scalable to large chemical databases, with smaller index sizes and faster query processing times than state-of-the-art indexing methods such as Daylight fingerprints, C-tree, and GraphGrep. Efficient similarity query processing for large chemical databases is challenging because running time efficiency must be balanced against similarity search accuracy. Our similarity search method, G-hash, provides a new way to perform similarity search in chemical databases. Experimental study validates the utility of G-hash in chemical databases.
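The hash-table idea can be illustrated with a small sketch. This is not the authors' G-hash implementation; it is a hedged toy version of the same pattern: summarize each node by a discrete feature (here, its label plus the sorted labels of its neighbors), invert the features of every database graph into a hash table, and score a query by the number of features it shares with each stored graph.

```python
# Toy hash-based graph similarity in the spirit of G-hash (illustrative only).
from collections import Counter

def node_features(graph):
    """graph: dict node -> (label, list of neighbor nodes)."""
    feats = Counter()
    for node, (label, nbrs) in graph.items():
        feats[(label, tuple(sorted(graph[n][0] for n in nbrs)))] += 1
    return feats

def build_index(db):
    """Invert per-graph feature counts into a hash table: feature -> {name: count}."""
    index = {}
    for name, g in db.items():
        for feat, cnt in node_features(g).items():
            index.setdefault(feat, {})[name] = cnt
    return index

def knn_query(index, g, k=1):
    """Return the k database graphs sharing the most node features with g."""
    scores = Counter()
    for feat, cnt in node_features(g).items():
        for name, cnt2 in index.get(feat, {}).items():
            scores[name] += min(cnt, cnt2)
    return [name for name, _ in scores.most_common(k)]
```

For example, a three-atom C-C-O "molecule" queried against a tiny database retrieves itself rather than a C-C-C chain, because it shares all three node features with the former and only one with the latter.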

  17. Hydrogen enrichment of synthetic fuel

    NASA Technical Reports Server (NTRS)

    Jay, C. G.

    1978-01-01

Synthetic gas may be produced at lower cost and higher efficiency by using an outside source of hydrogen. The method is compatible with the same temperatures and pressures as the shift reaction. The process increases efficiency by using less coal and water to provide an equal amount of synthetic gas.

  18. Hamiltonian Monte Carlo acceleration using surrogate functions with random bases.

    PubMed

    Zhang, Cheng; Shahbaba, Babak; Zhao, Hongkai

    2017-11-01

For big data analysis, the high computational cost of Bayesian methods often limits their application in practice. In recent years, there have been many attempts to improve the computational efficiency of Bayesian inference. Here we propose an efficient and scalable computational technique for a state-of-the-art Markov chain Monte Carlo method, namely, Hamiltonian Monte Carlo. The key idea is to explore and exploit the structure and regularity in the parameter space of the underlying probabilistic model to construct an effective approximation of its geometric properties. To this end, we build a surrogate function to approximate the target distribution using properly chosen random bases and an efficient optimization process. The resulting method provides a flexible, scalable, and efficient sampling algorithm, which converges to the correct target distribution. We show that by choosing the basis functions and optimization process differently, our method can be related to other approaches for the construction of surrogate functions, such as generalized additive models or Gaussian process models. Experiments based on simulated and real data show that our approach leads to substantially more efficient sampling algorithms compared to existing state-of-the-art methods.
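The surrogate idea can be sketched in a few lines. This is a hedged illustration, not the paper's algorithm: fit a log-density with random cosine bases by least squares, then use the cheap surrogate gradient (for example inside HMC leapfrog steps) instead of the exact one. The basis count, frequency scale, and 1-D target below are illustrative choices.

```python
# Random-bases surrogate for a log-density (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)

def fit_surrogate(X, logp_vals, n_bases=50, scale=1.0):
    d = X.shape[1]
    W = rng.normal(scale=scale, size=(d, n_bases))  # random frequencies
    b = rng.uniform(0.0, 2.0 * np.pi, n_bases)      # random phases
    Phi = np.cos(X @ W + b)                         # design matrix of cosine bases
    coef, *_ = np.linalg.lstsq(Phi, logp_vals, rcond=None)
    return W, b, coef

def surrogate_grad(x, W, b, coef):
    # d/dx sum_j coef_j * cos(x.W_j + b_j) = -sum_j coef_j * sin(x.W_j + b_j) * W_j
    return -(np.sin(x @ W + b) * coef) @ W.T
```

Fitting the standard normal log-density -x²/2 on a grid, the surrogate gradient at an interior point tracks the exact gradient -x, which is what an HMC sampler would consume.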

  19. Hot Fusion: an efficient method to clone multiple DNA fragments as well as inverted repeats without ligase.

    PubMed

    Fu, Changlin; Donovan, William P; Shikapwashya-Hasser, Olga; Ye, Xudong; Cole, Robert H

    2014-01-01

    Molecular cloning is utilized in nearly every facet of biological and medical research. We have developed a method, termed Hot Fusion, to efficiently clone one or multiple DNA fragments into plasmid vectors without the use of ligase. The method is directional, produces seamless junctions and is not dependent on the availability of restriction sites for inserts. Fragments are assembled based on shared homology regions of 17-30 bp at the junctions, which greatly simplifies the construct design. Hot Fusion is carried out in a one-step, single tube reaction at 50 °C for one hour followed by cooling to room temperature. In addition to its utility for multi-fragment assembly Hot Fusion provides a highly efficient method for cloning DNA fragments containing inverted repeats for applications such as RNAi. The overall cloning efficiency is in the order of 90-95%.

  20. Hot Fusion: An Efficient Method to Clone Multiple DNA Fragments as Well as Inverted Repeats without Ligase

    PubMed Central

    Fu, Changlin; Donovan, William P.; Shikapwashya-Hasser, Olga; Ye, Xudong; Cole, Robert H.

    2014-01-01

    Molecular cloning is utilized in nearly every facet of biological and medical research. We have developed a method, termed Hot Fusion, to efficiently clone one or multiple DNA fragments into plasmid vectors without the use of ligase. The method is directional, produces seamless junctions and is not dependent on the availability of restriction sites for inserts. Fragments are assembled based on shared homology regions of 17–30 bp at the junctions, which greatly simplifies the construct design. Hot Fusion is carried out in a one-step, single tube reaction at 50°C for one hour followed by cooling to room temperature. In addition to its utility for multi-fragment assembly Hot Fusion provides a highly efficient method for cloning DNA fragments containing inverted repeats for applications such as RNAi. The overall cloning efficiency is in the order of 90–95%. PMID:25551825

  1. Method for providing real-time control of a gaseous propellant rocket propulsion system

    NASA Technical Reports Server (NTRS)

    Morris, Brian G. (Inventor)

    1991-01-01

    The new and improved methods and apparatus disclosed provide effective real-time management of a spacecraft rocket engine powered by gaseous propellants. Real-time measurements representative of the engine performance are compared with predetermined standards to selectively control the supply of propellants to the engine for optimizing its performance as well as efficiently managing the consumption of propellants. A priority system is provided for achieving effective real-time management of the propulsion system by first regulating the propellants to keep the engine operating at an efficient level and thereafter regulating the consumption ratio of the propellants. A lower priority level is provided to balance the consumption of the propellants so significant quantities of unexpended propellants will not be left over at the end of the scheduled mission of the engine.

  2. Steroid hormones in environmental matrices: extraction method comparison.

    PubMed

    Andaluri, Gangadhar; Suri, Rominder P S; Graham, Kendon

    2017-11-09

The U.S. Environmental Protection Agency (EPA) has developed methods for the analysis of steroid hormones in water, soil, sediment, and municipal biosolids by HRGC/HRMS (EPA Method 1698). Following the guidelines provided in US-EPA Method 1698, the extraction methods were validated with reagent water and applied to municipal wastewater, surface water, and municipal biosolids using GC/MS/MS for the analysis of the nine most commonly detected steroid hormones. This is the first reported comparison of the separatory funnel extraction (SFE), continuous liquid-liquid extraction (CLLE), and Soxhlet extraction methods developed by the U.S. EPA. Furthermore, a solid phase extraction (SPE) method was also developed in-house for the extraction of steroid hormones from aquatic environmental samples. This study provides valuable information regarding the robustness of the different extraction methods. Statistical analysis of the data showed that SPE-based methods provided better recovery efficiencies and lower variability for the steroid hormones, followed by SFE. The analytical methods developed in-house for extraction of biosolids showed a wide recovery range; however, the variability was low (≤ 7% RSD). Soxhlet extraction and CLLE are lengthy procedures and have been shown to provide highly variable recovery efficiencies. The results of this study provide guidance for better sample preparation strategies in analytical methods for steroid hormone analysis, and SPE broadens the choice of methods for environmental sample analysis.

  3. An accurate and efficient reliability-based design optimization using the second order reliability method and improved stability transformation method

    NASA Astrophysics Data System (ADS)

    Meng, Zeng; Yang, Dixiong; Zhou, Huanlin; Yu, Bo

    2018-05-01

The first order reliability method has been extensively adopted for reliability-based design optimization (RBDO), but it shows inaccuracy in calculating the failure probability with highly nonlinear performance functions. Thus, the second order reliability method is required to evaluate the reliability accurately. However, its application to RBDO is quite challenging owing to the expensive computational cost incurred by the repeated reliability evaluations and Hessian calculations of probabilistic constraints. In this article, a new improved stability transformation method is proposed to search for the most probable point efficiently, and the Hessian matrix is calculated by the symmetric rank-one update. The computational capability of the proposed method is illustrated and compared to existing RBDO approaches through three mathematical and two engineering examples. The comparison results indicate that the proposed method is very efficient and accurate, providing an alternative tool for RBDO of engineering structures.
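The symmetric rank-one (SR1) update named in the abstract has a standard closed form that can be sketched directly. Given a step s and the corresponding gradient change y, the Hessian approximation B is corrected by B += (y − Bs)(y − Bs)ᵀ / ((y − Bs)ᵀ s), with the customary safeguard of skipping the update when the denominator is numerically unsafe. The snippet below is a generic SR1 sketch, not the article's code.

```python
# Symmetric rank-one (SR1) Hessian approximation update (standard formula).
import numpy as np

def sr1_update(B, s, y, eps=1e-8):
    r = y - B @ s                 # residual between observed and predicted gradient change
    denom = r @ s
    if abs(denom) <= eps * np.linalg.norm(r) * np.linalg.norm(s):
        return B                  # standard safeguard: skip unstable updates
    return B + np.outer(r, r) / denom
```

For a quadratic function the gradient change satisfies y = H s exactly, so a sequence of independent steps recovers the true Hessian H; that property makes SR1 attractive when, as here, exact Hessians of probabilistic constraints are too expensive.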

  4. Comparing Server Energy Use and Efficiency Using Small Sample Sizes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coles, Henry C.; Qin, Yong; Price, Phillip N.

This report documents a demonstration that compared the energy consumption and efficiency of a limited sample size of server-type IT equipment from different manufacturers by measuring power at the server power supply power cords. The results are specific to the equipment and methods used. However, it is hoped that those responsible for IT equipment selection can use the methods described to choose models that optimize energy use efficiency. The demonstration was conducted in a data center at Lawrence Berkeley National Laboratory in Berkeley, California. It was performed with five servers of similar mechanical and electronic specifications; three from Intel and one each from Dell and Supermicro. Server IT equipment is constructed using commodity components, server manufacturer-designed assemblies, and control systems. Server compute efficiency is constrained by the commodity component specifications and integration requirements. The design freedom, outside of the commodity component constraints, provides room for the manufacturer to offer a product with competitive efficiency that meets market needs at a compelling price. A goal of the demonstration was to compare and quantify the server efficiency for three different brands. The efficiency is defined as the average compute rate (computations per unit of time) divided by the average energy consumption rate. The research team used an industry standard benchmark software package to provide a repeatable software load to obtain the compute rate and provide a variety of power consumption levels. Energy use when the servers were in an idle state (not providing computing work) was also measured. At high server compute loads, all brands, using the same key components (processors and memory), had similar results; therefore, from these results, it could not be concluded that one brand is more efficient than the others.
The test results show that the power consumption variability caused by the key components as a group is similar to that of all other components as a group. However, some differences were observed. The Supermicro server used 27 percent more power at idle compared to the other brands. The Intel server had a power supply control feature called cold redundancy, and the data suggest that cold redundancy can provide energy savings at low power levels. Test and evaluation methods that might be used by others having limited resources for IT equipment evaluation are explained in the report.
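The efficiency metric defined above reduces to simple arithmetic: the average compute rate divided by the average power draw is operations per joule. A hedged sketch with illustrative numbers:

```python
# Efficiency = average compute rate / average power, i.e. operations per joule.
def server_efficiency(total_ops, elapsed_s, energy_j):
    compute_rate = total_ops / elapsed_s  # operations per second
    avg_power_w = energy_j / elapsed_s    # joules per second (watts)
    return compute_rate / avg_power_w     # operations per joule
```

For instance, a server completing 10¹² operations in 100 s while consuming 20 000 J averages 10¹⁰ ops/s at 200 W, for 5×10⁷ operations per joule.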

  5. Efficient use of highway capacity summary : report to Congress

    DOT National Transportation Integrated Search

    2009-11-01

    This report was developed to summarize the implementation of safety shoulders as travel lanes as a method to increase the efficient use of highway capacity. Its purpose is to provide a succinct overview of efforts to use left or right shoulder lanes ...

  6. Methods comparison for microsatellite marker development: Different isolation methods, different yield efficiency

    NASA Astrophysics Data System (ADS)

    Zhan, Aibin; Bao, Zhenmin; Hu, Xiaoli; Lu, Wei; Hu, Jingjie

    2009-06-01

Microsatellite markers have become some of the most important molecular tools used in various fields of research. Large numbers of microsatellite markers are required for whole-genome surveys in molecular ecology, quantitative genetics, and genomics. It is therefore necessary to select versatile, low-cost, efficient, and time- and labor-saving methods to develop a large panel of microsatellite markers. In this study, we used the Zhikong scallop (Chlamys farreri) as the target species to compare the efficiency of five methods derived from three strategies for microsatellite marker development. The results showed that the strategy of constructing a small-insert genomic DNA library yielded poor efficiency, while the microsatellite-enriched strategy greatly improved isolation efficiency. Although the public-database mining strategy is time- and cost-saving, it is difficult to obtain a large number of microsatellite markers this way, mainly due to the limited sequence data of non-model species deposited in public databases. Based on the results of this study, we recommend two methods, the microsatellite-enriched library construction method and the FIASCO-colony hybridization method, for large-scale microsatellite marker development. Both methods were derived from the microsatellite-enriched strategy. The experimental results obtained from the Zhikong scallop also provide a reference for microsatellite marker development in other species with large genomes.

  7. Highly simplified lateral flow-based nucleic acid sample preparation and passive fluid flow control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cary, Robert E.

    2015-12-08

    Highly simplified lateral flow chromatographic nucleic acid sample preparation methods, devices, and integrated systems are provided for the efficient concentration of trace samples and the removal of nucleic acid amplification inhibitors. Methods for capturing and reducing inhibitors of nucleic acid amplification reactions, such as humic acid, using polyvinylpyrrolidone treated elements of the lateral flow device are also provided. Further provided are passive fluid control methods and systems for use in lateral flow assays.

  8. Highly simplified lateral flow-based nucleic acid sample preparation and passive fluid flow control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cary, Robert B.

    Highly simplified lateral flow chromatographic nucleic acid sample preparation methods, devices, and integrated systems are provided for the efficient concentration of trace samples and the removal of nucleic acid amplification inhibitors. Methods for capturing and reducing inhibitors of nucleic acid amplification reactions, such as humic acid, using polyvinylpyrrolidone treated elements of the lateral flow device are also provided. Further provided are passive fluid control methods and systems for use in lateral flow assays.

  9. Efficiency of personal dosimetry methods in vascular interventional radiology.

    PubMed

    Bacchim Neto, Fernando Antonio; Alves, Allan Felipe Fattori; Mascarenhas, Yvone Maria; Giacomini, Guilherme; Maués, Nadine Helena Pelegrino Bastos; Nicolucci, Patrícia; de Freitas, Carlos Clayton Macedo; Alvarez, Matheus; Pina, Diana Rodrigues de

    2017-05-01

The aim of the present study was to determine the efficiency of six methods for calculating the effective dose (E) received by health professionals during vascular interventional procedures. We evaluated the efficiency of six methods that are currently used to estimate professionals' E, based on national and international recommendations for interventional radiology. Equivalent doses to the head, neck, chest, abdomen, feet, and hands of seven professionals were monitored during 50 vascular interventional radiology procedures. Professionals' E was calculated for each procedure according to six methods that are commonly employed internationally. To determine the best method, a more efficient E calculation method was used to determine the reference value (reference E) for comparison. The highest equivalent doses were found for the hands (0.34±0.93 mSv). The two methods described by Brazilian regulations overestimated E by approximately 100% and 200%. The most efficient method was the one recommended by the United States National Council on Radiation Protection and Measurements (NCRP). The mean and median differences of this method relative to the reference E were close to 0%, and its standard deviation was the lowest among the six methods. The present study showed that the most precise method was the one recommended by the NCRP, which uses two dosimeters (one over and one under protective aprons). Methods that employ at least two dosimeters are more efficient and provide better information for estimating E and the doses to shielded and unshielded regions. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
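The two-dosimeter estimate mentioned above is a weighted sum of the under-apron reading and the over-apron (collar) reading. As a hedged sketch: the 0.5 and 0.025 weights below are the commonly quoted values from NCRP Report No. 122, and the readings in the example are illustrative, not data from this study.

```python
# Two-dosimeter effective-dose estimate in the spirit of NCRP Report No. 122
# (weights and readings illustrative).
def effective_dose_mSv(h_under_mSv, h_over_mSv):
    # h_under_mSv: dosimeter worn under the protective apron
    # h_over_mSv:  dosimeter worn over the apron, at the collar
    return 0.5 * h_under_mSv + 0.025 * h_over_mSv
```

With an under-apron reading of 0.1 mSv and a collar reading of 2.0 mSv, the estimate is 0.05 + 0.05 = 0.1 mSv, showing how the unshielded collar reading contributes only a small weighted share.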

  10. Medical image security using modified chaos-based cryptography approach

    NASA Astrophysics Data System (ADS)

    Talib Gatta, Methaq; Al-latief, Shahad Thamear Abd

    2018-05-01

The progressive development of telecommunication and networking technologies has led to the increased popularity of telemedicine, which involves the storage and transfer of medical images and related information, so security concerns have emerged. This paper presents a method to secure medical images, since they play a major role in healthcare organizations. The main idea in this work is based on a chaotic sequence, which provides an efficient encryption method that allows reconstructing the original image from the encrypted image with high quality and minimal distortion of its content, and does not affect human treatment and diagnosis. Experimental results prove the efficiency of the proposed method using statistical measures and the robust correlation between the original image and the decrypted image.
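The general chaos-based pattern can be sketched in a few lines. This is an illustration only, not the authors' exact scheme: iterate the logistic map x → r·x·(1−x) to generate a byte keystream and XOR it with the pixel bytes. Because XOR is its own inverse, decryption is the same operation with the same key (x0, r), which is what makes lossless reconstruction possible.

```python
# Generic chaos-based XOR cipher sketch (illustrative; not the paper's scheme).
def logistic_keystream(x0, r, n):
    x, ks = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)        # logistic map iteration
        ks.append(int(x * 256) % 256)  # quantize chaotic value to a byte
    return ks

def chaos_xor(pixels, x0=0.3141, r=3.99):
    ks = logistic_keystream(x0, r, len(pixels))
    return bytes(p ^ k for p, k in zip(pixels, ks))
```

Applying `chaos_xor` twice with the same parameters returns the original bytes exactly, i.e. zero distortion of the decrypted image.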

  11. A fast semi-discrete Kansa method to solve the two-dimensional spatiotemporal fractional diffusion equation

    NASA Astrophysics Data System (ADS)

    Sun, HongGuang; Liu, Xiaoting; Zhang, Yong; Pang, Guofei; Garrard, Rhiannon

    2017-09-01

    Fractional-order diffusion equations (FDEs) extend classical diffusion equations by quantifying anomalous diffusion frequently observed in heterogeneous media. Real-world diffusion can be multi-dimensional, requiring efficient numerical solvers that can handle long-term memory embedded in mass transport. To address this challenge, a semi-discrete Kansa method is developed to approximate the two-dimensional spatiotemporal FDE, where the Kansa approach first discretizes the FDE, then the Gauss-Jacobi quadrature rule solves the corresponding matrix, and finally the Mittag-Leffler function provides an analytical solution for the resultant time-fractional ordinary differential equation. Numerical experiments are then conducted to check how the accuracy and convergence rate of the numerical solution are affected by the distribution mode and number of spatial discretization nodes. Applications further show that the numerical method can efficiently solve two-dimensional spatiotemporal FDE models with either a continuous or discrete mixing measure. Hence this study provides an efficient and fast computational method for modeling super-diffusive, sub-diffusive, and mixed diffusive processes in large, two-dimensional domains with irregular shapes.
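The Mittag-Leffler function that closes the time-fractional step above has a simple power-series definition, E_α(z) = Σ_{k≥0} z^k / Γ(αk + 1). A hedged sketch of a truncated-series evaluation follows; it is adequate for moderate |z|, whereas production solvers use more robust algorithms.

```python
# One-parameter Mittag-Leffler function via its (truncated) power series.
from math import gamma

def mittag_leffler(alpha, z, terms=50):
    # E_alpha(z) = sum_{k=0}^{inf} z^k / Gamma(alpha*k + 1)
    return sum(z ** k / gamma(alpha * k + 1) for k in range(terms))
```

Two classical special cases give easy checks: E_1(z) = e^z and E_2(z²) = cosh(z).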

  12. Microwave assisted synthesis of bridgehead alkenes.

    PubMed

    Cleary, Leah; Yoo, Hoseong; Shea, Kenneth J

    2011-04-01

    A new, concise method to synthesize triene precursors for the type 2 intramolecular Diels-Alder reaction has been developed. Microwave irradiation of the trienes provides a convenient method for the synthesis of bridgehead alkenes. Higher yields, shorter reaction times, and lower reaction temperatures provide a general and efficient route to this interesting class of molecules.

  13. Microwave Assisted Synthesis of Bridgehead Alkenes

    PubMed Central

    Cleary, Leah; Yoo, Hoseong; Shea, Kenneth J.

    2011-01-01

    A new, concise method to synthesize triene precursors for the type 2 intramolecular Diels–Alder reaction has been developed. Microwave irradiation of the trienes provides a convenient method for the synthesis of bridgehead alkenes. Higher yields, shorter reaction times and lower reaction temperatures provide a general and efficient route to this interesting class of molecules. PMID:21384818

  14. Joint histogram-based cost aggregation for stereo matching.

    PubMed

    Min, Dongbo; Lu, Jiangbo; Do, Minh N

    2013-10-01

This paper presents a novel method for performing efficient cost aggregation in stereo matching. The cost aggregation problem is reformulated from the perspective of a histogram, giving us the potential to reduce the complexity of the cost aggregation in stereo matching significantly. Unlike previous methods, which have tried to reduce the complexity in terms of the size of an image and a matching window, our approach focuses on reducing the computational redundancy that exists across the search range, caused by repeated filtering for all the hypotheses. Moreover, we also reduce the complexity of the window-based filtering through an efficient sampling scheme inside the matching window. The tradeoff between accuracy and complexity is extensively investigated by varying the parameters used in the proposed method. Experimental results show that the proposed method provides high-quality disparity maps with low complexity and outperforms existing local methods. This paper also provides new insights into complexity-constrained stereo-matching algorithm design.
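The pipeline the paper accelerates can be shown with a deliberately simple stand-in (this is not the joint-histogram algorithm itself): per-pixel absolute-difference costs over a disparity range, box-filtered along each row as the aggregation step, then a winner-takes-all argmin. The repeated filtering across disparity hypotheses in the loop below is exactly the redundancy the paper targets.

```python
# Minimal local stereo pipeline: cost, aggregation, winner-takes-all (sketch).
import numpy as np

def stereo_wta(left, right, max_disp, radius=1):
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    h, w = left.shape
    box = np.ones(2 * radius + 1) / (2 * radius + 1)
    costs = np.full((max_disp + 1, h, w), np.inf)
    for d in range(max_disp + 1):
        # matching cost: left pixel x vs right pixel x - d
        diff = np.abs(left[:, d:] - right[:, :w - d])
        # aggregation: box-filter the cost along each row
        agg = np.stack([np.convolve(row, box, mode="same") for row in diff])
        costs[d, :, d:] = agg
    return costs.argmin(axis=0)  # winner-takes-all disparity map
```

Shifting a random image by two columns and running the sketch recovers disparity 2 away from the image border.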

  15. High-throughput real-time quantitative reverse transcription PCR.

    PubMed

    Bookout, Angie L; Cummins, Carolyn L; Mangelsdorf, David J; Pesola, Jean M; Kramer, Martha F

    2006-02-01

    Extensive detail on the application of the real-time quantitative polymerase chain reaction (QPCR) for the analysis of gene expression is provided in this unit. The protocols are designed for high-throughput, 384-well-format instruments, such as the Applied Biosystems 7900HT, but may be modified to suit any real-time PCR instrument. QPCR primer and probe design and validation are discussed, and three relative quantitation methods are described: the standard curve method, the efficiency-corrected DeltaCt method, and the comparative cycle time, or DeltaDeltaCt method. In addition, a method is provided for absolute quantification of RNA in unknown samples. RNA standards are subjected to RT-PCR in the same manner as the experimental samples, thus accounting for the reaction efficiencies of both procedures. This protocol describes the production and quantitation of synthetic RNA molecules for real-time and non-real-time RT-PCR applications.
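The relative-quantitation formulas named above are standard and compact enough to sketch. The snippet below is a hedged illustration with made-up Ct values, not the unit's protocol code: the comparative ΔΔCt method (which assumes ~100% amplification efficiency, i.e. E = 2) and a Pfaffl-style efficiency-corrected ratio.

```python
# Relative quantitation: comparative DeltaDeltaCt and efficiency-corrected ratio.
def ddct_fold_change(ct_tgt_test, ct_ref_test, ct_tgt_ctrl, ct_ref_ctrl):
    # DeltaDeltaCt = (Ct_target - Ct_reference)_test - (Ct_target - Ct_reference)_control
    ddct = (ct_tgt_test - ct_ref_test) - (ct_tgt_ctrl - ct_ref_ctrl)
    return 2.0 ** (-ddct)  # assumes doubling each cycle (E = 2)

def efficiency_corrected_ratio(e_tgt, e_ref, dct_tgt, dct_ref):
    # dct_* = Ct(control) - Ct(test); e_* = measured per-cycle amplification factor
    return (e_tgt ** dct_tgt) / (e_ref ** dct_ref)
```

For example, a target gene whose Ct drops from 26 to 24 while the reference gene is unchanged gives ΔΔCt = −2 and a 4-fold change, and the efficiency-corrected form reduces to the same answer when both efficiencies equal 2.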

  16. Multilayer Extreme Learning Machine With Subnetwork Nodes for Representation Learning.

    PubMed

    Yang, Yimin; Wu, Q M Jonathan

    2016-11-01

The extreme learning machine (ELM), which was originally proposed for "generalized" single-hidden-layer feedforward neural networks, provides efficient unified learning solutions for clustering, regression, and classification. It presents competitive accuracy with superb efficiency in many applications. However, the ELM with a subnetwork-nodes architecture has not attracted much research attention. Recently, many methods have been proposed for supervised/unsupervised dimension reduction or representation learning, but these methods normally only work for one type of problem. This paper studies the general architecture of the multilayer ELM (ML-ELM) with subnetwork nodes, showing that: 1) the proposed method provides a representation learning platform with unsupervised/supervised and compressed/sparse representation learning and 2) experimental results on ten image datasets and 16 classification datasets show that, compared to other conventional feature learning methods, the proposed ML-ELM with subnetwork nodes performs competitively or much better.
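The basic ELM training rule underlying these architectures is simple enough to sketch: hidden weights are drawn at random and frozen, and only the output weights are solved in closed form with a pseudo-inverse. The hidden size and toy data below are illustrative, and this single-hidden-layer sketch is not the paper's ML-ELM.

```python
# Basic single-hidden-layer ELM: random fixed hidden layer, closed-form output.
import numpy as np

rng = np.random.default_rng(1)

def elm_train(X, Y, n_hidden=40):
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights, never trained
    b = rng.normal(size=n_hidden)                # random biases
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y                 # output weights via pseudo-inverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

On a smooth 1-D regression target such as y = x², this closed-form solve fits the training data closely with no iterative optimization, which is the source of ELM's efficiency.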

  17. VIEWCACHE: An incremental pointer-based access method for autonomous interoperable databases

    NASA Technical Reports Server (NTRS)

    Roussopoulos, N.; Sellis, Timos

    1993-01-01

    One of the biggest problems facing NASA today is to provide scientists efficient access to a large number of distributed databases. Our pointer-based incremental data base access method, VIEWCACHE, provides such an interface for accessing distributed datasets and directories. VIEWCACHE allows database browsing and search performing inter-database cross-referencing with no actual data movement between database sites. This organization and processing is especially suitable for managing Astrophysics databases which are physically distributed all over the world. Once the search is complete, the set of collected pointers pointing to the desired data are cached. VIEWCACHE includes spatial access methods for accessing image datasets, which provide much easier query formulation by referring directly to the image and very efficient search for objects contained within a two-dimensional window. We will develop and optimize a VIEWCACHE External Gateway Access to database management systems to facilitate database search.

  18. VIEWCACHE: An incremental pointer-based access method for autonomous interoperable databases

    NASA Technical Reports Server (NTRS)

    Roussopoulos, N.; Sellis, Timos

    1992-01-01

One of the biggest problems facing NASA today is to provide scientists efficient access to a large number of distributed databases. Our pointer-based incremental database access method, VIEWCACHE, provides such an interface for accessing distributed data sets and directories. VIEWCACHE allows database browsing and search performing inter-database cross-referencing with no actual data movement between database sites. This organization and processing is especially suitable for managing Astrophysics databases which are physically distributed all over the world. Once the search is complete, the set of collected pointers pointing to the desired data are cached. VIEWCACHE includes spatial access methods for accessing image data sets, which provide much easier query formulation by referring directly to the image and very efficient search for objects contained within a two-dimensional window. We will develop and optimize a VIEWCACHE External Gateway Access to database management systems to facilitate distributed database search.

  19. Identification and modification of dynamical regions in proteins for alteration of enzyme catalytic effect

    DOEpatents

    Agarwal, Pratul K.

    2015-11-24

    A method for analysis, control, and manipulation for improvement of the chemical reaction rate of a protein-mediated reaction is provided. Enzymes, which typically comprise protein molecules, are very efficient catalysts that enhance chemical reaction rates by many orders of magnitude. Enzymes are widely used for a number of functions in chemical, biochemical, pharmaceutical, and other purposes. The method identifies key protein vibration modes that control the chemical reaction rate of the protein-mediated reaction, providing identification of the factors that enable the enzymes to achieve the high rate of reaction enhancement. By controlling these factors, the function of enzymes may be modulated, i.e., the activity can either be increased for faster enzyme reaction or it can be decreased when a slower enzyme is desired. This method provides an inexpensive and efficient solution by utilizing computer simulations, in combination with available experimental data, to build suitable models and investigate the enzyme activity.

  20. Identification and modification of dynamical regions in proteins for alteration of enzyme catalytic effect

    DOEpatents

    Agarwal, Pratul K.

    2013-04-09

    A method for analysis, control, and manipulation for improvement of the chemical reaction rate of a protein-mediated reaction is provided. Enzymes, which typically comprise protein molecules, are very efficient catalysts that enhance chemical reaction rates by many orders of magnitude. Enzymes are widely used for a number of functions in chemical, biochemical, pharmaceutical, and other purposes. The method identifies key protein vibration modes that control the chemical reaction rate of the protein-mediated reaction, providing identification of the factors that enable the enzymes to achieve the high rate of reaction enhancement. By controlling these factors, the function of enzymes may be modulated, i.e., the activity can either be increased for faster enzyme reaction or it can be decreased when a slower enzyme is desired. This method provides an inexpensive and efficient solution by utilizing computer simulations, in combination with available experimental data, to build suitable models and investigate the enzyme activity.

  1. Projected role of advanced computational aerodynamic methods at the Lockheed-Georgia company

    NASA Technical Reports Server (NTRS)

    Lores, M. E.

    1978-01-01

Experience with advanced computational methods being used at the Lockheed-Georgia Company to aid in the evaluation and design of new and modified aircraft indicates that large and specialized computers will be needed to make advanced three-dimensional viscous aerodynamic computations practical. The Numerical Aerodynamic Simulation Facility should be used to provide a tool for designing better aerospace vehicles while at the same time reducing development costs by performing computations using Navier-Stokes equations solution algorithms and permitting less sophisticated but nevertheless complex calculations to be made efficiently. Configuration definition procedures and data output formats can probably best be defined in cooperation with industry; therefore, the computer should handle many remote terminals efficiently. The capability of transferring data to and from other computers needs to be provided. Because of the significant amount of input and output associated with 3-D viscous flow calculations and because of the exceedingly fast computation speed envisioned for the computer, special attention should be paid to providing rapid, diversified, and efficient input and output.

  2. Design of multi-energy fields coupling testing system of vertical axis wind power system

    NASA Astrophysics Data System (ADS)

    Chen, Q.; Yang, Z. X.; Li, G. S.; Song, L.; Ma, C.

    2016-08-01

    The conversion efficiency of wind energy, as one of the renewable energy sources, is a focus of research and concern. Present methods for enhancing the conversion efficiency mostly improve the wind rotor structure, optimize the generator parameters and the energy storage controller, and so on. Because the conversion process involves energy conversion across multiple energy fields, namely wind energy, mechanical energy, and electrical energy, the coupling effects between them influence the overall conversion efficiency. In this paper, using system integration analysis technology, a testing system based on multi-energy field coupling (MEFC) for a vertical axis wind power system is proposed. When the maximum efficiency of the wind rotor is satisfied, the system can match the generator parameters to the output performance of the wind rotor. The voltage controller can transfer the unstable electric power to the battery on the basis of optimized parameters such as charging times and charging voltage. Through the communication connection and regulation of the upper computer system (UCS), the coupling parameters can be configured to an optimal state, which improves the overall conversion efficiency. This method can test whole wind turbine (WT) performance systematically and evaluate the design parameters effectively. It not only provides a testing method for system structure design and parameter optimization of the wind rotor, generator, and voltage controller, but also provides a new testing method for whole-performance optimization of a vertical axis wind energy conversion system (WECS).

  3. A Survey of Methods for Analyzing and Improving GPU Energy Efficiency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mittal, Sparsh; Vetter, Jeffrey S

    2014-01-01

    Recent years have witnessed phenomenal growth in the computational capabilities and applications of GPUs. However, this trend has also led to a dramatic increase in their power consumption. This paper surveys research works on analyzing and improving the energy efficiency of GPUs. It also provides a classification of these techniques on the basis of their main research idea. Further, it synthesizes research works that compare the energy efficiency of GPUs with that of other computing systems, e.g. FPGAs and CPUs. The aim of this survey is to provide researchers with knowledge of the state of the art in GPU power management and to motivate them to architect highly energy-efficient GPUs of tomorrow.

  4. Fourth order exponential time differencing method with local discontinuous Galerkin approximation for coupled nonlinear Schrodinger equations

    DOE PAGES

    Liang, Xiao; Khaliq, Abdul Q. M.; Xing, Yulong

    2015-01-23

    In this paper, we study a local discontinuous Galerkin method combined with fourth-order exponential time differencing Runge-Kutta time discretization and a fourth-order conservative method for solving the nonlinear Schrödinger equations. Based on different choices of numerical fluxes, we propose both energy-conserving and energy-dissipative local discontinuous Galerkin methods, and we prove error estimates for the semi-discrete methods applied to the linear Schrödinger equation. The numerical methods are shown to be highly efficient and stable for long-range soliton computations. Finally, extensive numerical examples are provided to illustrate the accuracy, efficiency, and reliability of the proposed methods.

  5. Retrieval of spheroid particle size distribution from spectral extinction data in the independent mode using PCA approach

    NASA Astrophysics Data System (ADS)

    Tang, Hong; Lin, Jian-Zhong

    2013-01-01

    An improved anomalous diffraction approximation (ADA) method is first presented for calculating the extinction efficiency of spheroids. In this approach, the extinction efficiency of spheroid particles can be calculated with good accuracy and high efficiency over a wider size range by combining the Latimer method and ADA theory, and the method yields a more general expression for the extinction efficiency of spheroid particles with various complex refractive indices and aspect ratios. Meanwhile, the visible spectral extinction for varied spheroid particle size distributions and complex refractive indices is surveyed. Furthermore, a selection principle for the spectral extinction data is developed based on PCA (principal component analysis) of the first-derivative spectral extinction. By calculating the contribution rate of the first-derivative spectral extinction, spectra with more significant features can be selected as input data, while those with fewer features are removed from the inversion data. In addition, we propose an improved Tikhonov iteration method to retrieve the spheroid particle size distributions in the independent mode. Simulation experiments indicate that the spheroid particle size distributions obtained with the proposed method coincide well with the given distributions, and this inversion method provides a simple, reliable, and efficient way to retrieve spheroid particle size distributions from spectral extinction data.
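    The selection principle described above, ranking spectral channels by their contribution rate to the leading principal components of the first-derivative spectra, can be sketched as follows; the data, dimensions, and variance threshold are synthetic illustrations, not the paper's actual spectra:

```python
import numpy as np

# Hypothetical sketch: rank spectral-extinction channels by how strongly
# they load on the leading principal components of the first-derivative
# spectra, then keep only the most informative channels as inversion input.
rng = np.random.default_rng(0)
spectra = rng.normal(size=(20, 50))        # 20 samples x 50 wavelengths (synthetic)
deriv = np.diff(spectra, axis=1)           # first derivative along wavelength
deriv -= deriv.mean(axis=0)                # center before PCA
U, s, Vt = np.linalg.svd(deriv, full_matrices=False)   # PCA via SVD
var_explained = s**2 / np.sum(s**2)
k = int(np.searchsorted(np.cumsum(var_explained), 0.95)) + 1  # keep 95% variance
# contribution rate of each channel: squared loadings weighted by component variance
contrib = (Vt[:k]**2 * var_explained[:k, None]).sum(axis=0)
keep = np.argsort(contrib)[::-1][:25]      # channels with the most significant features
```

Channels outside `keep` would be removed from the inversion data, in the spirit of the paper's selection principle.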

  6. Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, Qifeng, E-mail: liaoqf@shanghaitech.edu.cn; Lin, Guang, E-mail: guanglin@purdue.edu

    2016-07-15

    In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy, and demonstrate its efficiency with numerical experiments.

  7. Efficient propagation of the hierarchical equations of motion using the matrix product state method

    NASA Astrophysics Data System (ADS)

    Shi, Qiang; Xu, Yang; Yan, Yaming; Xu, Meng

    2018-05-01

    We apply the matrix product state (MPS) method to propagate the hierarchical equations of motion (HEOM). It is shown that the MPS approximation works well in different types of problems, including both boson and fermion baths. The MPS method based on the time-dependent variational principle is also found to be applicable to HEOM with over one thousand effective modes. Combining the flexibility of the HEOM in defining the effective modes with the efficiency of the MPS method may thus provide a promising tool for simulating quantum dynamics in condensed phases.

  8. Probabilistic finite elements

    NASA Technical Reports Server (NTRS)

    Belytschko, Ted; Wing, Kam Liu

    1987-01-01

    In the Probabilistic Finite Element Method (PFEM), finite element methods have been efficiently combined with second-order perturbation techniques to provide an effective method for informing the designer of the range of response that is likely in a given problem. The designer must provide as input the statistical character of the input variables, such as yield strength, load magnitude, and Young's modulus, by specifying their mean values and their variances. The output then consists of the mean response and the variance in the response. Thus the designer is given a much broader picture of the predicted performance than with simply a single response curve. These methods are applicable to a wide class of problems, provided that the scale of randomness is not too large and the probabilistic density functions possess decaying tails. By incorporating the computational techniques we have developed in the past 3 years for efficiency, the probabilistic finite element methods are capable of handling large systems with many sources of uncertainties. Sample results are given for an elastic-plastic ten-bar structure and an elastic-plastic plane continuum with a circular hole subject to cyclic loadings, with the yield stress treated as a random field.
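    The propagation of input means and variances to a mean response and response variance can be illustrated with a first-order second-moment sketch (a simplification of PFEM's second-order perturbation); the response function and statistics below are hypothetical, not the paper's ten-bar structure:

```python
# First-order second-moment sketch: propagate input means/variances through
# a toy response function via finite-difference sensitivities. The function
# and all numbers are illustrative stand-ins for a structural response.
def response(E, sigma_y, P):
    # toy displacement-like response of a bar (hypothetical form)
    return P / (E * 1e-4) + 0.01 * P / sigma_y

means = {"E": 200e9, "sigma_y": 250e6, "P": 1e4}
variances = {"E": (10e9)**2, "sigma_y": (20e6)**2, "P": (1e3)**2}

mean_resp = response(**means)              # mean response at mean inputs
var_resp = 0.0
for name, mu in means.items():
    h = 1e-6 * mu                          # finite-difference step
    hi = dict(means); hi[name] = mu + h
    lo = dict(means); lo[name] = mu - h
    grad = (response(**hi) - response(**lo)) / (2 * h)
    var_resp += grad**2 * variances[name]  # first-order variance contribution
```

The designer-facing output is exactly the pair (mean_resp, var_resp), a mean response plus a variance rather than a single response curve.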

  9. Evaluation Method for Fieldlike-Torque Efficiency by Modulation of the Resonance Field

    NASA Astrophysics Data System (ADS)

    Kim, Changsoo; Kim, Dongseuk; Chun, Byong Sun; Moon, Kyoung-Woong; Hwang, Chanyong

    2018-05-01

    The spin Hall effect has attracted a lot of interest in spintronics because it offers the possibility of a faster switching route with an electric current than with a spin-transfer-torque device. Recently, fieldlike spin-orbit torque has been shown to play an important role in the magnetization switching mechanism. However, there is no simple method for observing the fieldlike spin-orbit torque efficiency. We suggest a method for measuring fieldlike spin-orbit torque using a linear change in the resonance field in spectra of direct-current (dc)-tuned spin-torque ferromagnetic resonance. The fieldlike spin-orbit torque efficiency can be obtained in both a macrospin simulation and in experiments by simply subtracting the Oersted field from the shifted amount of resonance field. This method analyzes the effect of fieldlike torque using dc in a normal metal; therefore, only the dc resistivity and the dimensions of each layer are considered in estimating the fieldlike spin-torque efficiency. The evaluation of fieldlike-torque efficiency of a newly emerging material by modulation of the resonance field provides a shortcut in the development of an alternative magnetization switching device.

  10. Frozen Gaussian approximation based domain decomposition methods for the linear Schrödinger equation beyond the semi-classical regime

    NASA Astrophysics Data System (ADS)

    Lorin, E.; Yang, X.; Antoine, X.

    2016-06-01

    The paper is devoted to develop efficient domain decomposition methods for the linear Schrödinger equation beyond the semiclassical regime, which does not carry a small enough rescaled Planck constant for asymptotic methods (e.g. geometric optics) to produce a good accuracy, but which is too computationally expensive if direct methods (e.g. finite difference) are applied. This belongs to the category of computing middle-frequency wave propagation, where neither asymptotic nor direct methods can be directly used with both efficiency and accuracy. Motivated by recent works of the authors on absorbing boundary conditions (Antoine et al. (2014) [13] and Yang and Zhang (2014) [43]), we introduce Semiclassical Schwarz Waveform Relaxation methods (SSWR), which are seamless integrations of semiclassical approximation to Schwarz Waveform Relaxation methods. Two versions are proposed respectively based on Herman-Kluk propagation and geometric optics, and we prove the convergence and provide numerical evidence of efficiency and accuracy of these methods.

  11. Nonnegative least-squares image deblurring: improved gradient projection approaches

    NASA Astrophysics Data System (ADS)

    Benvenuto, F.; Zanella, R.; Zanni, L.; Bertero, M.

    2010-02-01

    The least-squares approach to image deblurring leads to an ill-posed problem. The addition of the nonnegativity constraint, when appropriate, does not provide regularization, even if, as far as we know, a thorough investigation of the ill-posedness of the resulting constrained least-squares problem has yet to be done. Iterative methods converging to nonnegative least-squares solutions have been proposed. Some of them have the 'semi-convergence' property, i.e. early stopping of the iteration provides 'regularized' solutions. In this paper we consider two of these methods: the projected Landweber (PL) method and the iterative image space reconstruction algorithm (ISRA). Although they work well in many instances, they are not frequently used in practice because, in general, they require a large number of iterations before providing a sensible solution. Therefore, the main purpose of this paper is to refresh these methods by increasing their efficiency. Starting from the remark that PL and ISRA require only the computation of the gradient of the functional, we propose applying to these algorithms special acceleration techniques that have been recently developed in the area of gradient methods. In particular, we propose the application of efficient step-length selection rules and line-search strategies. Moreover, noting that ISRA is a scaled gradient algorithm, we evaluate its behaviour in comparison with a recent scaled gradient projection (SGP) method for image deblurring. Numerical experiments demonstrate that the accelerated methods still exhibit the semi-convergence property, with a considerable gain both in the number of iterations and in the computational time; in particular, SGP appears to be the most efficient.
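    The projected Landweber iteration the paper starts from, a gradient step followed by projection onto the nonnegative orthant, can be sketched on a small synthetic problem (no acceleration, illustrative data only):

```python
import numpy as np

# Projected Landweber (PL) sketch for nonnegative least squares:
#   minimize ||A x - b||^2  subject to  x >= 0
# on a tiny synthetic, consistent system (stand-in for a deblurring problem).
rng = np.random.default_rng(1)
A = rng.random((30, 10))
x_true = np.abs(rng.normal(size=10))       # nonnegative ground truth
b = A @ x_true
tau = 1.0 / np.linalg.norm(A, 2)**2        # step length below 2/||A||_2^2
x = np.zeros(10)
for _ in range(500):
    grad = A.T @ (A @ x - b)               # gradient of the least-squares functional
    x = np.maximum(x - tau * grad, 0.0)    # gradient step, then project onto x >= 0
residual = np.linalg.norm(A @ x - b)
```

The accelerated variants the paper proposes replace the fixed `tau` with step-length selection rules and line searches; the projection step is unchanged.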

  12. Efficiency and flexibility using implicit methods within atmosphere dycores

    NASA Astrophysics Data System (ADS)

    Evans, K. J.; Archibald, R.; Norman, M. R.; Gardner, D. J.; Woodward, C. S.; Worley, P.; Taylor, M.

    2016-12-01

    A suite of explicit and implicit methods are evaluated for a range of configurations of the shallow water dynamical core within the spectral-element Community Atmosphere Model (CAM-SE) to explore their relative computational performance. The configurations are designed to explore the attributes of each method under different but relevant model usage scenarios, including varied spectral order within an element, static regional refinement, and scaling to large problem sizes. The limitations and benefits of using explicit versus implicit methods, with different discretizations and parameters, are discussed in light of trade-offs such as MPI communication, memory, and inherent efficiency bottlenecks. For the regionally refined shallow water configurations, the implicit BDF2 method is about as efficient as an explicit Runge-Kutta method, without including a preconditioner. Performance of the implicit methods with the residual function executed on a GPU is also presented; there is speedup for the residual relative to a CPU, but overwhelming transfer costs motivate moving more of the solver to the device. Given the performance behavior of implicit methods within the shallow water dynamical core, the recommendation for future work using implicit solvers is conditional, based on scale separation and the stiffness of the problem. The strong growth of linear iterations with increasing resolution or time step size is the main bottleneck to computational efficiency. Within the hydrostatic dynamical core of CAM-SE, we present results utilizing approximate block factorization preconditioners implemented using the Trilinos library of solvers. They reduce the cost of linear system solves and improve parallel scalability. We provide a summary of the remaining efficiency considerations within the preconditioner and utilization of the GPU, as well as a discussion of the benefits of a time stepping method that provides converged and stable solutions for a much wider range of time step sizes. As more complex model components, for example new physics and aerosols, are connected in the model, having flexibility in the time stepping will enable more options for combining and resolving multiple scales of behavior.

  13. Treatment of addiction and addiction-related behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dewey, S.L.; Brodie, J.D.; Ashby, C.R. Jr.

    2000-05-02

    The present invention provides a highly efficient method for treating substance addiction and for changing addiction-related behavior of a primate suffering from substance addiction. The method includes administering to a primate an effective amount of a pharmaceutical composition including gamma vinylGABA. The present invention also provides a method of treatment of nicotine addiction by treating a patient with an effective amount of a composition including gamma vinylGABA.

  14. CdTe devices and method of manufacturing same

    DOEpatents

    Gessert, Timothy A.; Noufi, Rommel; Dhere, Ramesh G.; Albin, David S.; Barnes, Teresa; Burst, James; Duenow, Joel N.; Reese, Matthew

    2015-09-29

    A method of producing polycrystalline CdTe materials, and devices that incorporate the polycrystalline CdTe materials, are provided. In particular, a method is provided for producing polycrystalline p-doped CdTe thin films for use in CdTe solar cells, in which the thin films possess enhanced acceptor densities and minority carrier lifetimes, resulting in enhanced efficiency of the solar cells containing the CdTe material.

  15. Treatment of addiction and addiction-related behavior

    DOEpatents

    Dewey, Stephen L.; Brodie, Jonathan D.; Ashby, Jr., Charles R.

    2000-01-01

    The present invention provides a highly efficient method for treating substance addiction and for changing addiction-related behavior of a primate suffering from substance addiction. The method includes administering to a primate an effective amount of a pharmaceutical composition including gamma vinylGABA. The present invention also provides a method of treatment of nicotine addiction by treating a patient with an effective amount of a composition including gamma vinylGABA.

  16. Methods for understanding super-efficient data envelopment analysis results with an application to hospital inpatient surgery.

    PubMed

    O'Neill, Liam; Dexter, Franklin

    2005-11-01

    We compare two techniques for increasing the transparency and face validity of Data Envelopment Analysis (DEA) results for managers at a single decision-making unit: multifactor efficiency (MFE) and non-radial super-efficiency (NRSE). Both methods incorporate the slack values from the super-efficient DEA model to provide a more robust performance measure than radial super-efficiency scores. MFE and NRSE are equivalent for unique optimal solutions and a single output. MFE incorporates the slack values from multiple output variables, whereas NRSE does not. MFE can be more transparent to managers since it involves no additional optimization steps beyond the DEA, whereas NRSE requires several. We compare results for operating room managers at an Iowa hospital evaluating its growth potential for multiple surgical specialties. In addition, we address the problem of upward bias of the slack values of the super-efficient DEA model.

  17. Display of disulfide-rich proteins by complementary DNA display and disulfide shuffling assisted by protein disulfide isomerase.

    PubMed

    Naimuddin, Mohammed; Kubo, Tai

    2011-12-01

    We report an efficient system to produce and display properly folded disulfide-rich proteins facilitated by coupled complementary DNA (cDNA) display and protein disulfide isomerase-assisted folding. The results show that a neurotoxin protein containing four disulfide linkages can be displayed in the folded state. Furthermore, it can be refolded on a solid support that binds efficiently to its natural acetylcholine receptor. Probing the efficiency of the display proteins prepared by these methods provided up to 8-fold higher enrichment by the selective enrichment method compared with cDNA display alone, more than 10-fold higher binding to its receptor by the binding assays, and more than 10-fold higher affinities by affinity measurements. Cotranslational folding was found to have better efficiency than posttranslational refolding between the two investigated methods. We discuss the utilities of efficient display of such proteins in the preparation of superior quality proteins and protein libraries for directed evolution leading to ligand discovery. Copyright © 2011 Elsevier Inc. All rights reserved.

  18. Extending the Multi-level Method for the Simulation of Stochastic Biological Systems.

    PubMed

    Lester, Christopher; Baker, Ruth E; Giles, Michael B; Yates, Christian A

    2016-08-01

    The multi-level method for discrete-state systems, first introduced by Anderson and Higham (SIAM Multiscale Model Simul 10(1):146-179, 2012), is a highly efficient simulation technique that can be used to elucidate statistical characteristics of biochemical reaction networks. A single point estimator is produced in a cost-effective manner by combining a number of estimators of differing accuracy in a telescoping sum, and, as such, the method has the potential to revolutionise the field of stochastic simulation. In this paper, we present several refinements of the multi-level method which render it easier to understand and implement, and also more efficient. Given the substantial and complex nature of the multi-level method, the first part of this work reviews existing literature, with the aim of providing a practical guide to the use of the multi-level method. The second part provides the means for a deft implementation of the technique and concludes with a discussion of a number of open problems.
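    The telescoping-sum idea behind the multi-level method can be illustrated with a toy estimator; note that in a real multi-level implementation the two draws in each correction term share randomness (e.g. coupled simulation paths) to reduce variance, which this independent-draw sketch deliberately omits, and the "simulator" below is hypothetical rather than a biochemical reaction network:

```python
import random

# Toy telescoping-sum estimator of E[X]:
#   E[X_L] = E[X_0] + sum_{l=1..L} E[X_l - X_{l-1}]
# where level-l simulation is cheap but biased, with bias shrinking as l grows.
random.seed(0)

def sample_level(l):
    # hypothetical level-l simulator: bias 2**-l, noise std 0.5
    return random.gauss(1.0 + 2.0**(-l), 0.5)

L, n = 4, 2000
estimate = sum(sample_level(0) for _ in range(n)) / n          # coarse base level
for l in range(1, L + 1):
    # correction term: estimates E[X_l - X_{l-1}] (independent draws here)
    estimate += sum(sample_level(l) - sample_level(l - 1) for _ in range(n)) / n
```

The cost saving in the real method comes from taking many samples at cheap coarse levels and few at expensive fine levels, since the coupled correction terms have small variance.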

  19. Constraints on the utility of MnO2 cartridge method for the extraction of radionuclides: A case study using 234Th

    USGS Publications Warehouse

    Baskaran, M.; Swarzenski, P.W.; Biddanda, B.A.

    2009-01-01

    Large-volume (10^2-10^3 L) seawater samples are routinely processed to investigate the partitioning of particle-reactive radionuclides and Ra between solution and size-fractionated suspended particulate matter. One of the most frequently used methods to preconcentrate these nuclides from such large volumes involves extraction onto three filter cartridges (a prefilter for particulate species and two MnO2-coated filters for dissolved species) connected in series. This method assumes that the extraction efficiency is uniform for both MnO2-coated cartridges, that no dissolved species are removed by the prefilter, and that adsorbed radionuclides are not desorbed from the MnO2-coated cartridges during filtration. In this study, we utilized 234Th-spiked coastal seawater and deionized water to address the removal of dissolved Th onto prefilters and MnO2-coated filter cartridges. Experimental results provide the first data to indicate that (1) a small fraction of dissolved Th (<6%) can be removed by the prefilter cartridge; (2) a small fraction of dissolved Th (<5%) retained by the MnO2 surface can also be desorbed, which undermines the assumption of uniform extraction efficiency for Th; and (3) the absolute and relative extraction efficiencies can vary widely. These experiments provide insight into the variability of the extraction efficiency of MnO2-coated filter cartridges by comparing the relative and absolute efficiencies, and they recommend the use of a constant efficiency on the combined activity from two filter cartridges connected in series in future studies of dissolved 234Th and other radionuclides in natural waters using sequential filtration/extraction methods. © 2009 by the American Geophysical Union.
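    The series-cartridge bookkeeping the study scrutinizes rests on a standard calculation: if both MnO2 cartridges have the same extraction efficiency E, the second cartridge collects the fraction (1 - E) that escaped the first, so E = 1 - A2/A1 and the total dissolved activity is A1/E. A minimal sketch with hypothetical activities:

```python
# Two in-series MnO2 cartridges with (assumed) equal extraction efficiency E;
# this equal-efficiency assumption is exactly what the study's experiments test.
A1, A2 = 12.0, 3.0           # hypothetical 234Th activities on cartridges 1 and 2
E = 1.0 - A2 / A1            # cartridge 2 sees only the fraction (1 - E) escaping 1
dissolved_total = A1 / E     # inferred total dissolved activity in the water
```

If the two cartridges in fact differ in efficiency, as the study's results indicate they can, E and hence the inferred dissolved activity are biased, which motivates the recommended constant-efficiency treatment of the combined two-cartridge activity.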

  20. Integration of TomoPy and the ASTRA toolbox for advanced processing and reconstruction of tomographic synchrotron data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pelt, Daniël M.; Gürsoy, Dogˇa; Palenstijn, Willem Jan

    2016-04-28

    The processing of tomographic synchrotron data requires advanced and efficient software to be able to produce accurate results in reasonable time. In this paper, the integration of two software toolboxes, TomoPy and the ASTRA toolbox, which together provide a powerful framework for processing tomographic data, is presented. The integration combines the advantages of both toolboxes, such as the user-friendliness and CPU-efficient methods of TomoPy and the flexibility and optimized GPU-based reconstruction methods of the ASTRA toolbox. It is shown that both toolboxes can be easily installed and used together, requiring only minor changes to existing TomoPy scripts. Furthermore, it is shown that the efficient GPU-based reconstruction methods of the ASTRA toolbox can significantly decrease the time needed to reconstruct large datasets, and that advanced reconstruction methods can improve reconstruction quality compared with TomoPy's standard reconstruction method.

  1. Integration of TomoPy and the ASTRA toolbox for advanced processing and reconstruction of tomographic synchrotron data

    PubMed Central

    Pelt, Daniël M.; Gürsoy, Doǧa; Palenstijn, Willem Jan; Sijbers, Jan; De Carlo, Francesco; Batenburg, Kees Joost

    2016-01-01

    The processing of tomographic synchrotron data requires advanced and efficient software to be able to produce accurate results in reasonable time. In this paper, the integration of two software toolboxes, TomoPy and the ASTRA toolbox, which, together, provide a powerful framework for processing tomographic data, is presented. The integration combines the advantages of both toolboxes, such as the user-friendliness and CPU-efficient methods of TomoPy and the flexibility and optimized GPU-based reconstruction methods of the ASTRA toolbox. It is shown that both toolboxes can be easily installed and used together, requiring only minor changes to existing TomoPy scripts. Furthermore, it is shown that the efficient GPU-based reconstruction methods of the ASTRA toolbox can significantly decrease the time needed to reconstruct large datasets, and that advanced reconstruction methods can improve reconstruction quality compared with TomoPy’s standard reconstruction method. PMID:27140167

  2. Suppression of Tla1 gene expression for improved solar conversion efficiency and photosynthetic productivity in plants and algae

    DOEpatents

    Melis, Anastasios; Mitra, Mautusi

    2010-06-29

    The invention provides methods and compositions to minimize the chlorophyll antenna size of photosynthesis by decreasing TLA1 gene expression, thereby improving solar conversion efficiency and photosynthetic productivity in plants and algae, e.g., green microalgae, under bright sunlight conditions.

  3. Effective and efficient agricultural drainage pipe mapping with UAS thermal infrared imagery: a case study

    USDA-ARS?s Scientific Manuscript database

    Effective and efficient methods are needed to map agricultural subsurface drainage systems. Visible (VIS), near infrared (NIR), and/or thermal infrared (TIR) imagery obtained by unmanned aircraft systems (UAS) may provide a means for determining drainage pipe locations. Preliminary UAS surveys wit...

  4. Computer Facilitated Mathematical Methods in Chemical Engineering--Similarity Solution

    ERIC Educational Resources Information Center

    Subramanian, Venkat R.

    2006-01-01

    High-performance computers coupled with highly efficient numerical schemes and user-friendly software packages have helped instructors to teach numerical solutions and analysis of various nonlinear models more efficiently in the classroom. One of the main objectives of a model is to provide insight about the system of interest. Analytical…

  5. Proline/pipecolinic acid-promoted copper-catalyzed P-arylation.

    PubMed

    Huang, Cheng; Tang, Xu; Fu, Hua; Jiang, Yuyang; Zhao, Yufen

    2006-06-23

    We have developed a convenient and efficient approach for the P-arylation of organophosphorus compounds containing P-H bonds. Using commercially available and inexpensive proline and pipecolinic acid as the ligands greatly improved the efficiency of the coupling reactions, so the method can provide an entry to arylphosphonates, arylphosphinates, and arylphosphine oxides.

  6. SAGE: The Self-Adaptive Grid Code. 3

    NASA Technical Reports Server (NTRS)

    Davies, Carol B.; Venkatapathy, Ethiraj

    1999-01-01

    The multi-dimensional self-adaptive grid code, SAGE, is an important tool in the field of computational fluid dynamics (CFD). It provides an efficient method to improve the accuracy of flow solutions while simultaneously reducing computer processing time. Briefly, SAGE enhances an initial computational grid by redistributing the mesh points into more appropriate locations. The movement of these points is driven by an equal-error-distribution algorithm that utilizes the relationship between high flow gradients and excessive solution errors. The method also provides a balance between clustering points in the high-gradient regions and maintaining the smoothness and continuity of the adapted grid. The latest version, Version 3, includes the ability to change the boundaries of a given grid to more efficiently enclose flow structures, and provides alternative redistribution algorithms.
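    The equal-error-distribution idea, clustering points where gradients are high while keeping the grid smooth, can be sketched in one dimension; the monitor function and toy solution below are illustrative stand-ins, not SAGE's actual algorithm:

```python
import math

# 1D equidistribution sketch: redistribute mesh points so each cell carries
# roughly equal "monitor" weight, with weight tied to the solution gradient.
def solution(x):
    return math.tanh(20.0 * (x - 0.5))     # toy solution with a sharp layer at x = 0.5

n = 41
xs = [i / (n - 1) for i in range(n)]       # initial uniform grid on [0, 1]
# per-cell monitor weight: 1 + |gradient| (the "1 +" keeps cells from collapsing)
w = [1.0 + abs(solution(xs[i + 1]) - solution(xs[i])) / (xs[i + 1] - xs[i])
     for i in range(n - 1)]
cum = [0.0]                                # cumulative monitor function
for i, wi in enumerate(w):
    cum.append(cum[-1] + wi * (xs[i + 1] - xs[i]))
total = cum[-1]
# place new points at equal increments of the cumulative monitor function
new_xs = []
for k in range(n):
    target = total * k / (n - 1)
    j = max(i for i in range(n) if cum[i] <= target + 1e-12)
    j = min(j, n - 2)
    frac = (target - cum[j]) / (cum[j + 1] - cum[j])
    new_xs.append(xs[j] + frac * (xs[j + 1] - xs[j]))
```

The adapted grid clusters points inside the layer near x = 0.5 while remaining monotone and smooth, which is the 1D analogue of the balance SAGE strikes.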

  7. Method for sealing remote leaks in an enclosure using an aerosol

    DOEpatents

    Modera, Mark P.; Carrie, Francois R.

    1999-01-01

    The invention is a method and device for sealing leaks remotely by injecting a previously prepared aerosol into the enclosure being sealed, according to a particular sealing efficiency defined by the product of a penetration efficiency and a particle deposition efficiency. By using different limits in the relationship between penetration efficiency and flow rate, the same method according to the invention can be used for coating the inside of an enclosure. Specifically, the invention is a method and device for preparing, transporting, and depositing a solid-phase aerosol onto the interior surface of the enclosure, relating particle size, particle carrier flow rate, and pressure differential so that particles deposited there can bridge and substantially seal each leak without providing a substantial coating on inside surfaces of the enclosure other than the leak. The particle size and flow parameters can also be adjusted to coat the interior of the enclosure (duct) without substantial plugging of the leaks, depending on how the particle size and flow-rate relationships are chosen.
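    The sealing efficiency defined above is simply the product of the two component efficiencies; a one-line calculation with hypothetical values:

```python
# Sealing efficiency as defined in the text: the product of a penetration
# efficiency and a particle deposition efficiency (both values hypothetical).
penetration_eff = 0.9    # fraction of injected particles that reach the leak
deposition_eff = 0.8     # fraction of arriving particles that deposit at the leak
sealing_eff = penetration_eff * deposition_eff
```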

  8. Evaluation of a Wipe Surface Sample Method for Collection of Bacillus Spores from Nonporous Surfaces

    PubMed Central

    Brown, Gary S.; Betty, Rita G.; Brockmann, John E.; Lucero, Daniel A.; Souza, Caroline A.; Walsh, Kathryn S.; Boucher, Raymond M.; Tezak, Mathew; Wilson, Mollye C.; Rudolph, Todd

    2007-01-01

    Polyester-rayon blend wipes were evaluated for efficiency of extraction and recovery of powdered Bacillus atrophaeus spores from stainless steel and painted wallboard surfaces. Method limits of detection were also estimated for both surfaces. The observed mean efficiency of polyester-rayon blend wipe recovery from stainless steel was 0.35 with a standard deviation of ±0.12, and for painted wallboard it was 0.29 with a standard deviation of ±0.15. Evaluation of a sonication extraction method for the polyester-rayon blend wipes produced a mean extraction efficiency of 0.93 with a standard deviation of ±0.09. Wipe recovery quantitative limits of detection were estimated at 90 CFU per unit of stainless steel sample area and 105 CFU per unit of painted wallboard sample area. The method recovery efficiency and limits of detection established in this work provide useful guidance for the planning of incident response environmental sampling following the release of a biological agent such as Bacillus anthracis. PMID:17122390

  9. Evaluation of a wipe surface sample method for collection of Bacillus spores from nonporous surfaces.

    PubMed

    Brown, Gary S; Betty, Rita G; Brockmann, John E; Lucero, Daniel A; Souza, Caroline A; Walsh, Kathryn S; Boucher, Raymond M; Tezak, Mathew; Wilson, Mollye C; Rudolph, Todd

    2007-02-01

    Polyester-rayon blend wipes were evaluated for efficiency of extraction and recovery of powdered Bacillus atrophaeus spores from stainless steel and painted wallboard surfaces. Method limits of detection were also estimated for both surfaces. The observed mean efficiency of polyester-rayon blend wipe recovery from stainless steel was 0.35 with a standard deviation of +/-0.12, and for painted wallboard it was 0.29 with a standard deviation of +/-0.15. Evaluation of a sonication extraction method for the polyester-rayon blend wipes produced a mean extraction efficiency of 0.93 with a standard deviation of +/-0.09. Wipe recovery quantitative limits of detection were estimated at 90 CFU per unit of stainless steel sample area and 105 CFU per unit of painted wallboard sample area. The method recovery efficiency and limits of detection established in this work provide useful guidance for the planning of incident response environmental sampling following the release of a biological agent such as Bacillus anthracis.

  10. A Graphical Method for Estimation of Barometric Efficiency from Continuous Data - Concepts and Application to a Site in the Piedmont, Air Force Plant 6, Marietta, Georgia

    USGS Publications Warehouse

    Gonthier, Gerard

    2007-01-01

    A graphical method that uses continuous water-level and barometric-pressure data was developed to estimate barometric efficiency. A plot of nearly continuous water level (on the y-axis), as a function of nearly continuous barometric pressure (on the x-axis), will plot as a line curved into a series of connected elliptical loops. Each loop represents a barometric-pressure fluctuation. The negative of the slope of the major axis of an elliptical loop will be the ratio of water-level change to barometric-pressure change, which is the sum of the barometric efficiency plus the error. The negative of the slope of the preferred orientation of many elliptical loops is an estimate of the barometric efficiency. The slope of the preferred orientation of many elliptical loops is approximately the median of the slopes of the major axes of the elliptical loops. If water-level change that is not caused by barometric-pressure change does not correlate with barometric-pressure change, the probability that the error will be greater than zero will be the same as the probability that it will be less than zero. As a result, the negative of the median of the slopes for many loops will be close to the barometric efficiency. The graphical method provided a rapid assessment of whether a well was affected by barometric-pressure change and also provided a rapid estimate of barometric efficiency. The graphical method was used to assess which wells at Air Force Plant 6, Marietta, Georgia, had water levels affected by barometric-pressure changes during a 2003 constant-discharge aquifer test. The graphical method was also used to estimate barometric efficiency. Barometric-efficiency estimates from the graphical method were compared to those of four other methods: average of ratios, median of ratios, Clark, and slope. 
    The two methods (the graphical and median-of-ratios methods) that used the median values of water-level change divided by barometric-pressure change appeared to be most resistant to error caused by barometric-pressure-independent water-level change. The graphical method was particularly resistant to large amounts of barometric-pressure-independent water-level change, having an average and standard deviation of error for control wells that were less than one-quarter those of the other four methods. When using the graphical method, it is advisable that more than one person select the slope, or that the same person fit the same data several times, to minimize the effect of subjectivity. Also, a long study period should be used (at least 60 days) to ensure that loops affected by large amounts of barometric-pressure-independent water-level change do not significantly contribute to error in the barometric-efficiency estimate.
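
    The geometry behind the graphical method lends itself to a compact numerical sketch. In the snippet below (hypothetical code, not from the report), the preferred orientation of the elliptical loops is approximated by the first principal component of the centered pressure/water-level cloud, and the barometric efficiency is taken as the negative of its slope:

    ```python
    import numpy as np

    def barometric_efficiency(bp, wl):
        """Negative slope of the principal axis of the (barometric pressure,
        water level) point cloud -- a single-fit stand-in for the by-eye
        orientation of the elliptical loops described in the report."""
        x = bp - bp.mean()
        y = wl - wl.mean()
        cov = np.cov(x, y)
        evals, evecs = np.linalg.eigh(cov)
        major = evecs[:, np.argmax(evals)]    # direction of largest variance
        return -major[1] / major[0]

    # synthetic check: water level responds to pressure with efficiency 0.6
    rng = np.random.default_rng(0)
    t = np.linspace(0, 60, 2000)              # 60 days of records
    bp = np.sin(2 * np.pi * t) + 0.3 * np.sin(2 * np.pi * t / 7)
    wl = -0.6 * bp + 0.01 * rng.standard_normal(t.size)
    print(round(barometric_efficiency(bp, wl), 2))   # close to 0.6
    ```

    The median-of-slopes refinement described in the abstract would replace this single principal-component fit with a per-loop fit followed by the median of the resulting slopes.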

  11. Optimization design of hydroturbine rotors according to the efficiency-strength criteria

    NASA Astrophysics Data System (ADS)

    Bannikov, D. V.; Yesipov, D. V.; Cherny, S. G.; Chirkov, D. V.

    2010-12-01

    The hydroturbine runner design procedure [1] is made more efficient by methods for calculating the head loss in the entire flow-through part of the turbine and the deformation state of the blade. Energy losses are found by modelling the spatial turbulent flow and by engineering semi-empirical formulae. The deformation state is determined by solving the linear elasticity problem for the isolated blade under hydrodynamic pressure using the boundary element method. With the proposed system, the problem of designing a turbine runner with a capacity of 640 MW that provides a preset dependence of efficiency on the turbine operating mode (the efficiency criterion) is solved. The arising stresses do not exceed the critical value (the strength criterion).

  12. Protein immobilization onto various surfaces using a polymer-bound isocyanate

    NASA Astrophysics Data System (ADS)

    Kang, Hyun-Jin; Cha, Eun Ji; Park, Hee-Deung

    2015-01-01

    Silane coupling agents have been widely used for immobilizing proteins onto inorganic surfaces. However, the immobilization method using silane coupling agents requires several treatment steps, and its application is limited to surfaces containing hydroxyl groups. The aim of this study was to develop a novel method that overcomes the limitations of silane-based immobilization by using a polymer-bound isocyanate. Initially, the polymer-bound isocyanate was dissolved in organic solvent and used to dip-coat inorganic surfaces. Proteins were then immobilized onto the dip-coated surfaces through the formation of urea bonds between the isocyanate groups of the polymer and the amine groups of the protein. The reaction was verified by FT-IR, in which the N=C=O stretching peaks disappeared and C=O and N-H stretching peaks appeared after immobilization. The immobilization efficiency of the newly developed method was insensitive to reaction temperature (4-50 °C), but increased with reaction time, reaching a maximum after 4 h. Furthermore, the method showed immobilization efficiency comparable to that of the silane-based method and was applicable to surfaces that cannot form hydroxyl groups. Taken together, the newly developed method provides a simple and efficient platform for immobilizing proteins onto surfaces.

  13. A superhydrophobic cone to facilitate the xenomonitoring of filarial parasites, malaria, and trypanosomes using mosquito excreta/feces.

    PubMed

    Cook, Darren A N; Pilotte, Nils; Minetti, Corrado; Williams, Steven A; Reimer, Lisa J

    2017-11-06

    Background: Molecular xenomonitoring (MX), the testing of insect vectors for the presence of human pathogens, has the potential to provide a non-invasive and cost-effective method for monitoring the prevalence of disease within a community. Current MX methods require the capture and processing of large numbers of mosquitoes, particularly in areas of low endemicity, increasing the time, cost and labour required. Screening the excreta/feces (E/F) released from mosquitoes, rather than whole carcasses, improves the throughput by removing the need to discriminate vector species, since non-vectors release ingested pathogens in E/F. It also enables larger numbers of mosquitoes to be processed per pool. However, this new screening approach requires a method of efficiently collecting E/F. Methods: We developed a cone with a superhydrophobic surface to allow for the efficient collection of E/F. Using mosquitoes exposed to either Plasmodium falciparum, Brugia malayi, or Trypanosoma brucei brucei, we tested the performance of the superhydrophobic cone alongside two other collection methods. Results: All collection methods enabled the detection of DNA from the three parasites. Using the superhydrophobic cone to deposit E/F into a small tube provided the highest number of positive samples (16 out of 18) and facilitated detection of parasite DNA in E/F from individual mosquitoes. Further tests showed that following a simple washing step, the cone can be reused multiple times, further improving its cost-effectiveness. Conclusions: Incorporating the superhydrophobic cone into mosquito traps or holding containers could provide a simple and efficient method for collecting E/F. Where this is not possible, swabbing the container or using the washing method facilitates the detection of the three parasites used in this study.

  14. Carboxylated multiwalled carbon nanotubes/polydimethylsiloxane, a new coating for 96-blade solid-phase microextraction for determination of phenolic compounds in water.

    PubMed

    Kueseng, Pamornrat; Pawliszyn, Janusz

    2013-11-22

    A new thin-film carboxylated multiwalled carbon nanotubes/polydimethylsiloxane (MWCNTs-COOH/PDMS) coating was developed for a 96-blade solid-phase microextraction (SPME) system, followed by high-performance liquid chromatography with ultraviolet detection (HPLC-UV). The method provided good extraction efficiency (64-90%) at three spiked levels, with relative standard deviations (RSD) ≤ 6% and detection limits between 1 and 2 μg/L for three phenolic compounds. The MWCNTs-COOH/PDMS 96-blade SPME system presents advantages over traditional methods due to its simplicity of use, easy coating preparation, low cost and high sample throughput (2.1 min per sample). The developed coating is reusable for a minimum of 110 extractions with good extraction efficiency, and provided higher extraction efficiency (3-8 times greater) than pure PDMS coatings. Copyright © 2013 Elsevier B.V. All rights reserved.

  15. Massively parallel multicanonical simulations

    NASA Astrophysics Data System (ADS)

    Gross, Jonathan; Zierenberg, Johannes; Weigel, Martin; Janke, Wolfhard

    2018-03-01

    Generalized-ensemble Monte Carlo simulations such as the multicanonical method and similar techniques are among the most efficient approaches for simulations of systems undergoing discontinuous phase transitions or with rugged free-energy landscapes. As Markov chain methods, however, they are inherently serial. It was demonstrated recently that a combination of independent simulations that communicate weight updates at variable intervals allows for the efficient utilization of parallel computational resources for multicanonical simulations. Implementing this approach for the many-thread architecture provided by current generations of graphics processing units (GPUs), we show how it can be efficiently employed with on the order of 10⁴ parallel walkers and beyond, thus constituting a versatile tool for Monte Carlo simulations in the era of massively parallel computing. We provide the fully documented source code for the approach applied to the paradigmatic example of the two-dimensional Ising model as a starting point and reference for practitioners in the field.
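
    The parallel scheme described above (independent walkers sharing merged weight updates) can be illustrated with a toy sketch. The code below is a deliberately miniature version on a one-dimensional double-well energy landscape, not the authors' GPU implementation; the function name and all parameters are invented for illustration:

    ```python
    import numpy as np

    def muca_double_well(walkers=8, iterations=8, steps=1500, nbins=50, seed=0):
        """Toy parallel multicanonical sampling of E(x) = (x^2 - 1)^2 on
        [-2, 2]: independent walkers use a shared weight ln W(E); after each
        iteration their histograms are merged and the weights are updated so
        that the sampled energy histogram flattens."""
        rng = np.random.default_rng(seed)
        E = lambda x: (x * x - 1.0) ** 2            # energies lie in [0, 9]
        def ebin(e):                                 # energy -> histogram bin
            return np.minimum((e / 9.0 * nbins).astype(int), nbins - 1)
        lnW = np.zeros(nbins)                        # shared multicanonical weights
        x = rng.uniform(-2.0, 2.0, size=walkers)     # independent walkers
        for _ in range(iterations):
            H = np.zeros(nbins)
            for _ in range(steps):
                xn = np.clip(x + rng.normal(0.0, 0.3, size=walkers), -2.0, 2.0)
                dln = lnW[ebin(E(xn))] - lnW[ebin(E(x))]
                accept = np.log(rng.random(walkers)) < dln   # Metropolis step
                x = np.where(accept, xn, x)
                np.add.at(H, ebin(E(x)), 1)
            lnW[H > 0] -= np.log(H[H > 0])           # merged weight update
            lnW -= lnW.max()                         # normalize
        return H, lnW
    ```

    After a few weight iterations the merged histogram covers the full energy range, including the barrier region that canonical sampling crosses only rarely.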

  16. Composite SAR imaging using sequential joint sparsity

    NASA Astrophysics Data System (ADS)

    Sanders, Toby; Gelb, Anne; Platte, Rodrigo B.

    2017-06-01

    This paper investigates accurate and efficient ℓ1 regularization methods for generating synthetic aperture radar (SAR) images. Although ℓ1 regularization algorithms are already employed in SAR imaging, practical and efficient implementation for real-time imaging remains a challenge. Here we demonstrate that fast numerical operators can be used to robustly implement ℓ1 regularization methods that are as efficient as, or more efficient than, traditional approaches such as back projection, while providing superior image quality. In particular, we develop a sequential joint sparsity model for composite SAR imaging which naturally combines the joint sparsity methodology with composite SAR. Our technique, which can be implemented using standard, fractional, or higher order total variation regularization, is able to reduce the effects of speckle and other noisy artifacts with little additional computational cost. Finally, we show that generalizing total variation regularization to non-integer and higher orders provides improved flexibility and robustness for SAR imaging.
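
    As a concrete (and purely illustrative) example of the kind of ℓ1 regularization solver involved, the sketch below implements plain iterative soft-thresholding (ISTA) for min_x 0.5*||Ax - b||^2 + lam*||x||_1, with a random matrix standing in for the actual SAR imaging operator:

    ```python
    import numpy as np

    def ista(A, b, lam=0.02, iters=1000):
        """Iterative soft-thresholding for the l1-regularized least squares
        problem min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            z = x - A.T @ (A @ x - b) / L      # gradient step on the smooth term
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
        return x

    # toy sparse-recovery problem (random operator, not a real SAR model)
    rng = np.random.default_rng(3)
    A = rng.standard_normal((80, 200)) / np.sqrt(80)
    x_true = np.zeros(200)
    x_true[[5, 50, 120]] = [1.0, -2.0, 1.5]
    b = A @ x_true + 0.01 * rng.standard_normal(80)
    x_hat = ista(A, b)
    ```

    In imaging applications, the dense matrix products are replaced by the fast numerical operators mentioned above, and the ℓ1 penalty acts on (possibly fractional-order) finite differences rather than on the pixels themselves.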

  17. Palladium-Catalyzed Nitromethylation of Aryl Halides: An Orthogonal Formylation Equivalent

    PubMed Central

    Walvoord, Ryan R.; Berritt, Simon; Kozlowski, Marisa C.

    2012-01-01

    An efficient cross-coupling reaction of aryl halides and nitromethane was developed with the use of parallel microscale experimentation. The arylnitromethane products are precursors for numerous useful synthetic products. An efficient method for their direct conversion to the corresponding oximes and aldehydes in a one-pot operation has been discovered. The process exploits inexpensive nitromethane as a carbonyl equivalent, providing a mild and convenient formylation method that is compatible with many functional groups. PMID:22839593

  18. Experimental measurements of a prototype high-concentration Fresnel lens and sun-tracking method for photovoltaic panel's efficiency enhancement

    NASA Astrophysics Data System (ADS)

    Rajaee, Meraj; Ghorashi, Seyed Mohamad Bagher

    2015-08-01

    Concentrator photovoltaic modules are a promising technology for highly efficient solar energy conversion. Such systems offer several advantages owing to the additional degrees of freedom provided by spectral separation, such as cost and mass reduction and an increase in the incident solar flux on the PV cells. This paper proposes a photovoltaic solar cell system consisting of a semi-Fresnel lens concentrating structure and a novel two-axis sun-tracking module that enhances solar cell efficiency while using less cell area and reducing energy losses. The grooves of the lens are calculated from the refraction and convergence angles of the light at perpendicular incidence. Because the update time interval during tracking causes misalignment of the lens' optical axis with the sun rays, an inventive sun-tracking method is introduced to adjust the module so that the incident rays are always perpendicular to the module's surface. As a result, all rays are refracted at the predetermined angles; the focal area is reduced and smaller cells can be used. Different module connections are also discussed as a means of compensating for losses in networks and power systems. Experimental results show that using the semi-Fresnel lens along with the sun-tracking method increases the efficiency of the PV panel.

  19. DNA identification of human remains in Disaster Victim Identification (DVI): An efficient sampling method for muscle, bone, bone marrow and teeth.

    PubMed

    de Boer, Hans H; Maat, George J R; Kadarmo, D Aji; Widodo, Putut T; Kloosterman, Ate D; Kal, Arnoud J

    2018-06-04

    In disaster victim identification (DVI), DNA profiling is considered to be one of the most reliable and efficient means to identify bodies or separated body parts. This requires a post mortem DNA sample and an ante mortem DNA sample of the presumed victim or their biological relative(s). Usually the collection of an adequate ante mortem sample is technically simple, but the acquisition of a good-quality post mortem sample under unfavourable DVI circumstances is complicated by the variable degree of preservation of the human remains and the high risk of DNA (cross-)contamination. This paper provides the community with an efficient method to collect post mortem DNA samples from muscle, bone, bone marrow and teeth, with a minimal risk of contamination. Our method has been applied in a recent, challenging DVI operation (the identification of the 298 victims of the MH17 airplane crash in 2014), in which 98.2% of the collected post mortem samples provided the DVI team with highly informative DNA genotyping results, without the risk of contamination and consequent mistyping of the victim's DNA. Moreover, the method is easy, cheap and quick. This paper provides the DVI community with step-wise instructions, with recommendations for the type of tissue to be sampled and the site of excision (preferably the upper leg). Although initially designed for DVI purposes, the method is also suited for the identification of individual victims. Copyright © 2018 Elsevier B.V. All rights reserved.

  20. Mild extraction methods using aqueous glucose solution for the analysis of natural dyes in textile artefacts dyed with Dyer's madder (Rubia tinctorum L.).

    PubMed

    Ford, Lauren; Henderson, Robert L; Rayner, Christopher M; Blackburn, Richard S

    2017-03-03

    Madder (Rubia tinctorum L.) has been widely used as a red dye throughout history. Acid-sensitive colorants present in madder, such as glycosides (lucidin primeveroside, ruberythric acid, galiosin) and sensitive aglycons (lucidin), are degraded in the textile back extraction process; in previous literature these sensitive molecules are either absent or present in only low concentrations due to the use of acid in typical textile back extraction processes. The anthraquinone aglycons alizarin and purpurin are usually identified in analysis following harsh back extraction methods, such as those using solvent mixtures with concentrated hydrochloric acid at high temperatures. Use of softer extraction techniques potentially allows dye components present in madder to be extracted without degradation, which can provide more information about the original dye profile; this profile varies significantly between madder varieties, species and dyeing techniques. Herein, a softer extraction method involving aqueous glucose solution was developed and compared to other back extraction techniques on wool dyed with root extract from different varieties of Rubia tinctorum. Efficiencies of the extraction methods were analysed by HPLC coupled with diode array detection. Acidic literature methods were evaluated, and they generally caused hydrolysis and degradation of the dye components, with alizarin, lucidin, and purpurin being the main compounds extracted. In contrast, extraction in aqueous glucose solution provides a highly effective method for extraction of madder-dyed wool: it efficiently extracts lucidin primeveroside and ruberythric acid without causing hydrolysis and also extracts the aglycons that are present due to hydrolysis during processing of the plant material. Glucose solution is a favourable extraction medium due to its ability to form extensive hydrogen bonds with the glycosides present in madder and displace them from the fibre.
This new glucose method offers an efficient process that preserves these sensitive molecules and is a step-change in analysis of madder dyed textiles as it can provide further information about historical dye preparation and dyeing processes that current methods cannot. The method also efficiently extracts glycosides in artificially aged samples, making it applicable for museum textile artefacts. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. 77 FR 39895 - New Analytic Methods and Sampling Procedures for the United States National Residue Program for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-06

    ... Analytic Methods and Sampling Procedures for the United States National Residue Program for Meat, Poultry... implementing several multi-residue methods for analyzing samples of meat, poultry, and egg products for animal.... These modern, high-efficiency methods will conserve resources and provide useful and reliable results...

  2. Coincidence and coherent data analysis methods for gravitational wave bursts in a network of interferometric detectors

    NASA Astrophysics Data System (ADS)

    Arnaud, Nicolas; Barsuglia, Matteo; Bizouard, Marie-Anne; Brisson, Violette; Cavalier, Fabien; Davier, Michel; Hello, Patrice; Kreckelbergh, Stephane; Porter, Edward K.

    2003-11-01

    Network data analysis methods are the only way to properly separate real gravitational wave (GW) transient events from detector noise. They can be divided into two generic classes: the coincidence method and the coherent analysis. The former uses lists of selected events provided by each interferometer belonging to the network and tries to correlate them in time to identify a physical signal. Instead of this binary treatment of detector outputs (signal present or absent), the latter method involves first merging the interferometer data and then looking for a common pattern, consistent with an assumed GW waveform and a given source location in the sky. Thresholds are only applied later, to validate or reject the hypothesis made. As coherent algorithms use more complete information than coincidence methods, they are expected to provide better detection performance, but at a higher computational cost. An efficient filter must yield a good compromise between a low false alarm rate (hence triggering on data at a manageable rate) and a high detection efficiency. Therefore, the comparison of the two approaches is achieved using so-called receiver operating characteristics (ROC), giving the relationship between the false alarm rate and the detection efficiency for a given method. This paper investigates this question via Monte Carlo simulations, using the network model developed in a previous article. Its main conclusions are the following. First, a three-interferometer network such as Virgo-LIGO is found to be too small to reach good detection efficiencies at low false alarm rates: larger configurations are needed to reach a confidence level high enough to validate a detected event as a true GW. 
In addition, an efficient network must contain interferometers with comparable sensitivities: studying the three-interferometer LIGO network shows that the 2-km interferometer, with half the sensitivity, leads to a strong reduction in performance compared to a network of three interferometers with full sensitivity. Finally, it is shown that coherent analyses are feasible for burst searches and are clearly more efficient than coincidence strategies. Therefore, developing such methods should be an important goal of a worldwide collaborative data analysis.
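
    An ROC comparison of this kind reduces to sweeping a detection threshold over the statistics produced by each method. The sketch below uses Gaussian toy statistics; the higher mean assigned to the coherent method is an assumption made purely for illustration, not a result from the paper:

    ```python
    import numpy as np

    def roc_curve(noise_stats, signal_stats, thresholds):
        """False-alarm rate and detection efficiency over a threshold sweep."""
        far = np.array([(noise_stats > t).mean() for t in thresholds])
        eff = np.array([(signal_stats > t).mean() for t in thresholds])
        return far, eff

    rng = np.random.default_rng(7)
    noise = rng.normal(0, 1, 100_000)   # detection statistic, noise only
    coinc = rng.normal(2, 1, 100_000)   # toy coincidence-method statistic
    coher = rng.normal(3, 1, 100_000)   # toy coherent statistic (assumed higher SNR)
    thr = np.linspace(-2, 8, 200)
    far_a, eff_a = roc_curve(noise, coinc, thr)
    far_b, eff_b = roc_curve(noise, coher, thr)
    ```

    Plotting efficiency against false-alarm rate for both methods then shows which one dominates at the low false-alarm rates relevant for detection claims.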

  3. General Catalytic Enantioselective Access to Monohalomethyl and Trifluoromethyl Cyclopropanes.

    PubMed

    Huang, Wei-Sheng; Schlinquer, Claire; Poisson, Thomas; Pannecoucke, Xavier; Charette, André B; Jubault, Philippe

    2018-05-29

    An efficient catalytic enantioselective route to chiral functionalized trifluoromethyl cyclopropanes from two classes of diazo compounds and alpha-trifluoromethyl styrenes, using Rh2((S)-BTPCP)4 as the catalyst, is described. This method provides an efficient and practical strategy for the synthesis of highly functionalized CF3-cyclopropanes with excellent diastereoselectivities (up to 20:1) and enantioselectivities (up to 99% ee). To date, the depicted methodology represents the most efficient catalytic enantioselective method to access highly decorated chiral CF3-cyclopropanes. Extension to chiral monohalomethyl cyclopropanes in high ee is also reported. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Structural reliability analysis under evidence theory using the active learning kriging model

    NASA Astrophysics Data System (ADS)

    Yang, Xufeng; Liu, Yongshou; Ma, Panke

    2017-11-01

    Structural reliability analysis under evidence theory is investigated. It is rigorously proved that a surrogate model providing only correct sign prediction of the performance function can meet the accuracy requirement of evidence-theory-based reliability analysis. Accordingly, a method based on an active learning kriging model that only needs to predict the sign of the performance function correctly is proposed. Interval Monte Carlo simulation and a modified optimization method based on the Karush-Kuhn-Tucker conditions are introduced to make the method more efficient in estimating the bounds of the failure probability from the kriging model. Four examples are investigated to demonstrate the efficiency and accuracy of the proposed method.
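
    For intuition, the bounds on the failure probability under evidence theory are the belief and plausibility of the failure event accumulated over the focal elements. The sketch below is a hypothetical one-variable example in which endpoint checks stand in for the paper's KKT-based optimization and kriging surrogate:

    ```python
    def failure_bounds(focal_elements, g):
        """Belief/plausibility bounds on the failure event {g(x) < 0} for a
        single evidence variable. Each focal element is ((a, b), mass); g is
        assumed monotone on each interval, so its extremes sit at the
        endpoints (a simplification of the general optimization step)."""
        bel = pl = 0.0
        for (a, b), m in focal_elements:
            gmin, gmax = min(g(a), g(b)), max(g(a), g(b))
            if gmax < 0:      # interval lies entirely in the failure domain
                bel += m
            if gmin < 0:      # interval intersects the failure domain
                pl += m
        return bel, pl

    # focal intervals with basic probability masses (hypothetical numbers)
    fe = [((0.0, 1.0), 0.5), ((1.0, 2.0), 0.3), ((2.0, 3.0), 0.2)]
    bel, pl = failure_bounds(fe, lambda x: x - 1.5)   # failure when x < 1.5
    ```

    Here the failure probability is bounded between bel = 0.5 and pl = 0.8; in the paper, the surrogate model only has to get the sign of g right on each focal element for these bounds to be exact.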

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balsa Terzic, Gabriele Bassi

    In this paper we discuss representations of charged particle densities in particle-in-cell (PIC) simulations, analyze the sources and profiles of the intrinsic numerical noise, and present efficient methods for their removal. We devise two alternative estimation methods for the charged particle distribution which represent a significant improvement over the Monte Carlo cosine expansion used in the 2d code of Bassi, designed to simulate coherent synchrotron radiation (CSR) in charged particle beams. The improvement is achieved by employing an alternative beam density estimation to the Monte Carlo cosine expansion. The representation is first binned onto a finite grid, after which two grid-based methods are employed to approximate particle distributions: (i) truncated fast cosine transform (TFCT); and (ii) thresholded wavelet transform (TWT). We demonstrate that these alternative methods represent a staggering upgrade over the original Monte Carlo cosine expansion in terms of efficiency, while the TWT approximation also provides an appreciable improvement in accuracy. The improvement in accuracy comes from a judicious removal of the numerical noise enabled by the wavelet formulation. The TWT method is then integrated into Bassi's CSR code and benchmarked against the original version. We show that the new density estimation method provides superior performance in terms of efficiency and spatial resolution, thus enabling high-fidelity simulations of CSR effects, including the microbunching instability.
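
    A minimal illustration of the truncated-cosine-transform idea (a sketch, not the authors' code; the grid size, sample count and `keep` parameter are arbitrary choices) bins samples onto a grid, expands the histogram in an orthonormal cosine basis, and retains only the largest coefficients to suppress sampling noise:

    ```python
    import numpy as np

    def cosine_basis(n):
        # orthonormal DCT-II basis matrix (columns are basis vectors)
        i = np.arange(n)
        B = np.cos(np.pi * (i[:, None] + 0.5) * i[None, :] / n)
        B[:, 0] *= np.sqrt(1.0 / n)
        B[:, 1:] *= np.sqrt(2.0 / n)
        return B

    def thresholded_density(samples, n=64, keep=10):
        """Binned density estimate smoothed by truncating its cosine expansion."""
        raw, _ = np.histogram(samples, bins=n, range=(0.0, 1.0), density=True)
        B = cosine_basis(n)
        c = B.T @ raw                            # cosine coefficients of the histogram
        c[np.argsort(np.abs(c))[:-keep]] = 0.0   # keep only the 'keep' largest
        return raw, B @ c

    # particles drawn from a Beta(2, 2) density, i.e. 6x(1 - x) on [0, 1]
    rng = np.random.default_rng(5)
    raw, smooth = thresholded_density(rng.beta(2.0, 2.0, 50_000))
    centers = (np.arange(64) + 0.5) / 64
    true = 6 * centers * (1 - centers)
    ```

    Because most of the Monte Carlo noise lives in the discarded high-order coefficients, the truncated reconstruction tracks the underlying smooth density more closely than the raw histogram does.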

  6. Comparative study on antibody immobilization strategies for efficient circulating tumor cell capture.

    PubMed

    Ates, Hatice Ceren; Ozgur, Ebru; Kulah, Haluk

    2018-03-23

    Methods for isolation and quantification of circulating tumor cells (CTCs) are attracting more attention every day, as the data for their unprecedented clinical utility continue to grow. However, the challenge is that CTCs are extremely rare (as low as 1 in a billion blood cells), and a highly sensitive and specific technology is required to isolate CTCs from blood cells. Methods utilizing microfluidic systems for immunoaffinity-based CTC capture are preferred, especially when purity is the prime requirement. However, the antibody immobilization strategy significantly affects the efficiency of such systems. In this study, two covalent and two bioaffinity antibody immobilization methods were assessed with respect to their CTC capture efficiency and selectivity, using an anti-epithelial cell adhesion molecule (EpCAM) antibody as the capture antibody. Surface functionalization was realized on plain SiO2 surfaces, as well as in microfluidic channels. Surfaces functionalized with the different antibody immobilization methods were physically and chemically characterized at each step of functionalization. MCF-7 breast cancer and CCRF-CEM acute lymphoblastic leukemia cell lines were used as EpCAM-positive and EpCAM-negative cell models, respectively, to assess CTC capture efficiency and selectivity. Comparisons reveal that bioaffinity-based antibody immobilization involving streptavidin attachment with a glutaraldehyde linker gave the highest cell capture efficiency. On the other hand, a covalent antibody immobilization method involving direct antibody binding by N-(3-dimethylaminopropyl)-N'-ethylcarbodiimide hydrochloride (EDC)-N-hydroxysuccinimide (NHS) reaction was found to be more time- and cost-efficient with a similar cell capture efficiency. All methods provided very high selectivity for CTCs with EpCAM expression. It was also demonstrated that antibody immobilization via the EDC-NHS reaction in a microfluidic channel leads to high capture efficiency and selectivity.

  7. Three-dimensional implicit lambda methods

    NASA Technical Reports Server (NTRS)

    Napolitano, M.; Dadone, A.

    1983-01-01

    This paper derives the three-dimensional lambda-formulation equations for a general orthogonal curvilinear coordinate system and provides various block-explicit and block-implicit methods for solving them numerically. Three model problems, characterized by subsonic, supersonic and transonic flow conditions, are used to assess the reliability and compare the efficiency of the proposed methods.

  8. Measuring landscape esthetics: the scenic beauty estimation method

    Treesearch

    Terry C. Daniel; Ron S. Boster

    1976-01-01

    The Scenic Beauty Estimation Method (SBE) provides quantitative measures of esthetic preferences for alternative wildland management systems. Extensive experimentation and testing with user, interest, and professional groups validated the method. SBE shows promise as an efficient and objective means for assessing the scenic beauty of public forests and wildlands, and...

  9. Manipulation of a quasi-natural cell block for high-efficiency transplantation of adherent somatic cells

    PubMed Central

    Chung, H.J.; Hassan, M.M.; Park, J.O.; Kim, H.J.; Hong, S.T.

    2015-01-01

    Recent advances have raised hope that transplantation of adherent somatic cells could provide dramatic new therapies for various diseases. However, current methods for transplanting adherent somatic cells are not efficient enough for therapeutic applications. Here, we report the development of a novel method to generate quasi-natural cell blocks for high-efficiency transplantation of adherent somatic cells. The blocks were created by providing a unique environment in which cultured cells generated their own extracellular matrix. Initially, stromal cells isolated from mice were expanded in vitro in liquid cell culture medium followed by transferring the cells into a hydrogel shell. After incubation for 1 day with mechanical agitation, the encapsulated cell mass was perforated with a thin needle and then incubated for an additional 6 days to form a quasi-natural cell block. Allograft transplantation of the cell block into C57BL/6 mice resulted in perfect adaptation of the allograft and complete integration into the tissue of the recipient. This method could be widely applied for repairing damaged cells or tissues, stem cell transplantation, ex vivo gene therapy, or plastic surgery. PMID:25742639

  10. A High Order, Locally-Adaptive Method for the Navier-Stokes Equations

    NASA Astrophysics Data System (ADS)

    Chan, Daniel

    1998-11-01

    I have extended the FOSLS method of Cai, Manteuffel and McCormick (1997) and implemented it within the framework of a spectral element formulation using the Legendre polynomial basis function. The FOSLS method solves the Navier-Stokes equations as a system of coupled first-order equations and provides the ellipticity that is needed for fast iterative matrix solvers like multigrid to operate efficiently. Each element is treated as an object and its properties are self-contained. Only C^0 continuity is imposed across element interfaces; this design allows local grid refinement and coarsening without the burden of an elaborate data structure, since only information along element boundaries is needed. With the FORTRAN 90 programming environment, I can maintain high computational efficiency by employing a hybrid parallel processing model: OpenMP directives provide loop-level parallelism executed on a shared-memory SMP, and the MPI protocol allows the distribution of elements to a cluster of SMPs connected via a commodity network. This talk will provide timing results and a comparison with a second-order finite difference method.

  11. Energy Efficient Homes and Small Buildings. Vocational Education, Industrial Arts Curriculum Guide. Bulletin 1698.

    ERIC Educational Resources Information Center

    Louisiana State Dept. of Education, Baton Rouge. Div. of Vocational Education.

    This curriculum guide provides high school carpentry, construction, or drafting course teachers with material related to retrofitting a building for energy conservation. Section 1 discusses how design and construction methods affect energy use. Section 2 focuses on care and maintenance of energy efficient buildings. In addition to informative…

  12. Impact-Based Training Evaluation Model (IBTEM) for School Supervisors in Indonesia

    ERIC Educational Resources Information Center

    Sutarto; Usman, Husaini; Jaedun, Amat

    2016-01-01

    This article represents a study aiming at developing: (1) an IBTEM which is capable to promote partnership between training providers and their client institutions, easy to understand, effective, efficient; and (2) an IBTEM implementation guide which is comprehensive, coherent, easy to understand, effective, and efficient. The method used in the…

  13. A novel way to establish fertilization recommendations based on agronomic efficiency and a sustainable yield index for rice crops.

    PubMed

    Liu, Chuang; Liu, Yi; Li, Zhiguo; Zhang, Guoshi; Chen, Fang

    2017-04-24

    A simpler approach for establishing fertilizer recommendations for major crops is urgently required to improve the application efficiency of commercial fertilizers in China. To address this need, we developed a method based on field data drawn from the China Program of the International Plant Nutrition Institute (IPNI) rice experiments and investigations carried out in southeastern China from 2001 to 2012. Our results show that, using agronomic efficiencies and a sustainable yield index (SYI), this new method for establishing fertilizer recommendations robustly estimated the mean rice yield (7.6 t/ha) and mean nutrient supply capacities (186, 60, and 96 kg/ha of N, P2O5, and K2O, respectively) of fertilizers in the study region. In addition, there were significant differences in rice yield response, economic cost/benefit ratio, and nutrient-use efficiencies associated with agronomic efficiencies ranked as high, medium, and low. Thus, ranking agronomic efficiency could strengthen linear models relating rice yields and SYI. Our results also indicate that the new method provides better recommendations in terms of rice yield, SYI, and profitability than previous methods. Hence, we believe it is an effective approach for improving recommended applications of commercial fertilizers to rice (and potentially other crops).
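    Agronomic efficiency, the quantity the method ranks, is conventionally defined as the extra yield obtained per unit of nutrient applied. A minimal calculation with hypothetical plot values (the numbers below are illustrative, not from the IPNI data set):

```python
def agronomic_efficiency(yield_fertilized_kg_ha, yield_control_kg_ha, nutrient_kg_ha):
    """Agronomic efficiency: extra grain produced per kg of nutrient applied."""
    return (yield_fertilized_kg_ha - yield_control_kg_ha) / nutrient_kg_ha

# Hypothetical fertilized vs. unfertilized plot yields and an N rate of 186 kg/ha
ae_n = agronomic_efficiency(7600.0, 5400.0, 186.0)  # kg grain per kg N
```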

  14. Incentives and provider payment methods.

    PubMed

    Barnum, H; Kutzin, J; Saxenian, H

    1995-01-01

    The mode of payment creates powerful incentives affecting provider behavior and the efficiency, equity and quality outcomes of health finance reforms. This article examines provider incentives as well as administrative costs, and institutional conditions for successful implementation associated with provider payment alternatives. The alternatives considered are budget reforms, capitation, fee-for-service, and case-based reimbursement. We conclude that competition, whether through a regulated private sector or within a public system, has the potential to improve the performance of any payment method. All methods generate both adverse and beneficial incentives. Systems with mixed forms of provider payment can provide tradeoffs to offset the disadvantages of individual modes. Low-income countries should avoid complex payment systems requiring higher levels of institutional development.

  15. An efficient transport solver for tokamak plasmas

    DOE PAGES

    Park, Jin Myung; Murakami, Masanori; St. John, H. E.; ...

    2017-01-03

    A simple approach to efficiently solve a coupled set of 1-D diffusion-type transport equations with a stiff transport model for tokamak plasmas is presented, based on the fourth-order accurate Interpolated Differential Operator scheme along with a nonlinear iteration method derived from a root-finding algorithm. Numerical tests using the Trapped Gyro-Landau-Fluid model show that the presented high-order method provides an accurate transport solution using a small number of grid points with robust nonlinear convergence.
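    The abstract pairs an implicit discretization with a root-finding iteration. A toy analogue of that coupling (not the paper's IDO scheme; the equation, rate constant, and step size are our own stand-ins) is an implicit-Euler step for a stiff scalar decay equation, with each step solved as a bracketed root-finding problem:

```python
from scipy.optimize import brentq

# Stiff scalar ODE du/dt = -k u, advanced by implicit Euler.  Each step's
# unknown u_new is found as the root of the step residual.
k, dt, u = 1000.0, 0.01, 1.0
for _ in range(5):
    residual = lambda u_new, u_old=u: u_new - u_old + dt * k * u_new
    u = brentq(residual, 0.0, u)          # root is bracketed in (0, u_old)

# Implicit Euler has a closed form for this linear problem:
exact = 1.0 / (1.0 + k * dt) ** 5
```

    Implicit treatment is what lets the step size stay large despite the stiffness; the root finder does the nonlinear work that, in the paper, a stiff transport model would require.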

  16. Adaptive Implicit Non-Equilibrium Radiation Diffusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Philip, Bobby; Wang, Zhen; Berrill, Mark A

    2013-01-01

    We describe methods for accurate and efficient long-term time integration of non-equilibrium radiation diffusion systems: implicit time integration for efficient long-term time integration of stiff multiphysics systems, local control theory based step size control to minimize the required global number of time steps while controlling accuracy, dynamic 3D adaptive mesh refinement (AMR) to minimize memory and computational costs, Jacobian-Free Newton-Krylov methods on AMR grids for efficient nonlinear solution, and optimal multilevel preconditioner components that provide level-independent solver convergence.

  17. Design of automated oil sludge treatment unit

    NASA Astrophysics Data System (ADS)

    Chukhareva, N.; Korotchenko, T.; Yurkin, A.

    2015-11-01

    The article provides a feasibility study of contemporary oil sludge treatment methods. The basic parameters of a new resource-efficient oil sludge treatment unit that allows extracting as much oil as possible and disposing of the other components efficiently have been outlined. Based on the calculation results, it has been revealed that, in order to reduce the cost of the treatment unit and the expenses related to sludge disposal, it is essential to apply various combinations of the existing treatment methods.

  18. Efficient forced vibration reanalysis method for rotating electric machines

    NASA Astrophysics Data System (ADS)

    Saito, Akira; Suzuki, Hiromitsu; Kuroishi, Masakatsu; Nakai, Hideo

    2015-01-01

    Rotating electric machines are subject to forced vibration by magnetic force excitation with a wide-band frequency spectrum that depends on the operating conditions. Therefore, when designing electric machines, it is essential to compute the vibration response of the machines at various operating conditions efficiently and accurately. This paper presents an efficient frequency-domain vibration analysis method for electric machines. The method enables efficient re-analysis of the vibration response of electric machines at various operating conditions without the need to re-compute the harmonic response by finite element analyses. The theoretical background of the proposed method is provided, which is based on the modal reduction of the magnetic force excitation by a set of amplitude-modulated standing waves. The method is applied to the forced vibration response of an interior permanent magnet motor at a fixed operating condition. The results computed by the proposed method agree very well with those computed by conventional harmonic response analysis with the FEA. The proposed method is then applied to a spin-up test condition to demonstrate its applicability to various operating conditions. It is observed that the proposed method can successfully be applied to the spin-up test conditions, and the measured dominant frequency peaks in the frequency response are well captured by the proposed approach.

  19. Method and apparatus for dissociating metals from metal compounds extracted into supercritical fluids

    DOEpatents

    Wai, Chien M.; Hunt, Fred H.; Smart, Neil G.; Lin, Yuehe

    2000-01-01

    A method for dissociating metal-ligand complexes in a supercritical fluid by treating the metal-ligand complex with heat and/or reducing or oxidizing agents is described. Once the metal-ligand complex is dissociated, the resulting metal and/or metal oxide form fine particles of substantially uniform size. In preferred embodiments, the solvent is supercritical carbon dioxide and the ligand is a β-diketone such as hexafluoroacetylacetone or dibutyldiacetate. In other preferred embodiments, the metals in the metal-ligand complex are copper, silver, gold, tungsten, titanium, tantalum, tin, or mixtures thereof. In preferred embodiments, the reducing agent is hydrogen. The method provides an efficient process for dissociating metal-ligand complexes and produces easily-collected metal particles free from hydrocarbon solvent impurities. The ligand and the supercritical fluid can be regenerated to provide an economic, efficient process.

  20. Spectral analysis for GNSS coordinate time series using chirp Fourier transform

    NASA Astrophysics Data System (ADS)

    Feng, Shengtao; Bo, Wanju; Ma, Qingzun; Wang, Zifan

    2017-12-01

    Spectral analysis of global navigation satellite system (GNSS) coordinate time series provides a principal tool for understanding the intrinsic mechanisms that affect tectonic movements. Spectral analysis methods such as the fast Fourier transform, the Lomb-Scargle spectrum, the evolutionary power spectrum, the wavelet power spectrum, etc., are used to find periodic characteristics in time series. Among these, the chirp Fourier transform (CFT), which has less stringent requirements, is tested with synthetic and actual GNSS coordinate time series, demonstrating the accuracy and efficiency of the method. With series lengths limited only to even numbers, CFT provides a convenient tool for windowed spectral analysis. The results on ideal synthetic data show that CFT is accurate and efficient, while the results on actual data show that CFT can be used to derive periodic information from GNSS coordinate time series.
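    The abstract does not give the CFT algorithm itself, but the family of chirp-type transforms shares one core operation: evaluating a spectrum on an arbitrary, finely spaced frequency ladder rather than the fixed FFT grid. A minimal direct-summation sketch of that operation (the series and frequency grid are synthetic, and the O(NM) sum here is what chirp-transform algorithms compute fast):

```python
import numpy as np

def chirp_spectrum(x, fstart, fstep, m, fs=1.0):
    """Evaluate the spectrum of x on the frequency ladder
    f_k = fstart + k*fstep, k = 0..m-1, by direct summation."""
    n = np.arange(len(x))
    freqs = fstart + fstep * np.arange(m)
    # one row of complex exponentials per analysis frequency
    E = np.exp(-2j * np.pi * np.outer(freqs, n) / fs)
    return freqs, E @ np.asarray(x)

# Synthetic series with a known 0.1 cycle/sample periodicity
t = np.arange(512)
x = np.sin(2 * np.pi * 0.1 * t)
freqs, X = chirp_spectrum(x, 0.05, 0.001, 101)
peak = freqs[np.argmax(np.abs(X))]
```

    The fine grid (step 0.001 here versus the FFT's 1/512) is what makes such transforms attractive for picking out periodic signals in coordinate time series.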

  1. Organic electroluminescent devices and method for improving energy efficiency and optical stability thereof

    DOEpatents

    Heller, Christian Maria

    2004-04-27

    An organic electroluminescent device ("OELD") has a controllable brightness, an improved energy efficiency, and stable optical output at low brightness. The OELD is activated with a series of voltage pulses, each of which has a maximum voltage value that corresponds to the maximum power efficiency when the OELD is activated. The frequency of the pulses, or the duty cycle, or both are chosen to provide the desired average brightness.

  2. Highly efficient volume hologram multiplexing in thick dye-doped jelly-like gelatin.

    PubMed

    Katarkevich, Vasili M; Rubinov, Anatoli N; Efendiev, Terlan Sh

    2014-08-01

    Dye-doped jelly-like gelatin is a thick-layer self-developing photosensitive medium that allows single and multiplexed volume phase holograms to be successfully recorded using pulsed laser radiation. In this Letter, we present a method for multiplexed recording of volume holograms in dye-doped jelly-like gelatin, which provides a significant increase in their diffraction efficiency. The method is based on the recovery of the photobleached dye molecule concentration in the hologram recording zone of the gel, thanks to molecular diffusion from other, unexposed gel areas. As an example, an optical recording of a multiplexed hologram consisting of three superimposed Bragg gratings with mean values of the diffraction efficiency and angular selectivity of ∼75% and ∼21', respectively, is demonstrated by using the proposed method.

  3. A LSQR-type method provides a computationally efficient automated optimal choice of regularization parameter in diffuse optical tomography.

    PubMed

    Prakash, Jaya; Yalavarthy, Phaneendra K

    2013-03-01

    The aim was to develop a computationally efficient automated method for the optimal choice of regularization parameter in diffuse optical tomography. The least-squares QR (LSQR)-type method that uses Lanczos bidiagonalization is known to be computationally efficient in performing the reconstruction procedure in diffuse optical tomography. The same is effectively deployed via an optimization procedure that uses the simplex method to find the optimal regularization parameter. The proposed LSQR-type method is compared with traditional methods such as the L-curve, generalized cross-validation (GCV), and the recently proposed minimal residual method (MRM)-based choice of regularization parameter, using numerical and experimental phantom data. The results indicate that the proposed LSQR-type and MRM-based methods perform similarly in terms of reconstructed image quality, and both are superior to the L-curve and GCV-based methods. The proposed method's computational complexity is at least five times lower than that of the MRM-based method, making it an optimal technique. The LSQR-type method overcomes the computationally expensive nature of the MRM-based automated search for the optimal regularization parameter in diffuse optical tomographic imaging, making it more suitable for real-time deployment.
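    The general pattern the abstract describes, a simplex (Nelder-Mead) search over the regularization parameter of a regularized inversion, can be sketched without the paper's LSQR machinery. Below, a Tikhonov-filtered SVD solve on an ill-conditioned toy forward model stands in for the tomographic reconstruction, and a GCV score is used only as a placeholder selection criterion (the paper's own criterion differs); every matrix and noise level is illustrative:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
# Ill-conditioned toy forward model (stand-in for a tomographic Jacobian)
U, _ = np.linalg.qr(rng.standard_normal((60, 60)))
V, _ = np.linalg.qr(rng.standard_normal((40, 40)))
s = 10.0 ** np.linspace(0, -6, 40)          # rapidly decaying singular values
A = U[:, :40] * s @ V.T
x_true = np.sin(np.linspace(0, 3 * np.pi, 40))
b = A @ x_true + 1e-4 * rng.standard_normal(60)

def tikhonov(lam):
    # Filtered SVD solution: x_lam = V diag(s/(s^2+lam^2)) U^T b
    return V @ (s / (s**2 + lam**2) * (U[:, :40].T @ b))

def gcv(log_lam):
    lam = 10.0 ** log_lam
    resid = np.linalg.norm(A @ tikhonov(lam) - b)
    # effective residual degrees of freedom of the Tikhonov filter
    dof = 60 - np.sum(s**2 / (s**2 + lam**2))
    return (resid / dof) ** 2

# Simplex (Nelder-Mead) search over log10(lambda)
res = minimize(gcv, x0=[-3.0], method="Nelder-Mead")
lam_opt = 10.0 ** res.x[0]
```

    Searching in log space keeps the simplex well scaled, since useful regularization parameters span many orders of magnitude.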

  4. Validation of green-solvent extraction combined with chromatographic chemical fingerprint to evaluate quality of Stevia rebaudiana Bertoni.

    PubMed

    Teo, Chin Chye; Tan, Swee Ngin; Yong, Jean Wan Hong; Hew, Choy Sin; Ong, Eng Shi

    2009-02-01

    An approach that combined green-solvent methods of extraction with chromatographic chemical fingerprints and pattern recognition tools such as principal component analysis (PCA) was used to evaluate the quality of medicinal plants. Pressurized hot water extraction (PHWE) and microwave-assisted extraction (MAE) were used, and their efficiencies in extracting two bioactive compounds, namely stevioside (SV) and rebaudioside A (RA), from Stevia rebaudiana Bertoni (SB) grown under different cultivation conditions were compared. The proposed methods showed that SV and RA could be extracted from SB using pure water under optimized conditions. The extraction efficiency of the methods was observed to be higher than or comparable to heating under reflux with water. The method precision (RSD, n = 6) was found to vary from 1.91 to 2.86% for the two different methods on different days. Compared to PHWE, MAE has a higher extraction efficiency and a shorter extraction time. MAE was also found to extract more chemical constituents and provide distinctive chemical fingerprints for quality control purposes. Thus, a combination of MAE with chromatographic chemical fingerprints and PCA provided a simple and rapid approach for the comparison and classification of medicinal plants from different growth conditions. Hence, the current work highlights the importance of the extraction method in chemical fingerprinting for the classification of medicinal plants from different cultivation conditions with the aid of pattern recognition tools.
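    The classification step the abstract describes, PCA applied to chromatographic fingerprints, can be sketched with mock data. The fingerprints below are synthetic Gaussian peaks, not Stevia chromatograms; two "cultivation conditions" differ only in one peak's intensity, and PCA via SVD separates them:

```python
import numpy as np

rng = np.random.default_rng(1)
# Mock fingerprints: 12 samples x 200 retention-time points.  Two groups
# share a common peak but differ in the intensity of a second peak.
base = np.exp(-0.5 * ((np.arange(200) - 60) / 5.0) ** 2)
peak2 = np.exp(-0.5 * ((np.arange(200) - 140) / 5.0) ** 2)
group_a = base + 0.2 * peak2 + 0.02 * rng.standard_normal((6, 200))
group_b = base + 0.8 * peak2 + 0.02 * rng.standard_normal((6, 200))
X = np.vstack([group_a, group_b])

# PCA via SVD of the mean-centred data matrix
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * S                       # sample coordinates in PC space

# The two conditions should separate along the first principal component
pc1_a, pc1_b = scores[:6, 0], scores[6:, 0]
```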

  5. Efficient method for assessing channel instability near bridges

    USGS Publications Warehouse

    Robinson, Bret A.; Thompson, R.E.

    1993-01-01

    Efficient methods for data collection and processing are required to complete channel-instability assessments at 5,600 bridge sites in Indiana at an affordable cost and within a reasonable time frame while maintaining the quality of the assessments. To provide this needed efficiency and quality control, a data-collection form was developed that specifies the data to be collected and the order of data collection. This form represents a modification of previous forms that grouped variables according to type rather than by order of collection. Assessments completed during two field seasons showed that greater efficiency was achieved by using a fill-in-the-blank form that organizes the data to be recorded in a specified order: in the vehicle, from the roadway, in the upstream channel, under the bridge, and in the downstream channel.

  6. An Automatic Method for Geometric Segmentation of Masonry Arch Bridges for Structural Engineering Purposes

    NASA Astrophysics Data System (ADS)

    Riveiro, B.; DeJong, M.; Conde, B.

    2016-06-01

    Despite the tremendous advantages of laser scanning technology for the geometric characterization of built constructions, there are important limitations preventing more widespread implementation in the structural engineering domain. Even though the technology provides extensive and accurate information to perform structural assessment and health monitoring, many people are resistant to the technology due to the processing times involved. Thus, new methods that can automatically process LiDAR data and subsequently provide an automatic and organized interpretation are required. This paper presents a new method for fully automated point cloud segmentation of masonry arch bridges. The method efficiently creates segmented, spatially related, and organized point clouds, each of which contains the relevant geometric data for a particular component (pier, arch, spandrel wall, etc.) of the structure. The segmentation procedure comprises a heuristic approach for the separation of different vertical walls; image processing tools adapted to voxel structures then allow the efficient segmentation of the main structural elements of the bridge. The proposed methodology provides the essential processed data required for structural assessment of masonry arch bridges based on geometric anomalies. The method is validated using a representative sample of masonry arch bridges in Spain.
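    The paper's pipeline applies image-processing tools to a voxel structure built from the point cloud. A minimal sketch of just the voxelization step (the coordinates and 0.5 m cell size are illustrative, and the grouping scheme is ours, not the paper's):

```python
import numpy as np

def voxelize(points, voxel_size):
    """Map each 3-D point to an integer voxel index and group point
    indices by voxel.  Returns a dict {(i, j, k): array of indices}."""
    idx = np.floor(points / voxel_size).astype(int)
    groups = {}
    for n, key in enumerate(map(tuple, idx)):
        groups.setdefault(key, []).append(n)
    return {k: np.array(v) for k, v in groups.items()}

pts = np.array([[0.10, 0.20, 0.00],
                [0.15, 0.22, 0.05],   # same 0.5 m voxel as the first point
                [1.70, 0.20, 0.00]])  # a different voxel
cells = voxelize(pts, 0.5)
```

    Once points are binned this way, 2-D/3-D image operations (connected components, morphology) can run on the regular voxel grid instead of the raw, irregular cloud.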

  7. Efficient Terahertz Wide-Angle NUFFT-Based Inverse Synthetic Aperture Imaging Considering Spherical Wavefront.

    PubMed

    Gao, Jingkun; Deng, Bin; Qin, Yuliang; Wang, Hongqiang; Li, Xiang

    2016-12-14

    An efficient wide-angle inverse synthetic aperture imaging method considering the spherical wavefront effects and suitable for the terahertz band is presented. Firstly, the echo signal model under the spherical wave assumption is established, and the detailed wavefront curvature compensation method accelerated by the 1D fast Fourier transform (FFT) is discussed. Then, to speed up the reconstruction procedure, the fast Gaussian gridding (FGG)-based nonuniform FFT (NUFFT) is employed to focus the image. Finally, proof-of-principle experiments are carried out and the results are compared with the ones obtained by the convolution back-projection (CBP) algorithm. The results demonstrate the effectiveness and the efficiency of the presented method. This imaging method can be directly used in the field of nondestructive detection and can also be used to provide a solution for the calculation of the far-field radar cross sections (RCSs) of targets in the terahertz regime.
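    The NUFFT the abstract relies on approximates a nonuniform discrete Fourier sum in O(N log N). As a point of reference (not the FGG algorithm itself), the sum being approximated can be written down directly; on equispaced samples it reduces to the ordinary DFT, which gives an easy correctness check:

```python
import numpy as np

def nudft(x, t):
    """Nonuniform DFT evaluated directly: X_k = sum_n x_n e^{-i k t_n}.
    This O(N^2) sum is what gridding-based NUFFTs approximate fast."""
    N = len(x)
    k = np.arange(N)
    return np.exp(-1j * np.outer(k, t)) @ np.asarray(x)

# On equispaced samples t_n = 2*pi*n/N the sum is the ordinary DFT.
N = 64
x = np.random.default_rng(2).standard_normal(N)
t = 2 * np.pi * np.arange(N) / N
```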

  8. An efficient method for hybrid density functional calculation with spin-orbit coupling

    NASA Astrophysics Data System (ADS)

    Wang, Maoyuan; Liu, Gui-Bin; Guo, Hong; Yao, Yugui

    2018-03-01

    In first-principles calculations, hybrid functionals are often used to improve accuracy over local exchange-correlation functionals. A drawback is that evaluating the hybrid functional requires significantly more computing effort. When spin-orbit coupling (SOC) is taken into account, the non-collinear spin structure increases the computing effort by at least eight times. As a result, hybrid functional calculations with SOC are intractable in most cases. In this paper, we present an approximate solution to this problem by developing an efficient method based on a mixed linear combination of atomic orbitals (LCAO) scheme. We demonstrate the power of this method using several examples and show that the results compare very well with those of direct hybrid functional calculations with SOC, yet the method only requires a computing effort similar to that without SOC. The presented technique provides a good balance between computing efficiency and accuracy, and it can be extended to magnetic materials.

  9. Thorough subcells diagnosis in a multi-junction solar cell via absolute electroluminescence-efficiency measurements

    PubMed Central

    Chen, Shaoqiang; Zhu, Lin; Yoshita, Masahiro; Mochizuki, Toshimitsu; Kim, Changsu; Akiyama, Hidefumi; Imaizumi, Mitsuru; Kanemitsu, Yoshihiko

    2015-01-01

    World-wide studies on multi-junction (tandem) solar cells have led to record-breaking improvements in conversion efficiencies year after year. To obtain detailed and proper feedback for solar-cell design and fabrication, it is necessary to establish standard methods for diagnosing subcells in fabricated tandem devices. Here, we propose a potential standard method to quantify the detailed subcell properties of multi-junction solar cells based on absolute measurements of electroluminescence (EL) external quantum efficiency in addition to the conventional solar-cell external-quantum-efficiency measurements. We demonstrate that the absolute-EL-quantum-efficiency measurements provide I–V relations of individual subcells without the need for referencing measured I–V data, which is in stark contrast to previous works. Moreover, our measurements quantify the absolute rates of junction loss, non-radiative loss, radiative loss, and luminescence coupling in the subcells, which constitute the “balance sheets” of tandem solar cells. PMID:25592484

  10. An efficient quantum algorithm for spectral estimation

    NASA Astrophysics Data System (ADS)

    Steffens, Adrian; Rebentrost, Patrick; Marvian, Iman; Eisert, Jens; Lloyd, Seth

    2017-03-01

    We develop an efficient quantum implementation of an important signal processing algorithm for line spectral estimation: the matrix pencil method, which determines the frequencies and damping factors of signals consisting of finite sums of exponentially damped sinusoids. Our algorithm provides a quantum speedup in a natural regime where the sampling rate is much higher than the number of sinusoid components. Along the way, we develop techniques that are expected to be useful for other quantum algorithms as well—consecutive phase estimations to efficiently make products of asymmetric low rank matrices classically accessible and an alternative method to efficiently exponentiate non-Hermitian matrices. Our algorithm features an efficient quantum-classical division of labor: the time-critical steps are implemented in quantum superposition, while an interjacent step, requiring much fewer parameters, can operate classically. We show that frequencies and damping factors can be obtained in time logarithmic in the number of sampling points, exponentially faster than known classical algorithms.
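    The classical algorithm the quantum speedup targets, the matrix pencil method, is compact enough to sketch directly. This is the standard (non-quantum) noiseless version with the pencil parameter set to the number of components; the test signal and its parameters are our own:

```python
import numpy as np

def matrix_pencil(x, p):
    """Classical matrix pencil estimate of the poles z_i of
    x[n] = sum_i a_i * z_i**n, with z_i = exp(alpha_i + 2j*pi*f_i).
    p is the assumed number of exponential components (noiseless case)."""
    N = len(x)
    L = p                                   # pencil parameter
    # Hankel data matrix and its two shifted sub-matrices
    Y = np.array([x[m:m + L + 1] for m in range(N - L)])
    Y1, Y2 = Y[:, :-1], Y[:, 1:]
    z = np.linalg.eigvals(np.linalg.pinv(Y1) @ Y2)
    damping = np.log(np.abs(z))             # per-sample damping factors
    freqs = np.angle(z) / (2 * np.pi)       # cycles per sample
    return freqs, damping

# One damped complex sinusoid with known frequency and damping
n = np.arange(50)
x = np.exp((-0.05 + 2j * np.pi * 0.12) * n)
freqs, damping = matrix_pencil(x, 1)
```

    The quantum algorithm in the abstract accelerates exactly the linear-algebra core seen here: forming and diagonalizing the (pseudo-inverted) pencil of shifted Hankel matrices.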

  11. A novel technique based on in vitro oocyte injection to improve CRISPR/Cas9 gene editing in zebrafish

    PubMed Central

    Xie, Shao-Lin; Bian, Wan-Ping; Wang, Chao; Junaid, Muhammad; Zou, Ji-Xing; Pei, De-Sheng

    2016-01-01

    Contemporary improvements in the type II clustered regularly interspaced short palindromic repeats/CRISPR-associated protein 9 (CRISPR/Cas9) system offer a convenient way for genome editing in zebrafish. However, the low efficiencies of genome editing and germline transmission require time-intensive and laborious screening work. Here, we report a method based on in vitro oocyte storage, injecting oocytes in advance and incubating them in oocyte storage medium, to significantly improve the efficiencies of genome editing and germline transmission by in vitro fertilization (IVF) in zebrafish. Compared to conventional methods, the prior micro-injection of zebrafish oocytes improved the efficiency of genome editing, especially for sgRNAs with low targeting efficiency. Due to its high throughput, simplicity, and flexible design, this novel strategy will provide an efficient alternative to increase the speed of generating heritable mutants in zebrafish using the CRISPR/Cas9 system. PMID:27680290

  12. Management of groundwater in-situ bioremediation system using reactive transport modelling under parametric uncertainty: field scale application

    NASA Astrophysics Data System (ADS)

    Verardo, E.; Atteia, O.; Rouvreau, L.

    2015-12-01

    In-situ bioremediation is a commonly used remediation technology to clean up the subsurface of petroleum-contaminated sites. Forecasting remedial performance (in terms of flux and mass reduction) is a challenge due to the uncertainties associated with source properties and with the contribution and efficiency of concentration-reducing mechanisms. In this study, predictive uncertainty analysis of bio-remediation system efficiency is carried out with the null-space Monte Carlo (NSMC) method, which combines the calibration solution-space parameters with an ensemble of null-space parameters, creating sets of calibration-constrained parameters for input to follow-on predictions of remedial efficiency. The first step in the NSMC methodology for uncertainty analysis is model calibration. The model was calibrated by matching simulated BTEX concentrations to a total of 48 observations from historical data before implementation of treatment. Two different bio-remediation designs were then implemented in the calibrated model. The first consists of pumping/injection wells and the second of a permeable barrier coupled with infiltration across slotted piping. The NSMC method was used to calculate 1000 calibration-constrained parameter sets for the two different models. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. The first variant implementation of the NSMC is based on a single calibrated model. In the second variant, models were calibrated from different initial parameter sets, and NSMC calibration-constrained parameter sets were sampled from these different calibrated models. We demonstrate that, in the context of a nonlinear model, the second variant avoids underestimating parameter uncertainty, which may otherwise lead to a poor quantification of predictive uncertainty. Application of the proposed approach to manage bioremediation of groundwater at a real site shows that it is effective in providing support for the management of in-situ bioremediation systems. Moreover, this study demonstrates that the NSMC method provides a computationally efficient and practical methodology for utilizing model predictive uncertainty methods in environmental management.
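    The core NSMC idea, perturbing a calibrated parameter set only in directions the calibration data cannot see, reduces to a null-space projection in the linear case. A toy sketch (the Jacobian, parameter counts, and calibrated values below are illustrative, not the site model):

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(3)
# Toy linear "model": 3 observations constrain 6 parameters, leaving a
# 3-dimensional null space of calibration-insensitive directions.
J = rng.standard_normal((3, 6))            # sensitivity (Jacobian) matrix
p_cal = rng.standard_normal(6)             # a calibrated parameter set

V = null_space(J)                          # orthonormal null-space basis (6 x 3)

def nsmc_sample():
    """One calibration-constrained parameter set: the calibrated values
    plus a random perturbation confined to the null space."""
    return p_cal + V @ rng.standard_normal(V.shape[1])

samples = [nsmc_sample() for _ in range(1000)]
```

    Every such sample reproduces the calibration-period observations exactly (here, J @ sample equals J @ p_cal), yet the ensemble spreads over the parameter combinations the data leave unconstrained; for a nonlinear model, each sample would additionally be re-checked or re-calibrated.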

  13. A High-Order Method Using Unstructured Grids for the Aeroacoustic Analysis of Realistic Aircraft Configurations

    NASA Technical Reports Server (NTRS)

    Atkins, Harold L.; Lockard, David P.

    1999-01-01

    A method for the prediction of acoustic scatter from complex geometries is presented. The discontinuous Galerkin method provides a framework for the development of a high-order method using unstructured grids. The method's compact form contributes to its accuracy and efficiency, and makes the method well suited for distributed memory parallel computing platforms. Mesh refinement studies are presented to validate the expected convergence properties of the method, and to establish the absolute levels of error one can expect at a given level of resolution. For a two-dimensional shear layer instability wave and for three-dimensional wave propagation, the method is demonstrated to be insensitive to mesh smoothness. Simulations of scatter from a two-dimensional slat configuration and a three-dimensional blended-wing-body demonstrate the capability of the method to efficiently treat realistic geometries.

  14. Efficient scatter model for simulation of ultrasound images from computed tomography data

    NASA Astrophysics Data System (ADS)

    D'Amato, J. P.; Lo Vercio, L.; Rubi, P.; Fernandez Vera, E.; Barbuzza, R.; Del Fresno, M.; Larrabide, I.

    2015-12-01

    Background and motivation: Real-time ultrasound simulation refers to the process of computationally creating fully synthetic ultrasound images instantly. Due to the high value of specialized low-cost training for healthcare professionals, there is a growing interest in the use of this technology and the development of high-fidelity systems that simulate the acquisition of echographic images. The objective is to create an efficient and reproducible simulator that can run either on notebooks or desktops using low-cost devices. Materials and methods: We present an interactive ultrasound simulator based on CT data. This simulator is based on ray-casting and provides real-time interaction capabilities. The simulation of scattering that is coherent with the transducer position in real time is also introduced. Such noise is produced using a simplified model of multiplicative noise and convolution with point spread functions (PSF) tailored for this purpose. Results: The generation of scattering maps was revised to improve its computational performance. This allowed a more efficient simulation of coherent scattering in the synthetic echographic images while providing highly realistic results. We describe some quality and performance metrics to validate these results, with a performance of up to 55 fps achieved. Conclusion: The proposed technique for real-time scattering modeling provides realistic yet computationally efficient scatter distributions. The error between the original image and the simulated scattering image was compared for the proposed method and the state of the art, showing negligible differences in its distribution.
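    The scatter model named in the abstract, multiplicative noise followed by convolution with a PSF, can be sketched in a few lines. The tissue map, Rayleigh noise model, and Gaussian-times-cosine PSF below are common surrogates chosen by us, not the paper's calibrated components:

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(4)
# Echogenicity map standing in for a CT-derived tissue slice
tissue = np.ones((128, 128))
tissue[40:90, 40:90] = 1.8                 # a brighter inclusion

# Multiplicative scatterer field, then convolution with a separable PSF
scatter = tissue * rng.rayleigh(scale=1.0, size=tissue.shape)
ax = np.arange(-8, 9)
psf_axial = np.exp(-0.5 * (ax / 2.0) ** 2) * np.cos(2 * np.pi * ax / 4.0)
psf_lateral = np.exp(-0.5 * (ax / 3.0) ** 2)
psf = np.outer(psf_axial, psf_lateral)
image = np.abs(fftconvolve(scatter, psf, mode="same"))  # envelope image
```

    Because the noise is multiplicative, brighter tissue produces proportionally stronger speckle, which is what makes the synthetic texture track the underlying anatomy.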

  15. Printable, flexible and stretchable diamond for thermal management

    DOEpatents

    Rogers, John A; Kim, Tae Ho; Choi, Won Mook; Kim, Dae Hyeong; Meitl, Matthew; Menard, Etienne; Carlisle, John

    2013-06-25

    Various heat-sinked components and methods of making heat-sinked components are disclosed, where diamond in thermal contact with one or more heat-generating components is capable of dissipating heat, thereby providing thermally-regulated components. Thermally conductive diamond is provided in patterns capable of providing efficient and maximum heat transfer away from components that may be susceptible to damage by elevated temperatures. The devices and methods are used to cool flexible electronics, integrated circuits, and other complex electronics that tend to generate significant heat. Also provided are methods of making printable diamond patterns that can be used in a range of devices and device components.

  16. Factors Affecting the Adoption of Telemedicine: A Three-Country Empirical Investigation

    ERIC Educational Resources Information Center

    Mansouri-Rad, Parand

    2012-01-01

    Telemedicine improves access to information and healthcare services. Not only is telemedicine a more cost-effective and efficient method of providing health care than traditional methods, it is also the most convenient method of delivering healthcare. However, the adoption of telemedicine has been challenging. The purpose of this dissertation is to…

  17. Evaluation of a multiclass, multiresidue liquid chromatography-tandem mass spectrometry method for analysis of 120 veterinary drugs in bovine kidney

    USDA-ARS?s Scientific Manuscript database

    Traditionally, regulatory monitoring of veterinary drug residues in food animal tissues involves the use of several single-class methods to cover a wide analytical scope. Multiclass, multiresidue methods of analysis tend to provide greater overall laboratory efficiency than the use of multiple meth...

  18. [Application of the mixed programming with Labview and Matlab in biomedical signal analysis].

    PubMed

    Yu, Lu; Zhang, Yongde; Sha, Xianzheng

    2011-01-01

    This paper introduces the method of mixed programming with Labview and Matlab and applies it in a pulse wave pre-processing and feature detection system. The method has proved suitable, efficient, and accurate, providing a new kind of approach for biomedical signal analysis.

  19. Design Optimization Method for Composite Components Based on Moment Reliability-Sensitivity Criteria

    NASA Astrophysics Data System (ADS)

    Sun, Zhigang; Wang, Changxi; Niu, Xuming; Song, Yingdong

    2017-08-01

    In this paper, a Reliability-Sensitivity Based Design Optimization (RSBDO) methodology for the design of ceramic matrix composite (CMC) components is proposed. A practical and efficient method for reliability analysis and sensitivity analysis of complex components with arbitrary distribution parameters is investigated by using the perturbation method, the response surface method, the Edgeworth series, and the sensitivity analysis approach. The RSBDO methodology is then established by incorporating the sensitivity calculation model into the RBDO methodology. Finally, the proposed RSBDO methodology is applied to the design of CMC components. Comparison with Monte Carlo simulation demonstrates that the proposed methodology provides an accurate, convergent, and computationally efficient method for reliability analysis in finite element modeling engineering practice.
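    The moment-based reliability measures the abstract builds on can be illustrated with the simplest member of that family, the first-order second-moment (FOSM) reliability index, which is a perturbation-style approximation rather than the paper's full RSBDO machinery. The limit-state function and statistics below are a textbook strength-minus-load example of our choosing:

```python
import numpy as np

def fosm_reliability(g, mu, sigma, h=1e-6):
    """First-order second-moment reliability index beta = mu_g / sigma_g,
    with the gradient of the limit-state function g taken by finite
    differences at the mean point of the random variables."""
    mu = np.asarray(mu, dtype=float)
    g0 = g(mu)
    grad = np.array([(g(mu + h * e) - g0) / h for e in np.eye(len(mu))])
    sigma_g = np.sqrt(np.sum((grad * np.asarray(sigma)) ** 2))
    return g0 / sigma_g

# Linear limit state R - S (strength minus load): FOSM is exact here.
beta = fosm_reliability(lambda v: v[0] - v[1], mu=[5.0, 3.0], sigma=[0.6, 0.8])
```

    Higher-order moment methods (e.g., the Edgeworth-series correction named in the abstract) refine this index when the limit state is nonlinear or the variables are non-normal.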

  20. A Comparison of Wavetable and FM Data Reduction Methods for Resynthesis of Musical Sounds

    NASA Astrophysics Data System (ADS)

    Horner, Andrew

    An ideal music-synthesis technique provides both high-level spectral control and efficient computation. Simple playback of recorded samples lacks spectral control, while additive sine-wave synthesis is inefficient. Wavetable and frequency-modulation synthesis, however, are two popular synthesis techniques that are very efficient and use only a few control parameters.
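    The efficiency of wavetable synthesis comes from precomputing one period of the waveform and replaying it with a phase accumulator, so each output sample costs one table lookup plus an interpolation rather than several sine evaluations. A minimal oscillator sketch (the table size, partial amplitudes, and sample rate are arbitrary choices):

```python
import numpy as np

# One period of a summed-sine waveform, precomputed once
TABLE_SIZE = 2048
k = np.arange(TABLE_SIZE) / TABLE_SIZE
table = (np.sin(2 * np.pi * k)
         + 0.5 * np.sin(4 * np.pi * k)
         + 0.25 * np.sin(6 * np.pi * k))

def wavetable_osc(freq_hz, n_samples, fs=44100.0):
    """Play the table back at freq_hz via a phase accumulator with
    linear interpolation between adjacent table entries."""
    phase = (freq_hz / fs) * np.arange(n_samples) * TABLE_SIZE
    i0 = np.floor(phase).astype(int) % TABLE_SIZE
    i1 = (i0 + 1) % TABLE_SIZE
    frac = phase - np.floor(phase)
    return (1.0 - frac) * table[i0] + frac * table[i1]

tone = wavetable_osc(440.0, 44100)         # one second of a 440 Hz tone
```

    The "few control parameters" the passage mentions correspond here to the choice of table (spectrum) and the playback frequency; multiple tables can be cross-faded for time-varying spectra.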

  1. Estimating the Efficiency of Therapy Groups in a College Counseling Center

    ERIC Educational Resources Information Center

    Weatherford, Ryan D.

    2017-01-01

    College counseling centers are facing rapidly increasing demands for services and are tasked to find efficient ways of providing adequate services while managing limited space. The use of therapy groups has been proposed as a method of managing demand. This brief report examines the clinical time savings of a traditional group therapy program in a…

  2. Method and apparatus for improved efficiency in a pulse-width-modulated alternating current motor drive

    DOEpatents

    Konrad, C.E.; Boothe, R.W.

    1994-02-15

    A scheme for optimizing the efficiency of an AC motor drive operated in a pulse-width-modulated mode provides that the modulation frequency of the power furnished to the motor is a function of commanded motor torque and is higher at lower torque requirements than at higher torque requirements. 6 figures.

  3. Method and apparatus for improved efficiency in a pulse-width-modulated alternating current motor drive

    DOEpatents

    Konrad, C.E.; Boothe, R.W.

    1996-01-23

    A scheme for optimizing the efficiency of an AC motor drive operated in a pulse-width-modulated mode provides that the modulation frequency of the power furnished to the motor is a function of commanded motor torque and is higher at lower torque requirements than at higher torque requirements. 6 figs.

  4. Method and apparatus for improved efficiency in a pulse-width-modulated alternating current motor drive

    DOEpatents

    Konrad, Charles E.; Boothe, Richard W.

    1996-01-01

    A scheme for optimizing the efficiency of an AC motor drive operated in a pulse-width-modulated mode provides that the modulation frequency of the power furnished to the motor is a function of commanded motor torque and is higher at lower torque requirements than at higher torque requirements.

  5. Method and apparatus for improved efficiency in a pulse-width-modulated alternating current motor drive

    DOEpatents

    Konrad, Charles E.; Boothe, Richard W.

    1994-01-01

    A scheme for optimizing the efficiency of an AC motor drive operated in a pulse-width-modulated mode provides that the modulation frequency of the power furnished to the motor is a function of commanded motor torque and is higher at lower torque requirements than at higher torque requirements.
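The patent family above keys the PWM modulation frequency to commanded torque: higher frequency at low torque demand, lower frequency at high torque demand. A hedged sketch of such a schedule follows; the linear shape, rated torque, and frequency breakpoints are invented for illustration and are not specified in the abstracts:

```python
def pwm_frequency_hz(torque_cmd: float, t_rated: float = 100.0,
                     f_low: float = 2000.0, f_high: float = 8000.0) -> float:
    """Return a modulation frequency that is higher at low commanded
    torque and lower at high commanded torque, as in the scheme above.
    All numeric values are illustrative, not taken from the patents."""
    frac = min(max(abs(torque_cmd) / t_rated, 0.0), 1.0)
    # Linear interpolation from f_high (zero torque) down to f_low (rated torque).
    return f_high - frac * (f_high - f_low)

# Low torque demand -> higher switching frequency (switching losses matter
# less when currents are small); high torque demand -> lower frequency
# (reduces switching losses where conduction losses already dominate).
```

Any monotonically decreasing map from torque command to carrier frequency fits the stated scheme; a real drive would also clamp the frequency to the inverter's thermal and acoustic limits.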

  6. Correlated histogram representation of Monte Carlo derived medical accelerator photon-output phase space

    DOEpatents

    Schach Von Wittenau, Alexis E.

    2003-01-01

A method is provided to represent the calculated phase space of photons emanating from medical accelerators used in photon teletherapy. The method reproduces the energy distributions and trajectories of the photons originating in the bremsstrahlung target and of photons scattered by components within the accelerator head. The method reproduces the energy and directional information from sources up to several centimeters in radial extent, so it is expected to generalize well to accelerators made by different manufacturers. The method is computationally both fast and efficient, with an overall sampling efficiency of 80% or higher for most field sizes. The computational cost is independent of the number of beams used in the treatment plan.

  7. Solvent-stir bar microextraction system using pure tris-(2-ethylhexyl) phosphate as supported liquid membrane: A new and efficient design for the extraction of malondialdehyde from biological fluids.

    PubMed

    Fashi, Armin; Salarian, Amir Ahmad; Zamani, Abbasali

    2018-05-15

A novel and efficient solvent stir-bar microextraction (SSBME) device coupled with GC-FID detection was introduced for the pre-concentration and determination of malondialdehyde (MDA) in different biological matrices. In the proposed device, a piece of porous hollow fiber was mounted on a magnetic rotor with a stainless-steel wire as mechanical support, so that the whole device could stir with the rotor in the sample-solution cell. The device provided a higher pre-concentration factor and better precision than conventional SBME because of the reproducible, stable, and large contact area between the stirred sample and the hollow fiber. Organic solvent type, donor- and acceptor-phase pH, temperature, electrolyte concentration, agitation speed, extraction time, and sample volume were examined and optimized as the factors affecting SSBME efficiency. Pure tris-(2-ethylhexyl) phosphate (TEHP) was examined for the first time as a supported liquid membrane (SLM) for the determination of MDA by the SSBME method. In contrast to the conventional SLMs of SBME in the literature, the TEHP SLM was highly stable in contact with biological fluids and provided the highest extraction efficiency. Under optimized extraction conditions, the method provided satisfactory linearity in the range 1-500 ng mL⁻¹, low LODs (0.3-0.7 ng mL⁻¹), and good repeatability and reproducibility (RSD% (n = 5) < 4.5), with pre-concentration factors higher than 130-fold. To verify the accuracy of the proposed method, the traditional spectrophotometric TBA (2-thiobarbituric acid) test was used as a reference method. Finally, the proposed method was successfully applied to the determination and quantification of MDA in biological fluids. Copyright © 2018 Elsevier B.V. All rights reserved.

  8. Design of Ultrathin Pt-Based Multimetallic Nanostructures for Efficient Oxygen Reduction Electrocatalysis.

    PubMed

    Lai, Jianping; Guo, Shaojun

    2017-12-01

Nanocatalysts with high platinum (Pt) utilization efficiency are attracting extensive attention for the oxygen reduction reaction (ORR) conducted at the cathode of fuel cells. Ultrathin Pt-based multimetallic nanostructures show obvious advantages in accelerating the sluggish cathodic ORR due to their ultrahigh Pt utilization efficiency. Recent important developments in using wet-chemistry techniques to make and tune multimetallic nanostructures with high Pt utilization efficiency for boosting ORR activity and durability are highlighted. First, new synthetic methods for multimetallic core/shell nanoparticles with ultrathin shells for achieving highly efficient ORR catalysts are reviewed. To obtain better ORR activity and stability, multimetallic nanowires and nanosheets with well-defined structures and surfaces are further highlighted. Furthermore, ultrathin Pt-based multimetallic nanoframes that feature 3D molecularly accessible surfaces for achieving more efficient ORR catalysis are discussed. Finally, remaining challenges and an outlook for this promising research field are provided. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Efficient and Robust Optimization for Building Energy Simulation

    PubMed Central

    Pourarian, Shokouh; Kearsley, Anthony; Wen, Jin; Pertzborn, Amanda

    2016-01-01

Efficiently, robustly and accurately solving large sets of structured, non-linear algebraic and differential equations is one of the most computationally expensive steps in the dynamic simulation of building energy systems. Here, the efficiency, robustness and accuracy of two commonly employed solution methods are compared. The comparison is conducted using the HVACSIM+ software package, a component-based building system simulation tool. The HVACSIM+ software presently employs Powell’s Hybrid method to solve systems of nonlinear algebraic equations that model the dynamics of energy states and interactions within buildings. It is shown here that Powell’s method does not always converge to a solution. Since a myriad of other numerical methods are available, the question arises as to which method is most appropriate for building energy simulation. This paper finds considerable computational benefits result from replacing the Powell’s Hybrid method solver in HVACSIM+ with a solver more appropriate for the challenges particular to numerical simulations of buildings. Evidence is provided that a variant of the Levenberg-Marquardt solver has superior accuracy and robustness compared to Powell’s Hybrid method presently used in HVACSIM+. PMID:27325907

  10. Efficient and Robust Optimization for Building Energy Simulation.

    PubMed

    Pourarian, Shokouh; Kearsley, Anthony; Wen, Jin; Pertzborn, Amanda

    2016-06-15

Efficiently, robustly and accurately solving large sets of structured, non-linear algebraic and differential equations is one of the most computationally expensive steps in the dynamic simulation of building energy systems. Here, the efficiency, robustness and accuracy of two commonly employed solution methods are compared. The comparison is conducted using the HVACSIM+ software package, a component-based building system simulation tool. The HVACSIM+ software presently employs Powell's Hybrid method to solve systems of nonlinear algebraic equations that model the dynamics of energy states and interactions within buildings. It is shown here that Powell's method does not always converge to a solution. Since a myriad of other numerical methods are available, the question arises as to which method is most appropriate for building energy simulation. This paper finds considerable computational benefits result from replacing the Powell's Hybrid method solver in HVACSIM+ with a solver more appropriate for the challenges particular to numerical simulations of buildings. Evidence is provided that a variant of the Levenberg-Marquardt solver has superior accuracy and robustness compared to Powell's Hybrid method presently used in HVACSIM+.
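Both solvers compared above are exposed by common numerical libraries; for instance, SciPy's `scipy.optimize.root` wraps MINPACK's Powell Hybrid (`method='hybr'`) and Levenberg-Marquardt (`method='lm'`) routines. A toy two-equation system stands in here for a building-energy residual vector; the actual HVACSIM+ equation sets are far larger:

```python
import numpy as np
from scipy.optimize import root

# Small nonlinear system standing in for a building-energy residual vector.
def residual(x):
    return [x[0]**2 + x[1]**2 - 4.0,
            np.exp(x[0]) + x[1] - 1.0]

x0 = [1.0, 1.0]
sol_hybr = root(residual, x0, method='hybr')  # Powell's Hybrid (MINPACK hybrd)
sol_lm   = root(residual, x0, method='lm')    # Levenberg-Marquardt (MINPACK lmder)

print(sol_hybr.success, sol_lm.success)
```

Swapping solvers this way, with the residual function unchanged, is essentially the experiment the paper performs inside HVACSIM+ at much larger scale.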

  11. A Simulation Approach to Assessing Sampling Strategies for Insect Pests: An Example with the Balsam Gall Midge

    PubMed Central

    Carleton, R. Drew; Heard, Stephen B.; Silk, Peter J.

    2013-01-01

    Estimation of pest density is a basic requirement for integrated pest management in agriculture and forestry, and efficiency in density estimation is a common goal. Sequential sampling techniques promise efficient sampling, but their application can involve cumbersome mathematics and/or intensive warm-up sampling when pests have complex within- or between-site distributions. We provide tools for assessing the efficiency of sequential sampling and of alternative, simpler sampling plans, using computer simulation with “pre-sampling” data. We illustrate our approach using data for balsam gall midge (Paradiplosis tumifex) attack in Christmas tree farms. Paradiplosis tumifex proved recalcitrant to sequential sampling techniques. Midge distributions could not be fit by a common negative binomial distribution across sites. Local parameterization, using warm-up samples to estimate the clumping parameter k for each site, performed poorly: k estimates were unreliable even for samples of n∼100 trees. These methods were further confounded by significant within-site spatial autocorrelation. Much simpler sampling schemes, involving random or belt-transect sampling to preset sample sizes, were effective and efficient for P. tumifex. Sampling via belt transects (through the longest dimension of a stand) was the most efficient, with sample means converging on true mean density for sample sizes of n∼25–40 trees. Pre-sampling and simulation techniques provide a simple method for assessing sampling strategies for estimating insect infestation. We suspect that many pests will resemble P. tumifex in challenging the assumptions of sequential sampling methods. Our software will allow practitioners to optimize sampling strategies before they are brought to real-world applications, while potentially avoiding the need for the cumbersome calculations required for sequential sampling methods. PMID:24376556
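The pre-sampling simulation idea described above can be sketched in a few lines: draw clumped counts per tree from a negative binomial distribution and watch how the running sample mean converges on the true density. The mean, clumping parameter k, and sample sizes below are illustrative assumptions, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated clumped insect counts per tree: negative binomial with
# mean m and clumping parameter k (values invented for illustration).
m, k = 5.0, 0.8
n_trees = 10_000
p = k / (k + m)                       # numpy's NB parameterization
counts = rng.negative_binomial(k, p, size=n_trees)

# Running mean along a random-order "transect": how quickly does the
# sample mean converge on the true density m?
running_mean = np.cumsum(counts) / np.arange(1, n_trees + 1)
for n in (10, 40, 160, 1000):
    print(f"n={n:5d}  mean={running_mean[n - 1]:.2f}")
```

Repeating this over many simulated transects gives the distribution of the sample mean at each n, which is how one can judge whether a preset sample size of, say, 25-40 trees suffices before committing to fieldwork.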

  12. Engineered AAVs for efficient noninvasive gene delivery to the central and peripheral nervous systems

    PubMed Central

    Chan, Ken Y; Jang, Min J; Yoo, Bryan B; Greenbaum, Alon; Ravi, Namita; Wu, Wei-Li; Sánchez-Guardado, Luis; Lois, Carlos; Mazmanian, Sarkis K; Deverman, Benjamin E; Gradinaru, Viviana

    2017-01-01

Adeno-associated viruses (AAVs) are commonly used for in vivo gene transfer. Nevertheless, AAVs that provide efficient transduction across specific organs or cell populations are needed. Here, we describe AAV-PHP.eB and AAV-PHP.S, capsids that efficiently transduce the central and peripheral nervous systems, respectively. In the adult mouse, intravenous administration of 1×10¹¹ vector genomes (vg) of AAV-PHP.eB transduced 69% of cortical and 55% of striatal neurons, while 1×10¹² vg AAV-PHP.S transduced 82% of dorsal root ganglion neurons, as well as cardiac and enteric neurons. The efficiency of these vectors facilitates robust co-transduction and stochastic, multicolor labeling for individual cell morphology studies. To support such efforts, we provide methods for labeling a tunable fraction of cells without compromising color diversity. Furthermore, when used with cell type-specific promoters, these AAVs provide targeted gene expression across the nervous system and enable efficient and versatile gene manipulation throughout the nervous system of transgenic and non-transgenic animals. PMID:28671695

  13. Treatment of addiction and addiction-related behavior

    DOEpatents

    Dewey, Stephen L.; Brodie, Jonathan D.; Ashby, Jr., Charles R.

    2004-12-07

    The present invention provides a highly efficient method for treating substance addiction and for changing addiction-related behavior of a mammal suffering from substance addiction. The method includes administering to a mammal an effective amount of gamma vinylGABA or a pharmaceutically acceptable salt thereof. The present invention also provides a method of treatment of cocaine, morphine, heroin, nicotine, amphetamine, methamphetamine, or ethanol addiction by treating a mammal with an effective amount of gamma vinylGABA or a pharmaceutically acceptable salt thereof.

  14. Note on coefficient matrices from stochastic Galerkin methods for random diffusion equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou Tao, E-mail: tzhou@lsec.cc.ac.c; Tang Tao, E-mail: ttang@hkbu.edu.h

    2010-11-01

    In a recent work by Xiu and Shen [D. Xiu, J. Shen, Efficient stochastic Galerkin methods for random diffusion equations, J. Comput. Phys. 228 (2009) 266-281], the Galerkin methods are used to solve stochastic diffusion equations in random media, where some properties for the coefficient matrix of the resulting system are provided. They also posed an open question on the properties of the coefficient matrix. In this work, we will provide some results related to the open question.

  15. Research on Generating Method of Embedded Software Test Document Based on Dynamic Model

    NASA Astrophysics Data System (ADS)

    Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Liu, Ying

    2018-03-01

This paper presents a dynamic-model-based test document generation method for embedded software that automatically generates two documents: the test requirements specification and the configuration item test document. The method implements dynamic test requirements in dynamic models, so dynamic test-demand tracking can be generated easily. It automatically produces standardized test requirements and test documentation, addresses inconsistency and incompleteness in document content, and improves efficiency.

  16. The study of combining Latin Hypercube Sampling method and LU decomposition method (LULHS method) for constructing spatial random field

    NASA Astrophysics Data System (ADS)

    WANG, P. T.

    2015-12-01

Groundwater modeling requires assigning hydrogeological properties to every numerical grid cell. Because of the lack of detailed information and the inherent spatial heterogeneity, geological properties can be treated as random variables. A hydrogeological property is assumed to follow a multivariate distribution with spatial correlations. By sampling random numbers from a given statistical distribution and assigning a value to each grid cell, a random field for modeling can be completed. Statistical sampling therefore plays an important role in the efficiency of the modeling procedure. Latin Hypercube Sampling (LHS) is a stratified random sampling procedure that provides an efficient way to sample variables from their multivariate distributions. This study combines the stratified random procedure from LHS with simulation using LU decomposition to form LULHS. Both conditional and unconditional simulations of LULHS were developed. The simulation efficiency and spatial correlation of LULHS are compared with three other simulation methods. The results show that, for both conditional and unconditional simulation, the LULHS method is more efficient in terms of computational effort: fewer realizations are required to achieve the required statistical accuracy and spatial correlation.
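A minimal sketch of the unconditional LULHS idea as described above: draw Latin Hypercube samples of independent standard normals, then impose the spatial correlation through the lower-triangular (Cholesky/LU) factor of the covariance matrix. The 1-D grid, exponential covariance model, and all parameters are illustrative assumptions, not the study's setup:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

def lulhs(cov, n_real):
    """Unconditional LULHS sketch: Latin Hypercube normals correlated
    through the Cholesky (LU) factor of the covariance matrix."""
    n = cov.shape[0]
    # One stratum per realization for every grid variable.
    strata = np.array([rng.permutation(n_real) for _ in range(n)])
    u = (strata + rng.uniform(size=(n, n_real))) / n_real  # stratified uniforms
    z = norm.ppf(u)                      # independent LHS standard normals
    L = np.linalg.cholesky(cov)          # lower-triangular factor
    return L @ z                         # spatially correlated realizations

# Exponential covariance on a 1-D grid (illustrative parameters).
x = np.linspace(0.0, 10.0, 50)
cov = np.exp(-np.abs(x[:, None] - x[None, :]) / 3.0)
fields = lulhs(cov, n_real=200)
print(fields.shape)
```

The stratification guarantees every realization-quantile is hit exactly once per grid variable, which is what lets LULHS reproduce the target statistics with fewer realizations than plain random sampling.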

  17. Microcidal effects of a new pelleting process.

    PubMed

    Ekperigin, H E; McCapes, R H; Redus, R; Ritchie, W L; Cameron, W J; Nagaraja, K V; Noll, S

    1990-09-01

The microcidal efficiency of a new pelleting process was evaluated in four trials. Different methods of measuring temperature and moisture were also compared, and attempts were made to determine how pH changes occurring during processing influence efficiency. In the new process, the traditional boiler-conditioner was replaced by an Anaerobic Pasteurizing Conditioning (APC) System. The microcidal efficiency of the APC System, by itself or in conjunction with a pellet mill, appeared to be 100% against Escherichia coli and non-lactose-fermenters, 99% against aerobic mesophiles, and 90% against fungi. These levels of efficiency were attained when the temperature and moisture of feed conditioned in the APC System for 4.6 +/- .5 min were 82.9 +/- 2.4 C and 14.9 +/- .3%, respectively. On-line temperature probes were reliable and provided quick, accurate estimates of feed temperature. The near-infrared scanner and microwave oven methods of measuring moisture were much quicker but less accurate than the in vacuo method. There were no differences among the pH of samples of raw, conditioned, and pelleted feed.

  18. Configurable memory system and method for providing atomic counting operations in a memory device

    DOEpatents

    Bellofatto, Ralph E.; Gara, Alan G.; Giampapa, Mark E.; Ohmacht, Martin

    2010-09-14

    A memory system and method for providing atomic memory-based counter operations to operating systems and applications that make most efficient use of counter-backing memory and virtual and physical address space, while simplifying operating system memory management, and enabling the counter-backing memory to be used for purposes other than counter-backing storage when desired. The encoding and address decoding enabled by the invention provides all this functionality through a combination of software and hardware.

  19. Perspective: Ring-polymer instanton theory

    NASA Astrophysics Data System (ADS)

    Richardson, Jeremy O.

    2018-05-01

    Since the earliest explorations of quantum mechanics, it has been a topic of great interest that quantum tunneling allows particles to penetrate classically insurmountable barriers. Instanton theory provides a simple description of these processes in terms of dominant tunneling pathways. Using a ring-polymer discretization, an efficient computational method is obtained for applying this theory to compute reaction rates and tunneling splittings in molecular systems. Unlike other quantum-dynamics approaches, the method scales well with the number of degrees of freedom, and for many polyatomic systems, the method may provide the most accurate predictions which can be practically computed. Instanton theory thus has the capability to produce useful data for many fields of low-temperature chemistry including spectroscopy, atmospheric and astrochemistry, as well as surface science. There is however still room for improvement in the efficiency of the numerical algorithms, and new theories are under development for describing tunneling in nonadiabatic transitions.

  20. Evaluation of microplate immunocapture method for detection of Vibrio cholerae, Salmonella Typhi and Shigella flexneri from food.

    PubMed

    Fakruddin, Md; Hossain, Md Nur; Ahmed, Monzur Morshed

    2017-08-29

Improved methods with better separation and concentration ability for detection of foodborne pathogens are in constant need. The aim of this study was to evaluate a microplate immunocapture (IC) method for detection of Salmonella Typhi, Shigella flexneri and Vibrio cholerae from food samples, to provide a better alternative to conventional culture-based methods. The IC method was optimized for incubation time, bacterial concentration, and capture efficiency. A 6 h incubation and a cell concentration of 6 log CFU/ml provided optimal results. The method was shown to be highly specific for the pathogens concerned. Capture efficiency (CE) was around 100% for the target pathogens, whereas CE was zero or very low for non-target pathogens. The IC method also showed better pathogen detection ability at different cell concentrations in artificially contaminated food samples in comparison with culture-based methods. The method's performance parameters were comparable to, and in some respects better than, those of culture-based methods (detection limit: 25 vs. 125 CFU/25 g; sensitivity: 100% vs. 95.9%; specificity: 96.8% vs. 97%; accuracy: 96.7% vs. 96.2%). The IC method has the potential to become a method of choice for detection of foodborne pathogens in routine laboratory practice after proper validation.
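The performance figures quoted above follow the standard confusion-matrix definitions. A minimal helper illustrates them; the counts in the example are hypothetical, not the study's raw data:

```python
def diagnostic_performance(tp, fp, tn, fn):
    """Standard diagnostic performance measures: sensitivity (true
    positive rate), specificity (true negative rate), and accuracy.
    Counts here are hypothetical, not taken from the study."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

sens, spec, acc = diagnostic_performance(tp=48, fp=3, tn=91, fn=0)
print(f"sensitivity={sens:.1%} specificity={spec:.1%} accuracy={acc:.1%}")
```

Zero false negatives gives 100% sensitivity, matching the pattern reported for the IC method; specificity and accuracy then depend only on how many non-target samples are misclassified.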

  1. Study of vesicle size distribution dependence on pH value based on nanopore resistive pulse method

    NASA Astrophysics Data System (ADS)

    Lin, Yuqing; Rudzevich, Yauheni; Wearne, Adam; Lumpkin, Daniel; Morales, Joselyn; Nemec, Kathleen; Tatulian, Suren; Lupan, Oleg; Chow, Lee

    2013-03-01

Vesicles are low-micron to sub-micron spheres formed by a lipid bilayer shell and serve as potential vehicles for drug delivery. Vesicle size is proposed to be one of the key variables affecting delivery efficiency, since it is correlated with factors such as circulation and residence time in blood, the rate of cell endocytosis, and cell-targeting efficiency. In this work, we demonstrate accessible and reliable detection and size-distribution measurement using a glass nanopore device based on the resistive-pulse method. This novel method enables us to investigate how the size distribution depends on the pH difference across the vesicle membrane, using very small sample volumes and at rapid speed. This provides useful information for optimizing drug-delivery efficiency in a pH-sensitive environment.

  2. Implementing "lean" principles to improve the efficiency of the endoscopy department of a community hospital: a case study.

    PubMed

    Laing, Karen; Baumgartner, Katherine

    2005-01-01

Many endoscopy units are looking for ways to improve their efficiency without increasing the number of staff, purchasing additional equipment, or making patients feel as if they have been rushed through the care process. To accomplish this, a few hospitals have looked to other industries for help. Recently, "lean" methods and tools from the manufacturing industry have been applied successfully in health care systems and have proven to be an effective way to eliminate waste and redundancy in workplace processes. In service organizations, "lean" methods and tools focus on providing the most efficient and effective flow of services and products. This article describes the journey of one endoscopy department within a community hospital to illustrate the application of "lean" methods and tools and the results.

  3. An algebraic equation solution process formulated in anticipation of banded linear equations.

    DOT National Transportation Integrated Search

    1971-01-01

A general method for the solution of large, sparsely banded, positive-definite coefficient matrices is presented. The goal in developing the method was to produce an efficient and reliable solution process and to provide the user-programmer with a p...

  4. Comparison of four molecular methods to type Salmonella Enteritidis strains.

    PubMed

    Campioni, Fábio; Pitondo-Silva, André; Bergamini, Alzira M M; Falcão, Juliana P

    2015-05-01

This study compared the pulsed-field gel electrophoresis (PFGE), enterobacterial repetitive intergenic consensus-PCR (ERIC-PCR), multilocus variable-number tandem-repeat analysis (MLVA), and multilocus sequence typing (MLST) methods for typing 188 Salmonella Enteritidis strains from different sources isolated over a 24-year period in Brazil. PFGE and ERIC-PCR were more efficient than MLVA for subtyping the strains. However, MLVA provided additional epidemiological information for those strains. In addition, MLST showed the Brazilian strains as belonging to the main clonal complex of S. Enteritidis, CC11, and provided the first report of two new STs in the S. enterica database but could not properly subtype the strains. Our results showed that the use of PFGE or ERIC-PCR together with MLVA is suitable to efficiently subtype S. Enteritidis strains and provide important epidemiological information. © 2015 APMIS. Published by John Wiley & Sons Ltd.
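The discriminatory power of typing methods such as PFGE, ERIC-PCR, and MLVA is conventionally quantified with the Hunter-Gaston (Simpson's) index of diversity; the abstract above does not report this index, and the strain partitions in the example below are hypothetical:

```python
def discriminatory_index(type_counts):
    """Hunter-Gaston (Simpson's) index of discrimination for a typing
    method: the probability that two unrelated strains sampled at random
    are assigned to different types."""
    n = sum(type_counts)
    return 1.0 - sum(c * (c - 1) for c in type_counts) / (n * (n - 1))

# Hypothetical partitions of 188 strains by two methods:
pfge_types = [60, 40, 30, 20, 15, 10, 8, 5]   # more, smaller clusters
mlva_types = [120, 40, 28]                    # fewer, larger clusters
print(discriminatory_index(pfge_types))
print(discriminatory_index(mlva_types))
```

A method that splits the collection into more, smaller clusters scores higher, which is the quantitative sense in which one typing method "subtypes more efficiently" than another.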

  5. Method and apparatus for improved observation of in-situ combustion processes

    DOEpatents

    Lee, D.O.; Montoya, P.C.; Wayland, J.R. Jr.

Method and apparatus are provided for obtaining accurate dynamic measurements of the passage of phase fronts through a core sample in a test fixture. Flow-through grid structures are provided for electrodes to permit data to be obtained before, during, and after passage of a front therethrough. Such electrodes are incorporated in a test apparatus for obtaining electrical characteristics of the core sample. With the inventive structure, a method is provided for measurement of instabilities in a phase front progressing through the medium. The availability of accurate dynamic data representing parameters descriptive of material characteristics before, during, and after passage of a front provides a more efficient method for enhanced recovery of oil using a fire-flood technique. 6 figures, 2 tables.

  6. Efficient Isolation Protocol for B and T Lymphocytes from Human Palatine Tonsils

    PubMed Central

    Assadian, Farzaneh; Sandström, Karl; Laurell, Göran; Svensson, Catharina; Akusjärvi, Göran; Punga, Tanel

    2015-01-01

    Tonsils form a part of the immune system providing the first line of defense against inhaled pathogens. Usually the term “tonsils” refers to the palatine tonsils situated at the lateral walls of the oral part of the pharynx. Surgically removed palatine tonsils provide a convenient accessible source of B and T lymphocytes to study the interplay between foreign pathogens and the host immune system. This video protocol describes the dissection and processing of surgically removed human palatine tonsils, followed by the isolation of the individual B and T cell populations from the same tissue sample. We present a method, which efficiently separates tonsillar B and T lymphocytes using an antibody-dependent affinity protocol. Further, we use the method to demonstrate that human adenovirus infects specifically the tonsillar T cell fraction. The established protocol is generally applicable to efficiently and rapidly isolate tonsillar B and T cell populations to study the role of different types of pathogens in tonsillar immune responses. PMID:26650582

  7. Microbial electrosynthetic cells

    DOEpatents

    May, Harold D.; Marshall, Christopher W.; Labelle, Edward V.

    2018-01-30

Methods are provided for microbial electrosynthesis of H₂ and organic compounds such as methane and acetate. A method of producing mature electrosynthetic microbial populations by continuous culture is also provided. Microbial populations produced in accordance with the embodiments are shown to efficiently synthesize H₂, methane, and acetate in the presence of CO₂ and a voltage potential. The production of biodegradable and renewable plastics from electricity and carbon dioxide is also disclosed.

  8. Interaction sorting method for molecular dynamics on multi-core SIMD CPU architecture.

    PubMed

    Matvienko, Sergey; Alemasov, Nikolay; Fomin, Eduard

    2015-02-01

Molecular dynamics (MD) is widely used in computational biology for studying binding mechanisms of molecules, molecular transport, conformational transitions, protein folding, etc. The method is computationally expensive; thus, the demand for novel, much more efficient algorithms remains high. The interaction sorting (IS) algorithm, designed in 2007, therefore attracted considerable interest, as it outperformed the most efficient MD algorithms. In this work, a new IS modification is proposed that allows the algorithm to utilize SIMD processor instructions. This paper shows that the improvement provides an additional gain in performance of 9% to 45% in comparison to the original IS method.

  9. Lignin-blocking treatment of biomass and uses thereof

    DOEpatents

Yang, Bin [Hanover, NH]; Wyman, Charles E [Norwich, VT]

    2009-10-20

    Disclosed is a method for converting cellulose in a lignocellulosic biomass. The method provides for a lignin-blocking polypeptide and/or protein treatment of high lignin solids. The treatment enhances cellulase availability in cellulose conversion. Cellulase efficiencies are improved by the protein or polypeptide treatment. The treatment may be used in combination with steam explosion and acid prehydrolysis techniques. Hydrolysis yields from lignin containing biomass are enhanced 5-20%, and enzyme utilization is increased from 10% to 50%. Thus, a more efficient and economical method of processing lignin containing biomass materials utilizes a polypeptide/protein treatment step that effectively blocks lignin binding of cellulase.

  10. Flexible, reconfigurable, power efficient transmitter and method

    NASA Technical Reports Server (NTRS)

    Bishop, James W. (Inventor); Zaki, Nazrul H. Mohd (Inventor); Newman, David Childress (Inventor); Bundick, Steven N. (Inventor)

    2011-01-01

A flexible, reconfigurable, power-efficient transmitter device and method is provided. In one embodiment, the method includes receiving outbound data and determining a mode of operation. When operating in a first mode, the method may include modulation mapping the outbound data according to a modulation scheme to provide first modulation-mapped digital data, converting the first modulation-mapped digital data to an analog signal that comprises an intermediate-frequency (IF) analog signal, upconverting the IF analog signal to produce a first modulated radio-frequency (RF) signal based on a local oscillator signal, amplifying the first modulated RF signal to produce a first RF output signal, and outputting the first RF output signal via an isolator. In a second mode of operation, the method may include modulation mapping the outbound data according to a modulation scheme to provide second modulation-mapped digital data, converting the second modulation-mapped digital data to a first digital baseband signal, conditioning the first digital baseband signal to provide a first analog baseband signal, modulating one or more carriers with the first analog baseband signal to produce a second modulated RF signal based on a local oscillator signal, amplifying the second modulated RF signal to produce a second RF output signal, and outputting the second RF output signal via the isolator. The digital baseband signal may comprise an in-phase (I) digital baseband signal and a quadrature (Q) baseband signal.

  11. OpenEIS. Users Guide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Woohyun; Lutes, Robert G.; Katipamula, Srinivas

    This document is a user's guide for OpenEIS, a software code designed to provide standard methods for authoring, sharing, testing, using, and improving algorithms for operational building energy efficiency.

  12. Incorporating additional targets into learning trials for individuals with autism spectrum disorder.

    PubMed

    Nottingham, Casey L; Vladescu, Jason C; Kodak, Tiffany M

    2015-01-01

    Recently, researchers have investigated the effectiveness and efficiency of presenting secondary targets during learning trials for individuals with autism spectrum disorder (ASD). This instructional method may be more efficient than typical methods used with learners with ASD, because learners may acquire secondary targets without additional instruction. This review will discuss the recent literature on providing secondary targets during teaching trials for individuals with ASD, identify common aspects and results among these studies, and identify areas for future research. © Society for the Experimental Analysis of Behavior.

  13. Role of messenger RNA-ribosome complex in complementary DNA display.

    PubMed

    Naimuddin, Mohammed; Ohtsuka, Isao; Kitamura, Koichiro; Kudou, Motonori; Kimura, Shinnosuke

    2013-07-15

    In vitro display technologies such as ribosome display and messenger RNA (mRNA)/complementary DNA (cDNA) display are powerful methods that can generate library diversities on the order of 10^10-10^14. However, in mRNA and cDNA display methods, the end-use diversity is two orders of magnitude lower than the initial diversity and is dependent on the downstream processes that act as limiting factors. We found that in our previous cDNA display protocol, the purification of protein fusions by the use of streptavidin matrices from cell-free translation mixtures had poor efficiency (∼10-15%) that seriously affected the diversity of the purified library. Here, we have investigated and optimized the protocols that provided remarkable purification efficiencies. The stalled ribosome in the mRNA-ribosome complex was found to impede this purification efficiency. Among the various conditions tested, destabilization of ribosomes by an appropriate concentration of metal chelating agents, in combination with an optimal temperature of 30°C, was found to be crucial and effective for nearly complete isolation of protein fusions from the cell-free translation system. Thus, this protocol provided 8- to 10-fold increased purification efficiency over the previous method and improves the retained diversity of the library by approximately an order of magnitude, which is important for directed evolution. We also discuss the possible effects on the fabrication of protein chips. Copyright © 2013 Elsevier Inc. All rights reserved.

  14. SnagPRO: snag and tree sampling and analysis methods for wildlife

    Treesearch

    Lisa J. Bate; Michael J. Wisdom; Edward O. Garton; Shawn C. Clabough

    2008-01-01

    We describe sampling methods and provide software to accurately and efficiently estimate snag and tree densities at desired scales to meet a variety of research and management objectives. The methods optimize sampling effort by choosing a plot size appropriate for the specified forest conditions and sampling goals. Plot selection and data analyses are supported by...

  15. 77 FR 28790 - Medical Loss Ratio Requirements Under the Patient Protection and Affordable Care Act

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-16

    ... information will be available on the HHS Web site, HealthCare.gov , providing an efficient method of public... Sources, Methods, and Limitations On December 1, 2010, we published an interim final rule (75 FR 74864... impacts of the MLR rule, the data contain certain limitations; we developed imputation methods to account...

  16. Multipole expansion method for supernova neutrino oscillations

    DOE PAGES

    Duan, Huaiyu; Shalgar, Shashank

    2014-10-31

    Here, we demonstrate a multipole expansion method to calculate collective neutrino oscillations in supernovae using the neutrino bulb model. We show that it is much more efficient to solve multi-angle neutrino oscillations in the multipole basis than in the angle basis. The multipole expansion method also provides interesting insights into multi-angle calculations that were previously accomplished in the angle basis.

  17. A rapid method to assess grape rust mites on leaves and observations from case studies in western Oregon vineyards

    USDA-ARS?s Scientific Manuscript database

    A rapid method for extracting eriophyoid mites was adapted from previous studies to provide growers and IPM consultants with a practical, efficient, and reliable tool to monitor for rust mites in vineyards. The rinse in bag (RIB) method allows quick extraction of mites from collected plant parts (sh...

  18. Innovative Methods for Collecting and Analyzing Qualitative Data: Vignettes and Pre-Structured Cases.

    ERIC Educational Resources Information Center

    Miles, Matthew B.

    Two innovative methods for collecting and analyzing qualitative data are vignettes and pre-structured cases. Vignettes are descriptions of situations or problems written by a professional, with a suggested outline and comments provided by a researcher. Advantages of this method are strength of impact of the written descriptions and efficiency of…

  19. Overview of Heat Addition and Efficiency Predictions for an Advanced Stirling Convertor

    NASA Technical Reports Server (NTRS)

    Wilson, Scott D.; Reid, Terry; Schifer, Nicholas; Briggs, Maxwell

    2011-01-01

    Past methods of predicting net heat input needed to be validated. The validation effort pursued several paths, including improving model inputs, using test hardware to provide validation data, and validating high-fidelity models. The validation test hardware provided a direct measurement of net heat input for comparison to predicted values. The predicted value of net heat input was 1.7 percent less than the measured value, and initial calculations of measurement uncertainty were 2.1 percent (under review). Lessons learned during the validation effort were incorporated into the convertor modeling approach, which improved predictions of convertor efficiency.

  20. The Status and Promise of Advanced M&V: An Overview of “M&V 2.0” Methods, Tools, and Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Franconi, Ellen; Gee, Matt; Goldberg, Miriam

    Advanced measurement and verification (M&V) of energy efficiency savings, often referred to as M&V 2.0 or advanced M&V, is currently an object of much industry attention. Thus far, however, there has been a lack of clarity about what techniques M&V 2.0 includes, how those techniques differ from traditional approaches, what the key considerations are for their use, and what value propositions M&V 2.0 presents to different stakeholders. The objective of this paper is to provide background information and frame key discussion points related to advanced M&V. The paper identifies the benefits, methods, and requirements of advanced M&V and outlines key technical issues for applying these methods. It presents an overview of the distinguishing elements of M&V 2.0 tools and of how the industry is addressing needs for tool testing, consistency, and standardization, and it identifies opportunities for collaboration. In this paper, we consider two key features of M&V 2.0: (1) automated analytics that can provide ongoing, near-real-time savings estimates, and (2) increased data granularity in terms of frequency, volume, or end-use detail. Greater data granularity for large numbers of customers, such as that derived from comprehensive implementation of advanced metering infrastructure (AMI) systems, leads to very large data volumes. This drives interest in automated processing systems. It is worth noting, however, that automated processing can provide value even when applied to less granular data, such as monthly consumption data series. Likewise, more granular data, such as interval or end-use data, delivers value with or without automated processing, provided the processing is manageable. But it is the combination of greater data detail with automated processing that offers the greatest opportunity for value.
Using M&V methods that capture load shapes together with automated processing can determine savings in near-real time to provide stakeholders with more timely and detailed information. This information can be used to inform ongoing building operations, provide early input on energy efficiency program design, or assess the impact of efficiency by location and time of day. Stakeholders who can make use of such information include regulators, energy efficiency program administrators, program evaluators, contractors and aggregators, building owners, the investment community, and grid planners. Although each stakeholder has its own priorities and challenges related to savings measurement and verification, the potential exists for all to draw from a single set of efficiency valuation data. Such an integrated approach could provide a base consistency across stakeholder uses.

  1. Determination of the efficiency of ethanol oxidation in a proton exchange membrane electrolysis cell

    NASA Astrophysics Data System (ADS)

    Altarawneh, Rakan M.; Majidi, Pasha; Pickup, Peter G.

    2017-05-01

    Products and residual ethanol in the anode and cathode exhausts of an ethanol electrolysis cell (EEC) have been analyzed by proton NMR and infrared spectrometry under a variety of operating conditions. This provides a full accounting of the fate of ethanol entering the cell, including the stoichiometry of the ethanol oxidation reaction (i.e. the average number of electrons transferred per ethanol molecule), product distribution and the crossover of ethanol and products through the membrane. The reaction stoichiometry (nav) is the key parameter that determines the faradaic efficiency of both EECs and direct ethanol fuel cells. Values determined independently from the product distribution, amount of ethanol consumed, and a simple electrochemical method based on the dependence of the current on the flow rate of the ethanol solution are compared. It is shown that the electrochemical method yields results that are consistent with those based on the product distribution, and based on the consumption of ethanol when crossover is accounted for. Since quantitative analysis of the cathode exhaust is challenging, the electrochemical method provides a valuable alternative for routine determination of nav, and hence the faradaic efficiency of the cell.
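The stoichiometry calculation described above can be sketched numerically. The Faraday-law relation and the 12-electron count for complete oxidation of ethanol to CO2 are standard electrochemistry; the current, duration, and ethanol amounts below are illustrative assumptions, not measurements from this study.

```python
# Hypothetical worked example of the reaction stoichiometry n_av, the average
# number of electrons transferred per ethanol molecule:
#   n_av = Q / (F * moles of ethanol oxidized)
F = 96485.0                      # Faraday constant, C/mol
current = 0.50                   # cell current, A (assumed)
duration = 3600.0                # electrolysis time, s (assumed)
ethanol_oxidized = 2.2e-3        # mol consumed, e.g. from NMR of the exhausts (assumed)

charge = current * duration                  # total charge passed, C
n_av = charge / (F * ethanol_oxidized)       # electrons per ethanol molecule
faradaic_efficiency = n_av / 12.0            # 12 e- for complete oxidation to CO2
print(round(n_av, 2), round(faradaic_efficiency, 2))
```

With these assumed numbers, n_av is about 8.5 of the 12 electrons available, i.e. a faradaic efficiency near 70%, which is the kind of quantity the electrochemical flow-rate method is designed to estimate routinely.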

  2. Very Large Scale Optimization

    NASA Technical Reports Server (NTRS)

    Vanderplaats, Garrett; Townsend, James C. (Technical Monitor)

    2002-01-01

    The purpose of this research under the NASA Small Business Innovative Research program was to develop algorithms and associated software to solve very large nonlinear, constrained optimization tasks. Key issues included efficiency, reliability, memory, and gradient calculation requirements. This report describes the general optimization problem, ten candidate methods, and detailed evaluations of four candidates. The algorithm chosen for final development is a modern recreation of a 1960s external penalty function method that uses very limited computer memory and computational time. Although of lower efficiency, the new method can solve problems orders of magnitude larger than current methods. The resulting BIGDOT software has been demonstrated on problems with 50,000 variables and about 50,000 active constraints. For unconstrained optimization, it has solved a problem in excess of 135,000 variables. The method includes a technique for solving discrete variable problems that finds a "good" design, although a theoretical optimum cannot be guaranteed. It is very scalable in that the number of function and gradient evaluations does not change significantly with increased problem size. Test cases are provided to demonstrate the efficiency and reliability of the methods and software.
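The exterior penalty idea behind the chosen algorithm can be sketched on a toy problem. This is a minimal illustration of the technique, not the BIGDOT implementation; the test problem, step sizes, and penalty schedule are all assumptions chosen for clarity.

```python
# Exterior penalty method sketch: minimize f(x) subject to g(x) <= 0 by
# minimizing f(x) + r * max(0, g(x))^2 for an increasing penalty weight r.

def f(x):
    return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2

def g(x):
    return x[0] + x[1] - 2.0          # inequality constraint: x0 + x1 <= 2

def penalized(x, r):
    v = max(0.0, g(x))
    return f(x) + r * v * v

def grad(fun, x, h=1e-6):
    # Forward-difference gradient; production codes would use analytic gradients.
    f0 = fun(x)
    return [(fun([x[j] + (h if j == i else 0.0) for j in range(len(x))]) - f0) / h
            for i in range(len(x))]

x, r = [0.0, 0.0], 1.0
for _ in range(4):                     # outer loop: tighten the penalty
    step = 1.0 / (2.0 + 4.0 * r)      # stable descent step for this quadratic
    for _ in range(2000):              # inner loop: unconstrained minimization
        gr = grad(lambda y: penalized(y, r), x)
        x = [xi - step * gi for xi, gi in zip(x, gr)]
    r *= 10.0

print(x)  # tends toward the constrained optimum near (1.5, 0.5)
```

Each outer iteration warm-starts the next from the previous minimizer, which is part of what keeps memory and per-iteration cost low for this family of methods.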

  3. High-efficient and high-content cytotoxic recording via dynamic and continuous cell-based impedance biosensor technology.

    PubMed

    Hu, Ning; Fang, Jiaru; Zou, Ling; Wan, Hao; Pan, Yuxiang; Su, Kaiqi; Zhang, Xi; Wang, Ping

    2016-10-01

    Cell-based bioassays are an effective method to assess compound toxicity via cell viability, but traditional label-based methods miss much information about cell growth because they detect only an endpoint, and higher throughput is demanded to obtain dynamic information. Cell-based biosensor methods can dynamically and continuously monitor cell viability; however, this dynamic information is often ignored or seldom utilized in toxin and drug assessment. Here, we report a high-efficiency and high-content cytotoxic recording method based on dynamic and continuous cell-based impedance biosensor technology. The dynamic cell viability, inhibition ratio, and growth rate were derived from the dynamic response curves of the cell-based impedance biosensor. The results showed that the biosensor has a dose-dependent response to the diarrhetic shellfish toxin okadaic acid, based on analysis of the dynamic cell viability and cell growth status. Moreover, the throughputs of dynamic cytotoxicity assessment were compared between cell-based biosensor methods and label-based endpoint methods. This cell-based impedance biosensor can provide a flexible, cost- and label-efficient platform for cell viability assessment in shellfish toxin screening.

  4. Study of Hydrogen Recovery Systems for Gas Vented While Refueling Liquid-Hydrogen Fueled Aircraft

    NASA Technical Reports Server (NTRS)

    Baker, C. R.

    1979-01-01

    Methods of capturing and reliquefying the cold hydrogen vapor produced during the fueling of aircraft designed to utilize liquid hydrogen fuel were investigated. An assessment of the most practical, economic, and energy efficient of the hydrogen recovery methods is provided.

  5. Research of waste heat energy efficiency for absorption heat pump recycling thermal power plant circulating water

    NASA Astrophysics Data System (ADS)

    Zhang, Li; Zhang, Yu; Zhou, Liansheng; E, Zhijun; Wang, Kun; Wang, Ziyue; Li, Guohao; Qu, Bin

    2018-02-01

    The waste heat energy efficiency of an absorption heat pump recycling thermal power plant circulating water has been analyzed. The influences of heat pump operation on the power generation and heat generation of the unit were taken into account. In light of the characteristics of the heat pump in different operation stages, its energy efficiency was evaluated comprehensively on both the electricity-benefit and heat-benefit sides, using a contrast test method. This provides an energy efficiency reference for projects of the same type.

  6. Computation of full energy peak efficiency for nuclear power plant radioactive plume using remote scintillation gamma-ray spectrometry.

    PubMed

    Grozdov, D S; Kolotov, V P; Lavrukhin, Yu E

    2016-04-01

    A method of full energy peak efficiency estimation in the space around a scintillation detector, including the presence of a collimator, has been developed. It is based on a mathematical convolution of the experimental results with subsequent data extrapolation. The efficiency data showed an average uncertainty of less than 10%. Software to calculate the integral efficiency for a nuclear power plant plume was developed. The paper also provides results of nuclear power plant plume height estimation by analysis of the spectral data. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. An efficient graph theory based method to identify every minimal reaction set in a metabolic network

    PubMed Central

    2014-01-01

    Background Development of cells with minimal metabolic functionality is gaining importance due to their efficiency in producing chemicals and fuels. Existing computational methods to identify minimal reaction sets in metabolic networks are computationally expensive. Further, they identify only one of the several possible minimal reaction sets. Results In this paper, we propose an efficient graph theory based recursive optimization approach to identify all minimal reaction sets. Graph theoretical insights offer systematic methods to not only reduce the number of variables in mathematical programming and increase its computational efficiency, but also provide efficient ways to find multiple optimal solutions. The efficacy of the proposed approach is demonstrated using case studies from Escherichia coli and Saccharomyces cerevisiae. In case study 1, the proposed method identified three minimal reaction sets, each containing 38 reactions, in the Escherichia coli central metabolic network with 77 reactions. Analysis of these three minimal reaction sets revealed that one of them is more suitable for developing a minimal metabolism cell than the other two due to its practically achievable internal flux distribution. In case study 2, the proposed method identified 256 minimal reaction sets from the Saccharomyces cerevisiae genome scale metabolic network with 620 reactions. The proposed method required only 4.5 hours to identify all 256 minimal reaction sets and showed a significant reduction (approximately 80%) in solution time compared to existing methods for finding minimal reaction sets. Conclusions Identification of all minimal reaction sets in metabolic networks is essential, since different minimal reaction sets have different properties that affect bioprocess development. The proposed method correctly identified all minimal reaction sets in both case studies. 
The proposed method is computationally efficient compared to other methods for finding minimal reaction sets and useful to employ with genome-scale metabolic networks. PMID:24594118

  8. Quantification of alginate by aggregation induced by calcium ions and fluorescent polycations.

    PubMed

    Zheng, Hewen; Korendovych, Ivan V; Luk, Yan-Yeung

    2016-01-01

    For quantification of polysaccharides, including heparins and alginates, the commonly used carbazole assay involves hydrolysis of the polysaccharide to form a mixture of UV-active dye conjugate products. Here, we describe two efficient detection and quantification methods that make use of the negative charges of the alginate polymer and do not involve degradation of the targeted polysaccharide. The first method utilizes calcium ions to induce formation of hydrogel-like aggregates with alginate polymer; the aggregates can be quantified readily by staining with a crystal violet dye. This method does not require purification of alginate from the culture medium and can measure the large amount of alginate that is produced by a mucoid Pseudomonas aeruginosa culture. The second method employs polycations tethering a fluorescent dye to form suspension aggregates with the alginate polyanion. Encasing the fluorescent dye in the aggregates provides an increased scattering intensity with a sensitivity comparable to that of the conventional carbazole assay. Both approaches provide efficient methods for monitoring alginate production by mucoid P. aeruginosa. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Identifying and managing inappropriate hospital utilization: a policy synthesis.

    PubMed Central

    Payne, S M

    1987-01-01

    Utilization review, the assessment of the appropriateness and efficiency of hospital care through review of the medical record, and utilization management, deliberate action by payers or hospital administrators to influence providers of hospital services to increase the efficiency and effectiveness with which services are provided, are valuable but relatively unfamiliar strategies for containing hospital costs. The purpose of this synthesis is to increase awareness of the scope of and potential for these approaches among health services managers and administrators, third-party payers, policy analysts, and health services researchers. The synthesis will assist the reader to trace the conceptual context and the historical development of utilization review from unstructured methods using individual physicians' professional judgment to structured methods using explicit criteria; to establish the context of utilization review and clarify its uses; to understand the concepts and tools used in assessing the efficiency of hospital use; and to select, design, and evaluate utilization review and utilization management programs. The extent of inappropriate (medically unnecessary) hospital utilization and the factors associated with it are described. Implications for managers, providers, and third-party payers in targeting utilization review and in designing and evaluating utilization management programs are discussed. PMID:3121538

  10. Biorefining compounds and organocatalytic upgrading methods

    DOEpatents

    Chen, Eugene Y.; Liu, Dajiang

    2017-11-28

    The invention provides new methods for the direct umpolung self-condensation of 5-hydroxymethylfurfural (HMF) by organocatalysis, thereby upgrading the readily available substrate into 5,5'-di(hydroxymethyl)furoin (DHMF). While many efficient catalyst systems have been developed for conversion of plant biomass resources into HMF, the invention now provides methods to convert such nonfood biomass directly into DHMF by a simple process as described herein. The invention also provides highly effective new methods for upgrading other biomass furaldehydes and related compounds to liquid fuels. The methods include the organocatalytic self-condensation (umpolung) of biomass furaldehydes into (C.sub.8-C.sub.12)furoin intermediates, followed by hydrogenation, etherification or esterification into oxygenated biodiesel, or hydrodeoxygenation by metal-acid tandem catalysis into premium hydrocarbon fuels.

  11. Biorefining compounds and organocatalytic upgrading methods

    DOEpatents

    Chen, Eugene Y.; Liu, Dajiang

    2016-10-18

    The invention provides new methods for the direct umpolung self-condensation of 5-hydroxymethylfurfural (HMF) by organocatalysis, thereby upgrading the readily available substrate into 5,5'-di(hydroxymethyl)furoin (DHMF). While many efficient catalyst systems have been developed for conversion of plant biomass resources into HMF, the invention now provides methods to convert such nonfood biomass directly into DHMF by a simple process as described herein. The invention also provides highly effective new methods for upgrading other biomass furaldehydes and related compounds to liquid fuels. The methods include the organocatalytic self-condensation (umpolung) of biomass furaldehydes into (C.sub.8-C.sub.12)furoin intermediates, followed by hydrogenation, etherification or esterification into oxygenated biodiesel, or hydrodeoxygenation by metal-acid tandem catalysis into premium hydrocarbon fuels.

  12. Energy Efficiency and Demand Response for Residential Applications

    NASA Astrophysics Data System (ADS)

    Wellons, Christopher J., II

    The purpose of this thesis is to analyze the costs, feasibility, and benefits of implementing energy efficient devices and demand response programs in a residential consumer environment. Energy efficiency and demand response are important for many reasons, including grid stabilization. As energy demand increases year after year, the drain on the grid grows. There are two key solutions to this problem: increasing supply by building more power plants, and decreasing demand, both during peak periods by increasing participation in demand response programs and throughout the day by upgrading residential and commercial customers to energy efficient devices. This thesis focuses on utilizing demand response methods and energy efficient devices to reduce demand. Four simulations were created to analyze these methods. These simulations show the importance of energy efficiency and demand response participation in helping to stabilize the grid, integrate more alternative energy resources, and reduce emissions from fossil fuel generating facilities. The results of these numerical analyses show that demand response and energy efficiency can be beneficial to consumers and utilities, with demand response being the most beneficial to the utility and energy efficiency, specifically LED lighting, providing the most benefits to the consumer.

  13. Improvement of ruthenium based decarboxylation of carboxylic acids

    USDA-ARS?s Scientific Manuscript database

    The removal of oxygen atoms from biobased carboxylic acids is an attractive route to provide the drop in replacement feedstocks that industry needs to continue to provide high performance products. Through the use of ruthenium catalysis, an efficient method where this process can be accomplished on ...

  14. Use of Microcomputer to Manage Assessment Data.

    ERIC Educational Resources Information Center

    Vance, Booney; Hayden, David

    1982-01-01

    Examples are provided of a computerized special education management system used to manage assessment data for exceptional students. The system is designed to provide a simple yet efficient method of tracking data from educational and psychological evaluations (specifically the Wechsler Intelligence Scale for Children--Revised scores). (CL)

  15. Efficient Numerical Methods for Nonlinear-Facilitated Transport and Exchange in a Blood-Tissue Exchange Unit

    PubMed Central

    Poulain, Christophe A.; Finlayson, Bruce A.; Bassingthwaighte, James B.

    2010-01-01

    The analysis of experimental data obtained by the multiple-indicator method requires complex mathematical models for which capillary blood-tissue exchange (BTEX) units are the building blocks. This study presents a new, nonlinear, two-region, axially distributed, single-capillary BTEX model. A facilitated transporter model is used to describe mass transfer between plasma and intracellular spaces. To provide fast and accurate solutions, numerical techniques suited to nonlinear convection-dominated problems are implemented. These techniques are the random choice method, an explicit Euler-Lagrange scheme, and the MacCormack method with and without flux correction. The accuracy of the numerical techniques is demonstrated, and their efficiencies are compared. The random choice, Euler-Lagrange, and plain MacCormack methods are the best numerical techniques for BTEX modeling. However, the random choice and Euler-Lagrange methods are preferred over the MacCormack method because they allow for the derivation of a heuristic criterion that makes the numerical methods stable without degrading their efficiency. Numerical solutions are also used to illustrate some nonlinear behaviors of the model and to show how the new BTEX model can be used to estimate parameters from experimental data. PMID:9146808
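The MacCormack predictor-corrector scheme named in this record can be illustrated on the simplest convection-dominated problem, linear advection u_t + a u_x = 0 on a periodic grid. This is a generic sketch of the scheme, not the two-region BTEX model; the grid size, Courant number, and initial pulse are assumptions.

```python
import math

# MacCormack scheme for u_t + a*u_x = 0: a forward-difference predictor
# followed by a backward-difference corrector, averaged together.
N, c = 100, 0.5                      # grid points, Courant number c = a*dt/dx
u = [math.exp(-0.5 * ((i - 25) / 5.0) ** 2) for i in range(N)]  # Gaussian pulse
mass0 = sum(u)

for _ in range(int(N / c)):          # advect the pulse exactly one period
    # Predictor: forward-difference estimate of the new state.
    up = [u[i] - c * (u[(i + 1) % N] - u[i]) for i in range(N)]
    # Corrector: backward-difference step on the predictor, averaged with u.
    u = [0.5 * (u[i] + up[i] - c * (up[i] - up[i - 1])) for i in range(N)]

print(abs(sum(u) - mass0))           # conservative: total mass is preserved
```

On a periodic grid the forward and backward differences telescope, so the scheme conserves the integral of u to roundoff while transporting the pulse, which is the property that makes it attractive for exchange models built from conservation laws.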

  16. Low cost and efficient kurtosis-based deflationary ICA method: application to MRS sources separation problem.

    PubMed

    Saleh, M; Karfoul, A; Kachenoura, A; Senhadji, L; Albera, L

    2016-08-01

    Improving the execution time and numerical complexity of the well-known kurtosis-based maximization method, RobustICA, is investigated in this paper. A Newton-based scheme is proposed and compared to the conventional RobustICA method, and a new implementation based on nonlinear conjugate gradient is also investigated. For the Newton approach, an exact computation of the Hessian of the considered cost function is provided. The proposed approaches and implementations inherit the global plane search of the original RobustICA method, for which a better convergence speed in a given direction is still guaranteed. Numerical results on Magnetic Resonance Spectroscopy (MRS) source separation show the efficiency of the proposed approaches, notably the quasi-Newton one using the BFGS method.

  17. Probabilistic power flow using improved Monte Carlo simulation method with correlated wind sources

    NASA Astrophysics Data System (ADS)

    Bie, Pei; Zhang, Buhan; Li, Hang; Deng, Weisi; Wu, Jiasi

    2017-01-01

    Probabilistic Power Flow (PPF) is a very useful tool for power system steady-state analysis. However, the correlation among different random injection powers (like wind power) makes PPF difficult to calculate. Monte Carlo simulation (MCS) and analytical methods are two commonly used approaches to solve PPF. MCS has high accuracy but is very time consuming. Analytical methods such as the cumulants method (CM) have high computing efficiency, but calculating the cumulants is not convenient when the wind power output does not follow any typical distribution, especially when correlated wind sources are considered. In this paper, an improved Monte Carlo simulation method (IMCS) is proposed. The joint empirical distribution is applied to model the different wind power outputs. This method combines the advantages of both MCS and analytical methods: it not only has high computing efficiency, but also provides solutions with enough accuracy, making it very suitable for on-line analysis.
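The core difficulty, drawing correlated injection samples whose marginals follow no standard distribution, can be sketched with a Gaussian copula over empirical marginals. This is a generic illustration of the idea, not the paper's IMCS algorithm; the synthetic "historical" wind records and the correlation value are assumptions.

```python
import math
import random

random.seed(1)

# Hypothetical historical wind-power records (MW) for two nearby farms.
hist_a = sorted(random.gauss(50, 15) for _ in range(1000))
hist_b = sorted(random.gauss(30, 10) for _ in range(1000))

def empirical_inverse(hist, u):
    # Inverse of the empirical CDF: the u-quantile of the sorted history.
    return hist[min(len(hist) - 1, int(u * len(hist)))]

def phi(z):
    # Standard normal CDF.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def correlated_sample(rho):
    # Gaussian copula: correlate two standard normals, then map each one
    # through its own empirical marginal distribution.
    z1 = random.gauss(0, 1)
    z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * random.gauss(0, 1)
    return (empirical_inverse(hist_a, phi(z1)),
            empirical_inverse(hist_b, phi(z2)))

samples = [correlated_sample(0.8) for _ in range(20000)]
total = [a + b for a, b in samples]
mean = sum(total) / len(total)
var = sum((t - mean) ** 2 for t in total) / len(total)
print(mean, var)  # correlation inflates the variance of the combined injection
```

Because the marginals come straight from the data, the sampler works for any empirical wind distribution; the copula only supplies the dependence structure, which is what makes the correlated case tractable.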

  18. Performance evaluation of a health insurance in Nigeria using optimal resource use: health care providers perspectives

    PubMed Central

    2014-01-01

    Background Performance measures are often neglected during the transition period of national health insurance scheme implementation in many low and middle income countries. These measurements evaluate the extent to which various aspects of the schemes meet their key objectives. This study assesses the implementation of a health insurance scheme using optimal resource use domains and examines possible factors that influence each domain, according to providers’ perspectives. Methods A retrospective, cross-sectional survey was done between August and December 2010 in Kaduna state, and 466 health care provider personnel were interviewed. Optimal-resource-use was defined in four domains: provider payment mechanism (capitation and fee-for-service payment methods), benefit package, administrative efficiency, and active monitoring mechanism. Logistic regression analysis was used to identify provider factors that may influence each domain. Results In the provider payment mechanism domain, capitation payment method (95%) performed better than fee-for-service payment method (62%). Benefit package domain performed strongly (97%), while active monitoring mechanism performed weakly (37%). In the administrative efficiency domain, both promptness of referral system (80%) and prompt arrival of funds (93%) performed well. At the individual level, providers with fewer enrolees encountered difficulties with reimbursement. Other factors significantly influenced each of the optimal-resource-use domains. Conclusions Fee-for-service payment method and claims review, in the provider payment and active monitoring mechanisms, respectively, performed weakly according to the providers’ (at individual-level) perspectives. A short-fall on the supply-side of health insurance could lead to a direct or indirect adverse effect on the demand-side of the scheme. Capitation payment per enrolees should be revised to conform to economic circumstances. 
Performance indicators and providers’ characteristics and experiences associated with resource use can assist policy makers to monitor and evaluate health insurance implementation. PMID:24628889

  19. Sterilization Efficiency of Spore forming Bacteria in Powdery Food by Atmospheric Pressure Plasmas Sterilizer

    NASA Astrophysics Data System (ADS)

    Nagata, Masayoshi; Tanaka, Masashi; Kikuchi, Yusuke

    2015-09-01

    To provide a food sterilization method capable of killing highly heat-resistant spore-forming bacteria, we studied atmospheric-pressure plasma treatment with the aim of developing a new low-cost, high-efficiency, high-speed plasma sterilization apparatus. Even plasma treatment has difficulty sterilizing powdery foods such as soybean, basil, and turmeric. This paper shows that introducing mechanical rotation of the treatment space increases the efficiency: our experiments demonstrate complete inactivation of spore-forming bacteria in these materials within a short treatment time. We also discuss the sterilization mechanism of the dielectric barrier discharge.

  20. Numerical method of lines for the relaxational dynamics of nematic liquid crystals.

    PubMed

    Bhattacharjee, A K; Menon, Gautam I; Adhikari, R

    2008-08-01

    We propose an efficient numerical scheme, based on the method of lines, for solving the Landau-de Gennes equations describing the relaxational dynamics of nematic liquid crystals. Our method is computationally easy to implement, balancing requirements of efficiency and accuracy. We benchmark our method through the study of the following problems: the isotropic-nematic interface, growth of nematic droplets in the isotropic phase, and the kinetics of coarsening following a quench into the nematic phase. Our results, obtained through solutions of the full coarse-grained equations of motion with no approximations, provide a stringent test of the de Gennes ansatz for the isotropic-nematic interface, illustrate the anisotropic character of droplets in the nucleation regime, and validate dynamical scaling in the coarsening regime.
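
    The core idea of the method of lines is to discretize only the spatial derivatives, turning the PDE into a coupled ODE system, one equation per grid point, which is then integrated in time. The following is a minimal illustrative sketch using a 1-D diffusive relaxation model (not the full Landau-de Gennes tensor equations studied in the paper):

```python
import numpy as np

# Method of lines for d(phi)/dt = d2(phi)/dx2 on [0, 1] with phi = 0 at both
# ends: the second-difference stencil converts the PDE into one ODE per
# interior grid point, integrated here with explicit forward Euler.
def method_of_lines(phi0, dx, dt, steps):
    phi = np.array(phi0, dtype=float)
    for _ in range(steps):
        lap = np.zeros_like(phi)
        lap[1:-1] = (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / dx ** 2
        phi += dt * lap                      # explicit time step
    return phi

# relax a sine profile; the exact solution decays as exp(-pi**2 * t)
x = np.linspace(0.0, 1.0, 51)
phi = method_of_lines(np.sin(np.pi * x), dx=x[1] - x[0], dt=1e-4, steps=500)
```

    With an explicit integrator the time step must satisfy dt <= dx**2 / 2 for stability; production schemes (including the one the paper describes) typically pair the spatial discretization with a more sophisticated ODE integrator.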

  1. Novel switching method for single-phase NPC three-level inverter with neutral-point voltage control

    NASA Astrophysics Data System (ADS)

    Lee, June-Seok; Lee, Seung-Joo; Lee, Kyo-Beum

    2018-02-01

    This paper proposes a novel switching method with neutral-point voltage control for a single-phase neutral-point-clamped three-level inverter (SP-NPCI) used in photovoltaic systems. The proposed switching method improves the efficiency of the SP-NPCI. The main concept is to fix the switching state of one leg; as a result, the switching loss decreases and the total efficiency improves. In addition, the method enables maximum power-point-tracking operation by applying the proposed neutral-point voltage control algorithm, which is implemented by modifying the reference signal. Simulation and experimental results verify the performance of the proposed switching method with neutral-point voltage control.

  2. Gene delivery by microfluidic flow-through electroporation based on constant DC and AC field.

    PubMed

    Geng, Tao; Zhan, Yihong; Lu, Chang

    2012-01-01

    Electroporation is one of the most widely used physical methods to deliver exogenous nucleic acids into cells with high efficiency and low toxicity. Conventional electroporation systems typically require expensive pulse generators to provide short electrical pulses at high voltage. In this work, we demonstrate a flow-through electroporation method for continuous transfection of cells based on disposable chips, a syringe pump, and a low-cost power supply that provides a constant voltage. We successfully transfect cells using either DC or AC voltage with high flow rates (ranging from 40 µl/min to 20 ml/min) and high efficiency (up to 75%). We also enable the entire cell membrane to be uniformly permeabilized and dramatically improve gene delivery by inducing complex migrations of cells during the flow.

  3. Efficient calcium lactate production by fermentation coupled with crystallization-based in situ product removal.

    PubMed

    Xu, Ke; Xu, Ping

    2014-07-01

    Lactic acid is a platform chemical with various industrial applications, and its derivative, calcium lactate, is an important food additive. Fermentation coupled with in situ product removal (ISPR) can provide higher output with high productivity. The method used in this study was based on calcium lactate crystallization. Three cycles of crystallization were performed during the fermentation course using Bacillus coagulans strain H-1. Compared to fed-batch fermentation, the ISPR fermentation showed 1.7 times higher average productivity (including seed culture) and produced 74.4% more L-lactic acid. Thus, fermentation coupled with crystallization-based ISPR may be a biotechnological alternative that provides an efficient system for production of calcium lactate or lactic acid. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Formal methods for test case generation

    NASA Technical Reports Server (NTRS)

    Rushby, John (Inventor); De Moura, Leonardo Mendonga (Inventor); Hamon, Gregoire (Inventor)

    2011-01-01

    The invention relates to the use of model checkers to generate efficient test sets for hardware and software systems. The method provides for extending existing tests to reach new coverage targets; searching *to* some or all of the uncovered targets in parallel; searching in parallel *from* some or all of the states reached in previous tests; and slicing the model relative to the current set of coverage targets. The invention provides efficient test case generation and test set formation. Deep regions of the state space can be reached within allotted time and memory. The approach has been applied to use of the model checkers of SRI's SAL system and to model-based designs developed in Stateflow. Stateflow models achieving complete state and transition coverage in a single test case are reported.

  5. Techniques of EMG signal analysis: detection, processing, classification and applications

    PubMed Central

    Hussain, M.S.; Mohd-Yasin, F.

    2006-01-01

    Electromyography (EMG) signals can be used for clinical/biomedical applications, Evolvable Hardware Chip (EHW) development, and modern human-computer interaction. EMG signals acquired from muscles require advanced methods for detection, decomposition, processing, and classification. The purpose of this paper is to illustrate the various methodologies and algorithms for EMG signal analysis and to provide efficient and effective ways of understanding the signal and its nature. We further point out some of the hardware implementations using EMG, focusing on applications related to prosthetic hand control, grasp recognition, and human-computer interaction. A comparison study is also given to show the performance of various EMG signal analysis methods. This paper provides researchers with a good understanding of EMG signals and their analysis procedures. This knowledge will help them develop more powerful, flexible, and efficient applications. PMID:16799694

  6. Approximate Solutions for Flow with a Stretching Boundary due to Partial Slip

    PubMed Central

    Filobello-Nino, U.; Vazquez-Leal, H.; Sarmiento-Reyes, A.; Benhammouda, B.; Jimenez-Fernandez, V. M.; Pereyra-Diaz, D.; Perez-Sesma, A.; Cervantes-Perez, J.; Huerta-Chua, J.; Sanchez-Orea, J.; Contreras-Hernandez, A. D.

    2014-01-01

    The homotopy perturbation method (HPM) is coupled with versions of Laplace-Padé and Padé methods to provide an approximate solution to the nonlinear differential equation that describes the behaviour of a flow with a stretching flat boundary due to partial slip. Comparing results between approximate and numerical solutions, we concluded that our results are capable of providing an accurate solution and are extremely efficient. PMID:27433526

  7. Development and application of carbon nanotubes assisted electromembrane extraction (CNTs/EME) for the determination of buprenorphine as a model of basic drugs from urine samples.

    PubMed

    Hasheminasab, Kobra Sadat; Fakhari, Ali Reza

    2013-03-12

    In this work, carbon nanotubes assisted electromembrane extraction (CNTs/EME) coupled with capillary electrophoresis (CE) and ultraviolet (UV) detection was developed for the determination of buprenorphine as a model of basic drugs from urine samples. A carbon nanotube-reinforced hollow fiber was used in this research. Here the CNTs serve as a sorbent and provide an additional pathway for solute transport. The presence of CNTs in the hollow fiber wall increased the effective surface area and the overall partition coefficient of the membrane, leading to enhanced analyte transport. To investigate the influence of the presence of CNTs in the supported liquid membrane (SLM) on the extraction efficiency, a comparative study was carried out between the EME and CNTs/EME methods. The variables affecting these methods were optimized to achieve the best extraction efficiency. Optimal extractions were accomplished with NPOE as the SLM, with 200 V as the driving force, and with pH 2.0 in the donor and pH 1.0 in the acceptor solutions, with the whole assembly agitated at 750 rpm for 25 min and 15 min for EME and CNTs/EME, respectively. Under the optimized conditions, in comparison with the conventional EME method, CNTs/EME provided higher extraction efficiencies in a shorter time. This method provided a lower limit of detection (1 ng mL(-1)), a higher preconcentration factor (185), and a higher recovery (92%). Finally, the applicability of this method was evaluated by the extraction and determination of buprenorphine in patients' urine samples. Copyright © 2013 Elsevier B.V. All rights reserved.

  8. Recovery of Bacillus Spore Contaminants from Rough Surfaces: a Challenge to Space Mission Cleanliness Control▿

    PubMed Central

    Probst, Alexander; Facius, Rainer; Wirth, Reinhard; Wolf, Marco; Moissl-Eichinger, Christine

    2011-01-01

    Microbial contaminants on spacecraft can threaten the scientific integrity of space missions due to probable interference with life detection experiments. Therefore, space agencies measure the cultivable spore load (“bioburden”) of a spacecraft. A recent study has reported an insufficient recovery of Bacillus atrophaeus spores from Vectran fabric, a typical spacecraft airbag material (A. Probst, R. Facius, R. Wirth, and C. Moissl-Eichinger, Appl. Environ. Microbiol. 76:5148-5158, 2010). Here, 10 different sampling methods were compared for B. atrophaeus spore recovery from this rough textile, revealing significantly different efficiencies (0.5 to 15.4%). The most efficient method, based on the wipe-rinse technique (foam-spatula protocol; 13.2% efficiency), was then compared to the current European Space Agency (ESA) standard wipe assay in sampling four different kinds of spacecraft-related surfaces. Results indicate that the novel protocol outperformed the standard method, with an average efficiency of 41.1% versus 13.9%. Additional experiments were performed by sampling Vectran fabric seeded with seven different spore concentrations and five different Bacillus species (B. atrophaeus, B. anthracis Sterne, B. megaterium, B. thuringiensis, and B. safensis). Among these, B. atrophaeus spores were recovered with the highest (13.2%) efficiency and B. anthracis Sterne spores were recovered with the lowest (0.3%) efficiency. Different inoculation methods of seeding spores on test surfaces (spotting and aerosolization) resulted in different spore recovery efficiencies. The results of this study provide a step forward in understanding the spore distribution on and recovery from rough surfaces. The results presented will contribute relevant knowledge to the fields of astrobiology and B. anthracis research. PMID:21216908

  9. Resolved-particle simulation by the Physalis method: Enhancements and new capabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sierakowski, Adam J. (sierakowski@jhu.edu); Prosperetti, Andrea (Faculty of Science and Technology and J.M. Burgers Centre for Fluid Dynamics, University of Twente, Enschede)

    2016-03-15

    We present enhancements and new capabilities of the Physalis method for simulating disperse multiphase flows using particle-resolved simulation. The current work enhances the previous method by incorporating a new type of pressure-Poisson solver that couples with a new Physalis particle pressure boundary condition scheme and a new particle interior treatment to significantly improve overall numerical efficiency. Further, we implement a more efficient method of calculating the Physalis scalar products and incorporate short-range particle interaction models. We provide validation and benchmarking for the Physalis method against experiments of a sedimenting particle and of normal wall collisions. We conclude with an illustrative simulation of 2048 particles sedimenting in a duct. In the appendix, we present a complete and self-consistent description of the analytical development and numerical methods.

  10. Assessment of the eco-efficiency of helicopter upgrade processes as an end-of-life alternative

    NASA Astrophysics Data System (ADS)

    Rancher, Alexandre

    Classic industrial production methods generate significant pressures on natural resources as well as environmental constraints related to product end-of-life management. Closed-loop supply chains are often seen as more eco-efficient alternatives, well known to provide substantial economic and environmental benefits at the scale of the product life cycle. This is notably achieved through important reductions in the overall cost of production, in the needs for new materials and energies, and in the proportion of end-of-life components going to landfill. Due to their modular designs and the particular dynamics of helicopter service life, light nonpressurized helicopters have proven to be highly receptive to partial or total remanufacture and upgrade, extending their service life, enhancing their performance and modernizing their equipment, often for only a fraction of the cost of a new aircraft. However, little environmental data is available in order to assess the overall eco-efficiency of helicopter upgrade processes. This study resulted in the creation of a method for the systemic characterization of the processes encountered during the helicopter service life. The arrangement of these processes over time has enabled the construction of helicopter operation cycles, representative of the helicopter service life. These operation cycles have then been characterized, following various criteria based on helicopter designs and usage profiles, in order to study and compare their respective eco-efficiency. A case study is provided to illustrate the application of the method, based on a currently operating industrial business model of helicopter upgrade. This case study intends to provide a first-level assessment of the potential economic, technical and environmental benefits from remanufacturing and upgrading a helicopter, as an alternative production channel. 
The study found that, compared to replacement, upgrading a former airframe to a more recent design is generally the more eco-efficient decision. Important reductions were found in most of the profiles assessed: up to 51% in production costs, 77.5% in waste going to landfill, and up to 54% in energy consumption. The method developed can be seen as a decision-support tool intended for both operators and manufacturers. It takes into account Design-for-Environment (DfE) guidelines and Material Recovery Opportunities (MRO), providing a better understanding of the adaptability of a given design to the requirements of optimized reverse supply chains.

  11. Unlocking biodiversity and conservation studies in high diversity environments using environmental DNA (eDNA): a test with Guianese freshwater fishes.

    PubMed

    Cilleros, Kévin; Valentini, Alice; Allard, Luc; Dejean, Tony; Etienne, Roselyne; Grenouillet, Gaël; Iribar, Amaia; Taberlet, Pierre; Vigouroux, Régis; Brosse, Sébastien

    2018-05-16

    Determining the species compositions of local assemblages is a prerequisite to understanding how anthropogenic disturbances affect biodiversity. However, biodiversity measurements often remain incomplete due to the limited efficiency of sampling methods. This is particularly true in freshwater tropical environments that host rich fish assemblages, for which assessments are uncertain and often rely on destructive methods. Developing an efficient and non-destructive method to assess biodiversity in tropical freshwaters is highly important. In this study, we tested the efficiency of environmental DNA (eDNA) metabarcoding to assess the fish diversity of 39 Guianese sites. We compared the diversity and composition of assemblages obtained using traditional and metabarcoding methods. More than 7,000 individual fish belonging to 203 Guianese fish species were collected by traditional sampling methods, and ~17 million reads were produced by metabarcoding, among which ~8 million reads were assigned to 148 fish taxonomic units, including 132 fish species. The two methods detected a similar number of species at each site, but the species identities partially matched. The assemblage compositions from the different drainage basins were better discriminated using metabarcoding, revealing that while traditional methods provide a more complete but spatially limited inventory of fish assemblages, metabarcoding provides a more partial but spatially extensive inventory. eDNA metabarcoding can therefore be used for rapid and large-scale biodiversity assessments, while at a local scale, the two approaches are complementary and enable an understanding of realistic fish biodiversity. This article is protected by copyright. All rights reserved.

  12. High-efficiency solar cell and method for fabrication

    DOEpatents

    Hou, Hong Q.; Reinhardt, Kitt C.

    1999-01-01

    A high-efficiency 3- or 4-junction solar cell is disclosed with a theoretical AM0 energy conversion efficiency of about 40%. The solar cell includes p-n junctions formed from indium gallium arsenide nitride (InGaAsN), gallium arsenide (GaAs) and indium gallium aluminum phosphide (InGaAlP) separated by n-p tunnel junctions. An optional germanium (Ge) p-n junction can be formed in the substrate upon which the other p-n junctions are grown. The bandgap energies for each p-n junction are tailored to provide substantially equal short-circuit currents for each p-n junction, thereby eliminating current bottlenecks and improving the overall energy conversion efficiency of the solar cell. Additionally, the use of an InGaAsN p-n junction overcomes super-bandgap energy losses that are present in conventional multi-junction solar cells. A method is also disclosed for fabricating the high-efficiency 3- or 4-junction solar cell by metal-organic chemical vapor deposition (MOCVD).

  13. High-efficiency solar cell and method for fabrication

    DOEpatents

    Hou, H.Q.; Reinhardt, K.C.

    1999-08-31

    A high-efficiency 3- or 4-junction solar cell is disclosed with a theoretical AM0 energy conversion efficiency of about 40%. The solar cell includes p-n junctions formed from indium gallium arsenide nitride (InGaAsN), gallium arsenide (GaAs) and indium gallium aluminum phosphide (InGaAlP) separated by n-p tunnel junctions. An optional germanium (Ge) p-n junction can be formed in the substrate upon which the other p-n junctions are grown. The bandgap energies for each p-n junction are tailored to provide substantially equal short-circuit currents for each p-n junction, thereby eliminating current bottlenecks and improving the overall energy conversion efficiency of the solar cell. Additionally, the use of an InGaAsN p-n junction overcomes super-bandgap energy losses that are present in conventional multi-junction solar cells. A method is also disclosed for fabricating the high-efficiency 3- or 4-junction solar cell by metal-organic chemical vapor deposition (MOCVD). 4 figs.

  14. The Location of Sources of Human Computer Processed Cerebral Potentials for the Automated Assessment of Visual Field Impairment

    PubMed Central

    Leisman, Gerald; Ashkenazi, Maureen

    1979-01-01

    Objective psychophysical techniques for investigating visual fields are described. The paper concerns methods for the collection and analysis of evoked potentials using a small laboratory computer and provides efficient methods for obtaining information about the conduction pathways of the visual system.

  15. Materials and methods for efficient succinate and malate production

    DOEpatents

    Jantama, Kaemwich; Haupt, Mark John; Zhang, Xueli; Moore, Jonathan C; Shanmugam, Keelnatham T; Ingram, Lonnie O'Neal

    2014-04-08

    Genetically engineered microorganisms have been constructed to produce succinate and malate in mineral salt media in pH-controlled batch fermentations without the addition of plasmids or foreign genes. The subject invention also provides methods of producing succinate and malate comprising the culture of genetically modified microorganisms.

  16. Constructing I[subscript h] Symmetrical Fullerenes from Pentagons

    ERIC Educational Resources Information Center

    Gan, Li-Hua

    2008-01-01

    Twelve pentagons are sufficient and necessary to form a fullerene cage. According to this structural feature of fullerenes, we propose a simple and efficient method for the construction of I[subscript h] symmetrical fullerenes from pentagons. This method does not require complicated mathematical knowledge; yet it provides an excellent paradigm for…

  17. Efficient computational methods to study new and innovative signal detection techniques in SETI

    NASA Technical Reports Server (NTRS)

    Deans, Stanley R.

    1991-01-01

    The purpose of the research reported here is to provide a rapid computational method for computing various statistical parameters associated with overlapped Hann spectra. These results are important for the Targeted Search part of the Search for ExtraTerrestrial Intelligence (SETI) Microwave Observing Project.

  18. Efficient engineering of marker-free synthetic allotetraploids of Saccharomyces.

    PubMed

    Alexander, William G; Peris, David; Pfannenstiel, Brandon T; Opulente, Dana A; Kuang, Meihua; Hittinger, Chris Todd

    2016-04-01

    Saccharomyces interspecies hybrids are critical biocatalysts in the fermented beverage industry, including in the production of lager beers, Belgian ales, ciders, and cold-fermented wines. Current methods for making synthetic interspecies hybrids are cumbersome and/or require genome modifications. We have developed a simple, robust, and efficient method for generating allotetraploid strains of prototrophic Saccharomyces without sporulation or nuclear genome manipulation. S. cerevisiae × S. eubayanus, S. cerevisiae × S. kudriavzevii, and S. cerevisiae × S. uvarum designer hybrid strains were created as synthetic lager, Belgian, and cider strains, respectively. The ploidy and hybrid nature of the strains were confirmed using flow cytometry and PCR-RFLP analysis, respectively. This method provides an efficient means for producing novel synthetic hybrids for beverage and biofuel production, as well as for constructing tetraploids to be used for basic research in evolutionary genetics and genome stability. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Generalized Buneman Pruning for Inferring the Most Parsimonious Multi-state Phylogeny

    NASA Astrophysics Data System (ADS)

    Misra, Navodit; Blelloch, Guy; Ravi, R.; Schwartz, Russell

    Accurate reconstruction of phylogenies remains a key challenge in evolutionary biology. Most biologically plausible formulations of the problem are formally NP-hard, with no known efficient solution. The standard in practice are fast heuristic methods that are empirically known to work very well in general, but can yield results arbitrarily far from optimal. Practical exact methods, which yield exponential worst-case running times but generally much better times in practice, provide an important alternative. We report progress in this direction by introducing a provably optimal method for the weighted multi-state maximum parsimony phylogeny problem. The method is based on generalizing the notion of the Buneman graph, a construction key to efficient exact methods for binary sequences, so as to apply to sequences with arbitrary finite numbers of states with arbitrary state transition weights. We implement an integer linear programming (ILP) method for the multi-state problem using this generalized Buneman graph and demonstrate that the resulting method is able to solve data sets that are intractable by prior exact methods in run times comparable with popular heuristics. Our work provides the first method for provably optimal maximum parsimony phylogeny inference that is practical for multi-state data sets of more than a few characters.
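
    For context, the small-parsimony subproblem of scoring a single fixed tree is solvable exactly in linear time by the classical Fitch algorithm. This is not the authors' Buneman-graph ILP method, which searches over topologies, but it illustrates what "multi-state maximum parsimony" counts. A minimal sketch:

```python
def fitch_score(tree, leaf_states):
    """Minimum number of state changes on a fixed binary tree (Fitch algorithm).

    tree: nested 2-tuples whose leaves are taxon names.
    leaf_states: dict mapping taxon name -> observed character state.
    """
    def rec(node):
        if isinstance(node, str):                        # leaf: its own state, cost 0
            return {leaf_states[node]}, 0
        (s_left, k_left) = rec(node[0])
        (s_right, k_right) = rec(node[1])
        common = s_left & s_right
        if common:                                       # children can agree: no new change
            return common, k_left + k_right
        return s_left | s_right, k_left + k_right + 1    # disagree: one change needed

    return rec(tree)[1]

# hypothetical example tree and character states
tree = (("human", "chimp"), ("mouse", "rat"))
concordant = fitch_score(tree, {"human": "A", "chimp": "A", "mouse": "G", "rat": "G"})
discordant = fitch_score(tree, {"human": "A", "chimp": "G", "mouse": "A", "rat": "G"})
```

    The hard part of maximum parsimony, and what makes the problem NP-hard, is minimizing this score over all tree topologies; the Buneman-graph construction described above restricts that search space provably without losing the optimum.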

  20. A systematic review of health care efficiency measures.

    PubMed

    Hussey, Peter S; de Vries, Han; Romley, John; Wang, Margaret C; Chen, Susan S; Shekelle, Paul G; McGlynn, Elizabeth A

    2009-06-01

    The objective was to review and characterize existing health care efficiency measures in order to facilitate a common understanding of their adequacy. We systematically reviewed the MedLine and EconLit databases for articles published from 1990 to 2008 and searched the "gray" literature for additional measures developed by private organizations. We classified the efficiency measures by perspective, outputs, inputs, methods used, and reporting of scientific soundness. We identified 265 measures in the peer-reviewed literature and eight measures in the gray literature, with little overlap between the two sets. Almost none of the measures explicitly considered the quality of care; thus, if quality varies substantially across groups, which is likely in some cases, the measures reflect only the costs of care, not efficiency. Evidence on the measures' scientific soundness was mostly lacking: evidence on reliability or validity was reported for six measures (2.3 percent), and sensitivity analyses were reported for 67 measures (25.3 percent). Efficiency measures have been subjected to few rigorous evaluations of reliability and validity, and methods of accounting for quality of care in efficiency measurement are not well developed at this time. Use of these measures without greater understanding of these issues is likely to engender resistance from providers and could lead to unintended consequences.

  1. Solar cell collector

    NASA Technical Reports Server (NTRS)

    Evans, J. C., Jr. (Inventor)

    1978-01-01

    A method is provided for the fabrication of a photovoltaic device which possesses an efficient collector system for the conduction of the current generated by incident photons to the external circuitry of the device.

  2. Chapter 7: Refrigerator Recycling Evaluation Protocol. The Uniform Methods Project: Methods for Determining Energy-Efficiency Savings for Specific Measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurnik, Charles W.; Keeling, Josh; Bruchs, Doug

    Refrigerator recycling programs are designed to save energy by removing operable, albeit less efficient, refrigerators from service. By offering free pickup, providing incentives, and disseminating information about the operating cost of less efficient refrigerators, these programs are designed to encourage consumers to:
    - Limit the use of secondary refrigerators
    - Relinquish refrigerators previously used as primary units when they are replaced (rather than keeping the existing refrigerator as a secondary unit)
    - Prevent the continued use of less efficient refrigerators in another household through a direct transfer (giving it away or selling it) or indirect transfer (resale on the used appliance market)
    Commonly implemented by third-party contractors (who collect and decommission participating appliances), these programs generate energy savings through the retirement of inefficient appliances. The decommissioning process captures environmentally harmful refrigerants and foam, and enables recycling of the plastic, metal, and wiring components.

  3. Different methods to fabricate efficient planar perovskite solar cells based on solution-processing Nb2O5 as electron transporting layer

    NASA Astrophysics Data System (ADS)

    Guo, Heng; Yang, Jian; Pu, Bingxue; Zhang, Haiyan; Niu, Xiaobin

    2018-01-01

    Organo-lead perovskites as light harvesters are a highly active area of research on high-efficiency perovskite solar cells (PSCs). Previous approaches to increasing solar cell efficiency have focused on optimizing the morphology of the perovskite film. In fact, the electron transporting layer (ETL) also has a significant impact on solar cell performance. Herein, we introduce a facile, low-temperature solution-processing method to deposit Nb2O5 film as the ETL for PSCs. Based on the Nb2O5 ETL, we investigate the effect of annealing time for perovskite films prepared via different solution processes, relating it to the perovskite film morphology and its influence on the device working mechanisms. These results shed light on the origins of photovoltaic performance in perovskite solar cells and provide a path to further increase their efficiency.

  4. An Efficient, Noniterative Method of Identifying the Cost-Effectiveness Frontier.

    PubMed

    Suen, Sze-chuan; Goldhaber-Fiebert, Jeremy D

    2016-01-01

    Cost-effectiveness analysis aims to identify treatments and policies that maximize benefits subject to resource constraints. However, the conventional process of identifying the efficient frontier (i.e., the set of potentially cost-effective options) can be algorithmically inefficient, especially when considering a policy problem with many alternative options or when performing an extensive suite of sensitivity analyses for which the efficient frontier must be found for each. Here, we describe an alternative one-pass algorithm that is conceptually simple, easier to implement, and potentially faster for situations that challenge the conventional approach. Our algorithm accomplishes this by exploiting the relationship between the net monetary benefit and the cost-effectiveness plane. To facilitate further evaluation and use of this approach, we also provide scripts in R and Matlab that implement our method and can be used to identify efficient frontiers for any decision problem. © The Author(s) 2015.
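
    The conventional frontier identification the authors improve on can be stated compactly: an option is on the efficient frontier if it is neither dominated (some alternative costs no more and yields at least as much benefit) nor extended-dominated (a mix of two alternatives beats it), which is equivalent to the lower convex hull of the (effect, cost) points with increasing ICERs. The following sketch is an illustrative convex-hull implementation with hypothetical options, not the authors' one-pass net-monetary-benefit algorithm:

```python
def efficient_frontier(options):
    """Identify the non-dominated, non-extended-dominated options.

    options: iterable of (name, cost, effect) tuples.
    Returns the frontier ordered by increasing effect.
    """
    pts = sorted(options, key=lambda o: (o[2], o[1]))   # effect asc, then cost asc
    hull = []
    for name, c, e in pts:
        if hull and hull[-1][2] == e:        # equal effect but higher cost: dominated
            continue
        while hull:
            if hull[-1][1] >= c:             # less effect yet not cheaper: dominated
                hull.pop()
            elif len(hull) >= 2:
                _, c1, e1 = hull[-2]
                _, c2, e2 = hull[-1]
                # extended dominance: middle point lies on/above the chord from
                # (e1, c1) to (e, c), so the ICER sequence fails to increase
                if (c2 - c1) * (e - e1) >= (c - c1) * (e2 - e1):
                    hull.pop()
                else:
                    break
            else:
                break
        hull.append((name, c, e))
    return hull

frontier = efficient_frontier([
    ("A", 0, 0), ("B", 100, 10), ("C", 150, 5),
    ("X", 180, 10.5), ("D", 200, 12),
])
names = [o[0] for o in frontier]   # C is dominated by B; X is extended-dominated
```

    The net-monetary-benefit view the authors exploit is equivalent: an option lies on this hull exactly when it maximizes NMB = lambda * effect - cost for some willingness-to-pay lambda >= 0.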

  5. An Efficient, Non-iterative Method of Identifying the Cost-Effectiveness Frontier

    PubMed Central

    Suen, Sze-chuan; Goldhaber-Fiebert, Jeremy D.

    2015-01-01

    Cost-effectiveness analysis aims to identify treatments and policies that maximize benefits subject to resource constraints. However, the conventional process of identifying the efficient frontier (i.e., the set of potentially cost-effective options) can be algorithmically inefficient, especially when considering a policy problem with many alternative options or when performing an extensive suite of sensitivity analyses for which the efficient frontier must be found for each. Here, we describe an alternative one-pass algorithm that is conceptually simple, easier to implement, and potentially faster for situations that challenge the conventional approach. Our algorithm accomplishes this by exploiting the relationship between the net monetary benefit and the cost-effectiveness plane. To facilitate further evaluation and use of this approach, we additionally provide scripts in R and Matlab that implement our method and can be used to identify efficient frontiers for any decision problem. PMID:25926282

  6. Financing Lifelong Learning for All: An International Perspective. Working Paper.

    ERIC Educational Resources Information Center

    Burke, Gerald

    Recent international discussions provide information on various countries' responses to lifelong learning, including the following: (1) existing unmet needs and emerging needs for education and training; (2) funds required compared with what was provided; and (3) methods for acquiring additional funds, among them efficiency measures leading to…

  7. An advanced probabilistic structural analysis method for implicit performance functions

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.; Millwater, H. R.; Cruse, T. A.

    1989-01-01

    In probabilistic structural analysis, the performance or response functions usually are implicitly defined and must be solved by numerical analysis methods such as finite element methods. In such cases, the most commonly used probabilistic analysis tool is the mean-based, second-moment method which provides only the first two statistical moments. This paper presents a generalized advanced mean value (AMV) method which is capable of establishing the distributions to provide additional information for reliability design. The method requires slightly more computations than the second-moment method but is highly efficient relative to the other alternative methods. In particular, the examples show that the AMV method can be used to solve problems involving non-monotonic functions that result in truncated distributions.

  8. Prediction of noise field of a propfan at angle of attack

    NASA Technical Reports Server (NTRS)

    Envia, Edmane

    1991-01-01

    A method for predicting the noise field of a propfan operating at an angle of attack to the oncoming flow is presented. The method takes advantage of the high-blade-count of the advanced propeller designs to provide an accurate and efficient formula for predicting their noise field. The formula, which is written in terms of the Airy function and its derivative, provides a very attractive alternative to the use of numerical integration. A preliminary comparison shows rather favorable agreement between the predictions from the present method and the experimental data.

  9. Efficient reliability analysis of structures with the rotational quasi-symmetric point- and the maximum entropy methods

    NASA Astrophysics Data System (ADS)

    Xu, Jun; Dang, Chao; Kong, Fan

    2017-10-01

    This paper presents a new method for efficient structural reliability analysis. In this method, a rotational quasi-symmetric point method (RQ-SPM) is proposed for evaluating the fractional moments of the performance function. Then, the derivation of the performance function's probability density function (PDF) is carried out based on the maximum entropy method in which constraints are specified in terms of fractional moments. In this regard, the probability of failure can be obtained by a simple integral over the performance function's PDF. Six examples, including a finite element-based reliability analysis and a dynamic system with strong nonlinearity, are used to illustrate the efficacy of the proposed method. All the computed results are compared with those by Monte Carlo simulation (MCS). It is found that the proposed method can provide very accurate results with low computational effort.

  10. n-Dimensional Discrete Cat Map Generation Using Laplace Expansions.

    PubMed

    Wu, Yue; Hua, Zhongyun; Zhou, Yicong

    2016-11-01

    Different from existing methods that use matrix multiplications and have high computational complexity, this paper proposes an efficient generation method for n-dimensional ([Formula: see text]) Cat maps using Laplace expansions. New parameters are also introduced to control the spatial configurations of the [Formula: see text] Cat matrix. Thus, the proposed method provides an efficient way to mix the dynamics of all dimensions at one time. To investigate its implementations and applications, we further introduce a fast implementation algorithm of the proposed method with time complexity O(n^4) and a pseudorandom number generator using the Cat map generated by the proposed method. The experimental results show that, compared with existing generation methods, the proposed method has a larger parameter space and a simpler algorithm, generates [Formula: see text] Cat matrices with a lower inner correlation, and thus yields more random and unpredictable outputs of [Formula: see text] Cat maps.
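For background, the classic 2D Arnold cat map that n-dimensional generation methods generalize can be sketched in a few lines; this is illustrative only and is not the paper's Laplace-expansion construction:

```python
# The 2D Arnold cat map applies the matrix [[1, 1], [1, 2]] to a point on
# an n x n integer grid, modulo n. It is a bijection on the grid (hence
# usable for permutation/scrambling) and every orbit is periodic.
def cat_map_2d(x, y, n, iterations=1):
    for _ in range(iterations):
        x, y = (x + y) % n, (x + 2 * y) % n
    return x, y
```

Because the matrix has determinant 1, the map permutes the grid points; on a 5 x 5 grid, for example, every point returns to itself after 10 iterations.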

  11. Increased glycosylation efficiency of recombinant proteins in Escherichia coli by auto-induction.

    PubMed

    Ding, Ning; Yang, Chunguang; Sun, Shenxia; Han, Lichi; Ruan, Yao; Guo, Longhua; Hu, Xuejun; Zhang, Jianing

    2017-03-25

    Escherichia coli cells have been considered promising hosts for producing N-glycosylated proteins since the successful production of N-glycosylated protein in E. coli with the pgl (N-linked protein glycosylation) locus from Campylobacter jejuni. However, one hurdle in producing N-glycosylated proteins at large scale in E. coli is inefficient glycosylation. In this study, we developed a strategy for the production of N-glycosylated proteins with high efficiency via an optimized auto-induction method. The 10th human fibronectin type III domain (FN3) was engineered with the native glycosylation sequon DFNRSK and the optimized DQNAT sequon at the C-terminus with a flexible linker as acceptor protein models. The resulting glycosylation efficiencies were confirmed by Western blots with anti-FLAG M1 antibody. Increased glycosylation efficiency was obtained by changing from conventional IPTG induction to the auto-induction method, which increased the glycosylation efficiencies from 60% and 75% up to 90% and 100%, respectively. Moreover, when the glycosylation sequon was inserted into the loop of FN3 (an acceptor sequon with a local structural conformation), the glycosylation efficiency was increased from 35% to 80% by our optimized auto-induction procedures. To test the general applicability of the optimized auto-induction method, the reconstituted lsg locus from Haemophilus influenzae and PglB from C. jejuni were utilized, and this led to 100% glycosylation efficiency. Our studies provide quantitative evidence that the optimized auto-induction method will facilitate the large-scale production of pure exogenous N-glycosylated proteins in E. coli cells. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Organocatalytic sequential α-amination/Corey-Chaykovsky reaction of aldehydes: a high yield synthesis of 4-hydroxypyrazolidine derivatives.

    PubMed

    Kumar, B Senthil; Venkataramasubramanian, V; Sudalai, Arumugam

    2012-05-18

    A tandem reaction of in situ generated α-amino aldehydes with dimethyloxosulfonium methylide under Corey-Chaykovsky reaction conditions proceeds efficiently to give 4-hydroxypyrazolidine derivatives in high yields with excellent enantio- and diastereoselectivities. This organocatalytic sequential method also provides efficient access to anti-1,2-amino alcohols, structural subunits present in several bioactive molecules.

  13. Geometric multigrid for an implicit-time immersed boundary method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guy, Robert D.; Philip, Bobby; Griffith, Boyce E.

    2014-10-12

    The immersed boundary (IB) method is an approach to fluid-structure interaction that uses Lagrangian variables to describe the deformations and resulting forces of the structure and Eulerian variables to describe the motion and forces of the fluid. Explicit time stepping schemes for the IB method require solvers only for Eulerian equations, for which fast Cartesian grid solution methods are available. Such methods are relatively straightforward to develop and are widely used in practice but often require very small time steps to maintain stability. Implicit-time IB methods permit the stable use of large time steps, but efficient implementations of such methods require significantly more complex solvers that effectively treat both Lagrangian and Eulerian variables simultaneously. Moreover, several different approaches to solving the coupled Lagrangian-Eulerian equations have been proposed, but a complete understanding of this problem is still emerging. This paper presents a geometric multigrid method for an implicit-time discretization of the IB equations. This multigrid scheme uses a generalization of box relaxation that is shown to handle problems in which the physical stiffness of the structure is very large. Numerical examples are provided to illustrate the effectiveness and efficiency of the algorithms described herein. Finally, these tests show that using multigrid as a preconditioner for a Krylov method yields improvements in both robustness and efficiency as compared to using multigrid as a solver. They also demonstrate that with a time step 100–1000 times larger than that permitted by an explicit IB method, the multigrid-preconditioned implicit IB method is approximately 50–200 times more efficient than the explicit method.

  14. Quantitative Method for Simultaneous Analysis of Acetaminophen and 6 Metabolites.

    PubMed

    Lammers, Laureen A; Achterbergh, Roos; Pistorius, Marcel C M; Romijn, Johannes A; Mathôt, Ron A A

    2017-04-01

    Hepatotoxicity after ingestion of high-dose acetaminophen [N-acetyl-para-aminophenol (APAP)] is caused by the metabolites of the drug. To gain more insight into factors influencing susceptibility to APAP hepatotoxicity, quantification of APAP and metabolites is important. A few methods have been developed to simultaneously quantify APAP and its most important metabolites. However, these methods require a comprehensive sample preparation and long run times. The aim of this study was to develop and validate a simplified, but sensitive method for the simultaneous quantification of acetaminophen, the main metabolites acetaminophen glucuronide and acetaminophen sulfate, and 4 Cytochrome P450-mediated metabolites by using liquid chromatography with mass spectrometric (LC-MS) detection. The method was developed and validated for the human plasma, and it entailed a single method for sample preparation, enabling quick processing of the samples followed by an LC-MS method with a chromatographic run time of 9 minutes. The method was validated for selectivity, linearity, accuracy, imprecision, dilution integrity, recovery, process efficiency, ionization efficiency, and carryover effect. The method showed good selectivity without matrix interferences. For all analytes, the mean process efficiency was >86%, and the mean ionization efficiency was >94%. Furthermore, the accuracy was between 90.3% and 112% for all analytes, and the within- and between-run imprecision were <20% for the lower limit of quantification and <14.3% for the middle level and upper limit of quantification. The method presented here enables the simultaneous quantification of APAP and 6 of its metabolites. It is less time consuming than previously reported methods because it requires only a single and simple method for the sample preparation followed by an LC-MS method with a short run time. Therefore, this analytical method provides a useful method for both clinical and research purposes.

  15. Efficient Prediction Structures for H.264 Multi View Coding Using Temporal Scalability

    NASA Astrophysics Data System (ADS)

    Guruvareddiar, Palanivel; Joseph, Biju K.

    2014-03-01

    Prediction structures with "disposable view components based" hierarchical coding have been proven to be efficient for H.264 multi view coding. Though these prediction structures along with the QP cascading schemes provide superior compression efficiency when compared to the traditional IBBP coding scheme, the temporal scalability requirements of the bit stream could not be met to the fullest. On the other hand, a fully scalable bit stream, obtained by "temporal identifier based" hierarchical coding, provides a number of advantages including bit rate adaptation and improved error resilience, but falls short in compression efficiency when compared to the former scheme. In this paper it is proposed to combine the two approaches such that a fully scalable bit stream can be realized with minimal reduction in compression efficiency when compared to state-of-the-art "disposable view components based" hierarchical coding. Simulation results show that the proposed method enables full temporal scalability with a maximum BDPSNR reduction of only 0.34 dB. A novel method is also proposed for identifying the temporal identifier of legacy H.264/AVC base layer packets. Simulation results also show that this enables the enhancement views to be extracted at a lower frame rate (1/2 or 1/4 of the base view) with an average extraction time per view component of only 0.38 ms.

  16. C Language Integrated Production System, Ada Version

    NASA Technical Reports Server (NTRS)

    Culbert, Chris; Riley, Gary; Savely, Robert T.; Melebeck, Clovis J.; White, Wesley A.; Mcgregor, Terry L.; Ferguson, Melisa; Razavipour, Reza

    1992-01-01

    CLIPS/Ada provides capabilities of CLIPS v4.3 but uses Ada as source language for CLIPS executable code. Implements forward-chaining rule-based language. Program contains inference engine and language syntax providing framework for construction of expert-system program. Also includes features for debugging application program. Based on Rete algorithm which provides efficient method for performing repeated matching of patterns. Written in Ada.
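A naive forward-chaining loop illustrates the rule-based inference that CLIPS implements; the Rete algorithm improves on this by caching partial matches in a network instead of re-scanning all rules every cycle. A minimal sketch with hypothetical facts and rules, not CLIPS syntax:

```python
# Naive forward chaining: repeatedly fire any rule whose premises are all
# satisfied, adding its conclusion as a new fact, until a fixed point.
def forward_chain(facts, rules):
    """facts: set of strings; rules: list of (premises, conclusion)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts
```

This re-matches every rule on every pass; Rete avoids that cost by propagating only the changes to working memory through a precompiled match network, which is what makes repeated pattern matching efficient in large rule bases.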

  17. Identification of suitable genes contributes to lung adenocarcinoma clustering by multiple meta-analysis methods.

    PubMed

    Yang, Ze-Hui; Zheng, Rui; Gao, Yuan; Zhang, Qiang

    2016-09-01

    With the widespread application of high-throughput technology, numerous meta-analysis methods have been proposed for differential expression profiling across multiple studies. We identified the suitable differentially expressed (DE) genes that contributed to lung adenocarcinoma (ADC) clustering based on seven popular meta-analysis methods. Seven microarray expression profiles of ADC and normal controls were extracted from the ArrayExpress database. Bioconductor was used for preliminary data preprocessing. Then, DE genes across multiple studies were identified. Hierarchical clustering was applied to compare the classification performance for microarray data samples. The classification efficiency was compared based on accuracy, sensitivity and specificity. Across seven datasets, 573 ADC cases and 222 normal controls were collected. After filtering out unexpressed and noninformative genes, 3688 genes remained for further analysis. The classification efficiency analysis showed that DE genes identified by the sum-of-ranks method separated ADC from normal controls with the best accuracy, sensitivity and specificity of 0.953, 0.969 and 0.932, respectively. The gene set with the highest classification accuracy mainly participated in the regulation of response to external stimulus (P = 7.97E-04), cyclic nucleotide-mediated signaling (P = 0.01), regulation of cell morphogenesis (P = 0.01) and regulation of cell proliferation (P = 0.01). Evaluating the classification efficiency of DE genes identified by different meta-analysis methods provides a new perspective on choosing a suitable method for a given application. Different meta-analysis methods have different strengths, so the choice of method should be weighed carefully for each particular study. © 2015 John Wiley & Sons Ltd.
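The sum-of-ranks idea can be sketched directly: rank genes within each study by strength of differential-expression evidence, sum the ranks across studies, and select the genes with the smallest totals. A minimal sketch with hypothetical scores, not the study's data or exact implementation:

```python
# Sum-of-ranks meta-analysis: rank 1 = strongest evidence within a study;
# genes with the smallest rank sum across studies are the top candidates.
def rank_sum(scores_by_study):
    """scores_by_study: list of {gene: score}; higher score = stronger DE."""
    totals = {}
    for scores in scores_by_study:
        ordered = sorted(scores, key=scores.get, reverse=True)
        for rank, gene in enumerate(ordered, start=1):
            totals[gene] = totals.get(gene, 0) + rank
    return sorted(totals, key=totals.get)  # best (smallest rank sum) first
```

Rank-based combination is robust to scale differences between platforms, which is one reason rank methods perform well when merging heterogeneous microarray studies.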

  18. Full coverage of perovskite layer onto ZnO nanorods via a modified sequential two-step deposition method for efficiency enhancement in perovskite solar cells

    NASA Astrophysics Data System (ADS)

    Ruankham, Pipat; Wongratanaphisan, Duangmanee; Gardchareon, Atcharawon; Phadungdhitidhada, Surachet; Choopun, Supab; Sagawa, Takashi

    2017-07-01

    Full coverage of the perovskite layer on ZnO nanorod substrates with fewer pinholes is crucial for achieving high-efficiency perovskite solar cells. In this work, a two-step sequential deposition method is modified to achieve appropriate properties of the perovskite (MAPbI3) film. Surface treatments of the perovskite layer and its precursor have been systematically performed and their morphologies investigated. Pre-wetting lead iodide (PbI2) and letting it dry before reaction with methylammonium iodide (MAI) provides better coverage of the perovskite film on the ZnO nanorod substrate than deposition without any treatment. An additional MAI deposition followed by a toluene drop-casting step on the perovskite film is also found to increase the coverage and enhance the transformation of PbI2 to MAPbI3. These lead to a longer charge-carrier lifetime, raising the power conversion efficiency (PCE) from 1.21% to 3.05%. The modified method could also be applied to a more complex ZnO nanorods/TiO2 nanoparticles substrate, where an enhancement in PCE to 3.41% is observed. These results imply that the introduced method provides a simple way to obtain full coverage and better transformation to the MAPbI3 phase, enhancing the performance of perovskite solar cells.

  19. Configuration interaction singles natural orbitals: An orbital basis for an efficient and size intensive multireference description of electronic excited states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shu, Yinan; Levine, Benjamin G., E-mail: levine@chemistry.msu.edu; Hohenstein, Edward G.

    2015-01-14

    Multireference quantum chemical methods, such as the complete active space self-consistent field (CASSCF) method, have long been the state of the art for computing regions of potential energy surfaces (PESs) where complex, multiconfigurational wavefunctions are required, such as near conical intersections. Herein, we present a computationally efficient alternative to the widely used CASSCF method based on a complete active space configuration interaction (CASCI) expansion built from the state-averaged natural orbitals of configuration interaction singles calculations (CISNOs). This CISNO-CASCI approach is shown to predict vertical excitation energies of molecules with closed-shell ground states similar to those predicted by state-averaged (SA)-CASSCF in many cases and to provide an excellent reference for a perturbative treatment of dynamic electron correlation. Absolute energies computed at the CISNO-CASCI level are found to be variationally superior, on average, to other CASCI methods. Unlike SA-CASSCF, CISNO-CASCI provides vertical excitation energies which are both size intensive and size consistent, thus suggesting that CISNO-CASCI would be preferable to SA-CASSCF for the study of systems with multiple excitable centers. The fact that SA-CASSCF and some other CASCI methods do not provide a size intensive/consistent description of excited states is attributed to changes in the orbitals that occur upon introduction of non-interacting subsystems. Finally, CISNO-CASCI is found to provide a suitable description of the PES surrounding a biradicaloid conical intersection in ethylene.

  20. Camouflage target reconnaissance based on hyperspectral imaging technology

    NASA Astrophysics Data System (ADS)

    Hua, Wenshen; Guo, Tong; Liu, Xun

    2015-08-01

    Efficient camouflaged-target reconnaissance technology has a great influence on modern warfare. Hyperspectral images provide a large spectral range and high spectral resolution, which are invaluable in discriminating between camouflaged targets and backgrounds. Hyperspectral target detection and classification techniques are used to achieve single-class and multi-class camouflaged-target reconnaissance, respectively. Constrained energy minimization (CEM), a widely used algorithm in hyperspectral target detection, is employed for single-class camouflaged-target reconnaissance. Then, a support vector machine (SVM), a classification method, is applied for multi-class camouflaged-target reconnaissance. Experiments have been conducted to demonstrate the efficiency of the proposed method.
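The CEM detector mentioned above has a closed form: w = R⁻¹d / (dᵀR⁻¹d), where R is the sample autocorrelation matrix of the image spectra and d is the target signature; the filter output wᵀx equals 1 for a pixel matching d while minimizing the average output energy over the scene. A sketch with synthetic data, not the authors' experimental setup:

```python
# Constrained energy minimization (CEM) filter: w = R^-1 d / (d^T R^-1 d).
import numpy as np

def cem_detector(pixels, target):
    """pixels: (num_pixels, num_bands) array; target: (num_bands,) signature.
    Returns one detection score per pixel (1.0 for an exact target match)."""
    R = pixels.T @ pixels / pixels.shape[0]   # sample autocorrelation matrix
    rinv_d = np.linalg.solve(R, target)       # R^-1 d without explicit inverse
    w = rinv_d / (target @ rinv_d)            # enforce the constraint w^T d = 1
    return pixels @ w                         # filter output per pixel
```

The unit-response constraint wᵀd = 1 combined with energy minimization is what suppresses background pixels while passing the target signature through unattenuated.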

  1. Efficient 3D movement-based kernel density estimator and application to wildlife ecology

    USGS Publications Warehouse

    Tracey-PR, Jeff; Sheppard, James K.; Lockwood, Glenn K.; Chourasia, Amit; Tatineni, Mahidhar; Fisher, Robert N.; Sinkovits, Robert S.

    2014-01-01

    We describe an efficient implementation of a 3D movement-based kernel density estimator for determining animal space use from discrete GPS measurements. This new method provides more accurate results, particularly for species that make large excursions in the vertical dimension. The downside of this approach is that it is much more computationally expensive than simpler, lower-dimensional models. Through a combination of code restructuring, parallelization and performance optimization, we were able to reduce the time to solution by up to a factor of 1000, thereby greatly improving the applicability of the method.

  2. An efficient formulation of robot arm dynamics for control and computer simulation

    NASA Astrophysics Data System (ADS)

    Lee, C. S. G.; Nigam, R.

    This paper describes an efficient formulation of the dynamic equations of motion of industrial robots based on the Lagrange formulation of d'Alembert's principle. This formulation, as applied to a PUMA robot arm, results in a set of closed form second order differential equations with cross product terms. They are not as efficient in computation as those formulated by the Newton-Euler method, but provide a better analytical model for control analysis and computer simulation. Computational complexities of this dynamic model together with other models are tabulated for discussion.

  3. Spectral compression algorithms for the analysis of very large multivariate images

    DOEpatents

    Keenan, Michael R.

    2007-10-16

    A method for spectrally compressing data sets enables the efficient analysis of very large multivariate images. The spectral compression algorithm uses a factored representation of the data that can be obtained from Principal Components Analysis or other factorization technique. Furthermore, a block algorithm can be used for performing common operations more efficiently. An image analysis can be performed on the factored representation of the data, using only the most significant factors. The spectral compression algorithm can be combined with a spatial compression algorithm to provide further computational efficiencies.
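The factored representation described above can be sketched with a truncated PCA computed via the SVD: the data matrix (pixels x channels) is approximated by a low-rank product of scores and loadings, and subsequent analysis operates on the factors. This illustrates the idea only; it is not the patented block algorithm:

```python
# Truncated PCA via SVD: data ~ mean + scores @ loadings, keeping only the
# most significant factors (the spectral compression).
import numpy as np

def pca_compress(data, num_factors):
    """Return (mean, scores, loadings) with scores @ loadings ~ data - mean."""
    mean = data.mean(axis=0)
    u, s, vt = np.linalg.svd(data - mean, full_matrices=False)
    scores = u[:, :num_factors] * s[:num_factors]   # (num_samples, k)
    loadings = vt[:num_factors]                     # (k, num_channels)
    return mean, scores, loadings
```

Storing only the k factors reduces both memory and the cost of downstream operations, since matrix products against the data can be reassociated through the thin factors.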

  4. Broadband metamaterial lens antennas with special properties by controlling both refractive-index distribution and feed directivity

    NASA Astrophysics Data System (ADS)

    Ma, Qian; Shi, Chuan Bo; Chen, Tian Yi; Qing Qi, Mei; Li, Yun Bo; Cui, Tie Jun

    2018-04-01

    A new method is proposed to design gradient refractive-index metamaterial lens antennas by optimizing both the refractive-index distribution of the lens and the feed directivity. Compared with conventional design methods, source optimization provides a new degree of freedom to control aperture fields effectively. To demonstrate this method, two lenses with special properties are designed to emit high-efficiency plane waves and fan-shaped beams, respectively. Both lenses perform well over a wide frequency band from 12 to 18 GHz, verifying the validity of the proposed method. The plane-wave-emitting lens achieves a high aperture efficiency of 75%, and the fan-beam lens achieves a high gain of 15 dB over a broad bandwidth. The experimental results are in good agreement with the design targets and full-wave simulations.

  5. Toward better drug repositioning: prioritizing and integrating existing methods into efficient pipelines.

    PubMed

    Jin, Guangxu; Wong, Stephen T C

    2014-05-01

    Recycling old drugs, rescuing shelved drugs and extending patents' lives make drug repositioning an attractive form of drug discovery. Drug repositioning accounts for approximately 30% of the newly US Food and Drug Administration (FDA)-approved drugs and vaccines in recent years. The prevalence of drug-repositioning studies has resulted in a variety of innovative computational methods for the identification of new opportunities for the use of old drugs. Questions often arise from customizing or optimizing these methods into efficient drug-repositioning pipelines for alternative applications. It requires a comprehensive understanding of the available methods gained by evaluating both biological and pharmaceutical knowledge and the elucidated mechanism-of-action of drugs. Here, we provide guidance for prioritizing and integrating drug-repositioning methods for specific drug-repositioning pipelines. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. A Mixed-Methods Research Framework for Healthcare Process Improvement.

    PubMed

    Bastian, Nathaniel D; Munoz, David; Ventura, Marta

    2016-01-01

    The healthcare system in the United States is spiraling out of control due to ever-increasing costs without significant improvements in quality, access to care, satisfaction, and efficiency. Efficient workflow is paramount to improving healthcare value while maintaining the utmost standards of patient care and provider satisfaction in high stress environments. This article provides healthcare managers and quality engineers with a practical healthcare process improvement framework to assess, measure and improve clinical workflow processes. The proposed mixed-methods research framework integrates qualitative and quantitative tools to foster the improvement of processes and workflow in a systematic way. The framework consists of three distinct phases: 1) stakeholder analysis, 2a) survey design, 2b) time-motion study, and 3) process improvement. The proposed framework is applied to the pediatric intensive care unit of the Penn State Hershey Children's Hospital. The implementation of this methodology led to identification and categorization of different workflow tasks and activities into both value-added and non-value added in an effort to provide more valuable and higher quality patient care. Based upon the lessons learned from the case study, the three-phase methodology provides a better, broader, leaner, and holistic assessment of clinical workflow. The proposed framework can be implemented in various healthcare settings to support continuous improvement efforts in which complexity is a daily element that impacts workflow. We proffer a general methodology for process improvement in a healthcare setting, providing decision makers and stakeholders with a useful framework to help their organizations improve efficiency. Published by Elsevier Inc.

  7. Rapid and effective processing of blood specimens for diagnostic PCR using filter paper and Chelex-100.

    PubMed Central

    Polski, J M; Kimzey, S; Percival, R W; Grosso, L E

    1998-01-01

    AIM: To provide a more efficient method for isolating DNA from peripheral blood for use in diagnostic DNA mutation analysis. METHODS: The use of blood impregnated filter paper and Chelex-100 in DNA isolation was evaluated and compared with standard DNA isolation techniques. RESULTS: In polymerase chain reaction (PCR) based assays of five point mutations, identical results were obtained with DNA isolated routinely from peripheral blood and isolated using the filter paper and Chelex-100 method. CONCLUSION: In the clinical setting, this method provides a useful alternative to conventional DNA isolation. It is easily implemented and inexpensive, and provides sufficient, stable DNA for multiple assays. The potential for specimen contamination is reduced because most of the steps are performed in a single microcentrifuge tube. In addition, this method provides for easy storage and transport of samples from the point of acquisition. PMID:9893748

  8. Quantitative method of medication system interface evaluation.

    PubMed

    Pingenot, Alleene Anne; Shanteau, James; Pingenot, James D F

    2007-01-01

    The objective of this study was to develop a quantitative method of evaluating the user interface for medication system software. A detailed task analysis provided a description of user goals and essential activity. A structural fault analysis was used to develop a detailed description of the system interface. Nurses experienced with use of the system under evaluation provided estimates of failure rates for each point in this simplified fault tree. Means of the estimated failure rates provided quantitative data for the fault analysis. The authors note that, although failures of steps in the program were frequent, participants reported numerous methods of working around these failures, so that overall system failure was rare. However, frequent process failure can affect the time required for processing medications, making a system inefficient. This method of interface analysis, called the Software Efficiency Evaluation and Fault Identification Method, provides quantitative information with which prototypes can be compared and problems within an interface identified.
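The way per-step estimates roll up is simple when the task is modeled as a series fault tree with independent step failures: the task succeeds only if every step succeeds. A sketch with hypothetical step failure rates, not values elicited in the study:

```python
# Series fault tree under independence: P(task fails) = 1 - prod(1 - p_i),
# where p_i is the estimated failure rate of step i.
def system_failure_probability(step_failure_rates):
    p_success = 1.0
    for p in step_failure_rates:
        p_success *= 1.0 - p
    return 1.0 - p_success
```

Workarounds, as reported by the nurses in the study, act like parallel paths around a failed step and would lower the effective per-step rates fed into such a model.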

  9. Conductive polymer/fullerene blend thin films with honeycomb framework for transparent photovoltaic application

    DOEpatents

    Cotlet, Mircea; Wang, Hsing-Lin; Tsai, Hsinhan; Xu, Zhihua

    2015-04-21

    Optoelectronic devices and thin-film semiconductor compositions and methods for making same are disclosed. The methods provide for the synthesis of the disclosed composition. The thin-film semiconductor compositions disclosed herein have a unique configuration that exhibits efficient photo-induced charge transfer and high transparency to visible light.

  10. Item Pocket Method to Allow Response Review and Change in Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Han, Kyung T.

    2013-01-01

    Most computerized adaptive testing (CAT) programs do not allow test takers to review and change their responses because it could seriously deteriorate the efficiency of measurement and make tests vulnerable to manipulative test-taking strategies. Several modified testing methods have been developed that provide restricted review options while…

  11. Global challenges/chemistry solutions: Promoting personal safety and national security

    USDA-ARS?s Scientific Manuscript database

    Joe Alper: Can you provide a little background about why there is a need for this type of assay? Mark Carter: Ricin is considered a biosecurity threat agent. A more efficient detection method was required. Joe Alper: How are these types of assays done today, or are current methods unsuitable for ...

  12. A cost comparison of five midstory removal methods

    Treesearch

    Brian G. Bailey; Michael R. Saunders; Zachary E. Lowe

    2011-01-01

    Within mature hardwood forests, midstory removal treatments have been shown to provide the adequate light and growing space needed for early establishment of intermediate-shade-tolerant species. As the method gains popularity, it is worthwhile to determine what manner of removal is most cost-efficient. This study compared five midstory removal treatments across 10...

  13. Investigation of methods and equipment for compaction of composite mixtures during their granulation

    NASA Astrophysics Data System (ADS)

    Shkarpetkin, E. A.; Osokin, A. V.; Sabaev, V. G.

    2018-03-01

    The article presents a literature analysis of material-compaction methods, together with analytical and experimental research on the design of a roller compacting device and the determination of its operating modes, which provide efficient preliminary compaction of a composite mixture based on technogenic materials.

  14. Hierarchical Adaptive Regression Kernels for Regression with Functional Predictors.

    PubMed

    Woodard, Dawn B; Crainiceanu, Ciprian; Ruppert, David

    2013-01-01

    We propose a new method for regression using a parsimonious and scientifically interpretable representation of functional predictors. Our approach is designed for data that exhibit features such as spikes, dips, and plateaus whose frequency, location, size, and shape vary stochastically across subjects. We propose Bayesian inference of the joint functional and exposure models, and give a method for efficient computation. We contrast our approach with existing state-of-the-art methods for regression with functional predictors, and show that our method is more effective and efficient for data that include features occurring at varying locations. We apply our methodology to a large and complex dataset from the Sleep Heart Health Study, to quantify the association between sleep characteristics and health outcomes. Software and technical appendices are provided in online supplemental materials.

  15. A multilevel finite element method for Fredholm integral eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Xie, Hehu; Zhou, Tao

    2015-12-01

    In this work, we propose a multigrid finite element (MFE) method for solving Fredholm integral eigenvalue problems. The main motivation for such studies is to compute the Karhunen-Loève expansions of random fields, which play an important role in applications of uncertainty quantification. In our MFE framework, solving the eigenvalue problem is converted into a series of integral iterations together with an eigenvalue solve on the coarsest mesh. Then, any existing efficient integration scheme can be used for the associated integration process. The error estimates are provided, and the computational complexity is analyzed. It is noticed that the total computational work of our method is comparable with a single integration step in the finest mesh. Several numerical experiments are presented to validate the efficiency of the proposed numerical method.
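    As a toy illustration of the discretize-and-iterate idea behind such Fredholm eigenvalue computations (generic Nyström discretization with power iteration, not the paper's multilevel scheme), the sketch below treats the Brownian-motion covariance kernel min(s, t), whose Karhunen-Loève eigenvalues are known in closed form. All numerical parameters are illustrative assumptions.

```python
import math

def nystrom_top_eigenvalue(kernel, n=200, iters=100):
    """Discretize the integral operator (Tu)(s) = int_0^1 K(s,t) u(t) dt
    with the midpoint rule and find its largest eigenvalue by power
    iteration (the positive kernel guarantees a dominant Perron pair)."""
    h = 1.0 / n
    nodes = [(i + 0.5) * h for i in range(n)]
    A = [[kernel(s, t) * h for t in nodes] for s in nodes]
    u = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        v = [sum(A[i][j] * u[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in v)          # infinity-norm estimate
        u = [x / lam for x in v]
    return lam

# Brownian-motion covariance kernel min(s, t); its Karhunen-Loeve
# eigenvalues are 1/((k - 1/2)^2 * pi^2), so the largest is 4/pi^2.
lam1 = nystrom_top_eigenvalue(min)
print(lam1)  # close to 4 / pi**2 ~= 0.4053
```

    The multilevel method in the record accelerates exactly this kind of computation by doing the expensive integrations on fine meshes and the eigenvalue solve only on the coarsest mesh.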

  16. Efficient method of image edge detection based on FSVM

    NASA Astrophysics Data System (ADS)

    Cai, Aiping; Xiong, Xiaomei

    2013-07-01

    For efficient detection of object edges in digital images, this paper studies traditional methods and an algorithm based on SVM. Analysis shows that the Canny edge detection algorithm produces some pseudo-edges and has poor anti-noise capability. To provide a reliable edge extraction method, a new detection algorithm based on FSVM is proposed. It contains several steps: first, the classifier is trained on samples, with a different membership function assigned to each sample. Then, a new training sample set is formed by increasing the punishment of some misclassified sub-samples, and the new FSVM classification model is used to train and test them. Finally, the edges of the object image are extracted using the model. Experimental results show that good edge detection images are obtained, and noise-addition experiments show that the method has good anti-noise capability.

  17. Comparison of Chromatic Dispersion Compensation Method Efficiency for 10 Gbit/s RZ-OOK and NRZ-OOK WDM-PON Transmission Systems

    NASA Astrophysics Data System (ADS)

    Alsevska, A.; Dilendorfs, V.; Spolitis, S.; Bobrovs, Vj.

    2017-12-01

    In the paper, the authors compare the efficiency of two physical dispersion compensation methods for single-channel and 8-channel WDM fibre-optical transmission systems using return-to-zero (RZ) and non-return-to-zero (NRZ) line codes for operation within optical C-band frequencies by means of computer simulations. As one of the most important destructive effects in fibre-optical transmission systems (FOTS) is chromatic dispersion (CD), it is very important to reduce its negative effect on a transmitted signal. The dispersion compensation methods implemented in the research were dispersion-compensating fibre (DCF) and fibre Bragg grating (FBG). The main goal of the paper was to find out which dispersion compensation method (DCF or FBG) provided the highest performance increase for a fibre-optical transmission system and the longest transmission distance when dispersion compensation was implemented at different locations in the fibre-optical line while RZ or NRZ line codes were used. The reference point of signal quality for all measurements, obtained at the receiver, was BER < 10^-12.
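    For orientation, the basic sizing rule behind DCF-based compensation is that the compensating fibre's negative dispersion must cancel the dispersion accumulated over the link. A minimal sketch with assumed fibre parameters (illustrative values, not the paper's):

```python
def dcf_length_km(d_smf, l_smf_km, d_dcf):
    """Length of dispersion-compensating fibre needed so that the total
    accumulated chromatic dispersion of the link is zero:
        D_smf * L_smf + D_dcf * L_dcf = 0
    Dispersion parameters are in ps/(nm*km), lengths in km."""
    return -d_smf * l_smf_km / d_dcf

# Illustrative values (assumed, not from the paper): standard SMF with
# D = +17 ps/(nm*km) over 40 km, DCF with D = -100 ps/(nm*km).
print(dcf_length_km(17.0, 40.0, -100.0))  # 6.8 km of DCF
```

    Whether that DCF segment is placed before, after, or split around the transmission fibre is exactly the placement question the paper studies by simulation.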

  18. Tripartite equilibrium strategy for a carbon tax setting problem in air passenger transport.

    PubMed

    Xu, Jiuping; Qiu, Rui; Tao, Zhimiao; Xie, Heping

    2018-03-01

    Carbon emissions from air passenger transport have become increasingly serious with the rapid development of the aviation industry. Combined with a tripartite equilibrium strategy, this paper proposes a multi-level multi-objective model for an air passenger transport carbon tax setting problem (CTSP) among an international organization, an airline, and passengers under fuzzy uncertainty. The proposed model is simplified to an equivalent crisp model by a weighted sum procedure and a Karush-Kuhn-Tucker (KKT) transformation method. To solve the equivalent crisp model, a fuzzy logic controlled genetic algorithm with entropy-Boltzmann selection (FLC-GA with EBS) is designed as an integrated solution method. Then, a numerical example is provided to demonstrate the practicality and efficiency of the optimization method. Results show that the carbon tax mechanism is an important part of air passenger transport carbon emission mitigation and thus should be effectively applied to air passenger transport. These results also indicate that the proposed method can provide efficient ways of mitigating carbon emissions for air passenger transport, and can therefore assist decision makers in formulating relevant strategies under multiple scenarios.

  19. Community Landscapes: An Integrative Approach to Determine Overlapping Network Module Hierarchy, Identify Key Nodes and Predict Network Dynamics

    PubMed Central

    Kovács, István A.; Palotai, Robin; Szalay, Máté S.; Csermely, Peter

    2010-01-01

    Background Network communities help the functional organization and evolution of complex networks. However, the development of a method that is both fast and accurate, and that provides modular overlaps and partitions of a heterogeneous network, has proven to be rather difficult. Methodology/Principal Findings Here we introduce the novel concept of ModuLand, an integrative method family determining overlapping network modules as hills of an influence function-based, centrality-type community landscape, and including several widely used modularization methods as special cases. As various adaptations of the method family, we developed several algorithms, which provide an efficient analysis of weighted and directed networks, and (1) determine pervasively overlapping modules with high resolution; (2) uncover a detailed hierarchical network structure allowing an efficient, zoom-in analysis of large networks; (3) allow the determination of key network nodes and (4) help to predict network dynamics. Conclusions/Significance The concept opens a wide range of possibilities to develop new approaches and applications including network routing, classification, comparison and prediction. PMID:20824084

  20. A guide for roadside vegetation management

    DOT National Transportation Integrated Search

    2009-10-01

    Implementing a comprehensive turf management program significantly reduces the overall cost of managing the vegetation along state roadways. This guide provides methods for efficiently and effectively managing the activities that will achieve and mai...

  1. Active magnetic regenerator

    DOEpatents

    Barclay, John A.; Steyert, William A.

    1982-01-01

    The disclosure is directed to an active magnetic regenerator apparatus and method. Brayton, Stirling, Ericsson, and Carnot cycles and the like may be utilized in an active magnetic regenerator to provide efficient refrigeration over relatively large temperature ranges.

  2. A tool for simulating parallel branch-and-bound methods

    NASA Astrophysics Data System (ADS)

    Golubeva, Yana; Orlov, Yury; Posypkin, Mikhail

    2016-01-01

    The Branch-and-Bound method is known as one of the most powerful but very resource-consuming global optimization methods. Parallel and distributed computing can efficiently cope with this issue. The major difficulty in the parallel B&B method is the need for dynamic load redistribution. Therefore, the design and study of load balancing algorithms is a separate and very important research topic. This paper presents a tool for simulating the parallel Branch-and-Bound method. The simulator allows one to run load balancing algorithms with various numbers of processors, sizes of the search tree, and characteristics of the supercomputer's interconnect, thereby fostering deep study of load distribution strategies. The process of resolution of the optimization problem by the B&B method is replaced by a stochastic branching process. Data exchanges are modeled using the concept of logical time. The user-friendly graphical interface to the simulator provides efficient visualization and convenient performance analysis.

  3. Computing the Baker-Campbell-Hausdorff series and the Zassenhaus product

    NASA Astrophysics Data System (ADS)

    Weyrauch, Michael; Scholz, Daniel

    2009-09-01

    The Baker-Campbell-Hausdorff (BCH) series and the Zassenhaus product are of fundamental importance for the theory of Lie groups and their applications in physics and physical chemistry. Standard methods for the explicit construction of the BCH and Zassenhaus terms yield polynomial representations, which must be translated into the usually required commutator representation. We prove that a new translation proposed recently yields a correct representation of the BCH and Zassenhaus terms. This representation entails fewer terms than the well-known Dynkin-Specht-Wever representation, which is of relevance for practical applications. Furthermore, various methods for the computation of the BCH and Zassenhaus terms are compared, and a new efficient approach for the calculation of the Zassenhaus terms is proposed. Mathematica implementations for the most efficient algorithms are provided together with comparisons of efficiency.
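    A small self-contained check of the BCH formula is possible in the Heisenberg algebra of strictly upper-triangular 3x3 matrices, where the commutator [A, B] is central and the series terminates exactly after the first commutator term. The sketch below (an independent illustration, not the authors' algorithm) verifies log(exp A exp B) = A + B + [A, B]/2, using the fact that exp and log are finite sums for nilpotent matrices:

```python
def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def mat_add(X, Y, a=1.0, b=1.0):
    # Returns a*X + b*Y.
    n = len(X)
    return [[a * X[i][j] + b * Y[i][j] for j in range(n)] for i in range(n)]

I3 = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]

def exp_nilpotent(N):
    # For strictly upper-triangular 3x3 N, N^3 = 0, so exp(N) = I + N + N^2/2.
    return mat_add(mat_add(I3, N), mat_mul(N, N), 1.0, 0.5)

def log_unipotent(M):
    # log(I + N) = N - N^2/2 for nilpotent N with N^3 = 0.
    N = mat_add(M, I3, 1.0, -1.0)
    return mat_add(N, mat_mul(N, N), 1.0, -0.5)

def commutator(A, B):
    return mat_add(mat_mul(A, B), mat_mul(B, A), 1.0, -1.0)

# Heisenberg-algebra generators: [A, B] is central, so the BCH series
# terminates exactly at A + B + [A, B] / 2.
A = [[0.0, 2.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
B = [[0.0, 0.0, 0.0], [0.0, 0.0, 3.0], [0.0, 0.0, 0.0]]

lhs = log_unipotent(mat_mul(exp_nilpotent(A), exp_nilpotent(B)))
rhs = mat_add(mat_add(A, B), commutator(A, B), 1.0, 0.5)
print(lhs == rhs)  # True: BCH truncates after the first commutator here
```

    In a general Lie algebra the series does not terminate, which is why the efficient construction of higher-order BCH and Zassenhaus terms discussed in the record matters in practice.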

  4. Wavepacket dynamics and the multi-configurational time-dependent Hartree approach

    NASA Astrophysics Data System (ADS)

    Manthe, Uwe

    2017-06-01

    Multi-configurational time-dependent Hartree (MCTDH) based approaches are efficient, accurate, and versatile methods for high-dimensional quantum dynamics simulations. Applications range from detailed investigations of polyatomic reaction processes in the gas phase to high-dimensional simulations studying the dynamics of condensed phase systems described by typical solid state physics model Hamiltonians. The present article presents an overview of the different areas of application and provides a comprehensive review of the underlying theory. The concepts and guiding ideas underlying the MCTDH approach and its multi-mode and multi-layer extensions are discussed in detail. The general structure of the equations of motion is highlighted. The representation of the Hamiltonian and the correlated discrete variable representation (CDVR), which provides an efficient multi-dimensional quadrature in MCTDH calculations, are discussed. Methods which facilitate the calculation of eigenstates, the evaluation of correlation functions, and the efficient representation of thermal ensembles in MCTDH calculations are described. Different schemes for the treatment of indistinguishable particles in MCTDH calculations and recent developments towards a unified multi-layer MCTDH theory for systems including bosons and fermions are discussed.

  5. A novel method for transmitting southern rice black-streaked dwarf virus to rice without insect vector.

    PubMed

    Yu, Lu; Shi, Jing; Cao, Lianlian; Zhang, Guoping; Wang, Wenli; Hu, Deyu; Song, Baoan

    2017-08-15

    Southern rice black-streaked dwarf virus (SRBSDV) has spread from southern China to northern Vietnam in the past few years and has severely affected rice production. However, in previous laboratory studies of the traditional SRBSDV transmission method using the natural virus vector, the white-backed planthopper (WBPH, Sogatella furcifera), researchers were frequently confronted with a shortage of viral samples due to the limited life span of infected vectors and rice plants and the low virus acquisition and inoculation efficiency of the vector. Meanwhile, traditional mechanical inoculation of viruses applies only to dicotyledons because of the higher lignin content in the leaves of monocots. Therefore, establishing an efficient and persistent-transmitting model, with a shorter virus transmission time and a higher virus transmission efficiency, for screening novel anti-SRBSDV drugs is an urgent need. In this study, we report a novel method for transmitting SRBSDV to rice using the bud-cutting method. The transmission efficiency of SRBSDV in rice was investigated via the polymerase chain reaction (PCR) method, and the replication of SRBSDV in rice was investigated via proteomics analysis. Rice infected with SRBSDV using the bud-cutting method exhibited symptoms similar to those infected by the WBPH, and the transmission efficiency (>80.00%), determined using the PCR method, and the virus transmission time (30 min) were superior to those achieved by WBPH transmission. Proteomics analysis confirmed that the SRBSDV P1, P2, P3, P4, P5-1, P5-2, P6, P8, P9-1, P9-2, and P10 proteins were present in rice seedlings infected via the bud-cutting method. The results showed that SRBSDV could be successfully transmitted via the bud-cutting method and that infected plants exhibited symptoms similar to those transmitted by the WBPH.
    Therefore, the use of the bud-cutting method to generate a cheap, efficient, and reliable supply of SRBSDV-infected rice seedlings should aid the development of disease control strategies. Meanwhile, this method may also provide a new approach for transmitting other viruses to monocots.

  6. Managing for efficiency in health care: the case of Greek public hospitals.

    PubMed

    Mitropoulos, Panagiotis; Mitropoulos, Ioannis; Sissouras, Aris

    2013-12-01

    This paper evaluates the efficiency of public hospitals with two alternative conceptual models. One model targets resource usage directly to assess production efficiency, while the other model incorporates financial results to assess economic efficiency. Performance analysis of these models was conducted in two stages. In stage one, we utilized data envelopment analysis to obtain the efficiency score of each hospital, while in stage two we took into account the influence of the operational environment on efficiency by regressing those scores on explanatory variables that concern the performance of hospital services. We applied these methods to evaluate 96 general hospitals in the Greek national health system. The results indicate that, although the average efficiency scores in both models have remained relatively stable compared to past assessments, internal changes in hospital performances do exist. This study provides a clear framework for policy implications to increase the overall efficiency of general hospitals.
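    Full DEA solves a linear program per hospital; in the special single-input, single-output case the constant-returns-to-scale (CCR) score reduces to each unit's productivity normalised by the best observed ratio. A minimal sketch of that special case with hypothetical data (not the Greek hospital data, and not the paper's multi-input model):

```python
def ccr_efficiency_single(inputs, outputs):
    """CCR (constant returns to scale) efficiency scores for the special
    case of one input and one output, where the usual LP per unit reduces
    to normalising each unit's productivity by the best observed ratio."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Hypothetical hospitals: beds as the input, treated cases as the output.
beds = [100.0, 200.0, 150.0]
cases = [5000.0, 8000.0, 7500.0]
scores = ccr_efficiency_single(beds, cases)
print([round(s, 2) for s in scores])  # [1.0, 0.8, 1.0]
```

    The second-stage regression described in the record then explains such scores with environmental variables, which the frontier itself cannot capture.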

  7. Extraction efficiency and implications for absolute quantitation of propranolol in mouse brain, liver and kidney thin tissue sections using droplet-based liquid microjunction surface sampling-HPLC ESI-MS/MS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kertesz, Vilmos; Weiskittel, Taylor M.; Vavek, Marissa

    Currently, absolute quantitation aspects of droplet-based surface sampling for thin tissue analysis using a fully automated autosampler/HPLC-ESI-MS/MS system are not fully evaluated. Knowledge of extraction efficiency and its reproducibility is required to judge the potential of the method for absolute quantitation of analytes from thin tissue sections. Methods: Adjacent thin tissue sections of propranolol-dosed mouse brain (10-μm-thick), kidney (10-μm-thick) and liver (8-, 10-, 16- and 24-μm-thick) were obtained. Absolute concentration of propranolol was determined in tissue punches from serial sections using standard bulk tissue extraction protocols and subsequent HPLC separations and tandem mass spectrometric analysis. These values were used to determine propranolol extraction efficiency from the tissues with the droplet-based surface sampling approach. Results: Extraction efficiency of propranolol using 10-μm-thick brain, kidney and liver thin tissues using droplet-based surface sampling varied between ~45-63%. Extraction efficiency decreased from ~65% to ~36% with liver thickness increasing from 8 μm to 24 μm. Randomly selecting half of the samples as standards, precision and accuracy of propranolol concentrations obtained for the other half of samples as quality control metrics were determined. Resulting precision (±15%) and accuracy (±3%) values, respectively, were within acceptable limits. In conclusion, comparative quantitation of adjacent mouse thin tissue sections of different organs and of various thicknesses by droplet-based surface sampling and by bulk extraction of tissue punches showed that extraction efficiency was incomplete using the former method, and that it depended on the organ and tissue thickness. However, once extraction efficiency was determined and applied, the droplet-based approach provided the required quantitation accuracy and precision for assay validations.
    Furthermore, this means that once the extraction efficiency was calibrated for a given tissue type and drug, the droplet-based approach provides a non-labor-intensive and high-throughput means to acquire spatially resolved quantitative analysis of multiple samples of the same type.

  8. Extraction efficiency and implications for absolute quantitation of propranolol in mouse brain, liver and kidney thin tissue sections using droplet-based liquid microjunction surface sampling-HPLC ESI-MS/MS

    DOE PAGES

    Kertesz, Vilmos; Weiskittel, Taylor M.; Vavek, Marissa; ...

    2016-06-22

    Currently, absolute quantitation aspects of droplet-based surface sampling for thin tissue analysis using a fully automated autosampler/HPLC-ESI-MS/MS system are not fully evaluated. Knowledge of extraction efficiency and its reproducibility is required to judge the potential of the method for absolute quantitation of analytes from thin tissue sections. Methods: Adjacent thin tissue sections of propranolol-dosed mouse brain (10-μm-thick), kidney (10-μm-thick) and liver (8-, 10-, 16- and 24-μm-thick) were obtained. Absolute concentration of propranolol was determined in tissue punches from serial sections using standard bulk tissue extraction protocols and subsequent HPLC separations and tandem mass spectrometric analysis. These values were used to determine propranolol extraction efficiency from the tissues with the droplet-based surface sampling approach. Results: Extraction efficiency of propranolol using 10-μm-thick brain, kidney and liver thin tissues using droplet-based surface sampling varied between ~45-63%. Extraction efficiency decreased from ~65% to ~36% with liver thickness increasing from 8 μm to 24 μm. Randomly selecting half of the samples as standards, precision and accuracy of propranolol concentrations obtained for the other half of samples as quality control metrics were determined. Resulting precision (±15%) and accuracy (±3%) values, respectively, were within acceptable limits. In conclusion, comparative quantitation of adjacent mouse thin tissue sections of different organs and of various thicknesses by droplet-based surface sampling and by bulk extraction of tissue punches showed that extraction efficiency was incomplete using the former method, and that it depended on the organ and tissue thickness. However, once extraction efficiency was determined and applied, the droplet-based approach provided the required quantitation accuracy and precision for assay validations.
    Furthermore, this means that once the extraction efficiency was calibrated for a given tissue type and drug, the droplet-based approach provides a non-labor-intensive and high-throughput means to acquire spatially resolved quantitative analysis of multiple samples of the same type.
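    The quantitation correction implied by a calibrated extraction efficiency is simply division of the measured amount by that efficiency. A minimal sketch with hypothetical numbers (only the ~45% figure echoes the reported efficiency range; the concentration value is invented for illustration):

```python
def corrected_concentration(measured, extraction_efficiency):
    """Correct a droplet-sampled concentration for incomplete extraction:
    true amount ~= measured amount / extraction efficiency."""
    return measured / extraction_efficiency

# Hypothetical reading: 0.45 ug/g apparent propranolol at 45% extraction
# efficiency (within the reported ~45-63% range for 10-um sections).
print(corrected_concentration(0.45, 0.45))  # 1.0 ug/g
```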

  9. Pabon Lasso and Data Envelopment Analysis: A Complementary Approach to Hospital Performance Measurement

    PubMed Central

    Mehrtak, Mohammad; Yusefzadeh, Hasan; Jaafaripooyan, Ebrahim

    2014-01-01

    Background: Performance measurement is essential to the management of health care organizations, for which efficiency is per se a vital indicator. The present study accordingly aims to measure the efficiency of hospitals employing two distinct methods. Methods: Data Envelopment Analysis and the Pabon Lasso Model were jointly applied to calculate the efficiency of all general hospitals located in Iran's Eastern Azerbaijan Province. Data were collected using hospitals' monthly performance forms and analyzed and displayed with MS Visio and DEAP software. Results: According to the Pabon Lasso model, 44.5% of the hospitals were entirely efficient, whilst DEA revealed 61% to be efficient. As such, 39% of the hospitals were wholly inefficient by the Pabon Lasso model; based on DEA, though, the relevant figure was only 22.2%. Finally, 16.5% of hospitals as calculated by Pabon Lasso and 16.7% by DEA were relatively efficient. DEA thus appeared to classify more hospitals as efficient than the Pabon Lasso model did. Conclusion: Simultaneous use of the two models rendered complementary and corroborative results, as both evidently reveal efficient hospitals. However, their results should be compared with prudence. Whilst the Pabon Lasso inefficient zone is fully clear, DEA does not provide such a crystal-clear limit for inefficiency. PMID:24999147
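    The Pabon Lasso classification itself is a simple quadrant test: each hospital's bed occupancy rate (BOR) and bed turnover rate (BTR) are compared with the group means, and the quadrant where both exceed the mean is the efficient one. A sketch with hypothetical data (zone numbering conventions vary across the literature):

```python
def pabon_lasso_zone(bor, btr, mean_bor, mean_btr):
    """Assign a hospital to a Pabon Lasso quadrant by comparing its bed
    occupancy rate (BOR) and bed turnover rate (BTR) with the group means.
    Zone 3 (both above average) is the efficient quadrant; zone 1 (both
    below average) is the inefficient one."""
    if bor >= mean_bor and btr >= mean_btr:
        return 3
    if bor >= mean_bor:
        return 4          # high occupancy, low turnover (long stays)
    if btr >= mean_btr:
        return 2          # high turnover, low occupancy (excess beds)
    return 1

# Hypothetical data: (BOR %, BTR patients/bed/year) for four hospitals.
hospitals = [(85.0, 40.0), (55.0, 18.0), (60.0, 35.0), (90.0, 20.0)]
mean_bor = sum(h[0] for h in hospitals) / len(hospitals)
mean_btr = sum(h[1] for h in hospitals) / len(hospitals)
print([pabon_lasso_zone(b, t, mean_bor, mean_btr) for b, t in hospitals])
# [3, 1, 2, 4]
```

    Because the zone boundaries are sample means, the test is relative to the hospital group studied, which is one reason the record recommends comparing its verdicts with DEA scores prudently.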

  10. A streamlined artificial variable free version of simplex method.

    PubMed

    Inayatullah, Syed; Touheed, Nasir; Imtiaz, Muhammad

    2015-01-01

    This paper proposes a streamlined form of the simplex method which provides some great benefits over the traditional simplex method. For instance, it does not need any kind of artificial variables or artificial constraints, and it can start with any feasible or infeasible basis of an LP. This method follows the same pivoting sequence as simplex phase 1 without any explicit description of artificial variables, which also makes it space efficient. Later in this paper, a dual version of the new method is also presented, which provides a way to easily implement phase 1 of the traditional dual simplex method. For a problem having an initial basis which is both primal and dual infeasible, our methods provide full freedom to the user as to whether to start with the primal artificial-free version or the dual artificial-free version, without making any reformulation to the LP structure. Last but not least, it provides a teaching aid for teachers who want to teach feasibility achievement as a separate topic before teaching optimality achievement.

  11. A Streamlined Artificial Variable Free Version of Simplex Method

    PubMed Central

    Inayatullah, Syed; Touheed, Nasir; Imtiaz, Muhammad

    2015-01-01

    This paper proposes a streamlined form of the simplex method which provides some great benefits over the traditional simplex method. For instance, it does not need any kind of artificial variables or artificial constraints, and it can start with any feasible or infeasible basis of an LP. This method follows the same pivoting sequence as simplex phase 1 without any explicit description of artificial variables, which also makes it space efficient. Later in this paper, a dual version of the new method is also presented, which provides a way to easily implement phase 1 of the traditional dual simplex method. For a problem having an initial basis which is both primal and dual infeasible, our methods provide full freedom to the user as to whether to start with the primal artificial-free version or the dual artificial-free version, without making any reformulation to the LP structure. Last but not least, it provides a teaching aid for teachers who want to teach feasibility achievement as a separate topic before teaching optimality achievement. PMID:25767883
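    For contrast with the artificial-variable-free idea, the textbook tableau simplex below (a classroom sketch, not the authors' method) starts from the slack basis of a problem with b >= 0, which is exactly the situation where no artificial variables are ever needed; the paper's contribution is handling arbitrary infeasible starting bases without introducing them:

```python
def simplex_max(c, A, b):
    """Tableau simplex for: maximise c.x subject to A x <= b, x >= 0,
    with b >= 0 so the slack variables give a feasible starting basis."""
    m, n = len(A), len(c)
    # Tableau rows [A | I | b]; objective row z holds the reduced costs -c.
    T = [A[i][:] + [1.0 if j == i else 0.0 for j in range(m)] + [b[i]] for i in range(m)]
    z = [-cj for cj in c] + [0.0] * m + [0.0]
    basis = [n + i for i in range(m)]
    while True:
        piv_col = min(range(n + m), key=lambda j: z[j])  # most negative reduced cost
        if z[piv_col] >= -1e-9:
            break                                        # optimal
        ratios = [(T[i][-1] / T[i][piv_col], i) for i in range(m) if T[i][piv_col] > 1e-9]
        if not ratios:
            raise ValueError("unbounded LP")
        _, piv_row = min(ratios)                         # minimum-ratio test
        p = T[piv_row][piv_col]
        T[piv_row] = [v / p for v in T[piv_row]]
        for i in range(m):
            if i != piv_row and abs(T[i][piv_col]) > 1e-12:
                f = T[i][piv_col]
                T[i] = [vi - f * vp for vi, vp in zip(T[i], T[piv_row])]
        f = z[piv_col]
        z = [vi - f * vp for vi, vp in zip(z, T[piv_row])]
        basis[piv_row] = piv_col
    x = [0.0] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = T[i][-1]
    return x, z[-1]

# Classic example: max 3x + 5y  s.t.  x <= 4,  2y <= 12,  3x + 2y <= 18.
x, value = simplex_max([3.0, 5.0], [[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]], [4.0, 12.0, 18.0])
print(x, value)  # [2.0, 6.0] 36.0
```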

  12. Adaptive controller for volumetric display of neuroimaging studies

    NASA Astrophysics Data System (ADS)

    Bleiberg, Ben; Senseney, Justin; Caban, Jesus

    2014-03-01

    Volumetric display of medical images is an increasingly relevant method for examining an imaging acquisition as the prevalence of thin-slice imaging increases in clinical studies. Current mouse and keyboard implementations for volumetric control provide neither the sensitivity nor specificity required to manipulate a volumetric display for efficient reading in a clinical setting. Solutions to efficient volumetric manipulation provide more sensitivity by removing the binary nature of actions controlled by keyboard clicks, but specificity is lost because a single action may change display in several directions. When specificity is then further addressed by re-implementing hardware binary functions through the introduction of mode control, the result is a cumbersome interface that fails to achieve the revolutionary benefit required for adoption of a new technology. We address the specificity versus sensitivity problem of volumetric interfaces by providing adaptive positional awareness to the volumetric control device by manipulating communication between hardware driver and existing software methods for volumetric display of medical images. This creates a tethered effect for volumetric display, providing a smooth interface that improves on existing hardware approaches to volumetric scene manipulation.

  13. Efficient multidimensional regularization for Volterra series estimation

    NASA Astrophysics Data System (ADS)

    Birpoutsoukis, Georgios; Csurcsia, Péter Zoltán; Schoukens, Johan

    2018-05-01

    This paper presents an efficient nonparametric time domain nonlinear system identification method. It is shown how truncated Volterra series models can be efficiently estimated without the need of long, transient-free measurements. The method is a novel extension of the regularization methods that have been developed for impulse response estimates of linear time invariant systems. To avoid the excessive memory needs in case of long measurements or large number of estimated parameters, a practical gradient-based estimation method is also provided, leading to the same numerical results as the proposed Volterra estimation method. Moreover, the transient effects in the simulated output are removed by a special regularization method based on the novel ideas of transient removal for Linear Time-Varying (LTV) systems. Combining the proposed methodologies, the nonparametric Volterra models of the cascaded water tanks benchmark are presented in this paper. The results for different scenarios varying from a simple Finite Impulse Response (FIR) model to a 3rd degree Volterra series with and without transient removal are compared and studied. It is clear that the obtained models capture the system dynamics when tested on a validation dataset, and their performance is comparable with the white-box (physical) models.
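    The core of regularized impulse-response (first-order Volterra kernel) estimation is penalised least squares. The sketch below substitutes plain ridge regularisation for the structured multidimensional penalties developed in the paper, and uses synthetic data from a known FIR system so the estimate can be checked:

```python
import random

def estimate_fir_ridge(u, y, order, lam):
    """Ridge-regularised estimate of a FIR (first-order Volterra) kernel:
    minimise ||y - Phi g||^2 + lam * ||g||^2, i.e. solve the normal
    equations (Phi^T Phi + lam I) g = Phi^T y by Gaussian elimination.
    (Plain ridge stands in for the paper's structured kernels.)"""
    N = len(y)
    Phi = [[u[t - k] if t - k >= 0 else 0.0 for k in range(order)] for t in range(N)]
    M = [[sum(Phi[t][i] * Phi[t][j] for t in range(N)) + (lam if i == j else 0.0)
          for j in range(order)] for i in range(order)]
    rhs = [sum(Phi[t][i] * y[t] for t in range(N)) for i in range(order)]
    # Gaussian elimination with partial pivoting.
    for col in range(order):
        piv = max(range(col, order), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, order):
            f = M[r][col] / M[col][col]
            M[r] = [a - f * b for a, b in zip(M[r], M[col])]
            rhs[r] -= f * rhs[col]
    g = [0.0] * order
    for r in reversed(range(order)):
        g[r] = (rhs[r] - sum(M[r][c] * g[c] for c in range(r + 1, order))) / M[r][r]
    return g

random.seed(0)
true_g = [1.0, 0.5, 0.25]
u = [random.uniform(-1, 1) for _ in range(200)]
y = [sum(true_g[k] * (u[t - k] if t - k >= 0 else 0.0) for k in range(3)) for t in range(200)]
g_hat = estimate_fir_ridge(u, y, order=3, lam=1e-6)
print([round(gk, 3) for gk in g_hat])  # ~ [1.0, 0.5, 0.25]
```

    Higher-degree Volterra terms enter the same least-squares framework with products of delayed inputs as regressors, which is where the memory-efficient gradient scheme of the paper becomes necessary.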

  14. High efficient perovskite solar cell material CH3NH3PbI3: Synthesis of films and their characterization

    NASA Astrophysics Data System (ADS)

    Bera, Amrita Mandal; Wargulski, Dan Ralf; Unold, Thomas

    2018-04-01

    Hybrid organometal perovskites have emerged as promising solar cell materials and have exhibited solar cell efficiencies of more than 20%. Thin films of the methylammonium lead iodide (CH3NH3PbI3) perovskite were synthesized by two different (one-step and two-step) methods, and their morphological properties were studied by scanning electron microscopy and optical microscope imaging. The morphology of the perovskite layer is one of the most important parameters affecting solar cell efficiency. The film morphology revealed that the two-step method provides better surface coverage than the one-step method; however, the grain sizes were smaller in the case of the two-step method. Films prepared by the two-step method on different substrates revealed that the grain size also depends on the substrate: an increase of the grain size was found from a glass substrate, to FTO with a TiO2 blocking layer, to FTO, without any change in the surface coverage area. The present study reveals that improved film quality can be obtained with the two-step method by optimizing the synthesis process.

  15. A Robust, "One-Pot" Method for Acquiring Kinetic Data for Hammett Plots Used to Demonstrate Transmission of Substituent Effects in Reactions of Aromatic Ethyl Esters

    ERIC Educational Resources Information Center

    Yau, Hon Man; Haines, Ronald S.; Harper, Jason B.

    2015-01-01

    A "one-pot" method for acquiring kinetic data for the reactions of a series of substituted aromatic esters with potassium hydroxide using [superscript 13]C NMR spectroscopy is described, which provides an efficient way to obtain sufficient data to demonstrate the Hammett equation in undergraduate laboratories. The method is…
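    The data from such a one-pot experiment are typically summarised by fitting the Hammett equation log10(k/k0) = rho * sigma. A minimal sketch with synthetic rate constants (not the article's measurements; the sigma values are standard para-substituent constants and rho = 2.5 is an assumed reaction constant):

```python
import math

def hammett_rho(sigmas, rate_constants, k0):
    """Least-squares slope rho of the Hammett plot
    log10(k / k0) = rho * sigma (line constrained through the origin)."""
    ys = [math.log10(k / k0) for k in rate_constants]
    return sum(s * y for s, y in zip(sigmas, ys)) / sum(s * s for s in sigmas)

# Hypothetical ester-hydrolysis data: Hammett sigma values for p-Me, H,
# p-Cl, p-NO2, and synthetic rate constants generated with rho = 2.5.
sigmas = [-0.17, 0.0, 0.23, 0.78]
k0 = 1.0e-3
ks = [k0 * 10 ** (2.5 * s) for s in sigmas]   # synthetic "measurements"
print(round(hammett_rho(sigmas, ks, k0), 2))  # 2.5
```

    A large positive rho, as here, indicates a reaction accelerated by electron-withdrawing substituents, which is the transmission-of-substituent-effects point the laboratory exercise demonstrates.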

  16. Radiation Damage Workshop

    NASA Technical Reports Server (NTRS)

    Stella, P. M.

    1984-01-01

    The availability of data regarding the radiation behavior of GaAs and silicon solar cells is discussed, as well as efforts to provide sufficient information. Other materials are considered too immature for reasonable radiation evaluation. The lack of concern over possible catastrophic radiation degradation in cascade cells is a potentially serious problem. Lithium counterdoping shows potential for removing damage in irradiated P-type material, although initial efficiencies are not comparable to the current state of the art. The possibility of refining the lithium doping method to maintain high initial efficiencies and combining it with radiation-tolerant structures such as thin BSF cells or vertical junction cells could provide a substantial improvement in EOL efficiencies. Laser annealing of junctions, either those formed by ion implantation or by diffusion, may not only improve initial cell performance but might also reduce the radiation degradation rate.

  17. Fast and Efficient Stochastic Optimization for Analytic Continuation

    DOE PAGES

    Bao, Feng; Zhang, Guannan; Webster, Clayton G; ...

    2016-09-28

    The analytic continuation of imaginary-time quantum Monte Carlo data to extract real-frequency spectra remains a key problem in connecting theory with experiment. Here we present a fast and efficient stochastic optimization method (FESOM) as a more accessible variant of the stochastic optimization method introduced by Mishchenko et al. [Phys. Rev. B 62, 6317 (2000)], and we benchmark the resulting spectra against those obtained by the standard maximum entropy method for three representative test cases, including data taken from studies of the two-dimensional Hubbard model. Generally, we find that our FESOM approach yields spectra similar to the maximum entropy results. In particular, while the maximum entropy method yields superior results when the quality of the data is high, we find that FESOM is able to resolve fine structure in more detail when the quality of the data is poor. In addition, because of its stochastic nature, the method provides detailed information on the frequency-dependent uncertainty of the resulting spectra, while the maximum entropy method does so only for the spectral weight integrated over a finite frequency region. Therefore, we believe that this variant of the stochastic optimization approach provides a viable alternative to the routinely used maximum entropy method, especially for data of poor quality.

  18. Linkage disequilibrium interval mapping of quantitative trait loci.

    PubMed

    Boitard, Simon; Abdallah, Jihad; de Rochambeau, Hubert; Cierco-Ayrolles, Christine; Mangin, Brigitte

    2006-03-16

    For many years gene mapping studies have been performed through linkage analyses based on pedigree data. Recently, linkage disequilibrium methods based on unrelated individuals have been advocated as powerful tools to refine estimates of gene location. Many strategies have been proposed to deal with simply inherited disease traits. However, locating quantitative trait loci is statistically more challenging and considerable research is needed to provide robust and computationally efficient methods. Under a three-locus Wright-Fisher model, we derived approximate expressions for the expected haplotype frequencies in a population. We considered haplotypes comprising one trait locus and two flanking markers. Using these theoretical expressions, we built a likelihood-maximization method, called HAPim, for estimating the location of a quantitative trait locus. For each postulated position, the method only requires information from the two flanking markers. Over a wide range of simulation scenarios it was found to be more accurate than a two-marker composite likelihood method. It also performed as well as identity by descent methods, whilst being valuable in a wider range of populations. Our method makes efficient use of marker information, and can be valuable for fine mapping purposes. Its performance is increased if multiallelic markers are available. Several improvements can be developed to account for more complex evolution scenarios or provide robust confidence intervals for the location estimates.

  19. A novel method to prepare concentrated conidial biomass formulation of Trichoderma harzianum for seed application.

    PubMed

    Singh, P C; Nautiyal, C S

    2012-12-01

    To prepare a concentrated formulation of Trichoderma harzianum MTCC-3841 (NBRI-1055) with high colony forming units (CFU), long shelf life and efficient root colonization by a simple scraping method. NBRI-1055 spores scraped from potato dextrose agar plates were used to prepare a concentrated formulation after optimizing carrier material, moisture content and spore harvest time. The process provides an advantage of maintaining optimum moisture level by the addition of water rather than dehydration. The formulation had an initial 11-12 log(10) CFU g(-1). Its concentrated form reduces its application amount by 100 times (10 g 100 kg(-1) seed) and provides 3-4 log(10) CFU seed(-1). Shelf life of the product was experimentally determined at 30 and 40 °C and predicted at other temperatures following the Arrhenius equation. The concentrated formulation, compared to similar products, provides an extra advantage of smaller packaging for storage and transportation, cutting down product cost. Seed application of the formulation recorded a significant increase in plant growth promotion. A stable and effective formulation of Trichoderma harzianum NBRI-1055 was obtained by a simple scraping method. A new method for the production of a concentrated, stable, effective and cost efficient formulation of T. harzianum has been validated for seed application. © 2012 The Society for Applied Microbiology.

  20. Electrooxidative Tandem Cyclization of Activated Alkynes with Sulfinic Acids To Access Sulfonated Indenones

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wen, Jiangwei; Shi, Wenyan; Zhang, Fan

    An electrooxidative direct arylsulfonylation of ynones with sulfinic acids via a radical tandem cyclization strategy has been developed for the construction of sulfonated indenones under oxidant-free conditions. This method provides a simple and efficient approach to prepare various sulfonylindenones in good to excellent yields, demonstrating the tremendous prospect of utilizing electrocatalysis in oxidative coupling. Notably, this reaction could be easily scaled up with good efficiency.

  1. High efficiency magnetic bearings

    NASA Technical Reports Server (NTRS)

    Studer, Philip A.; Jayaraman, Chaitanya P.; Anand, Davinder K.; Kirk, James A.

    1993-01-01

    Research activities concerning high efficiency permanent magnet plus electromagnet (PM/EM) pancake magnetic bearings at the University of Maryland are reported. A description of the construction and working of the magnetic bearing is provided. Next, parameters needed to describe the bearing are explained. Then, methods developed for the design and testing of magnetic bearings are summarized. Finally, a new magnetic bearing which allows active torque control in the off axes directions is discussed.

  2. Efficient Sum of Outer Products Dictionary Learning (SOUP-DIL) and Its Application to Inverse Problems.

    PubMed

    Ravishankar, Saiprasad; Nadakuditi, Raj Rao; Fessler, Jeffrey A

    2017-12-01

    The sparsity of signals in a transform domain or dictionary has been exploited in applications such as compression, denoising and inverse problems. More recently, data-driven adaptation of synthesis dictionaries has shown promise compared to analytical dictionary models. However, dictionary learning problems are typically non-convex and NP-hard, and the usual alternating minimization approaches for these problems are often computationally expensive, with the computations dominated by the NP-hard synthesis sparse coding step. This paper exploits the ideas that drive algorithms such as K-SVD, and investigates in detail efficient methods for aggregate sparsity penalized dictionary learning by first approximating the data with a sum of sparse rank-one matrices (outer products) and then using a block coordinate descent approach to estimate the unknowns. The resulting block coordinate descent algorithms involve efficient closed-form solutions. Furthermore, we consider the problem of dictionary-blind image reconstruction, and propose novel and efficient algorithms for adaptive image reconstruction using block coordinate descent and sum of outer products methodologies. We provide a convergence study of the algorithms for dictionary learning and dictionary-blind image reconstruction. Our numerical experiments show the promising performance and speedups provided by the proposed methods over previous schemes in sparse data representation and compressed sensing-based image reconstruction.
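    The sum-of-outer-products idea with block coordinate descent can be sketched briefly. The following is a minimal illustration of the technique, not the authors' implementation: each rank-one term d_j c_j^T is updated in turn, with a hard threshold standing in for the aggregate sparsity (ℓ0-type) penalty and a closed-form normalized update for the atom:

    ```python
    import numpy as np

    def soup_dil(Y, J=8, lam=0.1, iters=20, seed=0):
        """Sketch of dictionary learning by block coordinate descent over
        rank-one (outer product) terms: Y (n x N) ~ sum_j d_j c_j^T."""
        rng = np.random.default_rng(seed)
        n, N = Y.shape
        D = rng.standard_normal((n, J))
        D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
        C = np.zeros((N, J))
        for _ in range(iters):
            for j in range(J):
                # residual with the j-th rank-one term removed
                E = Y - D @ C.T + np.outer(D[:, j], C[:, j])
                # sparse coefficient update: hard thresholding (l0-style)
                b = E.T @ D[:, j]
                C[:, j] = np.where(np.abs(b) > lam, b, 0.0)
                # dictionary atom update: closed-form, renormalized
                h = E @ C[:, j]
                nh = np.linalg.norm(h)
                if nh > 0:
                    D[:, j] = h / nh
        return D, C
    ```

    Both inner updates are closed-form, which is the source of the speedups the abstract reports relative to alternating minimization with an NP-hard sparse coding step.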

  3. Efficient Sum of Outer Products Dictionary Learning (SOUP-DIL) and Its Application to Inverse Problems

    PubMed Central

    Ravishankar, Saiprasad; Nadakuditi, Raj Rao; Fessler, Jeffrey A.

    2017-01-01

    The sparsity of signals in a transform domain or dictionary has been exploited in applications such as compression, denoising and inverse problems. More recently, data-driven adaptation of synthesis dictionaries has shown promise compared to analytical dictionary models. However, dictionary learning problems are typically non-convex and NP-hard, and the usual alternating minimization approaches for these problems are often computationally expensive, with the computations dominated by the NP-hard synthesis sparse coding step. This paper exploits the ideas that drive algorithms such as K-SVD, and investigates in detail efficient methods for aggregate sparsity penalized dictionary learning by first approximating the data with a sum of sparse rank-one matrices (outer products) and then using a block coordinate descent approach to estimate the unknowns. The resulting block coordinate descent algorithms involve efficient closed-form solutions. Furthermore, we consider the problem of dictionary-blind image reconstruction, and propose novel and efficient algorithms for adaptive image reconstruction using block coordinate descent and sum of outer products methodologies. We provide a convergence study of the algorithms for dictionary learning and dictionary-blind image reconstruction. Our numerical experiments show the promising performance and speedups provided by the proposed methods over previous schemes in sparse data representation and compressed sensing-based image reconstruction. PMID:29376111

  4. Dynamic Server-Based KML Code Generator Method for Level-of-Detail Traversal of Geospatial Data

    NASA Technical Reports Server (NTRS)

    Baxes, Gregory; Mixon, Brian; Linger, TIm

    2013-01-01

    Web-based geospatial client applications such as Google Earth and NASA World Wind must listen to data requests, access appropriate stored data, and compile a data response to the requesting client application. This process occurs repeatedly to support multiple client requests and application instances. Newer Web-based geospatial clients also provide user-interactive functionality that is dependent on fast and efficient server responses. With massively large datasets, server-client interaction can become severely impeded because the server must determine the best way to assemble data to meet the client application's request. In client applications such as Google Earth, the user interactively wanders through the data using visually guided panning and zooming actions. With these actions, the client application is continually issuing data requests to the server without knowledge of the server's data structure or extraction/assembly paradigm. A method for efficiently controlling the networked access of a Web-based geospatial browser to server-based datasets, in particular massively sized datasets, has been developed. The method specifically uses the Keyhole Markup Language (KML), an Open Geospatial Consortium (OGC) standard used by Google Earth and other KML-compliant geospatial client applications. The innovation is based on establishing a dynamic cascading KML strategy that is initiated by a KML launch file provided by a data server host to a Google Earth or similar KML-compliant geospatial client application user. Upon execution, the launch KML code issues a request for image data covering an initial geographic region. The server responds with the requested data along with subsequent dynamically generated KML code that directs the client application to make follow-on requests for higher level of detail (LOD) imagery to replace the initial imagery as the user navigates into the dataset.
The approach provides an efficient data traversal path and mechanism that can be flexibly established for any dataset regardless of size or other characteristics. The method yields significant improvements in user-interactive geospatial client and data server interaction and associated network bandwidth requirements. The innovation uses a C- or PHP-code-like grammar that provides a high degree of processing flexibility. A set of language lexer and parser elements is provided that offers a complete language grammar for writing and executing language directives. A script is wrapped and passed to the geospatial data server by a client application as a component of a standard KML-compliant statement. The approach provides an efficient means for a geospatial client application to request server preprocessing of data prior to client delivery. Data is structured in a quadtree format. As the user zooms into the dataset, geographic regions are subdivided into four child regions. Conversely, as the user zooms out, four child regions collapse into a single, lower-LOD region.
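The quadtree subdivision and the cascading-KML response can be sketched as follows. This is an illustrative reconstruction, not the NASA implementation; the tile URL and LOD threshold are hypothetical, and the KML elements (Region, LatLonAltBox, Lod, NetworkLink with viewRefreshMode onRegion) are the standard ones a server would emit so the client fetches finer tiles only when a region fills enough screen pixels:

```python
def children(bbox):
    """Split a (west, south, east, north) region into four quadtree children."""
    w, s, e, n = bbox
    cx, cy = (w + e) / 2.0, (s + n) / 2.0
    return [(w, s, cx, cy), (cx, s, e, cy), (w, cy, cx, n), (cx, cy, e, n)]

def region_kml(bbox, tile_url, min_lod=128):
    """Emit a KML NetworkLink whose Region triggers a follow-on request
    to the server once it occupies at least min_lod screen pixels."""
    w, s, e, n = bbox
    return f"""<NetworkLink>
  <Region>
    <LatLonAltBox><north>{n}</north><south>{s}</south>
      <east>{e}</east><west>{w}</west></LatLonAltBox>
    <Lod><minLodPixels>{min_lod}</minLodPixels></Lod>
  </Region>
  <Link><href>{tile_url}</href><viewRefreshMode>onRegion</viewRefreshMode></Link>
</NetworkLink>"""
```

On each request the server would return the tile image plus four such NetworkLinks, one per child region, so traversal deepens only along the user's navigation path.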

  5. An approach to optimize the batch mixing process for improving the quality consistency of the products made from traditional Chinese medicines*

    PubMed Central

    Yan, Bin-jun; Qu, Hai-bin

    2013-01-01

    The efficacy of traditional Chinese medicine (TCM) is based on the combined effects of its constituents. Variation in chemical composition between batches of TCM has always been the deterring factor in achieving consistency in efficacy. The batch mixing process can significantly reduce the batch-to-batch quality variation in TCM extracts by mixing them in a well-designed proportion. However, reducing the quality variation without sacrificing too much of the production efficiency is one of the challenges. Accordingly, an innovative and practical batch mixing method aimed at providing acceptable efficiency for industrial production of TCM products is proposed in this work, which uses a minimum number of batches of extracts to meet the content limits. The important factors affecting the utilization ratio of the extracts (URE) were studied by simulations. The results have shown that URE was affected by the correlation between the contents of constituents, and URE decreased with the increase in the number of targets and the relative standard deviations of the contents. URE could be increased by increasing the number of storage tanks. The results have provided a reference for designing the batch mixing process. The proposed method has possible application value in reducing the quality variation in TCM and providing acceptable production efficiency simultaneously. PMID:24190450
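A toy version of the batch-selection step, using a minimum number of batches to meet content limits, might look like the brute-force search below. The equal-proportion blending and the specific limits are simplifying assumptions for illustration, not the paper's actual mixing design:

```python
from itertools import combinations
import numpy as np

def select_batches(contents, lower, upper, max_batches=5):
    """Return the smallest set of batch indices whose equal-proportion
    blend meets the content limits for every constituent, or None.
    contents: list of per-batch constituent content vectors."""
    contents = np.asarray(contents, float)
    for k in range(1, max_batches + 1):            # fewest batches first
        for idx in combinations(range(len(contents)), k):
            blend = contents[list(idx)].mean(axis=0)
            if np.all(blend >= lower) and np.all(blend <= upper):
                return list(idx)
    return None
```

Minimizing the number of batches used per blend is what keeps the utilization ratio of the extracts (URE) and the production efficiency acceptable.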

  6. An approach to optimize the batch mixing process for improving the quality consistency of the products made from traditional Chinese medicines.

    PubMed

    Yan, Bin-jun; Qu, Hai-bin

    2013-11-01

    The efficacy of traditional Chinese medicine (TCM) is based on the combined effects of its constituents. Variation in chemical composition between batches of TCM has always been the deterring factor in achieving consistency in efficacy. The batch mixing process can significantly reduce the batch-to-batch quality variation in TCM extracts by mixing them in a well-designed proportion. However, reducing the quality variation without sacrificing too much of the production efficiency is one of the challenges. Accordingly, an innovative and practical batch mixing method aimed at providing acceptable efficiency for industrial production of TCM products is proposed in this work, which uses a minimum number of batches of extracts to meet the content limits. The important factors affecting the utilization ratio of the extracts (URE) were studied by simulations. The results have shown that URE was affected by the correlation between the contents of constituents, and URE decreased with the increase in the number of targets and the relative standard deviations of the contents. URE could be increased by increasing the number of storage tanks. The results have provided a reference for designing the batch mixing process. The proposed method has possible application value in reducing the quality variation in TCM and providing acceptable production efficiency simultaneously.

  7. New density estimation methods for charged particle beams with applications to microbunching instability

    NASA Astrophysics Data System (ADS)

    Terzić, Balša; Bassi, Gabriele

    2011-07-01

    In this paper we discuss representations of charged particle densities in particle-in-cell simulations, analyze the sources and profiles of the intrinsic numerical noise, and present efficient methods for their removal. We devise two alternative estimation methods for the charged particle distribution which represent significant improvement over the Monte Carlo cosine expansion used in the 2D code of Bassi et al. [G. Bassi, J. A. Ellison, K. Heinemann, and R. Warnock, Phys. Rev. ST Accel. Beams 12, 080704 (2009); G. Bassi and B. Terzić, in Proceedings of the 23rd Particle Accelerator Conference, Vancouver, Canada, 2009 (IEEE, Piscataway, NJ, 2009), TH5PFP043], designed to simulate coherent synchrotron radiation (CSR) in charged particle beams. The improvement is achieved by employing an alternative beam density estimation to the Monte Carlo cosine expansion. The representation is first binned onto a finite grid, after which two grid-based methods are employed to approximate particle distributions: (i) truncated fast cosine transform; and (ii) thresholded wavelet transform (TWT). We demonstrate that these alternative methods represent a staggering upgrade over the original Monte Carlo cosine expansion in terms of efficiency, while the TWT approximation also provides an appreciable improvement in accuracy. The improvement in accuracy comes from a judicious removal of the numerical noise enabled by the wavelet formulation. The TWT method is then integrated into the CSR code [G. Bassi, J. A. Ellison, K. Heinemann, and R. Warnock, Phys. Rev. ST Accel. Beams 12, 080704 (2009)], and benchmarked against the original version. We show that the new density estimation method provides a superior performance in terms of efficiency and spatial resolution, thus enabling high-fidelity simulations of CSR effects, including microbunching instability.
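The thresholding idea behind TWT can be illustrated with a one-level Haar transform in plain NumPy. This is a simplified stand-in for the paper's method (which uses a full multilevel wavelet decomposition of the binned density): detail coefficients too small to be distinguished from sampling noise are zeroed before inverting the transform.

```python
import numpy as np

def haar_threshold_denoise(signal, thresh):
    """One-level Haar hard-thresholding of an even-length signal:
    keep coarse averages, zero small detail coefficients."""
    x = np.asarray(signal, float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)      # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)      # detail coefficients
    d = np.where(np.abs(d) > thresh, d, 0.0)  # hard threshold the details
    out = np.empty_like(x)
    out[0::2] = (a + d) / np.sqrt(2)          # inverse Haar transform
    out[1::2] = (a - d) / np.sqrt(2)
    return out
```

With thresh set to zero the reconstruction is exact; a positive threshold removes small-scale noise while the retained coefficients preserve the smooth density profile.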

  8. CB4-03: An Eye on the Future: A Review of Data Virtualization Techniques to Improve Research Analytics

    PubMed Central

    Richter, Jack; McFarland, Lela; Bredfeldt, Christine

    2012-01-01

    Background/Aims Integrating data across systems can be a daunting process. The traditional method of moving data to a common location, mapping fields with different formats and meanings, and performing data cleaning activities to ensure valid and reliable integration across systems can be both expensive and extremely time consuming. As the scope of needed research data increases, the traditional methodology may not be sustainable. Data Virtualization provides an alternative to traditional methods that may reduce the effort required to integrate data across disparate systems. Objective Our goal was to survey new methods in data integration, cloud computing, enterprise data management and virtual data management for opportunities to increase the efficiency of producing VDW and similar data sets. Methods Kaiser Permanente Information Technology (KPIT), in collaboration with the Mid-Atlantic Permanente Research Institute (MAPRI) reviewed methodologies in the burgeoning field of Data Virtualization. We identified potential strengths and weaknesses of new approaches to data integration. For each method, we evaluated its potential application for producing effective research data sets. Results Data Virtualization provides opportunities to reduce the amount of data movement required to integrate data sources on different platforms in order to produce research data sets. Additionally, Data Virtualization also includes methods for managing “fuzzy” matching used to match fields known to have poor reliability such as names, addresses and social security numbers. These methods could improve the efficiency of integrating state and federal data such as patient race, death, and tumors with internal electronic health record data. Discussion The emerging field of Data Virtualization has considerable potential for increasing the efficiency of producing research data sets. 
An important next step will be to develop a proof of concept project that will help us understand the benefits and drawbacks of these techniques.
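As a rough illustration of the "fuzzy" matching mentioned above, a similarity-ratio comparison using Python's standard library might look like the following. The threshold is an arbitrary assumption; production data-virtualization platforms combine several richer measures (phonetic, token-based, probabilistic) rather than a single string ratio:

```python
from difflib import SequenceMatcher

def fuzzy_match(a, b, threshold=0.85):
    """Treat two field values (e.g. names) as the same entity when
    their character-level similarity ratio clears a threshold."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold
```

Such matching is what allows fields with poor reliability, like names and addresses, to be linked across state, federal, and electronic health record sources without exact agreement.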

  9. A new method of SC image processing for confluence estimation.

    PubMed

    Soleimani, Sajjad; Mirzaei, Mohsen; Toncu, Dana-Cristina

    2017-10-01

    Stem cell images are a strong instrument for estimating confluency during culturing for therapeutic processes. Various laboratory conditions, such as lighting, cell container support and image acquisition equipment, affect the image quality and, subsequently, the estimation efficiency. This paper describes an efficient image processing method for cell pattern recognition and morphological analysis of images affected by an uneven background. The proposed algorithm for enhancing the image is based on coupling a novel image denoising method through a BM3D filter with an adaptive thresholding technique for correcting the uneven background. This algorithm provides a faster, easier, and more reliable method than manual measurement for the confluency assessment of stem cell cultures. The present scheme proves to be valid for the prediction of the confluency and growth of stem cells at early stages for tissue engineering in reparatory clinical surgery. The method used in this paper is capable of processing images of cells that already contain various defects due to either personnel mishandling or microscope limitations. Therefore, it provides proper information even out of the worst original images available. Copyright © 2017 Elsevier Ltd. All rights reserved.
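A minimal sketch of the adaptive-thresholding step is shown below, assuming a simple local-mean rule computed with an integral image. This is not the paper's pipeline (which couples BM3D denoising with its own thresholding); it only illustrates why a locally computed threshold tolerates the uneven illumination that defeats a single global cutoff:

```python
import numpy as np

def adaptive_threshold(img, block=15, offset=0.0):
    """Binarize an image by comparing each pixel to the mean of its
    block x block neighborhood plus an offset (block should be odd)."""
    img = np.asarray(img, float)
    pad = block // 2
    padded = np.pad(img, pad, mode="reflect")
    # integral image: each window sum from four corner lookups, O(1) per pixel
    ii = np.pad(padded, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    h, w = img.shape
    r, c = np.arange(h), np.arange(w)
    s = (ii[r[:, None] + block, c[None, :] + block]
         - ii[r[:, None], c[None, :] + block]
         - ii[r[:, None] + block, c[None, :]]
         + ii[r[:, None], c[None, :]])
    local_mean = s / (block * block)
    return (img > local_mean + offset).astype(np.uint8)
```

On an image with a smooth illumination gradient, every background pixel roughly equals its own local mean, so only genuinely brighter structures (cells) exceed the local threshold.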

  10. Method and system for advancement of a borehole using a high power laser

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moxley, Joel F.; Land, Mark S.; Rinzler, Charles C.

    2014-09-09

    There is provided a system, apparatus and methods for the laser drilling of a borehole in the earth. There is further provided within the system a means for delivering high power laser energy down a deep borehole while maintaining sufficient power to advance such boreholes deep into the earth at highly efficient advancement rates, a laser bottom hole assembly, and fluid directing techniques and assemblies for removing the displaced material from the borehole.

  11. Binary video codec for data reduction in wireless visual sensor networks

    NASA Astrophysics Data System (ADS)

    Khursheed, Khursheed; Ahmad, Naeem; Imran, Muhammad; O'Nils, Mattias

    2013-02-01

    Wireless Visual Sensor Networks (WVSNs) are formed by deploying many Visual Sensor Nodes (VSNs) in the field. Typical applications of WVSNs include environmental monitoring, health care, industrial process monitoring, and stadium/airport monitoring for security. The energy budget in outdoor applications of WVSNs is limited to batteries, and frequent replacement of batteries is usually not desirable, so both the processing and the communication energy consumption of the VSN must be optimized so that the network remains functional for a longer duration. The images captured by a VSN contain a huge amount of data and require efficient computational resources for processing and wide communication bandwidth for transmitting the results. Image processing algorithms must be designed to be computationally simple while providing a high compression rate. For some applications of WVSNs, the captured images can be segmented into bi-level images, and bi-level image coding methods can then efficiently reduce the information amount in these segmented images. However, the compression rate of bi-level image coding methods is limited by the underlying compression algorithm, so there is a need for other intelligent and efficient algorithms that are computationally less complex and provide a better compression rate. Change coding is one such algorithm: it is computationally simple (requiring only exclusive-OR operations) and provides better compression efficiency than image coding, but it is effective only for applications with slight changes between adjacent frames of the video. Detecting and coding the Regions of Interest (ROIs) in the change frame further reduces the information amount in the change frame. However, if the number of objects in the change frames rises above a certain level, the compression efficiency of both change coding and ROI coding becomes worse than that of image coding. This paper explores the compression efficiency of the Binary Video Codec (BVC) for data reduction in WVSNs. We propose to implement all three compression techniques, i.e., image coding, change coding and ROI coding, at the VSN and then select the smallest bit stream among the three results. In this way the compression performance of the BVC never becomes worse than that of image coding. We conclude that the compression efficiency of BVC is always better than that of change coding and always better than or equal to that of ROI coding and image coding.
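The BVC selection rule (encode three ways, keep the smallest bit stream) can be sketched as follows. This is an illustrative simplification: zlib stands in for the bi-level image codec, and the ROI framing with a small bounding-box header is a hypothetical format, not the paper's actual one.

```python
import zlib
import numpy as np

def bvc_encode(frame, prev=None):
    """Pick the smallest of three candidate encodings of a bi-level
    frame: image coding, change coding (XOR with the previous frame),
    and ROI coding (bounding box of the changed pixels)."""
    candidates = {"image": zlib.compress(np.packbits(frame).tobytes())}
    if prev is not None:
        change = frame ^ prev
        candidates["change"] = zlib.compress(np.packbits(change).tobytes())
        ys, xs = np.nonzero(change)
        if ys.size:
            y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
            roi = change[y0:y1, x0:x1]
            header = np.array([y0, y1, x0, x1], np.uint16).tobytes()
            candidates["roi"] = header + zlib.compress(np.packbits(roi).tobytes())
        else:
            candidates["roi"] = b"\x00"   # nothing changed this frame
    mode = min(candidates, key=lambda k: len(candidates[k]))
    return mode, candidates[mode]
```

Because the minimum is taken over all three candidates, the transmitted size can never exceed plain image coding, which is exactly the guarantee the abstract claims.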

  12. Multi-Stakeholder Dynamic Optimization Framework for System-of-Systems Development and Evolution

    NASA Astrophysics Data System (ADS)

    Fang, Zhemei

    Architecture design for an "acknowledged" System-of-Systems (SoS), under performance uncertainty and constrained resources, remains a difficult problem. Composing an SoS via a proper mix of systems under the special control structure of an "acknowledged" SoS requires efficient distribution of the limited resources. However, due to the special traits of SoS, achieving an efficient distribution of the resources is not a trivial challenge. Currently, the major causes that lead to inefficient resource management for an "acknowledged" SoS include: 1) no central SoS managers with absolute authority to address conflict; 2) difficult balance between current and future decisions; 3) various uncertainties during development and operations (e.g., technology maturation, policy stability); 4) diverse sources of the resources; 5) high complexity in efficient formulation and computation due to the previous four factors. Although it is beyond the scope of this dissertation to simultaneously address all five items, the thesis will focus on the first, second, and fifth points, and partially cover the third point. In short, the dissertation aims to develop a generic framework for "acknowledged" SoS that leads to an appropriate mathematical formulation and a solution approach that generates a near-optimal set of multi-stage architectural decisions with limited collaboration between conflicted and independent stakeholders. This dissertation proposes a multi-stakeholder dynamic optimization (MUSTDO) method, which integrates approximate dynamic programming with a transfer contract coordination mechanism. The method solves a multi-stage architecture selection problem with an embedded formal, but simple, transfer contract coordination mechanism to address resource conflict. 
Once the values of transfer contracts are calculated appropriately, even though the SoS participants make independent decisions, the aggregate solutions are close to the solutions from a hypothetical ideal centralized case where the top-level SoS managers have full authority. In addition, the thesis builds the bridge between a given SoS problem and the mathematical interpretations of the MUSTDO method using a three-phase approach for real world applications. The method is applied to two case studies: one in the defense realm and one in the commercial realm. The first application uses a naval warfare scenario to demonstrate that the aggregated capabilities in the decentralized case using the MUSTDO method are close to the aggregated capabilities in a hypothetical centralized case. This evidence demonstrates that the MUSTDO method can help approach the SoS-level optimality with limited funding resources even if the participants make independent decisions. The solution also provides suggestions to the participants about the sequence of architecting decisions and the amount of transfer contract to be sent out to maximize individual capability over time. The suggested decisions incorporate the potential capability increase in the future, which differentiates this approach from allocating all the resources to the current development. The quantified numbers of transfer contract in this case study are equivalent capabilities that are relevant to equipment loan or technology transfer. The second case study applies the MUSTDO-based framework to address a multi-airline fleet allocation problem with an emissions allowances constraint provided by the regulators. Two representative airlines, a low-cost airline and a legacy airline, aim to maximize individual profit by allocating six types of aircraft to a given ten-route network under the emissions constraint. 
Both the deterministic and stochastic experiments verify the effectiveness of the MUSTDO method by comparing the profit in the decentralized case and the profit in a utopian centralized case. Meanwhile, sensitivity studies demonstrate that a higher minimum demand requirement and a lower discount factor can further improve the efficiency of emissions allowances utilization in the MUSTDO method. Compared to an alternative grandfathering approach, the MUSTDO method can guarantee a high-level efficiency of resource allocation by avoiding failed allocation decisions due to inaccurate information for the regulators. In summary, the framework aids the SoS managers and participants in the selection of the best architecture over a period of time with limited resources; the framework helps the decision makers to understand how they can affect each other and cooperate to achieve a more efficient solution without sharing full information. The major contributions of this dissertation are: 1) a method to address multi-stage SoS composition decisions over time with resource constraints; 2) a method to manage resource conflict for stakeholders in an "acknowledged" system-of-systems; 3) a new perspective on long-term interactions between stakeholders in an SoS; 4) a procedural framework to implement the MUSTDO method; 5) a comparison of different applications of the MUSTDO framework in distinct fields.

  13. Methods for operating parallel computing systems employing sequenced communications

    DOEpatents

    Benner, R.E.; Gustafson, J.L.; Montry, G.R.

    1999-08-10

    A parallel computing system and method are disclosed having improved performance where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing a system which can provide efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes with the computing system. 15 figs.

  14. Rapid thermal processing by stamping

    DOEpatents

    Stradins, Pauls; Wang, Qi

    2013-03-05

    A rapid thermal processing device and methods are provided for thermal processing of samples such as semiconductor wafers. The device has components including a stamp (35) having a stamping surface and a heater or cooler (40) to bring it to a selected processing temperature, a sample holder (20) for holding a sample (10) in position for intimate contact with the stamping surface; and positioning components (25) for moving the stamping surface and the stamp (35) in and away from intimate, substantially non-pressured contact. Methods for using and making such devices are also provided. These devices and methods allow inexpensive, efficient, easily controllable thermal processing.

  15. Methods for operating parallel computing systems employing sequenced communications

    DOEpatents

    Benner, Robert E.; Gustafson, John L.; Montry, Gary R.

    1999-01-01

    A parallel computing system and method having improved performance where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing a system which can provide efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes with the computing system.

  16. Method for simultaneous overlapped communications between neighboring processors in a multiple

    DOEpatents

    Benner, Robert E.; Gustafson, John L.; Montry, Gary R.

    1991-01-01

A parallel computing system and method having improved performance where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing a system which can provide efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes within the computing system.

  17. Improving multivariate Horner schemes with Monte Carlo tree search

    NASA Astrophysics Data System (ADS)

    Kuipers, J.; Plaat, A.; Vermaseren, J. A. M.; van den Herik, H. J.

    2013-11-01

Optimizing the cost of evaluating a polynomial is a classic problem in computer science. For polynomials in one variable, Horner's method provides a scheme for producing a computationally efficient form. For multivariate polynomials it is possible to generalize Horner's method, but this leaves freedom in the order of the variables. Traditionally, greedy schemes like most-occurring-variable-first are used. This simple textbook algorithm has given remarkably efficient results, and finding better algorithms has proved difficult. In trying to improve upon the greedy scheme, we have implemented Monte Carlo tree search, a recent search method from the field of artificial intelligence. This results in better Horner schemes and reduces the cost of evaluating polynomials, sometimes by factors of up to two.
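The univariate scheme that the paper generalizes can be sketched in a few lines; this is a textbook illustration, not code from the paper:

```python
def horner(coeffs, x):
    """Evaluate a polynomial at x using Horner's scheme.

    coeffs lists coefficients from the highest degree down to the
    constant term, so [2, 0, -4, 1] is 2x^3 - 4x + 1.  The scheme
    needs only n multiplications and n additions for degree n,
    versus O(n^2) operations for naive term-by-term evaluation.
    """
    result = 0
    for a in coeffs:
        result = result * x + a
    return result

# 2x^3 - 4x + 1 at x = 3: ((2*3 + 0)*3 - 4)*3 + 1 = 43
print(horner([2, 0, -4, 1], 3))  # -> 43
```

The multivariate generalization factors out one variable at a time and applies the same idea recursively; the order in which variables are factored out is exactly the freedom the Monte Carlo tree search exploits.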

  18. [The socio-hygienic monitoring as an integral system for health risk assessment and risk management at the regional level].

    PubMed

    Kuzmin, S V; Gurvich, V B; Dikonskaya, O V; Malykh, O L; Yarushin, S V; Romanov, S V; Kornilkov, A S

    2013-01-01

The information and analytical framework for introducing health risk assessment and risk management methodologies in the Sverdlovsk Region is the system of socio-hygienic monitoring. Risk management techniques have been developed and proposed that account for the choice of the most cost-effective and efficient actions for improving the sanitary and epidemiologic situation at the level of a region, municipality, or business entity of the Russian Federation. To assess the efficiency of planning and activities for health risk management, common methodological approaches and the economic methods of cost-effectiveness and cost-benefit analysis, provided in methodological recommendations and introduced in the Russian Federation, are applied.

  19. Positivity-preserving well-balanced discontinuous Galerkin methods for the shallow water flows in open channels

    NASA Astrophysics Data System (ADS)

    Qian, Shouguo; Li, Gang; Shao, Fengjing; Xing, Yulong

    2018-05-01

In this paper, we construct and study efficient high-order discontinuous Galerkin methods for shallow water flows in open channels with irregular geometry and a non-flat bottom topography. The proposed methods are well-balanced for the still-water steady-state solution and can preserve the non-negativity of the wet cross section numerically. The well-balanced property is obtained via a novel source term separation and discretization. A simple positivity-preserving limiter is employed to provide efficient and robust simulations near wetting and drying fronts. Numerical examples are performed to verify the well-balanced property, the non-negativity of the wet cross section, and good performance for both continuous and discontinuous solutions.

  20. System and method for networking electrochemical devices

    DOEpatents

    Williams, Mark C.; Wimer, John G.; Archer, David H.

    1995-01-01

    An improved electrochemically active system and method including a plurality of electrochemical devices, such as fuel cells and fluid separation devices, in which the anode and cathode process-fluid flow chambers are connected in fluid-flow arrangements so that the operating parameters of each of said plurality of electrochemical devices which are dependent upon process-fluid parameters may be individually controlled to provide improved operating efficiency. The improvements in operation include improved power efficiency and improved fuel utilization in fuel cell power generating systems and reduced power consumption in fluid separation devices and the like through interstage process fluid parameter control for series networked electrochemical devices. The improved networking method includes recycling of various process flows to enhance the overall control scheme.

  1. Efficient physics-based tracking of heart surface motion for beating heart surgery robotic systems.

    PubMed

    Bogatyrenko, Evgeniya; Pompey, Pascal; Hanebeck, Uwe D

    2011-05-01

Tracking of beating heart motion in a robotic surgery system is required for complex cardiovascular interventions. A heart surface motion tracking method is developed, including a stochastic physics-based heart surface model and an efficient reconstruction algorithm. The algorithm uses the constraints provided by the model, which exploits the physical characteristics of the heart. The main advantage of the model is that it is more realistic than most standard heart models. Additionally, no explicit matching between the measurements and the model is required. The application of meshless methods significantly reduces the complexity of physics-based tracking. Based on the stochastic physical model of the heart surface, this approach considers the motion of the intervention area and is robust to occlusions and reflections. The tracking algorithm is evaluated in simulations and experiments on an artificial heart. Providing higher accuracy than the standard model-based methods, it successfully copes with occlusions and maintains high performance even when not all measurements are available. Combining the physical and stochastic description of the heart surface motion ensures physically correct and accurate prediction. Automatic initialization of the physics-based cardiac motion tracking enables system evaluation in a clinical environment.

  2. Intracellular generation of single-strand template increases the knock-in efficiency by combining CRISPR/Cas9 with AAV.

    PubMed

    Xiao, Qing; Min, Taishan; Ma, Shuangping; Hu, Lingna; Chen, Hongyan; Lu, Daru

    2018-04-18

Targeted integration of transgenes facilitates functional genomic research and holds promise for gene therapy. The established microhomology-mediated end-joining (MMEJ)-based strategy leads to precise gene knock-in with an easily constructed donor, yet its limited efficiency remains to be improved. Here, we show that a single-strand DNA (ssDNA) donor efficiently increases knock-in efficiency, and we establish a method to achieve intracellular linearization of a long ssDNA donor. We identified that the CRISPR/Cas9 system is responsible for breaking the double-strand DNA (dsDNA) of the palindromic structure in the inverted terminal repeats (ITRs) region of recombinant adeno-associated virus (AAV), leading to the inhibition of viral second-strand DNA synthesis. Combining Cas9 plasmids targeting the genome and the ITR with AAV donor delivery, precise knock-in of a gene cassette was achieved, with 13-14% of the donor insertion events being mediated by MMEJ in HEK 293T cells. This study describes a novel method to integrate large single-strand transgene cassettes into genomes, increasing knock-in efficiency by 13.6-19.5-fold relative to the conventional AAV-mediated method. It also provides a comprehensive solution to the challenges of complicated production and difficult delivery of large exogenous fragments.

  3. Method for Evaluating Energy Use of Dishwashers, Clothes Washers, and Clothes Dryers: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eastment, M.; Hendron, R.

Building America teams are researching opportunities to improve energy efficiency for some of the more challenging end-uses, such as lighting (both fixed and occupant-provided), appliances (clothes washer, dishwasher, clothes dryer, refrigerator, and range), and miscellaneous electric loads, which are all heavily dependent on occupant behavior and product choices. These end-uses have grown to be a much more significant fraction of total household energy use (as much as 50% for very efficient homes) as energy-efficient homes have become more commonplace through programs such as ENERGY STAR and Building America. As modern appliances become more sophisticated, the residential energy analyst is faced with a daunting task in trying to calculate the energy savings of high-efficiency appliances. Unfortunately, most whole-building simulation tools do not allow the input of detailed appliance specifications. Using DOE test procedures, the method outlined in this paper presents a reasonable way to generate inputs for whole-building energy-simulation tools. The information necessary to generate these inputs is available on EnergyGuide labels, the ENERGY STAR website, the California Energy Commission's appliance website, and manufacturers' literature. Building America has developed a standard method for analyzing the effect of high-efficiency appliances on whole-building energy consumption when compared to Building America's Research Benchmark building.

  4. Using the entire history in the analysis of nested case cohort samples.

    PubMed

    Rivera, C L; Lumley, T

    2016-08-15

Countermatching designs can provide more efficient estimates than simple matching or case-cohort designs in certain situations, such as when good surrogate variables for an exposure of interest are available. We extend pseudolikelihood estimation for the Cox model under countermatching designs to models where time-varying covariates are considered. We also implement pseudolikelihood with calibrated weights to improve efficiency in nested case-control designs in the presence of time-varying variables. A simulation study is carried out, which considers four different scenarios including a binary time-dependent variable, a continuous time-dependent variable, and the case including interactions in each. Simulation results show that pseudolikelihood with calibrated weights under countermatching offers large gains in efficiency compared to case-cohort designs. Pseudolikelihood with calibrated weights yielded more efficient estimators than pseudolikelihood estimators. Additionally, estimators were more efficient under countermatching than under case-cohort for the situations considered. The methods are illustrated using the Colorado Plateau uranium miners cohort. Furthermore, we present a general method to generate survival times with time-varying covariates. Copyright © 2016 John Wiley & Sons, Ltd.
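The abstract's general method for generating survival times with time-varying covariates is not reproduced here, but the standard inverse-cumulative-hazard construction it builds on can be sketched for the simplest case: a piecewise-exponential model with one binary covariate that switches on at a known time. All function names and parameter values below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def sample_survival_time(h0, beta, t_switch, rng):
    """Draw one survival time from a piecewise-exponential model with a
    binary time-varying covariate that switches on at t_switch and
    multiplies the baseline hazard h0 by exp(beta).  Works by inverting
    the cumulative hazard H(t) at a unit-exponential draw -log(U)."""
    target = -np.log(rng.uniform())
    if target <= h0 * t_switch:          # event occurs before the switch
        return target / h0
    h1 = h0 * np.exp(beta)               # elevated hazard after the switch
    return t_switch + (target - h0 * t_switch) / h1

rng = np.random.default_rng(42)
times = np.array([sample_survival_time(0.5, 1.0, 2.0, rng)
                  for _ in range(20000)])
# Analytic mean for these parameters: 2(1 - e^{-1}) + 2e^{-2} ~= 1.535
print(times.mean())
```

The same inversion idea extends to any piecewise-constant hazard path, which is what makes it useful for simulating covariates that change value at multiple follow-up times.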

  5. YAMAT-seq: an efficient method for high-throughput sequencing of mature transfer RNAs

    PubMed Central

    Shigematsu, Megumi; Honda, Shozo; Loher, Phillipe; Telonis, Aristeidis G.; Rigoutsos, Isidore

    2017-01-01

    Abstract Besides translation, transfer RNAs (tRNAs) play many non-canonical roles in various biological pathways and exhibit highly variable expression profiles. To unravel the emerging complexities of tRNA biology and molecular mechanisms underlying them, an efficient tRNA sequencing method is required. However, the rigid structure of tRNA has been presenting a challenge to the development of such methods. We report the development of Y-shaped Adapter-ligated MAture TRNA sequencing (YAMAT-seq), an efficient and convenient method for high-throughput sequencing of mature tRNAs. YAMAT-seq circumvents the issue of inefficient adapter ligation, a characteristic of conventional RNA sequencing methods for mature tRNAs, by employing the efficient and specific ligation of Y-shaped adapter to mature tRNAs using T4 RNA Ligase 2. Subsequent cDNA amplification and next-generation sequencing successfully yield numerous mature tRNA sequences. YAMAT-seq has high specificity for mature tRNAs and high sensitivity to detect most isoacceptors from minute amount of total RNA. Moreover, YAMAT-seq shows quantitative capability to estimate expression levels of mature tRNAs, and has high reproducibility and broad applicability for various cell lines. YAMAT-seq thus provides high-throughput technique for identifying tRNA profiles and their regulations in various transcriptomes, which could play important regulatory roles in translation and other biological processes. PMID:28108659

  6. An accurate method for solving a class of fractional Sturm-Liouville eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Kashkari, Bothayna S. H.; Syam, Muhammed I.

    2018-06-01

    This article is devoted to both theoretical and numerical study of the eigenvalues of nonsingular fractional second-order Sturm-Liouville problem. In this paper, we implement a fractional-order Legendre Tau method to approximate the eigenvalues. This method transforms the Sturm-Liouville problem to a sparse nonsingular linear system which is solved using the continuation method. Theoretical results for the considered problem are provided and proved. Numerical results are presented to show the efficiency of the proposed method.

  7. Efficient Design and Analysis of Lightweight Reinforced Core Sandwich and PRSEUS Structures

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Yarrington, Phillip W.; Lucking, Ryan C.; Collier, Craig S.; Ainsworth, James J.; Toubia, Elias A.

    2012-01-01

Design, analysis, and sizing methods for two novel structural panel concepts have been developed and incorporated into the HyperSizer Structural Sizing Software. Reinforced Core Sandwich (RCS) panels consist of a foam core with reinforcing composite webs connecting composite facesheets. Boeing's Pultruded Rod Stitched Efficient Unitized Structure (PRSEUS) panels use a pultruded unidirectional composite rod to provide axial stiffness along with integrated transverse frames and stitching. Both of these structural concepts are oven-cured and have shown great promise for applications in lightweight structures, but have suffered from the lack of efficient sizing capabilities similar to those that exist for honeycomb sandwich, foam sandwich, hat-stiffened, and other, more traditional concepts. Now, with accurate design methods for RCS and PRSEUS panels available in HyperSizer, these concepts can be traded and used in designs as is done with the more traditional structural concepts. The methods developed to enable sizing of RCS and PRSEUS are outlined, as are results showing the validity and utility of the methods. Applications include several large NASA heavy-lift launch vehicle structures.

  8. Discriminative non-negative matrix factorization (DNMF) and its application to the fault diagnosis of diesel engine

    NASA Astrophysics Data System (ADS)

    Yang, Yong-sheng; Ming, An-bo; Zhang, You-yun; Zhu, Yong-sheng

    2017-10-01

Diesel engines, widely used in engineering, are very important for the running of equipment, and their fault diagnosis has attracted much attention. In the past several decades, image-based fault diagnosis methods have provided efficient ways to diagnose diesel engine faults. By introducing class information into the traditional non-negative matrix factorization (NMF), an improved NMF algorithm named discriminative NMF (DNMF) was developed, and a novel image-based fault diagnosis method was proposed by combining the DNMF and the KNN classifier. Experiments on diesel engine fault diagnosis were used to validate the efficacy of the proposed method. It is shown that the fault conditions of a diesel engine can be efficiently classified by the proposed method using the coefficient matrix obtained by DNMF. Compared with the original NMF (ONMF) and principal component analysis (PCA), the DNMF can represent the class information more efficiently because the class characters of the basis matrices obtained by the DNMF are more visible than those in the basis matrices obtained by the ONMF and PCA.
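The discriminative variant is specific to the paper, but the overall pipeline it modifies (factorize non-negative features, then classify the coefficient matrix with KNN) can be sketched with ordinary NMF from scikit-learn. The data here are synthetic stand-ins for vibration-image features, not real diesel engine measurements:

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Synthetic non-negative feature vectors standing in for two fault classes
X = np.vstack([rng.random((20, 64)) + 0.5,    # fault class 0: high magnitude
               rng.random((20, 64)) * 0.3])   # fault class 1: low magnitude
y = np.array([0] * 20 + [1] * 20)

# Factor X ~= W H; each row of the coefficient matrix W is a
# low-dimensional code used as the feature vector for classification
nmf = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
W = nmf.fit_transform(X)

clf = KNeighborsClassifier(n_neighbors=3).fit(W, y)
print(clf.score(W, y))   # training accuracy on the NMF codes
```

DNMF differs by adding a supervised term to the factorization objective so that the basis vectors themselves separate the classes; the classification stage is unchanged.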

  9. Applied Use Value of Scientific Information for Management of Ecosystem Services

    NASA Astrophysics Data System (ADS)

    Raunikar, R. P.; Forney, W.; Bernknopf, R.; Mishra, S.

    2012-12-01

The U.S. Geological Survey has developed and applied methods for quantifying the value of scientific information (VOI) that are based on the applied use value of the information. In particular, the applied use value of U.S. Geological Survey information often includes efficient management of ecosystem services. The economic nature of U.S. Geological Survey scientific information is largely equivalent to that of any information, but we focus the application of our VOI quantification methods on the information products provided freely to the public by the U.S. Geological Survey. We describe VOI economics in general and illustrate by referring to previous studies that use the evolving applied use value methods, which include examples of the siting of landfills in Louden County, the mineral exploration efficiencies of finer-resolution geologic maps in Canada, and improved agricultural production and groundwater protection in Eastern Iowa made possible by Landsat moderate-resolution satellite imagery. Finally, we describe the adaptation of the applied use value method to the case of streamgage information used to improve the efficiency of water markets in New Mexico.

  10. Nested sparse grid collocation method with delay and transformation for subsurface flow and transport problems

    NASA Astrophysics Data System (ADS)

    Liao, Qinzhuo; Zhang, Dongxiao; Tchelepi, Hamdi

    2017-06-01

In numerical modeling of subsurface flow and transport problems, formation properties may not be deterministically characterized, which leads to uncertainty in simulation results. In this study, we propose a sparse grid collocation method, which adopts nested quadrature rules with delay and transformation to quantify the uncertainty of model solutions. We show that the nested Kronrod-Patterson-Hermite quadrature is more efficient than the unnested Gauss-Hermite quadrature. We compare the convergence rates of various quadrature rules, including the domain truncation and domain mapping approaches. To further improve accuracy and efficiency, we present a delayed process in selecting quadrature nodes and a transformed process for approximating unsmooth or discontinuous solutions. The proposed method is tested on an analytical function and on one-dimensional single-phase and two-phase flow problems with different spatial variances and correlation lengths. An additional example is given to demonstrate its applicability to three-dimensional black-oil models. It is found from these examples that the proposed method provides a promising approach for obtaining satisfactory estimation of the solution statistics and is much more efficient than Monte Carlo simulations.
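As a minimal illustration of the quadrature machinery the paper compares against, the unnested Gauss-Hermite rule is available directly in NumPy. The sketch below assumes a single standard-normal uncertain input (the paper's problems are far higher-dimensional, which is what motivates sparse grids) and estimates an expectation as a weighted sum over nodes:

```python
import numpy as np

# Gauss-Hermite rule integrates against the weight e^{-x^2}; a change of
# variables x -> sqrt(2) x and division by sqrt(pi) turn it into an
# expectation under the standard normal N(0, 1).
nodes, weights = np.polynomial.hermite.hermgauss(7)

def normal_expectation(f):
    """Approximate E[f(xi)] for xi ~ N(0, 1) with 7 quadrature nodes."""
    return np.sum(weights * f(np.sqrt(2.0) * nodes)) / np.sqrt(np.pi)

# E[exp(xi)] = exp(1/2) ~= 1.64872 for a lognormal-type quantity
print(normal_expectation(np.exp))
```

An n-point rule is exact for polynomials up to degree 2n-1; the nested Kronrod-Patterson-Hermite rules favored in the paper reuse nodes across levels, which is what makes sparse-grid refinement cheap in many dimensions.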

  11. The impact of physician payment methods on raising the efficiency of the healthcare system: an international comparison.

    PubMed

    Simoens, Steven; Giuffrida, Antonio

    2004-01-01

    This article reviews policies on physician payment methods that Organisation for Economic Cooperation and Development (OECD) countries have implemented to promote an efficient deployment of physicians. Countries' experiences show that payment by fee-for-service, capitation and salary influences physician activity levels and productivity. However, the impact of these simple payment methods is complex and may be diluted by clinical, demographic, ethical and organisational factors. Policies that have attempted to curb health expenditure by controlling fee levels have sometimes been eroded by physicians increasing the volume of service supply, or providing services that attract higher fees. Flexible blended payment methods based on the combination of a fixed component, through either capitation or salary, and a variable component, through fee-for-service, may produce a desirable mix of incentives. Integrating such blended payment methods with mechanisms to monitor physician activity may offer potential success.

  12. A method suitable for DNA extraction from humus-rich soil.

    PubMed

    Miao, Tianjin; Gao, Song; Jiang, Shengwei; Kan, Guoshi; Liu, Pengju; Wu, Xianming; An, Yingfeng; Yao, Shuo

    2014-11-01

A rapid and convenient method for extracting DNA from soil is presented. Soil DNA is extracted by direct cell lysis in the presence of EDTA, SDS, phenol, chloroform and isoamyl alcohol (3-methyl-1-butanol), followed by precipitation with 2-propanol. The extracted DNA is purified by a modified DNA purification kit and a DNA gel extraction kit. With this method, DNA extracted from humus-rich dark brown forest soil was free from humic substances and, therefore, could be used for efficient PCR amplification and restriction digestion. In contrast, a DNA sample extracted with the traditional CTAB-based method had lower yield and purity, and no DNA could be extracted from the same soil sample with a commonly used commercial soil DNA isolation kit. In addition, this method is time-saving and convenient, providing an efficient choice especially for DNA extraction from humus-rich soils.

  13. Study of Adaptive Mathematical Models for Deriving Automated Pilot Performance Measurement Techniques. Volume I. Model Development.

    ERIC Educational Resources Information Center

    Connelly, Edward A.; And Others

    A new approach to deriving human performance measures and criteria for use in automatically evaluating trainee performance is documented in this report. The ultimate application of the research is to provide methods for automatically measuring pilot performance in a flight simulator or from recorded in-flight data. An efficient method of…

  14. Examining the Effectiveness of Student Authentication and Authenticity in Online Learning at Community Colleges

    ERIC Educational Resources Information Center

    Hoshiar, Mitra; Dunlap, Jody; Li, Jinyi; Friedel, Janice Nahra

    2014-01-01

    Online learning is rapidly becoming one of the most prevalent delivery methods of learning in institutions of higher education. It provides college students, especially adult students, an alternative, convenient, and cost-efficient method to earn their credentials, upgrade their skills and knowledge, and keep or upgrade their employment. But at…

  15. Length polymorphism scanning is an efficient approach for revealing chloroplast DNA variation.

    Treesearch

    Matthew E. Horning; Richard C. Cronn

    2006-01-01

    Phylogeographic and population genetic screens of chloroplast DNA (cpDNA) provide insights into seedbased gene flow in angiosperms, yet studies are frequently hampered by the low mutation rate of this genome. Detection methods for intraspecific variation can be either direct (DNA sequencing) or indirect (PCR-RFLP), although no single method incorporates the best...

  16. Overcoming Language and Literacy Barriers: Using Student Response System Technology to Collect Quality Program Evaluation Data from Immigrant Participants

    ERIC Educational Resources Information Center

    Walker, Susan K.; Mao, Dung

    2016-01-01

    Student response system technology was employed for parenting education program evaluation data collection with Karen adults. The technology, with translation and use of an interpreter, provided an efficient and secure method that respected oral language and collective learning preferences and accommodated literacy needs. The method was popular…

  17. Engineering crop nutrient efficiency for sustainable agriculture.

    PubMed

    Chen, Liyu; Liao, Hong

    2017-10-01

    Increasing crop yields can provide food, animal feed, bioenergy feedstocks and biomaterials to meet increasing global demand; however, the methods used to increase yield can negatively affect sustainability. For example, application of excess fertilizer can generate and maintain high yields but also increases input costs and contributes to environmental damage through eutrophication, soil acidification and air pollution. Improving crop nutrient efficiency can improve agricultural sustainability by increasing yield while decreasing input costs and harmful environmental effects. Here, we review the mechanisms of nutrient efficiency (primarily for nitrogen, phosphorus, potassium and iron) and breeding strategies for improving this trait, along with the role of regulation of gene expression in enhancing crop nutrient efficiency to increase yields. We focus on the importance of root system architecture to improve nutrient acquisition efficiency, as well as the contributions of mineral translocation, remobilization and metabolic efficiency to nutrient utilization efficiency. © 2017 Institute of Botany, Chinese Academy of Sciences.

  18. Efficiency improvement by navigated safety inspection involving visual clutter based on the random search model.

    PubMed

    Sun, Xinlu; Chong, Heap-Yih; Liao, Pin-Chao

    2018-06-25

Navigated inspection seeks to improve hazard identification (HI) accuracy. With tight inspection schedules, HI must also be efficient. However, lacking a quantification of HI efficiency, navigated inspection strategies cannot be comprehensively assessed. This work aims to determine inspection efficiency in navigated safety inspection, controlling for HI accuracy. Based on a cognitive method, the random search model (RSM), an experiment was conducted to observe HI efficiency under navigation for a variety of visual clutter (VC) scenarios, while using eye-tracking devices to record the search process and analyze search performance. The results show that the RSM is an appropriate instrument and that VC serves as a hazard classifier for navigated inspection in improving inspection efficiency. This suggests a new and effective solution for addressing the low accuracy and efficiency of manual inspection through navigated inspection involving VC and the RSM. It also provides insights into inspectors' safety inspection ability.

  19. Measuring of electrical changes induced by in situ combustion through flow-through electrodes in a laboratory sample of core material

    DOEpatents

    Lee, D.O.; Montoya, P.C.; Wayland, J.R. Jr.

    1986-12-09

Method and apparatus are provided for obtaining accurate dynamic measurements for passage of phase fronts through a core sample in a test fixture. Flow-through grid structures are provided for electrodes to permit data to be obtained before, during and after passage of a front therethrough. Such electrodes are incorporated in a test apparatus for obtaining electrical characteristics of the core sample. With the inventive structure a method is provided for measurement of instabilities in a phase front progressing through the medium. Availability of accurate dynamic data representing parameters descriptive of material characteristics before, during and after passage of a front provides a more efficient method for enhanced recovery of oil using a fire flood technique. 12 figs.

  20. Measuring of electrical changes induced by in situ combustion through flow-through electrodes in a laboratory sample of core material

    DOEpatents

    Lee, David O.; Montoya, Paul C.; Wayland, Jr., James R.

    1986-01-01

    Method and apparatus are provided for obtaining accurate dynamic measurements for passage of phase fronts through a core sample in a test fixture. Flow-through grid structures are provided for electrodes to permit data to be obtained before, during and after passage of a front therethrough. Such electrodes are incorporated in a test apparatus for obtaining electrical characteristics of the core sample. With the inventive structure a method is provided for measurement of instabilities in a phase front progressing through the medium. Availability of accurate dynamic data representing parameters descriptive of material characteristics before, during and after passage of a front provides a more efficient method for enhanced recovery of oil using a fire flood technique.

  1. Convergence and divergence across construction methods for human brain white matter networks: an assessment based on individual differences.

    PubMed

    Zhong, Suyu; He, Yong; Gong, Gaolang

    2015-05-01

Using diffusion MRI, a number of studies have investigated the properties of whole-brain white matter (WM) networks with differing network construction methods (node/edge definition). However, how the construction methods affect individual differences in WM networks and, particularly, whether distinct methods provide convergent or divergent patterns of individual differences remain largely unknown. Here, we applied 10 frequently used methods to construct whole-brain WM networks in a healthy young adult population (57 subjects), which involves two node definitions (low-resolution and high-resolution) and five edge definitions (binary, FA weighted, fiber-density weighted, length-corrected fiber-density weighted, and connectivity-probability weighted). For these WM networks, individual differences were systematically analyzed in three network aspects: (1) the spatial pattern of WM connections, (2) the spatial pattern of nodal efficiency, and (3) network global and local efficiencies. Intriguingly, we found that some of the network construction methods converged in terms of individual difference patterns but diverged from other methods. Furthermore, the convergence/divergence between methods differed among the network properties adopted to assess individual differences. Particularly, high-resolution WM networks with differing edge definitions showed convergent individual differences in the spatial patterns of both WM connections and nodal efficiency. For the network global and local efficiencies, low-resolution and high-resolution WM networks for most edge definitions consistently exhibited a highly convergent pattern of individual differences. Finally, the test-retest analysis revealed decent temporal reproducibility for the patterns of between-method convergence/divergence. Together, the results of the present study demonstrate a measure-dependent effect of network construction methods on the individual differences of WM network properties.
© 2015 Wiley Periodicals, Inc.
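The global and local efficiency measures analyzed above have standard graph-theoretic definitions; a minimal sketch using NetworkX on a toy binary graph (a stand-in for a thresholded WM connectivity matrix, not actual diffusion-MRI data):

```python
import networkx as nx

# Toy binary graph standing in for a small thresholded connectivity matrix
G = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)])

# Global efficiency: average inverse shortest-path length over all node
# pairs; here (1+1+1+1+1+1/2)/6 = 11/12
print(nx.global_efficiency(G))

# Local efficiency: mean, over nodes, of the global efficiency of the
# subgraph induced by each node's neighbors (fault tolerance of the node)
print(nx.local_efficiency(G))
```

For the weighted edge definitions in the study, shortest paths are computed on inverse edge weights rather than hop counts, but the efficiency formulas are otherwise the same.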

  2. A heuristic approach using multiple criteria for environmentally benign 3PLs selection

    NASA Astrophysics Data System (ADS)

    Kongar, Elif

    2005-11-01

Maintaining competitiveness in an environment where price and quality differences between competing products are disappearing depends on the company's ability to reduce costs and supply time. Timely responses to rapidly changing market conditions require efficient Supply Chain Management (SCM). Outsourcing logistics to third-party logistics service providers (3PLs) is one commonly used way of increasing the efficiency of logistics operations, while creating a more "core competency focused" business environment. However, this alone may not be sufficient. Due to recent environmental regulations and growing public awareness regarding environmental issues, 3PLs need to be not only efficient but also environmentally benign to maintain companies' competitiveness. Even though an efficient and environmentally benign combination of 3PLs can theoretically be obtained using exhaustive search algorithms, heuristic approaches to the selection process may be superior in terms of computational complexity. In this paper, a hybrid approach that combines a multiple-criteria Genetic Algorithm (GA) with the Linear Physical Weighting Algorithm (LPPW) for selecting efficient and environmentally benign 3PLs is proposed. A numerical example is also provided to illustrate the method and the analyses.

  3. A method for predicting DCT-based denoising efficiency for grayscale images corrupted by AWGN and additive spatially correlated noise

    NASA Astrophysics Data System (ADS)

    Rubel, Aleksey S.; Lukin, Vladimir V.; Egiazarian, Karen O.

    2015-03-01

    Results of denoising based on discrete cosine transform for a wide class of images corrupted by additive noise are obtained. Three types of noise are analyzed: additive white Gaussian noise and additive spatially correlated Gaussian noise with middle and high correlation levels. TID2013 image database and some additional images are taken as test images. Conventional DCT filter and BM3D are used as denoising techniques. Denoising efficiency is described by PSNR and PSNR-HVS-M metrics. Within hard-thresholding denoising mechanism, DCT-spectrum coefficient statistics are used to characterize images and, subsequently, denoising efficiency for them. Results of denoising efficiency are fitted for such statistics and efficient approximations are obtained. It is shown that the obtained approximations provide high accuracy of prediction of denoising efficiency.
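The hard-thresholding mechanism that the prediction is built on can be sketched as follows (a minimal sketch assuming a known noise sigma and an orthonormal 2D DCT; the threshold factor 2.7 is a common choice in the DCT-denoising literature, not necessarily the paper's exact setting):

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)

# Toy image: a smooth ramp corrupted by AWGN with known sigma.
clean = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
sigma = 0.1
noisy = clean + rng.normal(0, sigma, clean.shape)

# Hard thresholding in the orthonormal 2D DCT domain: zero out all
# coefficients whose magnitude falls below beta * sigma.
beta = 2.7
coeffs = dctn(noisy, norm="ortho")
coeffs[np.abs(coeffs) < beta * sigma] = 0.0
denoised = idctn(coeffs, norm="ortho")

mse_noisy = np.mean((noisy - clean) ** 2)
mse_denoised = np.mean((denoised - clean) ** 2)
print(mse_noisy, mse_denoised)
```

The paper's contribution is to predict, from the DCT-spectrum statistics of the noisy image alone, how much such a filter will improve PSNR/PSNR-HVS-M.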

  4. Analysis and determination the efficiency of the European health systems.

    PubMed

    Del Rocío Moreno-Enguix, María; Gómez-Gallego, Juan Cándido; Gómez Gallego, María

    2018-01-01

The current economic crisis has increased interest in analyzing the efficiency of health care systems, as their funding is a very important part of the budgets of different countries. This work determines the efficiency of the health services in European countries by applying data envelopment analysis. In addition, the combined application of data envelopment analysis methods and ACP can provide an evaluation of the efficiency with respect to differently oriented productive health systems in the different countries. The results show that the models with the lowest level of efficiency are those whose input is beds, followed by the models whose input is physicians. Finally, we apply the AD to select a few simple indicators that facilitate control of the level of operational efficiency of a health system. Copyright © 2017 John Wiley & Sons, Ltd.
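A minimal sketch of the data-envelopment-analysis step, assuming a standard input-oriented CCR model solved by linear programming (the two-hospital data below are invented purely for illustration):

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y, o):
    """Input-oriented CCR efficiency of decision-making unit `o`.

    X: (m, n) inputs and Y: (s, n) outputs for n DMUs.
    Solves: min theta  s.t.  X @ lam <= theta * X[:, o],
                             Y @ lam >= Y[:, o],  lam >= 0.
    """
    m, n = X.shape
    s = Y.shape[0]
    # Decision variables: [theta, lam_1, ..., lam_n]; minimize theta.
    c = np.r_[1.0, np.zeros(n)]
    # Input constraints: X @ lam - theta * x_o <= 0
    A_in = np.hstack([-X[:, [o]], X])
    # Output constraints: -Y @ lam <= -y_o
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[:, o]],
                  bounds=[(0, None)] * (n + 1))
    return res.x[0]

# Two hypothetical hospitals, one input (beds), one output (treated cases):
X = np.array([[2.0, 4.0]])   # beds
Y = np.array([[2.0, 2.0]])   # cases
effs = [dea_ccr_input(X, Y, o) for o in range(2)]
print(effs)   # hospital 1 uses twice the beds for the same output
```

An efficiency score of 1 marks a unit on the efficient frontier; scores below 1 quantify the proportional input reduction an efficient peer combination would achieve.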

  5. A usability evaluation of Lazada mobile application

    NASA Astrophysics Data System (ADS)

    Hussain, Azham; Mkpojiogu, Emmanuel O. C.; Jamaludin, Nur Hafiza; Moh, Somia T. L.

    2017-10-01

This paper reports on a usability evaluation of the Lazada mobile application, an online shopping app for mobile devices. The evaluation was conducted with 12 users aged 18 to 24; seven (7) were expert users and the other 5 were novice users. The study objectives were to evaluate the perceived effectiveness, efficiency and satisfaction of the mobile application. The results provide positive feedback and show that the mobile shopping app is effective, efficient, and satisfying as perceived by the study participants. However, there are some observed usability issues with the main menu and the payment method that necessitate improvements to increase the application's effectiveness, efficiency and satisfaction. The suggested improvements include: 1) the main menu should be capitalized and placed on the left side of the mobile app and 2) a payment method tutorial should be included as a hyperlink on the payment method page. These observations will be helpful to the owners of the application in developing future versions of the app.

  6. A convergent diffusion and social marketing approach for disseminating proven approaches to physical activity promotion.

    PubMed

    Dearing, James W; Maibach, Edward W; Buller, David B

    2006-10-01

    Approaches from diffusion of innovations and social marketing are used here to propose efficient means to promote and enhance the dissemination of evidence-based physical activity programs. While both approaches have traditionally been conceptualized as top-down, center-to-periphery, centralized efforts at social change, their operational methods have usually differed. The operational methods of diffusion theory have a strong relational emphasis, while the operational methods of social marketing have a strong transactional emphasis. Here, we argue for a convergence of diffusion of innovation and social marketing principles to stimulate the efficient dissemination of proven-effective programs. In general terms, we are encouraging a focus on societal sectors as a logical and efficient means for enhancing the impact of dissemination efforts. This requires an understanding of complex organizations and the functional roles played by different individuals in such organizations. In specific terms, ten principles are provided for working effectively within societal sectors and enhancing user involvement in the processes of adoption and implementation.

  7. Simultaneous and rapid determination of multiple component concentrations in a Kraft liquor process stream

    DOEpatents

Li, Jian [Marietta, GA]; Chai, Xin Sheng [Atlanta, GA]; Zhu, Junyoung [Marietta, GA]

    2008-06-24

The present invention is a rapid method of determining the concentration of the major components in a chemical stream. The present invention is also a simple, low-cost device for determining the in-situ concentration of the major components in a chemical stream. In particular, the present invention provides a useful method for simultaneously determining the concentrations of sodium hydroxide, sodium sulfide and sodium carbonate in aqueous kraft pulping liquors through use of an attenuated total reflectance (ATR) tunnel flow cell or optical probe capable of producing an ultraviolet absorbance spectrum over the wavelength range of 190 to 300 nm. In addition, the present invention eliminates the need for the manual sampling and dilution previously required to generate analyzable samples. The inventive method can be used in kraft pulping operations to control white liquor causticizing efficiency, sulfate reduction efficiency in green liquor, oxidation efficiency for oxidized white liquor and the active and effective alkali charge to kraft pulping operations.
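Multi-component determination from a UV absorbance spectrum can be sketched as a Beer-Lambert linear least-squares fit. The absorptivity matrix below is purely illustrative, not the patent's calibration data:

```python
import numpy as np

# Hypothetical absorptivity matrix E (rows: wavelengths, columns: the three
# components NaOH, Na2S, Na2CO3); values are invented for illustration.
E = np.array([[1.20, 0.40, 0.10],
              [0.80, 0.90, 0.20],
              [0.30, 0.70, 0.60],
              [0.10, 0.20, 0.90]])

c_true = np.array([0.5, 0.3, 0.2])   # component concentrations (mol/L)
A = E @ c_true                       # Beer-Lambert: absorbance = E @ c (unit path)

# Recover the concentrations from the measured spectrum by least squares.
c_est, *_ = np.linalg.lstsq(E, A, rcond=None)
print(c_est)
```

Using more wavelengths than components makes the fit overdetermined, which is what lets a flow-cell spectrum yield all three concentrations at once without manual dilution.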

  8. Unified treatment of microscopic boundary conditions and efficient algorithms for estimating tangent operators of the homogenized behavior in the computational homogenization method

    NASA Astrophysics Data System (ADS)

    Nguyen, Van-Dung; Wu, Ling; Noels, Ludovic

    2017-03-01

    This work provides a unified treatment of arbitrary kinds of microscopic boundary conditions usually considered in the multi-scale computational homogenization method for nonlinear multi-physics problems. An efficient procedure is developed to enforce the multi-point linear constraints arising from the microscopic boundary condition either by the direct constraint elimination or by the Lagrange multiplier elimination methods. The macroscopic tangent operators are computed in an efficient way from a multiple right hand sides linear system whose left hand side matrix is the stiffness matrix of the microscopic linearized system at the converged solution. The number of vectors at the right hand side is equal to the number of the macroscopic kinematic variables used to formulate the microscopic boundary condition. As the resolution of the microscopic linearized system often follows a direct factorization procedure, the computation of the macroscopic tangent operators is then performed using this factorized matrix at a reduced computational time.
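The factorize-once, many-right-hand-sides idea can be sketched as follows (a random symmetric positive-definite stand-in for the microscopic stiffness matrix; SciPy's LU routines are used for illustration):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(1)

# Stand-in for the microscopic stiffness matrix at the converged solution.
n, n_kin = 200, 6            # n dofs; n_kin macroscopic kinematic variables
A = rng.normal(size=(n, n))
K = A @ A.T + n * np.eye(n)  # symmetric positive definite by construction

# One right-hand-side vector per macroscopic kinematic variable.
B = rng.normal(size=(n, n_kin))

# Factorize once (in the homogenization setting this factorization is
# already available from the direct solve of the microscopic problem),
# then reuse it for every right-hand side at marginal cost.
lu, piv = lu_factor(K)
X = lu_solve((lu, piv), B)   # all n_kin solves with one factorization
```

Since the factorization dominates the cost, extracting the macroscopic tangent this way adds only inexpensive triangular solves on top of the microscopic Newton iteration.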

  9. Investigation of methods for sterilization of potting compounds and mated surfaces

    NASA Technical Reports Server (NTRS)

    Tulius, J. J.; Daley, D. J.; Phillips, G. B.

    1972-01-01

    The feasibility of using formaldehyde-liberating synthetic resins or polymers for the sterilization of potting compounds, mated and occluded areas, and spacecraft surfaces was demonstrated. The detailed study of interrelated parameters of formaldehyde gas sterilization revealed that efficient cycle conditions can be developed for the sterilization of spacecraft components. It was determined that certain parameters were more important than others in the development of cycles for specific applications. The use of formaldehyde gas for the sterilization of spacecraft components provides NASA with a highly efficient method which is inexpensive, reproducible, easily quantitated, materials compatible, operationally simple, generally non-hazardous and not thermally destructive.

  10. Efficient implementation of a 3-dimensional ADI method on the iPSC/860

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van der Wijngaart, R.F.

    1993-12-31

    A comparison is made between several domain decomposition strategies for the solution of three-dimensional partial differential equations on a MIMD distributed memory parallel computer. The grids used are structured, and the numerical algorithm is ADI. Important implementation issues regarding load balancing, storage requirements, network latency, and overlap of computations and communications are discussed. Results of the solution of the three-dimensional heat equation on the Intel iPSC/860 are presented for the three most viable methods. It is found that the Bruno-Cappello decomposition delivers optimal computational speed through an almost complete elimination of processor idle time, while providing good memory efficiency.

  11. CAD-Based Aerodynamic Design of Complex Configurations using a Cartesian Method

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.; Pulliam, Thomas H.

    2003-01-01

A modular framework for aerodynamic optimization of complex geometries is developed. By working directly with a parametric CAD system, complex-geometry models are modified and tessellated in an automatic fashion. The use of a component-based Cartesian method significantly reduces the demands on the CAD system, and also provides for robust and efficient flowfield analysis. The optimization is controlled using either a genetic or quasi-Newton algorithm. Parallel efficiency of the framework is maintained even when subject to limited CAD resources by dynamically re-allocating the processors of the flow solver. Overall, the resulting framework can explore designs incorporating large shape modifications and changes in topology.

  12. Method and system for efficiently searching an encoded vector index

    DOEpatents

    Bui, Thuan Quang; Egan, Randy Lynn; Kathmann, Kevin James

    2001-09-04

Method and system aspects for efficiently searching an encoded vector index are provided. The aspects include the translation of a search query into a candidate bitmap, and the mapping of data from the candidate bitmap into a search result bitmap according to entry values in the encoded vector index. Further, the translation includes the setting of a bit in the candidate bitmap for each entry in a symbol table that corresponds to a candidate of the search query. Also included in the mapping is the identification of a bit value in the candidate bitmap pointed to by an entry in the encoded vector.
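The two-step lookup described above can be sketched in a few lines (the names `symbol_table`, `evi`, and `search` are illustrative, not taken from the patent):

```python
# Toy encoded vector index (EVI): each table row stores a small code that
# indexes a symbol table of the column's distinct values.
symbol_table = ["blue", "green", "red"]   # code -> value
evi = [2, 0, 0, 1, 2, 1, 0]               # one code per table row

def search(evi, symbol_table, predicate):
    # Step 1: translate the query into a candidate bitmap over codes:
    # bit c is set iff symbol_table[c] satisfies the query.
    candidate = [predicate(v) for v in symbol_table]
    # Step 2: map each row's code through the candidate bitmap to build
    # the search-result bitmap -- one cheap lookup per row.
    return [candidate[code] for code in evi]

result = search(evi, symbol_table, lambda v: v in ("red", "green"))
print(result)   # [True, False, False, True, True, True, False]
```

The predicate is evaluated only once per distinct value, so the per-row work reduces to a bitmap lookup regardless of how complex the query is.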

  13. Recommender engine for continuous-time quantum Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Huang, Li; Yang, Yi-feng; Wang, Lei

    2017-03-01

    Recommender systems play an essential role in the modern business world. They recommend favorable items such as books, movies, and search queries to users based on their past preferences. Applying similar ideas and techniques to Monte Carlo simulations of physical systems boosts their efficiency without sacrificing accuracy. Exploiting the quantum to classical mapping inherent in the continuous-time quantum Monte Carlo methods, we construct a classical molecular gas model to reproduce the quantum distributions. We then utilize powerful molecular simulation techniques to propose efficient quantum Monte Carlo updates. The recommender engine approach provides a general way to speed up the quantum impurity solvers.

  14. An Adaptive Fuzzy-Logic Traffic Control System in Conditions of Saturated Transport Stream

    PubMed Central

    Marakhimov, A. R.; Igamberdiev, H. Z.; Umarov, Sh. X.

    2016-01-01

This paper considers the problem of building adaptive fuzzy-logic traffic control systems (AFLTCS) to deal with information fuzziness and uncertainty in case of heavy traffic streams. Methods of formal description of traffic control on the crossroads based on fuzzy sets and fuzzy logic are proposed. This paper also provides efficient algorithms for implementing AFLTCS and develops the appropriate simulation models to test the efficiency of the suggested approach. PMID:27517081

  15. Diffraction efficiency of plasmonic gratings fabricated by electron beam lithography using a silver halide film

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Sudheer, E-mail: sudheer@rrcat.gov.in, sudheer.rrcat@gmail.com; Tiwari, P.; Srivastava, Himanshu

    2016-07-28

The silver nanoparticle surface relief gratings of ∼10 μm period are fabricated using electron beam lithography on the silver halide film substrate. Morphological characterization of the gratings shows that the period, the shape, and the relief depth of the gratings depend mainly on the number of lines per frame, the spot size, and the accelerating voltage of the electron beam raster in the SEM. Optical absorption of the silver nanoparticle gratings provides a broad localized surface plasmon resonance peak in the visible region, whereas the intensity of the peaks depends on the number density of silver nanoparticles in the gratings. The maximum efficiency of ∼7.2% for first-order diffraction is observed for the grating fabricated at 15 keV. The efficiency peaks at 560 nm with a ∼380 nm bandwidth. The measured diffraction-efficiency profiles of the gratings are found to be in close agreement with Raman-Nath diffraction theory. This technique provides a simple and efficient method for the fabrication of plasmonic nanoparticle grating structures with high diffraction efficiency and broad wavelength tuning.

  16. Water washable stainless steel HEPA filter

    DOEpatents

    Phillips, Terrance D.

    2001-01-01

The invention is a high efficiency particulate air (HEPA) filter apparatus and system, and a method for assaying particulates. The HEPA filter provides for capture of 99.99% or greater of particulates from a gas stream, with collection of particulates on the surface of the filter media. The invention provides a filter system that can be cleaned and regenerated in situ.

  17. What if Best Practice Is Too Expensive? Feedback on Oral Presentations and Efficient Use of Resources

    ERIC Educational Resources Information Center

    Leger, Lawrence A.; Glass, Karligash; Katsiampa, Paraskevi; Liu, Shibo; Sirichand, Kavita

    2017-01-01

    We evaluate feedback methods for oral presentations used in training non-quantitative research skills (literature review and various associated tasks). Training is provided through a credit-bearing module taught to MSc students of banking, economics and finance in the UK. Monitoring oral presentations and providing "best practice"…

  18. A targeted metabolomic protocol for short-chain fatty acids and branched-chain amino acids.

    PubMed

    Zheng, Xiaojiao; Qiu, Yunping; Zhong, Wei; Baxter, Sarah; Su, Mingming; Li, Qiong; Xie, Guoxiang; Ore, Brandon M; Qiao, Shanlei; Spencer, Melanie D; Zeisel, Steven H; Zhou, Zhanxiang; Zhao, Aihua; Jia, Wei

    2013-08-01

Research in obesity and metabolic disorders that involve intestinal microbiota demands reliable methods for the precise measurement of short-chain fatty acid (SCFA) and branched-chain amino acid (BCAA) concentrations. Here, we report a rapid method of simultaneously determining SCFAs and BCAAs in biological samples using propyl chloroformate (PCF) derivatization followed by gas chromatography mass spectrometry (GC-MS) analysis. A one-step derivatization using 100 µL of PCF in a reaction system of water, propanol, and pyridine (v/v/v = 8:3:2) at pH 8 provided the optimal derivatization efficiency. The best extraction efficiency of the derivatized products was achieved by a two-step extraction with hexane. The method exhibited good derivatization efficiency and recovery for a wide range of concentrations with a low limit of detection for each compound. The relative standard deviations (RSDs) of all targeted compounds showed good intra- and inter-day (within 7 days) precision (< 10%), and good stability (< 20%) within 4 days at room temperature (23-25 °C), or 7 days when stored at -20 °C. We applied our method to measure SCFA and BCAA levels in fecal samples from rats administered different diets. Both univariate and multivariate statistical analysis of the concentrations of these target metabolites could differentiate the three groups by ethanol intervention and dietary oil. This method was also successfully employed to determine SCFAs and BCAAs in feces, plasma and urine from normal humans, providing important baseline information on the concentrations of these metabolites. This novel metabolic profiling study has great potential for translational research.

  19. On modelling three-dimensional piezoelectric smart structures with boundary spectral element method

    NASA Astrophysics Data System (ADS)

    Zou, Fangxin; Aliabadi, M. H.

    2017-05-01

The computational efficiency of the boundary element method in elastodynamic analysis can be significantly improved by employing high-order spectral elements for boundary discretisation. In this work, for the first time, the so-called boundary spectral element method is utilised to formulate the piezoelectric smart structures that are widely used in structural health monitoring (SHM) applications. The resultant boundary spectral element formulation has been validated by the finite element method (FEM) and physical experiments. The new formulation has demonstrated a lower demand on computational resources and a higher numerical stability than commercial FEM packages. Compared to the conventional boundary element formulation, a significant reduction in computational expenses has been achieved. In summary, the boundary spectral element formulation presented in this paper provides a highly efficient and stable mathematical tool for the development of SHM applications.

  20. Efficient solution of ordinary differential equations modeling electrical activity in cardiac cells.

    PubMed

    Sundnes, J; Lines, G T; Tveito, A

    2001-08-01

The contraction of the heart is preceded and caused by a cellular electro-chemical reaction, causing an electrical field to be generated. Performing realistic computer simulations of this process involves solving a set of partial differential equations, as well as a large number of ordinary differential equations (ODEs) characterizing the reactive behavior of the cardiac tissue. Experiments have shown that the solution of the ODEs contributes significantly to the total work of a simulation, and there is thus a strong need to utilize efficient solution methods for this part of the problem. This paper presents how an efficient implicit Runge-Kutta method may be adapted to solve a complicated cardiac cell model consisting of 31 ODEs, and how this solver may be coupled to a set of PDE solvers to provide complete simulations of the electrical activity.
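Why stiff cell models call for implicit solvers can be illustrated with backward Euler, the simplest implicit Runge-Kutta method, on a one-equation stiff test problem (a sketch only, not the paper's 31-ODE cell model):

```python
import numpy as np

# Stiff test problem: y' = -lam * (y - cos(t)) - sin(t), y(0) = 1,
# whose exact solution is y(t) = cos(t). Explicit Euler needs h < 2/lam
# for stability; backward Euler remains stable at much larger steps.
lam = 1000.0

def backward_euler(y0, t0, t1, n):
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        t_next = t + h
        # Implicit update y_new = y + h * f(t_next, y_new). For this linear
        # f it is solved in closed form; nonlinear cell models would take a
        # Newton step here instead.
        y = (y + h * (lam * np.cos(t_next) - np.sin(t_next))) / (1 + h * lam)
        t = t_next
    return y

y_end = backward_euler(1.0, 0.0, 1.0, 50)   # h = 0.02, i.e. 10x the explicit limit
print(y_end, np.cos(1.0))
```

The step size here is ten times the explicit stability limit, yet the computed value still tracks the exact solution closely, which is the efficiency argument for implicit methods in cardiac simulation.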

  1. Assessing performance of alternative pavement marking materials.

    DOT National Transportation Integrated Search

    2010-01-01

Pavement markings need to be restriped from time to time to maintain retroreflectivity. Knowing which material provides the most economically efficient solution is important. Currently, no agreed-upon method by which to evaluate the use of altern...

  2. A stochastic method for computing hadronic matrix elements

    DOE PAGES

    Alexandrou, Constantia; Constantinou, Martha; Dinter, Simon; ...

    2014-01-24

In this study, we present a stochastic method for the calculation of baryon 3-point functions that is an alternative to the typically used sequential method, offering more versatility. We analyze the scaling of the error of the stochastically evaluated 3-point function with the lattice volume and find a favorable signal-to-noise ratio, suggesting that the stochastic method can be extended to large volumes, providing an efficient approach to compute hadronic matrix elements and form factors.

  3. Direct volume estimation without segmentation

    NASA Astrophysics Data System (ADS)

    Zhen, X.; Wang, Z.; Islam, A.; Bhaduri, M.; Chan, I.; Li, S.

    2015-03-01

Volume estimation plays an important role in clinical diagnosis. For example, cardiac ventricular volumes including left ventricle (LV) and right ventricle (RV) are important clinical indicators of cardiac functions. Accurate and automatic estimation of the ventricular volumes is essential to the assessment of cardiac functions and diagnosis of heart diseases. Conventional methods are dependent on an intermediate segmentation step which is obtained either manually or automatically. However, manual segmentation is extremely time-consuming, subjective and highly non-reproducible; automatic segmentation is still challenging, computationally expensive, and completely unsolved for the RV. Towards accurate and efficient direct volume estimation, our group has been researching learning-based methods without segmentation by leveraging state-of-the-art machine learning techniques. Our direct estimation methods remove the accessional step of segmentation and can naturally deal with various volume estimation tasks. Moreover, they are extremely flexible to be used for volume estimation of either joint bi-ventricles (LV and RV) or individual LV/RV. We comparatively study the performance of direct methods on cardiac ventricular volume estimation by comparing with segmentation based methods. Experimental results show that direct estimation methods provide more accurate estimation of cardiac ventricular volumes than segmentation based methods. This indicates that direct estimation methods not only provide a convenient and mature clinical tool for cardiac volume estimation but also enable diagnosis of cardiac diseases to be conducted in a more efficient and reliable way.

  4. 3D Space Radiation Transport in a Shielded ICRU Tissue Sphere

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.

    2014-01-01

    A computationally efficient 3DHZETRN code capable of simulating High Charge (Z) and Energy (HZE) and light ions (including neutrons) under space-like boundary conditions with enhanced neutron and light ion propagation was recently developed for a simple homogeneous shield object. Monte Carlo benchmarks were used to verify the methodology in slab and spherical geometry, and the 3D corrections were shown to provide significant improvement over the straight-ahead approximation in some cases. In the present report, the new algorithms with well-defined convergence criteria are extended to inhomogeneous media within a shielded tissue slab and a shielded tissue sphere and tested against Monte Carlo simulation to verify the solution methods. The 3D corrections are again found to more accurately describe the neutron and light ion fluence spectra as compared to the straight-ahead approximation. These computationally efficient methods provide a basis for software capable of space shield analysis and optimization.

  5. Design of a Variational Multiscale Method for Turbulent Compressible Flows

    NASA Technical Reports Server (NTRS)

    Diosady, Laslo Tibor; Murman, Scott M.

    2013-01-01

    A spectral-element framework is presented for the simulation of subsonic compressible high-Reynolds-number flows. The focus of the work is maximizing the efficiency of the computational schemes to enable unsteady simulations with a large number of spatial and temporal degrees of freedom. A collocation scheme is combined with optimized computational kernels to provide a residual evaluation with computational cost independent of order of accuracy up to 16th order. The optimized residual routines are used to develop a low-memory implicit scheme based on a matrix-free Newton-Krylov method. A preconditioner based on the finite-difference diagonalized ADI scheme is developed which maintains the low memory of the matrix-free implicit solver, while providing improved convergence properties. Emphasis on low memory usage throughout the solver development is leveraged to implement a coupled space-time DG solver which may offer further efficiency gains through adaptivity in both space and time.

  6. Using Mobile Laser Scanning Data for Features Extraction of High Accuracy Driving Maps

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Liu, Yuan; Liang, Fuxun; Dong, Zhen

    2016-06-01

High Accuracy Driving Maps (HADMs) are the core component of Intelligent Drive Assistant Systems (IDAS), which can effectively reduce the traffic accidents due to human error and provide more comfortable driving experiences. Vehicle-based mobile laser scanning (MLS) systems provide an efficient solution to rapidly capture three-dimensional (3D) point clouds of road environments with high flexibility and precision. This paper proposes a novel method to extract road features (e.g., road surfaces, road boundaries, road markings, buildings, guardrails, street lamps, traffic signs, roadside-trees, power lines, vehicles and so on) for HADMs in highway environments. Quantitative evaluations show that the proposed algorithm attains an average precision of 90.6% and an average recall of 91.2% in extracting road features. The results demonstrate the efficiency and feasibility of the proposed method for extraction of road features for HADMs.

  7. Appropriate uses and considerations for online surveying in human dimensions research

    USGS Publications Warehouse

    Sexton, Natalie R.; Miller, Holly M.; Dietsch, Alia M.

    2011-01-01

    Online surveying has gained attention in recent years for its applicability to human dimensions research as an efficient and inexpensive data-collection method; however, online surveying is not a panacea. In this article, we provide some guidelines for alleviating or avoiding the criticisms and pitfalls suggested of online survey methods and explore two case studies demonstrating different approaches to online surveying. The first was a mixed-mode study of visitors to 52 participating National Wildlife Refuges. The response rate was 72%, with over half of respondents completing the survey online, resulting in cost-savings and efficiencies that would not have otherwise been realized. The second highlighted an online-only approach targeting specialized users of satellite imagery. Through branching and skipping, the online mode allowed flexibilities in administration impractical in a mail survey. The response rate of 53% was higher than typical for online surveys. Both case studies provide examples of appropriate uses of online surveying.

  8. Hospital non-price competition under the Global Budget Payment and Prospective Payment Systems.

    PubMed

    Chen, Wen-Yi; Lin, Yu-Hui

    2008-06-01

    This paper provides theoretical analyses of two alternative hospital payment systems for controlling medical cost: the Global Budget Payment System (GBPS) and the Prospective Payment System (PPS). The former method assigns a fixed total budget for all healthcare services over a given period with hospitals being paid on a fee-for-service basis. The latter method is usually connected with a fixed payment to hospitals within a Diagnosis-Related Group. Our results demonstrate that, given the same expenditure, the GBPS would approach optimal levels of quality and efficiency as well as the level of social welfare provided by the PPS, as long as market competition is sufficiently high; our results also demonstrate that the treadmill effect, modeling an inverse relationship between price and quantity under the GBPS, would be a quality-enhancing and efficiency-improving outcome due to market competition.

  9. Achromatic electromagnetic metasurface for generating a vortex wave with orbital angular momentum (OAM).

    PubMed

    Jiang, Shan; Chen, Chang; Zhang, Hualiang; Chen, Weidong

    2018-03-05

    The vortex wave that carries orbital angular momentum has attracted much attention due to the fact that it can provide an extra degree of freedom for optical communication, imaging and other applications. In spite of this, the method of OAM generation at high frequency still suffers from limitations, such as chromatic aberration and low efficiency. In this paper, an azimuthally symmetric electromagnetic metasurface with wide bandwidth is designed, fabricated and experimentally demonstrated to efficiently convert a left-handed (right-handed) circularly polarized incident plane wave (with a spin angular momentum (SAM) of ћ) to a right-handed (left-handed) circularly polarized vortex wave with OAM. The design methodology based on the field equivalence principle is discussed in detail. The simulation and measurement results confirm that the proposed method provides an effective way for generating OAM-carrying vortex wave with comparative performance across a broad bandwidth.

  10. Copper nanoparticle interspersed MoS2 nanoflowers with enhanced efficiency for CO2 electrochemical reduction to fuel.

    PubMed

    Shi, Guodong; Yu, Luo; Ba, Xin; Zhang, Xiaoshu; Zhou, Jianqing; Yu, Ying

    2017-08-15

Electrocatalytic conversion of carbon dioxide (CO2) has been considered as an ideal method to simultaneously solve the energy crisis and environmental issue around the world. In this work, ultrasmall Cu nanoparticle interspersed flower-like MoS2 was successfully fabricated via a facile microwave hydrothermal method. The designed optimal hierarchical Cu/MoS2 composite not only exhibited remarkably enhanced electronic conductivity and specific surface area but also possessed improved CO2 adsorption capacity, resulting in a significant increase in overall faradaic efficiency and a 7-fold augmentation of the faradaic efficiency of CH4 in comparison with bare MoS2. In addition, the Cu/MoS2 composite had superior stability with high efficiency retained for 48 h in the electrochemical process. It is anticipated that the designed Cu/MoS2 composite electrocatalyst may provide new insights for transition metal sulfides and non-noble particles applied to CO2 reduction.

  11. The efficiency of health care production in OECD countries: A systematic review and meta-analysis of cross-country comparisons.

    PubMed

    Varabyova, Yauheniya; Müller, Julia-Maria

    2016-03-01

There has been an ongoing interest in the analysis and comparison of the efficiency of health care systems using nonparametric and parametric applications. The objective of this study was to review the current state of the literature and to synthesize the findings on health system efficiency in OECD countries. We systematically searched five electronic databases through August 2014 and identified 22 studies that analyzed the efficiency of health care production at the country level. We summarized these studies with regard to their samples, methods, and variables. We developed and applied a checklist of 14 items to assess the quality of the reviewed studies along four dimensions: reporting, external validity, bias, and power. Moreover, to examine the internal validity of the findings we meta-analyzed the efficiency estimates reported in 35 models from ten studies. The qualitative synthesis of the literature indicated large differences in study designs and methods. The meta-analysis revealed low correlations between country rankings, suggesting a lack of internal validity of the efficiency estimates. In conclusion, methodological problems of existing cross-country comparisons of the efficiency of health care systems call into question the ability of these comparisons to provide meaningful guidance to policy-makers. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
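Checking whether two efficiency models agree on country rankings can be sketched with Spearman's rank correlation (the six country scores below are invented for illustration, not taken from the reviewed studies):

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical efficiency scores for six countries under two models.
model_a = np.array([0.91, 0.85, 0.78, 0.96, 0.70, 0.88])
model_b = np.array([0.80, 0.93, 0.75, 0.72, 0.95, 0.82])

# Spearman's rho compares the induced rankings, not the raw scores; a rho
# near 1 means the two models rank the countries consistently.
rho, p = spearmanr(model_a, model_b)
print(rho)
```

A low or negative rho between models, as in this toy example, is exactly the internal-validity problem the meta-analysis reports.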

  12. Partition method and experimental validation for impact dynamics of flexible multibody system

    NASA Astrophysics Data System (ADS)

    Wang, J. Y.; Liu, Z. Y.; Hong, J. Z.

    2018-06-01

    The impact problem of a flexible multibody system is a non-smooth, high-transient, strongly nonlinear dynamic process with variable boundaries. How to model the contact/impact process accurately and efficiently is one of the main difficulties in many engineering applications. The numerical approaches widely used in impact analysis come mainly from two fields: multibody system dynamics (MBS) and computational solid mechanics (CSM). Approaches based on MBS provide a more efficient yet less accurate analysis of contact/impact problems, while approaches based on CSM are well suited to particularly high accuracy needs, yet require very high computational effort. To bridge the gap between accuracy and efficiency in the dynamic simulation of a flexible multibody system with contacts/impacts, a partition method is presented in which the contact body is divided into two parts: an impact region and a non-impact region. The impact region is modeled using the finite element method to guarantee local accuracy, while the non-impact region is modeled using a modal reduction approach to raise global efficiency. A three-dimensional rod-plate impact experiment was designed and performed to validate the numerical results. A principle for partitioning the contact bodies is proposed: the maximum radius of the impact region can be estimated by an analytical method, and the modal truncation order of the non-impact region can be estimated from the highest frequency of the measured signal. The simulation results using the presented method are in good agreement with the experimental results, showing that the method is an effective formulation that balances accuracy and efficiency. Moreover, a more complicated multibody impact problem, a crank-slider mechanism, is investigated to strengthen this conclusion.

  13. Spatial compression algorithm for the analysis of very large multivariate images

    DOEpatents

    Keenan, Michael R [Albuquerque, NM]

    2008-07-15

    A method for spatially compressing data sets enables the efficient analysis of very large multivariate images. The spatial compression algorithms use a wavelet transformation to map an image into a compressed image containing a smaller number of pixels that retain the original image's information content. Image analysis can then be performed on a compressed data matrix consisting of a reduced number of significant wavelet coefficients. Furthermore, a block algorithm can be used for performing common operations more efficiently. The spatial compression algorithms can be combined with spectral compression algorithms to provide further computational efficiencies.
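    The patent's idea of a wavelet transform that concentrates an image's information into a few significant coefficients can be sketched with a one-level 2D Haar transform (the simplest orthonormal wavelet; the patent does not specify a wavelet family, so Haar is an assumption for illustration). Compression keeps only the largest-magnitude coefficients before inverting:

```python
import numpy as np

def haar2d(img):
    """One-level orthonormal 2D Haar transform; image sides must be even."""
    a = (img[0::2] + img[1::2]) / np.sqrt(2)    # row averages
    d = (img[0::2] - img[1::2]) / np.sqrt(2)    # row details
    rows = np.vstack([a, d])
    a2 = (rows[:, 0::2] + rows[:, 1::2]) / np.sqrt(2)
    d2 = (rows[:, 0::2] - rows[:, 1::2]) / np.sqrt(2)
    return np.hstack([a2, d2])

def ihaar2d(c):
    """Exact inverse of haar2d (the transform is orthonormal)."""
    h, w = c.shape
    a2, d2 = c[:, :w // 2], c[:, w // 2:]
    rows = np.empty((h, w))
    rows[:, 0::2] = (a2 + d2) / np.sqrt(2)
    rows[:, 1::2] = (a2 - d2) / np.sqrt(2)
    a, d = rows[:h // 2], rows[h // 2:]
    out = np.empty((h, w))
    out[0::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return out

def compress(img, keep_frac=0.1):
    """Zero all but the largest keep_frac of wavelet coefficients."""
    c = haar2d(img)
    k = max(1, int(keep_frac * c.size))
    thresh = np.sort(np.abs(c).ravel())[-k]
    return ihaar2d(np.where(np.abs(c) >= thresh, c, 0.0))
```

    Downstream analysis would then operate on the reduced set of retained coefficients rather than on every pixel, which is the source of the efficiency gain the abstract describes.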

  14. Palladium-Catalyzed Dehydrogenative Coupling: An Efficient Synthetic Strategy for the Construction of the Quinoline Core

    PubMed Central

    Carral-Menoyo, Asier; Ortiz-de-Elguea, Verónica; Martinez-Nunes, Mikel; Sotomayor, Nuria; Lete, Esther

    2017-01-01

    Palladium-catalyzed dehydrogenative coupling is an efficient synthetic strategy for the construction of quinoline scaffolds, a privileged structure and prevalent motif in many natural and biologically active products, in particular marine alkaloids. Thus, quinolines and 1,2-dihydroquinolines can be selectively obtained in moderate-to-good yields via intramolecular C–H alkenylation reactions, depending on the reaction conditions chosen. This methodology provides a direct route to this type of quinoline through an efficient and atom-economical procedure, and constitutes a significant advance over existing procedures that require preactivated reaction partners. PMID:28867803

  15. Modeling and quantification of repolarization feature dependency on heart rate.

    PubMed

    Minchole, A; Zacur, E; Pueyo, E; Laguna, P

    2014-01-01

    This article is part of the Focus Theme of Methods of Information in Medicine on "Biosignal Interpretation: Advanced Methods for Studying Cardiovascular and Respiratory Systems". This work aims at providing an efficient method to estimate the parameters of a nonlinear model with memory, previously proposed to characterize the rate adaptation of repolarization indices. The physiological restrictions on the model parameters have been included in the cost function in such a way that unconstrained optimization techniques, such as descent methods, can be used for parameter estimation. The proposed method has been evaluated on electrocardiogram (ECG) recordings of healthy subjects performing a tilt test, where the rate adaptation of the QT and Tpeak-to-Tend (Tpe) intervals has been characterized. The proposed strategy results in an efficient methodology for characterizing the rate adaptation of repolarization features, improving the convergence time with respect to previous strategies. Moreover, the Tpe interval was found to adapt to changes in heart rate faster, and with a shorter memory lag, than the QT interval.
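    The general trick of folding parameter constraints into the cost function so that plain unconstrained descent applies can be shown on a toy problem (this is an illustrative sketch, not the authors' repolarization model): minimize (x - 3)^2 subject to x <= 2 by adding a quadratic penalty on constraint violation and running gradient descent.

```python
def grad(x, mu=1000.0):
    """Gradient of (x-3)^2 plus a quadratic penalty mu*max(0, x-2)^2."""
    g = 2.0 * (x - 3.0)            # gradient of the original cost
    if x > 2.0:                    # penalty gradient, active when infeasible
        g += 2.0 * mu * (x - 2.0)
    return g

x = 3.0                            # infeasible starting point
lr = 1e-4                          # small step size for the stiff penalty
for _ in range(20000):
    x -= lr * grad(x)
# x converges to ~2.001, just above the constrained optimum x = 2;
# the small offset shrinks as the penalty weight mu grows.
```

    A larger mu pushes the minimizer closer to the feasible boundary at the cost of a stiffer (harder to step) objective, which is the usual penalty-method trade-off.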

  16. Rapid and effective processing of blood specimens for diagnostic PCR using filter paper and Chelex-100.

    PubMed

    Polski, J M; Kimzey, S; Percival, R W; Grosso, L E

    1998-08-01

    The aim was to provide a more efficient method for isolating DNA from peripheral blood for use in diagnostic DNA mutation analysis. The use of blood-impregnated filter paper and Chelex-100 for DNA isolation was evaluated and compared with standard DNA isolation techniques. In polymerase chain reaction (PCR) based assays of five point mutations, identical results were obtained with DNA isolated routinely from peripheral blood and with DNA isolated using the filter paper and Chelex-100 method. In the clinical setting, this method provides a useful alternative to conventional DNA isolation: it is easily implemented and inexpensive, and provides sufficient, stable DNA for multiple assays. The potential for specimen contamination is reduced because most of the steps are performed in a single microcentrifuge tube. In addition, this method allows easy storage and transport of samples from the point of acquisition.

  17. Water augmented indirectly-fired gas turbine systems and method

    DOEpatents

    Bechtel, Thomas F.; Parsons, Jr., Edward J.

    1992-01-01

    An indirectly-fired gas turbine system utilizing water augmentation to increase the net efficiency and power output of the system is described. Water injected into the compressor discharge stream evaporatively cools the air, providing a higher driving temperature difference across a high-temperature air heater, which is used to indirectly heat the water-containing air to a turbine inlet temperature of greater than about 1,000 °C. By providing a lower air heater hot-side outlet temperature, heat rejection in the air heater is reduced, increasing the heat recovery in the air heater and thereby the overall cycle efficiency.
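    The magnitude of the evaporative cooling effect can be estimated with a first-order energy balance: the latent heat absorbed by the injected water comes out of the sensible heat of the air. A back-of-envelope sketch; the flow rates and the neglect of the vapor's heat capacity are illustrative assumptions, not figures from the patent.

```python
CP_AIR = 1005.0    # J/(kg K), specific heat of air at constant pressure
H_FG = 2.26e6      # J/kg, approximate latent heat of vaporization of water

def evap_temp_drop(m_air, m_water):
    """Temperature drop of the air stream from fully evaporating m_water.

    Energy balance: m_water * H_FG = m_air * CP_AIR * dT
    (vapor sensible heat and incomplete evaporation are neglected).
    """
    return m_water * H_FG / (m_air * CP_AIR)

# A hypothetical 2% water-to-air injection ratio
dT = evap_temp_drop(m_air=1.0, m_water=0.02)   # roughly 45 K of cooling
```

    A cooler heater inlet at fixed heater outlet temperature means a larger driving temperature difference, which is the mechanism the abstract credits for the efficiency gain.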

  18. Rapid measurement and prediction of bacterial contamination in milk using an oxygen electrode.

    PubMed

    Numthuam, Sonthaya; Suzuki, Hiroaki; Fukuda, Junji; Phunsiri, Suthiluk; Rungchang, Saowaluk; Satake, Takaaki

    2009-03-01

    An oxygen electrode was used to measure oxygen consumption to determine bacterial contamination in milk. Dissolved oxygen (DO) measured at 10-35 °C for 2 hours provided a reasonable prediction efficiency (r ≥ 0.90) for bacterial loads between 1.9 and 7.3 log (CFU/mL). A temperature-dependent predictive model was developed that has the same prediction accuracy as the normal predictive model. The analysis performed with and without stirring provided the same prediction efficiency, with a correlation coefficient of 0.90. The measurement of DO is a simple and rapid method for the determination of bacteria in milk.
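    The prediction step amounts to a simple linear fit of bacterial load, log10(CFU/mL), against the dissolved-oxygen drop over the assay. A stdlib sketch; the data points below are invented for illustration and are not from the paper.

```python
def fit_line(x, y):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

do_drop = [0.5, 1.0, 1.8, 2.6, 3.5]    # hypothetical mg/L consumed in 2 h
log_cfu = [2.0, 3.1, 4.0, 5.2, 6.9]    # hypothetical plate counts, log10

slope, intercept = fit_line(do_drop, log_cfu)
r = pearson_r(do_drop, log_cfu)        # near-linear data, so r is high
```

    An r at or above the paper's reported 0.90 is what makes the 2 h DO measurement usable as a proxy for the much slower plate count.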

  19. Recent developments in anticancer drug delivery using cell penetrating and tumor targeting peptides.

    PubMed

    Dissanayake, Shama; Denny, William A; Gamage, Swarna; Sarojini, Vijayalekshmi

    2017-03-28

    Efficient intracellular trafficking and targeted delivery to the site of action are essential to overcome the current drawbacks of cancer therapeutics. Cell Penetrating Peptides (CPPs) offer the possibility of efficient intracellular trafficking, and therefore the development of drug delivery systems using CPPs as cargo carriers is an attractive strategy to address these drawbacks. Additionally, the possibility of incorporating Tumor Targeting Peptides (TTPs) into the delivery system provides the necessary drug targeting effect. The conjugation of CPPs and/or TTPs with therapeutics therefore provides a potentially efficient means of improving intracellular drug delivery. Peptides used as cargo carriers in drug delivery systems have been shown to enhance the cellular uptake of drugs and thereby provide a therapeutic benefit over the drug on its own. After providing a brief overview of various drug targeting approaches, this review focuses on peptides as carriers and targeting moieties in drug-peptide covalent conjugates and summarizes the most recent literature examples where CPPs on their own, or CPPs together with TTPs, have been conjugated to anticancer drugs such as Doxorubicin, Methotrexate, Paclitaxel, and Chlorambucil. A short section on CPPs used in multicomponent drug delivery systems is also included. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Need for speed: An optimized gridding approach for spatially explicit disease simulations.

    PubMed

    Sellman, Stefan; Tsao, Kimberly; Tildesley, Michael J; Brommesson, Peter; Webb, Colleen T; Wennergren, Uno; Keeling, Matt J; Lindström, Tom

    2018-04-01

    Numerical models for simulating outbreaks of infectious diseases are powerful tools for informing surveillance and control strategy decisions. However, large-scale spatially explicit models can be limited by the amount of computational resources they require, which poses a problem when multiple scenarios need to be explored to provide policy recommendations. We introduce an easily implemented method that can reduce computation time in a standard Susceptible-Exposed-Infectious-Removed (SEIR) model without introducing any further approximations or truncations. It is based on a hierarchical infection process that operates on entire groups of spatially related nodes (cells in a grid) in order to efficiently filter out large volumes of susceptible nodes that would otherwise have required expensive calculations. After the filtering of the cells, only a subset of the nodes that were originally at risk are then evaluated for actual infection. The increase in efficiency is sensitive to the exact configuration of the grid, and we describe a simple method to find an estimate of the optimal configuration for a given landscape, as well as a method to partition the landscape into a grid configuration. To investigate its efficiency, we compare the introduced methods to other algorithms and evaluate computation time, focusing on simulated outbreaks of foot-and-mouth disease (FMD) on the farm populations of the USA, the UK, and Sweden, as well as on three randomly generated populations with varying degrees of clustering. The introduced method provided up to 500 times faster calculations than pairwise computation and consistently performed as well as or better than other available methods. This enables large-scale, spatially explicit simulations, such as for the entire continental USA, without sacrificing realism or predictive power.
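    The cell-level filtering idea can be sketched as "overestimate, then thin": bound every node's infection probability in a cell by the kernel evaluated at the cell's nearest point, enumerate the nodes that pass that cheap overestimated trial (using geometric skips so most nodes are never touched), and evaluate the expensive per-node kernel only for the survivors. Accepting a survivor with probability p/p_over keeps the procedure statistically exact, since p_over * (p/p_over) = p. This is a simplified sketch, not the authors' full implementation; the kernel is hypothetical.

```python
import math
import random

def kernel(d):
    """Hypothetical distance-dependent transmission probability."""
    return 1.0 / (1.0 + d) ** 2

def bernoulli_successes(n, p):
    """Indices i < n passing independent Bernoulli(p) trials, found via
    geometric skips so that for small p almost no indices are visited."""
    out, i = [], -1
    if p <= 0:
        return out
    if p >= 1:
        return list(range(n))
    while True:
        u = 1.0 - random.random()            # uniform in (0, 1]
        i += 1 + int(math.log(u) / math.log(1.0 - p))
        if i >= n:
            return out
        out.append(i)

def infect_via_cells(cells):
    """cells: list of (d_min_to_source, [node distances to source]).
    Returns (cell index, node index) pairs of newly infected nodes."""
    infected = []
    for ci, (d_min, node_dists) in enumerate(cells):
        p_over = kernel(d_min)               # upper bound for the whole cell
        for ni in bernoulli_successes(len(node_dists), p_over):
            # Thinning: accept with the true-to-overestimated ratio
            if random.random() < kernel(node_dists[ni]) / p_over:
                infected.append((ci, ni))
    return infected
```

    For a distant cell, p_over is tiny and the geometric skip jumps past nearly all of its nodes in O(1) expected work, which is where the reported speedup comes from.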

  1. Need for speed: An optimized gridding approach for spatially explicit disease simulations

    PubMed Central

    Tildesley, Michael J.; Brommesson, Peter; Webb, Colleen T.; Wennergren, Uno; Lindström, Tom

    2018-01-01

    Numerical models for simulating outbreaks of infectious diseases are powerful tools for informing surveillance and control strategy decisions. However, large-scale spatially explicit models can be limited by the amount of computational resources they require, which poses a problem when multiple scenarios need to be explored to provide policy recommendations. We introduce an easily implemented method that can reduce computation time in a standard Susceptible-Exposed-Infectious-Removed (SEIR) model without introducing any further approximations or truncations. It is based on a hierarchical infection process that operates on entire groups of spatially related nodes (cells in a grid) in order to efficiently filter out large volumes of susceptible nodes that would otherwise have required expensive calculations. After the filtering of the cells, only a subset of the nodes that were originally at risk are then evaluated for actual infection. The increase in efficiency is sensitive to the exact configuration of the grid, and we describe a simple method to find an estimate of the optimal configuration for a given landscape, as well as a method to partition the landscape into a grid configuration. To investigate its efficiency, we compare the introduced methods to other algorithms and evaluate computation time, focusing on simulated outbreaks of foot-and-mouth disease (FMD) on the farm populations of the USA, the UK, and Sweden, as well as on three randomly generated populations with varying degrees of clustering. The introduced method provided up to 500 times faster calculations than pairwise computation and consistently performed as well as or better than other available methods. This enables large-scale, spatially explicit simulations, such as for the entire continental USA, without sacrificing realism or predictive power. PMID:29624574

  2. Wavelet filter analysis of local atmospheric pressure effects in the long-period tidal bands

    NASA Astrophysics Data System (ADS)

    Hu, X.-G.; Liu, L. T.; Ducarme, B.; Hsu, H. T.; Sun, H.-P.

    2006-11-01

    It is well known that local atmospheric pressure variations strongly affect the observation of short-period Earth tides, such as diurnal, semi-diurnal, and ter-diurnal tides, but local atmospheric pressure effects on the long-period Earth tides have not been studied in detail. This is because local atmospheric pressure is believed not to be sufficient for an effective pressure correction in the long-period tidal bands, and there have been no efficient methods to investigate local atmospheric effects in these bands. The usual tidal analysis software packages, such as ETERNA, Baytap-G, and VAV, cannot provide detailed pressure admittances for the long-period tidal bands. We propose a wavelet method to investigate local atmospheric effects on gravity variations in the long-period tidal bands. This method constructs an efficient orthogonal filter bank with Daubechies wavelets of high vanishing moments. The main advantage of the wavelet filter bank is that it has excellent low-frequency response and efficiently suppresses the instrumental drift of superconducting gravimeters (SGs) without using any mathematical model. Applying the wavelet method to the 13-year continuous gravity observations from SG T003 in Brussels, Belgium, we filtered 12 long-period tidal groups into eight narrow frequency bands. The wavelet method demonstrates that local atmospheric pressure fluctuations are highly correlated with the noise of the SG measurements in the period band of 4-40 days, with correlation coefficients higher than 0.95, and that local atmospheric pressure variations are the main error source for the determination of tidal parameters in these bands. We show the significant improvement in precision of the long-period tidal parameters provided by the wavelet method.

  3. Enhanced encapsulation and bioavailability of breviscapine in PLGA microparticles by nanocrystal and water-soluble polymer template techniques.

    PubMed

    Wang, Hong; Zhang, Guangxing; Ma, Xueqin; Liu, Yanhua; Feng, Jun; Park, Kinam; Wang, Wenping

    2017-06-01

    Poly(lactide-co-glycolide) (PLGA) microparticles are widely used for controlled drug delivery. Emulsion methods have commonly been used to prepare PLGA microparticles, but they usually result in low loading capacity, especially for drugs with poor solubility in organic solvents. In the present study, nanocrystal technology and a water-soluble polymer template method were used to fabricate nanocrystal-loaded microparticles with improved drug loading and encapsulation efficiency for prolonged delivery of breviscapine. Breviscapine nanocrystals were prepared using a precipitation-ultrasonication method and further loaded into PLGA microparticles by casting in a mold made from a water-soluble polymer. The obtained disc-like particles were then characterized and compared with spherical particles prepared by an emulsion-solvent evaporation method. X-ray powder diffraction (XRPD) and confocal laser scanning microscopy (CLSM) analysis confirmed a highly dispersed state of breviscapine inside the microparticles. The drug form, loading percentage, and fabrication technique significantly affected the loading capacity and efficiency of breviscapine in PLGA microparticles, as well as their release performance. Drug loading increased from 2.4% up to 15.3% when both the nanocrystal and template methods were applied, and encapsulation efficiency increased from 48.5% to 91.9%; however, encapsulation efficiency was reduced as drug loading increased. All microparticles showed an initial burst release, then a slow release period of 28 days followed by an erosion-accelerated release phase, providing sustained delivery of breviscapine over a month. A relatively stable serum drug level for more than 30 days was observed after intramuscular injection of the microparticles in rats. Therefore, PLGA microparticles loaded with nanocrystals of poorly soluble drugs provide a promising approach for long-term therapeutic products with preferable in vitro and in vivo performance. Copyright © 2017 Elsevier B.V. All rights reserved.
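    The two loading metrics quoted in the abstract follow the definitions commonly used for PLGA microparticles; a quick sketch with invented masses (the 16.65% theoretical loading below is an assumed feed value chosen so the numbers echo the abstract's 15.3% / 91.9%, not a figure from the paper):

```python
def drug_loading_pct(m_drug, m_particles):
    """Drug loading: encapsulated drug mass per total particle mass, in %."""
    return 100.0 * m_drug / m_particles

def encapsulation_efficiency_pct(actual_loading, theoretical_loading):
    """EE: achieved loading relative to the loading targeted in the feed."""
    return 100.0 * actual_loading / theoretical_loading

# Hypothetical batch: 15.3 mg drug recovered in 100 mg of particles,
# against an assumed theoretical (feed) loading of 16.65%
dl = drug_loading_pct(m_drug=15.3, m_particles=100.0)   # 15.3 %
ee = encapsulation_efficiency_pct(dl, 16.65)            # ~91.9 %
```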

  4. Efficient spin filter and spin valve in a single-molecule magnet Fe4 between two graphene electrodes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zu, Feng-Xia; School of Physics and Wuhan National High Magnetic Field Center, Huazhong University of Science and Technology, Wuhan 430074; Gao, Guo-Ying

    2015-12-21

    We propose a magnetic molecular junction consisting of a single-molecule magnet Fe4 connected to two graphene electrodes and investigate its transport properties using the nonequilibrium Green's function method combined with spin-polarized density-functional theory. The results show that the device can act as a nearly perfect spin filter with an efficiency approaching 100%. Our calculations provide crucial microscopic information on how the four iron cores of the chemical structure are responsible for the spin-resolved transmission. Moreover, the device also behaves as a highly efficient spin valve, making it an excellent candidate for molecular spintronics. The idea of combining single-molecule magnets with graphene provides a direction for designing a new class of molecular spintronic devices.

  5. Unipolar Barrier Dual-Band Infrared Detectors

    NASA Technical Reports Server (NTRS)

    Ting, David Z. (Inventor); Soibel, Alexander (Inventor); Khoshakhlagh, Arezou (Inventor); Gunapala, Sarath (Inventor)

    2017-01-01

    Dual-band barrier infrared detectors having structures configured to reduce spectral crosstalk between spectral bands and/or enhance quantum efficiency, and methods of their manufacture, are provided. In particular, dual-band device structures are provided for constructing high-performance barrier infrared detectors with reduced crosstalk and/or enhanced quantum efficiency using novel multi-segmented absorber regions. The novel absorber regions may comprise both p-type and n-type absorber sections. Utilizing such multi-segmented absorbers, it is possible to construct any suitable barrier infrared detector having reduced crosstalk, including npBPN, nBPN, pBPN, npBN, npBP, pBN, and nBP structures. The pBPN and pBN detector structures have high quantum efficiency and suppress dark current, yet have a smaller etch depth than conventional detectors and do not require a thick bottom contact layer.

  6. Ancient numerical daemons of conceptual hydrological modeling: 1. Fidelity and efficiency of time stepping schemes

    NASA Astrophysics Data System (ADS)

    Clark, Martyn P.; Kavetski, Dmitri

    2010-10-01

    A major neglected weakness of many current hydrological models is the numerical method used to solve the governing model equations. This paper thoroughly evaluates several classes of time stepping schemes in terms of numerical reliability and computational efficiency in the context of conceptual hydrological modeling. Numerical experiments are carried out using 8 distinct time stepping algorithms and 6 different conceptual rainfall-runoff models, applied in a densely gauged experimental catchment, as well as in 12 basins with diverse physical and hydroclimatic characteristics. Results show that, over vast regions of the parameter space, the numerical errors of fixed-step explicit schemes commonly used in hydrology routinely dwarf the structural errors of the model conceptualization. This substantially degrades model predictions, but also, disturbingly, generates fortuitously adequate performance for parameter sets where numerical errors compensate for model structural errors. Simply running fixed-step explicit schemes with shorter time steps provides a poor balance between accuracy and efficiency: in some cases daily-step adaptive explicit schemes with moderate error tolerances achieved comparable or higher accuracy than 15 min fixed-step explicit approximations but were nearly 10 times more efficient. From the range of simple time stepping schemes investigated in this work, the fixed-step implicit Euler method and the adaptive explicit Heun method emerge as good practical choices for the majority of simulation scenarios. In combination with the companion paper, where impacts on model analysis, interpretation, and prediction are assessed, this two-part study vividly highlights the impact of numerical errors on critical performance aspects of conceptual hydrological models and provides practical guidelines for robust numerical implementation.
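    The contrast between a fixed-step explicit Euler scheme and an adaptive explicit Heun scheme can be sketched on a linear reservoir, dS/dt = -k*S, chosen here because the exact solution S0*exp(-k*t) is known (an illustrative stand-in for the paper's conceptual rainfall-runoff models, not one of its six). The adaptive scheme uses the Euler-vs-Heun discrepancy as a local error estimate and a standard step-size controller for a second-order method:

```python
import math

def euler_fixed(f, s, t_end, dt):
    """Fixed-step explicit Euler from t = 0 to t_end."""
    t = 0.0
    while t < t_end - 1e-12:
        s += dt * f(s)
        t += dt
    return s

def heun_adaptive(f, s, t_end, tol=1e-6):
    """Adaptive explicit Heun with an error-controlled step size."""
    t, dt = 0.0, 0.1
    while t < t_end - 1e-12:
        dt = min(dt, t_end - t)
        k1 = f(s)
        k2 = f(s + dt * k1)
        err = abs(dt * (k2 - k1) / 2)     # Euler vs Heun discrepancy
        if err <= tol:                    # accept the step
            s += dt * (k1 + k2) / 2
            t += dt
        # Grow/shrink dt; exponent 1/2 matches the estimate's O(dt^2) order
        dt *= min(2.0, max(0.2, 0.9 * (tol / max(err, 1e-15)) ** 0.5))
    return s

k = 2.0
exact = math.exp(-k * 1.0)                            # S0 = 1, t = 1
coarse = euler_fixed(lambda s: -k * s, 1.0, 1.0, 0.25)
adapt = heun_adaptive(lambda s: -k * s, 1.0, 1.0)
```

    With dt = 0.25 the fixed Euler answer is (1 - k*dt)^4 = 0.0625 against an exact 0.1353, a numerical error of the kind the paper shows can dwarf structural model error, while the adaptive Heun run lands within its tolerance.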

  7. High Efficiency DNA Extraction by Graphite Oxide/Cellulose/Magnetite Composites Under Na+ Free System

    NASA Astrophysics Data System (ADS)

    Akceoglu, Garbis Atam; Li, Oi Lun; Saito, Nagahiro

    2016-04-01

    DNA extraction is a key step in various research areas such as biotechnology, diagnostic development, paternity determination, and forensic science. Solid-support extraction is the most common method for DNA purification. In this method, Na+ ions have often been applied in binding buffers in order to obtain high extraction efficiency and high-quality DNA; however, the presence of Na+ ions may interfere with downstream DNA applications. In this study, we proposed a graphite oxide (GO)/cellulose/magnetite composite as an innovative material for Na+-free DNA extraction. The total wt.% of GO was fixed at 4.15% in the composite, and the concentration of magnetite within the composites was controlled at 0-3.98 wt.%. The extraction yield of DNA increased with increasing weight percentage of magnetite; the highest yield was achieved at 3.98 wt.% magnetite, where the extraction efficiency was 338.5 ng/µl. The absorbance ratio between 260 nm and 280 nm (A260/A280) of the DNA elution volume was 1.81, indicating that the extracted DNA was of high purity. The adsorption of DNA is attributed to (1) π-π interaction between the aromatic rings in GO and the nucleobases of the DNA molecule, and (2) surface charge interaction between the positively charged magnetite and anions such as phosphates within the DNA molecules. The results proved that the GO/cellulose/magnetite composite provides a Na+-free method for selective DNA extraction with high extraction efficiency of pure DNA.

  8. Thermodynamic efficiency of learning a rule in neural networks

    NASA Astrophysics Data System (ADS)

    Goldt, Sebastian; Seifert, Udo

    2017-11-01

    Biological systems have to build models from their sensory input data that allow them to efficiently process previously unseen inputs. Here, we study a neural network learning a binary classification rule for these inputs from examples provided by a teacher. We analyse the ability of the network to apply the rule to new inputs, that is, to generalise from past experience. Using stochastic thermodynamics, we show that the thermodynamic costs of the learning process provide an upper bound on the amount of information that the network is able to learn from its teacher, for both batch and online learning. This allows us to introduce a thermodynamic efficiency of learning. We analytically compute the dynamics and the efficiency of a noisy neural network performing online learning in the thermodynamic limit. In particular, we analyse three popular learning algorithms, namely Hebbian, Perceptron, and AdaTron learning. Our work extends the methods of stochastic thermodynamics to a new type of learning problem and might form a suitable basis for investigating the thermodynamics of decision-making.

  9. Optical determination of Shockley-Read-Hall and interface recombination currents in hybrid perovskites

    PubMed Central

    Sarritzu, Valerio; Sestu, Nicola; Marongiu, Daniela; Chang, Xueqing; Masi, Sofia; Rizzo, Aurora; Colella, Silvia; Quochi, Francesco; Saba, Michele; Mura, Andrea; Bongiovanni, Giovanni

    2017-01-01

    Metal-halide perovskite solar cells rival the best inorganic solar cells in power conversion efficiency, providing the outlook for efficient, cheap devices. In order for the technology to mature and approach the ideal Shockley-Queisser efficiency, experimental tools are needed to diagnose which processes limit performance, beyond simply measuring electrical characteristics, which are often affected by parasitic effects and difficult to interpret. Here we study the microscopic origin of the recombination currents causing photoconversion losses with an all-optical technique, measuring the electron-hole free energy as a function of the exciting light intensity. Our method allows assessing the ideality factor and breaks down the electron-hole recombination current into bulk defect and interface contributions, providing an estimate of the limit photoconversion efficiency without any real charge current flowing through the device. We identify Shockley-Read-Hall recombination as the main decay process in insulated perovskite layers and quantify the additional performance degradation due to interface recombination in heterojunctions. PMID:28317883

  10. Eco Assist Techniques through Real-time Monitoring of BEV Energy Usage Efficiency

    PubMed Central

    Kim, Younsun; Lee, Ingeol; Kang, Sungho

    2015-01-01

    Energy efficiency enhancement has become an increasingly important issue for battery electric vehicles. Even though it can be improved in many ways, the driver’s driving pattern strongly influences the battery energy consumption of a vehicle. In this paper, eco assist techniques for simply implementing an energy-efficient driving assistant system are introduced, including eco guide, eco control, and eco monitoring methods. The eco guide is provided to control the vehicle speed and accelerator pedal stroke, and eco control is suggested to limit the output power of the battery. For eco monitoring, the eco indicator and eco report are suggested to teach eco-friendly driving habits. The vehicle test, conducted in four ways, consists of the Federal Test Procedure (FTP-75), the New European Driving Cycle (NEDC), and city and highway cycles, and visual feedback with audible warnings is provided to encourage the driver’s voluntary participation. The vehicle test results show that energy usage efficiency can be increased by up to 19.41%. PMID:26121611

  11. Indoor Pedestrian Localization Using iBeacon and Improved Kalman Filter.

    PubMed

    Sung, Kwangjae; Lee, Dong Kyu 'Roy'; Kim, Hwangnam

    2018-05-26

    Reliable and accurate indoor pedestrian positioning is one of the biggest challenges for location-based systems and applications. Most pedestrian positioning systems suffer from drift error and large bias due to low-cost inertial sensors and the random motions of human beings, as well as unpredictable and time-varying radio-frequency (RF) signals used for position determination. To solve this problem, many indoor positioning approaches have recently been proposed that integrate the user's motion estimated by dead reckoning (DR) with location data obtained by RSS fingerprinting through a Bayesian filter, such as the Kalman filter (KF), unscented Kalman filter (UKF), or particle filter (PF), to achieve higher positioning accuracy in indoor environments. Among Bayesian filtering methods, PF is the most popular integrating approach and can provide the best localization performance; however, since PF uses a large number of particles to achieve that performance, it can incur considerable computational cost. This paper presents an indoor positioning system implemented on a smartphone, which uses simple dead reckoning (DR), RSS fingerprinting with iBeacon and a machine learning scheme, and an improved KF. The core of the system is an enhanced KF called the sigma-point Kalman particle filter (SKPF), which localizes the user by leveraging both the unscented transform of the UKF and the weighting method of the PF. The SKPF algorithm proposed in this study provides enhanced positioning accuracy by fusing positional data obtained from both DR and fingerprinting with uncertainty. The SKPF algorithm achieves better positioning accuracy than the KF and UKF and comparable performance to the PF, with higher computational efficiency than the PF. iBeacon in our positioning system is used for energy-efficient localization and RSS fingerprinting. We aim to design a localization scheme that realizes high positioning accuracy, computational efficiency, and energy efficiency through the SKPF and iBeacon indoors. Empirical experiments in real environments show that the use of the SKPF algorithm and iBeacon in our indoor localization scheme achieves very satisfactory performance in terms of localization accuracy, computational cost, and energy efficiency.
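    The core fusion idea, correcting an uncertain dead-reckoning prediction with an uncertain fingerprint fix, can be shown with a plain one-dimensional Kalman update (a greatly simplified sketch, not the paper's SKPF; the positions and variances are invented):

```python
def kf_fuse(x_pred, p_pred, z_fp, r_fp):
    """One scalar Kalman measurement update.

    x_pred, p_pred: DR-predicted position and its variance
    z_fp, r_fp: fingerprint position fix and its variance
    """
    gain = p_pred / (p_pred + r_fp)        # Kalman gain in [0, 1]
    x = x_pred + gain * (z_fp - x_pred)    # pulled toward the better source
    p = (1.0 - gain) * p_pred              # fused variance always shrinks
    return x, p

# DR says 10.0 m with variance 4.0; fingerprint says 12.0 m with variance 1.0
x, P = kf_fuse(10.0, 4.0, 12.0, 1.0)       # x = 11.6, P = 0.8
```

    The fused estimate sits closer to the more certain source (the fingerprint here), and its variance drops below either input's, which is the payoff of the Bayesian fusion the abstract describes; the SKPF replaces this linear update with sigma-point propagation and particle weighting.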

  12. System and method for leveraging human physiological traits to control microprocessor frequency

    DOEpatents

    Shye, Alex; Pan, Yan; Scholbrock, Benjamin; Miller, J. Scott; Memik, Gokhan; Dinda, Peter A; Dick, Robert P

    2014-03-25

    A system and method for leveraging physiological traits to control microprocessor frequency are disclosed. In some embodiments, the system and method may optimize a particular processor-based architecture based on, for example, end-user satisfaction. In some embodiments, the system and method may determine whether users are satisfied in order to provide higher efficiency, improved reliability, reduced power consumption, increased security, and a better user experience. The system and method may use biometric input devices to provide information about a user's physiological traits to a computer system. Biometric input devices may include, for example, one or more of the following: an eye tracker, a galvanic skin response sensor, and/or a force sensor.

  13. Induction Consolidation of Thermoplastic Composites Using Smart Susceptors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsen, Marc R

    2012-06-14

    This project has focused on the energy-efficient consolidation and molding of fiber-reinforced thermoplastic composite components as an alternative to conventional processing methods such as autoclave processing. The expanding application of composite materials in wind energy, automotive, and aerospace provides an attractive energy efficiency target for process development. The intent is to have this efficient processing, along with the recyclable thermoplastic materials, ready for large-scale application before these high production volume levels are reached, so that the process can be implemented in a timely manner to realize the maximum economic, energy, and environmental efficiencies. Under this project an increased understanding of the use of induction heating with smart susceptors for the consolidation of thermoplastic composites has been achieved. This was done by establishing processing equipment and tooling and subsequently demonstrating this fabrication technology by consolidating/molding entry-level components for each of the participating industrial segments: wind energy, aerospace, and automotive. This understanding adds to the nation's capability to affordably manufacture high-quality, lightweight, high-performance components from advanced recyclable composite materials in a lean and energy-efficient manner. The use of induction heating with smart susceptors is a precisely controlled, low-energy method for the consolidation and molding of thermoplastic composites. The smart susceptor provides intrinsic thermal control based on its interaction with the magnetic field from the induction coil, thereby producing highly repeatable processing. The low energy usage is enabled by the fact that only the smart susceptor surface of the tool is heated, not the entire tool; much less mass is heated, so significantly less energy is required to consolidate/mold the desired composite components.
This energy efficiency results in potential energy savings of ~75% compared to autoclave processing in aerospace, ~63% compared to compression molding in automotive, and ~42% compared to convectively heated tools in wind energy. The ability to make parts in a rapid and controlled manner provides significant economic advantages for each of the industrial segments. These attributes were demonstrated during the processing of the demonstration components on this project.

  14. A wavefront orientation method for precise numerical determination of tsunami travel time

    NASA Astrophysics Data System (ADS)

    Fine, I. V.; Thomson, R. E.

    2013-04-01

    We present a highly accurate and computationally efficient method (herein, the "wavefront orientation method") for determining the travel time of oceanic tsunamis. Based on Huygens principle, the method uses an eight-point grid-point pattern and the most recent information on the orientation of the advancing wave front to determine the time for a tsunami to travel to a specific oceanic location. The method is shown to provide improved accuracy and reduced anisotropy compared with the conventional multiple grid-point method presently in widespread use.

  15. Comparison of Several Methods for Determining the Internal Resistance of Lithium Ion Cells

    PubMed Central

    Schweiger, Hans-Georg; Obeidi, Ossama; Komesker, Oliver; Raschke, André; Schiemann, Michael; Zehner, Christian; Gehnen, Markus; Keller, Michael; Birke, Peter

    2010-01-01

    The internal resistance is the key parameter for determining power, energy efficiency and lost heat of a lithium ion cell. Precise knowledge of this value is vital for designing battery systems for automotive applications. Internal resistance of a cell was determined by current step methods, AC (alternating current) methods, electrochemical impedance spectroscopy and thermal loss methods. The outcomes of these measurements have been compared with each other. If charge or discharge of the cell is limited, current step methods provide the same results as energy loss methods. PMID:22219678
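The current step method reduces to Ohm's law applied to the voltage jump that accompanies a current step. A minimal sketch with assumed readings:

```python
# Sketch of the current-step method for internal resistance (values assumed):
# apply a current step and read the immediate voltage change; R_i = dV / dI.

def internal_resistance(v_before, v_after, i_before, i_after):
    """Ohmic internal resistance from a current step (ohms)."""
    return (v_before - v_after) / (i_after - i_before)

# Example: cell rests at 3.70 V; a 10 A discharge step pulls it to 3.65 V.
r_i = internal_resistance(v_before=3.70, v_after=3.65, i_before=0.0, i_after=10.0)
print(f"R_i = {r_i*1000:.1f} mOhm")   # prints "R_i = 5.0 mOhm"
```

AC and impedance-spectroscopy methods probe the same quantity at a chosen frequency rather than in the time domain.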

  16. Multi-class multi-residue analysis of veterinary drugs in meat using enhanced matrix removal lipid cleanup and liquid chromatography-tandem mass spectrometry.

    PubMed

    Zhao, Limian; Lucas, Derick; Long, David; Richter, Bruce; Stevens, Joan

    2018-05-11

    This study presents the development and validation of a quantitation method for the analysis of multi-class, multi-residue veterinary drugs using lipid removal cleanup cartridges, enhanced matrix removal lipid (EMR-Lipid), for different meat matrices by liquid chromatography-tandem mass spectrometry detection. Meat samples were extracted using a two-step solid-liquid extraction followed by pass-through sample cleanup. The method was optimized with respect to buffer and solvent composition, solvent additives, and EMR-Lipid cartridge cleanup. The developed method was then validated in five meat matrices (porcine muscle, bovine muscle, bovine liver, bovine kidney, and chicken liver) to evaluate method performance characteristics such as absolute recoveries and precision at three spiking levels, calibration curve linearity, limit of quantitation (LOQ), and matrix effect. The results showed that >90% of veterinary drug analytes achieved satisfactory recoveries of 60-120%. Over 97% of analytes achieved excellent reproducibility (relative standard deviation (RSD) < 20%), and the LOQs were 1-5 μg/kg in the evaluated meat matrices. The matrix co-extractive removal efficiency by weight provided by EMR-Lipid cartridge cleanup was 42-58% in samples. A post-column infusion study showed that matrix ion suppression was reduced for samples with the EMR-Lipid cartridge cleanup. The reduced matrix ion suppression effect was also confirmed, with <15% of compounds showing significant quantitative ion suppression (>30%) for all tested veterinary drugs in all meat matrices. The results showed that the two-step solid-liquid extraction provides efficient extraction for the entire spectrum of veterinary drugs, including difficult classes such as tetracyclines and beta-lactams.
EMR-Lipid cartridges after extraction provided efficient sample cleanup with an easy, streamlined protocol and minimal impact on analyte recovery, improving method reliability and consistency. Copyright © 2018 Elsevier B.V. All rights reserved.

  17. On evaluating health centers groups in Lisbon and Tagus Valley: efficiency, equity and quality

    PubMed Central

    2013-01-01

    Background Bearing in mind increasing health expenditure and its weight in the Portuguese gross domestic product, it is of the utmost importance to evaluate the performance of Primary Health Care providers taking into account efficiency, quality, and equity. This paper aims to contribute to a better understanding of the performance of Primary Health Care by measuring it in a Portuguese region (Lisbon and Tagus Valley) and identifying best practices. It also intends to evaluate the quality and equity provided. Methods For the purpose of measuring the efficiency of the health center groups (ACES), the non-parametric full frontier technique of data envelopment analysis (DEA) was adopted. The recent partial frontier method of order-m was also used to estimate the influence of exogenous variables on the efficiency of the ACES. Horizontal equity was investigated by applying the non-parametric Kruskal-Wallis test with multiple comparisons. Moreover, the quality of service was analyzed using the ratio of complaints to the total activity of the ACES. Results On the whole, a significant level of inefficiency was observed, although there was a general improvement in efficiency between 2009 and 2010. It was found that nursing was the service with the lowest scores. Concerning horizontal equity, the analysis showed no evidence of relevant disparities between the different subregions (NUTS III). Concerning the exogenous variables, purchasing power, the percentage of patients aged 65 years or older, and population size affect efficiency negatively. Conclusions This research shows that better usage of the available resources and the creation of a learning network for dissemination of best practices will contribute to improvements in the efficiency of the ACES while maintaining or even improving quality and equity. It was also shown that market structure does matter when efficiency measurement is addressed. PMID:24359014
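The DEA scores behind such studies come from solving one small linear program per unit. A sketch of an input-oriented CCR envelopment model with made-up clinic data (the real study's inputs and outputs differ):

```python
import numpy as np
from scipy.optimize import linprog

# Sketch of an input-oriented CCR DEA model (the general approach such studies
# use; the data below are made up). X[j] = inputs, Y[j] = outputs of unit j.

def dea_ccr_efficiency(X, Y, j0):
    """Technical efficiency score (0 < theta <= 1) of unit j0."""
    n, m = X.shape          # units, inputs
    s = Y.shape[1]          # outputs
    c = np.r_[1.0, np.zeros(n)]                 # minimize theta
    # Input rows:  sum_j lam_j * x_ij - theta * x_i,j0 <= 0
    A_in = np.c_[-X[j0].reshape(m, 1), X.T]
    # Output rows: -sum_j lam_j * y_rj <= -y_r,j0
    A_out = np.c_[np.zeros((s, 1)), -Y.T]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[j0]],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

# Hypothetical units: (staff, beds) -> (consultations)
X = np.array([[5.0, 10.0], [8.0, 12.0], [5.0, 13.0]])
Y = np.array([[100.0], [120.0], [80.0]])
scores = [dea_ccr_efficiency(X, Y, j) for j in range(len(X))]
```

Here unit 2 scores 0.8: it could in principle produce its output with 80% of its inputs, by reference to the efficient frontier spanned by the other units.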

  18. Noise-resistant spectral features for retrieving foliar chemical parameters

    USDA-ARS?s Scientific Manuscript database

    Foliar chemical constituents are important indicators for understanding vegetation growing status and ecosystem functionality. Given its noncontact and nondestructive nature, hyperspectral analysis is a superior and efficient method for deriving these parameters. In practical implementation o...

  19. An efficient visualization method for analyzing biometric data

    NASA Astrophysics Data System (ADS)

    Rahmes, Mark; McGonagle, Mike; Yates, J. Harlan; Henning, Ronda; Hackett, Jay

    2013-05-01

    We introduce a novel application for biometric data analysis. This technology can be used as part of a unique and systematic approach designed to augment existing processing chains. Our system provides image quality control and analysis capabilities. We show how analysis and efficient visualization are used as part of an automated process. The goal of this system is to provide a unified platform for the analysis of biometric images that reduces manual effort and increases the likelihood of a match being brought to an examiner's attention from either a manual or lights-out application. We discuss the functionality of FeatureSCOPE™, which provides an efficient tool for feature analysis and quality control of biometric extracted features. Biometric databases must be checked for accuracy across a large volume of data attributes. Our solution accelerates the review of features by a factor of up to 100. Qualitative results and cost reduction are demonstrated through efficient parallel visual review for quality control. Our process automatically sorts and filters features for examination and packs them into a condensed view. An analyst can then rapidly page through screens of features, flagging and annotating outliers as necessary.

  20. Decomposing Cost Efficiency in Regional Long-term Care Provision in Japan.

    PubMed

    Yamauchi, Yasuhiro

    2015-07-12

    Many developed countries face a growing need for long-term care provision because of population ageing. Japan is one such example, given its population's longevity and low birth rate. In this study, we examine the efficiency of Japan's regional long-term care system in FY2010 by performing a data envelopment analysis, a non-parametric frontier approach, on prefectural data and separating cost efficiency into technical, allocative, and price efficiencies under different average unit costs across regions. In doing so, we elucidate the structure of cost inefficiency by incorporating a method for restricting weight flexibility to avoid unrealistic concerns arising from zero optimal weight. The results indicate that technical inefficiency accounts for the highest share of losses, followed by price inefficiency and allocation inefficiency. Moreover, the majority of technical inefficiency losses stem from labor costs, particularly those for professional caregivers providing institutional services. We show that the largest share of allocative inefficiency losses can also be traced to labor costs for professional caregivers providing institutional services, while the labor provision of in-home care services shows an efficiency gain. However, although none of the prefectures gains efficiency by increasing the number of professional caregivers for institutional services, quite a few prefectures would gain allocative efficiency by increasing capital inputs for institutional services. These results indicate that preferred policies for promoting efficiency might vary from region to region, and thus, policy implications should be drawn with care.

  1. Antimicrobial breakpoint estimation accounting for variability in pharmacokinetics.

    PubMed

    Bi, Goue Denis Gohore; Li, Jun; Nekka, Fahima

    2009-06-26

    Pharmacokinetic and pharmacodynamic (PK/PD) indices are increasingly being used in the microbiological field to assess the efficacy of a dosing regimen. In contrast to methods using MIC, PK/PD-based methods reflect in vivo conditions and are more predictive of efficacy. Unfortunately, they entail the use of one PK-derived value such as AUC or Cmax and may thus lead to biased efficiency information when the variability is large. The aim of the present work was to evaluate the efficacy of a treatment by adjusting classical breakpoint estimation methods to the situation of variable PK profiles. We propose a logical generalisation of the usual AUC methods by introducing the concept of "efficiency" for a PK profile, which involves the efficacy function as a weight. We formulated these methods for both classes of concentration- and time-dependent antibiotics. Using drug models and in silico approaches, we provide a theoretical basis for characterizing the efficiency of a PK profile under in vivo conditions. We also used the particular case of variable drug intake to assess the effect of the variable PK profiles generated and to analyse the implications for breakpoint estimation. Compared to traditional methods, our weighted AUC approach gives a more powerful PK/PD link and reveals, through examples, interesting issues about the uniqueness of therapeutic outcome indices and antibiotic resistance problems.
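The weighted-AUC idea can be sketched numerically (an assumed sigmoidal Emax efficacy weight and an illustrative one-compartment profile, not the paper's exact model):

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal rule (kept explicit to avoid NumPy-version differences)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def hill_efficacy(c, emax=1.0, ec50=2.0, n=2.0):
    """Sigmoidal Emax (Hill) efficacy weight; all parameters are assumed."""
    return emax * c**n / (ec50**n + c**n)

# Illustrative one-compartment oral-dose concentration profile over 24 h.
t = np.linspace(0.0, 24.0, 241)
c = 10.0 * (np.exp(-0.1 * t) - np.exp(-t))

plain_auc = trapz(c, t)                  # classical AUC
eff = trapz(hill_efficacy(c), t)         # efficacy-weighted "efficiency"
```

Because concentrations near or below the EC50 contribute little efficacy, the weighted integral can rank variable PK profiles differently from the raw AUC.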

  2. Increasing the computational efficiency of digital cross correlation by a vectorization method

    NASA Astrophysics Data System (ADS)

    Chang, Ching-Yuan; Ma, Chien-Ching

    2017-08-01

    This study presents a vectorization method for use in MATLAB programming aimed at increasing the computational efficiency of digital cross correlation in sound and images, resulting in a speedup of 6.387 and 36.044 times compared with performance values obtained from looped expression. This work bridges the gap between matrix operations and loop iteration, preserving flexibility and efficiency in program testing. This paper uses numerical simulation to verify the speedup of the proposed vectorization method as well as experiments to measure the quantitative transient displacement response subjected to dynamic impact loading. The experiment involved the use of a high speed camera as well as a fiber optic system to measure the transient displacement in a cantilever beam under impact from a steel ball. Experimental measurement data obtained from the two methods are in excellent agreement in both the time and frequency domain, with discrepancies of only 0.68%. Numerical and experiment results demonstrate the efficacy of the proposed vectorization method with regard to computational speed in signal processing and high precision in the correlation algorithm. We also present the source code with which to build MATLAB-executable functions on Windows as well as Linux platforms, and provide a series of examples to demonstrate the application of the proposed vectorization method.
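The kind of loop-versus-vectorized comparison described can be illustrated outside MATLAB as well; a toy Python/NumPy sketch of our own (not the paper's code), computing cross-correlation at non-negative lags both ways:

```python
import numpy as np

# Toy comparison (not the paper's MATLAB code): a looped cross-correlation
# versus the equivalent vectorized call.

def xcorr_loop(a, b):
    """Cross-correlation sum_i a[i]*b[i+lag] at non-negative lags, via a loop."""
    n = len(a)
    out = np.empty(n)
    for lag in range(n):
        out[lag] = np.sum(a[: n - lag] * b[lag:])
    return out

def xcorr_vec(a, b):
    """The same lagged products computed by NumPy's optimized C routine."""
    return np.correlate(b, a, mode="full")[len(a) - 1 :]

rng = np.random.default_rng(2)
sig, ref = rng.standard_normal(256), rng.standard_normal(256)
```

The two functions return identical results; the vectorized form simply moves the inner loop into compiled code, which is where the reported speedups come from.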

  3. Efficient alignment-free DNA barcode analytics.

    PubMed

    Kuksa, Pavel; Pavlovic, Vladimir

    2009-11-10

    In this work we consider barcode DNA analysis problems and address them using alternative, alignment-free methods and representations which model sequences as collections of short sequence fragments (features). The methods use fixed-length representations (spectrum) for barcode sequences to measure similarities or dissimilarities between sequences coming from the same or different species. The spectrum-based representation not only allows for accurate and computationally efficient species classification, but also opens the possibility of accurate clustering analysis of putative species barcodes and identification of critical within-barcode loci distinguishing barcodes of different sample groups. The new alignment-free methods provide highly accurate and fast DNA barcode-based identification and classification of species, with substantial improvements in accuracy and speed over state-of-the-art barcode analysis methods. We evaluate our methods on problems of species classification and identification using barcodes, important and relevant analytical tasks in many practical applications (adverse species movement monitoring, sampling surveys for unknown or pathogenic species identification, biodiversity assessment, etc.). On several benchmark barcode datasets, including ACG, Astraptes, Hesperiidae, Fish larvae, and Birds of North America, the proposed alignment-free methods considerably improve prediction accuracy compared to prior results. We also observe significant running-time improvements over the state-of-the-art methods. Our results show that newly developed alignment-free methods for DNA barcoding can efficiently, and with high accuracy, identify specimens by examining only a few barcode features, resulting in increased scalability and interpretability of current computational approaches to barcoding.
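A spectrum representation of the kind described can be sketched as a fixed-length k-mer count vector (k = 3 and the L1 distance below are our illustrative choices, not necessarily the paper's):

```python
from collections import Counter
from itertools import product

# Sketch of a fixed-length k-mer "spectrum" for barcode sequences.

def spectrum(seq, k=3, alphabet="ACGT"):
    """Count vector over all 4**k possible k-mers, in lexicographic order."""
    counts = Counter(seq[i : i + k] for i in range(len(seq) - k + 1))
    return [counts.get("".join(p), 0) for p in product(alphabet, repeat=k)]

def l1_distance(u, v):
    return sum(abs(a - b) for a, b in zip(u, v))

s1 = spectrum("ACGTACGTAC")
s2 = spectrum("ACGTACGTTT")
d = l1_distance(s1, s2)
```

Because every sequence maps to a vector of the same length regardless of its own length, spectra can be compared without any alignment step.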

  4. Estimation and modeling of electrofishing capture efficiency for fishes in wadeable warmwater streams

    USGS Publications Warehouse

    Price, A.; Peterson, James T.

    2010-01-01

    Stream fish managers often use fish sample data to inform management decisions affecting fish populations. Fish sample data, however, can be biased by the same factors affecting fish populations. To minimize the effect of sample biases on decision making, biologists need information on the effectiveness of fish sampling methods. We evaluated single-pass backpack electrofishing and seining combined with electrofishing by following a dual-gear, mark–recapture approach in 61 blocknetted sample units within first- to third-order streams. We also estimated fish movement out of unblocked units during sampling. Capture efficiency and fish abundances were modeled for 50 fish species by use of conditional multinomial capture–recapture models. The best-approximating models indicated that capture efficiencies were generally low and differed among species groups based on family or genus. Efficiencies of single-pass electrofishing and seining combined with electrofishing were greatest for Catostomidae and lowest for Ictaluridae. Fish body length and stream habitat characteristics (mean cross-sectional area, wood density, mean current velocity, and turbidity) also were related to capture efficiency of both methods, but the effects differed among species groups. We estimated that, on average, 23% of fish left the unblocked sample units, but net movement varied among species. Our results suggest that (1) common warmwater stream fish sampling methods have low capture efficiency and (2) failure to adjust for incomplete capture may bias estimates of fish abundance. We suggest that managers minimize bias from incomplete capture by adjusting data for site- and species-specific capture efficiency and by choosing sampling gear that provide estimates with minimal bias and variance. Furthermore, if block nets are not used, we recommend that managers adjust the data based on unconditional capture efficiency.
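The suggested adjustment for incomplete capture amounts to dividing the raw count by the estimated capture efficiency; a minimal sketch with made-up numbers:

```python
# Sketch: adjusting a raw count for incomplete capture (illustrative numbers).
# With capture efficiency p, an abundance estimate is count / p.

def adjusted_abundance(count, capture_efficiency):
    if not 0 < capture_efficiency <= 1:
        raise ValueError("capture efficiency must be in (0, 1]")
    return count / capture_efficiency

# 18 suckers caught with single-pass efficiency assumed at 0.30:
n_hat = adjusted_abundance(18, 0.30)   # 60.0
```

Site- and species-specific efficiency estimates, as the authors recommend, would replace the assumed 0.30 here.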

  5. Detection of main tidal frequencies using least squares harmonic estimation method

    NASA Astrophysics Data System (ADS)

    Mousavian, R.; Hossainali, M. Mashhadi

    2012-11-01

    In this paper the efficiency of the method of Least Squares Harmonic Estimation (LS-HE) for detecting the main tidal frequencies is investigated. Using this method, the tidal spectrum of sea level data is evaluated at two tidal stations: Bandar Abbas in the south of Iran and Workington on the west coast of the UK. The amplitudes of the tidal constituents at these two stations are not the same. Moreover, unlike the Workington record, the Bandar Abbas tidal record is not an equispaced time series. Therefore, the analysis of the hourly tidal observations at Bandar Abbas and Workington can provide a reasonable insight into the efficiency of this method for analyzing the frequency content of tidal time series. Furthermore, applying the Fourier transform to the Workington record provides an independent source of information for evaluating the tidal spectrum produced by the LS-HE method. According to the results, the spectra of these two records contain the components with the maximum amplitudes among those expected in this time span, as well as some frequencies new to the list of known constituents. In addition, in terms of the frequencies with maximum amplitude, the power spectra derived from the two methods are the same. These results demonstrate the ability of LS-HE to identify the frequencies with maximum amplitude in both tidal records.
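At each candidate frequency, LS-HE amounts to least-squares regression of the record on cosine and sine terms, which is what makes it applicable to non-equispaced series; a sketch on synthetic data (the M2 tidal frequency is real, everything else below is made up):

```python
import numpy as np

# Sketch of least-squares harmonic estimation on an unevenly sampled record:
# for a candidate frequency f, regress the series on cos/sin terms and take
# the recovered amplitude.

def lshe_amplitude(t, y, f):
    """Amplitude of the harmonic at frequency f (cycles per unit of t)."""
    A = np.column_stack([np.cos(2 * np.pi * f * t),
                         np.sin(2 * np.pi * f * t),
                         np.ones_like(t)])           # mean term
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.hypot(coef[0], coef[1])

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 30 * 24, 2000))     # irregular sample times (hours)
m2 = 1.0 / 12.4206                             # M2 tidal frequency (cycles/h)
y = 1.2 * np.cos(2 * np.pi * m2 * t + 0.4) + 0.1 * rng.standard_normal(t.size)
amp = lshe_amplitude(t, y, m2)                 # close to the true 1.2
```

Scanning `f` over a grid of candidate frequencies yields the LS-HE spectrum; unlike the Fourier transform, nothing here requires equispaced samples.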

  6. Teleform scannable data entry: an efficient method to update a community-based medical record? Community care coordination network Database Group.

    PubMed Central

    Guerette, P.; Robinson, B.; Moran, W. P.; Messick, C.; Wright, M.; Wofford, J.; Velez, R.

    1995-01-01

    Community-based multi-disciplinary care of chronically ill individuals frequently requires the efforts of several agencies and organizations. The Community Care Coordination Network (CCCN) is an effort to establish a community-based clinical database and electronic communication system to facilitate the exchange of pertinent patient data among primary care, community-based and hospital-based providers. In developing a primary care based electronic record, a method is needed to update records from the field or remote sites and agencies and yet maintain data quality. Scannable data entry with fixed fields, optical character recognition and verification was compared to traditional keyboard data entry to determine the relative efficiency of each method in updating the CCCN database. PMID:8563414

  7. Delivery methods for site-specific nucleases: Achieving the full potential of therapeutic gene editing.

    PubMed

    Liu, Jia; Shui, Sai-Lan

    2016-12-28

    The advent of site-specific nucleases, particularly CRISPR/Cas9, provides researchers with the unprecedented ability to manipulate genomic sequences. These nucleases are used to create model cell lines, engineer metabolic pathways, produce transgenic animals and plants, perform genome-wide functional screen and, most importantly, treat human diseases that are difficult to tackle by traditional medications. Considerable efforts have been devoted to improving the efficiency and specificity of nucleases for clinical applications. However, safe and efficient delivery methods remain the major obstacle for therapeutic gene editing. In this review, we summarize the recent progress on nuclease delivery methods, highlight their impact on the outcomes of gene editing and discuss the potential of different delivery approaches for therapeutic gene editing. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Efficient model learning methods for actor-critic control.

    PubMed

    Grondman, Ivo; Vaandrager, Maarten; Buşoniu, Lucian; Babuska, Robert; Schuitema, Erik

    2012-06-01

    We propose two new actor-critic algorithms for reinforcement learning. Both algorithms use local linear regression (LLR) to learn approximations of the functions involved. A crucial feature of the algorithms is that they also learn a process model, and this, in combination with LLR, provides an efficient policy update for faster learning. The first algorithm uses a novel model-based update rule for the actor parameters. The second algorithm does not use an explicit actor but learns a reference model which represents a desired behavior, from which desired control actions can be calculated using the inverse of the learned process model. The two novel methods and a standard actor-critic algorithm are applied to the pendulum swing-up problem, in which the novel methods achieve faster learning than the standard algorithm.

  9. High-throughput protein concentration and buffer exchange: comparison of ultrafiltration and ammonium sulfate precipitation.

    PubMed

    Moore, Priscilla A; Kery, Vladimir

    2009-01-01

    High-throughput protein purification is a complex, multi-step process. There are several technical challenges in the course of this process that are not experienced when purifying a single protein. Among the most challenging are the high-throughput protein concentration and buffer exchange, which are not only labor-intensive but can also result in significant losses of purified proteins. We describe two methods of high-throughput protein concentration and buffer exchange: one using ammonium sulfate precipitation and one using micro-concentrating devices based on membrane ultrafiltration. We evaluated the efficiency of both methods on a set of 18 randomly selected purified proteins from Shewanella oneidensis. While both methods provide similar yield and efficiency, the ammonium sulfate precipitation is much less labor intensive and time consuming than the ultrafiltration.

  10. Quantitative determination of α-arbutin, β-arbutin, kojic acid, nicotinamide, hydroquinone, resorcinol, 4-methoxyphenol, 4-ethoxyphenol and ascorbic acid from skin whitening products by HPLC-UV

    USDA-ARS?s Scientific Manuscript database

    Development of an analytical method for the simultaneous determination of multifarious skin whitening agents will provide an efficient tool to analyze skin whitening cosmetics. An HPLC-UV method was developed for quantitative analysis of six commonly used whitening agents, α-arbutin, β-arbutin, koji...

  11. Efficient Synthesis of γ-Lactams by a Tandem Reductive Amination/Lactamization Sequence

    PubMed Central

    Nöth, Julica; Frankowski, Kevin J.; Neuenswander, Benjamin; Aubé, Jeffrey; Reiser, Oliver

    2009-01-01

    A three-component method for synthesizing highly-substituted γ-lactams from readily available maleimides, aldehydes and amines is described. A new reductive amination/intramolecular lactamization sequence provides a straightforward route to the lactam products in a single manipulation. The general utility of this method is demonstrated by the parallel synthesis of a γ-lactam library. PMID:18338857

  12. Efficient synthesis of gamma-lactams by a tandem reductive amination/lactamization sequence.

    PubMed

    Nöth, Julica; Frankowski, Kevin J; Neuenswander, Benjamin; Aubé, Jeffrey; Reiser, Oliver

    2008-01-01

    A three-component method for the synthesis of highly substituted gamma-lactams from readily available maleimides, aldehydes, and amines is described. A new reductive amination/intramolecular lactamization sequence provides a straightforward route to the lactam products in a single manipulation. The general utility of this method is demonstrated by the parallel synthesis of a gamma-lactam library.

  13. Toward standardized test methods to determine the effectiveness of filtration media against airborne nanoparticles

    NASA Astrophysics Data System (ADS)

    Wang, Jing; Tronville, Paolo

    2014-06-01

    The filtration of airborne nanoparticles is an important control technique as the environmental, health, and safety impacts of nanomaterials grow. A review of the literature shows that significant progress has been made on airborne nanoparticle filtration in the academic field in recent years. We summarize the filtration mechanisms of fibrous and membrane filters; air flow resistance and the filter media figure of merit are discussed. Our review focuses on air filtration test methods and the instrumentation necessary to implement them; recent experimental studies are summarized accordingly. Two methods, using monodisperse and polydisperse challenge aerosols respectively, are discussed in detail. Our survey shows that commercial instruments are already available for generating large quantities of nanoparticles and for sizing and quantifying them accurately. Commercial self-contained filter test systems make measurement possible for particles down to 15 nm. Current international standards dealing with efficiency tests for filters and filter media focus on measurement of the minimum efficiency at the most penetrating particle size. The available knowledge and instruments provide a solid base for development of test methods to determine the effectiveness of filtration media against airborne nanoparticles down to the single-digit nanometer range.

  14. Beluga whale, Delphinapterus leucas, vocalizations from the Churchill River, Manitoba, Canada.

    PubMed

    Chmelnitsky, Elly G; Ferguson, Steven H

    2012-06-01

    Classification of animal vocalizations is often done by a human observer using aural and visual analysis, but more efficient, automated methods have also been utilized to reduce bias and increase reproducibility. Beluga whale, Delphinapterus leucas, calls were described from recordings collected in the summers of 2006-2008 in the Churchill River, Manitoba. Calls (n=706) were classified based on aural and visual analysis, and call characteristics were measured; calls were separated into 453 whistles (64.2%; 22 types), 183 pulsed/noisy calls (25.9%; 15 types), and 70 combined calls (9.9%; seven types). Measured parameters varied within each call type, but less variation existed in pulsed and noisy call types and some combined call types than in whistles. A more efficient and repeatable hierarchical clustering method was applied to 200 randomly chosen whistles using six call characteristics as variables; twelve groups were identified. Call characteristics varied less in cluster analysis groups than in whistle types described by visual and aural analysis, and results were similar to the whistle contours described. This study provided the first description of beluga calls in Hudson Bay, and the use of two methods provides more robust interpretations and an assessment of appropriate methods for future studies.
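The hierarchical-clustering step can be sketched with SciPy (synthetic two-feature "calls" standing in for the six measured call characteristics; the linkage method is our choice):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Sketch of hierarchical clustering of call features (synthetic data; the
# study used six measured call characteristics as variables).

rng = np.random.default_rng(1)
# Two artificial "call types": (duration s, start frequency kHz) features.
calls = np.vstack([rng.normal([0.4, 3.0], 0.05, size=(20, 2)),
                   rng.normal([1.1, 8.0], 0.05, size=(20, 2))])

Z = linkage(calls, method="ward")              # agglomerative clustering tree
groups = fcluster(Z, t=2, criterion="maxclust")  # cut the tree into 2 groups
```

In practice the features would be standardized first and the cut level chosen from the dendrogram rather than fixed in advance.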

  15. Dynamic programming-based hot spot identification approach for pedestrian crashes.

    PubMed

    Medury, Aditya; Grembek, Offer

    2016-08-01

    Network screening techniques are widely used by state agencies to identify locations with high collision concentration, also referred to as hot spots. However, most of the research in this regard has focused on identifying highway segments that are of concern for automobile collisions. In comparison, pedestrian hot spot detection has typically focused on analyzing pedestrian crashes in specific locations, such as at/near intersections, mid-blocks, and/or other crossings, as opposed to long stretches of roadway. In this context, the efficiency of some of the widely used network screening methods has not been tested. Hence, in order to address this issue, a dynamic programming-based hot spot identification approach is proposed which provides efficient hot spot definitions for pedestrian crashes. The proposed approach is compared with the sliding window method and an intersection buffer-based approach. The results reveal that the dynamic programming method generates more hot spots with a higher number of crashes, while providing small hot spot segment lengths. In comparison, the sliding window method is shown to suffer from shortcomings due to a first-come-first-served approach to hot spot identification and a fixed hot spot window length assumption. Copyright © 2016 Elsevier Ltd. All rights reserved.
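The sliding-window baseline the authors compare against can be sketched as follows (window length, step, threshold, and crash positions are made-up values):

```python
# Sketch of a sliding-window network screen (illustrative numbers): slide a
# fixed-length window along a corridor and flag windows whose crash count
# meets a threshold.

def sliding_window_hotspots(crash_positions, length, window=0.3, step=0.1,
                            threshold=3):
    """Return (start, end, count) for windows with >= threshold crashes (km)."""
    hotspots = []
    start = 0.0
    while start + window <= length:
        count = sum(start <= x < start + window for x in crash_positions)
        if count >= threshold:
            hotspots.append((round(start, 3), round(start + window, 3), count))
        start += step
    return hotspots

# Crash locations (km) along a hypothetical 2 km corridor.
crashes = [0.05, 0.12, 0.18, 0.22, 0.95, 1.41, 1.43, 1.47]
spots = sliding_window_hotspots(crashes, length=2.0)
```

Note the fixed window length and the overlapping flagged windows around each cluster; the paper's dynamic-programming formulation is designed to avoid exactly these artifacts.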

  16. Method to produce nanocrystalline powders of oxide-based phosphors for lighting applications

    DOEpatents

    Loureiro, Sergio Paulo Martins; Setlur, Anant Achyut; Williams, Darryl Stephen; Manoharan, Mohan; Srivastava, Alok Mani

    2007-12-25

    Some embodiments of the present invention are directed toward nanocrystalline oxide-based phosphor materials, and methods for making same. Typically, such methods comprise a steric entrapment route for converting precursors into such phosphor material. In some embodiments, the nanocrystalline oxide-based phosphor materials are quantum splitting phosphors. In some or other embodiments, such nanocrystalline oxide based phosphor materials provide reduced scattering, leading to greater efficiency, when used in lighting applications.

  17. Controlling Second Harmonic Efficiency of Laser Beam Interactions

    NASA Technical Reports Server (NTRS)

    Barnes, Norman P. (Inventor); Walsh, Brian M. (Inventor); Reichle, Donald J. (Inventor)

    2011-01-01

    A method is provided for controlling second harmonic efficiency of laser beam interactions. A laser system generates two laser beams (e.g., a laser beam with two polarizations) for incidence on a nonlinear crystal having a preferred direction of propagation. Prior to incidence on the crystal, the beams are optically processed based on the crystal's beam separation characteristics to thereby control a position in the crystal along the preferred direction of propagation at which the beams interact.

  18. Mesoporous TiO2 Yolk-Shell Microspheres for Dye-sensitized Solar Cells with a High Efficiency Exceeding 11%

    PubMed Central

    Li, Zhao-Qian; Chen, Wang-Chao; Guo, Fu-Ling; Mo, Li-E; Hu, Lin-Hua; Dai, Song-Yuan

    2015-01-01

    Yolk-shell TiO2 microspheres were synthesized via a one-pot template-free solvothermal method building on the aldol condensation reaction of acetylacetone. This unique structure shows superior light-scattering ability, resulting in a power conversion efficiency as high as 11%. This work provides a new synthesis route for TiO2 microspheres, from solid to hollow, and a novel material platform for high-performance solar cells. PMID:26384004

  19. Formulation and application of optimal homotopy asymptotic method to coupled differential-difference equations.

    PubMed

    Ullah, Hakeem; Islam, Saeed; Khan, Ilyas; Shafie, Sharidan; Fiza, Mehreen

    2015-01-01

    In this paper we apply a new analytic approximation technique, the Optimal Homotopy Asymptotic Method (OHAM), to the treatment of coupled differential-difference equations (DDEs). To assess the efficiency and reliability of the method, we consider the Relativistic Toda coupled nonlinear differential-difference equation. The method provides a convenient way to control the convergence of approximate solutions when compared with other solution methods in the literature. The obtained solutions show that OHAM is effective, simple, and explicit.

  20. Formulation and Application of Optimal Homotopy Asymptotic Method to Coupled Differential-Difference Equations

    PubMed Central

    Ullah, Hakeem; Islam, Saeed; Khan, Ilyas; Shafie, Sharidan; Fiza, Mehreen

    2015-01-01

    In this paper we apply a new analytic approximation technique, the Optimal Homotopy Asymptotic Method (OHAM), to the treatment of coupled differential-difference equations (DDEs). To assess the efficiency and reliability of the method, we consider the Relativistic Toda coupled nonlinear differential-difference equation. The method provides a convenient way to control the convergence of approximate solutions when compared with other solution methods in the literature. The obtained solutions show that OHAM is effective, simple, and explicit. PMID:25874457

  1. A new way towards high-efficiency thermally activated delayed fluorescence devices via external heavy-atom effect

    NASA Astrophysics Data System (ADS)

    Zhang, Wenzhi; Jin, Jiangjiang; Huang, Zhi; Zhuang, Shaoqing; Wang, Lei

    2016-07-01

    The thermally activated delayed fluorescence (TADF) mechanism is a significant method that enables harvesting of both triplet and singlet excitons for emission. However, up to now most efforts have been devoted to the relation between singlet-triplet splitting (ΔEST) and fluorescence efficiency, while the significance of spin-orbit coupling (SOC) is usually ignored. In this contribution, a new method is developed to realize high-efficiency TADF-based devices through simple device-structure optimizations. Inserting an ultrathin external heavy-atom (EHA) perturber layer in a desired manner provides a useful means of accelerating the T1 → S1 reverse intersystem crossing (RISC) in TADF molecules without heavily affecting the corresponding S1 → T1 process. Furthermore, this strategy also promotes the utilization of host triplets through the Förster mechanism during host → guest energy transfer (ET) processes, which avoids sole dependence on the Dexter mechanism. Based on this strategy, we have successfully raised the external quantum efficiency (EQE) of 4CzPN-based devices by nearly 38% in comparison to control devices. These findings provide keen insights into the role played by the EHA layer in TADF-based devices, offering valuable guidelines for utilizing TADF dyes that possess a high radiative transition rate but relatively inefficient RISC.

  2. A new approach to optimal selection of services in health care organizations.

    PubMed

    Adolphson, D L; Baird, M L; Lawrence, K D

    1991-01-01

    A new reimbursement policy adopted by Medicare in 1983 caused financial difficulties for many hospitals and health care organizations. Several organizations responded to these difficulties by developing systems to carefully measure their costs of providing services. The purpose of such systems was to provide relevant information about the profitability of hospital services. This paper presents a new method of making hospital service selection decisions: it is based on an optimization model that avoids arbitrary cost allocations as a basis for computing the costs of offering a given service. The new method provides more reliable information about which services are profitable or unprofitable, and it provides an accurate measure of the degree to which a service is profitable or unprofitable. The new method also provides useful information about the sensitivity of the optimal decision to changes in costs and revenues. Specialized algorithms for the optimization model lead to very efficient implementation of the method, even for the largest health care organizations.
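    The flavor of such an optimization model can be sketched as a small linear program with illustrative (invented) margins and capacities; shared resources enter as constraints, so no arbitrary allocation of shared cost to individual services is needed:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Illustrative (invented) numbers: per-unit contribution margins of three
    # services and their usage of two shared resources.
    margin = np.array([120.0, 80.0, 45.0])       # revenue minus direct cost
    usage = np.array([[2.0, 1.0, 0.5],           # nursing hours per unit
                      [1.0, 0.5, 0.25]])         # lab hours per unit
    capacity = np.array([1000.0, 400.0])         # available hours

    # linprog minimizes, so negate the margins to maximize total contribution.
    res = linprog(c=-margin, A_ub=usage, b_ub=capacity,
                  bounds=[(0, None)] * 3, method="highs")
    print(res.x, -res.fun)   # optimal service volumes and total contribution
    ```

    With the HiGHS backend, the constraint duals (`res.ineqlin.marginals`) give exactly the sensitivity-to-capacity information the abstract highlights.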

  3. Accurate and efficient seismic data interpolation in the principal frequency wavenumber domain

    NASA Astrophysics Data System (ADS)

    Wang, Benfeng; Lu, Wenkai

    2017-12-01

    Seismic data irregularity caused by economic limitations, acquisition environmental constraints, or bad trace elimination can degrade the performance of subsequent multi-channel algorithms, such as surface-related multiple elimination (SRME), even though some of them can partially overcome the irregularity. Therefore, accurate interpolation to provide the necessary complete data is a prerequisite, but its wide application is constrained by the large computational burden for huge data volumes, especially in 3D exploration. For accurate and efficient interpolation, the curvelet transform (CT)-based projection onto convex sets (POCS) method in the principal frequency wavenumber (PFK) domain is introduced. The complex-valued PF components can characterize their original signal with high accuracy but are at least half the size, which can help provide a reasonable efficiency improvement. The irregularity of the observed data is transformed into incoherent noise in the PFK domain, and curvelet coefficients may be sparser when the CT is performed on the PFK-domain data, enhancing the interpolation accuracy. The performance of the POCS-based algorithms using the complex-valued CT in the time space (TX), principal frequency space, and PFK domains is compared. Numerical examples on synthetic and field data demonstrate the validity and effectiveness of the proposed method. With less computational burden, the proposed method achieves a better interpolation result, and it can easily be extended to higher dimensions.
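    The POCS iteration can be sketched in 1-D with an FFT standing in for the curvelet transform (an assumption for brevity; the paper's transform and PFK-domain handling are more elaborate):

    ```python
    import numpy as np

    # Band-limited 1-D "seismic" toy signal with ~40% of traces missing.
    rng = np.random.default_rng(1)
    n = 256
    t = np.arange(n)
    signal = np.sin(2 * np.pi * 5 * t / n) + 0.5 * np.sin(2 * np.pi * 12 * t / n)
    mask = rng.random(n) > 0.4
    observed = signal * mask

    # POCS: alternate a sparsity projection (hard-threshold the transform
    # coefficients; FFT here, curvelets in the paper) with a data-consistency
    # projection (reinsert the known traces).
    x = observed.copy()
    for _ in range(100):
        coeff = np.fft.fft(x)
        keep = np.quantile(np.abs(coeff), 0.95)   # keep the largest 5%
        coeff[np.abs(coeff) < keep] = 0.0
        x = np.fft.ifft(coeff).real
        x[mask] = observed[mask]

    err = np.linalg.norm(x - signal) / np.linalg.norm(signal)
    print(err)   # small relative reconstruction error on this toy example
    ```

    Random decimation spreads as incoherent leakage across the spectrum, so hard thresholding retains the true coherent components; production implementations typically shrink the threshold over iterations rather than fixing a quantile.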

  4. Optimization of segmented thermoelectric generator using Taguchi and ANOVA techniques.

    PubMed

    Kishore, Ravi Anant; Sanghadasa, Mohan; Priya, Shashank

    2017-12-01

    Recent studies have demonstrated that segmented thermoelectric generators (TEGs) can operate over a large thermal gradient and thus provide better performance (reported efficiency up to 11%) compared to traditional TEGs comprising a single thermoelectric (TE) material. However, segmented TEGs are still in early stages of development due to the inherent complexity in their design optimization and manufacturability. In this study, we demonstrate physics-based numerical techniques along with analysis of variance (ANOVA) and the Taguchi optimization method for optimizing the performance of segmented TEGs. We have considered a comprehensive set of design parameters, such as geometrical dimensions of the p-n legs, height of segmentation, hot-side temperature, and load resistance, in order to optimize the output power and efficiency of segmented TEGs. Using state-of-the-art TE material properties and appropriate statistical tools, we provide a near-optimum TEG configuration with only 25 experiments, as compared to the 3125 experiments needed by conventional optimization methods. The effect of environmental factors on the optimization of segmented TEGs is also studied. Taguchi results are validated against the results obtained using the traditional full factorial optimization technique, and a TEG configuration for simultaneous optimization of power and efficiency is obtained.
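    A Taguchi-style main-effects analysis can be sketched on a standard L9 orthogonal array with a synthetic response (not the paper's TEG model); each factor's preferred level is read off the mean signal-to-noise ratio per level:

    ```python
    import numpy as np

    # Standard L9 orthogonal array: four factors at three levels in nine runs
    # (0-indexed levels). The response below is synthetic and illustrative.
    L9 = np.array([[0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
                   [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
                   [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0]])
    rng = np.random.default_rng(5)
    y = 10 + 3 * L9[:, 0] + 1 * L9[:, 1] + rng.normal(0, 0.1, 9)  # e.g. power

    # Larger-the-better S/N ratio (single replicate per run).
    sn = -10 * np.log10(1.0 / y**2)

    # Main effects: average S/N per level of each factor; pick the best level.
    best = [int(np.argmax([sn[L9[:, f] == lev].mean() for lev in range(3)]))
            for f in range(4)]
    print(best)   # factors 0 and 1 should prefer their highest level (2)
    ```

    Nine runs here stand in for the 25 experiments of the paper's larger array; the orthogonality of the array is what lets per-factor level averages isolate main effects.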

  5. A Reduced-Order Model for Efficient Simulation of Synthetic Jet Actuators

    NASA Technical Reports Server (NTRS)

    Yamaleev, Nail K.; Carpenter, Mark H.

    2003-01-01

    A new reduced-order model of multidimensional synthetic jet actuators that combines the accuracy and conservation properties of full numerical simulation methods with the efficiency of simplified zero-order models is proposed. The multidimensional actuator is simulated by solving the time-dependent compressible quasi-1-D Euler equations, while the diaphragm is modeled as a moving boundary. The governing equations are approximated with a fourth-order finite difference scheme on a moving mesh such that one of the mesh boundaries coincides with the diaphragm. The reduced-order model of the actuator has several advantages. In contrast to the zero-order models, this approach provides conservation of mass, momentum, and energy. Furthermore, the new method is computationally much more efficient than a multidimensional Navier-Stokes simulation of the actuator cavity flow, while providing practically the same accuracy in the exterior flowfield. The most distinctive feature of the present model is its ability to predict the resonance characteristics of synthetic jet actuators; this is not practical when using the 3-D models because of the computational cost involved. Numerical results demonstrating the accuracy of the new reduced-order model and its limitations are presented.

  6. Beyond Standard Molecular Dynamics: Investigating the Molecular Mechanisms of G Protein-Coupled Receptors with Enhanced Molecular Dynamics Methods

    PubMed Central

    Johnston, Jennifer M.

    2014-01-01

    The majority of biological processes mediated by G Protein-Coupled Receptors (GPCRs) take place on timescales that are not conveniently accessible to standard molecular dynamics (MD) approaches, notwithstanding the current availability of specialized parallel computer architectures, and efficient simulation algorithms. Enhanced MD-based methods have started to assume an important role in the study of the rugged energy landscape of GPCRs by providing mechanistic details of complex receptor processes such as ligand recognition, activation, and oligomerization. We provide here an overview of these methods in their most recent application to the field. PMID:24158803

  7. Method for high specific bioproductivity of α,ω-alkanedicarboxylic acids

    DOEpatents

    Mobley, David Paul; Shank, Gary Keith

    2000-01-01

    This invention provides a low-cost method of producing α,ω-alkanedicarboxylic acids. Particular bioconversion conditions result in highly efficient conversion of fatty acid, fatty acid ester, or alkane substrates to diacids. Candida tropicalis AR40 or similar yeast strains are grown in a medium containing a carbon source and a nitrogen source at a temperature of 31 °C to 38 °C, while additional carbon source is continuously added, until maximum cell growth is attained. Within 0-3 hours of this point, substrate is added to the culture to initiate conversion. An α,ω-alkanedicarboxylic acid made according to this method is also provided.

  8. An efficient and sensitive method for preparing cDNA libraries from scarce biological samples

    PubMed Central

    Sterling, Catherine H.; Veksler-Lublinsky, Isana; Ambros, Victor

    2015-01-01

    The preparation and high-throughput sequencing of cDNA libraries from samples of small RNA is a powerful tool to quantify known small RNAs (such as microRNAs) and to discover novel RNA species. Interest in identifying the small RNA repertoire present in tissues and in biofluids has grown substantially with the findings that small RNAs can serve as indicators of biological conditions and disease states. Here we describe a novel and straightforward method to clone cDNA libraries from small quantities of input RNA. This method permits the generation of cDNA libraries from sub-picogram quantities of RNA robustly, efficiently and reproducibly. We demonstrate that the method provides a significant improvement in sensitivity compared to previous cloning methods while maintaining reproducible identification of diverse small RNA species. This method should have widespread applications in a variety of contexts, including biomarker discovery from scarce samples of human tissue or body fluids. PMID:25056322

  9. Calculation of the Maxwell stress tensor and the Poisson-Boltzmann force on a solvated molecular surface using hypersingular boundary integrals

    NASA Astrophysics Data System (ADS)

    Lu, Benzhuo; Cheng, Xiaolin; Hou, Tingjun; McCammon, J. Andrew

    2005-08-01

    The electrostatic interaction among molecules solvated in ionic solution is governed by the Poisson-Boltzmann equation (PBE). Here the hypersingular integral technique is used in a boundary element method (BEM) for the three-dimensional (3D) linear PBE to calculate the Maxwell stress tensor on the solvated molecular surface, and then the PB forces and torques can be obtained from the stress tensor. Compared with the variational method (also in a BEM frame) that we proposed recently, this method provides an even more efficient way to calculate the full intermolecular electrostatic interaction force, especially for macromolecular systems. Thus, it may be more suitable for the application of Brownian dynamics methods to study the dynamics of protein/protein docking as well as the assembly of large 3D architectures involving many diffusing subunits. The method has been tested on two simple cases to demonstrate its reliability and efficiency, and also compared with our previous variational method used in BEM.

  10. Solution of nonlinear time-dependent PDEs through componentwise approximation of matrix functions

    NASA Astrophysics Data System (ADS)

    Cibotarica, Alexandru; Lambers, James V.; Palchak, Elisabeth M.

    2016-09-01

    Exponential propagation iterative (EPI) methods provide an efficient approach to the solution of large stiff systems of ODEs, compared to standard integrators. However, the bulk of the computational effort in these methods is due to products of matrix functions and vectors, which can become very costly at high resolution due to an increase in the number of Krylov projection steps needed to maintain accuracy. In this paper, it is proposed to modify EPI methods by using Krylov subspace spectral (KSS) methods, instead of standard Krylov projection methods, to compute products of matrix functions and vectors. Numerical experiments demonstrate that this modification causes the number of Krylov projection steps to become bounded independently of the grid size, thus dramatically improving efficiency and scalability. As a result, for each test problem featured, as the total number of grid points increases, the growth in computation time is just below linear, while other methods achieved this only on selected test problems or not at all.
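    The core kernel, approximating a matrix-function-vector product from a Krylov subspace, can be sketched with a basic Arnoldi iteration (shown here for exp(A)v; KSS methods refine this step):

    ```python
    import numpy as np
    from scipy.linalg import expm

    def arnoldi_expv(A, v, m):
        """Approximate exp(A) @ v from an m-dimensional Krylov subspace:
        Arnoldi builds an orthonormal basis V and Hessenberg H, and
        exp(A) v ~= ||v|| * V exp(H) e1."""
        n = len(v)
        V = np.zeros((n, m + 1))
        H = np.zeros((m + 1, m))
        beta = np.linalg.norm(v)
        V[:, 0] = v / beta
        for j in range(m):
            w = A @ V[:, j]
            for i in range(j + 1):              # modified Gram-Schmidt
                H[i, j] = V[:, i] @ w
                w -= H[i, j] * V[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] < 1e-12:             # happy breakdown: subspace exact
                m = j + 1
                break
            V[:, j + 1] = w / H[j + 1, j]
        return beta * (V[:, :m] @ expm(H[:m, :m])[:, 0])

    # Stiff test operator: tridiagonal 1-D diffusion Laplacian.
    n = 200
    A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1))
    v = np.random.default_rng(2).random(n)

    approx = arnoldi_expv(A, v, m=30)
    exact = expm(A) @ v
    print(np.linalg.norm(approx - exact) / np.linalg.norm(exact))
    ```

    Only the small m-by-m Hessenberg matrix ever sees `expm`; the growth of the required m with grid resolution is exactly the cost that the paper's KSS modification bounds.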

  11. Screening of groundwater remedial alternatives for brownfield sites: a comprehensive method integrated MCDA with numerical simulation.

    PubMed

    Li, Wei; Zhang, Min; Wang, Mingyu; Han, Zhantao; Liu, Jiankai; Chen, Zhezhou; Liu, Bo; Yan, Yan; Liu, Zhu

    2018-06-01

    Brownfield site pollution and remediation is an urgent environmental issue worldwide. The screening and assessment of remedial alternatives is especially complex owing to the multiple criteria involved, spanning technique, economy, and policy. To help decision-makers select remedial alternatives efficiently, the criteria framework developed by the U.S. EPA is improved, and a comprehensive method that integrates multiple criteria decision analysis (MCDA) with numerical simulation is proposed in this paper. The criteria framework is modified and classified into three categories: qualitative, semi-quantitative, and quantitative criteria. The MCDA method AHP-PROMETHEE (analytical hierarchy process-preference ranking organization method for enrichment evaluation) is used to determine the priority ranking of the remedial alternatives, and solute transport simulation is conducted to assess remedial efficiency. A case study is presented to demonstrate the screening method at a brownfield site in Cangzhou, northern China. The results show that the systematic method provides a reliable way to quantify the priority of the remedial alternatives.
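    The PROMETHEE II ranking step can be sketched with the usual (0/1) preference function on invented alternative scores and weights (the paper's criteria set and AHP-derived weights are not reproduced):

    ```python
    import numpy as np

    # Invented scores of three remedial alternatives on three criteria
    # (larger is better) and illustrative weights (e.g. from AHP).
    scores = np.array([[7.0, 3.0, 5.0],    # alternative A
                       [6.0, 4.0, 6.0],    # alternative B
                       [8.0, 2.0, 4.0]])   # alternative C
    weights = np.array([0.6, 0.3, 0.1])

    n = len(scores)
    # Pairwise preference pi[a, b]: weighted sum of criteria where a beats b
    # (the "usual" 0/1 preference function).
    d = scores[:, None, :] - scores[None, :, :]
    pi = ((d > 0) * weights).sum(axis=2)

    # PROMETHEE II net flow: outflow minus inflow, averaged over rivals.
    phi = pi.sum(axis=1) / (n - 1) - pi.sum(axis=0) / (n - 1)
    print(phi.round(3), int(np.argmax(phi)))   # alternative C ranks first here
    ```

    Real applications usually replace the 0/1 preference function with linear or Gaussian forms with indifference/preference thresholds per criterion; the net-flow arithmetic is unchanged.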

  12. Advances in Significance Testing for Cluster Detection

    NASA Astrophysics Data System (ADS)

    Coleman, Deidra Andrea

    Over the past two decades, much attention has been given to data-driven project goals such as the Human Genome Project and the development of syndromic surveillance systems. A major component of these types of projects is analyzing the abundance of data. Detecting clusters within the data can be beneficial, as it can lead to the identification of specified sequences of DNA nucleotides that are related to important biological functions or to the locations of epidemics such as disease outbreaks or bioterrorism attacks. Cluster detection techniques require efficient and accurate hypothesis testing procedures. In this dissertation, we improve upon the hypothesis testing procedures for cluster detection by enhancing distributional theory and providing an alternative method for spatial cluster detection using syndromic surveillance data. In Chapter 2, we provide an efficient method to compute the exact distribution of the number and coverage of h-clumps of a collection of words. This method involves defining a Markov chain using a minimal deterministic automaton to reduce the number of states needed for computation. We allow words of the collection to contain other words of the collection, making the method more general. We use our method to compute the distributions of the number and coverage of h-clumps in the Chi motif of H. influenzae. In Chapter 3, we provide an efficient algorithm to compute the exact distribution of multiple window discrete scan statistics for higher-order, multi-state Markovian sequences. This algorithm involves defining a Markov chain to efficiently keep track of probabilities needed to compute p-values of the statistic. We use our algorithm to identify cases where the available approximation does not perform well. We also use our algorithm to detect unusual clusters of made free throw shots by National Basketball Association players during the 2009-2010 regular season.
In Chapter 4, we give a procedure to detect outbreaks using syndromic surveillance data while controlling the Bayesian False Discovery Rate (BFDR). The procedure entails choosing an appropriate Bayesian model that captures the spatial dependency inherent in epidemiological data and considers all days of interest, selecting a test statistic based on a chosen measure that provides the magnitude of the maximal spatial cluster for each day, and identifying a cutoff value that controls the BFDR for rejecting the collective null hypothesis of no outbreak over a collection of days for a specified region. We use our procedure to analyze botulism-like syndrome data collected by the North Carolina Disease Event Tracking and Epidemiologic Collection Tool (NC DETECT).
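    The scan-statistic idea of Chapter 3 can be illustrated with a single fixed window and a Monte Carlo null in place of the exact Markov-chain computation (made/missed shots as a toy Bernoulli sequence):

    ```python
    import numpy as np

    def scan_stat(x, w):
        """Discrete scan statistic: maximum count in any window of length w."""
        return np.convolve(x, np.ones(w, dtype=int), mode="valid").max()

    # A made/missed free-throw sequence with an implanted streak of 10 makes.
    rng = np.random.default_rng(6)
    n, w, p = 200, 10, 0.5
    shots = (rng.random(n) < p).astype(int)
    shots[100:110] = 1
    s_obs = scan_stat(shots, w)          # the full implanted window gives 10

    # Monte Carlo p-value under an i.i.d. Bernoulli(p) null.
    sims = np.array([scan_stat((rng.random(n) < p).astype(int), w)
                     for _ in range(2000)])
    pval = (sims >= s_obs).mean()
    print(s_obs, pval)
    ```

    The dissertation's contribution is computing this null distribution exactly, for multiple windows and Markov-dependent sequences, where simulation becomes expensive or imprecise.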

  13. A rapid, efficient, and economic device and method for the isolation and purification of mouse islet cells

    PubMed Central

    Zongyi, Yin; Funian, Zou; Hao, Li; Ying, Cheng; Jialin, Zhang

    2017-01-01

    A rapid, efficient, and economical method for the isolation and purification of islets has been pursued by numerous islet-related researchers. In this study, we compared the advantages and disadvantages of our developed patented method with those of commonly used conventional methods (Ficoll-400, 1077, and handpicking methods). Cell viability was assayed using Trypan blue, cell purity and yield were assayed using diphenylthiocarbazone, and islet function was assayed using acridine orange/ethidium bromide staining and enzyme-linked immunosorbent assay-glucose stimulation testing 4 days after cultivation. The results showed that our islet isolation and purification method required 12 ± 3 min, which was significantly shorter than the time required in the Ficoll-400, 1077, and HPU groups (34 ± 3, 41 ± 4, and 30 ± 4 min, respectively; P < 0.05). There was no significant difference in islet viability among the four groups. The islet purity, function, yield, and cost of our method were superior to those of the Ficoll-400 and 1077 methods, but inferior to the handpicking method. However, the handpicking method may cause wrist injury and visual impairment in researchers during large-scale islet isolation (>1000 islets). In summary, the MCT method is a rapid, efficient, and economic method for isolating and purifying murine islet cell clumps. This method overcomes some of the shortcomings of conventional methods, showing a relatively higher quality and yield of islets within a shorter duration at a lower cost. Therefore, the current method provides researchers with an alternative option for islet isolation and should be widely generalized. PMID:28207765

  14. A rapid, efficient, and economic device and method for the isolation and purification of mouse islet cells.

    PubMed

    Zongyi, Yin; Funian, Zou; Hao, Li; Ying, Cheng; Jialin, Zhang; Baifeng, Li

    2017-01-01

    A rapid, efficient, and economical method for the isolation and purification of islets has been pursued by numerous islet-related researchers. In this study, we compared the advantages and disadvantages of our developed patented method with those of commonly used conventional methods (Ficoll-400, 1077, and handpicking methods). Cell viability was assayed using Trypan blue, cell purity and yield were assayed using diphenylthiocarbazone, and islet function was assayed using acridine orange/ethidium bromide staining and enzyme-linked immunosorbent assay-glucose stimulation testing 4 days after cultivation. The results showed that our islet isolation and purification method required 12 ± 3 min, which was significantly shorter than the time required in the Ficoll-400, 1077, and HPU groups (34 ± 3, 41 ± 4, and 30 ± 4 min, respectively; P < 0.05). There was no significant difference in islet viability among the four groups. The islet purity, function, yield, and cost of our method were superior to those of the Ficoll-400 and 1077 methods, but inferior to the handpicking method. However, the handpicking method may cause wrist injury and visual impairment in researchers during large-scale islet isolation (>1000 islets). In summary, the MCT method is a rapid, efficient, and economic method for isolating and purifying murine islet cell clumps. This method overcomes some of the shortcomings of conventional methods, showing a relatively higher quality and yield of islets within a shorter duration at a lower cost. Therefore, the current method provides researchers with an alternative option for islet isolation and should be widely generalized.

  15. A "hydrokinematic" method of measuring the glide efficiency of a human swimmer.

    PubMed

    Naemi, Roozbeh; Sanders, Ross H

    2008-12-01

    The aim of this study was to develop and test a method of quantifying glide efficiency, defined as the ability of the body to maintain its velocity over time and to minimize deceleration through a rectilinear glide. The glide efficiency should be determined in a way that accounts for both the inertial and resistive characteristics of the gliding body as well as the instantaneous velocity. A displacement function (parametric curve) was obtained from the equation of motion of the body during a horizontal rectilinear glide. The values of the parameters in the displacement curve that provide the best fit to the displacement-time data of a body during a rectilinear horizontal glide represent the glide factor and the initial velocity of the particular glide interval. The glide factor is a measure of glide efficiency and indicates the ability of the body to minimize deceleration at each corresponding velocity. The glide efficiency depends on the hydrodynamic characteristics of the body, which are influenced by the body's shape as well as by its size. To distinguish the effects of size and shape on glide efficiency, a size-related glide constant and a shape-related glide coefficient were determined as separate entities. The glide factor is the product of these two parameters. The goodness-of-fit statistics indicated that the representative displacement function found for each glide interval closely represents the real displacement data of a body in a rectilinear horizontal glide. The accuracy of the method was indicated by a relative standard error of calculation of less than 2.5%. The method was also able to distinguish between subjects in their glide efficiency. It was found that the glide factor increased with decreasing velocity. The glide coefficient also increased with decreasing Reynolds number. The method is sufficiently accurate to distinguish between individual swimmers in terms of their glide efficiency. 
The separation of the glide factor into a size-related glide constant and a shape-related glide coefficient enabled the effects of size and shape to be quantified.
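    One plausible form of the displacement function, assuming deceleration dominated by drag proportional to velocity squared (the paper's exact parametric curve may differ), can be fitted to displacement-time data with SciPy:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def glide_displacement(t, G, v0):
        """Displacement of a body decelerating under drag ~ v**2:
        x(t) = G * ln(1 + v0 * t / G). G (inertia over resistance) plays
        the role of the glide factor; larger G means slower deceleration."""
        return G * np.log1p(v0 * t / G)

    # Synthetic glide: true glide factor 4.0 m, initial velocity 2.0 m/s,
    # plus small measurement noise on displacement.
    rng = np.random.default_rng(3)
    t = np.linspace(0.0, 3.0, 60)
    x = glide_displacement(t, 4.0, 2.0) + rng.normal(0.0, 0.005, t.size)

    (G_hat, v0_hat), _ = curve_fit(glide_displacement, t, x, p0=(1.0, 1.0))
    print(G_hat, v0_hat)   # close to the true values (4.0, 2.0)
    ```

    The fitted parameters are exactly the two quantities the abstract extracts per glide interval: the glide factor and the initial velocity.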

  16. Efficient packet transportation on complex networks with nonuniform node capacity distribution

    NASA Astrophysics Data System (ADS)

    He, Xuan; Niu, Kai; He, Zhiqiang; Lin, Jiaru; Jiang, Zhong-Yuan

    2015-03-01

    Given that node delivery capacity may not be uniformly distributed in many realistic networks, we present a node delivery capacity distribution in which each node's capacity is composed of a uniform fraction and a degree-related proportion. Based on this capacity distribution, we construct a novel routing mechanism, the efficient weighted routing (EWR) strategy, to enhance network traffic capacity and transportation efficiency. Compared with the shortest path routing and efficient routing strategies, the EWR achieves the highest traffic capacity. Extensive simulations of average path length, network diameter, maximum efficient betweenness, average efficient betweenness, average travel time, and average traffic load indicate that the EWR is a very effective routing method. The idea behind this routing mechanism offers good insight for network science research, and the work has prospective practical use in real complex systems such as the Internet.
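    In the spirit of degree-weighted efficient routing, which EWR builds on, hub avoidance can be sketched with a node-cost path search; the k^theta entering cost below is an illustrative assumption, not the paper's exact capacity-aware EWR weight:

    ```python
    import networkx as nx

    def efficient_path(G, s, t, theta=1.0):
        """Route minimizing the sum of degree(v)**theta over visited nodes,
        so high-degree hubs are penalized (a stand-in for EWR's
        capacity-aware weighting)."""
        def cost(u, v, d):
            return G.degree(v) ** theta   # cost of entering node v
        return nx.dijkstra_path(G, s, t, weight=cost)

    # Toy network: hub node 0 connected to all, plus a peripheral ring.
    G = nx.star_graph(6)
    G.add_edges_from([(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 1)])

    print(nx.shortest_path(G, 1, 3))            # hop count may route via hub 0
    print(efficient_path(G, 1, 3, theta=2.0))   # -> [1, 2, 3], avoiding the hub
    ```

    Spreading traffic off the hub is what raises the onset of congestion (the traffic capacity) in such strategies, at the price of slightly longer average paths.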

  17. A Practical, Robust and Fast Method for Location Localization in Range-Based Systems.

    PubMed

    Huang, Shiping; Wu, Zhifeng; Misra, Anil

    2017-12-11

    Location localization technology is used in a number of industrial and civil applications. Real-time localization accuracy is highly dependent on the quality of the distance measurements and the efficiency of solving the localization equations. In this paper, we provide a novel approach to solve the nonlinear localization equations efficiently while simultaneously eliminating bad measurement data in range-based systems. A geometric intersection model was developed to narrow the target search area, where Newton's Method and the Direct Search Method are used to search for the unknown position. Not only does the geometric intersection model offer a small bounded search domain for Newton's Method and the Direct Search Method, but it can also self-correct bad measurement data. The Direct Search Method is useful for coarse localization or a small target search domain, while Newton's Method can be used for accurate localization. For accurate localization, the proposed Modified Newton's Method (MNM) addresses the challenges of avoiding local extrema, singularities, and poor initial value choices. The applicability and robustness of the developed method have been demonstrated by experiments with an indoor system.
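    A Newton-type solve of the range equations can be sketched as a Gauss-Newton iteration (the paper's Modified Newton's Method adds safeguards for extrema, singularities, and initialization that are omitted here):

    ```python
    import numpy as np

    def locate(anchors, distances, x0, iters=20):
        """Gauss-Newton refinement of a position from range measurements:
        minimizes sum_i (||x - a_i|| - d_i)**2."""
        x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            diff = x - anchors                  # (n_anchors, 2)
            norms = np.linalg.norm(diff, axis=1)
            r = norms - distances               # range residuals
            J = diff / norms[:, None]           # Jacobian of ||x - a_i||
            step, *_ = np.linalg.lstsq(J, r, rcond=None)
            x -= step
            if np.linalg.norm(step) < 1e-12:
                break
        return x

    anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
    true_pos = np.array([3.0, 4.0])
    distances = np.linalg.norm(anchors - true_pos, axis=1)  # noise-free ranges

    est = locate(anchors, distances, x0=[5.0, 5.0])
    print(est)   # -> approximately [3.0, 4.0]
    ```

    With four anchors and two unknowns the system is overdetermined, which is what lets a residual check flag an inconsistent (bad) range measurement before or after the solve.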

  18. Standard Transistor Array (STAR). Volume 1: Placement technique

    NASA Technical Reports Server (NTRS)

    Cox, G. W.; Caroll, B. D.

    1979-01-01

    A large-scale integration (LSI) technology, the standard transistor array uses a prefabricated understructure of transistors and a comprehensive library of digital logic cells to allow efficient fabrication of semicustom digital LSI circuits. The cell placement technique for this technology involves formation of a one-dimensional cell layout and "folding" of the one-dimensional placement onto the chip. It was found that, by use of various folding methods, high-quality chip layouts can be achieved. Methods developed to measure the "goodness" of the generated placements include efficient means for estimating channel usage requirements and for via counting. The placement and rating techniques were incorporated into a placement program (CAPSTAR). By means of repetitive use of the folding methods and simple placement improvement strategies, this program provides near-optimum placements in a reasonable amount of time. The program was tested on several typical LSI circuits to provide performance comparisons both with respect to input parameters and with respect to the performance of other placement techniques. The results of this testing indicate that near-optimum placements can be achieved by use of the procedures without incurring severe time penalties.

  19. Fluid extraction

    DOEpatents

    Wai, Chien M.; Laintz, Kenneth E.

    1999-01-01

    A method of extracting metalloid and metal species from a solid or liquid material by exposing the material to a supercritical fluid solvent containing a chelating agent is described. The chelating agent forms chelates that are soluble in the supercritical fluid to allow removal of the species from the material. In preferred embodiments, the extraction solvent is supercritical carbon dioxide and the chelating agent is a fluorinated β-diketone. In especially preferred embodiments the extraction solvent is supercritical carbon dioxide, and the chelating agent comprises a fluorinated β-diketone and a trialkyl phosphate, or a fluorinated β-diketone and a trialkylphosphine oxide. Although a trialkyl phosphate can extract lanthanides and actinides from acidic solutions, a binary mixture comprising a fluorinated β-diketone and a trialkyl phosphate or a trialkylphosphine oxide tends to enhance the extraction efficiencies for actinides and lanthanides. The method provides an environmentally benign process for removing contaminants from industrial waste without using acids or biologically harmful solvents. The method is particularly useful for extracting actinides and lanthanides from acidic solutions. The chelate and supercritical fluid can be regenerated, and the contaminant species recovered, to provide an economic, efficient process.

  20. Developing Online Recruitment and Retention Methods for HIV Prevention Research Among Adolescent Males Who Are Interested in Sex with Males: Interviews with Adolescent Males

    PubMed Central

    Ramirez, Jaime J; Carey, Michael P

    2017-01-01

    Background Adolescent males interested in sex with males (AMSM) are an important audience for HIV prevention interventions, but they are difficult to reach due to their age and social stigma. Objective We aim to identify efficient methods to recruit and retain AMSM in online research. Methods Interviews with 14-to-18-year-old AMSM (N=16) were conducted at 2017 Pride events in Boston, MA and Providence, RI. Results Participants reported that (1) social media platforms are viable recruitment venues; (2) recruitment advertisements should describe the study using colorful/bright pictures, familiar words, and information about compensation; (3) surveys should be <20 minutes in length; (4) modest compensation (eg, email gift card, US $10 to $20) was preferred; and (5) communications that remind participants about the length and content of surveys, and compensation, should be sent between study activities to increase retention. Conclusions Soliciting input from AMSM provides critical guidance regarding recruitment and retention procedures to increase the efficiency of HIV prevention research for this at-risk group. PMID:29269343

  1. Projected regression method for solving Fredholm integral equations arising in the analytic continuation problem of quantum physics

    NASA Astrophysics Data System (ADS)

    Arsenault, Louis-François; Neuberg, Richard; Hannah, Lauren A.; Millis, Andrew J.

    2017-11-01

    We present a supervised machine learning approach to the inversion of Fredholm integrals of the first kind as they arise, for example, in the analytic continuation problem of quantum many-body physics. The approach provides a natural regularization for the ill-conditioned inverse of the Fredholm kernel, as well as an efficient and stable treatment of constraints. The key observation is that the stability of the forward problem permits the construction of a large database of outputs for physically meaningful inputs. Applying machine learning to this database generates a regression function of controlled complexity, which returns approximate solutions for previously unseen inputs; the approximate solutions are then projected onto the subspace of functions satisfying relevant constraints. Under standard error metrics the method performs as well or better than the Maximum Entropy method for low input noise and is substantially more robust to increased input noise. We suggest that the methodology will be similarly effective for other problems involving a formally ill-conditioned inversion of an integral operator, provided that the forward problem can be efficiently solved.

  2. An efficient higher order family of root finders

    NASA Astrophysics Data System (ADS)

    Petkovic, Ljiljana D.; Rancic, Lidija; Petkovic, Miodrag S.

    2008-06-01

    A one-parameter family of iterative methods for the simultaneous approximation of simple complex zeros of a polynomial, based on the cubically convergent Hansen-Patrick family, is studied. We show that the convergence order of the basic fourth-order family can be increased to five and six using Newton's and Halley's corrections, respectively. Since these corrections use already calculated values, the computational efficiency of the accelerated methods is significantly increased. Further acceleration is achieved by applying the Gauss-Seidel approach (single-step mode). One of the most important problems in solving nonlinear equations, the construction of initial conditions that provide both guaranteed and fast convergence, is considered for the proposed accelerated family. These conditions are computationally verifiable; they depend only on the polynomial's coefficients, its degree, and the initial approximations, which is of practical importance. Some modifications of the considered family, providing the computation of multiple zeros of polynomials and simple zeros of a wide class of analytic functions, are also studied. Numerical examples demonstrate the convergence properties of the presented family of root-finding methods.
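
    The simultaneous-approximation idea can be illustrated with the classical Durand-Kerner (Weierstrass) iteration, a simpler total-step method than the Hansen-Patrick family studied here (a sketch of the general technique, not the paper's accelerated family; a single-step Gauss-Seidel variant would reuse each updated zero immediately):

```python
def durand_kerner(coeffs, tol=1e-12, max_iter=200):
    """Simultaneously approximate all simple zeros of a monic polynomial.
    coeffs: coefficients, highest degree first, leading coefficient 1."""
    n = len(coeffs) - 1

    def p(z):
        # Horner evaluation of the polynomial at z
        acc = 0j
        for c in coeffs:
            acc = acc * z + c
        return acc

    # standard distinct starting points on a spiral
    z = [(0.4 + 0.9j) ** k for k in range(n)]
    for _ in range(max_iter):
        new = []
        for i in range(n):
            # Weierstrass correction: p(z_i) / prod_{j != i} (z_i - z_j)
            denom = 1.0 + 0j
            for j in range(n):
                if j != i:
                    denom *= z[i] - z[j]
            new.append(z[i] - p(z[i]) / denom)
        shift = max(abs(new[i] - z[i]) for i in range(n))
        z = new
        if shift < tol:
            break
    return z
```

    For example, `durand_kerner([1.0, -6.0, 11.0, -6.0])` recovers the zeros 1, 2, and 3 of z³ - 6z² + 11z - 6.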

  3. Aerodynamic analysis of Pegasus - Computations vs reality

    NASA Technical Reports Server (NTRS)

    Mendenhall, Michael R.; Lesieutre, Daniel J.; Whittaker, C. H.; Curry, Robert E.; Moulton, Bryan

    1993-01-01

    Pegasus, a three-stage, air-launched, winged space booster was developed to provide fast and efficient commercial launch services for small satellites. The aerodynamic design and analysis of Pegasus was conducted without benefit of wind tunnel tests, using only computational aerodynamic and fluid dynamic methods. Flight test data from the first two operational flights of Pegasus are now available, and they provide an opportunity to validate the accuracy of the predicted pre-flight aerodynamic characteristics. Comparisons of measured and predicted flight characteristics are presented and discussed. Results show that the computational methods provide reasonable aerodynamic design information with acceptable margins. Post-flight analyses illustrate certain areas in which improvements are desired.

  4. Vacuum Powder Injector

    NASA Technical Reports Server (NTRS)

    Working, Dennis C.

    1991-01-01

    Method developed to provide uniform impregnation of bundles of carbon-fiber tow with low-solubility, high-melt-flow polymer powder materials to produce composite prepregs. Vacuum powder injector expands bundle of fiber tow, applies polymer to it, then compresses bundle to hold powder. System provides for control of amount of polymer on bundle. Crystallinity of polymer maintained by controlled melt on takeup system. All powder entrapped, and most collected for reuse. Process provides inexpensive and efficient method for making composite materials. Allows for coating of any bundle of fine fibers with powders. Shows high potential for making prepregs of improved materials and for preparation of high-temperature, high-modulus, reinforced thermoplastics.

  5. Frequency Splitting Analysis and Compensation Method for Inductive Wireless Powering of Implantable Biosensors.

    PubMed

    Schormans, Matthew; Valente, Virgilio; Demosthenous, Andreas

    2016-08-04

    Inductive powering for implanted medical devices, such as implantable biosensors, is a safe and effective technique that allows power to be delivered to implants wirelessly, avoiding the use of transcutaneous wires or implanted batteries. Wireless powering is very sensitive to a number of link parameters, including coil distance, alignment, shape, and load conditions. The optimum drive frequency of an inductive link varies depending on the coil spacing and load. This paper presents an optimum frequency tracking (OFT) method, in which an inductive power link is driven at a frequency that is maintained at an optimum value to ensure that the link is working at resonance, and the output voltage is maximised. The method is shown to provide significant improvements in maintained secondary voltage and system efficiency for a range of loads when the link is overcoupled. The OFT method does not require the use of variable capacitors or inductors. When tested at frequencies around a nominal frequency of 5 MHz, the OFT method provides up to a twofold efficiency improvement compared to a fixed frequency drive. The system can be readily interfaced with passive implants or implantable biosensors, and lends itself to interfacing with designs such as distributed implanted sensor networks, where each implant is operating at a different frequency.
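
    The optimum-frequency-tracking behaviour can be sketched as generic perturb-and-observe hill climbing on a measured output voltage (an illustrative sketch only; the paper's OFT method is a hardware implementation, and the link model and 5.3 MHz optimum below are hypothetical):

```python
def track_optimum_frequency(measure, f_start, step=0.1, iters=50):
    """Perturb-and-observe search: nudge the drive frequency in whichever
    direction increases the measured output, halving the step size when
    neither direction helps."""
    f = f_start
    for _ in range(iters):
        v0 = measure(f)
        if measure(f + step) > v0:
            f += step
        elif measure(f - step) > v0:
            f -= step
        else:
            step /= 2.0
    return f

def voltage(f_mhz):
    # toy link model: secondary voltage peaked at a hypothetical 5.3 MHz optimum
    return 1.0 / (1.0 + (f_mhz - 5.3) ** 2)

f_opt = track_optimum_frequency(voltage, f_start=5.0)
```

    Starting from a nominal 5.0 MHz drive, the loop settles on the modelled resonance; in hardware the `measure` callback would be replaced by a voltage reading from the link.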

  6. Protein Structure Classification and Loop Modeling Using Multiple Ramachandran Distributions.

    PubMed

    Najibi, Seyed Morteza; Maadooliat, Mehdi; Zhou, Lan; Huang, Jianhua Z; Gao, Xin

    2017-01-01

    Recently, the study of protein structures using angular representations has attracted much attention among structural biologists. The main challenge is how to efficiently model the continuous conformational space of the protein structures based on the differences and similarities between different Ramachandran plots. Despite the presence of statistical methods for modeling angular data of proteins, there is still a substantial need for more sophisticated and faster statistical tools to model the large-scale circular datasets. To address this need, we have developed a nonparametric method for collective estimation of multiple bivariate density functions for a collection of populations of protein backbone angles. The proposed method takes into account the circular nature of the angular data using trigonometric splines, which is more efficient than existing methods. This collective density estimation approach is widely applicable when there is a need to estimate multiple density functions from different populations with common features. Moreover, the coefficients of adaptive basis expansion for the fitted densities provide a low-dimensional representation that is useful for visualization, clustering, and classification of the densities. The proposed method provides a novel and unique perspective to two important and challenging problems in protein structure research: structure-based protein classification and angular-sampling-based protein loop structure prediction.

  7. On the value of incorporating spatial statistics in large-scale geophysical inversions: the SABRe case

    NASA Astrophysics Data System (ADS)

    Kokkinaki, A.; Sleep, B. E.; Chambers, J. E.; Cirpka, O. A.; Nowak, W.

    2010-12-01

    Electrical Resistance Tomography (ERT) is a popular method for investigating subsurface heterogeneity. The method relies on measuring electrical potential differences and obtaining, through inverse modeling, the underlying electrical conductivity field, which can be related to hydraulic conductivities. The quality of site characterization strongly depends on the utilized inversion technique. Standard ERT inversion methods, though highly computationally efficient, do not consider spatial correlation of soil properties; as a result, they often underestimate the spatial variability observed in earth materials, thereby producing unrealistic subsurface models. Also, these methods do not quantify the uncertainty of the estimated properties, thus limiting their use in subsequent investigations. Geostatistical inverse methods can be used to overcome both these limitations; however, they are computationally expensive, which has hindered their wide use in practice. In this work, we compare a standard Gauss-Newton smoothness constrained least squares inversion method against the quasi-linear geostatistical approach using the three-dimensional ERT dataset of the SABRe (Source Area Bioremediation) project. The two methods are evaluated for their ability to: a) produce physically realistic electrical conductivity fields that agree with the wide range of data available for the SABRe site while being computationally efficient, and b) provide information on the spatial statistics of other parameters of interest, such as hydraulic conductivity. To explore the trade-off between inversion quality and computational efficiency, we also employ a 2.5-D forward model with corrections for boundary conditions and source singularities. The 2.5-D model accelerates the 3-D geostatistical inversion method. New adjoint equations are developed for the 2.5-D forward model for the efficient calculation of sensitivities. 
Our work shows that spatial statistics can be incorporated in large-scale ERT inversions to improve the inversion results without making them computationally prohibitive.

  8. A UNIFIED FRAMEWORK FOR VARIANCE COMPONENT ESTIMATION WITH SUMMARY STATISTICS IN GENOME-WIDE ASSOCIATION STUDIES.

    PubMed

    Zhou, Xiang

    2017-12-01

    Linear mixed models (LMMs) are among the most commonly used tools for genetic association studies. However, the standard method for estimating variance components in LMMs, the restricted maximum likelihood estimation method (REML), suffers from several important drawbacks: REML requires individual-level genotypes and phenotypes from all samples in the study, is computationally slow, and produces downward-biased estimates in case-control studies. To remedy these drawbacks, we present an alternative framework for variance component estimation, which we refer to as MQS. MQS is based on the method of moments (MoM) and the minimal norm quadratic unbiased estimation (MINQUE) criterion, and brings two seemingly unrelated methods, the renowned Haseman-Elston (HE) regression and the recent LD score regression (LDSC), into the same unified statistical framework. With this new framework, we provide an alternative but mathematically equivalent form of HE that allows for the use of summary statistics. We provide an exact estimation form of LDSC to yield unbiased and statistically more efficient estimates. A key feature of our method is its ability to pair marginal z-scores computed using all samples with SNP correlation information computed using a small random subset of individuals (or individuals from a proper reference panel), while producing estimates that can be almost as accurate as if both quantities were computed using the full data. As a result, our method produces unbiased and statistically efficient estimates, and makes use of summary statistics, while remaining computationally efficient for large data sets. Using simulations and applications to 37 phenotypes from 8 real data sets, we illustrate the benefits of our method for estimating and partitioning SNP heritability in population studies as well as for heritability estimation in family studies. Our method is implemented in the GEMMA software package, freely available at www.xzlab.org/software.html.
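
    The Haseman-Elston regression underlying this framework can be sketched as regressing the cross-products of centered phenotypes on pairwise kinship coefficients, with the slope estimating the genetic variance (a toy individual-level illustration, not the authors' summary-statistic MQS form):

```python
def haseman_elston_slope(y, K):
    """Toy Haseman-Elston regression: regress off-diagonal cross-products
    of centered phenotypes, y_i * y_j, on kinship coefficients K[i][j].
    The least-squares slope estimates the genetic variance component."""
    n = len(y)
    mean_y = sum(y) / n
    yc = [v - mean_y for v in y]
    xs, zs = [], []
    for i in range(n):
        for j in range(i + 1, n):      # unordered pairs i < j
            xs.append(K[i][j])
            zs.append(yc[i] * yc[j])
    mx = sum(xs) / len(xs)
    mz = sum(zs) / len(zs)
    num = sum((x - mx) * (z - mz) for x, z in zip(xs, zs))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```

    In practice K would come from genome-wide genotypes; the MQS framework replaces the individual-level regression with a mathematically equivalent form built from marginal z-scores and reference-panel SNP correlations.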

  9. Acceleration and sensitivity analysis of lattice kinetic Monte Carlo simulations using parallel processing and rate constant rescaling

    NASA Astrophysics Data System (ADS)

    Núñez, M.; Robie, T.; Vlachos, D. G.

    2017-10-01

    Kinetic Monte Carlo (KMC) simulation provides insights into catalytic reactions unobtainable with either experiments or mean-field microkinetic models. Sensitivity analysis of KMC models assesses the robustness of the predictions to parametric perturbations and identifies rate determining steps in a chemical reaction network. Stiffness in the chemical reaction network, a ubiquitous feature, demands lengthy run times for KMC models and renders efficient sensitivity analysis based on the likelihood ratio method unusable. We address the challenge of efficiently conducting KMC simulations and performing accurate sensitivity analysis in systems with unknown time scales by employing two acceleration techniques: rate constant rescaling and parallel processing. We develop statistical criteria that ensure sufficient sampling of non-equilibrium steady state conditions. Our approach provides the twofold benefit of accelerating the simulation itself and enabling likelihood ratio sensitivity analysis, which provides further speedup relative to finite difference sensitivity analysis. As a result, the likelihood ratio method can be applied to real chemistry. We apply our methodology to the water-gas shift reaction on Pt(111).
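
    The core KMC loop the paper accelerates can be sketched for a toy reversible reaction A ↔ B (a generic Gillespie-type sketch, not the authors' lattice code; rate-constant rescaling would simply scale down the fastest k before running the same loop):

```python
import math
import random

def kmc_ab(k_fwd, k_rev, n_a, n_b, t_end, seed=1):
    """Minimal Gillespie-type KMC for A <-> B: pick the next event with
    probability proportional to its rate ("propensity") and advance time
    by an exponentially distributed waiting time."""
    rng = random.Random(seed)
    t = 0.0
    while True:
        prop_f = k_fwd * n_a           # propensity of A -> B
        prop_r = k_rev * n_b           # propensity of B -> A
        total = prop_f + prop_r
        if total == 0.0:
            break                       # no events possible
        # exponential waiting time; 1 - random() avoids log(0)
        t += -math.log(1.0 - rng.random()) / total
        if t > t_end:
            break
        if rng.random() * total < prop_f:
            n_a, n_b = n_a - 1, n_b + 1
        else:
            n_a, n_b = n_a + 1, n_b - 1
    return n_a, n_b
```

    With k_fwd = k_rev, a population started as all-A relaxes toward an even A/B split while the total particle count is conserved.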

  10. Efficient Method for Scalable Registration of Remote Sensing Images

    NASA Astrophysics Data System (ADS)

    Prouty, R.; LeMoigne, J.; Halem, M.

    2017-12-01

    The goal of this project is to build a prototype of a resource-efficient pipeline that will provide registration within subpixel accuracy of multitemporal Earth science data. Accurate registration of Earth-science data is imperative to proper data integration and seamless mosaicing of data from multiple times, sensors, and/or observation geometries. Modern registration methods make use of many arithmetic operations and sometimes require complete knowledge of the image domain. As such, while sensors become more advanced and are able to provide higher-resolution data, the memory resources required to properly register these data become prohibitive. The proposed pipeline employs a region of interest extraction algorithm in order to extract image subsets with high local feature density. These image subsets are then used to generate local solutions to the global registration problem. The local solutions are then 'globalized' to determine the deformation model that best solves the registration problem. The region of interest extraction and globalization routines are tested for robustness among the variety of scene-types and spectral locations provided by Earth-observing instruments such as Landsat, MODIS, or ASTER.

  11. Efficient method for computing the maximum-likelihood quantum state from measurements with additive Gaussian noise.

    PubMed

    Smolin, John A; Gambetta, Jay M; Smith, Graeme

    2012-02-17

    We provide an efficient method for computing the maximum-likelihood mixed quantum state (with density matrix ρ) given a set of measurement outcomes in a complete orthonormal operator basis subject to Gaussian noise. Our method works by first changing basis yielding a candidate density matrix μ which may have nonphysical (negative) eigenvalues, and then finding the nearest physical state under the 2-norm. Our algorithm takes at worst O(d^4) for the basis change plus O(d^3) for finding ρ, where d is the dimension of the quantum state. In the special case where the measurement basis is strings of Pauli operators, the basis change takes only O(d^3) as well. The workhorse of the algorithm is a new linear-time method for finding the closest probability distribution (in Euclidean distance) to a set of real numbers summing to one.
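
    The closest-probability-distribution subroutine can be illustrated with the standard sort-based Euclidean projection onto the probability simplex (a sketch of the general technique, which is O(n log n) due to the sort, rather than the authors' linear-time variant):

```python
def closest_probability_distribution(v):
    """Euclidean projection of a real vector v (summing to ~1) onto the
    probability simplex: the nearest vector with nonnegative entries
    summing to one."""
    u = sorted(v, reverse=True)
    css = 0.0
    theta = 0.0
    for j, uj in enumerate(u, start=1):
        css += uj
        t = (css - 1.0) / j
        if uj - t > 0:          # entry j stays positive after shifting by t
            theta = t
    return [max(x - theta, 0.0) for x in v]
```

    Applied to candidate eigenvalues such as [0.6, 0.5, -0.1], the projection zeroes the negative entry and shifts the rest, yielding the valid distribution [0.55, 0.45, 0.0].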

  12. Direct amidation of esters with nitroarenes

    NASA Astrophysics Data System (ADS)

    Cheung, Chi Wai; Ploeger, Marten Leendert; Hu, Xile

    2017-03-01

    Esters are one of the most common functional groups in natural and synthetic products, and the one-step conversion of the ester group into other functional groups is an attractive strategy in organic synthesis. Direct amidation of esters is particularly appealing due to the omnipresence of the amide moiety in biomolecules, fine chemicals, and drug candidates. However, efficient methods for direct amidation of unactivated esters are still lacking. Here we report nickel-catalysed reductive coupling of unactivated esters with nitroarenes to furnish in one step a wide range of amides bearing functional groups relevant to the development of drugs and agrochemicals. The method has been used to expedite the syntheses of bio-active molecules and natural products, as well as their post-synthetic modifications. Preliminary mechanistic study indicates a reaction pathway distinct from conventional amidation methods using anilines as nitrogen sources. The work provides a novel and efficient method for amide synthesis.

  13. The analysis of transient noise of PCB P/G network based on PI/SI co-simulation

    NASA Astrophysics Data System (ADS)

    Haohang, Su

    2018-02-01

    As the operating frequencies of space cameras increase, power noise in the imaging electronic system has become an important design factor. Excessive power noise can disturb signal transmission and even degrade image sharpness and system noise performance. The "target impedance method," a traditional approach to designing the P/G network (power and ground network), lacks transient power-noise analysis and often leads to over-design. This paper presents a new P/G network design method based on PI/SI co-simulation. The transient power noise is simulated and the results are applied to noise-reduction design, effectively controlling noise in the P/G network. The method limits the number of decoupling capacitors that must be added and is an efficient, feasible way to maintain power integrity.

  14. A joint tracking method for NSCC based on WLS algorithm

    NASA Astrophysics Data System (ADS)

    Luo, Ruidan; Xu, Ying; Yuan, Hong

    2017-12-01

    Navigation signal based on compound carrier (NSCC) offers a flexible multi-carrier scheme with configurable parameters, giving it significant navigation-augmentation advantages over legacy navigation signals in spectral efficiency, tracking accuracy, multipath mitigation, and anti-jamming capability. Moreover, its characteristic scheme structure provides auxiliary information for the design of signal synchronization algorithms. Based on the characteristics of NSCC, this paper proposes a joint tracking method using a weighted least squares (WLS) algorithm. The method jointly estimates the frequency shift of each sub-carrier by exploiting the linear relationship between sub-carrier frequency and Doppler shift, using the known sub-carrier frequencies. The weighting matrix is set adaptively according to sub-carrier power to ensure estimation accuracy. Both theoretical analysis and simulation results show that the tracking accuracy and sensitivity of this method outperform a single-carrier algorithm at lower SNR.
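
    The weighted least squares step can be sketched as fitting a common frequency-Doppler line across sub-carriers, with weights proportional to sub-carrier power (a hypothetical illustration; the frequencies, shifts, and weights below are invented, not the paper's signal parameters):

```python
def weighted_least_squares_line(x, y, w):
    """Fit y ~ a + b*x by minimising sum_i w_i * (y_i - a - b*x_i)^2,
    solved in closed form via the 2x2 weighted normal equations."""
    S   = sum(w)
    Sx  = sum(wi * xi for wi, xi in zip(w, x))
    Sy  = sum(wi * yi for wi, yi in zip(w, y))
    Sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    Sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    det = S * Sxx - Sx * Sx
    a = (Sxx * Sy - Sx * Sxy) / det
    b = (S * Sxy - Sx * Sy) / det
    return a, b

# hypothetical sub-carrier frequencies (MHz), observed shifts, power weights
freqs  = [1.0, 2.0, 3.0, 4.0]
shifts = [0.5 + 0.1 * f for f in freqs]   # exact line: intercept 0.5, slope 0.1
powers = [4.0, 1.0, 2.0, 1.0]
a, b = weighted_least_squares_line(freqs, shifts, powers)
```

    Down-weighting weak sub-carriers keeps noisy measurements from dominating the joint frequency-shift estimate.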

  15. A GPU accelerated and error-controlled solver for the unbounded Poisson equation in three dimensions

    NASA Astrophysics Data System (ADS)

    Exl, Lukas

    2017-12-01

    An efficient solver for the three dimensional free-space Poisson equation is presented. The underlying numerical method is based on finite Fourier series approximation. While the error of all involved approximations can be fully controlled, the overall computation error is driven by the convergence of the finite Fourier series of the density. For smooth and fast-decaying densities the proposed method will be spectrally accurate. The method scales with O(N log N) operations, where N is the total number of discretization points in the Cartesian grid. The majority of the computational costs come from fast Fourier transforms (FFT), which makes it ideal for GPU computation. Several numerical computations on CPU and GPU validate the method and show efficiency and convergence behavior. Tests are performed using the Vienna Scientific Cluster 3 (VSC3). A free MATLAB implementation for CPU and GPU is provided to the interested community.
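
    The spectral idea can be shown in miniature with a 1-D periodic Poisson solve, dividing each Fourier mode by -k² (an illustrative sketch using a naive O(N²) DFT and periodic boundaries, not the paper's FFT-accelerated free-space treatment):

```python
import cmath
import math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def poisson_periodic_1d(f):
    """Solve u'' = f on [0, 2*pi) with periodic BCs via Fourier series:
    each mode satisfies -k^2 * u_hat[k] = f_hat[k]. The k = 0 mode is set
    to zero, fixing the mean of the (zero-mean) solution."""
    N = len(f)
    F = dft(f)
    U = [0j] * N
    for k in range(1, N):
        kk = k if k <= N // 2 else k - N   # signed wavenumber
        U[k] = -F[k] / (kk * kk)
    return [u.real for u in idft(U)]
```

    For f(x) = sin(x) on a 16-point grid the solver reproduces the exact solution u(x) = -sin(x) to machine precision; the production version would replace the naive DFT with an FFT to reach O(N log N).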

  16. Extracting organic matter on Mars: A comparison of methods involving subcritical water, surfactant solutions and organic solvents

    NASA Astrophysics Data System (ADS)

    Luong, Duy; Court, Richard W.; Sims, Mark R.; Cullen, David C.; Sephton, Mark A.

    2014-09-01

    The first step in many life detection protocols on Mars involves attempts to extract or isolate organic matter from its mineral matrix. A number of extraction options are available and include heat and solvent assisted methods. Recent operations on Mars indicate that heating samples can cause the loss or obfuscation of organic signals from target materials, raising the importance of solvent-based systems for future missions. Several solvent types are available (e.g. organic solvents, surfactant based solvents and subcritical water extraction) but a comparison of their efficiencies in Mars relevant materials is missing. We have spiked the well characterised Mars analogue material JSC Mars-1 with a number of representative organic standards. Extraction of the spiked JSC Mars-1 with the three solvent methods provides insights into the relative efficiency of these methods and indicates how they may be used on future Mars missions.

  17. Applying Process Improvement Methods to Clinical and Translational Research: Conceptual Framework and Case Examples

    PubMed Central

    Selker, Harry P.; Leslie, Laurel K.

    2015-01-01

    Abstract There is growing appreciation that process improvement holds promise for improving quality and efficiency across the translational research continuum but frameworks for such programs are not often described. The purpose of this paper is to present a framework and case examples of a Research Process Improvement Program implemented at Tufts CTSI. To promote research process improvement, we developed online training seminars, workshops, and in‐person consultation models to describe core process improvement principles and methods, demonstrate the use of improvement tools, and illustrate the application of these methods in case examples. We implemented these methods, as well as relational coordination theory, with junior researchers, pilot funding awardees, our CTRC, and CTSI resource and service providers. The program focuses on capacity building to address common process problems and quality gaps that threaten the efficient, timely and successful completion of clinical and translational studies. PMID:26332869

  18. A combined gas cooled nuclear reactor and fuel cell cycle

    NASA Astrophysics Data System (ADS)

    Palmer, David J.

    Rising oil costs, global warming, national security concerns, economic concerns and escalating energy demands are forcing the engineering communities to explore methods to address these concerns. It is the intention of this thesis to offer a proposal for a novel design of a combined cycle, an advanced nuclear helium reactor/solid oxide fuel cell (SOFC) plant that will help to mitigate some of the above concerns. Moreover, the adoption of this proposal may help to reinvigorate the Nuclear Power industry while providing a practical method to foster the development of a hydrogen economy. Specifically, this thesis concentrates on the importance of the U.S. Nuclear Navy adopting this novel design for its nuclear electric vessels of the future with discussion on efficiency and thermodynamic performance characteristics related to the combined cycle. Thus, the goals and objectives are to develop an innovative combined cycle that provides a solution to the stated concerns and show that it provides superior performance. In order to show performance, it is necessary to develop a rigorous thermodynamic model and computer program to analyze the SOFC in relation with the overall cycle. A large increase in efficiency over the conventional pressurized water reactor cycle is realized. Both sides of the cycle achieve higher efficiencies at partial loads which is extremely important as most naval vessels operate at partial loads as well as the fact that traditional gas turbines operating alone have poor performance at reduced speeds. Furthermore, each side of the cycle provides important benefits to the other side. The high temperature exhaust from the overall exothermic reaction of the fuel cell provides heat for the reheater allowing for an overall increase in power on the nuclear side of the cycle. 
Likewise, the high temperature helium exiting the nuclear reactor provides a controllable method to stabilize the fuel cell at an optimal temperature band even during transients helping to increase performance and reduce degradation of the fuel cell. It also provides the high temperature needed to efficiently produce hydrogen for the fuel cell. Moreover, the inclusion of a highly reliable and electrically independent fuel cell is particularly important as the ship will have the ability to divert large amounts of power from the propulsion system to energize high energy weapon pulse loads without disturbing vital parts of the C4ISR systems or control panels. Ultimately, the thesis shows that the combined cycle is mutually beneficial to each side of the cycle and overall critically needed for our future.

  19. A nonlinear relaxation/quasi-Newton algorithm for the compressible Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Edwards, Jack R.; Mcrae, D. S.

    1992-01-01

    A highly efficient implicit method for the computation of steady, two-dimensional compressible Navier-Stokes flowfields is presented. The discretization of the governing equations is hybrid in nature, with flux-vector splitting utilized in the streamwise direction and central differences with flux-limited artificial dissipation used for the transverse fluxes. Line Jacobi relaxation is used to provide a suitable initial guess for a new nonlinear iteration strategy based on line Gauss-Seidel sweeps. The applicability of quasi-Newton methods as convergence accelerators for this and other line relaxation algorithms is discussed, and efficient implementations of such techniques are presented. Convergence histories and comparisons with experimental data are presented for supersonic flow over a flat plate and for several high-speed compression corner interactions. Results indicate a marked improvement in computational efficiency over more conventional upwind relaxation strategies, particularly for flowfields containing large pockets of streamwise subsonic flow.
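
    The Gauss-Seidel sweep at the heart of such relaxation schemes can be sketched in its generic linear-system form (an illustration of the iteration pattern, not the flow solver itself):

```python
def gauss_seidel(A, b, sweeps=100):
    """Gauss-Seidel relaxation for A x = b: each sweep updates x[i] in
    place using the newest available values of the other unknowns, unlike
    Jacobi, which uses only values from the previous sweep."""
    n = len(b)
    x = [0.0] * n
    for _ in range(sweeps):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x
```

    On a diagonally dominant system such as the tridiagonal example below, the sweeps converge quickly; in the flow solver the same pattern is applied line by line to the linearized equations.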

  20. Chapter 22: Compressed Air Evaluation Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurnik, Charles W; Benton, Nathanael; Burns, Patrick

    Compressed-air systems are used widely throughout industry for many operations, including pneumatic tools, packaging and automation equipment, conveyors, and other industrial process operations. Compressed-air systems are defined as a group of subsystems composed of air compressors, air treatment equipment, controls, piping, pneumatic tools, pneumatically powered machinery, and process applications using compressed air. A compressed-air system has three primary functional subsystems: supply, distribution, and demand. Air compressors are the primary energy consumers in a compressed-air system and are the primary focus of this protocol. The two compressed-air energy efficiency measures specifically addressed in this protocol are: High-efficiency/variable speed drive (VSD) compressor replacing modulating, load/unload, or constant-speed compressor; and Compressed-air leak survey and repairs. This protocol provides direction on how to reliably verify savings from these two measures using a consistent approach for each.

  1. Algal cell disruption using microbubbles to localize ultrasonic energy

    PubMed Central

    Krehbiel, Joel D.; Schideman, Lance C.; King, Daniel A.; Freund, Jonathan B.

    2015-01-01

    Microbubbles were added to an algal solution with the goal of improving cell disruption efficiency and the net energy balance for algal biofuel production. Experimental results showed that disruption increases with increasing peak rarefaction ultrasound pressure over the range studied: 1.90 to 3.07 MPa. Additionally, ultrasound cell disruption increased by up to 58% by adding microbubbles, with peak disruption occurring in the range of 10^8 microbubbles/ml. The localization of energy in space and time provided by the bubbles improves efficiency: energy requirements for such a process were estimated to be one-fourth of the available heat of combustion of algal biomass and one-fifth of currently used cell disruption methods. This increase in energy efficiency could make microbubble-enhanced ultrasound viable for bioenergy applications and is expected to integrate well with current cell harvesting methods based upon dissolved air flotation.

  2. Deterministic binary vectors for efficient automated indexing of MEDLINE/PubMed abstracts.

    PubMed

    Wahle, Manuel; Widdows, Dominic; Herskovic, Jorge R; Bernstam, Elmer V; Cohen, Trevor

    2012-01-01

    The need to maintain accessibility of the biomedical literature has led to development of methods to assist human indexers by recommending index terms for newly encountered articles. Given the rapid expansion of this literature, it is essential that these methods be scalable. Document vector representations are commonly used for automated indexing, and Random Indexing (RI) provides the means to generate them efficiently. However, RI is difficult to implement in real-world indexing systems, as (1) efficient nearest-neighbor search requires retaining all document vectors in RAM, and (2) it is necessary to maintain a store of randomly generated term vectors to index future documents. Motivated by these concerns, this paper documents the development and evaluation of a deterministic binary variant of RI. The increased capacity demonstrated by binary vectors has implications for information retrieval, and the elimination of the need to retain term vectors facilitates distributed implementations, enhancing the scalability of RI.
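
    The deterministic-binary-vector idea can be sketched by hashing each term into a fixed-width bit vector, so identical terms always map to identical vectors and no store of random term vectors is needed (a minimal sketch of the general approach, not the authors' implementation):

```python
import hashlib

def binary_term_vector(term, dim=256):
    """Deterministic pseudo-random binary vector for a term: bits are
    drawn from SHA-256 hashes of the term, so the same term always maps
    to the same vector without retaining a term-vector store."""
    bits = []
    counter = 0
    while len(bits) < dim:
        digest = hashlib.sha256(f"{term}:{counter}".encode()).digest()
        for byte in digest:
            for i in range(8):
                bits.append((byte >> i) & 1)
        counter += 1                    # extend with further hashes if needed
    return bits[:dim]

def hamming_similarity(a, b):
    """Fraction of agreeing bits: 1.0 for identical vectors, ~0.5 for
    unrelated terms (near-orthogonality in the binary setting)."""
    return sum(x == y for x, y in zip(a, b)) / len(a)
```

    A document vector would then be built by superposing (e.g. majority-voting) the binary vectors of its terms, and indexing candidates ranked by Hamming similarity.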

  3. Deterministic Binary Vectors for Efficient Automated Indexing of MEDLINE/PubMed Abstracts

    PubMed Central

    Wahle, Manuel; Widdows, Dominic; Herskovic, Jorge R.; Bernstam, Elmer V.; Cohen, Trevor

    2012-01-01

    The need to maintain accessibility of the biomedical literature has led to development of methods to assist human indexers by recommending index terms for newly encountered articles. Given the rapid expansion of this literature, it is essential that these methods be scalable. Document vector representations are commonly used for automated indexing, and Random Indexing (RI) provides the means to generate them efficiently. However, RI is difficult to implement in real-world indexing systems, as (1) efficient nearest-neighbor search requires retaining all document vectors in RAM, and (2) it is necessary to maintain a store of randomly generated term vectors to index future documents. Motivated by these concerns, this paper documents the development and evaluation of a deterministic binary variant of RI. The increased capacity demonstrated by binary vectors has implications for information retrieval, and the elimination of the need to retain term vectors facilitates distributed implementations, enhancing the scalability of RI. PMID:23304369
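    The core trick, deterministically regenerating binary term vectors instead of storing them, can be sketched in a few lines. The following is an illustrative toy, not the paper's exact construction: the dimensionality, the SHA-256 seeding scheme, and the majority-vote superposition are all assumptions.

```python
import hashlib
import random

DIM = 256  # dimensionality of the binary vectors (illustrative choice)

def term_vector(term: str) -> list[int]:
    """Deterministically derive a binary vector from the term itself.

    Seeding the RNG with a hash of the term removes the need to keep a
    store of randomly generated term vectors: the same term always
    regenerates the same vector."""
    seed = int.from_bytes(hashlib.sha256(term.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    return [rng.randint(0, 1) for _ in range(DIM)]

def doc_vector(terms: list[str]) -> list[int]:
    """Superpose term vectors by bitwise majority vote (a tie cannot occur
    for an odd number of terms; here ties collapse to 0 for simplicity)."""
    counts = [0] * DIM
    for t in terms:
        for i, bit in enumerate(term_vector(t)):
            counts[i] += 1 if bit else -1
    return [1 if c > 0 else 0 for c in counts]

def hamming_similarity(a: list[int], b: list[int]) -> float:
    """Fraction of agreeing bits; nearest-neighbour search compares these."""
    return sum(x == y for x, y in zip(a, b)) / DIM

d1 = doc_vector(["random", "indexing", "medline"])
d2 = doc_vector(["random", "indexing", "pubmed"])
d3 = doc_vector(["hydroturbine", "wicket", "gate"])
# Documents sharing terms end up closer in Hamming space than unrelated ones.
```

In this toy, documents d1 and d2 share two of three terms and so agree on roughly three-quarters of their bits, while d1 and d3 agree on about half, which is what Hamming-space retrieval exploits.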

  4. Wicket gate trailing-edge blowing: A method for improving off-design hydroturbine performance by adjusting the runner inlet swirl angle

    NASA Astrophysics Data System (ADS)

    Lewis, B. J.; Cimbala, J. M.; Wouden, A. M.

    2014-03-01

    At their best efficiency point (BEP), hydroturbines operate at very high efficiency. However, with the ever-increasing penetration of alternative electricity generation, it has become common to operate hydroturbines at off-design conditions in order to maintain stability in the electric power grid. This paper demonstrates a method for improving hydroturbine performance during off-design operation by injecting water through slots at the trailing edges of the wicket gates. The injected water causes a change in bulk flow direction at the inlet of the runner. This change in flow angle from the wicket gate trailing-edge jets provides the capability of independently varying the flow rate and swirl angle through the runner, which in current designs are both determined by the wicket gate opening angle. When properly tuned, altering the flow angle results in a significant improvement in turbine efficiency during off-design operation.

  5. SLFP: a stochastic linear fractional programming approach for sustainable waste management.

    PubMed

    Zhu, H; Huang, G H

    2011-12-01

    A stochastic linear fractional programming (SLFP) approach is developed for supporting sustainable municipal solid waste management under uncertainty. The SLFP method can solve ratio optimization problems associated with random information, where chance-constrained programming is integrated into a linear fractional programming framework. It has advantages in: (1) comparing objectives of two aspects, (2) reflecting system efficiency, (3) dealing with uncertainty expressed as probability distributions, and (4) providing optimal-ratio solutions under different system-reliability conditions. The method is applied to a case study of waste flow allocation within a municipal solid waste (MSW) management system. The obtained solutions are useful for identifying sustainable MSW management schemes with maximized system efficiency under various constraint-violation risks. The results indicate that SLFP can support in-depth analysis of the interrelationships among system efficiency, system cost and system-failure risk. Copyright © 2011 Elsevier Ltd. All rights reserved.
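    The two ingredients, a ratio objective and a chance constraint, can be made concrete with a toy sketch (all coefficients are invented, not from the case study). A chance constraint with a normally distributed right-hand side is replaced by its deterministic equivalent, and the fractional objective is maximized by brute-force grid search here rather than by a real linear fractional programming solver.

```python
from statistics import NormalDist

# Toy SLFP-style model (all numbers hypothetical):
#   maximize  (3*x1 + 2*x2) / (1 + x1 + x2)      e.g. benefit per unit cost
#   subject to Pr(2*x1 + x2 <= B) >= p,  B ~ Normal(10, 1.5)
#              0 <= x1, x2 <= 6

def deterministic_rhs(mu: float, sigma: float, p: float) -> float:
    """Chance constraint Pr(a.x <= B) >= p with random B ~ N(mu, sigma)
    has the deterministic equivalent  a.x <= mu + sigma * Phi^{-1}(1 - p)."""
    return mu + sigma * NormalDist().inv_cdf(1 - p)

def solve_by_grid(p: float, steps: int = 300) -> tuple[float, float, float]:
    """Brute-force stand-in for a fractional-programming solver."""
    rhs = deterministic_rhs(10.0, 1.5, p)
    best = (-1.0, 0.0, 0.0)
    for i in range(steps + 1):
        for j in range(steps + 1):
            x1, x2 = 6 * i / steps, 6 * j / steps
            if 2 * x1 + x2 <= rhs:
                ratio = (3 * x1 + 2 * x2) / (1 + x1 + x2)
                if ratio > best[0]:
                    best = (ratio, x1, x2)
    return best

# A higher reliability level p tightens the constraint, so the achievable
# ratio drops -- the efficiency/risk trade-off the abstract describes.
lax = solve_by_grid(p=0.60)
strict = solve_by_grid(p=0.95)
```

The falling optimal ratio as p rises mirrors the trade-off between system efficiency and constraint-violation risk analyzed in the paper.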

  6. Multi-stage fuel cell system method and apparatus

    DOEpatents

    George, Thomas J.; Smith, William C.

    2000-01-01

    A high efficiency, multi-stage fuel cell system method and apparatus is provided. The fuel cell system is comprised of multiple fuel cell stages, whereby the temperatures of the fuel and oxidant gas streams and the percentage of fuel consumed in each stage are controlled to optimize fuel cell system efficiency. The stages are connected in a serial, flow-through arrangement such that the oxidant gas and fuel gas flowing through an upstream stage is conducted directly into the next adjacent downstream stage. The fuel cell stages are further arranged such that unspent fuel and oxidant laden gases too hot to continue within an upstream stage because of material constraints are conducted into a subsequent downstream stage, which comprises a similar cell configuration but is constructed from materials having a higher heat tolerance and designed to meet higher thermal demands. In addition, fuel is underutilized in each stage, resulting in a higher overall fuel cell system efficiency.

  7. Efficient Jacobian inversion for the control of simple robot manipulators

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1988-01-01

    Symbolic inversion of the Jacobian matrix for spherical wrist arms is investigated. It is shown that, taking advantage of the simple geometry of these arms, the closed-form solution of the system Q = J⁻¹X, representing a transformation from task space to joint space, can be obtained very efficiently. The solutions for PUMA, Stanford, and a six-revolute-joint coplanar arm, along with all singular points, are presented. The solution for each joint variable is found as an explicit function of the singular points, which provides better insight into the effect of different singular points on the motion and force exertion of each individual joint. For the above arms, the computation cost of the solution is on the same order as the cost of the forward kinematic solution, and it is significantly reduced if the forward kinematic solution has already been obtained. A comparison with previous methods shows that this method is the most efficient to date.
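    The flavor of such a closed-form inverse can be shown on a planar two-link arm, a much simpler, hypothetical stand-in for the spherical-wrist arms treated in the paper. The determinant l1*l2*sin(q2) makes the singular points explicit, just as the paper describes for its arms.

```python
import math

# Illustrative only: a planar 2-link arm, not the PUMA/Stanford derivations.

def jacobian(l1, l2, q1, q2):
    """2x2 Jacobian mapping joint rates (q1dot, q2dot) to tip velocity."""
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    return [[-l1 * s1 - l2 * s12, -l2 * s12],
            [ l1 * c1 + l2 * c12,  l2 * c12]]

def jacobian_inverse(l1, l2, q1, q2):
    """Closed-form inverse: det = l1*l2*sin(q2) exposes the singular
    configurations q2 = 0 or pi (arm fully stretched or folded)."""
    det = l1 * l2 * math.sin(q2)
    if abs(det) < 1e-9:
        raise ValueError("singular configuration: q2 = 0 or pi")
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    # Standard 2x2 inverse [[d,-b],[-c,a]]/det, written out symbolically:
    return [[ l2 * c12 / det,              l2 * s12 / det],
            [(-l1 * c1 - l2 * c12) / det, (-l1 * s1 - l2 * s12) / det]]
```

Because the inverse is written symbolically, evaluating joint rates from a tip velocity costs a handful of trigonometric terms, comparable to the forward kinematics, which is the efficiency argument the abstract makes.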

  8. Ink jet assisted metallization for low cost flat plate solar cells

    NASA Technical Reports Server (NTRS)

    Teng, K. F.; Vest, R. W.

    1987-01-01

    Computer-controlled ink-jet-assisted metallization of the front surface of solar cells with metalorganic silver inks offers a maskless alternative method to conventional photolithography and screen printing. This method can provide low cost, fine resolution, reduced process complexity, avoidance of degradation of the p-n junction by firing at lower temperature, and uniform line films on the rough surfaces of solar cells. The metallization process involves belt furnace firing and thermal spiking. With multilayer ink jet printing and firing, solar cells of about 5-6 percent efficiency without antireflection (AR) coating can be produced. With a titanium thin-film underlayer as an adhesion promoter, solar cells of average efficiency 8.08 percent without AR coating can be obtained. This efficiency value is approximately equal to that of thin-film solar cells of the same lot. Problems with regard to the lower inorganic content of the inks and contact resistance are noted.

  9. Research on the self-absorption corrections for PGNAA of large samples

    NASA Astrophysics Data System (ADS)

    Yang, Jian-Bo; Liu, Zhi; Chang, Kang; Li, Rui

    2017-02-01

    When a large sample is analysed with prompt gamma neutron activation analysis (PGNAA), neutron self-shielding and gamma self-absorption affect the accuracy; a correction method for the detection efficiency of each element relative to H in a large sample is therefore described. The influences of the thickness and density of the cement samples on the H detection efficiency, as well as of the impurities Fe2O3 and SiO2 on the prompt γ-ray yield for each element in the cement samples, were studied. Phase functions for Ca, Fe, and Si relative to H as functions of sample thickness and density were provided, avoiding the complicated procedure of preparing a corresponding density or thickness scale for each measured sample and offering a simplified method for the measurement efficiency scale for prompt-gamma neutron activation analysis.

  10. Reducing the impact of speed dispersion on subway corridor flow.

    PubMed

    Qiao, Jing; Sun, Lishan; Liu, Xiaoming; Rong, Jian

    2017-11-01

    The rapid increase in the volume of subway passengers in Beijing has necessitated higher requirements for the safety and efficiency of subway corridors. Speed dispersion is an important factor that affects safety and efficiency. This paper aims to analyze management control methods for reducing pedestrian speed dispersion in subways. The characteristics of the speed dispersion of pedestrian flow were analyzed from field videos. Control measures consisting of placing traffic signs, yellow markings, and a guardrail were proposed to alleviate speed dispersion. The results showed that all three measures improved safety and efficiency for all four volumes of pedestrian traffic flow, and the best-performing measure was the guardrail. Furthermore, the guardrail's optimal position and design were explored. The research findings provide a rationale for subway managers in optimizing pedestrian traffic flow in subway corridors. Copyright © 2017. Published by Elsevier Ltd.

  11. Accommodating Change: A Case Study in Planning a Sustainable New Business School Building.

    ERIC Educational Resources Information Center

    Taylor, Lee

    2002-01-01

    Provides a case study of the planning and design of a new building for the Open University Business School. Goals included an energy-efficient building that would break the paradigm of traditional university working methods. (EV)

  12. ENVIRONMENTAL TECHNOLOGY VERIFICATION TEST PROTOCOL, GENERAL VENTILATION FILTERS

    EPA Science Inventory

    The Environmental Technology Verification Test Protocol, General Ventilation Filters provides guidance for verification tests.

    Reference is made in the protocol to the ASHRAE 52.2P "Method of Testing General Ventilation Air-cleaning Devices for Removal Efficiency by P...

  13. Mating programs including genomic relationships and dominance effects

    USDA-ARS?s Scientific Manuscript database

    Breed associations, artificial-insemination organizations, and on-farm software providers need new computerized mating programs for genomic selection so that genomic inbreeding could be minimized by comparing genotypes of potential mates. Efficient methods for transferring elements of the genomic re...

  14. Efficient methods and readily customizable libraries for managing complexity of large networks.

    PubMed

    Dogrusoz, Ugur; Karacelik, Alper; Safarli, Ilkin; Balci, Hasan; Dervishi, Leonard; Siper, Metin Can

    2018-01-01

    One common problem in visualizing real-life networks, including biological pathways, is the large size of these networks. Often, users find themselves facing slow, non-scaling operations due to network size, if not a "hairball" network, hindering effective analysis. One extremely useful method for reducing complexity of large networks is the use of hierarchical clustering and nesting, and applying expand-collapse operations on demand during analysis. Another such method is hiding currently unnecessary details, to later gradually reveal on demand. Major challenges when applying complexity reduction operations on large networks include efficiency and maintaining the user's mental map of the drawing. We developed specialized incremental layout methods for preserving a user's mental map while managing complexity of large networks through expand-collapse and hide-show operations. We also developed open-source JavaScript libraries as plug-ins to the web-based graph visualization library named Cytoscape.js to implement these methods as complexity management operations. Through efficient specialized algorithms provided by these extensions, one can collapse or hide desired parts of a network, yielding potentially much smaller networks, making them more suitable for interactive visual analysis. This work fills an important gap by making efficient implementations of some already known complexity management techniques freely available to tool developers through a couple of open-source, customizable software libraries, and by introducing some heuristics which can be applied upon such complexity management techniques to preserve users' mental maps.
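    The bookkeeping behind a collapse operation can be sketched in a library-agnostic way (the actual implementation ships as JavaScript plug-ins to Cytoscape.js; this Python toy only mirrors the idea): edges internal to the cluster are stashed, edges crossing the cluster boundary are rerouted to a meta-node, and expansion restores the stash.

```python
# A minimal, hypothetical model of expand-collapse on an undirected graph.

class Graph:
    def __init__(self):
        self.edges = set()        # frozensets of node pairs
        self.collapsed = {}       # meta-node -> (members, stashed edges)

    def add_edge(self, u, v):
        self.edges.add(frozenset((u, v)))

    def collapse(self, meta, members):
        """Replace a cluster by a single meta-node: internal edges are
        stashed, boundary-crossing edges are rerouted to the meta-node
        (their originals are stashed too, so expand can undo everything)."""
        members = set(members)
        hidden, kept = set(), set()
        for e in self.edges:
            inside = e & members
            if inside == e:                  # fully internal -> hide
                hidden.add(e)
            elif inside:                     # crossing -> reroute + remember
                (outside,) = e - members
                kept.add(frozenset((meta, outside)))
                hidden.add(e)
            else:
                kept.add(e)
        self.edges = kept
        self.collapsed[meta] = (members, hidden)

    def expand(self, meta):
        """Undo a collapse: drop the meta-node's edges, restore the stash."""
        members, hidden = self.collapsed.pop(meta)
        self.edges = {e for e in self.edges if meta not in e} | hidden
        return members
```

Collapsing a dense cluster this way shrinks the visible network while keeping the hidden subgraph recoverable, which is what makes on-demand expand-collapse useful for interactive analysis.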

  15. YAMAT-seq: an efficient method for high-throughput sequencing of mature transfer RNAs.

    PubMed

    Shigematsu, Megumi; Honda, Shozo; Loher, Phillipe; Telonis, Aristeidis G; Rigoutsos, Isidore; Kirino, Yohei

    2017-05-19

    Besides translation, transfer RNAs (tRNAs) play many non-canonical roles in various biological pathways and exhibit highly variable expression profiles. To unravel the emerging complexities of tRNA biology and the molecular mechanisms underlying them, an efficient tRNA sequencing method is required. However, the rigid structure of tRNA has been presenting a challenge to the development of such methods. We report the development of Y-shaped Adapter-ligated MAture TRNA sequencing (YAMAT-seq), an efficient and convenient method for high-throughput sequencing of mature tRNAs. YAMAT-seq circumvents the issue of inefficient adapter ligation, a characteristic of conventional RNA sequencing methods for mature tRNAs, by employing the efficient and specific ligation of a Y-shaped adapter to mature tRNAs using T4 RNA Ligase 2. Subsequent cDNA amplification and next-generation sequencing successfully yield numerous mature tRNA sequences. YAMAT-seq has high specificity for mature tRNAs and high sensitivity to detect most isoacceptors from minute amounts of total RNA. Moreover, YAMAT-seq shows quantitative capability to estimate expression levels of mature tRNAs, and has high reproducibility and broad applicability for various cell lines. YAMAT-seq thus provides a high-throughput technique for identifying tRNA profiles and their regulation in various transcriptomes, which could play important regulatory roles in translation and other biological processes. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  16. Quad-Tree Visual-Calculus Analysis of Satellite Coverage

    NASA Technical Reports Server (NTRS)

    Lo, Martin W.; Hockney, George; Kwan, Bruce

    2003-01-01

    An improved method of analysis of coverage of areas of the Earth by a constellation of radio-communication or scientific-observation satellites has been developed. This method is intended to supplant an older method in which the global-coverage-analysis problem is solved from a ground-to-satellite perspective. The present method provides for rapid and efficient analysis. This method is derived from a satellite-to-ground perspective and involves a unique combination of two techniques for multiresolution representation of map features on the surface of a sphere.
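    The multiresolution idea can be illustrated with a planar quad-tree toy. The paper works on the sphere; the circular footprint, the planar cell, and the depth limit here are all simplifying assumptions. A cell is subdivided only while the footprint boundary passes through it, so resolution concentrates where it is needed.

```python
# Hypothetical sketch: area of a square map cell covered by one circular
# satellite footprint, estimated by adaptive quad-tree subdivision.

def covered_area(cx, cy, r, x0, y0, size, depth=8):
    """Area of the square [x0, x0+size] x [y0, y0+size] covered by the disc
    of radius r centred at (cx, cy), to quad-tree resolution `depth`."""
    corners = [(x0, y0), (x0 + size, y0), (x0, y0 + size), (x0 + size, y0 + size)]
    inside = [(x - cx) ** 2 + (y - cy) ** 2 <= r * r for x, y in corners]
    if all(inside):
        # The disc is convex, so four corners inside means the whole
        # cell is covered: stop subdividing early.
        return size * size
    mx, my = x0 + size / 2, y0 + size / 2
    if depth == 0:
        # Leaf cell straddling the boundary: classify by its centre.
        return size * size if (mx - cx) ** 2 + (my - cy) ** 2 <= r * r else 0.0
    if not any(inside):
        # Cheap reject: the cell cannot touch the disc at all.
        if ((mx - cx) ** 2 + (my - cy) ** 2) ** 0.5 > r + size * 0.7071:
            return 0.0
    half = size / 2
    return sum(covered_area(cx, cy, r, xs, ys, half, depth - 1)
               for xs in (x0, x0 + half) for ys in (y0, y0 + half))
```

Interior and exterior cells terminate immediately, so the work is proportional to the footprint boundary rather than the full map, which is the efficiency gain a quad-tree representation buys.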

  17. Teleradiology from the provider's perspective-cost analysis for a mid-size university hospital.

    PubMed

    Rosenberg, Christian; Kroos, Kristin; Rosenberg, Britta; Hosten, Norbert; Flessa, Steffen

    2013-08-01

    Real costs of teleradiology services have not been systematically calculated. Pricing policies are not evidence-based. This study aims to prove the feasibility of performing an original cost analysis for teleradiology services and show break-even points to perform cost-effective practice. Based on the teleradiology services provided by the Greifswald University Hospital in northeastern Germany, a detailed process analysis and an activity-based costing model revealed costs per service unit according to eight examination categories. The Monte Carlo method was used to simulate the cost amplitude and identify pricing thresholds. Twenty-two sub-processes and four staff categories were identified. The average working time for one unit was 55 (x-ray) to 72 min (whole-body CT). Personnel costs were dominant (up to 68 %), representing lower limit costs. The Monte Carlo method showed the cost distribution per category according to the deficiency risk. Avoiding deficient pricing by a likelihood of 90 % increased the cost of a cranial CT almost twofold as compared with the lower limit cost. Original cost analysis is possible when providing teleradiology services with complex statutory requirements in place. Methodology and results provide useful data to help enhance efficiency in hospital management as well as implement realistic reimbursement fees. • Analysis of original costs of teleradiology is possible for a providing hospital • Results discriminate pricing thresholds and lower limit costs to perform cost-effective practice • The study methods represent a managing tool to enhance efficiency in providing facilities • The data are useful to help represent telemedicine services in regular medical fee schedules.
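    A Monte Carlo pricing sketch along these lines might look as follows. All cost figures below are invented placeholders, not the Greifswald data: simulate the per-exam cost distribution, then read the price needed to avoid a deficit with 90% likelihood off the empirical distribution.

```python
import random

random.seed(7)  # fixed seed so the illustration is reproducible

def simulate_unit_cost():
    """One Monte Carlo draw of the cost of one teleradiology exam:
    reporting time varies, staff rates and per-exam overhead are spread
    across exams (all distributions and numbers are hypothetical)."""
    minutes = max(random.gauss(60, 15), 20)     # reporting time, min
    hourly_rate = random.uniform(70, 110)       # blended staff cost, EUR/h
    overhead = random.uniform(15, 30)           # IT, licences, on-call, EUR/exam
    return minutes / 60 * hourly_rate + overhead

draws = sorted(simulate_unit_cost() for _ in range(20_000))
mean_cost = sum(draws) / len(draws)
# Price at the 90th percentile of cost: charging this covers the actual
# cost of an exam in 90% of cases, i.e. deficient pricing is avoided
# with 90% likelihood.
p90_price = draws[int(0.9 * len(draws))]
```

Because the cost distribution is right-skewed by long reporting times, the 90%-safe price sits well above the mean cost, which is the effect the study reports when moving from lower-limit costs to risk-adjusted prices.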

  18. Multi-scale occupancy estimation and modelling using multiple detection methods

    USGS Publications Warehouse

    Nichols, James D.; Bailey, Larissa L.; O'Connell, Allan F.; Talancy, Neil W.; Grant, Evan H. Campbell; Gilbert, Andrew T.; Annand, Elizabeth M.; Husband, Thomas P.; Hines, James E.

    2008-01-01

    Occupancy estimation and modelling based on detection–nondetection data provide an effective way of exploring change in a species’ distribution across time and space in cases where the species is not always detected with certainty. Today, many monitoring programmes target multiple species, or life stages within a species, requiring the use of multiple detection methods. When multiple methods or devices are used at the same sample sites, animals can be detected by more than one method. We develop occupancy models for multiple detection methods that permit simultaneous use of data from all methods for inference about method-specific detection probabilities. Moreover, the approach permits estimation of occupancy at two spatial scales: the larger scale corresponds to species’ use of a sample unit, whereas the smaller scale corresponds to presence of the species at the local sample station or site. We apply the models to data collected on two different vertebrate species: striped skunks Mephitis mephitis and red salamanders Pseudotriton ruber. For striped skunks, large-scale occupancy estimates were consistent between two sampling seasons. Small-scale occupancy probabilities were slightly lower in the late winter/spring when skunks tend to conserve energy, and movements are limited to males in search of females for breeding. There was strong evidence of method-specific detection probabilities for skunks. As anticipated, large- and small-scale occupancy areas completely overlapped for red salamanders. The analyses provided weak evidence of method-specific detection probabilities for this species. Synthesis and applications. Increasingly, many studies are utilizing multiple detection methods at sampling locations. The modelling approach presented here makes efficient use of detections from multiple methods to estimate occupancy probabilities at two spatial scales and to compare detection probabilities associated with different detection methods. The models can be viewed as another variation of Pollock's robust design and may be applicable to a wide variety of scenarios where species occur in an area but are not always near the sampled locations. The estimation approach is likely to be especially useful in multispecies conservation programmes by providing efficient estimates using multiple detection devices and by providing device-specific detection probability estimates for use in survey design.
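    The two-scale likelihood with method-specific detection probabilities can be sketched for a single station's detection history. The notation and numbers below are illustrative, not the authors' full robust-design likelihood: psi is large-scale use of the sample unit, theta is small-scale presence at the station, and p[m] is the detection probability of method m.

```python
from math import prod

def history_probability(history, psi, theta, p):
    """Probability of one station's detection history across methods,
    where history[m] is 1 if method m detected the species.

    psi   = Pr(species uses the sample unit)         (large scale)
    theta = Pr(species present at the local station) (small scale)
    p[m]  = Pr(method m detects it, given presence)"""
    detected_given_present = prod(
        p[m] if y else (1 - p[m]) for m, y in enumerate(history))
    if any(history):
        # At least one detection: the species must be present at both scales.
        return psi * theta * detected_given_present
    # All-zero history: absent at either scale, or present but missed
    # by every method.
    return psi * (theta * detected_given_present + (1 - theta)) + (1 - psi)

# e.g. two methods (say, a camera trap and a track plate) with unequal p:
prob = history_probability((1, 0), psi=0.8, theta=0.6, p=(0.5, 0.3))
```

Summing this expression over all possible histories returns 1, and fitting maximizes the product of such terms over stations, which is how the method-specific p values are separated from the two occupancy scales.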

  19. Benchmarking wastewater treatment plants under an eco-efficiency perspective.

    PubMed

    Lorenzo-Toja, Yago; Vázquez-Rowe, Ian; Amores, María José; Termes-Rifé, Montserrat; Marín-Navarro, Desirée; Moreira, María Teresa; Feijoo, Gumersindo

    2016-10-01

    The new ISO 14045 framework is expected to slowly start shifting the definition of eco-efficiency toward a life-cycle perspective, using Life Cycle Assessment (LCA) as the environmental impact assessment method together with a system value assessment method for the economic analysis. In the present study, a set of 22 wastewater treatment plants (WWTPs) in Spain were analyzed on the basis of eco-efficiency criteria, using LCA and Life Cycle Costing (LCC) as a system value assessment method. The study is intended to be useful to decision-makers in the wastewater treatment sector, since the combined method provides an alternative scheme for analyzing the relationship between environmental impacts and costs. Two midpoint impact categories, global warming and eutrophication potential, as well as an endpoint single score indicator were used for the environmental assessment, while LCC was used for value assessment. Results demonstrated that substantial differences can be observed between different WWTPs depending on a wide range of factors such as plant configuration, plant size or even legal discharge limits. Based on these results the benchmarking of wastewater treatment facilities was performed by creating a specific classification and certification scheme. The proposed eco-label for the WWTPs rating is based on the integration of the three environmental indicators and an economic indicator calculated within the study under the new eco-efficiency framework. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. Nondestructive DNA extraction from blackflies (Diptera: Simuliidae): retaining voucher specimens for DNA barcoding projects.

    PubMed

    Hunter, Stephanie J; Goodall, Tim I; Walsh, Kerry A; Owen, Richard; Day, John C

    2008-01-01

    A nondestructive, chemical-free method is presented for the extraction of DNA from small insects. Blackflies were submerged in sterile, distilled water and sonicated for varying lengths of time to provide DNA which was assessed in terms of quantity, purity and amplification efficiency. A verified DNA barcode was produced from DNA extracted from blackfly larvae, pupae and adult specimens. A 60-second sonication period was found to release the highest quality and quantity of DNA although the amplification efficiency was found to be similar regardless of sonication time. Overall, a 66% amplification efficiency was observed. Examination of post-sonicated material confirmed retention of morphological characters. Sonication was found to be a reliable DNA extraction approach for barcoding, providing sufficient quality template for polymerase chain reaction amplification as well as retaining the voucher specimen for post-barcoding morphological evaluation. © 2007 The Authors.

  1. A combination of selected mapping and clipping to increase energy efficiency of OFDM systems

    PubMed Central

    Lee, Byung Moo; Rim, You Seung

    2017-01-01

    We propose an energy-efficient combination design for OFDM systems based on selected mapping (SLM) and clipping peak-to-average power ratio (PAPR) reduction techniques, and show the related energy efficiency (EE) performance analysis. The combination of two different PAPR reduction techniques can provide a significant benefit in increasing EE, because it takes advantage of both techniques. For the combination, we choose the clipping and SLM techniques, since the former is quite simple and effective, and the latter does not cause any signal distortion. We provide the structure and the systematic operating method, and present various analyses to derive the EE gain of the combined technique. Our analysis shows that the combined technique increases the EE by 69% compared with no PAPR reduction, and by 19.34% compared with using the SLM technique alone. PMID:29023591
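    The clipping half of the combination is simple to demonstrate. The parameters below (QPSK, N = 64 subcarriers, a 1.5x-RMS clipping threshold, and an O(N²) inverse DFT) are toy assumptions, not the paper's system model.

```python
import cmath
import math
import random

N = 64
random.seed(1)

def idft(symbols):
    """Inverse DFT producing the time-domain OFDM symbol (O(N^2) on
    purpose -- fine for a demo, an IFFT would be used in practice)."""
    n = len(symbols)
    return [sum(s * cmath.exp(2j * cmath.pi * k * t / n)
                for k, s in enumerate(symbols)) / n
            for t in range(n)]

def papr_db(signal):
    """Peak-to-average power ratio in dB."""
    powers = [abs(x) ** 2 for x in signal]
    mean = sum(powers) / len(powers)
    return 10 * math.log10(max(powers) / mean)

def clip(signal, threshold):
    """Amplitude clipping: keep each sample's phase, cap its magnitude."""
    return [x if abs(x) <= threshold else threshold * x / abs(x)
            for x in signal]

qpsk = [random.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / 2 ** 0.5
        for _ in range(N)]
tx = idft(qpsk)
rms = (sum(abs(x) ** 2 for x in tx) / N) ** 0.5
tx_clipped = clip(tx, 1.5 * rms)   # cap the envelope at 1.5x RMS amplitude
# papr_db(tx_clipped) <= papr_db(tx), at the cost of in-band distortion --
# which is why pairing clipping with distortionless SLM is attractive.
```

A lower PAPR lets the power amplifier run closer to saturation, which is where the energy-efficiency gain analyzed in the paper comes from.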

  2. New Automotive Air Conditioning System Simulation Tool Developed in MATLAB/Simulink

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kiss, T.; Chaney, L.; Meyer, J.

    Further improvements in vehicle fuel efficiency require accurate evaluation of the vehicle's transient total power requirement. When operated, the air conditioning (A/C) system is the largest auxiliary load on a vehicle; therefore, accurate evaluation of the load it places on the vehicle's engine and/or energy storage system is especially important. Vehicle simulation software, such as 'Autonomie,' has been used by OEMs to evaluate vehicles' energy performance. A transient A/C simulation tool incorporated into vehicle simulation models would also provide a tool for developing more efficient A/C systems through a thorough consideration of the transient A/C system performance. The dynamic system simulation software MATLAB/Simulink was used to develop new and more efficient vehicle energy system controls. The various modeling methods used for the new simulation tool are described in detail. Comparison with measured data is provided to demonstrate the validity of the model.

  3. Filter Media Tests Under Simulated Martian Atmospheric Conditions

    NASA Technical Reports Server (NTRS)

    Agui, Juan H.

    2016-01-01

    Human exploration of Mars will require the optimal utilization of planetary resources. One of its abundant resources is the Martian atmosphere that can be harvested through filtration and chemical processes that purify and separate it into its gaseous and elemental constituents. Effective filtration needs to be part of the suite of resource utilization technologies. A unique testing platform is being used which provides the relevant operational and instrumental capabilities to test articles under the proper simulated Martian conditions. A series of tests were conducted to assess the performance of filter media. Light sheet imaging of the particle flow provided a means of detecting and quantifying particle concentrations to determine capturing efficiencies. The media's efficiency was also evaluated by gravimetric means through a by-layer filter media configuration. These tests will help to establish techniques and methods for measuring capturing efficiency and arrestance of conventional fibrous filter media. This paper will describe initial test results on different filter media.

  4. A simple way to achieve bioinspired hybrid wettability surface with micro/nanopatterns for efficient fog collection.

    PubMed

    Yin, Kai; Du, Haifeng; Dong, Xinran; Wang, Cong; Duan, Ji-An; He, Jun

    2017-10-05

    Fog collection is receiving increasing attention for providing water in semi-arid deserts and inland areas. Inspired by the fog harvesting ability of the hydrophobic-hydrophilic surface of Namib desert beetles, we present a simple, low-cost method to prepare a hybrid superhydrophobic-hydrophilic surface. The surface contains micro/nanopatterns, and is prepared by incorporating femtosecond-laser fabricated polytetrafluoroethylene nanoparticles deposited on superhydrophobic copper mesh with a pristine hydrophilic copper sheet. The as-prepared surface exhibits enhanced fog collection efficiency compared with uniform (super)hydrophobic or (super)hydrophilic surfaces. This enhancement can be tuned by controlling the mesh number, inclination angle, and fabrication structure. Moreover, the surface shows excellent anti-corrosion ability after immersing in 1 M HCl, 1 M NaOH, and 10 wt% NaCl solutions for 2 hours. This work may provide insight into fabricating hybrid superhydrophobic-hydrophilic surfaces for efficient atmospheric water collection.

  5. Electricity End Uses, Energy Efficiency, and Distributed Energy Resources Baseline

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schwartz, Lisa; Wei, Max; Morrow, William

    This report was developed by a team of analysts at Lawrence Berkeley National Laboratory, with Argonne National Laboratory contributing the transportation section, and is a DOE EPSA product and part of a series of “baseline” reports intended to inform the second installment of the Quadrennial Energy Review (QER 1.2). QER 1.2 provides a comprehensive review of the nation’s electricity system and covers the current state and key trends related to the electricity system, including generation, transmission, distribution, grid operations and planning, and end use. The baseline reports provide an overview of elements of the electricity system. This report focuses on end uses, electricity consumption, electric energy efficiency, distributed energy resources (DERs) (such as demand response, distributed generation, and distributed storage), and evaluation, measurement, and verification (EM&V) methods for energy efficiency and DERs.

  6. Comparative study of alkylthiols and alkylamines for the phase transfer of gold nanoparticles from an aqueous phase to n-hexane.

    PubMed

    Li, Lingxiangyu; Leopold, Kerstin; Schuster, Michael

    2013-05-01

    An efficient ligand-assisted phase transfer method has been developed to transfer gold nanoparticles (Au-NPs, d: 5-25 nm) from an aqueous solution to n-hexane. Four different ligands, namely 1-dodecanethiol (DDT), 1-octadecanethiol (ODT), dodecylamine (DDA), and octadecylamine (ODA) were investigated, and DDT was found to be the most efficient ligand. It appears that the molar ratio of DDT to Au-NPs is a critical factor affecting the transfer efficiency, and 270-310 is found to be the optimum range, under which the transfer efficiency is >96%. Moreover, the DDT-assisted phase transfer can preserve the shape and size of the Au-NPs, which was confirmed by UV-vis spectra and transmission electron microscopy (TEM). Additionally, the transferred Au-NPs still can be well dispersed in the n-hexane phase and remain stable for at least 2 weeks. On the other hand, the ODT-, DDA-, and ODA-assisted phase transfer is fraught with problems related either to transfer efficiency or to NP aggregation. Overall, the DDT-assisted phase transfer of Au-NPs provides a rapid and efficient method to recover Au-NPs from an aqueous solution to n-hexane. Copyright © 2013 Elsevier Inc. All rights reserved.

  7. Antimicrobial breakpoint estimation accounting for variability in pharmacokinetics

    PubMed Central

    Bi, Goue Denis Gohore; Li, Jun; Nekka, Fahima

    2009-01-01

    Background: Pharmacokinetic and pharmacodynamic (PK/PD) indices are increasingly being used in the microbiological field to assess the efficacy of a dosing regimen. In contrast to methods using MIC, PK/PD-based methods reflect in vivo conditions and are more predictive of efficacy. Unfortunately, they entail the use of one PK-derived value such as AUC or Cmax and may thus lead to biased efficiency information when the variability is large. The aim of the present work was to evaluate the efficacy of a treatment by adjusting classical breakpoint estimation methods to the situation of variable PK profiles.

    Methods and results: We propose a logical generalisation of the usual AUC methods by introducing the concept of "efficiency" for a PK profile, which involves the efficacy function as a weight. We formulated these methods for both classes of concentration- and time-dependent antibiotics. Using drug models and in silico approaches, we provide a theoretical basis for characterizing the efficiency of a PK profile under in vivo conditions. We also used the particular case of variable drug intake to assess the effect of the variable PK profiles generated and to analyse the implications for breakpoint estimation.

    Conclusion: Compared to traditional methods, our weighted AUC approach gives a more powerful PK/PD link and reveals, through examples, interesting issues about the uniqueness of therapeutic outcome indices and antibiotic resistance problems. PMID:19558679
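    The "efficiency" of a PK profile, an efficacy-weighted AUC, can be sketched numerically. The one-compartment profile and Emax weight below use invented parameters, not the paper's drug models.

```python
import math

def concentration(t, dose=500, ka=1.0, ke=0.2, vd=30.0):
    """One-compartment model with first-order absorption (mg/L);
    all parameter values are hypothetical."""
    return dose * ka / (vd * (ka - ke)) * (math.exp(-ke * t) - math.exp(-ka * t))

def efficacy(c, emax=1.0, ec50=4.0):
    """Emax weight translating a concentration into instantaneous effect."""
    return emax * c / (ec50 + c)

def efficiency(t_end=24.0, dt=0.01):
    """Efficacy-weighted AUC over one dosing interval: instead of
    integrating the concentration itself (plain AUC), integrate the
    efficacy function evaluated along the profile (trapezoidal rule)."""
    ts = [i * dt for i in range(int(t_end / dt) + 1)]
    ws = [efficacy(concentration(t)) for t in ts]
    return sum((a + b) / 2 * dt for a, b in zip(ws, ws[1:]))
```

Unlike a single summary value such as AUC or Cmax, this weighted integral distinguishes two profiles with equal AUC but different shapes, which is the point of scoring whole variable PK profiles rather than one derived number.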

  8. Efficiency analysis of diffusion on T-fractals in the sense of random walks.

    PubMed

    Peng, Junhao; Xu, Guoai

    2014-04-07

    Efficiently controlling the diffusion process is crucial in the study of diffusion problems in complex systems. In the sense of random walks with a single trap, mean trapping time (MTT) and mean diffusing time (MDT) are good measures of trapping efficiency and diffusion efficiency, respectively. They both vary with the location of the node. In this paper, we analyze the effects of a node's location on the trapping efficiency and diffusion efficiency of T-fractals, measured by the MTT and the MDT. First, we provide methods to calculate the MTT for any target node and the MDT for any source node of T-fractals. The methods can also be used to calculate the mean first-passage time between any pair of nodes. Then, using the MTT and the MDT as the measures of trapping efficiency and diffusion efficiency, respectively, we compare the trapping efficiency and diffusion efficiency among all nodes of the T-fractal and find the best (or worst) trapping sites and the best (or worst) diffusing sites. Our results show that the hub node of the T-fractal is the best trapping site but also the worst diffusing site, and that the three boundary nodes are the worst trapping sites but also the best diffusing sites. Comparing the maximum of the MTT and the MDT with their minimums, we find that the maximum of the MTT is almost 6 times its minimum, whereas the maximum of the MDT is almost equal to its minimum. Thus, the location of the target node has a large effect on trapping efficiency, but the location of the source node has almost no effect on diffusion efficiency. We also simulated random walks on T-fractals; the results are consistent with the derived results.
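    At its core, the MTT of a target node comes from the hitting-time equations of the walk. A minimal pure-Python sketch that solves those equations exactly for any small undirected graph is shown below; this is a generic linear-algebra approach for illustration, not the paper's closed-form derivation for T-fractals:

    ```python
    from fractions import Fraction

    def mean_trapping_times(adj, trap):
        """Exact mean first-passage times to a single trap for an unbiased
        random walk on an undirected graph given as an adjacency dict.
        Solves h(v) = 1 + (1/deg(v)) * sum_{u ~ v} h(u), with h(trap) = 0,
        by Gauss-Jordan elimination over the rationals."""
        nodes = [v for v in adj if v != trap]
        idx = {v: i for i, v in enumerate(nodes)}
        n = len(nodes)
        # Build the linear system A h = b from the hitting-time equations.
        A = [[Fraction(0)] * n for _ in range(n)]
        b = [Fraction(1)] * n
        for v in nodes:
            i = idx[v]
            A[i][i] = Fraction(1)
            for u in adj[v]:
                if u != trap:
                    A[i][idx[u]] -= Fraction(1, len(adj[v]))
        # Exact elimination (no rounding error thanks to Fraction).
        for col in range(n):
            piv = next(r for r in range(col, n) if A[r][col] != 0)
            A[col], A[piv] = A[piv], A[col]
            b[col], b[piv] = b[piv], b[col]
            for r in range(n):
                if r != col and A[r][col] != 0:
                    f = A[r][col] / A[col][col]
                    for c in range(col, n):
                        A[r][c] -= f * A[col][c]
                    b[r] -= f * b[col]
        return {v: b[idx[v]] / A[idx[v]][idx[v]] for v in nodes}
    ```

    On the 3-node path graph 0-1-2 with the trap at node 0, the equations give h(1) = 3 and h(2) = 4, which the solver reproduces; averaging h over all non-trap nodes gives the MTT of the trap.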

  9. Efficient organ localization using multi-label convolutional neural networks in thorax-abdomen CT scans

    NASA Astrophysics Data System (ADS)

    Efrain Humpire-Mamani, Gabriel; Arindra Adiyoso Setio, Arnaud; van Ginneken, Bram; Jacobs, Colin

    2018-04-01

    Automatic localization of organs and other structures in medical images is an important preprocessing step that can improve and speed up other algorithms such as organ segmentation, lesion detection, and registration. This work presents an efficient method for simultaneous localization of multiple structures in 3D thorax-abdomen CT scans. Our approach predicts the location of multiple structures using a single multi-label convolutional neural network for each orthogonal view. Each network takes extra slices around the current slice as input to provide extra context. A sigmoid layer is used to perform multi-label classification. The output of the three networks is subsequently combined to compute a 3D bounding box for each structure. We used our approach to locate 11 structures of interest. The neural network was trained and evaluated on a large set of 1884 thorax-abdomen CT scans from patients undergoing oncological workup. Reference bounding boxes were annotated by human observers. The performance of our method was evaluated by computing the wall distance to the reference bounding boxes. The bounding boxes annotated by the first human observer were used as the reference standard for the test set. Using the best configuration, we obtained an average wall distance of 3.20 ± 7.33 mm in the test set. The second human observer achieved 1.23 ± 3.39 mm. For all structures, the results were better than those reported in previously published studies. In conclusion, we proposed an efficient method for the accurate localization of multiple organs. Our method uses multiple slices as input to provide more context around the slice under analysis, and we have shown that this improves performance. This method can easily be adapted to handle more organs.
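    The fusion step described in this record, combining per-view slice classifications into a 3D bounding box, can be illustrated with a simplified sketch. The fusion rule below (taking the first and last positively classified slice along each axis) is an assumption for illustration, not necessarily the paper's exact procedure:

    ```python
    def box_from_slice_predictions(axis_presence):
        """Combine per-slice binary detections along the three orthogonal
        axes into a 3D bounding box. axis_presence maps an axis name
        ('z', 'y', or 'x') to a list of booleans, one per slice, saying
        whether the structure was classified as present in that slice.
        Returns {axis: (first_slice, last_slice)}, or None if the
        structure was never detected along some axis."""
        box = {}
        for axis, presence in axis_presence.items():
            hits = [i for i, p in enumerate(presence) if p]
            if not hits:
                return None  # structure not detected along this axis
            box[axis] = (min(hits), max(hits))
        return box
    ```

    In practice one would threshold the sigmoid outputs per slice before this step; the wall distance metric mentioned in the abstract then measures how far each of the six resulting box faces lies from the reference annotation.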

  10. A highly efficient bead extraction technique with low bead number for digital microfluidic immunoassay

    PubMed Central

    Tsai, Po-Yen; Lee, I-Chin; Hsu, Hsin-Yun; Huang, Hong-Yuan; Fan, Shih-Kang; Liu, Cheng-Hsien

    2016-01-01

    Here, we describe a technique to manipulate a low number of beads to achieve high washing efficiency with zero bead loss in the washing process of a digital microfluidic (DMF) immunoassay. Previously, two magnetic bead extraction methods were reported for the DMF platform: (1) the single-side electrowetting method and (2) the double-side electrowetting method. The first approach provides high washing efficiency but requires a large number of beads. The second approach reduces the required number of beads but is inefficient when multiple washes are required. More importantly, bead loss during the washing process was unavoidable in both methods. Here, an improved double-side electrowetting method is proposed for bead extraction, utilizing a series of unequal electrodes. It is shown that, with the proper electrode size ratio, only one wash step is required to achieve a 98% washing rate without any bead loss for fewer than 100 beads in a droplet. This allows the DMF immunoassay to use only about 25 magnetic beads, effectively increasing the number of captured analytes on each bead. In our human soluble tumor necrosis factor receptor I (sTNF-RI) model immunoassay, the experimental results show that, compared with our previous results without the proposed bead extraction technique, the immunoassay with a low bead number significantly enhances the fluorescence signal, providing a better limit of detection (3.14 pg/ml) with smaller reagent volumes (200 nl) and shorter analysis time (<1 h). This improved bead extraction technique can be used not only in DMF immunoassays but also in any other bead-based DMF system for different applications. PMID:26858807

  11. Computational modeling of magnetic particle margination within blood flow through LAMMPS

    NASA Astrophysics Data System (ADS)

    Ye, Huilin; Shen, Zhiqiang; Li, Ying

    2017-11-01

    We develop a multiscale and multiphysics computational method to investigate the transport of magnetic particles as drug carriers in blood flow under the influence of hydrodynamic interaction and an external magnetic field. A hybrid coupling method is proposed to handle the red blood cell (RBC)-fluid interface (CFI) and the magnetic particle-fluid interface (PFI), respectively. Immersed boundary method (IBM)-based velocity coupling is used to account for the CFI, validated by the tank-treading and tumbling behaviors of a single RBC in simple shear flow. The PFI is captured by IBM-based force coupling, verified through the movement of a single magnetic particle under a non-uniform external magnetic field and the breakup of a magnetic chain in a rotating magnetic field. These two components are seamlessly integrated within the LAMMPS framework, a highly parallelized molecular dynamics solver. In addition, we implement a parallelized lattice Boltzmann simulator within LAMMPS to handle the fluid flow simulation. Based on the proposed method, we explore the margination behaviors of magnetic particles and magnetic chains within blood flow. We find that the external magnetic field can be used to guide the motion of these magnetic materials and promote their margination to the vascular wall region. Moreover, scaling performance and speedup tests further confirm the high efficiency and robustness of the proposed computational method. It therefore provides an efficient way to simulate the transport of nanoparticle-based drug carriers within blood flow at large scale. The simulation results can be applied in the design of efficient drug delivery vehicles that optimally accumulate within diseased tissue, thus providing better imaging sensitivity, therapeutic efficacy, and lower toxicity.

  12. A targeted metabolomic protocol for short-chain fatty acids and branched-chain amino acids

    PubMed Central

    Zheng, Xiaojiao; Qiu, Yunping; Zhong, Wei; Baxter, Sarah; Su, Mingming; Li, Qiong; Xie, Guoxiang; Ore, Brandon M.; Qiao, Shanlei; Spencer, Melanie D.; Zeisel, Steven H.; Zhou, Zhanxiang; Zhao, Aihua; Jia, Wei

    2013-01-01

    Research in obesity and metabolic disorders that involve the intestinal microbiota demands reliable methods for the precise measurement of short-chain fatty acid (SCFA) and branched-chain amino acid (BCAA) concentrations. Here, we report a rapid method for simultaneously determining SCFAs and BCAAs in biological samples using propyl chloroformate (PCF) derivatization followed by gas chromatography-mass spectrometry (GC-MS) analysis. A one-step derivatization using 100 µL of PCF in a reaction system of water, propanol, and pyridine (v/v/v = 8:3:2) at pH 8 provided the optimal derivatization efficiency. The best extraction efficiency of the derivatized products was achieved by a two-step extraction with hexane. The method exhibited good derivatization efficiency and recovery over a wide range of concentrations, with a low limit of detection for each compound. The relative standard deviations (RSDs) of all targeted compounds showed good intra- and inter-day (within 7 days) precision (<10%) and good stability (<20%) within 4 days at room temperature (23-25 °C), or 7 days when stored at -20 °C. We applied our method to measure SCFA and BCAA levels in fecal samples from rats administered different diets. Both univariate and multivariate statistical analyses of the concentrations of these target metabolites could differentiate three groups receiving ethanol intervention and different dietary oils. This method was also successfully employed to determine SCFA and BCAA levels in feces, plasma, and urine from healthy humans, providing important baseline information on the concentrations of these metabolites. This metabolic profiling method has great potential for translational research. PMID:23997757

  13. A game theory-reinforcement learning (GT-RL) method to develop optimal operation policies for multi-operator reservoir systems

    NASA Astrophysics Data System (ADS)

    Madani, Kaveh; Hooshyar, Milad

    2014-11-01

    Reservoir systems with multiple operators can benefit from coordination of operation policies. To maximize the total benefit of these systems, the literature has normally used the social planner's approach, in which operation decisions are optimized using a multi-objective optimization model with a compound system objective. While the utility of the system can be increased this way, fair allocation of benefits among the operators remains challenging for the social planner, who has to assign controversial weights to the system's beneficiaries and their objectives. Cooperative game theory provides an alternative framework for fair and efficient allocation of the incremental benefits of cooperation. To determine the fair and efficient utility shares of the beneficiaries, cooperative game theory solution methods consider the gains of each party in the status quo (non-cooperation) as well as what can be gained through the grand coalition (the social planner's solution, or full cooperation) and partial coalitions. Nevertheless, estimating the benefits of different coalitions can be challenging in complex multi-beneficiary systems. Reinforcement learning can be used to address this challenge and determine the gains of the beneficiaries for different levels of cooperation, i.e., non-cooperation, partial cooperation, and full cooperation, providing the essential input for allocation based on cooperative game theory. This paper develops a game theory-reinforcement learning (GT-RL) method for determining the optimal operation policies in multi-operator multi-reservoir systems with respect to fairness and efficiency criteria. As a first step to underline the utility of the GT-RL method in solving complex multi-agent multi-reservoir problems without the need to develop compound objectives and weight assignments, the proposed method is applied to a hypothetical three-agent three-reservoir system.
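    The cooperative-game allocation step can be illustrated with the Shapley value, one standard solution method of the kind this record mentions (the paper may use a different allocation rule). A minimal sketch, taking coalition values as given:

    ```python
    from itertools import combinations
    from math import factorial

    def shapley_values(players, v):
        """Shapley allocation of the grand-coalition value, given a
        characteristic function v mapping frozensets of players to
        coalition payoffs (e.g. benefits estimated by reinforcement
        learning for each level of cooperation)."""
        n = len(players)
        phi = {}
        for p in players:
            others = [q for q in players if q != p]
            total = 0.0
            for k in range(len(others) + 1):
                for coal in combinations(others, k):
                    s = frozenset(coal)
                    # Probability that p joins exactly after coalition s
                    # in a uniformly random ordering of the players.
                    weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                    total += weight * (v[s | {p}] - v[s])
            phi[p] = total
        return phi
    ```

    For a two-operator toy game with stand-alone values 1 and 2 and a grand-coalition value of 4, the incremental benefit of cooperation (1 unit) is split equally, giving shares 1.5 and 2.5; with more operators, the partial-coalition values (which the abstract notes are hard to estimate, motivating the RL component) enter through the inner sum.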

  14. Development of a dry actuation conducting polymer actuator for micro-optical zoom lenses

    NASA Astrophysics Data System (ADS)

    Kim, Baek-Chul; Kim, Hyunseok; Nguyen, H. C.; Cho, M. S.; Lee, Y.; Nam, Jae-Do; Choi, Hyouk Ryeol; Koo, J. C.; Jeong, H.-S.

    2008-03-01

    The objective of the present work is to demonstrate the efficiency and feasibility of an NBR (nitrile butadiene rubber)-based conducting polymer actuator fabricated into a micro zoom lens driver. Unlike traditional conducting polymers, which normally operate in a liquid, the proposed actuator provides fairly effective driving performance for the zoom lens system in a dry environment. The paper also includes experimental results on efficiency improvement, which showed the actuator to be efficient in a micro-optical zoom lens system. In addition, the actuator design method developed here was taken into account in designing the system.

  15. The status of silicon ribbon growth technology for high-efficiency silicon solar cells

    NASA Technical Reports Server (NTRS)

    Ciszek, T. F.

    1985-01-01

    More than a dozen methods have been applied to the growth of silicon ribbons, beginning as early as 1963. The ribbon geometry has been particularly intriguing for photovoltaic applications because it might provide large-area, damage-free, nearly continuous substrates without the material loss or cost of ingot wafering. In general, the efficiency of silicon ribbon solar cells has been lower than that of ingot cells. The status of the ribbon growth techniques that have achieved laboratory efficiencies greater than 13.5% is reviewed: edge-defined, film-fed growth (EFG), edge-supported pulling (ESP), ribbon against a drop (RAD), and dendritic web growth (web).

  16. Non-nucleoside building blocks for copper-assisted and copper-free click chemistry for the efficient synthesis of RNA conjugates.

    PubMed

    Jayaprakash, K N; Peng, Chang Geng; Butler, David; Varghese, Jos P; Maier, Martin A; Rajeev, Kallanthottathil G; Manoharan, Muthiah

    2010-12-03

    Novel non-nucleoside alkyne monomers compatible with oligonucleotide synthesis were designed, synthesized, and efficiently incorporated into RNA and RNA analogues during solid-phase synthesis. These modifications allowed site-specific conjugation of ligands to the RNA oligonucleotides through copper-assisted (CuAAC) and copper-free strain-promoted azide-alkyne cycloaddition (SPAAC) reactions. The SPAAC click reactions of cyclooctyne-oligonucleotides with various classes of azido-functionalized ligands in solution phase and on solid phase were efficient and quantitative and occurred under mild reaction conditions. The SPAAC reaction provides a method for the synthesis of oligonucleotide-ligand conjugates uncontaminated with copper ions.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, Bo; Abdelaziz, Omar; Shrestha, Som S.

    Based on the FY16 laboratory investigation of low-GWP alternatives to R-22 and R-410A in two baseline rooftop air conditioners (RTUs), we used the DOE/ORNL Heat Pump Design Model to model the two RTUs and calibrated the models against the experimental data. Using the calibrated equipment models, we compared the compressor efficiencies and heat exchanger performances. An efficiency-based compressor mapping method was developed that accurately predicts compressor performance with the alternative low-GWP refrigerants. Extensive model-based optimizations were conducted to provide a fair comparison among all the low-GWP candidates by selecting their preferred configurations at the same cooling capacity and compressor efficiencies.

  18. Efficient quantum circuits for dense circulant and circulant like operators

    PubMed Central

    Zhou, S. S.

    2017-01-01

    Circulant matrices are an important family of operators, which have a wide range of applications in science and engineering-related fields. They are, in general, non-sparse and non-unitary. In this paper, we present efficient quantum circuits to implement circulant operators using fewer resources and with lower complexity than existing methods. Moreover, our quantum circuits can be readily extended to the implementation of Toeplitz, Hankel and block circulant matrices. Efficient quantum algorithms to implement the inverses and products of circulant operators are also provided, and an example application in solving the equation of motion for cyclic systems is discussed. PMID:28572988
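    The classical fact underlying such circuits is that every circulant matrix is diagonalized by the discrete Fourier transform, so a circulant matrix-vector product reduces to elementwise multiplication in Fourier space. A small pure-Python illustration of that identity follows (a classical computation, not a quantum circuit):

    ```python
    import cmath

    def dft(x, inverse=False):
        """Naive O(n^2) discrete Fourier transform (pure Python)."""
        n = len(x)
        sign = 1 if inverse else -1
        out = [sum(x[j] * cmath.exp(sign * 2j * cmath.pi * j * k / n)
                   for j in range(n)) for k in range(n)]
        return [v / n for v in out] if inverse else out

    def circulant_matvec(c, x):
        """Multiply the circulant matrix with first column c by the
        vector x via the diagonalization C = F^{-1} diag(F c) F,
        i.e. the circular convolution theorem."""
        fc, fx = dft(c), dft(x)
        return dft([a * b for a, b in zip(fc, fx)], inverse=True)
    ```

    Applying the matrix to a unit basis vector recovers the corresponding (cyclically shifted) column of C, which is a quick sanity check on the identity; on a quantum computer the same structure lets the diagonal factor be implemented between two quantum Fourier transforms.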

  19. Density Functional Theory Calculations of the Role of Defects in Amorphous Silicon Solar Cells

    NASA Astrophysics Data System (ADS)

    Johlin, Eric; Wagner, Lucas; Buonassisi, Tonio; Grossman, Jeffrey C.

    2010-03-01

    Amorphous silicon holds promise as a cheap and efficient material for thin-film photovoltaic devices. However, current device efficiencies are severely limited by the low mobility of holes in the bulk amorphous silicon material, the cause of which is not yet fully understood. This work employs a statistical analysis of density functional theory calculations to uncover the implications of a range of defects (including internal strain and substitution impurities) on the trapping and mobility of holes, and thereby also on the total conversion efficiency. We investigate the root causes of this low mobility and attempt to provide suggestions for simple methods of improving this property.

  20. Regenerative braking system of PM synchronous motor

    NASA Astrophysics Data System (ADS)

    Gao, Qian; Lv, Chengxing; Zhao, Na; Zang, Hechao; Jiang, Huilue; Zhang, Zhaowen; Zhang, Fengli

    2018-04-01

    The permanent-magnet synchronous motor (PMSM) is widely adopted in many fields owing to its high efficiency and high torque density. Regenerative braking systems (RBS) provide an efficient way to help PMSM systems achieve better fuel economy and lower exhaust emissions. This paper describes the design and testing of a regenerative braking system for a PMSM. The PWM duty cycle is adjusted to control regenerative braking of the PMSM using an energy controller based on the port-controlled Hamiltonian model. The simulation analysis indicates that smooth control can be realized, and that the highest efficiency and the smallest current ripple can be achieved by the regenerative braking system.
