Sample records for efficient analysis methods

  1. Parallel scalability and efficiency of vortex particle method for aeroelasticity analysis of bluff bodies

    NASA Astrophysics Data System (ADS)

    Tolba, Khaled Ibrahim; Morgenthal, Guido

    2018-01-01

    This paper presents an analysis of the scalability and efficiency of a simulation framework based on the vortex particle method. The code is applied to the numerical aerodynamic analysis of line-like structures, and runs on multicore CPU and GPU architectures using the OpenCL framework. The focus of this paper is the analysis of the parallel efficiency and scalability of the method applied to an engineering test case, specifically the aeroelastic response of a long-span bridge girder at the construction stage. The target is to assess the optimal configuration and the required computer architecture, such that it becomes feasible to utilise the method efficiently within the computational resources available to a regular engineering office. The simulations and the scalability analysis are performed on a regular gaming-type computer.
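
The parallel metrics at issue here are the standard strong-scaling definitions: speedup S_p = T_1/T_p and efficiency E_p = S_p/p. A minimal sketch, using hypothetical wall-clock timings rather than the paper's measurements:

```python
# Strong-scaling speedup and parallel efficiency from timing data.
# (Hypothetical timings for illustration, not data from the paper.)
timings = {1: 1200.0, 2: 640.0, 4: 340.0, 8: 190.0}  # cores -> seconds

t1 = timings[1]
speedup = {p: t1 / tp for p, tp in timings.items()}
efficiency = {p: speedup[p] / p for p in timings}

for p in sorted(timings):
    print(f"{p} cores: speedup {speedup[p]:.2f}, efficiency {efficiency[p]:.2f}")
```

An efficiency near 1.0 indicates the extra cores are fully utilized; the drop-off with core count is what such scalability studies quantify.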

  2. Comparative analysis of quantitative efficiency evaluation methods for transportation networks

    PubMed Central

    He, Yuxin; Hong, Jian

    2017-01-01

    An effective evaluation of transportation network efficiency could offer guidance for the optimal control of urban traffic. Based on the introduction and related mathematical analysis of three quantitative evaluation methods for transportation network efficiency, this paper compares the information measured by them, including network structure, traffic demand, travel choice behavior, and other factors that affect network efficiency. Accordingly, the applicability of the various evaluation methods is discussed. Analysis of different transportation network examples shows that the Q-H method reflects well the influence of network structure, traffic demand, and user route choice behavior on transportation network efficiency. In addition, the transportation network efficiency measured by this method and Braess’s Paradox can be used to explain each other, indicating a better evaluation of the real operating condition of a transportation network. From the analysis of the network efficiency calculated by the Q-H method, it can also be concluded that a specific appropriate demand exists for a given transportation network. Meanwhile, under fixed demand, both the critical network structure that guarantees the stability and basic operation of the network and the specific network structure yielding the largest transportation network efficiency can be identified. PMID:28399165

  3. Comparative analysis of quantitative efficiency evaluation methods for transportation networks.

    PubMed

    He, Yuxin; Qin, Jin; Hong, Jian

    2017-01-01

    An effective evaluation of transportation network efficiency could offer guidance for the optimal control of urban traffic. Based on the introduction and related mathematical analysis of three quantitative evaluation methods for transportation network efficiency, this paper compares the information measured by them, including network structure, traffic demand, travel choice behavior, and other factors that affect network efficiency. Accordingly, the applicability of the various evaluation methods is discussed. Analysis of different transportation network examples shows that the Q-H method reflects well the influence of network structure, traffic demand, and user route choice behavior on transportation network efficiency. In addition, the transportation network efficiency measured by this method and Braess's Paradox can be used to explain each other, indicating a better evaluation of the real operating condition of a transportation network. From the analysis of the network efficiency calculated by the Q-H method, it can also be concluded that a specific appropriate demand exists for a given transportation network. Meanwhile, under fixed demand, both the critical network structure that guarantees the stability and basic operation of the network and the specific network structure yielding the largest transportation network efficiency can be identified.
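
The abstract does not spell out the Q-H measure, but a common quantitative baseline for network efficiency is the Latora-Marchiori global efficiency: the average inverse shortest-path length over ordered node pairs. A minimal sketch of that structural measure (it omits the demand and route-choice terms the paper folds in):

```python
import itertools

# Global network efficiency: E = (1/(n(n-1))) * sum over ordered pairs (i, j)
# of 1/d_ij, where d_ij is the shortest-path length. Structure-only sketch.

def shortest_paths(adj):
    """All-pairs shortest-path lengths by BFS on an unweighted graph."""
    dist = {}
    for src in adj:
        seen = {src: 0}
        frontier = [src]
        while frontier:
            nxt = []
            for u in frontier:
                for v in adj[u]:
                    if v not in seen:
                        seen[v] = seen[u] + 1
                        nxt.append(v)
            frontier = nxt
        dist[src] = seen
    return dist

def global_efficiency(adj):
    n = len(adj)
    dist = shortest_paths(adj)
    total = sum(1.0 / dist[i][j]
                for i, j in itertools.permutations(adj, 2)
                if j in dist[i])  # unreachable pairs contribute 0
    return total / (n * (n - 1))

# 4-node ring network: neighbors at distance 1, opposite node at distance 2
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(global_efficiency(ring))
```

For the 4-node ring each node reaches two neighbors at distance 1 and one node at distance 2, giving E = 10/12 ≈ 0.83; a complete graph scores exactly 1.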

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    The report is an overview of electric energy efficiency programs. It takes a concise look at what states are doing to encourage energy efficiency and how it impacts electric utilities. Energy efficiency programs began to be offered by utilities as a response to the energy crises of the 1970s. These regulatory-driven programs peaked in the early 1990s and then tapered off as deregulation took hold. Today, rising electricity prices, environmental concerns, and national security issues have renewed interest in increasing energy efficiency as an alternative to additional supply. In response, new methods for administering, managing, and delivering energy efficiency programs are being implemented. Topics covered in the report include: analysis of the benefits of energy efficiency and key methods for achieving energy efficiency; evaluation of the business drivers spurring increased energy efficiency; discussion of the major barriers to expanding energy efficiency programs; evaluation of the economic impacts of energy efficiency; discussion of the history of electric utility energy efficiency efforts; analysis of the impact of energy efficiency on utility profits and methods for protecting profitability; discussion of non-utility management of energy efficiency programs; evaluation of major methods to spur energy efficiency (systems benefit charges, resource planning, and resource standards); and analysis of the alternatives for encouraging customer participation in energy efficiency programs.

  5. The efficiency of parameter estimation of latent path analysis using summated rating scale (SRS) and method of successive interval (MSI) for transformation of score to scale

    NASA Astrophysics Data System (ADS)

    Solimun; Fernandes, Adji Achmad Rinaldo; Arisoesilaningsih, Endang

    2017-12-01

    Research in various fields generally investigates systems that involve latent variables. One method for analyzing a model representing such a system is path analysis. Latent variables measured with questionnaires using an attitude scale model yield data in the form of scores, which must be transformed into scale data before analysis. Path coefficients, the parameter estimates, are calculated from scale data obtained by the method of successive interval (MSI) and the summated rating scale (SRS). This research identifies which transformation method is better: path coefficients with smaller variances are more efficient, so the transformation method whose scale data yield path coefficients (parameter estimates) with smaller variances is judged better. Analysis of real data shows that for the influence of the Attitude variable on Entrepreneurship Intention, the relative efficiency is ER = 1, indicating that analyses using the MSI and SRS transformations are equally efficient. For simulated data with high correlation between items (0.7-0.9), however, the MSI method is 1.3 times more efficient than the SRS method.
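
The comparison criterion here is relative efficiency, the ratio of the variances of two estimators of the same path coefficient. A minimal sketch with hypothetical replicate estimates (not the study's data):

```python
import statistics

# Relative efficiency of two estimators of the same path coefficient:
# ER = Var(SRS estimator) / Var(MSI estimator); ER > 1 favors MSI.
# Hypothetical replicate path-coefficient estimates for illustration.
msi_estimates = [0.41, 0.43, 0.40, 0.42, 0.44]
srs_estimates = [0.39, 0.45, 0.38, 0.46, 0.42]

er = statistics.variance(srs_estimates) / statistics.variance(msi_estimates)
print(f"ER = {er:.2f}")  # ER = 1 would mean the two methods are equally efficient
```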

  6. Design of A Cyclone Separator Using Approximation Method

    NASA Astrophysics Data System (ADS)

    Sin, Bong-Su; Choi, Ji-Won; Lee, Kwon-Hee

    2017-12-01

    A separator is a device installed in industrial applications to separate mixed substances. The separator of interest in this research is a cyclone type, used to separate a steam-brine mixture in a geothermal plant. The most important performance measure of a cyclone separator is its collection efficiency. The collection efficiency in this study is predicted by CFD (Computational Fluid Dynamics) analysis. This research defines six shape design variables to maximize the collection efficiency, which is set up as the objective function in the optimization process. Since each CFD analysis requires substantial calculation time, it is impractical to obtain the optimal solution by directly coupling the analysis with a gradient-based optimization algorithm. Thus, two approximation methods are introduced to obtain an optimum design. In this process, an L18 orthogonal array is adopted as the DOE method, and the kriging interpolation method is adopted to generate the metamodel for the collection efficiency. Based on the 18 analysis results, the relative importance of each variable to the collection efficiency is obtained through ANOVA (analysis of variance). The final design is suggested considering the results obtained from the two optimization methods. The fluid flow analysis of the cyclone separator is conducted using the commercial CFD software ANSYS-CFX.

  7. Evaluation of Saltzman and phenoldisulfonic acid methods for determining NOx in engine exhaust gases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Groth, R.H.; Calabro, D.S.

    1969-11-01

    The two methods normally used for the analysis of NOx are the Saltzman and the phenoldisulfonic acid techniques. This paper describes an evaluation of these wet chemical methods to determine their practical application to engine exhaust gas analysis. Parameters considered for the Saltzman method included bubbler collection efficiency, NO-to-NO2 conversion efficiency, the masking effect of other contaminants usually present in exhaust gases, and the time-temperature effect of these contaminants on stored developed solutions. Collection efficiency and the effects of contaminants were also considered for the phenoldisulfonic acid method. Test results indicated satisfactory collection and conversion efficiencies for the Saltzman method, but contaminants seriously affected the measurement accuracy, particularly if the developed solution was stored for a number of hours at room temperature before analysis. Storage at 32°F minimized this effect. The standard procedure for the phenoldisulfonic acid method gave good results, but the process was found to be too time consuming for routine analysis and measured only total NOx. 3 references, 9 tables.

  8. Combustor kinetic energy efficiency analysis of the hypersonic research engine data

    NASA Astrophysics Data System (ADS)

    Hoose, K. V.

    1993-11-01

    A one-dimensional method for measuring combustor performance is needed to facilitate the design and development of scramjet engines. A one-dimensional kinetic energy efficiency method is already used for measuring inlet and nozzle performance. The objective of this investigation was to assess the use of kinetic energy efficiency as an indicator of scramjet combustor performance. A combustor kinetic energy efficiency analysis was performed on the Hypersonic Research Engine (HRE) data, chosen for this analysis because of its thorough documentation and availability. The combustor, inlet, and nozzle kinetic energy efficiency values were used to determine an overall engine kinetic energy efficiency. Finally, a kinetic energy effectiveness method was developed to eliminate thermochemical losses from the combustion of fuel and air. All calculated values exhibit consistency over the flight speed range. Effects of fuel injection, altitude, angle of attack, subsonic-supersonic combustion transition, and inlet spike position are shown and discussed. The results of analyzing the HRE data indicate that the kinetic energy efficiency method is effective as a measure of scramjet combustor performance.
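
Kinetic energy efficiency is conventionally the ratio of a stream's actual kinetic energy to the kinetic energy of an ideal (isentropic) expansion to the same pressure, i.e. the square of the velocity ratio. A minimal sketch with hypothetical velocities, not HRE data:

```python
# Kinetic energy efficiency as commonly defined for inlets and nozzles:
# eta_KE = (actual exit velocity)^2 / (ideal isentropic exit velocity)^2.
# Hypothetical numbers for illustration only.
v_actual = 2850.0   # m/s, actual exit velocity
v_ideal = 3000.0    # m/s, isentropic expansion to the same pressure

eta_ke = (v_actual / v_ideal) ** 2
print(f"eta_KE = {eta_ke:.4f}")
```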

  9. Data Envelopment Analysis: Measurement of Educational Efficiency in Texas

    ERIC Educational Resources Information Center

    Carter, Lacy

    2012-01-01

    The purpose of this study was to examine the efficiency of Texas public school districts through Data Envelopment Analysis. The Data Envelopment Analysis estimation method calculated and assigned efficiency scores to each of the 931 school districts considered in the study. The efficiency scores were utilized in two phases. First, the school…
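
In the general case DEA solves one linear program per decision-making unit, but in the single-input, single-output CCR case the efficiency score reduces to each district's output/input ratio divided by the best ratio. A minimal sketch with hypothetical district data, not the Texas data:

```python
# DEA efficiency in the single-input, single-output CCR case, where the
# linear program reduces to a ratio against the best-performing district.
# (Hypothetical data; multiple inputs/outputs require solving one LP per DMU.)
spending = [10.0, 20.0, 30.0]        # input, e.g. per-pupil spending
achievement = [10.0, 40.0, 30.0]     # output, e.g. aggregate test performance

ratios = [y / x for x, y in zip(spending, achievement)]
best = max(ratios)
scores = [r / best for r in ratios]
print(scores)  # a score of 1.0 marks the efficient frontier
```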

  10. Optimisation and validation of a rapid and efficient microemulsion liquid chromatographic (MELC) method for the determination of paracetamol (acetaminophen) content in a suppository formulation.

    PubMed

    McEvoy, Eamon; Donegan, Sheila; Power, Joe; Altria, Kevin

    2007-05-09

    A rapid and efficient oil-in-water microemulsion liquid chromatographic method has been optimised and validated for the analysis of paracetamol in a suppository formulation. Excellent linearity, accuracy, precision, and assay results were obtained. Lengthy sample pre-treatment/extraction procedures were eliminated owing to the solubilising power of the microemulsion, and rapid analysis times were achieved. The method was optimised to achieve rapid analysis times and relatively high peak efficiencies. A standard microemulsion composition of 33 g SDS, 66 g butan-1-ol, and 8 g n-octane in 1 L of 0.05% TFA, modified with acetonitrile, has been shown to be suitable for the rapid analysis of paracetamol in highly hydrophobic preparations under isocratic conditions. Validated assay results and the overall analysis time of the optimised method were compared with British Pharmacopoeia reference methods. Sample preparation and analysis times for the MELC analysis of paracetamol in a suppository were extremely rapid compared to the reference method, and similar assay results were achieved. A gradient MELC method using the same microemulsion has been optimised for the resolution of paracetamol and five of its related substances in approximately 7 min.

  11. A common base method for analysis of qPCR data and the application of simple blocking in qPCR experiments.

    PubMed

    Ganger, Michael T; Dietz, Geoffrey D; Ewing, Sarah J

    2017-12-01

    qPCR has established itself as the technique of choice for the quantification of gene expression. Procedures for conducting qPCR have received significant attention; however, more rigorous approaches to the statistical analysis of qPCR data are needed. Here we develop a mathematical model, termed the Common Base Method, for analysis of qPCR data based on threshold cycle values (Cq) and efficiencies of reactions (E). The Common Base Method keeps all calculations in the log scale as long as possible by working with log10(E) · Cq, which we call the efficiency-weighted Cq value; subsequent statistical analyses are then applied in the log scale. We show how efficiency-weighted Cq values may be analyzed using a simple paired or unpaired experimental design and develop blocking methods to help reduce unexplained variation. The Common Base Method has several advantages. It allows for the incorporation of well-specific efficiencies and multiple reference genes. The method does not necessitate the pairing of samples that must be performed using traditional analysis methods in order to calculate relative expression ratios. Our method is also simple enough to be implemented in any spreadsheet or statistical software without additional scripts or proprietary components.
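
The efficiency-weighted quantity log10(E) · Cq can be carried through a reference-normalized expression-ratio calculation entirely in the log scale. A minimal sketch with hypothetical Cq values and efficiencies (not the paper's data, and omitting the blocking and multi-reference features):

```python
import math

# Efficiency-weighted Cq values: w = log10(E) * Cq, kept in log scale.
def efficiency_weighted(cq, eff):
    return math.log10(eff) * cq

# Hypothetical target and reference gene, treated vs. control samples.
w_tgt_ctrl = efficiency_weighted(24.0, 1.95)
w_tgt_trt  = efficiency_weighted(22.0, 1.95)
w_ref_ctrl = efficiency_weighted(20.0, 2.00)
w_ref_trt  = efficiency_weighted(20.1, 2.00)

# log10 relative expression of the target, normalized to the reference gene:
# a lower Cq in the treated sample means more starting template.
log_ratio = (w_tgt_ctrl - w_tgt_trt) - (w_ref_ctrl - w_ref_trt)
fold_change = 10 ** log_ratio
print(f"fold change ~ {fold_change:.2f}")
```

Keeping the arithmetic in log10 until the final exponentiation is the method's core point: statistics (means, t-tests, blocking) are applied to the weighted Cq values, not to fold changes.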

  12. Chapter 20: Data Center IT Efficiency Measures Evaluation Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurnik, Charles W; Huang, Robert; Masanet, Eric

    This chapter focuses on IT measures in the data center and examines the techniques and analysis methods used to verify savings that result from improving the efficiency of two specific pieces of IT equipment: servers and data storage.

  13. Probabilistic methods for rotordynamics analysis

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.; Torng, T. Y.; Millwater, H. R.; Fossum, A. F.; Rheinfurth, M. H.

    1991-01-01

    This paper summarizes the development of the methods and a computer program to compute the probability of instability of dynamic systems that can be represented by a system of second-order ordinary linear differential equations. Two instability criteria based upon the eigenvalues or Routh-Hurwitz test functions are investigated. Computational methods based on a fast probability integration concept and an efficient adaptive importance sampling method are proposed to perform efficient probabilistic analysis. A numerical example is provided to demonstrate the methods.
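
The eigenvalue criterion can be sketched for a one-degree-of-freedom system: M q'' + C q' + K q = 0 is unstable when any eigenvalue of its companion matrix has positive real part. Plain Monte Carlo stands in below for the paper's fast probability integration and adaptive importance sampling, and the parameter distribution is hypothetical:

```python
import numpy as np

# Eigenvalue instability criterion for M q'' + C q' + K q = 0: the system is
# unstable if any eigenvalue of the companion matrix
# A = [[0, I], [-M^-1 K, -M^-1 C]] has positive real part.
def is_unstable(m, c, k):
    A = np.array([[0.0, 1.0], [-k / m, -c / m]])  # 1-DOF companion matrix
    return bool(np.any(np.linalg.eigvals(A).real > 0))

# Probability of instability when the damping coefficient is uncertain:
# mean 0.2, sd 0.15, so negative (destabilizing) values occur occasionally.
rng = np.random.default_rng(0)
samples = rng.normal(loc=0.2, scale=0.15, size=20000)
p_instability = np.mean([is_unstable(1.0, c, 4.0) for c in samples])
print(f"P(instability) ~ {p_instability:.3f}")
```

For this 1-DOF case instability occurs exactly when the damping is negative, so the estimate converges to the normal tail probability P(c < 0) ≈ 0.09; the paper's methods target the same quantity far more cheaply than plain sampling.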

  14. Multiresolution molecular mechanics: Implementation and efficiency

    NASA Astrophysics Data System (ADS)

    Biyikli, Emre; To, Albert C.

    2017-01-01

    Atomistic/continuum coupling methods combine accurate atomistic methods and efficient continuum methods to simulate the behavior of highly ordered crystalline systems. Coupled methods utilize the advantages of both approaches to simulate systems at a lower computational cost, while retaining the accuracy associated with atomistic methods. Many concurrent atomistic/continuum coupling methods have been proposed in the past; however, their true computational efficiency has not been demonstrated. The present work presents an efficient implementation of a concurrent coupling method called the Multiresolution Molecular Mechanics (MMM) for serial, parallel, and adaptive analysis. First, we present the features of the software implemented along with the associated technologies. The scalability of the software implementation is demonstrated, and the competing effects of multiscale modeling and parallelization are discussed. Then, the algorithms contributing to the efficiency of the software are presented. These include algorithms for eliminating latent ghost atoms from calculations and measurement-based dynamic balancing of parallel workload. The efficiency improvements made by these algorithms are demonstrated by benchmark tests. The efficiency of the software is found to be on par with LAMMPS, a state-of-the-art Molecular Dynamics (MD) simulation code, when performing full atomistic simulations. Speed-up of the MMM method is shown to be directly proportional to the reduction of the number of the atoms visited in force computation. Finally, an adaptive MMM analysis on a nanoindentation problem, containing over a million atoms, is performed, yielding an improvement of 6.3-8.5 times in efficiency, over the full atomistic MD method. For the first time, the efficiency of a concurrent atomistic/continuum coupling method is comprehensively investigated and demonstrated.

  15. An efficient scan diagnosis methodology according to scan failure mode for yield enhancement

    NASA Astrophysics Data System (ADS)

    Kim, Jung-Tae; Seo, Nam-Sik; Oh, Ghil-Geun; Kim, Dae-Gue; Lee, Kyu-Taek; Choi, Chi-Young; Kim, InSoo; Min, Hyoung Bok

    2008-12-01

    Yield has always been a driving consideration in modern semiconductor fabrication. Statistically, the largest portion of wafer yield loss is due to defective scan failures. This paper presents efficient failure analysis methods, based on scan diagnosis, for initial yield ramp-up and ongoing products. Our analysis shows that more than 60% of scan failure dies fall into the shift-mode category in very deep submicron (VDSM) devices. However, localization of scan shift-mode failures is much more difficult than for capture-mode failures, because they are caused by malfunction of the scan chain itself. Addressing this challenge, we propose the most suitable analysis method for each scan failure mode (capture/shift) for yield enhancement. For the capture failure mode, this paper describes a method that integrates the scan diagnosis flow with backside probing technology to obtain more accurate candidates. We also describe several unique techniques, such as a bulk back-grinding solution, efficient backside probing, and a signal analysis method. Lastly, we introduce a blocked-chain analysis algorithm for efficient analysis of the shift failure mode. The combination of the two methods contributes to yield enhancement. We confirm the failure candidates with physical failure analysis (PFA) methods. The direct feedback of defect visualization is useful for mass-producing devices in a shorter time. Experimental data on mass products show that our method yields an average reduction of 13.7% in defective SCAN & SRAM-BIST failure rates and an average improvement of 18.2% in wafer yield rates.

  16. An Optimal Bahadur-Efficient Method in Detection of Sparse Signals with Applications to Pathway Analysis in Sequencing Association Studies.

    PubMed

    Dai, Hongying; Wu, Guodong; Wu, Michael; Zhi, Degui

    2016-01-01

    Next-generation sequencing data pose a severe curse of dimensionality, complicating traditional "single marker-single trait" analysis. We propose a two-stage combined p-value method for pathway analysis. The first stage is at the gene level, where we integrate effects within a gene using the Sequence Kernel Association Test (SKAT). The second stage is at the pathway level, where we perform a correlated Lancaster procedure to detect joint effects from multiple genes within a pathway. We show that the Lancaster procedure is optimal in Bahadur efficiency among all combined p-value methods. The Bahadur efficiency, [Formula: see text], compares sample sizes among different statistical tests when signals become sparse in sequencing data, i.e., ε → 0. The optimal Bahadur efficiency ensures that the Lancaster procedure asymptotically requires a minimal sample size to detect sparse signals ([Formula: see text]). The Lancaster procedure can also be applied to meta-analysis. Extensive empirical assessments of exome sequencing data show that the proposed method outperforms Gene Set Enrichment Analysis (GSEA). We applied the competitive Lancaster procedure to meta-analysis data generated by the Global Lipids Genetics Consortium to identify pathways significantly associated with high-density lipoprotein cholesterol, low-density lipoprotein cholesterol, triglycerides, and total cholesterol.
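
Lancaster's procedure generalizes Fisher's method: each p-value is transformed to a chi-square deviate with its own degrees of freedom (the weight), the deviates are summed, and the sum is referred to a chi-square distribution with the total degrees of freedom. With even integer weights the chi-square tail has a closed form, so a sketch needs no external libraries (weights and p-values here are hypothetical, and the correlation adjustment the paper uses is omitted):

```python
import math

def chi2_sf_even_df(x, df):
    """Survival function of chi-square with even df:
    sf(x) = exp(-x/2) * sum_{i<df/2} (x/2)^i / i!."""
    k = df // 2
    term = math.exp(-x / 2.0)
    total, acc = term, term
    for i in range(1, k):
        acc *= (x / 2.0) / i
        total += acc
    return total

def chi2_isf_even_df(p, df, lo=0.0, hi=1e3):
    """Inverse survival function by bisection (sf is decreasing in x)."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if chi2_sf_even_df(mid, df) > p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def lancaster(pvals, weights):
    """Combined p-value: sum chi-square deviates, refer to total df."""
    stat = sum(chi2_isf_even_df(p, w) for p, w in zip(pvals, weights))
    return chi2_sf_even_df(stat, sum(weights))

pvals = [0.01, 0.20, 0.03]
print(lancaster(pvals, [2, 2, 2]))   # equal weights of 2: Fisher's method
print(lancaster(pvals, [4, 2, 2]))   # upweight the first gene
```

With all weights equal to 2 the transform is exactly -2·ln(p) and the procedure reduces to Fisher's method; unequal weights let stronger or larger genes contribute more.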

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biyikli, Emre; To, Albert C., E-mail: albertto@pitt.edu

    Atomistic/continuum coupling methods combine accurate atomistic methods and efficient continuum methods to simulate the behavior of highly ordered crystalline systems. Coupled methods utilize the advantages of both approaches to simulate systems at a lower computational cost, while retaining the accuracy associated with atomistic methods. Many concurrent atomistic/continuum coupling methods have been proposed in the past; however, their true computational efficiency has not been demonstrated. The present work presents an efficient implementation of a concurrent coupling method called the Multiresolution Molecular Mechanics (MMM) for serial, parallel, and adaptive analysis. First, we present the features of the software implemented along with the associated technologies. The scalability of the software implementation is demonstrated, and the competing effects of multiscale modeling and parallelization are discussed. Then, the algorithms contributing to the efficiency of the software are presented. These include algorithms for eliminating latent ghost atoms from calculations and measurement-based dynamic balancing of parallel workload. The efficiency improvements made by these algorithms are demonstrated by benchmark tests. The efficiency of the software is found to be on par with LAMMPS, a state-of-the-art Molecular Dynamics (MD) simulation code, when performing full atomistic simulations. Speed-up of the MMM method is shown to be directly proportional to the reduction of the number of the atoms visited in force computation. Finally, an adaptive MMM analysis on a nanoindentation problem, containing over a million atoms, is performed, yielding an improvement of 6.3–8.5 times in efficiency, over the full atomistic MD method. For the first time, the efficiency of a concurrent atomistic/continuum coupling method is comprehensively investigated and demonstrated.

  18. Design Optimization Method for Composite Components Based on Moment Reliability-Sensitivity Criteria

    NASA Astrophysics Data System (ADS)

    Sun, Zhigang; Wang, Changxi; Niu, Xuming; Song, Yingdong

    2017-08-01

    In this paper, a Reliability-Sensitivity Based Design Optimization (RSBDO) methodology for the design of ceramic matrix composite (CMC) components is proposed. A practical and efficient method for the reliability and sensitivity analysis of complex components with arbitrary distribution parameters is investigated, using the perturbation method, the response surface method, the Edgeworth series, and the sensitivity analysis approach. The RSBDO methodology is then established by incorporating the sensitivity calculation model into the RBDO methodology. Finally, the proposed RSBDO methodology is applied to the design of CMC components. By comparison with Monte Carlo simulation, the numerical results demonstrate that the proposed methodology provides an accurate, convergent, and computationally efficient method for reliability analysis in finite element modeling engineering practice.

  19. 10 CFR 431.17 - Determination of efficiency.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... method or methods used; the mathematical model, the engineering or statistical analysis, computer... accordance with § 431.16 of this subpart, or by application of an alternative efficiency determination method... must be: (i) Derived from a mathematical model that represents the mechanical and electrical...

  20. Efficient forced vibration reanalysis method for rotating electric machines

    NASA Astrophysics Data System (ADS)

    Saito, Akira; Suzuki, Hiromitsu; Kuroishi, Masakatsu; Nakai, Hideo

    2015-01-01

    Rotating electric machines are subject to forced vibration from magnetic force excitation with wide-band frequency spectra that depend on the operating conditions. When designing electric machines, it is therefore essential to compute the vibration response of the machine at various operating conditions efficiently and accurately. This paper presents an efficient frequency-domain vibration analysis method for electric machines. The method enables efficient re-analysis of the vibration response at various operating conditions without re-computing the harmonic response by finite element analysis. The theoretical background of the proposed method is provided; it is based on modal reduction of the magnetic force excitation by a set of amplitude-modulated standing waves. The method is applied to the forced vibration response of an interior permanent magnet motor at a fixed operating condition. The results computed by the proposed method agree very well with those computed by conventional harmonic response analysis using the FEA. The proposed method is then applied to a spin-up test condition to demonstrate its applicability to various operating conditions. It is observed that the proposed method can successfully be applied to spin-up test conditions, and the measured dominant frequency peaks in the frequency response are well captured by the proposed approach.

  1. Efficient multiscale magnetic-domain analysis of iron-core material under mechanical stress

    NASA Astrophysics Data System (ADS)

    Nishikubo, Atsushi; Ito, Shumpei; Mifune, Takeshi; Matsuo, Tetsuji; Kaido, Chikara; Takahashi, Yasuhito; Fujiwara, Koji

    2018-05-01

    For an efficient analysis of magnetization, a partial-implicit solution method is improved using an assembled domain structure model with six-domain mesoscopic particles exhibiting pinning-type hysteresis. The quantitative analysis of non-oriented silicon steel succeeds in predicting the stress dependence of hysteresis loss with computation times greatly reduced by using the improved partial-implicit method. The effect of cell division along the thickness direction is also evaluated.

  2. Reuse of imputed data in microarray analysis increases imputation efficiency

    PubMed Central

    Kim, Ki-Yeol; Kim, Byoung-Jin; Yi, Gwan-Su

    2004-01-01

    Background The imputation of missing values is necessary for the efficient use of DNA microarray data, because many clustering algorithms and some statistical analyses require a complete data set. A few imputation methods for DNA microarray data have been introduced, but their efficiency was low and the validity of the imputed values had not been fully checked. Results We developed a new cluster-based imputation method called the sequential K-nearest neighbor (SKNN) method. It imputes the missing values sequentially, starting from the gene with the fewest missing values, and uses the imputed values in later imputations. Although it reuses imputed values, the new method greatly improves on the accuracy and computational complexity of the conventional KNN-based method and of other methods based on maximum likelihood estimation. The performance of SKNN was particularly strong for data with high missing rates and a large number of experiments. Applying Expectation Maximization (EM) to the SKNN method improved the accuracy but increased computational time in proportion to the number of iterations. The Multiple Imputation (MI) method, which is well known but had not previously been applied to microarray data, showed accuracy similar to the SKNN method, with slightly higher dependency on the type of data set. Conclusions Sequential reuse of imputed data in KNN-based imputation greatly increases the efficiency of imputation. The SKNN method should be practically useful for saving data from microarray experiments that have high numbers of missing entries. The SKNN method generates reliable imputed values which can be used for further cluster-based analysis of microarray data. PMID:15504240
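
The core SKNN idea, imputing rows in order of increasing missingness and letting imputed rows serve as neighbors for later rows, can be sketched as follows. This simplified version averages the k nearest rows unweighted, whereas the published method is more elaborate:

```python
import numpy as np

# Sketch of sequential KNN (SKNN) imputation: genes (rows) are processed in
# order of increasing missingness, and once a row is imputed it joins the
# pool of candidate neighbors for later rows.
def sknn_impute(data, k=2):
    X = np.array(data, float)
    order = np.argsort(np.isnan(X).sum(axis=1))      # fewest missing first
    complete = [i for i in order if not np.isnan(X[i]).any()]
    for i in order:
        miss = np.isnan(X[i])
        if not miss.any():
            continue
        obs = ~miss
        # distance to candidate rows over this row's observed columns
        d = [(np.linalg.norm(X[i, obs] - X[j, obs]), j) for j in complete]
        neighbors = [j for _, j in sorted(d)[:k]]
        X[i, miss] = X[neighbors][:, miss].mean(axis=0)
        complete.append(i)   # reuse of imputed rows: the key SKNN idea
    return X

nan = float("nan")
expr = [[1.0, 2.0, 3.0],
        [1.1, 2.1, 3.2],
        [5.0, 5.0, 5.0],
        [1.0, 2.0, nan]]
print(sknn_impute(expr, k=2))
```

In the toy matrix the last gene's missing entry is filled from its two nearest profiles (the first two rows), not from the distant third row.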

  3. A combined experimental-modelling method for the detection and analysis of pollution in coastal zones

    NASA Astrophysics Data System (ADS)

    Limić, Nedzad; Valković, Vladivoj

    1996-04-01

    Pollution of coastal seas with toxic substances can be efficiently detected by examining toxic materials in sediment samples. These samples carry information on the overall pollution from surrounding sources such as yacht anchorages, nearby industries, sewage systems, etc. An efficient analysis of pollution must determine the contribution from each individual source. In this work it is demonstrated that a modelling method can be utilized for solving this latter problem. The modelling method is based on a unique interpretation of concentrations in sediments from all sampling stations. The proposed method is a synthesis combining PIXE, as an efficient method of determining pollutant concentrations, with the code ANCOPOL (N. Limic and R. Benis, The computer code ANCOPOL, SimTel/msdos/geology, 1994 [1]) for calculating the contributions of the main polluters. The efficiency and limits of the proposed method are demonstrated by discussing trace element concentrations in sediments of Punat Bay on the island of Krk in Croatia.

  4. The role of environmental heterogeneity in meta-analysis of gene-environment interactions with quantitative traits.

    PubMed

    Li, Shi; Mukherjee, Bhramar; Taylor, Jeremy M G; Rice, Kenneth M; Wen, Xiaoquan; Rice, John D; Stringham, Heather M; Boehnke, Michael

    2014-07-01

    With challenges in data harmonization and environmental heterogeneity across various data sources, meta-analysis of gene-environment interaction studies can often involve subtle statistical issues. In this paper, we study the effect of environmental covariate heterogeneity (within and between cohorts) on two approaches for fixed-effect meta-analysis: the standard inverse-variance weighted meta-analysis and a meta-regression approach. Akin to the results in Simmonds and Higgins (), we obtain analytic efficiency results for both methods under certain assumptions. The relative efficiency of the two methods depends on the ratio of within versus between cohort variability of the environmental covariate. We propose to use an adaptively weighted estimator (AWE), between meta-analysis and meta-regression, for the interaction parameter. The AWE retains full efficiency of the joint analysis using individual level data under certain natural assumptions. Lin and Zeng (2010a, b) showed that a multivariate inverse-variance weighted estimator retains full efficiency as joint analysis using individual level data, if the estimates with full covariance matrices for all the common parameters are pooled across all studies. We show consistency of our work with Lin and Zeng (2010a, b). Without sacrificing much efficiency, the AWE uses only univariate summary statistics from each study, and bypasses issues with sharing individual level data or full covariance matrices across studies. We compare the performance of the methods both analytically and numerically. The methods are illustrated through meta-analysis of interaction between Single Nucleotide Polymorphisms in FTO gene and body mass index on high-density lipoprotein cholesterol data from a set of eight studies of type 2 diabetes. © 2014 WILEY PERIODICALS, INC.
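
The fixed-effect inverse-variance weighted estimator the paper builds on pools per-study estimates with weights w_i = 1/se_i². A minimal sketch with hypothetical study-level interaction estimates (not the FTO/HDL data, and without the adaptive weighting the AWE adds):

```python
# Fixed-effect inverse-variance weighted (IVW) meta-analysis of an
# interaction coefficient: beta_hat = sum(w_i * b_i) / sum(w_i),
# with w_i = 1 / se_i^2. Hypothetical per-study estimates.
betas = [0.12, 0.08, 0.15, 0.10]          # per-study interaction estimates
ses   = [0.05, 0.04, 0.08, 0.06]          # their standard errors

weights = [1.0 / se**2 for se in ses]
beta_ivw = sum(w * b for w, b in zip(weights, betas)) / sum(weights)
se_ivw = (1.0 / sum(weights)) ** 0.5      # pooled standard error
print(f"pooled beta = {beta_ivw:.4f} (SE {se_ivw:.4f})")
```

The pooled standard error is always smaller than any single study's, which is the efficiency gain meta-analysis trades against the heterogeneity issues the paper analyzes.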

  5. Structural reliability analysis under evidence theory using the active learning kriging model

    NASA Astrophysics Data System (ADS)

    Yang, Xufeng; Liu, Yongshou; Ma, Panke

    2017-11-01

    Structural reliability analysis under evidence theory is investigated. It is rigorously proved that a surrogate model providing only correct sign prediction of the performance function can meet the accuracy requirement of evidence-theory-based reliability analysis. Accordingly, a method based on the active learning kriging model which only correctly predicts the sign of the performance function is proposed. Interval Monte Carlo simulation and a modified optimization method based on Karush-Kuhn-Tucker conditions are introduced to make the method more efficient in estimating the bounds of failure probability based on the kriging model. Four examples are investigated to demonstrate the efficiency and accuracy of the proposed method.
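
    As a rough illustration of the interval Monte Carlo idea mentioned above, the sketch below (with a hypothetical performance function, not one of the paper's examples) bounds the failure probability when a parameter is only known to lie in an interval: a sample counts toward the lower bound if it fails for every admissible parameter value, and toward the upper bound if it fails for some value.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x, theta):
    # Hypothetical performance function; failure when g < 0.
    return theta - x[..., 0] ** 2 - x[..., 1]

def interval_mc_bounds(n=20000, theta_lo=2.0, theta_hi=3.0, n_grid=21):
    """Lower/upper bounds on P(failure) with an interval-valued theta."""
    x = rng.standard_normal((n, 2))
    thetas = np.linspace(theta_lo, theta_hi, n_grid)
    gvals = np.stack([g(x, t) for t in thetas], axis=1)  # (n, n_grid)
    fail_all = np.all(gvals < 0, axis=1)   # fails for every theta: "belief"
    fail_any = np.any(gvals < 0, axis=1)   # fails for some theta: "plausibility"
    return fail_all.mean(), fail_any.mean()
```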

  6. Field evaluation of personal sampling methods for multiple bioaerosols.

    PubMed

    Wang, Chi-Hsun; Chen, Bean T; Han, Bor-Cheng; Liu, Andrew Chi-Yeu; Hung, Po-Chen; Chen, Chih-Yong; Chao, Hsing Jasmine

    2015-01-01

    Ambient bioaerosols are ubiquitous in the daily environment and can affect health in various ways. However, few studies have been conducted to comprehensively evaluate personal bioaerosol exposure in occupational and indoor environments because of the complex composition of bioaerosols and the lack of standardized sampling/analysis methods. We conducted a study to determine the most efficient collection/analysis method for the personal exposure assessment of multiple bioaerosols. The sampling efficiencies of three filters and four samplers were compared. According to our results, polycarbonate (PC) filters had the highest relative efficiency, particularly for bacteria. Side-by-side sampling was conducted to evaluate the three filter samplers (with PC filters) and the NIOSH Personal Bioaerosol Cyclone Sampler. According to the results, the Button Aerosol Sampler and the IOM Inhalable Dust Sampler had the highest relative efficiencies for fungi and bacteria, followed by the NIOSH sampler. Personal sampling was performed in a pig farm to assess occupational bioaerosol exposure and to evaluate the sampling/analysis methods. The Button and IOM samplers yielded a similar performance for personal bioaerosol sampling at the pig farm. However, the Button sampler is more likely to be clogged at high airborne dust concentrations because of its higher flow rate (4 L/min). Therefore, the IOM sampler is a more appropriate choice for performing personal sampling in environments with high dust levels. In summary, the Button and IOM samplers with PC filters are efficient sampling/analysis methods for the personal exposure assessment of multiple bioaerosols.

  7. Numerical prediction of the energy efficiency of the three-dimensional fish school using the discretized Adomian decomposition method

    NASA Astrophysics Data System (ADS)

    Lin, Yinwei

    2018-06-01

    A three-dimensional model of a fish school, built from a modified Adomian decomposition method (ADM) discretized by the finite difference method, is proposed. To our knowledge, few studies of fish schools are documented, owing to the expense of numerical computation and the tedium of three-dimensional data analysis. Here, we propose a simple model relying on the Adomian decomposition method to estimate the energy-saving efficiency of the flow motion of the fish school. First, analytic solutions of the Navier-Stokes equations are used for numerical validation. The influence of the distance between two side-by-side fish on the energy efficiency of the school is then studied. In addition, a complete error analysis for the method is presented.

  8. Technical efficiency and resources allocation in university hospitals in Tehran, 2009-2012

    PubMed Central

    Rezapour, Aziz; Ebadifard Azar, Farbod; Yousef Zadeh, Negar; Roumiani, YarAllah; Bagheri Faradonbeh, Saeed

    2015-01-01

    Background: Assessment of hospitals’ performance in achieving their goals is a basic necessity, and measuring the efficiency of hospitals in order to boost resource productivity in healthcare organizations is extremely important. The aim of this study was to measure technical efficiency and determine the status of resource allocation in selected university hospitals in Tehran, Iran. Methods: This study was conducted in 2012; the research population consisted of all hospitals affiliated with the Iran and Tehran universities of medical sciences. Required data, such as human and capital resource information and production variables (hospital outputs), were collected from the data centers of the studied hospitals. Data were analyzed using the data envelopment analysis (DEA) method with DEAP 2.1 software, and the stochastic frontier analysis (SFA) method with Frontier 4.1 software. Results: According to the DEA method, the average technical, managerial (pure), and scale efficiencies of the studied hospitals during the study period were 0.87, 0.971, and 0.907, respectively. None of the efficiency measures followed a fixed trend over the study period; all changed constantly. In the stochastic frontier production function analysis, the technical efficiency of the studied hospitals during the study period was estimated at 0.389. Conclusion: This study identified the hospitals with the highest and lowest efficiency, and reference hospitals (more efficient peers) were indicated for the inefficient centers. According to the findings, hospitals that do not operate efficiently have the capacity to improve technical efficiency by removing excess inputs without changing the level of outputs. Moreover, through optimal allocation of resources, most of the studied hospitals could achieve substantial economies of scale. PMID:26793657
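
    DEA scores of the kind reported above are obtained by solving one small linear program per hospital (decision-making unit). The sketch below is illustrative, not the DEAP software: it solves the input-oriented CCR envelopment model, where an efficient unit gets a score of 1 and an inefficient one gets the fraction to which its inputs could be scaled down.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR efficiency scores.
    X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs). Returns theta per DMU."""
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        # Decision variables: [theta, lambda_1 .. lambda_n]
        c = np.r_[1.0, np.zeros(n)]                    # minimize theta
        # Inputs: sum_j lambda_j * x_ij <= theta * x_io
        A_in = np.hstack([-X[o].reshape(m, 1), X.T])
        # Outputs: sum_j lambda_j * y_rj >= y_ro
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])
        res = linprog(c,
                      A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(m), -Y[o]],
                      bounds=[(None, None)] + [(0, None)] * n)
        scores.append(res.fun)
    return np.array(scores)
```

    For example, a unit producing one output from one input is fully efficient, while a peer producing the same output from twice the input scores 0.5.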

  9. A rapid and sensitive method for the simultaneous analysis of aliphatic and polar molecules containing free carboxyl groups in plant extracts by LC-MS/MS

    PubMed Central

    2009-01-01

    Background Aliphatic molecules containing free carboxyl groups are important intermediates in many metabolic and signalling reactions; however, they accumulate to low levels in tissues and are not efficiently ionized by electrospray ionization (ESI) compared with more polar substances. Quantification of aliphatic molecules therefore becomes difficult when only small amounts of tissue are available for analysis. Traditional methods for the analysis of these molecules require purification or enrichment steps, which are onerous when multiple samples need to be analyzed. In contrast to aliphatic molecules, more polar substances containing free carboxyl groups, such as some phytohormones, are efficiently ionized by ESI and suitable for analysis by LC-MS/MS. Thus, the development of a method with which aliphatic and polar molecules (whose unmodified forms differ dramatically in their ESI ionization efficiencies) can be simultaneously detected with similar sensitivities would substantially simplify the analysis of complex biological matrices. Results A simple, rapid, specific and sensitive method is presented for the simultaneous detection and quantification of free aliphatic molecules (e.g., free fatty acids (FFA)) and small polar molecules (e.g., jasmonic acid (JA), salicylic acid (SA)) containing free carboxyl groups, based on direct derivatization of leaf extracts with Picolinyl reagent followed by LC-MS/MS analysis. The presence of the N atom in the esterified pyridine moiety allowed the efficient ionization of the 25 compounds tested, irrespective of their chemical structure. The method was validated by comparing the results obtained after analysis of Nicotiana attenuata leaf material with previously described analytical methods. Conclusion The method presented was used to detect 16 compounds in leaf extracts of N. attenuata plants. 
Importantly, the method can be adapted based on the specific analytes of interest with the only consideration that the molecules must contain at least one free carboxyl group. PMID:19939243

  10. Research on the energy and ecological efficiency of mechanical equipment remanufacturing systems

    NASA Astrophysics Data System (ADS)

    Shi, Junli; Cheng, Jinshi; Ma, Qinyi; Wang, Yajun

    2017-08-01

    Starting from the characteristics of mechanical equipment remanufacturing systems, the dynamic performance of energy consumption and emissions is explored. An equipment energy-efficiency and emission analysis model is first established, and an energy and ecological efficiency analysis method for the remanufacturing system is then put forward. Finally, the energy and ecological efficiency of the WD615.87 automotive diesel engine remanufacturing system is analyzed as an example, and ways to improve the energy efficiency and environmental friendliness of the remanufacturing process are proposed.

  11. Computational methods for efficient structural reliability and reliability sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.

    1993-01-01

    This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
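
    The essence of importance sampling for reliability can be illustrated with a simplified, non-adaptive sketch: sample from a density shifted toward the failure region and reweight by the likelihood ratio. The limit state and shift point below are hypothetical, and the adaptive domain-refinement step of the AIS method is omitted.

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

rng = np.random.default_rng(1)

def g(u):
    # Hypothetical limit state in standard normal space; failure when g < 0.
    return 3.0 - u[:, 0] - u[:, 1]

def importance_sampling_pf(center, n=50000):
    """Failure probability by importance sampling centered near the
    approximate design point (a non-adaptive simplification of AIS)."""
    d = len(center)
    u = rng.standard_normal((n, d)) + center
    # Likelihood ratio: target density / shifted sampling density
    w = mvn(np.zeros(d)).pdf(u) / mvn(center).pdf(u)
    return np.mean(w * (g(u) < 0))
```

    For this limit state the exact answer is 1 - Phi(3/sqrt(2)), about 0.017, and the weighted estimate reproduces it with far fewer samples than crude Monte Carlo would need.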

  12. A single-loop optimization method for reliability analysis with second order uncertainty

    NASA Astrophysics Data System (ADS)

    Xie, Shaojun; Pan, Baisong; Du, Xiaoping

    2015-08-01

    Reliability analysis may involve random variables and interval variables. In addition, some of the random variables may have interval distribution parameters owing to limited information. This kind of uncertainty is called second order uncertainty. This article develops an efficient reliability method for problems involving the three aforementioned types of uncertain input variables. The analysis produces the maximum and minimum reliability and is computationally demanding because two loops are needed: a reliability analysis loop with respect to random variables and an interval analysis loop for extreme responses with respect to interval variables. The first order reliability method and nonlinear optimization are used for the two loops, respectively. For computational efficiency, the two loops are combined into a single loop by treating the Karush-Kuhn-Tucker (KKT) optimal conditions of the interval analysis as constraints. Three examples are presented to demonstrate the proposed method.

  13. Data Envelopment Analysis and Its Application to the Measurement of Efficiency in Higher Education

    ERIC Educational Resources Information Center

    Johnes, Jill

    2006-01-01

    The purpose of this paper is to examine the possibility of measuring efficiency in the context of higher education. The paper begins by exploring the advantages and drawbacks of the various methods for measuring efficiency in the higher education context. The ease with which data envelopment analysis (DEA) can handle multiple inputs and multiple…

  14. Identification of suitable genes contributes to lung adenocarcinoma clustering by multiple meta-analysis methods.

    PubMed

    Yang, Ze-Hui; Zheng, Rui; Gao, Yuan; Zhang, Qiang

    2016-09-01

    With the widespread application of high-throughput technology, numerous meta-analysis methods have been proposed for differential expression profiling across multiple studies. We identified the suitable differentially expressed (DE) genes that contributed to lung adenocarcinoma (ADC) clustering based on seven popular meta-analysis methods. Seven microarray expression profiles of ADC and normal controls were extracted from the ArrayExpress database. Bioconductor was used for preliminary data preprocessing. Then, DE genes across multiple studies were identified. Hierarchical clustering was applied to compare the classification performance for the microarray data samples, and classification efficiency was compared in terms of accuracy, sensitivity and specificity. Across the seven datasets, 573 ADC cases and 222 normal controls were collected. After filtering out unexpressed and noninformative genes, 3688 genes remained for further analysis. The classification efficiency analysis showed that DE genes identified by the sum-of-ranks method separated ADC from normal controls with the best accuracy, sensitivity and specificity of 0.953, 0.969 and 0.932, respectively. The gene set with the highest classification accuracy mainly participated in the regulation of response to external stimulus (P = 7.97E-04), cyclic nucleotide-mediated signaling (P = 0.01), regulation of cell morphogenesis (P = 0.01) and regulation of cell proliferation (P = 0.01). Evaluating the classification efficiency of DE genes identified by different meta-analysis methods provides a new perspective on choosing a suitable method for a given application. Different meta-analysis methods have different strengths, so these should be weighed together when selecting a method for a particular study. © 2015 John Wiley & Sons Ltd.
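
    The sum-of-ranks combination singled out above is straightforward to sketch: rank genes within each study by their evidence of differential expression, then sum the ranks across studies (an illustrative sketch; the study's exact scoring pipeline may differ).

```python
import numpy as np

def sum_of_ranks(score_matrix):
    """Combine per-study DE evidence by summing within-study ranks.
    score_matrix: (n_genes, n_studies), larger score = stronger evidence.
    Returns gene indices ordered from strongest to weakest combined evidence."""
    # rank 1 = strongest evidence within each study
    ranks = np.argsort(np.argsort(-score_matrix, axis=0), axis=0) + 1
    total = ranks.sum(axis=1)           # small total = consistently top-ranked
    return np.argsort(total)
```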

  15. Frontier-based techniques in measuring hospital efficiency in Iran: a systematic review and meta-regression analysis

    PubMed Central

    2013-01-01

    Background In recent years, there has been growing interest in measuring the efficiency of hospitals in Iran and several studies have been conducted on the topic. The main objective of this paper was to review studies in the field of hospital efficiency and examine the estimated technical efficiency (TE) of Iranian hospitals. Methods Persian and English databases were searched for studies related to measuring hospital efficiency in Iran. Ordinary least squares (OLS) regression models were applied for statistical analysis. The PRISMA guidelines were followed in the search process. Results A total of 43 efficiency scores from 29 studies were retrieved and used to approach the research question. Data envelopment analysis was the principal frontier efficiency method in the estimation of efficiency scores. The pooled estimate of mean TE was 0.846 (±0.134). There was a considerable variation in the efficiency scores between the different studies performed in Iran. There were no differences in efficiency scores between data envelopment analysis (DEA) and stochastic frontier analysis (SFA) techniques. The reviewed studies are generally similar and suffer from similar methodological deficiencies, such as no adjustment for case mix and quality of care differences. The results of OLS regression revealed that studies that included more variables and more heterogeneous hospitals generally reported higher TE. Larger sample size was associated with reporting lower TE. Conclusions The features of frontier-based techniques had a profound impact on the efficiency scores among Iranian hospital studies. These studies suffer from major methodological deficiencies and were of sub-optimal quality, limiting their validity and reliability. It is suggested that improving data collection and processing in Iranian hospital databases may have a substantial impact on promoting the quality of research in this field. PMID:23945011

  16. Local Orthogonal Cutting Method for Computing Medial Curves and Its Biomedical Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiao, Xiangmin; Einstein, Daniel R.; Dyedov, Volodymyr

    2010-03-24

    Medial curves have a wide range of applications in geometric modeling and analysis (such as shape matching) and biomedical engineering (such as morphometry and computer assisted surgery). The computation of medial curves poses significant challenges, both in terms of theoretical analysis and practical efficiency and reliability. In this paper, we propose a definition and analysis of medial curves and also describe an efficient and robust method for computing medial curves. Our approach is based on three key concepts: a local orthogonal decomposition of objects into substructures, a differential geometry concept called the interior center of curvature (ICC), and integrated stability and consistency tests. These concepts lend themselves to robust numerical techniques including eigenvalue analysis, weighted least squares approximations, and numerical minimization, resulting in an algorithm that is efficient and noise resistant. We illustrate the effectiveness and robustness of our approach with some highly complex, large-scale, noisy biomedical geometries derived from medical images, including lung airways and blood vessels. We also present comparisons of our method with some existing methods.

  17. Quantitative Assessment of In-solution Digestion Efficiency Identifies Optimal Protocols for Unbiased Protein Analysis*

    PubMed Central

    León, Ileana R.; Schwämmle, Veit; Jensen, Ole N.; Sprenger, Richard R.

    2013-01-01

    The majority of mass spectrometry-based protein quantification studies use peptide-centric analytical methods and thus strongly rely on efficient and unbiased protein digestion protocols for sample preparation. We present a novel objective approach to assess protein digestion efficiency using a combination of qualitative and quantitative liquid chromatography-tandem MS methods and statistical data analysis. In contrast to previous studies we employed both standard qualitative as well as data-independent quantitative workflows to systematically assess trypsin digestion efficiency and bias using mitochondrial protein fractions. We evaluated nine trypsin-based digestion protocols, based on standard in-solution or on spin filter-aided digestion, including new optimized protocols. We investigated various reagents for protein solubilization and denaturation (dodecyl sulfate, deoxycholate, urea), several trypsin digestion conditions (buffer, RapiGest, deoxycholate, urea), and two methods for removal of detergents before analysis of peptides (acid precipitation or phase separation with ethyl acetate). Our data-independent quantitative liquid chromatography-tandem MS workflow quantified over 3700 distinct peptides with 96% completeness between all protocols and replicates, with an average 40% protein sequence coverage and an average of 11 peptides identified per protein. Systematic quantitative and statistical analysis of physicochemical parameters demonstrated that deoxycholate-assisted in-solution digestion combined with phase transfer allows for efficient, unbiased generation and recovery of peptides from all protein classes, including membrane proteins. This deoxycholate-assisted protocol was also optimal for spin filter-aided digestions as compared with existing methods. PMID:23792921

  18. Testing alternative ground water models using cross-validation and other methods

    USGS Publications Warehouse

    Foglia, L.; Mehl, S.W.; Hill, M.C.; Perona, P.; Burlando, P.

    2007-01-01

    Many methods can be used to test alternative ground water models. Of concern in this work are methods able to (1) rank alternative models (also called model discrimination) and (2) identify observations important to parameter estimates and predictions (equivalent to the purpose served by some types of sensitivity analysis). Some of the measures investigated are computationally efficient; others are computationally demanding. The latter are generally needed to account for model nonlinearity. The efficient model discrimination methods investigated include the information criteria: the corrected Akaike information criterion, Bayesian information criterion, and generalized cross-validation. The efficient sensitivity analysis measures used are dimensionless scaled sensitivity (DSS), composite scaled sensitivity, and parameter correlation coefficient (PCC); the other statistics are DFBETAS, Cook's D, and observation-prediction statistic. Acronyms are explained in the introduction. Cross-validation (CV) is a computationally intensive nonlinear method that is used for both model discrimination and sensitivity analysis. The methods are tested using up to five alternative parsimoniously constructed models of the ground water system of the Maggia Valley in southern Switzerland. The alternative models differ in their representation of hydraulic conductivity. A new method for graphically representing CV and sensitivity analysis results for complex models is presented and used to evaluate the utility of the efficient statistics. The results indicate that for model selection, the information criteria produce similar results at much smaller computational cost than CV. For identifying important observations, the only obviously inferior linear measure is DSS; the poor performance was expected because DSS does not include the effects of parameter correlation and PCC reveals large parameter correlations. © 2007 National Ground Water Association.
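
    Of the information criteria mentioned, the corrected Akaike information criterion is easy to state for least-squares models. The sketch below uses one common form, n*log(SSE/n) + 2k plus a small-sample correction; this form is an assumption for illustration, not necessarily the exact expression used in the study.

```python
import numpy as np

def aicc(sse, n, k):
    """Corrected Akaike information criterion for a least-squares model.
    sse: residual sum of squares, n: observations, k: estimated parameters
    (including the error variance). Smaller AICc = preferred model."""
    aic = n * np.log(sse / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)   # small-sample correction
```

    Competing models fitted to the same observations are then ranked by AICc; differences of roughly 2 or more are conventionally treated as meaningful.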

  19. Methods for understanding super-efficient data envelopment analysis results with an application to hospital inpatient surgery.

    PubMed

    O'Neill, Liam; Dexter, Franklin

    2005-11-01

    We compare two techniques for increasing the transparency and face validity of Data Envelopment Analysis (DEA) results for managers at a single decision-making unit: multifactor efficiency (MFE) and non-radial super-efficiency (NRSE). Both methods incorporate the slack values from the super-efficient DEA model to provide a more robust performance measure than radial super-efficiency scores. MFE and NRSE are equivalent for unique optimal solutions and a single output. MFE incorporates the slack values from multiple output variables, whereas NRSE does not. MFE can be more transparent to managers since it involves no additional optimization steps beyond the DEA, whereas NRSE requires several. We compare results for operating room managers at an Iowa hospital evaluating its growth potential for multiple surgical specialties. In addition, we address the problem of upward bias of the slack values of the super-efficient DEA model.

  20. Efficient calibration for imperfect computer models

    DOE PAGES

    Tuo, Rui; Wu, C. F. Jeff

    2015-12-01

    Many computer models contain unknown parameters which need to be estimated using physical observations. Furthermore, the calibration method based on Gaussian process models may lead to unreasonable estimates for imperfect computer models. In this work, we extend this line of study to calibration problems with stochastic physical data. We propose a novel method, called L2 calibration, and show its semiparametric efficiency. The conventional method of ordinary least squares is also studied; theoretical analysis shows that it is consistent but not efficient. Numerical examples show that the proposed method outperforms the existing ones.
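
    The contrast between ordinary least squares and an L2-style calibration can be sketched as follows. The model, data, and smoother below are all hypothetical: a simple moving average stands in for the nonparametric estimate of the true process used in the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)

def model(x, theta):
    # Hypothetical imperfect computer model of a nonlinear process
    return theta * x

# Synthetic "physical" observations the model cannot fit exactly
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2.5 * x) + 0.05 * rng.standard_normal(x.size)

# Ordinary least squares: fit theta directly to the noisy observations
theta_ols = minimize_scalar(lambda t: np.sum((y - model(x, t)) ** 2)).x

# L2-style calibration: fit theta to a denoised estimate of the true
# response, minimizing an approximate integrated squared distance
y_hat = np.convolve(y, np.ones(15) / 15, mode="same")
dx = x[1] - x[0]
theta_l2 = minimize_scalar(lambda t: np.sum((y_hat - model(x, t)) ** 2) * dx).x
```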

  1. LOCAL ORTHOGONAL CUTTING METHOD FOR COMPUTING MEDIAL CURVES AND ITS BIOMEDICAL APPLICATIONS

    PubMed Central

    Einstein, Daniel R.; Dyedov, Vladimir

    2010-01-01

    Medial curves have a wide range of applications in geometric modeling and analysis (such as shape matching) and biomedical engineering (such as morphometry and computer assisted surgery). The computation of medial curves poses significant challenges, both in terms of theoretical analysis and practical efficiency and reliability. In this paper, we propose a definition and analysis of medial curves and also describe an efficient and robust method called local orthogonal cutting (LOC) for computing medial curves. Our approach is based on three key concepts: a local orthogonal decomposition of objects into substructures, a differential geometry concept called the interior center of curvature (ICC), and integrated stability and consistency tests. These concepts lend themselves to robust numerical techniques and result in an algorithm that is efficient and noise resistant. We illustrate the effectiveness and robustness of our approach with some highly complex, large-scale, noisy biomedical geometries derived from medical images, including lung airways and blood vessels. We also present comparisons of our method with some existing methods. PMID:20628546

  2. Efficient subtle motion detection from high-speed video for sound recovery and vibration analysis using singular value decomposition-based approach

    NASA Astrophysics Data System (ADS)

    Zhang, Dashan; Guo, Jie; Jin, Yi; Zhu, Chang'an

    2017-09-01

    High-speed cameras provide full field measurement of structure motions and have been applied in nondestructive testing and noncontact structure monitoring. Recently, a phase-based method has been proposed to extract sound-induced vibrations from phase variations in videos, and this method provides insights into the study of remote sound surveillance and material analysis. An efficient singular value decomposition (SVD)-based approach is introduced to detect sound-induced subtle motions from pixel intensities in silent high-speed videos. A high-speed camera is initially applied to capture a video of the vibrating objects stimulated by sound fluctuations. Then, subimages collected from a small region on the captured video are reshaped into vectors and reconstructed to form a matrix. Orthonormal image bases (OIBs) are obtained from the SVD of the matrix; available vibration signal can then be obtained by projecting subsequent subimages onto specific OIBs. A simulation test is initiated to validate the effectiveness and efficiency of the proposed method. Two experiments are conducted to demonstrate the potential applications in sound recovery and material analysis. Results show that the proposed method efficiently detects subtle motions from the video.
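
    The projection step can be sketched compactly: flatten each subimage into a row of a matrix, remove the static background, take the SVD to obtain orthonormal image bases, and project the frames onto the dominant basis. This is an illustrative sketch of the general idea, not the authors' full pipeline.

```python
import numpy as np

def svd_motion_signal(frames):
    """Recover a 1-D motion signal from a stack of subimages.
    frames: (n_frames, h, w). Frames are flattened into rows of a matrix;
    the SVD of that matrix yields orthonormal image bases (OIBs), and
    projecting every frame onto the dominant basis gives a time series."""
    M = frames.reshape(frames.shape[0], -1).astype(float)
    M -= M.mean(axis=0)                       # remove the static background
    U, S, Vt = np.linalg.svd(M, full_matrices=False)
    oib = Vt[0]                               # dominant orthonormal image basis
    return M @ oib                            # projection time series
```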

  3. Spectral compression algorithms for the analysis of very large multivariate images

    DOEpatents

    Keenan, Michael R.

    2007-10-16

    A method for spectrally compressing data sets enables the efficient analysis of very large multivariate images. The spectral compression algorithm uses a factored representation of the data that can be obtained from Principal Components Analysis or other factorization technique. Furthermore, a block algorithm can be used for performing common operations more efficiently. An image analysis can be performed on the factored representation of the data, using only the most significant factors. The spectral compression algorithm can be combined with a spatial compression algorithm to provide further computational efficiencies.
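
    A factored, truncated representation of this kind can be sketched with plain PCA via the SVD (an illustration of the general idea only; the patented algorithm's block processing and spatial-compression steps are not shown).

```python
import numpy as np

def spectral_compress(cube, k):
    """Truncated-PCA compression of a multivariate image cube.
    cube: (rows, cols, n_channels). Keeps the k most significant factors,
    returning scores, loadings, and the channel means for reconstruction."""
    r, c, p = cube.shape
    D = cube.reshape(-1, p).astype(float)
    mu = D.mean(axis=0)
    # PCA via SVD of the mean-centered pixel-by-channel matrix
    U, S, Vt = np.linalg.svd(D - mu, full_matrices=False)
    scores = U[:, :k] * S[:k]        # spatial factors, (rows*cols, k)
    loadings = Vt[:k]                # spectral factors, (k, n_channels)
    return scores, loadings, mu

def reconstruct(scores, loadings, mu, shape):
    """Rebuild the (approximate) cube from the factored representation."""
    return (scores @ loadings + mu).reshape(shape)
```

    Subsequent analyses can then operate on the k factors instead of the full channel dimension, which is where the computational savings come from.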

  4. Error minimization algorithm for comparative quantitative PCR analysis: Q-Anal.

    PubMed

    OConnor, William; Runquist, Elizabeth A

    2008-07-01

    Current methods for comparative quantitative polymerase chain reaction (qPCR) analysis, the threshold and extrapolation methods, either make assumptions about PCR efficiency that require an arbitrary threshold selection process or extrapolate to estimate relative levels of messenger RNA (mRNA) transcripts. Here we describe an algorithm, Q-Anal, that blends elements of current methods to bypass assumptions regarding PCR efficiency and improve the threshold selection process, minimizing error in comparative qPCR analysis. This algorithm uses iterative linear regression to identify the exponential phase for both target and reference amplicons and then selects, by minimizing linear regression error, a fluorescence threshold where efficiencies for both amplicons have been defined. From this defined fluorescence threshold, the cycle time (Ct) and the error for both amplicons are calculated and used to determine the expression ratio. Ratios in complementary DNA (cDNA) dilution assays from qPCR data were analyzed by the Q-Anal method and compared with the threshold method and an extrapolation method. Dilution ratios determined by the Q-Anal and threshold methods were 86 to 118% of the expected cDNA ratios, but relative errors for the Q-Anal method were 4 to 10% in comparison with 4 to 34% for the threshold method. In contrast, ratios determined by an extrapolation method were 32 to 242% of the expected cDNA ratios, with relative errors of 67 to 193%. Q-Anal will be a valuable and quick method for minimizing error in comparative qPCR analysis.
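
    The link between the slope of the exponential phase and amplification efficiency can be sketched as follows. This is a simplified stand-in for Q-Anal, not the published algorithm: it scans fixed-length windows of the log-fluorescence curve and keeps the steepest window that is still acceptably log-linear, since F_n ≈ F0·E^n implies that the slope of log F versus cycle number equals log E.

```python
import numpy as np

def qpcr_efficiency(fluor, window=5, r2_min=0.999):
    """Estimate amplification efficiency E (perfect doubling: E = 2) from
    the steepest log-linear window of a background-subtracted curve."""
    logf = np.log(np.asarray(fluor, dtype=float))
    n = np.arange(len(logf))
    best_slope = None
    for i in range(len(logf) - window + 1):
        xs, ys = n[i:i + window], logf[i:i + window]
        s, b = np.polyfit(xs, ys, 1)
        ss_res = np.sum((ys - (s * xs + b)) ** 2)
        ss_tot = np.sum((ys - ys.mean()) ** 2)
        r2 = 1.0 - ss_res / ss_tot
        # keep the steepest window that is still acceptably log-linear
        if r2 >= r2_min and (best_slope is None or s > best_slope):
            best_slope = s
    if best_slope is None:
        raise ValueError("no sufficiently log-linear window found")
    return np.exp(best_slope)
```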

  5. A comparative study of biomass integrated gasification combined cycle power systems: Performance analysis.

    PubMed

    Zang, Guiyan; Tejasvi, Sharma; Ratner, Albert; Lora, Electo Silva

    2018-05-01

    The Biomass Integrated Gasification Combined Cycle (BIGCC) power system is believed to be a potentially highly efficient way to utilize biomass to generate power. However, there is no comparative study of BIGCC systems that examines all the latest improvements in gasification agents, gas turbine combustion methods, and CO2 capture and storage options. This study examines the impact of recent advancements on BIGCC performance through exergy analysis using Aspen Plus. Results show that the exergy efficiency of these systems ranges from 22.3% to 37.1%. Furthermore, exergy analysis indicates that the gas turbine with external combustion has relatively high exergy efficiency, and the Selexol CO2 removal method has low exergy destruction. Moreover, the sensitivity analysis shows that the system exergy efficiency is more sensitive to the initial temperature and pressure ratio of the gas turbine, and depends relatively weakly on the initial temperature and initial pressure of the steam turbine. Copyright © 2018 Elsevier Ltd. All rights reserved.

  6. Comparison of Seven Methods for Boolean Factor Analysis and Their Evaluation by Information Gain.

    PubMed

    Frolov, Alexander A; Húsek, Dušan; Polyakov, Pavel Yu

    2016-03-01

    A common task in large data set analysis is searching for an appropriate data representation in a space of fewer dimensions, and one of the most efficient methods for this task is factor analysis. In this paper, we compare seven methods for Boolean factor analysis (BFA) in solving the so-called bars problem (BP), which is a BFA benchmark. The performance of the methods is evaluated by means of information gain. Study of the results obtained in solving BPs of different levels of complexity has allowed us to reveal the strengths and weaknesses of these methods. It is shown that the Likelihood maximization Attractor Neural Network with Increasing Activity (LANNIA) is the most efficient BFA method for solving the BP in many cases. The efficacy of the LANNIA method is also shown when it is applied to real data from the Kyoto Encyclopedia of Genes and Genomes database, which contains full genome sequencing for 1368 organisms, and to the text data set R52 (from Reuters 21578), typically used for label categorization.

  7. Measuring Efficiency of Secondary Healthcare Providers in Slovenia

    PubMed Central

    Blatnik, Patricia; Bojnec, Štefan; Tušak, Matej

    2017-01-01

    Abstract The chief aim of this study was to analyze the efficiency of secondary healthcare providers, focusing on Slovene general hospitals, and to present a complete picture of their technical, allocative, and cost (economic) efficiency. Methods We investigated efficiency with two econometric methods: first, we calculated efficiency scores with stochastic frontier analysis (SFA), which relies on econometric estimation of stochastic frontier functions; then, with data envelopment analysis (DEA), we calculated scores based on the linear programming method. Results The two methods produced two different conclusions: the SFA method identified Celje General Hospital as the most efficient general hospital, whereas the DEA method identified Brežice General Hospital as the most efficient. Conclusion Our results are a useful tool that can help managers, payers, and designers of healthcare policy better understand how general hospitals operate. With the best practices of general hospitals at their disposal, these participants can then decide with less difficulty on the further operation of general hospitals. PMID:28730180

  8. Can matrix solid phase dispersion (MSPD) be more simplified? Application of solventless MSPD sample preparation method for GC-MS and GC-FID analysis of plant essential oil components.

    PubMed

    Wianowska, Dorota; Dawidowicz, Andrzej L

    2016-05-01

    This paper proposes and shows the analytical capabilities of a new variant of matrix solid phase dispersion (MSPD) with a solventless blending step in the chromatographic analysis of plant volatiles. The obtained results prove that the use of a solvent is redundant, as the sorption ability of the octadecyl brush is sufficient for quantitative retention of volatiles from 9 plants differing in their essential oil composition. The extraction efficiency of the proposed simplified MSPD method is equivalent to the efficiency of the commonly applied variant of MSPD with an organic dispersing liquid and to pressurized liquid extraction, which is a much more complex, technically advanced and highly efficient technique of plant extraction. The equivalency of these methods is confirmed by analysis of variance. The proposed solventless MSPD method is precise, accurate, and reproducible. The recovery of essential oil components estimated by the MSPD method exceeds 98%, which is satisfactory for analytical purposes. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. Analysis of simulated angiographic procedures. Part 2: extracting efficiency data from audio and video recordings.

    PubMed

    Duncan, James R; Kline, Benjamin; Glaiberman, Craig B

    2007-04-01

    To create and test methods of extracting efficiency data from recordings of simulated renal stent procedures. Task analysis was performed and used to design a standardized testing protocol. Five experienced angiographers then performed 16 renal stent simulations using the Simbionix AngioMentor angiographic simulator. Audio and video recordings of these simulations were captured from multiple vantage points. The recordings were synchronized and compiled. A series of efficiency metrics (procedure time, contrast volume, and tool use) were then extracted from the recordings. The intraobserver and interobserver variability of these individual metrics was also assessed. The metrics were converted to costs and aggregated to determine the fixed and variable costs of a procedure segment or the entire procedure. Task analysis and pilot testing led to a standardized testing protocol suitable for performance assessment. Task analysis also identified seven checkpoints that divided the renal stent simulations into six segments. Efficiency metrics for these different segments were extracted from the recordings and showed excellent intra- and interobserver correlations. Analysis of the individual and aggregated efficiency metrics demonstrated large differences between segments as well as between different angiographers. These differences persisted when efficiency was expressed as either total or variable costs. Task analysis facilitated both protocol development and data analysis. Efficiency metrics were readily extracted from recordings of simulated procedures. Aggregating the metrics and dividing the procedure into segments revealed potential insights that could be easily overlooked because the simulator currently does not attempt to aggregate the metrics and only provides data derived from the entire procedure. The data indicate that analysis of simulated angiographic procedures will be a powerful method of assessing performance in interventional radiology.

  10. Efficient reliability analysis of structures with the rotational quasi-symmetric point- and the maximum entropy methods

    NASA Astrophysics Data System (ADS)

    Xu, Jun; Dang, Chao; Kong, Fan

    2017-10-01

    This paper presents a new method for efficient structural reliability analysis. In this method, a rotational quasi-symmetric point method (RQ-SPM) is proposed for evaluating the fractional moments of the performance function. Then, the derivation of the performance function's probability density function (PDF) is carried out based on the maximum entropy method in which constraints are specified in terms of fractional moments. In this regard, the probability of failure can be obtained by a simple integral over the performance function's PDF. Six examples, including a finite element-based reliability analysis and a dynamic system with strong nonlinearity, are used to illustrate the efficacy of the proposed method. All the computed results are compared with those by Monte Carlo simulation (MCS). It is found that the proposed method can provide very accurate results with low computational effort.
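    All of the paper's results are benchmarked against Monte Carlo simulation, and that baseline is easy to sketch. The performance function below is a hypothetical linear example chosen so the failure probability is known in closed form; it is not one of the paper's six examples:

```python
import math
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

# Hypothetical performance function: failure when g(x) <= 0.
def g(x1, x2):
    return 3.0 - x1 - x2

x1 = rng.normal(size=N)          # standard normal basic variables
x2 = rng.normal(size=N)
p_fail = np.mean(g(x1, x2) <= 0.0)

# Exact value: x1 + x2 ~ N(0, 2), so P(x1 + x2 > 3) = 0.5 * erfc(3 / (2 * sqrt(... ))).
exact = 0.5 * math.erfc(1.5)
print(p_fail, exact)             # both close to 0.0169
```

The point of methods like RQ-SPM is to reach comparable accuracy with far fewer performance-function evaluations than the 200,000 used here.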

  11. Exergetic analysis of autonomous power complex for drilling rig

    NASA Astrophysics Data System (ADS)

    Lebedev, V. A.; Karabuta, V. S.

    2017-10-01

    The article considers the issue of increasing the energy efficiency of the power equipment of a drilling rig. At present, diverse types of power plants are used in power supply systems. When designing and choosing a power plant, one of the main criteria is its energy efficiency. The main indicator in this case is the effective efficiency factor calculated by the method of thermal balances. In the article, it is suggested to use the exergy method to determine energy efficiency, which allows the degree of thermodynamic perfection of the system to be estimated, both relatively (the exergetic efficiency factor) and absolutely, as illustrated by the example of a gas turbine plant. An exergetic analysis of a gas turbine plant operating in a simple scheme was carried out using the program WaterSteamPro. Exergy losses in the equipment elements are calculated.

  12. Efficient path-based computations on pedigree graphs with compact encodings

    PubMed Central

    2012-01-01

    A pedigree is a diagram of family relationships, and it is often used to determine the mode of inheritance (dominant, recessive, etc.) of genetic diseases. Along with rapidly growing knowledge of genetics and the accumulation of genealogy information, pedigree data is becoming increasingly important. In large pedigree graphs, path-based methods for efficiently computing genealogical measurements, such as inbreeding and kinship coefficients of individuals, depend on efficient identification and processing of paths. In this paper, we propose a new compact path encoding scheme for large pedigrees, accompanied by an efficient algorithm for identifying paths. We demonstrate the utilization of our proposed method by applying it to the inbreeding coefficient computation. We present a time and space complexity analysis, and demonstrate the efficiency of our method for evaluating inbreeding coefficients, compared to previous methods, through experimental results using pedigree graphs with real and synthetic data. Both theoretical and experimental results demonstrate that our method is more scalable and efficient than previous methods in terms of time and space requirements. PMID:22536898
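    As a concrete illustration of the kind of genealogical measurement involved (not the authors' path encoding scheme), the classic recursive kinship computation below evaluates an inbreeding coefficient on a tiny hypothetical pedigree; the individual names and dictionary encoding are invented for the sketch:

```python
# Recursive kinship / inbreeding computation on a small pedigree.
# Pedigree encoded as {child: (father, mother)}; founders map to (None, None).
from functools import lru_cache

PED = {
    "P1": (None, None), "P2": (None, None),
    "A": ("P1", "P2"), "B": ("P1", "P2"),   # A and B are full siblings
    "X": ("A", "B"),                        # X is the offspring of a full-sib mating
}
ORDER = {name: i for i, name in enumerate(PED)}  # parents listed before children

@lru_cache(maxsize=None)
def kinship(a, b):
    if a is None or b is None:
        return 0.0
    if a == b:
        f, m = PED[a]
        return 0.5 * (1.0 + kinship(f, m))
    # Recurse on the later-born individual so recursion always moves toward founders.
    if ORDER[a] < ORDER[b]:
        a, b = b, a
    f, m = PED[a]
    return 0.5 * (kinship(f, b) + kinship(m, b))

def inbreeding(x):
    f, m = PED[x]
    return kinship(f, m)

print(inbreeding("X"))  # full-sib mating -> 0.25
```

Path-based methods such as the paper's compute the same quantity by summing (1/2)^n contributions over the paths through each common ancestor, which is where efficient path identification pays off.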

  13. Technical efficiency and resources allocation in university hospitals in Tehran, 2009-2012.

    PubMed

    Rezapour, Aziz; Ebadifard Azar, Farbod; Yousef Zadeh, Negar; Roumiani, YarAllah; Bagheri Faradonbeh, Saeed

    2015-01-01

    Assessment of a hospital's performance in achieving its goals is a basic necessity. Measuring the efficiency of hospitals in order to boost resource productivity in healthcare organizations is extremely important. The aim of this study was to measure technical efficiency and determine the status of resource allocation in selected university hospitals in Tehran, Iran. This study was conducted in 2012; the research population consisted of all hospitals affiliated with the Iran and Tehran Universities of Medical Sciences. Required data, such as human and capital resource information and production variables (hospital outputs), were collected from the data centers of the studied hospitals. Data were analyzed using the data envelopment analysis (DEA) method with DEAP 2.1 software, and the stochastic frontier analysis (SFA) method with Frontier 4.1 software. According to the DEA method, the average technical, management (pure), and scale efficiency of the studied hospitals during the study period were 0.87, 0.971, and 0.907, respectively. None of the efficiency measures followed a fixed trend over the study period; they were constantly changing. In the stochastic frontier production function analysis, the technical efficiency of the studied hospitals during the study period was estimated to be 0.389. This study identified the hospitals with the highest and lowest efficiency, and reference hospitals (more efficient peers) were indicated for the inefficient centers. According to the findings, hospitals that do not operate efficiently have the capacity to improve their technical efficiency by removing excess inputs without changing the level of outputs. Moreover, through optimal allocation of resources, most of the studied hospitals could achieve substantial economies of scale.
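    The DEA side of such a study reduces, for each hospital, to a small linear program. A minimal input-oriented, constant-returns-to-scale (CCR) sketch with invented data, using scipy rather than the DEAP package named above:

```python
# Input-oriented CCR (constant returns to scale) DEA efficiency via linear programming.
# Illustrative sketch with made-up hospital data, not the study's actual inputs/outputs.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0], [4.0], [3.0]])   # inputs, one row per hospital (e.g. staff)
Y = np.array([[2.0], [2.0], [3.0]])   # outputs (e.g. treated patients)

def ccr_efficiency(j, X, Y):
    n, m = X.shape          # n DMUs, m inputs
    s = Y.shape[1]          # s outputs
    # Decision variables: [theta, lambda_1 .. lambda_n]; minimize theta.
    c = np.r_[1.0, np.zeros(n)]
    # Input constraints:  sum_k lambda_k * x_ik - theta * x_ij <= 0
    A_in = np.hstack([-X[j].reshape(m, 1), X.T])
    # Output constraints: -sum_k lambda_k * y_rk <= -y_rj
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[j]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.x[0]

for j in range(3):
    print(j, round(ccr_efficiency(j, X, Y), 3))  # hospital 1 is dominated (theta = 0.5)
```

A score of 1.0 marks a hospital on the efficient frontier; a score below 1.0 gives the proportional input reduction achievable at the same output level, which is exactly the "removing excess inputs" finding above.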

  14. Coincidence and coherent data analysis methods for gravitational wave bursts in a network of interferometric detectors

    NASA Astrophysics Data System (ADS)

    Arnaud, Nicolas; Barsuglia, Matteo; Bizouard, Marie-Anne; Brisson, Violette; Cavalier, Fabien; Davier, Michel; Hello, Patrice; Kreckelbergh, Stephane; Porter, Edward K.

    2003-11-01

    Network data analysis methods are the only way to properly separate real gravitational wave (GW) transient events from detector noise. They can be divided into two generic classes: coincidence methods and coherent analyses. The former uses lists of selected events provided by each interferometer belonging to the network and tries to correlate them in time to identify a physical signal. Instead of this binary treatment of detector outputs (signal present or absent), the latter method first merges the interferometer data and looks for a common pattern, consistent with an assumed GW waveform and a given source location in the sky; thresholds are only applied later, to accept or reject the hypothesis made. As coherent algorithms use more complete information than coincidence methods, they are expected to provide better detection performance, but at a higher computational cost. An efficient filter must yield a good compromise between a low false alarm rate (hence triggering on data at a manageable rate) and a high detection efficiency. Therefore, the comparison of the two approaches is achieved using so-called receiver operating characteristics (ROC), giving the relationship between the false alarm rate and the detection efficiency for a given method. This paper investigates this question via Monte Carlo simulations, using the network model developed in a previous article. Its main conclusions are the following. First, a three-interferometer network such as Virgo-LIGO is found to be too small to reach good detection efficiencies at low false alarm rates: larger configurations are needed to reach a confidence level high enough to validate a detected event as a true GW. In addition, an efficient network must contain interferometers with comparable sensitivities: studying the three-interferometer LIGO network shows that the 2-km interferometer, with half the sensitivity of the others, leads to a strong reduction of performance as compared to a network of three interferometers with full sensitivity. Finally, it is shown that coherent analyses are feasible for burst searches and are clearly more efficient than coincidence strategies. Therefore, developing such methods should be an important goal of a worldwide collaborative data analysis.

  15. 10 CFR 431.197 - Manufacturer's determination of efficiency for distribution transformers.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... methods used; the mathematical model, the engineering or statistical analysis, computer simulation or... (b)(3) of this section, or by application of an alternative efficiency determination method (AEDM... section only if: (i) The AEDM has been derived from a mathematical model that represents the electrical...

  16. Estimating School Efficiency: A Comparison of Methods Using Simulated Data.

    ERIC Educational Resources Information Center

    Bifulco, Robert; Bretschneider, Stuart

    2001-01-01

    Uses simulated data to assess the adequacy of two econometric and linear-programming techniques (data-envelopment analysis and corrected ordinary least squares) for measuring performance-based school reform. In complex data sets (simulated to contain measurement error and endogeneity), these methods are inadequate efficiency measures. (Contains 40…

  17. Modified GMDH-NN algorithm and its application for global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Song, Shufang; Wang, Lu

    2017-11-01

    Global sensitivity analysis (GSA) is a very useful tool for evaluating the influence of input variables over their whole distribution range. The Sobol' method is the most commonly used of the variance-based methods, which are efficient and popular GSA techniques. High dimensional model representation (HDMR) is a popular way to compute Sobol' indices; however, its drawbacks cannot be ignored. We show that a modified GMDH-NN algorithm can calculate the metamodel coefficients efficiently, so this paper combines it with HDMR and proposes the GMDH-HDMR method. The new method shows higher precision and a faster convergence rate. Several numerical and engineering examples are used to confirm its advantages.
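    The variance-based quantity being approximated can be illustrated without any metamodel at all: the pick-freeze Monte Carlo estimator below computes a first-order Sobol' index for a toy linear function. The function, inputs and sample size are assumptions for the sketch, not taken from the paper:

```python
# First-order Sobol' index by the pick-freeze Monte Carlo estimator,
# on a toy model f(x1, x2) = x1 + 2*x2 with x1, x2 ~ U(0, 1) independent.
# Analytically S1 = Var(x1) / Var(f) = (1/12) / (5/12) = 0.2.
import numpy as np

def f(x):
    return x[:, 0] + 2.0 * x[:, 1]

rng = np.random.default_rng(0)
N = 200_000
A = rng.uniform(size=(N, 2))
B = rng.uniform(size=(N, 2))
AB1 = B.copy()
AB1[:, 0] = A[:, 0]            # "freeze" x1, resample everything else

fA, fAB1 = f(A), f(AB1)
S1 = (np.mean(fA * fAB1) - np.mean(fA) * np.mean(fAB1)) / np.var(fA)
print(round(S1, 3))            # close to the analytic value 0.2
```

Metamodel-based approaches such as GMDH-HDMR aim to reach the same indices with far fewer model evaluations than the 400,000 used by this brute-force estimator.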

  18. Improved dynamic analysis method using load-dependent Ritz vectors

    NASA Technical Reports Server (NTRS)

    Escobedo-Torres, J.; Ricles, J. M.

    1993-01-01

    The dynamic analysis of large space structures is important in order to predict their behavior under operating conditions. Computer models of large space structures are characterized by a large number of degrees of freedom, and the computational effort required to carry out the analysis is very large. Conventional methods of solution utilize a subset of the eigenvectors of the system, but for systems with many degrees of freedom, the solution of the eigenproblem is in many cases the most costly phase of the analysis. For this reason, alternate solution methods need to be considered, and it is important that the method chosen be efficient and yield accurate results. The load-dependent Ritz vector method is presented as an alternative to the classical normal mode methods for obtaining the dynamic response of large space structures. A simplified model of a space station is used to compare results. Results show that the load-dependent Ritz vector method predicts the dynamic response better than the classical normal mode method. Even though this alternate method is very promising, further studies are necessary to fully understand its attributes and limitations.
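    The load-dependent Ritz vector recurrence (static solve against the load pattern, then repeated K⁻¹M applications with M-orthonormalization) can be sketched compactly. The three-degree-of-freedom spring-mass system below is a stand-in for the space station model, chosen only so the sketch is self-contained:

```python
# Load-dependent Ritz vector generation: a minimal numpy sketch.
import numpy as np

def load_dependent_ritz(K, M, f, n_vec):
    n = K.shape[0]
    X = np.zeros((n, n_vec))
    x = np.linalg.solve(K, f)               # static response to the load pattern
    x /= np.sqrt(x @ M @ x)                 # M-normalize
    X[:, 0] = x
    for i in range(1, n_vec):
        x = np.linalg.solve(K, M @ X[:, i - 1])
        for j in range(i):                  # Gram-Schmidt M-orthogonalization
            x -= (X[:, j] @ M @ x) * X[:, j]
        x /= np.sqrt(x @ M @ x)
        X[:, i] = x
    return X

K = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 1.0]])            # stiffness of a 3-DOF spring chain
M = np.eye(3)                               # unit masses
f = np.array([0.0, 0.0, 1.0])               # spatial load pattern
X = load_dependent_ritz(K, M, f, 2)
print(np.allclose(X.T @ M @ X, np.eye(2)))  # True: vectors are M-orthonormal
```

Because the basis starts from the actual load pattern, a handful of such vectors often captures the response that many exact eigenvectors would be needed to represent, which is the efficiency argument made above.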

  19. Oxalate analysis methodology for decayed wood

    Treesearch

    Carol A. Clausen; William Kenealy; Patricia K. Lebow

    2008-01-01

    Oxalate from partially decayed southern pine wood was analyzed by HPLC or colorimetric assay. Oxalate extraction efficiency, assessed by comparing analysis of whole wood cubes with ground wood, showed that both wood geometries could be extracted with comparable efficiency. To differentiate soluble oxalate from total oxalate, three extraction methods were assessed,...

  20. The Use of ATR-FTIR in Conjunction with Thermal Analysis Methods for Efficient Identification of Polymer Samples: A Qualitative Multiinstrument Instrumental Analysis Laboratory Experiment

    ERIC Educational Resources Information Center

    Dickson-Karn, Nicole M.

    2017-01-01

    A multi-instrument approach has been applied to the efficient identification of polymers in an upper-division undergraduate instrumental analysis laboratory course. Attenuated total reflectance Fourier transform infrared spectroscopy (ATR-FTIR) is used in conjunction with differential scanning calorimetry (DSC) to identify 18 polymer samples and…

  1. Aerodynamic design optimization using sensitivity analysis and computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay; Eleshaky, Mohamed E.

    1991-01-01

    A new and efficient method is presented for aerodynamic design optimization, which is based on a computational fluid dynamics (CFD)-sensitivity analysis algorithm. The method is applied to design a scramjet-afterbody configuration for optimized axial thrust. The Euler equations are solved for the inviscid analysis of the flow, which in turn provides the objective function and the constraints. The CFD analysis is then coupled with an optimization procedure that uses a constrained minimization method. The sensitivity coefficients, i.e. gradients of the objective function and the constraints, needed for the optimization are obtained using a quasi-analytical method rather than the traditional brute-force method of finite difference approximations. During the one-dimensional search of the optimization procedure, an approximate flow analysis (predicted flow) based on a first-order Taylor series expansion is used to reduce the computational cost. Finally, the sensitivity of the optimum objective function to various design parameters, which are kept constant during the optimization, is computed to predict new optimum solutions. The flow analysis results for the demonstrative example are compared with experimental data. It is shown that the method is more efficient than the traditional methods.

  2. Spectral analysis for GNSS coordinate time series using chirp Fourier transform

    NASA Astrophysics Data System (ADS)

    Feng, Shengtao; Bo, Wanju; Ma, Qingzun; Wang, Zifan

    2017-12-01

    Spectral analysis of global navigation satellite system (GNSS) coordinate time series provides a principal tool for understanding the intrinsic mechanisms that affect tectonic movements. Spectral analysis methods such as the fast Fourier transform, the Lomb-Scargle spectrum, the evolutionary power spectrum, the wavelet power spectrum, etc. are used to find periodic characteristics in time series. Among these, the chirp Fourier transform (CFT), with less stringent requirements, is tested on synthetic and actual GNSS coordinate time series, which proves the accuracy and efficiency of the method. With the length of the series limited only to even numbers, CFT provides a convenient tool for windowed spectral analysis. The results for ideal synthetic data prove CFT accurate and efficient, while the results for actual data show that CFT can be used to derive periodic information from GNSS coordinate time series.
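    The kind of periodicity such methods extract can be shown on a synthetic daily coordinate series with a plain FFT periodogram, i.e. one of the baseline methods the paper compares CFT against (this sketch is not CFT itself, and the series parameters are invented):

```python
# Detecting an annual component in a synthetic daily GNSS-like coordinate series
# with an FFT periodogram (one of the baselines named in the abstract).
import numpy as np

rng = np.random.default_rng(1)
n_days = 3650                                   # ten years of daily solutions
t = np.arange(n_days)
series = 5.0 * np.sin(2 * np.pi * t / 365.25) + rng.normal(0.0, 1.0, n_days)

spec = np.abs(np.fft.rfft(series - series.mean())) ** 2
freqs = np.fft.rfftfreq(n_days, d=1.0)          # cycles per day
peak = freqs[np.argmax(spec[1:]) + 1]           # skip the zero frequency
print(round(1.0 / peak, 1))                     # dominant period in days, ~365
```

The FFT ties frequency resolution to the series length; CFT's appeal in the paper is relaxing such restrictions while keeping comparable accuracy.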

  3. Simultaneous quantitative analysis of olmesartan, amlodipine and hydrochlorothiazide in their combined dosage form utilizing classical and alternating least squares based chemometric methods.

    PubMed

    Darwish, Hany W; Bakheit, Ahmed H; Abdelhameed, Ali S

    2016-03-01

    Simultaneous spectrophotometric analysis of a multi-component dosage form of olmesartan, amlodipine and hydrochlorothiazide used for the treatment of hypertension has been carried out using various chemometric methods. Multivariate calibration methods include classical least squares (CLS) executed by net analyte processing (NAP-CLS), orthogonal signal correction (OSC-CLS) and direct orthogonal signal correction (DOSC-CLS) in addition to multivariate curve resolution-alternating least squares (MCR-ALS). Results demonstrated the efficiency of the proposed methods as quantitative tools of analysis as well as their qualitative capability. The three analytes were determined precisely using the aforementioned methods in an external data set and in a dosage form after optimization of experimental conditions. Finally, the efficiency of the models was validated via comparison with the partial least squares (PLS) method in terms of accuracy and precision.
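    At the core of the CLS family of methods above is an ordinary least-squares solve of the linear mixing model (mixture spectrum = pure spectra × concentrations). A self-contained sketch with simulated spectra; all values are invented, not the olmesartan/amlodipine/hydrochlorothiazide data:

```python
# Classical least squares (CLS) sketch: resolve a three-component mixture
# spectrum from hypothetical pure-component spectra.
import numpy as np

rng = np.random.default_rng(2)
n_wavelengths = 100
S = rng.random((n_wavelengths, 3))          # columns: pure spectra of the 3 analytes
c_true = np.array([0.5, 1.2, 0.8])          # "concentrations" in the mixture
mixture = S @ c_true + rng.normal(0.0, 1e-3, n_wavelengths)   # small noise

c_est, *_ = np.linalg.lstsq(S, mixture, rcond=None)
print(np.round(c_est, 2))                   # ~ [0.5, 1.2, 0.8]
```

The NAP, OSC and DOSC variants discussed in the abstract differ in how the spectra are preprocessed before this solve, not in the solve itself.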

  4. Effect of various pretreatment methods on anaerobic mixed microflora to enhance biohydrogen production utilizing dairy wastewater as substrate.

    PubMed

    Venkata Mohan, S; Lalit Babu, V; Sarma, P N

    2008-01-01

    The influence of different pretreatment methods applied to an anaerobic mixed inoculum was evaluated for selectively enriching a hydrogen (H2)-producing mixed culture using dairy wastewater as substrate. The experimental data showed the feasibility of molecular biohydrogen generation utilizing dairy wastewater as the primary carbon source through metabolic participation. However, the efficiency of H2 evolution and the substrate removal efficiency were found to depend on the type of pretreatment procedure applied to the parent inoculum. Among the studied pretreatment methods, the chemical pretreatment procedure (2-bromoethane sulphonic acid sodium salt (0.2 g/l); 24 h) enabled a higher H2 yield along with concurrent substrate removal. On the contrary, the heat-shock pretreatment procedure (100 degrees C; 1 h) resulted in a relatively low H2 yield. Compared to control experiments, all the adopted pretreatment methods documented higher H2 generation efficiency. In the combination experiments, integration of pH pretreatment (pH 3; adjusted with ortho-phosphoric acid; 24 h) with chemical pretreatment evidenced higher H2 production. Data envelopment analysis (DEA), a frontier analysis technique, was successfully applied to enumerate the relative efficiency of the different pretreatment methods, considering the pretreatment procedures as input and the cumulative H2 production rate and substrate degradation rate as the two corresponding outputs.

  5. The Efficiency of Higher Education Institutions in England Revisited: Comparing Alternative Measures

    ERIC Educational Resources Information Center

    Johnes, Geraint; Tone, Kaoru

    2017-01-01

    Data envelopment analysis (DEA) has often been used to evaluate efficiency in the context of higher education institutions. Yet there are numerous alternative non-parametric measures of efficiency available. This paper compares efficiency scores obtained for institutions of higher education in England, 2013-2014, using three different methods: the…

  6. Chapter 13: Assessing Persistence and Other Evaluation Issues Cross-Cutting Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurnik, Charles W; Violette, Daniel M.

    Addressing other evaluation issues that have been raised in the context of energy efficiency programs, this chapter focuses on methods used to address the persistence of energy savings, which is an important input to the benefit/cost analysis of energy efficiency programs and portfolios. In addition to discussing 'persistence' (which refers to the stream of benefits over time from an energy efficiency measure or program), this chapter provides a summary treatment of these issues: synergies across programs, rebound, dual baselines, and errors in variables (the measurement and/or accuracy of input variables to the evaluation).

  7. Review of Computational Stirling Analysis Methods

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Wilson, Scott D.; Tew, Roy C.

    2004-01-01

    Nuclear thermal-to-electric power conversion carries the promise of longer duration missions and higher scientific data transmission rates back to Earth for both Mars rovers and deep space missions. A free-piston Stirling convertor is a candidate technology that is considered an efficient and reliable power conversion device for such purposes. While already very efficient, it is believed that better Stirling engines could be developed if the losses inherent in current designs were better understood. However, these engines are difficult to instrument, and so efforts are underway to simulate a complete Stirling engine numerically. This has only recently been attempted, and a review of the methods leading up to and including such computational analysis is presented. Finally, it is proposed that the quality and depth of understanding of Stirling losses may be improved by utilizing the higher fidelity and efficiency of recently developed numerical methods. One such method, the Ultra HI-Fi technique, is presented in detail.

  8. A POSTERIORI ERROR ANALYSIS OF TWO STAGE COMPUTATION METHODS WITH APPLICATION TO EFFICIENT DISCRETIZATION AND THE PARAREAL ALGORITHM.

    PubMed

    Chaudhry, Jehanzeb Hameed; Estep, Don; Tavener, Simon; Carey, Varis; Sandelin, Jeff

    2016-01-01

    We consider numerical methods for initial value problems that employ a two stage approach consisting of solution on a relatively coarse discretization followed by solution on a relatively fine discretization. Examples include adaptive error control, parallel-in-time solution schemes, and efficient solution of adjoint problems for computing a posteriori error estimates. We describe a general formulation of two stage computations then perform a general a posteriori error analysis based on computable residuals and solution of an adjoint problem. The analysis accommodates various variations in the two stage computation and in formulation of the adjoint problems. We apply the analysis to compute "dual-weighted" a posteriori error estimates, to develop novel algorithms for efficient solution that take into account cancellation of error, and to the Parareal Algorithm. We test the various results using several numerical examples.
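    The Parareal Algorithm named above is itself a two-stage coarse/fine computation of exactly the kind the analysis covers. A minimal sketch for the scalar test problem y' = -y on [0, 1]; the propagators (coarse Euler, fine RK4) and step counts are choices made for the illustration, not the paper's:

```python
# Parareal sketch: coarse predictor G, fine corrector F, iterative correction
# U_{n+1}^{k+1} = G(U_n^{k+1}) + F(U_n^k) - G(U_n^k).
import math

def G(y, t0, t1):                  # coarse propagator: one explicit Euler step
    return y + (t1 - t0) * (-y)

def F(y, t0, t1, substeps=20):     # fine propagator: RK4 with several substeps
    h = (t1 - t0) / substeps
    for _ in range(substeps):
        k1 = -y
        k2 = -(y + 0.5 * h * k1)
        k3 = -(y + 0.5 * h * k2)
        k4 = -(y + h * k3)
        y += h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

N = 10                             # number of time slices
ts = [i / N for i in range(N + 1)]
U = [1.0] * (N + 1)
for n in range(N):                 # iteration 0: sequential coarse sweep
    U[n + 1] = G(U[n], ts[n], ts[n + 1])

for k in range(5):                 # parareal corrections
    Fk = [F(U[n], ts[n], ts[n + 1]) for n in range(N)]   # parallelizable stage
    Gk = [G(U[n], ts[n], ts[n + 1]) for n in range(N)]
    for n in range(N):             # cheap sequential coarse update
        U[n + 1] = G(U[n], ts[n], ts[n + 1]) + Fk[n] - Gk[n]

print(abs(U[-1] - math.exp(-1.0)))  # small after a few iterations
```

The expensive fine solves in each iteration are independent across slices, which is what makes the scheme parallel-in-time; the a posteriori analysis in the paper quantifies the error such two-stage computations leave behind.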

  9. Analysis of data collected from right and left limbs: Accounting for dependence and improving statistical efficiency in musculoskeletal research.

    PubMed

    Stewart, Sarah; Pearson, Janet; Rome, Keith; Dalbeth, Nicola; Vandal, Alain C

    2018-01-01

    Statistical techniques currently used in musculoskeletal research often inefficiently account for paired-limb measurements or the relationship between measurements taken from multiple regions within limbs. This study compared three commonly used analysis methods with a mixed-models approach that appropriately accounted for the association between limbs, regions, and trials and that utilised all information available from repeated trials. Four analysis methods were applied to an existing data set containing plantar pressure data, which was collected for seven masked regions on right and left feet, over three trials, across three participant groups. Methods 1-3 averaged data over trials and analysed right foot data (Method 1), data from a randomly selected foot (Method 2), and averaged right and left foot data (Method 3). Method 4 used all available data in a mixed-effects regression that accounted for repeated measures taken for each foot, foot region and trial. Confidence interval widths for the mean differences between groups for each foot region were used as a criterion for comparison of statistical efficiency. Mean differences in pressure between groups were similar across methods for each foot region, while the confidence interval widths were consistently smaller for Method 4. Method 4 also revealed significant between-group differences that were not detected by Methods 1-3. A mixed-effects linear model approach generates improved efficiency and power by producing more precise estimates compared to alternative approaches that discard information in the process of accounting for paired-limb measurements. This approach is recommended for generating more clinically sound and statistically efficient research outputs. Copyright © 2017 Elsevier B.V. All rights reserved.
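    The efficiency gain from pooling paired-limb data rather than discarding one limb can be demonstrated with a small simulation. The sample size and left/right correlation below are invented for the sketch; this is not the study's plantar pressure data:

```python
# Why using both limbs helps: for correlated left/right measurements, the group
# mean built from per-subject limb averages has a smaller standard error than
# a right-limb-only estimate.
import numpy as np

rng = np.random.default_rng(3)
n_subj, n_sims, rho = 30, 2000, 0.5
cov = [[1.0, rho], [rho, 1.0]]      # within-subject left/right correlation

est_right, est_both = [], []
for _ in range(n_sims):
    limbs = rng.multivariate_normal([0.0, 0.0], cov, size=n_subj)
    est_right.append(limbs[:, 1].mean())        # right-limb-only group mean
    est_both.append(limbs.mean(axis=1).mean())  # mean of per-subject limb averages

print(np.std(est_right), np.std(est_both))      # pooled estimator varies less
```

Theoretically the pooled estimator's standard error shrinks by the factor sqrt((1 + rho) / 2), about 0.87 here; a full mixed-effects model gains further by also exploiting region and trial structure.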

  10. Distributed collaborative response surface method for mechanical dynamic assembly reliability design

    NASA Astrophysics Data System (ADS)

    Bai, Guangchen; Fei, Chengwei

    2013-11-01

    Because of the randomness of the many factors influencing the dynamic assembly relationships of complex machinery, the reliability analysis of dynamic assembly relationships needs to be carried out from a probabilistic perspective. To improve the accuracy and efficiency of dynamic assembly relationship reliability analysis, mechanical dynamic assembly reliability (MDAR) theory and a distributed collaborative response surface method (DCRSM) are proposed. The mathematical model of the DCRSM is established based on a quadratic response surface function and verified through the assembly relationship reliability analysis of an aeroengine high pressure turbine (HPT) blade-tip radial running clearance (BTRRC). Comparison of the DCRSM, the traditional response surface method (RSM), and the Monte Carlo method (MCM) shows that the DCRSM is not only able to accomplish the computational task when the number of simulations exceeds 100,000, which is impossible for the other methods, but also achieves computational precision basically consistent with the MCM and improved by 0.40% to 4.63% over the RSM; furthermore, the computational efficiency of the DCRSM is up to about 188 times that of the MCM and 55 times that of the RSM for 10,000 simulations. The DCRSM is demonstrated to be a feasible and effective approach for markedly improving the computational efficiency and accuracy of MDAR analysis. Thus, the proposed research provides promising theory and methods for MDAR design and optimization, and opens a novel research direction of probabilistic analysis for developing high-performance, high-reliability aeroengines.

  11. Data envelopment analysis with upper bound on output to measure efficiency performance of departments in Malikussaleh University

    NASA Astrophysics Data System (ADS)

    Abdullah, Dahlan; Suwilo, Saib; Tulus; Mawengkang, Herman; Efendi, Syahril

    2017-09-01

    The higher education system in Indonesia can be considered not only an important source of knowledge development in the country but also a means of creating positive living conditions. It is therefore not surprising that enrollments in higher education continue to expand. The implication of this situation, however, is that the Indonesian government necessarily has to provide more funds. In the interest of accountability, it is essential to measure the efficiency of these institutions. Data envelopment analysis (DEA) is a method for evaluating the technical efficiency of production units that have multiple inputs and outputs. The higher learning institution considered in this paper is Malikussaleh University, located in Lhokseumawe, a city in the Aceh province of Indonesia. This paper develops a method to evaluate the efficiency of all departments in Malikussaleh University using DEA with bounded output. Accordingly, we present some important differences in the efficiency of those departments. Finally, we discuss the efforts these departments should make in order to become efficient.

  12. Fast Computation and Assessment Methods in Power System Analysis

    NASA Astrophysics Data System (ADS)

    Nagata, Masaki

    Power system analysis is essential for efficient and reliable power system operation and control. Recently, online security assessment systems have become important, as more efficient use of power networks is increasingly required. In this article, fast power system analysis techniques such as contingency screening, parallel processing and intelligent systems application are briefly surveyed from the viewpoint of their application to online dynamic security assessment.

  13. The efficiency of health care production in OECD countries: A systematic review and meta-analysis of cross-country comparisons.

    PubMed

    Varabyova, Yauheniya; Müller, Julia-Maria

    2016-03-01

    There has been an ongoing interest in the analysis and comparison of the efficiency of health care systems using nonparametric and parametric applications. The objective of this study was to review the current state of the literature and to synthesize the findings on health system efficiency in OECD countries. We systematically searched five electronic databases through August 2014 and identified 22 studies that analyzed the efficiency of health care production at the country level. We summarized these studies with respect to their samples, methods, and utilized variables. We developed and applied a checklist of 14 items to assess the quality of the reviewed studies along four dimensions: reporting, external validity, bias, and power. Moreover, to examine the internal validity of the findings, we meta-analyzed the efficiency estimates reported in 35 models from ten studies. The qualitative synthesis of the literature indicated large differences in study designs and methods. The meta-analysis revealed low correlations between country rankings, suggesting a lack of internal validity of the efficiency estimates. In conclusion, the methodological problems of existing cross-country comparisons of the efficiency of health care systems call into question the ability of these comparisons to provide meaningful guidance to policy-makers. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  14. A Simple and Computationally Efficient Sampling Approach to Covariate Adjustment for Multifactor Dimensionality Reduction Analysis of Epistasis

    PubMed Central

    Gui, Jiang; Andrew, Angeline S.; Andrews, Peter; Nelson, Heather M.; Kelsey, Karl T.; Karagas, Margaret R.; Moore, Jason H.

    2010-01-01

    Epistasis, or gene-gene interaction, is a fundamental component of the genetic architecture of complex traits such as disease susceptibility. Multifactor dimensionality reduction (MDR) was developed as a nonparametric and model-free method to detect epistasis when there are no significant marginal genetic effects. However, in many studies of complex disease, other covariates such as age of onset and smoking status could have a strong main effect and may potentially interfere with MDR's ability to achieve its goal. In this paper, we present a simple and computationally efficient sampling method to adjust for covariate effects in MDR. We use simulation to show that after adjustment, MDR has sufficient power to detect true gene-gene interactions. We also compare our method with the state-of-the-art technique in covariate adjustment. The results suggest that our proposed method performs similarly, but is more computationally efficient. We then apply this new method to an analysis of a population-based bladder cancer study in New Hampshire. PMID:20924193

  15. An efficient reliability algorithm for locating design point using the combination of importance sampling concepts and response surface method

    NASA Astrophysics Data System (ADS)

    Shayanfar, Mohsen Ali; Barkhordari, Mohammad Ali; Roudak, Mohammad Amin

    2017-06-01

    Monte Carlo simulation (MCS) is a useful tool for computing the probability of failure in reliability analysis. However, the large number of required random samples makes it time-consuming. The response surface method (RSM) is another common method in reliability analysis. Although RSM is widely used for its simplicity, it cannot be trusted in highly nonlinear problems due to its linear nature. In this paper, a new efficient algorithm combining importance sampling, a class of MCS, with RSM is proposed. In the proposed algorithm, the analysis starts with importance sampling concepts, using a proposed two-step updating rule for the design point; this stage finishes after a small number of samples have been generated. RSM then takes over using Bucher's experimental design, with the last design point and a proposed effective length serving as the center point and radius of Bucher's approach, respectively. Illustrative numerical examples demonstrate the simplicity and efficiency of the proposed algorithm and the effectiveness of the proposed rules.
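    As background to the combination described above, a minimal importance-sampling estimate of a failure probability can be sketched as follows. This is not the authors' algorithm: the one-dimensional limit state, the assumed design point, and the sample count are all illustrative.

```python
# Importance sampling for P[g(x) <= 0], x ~ N(0, 1): draw samples from a
# normal density shifted to an assumed design point, then reweight each
# failing sample by the ratio of target to sampling densities.
import math
import random

def importance_sampling_pf(g, design_point, n=20000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(design_point, 1.0)
        if g(x) <= 0.0:
            # Normalization constants of the two normal densities cancel.
            w = math.exp(-0.5 * x * x) / math.exp(-0.5 * (x - design_point) ** 2)
            total += w
    return total / n

# Limit state g(x) = 3 - x: failure when x > 3; the exact P_f is about 1.35e-3.
pf = importance_sampling_pf(lambda x: 3.0 - x, design_point=3.0)
print(f"estimated P_f = {pf:.2e}")
```

    Centering the sampling density at the design point makes roughly half the samples fail, so far fewer draws are needed than with crude Monte Carlo, which is the efficiency argument the abstract relies on.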

  16. Energy efficiency analysis of the manipulation process by the industrial objects with the use of Bernoulli gripping devices

    NASA Astrophysics Data System (ADS)

    Savkiv, Volodymyr; Mykhailyshyn, Roman; Duchon, Frantisek; Mikhalishin, Mykhailo

    2017-11-01

    The article addresses the topical issue of reducing energy consumption in the transportation of industrial objects. The energy efficiency of object manipulation is studied using an orientation optimization method under different gripping approaches. An analysis is proposed of how the components of the inertial forces acting on the manipulated object affect the required force characteristics and energy consumption of a Bernoulli gripping device. The economic efficiency of transporting the object with the optimal orientation of the Bernoulli gripping device, compared to transportation without re-orientation, is demonstrated.

  17. Efficient analysis of mode profiles in elliptical microcavity using dynamic-thermal electron-quantum medium FDTD method.

    PubMed

    Khoo, E H; Ahmed, I; Goh, R S M; Lee, K H; Hung, T G G; Li, E P

    2013-03-11

    The dynamic-thermal electron-quantum medium finite-difference time-domain (DTEQM-FDTD) method is used for efficient analysis of mode profiles in an elliptical microcavity. The resonance peak of the elliptical microcavity is studied by varying the length ratio. It is observed that at some length ratios a cavity mode is excited instead of a whispering gallery mode, indicating that the mode profiles are length-ratio dependent. By implementing the DTEQM-FDTD method on a graphics processing unit (GPU), the simulation time is reduced by a factor of 300 compared to the CPU. This leads to an efficient optimization approach for designing microcavity lasers for a wide range of applications in photonic integrated circuits.

  18. Enhanced analysis of real-time PCR data by using a variable efficiency model: FPK-PCR

    PubMed Central

    Lievens, Antoon; Van Aelst, S.; Van den Bulcke, M.; Goetghebeur, E.

    2012-01-01

    Current methodology in real-time polymerase chain reaction (PCR) analysis performs well provided PCR efficiency remains constant over reactions. Yet small changes in efficiency can lead to large quantification errors. Particularly in biological samples, the possible presence of inhibitors poses a challenge. We present a new approach to single-reaction efficiency calculation, called Full Process Kinetics-PCR (FPK-PCR). It combines a kinetically more realistic model with flexible adaptation to the full range of data. By reconstructing the entire chain of cycle efficiencies, rather than restricting the focus to a ‘window of application’, one extracts additional information and removes a level of arbitrariness. The maximal efficiency estimates returned by the model are comparable in accuracy and precision to both the gold standard of serial dilution and other single-reaction efficiency methods. The cycle-to-cycle changes in efficiency described by the FPK-PCR procedure stay considerably closer to the data than those from other S-shaped models. Assessing individual cycle efficiencies returns more information than other single-efficiency methods: it allows in-depth interpretation of real-time PCR data and reconstruction of the fluorescence data, providing quality control. Finally, by implementing a global efficiency model, reproducibility is improved, as the selection of a window of application is avoided. PMID:22102586
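    The idea of cycle-dependent efficiency can be illustrated with a toy recurrence F_{n+1} = F_n(1 + E_n); the decay law for E_n below is invented for illustration and is not the FPK-PCR model itself.

```python
# Toy amplification curve with cycle-dependent efficiency: fluorescence grows
# as F <- F * (1 + E), and E shrinks toward zero as the signal rises,
# mimicking reagent depletion. Parameters are invented.

def simulate_pcr(f0=1e-6, cycles=40, e_max=0.95, k=5e4):
    f, curve = f0, []
    for _ in range(cycles):
        e_n = e_max / (1.0 + k * f)   # invented efficiency-decay law
        f *= 1.0 + e_n
        curve.append(f)
    return curve

curve = simulate_pcr()
early_gain = curve[1] / curve[0]    # near-exponential phase
late_gain = curve[-1] / curve[-2]   # plateau phase
print(f"early per-cycle gain {early_gain:.2f}, late gain {late_gain:.3f}")
```

    Fitting a per-cycle efficiency to an observed curve, rather than assuming one constant efficiency, is the core move the FPK-PCR approach makes.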

  19. Efficient parameter estimation in longitudinal data analysis using a hybrid GEE method.

    PubMed

    Leung, Denis H Y; Wang, You-Gan; Zhu, Min

    2009-07-01

    The method of generalized estimating equations (GEEs) provides consistent estimates of the regression parameters in a marginal regression model for longitudinal data, even when the working correlation model is misspecified (Liang and Zeger, 1986). However, the efficiency of a GEE estimate can be seriously affected by the choice of the working correlation model. This study addresses this problem by proposing a hybrid method that combines multiple GEEs based on different working correlation models, using the empirical likelihood method (Qin and Lawless, 1994). Analyses show that this hybrid method is more efficient than a GEE using a misspecified working correlation model. Furthermore, if one of the working correlation structures correctly models the within-subject correlations, then this hybrid method provides the most efficient parameter estimates. In simulations, the hybrid method's finite-sample performance is superior to a GEE under any of the commonly used working correlation models and is almost fully efficient in all scenarios studied. The hybrid method is illustrated using data from a longitudinal study of the respiratory infection rates in 275 Indonesian children.

  20. Analysis and determination the efficiency of the European health systems.

    PubMed

    Del Rocío Moreno-Enguix, María; Gómez-Gallego, Juan Cándido; Gómez Gallego, María

    2018-01-01

    The current economic crisis has increased interest in analyzing the efficiency of health care systems, as their funding is a very important part of the budgets of different countries. This work determines the efficiency of the health services in European countries by applying data envelopment analysis. In addition, the combined application of data envelopment analysis methods and ACP can provide an evaluation of efficiency with respect to differently oriented productive health systems in the different countries. The results show that the models with the lowest level of efficiency are those whose input is beds, followed by the models whose input is physicians. Finally, we apply AD to select a few simple indicators that facilitate control of the level of operational efficiency of a health system. Copyright © 2017 John Wiley & Sons, Ltd.

  1. Efficient visualization of urban spaces

    NASA Astrophysics Data System (ADS)

    Stamps, A. E.

    2012-10-01

    This chapter presents a new method for calculating efficiency and applies that method to the issues of selecting simulation media and evaluating the contextual fit of new buildings in urban spaces. The new method is called "meta-analysis". A meta-analytic review of 967 environments indicated that static color simulations are the most efficient media for visualizing urban spaces. For contextual fit, four original experiments are reported on how strongly five factors influence visual appeal of a street: architectural style, trees, height of a new building relative to the heights of existing buildings, setting back a third story, and distance. A meta-analysis of these four experiments and previous findings, covering 461 environments, indicated that architectural style, trees, and height had effects strong enough to warrant implementation, but the effects of setting back third stories and distance were too small to warrant implementation.
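    The pooling step a meta-analysis performs can be sketched with standard inverse-variance weighting; the effect sizes and variances below are invented, not the chapter's data.

```python
# Fixed-effect meta-analysis sketch: each experiment contributes an effect
# size weighted by the inverse of its variance, so precise studies count more.

def pooled_effect(effects, variances):
    """Inverse-variance pooled estimate and its standard error."""
    weights = [1.0 / v for v in variances]
    est = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = (1.0 / sum(weights)) ** 0.5
    return est, se

effects = [0.62, 0.48, 0.71, 0.55]     # invented per-experiment effect sizes
variances = [0.04, 0.02, 0.05, 0.03]   # invented sampling variances
est, se = pooled_effect(effects, variances)
print(f"pooled effect = {est:.3f} +/- {1.96 * se:.3f}")
```

    A pooled effect whose confidence interval clears a practical threshold is the kind of evidence the chapter uses to decide which factors warrant implementation.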

  2. Development of a method for efficient cost-effective screening of Aspergillus niger mutants having increased production of glucoamylase.

    PubMed

    Zhu, Xudong; Arman, Bessembayev; Chu, Ju; Wang, Yonghong; Zhuang, Yingping

    2017-05-01

    To develop an efficient, cost-effective screening process to improve production of glucoamylase in Aspergillus niger. The cultivation of A. niger was achieved with well-dispersed morphology in 48-deep-well microtiter plates, which increased sample throughput compared to traditional flask cultivation. There was a close negative correlation between glucoamylase activity and the pH of the fermentation broth. A novel high-throughput analysis method using Methyl Orange was developed. When compared to the conventional analysis method using 4-nitrophenyl α-D-glucopyranoside as substrate, a correlation coefficient of 0.96 was obtained by statistical analysis. Using this novel screening method, we acquired a strain with an activity of 2.2 × 10³ U ml⁻¹, a 70% higher yield of glucoamylase than its parent strain.
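    The validation step, correlating the Methyl Orange readout against the conventional substrate assay, amounts to computing a Pearson correlation; the paired readings below are invented for illustration.

```python
# Pearson correlation between two assay readouts on the same samples;
# a coefficient near 1 supports substituting the cheaper assay.

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Invented paired activity readings from the two hypothetical assays.
methyl_orange = [0.21, 0.35, 0.48, 0.60, 0.74, 0.88]
conventional = [0.19, 0.37, 0.45, 0.63, 0.71, 0.90]
print(f"r = {pearson_r(methyl_orange, conventional):.3f}")
```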

  3. Emergy Analysis and Sustainability Efficiency Analysis of Different Crop-Based Biodiesel in Life Cycle Perspective

    PubMed Central

    Ren, Jingzheng; Manzardo, Alessandro; Mazzi, Anna; Fedele, Andrea; Scipioni, Antonio

    2013-01-01

    Biodiesel, as a promising alternative energy resource, has become a hot spot in chemical engineering, but its sustainability remains debated. In order to analyze the sustainability of biodiesel production systems and select the most sustainable scenario, various kinds of crop-based biodiesel, including soybean-, rapeseed-, sunflower-, jatropha- and palm-based biodiesel production options, are studied by emergy analysis; the soybean-based scenario is recognized as the most sustainable scenario for further study in China. The DEA method is used to evaluate the sustainability efficiencies of these options; the biodiesel production systems based on soybean, sunflower, and palm are considered DEA-efficient, whereas the rapeseed-based and jatropha-based scenarios need to be improved, and the improvement methods have also been specified. PMID:23766723

  4. An efficiency study of the simultaneous analysis and design of structures

    NASA Technical Reports Server (NTRS)

    Striz, Alfred G.; Wu, Zhiqi; Sobieski, Jaroslaw

    1995-01-01

    The efficiency of the Simultaneous Analysis and Design (SAND) approach in the minimum weight optimization of structural systems subject to strength and displacement constraints, as well as size side constraints, is investigated. SAND allows an optimization to take place in one single operation, as opposed to the more traditional and sequential Nested Analysis and Design (NAND) method, where analyses and optimizations alternate. Thus, SAND has the advantage that the stiffness matrix is never factored during the optimization, retaining its original sparsity. One of SAND's disadvantages is the increase in the number of design variables and in the associated number of constraint gradient evaluations. If SAND is to be an acceptable player in the optimization field, it is essential to investigate the efficiency of the method and to present a possible cure for any inherent deficiencies.

  5. Evaluation Methodology for Surface Engineering Techniques to Improve Powertrain Efficiency in Military Vehicles

    DTIC Science & Technology

    2012-06-01

    Conducting metrology, surface analysis, and metallography/fractography interrogations of samples to correlate microstructure with friction... are examined using a variety of methods such as metallography, chemical analysis, fractography, and hardness measurements. These methods assist in

  6. Efficient alignment-free DNA barcode analytics.

    PubMed

    Kuksa, Pavel; Pavlovic, Vladimir

    2009-11-10

    In this work we consider barcode DNA analysis problems and address them using alternative, alignment-free methods and representations which model sequences as collections of short sequence fragments (features). The methods use fixed-length representations (spectra) of barcode sequences to measure similarities or dissimilarities between sequences coming from the same or different species. The spectrum-based representation not only allows accurate and computationally efficient species classification, but also opens the possibility for accurate clustering analysis of putative species barcodes and identification of critical within-barcode loci distinguishing barcodes of different sample groups. The new alignment-free methods provide highly accurate and fast DNA barcode-based identification and classification of species, with substantial improvements in accuracy and speed over state-of-the-art barcode analysis methods. We evaluate our methods on problems of species classification and identification using barcodes, important and relevant analytical tasks in many practical applications (adverse species movement monitoring, sampling surveys for unknown or pathogenic species identification, biodiversity assessment, etc.). On several benchmark barcode datasets, including ACG, Astraptes, Hesperiidae, Fish larvae, and Birds of North America, the proposed alignment-free methods considerably improve prediction accuracy compared to prior results. We also observe significant running time improvements over the state-of-the-art methods. Our results show that the newly developed alignment-free methods for DNA barcoding can efficiently and with high accuracy identify specimens by examining only a few barcode features, resulting in increased scalability and interpretability of current computational approaches to barcoding.
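    The fixed-length spectrum representation the abstract describes can be sketched as k-mer counting over the DNA alphabet; the sequence and the choice of k are illustrative.

```python
# Spectrum (k-mer count) representation: each sequence maps to a fixed-length
# vector of substring counts, making variable-length barcodes comparable.
from itertools import product

def kmer_spectrum(seq, k=3):
    """Count occurrences of every possible DNA k-mer in seq."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = dict.fromkeys(kmers, 0)
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if kmer in counts:          # skips windows with ambiguous bases
            counts[kmer] += 1
    return counts

spec = kmer_spectrum("ACGTACGTAC")
print(spec["ACG"], spec["CGT"], sum(spec.values()))  # → 2 2 8
```

    Distances between such count vectors can then feed any standard classifier or clustering method, with no alignment step required.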

  7. Histological method for evaluation of the efficiency of Enerlit-Clima.

    PubMed

    Gol'dshtein, D V; Vikhlyantseva, E V; Sakharova, N K; Maevskii, E I; Pogorelov, A G; Uchitel', M L

    2004-08-01

    We propose a method for evaluating the anticlimacteric efficiency of a drug by its effect on the estrous cycle. The study was carried out on 9-month-old mice with retained, but notably reduced, reproductive function. Analysis of the cellular components of the estrous cycle was carried out on histological preparations of vaginal smears.

  8. Development of a probabilistic analysis methodology for structural reliability estimation

    NASA Technical Reports Server (NTRS)

    Torng, T. Y.; Wu, Y.-T.

    1991-01-01

    A novel probabilistic analysis method for assessing structural reliability is presented, combining fast convolution with an efficient structural reliability analysis. After identifying the most important point of a limit state, the method establishes a quadratic performance function, transforms the quadratic function into a linear one, and applies fast convolution. The method is applicable to problems requiring computer-intensive structural analysis. Five illustrative examples of the method's application are given.

  9. How to Compare the Security Quality Requirements Engineering (SQUARE) Method with Other Methods

    DTIC Science & Technology

    2007-08-01

    Attack Trees for Modeling and Analysis; 2.8 Misuse and Abuse Cases; 2.9 Formal Methods; 2.9.1 Software Cost Reduction; 2.9.2 Common... modern or efficient techniques. • Requirements analysis typically is either not performed at all (identified requirements are directly specified without... any analysis or modeling) or analysis is restricted to functional requirements and ignores quality requirements and other nonfunctional requirements

  10. Efficiency and factors influencing efficiency of Community Health Strategy in providing Maternal and Child Health services in Mwingi District, Kenya: an expert opinion perspective

    PubMed Central

    Nzioki, Japheth Mativo; Onyango, Rosebella Ogutu; Ombaka, James Herbert

    2015-01-01

    Introduction The Community Health Strategy (CHS) is a new Primary Health Care (PHC) model designed to provide PHC services in Kenya. In 2011, the CHS was initiated in Mwingi district as one of the components of the APHIA plus kamili program. The objectives of this study were to evaluate the efficiency of the CHS in providing MCH services in Mwingi district and to establish the factors influencing its efficiency. Methods This was a qualitative study. Fifteen key informants were sampled from key stakeholders, using purposive and maximum variation sampling methods. Semi-structured in-depth interviews were used for data collection. Data were managed and analyzed using NVivo. Framework analysis and quasi-statistics were used in data analysis. Results Expert opinion data indicated that the CHS was efficient in providing MCH services. Factors influencing the efficiency of the CHS in provision of MCH services were: challenges facing Community Health Workers (CHWs), social, cultural and economic factors influencing MCH in the district, and motivation among CHWs. Conclusion Though the CHS was found to be efficient in providing MCH services, this was an expert opinion perspective; a quantitative Cost Effectiveness Analysis (CEA) to confirm these findings is recommended. To improve the efficiency of the CHS in the district, the challenges facing CHWs and the social, cultural and economic factors that influence its efficiency need to be addressed. PMID:26090046

  11. Quad-Tree Visual-Calculus Analysis of Satellite Coverage

    NASA Technical Reports Server (NTRS)

    Lo, Martin W.; Hockney, George; Kwan, Bruce

    2003-01-01

    An improved method of analysis of coverage of areas of the Earth by a constellation of radio-communication or scientific-observation satellites has been developed. This method is intended to supplant an older method in which the global-coverage-analysis problem is solved from a ground-to-satellite perspective. The present method provides for rapid and efficient analysis. This method is derived from a satellite-to-ground perspective and involves a unique combination of two techniques for multiresolution representation of map features on the surface of a sphere.
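    A multiresolution representation of coverage can be sketched with an adaptive quad-tree that refines only where the coverage predicate is mixed; the planar geometry and circular footprint below are simplifications for illustration, not the authors' spherical implementation.

```python
# Adaptive quad-tree sketch: a square is subdivided recursively, and a cell is
# refined only when probe points inside it disagree about coverage, so uniform
# regions collapse to single leaves.

def quadtree_cells(x, y, size, covered, depth, max_depth):
    """Return leaf cells (x, y, size, flag) of an adaptive subdivision."""
    # Probe a 3x3 grid of points: corners, edge midpoints, and center.
    probes = [covered(x + fx * size, y + fy * size)
              for fx in (0.0, 0.5, 1.0) for fy in (0.0, 0.5, 1.0)]
    if depth == max_depth or all(probes) or not any(probes):
        return [(x, y, size, probes[4])]   # probes[4] is the cell center
    half = size / 2
    cells = []
    for dx in (0, half):
        for dy in (0, half):
            cells += quadtree_cells(x + dx, y + dy, half, covered,
                                    depth + 1, max_depth)
    return cells

# Hypothetical coverage footprint: a disk of radius 0.5 centered at the origin.
inside = lambda px, py: px * px + py * py <= 0.25
cells = quadtree_cells(-1.0, -1.0, 2.0, inside, 0, 5)
print(f"{len(cells)} adaptive leaf cells vs {4 ** 5} uniform cells")
```

    Because only cells straddling the footprint boundary are refined, the leaf count grows with the boundary length rather than the area, which is the efficiency gain behind the quad-tree approach.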

  12. An efficient incremental learning mechanism for tracking concept drift in spam filtering

    PubMed Central

    Sheu, Jyh-Jian; Chu, Ko-Tsung; Li, Nien-Feng; Lee, Cheng-Chi

    2017-01-01

    This research conducts an in-depth analysis of knowledge about spam and proposes an efficient spam filtering method able to adapt to a dynamic environment. We focus on the analysis of email headers and apply a decision tree data mining technique to find association rules about spam. We then propose an efficient systematic filtering method based on these association rules. Our systematic method has the following major advantages: (1) It checks only the header sections of emails, unlike current spam filtering methods that must fully analyze an email's content, while the filtering accuracy is expected to be enhanced. (2) To address the problem of concept drift, we propose a window-based technique to estimate the condition of concept drift for each unknown email, which helps our filtering method recognize the occurrence of spam. (3) We propose an incremental learning mechanism for our filtering method to strengthen its ability to adapt to a dynamic environment. PMID:28182691
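    A window-based drift check of the kind described in advantage (2) can be sketched by comparing a recent-window spam rate against the long-run rate; the window size, threshold, and stream below are invented, not the paper's rule.

```python
# Sliding-window drift sketch: flag concept drift when the spam rate in a
# recent window diverges from the long-run rate by more than a threshold.
from collections import deque

class DriftMonitor:
    def __init__(self, window=50, threshold=0.2):
        self.recent = deque(maxlen=window)
        self.total, self.spam = 0, 0
        self.threshold = threshold

    def observe(self, is_spam):
        """Record one email label (0/1); return True if drift is suspected."""
        self.recent.append(is_spam)
        self.total += 1
        self.spam += is_spam
        long_run = self.spam / self.total
        windowed = sum(self.recent) / len(self.recent)
        return abs(windowed - long_run) > self.threshold

monitor = DriftMonitor()
# 200 mostly-ham emails followed by a spam burst: drift should eventually fire.
stream = [0] * 200 + [1] * 60
flags = [monitor.observe(label) for label in stream]
print("drift detected:", any(flags), "at index", flags.index(True))
```

    In a real filter, a drift signal like this would trigger the incremental relearning step described in advantage (3).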

  13. Evaluation of a multiclass, multiresidue liquid chromatography-tandem mass spectrometry method for analysis of 120 veterinary drugs in bovine kidney

    USDA-ARS?s Scientific Manuscript database

    Traditionally, regulatory monitoring of veterinary drug residues in food animal tissues involves the use of several single-class methods to cover a wide analytical scope. Multiclass, multiresidue methods of analysis tend to provide greater overall laboratory efficiency than the use of multiple meth...

  14. Spectrometer Sensitivity Investigations on the Spectrometric Oil Analysis Program.

    DTIC Science & Technology

    1983-04-22

    [Table of contents excerpt] H. Acid Dissolution Method (ADM); I. Analysis of Samples; J. Particle Transport Efficiency of the Rotating Disk; K. A/E35U-3 Acid Dissolution Method; L. Burn Time... Acid Dissolution Method; Effect of Burn Time; Direct Sample Introduction

  15. [Application of the mixed programming with Labview and Matlab in biomedical signal analysis].

    PubMed

    Yu, Lu; Zhang, Yongde; Sha, Xianzheng

    2011-01-01

    This paper introduces a method of mixed programming with LabVIEW and MATLAB, and applies it to a pulse wave pre-processing and feature detection system. The method has proved suitable, efficient and accurate, providing a new approach for biomedical signal analysis.

  16. The Precision Efficacy Analysis for Regression Sample Size Method.

    ERIC Educational Resources Information Center

    Brooks, Gordon P.; Barcikowski, Robert S.

    The general purpose of this study was to examine the efficiency of the Precision Efficacy Analysis for Regression (PEAR) method for choosing appropriate sample sizes in regression studies used for precision. The PEAR method, which is based on the algebraic manipulation of an accepted cross-validity formula, essentially uses an effect size to…

  17. Ecological efficiency in China and its influencing factors-a super-efficient SBM metafrontier-Malmquist-Tobit model study.

    PubMed

    Ma, Xiaojun; Wang, Changxin; Yu, Yuanbo; Li, Yudong; Dong, Biying; Zhang, Xinyu; Niu, Xueqi; Yang, Qian; Chen, Ruimin; Li, Yifan; Gu, Yihan

    2018-05-15

    Ecological problems are among the core issues restraining China's economic development at present, and they urgently need to be solved properly and effectively. Based on panel data from 30 regions, this paper uses a super-efficiency slack-based measure (SBM) model that introduces undesirable output to calculate ecological efficiency, and then uses traditional and metafrontier-Malmquist index methods to study regional change trends and technology gap ratios (TGRs). Finally, Tobit regression and principal component analysis methods are used to analyze the main factors affecting eco-efficiency and their degree of impact. The results show that about 60% of China's provinces have effective eco-efficiency, and the overall ecological efficiency of China is at an upper-middle level, but there is a serious imbalance among different provinces and regions. Ecological efficiency has an obvious spatial cluster effect. There are differences among regional TGR values. Most regions show a downward trend, and the phenomenon of focusing on economic development at the expense of ecological protection still exists. Expansion of opening-up, increases in R&D spending, and improvement of the population urbanization rate have positive effects on eco-efficiency. Blind economic expansion, increases in heavy industrial structure, and a high proportion of energy consumption have negative effects on eco-efficiency.

  18. Urea free and more efficient sample preparation method for mass spectrometry based protein identification via combining the formic acid-assisted chemical cleavage and trypsin digestion.

    PubMed

    Wu, Shuaibin; Yang, Kaiguang; Liang, Zhen; Zhang, Lihua; Zhang, Yukui

    2011-10-30

    A formic acid (FA)-assisted sample preparation method is presented for protein identification via mass spectrometry (MS). Specifically, an aqueous solution containing 2% FA and dithiothreitol was used to perform protein denaturation, cleavage at aspartic acid (D) sites, and reduction of disulfide linkages simultaneously at 108°C for 2 h. Subsequently, FA was removed via vacuum concentration. Finally, iodoacetamide (IAA) alkylation and trypsin digestion were performed in sequence. A series of model proteins (BSA, β-lactoglobulin and apo-transferrin) were each treated using this method, followed by matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS) analysis. The number of identified peptides increased by ∼80% in comparison with the conventional urea-assisted sample preparation method. Moreover, BSA identification was achieved efficiently down to the femtomole level (25 ± 0% sequence coverage and 16 ± 1 peptides) with this method, whereas no peptides were confidently identified with the urea-assisted method before desalination with a C18 ZipTip. The absence of urea in this sample preparation method favors digestion and MALDI-TOF MS analysis. The performance of the two methods on a real sample (the rat liver proteome) was also compared, using analysis by nanoflow reversed-phase liquid chromatography with electrospray ionization tandem mass spectrometry. As a result, 1335 ± 43 peptides were confidently identified (false discovery rate <1%) via the FA-assisted method, corresponding to 295 ± 12 proteins (with top match = 1 and requiring at least 2 unique peptides), whereas only 1107 ± 16 peptides (corresponding to 231 ± 10 proteins) were obtained with the conventional urea-assisted method.
    The FA-assisted approach thus serves as a more efficient protein sample preparation method for studying specific proteomes, and can assist in developing other proteomics analysis methods, such as quantitative peptide analysis. Copyright © 2011 Elsevier B.V. All rights reserved.

  19. A comprehensive evaluation of various sensitivity analysis methods: A case study with a hydrological model

    DOE PAGES

    Gan, Yanjun; Duan, Qingyun; Gong, Wei; ...

    2014-01-01

    Sensitivity analysis (SA) is a commonly used approach for identifying important parameters that dominate model behaviors. We use a newly developed software package, a Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), to evaluate the effectiveness and efficiency of ten widely used SA methods, including seven qualitative and three quantitative ones. All SA methods are tested using a variety of sampling techniques to screen out the most sensitive (i.e., important) parameters from the insensitive ones. The Sacramento Soil Moisture Accounting (SAC-SMA) model, which has thirteen tunable parameters, is used for illustration. The South Branch Potomac River basin near Springfield, West Virginia in the U.S. is chosen as the study area. The key findings from this study are: (1) For qualitative SA methods, Correlation Analysis (CA), Regression Analysis (RA), and Gaussian Process (GP) screening methods are shown to be not effective in this example. Morris One-At-a-Time (MOAT) screening is the most efficient, needing only 280 samples to identify the most important parameters, but it is the least robust method. Multivariate Adaptive Regression Splines (MARS), Delta Test (DT) and Sum-Of-Trees (SOT) screening methods need about 400–600 samples for the same purpose. Monte Carlo (MC), Orthogonal Array (OA) and Orthogonal Array based Latin Hypercube (OALH) are appropriate sampling techniques for them. (2) For quantitative SA methods, at least 2777 samples are needed for the Fourier Amplitude Sensitivity Test (FAST) to identify parameter main effects. The McKay method needs about 360 samples to evaluate the main effect and more than 1000 samples to assess the two-way interaction effect. OALH and LPτ (LPTAU) sampling techniques are more appropriate for the McKay method. For the Sobol' method, a minimum of 1050 samples is needed to compute the first-order and total sensitivity indices correctly. These comparisons show that qualitative SA methods are more efficient but less accurate and robust than quantitative ones.
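    The MOAT screening referred to above can be sketched with a radial one-at-a-time design: each parameter is stepped from a random base point, and the mean absolute elementary effect ranks importance. The toy model is invented, and this is a simplified variant, not PSUADE's implementation.

```python
# Radial one-at-a-time (Morris-style) screening sketch: perturb one parameter
# at a time from random base points and average the absolute response change
# per unit of parameter change.
import random

def morris_screening(model, bounds, n_traj=20, delta=0.1, seed=0):
    rng = random.Random(seed)
    effects = [[] for _ in bounds]
    for _ in range(n_traj):
        # Base point kept far enough from the upper bound to allow the step.
        x = [rng.uniform(lo, hi - delta * (hi - lo)) for lo, hi in bounds]
        base = model(x)
        for i, (lo, hi) in enumerate(bounds):
            step = delta * (hi - lo)
            xp = list(x)
            xp[i] += step
            effects[i].append(abs(model(xp) - base) / step)
    return [sum(e) / len(e) for e in effects]

# Toy model: strong dependence on x0, weak on x1, none on x2.
model = lambda x: 10 * x[0] + 0.5 * x[1] + 0 * x[2]
mu_star = morris_screening(model, [(0, 1)] * 3)
print([round(m, 2) for m in mu_star])  # → [10.0, 0.5, 0.0]
```

    Screening with a few hundred such runs, then spending the expensive quantitative methods only on the surviving parameters, is the workflow the study's comparison supports.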

  20. Equivalent Skin Analysis of Wing Structures Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Liu, Youhua; Kapania, Rakesh K.

    2000-01-01

    An efficient method of modeling trapezoidal built-up wing structures is developed by coupling, in an indirect way, an Equivalent Plate Analysis (EPA) with Neural Networks (NN). Assumed to behave like a Mindlin plate, the wing is solved using the Ritz method with Legendre polynomials employed as the trial functions. This analysis method can be made more efficient by avoiding most of the computational effort spent on calculating contributions to the stiffness and mass matrices from each spar and rib. This is accomplished by replacing the wing inner structure with an "equivalent" material that is combined with the skin and whose properties are simulated by neural networks. The constitutive matrix, which relates the stress vector to the strain vector, and the density of the equivalent material are obtained by enforcing mass and stiffness matrix equalities with regard to the EPA in a least-squares sense. Neural networks for the material properties are trained in terms of the design variables of the wing structure. Examples show that the present method, which can be called an Equivalent Skin Analysis (ESA) of the wing structure, is more efficient than the EPA while still giving fairly good results. The present ESA is very promising for use in the early stages of wing structure design.

  1. Efficient methods and readily customizable libraries for managing complexity of large networks.

    PubMed

    Dogrusoz, Ugur; Karacelik, Alper; Safarli, Ilkin; Balci, Hasan; Dervishi, Leonard; Siper, Metin Can

    2018-01-01

    One common problem in visualizing real-life networks, including biological pathways, is the large size of these networks. Often, users face slow, non-scaling operations due to network size, if not a "hairball" network, hindering effective analysis. One extremely useful method for reducing the complexity of large networks is hierarchical clustering and nesting, with expand-collapse operations applied on demand during analysis. Another such method is hiding currently unnecessary details, to be gradually revealed later on demand. Major challenges when applying complexity reduction operations on large networks include efficiency and maintaining the user's mental map of the drawing. We developed specialized incremental layout methods for preserving a user's mental map while managing the complexity of large networks through expand-collapse and hide-show operations. We also developed open-source JavaScript libraries, as plug-ins to the web-based graph visualization library Cytoscape.js, to implement these methods as complexity management operations. Through efficient specialized algorithms provided by these extensions, one can collapse or hide desired parts of a network, yielding potentially much smaller networks and making them more suitable for interactive visual analysis. This work fills an important gap by making efficient implementations of some already known complexity management techniques freely available to tool developers through a couple of open-source, customizable software libraries, and by introducing heuristics that can be applied with such complexity management techniques to preserve the user's mental map.
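    The collapse operation described can be sketched on plain adjacency data: cluster members are replaced by one compound node, and boundary-crossing edges are re-routed and deduplicated. The dict/tuple representation is a stand-in for illustration, not the Cytoscape.js API.

```python
# Collapse sketch: replace a cluster of nodes with a single compound node,
# re-route boundary edges to it, and drop edges internal to the cluster.

def collapse(nodes, edges, cluster, compound):
    """Return (nodes, edges) with `cluster` merged into one `compound` node."""
    kept = [n for n in nodes if n not in cluster] + [compound]
    new_edges = set()
    for u, v in edges:
        u2 = compound if u in cluster else u
        v2 = compound if v in cluster else v
        if u2 != v2:                      # edge was internal to the cluster
            new_edges.add((u2, v2))       # set membership dedupes meta-edges
    return kept, sorted(new_edges)

nodes = ["a", "b", "c", "d", "e"]
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "e"), ("b", "d")]
nodes2, edges2 = collapse(nodes, edges, cluster={"c", "d"}, compound="C1")
print(nodes2, edges2)
```

    Expanding is the inverse: the compound node is removed and the stored members and original edges are restored, ideally with an incremental layout so the rest of the drawing moves as little as possible.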

  2. Comparative study on DuPont analysis and DEA models for measuring stock performance using financial ratio

    NASA Astrophysics Data System (ADS)

    Arsad, Roslah; Shaari, Siti Nabilah Mohd; Isa, Zaidi

    2017-11-01

    Determining stock performance using financial ratios is challenging for many investors and researchers. Financial ratios can indicate the strengths and weaknesses of a company's stock performance. There are five categories of financial ratios, namely liquidity, efficiency, leverage, profitability and market ratios. It is important to interpret the ratios correctly for proper financial decision making. The purpose of this study is to compare the performance of listed companies in Bursa Malaysia using Data Envelopment Analysis (DEA) and DuPont analysis models. The study is conducted in 2015 and involves 116 consumer products companies listed in Bursa Malaysia. The estimation method of Data Envelopment Analysis computes the efficiency scores and ranks the companies accordingly. The Alirezaee and Afsharian method of analysis, based on the Charnes, Cooper and Rhodes (CCR) model with Constant Returns to Scale (CRS), is employed. DuPont analysis is a traditional tool for measuring the operating performance of companies. In this study, DuPont analysis is used to evaluate three different aspects: profitability, efficiency of asset utilization and financial leverage. Return on Equity (ROE) is also calculated in the DuPont analysis. This study finds that the two analysis models provide different rankings of the selected samples. Hypothesis testing based on Pearson's correlation indicates that there is no correlation between the rankings produced by DEA and DuPont analysis. The DEA ranking model proposed by Alirezaee and Afsharian is unstable; the method cannot provide a complete ranking because the values of the Balance Index are all equal to zero.
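    The CCR efficiency score used above reduces to one small linear program per company (DMU). A minimal input-oriented sketch in Python using scipy's `linprog` (illustrative only; this is the textbook CCR envelopment form, not the authors' code or the Alirezaee–Afsharian variant):

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y):
    """Input-oriented CCR (constant-returns-to-scale) DEA scores.
    X: (n_dmus, n_inputs) input matrix, Y: (n_dmus, n_outputs) output matrix.
    For each DMU o, minimise theta subject to a composite of peers using
    at most theta times o's inputs while matching o's outputs."""
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]                 # variables: [theta, lambda_1..n]
        A_in = np.hstack([-X[o].reshape(m, 1), X.T])   # sum lam*x <= theta*x_o
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])    # sum lam*y >= y_o
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(m), -Y[o]],
                      bounds=[(0, None)] * (n + 1), method="highs")
        scores.append(res.x[0])
    return np.array(scores)
```

    A DMU with score 1 lies on the efficient frontier; scores below 1 indicate the proportional input reduction needed to reach it.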

  3. Rapid enumeration of viable bacteria by image analysis

    NASA Technical Reports Server (NTRS)

    Singh, A.; Pyle, B. H.; McFeters, G. A.

    1989-01-01

    A direct viable counting method for enumerating viable bacteria was modified and made compatible with image analysis. A comparison was made between viable cell counts determined by the spread plate method and direct viable counts obtained using epifluorescence microscopy either manually or by automatic image analysis. Cultures of Escherichia coli, Salmonella typhimurium, Vibrio cholerae, Yersinia enterocolitica and Pseudomonas aeruginosa were incubated at 35 degrees C in a dilute nutrient medium containing nalidixic acid. Filtered samples were stained for epifluorescence microscopy and analysed manually as well as by image analysis. Cells enlarged after incubation were considered viable. The viable cell counts determined using image analysis were higher than those obtained by either the direct manual count of viable cells or spread plate methods. The volume of sample filtered or the number of cells in the original sample did not influence the efficiency of the method. However, the optimal concentration of nalidixic acid (2.5-20 micrograms ml-1) and length of incubation (4-8 h) varied with the culture tested. The results of this study showed that under optimal conditions, the modification of the direct viable count method in combination with image analysis microscopy provided an efficient and quantitative technique for counting viable bacteria in a short time.
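    The image-analysis counting step described above (threshold, segment, then keep only cells that have enlarged) can be sketched with scipy's labeling tools. This is a toy stand-in with illustrative thresholds, not the authors' system:

```python
import numpy as np
from scipy import ndimage

def count_viable(image, intensity_thresh, min_area):
    """Count bright objects at least `min_area` pixels in size.
    Enlarged (viable) cells pass the area filter; small debris and
    non-responding cells do not."""
    mask = image > intensity_thresh          # segment fluorescent objects
    labels, n = ndimage.label(mask)          # connected-component labeling
    areas = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    return int(np.sum(areas >= min_area))    # keep only enlarged objects
```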

  4. iTemplate: A template-based eye movement data analysis approach.

    PubMed

    Xiao, Naiqi G; Lee, Kang

    2018-02-08

    Current eye movement data analysis methods rely on defining areas of interest (AOIs). Due to the fact that AOIs are created and modified manually, variances in their size, shape, and location are unavoidable. These variances affect not only the consistency of the AOI definitions, but also the validity of the eye movement analyses based on the AOIs. To reduce the variances in AOI creation and modification and achieve a procedure to process eye movement data with high precision and efficiency, we propose a template-based eye movement data analysis method. Using a linear transformation algorithm, this method registers the eye movement data from each individual stimulus to a template. Thus, users only need to create one set of AOIs for the template in order to analyze eye movement data, rather than creating a unique set of AOIs for all individual stimuli. This change greatly reduces the error caused by the variance from manually created AOIs and boosts the efficiency of the data analysis. Furthermore, this method can help researchers prepare eye movement data for some advanced analysis approaches, such as iMap. We have developed software (iTemplate) with a graphic user interface to make this analysis method available to researchers.
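    The core registration idea (a linear transformation mapping each stimulus's coordinates onto the template) can be sketched as an affine fit over matching landmark pairs. A minimal numpy sketch, not the iTemplate code; landmark correspondence is assumed given:

```python
import numpy as np

def register_to_template(points, landmarks_src, landmarks_dst):
    """Map eye-movement coordinates onto a template via an affine transform
    fitted by least squares from matching landmark pairs (src -> template)."""
    src = np.hstack([landmarks_src, np.ones((len(landmarks_src), 1))])
    A, *_ = np.linalg.lstsq(src, landmarks_dst, rcond=None)  # fit [x y 1] @ A
    pts = np.hstack([points, np.ones((len(points), 1))])
    return pts @ A                                           # apply to fixations
```

    With the transform fitted once per stimulus, a single set of template AOIs can then be applied to all registered fixations.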

  5. Time-variant random interval natural frequency analysis of structures

    NASA Astrophysics Data System (ADS)

    Wu, Binhua; Wu, Di; Gao, Wei; Song, Chongmin

    2018-02-01

    This paper presents a new robust method, namely the unified interval Chebyshev-based random perturbation method, to tackle the hybrid random interval structural natural frequency problem. In the proposed approach, the random perturbation method is implemented to furnish the statistical features (i.e., mean and standard deviation), and a Chebyshev surrogate model strategy is incorporated to formulate the statistical information of the natural frequency with regard to the interval inputs. The comprehensive analysis framework combines the superiority of both methods in a way that dramatically reduces the computational cost. The presented method is thus capable of investigating the day-to-day time-variant natural frequency of structures accurately and efficiently under concrete intrinsic creep effects with probabilistic and interval uncertain variables. The extreme bounds of the mean and standard deviation of the natural frequency are captured through the optimization strategy embedded within the analysis procedure. Three numerical examples with a progressive relationship in terms of both structure type and uncertain variables are demonstrated to justify the computational applicability, accuracy and efficiency of the proposed method.
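    The Chebyshev surrogate ingredient of the method can be illustrated in a few lines: fit a Chebyshev series to an expensive response over the interval input's range, then evaluate the cheap surrogate instead. A sketch of that component only (the paper additionally propagates random perturbations through the surrogate and optimizes over the interval):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_surrogate(f, lo, hi, deg=8, n_samples=50):
    """Least-squares Chebyshev surrogate of f on [lo, hi].
    `f` stands in for an expensive response (e.g. a natural frequency
    as a function of an interval parameter)."""
    x = np.linspace(lo, hi, n_samples)
    coef = C.chebfit(x, f(x), deg)          # fit Chebyshev series coefficients
    return lambda xq: C.chebval(xq, coef)   # cheap surrogate evaluator
```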

  6. [A cost-benefit analysis of different therapeutic methods in menorrhagia].

    PubMed

    Kirschner, R

    1995-02-20

    When deciding the right forms of treatment for various medical conditions it has been usual to consider medical knowledge, norms and experience. Increasingly, economic factors and principles are being introduced by the management, in the form of health economics and pharmaco-economic analyses, enforced as budgetary cuts and demands for rationalisation and measures to increase efficiency. Economic evaluations require construction of models for analyses. We have used DRG-information, National Health reimbursements and pharmacological retail prices to make a cost-efficiency analysis of treatments of menorrhagia. The analysis showed better cost-efficiency for certain pharmacological treatments than for surgery.

  7. Spatial compression algorithm for the analysis of very large multivariate images

    DOEpatents

    Keenan, Michael R [Albuquerque, NM

    2008-07-15

    A method for spatially compressing data sets enables the efficient analysis of very large multivariate images. The spatial compression algorithms use a wavelet transformation to map an image into a compressed image containing a smaller number of pixels that retain the original image's information content. Image analysis can then be performed on a compressed data matrix consisting of a reduced number of significant wavelet coefficients. Furthermore, a block algorithm can be used for performing common operations more efficiently. The spatial compression algorithms can be combined with spectral compression algorithms to provide further computational efficiencies.
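    The wavelet-then-threshold idea can be shown with a one-level 2-D Haar transform that keeps only the largest-magnitude coefficients. A minimal numpy sketch of the principle (not the patented block algorithm); even image dimensions are assumed:

```python
import numpy as np

def haar_compress(img, keep_frac=0.1):
    """One-level 2-D Haar wavelet transform, then zero all but the
    largest-magnitude fraction of coefficients."""
    a = img.astype(float)
    for axis in (0, 1):                          # transform rows, then columns
        a = np.moveaxis(a, axis, 0)
        lo = (a[0::2] + a[1::2]) / np.sqrt(2)    # approximation (averages)
        hi = (a[0::2] - a[1::2]) / np.sqrt(2)    # detail (differences)
        a = np.moveaxis(np.concatenate([lo, hi]), 0, axis)
    flat = np.abs(a).ravel()
    k = max(1, int(keep_frac * flat.size))
    thresh = np.partition(flat, -k)[-k]          # k-th largest magnitude
    a[np.abs(a) < thresh] = 0.0                  # discard insignificant coefficients
    return a
```

    Downstream analysis then operates on the reduced set of significant coefficients rather than the full pixel matrix.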

  8. Steroid hormones in environmental matrices: extraction method comparison.

    PubMed

    Andaluri, Gangadhar; Suri, Rominder P S; Graham, Kendon

    2017-11-09

    The U.S. Environmental Protection Agency (EPA) has developed methods for the analysis of steroid hormones in water, soil, sediment, and municipal biosolids by HRGC/HRMS (EPA Method 1698). Following the guidelines provided in US-EPA Method 1698, the extraction methods were validated with reagent water and applied to municipal wastewater, surface water, and municipal biosolids using GC/MS/MS for the analysis of the nine most commonly detected steroid hormones. This is the first reported comparison of the separatory funnel extraction (SFE), continuous liquid-liquid extraction (CLLE), and Soxhlet extraction methods developed by the U.S. EPA. Furthermore, a solid phase extraction (SPE) method was also developed in-house for the extraction of steroid hormones from aquatic environmental samples. This study provides valuable information regarding the robustness of the different extraction methods. Statistical analysis of the data showed that SPE-based methods provided better recovery efficiencies and lower variability for the steroid hormones, followed by SFE. The analytical method developed in-house for the extraction of biosolids showed a wide recovery range; however, the variability was low (≤ 7% RSD). Soxhlet extraction and CLLE are lengthy procedures and have been shown to provide highly variable recovery efficiencies. The results of this study provide guidance for better sample preparation strategies in analytical methods for steroid hormone analysis, and SPE adds to the choice in environmental sample analysis.

  9. Probabilistic power flow using improved Monte Carlo simulation method with correlated wind sources

    NASA Astrophysics Data System (ADS)

    Bie, Pei; Zhang, Buhan; Li, Hang; Deng, Weisi; Wu, Jiasi

    2017-01-01

    Probabilistic Power Flow (PPF) is a very useful tool for power system steady-state analysis. However, the correlation among different random injection powers (like wind power) brings great difficulties to PPF calculation. Monte Carlo simulation (MCS) and analytical methods are two commonly used approaches to solve PPF. MCS has high accuracy but is very time-consuming. An analytical method like the cumulants method (CM) has high computing efficiency, but calculating the cumulants is not convenient when the wind power output does not obey any typical distribution, especially when correlated wind sources are considered. In this paper, an improved Monte Carlo simulation method (IMCS) is proposed. The joint empirical distribution is applied to model the different wind power outputs. This method combines the advantages of both MCS and analytical methods. It not only has high computing efficiency, but also provides solutions with enough accuracy, which makes it very suitable for on-line analysis.
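    The joint-empirical-distribution idea can be sketched very simply: resample whole rows of the historical record, so each scenario keeps the simultaneous outputs of all wind farms and the correlation structure is preserved without fitting any parametric distribution. A sketch of that sampling step only, not the paper's code:

```python
import numpy as np

def sample_joint_empirical(history, n, seed=None):
    """Draw n correlated wind-power scenarios by resampling whole rows of
    `history` (shape: n_observations x n_farms).  Sampling rows jointly
    preserves the empirical dependence across farms."""
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(history), size=n)
    return history[idx]
```

    Each sampled scenario would then be fed into a deterministic power-flow solve, as in ordinary MCS.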

  10. Indirect synthesis of multi-degree of freedom transient systems. [linear programming for a kinematically linear system

    NASA Technical Reports Server (NTRS)

    Pilkey, W. D.; Chen, Y. H.

    1974-01-01

    An indirect synthesis method is used in the efficient optimal design of multi-degree of freedom, multi-design element, nonlinear, transient systems. A limiting performance analysis which requires linear programming for a kinematically linear system is presented. The system is selected using system identification methods such that the designed system responds as closely as possible to the limiting performance. The efficiency is a result of the method avoiding the repetitive systems analyses accompanying other numerical optimization methods.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tuo, Rui; Wu, C. F. Jeff

    Many computer models contain unknown parameters which need to be estimated using physical observations. Furthermore, calibration methods based on Gaussian process models may lead to unreasonable estimates for imperfect computer models. In this work, we extend this line of study to calibration problems with stochastic physical data. We propose a novel method, called the L 2 calibration, and show its semiparametric efficiency. The conventional method of ordinary least squares is also studied; theoretical analysis shows that it is consistent but not efficient. Numerical examples show that the proposed method outperforms the existing ones.
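    The ordinary-least-squares baseline discussed above (consistent but not efficient) can be sketched as a grid search minimising the squared model-data discrepancy. An illustrative sketch, not the authors' L2 calibration; the model and grid are hypothetical:

```python
import numpy as np

def ols_calibration(theta_grid, model, x, y):
    """Ordinary-least-squares calibration over a parameter grid: pick the
    parameter value whose computer-model output best matches the physical
    observations y in the squared-error sense."""
    sse = [float(np.sum((y - model(x, th)) ** 2)) for th in theta_grid]
    return float(theta_grid[int(np.argmin(sse))])
```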

  12. A neuro-data envelopment analysis approach for optimization of uncorrelated multiple response problems with smaller the better type controllable factors

    NASA Astrophysics Data System (ADS)

    Bashiri, Mahdi; Farshbaf-Geranmayeh, Amir; Mogouie, Hamed

    2013-11-01

    In this paper, a new method is proposed to optimize a multi-response optimization problem based on the Taguchi method for processes where the controllable factors are smaller-the-better (STB)-type variables and the analyzer desires an optimal solution with smaller amounts of the controllable factors. In such processes, the overall output quality of the product should be maximized while the usage of the process inputs, the controllable factors, should be minimized. Since all possible combinations of factor levels are not considered in the Taguchi method, the response values of the possible unpracticed treatments are estimated using an artificial neural network (ANN). The neural network is tuned by the central composite design (CCD) and the genetic algorithm (GA). Then data envelopment analysis (DEA) is applied to determine the efficiency of each treatment. Although the key issue in implementing DEA is its philosophy, namely the maximization of outputs versus the minimization of inputs, this issue has been neglected in previous similar studies on multi-response problems. Finally, the most efficient treatment is determined using the maximin weight model approach. The performance of the proposed method is verified in a plastic molding process. Moreover, a sensitivity analysis has been performed with an efficiency-estimator neural network. The results show the efficiency of the proposed approach.

  13. Exergy analysis on industrial boiler energy conservation and emission evaluation applications

    NASA Astrophysics Data System (ADS)

    Li, Henan

    2017-06-01

    Industrial boilers are among the most energy-consuming equipment in China; their annual energy consumption accounts for about one-third of national energy consumption. Industrial boilers currently in service have several severe problems, such as small capacity, low efficiency, high energy consumption and severe environmental pollution. In recent years, the widespread and prolonged severe smog weather in China has been closely linked to the regionally concentrated, high-intensity emissions of coal-fired industrial boilers [1]. Energy conservation and emission reduction in industrial boilers are therefore of great significance for improving China's energy-use efficiency and environmental protection. While heat-balance theory is widely used in boiler design, the exergy analysis method is established on the basis of the first and second laws of thermodynamics: by studying the effect of energy conversion and utilization over the cycle and analyzing its influencing factors, it reveals the location, distribution and size of exergy losses, identifies the weak links, and provides a method for exploring the energy-saving potential of the boiler system. In this paper, the exergy analysis method is used to analyze and evaluate the efficiency and pollutant emission characteristics of a layer-combustion boiler. The method can more objectively and accurately assess the energy-saving potential of the boiler system, identify the weak links of energy consumption, and guide equipment improvements that enhance the environmental friendliness of industrial boilers.

  14. Assessing global resource utilization efficiency in the industrial sector.

    PubMed

    Rosen, Marc A

    2013-09-01

    Designing efficient energy systems, which also meet economic, environmental and other objectives and constraints, is a significant challenge. In a world with finite natural resources and large energy demands, it is important to understand not just actual efficiencies, but also limits to efficiency, as the latter identify margins for efficiency improvement. Energy analysis alone is inadequate, e.g., it yields energy efficiencies that do not provide limits to efficiency. To obtain meaningful and useful efficiencies for energy systems, and to clarify losses, exergy analysis is a beneficial and useful tool. Here, the global industrial sector and industries within it are assessed by using energy and exergy methods. The objective is to improve the understanding of the efficiency of global resource use in the industrial sector and, with this information, to facilitate the development, prioritization and ultimate implementation of rational improvement options. Global energy and exergy flow diagrams for the industrial sector are developed and overall efficiencies for the global industrial sector evaluated as 51% based on energy and 30% based on exergy. Consequently, exergy analysis indicates a less efficient picture of energy use in the global industrial sector than does energy analysis. A larger margin for improvement exists from an exergy perspective, compared to the overly optimistic margin indicated by energy. Copyright © 2012 Elsevier B.V. All rights reserved.
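    The gap between energy and exergy efficiency described above can be made concrete with a toy heating-process calculation: energy efficiency is a plain ratio of heat flows, while exergy efficiency weights each heat flow by its Carnot factor, so low-temperature output carries little exergy. A minimal sketch with illustrative temperatures (not figures from the paper):

```python
def efficiencies(q_in, q_out, t_source, t_use, t0=298.15):
    """Energy vs. exergy efficiency of a heating process.
    q_in, q_out: heat flows; t_source, t_use: absolute temperatures (K)
    at which heat is supplied and delivered; t0: dead-state temperature."""
    energy_eff = q_out / q_in
    # exergy of a heat flow Q at temperature T is Q * (1 - T0/T)
    exergy_eff = (q_out * (1 - t0 / t_use)) / (q_in * (1 - t0 / t_source))
    return energy_eff, exergy_eff
```

    As in the abstract's global figures (51% energy vs. 30% exergy), the exergy efficiency comes out lower whenever high-temperature resource heat is degraded to low-temperature use.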

  15. Investigation to realize a computationally efficient implementation of the high-order instantaneous-moments-based fringe analysis method

    NASA Astrophysics Data System (ADS)

    Gorthi, Sai Siva; Rajshekhar, Gannavarpu; Rastogi, Pramod

    2010-06-01

    Recently, a high-order instantaneous moments (HIM)-operator-based method was proposed for accurate phase estimation in digital holographic interferometry. The method relies on piece-wise polynomial approximation of phase and subsequent evaluation of the polynomial coefficients from the HIM operator using single-tone frequency estimation. The work presents a comparative analysis of the performance of different single-tone frequency estimation techniques, like Fourier transform followed by optimization, estimation of signal parameters by rotational invariance technique (ESPRIT), multiple signal classification (MUSIC), and iterative frequency estimation by interpolation on Fourier coefficients (IFEIF) in HIM-operator-based methods for phase estimation. Simulation and experimental results demonstrate the potential of the IFEIF technique with respect to computational efficiency and estimation accuracy.
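    The single-tone frequency estimation step that these techniques address can be sketched as an FFT peak search refined by one interpolation on the neighbouring Fourier coefficients. The sketch below uses a standard non-iterative interpolation formula, not the exact IFEIF iteration the paper evaluates:

```python
import numpy as np

def estimate_tone(x):
    """Single-tone frequency estimate (cycles per sample): coarse FFT peak
    plus one interpolation on adjacent Fourier coefficients.
    Assumes the peak is not in the first or last DFT bin."""
    n = len(x)
    X = np.fft.fft(x)
    k = int(np.argmax(np.abs(X[: n // 2])))          # coarse peak bin
    num = X[k - 1] - X[k + 1]
    den = 2 * X[k] - X[k - 1] - X[k + 1]
    delta = np.real(num / den)                       # fractional-bin offset
    return (k + delta) / n
```

    In the HIM-based phase estimation, such an estimator is applied piecewise to recover polynomial phase coefficients.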

  16. Efficiency analysis of wood processing industry in China during 2006-2015

    NASA Astrophysics Data System (ADS)

    Zhang, Kun; Yuan, Baolong; Li, Yanxuan

    2018-03-01

    The wood processing industry is an important industry which affects the national economy and social development. The data envelopment analysis model (DEA) is a quantitative evaluation method for studying industrial efficiency. In this paper, the wood processing industry of 8 provinces in southern China is taken as the study object, and the efficiency of each province in 2006 to 2015 was measured and calculated with the DEA method, and the efficiency changes, technological changes and Malmquist index were analyzed dynamically. The empirical results show that there is a widening gap in the efficiency of wood processing industry of the 8 provinces, and the technological progress has shown a lag in the promotion of wood processing industry. According to the research conclusion, along with the situation of domestic and foreign wood processing industry development, the government must introduce relevant policies to strengthen the construction of the wood processing industry technology innovation policy system and the industrial coordinated development system.

  17. Measuring Road Network Vulnerability with Sensitivity Analysis

    PubMed Central

    Jun-qiang, Leng; Long-hai, Yang; Liu, Wei-yi; Zhao, Lin

    2017-01-01

    This paper focuses on the development of a method for road network vulnerability analysis from the perspective of capacity degradation, which seeks to identify the critical infrastructures in the road network and the operational performance of the whole traffic system. This research involves defining the traffic utility index and modeling the vulnerability of road segments, routes, OD (Origin-Destination) pairs and the road network. Meanwhile, a sensitivity analysis method is utilized to calculate the change in the traffic utility index due to capacity degradation. This method, compared to traditional traffic assignment, can improve calculation efficiency and makes the application of vulnerability analysis to large actual road networks possible. Finally, all the above models and the calculation method are applied to an actual road network evaluation to verify their efficiency and utility. This approach can be used as a decision-supporting tool for evaluating the performance of road networks and identifying critical infrastructures in transportation planning and management, especially in resource allocation for mitigation and recovery. PMID:28125706

  18. Mutual Coupling Analysis for Conformal Microstrip Antennas.

    DTIC Science & Technology

    1984-12-01

    ...0.001/k0, and the infinite integral is terminated at k = 150 k0. ... C. MUTUAL COUPLING ANALYSIS. In this section, the moment method ... fact that it does provide an attractive alternative to the Green's function method on which the analysis in later sections is based. In the present ... by the moment method, the chosen set of expansion dipole modes plays a very important role. The efficiency as well as the accuracy of the analysis depend

  19. Bayesian data analysis in observational comparative effectiveness research: rationale and examples.

    PubMed

    Olson, William H; Crivera, Concetta; Ma, Yi-Wen; Panish, Jessica; Mao, Lian; Lynch, Scott M

    2013-11-01

    Many comparative effectiveness research and patient-centered outcomes research studies will need to be observational for one or both of two reasons: first, randomized trials are expensive and time-consuming; and second, only observational studies can answer some research questions. It is generally recognized that there is a need to increase the scientific validity and efficiency of observational studies. Bayesian methods for the design and analysis of observational studies are scientifically valid and offer many advantages over frequentist methods, including, importantly, the ability to conduct comparative effectiveness research/patient-centered outcomes research more efficiently. Bayesian data analysis is being introduced into outcomes studies that we are conducting. Our purpose here is to describe our view of some of the advantages of Bayesian methods for observational studies and to illustrate both realized and potential advantages by describing studies we are conducting in which various Bayesian methods have been or could be implemented.

  20. A rapid, highly efficient and economical method of Agrobacterium-mediated in planta transient transformation in living onion epidermis.

    PubMed

    Xu, Kedong; Huang, Xiaohui; Wu, Manman; Wang, Yan; Chang, Yunxia; Liu, Kun; Zhang, Ju; Zhang, Yi; Zhang, Fuli; Yi, Liming; Li, Tingting; Wang, Ruiyue; Tan, Guangxuan; Li, Chengwei

    2014-01-01

    Transient transformation is simpler, more efficient and more economical for analyzing protein subcellular localization than stable transformation. Fluorescent fusion proteins are often used in transient transformation to follow the in vivo behavior of proteins. Onion epidermis, which has large, living and transparent cells in a monolayer, is well suited to visualizing fluorescent fusion proteins. The often-used transient transformation methods include particle bombardment, protoplast transfection and Agrobacterium-mediated transformation. Particle bombardment in onion epidermis was successfully established; however, it is expensive, dependent on biolistic equipment, and has low transformation efficiency. We developed a highly efficient in planta transient transformation method in onion epidermis using a special agroinfiltration method, which can be completed within 5 days from the pretreatment of the onion bulb to the best time-point for analyzing gene expression. The transformation conditions were optimized to achieve 43.87% transformation efficiency in living onion epidermis. The developed method has advantages in cost, time consumption, equipment dependency and transformation efficiency compared with particle bombardment in onion epidermal cells, protoplast transfection, and Agrobacterium-mediated transient transformation in the leaf epidermis of other plants. It will facilitate the analysis of protein subcellular localization on a large scale.

  1. FARVATX: FAmily-based Rare Variant Association Test for X-linked genes

    PubMed Central

    Choi, Sungkyoung; Lee, Sungyoung; Qiao, Dandi; Hardin, Megan; Cho, Michael H.; Silverman, Edwin K; Park, Taesung; Won, Sungho

    2016-01-01

    Although the X chromosome has many genes that are functionally related to human diseases, the complicated biological properties of the X chromosome have prevented efficient genetic association analyses, and only a few significantly associated X-linked variants have been reported for complex traits. For instance, dosage compensation of X-linked genes is often achieved via the inactivation of one allele in each X-linked variant in females; however, some X-linked variants can escape this X chromosome inactivation. Efficient genetic analyses cannot be conducted without prior knowledge about the gene expression process of X-linked variants, and misspecified information can lead to power loss. In this report, we propose new statistical methods for rare X-linked variant genetic association analysis of dichotomous phenotypes with family-based samples. The proposed methods are computationally efficient and can complete X-linked analyses within a few hours. Simulation studies demonstrate the statistical efficiency of the proposed methods, which were then applied to rare-variant association analysis of the X chromosome in chronic obstructive pulmonary disease (COPD). Some promising significant X-linked genes were identified, illustrating the practical importance of the proposed methods. PMID:27325607

  2. FARVATX: Family-Based Rare Variant Association Test for X-Linked Genes.

    PubMed

    Choi, Sungkyoung; Lee, Sungyoung; Qiao, Dandi; Hardin, Megan; Cho, Michael H; Silverman, Edwin K; Park, Taesung; Won, Sungho

    2016-09-01

    Although the X chromosome has many genes that are functionally related to human diseases, the complicated biological properties of the X chromosome have prevented efficient genetic association analyses, and only a few significantly associated X-linked variants have been reported for complex traits. For instance, dosage compensation of X-linked genes is often achieved via the inactivation of one allele in each X-linked variant in females; however, some X-linked variants can escape this X chromosome inactivation. Efficient genetic analyses cannot be conducted without prior knowledge about the gene expression process of X-linked variants, and misspecified information can lead to power loss. In this report, we propose new statistical methods for rare X-linked variant genetic association analysis of dichotomous phenotypes with family-based samples. The proposed methods are computationally efficient and can complete X-linked analyses within a few hours. Simulation studies demonstrate the statistical efficiency of the proposed methods, which were then applied to rare-variant association analysis of the X chromosome in chronic obstructive pulmonary disease. Some promising significant X-linked genes were identified, illustrating the practical importance of the proposed methods. © 2016 WILEY PERIODICALS, INC.

  3. Efficient computation of photonic crystal waveguide modes with dispersive material.

    PubMed

    Schmidt, Kersten; Kappeler, Roman

    2010-03-29

    The optimization of PhC waveguides is a key issue in successfully designing PhC devices. Since this design task is computationally expensive, efficient methods are in demand. The available codes for computing photonic bands are also applied to PhC waveguides; they are reliable but not very efficient, which is even more pronounced for dispersive materials. We present a method based on higher-order finite elements with curved cells, which solves for the band structure while directly taking into account the dispersiveness of the materials. This is accomplished by reformulating the wave equations as a linear eigenproblem in the complex wave-vectors k. For this method, we demonstrate the high efficiency of the computation of guided PhC waveguide modes by a convergence analysis.

  4. The Efficiency and Budgeting of Public Hospitals: Case Study of Iran

    PubMed Central

    Yusefzadeh, Hasan; Ghaderi, Hossein; Bagherzade, Rafat; Barouni, Mohsen

    2013-01-01

    Background Hospitals are the most costly and important components of any health care system, so it is important to know their economic values, pay attention to their efficiency and consider factors affecting them. Objective The aim of this study was to assess the technical, scale and economic efficiency of hospitals in the West Azerbaijan province of Iran, for which Data Envelopment Analysis (DEA) was used to propose a model for operational budgeting. Materials and Methods This study was a descriptive analysis that was conducted in 2009 and had three inputs and two outputs. Deap2.1 software was used for data analysis. Slack and radial movements and surpluses of inputs were calculated for the selected hospitals. Finally, a model was proposed for performance-based budgeting of hospitals and health sectors using the DEA technique. Results The average scores of technical efficiency, pure technical efficiency (managerial efficiency) and scale efficiency of the hospitals were 0.584, 0.782 and 0.771, respectively. In other words, the capacity for efficiency improvement in the hospitals, without any increase in costs and with the same amount of inputs, was about 41.5%. Only four hospitals had the maximum level of technical efficiency. Moreover, surplus production factors were evident in these hospitals. Conclusions Reduction of surplus production factors through comprehensive planning based on the results of the Data Envelopment Analysis can play a major role in cost reduction for hospitals and health sectors. In hospitals with a technical efficiency score of less than one, the original and projected values of inputs differed, resulting in a surplus. Hence, these hospitals should reduce their input values to achieve maximum efficiency and optimal performance. The results of this method can be applied to hospitals as a benchmark for making decisions about resource allocation, linking budgets to performance results, and controlling and improving hospital performance. PMID:24349726

  5. Freud: a software suite for high-throughput simulation analysis

    NASA Astrophysics Data System (ADS)

    Harper, Eric; Spellings, Matthew; Anderson, Joshua; Glotzer, Sharon

    Computer simulation is an indispensable tool for the study of a wide variety of systems. As simulations scale to fill petascale and exascale supercomputing clusters, so too does the size of the data produced, as well as the difficulty in analyzing these data. We present Freud, an analysis software suite for efficient analysis of simulation data. Freud makes no assumptions about the system being analyzed, allowing for general analysis methods to be applied to nearly any type of simulation. Freud includes standard analysis methods such as the radial distribution function, as well as new methods including the potential of mean force and torque and local crystal environment analysis. Freud combines a Python interface with fast, parallel C++ analysis routines to run efficiently on laptops, workstations, and supercomputing clusters. Data analysis on clusters reduces data transfer requirements, a prohibitive cost for petascale computing. Used in conjunction with simulation software, Freud allows for smart simulations that adapt to the current state of the system, enabling the study of phenomena such as nucleation and growth, intelligent investigation of phases and phase transitions, and determination of effective pair potentials.
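    The radial distribution function mentioned above is the canonical example of such an analysis. A self-contained numpy sketch for a cubic periodic box (an independent illustration of the standard g(r) calculation, not Freud's parallel C++ implementation):

```python
import numpy as np

def rdf(positions, box, r_max, n_bins=50):
    """Radial distribution function g(r) for particles in a cubic periodic
    box of side `box`.  Returns bin centres and g(r); requires r_max < box/2."""
    n = len(positions)
    d = positions[:, None, :] - positions[None, :, :]
    d -= box * np.round(d / box)                       # minimum-image convention
    r = np.sqrt((d ** 2).sum(-1))[np.triu_indices(n, 1)]
    hist, edges = np.histogram(r, bins=n_bins, range=(0.0, r_max))
    shell = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    density = n / box ** 3
    g = hist / (shell * density * n / 2.0)             # normalise by ideal-gas counts
    return 0.5 * (edges[1:] + edges[:-1]), g
```

    For an ideal (uncorrelated) system g(r) fluctuates around 1; structure shows up as peaks at preferred separations.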

  6. In vivo evaluation of the effect of stimulus distribution on FIR statistical efficiency in event-related fMRI

    PubMed Central

    Jansma, J Martijn; de Zwart, Jacco A; van Gelderen, Peter; Duyn, Jeff H; Drevets, Wayne C; Furey, Maura L

    2013-01-01

    Technical developments in MRI have improved signal to noise, allowing the use of analysis methods such as finite impulse response (FIR) analysis of rapid event-related functional MRI (er-fMRI). FIR is one of the most informative analysis methods, as it determines the onset and full shape of the hemodynamic response function (HRF) without any a priori assumptions. FIR is, however, vulnerable to multicollinearity, which is directly related to the distribution of stimuli over time. Efficiency can be optimized by simplifying a design and restricting the stimulus distribution to specific sequences, while more design flexibility necessarily reduces efficiency. However, the actual effect of efficiency on fMRI results has never been tested in vivo. Thus, it is currently difficult to make an informed choice between protocol flexibility and statistical efficiency. The main goal of this study was to assign concrete fMRI signal to noise values to the abstract scale of FIR statistical efficiency. Ten subjects repeated a perception task with five random and m-sequence-based protocols, with varying but, according to the literature, acceptable levels of multicollinearity. Results indicated substantial differences in signal standard deviation, with the level a function of multicollinearity. Experiment protocols varied by up to 55.4% in standard deviation. The results confirm that the quality of fMRI in an FIR analysis can significantly and substantially vary with statistical efficiency. Our in vivo measurements can be used to aid in making an informed decision between freedom in protocol design and statistical efficiency. PMID:23473798
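    The link between stimulus distribution and FIR multicollinearity can be illustrated by building an FIR design matrix and comparing its condition number for a fixed-ISI protocol versus a jittered one. This is a hedged toy sketch, not the study's protocols; scan counts, ISIs, and lag counts below are invented.

```python
import numpy as np

def fir_design_matrix(onsets, n_scans, n_lags):
    """FIR design matrix: column k models the response k scans after each onset."""
    X = np.zeros((n_scans, n_lags))
    for t in onsets:
        for k in range(n_lags):
            if t + k < n_scans:
                X[t + k, k] = 1.0
    return X

n_scans, n_lags = 240, 8
periodic = np.arange(0, n_scans, 4)                 # fixed 4-scan ISI
rng = np.random.default_rng(0)
jittered = np.cumsum(rng.integers(3, 9, size=40))   # ISI jittered over 3-8 scans
jittered = jittered[jittered < n_scans - n_lags]

# With a fixed ISI, shifted FIR regressors nearly coincide, so the design
# is close to rank-deficient; jitter decorrelates the columns.
cond_fixed = np.linalg.cond(fir_design_matrix(periodic, n_scans, n_lags))
cond_jit = np.linalg.cond(fir_design_matrix(jittered, n_scans, n_lags))
```

Here `cond_fixed` greatly exceeds `cond_jit`, which is the multicollinearity penalty the abstract describes for overly regular stimulus distributions.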

  7. A New Approach to Aircraft Robust Performance Analysis

    NASA Technical Reports Server (NTRS)

    Gregory, Irene M.; Tierno, Jorge E.

    2004-01-01

    A recently developed algorithm for nonlinear system performance analysis has been applied to an F-16 aircraft to begin evaluating the suitability of the method for aerospace problems. The algorithm has the potential to be much more efficient than current methods of aircraft performance analysis. This paper is the initial step in evaluating this potential.

  8. A method for quantitative analysis of standard and high-throughput qPCR expression data based on input sample quantity.

    PubMed

    Adamski, Mateusz G; Gumann, Patryk; Baird, Alison E

    2014-01-01

    Over the past decade rapid advances have occurred in the understanding of RNA expression and its regulation. Quantitative polymerase chain reactions (qPCR) have become the gold standard for quantifying gene expression. Microfluidic next-generation, high-throughput qPCR now permits the detection of transcript copy number in thousands of reactions simultaneously, dramatically increasing sensitivity over standard qPCR. Here we present a gene expression analysis method applicable to both standard and high-throughput qPCR. This technique is adjusted to the input sample quantity (e.g., the number of cells) and is independent of control gene expression. It is efficiency-corrected and, with the use of a universal reference sample (commercial complementary DNA (cDNA)), permits the normalization of results between different batches and between different instruments, regardless of potential differences in transcript amplification efficiency. Modifications of the input quantity method include (1) the achievement of absolute quantification and (2) a non-efficiency-corrected analysis. When compared to other commonly used algorithms, the input quantity method proved to be valid. This method is of particular value for clinical studies of whole blood and circulating leukocytes, where cell counts are readily available.
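    The arithmetic behind an efficiency-corrected, input-quantity-normalized calculation can be sketched as follows. This is an illustrative sketch in the spirit of the method, not the authors' published algorithm; the function and parameter names are hypothetical.

```python
def expression_per_cell(cq, amp_efficiency, n_cells, ref_cq, ref_quantity=1.0):
    """Efficiency-corrected transcript quantity per input cell.

    A reaction reaching quantification cycle `cq` with per-cycle amplification
    factor `amp_efficiency` (2.0 = perfect doubling) started with
    amp_efficiency ** (ref_cq - cq) times the template of the reference
    sample; dividing by the cell count normalises to input quantity.
    """
    relative_quantity = ref_quantity * amp_efficiency ** (ref_cq - cq)
    return relative_quantity / n_cells
```

With perfect doubling, a sample that crosses threshold one cycle earlier than the reference, from the same number of cells, has twice the per-cell expression; twice the cells at the same Cq means half the per-cell expression.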

  9. LADES: a software for constructing and analyzing longitudinal designs in biomedical research.

    PubMed

    Vázquez-Alcocer, Alan; Garzón-Cortes, Daniel Ladislao; Sánchez-Casas, Rosa María

    2014-01-01

    One of the most important steps in biomedical longitudinal studies is choosing a good experimental design that can provide high accuracy in the analysis of results with a minimum sample size. Several methods for constructing efficient longitudinal designs have been developed based on power analysis and the statistical model used for analyzing the final results. However, this technology has not been made available to practitioners through user-friendly software. In this paper we introduce LADES (Longitudinal Analysis and Design of Experiments Software) as an alternative and easy-to-use tool for conducting longitudinal analysis and constructing efficient longitudinal designs. LADES incorporates methods for creating cost-efficient longitudinal designs, unequal longitudinal designs, and simple longitudinal designs. In addition, LADES includes different methods for analyzing longitudinal data, such as linear mixed models and generalized estimating equations, among others. A study of European eels is reanalyzed in order to show LADES' capabilities. Three treatments, one per aquarium of five eels, were analyzed. Data were collected from week 0 up to the 12th week post-treatment for all the eels (complete design). The response under evaluation is sperm volume. A linear mixed model was fitted to the results using LADES. The complete design had a power of 88.7% using 15 eels. With LADES we propose the use of an unequal design with only 14 eels and 89.5% efficiency. LADES was developed as a powerful and simple tool to promote the use of statistical methods for analyzing and creating longitudinal experiments in biomedical research.

  10. Division of methods for counting helminths' eggs and the problem of efficiency of these methods.

    PubMed

    Jaromin-Gleń, Katarzyna; Kłapeć, Teresa; Łagód, Grzegorz; Karamon, Jacek; Malicki, Jacek; Skowrońska, Agata; Bieganowski, Andrzej

    2017-03-21

    From the sanitary and epidemiological aspects, information concerning the developmental forms of intestinal parasites, especially helminth eggs present in the environment (in water, soil, sandpits, sewage sludge, and crops watered with wastewater), is very important. The methods described in the relevant literature may be classified in various ways, primarily according to the methodology of preparing samples from environmental matrices for analysis, and to the counting methods and the chambers/instruments used for this purpose. In addition, the research methods may be classified by the manner and time of identification of the individuals counted, or by the necessity of staining them. Standard methods for identifying helminths' eggs in environmental matrices are usually characterized by low efficiency, i.e. from 30% to approximately 80%. The efficiency of the method applied may be measured in two ways: by the internal standard method or by the 'Split/Spike' method. When the efficiency of the method and the number of eggs are measured simultaneously in an examined object, the 'actual' number of eggs may be calculated by multiplying the number of helminth eggs discovered by the inverse of the efficiency.
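    The closing inverse-efficiency correction can be sketched in a few lines. A minimal sketch; the function name is hypothetical, and the example counts are invented, not from the paper.

```python
def estimate_true_egg_count(counted_eggs, recovery_efficiency):
    """Estimate the 'actual' egg count from a count made with a method of
    known recovery efficiency (standard methods recover roughly 30-80%),
    by multiplying the observed count by the inverse of the efficiency."""
    if not 0.0 < recovery_efficiency <= 1.0:
        raise ValueError("recovery efficiency must be in (0, 1]")
    return counted_eggs / recovery_efficiency
```

For example, a count of 24 eggs obtained with a method whose measured efficiency is 30% suggests roughly 80 eggs were actually present.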

  11. High throughput sequencing analysis of RNA libraries reveals the influences of initial library and PCR methods on SELEX efficiency.

    PubMed

    Takahashi, Mayumi; Wu, Xiwei; Ho, Michelle; Chomchan, Pritsana; Rossi, John J; Burnett, John C; Zhou, Jiehua

    2016-09-22

    The systematic evolution of ligands by exponential enrichment (SELEX) technique is a powerful and effective aptamer-selection procedure. However, modifications to the process can dramatically improve selection efficiency and aptamer performance. For example, droplet digital PCR (ddPCR) has recently been incorporated into SELEX selection protocols to putatively reduce the propagation of byproducts and avoid the selection bias that results from differences in the PCR efficiency of sequences within the random library. However, a detailed, parallel comparison of the efficacy of conventional solution PCR versus the ddPCR modification in the RNA aptamer-selection process is needed to understand the effects on overall SELEX performance. In the present study, we took advantage of powerful high throughput sequencing technology and bioinformatics analysis coupled with SELEX (HT-SELEX) to thoroughly investigate the effects of the initial library and PCR methods on RNA aptamer identification. Our analysis revealed that distinct "biased sequences" and nucleotide compositions existed in the initial, unselected libraries purchased from two different manufacturers, and that the fate of the "biased sequences" was target-dependent during selection. Our comparison of solution PCR- and ddPCR-driven HT-SELEX demonstrated that the PCR method affected not only the nucleotide composition of the enriched sequences, but also the overall SELEX efficiency and aptamer efficacy.

  12. High throughput sequencing analysis of RNA libraries reveals the influences of initial library and PCR methods on SELEX efficiency

    PubMed Central

    Takahashi, Mayumi; Wu, Xiwei; Ho, Michelle; Chomchan, Pritsana; Rossi, John J.; Burnett, John C.; Zhou, Jiehua

    2016-01-01

    The systematic evolution of ligands by exponential enrichment (SELEX) technique is a powerful and effective aptamer-selection procedure. However, modifications to the process can dramatically improve selection efficiency and aptamer performance. For example, droplet digital PCR (ddPCR) has recently been incorporated into SELEX selection protocols to putatively reduce the propagation of byproducts and avoid the selection bias that results from differences in the PCR efficiency of sequences within the random library. However, a detailed, parallel comparison of the efficacy of conventional solution PCR versus the ddPCR modification in the RNA aptamer-selection process is needed to understand the effects on overall SELEX performance. In the present study, we took advantage of powerful high throughput sequencing technology and bioinformatics analysis coupled with SELEX (HT-SELEX) to thoroughly investigate the effects of the initial library and PCR methods on RNA aptamer identification. Our analysis revealed that distinct “biased sequences” and nucleotide compositions existed in the initial, unselected libraries purchased from two different manufacturers, and that the fate of the “biased sequences” was target-dependent during selection. Our comparison of solution PCR- and ddPCR-driven HT-SELEX demonstrated that the PCR method affected not only the nucleotide composition of the enriched sequences, but also the overall SELEX efficiency and aptamer efficacy. PMID:27652575

  13. Family-Based Rare Variant Association Analysis: A Fast and Efficient Method of Multivariate Phenotype Association Analysis.

    PubMed

    Wang, Longfei; Lee, Sungyoung; Gim, Jungsoo; Qiao, Dandi; Cho, Michael; Elston, Robert C; Silverman, Edwin K; Won, Sungho

    2016-09-01

    Family-based designs have been repeatedly shown to be powerful in detecting significant rare variants associated with human diseases. Furthermore, human diseases are often defined by the outcomes of multiple phenotypes, so we expect multivariate family-based analyses to be very efficient in detecting associations with rare variants. However, few statistical methods implementing this strategy have been developed for family-based designs. In this report, we describe one such implementation: the multivariate family-based rare variant association tool (mFARVAT). mFARVAT is a quasi-likelihood-based score test for rare variant association analysis with multiple phenotypes, and tests both homogeneous and heterogeneous effects of each variant on multiple phenotypes. Simulation results show that the proposed method is generally robust and efficient for various disease models, and we identify some promising candidate genes associated with chronic obstructive pulmonary disease. The mFARVAT software, implemented in C++ and supported on Linux and MS Windows, is freely available at http://healthstat.snu.ac.kr/software/mfarvat/.

  14. Efficient alignment-free DNA barcode analytics

    PubMed Central

    Kuksa, Pavel; Pavlovic, Vladimir

    2009-01-01

    Background In this work we consider barcode DNA analysis problems and address them using alternative, alignment-free methods and representations which model sequences as collections of short sequence fragments (features). The methods use fixed-length representations (spectra) of barcode sequences to measure similarities or dissimilarities between sequences coming from the same or different species. The spectrum-based representation not only allows for accurate and computationally efficient species classification, but also opens the possibility of accurate clustering analysis of putative species barcodes and identification of critical within-barcode loci distinguishing barcodes of different sample groups. Results The new alignment-free methods provide highly accurate and fast DNA barcode-based identification and classification of species, with substantial improvements in accuracy and speed over state-of-the-art barcode analysis methods. We evaluate our methods on problems of species classification and identification using barcodes, important and relevant analytical tasks in many practical applications (adverse species movement monitoring, sampling surveys for unknown or pathogenic species identification, biodiversity assessment, etc.). On several benchmark barcode datasets, including ACG, Astraptes, Hesperiidae, Fish larvae, and Birds of North America, the proposed alignment-free methods considerably improve prediction accuracy compared to prior results. We also observe significant running time improvements over the state-of-the-art methods. Conclusion Our results show that the newly developed alignment-free methods for DNA barcoding can efficiently and with high accuracy identify specimens by examining only a few barcode features, resulting in increased scalability and interpretability of current computational approaches to barcoding. PMID:19900305
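    The fixed-length spectrum representation described above can be sketched as a k-mer count vector with a simple alignment-free distance. A minimal sketch of the general idea, not the paper's exact feature set or classifier; function names are hypothetical.

```python
from collections import Counter
from itertools import product

def kmer_spectrum(sequence, k=3):
    """Fixed-length spectrum of a DNA barcode: counts of every length-k
    substring, in a canonical order over the DNA alphabet (4**k entries)."""
    alphabet = "ACGT"
    counts = Counter(sequence[i:i + k] for i in range(len(sequence) - k + 1))
    return [counts["".join(kmer)] for kmer in product(alphabet, repeat=k)]

def spectrum_distance(seq_a, seq_b, k=3):
    """Alignment-free dissimilarity: Euclidean distance between spectra."""
    sa, sb = kmer_spectrum(seq_a, k), kmer_spectrum(seq_b, k)
    return sum((a - b) ** 2 for a, b in zip(sa, sb)) ** 0.5
```

Because every sequence maps to a vector of the same length, no alignment is needed: near-identical barcodes land close together in spectrum space, while barcodes from distant species are far apart.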

  15. Shape optimization for aerodynamic efficiency and low observability

    NASA Technical Reports Server (NTRS)

    Vinh, Hoang; Van Dam, C. P.; Dwyer, Harry A.

    1993-01-01

    Field methods based on finite-difference approximations of the time-domain Maxwell's equations and the potential-flow equation have been developed to solve the multidisciplinary problem of airfoil shaping for aerodynamic efficiency and low radar cross section (RCS). A parametric study and an optimization study employing the two analysis methods are presented to illustrate their combined capabilities. The parametric study shows that for frontal radar illumination, the RCS of an airfoil is independent of the chordwise location of maximum thickness but depends strongly on the maximum thickness, leading-edge radius, and leading-edge shape. In addition, this study shows that the RCS of an airfoil can be reduced without significant effects on its transonic aerodynamic efficiency by reducing the leading-edge radius and/or modifying the shape of the leading edge. The optimization study involves the minimization of wave drag for a non-lifting, symmetrical airfoil with constraints on the airfoil maximum thickness and monostatic RCS. This optimization study shows that the two analysis methods can be used effectively to design aerodynamically efficient airfoils with certain desired RCS characteristics.

  16. Error analysis of multipoint flux domain decomposition methods for evolutionary diffusion problems

    NASA Astrophysics Data System (ADS)

    Arrarás, A.; Portero, L.; Yotov, I.

    2014-01-01

    We study space and time discretizations for mixed formulations of parabolic problems. The spatial approximation is based on the multipoint flux mixed finite element method, which reduces to an efficient cell-centered pressure system on general grids, including triangles, quadrilaterals, tetrahedra, and hexahedra. The time integration is performed by using a domain decomposition time-splitting technique combined with multiterm fractional step diagonally implicit Runge-Kutta methods. The resulting scheme is unconditionally stable and computationally efficient, as it reduces the global system to a collection of uncoupled subdomain problems that can be solved in parallel without the need for Schwarz-type iteration. Convergence analysis for both the semidiscrete and fully discrete schemes is presented.

  17. Effective visibility analysis method in virtual geographic environment

    NASA Astrophysics Data System (ADS)

    Li, Yi; Zhu, Qing; Gong, Jianhua

    2008-10-01

    Visibility analysis in virtual geographic environments has broad applications in social life, but in practical use its efficiency and accuracy need to be improved and the restrictions of human vision need to be considered. This paper first introduces a highly efficient 3D data modeling method, which generates and organizes 3D data models using R-tree and LOD techniques. It then presents a new visibility algorithm that realizes real-time viewshed calculation while accounting for occlusion by the DEM and 3D building models, as well as the restrictions that the human eye imposes on viewshed generation. Finally, an experiment is conducted to show that the visibility analysis is fast and accurate enough to meet the demands of digital city applications.

  18. Mayday - integrative analytics for expression data

    PubMed Central

    2010-01-01

    Background DNA microarrays have become the standard method for large-scale analyses of gene expression and epigenomics. The increasing complexity and inherent noisiness of the generated data make visual data exploration ever more important. Fast deployment of new methods, as well as a combination of predefined, easy-to-apply methods with programmer's access to the data, are important requirements for any analysis framework. Mayday is an open source platform with emphasis on visual data exploration and analysis. Many built-in methods for clustering, machine learning and classification are provided for dissecting complex datasets. Plugins can easily be written to extend Mayday's functionality in a large number of ways. As a Java program, Mayday is platform-independent and can be used as a Java WebStart application without any installation. Mayday can import data from several file formats, and database connectivity is included for efficient data organization. Numerous interactive visualization tools, including box plots, profile plots, principal component plots and a heatmap, are available; they can be enhanced with metadata and exported as publication-quality vector files. Results We have rewritten large parts of Mayday's core to make it more efficient and ready for future developments. Among the large number of new plugins are an automated processing framework, dynamic filtering, new and efficient clustering methods, a machine learning module and database connectivity. Extensive manual data analysis can be done using an inbuilt R terminal and an integrated SQL querying interface. Our visualization framework has become more powerful, new plot types have been added and existing plots improved. Conclusions We present a major extension of Mayday, a very versatile open-source framework for efficient microarray data analysis designed for biologists and bioinformaticians. Most everyday tasks are already covered. The large number of available plugins, as well as the extension possibilities using compiled plugins and ad-hoc scripting, allow for the rapid adaptation of Mayday to very specialized data exploration. Mayday is available at http://microarray-analysis.org. PMID:20214778

  19. FECAL SOURCE TRACKING BY ANTIBIOTIC RESISTANCE ANALYSIS ON A WATERSHED EXHIBITING LOW RESISTANCE

    EPA Science Inventory

    The ongoing development of microbial source tracking has made it possible to identify contamination sources with varying accuracy, depending on the method used. The purpose of this study was to test the efficiency of the antibiotic resistance analysis (ARA) method under low ...

  20. Mathematical modeling of photovoltaic thermal PV/T system with v-groove collector

    NASA Astrophysics Data System (ADS)

    Zohri, M.; Fudholi, A.; Ruslan, M. H.; Sopian, K.

    2017-07-01

    According to the literature, the use of a v-groove in a solar collector yields higher thermal efficiency. Lowering the operating temperature of the photovoltaic panel raises its electrical efficiency. A photovoltaic thermal (PV/T) system produces electrical and thermal output concurrently. Mathematical modeling based on a steady-state thermal analysis of a PV/T system with a v-groove collector was conducted, and the energy balance equations were solved by the matrix inversion method. The comparison results show that the PV/T system with the v-groove collector achieves higher temperature and higher thermal and electrical efficiency than systems with other collectors.
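    Solving steady-state energy balance equations by the matrix inversion method amounts to solving a small linear system for the node temperatures. The sketch below is a toy three-node model (glass cover, PV plate, working fluid); the conductances and driving terms are assumed for illustration and are not values from the paper.

```python
import numpy as np

# Illustrative heat-transfer conductances (W/m^2/K) and driving terms, assumed:
h_ga, h_pg, h_pf, h_fa = 10.0, 5.0, 40.0, 2.0
T_ambient, solar_gain = 300.0, 800.0   # K, W/m^2

# Steady-state balances written as A @ T = b for T = [T_glass, T_pv, T_fluid]:
#   glass: h_ga*(Tg - Ta) + h_pg*(Tg - Tp) = 0
#   PV:    h_pg*(Tp - Tg) + h_pf*(Tp - Tf) = solar_gain
#   fluid: h_pf*(Tf - Tp) + h_fa*(Tf - Ta) = 0
A = np.array([
    [h_ga + h_pg, -h_pg,         0.0],
    [-h_pg,        h_pg + h_pf, -h_pf],
    [0.0,         -h_pf,         h_pf + h_fa],
])
b = np.array([h_ga * T_ambient, solar_gain, h_fa * T_ambient])
T = np.linalg.solve(A, b)   # the "matrix inversion method" step
```

The solved temperatures satisfy the balances exactly, with the PV plate hottest since it absorbs the solar gain.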

  1. An analysis of ash and isotopic carbon discrimination (delta13C) methods to evaluate water use efficiency in apple

    USDA-ARS?s Scientific Manuscript database

    Apple cultivars are selected for fruit quality, disease and insect resistance, not water use efficiency (WUE); however, the need for more water-use-efficient crops is accelerating due to climate change and increased competition for water resources. On a whole plant basis, calculation of water use e...

  2. Eigenvalue sensitivity analysis of planar frames with variable joint and support locations

    NASA Technical Reports Server (NTRS)

    Chuang, Ching H.; Hou, Gene J. W.

    1991-01-01

    Two sensitivity equations are derived in this study based upon the continuum approach for eigenvalue sensitivity analysis of planar frame structures with variable joint and support locations. A variational form of the eigenvalue equation is first derived, in which all of the quantities are expressed in the local coordinate system attached to each member. The material derivative of this variational equation is then sought to account for changes in a member's length and orientation resulting from the perturbation of joint and support locations. Finally, eigenvalue sensitivity equations are formulated in either domain quantities (by the domain method) or boundary quantities (by the boundary method). It is concluded that the sensitivity equation derived by the boundary method is more efficient in computation but less accurate than that of the domain method. Nevertheless, both are superior in computational efficiency to the conventional direct differentiation method and the finite difference method.

  3. Some comments on Hurst exponent and the long memory processes on capital markets

    NASA Astrophysics Data System (ADS)

    Sánchez Granero, M. A.; Trinidad Segovia, J. E.; García Pérez, J.

    2008-09-01

    The analysis of long memory processes in capital markets has been an important topic in finance, since the existence of market memory would imply the rejection of the efficient market hypothesis. The study of these processes in finance is realized through the Hurst exponent, and the most classical method applied is R/S analysis. In this paper we discuss the efficiency of this methodology, as well as some of its more important modifications for detecting long memory. We also propose the application of a classical geometrical method with slight modifications, and we compare both approaches.
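    The classical R/S estimate of the Hurst exponent mentioned above can be sketched as follows: split the series into windows, compute the rescaled range R/S per window, and fit the log-log slope of the average R/S against window size. A minimal sketch of the textbook procedure, not the paper's modified estimators.

```python
import numpy as np

def hurst_rs(series, window_sizes=(8, 16, 32, 64, 128, 256)):
    """Hurst exponent via classical R/S analysis.

    H near 0.5 indicates no memory (random), H > 0.5 persistence;
    applied to a nonstationary random walk the slope approaches 1.
    """
    x = np.asarray(series, dtype=float)
    N = len(x)
    log_n, log_rs = [], []
    for n in (w for w in window_sizes if w <= N // 2):
        rs_vals = []
        for start in range(0, N - n + 1, n):      # non-overlapping windows
            chunk = x[start:start + n]
            dev = chunk - chunk.mean()
            z = np.cumsum(dev)                    # cumulative deviation profile
            r = z.max() - z.min()                 # range R
            s = chunk.std(ddof=0)                 # scale S
            if s > 0:
                rs_vals.append(r / s)
        if rs_vals:
            log_n.append(np.log(n))
            log_rs.append(np.log(np.mean(rs_vals)))
    slope, _ = np.polyfit(log_n, log_rs, 1)       # H is the log-log slope
    return slope
```

On white noise the estimate sits near 0.5 (with the well-known small-sample upward bias), while a random walk built from the same noise yields a markedly higher slope.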

  4. Pabon Lasso and Data Envelopment Analysis: A Complementary Approach to Hospital Performance Measurement

    PubMed Central

    Mehrtak, Mohammad; Yusefzadeh, Hasan; Jaafaripooyan, Ebrahim

    2014-01-01

    Background: Performance measurement is essential to the management of health care organizations, for which efficiency is a vital indicator. The present study accordingly aims to measure the efficiency of hospitals employing two distinct methods. Methods: Data Envelopment Analysis (DEA) and the Pabon Lasso model were jointly applied to calculate the efficiency of all general hospitals located in the Eastern Azerbaijan Province of Iran. Data were collected using hospitals’ monthly performance forms and analyzed and displayed with MS Visio and DEAP software. Results: According to the Pabon Lasso model, 44.5% of the hospitals were entirely efficient, whilst DEA revealed 61% to be efficient. As such, 39% of the hospitals were wholly inefficient by the Pabon Lasso model; based on DEA, the corresponding figure was only 22.2%. Finally, 16.5% of hospitals as calculated by Pabon Lasso, and 16.7% by DEA, were relatively efficient. DEA thus showed more hospitals to be efficient than the Pabon Lasso model did. Conclusion: The simultaneous use of the two models rendered complementary and corroborative results, as both clearly reveal efficient hospitals. However, their results should be compared with prudence: whilst the Pabon Lasso inefficient zone is fully clear, DEA does not provide such a clear-cut limit for inefficiency. PMID:24999147

  5. Development of Composite Materials with High Passive Damping Properties

    DTIC Science & Technology

    2006-05-15

    frequency response function analysis. Sound transmission through sandwich panels was studied using statistical energy analysis (SEA). Finite element models are generally only efficient for problems at low and middle frequencies.

  6. Governance and performance: the performance of Dutch hospitals explained by governance characteristics.

    PubMed

    Blank, Jos L T; van Hulst, Bart Laurents

    2011-10-01

    This paper describes the efficiency of Dutch hospitals using the Data Envelopment Analysis (DEA) method with bootstrapping. In particular, the analysis focuses on explaining cost inefficiency by hospital corporate governance characteristics. We use bootstrap techniques, as introduced by Simar and Wilson (J. Econom. 136(1):31-64, 2007), in order to obtain more efficient estimates of the effects of governance on efficiency. The results show that part of the cost efficiency can be explained by governance. In particular, we find that higher remuneration of the board, as well as higher remuneration of the supervisory board, does not imply better performance.

  7. CFD Analysis and Design Optimization Using Parallel Computers

    NASA Technical Reports Server (NTRS)

    Martinelli, Luigi; Alonso, Juan Jose; Jameson, Antony; Reuther, James

    1997-01-01

    A versatile and efficient multi-block method is presented for the simulation of both steady and unsteady flow, as well as aerodynamic design optimization of complete aircraft configurations. The compressible Euler and Reynolds Averaged Navier-Stokes (RANS) equations are discretized using a high resolution scheme on body-fitted structured meshes. An efficient multigrid implicit scheme is implemented for time-accurate flow calculations. Optimum aerodynamic shape design is achieved at very low cost using an adjoint formulation. The method is implemented on parallel computing systems using the MPI message passing interface standard to ensure portability. The results demonstrate that, by combining highly efficient algorithms with parallel computing, it is possible to perform detailed steady and unsteady analysis as well as automatic design for complex configurations using the present generation of parallel computers.

  8. Managing for efficiency in health care: the case of Greek public hospitals.

    PubMed

    Mitropoulos, Panagiotis; Mitropoulos, Ioannis; Sissouras, Aris

    2013-12-01

    This paper evaluates the efficiency of public hospitals with two alternative conceptual models. One model targets resource usage directly to assess production efficiency, while the other model incorporates financial results to assess economic efficiency. Performance analysis of these models was conducted in two stages. In stage one, we utilized data envelopment analysis to obtain the efficiency score of each hospital, while in stage two we took into account the influence of the operational environment on efficiency by regressing those scores on explanatory variables that concern the performance of hospital services. We applied these methods to evaluate 96 general hospitals in the Greek national health system. The results indicate that, although the average efficiency scores in both models have remained relatively stable compared to past assessments, internal changes in hospital performances do exist. This study provides a clear framework for policy implications to increase the overall efficiency of general hospitals.

  9. A novel method for transmitting southern rice black-streaked dwarf virus to rice without insect vector.

    PubMed

    Yu, Lu; Shi, Jing; Cao, Lianlian; Zhang, Guoping; Wang, Wenli; Hu, Deyu; Song, Baoan

    2017-08-15

    Southern rice black-streaked dwarf virus (SRBSDV) has spread from the south of China to the north of Vietnam in the past few years and has severely affected rice production. However, in previous laboratory studies of the traditional SRBSDV transmission method, which relies on the natural virus vector, the white-backed planthopper (WBPH, Sogatella furcifera), researchers were frequently confronted with a lack of viral samples due to the limited life span of infected vectors and rice plants and the low virus acquisition and inoculation efficiency of the vector. Meanwhile, traditional mechanical inoculation of virus applies only to dicotyledons because of the higher lignin content in monocot leaves. Therefore, establishing an efficient, persistent-transmission model with a shorter virus transmission time and a higher virus transmission efficiency for screening novel anti-SRBSDV drugs is an urgent need. In this study, we report a novel method for transmitting SRBSDV to rice using the bud-cutting method. The transmission efficiency of SRBSDV in rice was investigated via the polymerase chain reaction (PCR) method, and the replication of SRBSDV in rice was investigated via proteomics analysis. Rice infected with SRBSDV using the bud-cutting method exhibited symptoms similar to those of rice infected by the WBPH, and the transmission efficiency (>80.00%), determined using the PCR method, and the virus transmission time (30 min) were superior to those achieved by WBPH transmission. Proteomics analysis confirmed that the SRBSDV P1, P2, P3, P4, P5-1, P5-2, P6, P8, P9-1, P9-2, and P10 proteins were present in rice seedlings infected via the bud-cutting method. The results showed that SRBSDV could be successfully transmitted via the bud-cutting method and that infected plants exhibited symptoms similar to those of plants infected by the WBPH. Therefore, the use of the bud-cutting method to generate a cheap, efficient, reliable supply of SRBSDV-infected rice seedlings should aid the development of disease control strategies. This method may also suggest a new approach for transmitting other viruses in monocots.

  10. Analysis of case-only studies accounting for genotyping error.

    PubMed

    Cheng, K F

    2007-03-01

    The case-only design provides one approach to assess possible interactions between genetic and environmental factors. It has been shown that if these factors are conditionally independent, then a case-only analysis is not only valid but also very efficient. However, a drawback of the case-only approach is that its conclusions may be biased by genotyping errors. In this paper, our main aim is to propose a method for analysis of case-only studies when these errors occur. We show that the bias can be adjusted through the use of internal validation data, which are obtained by genotyping some sampled individuals twice. Our analysis is based on a simple and yet highly efficient conditional likelihood approach. Simulation studies considered in this paper confirm that the new method has acceptable performance under genotyping errors.
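    The basic case-only interaction estimate that this design rests on can be sketched in a few lines: under gene-environment independence, the odds ratio for association between genotype and exposure among cases alone estimates the multiplicative interaction. A minimal sketch of the standard estimator, without the paper's genotyping-error adjustment; the counts in the example are invented.

```python
import math

def case_only_interaction_or(n_ge, n_g0, n_0e, n_00):
    """Case-only estimate of gene-environment interaction.

    Counts among cases only: n_ge (G and E), n_g0 (G only), n_0e (E only),
    n_00 (neither).  Under population G-E independence,
    OR_int = (n_ge * n_00) / (n_g0 * n_0e).
    """
    return (n_ge * n_00) / (n_g0 * n_0e)

def log_or_standard_error(n_ge, n_g0, n_0e, n_00):
    """Woolf standard error of the log odds ratio."""
    return math.sqrt(1 / n_ge + 1 / n_g0 + 1 / n_0e + 1 / n_00)
```

For illustrative counts (40, 20, 20, 40) the interaction odds ratio is 4.0; the point of the paper is that genotyping errors can bias exactly this estimate, motivating the validation-data correction.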

  11. Techniques of EMG signal analysis: detection, processing, classification and applications

    PubMed Central

    Hussain, M.S.; Mohd-Yasin, F.

    2006-01-01

    Electromyography (EMG) signals can be used for clinical/biomedical applications, Evolvable Hardware Chip (EHW) development, and modern human computer interaction. EMG signals acquired from muscles require advanced methods for detection, decomposition, processing, and classification. The purpose of this paper is to illustrate the various methodologies and algorithms for EMG signal analysis to provide efficient and effective ways of understanding the signal and its nature. We further point out some of the hardware implementations using EMG, focusing on applications related to prosthetic hand control, grasp recognition, and human computer interaction. A comparison study is also given to show the performance of various EMG signal analysis methods. This paper provides researchers a good understanding of EMG signals and their analysis procedures. This knowledge will help them develop more powerful, flexible, and efficient applications. PMID:16799694

  12. Bootstrap Methods: A Very Leisurely Look.

    ERIC Educational Resources Information Center

    Hinkle, Dennis E.; Winstead, Wayland H.

    The Bootstrap method, a computer-intensive statistical method of estimation, is illustrated using a simple and efficient Statistical Analysis System (SAS) routine. The utility of the method for generating unknown parameters, including standard errors for simple statistics, regression coefficients, discriminant function coefficients, and factor…
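    The record describes a SAS routine; as an illustrative stand-in in Python (data values are hypothetical), the core idea is simply to resample the data with replacement many times and take the spread of the recomputed statistic as its standard error:

```python
import random
import statistics

def bootstrap_se(data, stat=statistics.mean, n_boot=2000, seed=42):
    """Estimate the standard error of a statistic by resampling with replacement."""
    rng = random.Random(seed)
    n = len(data)
    replicates = []
    for _ in range(n_boot):
        sample = [data[rng.randrange(n)] for _ in range(n)]
        replicates.append(stat(sample))
    return statistics.stdev(replicates)

# Hypothetical sample; the bootstrap SE of the mean should be close to s/sqrt(n).
data = [2.1, 3.4, 1.9, 4.2, 2.8, 3.0, 2.5, 3.7]
se = bootstrap_se(data)
```

    Replacing `stat` with any other estimator (a regression coefficient, a discriminant function coefficient) gives the corresponding bootstrap standard error, which is the generality the record highlights.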

  13. RESEARCH ASSOCIATED WITH THE DEVELOPMENT OF EPA METHOD 552.2

    EPA Science Inventory

    The work presented in this paper entails the development of a method for haloacetic acid (HAA) analysis, Environmental Protection Agency (EPA) Method 552.2, that improves the safety and efficiency of previous methods and incorporates three additional trihalogenated acetic acids: b...

  14. Dynamic analysis of nonlinear rotor-housing systems

    NASA Technical Reports Server (NTRS)

    Noah, Sherif T.

    1988-01-01

    Nonlinear analysis methods are developed which will enable the reliable prediction of the dynamic behavior of the space shuttle main engine (SSME) turbopumps in the presence of bearing clearances and other local nonlinearities. A computationally efficient convolution method, based on discretized Duhamel and transition matrix integral formulations, is developed for the transient analysis. In the formulation, the coupling forces due to the nonlinearities are treated as external forces acting on the coupled subsystems. Iteration is utilized to determine their magnitudes at each time increment. The method is applied to a nonlinear generic model of the high pressure oxygen turbopump (HPOTP). As compared to fourth-order Runge-Kutta numerical integration methods, the convolution approach proved to be more accurate and more highly efficient. For determining the nonlinear, steady-state periodic responses, an incremental harmonic balance method was also developed. The method was successfully used to determine dominantly harmonic and subharmonic responses of the HPOTP generic model with bearing clearances. A reduction method similar to the impedance formulation utilized with linear systems is used to reduce the housing-rotor models to their coordinates at the bearing clearances. Recommendations are included for further development of the method, for extending the analysis to aperiodic and chaotic regimes and for conducting critical parametric studies of the nonlinear response of the current SSME turbopumps.

  15. Quantitative Analysis of the Efficiency of OLEDs.

    PubMed

    Sim, Bomi; Moon, Chang-Ki; Kim, Kwon-Hyeon; Kim, Jang-Joo

    2016-12-07

    We present a comprehensive model for the quantitative analysis of factors influencing the efficiency of organic light-emitting diodes (OLEDs) as a function of the current density. The model takes into account the contribution made by the charge carrier imbalance, quenching processes, and optical design loss of the device arising from various optical effects including the cavity structure, location and profile of the excitons, effective radiative quantum efficiency, and out-coupling efficiency. Quantitative analysis of the efficiency can be performed with an optical simulation using material parameters and experimental measurements of the exciton profile in the emission layer and the lifetime of the exciton as a function of the current density. This method was applied to three phosphorescent OLEDs based on a single host, mixed host, and exciplex-forming cohost. The three factors (charge carrier imbalance, quenching processes, and optical design loss) were influential in different ways, depending on the device. The proposed model can potentially be used to optimize OLED configurations on the basis of an analysis of the underlying physical processes.

  16. Empirical Likelihood in Nonignorable Covariate-Missing Data Problems.

    PubMed

    Xie, Yanmei; Zhang, Biao

    2017-04-20

    Missing covariate data occurs often in regression analysis, which frequently arises in the health and social sciences as well as in survey sampling. We study methods for the analysis of a nonignorable covariate-missing data problem in an assumed conditional mean function when some covariates are completely observed but other covariates are missing for some subjects. We adopt the semiparametric perspective of Bartlett et al. (Improving upon the efficiency of complete case analysis when covariates are MNAR. Biostatistics 2014;15:719-30) on regression analyses with nonignorable missing covariates, in which they have introduced the use of two working models, the working probability model of missingness and the working conditional score model. In this paper, we study an empirical likelihood approach to nonignorable covariate-missing data problems with the objective of effectively utilizing the two working models in the analysis of covariate-missing data. We propose a unified approach to constructing a system of unbiased estimating equations, where there are more equations than unknown parameters of interest. One useful feature of these unbiased estimating equations is that they naturally incorporate the incomplete data into the data analysis, making it possible to seek efficient estimation of the parameter of interest even when the working regression function is not specified to be the optimal regression function. We apply the general methodology of empirical likelihood to optimally combine these unbiased estimating equations. We propose three maximum empirical likelihood estimators of the underlying regression parameters and compare their efficiencies with other existing competitors. We present a simulation study to compare the finite-sample performance of various methods with respect to bias, efficiency, and robustness to model misspecification. 
The proposed empirical likelihood method is also illustrated by an analysis of a data set from the US National Health and Nutrition Examination Survey (NHANES).

  17. A Computationally Efficient Method for Polyphonic Pitch Estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Ruohua; Reiss, Joshua D.; Mattavelli, Marco; Zoia, Giorgio

    2009-12-01

    This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimation is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. This spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then the incorrect estimations are removed according to spectral irregularity and knowledge of the harmonic structures of the musical notes played on commonly used musical instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and the results demonstrate the high performance and computational efficiency of the approach.

  18. An opportunity cost approach to sample size calculation in cost-effectiveness analysis.

    PubMed

    Gafni, A; Walter, S D; Birch, S; Sendi, P

    2008-01-01

    The inclusion of economic evaluations as part of clinical trials has led to concerns about the adequacy of trial sample size to support such analysis. The analytical tool of cost-effectiveness analysis is the incremental cost-effectiveness ratio (ICER), which is compared with a threshold value (lambda) as a method to determine the efficiency of a health-care intervention. Accordingly, many of the methods suggested for calculating the sample size requirements for the economic component of clinical trials are based on the properties of the ICER. However, use of the ICER and a threshold value as a basis for determining efficiency has been shown to be inconsistent with the economic concept of opportunity cost. As a result, the validity of ICER-based approaches to sample size calculations can be challenged. Alternative methods for determining improvements in efficiency that do not depend upon ICER values have been presented in the literature. In this paper, we develop an opportunity cost approach to calculating sample size for economic evaluations alongside clinical trials, and illustrate the approach using a numerical example. We compare the sample size requirement of the opportunity cost method with that of the ICER threshold method. In general, either method may yield the larger required sample size. However, the opportunity cost approach, although simple to use, has additional data requirements. We believe that the additional data requirements represent a small price to pay for being able to perform an analysis consistent with both the concept of opportunity cost and the problem faced by decision makers. Copyright (c) 2007 John Wiley & Sons, Ltd.
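    The ICER threshold rule that the abstract contrasts with the opportunity cost approach can be sketched in a few lines (all numbers hypothetical; `lam` stands for the threshold lambda):

```python
def icer(delta_cost, delta_effect):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of effect."""
    return delta_cost / delta_effect

def is_efficient(delta_cost, delta_effect, lam):
    """Threshold rule: adopt the new intervention if ICER <= lambda.

    Equivalently, the incremental net monetary benefit
    lambda * delta_effect - delta_cost must be non-negative.
    """
    return lam * delta_effect - delta_cost >= 0

# Hypothetical numbers: the new treatment costs 5000 more and gains 0.25 QALYs.
r = icer(5000, 0.25)                      # cost per QALY gained
ok = is_efficient(5000, 0.25, lam=30000)  # compare against a threshold of 30000
```

    The paper's critique is of this decision rule itself, not of the arithmetic: lambda-based adoption decisions need not be consistent with opportunity cost when budgets are fixed.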

  19. Hamiltonian Monte Carlo acceleration using surrogate functions with random bases.

    PubMed

    Zhang, Cheng; Shahbaba, Babak; Zhao, Hongkai

    2017-11-01

    For big data analysis, high computational cost for Bayesian methods often limits their applications in practice. In recent years, there have been many attempts to improve computational efficiency of Bayesian inference. Here we propose an efficient and scalable computational technique for a state-of-the-art Markov chain Monte Carlo method, namely, Hamiltonian Monte Carlo. The key idea is to explore and exploit the structure and regularity in parameter space for the underlying probabilistic model to construct an effective approximation of its geometric properties. To this end, we build a surrogate function to approximate the target distribution using properly chosen random bases and an efficient optimization process. The resulting method provides a flexible, scalable, and efficient sampling algorithm, which converges to the correct target distribution. We show that by choosing the basis functions and optimization process differently, our method can be related to other approaches for the construction of surrogate functions such as generalized additive models or Gaussian process models. Experiments based on simulated and real data show that our approach leads to substantially more efficient sampling algorithms compared to existing state-of-the-art methods.
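    The surrogate construction itself is beyond a short sketch, but the Hamiltonian Monte Carlo step it accelerates can be illustrated on a standard normal target. This is a minimal sketch, not the authors' implementation; in their method the (expensive) gradient of the log-density would be replaced by the gradient of the cheap surrogate.

```python
import math
import random

def hmc_step(q, log_prob_grad, log_prob, eps=0.1, n_leap=20, rng=random):
    """One Hamiltonian Monte Carlo step for a 1-D target distribution."""
    p = rng.gauss(0.0, 1.0)                      # sample an auxiliary momentum
    q_new, p_new = q, p
    # Leapfrog integration of the Hamiltonian dynamics.
    p_new += 0.5 * eps * log_prob_grad(q_new)
    for _ in range(n_leap - 1):
        q_new += eps * p_new
        p_new += eps * log_prob_grad(q_new)
    q_new += eps * p_new
    p_new += 0.5 * eps * log_prob_grad(q_new)
    # Metropolis accept/reject on the Hamiltonian (potential + kinetic energy).
    h_old = -log_prob(q) + 0.5 * p * p
    h_new = -log_prob(q_new) + 0.5 * p_new * p_new
    if rng.random() < math.exp(min(0.0, h_old - h_new)):
        return q_new
    return q

# Standard normal target: log p(q) = -q^2/2 (up to a constant), gradient -q.
rng = random.Random(0)
q, draws = 0.0, []
for _ in range(5000):
    q = hmc_step(q, lambda x: -x, lambda x: -0.5 * x * x, rng=rng)
    draws.append(q)
mean = sum(draws) / len(draws)
var = sum(d * d for d in draws) / len(draws)
```

    Because each step calls the gradient `n_leap` times, replacing it with a cheap surrogate gradient is precisely where the computational savings of the paper's approach come from.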

  20. Survey of methods for calculating sensitivity of general eigenproblems

    NASA Technical Reports Server (NTRS)

    Murthy, Durbha V.; Haftka, Raphael T.

    1987-01-01

    A survey of methods for sensitivity analysis of the algebraic eigenvalue problem for non-Hermitian matrices is presented. In addition, a modification of one method based on a better normalizing condition is proposed. Methods are classified as Direct or Adjoint and are evaluated for efficiency. Operation counts are presented in terms of matrix size, number of design variables and number of eigenvalues and eigenvectors of interest. The effect of the sparsity of the matrix and its derivatives is also considered, and typical solution times are given. General guidelines are established for the selection of the most efficient method.

  1. A new method for flight test determination of propulsive efficiency and drag coefficient

    NASA Technical Reports Server (NTRS)

    Bull, G.; Bridges, P. D.

    1983-01-01

    A flight test method is described from which propulsive efficiency as well as parasite and induced drag coefficients can be directly determined using relatively simple instrumentation and analysis techniques. The method uses information contained in the transient response in airspeed for a small power change in level flight in addition to the usual measurement of power required for level flight. Measurements of pitch angle and longitudinal and normal acceleration are eliminated. The theoretical basis for the method, the analytical techniques used, and the results of application of the method to flight test data are presented.

  2. Adaptive radial basis function mesh deformation using data reduction

    NASA Astrophysics Data System (ADS)

    Gillebaart, T.; Blom, D. S.; van Zuijlen, A. H.; Bijl, H.

    2016-09-01

    Radial Basis Function (RBF) mesh deformation is one of the most robust mesh deformation methods available. Using the greedy (data reduction) method in combination with an explicit boundary correction results in an efficient method, as shown in the literature. However, to ensure the method remains robust, two issues are addressed: 1) how to ensure that the set of control points remains an accurate representation of the geometry in time and 2) how to use/automate the explicit boundary correction, while ensuring a high mesh quality. In this paper, we propose an adaptive RBF mesh deformation method, which ensures the set of control points always represents the geometry/displacement up to a certain (user-specified) criterion, by keeping track of the boundary error throughout the simulation and re-selecting when needed. As opposed to the unit displacement and prescribed displacement selection methods, the adaptive method is more robust, user-independent and efficient, for the cases considered. Secondly, the analysis of a single high aspect ratio cell is used to formulate an equation for the correction radius needed, depending on the characteristics of the correction function used, maximum aspect ratio, minimum first cell height and boundary error. Based on the analysis two new radial basis correction functions are derived and proposed. This proposed automated procedure is verified while varying the correction function, Reynolds number (and thus first cell height and aspect ratio) and boundary error. Finally, the parallel efficiency is studied for the two adaptive methods, unit displacement and prescribed displacement for both the CPU as well as the memory formulation with a 2D oscillating and translating airfoil with oscillating flap, a 3D flexible locally deforming tube and deforming wind turbine blade.
Generally, the memory formulation requires less work (due to the large amount of work required for evaluating RBFs), but the parallel efficiency reduces due to the limited bandwidth available between CPU and memory. In terms of parallel efficiency/scaling the different studied methods perform similarly, with the greedy algorithm being the bottleneck. In terms of absolute computational work the adaptive methods are better for the cases studied due to their more efficient selection of the control points. By automating most of the RBF mesh deformation, a robust, efficient and almost user-independent mesh deformation method is presented.
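    As a toy illustration of the interpolation underlying RBF mesh deformation (not the paper's adaptive point selection or boundary correction), boundary displacements can be fitted with a compactly supported RBF and evaluated at interior nodes. The Wendland C2 function is a common choice in this literature; the support radius and all coordinates below are illustrative assumptions.

```python
import numpy as np

def rbf_deform(boundary_pts, boundary_disp, interior_pts, radius=2.0):
    """Interpolate boundary displacements to interior mesh nodes with
    Wendland C2 compactly supported radial basis functions."""
    def phi(r):
        xi = np.clip(r / radius, 0.0, 1.0)
        return (1 - xi) ** 4 * (4 * xi + 1)

    # Fit RBF weights so the interpolant matches every boundary displacement.
    d = np.linalg.norm(boundary_pts[:, None, :] - boundary_pts[None, :, :], axis=2)
    coeffs = np.linalg.solve(phi(d), boundary_disp)
    # Evaluate the interpolant at the interior nodes.
    d_int = np.linalg.norm(interior_pts[:, None, :] - boundary_pts[None, :, :], axis=2)
    return phi(d_int) @ coeffs

# Toy case: one corner of a unit square moves; nearby nodes follow smoothly.
bnd = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
disp = np.array([[0.0, 0.0], [0.0, 0.0], [0.1, 0.0], [0.0, 0.0]])
interior = np.array([[0.5, 0.5], [0.9, 0.9]])
new_disp = rbf_deform(bnd, disp, interior)
```

    The dense solve over all boundary points is what the greedy data reduction avoids: it selects a small control-point subset so both the fit and the evaluation stay cheap.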

  3. Secure and Efficient Regression Analysis Using a Hybrid Cryptographic Framework: Development and Evaluation

    PubMed Central

    Jiang, Xiaoqian; Aziz, Md Momin Al; Wang, Shuang; Mohammed, Noman

    2018-01-01

    Background Machine learning is an effective data-driven tool that is being widely used to extract valuable patterns and insights from data. Specifically, predictive machine learning models are very important in health care for clinical data analysis. The machine learning algorithms that generate predictive models often require pooling data from different sources to discover statistical patterns or correlations among different attributes of the input data. The primary challenge is to fulfill one major objective: preserving the privacy of individuals while discovering knowledge from data. Objective Our objective was to develop a hybrid cryptographic framework for performing regression analysis over distributed data in a secure and efficient way. Methods Existing secure computation schemes are not suitable for processing the large-scale data that are used in cutting-edge machine learning applications. We designed, developed, and evaluated a hybrid cryptographic framework, which can securely perform regression analysis, a fundamental machine learning algorithm, using somewhat homomorphic encryption and a newly introduced secure hardware component of Intel Software Guard Extensions (Intel SGX) to ensure both privacy and efficiency at the same time. Results Experimental results demonstrate that our proposed method provides a better trade-off in terms of security and efficiency than solely secure hardware-based methods. Besides, there is no approximation error: computed model parameters are identical to the plaintext results. Conclusions To the best of our knowledge, this kind of secure computation model using a hybrid cryptographic framework, which leverages both somewhat homomorphic encryption and Intel SGX, has not been proposed or evaluated to date. Our proposed framework ensures data security and computational efficiency at the same time. PMID:29506966

  4. Costs and cost-efficiency of a mobile cash transfer to prevent child undernutrition during the lean season in Burkina Faso: a mixed methods analysis from the MAM'Out randomized controlled trial.

    PubMed

    Puett, Chloe; Salpéteur, Cécile; Houngbe, Freddy; Martínez, Karen; N'Diaye, Dieynaba S; Tonguet-Papucci, Audrey

    2018-01-01

    This study assessed the costs and cost-efficiency of a mobile cash transfer implemented in Tapoa Province, Burkina Faso in the MAM'Out randomized controlled trial from June 2013 to December 2014, using mixed methods and taking a societal perspective by including costs to implementing partners and beneficiary households. Data were collected via interviews with implementing staff from the humanitarian agency and the private partner delivering the mobile money, focus group discussions with beneficiaries, and review of accounting databases. Costs were analyzed by input category and activity-based cost centers. Cost-efficiency was analyzed by cost-transfer ratios (CTR) and cost per beneficiary. Qualitative analysis was conducted to identify themes related to implementing electronic cash transfers, and barriers to efficient implementation. The CTR was 0.82 from a societal perspective, within the same range as other humanitarian transfer programs; however, the intervention did not achieve the same degree of cost-efficiency as other mobile transfer programs specifically. Challenges in coordination between humanitarian and private partners resulted in long wait times for beneficiaries, particularly in the first year of implementation. Sensitivity analyses indicated a potential 6% reduction in CTR through reducing beneficiary wait time by one-half. Actors reported that coordination challenges improved during the project, therefore inefficiencies likely would be resolved, and cost-efficiency improved, as the program passed the pilot phase. Despite the time required to establish trusting relationships among actors, and to set up a network of cash points in remote areas, this analysis showed that mobile transfers hold promise as a cost-efficient method of delivering cash in this setting.
Implementation by local government would likely reduce costs greatly compared to those found in this study context, and improve cost-efficiency especially by subsidizing expansion of mobile money network coverage and increasing cash distribution points in remote areas which are unprofitable for private partners.

  5. Analysis of flexible aircraft longitudinal dynamics and handling qualities. Volume 1: Analysis methods

    NASA Technical Reports Server (NTRS)

    Waszak, M. R.; Schmidt, D. S.

    1985-01-01

    As aircraft become larger and lighter due to design requirements for increased payload and improved fuel efficiency, they will also become more flexible. For highly flexible vehicles, the handling qualities may not be accurately predicted by conventional methods. This study applies two analysis methods to a family of flexible aircraft in order to investigate how and when structural (especially dynamic aeroelastic) effects affect the dynamic characteristics of aircraft. The first type of analysis is an open loop model analysis technique. This method considers the effects of modal residue magnitudes on determining vehicle handling qualities. The second method is a pilot in the loop analysis procedure that considers several closed loop system characteristics. Volume 1 consists of the development and application of the two analysis methods described above.

  6. COMPUTATIONAL METHODS FOR SENSITIVITY AND UNCERTAINTY ANALYSIS FOR ENVIRONMENTAL AND BIOLOGICAL MODELS

    EPA Science Inventory

    This work introduces a computationally efficient alternative method for uncertainty propagation, the Stochastic Response Surface Method (SRSM). The SRSM approximates uncertainties in model outputs through a series expansion in normal random variables (polynomial chaos expansion)...

  7. A Rapid, Highly Efficient and Economical Method of Agrobacterium-Mediated In planta Transient Transformation in Living Onion Epidermis

    PubMed Central

    Xu, Kedong; Huang, Xiaohui; Wu, Manman; Wang, Yan; Chang, Yunxia; Liu, Kun; Zhang, Ju; Zhang, Yi; Zhang, Fuli; Yi, Liming; Li, Tingting; Wang, Ruiyue; Tan, Guangxuan; Li, Chengwei

    2014-01-01

    Transient transformation is simpler, more efficient and more economical for analyzing protein subcellular localization than stable transformation. Fluorescent fusion proteins are often used in transient transformation to follow the in vivo behavior of proteins. Onion epidermis, which has large, living and transparent cells in a monolayer, is suitable for visualizing fluorescent fusion proteins. Frequently used transient transformation methods include particle bombardment, protoplast transfection and Agrobacterium-mediated transformation. Particle bombardment in onion epidermis was successfully established; however, it was expensive, dependent on biolistic equipment and had low transformation efficiency. We developed a highly efficient in planta transient transformation method in onion epidermis by using a special agroinfiltration method, which could be completed within 5 days from the pretreatment of the onion bulb to the best time-point for analyzing gene expression. The transformation conditions were optimized to achieve 43.87% transformation efficiency in living onion epidermis. The developed method has advantages in cost, time required, equipment dependency and transformation efficiency over particle bombardment in onion epidermal cells, protoplast transfection and Agrobacterium-mediated transient transformation in the leaf epidermal cells of other plants. It will facilitate the analysis of protein subcellular localization on a large scale. PMID:24416168

  8. Application of the probabilistic approximate analysis method to a turbopump blade analysis. [for Space Shuttle Main Engine

    NASA Technical Reports Server (NTRS)

    Thacker, B. H.; Mcclung, R. C.; Millwater, H. R.

    1990-01-01

    An eigenvalue analysis of a typical space propulsion system turbopump blade is presented using an approximate probabilistic analysis methodology. The methodology was developed originally to investigate the feasibility of computing probabilistic structural response using closed-form approximate models. This paper extends the methodology to structures for which simple closed-form solutions do not exist. The finite element method will be used for this demonstration, but the concepts apply to any numerical method. The results agree with detailed analysis results and indicate the usefulness of using a probabilistic approximate analysis in determining efficient solution strategies.

  9. A window-DEA based efficiency evaluation of the public hospital sector in Greece during the 5-year economic crisis

    PubMed Central

    Flokou, Angeliki; Aletras, Vassilis; Niakas, Dimitris

    2017-01-01

    The main objective of this study was to apply the non-parametric method of Data Envelopment Analysis (DEA) to measure the efficiency of Greek NHS hospitals between 2009-2013. Hospitals were divided into four separate groups with common characteristics, which allowed comparisons to be carried out in the context of increased homogeneity. The window-DEA method was chosen since it leads to increased discrimination in the results, especially when applied to small samples, and it enables year-by-year comparisons of the results. Three inputs (hospital beds, physicians and other health professionals) and three outputs (hospitalized cases, surgeries and outpatient visits) were chosen as production variables in an input-oriented 2-year window DEA model for the assessment of technical and scale efficiency as well as for the identification of returns to scale. The Malmquist productivity index together with its components (i.e. pure technical efficiency change, scale efficiency change and technological change) were also calculated in order to analyze the sources of productivity change between the first and last year of the study period. In the context of window analysis, the study identified the individual efficiency trends together with "all-windows" best and worst performers and revealed that a high level of technical and scale efficiency was maintained over the entire 5-year period. Similarly, the relevant findings of the Malmquist productivity index analysis showed that both scale and pure technical efficiency were improved in 2013 whilst technological change was found to be in favor of the two groups with the largest hospitals. PMID:28542362
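    Full window-DEA requires a linear-programming solver, but in the special case of one input and one output the input-oriented CCR technical-efficiency score reduces to each unit's productivity ratio relative to the best observed ratio. A toy sketch with hypothetical hospital data (not the study's dataset):

```python
def dea_efficiency_1x1(inputs, outputs):
    """Input-oriented CCR efficiency for the 1-input/1-output special case:
    each unit's output/input ratio divided by the best observed ratio.
    Efficient units score exactly 1.0."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Hypothetical hospitals: beds (input) versus hospitalized cases (output).
beds = [100, 250, 400, 150]
cases = [2000, 4000, 9000, 1500]
scores = dea_efficiency_1x1(beds, cases)
```

    With the study's three inputs and three outputs, `best` is replaced by a per-unit linear program that searches over input/output weights, and window analysis repeats that program over overlapping 2-year panels.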

  10. In vivo evaluation of the effect of stimulus distribution on FIR statistical efficiency in event-related fMRI.

    PubMed

    Jansma, J Martijn; de Zwart, Jacco A; van Gelderen, Peter; Duyn, Jeff H; Drevets, Wayne C; Furey, Maura L

    2013-05-15

    Technical developments in MRI have improved signal to noise, allowing the use of analysis methods such as finite impulse response (FIR) analysis of rapid event-related functional MRI (er-fMRI). FIR is one of the most informative analysis methods, as it determines the onset and full shape of the hemodynamic response function (HRF) without any a priori assumptions. FIR is, however, vulnerable to multicollinearity, which is directly related to the distribution of stimuli over time. Efficiency can be optimized by simplifying a design and restricting the stimulus distribution to specific sequences, while more design flexibility necessarily reduces efficiency. However, the actual effect of efficiency on fMRI results has never been tested in vivo. Thus, it is currently difficult to make an informed choice between protocol flexibility and statistical efficiency. The main goal of this study was to assign concrete fMRI signal-to-noise values to the abstract scale of FIR statistical efficiency. Ten subjects repeated a perception task with five random and m-sequence-based protocols, with varying but, according to the literature, acceptable levels of multicollinearity. Results indicated substantial differences in signal standard deviation, whose level was a function of multicollinearity. Experimental protocols varied up to 55.4% in standard deviation. The results confirm that the quality of fMRI in an FIR analysis can vary significantly and substantially with statistical efficiency. Our in vivo measurements can be used to aid in making an informed decision between freedom in protocol design and statistical efficiency. Published by Elsevier B.V.
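    The link between stimulus distribution and FIR efficiency can be illustrated numerically. A common efficiency summary is 1/trace((X'X)^-1) for the FIR design matrix X; below, a jittered protocol is compared with a strictly periodic one whose inter-stimulus interval is shorter than the FIR window, which makes adjacent FIR regressors nearly collinear. All scan counts and onsets are hypothetical, and this is a simplified sketch of design efficiency, not the paper's protocol.

```python
import numpy as np

def fir_design_matrix(onsets, n_scans, hrf_len=8):
    """FIR design matrix: one column per post-stimulus time bin (TR units)."""
    X = np.zeros((n_scans, hrf_len))
    for t in onsets:
        for k in range(hrf_len):
            if t + k < n_scans:
                X[t + k, k] = 1.0
    return X

def fir_efficiency(X):
    """Design efficiency ~ 1 / trace((X'X)^-1); collinear designs score low."""
    return 1.0 / np.trace(np.linalg.inv(X.T @ X))

rng = np.random.default_rng(0)
n_scans, n_events = 240, 30
# Jittered onsets: random spacing keeps the FIR columns well separated.
random_onsets = np.sort(rng.choice(np.arange(0, n_scans - 8), size=n_events, replace=False))
# Periodic onsets every 4 TRs (< FIR window of 8): overlapping, collinear columns.
fixed_onsets = np.arange(0, n_events * 4, 4)
e_rand = fir_efficiency(fir_design_matrix(random_onsets, n_scans))
e_fixed = fir_efficiency(fir_design_matrix(fixed_onsets, n_scans))
```

    The periodic design remains estimable here but its efficiency collapses, which mirrors the paper's point that multicollinearity, driven entirely by the stimulus distribution, sets the noise level of the FIR estimates.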

  11. An advanced probabilistic structural analysis method for implicit performance functions

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.; Millwater, H. R.; Cruse, T. A.

    1989-01-01

    In probabilistic structural analysis, the performance or response functions usually are implicitly defined and must be solved by numerical analysis methods such as finite element methods. In such cases, the most commonly used probabilistic analysis tool is the mean-based, second-moment method which provides only the first two statistical moments. This paper presents a generalized advanced mean value (AMV) method which is capable of establishing the distributions to provide additional information for reliability design. The method requires slightly more computations than the second-moment method but is highly efficient relative to the other alternative methods. In particular, the examples show that the AMV method can be used to solve problems involving non-monotonic functions that result in truncated distributions.

  12. Computer Graphics-aided systems analysis: application to well completion design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Detamore, J.E.; Sarma, M.P.

    1985-03-01

    The development of an engineering tool (in the form of a computer model) for solving design and analysis problems related to oil and gas well production operations is discussed. The development of the method is based on integrating the concepts of "Systems Analysis" with the techniques of "Computer Graphics". The concepts behind the method are very general in nature. This paper, however, illustrates the application of the method in solving gas well completion design problems. The use of the method will save time and improve the efficiency of such design and analysis problems. The method can be extended to other design and analysis aspects of oil and gas wells.

  13. Factors that influence the efficiency of beef and dairy cattle recording system in Kenya: A SWOT-AHP analysis.

    PubMed

    Wasike, Chrilukovian B; Magothe, Thomas M; Kahi, Alexander K; Peters, Kurt J

    2011-01-01

    Animal recording in Kenya is characterised by erratic producer participation and high drop-out rates from the national recording scheme. This study evaluates factors influencing the efficiency of the beef and dairy cattle recording system. Factors influencing the efficiency of animal identification and registration, pedigree and performance recording, and genetic evaluation and information utilisation were generated using qualitative and participatory methods. Pairwise comparison of factors was done by strengths, weaknesses, opportunities and threats-analytical hierarchy process (SWOT-AHP) analysis, and priority scores to determine their relative importance to the system were calculated using the Eigenvalue method. For identification and registration, and evaluation and information utilisation, external factors had high priority scores. For pedigree and performance recording, threats and weaknesses had the highest priority scores. The strengths of the system alone could not sustain the required efficiency. The weaknesses of the system predisposed it to threats. Available opportunities could be explored as interventions to restore efficiency in the system. Defensive strategies, such as reorienting the system to offer utility benefits to recording, forming symbiotic and binding collaboration between recording organisations and NARS, and developing institutions to support recording, were feasible.
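    The Eigenvalue method mentioned above can be sketched as a power iteration on a Saaty-style reciprocal pairwise comparison matrix; the priority vector is the normalized principal eigenvector. The comparison values below are hypothetical, not the study's data.

```python
import numpy as np

def ahp_priorities(pairwise, n_iter=100):
    """Priority weights for an AHP pairwise comparison matrix (Eigenvalue
    method): the principal eigenvector, found here by power iteration and
    normalized to sum to 1."""
    A = np.asarray(pairwise, dtype=float)
    w = np.ones(len(A)) / len(A)
    for _ in range(n_iter):
        w = A @ w
        w /= w.sum()
    return w

# Hypothetical comparisons of three factors on Saaty's 1-9 scale
# (A[i][j] = how much more important factor i is than factor j).
A = [[1.0, 3.0, 5.0],
     [1/3, 1.0, 2.0],
     [1/5, 1/2, 1.0]]
w = ahp_priorities(A)
```

    In a SWOT-AHP analysis this is applied once within each SWOT group and once across groups, and the resulting weights are the priority scores the abstract refers to.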

  14. Geospatial Representation, Analysis and Computing Using Bandlimited Functions

    DTIC Science & Technology

    2010-02-19

    navigation of aircraft and missiles require detailed representations of gravity and efficient methods for determining orbits and trajectories. However, many...efficient on today’s computers. Under this grant new, computationally efficient, localized representations of gravity have been developed and tested. As a...step in developing a new approach to estimating gravitational potentials, a multiresolution representation for gravity estimation has been proposed

  15. Word aligned bitmap compression method, data structure, and apparatus

    DOEpatents

    Wu, Kesheng; Shoshani, Arie; Otoo, Ekow

    2004-12-14

    The Word-Aligned Hybrid (WAH) bitmap compression method and data structure is a relatively efficient method for searching and performing logical, counting, and pattern location operations upon large datasets. The technique is comprised of a data structure and methods that are optimized for computational efficiency by using the WAH compression method, which typically takes advantage of the target computing system's native word length. WAH is particularly apropos to infrequently varying databases, including those found in the on-line analytical processing (OLAP) industry, due to the increased computational efficiency of the WAH compressed bitmap index. Some commercial database products already include some version of a bitmap index, which could possibly be replaced by the WAH bitmap compression techniques for potentially increased operation speed, as well as increased efficiencies in constructing compressed bitmaps. Combined together, this technique may be particularly useful for real-time business intelligence. Additional WAH applications may include scientific modeling, such as climate and combustion simulations, to minimize search time for analysis and subsequent data visualization.
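    The core idea behind WAH-style compression can be sketched briefly: the bitmap is cut into (word-1)-bit groups, mixed groups are stored verbatim as literal words, and runs of all-zero or all-one groups collapse into a single fill word. The following toy encoder illustrates the idea only; it is not the patented implementation, and it uses Python tuples where real code would pack machine words:

```python
def wah_compress(bits, word=32):
    """Toy Word-Aligned Hybrid encoder: pack bits into (word-1)-bit
    groups; runs of identical all-0/all-1 groups become one fill word."""
    g = word - 1
    bits = list(bits) + [0] * (-len(bits) % g)   # pad to whole groups
    groups = [tuple(bits[i:i + g]) for i in range(0, len(bits), g)]
    out, i = [], 0
    while i < len(groups):
        grp = groups[i]
        if len(set(grp)) == 1:                   # all 0s or all 1s: fill
            j = i
            while j < len(groups) and groups[j] == grp:
                j += 1
            out.append(("fill", grp[0], j - i))  # (kind, bit, run length)
            i = j
        else:
            out.append(("lit", grp))             # mixed group: literal
            i += 1
    return out
```

Because fill words carry run lengths, logical operations such as AND/OR can skip whole runs at once, which is where the counting and search efficiency comes from.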

  16. Efficient Design and Analysis of Lightweight Reinforced Core Sandwich and PRSEUS Structures

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Yarrington, Phillip W.; Lucking, Ryan C.; Collier, Craig S.; Ainsworth, James J.; Toubia, Elias A.

    2012-01-01

    Design, analysis, and sizing methods for two novel structural panel concepts have been developed and incorporated into the HyperSizer Structural Sizing Software. Reinforced Core Sandwich (RCS) panels consist of a foam core with reinforcing composite webs connecting composite facesheets. Boeing's Pultruded Rod Stitched Efficient Unitized Structure (PRSEUS) panels use a pultruded unidirectional composite rod to provide axial stiffness along with integrated transverse frames and stitching. Both of these structural concepts are oven-cured and have shown great promise for applications in lightweight structures, but have suffered from the lack of efficient sizing capabilities similar to those that exist for honeycomb sandwich, foam sandwich, hat stiffened, and other, more traditional concepts. Now, with accurate design methods for RCS and PRSEUS panels available in HyperSizer, these concepts can be traded and used in designs as is done with the more traditional structural concepts. The methods developed to enable sizing of RCS and PRSEUS are outlined, as are results showing the validity and utility of the methods. Applications include several large NASA heavy lift launch vehicle structures.

  17. Inquiring the Most Critical Teacher's Technology Education Competences in the Highest Efficient Technology Education Learning Organization

    ERIC Educational Resources Information Center

    Yung-Kuan, Chan; Hsieh, Ming-Yuan; Lee, Chin-Feng; Huang, Chih-Cheng; Ho, Li-Chih

    2017-01-01

    Under the hyper-dynamic education situation, this research, in order to comprehensively explore the interplays between Teacher Competence Demands (TCD) and Learning Organization Requests (LOR), cross-employs the data-refinement methods of Descriptive Statistics (DS), Analysis of Variance (ANOVA) and Principal Components Analysis (PCA)…

  18. Sensitivity analysis and approximation methods for general eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Murthy, D. V.; Haftka, R. T.

    1986-01-01

    Optimization of dynamic systems involving complex non-hermitian matrices is often computationally expensive. Major contributors to the computational expense are the sensitivity analysis and reanalysis of a modified design. The present work seeks to alleviate this computational burden by identifying efficient sensitivity analysis and approximate reanalysis methods. For the algebraic eigenvalue problem involving non-hermitian matrices, algorithms for sensitivity analysis and approximate reanalysis are classified, compared and evaluated for efficiency and accuracy. Proper eigenvector normalization is discussed. An improved method for calculating derivatives of eigenvectors is proposed, based on a more rational normalization condition and taking advantage of matrix sparsity; important numerical aspects of this method are also discussed. To alleviate the problem of reanalysis, various approximation methods for eigenvalues are proposed and evaluated. Linear and quadratic approximations are based directly on the Taylor series. Several approximation methods are developed based on the generalized Rayleigh quotient for the eigenvalue problem. Approximation methods based on the trace theorem give high accuracy without needing any derivatives. Operation counts for the computation of the approximations are given. General recommendations are made for selecting the appropriate approximation technique as a function of the matrix size, the number of design variables, the number of eigenvalues of interest and the number of design points at which the approximation is sought.
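    The Rayleigh-quotient reanalysis idea is easy to illustrate: reuse an eigenvector from the baseline design to approximate an eigenvalue of the modified matrix, with no new eigen-decomposition. A minimal numpy sketch for the symmetric case (the paper treats general non-hermitian matrices, where left and right eigenvectors enter the generalized quotient); the matrices here are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
B = rng.standard_normal((n, n))
A = B + B.T                          # baseline design (symmetric stand-in)

vals, vecs = np.linalg.eigh(A)
x = vecs[:, 0]                       # baseline eigenvector (lowest mode)

dB = 1e-3 * rng.standard_normal((n, n))
A2 = A + dB + dB.T                   # slightly modified design

# Rayleigh quotient with the OLD eigenvector: second-order accurate in
# the perturbation, costing one matrix-vector product per design point
# instead of a full eigen-reanalysis.
approx = x @ A2 @ x / (x @ x)
exact = np.linalg.eigh(A2)[0][0]
```

The error of `approx` scales with the square of the design change, which is why such quotient-based approximations are attractive when many nearby designs must be screened.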

  19. Accurate, Streamlined Analysis of mRNA Translation by Sucrose Gradient Fractionation

    PubMed Central

    Aboulhouda, Soufiane; Di Santo, Rachael; Therizols, Gabriel; Weinberg, David

    2017-01-01

    The efficiency with which proteins are produced from mRNA molecules can vary widely across transcripts, cell types, and cellular states. Methods that accurately assay the translational efficiency of mRNAs are critical to gaining a mechanistic understanding of post-transcriptional gene regulation. One way to measure translational efficiency is to determine the number of ribosomes associated with an mRNA molecule, normalized to the length of the coding sequence. The primary method for this analysis of individual mRNAs is sucrose gradient fractionation, which physically separates mRNAs based on the number of bound ribosomes. Here, we describe a streamlined protocol for accurate analysis of mRNA association with ribosomes. Compared to previous protocols, our method incorporates internal controls and improved buffer conditions that together reduce artifacts caused by non-specific mRNA–ribosome interactions. Moreover, our direct-from-fraction qRT-PCR protocol eliminates the need for RNA purification from gradient fractions, which greatly reduces the amount of hands-on time required and facilitates parallel analysis of multiple conditions or gene targets. Additionally, no phenol waste is generated during the procedure. We initially developed the protocol to investigate the translationally repressed state of the HAC1 mRNA in S. cerevisiae, but we also detail adapted procedures for mammalian cell lines and tissues. PMID:29170751
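    The quantity at the heart of the protocol, ribosomes per mRNA normalised to coding-sequence length, is simple to compute once qRT-PCR gives the relative abundance of the transcript in each gradient fraction. A small sketch; the function name and example numbers are illustrative, not from the protocol:

```python
def ribosome_density(fraction_abundance, cds_length_nt):
    """Mean ribosomes per mRNA, normalised to CDS length (per kb).
    fraction_abundance maps the ribosome count of a gradient fraction
    to the relative mRNA abundance measured in that fraction."""
    total = sum(fraction_abundance.values())
    mean_ribosomes = sum(n * a for n, a in fraction_abundance.items()) / total
    return mean_ribosomes / (cds_length_nt / 1000.0)
```

For example, an mRNA found half in the monosome fraction and half in the 3-ribosome fraction, with a 2 kb coding sequence, averages one ribosome per kb.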

  20. Multivariate analysis of the volatile components in tobacco based on infrared-assisted extraction coupled to headspace solid-phase microextraction and gas chromatography-mass spectrometry.

    PubMed

    Yang, Yanqin; Pan, Yuanjiang; Zhou, Guojun; Chu, Guohai; Jiang, Jian; Yuan, Kailong; Xia, Qian; Cheng, Changhe

    2016-11-01

    A novel infrared-assisted extraction coupled to headspace solid-phase microextraction followed by gas chromatography with mass spectrometry has been developed for the rapid determination of the volatile components in tobacco. The optimal extraction conditions for maximizing the extraction efficiency were as follows: 65 μm polydimethylsiloxane-divinylbenzene fiber, extraction time of 20 min, infrared power of 175 W, and distance between the infrared lamp and the headspace vial of 2 cm. Under the optimum conditions, 50 components were found to exist in all ten tobacco samples from different geographical origins. Compared with conventional water-bath heating and nonheating extraction methods, the extraction efficiency of infrared-assisted extraction was greatly improved. Furthermore, multivariate analyses including principal component analysis, hierarchical cluster analysis, and similarity analysis were performed to evaluate the chemical information of these samples and to divide them into three classes: rich, moderate, and fresh flavors. The classification results were consistent with the sensory evaluation, which is pivotal and meaningful for tobacco discrimination. As a simple, fast, cost-effective, and highly efficient method, infrared-assisted extraction coupled to headspace solid-phase microextraction, combined with suitable chemometrics, is powerful and promising for distinguishing the geographical origins of tobacco samples. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Efficient ICCG on a shared memory multiprocessor

    NASA Technical Reports Server (NTRS)

    Hammond, Steven W.; Schreiber, Robert

    1989-01-01

    Different approaches are discussed for exploiting parallelism in the ICCG (Incomplete Cholesky Conjugate Gradient) method for solving large sparse symmetric positive definite systems of equations on a shared memory parallel computer. Techniques for efficiently solving triangular systems and computing sparse matrix-vector products are explored. Three methods for scheduling the tasks in solving triangular systems are implemented on the Sequent Balance 21000. Sample problems that are representative of a large class of problems solved using iterative methods are used. We show that a static analysis to determine data dependences in the triangular solve can greatly improve its parallel efficiency. We also show that ignoring symmetry and storing the whole matrix can reduce solution time substantially.
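    The static analysis of data dependences mentioned above is commonly done by level scheduling: a row of a lower-triangular solve can start once every earlier row it references has finished, so rows are grouped into "levels" that can each be processed in parallel. A small Python sketch on an adjacency-set representation (a generic illustration, not the Sequent Balance implementation):

```python
def level_schedule(deps):
    """deps[i] = set of earlier rows referenced by the off-diagonal
    nonzeros of row i in the lower-triangular factor. Rows assigned to
    the same level are mutually independent and can be solved in
    parallel; levels must run in increasing order."""
    n = len(deps)
    level = [0] * n
    for i in range(n):                              # rows in order
        level[i] = 1 + max((level[j] for j in deps[i]), default=-1)
    sched = {}
    for i, lv in enumerate(level):
        sched.setdefault(lv, []).append(i)
    return sched
```

A factor with few, wide levels parallelises well; a long chain of dependences degenerates into many single-row levels, which is why parallel efficiency depends so strongly on the sparsity pattern.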

  2. Classic maximum entropy recovery of the average joint distribution of apparent FRET efficiency and fluorescence photons for single-molecule burst measurements.

    PubMed

    DeVore, Matthew S; Gull, Stephen F; Johnson, Carey K

    2012-04-05

    We describe a method for analysis of single-molecule Förster resonance energy transfer (FRET) burst measurements using classic maximum entropy. Classic maximum entropy determines the Bayesian inference for the joint probability describing the total fluorescence photons and the apparent FRET efficiency. The method was tested with simulated data and then with DNA labeled with fluorescent dyes. The most probable joint distribution can be marginalized to obtain both the overall distribution of fluorescence photons and the apparent FRET efficiency distribution. This method proves to be ideal for determining the distance distribution of FRET-labeled biomolecules, and it successfully predicts the shape of the recovered distributions.
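    Marginalizing the recovered joint distribution is a one-line operation once it is discretised on a grid: summing out one axis leaves the photon-count distribution, summing out the other leaves the apparent-efficiency distribution. A sketch with a random stand-in for the MaxEnt result:

```python
import numpy as np

# Stand-in for the most probable joint P(photon count, apparent FRET E),
# discretised on a 5 x 4 grid and normalised to a probability mass.
rng = np.random.default_rng(1)
joint = rng.random((5, 4))
joint /= joint.sum()

p_photons = joint.sum(axis=1)          # marginal photon-count distribution
p_E = joint.sum(axis=0)                # marginal apparent-efficiency dist.
```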

  3. Newmark-Beta-FDTD method for super-resolution analysis of time reversal waves

    NASA Astrophysics Data System (ADS)

    Shi, Sheng-Bing; Shao, Wei; Ma, Jing; Jin, Congjun; Wang, Xiao-Hua

    2017-09-01

    In this work, a new unconditionally stable finite-difference time-domain (FDTD) method with the split-field perfectly matched layer (PML) is proposed for the analysis of time reversal (TR) waves. The proposed method is very suitable for multiscale problems involving microstructures. The spatial and temporal derivatives in this method are discretized by the central difference technique and Newmark-Beta algorithm, respectively, and the derivation results in the calculation of a banded-sparse matrix equation. Since the coefficient matrix keeps unchanged during the whole simulation process, the lower-upper (LU) decomposition of the matrix needs to be performed only once at the beginning of the calculation. Moreover, the reverse Cuthill-Mckee (RCM) technique, an effective preprocessing technique in bandwidth compression of sparse matrices, is used to improve computational efficiency. The super-resolution focusing of TR wave propagation in two- and three-dimensional spaces is included to validate the accuracy and efficiency of the proposed method.
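    The decompose-once strategy and RCM reordering described above can be sketched with scipy's sparse tools; the tridiagonal matrix below is a generic stand-in for the unchanging coefficient matrix of an implicit time-stepping scheme, not the paper's FDTD system:

```python
import numpy as np
from scipy.sparse import diags, csc_matrix
from scipy.sparse.linalg import splu
from scipy.sparse.csgraph import reverse_cuthill_mckee

n = 200
# Stand-in for the constant banded-sparse system matrix
K = csc_matrix(diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)))

perm = reverse_cuthill_mckee(K, symmetric_mode=True)   # bandwidth reduction
Kp = csc_matrix(K[perm, :][:, perm])

lu = splu(Kp)                          # LU decomposition performed ONCE
for step in range(50):                 # time marching reuses the factors
    rhs = np.ones(n)                   # stands in for the updated RHS
    y = lu.solve(rhs)

x = np.empty(n)
x[perm] = y                            # undo the RCM reordering
```

Each time step then costs only a pair of triangular solves, which is what makes the implicit Newmark-Beta scheme competitive despite its matrix equation.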

  4. Efficiencies of Rotational Raman, and Rayleigh Techniques for Laser Remote Sensing of the Atmospheric Temperature

    NASA Technical Reports Server (NTRS)

    Ivanova, I. D.; Gurdev, L. L.; Mitev, V. M.

    1992-01-01

    Various lidar methods have been developed for measuring the atmospheric temperature, making use of the temperature dependent characteristics of rotational Raman scattering (RRS) from nitrogen and oxygen, and Rayleigh or Rayleigh-Brillouin scattering (RS or RBS). These methods have various advantages and disadvantages as compared to each other, but their potential accuracies are principal characteristics of their efficiency. No systematic attempt has been undertaken so far to compare the efficiencies, in the above meaning, of different temperature lidar methods. Two RRS techniques have been compared. Here, we do such a comparison using two methods based on the detection and analysis of RS (RBS) spectra. Four methods are considered here for measuring the atmospheric temperature. One of them (Schwiesow and Lading, 1981) is based on an analysis of the RS linewidth with two Michelson interferometers (MI) in parallel. The second method (Shimisu et al., 1986) employs a high-resolution analysis of the RBS line shape. The third method (Cooney, 1972) employs the temperature dependence of the RRS spectrum envelope. The fourth method (Armstrong, 1974) makes use of a scanning Fabry-Perot interferometer (FPI) as a comb filter for processing the periodic RRS spectrum of the nitrogen. Let us denote the corresponding errors in measuring the temperature by sigma(sub MI), sigma(sub HR), sigma(sub ENV), and sigma(sub FPI). Let us also define the ratios chi(sub 1) = sigma(sub MI)/sigma(sub ENV), chi(sub 2) = sigma(sub HR)/sigma(sub ENV), and chi(sub 3) = sigma(sub FPI)/sigma(sub ENV), interpreted as relative errors with respect to sigma(sub ENV).

  5. Multifractality, efficiency analysis of Chinese stock market and its cross-correlation with WTI crude oil price

    NASA Astrophysics Data System (ADS)

    Zhuang, Xiaoyang; Wei, Yu; Ma, Feng

    2015-07-01

    In this paper, the multifractality and efficiency degrees of ten important Chinese sectoral indices are evaluated using the methods of MF-DFA and generalized Hurst exponents. The study also scrutinizes the dynamics of the efficiency of the Chinese sectoral stock market using a rolling-window approach. The overall empirical findings reveal that all the sectoral indices of the Chinese stock market exhibit different degrees of multifractality. The different efficiency measures agree that the 300 Materials index is the least efficient index, although they differ slightly on the most efficient one. The 300 Information Technology, 300 Telecommunication Services and 300 Health Care indices are comparatively efficient. We also investigate the cross-correlations between the ten sectoral indices and the WTI crude oil price based on Multifractal Detrended Cross-correlation Analysis. Finally, some relevant discussions and implications of the empirical results are presented.
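    Given the generalized Hurst exponents h(q) that MF-DFA produces, the quantities behind such comparisons are easy to form: the spread of h(q) measures the strength of multifractality, and the deviation of h(q) from 0.5 measures inefficiency. A sketch with made-up h(q) values, showing one common efficiency measure among the several the literature uses:

```python
def efficiency_degree(h_by_q):
    """Inefficiency measures from generalized Hurst exponents h(q) (as
    produced by MF-DFA). h_by_q maps q -> h(q). For an efficient market
    every h(q) is near 0.5; the spread of h(q) over q measures the
    strength of multifractality."""
    hs = list(h_by_q.values())
    multifractality = max(hs) - min(hs)          # Delta h
    qmin, qmax = min(h_by_q), max(h_by_q)
    inefficiency = (abs(h_by_q[qmin] - 0.5) + abs(h_by_q[qmax] - 0.5)) / 2
    return multifractality, inefficiency
```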

  6. A numerical formulation and algorithm for limit and shakedown analysis of large-scale elastoplastic structures

    NASA Astrophysics Data System (ADS)

    Peng, Heng; Liu, Yinghua; Chen, Haofeng

    2018-05-01

    In this paper, a novel direct method called the stress compensation method (SCM) is proposed for limit and shakedown analysis of large-scale elastoplastic structures. Without needing to solve the specific mathematical programming problem, the SCM is a two-level iterative procedure based on a sequence of linear elastic finite element solutions where the global stiffness matrix is decomposed only once. In the inner loop, the static admissible residual stress field for shakedown analysis is constructed. In the outer loop, a series of decreasing load multipliers are updated to approach the shakedown limit multiplier by using an efficient and robust iteration control technique, where the static shakedown theorem is adopted. Three numerical examples up to about 140,000 finite element nodes confirm the applicability and efficiency of this method for two-dimensional and three-dimensional elastoplastic structures, with detailed discussions on the convergence and the accuracy of the proposed algorithm.

  7. Development of an Itemwise Efficiency Scoring Method: Concurrent, Convergent, Discriminant, and Neuroimaging-Based Predictive Validity Assessed in a Large Community Sample

    PubMed Central

    Moore, Tyler M.; Reise, Steven P.; Roalf, David R.; Satterthwaite, Theodore D.; Davatzikos, Christos; Bilker, Warren B.; Port, Allison M.; Jackson, Chad T.; Ruparel, Kosha; Savitt, Adam P.; Baron, Robert B.; Gur, Raquel E.; Gur, Ruben C.

    2016-01-01

    Traditional “paper-and-pencil” testing is imprecise in measuring speed and hence limited in assessing performance efficiency, but computerized testing permits precision in measuring itemwise response time. We present a method of scoring performance efficiency (combining information from accuracy and speed) at the item level. Using a community sample of 9,498 youths age 8-21, we calculated item-level efficiency scores on four neurocognitive tests, and compared the concurrent, convergent, discriminant, and predictive validity of these scores to simple averaging of standardized speed and accuracy-summed scores. Concurrent validity was measured by the scores' abilities to distinguish men from women and their correlations with age; convergent and discriminant validity were measured by correlations with other scores inside and outside of their neurocognitive domains; predictive validity was measured by correlations with brain volume in regions associated with the specific neurocognitive abilities. Results provide support for the ability of itemwise efficiency scoring to detect signals as strong as those detected by standard efficiency scoring methods. We find no evidence of superior validity of the itemwise scores over traditional scores, but point out several advantages of the former. The itemwise efficiency scoring method shows promise as an alternative to standard efficiency scoring methods, with overall moderate support from tests of four different types of validity. This method allows the use of existing item analysis methods and provides the convenient ability to adjust the overall emphasis of accuracy versus speed in the efficiency score, thus adjusting the scoring to the real-world demands the test is aiming to fulfill. PMID:26866796
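    The adjustable accuracy-versus-speed emphasis mentioned above is the key convenience of item-level scoring. The following is an illustrative sketch only, not the published scoring formula: each correct item earns credit weighted between accuracy and a standardised log response-time speed score.

```python
import numpy as np

def itemwise_efficiency(correct, rt_sec, speed_weight=0.5):
    """Combine item accuracy with standardised log response time.
    speed_weight tunes the accuracy-vs-speed emphasis. Illustrative
    sketch of the general idea, not the published method."""
    correct = np.asarray(correct, dtype=float)
    log_rt = np.log(np.asarray(rt_sec, dtype=float))
    z_speed = -(log_rt - log_rt.mean()) / log_rt.std()   # faster => higher
    scores = (1 - speed_weight) * correct + speed_weight * z_speed * correct
    return scores.mean()
```

Raising `speed_weight` rewards fast correct responses more heavily, matching tests where real-world performance demands emphasise speed.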

  8. Application of integrated fluid-thermal-structural analysis methods

    NASA Technical Reports Server (NTRS)

    Wieting, Allan R.; Dechaumphai, Pramote; Bey, Kim S.; Thornton, Earl A.; Morgan, Ken

    1988-01-01

    Hypersonic vehicles operate in a hostile aerothermal environment which has a significant impact on their aerothermostructural performance. Significant coupling occurs between the aerodynamic flow field, structural heat transfer, and structural response creating a multidisciplinary interaction. Interfacing state-of-the-art disciplinary analysis methods is not efficient, hence interdisciplinary analysis methods integrated into a single aerothermostructural analyzer are needed. The NASA Langley Research Center is developing such methods in an analyzer called LIFTS (Langley Integrated Fluid-Thermal-Structural) analyzer. The evolution and status of LIFTS is reviewed and illustrated through applications.

  9. A new code for the design and analysis of the heliostat field layout for power tower system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, Xiudong; Lu, Zhenwu; Yu, Weixing

    2010-04-15

    A new code for the design and analysis of the heliostat field layout for power tower system is developed. In the new code, a new method for the heliostat field layout is proposed based on the edge ray principle of nonimaging optics. The heliostat field boundary is constrained by the tower height, the receiver tilt angle and size, and the heliostat efficiency factor, which is the product of the annual cosine efficiency and the annual atmospheric transmission efficiency. With the new method, the heliostats can be placed with a higher efficiency, and a faster response speed of the design and optimization can be obtained. A new module for the analysis of the aspherical heliostat is created in the new code. A new toroidal heliostat field is designed and analyzed by using the new code. Compared with the spherical heliostat, the solar image radius of the field is reduced by about 30% by using the toroidal heliostat if the mirror shape and the tracking are ideal. In addition, to maximize the utilization of land, suitable crops can be considered to be planted under heliostats. To evaluate the feasibility of the crop growth, a method for calculating the annual distribution of sunshine duration on the land surface is developed as well. (author)

  10. Efficiency limit factor analysis for the Francis-99 hydraulic turbine

    NASA Astrophysics Data System (ADS)

    Zeng, Y.; Zhang, L. X.; Guo, J. P.; Guo, Y. K.; Pan, Q. L.; Qian, J.

    2017-01-01

    The energy loss in a hydraulic turbine is the most direct factor affecting its efficiency. Based on the analysis theory of inner energy loss in hydraulic turbines, and combining the measurement data of Francis-99, this paper calculates characteristic parameters of the inner energy loss of the hydraulic turbine and establishes a calculation model for the hydraulic turbine power. Taking the start-up test conditions given by Francis-99 as a case, the characteristics and transformation laws of the inner energy of the hydraulic turbine during transients are investigated. Further, analyzing mechanical friction in the hydraulic turbine, we conclude that the main component of the mechanical friction loss is the rotational friction loss between the rotating runner and the water body, defined here as the inner mechanical friction loss. An approximate calculation method for the inner mechanical friction loss is given. Our purpose is to explore methods for increasing the transformation efficiency of the water flow through analysis of the energy losses in the hydraulic turbine.
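    The bookkeeping behind such an energy-loss analysis starts from the hydraulic power carried by the flow; every loss channel, including the inner mechanical friction loss defined above, is a share of the gap between that and the shaft power. A trivial sketch with illustrative numbers only:

```python
RHO, G = 997.0, 9.81                     # water density (kg/m^3), gravity

def hydraulic_power(q_m3s, head_m):
    """Power available in the flow, P = rho * g * Q * H (watts)."""
    return RHO * G * q_m3s * head_m

def turbine_efficiency(shaft_power_w, q_m3s, head_m):
    """Overall efficiency; the shortfall from 1 lumps together the
    hydraulic, volumetric and mechanical (friction) losses."""
    return shaft_power_w / hydraulic_power(q_m3s, head_m)
```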

  11. New algorithms for solving third- and fifth-order two point boundary value problems based on nonsymmetric generalized Jacobi Petrov–Galerkin method

    PubMed Central

    Doha, E.H.; Abd-Elhameed, W.M.; Youssri, Y.H.

    2014-01-01

    Two families of certain nonsymmetric generalized Jacobi polynomials with negative integer indexes are employed for solving third- and fifth-order two point boundary value problems governed by homogeneous and nonhomogeneous boundary conditions using a dual Petrov–Galerkin method. The idea behind our method is to use trial functions satisfying the underlying boundary conditions of the differential equations and the test functions satisfying the dual boundary conditions. The resulting linear systems from the application of our method are specially structured and they can be efficiently inverted. The use of generalized Jacobi polynomials simplify the theoretical and numerical analysis of the method and also leads to accurate and efficient numerical algorithms. The presented numerical results indicate that the proposed numerical algorithms are reliable and very efficient. PMID:26425358

  12. Partition method and experimental validation for impact dynamics of flexible multibody system

    NASA Astrophysics Data System (ADS)

    Wang, J. Y.; Liu, Z. Y.; Hong, J. Z.

    2018-06-01

    The impact problem of a flexible multibody system is a non-smooth, high-transient, and strong-nonlinear dynamic process with variable boundary. How to model the contact/impact process accurately and efficiently is one of the main difficulties in many engineering applications. The numerical approaches being used widely in impact analysis are mainly from two fields: multibody system dynamics (MBS) and computational solid mechanics (CSM). Approaches based on MBS provide a more efficient yet less accurate analysis of the contact/impact problems, while approaches based on CSM are well suited for particularly high accuracy needs, yet require very high computational effort. To bridge the gap between accuracy and efficiency in the dynamic simulation of a flexible multibody system with contacts/impacts, a partition method is presented considering that the contact body is divided into two parts, an impact region and a non-impact region. The impact region is modeled using the finite element method to guarantee the local accuracy, while the non-impact region is modeled using the modal reduction approach to raise the global efficiency. A three-dimensional rod-plate impact experiment is designed and performed to validate the numerical results. The principle for how to partition the contact bodies is proposed: the maximum radius of the impact region can be estimated by an analytical method, and the modal truncation orders of the non-impact region can be estimated by the highest frequency of the signal measured. The simulation results using the presented method are in good agreement with the experimental results. It shows that this method is an effective formulation considering both accuracy and efficiency. Moreover, a more complicated multibody impact problem of a crank slider mechanism is investigated to strengthen this conclusion.

  13. Correction of gene expression data: Performance-dependency on inter-replicate and inter-treatment biases.

    PubMed

    Darbani, Behrooz; Stewart, C Neal; Noeparvar, Shahin; Borg, Søren

    2014-10-20

    This report investigates for the first time the potential inter-treatment bias source of cell number for gene expression studies. Cell-number bias can affect gene expression analysis when comparing samples with unequal total cellular RNA content or with different RNA extraction efficiencies. For maximal reliability of analysis, therefore, comparisons should be performed at the cellular level. This could be accomplished using an appropriate correction method that can detect and remove the inter-treatment bias for cell-number. Based on inter-treatment variations of reference genes, we introduce an analytical approach to examine the suitability of correction methods by considering the inter-treatment bias as well as the inter-replicate variance, which allows use of the best correction method with minimum residual bias. Analyses of RNA sequencing and microarray data showed that the efficiencies of correction methods are influenced by the inter-treatment bias as well as the inter-replicate variance. Therefore, we recommend inspecting both of the bias sources in order to apply the most efficient correction method. As an alternative correction strategy, sequential application of different correction approaches is also advised. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Comparing multiple imputation methods for systematically missing subject-level data.

    PubMed

    Kline, David; Andridge, Rebecca; Kaizar, Eloise

    2017-06-01

    When conducting research synthesis, the collection of studies that will be combined often do not measure the same set of variables, which creates missing data. When the studies to combine are longitudinal, missing data can occur on the observation-level (time-varying) or the subject-level (non-time-varying). Traditionally, the focus of missing data methods for longitudinal data has been on missing observation-level variables. In this paper, we focus on missing subject-level variables and compare two multiple imputation approaches: a joint modeling approach and a sequential conditional modeling approach. We find the joint modeling approach to be preferable to the sequential conditional approach, except when the covariance structure of the repeated outcome for each individual has homogenous variance and exchangeable correlation. Specifically, the regression coefficient estimates from an analysis incorporating imputed values based on the sequential conditional method are attenuated and less efficient than those from the joint method. Remarkably, the estimates from the sequential conditional method are often less efficient than a complete case analysis, which, in the context of research synthesis, implies that we lose efficiency by combining studies. Copyright © 2015 John Wiley & Sons, Ltd.
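    Whichever imputation model generates the m completed datasets, the per-dataset analyses are pooled the same way, by Rubin's rules; the between-imputation variance term is where the joint and sequential approaches end up differing in efficiency. A minimal sketch:

```python
import math

def pool_rubin(estimates, variances):
    """Pool point estimates and within-imputation variances from m
    imputed datasets using Rubin's rules."""
    m = len(estimates)
    qbar = sum(estimates) / m                              # pooled estimate
    ubar = sum(variances) / m                              # within-imputation
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)  # between-imputation
    t = ubar + (1 + 1 / m) * b                             # total variance
    return qbar, t, math.sqrt(t)
```

A larger between-imputation component b inflates the total variance t, which is exactly the loss of efficiency the paper reports for the sequential conditional method.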

  15. Recycling stainless steel orthodontic brackets with Er:YAG laser - An environmental scanning electron microscope and shear bond strength study.

    PubMed

    Chacko, Prince K; Kodoth, Jithesh; John, Jacob; Kumar, Kishore

    2013-07-01

    To determine the efficiency of erbium:yttrium aluminum garnet (Er:YAG) laser treatment, assessed with environmental scanning electron microscopy (ESEM) and shear bond strength analysis, as a method of recycling stainless steel orthodontic brackets, and to compare it with other recycling methods. Eighty samples of extracted premolar teeth bonded to SS brackets were tested for rebonded shear bond strength after recycling by four methods and compared with a control group of 20 samples. The 80 samples were randomized into four groups, recycled respectively by sandblasting, the thermal method, adhesive grinding with a tungsten carbide bur, and the Er:YAG laser method. After recycling, ESEM and shear bond strength analysis were used to evaluate the efficiency of the recycling methods. The Er:YAG laser group had the greatest bond strength among the recycled brackets (8.33±2.51 MPa), followed by sandblasting at 6.12±1.12 MPa, thermal and electropolishing at 4.44±0.95 MPa, and lastly the adhesive grinding method at 3.08±1.07 MPa. The shear bond strength of the Er:YAG laser group showed no statistically significant difference from that of the control group (P>0.05) and differed significantly from the sandblasting, thermal and electropolishing, and adhesive grinding groups (P<0.001). ESEM analysis showed complete removal of adhesive from the brackets recycled with the Er:YAG laser, which mimicked the control group. The Er:YAG laser (2940 nm) was found to be the most efficient recycling method, followed by the sandblasting, thermal, and tungsten carbide methods; the last had the lowest shear bond strength value and is not fit for clinical use.

  16. Reliability-Based Stability Analysis of Rock Slopes Using Numerical Analysis and Response Surface Method

    NASA Astrophysics Data System (ADS)

    Dadashzadeh, N.; Duzgun, H. S. B.; Yesiloglu-Gultekin, N.

    2017-08-01

    While advanced numerical techniques in slope stability analysis are successfully used in deterministic studies, they have so far found limited use in probabilistic analyses due to their high computation cost. The first-order reliability method (FORM) is one of the most efficient probabilistic techniques for performing probabilistic stability analysis while accounting for the associated uncertainties in the analysis parameters. However, FORM cannot be used directly in numerical slope stability evaluations, as it requires the definition of a limit state performance function. In this study, an integrated methodology for probabilistic numerical modeling of rock slope stability is proposed. The methodology is based on the response surface method, where FORM is used to develop an explicit performance function from the results of numerical simulations. The proposed methodology is implemented for a large potential rock wedge in Sumela Monastery, Turkey. The accuracy with which the developed performance function represents the limit state surface is evaluated by monitoring the slope behavior. The calculated probability of failure is compared with the Monte Carlo simulation (MCS) method. The proposed methodology is found to be 72% more efficient than MCS, while its accuracy decreases, with an error of 24%.

  17. Progress in integrated-circuit horn antennas for receiver applications. Part 1: Antenna design

    NASA Technical Reports Server (NTRS)

    Eleftheriades, George V.; Ali-Ahmad, Walid Y.; Rebeiz, Gabriel M.

    1992-01-01

    The purpose of this work is to present a systematic method for the design of multimode quasi-integrated horn antennas. The design methodology is based on the Gaussian beam approach, and the structures are optimized to achieve maximum fundamental Gaussian coupling efficiency. For this purpose, a hybrid technique is employed in which the integrated part of the antennas is treated using full-wave analysis, whereas the machined part is treated using an approximate method. This results in a simple and efficient design process. The developed design procedure has been applied to the design of 20, 23, and 25 dB quasi-integrated horn antennas, all with a Gaussian coupling efficiency exceeding 97 percent. The designed antennas have been tested and characterized using both full-wave analysis and 90 GHz/370 GHz measurements.

  18. Approximation concepts for efficient structural synthesis

    NASA Technical Reports Server (NTRS)

    Schmit, L. A., Jr.; Miura, H.

    1976-01-01

    It is shown that efficient structural synthesis capabilities can be created by using approximation concepts to mesh finite element structural analysis methods with nonlinear mathematical programming techniques. The history of the application of mathematical programming techniques to structural design optimization problems is reviewed. Several rather general approximation concepts are described along with the technical foundations of the ACCESS 1 computer program, which implements several approximation concepts. A substantial collection of structural design problems involving truss and idealized wing structures is presented. It is concluded that since the basic ideas employed in creating the ACCESS 1 program are rather general, its successful development supports the contention that the introduction of approximation concepts will lead to the emergence of a new generation of practical and efficient, large scale, structural synthesis capabilities in which finite element analysis methods and mathematical programming algorithms will play a central role.

  19. MHOST: An efficient finite element program for inelastic analysis of solids and structures

    NASA Technical Reports Server (NTRS)

    Nakazawa, S.

    1988-01-01

    An efficient finite element program for 3-D inelastic analysis of gas turbine hot section components was constructed and validated. A novel mixed iterative solution strategy is derived from the augmented Hu-Washizu variational principle in order to nodally interpolate coordinates, displacements, deformation, strains, stresses and material properties. The series of increasingly sophisticated material models incorporated in MHOST includes elasticity, secant plasticity, infinitesimal and finite deformation plasticity, creep, and the unified viscoplastic constitutive model proposed by Walker. A library of high performance elements is built into this computer program utilizing the concepts of selective reduced integration and independent strain interpolations. A family of efficient solution algorithms is implemented in MHOST for linear and nonlinear equation solution, including the classical Newton-Raphson method; modified, quasi-, and secant Newton methods with optional line search; and the conjugate gradient method.
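The Newton-type solvers listed above follow a common pattern: linearize the residual, solve for an update, and optionally damp it with a line search. A minimal sketch on a hypothetical two-equation toy system, standing in for the finite element equations rather than reproducing MHOST's solvers:

```python
import numpy as np

# Newton-Raphson with a simple backtracking line search, illustrated on a
# made-up 2-equation nonlinear system F(x) = 0 with analytic Jacobian J.

def newton(F, J, x0, tol=1e-10, iters=50):
    x = np.asarray(x0, float)
    for _ in range(iters):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        dx = np.linalg.solve(J(x), -r)       # Newton direction
        t = 1.0
        # backtracking line search: halve the step until the residual drops
        while np.linalg.norm(F(x + t * dx)) > np.linalg.norm(r) and t > 1e-4:
            t *= 0.5
        x = x + t * dx
    return x

# toy system: a point on the circle x^2 + y^2 = 4 with x = y
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] - v[1]])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
root = newton(F, J, [1.0, 0.5])
```

Modified and quasi-Newton variants reuse or approximate the Jacobian between iterations to trade convergence rate for per-step cost.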

  20. An efficient genome-wide association test for mixed binary and continuous phenotypes with applications to substance abuse research.

    PubMed

    Buu, Anne; Williams, L Keoki; Yang, James J

    2018-03-01

    We propose a new genome-wide association test for mixed binary and continuous phenotypes that uses an efficient numerical method to estimate the empirical distribution of Fisher's combination statistic under the null hypothesis. Our simulation study shows that the proposed method controls the type I error rate and also maintains its power at the level of the permutation method. More importantly, the computational efficiency of the proposed method is much higher than that of the permutation method. The simulation results also indicate that the power of the test increases when the genetic effect increases, the minor allele frequency increases, and the correlation between responses decreases. The statistical analysis of the database of the Study of Addiction: Genetics and Environment demonstrates that the proposed method, by combining multiple phenotypes, can increase the power of identifying markers that might otherwise not be chosen using marginal tests.
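The idea of combining per-phenotype p-values with Fisher's statistic, while estimating its null distribution empirically to account for correlated responses, can be sketched as follows. The correlation `rho` and the observed p-values are made-up illustrations, and the simple simulation below stands in for the paper's more refined numerical estimator:

```python
import numpy as np
from scipy import stats

# Fisher's combination statistic with an empirical null obtained by
# simulating correlated per-phenotype test statistics.

def fisher_stat(pvals):
    # T = -2 * sum(log p_i); chi-square with 2k df only under independence
    return -2.0 * np.sum(np.log(pvals))

rng = np.random.default_rng(0)
rho = 0.3                                    # assumed phenotype correlation
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]],
                            size=100_000)
p_null = 2.0 * stats.norm.sf(np.abs(z))      # two-sided p-values per test
t_null = -2.0 * np.log(p_null).sum(axis=1)   # empirical null of T

t_obs = fisher_stat(np.array([0.01, 0.04]))  # hypothetical observed p-values
p_emp = float(np.mean(t_null >= t_obs))      # empirical combined p-value
```

Because the null is simulated rather than permuted, the cost does not scale with the number of permutation replicates per marker.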

  1. Comparative analysis of methods for concentrating venom from jellyfish Rhopilema esculentum Kishinouye

    NASA Astrophysics Data System (ADS)

    Li, Cuiping; Yu, Huahua; Feng, Jinhua; Chen, Xiaolin; Li, Pengcheng

    2009-02-01

    In this study, several methods were compared for their efficiency in concentrating venom from the tentacles of the jellyfish Rhopilema esculentum Kishinouye. The results show that methods that remove water by freeze-drying or gel absorption are not applicable due to the low concentration of the dissolved compounds. Although the recovery efficiency and the total venom obtained by the dialysis dehydration method are high, some proteins can be lost during the concentration process. Compared with lyophilization, ultrafiltration is a simple way to concentrate the compounds at a high percentage, but the hemolytic activities of the proteins obtained by ultrafiltration appear to be lower. Our results suggest that, overall, lyophilization is the best and recommended method for concentrating venom from jellyfish tentacles: it provides not only high recovery efficiency for the venoms but also high hemolytic activity.

  2. Efficient l1 -norm-based low-rank matrix approximations for large-scale problems using alternating rectified gradient method.

    PubMed

    Kim, Eunwoo; Lee, Minsik; Choi, Chong-Ho; Kwak, Nojun; Oh, Songhwai

    2015-02-01

    Low-rank matrix approximation plays an important role in computer vision and image processing. Most conventional low-rank matrix approximation methods are based on the l2-norm (Frobenius norm), with principal component analysis (PCA) the most popular among them. However, this can give a poor approximation for data contaminated by outliers (including missing data), because the l2-norm exaggerates the negative effect of outliers. Recently, to overcome this problem, various methods based on the l1-norm, such as robust PCA methods, have been proposed for low-rank matrix approximation. Despite their robustness, these methods require heavy computational effort and substantial memory for high-dimensional data, which is impractical for real-world problems. In this paper, we propose two efficient low-rank factorization methods based on the l1-norm that find proper projection and coefficient matrices using the alternating rectified gradient method. The proposed methods are applied to a number of low-rank matrix approximation problems to demonstrate their efficiency and robustness. The experimental results show that our methods are efficient in both execution time and reconstruction performance, unlike other state-of-the-art methods.
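The alternating structure of l1-norm factorization, updating one factor while the other is fixed, can be sketched with iteratively reweighted least squares (IRLS). This is a simplified stand-in for the paper's alternating rectified gradient updates, kept only to show the alternating scheme and the robustness of the l1 objective to outliers:

```python
import numpy as np

# Minimal l1-norm low-rank factorization X ~ U @ V.T via IRLS: each sweep
# reweights residuals by 1/|r| and solves per-row weighted least squares.

def l1_lowrank(X, rank, iters=50, eps=1e-6):
    m, n = X.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    for _ in range(iters):
        W = 1.0 / (np.abs(X - U @ V.T) + eps)   # l1 reweighting of residuals
        for j in range(n):                      # weighted LS update of V rows
            G = U * W[:, j:j + 1]
            V[j] = np.linalg.solve(G.T @ U + eps * np.eye(rank),
                                   G.T @ X[:, j])
        for i in range(m):                      # weighted LS update of U rows
            G = V * W[i][:, None]
            U[i] = np.linalg.solve(G.T @ V + eps * np.eye(rank),
                                   G.T @ X[i])
    return U, V
```

Entries with large residuals (outliers) receive small weights, so the fit is driven by the clean entries rather than dragged toward the outliers as an l2 fit would be.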

  3. 1975 Automotive Characteristics Data Base

    DOT National Transportation Integrated Search

    1976-10-01

    A study of automobile characteristics as a supportive tool for auto energy consumption, fuel economy monitoring, and fleet analysis studies is presented. This report emphasizes the utility of efficient data retrieval methods in fuel economy analysis,...

  4. The detection methods of dynamic objects

    NASA Astrophysics Data System (ADS)

    Knyazev, N. L.; Denisova, L. A.

    2018-01-01

    The article deals with the application of cluster analysis methods to aircraft detection, based on partitioning sampled navigation parameters into groups (clusters). A modified cluster analysis method is suggested for searching for and detecting objects, iteratively merging them into clusters and then counting the clusters to increase the accuracy of aircraft detection. The operation of the method and its implementation features are considered. In conclusion, the efficiency of the proposed method for accurate cluster analysis in target finding is demonstrated.

  5. High resolution melting analysis is a more sensitive and effective alternative to gel-based platforms in analysis of SSR--an example in citrus.

    PubMed

    Distefano, Gaetano; Caruso, Marco; La Malfa, Stefano; Gentile, Alessandra; Wu, Shu-Biao

    2012-01-01

    High resolution melting curve analysis (HRM) has been used as an efficient, accurate and cost-effective tool to detect single nucleotide polymorphisms (SNPs) and insertions or deletions (INDELs). However, its efficiency, accuracy and applicability for discriminating microsatellite polymorphism have not been extensively assessed. Traditional protocols for SSR genotyping include PCR amplification of the DNA fragment and separation of the fragments on an electrophoresis-based platform. However, post-PCR handling processes are laborious and costly. Furthermore, SNPs present in the sequences flanking the repeat motif cannot be detected by polyacrylamide-gel-electrophoresis-based methods. In the present study, we compared the discriminating power of HRM with traditional electrophoresis-based methods and provide a panel of primers for HRM genotyping in Citrus. The results showed that sixteen SSR markers produced distinct polymorphic melting curves among the Citrus spp. investigated through HRM analysis. Among those, 10 showed more genotypes by HRM analysis than by capillary electrophoresis owing to the presence of SNPs in the amplicons. For SSR markers without SNPs in the flanking region, HRM also gave distinct melting curves that detected the same genotypes as capillary electrophoresis (CE) analysis. Moreover, HRM analysis allowed the discrimination of most of the 15 citrus genotypes, and the resulting genetic distance analysis clustered them into three main branches. In conclusion, it has been shown that HRM is not only an efficient and cost-effective alternative to electrophoresis-based methods for SSR markers, but also a method to uncover additional polymorphisms contributed by SNPs present in SSRs. The panel of SSR markers could therefore be used in a variety of applications in citrus biodiversity and breeding programs using HRM analysis.
Furthermore, we speculate that HRM analysis can be employed to analyse SSR markers in a wide range of applications in other species.

  6. The Multidimensional Efficiency of Pension System: Definition and Measurement in Cross-Country Studies.

    PubMed

    Chybalski, Filip

    The existing literature on the efficiency of pension systems usually addresses the choice between different theoretical models, or concerns one or a few empirical pension systems. In this paper a quite different approach to the measurement of pension system efficiency is proposed. It is dedicated mainly to cross-country studies of empirical pension systems; however, it may also be employed in the analysis of a single pension system on the basis of time series. I identify four dimensions of pension system efficiency, referring to: GDP distribution, adequacy of pensions, influence on the labour market, and administrative costs. Consequently, I propose four sets of static and one set of dynamic efficiency indicators. In the empirical part of the paper, I use Spearman's rank correlation coefficient and cluster analysis to verify the proposed method on statistical data covering 28 European countries in the years 2007-2011. I show that the method works and enables comparisons as well as clustering of the analyzed pension systems. The study also delivers some interesting empirical findings. The main goal of pension systems seems to be becoming poverty alleviation, since the efficiency of ensuring protection against poverty, as well as the efficiency of reducing poverty, is largely insensitive to the efficiency of GDP distribution. The opposite holds for the efficiency of consumption smoothing: it is generally sensitive to the efficiency of GDP distribution, and its dynamics are sensitive to the dynamics of GDP-distribution efficiency. The results of the study indicate the Norwegian and Icelandic pension systems to be the most efficient in the analyzed group.

  7. Joint carbon footprint assessment and data envelopment analysis for the reduction of greenhouse gas emissions in agriculture production.

    PubMed

    Rebolledo-Leiva, Ricardo; Angulo-Meza, Lidia; Iriarte, Alfredo; González-Araya, Marcela C

    2017-09-01

    Operations management tools are critical in the process of evaluating and implementing action towards low carbon production. Currently, sustainable production implies both efficient resource use and the obligation to meet targets for reducing greenhouse gas (GHG) emissions. The carbon footprint (CF) tool allows estimating the overall amount of GHG emissions associated with a product or activity throughout its life cycle. In this paper, we propose a four-step method for the joint use of CF assessment and Data Envelopment Analysis (DEA). Following the eco-efficiency definition, which is the delivery of goods using fewer resources and with decreasing environmental impact, we use an output-oriented DEA model to maximize production and reduce CF, taking into account the economic and ecological perspectives simultaneously. In another step, we establish targets for the contributing CF factors in order to achieve CF reduction. The proposed method was applied to assess the eco-efficiency of five organic blueberry orchards throughout three growing seasons. The results show that this method is a practical tool for determining eco-efficiency and reducing GHG emissions. Copyright © 2017 Elsevier B.V. All rights reserved.
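The output-oriented DEA core of such a method can be sketched as a small linear program per decision-making unit (DMU). A minimal sketch with one input (carbon footprint) and one output (production); the data are toy numbers, and the paper's full four-step method additionally sets targets for the CF contributors:

```python
import numpy as np
from scipy.optimize import linprog

# Output-oriented CCR DEA: for each DMU, find the factor phi by which its
# output could be expanded while staying inside the frontier spanned by
# the observed DMUs. phi == 1 means efficient; phi > 1 means a shortfall.

def output_oriented_ccr(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    phis = []
    for o in range(n):
        # variables: [phi, lambda_1..lambda_n]; maximize phi
        c = np.r_[-1.0, np.zeros(n)]
        A_ub = np.vstack([
            np.r_[0.0, x],       # sum(lambda_j * x_j) <= x_o  (input)
            np.r_[y[o], -y],     # phi*y_o - sum(lambda_j*y_j) <= 0 (output)
        ])
        b_ub = np.array([x[o], 0.0])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None)] * (n + 1))
        phis.append(res.x[0])
    return phis
```

With multiple inputs and outputs the same structure applies, with one inequality row per input and per output.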

  8. Investigation of the dye-sensitized solar cell designed by a series of mixed metal oxides based on ZnAl-layered double hydroxide

    NASA Astrophysics Data System (ADS)

    Zhu, Yatong; Wang, Dali; Yang, Xiaoyu; Liu, Sha; Liu, Dong; Liu, Jie; Xiao, Hongdi; Hao, Xiaotao; Liu, Jianqiang

    2017-10-01

    In this paper, the anode materials for dye-sensitized solar cells (DSSCs) were prepared by a facile calcination method using ZnAl-layered double hydroxide (LDH) as a precursor. ZnAl-LDHs with different molar ratios (Zn:Al = 2, 4, 6, 8) were prepared by the urea method, and the mixed metal oxides (MMOs) were prepared by calcining the LDHs at 500 °C. A series of cells were assembled from the corresponding MMOs and different dyes (N3 and N719). The basic parameters were investigated by X-ray diffraction, scanning electron microscopy, thermogravimetric and differential thermal analysis, nitrogen sorption analysis and UV-Vis absorption spectroscopy. The photovoltaic performance of the DSSCs was measured electrochemically. The Zn:Al molar ratio and the choice of dye had a great influence on the efficiency of the DSSC: the efficiency improved markedly with increasing Zn:Al molar ratio, and DSSCs made with N3 showed better efficiency than those with N719. The best efficiency, 0.55%, was reached with N3 when the ratio of the ZnAl-LDH precursor was 8:1.

  9. Complementarity and Area-Efficiency in the Prioritization of the Global Protected Area Network.

    PubMed

    Kullberg, Peter; Toivonen, Tuuli; Montesino Pouzols, Federico; Lehtomäki, Joona; Di Minin, Enrico; Moilanen, Atte

    2015-01-01

    Complementarity and cost-efficiency are widely used principles for protected area network design. Despite their wide use and robust theoretical underpinnings, their effects on the performance and patterns of priority areas are rarely studied in detail. Here we compare two approaches for identifying management priority areas inside the global protected area network: 1) a scoring-based approach, used in a recently published analysis, and 2) a spatial prioritization method that accounts for complementarity and area-efficiency. Using the same IUCN species distribution data, the complementarity method found an equal-area set of priority areas covering, on average, double the species ranges of the scoring-based approach. The complementarity set also had 72% more species with their full ranges covered, and left only half as many species entirely without coverage compared to the scoring approach. Protected areas in our complementarity-based solution were on average smaller and geographically more scattered. The large difference between the two solutions highlights the need for critical thinking about the chosen prioritization method. According to our analysis, accounting for complementarity and area-efficiency can lead to considerable improvements when setting management priorities for the global protected area network.
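The complementarity principle, picking each new area for what it adds to the set already chosen rather than for its standalone score, is commonly illustrated with a greedy set-cover heuristic. A toy sketch with made-up occurrence data; the spatial prioritization used in the paper is considerably more sophisticated:

```python
import numpy as np

# Greedy complementarity-based selection: at each step add the cell that
# newly covers the most species not yet represented in the chosen set.

def greedy_complementarity(occ, budget):
    # occ[s, c] = 1 if species s occurs in cell c
    n_species, n_cells = occ.shape
    covered = np.zeros(n_species, bool)
    chosen = []
    for _ in range(budget):
        gains = occ[~covered].sum(axis=0)   # new species each cell would add
        gains[chosen] = -1                  # never re-pick a chosen cell
        c = int(np.argmax(gains))
        chosen.append(c)
        covered |= occ[:, c].astype(bool)
    return chosen, covered
```

A scoring approach would rank cells by total richness once; the greedy rule instead re-ranks after every pick, which is why it avoids redundant cells that cover the same species twice.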

  10. Liver plasma membranes: an effective method to analyze membrane proteome.

    PubMed

    Cao, Rui; Liang, Songping

    2012-01-01

    Plasma membrane proteins are critical for the maintenance of biological systems and represent important targets for the treatment of disease. The hydrophobicity and low abundance of plasma membrane proteins make them difficult to analyze. The protocols given here are efficient isolation/digestion procedures for liver plasma membrane proteomic analysis. Both a protocol for the isolation of plasma membranes and a protocol for the in-gel digestion of gel-embedded plasma membrane proteins are presented. The latter method allows the use of a high detergent concentration to achieve efficient solubilization of hydrophobic plasma membrane proteins while avoiding interference with the subsequent LC-MS/MS analysis.

  11. Condenser-type diffusion denuders for the collection of sulfur dioxide in a cleanroom.

    PubMed

    Chang, In-Hyoung; Lee, Dong Soo; Ock, Soon-Ho

    2003-02-01

    High-efficiency condenser-type diffusion denuders of cylindrical and planar geometries are described. The film condensation of water vapor onto a cooled denuder surface can be used as a method for collecting water-soluble gases. Using SO2 as the test gas, the planar design offers quantitative collection efficiency at air sampling rates up to 5 L/min. Coupled to ion chromatography, the limit of detection (LOD) for SO2 is 0.014 ppbv with a 30-min successive analysis sequence. The method has been successfully applied to the analysis of temperature- and humidity-controlled cleanroom air.

  12. Principal Component Relaxation Mode Analysis of an All-Atom Molecular Dynamics Simulation of Human Lysozyme

    NASA Astrophysics Data System (ADS)

    Nagai, Toshiki; Mitsutake, Ayori; Takano, Hiroshi

    2013-02-01

    A new relaxation mode analysis method, which is referred to as the principal component relaxation mode analysis method, has been proposed to handle a large number of degrees of freedom of protein systems. In this method, principal component analysis is carried out first and then relaxation mode analysis is applied to a small number of principal components with large fluctuations. To reduce the contribution of fast relaxation modes in these principal components efficiently, we have also proposed a relaxation mode analysis method using multiple evolution times. The principal component relaxation mode analysis method using two evolution times has been applied to an all-atom molecular dynamics simulation of human lysozyme in aqueous solution. Slow relaxation modes and corresponding relaxation times have been appropriately estimated, demonstrating that the method is applicable to protein systems.
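The first stage of the scheme above, ordinary PCA used to isolate the few large-fluctuation components before relaxation mode analysis is applied, can be sketched as follows. The trajectory here is synthetic; the relaxation-mode stage itself is omitted:

```python
import numpy as np

# Plain PCA of a trajectory matrix (frames x coordinates): diagonalize the
# covariance and keep the k components with the largest fluctuations.

def pca(traj, k):
    X = traj - traj.mean(axis=0)            # center each coordinate
    cov = X.T @ X / (len(X) - 1)            # sample covariance
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1][:k]      # top-k eigenvalues
    return vals[order], vecs[:, order], X @ vecs[:, order]
```

Relaxation mode analysis would then be carried out in the reduced k-dimensional space of the returned projections, which is what makes the combined method tractable for proteins.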

  13. Retrieval of spheroid particle size distribution from spectral extinction data in the independent mode using PCA approach

    NASA Astrophysics Data System (ADS)

    Tang, Hong; Lin, Jian-Zhong

    2013-01-01

    First, an improved anomalous diffraction approximation (ADA) method is presented for calculating the extinction efficiency of spheroids. In this approach, the extinction efficiency of spheroid particles can be calculated with good accuracy and high efficiency over a wider size range by combining the Latimer method with ADA theory, and the method yields a more general expression for the extinction efficiency of spheroid particles with various complex refractive indices and aspect ratios. Meanwhile, the visible spectral extinction for various spheroid particle size distributions and complex refractive indices is surveyed. Furthermore, a selection principle for the spectral extinction data is developed based on principal component analysis (PCA) of the first-derivative extinction spectra. By calculating the contribution rate of the first-derivative spectral extinction, spectra with more significant features can be selected as input data, while those with fewer features are removed from the inversion data. In addition, we propose an improved Tikhonov iteration method to retrieve the spheroid particle size distributions in the independent mode. Simulation experiments indicate that the spheroid particle size distributions obtained with the proposed method coincide well with the given distributions, and this inversion method provides a simple, reliable and efficient way to retrieve spheroid particle size distributions from spectral extinction data.
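The inversion step above is a regularized linear inverse problem: the measured extinction spectrum is a kernel-weighted integral of the size distribution. A minimal sketch of plain Tikhonov inversion, standing in for the paper's improved iterative scheme; the kernel, grids and regularization weight are all invented for illustration:

```python
import numpy as np

# Tikhonov-regularized inversion of synthetic spectral extinction data:
# minimize ||K f - tau||^2 + lam * ||f||^2, solved via normal equations.

def tikhonov_invert(K, tau, lam):
    n = K.shape[1]
    f = np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ tau)
    return np.clip(f, 0.0, None)   # size distributions are non-negative
```

Without the lam term the normal equations are severely ill-conditioned and noise in tau is amplified; the regularization trades a small bias for stability.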

  14. Numerical solution of the time fractional reaction-diffusion equation with a moving boundary

    NASA Astrophysics Data System (ADS)

    Zheng, Minling; Liu, Fawang; Liu, Qingxia; Burrage, Kevin; Simpson, Matthew J.

    2017-06-01

    A fractional reaction-diffusion model with a moving boundary is presented in this paper. An efficient numerical method is constructed to solve this moving boundary problem. Our method makes use of a finite difference approximation for the temporal discretization, and spectral approximation for the spatial discretization. The stability and convergence of the method is studied, and the errors of both the semi-discrete and fully-discrete schemes are derived. Numerical examples, motivated by problems from developmental biology, show a good agreement with the theoretical analysis and illustrate the efficiency of our method.

  15. ADAM: analysis of discrete models of biological systems using computer algebra.

    PubMed

    Hinkelmann, Franziska; Brandon, Madison; Guang, Bonny; McNeill, Rustin; Blekherman, Grigoriy; Veliz-Cuba, Alan; Laubenbacher, Reinhard

    2011-07-20

    Many biological systems are modeled qualitatively with discrete models, such as probabilistic Boolean networks, logical models, Petri nets, and agent-based models, to gain a better understanding of them. The computational complexity to analyze the complete dynamics of these models grows exponentially in the number of variables, which impedes working with complex models. There exist software tools to analyze discrete models, but they either lack the algorithmic functionality to analyze complex models deterministically or they are inaccessible to many users as they require understanding the underlying algorithm and implementation, do not have a graphical user interface, or are hard to install. Efficient analysis methods that are accessible to modelers and easy to use are needed. We propose a method for efficiently identifying attractors and introduce the web-based tool Analysis of Dynamic Algebraic Models (ADAM), which provides this and other analysis methods for discrete models. ADAM converts several discrete model types automatically into polynomial dynamical systems and analyzes their dynamics using tools from computer algebra. Specifically, we propose a method to identify attractors of a discrete model that is equivalent to solving a system of polynomial equations, a long-studied problem in computer algebra. Based on extensive experimentation with both discrete models arising in systems biology and randomly generated networks, we found that the algebraic algorithms presented in this manuscript are fast for systems with the structure maintained by most biological systems, namely sparseness and robustness. For a large set of published complex discrete models, ADAM identified the attractors in less than one second. Discrete modeling techniques are a useful tool for analyzing complex biological systems and there is a need in the biological community for accessible efficient analysis tools. 
ADAM provides analysis methods based on mathematical algorithms as a web-based tool for several different input formats, and it makes analysis of complex models accessible to a larger community, as it is platform independent as a web-service and does not require understanding of the underlying mathematics.
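The exponential blow-up described above is easy to see in the brute-force baseline: enumerate all 2^n states and follow each trajectory until it revisits a state. A minimal sketch for a made-up 3-node synchronous Boolean network; ADAM's polynomial-algebra approach exists precisely to avoid this enumeration for large n:

```python
from itertools import product

# Exhaustive attractor search for a tiny synchronous Boolean network.
# The update rules are a hypothetical 3-node example, not from ADAM.

def update(state):
    a, b, c = state
    return (b and c, a or c, not a)

def attractors(n=3):
    found = set()
    for start in product([False, True], repeat=n):
        seen, s = {}, start
        while s not in seen:        # walk until a state repeats
            seen[s] = len(seen)
            s = update(s)
        cycle_start = seen[s]       # states from here on form the cycle
        cycle = tuple(sorted(k for k, v in seen.items()
                             if v >= cycle_start))
        found.add(cycle)
    return found
```

Each attractor is recorded as a canonical (sorted) tuple of its cycle states, so the same limit cycle reached from different starts is counted once. The cost is O(2^n) trajectories, which is exactly what polynomial-system methods sidestep for sparse networks.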

  16. A novel quantitative analysis method of three-dimensional fluorescence spectra for vegetable oils contents in edible blend oil

    NASA Astrophysics Data System (ADS)

    Xu, Jing; Wang, Yu-Tian; Liu, Xiao-Fei

    2015-04-01

    Edible blend oil is a mixture of vegetable oils. A qualified blend oil can meet the daily human need for the two essential fatty acids and so achieve balanced nutrition. Each vegetable oil has a different composition, so the vegetable oil contents in an edible blend oil determine its nutritional components. A high-precision quantitative analysis method to detect the vegetable oil contents in blend oil is therefore necessary to ensure balanced nutrition. Three-dimensional fluorescence spectroscopy offers high selectivity, high sensitivity, and high efficiency. Efficient extraction and full use of the information in three-dimensional fluorescence spectra improve the accuracy of the measurement. A novel quantitative analysis based on Quasi-Monte Carlo integration is proposed to improve the measurement sensitivity and reduce random error. The partial least squares method is used to solve the nonlinear equations and avoid the effect of multicollinearity. The recovery rates of blend oil mixed from peanut, soybean and sunflower oils are calculated to verify the accuracy of the method; they are improved compared with the linear method commonly used for component concentration measurement.
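Quasi-Monte Carlo integration replaces random sample points with a low-discrepancy sequence, which typically reduces the integration error for smooth integrands. A minimal sketch with Sobol' points; the integrand is a toy function, not a fluorescence spectrum:

```python
import numpy as np
from scipy.stats import qmc

# Quasi-Monte Carlo integration over the unit square using a scrambled
# Sobol' sequence: the integral estimate is the mean of f at the points.

def qmc_integrate(f, dim, n_log2=12, seed=0):
    sampler = qmc.Sobol(d=dim, scramble=True, seed=seed)
    pts = sampler.random_base2(m=n_log2)   # 2**n_log2 low-discrepancy points
    return float(f(pts).mean())

# toy integrand: integral of x*y over [0,1]^2 is exactly 0.25
est = qmc_integrate(lambda p: p[:, 0] * p[:, 1], dim=2)
```

In the spectra setting, the same estimator would integrate over the excitation-emission plane, with the measured intensity surface playing the role of f.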

  17. Efficient sensitivity analysis method for chaotic dynamical systems

    NASA Astrophysics Data System (ADS)

    Liao, Haitao

    2016-05-01

    The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time-averaged quantities for chaotic dynamical systems. The key idea is to recast the time-averaged integration term in the form of a differential equation before applying the sensitivity analysis method. An additional constraint-based equation, which forms the augmented equations of motion, is proposed to calculate the time-averaged integration variable, and the sensitivity coefficients are obtained by solving the augmented differential equations. Applying the least squares shadowing formulation to the augmented equations yields an explicit expression for the sensitivity coefficient that depends on the final state of the Lagrange multipliers. The LU factorization technique used to calculate the Lagrange multipliers gives better convergence behaviour and lower computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches, and some short impulsive sensitivity coefficients are observed when using the direct differentiation sensitivity analysis method.

  18. The Improvement of Efficiency in the Numerical Computation of Orbit Trajectories

    NASA Technical Reports Server (NTRS)

    Dyer, J.; Danchick, R.; Pierce, S.; Haney, R.

    1972-01-01

    An analysis, system design, programming, and evaluation of results are described for numerical computation of orbit trajectories. Evaluation of generalized methods, interaction of different formulations for satellite motion, transformation of equations of motion and integrator loads, and development of efficient integrators are also considered.

  19. Improving estimates of genetic maps: a meta-analysis-based approach.

    PubMed

    Stewart, William C L

    2007-07-01

    Inaccurate genetic (or linkage) maps can reduce the power to detect linkage, increase type I error, and distort haplotype and relationship inference. To improve the accuracy of existing maps, I propose a meta-analysis-based method that combines independent map estimates into a single estimate of the linkage map. The method uses the variance of each independent map estimate to combine them efficiently, whether the map estimates use the same set of markers or not. As compared with a joint analysis of the pooled genotype data, the proposed method is attractive for three reasons: (1) it has comparable efficiency to the maximum likelihood map estimate when the pooled data are homogeneous; (2) relative to existing map estimation methods, it can have increased efficiency when the pooled data are heterogeneous; and (3) it avoids the practical difficulties of pooling human subjects data. On the basis of simulated data modeled after two real data sets, the proposed method can reduce the sampling variation of linkage maps commonly used in whole-genome linkage scans. Furthermore, when the independent map estimates are also maximum likelihood estimates, the proposed method performs as well as or better than when they are estimated by the program CRIMAP. Since variance estimates of maps may not always be available, I demonstrate the feasibility of three different variance estimators. Overall, the method should prove useful to investigators who need map positions for markers not contained in publicly available maps, and to those who wish to minimize the negative effects of inaccurate maps. Copyright 2007 Wiley-Liss, Inc.
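The efficient combination described above is, at its core, fixed-effect (inverse-variance) meta-analysis: each independent map estimate is weighted by the reciprocal of its variance. A minimal sketch for a single marker position; the numbers in the test are illustrative, not from any real genetic map:

```python
import numpy as np

# Inverse-variance combination of independent estimates of the same
# quantity (e.g., a marker's map position from two studies).

def combine(estimates, variances):
    w = 1.0 / np.asarray(variances, float)   # precision weights
    est = float(np.average(estimates, weights=w))
    var = float(1.0 / w.sum())               # variance of combined estimate
    return est, var
```

The combined variance is always smaller than the smallest input variance, which is why pooling map estimates reduces the sampling variation of the resulting linkage map.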

  20. Data accuracy assessment using enterprise architecture

    NASA Astrophysics Data System (ADS)

    Närman, Per; Holm, Hannes; Johnson, Pontus; König, Johan; Chenine, Moustafa; Ekstedt, Mathias

    2011-02-01

    Errors in business processes result in poor data accuracy. This article proposes an architecture analysis method which utilises ArchiMate and the Probabilistic Relational Model formalism to model and analyse data accuracy. Since the resources available for architecture analysis are usually quite scarce, the method advocates interviews as the primary data collection technique. A case study demonstrates that the method yields correct data accuracy estimates and is more resource-efficient than a competing sampling-based data accuracy estimation method.

  1. Validity of automated measurement of left ventricular ejection fraction and volume using the Philips EPIQ system.

    PubMed

    Hovnanians, Ninel; Win, Theresa; Makkiya, Mohammed; Zheng, Qi; Taub, Cynthia

    2017-11-01

    To assess the efficiency and reproducibility of automated measurements of left ventricular (LV) volumes and LV ejection fraction (LVEF) in comparison to manually traced biplane Simpson's method. This is a single-center prospective study. Apical four- and two-chamber views were acquired in patients in sinus rhythm. Two operators independently measured LV volumes and LVEF using biplane Simpson's method. In addition, the image analysis software a2DQ on the Philips EPIQ system was applied to automatically assess the LV volumes and LVEF. Time spent on each analysis, using both methods, was documented. Concordance of echocardiographic measures was evaluated using intraclass correlation (ICC) and Bland-Altman analysis. Manual tracing and automated measurement of LV volumes and LVEF were performed in 184 patients with a mean age of 67.3 ± 17.3 years and BMI 28.0 ± 6.8 kg/m2. ICC and Bland-Altman analysis showed good agreements between manual and automated methods measuring LVEF, end-systolic, and end-diastolic volumes. The average analysis time was significantly less using the automated method than manual tracing (116 vs 217 seconds/patient, P < .0001). Automated measurement using the novel image analysis software a2DQ on the Philips EPIQ system produced accurate, efficient, and reproducible assessment of LV volumes and LVEF compared with manual measurement. © 2017, Wiley Periodicals, Inc.
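The Bland-Altman agreement analysis used above computes the mean difference (bias) between the two methods and its 95% limits of agreement. A minimal sketch with synthetic paired measurements, not the study's data:

```python
import numpy as np

# Bland-Altman analysis: bias and 95% limits of agreement between two
# measurement methods applied to the same subjects.

def bland_altman(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    diff = x - y
    bias = diff.mean()
    sd = diff.std(ddof=1)                    # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

If nearly all paired differences fall inside the limits and the limits are clinically acceptable, the two methods are judged to agree, which is the criterion applied to the manual and automated LVEF measurements.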

  2. Taming Log Files from Game/Simulation-Based Assessments: Data Models and Data Analysis Tools. Research Report. ETS RR-16-10

    ERIC Educational Resources Information Center

    Hao, Jiangang; Smith, Lawrence; Mislevy, Robert; von Davier, Alina; Bauer, Malcolm

    2016-01-01

    Extracting information efficiently from game/simulation-based assessment (G/SBA) logs requires two things: a well-structured log file and a set of analysis methods. In this report, we propose a generic data model specified as an extensible markup language (XML) schema for the log files of G/SBAs. We also propose a set of analysis methods for…

  3. Acoustic prediction methods for the NASA generalized advanced propeller analysis system (GAPAS)

    NASA Technical Reports Server (NTRS)

    Padula, S. L.; Block, P. J. W.

    1984-01-01

    Classical methods of propeller performance analysis are coupled with state-of-the-art Aircraft Noise Prediction Program (ANOPP) techniques to yield a versatile design tool, the NASA Generalized Advanced Propeller Analysis System (GAPAS), for novel quiet and efficient propellers. ANOPP is a collection of modular specialized programs. GAPAS as a whole addresses blade geometry and aerodynamics, rotor performance and loading, and subsonic propeller noise.

  4. A Stirling engine analysis method based upon moving gas nodes

    NASA Technical Reports Server (NTRS)

    Martini, W. R.

    1986-01-01

    A Lagrangian nodal analysis method for Stirling engines (SEs) is described, validated, and applied to a conventional SE and an isothermalized SE (with fins in the hot and cold spaces). The analysis employs a constant-mass gas node (which moves with respect to the solid nodes during each time step) instead of the fixed gas nodes of Eulerian analysis. The isothermalized SE is found to have efficiency only slightly greater than that of a conventional SE.

  5. Wheelchair racing efficiency.

    PubMed

    Cooper, R A; Boninger, M L; Cooper, R; Robertson, R N; Baldini, F D

    For individuals with disabilities, exercise such as wheelchair racing can be an important modality for community reintegration, as well as health promotion. The purpose of this study was to examine selected parameters during racing wheelchair propulsion among a sample of elite wheelchair racers. It was hypothesized that blood lactate accumulation and wheeling economy (i.e. oxygen consumed per minute) would increase with speed and that gross mechanical efficiency would reach an optimum for each athlete. Twelve elite wheelchair racers with paraplegia participated in this study. Nine of the subjects were males and three were females. Each subject used his or her personal wheelchair during the experiments. A computer-monitored wheelchair dynamometer was used during all testing. The method used was essentially a discontinuous economy protocol. Mixed model analysis of variance (ANOVA) was used to compare blood lactate concentration, economy (minute oxygen consumption), and gross mechanical efficiency across the stages. The results of this study show that both economy and blood lactate concentration increase linearly with speed if resistance is held constant. The subjects in this study had gross mechanical efficiencies (GME) of about 18%, with a range of 15.2% to 22.7%. The results indicate that at the higher speeds of propulsion, for example near race speeds, analysis of respiratory gases may not give a complete energy profile. While there is a good understanding of training methods to improve cardiovascular fitness for wheelchair racers, little is known about improving efficiency (e.g. technique, equipment); therefore, methods need to be developed to determine efficiency while training or in race situations.

  6. An efficient computational method for solving nonlinear stochastic Itô integral equations: Application for stochastic problems in physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heydari, M.H., E-mail: heydari@stu.yazd.ac.ir; The Laboratory of Quantum Information Processing, Yazd University, Yazd; Hooshmandasl, M.R., E-mail: hooshmandasl@yazd.ac.ir

    Because of the nonlinearity, closed-form solutions of many important stochastic functional equations are virtually impossible to obtain. Thus, numerical solutions are a viable alternative. In this paper, a new computational method based on the generalized hat basis functions together with their stochastic operational matrix of Itô-integration is proposed for solving nonlinear stochastic Itô integral equations in large intervals. In the proposed method, a new technique for computing nonlinear terms in such problems is presented. The main advantage of the proposed method is that it transforms problems under consideration into nonlinear systems of algebraic equations which can be simply solved. Error analysis of the proposed method is investigated and also the efficiency of this method is shown on some concrete examples. The obtained results reveal that the proposed method is very accurate and efficient. As two useful applications, the proposed method is applied to obtain approximate solutions of the stochastic population growth models and stochastic pendulum problem.
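
    The hat-basis operational-matrix method is specific to this paper, but the stochastic population growth application it mentions can be illustrated with the standard Euler-Maruyama scheme for a stochastic logistic model. This sketch is a simpler reference technique, not the authors' method, and all parameter values are assumed.

```python
import numpy as np

def euler_maruyama_logistic(x0, r, K, sigma, T, n, rng):
    """Simulate dX = r*X*(1 - X/K) dt + sigma*X dW by Euler-Maruyama.

    A stochastic logistic population-growth model: logistic drift plus
    multiplicative Brownian noise, discretised over n steps on [0, T].
    """
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    dW = rng.normal(0.0, np.sqrt(dt), size=n)  # Brownian increments
    for k in range(n):
        x[k + 1] = x[k] + r * x[k] * (1 - x[k] / K) * dt + sigma * x[k] * dW[k]
    return x

rng = np.random.default_rng(0)
path = euler_maruyama_logistic(x0=0.1, r=1.0, K=1.0, sigma=0.2, T=10.0, n=1000, rng=rng)
print(f"final population ≈ {path[-1]:.3f}")
```

    With sigma = 0 the scheme reduces to the deterministic logistic equation and the path converges to the carrying capacity K.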

  7. Privacy-preserving data cube for electronic medical records: An experimental evaluation.

    PubMed

    Kim, Soohyung; Lee, Hyukki; Chung, Yon Dohn

    2017-01-01

    The aim of this study is to evaluate the effectiveness and efficiency of privacy-preserving data cubes of electronic medical records (EMRs). An EMR data cube is a complex of EMR statistics that are summarized or aggregated by all possible combinations of attributes. Data cubes are widely utilized for efficient big data analysis and also have great potential for EMR analysis. For safe data analysis without privacy breaches, we must consider the privacy preservation characteristics of the EMR data cube. In this paper, we introduce a design for a privacy-preserving EMR data cube and the anonymization methods needed to achieve data privacy. We further focus on changes in efficiency and effectiveness that are caused by the anonymization process for privacy preservation. Thus, we experimentally evaluate various types of privacy-preserving EMR data cubes using several practical metrics and discuss the applicability of each anonymization method with consideration for the EMR analysis environment. We construct privacy-preserving EMR data cubes from anonymized EMR datasets. A real EMR dataset and demographic dataset are used for the evaluation. There are a large number of anonymization methods to preserve EMR privacy, and the methods are classified into three categories (i.e., global generalization, local generalization, and bucketization) by anonymization rules. According to this classification, three types of privacy-preserving EMR data cubes were constructed for the evaluation. We perform a comparative analysis by measuring the data size, cell overlap, and information loss of the EMR data cubes. Global generalization considerably reduced the size of the EMR data cube and did not cause the data cube cells to overlap, but incurred a large amount of information loss. Local generalization maintained the data size and generated only moderate information loss, but there were cell overlaps that could decrease the search performance. 
Bucketization did not cause cells to overlap and generated little information loss; however, the method considerably inflated the size of the EMR data cubes. The utility of anonymized EMR data cubes varies widely according to the anonymization method, and the applicability of the anonymization method depends on the features of the EMR analysis environment. The findings help to adopt the optimal anonymization method considering the EMR analysis environment and goal of the EMR analysis. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
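
    Of the three anonymization categories above, global generalization is the simplest to sketch: every exact quasi-identifier value is recoded into the same coarser domain across the whole table, after which k-anonymity can be checked. The columns, values, and k below are hypothetical, chosen only to illustrate the mechanism.

```python
from collections import Counter

def generalize_age(age, width):
    """Globally recode an exact age into a fixed-width interval, e.g. 37 -> '30-39'."""
    lo = (age // width) * width
    return f"{lo}-{lo + width - 1}"

def is_k_anonymous(records, k):
    """Check k-anonymity: every quasi-identifier combination occurs >= k times."""
    counts = Counter(records)
    return all(c >= k for c in counts.values())

# Hypothetical EMR quasi-identifiers: (age, sex)
raw = [(34, "F"), (37, "F"), (31, "F"), (62, "M"), (65, "M"), (68, "M")]

anonymized = [(generalize_age(age, 10), sex) for age, sex in raw]
print(anonymized)
print("3-anonymous:", is_k_anonymous(anonymized, 3))
```

    The information loss the record measures corresponds to the precision sacrificed when exact ages collapse into intervals.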

  8. Classification of breast tissue in mammograms using efficient coding.

    PubMed

    Costa, Daniel D; Campos, Lúcio F; Barros, Allan K

    2011-06-24

    Female breast cancer is the major cause of death by cancer in western countries. Efforts in Computer Vision have been made in order to improve the diagnostic accuracy by radiologists. Some methods of lesion diagnosis in mammogram images have been developed based on principal component analysis, which has been used for efficient coding of signals, and on 2D Gabor wavelets, which are used in computer vision applications and in modeling biological vision. In this work, we present a methodology that uses efficient coding along with linear discriminant analysis to distinguish between mass and non-mass in 5,090 regions of interest from mammograms. The results show that the best rates of success reached with Gabor wavelets and principal component analysis were 85.28% and 87.28%, respectively. In comparison, the model of efficient coding presented here reached up to 90.07%. Altogether, the results presented demonstrate that independent component analysis performed the efficient coding successfully in order to discriminate mass from non-mass tissues. In addition, we have observed that LDA with ICA bases showed high predictive performance for some datasets and thus provides significant support for a more detailed clinical investigation.
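
    The overall pipeline can be sketched generically: project features onto an efficient-coding basis (here PCA via SVD, standing in for the paper's ICA bases), then apply Fisher linear discriminant analysis. The two-class synthetic data below are stand-ins for mass/non-mass feature vectors, not mammogram data.

```python
import numpy as np

def pca_basis(X, n_components):
    """Principal components of X (rows = samples): an 'efficient coding' basis."""
    Xc = X - X.mean(axis=0)
    # eigenvectors of the covariance via SVD of the centred data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:n_components].T

def fisher_lda_direction(X, y):
    """Fisher discriminant direction w = Sw^-1 (mu1 - mu0) for two classes."""
    X0, X1 = X[y == 0], X[y == 1]
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    return np.linalg.solve(Sw, X1.mean(axis=0) - X0.mean(axis=0))

rng = np.random.default_rng(42)
# Synthetic stand-ins for non-mass (class 0) and mass (class 1) feature vectors
X0 = rng.normal(0.0, 1.0, size=(100, 10))
X1 = rng.normal(1.0, 1.0, size=(100, 10))
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)

Z = (X - X.mean(axis=0)) @ pca_basis(X, 5)      # code in 5 principal components
w = fisher_lda_direction(Z, y)
scores = Z @ w
threshold = scores.mean()
accuracy = ((scores > threshold).astype(int) == y).mean()
print(f"training accuracy ≈ {accuracy:.2f}")
```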

  9. Efficient three-dimensional resist profile-driven source mask optimization optical proximity correction based on Abbe-principal component analysis and Sylvester equation

    NASA Astrophysics Data System (ADS)

    Lin, Pei-Chun; Yu, Chun-Chang; Chen, Charlie Chung-Ping

    2015-01-01

    As one of the critical stages of a very large scale integration fabrication process, postexposure bake (PEB) plays a crucial role in determining the final three-dimensional (3-D) profiles and lessening the standing wave effects. However, the full 3-D chemically amplified resist simulation is not widely adopted during the postlayout optimization due to the long run-time and huge memory usage. An efficient simulation method is proposed to simulate the PEB while considering standing wave effects and resolution enhancement techniques, such as source mask optimization and subresolution assist features based on the Sylvester equation and Abbe-principal component analysis method. Simulation results show that our algorithm is 20× faster than the conventional Gaussian convolution method.
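
    The Sylvester equation AX + XB = Q at the core of the speedup can be solved directly by vectorisation with Kronecker products, a standard (if not scalable) identity: (I ⊗ A + Bᵀ ⊗ I) vec(X) = vec(Q) with column-stacked vec. The matrices below are arbitrary small examples, unrelated to the lithography application.

```python
import numpy as np

def solve_sylvester_kron(A, B, Q):
    """Solve the Sylvester equation A X + X B = Q by vectorisation:
    (I (x) A + B^T (x) I) vec(X) = vec(Q), with vec stacking columns."""
    n, m = Q.shape
    M = np.kron(np.eye(m), A) + np.kron(B.T, np.eye(n))
    x = np.linalg.solve(M, Q.reshape(-1, order="F"))
    return x.reshape(n, m, order="F")

A = np.array([[1.0, 2.0], [0.0, 3.0]])
B = np.array([[4.0, 0.0], [1.0, 5.0]])
Q = np.eye(2)

X = solve_sylvester_kron(A, B, Q)
print(f"residual = {np.linalg.norm(A @ X + X @ B - Q):.2e}")
```

    The Kronecker system is dense and of size nm × nm, which is exactly why efficient solvers matter at lithography scale.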

  10. An easily implemented static condensation method for structural sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Gangadharan, S. N.; Haftka, R. T.; Nikolaidis, E.

    1990-01-01

    A black-box approach to static condensation for sensitivity analysis is presented with illustrative examples of a cube and a car structure. The sensitivity of the structural response with respect to joint stiffness parameters is calculated using the direct method, forward-difference, and central-difference schemes. The efficiency of the various methods for identifying joint stiffness parameters from measured static deflections of these structures is compared. The results indicate that the use of static condensation can reduce computation times significantly and that the black-box approach is only slightly less efficient than the standard implementation of static condensation. The ease of implementation of the black-box approach recommends it for use with general-purpose finite element codes that do not have a built-in facility for static condensation.
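
    Static condensation itself reduces K u = f onto the retained degrees of freedom through a Schur complement. A minimal sketch on a toy 3-DOF spring chain (not the paper's cube or car models) shows that the condensed system reproduces the full solution at the kept DOFs.

```python
import numpy as np

def condense(K, f, keep, drop):
    """Static condensation of K u = f onto the kept DOFs.

    Returns the condensed stiffness and load: the Schur complement
    K_kk - K_kd K_dd^-1 K_dk and f_k - K_kd K_dd^-1 f_d.
    """
    Kkk = K[np.ix_(keep, keep)]
    Kkd = K[np.ix_(keep, drop)]
    Kdk = K[np.ix_(drop, keep)]
    Kdd = K[np.ix_(drop, drop)]
    T = np.linalg.solve(Kdd, Kdk)
    g = np.linalg.solve(Kdd, f[drop])
    return Kkk - Kkd @ T, f[keep] - Kkd @ g

# Toy 3-DOF spring chain, fixed at one end: K u = f
K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])
f = np.array([0.0, 0.0, 1.0])

Kc, fc = condense(K, f, keep=[0, 2], drop=[1])
u_keep = np.linalg.solve(Kc, fc)                 # condensed solution
u_full = np.linalg.solve(K, f)                   # reference full solution
print(u_keep, u_full[[0, 2]])
```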

  11. Reconstructed phase spaces of intrinsic mode functions. Application to postural stability analysis.

    PubMed

    Snoussi, Hichem; Amoud, Hassan; Doussot, Michel; Hewson, David; Duchêne, Jacques

    2006-01-01

    In this contribution, we propose an efficient nonlinear analysis method characterizing postural steadiness. The analyzed signal is the displacement of the centre of pressure (COP) collected from a force plate used for measuring postural sway. The proposed method consists of analyzing the nonlinear dynamics of the intrinsic mode functions (IMF) of the COP signal. The nonlinear properties are assessed through the reconstructed phase spaces of the different IMFs. This study shows some specific geometries of the attractors of some intrinsic modes. Moreover, the volume spanned by the geometric attractors in the reconstructed phase space represents an efficient indicator of the postural stability of the subject. Experimental results corroborate the effectiveness of the method in blindly discriminating young subjects, elderly subjects and subjects presenting a risk of falling.
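
    Reconstructed phase spaces of this kind are typically built by time-delay (Takens) embedding. A minimal sketch on a pure sine, standing in for a narrow-band oscillatory component: with a quarter-period delay, the 2-D reconstruction of a sine is a circular attractor. The signal and delay are assumptions for illustration; the study embeds intrinsic mode functions, not a raw sine.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Time-delay (Takens) reconstruction of a phase space from a scalar series.

    Row i is the vector (x[i], x[i+tau], ..., x[i+(dim-1)*tau]).
    """
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Stand-in for one oscillatory component of a COP trace: a 0.5 Hz sine
t = np.linspace(0.0, 20.0, 2001)          # 100 samples per second
x = np.sin(2 * np.pi * 0.5 * t)

# tau = quarter period (50 samples) -> the 2-D reconstruction is a circle
emb = delay_embed(x, dim=2, tau=50)
radii = np.hypot(emb[:, 0], emb[:, 1])
print(f"attractor radius: {radii.min():.3f} .. {radii.max():.3f}")
```

    The "volume spanned by the attractor" indicator in the record generalises this: the spread of the embedded points quantifies the dynamics.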

  12. Science and Technology Highlights | NREL

    Science.gov Websites

    Leads to Enhanced Upgrading Methods NREL's efforts to standardize techniques for bio-oil analysis inform enhanced modeling capability and affordable methods to increase energy efficiency. December 2012 NREL Meets Performance Demands of Advanced Lithium-ion Batteries Novel surface modification methods are

  13. Approaches to quantitating the results of differentially dyed cottons

    USDA-ARS?s Scientific Manuscript database

    The differential dyeing (DD) method has served as a subjective method for visually determining immature cotton fibers. In an attempt to quantitate the results of the differential dyeing method, and thus offer an efficient means of elucidating cotton maturity without visual discretion, image analysi...

  14. Benchmarking the efficiency of the Chilean water and sewerage companies: a double-bootstrap approach.

    PubMed

    Molinos-Senante, María; Donoso, Guillermo; Sala-Garrido, Ramon; Villegas, Andrés

    2018-03-01

    Benchmarking the efficiency of water companies is essential to set water tariffs and to promote their sustainability. In doing so, most of the previous studies have applied conventional data envelopment analysis (DEA) models. However, DEA is a deterministic method that does not allow environmental factors influencing efficiency scores to be identified. To overcome this limitation, this paper evaluates the efficiency of a sample of Chilean water and sewerage companies applying a double-bootstrap DEA model. Results evidenced that the ranking of water and sewerage companies changes notably depending on whether efficiency scores are computed applying conventional or double-bootstrap DEA models. Moreover, it was found that the percentage of non-revenue water and customer density are factors influencing the efficiency of Chilean water and sewerage companies. This paper illustrates the importance of using a robust and reliable method to increase the relevance of benchmarking tools.
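
    The full Simar-Wilson double bootstrap is beyond a short sketch, but the bootstrap idea behind it can be illustrated on a naive single-input, single-output efficiency score (output per input, normalised so the best unit scores 1.0, a crude stand-in for a DEA frontier). The company data are invented.

```python
import random

def efficiency_scores(inputs, outputs):
    """Naive single-input/single-output efficiency: output per input,
    scaled so the best unit scores 1.0 (a stand-in for a DEA frontier)."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

def bootstrap_mean_ci(values, n_boot, alpha, rng):
    """Percentile bootstrap confidence interval for the mean score."""
    means = []
    for _ in range(n_boot):
        sample = [rng.choice(values) for _ in values]
        means.append(sum(sample) / len(sample))
    means.sort()
    lo = means[int(alpha / 2 * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical water companies: input = staff, output = connections served (thousands)
staff = [120, 80, 200, 150, 90]
served = [300, 240, 400, 330, 180]

scores = efficiency_scores(staff, served)
rng = random.Random(7)
lo, hi = bootstrap_mean_ci(scores, n_boot=2000, alpha=0.05, rng=rng)
print([round(s, 3) for s in scores])
print(f"95% CI for mean efficiency: ({lo:.3f}, {hi:.3f})")
```

    Resampling gives the sampling variability that a single deterministic DEA run cannot, which is the record's core criticism of conventional DEA.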

  15. Multiscale Modeling and Uncertainty Quantification for Nuclear Fuel Performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Estep, Donald; El-Azab, Anter; Pernice, Michael

    2017-03-23

    In this project, we will address the challenges associated with constructing high fidelity multiscale models of nuclear fuel performance. We (*) propose a novel approach for coupling mesoscale and macroscale models, (*) devise efficient numerical methods for simulating the coupled system, and (*) devise and analyze effective numerical approaches for error and uncertainty quantification for the coupled multiscale system. As an integral part of the project, we will carry out analysis of the effects of upscaling and downscaling, investigate efficient methods for stochastic sensitivity analysis of the individual macroscale and mesoscale models, and carry out a posteriori error analysis for computed results. We will pursue development and implementation of solutions in software used at Idaho National Laboratories on models of interest to the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program.

  16. An Attempt at Quantifying Factors that Affect Efficiency in the Management of Solid Waste Produced by Commercial Businesses in the City of Tshwane, South Africa

    PubMed Central

    Worku, Yohannes; Muchie, Mammo

    2012-01-01

    Objective. The objective was to investigate factors that affect the efficient management of solid waste produced by commercial businesses operating in the city of Pretoria, South Africa. Methods. Data was gathered from 1,034 businesses. Efficiency in solid waste management was assessed by using a structural time-based model designed for evaluating efficiency as a function of the length of time required to manage waste. Data analysis was performed using statistical procedures such as frequency tables, Pearson's chi-square tests of association, and binary logistic regression analysis. Odds ratios estimated from logistic regression analysis were used for identifying key factors that affect efficiency in the proper disposal of waste. Results. The study showed that 857 of the 1,034 businesses selected (83%) were sufficiently efficient with regard to the proper collection and disposal of solid waste. Based on odds ratios estimated from binary logistic regression analysis, efficiency in the proper management of solid waste was significantly influenced by 4 predictor variables. These 4 influential predictor variables are lack of adherence to waste management regulations, wrong perception, failure to provide customers with enough trash cans, and operation of businesses by employed managers, in decreasing order of importance. PMID:23209483
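
    An odds ratio from a 2x2 table, with the usual Woolf logit confidence interval, can be computed as below. The marginal totals match the 857 of 1,034 efficient businesses reported above, but the split by regulatory adherence is invented for illustration; the study's actual odds ratios came from multivariable logistic regression.

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table
        group 1: a efficient, b not
        group 2: c efficient, d not
    with a 95% CI from the log-odds standard error (Woolf method)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

# Hypothetical counts: businesses adhering vs not adhering to waste regulations
or_, (lo, hi) = odds_ratio(a=700, b=100, c=157, d=77)
print(f"OR = {or_:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```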

  17. Establishment of an efficient transformation system for Pleurotus ostreatus.

    PubMed

    Lei, Min; Wu, Xiangli; Zhang, Jinxia; Wang, Hexiang; Huang, Chenyang

    2017-11-21

    Pleurotus ostreatus is widely cultivated worldwide, but the lack of an efficient transformation system restricts genetic research on this species. The present study developed an improved and efficient Agrobacterium tumefaciens-mediated transformation method in P. ostreatus. Four parameters were optimized to obtain the most efficient transformation method. The strain LBA4404 was the most suitable for the transformation of P. ostreatus. A bacteria-to-protoplast ratio of 100:1, an acetosyringone (AS) concentration of 0.1 mM, and 18 h of co-culture showed the best transformation efficiency. The hygromycin B phosphotransferase gene (HPH) was used as the selective marker, and EGFP was used as the reporter gene in this study. Southern blot analysis combined with EGFP fluorescence assay showed positive results, and mitotic stability assay showed that more than 75% of transformants were stable after five generations. These results showed that our transformation method is effective and stable and may facilitate future genetic studies in P. ostreatus.

  18. Computer Facilitated Mathematical Methods in Chemical Engineering--Similarity Solution

    ERIC Educational Resources Information Center

    Subramanian, Venkat R.

    2006-01-01

    High-performance computers coupled with highly efficient numerical schemes and user-friendly software packages have helped instructors to teach numerical solutions and analysis of various nonlinear models more efficiently in the classroom. One of the main objectives of a model is to provide insight about the system of interest. Analytical…

  19. Classic Maximum Entropy Recovery of the Average Joint Distribution of Apparent FRET Efficiency and Fluorescence Photons for Single-molecule Burst Measurements

    PubMed Central

    DeVore, Matthew S.; Gull, Stephen F.; Johnson, Carey K.

    2012-01-01

    We describe a method for analysis of single-molecule Förster resonance energy transfer (FRET) burst measurements using classic maximum entropy. Classic maximum entropy determines the Bayesian inference for the joint probability describing the total fluorescence photons and the apparent FRET efficiency. The method was tested with simulated data and then with DNA labeled with fluorescent dyes. The most probable joint distribution can be marginalized to obtain both the overall distribution of fluorescence photons and the apparent FRET efficiency distribution. This method proves to be ideal for determining the distance distribution of FRET-labeled biomolecules, and it successfully predicts the shape of the recovered distributions. PMID:22338694
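
    The apparent FRET efficiency in this record has a standard burst-wise definition: acceptor photons over total detected photons. A minimal sketch follows; the burst counts are invented, and corrections (background, crosstalk, gamma factor) that a real analysis applies are omitted.

```python
def apparent_fret_efficiency(n_acceptor, n_donor):
    """Apparent (proximity-ratio) FRET efficiency for one burst:
    acceptor photons over total photons."""
    return n_acceptor / (n_acceptor + n_donor)

# Hypothetical (acceptor, donor) photon counts for three single-molecule bursts
bursts = [(80, 20), (45, 55), (10, 90)]
effs = [apparent_fret_efficiency(na, nd) for na, nd in bursts]
print(effs)  # high-, mid-, and low-FRET bursts
```

    The classic-maximum-entropy analysis described above infers the joint distribution of this quantity and the total photon count across many such bursts, rather than treating each burst value in isolation.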

  20. Design synthesis and optimization of permanent magnet synchronous machines based on computationally-efficient finite element analysis

    NASA Astrophysics Data System (ADS)

    Sizov, Gennadi Y.

    In this dissertation, a model-based multi-objective optimal design of permanent magnet ac machines, supplied by sine-wave current regulated drives, is developed and implemented. The design procedure uses an efficient electromagnetic finite element-based solver to accurately model nonlinear material properties and complex geometric shapes associated with magnetic circuit design. Application of an electromagnetic finite element-based solver allows for accurate computation of intricate performance parameters and characteristics. The first contribution of this dissertation is the development of a rapid computational method that allows accurate and efficient exploration of large multi-dimensional design spaces in search of optimum design(s). The computationally efficient finite element-based approach developed in this work provides a framework of tools that allow rapid analysis of synchronous electric machines operating under steady-state conditions. In the developed modeling approach, major steady-state performance parameters such as, winding flux linkages and voltages, average, cogging and ripple torques, stator core flux densities, core losses, efficiencies and saturated machine winding inductances, are calculated with minimum computational effort. In addition, the method includes means for rapid estimation of distributed stator forces and three-dimensional effects of stator and/or rotor skew on the performance of the machine. The second contribution of this dissertation is the development of the design synthesis and optimization method based on a differential evolution algorithm. The approach relies on the developed finite element-based modeling method for electromagnetic analysis and is able to tackle large-scale multi-objective design problems using modest computational resources. Overall, computational time savings of up to two orders of magnitude are achievable, when compared to current and prevalent state-of-the-art methods. 
These computational savings allow one to expand the optimization problem to achieve more complex and comprehensive design objectives. The method is used in the design process of several interior permanent magnet industrial motors. The presented case studies demonstrate that the developed finite element-based approach practically eliminates the need for using less accurate analytical and lumped parameter equivalent circuit models for electric machine design optimization. The design process and experimental validation of the case-study machines are detailed in the dissertation.

  1. Exploratory High-Fidelity Aerostructural Optimization Using an Efficient Monolithic Solution Method

    NASA Astrophysics Data System (ADS)

    Zhang, Jenmy Zimi

    This thesis is motivated by the desire to discover fuel efficient aircraft concepts through exploratory design. An optimization methodology based on tightly integrated high-fidelity aerostructural analysis is proposed, which has the flexibility, robustness, and efficiency to contribute to this goal. The present aerostructural optimization methodology uses an integrated geometry parameterization and mesh movement strategy, which was initially proposed for aerodynamic shape optimization. This integrated approach provides the optimizer with a large amount of geometric freedom for conducting exploratory design, while allowing for efficient and robust mesh movement in the presence of substantial shape changes. In extending this approach to aerostructural optimization, this thesis has addressed a number of important challenges. A structural mesh deformation strategy has been introduced to translate consistently the shape changes described by the geometry parameterization to the structural model. A three-field formulation of the discrete steady aerostructural residual couples the mesh movement equations with the three-dimensional Euler equations and a linear structural analysis. Gradients needed for optimization are computed with a three-field coupled adjoint approach. A number of investigations have been conducted to demonstrate the suitability and accuracy of the present methodology for use in aerostructural optimization involving substantial shape changes. Robustness and efficiency in the coupled solution algorithms is crucial to the success of an exploratory optimization. This thesis therefore also focuses on the design of an effective monolithic solution algorithm for the proposed methodology. This involves using a Newton-Krylov method for the aerostructural analysis and a preconditioned Krylov subspace method for the coupled adjoint solution. Several aspects of the monolithic solution method have been investigated. 
These include appropriate strategies for scaling and matrix-vector product evaluation, as well as block preconditioning techniques that preserve the modularity between subproblems. The monolithic solution method is applied to problems with varying degrees of fluid-structural coupling, as well as a wing span optimization study. The monolithic solution algorithm typically requires 20%-70% less computing time than its partitioned counterpart. This advantage increases with increasing wing flexibility. The performance of the monolithic solution method is also much less sensitive to the choice of the solution parameter.

  2. Standardisation of the (129)I, (151)Sm and (166m)Ho activity concentration using the CIEMAT/NIST efficiency tracing method.

    PubMed

    Altzitzoglou, Timotheos; Rožkov, Andrej

    2016-03-01

    The (129)I, (151)Sm and (166m)Ho standardisations using the CIEMAT/NIST efficiency tracing method, carried out in the frame of the European Metrology Research Programme project "Metrology for Radioactive Waste Management", are described. The radionuclide beta counting efficiencies were calculated using two computer codes, CN2005 and MICELLE2. A sensitivity analysis of the code input parameters (ionization quenching factor, beta shape factor) on the calculated efficiencies was performed, and the results are discussed. The combined relative standard uncertainties of the standardisations of the (129)I, (151)Sm and (166m)Ho solutions were 0.4%, 0.5% and 0.4%, respectively. The stated precision obtained using the CIEMAT/NIST method is better than that previously reported in the literature obtained by the TDCR ((129)I), the 4πγ-NaI ((166m)Ho) counting or the CIEMAT/NIST method ((151)Sm). Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  3. Technical efficiency of teaching hospitals in Iran: the use of Stochastic Frontier Analysis, 1999–2011

    PubMed Central

    Goudarzi, Reza; Pourreza, Abolghasem; Shokoohi, Mostafa; Askari, Roohollah; Mahdavi, Mahdi; Moghri, Javad

    2014-01-01

    Background: Hospitals are highly resource-dependent settings, which spend a large proportion of healthcare financial resources. The analysis of hospital efficiency can provide insight into how scarce resources are used to create health values. This study examines the Technical Efficiency (TE) of 12 teaching hospitals affiliated with Tehran University of Medical Sciences (TUMS) between 1999 and 2011. Methods: The Stochastic Frontier Analysis (SFA) method was applied to estimate the efficiency of TUMS hospitals. A best-fit function relating output and input parameters was calculated for the hospitals. Numbers of medical doctors, nurses, and other personnel, active beds, and outpatient admissions were considered as the input variables and number of inpatient admissions as the output variable. Results: The mean level of TE was 59% (ranging from 22 to 81%). During the study period the efficiency increased from 61 to 71%. Outpatient admissions, other personnel, and medical doctors significantly and positively affected production (P < 0.05). Concerning Constant Returns to Scale (CRS), an optimal production scale was found, implying that the production of the hospitals was approximately constant. Conclusion: The findings of this study show a remarkable waste of resources in the TUMS hospitals during the decade considered. This warrants that policy-makers and top management in TUMS take steps to improve the financial management of the university hospitals. PMID:25114947

  4. Efficient Power Network Analysis with Modeling of Inductive Effects

    NASA Astrophysics Data System (ADS)

    Zeng, Shan; Yu, Wenjian; Hong, Xianlong; Cheng, Chung-Kuan

    In this paper, an efficient method is proposed to accurately analyze large-scale power/ground (P/G) networks in which inductive parasitics are modeled with partial reluctances. The method is based on frequency-domain circuit analysis and the technique of vector fitting [14], and obtains the time-domain voltage response at given P/G nodes. The frequency-domain circuit equation including partial reluctances is derived, and then solved with the GMRES algorithm using rescaling, preconditioning and recycling techniques. Owing to the sparsified reluctance matrix and the iterative techniques for solving the frequency-domain circuit equations, the proposed method is able to handle large-scale P/G networks with complete inductive modeling. Numerical results show that the proposed method is orders of magnitude faster than HSPICE, several times faster than INDUCTWISE [4], and capable of handling inductive P/G structures with more than 100,000 wire segments.

  5. Structural optimisation of cage induction motors using finite element analysis

    NASA Astrophysics Data System (ADS)

    Palko, S.

    The current trend in motor design is towards highly efficient, low-noise, low-cost, and modular motors with a high power factor. High-torque motors are useful in applications such as servo motors, lifts, cranes, and rolling mills. This report contains a detailed review of different optimization methods applicable to various design problems. Special attention is given to the performance of different methods when they are used with finite element analysis (FEA) as an objective function, and to accuracy problems arising from the numerical simulations. An effective method for designing high starting torque and high efficiency motors is also presented. The method described in this work utilizes FEA combined with algorithms for the optimization of the slot geometry. The optimization algorithm modifies the positions of the nodal points in the element mesh. The number of independent variables ranges from 14 to 140 in this work.

  6. The analysis of transient noise of PCB P/G network based on PI/SI co-simulation

    NASA Astrophysics Data System (ADS)

    Haohang, Su

    2018-02-01

    As the operating frequencies of space cameras increase, power noise in the imaging electronics has become an important design factor. Excessive power noise can disturb signal transmission and even degrade image sharpness and the system noise level. The "target impedance method" is a traditional design approach for the P/G network (power and ground network), but it lacks transient power-noise analysis and often leads to over-design. This paper presents a new P/G network design method based on PI/SI co-simulation. The transient power noise is simulated and then applied to the noise-reduction design, thus effectively controlling noise in the P/G network. The method limits the number of added decoupling capacitors and is an efficient, feasible way to maintain power integrity.
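
    The "target impedance method" referenced here reduces to a single formula: the allowed supply ripple voltage divided by the worst-case transient current step. A minimal sketch with assumed rail values (the numbers are illustrative, not from the paper):

```python
def target_impedance(vdd, ripple_fraction, transient_current):
    """Classic PDN target impedance: allowed supply ripple voltage
    divided by the worst-case transient current step."""
    return vdd * ripple_fraction / transient_current

# Assumed example: 1.2 V rail, 5% allowed ripple, 10 A current step
z = target_impedance(1.2, 0.05, 10.0)
print(f"Z_target = {z * 1000:.1f} mΩ")
```

    Keeping the PDN impedance below this value at all frequencies is the conservative rule that the paper argues over-constrains the design, since it ignores the actual transient noise waveform.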

  7. Assessment of Intralaminar Progressive Damage and Failure Analysis Using an Efficient Evaluation Framework

    NASA Technical Reports Server (NTRS)

    Hyder, Imran; Schaefer, Joseph; Justusson, Brian; Wanthal, Steve; Leone, Frank; Rose, Cheryl

    2017-01-01

    Reducing the timeline for development and certification of composite structures has been a long-standing objective of the aerospace industry. This timeline can be further exacerbated when attempting to integrate new fiber-reinforced composite materials, due to the large amount of testing required at every level of design. Computational progressive damage and failure analysis (PDFA) attempts to mitigate this effect; however, new PDFA methods have been slow to be adopted in industry, since material model evaluation techniques have not been fully defined. This study presents an efficient evaluation framework which uses a piecewise verification and validation (V&V) approach for PDFA methods. Specifically, the framework is applied to evaluate PDFA research codes within the context of intralaminar damage. Methods are incrementally taken through various V&V exercises specifically tailored to study PDFA intralaminar damage modeling capability. Finally, methods are evaluated against a defined set of success criteria to highlight successes and limitations.

  8. Cut set-based risk and reliability analysis for arbitrarily interconnected networks

    DOEpatents

    Wyss, Gregory D.

    2000-01-01

    Method for computing all-terminal reliability for arbitrarily interconnected networks such as the United States public switched telephone network. The method includes an efficient search algorithm to generate minimal cut sets for nonhierarchical networks directly from the network connectivity diagram. Efficiency of the search algorithm stems in part from its basis on only link failures. The method also includes a novel quantification scheme that likewise reduces computational effort associated with assessing network reliability based on traditional risk importance measures. Vast reductions in computational effort are realized since combinatorial expansion and subsequent Boolean reduction steps are eliminated through analysis of network segmentations using a technique of assuming node failures to occur on only one side of a break in the network, and repeating the technique for all minimal cut sets generated with the search algorithm. The method functions equally well for planar and non-planar networks.
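The link-failure idea above can be illustrated with a brute-force sketch. This is not the patent's efficient search algorithm, just a naive enumeration under the same definition: consider link subsets in order of size, keep those whose removal disconnects the network, and discard any candidate that contains an already-found cut set (minimality). The example network and all names are hypothetical.

```python
from itertools import combinations

def connected(nodes, links):
    """Check whether the undirected graph (nodes, links) is connected (DFS)."""
    if not nodes:
        return True
    adj = {n: set() for n in nodes}
    for a, b in links:
        adj[a].add(b)
        adj[b].add(a)
    start = next(iter(nodes))
    seen, stack = {start}, [start]
    while stack:
        for m in adj[stack.pop()]:
            if m not in seen:
                seen.add(m)
                stack.append(m)
    return seen == set(nodes)

def minimal_link_cut_sets(nodes, links):
    """Enumerate minimal sets of link failures that disconnect the network.

    Iterating subset sizes in ascending order guarantees that any proper
    subset that is itself a cut has already been recorded, so the
    containment test below is a valid minimality check."""
    cuts = []
    for k in range(1, len(links) + 1):
        for subset in combinations(links, k):
            s = set(subset)
            if any(c <= s for c in cuts):
                continue  # contains a smaller cut set: not minimal
            remaining = [l for l in links if l not in s]
            if not connected(nodes, remaining):
                cuts.append(s)
    return cuts

# A 4-node ring A-B-C-D-A with one chord B-D (hypothetical network).
nodes = {"A", "B", "C", "D"}
links = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A"), ("B", "D")]
cut_sets = minimal_link_cut_sets(nodes, links)
```

For this topology the enumeration yields two 2-link cut sets (isolating the degree-2 nodes A and C) and four 3-link cut sets; a real telephone-scale network requires the patent's directed search rather than exhaustive enumeration.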

  9. A sensitive and efficient method for trace analysis of some phenolic compounds using simultaneous derivatization and air-assisted liquid-liquid microextraction from human urine and plasma samples followed by gas chromatography-nitrogen phosphorous detection.

    PubMed

    Farajzadeh, Mir Ali; Afshar Mogaddam, Mohammad Reza; Alizadeh Nabil, Ali Akbar

    2015-12-01

    In the present study, a simultaneous derivatization and air-assisted liquid-liquid microextraction method combined with gas chromatography-nitrogen phosphorous detection has been developed for the determination of some phenolic compounds in biological samples. The analytes are derivatized and extracted simultaneously by a fast reaction with 1-fluoro-2,4-dinitrobenzene under mild conditions. Under optimal conditions, low limits of detection in the range of 0.05-0.34 ng mL(-1) are achievable. The obtained extraction recoveries are between 84 and 97%, and the relative standard deviations are less than 7.2% for intraday (n = 6) and interday (n = 4) precisions. The proposed method was demonstrated to be a simple and efficient method for the analysis of phenols in biological samples. Copyright © 2015 John Wiley & Sons, Ltd.

  10. Quantitative photogrammetric analysis of the Klapp method for treating idiopathic scoliosis.

    PubMed

    Iunes, Denise H; Cecílio, Maria B B; Dozza, Marina A; Almeida, Polyanna R

    2010-01-01

    Few studies have proved that physical therapy techniques are efficient in the treatment of scoliosis. The aim was to analyze the efficiency of the Klapp method for the treatment of scoliosis, through a quantitative analysis using computerized biophotogrammetry. Sixteen participants with idiopathic scoliosis, of a mean age of 15+/-2.61 yrs., were treated using the Klapp method. To analyze the results of the treatment, they were all photographed before and after the treatments, following a standardized photographic method. All of the photographs were analyzed quantitatively by the same examiner using the ALCimagem 2000 software. The statistical analyses were performed using the paired t-test with a significance level of 5%. The treatments showed improvements in the angles which evaluated the symmetry of the shoulders, i.e. the acromioclavicular joint angle (AJ; p=0.00) and sternoclavicular joint angle (SJ; p=0.01). There were also improvements in the angle that evaluated the left Thales triangle (DeltaT; p=0.02). Regarding flexibility, there were improvements in the tibiotarsal angle (TTA; p=0.01) and in the hip joint angles (HJA; p=0.00). There were no changes in the vertebral curvatures, nor improvements in head positioning; only the lumbar curvature, evaluated by the lumbar lordosis angle (LL; p=0.00), changed after the treatments. The Klapp method was an efficient therapeutic technique for treating asymmetries of the trunk and improving its flexibility. However, it was not efficient for pelvic asymmetries or for modifications in head positioning, cervical lordosis or thoracic kyphosis.

  11. Research on the self-absorption corrections for PGNAA of large samples

    NASA Astrophysics Data System (ADS)

    Yang, Jian-Bo; Liu, Zhi; Chang, Kang; Li, Rui

    2017-02-01

    When a large sample is analysed with prompt gamma neutron activation analysis (PGNAA), neutron self-shielding and gamma self-absorption affect the accuracy. A correction method for the detection efficiency, relative to H, of each element in a large sample is described. The influences of the thickness and density of cement samples on the H detection efficiency, as well as of the impurities Fe2O3 and SiO2 on the prompt γ-ray yield of each element in the cement samples, were studied. Phase functions for Ca, Fe, and Si relative to H, as functions of sample thickness and density, were provided. This avoids the complicated procedure of preparing a density or thickness scale for measuring samples at each density or thickness value, and presents a simplified method for the measurement efficiency scale in prompt-gamma neutron activation analysis.

  12. Technical Efficiency and Organ Transplant Performance: A Mixed-Method Approach

    PubMed Central

    de-Pablos-Heredero, Carmen; Fernández-Renedo, Carlos; Medina-Merodio, Jose-Amelio

    2015-01-01

    Mixed-methods research is useful for understanding complex processes. Organ transplants are complex processes in need of improved final performance in times of budgetary restrictions. As the main objective, a mixed-method approach is used in this article to quantify the technical efficiency and the excellence achieved in organ transplant systems, and to prove the influence of organizational structures and internal processes on the observed technical efficiency. The results show that it is possible to implement mechanisms for the measurement of the different components by making use of quantitative and qualitative methodologies. The analysis shows a positive relationship between the levels of the Baldrige indicators and the observed technical efficiency in the donation and transplant units of the 11 analyzed hospitals. It is therefore possible to conclude that high levels in the Baldrige indexes are a necessary condition for reaching an increased level of the service offered. PMID:25950653

  13. High-efficiency (6 + 1) × 1 pump-signal combiner based on low-deformation and high-precision alignment fabrication

    NASA Astrophysics Data System (ADS)

    Zou, Shuzhen; Chen, Han; Yu, Haijuan; Sun, Jing; Zhao, Pengfei; Lin, Xuechun

    2017-12-01

    We demonstrate a new method for fabricating a (6 + 1) × 1 pump-signal combiner based on reducing the signal fiber diameter by corrosion. This method avoids the mismatch loss at the splice between the signal fiber and the output fiber that is caused by signal fiber taper processing. The optimum radius of the corroded signal fiber was calculated from an analysis of the influence of the cladding thickness on the laser propagating in the fiber core. We also developed a two-step splicing method to achieve high-precision alignment between the signal fiber core and the output fiber core. A high-efficiency (6 + 1) × 1 pump-signal combiner was produced, with an average pump power transmission efficiency of 98.0% and a signal power transmission efficiency of 97.7%, making it well suited for application in high-power fiber laser systems.

  14. Spectrum auto-correlation analysis and its application to fault diagnosis of rolling element bearings

    NASA Astrophysics Data System (ADS)

    Ming, A. B.; Qin, Z. Y.; Zhang, W.; Chu, F. L.

    2013-12-01

    Bearing failure is one of the most common reasons for machine breakdowns and accidents. The fault diagnosis of rolling element bearings is therefore of great significance to the safe and efficient operation of machines, owing to its fault indication and accident prevention capability in engineering applications. Based on the orthogonal projection theory, a novel method is proposed in this paper to extract the fault characteristic frequency for the incipient fault diagnosis of rolling element bearings. With the capability of exposing the oscillation frequency of the signal energy, the proposed method is a generalized form of the squared envelope analysis and is named spectral auto-correlation analysis (SACA). SACA is also a simplified form of cyclostationary analysis and can be carried out iteratively in applications. Simulations and experiments are used to evaluate the efficiency of the proposed method. Comparing the results of SACA, traditional envelope analysis and squared envelope analysis, it is found that the result of SACA is more legible, owing to the more prominent harmonic amplitudes at the fault characteristic frequency, and that SACA with proper iteration further enhances the fault features.
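As a minimal illustration of the squared-envelope idea that SACA generalizes (this is plain squared envelope analysis on a synthetic signal, not the paper's SACA), squaring an amplitude-modulated signal shifts the modulation, which plays the role of the fault characteristic frequency, down to baseband, where a DFT scan exposes it. The sampling rate, carrier, and modulation parameters below are invented for the demo.

```python
import math, cmath

fs, n = 1000, 1000             # sampling rate (Hz) and number of samples
f_carrier, f_mod = 100.0, 7.0  # resonance "carrier" and fault (modulation) rate

# Amplitude-modulated signal mimicking a bearing fault exciting a resonance.
x = [(1.0 + 0.5 * math.cos(2 * math.pi * f_mod * t / fs))
     * math.cos(2 * math.pi * f_carrier * t / fs) for t in range(n)]

# Squared envelope: squaring demodulates, moving the modulation to baseband.
sq = [v * v for v in x]

def dft_mag(signal, k):
    """Magnitude of DFT bin k (bin k corresponds to k Hz here, since n == fs)."""
    return abs(sum(v * cmath.exp(-2j * math.pi * k * t / len(signal))
                   for t, v in enumerate(signal)))

# Scan the low-frequency bins for the dominant component (skipping DC).
spectrum = {k: dft_mag(sq, k) for k in range(1, 21)}
fault_bin = max(spectrum, key=spectrum.get)
```

The dominant low-frequency bin recovers the 7 Hz modulation rate even though the raw spectrum of `x` contains only components near the 100 Hz carrier; a weaker harmonic appears at 14 Hz, which is the kind of harmonic structure the paper reports SACA making more prominent.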

  15. NEAT: an efficient network enrichment analysis test.

    PubMed

    Signorelli, Mirko; Vinciotti, Veronica; Wit, Ernst C

    2016-09-05

    Network enrichment analysis is a powerful method, which allows gene enrichment analysis to be integrated with the information on relationships between genes that is provided by gene networks. Existing tests for network enrichment analysis deal only with undirected networks; they can be computationally slow and are based on normality assumptions. We propose NEAT, a test for network enrichment analysis. The test is based on the hypergeometric distribution, which naturally arises as the null distribution in this context. NEAT can be applied not only to undirected, but also to directed and partially directed networks. Our simulations indicate that NEAT is considerably faster than alternative resampling-based methods, and that its capacity to detect enrichments is at least as good as that of alternative tests. We discuss applications of NEAT to network analyses in yeast by testing for enrichment of the Environmental Stress Response target gene set with GO Slim and KEGG functional gene sets, and also by inspecting associations between functional sets themselves. NEAT is a flexible and efficient test for network enrichment analysis that aims to overcome some limitations of existing resampling-based tests. The method is implemented in the R package neat, which can be freely downloaded from CRAN ( https://cran.r-project.org/package=neat ).
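The hypergeometric null that NEAT builds on can be sketched as a generic over-representation test for links between two gene sets. The toy network and counts below are invented, and this is not the exact NEAT statistic (the paper adapts the null to degrees and directedness); it only shows why the hypergeometric distribution arises naturally here.

```python
from math import comb

def hypergeom_enrichment_pvalue(observed, pairs_ab, links_total, pairs_total):
    """P(X >= observed) under a hypergeometric null: links_total links fall
    uniformly among pairs_total possible node pairs, and X counts how many
    land on the pairs_ab pairs joining gene set A to gene set B."""
    upper = min(links_total, pairs_ab)
    return sum(comb(pairs_ab, k)
               * comb(pairs_total - pairs_ab, links_total - k)
               for k in range(observed, upper + 1)) / comb(pairs_total, links_total)

# Toy network: 20 genes and 30 links; |A| = 5 and |B| = 4 (disjoint), giving
# 5 * 4 = 20 possible A-B pairs out of C(20, 2) = 190 pairs in total.
# 9 observed A-B links against an expectation of 30 * 20 / 190 ~ 3.2.
p_value = hypergeom_enrichment_pvalue(9, 20, 30, 190)
```

With these invented counts the p-value is well below 0.05, flagging an enrichment of A-B links; no resampling is needed because the null distribution is available in closed form, which is the source of NEAT's speed advantage.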

  16. LHCb trigger streams optimization

    NASA Astrophysics Data System (ADS)

    Derkach, D.; Kazeev, N.; Neychev, R.; Panin, A.; Trofimov, I.; Ustyuzhanin, A.; Vesterinen, M.

    2017-10-01

    The LHCb experiment stores around 10^11 collision events per year. A typical physics analysis deals with a final sample of up to 10^7 events. Event preselection algorithms (lines) are used for data reduction. Since the data are stored in a format that requires sequential access, the lines are grouped into several output file streams, in order to increase the efficiency of user analysis jobs that read these data. The scheme efficiency heavily depends on the stream composition. By putting similar lines together and balancing the stream sizes it is possible to reduce the overhead. We present a method for finding an optimal stream composition. The method is applied to a part of the LHCb data (Turbo stream) on the stage where it is prepared for user physics analysis. This results in an expected improvement of 15% in the speed of user analysis jobs, and will be applied on data to be recorded in 2017.

  17. An Efficient Glycoblotting-Based Analysis of Oxidized Lipids in Liposomes and a Lipoprotein.

    PubMed

    Furukawa, Takayuki; Hinou, Hiroshi; Takeda, Seiji; Chiba, Hitoshi; Nishimura, Shin-Ichiro; Hui, Shu-Ping

    2017-10-05

    Although widely occurring lipid oxidation, which is triggered by reactive oxygen species (ROS), produces a variety of oxidized lipids, practical methods to efficiently analyze oxidized lipids remain elusive. Herein, it is shown that the glycoblotting platform can be used to analyze oxidized lipids. Analysis is based on the principle that lipid aldehydes, one of the oxidized lipid species, can be captured selectively, enriched, and detected. Moreover, 3-methyl-1-p-tolyltriazene (MTT) methylates phosphoric and carboxylic acids, and this MTT-mediated methylation is, in combination with conventional tandem mass spectrometry (MS/MS) analysis, an effective method for the structural analysis of oxidized lipids. By using three classes of standards, liposomes, and a lipoprotein, it is demonstrated that glycoblotting represents a powerful approach for focused lipidomics, even in complex macromolecules. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Numerical bifurcation analysis of immunological models with time delays

    NASA Astrophysics Data System (ADS)

    Luzyanina, Tatyana; Roose, Dirk; Bocharov, Gennady

    2005-12-01

    In recent years, a large number of mathematical models that are described by delay differential equations (DDEs) have appeared in the life sciences. To analyze the models' dynamics, numerical methods are necessary, since analytical studies can only give limited results. In turn, the availability of efficient numerical methods and software packages encourages the use of time delays in mathematical modelling, which may lead to more realistic models. We outline recently developed numerical methods for bifurcation analysis of DDEs and illustrate the use of these methods in the analysis of a mathematical model of human hepatitis B virus infection.
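A minimal sketch of why numerical methods are needed for DDEs: the delayed logistic (Hutchinson) equation, one of the simplest delay models in the life sciences, develops sustained oscillations around its equilibrium once r*tau exceeds pi/2, behaviour that analytical study can only characterise locally. Here it is integrated with a naive fixed-step Euler scheme and a stored history buffer; this is an illustrative toy, not the hepatitis B model or the bifurcation software discussed in the record.

```python
# Delayed logistic (Hutchinson) equation: N'(t) = r * N(t) * (1 - N(t - tau)).
# For r * tau > pi/2 the equilibrium N = 1 loses stability and a limit
# cycle appears; here r * tau = 2, safely past the Hopf bifurcation.
r, tau, dt, t_end = 1.0, 2.0, 0.01, 100.0
lag = int(round(tau / dt))        # delay expressed in time steps
n_vals = [0.5] * (lag + 1)        # constant initial history: N(t) = 0.5 for t <= 0

for _ in range(int(t_end / dt)):
    n_now = n_vals[-1]
    n_delayed = n_vals[-1 - lag]  # N(t - tau) read from the stored trajectory
    n_vals.append(n_now + dt * r * n_now * (1.0 - n_delayed))

# Count crossings of the equilibrium N = 1 after the transient.
start = int(20.0 / dt)
crossings = sum(1 for a, b in zip(n_vals[start:], n_vals[start + 1:])
                if (a - 1.0) * (b - 1.0) < 0)
```

The trajectory stays positive, overshoots the equilibrium, and keeps crossing it, i.e. the delay has produced a limit cycle; dedicated DDE bifurcation tools track exactly where such oscillations are born as parameters vary, instead of simulating one parameter set at a time.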

  19. Mixed Reality Meets Pharmaceutical Development.

    PubMed

    Forrest, William P; Mackey, Megan A; Shah, Vivek M; Hassell, Kerry M; Shah, Prashant; Wylie, Jennifer L; Gopinath, Janakiraman; Balderhaar, Henning; Li, Li; Wuelfing, W Peter; Helmy, Roy

    2017-12-01

    As science evolves, the need for more efficient and innovative knowledge transfer capabilities becomes evident. Advances in drug discovery and delivery sciences have directly impacted the pharmaceutical industry, though the added complexities have not shortened the development process. These added complexities also make it difficult for scientists to rapidly and effectively transfer knowledge to offset the lengthened drug development timelines. While webcams, camera phones, and iPads have been explored as potential new methods of real-time information sharing, the non-"hands-free" nature and lack of viewer and observer point-of-view render them unsuitable for the R&D laboratory or manufacturing setting. As an alternative solution, the Microsoft HoloLens mixed-reality headset was evaluated as a more efficient, hands-free method of knowledge transfer and information sharing. After completing a traditional method transfer between 3 R&D sites (Rahway, NJ; West Point, PA and Schnachen, Switzerland), a retrospective analysis of efficiency gain was performed through the comparison of a mock method transfer between NJ and PA sites using the HoloLens. The results demonstrated a minimum 10-fold gain in efficiency, weighing in from a savings in time, cost, and the ability to have real-time data analysis and discussion. In addition, other use cases were evaluated involving vendor and contract research/manufacturing organizations. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  20. [Analysis of triterpenoids in Ganoderma lucidum by microwave-assisted continuous extraction].

    PubMed

    Lu, Yan-fang; An, Jing; Jiang, Ye

    2015-04-01

    To further improve the extraction efficiency of microwave extraction, a microwave-assisted continuous extraction (MACE) device has been designed and utilized. By contrast with the traditional methods, the characteristics and extraction efficiency of MACE have also been studied. The method was validated by the analysis of the triterpenoids in Ganoderma lucidum. The extraction conditions of MACE were: 95% ethanol as solvent, microwave power 200 W and radiation time 14.5 min (5 cycles). The extraction results were subsequently compared with traditional heat reflux extraction (HRE), Soxhlet extraction (SE), ultrasonic extraction (UE) and conventional microwave extraction (ME). For triterpenoids, the two microwave-based methods (ME and MACE) were in general capable of finishing the extraction in 10 and 14.5 min, respectively, while the other methods required 60 min or even more than 100 min. Additionally, ME produced extraction results comparable to classical HRE and a higher extraction yield than both SE and UE, but a notably lower extraction yield than MACE. More importantly, the purity of the crude extract obtained by MACE is far better than that of the other methods. MACE effectively combines the advantages of microwave extraction and Soxhlet extraction, thus enabling a more complete extraction of the analytes of TCMs in comparison with ME, and therefore makes the analytical result more accurate. It provides a novel, highly efficient, rapid and reliable pretreatment technique for the analysis of TCMs, and it could potentially be extended to ingredient preparation or extraction techniques of TCMs.

  1. New Method to Prepare Mitomycin C Loaded PLA-Nanoparticles with High Drug Entrapment Efficiency

    NASA Astrophysics Data System (ADS)

    Hou, Zhenqing; Wei, Heng; Wang, Qian; Sun, Qian; Zhou, Chunxiao; Zhan, Chuanming; Tang, Xiaolong; Zhang, Qiqing

    2009-07-01

    The classical double emulsion solvent diffusion technique for encapsulating water-soluble Mitomycin C (MMC) in PLA nanoparticles suffers from low encapsulation efficiency because of the drug's rapid partitioning into the external aqueous phase. In this paper, MMC-loaded PLA nanoparticles were prepared by a new single emulsion solvent evaporation method, in which soybean phosphatidylcholine (SPC) was employed to improve the liposolubility of MMC through formation of an MMC-SPC complex. Four main influential factors identified by a single-factor test, namely PLA molecular weight, ratio of PLA to SPC (wt/wt), ratio of MMC to SPC (wt/wt), and volume ratio of oil phase to water phase, were evaluated using an orthogonal design with respect to drug entrapment efficiency. The drug release study was performed in pH 7.2 PBS at 37 °C, with drug analysis using a UV/vis spectrometer at 365 nm. MMC-PLA particles prepared by the classical method were used for comparison. The MMC-SPC-PLA nanoparticles formulated under the optimized conditions are relatively uniform in size (594 nm) with up to 94.8% drug entrapment efficiency, compared to 6.44 μm PLA-MMC microparticles with 34.5% drug entrapment efficiency. The release of MMC is biphasic, with an initial burst effect; the cumulative drug release over 30 days is 50.17% for the PLA-MMC-SPC nanoparticles and 74.1% for the PLA-MMC particles. IR analysis of the MMC-SPC complex shows that its high liposolubility may be attributed to weak physical interactions between MMC and SPC during formation of the complex. It is concluded that the new method is advantageous in terms of smaller size, narrower size distribution, higher encapsulation yield, and longer sustained drug release in comparison to the classical method.

  2. Recurrence time statistics: versatile tools for genomic DNA sequence analysis.

    PubMed

    Cao, Yinhe; Tung, Wen-Wen; Gao, J B

    2004-01-01

    With the completion of the human and a few model organisms' genomes, and the genomes of many other organisms waiting to be sequenced, it has become increasingly important to develop faster computational tools which are capable of easily identifying the structures and extracting features from DNA sequences. One of the more important structures in a DNA sequence is repeat-related. Often they have to be masked before protein coding regions along a DNA sequence are to be identified or redundant expressed sequence tags (ESTs) are to be sequenced. Here we report a novel recurrence time based method for sequence analysis. The method can conveniently study all kinds of periodicity and exhaustively find all repeat-related features from a genomic DNA sequence. An efficient codon index is also derived from the recurrence time statistics, which has the salient features of being largely species-independent and working well on very short sequences. Efficient codon indices are key elements of successful gene finding algorithms, and are particularly useful for determining whether a suspected EST belongs to a coding or non-coding region. We illustrate the power of the method by studying the genomes of E. coli, the yeast S. cerevisiae, the nematode worm C. elegans, and the human, Homo sapiens. Computationally, our method is very efficient. It allows us to carry out analysis of genomes on the whole genomic scale on a PC.
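The core quantity, the recurrence times of a word along a sequence, can be sketched in a few lines: record the positions where a k-mer occurs and take successive differences. A distribution concentrated at one value signals periodicity, such as the period-3 structure typical of coding regions. The toy sequence below is invented for the demo; this is a sketch of the basic statistic, not the authors' full method or codon index.

```python
def recurrence_times(sequence, word):
    """Distances between successive (possibly overlapping) occurrences of
    `word` in `sequence`; the distribution of these gaps is the basic
    recurrence time statistic."""
    positions = [i for i in range(len(sequence) - len(word) + 1)
                 if sequence[i:i + len(word)] == word]
    return [b - a for a, b in zip(positions, positions[1:])]

# A toy DNA string with an exact period-3 repeat embedded in flanking bases.
seq = "TTGACG" + "ATG" * 10 + "CCGTA"
times = recurrence_times(seq, "ATG")
```

Every gap equals 3, exposing the repeat's period immediately; on real genomes the same idea is applied exhaustively over words and the resulting distributions are summarized, rather than inspected one word at a time.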

  3. A study on technical efficiency of a DMU (review of literature)

    NASA Astrophysics Data System (ADS)

    Venkateswarlu, B.; Mahaboob, B.; Subbarami Reddy, C.; Sankar, J. Ravi

    2017-11-01

    In this research paper the concept of technical efficiency (due to Farrell) [1] of a decision making unit (DMU) is introduced, and measures of technical and cost efficiency are derived. Timmer's [2] deterministic approach to estimating the Cobb-Douglas production frontier is presented, along with an extension of Timmer's method to any production frontier which is linear in its parameters. The estimation of the parameters of the Cobb-Douglas production frontier by a linear programming approach is discussed. Mark et al. [3] proposed a non-parametric method to assess efficiency. Nuti et al. [4] investigated the relationships among technical efficiency scores, weighted per capita cost and overall performance. Gahe Zing Samuel Yank et al. [5] used data envelopment analysis to assess technical efficiency in the banking sector.

  4. Data Envelopment Analysis in the Presence of Measurement Error: Case Study from the National Database of Nursing Quality Indicators® (NDNQI®)

    PubMed Central

    Gajewski, Byron J.; Lee, Robert; Dunton, Nancy

    2012-01-01

    Data Envelopment Analysis (DEA) is the most commonly used approach for evaluating healthcare efficiency (Hollingsworth, 2008), but a long-standing concern is that DEA assumes that data are measured without error. This is quite unlikely, and DEA and other efficiency analysis techniques may yield biased efficiency estimates if measurement error is ignored (Gajewski, Lee, Bott, Piamjariyakul and Taunton, 2009; Ruggiero, 2004). We propose to address measurement error systematically using a Bayesian method (Bayesian DEA). We will apply Bayesian DEA to data from the National Database of Nursing Quality Indicators® (NDNQI®) to estimate nursing units' efficiency. Several external reliability studies inform the posterior distribution of the measurement error on the DEA variables. We will discuss the case of generalizing the approach to situations where an external reliability study is not feasible. PMID:23328796
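For intuition about what DEA measures (leaving aside the Bayesian measurement-error treatment proposed in the paper), the constant-returns-to-scale efficiency score in the special case of one input and one output reduces to each unit's productivity ratio divided by the best observed ratio; the general multi-input, multi-output case requires solving a linear program per unit. The nursing-unit numbers below are hypothetical.

```python
def ccr_efficiency_single(inputs, outputs):
    """CCR (constant returns to scale) efficiency with one input and one
    output: each unit's output/input ratio scaled by the best ratio, so the
    best-practice unit scores 1.0 and the rest score relative to it."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Hypothetical nursing units: hours worked (input) vs patients served (output).
hours    = [100.0, 80.0, 120.0]
patients = [50.0, 48.0, 48.0]
eff = ccr_efficiency_single(hours, patients)
```

Unit 2 (48 patients from 80 hours) defines the frontier with ratio 0.6 and scores 1.0; the others score 0.5/0.6 and 0.4/0.6. The paper's point is that if `hours` or `patients` are measured with error, these scores, and especially the identity of the frontier unit, can be badly biased.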

  5. Highly sensitive analysis of polycyclic aromatic hydrocarbons in environmental water with porous cellulose/zeolitic imidazolate framework-8 composite microspheres as a novel adsorbent coupled with high-performance liquid chromatography.

    PubMed

    Liang, Xiaotong; Liu, Shengquan; Zhu, Rong; Xiao, Lixia; Yao, Shouzhuo

    2016-07-01

    In this work, novel cellulose/zeolitic imidazolate framework-8 composite microspheres have been successfully fabricated and utilized as a sorbent for the efficient extraction and sensitive analysis of polycyclic aromatic hydrocarbons in environmental water. The composite microspheres were synthesized through the in situ hydrothermal growth of zeolitic imidazolate framework-8 on a cellulose matrix, and exhibited the intended hierarchical structure and chemical composition, as confirmed by scanning electron microscopy, Fourier transform infrared spectroscopy, X-ray diffraction patterns, and Brunauer-Emmett-Teller surface area characterization. A robust and highly efficient method was then developed with the as-prepared composite microspheres as a novel solid-phase extraction sorbent, with optimized extraction conditions such as sorbent amount, sample volume, extraction time, desorption conditions, volume of organic modifier, and ionic strength. The method exhibited high sensitivity, with limits of detection down to 0.1-1.0 ng/L; satisfactory linearity, with correlation coefficients ranging from 0.9988 to 0.9999; and good recoveries of 66.7-121.2%, with relative standard deviations less than 10%, for the analysis of environmental polycyclic aromatic hydrocarbons. Thus, our method is convenient and efficient for polycyclic aromatic hydrocarbon extraction and detection, with potential for future environmental water sample analysis. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Mild extraction methods using aqueous glucose solution for the analysis of natural dyes in textile artefacts dyed with Dyer's madder (Rubia tinctorum L.).

    PubMed

    Ford, Lauren; Henderson, Robert L; Rayner, Christopher M; Blackburn, Richard S

    2017-03-03

    Madder (Rubia tinctorum L.) has been widely used as a red dye throughout history. Acid-sensitive colorants present in madder, such as glycosides (lucidin primeveroside, ruberythric acid, galiosin) and sensitive aglycons (lucidin), are degraded in the textile back extraction process; in previous literature these sensitive molecules are either absent or present in only low concentrations due to the use of acid in typical textile back extraction processes. The anthraquinone aglycons alizarin and purpurin are usually identified in analysis following harsh back extraction methods, such as those using solvent mixtures with concentrated hydrochloric acid at high temperatures. Use of softer extraction techniques potentially allows dye components present in madder to be extracted without degradation, which can provide more information about the original dye profile, which varies significantly between madder varieties, species and dyeing techniques. Herein, a softer extraction method involving aqueous glucose solution was developed and compared to other back extraction techniques on wool dyed with root extract from different varieties of Rubia tinctorum. The efficiencies of the extraction methods were analysed by HPLC coupled with diode array detection. Acidic literature methods were evaluated, and they generally caused hydrolysis and degradation of the dye components, with alizarin, lucidin, and purpurin being the main compounds extracted. In contrast, extraction in aqueous glucose solution provides a highly effective method for extraction of madder-dyed wool and is shown to efficiently extract lucidin primeveroside and ruberythric acid without causing hydrolysis, and also to extract aglycons that are present due to hydrolysis during processing of the plant material. Glucose solution is a favourable extraction medium due to its ability to form extensive hydrogen bonding with the glycosides present in madder, and to displace them from the fibre. This new glucose method offers an efficient process that preserves these sensitive molecules and is a step-change in the analysis of madder-dyed textiles, as it can provide further information about historical dye preparation and dyeing processes that current methods cannot. The method also efficiently extracts glycosides in artificially aged samples, making it applicable to museum textile artefacts. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Automated Analysis of Renewable Energy Datasets ('EE/RE Data Mining')

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bush, Brian; Elmore, Ryan; Getman, Dan

    This poster illustrates methods to substantially improve the understanding of renewable energy data sets and the depth and efficiency of their analysis through the application of statistical learning methods ('data mining') in the intelligent processing of these often large and messy information sources. The six examples apply methods for anomaly detection, data cleansing, and pattern mining to time-series data (measurements from metering points in buildings) and spatiotemporal data (renewable energy resource datasets).
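One of the statistical-learning steps mentioned, anomaly detection in metered time series, can be sketched with a simple z-score rule (an assumed baseline technique, not necessarily the method used in the poster): flag readings that lie several standard deviations from the mean of the series. The meter readings below are invented.

```python
import statistics

def zscore_anomalies(series, threshold=3.0):
    """Return indices of points more than `threshold` standard deviations
    from the series mean; a crude but common first pass for spotting
    spikes in building-meter or resource time series."""
    mu = statistics.fmean(series)
    sd = statistics.stdev(series)
    return [i for i, v in enumerate(series) if abs(v - mu) > threshold * sd]

# Hypothetical 15-minute meter readings with one spiked value.
readings = [10.1, 9.8, 10.3, 10.0, 9.9, 42.0, 10.2, 10.1]
anoms = zscore_anomalies(readings, threshold=2.0)
```

Only the spiked reading is flagged. In practice, datasets like those in the poster call for more robust variants (rolling windows, median-based scores) because a single large spike inflates the global standard deviation and can mask smaller anomalies.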

  8. An approximation method for configuration optimization of trusses

    NASA Technical Reports Server (NTRS)

    Hansen, Scott R.; Vanderplaats, Garret N.

    1988-01-01

    Two- and three-dimensional elastic trusses are designed for minimum weight by varying the areas of the members and the location of the joints. Constraints on member stresses and Euler buckling are imposed and multiple static loading conditions are considered. The method presented here utilizes an approximate structural analysis based on first order Taylor series expansions of the member forces. A numerical optimizer minimizes the weight of the truss using information from the approximate structural analysis. Comparisons with results from other methods are made. It is shown that the method of forming an approximate structural analysis based on linearized member forces leads to a highly efficient method of truss configuration optimization.
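The role of the first-order Taylor expansion can be illustrated on a one-member toy case: instead of re-evaluating the exact response at every optimizer step, a linear approximation built at the current design is queried cheaply. The load and area values are invented, and the paper linearizes member forces of full trusses rather than the single-bar stress used here.

```python
# First-order Taylor approximation of a response quantity, standing in for
# a full structural reanalysis at each optimizer step.  Toy case: axial
# stress in a bar, sigma(A) = P / A, linearized about the current area A0.
P = 10_000.0   # axial load (N), hypothetical
A0 = 200.0     # current cross-sectional area (mm^2), hypothetical

def stress(a):
    """Exact response: sigma(A) = P / A."""
    return P / a

def stress_taylor(a):
    """sigma(A0) + dsigma/dA|_(A0) * (A - A0), with d/dA (P/A) = -P/A^2."""
    return P / A0 - (P / A0**2) * (a - A0)

# Near the expansion point the cheap approximation tracks the exact value.
a_new = 210.0
exact, approx = stress(a_new), stress_taylor(a_new)
rel_err = abs(approx - exact) / exact
```

For a 5% change in area the linearization is accurate to well under 1%, which is why an optimizer can take many steps against the approximation before paying for a fresh exact analysis; structural codes often sharpen this further by linearizing in reciprocal variables (1/A), in which member stresses of statically determinate structures are exactly linear.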

  9. Contribution and efficiency of labor allocation analysis of income in household industry using raw material of agricultural commodity in South Sulawesi.

    NASA Astrophysics Data System (ADS)

    Tenriawaru, A. N.; Mahyuddin; Jamil, M. H.; Fudjaja, L.; Nurbaya, S.

    2018-05-01

    In South Sulawesi, various home industry businesses have grown. This industry is actually the basis of community livelihoods that need to be developed and nurtured by the government so family income get increased and the absorption of workers will improve the regional economy in general. The purpose of this study is to analyse the contribution of income, and efficiency of labour allocation in household industries made from raw agricultural commodities. The method of determining the respondents is done by direct appointment (purposive) on the industry players made from raw agricultural commodities. The type of research is quantitative descriptive and data are analysed using income analysis, cost analysis, income contribution analysis, Working Day (HOK) analysis and efficiency analysis of labour allocation. The results showed that the average income earned per year ranged from IDR. 16,866,867.- up to IDR. 125,271,500.-. There are 2 industries that have high contribution to family income such as banana chips industry and rice milling industry with value of 96.3% and 68.7% respectively. In the meantime, there are 5 industries with high average labour allocation efficiency of IDR. 218,135.- / HOK per day and above the efficiency standard of labour allocation based on UMR in South Sulawesi Province.

  10. Overall Traveling-Wave-Tube Efficiency Improved By Optimized Multistage Depressed Collector Design

    NASA Technical Reports Server (NTRS)

    Vaden, Karl R.

    2002-01-01

    The microwave traveling wave tube (TWT) is used widely for space communications and high-power airborne transmitting sources. One of the most important features in designing a TWT is overall efficiency. Yet, overall TWT efficiency is strongly dependent on the efficiency of the electron beam collector, particularly for high values of collector efficiency. For these reasons, the NASA Glenn Research Center developed an optimization algorithm based on simulated annealing to quickly design highly efficient multistage depressed collectors (MDCs). Simulated annealing is a strategy for solving highly nonlinear combinatorial optimization problems. Its major advantage over other methods is its ability to avoid becoming trapped in local minima. Simulated annealing is based on an analogy to statistical thermodynamics, specifically the physical process of annealing: heating a material to a temperature that permits many atomic rearrangements and then cooling it carefully and slowly until it freezes into a strong, minimum-energy crystalline structure. This minimum-energy crystal corresponds to the optimal solution of a mathematical optimization problem. The TWT used as a baseline for optimization was the 32-GHz, 10-W helical TWT developed for the Cassini mission to Saturn. The method of collector analysis and design used was a 2-1/2-dimensional computational procedure that employs two types of codes: a large-signal analysis code and an electron trajectory code. The large-signal analysis code produces the spatial, energetic, and temporal distributions of the spent beam entering the MDC. An electron trajectory code uses the resultant data to perform the actual collector analysis. The MDC was optimized for maximum MDC efficiency and minimum final kinetic energy of all collected electrons (to reduce heat transfer). The preceding figure shows the geometric and electrical configuration of an optimized collector with an efficiency of 93.8 percent. 
The results show the improvement in collector efficiency from 89.7 to 93.8 percent, resulting in an increase of three overall efficiency points. In addition, the time to design a highly efficient MDC was reduced from a month to a few days. All work was done in-house at Glenn for the High Rate Data Delivery Program. Future plans include optimizing the MDC and TWT interaction circuit in tandem to further improve overall TWT efficiency.
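    The annealing strategy described above can be illustrated with a generic, minimal sketch (not Glenn's MDC optimizer): uphill moves are occasionally accepted with probability exp(-delta/T), and the "temperature" T is lowered slowly so the search can escape local minima before settling.

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.995, iters=5000, seed=1):
    """Minimize f by simulated annealing: accept uphill moves with
    probability exp(-delta/T) so the search can escape local minima."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    t = t0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        fc = f(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling  # slow "cooling" schedule
    return best_x, best_f

# Multimodal 1-D test function; a greedy descent from x0 = 3 would
# likely stall in a nearby local minimum.
f = lambda x: x * x + 2.0 * math.sin(5.0 * x) + 2.0
x_best, f_best = simulated_annealing(f, x0=3.0)
```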

  11. The Use of Propensity Scores in Mediation Analysis

    ERIC Educational Resources Information Center

    Jo, Booil; Stuart, Elizabeth A.; MacKinnon, David P.; Vinokur, Amiram D.

    2011-01-01

    Mediation analysis uses measures of hypothesized mediating variables to test theory for how a treatment achieves effects on outcomes and to improve subsequent treatments by identifying the most efficient treatment components. Most current mediation analysis methods rely on untested distributional and functional form assumptions for valid…

  12. Formulation of a dynamic analysis method for a generic family of hoop-mast antenna systems

    NASA Technical Reports Server (NTRS)

    Gabriele, A.; Loewy, R.

    1981-01-01

    Analytical studies of mast-cable-hoop-membrane type antennas were conducted using a transfer matrix numerical analysis approach. This method, by virtue of its specialization and the inherently easy compartmentalization of the formulation and numerical procedures, can be significantly more efficient in computer time required and in the time needed to review and interpret the results.
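    A minimal illustration of the transfer-matrix idea (a generic fixed-free spring-mass chain, not the hoop-mast antenna model): the state vector (displacement, force) is propagated through each element by a 2x2 matrix, so assembling the whole chain is just a matrix product, which is what makes the formulation so easy to compartmentalize.

```python
import numpy as np

def chain_matrix(omega, masses, stiffs):
    # Product of element matrices acting on the state z = (x, F).
    T = np.eye(2)
    for m, k in zip(masses, stiffs):
        S = np.array([[1.0, 1.0 / k], [0.0, 1.0]])        # massless spring
        M = np.array([[1.0, 0.0], [-m * omega**2, 1.0]])  # point mass
        T = M @ S @ T
    return T

def natural_frequency(masses, stiffs, lo, hi, tol=1e-10):
    # Fixed base (x = 0), free tip (F = 0): frequency equation is
    # T[1, 1](omega) = 0; bracket and bisect one root.
    f = lambda w: chain_matrix(w, masses, stiffs)[1, 1]
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Two equal masses and springs: the exact natural frequencies are
# sqrt(k/m * (3 -+ sqrt(5)) / 2); the lower one is ~6.18 for k=100, m=1.
w1 = natural_frequency([1.0, 1.0], [100.0, 100.0], 1.0, 10.0)
```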

  13. Mixed time integration methods for transient thermal analysis of structures, appendix 5

    NASA Technical Reports Server (NTRS)

    Liu, W. K.

    1982-01-01

    Mixed time integration methods for transient thermal analysis of structures are studied. An efficient solution procedure for predicting the thermal behavior of aerospace vehicle structures was developed. A 2D finite element computer program incorporating these methodologies is being implemented. The performance of these mixed time finite element algorithms can then be evaluated employing the proposed example problem.

  14. An adaptive cubature formula for efficient reliability assessment of nonlinear structural dynamic systems

    NASA Astrophysics Data System (ADS)

    Xu, Jun; Kong, Fan

    2018-05-01

    Extreme value distribution (EVD) evaluation is a critical topic in reliability analysis of nonlinear structural dynamic systems. In this paper, a new method is proposed to obtain the EVD. The maximum entropy method (MEM) with fractional moments as constraints is employed to derive the entire range of EVD. Then, an adaptive cubature formula is proposed for fractional moments assessment involved in MEM, which is closely related to the efficiency and accuracy for reliability analysis. Three point sets, which include a total of 2d² + 1 integration points in dimension d, are generated in the proposed formula. In this regard, the efficiency of the proposed formula is ensured. Besides, a "free" parameter is introduced, which makes the proposed formula adaptive with the dimension. The "free" parameter is determined by arranging one point set adjacent to the boundary of the hyper-sphere which contains the bulk of total probability. In this regard, the tail distribution may be better reproduced and the fractional moments could be evaluated with accuracy. Finally, the proposed method is applied to a ten-storey shear frame structure under seismic excitations, which exhibits strong nonlinearity. The numerical results demonstrate the efficacy of the proposed method.

  15. Beluga whale, Delphinapterus leucas, vocalizations from the Churchill River, Manitoba, Canada.

    PubMed

    Chmelnitsky, Elly G; Ferguson, Steven H

    2012-06-01

    Classification of animal vocalizations is often done by a human observer using aural and visual analysis, but more efficient, automated methods have also been utilized to reduce bias and increase reproducibility. Beluga whale, Delphinapterus leucas, calls were described from recordings collected in the summers of 2006-2008 in the Churchill River, Manitoba. Calls (n=706) were classified based on aural and visual analysis, and call characteristics were measured; calls were separated into 453 whistles (64.2%; 22 types), 183 pulsed/noisy calls (25.9%; 15 types), and 70 combined calls (9.9%; seven types). Measured parameters varied within each call type, but less variation existed in pulsed and noisy call types and some combined call types than in whistles. A more efficient and repeatable hierarchical clustering method was applied to 200 randomly chosen whistles using six call characteristics as variables; twelve groups were identified. Call characteristics varied less in the cluster analysis groups than in the whistle types described by visual and aural analysis, and the results were similar to the whistle contours described. This study provides the first description of beluga calls in Hudson Bay, and the use of two methods yields more robust interpretations and an assessment of appropriate methods for future studies.
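    The clustering step described above can be sketched with SciPy, using synthetic stand-ins for the six measured call characteristics; the real features, their values, and the study's linkage criterion are not given in the abstract, so everything below is an assumed illustration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

# Synthetic stand-ins for six whistle characteristics (e.g. start/end
# frequency, duration, inflection count); 200 "whistles" drawn from
# three loose, overlapping groups.
centers = rng.uniform(0, 1, size=(3, 6))
features = np.vstack([c + 0.05 * rng.standard_normal((70, 6)) for c in centers])[:200]

# Standardize so no single characteristic dominates the distances.
z = (features - features.mean(axis=0)) / features.std(axis=0)

# Ward linkage (an assumed choice), then cut the tree into 12 groups
# as in the study.
tree = linkage(z, method="ward")
groups = fcluster(tree, t=12, criterion="maxclust")
```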

  16. Diverse expected gradient active learning for relative attributes.

    PubMed

    You, Xinge; Wang, Ruxin; Tao, Dacheng

    2014-07-01

    The use of relative attributes for semantic understanding of images and videos is a promising way to improve communication between humans and machines. However, it is extremely labor- and time-consuming to define multiple attributes for each instance in large amounts of data. One option is to incorporate active learning, so that informative samples can be actively discovered and then labeled. However, most existing active-learning methods select samples one at a time (serial mode), and may therefore lose efficiency when learning multiple attributes. In this paper, we propose a batch-mode active-learning method, called diverse expected gradient active learning. This method integrates an informativeness analysis and a diversity analysis to form a diverse batch of queries. Specifically, the informativeness analysis employs the expected pairwise gradient length as a measure of informativeness, while the diversity analysis forces a constraint on the proposed diverse gradient angle. Since simultaneous optimization of these two parts is intractable, we utilize a two-step procedure to obtain the diverse batch of queries. A heuristic method is also introduced to suppress imbalanced multiclass distributions. Empirical evaluations on three different databases demonstrate the effectiveness and efficiency of the proposed approach.
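    A highly simplified sketch of the batch-selection idea (hypothetical gradient vectors and scores; the paper's informativeness and diversity measures are more involved): greedily take the most informative candidates whose expected-gradient directions remain mutually diverse.

```python
import numpy as np

def select_batch(grads, scores, k, min_angle_deg=45.0):
    """Greedy batch selection: repeatedly take the most informative
    candidate whose (hypothetical) expected-gradient direction differs
    from every already-selected one by at least min_angle_deg."""
    cos_max = np.cos(np.deg2rad(min_angle_deg))
    unit = grads / np.linalg.norm(grads, axis=1, keepdims=True)
    order = np.argsort(-scores)          # most informative first
    batch = []
    for i in order:
        if all(abs(float(unit[i] @ unit[j])) < cos_max for j in batch):
            batch.append(i)
        if len(batch) == k:
            break
    return batch

rng = np.random.default_rng(3)
grads = rng.standard_normal((50, 8))    # stand-in expected gradients
scores = np.linalg.norm(grads, axis=1)  # gradient length as informativeness
batch = select_batch(grads, scores, k=5)
```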

  18. A modified indirect mathematical model for evaluation of ethanol production efficiency in industrial-scale continuous fermentation processes.

    PubMed

    Canseco Grellet, M A; Castagnaro, A; Dantur, K I; De Boeck, G; Ahmed, P M; Cárdenas, G J; Welin, B; Ruiz, R M

    2016-10-01

    To calculate fermentation efficiency in a continuous ethanol production process, we aimed to develop a robust mathematical method based on the analysis of metabolic by-product formation. This method is in contrast to the traditional way of calculating ethanol fermentation efficiency, where the ratio between the ethanol produced and the sugar consumed is expressed as a percentage of the theoretical conversion yield. Comparison between the two methods, at industrial scale and in sensitivity studies, showed that the indirect method was more robust and gave slightly higher fermentation efficiency values, although fermentation efficiency of the industrial process was found to be low (~75%). The traditional calculation method is simpler than the indirect method as it only requires a few chemical determinations in samples collected. However, a minor error in any measured parameter will have an important impact on the calculated efficiency. In contrast, the indirect method of calculation requires a greater number of determinations but is much more robust since an error in any parameter will only have a minor effect on the fermentation efficiency value. The application of the indirect calculation methodology in order to evaluate the real situation of the process and to reach an optimum fermentation yield for an industrial-scale ethanol production is recommended. Once a high fermentation yield has been reached the traditional method should be used to maintain the control of the process. Upon detection of lower yields in an optimized process the indirect method should be employed as it permits a more accurate diagnosis of causes of yield losses in order to correct the problem rapidly. The low fermentation efficiency obtained in this study shows an urgent need for industrial process optimization where the indirect calculation methodology will be an important tool to determine process losses. © 2016 The Society for Applied Microbiology.
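    The traditional (direct) calculation that the abstract contrasts with fits in one line, using the theoretical Gay-Lussac yield of 0.511 g ethanol per g of fermentable sugar; the numbers below are illustrative only, not from the study.

```python
# Traditional (direct) fermentation efficiency: ethanol produced as a
# percentage of the theoretical Gay-Lussac yield of 0.511 g ethanol
# per g of fermentable sugar consumed.
THEORETICAL_YIELD = 0.511  # g ethanol / g glucose

def direct_efficiency(ethanol_g, sugar_consumed_g):
    return 100.0 * ethanol_g / (sugar_consumed_g * THEORETICAL_YIELD)

# Illustrative numbers only: 100 g of sugar consumed, 38.3 g of ethanol,
# giving roughly the ~75% efficiency reported for the industrial process.
eff = direct_efficiency(38.3, 100.0)
```

    As the abstract notes, a small error in either measured mass propagates directly into this ratio, which is why the by-product-based indirect method is more robust for diagnosis.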

  19. Compatibility of Segments of Thermoelectric Generators

    NASA Technical Reports Server (NTRS)

    Snyder, G. Jeffrey; Ursell, Tristan

    2009-01-01

    A method of calculating (usually for the purpose of maximizing) the power-conversion efficiency of a segmented thermoelectric generator is based on equations derived from the fundamental equations of thermoelectricity. Because it is directly traceable to first principles, the method provides physical explanations in addition to predictions of phenomena involved in segmentation. In comparison with the finite-element method used heretofore to predict (without being able to explain) the behavior of a segmented thermoelectric generator, this method is much simpler to implement in practice: in particular, the efficiency of a segmented thermoelectric generator can be estimated by evaluating its equations using only a hand-held calculator. In addition, the method provides for determination of cascading ratios. The concept of cascading is illustrated in the figure, and the cascading ratio is defined in the figure caption. An important aspect of the method is its approach to the issue of compatibility among segments, in combination with introduction of the concept of compatibility within a segment. Prior approaches involved the use of only averaged material properties. Two materials in direct contact could be examined for compatibility with each other, but there was no general framework for analysis of compatibility. The present method establishes such a framework. The mathematical derivation of the method begins with the definition of the reduced efficiency of a thermoelectric generator as the ratio between (1) its thermal-to-electric power-conversion efficiency and (2) its Carnot efficiency (the maximum efficiency theoretically attainable, given its hot- and cold-side temperatures). The derivation involves calculation of the reduced efficiency of a model thermoelectric generator for which the hot-side temperature is only infinitesimally greater than the cold-side temperature. 
The derivation includes consideration of the ratio (u) between the electric current and heat-conduction power and leads to the concept of compatibility factor (s) for a given thermoelectric material, defined as the value of u that maximizes the reduced efficiency of the aforementioned model thermoelectric generator.
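    With the standard definitions (z = alpha^2 / (rho * kappa)), the compatibility factor has the closed form s = (sqrt(1 + zT) - 1) / (alpha * T), which is indeed hand-calculator friendly. The material values below are assumed, order-of-magnitude inputs, not taken from the article.

```python
import math

def compatibility_factor(alpha, rho, kappa, T):
    """Thermoelectric compatibility factor s = (sqrt(1+zT) - 1)/(alpha*T):
    the value of u (current / conducted heat) that maximizes the reduced
    efficiency, with figure of merit z = alpha**2 / (rho * kappa)."""
    zT = alpha**2 * T / (rho * kappa)
    return (math.sqrt(1.0 + zT) - 1.0) / (alpha * T)

# Illustrative room-temperature numbers for a Bi2Te3-like material
# (assumed, order-of-magnitude only): alpha = 200 uV/K,
# rho = 1e-5 ohm*m, kappa = 1.5 W/(m*K)  ->  zT = 0.8.
s = compatibility_factor(200e-6, 1.0e-5, 1.5, 300.0)  # units of 1/V
```

    Segments (and materials within a segment) are compatible when their s values are similar; comparing s along the leg exposes mismatches that averaged properties can hide.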

  20. Simultaneous analysis of 70 pesticides using HPLC/MS/MS: a comparison of the multiresidue method of Klein and Alder and the QuEChERS method.

    PubMed

    Riedel, Melanie; Speer, Karl; Stuke, Sven; Schmeer, Karl

    2010-01-01

    Since 2003, two new multipesticide residue methods for screening crops for a large number of pesticides, developed by Klein and Alder and Anastassiades et al. (Quick, Easy, Cheap, Effective, Rugged, and Safe; QuEChERS), have been published. Our intention was to compare these two important methods on the basis of their extraction efficiency, reproducibility, ruggedness, ease of use, and speed. In total, 70 pesticides belonging to numerous different substance classes were analyzed at two concentration levels by applying both methods, using five different representative matrixes. In the case of the QuEChERS method, the results of the three sample preparation steps (crude extract, extract after SPE, and extract after SPE and acidification) were compared with each other and with the results obtained with the Klein and Alder method. The extraction efficiencies of the QuEChERS method were far higher, and the sample preparation was much quicker when the last two steps were omitted. In most cases, the extraction efficiencies after the first step were approximately 100%. With extraction efficiencies of mostly less than 70%, the Klein and Alder method did not compare favorably. Some analytes caused problems during evaluation, mostly due to matrix influences.

  1. Extraction efficiency and implications for absolute quantitation of propranolol in mouse brain, liver and kidney thin tissue sections using droplet-based liquid microjunction surface sampling-HPLC ESI-MS/MS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kertesz, Vilmos; Weiskittel, Taylor M.; Vavek, Marissa

    Currently, absolute quantitation aspects of droplet-based surface sampling for thin tissue analysis using a fully automated autosampler/HPLC-ESI-MS/MS system are not fully evaluated. Knowledge of extraction efficiency and its reproducibility is required to judge the potential of the method for absolute quantitation of analytes from thin tissue sections. Methods: Adjacent thin tissue sections of propranolol-dosed mouse brain (10-μm-thick), kidney (10-μm-thick) and liver (8-, 10-, 16- and 24-μm-thick) were obtained. Absolute concentration of propranolol was determined in tissue punches from serial sections using standard bulk tissue extraction protocols and subsequent HPLC separations and tandem mass spectrometric analysis. These values were used to determine propranolol extraction efficiency from the tissues with the droplet-based surface sampling approach. Results: Extraction efficiency of propranolol from 10-μm-thick brain, kidney and liver thin tissues using droplet-based surface sampling varied between ~45-63%. Extraction efficiency decreased from ~65% to ~36% with liver thickness increasing from 8 μm to 24 μm. Randomly selecting half of the samples as standards, precision and accuracy of propranolol concentrations obtained for the other half of samples as quality control metrics were determined. Resulting precision (±15%) and accuracy (±3%) values, respectively, were within acceptable limits. In conclusion, comparative quantitation of adjacent mouse thin tissue sections of different organs and of various thicknesses by droplet-based surface sampling and by bulk extraction of tissue punches showed that extraction efficiency was incomplete using the former method, and that it depended on the organ and tissue thickness. However, once extraction efficiency was determined and applied, the droplet-based approach provided the required quantitation accuracy and precision for assay validations. Furthermore, this means that once the extraction efficiency was calibrated for a given tissue type and drug, the droplet-based approach provides a non-labor-intensive and high-throughput means to acquire spatially resolved quantitative analysis of multiple samples of the same type.
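    Applying a calibrated extraction efficiency is a one-line correction; the values below are illustrative, not from the study.

```python
# Once the extraction efficiency for a tissue type and thickness is
# known, a surface-sampling measurement can be corrected to an
# absolute concentration. Values are illustrative only.
def corrected_concentration(measured, extraction_efficiency):
    if not 0.0 < extraction_efficiency <= 1.0:
        raise ValueError("efficiency must be in (0, 1]")
    return measured / extraction_efficiency

# e.g. a 10-um liver section sampled at an assumed ~55% efficiency
conc = corrected_concentration(5.5, 0.55)
```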

  3. Technical- and environmental-efficiency analysis of irrigated cotton-cropping systems in Punjab, Pakistan using data envelopment analysis.

    PubMed

    Ullah, Asmat; Perret, Sylvain R

    2014-08-01

    Cotton cropping in Pakistan uses substantial quantities of resources and adversely affects the environment with pollutants from the inputs, particularly pesticides. A question remains regarding to what extent such environmental impact can be reduced without compromising farmers' income. This paper investigates the environmental, technical, and economic performances of selected irrigated cotton-cropping systems in Punjab to quantify the sustainability of cotton farming and reveal options for improvement. Using mostly primary data, our study quantifies the technical, cost, and environmental efficiencies of different farm sizes. A set of indicators has been computed to reflect these three domains of efficiency using the data envelopment analysis technique. The results indicate that farmers are broadly environmentally inefficient, which primarily results from technical inefficiency. Based on an improved input mix, the average potential environmental impact reduction for small, medium, and large farms is 9, 13, and 11%, respectively, without compromising the economic return. Moreover, the differences in technical, cost, and environmental efficiencies between small and medium and between small and large farm sizes were statistically significant. The second-stage regression analysis identifies that farm size significantly affects the efficiencies; exposure to extension and training has positive effects, and the sowing method significantly affects the technical and environmental efficiencies. Paradoxically, formal education level is found to affect the efficiencies negatively. This paper discusses policy interventions that can improve technical efficiency so as to ultimately increase environmental efficiency and reduce farmers' operating costs.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osychenko, A A; Zalesskii, A D; Krivokharchenko, A S

    Using the method of femtosecond laser surgery we study the fusion of two-cell mouse embryos under the action of tightly focused femtosecond laser radiation, with the fusion efficiency reaching 60%. A detailed statistical analysis of the efficiency of blastomere fusion and of the development of the embryo up to the blastocyst stage after exposure of embryos from different mice to a femtosecond pulse is presented. It is shown that the efficiency of blastocyst formation essentially depends on the biological characteristics of the embryo, namely, the strain and age of the donor mouse. The possibility of obtaining hexaploid embryonal cells using the methods of femtosecond laser surgery is demonstrated. (extreme light fields and their applications)

  5. Measuring and Benchmarking Technical Efficiency of Public Hospitals in Tianjin, China: A Bootstrap-Data Envelopment Analysis Approach.

    PubMed

    Li, Hao; Dong, Siping

    2015-01-01

    China has long applied traditional data envelopment analysis (DEA) models to measure the technical efficiency of public hospitals without bias correction of the efficiency scores. In this article, we introduce the Bootstrap-DEA approach from the international literature to analyze the technical efficiency of public hospitals in Tianjin (China) and try to improve the application of this method for benchmarking and inter-organizational learning. It is found that the bias-corrected efficiency scores of Bootstrap-DEA differ significantly from those of the traditional Banker, Charnes, and Cooper (BCC) model, which means that Chinese researchers need to update their DEA models for more scientific calculation of hospital efficiency scores. Our research helps narrow the gap between China and the international community in the relative efficiency measurement and improvement of hospitals. It is suggested that Bootstrap-DEA be widely applied in future research to measure the relative efficiency and productivity of Chinese hospitals, so as to better serve efficiency improvement and related decision making. © The Author(s) 2015.

  6. Finding stability regions for preserving efficiency classification of variable returns to scale technology in data envelopment analysis

    NASA Astrophysics Data System (ADS)

    Zamani, P.; Borzouei, M.

    2016-12-01

    This paper addresses the sensitivity of the efficiency classification of variable returns to scale (VRS) technology, to enhance the credibility of data envelopment analysis (DEA) results in practical applications when an additional decision making unit (DMU) needs to be added to the set being considered. It also develops a structured approach to assist practitioners in selecting an appropriate variation range for the inputs and outputs of the additional DMU, so that this DMU is efficient and the efficiency classification of the VRS technology remains unchanged. This stability region is simply specified by the concept of defining hyperplanes of the production possibility set of the VRS technology and the corresponding halfspaces. Furthermore, this study determines a stability region for the additional DMU within which, in addition to the efficiency classification, the efficiency score of a specific inefficient DMU is preserved; using a simulation method, a region in which some specific efficient DMUs become inefficient is also provided.
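    For readers unfamiliar with the underlying model, the input-oriented VRS (BCC) efficiency score that the classification rests on can be sketched as a small linear program; this is the textbook model, not the paper's stability-region procedure.

```python
import numpy as np
from scipy.optimize import linprog

def bcc_efficiency(X, Y, k):
    """Input-oriented VRS (BCC) efficiency of DMU k.
    X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs). Returns theta in (0, 1]."""
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [theta, lambda_1 .. lambda_n]; minimize theta.
    c = np.zeros(n + 1)
    c[0] = 1.0
    A_ub = np.zeros((m + s, n + 1))
    b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -X[k]      # inputs:  X^T lambda <= theta * x_k
    A_ub[:m, 1:] = X.T
    A_ub[m:, 1:] = -Y.T      # outputs: Y^T lambda >= y_k
    b_ub[m:] = -Y[k]
    A_eq = np.zeros((1, n + 1))
    A_eq[0, 1:] = 1.0        # VRS: sum(lambda) = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

# Toy data: 4 DMUs, 1 input, 1 output; DMU 3 wastes input relative
# to DMU 1, so its efficiency is 4/6 = 2/3.
X = np.array([[2.0], [4.0], [6.0], [6.0]])
Y = np.array([[2.0], [5.0], [7.0], [5.0]])
theta = bcc_efficiency(X, Y, 3)
```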

  7. More efficient parameter estimates for factor analysis of ordinal variables by ridge generalized least squares.

    PubMed

    Yuan, Ke-Hai; Jiang, Ge; Cheng, Ying

    2017-11-01

    Data in psychology are often collected using Likert-type scales, and it has been shown that factor analysis of Likert-type data is better performed on the polychoric correlation matrix than on the product-moment covariance matrix, especially when the distributions of the observed variables are skewed. In theory, factor analysis of the polychoric correlation matrix is best conducted using generalized least squares with an asymptotically correct weight matrix (AGLS). However, simulation studies have shown that both least squares (LS) and diagonally weighted least squares (DWLS) perform better than AGLS, and thus LS or DWLS is routinely used in practice. In either LS or DWLS, the associations among the polychoric correlation coefficients are completely ignored. To mend such a gap between statistical theory and empirical work, this paper proposes new methods, called ridge GLS, for factor analysis of ordinal data. Monte Carlo results show that, for a wide range of sample sizes, ridge GLS methods yield uniformly more accurate parameter estimates than existing methods (LS, DWLS, AGLS). A real-data example indicates that estimates by ridge GLS are 9-20% more efficient than those by existing methods. Rescaled and adjusted test statistics as well as sandwich-type standard errors following the ridge GLS methods also perform reasonably well. © 2017 The British Psychological Society.

  8. An adjoint method of sensitivity analysis for residual vibrations of structures subject to impacts

    NASA Astrophysics Data System (ADS)

    Yan, Kun; Cheng, Gengdong

    2018-03-01

    For structures subject to impact loads, residual vibration reduction becomes more and more important as machines become faster and lighter. An efficient sensitivity analysis of residual vibration with respect to structural or operational parameters is indispensable for a gradient-based optimization algorithm that reduces the residual vibration in either an active or a passive way. In this paper, an integrated quadratic performance index is used as the measure of the residual vibration, since it globally measures the residual vibration response and its calculation can be simplified greatly with the Lyapunov equation. Several sensitivity analysis approaches for the performance index were previously developed based on the assumption that the initial excitations of the residual vibration are given and independent of the structural design. Since the excitations resulting from the impact load often depend on the structural design, this paper proposes a new efficient sensitivity analysis method for residual vibration of structures subject to impacts that accounts for this dependence. The new method is developed by combining two existing methods and using the adjoint variable approach. Three numerical examples are carried out and demonstrate the accuracy of the proposed method. The numerical results show that the dependence of the initial excitations on the structural design variables may strongly affect the accuracy of the sensitivities.
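    The Lyapunov-equation shortcut mentioned above can be sketched for a toy linear system (values assumed): for a stable system xdot = A x, the quadratic index J = integral of x'Qx dt equals x0' P x0, where P solves A'P + PA + Q = 0, so no time integration is needed.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Lightly damped 1-DOF oscillator in state-space form (assumed values):
# residual vibration after an impact that sets x(0) = x0.
A = np.array([[0.0, 1.0],
              [-4.0, -0.4]])
Q = np.eye(2)
x0 = np.array([1.0, 0.0])        # initial excitation from the impact

# Solve A'P + PA = -Q, then J = x0' P x0 in closed form.
P = solve_continuous_lyapunov(A.T, -Q)
J = x0 @ P @ x0

# Cross-check by brute-force time integration of x'Qx (forward Euler).
dt, x, J_num = 2e-4, x0.copy(), 0.0
for _ in range(150000):          # integrate to t = 30 s
    J_num += (x @ Q @ x) * dt
    x = x + dt * (A @ x)
```

    The closed form is what makes repeated evaluation inside a gradient-based optimizer affordable.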

  9. Optimization analysis of the motor cooling method in semi-closed single screw refrigeration compressor

    NASA Astrophysics Data System (ADS)

    Wang, Z. L.; Shen, Y. F.; Wang, Z. B.; Wang, J.

    2017-08-01

    Semi-closed single screw refrigeration compressors (SSRCs) are widely used in refrigeration and air conditioning systems owing to their simple structure, balanced rotor forces, high volumetric efficiency, and other advantages. In semi-closed SSRCs, the motor is often cooled by suction gas or by injected refrigerant liquid. The motor cooling method changes the suction gas temperature and is therefore, to a certain extent, an important factor influencing the thermodynamic performance of the compressor. Thus the effects of the motor cooling method on compressor performance must be studied. In this paper, mathematical models of the motor cooling process for these two methods were established. The influences of motor cooling parameters such as suction gas temperature, suction gas quantity, injected refrigerant liquid temperature, and injected refrigerant liquid quantity on the thermodynamic performance of the compressor were analysed, and the performance of the compressor under the two motor cooling methods was compared. The injected refrigerant liquid proves to cool the motor more effectively than the suction gas. The analysis results can be useful for the optimum design of the motor cooling process to improve the performance and energy efficiency of the compressor.
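    The first-order effect of suction-gas motor cooling can be sketched with a steady-flow energy balance (illustrative numbers, not from the paper): the motor loss heats the suction gas before compression, raising its temperature by dT = Q / (m_dot * cp).

```python
# Steady-flow energy balance for suction-gas motor cooling: the motor
# loss is absorbed by the suction gas as added superheat.
def suction_temperature_rise(motor_loss_w, m_dot_kg_s, cp_j_kg_k):
    """dT = Q / (m_dot * cp) for steady flow over the motor."""
    return motor_loss_w / (m_dot_kg_s * cp_j_kg_k)

# Assumed, order-of-magnitude inputs: 2 kW of motor loss, 0.25 kg/s of
# refrigerant vapour, cp ~ 900 J/(kg K)  ->  roughly 9 K of added superheat.
dT = suction_temperature_rise(2000.0, 0.25, 900.0)
```

    The added superheat lowers suction density and hence the mass flow delivered per revolution, which is one reason liquid injection can outperform suction-gas cooling.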

  10. Improved neutron-gamma discrimination for a 6Li-glass neutron detector using digital signal analysis methods

    DOE PAGES

    Wang, Cai -Lin; Riedel, Richard A.

    2016-01-14

    A 6Li-glass scintillator (GS20) based neutron Anger camera was developed for time-of-flight single-crystal diffraction instruments at SNS. Traditional pulse-height analysis (PHA) for neutron-gamma discrimination (NGD) resulted in a neutron-gamma efficiency ratio (defined as the NGD ratio) on the order of 10^4. The NGD ratios of Anger cameras need to be improved for broader applications, including neutron reflectometers. For this purpose, five digital signal analysis methods for individual waveforms from PMTs were proposed, using: (i) a pulse-amplitude histogram; (ii) power spectrum analysis combined with the maximum pulse amplitude; (iii) two event parameters (a1, b0) obtained from a Wiener filter; (iv) an effective amplitude (m) obtained from an adaptive least-mean-square (LMS) filter; and (v) a cross-correlation (CC) coefficient between an individual waveform and a reference. The NGD ratios can be 1-10^2 times those from the traditional PHA method. A brighter scintillator, GS2, has a better NGD ratio than GS20 but lower neutron detection efficiency. The ultimate NGD ratio is related to the ambient, high-energy background events. Moreover, our results indicate the NGD capability of neutron Anger cameras can be improved using digital signal analysis methods and brighter neutron scintillators.
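
    As an illustration of method (v) only, with synthetic pulse shapes standing in for real PMT waveforms (the decay constants and noise level are invented), a normalized cross-correlation coefficient can be computed as:

```python
import numpy as np

def cc_coefficient(waveform, reference):
    """Pearson-normalized cross-correlation between a pulse and a reference template."""
    w = waveform - waveform.mean()
    r = reference - reference.mean()
    return float(w @ r / np.sqrt((w @ w) * (r @ r)))

# Hypothetical shapes: slow-decay neutron-like template vs fast-decay gamma-like pulse
t = np.linspace(0.0, 1.0, 256)
reference = np.exp(-t / 0.25)
rng = np.random.default_rng(0)
neutron_pulse = np.exp(-t / 0.25) + 0.02 * rng.standard_normal(256)
gamma_pulse = np.exp(-t / 0.05)

cc_n = cc_coefficient(neutron_pulse, reference)
cc_g = cc_coefficient(gamma_pulse, reference)
```

    Events whose coefficient falls below a chosen threshold would be rejected as gammas; real discrimination thresholds depend on the detector and electronics.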

  11. Eigenvalue and eigenvector sensitivity and approximate analysis for repeated eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Hou, Gene J. W.; Kenny, Sean P.

    1991-01-01

    A set of computationally efficient equations for eigenvalue and eigenvector sensitivity analysis is derived, and a method for approximate eigenvalue and eigenvector analysis in the presence of repeated eigenvalues is presented. The method developed for approximate analysis involves a reparameterization of the multivariable structural eigenvalue problem in terms of a single positive-valued parameter. The resulting equations yield first-order approximations of changes in both the eigenvalues and eigenvectors associated with the repeated eigenvalue problem. Examples are given to demonstrate the application of these equations for sensitivity and approximate analysis.
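
    For the distinct-eigenvalue case (the repeated-eigenvalue case needs the subspace treatment developed in the report), the classical first-order sensitivity formula can be sketched as follows; the 3-DOF chain and its parameterization are illustrative:

```python
import numpy as np
from scipy.linalg import eigh

def eig_sensitivity(K, M, dK, dM):
    """dlambda_i/dp = phi_i^T (dK/dp - lambda_i dM/dp) phi_i for the generalized
    problem K phi = lambda M phi, with phi mass-normalized (phi^T M phi = 1)."""
    lam, Phi = eigh(K, M)          # scipy returns M-orthonormal eigenvectors
    dlam = np.array([Phi[:, i] @ (dK - lam[i] * dM) @ Phi[:, i]
                     for i in range(len(lam))])
    return lam, dlam

# Hypothetical 3-DOF chain whose design parameter p scales the first spring
def K_of(p):
    return np.array([[p + 2.0, -2.0, 0.0],
                     [-2.0, 5.0, -3.0],
                     [0.0, -3.0, 3.0]])

M = np.diag([1.0, 2.0, 1.5])
p0 = 4.0
dK = np.zeros((3, 3)); dK[0, 0] = 1.0      # dK/dp
lam, dlam = eig_sensitivity(K_of(p0), M, dK, np.zeros((3, 3)))
```

    A finite-difference check on the eigenvalues of K(p0 + h) confirms the analytic sensitivities for this example.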

  12. Profiling and relative quantification of phosphatidylethanolamine based on acetone stable isotope derivatization.

    PubMed

    Wang, Xiang; Wei, Fang; Xu, Ji-Qu; Lv, Xin; Dong, Xu-Yan; Han, Xianlin; Quek, Siew-Young; Huang, Feng-Hong; Chen, Hong

    2016-01-01

    Phosphatidylethanolamine (PE) is considered to be one of the pivotal lipids for normal cellular function as well as disease initiation and progression. In this study, a simple, efficient, reliable, and inexpensive method for the qualitative analysis and relative quantification of PE, based on acetone stable isotope derivatization combined with double neutral loss scan-shotgun electrospray ionization tandem-quadrupole mass spectrometry analysis (ASID-DNLS-Shotgun ESI-MS/MS), was developed. The ASID method led to alkylation of the primary amino groups of PE with an isopropyl moiety. The use of acetone (d0-acetone) and deuterium-labeled acetone (d6-acetone) introduced a 6 Da mass shift that was ideally suited for relative quantitative analysis, and enhanced sensitivity for mass analysis. The DNLS model was introduced to simultaneously analyze the differentially derivatized PEs by shotgun ESI-MS/MS with high selectivity and accuracy. The reaction specificity, labeling efficiency, and linearity of the ASID method were thoroughly evaluated in this study. Its excellent applicability was validated by qualitative and relative quantitative analysis of PE species present in liver samples from rats fed different diets. Using the ASID-DNLS-Shotgun ESI-MS/MS method, 45 PE species from rat livers were identified and quantified in an efficient manner. The level of total PEs tended to decrease in the livers of rats on high fat diets compared with controls. The levels of PE 32:1, 34:3, 34:2, 36:3, 36:2, 42:10, plasmalogen PE 36:1 and lyso PE 22:6 were significantly reduced, while levels of PE 36:1 and lyso PE 16:0 increased. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Measuring Efficiency of Knowledge Production in Health Research Centers Using Data Envelopment Analysis (DEA): A Case Study in Iran

    PubMed Central

    Amiri, Mohammad Meskarpour; Nasiri, Taha; Saadat, Seyed Hassan; Anabad, Hosein Amini; Ardakan, Payman Mahboobi

    2016-01-01

    Introduction: Efficiency analysis is necessary in order to avoid waste of materials, energy, effort, money, and time during scientific research. Therefore, analyzing the efficiency of knowledge production in health areas is necessary, especially for developing and in-transition countries. As a first step in this field, the aim of this study was to analyze the efficiency of selected health research centers using data envelopment analysis (DEA). Methods: This retrospective, applied study was conducted in 2015 using input and output data of 16 health research centers affiliated with a health sciences university in Iran during 2010–2014. The technical efficiency of the health research centers was evaluated based on three basic data envelopment analysis (DEA) models: input-oriented, output-oriented, and hyperbolic-oriented. The input and output data of each health research center for the years 2010–2014 were collected from the Iran Ministry of Health and Medical Education (MOHE) profile and analyzed with R software. Results: The mean efficiency score in the input-oriented, output-oriented, and hyperbolic-oriented models was 0.781, 0.671, and 0.798, respectively. Based on the results of the study, half of the health research centers are operating below full efficiency, and about one-third of them are operating under the average efficiency level. There are also large gaps in efficiency between the health research centers. Conclusion: It is necessary for health research centers to improve their efficiency in knowledge production through better management of available resources. A higher level of efficiency in a significant number of health research centers is achievable through more efficient management of human resources and capital. Further research is needed to measure and follow the efficiency of knowledge production by health research centers around the world and over a period of time. PMID:28344756
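
    A minimal sketch of the input-oriented CCR model used in such studies, solved as one linear program per decision-making unit (the inputs and outputs below are toy numbers, not the centers' actual data):

```python
import numpy as np
from scipy.optimize import linprog

def dea_input_oriented(X, Y):
    """CCR input-oriented DEA. X: inputs (m x n), Y: outputs (s x n), columns = DMUs.
    Returns a technical efficiency score theta in (0, 1] for each DMU."""
    m, n = X.shape
    s = Y.shape[0]
    scores = []
    for o in range(n):
        # variables: [theta, lambda_1 .. lambda_n]
        c = np.r_[1.0, np.zeros(n)]
        A_in = np.hstack([-X[:, [o]], X])          # X @ lam <= theta * x_o
        A_out = np.hstack([np.zeros((s, 1)), -Y])  # Y @ lam >= y_o
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(m), -Y[:, o]],
                      bounds=[(None, None)] + [(0, None)] * n)
        scores.append(res.x[0])
    return np.array(scores)

# Hypothetical centers: inputs = (staff, budget), output = publications
X = np.array([[5.0, 8.0, 6.0, 4.0],
              [3.0, 6.0, 5.0, 2.0]])
Y = np.array([[20.0, 30.0, 18.0, 16.0]])
scores = dea_input_oriented(X, Y)
print(scores.round(3))
```

    A score of 1 marks a center on the efficient frontier; a score of, say, 0.75 means the same outputs could in principle be produced with 75% of the inputs.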

  14. The Problem With the Placement Study.

    ERIC Educational Resources Information Center

    Miner, Norris

    This study compared the effectiveness and efficiency of two alternative methods for determining the status of graduates of Seminole Community College. The first method involved the identification of graduates, design and mailing of a questionnaire, and analysis of response data, as mandated by the state. The second method compared computer data…

  15. A multiclass multiresidue LC-MS/MS method for analysis of veterinary drugs in bovine kidney

    USDA-ARS?s Scientific Manuscript database

    The increased efficiency permitted by multiclass, multiresidue methods has made such approaches very attractive to laboratories involved in monitoring veterinary drug residues in animal tissues. In this current work, evaluation of a multiclass multiresidue LC-MS/MS method in bovine kidney is describ...

  16. Development of new methodologies for evaluating the energy performance of new commercial buildings

    NASA Astrophysics Data System (ADS)

    Song, Suwon

    The concept of Measurement and Verification (M&V) of a new building continues to become more important because efficient design alone is often not sufficient to deliver an efficient building. Simulation models that are calibrated to measured data can be used to evaluate the energy performance of new buildings if they are compared to energy baselines such as similar buildings, energy codes, and design standards. Unfortunately, there is a lack of detailed M&V methods and analysis methods to measure energy savings from new buildings that would have hypothetical energy baselines. Therefore, this study developed and demonstrated several new methodologies for evaluating the energy performance of new commercial buildings using a case-study building in Austin, Texas. First, three new M&V methods were developed to enhance the previous generic M&V framework for new buildings, including: (1) The development of a method to synthesize weather-normalized cooling energy use from a correlation of Motor Control Center (MCC) electricity use when chilled water use is unavailable, (2) The development of an improved method to analyze measured solar transmittance against incidence angle for sample glazing using different solar sensor types, including Eppley PSP and Li-Cor sensors, and (3) The development of an improved method to analyze chiller efficiency and operation at part-load conditions. Second, three new calibration methods were developed and analyzed, including: (1) A new percentile analysis added to the previous signature method for use with a DOE-2 calibration, (2) A new analysis to account for undocumented exhaust air in DOE-2 calibration, and (3) An analysis of the impact of synthesized direct normal solar radiation using the Erbs correlation on DOE-2 simulation. Third, an analysis of the actual energy savings compared to three different energy baselines was performed, including: (1) Energy Use Index (EUI) comparisons with sub-metered data, (2) New comparisons against Standards 90.1-1989 and 90.1-2001, and (3) A new evaluation of the performance of selected Energy Conservation Design Measures (ECDMs). Finally, potential energy savings were also simulated from selected improvements, including: minimum supply air flow, undocumented exhaust air, and daylighting.

  17. CRISPR/Cas9 cleavage efficiency regression through boosting algorithms and Markov sequence profiling.

    PubMed

    Peng, Hui; Zheng, Yi; Blumenstein, Michael; Tao, Dacheng; Li, Jinyan

    2018-04-16

    The CRISPR/Cas9 system is a widely used genome editing tool. A prediction problem of great interest for this system is how to select optimal single guide RNAs (sgRNAs) such that cleavage efficiency is high while the off-target effect is low. This work proposed a two-step averaging method (TSAM) for the regression of cleavage efficiencies of a set of sgRNAs by averaging the predicted efficiency scores of a boosting algorithm and those of a support vector machine (SVM). We also proposed to use profiled Markov properties as novel features to capture the global characteristics of sgRNAs. These new features are combined with the outstanding features ranked by the boosting algorithm for the training of the SVM regressor. TSAM improved the mean Spearman correlation coefficients compared with the state-of-the-art performance on benchmark datasets containing thousands of human, mouse and zebrafish sgRNAs. Our method can also be converted to make binary distinctions between efficient and inefficient sgRNAs, with superior performance to the existing methods. The analysis reveals that highly efficient sgRNAs have a lower melting temperature at the middle of the spacer, cut at parts of the genome closer to the 5'-end, and contain more 'A' but less 'G' compared with inefficient ones. Comprehensive further analysis also demonstrates that our tool can predict an sgRNA's cutting efficiency with consistently good performance whether it is expressed from a U6 promoter in cells or from a T7 promoter in vitro. An online tool is available at http://www.aai-bioinfo.com/CRISPR/. Python and Matlab source codes are freely available at https://github.com/penn-hui/TSAM. Contact: Jinyan.Li@uts.edu.au. Supplementary data are available at Bioinformatics online.
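
    The two-step averaging idea can be sketched with off-the-shelf regressors; the synthetic features below stand in for sgRNA encodings, and this is not the authors' tuned TSAM pipeline:

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 8))                 # stand-ins for sgRNA features
y = X[:, 0] - 0.5 * X[:, 1] ** 2 + 0.1 * rng.standard_normal(300)

train, test = slice(0, 200), slice(200, 300)
gb = GradientBoostingRegressor(random_state=0).fit(X[train], y[train])
sv = SVR(C=10.0).fit(X[train], y[train])

# Step 2: average the two regressors' predicted efficiency scores
pred = 0.5 * (gb.predict(X[test]) + sv.predict(X[test]))
rho = spearmanr(pred, y[test])[0]
```

    Averaging two regressors with different inductive biases often reduces variance relative to either model alone, which is the motivation behind TSAM-style ensembling.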

  18. Efficient analysis of three dimensional EUV mask induced imaging artifacts using the waveguide decomposition method

    NASA Astrophysics Data System (ADS)

    Shao, Feng; Evanschitzky, Peter; Fühner, Tim; Erdmann, Andreas

    2009-10-01

    This paper employs the Waveguide decomposition method as an efficient rigorous electromagnetic field (EMF) solver to investigate three-dimensional mask-induced imaging artifacts in EUV lithography. The major mask-diffraction-induced imaging artifacts are first identified by applying Zernike analysis to the mask nearfield spectrum of 2D lines/spaces. Three-dimensional mask features such as 22nm semidense/dense contacts/posts, isolated elbows and line-ends are then investigated in terms of lithographic results. After that, the 3D mask-induced imaging artifacts, such as feature-orientation-dependent best focus shift, process window asymmetries, and other aberration-like phenomena, are explored for the studied mask features. The simulation results can help lithographers to understand the reasons for EUV-specific imaging artifacts and to devise illumination- and feature-dependent strategies for their compensation in the optical proximity correction (OPC) for EUV masks. Finally, an efficient approach using Zernike analysis together with the Waveguide decomposition technique is proposed to characterize the impact of mask properties for the future OPC process.

  19. Time and Learning Efficiency in Internet-Based Learning: A Systematic Review and Meta-Analysis

    ERIC Educational Resources Information Center

    Cook, David A.; Levinson, Anthony J.; Garside, Sarah

    2010-01-01

    Authors have claimed that Internet-based instruction promotes greater learning efficiency than non-computer methods. Objectives Determine, through a systematic synthesis of evidence in health professions education, how Internet-based instruction compares with non-computer instruction in time spent learning, and what features of Internet-based…

  20. Prefield methods: streamlining forest or nonforest determinations to increase inventory efficiency

    Treesearch

    Sara Goeking; Gretchen Moisen; Kevin Megown; Jason Toombs

    2009-01-01

    Interior West Forest Inventory and Analysis has developed prefield protocols to distinguish forested plots that require field visits from nonforested plots that do not require field visits. Recent innovations have increased the efficiency of the prefield process. First, the incorporation of periodic inventory data into a prefield database increased the amount of...

  1. On the Use of "Green" Metrics in the Undergraduate Organic Chemistry Lecture and Lab to Assess the Mass Efficiency of Organic Reactions

    ERIC Educational Resources Information Center

    Andraos, John; Sayed, Murtuzaali

    2007-01-01

    A general analysis of reaction mass efficiency and raw material cost is developed using an Excel spreadsheet format which can be applied to any chemical transformation. These new methods can be easily incorporated into standard laboratory exercises.
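
    As a minimal illustration of the mass-efficiency metric itself (the masses below are made up, not taken from the article's exercises):

```python
def reaction_mass_efficiency(product_mass_g, reactant_masses_g):
    """RME (%) = mass of isolated product / total mass of all reactants x 100."""
    return 100.0 * product_mass_g / sum(reactant_masses_g)

# Hypothetical run: 5.2 g of product isolated from 5.0 g and 3.1 g of reactants
rme = reaction_mass_efficiency(5.2, [5.0, 3.1])
print(round(rme, 1))  # 64.2
```

    The same one-line formula drops straight into a spreadsheet cell, which is the format the article adopts.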

  2. A Sector Capacity Assessment Method Based on Airspace Utilization Efficiency

    NASA Astrophysics Data System (ADS)

    Zhang, Jianping; Zhang, Ping; Li, Zhen; Zou, Xiang

    2018-02-01

    Sector capacity is one of the core factors affecting the safety and efficiency of the air traffic system. Most previous sector capacity assessment methods considered only the air traffic controller’s (ATCO’s) workload; such methods are limited in that they address only safety, and they are also not accurate enough. In this paper, we employ the integrated quantitative index system proposed in one of our previous publications. We use principal component analysis (PCA) to find the principal indicators among the indicators so as to calculate the airspace utilization efficiency. In addition, we use a series of fitting functions to test and define the correlation between the density of air traffic flow and the airspace utilization efficiency. The sector capacity is then determined as the traffic flow density corresponding to the maximum airspace utilization efficiency. We also use the same series of fitting functions to test the correlation between the density of air traffic flow and the ATCOs’ workload. We examine our method on a large amount of empirical operating data from the Chengdu Control Center and obtain a reliable sector capacity value. Experimental results also show the superiority of our method over those that consider only the ATCO’s workload, in terms of a better correlation between the airspace utilization efficiency and the density of air traffic flow.
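
    The pipeline (a PCA composite score followed by a fitted efficiency-versus-density curve whose maximum defines capacity) can be sketched on synthetic data; the indicator model, noise level, and quadratic fit are illustrative assumptions, not the paper's index system:

```python
import numpy as np

# Hypothetical indicator matrix: rows = time slices, columns = utilization indicators
rng = np.random.default_rng(1)
density = np.linspace(5, 60, 120)                      # aircraft per sector-hour
true_eff = 1.0 - (density - 40.0) ** 2 / 1600.0        # assumed peak at density = 40
indicators = np.column_stack([true_eff + 0.02 * rng.standard_normal(120)
                              for _ in range(4)])

# PCA via SVD of the centered indicators; PC1 is the composite efficiency score
Z = indicators - indicators.mean(axis=0)
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
score = Z @ Vt[0]
if np.corrcoef(score, indicators[:, 0])[0, 1] < 0:     # fix the arbitrary PC sign
    score = -score

# Quadratic fit of composite efficiency vs density; capacity = density at the maximum
a, b, c = np.polyfit(density, score, 2)
capacity = -b / (2 * a)
```

    With real data, the paper tests several candidate fitting functions rather than assuming a quadratic, and cross-checks the result against the ATCO workload correlation.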

  3. Rapid protein concentration, efficient fluorescence labeling and purification on a micro/nanofluidics chip.

    PubMed

    Wang, Chen; Ouyang, Jun; Ye, De-Kai; Xu, Jing-Juan; Chen, Hong-Yuan; Xia, Xing-Hua

    2012-08-07

    Fluorescence analysis has proved to be a powerful detection technique for achieving single molecule analysis. However, it usually requires the labeling of targets with bright fluorescent tags, since most chemicals and biomolecules lack fluorescence. Conventional fluorescence labeling methods require a considerable quantity of biomolecule samples, long reaction times and extensive chromatographic purification procedures. Herein, a micro/nanofluidics device integrating a nanochannel in a microfluidics chip has been designed and fabricated, which achieves rapid protein concentration, fluorescence labeling, and efficient purification of the product in a miniaturized and continuous manner. As a demonstration, labeling of the proteins bovine serum albumin (BSA) and IgG with fluorescein isothiocyanate (FITC) is presented. Compared to conventional methods, the present micro/nanofluidics device performs BSA labeling about 10^4-10^6 times faster, with 1.6 times higher yields, due to the efficient nanoconfinement effect and improved mass and heat transfer in the chip device. The results demonstrate that the present micro/nanofluidics device promises rapid and facile fluorescence labeling of small amounts of reagents such as proteins, nucleic acids and other biomolecules with high efficiency.

  4. Semi-automatic mapping of geological Structures using UAV-based photogrammetric data: An image analysis approach

    NASA Astrophysics Data System (ADS)

    Vasuki, Yathunanthan; Holden, Eun-Jung; Kovesi, Peter; Micklethwaite, Steven

    2014-08-01

    Recent advances in data acquisition technologies, such as Unmanned Aerial Vehicles (UAVs), have led to a growing interest in capturing high-resolution rock surface images. However, due to the large volumes of data that can be captured in a short flight, efficient analysis of this data brings new challenges, especially the time it takes to digitise maps and extract orientation data. We outline a semi-automated method that allows efficient mapping of geological faults using photogrammetric data of rock surfaces, which was generated from aerial photographs collected by a UAV. Our method harnesses advanced automated image analysis techniques and human data interaction to rapidly map structures and then calculate their dip and dip directions. Geological structures (faults, joints and fractures) are first detected from the primary photographic dataset and the equivalent three dimensional (3D) structures are then identified within a 3D surface model generated by structure from motion (SfM). From this information the location, dip and dip direction of the geological structures are calculated. A structure map generated by our semi-automated method obtained a recall rate of 79.8% when compared against a fault map produced using expert manual digitising and interpretation methods. The semi-automated structure map was produced in 10 min whereas the manual method took approximately 7 h. In addition, the dip and dip direction calculations using our automated method show a mean ± standard error of 1.9° ± 2.2° and 4.4° ± 2.6°, respectively, relative to field measurements. This shows the potential of using our semi-automated method for accurate and efficient mapping of geological structures, particularly from remote, inaccessible or hazardous sites.

  5. Spectrally formulated user-defined element in conventional finite element environment for wave motion analysis in 2-D composite structures

    NASA Astrophysics Data System (ADS)

    Khalili, Ashkan; Jha, Ratneshwar; Samaratunga, Dulip

    2016-11-01

    Wave propagation analysis in 2-D composite structures is performed efficiently and accurately through the formulation of a User-Defined Element (UEL) based on the wavelet spectral finite element (WSFE) method. The WSFE method is based on the first-order shear deformation theory which yields accurate results for wave motion at high frequencies. The 2-D WSFE model is highly efficient computationally and provides a direct relationship between system input and output in the frequency domain. The UEL is formulated and implemented in Abaqus (commercial finite element software) for wave propagation analysis in 2-D composite structures with complexities. Frequency domain formulation of WSFE leads to complex valued parameters, which are decoupled into real and imaginary parts and presented to Abaqus as real values. The final solution is obtained by forming a complex value using the real number solutions given by Abaqus. Five numerical examples are presented in this article, namely undamaged plate, impacted plate, plate with ply drop, folded plate and plate with stiffener. Wave motions predicted by the developed UEL correlate very well with Abaqus simulations. The results also show that the UEL largely retains computational efficiency of the WSFE method and extends its ability to model complex features.
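
    The real/imaginary decoupling described above can be illustrated on a generic complex linear system (toy matrices, not WSFE stiffness matrices): a complex n×n system is equivalent to a real 2n×2n block system, which is how complex-valued parameters can be handed to a real-valued solver.

```python
import numpy as np

def solve_complex_as_real(A, b):
    """Solve A x = b with complex A, b using only real arithmetic, via the
    block form [[Re A, -Im A], [Im A, Re A]] [Re x; Im x] = [Re b; Im b]."""
    Ar, Ai = A.real, A.imag
    big = np.block([[Ar, -Ai], [Ai, Ar]])
    rhs = np.concatenate([b.real, b.imag])
    sol = np.linalg.solve(big, rhs)
    n = len(b)
    return sol[:n] + 1j * sol[n:]            # recombine into a complex solution

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
b = rng.standard_normal(4) + 1j * rng.standard_normal(4)
x = solve_complex_as_real(A, b)
```

    The recombination in the last step mirrors how the UEL forms the final complex solution from the real-valued results returned by Abaqus.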

  6. Isotopic composition analysis and age dating of uranium samples by high resolution gamma ray spectrometry

    NASA Astrophysics Data System (ADS)

    Apostol, A. I.; Pantelica, A.; Sima, O.; Fugaru, V.

    2016-09-01

    Non-destructive methods were applied to determine the isotopic composition and the time elapsed since the last chemical purification of nine uranium samples. The applied methods are based on measuring gamma and X radiation of the uranium samples with a high-resolution, low-energy gamma spectrometric system with a planar high-purity germanium detector and a low-background gamma spectrometric system with a coaxial high-purity germanium detector. The "Multigroup γ-ray Analysis Method for Uranium" (MGAU) code was used for the precise determination of the samples' isotopic composition. The age of the samples was determined from the isotopic ratio 214Bi/234U. This ratio was calculated from the analyzed spectra of each uranium sample using relative detection efficiency. Special attention is paid to the coincidence summing corrections that have to be taken into account when performing this type of analysis. In addition, an alternative approach for the age determination using full energy peak efficiencies obtained by Monte Carlo simulations with the GESPECOR code is described.
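
    A back-of-the-envelope version of the age relation can be sketched as below. This assumes the standard small-t ingrowth approximation (t much shorter than the 230Th and 226Ra half-lives, with 214Bi in equilibrium with 226Ra); production codes use the full Bateman equations and the efficiency and summing corrections discussed above.

```python
import math

# Ingrowth approximation: A(214Bi)/A(234U) ~ (1/2) * lam230 * lam226 * t^2
LAM_230TH = math.log(2) / 75380.0   # 230Th decay constant, 1/years
LAM_226RA = math.log(2) / 1600.0    # 226Ra decay constant, 1/years

def ratio_from_age(t_years):
    """Predicted 214Bi/234U activity ratio t years after purification."""
    return 0.5 * LAM_230TH * LAM_226RA * t_years ** 2

def age_from_ratio(activity_ratio):
    """Years since last chemical purification, inverting the relation above."""
    return math.sqrt(2.0 * activity_ratio / (LAM_230TH * LAM_226RA))

t = age_from_ratio(ratio_from_age(30.0))
print(round(t, 6))  # 30.0
```

    The quadratic growth of the ratio with time is why the measured 214Bi/234U ratio is so small, and why low-background spectrometry is needed for decades-old material.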

  7. Church ownership and hospital efficiency.

    PubMed

    White, K R; Ozcan, Y A

    1996-01-01

    Using a sample of California hospitals, the effect of church ownership was examined as it relates to nonprofit hospital efficiency. Efficiency scores were computed using a nonparametric method called data envelopment analysis (DEA). Controlling for hospital size, location, system membership, and type of church ownership, church-owned hospitals were found to be more frequently in the efficient category than their secular nonprofit counterparts. The outcomes have policy implications for reducing healthcare expenditures by focusing on increasing outputs or decreasing inputs, as appropriate, and bolstering the case for church-sponsored hospitals to retain the tax-exempt status due to their ability to manage their resources as efficiently as (or more efficiently than) secular hospitals.

  8. Gait Analysis Using Wearable Sensors

    PubMed Central

    Tao, Weijun; Liu, Tao; Zheng, Rencheng; Feng, Hutian

    2012-01-01

    Gait analysis using wearable sensors is an inexpensive, convenient, and efficient manner of providing useful information for multiple health-related applications. As a clinical tool applied in the rehabilitation and diagnosis of medical conditions and sport activities, gait analysis using wearable sensors shows great prospects. The current paper reviews available wearable sensors and ambulatory gait analysis methods based on the various wearable sensors. After an introduction of the gait phases, the principles and features of wearable sensors used in gait analysis are provided. The gait analysis methods based on wearable sensors are divided into gait kinematics, gait kinetics, and electromyography. Studies on the current methods are reviewed, and applications in sports, rehabilitation, and clinical diagnosis are summarized separately. With the development of sensor technology and analysis methods, gait analysis using wearable sensors is expected to play an increasingly important role in clinical applications. PMID:22438763

  9. Dominant Epistasis Between Two Quantitative Trait Loci Governing Sporulation Efficiency in Yeast Saccharomyces cerevisiae

    PubMed Central

    Bergman, Juraj; Mitrikeski, Petar T.

    2015-01-01

    Sporulation efficiency in the yeast Saccharomyces cerevisiae is a well-established model for studying quantitative traits. A variety of genes and nucleotides causing different sporulation efficiencies in laboratory as well as in wild strains have already been extensively characterised (mainly by reciprocal hemizygosity analysis and nucleotide exchange methods). We applied a different strategy in order to analyze the variation in sporulation efficiency of laboratory yeast strains. Coupling classical quantitative genetic analysis with simulations of phenotypic distributions (a method we call phenotype modelling) enabled us to obtain a detailed picture of the quantitative trait loci (QTL) relationships underlying the phenotypic variation of this trait. Using this approach, we were able to uncover a dominant epistatic inheritance of loci governing the phenotype. Moreover, a molecular analysis of known causative quantitative trait genes and nucleotides allowed for the detection of novel alleles, potentially responsible for the observed phenotypic variation. Based on the molecular data, we hypothesise that the observed dominant epistatic relationship could be caused by the interaction of multiple quantitative trait nucleotides distributed across a 60-kb QTL region located on chromosome XIV and the RME1 locus on chromosome VII. Furthermore, we propose a model of molecular pathways which possibly underlie the phenotypic variation of this trait. PMID:27904371

  10. Frequency analysis of uncertain structures using imprecise probability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Modares, Mehdi; Bergerson, Joshua

    2015-01-01

    Two new methods for finite element based frequency analysis of a structure with uncertainty are developed. An imprecise probability formulation based on enveloping p-boxes is used to quantify the uncertainty present in the mechanical characteristics of the structure. For each element, independent variations are considered. Using the two developed methods, P-box Frequency Analysis (PFA) and Interval Monte-Carlo Frequency Analysis (IMFA), sharp bounds on natural circular frequencies at different probability levels are obtained. These methods establish a framework for handling incomplete information in structural dynamics. Numerical example problems are presented that illustrate the capabilities of the new methods along with discussions on their computational efficiency.

  11. Efficient determination of average valence of manganese in manganese oxides by reaction headspace gas chromatography.

    PubMed

    Xie, Wei-Qi; Gong, Yi-Xian; Yu, Kong-Xian

    2017-08-18

    This work investigates a new reaction headspace gas chromatographic (HS-GC) technique for efficiently quantifying the average valence of manganese (Mn) in manganese oxides. The method is based on the oxidation reaction between manganese oxides and sodium oxalate under acidic conditions. The carbon dioxide (CO2) formed by the oxidation reaction can be quantitatively analyzed by headspace gas chromatography. The data showed that the reaction in the closed headspace vial can be completed in 20 min at 80°C. The relative standard deviation of this reaction HS-GC method in precision testing was within 1.08%, and the relative differences between the new method and the reference method (titration) were no more than 5.71%. The new HS-GC method is automated, efficient, and can be a reliable tool for the quantitative analysis of the average valence of manganese in manganese oxide related research and applications. Copyright © 2017 Elsevier B.V. All rights reserved.
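
    The back-calculation from evolved CO2 to average valence can be sketched as follows, assuming the reduction Mn(n+) → Mn(2+) by oxalate with one electron transferred per CO2 evolved (C2O4^2- → 2 CO2 + 2 e-); the mole numbers are invented for illustration:

```python
def mn_average_valence(mol_co2, mol_mn):
    """Average Mn oxidation state from the CO2 evolved in the oxalate reaction.
    Assumes Mn(n+) -> Mn(2+), with each mole of CO2 corresponding to one mole
    of electrons transferred (C2O4^2- -> 2 CO2 + 2 e-)."""
    return 2.0 + mol_co2 / mol_mn

# 2 mmol CO2 from 1 mmol Mn implies 2 electrons per Mn, i.e. Mn(IV) as in MnO2
print(mn_average_valence(2.0e-3, 1.0e-3))  # 4.0
```

    In the paper the CO2 amount comes from the calibrated headspace GC signal rather than being assumed known.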

  12. Efficiency of Health Care Production in Low-Resource Settings: A Monte-Carlo Simulation to Compare the Performance of Data Envelopment Analysis, Stochastic Distance Functions, and an Ensemble Model

    PubMed Central

    Giorgio, Laura Di; Flaxman, Abraham D.; Moses, Mark W.; Fullman, Nancy; Hanlon, Michael; Conner, Ruben O.; Wollum, Alexandra; Murray, Christopher J. L.

    2016-01-01

    Low-resource countries can greatly benefit from even small increases in efficiency of health service provision, supporting a strong case to measure and pursue efficiency improvement in low- and middle-income countries (LMICs). However, the knowledge base concerning efficiency measurement remains scarce for these contexts. This study shows that current estimation approaches may not be well suited to measure technical efficiency in LMICs and offers an alternative approach for efficiency measurement in these settings. We developed a simulation environment which reproduces the characteristics of health service production in LMICs, and evaluated the performance of Data Envelopment Analysis (DEA) and Stochastic Distance Function (SDF) for assessing efficiency. We found that an ensemble approach (ENS) combining efficiency estimates from a restricted version of DEA (rDEA) and restricted SDF (rSDF) is the preferable method across a range of scenarios. This is the first study to analyze efficiency measurement in a simulation setting for LMICs. Our findings aim to heighten the validity and reliability of efficiency analyses in LMICs, and thus inform policy dialogues about improving the efficiency of health service production in these settings. PMID:26812685

  13. Green analytical method development for statin analysis.

    PubMed

    Assassi, Amira Louiza; Roy, Claude-Eric; Perovitch, Philippe; Auzerie, Jack; Hamon, Tiphaine; Gaudin, Karen

    2015-02-06

    A green analytical chemistry method was developed for pravastatin, fluvastatin and atorvastatin analysis. An HPLC/DAD method using an ethanol-based mobile phase was studied on octadecyl-grafted silica columns with various grafting and column parameters, such as particle size, core-shell, and monolithic formats. Retention, efficiency and detector linearity were optimized. Even for columns with particle sizes under 2 μm, the benefit of maintaining efficiency over a large range of flow rates was not obtained with the ethanol-based mobile phase, in contrast to an acetonitrile-based one. Therefore, the strategy of shortening analysis by increasing the flow rate induced a decrease in efficiency with the ethanol-based mobile phase. An ODS-AQ YMC column, 50 mm × 4.6 mm, 3 μm, was selected, which showed the best compromise between analysis time, statin separation, and efficiency. HPLC conditions were 1 mL/min, ethanol/formic acid (pH 2.5, 25 mM) (50:50, v/v), thermostated at 40°C. To reduce solvent consumption for sample preparation, a concentration of 0.5 mg/mL of each statin was found to be the highest that respected detector linearity. These conditions were validated for each statin for content determination in highly concentrated hydro-alcoholic solutions. Solubility higher than 100 mg/mL was found for pravastatin and fluvastatin, whereas for atorvastatin calcium salt the maximum concentration was 2 mg/mL for hydro-alcoholic binary mixtures between 35% and 55% of ethanol in water. Using atorvastatin instead of its calcium salt, solubility was improved. Highly concentrated solutions of statins offer a potential fluid for per Buccal Per-Mucous(®) administration, with the advantages of rapid and easy passage of drugs. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Efficient option valuation of single and double barrier options

    NASA Astrophysics Data System (ADS)

    Kabaivanov, Stanimir; Milev, Mariyan; Koleva-Petkova, Dessislava; Vladev, Veselin

    2017-12-01

In this paper we present an implementation of a pricing algorithm for single and double barrier options using the Mellin transform with Maximum Entropy Inversion, and assess its suitability for real-world applications. A detailed analysis of the algorithm is accompanied by an implementation in C++, which is compared to existing solutions in terms of efficiency and computational cost. We then compare the applied method with existing closed-form solutions and with well-known methods of pricing barrier options based on finite differences.
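The Mellin-transform pricer itself is not given in the abstract; as a hedged illustration of the kind of benchmark such methods are tested against, the sketch below prices a down-and-out call by plain Monte Carlo under Black-Scholes dynamics (all parameter values are hypothetical):

```python
import numpy as np

def mc_barrier_call(s0, k, barrier, r, sigma, t, n_steps=200, n_paths=20000, seed=0):
    """Price a down-and-out European call by Monte Carlo (illustrative only).

    A path is knocked out if it touches the barrier at any monitored step, so
    discrete monitoring makes this an approximation of the continuous case.
    """
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    # Simulate geometric Brownian motion under the risk-neutral measure.
    z = rng.standard_normal((n_paths, n_steps))
    log_paths = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
    paths = s0 * np.exp(log_paths)
    alive = paths.min(axis=1) > barrier          # down-and-out: dead if barrier touched
    payoff = np.where(alive, np.maximum(paths[:, -1] - k, 0.0), 0.0)
    return np.exp(-r * t) * payoff.mean()

barrier_price = mc_barrier_call(100, 100, 80, 0.05, 0.2, 1.0)
vanilla_price = mc_barrier_call(100, 100, 0.0, 0.05, 0.2, 1.0)  # barrier 0 never triggers
```

A knock-out option can never be worth more than the corresponding vanilla call, which gives a quick sanity check on any barrier pricer.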

  15. Automatic computation and solution of generalized harmonic balance equations

    NASA Astrophysics Data System (ADS)

    Peyton Jones, J. C.; Yaser, K. S. A.; Stevenson, J.

    2018-02-01

    Generalized methods are presented for generating and solving the harmonic balance equations for a broad class of nonlinear differential or difference equations and for a general set of harmonics chosen by the user. In particular, a new algorithm for automatically generating the Jacobian of the balance equations enables efficient solution of these equations using continuation methods. Efficient numeric validation techniques are also presented, and the combined algorithm is applied to the analysis of dc, fundamental, second and third harmonic response of a nonlinear automotive damper.

  16. Freeform lens design for LED collimating illumination.

    PubMed

    Chen, Jin-Jia; Wang, Te-Yuan; Huang, Kuang-Lung; Liu, Te-Shu; Tsai, Ming-Da; Lin, Chin-Tang

    2012-05-07

We present a simple freeform lens design method for LED collimating illumination. The method is derived from a basic geometric-optics analysis and construction approach. Using this method, a highly collimating lens for an LED chip size of 1.0 mm × 1.0 mm, with a simulated optical efficiency of 86.5% within a view angle of ±5 deg, is constructed. To verify the practical performance of the lens, a prototype of the collimator lens was also made, and an optical efficiency of 90.3% with a beam angle of 4.75 deg was measured.

  17. Conditional analysis of mixed Poisson processes with baseline counts: implications for trial design and analysis.

    PubMed

    Cook, Richard J; Wei, Wei

    2003-07-01

The design of clinical trials is typically based on marginal comparisons of a primary response under two or more treatments. The considerable gains in efficiency afforded by models conditional on one or more baseline responses have been extensively studied for Gaussian models. The purpose of this article is to present methods for the design and analysis of clinical trials in which the response is a count or a point process, and a corresponding baseline count is available prior to randomization. The methods are based on a conditional negative binomial model for the response given the baseline count and can be used to examine the effect of introducing selection criteria on power and sample size requirements. We show that designs based on this approach are more efficient than those proposed by McMahon et al. (1994).

  18. Computational efficient segmentation of cell nuclei in 2D and 3D fluorescent micrographs

    NASA Astrophysics Data System (ADS)

    De Vylder, Jonas; Philips, Wilfried

    2011-02-01

This paper proposes a new segmentation technique developed for the segmentation of cell nuclei in both 2D and 3D fluorescent micrographs. The proposed method can deal with both blurred edges and touching nuclei. Using a dual scan line algorithm, it is both memory- and computationally efficient, making it interesting for the analysis of images coming from high-throughput systems or of 3D microscopic images. Experiments show good results, i.e. a recall of over 0.98.
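The dual scan-line algorithm itself is not spelled out in the abstract; as a generic stand-in, the classic two-pass connected-component labeling that underlies most nucleus-counting pipelines can be sketched with scipy (the toy image below is hypothetical):

```python
import numpy as np
from scipy import ndimage

# Toy binary "micrograph" with three well-separated nuclei.
img = np.zeros((20, 20), dtype=bool)
img[2:6, 2:6] = True        # nucleus 1
img[10:14, 3:7] = True      # nucleus 2
img[5:9, 12:17] = True      # nucleus 3

# Two-pass labeling assigns a distinct integer to each connected component.
labels, n_nuclei = ndimage.label(img)
sizes = ndimage.sum(img, labels, index=range(1, n_nuclei + 1))  # pixels per nucleus
```

Real micrographs of course need thresholding and splitting of touching nuclei first, which is exactly where methods like the one above go beyond plain labeling.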

  19. Analysis of financing efficiency of big data industry in Guizhou province based on DEA models

    NASA Astrophysics Data System (ADS)

    Li, Chenggang; Pan, Kang; Luo, Cong

    2018-03-01

Taking 20 listed enterprises of the big data industry in Guizhou province as samples, this paper uses the DEA method to evaluate the financing efficiency of the big data industry in Guizhou province. The results show that the pure technical efficiency of big data enterprises in Guizhou province is high, with a mean value of 0.925. The mean value of scale efficiency is 0.749 and the mean value of comprehensive efficiency is 0.693; the comprehensive financing efficiency is therefore low. Based on these results, this paper puts forward policy recommendations to improve the financing efficiency of the big data industry in Guizhou.
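The CCR envelopment model underlying such DEA studies is one small linear program per decision-making unit. The sketch below (with made-up input/output data, not the Guizhou sample) computes input-oriented CCR efficiency scores with scipy:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of unit j0 (illustrative sketch).

    X: (m, n) inputs and Y: (s, n) outputs for n decision-making units.
    Solves  min theta  s.t.  X @ lam <= theta * X[:, j0],  Y @ lam >= Y[:, j0],
    with decision vector [theta, lam_1, ..., lam_n] and lam >= 0.
    """
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                     # minimize theta
    A_in = np.hstack([-X[:, [j0]], X])              # X lam - theta x0 <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])       # -Y lam <= -y0
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, j0]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.x[0]

# Two inputs, one output, four hypothetical units.
X = np.array([[2.0, 4.0, 4.0, 8.0],
              [4.0, 2.0, 6.0, 8.0]])
Y = np.array([[1.0, 1.0, 1.0, 1.0]])
scores = [dea_ccr_efficiency(X, Y, j) for j in range(4)]
```

Units on the efficient frontier score exactly 1; an inefficient unit's score is the factor by which it could radially shrink all inputs while still producing its outputs.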

  20. RICH detectors: Analysis methods and their impact on physics

    NASA Astrophysics Data System (ADS)

    Križan, Peter

    2017-12-01

    The paper discusses the importance of particle identification in particle physics experiments, and reviews the impact of ring imaging Cherenkov (RICH) counters in experiments that are currently running, or are under construction. Several analysis methods are discussed that are needed to calibrate a RICH counter, and to align its components with the rest of the detector. Finally, methods are reviewed on how to employ the collected data to efficiently separate one particle species from the other.

  1. Analysis and optimization of gyrokinetic toroidal simulations on homogenous and heterogenous platforms

    DOE PAGES

    Ibrahim, Khaled Z.; Madduri, Kamesh; Williams, Samuel; ...

    2013-07-18

    The Gyrokinetic Toroidal Code (GTC) uses the particle-in-cell method to efficiently simulate plasma microturbulence. This paper presents novel analysis and optimization techniques to enhance the performance of GTC on large-scale machines. We introduce cell access analysis to better manage locality vs. synchronization tradeoffs on CPU and GPU-based architectures. Finally, our optimized hybrid parallel implementation of GTC uses MPI, OpenMP, and NVIDIA CUDA, achieves up to a 2× speedup over the reference Fortran version on multiple parallel systems, and scales efficiently to tens of thousands of cores.

  2. Expediting Combinatorial Data Set Analysis by Combining Human and Algorithmic Analysis.

    PubMed

    Stein, Helge Sören; Jiao, Sally; Ludwig, Alfred

    2017-01-09

A challenge in combinatorial materials science remains the efficient analysis of X-ray diffraction (XRD) data and its correlation to functional properties. Rapid identification of phase regions and proper assignment of corresponding crystal structures are necessary to keep pace with the improved methods for synthesizing and characterizing materials libraries. Therefore, a new modular software package called htAx (high-throughput analysis of X-ray and functional properties data) is presented that couples human intelligence tasks used for "ground-truth" phase-region identification with subsequent unbiased verification by an algorithm to efficiently analyze which phases are present in a materials library. Identified phases and phase regions may then be correlated to functional properties in an expedited manner. To demonstrate the functionality of htAx, two previously published XRD benchmark data sets of the materials systems Al-Cr-Fe-O and Ni-Ti-Cu are analyzed with htAx. The analysis of ∼1000 XRD patterns takes less than 1 day with htAx. The proposed method reliably identifies phase-region boundaries and robustly identifies multiphase structures. The method also addresses the problem of identifying regions with previously unpublished crystal structures using a special daisy ternary plot.

  3. Industrial applications using BASF eco-efficiency analysis: perspectives on green engineering principles.

    PubMed

    Shonnard, David R; Kicherer, Andreas; Saling, Peter

    2003-12-01

Life without chemicals would be inconceivable, but the potential risks and impacts to the environment associated with chemical production and chemical products are viewed critically. Eco-efficiency analysis considers the economic and life-cycle environmental effects of a product or process, giving these equal weighting. The major elements of the environmental assessment include primary energy use, raw materials utilization, emissions to all media, toxicity, safety risk, and land use. The relevance of each environmental category, and the weighting of economic versus environmental impacts, are evaluated using national emissions and economic data. The eco-efficiency analysis method of BASF is briefly presented, and results from three applications to chemical processes and products are summarized. Through these applications, the eco-efficiency analyses mostly confirm the 12 Principles listed in Anastas and Zimmerman (Environ. Sci. Technol. 2003, 37(5), 94A), with the exception that, in one application, production systems based on bio-based feedstocks were not the most eco-efficient compared to those based on fossil resources. Over 180 eco-efficiency analyses have been conducted at BASF, and their results have been used to support strategic decision-making, marketing, research and development, and communication with external parties. Eco-efficiency analysis, as one important strategy and success factor in sustainable development, will continue to be a very strong operational tool at BASF.

  4. Efficient numerical method of freeform lens design for arbitrary irradiance shaping

    NASA Astrophysics Data System (ADS)

    Wojtanowski, Jacek

    2018-05-01

A computational method is presented to design a lens with a flat entrance surface and a freeform exit surface that can transform a collimated, generally non-uniform input beam into a beam with a desired irradiance distribution of arbitrary shape. The methodology is based on non-linear elliptic partial differential equations known as Monge-Ampère PDEs. This paper describes an original numerical algorithm that solves this problem by applying the Gauss-Seidel method with simplified boundary conditions. A joint MATLAB-ZEMAX environment is used to implement and verify the method. To demonstrate the efficiency of the proposed approach, an exemplary study is shown in which the designed lens is faced with a challenging illumination task. An analysis of solution stability, iteration-to-iteration ray mapping evolution (attached in video format), depth of focus and non-zero étendue efficiency is performed.
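The Monge-Ampère solver is not given in the abstract, but the Gauss-Seidel building block it relies on is standard. Here it is sketched on the linear Poisson problem instead (a deliberate simplification: the actual lens equation is nonlinear and its boundary conditions are problem-specific):

```python
import numpy as np

def gauss_seidel_poisson(f, n_iter=2000):
    """Gauss-Seidel sweeps for the 2-D Poisson equation -lap(u) = f on the unit
    square with u = 0 on the boundary, using the 5-point stencil. In-place
    sweeps reuse freshly updated neighbours, which is what defines Gauss-Seidel."""
    n = f.shape[0]
    h = 1.0 / (n + 1)
    u = np.zeros((n + 2, n + 2))              # interior plus boundary ring of zeros
    for _ in range(n_iter):
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                u[i, j] = 0.25 * (u[i - 1, j] + u[i + 1, j]
                                  + u[i, j - 1] + u[i, j + 1]
                                  + h * h * f[i - 1, j - 1])
    return u[1:-1, 1:-1]

# Manufactured solution u = sin(pi x) sin(pi y), for which -lap(u) = 2 pi^2 u.
n = 16
xs = np.linspace(1 / (n + 1), n / (n + 1), n)
gx, gy = np.meshgrid(xs, xs, indexing="ij")
f = 2 * np.pi**2 * np.sin(np.pi * gx) * np.sin(np.pi * gy)
u = gauss_seidel_poisson(f)
```

For the nonlinear Monge-Ampère case, each sweep updates the surface height from a local nonlinear update rather than the linear average above, but the sweep structure is the same.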

  5. Comparison of fusion methods from the abstract level and the rank level in a dispersed decision-making system

    NASA Astrophysics Data System (ADS)

    Przybyła-Kasperek, M.; Wakulicz-Deja, A.

    2017-05-01

Issues related to decision making based on dispersed knowledge are discussed in the paper. A dispersed decision-making system, which was proposed by the authors in previous articles, is used in this paper. In the system, a process of combining classifiers into coalitions with a negotiation stage is realized. The novelty proposed in this article is the use of six different methods of conflict analysis that are known from the literature. The main purpose of the tests was to compare the methods from the two groups: the abstract level and the rank level. An additional aim was to compare the efficiency of the fusion methods used in a dispersed system with a dynamic structure against the efficiency obtained when no structure is used. We conclude that, in most cases, the use of a dispersed system improves the efficiency of inference.

  6. Interquantile Shrinkage in Regression Models

    PubMed Central

    Jiang, Liewen; Wang, Huixia Judy; Bondell, Howard D.

    2012-01-01

    Conventional analysis using quantile regression typically focuses on fitting the regression model at different quantiles separately. However, in situations where the quantile coefficients share some common feature, joint modeling of multiple quantiles to accommodate the commonality often leads to more efficient estimation. One example of common features is that a predictor may have a constant effect over one region of quantile levels but varying effects in other regions. To automatically perform estimation and detection of the interquantile commonality, we develop two penalization methods. When the quantile slope coefficients indeed do not change across quantile levels, the proposed methods will shrink the slopes towards constant and thus improve the estimation efficiency. We establish the oracle properties of the two proposed penalization methods. Through numerical investigations, we demonstrate that the proposed methods lead to estimations with competitive or higher efficiency than the standard quantile regression estimation in finite samples. Supplemental materials for the article are available online. PMID:24363546
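The baseline these penalization methods improve on is separate estimation at each quantile level, i.e. minimizing the check (pinball) loss per level. A hedged sketch on synthetic data (homoskedastic errors, so the true slope is constant across quantiles — exactly the "common feature" case the abstract describes):

```python
import numpy as np
from scipy.optimize import minimize

def pinball_loss(beta, X, y, tau):
    """Check (pinball) loss for quantile level tau."""
    r = y - X @ beta
    return np.mean(np.where(r >= 0, tau * r, (tau - 1) * r))

rng = np.random.default_rng(1)
n = 400
x = rng.uniform(0, 2, n)
y = 1.0 + 2.0 * x + rng.normal(0, 0.5, n)   # same slope at every quantile
X = np.column_stack([np.ones(n), x])

# Fit each quantile separately, warm-starting from the least-squares fit.
ols_start = np.linalg.lstsq(X, y, rcond=None)[0]
fits = {tau: minimize(pinball_loss, x0=ols_start, args=(X, y, tau),
                      method="Nelder-Mead").x
        for tau in (0.25, 0.5, 0.75)}
```

Because the three slope estimates here all target the same true value, shrinking them toward a common constant, as the proposed penalties do, reduces their variance; only the intercepts genuinely differ across quantile levels.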

  7. Efficiency of different methods of extra-cavity second harmonic generation of continuous wave single-frequency radiation.

    PubMed

    Khripunov, Sergey; Kobtsev, Sergey; Radnatarov, Daba

    2016-01-20

    This work presents for the first time to the best of our knowledge a comparative efficiency analysis among various techniques of extra-cavity second harmonic generation (SHG) of continuous-wave single-frequency radiation in nonperiodically poled nonlinear crystals within a broad range of power levels. Efficiency of nonlinear radiation transformation at powers from 1 W to 10 kW was studied in three different configurations: with an external power-enhancement cavity and without the cavity in the case of single and double radiation pass through a nonlinear crystal. It is demonstrated that at power levels exceeding 1 kW, the efficiencies of methods with and without external power-enhancement cavities become comparable, whereas at even higher powers, SHG by a single or double pass through a nonlinear crystal becomes preferable because of the relatively high efficiency of nonlinear transformation and fairly simple implementation.

  8. The role of finite-difference methods in design and analysis for supersonic cruise

    NASA Technical Reports Server (NTRS)

    Townsend, J. C.

    1976-01-01

    Finite-difference methods for analysis of steady, inviscid supersonic flows are described, and their present state of development is assessed with particular attention to their applicability to vehicles designed for efficient cruise flight. Current work is described which will allow greater geometric latitude, improve treatment of embedded shock waves, and relax the requirement that the axial velocity must be supersonic.

  9. Sequential-Injection Analysis: Principles, Instrument Construction, and Demonstration by a Simple Experiment

    ERIC Educational Resources Information Center

    Economou, A.; Tzanavaras, P. D.; Themelis, D. G.

    2005-01-01

Sequential-injection analysis (SIA) is an approach to sample handling that enables the automation of manual wet-chemistry procedures in a rapid, precise and efficient manner. Experiments using SIA fit well into the course on Instrumental Chemical Analysis, and especially into the section on Automatic Methods of analysis provided by chemistry…

  10. Accurate, efficient, and (iso)geometrically flexible collocation methods for phase-field models

    NASA Astrophysics Data System (ADS)

    Gomez, Hector; Reali, Alessandro; Sangalli, Giancarlo

    2014-04-01

    We propose new collocation methods for phase-field models. Our algorithms are based on isogeometric analysis, a new technology that makes use of functions from computational geometry, such as, for example, Non-Uniform Rational B-Splines (NURBS). NURBS exhibit excellent approximability and controllable global smoothness, and can represent exactly most geometries encapsulated in Computer Aided Design (CAD) models. These attributes permitted us to derive accurate, efficient, and geometrically flexible collocation methods for phase-field models. The performance of our method is demonstrated by several numerical examples of phase separation modeled by the Cahn-Hilliard equation. We feel that our method successfully combines the geometrical flexibility of finite elements with the accuracy and simplicity of pseudo-spectral collocation methods, and is a viable alternative to classical collocation methods.

  11. Management of groundwater in-situ bioremediation system using reactive transport modelling under parametric uncertainty: field scale application

    NASA Astrophysics Data System (ADS)

    Verardo, E.; Atteia, O.; Rouvreau, L.

    2015-12-01

In-situ bioremediation is a commonly used remediation technology to clean up the subsurface of petroleum-contaminated sites. Forecasting remedial performance (in terms of flux and mass reduction) is a challenge due to the uncertainties associated with source properties and with the contribution and efficiency of concentration-reducing mechanisms. In this study, predictive uncertainty analysis of bioremediation system efficiency is carried out with the null-space Monte Carlo (NSMC) method, which combines the calibration solution-space parameters with the ensemble of null-space parameters, creating sets of calibration-constrained parameters for input to follow-on analysis of remedial efficiency. The first step in the NSMC methodology for uncertainty analysis is model calibration. The model was calibrated by matching simulated BTEX concentrations to a total of 48 observations from historical data before implementation of treatment. Two different bioremediation designs were then implemented in the calibrated model. The first consists of pumping/injection wells, the second of a permeable barrier coupled with infiltration across slotted piping. The NSMC method was used to calculate 1000 calibration-constrained parameter sets for the two different models. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. The first variant is based on a single calibrated model. In the second variant, models were calibrated from different initial parameter sets, and NSMC calibration-constrained parameter sets were sampled from these different calibrated models. We demonstrate that, in the context of a nonlinear model, the second variant avoids underestimating parameter uncertainty, which could otherwise lead to a poor quantification of predictive uncertainty. 
Application of the proposed approach to manage bioremediation of groundwater in a real site shows that it is effective to provide support in management of the in-situ bioremediation systems. Moreover, this study demonstrates that the NSMC method provides a computationally efficient and practical methodology of utilizing model predictive uncertainty methods in environmental management.

  12. Table-top job analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1994-12-01

The purpose of this Handbook is to establish general training program guidelines for training personnel in developing training for operation, maintenance, and technical support personnel at Department of Energy (DOE) nuclear facilities. TTJA is not the only method of job analysis; however, when conducted properly TTJA can be cost effective, efficient, and self-validating, and represents an effective method of defining job requirements. The table-top job analysis is suggested in the DOE Training Accreditation Program manuals as an acceptable alternative to traditional methods of analyzing job requirements. DOE 5480-20A strongly endorses and recommends it as the preferred method for analyzing jobs for positions addressed by the Order.

  13. The efficiency and budgeting of public hospitals: a case study of Iran.

    PubMed

    Yusefzadeh, Hasan; Ghaderi, Hossein; Bagherzade, Rafat; Barouni, Mohsen

    2013-05-01

Hospitals are the most costly and important components of any health care system, so it is important to know their economic value, pay attention to their efficiency and consider the factors affecting it. The aim of this study was to assess the technical, scale and economic efficiency of hospitals in the West Azerbaijan province of Iran, for which Data Envelopment Analysis (DEA) was used to propose a model for operational budgeting. This descriptive-analytical study was conducted in 2009 with three inputs and two outputs. DEAP 2.1 software was used for data analysis. Slack and radial movements and input surpluses were calculated for the selected hospitals. Finally, a model was proposed for performance-based budgeting of hospitals and health sectors using the DEA technique. The average scores of technical efficiency, pure technical efficiency (managerial efficiency) and scale efficiency of the hospitals were 0.584, 0.782 and 0.771, respectively. In other words, the capacity for efficiency improvement in the hospitals, without any increase in costs and with the same amount of inputs, was about 41.5%. Only four of the hospitals operated at the maximum level of technical efficiency. Moreover, surplus production factors were evident in these hospitals. Reduction of surplus production factors through comprehensive planning based on the results of the Data Envelopment Analysis can play a major role in cost reduction for hospitals and health sectors. In hospitals with a technical efficiency score of less than one, the original and projected values of inputs differed, resulting in a surplus. Hence, these hospitals should reduce their inputs to achieve maximum efficiency and optimal performance. The results of this method can give hospitals a benchmark for making decisions about resource allocation, linking budgets to performance results, and controlling and improving hospital performance.

  14. Full cost accounting in the analysis of separated waste collection efficiency: A methodological proposal.

    PubMed

    D'Onza, Giuseppe; Greco, Giulio; Allegrini, Marco

    2016-02-01

    Recycling implies additional costs for separated municipal solid waste (MSW) collection. The aim of the present study is to propose and implement a management tool - the full cost accounting (FCA) method - to calculate the full collection costs of different types of waste. Our analysis aims for a better understanding of the difficulties of putting FCA into practice in the MSW sector. We propose a FCA methodology that uses standard cost and actual quantities to calculate the collection costs of separate and undifferentiated waste. Our methodology allows cost efficiency analysis and benchmarking, overcoming problems related to firm-specific accounting choices, earnings management policies and purchase policies. Our methodology allows benchmarking and variance analysis that can be used to identify the causes of off-standards performance and guide managers to deploy resources more efficiently. Our methodology can be implemented by companies lacking a sophisticated management accounting system. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Bayesian analysis of rare events

    NASA Astrophysics Data System (ADS)

    Straub, Daniel; Papaioannou, Iason; Betz, Wolfgang

    2016-06-01

    In many areas of engineering and science there is an interest in predicting the probability of rare events, in particular in applications related to safety and security. Increasingly, such predictions are made through computer models of physical systems in an uncertainty quantification framework. Additionally, with advances in IT, monitoring and sensor technology, an increasing amount of data on the performance of the systems is collected. This data can be used to reduce uncertainty, improve the probability estimates and consequently enhance the management of rare events and associated risks. Bayesian analysis is the ideal method to include the data into the probabilistic model. It ensures a consistent probabilistic treatment of uncertainty, which is central in the prediction of rare events, where extrapolation from the domain of observation is common. We present a framework for performing Bayesian updating of rare event probabilities, termed BUS. It is based on a reinterpretation of the classical rejection-sampling approach to Bayesian analysis, which enables the use of established methods for estimating probabilities of rare events. By drawing upon these methods, the framework makes use of their computational efficiency. These methods include the First-Order Reliability Method (FORM), tailored importance sampling (IS) methods and Subset Simulation (SuS). In this contribution, we briefly review these methods in the context of the BUS framework and investigate their applicability to Bayesian analysis of rare events in different settings. We find that, for some applications, FORM can be highly efficient and is surprisingly accurate, enabling Bayesian analysis of rare events with just a few model evaluations. In a general setting, BUS implemented through IS and SuS is more robust and flexible.
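The rejection-sampling reinterpretation at the heart of BUS can be shown in a few lines: draw from the prior and accept with probability proportional to the likelihood. The 1-D conjugate normal example below is hypothetical and chosen only so the sampler can be checked against the analytic posterior:

```python
import numpy as np

# Rejection-sampling view of Bayesian updating (the core idea behind BUS):
# sample theta from the prior and accept with probability L(theta)/c, where c
# bounds the likelihood. Prior: theta ~ N(0, 1); one observation y with
# N(theta, sd^2) noise, so the posterior is N(y/(1+sd^2), 1/(1+1/sd^2)).
rng = np.random.default_rng(42)
y, sd = 1.2, 0.5
theta = rng.standard_normal(200_000)            # draws from the prior
u = rng.uniform(size=theta.size)
accept = u <= np.exp(-(y - theta)**2 / (2 * sd**2))   # L(theta) / max L
posterior = theta[accept]

post_mean = posterior.mean()                    # analytic value: 1.2 / 1.25 = 0.96
```

In BUS this acceptance condition is embedded in standard-normal space, so that structural reliability methods such as FORM, importance sampling, and Subset Simulation can estimate the acceptance probability efficiently even when it is extremely small.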

  16. A High-Order Method Using Unstructured Grids for the Aeroacoustic Analysis of Realistic Aircraft Configurations

    NASA Technical Reports Server (NTRS)

    Atkins, Harold L.; Lockard, David P.

    1999-01-01

A method for the prediction of acoustic scatter from complex geometries is presented. The discontinuous Galerkin method provides a framework for the development of a high-order method using unstructured grids. The method's compact form contributes to its accuracy and efficiency, and makes the method well suited for distributed-memory parallel computing platforms. Mesh refinement studies are presented to validate the expected convergence properties of the method, and to establish the absolute levels of error one can expect at a given level of resolution. For a two-dimensional shear layer instability wave and for three-dimensional wave propagation, the method is demonstrated to be insensitive to mesh smoothness. Simulations of scatter from a two-dimensional slat configuration and a three-dimensional blended-wing-body demonstrate the capability of the method to efficiently treat realistic geometries.

  17. Measuring the Efficiency of a Hospital based on the Econometric Stochastic Frontier Analysis (SFA) Method.

    PubMed

    Rezaei, Satar; Zandian, Hamed; Baniasadi, Akram; Moghadam, Telma Zahirian; Delavari, Somayeh; Delavari, Sajad

    2016-02-01

Hospitals are the most expensive health service providers in the world. Therefore, the evaluation of their performance can be used to reduce costs. The aim of this study was to determine the efficiency of the hospitals at the Kurdistan University of Medical Sciences using stochastic frontier analysis (SFA). This was a cross-sectional, retrospective study that assessed the performance of Kurdistan teaching hospitals (n = 12) between 2007 and 2013. The stochastic frontier analysis method was used to achieve this aim. The numbers of active beds, nurses, physicians, and other staff members were considered as input variables, while inpatient admissions were considered as the output. The data were analyzed using Frontier 4.1 software. The mean technical efficiency of the hospitals we studied was 0.67. The results of the Cobb-Douglas production function showed that the maximum elasticity was related to active beds, and the elasticity of nurses was negative. Also, returns to scale were increasing. The results of this study indicated that the performance of the hospitals was not appropriate in terms of technical efficiency; compared with the most efficient hospitals studied, the hospitals had the capacity to expand their output by about 33%. It is suggested that the effect of various factors, such as the quality of health care and patient satisfaction, be considered in future studies assessing hospital performance.
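The normal/half-normal frontier model that packages like Frontier 4.1 estimate can be written down and maximized directly. The sketch below fits it by maximum likelihood on synthetic data (the data, sample size and parameter values are all made up; this is not the hospital data set):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_loglik(params, X, y):
    """Normal/half-normal stochastic frontier: y = X b + v - u, with
    v ~ N(0, s_v^2) noise and u ~ |N(0, s_u^2)| inefficiency.
    Standard deviations are parametrized on the log scale."""
    b = params[:-2]
    s_v, s_u = np.exp(params[-2:])
    eps = y - X @ b
    sigma = np.hypot(s_v, s_u)
    lam = s_u / s_v
    ll = (np.log(2) - np.log(sigma) + norm.logpdf(eps / sigma)
          + norm.logcdf(-eps * lam / sigma))
    return -ll.sum()

# Synthetic frontier: intercept 1.0, slope 0.8.
rng = np.random.default_rng(7)
n = 1000
x = rng.uniform(0, 3, n)
X = np.column_stack([np.ones(n), x])
v = rng.normal(0, 0.2, n)
u = np.abs(rng.normal(0, 0.4, n))
y = 1.0 + 0.8 * x + v - u

res = minimize(neg_loglik, x0=np.array([0.5, 0.5, np.log(0.3), np.log(0.3)]),
               args=(X, y), method="Nelder-Mead",
               options={"maxiter": 20000, "maxfev": 20000})
beta_hat = res.x[:2]
```

Unlike OLS, the skewed composite error lets the likelihood separate random noise from one-sided inefficiency, which is what the technical efficiency scores reported in the abstract are based on.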

  18. Transient loads analysis for space flight applications

    NASA Technical Reports Server (NTRS)

    Thampi, S. K.; Vidyasagar, N. S.; Ganesan, N.

    1992-01-01

    A significant part of the flight readiness verification process involves transient analysis of the coupled Shuttle-payload system to determine the low frequency transient loads. This paper describes a methodology for transient loads analysis and its implementation for the Spacelab Life Sciences Mission. The analysis is carried out using two major software tools - NASTRAN and an external FORTRAN code called EZTRAN. This approach is adopted to overcome some of the limitations of NASTRAN's standard transient analysis capabilities. The method uses Data Recovery Matrices (DRM) to improve computational efficiency. The mode acceleration method is fully implemented in the DRM formulation to recover accurate displacements, stresses, and forces. The advantages of the method are demonstrated through a numerical example.

  19. Design and performance analysis of gas and liquid radial turbines

    NASA Astrophysics Data System (ADS)

    Tan, Xu

In the first part of the research, pumps running in reverse as turbines are studied. This work uses experimental data for a wide range of pumps representing common centrifugal pump configurations in terms of specific speed. Based on specific speed and specific diameter, an accurate correlation is developed to predict the performance at the best efficiency point of a centrifugal pump in its turbine-mode operation. The proposed prediction method yields very good results to date compared to previous such attempts; it is compared to nine previous methods found in the literature, and the comparison shows that the method proposed in this work is the most accurate. The method is meaningful because it is based on both specific speed and specific diameter, and it can be further refined by future tests to increase its accuracy. The second part of the research is focused on the design and analysis of a radial gas turbine. The specification of the turbine is obtained from a solar biogas hybrid system, which is theoretically analyzed and constructed based on the purchased compressor. The theoretical analysis results in a specification of 100 lb/min, 900ºC inlet total temperature and 1.575 atm inlet total pressure. The 1-D and 3-D geometry of the rotor is generated based on Aungier's method. 1-D loss-model analysis and 3-D CFD simulations are performed to examine the performance of the rotor; its total-to-total efficiency is more than 90%. With the help of the CFD analysis, modifications to the preliminary design yielded optimized aerodynamic performance. Finally, a theoretical performance analysis of the hybrid system is performed with the designed turbine.

  20. ADAM: Analysis of Discrete Models of Biological Systems Using Computer Algebra

    PubMed Central

    2011-01-01

    Background Many biological systems are modeled qualitatively with discrete models, such as probabilistic Boolean networks, logical models, Petri nets, and agent-based models, to gain a better understanding of them. The computational complexity to analyze the complete dynamics of these models grows exponentially in the number of variables, which impedes working with complex models. There exist software tools to analyze discrete models, but they either lack the algorithmic functionality to analyze complex models deterministically or they are inaccessible to many users as they require understanding the underlying algorithm and implementation, do not have a graphical user interface, or are hard to install. Efficient analysis methods that are accessible to modelers and easy to use are needed. Results We propose a method for efficiently identifying attractors and introduce the web-based tool Analysis of Dynamic Algebraic Models (ADAM), which provides this and other analysis methods for discrete models. ADAM converts several discrete model types automatically into polynomial dynamical systems and analyzes their dynamics using tools from computer algebra. Specifically, we propose a method to identify attractors of a discrete model that is equivalent to solving a system of polynomial equations, a long-studied problem in computer algebra. Based on extensive experimentation with both discrete models arising in systems biology and randomly generated networks, we found that the algebraic algorithms presented in this manuscript are fast for systems with the structure maintained by most biological systems, namely sparseness and robustness. For a large set of published complex discrete models, ADAM identified the attractors in less than one second. Conclusions Discrete modeling techniques are a useful tool for analyzing complex biological systems and there is a need in the biological community for accessible efficient analysis tools. 
ADAM provides analysis methods based on mathematical algorithms as a web-based tool for several different input formats, and it makes analysis of complex models accessible to a larger community, as it is platform independent as a web-service and does not require understanding of the underlying mathematics. PMID:21774817
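
    The attractor search described above can be contrasted with naive enumeration. The sketch below finds the attractors of a hypothetical 3-node synchronous Boolean network (the update rules are assumptions for illustration, not a model from the paper) by following every state until a cycle repeats; this is exactly the exponential enumeration that ADAM's algebraic approach, which encodes the rules as polynomials over a finite field and solves for steady states, is designed to avoid.

```python
from itertools import product

def step(state):
    """Synchronous update rules of a toy 3-node Boolean network (assumed)."""
    x0, x1, x2 = state
    return (x1 and x2, not x0, x0 or x1)

def attractors(n=3):
    """Brute-force attractor search: follow every state until a cycle repeats.
    This is the exponential enumeration that algebraic methods sidestep."""
    found = set()
    for start in product([False, True], repeat=n):
        seen, s = {}, start
        while s not in seen:
            seen[s] = len(seen)
            s = step(s)
        # States visited at or after the first occurrence of s form the cycle.
        cycle = [u for u, i in sorted(seen.items(), key=lambda kv: kv[1])
                 if i >= seen[s]]
        # Canonical rotation so each cycle is counted exactly once.
        found.add(min(tuple(cycle[i:] + cycle[:i])
                      for i in range(len(cycle))))
    return found
```

    For this toy network the enumeration visits all 2^3 states; ADAM's point is that the same answer can be obtained by solving a polynomial system, which stays tractable for sparse, robust biological networks.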

  1. Progressive sampling-based Bayesian optimization for efficient and automatic machine learning model selection.

    PubMed

    Zeng, Xueqiang; Luo, Gang

    2017-12-01

    Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters termed hyper-parameters must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method. We show that compared to a state-of-the-art automatic selection method, our method can significantly reduce search time, classification error rate, and standard deviation of error rate due to randomization. This is major progress towards enabling fast turnaround in identifying high-quality solutions required by many machine learning-based clinical data analysis tasks.
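
    The progressive-sampling idea in the record above can be sketched independently of the Bayesian optimization layer: train on a doubling subset of the data and stop once the validation gain falls below a tolerance. The learner (a trivial nearest-centroid classifier), the synthetic data, and the tolerance below are illustrative assumptions, not the authors' implementation.

```python
import random

random.seed(0)

# Synthetic two-class data: feature ~ N(0,1) for class 0, N(2,1) for class 1.
data = [(random.gauss(2.0 * c, 1.0), c) for c in (0, 1) for _ in range(2000)]
random.shuffle(data)
train, valid = data[:3000], data[3000:]

def accuracy(sample):
    """Train a nearest-centroid classifier on `sample`, score on `valid`."""
    mu = [sum(x for x, c in sample if c == k)
          / sum(1 for _, c in sample if c == k) for k in (0, 1)]
    thresh = (mu[0] + mu[1]) / 2.0
    return sum((x > thresh) == (c == 1) for x, c in valid) / len(valid)

def progressive_sample(start=32, tol=0.005):
    """Double the training-sample size until the validation gain drops below `tol`."""
    n, prev = start, 0.0
    while True:
        acc = accuracy(train[:n])
        if acc - prev < tol or 2 * n > len(train):
            return n, acc
        n, prev = 2 * n, acc

n_final, acc_final = progressive_sample()
```

    The payoff is that most candidate configurations are eliminated on small, cheap samples; only promising ones are trained on the full data.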

  2. Analysis of 161Tb by radiochemical separation and liquid scintillation counting

    DOE PAGES

    Jiang, J.; Davies, A.; Arrigo, L.; ...

    2015-12-05

    The determination of 161Tb activity is problematic due to its very low fission yield, short half-life, and the complexity of its gamma spectrum. At AWE, radiochemically purified 161Tb solution was measured on a PerkinElmer 1220 Quantulus™ Liquid Scintillation Spectrometer. Since there was no 161Tb certified standard solution available commercially, the counting efficiency was determined by the CIEMAT/NIST Efficiency Tracing method. The method was validated during a recent inter-laboratory comparison exercise involving the analysis of a uranium sample irradiated with thermal neutrons. Lastly, the measured 161Tb result was in excellent agreement with the result using gamma spectrometry and the result obtained by Pacific Northwest National Laboratory.

  3. The Constraint Method for Solid Finite Elements.

    DTIC Science & Technology

    1980-09-30

    9. "Hierarchical Approximation in Finite Element Analysis", by I. Norman Katz, International Symposium on Innovative Numerical Analysis in Applied ... Engineering Science, Versailles, France, May 23-27, 1977. 10. "Efficient Generation of Hierarchical Finite Elements Through the Use of Precomputed Arrays"

  4. Independent components analysis to increase efficiency of discriminant analysis methods (FDA and LDA): Application to NMR fingerprinting of wine.

    PubMed

    Monakhova, Yulia B; Godelmann, Rolf; Kuballa, Thomas; Mushtakova, Svetlana P; Rutledge, Douglas N

    2015-08-15

    Discriminant analysis (DA) methods, such as linear discriminant analysis (LDA) or factorial discriminant analysis (FDA), are well-known chemometric approaches for solving classification problems in chemistry. In most applications, principal component analysis (PCA) is used as the first step to generate orthogonal eigenvectors, and the corresponding sample scores are utilized to generate discriminant features for the discrimination. Independent component analysis (ICA) based on the minimization of mutual information can be used as an alternative to PCA as a preprocessing tool for LDA and FDA classification. To illustrate the performance of this ICA/DA methodology, four representative nuclear magnetic resonance (NMR) data sets of wine samples were used. The classification was performed regarding grape variety, year of vintage and geographical origin. The average increase for ICA/DA in comparison with PCA/DA in the percentage of correct classification varied between 6±1% and 8±2%. The maximum increase in classification efficiency of 11±2% was observed for discrimination of the year of vintage (ICA/FDA) and geographical origin (ICA/LDA). The procedure to determine the number of extracted features (PCs, ICs) for the optimum DA models was discussed. The use of independent components (ICs) instead of principal components (PCs) resulted in improved classification performance of DA methods. The ICA/LDA method is preferable to ICA/FDA for recognition tasks based on NMR spectroscopic measurements. Copyright © 2015 Elsevier B.V. All rights reserved.
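
    The discriminant step applied after PCA or ICA preprocessing can be sketched with a plain two-class Fisher LDA. The synthetic "component scores" below are stand-ins for the IC/PC features extracted from NMR spectra, not data from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class component scores (illustrative stand-ins).
X0 = rng.normal([0.0, 0.0], 1.0, size=(100, 2))   # class 0
X1 = rng.normal([3.0, 1.0], 1.0, size=(100, 2))   # class 1

def fisher_lda(X0, X1):
    """Two-class Fisher discriminant: w = Sw^{-1} (m1 - m0)."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
          + np.cov(X1, rowvar=False) * (len(X1) - 1))  # within-class scatter
    w = np.linalg.solve(Sw, m1 - m0)
    thresh = w @ (m0 + m1) / 2.0
    return w, thresh

w, t = fisher_lda(X0, X1)
acc = np.concatenate([X0 @ w < t, X1 @ w >= t]).mean()
```

    Swapping the feature-extraction front end (PCA scores vs. IC scores) while keeping this discriminant step fixed is exactly the comparison the study performs.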

  5. Parallel AFSA algorithm accelerating based on MIC architecture

    NASA Astrophysics Data System (ADS)

    Zhou, Junhao; Xiao, Hong; Huang, Yifan; Li, Yongzhao; Xu, Yuanrui

    2017-05-01

    Past analyses of the artificial fish swarm algorithm (AFSA) for solving the traveling salesman problem show that algorithm efficiency is often a major problem, and that the standard processing method does not fully exploit the characteristics of the traveling salesman problem. We therefore propose an improved parallel AFSA. Simulations compared against the currently known optimal TSP solutions show that the improved AFSA requires fewer iterations and roughly doubles operating efficiency on the MIC cards.

  6. A fully Bayesian method for jointly fitting instrumental calibration and X-ray spectral models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Jin; Yu, Yaming; Van Dyk, David A.

    2014-10-20

    Owing to a lack of robust principled methods, systematic instrumental uncertainties have generally been ignored in astrophysical data analysis despite wide recognition of the importance of including them. Ignoring calibration uncertainty can cause bias in the estimation of source model parameters and can lead to underestimation of the variance of these estimates. We previously introduced a pragmatic Bayesian method to address this problem. The method is 'pragmatic' in that it introduced an ad hoc technique that simplified computation by neglecting the potential information in the data for narrowing the uncertainty for the calibration product. Following that work, we use a principal component analysis to efficiently represent the uncertainty of the effective area of an X-ray (or γ-ray) telescope. Here, however, we leverage this representation to enable a principled, fully Bayesian method that coherently accounts for the calibration uncertainty in high-energy spectral analysis. In this setting, the method is compared with standard analysis techniques and the pragmatic Bayesian method. The advantage of the fully Bayesian method is that it allows the data to provide information not only for estimation of the source parameters but also for the calibration product—here the effective area, conditional on the adopted spectral model. In this way, it can yield more accurate and efficient estimates of the source parameters along with valid estimates of their uncertainty. Provided that the source spectrum can be accurately described by a parameterized model, this method allows rigorous inference about the effective area by quantifying which possible curves are most consistent with the data.

  7. Directional Slack-Based Measure for the Inverse Data Envelopment Analysis

    PubMed Central

    Abu Bakar, Mohd Rizam; Lee, Lai Soon; Jaafar, Azmi B.; Heydar, Maryam

    2014-01-01

    This research introduces a novel technique based on the directional slack-based measure for the inverse data envelopment analysis. The inverse directional slack-based measure model is formulated within a new production possibility set, in which the output (input) quantities of an efficient decision making unit are modified. Specifically, the efficient decision making unit is removed from the current production possibility set and replaced by the same unit with its input and output quantities modified. The approach preserves, or improves, the efficiency scores of all DMUs. The proposed approach is investigated with reference to a resource allocation problem, in which increases (decreases) of certain outputs of the efficient decision making unit can be considered simultaneously. The significance of the presented model is illustrated by numerical examples. PMID:24883350

  8. A multifractal detrended fluctuation analysis of financial market efficiency: Comparison using Dow Jones sector ETF indices

    NASA Astrophysics Data System (ADS)

    Tiwari, Aviral Kumar; Albulescu, Claudiu Tiberiu; Yoon, Seong-Min

    2017-10-01

    This study challenges the efficient market hypothesis, relying on the Dow Jones sector Exchange-Traded Fund (ETF) indices. For this purpose, we use the generalized Hurst exponent and multifractal detrended fluctuation analysis (MF-DFA) methods, using daily data over the timespan from 2000 to 2015. We compare the sector ETF indices in terms of market efficiency between short- and long-run horizons, small and large fluctuations, and before and after the global financial crisis (GFC). Our findings can be summarized as follows. First, there is clear evidence that the sector ETF markets are multifractal in nature. We also find a crossover in the multifractality of sector ETF market dynamics. Second, the utilities and consumer goods sector ETF markets are more efficient compared with the financial and telecommunications sector ETF markets, in terms of price prediction. Third, there are noteworthy discrepancies in terms of market efficiency, between the short- and long-term horizons. Fourth, the ETF market efficiency is considerably diminished after the global financial crisis.
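
    The monofractal special case of the detrended fluctuation analysis used above can be sketched as follows: integrate the series, detrend it linearly in windows of increasing size, and read the scaling exponent off the log-log slope of the RMS fluctuation. For uncorrelated returns the exponent is near 0.5, the benchmark for an efficient (unpredictable) market; the scale choices here are illustrative.

```python
import numpy as np

def dfa_exponent(x, scales=(8, 16, 32, 64, 128)):
    """DFA-1: RMS fluctuation of the linearly detrended profile vs window
    size; the log-log slope estimates the scaling (Hurst-like) exponent."""
    y = np.cumsum(x - np.mean(x))                  # integrated profile
    F = []
    for s in scales:
        n_seg = len(y) // s
        segs = y[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        sq = [np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2)
              for seg in segs]                     # per-window detrended variance
        F.append(np.sqrt(np.mean(sq)))
    return np.polyfit(np.log(scales), np.log(F), 1)[0]

rng = np.random.default_rng(1)
h_white = dfa_exponent(rng.standard_normal(8192))  # ~0.5 for white noise
```

    MF-DFA generalizes this by raising the per-window fluctuations to a range of powers q, so that small and large fluctuations get separate exponents.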

  9. [Impact of the funding reform of teaching hospitals in Brazil].

    PubMed

    Lobo, M S C; Silva, A C M; Lins, M P E; Fiszman, R

    2009-06-01

    To assess the impact of funding reform on the productivity of teaching hospitals, efficiency and productivity in 2003 and 2006 were measured, based on the Information System of Federal University Hospitals of Brazil, using frontier methods with a linear programming technique, data envelopment analysis, and an input-oriented variable returns to scale model. The Malmquist index was calculated to detect changes during the study period: 'technical efficiency change,' the relative variation of the efficiency of each unit, and 'technological change,' the frontier shift. There was a 51% mean budget increase and an improvement in the technical efficiency of teaching hospitals (the number of hospitals on the empirical efficiency frontier rose from 11 to 17), but the same was not seen for the technology frontier. Data envelopment analysis set benchmark scores for each inefficient unit (before and after the reform), and there was a positive correlation between technical efficiency and teaching intensity and dedication. The reform promoted management improvements, but further follow-up is needed to assess the effectiveness of the funding changes.
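
    The input-oriented, variable-returns-to-scale DEA model mentioned above reduces to one linear program per decision making unit: shrink the unit's inputs by a factor theta while a convex combination of peers still dominates it. A minimal sketch, assuming scipy is available (the two toy DMUs are illustrative, not hospital data):

```python
import numpy as np
from scipy.optimize import linprog

def dea_vrs_input(X, Y, k):
    """Input-oriented, variable-returns-to-scale DEA score of DMU k.
    X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs); returns theta in (0, 1]."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                   # variables [theta, lambdas]; min theta
    # Inputs:  sum_j lam_j X_ji - theta * X_ki <= 0
    A_in = np.hstack([-X[k].reshape(m, 1), X.T])
    # Outputs: sum_j lam_j Y_jr >= Y_kr  ->  -sum_j lam_j Y_jr <= -Y_kr
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1)  # VRS: sum_j lam_j = 1
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[k]],
                  A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n + 1))
    return res.x[0]
```

    For two DMUs with (input, output) = (2, 4) and (4, 4), the first is efficient (theta = 1) while the second could produce its output with half its input (theta = 0.5); units with theta < 1 get the dominating convex combination as their benchmark.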

  10. Development of a novel ultrasound-assisted headspace liquid-phase microextraction and its application to the analysis of chlorophenols in real aqueous samples.

    PubMed

    Xu, Hui; Liao, Ying; Yao, Jinrong

    2007-10-05

    A new sample pretreatment technique, ultrasound-assisted headspace liquid-phase microextraction, was developed in this paper. In this technique, the volatile analytes were headspace-extracted into a small drop of solvent suspended at the bottom of a cone-shaped PCR tube instead of on the needle tip of a microsyringe. More solvent could be suspended in the PCR tube than on a microsyringe due to the larger interfacial tension; thus the analysis sensitivity was significantly improved with the increased extractant volume. Moreover, ultrasound-assisted extraction and independent temperature control of the extractant and the sample were used to enhance the extraction efficiency. Following the extraction, the solvent-loaded sample was analyzed by high-performance liquid chromatography. Chlorophenols (2-chlorophenol, 2,4-dichlorophenol and 2,6-dichlorophenol) were chosen as model analytes to investigate the feasibility of the method. The experimental conditions related to the extraction efficiency were systematically studied. Under the optimum experimental conditions, the detection limit (S/N=3) and intra- and inter-day RSDs were 6 ng mL(-1), 4.6% and 3.9% for 2-chlorophenol; 12 ng mL(-1), 2.4% and 8.8% for 2,4-dichlorophenol; and 23 ng mL(-1), 3.3% and 5.3% for 2,6-dichlorophenol, respectively. The proposed method was successfully applied to determine chlorophenols in real aqueous samples. Good recoveries ranging from 84.6% to 100.7% were obtained. In addition, the extraction efficiencies of our method and conventional headspace liquid-phase microextraction were compared; the extraction efficiency of the former was about 21 times higher than that of the latter. The results demonstrated that the proposed method is a promising sample pretreatment approach; its advantages over conventional headspace liquid-phase microextraction include simple setup, ease of operation, rapidness, sensitivity, precision and freedom from cross-contamination.
The method is very suitable for the analysis of trace volatile and semivolatile pollutants in real aqueous sample.

  11. Sizing and economic analysis of stand alone photovoltaic system with hydrogen storage

    NASA Astrophysics Data System (ADS)

    Nordin, N. D.; Rahman, H. A.

    2017-11-01

    This paper proposes design steps for sizing a standalone photovoltaic system with hydrogen storage using an intuitive method. The main advantage of this method is that it uses a direct mathematical approach to find the system's size based on daily load consumption and average irradiation data. The keys of the system design are to satisfy a pre-determined load requirement and to maintain the hydrogen storage's state of charge during periods of low solar irradiation. To test the effectiveness of the proposed method, a case study is conducted using Kuala Lumpur's generated meteorological data and a rural area's typical daily load profile of 2.215 kWh. In addition, an economic analysis is performed to appraise the proposed system's feasibility. The finding shows that the levelized cost of energy for the proposed system is RM 1.98/kWh. However, based on sizing results obtained using a published method with an AGM battery as back-up supply, that system's cost is lower and more economically viable. The feasibility of a PV system with hydrogen storage can be improved if the efficiency of hydrogen storage technologies increases significantly in the future. Hence, a sensitivity analysis is performed to verify the effect of electrolyzer and fuel cell efficiencies on the levelized cost of energy. Efficiencies of electrolyzers and fuel cells available in the current market are validated using laboratory experimental data. This finding is needed to envisage the applicability of photovoltaic systems with hydrogen storage as a future power supply source in Malaysia.
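
    The levelized cost of energy used in the economic analysis above is, in its simplest form, discounted lifetime cost divided by discounted lifetime energy. A minimal sketch with illustrative figures (the capital cost, O&M cost, yield, lifetime, and discount rate below are assumptions, not the paper's values):

```python
def lcoe(capex, annual_opex, annual_energy_kwh, years=20, rate=0.06):
    """Levelized cost of energy: discounted lifetime cost divided by
    discounted lifetime energy (all figures illustrative)."""
    disc = sum((1.0 + rate) ** -t for t in range(1, years + 1))  # annuity factor
    cost = capex + annual_opex * disc
    energy = annual_energy_kwh * disc
    return cost / energy
```

    With a zero discount rate this collapses to total cost over total energy, which is a quick sanity check on any implementation.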

  12. A comparison of methods for teaching receptive labeling to children with autism spectrum disorders: a systematic replication.

    PubMed

    Grow, Laura L; Kodak, Tiffany; Carr, James E

    2014-01-01

    Previous research has demonstrated that the conditional-only method (starting with a multiple-stimulus array) is more efficient than the simple-conditional method (progressive incorporation of more stimuli into the array) for teaching receptive labeling to children with autism spectrum disorders (Grow, Carr, Kodak, Jostad, & Kisamore,). The current study systematically replicated the earlier study by comparing the 2 approaches using progressive prompting with 2 boys with autism. The results showed that the conditional-only method was a more efficient and reliable teaching procedure than the simple-conditional method. The results further call into question the practice of teaching simple discriminations to facilitate acquisition of conditional discriminations. © Society for the Experimental Analysis of Behavior.

  13. Dynamical analysis of the avian-human influenza epidemic model using the semi-analytical method

    NASA Astrophysics Data System (ADS)

    Jabbari, Azizeh; Kheiri, Hossein; Bekir, Ahmet

    2015-03-01

    In this work, we present the dynamic behavior of an avian-human influenza epidemic model using an efficient computational algorithm, namely the multistage differential transform method (MsDTM). The MsDTM is used here as an algorithm for approximating the solutions of the avian-human influenza epidemic model in a sequence of time intervals. In order to show the efficiency of the method, the obtained numerical results are compared with fourth-order Runge-Kutta method (RK4M) and differential transform method (DTM) solutions. It is shown that the MsDTM has the advantage of giving an analytical form of the solution within each time interval, which is not possible in purely numerical techniques like the RK4M.

  14. Chapter 8: Whole-Building Retrofit with Consumption Data Analysis Evaluation Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurnik, Charles W.; Agnew, Ken; Goldberg, Mimi

    Whole-building retrofits involve the installation of multiple measures. Whole-building retrofit programs take many forms. With a focus on overall building performance, these programs usually begin with an energy audit to identify cost-effective energy efficiency measures for the home. Measures are then installed, either at no cost to the homeowner or partially paid for by rebates and/or financing. The methods described here may also be applied to evaluation of single-measure retrofit programs. Related methods exist for replace-on-failure programs and for new construction, but are not the subject of this chapter.

  15. Downdating a time-varying square root information filter

    NASA Technical Reports Server (NTRS)

    Muellerschoen, Ronald J.

    1990-01-01

    A new method to efficiently downdate an estimate and covariance generated by a discrete time Square Root Information Filter (SRIF) is presented. The method combines the QR factor downdating algorithm of Gill and the decentralized SRIF algorithm of Bierman. Efficient removal of either measurements or a priori information is possible without loss of numerical integrity. Moreover, the method includes features for detecting potential numerical degradation. Performance on a 300 parameter system with 5800 data points shows that the method can be used in real time and hence is a promising tool for interactive data analysis. Additionally, updating a time-varying SRIF filter with either additional measurements or a priori information proceeds analogously.

  16. Application of data envelopment analysis in measuring the efficiency of mutual fund

    NASA Astrophysics Data System (ADS)

    Nik, Marzieh Geramian; Mihanzadeh, Hooman; Izadifar, Mozhgan; Nik, Babak Geramian

    2015-05-01

    The growth of the mutual fund industry during the past decades emphasizes the importance of this investment vehicle, particularly for the prosperity of financial markets and, in turn, the financial growth of each country. Therefore, evaluating the relative efficiency of mutual funds as an investment tool is important. In this study, a combined model of data envelopment analysis (DEA) and goal programming (GoDEA) is used to analyze the return efficiency of mutual funds, separating efficient from inefficient funds and identifying the sources of inefficiency. Mixed-asset local funds managed by CIMB and Public Mutual Berhad were selected for the purpose of this paper. As a result, the Public Small Cap Fund (P Small Cap) is found to be the most efficient mutual fund during the period of study. The integrated model aims first to guide investors in choosing the best-performing fund among mutual funds, secondly to provide a more realistic and appropriate benchmark than other classic methods, and finally to confirm the utility of data envelopment analysis (DEA) as a decision-making tool.

  17. Analyzing thresholds and efficiency with hierarchical Bayesian logistic regression.

    PubMed

    Houpt, Joseph W; Bittner, Jennifer L

    2018-07-01

    Ideal observer analysis is a fundamental tool used widely in vision science for analyzing the efficiency with which a cognitive or perceptual system uses available information. The performance of an ideal observer provides a formal measure of the amount of information in a given experiment. The ratio of human to ideal performance is then used to compute efficiency, a construct that can be directly compared across experimental conditions while controlling for the differences due to the stimuli and/or task specific demands. In previous research using ideal observer analysis, the effects of varying experimental conditions on efficiency have been tested using ANOVAs and pairwise comparisons. In this work, we present a model that combines Bayesian estimates of psychometric functions with hierarchical logistic regression for inference about both unadjusted human performance metrics and efficiencies. Our approach improves upon the existing methods by constraining the statistical analysis using a standard model connecting stimulus intensity to human observer accuracy and by accounting for variability in the estimates of human and ideal observer performance scores. This allows for both individual and group level inferences. Copyright © 2018 Elsevier Ltd. All rights reserved.
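
    The core estimation step, fitting a logistic psychometric function that links stimulus intensity to accuracy, can be sketched without the hierarchical layer. The simulated data and the plain gradient-ascent fit below are illustrative assumptions, not the authors' Bayesian machinery.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated accuracy data from a logistic psychometric function
# P(correct) = 1 / (1 + exp(-(a + b*x))) with true a = -3, b = 2.
x = np.repeat(np.linspace(0.0, 3.0, 7), 100)
y = (rng.random(x.size) < 1.0 / (1.0 + np.exp(-(-3.0 + 2.0 * x)))).astype(float)

def fit_logistic(x, y, lr=0.2, steps=20000):
    """Maximum-likelihood logistic fit by plain gradient ascent."""
    a = b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(a + b * x)))
        a += lr * np.mean(y - p)            # gradient of mean log-likelihood
        b += lr * np.mean((y - p) * x)
    return a, b

a_hat, b_hat = fit_logistic(x, y)
threshold = -a_hat / b_hat   # stimulus level where P = 0.5 (true value 1.5)
```

    The hierarchical model in the paper shares information across observers and propagates the uncertainty of fits like this one into the efficiency comparison, rather than treating each fitted threshold as exact.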

  18. Root Gravitropism: Quantification, Challenges, and Solutions.

    PubMed

    Muller, Lukas; Bennett, Malcolm J; French, Andy; Wells, Darren M; Swarup, Ranjan

    2018-01-01

    Better understanding of root traits such as root angle and root gravitropism will be crucial for development of crops with improved resource use efficiency. This chapter describes a high-throughput, automated image analysis method to trace Arabidopsis (Arabidopsis thaliana) seedling roots grown on agar plates. The method combines a "particle-filtering algorithm with a graph-based method" to trace the center line of a root and can be adopted for the analysis of several root parameters such as length, curvature, and stimulus from original root traces.

  19. Research in Computational Astrobiology

    NASA Technical Reports Server (NTRS)

    Chaban, Galina; Colombano, Silvano; Scargle, Jeff; New, Michael H.; Pohorille, Andrew; Wilson, Michael A.

    2003-01-01

    We report on several projects in the field of computational astrobiology, which is devoted to advancing our understanding of the origin, evolution and distribution of life in the Universe using theoretical and computational tools. Research projects included modifying existing computer simulation codes to use efficient, multiple time step algorithms, statistical methods for analysis of astrophysical data via optimal partitioning methods, electronic structure calculations on water-nucleic acid complexes, incorporation of structural information into genomic sequence analysis methods and calculations of shock-induced formation of polycyclic aromatic hydrocarbon compounds.

  20. Development and application of an analysis of axisymmetric body effects on helicopter rotor aerodynamics using modified slender body theory

    NASA Technical Reports Server (NTRS)

    Yamauchi, G.; Johnson, W.

    1984-01-01

    A computationally efficient body analysis designed to couple with a comprehensive helicopter analysis is developed in order to calculate the body-induced aerodynamic effects on rotor performance and loads. A modified slender body theory is used as the body model. With the objective of demonstrating the accuracy, efficiency, and application of the method, the analysis at this stage is restricted to axisymmetric bodies at zero angle of attack. By comparing with results from an exact analysis for simple body shapes, it is found that the modified slender body theory provides an accurate potential flow solution for moderately thick bodies, with only a 10%-20% increase in computational effort over that of an isolated rotor analysis. The computational ease of this method provides a means for routine assessment of body-induced effects on a rotor. Results are given for several configurations that typify those being used in the Ames 40- by 80-Foot Wind Tunnel and in the rotor-body aerodynamic interference tests being conducted at Ames. A rotor-hybrid airship configuration is also analyzed.

  1. Acceleration and sensitivity analysis of lattice kinetic Monte Carlo simulations using parallel processing and rate constant rescaling

    NASA Astrophysics Data System (ADS)

    Núñez, M.; Robie, T.; Vlachos, D. G.

    2017-10-01

    Kinetic Monte Carlo (KMC) simulation provides insights into catalytic reactions unobtainable with either experiments or mean-field microkinetic models. Sensitivity analysis of KMC models assesses the robustness of the predictions to parametric perturbations and identifies rate determining steps in a chemical reaction network. Stiffness in the chemical reaction network, a ubiquitous feature, demands lengthy run times for KMC models and renders efficient sensitivity analysis based on the likelihood ratio method unusable. We address the challenge of efficiently conducting KMC simulations and performing accurate sensitivity analysis in systems with unknown time scales by employing two acceleration techniques: rate constant rescaling and parallel processing. We develop statistical criteria that ensure sufficient sampling of non-equilibrium steady state conditions. Our approach provides the twofold benefit of accelerating the simulation itself and enabling likelihood ratio sensitivity analysis, which provides further speedup relative to finite difference sensitivity analysis. As a result, the likelihood ratio method can be applied to real chemistry. We apply our methodology to the water-gas shift reaction on Pt(111).
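
    The underlying KMC propagation is the Gillespie loop: draw an exponential waiting time from the total rate, then pick an event with probability proportional to its rate. A toy A <-> B isomerization (an assumption for illustration, not the water-gas shift network of the paper) shows the loop; its time-averaged composition converges to the equilibrium fraction k_f/(k_f + k_r).

```python
import math
import random

random.seed(3)

def kmc_isomerization(n_a, n_b, k_f=2.0, k_r=1.0, t_end=50.0):
    """Minimal Gillespie/KMC loop for A <-> B with mass-action rates.
    Returns the time-averaged number of B molecules."""
    t, b_time = 0.0, 0.0
    while t < t_end:
        r_f, r_r = k_f * n_a, k_r * n_b
        total = r_f + r_r
        dt = -math.log(1.0 - random.random()) / total  # exponential waiting time
        b_time += n_b * min(dt, t_end - t)             # time-weighted tally
        t += dt
        if random.random() * total < r_f:
            n_a, n_b = n_a - 1, n_b + 1                # A -> B
        else:
            n_a, n_b = n_a + 1, n_b - 1                # B -> A
    return b_time / t_end

mean_b = kmc_isomerization(1000, 0)   # equilibrium ~ 1000 * k_f/(k_f + k_r)
```

    Stiffness arises when one rate (say k_f) dwarfs the others, forcing tiny dt steps; the rescaling the paper proposes lowers such quasi-equilibrated rates so the same loop reaches steady state in far fewer events.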

  2. Stationary Wavelet-based Two-directional Two-dimensional Principal Component Analysis for EMG Signal Classification

    NASA Astrophysics Data System (ADS)

    Ji, Yi; Sun, Shanlin; Xie, Hong-Bo

    2017-06-01

    Discrete wavelet transform (WT) followed by principal component analysis (PCA) has been a powerful approach for the analysis of biomedical signals. Wavelet coefficients at various scales and channels were usually transformed into a one-dimensional array, causing issues such as the curse of dimensionality dilemma and small sample size problem. In addition, lack of time-shift invariance of WT coefficients can be modeled as noise and degrades the classifier performance. In this study, we present a stationary wavelet-based two-directional two-dimensional principal component analysis (SW2D2PCA) method for the efficient and effective extraction of essential feature information from signals. Time-invariant multi-scale matrices are constructed in the first step. The two-directional two-dimensional principal component analysis then operates on the multi-scale matrices to reduce the dimension, rather than vectors in conventional PCA. Results are presented from an experiment to classify eight hand motions using 4-channel electromyographic (EMG) signals recorded in healthy subjects and amputees, which illustrates the efficiency and effectiveness of the proposed method for biomedical signal analysis.
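
    The two-directional two-dimensional PCA step operates on matrices directly rather than on flattened vectors: right-projection directions come from the column covariance and left-projection directions from the row covariance. A sketch on random stand-in matrices (the stationary-wavelet construction of the multi-scale matrices is omitted; shapes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

def two_directional_2dpca(mats, p, q):
    """(2D)^2-PCA sketch: compress each matrix A to Z.T @ A @ X, where X/Z are
    top eigenvectors of the column/row covariance of the training matrices."""
    A = np.stack(mats)
    D = A - A.mean(axis=0)
    G_col = np.mean([d.T @ d for d in D], axis=0)   # right (column) covariance
    G_row = np.mean([d @ d.T for d in D], axis=0)   # left (row) covariance
    X = np.linalg.eigh(G_col)[1][:, ::-1][:, :q]    # eigh is ascending; reverse
    Z = np.linalg.eigh(G_row)[1][:, ::-1][:, :p]
    return [Z.T @ m @ X for m in mats]

# Random matrices stand in for multi-scale wavelet coefficient matrices.
mats = [rng.standard_normal((16, 12)) for _ in range(30)]
feats = two_directional_2dpca(mats, p=4, q=3)       # each 16 x 12 reduced to 4 x 3
```

    Keeping the matrix structure avoids the huge covariance matrix (and small-sample problem) that vectorizing the wavelet coefficients would create.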

  3. Thermodynamic analysis of the efficiency of high-temperature steam electrolysis system for hydrogen production

    NASA Astrophysics Data System (ADS)

    Mingyi, Liu; Bo, Yu; Jingming, Xu; Jing, Chen

    High-temperature steam electrolysis (HTSE), a reversible process of the solid oxide fuel cell (SOFC) in principle, is a promising method for highly efficient large-scale hydrogen production. In our study, the overall efficiency of the HTSE system was calculated through electrochemical and thermodynamic analysis. A thermodynamic model for the efficiency of the HTSE system was established, and the quantitative effects of three key parameters, electrical efficiency (η_el), electrolysis efficiency (η_es), and thermal efficiency (η_th), on the overall efficiency (η_overall) of the HTSE system were investigated. Results showed that the contributions of η_el, η_es, and η_th to the overall efficiency were about 70%, 22%, and 8%, respectively. As temperatures increased from 500 °C to 1000 °C, the effect of η_el on η_overall decreased gradually and the η_es effect remained almost constant, while the η_th effect increased gradually. The overall efficiency of the high-temperature gas-cooled reactor (HTGR) coupled with the HTSE system under different conditions was also calculated. With the increase of electrical, electrolysis, and thermal efficiency, the overall efficiencies were anticipated to increase from 33% to a maximum of 59% at 1000 °C, which is more than twice that of conventional alkaline water electrolysis.

  4. Multi-objective shape optimization of runner blade for Kaplan turbine

    NASA Astrophysics Data System (ADS)

    Semenova, A.; Chirkov, D.; Lyutov, A.; Chemy, S.; Skorospelov, V.; Pylev, I.

    2014-03-01

    Automatic runner shape optimization based on extensive CFD analysis proved to be a useful design tool in hydraulic turbomachinery. Previously the authors developed an efficient method for Francis runner optimization. It was successfully applied to the design of several runners with different specific speeds. In present work this method is extended to the task of a Kaplan runner optimization. Despite of relatively simpler blade shape, Kaplan turbines have several features, complicating the optimization problem. First, Kaplan turbines normally operate in a wide range of discharges, thus CFD analysis of each variant of the runner should be carried out for several operation points. Next, due to a high specific speed, draft tube losses have a great impact on the overall turbine efficiency, and thus should be accurately evaluated. Then, the flow in blade tip and hub clearances significantly affects the velocity profile behind the runner and draft tube behavior. All these features are accounted in the present optimization technique. Parameterization of runner blade surface using 24 geometrical parameters is described in details. For each variant of runner geometry steady state three-dimensional turbulent flow computations are carried out in the domain, including wicket gate, runner, draft tube, blade tip and hub clearances. The objectives are maximization of efficiency in best efficiency and high discharge operation points, with simultaneous minimization of cavitation area on the suction side of the blade. Multiobjective genetic algorithm is used for the solution of optimization problem, requiring the analysis of several thousands of runner variants. The method is applied to optimization of runner shape for several Kaplan turbines with different heads.

  5. Flotation removal of the microalga Nannochloropsis sp. using Moringa protein-oil emulsion: A novel green approach.

    PubMed

    Kandasamy, Ganesan; Shaleh, Sitti Raehanah Muhamad

    2018-01-01

    A new approach to recover microalgae from aqueous medium using a bio-flotation method is reported. The method involves utilizing a Moringa protein extract - oil emulsion (MPOE) for flotation removal of Nannochloropsis sp. The effect of various factors has been assessed using this method, including operating parameters such as pH, MPOE dose, algae concentration and mixing time. A maximum flotation efficiency of 86.5% was achieved without changing the pH condition of the algal medium. Moreover, zeta potential analysis showed a marked difference in the zeta potential values when the MPOE dose concentration was increased. Under the optimum conditions of MPOE dosage of 50 mL/L, pH 8, and mixing time of 4 min, a flotation efficiency of greater than 86% was accomplished. The morphology of the algal flocs produced by the protein-oil emulsion flocculant was characterized by microscopy. This flotation method is not only simple, but also an efficient method for harvesting microalgae from culture medium. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. A polynomial chaos approach to the analysis of vehicle dynamics under uncertainty

    NASA Astrophysics Data System (ADS)

    Kewlani, Gaurav; Crawford, Justin; Iagnemma, Karl

    2012-05-01

    The ability of ground vehicles to quickly and accurately analyse their dynamic response to a given input is critical to their safety and efficient autonomous operation. In field conditions, significant uncertainty is associated with terrain and/or vehicle parameter estimates, and this uncertainty must be considered in the analysis of vehicle motion dynamics. Here, polynomial chaos approaches that explicitly consider parametric uncertainty during modelling of vehicle dynamics are presented. They are shown to be computationally more efficient than the standard Monte Carlo scheme, and experimental results compared with the simulation results performed on ANVEL (a vehicle simulator) indicate that the method can be utilised for efficient and accurate prediction of vehicle motion in realistic scenarios.
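    The efficiency claim over Monte Carlo can be illustrated with a minimal one-dimensional polynomial chaos expansion. The response function below is an invented stand-in for the vehicle-dynamics model, with a single standard-normal uncertain parameter; the statistics fall out of nine model evaluations instead of hundreds of thousands.

```python
import numpy as np
from math import factorial, sqrt, pi
from numpy.polynomial import hermite_e as He   # probabilists' Hermite

# Stand-in for an expensive dynamics simulation: scalar response of one
# uncertain parameter xi ~ N(0, 1).
def f(xi):
    return np.sin(xi) + 0.5 * xi ** 2

order = 8
nodes, weights = He.hermegauss(order + 1)       # Gauss-Hermite_e rule

# Galerkin projection: c_k = E[f(xi) He_k(xi)] / k!
c = [np.sum(weights * f(nodes) * He.hermeval(nodes, [0.0] * k + [1.0]))
     / (sqrt(2.0 * pi) * factorial(k))
     for k in range(order + 1)]

# Statistics come directly from the coefficients (9 model evaluations total).
mean_pce = c[0]
var_pce = sum(c[k] ** 2 * factorial(k) for k in range(1, order + 1))

# Monte Carlo needs orders of magnitude more evaluations for similar accuracy.
samples = f(np.random.default_rng(0).standard_normal(200_000))
mean_mc, var_mc = samples.mean(), samples.var()
```

    For this response, the exact mean is 0.5 and the exact variance is (1 - e⁻²)/2 + 0.5; the order-8 expansion matches both to high precision.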

  7. Research related to improved computer aided design software package. [comparative efficiency of finite, boundary, and hybrid element methods in elastostatics

    NASA Technical Reports Server (NTRS)

    Walston, W. H., Jr.

    1986-01-01

    The comparative computational efficiencies of the finite element (FEM), boundary element (BEM), and hybrid boundary element-finite element (HVFEM) analysis techniques are evaluated for representative bounded domain interior and unbounded domain exterior problems in elastostatics. Computational efficiency is carefully defined in this study as the computer time required to attain a specified level of solution accuracy. The study found the FEM superior to the BEM for the interior problem, while the reverse was true for the exterior problem. The hybrid analysis technique was found to be comparable or superior to both the FEM and BEM for both the interior and exterior problems.

  8. Probabilistic structural analysis using a general purpose finite element program

    NASA Astrophysics Data System (ADS)

    Riha, D. S.; Millwater, H. R.; Thacker, B. H.

    1992-07-01

    This paper presents an accurate and efficient method to predict the probabilistic response for structural response quantities, such as stress, displacement, natural frequencies, and buckling loads, by combining the capabilities of MSC/NASTRAN, including design sensitivity analysis and fast probability integration. Two probabilistic structural analysis examples have been performed and verified by comparison with Monte Carlo simulation of the analytical solution. The first example consists of a cantilevered plate with several point loads. The second example is a probabilistic buckling analysis of a simply supported composite plate under in-plane loading. The coupling of MSC/NASTRAN and fast probability integration is shown to be orders of magnitude more efficient than Monte Carlo simulation with excellent accuracy.
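    The efficiency gap between fast probability integration and Monte Carlo can be seen on a linear limit state with normal variables, where the first-order reliability result is exact. This is a generic textbook computation, not the paper's MSC/NASTRAN coupling; the numbers are illustrative.

```python
import math
import random

# Limit state g = R - S (capacity minus load); failure when g < 0.
muR, sR = 10.0, 1.0
muS, sS = 6.0, 1.5

# Analytic / FORM result: reliability index beta and pf = Phi(-beta).
beta = (muR - muS) / math.hypot(sR, sS)
pf_form = 0.5 * math.erfc(beta / math.sqrt(2))

# Crude Monte Carlo needs roughly 100/pf samples just to resolve pf.
rng = random.Random(42)
n = 200_000
fails = sum(1 for _ in range(n)
            if rng.gauss(muR, sR) - rng.gauss(muS, sS) < 0)
pf_mc = fails / n
```

    For a failure probability near 1%, the closed-form evaluation costs two function calls while the sampling estimate above already needs 200,000 realizations for a few percent relative error.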

  9. Design of a high altitude long endurance flying-wing solar-powered unmanned air vehicle

    NASA Astrophysics Data System (ADS)

    Alsahlani, A. A.; Johnston, L. J.; Atcliffe, P. A.

    2017-06-01

    The low-Reynolds number environment of high-altitude flight places severe demands on the aerodynamic design and stability and control of a high-altitude, long-endurance (HALE) unmanned air vehicle (UAV). The aerodynamic efficiency of a flying-wing configuration makes it an attractive design option for such an application and is investigated in the present work. The proposed configuration has a high-aspect-ratio, swept-wing planform, the wing sweep being necessary to provide an adequate moment arm for outboard longitudinal and lateral control surfaces. A design optimization framework is developed under a MATLAB environment, combining aerodynamic, structural, and stability analysis. Low-order analysis tools are employed to facilitate efficient computations, which is important when there are multiple optimization loops for the various engineering analyses. In particular, a vortex-lattice method is used to compute the wing planform aerodynamics, coupled to a two-dimensional (2D) panel method to derive aerofoil sectional characteristics. Integral boundary-layer methods are coupled to the panel method in order to predict flow separation boundaries during the design iterations. A quasi-analytical method is adapted for application to flying-wing configurations to predict the wing weight, and a linear finite-beam element approach is used for structural analysis of the wing-box. Stability is a particular concern in the low-density environment of high-altitude flight for flying-wing aircraft, and so provision of adequate directional stability and control power forms part of the optimization process. At present, a modified genetic algorithm is used in all of the optimization loops. Each of the low-order engineering analysis tools is validated using higher-order methods to provide confidence in the use of these computationally efficient tools in the present design-optimization framework. This paper includes the results of employing the present optimization tools in the design of a HALE flying-wing UAV, indicating that this is a viable design configuration option.

  10. Multi-disciplinary optimization of aeroservoelastic systems

    NASA Technical Reports Server (NTRS)

    Karpel, Mardechay

    1992-01-01

    The purpose of the research project was to continue the development of new methods for efficient aeroservoelastic analysis and optimization. The main targets were as follows: to complete the development of analytical tools for the investigation of flutter with large stiffness changes; to continue the work on efficient continuous gust response and sensitivity derivatives; and to advance the techniques for calculating dynamic loads with control and unsteady aerodynamic effects. An efficient and highly accurate mathematical model for time-domain analysis of flutter during which large structural changes occur was developed in cooperation with Carol D. Wieseman of NASA LaRC. The model was based on the second-year work 'Modal Coordinates for Aeroelastic Analysis with Large Local Structural Variations'. The work on continuous gust response was completed. An abstract of the paper 'Continuous Gust Response and Sensitivity Derivatives Using State-Space Models' was submitted for presentation at the 33rd Israel Annual Conference on Aviation and Astronautics, Feb. 1993; the abstract is given in Appendix A. The work extends the optimization model to deal with continuous gust objectives in a way that facilitates their inclusion in the efficient multi-disciplinary optimization scheme. Currently under development is work designed to extend the analysis and optimization capabilities to loads and stress considerations, namely aircraft dynamic loads in response to impulsive and non-impulsive excitation. This work extends the formulations of the mode-displacement and summation-of-forces methods to include modes with significant local distortions, and load modes. An abstract of the paper 'Structural Dynamic Loads in Response to Impulsive Excitation' is given in Appendix B. Another work performed this year under the grant was 'Size-Reduction Techniques for the Determination of Efficient Aeroservoelastic Models', given in Appendix C.

  11. Evolution of efficient methods to sample lead sources, such as house dust and hand dust, in the homes of children

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Que Hee, S.S.; Peace, B.; Clark, C.S.

    Efficient sampling methods to recover lead-containing house dust and hand dust have been evolved so that sufficient lead is collected for analysis and to ensure that correlational analyses linking these two parameters to blood lead are not dependent on the efficiency of sampling. Precise collection of loose house dust from a 1-unit area (484 cm²) with a Tygon or stainless steel sampling tube connected to a portable sampling pump (1.2 to 2.5 liters/min) required repetitive sampling (three times). The Tygon tube sampling technique for loose house dust <177 µm in diameter was around 72% efficient with respect to dust weight and lead collection. A representative house dust contained 81% of its total weight in this fraction. A single handwipe for applied loose hand dust was not acceptably efficient or precise, and at least three wipes were necessary to achieve recoveries of >80% of the lead applied. House dusts of different particle sizes <246 µm adhered equally well to hands. Analysis of lead-containing material usually required at least three digestions/decantations using hot plate or microwave techniques to allow at least 90% of the lead to be recovered. It was recommended that other investigators validate their handwiping, house dust sampling, and digestion techniques to facilitate comparison of results across studies. The final methodology for the Cincinnati longitudinal study was three sampling passes for surface dust using a stainless steel sampling tube; three microwave digestions/decantations for analysis of dust and paint; and three wipes with handwipes, with one digestion/decantation for the analysis of six handwipes together.

  12. Secure and Efficient Regression Analysis Using a Hybrid Cryptographic Framework: Development and Evaluation.

    PubMed

    Sadat, Md Nazmus; Jiang, Xiaoqian; Aziz, Md Momin Al; Wang, Shuang; Mohammed, Noman

    2018-03-05

    Machine learning is an effective data-driven tool that is being widely used to extract valuable patterns and insights from data. Specifically, predictive machine learning models are very important in health care for clinical data analysis. The machine learning algorithms that generate predictive models often require pooling data from different sources to discover statistical patterns or correlations among different attributes of the input data. The primary challenge is to fulfill one major objective: preserving the privacy of individuals while discovering knowledge from data. Our objective was to develop a hybrid cryptographic framework for performing regression analysis over distributed data in a secure and efficient way. Existing secure computation schemes are not suitable for processing the large-scale data that are used in cutting-edge machine learning applications. We designed, developed, and evaluated a hybrid cryptographic framework which can securely perform regression analysis, a fundamental machine learning algorithm, using somewhat homomorphic encryption and a newly introduced secure hardware component, Intel Software Guard Extensions (Intel SGX), to ensure both privacy and efficiency at the same time. Experimental results demonstrate that our proposed method provides a better trade-off between security and efficiency than solely secure hardware-based methods. Moreover, there is no approximation error: the computed model parameters are identical to the plaintext results. To the best of our knowledge, a secure computation model using a hybrid cryptographic framework that leverages both somewhat homomorphic encryption and Intel SGX has not previously been proposed or evaluated. Our proposed framework ensures data security and computational efficiency at the same time. ©Md Nazmus Sadat, Xiaoqian Jiang, Md Momin Al Aziz, Shuang Wang, Noman Mohammed. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 05.03.2018.
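    The "no approximation error" claim rests on a standard observation: linear regression over pooled data reduces to aggregating each site's sufficient statistics XᵀX and Xᵀy, which reproduces the centralized fit exactly. The sketch below shows that mathematical core only; the homomorphic-encryption and SGX layers that protect the aggregates in the paper are omitted, and all names and data are illustrative.

```python
import numpy as np

def local_stats(X, y):
    # Each site shares only its sufficient statistics, not raw records.
    # (In the paper these aggregates would be protected by homomorphic
    # encryption / SGX; that layer is omitted in this sketch.)
    return X.T @ X, X.T @ y

def aggregate_fit(stats):
    # Sum the per-site statistics and solve the pooled normal equations.
    XtX = sum(s[0] for s in stats)
    Xty = sum(s[1] for s in stats)
    return np.linalg.solve(XtX, Xty)

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
beta_true = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ beta_true + 0.1 * rng.normal(size=300)

# Split the data across three "institutions".
parts = np.split(np.arange(300), 3)
stats = [local_stats(X[idx], y[idx]) for idx in parts]
beta_fed = aggregate_fit(stats)

# Centralized (plaintext) least squares for comparison.
beta_central = np.linalg.lstsq(X, y, rcond=None)[0]
```

    Because both paths solve the same normal equations, the distributed estimate equals the centralized one up to floating-point rounding.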

  13. Antibody biosensors for spoilage yeast detection based on impedance spectroscopy.

    PubMed

    Tubía, I; Paredes, J; Pérez-Lorenzo, E; Arana, S

    2018-04-15

    Brettanomyces is a yeast genus responsible for wine and cider spoilage, producing volatile phenols that result in off-odors and a loss of fruity sensorial qualities. Current commercial detection methods for these spoilage species suffer from frequent false positives, long culture times and fungal contamination. In this work, an interdigitated electrode (IDE) biosensor was created to detect Brettanomyces using immunological reactions and impedance spectroscopy analysis. To promote efficient antibody immobilization on the electrodes' surface and to decrease non-specific adsorption, a self-assembled monolayer (SAM) was developed. An impedance spectroscopy analysis over four yeast strains confirmed our device's increased efficacy. Compared to label-free sensors, antibody biosensors showed a higher relative impedance. The results also suggest that these biosensors could be a promising method to monitor some spoilage yeasts, offering an efficient alternative to laborious and expensive traditional methods. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Theoretical, thermodynamic and electrochemical analysis of biotin drug as an impending corrosion inhibitor for mild steel in 15% hydrochloric acid

    PubMed Central

    Xu, Xihua; Sun, Zhipeng; Ansari, K. R.; Lin, Yuanhua

    2017-01-01

    The corrosion mitigation efficiency of the biotin drug for mild steel in 15% hydrochloric acid was thoroughly investigated by weight loss and electrochemical methods. The surface morphology was studied by contact angle, scanning electrochemical microscopy, atomic force microscopy and scanning electron microscopy methods. Quantum chemical calculations and Fukui analysis were performed to correlate the experimental and theoretical data. The influence of inhibitor concentration, immersion time, temperature, activation energy, enthalpy and entropy is reported. The mitigation efficiencies of biotin obtained by all methods were in good correlation with each other. Polarization studies revealed that biotin acted as a mixed inhibitor. The adsorption of biotin was found to obey the Langmuir adsorption isotherm. Surface studies showed the hydrophobic nature of the steel with the inhibitor and confirmed the formation of a film on the metal surface that reduced the corrosion rate. PMID:29308235

  15. Feasibility and Utility of Lexical Analysis for Occupational Health Text.

    PubMed

    Harber, Philip; Leroy, Gondy

    2017-06-01

    Assess feasibility and potential utility of natural language processing (NLP) for storing and analyzing occupational health data. Basic NLP lexical analysis methods were applied to 89,000 Mine Safety and Health Administration (MSHA) free text records. Steps included tokenization, term and co-occurrence counts, term annotation, and identifying exposure-health effect relationships. Presence of terms in the Unified Medical Language System (UMLS) was assessed. The methods efficiently demonstrated common exposures, health effects, and exposure-injury relationships. Many workplace terms are not present in UMLS or map inaccurately. Use of free text rather than narrowly defined numerically coded fields is feasible, flexible, and efficient. It has potential to encourage workers and clinicians to provide more data and to support automated knowledge creation. The lexical method used is easily generalizable to other areas. The UMLS vocabularies should be enhanced to be relevant to occupational health.
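    The lexical pipeline described (tokenization, term counts, co-occurrence, exposure-effect pairing) can be sketched in a few lines. The narratives and term lists below are invented stand-ins, not MSHA records or a UMLS vocabulary.

```python
import re
from collections import Counter

# Invented free-text injury narratives (stand-ins for MSHA records).
records = [
    "Employee inhaled silica dust while drilling; reports shortness of breath.",
    "Loud noise from crusher; operator complains of hearing loss.",
    "Silica dust exposure during blasting caused coughing.",
]

# Hand-built annotation lists standing in for a curated vocabulary.
EXPOSURES = {"dust", "noise", "silica"}
EFFECTS = {"breath", "hearing", "coughing", "loss"}

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

term_counts = Counter()
cooc = Counter()            # exposure/effect pairs co-occurring in a record
for rec in records:
    toks = set(tokenize(rec))
    term_counts.update(toks)
    for e in toks & EXPOSURES:
        for h in toks & EFFECTS:
            cooc[(e, h)] += 1
```

    Record-level co-occurrence is the crudest possible relation extractor, but as the abstract notes, even this level of analysis surfaces common exposures, health effects, and their pairings at scale.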

  16. ACCESS 3. Approximation concepts code for efficient structural synthesis: User's guide

    NASA Technical Reports Server (NTRS)

    Fleury, C.; Schmit, L. A., Jr.

    1980-01-01

    A user's guide is presented for ACCESS-3, a research oriented program which combines dual methods and a collection of approximation concepts to achieve excellent efficiency in structural synthesis. The finite element method is used for structural analysis and dual algorithms of mathematical programming are applied in the design optimization procedure. This program retains all of the ACCESS-2 capabilities and the data preparation formats are fully compatible. Four distinct optimizer options were added: interior point penalty function method (NEWSUMT); second order primal projection method (PRIMAL2); second order Newton-type dual method (DUAL2); and first order gradient projection-type dual method (DUAL1). A pure discrete and mixed continuous-discrete design variable capability, and zero order approximation of the stress constraints are also included.

  17. Fetal source extraction from magnetocardiographic recordings by dependent component analysis

    NASA Astrophysics Data System (ADS)

    de Araujo, Draulio B.; Kardec Barros, Allan; Estombelo-Montesco, Carlos; Zhao, Hui; Roque da Silva Filho, A. C.; Baffa, Oswaldo; Wakai, Ronald; Ohnishi, Noboru

    2005-10-01

    Fetal magnetocardiography (fMCG) has been extensively reported in the literature as a non-invasive, prenatal technique that can be used to monitor various functions of the fetal heart. However, fMCG signals often have low signal-to-noise ratio (SNR) and are contaminated by strong interference from the mother's magnetocardiogram signal. A promising, efficient tool for extracting signals, even under low SNR conditions, is blind source separation (BSS), or independent component analysis (ICA). Herein we propose an algorithm based on a variation of ICA, where the signal of interest is extracted using a time delay obtained from an autocorrelation analysis. We model the system using autoregression, and identify the signal component of interest from the poles of the autocorrelation function. We show that the method is effective in removing the maternal signal, and is computationally efficient. We also compare our results to more established ICA methods, such as FastICA.

  18. Theoretical and software considerations for general dynamic analysis using multilevel substructured models

    NASA Technical Reports Server (NTRS)

    Schmidt, R. J.; Dodds, R. H., Jr.

    1985-01-01

    The dynamic analysis of complex structural systems using the finite element method and multilevel substructured models is presented. The fixed-interface method is selected for substructure reduction because of its efficiency, accuracy, and adaptability to restart and reanalysis. This method is extended to the reduction of substructures which are themselves composed of reduced substructures. The implementation and performance of the method in a general purpose software system are emphasized. Solution algorithms consistent with the chosen data structures are presented. It is demonstrated that successful finite element software requires the use of software executives to supplement the algorithmic language. The complexity of the implementation of restart and reanalysis procedures illustrates the need for executive systems to support the noncomputational aspects of the software. It is shown that significant computational efficiencies can be achieved through proper use of substructuring and reduction techniques without sacrificing solution accuracy. The restart and reanalysis capabilities and the flexible procedures for multilevel substructured modeling give economical yet accurate analyses of complex structural systems.

  19. Triangular covariance factorizations for Kalman filtering. Ph.D. Thesis - Calif. Univ.

    NASA Technical Reports Server (NTRS)

    Thornton, C. L.

    1976-01-01

    An improved computational form of the discrete Kalman filter is derived using an upper triangular factorization of the error covariance matrix. The covariance P is factored such that P = UDU^T, where U is unit upper triangular and D is diagonal. Recursions are developed for propagating the U-D covariance factors together with the corresponding state estimate. The resulting algorithm, referred to as the U-D filter, combines the superior numerical precision of square root filtering techniques with an efficiency comparable to that of Kalman's original formula. Moreover, this method is easily implemented and involves no more computer storage than the Kalman algorithm. These characteristics make the U-D method an attractive realtime filtering technique. A new covariance error analysis technique is obtained from an extension of the U-D filter equations. This evaluation method is flexible and efficient and may provide significantly improved numerical results. Cost comparisons show that for a large class of problems the U-D evaluation algorithm is noticeably less expensive than conventional error analysis methods.
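    The factorization at the heart of the U-D filter can be computed directly. The sketch below factors a symmetric positive-definite P into P = U diag(d) Uᵀ with U unit upper triangular, processing columns right-to-left; it shows the decomposition only, not Thornton's time- and measurement-update recursions.

```python
import numpy as np

def udu_factor(P):
    """Factor symmetric positive-definite P as P = U @ diag(d) @ U.T,
    with U unit upper triangular. Columns are processed right-to-left."""
    P = np.asarray(P, dtype=float)
    n = P.shape[0]
    U = np.eye(n)
    d = np.zeros(n)
    for j in range(n - 1, -1, -1):
        # Diagonal element: strip contributions of already-factored columns.
        d[j] = P[j, j] - np.sum(U[j, j + 1:] ** 2 * d[j + 1:])
        for i in range(j - 1, -1, -1):
            U[i, j] = (P[i, j]
                       - np.sum(U[i, j + 1:] * U[j, j + 1:] * d[j + 1:])) / d[j]
    return U, d

# Build a random SPD covariance and factor it.
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
P = A @ A.T + 5.0 * np.eye(5)
U, d = udu_factor(P)
```

    Propagating U and d in place of P is what gives the filter square-root-class numerical precision at near-Kalman cost: the factors cannot drift into an indefinite covariance the way a directly propagated P can.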

  20. Distributed collaborative probabilistic design for turbine blade-tip radial running clearance using support vector machine of regression

    NASA Astrophysics Data System (ADS)

    Fei, Cheng-Wei; Bai, Guang-Chen

    2014-12-01

    To improve the computational precision and efficiency of probabilistic design for mechanical dynamic assemblies such as the blade-tip radial running clearance (BTRRC) of a gas turbine, a distributed collaborative probabilistic design method based on support vector machine regression (called DCSRM) is proposed by integrating the distributed collaborative response surface method with the support vector machine regression model. The mathematical model of DCSRM is established and its probabilistic design concept is introduced. The dynamic assembly probabilistic design of an aeroengine high-pressure turbine (HPT) BTRRC is carried out to verify the proposed DCSRM. The analysis results reveal that the optimal static blade-tip clearance of the HPT is obtained for BTRRC design, improving the performance and reliability of the aeroengine. A comparison of methods shows that DCSRM offers high computational accuracy and high computational efficiency in BTRRC probabilistic analysis. The present research offers an effective way for the reliability design of mechanical dynamic assemblies and enriches mechanical reliability theory and methods.

  1. Spectral Regression Discriminant Analysis for Hyperspectral Image Classification

    NASA Astrophysics Data System (ADS)

    Pan, Y.; Wu, J.; Huang, H.; Liu, J.

    2012-08-01

    Dimensionality reduction algorithms, which aim to select a small set of efficient and discriminant features, have attracted great attention for hyperspectral image classification. Manifold learning methods such as Locally Linear Embedding, Isomap, and Laplacian Eigenmap are popular for dimensionality reduction. However, a disadvantage of many manifold learning methods is that their computations usually involve eigen-decomposition of dense matrices, which is expensive in both time and memory. In this paper, we introduce a new dimensionality reduction method, called Spectral Regression Discriminant Analysis (SRDA). SRDA casts the problem of learning an embedding function into a regression framework, which avoids eigen-decomposition of dense matrices. Moreover, with the regression-based framework, different kinds of regularizers can be naturally incorporated into our algorithm, making it more flexible. It makes efficient use of data points to discover the intrinsic discriminant structure in the data. Experimental results on the Washington DC Mall and AVIRIS Indian Pines hyperspectral data sets demonstrate the effectiveness of the proposed method.
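    The essence of the spectral-regression trick is that, for c classes, the discriminant embedding is spanned by class-indicator response vectors, so the dense eigenproblem of LDA can be replaced by a ridge regression. The sketch below shows the two-class case on toy Gaussian data standing in for hyperspectral pixels; it follows the general idea, not the paper's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two Gaussian classes in 20-D (stand-in for hyperspectral pixel vectors).
X0 = rng.normal(loc=0.0, size=(100, 20))
X1 = rng.normal(loc=1.0, size=(100, 20))
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)

# Spectral-regression step: use a class-indicator response vector and
# solve a regularized least-squares problem instead of a dense
# eigen-decomposition.
r = np.where(y == 0, 1.0, -1.0)           # two-class response
Xc = X - X.mean(axis=0)
alpha = 1.0                               # ridge regularizer
w = np.linalg.solve(Xc.T @ Xc + alpha * np.eye(20), Xc.T @ r)

# 1-D projection; classify by the nearer projected class mean.
z = Xc @ w
m0, m1 = z[y == 0].mean(), z[y == 1].mean()
pred = np.where(np.abs(z - m0) < np.abs(z - m1), 0, 1)
acc = (pred == y).mean()
```

    The solve above is a 20x20 linear system regardless of the number of samples, which is the source of the scalability advantage; swapping `alpha` or the regularizer type is the flexibility the abstract refers to.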

  2. Multiphysics elastodynamic finite element analysis of space debris deorbit stability and efficiency by electrodynamic tethers

    NASA Astrophysics Data System (ADS)

    Li, Gangqiang; Zhu, Zheng H.; Ruel, Stephane; Meguid, S. A.

    2017-08-01

    This paper developed a new multiphysics finite element method for the elastodynamic analysis of space debris deorbit by a bare flexible electrodynamic tether. Orbital motion limited theory and dynamics of flexible electrodynamic tethers are discretized by the finite element method, where the motional electric field is variant along the tether and coupled with tether deflection and motion. Accordingly, the electrical current and potential bias profiles of tether are solved together with the tether dynamics by the nodal position finite element method. The newly proposed multiphysics finite element method is applied to analyze the deorbit dynamics of space debris by electrodynamic tethers with a two-stage energy control strategy to ensure an efficient and stable deorbit process. Numerical simulations are conducted to study the coupled effect between the motional electric field and the tether dynamics. The results reveal that the coupling effect has a significant influence on the tether stability and the deorbit performance. It cannot be ignored when the libration and deflection of the tether are significant.

  3. Resolution-Enhanced Harmonic and Interharmonic Measurement for Power Quality Analysis in Cyber-Physical Energy System.

    PubMed

    Liu, Yanchi; Wang, Xue; Liu, Youda; Cui, Sujin

    2016-06-27

    Power quality analysis issues, especially the measurement of harmonic and interharmonic in cyber-physical energy systems, are addressed in this paper. As new situations are introduced to the power system, the impact of electric vehicles, distributed generation and renewable energy has introduced extra demands to distributed sensors, waveform-level information and power quality data analytics. Harmonics and interharmonics, as the most significant disturbances, require carefully designed detection methods for an accurate measurement of electric loads whose information is crucial to subsequent analyzing and control. This paper gives a detailed description of the power quality analysis framework in networked environment and presents a fast and resolution-enhanced method for harmonic and interharmonic measurement. The proposed method first extracts harmonic and interharmonic components efficiently using the single-channel version of Robust Independent Component Analysis (RobustICA), then estimates the high-resolution frequency from three discrete Fourier transform (DFT) samples with little additional computation, and finally computes the amplitudes and phases with the adaptive linear neuron network. The experiments show that the proposed method is time-efficient and leads to a better accuracy of the simulated and experimental signals in the presence of noise and fundamental frequency deviation, thus providing a deeper insight into the (inter)harmonic sources or even the whole system.
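    "Estimating the high-resolution frequency from three DFT samples" is a classical interpolation problem; the sketch below uses Jacobsen's three-sample estimator as one such formula, without claiming it is the paper's exact expression, and omits the RobustICA separation stage entirely. The signal parameters are illustrative.

```python
import numpy as np

fs, N = 5000.0, 256
f_true = (20 + 0.3) * fs / N          # deliberately between DFT bins
n = np.arange(N)
# Complex tone: no negative-frequency leakage, so the three-bin
# estimator stays clean; a real sinusoid would add a small extra bias.
x = np.exp(2j * np.pi * f_true * n / fs)

X = np.fft.fft(x)
k = int(np.argmax(np.abs(X[:N // 2])))    # coarse peak bin

# Jacobsen's estimator: fractional bin offset from three DFT samples.
delta = ((X[k - 1] - X[k + 1]) / (2 * X[k] - X[k - 1] - X[k + 1])).real
f_est = (k + delta) * fs / N
```

    With a bin spacing of fs/N ≈ 19.5 Hz, the three-sample correction recovers the off-grid frequency to a small fraction of a bin at negligible extra cost over the FFT itself.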

  4. Resolution-Enhanced Harmonic and Interharmonic Measurement for Power Quality Analysis in Cyber-Physical Energy System

    PubMed Central

    Liu, Yanchi; Wang, Xue; Liu, Youda; Cui, Sujin

    2016-01-01

    Power quality analysis issues, especially the measurement of harmonic and interharmonic in cyber-physical energy systems, are addressed in this paper. As new situations are introduced to the power system, the impact of electric vehicles, distributed generation and renewable energy has introduced extra demands to distributed sensors, waveform-level information and power quality data analytics. Harmonics and interharmonics, as the most significant disturbances, require carefully designed detection methods for an accurate measurement of electric loads whose information is crucial to subsequent analyzing and control. This paper gives a detailed description of the power quality analysis framework in networked environment and presents a fast and resolution-enhanced method for harmonic and interharmonic measurement. The proposed method first extracts harmonic and interharmonic components efficiently using the single-channel version of Robust Independent Component Analysis (RobustICA), then estimates the high-resolution frequency from three discrete Fourier transform (DFT) samples with little additional computation, and finally computes the amplitudes and phases with the adaptive linear neuron network. The experiments show that the proposed method is time-efficient and leads to a better accuracy of the simulated and experimental signals in the presence of noise and fundamental frequency deviation, thus providing a deeper insight into the (inter)harmonic sources or even the whole system. PMID:27355946

  5. Uranium, radium and thorium in soils with high-resolution gamma spectroscopy, MCNP-generated efficiencies, and VRF non-linear full-spectrum nuclide shape fitting

    NASA Astrophysics Data System (ADS)

    Metzger, Robert; Riper, Kenneth Van; Lasche, George

    2017-09-01

    A new method for analysis of uranium and radium in soils by gamma spectroscopy has been developed using VRF ("Visual RobFit") which, unlike traditional peak-search techniques, fits full-spectrum nuclide shapes with non-linear least-squares minimization of the chi-squared statistic. Gamma efficiency curves were developed for a 500 mL Marinelli beaker geometry as a function of soil density using MCNP. Collected spectra were then analyzed using the MCNP-generated efficiency curves and VRF to deconvolute the 90 keV peak complex of uranium and obtain 238U and 235U activities. 226Ra activity was determined either from the radon daughters if the equilibrium status is known, or directly from the deconvoluted 186 keV line. 228Ra values were determined from the 228Ac daughter activity. The method was validated by analysis of radium, thorium and uranium soil standards and by inter-comparison with other methods for radium in soils. The method allows for a rapid determination of whether a sample has been impacted by a man-made activity by comparison of the uranium and radium concentrations to those that would be expected from a natural equilibrium state.

  6. Development of low level 226Ra analysis for live fish using gamma-ray spectrometry

    NASA Astrophysics Data System (ADS)

    Chandani, Z.; Prestwich, W. V.; Byun, S. H.

    2017-06-01

    A low-level 226Ra analysis method for live fish was developed using a 4π NaI(Tl) gamma-ray spectrometer. In order to find the algorithm achieving the lowest detection limit, the gamma-ray spectrum from a 226Ra point source was collected and nine different spectral analysis methods were attempted. The lowest detection limit, 0.99 Bq for a one-hour count, was obtained when the spectrum was integrated over the energy region of 50-2520 keV. To extend the 226Ra analysis to live fish, a Monte Carlo simulation model of a cylindrical fish in a water container was built using the MCNP code. From the simulation results, the spatial distribution of the efficiency and the efficiency correction factor for the live fish model were determined. The MCNP model can be conveniently modified when a different fish or container geometry is required as the fish grow during real experiments.
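    Detection limits of this kind are conventionally quoted via Currie's formulation. The sketch below computes a minimum detectable activity from the standard Currie detection limit; the background, efficiency and counting-time numbers are illustrative placeholders, not values from the paper.

```python
import math

def currie_mda(background_counts, efficiency, count_time_s, branching=1.0):
    """Minimum detectable activity (Bq) from Currie's detection limit
    L_D = 2.71 + 4.65*sqrt(B), i.e. well-known blank with 5% false-positive
    and 5% false-negative risks."""
    l_d = 2.71 + 4.65 * math.sqrt(background_counts)
    return l_d / (efficiency * branching * count_time_s)

# Illustrative numbers (NOT from the paper): one-hour count, a broad-window
# background of 5.0e5 counts, 20% absolute detection efficiency.
mda = currie_mda(background_counts=5.0e5, efficiency=0.20, count_time_s=3600.0)
```

    The formula makes the abstract's trade-off explicit: widening the integration window raises the detected signal fraction (efficiency) but also the background B, and the optimal window is the one that minimizes the resulting MDA.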

  7. Applicability of the polynomial chaos expansion method for personalization of a cardiovascular pulse wave propagation model.

    PubMed

    Huberts, W; Donders, W P; Delhaas, T; van de Vosse, F N

    2014-12-01

    Patient-specific modeling requires model personalization, which can be achieved in an efficient manner by parameter fixing and parameter prioritization. An efficient variance-based method is using generalized polynomial chaos expansion (gPCE), but it has not been applied in the context of model personalization, nor has it ever been compared with standard variance-based methods for models with many parameters. In this work, we apply the gPCE method to a previously reported pulse wave propagation model and compare the conclusions for model personalization with that of a reference analysis performed with Saltelli's efficient Monte Carlo method. We furthermore differentiate two approaches for obtaining the expansion coefficients: one based on spectral projection (gPCE-P) and one based on least squares regression (gPCE-R). It was found that in general the gPCE yields similar conclusions as the reference analysis but at much lower cost, as long as the polynomial metamodel does not contain unnecessary high order terms. Furthermore, the gPCE-R approach generally yielded better results than gPCE-P. The weak performance of the gPCE-P can be attributed to the assessment of the expansion coefficients using the Smolyak algorithm, which might be hampered by the high number of model parameters and/or by possible non-smoothness in the output space. Copyright © 2014 John Wiley & Sons, Ltd.

  8. [Automatic Extraction and Analysis of Dosimetry Data in Radiotherapy Plans].

    PubMed

    Song, Wei; Zhao, Di; Lu, Hong; Zhang, Biyun; Ma, Jun; Yu, Dahai

    To improve the efficiency and accuracy of extraction and analysis of dosimetry data in radiotherapy plans for a batch of patients. Using the interface functions provided by the Matlab platform, a program was written to extract dosimetry data exported from the treatment planning system in DICOM RT format and to export the dose-volume data to an Excel file in an SPSS-compatible format. This method was compared with manual operation for 14 gastric carcinoma patients to validate its efficiency and accuracy. The output Excel data were compatible with SPSS in format; the dosimetry data errors for the PTV dose interval of 90%-98%, the PTV dose interval of 99%-106% and all OARs were -3.48E-5 ± 3.01E-5, -1.11E-3 ± 7.68E-4 and -7.85E-5 ± 9.91E-5, respectively. Compared with manual operation, the time required was reduced from 5.3 h to 0.19 h and the input error was reduced from 0.002 to 0. Automatic extraction of dosimetry data in DICOM RT format for batches of patients, SPSS-compatible data export, and rapid analysis were achieved in this work. This method will improve the efficiency of clinical research based on dosimetry data analysis of large numbers of patients.
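The dose-volume statistics such a program automates can be illustrated with a small numpy sketch of a cumulative dose-volume histogram and a D_f metric. Parsing the actual DICOM RT files (e.g. with a library such as pydicom) is omitted; the function names and the uniform dose grid below are hypothetical:

```python
import numpy as np

def cumulative_dvh(dose_voxels, bin_width=0.1):
    """Cumulative DVH: fraction of the structure volume receiving >= each dose level."""
    dose_voxels = np.asarray(dose_voxels, dtype=float)
    levels = np.arange(0.0, dose_voxels.max() + bin_width, bin_width)
    volume_fraction = np.array([(dose_voxels >= d).mean() for d in levels])
    return levels, volume_fraction

def dose_at_volume(levels, volume_fraction, fraction):
    """D_f: highest dose level still covering at least `fraction` of the volume."""
    return levels[volume_fraction >= fraction].max()

# Hypothetical structure with doses spread uniformly between 0 and 10 Gy,
# so roughly half the volume receives at least 5 Gy.
levels, vf = cumulative_dvh(np.linspace(0.0, 10.0, 1001))
d50 = dose_at_volume(levels, vf, 0.5)
```

Batch processing then reduces to looping this per structure and per patient and writing the rows to one spreadsheet.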

  9. IRB Process Improvements: A Machine Learning Analysis.

    PubMed

    Shoenbill, Kimberly; Song, Yiqiang; Cobb, Nichelle L; Drezner, Marc K; Mendonca, Eneida A

    2017-06-01

    Clinical research involving humans is critically important, but it is a lengthy and expensive process. Most studies require institutional review board (IRB) approval. Our objective is to identify predictors of delays or accelerations in the IRB review process and apply this knowledge to inform process change in an effort to improve IRB efficiency, transparency, consistency and communication. We analyzed timelines of protocol submissions to determine protocol or IRB characteristics associated with different processing times. Our evaluation included single variable analysis to identify significant predictors of IRB processing time and machine learning methods to predict processing times through the IRB review system. Based on initial identified predictors, changes to IRB workflow and staffing procedures were instituted and we repeated our analysis. Our analysis identified several predictors of delays in the IRB review process, including the type of IRB review to be conducted, whether a protocol falls under Veterans Administration purview, and the specific staff in charge of a protocol's review. We have identified several predictors of delays in IRB protocol review processing times using statistical and machine learning methods. Application of this knowledge to process improvement efforts in two IRBs has led to increased efficiency in protocol review. The workflow and system enhancements that are being made support our four-part goal of improving IRB efficiency, consistency, transparency, and communication.

  10. A streamlined method for analysing genome-wide DNA methylation patterns from low amounts of FFPE DNA.

    PubMed

    Ludgate, Jackie L; Wright, James; Stockwell, Peter A; Morison, Ian M; Eccles, Michael R; Chatterjee, Aniruddha

    2017-08-31

    Formalin fixed paraffin embedded (FFPE) tumor samples are a major source of DNA from patients in cancer research. However, FFPE is a challenging material to work with due to macromolecular fragmentation and nucleic acid crosslinking. FFPE tissue poses particular challenges for methylation analysis and for preparing sequencing-based libraries that rely on bisulfite conversion. Successful bisulfite conversion is a key requirement for sequencing-based methylation analysis. Here we describe a complete and streamlined workflow for preparing next generation sequencing libraries for methylation analysis from FFPE tissues. This includes counting cells from FFPE blocks and extracting DNA from FFPE slides, testing bisulfite conversion efficiency with a polymerase chain reaction (PCR) based test, preparing reduced representation bisulfite sequencing libraries, and massively parallel sequencing. The main features and advantages of this protocol are: an optimized method for extracting good quality DNA from FFPE tissues; an efficient bisulfite conversion and next generation sequencing library preparation protocol that uses 50 ng DNA from FFPE tissue; and incorporation of a PCR-based test to assess bisulfite conversion efficiency prior to sequencing. We provide a complete workflow and an integrated protocol for performing DNA methylation analysis at the genome scale, and we believe this will facilitate clinical epigenetic research that involves the use of FFPE tissue.

  11. An efficient visualization method for analyzing biometric data

    NASA Astrophysics Data System (ADS)

    Rahmes, Mark; McGonagle, Mike; Yates, J. Harlan; Henning, Ronda; Hackett, Jay

    2013-05-01

    We introduce a novel application for biometric data analysis. This technology can be used as part of a unique and systematic approach designed to augment existing processing chains. Our system provides image quality control and analysis capabilities. We show how analysis and efficient visualization are used as part of an automated process. The goal of this system is to provide a unified platform for the analysis of biometric images that reduces manual effort and increases the likelihood of a match being brought to an examiner's attention from either a manual or lights-out application. We discuss the functionality of FeatureSCOPE™, which provides an efficient tool for feature analysis and quality control of biometric extracted features. Biometric databases must be checked for accuracy across a large volume of data attributes. Our solution accelerates the review of features by a factor of up to 100. Qualitative results and cost reduction through efficient parallel visual review for quality control are shown. Our process automatically sorts and filters features for examination and packs them into a condensed view. An analyst can then rapidly page through screens of features, flagging and annotating outliers as necessary.

  12. The Evaluation of Efficiency of the Use of Machine Working Time in the Industrial Company - Case Study

    NASA Astrophysics Data System (ADS)

    Kardas, Edyta; Brožova, Silvie; Pustějovská, Pavlína; Jursová, Simona

    2017-12-01

    This paper presents an evaluation of the efficiency of machine use in a selected production company. The OEE (Overall Equipment Effectiveness) method was used for the analysis. The selected company produces tapered roller bearings. The effectiveness analysis was performed for 17 automatic grinding lines operating in the roller grinding department. The low efficiency of the machines was driven by problems with the availability of machines and devices. The causes of machine downtime on these lines were also analyzed, and three basic causes were identified: no kanban card, diamonding, and no operator. Ways to improve the use of these machines were suggested. The analysis takes into account actual results from the production process and covers a period of one calendar year.
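For reference, OEE is the product of availability, performance, and quality, each of which captures one loss category. A minimal sketch with hypothetical shift numbers (the paper's grinding-line data are not reproduced here):

```python
def oee(planned_time_min, downtime_min, ideal_cycle_time_min, total_count, good_count):
    """Overall Equipment Effectiveness = availability * performance * quality."""
    run_time = planned_time_min - downtime_min
    availability = run_time / planned_time_min                    # losses: downtime
    performance = ideal_cycle_time_min * total_count / run_time   # losses: slow cycles
    quality = good_count / total_count                            # losses: defects
    return availability * performance * quality

# Hypothetical 8-hour shift on one grinding line: 48 min of downtime
# (e.g. no kanban card, diamonding), 700 rollers at 0.5 min ideal cycle, 665 good.
score = oee(planned_time_min=480, downtime_min=48,
            ideal_cycle_time_min=0.5, total_count=700, good_count=665)
```

Because the three factors multiply, a modest loss in each compounds quickly: 90% x 81% x 95% already drops the line below the commonly cited 85% world-class benchmark.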

  13. High-efficiency high performance liquid chromatographic analysis of red wine anthocyanins.

    PubMed

    de Villiers, André; Cabooter, Deirdre; Lynen, Frédéric; Desmet, Gert; Sandra, Pat

    2011-07-22

    The analysis of anthocyanins in natural products is of significant relevance in recent times due to the recognised health benefits associated with their consumption. In red grapes and wines in particular, anthocyanins are known to contribute important properties to the sensory (colour and taste), anti-oxidant- and ageing characteristics. However, the detailed investigation of the alteration of these compounds during wine ageing is hampered by the challenges associated with the separation of grape-derived anthocyanins and their derived products. High performance liquid chromatography (HPLC) is primarily used for this purpose, often in combination with mass spectrometric (MS) detection, although conventional HPLC methods provide incomplete resolution. We have previously demonstrated how on-column inter-conversion reactions are responsible for poor chromatographic efficiency in the HPLC analysis of anthocyanins, and how an increase in temperature and decrease in particle size may improve the chromatographic performance. In the current contribution an experimental configuration for the high efficiency analysis of anthocyanins is derived using the kinetic plot method (KPM). Further, it is shown how analysis under optimal conditions, in combination with MS detection, delivers much improved separation and identification of red wine anthocyanins and their derived products. This improved analytical performance holds promise for the in-depth investigation of these influential compounds in wine during ageing. Copyright © 2011 Elsevier B.V. All rights reserved.

  14. Effects of floc and bubble size on the efficiency of the dissolved air flotation (DAF) process.

    PubMed

    Han, Mooyoung; Kim, Tschung-il; Kim, Jinho

    2007-01-01

    Dissolved air flotation (DAF) is a method for removing particles from water using micro bubbles instead of settlement. The process has proved to be successful and, since the 1960s, accepted as an alternative to the conventional sedimentation process for water and wastewater treatment. However, limited research into the process, especially the fundamental characteristics of bubbles and particles, has been carried out. The single collector collision model is not capable of determining the effects of particular characteristics, such as the size and surface charge of bubbles and particles. Han has published a set of modeling results after calculating the collision efficiency between bubbles and particles by trajectory analysis. His major conclusion was that collision efficiency is maximum when the bubbles and particles are nearly the same size but have opposite charge. However, experimental verification of this conclusion has not been carried out yet. This paper describes a new method for measuring the size of particles and bubbles developed using computational image analysis. DAF efficiency is influenced by the effect of the recycle ratio on various average floc sizes. The larger the recycle ratio, the higher the DAF efficiency at the same pressure and particle size. The treatment efficiency is also affected by the saturation pressure, because the bubble size and bubble volume concentration are controlled by the pressure. The highest efficiency is obtained when the floc size is larger than the bubble size. These results, namely that the highest collision efficiency occurs when the particles and bubbles are about the same size, are more in accordance with the trajectory model than with the white water collector model, which implies that the larger the particles, the higher is the collision efficiency.

  15. Robust Mediation Analysis Based on Median Regression

    PubMed Central

    Yuan, Ying; MacKinnon, David P.

    2014-01-01

    Mediation analysis has many applications in psychology and the social sciences. The most prevalent methods typically assume that the error distribution is normal and homoscedastic. However, this assumption may rarely be met in practice, which can affect the validity of the mediation analysis. To address this problem, we propose robust mediation analysis based on median regression. Our approach is robust to various departures from the assumption of homoscedasticity and normality, including heavy-tailed, skewed, contaminated, and heteroscedastic distributions. Simulation studies show that under these circumstances, the proposed method is more efficient and powerful than standard mediation analysis. We further extend the proposed robust method to multilevel mediation analysis, and demonstrate through simulation studies that the new approach outperforms the standard multilevel mediation analysis. We illustrate the proposed method using data from a program designed to increase reemployment and enhance mental health of job seekers. PMID:24079925
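A rough numpy-only sketch of the idea: estimate the a and b paths by median (least-absolute-deviations) regression, here approximated with iteratively reweighted least squares, and take their product as the mediated effect. This illustrates the concept rather than reproducing the authors' estimator (which also requires standard errors, e.g. via bootstrap); the simulated data are hypothetical:

```python
import numpy as np

def lad_regression(X, y, n_iter=50, eps=1e-6):
    """Approximate least-absolute-deviations (median) regression via IRLS."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        w = 1.0 / np.maximum(np.abs(y - X @ beta), eps)  # downweight large residuals
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ y)
    return beta

def mediated_effect(x, m, y):
    """Product-of-coefficients a*b with both paths fit by median regression."""
    ones = np.ones_like(x)
    a = lad_regression(np.column_stack([ones, x]), m)[1]      # path x -> m
    b = lad_regression(np.column_stack([ones, m, x]), y)[1]   # path m -> y given x
    return a * b

# Simulated heavy-tailed (Laplace-noise) data with true a = 0.5, b = 0.7,
# so the true mediated effect a*b = 0.35.
rng = np.random.default_rng(1)
x = rng.normal(size=1000)
m = 0.5 * x + rng.laplace(scale=0.5, size=1000)
y = 0.7 * m + 0.3 * x + rng.laplace(scale=0.5, size=1000)
ab = mediated_effect(x, m, y)
```

Under heavy-tailed errors like these, the median-based fit loses less efficiency than ordinary least squares, which is the paper's motivation.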

  16. Application of Analytic Hierarchy Process (AHP) in the analysis of the fuel efficiency in the automobile industry with the utilization of Natural Fiber Polymer Composites (NFPC)

    NASA Astrophysics Data System (ADS)

    Jayamani, E.; Perera, D. S.; Soon, K. H.; Bakri, M. K. B.

    2017-04-01

    A systematic method of material analysis aimed at improving fuel efficiency through the use of natural fiber reinforced polymer matrix composites in the automobile industry is proposed. A multi-factor decision analysis based on the Analytic Hierarchy Process (AHP) was executed through MATLAB to achieve improved fuel efficiency through the weight reduction of vehicular components, using an effective comparison between two engine hood designs. The reduction was simulated by utilizing natural fiber polymer composites with thermoplastic polypropylene (PP) as the matrix polymer, benchmarked against a synthetic-fiber composite component. Results showed that PP with 35% flax fiber loading achieved a 0.4% improvement in fuel efficiency, the highest among the 27 candidate fibers.
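The core AHP step, independent of the fiber-selection specifics, derives priority weights from a pairwise comparison matrix and checks its consistency. A generic numpy sketch (the 3x3 matrix and criterion names below are hypothetical, not the paper's):

```python
import numpy as np

# Saaty's random consistency index, indexed by matrix size n.
RANDOM_INDEX = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
                6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_priorities(pairwise):
    """Priority weights (principal eigenvector) and consistency ratio of an AHP matrix."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    weights = np.abs(eigvecs[:, k].real)
    weights /= weights.sum()
    ci = (eigvals[k].real - n) / (n - 1)  # consistency index from lambda_max
    ri = RANDOM_INDEX[n]
    cr = ci / ri if ri else 0.0           # CR < 0.1 is conventionally acceptable
    return weights, cr

# Hypothetical criteria comparison: weight saving > cost > availability.
w, cr = ahp_priorities([[1, 3, 5],
                        [1/3, 1, 3],
                        [1/5, 1/3, 1]])
```

The resulting weights then score each candidate material, and the consistency ratio guards against contradictory pairwise judgments.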

  17. Fast multidimensional ensemble empirical mode decomposition for the analysis of big spatio-temporal datasets.

    PubMed

    Wu, Zhaohua; Feng, Jiaxin; Qiao, Fangli; Tan, Zhe-Min

    2016-04-13

    In this big data era, it is more urgent than ever to solve two major issues: (i) fast data transmission methods that can facilitate access to data from non-local sources and (ii) fast and efficient data analysis methods that can reveal the key information from the available data for particular purposes. Although approaches in different fields to address these two questions may differ significantly, the common part must involve data compression techniques and a fast algorithm. This paper introduces the recently developed adaptive and spatio-temporally local analysis method, namely the fast multidimensional ensemble empirical mode decomposition (MEEMD), for the analysis of a large spatio-temporal dataset. The original MEEMD uses ensemble empirical mode decomposition to decompose time series at each spatial grid point and then pieces together the temporal-spatial evolution of climate variability and change on naturally separated timescales, which is computationally expensive. By taking advantage of the high efficiency of the expression using principal component analysis/empirical orthogonal function analysis for spatio-temporally coherent data, we design a lossy compression method for climate data to facilitate its non-local transmission. We also explain the basic principles behind the fast MEEMD, which decomposes principal components instead of the original grid-wise time series to speed up computation. Using a typical climate dataset as an example, we demonstrate that our newly designed methods can (i) compress data with a compression rate of one to two orders of magnitude; and (ii) speed up the MEEMD algorithm by one to two orders of magnitude. © 2016 The Authors.
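The compression idea, storing a few principal component / EOF pairs instead of the full grid-wise field, can be sketched with a truncated SVD. This is illustrative only; the paper's climate dataset and mode-selection rules are not reproduced, and the synthetic field below is hypothetical:

```python
import numpy as np

def pca_compress(field, n_modes):
    """Compress a (time, space) field to its leading PC/EOF pairs via truncated SVD."""
    mean = field.mean(axis=0)
    U, s, Vt = np.linalg.svd(field - mean, full_matrices=False)
    pcs = U[:, :n_modes] * s[:n_modes]   # principal components (time series)
    eofs = Vt[:n_modes]                  # spatial patterns (EOFs)
    return mean, pcs, eofs

def pca_reconstruct(mean, pcs, eofs):
    return mean + pcs @ eofs

# Hypothetical field: two coherent spatio-temporal modes plus weak noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 20, 200)[:, None]
x = np.linspace(0, 1, 500)[None, :]
field = np.sin(t) * np.cos(2 * np.pi * x) + 0.5 * np.cos(0.5 * t) * x
field += 0.01 * rng.normal(size=field.shape)
mean, pcs, eofs = pca_compress(field, n_modes=2)
rel_err = np.linalg.norm(field - pca_reconstruct(mean, pcs, eofs)) / np.linalg.norm(field)
```

Here 200x500 = 100,000 values are replaced by roughly 1,900 (two PC/EOF pairs plus the mean field), a ~50x compression, and the fast MEEMD then decomposes the two PC time series instead of 500 grid-point series.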

  18. Aerodynamic shape optimization using control theory

    NASA Technical Reports Server (NTRS)

    Reuther, James

    1996-01-01

    Aerodynamic shape design has long persisted as a difficult scientific challenge due to its highly nonlinear flow physics and daunting geometric complexity. However, with the emergence of Computational Fluid Dynamics (CFD) it has become possible to make accurate predictions of flows which are not dominated by viscous effects. It is thus worthwhile to explore the extension of CFD methods for flow analysis to the treatment of aerodynamic shape design. Two new aerodynamic shape design methods are developed which combine existing CFD technology, optimal control theory, and numerical optimization techniques. Flow analysis methods for the potential flow equation and the Euler equations form the basis of the two respective design methods. In each case, optimal control theory is used to derive the adjoint differential equations, the solution of which provides the necessary gradient information to a numerical optimization method much more efficiently than by conventional finite differencing. Each technique uses a quasi-Newton numerical optimization algorithm to drive an aerodynamic objective function toward a minimum. An analytic grid perturbation method is developed to modify body-fitted meshes to accommodate shape changes during the design process. Both Hicks-Henne perturbation functions and B-spline control points are explored as suitable design variables. The new methods prove to be computationally efficient and robust, and can be used for practical airfoil design including geometric and aerodynamic constraints. Objective functions are chosen to allow both inverse design to a target pressure distribution and wave drag minimization. Several design cases are presented for each method illustrating its practicality and efficiency. These include non-lifting and lifting airfoils operating at both subsonic and transonic conditions.

  19. Incorporating additional targets into learning trials for individuals with autism spectrum disorder.

    PubMed

    Nottingham, Casey L; Vladescu, Jason C; Kodak, Tiffany M

    2015-01-01

    Recently, researchers have investigated the effectiveness and efficiency of presenting secondary targets during learning trials for individuals with autism spectrum disorder (ASD). This instructional method may be more efficient than typical methods used with learners with ASD, because learners may acquire secondary targets without additional instruction. This review will discuss the recent literature on providing secondary targets during teaching trials for individuals with ASD, identify common aspects and results among these studies, and identify areas for future research. © Society for the Experimental Analysis of Behavior.

  20. Automatic cloud coverage assessment of Formosat-2 image

    NASA Astrophysics Data System (ADS)

    Hsu, Kuo-Hsien

    2011-11-01

    The Formosat-2 satellite is equipped with a high-spatial-resolution (2 m ground sampling distance) remote sensing instrument. It has been operated on a daily-revisiting mission orbit by the National Space Organization (NSPO) of Taiwan since May 21, 2004. NSPO also serves as one of the ground receiving stations, processing the received Formosat-2 images daily. The current cloud coverage assessment of Formosat-2 images in the NSPO Image Processing System generally consists of two major steps. First, an unsupervised K-means method is used to automatically estimate the cloud statistics of a Formosat-2 image. Second, the cloud coverage is estimated by manual examination. A more accurate Automatic Cloud Coverage Assessment (ACCA) method therefore increases the efficiency of the second step by providing a good prediction of the cloud statistics. In this paper, based mainly on the research results of Chang et al., Irish, and Gotoh, we propose a modified Formosat-2 ACCA method that comprises pre-processing and post-processing analysis. In the pre-processing analysis, the cloud statistics are determined using unsupervised K-means classification, Sobel's method, Otsu's method, non-cloudy pixel reexamination, and a cross-band filter method. The box-counting fractal method is applied as a post-processing tool to double-check the pre-processing results and increase the efficiency of the manual examination.
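Of the pre-processing ingredients named above, Otsu's method is the most self-contained: it picks the gray-level threshold that maximizes the between-class variance of the histogram. A generic numpy sketch applied to a synthetic bimodal image (not Formosat-2 data):

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's threshold: the gray level maximizing between-class variance."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                  # class-0 (background) probability
    mu = np.cumsum(p * np.arange(256))    # class-0 first moment
    mu_total = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_total * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.nanargmax(sigma_b2))

# Synthetic scene: dark ground around level 50, bright cloud around level 200.
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(50, 10, 5000), rng.normal(200, 10, 5000)]).clip(0, 255)
threshold = otsu_threshold(img)
cloud_fraction = (img >= threshold).mean()
```

In a cloud-assessment chain, the fraction of pixels above the threshold is one candidate estimate of the cloud statistic, to be cross-checked against the other methods listed.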

  1. Experimental and theoretical debate on efficient second harmonic generation in Bis (Cinnamic acid): Hexamine cocrystal

    NASA Astrophysics Data System (ADS)

    Vijayalakshmi, S.; Kalyanaraman, S.; Ravindran, T. R.

    2014-02-01

    Second harmonic generation (SHG) in Bis (Cinnamic acid): Hexamine cocrystal was extensively analyzed through charge transfer (CT). The CT interactions through hydrogen bonding were well established with the aid of vibrational analysis and Natural Bond Orbital (NBO) analysis. The retention of the coplanar nature of the cinnamic acid in the cocrystal was confirmed through UV-Visible spectroscopy and supported by Raman studies. Structural analysis indicated the quinoidal character of the material, consistent with its high SHG efficiency. The first order hyperpolarizability value was calculated theoretically by density functional theory (DFT) and Hartree-Fock (HF) methods in support of the large SHG value.

  2. A study on thermal characteristics analysis model of high frequency switching transformer

    NASA Astrophysics Data System (ADS)

    Yoo, Jin-Hyung; Jung, Tae-Uk

    2015-05-01

    Recently, interest has been shown in research on the module-integrated converter (MIC) in small-scale photovoltaic (PV) generation. In an MIC, the voltage boosting high frequency transformer should be designed to be compact in size and have high efficiency. In response to the need to satisfy these requirements, this paper presents a coupled electromagnetic analysis model of a transformer connected with a high frequency switching DC-DC converter circuit while considering thermal characteristics due to the copper and core losses. A design optimization procedure for high efficiency is also presented using this design analysis method, and it is verified by the experimental result.

  3. Application of Theodorsen's Theory to Propeller Design

    NASA Technical Reports Server (NTRS)

    Crigler, John L

    1948-01-01

    A theoretical analysis is presented for obtaining by use of Theodorsen's propeller theory the load distribution along a propeller radius to give the optimum propeller efficiency for any design condition. The efficiencies realized by designing for the optimum load distribution are given in graphs, and the optimum efficiency for any design condition may be read directly from the graph without any laborious calculations. Examples are included to illustrate the method of obtaining the optimum load distributions for both single-rotating and dual-rotating propellers.

  4. Application of Theodorsen's theory to propeller design

    NASA Technical Reports Server (NTRS)

    Crigler, John L

    1949-01-01

    A theoretical analysis is presented for obtaining, by use of Theodorsen's propeller theory, the load distribution along a propeller radius to give the optimum propeller efficiency for any design condition. The efficiencies realized by designing for the optimum load distribution are given in graphs, and the optimum efficiency for any design condition may be read directly from the graph without any laborious calculations. Examples are included to illustrate the method of obtaining the optimum load distributions for both single-rotating and dual-rotating propellers.

  5. Development and Implementation of Efficiency-Improving Analysis Methods for the SAGE III on ISS Thermal Model

    NASA Technical Reports Server (NTRS)

    Liles, Kaitlin; Amundsen, Ruth; Davis, Warren; Scola, Salvatore; Tobin, Steven; McLeod, Shawn; Mannu, Sergio; Guglielmo, Corrado; Moeller, Timothy

    2013-01-01

    The Stratospheric Aerosol and Gas Experiment III (SAGE III) instrument is the fifth in a series of instruments developed for monitoring aerosols and gaseous constituents in the stratosphere and troposphere. SAGE III will be delivered to the International Space Station (ISS) via the SpaceX Dragon vehicle in 2015. A detailed thermal model of the SAGE III payload has been developed in Thermal Desktop (TD). Several novel methods have been implemented to facilitate efficient payload-level thermal analysis, including the use of a design of experiments (DOE) methodology to determine the worst-case orbits for SAGE III while on ISS, use of TD assemblies to move payloads from the Dragon trunk to the Enhanced Operational Transfer Platform (EOTP) to its final home on the Expedite the Processing of Experiments to Space Station (ExPRESS) Logistics Carrier (ELC)-4, incorporation of older models in varying unit sets, ability to change units easily (including hardcoded logic blocks), case-based logic to facilitate activating heaters and active elements for varying scenarios within a single model, incorporation of several coordinate frames to easily map to structural models with differing geometries and locations, and streamlined results processing using an Excel-based text file plotter developed in-house at LaRC. This document presents an overview of the SAGE III thermal model and describes the development and implementation of these efficiency-improving analysis methods.

  6. Needle Trap Device as a New Sampling and Preconcentration Approach for Volatile Organic Compounds of Herbal Medicines and its Application to the Analysis of Volatile Components in Viola tianschanica.

    PubMed

    Qin, Yan; Pang, Yingming; Cheng, Zhihong

    2016-11-01

    The needle trap device (NTD) technique is a new microextraction method for sampling and preconcentration of volatile organic compounds (VOCs). Previous NTD studies predominantly focused on the analysis of environmental volatile compounds in the gaseous and liquid phases. Little work has been done on its potential application to biological samples, and no work has been reported on the analysis of bioactive compounds in essential oils from herbal medicines. The main purpose of the present study is to develop an NTD sampling method for profiling VOCs in biological samples, using herbal medicines as a case study. A combined method of NTD sample preparation and gas chromatography-mass spectrometry was developed for qualitative analysis of VOCs in Viola tianschanica. A 22-gauge stainless steel, triple-bed needle packed with Tenax, Carbopack X and Carboxen 1000 sorbents was used for analysis of VOCs in the herb. Furthermore, different parameters affecting the extraction efficiency and capacity were studied. The peak capacity obtained by NTDs was 104, more efficient than that of static headspace (46) or hydrodistillation (93). The NTD method shows potential to trap a wide range of VOCs, including both lower and higher volatility components, whereas static headspace detects only the lower volatility components and hydrodistillation only the semi-volatile and higher volatility components. The developed NTD sample preparation method is a more rapid, simple, convenient, and sensitive extraction/desorption technique for the analysis of VOCs in herbal medicines than conventional methods such as static headspace and hydrodistillation. Copyright © 2016 John Wiley & Sons, Ltd.

  7. Theoretical study of the accuracy of the pulse method, frontal analysis, and frontal analysis by characteristic points for the determination of single component adsorption isotherms.

    PubMed

    Andrzejewska, Anna; Kaczmarski, Krzysztof; Guiochon, Georges

    2009-02-13

    The adsorption isotherms of selected compounds are our main source of information on the mechanisms of adsorption processes. Thus, the selection of the methods used to determine adsorption isotherm data and to evaluate the errors made is critical. Three chromatographic methods were evaluated, frontal analysis (FA), frontal analysis by characteristic point (FACP), and the pulse or perturbation method (PM), and their accuracies were compared. Using the equilibrium-dispersive (ED) model of chromatography, breakthrough curves of single components were generated corresponding to three different adsorption isotherm models: the Langmuir, the bi-Langmuir, and the Moreau isotherms. For each breakthrough curve, the best conventional procedures of each method (FA, FACP, PM) were used to calculate the corresponding data point, using typical values of the parameters of each isotherm model, for four different values of the column efficiency (N=500, 1000, 2000, and 10,000). Then, the data points were fitted to each isotherm model and the corresponding isotherm parameters were compared to those of the initial isotherm model. When isotherm data are derived with a chromatographic method, they may suffer from two types of errors: (1) the errors made in deriving the experimental data points from the chromatographic records; (2) the errors made in selecting an incorrect isotherm model and fitting to it the experimental data. Both errors decrease significantly with increasing column efficiency with FA and FACP, but not with PM.
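The isotherm models under comparison have simple closed forms. For reference, a sketch of the Langmuir and bi-Langmuir models (the Moreau model adds an adsorbate-adsorbate interaction term and is omitted; the parameter values below are arbitrary, not those used in the simulations):

```python
def langmuir(c, qs, b):
    """Single-site Langmuir isotherm: q(c) = qs * b * c / (1 + b * c)."""
    return qs * b * c / (1.0 + b * c)

def bi_langmuir(c, qs1, b1, qs2, b2):
    """Sum of two independent Langmuir sites (e.g. low- and high-energy sites)."""
    return langmuir(c, qs1, b1) + langmuir(c, qs2, b2)

# Arbitrary parameters: the initial slope (Henry constant) is qs*b
# and the saturation plateau is qs.
q_low = langmuir(1e-6, qs=10.0, b=2.0)   # linear (analytical) regime
q_sat = langmuir(1e6, qs=10.0, b=2.0)    # saturation regime
```

Fitting FA, FACP, or PM data points to the wrong one of these functional forms is exactly the model-selection error the paper quantifies.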

  8. Causal inference with measurement error in outcomes: Bias analysis and estimation methods.

    PubMed

    Shu, Di; Yi, Grace Y

    2017-01-01

    Inverse probability weighting estimation has been widely used to consistently estimate the average treatment effect. Its validity, however, is challenged by the presence of error-prone variables. In this paper, we explore inverse probability weighting estimation with mismeasured outcome variables. We study the impact of measurement error for both continuous and discrete outcome variables and reveal interesting consequences of the naive analysis which ignores measurement error. When a continuous outcome variable is mismeasured under an additive measurement error model, the naive analysis may still yield a consistent estimator; when the outcome is binary, we derive the asymptotic bias in closed form. Furthermore, we develop consistent estimation procedures for practical scenarios where either validation data or replicates are available. With validation data, we propose an efficient method for estimating the average treatment effect; the efficiency gain is substantial relative to usual methods of using validation data. To provide protection against model misspecification, we further propose a doubly robust estimator which is consistent even when either the treatment model or the outcome model is misspecified. Simulation studies are reported to assess the performance of the proposed methods. An application to a smoking cessation dataset is presented.
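For context, the error-free IPW estimator that the paper perturbs is simple to state. A numpy sketch with known propensity scores on simulated, confounded data (the paper's measurement-error corrections are not reproduced):

```python
import numpy as np

def ipw_ate(y, treated, propensity):
    """Horvitz-Thompson inverse-probability-weighting estimate of the ATE."""
    treated = treated.astype(float)
    return (np.mean(treated * y / propensity)
            - np.mean((1.0 - treated) * y / (1.0 - propensity)))

# Simulated data with a true treatment effect of 2.0 and a confounder x
# that drives both treatment assignment and the outcome.
rng = np.random.default_rng(42)
x = rng.normal(size=20_000)
e = 1.0 / (1.0 + np.exp(-x))           # true propensity depends on x
t = rng.random(20_000) < e
y = 2.0 * t + x + rng.normal(size=20_000)
naive = y[t].mean() - y[~t].mean()     # biased upward: treated units have higher x
ate = ipw_ate(y, t, e)
```

Weighting by the inverse propensity removes the confounding that biases the naive difference in means; the paper's contribution is what happens when y itself is measured with error.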

  9. Mining Feature of Data Fusion in the Classification of Beer Flavor Information Using E-Tongue and E-Nose

    PubMed Central

    Men, Hong; Shi, Yan; Fu, Songlin; Jiao, Yanan; Qiao, Yu; Liu, Jingjing

    2017-01-01

    Multi-sensor data fusion can provide more comprehensive and more accurate analysis results. However, it also introduces redundant information, which makes finding a feature-mining method for intuitive and efficient analysis an important issue. This paper demonstrates a feature-mining method based on variable accumulation to find the best expression form and the variables' behavior affecting beer flavor. First, an e-tongue and an e-nose were used to gather the taste and olfactory information of beer, respectively. Second, principal component analysis (PCA), genetic algorithm-partial least squares (GA-PLS), and variable importance in projection (VIP) scores were applied to select feature variables from the original fusion set. Finally, classification models based on support vector machine (SVM), random forests (RF), and extreme learning machine (ELM) were established to evaluate the efficiency of the feature-mining method. The results show that the feature-mining method based on variable accumulation obtains the main features affecting beer flavor information and yields the best classification performance for the SVM, RF, and ELM models, with 96.67%, 94.44%, and 98.33% prediction accuracy, respectively. PMID:28753917

  10. Improving multi-objective reservoir operation optimization with sensitivity-informed dimension reduction

    NASA Astrophysics Data System (ADS)

    Chu, J.; Zhang, C.; Fu, G.; Li, Y.; Zhou, H.

    2015-08-01

    This study investigates the effectiveness of a sensitivity-informed method for multi-objective operation of reservoir systems, which uses global sensitivity analysis as a screening tool to reduce computational demands. Sobol's method is used to screen insensitive decision variables and guide the formulation of the optimization problems with a significantly reduced number of decision variables. This sensitivity-informed method dramatically reduces the computational demands required for attaining high-quality approximations of optimal trade-off relationships between conflicting design objectives. The search results obtained from the reduced complexity multi-objective reservoir operation problems are then used to pre-condition the full search of the original optimization problem. In two case studies, the Dahuofang reservoir and the inter-basin multi-reservoir system in Liaoning province, China, sensitivity analysis results show that reservoir performance is strongly controlled by a small proportion of decision variables. Sensitivity-informed dimension reduction and pre-conditioning are evaluated in their ability to improve the efficiency and effectiveness of multi-objective evolutionary optimization. Overall, this study illustrates the efficiency and effectiveness of the sensitivity-informed method and the use of global sensitivity analysis to inform dimension reduction of optimization problems when solving complex multi-objective reservoir operation problems.

  11. SnagPRO: snag and tree sampling and analysis methods for wildlife

    Treesearch

    Lisa J. Bate; Michael J. Wisdom; Edward O. Garton; Shawn C. Clabough

    2008-01-01

    We describe sampling methods and provide software to accurately and efficiently estimate snag and tree densities at desired scales to meet a variety of research and management objectives. The methods optimize sampling effort by choosing a plot size appropriate for the specified forest conditions and sampling goals. Plot selection and data analyses are supported by...

  12. EVALUATION OF IODINE BASED IMPINGER SOLUTIONS FOR THE EFFICIENT CAPTURE OF HG USING DIRECT INJECTION NEBULIZATION INDUCTIVELY COUPLED PLASMA MASS SPECTROMETRY (DIN-ICP/MS) ANALYSIS

    EPA Science Inventory

    Currently there are no EPA reference sampling methods that have been promulgated for measuring stack emissions of Hg from coal combustion sources, however, EPA Method 29 is most commonly applied. The draft ASTM Ontario Hydro Method for measuring oxidized, elemental, particulate-b...

  13. An efficient genome-wide association test for multivariate phenotypes based on the Fisher combination function.

    PubMed

    Yang, James J; Li, Jia; Williams, L Keoki; Buu, Anne

    2016-01-05

    In genome-wide association studies (GWAS) for complex diseases, the association between a SNP and each phenotype is usually weak. Combining multiple related phenotypic traits can increase the power of gene search and thus is a practically important area that requires methodology work. This study provides a comprehensive review of existing methods for conducting GWAS on complex diseases with multiple phenotypes, including the multivariate analysis of variance (MANOVA), the principal component analysis (PCA), the generalized estimating equations (GEE), the trait-based association test involving the extended Simes procedure (TATES), and the classical Fisher combination test. We propose a new method that relaxes the unrealistic independence assumption of the classical Fisher combination test and is computationally efficient. To demonstrate applications of the proposed method, we also present the results of statistical analysis on the Study of Addiction: Genetics and Environment (SAGE) data. Our simulation study shows that the proposed method has higher power than existing methods while controlling the type I error rate. The GEE and the classical Fisher combination test, on the other hand, do not control the type I error rate and thus are not recommended. In general, the power of the competing methods decreases as the correlation between phenotypes increases. All the methods tend to have lower power when the multivariate phenotypes come from long-tailed distributions. The real data analysis also demonstrates that the proposed method allows us to compare the marginal results with the multivariate results and to specify which SNPs are specific to a particular phenotype or contribute to the common construct. The proposed method outperforms existing methods in most settings and also has great applications in GWAS on complex diseases with multiple phenotypes such as the substance abuse disorders.
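    For reference, the classical Fisher combination test that the study reviews (and whose independence assumption the proposed method relaxes) can be sketched as follows. Because the null statistic is chi-square with an even number of degrees of freedom, its survival function has a closed form and no external stats library is needed.

```python
import math

def fisher_combination(pvalues):
    """Classical Fisher combination test (assumes independent p-values).
    T = -2 * sum(log p_i) ~ chi-square with 2k degrees of freedom;
    P(chi2_{2k} > t) = exp(-t/2) * sum_{i=0}^{k-1} (t/2)^i / i!  for even df."""
    k = len(pvalues)
    t = -2.0 * sum(math.log(p) for p in pvalues)
    half = t / 2.0
    return math.exp(-half) * sum(half ** i / math.factorial(i) for i in range(k))

# combining a single p-value must return it unchanged
assert abs(fisher_combination([0.05]) - 0.05) < 1e-12
combined = fisher_combination([0.03, 0.20, 0.11])
```

    Under correlated phenotypes this combined p-value is anti-conservative, which is exactly the type I error problem the abstract reports for the classical test.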

  14. A method for evaluating photovoltaic potential in China based on GIS platform

    NASA Astrophysics Data System (ADS)

    Wang, L. Z.; Tan, H. W.; Ji, L.; Wang, D.

    2017-11-01

    Solar photovoltaic systems are widely used in China. However, a resource potential analysis covering all regions of China is still lacking. Based on existing solar radiation data and system conversion efficiency data, a new method for distributed photovoltaic potential assessment is presented. An experiment with three kinds of solar photovoltaic system was set up to analyze the relationship between conversion efficiency and environmental parameters, and this paper fits the relationship between conversion efficiency and solar radiation intensity. The method takes into account only the solar radiation that effectively generates electricity and discards the weak values. With the spatial analysis function of a geographic information system (GIS) platform, the frequency distribution of solar radiation intensity and the PV potential in China can be derived. Furthermore, the analytical results show that monocrystalline-silicon PV generation in the north-western and northern areas reaches more than 200 kWh/(m2.a), making those areas suitable for the development of PV systems, whereas the potential of the southwest areas reaches only about 130 kWh/(m2.a). This paper can provide a baseline reference for solar energy development planning.
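    A minimal sketch of the potential calculation described above, i.e. summing radiation times a fitted conversion efficiency while discarding weak radiation values; the efficiency fit, threshold, and radiation series are hypothetical numbers, not the paper's fitted values.

```python
def pv_yield(radiation_kwh_m2, eta_of_g, g_min=0.1):
    """PV potential per m^2: sum radiation * fitted efficiency over the
    series, discarding weak radiation values below the threshold g_min."""
    return sum(g * eta_of_g(g) for g in radiation_kwh_m2 if g >= g_min)

eta = lambda g: 0.15 + 0.02 * g      # assumed linear efficiency-vs-radiation fit
radiation = [0.0, 0.05, 0.3, 0.6, 0.8, 0.5, 0.2, 0.02]  # one day, kWh/m^2 (assumed)
daily = pv_yield(radiation, eta)
```

    Raising the threshold drops more weak intervals and lowers the computed potential, which is the role the "drives away the weak values" step plays in the paper's method.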

  15. Liquid scintillation counting methodology for 99Tc analysis. A remedy for radiopharmaceutical waste

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khan, Mumtaz; Um, Wooyong

    2015-08-13

    This paper presents a new approach for liquid scintillation counting (LSC) analysis of single-radionuclide samples containing appreciable organic or inorganic quench. This work offers better analytical results than existing LSC methods for technetium-99 (99gTc) analysis with significant savings in analysis cost and time. The method was developed to quantify 99gTc in environmental liquid and urine samples using LSC. Method efficiency was measured in the presence of 1.9 to 11,900 ppm total dissolved solids. The quench curve was proved to be effective in the case of spiked 99gTc activity calculation for deionized water, tap water, groundwater, seawater, and urine samples. Counting efficiency was found to be 91.66% for Ultima Gold LLT (ULG-LLT) and Ultima Gold (ULG). Relative error in spiked 99gTc samples was ±3.98% in ULG and ULG-LLT cocktails. Minimum detectable activity was determined to be 25.3 mBq and 22.7 mBq for ULG-LLT and ULG cocktails, respectively. A pre-concentration factor of 1000 was achieved at 100°C for 100% chemical recovery.
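    Counting efficiency and minimum detectable activity of the kind reported above follow from standard LSC formulas; the sketch below uses the widely used Currie MDA expression with hypothetical counting parameters, not the paper's raw data (the paper's exact MDA formula is not stated in the abstract).

```python
import math

def counting_efficiency(net_cpm, activity_dpm):
    """LSC counting efficiency = observed net count rate / known spike activity."""
    return net_cpm / activity_dpm

def mda_bq(background_counts, count_time_s, efficiency):
    """Currie minimum detectable activity (95% confidence), in Bq:
    MDA = (2.71 + 4.65 * sqrt(B)) / (eps * t)."""
    return (2.71 + 4.65 * math.sqrt(background_counts)) / (efficiency * count_time_s)

eff = counting_efficiency(net_cpm=55.0, activity_dpm=60.0)        # hypothetical spike
mda = mda_bq(background_counts=30, count_time_s=3600, efficiency=0.9166)
```

    With a one-hour count and ~92% efficiency this yields an MDA of a few mBq, the same order as the 22.7-25.3 mBq values the paper reports.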

  16. Chemical tagging of chlorinated phenols for their facile detection and analysis by NMR spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valdez, Carlos A.; Leif, Roald N.

    2015-03-22

    A derivatization method that employs diethyl (bromodifluoromethyl)phosphonate (DBDFP) to efficiently tag the endocrine disruptor pentachlorophenol (PCP) and other chlorinated phenols (CPs), along with their reliable detection and analysis by NMR, is presented. The method accomplishes the efficient alkylation of the hydroxyl group in CPs with the difluoromethyl (CF2H) moiety in extremely rapid fashion (5 min), at room temperature and in an environmentally benign manner. The approach proved successful in difluoromethylating a panel of 18 chlorinated phenols, yielding derivatives that displayed unique 1H and 19F NMR spectra allowing for the clear discrimination between isomerically related CPs. Due to its biphasic nature, the derivatization can be applied to both aqueous and organic mixtures where the analysis of CPs is required. Furthermore, the methodology demonstrates that PCP along with other CPs can be selectively derivatized in the presence of various other aliphatic alcohols, underscoring the superiority of the approach over other general derivatization methods that indiscriminately modify all analytes in a given sample. The present work demonstrates the first application of NMR to the qualitative analysis of these highly toxic and environmentally persistent species.

  17. Addressing Curse of Dimensionality in Sensitivity Analysis: How Can We Handle High-Dimensional Problems?

    NASA Astrophysics Data System (ADS)

    Safaei, S.; Haghnegahdar, A.; Razavi, S.

    2016-12-01

    Complex environmental models are now the primary tool to inform decision makers about the current or future management of environmental resources under climate and environmental change. These complex models often contain a large number of parameters that need to be determined by a computationally intensive calibration procedure. Sensitivity analysis (SA) is a very useful tool that not only allows for understanding model behavior, but also helps in reducing the number of calibration parameters by identifying unimportant ones. The issue is that most global sensitivity techniques are themselves highly computationally demanding when generating robust and stable sensitivity metrics over the entire model response surface. Recently, a novel global sensitivity analysis method, Variogram Analysis of Response Surfaces (VARS), was introduced that can efficiently provide a comprehensive assessment of global sensitivity using the variogram concept. In this work, we aim to evaluate the effectiveness of this highly efficient GSA method in saving computational burden when applied to systems with a very large number of input factors (~100). We use a test function and a hydrological modelling case study to demonstrate the capability of the VARS method to reduce problem dimensionality by identifying important vs. unimportant input factors.
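    The variogram at the heart of VARS is straightforward to compute; a minimal one-dimensional sketch follows (the sample spacing and test functions are illustrative assumptions, not the VARS star-sampling scheme).

```python
import numpy as np

def directional_variogram(y, x, lags):
    """Empirical variogram gamma(h) = 0.5 * mean[(y(x + h) - y(x))^2] over
    sample points x.  The VARS idea is that integrating gamma(h) over a
    range of lags h gives a sensitivity metric for the corresponding
    input factor: flat responses give gamma near zero, variable ones do not."""
    gam = []
    for h in lags:
        d = y(x + h) - y(x)
        gam.append(0.5 * np.mean(d ** 2))
    return np.array(gam)

x = np.linspace(0.0, 1.0, 200)
lags = np.array([0.05, 0.10, 0.20])
g_linear = directional_variogram(lambda t: t, x, lags)              # gamma = h^2 / 2
g_flat = directional_variogram(lambda t: np.ones_like(t), x, lags)  # gamma = 0
```

    An input along which the variogram is identically zero is insensitive and can be screened out, which is how VARS supports the dimension reduction discussed above.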

  18. Efficient Bayesian hierarchical functional data analysis with basis function approximations using Gaussian-Wishart processes.

    PubMed

    Yang, Jingjing; Cox, Dennis D; Lee, Jong Soo; Ren, Peng; Choi, Taeryon

    2017-12-01

    Functional data are defined as realizations of random functions (mostly smooth functions) varying over a continuum, which are usually collected on discretized grids with measurement errors. In order to accurately smooth noisy functional observations and deal with the issue of high-dimensional observation grids, we propose a novel Bayesian method based on a Bayesian hierarchical model with a Gaussian-Wishart process prior and basis function representations. We first derive an induced model for the basis-function coefficients of the functional data, and then use this model to conduct posterior inference through Markov chain Monte Carlo methods. Compared to the standard Bayesian inference, which suffers from a serious computational burden and instability in analyzing high-dimensional functional data, our method greatly improves computational scalability and stability, while inheriting the advantage of simultaneously smoothing raw observations and estimating the mean-covariance functions in a nonparametric way. In addition, our method can naturally handle functional data observed on random or uncommon grids. Simulation and real-data studies demonstrate that our method produces similar results to those obtainable by the standard Bayesian inference with low-dimensional common grids, while efficiently smoothing and estimating functional data with random and high-dimensional observation grids when the standard Bayesian inference fails. In conclusion, our method can efficiently smooth and estimate high-dimensional functional data, providing one way to resolve the curse of dimensionality for Bayesian functional data analysis with Gaussian-Wishart processes. © 2017, The International Biometric Society.

  19. Decomposition of potential efficiency gains from hospital mergers in Greece.

    PubMed

    Flokou, Angeliki; Aletras, Vassilis; Niakas, Dimitris

    2017-12-01

    This paper evaluates the technical efficiency of 71 Greek public hospitals and examines potential efficiency gains from 13 candidate mergers among them. Efficiency assessments are performed using bootstrapped Data Envelopment Analysis (DEA), whilst merger analysis is conducted by applying the Bogetoft and Wang methodology, which allows the overall potential merger gains to be decomposed into three main components of inefficiency, namely technical (or learning), scope (or harmony) and scale (or size) effects. Thus, the analysis provides important insights not only on the magnitude of the potential total efficiency gains but also on their sources. The overall analysis is conducted in the context of a complete methodological framework where methods for outlier detection, returns-to-scale identification, and bias corrections for DEA estimations are also applied. Mergers are analyzed under the assumptions of constant, variable and non-decreasing returns to scale in an input-oriented DEA model with three inputs and three outputs. The main finding of the study indicates that almost all mergers show substantial potential room for efficiency improvement, which is mainly attributed to the pre-merger technical inefficiencies of the individual hospitals and therefore might be achievable without implementing full-scale mergers. The same applies, though to a lesser extent, to the harmony effect, whilst the size effect shows marginal or even negative gains.

  20. Predicting internal yellow-poplar log defect features using surface indicators

    Treesearch

    R. Edward Thomas

    2008-01-01

    Determining the defects that are located within the log is crucial to understanding the tree/log resource for efficient processing. However, existing means of doing this non-destructively requires the use of expensive X-ray/CT, MRI, or microwave technology. These methods do not lend themselves to fast, efficient, and cost-effective analysis of logs and tree stems in...

  1. [A retrieval method of drug molecules based on graph collapsing].

    PubMed

    Qu, J W; Lv, X Q; Liu, Z M; Liao, Y; Sun, P H; Wang, B; Tang, Z

    2018-04-18

    To establish a compact and efficient hypergraph representation and a graph-similarity-based retrieval method for molecules to achieve effective and efficient medicine information retrieval. The chemical structural formula (CSF) is a primary search target, serving as a unique and precise identifier for each compound at the molecular level in medicine information retrieval. To retrieve medicine information effectively and efficiently, a complete workflow of the graph-based CSF retrieval system is introduced. The system accepts photos taken with smartphones and sketches drawn on tablet personal computers as CSF inputs, and formalizes the CSFs as the corresponding graphs. This paper then proposes a compact and efficient hypergraph representation for molecules, based on an analysis of the factors that directly affect the efficiency of graph matching. According to the characteristics of CSFs, a hierarchical collapsing method combining graph isomorphism and frequent subgraph mining is adopted. A remaining fundamental challenge, subgraph overlapping during the collapsing procedure, hinders the method from establishing the correct compact hypergraph of an original CSF graph; therefore, a graph-isomorphism-based algorithm is proposed to select dominant acyclic subgraphs on the basis of overlapping analysis. Finally, the spatial similarity among graphical CSFs is evaluated by multi-dimensional measures of similarity. To evaluate its performance, the proposed system was first compared on retrieval accuracy with Wikipedia Chemical Structure Explorer (WCSE), the state-of-the-art system that allows CSF similarity searching within the Wikipedia molecules dataset. The system achieved higher values of mean average precision, discounted cumulative gain, rank-biased precision, and expected reciprocal rank than WCSE from the top-2 to the top-10 retrieved results. Specifically, the system scored 10%, 1.41, 6.42%, and 1.32% higher than WCSE on these metrics for the top-10 retrieval results, respectively. Moreover, several retrieval cases are presented for intuitive comparison with WCSE. The results of this comparative study demonstrate that the proposed method outperforms the existing method with regard to accuracy and effectiveness. This paper proposes a graph-similarity-based retrieval approach for medicine information. To obtain satisfactory retrieval results, an isomorphism-based algorithm is proposed for dominant subgraph selection based on subgraph overlapping analysis, together with an effective and efficient hypergraph representation of molecules. Experimental results demonstrate the effectiveness of the proposed approach.

  2. Design of An Energy Efficient Hydraulic Regenerative circuit

    NASA Astrophysics Data System (ADS)

    Ramesh, S.; Ashok, S. Denis; Nagaraj, Shanmukha; Adithyakumar, C. R.; Reddy, M. Lohith Kumar; Naulakha, Niranjan Kumar

    2018-02-01

    Increasing cost and power demand lead to the evaluation of new methods to increase productivity and help meet power demands. Many researchers have attempted to increase the efficiency of a hydraulic power pack; one of the promising methods is the concept of regeneration. The objective of this research work is to increase the efficiency of a hydraulic circuit by introducing a regenerative circuit. A regenerative circuit is a system used to speed up the extension stroke of a double-acting single-rod hydraulic cylinder: the rod-end output is connected back to the input at the directional control valve. This increases the velocity of the piston and decreases the cycle time. For the research, a basic hydraulic circuit and a regenerative circuit were designed and their results compared. The analysis was based on the time taken for extension and retraction of the piston. From the detailed analysis of both hydraulic circuits, it is found that introducing the hydraulic regenerative circuit increased efficiency by 5.3%. The obtained results conclude that implementing a hydraulic regenerative circuit in a hydraulic power pack decreases power consumption, reduces cycle time and increases productivity in the longer run.
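    The speed gain from regeneration follows from a flow balance at the directional control valve: the rod-end discharge rejoins the pump flow, so the extension speed is set by the rod area alone. A sketch with a hypothetical cylinder and pump (not the dimensions used in the study):

```python
import math

def piston_speeds(q_pump_m3s, bore_d_m, rod_d_m):
    """Extension speed of a double-acting single-rod cylinder.
    Conventional:  v = Q / A_cap.
    Regenerative (rod-end oil rerouted to the cap end): flow balance
    v * A_cap = Q + v * (A_cap - A_rod)  =>  v = Q / A_rod."""
    a_cap = math.pi * bore_d_m ** 2 / 4.0
    a_rod = math.pi * rod_d_m ** 2 / 4.0
    return q_pump_m3s / a_cap, q_pump_m3s / a_rod

# hypothetical cylinder: 100 mm bore, 50 mm rod, 1 L/s pump flow
v_normal, v_regen = piston_speeds(q_pump_m3s=0.001, bore_d_m=0.10, rod_d_m=0.05)
```

    With a rod of half the bore diameter, extension is four times faster in regenerative mode; the trade-off is a correspondingly reduced extension force, since pressure then acts only on the rod cross-section.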

  3. Computation of full energy peak efficiency for nuclear power plant radioactive plume using remote scintillation gamma-ray spectrometry.

    PubMed

    Grozdov, D S; Kolotov, V P; Lavrukhin, Yu E

    2016-04-01

    A method of full energy peak efficiency estimation in the space around a scintillation detector, including the presence of a collimator, has been developed. It is based on a mathematical convolution of the experimental results with subsequent data extrapolation. The efficiency data showed an average uncertainty of less than 10%. Software to calculate the integral efficiency for a nuclear power plant plume was developed. The paper also provides results of nuclear power plant plume height estimation by analysis of the spectral data. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Liquid-phase and solid-phase microwave irradiations for reduction of graphite oxide

    NASA Astrophysics Data System (ADS)

    Zhao, Na; Wen, Chen-Yu; Zhang, David Wei; Wu, Dong-Ping; Zhang, Zhi-Bin; Zhang, Shi-Li

    2014-12-01

    In this paper, two microwave irradiation (MWI) methods, (i) liquid-phase MWI reduction of graphite oxide suspensions dissolved in de-ionized water and N,N-dimethylformamide, respectively, and (ii) solid-phase MWI reduction of graphite oxide powder, have been successfully carried out to reduce graphite oxide. The reduced graphene oxide products are thoroughly characterized by scanning electron microscopy, atomic force microscopy, X-ray photoelectron spectroscopy, Fourier transform infrared spectral analysis, Raman spectroscopy, UV-Vis absorption spectral analysis, and four-point probe conductivity measurements. The results show that both methods can efficiently remove the oxygen-containing functional groups attached to the graphite layers, though the solid-phase MWI reduction method obtains, far more efficiently, a higher-quality reduced graphene oxide with fewer defects. The I(D)/I(G) ratio of the solid-phase MWI sample is as low as 0.46, which is only half that of the liquid-phase MWI samples. The electrical conductivity of the reduced graphene oxide made by the solid-phase method reaches 747.9 S/m, which is about 25 times higher than that made by the liquid-phase method.

  5. High-efficient and high-content cytotoxic recording via dynamic and continuous cell-based impedance biosensor technology.

    PubMed

    Hu, Ning; Fang, Jiaru; Zou, Ling; Wan, Hao; Pan, Yuxiang; Su, Kaiqi; Zhang, Xi; Wang, Ping

    2016-10-01

    Cell-based bioassays are an effective method to assess compound toxicity via cell viability, but traditional label-based methods miss much information about cell growth due to endpoint detection, while higher throughputs are demanded to obtain dynamic information. Cell-based biosensor methods can dynamically and continuously monitor cell viability; however, the dynamic information is often ignored or seldom utilized in toxin and drug assessment. Here, we report a high-efficiency and high-content cytotoxicity recording method via dynamic and continuous cell-based impedance biosensor technology. The dynamic cell viability, inhibition ratio and growth rate were derived from the dynamic response curves of the cell-based impedance biosensor. The results showed that the biosensors have dose-dependent responses to the diarrhetic shellfish toxin okadaic acid, based on the analysis of the dynamic cell viability and cell growth status. Moreover, the throughputs of dynamic cytotoxicity were compared between cell-based biosensor methods and label-based endpoint methods. This cell-based impedance biosensor can provide a flexible, cost- and label-efficient platform for cell viability assessment in shellfish toxin screening.

  6. Generalized Path Analysis and Generalized Simultaneous Equations Model for Recursive Systems with Responses of Mixed Types

    ERIC Educational Resources Information Center

    Tsai, Tien-Lung; Shau, Wen-Yi; Hu, Fu-Chang

    2006-01-01

    This article generalizes linear path analysis (PA) and simultaneous equations models (SiEM) to deal with mixed responses of different types in a recursive or triangular system. An efficient instrumental variable (IV) method for estimating the structural coefficients of a 2-equation partially recursive generalized path analysis (GPA) model and…

  7. Qualitative Organic Analysis: An Efficient, Safer, and Economical Approach to Preliminary Tests and Functional Group Analysis

    ERIC Educational Resources Information Center

    Dhingra, Sunita; Angrish, Chetna

    2011-01-01

    Qualitative organic analysis of an unknown compound is an integral part of the university chemistry laboratory curriculum. This type of training is essential as students learn to approach a problem systematically and to interpret the results logically. However, considerable quantities of waste are generated by using conventional methods of…

  8. SPAR improved structure-fluid dynamic analysis capability, phase 2

    NASA Technical Reports Server (NTRS)

    Pearson, M. L.

    1984-01-01

    An efficient and general method of analyzing a coupled dynamic system of fluid flow and elastic structures is investigated. The improvement of Structural Performance Analysis and Redesign (SPAR) code is summarized. All error codes are documented and the SPAR processor/subroutine cross reference is included.

  9. EPA’s Non-Targeted Analysis Research Program: Expanding public data resources in support of exposure science

    EPA Science Inventory

    Suspect screening (SSA) and non-targeted analysis (NTA) methods using high-resolution mass spectrometry (HRMS) offer new approaches to efficiently generate exposure data for chemicals in a variety of environmental and biological media. These techniques aid characterization of the...

  10. General Methodology Combining Engineering Optimization of Primary HVAC and R Plants with Decision Analysis Methods--Part II: Uncertainty and Decision Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Wei; Reddy, T. A.; Gurian, Patrick

    2007-01-31

    A companion paper to Jiang and Reddy that presents a general and computationally efficient methodology for dynamic scheduling and optimal control of complex primary HVAC&R plants using a deterministic engineering optimization approach.

  11. Environmentally friendly and cost-efficient analysis of aflatoxins in corn

    USDA-ARS?s Scientific Manuscript database

    The extraction procedure adds a significant cost to the overall expense of aflatoxin analysis in agricultural commodities. An inexpensive and low-waste extraction method using a household espresso coffee maker was tested. This appliance was used for the high-temperature /high-pressure extraction of ...

  12. Effects of measurement method and transcript availability on inexperienced raters' stuttering frequency scores.

    PubMed

    Chakraborty, Nalanda; Logan, Kenneth J

    To examine the effects of measurement method and transcript availability on the accuracy, reliability, and efficiency of inexperienced raters' stuttering frequency measurements. 44 adults, all inexperienced at evaluating stuttered speech, underwent 20 min of preliminary training in stuttering measurement and then analyzed a series of sentences, with and without access to transcripts of sentence stimuli, using either a syllable-based analysis (SBA) or an utterance-based analysis (UBA). Participants' analyses were compared between groups and to a composite analysis from two experienced evaluators. Stuttering frequency scores from the SBA and UBA groups differed significantly from the experienced evaluators' scores; however, UBA scores were significantly closer to the experienced evaluators' scores and were completed significantly faster than the SBA scores. Transcript availability facilitated scoring accuracy and efficiency in both groups. The internal reliability of stuttering frequency scores was acceptable for the SBA and UBA groups; however, the SBA group demonstrated only modest point-by-point agreement with ratings from the experienced evaluators. Given its accuracy and efficiency advantages over syllable-based analysis, utterance-based fluency analysis appears to be an appropriate context for introducing stuttering frequency measurement to raters who have limited experience in stuttering measurement. To address accuracy gaps between experienced and inexperienced raters, however, use of either analysis must be supplemented with training activities that expose inexperienced raters to the decision-making processes used by experienced raters when identifying stuttered syllables. Copyright © 2018 Elsevier Inc. All rights reserved.

  13. Mixed time integration methods for transient thermal analysis of structures

    NASA Technical Reports Server (NTRS)

    Liu, W. K.

    1982-01-01

    The computational methods used to predict and optimize the thermal-structural behavior of aerospace vehicle structures are reviewed. In general, two classes of algorithms, implicit and explicit, are used in transient thermal analysis of structures. Each of these two methods has its own merits. Due to the different time scales of the mechanical and thermal responses, the selection of a time integration method can be a difficult yet critical factor in the efficient solution of such problems. Therefore mixed time integration methods for transient thermal analysis of structures are being developed. The computer implementation aspects and numerical evaluation of these mixed time implicit-explicit algorithms in thermal analysis of structures are presented. A computationally useful method of estimating the critical time step for the linear quadrilateral element is also given. Numerical tests confirm the stability criterion and accuracy characteristics of the methods. The superiority of these mixed time methods to the fully implicit method or the fully explicit method is also demonstrated.
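    The stability constraint behind the critical-time-step estimate can be illustrated in one dimension (the abstract's estimate is for a linear quadrilateral element; this forward-Euler conduction sketch with assumed material and mesh values shows the same kind of limit, here dt <= dx^2 / (2 * alpha)):

```python
import numpy as np

def explicit_heat_step(u, alpha, dx, dt):
    """One forward-Euler (explicit) step of 1D transient conduction,
    with the two end nodes held at a fixed temperature."""
    un = u.copy()
    un[1:-1] = u[1:-1] + alpha * dt / dx ** 2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return un

alpha, dx = 1.0e-5, 0.01            # assumed diffusivity (m^2/s) and spacing (m)
dt_crit = dx ** 2 / (2.0 * alpha)   # explicit stability limit for this mesh

u = np.zeros(21); u[10] = 100.0     # hot spot in the middle of a cold rod
u_bad = u.copy()
for _ in range(200):
    u = explicit_heat_step(u, alpha, dx, 0.9 * dt_crit)          # below the limit
    u_bad = explicit_heat_step(u_bad, alpha, dx, 1.1 * dt_crit)  # above the limit
```

    The run just below the limit stays bounded by the initial temperature, while the run just above it grows without bound; a mixed method applies the implicit update only where this restriction would force an uneconomically small global time step.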

  14. Mixed time integration methods for transient thermal analysis of structures

    NASA Technical Reports Server (NTRS)

    Liu, W. K.

    1983-01-01

    The computational methods used to predict and optimize the thermal-structural behavior of aerospace vehicle structures are reviewed. In general, two classes of algorithms, implicit and explicit, are used in transient thermal analysis of structures. Each of these two methods has its own merits. Due to the different time scales of the mechanical and thermal responses, the selection of a time integration method can be a difficult yet critical factor in the efficient solution of such problems. Therefore mixed time integration methods for transient thermal analysis of structures are being developed. The computer implementation aspects and numerical evaluation of these mixed time implicit-explicit algorithms in thermal analysis of structures are presented. A computationally-useful method of estimating the critical time step for linear quadrilateral element is also given. Numerical tests confirm the stability criterion and accuracy characteristics of the methods. The superiority of these mixed time methods to the fully implicit method or the fully explicit method is also demonstrated.

  15. Bayesian analysis of rare events

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Straub, Daniel, E-mail: straub@tum.de; Papaioannou, Iason; Betz, Wolfgang

    2016-06-01

    In many areas of engineering and science there is an interest in predicting the probability of rare events, in particular in applications related to safety and security. Increasingly, such predictions are made through computer models of physical systems in an uncertainty quantification framework. Additionally, with advances in IT, monitoring and sensor technology, an increasing amount of data on the performance of the systems is collected. This data can be used to reduce uncertainty, improve the probability estimates and consequently enhance the management of rare events and associated risks. Bayesian analysis is the ideal method to include the data into the probabilistic model. It ensures a consistent probabilistic treatment of uncertainty, which is central in the prediction of rare events, where extrapolation from the domain of observation is common. We present a framework for performing Bayesian updating of rare event probabilities, termed BUS. It is based on a reinterpretation of the classical rejection-sampling approach to Bayesian analysis, which enables the use of established methods for estimating probabilities of rare events. By drawing upon these methods, the framework makes use of their computational efficiency. These methods include the First-Order Reliability Method (FORM), tailored importance sampling (IS) methods and Subset Simulation (SuS). In this contribution, we briefly review these methods in the context of the BUS framework and investigate their applicability to Bayesian analysis of rare events in different settings. We find that, for some applications, FORM can be highly efficient and is surprisingly accurate, enabling Bayesian analysis of rare events with just a few model evaluations. In a general setting, BUS implemented through IS and SuS is more robust and flexible.
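    The rejection-sampling reinterpretation underlying BUS can be sketched on a toy Gaussian example where the posterior is known in closed form (here N(0.5, 0.5)); the real framework replaces the plain accept/reject loop with FORM, importance sampling, or Subset Simulation for efficiency.

```python
import math
import random

def bus_rejection(prior_sample, log_likelihood, log_l_max, n, rng):
    """BUS-style rejection sampling: draw theta from the prior and u from
    Uniform(0, 1); accept theta when log(u) + log_l_max <= log_likelihood(theta).
    Accepted thetas are posterior samples."""
    accepted = []
    for _ in range(n):
        theta = prior_sample(rng)
        if math.log(rng.random()) + log_l_max <= log_likelihood(theta):
            accepted.append(theta)
    return accepted

# toy: prior N(0, 1), one observation y = 1 with unit noise -> posterior N(0.5, 0.5)
rng = random.Random(42)
post = bus_rejection(
    prior_sample=lambda r: r.gauss(0.0, 1.0),
    log_likelihood=lambda th: -0.5 * (1.0 - th) ** 2,
    log_l_max=0.0,                  # likelihood is maximized at theta = 1
    n=20000,
    rng=rng,
)
mean = sum(post) / len(post)
```

    In a rare-event setting the acceptance event becomes a "failure domain" in the augmented (theta, u) space, which is exactly why reliability methods built for small failure probabilities apply.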

  16. MrBayes tgMC3++: A High Performance and Resource-Efficient GPU-Oriented Phylogenetic Analysis Method.

    PubMed

    Ling, Cheng; Hamada, Tsuyoshi; Gao, Jingyang; Zhao, Guoguang; Sun, Donghong; Shi, Weifeng

    2016-01-01

    MrBayes is a widespread phylogenetic inference tool harnessing empirical evolutionary models and Bayesian statistics. However, the computational cost of the likelihood estimation is very expensive, resulting in undesirably long execution times. Although a number of multi-threaded optimizations have been proposed to speed up MrBayes, there are bottlenecks that severely limit the GPU thread-level parallelism of likelihood estimations. This study proposes a high-performance and resource-efficient method for GPU-oriented parallelization of likelihood estimations. Instead of having to rely on empirical programming, the proposed novel decomposition storage model implements high-performance data transfers implicitly. In terms of performance improvement, a speedup factor of up to 178 can be achieved on the analysis of simulated datasets by four Tesla K40 cards. In comparison to the other publicly available GPU-oriented versions of MrBayes, the tgMC3++ method (proposed herein) outperforms the tgMC3 (v1.0), nMC3 (v2.1.1) and oMC3 (v1.00) methods by speedup factors of up to 1.6, 1.9 and 2.9, respectively. Moreover, tgMC3++ supports more evolutionary models and gamma categories, which previous GPU-oriented methods fail to include in the analysis.

  17. An efficient graph theory based method to identify every minimal reaction set in a metabolic network

    PubMed Central

    2014-01-01

    Background Development of cells with minimal metabolic functionality is gaining importance due to their efficiency in producing chemicals and fuels. Existing computational methods to identify minimal reaction sets in metabolic networks are computationally expensive. Further, they identify only one of the several possible minimal reaction sets. Results In this paper, we propose an efficient graph theory based recursive optimization approach to identify all minimal reaction sets. Graph theoretical insights offer systematic methods not only to reduce the number of variables in math programming and increase its computational efficiency, but also to provide efficient ways to find multiple optimal solutions. The efficacy of the proposed approach is demonstrated using case studies from Escherichia coli and Saccharomyces cerevisiae. In case study 1, the proposed method identified three minimal reaction sets, each containing 38 reactions, in the Escherichia coli central metabolic network with 77 reactions. Analysis of these three minimal reaction sets revealed that one of them is more suitable for developing a minimal metabolism cell than the other two, due to a practically achievable internal flux distribution. In case study 2, the proposed method identified 256 minimal reaction sets from the Saccharomyces cerevisiae genome scale metabolic network with 620 reactions. The proposed method required only 4.5 hours to identify all 256 minimal reaction sets and showed a significant reduction (approximately 80%) in solution time compared to existing methods for finding minimal reaction sets. Conclusions Identification of all minimal reaction sets in metabolic networks is essential since different minimal reaction sets have different properties that affect bioprocess development. The proposed method correctly identified all minimal reaction sets in both case studies. It is computationally efficient compared to other methods for finding minimal reaction sets and is suitable for use with genome-scale metabolic networks. PMID:24594118

  18. Analyzing the efficiency of small and medium-sized enterprises of a national technology innovation research and development program.

    PubMed

    Park, Sungmin

    2014-01-01

    This study analyzes the efficiency of small and medium-sized enterprises (SMEs) of a national technology innovation research and development (R&D) program. In particular, an empirical analysis is presented that aims to answer the following question: "Is there a difference in the efficiency between R&D collaboration types and between government R&D subsidy sizes?" Methodologically, the efficiency of a government-sponsored R&D project (i.e., GSP) is measured by Data Envelopment Analysis (DEA), and a nonparametric analysis of variance method, the Kruskal-Wallis (KW) test is adopted to see if the efficiency differences between R&D collaboration types and between government R&D subsidy sizes are statistically significant. This study's major findings are as follows. First, contrary to our hypothesis, when we controlled the influence of government R&D subsidy size, there was no statistically significant difference in the efficiency between R&D collaboration types. However, the R&D collaboration type, "SME-University-Laboratory" Joint-Venture was superior to the others, achieving the largest median and the smallest interquartile range of DEA efficiency scores. Second, the differences in the efficiency were statistically significant between government R&D subsidy sizes, and the phenomenon of diseconomies of scale was identified on the whole. As the government R&D subsidy size increases, the central measures of DEA efficiency scores were reduced, but the dispersion measures rather tended to get larger.
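    The Kruskal-Wallis test the study adopts compares rank sums across groups without distributional assumptions. A minimal sketch, using hypothetical DEA efficiency scores for three subsidy-size groups (the statistic below omits the tie correction):

```python
def kruskal_wallis_H(groups):
    """Kruskal-Wallis H statistic (no tie correction): tests whether
    several independent groups share the same distribution of ranks."""
    data = sorted((x, gi) for gi, g in enumerate(groups) for x in g)
    N = len(data)
    ranks = {}
    i = 0
    while i < N:                       # assign average ranks to ties
        j = i
        while j < N and data[j][0] == data[i][0]:
            j += 1
        avg = (i + 1 + j) / 2          # mean of 1-based ranks i+1..j
        for k in range(i, j):
            ranks.setdefault(data[k][1], []).append(avg)
        i = j
    return 12 / (N * (N + 1)) * sum(
        sum(r) ** 2 / len(r) for r in ranks.values()) - 3 * (N + 1)

# Hypothetical DEA efficiency scores for three subsidy-size groups.
small  = [0.91, 0.88, 0.95, 0.90]
medium = [0.80, 0.83, 0.78, 0.85]
large  = [0.70, 0.74, 0.69, 0.72]
H = kruskal_wallis_H([small, medium, large])
# Compare H against the chi-square critical value with 2 df (5.99 at 5%).
```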

  19. [Efficiency of industrial energy conservation and carbon emission reduction in Liaoning Province based on data envelopment analysis (DEA) method].

    PubMed

    Wang, Li; Xi, Feng Ming; Li, Jin Xin; Liu, Li Li

    2016-09-01

    Taking 39 industries in Liaoning Province from 2003 to 2012 as independent decision-making units and considering the benefits of energy, economy and environment, we combined the direction distance function and the radial DEA method to estimate and decompose the energy conservation and carbon emission reduction efficiency of the industries. The carbon emission of each industry was calculated and included in the efficiency model as an undesirable output. The results showed that the energy saving and carbon emission reduction efficiency of industries in Liaoning Province had obvious heterogeneity. The overall energy conservation and carbon emission reduction efficiency of each industry was not high, but it presented a rising trend. Improvements in pure technical efficiency and scale efficiency, especially the latter, were the main measures for enhancing energy saving and carbon emission reduction efficiency. To improve the energy saving and carbon emission reduction efficiency of each industry, we suggest that Liaoning Province adjust its industrial structure, encourage the development of low-carbon, high-benefit industries, improve its scientific and technological level, adjust industry scale reasonably, optimize its energy structure, and develop renewable and clean energy.

  20. LSENS, The NASA Lewis Kinetics and Sensitivity Analysis Code

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, K.

    2000-01-01

    A general chemical kinetics and sensitivity analysis code for complex, homogeneous, gas-phase reactions is described. The main features of the code, LSENS (the NASA Lewis kinetics and sensitivity analysis code), are its flexibility, efficiency and convenience in treating many different chemical reaction models. The models include: static system; steady, one-dimensional, inviscid flow; incident-shock initiated reaction in a shock tube; and a perfectly stirred reactor. In addition, equilibrium computations can be performed for several assigned states. An implicit numerical integration method (LSODE, the Livermore Solver for Ordinary Differential Equations), which works efficiently for the extremes of very fast and very slow reactions, is used to solve the "stiff" ordinary differential equation systems that arise in chemical kinetics. For static reactions, the code uses the decoupled direct method to calculate sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of dependent variables and/or the rate coefficient parameters. Solution methods for the equilibrium and post-shock conditions and for perfectly stirred reactor problems are either adapted from or based on the procedures built into the NASA code CEA (Chemical Equilibrium and Applications).
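    Why LSENS relies on an implicit integrator such as LSODE can be illustrated with a scalar stiff problem. This is a generic sketch, not LSENS/LSODE code: backward Euler remains stable at step sizes where forward Euler diverges.

```python
def explicit_euler(lmbda, y0, h, n):
    """Forward Euler for y' = lmbda*y; stable only for |1 + h*lmbda| <= 1."""
    y = y0
    for _ in range(n):
        y += h * lmbda * y
    return y

def implicit_euler(lmbda, y0, h, n):
    """Backward Euler for y' = lmbda*y; unconditionally stable for lmbda < 0."""
    y = y0
    for _ in range(n):
        y = y / (1 - h * lmbda)   # solves y_new = y + h*lmbda*y_new
    return y

# Stiff test problem y' = -50 y, y(0) = 1, integrated to t = 1 with a
# step far above the explicit stability limit h < 2/50 = 0.04.
h, n = 0.1, 10
ex = explicit_euler(-50.0, 1.0, h, n)   # blows up: (1 - 5)**10 is huge
im = implicit_euler(-50.0, 1.0, h, n)   # decays: (1/6)**10, small and positive
```

    Production stiff solvers such as LSODE use implicit multistep (BDF) formulas built on the same principle, with Newton iteration in place of the closed-form division above.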

  1. Sensitive electrospray mass spectrometry analysis of one-bead-one-compound peptide libraries labeled by quaternary ammonium salts.

    PubMed

    Bąchor, Remigiusz; Cydzik, Marzena; Rudowska, Magdalena; Kluczyk, Alicja; Stefanowicz, Piotr; Szewczuk, Zbigniew

    2012-08-01

    A rapid and straightforward method for high-throughput analysis of single resin beads from one-bead-one-compound combinatorial libraries with high resolution electrospray ionization tandem mass spectrometry (HR ESI-MS/MS) is presented. The application of an efficient method of peptide derivatization by quaternary ammonium salts (QAS) formation increases ionization efficiency and reduces the detection limit, allowing analysis of trace amounts of compounds by ESI-MS. Peptides, synthesized on solid support, contain a new cleavable linker composed of a Peg spacer (9-aza-3,6,12,15-tetraoxa-10-on-heptadecanoic acid), lysine with ɛ-amino group marked by the N,N,N-triethylglycine salt, and methionine, which makes possible the selective cleavage by cyanogen bromide. Even a small portion of peptides derivatized by QAS cleaved from a single resin bead is sufficient for sequencing by HR ESI-MS/MS experiments. The developed strategy was applied to a small training library of α chymotrypsin substrates. The obtained results confirm the applicability of the proposed method in combinatorial chemistry.

  2. Development of an SPE/CE method for analyzing HAAs

    USGS Publications Warehouse

    Zhang, L.; Capel, P.D.; Hozalski, R.M.

    2007-01-01

    The haloacetic acid (HAA) analysis methods approved by the US Environmental Protection Agency involve extraction and derivatization of HAAs (typically to their methyl ester form) and analysis by gas chromatography (GC) with electron capture detection (ECD). Concerns associated with these methods include the time and effort of the derivatization process, use of potentially hazardous chemicals or conditions during methylation, poor recoveries because of low extraction efficiencies for some HAAs or matrix effects from sulfate, and loss of tribromoacetic acid because of decarboxylation. The HAA analysis method introduced here uses solid-phase extraction (SPE) followed by capillary electrophoresis (CE) analysis. The method is accurate, reproducible, sensitive, relatively safe, and easy to perform, and avoids the use of large amounts of solvent for liquid-liquid extraction and the potential hazards and hassles of derivatization. The cost of analyzing HAAs using this method should be lower than the currently approved methods, and utilities with a GC/ECD can perform the analysis in-house.

  3. MO-FG-CAMPUS-TeP1-01: An Efficient Method of 3D Patient Dose Reconstruction Based On EPID Measurements for Pre-Treatment Patient Specific QA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    David, R; Lee, C; Calvary Mater Newcastle, Newcastle

    Purpose: To demonstrate an efficient and clinically relevant patient specific QA method by reconstructing 3D patient dose from 2D EPID images for IMRT plans. Also to determine the usefulness of 2D QA metrics when assessing 3D patient dose deviations. Methods: Using the method developed by King et al (Med Phys 39(5), 2839–2847), EPID images of IMRT fields were acquired in air and converted to dose at 10 cm depth (SAD setup) in a flat virtual water phantom. Each EPID measured dose map was then divided by the corresponding treatment planning system (TPS) dose map calculated with an identical setup, to derive a 2D "error matrix". For each field, the error matrix was used to adjust the doses along the respective ray lines in the original patient 3D dose. All field doses were combined to derive a reconstructed 3D patient dose for quantitative analysis. A software tool was developed to efficiently implement the entire process and was tested with a variety of IMRT plans for 2D (virtual flat phantom) and 3D (in-patient) QA analysis. Results: The method was tested on 60 IMRT plans. The mean (± standard deviation) 2D gamma (2%,2mm) pass rate (2D-GPR) was 97.4±3.0% and the mean 2D gamma index (2D-GI) was 0.35±0.06. The 3D PTV mean dose deviation was 1.8±0.8%. The analysis showed very weak correlations between both the 2D-GPR and 2D-GI when compared with PTV mean dose deviations (R2=0.3561 and 0.3632 respectively). Conclusion: Our method efficiently calculates 3D patient dose from 2D EPID images, utilising all of the advantages of an EPID-based dosimetry system. In this study, the 2D QA metrics did not predict the 3D patient dose deviation. This tool allows reporting of the 3D volumetric dose parameters thus providing more clinically relevant patient specific QA.
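    The error-matrix step can be sketched as follows, under the simplifying (and here hypothetical) assumption of parallel ray lines, so that the 2D EPID/TPS ratio scales every voxel column of the planned dose; the paper's method traces the actual beam geometry.

```python
import numpy as np

def reconstruct_3d_dose(tps_3d, epid_2d, tps_2d, eps=1e-6):
    """Scale each (simplified, parallel) ray line of the planned 3D dose
    by the per-pixel measured/planned ratio, i.e. the 2D error matrix."""
    error = np.where(tps_2d > eps, epid_2d / np.maximum(tps_2d, eps), 1.0)
    return tps_3d * error[np.newaxis, :, :]   # broadcast along depth axis

# Hypothetical 2-slice example: the EPID reads 2% hot everywhere,
# so every reconstructed voxel is scaled by 1.02.
tps_3d = np.ones((2, 4, 4))
epid   = np.full((4, 4), 1.02)
tps_2d = np.ones((4, 4))
dose = reconstruct_3d_dose(tps_3d, epid, tps_2d)
```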

  4. Optimization of the efficiency of search operations in the relational database of radio electronic systems

    NASA Astrophysics Data System (ADS)

    Wajszczyk, Bronisław; Biernacki, Konrad

    2018-04-01

    The increase of interoperability of radio electronic systems used in the Armed Forces requires the processing of very large amounts of data. Requirements for the integration of information from many systems and sensors, including radar recognition, electronic and optical recognition, force to look for more efficient methods to support information retrieval in even-larger database resources. This paper presents the results of research on methods of improving the efficiency of databases using various types of indexes. The data structure indexing technique is a solution used in RDBMS systems (relational database management system). However, the analysis of the performance of indices, the description of potential applications, and in particular the presentation of a specific scale of performance growth for individual indices are limited to few studies in this field. This paper contains analysis of methods affecting the work efficiency of a relational database management system. As a result of the research, a significant increase in the efficiency of operations on data was achieved through the strategy of indexing data structures. The presentation of the research topic discussed in this paper mainly consists of testing the operation of various indexes against the background of different queries and data structures. The conclusions from the conducted experiments allow to assess the effectiveness of the solutions proposed and applied in the research. The results of the research indicate the existence of a real increase in the performance of operations on data using indexation of data structures. In addition, the level of this growth is presented, broken down by index types.
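    The effect the study measures can be reproduced in miniature with SQLite, which exposes the chosen access path through EXPLAIN QUERY PLAN. The table and column names below are hypothetical:

```python
import sqlite3

# Populate a small table, then compare the query plan of a point query
# before and after indexing; timings vary by machine, the plan does not.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tracks(id INTEGER, freq REAL)")
con.executemany("INSERT INTO tracks VALUES (?, ?)",
                ((i, i * 0.5) for i in range(100_000)))

plan_before = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM tracks WHERE id = 42").fetchall()

con.execute("CREATE INDEX idx_tracks_id ON tracks(id)")
plan_after = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM tracks WHERE id = 42").fetchall()
# plan_before reports a full table SCAN; plan_after searches idx_tracks_id.
```

    The same trade-off the paper examines applies here: the index turns an O(n) scan into an O(log n) B-tree search, at the cost of extra storage and slower inserts.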

  5. IT Solution concept development for tracking and analyzing the labor effectiveness of employees

    NASA Astrophysics Data System (ADS)

    Ilin, Igor; Shirokova, Svetlana; Lepekhin, Aleksandr

    2018-03-01

    Labor efficiency and productivity of employees is an important aspect of the environment within any type of organization. It is a particularly crucial factor for companies whose operations involve physical labor, such as construction companies. Productivity and efficiency are both complicated concepts, and a huge variety of methods and approaches to their analysis can be implemented within an organization. It is nevertheless important to choose methods that not only analyze the key performance indicators of an employee, but also take into account personal indicators, which might affect performance even more than professional skills. For this complicated analysis task it is important to build an IT solution for tracking and analyzing labor effectiveness. The concept for designing this IT solution is proposed in the current research.

  6. Remote sensing image ship target detection method based on visual attention model

    NASA Astrophysics Data System (ADS)

    Sun, Yuejiao; Lei, Wuhu; Ren, Xiaodong

    2017-11-01

    The traditional methods of detecting ship targets in remote sensing images mostly use a sliding window to search the whole image comprehensively. However, the target usually occupies only a small fraction of the image, so this approach has high computational complexity for large-format visible image data. The bottom-up selective attention mechanism can selectively allocate computing resources according to visual stimuli, thus improving computational efficiency and reducing the difficulty of analysis. In view of this, a method of ship target detection in remote sensing images based on a visual attention model is proposed in this paper. The experimental results show that the proposed method can reduce the computational complexity while improving the detection accuracy, and improve the detection efficiency of ship targets in remote sensing images.

  7. A New Parameter for Cardiac Efficiency Analysis

    NASA Astrophysics Data System (ADS)

    Borazjani, Iman; Rajan, Navaneetha Krishnan; Song, Zeying; Hoffmann, Kenneth; MacMahon, Eileen; Belohlavek, Marek

    2014-11-01

    Detecting and evaluating a heart with suboptimal pumping efficiency is a significant clinical goal. However, routine parameters such as ejection fraction, quantified with current non-invasive techniques, are not predictive of heart disease prognosis. Furthermore, they only represent left-ventricular (LV) ejection function and not the efficiency, which might be affected before apparent changes in the function. We propose a new parameter, called the hemodynamic efficiency (H-efficiency) and defined as the ratio of the useful to total power, for cardiac efficiency analysis. Our results indicate that a change in the shape/motion of the LV will change the pumping efficiency of the LV even if the ejection fraction is kept constant at 55% (a normal value), i.e., H-efficiency can be used for diagnosing suboptimal cardiac performance. To apply H-efficiency on a patient-specific basis, we are developing a system that combines echocardiography (echo) and computational fluid dynamics (CFD) to provide the 3D pressure and velocity fields needed to directly calculate the H-efficiency parameter. Because the method is based on clinically used 2D echo, which has a faster acquisition time and lower cost relative to other imaging techniques, it can have a significant impact on a large number of patients. This work is partly supported by the American Heart Association.

  8. Efficient Numerical Methods for Nonlinear-Facilitated Transport and Exchange in a Blood-Tissue Exchange Unit

    PubMed Central

    Poulain, Christophe A.; Finlayson, Bruce A.; Bassingthwaighte, James B.

    2010-01-01

    The analysis of experimental data obtained by the multiple-indicator method requires complex mathematical models for which capillary blood-tissue exchange (BTEX) units are the building blocks. This study presents a new, nonlinear, two-region, axially distributed, single capillary, BTEX model. A facilitated transporter model is used to describe mass transfer between plasma and intracellular spaces. To provide fast and accurate solutions, numerical techniques suited to nonlinear convection-dominated problems are implemented. These techniques are the random choice method, an explicit Euler-Lagrange scheme, and the MacCormack method with and without flux correction. The accuracy of the numerical techniques is demonstrated, and their efficiencies are compared. The random choice, Euler-Lagrange and plain MacCormack method are the best numerical techniques for BTEX modeling. However, the random choice and Euler-Lagrange methods are preferred over the MacCormack method because they allow for the derivation of a heuristic criterion that makes the numerical methods stable without degrading their efficiency. Numerical solutions are also used to illustrate some nonlinear behaviors of the model and to show how the new BTEX model can be used to estimate parameters from experimental data. PMID:9146808
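    Of the schemes compared, the MacCormack predictor-corrector is the most compact to sketch. The illustration below applies it to plain linear advection on a periodic grid rather than to the BTEX transport equations:

```python
import numpy as np

def maccormack_advection(u, c, steps):
    """MacCormack predictor-corrector for u_t + a*u_x = 0 on a periodic
    grid, with Courant number c = a*dt/dx (stable for 0 < c <= 1)."""
    u = u.astype(float).copy()
    for _ in range(steps):
        pred = u - c * (np.roll(u, -1) - u)                   # forward diff
        u = 0.5 * (u + pred - c * (pred - np.roll(pred, 1)))  # backward diff
    return u

# At c = 1 the scheme propagates any profile exactly one cell per step,
# so a unit spike at cell 4 lands at cell 9 after five steps.
u0 = np.zeros(16)
u0[4] = 1.0
u5 = maccormack_advection(u0, 1.0, 5)
```

    The convection-dominated character of the BTEX equations is precisely why such shock-capturing schemes, rather than plain centered differences, are attractive.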

  9. Probabilistic Parameter Uncertainty Analysis of Single Input Single Output Control Systems

    NASA Technical Reports Server (NTRS)

    Smith, Brett A.; Kenny, Sean P.; Crespo, Luis G.

    2005-01-01

    The current standards for handling uncertainty in control systems use interval bounds for the definition of the uncertain parameters. This approach gives no information about the likelihood of system performance, but simply gives the response bounds. When used in design, current methods such as μ-analysis can lead to overly conservative controller designs. With these methods, worst case conditions are weighted equally with the most likely conditions. This research explores a unique approach for probabilistic analysis of control systems. Current reliability methods are examined, showing the strong areas of each in handling probability. A hybrid method is developed using these reliability tools for efficiently propagating probabilistic uncertainty through classical control analysis problems. The method developed is applied to classical response analysis as well as analysis methods that explore the effects of the uncertain parameters on stability and performance metrics. The benefits of using this hybrid approach for calculating the mean, variance, and cumulative distribution functions of responses are shown. Results of the probabilistic analysis of a missile pitch control system, and a non-collocated mass spring system, show the added information provided by this hybrid analysis.

  10. On the analysis of using 3-coil wireless power transfer system in retinal prosthesis.

    PubMed

    Bai, Shun; Skafidas, Stan

    2014-01-01

    Designing a wireless power transmission system (WPTS) using inductive coupling has been investigated extensively in the last decade. Depending on the configuration of the coupling system, various design methods have been proposed to optimise the power transmission efficiency based on the tuning circuitry, quality factor optimisation and geometrical configuration. Recently, a 3-coil WPTS was introduced in retinal prosthesis to overcome the low power transfer efficiency due to a low coupling coefficient. Here we present a method to analyse this 3-coil WPTS using the S-parameters to directly obtain the maximum achievable power transfer efficiency. Through electromagnetic simulation, we raise a question about the conditions under which a 3-coil WPTS improves the powering of a retinal prosthesis.

  11. Boundary element analysis of post-tensioned slabs

    NASA Astrophysics Data System (ADS)

    Rashed, Youssef F.

    2015-06-01

    In this paper, the boundary element method is applied to carry out the structural analysis of post-tensioned flat slabs. The shear-deformable plate-bending model is employed. The effect of the pre-stressing cables is taken into account via the equivalent load method. The formulation is automated using a computer program, which uses quadratic boundary elements. Verification samples are presented, and finally a practical application is analyzed where results are compared against those obtained from the finite element method. The proposed method is efficient in terms of computer storage and processing time as well as the ease in data input and modifications.

  12. Novel patch modelling method for efficient simulation and prediction uncertainty analysis of multi-scale groundwater flow and transport processes

    NASA Astrophysics Data System (ADS)

    Sreekanth, J.; Moore, Catherine

    2018-04-01

    The application of global sensitivity and uncertainty analysis techniques to groundwater models of deep sedimentary basins are typically challenged by large computational burdens combined with associated numerical stability issues. The highly parameterized approaches required for exploring the predictive uncertainty associated with the heterogeneous hydraulic characteristics of multiple aquifers and aquitards in these sedimentary basins exacerbate these issues. A novel Patch Modelling Methodology is proposed for improving the computational feasibility of stochastic modelling analysis of large-scale and complex groundwater models. The method incorporates a nested groundwater modelling framework that enables efficient simulation of groundwater flow and transport across multiple spatial and temporal scales. The method also allows different processes to be simulated within different model scales. Existing nested model methodologies are extended by employing 'joining predictions' for extrapolating prediction-salient information from one model scale to the next. This establishes a feedback mechanism supporting the transfer of information from child models to parent models as well as parent models to child models in a computationally efficient manner. This feedback mechanism is simple and flexible and ensures that while the salient small scale features influencing larger scale prediction are transferred back to the larger scale, this does not require the live coupling of models. This method allows the modelling of multiple groundwater flow and transport processes using separate groundwater models that are built for the appropriate spatial and temporal scales, within a stochastic framework, while also removing the computational burden associated with live model coupling. The utility of the method is demonstrated by application to an actual large scale aquifer injection scheme in Australia.

  13. Combining and Comparing Coalescent, Distance and Character-Based Approaches for Barcoding Microalgaes: A Test with Chlorella-Like Species (Chlorophyta).

    PubMed

    Zou, Shanmei; Fei, Cong; Song, Jiameng; Bao, Yachao; He, Meilin; Wang, Changhai

    2016-01-01

    Several different barcoding methods for distinguishing species have been advanced, but which method is best is still controversial. Chlorella is becoming particularly promising in the development of second-generation biofuels. However, the taxonomy of Chlorella-like organisms is easily confused. Here we report a comprehensive barcoding analysis of Chlorella-like species from Chlorella, Chloroidium, Dictyosphaerium and Actinastrum based on rbcL, ITS, tufA and 16S sequences to test the efficiency of traditional barcoding, GMYC, ABGD, PTP, P ID and character-based barcoding methods. First of all, the barcoding results gave new insights into the taxonomic assessment of the Chlorella-like organisms studied, including clear species discrimination and the resolution of potentially cryptic species complexes in C. sorokiniana, D. ehrenbergianum and C. vulgaris. The tufA proved to be the most efficient barcoding locus and thus could serve as a potential "specific barcode" for Chlorella-like species. The 16S failed in discriminating most closely related species. The resolution of the GMYC, PTP, P ID, ABGD and character-based barcoding methods was variable among the rbcL, ITS and tufA genes. The best resolution for species differentiation appeared in the tufA analysis, where the GMYC, PTP, ABGD and character-based approaches produced consistent groups while the PTP method over-split the taxa. The character analysis of rbcL, ITS and tufA sequences could clearly distinguish all taxonomic groups, including the potentially cryptic lineages, with many character attributes. Thus, character-based barcoding provides an attractive complement to coalescent and distance-based barcoding. Our study provides a test demonstrating the efficiency of multiple DNA barcoding in species discrimination of microalgae.

  14. Combining and Comparing Coalescent, Distance and Character-Based Approaches for Barcoding Microalgaes: A Test with Chlorella-Like Species (Chlorophyta)

    PubMed Central

    Zou, Shanmei; Fei, Cong; Song, Jiameng; Bao, Yachao; He, Meilin; Wang, Changhai

    2016-01-01

    Several different barcoding methods for distinguishing species have been advanced, but which method is best is still controversial. Chlorella is becoming particularly promising in the development of second-generation biofuels. However, the taxonomy of Chlorella-like organisms is easily confused. Here we report a comprehensive barcoding analysis of Chlorella-like species from Chlorella, Chloroidium, Dictyosphaerium and Actinastrum based on rbcL, ITS, tufA and 16S sequences to test the efficiency of traditional barcoding, GMYC, ABGD, PTP, P ID and character-based barcoding methods. First of all, the barcoding results gave new insights into the taxonomic assessment of the Chlorella-like organisms studied, including clear species discrimination and the resolution of potentially cryptic species complexes in C. sorokiniana, D. ehrenbergianum and C. vulgaris. The tufA proved to be the most efficient barcoding locus and thus could serve as a potential "specific barcode" for Chlorella-like species. The 16S failed in discriminating most closely related species. The resolution of the GMYC, PTP, P ID, ABGD and character-based barcoding methods was variable among the rbcL, ITS and tufA genes. The best resolution for species differentiation appeared in the tufA analysis, where the GMYC, PTP, ABGD and character-based approaches produced consistent groups while the PTP method over-split the taxa. The character analysis of rbcL, ITS and tufA sequences could clearly distinguish all taxonomic groups, including the potentially cryptic lineages, with many character attributes. Thus, character-based barcoding provides an attractive complement to coalescent and distance-based barcoding. Our study provides a test demonstrating the efficiency of multiple DNA barcoding in species discrimination of microalgae. PMID:27092945

  15. The wave-based substructuring approach for the efficient description of interface dynamics in substructuring

    NASA Astrophysics Data System (ADS)

    Donders, S.; Pluymers, B.; Ragnarsson, P.; Hadjit, R.; Desmet, W.

    2010-04-01

    In the vehicle design process, design decisions are increasingly based on virtual prototypes. Due to competitive and regulatory pressure, vehicle manufacturers are forced to improve product quality, reduce time-to-market and launch an increasing number of design variants on the global market. To speed up the design iteration process, substructuring and component mode synthesis (CMS) methods are commonly used, involving the analysis of substructure models and the synthesis of the substructure analysis results. Substructuring and CMS enable efficient decentralized collaboration across departments and make it possible to benefit from the availability of parallel computing environments. However, traditional CMS methods become prohibitively inefficient when substructures are coupled along large interfaces, i.e. with a large number of degrees of freedom (DOFs) at the interface between substructures. The reason is that the analysis of substructures involves the calculation of a number of enrichment vectors, one for each interface degree of freedom (DOF). Since large interfaces are common in vehicles (e.g. the continuous line connections between the body and the windshield, roof or floor), this interface bottleneck poses a clear limitation in the vehicle noise, vibration and harshness (NVH) design process. There is therefore a need to describe the interface dynamics more efficiently. This paper presents a wave-based substructuring (WBS) approach, which reduces the interface representation between substructures in an assembly by expressing the interface DOFs in terms of a limited set of basis functions ("waves"). As the number of basis functions can be much lower than the number of interface DOFs, this greatly facilitates the substructure analysis procedure and results in faster design predictions. The waves are calculated once from a full nominal assembly analysis, but these nominal waves can be re-used for the assembly of modified components. The WBS approach thus enables efficient structural modification predictions of the global modes, so that efficient vibro-acoustic design modification, optimization and robust design become possible. The results show that wave-based substructuring offers a clear benefit for vehicle design modifications, improving both the speed of component reduction processes and the efficiency and accuracy of design iteration predictions compared to conventional substructuring approaches.
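    The core reduction step of expressing many interface DOFs in a small wave basis can be sketched with an SVD of interface response snapshots; the geometry, snapshot data and basis size below are hypothetical, not the paper's wave computation:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical interface motion: 200 DOFs along a connection line whose
# response is dominated by 3 smooth "wave" shapes.
x = np.linspace(0.0, 1.0, 200)
waves = np.stack([np.sin(np.pi * k * x) for k in (1, 2, 3)], axis=1)
snapshots = waves @ rng.normal(size=(3, 40))    # 40 nominal analysis results

# Extract a reduced basis once from the nominal assembly snapshots,
# then represent every interface state by 3 coordinates instead of 200.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :3]                     # 200 x 3 wave basis
coords = basis.T @ snapshots         # reduced interface coordinates
recon = basis @ coords               # expansion back to physical DOFs
err = np.linalg.norm(snapshots - recon) / np.linalg.norm(snapshots)
```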

  16. Sensitivity analysis of dynamic biological systems with time-delays.

    PubMed

    Wu, Wu Hsiung; Wang, Feng Sheng; Chang, Maw Shang

    2010-10-15

    Mathematical modeling has long been applied to the study and analysis of complex biological systems. Some processes in biological systems, such as gene expression and feedback control in signal transduction networks, involve a time delay. These systems are represented as delay differential equation (DDE) models. Numerical sensitivity analysis of a DDE model by the direct method requires the solution of the model and sensitivity equations with time delays. The major effort is the computation of the Jacobian matrix when solving the sensitivity equations. Computing partial derivatives of complex equations, whether analytically or by symbolic manipulation, is time consuming, inconvenient, and prone to human error. Addressing this problem requires an automatic approach to obtain the derivatives of complex functions efficiently and accurately. We have previously proposed an efficient algorithm with adaptive step size control to compute the solution and dynamic sensitivities of biological systems described by ordinary differential equations (ODEs). Here, the adaptive direct-decoupled algorithm is extended to compute the solution and dynamic sensitivities of time-delay systems described by DDEs. To save human effort and avoid human error in the computation of partial derivatives, an automatic differentiation technique is embedded in the extended algorithm to evaluate the Jacobian matrix. The extended algorithm is implemented and applied to two realistic models with time delays: the cardiovascular control system and the TNF-α signal transduction network. The results show that the extended algorithm is a good tool for dynamic sensitivity analysis of DDE models with little user intervention. A theoretical comparison with direct-coupled methods shows that the extended algorithm is efficient, accurate, and easy to use for end users without a programming background who wish to perform dynamic sensitivity analysis on complex biological systems with time delays.
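
    The core idea of embedding automatic differentiation to obtain the Jacobian can be sketched with minimal forward-mode dual numbers; this is an illustration under assumed toy dynamics (the two-state model and its rate constants below are hypothetical), not the authors' implementation:

```python
class Dual:
    """Forward-mode AD number: a value plus a derivative part."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__
    def __neg__(self):
        return Dual(-self.val, -self.der)
    def __sub__(self, o):
        return self + (-o if isinstance(o, Dual) else Dual(-o))

def jacobian(f, x):
    """Jacobian of f: R^n -> R^m at x, one seeded dual direction per column."""
    n = len(x)
    cols = []
    for j in range(n):
        seeded = [Dual(x[i], 1.0 if i == j else 0.0) for i in range(n)]
        cols.append([fi.der for fi in f(seeded)])
    # transpose columns into rows
    return [[cols[j][i] for j in range(n)] for i in range(len(cols[0]))]

# hypothetical two-state right-hand side with assumed rate constants
def rhs(y):
    k1, k2 = 2.0, 0.5
    return [-k1 * y[0] + k2 * y[1],
            k1 * y[0] - k2 * y[1]]

J = jacobian(rhs, [1.0, 3.0])
# J == [[-2.0, 0.5], [2.0, -0.5]]
```

    No symbolic manipulation or hand-coded partial derivatives are needed: the derivative parts propagate through ordinary arithmetic, which is the labor- and error-saving property the abstract describes.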

  17. The Location of Sources of Human Computer Processed Cerebral Potentials for the Automated Assessment of Visual Field Impairment

    PubMed Central

    Leisman, Gerald; Ashkenazi, Maureen

    1979-01-01

    Objective psychophysical techniques for investigating visual fields are described. The paper concerns methods for the collection and analysis of evoked potentials using a small laboratory computer and provides efficient methods for obtaining information about the conduction pathways of the visual system.

  18. Study of strength kinetics of sand concrete system of accelerated hardening

    NASA Astrophysics Data System (ADS)

    Sharanova, A. V.; Lenkova, D. A.; Panfilova, A. D.

    2018-04-01

    Methods of calorimetric analysis are used to study the dynamics of the hydration processes of concretes with different accelerator contents. The efficiency of the isothermal calorimetry method is demonstrated for studying the strength kinetics of accelerated-hardening concrete mixtures, which are promising for additive technologies in civil engineering.

  19. Modeling of Melt-Infiltrated SiC/SiC Composite Properties

    NASA Technical Reports Server (NTRS)

    Mital, Subodh K.; Bednarcyk, Brett A.; Arnold, Steven M.; Lang, Jerry

    2009-01-01

    The elastic properties of a two-dimensional five-harness melt-infiltrated silicon carbide fiber reinforced silicon carbide matrix (MI SiC/SiC) ceramic matrix composite (CMC) were predicted using several methods: multiscale laminate analysis, micromechanics-based woven composite analysis, a hybrid woven composite analysis, and two- and three-dimensional finite element analyses. The predicted elastic properties are in good agreement with each other as well as with the available measured data. However, the methods differ in three key areas: (1) the fidelity provided, (2) the effort required for input data preparation, and (3) the computational resources required. Results also indicate that the efficient methods are able to provide a reasonable estimate of local stress fields.

  20. Computer-assisted techniques to evaluate fringe patterns

    NASA Astrophysics Data System (ADS)

    Sciammarella, Cesar A.; Bhat, Gopalakrishna K.

    1992-01-01

    Strain measurement using interferometry requires an efficient way to extract the desired information from interferometric fringes. The availability of digital image processing systems makes it possible to use digital techniques for fringe analysis. In the past, there have been several developments in one-dimensional and two-dimensional fringe analysis techniques, including the carrier fringe method (spatial heterodyning) and the phase stepping (quasi-heterodyning) technique. This paper presents new developments in two-dimensional fringe analysis, including a phase stepping technique supplemented by the carrier fringe method and a two-dimensional Fourier transform method to obtain the strain directly from the discontinuous phase contour map.
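
    As background, the standard four-step phase-stepping formula (one common variant, not necessarily the exact algorithm of this paper) recovers the wrapped phase at a pixel from four frames shifted by π/2:

```python
import math

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase from four intensity frames stepped by pi/2:
    I_k = a + b*cos(phi + k*pi/2), k = 0..3, so
    I4 - I2 = 2b*sin(phi) and I1 - I3 = 2b*cos(phi)."""
    return math.atan2(i4 - i2, i1 - i3)

# synthetic fringe intensities at one pixel (assumed bias a, modulation b)
a, b, phi_true = 1.0, 0.5, 0.7
frames = [a + b * math.cos(phi_true + k * math.pi / 2) for k in range(4)]
phi = four_step_phase(*frames)
# phi ≈ 0.7 (wrapped to (-pi, pi])
```

    The quadrant-aware arctangent removes both the bias a and the modulation b, leaving only the wrapped phase, which must then be unwrapped to obtain a continuous phase map.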

  1. Protoplast isolation, transient transformation of leaf mesophyll protoplasts and improved Agrobacterium-mediated leaf disc infiltration of Phaseolus vulgaris: tools for rapid gene expression analysis.

    PubMed

    Nanjareddy, Kalpana; Arthikala, Manoj-Kumar; Blanco, Lourdes; Arellano, Elizabeth S; Lara, Miguel

    2016-06-24

    Phaseolus vulgaris is one of the most extensively studied model legumes in the world. The P. vulgaris genome sequence is available; therefore, the need for an efficient and rapid transformation system is more imperative than ever. The functional characterization of P. vulgaris genes is impeded chiefly by the recalcitrance of Phaseolus sp. to stable genetic transformation. Transient transformation systems are convenient and versatile alternatives for rapid gene functional characterization studies. Hence, the present work focuses on standardizing methodologies for protoplast isolation from multiple tissues and transient transformation protocols for rapid gene expression analysis in the recalcitrant grain legume P. vulgaris. Herein, we provide methodologies for the high-throughput isolation of leaf mesophyll-, flower petal-, hypocotyl-, root- and nodule-derived protoplasts from P. vulgaris. The highly efficient polyethylene glycol-mannitol magnesium (PEG-MMG)-mediated transformation of leaf mesophyll protoplasts was optimized using a GUS reporter gene. We used the P. vulgaris SNF1-related protein kinase 1 (PvSnRK1) gene as proof of concept to demonstrate rapid gene functional analysis. An RT-qPCR analysis of protoplasts that had been transformed with PvSnRK1-RNAi and PvSnRK1-OE vectors showed significant downregulation and ectopic constitutive expression (overexpression), respectively, of the PvSnRK1 transcript. We also demonstrate an improved transient transformation approach, sonication-assisted Agrobacterium-mediated transformation (SAAT), for leaf disc infiltration of P. vulgaris. Interestingly, this method achieved a 90 % transformation efficiency and transformed 60-85 % of the cells in a given area of the leaf surface. The constitutive expression of YFP further confirmed the amenability of the system to gene functional characterization studies. In summary, we present simple and efficient methodologies for protoplast isolation from multiple P. vulgaris tissues, together with a highly efficient and amenable method for leaf mesophyll transformation for rapid gene functional characterization studies. Furthermore, a modified SAAT leaf disc infiltration approach aids in validating genes and their functions. Together, these methods help to rapidly unravel novel gene functions and are promising tools for P. vulgaris research.

  2. A global optimization method synthesizing heat transfer and thermodynamics for the power generation system with Brayton cycle

    NASA Astrophysics Data System (ADS)

    Fu, Rong-Huan; Zhang, Xing

    2016-09-01

    Supercritical carbon dioxide operated in a Brayton cycle offers numerous potential advantages for a power generation system, and many thermodynamic analyses have been conducted to increase its efficiency. Because a practical thermodynamic cycle contains many heat-absorbing and heat-rejecting subprocesses, all implemented by heat exchangers, optimizing the system by combining thermodynamics with heat transfer theory can increase the gross efficiency of the whole power generation system. This paper analyzes the influence of heat exchanger performance on the actual efficiency of an ideal Brayton cycle with a simple configuration, and proposes a new method to optimize the power generation system that aims at minimum energy consumption. Although the method is applied here only to an ideal working fluid, its merits over purely thermodynamic analysis are clearly demonstrated.
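
    For reference, the ideal-gas, simple-cycle baseline from which such analyses start can be computed directly. The pressure ratio below is an arbitrary example, and real supercritical CO2 cycles with non-ideal heat exchangers depart from this formula, which is exactly the gap the paper's combined thermodynamics/heat-transfer optimization addresses:

```python
def brayton_ideal_efficiency(pressure_ratio, gamma=1.4):
    """Thermal efficiency of an ideal (isentropic, no pressure-loss)
    simple Brayton cycle with an ideal gas:
    eta = 1 - r^(-(gamma - 1)/gamma)."""
    return 1.0 - pressure_ratio ** (-(gamma - 1.0) / gamma)

# example: pressure ratio 10 with air-like gamma = 1.4
eta = brayton_ideal_efficiency(10.0)
# eta ≈ 0.482
```

    Finite heat exchanger effectiveness and pressure losses reduce the actual efficiency below this ideal value, so the cycle and the heat exchangers must be optimized together rather than in isolation.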

  3. A third-order approximation method for three-dimensional wheel-rail contact

    NASA Astrophysics Data System (ADS)

    Negretti, Daniele

    2012-03-01

    Multibody train analysis is used increasingly by railway operators, who need a reliable and time-efficient method to evaluate the contact between wheel and rail; in particular, wheel-rail contact is one of the most important aspects affecting a reliable and time-efficient vehicle dynamics computation. The approach proposed here carries out these tasks by means of online wheel-rail elastic contact detection. To improve efficiency and save time, an analytical approach is used for the definition of the wheel and rail surfaces as well as for contact detection, after which a final numerical evaluation locates the contact. The final numerical procedure consists of finding the zeros of a nonlinear function of a single variable. The overall method is based on an approximation of the wheel surface which, as shown in the paper, does not significantly influence the contact location.
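
    The closing numerical step, finding the zero of a scalar nonlinear function, can be sketched with bisection; the separation function below is a made-up stand-in for the actual wheel-rail geometry, not the paper's formulation:

```python
def bisect(f, lo, hi, tol=1e-10):
    """Find a zero of scalar f on [lo, hi], assuming f changes sign there."""
    flo = f(lo)
    assert flo * f(hi) <= 0, "no bracketed sign change"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flo * f(mid) <= 0:
            hi = mid            # zero lies in the lower half
        else:
            lo, flo = mid, f(mid)
    return 0.5 * (lo + hi)

# hypothetical separation function: zero where the profiles touch
g = lambda s: s**3 - 2.0 * s - 5.0
s0 = bisect(g, 2.0, 3.0)
```

    In practice a faster-converging scheme such as Newton's method is often substituted once a bracket is known; the point is that reducing contact detection to a one-variable root find keeps the online cost low.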

  4. Efficient engineering of marker-free synthetic allotetraploids of Saccharomyces.

    PubMed

    Alexander, William G; Peris, David; Pfannenstiel, Brandon T; Opulente, Dana A; Kuang, Meihua; Hittinger, Chris Todd

    2016-04-01

    Saccharomyces interspecies hybrids are critical biocatalysts in the fermented beverage industry, including in the production of lager beers, Belgian ales, ciders, and cold-fermented wines. Current methods for making synthetic interspecies hybrids are cumbersome and/or require genome modifications. We have developed a simple, robust, and efficient method for generating allotetraploid strains of prototrophic Saccharomyces without sporulation or nuclear genome manipulation. S. cerevisiae×S. eubayanus, S. cerevisiae×S. kudriavzevii, and S. cerevisiae×S. uvarum designer hybrid strains were created as synthetic lager, Belgian, and cider strains, respectively. The ploidy and hybrid nature of the strains were confirmed using flow cytometry and PCR-RFLP analysis, respectively. This method provides an efficient means for producing novel synthetic hybrids for beverage and biofuel production, as well as for constructing tetraploids to be used for basic research in evolutionary genetics and genome stability.

  5. Optimization of the multi-turn injection efficiency for a medical synchrotron

    NASA Astrophysics Data System (ADS)

    Kim, J.; Yoon, M.; Yim, H.

    2016-09-01

    We present a method for optimizing the multi-turn injection efficiency of a medical synchrotron. We show that, for a given injection energy, the injection efficiency can be greatly enhanced by choosing the transverse tunes appropriately and by optimizing the injection bump and the number of turns used for beam injection. We verify our study by applying the method to the Korea Heavy Ion Medical Accelerator (KHIMA) synchrotron, which is currently being built on the campus of the Dongnam Institute of Radiological and Medical Sciences (DIRAMS) in Busan, Korea. First, frequency map analysis was performed with the help of the ELEGANT and ACCSIM codes, and the tunes that yielded good injection efficiency were selected. With these tunes, the injection bump and the number of turns required for injection were then optimized by tracking a number of particles for up to one thousand turns after injection, beyond which no further beam loss occurred. Results for the optimization of the injection efficiency for protons are presented.

  6. Spatiotemporal Data Mining, Analysis, and Visualization of Human Activity Data

    ERIC Educational Resources Information Center

    Li, Xun

    2012-01-01

    This dissertation addresses the research challenge of developing efficient new methods for discovering useful patterns and knowledge in large volumes of electronically collected spatiotemporal activity data. I propose to analyze three types of such spatiotemporal activity data in a methodological framework that integrates spatial analysis, data…

  7. Genetic diversity and association analysis of leafminer (Liriomyza langei) resistance in spinach (Spinacia oleracea)

    USDA-ARS?s Scientific Manuscript database

    Leafminer (Liriomyza spp.) is a major insect pest of many important agricultural crops, including spinach (Spinacia oleracea). Use of genetic resistance is an efficient, economic and environment-friendly method to control this pest. The objective of this research was to conduct association analysis ...

  8. 20180311 - EPA’s Non-Targeted Analysis Research Program: Expanding public data resources in support of exposure science (SOT)

    EPA Science Inventory

    Suspect screening (SSA) and non-targeted analysis (NTA) methods using high-resolution mass spectrometry (HRMS) offer new approaches to efficiently generate exposure data for chemicals in a variety of environmental and biological media. These techniques aid characterization of the...

  9. An Efficient Method for Genomic DNA Extraction from Different Molluscs Species

    PubMed Central

    Pereira, Jorge C.; Chaves, Raquel; Bastos, Estela; Leitão, Alexandra; Guedes-Pinto, Henrique

    2011-01-01

    The selection of a DNA extraction method is a critical step when subsequent analysis depends on DNA quality and quantity. Unlike for mammals, for which several capable DNA extraction methods have been developed, the availability of optimized genomic DNA extraction protocols for molluscs is clearly insufficient. Several aspects, such as animal physiology and the type (e.g., adductor muscle or gills) or quantity of tissue, can explain the lack of efficiency (quality and yield) of molluscan genomic DNA extraction procedures. To overcome these limitations, this work describes an efficient method for mollusc genomic DNA extraction that was tested in several species from different families: Veneridae, Ostreidae, Anomiidae, Cardiidae (Bivalvia) and Muricidae (Gastropoda), with different sample tissue weights. The isolated DNA was of high molecular weight with high yield and purity, even with reduced quantities of tissue. Moreover, the isolated genomic DNA was demonstrated to be suitable for several downstream molecular techniques, such as PCR and sequencing. PMID:22174651

  10. Efficiency of endoscopy units can be improved with use of discrete event simulation modeling.

    PubMed

    Sauer, Bryan G; Singh, Kanwar P; Wagner, Barry L; Vanden Hoek, Matthew S; Twilley, Katherine; Cohn, Steven M; Shami, Vanessa M; Wang, Andrew Y

    2016-11-01

    Background and study aims: The projected increased demand for health services obligates healthcare organizations to operate efficiently. Discrete event simulation (DES) is a modeling method that allows for optimization of systems through virtual testing of different configurations before implementation. The objective of this study was to identify strategies to improve the daily efficiencies of an endoscopy center with the use of DES. Methods: We built a DES model of a five procedure room endoscopy unit at a tertiary-care university medical center. After validating the baseline model, we tested alternate configurations to run the endoscopy suite and evaluated outcomes associated with each change. The main outcome measures included adequate number of preparation and recovery rooms, blocked inflow, delay times, blocked outflows, and patient cycle time. Results: Based on a sensitivity analysis, the adequate number of preparation rooms is eight and recovery rooms is nine for a five procedure room unit (total 3.4 preparation and recovery rooms per procedure room). Simple changes to procedure scheduling and patient arrival times led to a modest improvement in efficiency. Increasing the preparation/recovery rooms based on the sensitivity analysis led to significant improvements in efficiency. Conclusions: By applying tools such as DES, we can model changes in an environment with complex interactions and find ways to improve the medical care we provide. DES is applicable to any endoscopy unit and would be particularly valuable to those who are trying to improve on the efficiency of care and patient experience.
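
    The event-driven logic of such a simulation can be sketched in a few lines. This is a deliberately simplified, deterministic toy, not the authors' validated five-room model; the arrival gap and procedure time below are assumptions:

```python
import heapq

def simulate(n_patients, n_rooms, arrival_gap, proc_time):
    """Minimal discrete event sketch: patients arrive every `arrival_gap`
    minutes and occupy one of `n_rooms` procedure rooms for `proc_time`
    minutes; returns each patient's waiting time."""
    free_at = [0.0] * n_rooms            # times at which each room next frees up
    heapq.heapify(free_at)
    waits = []
    for k in range(n_patients):
        arrive = k * arrival_gap
        room_free = heapq.heappop(free_at)   # earliest-available room
        start = max(arrive, room_free)
        waits.append(start - arrive)
        heapq.heappush(free_at, start + proc_time)
    return waits

# illustrative parameters (assumptions, not the unit's measured values)
waits = simulate(n_patients=10, n_rooms=5, arrival_gap=5.0, proc_time=45.0)
```

    Even this toy shows the behavior exploited in the study: once the rooms saturate, every additional patient inherits a fixed queueing delay, and changing room counts or arrival spacing in the model reveals such bottlenecks before any physical change is made.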

  11. Efficiency of endoscopy units can be improved with use of discrete event simulation modeling

    PubMed Central

    Sauer, Bryan G.; Singh, Kanwar P.; Wagner, Barry L.; Vanden Hoek, Matthew S.; Twilley, Katherine; Cohn, Steven M.; Shami, Vanessa M.; Wang, Andrew Y.

    2016-01-01

    Background and study aims: The projected increased demand for health services obligates healthcare organizations to operate efficiently. Discrete event simulation (DES) is a modeling method that allows for optimization of systems through virtual testing of different configurations before implementation. The objective of this study was to identify strategies to improve the daily efficiencies of an endoscopy center with the use of DES. Methods: We built a DES model of a five procedure room endoscopy unit at a tertiary-care university medical center. After validating the baseline model, we tested alternate configurations to run the endoscopy suite and evaluated outcomes associated with each change. The main outcome measures included adequate number of preparation and recovery rooms, blocked inflow, delay times, blocked outflows, and patient cycle time. Results: Based on a sensitivity analysis, the adequate number of preparation rooms is eight and recovery rooms is nine for a five procedure room unit (total 3.4 preparation and recovery rooms per procedure room). Simple changes to procedure scheduling and patient arrival times led to a modest improvement in efficiency. Increasing the preparation/recovery rooms based on the sensitivity analysis led to significant improvements in efficiency. Conclusions: By applying tools such as DES, we can model changes in an environment with complex interactions and find ways to improve the medical care we provide. DES is applicable to any endoscopy unit and would be particularly valuable to those who are trying to improve on the efficiency of care and patient experience. PMID:27853739

  12. Sobol′ sensitivity analysis of NAPL-contaminated aquifer remediation process based on multiple surrogates

    NASA Astrophysics Data System (ADS)

    Luo, Jiannan; Lu, Wenxi

    2014-06-01

    Sobol′ sensitivity analyses based on different surrogates were performed on a trichloroethylene (TCE)-contaminated aquifer to assess the sensitivity of the design variables (remediation duration, surfactant concentration, and injection rates at four wells) to remediation efficiency. First, surrogate models of a multi-phase flow simulation model were constructed using radial basis function artificial neural network (RBFANN) and Kriging methods, and the two models were compared. Based on the developed surrogate models, the Sobol′ method was used to calculate the sensitivity indices of the design variables affecting remediation efficiency. The coefficient of determination (R2) and the mean square error (MSE) of the two surrogate models demonstrated that both had acceptable approximation accuracy; furthermore, the approximation accuracy of the Kriging model was slightly better than that of the RBFANN model. The Sobol′ sensitivity analysis results demonstrated that remediation duration was the most important variable influencing remediation efficiency, followed by the injection rates at wells 1 and 3, while the injection rates at wells 2 and 4 and the surfactant concentration had negligible influence. In addition, the higher-order sensitivity indices were all smaller than 0.01, indicating that interaction effects among the six factors were practically insignificant. The proposed surrogate-based Sobol′ sensitivity analysis is an effective tool for calculating sensitivity indices, because it shows the relative contribution of the design variables (individually and through interactions) to the output performance variability with a limited number of runs of a computationally expensive simulation model. The sensitivity analysis results lay a foundation for optimizing the groundwater remediation process.
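
    The pick-freeze Monte Carlo estimator behind first-order Sobol′ indices can be sketched as follows; the cheap additive test function stands in for the paper's Kriging/RBFANN surrogates and is purely illustrative:

```python
import random

def sobol_first_order(f, dim, n=20000, seed=1):
    """Pick-freeze Monte Carlo estimate of first-order Sobol' indices
    for f on the unit hypercube (Saltelli-style estimator)."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    yA = [f(a) for a in A]
    yB = [f(b) for b in B]
    mean = sum(yA + yB) / (2 * n)
    var = sum((y - mean) ** 2 for y in yA + yB) / (2 * n)
    S = []
    for i in range(dim):
        # evaluate on B with column i frozen to the value from A
        yABi = [f(b[:i] + [a[i]] + b[i + 1:]) for a, b in zip(A, B)]
        Si = sum(ya * (yabi - yb)
                 for ya, yabi, yb in zip(yA, yABi, yB)) / n / var
        S.append(Si)
    return S

# toy surrogate 4*x1 + x2: analytic indices are S1 = 16/17, S2 = 1/17
S = sobol_first_order(lambda x: 4.0 * x[0] + x[1], dim=2)
```

    Because the estimator only evaluates the surrogate, not the multi-phase flow simulator, thousands of samples are affordable, which is precisely the point of the surrogate-based approach described above.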

  13. Numerical method of carbon-based material ablation effects on aero-heating for half-sphere

    NASA Astrophysics Data System (ADS)

    Wang, Jiang-Feng; Li, Jia-Wei; Zhao, Fa-Ming; Fan, Xiao-Feng

    2018-05-01

    A numerical method for aerodynamic heating with material thermal ablation effects for a hypersonic half-sphere is presented. A surface material ablation model is provided to analyze the ablation effects on aero-thermal properties and structural heat conduction for the thermal protection system (TPS) of hypersonic vehicles. To demonstrate its capability, applications to the thermal analysis of hypersonic vehicles using carbonaceous ceramic ablators are performed and discussed. The numerical results demonstrate the high efficiency and validity of the developed method for analyzing the thermal characteristics of hypersonic aerodynamic heating.

  14. A comparison of TSS and TRASYS in form factor calculation

    NASA Technical Reports Server (NTRS)

    Golliher, Eric

    1993-01-01

    As the workstation and personal computer become more popular than a centralized mainframe to perform thermal analysis, the methods for space vehicle thermal analysis will change. Already, many thermal analysis codes are now available for workstations, which were not in existence just five years ago. As these changes occur, some organizations will adopt the new codes and analysis techniques, while others will not. This might lead to misunderstandings between thermal shops in different organizations. If thermal analysts make an effort to understand the major differences between the new and old methods, a smoother transition to a more efficient and more versatile thermal analysis environment will be realized.

  15. Immuno-affinity Capture Followed by TMPP N-Terminus Tagging to Study Catabolism of Therapeutic Proteins.

    PubMed

    Kullolli, Majlinda; Rock, Dan A; Ma, Ji

    2017-02-03

    Characterization of the in vitro and in vivo catabolism of therapeutic proteins has increasingly become an integral part of the discovery and development process for novel proteins. Unambiguous and efficient identification of catabolites not only facilitates accurate understanding of the pharmacokinetic profiles of drug candidates, but also enables follow-up protein engineering to generate more catabolically stable molecules with improved properties (pharmacokinetics and pharmacodynamics). Immunoaffinity capture (IC) followed by top-down intact protein analysis using either matrix-assisted laser desorption/ionization or electrospray ionization mass spectrometry has been the primary method of choice for catabolite identification. However, the sensitivity and efficiency of these methods are not always sufficient for characterization of novel proteins from complex biomatrices such as plasma or serum. In this study, a novel bottom-up targeted protein workflow was optimized for the analysis of proteolytic degradation of therapeutic proteins. Selective and sensitive tagging of the alpha-amine at the N-terminus of the proteins of interest was performed by immunoaffinity capture of the therapeutic protein and its catabolites, followed by on-bead succinimidyloxycarbonylmethyl tris(2,4,6-trimethoxyphenyl)phosphonium (TMPP) N-terminus tagging (TMPP-NTT). The positively charged hydrophobic TMPP tag facilitates unambiguous sequence identification of all N-terminus peptides from complex tryptic digestion samples via data-dependent liquid chromatography-tandem mass spectrometry. The utility of the workflow was illustrated by definitive analysis of the in vitro catabolic profile of a neurotensin human Fc (NTs-huFc) protein in mouse serum. The results from this study demonstrate that the IC-TMPP-NTT workflow is a simple and efficient method for characterizing catabolite formation in therapeutic proteins.

  16. Methods and analysis of factors impact on the efficiency of the photovoltaic generation

    NASA Astrophysics Data System (ADS)

    Tianze, Li; Xia, Zhang; Chuan, Jiang; Luan, Hou

    2011-02-01

    The paper first describes two important breakthroughs that occurred in the field of solar energy applications in the 1950s. In the 21st century, the development of solar photovoltaic power generation will have the following characteristics: continued high growth of the industry, significantly reduced solar cell costs, large-scale high-tech development of photovoltaic industries, breakthroughs in thin-film cell technology, and rapid development of building-integrated and grid-connected solar photovoltaics. The paper provides a theoretical analysis of the operating principles of solar cells. On this basis, we study the conversion efficiency of solar cells, identify the factors that affect the efficiency of photovoltaic generation, address the technical problems limiting solar cell conversion efficiency through the development of new technology, and open up new ways to improve conversion efficiency. Finally, drawing on practical experience, the paper discusses policies and legislation to encourage the use of renewable energy, development strategy, and basic applied research.

  17. A new transform for the analysis of complex fractionated atrial electrograms

    PubMed Central

    2011-01-01

    Background: Representation of independent biophysical sources using Fourier analysis can be inefficient because the basis is sinusoidal and general. When complex fractionated atrial electrograms (CFAE) are acquired during atrial fibrillation (AF), the electrogram morphology depends on the mix of distinct nonsinusoidal generators. Identification of these generators using efficient methods of representation and comparison would be useful for targeting catheter ablation sites to prevent arrhythmia reinduction. Method: A data-driven basis and transform is described which utilizes the ensemble average of signal segments to identify and distinguish CFAE morphologic components and frequencies. Calculation of the dominant frequency (DF) of actual CFAE, and identification of simulated independent generator frequencies and morphologies embedded in CFAE, is done using a total of 216 recordings from 10 paroxysmal and 10 persistent AF patients. The transform is tested against Fourier analysis for detecting spectral components in the presence of phase noise and interference. Correspondence is shown between the ensemble basis vectors of highest power and the corresponding synthetic drivers embedded in CFAE. Results: The ensemble basis is orthogonal and efficient for representing CFAE components as compared with Fourier analysis (p ≤ 0.002). When three synthetic drivers with additive phase noise and interference were decomposed, the top three peaks in the ensemble power spectrum corresponded to the driver frequencies more closely than the top Fourier power spectrum peaks (p ≤ 0.005). The synthesized drivers with phase noise and interference were extractable from their corresponding ensemble basis with a mean error of less than 10%. Conclusions: The new transform efficiently identifies CFAE features through DF calculation and by discerning morphologic differences. Unlike the Fourier transform method, it does not distort CFAE signals prior to analysis and is relatively robust to jitter in periodic events. Thus the ensemble method can provide a useful alternative for quantitative characterization of CFAE during clinical study. PMID:21569421
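
    The ensemble-averaging principle behind such a transform can be sketched on a synthetic signal (an illustration with an assumed period, driver shape, and noise level, not the paper's algorithm): segments of length equal to a candidate period are averaged, so a repeating nonsinusoidal driver survives while uncorrelated noise cancels:

```python
import math, random

def ensemble_average(signal, period):
    """Average consecutive segments of length `period`; a component
    repeating with this period is retained as a data-driven basis
    vector while zero-mean noise averages toward zero."""
    n_seg = len(signal) // period
    return [sum(signal[k * period + i] for k in range(n_seg)) / n_seg
            for i in range(period)]

rng = random.Random(0)
period = 40
# hypothetical nonsinusoidal driver: a Gaussian-shaped deflection
driver = [math.exp(-((i - 10) ** 2) / 20.0) for i in range(period)]
signal = [driver[i % period] + rng.gauss(0.0, 0.3)
          for i in range(40 * period)]
recovered = ensemble_average(signal, period)
err = max(abs(r - d) for r, d in zip(recovered, driver))
```

    A single ensemble basis vector captures the driver's full morphology, whereas a sinusoidal basis would spread the same spike-like shape across many Fourier harmonics.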

  18. Finite element analysis and computer graphics visualization of flow around pitching and plunging airfoils

    NASA Technical Reports Server (NTRS)

    Bratanow, T.; Ecer, A.

    1973-01-01

    A general computational method for analyzing unsteady flow around pitching and plunging airfoils was developed. The finite element method was applied in developing an efficient numerical procedure for the solution of equations describing the flow around airfoils. The numerical results were employed in conjunction with computer graphics techniques to produce visualization of the flow. The investigation involved mathematical model studies of flow in two phases: (1) analysis of a potential flow formulation and (2) analysis of an incompressible, unsteady, viscous flow from Navier-Stokes equations.

  19. Optimization of Turbine Engine Cycle Analysis with Analytic Derivatives

    NASA Technical Reports Server (NTRS)

    Hearn, Tristan; Hendricks, Eric; Chin, Jeffrey; Gray, Justin; Moore, Kenneth T.

    2016-01-01

    A new engine cycle analysis tool, called pyCycle, was recently built using the OpenMDAO framework. This tool uses equilibrium-chemistry-based thermodynamics and provides analytic derivatives, allowing stable and efficient use of gradient-based optimization and sensitivity analysis methods on engine cycle models without requiring finite-difference derivative approximations. To demonstrate this, a gradient-based design optimization was performed on a multi-point turbofan engine model. Results demonstrate very favorable performance compared to an optimization of an identical model using finite-difference approximated derivatives.
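
    The motivation for analytic derivatives can be illustrated generically (this example is unrelated to the tool's internals): a forward finite difference carries a truncation error proportional to the step size, on top of round-off error, while the analytic derivative is exact to machine precision:

```python
import math

def f(x):
    return math.exp(x) * math.sin(x)

def df_analytic(x):
    # product rule: d/dx [e^x sin x] = e^x (sin x + cos x)
    return math.exp(x) * (math.sin(x) + math.cos(x))

def df_forward(x, h):
    # forward finite difference, O(h) truncation error
    return (f(x + h) - f(x)) / h

x = 1.0
exact = df_analytic(x)
fd_err = abs(df_forward(x, 1e-6) - exact)
```

    In an optimizer, the finite-difference error also forces a delicate choice of step size per variable and costs one extra model evaluation per design variable, both of which analytic derivatives avoid.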

  20. Linking Automated Data Analysis and Visualization with Applications in Developmental Biology and High-Energy Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruebel, Oliver

    2009-11-20

    Knowledge discovery from large and complex collections of today's scientific datasets is a challenging task. With the ability to measure and simulate more processes at increasingly finer spatial and temporal scales, the growing number of data dimensions and data objects presents tremendous challenges for data analysis and for effective data exploration methods and tools. Researchers are overwhelmed with data, and standard tools are often insufficient to enable effective data analysis and knowledge discovery. The main objective of this thesis is to provide important new capabilities to accelerate scientific knowledge discovery from large, complex, and multivariate scientific data. The research covered in this thesis addresses these scientific challenges using a combination of scientific visualization, information visualization, automated data analysis, and other enabling technologies, such as efficient data management. The effectiveness of the proposed analysis methods is demonstrated via applications in two distinct scientific research fields, namely developmental biology and high-energy physics. Advances in microscopy, image analysis, and embryo registration enable, for the first time, measurement of gene expression at cellular resolution for entire organisms. Analysis of high-dimensional spatial gene expression datasets is a challenging task. By integrating data clustering and visualization, analysis of complex, time-varying, spatial gene expression patterns and their formation becomes possible. The analysis framework has been integrated with MATLAB and with the visualization, making advanced analysis tools accessible to biologists and enabling bioinformatics researchers to directly integrate their analysis with the visualization. Laser wakefield particle accelerators (LWFAs) promise to be a new compact source of high-energy particles and radiation, with wide applications ranging from medicine to physics. To gain insight into the complex physical processes of particle acceleration, physicists model LWFAs computationally. The datasets produced by LWFA simulations are (i) extremely large, (ii) of varying spatial and temporal resolution, (iii) heterogeneous, and (iv) high-dimensional, making analysis and knowledge discovery from complex LWFA simulation data a challenging task. To address these challenges, this thesis describes the integration of the visualization system VisIt and the state-of-the-art index/query system FastBit, enabling interactive visual exploration of extremely large three-dimensional particle datasets. Researchers are especially interested in beams of high-energy particles formed during the course of a simulation. This thesis describes novel methods for the automatic detection and analysis of particle beams, enabling a more accurate and efficient data analysis process. By integrating these automated analysis methods with visualization, this research enables more accurate, efficient, and effective analysis of LWFA simulation data than previously possible.

  1. Computationally Efficient Clustering of Audio-Visual Meeting Data

    NASA Astrophysics Data System (ADS)

    Hung, Hayley; Friedland, Gerald; Yeo, Chuohao

    This chapter presents novel computationally efficient algorithms to extract semantically meaningful acoustic and visual events related to each of the participants in a group discussion, using the example of business meeting recordings. The recording setup involves relatively few audio-visual sensors, comprising a limited number of cameras and microphones. We first demonstrate computationally efficient algorithms that can identify who spoke and when, a problem in speech processing known as speaker diarization. We also extract visual activity features efficiently from MPEG4 video by taking advantage of the processing already done for video compression. Then, we present a method of associating the audio and visual data so that the content of each participant can be managed individually. The methods presented in this chapter can serve as principal components enabling many higher-level semantic analysis tasks needed in search, retrieval, and navigation.

  2. Experimental Evaluation of Stagnation Point Collection Efficiency of the NACA 0012 Swept Wing Tip

    NASA Technical Reports Server (NTRS)

    Tsao, Jen-Ching; Kreeger, Richard E.

    2010-01-01

    This paper presents experimental work from a number of icing tests conducted in the Icing Research Tunnel at NASA Glenn Research Center to develop a test method for measuring the local collection efficiency of an impinging cloud at the leading edge of a NACA 0012 swept wing, and to use the data obtained to further calibrate a proposed correlation for impingement efficiency as a function of the modified inertia parameter and the sweep angle. The preliminary results showed that the test method can be limited by ice erosion when it occurs; for conditions free of this problem, the stagnation point collection efficiency for sweep angles up to 45° was well approximated by the proposed correlation. Further evaluation of this correlation is recommended in order to assess its applicability to swept-wing icing scaling analysis.

  3. An Efficient, Noniterative Method of Identifying the Cost-Effectiveness Frontier.

    PubMed

    Suen, Sze-chuan; Goldhaber-Fiebert, Jeremy D

    2016-01-01

    Cost-effectiveness analysis aims to identify treatments and policies that maximize benefits subject to resource constraints. However, the conventional process of identifying the efficient frontier (i.e., the set of potentially cost-effective options) can be algorithmically inefficient, especially when a policy problem has many alternative options or when an extensive suite of sensitivity analyses must be performed, each of which requires finding the efficient frontier anew. Here, we describe an alternative one-pass algorithm that is conceptually simple, easier to implement, and potentially faster for situations that challenge the conventional approach. Our algorithm accomplishes this by exploiting the relationship between the net monetary benefit and the cost-effectiveness plane. To facilitate further evaluation and use of this approach, we also provide scripts in R and Matlab that implement our method and can be used to identify efficient frontiers for any decision problem. © The Author(s) 2015.
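    The net-monetary-benefit relationship the authors exploit can be sketched in a few lines: a strategy lies on the efficient frontier exactly when it maximizes NMB = wtp × effect − cost for some willingness-to-pay (wtp) value, so sweeping a grid of wtp values recovers the frontier without pairwise ICER comparisons. The strategies and figures below are invented for illustration; this is a sketch of the general principle, not the authors' R/Matlab implementation.

    ```python
    # Sketch: recover the cost-effectiveness frontier via net monetary
    # benefit (NMB = wtp * effect - cost). A strategy is on the frontier
    # iff it maximizes NMB for some willingness-to-pay value.
    # All strategy names and (cost, effect) numbers are hypothetical.

    def frontier_by_nmb(strategies, wtp_grid):
        """strategies: dict name -> (cost, effect); returns set of frontier names."""
        on_frontier = set()
        for wtp in wtp_grid:
            best = max(strategies,
                       key=lambda s: wtp * strategies[s][1] - strategies[s][0])
            on_frontier.add(best)
        return on_frontier

    strategies = {
        "no treatment": (0.0, 10.0),     # (cost in $, effect in QALYs)
        "drug A":       (5000.0, 11.0),
        "drug B":       (9000.0, 11.2),  # never optimal at any wtp
        "drug C":       (20000.0, 12.0),
    }
    wtp_grid = [w * 1000.0 for w in range(0, 201)]  # $0 .. $200k per QALY
    print(sorted(frontier_by_nmb(strategies, wtp_grid)))
    # → ['drug A', 'drug C', 'no treatment']
    ```

    Dominated options (here "drug B") never maximize NMB at any willingness-to-pay and are excluded automatically, which is what makes the one-pass formulation attractive.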

  4. An Efficient, Non-iterative Method of Identifying the Cost-Effectiveness Frontier

    PubMed Central

    Suen, Sze-chuan; Goldhaber-Fiebert, Jeremy D.

    2015-01-01

    Cost-effectiveness analysis aims to identify treatments and policies that maximize benefits subject to resource constraints. However, the conventional process of identifying the efficient frontier (i.e., the set of potentially cost-effective options) can be algorithmically inefficient, especially when a policy problem has many alternative options or when an extensive suite of sensitivity analyses must be performed, each of which requires finding the efficient frontier anew. Here, we describe an alternative one-pass algorithm that is conceptually simple, easier to implement, and potentially faster for situations that challenge the conventional approach. Our algorithm accomplishes this by exploiting the relationship between the net monetary benefit and the cost-effectiveness plane. To facilitate further evaluation and use of this approach, we additionally provide scripts in R and Matlab that implement our method and can be used to identify efficient frontiers for any decision problem. PMID:25926282

  5. Improving the efficiency of a chemotherapy day unit: applying a business approach to oncology.

    PubMed

    van Lent, Wineke A M; Goedbloed, N; van Harten, W H

    2009-03-01

    To improve the efficiency of a hospital-based chemotherapy day unit (CDU), the CDU was benchmarked against two other CDUs to identify attainable performance levels for efficiency and the causes of differences. Furthermore, an in-depth analysis using a business approach called lean thinking was performed. An integrated set of interventions was implemented, among them a new planning system. The results were evaluated using pre- and post-measurements. We observed 24% growth in treatments and bed utilisation, a 12% increase in staff productivity, and an 81% reduction in overtime. The method improved process design and led to increased efficiency and more timely delivery of care. Thus, business approaches adapted for healthcare were successfully applied. The method may serve as an example for other oncology settings with problems concerning waiting times, patient flow, or lack of beds.

  6. Post-staining electroblotting for efficient and reliable peptide blotting.

    PubMed

    Lee, Der-Yen; Chang, Geen-Dong

    2015-01-01

    Post-staining electroblotting has been previously described to transfer Coomassie blue-stained proteins from polyacrylamide gel onto polyvinylidene difluoride (PVDF) membranes. In fact, stained peptides can also be efficiently and reliably transferred. Because of selective staining procedures for peptides and increased retention of stained peptides on the membrane, even peptides with molecular masses less than 2 kDa, such as bacitracin and granuliberin R, are transferred with satisfactory results. For comparison, post-staining electroblotting is about 16-fold more sensitive than conventional electroblotting for visualization of insulin on the membrane. Therefore, the peptide blots become practicable and more accessible to further applications, e.g., blot overlay detection or immunoblotting analysis. In addition, the efficiency of peptide transfer is favorable for N-terminal sequence analysis. With this method, peptide blotting can be normalized for further analysis such as blot overlay assay, immunoblotting, and N-terminal sequencing for identification of peptides in crude or partially purified samples.

  7. Energy distribution analysis in boosted HCCI-like / LTGC engines – Understanding the trade-offs to maximize the thermal efficiency

    DOE PAGES

    Dernotte, Jeremie; Dec, John E.; Ji, Chunsheng

    2015-04-14

    A detailed investigation of the various factors affecting the trends in gross-indicated thermal efficiency with changes in key operating parameters has been carried out, applied to a one-liter displacement single-cylinder boosted Low-Temperature Gasoline Combustion (LTGC) engine. This work systematically investigates how the supplied fuel energy splits into the following four energy pathways: gross-indicated thermal efficiency, combustion inefficiency, heat transfer and exhaust losses, and how this split changes with operating conditions. Additional analysis is performed to determine the influence of variations in the ratio of specific heat capacities (γ) and the effective expansion ratio, related to the combustion-phasing retard (CA50), on the energy split. Heat transfer and exhaust losses are computed using multiple standard cycle analysis techniques, and the various methods are evaluated in order to validate the trends.

  8. Evaluation of qPCR curve analysis methods for reliable biomarker discovery: bias, resolution, precision, and implications.

    PubMed

    Ruijter, Jan M; Pfaffl, Michael W; Zhao, Sheng; Spiess, Andrej N; Boggy, Gregory; Blom, Jochen; Rutledge, Robert G; Sisti, Davide; Lievens, Antoon; De Preter, Katleen; Derveaux, Stefaan; Hellemans, Jan; Vandesompele, Jo

    2013-01-01

    RNA transcripts such as mRNA or microRNA are frequently used as biomarkers to determine disease state or response to therapy. Reverse transcription (RT) in combination with quantitative PCR (qPCR) has become the method of choice to quantify small amounts of such RNA molecules. In parallel with the democratization of RT-qPCR and its increasing use in biomedical research and biomarker discovery, we have witnessed a growth in the number of gene expression data analysis methods. Most of these methods are based on the principle that the position of the amplification curve with respect to the cycle-axis is a measure of the initial target quantity: the later the curve, the lower the target quantity. However, most methods differ in the mathematical algorithms used to determine this position, as well as in the way the efficiency of the PCR reaction (the fold increase of product per cycle) is determined and applied in the calculations. Moreover, there is dispute about whether the PCR efficiency is constant or continuously decreasing. Together, this has led to the development of different methods to analyze amplification curves. In published comparisons of these methods, available algorithms were typically applied in a restricted or outdated way, which does not do them justice. Therefore, we aimed to develop a framework for robust and unbiased assessment of curve analysis performance, whereby various publicly available curve analysis methods were thoroughly compared using a previously published large clinical data set (Vermeulen et al., 2009) [11]. The original developers of these methods applied their algorithms and are co-authors of this study. We assessed the curve analysis methods' impact on transcriptional biomarker identification in terms of expression level, statistical significance, and patient-classification accuracy. 
The concentration series per gene, together with data sets from unpublished technical performance experiments, were analyzed in order to assess the algorithms' precision, bias, and resolution. While large differences exist between methods when considering the technical performance experiments, most methods perform relatively well on the biomarker data. The data and the analysis results per method are made available to serve as benchmark for further development and evaluation of qPCR curve analysis methods (http://qPCRDataMethods.hfrc.nl). Copyright © 2012 Elsevier Inc. All rights reserved.
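    The core quantification principle shared by these curve analysis methods can be illustrated numerically: the later the amplification curve crosses the detection threshold (the higher the quantification cycle, Cq), the lower the computed starting quantity, with the PCR efficiency E (fold increase of product per cycle, between 1 and 2) setting the scale. A minimal sketch with invented values, assuming a constant efficiency (itself one of the disputed modeling choices the abstract mentions):

    ```python
    # Sketch of qPCR quantification: after Cq cycles of E-fold growth the
    # product reaches the fluorescence threshold N_q, so the starting
    # quantity is N0 = N_q / E**Cq. Values here are illustrative only.

    def starting_quantity(cq, efficiency, threshold=1.0):
        """Initial target quantity implied by a quantification cycle cq."""
        return threshold / efficiency ** cq

    # A later curve (higher Cq) implies less starting material:
    low = starting_quantity(cq=30, efficiency=1.9)
    high = starting_quantity(cq=25, efficiency=1.9)
    assert high > low

    # With perfect efficiency (E = 2), each extra cycle halves N0:
    ratio = starting_quantity(26, 2.0) / starting_quantity(25, 2.0)
    print(round(ratio, 6))  # 0.5
    ```

    The methods compared in the study differ precisely in how they estimate Cq and E from the raw curve, which is why the resulting N0 values, and hence biomarker calls, can diverge.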

  9. 2D signature for detection and identification of drugs

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Varentsova, Svetlana A.; Shen, Jingling; Zhang, Cunlin; Zhou, Qingli; Shi, Yulei

    2011-06-01

    The method of spectral dynamics analysis (SDA method) is used for obtaining the 2D THz signature of drugs. This signature is used for the detection and identification of drugs with similar Fourier spectra by means of the transmitted THz signal. We discuss the efficiency of the SDA method for the identification of pure methamphetamine (MA), methylenedioxyamphetamine (MDA), 3,4-methylenedioxymethamphetamine (MDMA), and ketamine.

  10. Determination of rhenium content in molybdenite by ICP-MS after separation of the major matrix by solvent extraction with N-benzoyl-N-phenylhydroxylamine.

    PubMed

    Li, Jie; Zhong, Li-feng; Tu, Xiang-lin; Liang, Xi-rong; Xu, Ji-feng

    2010-05-15

    A simple and rapid analytical method for determining the concentration of rhenium in molybdenite for Re-Os dating was developed. The method used isotope dilution-inductively coupled plasma-mass spectrometry (ID-ICP-MS) after the removal of major matrix elements (e.g., Mo, Fe, and W) from Re by solvent extraction with N-benzoyl-N-phenylhydroxylamine (BPHA) in chloroform solution. The effects on extraction efficiency of parameters such as pH (HCl concentration), BPHA concentration, and extraction time were also assessed. Under the optimal experimental conditions, the validity of the separation method was assessed by measuring (187)Re/(185)Re values for a molybdenite reference material (JDC). The obtained values were in good agreement with previously measured values of the Re standard. The proposed method was applied to replicate Re-Os dating of JDC and seven samples of molybdenite from the Yuanzhuding large Cu-Mo porphyry deposit. The results demonstrate good precision and accuracy for the proposed method. The advantages of the method (i.e., simplicity, efficiency, short analysis time, and low cost) make it suitable for routine analysis.

  11. PEEK tube-based online solid-phase microextraction-high-performance liquid chromatography for the determination of yohimbine in rat plasma and its application in pharmacokinetics study.

    PubMed

    Xiang, Xiaowei; Shang, Bing; Wang, Xiaozheng; Chen, Qinhua

    2017-04-01

    Yohimbine, a compound derived from natural products, is a novel candidate for the treatment of erectile dysfunction, and pharmacokinetic study is important for its further development as a new medicine. In this work, we developed a novel PEEK tube-based solid-phase microextraction (SPME)-HPLC method for the analysis of yohimbine in plasma and, further, for pharmacokinetic study. Poly(AA-EGDMA) was synthesized inside a PEEK tube as the sorbent for microextraction of yohimbine, and parameters that could influence extraction efficiency were systematically investigated. Under optimum conditions, the PEEK tube-based SPME method exhibits excellent enrichment efficiency towards yohimbine. Using berberine as an internal standard, an online SPME-HPLC method was developed for the analysis of yohimbine in plasma samples. The method has a wide linear range (2-1000 ng/mL) with an R² of 0.9962; the limit of detection was as low as 0.1 ng/mL using UV detection. Finally, a pharmacokinetic study of yohimbine was carried out by the online SPME-HPLC method and the results were compared with those of reported methods. Copyright © 2016 John Wiley & Sons, Ltd.
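    Calibration metrics such as the reported linear range and R² come from an ordinary least-squares fit of detector response against spiked concentration. A minimal sketch with invented concentration/response values (not data from the study):

    ```python
    # Sketch of calibration-curve validation: fit response vs. concentration
    # by least squares and compute R^2 over the linear range.
    # The concentration and response numbers below are hypothetical.

    def linear_fit(xs, ys):
        """Return (slope, intercept, r_squared) of an ordinary least-squares line."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        slope = sxy / sxx
        intercept = my - slope * mx
        ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
        ss_tot = sum((y - my) ** 2 for y in ys)
        return slope, intercept, 1 - ss_res / ss_tot

    conc = [2, 10, 50, 200, 1000]           # spiked concentration, ng/mL
    area = [0.41, 2.1, 10.2, 40.5, 201.0]   # detector response (arbitrary units)
    slope, intercept, r2 = linear_fit(conc, area)
    print(round(r2, 4))
    ```

    An R² close to 1 over the working range, together with the lowest reliably detectable concentration, is what statements like "linear range 2-1000 ng/mL, R² = 0.9962, LOD = 0.1 ng/mL" summarize.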

  12. A spectral dynamic stiffness method for free vibration analysis of plane elastodynamic problems

    NASA Astrophysics Data System (ADS)

    Liu, X.; Banerjee, J. R.

    2017-03-01

    A highly efficient and accurate analytical spectral dynamic stiffness (SDS) method for modal analysis of plane elastodynamic problems based on both plane stress and plane strain assumptions is presented in this paper. First, the general solution satisfying the governing differential equation exactly is derived by applying two types of one-dimensional modified Fourier series. Then the SDS matrix for an element is formulated symbolically using the general solution. The SDS matrices are assembled directly in a similar way to that of the finite element method, demonstrating the method's capability to model complex structures. Any arbitrary boundary conditions are represented accurately in the form of the modified Fourier series. The Wittrick-Williams algorithm is then used as the solution technique where the mode count problem (J0) of a fully-clamped element is resolved. The proposed method gives highly accurate solutions with remarkable computational efficiency, covering low, medium and high frequency ranges. The method is applied to both plane stress and plane strain problems with simple as well as complex geometries. All results from the theory in this paper are accurate up to the last figures quoted to serve as benchmarks.

  13. An introduction to tree-structured modeling with application to quality of life data.

    PubMed

    Su, Xiaogang; Azuero, Andres; Cho, June; Kvale, Elizabeth; Meneses, Karen M; McNees, M Patrick

    2011-01-01

    Investigators addressing nursing research are increasingly faced with the need to analyze data that involve variables of mixed types and are characterized by complex nonlinearity and interactions. Tree-based methods, also called recursive partitioning, are gaining popularity in various fields. In addition to efficiency and flexibility in handling multifaceted data, tree-based methods offer ease of interpretation. The aims of this study were to introduce tree-based methods, discuss their advantages and pitfalls in application, and describe their potential use in nursing research. In this article, (a) an introduction to tree-structured methods is presented, (b) the technique is illustrated via quality of life (QOL) data collected in the Breast Cancer Education Intervention study, and (c) implications for their potential use in nursing research are discussed. As illustrated by the QOL analysis example, tree methods generate interesting and easily understood findings that cannot be uncovered via traditional linear regression analysis. The expanding breadth and complexity of nursing research may entail the use of new tools to improve efficiency and gain new insights. In certain situations, tree-based methods offer an attractive approach that helps address such needs.
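    The recursive-partitioning idea behind tree-based methods can be shown in miniature: at each node, pick the split on a predictor that most reduces the squared error of the outcome, then recurse on the two halves. The sketch below implements just the single best split on toy data (no QOL data involved):

    ```python
    # Minimal illustration of recursive partitioning: find the threshold on
    # one predictor that most reduces squared error. A full tree would call
    # this recursively on each half. Data values are made up.

    def sse(ys):
        """Sum of squared deviations from the mean."""
        m = sum(ys) / len(ys)
        return sum((y - m) ** 2 for y in ys)

    def best_split(xs, ys):
        """Return (threshold, gain): the split x < t minimizing total SSE."""
        best = (None, 0.0)
        parent = sse(ys)
        for t in sorted(set(xs))[1:]:
            left = [y for x, y in zip(xs, ys) if x < t]
            right = [y for x, y in zip(xs, ys) if x >= t]
            gain = parent - (sse(left) + sse(right))
            if gain > best[1]:
                best = (t, gain)
        return best

    xs = [1, 2, 3, 10, 11, 12]            # a single predictor
    ys = [1.0, 1.2, 0.9, 5.0, 5.2, 4.9]   # the outcome
    t, gain = best_split(xs, ys)
    print(t)  # splits between the two clusters, at x = 10
    ```

    Because each split is a simple threshold rule, the fitted model reads as a sequence of yes/no questions, which is the interpretability advantage the abstract highlights.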

  14. Distributed collaborative probabilistic design of multi-failure structure with fluid-structure interaction using fuzzy neural network of regression

    NASA Astrophysics Data System (ADS)

    Song, Lu-Kai; Wen, Jie; Fei, Cheng-Wei; Bai, Guang-Chen

    2018-05-01

    To improve the computing efficiency and precision of probabilistic design for multi-failure structures, a distributed collaborative probabilistic design method based on fuzzy neural network regression (DCFRM) is proposed, integrating the distributed collaborative response surface method with a fuzzy neural network regression model. The mathematical model of DCFRM is established and the probabilistic design idea behind it is introduced. The probabilistic analysis of a turbine blisk involving multiple failure modes (deformation failure, stress failure and strain failure) was investigated by considering fluid-structure interaction with the proposed method. The distribution characteristics, reliability degree, and sensitivity degree of each failure mode and the overall failure mode on the turbine blisk are obtained, which provides a useful reference for improving the performance and reliability of aeroengines. A comparison of methods shows that the DCFRM reshapes the probabilistic analysis of multi-failure structures and improves computing efficiency while keeping acceptable computational precision. Moreover, the proposed method offers useful insight for reliability-based design optimization of multi-failure structures and thereby also enriches the theory and method of mechanical reliability design.

  15. Improved drug loading and antibacterial activity of minocycline-loaded PLGA nanoparticles prepared by solid/oil/water ion pairing method

    PubMed Central

    Kashi, Tahereh Sadat Jafarzadeh; Eskandarion, Solmaz; Esfandyari-Manesh, Mehdi; Marashi, Seyyed Mahmoud Amin; Samadi, Nasrin; Fatemi, Seyyed Mostafa; Atyabi, Fatemeh; Eshraghi, Saeed; Dinarvand, Rassoul

    2012-01-01

    Background: Low drug entrapment efficiency of hydrophilic drugs into poly(lactic-co-glycolic acid) (PLGA) nanoparticles is a major drawback. The objective of this work was to investigate different methods of producing PLGA nanoparticles containing minocycline, a drug suitable for periodontal infections. Methods: Different methods, such as single and double solvent evaporation emulsion, ion pairing, and nanoprecipitation, were used to prepare both PLGA and PEGylated PLGA nanoparticles. The resulting nanoparticles were analyzed for their morphology, particle size and size distribution, drug loading and entrapment efficiency, thermal properties, and antibacterial activity. Results: The nanoparticles prepared in this study were spherical, with an average particle size of 85–424 nm. The entrapment efficiency of the nanoparticles prepared using different methods was as follows: solid/oil/water ion pairing (29.9%) > oil/oil (5.5%) > water/oil/water (4.7%) > modified oil/water (4.1%) > nanoprecipitation (0.8%). Addition of dextran sulfate as an ion pairing agent, acting as an ionic spacer between PEGylated PLGA and minocycline, decreased the water solubility of minocycline, hence increasing the drug entrapment efficiency. Entrapment efficiency was also increased when low molecular weight PLGA and high molecular weight dextran sulfate were used. Drug release studies performed in phosphate buffer at pH 7.4 indicated sustained release of minocycline over periods ranging from 3 days to several weeks. On antibacterial analysis, the minimum inhibitory concentration and minimum bactericidal concentration of the nanoparticles were at least two times lower than those of the free drug. Conclusion: Novel minocycline-PEGylated PLGA nanoparticles prepared by the ion pairing method had the best drug loading and entrapment efficiency compared with the other prepared nanoparticles. They also showed higher in vitro antibacterial activity than the free drug. PMID:22275837

  16. Efficient Retrieval of Massive Ocean Remote Sensing Images via a Cloud-Based Mean-Shift Algorithm.

    PubMed

    Yang, Mengzhao; Song, Wei; Mei, Haibin

    2017-07-23

    The rapid development of remote sensing (RS) technology has resulted in the proliferation of high-resolution images. There are challenges involved not only in storing large volumes of RS images but also in rapidly retrieving the images for ocean disaster analysis, such as for storm surge and typhoon warnings. In this paper, we present an efficient retrieval method for massive ocean RS images via a Cloud-based mean-shift algorithm. A distributed construction method based on the pyramid model and the maximum hierarchical layer algorithm is proposed to realize an efficient storage structure for RS images on the Cloud platform. We achieve high-performance processing of massive RS images in the Hadoop system. Based on the pyramid Hadoop distributed file system (HDFS) storage method, an improved mean-shift algorithm for RS image retrieval is presented by fusing it with the canopy algorithm via Hadoop MapReduce programming. The results show that the new method achieves better data storage performance than HDFS alone and WebGIS-based HDFS. Speedup and scaleup remain very close to linear as the number of RS images increases, which demonstrates that image retrieval using our method is efficient.

  17. Efficient Retrieval of Massive Ocean Remote Sensing Images via a Cloud-Based Mean-Shift Algorithm

    PubMed Central

    Song, Wei; Mei, Haibin

    2017-01-01

    The rapid development of remote sensing (RS) technology has resulted in the proliferation of high-resolution images. There are challenges involved not only in storing large volumes of RS images but also in rapidly retrieving the images for ocean disaster analysis, such as for storm surge and typhoon warnings. In this paper, we present an efficient retrieval method for massive ocean RS images via a Cloud-based mean-shift algorithm. A distributed construction method based on the pyramid model and the maximum hierarchical layer algorithm is proposed to realize an efficient storage structure for RS images on the Cloud platform. We achieve high-performance processing of massive RS images in the Hadoop system. Based on the pyramid Hadoop distributed file system (HDFS) storage method, an improved mean-shift algorithm for RS image retrieval is presented by fusing it with the canopy algorithm via Hadoop MapReduce programming. The results show that the new method achieves better data storage performance than HDFS alone and WebGIS-based HDFS. Speedup and scaleup remain very close to linear as the number of RS images increases, which demonstrates that image retrieval using our method is efficient. PMID:28737699
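    The mean-shift algorithm at the heart of these records can be illustrated in one dimension, stripped of the Hadoop/HDFS machinery: each point iteratively moves to the mean of its neighbors within a bandwidth until it settles on a density mode, and points that converge to the same mode form one cluster. A toy flat-kernel sketch with invented data:

    ```python
    # One-dimensional flat-kernel mean-shift sketch: shift each point to
    # the mean of the points within `bandwidth` until convergence; points
    # sharing a final mode belong to the same cluster. Toy data only.

    def mean_shift_1d(points, bandwidth, iters=50):
        """Return the density mode each point converges to."""
        modes = []
        for p in points:
            x = p
            for _ in range(iters):
                window = [q for q in points if abs(q - x) <= bandwidth]
                x = sum(window) / len(window)  # shift toward local mean
            modes.append(round(x, 3))
        return modes

    data = [1.0, 1.1, 0.9, 8.0, 8.2, 7.8]   # two well-separated groups
    print(mean_shift_1d(data, bandwidth=2.0))
    ```

    The papers apply this idea to high-dimensional image feature vectors and use the canopy algorithm to cheaply pre-group points, so the expensive window search runs only within each canopy on the MapReduce workers.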

  18. Ownership and technical efficiency of hospitals: evidence from Ghana using data envelopment analysis

    PubMed Central

    2014-01-01

    Background: In order to measure and analyse the technical efficiency of district hospitals in Ghana, the specific objectives of this study were to (a) estimate the relative technical and scale efficiency of government, mission, private and quasi-government district hospitals in Ghana in 2005; (b) estimate the magnitudes of output increases and/or input reductions that would have been required to make relatively inefficient hospitals more efficient; and (c) use Tobit regression analysis to estimate the impact of ownership on hospital efficiency. Methods: In the first stage, we used data envelopment analysis (DEA) to estimate the efficiency of 128 hospitals, comprising 73 government hospitals, 42 mission hospitals, 7 quasi-government hospitals and 6 private hospitals. In the second stage, the estimated DEA efficiency scores were regressed against the hospital ownership variable using a Tobit model. This was a retrospective study. Results: In our DEA analysis, using the variable returns to scale model, out of 128 district hospitals, 31 (24.0%) were 100% efficient, 25 (19.5%) were very close to being efficient with efficiency scores ranging from 70% to 99.9%, and 71 (56.2%) had efficiency scores below 50%. The lowest-performing hospitals had efficiency scores ranging from 21% to 30%. Quasi-government hospitals had the highest mean efficiency score (83.9%), followed by public hospitals (70.4%), mission hospitals (68.6%) and private hospitals (55.8%). However, public hospitals also recorded the lowest individual efficiency scores (down to 27.4%), implying they include some of the most inefficient hospitals. Regarding regional performance, Northern Region hospitals had the highest mean efficiency score (83.0%) and Volta Region hospitals had the lowest mean score (43.0%). From our Tobit regression, we found that while quasi-government ownership is positively associated with hospital technical efficiency, private ownership negatively affects hospital efficiency. 
Conclusions: It would be prudent for policy-makers to examine the least efficient hospitals to correct widespread inefficiency. This would include reconsidering the number of hospitals and their distribution, improving efficiency and reducing duplication by closing or scaling down hospitals with efficiency scores below a certain threshold. For private hospitals with inefficiency related to large size, there is a need to break such hospitals down into manageable sizes. PMID:24708886
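    For intuition about the DEA scores reported above: with a single input and a single output, constant-returns-to-scale DEA efficiency reduces to each unit's output/input ratio divided by the best observed ratio. The study itself used multiple inputs/outputs and a variable-returns-to-scale model solved by linear programming, so the sketch below (with invented hospital figures) only conveys the basic idea:

    ```python
    # Single-input, single-output DEA sketch under constant returns to
    # scale: efficiency = (output/input) / best observed (output/input).
    # Hospital names and numbers are hypothetical, not from the study.

    def ccr_efficiency(units):
        """units: dict name -> (input, output); returns name -> efficiency in (0, 1]."""
        best = max(out / inp for inp, out in units.values())
        return {name: (out / inp) / best for name, (inp, out) in units.items()}

    hospitals = {
        "district A": (100.0, 80.0),    # (e.g. beds, patients treated)
        "district B": (120.0, 120.0),   # defines the efficient frontier here
        "district C": (90.0, 45.0),
    }
    scores = ccr_efficiency(hospitals)
    print({k: round(v, 2) for k, v in scores.items()})
    ```

    A score of 1.0 means the unit lies on the efficient frontier; a score of 0.5 means it produces half the output per unit input of the best performer, which is the kind of shortfall the Tobit regression then relates to ownership type.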

  19. A Radial Basis Function Approach to Financial Time Series Analysis

    DTIC Science & Technology

    1993-12-01

    including efficient methods for parameter estimation and pruning, a pointwise prediction error estimator, and a methodology for controlling the "data... collection of practical techniques to address these issues for a modeling methodology, Radial Basis Function networks. These techniques include efficient... methodology often then amounts to a careful consideration of the interplay between model complexity and reliability. These will be recurrent themes

  20. Robust network data envelopment analysis approach to evaluate the efficiency of regional electricity power networks under uncertainty.

    PubMed

    Fathollah Bayati, Mohsen; Sadjadi, Seyed Jafar

    2017-01-01

    In this paper, new Network Data Envelopment Analysis (NDEA) models are developed to evaluate the efficiency of regional electricity power networks. The primary objective of this paper is to account for perturbation in the data by developing new NDEA models based on an adaptation of robust optimization methodology. Furthermore, the efficiency of entire electricity power networks, involving the generation, transmission and distribution stages, is measured. While DEA has been widely used to evaluate the efficiency of individual components of electricity power networks during the past two decades, no study has evaluated the efficiency of electricity power networks as a whole. The proposed models are applied to evaluate the efficiency of 16 regional electricity power networks in Iran, and the effect of data uncertainty is also investigated. The results are compared with those of the traditional network DEA and parametric SFA methods. Validity and verification of the proposed models are also investigated. The preliminary results indicate that the proposed models are more reliable than the traditional network DEA model.
