Sample records for efficient approach based

  1. An efficient sampling approach for variance-based sensitivity analysis based on the law of total variance in the successive intervals without overlapping

    NASA Astrophysics Data System (ADS)

    Yun, Wanying; Lu, Zhenzhou; Jiang, Xian

    2018-06-01

    To efficiently execute variance-based global sensitivity analysis, the law of total variance over successive non-overlapping intervals is first proved, and an efficient space-partition sampling-based approach is then built on it in this paper. By partitioning the sample points of the output into different subsets according to different inputs, the proposed approach can evaluate all the main effects concurrently from a single group of sample points. In addition, there is no need to optimize the partition scheme in the proposed approach. The maximum length of the subintervals is decreased by increasing the number of sample points of the model input variables, which ensures that the convergence condition of the space-partition approach is well satisfied. Furthermore, a new interpretation of the partitioning idea is given from the perspective of the variance ratio function. Finally, three test examples and one engineering application are employed to demonstrate the accuracy, efficiency and robustness of the proposed approach.
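    The core estimator is easy to sketch. The following minimal Python example (a toy illustration, not the authors' implementation; it uses plain equal-count bins rather than their successive-interval scheme) estimates each main effect S_i = Var(E[Y|X_i])/Var(Y) by sorting one shared set of samples along each input in turn and taking the variance of the per-bin means.

    ```python
    import numpy as np

    def main_effects(X, y, n_bins=50):
        """Estimate first-order sensitivity indices S_i = Var(E[Y|X_i]) / Var(Y)
        by partitioning the same set of samples along each input in turn."""
        n, d = X.shape
        var_y = y.var()
        S = np.empty(d)
        for i in range(d):
            order = np.argsort(X[:, i])              # sort samples along input i
            bins = np.array_split(y[order], n_bins)  # non-overlapping subintervals
            means = np.array([b.mean() for b in bins])
            sizes = np.array([b.size for b in bins])
            # variance of the conditional means, weighted by bin occupancy
            S[i] = np.sum(sizes * (means - y.mean()) ** 2) / (n * var_y)
        return S

    # Toy model: Y = X1 + 0.5*X2 with independent uniform inputs
    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, size=(20000, 3))
    y = X[:, 0] + 0.5 * X[:, 1]
    print(main_effects(X, y))  # roughly [0.8, 0.2, 0.0]
    ```

    For this toy model the analytical indices are 0.8 and 0.2, and the single shared sample set recovers all main effects at once, which is the efficiency argument made in the abstract.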

  2. Investigating the Efficiency of Scenario Based Learning and Reflective Learning Approaches in Teacher Education

    ERIC Educational Resources Information Center

    Hursen, Cigdem; Fasli, Funda Gezer

    2017-01-01

    The main purpose of this research is to investigate the efficiency of scenario based learning and reflective learning approaches in teacher education. The impact of applications of scenario based learning and reflective learning on prospective teachers' academic achievement and views regarding application and professional self-competence…

  3. A highly efficient approach to protein interactome mapping based on collaborative filtering framework.

    PubMed

    Luo, Xin; You, Zhuhong; Zhou, Mengchu; Li, Shuai; Leung, Hareton; Xia, Yunni; Zhu, Qingsheng

    2015-01-09

    The comprehensive mapping of protein-protein interactions (PPIs) is highly desired for gaining deep insights into both fundamental cell biology processes and the pathology of diseases. Finely-set small-scale experiments are not only very expensive but also inefficient for identifying numerous interactomes, despite their high accuracy. High-throughput screening techniques enable efficient identification of PPIs; yet the desire to further extract useful knowledge from these data leads to the problem of binary interactome mapping. Network topology-based approaches prove to be highly efficient in addressing this problem; however, their performance deteriorates significantly on sparse putative PPI networks. Motivated by the success of collaborative filtering (CF)-based approaches to the problem of personalized recommendation on large, sparse rating matrices, this work aims at implementing a highly efficient CF-based approach to binary interactome mapping. To achieve this, we first propose a CF framework for it. Under this framework, we model the given data as an interactome weight matrix, from which the feature vectors of the involved proteins are extracted. With them, we design the rescaled cosine coefficient to model the inter-neighborhood similarity among the involved proteins for performing the mapping process. Experimental results on three large, sparse datasets demonstrate that the proposed approach significantly outperforms several sophisticated topology-based approaches.
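    The neighborhood-based CF step can be sketched compactly. The example below is a generic illustration on synthetic data using plain cosine similarity; the paper's rescaled cosine coefficient is its own construction and is not reproduced here. Each protein's candidate interactions are scored from its k most similar rows of the interactome weight matrix.

    ```python
    import numpy as np

    def cf_scores(W, k=5):
        """Score unobserved entries of a binary interactome matrix W by
        neighborhood-based collaborative filtering with cosine similarity.
        (Plain cosine is used here; the paper defines a rescaled variant.)"""
        norms = np.linalg.norm(W, axis=1, keepdims=True)
        U = W / np.maximum(norms, 1e-12)       # row-normalised feature vectors
        sim = U @ U.T                          # cosine similarity between proteins
        np.fill_diagonal(sim, 0.0)
        scores = np.zeros_like(W, dtype=float)
        for i in range(W.shape[0]):
            nbrs = np.argsort(sim[i])[-k:]     # k most similar proteins
            w = sim[i, nbrs]
            scores[i] = w @ W[nbrs] / max(w.sum(), 1e-12)
        return scores

    W = (np.random.default_rng(1).random((30, 30)) < 0.1).astype(float)
    W = np.maximum(W, W.T)                     # symmetric putative PPI network
    print(cf_scores(W).round(2)[:3])
    ```

    The point of the CF framing is visible here: a protein with few observed interactions still receives scores borrowed from its neighborhood, which is why such methods degrade more gracefully on sparse networks than purely topological ones.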

  4. A Highly Efficient Approach to Protein Interactome Mapping Based on Collaborative Filtering Framework

    PubMed Central

    Luo, Xin; You, Zhuhong; Zhou, Mengchu; Li, Shuai; Leung, Hareton; Xia, Yunni; Zhu, Qingsheng

    2015-01-01

    The comprehensive mapping of protein-protein interactions (PPIs) is highly desired for gaining deep insights into both fundamental cell biology processes and the pathology of diseases. Finely-set small-scale experiments are not only very expensive but also inefficient for identifying numerous interactomes, despite their high accuracy. High-throughput screening techniques enable efficient identification of PPIs; yet the desire to further extract useful knowledge from these data leads to the problem of binary interactome mapping. Network topology-based approaches prove to be highly efficient in addressing this problem; however, their performance deteriorates significantly on sparse putative PPI networks. Motivated by the success of collaborative filtering (CF)-based approaches to the problem of personalized recommendation on large, sparse rating matrices, this work aims at implementing a highly efficient CF-based approach to binary interactome mapping. To achieve this, we first propose a CF framework for it. Under this framework, we model the given data as an interactome weight matrix, from which the feature vectors of the involved proteins are extracted. With them, we design the rescaled cosine coefficient to model the inter-neighborhood similarity among the involved proteins for performing the mapping process. Experimental results on three large, sparse datasets demonstrate that the proposed approach significantly outperforms several sophisticated topology-based approaches. PMID:25572661

  5. A Highly Efficient Approach to Protein Interactome Mapping Based on Collaborative Filtering Framework

    NASA Astrophysics Data System (ADS)

    Luo, Xin; You, Zhuhong; Zhou, Mengchu; Li, Shuai; Leung, Hareton; Xia, Yunni; Zhu, Qingsheng

    2015-01-01

    The comprehensive mapping of protein-protein interactions (PPIs) is highly desired for gaining deep insights into both fundamental cell biology processes and the pathology of diseases. Finely-set small-scale experiments are not only very expensive but also inefficient for identifying numerous interactomes, despite their high accuracy. High-throughput screening techniques enable efficient identification of PPIs; yet the desire to further extract useful knowledge from these data leads to the problem of binary interactome mapping. Network topology-based approaches prove to be highly efficient in addressing this problem; however, their performance deteriorates significantly on sparse putative PPI networks. Motivated by the success of collaborative filtering (CF)-based approaches to the problem of personalized recommendation on large, sparse rating matrices, this work aims at implementing a highly efficient CF-based approach to binary interactome mapping. To achieve this, we first propose a CF framework for it. Under this framework, we model the given data as an interactome weight matrix, from which the feature vectors of the involved proteins are extracted. With them, we design the rescaled cosine coefficient to model the inter-neighborhood similarity among the involved proteins for performing the mapping process. Experimental results on three large, sparse datasets demonstrate that the proposed approach significantly outperforms several sophisticated topology-based approaches.

  6. Energy Efficiency Under Alternative Carbon Policies. Incentives, Measurement, and Interregional Effects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steinberg, Daniel C.; Boyd, Erin

    2015-08-28

    In this report, we examine and compare how tradable mass-based polices and tradable rate-based policies create different incentives for energy efficiency investments. Through a generalized demonstration and set of examples, we show that as a result of the output subsidy they create, traditional rate-based policies, those that do not credit energy savings from efficiency measures, reduce the incentive for investment in energy efficiency measures relative to an optimally designed mass-based policy or equivalent carbon tax. We then show that this reduced incentive can be partially addressed by modifying the rate-based policy such that electricity savings from energy efficiency measures are treated as a source of zero-carbon generation within the framework of the standard, or equivalently, by assigning avoided emissions credit to the electricity savings at the rate of the intensity target. These approaches result in an extension of the output subsidy to efficiency measures and eliminate the distortion between supply-side and demand-side options for GHG emissions reduction. However, these approaches do not address electricity price distortions resulting from the output subsidy that also impact the value of efficiency measures. Next, we assess alternative approaches for crediting energy efficiency savings within the framework of a rate-based policy. Finally, we identify a number of challenges that arise in implementing a rate-based policy with efficiency crediting, including the requirement to develop robust estimates of electricity savings in order to assess compliance, and the requirement to track the regionality of the generation impacts of efficiency measures to account for their interstate effects.

  7. Comparison of Predicted Thermoelectric Energy Conversion Efficiency by Cumulative Properties and Reduced Variables Approaches

    NASA Astrophysics Data System (ADS)

    Linker, Thomas M.; Lee, Glenn S.; Beekman, Matt

    2018-06-01

    The semi-analytical methods of thermoelectric energy conversion efficiency calculation based on the cumulative properties approach and the reduced variables approach are compared for 21 high performance thermoelectric materials. Both approaches account for the temperature dependence of the material properties as well as the Thomson effect; thus the predicted conversion efficiencies are generally lower than those based on the conventional thermoelectric figure of merit ZT for nearly all of the materials evaluated. The two methods also predict material energy conversion efficiencies that are in very good agreement with each other, even for large temperature differences (average percent difference of 4% with maximum observed deviation of 11%). The tradeoff between obtaining a reliable assessment of a material's potential for thermoelectric applications and the complexity of implementing the three models, as well as the advantages of using more accurate modeling approaches in evaluating new thermoelectric materials, are highlighted.
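    For reference, the conventional figure-of-merit estimate that both semi-analytical methods refine can be computed in a few lines. The sketch below (my illustration, not from the paper) evaluates the standard constant-property formula, which ignores temperature-dependent properties and the Thomson effect and therefore tends to overestimate efficiency.

    ```python
    import numpy as np

    def zt_efficiency(zt, t_hot, t_cold):
        """Maximum conversion efficiency from the conventional constant-property
        figure of merit: eta = (dT/Th) * (sqrt(1+ZT) - 1) / (sqrt(1+ZT) + Tc/Th),
        with ZT evaluated at a representative mean temperature."""
        carnot = (t_hot - t_cold) / t_hot
        s = np.sqrt(1.0 + zt)
        return carnot * (s - 1.0) / (s + t_cold / t_hot)

    print(zt_efficiency(1.0, 800.0, 300.0))  # ~0.145 for ZT = 1 across 300-800 K
    ```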

  8. Correlation between the Availability of Resources and Efficiency of the School System within the Framework of the Implementation of Competency-Based Teaching Approaches in Cameroon

    ERIC Educational Resources Information Center

    Esongo, Njie Martin

    2017-01-01

    The study takes an in-depth examination of the extent to which the availability of resources relates to the efficiency of the school system within the framework of the implementation of competency-based teaching approaches in Cameroon. The study employed a mix of probability sampling approaches, namely simple, cluster and stratified random…

  9. Rice growing farmers efficiency measurement using a slack based interval DEA model with undesirable outputs

    NASA Astrophysics Data System (ADS)

    Khan, Sahubar Ali Mohd. Nadhar; Ramli, Razamin; Baten, M. D. Azizul

    2017-11-01

    In recent years, eco-efficiency, which considers the effect of the production process on the environment when determining the efficiency of firms, has gained traction and a lot of attention. Rice farming is one such production process, typically producing two types of outputs: economically desirable as well as environmentally undesirable. In efficiency analysis, these undesirable outputs cannot be ignored and need to be included in the model to obtain an accurate estimate of the firm's efficiency. There are numerous approaches in the data envelopment analysis (DEA) literature to account for undesirable outputs, of which the directional distance function (DDF) approach is the most widely used, as it allows for a simultaneous increase in desirable outputs and reduction of undesirable outputs. Additionally, slack-based DDF DEA approaches consider the output shortfalls and input excesses in determining efficiency. When data uncertainty is present, a deterministic DEA model is not suitable because the effects of the uncertain data are not considered. In this case, the interval data approach is suitable to account for data uncertainty, as it is simpler to model and needs less information about the underlying data distribution and membership function. The proposed model uses an enhanced DEA model which is based on the DDF approach and incorporates a slack-based measure to determine efficiency in the presence of undesirable factors and data uncertainty. The interval data approach was used to estimate the values of inputs, undesirable outputs and desirable outputs. Two separate slack-based interval DEA models were constructed for the optimistic and pessimistic scenarios. The developed model was used to determine the efficiency of rice farmers from Kepala Batas, Kedah. The obtained results were then compared to those obtained using a deterministic DDF DEA model. The study found that 15 out of 30 farmers are efficient in all cases. It was also found that the average efficiency value of all farmers in the deterministic case is always lower than in the optimistic scenario and higher than in the pessimistic scenario. The results agree with this expectation, since the optimistic scenario corresponds to the best production situation and the pessimistic scenario to the worst. The results show that the proposed model can be applied when data uncertainty is present in the production environment.

  10. A two-stage DEA approach for environmental efficiency measurement.

    PubMed

    Song, Malin; Wang, Shuhong; Liu, Wei

    2014-05-01

    The slacks-based measure (SBM) model based on constant returns to scale has achieved good results in addressing undesirable outputs, such as waste water and waste gas, in measuring environmental efficiency. However, the traditional SBM model cannot deal with the scenario in which desirable outputs are constant. Based on the axiomatic theory of productivity, this paper carries out systematic research on the SBM model considering undesirable outputs, and further expands the SBM model from the perspective of network analysis. The new model can not only perform efficiency evaluation considering undesirable outputs, but also treat desirable and undesirable outputs separately. The latter advantage resolves the "dependence" problem of outputs, namely that desirable outputs cannot be increased without producing undesirable outputs. The following illustration shows that the efficiency values obtained by the two-stage approach are smaller than those obtained by the traditional SBM model. Our approach provides a more profound analysis of how to improve the environmental efficiency of the decision making units.
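    To make the DEA machinery concrete, here is a minimal sketch of how a radial efficiency score is obtained from a linear program. Note this is the basic input-oriented CCR envelopment model on toy data, not the authors' two-stage SBM network model with undesirable outputs; it is included only to show how a DEA score is computed in practice.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def ccr_efficiency(X, Y, j0):
        """Input-oriented CCR DEA score of unit j0.
        X: (m inputs x n units), Y: (s outputs x n units).
        LP variables: theta followed by n intensity weights lambda."""
        m, n = X.shape
        s = Y.shape[0]
        c = np.zeros(1 + n)
        c[0] = 1.0                                   # minimise theta
        # X @ lam - theta * x0 <= 0  and  -Y @ lam <= -y0
        A_ub = np.vstack([np.hstack([-X[:, [j0]], X]),
                          np.hstack([np.zeros((s, 1)), -Y])])
        b_ub = np.concatenate([np.zeros(m), -Y[:, j0]])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] + [(0, None)] * n)
        return res.fun

    X = np.array([[2., 3., 6.], [3., 2., 6.]])   # two inputs, three units
    Y = np.array([[1., 1., 1.]])                 # one output each
    print([round(ccr_efficiency(X, Y, j), 3) for j in range(3)])
    # units 0 and 1 score 1.0 (efficient); unit 2 scores ~0.417
    ```

    An SBM-type model replaces the single radial contraction factor theta with explicit input and output slacks in the objective, which is what allows the paper's model to treat desirable and undesirable outputs separately.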

  11. Opportunity cost based analysis of corporate eco-efficiency: a methodology and its application to the CO2-efficiency of German companies.

    PubMed

    Hahn, Tobias; Figge, Frank; Liesen, Andrea; Barkemeyer, Ralf

    2010-10-01

    In this paper, we propose the return-to-cost-ratio (RCR) as an alternative approach to the analysis of the operational eco-efficiency of companies, based on the notion of opportunity costs. RCR helps to overcome two fundamental deficits of existing approaches to eco-efficiency. (1) It translates eco-efficiency into managerial terms by applying the well-established notion of opportunity costs to eco-efficiency analysis. (2) It allows one to identify and quantify the drivers behind changes in corporate eco-efficiency. RCR is applied to the analysis of the CO2-efficiency of German companies in order to illustrate its usefulness for a detailed analysis of changes in corporate eco-efficiency as well as for the development of effective environmental strategies.

  12. Comprehensive efficiency analysis of supercomputer resource usage based on system monitoring data

    NASA Astrophysics Data System (ADS)

    Mamaeva, A. A.; Shaykhislamov, D. I.; Voevodin, Vad V.; Zhumatiy, S. A.

    2018-03-01

    One of the main problems of modern supercomputers is the low efficiency of their usage, which leads to significant idle time of computational resources and, in turn, to a decrease in the speed of scientific research. This paper presents three approaches to studying the efficiency of supercomputer resource usage based on monitoring data analysis. The first approach analyzes computing resource utilization statistics, which makes it possible to identify typical classes of programs, to explore the structure of the supercomputer job flow and to track overall trends in supercomputer behavior. The second approach is aimed specifically at analyzing off-the-shelf software packages and libraries installed on the supercomputer, since the efficiency of their usage is becoming an increasingly important factor for the efficient functioning of the entire supercomputer. Within the third approach, abnormal jobs – jobs whose behavior is abnormally inefficient and differs significantly from the standard behavior of the overall supercomputer job flow – are detected. For each approach, results obtained in practice at the Supercomputer Center of Moscow State University are presented.

  13. Polyglutamine Disease Modeling: Epitope Based Screen for Homologous Recombination using CRISPR/Cas9 System.

    PubMed

    An, Mahru C; O'Brien, Robert N; Zhang, Ningzhe; Patra, Biranchi N; De La Cruz, Michael; Ray, Animesh; Ellerby, Lisa M

    2014-04-15

    We have previously reported the genetic correction of Huntington's disease (HD) patient-derived induced pluripotent stem cells using traditional homologous recombination (HR) approaches. To extend this work, we have adopted a CRISPR-based genome editing approach to improve the efficiency of recombination in order to generate allelic isogenic HD models in human cells. Incorporation of a rapid antibody-based screening approach to measure recombination provides a powerful method to determine relative efficiency of genome editing for modeling polyglutamine diseases or understanding factors that modulate CRISPR/Cas9 HR.

  14. Bridge approach slabs for Missouri DOT field evaluation of alternative and cost efficient bridge approach slabs.

    DOT National Transportation Integrated Search

    2013-05-01

    A recent study on cost-efficient alternative bridge approach slab (BAS) designs (Thiagarajan et al. 2010) recommended three new BAS designs for possible implementation by MoDOT, namely a) 20 feet cast-in-place slab with sleeper slab (C...

  15. Silicon wafer-based tandem cells: The ultimate photovoltaic solution?

    NASA Astrophysics Data System (ADS)

    Green, Martin A.

    2014-03-01

    Recent large price reductions for wafer-based cells have increased the difficulty of dislodging silicon solar cell technology from its dominant market position. With market leaders expected to be manufacturing modules above 16% efficiency at $0.36/Watt by 2017, even the cost per unit area ($60-70/m2) will be difficult for any thin-film photovoltaic technology to significantly undercut. This may make dislodgement likely only through appreciably higher energy conversion efficiency approaches. A silicon wafer-based cell able to capitalize on on-going cost reductions within the mainstream industry, but with an appreciably higher than present efficiency, might therefore provide the ultimate PV solution. With average selling prices of 156 mm quasi-square monocrystalline Si photovoltaic wafers recently approaching $1 per wafer, wafers now provide clean, low cost templates for overgrowth of thin, wider bandgap high performance cells, nearly doubling silicon's ultimate efficiency potential. The range of possible Si-based tandem approaches is reviewed together with recent results and ultimate prospects.

  16. Efficient globally optimal segmentation of cells in fluorescence microscopy images using level sets and convex energy functionals.

    PubMed

    Bergeest, Jan-Philip; Rohr, Karl

    2012-10-01

    In high-throughput applications, accurate and efficient segmentation of cells in fluorescence microscopy images is of central importance for the quantification of protein expression and the understanding of cell function. We propose an approach for segmenting cell nuclei which is based on active contours using level sets and convex energy functionals. Compared to previous work, our approach determines the global solution. Thus, the approach does not suffer from local minima and the segmentation result does not depend on the initialization. We consider three different well-known energy functionals for active contour-based segmentation and introduce convex formulations of these functionals. We also suggest a numeric approach for efficiently computing the solution. The performance of our approach has been evaluated using fluorescence microscopy images from different experiments comprising different cell types. We have also performed a quantitative comparison with previous segmentation approaches.

  17. Tensorial dynamic time warping with articulation index representation for efficient audio-template learning.

    PubMed

    Le, Long N; Jones, Douglas L

    2018-03-01

    Audio classification techniques often depend on the availability of a large labeled training dataset for successful performance. However, in many application domains of audio classification (e.g., wildlife monitoring), obtaining labeled data is still a costly and laborious process. Motivated by this observation, a technique is proposed to efficiently learn a clean template from a few labeled, but likely corrupted (by noise and interferences), data samples. This learning can be done efficiently via tensorial dynamic time warping on the articulation index-based time-frequency representations of audio data. The learned template can then be used in audio classification following the standard template-based approach. Experimental results show that the proposed approach outperforms both (1) the recurrent neural network approach and (2) the state-of-the-art in the template-based approach on a wildlife detection application with few training samples.
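    Dynamic time warping itself is compact enough to sketch. The example below is a standard 1-D DTW on toy sequences; the paper's tensorial version warps articulation-index time-frequency representations rather than scalar series. It aligns two sequences that differ only by a local time stretch.

    ```python
    import numpy as np

    def dtw_distance(a, b):
        """Classic dynamic time warping distance between two 1-D sequences
        via the standard O(n*m) dynamic program."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j],      # insertion
                                     D[i, j - 1],      # deletion
                                     D[i - 1, j - 1])  # match
        return D[n, m]

    print(dtw_distance([0, 1, 2, 3], [0, 0, 1, 2, 3]))  # 0.0: a pure time warp
    ```

    Template learning in this setting amounts to picking or averaging samples so that their warped distance to the (corrupted) training examples is small, after which classification compares incoming audio against the learned template.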

  18. High-efficiency and flexible generation of vector vortex optical fields by a reflective phase-only spatial light modulator.

    PubMed

    Cai, Meng-Qiang; Wang, Zhou-Xiang; Liang, Juan; Wang, Yan-Kun; Gao, Xu-Zhen; Li, Yongnan; Tu, Chenghou; Wang, Hui-Tian

    2017-08-01

    The scheme for generating vector optical fields should have not only high efficiency but also flexibility for satisfying the requirements of various applications. However, in general, high efficiency and flexibility are not compatible. Here we present and experimentally demonstrate a solution to directly, flexibly, and efficiently generate vector vortex optical fields (VVOFs) with a reflective phase-only liquid crystal spatial light modulator (LC-SLM) based on the optical birefringence of liquid crystal molecules. To generate the VVOFs, this approach needs in principle only a half-wave plate, an LC-SLM, and a quarter-wave plate. This approach has some advantages, including a simple experimental setup, good flexibility, and high efficiency, making it very promising in applications where higher power is needed. This approach has a generation efficiency of 44.0%, much higher than the 1.1% of the common-path interferometric approach.

  19. New Methodology for Known Metabolite Identification in Metabonomics/Metabolomics: Topological Metabolite Identification Carbon Efficiency (tMICE).

    PubMed

    Sanchon-Lopez, Beatriz; Everett, Jeremy R

    2016-09-02

    A new, simple-to-implement and quantitative approach to assessing the confidence in NMR-based identification of known metabolites is introduced. The approach is based on a topological analysis of metabolite identification information available from NMR spectroscopy studies and is a development of the metabolite identification carbon efficiency (MICE) method. New topological metabolite identification indices are introduced, analyzed, and proposed for general use, including topological metabolite identification carbon efficiency (tMICE). Because known metabolite identification is one of the key bottlenecks in either NMR-spectroscopy- or mass spectrometry-based metabonomics/metabolomics studies, and given the fact that there is no current consensus on how to assess metabolite identification confidence, it is hoped that these new approaches and the topological indices will find utility.

  20. Chitosan-based water-propelled micromotors with strong antibacterial activity.

    PubMed

    Delezuk, Jorge A M; Ramírez-Herrera, Doris E; Esteban-Fernández de Ávila, Berta; Wang, Joseph

    2017-02-09

    A rapid and efficient micromotor-based bacteria killing strategy is described. The new antibacterial approach couples the attractive antibacterial properties of chitosan with the efficient water-powered propulsion of magnesium (Mg) micromotors. These Janus micromotors consist of Mg microparticles coated with the biodegradable and biocompatible polymers poly(lactic-co-glycolic acid) (PLGA), alginate (Alg) and chitosan (Chi), with the latter responsible for the antibacterial properties of the micromotor. The distinct speed and efficiency advantages of the new micromotor-based environmentally friendly antibacterial approach have been demonstrated in various control experiments by treating drinking water contaminated with model Escherichia coli (E. coli) bacteria. The new dynamic antibacterial strategy offers dramatic improvements in antibacterial efficiency compared to static chitosan-coated microparticles (e.g., 27-fold enhancement), with a 96% killing efficiency within 10 min. Potential real-life applications of these chitosan-based micromotors for environmental remediation have been demonstrated by the efficient treatment of seawater and fresh water samples contaminated with unknown bacteria. Coupling the efficient water-driven propulsion of such biodegradable and biocompatible micromotors with the antibacterial properties of chitosan holds considerable promise for advanced antimicrobial water treatment operation.

  1. A Component-Based FPGA Design Framework for Neuronal Ion Channel Dynamics Simulations

    PubMed Central

    Mak, Terrence S. T.; Rachmuth, Guy; Lam, Kai-Pui; Poon, Chi-Sang

    2008-01-01

    Neuron-machine interfaces such as dynamic clamp and brain-implantable neuroprosthetic devices require real-time simulations of neuronal ion channel dynamics. Field Programmable Gate Array (FPGA) has emerged as a high-speed digital platform ideal for such application-specific computations. We propose an efficient and flexible component-based FPGA design framework for neuronal ion channel dynamics simulations, which overcomes certain limitations of the recently proposed memory-based approach. A parallel processing strategy is used to minimize computational delay, and a hardware-efficient factoring approach for calculating exponential and division functions in neuronal ion channel models is used to conserve resource consumption. Performances of the various FPGA design approaches are compared theoretically and experimentally in corresponding implementations of the AMPA and NMDA synaptic ion channel models. Our results suggest that the component-based design framework provides a more memory economic solution as well as more efficient logic utilization for large word lengths, whereas the memory-based approach may be suitable for time-critical applications where a higher throughput rate is desired. PMID:17190033

  2. Thin film CdTe based neutron detectors with high thermal neutron efficiency and gamma rejection for security applications

    NASA Astrophysics Data System (ADS)

    Smith, L.; Murphy, J. W.; Kim, J.; Rozhdestvenskyy, S.; Mejia, I.; Park, H.; Allee, D. R.; Quevedo-Lopez, M.; Gnade, B.

    2016-12-01

    Solid-state neutron detectors offer an alternative to 3He based detectors, but suffer from limited neutron efficiencies that make their use in security applications impractical. Solid-state neutron detectors based on single crystal silicon also have relatively high gamma-ray efficiencies that lead to false positives. Thin film polycrystalline CdTe based detectors require less complex processing with significantly lower gamma-ray efficiencies. Advanced geometries can also be implemented to achieve high thermal neutron efficiencies competitive with silicon based technology. This study evaluates these strategies by simulation and experimentation and demonstrates an approach to achieve >10% intrinsic efficiency with <10^-6 gamma-ray efficiency.

  3. A new framework for comprehensive, robust, and efficient global sensitivity analysis: 2. Application

    NASA Astrophysics Data System (ADS)

    Razavi, Saman; Gupta, Hoshin V.

    2016-01-01

    Based on the theoretical framework for sensitivity analysis called "Variogram Analysis of Response Surfaces" (VARS), developed in the companion paper, we develop and implement a practical "star-based" sampling strategy (called STAR-VARS), for the application of VARS to real-world problems. We also develop a bootstrap approach to provide confidence level estimates for the VARS sensitivity metrics and to evaluate the reliability of inferred factor rankings. The effectiveness, efficiency, and robustness of STAR-VARS are demonstrated via two real-data hydrological case studies (a 5-parameter conceptual rainfall-runoff model and a 45-parameter land surface scheme hydrology model), and a comparison with the "derivative-based" Morris and "variance-based" Sobol approaches is provided. Our results show that STAR-VARS provides reliable and stable assessments of "global" sensitivity across the full range of scales in the factor space, while being 1-2 orders of magnitude more efficient than the Morris or Sobol approaches.
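    The bootstrap component is straightforward to illustrate. The sketch below is a generic illustration, not the STAR-VARS code, and the metric shown is a hypothetical stand-in: rows of a sample matrix are resampled with replacement to attach a confidence interval to any scalar sensitivity metric.

    ```python
    import numpy as np

    def bootstrap_ci(samples, metric, n_boot=1000, alpha=0.05, seed=0):
        """Generic bootstrap confidence interval for a sensitivity metric
        computed from a sample matrix (rows = sampled points)."""
        rng = np.random.default_rng(seed)
        n = samples.shape[0]
        stats = np.array([metric(samples[rng.integers(0, n, n)])
                          for _ in range(n_boot)])
        return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

    # Toy stand-in metric: squared correlation of input x0 with the output
    rng = np.random.default_rng(1)
    X = rng.uniform(size=(500, 2))
    y = 2 * X[:, 0] + 0.1 * rng.standard_normal(500)
    data = np.column_stack([X, y])
    metric = lambda d: np.corrcoef(d[:, 0], d[:, -1])[0, 1] ** 2
    print(bootstrap_ci(data, metric))  # 95% CI for the toy sensitivity of x0
    ```

    Applying the same resampling to the full vector of VARS metrics is what lets the authors report confidence levels on factor rankings rather than point estimates alone.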

  4. A Spatiotemporal Indexing Approach for Efficient Processing of Big Array-Based Climate Data with MapReduce

    NASA Technical Reports Server (NTRS)

    Li, Zhenlong; Hu, Fei; Schnase, John L.; Duffy, Daniel Q.; Lee, Tsengdar; Bowen, Michael K.; Yang, Chaowei

    2016-01-01

    Climate observations and model simulations are producing vast amounts of array-based spatiotemporal data. Efficient processing of these data is essential for assessing global challenges such as climate change, natural disasters, and diseases. This is challenging not only because of the large data volume, but also because of the intrinsic high-dimensional nature of geoscience data. To tackle this challenge, we propose a spatiotemporal indexing approach to efficiently manage and process big climate data with MapReduce in a highly scalable environment. Using this approach, big climate data are directly stored in a Hadoop Distributed File System in its original, native file format. A spatiotemporal index is built to bridge the logical array-based data model and the physical data layout, which enables fast data retrieval when performing spatiotemporal queries. Based on the index, a data-partitioning algorithm is applied to enable MapReduce to achieve high data locality, as well as balancing the workload. The proposed indexing approach is evaluated using the National Aeronautics and Space Administration (NASA) Modern-Era Retrospective Analysis for Research and Applications (MERRA) climate reanalysis dataset. The experimental results show that the index can significantly accelerate querying and processing (10x speedup compared to the baseline test using the same computing cluster), while keeping the index-to-data ratio small (0.0328). The applicability of the indexing approach is demonstrated by a climate anomaly detection application deployed on a NASA Hadoop cluster. This approach is also able to support efficient processing of general array-based spatiotemporal data in various geoscience domains without special configuration on a Hadoop cluster.
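    The bridging idea can be sketched in miniature. The example below is a hypothetical in-memory analogue, not NASA's implementation: it maps logical (time, latitude-band) chunk coordinates of a C-order float32 array to byte offsets in a native binary file, so a spatiotemporal query touches only the overlapping chunks instead of scanning the whole file.

    ```python
    def build_index(shape, chunk):
        """Map logical (time, lat) chunk coordinates to flat byte offsets in a
        hypothetical native binary layout (C-order float32 array)."""
        nt, ny, nx = shape
        ct, cy = chunk
        itemsize = 4
        index = {}
        for t0 in range(0, nt, ct):
            for y0 in range(0, ny, cy):
                index[(t0, y0)] = (t0 * ny + y0) * nx * itemsize
        return index

    def query(index, chunk, t_range, y_range):
        """Return only the chunks overlapping a spatiotemporal query box."""
        ct, cy = chunk
        return [(k, off) for k, off in index.items()
                if t_range[0] < k[0] + ct and k[0] < t_range[1]
                and y_range[0] < k[1] + cy and k[1] < y_range[1]]

    idx = build_index(shape=(365, 180, 360), chunk=(30, 45))
    print(len(query(idx, (30, 45), t_range=(0, 60), y_range=(90, 180))))  # 4
    ```

    In the MapReduce setting, the same lookup is what drives data-local task placement: each map task is assigned the chunks whose byte ranges live on its node.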

  5. Field evaluation of alternative and cost efficient bridge approach slabs.

    DOT National Transportation Integrated Search

    2013-11-01

    A recent study on cost-efficient alternative bridge approach slab (BAS) designs (Thiagarajan et al. 2010) recommended three new BAS designs for possible implementation by MoDOT, namely a) 20 feet cast-in-place slab with sleeper slab (CIP...

  6. Protocol for Reliability Assessment of Structural Health Monitoring Systems Incorporating Model-assisted Probability of Detection (MAPOD) Approach

    DTIC Science & Technology

    2011-09-01

    ...a quality evaluation with limited data, a model-based assessment must be... that affect system performance, a multistage approach to system validation, a modeling and experimental methodology for efficiently addressing a wide range...

  7. Proposing a Master's Programme on Participatory Integrated Assessment of Energy Systems to Promote Energy Access and Energy Efficiency in Southern Africa

    ERIC Educational Resources Information Center

    Kiravu, Cheddi; Diaz-Maurin, François; Giampietro, Mario; Brent, Alan C.; Bukkens, Sandra G.F.; Chiguvare, Zivayi; Gasennelwe-Jeffrey, Mandu A.; Gope, Gideon; Kovacic, Zora; Magole, Lapologang; Musango, Josephine Kaviti; Ruiz-Rivas Hernando, Ulpiano; Smit, Suzanne; Vázquez Barquero, Antonio; Yunta Mezquita, Felipe

    2018-01-01

    Purpose: This paper aims to present a new master's programme for promoting energy access and energy efficiency in Southern Africa. Design/methodology/approach: A transdisciplinary approach called "participatory integrated assessment of energy systems" (PARTICIPIA) was used for the development of the curriculum. This approach is based on…

  8. Directional Slack-Based Measure for the Inverse Data Envelopment Analysis

    PubMed Central

    Abu Bakar, Mohd Rizam; Lee, Lai Soon; Jaafar, Azmi B.; Heydar, Maryam

    2014-01-01

    A novel technique is introduced that builds on the directional slack-based measure for inverse data envelopment analysis. The research elucidates the inverse directional slack-based measure model within a new production possibility set, for the situation in which the output (input) quantities of an efficient decision making unit are modified. In this method, the efficient decision making unit is removed from the current production possibility set and replaced by a counterpart with the modified input and output quantities. The efficiency scores of all the DMUs are retained in this approach, and the efficiency score may also improve. The proposed approach is investigated with reference to a resource allocation problem, where increases (decreases) of certain outputs associated with the efficient decision making unit can be considered simultaneously. The significance of the presented model is illustrated through numerical examples. PMID:24883350

  9. Complementarity and Area-Efficiency in the Prioritization of the Global Protected Area Network.

    PubMed

    Kullberg, Peter; Toivonen, Tuuli; Montesino Pouzols, Federico; Lehtomäki, Joona; Di Minin, Enrico; Moilanen, Atte

    2015-01-01

    Complementarity and cost-efficiency are widely used principles for protected area network design. Despite their wide use and robust theoretical underpinnings, their effects on the performance and patterns of priority areas are rarely studied in detail. Here we compare two approaches for identifying management priority areas inside the global protected area network: 1) a scoring-based approach, used in a recently published analysis, and 2) a spatial prioritization method that accounts for complementarity and area-efficiency. Using the same IUCN species distribution data, the complementarity method found an equal-area set of priority areas covering twice the mean species range compared to the scoring-based approach. The complementarity set also had 72% more species with their full ranges covered, and left entirely uncovered only half as many species as the scoring approach. Protected areas in our complementarity-based solution were on average smaller and geographically more scattered. The large difference between the two solutions highlights the need for critical thinking about the chosen prioritization method. According to our analysis, accounting for complementarity and area-efficiency can lead to considerable improvements when setting management priorities for the global protected area network.
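    The complementarity principle is usually operationalized with a greedy, marginal-gain selection rule. The sketch below is a textbook greedy max-coverage illustration on synthetic data, not the prioritization software used in the paper: at each step it adds the site that most increases total representation, so each new site complements, rather than duplicates, what is already covered.

    ```python
    import numpy as np

    def greedy_complementarity(occ, n_pick):
        """Greedy complementarity-based site selection. occ[s, j] = fraction of
        species j's range in site s; each step adds the site with the largest
        marginal gain in total representation (capped at full coverage)."""
        n_sites, n_species = occ.shape
        covered = np.zeros(n_species)
        chosen = []
        remaining = set(range(n_sites))
        for _ in range(n_pick):
            best = max(remaining,
                       key=lambda s: np.minimum(covered + occ[s], 1.0).sum())
            chosen.append(best)
            remaining.remove(best)
            covered = np.minimum(covered + occ[best], 1.0)
        return chosen, covered.mean()

    rng = np.random.default_rng(2)
    occ = rng.random((100, 40)) * (rng.random((100, 40)) < 0.1)
    sites, mean_cov = greedy_complementarity(occ, n_pick=10)
    print(sites, round(mean_cov, 3))
    ```

    A scoring approach, by contrast, would rank sites by a fixed per-site score (e.g., row sums of occ) and can keep picking sites that protect the same already-covered species, which is exactly the behavior the comparison in this record quantifies.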

  10. Highly efficient and completely flexible fiber-shaped dye-sensitized solar cell based on TiO2 nanotube array

    NASA Astrophysics Data System (ADS)

    Lv, Zhibin; Yu, Jiefeng; Wu, Hongwei; Shang, Jian; Wang, Dan; Hou, Shaocong; Fu, Yongping; Wu, Kai; Zou, Dechun

    2012-02-01

    A type of highly efficient completely flexible fiber-shaped solar cell based on TiO2 nanotube array is successfully prepared. Under air mass 1.5G (100 mW cm-2) illumination conditions, the photoelectric conversion efficiency of the solar cell approaches 7%, the highest among all fiber-shaped cells based on TiO2 nanotube arrays and the first completely flexible fiber-shaped DSSC. The fiber-shaped solar cell demonstrates good flexibility, which makes it suitable for modularization using weaving technologies. Electronic supplementary information (ESI) available. See DOI: 10.1039/c2nr11532h

  11. Increasing the efficiency of designing hemming processes by using an element-based metamodel approach

    NASA Astrophysics Data System (ADS)

    Kaiser, C.; Roll, K.; Volk, W.

    2017-09-01

    In the automotive industry, the manufacturing of automotive outer panels requires hemming processes in which two sheet metal parts are joined together by bending the flange of the outer part over the inner part. Because of decreasing development times and the steadily growing number of vehicle derivatives, an efficient digital product and process validation is necessary. Commonly used simulations, which are based on the finite element method, demand significant modelling effort, which is a disadvantage especially in the early product development phase. To increase the efficiency of designing hemming processes, this paper presents a hemming-specific metamodel approach. The approach includes a part analysis in which the outline of the automotive outer panel is initially split into individual segments. By parametrizing each of the segments and assigning basic geometric shapes, the outline of the part is approximated. Based on this, the hemming parameters such as flange length, roll-in, wrinkling and plastic strains are calculated for each of the geometric basic shapes by performing a metamodel-based segmental product validation. The metamodel is based on an element-like formulation that includes a reference dataset of various geometric basic shapes. A random automotive outer panel can now be analysed and optimized based on the hemming-specific database. By implementing this approach in a planning system, an efficient optimization of designing hemming processes will be enabled. Furthermore, valuable time and cost benefits can be realized in a vehicle's development process.

  12. Prediction and design of efficient exciplex emitters for high-efficiency, thermally activated delayed-fluorescence organic light-emitting diodes.

    PubMed

    Liu, Xiao-Ke; Chen, Zhan; Zheng, Cai-Jun; Liu, Chuan-Lin; Lee, Chun-Sing; Li, Fan; Ou, Xue-Mei; Zhang, Xiao-Hong

    2015-04-08

    High-efficiency, thermally activated delayed-fluorescence organic light-emitting diodes based on exciplex emitters are demonstrated. The best device, based on a TAPC:DPTPCz emitter, shows a high external quantum efficiency of 15.4%. Strategies for predicting and designing efficient exciplex emitters are also provided. This approach allows the prediction and design of efficient exciplex emitters for achieving high-efficiency organic light-emitting diodes, for future use in displays and lighting applications.

  13. Efficient Online Learning Algorithms Based on LSTM Neural Networks.

    PubMed

    Ergen, Tolga; Kozat, Suleyman Serdar

    2017-09-13

    We investigate online nonlinear regression and introduce novel regression structures based on long short-term memory (LSTM) networks. For the introduced structures, we also provide highly efficient and effective online training methods. To train these novel LSTM-based structures, we put the underlying architecture in a state-space form and introduce highly efficient and effective particle filtering (PF)-based updates. We also provide stochastic gradient descent and extended Kalman filter-based updates. Our PF-based training method guarantees convergence to the optimal parameter estimate in the mean square error sense, provided that we have a sufficient number of particles and satisfy certain technical conditions. More importantly, we achieve this performance with a computational complexity on the order of first-order gradient-based methods by controlling the number of particles. Since our approach is generic, we also introduce a gated recurrent unit (GRU)-based approach by directly replacing the LSTM architecture with the GRU architecture, and we demonstrate the superiority of our LSTM-based approach in the sequential prediction task on different real-life data sets. In addition, the experimental results illustrate significant performance improvements achieved by the introduced algorithms with respect to conventional methods over several benchmark real-life data sets.

  14. An Efficient Local Correlation Matrix Decomposition Approach for the Localization Implementation of Ensemble-Based Assimilation Methods

    NASA Astrophysics Data System (ADS)

    Zhang, Hongqin; Tian, Xiangjun

    2018-04-01

    Ensemble-based data assimilation methods often use a so-called localization scheme to improve the representation of the ensemble background error covariance (Be). Extensive research has been undertaken to reduce the computational cost of these methods by using the localized ensemble samples to localize Be by means of a direct decomposition of the local correlation matrix C. However, the computational cost of directly decomposing the local correlation matrix C is still extremely high due to its high dimension. In this paper, we propose an efficient local correlation matrix decomposition approach based on the concept of alternating directions. This approach avoids direct decomposition of the correlation matrix. Instead, we first decompose the correlation matrix into 1-D correlation matrices in the three coordinate directions, then construct their empirical orthogonal function decompositions at low resolution. This procedure is followed by a 1-D spline interpolation process to transform the above decompositions to the high-resolution grid. Finally, an efficient correlation matrix decomposition is achieved by combining the directional decompositions through a Kronecker product, which closely reproduces the original matrix. We conducted a series of comparison experiments to illustrate the validity and accuracy of the proposed local correlation matrix decomposition approach. The effectiveness of the proposed correlation matrix decomposition approach and its efficient localization implementation of the nonlinear least-squares four-dimensional variational assimilation are further demonstrated by several groups of numerical experiments based on the Advanced Research Weather Research and Forecasting model.
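    The payoff of the directional decomposition is that eigen-decompositions of small 1-D matrices stand in for the decomposition of the full 3-D matrix. A minimal sketch, assuming a separable Gaussian correlation model (the paper additionally uses low-resolution EOFs and spline interpolation on top of this idea):

    ```python
    import numpy as np

    def corr_1d(n, length_scale):
        """1-D Gaussian correlation matrix on a unit-spaced grid."""
        x = np.arange(n)
        return np.exp(-((x[:, None] - x[None, :]) / length_scale) ** 2)

    # Decompose per direction, then combine: C3 = Cx (x) Cy (x) Cz (Kronecker),
    # so eigenpairs of the 3-D matrix are products of the 1-D eigenpairs.
    Cx, Cy, Cz = corr_1d(8, 3.0), corr_1d(6, 2.0), corr_1d(4, 1.5)
    wx, Vx = np.linalg.eigh(Cx)
    wy, Vy = np.linalg.eigh(Cy)
    wz, Vz = np.linalg.eigh(Cz)

    C3 = np.kron(np.kron(Cx, Cy), Cz)            # full 192 x 192 matrix
    v = np.kron(np.kron(Vx[:, -1], Vy[:, -1]), Vz[:, -1])
    lam = wx[-1] * wy[-1] * wz[-1]
    print(np.allclose(C3 @ v, lam * v))          # True: 1-D modes suffice
    ```

    Three small eigendecompositions (8, 6 and 4 modes here) replace one 192-dimensional decomposition, and the saving grows rapidly with grid size, which is the efficiency argument of the record above.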

  15. A Market-Based Approach to Multi-factory Scheduling

    NASA Astrophysics Data System (ADS)

    Vytelingum, Perukrishnen; Rogers, Alex; MacBeth, Douglas K.; Dutta, Partha; Stranjak, Armin; Jennings, Nicholas R.

    In this paper, we report on the design of a novel market-based approach for decentralised scheduling across multiple factories. Specifically, because of the limitations of scheduling in a centralised manner - which requires a center to have complete and perfect information for optimality and the truthful revelation of potentially commercially private preferences to that center - we advocate an informationally decentralised approach that is both agile and dynamic. In particular, this work adopts a market-based approach for decentralised scheduling by considering the different stakeholders representing different factories as self-interested, profit-motivated economic agents that trade resources for the scheduling of jobs. The overall schedule of these jobs is then an emergent behaviour of the strategic interaction of these trading agents bidding for resources in a market based on limited information and their own preferences. Using a simple (zero-intelligence) bidding strategy, we empirically demonstrate that our market-based approach achieves a lower bound efficiency of 84%. This represents a trade-off between a reasonable level of efficiency (compared to a centralised approach) and the desirable benefits of a decentralised solution.

  16. Improving Distributed Diagnosis Through Structural Model Decomposition

    NASA Technical Reports Server (NTRS)

    Bregon, Anibal; Daigle, Matthew John; Roychoudhury, Indranil; Biswas, Gautam; Koutsoukos, Xenofon; Pulido, Belarmino

    2011-01-01

    Complex engineering systems require efficient fault diagnosis methodologies, but centralized approaches do not scale well, and this motivates the development of distributed solutions. This work presents an event-based approach for distributed diagnosis of abrupt parametric faults in continuous systems, by using the structural model decomposition capabilities provided by Possible Conflicts. We develop a distributed diagnosis algorithm that uses residuals computed by extending Possible Conflicts to build local event-based diagnosers based on global diagnosability analysis. The proposed approach is applied to a multitank system, and results demonstrate an improvement in the design of local diagnosers. Since local diagnosers use only a subset of the residuals, and use subsystem models to compute residuals (instead of the global system model), the local diagnosers are more efficient than previously developed distributed approaches.

  17. Efficient Testing Combining Design of Experiment and Learn-to-Fly Strategies

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.; Brandon, Jay M.

    2017-01-01

    Rapid modeling and efficient testing methods are important in a number of aerospace applications. In this study efficient testing strategies were evaluated in a wind tunnel test environment and combined to suggest a promising approach for both ground-based and flight-based experiments. Benefits of using Design of Experiment techniques, well established in scientific, military, and manufacturing applications are evaluated in combination with newly developing methods for global nonlinear modeling. The nonlinear modeling methods, referred to as Learn-to-Fly methods, utilize fuzzy logic and multivariate orthogonal function techniques that have been successfully demonstrated in flight test. The blended approach presented has a focus on experiment design and identifies a sequential testing process with clearly defined completion metrics that produce increased testing efficiency.

  18. Fractal stock markets: International evidence of dynamical (in)efficiency.

    PubMed

    Bianchi, Sergio; Frezza, Massimiliano

    2017-07-01

    The last systemic financial crisis has reawakened the debate on the efficient nature of financial markets, traditionally described as semimartingales. The standard approaches to endowing the general notion of efficiency with an empirical content have turned out to be somewhat inconclusive and misleading. We propose a topological approach to quantify the informational efficiency of a financial time series. The idea is to measure efficiency by means of the pointwise regularity of a (stochastic) function, given that the signature of a martingale is that its pointwise regularity equals 1/2. We provide estimates for real financial time series and investigate their (in)efficient behavior by comparing three main stock indexes.

  19. Fractal stock markets: International evidence of dynamical (in)efficiency

    NASA Astrophysics Data System (ADS)

    Bianchi, Sergio; Frezza, Massimiliano

    2017-07-01

    The last systemic financial crisis has reawakened the debate on the efficient nature of financial markets, traditionally described as semimartingales. The standard approaches to endowing the general notion of efficiency with an empirical content have turned out to be somewhat inconclusive and misleading. We propose a topological approach to quantify the informational efficiency of a financial time series. The idea is to measure efficiency by means of the pointwise regularity of a (stochastic) function, given that the signature of a martingale is that its pointwise regularity equals 1/2. We provide estimates for real financial time series and investigate their (in)efficient behavior by comparing three main stock indexes.
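    The regularity-equals-1/2 benchmark used in the two records above can be approximated globally with a simple increment-scaling estimate. The sketch below is a global Hurst-style estimator on synthetic data (the papers estimate regularity pointwise, which is more delicate); it recovers an exponent near 1/2 for a martingale-like random walk.

    ```python
    import numpy as np

    def hurst_exponent(x, max_lag=50):
        """Estimate the regularity exponent H from the scaling of mean squared
        increments, E|x(t+tau) - x(t)|^2 ~ tau^(2H); H near 1/2 signals a
        martingale-like (informationally efficient) series."""
        lags = np.arange(2, max_lag)
        msd = np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags])
        slope = np.polyfit(np.log(lags), np.log(msd), 1)[0]
        return slope / 2.0

    rng = np.random.default_rng(3)
    prices = np.cumsum(rng.standard_normal(100000))  # random walk: H ~ 0.5
    print(round(hurst_exponent(prices), 3))
    ```

    Persistent (H > 1/2) or anti-persistent (H < 1/2) behavior in a real index would then be read as a departure from informational efficiency.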

  20. A FFT-based formulation for efficient mechanical fields computation in isotropic and anisotropic periodic discrete dislocation dynamics

    NASA Astrophysics Data System (ADS)

    Bertin, N.; Upadhyay, M. V.; Pradalier, C.; Capolungo, L.

    2015-09-01

    In this paper, we propose a novel full-field approach based on the fast Fourier transform (FFT) technique to compute mechanical fields in periodic discrete dislocation dynamics (DDD) simulations for anisotropic materials: the DDD-FFT approach. By coupling the FFT-based approach to the discrete-continuous model, the present approach benefits from the high computational efficiency of the FFT algorithm, while allowing for a discrete representation of dislocation lines. It is demonstrated that the computational time associated with the new DDD-FFT approach is significantly lower than that of current DDD approaches when large numbers of dislocation segments are involved, for both isotropic and anisotropic elasticity. Furthermore, for fine Fourier grids, the treatment of anisotropic elasticity comes at a computational cost similar to that of an isotropic simulation. Thus, the proposed approach paves the way towards achieving scale transition from DDD to mesoscale plasticity, especially due to the method's ability to incorporate inhomogeneous elasticity.
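    The reason FFT-based full-field solvers scale so well is that, on a periodic grid, convolution with an elastic Green's operator becomes a mode-by-mode multiplication in Fourier space. The following sketch shows the pattern on a periodic Poisson solve, standing in for the paper's more involved Green's-operator formulation for dislocation eigenstrains.

    ```python
    import numpy as np

    def poisson_fft(f, L=1.0):
        """Solve -lap(u) = f with periodic BCs on a square grid via FFT:
        in Fourier space, u_hat = f_hat / |k|^2 (zero-mean solution)."""
        n = f.shape[0]
        k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
        kx, ky = np.meshgrid(k, k, indexing="ij")
        k2 = kx ** 2 + ky ** 2
        f_hat = np.fft.fft2(f)
        u_hat = np.zeros_like(f_hat)
        nz = k2 > 0
        u_hat[nz] = f_hat[nz] / k2[nz]       # invert -laplacian mode by mode
        return np.real(np.fft.ifft2(u_hat))

    n = 64
    x = np.linspace(0, 1, n, endpoint=False)
    X, Y = np.meshgrid(x, x, indexing="ij")
    f = np.sin(2 * np.pi * X) * np.sin(2 * np.pi * Y)
    u = poisson_fft(f)
    # analytic solution is f / (8 pi^2); the max error is near machine precision
    print(np.abs(u - f / (8 * np.pi ** 2)).max())
    ```

    The whole solve costs O(N log N) in the number of grid points, and an anisotropic operator only changes the per-mode multiplier, which is why anisotropy comes nearly for free in the DDD-FFT setting.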

  1. Efficient clustering aggregation based on data fragments.

    PubMed

    Wu, Ou; Hu, Weiming; Maybank, Stephen J; Zhu, Mingliang; Li, Bing

    2012-06-01

    Clustering aggregation, also known as clustering ensembles, has emerged as a powerful technique for combining different clustering results to obtain a single, better clustering. Existing clustering aggregation algorithms are applied directly to data points, in what is referred to as the point-based approach. These algorithms are inefficient if the number of data points is large. We define an efficient approach for clustering aggregation based on data fragments. In this fragment-based approach, a data fragment is any subset of the data that is not split by any of the clustering results. To establish the theoretical basis of the proposed approach, we prove that clustering aggregation can be performed directly on data fragments under two widely used goodness measures for clustering aggregation taken from the literature. Three new clustering aggregation algorithms are described. Experimental results obtained using several public data sets show that the new algorithms have lower computational complexity than three well-known existing point-based clustering aggregation algorithms (Agglomerative, Furthest, and LocalSearch); nevertheless, the new algorithms do not sacrifice accuracy.
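    The fragment construction itself is a one-liner: two points belong to the same fragment exactly when every input clustering assigns them the same label, so grouping points by their label tuple yields the fragments. A minimal sketch of the definition (my illustration, not the paper's aggregation algorithms):

    ```python
    from collections import defaultdict

    def fragments(clusterings):
        """A data fragment is a maximal set of points that every input
        clustering keeps together: group points by their label tuple."""
        groups = defaultdict(list)
        n = len(clusterings[0])
        for p in range(n):
            key = tuple(c[p] for c in clusterings)  # one label per clustering
            groups[key].append(p)
        return list(groups.values())

    c1 = [0, 0, 0, 1, 1, 1]   # two input clusterings of six points
    c2 = [0, 0, 1, 1, 1, 1]
    print(fragments([c1, c2]))  # [[0, 1], [2], [3, 4, 5]]
    ```

    Aggregation then operates on the fragments (three here) instead of the individual points (six here), which is where the efficiency gain over point-based algorithms comes from.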

  2. Efficient Personalized Mispronunciation Detection of Taiwanese-Accented English Speech Based on Unsupervised Model Adaptation and Dynamic Sentence Selection

    ERIC Educational Resources Information Center

    Wu, Chung-Hsien; Su, Hung-Yu; Liu, Chao-Hong

    2013-01-01

    This study presents an efficient approach to personalized mispronunciation detection of Taiwanese-accented English. The main goal of this study was to detect frequently occurring mispronunciation patterns of Taiwanese-accented English instead of scoring English pronunciations directly. The proposed approach quickly identifies personalized…

  3. Comparison of anatomy-based, fluence-based and aperture-based treatment planning approaches for VMAT

    NASA Astrophysics Data System (ADS)

    Rao, Min; Cao, Daliang; Chen, Fan; Ye, Jinsong; Mehta, Vivek; Wong, Tony; Shepard, David

    2010-11-01

    Volumetric modulated arc therapy (VMAT) has the potential to reduce treatment times while producing comparable or improved dose distributions relative to fixed-field intensity-modulated radiation therapy. In order to take full advantage of the VMAT delivery technique, one must select a robust inverse planning tool. The purpose of this study was to evaluate the effectiveness and efficiency of VMAT planning techniques of three categories: anatomy-based, fluence-based and aperture-based inverse planning. We have compared these techniques in terms of the plan quality, planning efficiency and delivery efficiency. Fourteen patients were selected for this study including six head-and-neck (HN) cases, and two cases each of prostate, pancreas, lung and partial brain. For each case, three VMAT plans were created. The first VMAT plan was generated based on the anatomical geometry. In the Elekta ERGO++ treatment planning system (TPS), segments were generated based on the beam's eye view (BEV) of the target and the organs at risk. The segment shapes were then exported to Pinnacle3 TPS followed by segment weight optimization and final dose calculation. The second VMAT plan was generated by converting optimized fluence maps (calculated by the Pinnacle3 TPS) into deliverable arcs using an in-house arc sequencer. The third VMAT plan was generated using the Pinnacle3 SmartArc IMRT module which is an aperture-based optimization method. All VMAT plans were delivered using an Elekta Synergy linear accelerator and the plan comparisons were made in terms of plan quality and delivery efficiency. The results show that for cases of little or modest complexity such as prostate, pancreas, lung and brain, the anatomy-based approach provides similar target coverage and critical structure sparing, but less conformal dose distributions as compared to the other two approaches. For more complex HN cases, the anatomy-based approach is not able to provide clinically acceptable VMAT plans while highly conformal dose distributions were obtained using both aperture-based and fluence-based inverse planning techniques. The aperture-based approach provides improved dose conformity than the fluence-based technique in complex cases.

  4. Fast and efficient indexing approach for object recognition

    NASA Astrophysics Data System (ADS)

    Hefnawy, Alaa; Mashali, Samia A.; Rashwan, Mohsen; Fikri, Magdi

    1999-08-01

    This paper introduces a fast and efficient indexing approach for both 2D and 3D model-based object recognition in the presence of rotation, translation, and scale variations of objects. The indexing entries are computed after preprocessing the data by Haar wavelet decomposition. The scheme builds on a unified image feature detection approach using Zernike moments. A set of low-level features, e.g. high-precision edges and gray-level corners, is estimated by a set of orthogonal Zernike moments calculated locally around every image point. High-dimensional, highly descriptive indexing entries are then calculated based on the correlation of these local features and employed for fast access to the model database to generate hypotheses. A list of the most likely candidate models is then produced by evaluating the hypotheses. Experimental results are included to demonstrate the effectiveness of the proposed indexing approach.
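    The Haar preprocessing step is simple to reproduce. The sketch below shows one decomposition level with an averaging convention (normalization conventions vary, and the Zernike moment stage is omitted): it splits an image into one approximation and three detail subbands whose coefficients can feed the index.

    ```python
    import numpy as np

    def haar2d_level1(img):
        """One level of the 2-D Haar decomposition: returns the approximation
        subband (LL) plus three detail subbands (LH, HL, HH)."""
        a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
        d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
        ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
        lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
        hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
        hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
        return ll, lh, hl, hh

    img = np.arange(64, dtype=float).reshape(8, 8)
    ll, lh, hl, hh = haar2d_level1(img)
    print(ll.shape, lh.shape)   # (4, 4) subbands feed the moment computation
    ```

    Working on the downsampled approximation band is what makes the subsequent moment computation and index lookup fast.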

  5. Feature-based Approach in Product Design with Energy Efficiency Consideration

    NASA Astrophysics Data System (ADS)

    Li, D. D.; Zhang, Y. J.

    2017-10-01

    In this paper, a method to measure the energy efficiency and ecological footprint metrics of features is proposed for product design. First, the energy consumption models of various manufacturing features, such as cutting and welding features, are studied. Then, the total energy consumption of a product is modeled and estimated according to its features. Next, feature chains, each combining several sequential features according to the production operation order, are defined and analyzed to calculate the globally optimal solution. A corresponding assessment model is also proposed to estimate their energy efficiency and ecological footprint. Finally, an example is given to validate the proposed approach in the improvement of sustainability.
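
    As a toy illustration of the feature-wise energy bookkeeping, the sketch below sums hypothetical per-feature energy models over an ordered feature chain; the volumes and specific energies are invented for the example, not taken from the paper.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Feature:
        """One manufacturing feature with a simple volumetric energy model."""
        name: str
        volume_cm3: float        # material removed / deposited by the feature
        specific_energy: float   # hypothetical kJ per cm^3 for this feature type

        def energy(self) -> float:
            return self.volume_cm3 * self.specific_energy

    def product_energy(feature_chain) -> float:
        """Total energy of a product = sum over its (ordered) feature chain."""
        return sum(f.energy() for f in feature_chain)

    chain = [Feature("cutting", 12.0, 4.5), Feature("welding", 1.5, 30.0)]
    print(product_energy(chain))   # -> 99.0 kJ for this toy chain
    ```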

  6. Least squares QR-based decomposition provides an efficient way of computing optimal regularization parameter in photoacoustic tomography.

    PubMed

    Shaw, Calvin B; Prakash, Jaya; Pramanik, Manojit; Yalavarthy, Phaneendra K

    2013-08-01

    A computationally efficient approach that computes the optimal regularization parameter for the Tikhonov-minimization scheme is developed for photoacoustic imaging. This approach is based on the least squares QR (LSQR) decomposition, a well-known dimensionality reduction technique for large systems of equations. It is shown that the proposed framework is effective in terms of quantitative and qualitative reconstructions of the initial pressure distribution, enabled by finding an optimal regularization parameter. The computational efficiency and performance of the proposed method are shown using a test case of a numerical blood-vessel phantom, where the initial pressure is exactly known for quantitative comparison.
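
    The paper obtains the optimal parameter from the LSQR bidiagonalization itself; the simpler sketch below only illustrates the validation setting described above, scanning LSQR's `damp` argument (the Tikhonov weight) and scoring each reconstruction against a known numerical phantom.

    ```python
    import numpy as np
    from scipy.sparse.linalg import lsqr

    def pick_tikhonov_lambda(A, b, x_true, lambdas):
        """Scan Tikhonov damping values with LSQR and return the one giving
        the best reconstruction of a known numerical phantom x_true."""
        best_lam, best_err = None, np.inf
        for lam in lambdas:
            # LSQR solves min ||A x - b||^2 + lam^2 ||x||^2 via `damp`.
            x = lsqr(A, b, damp=lam)[0]
            err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
            if err < best_err:
                best_lam, best_err = lam, err
        return best_lam, best_err

    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 100))          # stand-in forward model
    x_true = rng.standard_normal(100)            # "known" initial pressure
    b = A @ x_true + 0.5 * rng.standard_normal(200)
    print(pick_tikhonov_lambda(A, b, x_true, np.logspace(-3, 1, 20)))
    ```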

  7. An efficient identification approach for stable and unstable nonlinear systems using Colliding Bodies Optimization algorithm.

    PubMed

    Pal, Partha S; Kar, R; Mandal, D; Ghoshal, S P

    2015-11-01

    This paper presents an efficient approach to identifying stable and practically useful Hammerstein models, as well as an unstable nonlinear process along with its stable closed-loop counterpart, with the help of an evolutionary algorithm, the Colliding Bodies Optimization (CBO) algorithm. The performance measures of the CBO-based optimization approach, such as precision and accuracy, are justified by the minimum output mean square error (MSE), which signifies that the bias and variance in the output domain are also the least. It is also observed that optimizing the output MSE in the presence of outliers consistently results in very close estimates of the output parameters, which justifies the general applicability of the CBO algorithm to the system identification problem and establishes the practical usefulness of the applied approach. Optimum values of the MSEs, computational times and statistical measures of the MSEs are all found to be superior to those of other, similar stochastic-algorithm-based approaches reported in the recent literature, which establishes the robustness and efficiency of the applied CBO-based identification scheme. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
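
    A compact sketch of a CBO-style minimizer is given below, following the usual collision rules (masses inversely related to fitness, a restitution coefficient decaying over iterations); the Hammerstein-model output-MSE objective is replaced by a toy quadratic, and all constants are illustrative.

    ```python
    import numpy as np

    def cbo_minimize(f, bounds, n_pairs=10, iters=150, seed=0):
        """Minimal Colliding Bodies Optimization: the fitter half of the
        population stands still; lighter bodies collide into them and both
        are relocated using momentum/restitution collision rules."""
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(bounds, dtype=float).T
        dim, n = lo.size, 2 * n_pairs
        x = rng.uniform(lo, hi, size=(n, dim))
        for it in range(iters):
            fx = np.apply_along_axis(f, 1, x)
            order = np.argsort(fx)                   # best first
            x, fx = x[order], fx[order]
            mass = 1.0 / (fx - fx.min() + 1.0)       # heavier = fitter
            eps = 1.0 - it / iters                   # restitution decays to 0
            new_x = x.copy()
            for k in range(n_pairs):                 # stationary k, moving j
                j = k + n_pairs
                v = x[k] - x[j]                      # pre-collision velocity of j
                mk, mj = mass[k], mass[j]
                v_mov = (mj - eps * mk) * v / (mj + mk)
                v_sta = (mj + eps * mj) * v / (mj + mk)
                new_x[j] = x[k] + rng.random(dim) * v_mov
                new_x[k] = x[k] + rng.random(dim) * v_sta
            x = np.clip(new_x, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        return x[fx.argmin()], fx.min()

    # Toy quadratic standing in for the Hammerstein-model output MSE:
    print(cbo_minimize(lambda p: float(np.sum((p - 1.5) ** 2)), [(-5, 5)] * 3))
    ```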

  8. Rational design of aptazyme riboswitches for efficient control of gene expression in mammalian cells

    PubMed Central

    Zhong, Guocai; Wang, Haimin; Bailey, Charles C; Gao, Guangping; Farzan, Michael

    2016-01-01

    Efforts to control mammalian gene expression with ligand-responsive riboswitches have been hindered by lack of a general method for generating efficient switches in mammalian systems. Here we describe a rational-design approach that enables rapid development of efficient cis-acting aptazyme riboswitches. We identified communication-module characteristics associated with aptazyme functionality through analysis of a 32-aptazyme test panel. We then developed a scoring system that predicts an aptazyme’s activity by integrating three characteristics of communication-module bases: hydrogen bonding, base stacking, and distance to the enzymatic core. We validated the power and generality of this approach by designing aptazymes responsive to three distinct ligands, each with a markedly wider dynamic range than any previously reported. These aptazymes efficiently regulated adeno-associated virus (AAV)-vectored transgene expression in cultured mammalian cells and mice, highlighting one application of these broadly usable regulatory switches. Our approach enables efficient, protein-independent control of gene expression by a range of small molecules. DOI: http://dx.doi.org/10.7554/eLife.18858.001 PMID:27805569
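
    A deliberately simplified sketch of such a scoring rule is shown below; the weights and feature encodings are hypothetical placeholders, not the coefficients derived in the paper.

    ```python
    # Hypothetical weights: the paper's scoring system integrates hydrogen
    # bonding, base stacking, and distance to the enzymatic core, but the
    # coefficients below are illustrative only.
    W_HBOND, W_STACK, W_DIST = 1.0, 0.7, -0.4

    def aptazyme_score(bases):
        """Score a communication module; each base is a dict with a count of
        hydrogen bonds, a stacking-energy proxy, and its distance (in nt)
        to the ribozyme's enzymatic core."""
        return sum(W_HBOND * b["h_bonds"]
                   + W_STACK * b["stacking"]
                   + W_DIST * b["dist_to_core"]
                   for b in bases)

    module = [{"h_bonds": 2, "stacking": 1.5, "dist_to_core": 1},
              {"h_bonds": 3, "stacking": 0.8, "dist_to_core": 2}]
    print(aptazyme_score(module))  # higher score -> predicted switching activity
    ```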

  9. Amber light-emitting diode comprising a group III-nitride nanowire active region

    DOEpatents

    Wang, George T.; Li, Qiming; Wierer, Jr., Jonathan J.; Koleske, Daniel

    2014-07-22

    A temperature-stable (in color and efficiency) III-nitride-based amber (585 nm) light-emitting diode is based on a novel hybrid nanowire-planar structure. The arrays of GaN nanowires enable radial InGaN/GaN quantum-well LED structures with high indium content and high material quality. The high-efficiency, temperature-stable, phosphor-free direct yellow and red emitters enable high-efficiency white LEDs based on the RGYB color-mixing approach.

  10. How efficient is sliding-scale insulin therapy? Problems with a 'cookbook' approach in hospitalized patients.

    PubMed

    Katz, C M

    1991-04-01

    Sliding-scale insulin therapy is seldom the best way to treat hospitalized diabetic patients. In the few clinical situations in which it is appropriate, close attention to details and solidly based scientific principles is absolutely necessary. Well-organized alternative approaches to insulin therapy usually offer greater efficiency and effectiveness.

  11. Topochemical approach to efficiently produce main-chain poly(bile acid)s with high molecular weights.

    PubMed

    Li, Weina; Li, Xuesong; Zhu, Wei; Li, Changxu; Xu, Dan; Ju, Yong; Li, Guangtao

    2011-07-21

    Based on a topochemical approach, a strategy for efficiently producing main-chain poly(bile acid)s in the solid state was developed. This strategy allows for facile and scalable synthesis of main-chain poly(bile acid)s not only with high molecular weights, but also with quantitative conversions and yields.

  12. Improving Safety, Quality and Efficiency through the Management of Emerging Processes: The TenarisDalmine Experience

    ERIC Educational Resources Information Center

    Bonometti, Patrizia

    2012-01-01

    Purpose: The aim of this contribution is to describe a new complexity-science-based approach for improving safety, quality and efficiency and the way it was implemented by TenarisDalmine. Design/methodology/approach: This methodology is called "a safety-building community". It consists of a safety-behaviour social self-construction…

  13. Quantum entanglement helps in improving economic efficiency

    NASA Astrophysics Data System (ADS)

    Du, Jiangfeng; Ju, Chenyong; Li, Hui

    2005-02-01

    We propose an economic regulation approach based on quantum game theory for the government to reduce the abuses of oligopolistic competition. Theoretical analysis shows that this approach can help the government improve the economic efficiency of the oligopolistic market, and help prevent monopoly arising from incorrect information. These advantages are attributed entirely to quantum entanglement, a uniquely quantum mechanical feature.

  14. Implementation of megaprojects for the creation of tourist clusters in Russia based on the concept of energy efficiency and sustainable construction

    NASA Astrophysics Data System (ADS)

    Orlov, Alexandr K.

    2017-10-01

    The article deals with the application of the sustainable construction concept to the implementation of megaprojects for the development of tourist clusters using energy-saving technologies. The concept of sustainable construction includes the elements of green construction and energy management, as well as aspects of the economic efficiency of construction project implementation. A methodical approach to the implementation of megaprojects for the creation of tourist clusters in Russia based on the concept of energy efficiency and sustainable construction is substantiated. A conceptual approach to the evaluation of the ecological, social and economic components of the integral indicator of the effectiveness of a tourist-cluster development megaproject is provided. An algorithm for estimating the efficiency of innovative solutions in green construction is also considered.

  15. Efficient computation of PDF-based characteristics from diffusion MR signal.

    PubMed

    Assemlal, Haz-Edine; Tschumperlé, David; Brun, Luc

    2008-01-01

    We present a general method for the computation of PDF-based characteristics of the tissue micro-architecture in MR imaging. The approach relies on the approximation of the MR signal by a series expansion based on Spherical Harmonics and Laguerre-Gaussian functions, followed by a simple projection step that is efficiently done in a finite dimensional space. The resulting algorithm is generic, flexible and is able to compute a large set of useful characteristics of the local tissues structure. We illustrate the effectiveness of this approach by showing results on synthetic and real MR datasets acquired in a clinical time-frame.
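
    The sketch below illustrates the expansion-and-projection idea under stated assumptions: a Laguerre-Gaussian-times-spherical-harmonic design matrix (normalization omitted) is fit to toy signal samples by linear least squares, using `scipy.special` for the basis functions.

    ```python
    import numpy as np
    from scipy.special import sph_harm, genlaguerre

    def design_matrix(q, theta, phi, n_max=2, l_max=2):
        """Columns = Laguerre-Gaussian radial functions x real spherical
        harmonics, evaluated at the diffusion sampling points (q, theta, phi).
        Note scipy's sph_harm takes the azimuthal angle first."""
        cols = []
        for n in range(n_max + 1):
            radial = genlaguerre(n, 0.5)(q ** 2) * np.exp(-q ** 2 / 2)
            for l in range(0, l_max + 1, 2):          # even l: antipodal symmetry
                for m in range(-l, l + 1):
                    cols.append(radial * sph_harm(m, l, phi, theta).real)
        return np.column_stack(cols)

    rng = np.random.default_rng(1)
    q = rng.uniform(0.1, 2.0, 300)                     # radial q-space samples
    theta = np.arccos(rng.uniform(-1, 1, 300))         # polar angle
    phi = rng.uniform(0, 2 * np.pi, 300)               # azimuthal angle
    signal = np.exp(-q ** 2)                           # toy isotropic attenuation
    coeffs, *_ = np.linalg.lstsq(design_matrix(q, theta, phi), signal, rcond=None)
    # PDF-based characteristics then follow as linear functionals of `coeffs`.
    ```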

  16. Extraction efficiency and implications for absolute quantitation of propranolol in mouse brain, liver and kidney thin tissue sections using droplet-based liquid microjunction surface sampling-HPLC ESI-MS/MS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kertesz, Vilmos; Weiskittel, Taylor M.; Vavek, Marissa

    Currently, absolute quantitation aspects of droplet-based surface sampling for thin tissue analysis using a fully automated autosampler/HPLC-ESI-MS/MS system are not fully evaluated. Knowledge of extraction efficiency and its reproducibility is required to judge the potential of the method for absolute quantitation of analytes from thin tissue sections. Methods: Adjacent thin tissue sections of propranolol-dosed mouse brain (10-μm-thick), kidney (10-μm-thick) and liver (8-, 10-, 16- and 24-μm-thick) were obtained. Absolute concentration of propranolol was determined in tissue punches from serial sections using standard bulk tissue extraction protocols and subsequent HPLC separations and tandem mass spectrometric analysis. These values were used to determine propranolol extraction efficiency from the tissues with the droplet-based surface sampling approach. Results: Extraction efficiency of propranolol using 10-μm-thick brain, kidney and liver thin tissues using droplet-based surface sampling varied between ~45-63%. Extraction efficiency decreased from ~65% to ~36% with liver thickness increasing from 8 μm to 24 μm. Randomly selecting half of the samples as standards, precision and accuracy of propranolol concentrations obtained for the other half of samples as quality control metrics were determined. Resulting precision (±15%) and accuracy (±3%) values, respectively, were within acceptable limits. In conclusion, comparative quantitation of adjacent mouse thin tissue sections of different organs and of various thicknesses by droplet-based surface sampling and by bulk extraction of tissue punches showed that extraction efficiency was incomplete using the former method, and that it depended on the organ and tissue thickness. However, once extraction efficiency was determined and applied, the droplet-based approach provided the required quantitation accuracy and precision for assay validations. Furthermore, this means that once the extraction efficiency was calibrated for a given tissue type and drug, the droplet-based approach provides a non-labor-intensive and high-throughput means to acquire spatially resolved quantitative analysis of multiple samples of the same type.

  17. Extraction efficiency and implications for absolute quantitation of propranolol in mouse brain, liver and kidney thin tissue sections using droplet-based liquid microjunction surface sampling-HPLC ESI-MS/MS

    DOE PAGES

    Kertesz, Vilmos; Weiskittel, Taylor M.; Vavek, Marissa; ...

    2016-06-22

    Currently, absolute quantitation aspects of droplet-based surface sampling for thin tissue analysis using a fully automated autosampler/HPLC-ESI-MS/MS system are not fully evaluated. Knowledge of extraction efficiency and its reproducibility is required to judge the potential of the method for absolute quantitation of analytes from thin tissue sections. Methods: Adjacent thin tissue sections of propranolol-dosed mouse brain (10-μm-thick), kidney (10-μm-thick) and liver (8-, 10-, 16- and 24-μm-thick) were obtained. Absolute concentration of propranolol was determined in tissue punches from serial sections using standard bulk tissue extraction protocols and subsequent HPLC separations and tandem mass spectrometric analysis. These values were used to determine propranolol extraction efficiency from the tissues with the droplet-based surface sampling approach. Results: Extraction efficiency of propranolol using 10-μm-thick brain, kidney and liver thin tissues using droplet-based surface sampling varied between ~45-63%. Extraction efficiency decreased from ~65% to ~36% with liver thickness increasing from 8 μm to 24 μm. Randomly selecting half of the samples as standards, precision and accuracy of propranolol concentrations obtained for the other half of samples as quality control metrics were determined. Resulting precision (±15%) and accuracy (±3%) values, respectively, were within acceptable limits. In conclusion, comparative quantitation of adjacent mouse thin tissue sections of different organs and of various thicknesses by droplet-based surface sampling and by bulk extraction of tissue punches showed that extraction efficiency was incomplete using the former method, and that it depended on the organ and tissue thickness. However, once extraction efficiency was determined and applied, the droplet-based approach provided the required quantitation accuracy and precision for assay validations. Furthermore, this means that once the extraction efficiency was calibrated for a given tissue type and drug, the droplet-based approach provides a non-labor-intensive and high-throughput means to acquire spatially resolved quantitative analysis of multiple samples of the same type.

  18. 4D Flexible Atom-Pairs: An efficient probabilistic conformational space comparison for ligand-based virtual screening

    PubMed Central

    2011-01-01

    Background The performance of 3D-based virtual screening similarity functions is affected by the applied conformations of compounds. Therefore, the results of 3D approaches are often less robust than 2D approaches. The application of 3D methods on multiple conformer data sets normally reduces this weakness, but entails a significant computational overhead. Therefore, we developed a special conformational space encoding by means of Gaussian mixture models and a similarity function that operates on these models. The application of a model-based encoding allows an efficient comparison of the conformational space of compounds. Results Comparisons of our 4D flexible atom-pair approach with over 15 state-of-the-art 2D- and 3D-based virtual screening similarity functions on the 40 data sets of the Directory of Useful Decoys show a robust performance of our approach. Even 3D-based approaches that operate on multiple conformers yield inferior results. The 4D flexible atom-pair method achieves an averaged AUC value of 0.78 on the filtered Directory of Useful Decoys data sets. The best 2D- and 3D-based approaches of this study yield an AUC value of 0.74 and 0.72, respectively. As a result, the 4D flexible atom-pair approach achieves an average rank of 1.25 with respect to 15 other state-of-the-art similarity functions and four different evaluation metrics. Conclusions Our 4D method yields a robust performance on 40 pharmaceutically relevant targets. The conformational space encoding enables an efficient comparison of the conformational space. Therefore, the weakness of the 3D-based approaches on single conformations is circumvented. With over 100,000 similarity calculations on a single desktop CPU, the utilization of the 4D flexible atom-pair in real-world applications is feasible. PMID:21733172
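
    A minimal sketch of the model-based encoding idea, assuming a single atom pair and using scikit-learn's `GaussianMixture`: each compound's conformational distance distribution is encoded as a mixture, and a Monte-Carlo cross-likelihood serves as a stand-in similarity (the paper's actual similarity function operates directly on the mixture models).

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def encode_conformers(atom_pair_distances, n_components=3, seed=0):
        """Fit a 1D Gaussian mixture to one atom pair's distance distribution
        over all conformers -- a compact conformational-space model."""
        d = np.asarray(atom_pair_distances).reshape(-1, 1)
        return GaussianMixture(n_components, random_state=seed).fit(d)

    def similarity(gmm_a, gmm_b, n_samples=2000):
        """Symmetrised Monte-Carlo cross-likelihood: how plausible are samples
        from model A under model B, and vice versa (higher = more similar)."""
        xa, _ = gmm_a.sample(n_samples)
        xb, _ = gmm_b.sample(n_samples)
        return float(np.exp(0.5 * (gmm_b.score(xa) + gmm_a.score(xb))))

    rng = np.random.default_rng(0)
    gmm1 = encode_conformers(rng.normal(5.0, 0.4, 500))   # compound 1 distances
    gmm2 = encode_conformers(rng.normal(5.2, 0.5, 500))   # compound 2 distances
    print(similarity(gmm1, gmm2))
    ```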

  19. Accelerating Information Retrieval from Profile Hidden Markov Model Databases.

    PubMed

    Tamimi, Ahmad; Ashhab, Yaqoub; Tamimi, Hashem

    2016-01-01

    Profile Hidden Markov Model (Profile-HMM) is an efficient statistical approach to represent protein families. Currently, several databases maintain valuable protein sequence information as profile-HMMs. There is an increasing interest to improve the efficiency of searching Profile-HMM databases to detect sequence-profile or profile-profile homology. However, most efforts to enhance searching efficiency have been focusing on improving the alignment algorithms. Although the performance of these algorithms is fairly acceptable, the growing size of these databases, as well as the increasing demand for using batch query searching approach, are strong motivations that call for further enhancement of information retrieval from profile-HMM databases. This work presents a heuristic method to accelerate the current profile-HMM homology searching approaches. The method works by cluster-based remodeling of the database to reduce the search space, rather than focusing on the alignment algorithms. Using different clustering techniques, 4284 TIGRFAMs profiles were clustered based on their similarities. A representative for each cluster was assigned. To enhance sensitivity, we proposed an extended step that allows overlapping among clusters. A validation benchmark of 6000 randomly selected protein sequences was used to query the clustered profiles. To evaluate the efficiency of our approach, speed and recall values were measured and compared with the sequential search approach. Using hierarchical, k-means, and connected component clustering techniques followed by the extended overlapping step, we obtained an average reduction in time of 41%, and an average recall of 96%. Our results demonstrate that representation of profile-HMMs using a clustering-based approach can significantly accelerate data retrieval from profile-HMM databases.
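
    The sketch below mimics the cluster-based remodeling with plain feature vectors and Euclidean distance standing in for profile-HMM similarity scoring: cluster the database, compare the query to cluster representatives, then scan only the most promising clusters (the extended overlap step is approximated by keeping several clusters).

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def build_clustered_db(profile_vecs, n_clusters=50, seed=0):
        """Cluster profile feature vectors; the centroids act as the
        per-cluster representatives of the remodeled database."""
        return KMeans(n_clusters=n_clusters, random_state=seed,
                      n_init=10).fit(profile_vecs)

    def search(query, profile_vecs, km, top_clusters=3):
        """Two-stage search: rank cluster representatives first, then scan
        only the members of the best few clusters (reduced search space)."""
        d_rep = np.linalg.norm(km.cluster_centers_ - query, axis=1)
        keep = np.argsort(d_rep)[:top_clusters]     # overlap-step analogue
        idx = np.where(np.isin(km.labels_, keep))[0]
        d = np.linalg.norm(profile_vecs[idx] - query, axis=1)
        return idx[np.argsort(d)[:5]]               # top hits

    rng = np.random.default_rng(2)
    db = rng.standard_normal((4284, 32))            # e.g., TIGRFAMs-sized set
    km = build_clustered_db(db)
    print(search(rng.standard_normal(32), db, km))
    ```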

  20. Cognitive Load Theory vs. Constructivist Approaches: Which Best Leads to Efficient, Deep Learning?

    ERIC Educational Resources Information Center

    Vogel-Walcutt, J. J.; Gebrim, J. B.; Bowers, C.; Carper, T. M.; Nicholson, D.

    2011-01-01

    Computer-assisted learning, in the form of simulation-based training, is heavily focused upon by the military. Because computer-based learning offers highly portable, reusable, and cost-efficient training options, the military has dedicated significant resources to the investigation of instructional strategies that improve learning efficiency…

  1. SPHINX--an algorithm for taxonomic binning of metagenomic sequences.

    PubMed

    Mohammed, Monzoorul Haque; Ghosh, Tarini Shankar; Singh, Nitin Kumar; Mande, Sharmila S

    2011-01-01

    Compared with composition-based binning algorithms, the binning accuracy and specificity of alignment-based binning algorithms are significantly higher. However, being alignment-based, the latter class of algorithms requires an enormous amount of time and computing resources for binning huge metagenomic datasets. The motivation was to develop a binning approach that can analyze metagenomic datasets as rapidly as composition-based approaches, but nevertheless has the accuracy and specificity of alignment-based algorithms. This article describes a hybrid binning approach (SPHINX) that achieves high binning efficiency by utilizing the principles of both 'composition'- and 'alignment'-based binning algorithms. Validation results with simulated sequence datasets indicate that SPHINX is able to analyze metagenomic sequences as rapidly as composition-based algorithms. Furthermore, the binning efficiency (in terms of accuracy and specificity of assignments) of SPHINX is observed to be comparable with results obtained using alignment-based algorithms. A web server for the SPHINX algorithm is available at http://metagenomics.atc.tcs.com/SPHINX/.

  2. A Computationally-Efficient Inverse Approach to Probabilistic Strain-Based Damage Diagnosis

    NASA Technical Reports Server (NTRS)

    Warner, James E.; Hochhalter, Jacob D.; Leser, William P.; Leser, Patrick E.; Newman, John A

    2016-01-01

    This work presents a computationally-efficient inverse approach to probabilistic damage diagnosis. Given strain data at a limited number of measurement locations, Bayesian inference and Markov Chain Monte Carlo (MCMC) sampling are used to estimate probability distributions of the unknown location, size, and orientation of damage. Substantial computational speedup is obtained by replacing a three-dimensional finite element (FE) model with an efficient surrogate model. The approach is experimentally validated on cracked test specimens where full field strains are determined using digital image correlation (DIC). Access to full field DIC data allows for testing of different hypothetical sensor arrangements, facilitating the study of strain-based diagnosis effectiveness as the distance between damage and measurement locations increases. The ability of the framework to effectively perform both probabilistic damage localization and characterization in cracked plates is demonstrated and the impact of measurement location on uncertainty in the predictions is shown. Furthermore, the analysis time to produce these predictions is orders of magnitude less than a baseline Bayesian approach with the FE method by utilizing surrogate modeling and effective numerical sampling approaches.
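
    A minimal random-walk Metropolis sketch of the surrogate-accelerated idea: the expensive FE strain prediction is replaced by a cheap callable (here an invented analytic stand-in), and the sampler targets the posterior over hypothetical damage parameters.

    ```python
    import numpy as np

    def metropolis_with_surrogate(surrogate, strain_obs, sigma,
                                  n_steps=5000, step=0.05, seed=0):
        """Random-walk Metropolis over damage parameters; the expensive FE
        strain prediction is replaced by a cheap surrogate callable."""
        rng = np.random.default_rng(seed)
        theta = np.array([0.5, 0.5, 0.0])            # initial guess
        def log_post(t):                             # Gaussian likelihood, flat prior
            r = strain_obs - surrogate(t)
            return -0.5 * np.sum(r ** 2) / sigma ** 2
        lp, chain = log_post(theta), []
        for _ in range(n_steps):
            prop = theta + step * rng.standard_normal(theta.size)
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:  # accept/reject
                theta, lp = prop, lp_prop
            chain.append(theta)
        return np.array(chain)

    # Invented analytic surrogate mapping (location, size, angle) -> strains:
    surrogate = lambda t: np.array([t[0] + 0.3 * t[2], t[1] ** 2, t[0] * t[1]])
    truth = np.array([0.7, 0.4, 0.1])
    obs = surrogate(truth) + 0.01 * np.random.default_rng(1).standard_normal(3)
    samples = metropolis_with_surrogate(surrogate, obs, sigma=0.01)
    print(samples[2500:].mean(axis=0))               # posterior mean after burn-in
    ```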

  3. Internet-Based Approaches to Building Stakeholder Networks for Conservation and Natural Resource Management

    EPA Science Inventory

    Social network analysis (SNA) is based on a conceptual network representation of social interactions and is an invaluable tool for conservation professionals to increase collaboration, improve information flow, and increase efficiency. We present two approaches to constructing i...

  4. QUALITY MANAGEMENT DURING SELECTION OF TECHNOLOGIES EXAMPLE SITE MARCH AIR FORCE BASE, USA

    EPA Science Inventory

    This paper describes the remedial approach, organizational structure and key elements facilitating effective and efficient remediation of contaminated sites at March Air Force Base (AFB), California. The U.S. implementation and quality assurance approach to site remediation for ...

  5. Internet-Based Approaches to Building Stakeholder Networks for Conservation and Natural Resource Management.

    EPA Science Inventory

    Social network analysis (SNA) is based on a conceptual network representation of social interactions and is an invaluable tool for conservation professionals to increase collaboration, improve information flow, and increase efficiency. We present two approaches to constructing in...

  6. Image restoration for three-dimensional fluorescence microscopy using an orthonormal basis for efficient representation of depth-variant point-spread functions

    PubMed Central

    Patwary, Nurmohammed; Preza, Chrysanthe

    2015-01-01

    A depth-variant (DV) image restoration algorithm for wide-field fluorescence microscopy, using an orthonormal basis decomposition of DV point-spread functions (PSFs), is investigated in this study. The efficient PSF representation is based on a previously developed principal component analysis (PCA), which is computationally intensive. We present an approach developed to reduce the number of DV PSFs required for the PCA computation, thereby making the PCA-based approach computationally tractable for thick samples. Restoration results from both synthetic and experimental images are consistent and show that the proposed algorithm efficiently addresses depth-induced aberration using a small number of principal components. Comparison of the PCA-based algorithm with a previously developed strata-based DV restoration algorithm demonstrates that the proposed method improves accuracy by 50% while reducing the processing time by 64% using comparable computational resources. PMID:26504634
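
    A small numpy sketch of the PCA step: a stack of depth-variant PSFs is decomposed by SVD and re-approximated from a few principal components; the Gaussian PSF stack below is synthetic.

    ```python
    import numpy as np

    def psf_principal_components(psf_stack, n_keep=4):
        """PCA of depth-variant PSFs via SVD: each PSF (one per depth) is a
        row; a few principal components capture the depth variation."""
        X = psf_stack.reshape(psf_stack.shape[0], -1)
        mean = X.mean(axis=0)
        U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
        comps, scores = Vt[:n_keep], U[:, :n_keep] * s[:n_keep]
        approx = mean + scores @ comps               # low-rank DV-PSF model
        return comps, scores, approx.reshape(psf_stack.shape)

    # Toy stack: Gaussian PSFs whose width grows with depth.
    xx, yy = np.meshgrid(np.arange(33) - 16, np.arange(33) - 16)
    stack = np.array([np.exp(-(xx**2 + yy**2) / (2 * (2 + 0.3 * z) ** 2))
                      for z in range(20)])
    comps, scores, approx = psf_principal_components(stack)
    print(np.abs(stack - approx).max())              # small reconstruction error
    ```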

  7. The quantile regression approach to efficiency measurement: insights from Monte Carlo simulations.

    PubMed

    Liu, Chunping; Laporte, Audrey; Ferguson, Brian S

    2008-09-01

    In the health economics literature there is an ongoing debate over approaches used to estimate the efficiency of health systems at various levels, from the level of the individual hospital - or nursing home - up to that of the health system as a whole. The two most widely used approaches to evaluating the efficiency with which various units deliver care are non-parametric data envelopment analysis (DEA) and parametric stochastic frontier analysis (SFA). Productivity researchers tend to have very strong preferences over which methodology to use for efficiency estimation. In this paper, we use Monte Carlo simulation to compare the performance of DEA and SFA in terms of their ability to accurately estimate efficiency. We also evaluate quantile regression as a potential alternative approach. A Cobb-Douglas production function, random error terms and a technical inefficiency term with different distributions are used to calculate the observed output. The results, based on these experiments, suggest that neither DEA nor SFA can be regarded as clearly dominant, and that, depending on the quantile estimated, the quantile regression approach may be a useful addition to the armamentarium of methods for estimating technical efficiency.
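
    The sketch below reproduces the flavor of the Monte Carlo setup: simulate a Cobb-Douglas frontier with one-sided inefficiency, then estimate the frontier by quantile regression with statsmodels' `QuantReg`; the chosen quantile and parameters are illustrative.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    n = 500
    x = rng.uniform(1, 10, n)                        # single input
    u = np.abs(rng.normal(0, 0.3, n))                # technical inefficiency >= 0
    v = rng.normal(0, 0.1, n)                        # random noise
    log_y = 1.0 + 0.6 * np.log(x) + v - u            # Cobb-Douglas frontier

    X = sm.add_constant(np.log(x))
    # A high quantile approximates the production frontier; unit-level
    # efficiency is then the distance below the fitted quantile line.
    fit = sm.QuantReg(log_y, X).fit(q=0.95)
    frontier = fit.predict(X)
    efficiency = np.exp(log_y - frontier).clip(max=1.0)
    print(fit.params, efficiency.mean())
    ```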

  8. An Econometric Approach to Evaluate Navy Advertising Efficiency.

    DTIC Science & Technology

    1996-03-01

    This thesis uses an econometric approach to systematically and comprehensively analyze Navy advertising and recruiting data to determine advertising cost efficiency in the Navy recruiting process. Current recruiting and advertising cost data are merged into an appropriate database and evaluated using multiple regression techniques to assess the relationships between Navy advertising expenditures and recruit contracts attained.

  9. Hamiltonian Monte Carlo acceleration using surrogate functions with random bases.

    PubMed

    Zhang, Cheng; Shahbaba, Babak; Zhao, Hongkai

    2017-11-01

    For big data analysis, high computational cost for Bayesian methods often limits their applications in practice. In recent years, there have been many attempts to improve computational efficiency of Bayesian inference. Here we propose an efficient and scalable computational technique for a state-of-the-art Markov chain Monte Carlo method, namely, Hamiltonian Monte Carlo. The key idea is to explore and exploit the structure and regularity in parameter space for the underlying probabilistic model to construct an effective approximation of its geometric properties. To this end, we build a surrogate function to approximate the target distribution using properly chosen random bases and an efficient optimization process. The resulting method provides a flexible, scalable, and efficient sampling algorithm, which converges to the correct target distribution. We show that by choosing the basis functions and optimization process differently, our method can be related to other approaches for the construction of surrogate functions such as generalized additive models or Gaussian process models. Experiments based on simulated and real data show that our approach leads to substantially more efficient sampling algorithms compared to existing state-of-the-art methods.

  10. Hierarchical screening for multiple mental disorders.

    PubMed

    Batterham, Philip J; Calear, Alison L; Sunderland, Matthew; Carragher, Natacha; Christensen, Helen; Mackinnon, Andrew J

    2013-10-01

    There is a need for brief, accurate screening when assessing multiple mental disorders. Two-stage hierarchical screening, consisting of brief pre-screening followed by a battery of disorder-specific scales for those who meet diagnostic criteria, may increase the efficiency of screening without sacrificing precision. This study tested whether more efficient screening could be achieved using two-stage hierarchical screening rather than by administering multiple separate tests. Two Australian adult samples (N=1990) with high rates of psychopathology were recruited using Facebook advertising to examine four methods of hierarchical screening for four mental disorders: major depressive disorder, generalised anxiety disorder, panic disorder and social phobia. Using K6 scores to determine whether full screening was required did not increase screening efficiency. However, pre-screening based on two decision tree approaches or item gating led to considerable reductions in the mean number of items presented per disorder screened, with estimated item reductions of up to 54%. The sensitivity of these hierarchical methods approached 100% relative to the full screening battery. Further testing of the hierarchical screening approach based on clinical criteria and in other samples is warranted. The results demonstrate that a two-phase hierarchical approach to screening multiple mental disorders leads to considerable efficiency gains without reducing accuracy. Screening programs should take advantage of pre-screeners based on gating items or decision trees to reduce the burden on respondents. © 2013 Elsevier B.V. All rights reserved.

  11. Efficiency of Health Care Production in Low-Resource Settings: A Monte-Carlo Simulation to Compare the Performance of Data Envelopment Analysis, Stochastic Distance Functions, and an Ensemble Model

    PubMed Central

    Giorgio, Laura Di; Flaxman, Abraham D.; Moses, Mark W.; Fullman, Nancy; Hanlon, Michael; Conner, Ruben O.; Wollum, Alexandra; Murray, Christopher J. L.

    2016-01-01

    Low-resource countries can greatly benefit from even small increases in efficiency of health service provision, supporting a strong case to measure and pursue efficiency improvement in low- and middle-income countries (LMICs). However, the knowledge base concerning efficiency measurement remains scarce for these contexts. This study shows that current estimation approaches may not be well suited to measure technical efficiency in LMICs and offers an alternative approach for efficiency measurement in these settings. We developed a simulation environment which reproduces the characteristics of health service production in LMICs, and evaluated the performance of Data Envelopment Analysis (DEA) and Stochastic Distance Function (SDF) for assessing efficiency. We found that an ensemble approach (ENS) combining efficiency estimates from a restricted version of DEA (rDEA) and restricted SDF (rSDF) is the preferable method across a range of scenarios. This is the first study to analyze efficiency measurement in a simulation setting for LMICs. Our findings aim to heighten the validity and reliability of efficiency analyses in LMICs, and thus inform policy dialogues about improving the efficiency of health service production in these settings. PMID:26812685

  12. QUALITY MANAGEMENT DURING SELECTION OF TECHNOLOGIES; EXAMPLE SITE MARCH AIR FORCE BASE, USA

    EPA Science Inventory

    This paper describes the remedial approach, organizational structure and key elements facilitating effective and efficient remediation of contaminated sites at March Air Force Base (AFB), California. The U.S. implementation and quality assurance approach to site remediation for a...

  13. Low cost and efficient kurtosis-based deflationary ICA method: application to MRS sources separation problem.

    PubMed

    Saleh, M; Karfoul, A; Kachenoura, A; Senhadji, L; Albera, L

    2016-08-01

    Improving the execution time and the numerical complexity of the well-known kurtosis-based maximization method, RobustICA, is investigated in this paper. A Newton-based scheme is proposed and compared to the conventional RobustICA method. A new implementation based on the nonlinear Conjugate Gradient method is also investigated. Regarding the Newton approach, an exact computation of the Hessian of the considered cost function is provided. The proposed approaches and the considered implementations inherit the global plane search of the initial RobustICA method, for which a better convergence speed for a given direction is still guaranteed. Numerical results on Magnetic Resonance Spectroscopy (MRS) source separation show the efficiency of the proposed approaches, notably the quasi-Newton one using the BFGS method.
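
    A minimal numpy sketch of deflationary kurtosis-based ICA is given below (fixed-point updates on whitened data with Gram-Schmidt deflation); the two toy sources merely stand in for MRS spectra, and neither the global plane search nor the Newton/conjugate-gradient variants discussed above are implemented.

    ```python
    import numpy as np

    def whiten(X):
        """Center and whiten observations (rows = channels)."""
        Xc = X - X.mean(axis=1, keepdims=True)
        d, E = np.linalg.eigh(Xc @ Xc.T / Xc.shape[1])
        return (E / np.sqrt(d)) @ E.T @ Xc

    def kurtosis_ica(X, n_iter=200, seed=0):
        """Deflationary kurtosis maximization: extract sources one by one,
        projecting out previously found directions (Gram-Schmidt)."""
        Z = whiten(X)
        n = Z.shape[0]
        rng = np.random.default_rng(seed)
        W = np.zeros((n, n))
        for k in range(n):
            w = rng.standard_normal(n)
            for _ in range(n_iter):
                y = w @ Z
                # Kurtosis fixed point for whitened data: E[z (w.z)^3] - 3w
                g = (Z * y ** 3).mean(axis=1) - 3 * w
                w = g - W[:k].T @ (W[:k] @ g)        # deflate found directions
                w /= np.linalg.norm(w)
            W[k] = w
        return W @ Z, W

    t = np.linspace(0, 8 * np.pi, 4000)
    S = np.vstack([np.sin(2 * t), np.sign(np.cos(5 * t))])  # toy sources
    X = np.array([[0.7, 0.3], [0.4, 0.6]]) @ S              # mixed channels
    recovered, W = kurtosis_ica(X)   # rows match sources up to order and sign
    ```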

  14. Design synthesis and optimization of permanent magnet synchronous machines based on computationally-efficient finite element analysis

    NASA Astrophysics Data System (ADS)

    Sizov, Gennadi Y.

    In this dissertation, a model-based multi-objective optimal design of permanent magnet ac machines, supplied by sine-wave current regulated drives, is developed and implemented. The design procedure uses an efficient electromagnetic finite element-based solver to accurately model nonlinear material properties and complex geometric shapes associated with magnetic circuit design. Application of an electromagnetic finite element-based solver allows for accurate computation of intricate performance parameters and characteristics. The first contribution of this dissertation is the development of a rapid computational method that allows accurate and efficient exploration of large multi-dimensional design spaces in search of optimum design(s). The computationally efficient finite element-based approach developed in this work provides a framework of tools that allow rapid analysis of synchronous electric machines operating under steady-state conditions. In the developed modeling approach, major steady-state performance parameters such as, winding flux linkages and voltages, average, cogging and ripple torques, stator core flux densities, core losses, efficiencies and saturated machine winding inductances, are calculated with minimum computational effort. In addition, the method includes means for rapid estimation of distributed stator forces and three-dimensional effects of stator and/or rotor skew on the performance of the machine. The second contribution of this dissertation is the development of the design synthesis and optimization method based on a differential evolution algorithm. The approach relies on the developed finite element-based modeling method for electromagnetic analysis and is able to tackle large-scale multi-objective design problems using modest computational resources. Overall, computational time savings of up to two orders of magnitude are achievable, when compared to current and prevalent state-of-the-art methods. These computational savings allow one to expand the optimization problem to achieve more complex and comprehensive design objectives. The method is used in the design process of several interior permanent magnet industrial motors. The presented case studies demonstrate that the developed finite element-based approach practically eliminates the need for using less accurate analytical and lumped parameter equivalent circuit models for electric machine design optimization. The design process and experimental validation of the case-study machines are detailed in the dissertation.
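
    The sketch below shows only the outer optimization loop with SciPy's `differential_evolution`; the finite-element evaluation is replaced by an invented analytic cost over three hypothetical geometric variables, so only the design-synthesis pattern is illustrated.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    # Hypothetical stand-in objective: in the dissertation each evaluation is
    # a computationally efficient finite-element solve; here a cheap analytic
    # function maps (magnet thickness, slot width, stack length) to a scalar
    # cost trading off loss against a torque-ripple penalty.
    def machine_cost(p):
        magnet, slot, stack = p
        loss = (magnet - 4.0) ** 2 + 0.5 * (slot - 8.0) ** 2
        ripple = 0.2 * np.sin(3 * magnet) ** 2 + 0.1 / stack
        return loss + 10.0 * ripple

    bounds = [(2.0, 6.0), (5.0, 12.0), (40.0, 120.0)]   # mm, mm, mm
    result = differential_evolution(machine_cost, bounds, seed=0, tol=1e-8)
    print(result.x, result.fun)
    ```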

  15. Efficient anomalous reflection through near-field interactions in metasurfaces

    NASA Astrophysics Data System (ADS)

    Chalabi, H.; Ra'di, Y.; Sounas, D. L.; Alù, A.

    2017-08-01

    Gradient metasurfaces have been extensively used in the past few years for advanced wave manipulation over a thin surface. These metasurfaces have been mostly designed based on the generalized laws of reflection and refraction. However, it was recently revealed that metasurfaces based on this approach tend to suffer from inefficiencies and complex design requirements. We have recently proposed a different approach to the problem of efficient beam steering using a surface, based on bianisotropic particles in a periodic array. Here, we show highly efficient reflective metasurfaces formed by pairs of isotropic dielectric rods, which can offer asymmetrical scattering of normally incident beams with unitary efficiency. Our theory shows that moderately broadband anomalous reflection can be achieved with suitably designed periodic arrays of isotropic nanoparticles. We also demonstrate practical designs using TiO2 cylindrical nanorods to deflect normally incident light toward a desired direction. The proposed structures may pave the way to a broader range of light management opportunities, with applications in energy harvesting, signaling, and communications.

  16. Displacement Based Multilevel Structural Optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; Striz, A. G.

    1996-01-01

    In the complex environment of true multidisciplinary design optimization (MDO), efficiency is one of the most desirable attributes of any approach. In the present research, a new and highly efficient methodology for the MDO subset of structural optimization is proposed and detailed, i.e., for the weight minimization of a given structure under size, strength, and displacement constraints. Specifically, finite element based multilevel optimization of structures is performed. In the system level optimization, the design variables are the coefficients of assumed polynomially based global displacement functions, and the load unbalance resulting from the solution of the global stiffness equations is minimized. In the subsystems level optimizations, the weight of each element is minimized under the action of stress constraints, with the cross sectional dimensions as design variables. The approach is expected to prove very efficient since the design task is broken down into a large number of small and efficient subtasks, each with a small number of variables, which are amenable to parallel computing.

  17. Comparison and combination of "direct" and fragment based local correlation methods: Cluster in molecules and domain based local pair natural orbital perturbation and coupled cluster theories

    NASA Astrophysics Data System (ADS)

    Guo, Yang; Becker, Ute; Neese, Frank

    2018-03-01

    Local correlation theories have been developed in two main flavors: (1) "direct" local correlation methods apply local approximation to the canonical equations and (2) fragment based methods reconstruct the correlation energy from a series of smaller calculations on subsystems. The present work serves two purposes. First, we investigate the relative efficiencies of the two approaches using the domain-based local pair natural orbital (DLPNO) approach as the "direct" method and the cluster in molecule (CIM) approach as the fragment based approach. Both approaches are applied in conjunction with second-order many-body perturbation theory (MP2) as well as coupled-cluster theory with single-, double- and perturbative triple excitations [CCSD(T)]. Second, we have investigated the possible merits of combining the two approaches by performing CIM calculations with DLPNO methods serving as the method of choice for performing the subsystem calculations. Our cluster-in-molecule approach is closely related to but slightly deviates from approaches in the literature since we have avoided real space cutoffs. Moreover, the neglected distant pair correlations in the previous CIM approach are considered approximately. Six very large molecules (503-2380 atoms) were studied. At both MP2 and CCSD(T) levels of theory, the CIM and DLPNO methods show similar efficiency. However, DLPNO methods are more accurate for 3-dimensional systems. While we have found only little incentive for the combination of CIM with DLPNO-MP2, the situation is different for CIM-DLPNO-CCSD(T). This combination is attractive because (1) the better parallelization opportunities offered by CIM; (2) the methodology is less memory intensive than the genuine DLPNO-CCSD(T) method and, hence, allows for large calculations on more modest hardware; and (3) the methodology is applicable and efficient in the frequently met cases, where the largest subsystem calculation is too large for the canonical CCSD(T) method.

  18. Semi-automating the manual literature search for systematic reviews increases efficiency.

    PubMed

    Chapman, Andrea L; Morgan, Laura C; Gartlehner, Gerald

    2010-03-01

    To minimise retrieval bias, manual literature searches are a key part of the search process of any systematic review. Considering the need for accurate information, valid results of the manual literature search are essential to ensure scientific standards; likewise, efficient approaches that minimise the amount of personnel time required to conduct a manual literature search are of great interest. The objective of this project was to determine the validity and efficiency of a new manual search method that utilises the Scopus database. We used the traditional manual search approach as the gold standard to determine the validity and efficiency of the proposed Scopus method. Outcome measures included completeness of article detection and personnel time involved. Using both methods independently, we compared the results based on accuracy of the results (validity) and time spent conducting the search (efficiency). Regarding accuracy, the Scopus method identified the same studies as the traditional approach, indicating its validity. In terms of efficiency, using Scopus led to a time saving of 62.5% compared with the traditional approach (3 h versus 8 h). The Scopus method can significantly improve the efficiency of manual searches and thus of systematic reviews.

  19. Shapley value-based multi-objective data envelopment analysis application for assessing academic efficiency of university departments

    NASA Astrophysics Data System (ADS)

    Abing, Stephen Lloyd N.; Barton, Mercie Grace L.; Dumdum, Michael Gerard M.; Bongo, Miriam F.; Ocampo, Lanndon A.

    2018-02-01

    This paper adopts a modified approach of data envelopment analysis (DEA) to measure the academic efficiency of university departments. In real-world case studies, conventional DEA models often identify too many decision-making units (DMUs) as efficient. This occurs when the number of DMUs under evaluation is not large enough compared to the total number of decision variables. To overcome this limitation and reduce the number of decision variables, multi-objective data envelopment analysis (MODEA) approach previously presented in the literature is applied. The MODEA approach applies Shapley value as a cooperative game to determine the appropriate weights and efficiency score of each category of inputs. To illustrate the performance of the adopted approach, a case study is conducted in a university in the Philippines. The input variables are academic staff, non-academic staff, classrooms, laboratories, research grants, and department expenditures, while the output variables are the number of graduates and publications. The results of the case study revealed that all DMUs are inefficient. DMUs with efficiency scores close to the ideal efficiency score may be emulated by other DMUs with least efficiency scores.
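
    A self-contained sketch of the exact Shapley value computation over input-category coalitions follows; the coalition worths below are invented placeholders for the category-wise DEA efficiencies used in the paper.

    ```python
    import math
    from itertools import combinations

    def shapley_values(players, value):
        """Exact Shapley values: value(S) maps a frozenset of players (input
        categories) to the worth of that coalition, e.g., a DEA efficiency."""
        n = len(players)
        phi = {}
        for i in players:
            others = [p for p in players if p != i]
            total = 0.0
            for r in range(n):
                for S in combinations(others, r):
                    S = frozenset(S)
                    w = (math.factorial(len(S)) * math.factorial(n - len(S) - 1)
                         / math.factorial(n))
                    total += w * (value(S | {i}) - value(S))
            phi[i] = total
        return phi

    # Hypothetical coalition worths for three input categories of a department.
    worth = {frozenset(): 0.0, frozenset({"staff"}): 0.4, frozenset({"rooms"}): 0.3,
             frozenset({"grants"}): 0.2, frozenset({"staff", "rooms"}): 0.6,
             frozenset({"staff", "grants"}): 0.7, frozenset({"rooms", "grants"}): 0.4,
             frozenset({"staff", "rooms", "grants"}): 1.0}
    print(shapley_values(["staff", "rooms", "grants"], worth.__getitem__))
    ```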

  20. Learning Efficiency of Two ICT-Based Instructional Strategies in Greek Sheep Farmers

    ERIC Educational Resources Information Center

    Bellos, Georgios; Mikropoulos, Tassos A.; Deligeorgis, Stylianos; Kominakis, Antonis

    2016-01-01

    Purpose: The objective of the present study was to compare the learning efficiency of two information and communications technology (ICT)-based instructional strategies (multimedia presentation (MP) and concept mapping) in a sample (n = 187) of Greek sheep farmers operating mainly in Western Greece. Design/methodology/approach: In total, 15…

  1. Efficiency-Based Funding for Public Four-Year Colleges and Universities

    ERIC Educational Resources Information Center

    Sexton, Thomas R.; Comunale, Christie L.; Gara, Stephen C.

    2012-01-01

    We propose an efficiency-based mechanism for state funding of public colleges and universities using data envelopment analysis. We describe the philosophy and the mathematics that underlie the approach and apply\\break the proposed model to data from 362 U.S. public four-year colleges and universities. The model provides incentives to institution…

  2. Improved regional-scale Brazilian cropping systems' mapping based on a semi-automatic object-based clustering approach

    NASA Astrophysics Data System (ADS)

    Bellón, Beatriz; Bégué, Agnès; Lo Seen, Danny; Lebourgeois, Valentine; Evangelista, Balbino Antônio; Simões, Margareth; Demonte Ferraz, Rodrigo Peçanha

    2018-06-01

    Cropping systems' maps at fine scale over large areas provide key information for further agricultural production and environmental impact assessments, and thus represent a valuable tool for effective land-use planning. There is, therefore, a growing interest in mapping cropping systems in an operational manner over large areas, and remote sensing approaches based on vegetation index time series analysis have proven to be an efficient tool. However, supervised pixel-based approaches are commonly adopted, requiring resource-consuming field campaigns to gather training data. In this paper, we present a new object-based unsupervised classification approach tested on an annual MODIS 16-day composite Normalized Difference Vegetation Index time series and a Landsat 8 mosaic of the State of Tocantins, Brazil, for the 2014-2015 growing season. Two variants of the approach are compared: a hyperclustering approach, and a landscape-clustering approach involving a previous stratification of the study area into landscape units on which the clustering is then performed. The main cropping systems of Tocantins, characterized by the crop types and cropping patterns, were efficiently mapped with the landscape-clustering approach. Results show that stratification prior to clustering significantly improves the classification accuracies for underrepresented and sparsely distributed cropping systems. This study illustrates the potential of unsupervised classification for large area cropping systems' mapping and contributes to the development of generic tools for supporting large-scale agricultural monitoring across regions.
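
    A minimal scikit-learn sketch of the landscape-clustering variant: objects' NDVI time series are clustered separately within each landscape unit (stratum); the data and unit names are synthetic.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def landscape_clustering(ndvi_series, landscape_unit, k=6, seed=0):
        """Cluster objects' annual NDVI time series separately within each
        landscape unit (stratum); cluster ids are local to each stratum."""
        labels = np.empty(len(ndvi_series), dtype=object)
        for unit in np.unique(landscape_unit):
            idx = np.where(landscape_unit == unit)[0]
            km = KMeans(n_clusters=min(k, len(idx)), random_state=seed, n_init=10)
            for i, c in zip(idx, km.fit_predict(ndvi_series[idx])):
                labels[i] = f"{unit}-{c}"
        return labels

    rng = np.random.default_rng(4)
    series = rng.random((1000, 23))          # 16-day composites over one season
    units = rng.choice(["cerrado", "floodplain", "plateau"], 1000)
    print(landscape_clustering(series, units)[:5])
    ```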

  3. Efficient matrix approach to optical wave propagation and Linear Canonical Transforms.

    PubMed

    Shakir, Sami A; Fried, David L; Pease, Edwin A; Brennan, Terry J; Dolash, Thomas M

    2015-10-05

    The Fresnel diffraction integral form of optical wave propagation and the more general Linear Canonical Transforms (LCT) are cast into a matrix transformation form. Taking advantage of recent efficient matrix multiply algorithms, this approach promises an efficient computational and analytical tool that is competitive with FFT-based methods but offers better behavior in terms of aliasing and transparent boundary conditions, and allows the numbers of sampling points and the computational window sizes of the input and output planes to be chosen independently. This flexibility makes the method significantly faster than FFT-based propagators when only a single point, as in Strehl metrics, or a limited number of points, as in power-in-the-bucket metrics, are needed in the output observation plane.
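
    The sketch below casts 1D Fresnel propagation as a dense matrix applied to the input field, with independently chosen input and output grids; for a single-point metric (e.g., on-axis Strehl), only one row of the matrix would be needed.

    ```python
    import numpy as np

    def fresnel_matrix(x_in, x_out, wavelength, z):
        """1D Fresnel diffraction cast as a dense matrix: u_out = K @ u_in.
        Input/output grids may differ in extent and sample count."""
        k = 2 * np.pi / wavelength
        dx_in = x_in[1] - x_in[0]
        pref = np.exp(1j * k * z) / np.sqrt(1j * wavelength * z)
        return pref * np.exp(1j * k / (2 * z)
                             * (x_out[:, None] - x_in[None, :]) ** 2) * dx_in

    wavelength, z = 1e-6, 0.5
    x_in = np.linspace(-2e-3, 2e-3, 1024)
    x_out = np.linspace(-5e-3, 5e-3, 256)            # independent output window
    u_in = (np.abs(x_in) < 5e-4).astype(complex)     # slit aperture
    u_out = fresnel_matrix(x_in, x_out, wavelength, z) @ u_in
    print(np.abs(u_out).max())
    ```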

  4. Probe molecules (PrM) approach in adverse outcome pathway (AOP) based high throughput screening (HTS): in vivo discovery for developing in vitro target methods

    EPA Science Inventory

    Efficient and accurate adverse outcome pathway (AOP) based high-throughput screening (HTS) methods use a systems biology based approach to computationally model in vitro cellular and molecular data for rapid chemical prioritization; however, not all HTS assays are grounded by rel...

  5. Energy Efficient Image/Video Data Transmission on Commercial Multi-Core Processors

    PubMed Central

    Lee, Sungju; Kim, Heegon; Chung, Yongwha; Park, Daihee

    2012-01-01

    In transmitting image/video data over Video Sensor Networks (VSNs), energy consumption must be minimized while maintaining high image/video quality. Although image/video compression is well known for its efficiency and usefulness in VSNs, the excessive costs associated with encoding computation and complexity still hinder its adoption for practical use. However, it is anticipated that high-performance handheld multi-core devices will be used as VSN processing nodes in the near future. In this paper, we propose a way to improve the energy efficiency of image and video compression with multi-core processors while maintaining the image/video quality. We improve the compression efficiency at the algorithmic level or derive the optimal parameters for the combination of a machine and compression based on the tradeoff between the energy consumption and the image/video quality. Based on experimental results, we confirm that the proposed approach can improve the energy efficiency of the straightforward approach by a factor of 2∼5 without compromising image/video quality. PMID:23202181

  6. Fast globally optimal segmentation of cells in fluorescence microscopy images.

    PubMed

    Bergeest, Jan-Philip; Rohr, Karl

    2011-01-01

    Accurate and efficient segmentation of cells in fluorescence microscopy images is of central importance for the quantification of protein expression in high-throughput screening applications. We propose a new approach for segmenting cell nuclei which is based on active contours and convex energy functionals. Compared to previous work, our approach determines the global solution. Thus, the approach does not suffer from local minima and the segmentation result does not depend on the initialization. We also suggest a numeric approach for efficiently computing the solution. The performance of our approach has been evaluated using fluorescence microscopy images of different cell types. We have also performed a quantitative comparison with previous segmentation approaches.

  7. Evaluation of particle-based flow characteristics using novel Eulerian indices

    NASA Astrophysics Data System (ADS)

    Cho, Youngmoon; Kang, Seongwon

    2017-11-01

    The main objective of this study is to evaluate flow characteristics in complex particle-laden flows efficiently using novel Eulerian indices. For flows with a large number of particles, a Lagrangian approach leads to accurate yet inefficient prediction in many engineering problems. We propose a technique based on Eulerian transport equation and ensemble-averaged particle properties, which enables efficient evaluation of various particle-based flow characteristics such as the residence time, accumulated travel distance, mean radial force, etc. As a verification study, we compare the developed Eulerian indices with those using Lagrangian approaches for laminar flows with and without a swirling motion and density ratio. The results show satisfactory agreement between two approaches. The accumulated travel distance is modified to analyze flow motions inside IC engines and, when applied to flow bench cases, it can predict swirling and tumbling motions successfully. For flows inside a cyclone separator, the mean radial force is applied to predict the separation of particles and is shown to have a high correlation to the separation efficiency for various working conditions. In conclusion, the proposed Eulerian indices are shown to be useful tools to analyze complex particle-based flow characteristics.

  8. Ergonomics and simulation-based approach in improving facility layout

    NASA Astrophysics Data System (ADS)

    Abad, Jocelyn D.

    2018-02-01

    The use of simulation-based techniques in facility layout has been a common choice in industry due to their convenience and efficient generation of results. Nevertheless, the solutions generated are not capable of addressing delays due to workers' health and safety, which significantly impact overall operational efficiency. It is, therefore, critical to incorporate ergonomics in facility design. In this study, workstation analysis was incorporated into Promodel simulation to improve the facility layout of a garment manufacturing facility. To test the effectiveness of the method, existing and improved facility designs were measured using comprehensive risk level, efficiency, and productivity. Results indicated that the improved facility layout generated a decrease in comprehensive risk level and rapid upper limb assessment score, and an increase of 78% in efficiency and 194% in productivity compared to the existing design, thus proving that the approach is effective in attaining overall facility design improvement.

  9. Spectrum splitting using multi-layer dielectric meta-surfaces for efficient solar energy harvesting

    NASA Astrophysics Data System (ADS)

    Yao, Yuhan; Liu, He; Wu, Wei

    2014-06-01

    We designed a high-efficiency dispersive mirror based on multi-layer dielectric meta-surfaces. By replacing the secondary mirror of a dome solar concentrator with this dispersive mirror, the solar concentrator can be converted into a spectrum-splitting photovoltaic system with higher energy harvesting efficiency and potentially lower cost. The meta-surfaces consist of high-index contrast gratings (HCG). The structures and parameters of the dispersive mirror (i.e. stacked HCGs) are optimized based on finite-difference time-domain and rigorous coupled-wave analysis methods. Our numerical study shows that the dispersive mirror can direct light of different wavelengths into different angles across the entire solar spectrum while maintaining very low energy loss. Our approach will not only improve the energy harvesting efficiency, but also lower the cost by using single-junction cells instead of multi-layer tandem solar cells. Moreover, this approach causes minimal disruption to existing solar concentrator infrastructure.

  10. Optimal auxiliary-covariate-based two-phase sampling design for semiparametric efficient estimation of a mean or mean difference, with application to clinical trials.

    PubMed

    Gilbert, Peter B; Yu, Xuesong; Rotnitzky, Andrea

    2014-03-15

    To address the objective in a clinical trial to estimate the mean or mean difference of an expensive endpoint Y, one approach employs a two-phase sampling design, wherein inexpensive auxiliary variables W predictive of Y are measured in everyone, Y is measured in a random sample, and the semiparametric efficient estimator is applied. This approach is made efficient by specifying the phase two selection probabilities as optimal functions of the auxiliary variables and measurement costs. While this approach is familiar to survey samplers, it apparently has seldom been used in clinical trials, and several novel results practicable for clinical trials are developed. We perform simulations to identify settings where the optimal approach significantly improves efficiency compared to approaches in current practice. We provide proofs and R code. The optimality results are developed to design an HIV vaccine trial, with objective to compare the mean 'importance-weighted' breadth (Y) of the T-cell response between randomized vaccine groups. The trial collects an auxiliary response (W) highly predictive of Y and measures Y in the optimal subset. We show that the optimal design-estimation approach can confer anywhere between absent and large efficiency gain (up to 24 % in the examples) compared to the approach with the same efficient estimator but simple random sampling, where greater variability in the cost-standardized conditional variance of Y given W yields greater efficiency gains. Accurate estimation of E[Y | W] is important for realizing the efficiency gain, which is aided by an ample phase two sample and by using a robust fitting method. Copyright © 2013 John Wiley & Sons, Ltd.
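
    The sketch below illustrates the general flavor with a Neyman-type allocation (selection probability proportional to the square root of the cost-standardized conditional variance of Y given W) and a Hajek inverse-probability-weighted mean; the paper derives the exactly optimal functions, and the conditional variance is assumed known here.

    ```python
    import numpy as np

    def optimal_probabilities(var_y_given_w, cost, budget_fraction=0.3):
        """Neyman-type allocation: sample Y with probability proportional to
        sqrt(conditional variance / cost), scaled to the phase-two budget."""
        p = np.sqrt(var_y_given_w / cost)
        p *= budget_fraction * len(p) / p.sum()
        return np.clip(p, 0.01, 1.0)

    rng = np.random.default_rng(5)
    n = 2000
    w = rng.normal(size=n)                             # inexpensive auxiliary
    y = 2 + w + rng.normal(0, 0.5 + 0.4 * np.abs(w))   # expensive endpoint
    var_hat = (0.5 + 0.4 * np.abs(w)) ** 2             # assumed known for the sketch
    pi = optimal_probabilities(var_hat, cost=1.0)
    R = rng.random(n) < pi                             # phase-two indicator
    mu_hat = np.sum(R * y / pi) / np.sum(R / pi)       # Hajek (IPW) estimator
    print(mu_hat, y.mean())
    ```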

  11. Optimal Auxiliary-Covariate Based Two-Phase Sampling Design for Semiparametric Efficient Estimation of a Mean or Mean Difference, with Application to Clinical Trials

    PubMed Central

    Gilbert, Peter B.; Yu, Xuesong; Rotnitzky, Andrea

    2014-01-01

    To address the objective in a clinical trial to estimate the mean or mean difference of an expensive endpoint Y, one approach employs a two-phase sampling design, wherein inexpensive auxiliary variables W predictive of Y are measured in everyone, Y is measured in a random sample, and the semi-parametric efficient estimator is applied. This approach is made efficient by specifying the phase-two selection probabilities as optimal functions of the auxiliary variables and measurement costs. While this approach is familiar to survey samplers, it apparently has seldom been used in clinical trials, and several novel results practicable for clinical trials are developed. Simulations are performed to identify settings where the optimal approach significantly improves efficiency compared to approaches in current practice. Proofs and R code are provided. The optimality results are developed to design an HIV vaccine trial, with the objective of comparing the mean “importance-weighted” breadth (Y) of the T cell response between randomized vaccine groups. The trial collects an auxiliary response (W) highly predictive of Y, and measures Y in the optimal subset. We show that the optimal design-estimation approach can confer anywhere between absent and large efficiency gain (up to 24% in the examples) compared to the approach with the same efficient estimator but simple random sampling, where greater variability in the cost-standardized conditional variance of Y given W yields greater efficiency gains. Accurate estimation of E[Y∣W] is important for realizing the efficiency gain, which is aided by an ample phase-two sample and by using a robust fitting method. PMID:24123289

  12. Fast global image smoothing based on weighted least squares.

    PubMed

    Min, Dongbo; Choi, Sunghwan; Lu, Jiangbo; Ham, Bumsub; Sohn, Kwanghoon; Do, Minh N

    2014-12-01

    This paper presents an efficient technique for performing spatially inhomogeneous edge-preserving image smoothing, called the fast global smoother. Focusing on sparse Laplacian matrices consisting of a data term and a prior term (typically defined using four or eight neighbors for a 2D image), our approach efficiently solves such global objective functions. In particular, we approximate the solution of the memory- and computation-intensive large linear system, defined over a d-dimensional spatial domain, by solving a sequence of 1D subsystems. Our separable implementation enables applying a linear-time tridiagonal matrix algorithm to solve d three-point Laplacian matrices iteratively. Our approach combines the best of two paradigms, i.e., efficient edge-preserving filters and optimization-based smoothing. Our method has a runtime comparable to the fast edge-preserving filters, but its global optimization formulation overcomes many limitations of local filtering approaches. Our method also achieves results of quality comparable to state-of-the-art optimization-based techniques, but runs ∼10-30 times faster. In addition, considering the flexibility in defining an objective function, we further propose generalized fast algorithms that perform Lγ norm smoothing (0 < γ < 2) and support an aggregated (robust) data term for handling imprecise data constraints. We demonstrate the effectiveness and efficiency of our techniques in a range of image processing and computer graphics applications.
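
    To make the separable idea concrete, the following sketch solves one 1D pass of such a smoother, minimizing sum_i (u_i - f_i)^2 + lam * sum_i w_i (u_{i+1} - u_i)^2 with a linear-time tridiagonal (Thomas) solve; the guidance-weight formula and parameter values are illustrative assumptions, and a 2D smoother would alternate such passes along image rows and columns.

```python
import numpy as np

def solve_tridiagonal(lower, diag, upper, rhs):
    """Thomas algorithm: O(n) solve of a tridiagonal system.
    Row i reads: lower[i]*x[i-1] + diag[i]*x[i] + upper[i]*x[i+1] = rhs[i]."""
    n = len(diag)
    c = np.zeros(n)
    d = np.zeros(n)
    c[0] = upper[0] / diag[0]
    d[0] = rhs[0] / diag[0]
    for i in range(1, n):
        denom = diag[i] - lower[i] * c[i - 1]
        if i < n - 1:
            c[i] = upper[i] / denom
        d[i] = (rhs[i] - lower[i] * d[i - 1]) / denom
    x = np.zeros(n)
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

def wls_smooth_1d(f, guide, lam=30.0, sigma=0.1):
    """One 1D pass of edge-preserving WLS smoothing: minimize
    sum_i (u_i - f_i)^2 + lam * sum_i w_i (u_{i+1} - u_i)^2,
    where w_i is small across strong edges of the guidance signal."""
    w = np.exp(-np.abs(np.diff(guide)) / sigma)
    n = len(f)
    lower = np.zeros(n)
    upper = np.zeros(n)
    diag = np.ones(n)
    upper[:-1] = -lam * w
    lower[1:] = -lam * w
    diag[:-1] += lam * w
    diag[1:] += lam * w
    return solve_tridiagonal(lower, diag, upper, np.asarray(f, dtype=float))

# Noisy step signal: smoothing flattens the noise but preserves the edge,
# because the weight across the step is nearly zero.
rng = np.random.default_rng(0)
f = np.r_[np.zeros(50), np.ones(50)] + 0.1 * rng.standard_normal(100)
u = wls_smooth_1d(f, guide=f)
print(f[:50].std(), u[:50].std())  # noise level before and after smoothing
```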

  13. A Distributed Approach to System-Level Prognostics

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Bregon, Anibal; Roychoudhury, Indranil

    2012-01-01

    Prognostics, which deals with predicting remaining useful life of components, subsystems, and systems, is a key technology for systems health management that leads to improved safety and reliability with reduced costs. The prognostics problem is often approached from a component-centric view. However, in most cases, it is not specifically component lifetimes that are important, but, rather, the lifetimes of the systems in which these components reside. The system-level prognostics problem can be quite difficult due to the increased scale and scope of the prognostics problem and the relative lack of scalability and efficiency of typical prognostics approaches. In order to address these issues, we develop a distributed solution to the system-level prognostics problem, based on the concept of structural model decomposition. The system model is decomposed into independent submodels. Independent local prognostics subproblems are then formed based on these local submodels, resulting in a scalable, efficient, and flexible distributed approach to the system-level prognostics problem. We provide a formulation of the system-level prognostics problem and demonstrate the approach on a four-wheeled rover simulation testbed. The results show that the system-level prognostics problem can be accurately and efficiently solved in a distributed fashion.

  14. Using Consumer Preference Information to Increase the Reach and Impact of Media-Based Parenting Interventions in a Public Health Approach to Parenting Support

    ERIC Educational Resources Information Center

    Metzler, Carol W.; Sanders, Matthew R.; Rusby, Julie C.; Crowley, Ryann N.

    2012-01-01

    Within a public health approach to improving parenting, the mass media offer a potentially more efficient and affordable format for directly reaching a large number of parents with evidence-based parenting information than do traditional approaches to parenting interventions that require delivery by a practitioner. Little is known, however, about…

  15. Efficient Bayesian experimental design for contaminant source identification

    NASA Astrophysics Data System (ADS)

    Zhang, Jiangjiang; Zeng, Lingzao; Chen, Cheng; Chen, Dingjiang; Wu, Laosheng

    2015-01-01

    In this study, an efficient full Bayesian approach is developed for the optimal design of sampling well locations and the identification of source parameters of groundwater contaminants. An information measure, i.e., the relative entropy, is employed to quantify the information gain from concentration measurements in identifying unknown parameters. In this approach, the sampling locations that give the maximum expected relative entropy are selected as the optimal design. After the sampling locations are determined, a Bayesian approach based on Markov chain Monte Carlo (MCMC) is used to estimate unknown parameters. In both the design and estimation, the contaminant transport equation must be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on an adaptive sparse grid is utilized to construct a surrogate for the contaminant transport equation. The approximated likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of our approach are demonstrated through numerical case studies. It is shown that the methods can be used to assist in both single sampling location and monitoring network design for contaminant source identification in groundwater.
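
    As an illustration of the design criterion, the sketch below estimates the expected relative entropy (expected information gain) by nested Monte Carlo for a linear-Gaussian toy problem; the candidate designs, the exponential sensitivity decay, and the noise level are invented stand-ins for the paper's transport-equation-plus-surrogate setting.

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss_logpdf(y, mu, sigma):
    return -0.5 * ((y - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

def expected_information_gain(sensitivity, sigma_noise, n_outer=2000, n_inner=2000):
    """Nested Monte Carlo estimate of the expected relative entropy between
    posterior and prior (the mutual information) for a design whose forward
    model is y = sensitivity * theta + noise, with theta ~ N(0, 1)."""
    theta_outer = rng.standard_normal(n_outer)
    y = sensitivity * theta_outer + sigma_noise * rng.standard_normal(n_outer)
    log_lik = gauss_logpdf(y, sensitivity * theta_outer, sigma_noise)
    theta_inner = rng.standard_normal(n_inner)
    # Marginal likelihood p(y) estimated by averaging over fresh prior draws.
    log_marg = np.array([
        np.log(np.mean(np.exp(gauss_logpdf(yi, sensitivity * theta_inner,
                                           sigma_noise))))
        for yi in y
    ])
    return np.mean(log_lik - log_marg)

# Hypothetical candidate well locations: measurement sensitivity to the
# unknown source parameter decays with distance from the source.
candidates = {x: np.exp(-x) for x in [0.2, 0.5, 1.0, 2.0]}
eig = {x: expected_information_gain(s, sigma_noise=0.5)
       for x, s in candidates.items()}
best = max(eig, key=eig.get)
print(f"optimal design: x = {best}, EIG ~ {eig[best]:.3f} nats")
```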

  16. Identifying Interacting Genetic Variations by Fish-Swarm Logic Regression

    PubMed Central

    Yang, Aiyuan; Yan, Chunxia; Zhu, Feng; Zhao, Zhongmeng; Cao, Zhi

    2013-01-01

    Understanding associations between genotypes and complex traits is a fundamental problem in human genetics. A major open problem in mapping phenotypes is that of identifying a set of interacting genetic variants, which might contribute to complex traits. Logic regression (LR) is a powerful multivariant association tool. Several LR-based approaches have been successfully applied to different datasets. However, these approaches are not adequate with regard to accuracy and efficiency. In this paper, we propose a new LR-based approach, called fish-swarm logic regression (FSLR), which improves the logic regression process by incorporating swarm optimization. In our approach, a school of fish agents is run in parallel. Each fish agent holds a regression model, while the school searches for better models through various preset behaviors. The swarm algorithm improves accuracy and efficiency by speeding up convergence and preventing it from falling into local optima. We apply our approach to a real screening dataset and a series of simulation scenarios. Compared to three existing LR-based approaches, our approach outperforms them by having lower type I and type II error rates, being able to identify more preset causal sites, and running at faster speeds. PMID:23984382

  17. Simple, Defensible Sample Sizes Based on Cost Efficiency

    PubMed Central

    Bacchetti, Peter; McCulloch, Charles E.; Segal, Mark R.

    2009-01-01

    Summary The conventional approach of choosing sample size to provide 80% or greater power ignores the cost implications of different sample size choices. Costs, however, are often impossible for investigators and funders to ignore in actual practice. Here, we propose and justify a new approach for choosing sample size based on cost efficiency, the ratio of a study’s projected scientific and/or practical value to its total cost. By showing that a study’s projected value exhibits diminishing marginal returns as a function of increasing sample size for a wide variety of definitions of study value, we are able to develop two simple choices that can be defended as more cost efficient than any larger sample size. The first is to choose the sample size that minimizes the average cost per subject. The second is to choose sample size to minimize total cost divided by the square root of sample size. This latter method is theoretically more justifiable for innovative studies, but also performs reasonably well and has some justification in other cases. For example, if projected study value is assumed to be proportional to power at a specific alternative and total cost is a linear function of sample size, then this approach is guaranteed either to produce more than 90% power or to be more cost efficient than any sample size that does. These methods are easy to implement, based on reliable inputs, and well justified, so they should be regarded as acceptable alternatives to current conventional approaches. PMID:18482055
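
    Both rules reduce to a one-dimensional search over n. Below is a minimal sketch with an invented linear budget (fixed cost plus per-subject cost; the numbers are purely illustrative). Under such a linear cost the average-cost rule has no interior minimum, while the square-root rule has the closed-form minimizer n* = fixed/per-subject, which the grid search reproduces.

```python
import numpy as np

def defensible_sample_sizes(cost, n_max=100_000):
    """Grid-search the two cost-efficiency rules described above:
    (1) n minimizing average cost per subject, cost(n)/n;
    (2) n minimizing cost(n)/sqrt(n)."""
    n = np.arange(1, n_max + 1, dtype=float)
    c = cost(n)
    return int(n[np.argmin(c / n)]), int(n[np.argmin(c / np.sqrt(n))])

# Hypothetical study budget: fixed start-up cost plus linear per-subject cost.
fixed, per_subject = 100_000.0, 500.0
cost = lambda n: fixed + per_subject * n

n_avg, n_sqrt = defensible_sample_sizes(cost)
# With purely linear cost, cost(n)/n decreases forever, so rule (1) runs to
# the grid edge; rule (2) is minimized at n* = fixed / per_subject = 200.
print(n_avg, n_sqrt, fixed / per_subject)
```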

  18. Sequencing small genomic targets with high efficiency and extreme accuracy

    PubMed Central

    Schmitt, Michael W.; Fox, Edward J.; Prindle, Marc J.; Reid-Bayliss, Kate S.; True, Lawrence D.; Radich, Jerald P.; Loeb, Lawrence A.

    2015-01-01

    The detection of minority variants in mixed samples demands methods for enrichment and accurate sequencing of small genomic intervals. We describe an efficient approach based on sequential rounds of hybridization with biotinylated oligonucleotides, enabling more than one-million fold enrichment of genomic regions of interest. In conjunction with error correcting double-stranded molecular tags, our approach enables the quantification of mutations in individual DNA molecules. PMID:25849638

  19. Reformulating Constraints for Compilability and Efficiency

    NASA Technical Reports Server (NTRS)

    Tong, Chris; Braudaway, Wesley; Mohan, Sunil; Voigt, Kerstin

    1992-01-01

    KBSDE is a knowledge compiler that uses a classification-based approach to map solution constraints in a task specification onto particular search algorithm components that will be responsible for satisfying those constraints (e.g., local constraints are incorporated in generators; global constraints are incorporated in either testers or hill-climbing patchers). Associated with each type of search algorithm component is a subcompiler that specializes in mapping constraints into components of that type. Each of these subcompilers in turn uses a classification-based approach, matching a constraint passed to it against one of several schemas, and applying a compilation technique associated with that schema. While much progress has occurred in our research since we first laid out our classification-based approach [Ton91], we focus in this paper on our reformulation research. Two important reformulation issues that arise out of the choice of a schema-based approach are: (1) Compilability -- can a constraint that does not directly match any of a particular subcompiler's schemas be reformulated into one that does? and (2) Efficiency -- if the efficiency of the compiled search algorithm depends on the compiler's performance, and the compiler's performance depends on the form in which the constraint was expressed, can we find forms for constraints which compile better, or reformulate constraints whose forms can be recognized as ones that compile poorly? In this paper, we describe a set of techniques we are developing for partially addressing these issues.

  20. Machine-Learning Approach for Design of Nanomagnetic-Based Antennas

    NASA Astrophysics Data System (ADS)

    Gianfagna, Carmine; Yu, Huan; Swaminathan, Madhavan; Pulugurtha, Raj; Tummala, Rao; Antonini, Giulio

    2017-08-01

    We propose a machine-learning approach for design of planar inverted-F antennas with a magneto-dielectric nanocomposite substrate. It is shown that machine-learning techniques can be efficiently used to characterize nanomagnetic-based antennas by accurately mapping the particle radius and volume fraction of the nanomagnetic material to antenna parameters such as gain, bandwidth, radiation efficiency, and resonant frequency. A modified mixing rule model is also presented. In addition, the inverse problem is addressed through machine learning as well, where given the antenna parameters, the corresponding design space of possible material parameters is identified.
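
    A minimal sketch of such a surrogate workflow follows; the toy_simulator stand-in, the parameter ranges, and the target specification are all invented, whereas a real study would generate training labels with full-wave electromagnetic simulations.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Hypothetical training data: (particle radius [nm], volume fraction) mapped
# to (gain [dBi], bandwidth [MHz], radiation efficiency, resonant freq [GHz]).
X = rng.uniform([5.0, 0.05], [50.0, 0.40], size=(200, 2))
def toy_simulator(x):
    """Invented stand-in for an electromagnetic solver, for illustration only."""
    r, vf = x
    return [2.0 + 0.02 * r * vf, 80 + 300 * vf, 0.6 + 0.3 * vf, 2.4 - 1.5 * vf]
y = np.array([toy_simulator(x) for x in X])

# Forward model: material parameters -> antenna figures of merit.
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print(model.predict([[20.0, 0.25]]))  # gain, bandwidth, efficiency, f_res

# Inverse problem: scan a dense design grid and keep the material parameters
# whose predicted performance meets an (invented) target specification.
grid = rng.uniform([5.0, 0.05], [50.0, 0.40], size=(10_000, 2))
pred = model.predict(grid)
feasible = grid[(pred[:, 0] > 2.1) & (pred[:, 3] < 2.2)]
print(f"{len(feasible)} candidate (radius, volume-fraction) designs")
```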

  1. Smart Sectors

    EPA Pesticide Factsheets

    EPA is taking a sector-based approach to environmental protection to improve environmental performance through better-informed rulemakings, reduced burden, and more efficient, effective, and consensus-based solutions to environmental problems.

  2. Efficiency of Using a Web-Based Approach to Teach Reading Strategies to Iranian EFL Learners

    ERIC Educational Resources Information Center

    Dehghanpour, Elham; Hashemian, Mahmood

    2015-01-01

    Applying new technologies, with their effective potential, has changed education and, consequently, the L2 teacher's role. Coping with online materials makes it necessary to employ Web-based approaches in L2 instruction. The ability to use reading strategies in a Web-based condition requires sufficient skill, which will be fulfilled if it is…

  3. Novel approach for solid state cryocoolers.

    PubMed

    Volpi, Azzurra; Di Lieto, Alberto; Tonelli, Mauro

    2015-04-06

    Laser cooling in solids is based on anti-Stokes luminescence, in which lattice phonons are annihilated to supply the difference between the energy of the emitted photons and that of the absorbed ones. Usually the anti-Stokes process is obtained using a rare-earth active ion such as Yb. In this work we demonstrate a novel approach to optical cooling based not only on the Yb anti-Stokes cycle but also on beneficial energy-transfer processes from the active ion, obtaining an increase in the cooling efficiency of a 5 at.% Yb-doped LiYF(4) (YLF) single crystal through controlled co-doping with 0.0016% thulium ions. A model for the efficiency enhancement based on Yb-Tm energy transfer is also suggested.

  4. Automatic Mrf-Based Registration of High Resolution Satellite Video Data

    NASA Astrophysics Data System (ADS)

    Platias, C.; Vakalopoulou, M.; Karantzalos, K.

    2016-06-01

    In this paper we propose a deformable registration framework for high resolution satellite video data able to automatically and accurately co-register satellite video frames and/or register them to a reference map/image. The proposed approach performs non-rigid registration, formulates a Markov Random Field (MRF) model, and employs efficient linear programming to reach the lowest potential of the cost function. The developed approach has been applied and validated on satellite video sequences from Skybox Imaging and compared with a rigid, descriptor-based registration method. Regarding computational performance, both the MRF-based and the descriptor-based methods were quite efficient, with the former converging in minutes and the latter in seconds. Regarding registration accuracy, the proposed MRF-based method significantly outperformed the descriptor-based one in all the performed experiments.

  5. The Vector-Ballot Approach for Online Voting Procedures

    NASA Astrophysics Data System (ADS)

    Kiayias, Aggelos; Yung, Moti

    Looking at current cryptographic-based e-voting protocols, one can distinguish three basic design paradigms (or approaches): (a) Mix-Networks based, (b) Homomorphic Encryption based, and (c) Blind Signatures based. Each of the three possesses different advantages and disadvantages w.r.t. the basic properties of (i) efficient tallying, (ii) universal verifiability, and (iii) allowing write-in ballot capability (in addition to predetermined candidates). In fact, none of the approaches results in a scheme that simultaneously achieves all three. This is unfortunate, since the three basic properties are crucial for efficiency, integrity and versatility (flexibility), respectively. Further, one can argue that a serious business offering of voting technology should offer a flexible technology that achieves various election goals with a single user interface. This motivates our goal, which is to suggest a new "vector-ballot" based approach for secret-ballot e-voting that is based on three new notions: Provably Consistent Vector Ballot Encodings, Shrink-and-Mix Networks and Punch-Hole-Vector-Ballots. At the heart of our approach is the combination of mix networks and homomorphic encryption under a single user interface; given this, it is rather surprising that it achieves much more than any of the previous approaches for e-voting achieved in terms of the basic properties. Our approach is presented in two generic designs called "homomorphic vector-ballots with write-in votes" and "multi-candidate punch-hole vector-ballots"; both of our designs can be instantiated over any homomorphic encryption function.

  6. Determination of aerodynamic sensitivity coefficients based on the three-dimensional full potential equation

    NASA Technical Reports Server (NTRS)

    Elbanna, Hesham M.; Carlson, Leland A.

    1992-01-01

    The quasi-analytical approach is applied to the three-dimensional full potential equation to compute wing aerodynamic sensitivity coefficients in the transonic regime. Symbolic manipulation is used to reduce the effort associated with obtaining the sensitivity equations, and the large sensitivity system is solved using 'state of the art' routines. Results are compared to those obtained by the direct finite difference approach and both methods are evaluated to determine their computational accuracy and efficiency. The quasi-analytical approach is shown to be accurate and efficient for large aerodynamic systems.

  7. Time-based analysis of total cost of patient episodes: a case study of hip replacement.

    PubMed

    Peltokorpi, Antti; Kujala, Jaakko

    2006-01-01

    Healthcare in the public and private sectors is facing increasing pressure to become more cost-effective. Time-based competition and work-in-progress have been used successfully to measure and improve the efficiency of industrial manufacturing. This paper seeks to address the issue by presenting a framework for time-based management of the total cost of a patient episode and applying it within the Six Sigma DMAIC process-development approach. The framework is used to analyse hip replacement patient episodes in the Päijät-Häme Hospital District in Finland, which has a catchment area of 210,000 inhabitants and performs an average of 230 hip replacements per year. The work-in-progress concept proves applicable to healthcare, notably in that the DMAIC approach can be used to analyse the total cost of patient episodes. The paper concludes that a framework combining the patient-in-process and DMAIC approaches can be used not only to analyse the total cost of a patient episode but also to improve patient process efficiency.

  8. Maximum efficiency of ideal heat engines based on a small system: correction to the Carnot efficiency at the nanoscale.

    PubMed

    Quan, H T

    2014-06-01

    We study the maximum efficiency of a heat engine based on a small system. It is revealed that, due to the finiteness of the system, irreversibility may arise when the working substance is brought into contact with a heat reservoir. As a result, there is a working-substance-dependent correction to the Carnot efficiency. We derive a general and simple expression for the maximum efficiency of a Carnot cycle heat engine in terms of the relative entropy. This maximum efficiency approaches the Carnot efficiency asymptotically when the size of the working substance increases to the thermodynamic limit. Our study extends Carnot's result of the maximum efficiency to an arbitrary working substance and elucidates the subtlety of thermodynamic laws in small systems.

  9. A Novel Energy-Efficient Approach for Human Activity Recognition.

    PubMed

    Zheng, Lingxiang; Wu, Dihong; Ruan, Xiaoyang; Weng, Shaolin; Peng, Ao; Tang, Biyu; Lu, Hai; Shi, Haibin; Zheng, Huiru

    2017-09-08

    In this paper, we propose a novel energy-efficient approach for a mobile activity recognition system (ARS) to detect human activities. The proposed energy-efficient ARS, using low sampling rates, can achieve high recognition accuracy and low energy consumption. A novel classifier that integrates a hierarchical support vector machine and context-based classification (HSVMCC) is presented to achieve high activity recognition accuracy when the sampling rate is lower than the activity frequency, i.e., the Nyquist sampling theorem is not satisfied. We tested the proposed energy-efficient approach with data collected from 20 volunteers (14 males and 6 females), and an average recognition accuracy of around 96.0% was achieved. Results show that using a low sampling rate of 1 Hz can save 17.3% and 59.6% of energy compared with sampling rates of 5 Hz and 50 Hz, respectively. The proposed low sampling rate approach can greatly reduce power consumption while maintaining high activity recognition accuracy. The composition of power consumption in online ARS is also investigated in this paper.

  10. Enabling smart personalized healthcare: a hybrid mobile-cloud approach for ECG telemonitoring.

    PubMed

    Wang, Xiaoliang; Gui, Qiong; Liu, Bingwei; Jin, Zhanpeng; Chen, Yu

    2014-05-01

    The severe challenges of skyrocketing healthcare expenditure and a fast-aging population highlight the need for innovative solutions supporting more accurate, affordable, flexible, and personalized medical diagnosis and treatment. Recent advances in mobile technologies have made mobile devices a promising tool for managing patients' own health status through services like telemedicine. However, the inherent limitations of mobile devices make them less effective in computation- or data-intensive tasks such as medical monitoring. In this study, we propose a new hybrid mobile-cloud computational solution to enable more effective personalized medical monitoring. To demonstrate the efficacy and efficiency of the proposed approach, we present a case study of mobile-cloud based electrocardiograph monitoring and analysis and develop a mobile-cloud prototype. The experimental results show that the proposed approach can significantly enhance conventional mobile-based medical monitoring in terms of diagnostic accuracy, execution efficiency, and energy efficiency, and holds potential for addressing future large-scale data analysis in personalized healthcare.

  11. GPU-Accelerated Forward and Back-Projections with Spatially Varying Kernels for 3D DIRECT TOF PET Reconstruction.

    PubMed

    Ha, S; Matej, S; Ispiryan, M; Mueller, K

    2013-02-01

    We describe a GPU-accelerated framework that efficiently models spatially (shift) variant system response kernels and performs forward- and back-projection operations with these kernels for the DIRECT (Direct Image Reconstruction for TOF) iterative reconstruction approach. Inherent challenges arise from the poor memory cache performance at non-axis-aligned TOF directions. Focusing on the GPU memory access patterns, we utilize different kinds of GPU memory according to these patterns in order to maximize the memory cache performance. We also exploit the GPU instruction-level parallelism to efficiently hide long latencies from the memory operations. Our experiments indicate that our GPU implementation of the projection operators runs slightly faster than, or approximately on par with, FFT-based approaches using state-of-the-art FFTW routines. Most importantly, however, our GPU framework can also efficiently handle any generic system response kernels, such as spatially symmetric and shift-variant as well as spatially asymmetric and shift-variant, both of which an FFT-based approach cannot cope with.

  12. GPU-Accelerated Forward and Back-Projections With Spatially Varying Kernels for 3D DIRECT TOF PET Reconstruction

    NASA Astrophysics Data System (ADS)

    Ha, S.; Matej, S.; Ispiryan, M.; Mueller, K.

    2013-02-01

    We describe a GPU-accelerated framework that efficiently models spatially (shift) variant system response kernels and performs forward- and back-projection operations with these kernels for the DIRECT (Direct Image Reconstruction for TOF) iterative reconstruction approach. Inherent challenges arise from the poor memory cache performance at non-axis-aligned TOF directions. Focusing on the GPU memory access patterns, we utilize different kinds of GPU memory according to these patterns in order to maximize the memory cache performance. We also exploit the GPU instruction-level parallelism to efficiently hide long latencies from the memory operations. Our experiments indicate that our GPU implementation of the projection operators runs slightly faster than, or approximately on par with, FFT-based approaches using state-of-the-art FFTW routines. Most importantly, however, our GPU framework can also efficiently handle any generic system response kernels, such as spatially symmetric and shift-variant as well as spatially asymmetric and shift-variant, both of which an FFT-based approach cannot cope with.

  13. On Improving Efficiency of Differential Evolution for Aerodynamic Shape Optimization Applications

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.

    2004-01-01

    Differential Evolution (DE) is a simple and robust evolutionary strategy that has been proven effective in determining the global optimum for several difficult optimization problems. Although DE offers several advantages over traditional optimization approaches, its use in applications such as aerodynamic shape optimization where the objective function evaluations are computationally expensive is limited by the large number of function evaluations often required. In this paper various approaches for improving the efficiency of DE are reviewed and discussed. These approaches are implemented in a DE-based aerodynamic shape optimization method that uses a Navier-Stokes solver for the objective function evaluations. Parallelization techniques on distributed computers are used to reduce turnaround times. Results are presented for the inverse design of a turbine airfoil. The efficiency improvements achieved by the different approaches are evaluated and compared.
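
    For reference, here is a minimal DE/rand/1/bin sketch with a cheap stand-in objective; it shows the mutation-crossover-selection loop the abstract builds on, not the authors' parallel, Navier-Stokes-driven setup.

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=200, seed=0):
    """Minimal DE/rand/1/bin minimizer: mutate with a scaled difference of
    two population members, binomially cross with the target vector, and
    keep the trial only if it is no worse (greedy selection)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, size=3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True   # ensure at least one gene crosses
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)                # the expensive call (CFD in the paper)
            if f_trial <= fit[i]:
                pop[i], fit[i] = trial, f_trial
    best = np.argmin(fit)
    return pop[best], fit[best]

# Cheap stand-in objective (sphere function) instead of a flow solver.
x_best, f_best = differential_evolution(lambda x: float(np.sum(x ** 2)),
                                        bounds=[(-5, 5)] * 4)
print(x_best, f_best)
```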

  14. Efficient subtle motion detection from high-speed video for sound recovery and vibration analysis using singular value decomposition-based approach

    NASA Astrophysics Data System (ADS)

    Zhang, Dashan; Guo, Jie; Jin, Yi; Zhu, Chang'an

    2017-09-01

    High-speed cameras provide full field measurement of structure motions and have been applied in nondestructive testing and noncontact structure monitoring. Recently, a phase-based method has been proposed to extract sound-induced vibrations from phase variations in videos, and this method provides insights into the study of remote sound surveillance and material analysis. An efficient singular value decomposition (SVD)-based approach is introduced to detect sound-induced subtle motions from pixel intensities in silent high-speed videos. A high-speed camera is initially applied to capture a video of the vibrating objects stimulated by sound fluctuations. Then, subimages collected from a small region on the captured video are reshaped into vectors and reconstructed to form a matrix. Orthonormal image bases (OIBs) are obtained from the SVD of the matrix; available vibration signal can then be obtained by projecting subsequent subimages onto specific OIBs. A simulation test is initiated to validate the effectiveness and efficiency of the proposed method. Two experiments are conducted to demonstrate the potential applications in sound recovery and material analysis. Results show that the proposed method efficiently detects subtle motions from the video.
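
    A minimal numpy sketch of this pipeline follows; the region of interest, the choice of basis index, and the synthetic flicker standing in for a sound-driven vibration are illustrative assumptions.

```python
import numpy as np

def svd_motion_signal(frames, roi, k=0):
    """Sketch of the SVD-based subtle-motion detector described above.
    frames: (T, H, W) grayscale high-speed video; roi: (r0, r1, c0, c1)
    window on the vibrating object. Returns per-frame projection
    coefficients onto the k-th orthonormal image basis (OIB)."""
    r0, r1, c0, c1 = roi
    sub = frames[:, r0:r1, c0:c1].astype(float)
    vecs = sub.reshape(len(sub), -1)      # each subimage becomes a row vector
    vecs -= vecs.mean(axis=0)             # remove the static background
    # Rows of Vt are orthonormal image bases of the subimage matrix.
    U, S, Vt = np.linalg.svd(vecs, full_matrices=False)
    return vecs @ Vt[k]                   # time signal: projection onto OIB k

# Synthetic demo: a 1 kHz intensity modulation sampled at 10 kHz stands in
# for a sound-induced vibration recorded by a high-speed camera.
t = np.arange(2000) / 10_000.0
frames = np.random.default_rng(2).random((2000, 32, 32)) * 0.01
frames += np.sin(2 * np.pi * 1000 * t)[:, None, None] * np.hanning(32)[None, :, None]
signal = svd_motion_signal(frames, roi=(0, 32, 0, 32))
freqs = np.fft.rfftfreq(len(signal), d=1 / 10_000.0)
print(freqs[np.argmax(np.abs(np.fft.rfft(signal)))])  # recovers ~1000 Hz
```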

  15. Gyrotron multistage depressed collector based on E × B drift concept using azimuthal electric field. I. Basic design

    NASA Astrophysics Data System (ADS)

    Wu, Chuanren; Pagonakis, Ioannis Gr.; Avramidis, Konstantinos A.; Gantenbein, Gerd; Illy, Stefan; Thumm, Manfred; Jelonnek, John

    2018-03-01

    Multistage Depressed Collectors (MDCs) are widely used in vacuum tubes to regain energy from the depleted electron beam. However, the design of an MDC for gyrotrons, especially for those deployed in fusion experiments and future power plants, is not trivial. Since gyrotrons require relatively high magnetic fields, their hollow annular electron beam is magnetically confined in the collector. In such a moderate magnetic field, the MDC concept based on E × B drift is very promising. Several concrete design approaches based on the E × B concept have been proposed. This paper presents a realizable design of a two-stage depressed collector based on the E × B concept. A collector efficiency of 77% is achievable, which will be able to increase the total gyrotron efficiency from currently 50% to more than 60%. Secondary electrons reduce the efficiency only by 1%. Moreover, the collector efficiency is resilient to the change of beam current (i.e., space charge repulsion) and beam misalignment as well as magnetic field perturbations. Therefore, compared to other E × B conceptual designs, this design approach is promising and fairly feasible.

  16. Efficient Fingercode Classification

    NASA Astrophysics Data System (ADS)

    Sun, Hong-Wei; Law, Kwok-Yan; Gollmann, Dieter; Chung, Siu-Leung; Li, Jian-Bin; Sun, Jia-Guang

    In this paper, we present an efficient fingerprint classification algorithm, an essential component in many critical security application systems, e.g., systems in the e-government and e-finance domains. Fingerprint identification is one of the most important security requirements in homeland security systems such as personnel screening and anti-money laundering. The problem of fingerprint identification involves searching (matching) the fingerprint of a person against each of the fingerprints of all registered persons. To enhance performance and reliability, a common approach is to reduce the search space by first classifying the fingerprints and then performing the search in the respective class. Jain et al. proposed a fingerprint classification algorithm based on a two-stage classifier, which uses a K-nearest neighbor classifier in its first stage. The fingerprint classification algorithm is based on the fingercode representation, an encoding of fingerprints that has been demonstrated to be an effective fingerprint biometric scheme because of its ability to capture both local and global details in a fingerprint image. We enhance this approach by improving the efficiency of the K-nearest neighbor classifier for fingercode-based fingerprint classification. Our research first investigates the various fast search algorithms in vector quantization (VQ) and their potential application in fingerprint classification, and then proposes two efficient algorithms based on the pyramid-based search algorithms in VQ. Experimental results on DB1 of FVC 2004 demonstrate that our algorithms can outperform the full search algorithm and the original pyramid-based search algorithms in terms of computational efficiency without sacrificing accuracy.

  17. Efficient integration method for fictitious domain approaches

    NASA Astrophysics Data System (ADS)

    Duczek, Sascha; Gabbert, Ulrich

    2015-10-01

    In the current article, we present an efficient and accurate numerical method for the integration of the system matrices in fictitious domain approaches such as the finite cell method (FCM). In the framework of the FCM, the physical domain is embedded in a geometrically larger domain of simple shape which is discretized using a regular Cartesian grid of cells. Therefore, a spacetree-based adaptive quadrature technique is normally deployed to resolve the geometry of the structure. Depending on the complexity of the structure under investigation, this method accounts for most of the computational effort. To reduce the computational cost of computing the system matrices, an efficient quadrature scheme based on the divergence theorem (Gauß-Ostrogradsky theorem) is proposed. Using this theorem, the dimension of the integral is reduced by one, i.e. instead of solving the integral over the whole domain, only its contour needs to be considered. In the current paper, we present the general principles of the integration method and its implementation. Results for several two-dimensional benchmark problems highlight its properties. The efficiency of the proposed method is compared to that of conventional spacetree-based integration techniques.
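
    The dimension-reduction principle is easy to demonstrate with the simplest integrand: taking F = (x, 0), the divergence theorem turns the area integral of div F = 1 into a contour integral, so a polygon's area follows from its boundary alone. The sketch below illustrates the principle, not the FCM stiffness-matrix quadrature itself.

```python
import numpy as np

def polygon_area_via_divergence(vertices):
    """Area of a polygonal domain computed purely from its contour, using
    the divergence theorem with F = (x, 0):
        area = integral of div F over the domain = contour integral of x * n_x ds.
    The midpoint rule on each edge is exact here because F is linear."""
    v = np.asarray(vertices, dtype=float)
    x0, y0 = v.T
    x1, y1 = np.roll(v, -1, axis=0).T       # next vertex along the contour
    # For a counter-clockwise contour, n_x ds integrates to (y1 - y0) per edge.
    xm = 0.5 * (x0 + x1)                    # edge midpoints
    return np.sum(xm * (y1 - y0))

square = [(0, 0), (2, 0), (2, 1), (0, 1)]   # CCW rectangle, exact area = 2
print(polygon_area_via_divergence(square))  # -> 2.0
```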

  18. Theory and implementation of H-matrix based iterative and direct solvers for Helmholtz and elastodynamic oscillatory kernels

    NASA Astrophysics Data System (ADS)

    Chaillat, Stéphanie; Desiderio, Luca; Ciarlet, Patrick

    2017-12-01

    In this work, we study the accuracy and efficiency of hierarchical matrix (H-matrix) based fast methods for solving dense linear systems arising from the discretization of the 3D elastodynamic Green's tensors. It is well known in the literature that standard H-matrix based methods, although very efficient tools for asymptotically smooth kernels, are not optimal for oscillatory kernels. H2-matrix and directional approaches have been proposed to overcome this problem. However, the implementation of such methods is much more involved than the standard H-matrix representation. The central questions we address are twofold: (i) In which frequency range is the H-matrix format an efficient representation for 3D elastodynamic problems? (ii) What can be expected of such an approach when modeling problems in mechanical engineering? We show that even though the method is not optimal (in the sense that more involved representations can lead to faster algorithms), an efficient solver can be easily developed. The capabilities of the method are illustrated on numerical examples using the Boundary Element Method.

  19. A Computationally Efficient Method for Polyphonic Pitch Estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Ruohua; Reiss, Joshua D.; Mattavelli, Marco; Zoia, Giorgio

    2009-12-01

    This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimation is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. This spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then, incorrect estimations are removed according to spectral irregularity and knowledge of the harmonic structures of notes played on commonly used musical instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and the results demonstrate the high performance and computational efficiency of the approach.
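
    A toy version of the two stages is sketched below, with a plain FFT magnitude spectrum standing in for the RTFI spectrum and a crude fundamental-energy test standing in for the paper's spectral-irregularity removal stage; all thresholds and signal parameters are illustrative.

```python
import numpy as np

def pitch_energy_spectrum(mag, df, f0_grid, n_harmonics=8):
    """Harmonic grouping: the salience of each pitch candidate is the
    summed spectral energy at its first few harmonics."""
    sal = np.zeros(len(f0_grid))
    for i, f0 in enumerate(f0_grid):
        bins = np.round(f0 * np.arange(1, n_harmonics + 1) / df).astype(int)
        sal[i] = np.sum(mag[bins[bins < len(mag)]] ** 2)
    return sal

# Two simultaneous tones (220 Hz and 330 Hz) with three partials each.
sr = 8000
t = np.arange(sr) / sr
x = sum(np.sin(2 * np.pi * f * h * t) / h
        for f in (220.0, 330.0) for h in (1, 2, 3))
mag = np.abs(np.fft.rfft(x * np.hanning(len(x))))
df = sr / len(x)

f0_grid = np.arange(100.0, 500.0, 1.0)
sal = pitch_energy_spectrum(mag, df, f0_grid)
# Crude stand-in for the removal stage: discard candidates (e.g. subharmonics
# such as 110 Hz) whose own fundamental bin carries little energy.
fundamental = mag[np.round(f0_grid / df).astype(int)]
sal[fundamental < 0.1 * mag.max()] = 0.0
# Peak-picking: local maxima of the pitch energy spectrum, strongest first.
local_max = (sal > np.roll(sal, 1)) & (sal > np.roll(sal, -1))
peaks = f0_grid[local_max][np.argsort(sal[local_max])[-2:]]
print(np.sort(peaks))   # recovers approximately [220, 330]
```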

  20. Towards efficient next generation light sources: combined solution processed and evaporated layers for OLEDs

    NASA Astrophysics Data System (ADS)

    Hartmann, D.; Sarfert, W.; Meier, S.; Bolink, H.; García Santamaría, S.; Wecker, J.

    2010-05-01

    Typically, highly efficient OLED device structures are based on a multitude of stacked thin organic layers prepared by thermal evaporation. For lighting applications these efficient device stacks have to be up-scaled to large areas, which is clearly challenging in terms of high-throughput processing at low cost. One promising approach to meeting cost efficiency, high throughput and high light output is the combination of solution and evaporation processing. The objective is to substitute as many thermally evaporated layers as possible by solution processing without sacrificing device performance. Hence, starting from the anode side, evaporated layers of an efficient white-light-emitting OLED stack are stepwise replaced by solution-processable polymer and small-molecule layers. In doing so, different solution-processable hole injection layers (polymer HILs) are integrated into small-molecule devices and evaluated with regard to their electro-optical performance as well as their planarizing properties, meaning the ability to cover ITO spikes, defects and dust particles. Two approaches are followed: in the "single HIL" approach only one polymer HIL is coated, whereas in the "combined HIL" concept the coated polymer HIL is combined with a thin evaporated HIL. These HIL architectures are studied in unipolar as well as bipolar devices. As a result, the combined HIL approach facilitates better control over the hole current, improved device stability, and improved current and power efficiency compared with single-HIL as well as purely small-molecule-based OLED stacks. Furthermore, emitting layers based on guest/host small molecules are fabricated from solution and integrated into a white hybrid stack (WHS). Up to three evaporated layers were successfully replaced by solution processing, showing white-light emission spectra comparable to those of an evaporated small-molecule reference stack and lifetime values of several hundred hours.

  1. Efficiency enhancement of optimized Latin hypercube sampling strategies: Application to Monte Carlo uncertainty analysis and meta-modeling

    NASA Astrophysics Data System (ADS)

    Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad; Janssen, Hans

    2015-02-01

    The majority of literature regarding optimized Latin hypercube sampling (OLHS) is devoted to increasing the efficiency of these sampling strategies through the development of new algorithms based on the combination of innovative space-filling criteria and specialized optimization schemes. However, little attention has been given to the impact of the initial design that is fed into the optimization algorithm on the efficiency of OLHS strategies. Previous studies, as well as codes developed for OLHS, have relied on one of the following two approaches for the selection of the initial design in OLHS: (1) the use of random points in the hypercube intervals (random LHS), and (2) the use of midpoints in the hypercube intervals (midpoint LHS). Both approaches have been extensively used, but no attempt has been previously made to compare the efficiency and robustness of their resulting sample designs. In this study we compare the two approaches and show that the space-filling characteristics of OLHS designs are sensitive to the initial design that is fed into the optimization algorithm. It is also illustrated that the space-filling characteristics of OLHS designs based on midpoint LHS are significantly better than those based on random LHS. The two approaches are compared by incorporating their resulting sample designs in Monte Carlo simulation (MCS) for uncertainty propagation analysis, and then by employing the sample designs in the selection of the training set for constructing non-intrusive polynomial chaos expansion (NIPCE) meta-models which subsequently replace the original full model in MCSs. The analysis is based on two case studies involving numerical simulation of density-dependent flow and solute transport in porous media within the context of seawater intrusion in coastal aquifers. We show that the use of midpoint LHS as the initial design increases the efficiency and robustness of the resulting MCSs and NIPCE meta-models. The study also illustrates that this relative improvement decreases with an increasing number of sample points and input parameter dimensions. Since the computational time and effort for generating the sample designs in the two approaches are identical, the use of midpoint LHS as the initial design in OLHS is thus recommended.
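
    The two initial designs differ only in where each sample lands within its stratum, as the sketch below shows; the minimum-pairwise-distance criterion used for comparison is one common space-filling measure, not necessarily the one used in the study.

```python
import numpy as np

def initial_lhs(n_samples, n_dims, midpoint=True, seed=0):
    """Initial LHS design of the kind fed into an OLHS optimizer: one point
    per equal-probability stratum in each dimension, placed either at the
    stratum midpoint (midpoint LHS) or uniformly inside it (random LHS)."""
    rng = np.random.default_rng(seed)
    design = np.empty((n_samples, n_dims))
    for d in range(n_dims):
        offsets = 0.5 if midpoint else rng.random(n_samples)
        strata = rng.permutation(n_samples)     # one stratum per sample
        design[:, d] = (strata + offsets) / n_samples
    return design

def min_pairwise_distance(x):
    """A common space-filling criterion (larger is better); an OLHS scheme
    would now improve it by permuting strata."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    return d[np.triu_indices(len(x), k=1)].min()

mid = initial_lhs(10, 2, midpoint=True)
rnd = initial_lhs(10, 2, midpoint=False)
print(min_pairwise_distance(mid), min_pairwise_distance(rnd))
```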

  2. Highly efficient and completely flexible fiber-shaped dye-sensitized solar cell based on TiO2 nanotube array.

    PubMed

    Lv, Zhibin; Yu, Jiefeng; Wu, Hongwei; Shang, Jian; Wang, Dan; Hou, Shaocong; Fu, Yongping; Wu, Kai; Zou, Dechun

    2012-02-21

    A highly efficient, completely flexible fiber-shaped solar cell based on a TiO(2) nanotube array is successfully prepared. Under air mass 1.5G (100 mW cm(-2)) illumination, the photoelectric conversion efficiency of the solar cell approaches 7%, the highest among all fiber-shaped cells based on TiO(2) nanotube arrays; the device is also the first completely flexible fiber-shaped DSSC. The fiber-shaped solar cell demonstrates good flexibility, which makes it suitable for modularization using weaving technologies. This journal is © The Royal Society of Chemistry 2012

  3. Translucent Radiosity: Efficiently Combining Diffuse Inter-Reflection and Subsurface Scattering.

    PubMed

    Sheng, Yu; Shi, Yulong; Wang, Lili; Narasimhan, Srinivasa G

    2014-07-01

    It is hard to efficiently model the light transport in scenes with translucent objects for interactive applications. The inter-reflection between objects and their environments and the subsurface scattering through the materials intertwine to produce visual effects like color bleeding, light glows, and soft shading. Monte-Carlo based approaches have demonstrated impressive results but are computationally expensive, and faster approaches model either only inter-reflection or only subsurface scattering. In this paper, we present a simple analytic model that combines diffuse inter-reflection and isotropic subsurface scattering. Our approach extends the classical work in radiosity by including a subsurface scattering matrix that operates in conjunction with the traditional form factor matrix. This subsurface scattering matrix can be constructed using analytic, measurement-based or simulation-based models and can capture both homogeneous and heterogeneous translucencies. Using a fast iterative solution to radiosity, we demonstrate scene relighting and dynamically varying object translucencies at near interactive rates.
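
    A heavily simplified sketch of the idea: classical radiosity gathers light via the form factor matrix, and a subsurface scattering matrix additionally transports incident light through translucent patches. The precise coupling of the two matrices below is an illustrative assumption, as are all scene numbers.

```python
import numpy as np

def translucent_radiosity(E, F, rho, S, n_iter=200):
    """Fixed-point (Jacobi-style) radiosity solve extended with a subsurface
    scattering matrix S: each bounce, a patch's radiosity is its emission,
    plus the diffusely reflected gather rho * (F @ B), plus light carried
    through the material, S @ (F @ B). The coupling of S and F here is an
    illustrative stand-in for the paper's model."""
    B = E.copy()
    for _ in range(n_iter):
        incident = F @ B                  # light arriving at each patch
        B = E + rho * incident + S @ incident
    return B

n = 4
rng = np.random.default_rng(3)
F = rng.random((n, n))
np.fill_diagonal(F, 0.0)                  # a patch does not see itself
F /= F.sum(axis=1, keepdims=True) * 1.5   # row sums < 1 keep energy bounded
rho = np.full(n, 0.4)                     # diffuse reflectance per patch
S = 0.05 * rng.random((n, n))             # weak subsurface transport
E = np.array([1.0, 0.0, 0.0, 0.0])        # one emitting patch
print(translucent_radiosity(E, F, rho, S))
```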

  4. An Adjoint-Based Approach to Study a Flexible Flapping Wing in Pitching-Rolling Motion

    NASA Astrophysics Data System (ADS)

    Jia, Kun; Wei, Mingjun; Xu, Min; Li, Chengyu; Dong, Haibo

    2017-11-01

    Flapping-wing aerodynamics, with advantages in agility, efficiency, and hovering capability, has been the choice of many flyers in nature. However, the study of bio-inspired flapping-wing propulsion is often hindered by the problem's large control space with different wing kinematics and deformation. The adjoint-based approach largely reduces the computational cost to a feasible level by solving an inverse problem. Facing the complication from moving boundaries, non-cylindrical calculus provides an easy extension of the traditional adjoint-based approach to handle optimization involving moving boundaries. The improved adjoint method with non-cylindrical calculus for boundary treatment is first applied to a rigid pitching-rolling plate, then extended to a flexible one with active deformation to further increase its propulsion efficiency. The comparison of flow dynamics with the initial and optimal kinematics and deformation provides a unique opportunity to understand the flapping-wing mechanism. Supported by AFOSR and ARL.

  5. Volumetric Medical Image Coding: An Object-based, Lossy-to-lossless and Fully Scalable Approach

    PubMed Central

    Danyali, Habibiollah; Mertins, Alfred

    2011-01-01

    In this article, an object-based, highly scalable, lossy-to-lossless 3D wavelet coding approach for volumetric medical image data (e.g., magnetic resonance (MR) and computed tomography (CT)) is proposed. The new method, called 3DOBHS-SPIHT, is based on the well-known set partitioning in the hierarchical trees (SPIHT) algorithm and supports both quality and resolution scalability. The 3D input data is grouped into groups of slices (GOS) and each GOS is encoded and decoded as a separate unit. The symmetric tree definition of the original 3DSPIHT is improved by introducing a new asymmetric tree structure. While preserving the compression efficiency, the new tree structure allows for a small size of each GOS, which not only reduces memory consumption during the encoding and decoding processes, but also facilitates more efficient random access to certain segments of slices. To achieve more compression efficiency, the algorithm only encodes the main object of interest in each 3D data set, which can have any arbitrary shape, and ignores the unnecessary background. The experimental results on some MR data sets show the good performance of the 3DOBHS-SPIHT algorithm for multi-resolution lossy-to-lossless coding. The compression efficiency, full scalability, and object-based features of the proposed approach, besides its lossy-to-lossless coding support, make it a very attractive candidate for volumetric medical image information archiving and transmission applications. PMID:22606653

  6. The wave-based substructuring approach for the efficient description of interface dynamics in substructuring

    NASA Astrophysics Data System (ADS)

    Donders, S.; Pluymers, B.; Ragnarsson, P.; Hadjit, R.; Desmet, W.

    2010-04-01

    In the vehicle design process, design decisions are more and more based on virtual prototypes. Due to competitive and regulatory pressure, vehicle manufacturers are forced to improve product quality, to reduce time-to-market and to launch an increasing number of design variants on the global market. To speed up the design iteration process, substructuring and component mode synthesis (CMS) methods are commonly used, involving the analysis of substructure models and the synthesis of the substructure analysis results. Substructuring and CMS enable efficient decentralized collaboration across departments and allow engineers to benefit from the availability of parallel computing environments. However, traditional CMS methods become prohibitively inefficient when substructures are coupled along large interfaces, i.e. with a large number of degrees of freedom (DOFs) at the interface between substructures. The reason is that the analysis of substructures involves the calculation of a number of enrichment vectors, one for each interface degree of freedom (DOF). Since large interfaces are common in vehicles (e.g. the continuous line connections that join the body to the windshield, roof or floor), this interface bottleneck poses a clear limitation in the vehicle noise, vibration and harshness (NVH) design process. Therefore there is a need to describe the interface dynamics more efficiently. This paper presents a wave-based substructuring (WBS) approach, which reduces the interface representation between substructures in an assembly by expressing the interface DOFs in terms of a limited set of basis functions ("waves"). As the number of basis functions can be much lower than the number of interface DOFs, this greatly facilitates the substructure analysis procedure and results in faster design predictions. The waves are calculated once from a full nominal assembly analysis, but these nominal waves can be re-used for the assembly of modified components. The WBS approach thus enables efficient structural modification predictions of the global modes, so that efficient vibro-acoustic design modification, optimization and robust design become possible. The results show that wave-based substructuring offers a clear benefit for vehicle design modifications, by improving both the speed of component reduction processes and the efficiency and accuracy of design iteration predictions, as compared to conventional substructuring approaches.
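
    A minimal sketch of the reduction idea: a wave basis extracted from nominal interface responses (here via SVD, as a stand-in for the paper's wave computation) represents the interface DOFs of a modified design with a handful of generalized coordinates.

```python
import numpy as np

def reduce_interface(u_nominal, n_waves, u_modified):
    """Wave-based interface reduction sketch. u_nominal: (n_dofs, n_cases)
    interface displacements from the nominal-assembly analysis; u_modified:
    (n_dofs,) a new interface solution. Returns the generalized (wave)
    coordinates and the relative reconstruction error."""
    # Leading left singular vectors of the nominal responses form the basis.
    Phi, _, _ = np.linalg.svd(u_nominal, full_matrices=False)
    Phi = Phi[:, :n_waves]
    q = Phi.T @ u_modified            # few unknowns instead of all DOFs
    u_approx = Phi @ q                # reconstruction from the wave basis
    err = np.linalg.norm(u_modified - u_approx) / np.linalg.norm(u_modified)
    return q, err

# Synthetic interface with 500 DOFs whose motion is dominated by a few
# smooth spatial patterns, as is typical for continuous line connections.
x = np.linspace(0, 1, 500)
modes = np.stack([np.sin((k + 1) * np.pi * x) for k in range(6)], axis=1)
u_nom = modes @ np.random.default_rng(4).random((6, 30))   # 30 load cases
u_new = modes @ np.array([1.0, 0.5, 0.2, 0.1, 0.0, 0.05])  # modified design
q, err = reduce_interface(u_nom, n_waves=6, u_modified=u_new)
print(len(q), err)   # 6 generalized coordinates instead of 500 DOFs; err ~ 0
```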

  7. Silicon solar cells: Past, present and the future

    NASA Astrophysics Data System (ADS)

    Lee, Youn-Jung; Kim, Byung-Sung; Ifitiquar, S. M.; Park, Cheolmin; Yi, Junsin

    2014-08-01

    There has been a great demand for renewable energy over the last few years. However, the solar cell industry is currently experiencing a temporary plateau due to a sluggish economy and an oversupply of low-quality cells. The current situation can be overcome by reducing the production cost and by improving the cell's conversion efficiency. New materials such as compound semiconductor thin films have been explored to reduce the fabrication cost, and structural changes have been explored to improve the cell's efficiency. Although a record efficiency of 24.7% is held by a PERL-structured silicon solar cell and 13.44% has been realized using a thin silicon film, the mass production of these cells is still too expensive. Crystalline and amorphous silicon-based solar cells have led the solar industry and have occupied more than half of the market so far. They will remain so in the future photovoltaic (PV) market by playing a pivotal role in the solar industry. In this paper, we discuss two primary approaches that may boost the silicon-based solar cell market: a high-efficiency approach and a low-cost approach. We also discuss the future prospects of various solar cells.

  8. Improving the efficiency of a chemotherapy day unit: applying a business approach to oncology.

    PubMed

    van Lent, Wineke A M; Goedbloed, N; van Harten, W H

    2009-03-01

    The aim was to improve the efficiency of a hospital-based chemotherapy day unit (CDU). The CDU was benchmarked with two other CDUs to identify their attainable performance levels for efficiency and the causes of differences. Furthermore, an in-depth analysis using a business approach called lean thinking was performed. An integrated set of interventions was implemented, among them a new planning system. The results were evaluated using pre- and post-measurements. We observed 24% growth in treatments and bed utilisation, a 12% increase in staff member productivity and an 81% reduction in overtime. The method used improved process design and led to increased efficiency and a more timely delivery of care. Thus, the business approaches, which were adapted for healthcare, were successfully applied. The method may serve as an example for other oncology settings with problems concerning waiting times, patient flow or lack of beds.

  9. Variable Mach number design approach for a parallel waverider with a wide-speed range based on the osculating cone theory

    NASA Astrophysics Data System (ADS)

    Zhao, Zhen-tao; Huang, Wei; Li, Shi-Bin; Zhang, Tian-Tian; Yan, Li

    2018-06-01

    In the current study, a variable Mach number waverider design approach has been proposed based on the osculating cone theory. Using a program that calculates waverider volumetric efficiencies, the design Mach number of the osculating cone constant Mach number waverider with the same volumetric efficiency as the osculating cone variable Mach number waverider has been determined. The CFD approach has been utilized to verify the effectiveness of the proposed approach. At the same time, through a comparative analysis of the aerodynamic performance, the performance advantage of the osculating cone variable Mach number waverider is studied. The obtained results show that the osculating cone variable Mach number waverider achieves a higher lift-to-drag ratio throughout the flight profile when compared with the osculating cone constant Mach number waverider, and it has superior low-speed aerodynamic performance while maintaining nearly the same high-speed aerodynamic performance.

  10. An integrative multi-criteria decision making techniques for supplier evaluation problem with its application

    NASA Astrophysics Data System (ADS)

    Fatrias, D.; Kamil, I.; Meilani, D.

    2018-03-01

    Coordinating business operations with suppliers has become increasingly important for surviving and prospering in a dynamic business environment. A good partnership with suppliers not only increases efficiency but also strengthens corporate competitiveness. Against this background, this study aims to develop a practical approach to multi-criteria supplier evaluation using the combined methods of the Taguchi loss function (TLF), the best-worst method (BWM) and VIse Kriterijumska Optimizacija kompromisno Resenje (VIKOR). A new framework of an integrative approach adopting these methods is our main contribution to the supplier evaluation literature. In this integrated approach, a compromise supplier ranking list based on the loss scores of suppliers is obtained using the efficient steps of a pairwise-comparison-based decision making process. Implementation on a case problem with real data from the crumb rubber industry shows the usefulness of the proposed approach. Finally, a suitable managerial implication is presented.

  11. An efficient method for the fusion of light field refocused images

    NASA Astrophysics Data System (ADS)

    Wang, Yingqian; Yang, Jungang; Xiao, Chao; An, Wei

    2018-04-01

    Light field cameras have drawn much attention due to the advantage of post-capture adjustments such as refocusing after exposure. The depth of field in refocused images is always shallow because of the large equivalent aperture. As a result, a large number of multi-focus images are obtained and an all-in-focus image is demanded. Considering that most multi-focus image fusion algorithms are not designed for large numbers of source images, and that the traditional DWT-based fusion approach has serious problems in dealing with many multi-focus images, causing color distortion and ringing effects, this paper proposes an efficient multi-focus image fusion method based on the stationary wavelet transform (SWT), which can deal with a large quantity of multi-focus images with shallow depths of field. We compare the SWT-based approach with the DWT-based approach on various occasions, and the results demonstrate that the proposed method performs much better both visually and quantitatively.
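
    A minimal sketch of SWT-based fusion using PyWavelets follows; averaging the approximation coefficients and keeping the per-pixel maximum-magnitude detail coefficients is a standard fusion rule assumed here, and the paper's exact rule may differ. The box blur faking defocus and all sizes are illustrative.

```python
import numpy as np
import pywt

def fuse_multifocus(images, wavelet="db2", level=2):
    """SWT-based fusion of co-registered grayscale multi-focus images:
    average approximation coefficients; per pixel, keep the detail
    coefficient with the largest magnitude across the stack."""
    coeffs = [pywt.swt2(im, wavelet, level=level) for im in images]
    fused = []
    for lev in range(level):
        approx = np.mean([c[lev][0] for c in coeffs], axis=0)
        details = []
        for band in range(3):               # horizontal, vertical, diagonal
            stack = np.stack([c[lev][1][band] for c in coeffs])
            winner = np.abs(stack).argmax(axis=0)
            details.append(np.take_along_axis(stack, winner[None], axis=0)[0])
        fused.append((approx, tuple(details)))
    return pywt.iswt2(fused, wavelet)

def box_blur(img, n=5):
    """Simple separable box blur used to fake out-of-focus regions."""
    out = img.copy()
    for axis in (0, 1):
        out = sum(np.roll(out, s, axis) for s in range(-n, n + 1)) / (2 * n + 1)
    return out

# Synthetic stack: the same scene blurred in complementary halves.
rng = np.random.default_rng(6)
scene = rng.random((256, 256))              # side divisible by 2**level
blurred = box_blur(scene)
left_focused = scene.copy()
left_focused[:, 128:] = blurred[:, 128:]
right_focused = scene.copy()
right_focused[:, :128] = blurred[:, :128]
result = fuse_multifocus([left_focused, right_focused])
print(np.abs(result - scene).mean(), np.abs(left_focused - scene).mean())
```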

  12. Game theoretic sensor management for target tracking

    NASA Astrophysics Data System (ADS)

    Shen, Dan; Chen, Genshe; Blasch, Erik; Pham, Khanh; Douville, Philip; Yang, Chun; Kadar, Ivan

    2010-04-01

    This paper develops and evaluates a game-theoretic approach to distributed sensor-network management for target tracking via sensor-based negotiation. We present a distributed sensor-based negotiation game model of sensor management for multi-sensor multi-target tracking situations. In our negotiation framework, each negotiation agent represents a sensor, and each sensor maximizes its utility using a game approach. The greediness of each sensor is limited by the fact that sensor-to-target assignment efficiency decreases if too many sensor resources are assigned to the same target. This is similar to market mechanisms in the real world, such as agreements between buyers and sellers in an auction market. Sensors are willing to switch targets so that they can obtain the highest utility and the most efficient use of their resources. Our subgame-perfect-equilibrium-based negotiation strategies dynamically and distributedly assign sensors to targets. Numerical simulations are performed to demonstrate our sensor-based negotiation approach for distributed sensor management.

  13. An optimization based sampling approach for multiple metrics uncertainty analysis using generalized likelihood uncertainty estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Rurui; Li, Yu; Lu, Di; Liu, Haixing; Zhou, Huicheng

    2016-09-01

    This paper investigates the use of an epsilon-dominance non-dominated sorted genetic algorithm II (ɛ-NSGAII) as a sampling approach with the aim of improving sampling efficiency for multiple-metrics uncertainty analysis using Generalized Likelihood Uncertainty Estimation (GLUE). The effectiveness of ɛ-NSGAII-based sampling is demonstrated against Latin hypercube sampling (LHS) by analyzing sampling efficiency, multiple-metrics performance, parameter uncertainty and flood forecasting uncertainty in a case study of flood forecasting uncertainty evaluation based on the Xinanjiang model (XAJ) for the Qing River reservoir, China. The results demonstrate the following advantages of ɛ-NSGAII-based sampling over LHS: (1) it is more effective and efficient; for example, the simulation time required to generate 1000 behavioral parameter sets is about nine times shorter. (2) The Pareto tradeoffs between metrics are demonstrated clearly by the solutions from ɛ-NSGAII-based sampling, and their Pareto-optimal values are better than those of LHS, indicating better forecasting accuracy of the ɛ-NSGAII parameter sets. (3) The parameter posterior distributions from ɛ-NSGAII-based sampling are concentrated in appropriate ranges rather than uniform, which accords with their physical significance, and parameter uncertainties are reduced significantly. (4) The forecasted floods are close to the observations as evaluated by three measures: the normalized total flow outside the uncertainty intervals (FOUI), the average relative band-width (RB) and the average deviation amplitude (D); flood forecasting uncertainty is also considerably reduced with ɛ-NSGAII-based sampling. This study provides a new sampling approach to improve multiple-metrics uncertainty analysis under the GLUE framework, and could be used to reveal the underlying mechanisms of parameter sets under multiple conflicting metrics in the uncertainty analysis process.
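
    Epsilon-dominance, the mechanism that distinguishes ɛ-NSGAII from plain NSGA-II, can be stated in a few lines: objective space is divided into boxes of side ɛ, and a solution dominates another if its box dominates the other's. The following Python sketch (assuming minimization of all metrics; the tolerances and example values are illustrative) is a generic statement of that test, not code from the study.

        import numpy as np

        def eps_box(f, eps):
            """Index of the epsilon-box containing objective vector f (minimization)."""
            return np.floor(np.asarray(f) / np.asarray(eps)).astype(int)

        def eps_dominates(f1, f2, eps):
            """True if f1 epsilon-dominates f2: f1's box is no worse in every
            objective and strictly better in at least one."""
            b1, b2 = eps_box(f1, eps), eps_box(f2, eps)
            return bool(np.all(b1 <= b2) and np.any(b1 < b2))

        # Hypothetical two-metric example (e.g. 1 - NSE and peak-flow error):
        print(eps_dominates([0.10, 0.30], [0.12, 0.55], eps=[0.05, 0.05]))  # True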

  14. Learner Performance in Multimedia Learning Arrangements: An Analysis across Instructional Approaches

    ERIC Educational Resources Information Center

    Eysink, Tessa H. S.; de Jong, Ton; Berthold, Kirsten; Kolloffel, Bas; Opfermann, Maria; Wouters, Pieter

    2009-01-01

    In this study, the authors compared four multimedia learning arrangements differing in instructional approach on effectiveness and efficiency for learning: (a) hypermedia learning, (b) observational learning, (c) self-explanation-based learning, and (d) inquiry learning. The approaches all advocate learners' active attitude toward the learning…

  15. Action Spotting and Recognition Based on a Spatiotemporal Orientation Analysis.

    PubMed

    Derpanis, Konstantinos G; Sizintsev, Mikhail; Cannons, Kevin J; Wildes, Richard P

    2013-03-01

    This paper provides a unified framework for the interrelated topics of action spotting, the spatiotemporal detection and localization of human actions in video, and action recognition, the classification of a given video into one of several predefined categories. A novel compact local descriptor of video dynamics in the context of action spotting and recognition is introduced based on visual spacetime oriented energy measurements. This descriptor is efficiently computed directly from raw image intensity data and thereby forgoes the problems typically associated with flow-based features. Importantly, the descriptor allows for the comparison of the underlying dynamics of two spacetime video segments irrespective of spatial appearance, such as differences induced by clothing, and with robustness to clutter. An associated similarity measure is introduced that admits efficient exhaustive search for an action template, derived from a single exemplar video, across candidate video sequences. The general approach presented for action spotting and recognition is amenable to efficient implementation, which is deemed critical for many important applications. For action spotting, details of a real-time GPU-based instantiation of the proposed approach are provided. Empirical evaluation of both action spotting and action recognition on challenging datasets suggests the efficacy of the proposed approach, with state-of-the-art performance documented on standard datasets.
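
    The descriptor's core, measuring oriented spacetime energy irrespective of appearance, can be approximated with directional derivatives of a video volume followed by L1 normalization. The Python sketch below, using SciPy Gaussian derivative filters along assumed spatiotemporal orientations, is a crude stand-in for the paper's steerable oriented-energy filters, with a histogram-intersection similarity standing in for the paper's similarity measure.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def oriented_energy_descriptor(video, sigma=2.0):
            """Crude oriented-energy descriptor for a (t, y, x) video volume.
            Uses squared Gaussian derivatives along axis-aligned and diagonal
            spacetime directions as a proxy for steerable oriented filters."""
            dt = gaussian_filter(video, sigma, order=(1, 0, 0))
            dy = gaussian_filter(video, sigma, order=(0, 1, 0))
            dx = gaussian_filter(video, sigma, order=(0, 0, 1))
            # axis-aligned plus two diagonal (motion-like) orientations
            energies = [dt ** 2, dy ** 2, dx ** 2, (dt + dx) ** 2, (dt + dy) ** 2]
            e = np.array([en.sum() for en in energies])
            return e / (e.sum() + 1e-12)   # L1 normalization discounts appearance

        def similarity(d1, d2):
            """Histogram intersection between two normalized descriptors."""
            return np.minimum(d1, d2).sum()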

  16. An efficient approach to the deployment of complex open source information systems

    PubMed Central

    Cong, Truong Van Chi; Groeneveld, Eildert

    2011-01-01

    Complex open source information systems are usually implemented as component-based software to inherit the available functionality of existing software packages developed by third parties. Consequently, the deployment of these systems not only requires the installation of the operating system and application framework and the configuration of services, but also needs to resolve the dependencies among components. The problem becomes more challenging when the application must be installed and used on different platforms such as Linux and Windows. To address this, an efficient approach using virtualization technology is suggested and discussed in this paper. The approach has been applied in our project to deploy a web-based integrated information system in molecular genetics labs. It is a low-cost solution that benefits both software developers and end-users. PMID:22102770

  17. Algorithmic design of a noise-resistant and efficient closed-loop deep brain stimulation system: A computational approach.

    PubMed

    Karamintziou, Sofia D; Custódio, Ana Luísa; Piallat, Brigitte; Polosan, Mircea; Chabardès, Stéphan; Stathis, Pantelis G; Tagaris, George A; Sakas, Damianos E; Polychronaki, Georgia E; Tsirogiannis, George L; David, Olivier; Nikita, Konstantina S

    2017-01-01

    Advances in the field of closed-loop neuromodulation call for analysis and modeling approaches capable of confronting challenges related to the complex neuronal response to stimulation and the presence of strong internal and measurement noise in neural recordings. Here we elaborate on the algorithmic aspects of a noise-resistant closed-loop subthalamic nucleus deep brain stimulation system for advanced Parkinson's disease and treatment-refractory obsessive-compulsive disorder, ensuring remarkable performance in terms of both the efficiency and selectivity of stimulation, as well as computational speed. First, we propose an efficient method drawn from dynamical systems theory for the reliable assessment of significant nonlinear coupling between beta and high-frequency subthalamic neuronal activity, as a biomarker for feedback control. Further, we present a model-based strategy through which optimal stimulation parameters for minimum-energy desynchronizing control of neuronal activity are identified. The strategy integrates stochastic modeling and derivative-free optimization of neural dynamics based on quadratic modeling. On the basis of numerical simulations, we demonstrate the potential of the presented modeling approach to identify, at a relatively low computational cost, stimulation settings potentially associated with a significantly higher degree of efficiency and selectivity compared with stimulation settings determined post-operatively. Our data reinforce the hypothesis that model-based control strategies are crucial for the design of novel stimulation protocols at the backstage of clinical applications.
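
    As a rough illustration of a beta/high-frequency coupling biomarker, the sketch below computes phase-amplitude coupling between the beta-band phase and high-frequency amplitude of a recording via the Hilbert transform and the mean-vector-length index. This is a generic coupling estimate for intuition only; the paper's assessment uses a dynamical-systems method, and the band edges and sampling rate here are assumptions.

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def bandpass(x, lo, hi, fs, order=4):
            b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
            return filtfilt(b, a, x)

        def beta_hf_coupling(x, fs, beta=(13, 30), hf=(200, 400)):
            """Mean-vector-length phase-amplitude coupling between beta phase
            and high-frequency amplitude (Canolty-style index)."""
            phase = np.angle(hilbert(bandpass(x, *beta, fs)))
            amp = np.abs(hilbert(bandpass(x, *hf, fs)))
            return np.abs(np.mean(amp * np.exp(1j * phase)))

        fs = 2000.0                                    # assumed sampling rate (Hz)
        t = np.arange(0, 10, 1 / fs)
        lfp = np.sin(2 * np.pi * 20 * t)               # synthetic beta rhythm
        lfp += (1 + np.sin(2 * np.pi * 20 * t)) * 0.3 * np.sin(2 * np.pi * 300 * t)
        print(beta_hf_coupling(lfp, fs))               # larger when HF amplitude tracks beta phase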

  18. Statistical Techniques to Explore the Quality of Constraints in Constraint-Based Modeling Environments

    ERIC Educational Resources Information Center

    Gálvez, Jaime; Conejo, Ricardo; Guzmán, Eduardo

    2013-01-01

    One of the most popular student modeling approaches is Constraint-Based Modeling (CBM). It is an efficient approach that can be easily applied inside an Intelligent Tutoring System (ITS). Even with these characteristics, building new ITSs requires carefully designing the domain model to be taught because different sources of errors could affect…

  19. A new distributed systems scheduling algorithm: a swarm intelligence approach

    NASA Astrophysics Data System (ADS)

    Haghi Kashani, Mostafa; Sarvizadeh, Raheleh; Jameii, Mahdi

    2011-12-01

    The scheduling problem in distributed systems is known to be NP-complete, and methods based on heuristic or metaheuristic search have been proposed to obtain optimal and suboptimal solutions. Task scheduling is a key factor for distributed systems to achieve better performance. In this paper, an efficient method based on a memetic algorithm is developed to solve the distributed systems scheduling problem. To balance load efficiently, Artificial Bee Colony (ABC) is applied as the local search in the proposed memetic algorithm. The proposed method is compared to an existing memetic-based approach that uses Learning Automata as its local search. The results demonstrate that the proposed method outperforms the above-mentioned method in terms of communication cost.

  20. An Efficient Buyer-Seller Watermarking Protocol Based on Chameleon Encryption

    NASA Astrophysics Data System (ADS)

    Poh, Geong Sen; Martin, Keith M.

    Buyer-seller watermarking protocols are designed to deter clients from illegally distributing copies of digital content. This is achieved by allowing a distributor to insert a unique watermark into content in such a way that the distributor does not know the final watermarked copy that is given to the client. This protects both the client and distributor from attempts by one to falsely accuse the other of misuse. Buyer-seller watermarking protocols are normally based on asymmetric cryptographic primitives known as homomorphic encryption schemes. However, the computational and communication overhead of this conventional approach is high. In this paper we propose a different approach, based on the symmetric Chameleon encryption scheme. We show that this leads to significant gains in computational and operational efficiency.

  1. High-Quality (CH3NH3)3Bi2I9 Film-Based Solar Cells: Pushing Efficiency up to 1.64%.

    PubMed

    Zhang, Zheng; Li, Xiaowei; Xia, Xiaohong; Wang, Zhuo; Huang, Zhongbing; Lei, Binglong; Gao, Yun

    2017-09-07

    Bismuth-based solar cells have exhibited some advantages over lead perovskite solar cells in nontoxicity and superior stability, currently the two main concerns in the photovoltaic community. For the perovskite-related compound (CH3NH3)3Bi2I9 applied in solar cells, the conversion efficiency is severely restricted by unsatisfactory photoactive film quality. Herein we report a novel two-step approach (high-vacuum BiI3 deposition followed by low-vacuum homogeneous transformation of BiI3 to (CH3NH3)3Bi2I9) for highly compact, pinhole-free, large-grained films, which exhibit an absorption coefficient, trap density of states, and charge diffusion length comparable to those of some lead perovskite analogues. Accordingly, the solar cells have achieved a record power conversion efficiency of 1.64% and a high external quantum efficiency approaching 60%. Our work demonstrates the potential of (CH3NH3)3Bi2I9 for highly efficient and long-term stable solar cells.

  2. Gallium arsenide solar cell efficiency: Problems and potential

    NASA Technical Reports Server (NTRS)

    Weizer, V. G.; Godlewski, M. P.

    1985-01-01

    Under ideal conditions the GaAs solar cell should be able to operate at an AMO efficiency exceeding 27 percent, whereas to date the best measured efficiencies barely exceed 19 percent. Of more concern is the fact that there has been no improvement in the past half decade, despite the expenditure of considerable effort. State-of-the-art GaAs efficiency is analyzed in an attempt to determine the feasibility of improving on the status quo. The possible gains to be had in the planar cell configuration are examined, and an attempt is also made to predict the efficiency levels that could be achieved with a grating geometry. Both the N-base and the P-base GaAs cells in their planar configurations have the potential to operate at AMO efficiencies between 23 and 24 percent. For the former the enabling technology is essentially in hand, while for the latter the problem of passivating the emitter surface remains to be solved. In the dot grating configuration, P-base efficiencies approaching 26 percent are possible with minor improvements in existing technology. N-base grating cell efficiencies comparable to those predicted for the P-base cell are achievable if the N surface can be sufficiently passivated.

  3. Factors limiting device efficiency in organic photovoltaics.

    PubMed

    Janssen, René A J; Nelson, Jenny

    2013-04-04

    The power conversion efficiency of the most efficient organic photovoltaic (OPV) cells has recently increased to over 10%. To enable further increases, the factors limiting device efficiency in OPV must be identified. In this review, the operational mechanism of OPV cells is explained and the detailed balance limit to photovoltaic energy conversion, as developed by Shockley and Queisser, is outlined. The various approaches that have been developed to estimate the maximum practically achievable efficiency in OPV are then discussed, based on empirical knowledge of organic semiconductor materials. Subsequently, thermodynamic and kinetic approaches are described that adapt the detailed balance theory to incorporate some of the fundamentally different processes in organic solar cells, which originate from the use of a combination of two complementary (donor and acceptor) organic semiconductors. The more empirical formulations of the efficiency limits provide estimates of 10-12%, but the more fundamental descriptions suggest that limits of 20-24% are reachable in single junctions, similar to the highest efficiencies obtained for crystalline silicon p-n junction solar cells. Closing this gap sets the stage for future materials research and development of OPV. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Advancing Efficient All-Electron Electronic Structure Methods Based on Numeric Atom-Centered Orbitals for Energy Related Materials

    NASA Astrophysics Data System (ADS)

    Blum, Volker

    This talk describes recent advances in a general, efficient, accurate all-electron electronic structure approach based on numeric atom-centered orbitals; emphasis is placed on developments related to materials for energy conversion and their discovery. For total energies and electron band structures, we show that the overall accuracy is on par with the best benchmark-quality codes for materials, but scalable to large system sizes (thousands of atoms) and amenable to both periodic and non-periodic simulations. A recent localized resolution-of-identity approach for the Coulomb operator enables O(N) hybrid-functional-based descriptions of the electronic structure of non-periodic and periodic systems, shown for supercell sizes up to 1,000 atoms; the same approach yields accurate results for many-body perturbation theory as well. For molecular systems, we also show how many-body perturbation theory for charged and neutral quasiparticle excitation energies can be applied efficiently yet accurately using basis sets of computationally manageable size. Finally, the talk highlights applications to the electronic structure of hybrid organic-inorganic perovskite materials, as well as to graphene-based substrates for possible future transition metal compound-based electrocatalyst materials. All methods described here are part of the FHI-aims code. VB gratefully acknowledges contributions by numerous collaborators at Duke University, Fritz Haber Institute Berlin, TU Munich, USTC Hefei, Aalto University, and many others around the globe.

  5. Increasing efficiency of CO2 uptake by combined land-ocean sink

    NASA Astrophysics Data System (ADS)

    van Marle, M.; van Wees, D.; Houghton, R. A.; Nassikas, A.; van der Werf, G.

    2017-12-01

    Carbon-climate feedbacks are one of the key uncertainties in predicting future climate change. Such a feedback could originate from carbon sinks losing their efficiency, for example due to saturation of the CO2 fertilization effect or ocean warming. An indirect approach to estimating how the combined land and ocean sink responds to climate change and growing fossil fuel emissions is based on assessing trends in the airborne fraction of CO2 emissions from fossil fuels and land use change. One key limitation of this approach has been the large uncertainty in quantifying land use change emissions. We have re-assessed those emissions in a more data-driven way by combining estimates from a bookkeeping model with visibility-based land use change emissions for the Arc of Deforestation and Equatorial Asia, two key regions with large land use change emissions. The advantage of the visibility-based dataset is that the emissions are observation-based, and it provides more detailed information on interannual variability than previous estimates. Based on our estimates, we provide evidence that land use and land cover change emissions have increased more rapidly than previously thought, implying that the airborne fraction has decreased since the start of CO2 measurements in 1959. This finding is surprising because it means that the combined land and ocean sink has become more efficient, while the opposite is expected.
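
    The airborne fraction at the center of this argument is simply the ratio of the atmospheric CO2 growth rate to total anthropogenic emissions, AF = G_atm / (E_FF + E_LUC), so revising land use change emissions upward mechanically lowers AF. A minimal Python illustration with made-up numbers, not the study's data:

        def airborne_fraction(atm_growth, fossil, land_use):
            """AF = atmospheric CO2 growth / (fossil-fuel + land-use-change emissions),
            all in consistent units (e.g. PgC per year)."""
            return atm_growth / (fossil + land_use)

        # Hypothetical yearly values (PgC/yr): same growth, larger LUC -> smaller AF.
        print(airborne_fraction(5.0, 9.0, 1.0))   # 0.50 with a low LUC estimate
        print(airborne_fraction(5.0, 9.0, 2.0))   # ~0.45 with a revised, higher LUC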

  6. On Improving Efficiency of Differential Evolution for Aerodynamic Shape Optimization Applications

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.

    2004-01-01

    Differential Evolution (DE) is a simple and robust evolutionary strategy that has proven effective in determining the global optimum for several difficult optimization problems. Although DE offers several advantages over traditional optimization approaches, its use in applications such as aerodynamic shape optimization, where objective function evaluations are computationally expensive, is limited by the large number of function evaluations often required. In this paper various approaches for improving the efficiency of DE are reviewed and discussed. Several approaches that have proven effective for other evolutionary algorithms are modified and implemented in a DE-based aerodynamic shape optimization method that uses a Navier-Stokes solver for the objective function evaluations. Parallelization techniques on distributed computers are used to reduce turnaround times. Results are presented for standard test optimization problems and for the inverse design of a turbine airfoil. The efficiency improvements achieved by the different approaches are evaluated and compared.
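
    For reference, the basic DE/rand/1/bin operators that such efficiency improvements build on can be written in a few lines. The Python sketch below is a textbook implementation, not NASA's optimizer, with an inexpensive test objective standing in for the Navier-Stokes evaluation.

        import numpy as np

        def de_rand_1_bin(obj, bounds, pop_size=20, F=0.8, CR=0.9, gens=100, seed=0):
            """Textbook differential evolution (DE/rand/1/bin) minimizer."""
            rng = np.random.default_rng(seed)
            lo, hi = np.asarray(bounds, dtype=float).T
            dim = len(lo)
            pop = lo + rng.random((pop_size, dim)) * (hi - lo)
            cost = np.array([obj(x) for x in pop])
            for _ in range(gens):
                for i in range(pop_size):
                    a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                             3, replace=False)]
                    mutant = np.clip(a + F * (b - c), lo, hi)       # mutation
                    cross = rng.random(dim) < CR                    # binomial crossover
                    cross[rng.integers(dim)] = True                 # keep >= 1 mutant gene
                    trial = np.where(cross, mutant, pop[i])
                    f = obj(trial)                                  # expensive step in CFD
                    if f < cost[i]:                                 # greedy selection
                        pop[i], cost[i] = trial, f
            return pop[cost.argmin()], cost.min()

        sphere = lambda x: float(np.sum(x ** 2))                    # cheap stand-in objective
        best, val = de_rand_1_bin(sphere, bounds=[(-5, 5)] * 4)
        print(best, val)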

  7. Accelerated Enveloping Distribution Sampling: Enabling Sampling of Multiple End States while Preserving Local Energy Minima.

    PubMed

    Perthold, Jan Walther; Oostenbrink, Chris

    2018-05-17

    Enveloping distribution sampling (EDS) is an efficient approach to calculate multiple free-energy differences from a single molecular dynamics (MD) simulation. However, the construction of an appropriate reference-state Hamiltonian that samples all states efficiently is not straightforward. We propose a novel approach for the construction of the EDS reference-state Hamiltonian, related to a previously described procedure for smoothing energy landscapes. In contrast to previously suggested EDS approaches, our reference-state Hamiltonian preserves the local energy minima of the combined end states. Moreover, we propose an intuitive, robust and efficient parameter optimization scheme to tune the EDS Hamiltonian parameters. We demonstrate the proposed method on established and novel test systems and conclude that our approach allows for the automated calculation of multiple free-energy differences from a single simulation. Accelerated EDS promises to be a robust and user-friendly method for computing free-energy differences based on solid statistical mechanics.
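
    The central quantities in EDS can be written compactly. Below are the standard reference-state Hamiltonian and the single-simulation free-energy estimator as commonly given in the EDS literature (notation assumed: beta = 1/kT, s a smoothness parameter, E_i^R energy offsets); this is background, not the paper's modified reference state.

        % EDS reference-state Hamiltonian enveloping N end states:
        H_R(\mathbf{r}) = -\frac{1}{\beta s}\,
            \ln \sum_{i=1}^{N} e^{-\beta s\,\left(H_i(\mathbf{r}) - E_i^R\right)}

        % Free-energy difference of end state i relative to the reference,
        % estimated from the single reference-state trajectory:
        \Delta F_{iR} = -\frac{1}{\beta}\,
            \ln \left\langle e^{-\beta\,\left(H_i - H_R\right)} \right\rangle_R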

  8. Modelling and analysis of solar cell efficiency distributions

    NASA Astrophysics Data System (ADS)

    Wasmer, Sven; Greulich, Johannes

    2017-08-01

    We present an approach to model the distribution of solar cell efficiencies achieved in production lines based on numerical simulations, metamodeling and Monte Carlo simulations. We validate our methodology using the example of an industrially feasible p-type multicrystalline silicon “passivated emitter and rear cell” process. Applying the metamodel, we investigate the impact of each input parameter on the distribution of cell efficiencies in a variance-based sensitivity analysis, identifying the parameters and processes that need to be improved and controlled most accurately. We show that if these could be optimized, the mean cell efficiency of our examined cell process would increase from 17.62% ± 0.41% to 18.48% ± 0.09%. As the method relies on advanced characterization and simulation techniques, we furthermore introduce a simplification that enhances applicability by requiring only two common measurements of finished cells. The presented approaches can be especially helpful for ramping up production, but can also be applied to enhance established manufacturing.

  9. Dispersion of speckle suppression efficiency for binary DOE structures: spectral domain and coherent matrix approaches.

    PubMed

    Lapchuk, Anatoliy; Prygun, Olexandr; Fu, Minglei; Le, Zichun; Xiong, Qiyuan; Kryuchyn, Andriy

    2017-06-26

    We present the first general theoretical description of speckle suppression efficiency based on an active diffractive optical element (DOE). The approach is based on spectral analysis of diffracted beams and a coherent matrix. Analytical formulae are obtained for the dispersion of speckle suppression efficiency using different DOE structures and different DOE activation methods. We show that a one-sided 2D DOE structure has smaller speckle suppression range than a two-sided 1D DOE structure. Both DOE structures have sufficient speckle suppression range to suppress low-order speckles in the entire visible range, but only the two-sided 1D DOE can suppress higher-order speckles. We also show that a linear shift 2D DOE in a laser projector with a large numerical aperture has higher effective speckle suppression efficiency than the method using switching or step-wise shift DOE structures. The generalized theoretical models elucidate the mechanism and practical realization of speckle suppression.

  10. Scalable and efficient separation of hydrogen isotopes using graphene-based electrochemical pumping

    NASA Astrophysics Data System (ADS)

    Lozada-Hidalgo, M.; Zhang, S.; Hu, S.; Esfandiar, A.; Grigorieva, I. V.; Geim, A. K.

    2017-05-01

    Thousands of tons of isotopic mixtures are processed annually for heavy-water production and tritium decontamination. The existing technologies remain extremely energy intensive and require large capital investments, and new approaches are needed to reduce the industry's footprint. Recently, micrometre-size crystals of graphene were shown to act as efficient sieves for hydrogen isotopes pumped through graphene electrochemically. Here we report a fully scalable approach, using graphene obtained by chemical vapour deposition, which allows a proton-deuteron separation factor of around 8, despite cracks and imperfections. The energy consumption is projected to be orders of magnitude smaller than that of existing technologies. A membrane based on 30 m2 of graphene, a readily accessible amount, could provide a heavy-water output comparable to that of modern plants. Even higher efficiency is expected for tritium separation. With no fundamental obstacles to scaling up, the technology's simplicity, efficiency and green credentials call for consideration by the nuclear and related industries.

  11. Estimating Origin-Destination Matrices Using AN Efficient Moth Flame-Based Spatial Clustering Approach

    NASA Astrophysics Data System (ADS)

    Heidari, A. A.; Moayedi, A.; Abbaspour, R. Ali

    2017-09-01

    Automated fare collection (AFC) systems are regarded as valuable resources for public transport planners. In this paper, AFC data are utilized to analyse and extract mobility patterns in a public transportation system. For this purpose, the smart card data are fed into a proposed metaheuristic-based aggregation model and then converted to an O-D matrix between stops, since the size of raw O-D matrices makes it difficult to reproduce the measured passenger flows precisely. The proposed strategy is applied to a case study from Haaglanden, Netherlands. In this research, the moth-flame optimizer (MFO) is utilized and evaluated for the first time as a new metaheuristic algorithm (MA) for estimating transit origin-destination matrices. The MFO is a novel, efficient swarm-based MA inspired by the celestial navigation of moths in nature. To investigate the capabilities of the proposed MFO-based approach, it is compared to methods that utilize the K-means algorithm, the gray wolf optimization algorithm (GWO) and the genetic algorithm (GA). The sum of the intra-cluster distances and the computational time of operations are considered as the evaluation criteria to assess the efficacy of the optimizers, and the optimality of the solutions of the different algorithms is measured in detail. The travelers' behavior is analyzed to achieve a smooth and optimized transport system. The results reveal that the proposed MFO-based aggregation strategy outperforms the other evaluated approaches in terms of convergence tendency and optimality of results, and can thus be utilized as an efficient approach to estimating transit O-D matrices.
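
    The moth-flame optimizer's characteristic move is a logarithmic spiral of each moth around a flame drawn from the best solutions found so far. The Python sketch below shows that update in a minimal form (b is the spiral constant from Mirjalili's formulation; the bounds and test objective are illustrative); it is a generic MFO kernel, not the clustering model of this paper.

        import numpy as np

        def mfo_minimize(obj, dim, n_moths=20, iters=100, lb=-10.0, ub=10.0, b=1.0, seed=0):
            """Minimal moth-flame optimization with a logarithmic spiral update."""
            rng = np.random.default_rng(seed)
            moths = lb + rng.random((n_moths, dim)) * (ub - lb)
            for it in range(iters):
                fitness = np.array([obj(m) for m in moths])
                flames = moths[fitness.argsort()].copy()          # best solutions first
                n_flames = max(1, round(n_moths - it * (n_moths - 1) / iters))
                a = -1.0 - it / iters                             # shrinks from -1 to -2
                for i in range(n_moths):
                    j = min(i, n_flames - 1)                      # flame paired with moth i
                    dist = np.abs(flames[j] - moths[i])
                    t = (a - 1) * rng.random(dim) + 1             # spiral parameter in [a, 1]
                    moths[i] = np.clip(dist * np.exp(b * t) * np.cos(2 * np.pi * t)
                                       + flames[j], lb, ub)
            fitness = np.array([obj(m) for m in moths])
            return moths[fitness.argmin()], fitness.min()

        best, val = mfo_minimize(lambda x: float(np.sum(x ** 2)), dim=3)
        print(best, val)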

  12. Learning-Based Just-Noticeable-Quantization- Distortion Modeling for Perceptual Video Coding.

    PubMed

    Ki, Sehwan; Bae, Sung-Ho; Kim, Munchurl; Ko, Hyunsuk

    2018-07-01

    Conventional predictive video coding-based approaches are reaching the limit of their potential coding efficiency improvements because of severely increasing computational complexity. As an alternative, perceptual video coding (PVC) attempts to achieve high coding efficiency by eliminating perceptual redundancy using just-noticeable-distortion (JND) directed PVC. Previous JND models were built by adding white Gaussian noise or specific signal patterns to the original images, which is not appropriate for finding JND thresholds of distortions with energy reduction. In this paper, we present a novel discrete cosine transform-based energy-reduced JND model, called ERJND, that is more suitable for JND-based PVC schemes. The proposed ERJND model is then extended to two learning-based just-noticeable-quantization-distortion (JNQD) models that can be applied as preprocessing for perceptual video coding. The two JNQD models can automatically adjust JND levels based on given quantization step sizes. One of the two, called LR-JNQD, is based on linear regression and determines the model parameters for JNQD from extracted handcrafted features. The other is based on a convolutional neural network (CNN) and is called CNN-JNQD. To the best of our knowledge, this is the first approach to automatically adjust JND levels according to quantization step sizes when preprocessing the input to video encoders. In experiments, both the LR-JNQD and CNN-JNQD models were applied to high efficiency video coding (HEVC) and yielded maximum (average) bitrate reductions of 38.51% (10.38%) and 67.88% (24.91%), respectively, with little subjective video quality degradation compared with the input without preprocessing.

  13. LightForce Photon-Pressure Collision Avoidance: Updated Efficiency Analysis Utilizing a Highly Parallel Simulation Approach

    NASA Technical Reports Server (NTRS)

    Stupl, Jan; Faber, Nicolas; Foster, Cyrus; Yang, Fan Yang; Nelson, Bron; Aziz, Jonathan; Nuttall, Andrew; Henze, Chris; Levit, Creon

    2014-01-01

    This paper provides an updated efficiency analysis of the LightForce space debris collision avoidance scheme. LightForce aims to prevent collisions on warning by utilizing photon pressure from ground-based, commercial off-the-shelf lasers. Past research has shown that a few ground-based systems consisting of 10-kilowatt class lasers directed by 1.5-meter telescopes with adaptive optics could lower the expected number of collisions in Low Earth Orbit (LEO) by an order of magnitude. Our simulation approach utilizes the entire Two Line Element (TLE) catalogue in LEO for a given day as initial input. Least-squares fitting of a TLE time series is used for an improved orbit estimate. We then calculate the probability of collision for all LEO objects in the catalogue for each time step of the simulation. The conjunctions that exceed a threshold probability of collision are then engaged by a simulated network of laser ground stations. After those engagements, the perturbed orbits are used to re-assess the probability of collision and evaluate the efficiency of the system. This paper describes new simulations with three updated aspects: 1) by utilizing a highly parallel simulation approach employing hundreds of processors, we have extended our analysis to a much broader dataset, with the simulation time extended to one year; 2) we analyze not only the efficiency of LightForce on conjunctions that occur naturally, but also take into account conjunctions caused by orbit perturbations due to LightForce engagements; 3) we use a new simulation approach that regularly updates the LightForce engagement strategy, as it would be during actual operations. We present our simulation approach to parallelizing the efficiency analysis, its computational performance and the resulting expected efficiency of the LightForce collision avoidance system. Results indicate that, utilizing a network of four LightForce stations with 20-kilowatt lasers, 85% of all conjunctions with a probability of collision Pc > 10^-6 can be mitigated.

  14. Efficient conceptual design for LED-based pixel light vehicle headlamps

    NASA Astrophysics Data System (ADS)

    Held, Marcel Philipp; Lachmayer, Roland

    2017-12-01

    High-resolution vehicle headlamps represent a future-oriented technology that can be used to increase traffic safety and driving comfort. As a further development of the current Matrix Beam headlamps, LED-based pixel light systems enable ideal lighting functions (e.g. projection of navigation information onto the road) to be activated in any given driving scenario. Moreover, compared to other light-modulating elements such as DMDs and LCDs, instantaneous LED on-off toggling provides a decisive advantage in efficiency. To generate highly individualized light distributions for automotive applications, a number of approaches using an LED array may be pursued. One approach is to vary the LED density in the array so as to output the desired light distribution. Another notable approach makes use of an equidistant arrangement of the individual LEDs together with distortion optics to form the desired light distribution. The optical system adjusts the light distribution in a manner that improves resolution and increases the luminous intensity of the desired area. An efficient setup for pixel generation calls for one lens per LED. Taking into consideration the limited installation space of the system, this implies that the luminous flux, efficiency and resolution of the image are primarily controlled by the lens dimensions. In this paper a concept for an equidistant LED array arrangement utilizing distortion optics is presented. The paper is divided into two parts: the first part discusses the influence of lens geometry on the system efficiency, whereas the second part investigates the correlation between resolution and luminous flux based on the lens dimensions.

  15. In vivo gene correction with targeted sequence substitution through microhomology-mediated end joining.

    PubMed

    Shin, Jeong Hong; Jung, Soobin; Ramakrishna, Suresh; Kim, Hyongbum Henry; Lee, Junwon

    2018-07-07

    Genome editing technology using programmable nucleases has rapidly evolved in recent years. The primary mechanism to achieve precise integration of a transgene is mainly based on homology-directed repair (HDR). However, an HDR-based genome-editing approach is less efficient than non-homologous end-joining (NHEJ). Recently, a microhomology-mediated end-joining (MMEJ)-based transgene integration approach was developed, showing feasibility both in vitro and in vivo. We expanded this method to achieve targeted sequence substitution (TSS) of mutated sequences with normal sequences using double-guide RNAs (gRNAs), and a donor template flanking the microhomologies and target sequence of the gRNAs in vitro and in vivo. Our method could realize more efficient sequence substitution than the HDR-based method in vitro using a reporter cell line, and led to the survival of a hereditary tyrosinemia mouse model in vivo. The proposed MMEJ-based TSS approach could provide a novel therapeutic strategy, in addition to HDR, to achieve gene correction from a mutated sequence to a normal sequence. Copyright © 2018 Elsevier Inc. All rights reserved.

  16. First- and second-order sensitivity analysis of linear and nonlinear structures

    NASA Technical Reports Server (NTRS)

    Haftka, R. T.; Mroz, Z.

    1986-01-01

    This paper employs the principle of virtual work to derive sensitivity derivatives of structural response with respect to stiffness parameters using both direct and adjoint approaches. The computations required are based on additional load conditions characterized by imposed initial strains, body forces, or surface tractions. As such, they are equally applicable to numerical or analytical solution techniques. The relative efficiency of various approaches for calculating first and second derivatives is assessed. It is shown that for the evaluation of second derivatives the most efficient approach is one that makes use of both the first-order sensitivities and adjoint vectors. Two example problems are used for demonstrating the various approaches.

  17. Effect of inlet modelling on surface drainage in coupled urban flood simulation

    NASA Astrophysics Data System (ADS)

    Jang, Jiun-Huei; Chang, Tien-Hao; Chen, Wei-Bo

    2018-07-01

    For a highly developed urban area with complete drainage systems, flood simulation is necessary for describing the flow dynamics from rainfall to surface runoff to sewer flow. In this study, a coupled flood model based on diffusion wave equations was proposed to simulate one-dimensional sewer flow and two-dimensional overland flow simultaneously. The overland flow model provides details on the rainfall-runoff process to estimate the excess runoff that enters the sewer system through street inlets for sewer flow routing. Three types of inlet modelling are considered in this study: the manhole-based approach, which ignores the street inlets by draining surface water directly into manholes; the inlet-manhole approach, which drains surface water into manholes that are each connected to multiple inlets; and the inlet-node approach, which drains surface water into sewer nodes that are connected to individual inlets. The simulation results were compared with a high-intensity rainstorm event that occurred in 2015 in Taipei City. In the verification of the maximum flood extent, the two approaches that considered street inlets performed considerably better than the one without street inlets. In terms of temporal flood variation, using manholes as receivers leads to overall inefficient draining of the surface water, whether by the manhole-based approach or by the inlet-manhole approach. Using the inlet-node approach is more reasonable than using the inlet-manhole approach because the inlet-node approach greatly reduces the fluctuation of the sewer water level. The inlet-node approach is also more efficient in draining surface water, reducing flood volume by 13% compared with the inlet-manhole approach and by 41% compared with the manhole-based approach. The results show that inlet modelling has a strong influence on drainage efficiency in coupled flood simulation.

  18. [Ethics versus economics in public health? On the integration of economic rationality in a discourse of public health ethics].

    PubMed

    Rothgang, H; Staber, J

    2009-05-01

    In the course of establishing the discourse of public health ethics in Germany, we discuss whether economic efficiency should be part of public health ethics and, if so, how efficiency should be conceptualized. Based on welfare economics theory, we build a theoretical framework that demands the integration of economic rationality into public health ethics. Furthermore, we consider the possible implementation of welfare efficiency against the background of current practice in the economic evaluation of health care in Germany. The indifference of the welfare efficiency criterion with respect to distribution leads to the conclusion that efficiency must not be the only criterion of public health ethics. Therefore, a principle-based ethical approach should be chosen for public health ethics. Possible conflicts between the principles of such an approach are outlined.

  19. Fast Reduction Method in Dominance-Based Information Systems

    NASA Astrophysics Data System (ADS)

    Li, Yan; Zhou, Qinghua; Wen, Yongchuan

    2018-01-01

    In real-world applications, there are often data with continuous or preference-ordered values. Rough sets based on dominance relations can effectively deal with these kinds of data. Attribute reduction can be performed in the framework of the dominance-relation-based approach to better extract decision rules. However, the computational cost of the dominance classes greatly affects the efficiency of attribute reduction and rule extraction. This paper presents an efficient method of computing dominance classes and compares it with the traditional method as the numbers of attributes and samples increase. Experiments on UCI data sets show that the proposed algorithm clearly improves on the efficiency of the traditional method, especially for large-scale data.
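
    A dominance class of an object collects all objects whose attribute values are at least as good on every criterion. The Python sketch below computes these classes naively in O(n^2 m); it illustrates the quantity being accelerated, not the paper's fast algorithm, and the example table is hypothetical.

        import numpy as np

        def dominating_sets(data):
            """data[i, k] = value of object i on criterion k (larger = preferred).
            Returns D+[i] = set of objects that dominate object i (componentwise >=)."""
            n = len(data)
            return [set(np.flatnonzero((data >= data[i]).all(axis=1))) for i in range(n)]

        data = np.array([[3, 2], [1, 1], [3, 3]])   # hypothetical preference-ordered table
        for i, d in enumerate(dominating_sets(data)):
            print(f"objects dominating {i}: {sorted(d)}")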

  20. Value-based Proposition for a Dedicated Interventional Pulmonology Suite: an Adaptable Business Model.

    PubMed

    Desai, Neeraj R; French, Kim D; Diamond, Edward; Kovitz, Kevin L

    2018-05-31

    Value-based care is evolving with a focus on improving efficiency, reducing cost, and enhancing the patient experience. Interventional pulmonology has the opportunity to lead an effective value-based care model. This model is supported by the relatively low cost of pulmonary procedures and has the potential to improve efficiencies in thoracic care. We discuss key strategies to evaluate and improve efficiency in Interventional Pulmonology practice and describe our experience in developing an interventional pulmonology suite. Such a model can be adapted to other specialty areas and may encourage a more coordinated approach to specialty care. Copyright © 2018. Published by Elsevier Inc.

  1. Factorization-based texture segmentation

    DOE PAGES

    Yuan, Jiangye; Wang, Deliang; Cheriyadat, Anil M.

    2015-06-17

    This study introduces a factorization-based approach that efficiently segments textured images. We use local spectral histograms as features, and construct an M × N feature matrix using M-dimensional feature vectors in an N-pixel image. Based on the observation that each feature can be approximated by a linear combination of several representative features, we factor the feature matrix into two matrices: one consisting of the representative features and the other containing the weights of the representative features at each pixel used for the linear combination. The factorization method is based on singular value decomposition and nonnegative matrix factorization. The method uses local spectral histograms to discriminate region appearances in a computationally efficient way and at the same time accurately localizes region boundaries. Finally, experiments conducted on public segmentation data sets show the promise of this simple yet powerful approach.
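
    The essence of the method is to factor the M × N feature matrix Y into Z W, where Z holds a few representative features and W the per-pixel combination weights; each pixel's segment label is then the index of its largest weight. The sketch below, using scikit-learn's NMF on synthetic features, is a schematic reduction of that pipeline (the paper additionally uses SVD, e.g. to estimate the number of representatives); names and data are illustrative.

        import numpy as np
        from sklearn.decomposition import NMF

        # Synthetic stand-in for local spectral histograms: M-dim features at N pixels.
        rng = np.random.default_rng(0)
        M, N, n_regions = 12, 400, 3
        reps = rng.random((M, n_regions))                 # representative features
        labels_true = rng.integers(n_regions, size=N)
        Y = reps[:, labels_true] + 0.01 * rng.random((M, N))

        model = NMF(n_components=n_regions, init="nndsvd", max_iter=500)
        Z = model.fit_transform(Y)                        # M x r representative features
        W = model.components_                             # r x N combination weights
        labels = W.argmax(axis=0)                         # segment label per pixel
        print("recovered regions:", np.unique(labels).size)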

  2. Vector quantization for efficient coding of upper subbands

    NASA Technical Reports Server (NTRS)

    Zeng, W. J.; Huang, Y. F.

    1994-01-01

    This paper examines the application of vector quantization (VQ) to exploit both intra-band and inter-band redundancy in subband coding. The focus here is on the exploitation of inter-band dependency. It is shown that VQ is particularly suitable and effective for coding the upper subbands. Three subband decomposition-based VQ coding schemes are proposed here to exploit the inter-band dependency by making full use of the extra flexibility of the VQ approach over scalar quantization. A quadtree-based variable-rate VQ (VRVQ) scheme that takes full advantage of intra-band and inter-band redundancy is proposed first. Then, a more easily implementable alternative based on an efficient block-based edge-estimation technique is employed to overcome the implementational barriers of the first scheme. Finally, a predictive VQ scheme formulated in the context of finite-state VQ is proposed to further exploit the dependency among different subbands. A VRVQ scheme proposed elsewhere is extended to provide an efficient bit-allocation procedure. Simulation results show that these three hybrid techniques have advantages, in terms of peak signal-to-noise ratio (PSNR) and complexity, over other existing subband-VQ approaches.

  3. A hybrid computational approach for efficient Alzheimer's disease classification based on heterogeneous data.

    PubMed

    Ding, Xuemei; Bucholc, Magda; Wang, Haiying; Glass, David H; Wang, Hui; Clarke, Dave H; Bjourson, Anthony John; Dowey, Le Roy C; O'Kane, Maurice; Prasad, Girijesh; Maguire, Liam; Wong-Lin, KongFatt

    2018-06-27

    There is currently a lack of an efficient, objective and systemic approach towards the classification of Alzheimer's disease (AD), due to its complex etiology and pathogenesis. As AD is inherently dynamic, it is also not clear how the relationships among AD indicators vary over time. To address these issues, we propose a hybrid computational approach for AD classification and evaluate it on the heterogeneous longitudinal AIBL dataset. Specifically, using clinical dementia rating as an index of AD severity, the most important indicators (mini-mental state examination, logical memory recall, grey matter and cerebrospinal volumes from MRI and active voxels from PiB-PET brain scans, ApoE, and age) can be automatically identified from parallel data mining algorithms. In this work, Bayesian network modelling across different time points is used to identify and visualize time-varying relationships among the significant features, and importantly, in an efficient way using only coarse-grained data. Crucially, our approach suggests key data features and their appropriate combinations that are relevant for AD severity classification with high accuracy. Overall, our study provides insights into AD developments and demonstrates the potential of our approach in supporting efficient AD diagnosis.

  4. Sustainable Mobility | Transportation Research | NREL

    Science.gov Websites

    As part of its Sustainable Mobility Initiative, NREL approaches sustainable transportation as an intelligent system, addressing both safety and energy efficiency. The Transportation Sector Initiative and DOE's Transportation Energy Futures project identify emerging and disruptive…

  5. T-L Plane Abstraction-Based Energy-Efficient Real-Time Scheduling for Multi-Core Wireless Sensors.

    PubMed

    Kim, Youngmin; Lee, Ki-Seong; Pham, Ngoc-Son; Lee, Sun-Ro; Lee, Chan-Gun

    2016-07-08

    Energy efficiency is considered as a critical requirement for wireless sensor networks. As more wireless sensor nodes are equipped with multi-cores, there are emerging needs for energy-efficient real-time scheduling algorithms. The T-L plane-based scheme is known to be an optimal global scheduling technique for periodic real-time tasks on multi-cores. Unfortunately, there has been a scarcity of studies on extending T-L plane-based scheduling algorithms to exploit energy-saving techniques. In this paper, we propose a new T-L plane-based algorithm enabling energy-efficient real-time scheduling on multi-core sensor nodes with dynamic power management (DPM). Our approach addresses the overhead of processor mode transitions and reduces fragmentations of the idle time, which are inherent in T-L plane-based algorithms. Our experimental results show the effectiveness of the proposed algorithm compared to other energy-aware scheduling methods on T-L plane abstraction.

  6. Models of evaluating efficiency and risks on integration of cloud-based IT-services of the machine-building enterprise: a system approach

    NASA Astrophysics Data System (ADS)

    Razumnikov, S.; Kurmanbay, A.

    2016-04-01

    The present paper suggests a system approach to evaluating the effectiveness and risks resulting from the integration of cloud-based services in a machine-building enterprise. This approach makes it possible to assess the set of enterprise IT applications and choose the applications to be migrated to the cloud with regard to specific business requirements, the technological strategy and the willingness to take risks.

  7. Radiative Transfer Modeling and Retrievals for Advanced Hyperspectral Sensors

    NASA Technical Reports Server (NTRS)

    Liu, Xu; Zhou, Daniel K.; Larar, Allen M.; Smith, William L., Sr.; Mango, Stephen A.

    2009-01-01

    A novel radiative transfer model and a physical inversion algorithm based on principal component analysis will be presented. Instead of dealing with channel radiances, the new approach fits the principal component scores of these quantities. Compared to channel-based radiative transfer models, the new approach compresses the radiances into a much smaller dimension, making both the forward model and the inversion algorithm more efficient.
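
    The dimensionality step can be illustrated directly: project hyperspectral radiance spectra onto leading principal components and work with the scores. A minimal NumPy sketch with synthetic spectra (channel and component counts are illustrative, not the sensor's):

        import numpy as np

        rng = np.random.default_rng(1)
        n_spectra, n_channels, n_pc = 500, 2000, 20

        # Synthetic training radiances built from a few smooth underlying modes.
        basis = np.cumsum(rng.standard_normal((5, n_channels)), axis=1)
        radiances = rng.standard_normal((n_spectra, 5)) @ basis

        mean = radiances.mean(axis=0)
        _, _, Vt = np.linalg.svd(radiances - mean, full_matrices=False)
        pcs = Vt[:n_pc]                                   # leading eigen-spectra

        scores = (radiances - mean) @ pcs.T               # 2000 channels -> 20 scores
        reconstructed = scores @ pcs + mean
        print("relative reconstruction error:",
              np.linalg.norm(reconstructed - radiances) / np.linalg.norm(radiances))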

  8. National Infrastructure Protection Plan

    DTIC Science & Technology

    2006-01-01

    effective and efficient CI/KR protection; and • Provide a system for continuous measurement and improvement of CI/KR... information-based core processes, a top-down system-, network-, or function-based approach may be more appropriate. A bottom-up approach normally... e-commerce, e-mail, and R&D systems. • Control Systems: Cyber systems used within many infrastructures and industries to monitor and…

  9. Students' Perception of a Flipped Classroom Approach to Facilitating Online Project-Based Learning in Marketing Research Courses

    ERIC Educational Resources Information Center

    Shih, Wen-Ling; Tsai, Chun-Yen

    2017-01-01

    This study investigated students' perception of a flipped classroom approach to facilitating online project-based learning (FC-OPBL) in a marketing research course at a technical university. This combined strategy was aimed at improving teaching quality and learning efficiency. Sixty-seven students taking a marketing research course were surveyed.…

  10. A Novel Energy-Efficient Approach for Human Activity Recognition

    PubMed Central

    Zheng, Lingxiang; Wu, Dihong; Ruan, Xiaoyang; Weng, Shaolin; Tang, Biyu; Lu, Hai; Shi, Haibin

    2017-01-01

    In this paper, we propose a novel energy-efficient approach for a mobile activity recognition system (ARS) to detect human activities. The proposed energy-efficient ARS, using low sampling rates, can achieve high recognition accuracy and low energy consumption. A novel classifier that integrates a hierarchical support vector machine and context-based classification (HSVMCC) is presented to achieve high activity-recognition accuracy when the sampling rate is lower than the activity frequency, i.e., the Nyquist sampling theorem is not satisfied. We tested the proposed energy-efficient approach with data collected from 20 volunteers (14 males and 6 females), and an average recognition accuracy of around 96.0% was achieved. Results show that using a low sampling rate of 1 Hz can save 17.3% and 59.6% of energy compared with sampling rates of 5 Hz and 50 Hz, respectively. The proposed low sampling rate approach can greatly reduce power consumption while maintaining high activity-recognition accuracy. The composition of power consumption in an online ARS is also investigated in this paper. PMID:28885560

  11. A similarity score-based two-phase heuristic approach to solve the dynamic cellular facility layout for manufacturing systems

    NASA Astrophysics Data System (ADS)

    Kumar, Ravi; Singh, Surya Prakash

    2017-11-01

    The dynamic cellular facility layout problem (DCFLP) is a well-known NP-hard problem. It has been estimated that an efficient design of the DCFLP reduces the manufacturing cost of products by maintaining minimum material flow among all machines in all cells, as material flow contributes around 10-30% of the total product cost. However, being NP-hard, the DCFLP is very difficult to solve optimally in reasonable time. Therefore, this article proposes a novel similarity-score-based two-phase heuristic approach to solve the DCFLP optimally, considering multiple products manufactured over multiple time periods in the layout. In the first phase of the proposed heuristic, machine-cell clusters are created based on similarity scores between machines. These are provided as input to the second phase, which minimizes inter/intracell material handling costs and rearrangement costs over the entire planning period. The solution methodology of the proposed approach is demonstrated. To show the efficiency of the two-phase heuristic approach, 21 instances are generated and solved using the optimization software package LINGO. The results show that the proposed approach can optimally solve the DCFLP in reasonable time.
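
    Phase one hinges on a machine-machine similarity score; a common choice in cell formation is the Jaccard-type coefficient s_ij = a / (a + b + c), where a counts parts visiting both machines and b, c parts visiting only one. The sketch below computes such a matrix and a simple threshold clustering; it is a generic illustration, as the paper's exact score definition is not reproduced here, and the machine-part matrix is hypothetical.

        import numpy as np

        def similarity_matrix(incidence):
            """incidence[i, p] = 1 if part p visits machine i (0/1 matrix).
            Returns the Jaccard-type similarity between every machine pair."""
            a = incidence @ incidence.T                      # parts shared by i and j
            visits = incidence.sum(axis=1)
            union = visits[:, None] + visits[None, :] - a    # a + b + c
            return np.where(union > 0, a / np.maximum(union, 1), 0.0)

        def threshold_cells(sim, threshold=0.5):
            """Group machines whose similarity exceeds threshold (greedy clustering)."""
            n, cells, assigned = len(sim), [], set()
            for i in range(n):
                if i in assigned:
                    continue
                cell = {i} | {j for j in range(n)
                              if j not in assigned and sim[i, j] >= threshold}
                assigned |= cell
                cells.append(sorted(cell))
            return cells

        inc = np.array([[1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 1, 1]])  # machine-part matrix
        print(threshold_cells(similarity_matrix(inc)))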

  12. Dynamic programming-based hot spot identification approach for pedestrian crashes.

    PubMed

    Medury, Aditya; Grembek, Offer

    2016-08-01

    Network screening techniques are widely used by state agencies to identify locations with high collision concentration, also referred to as hot spots. However, most of the research in this regard has focused on identifying highway segments that are of concern for automobile collisions. In comparison, pedestrian hot spot detection has typically focused on analyzing pedestrian crashes at specific locations, such as at/near intersections, mid-blocks, and/or other crossings, as opposed to long stretches of roadway. In this context, the efficiency of some of the widely used network screening methods has not been tested. Hence, to address this issue, a dynamic programming-based hot spot identification approach is proposed that provides efficient hot spot definitions for pedestrian crashes. The proposed approach is compared with the sliding window method and an intersection buffer-based approach. The results reveal that the dynamic programming method generates more hot spots with a higher number of crashes, while providing small hot spot segment lengths. In comparison, the sliding window method is shown to suffer from shortcomings due to its first-come-first-served approach to hot spot identification and its fixed hot spot window length assumption. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. A one-model approach based on relaxed combinations of inputs for evaluating input congestion in DEA

    NASA Astrophysics Data System (ADS)

    Khodabakhshi, Mohammad

    2009-08-01

    This paper provides a one-model approach to input congestion based on the input relaxation model developed in data envelopment analysis (e.g. [G.R. Jahanshahloo, M. Khodabakhshi, Suitable combination of inputs for improving outputs in DEA with determining input congestion -- Considering textile industry of China, Applied Mathematics and Computation (1) (2004) 263-273; G.R. Jahanshahloo, M. Khodabakhshi, Determining assurance interval for non-Archimedean element in the improving outputs model in DEA, Applied Mathematics and Computation 151 (2) (2004) 501-506; M. Khodabakhshi, A super-efficiency model based on improved outputs in data envelopment analysis, Applied Mathematics and Computation 184 (2) (2007) 695-703; M. Khodabakhshi, M. Asgharian, An input relaxation measure of efficiency in stochastic data envelopment analysis, Applied Mathematical Modelling 33 (2009) 2010-2023]). This approach reduces the three problems that must be solved in the two-model approach introduced in the first of the above-mentioned references to two, which is certainly important from a computational point of view. The model is applied to a set of data extracted from the ISI database to estimate the input congestion of 12 Canadian business schools.

  14. Highly parallel single-molecule amplification approach based on agarose droplet polymerase chain reaction for efficient and cost-effective aptamer selection.

    PubMed

    Zhang, Wei Yun; Zhang, Wenhua; Liu, Zhiyuan; Li, Cong; Zhu, Zhi; Yang, Chaoyong James

    2012-01-03

    We have developed a novel method for efficiently screening affinity ligands (aptamers) from a complex single-stranded DNA (ssDNA) library by employing single-molecule emulsion polymerase chain reaction (PCR) based on agarose droplet microfluidic technology. In a typical systematic evolution of ligands by exponential enrichment (SELEX) process, the enriched library is sequenced first, and tens to hundreds of aptamer candidates are analyzed via a bioinformatic approach. Possible candidates are then chemically synthesized, and their binding affinities are measured individually. Such a process is time-consuming, labor-intensive, inefficient, and expensive. To address these problems, we have developed a highly efficient single-molecule approach for aptamer screening using our agarose droplet microfluidic technology. Statistically diluted ssDNA of the pre-enriched library, evolved through conventional SELEX against the cancer biomarker Shp2 protein, was encapsulated into individual uniform agarose droplets for droplet PCR to generate clonal agarose beads. The binding capacity of amplified ssDNA from each clonal bead was then screened via high-throughput fluorescence cytometry. DNA clones with high binding capacity and low Kd were chosen as aptamers and can be directly used for downstream biomedical applications. We have identified an ssDNA aptamer that selectively recognizes Shp2 with a Kd of 24.9 nM. Compared to the conventional sequencing-chemical synthesis-screening workflow, our approach avoids large-scale DNA sequencing and the expensive, time-consuming DNA synthesis of large populations of DNA candidates. The agarose droplet microfluidic approach is thus highly efficient and cost-effective for molecular evolution approaches and will find wide application in molecular evolution technologies, including mRNA display, phage display, and so on. © 2011 American Chemical Society

  15. An efficient approach for surveillance of childhood diabetes by type derived from electronic health record data: the SEARCH for Diabetes in Youth Study

    PubMed Central

    Zhong, Victor W; Obeid, Jihad S; Craig, Jean B; Pfaff, Emily R; Thomas, Joan; Jaacks, Lindsay M; Beavers, Daniel P; Carey, Timothy S; Lawrence, Jean M; Dabelea, Dana; Hamman, Richard F; Bowlby, Deborah A; Pihoker, Catherine; Saydah, Sharon H

    2016-01-01

    Objective: To develop an efficient surveillance approach for childhood diabetes by type across 2 large US health care systems, using phenotyping algorithms derived from electronic health record (EHR) data. Materials and Methods: Presumptive diabetes cases <20 years of age from 2 large independent health care systems were identified as those having ≥1 of the 5 indicators in the past 3.5 years, including elevated HbA1c, elevated blood glucose, diabetes-related billing codes, patient problem list, and outpatient anti-diabetic medications. EHRs of all the presumptive cases were manually reviewed, and true diabetes status and diabetes type were determined. Algorithms for identifying diabetes cases overall and classifying diabetes type were either prespecified or derived from classification and regression tree analysis. The surveillance approach was developed based on the best algorithms identified. Results: We developed a stepwise surveillance approach using billing code-based prespecified algorithms and targeted manual EHR review, which efficiently and accurately ascertained and classified diabetes cases by type in both health care systems. The sensitivity and positive predictive values in both systems were approximately ≥90% for ascertaining diabetes cases overall and classifying cases with type 1 or type 2 diabetes. About 80% of the cases with “other” type were also correctly classified. This stepwise surveillance approach resulted in a >70% reduction in the number of cases requiring manual validation compared to traditional surveillance methods. Conclusion: EHR data may be used to establish an efficient approach for large-scale surveillance for childhood diabetes by type, although some manual effort is still needed. PMID:27107449

  16. An improved input shaping design for an efficient sway control of a nonlinear 3D overhead crane with friction

    NASA Astrophysics Data System (ADS)

    Maghsoudi, Mohammad Javad; Mohamed, Z.; Sudin, S.; Buyamin, S.; Jaafar, H. I.; Ahmad, S. M.

    2017-08-01

    This paper proposes an improved input shaping scheme for an efficient sway control of a nonlinear three-dimensional (3D) overhead crane with friction using the particle swarm optimization (PSO) algorithm. Using this approach, a higher payload sway reduction is obtained, as the input shaper is designed based on a complete nonlinear model, in contrast to the analytical input shaping scheme derived from a linear second-order model. Zero Vibration (ZV) and Distributed Zero Vibration (DZV) shapers are designed using both analytical and PSO approaches for sway control of rail and trolley movements. To test the effectiveness of the proposed approach, MATLAB simulations and experiments on a laboratory 3D overhead crane are performed under various conditions involving different cable lengths and sway frequencies. Their performances are studied based on the maximum residual payload sway and Integrated Absolute Error (IAE) values, which indicate the total payload sway of the crane. In experiments, the superiority of the proposed approach over the analytical approach is demonstrated by 30-50% reductions of the IAE values for rail and trolley movements, for both ZV and DZV shapers. In addition, simulation results show higher sway reductions with the proposed approach. It is revealed that the proposed PSO-based input shaping design provides higher payload sway reductions of a 3D overhead crane with friction as compared to the commonly designed input shapers.
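
    For reference, the analytical ZV shaper that the PSO-tuned design is benchmarked against follows directly from a linear second-order sway model. A minimal sketch using the standard textbook formulas; the pendulum length and damping value are illustrative:

      import numpy as np

      def zv_shaper(omega_n, zeta):
          """Textbook Zero Vibration (ZV) shaper: two impulses derived from a
          linear second-order sway model (natural frequency omega_n, damping zeta)."""
          omega_d = omega_n * np.sqrt(1.0 - zeta**2)     # damped sway frequency
          K = np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta**2))
          amplitudes = np.array([1.0, K]) / (1.0 + K)    # amplitudes sum to 1
          times = np.array([0.0, np.pi / omega_d])       # half a damped period apart
          return times, amplitudes

      # Example: 1 m cable pendulum with light damping
      g, L = 9.81, 1.0
      t, A = zv_shaper(np.sqrt(g / L), 0.01)
      print(t, A)   # convolve these impulses with the motion command to shape it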

  17. A Swarm Optimization approach for clinical knowledge mining.

    PubMed

    Christopher, J Jabez; Nehemiah, H Khanna; Kannan, A

    2015-10-01

    Rule-based classification is a typical data mining task used in several medical diagnosis and decision support systems. The rules stored in the rule base have an impact on classification efficiency. Rule sets that are extracted with data mining tools and techniques are optimized using heuristic or meta-heuristic approaches in order to improve the quality of the rule base. In this work, a meta-heuristic approach called Wind-driven Swarm Optimization (WSO) is used. The uniqueness of this work lies in the biological inspiration that underlies the algorithm. WSO uses Jval, a new metric, to evaluate the efficiency of a rule-based classifier. Rules are extracted from decision trees. WSO is used to obtain different permutations and combinations of rules, whereby the optimal ruleset that satisfies the requirements of the developer is used for predicting the test data. The performance of various extensions of decision trees, namely RIPPER, PART, FURIA and Decision Tables, is analyzed. The efficiency of WSO is also compared with traditional Particle Swarm Optimization (PSO). Experiments were carried out with six benchmark medical datasets. The traditional C4.5 algorithm yields 62.89% accuracy with 43 rules for the liver disorders dataset, whereas WSO yields 64.60% with 19 rules. For the heart disease dataset, C4.5 is 68.64% accurate with 98 rules, whereas WSO is 77.8% accurate with 34 rules. The normalized standard deviations of accuracy for PSO and WSO are 0.5921 and 0.5846, respectively. WSO provides accurate and concise rulesets. PSO yields results similar to those of WSO, but the novelty of WSO lies in its biological motivation and its customization for rule-base optimization. The trade-off between prediction accuracy and the size of the rule base is optimized during the design and development of a rule-based clinical decision support system. The efficiency of a decision support system relies on the content of the rule base and classification accuracy. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  18. Technical and scale efficiency in public and private Irish nursing homes - a bootstrap DEA approach.

    PubMed

    Ni Luasa, Shiovan; Dineen, Declan; Zieba, Marta

    2016-10-27

    This article provides methodological and empirical insights into the estimation of technical efficiency in the nursing home sector. Focusing on long-stay care and using primary data, we examine technical and scale efficiency in 39 public and 73 private Irish nursing homes by applying an input-oriented data envelopment analysis (DEA). We employ robust bootstrap methods to validate our nonparametric DEA scores and to integrate the effects of potential determinants in estimating the efficiencies. Both the homogeneous and two-stage double bootstrap procedures are used to obtain confidence intervals for the bias-corrected DEA scores. Importantly, the application of the double bootstrap approach affords true DEA technical efficiency scores after adjusting for the effects of ownership, size, case-mix, and other determinants such as location and quality. Based on our DEA results for variable returns to scale technology, the average technical efficiency score is 62%, and the mean scale efficiency is 88%, with nearly all units operating on the increasing returns to scale part of the production frontier. Moreover, based on the double bootstrap results, Irish nursing homes are less technically efficient and more scale efficient than the conventional DEA estimates suggest. Regarding the efficiency determinants, we find that private facilities are less efficient than public units. Furthermore, the size of the nursing home has a positive effect, which reinforces our finding that Irish homes produce at increasing returns to scale. Notably, we also find that a tendency towards quality improvements can lead to poorer technical efficiency performance.
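
    The envelopment program underlying such scores is a small linear program per nursing home. Below is a minimal sketch of the input-oriented, variable-returns-to-scale DEA model using SciPy; the data are toy values, and the paper's bootstrap layer, which resamples these scores to correct bias, is omitted.

      import numpy as np
      from scipy.optimize import linprog

      def dea_input_vrs(X, Y, o):
          """Input-oriented, variable-returns-to-scale DEA score of unit o.
          X: (n_units, n_inputs), Y: (n_units, n_outputs). Returns theta <= 1."""
          n = X.shape[0]
          c = np.r_[1.0, np.zeros(n)]                   # minimise theta
          # inputs:  sum_j lam_j * x_ij - theta * x_io <= 0
          A_in = np.c_[-X[o], X.T]
          # outputs: -sum_j lam_j * y_rj <= -y_ro
          A_out = np.c_[np.zeros(Y.shape[1]), -Y.T]
          A_ub = np.vstack([A_in, A_out])
          b_ub = np.r_[np.zeros(X.shape[1]), -Y[o]]
          A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1)  # VRS: sum of lambdas = 1
          res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                        bounds=[(None, None)] + [(0, None)] * n, method="highs")
          return res.x[0]

      # Toy data: 5 homes, 2 inputs (staff, beds), 1 output (resident-days)
      X = np.array([[5, 20], [8, 30], [6, 25], [9, 40], [4, 18]], float)
      Y = np.array([[100], [150], [110], [160], [95]], float)
      print([round(dea_input_vrs(X, Y, o), 3) for o in range(len(X))])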

  19. A Novel Particle Swarm Optimization Approach for Grid Job Scheduling

    NASA Astrophysics Data System (ADS)

    Izakian, Hesam; Tork Ladani, Behrouz; Zamanifar, Kamran; Abraham, Ajith

    This paper presents a Particle Swarm Optimization (PSO) algorithm for grid job scheduling. PSO is a population-based search algorithm based on the simulation of the social behavior of bird flocking and fish schooling. Particles fly through the problem search space to find optimal or near-optimal solutions. In this paper we use a PSO approach for grid job scheduling. The scheduler aims at minimizing makespan and flowtime simultaneously. Experimental studies show that the proposed novel approach is more efficient than the PSO approach reported in the literature.
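
    A minimal sketch of the underlying idea, continuous PSO positions decoded into job-to-machine assignments, is given below. It minimises makespan only and uses generic PSO constants; the paper's scheduler additionally weighs flowtime.

      import numpy as np
      rng = np.random.default_rng(0)

      def makespan(assign, times):
          """times[j, m]: runtime of job j on machine m; assign[j]: chosen machine."""
          loads = np.zeros(times.shape[1])
          for j, m in enumerate(assign):
              loads[m] += times[j, m]
          return loads.max()

      def pso_schedule(times, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
          n_jobs, n_mach = times.shape
          x = rng.uniform(0, n_mach, (n_particles, n_jobs))   # continuous positions
          v = np.zeros_like(x)
          decode = lambda p: np.clip(p.astype(int), 0, n_mach - 1)
          pbest = x.copy()
          pcost = np.array([makespan(decode(p), times) for p in x])
          g = pbest[pcost.argmin()].copy()
          for _ in range(iters):
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = np.clip(x + v, 0, n_mach - 1e-9)
              cost = np.array([makespan(decode(p), times) for p in x])
              better = cost < pcost
              pbest[better], pcost[better] = x[better], cost[better]
              g = pbest[pcost.argmin()].copy()
          return decode(g), pcost.min()

      times = rng.uniform(1, 10, (12, 4))   # 12 jobs, 4 machines (toy data)
      print(pso_schedule(times))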

  20. Self-optimisation and model-based design of experiments for developing a C-H activation flow process.

    PubMed

    Echtermeyer, Alexander; Amar, Yehia; Zakrzewski, Jacek; Lapkin, Alexei

    2017-01-01

    A recently described C(sp3)-H activation reaction to synthesise aziridines was used as a model reaction to demonstrate the methodology of developing a process model using model-based design of experiments (MBDoE) and self-optimisation approaches in flow. The two approaches are compared in terms of experimental efficiency. The self-optimisation approach required the fewest experiments to reach the specified objectives of cost and product yield, whereas the MBDoE approach enabled rapid generation of a process model.

  1. Improving real-time efficiency of case-based reasoning for medical diagnosis.

    PubMed

    Park, Yoon-Joo

    2014-01-01

    Conventional case-based reasoning (CBR) does not perform efficiently on high-volume datasets because of case-retrieval time. Some previous studies overcame this problem by clustering a case base into several small groups and retrieving neighbors within the group corresponding to a target case. However, this approach generally produces less accurate predictive performance than conventional CBR. This paper suggests a new case-based reasoning method called Clustering-Merging CBR (CM-CBR), which produces a level of predictive performance similar to conventional CBR at significantly less computational cost.
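
    The cluster-then-retrieve baseline that CM-CBR refines can be sketched as follows. The merging step that restores accuracy is specific to the paper and not reproduced here; the data and cluster counts are illustrative.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.neighbors import NearestNeighbors

      rng = np.random.default_rng(1)
      cases = rng.normal(size=(10000, 8))          # case-base features (toy)
      outcomes = rng.integers(0, 2, size=10000)    # stored solutions/labels

      km = KMeans(n_clusters=20, n_init=10, random_state=0).fit(cases)
      # One kNN index per cluster: retrieval then searches a small group only
      indexes = {c: NearestNeighbors(n_neighbors=5).fit(cases[km.labels_ == c])
                 for c in range(20)}
      members = {c: np.flatnonzero(km.labels_ == c) for c in range(20)}

      def retrieve(target, k=5):
          """Retrieve the k nearest cases from the cluster the target falls into."""
          c = int(km.predict(target.reshape(1, -1))[0])
          _, idx = indexes[c].kneighbors(target.reshape(1, -1), n_neighbors=k)
          return members[c][idx[0]]                # indices into the full case base

      neighbors = retrieve(rng.normal(size=8))
      print(outcomes[neighbors])                   # a majority vote would predict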

  2. A new framework for comprehensive, robust, and efficient global sensitivity analysis: 1. Theory

    NASA Astrophysics Data System (ADS)

    Razavi, Saman; Gupta, Hoshin V.

    2016-01-01

    Computer simulation models are continually growing in complexity with increasingly more factors to be identified. Sensitivity Analysis (SA) provides an essential means for understanding the role and importance of these factors in producing model responses. However, conventional approaches to SA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we present a new and general sensitivity analysis framework (called VARS), based on an analogy to "variogram analysis," that provides an intuitive and comprehensive characterization of sensitivity across the full spectrum of scales in the factor space. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices can be computed as by-products of the VARS framework. Synthetic functions that resemble actual model response surfaces are used to illustrate the concepts, and show VARS to be as much as two orders of magnitude more computationally efficient than the state-of-the-art Sobol approach. In a companion paper, we propose a practical implementation strategy, and demonstrate the effectiveness, efficiency, and reliability (robustness) of the VARS framework on real-data case studies.
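
    The central object is the directional variogram of the model response, gamma(h) = 0.5 E[(y(x + h e_i) - y(x))^2]. A minimal sketch on a toy two-factor function follows; the full VARS framework integrates such curves across scales, and the wrap-around used here to stay in the unit cube is a simplification.

      import numpy as np

      def model(x):                     # toy response surface with 2 factors
          return np.sin(3 * x[..., 0]) + 0.1 * x[..., 1] ** 2

      def directional_variogram(f, dim, h_values, n_base=2000, n_dims=2, seed=0):
          """gamma(h) = 0.5 * E[(f(x + h e_dim) - f(x))^2], x uniform on [0,1]^d.
          Large gamma at small h flags a factor driving fine-scale sensitivity."""
          rng = np.random.default_rng(seed)
          x = rng.random((n_base, n_dims))
          out = []
          for h in h_values:
              xh = x.copy()
              xh[:, dim] = (xh[:, dim] + h) % 1.0   # wrap to stay in the unit cube
              out.append(0.5 * np.mean((f(xh) - f(x)) ** 2))
          return np.array(out)

      h = np.linspace(0.01, 0.5, 10)
      print(directional_variogram(model, 0, h))  # factor 1: high-frequency effect
      print(directional_variogram(model, 1, h))  # factor 2: smooth, weak effect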

  3. Algebraic model checking for Boolean gene regulatory networks.

    PubMed

    Tran, Quoc-Nam

    2011-01-01

    We present a computational method in which modular and Groebner bases (GB) computations in Boolean rings are used for solving problems in Boolean gene regulatory networks (BN). In contrast to other known algebraic approaches, the degree of intermediate polynomials during the calculation of Groebner bases with our method never grows, resulting in a significant improvement in running time and memory consumption. We also show how calculation in temporal logic for model checking can be done by means of our direct and efficient Groebner basis computation in Boolean rings. We present our experimental results in finding attractors and control strategies of Boolean networks to illustrate our theoretical arguments. The results are promising. Our algebraic approach is more efficient than the state-of-the-art model checker NuSMV on BNs. More importantly, our approach finds all solutions for the BN problems.

  4. Algorithm for evaluating the effectiveness of a high-rise development project based on current yield

    NASA Astrophysics Data System (ADS)

    Soboleva, Elena

    2018-03-01

    The article addresses the operational evaluation of development project efficiency in high-rise construction under current economic conditions in Russia. The author considers the following issues: problems of implementing development projects, the influence of the quality of operational evaluation of high-rise construction projects on overall efficiency, assessment of the influence of a project's external environment on the effectiveness of project activities under crisis conditions, and the quality of project management. The article proposes an algorithm and a methodological approach to quality management of developer project efficiency based on operational evaluation of current yield. The methodology for calculating the current efficiency of a development project in high-rise construction has been updated.

  5. An efficient linear-scaling CCSD(T) method based on local natural orbitals.

    PubMed

    Rolik, Zoltán; Szegedy, Lóránt; Ladjánszki, István; Ladóczki, Bence; Kállay, Mihály

    2013-09-07

    An improved version of our general-order local coupled-cluster (CC) approach [Z. Rolik and M. Kállay, J. Chem. Phys. 135, 104111 (2011)] and its efficient implementation at the CC singles and doubles with perturbative triples [CCSD(T)] level is presented. The method combines the cluster-in-molecule approach of Li and co-workers [J. Chem. Phys. 131, 114109 (2009)] with frozen natural orbital (NO) techniques. To break down the unfavorable fifth-power scaling of our original approach, a two-level domain construction algorithm has been developed. First, an extended domain of localized molecular orbitals (LMOs) is assembled based on the spatial distance of the orbitals. The necessary integrals are evaluated and transformed in these domains invoking the density fitting approximation. In the second step, for each occupied LMO of the extended domain a local subspace of occupied and virtual orbitals is constructed including approximate second-order Møller-Plesset NOs. The CC equations are solved and the perturbative corrections are calculated in the local subspace for each occupied LMO using a highly efficient CCSD(T) code, which was optimized for the typical sizes of the local subspaces. The total correlation energy is evaluated as the sum of the individual contributions. The computation time of our approach scales linearly with the system size, while its memory and disk space requirements are independent thereof. Test calculations demonstrate that currently our method is one of the most efficient local CCSD(T) approaches and can be routinely applied to molecules of up to 100 atoms with reasonable basis sets.

  6. An Efficient Model-Based Image Understanding Method for an Autonomous Vehicle.

    DTIC Science & Technology

    1997-09-01

    The problem discussed in this dissertation is the development of an efficient method for visual navigation of autonomous vehicles. The approach is to... autonomous vehicles. Thus the new method is implemented as a component of the image-understanding system in the autonomous mobile robot Yamabico-11 at...

  7. Genome wide association analyses based on a multiple trait approach for modeling feed efficiency

    USDA-ARS?s Scientific Manuscript database

    Genome wide association (GWA) of feed efficiency (FE) could help target important genomic regions influencing FE. Data provided by an international dairy FE research consortium consisted of phenotypic records on dry matter intakes (DMI), milk energy (MILKE), and metabolic body weight (MBW) on 6,937 ...

  8. g-force induced giant efficiency of nanoparticles internalization into living cells

    PubMed Central

    Ocampo, Sandra M.; Rodriguez, Vanessa; de la Cueva, Leonor; Salas, Gorka; Carrascosa, Jose. L.; Josefa Rodríguez, María; García-Romero, Noemí; Luis, Jose; Cuñado, F.; Camarero, Julio; Miranda, Rodolfo; Belda-Iniesta, Cristobal; Ayuso-Sacido, Angel

    2015-01-01

    Nanotechnology plays an increasingly important role in the biomedical arena. Labelling cells with iron oxide nanoparticles (IONPs) is one of the most promising approaches for a fast and reliable evaluation of grafted cells in both preclinical studies and clinical trials. Current procedures to label living cells with IONPs are based on direct incubation or on physical approaches using magnetic or electrical fields, which always display very low cellular uptake efficiencies. Here we show that centrifugation-mediated internalization (CMI) promotes a high uptake of IONPs in glioblastoma tumour cells in just a few minutes, via a clathrin-independent endocytosis pathway. CMI results in controllable cellular uptake efficiencies at least three orders of magnitude larger than current procedures. Similar trends are found in human mesenchymal stem cells, thereby demonstrating the general feasibility of the methodology, which is easily transferable to any laboratory and holds great potential for the development of improved biomedical applications. PMID:26477718

  9. Resource recovery from residual household waste: An application of exergy flow analysis and exergetic life cycle assessment.

    PubMed

    Laner, David; Rechberger, Helmut; De Soete, Wouter; De Meester, Steven; Astrup, Thomas F

    2015-12-01

    Exergy is based on the Second Law of thermodynamics; it expresses physical and chemical potential and provides a unified measure for resource accounting. In this study, exergy analysis was applied to four residual household waste management scenarios with focus on the achieved resource recovery efficiencies. The calculated exergy efficiencies were used to compare the scenarios and to evaluate the applicability of exergy-based measures for expressing resource quality and for optimizing resource recovery. Exergy efficiencies were determined based on two approaches: (i) exergy flow analysis of the waste treatment system under investigation and (ii) exergetic life cycle assessment (LCA) using the Cumulative Exergy Extraction from the Natural Environment (CEENE) as a method for resource accounting. Scenario efficiencies of around 17-27% were found based on the exergy flow analysis (higher efficiencies were associated with high levels of material recycling), while the scenario efficiencies based on the exergetic LCA lay in a narrow range around 14%. Metal recovery was beneficial in both types of analyses, but had more influence on the overall efficiency in the exergetic LCA approach, as avoided burdens associated with primary metal production were much more important than the exergy content of the recovered metals. On the other hand, plastic recovery was highly beneficial in the exergy flow analysis, but rather insignificant in exergetic LCA. The two approaches thereby offered different quantitative results as well as different conclusions regarding material recovery. With respect to resource quality, the main challenge for the exergy flow analysis is the use of exergy content and exergy losses as a proxy for resource quality and resource losses, as exergy content is not per se correlated with the functionality of a material. In addition, the definition of appropriate waste system boundaries is critical for the exergy efficiencies derived from the flow analysis, as it is constrained by the limited information available about the composition of flows in the system as well as about secondary production processes and their interaction with primary or traditional production chains. In the exergetic LCA, resource quality could be reflected by the savings achieved by product substitution, and the consideration of the waste's upstream burden allowed for an evaluation of the waste's resource potential. For a comprehensive assessment of resource efficiency in waste LCA, the sensitivity of accounting for product substitution should be carefully analyzed, and cumulative exergy consumption measures should be complemented by other impact categories. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. AN EFFICIENT HIGHER-ORDER FAST MULTIPOLE BOUNDARY ELEMENT SOLUTION FOR POISSON-BOLTZMANN BASED MOLECULAR ELECTROSTATICS

    PubMed Central

    Bajaj, Chandrajit; Chen, Shun-Chuan; Rand, Alexander

    2011-01-01

    In order to compute the polarization energy of biomolecules, we describe a boundary element approach to solving the linearized Poisson-Boltzmann equation. Our approach combines several important features, including the derivative boundary formulation of the problem and a smooth approximation of the molecular surface based on the algebraic spline molecular surface. State-of-the-art software for numerical linear algebra and the kernel-independent fast multipole method is used for both simplicity and efficiency of our implementation. We perform a variety of computational experiments, testing our method on a number of actual proteins involved in molecular docking and demonstrating the effectiveness of our solver for computing molecular polarization energy. PMID:21660123

  11. Efficiency of Pm-147 direct charge radioisotope battery.

    PubMed

    Kavetskiy, A; Yakubova, G; Yousaf, S M; Bower, K; Robertson, J D; Garnov, A

    2011-05-01

    A theoretical analysis is presented here of the efficiency of direct charge radioisotope batteries based on the efficiency of the radioactive source, the system geometry, electrostatic repulsion of beta particles from the collector, the secondary electron emission, and backscattered beta particles from the collector. Efficiency of various design batteries using Pm-147 sources was experimentally measured and found to be in good agreement with calculations. The present approach can be used for predicting the efficiency for different designs of direct charge radioisotope batteries. Copyright © 2011 Elsevier Ltd. All rights reserved.

  12. Algorithmic design of a noise-resistant and efficient closed-loop deep brain stimulation system: A computational approach

    PubMed Central

    Karamintziou, Sofia D.; Custódio, Ana Luísa; Piallat, Brigitte; Polosan, Mircea; Chabardès, Stéphan; Stathis, Pantelis G.; Tagaris, George A.; Sakas, Damianos E.; Polychronaki, Georgia E.; Tsirogiannis, George L.; David, Olivier; Nikita, Konstantina S.

    2017-01-01

    Advances in the field of closed-loop neuromodulation call for analysis and modeling approaches capable of confronting challenges related to the complex neuronal response to stimulation and the presence of strong internal and measurement noise in neural recordings. Here we elaborate on the algorithmic aspects of a noise-resistant closed-loop subthalamic nucleus deep brain stimulation system for advanced Parkinson’s disease and treatment-refractory obsessive-compulsive disorder, ensuring remarkable performance in terms of both efficiency and selectivity of stimulation, as well as in terms of computational speed. First, we propose an efficient method drawn from dynamical systems theory, for the reliable assessment of significant nonlinear coupling between beta and high-frequency subthalamic neuronal activity, as a biomarker for feedback control. Further, we present a model-based strategy through which optimal parameters of stimulation for minimum energy desynchronizing control of neuronal activity are being identified. The strategy integrates stochastic modeling and derivative-free optimization of neural dynamics based on quadratic modeling. On the basis of numerical simulations, we demonstrate the potential of the presented modeling approach to identify, at a relatively low computational cost, stimulation settings potentially associated with a significantly higher degree of efficiency and selectivity compared with stimulation settings determined post-operatively. Our data reinforce the hypothesis that model-based control strategies are crucial for the design of novel stimulation protocols at the backstage of clinical applications. PMID:28222198

  13. Extended Importance Sampling for Reliability Analysis under Evidence Theory

    NASA Astrophysics Data System (ADS)

    Yuan, X. K.; Chen, B.; Zhang, B. Q.

    2018-05-01

    In early engineering practice, the lack of data and information makes uncertainty difficult to deal with. However, evidence theory has been proposed to handle uncertainty with limited information as an alternative to traditional probability theory. In this contribution, a simulation-based approach, called 'extended importance sampling', is proposed based on evidence theory to handle problems with epistemic uncertainty. The proposed approach stems from traditional importance sampling for reliability analysis under probability theory, and is extended to handle problems with epistemic uncertainty. It first introduces a nominal instrumental probability density function (PDF) for every epistemic uncertainty variable, so that an 'equivalent' reliability problem under probability theory is obtained. Samples of these variables are then generated by importance sampling. Based on these samples, the plausibility and belief (upper and lower bounds of probability) can be estimated. It is more efficient than direct Monte Carlo simulation. Numerical and engineering examples are given to illustrate the efficiency and feasibility of the proposed approach.
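
    The core step can be sketched for a single focal element per variable: sample from a nominal instrumental PDF, restrict to the focal element, and reweight. The limit state, the interval, and the uniform "equivalent" density below are assumptions for illustration; the paper derives plausibility and belief bounds rather than a single probability.

      import numpy as np
      from scipy import stats

      def g(x):                          # limit state: failure when g < 0
          return 6.0 - x[:, 0] - x[:, 1]

      # Two epistemic variables known only through the interval [1, 3] (one
      # focal element each). Following the abstract, assign each a nominal
      # instrumental PDF and sample from it; the "equivalent" density is taken
      # uniform on the interval here, which is an illustrative assumption.
      lo, hi = 1.0, 3.0
      q = stats.norm(loc=(lo + hi) / 2, scale=(hi - lo) / 4)  # instrumental PDF

      rng = np.random.default_rng(0)
      n = 100_000
      x = q.rvs(size=(n, 2), random_state=rng)

      fail = g(x) < 0.0
      inside = np.all((x >= lo) & (x <= hi), axis=1)
      f_over_q = (1.0 / (hi - lo)) ** 2 / np.prod(q.pdf(x), axis=1)
      p_hat = np.mean((fail & inside) * f_over_q)   # importance-sampling estimate
      print(p_hat)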

  14. Toward topology-based characterization of small-scale mixing in compressible turbulence

    NASA Astrophysics Data System (ADS)

    Suman, Sawan; Girimaji, Sharath

    2011-11-01

    Turbulent mixing rate at small scales of motion (molecular mixing) is governed by the steepness of the scalar-gradient field which in turn is dependent upon the prevailing velocity gradients. Thus motivated, we propose a velocity-gradient topology-based approach for characterizing small-scale mixing in compressible turbulence. We define a mixing efficiency metric that is dependent upon the topology of the solenoidal and dilatational deformation rates of a fluid element. The mixing characteristics of solenoidal and dilatational velocity fluctuations are clearly delineated. We validate this new approach by employing mixing data from direct numerical simulations (DNS) of compressible decaying turbulence with passive scalar. For each velocity-gradient topology, we compare the mixing efficiency predicted by the topology-based model with the corresponding conditional scalar variance obtained from DNS. The new mixing metric accurately distinguishes good and poor mixing topologies and indeed reasonably captures the numerical values. The results clearly demonstrate the viability of the proposed approach for characterizing and predicting mixing in compressible flows.

  15. Cost-of-illness studies based on massive data: a prevalence-based, top-down regression approach.

    PubMed

    Stollenwerk, Björn; Welchowski, Thomas; Vogl, Matthias; Stock, Stephanie

    2016-04-01

    Despite the increasing availability of routine data, no analysis method has yet been presented for cost-of-illness (COI) studies based on massive data. We aim, first, to present such a method and, second, to assess the relevance of the associated gain in numerical efficiency. We propose a prevalence-based, top-down regression approach consisting of five steps: aggregating the data; fitting a generalized additive model (GAM); predicting costs via the fitted GAM; comparing predicted costs between prevalent and non-prevalent subjects; and quantifying the stochastic uncertainty via error propagation. To demonstrate the method, it was applied to aggregated German sickness fund data (from 1999) on chronic lung disease, covering over 7.3 million insured persons. To assess the gain in numerical efficiency, the computational time of the innovative approach was compared with corresponding GAMs applied to simulated individual-level data. Furthermore, the probability of model failure was modeled via logistic regression. Applying the innovative method was reasonably fast (19 min). In contrast, for patient-level data, computational time increased disproportionately with sample size. Furthermore, using patient-level data was accompanied by a substantial risk of model failure (about 80% for 6 million subjects). The gain in computational efficiency of the innovative COI method seems to be of practical relevance. Furthermore, it may yield more precise cost estimates.

  16. Safe Maneuvering Envelope Estimation Based on a Physical Approach

    NASA Technical Reports Server (NTRS)

    Lombaerts, Thomas J. J.; Schuet, Stefan R.; Wheeler, Kevin R.; Acosta, Diana; Kaneshige, John T.

    2013-01-01

    This paper discusses a computationally efficient algorithm for estimating the safe maneuvering envelope of damaged aircraft. The algorithm performs a robust reachability analysis through an optimal control formulation while making use of time scale separation and taking into account uncertainties in the aerodynamic derivatives. This approach differs from others in that it is physically inspired. This more transparent approach allows the data to be interpreted at each step, and it is assumed that these physical models, based upon flight dynamics theory, will therefore facilitate certification for future real-life applications.

  17. Photovoltaic technology for sustainability: An investigation of the distributed utility concept as a policy framework

    NASA Astrophysics Data System (ADS)

    Letendre, Steven Emery

    The U.S. electric utility sector in its current configuration is unsustainable. The majority of electricity in the United States is produced using finite fossil fuels. In addition, significant potential exists to improve the nation's efficient use of energy. A sustainable electric utility sector will be characterized by increased use of renewable energy sources and high levels of end-use efficiency. This dissertation analyzes two alternative policy approaches designed to move the U.S. electric utility sector toward sustainability. One approach, labeled incremental, involves maintaining the centralized structure of the electric utility sector while facilitating the introduction of renewable energy and efficiency into the electrical system through the pricing mechanism. A second, structural policy approach encourages structural changes based on the emerging distributed utility (DU) concept. A structural policy orientation attempts to capture the unique localized benefits that distributed renewable resources and energy efficiency offer to electric utility companies and their customers. A market penetration analysis of PV in centralized energy supply and distributed peak-shaving applications is conducted for a case-study electric utility company. Sensitivity analysis was performed based on the incremental and structural policy orientations. The analysis provides compelling evidence that policies designed to bring about structural change in the electric utility sector are needed to move the industry toward sustainability. Specifically, the analysis demonstrates that PV technology, a key renewable energy option likely to play an important role in a renewable energy future, will begin to penetrate the electrical system in distributed peak-shaving applications long before the technology is introduced as a centralized energy supply option. Most policies to date, which I term incremental, attempt to encourage energy efficiency and renewables through the pricing system. Based on past policy experience, it is unlikely that such an approach would allow PV to compete in Delaware as an energy supply option in the next ten to twenty years. Alternatively, a market-based, or green pricing, approach will not create significant market opportunities for PV as a centralized energy supply option. However, structural policies designed to encourage the explicit recognition of the localized benefits of distributed resources could result in PV being introduced into the electrical system early in the next century.

  18. Vapor Grown Perovskite Solar Cells

    NASA Astrophysics Data System (ADS)

    Abdussamad Abbas, Hisham

    Perovskite solar cells have been the fastest-growing solar cell technology to date, with verified efficiencies of over 22%. Most groups focus their research on solution-processed devices, which retain residual solvent in the material bulk. This work focuses extensively on the fabrication and properties of vapor-deposited perovskite devices, which are devoid of solvents. The initial part of this work details the fabrication of consistent, high-efficiency sequential-vapor NIP devices using P3HT as a p-type Type-II heterojunction. The sequential-vapor devices exhibit anomalies such as voltage evolution and I-V hysteresis owing to charge trapping in TiO2. Hence, sequential PIN devices were fabricated using doped Type-II heterojunctions and showed no device anomalies. Sequential PIN devices face a processing restriction, as organic Type-II heterojunction materials cannot withstand high processing temperatures, which limits device efficiency. This motivates co-evaporation for fabricating consistent, high-efficiency PIN devices; the approach places no restriction on substrates and offers stoichiometric control. A comprehensive description of the fabrication and of the co-evaporator setup, including how to build it, is given. The results for co-evaporated devices clearly show that grain size, stoichiometry, and doped transport layers are all critical for eliminating device anomalies and fabricating high-efficiency devices. Finally, formamidinium-based perovskites were fabricated using the sequential approach. A thermal degradation study comparing methylammonium- and formamidinium-based perovskite films found the formamidinium-based perovskites to be more stable. Lastly, inorganic films such as CdS and nickel oxide were developed in this work.

  19. A New Strategy to Evaluate Technical Efficiency in Hospitals Using Homogeneous Groups of Casemix : How to Evaluate When There is Not DRGs?

    PubMed

    Villalobos-Cid, Manuel; Chacón, Max; Zitko, Pedro; Instroza-Ponta, Mario

    2016-04-01

    The public health system has restricted economic resources. Because of that, it is necessary to know how resources are being used and whether they are properly distributed. Several works have applied classical approaches based on Data Envelopment Analysis (DEA) and Stochastic Frontier Analysis (SFA) for this purpose. However, when hospitals have different casemixes, this is not the best approach. In order to avoid biases in the comparisons, other works have recommended the use of hospital production data corrected by the weights from Diagnosis Related Groups (DRGs) to adjust for the casemix of hospitals. However, not all countries have this tool fully implemented, which limits the efficiency evaluation. This paper proposes a new approach for evaluating the efficiency of hospitals. It uses a graph-based clustering algorithm to find groups of hospitals that have similar production profiles. Then, DEA is used to evaluate the technical efficiency of each group. The proposed approach is tested using the 2014 production data of 193 Chilean public hospitals. The results allowed us to identify distinct performance profiles for each group, differing from other studies that employ data from partially implemented DRGs. Our results deliver a better description of the resource management of the different groups of hospitals. We have created a website with the results ( bioinformatic.diinf.usach.cl/publichealth ). Data can be requested from the authors.

  20. Integrated fiber-coupled launcher for slow plasmon-polariton waves.

    PubMed

    Della Valle, Giuseppe; Longhi, Stefano

    2012-01-30

    We propose and numerically demonstrate an integrated fiber-coupled launcher for slow surface plasmon-polaritons. The device is based on a novel plasmonic mode-converter providing efficient power transfer from the fast to the slow modes of a metallic nanostripe. Total coupling efficiency with standard single-mode fiber approaching 30% (including ohmic losses) has been numerically predicted for a 25-µm long gold-based device operating at 1.55 µm telecom wavelength.

  1. Hybrid dose calculation: a dose calculation algorithm for microbeam radiation therapy

    NASA Astrophysics Data System (ADS)

    Donzelli, Mattia; Bräuer-Krisch, Elke; Oelfke, Uwe; Wilkens, Jan J.; Bartzsch, Stefan

    2018-02-01

    Microbeam radiation therapy (MRT) is still a preclinical approach in radiation oncology that uses planar, micrometre-wide beamlets with extremely high peak doses, separated by a few hundred micrometre wide low-dose regions. Abundant preclinical evidence demonstrates that MRT spares normal tissue more effectively than conventional radiation therapy, at equivalent tumour control. In order to launch the first clinical trials, accurate and efficient dose calculation methods are an essential prerequisite. In this work a hybrid dose calculation approach is presented that is based on a combination of Monte Carlo and kernel-based dose calculation. In various examples the performance of the algorithm is compared to purely Monte Carlo and purely kernel-based dose calculations. The accuracy of the developed algorithm is comparable to conventional pure Monte Carlo calculations. In particular, for inhomogeneous materials the hybrid dose calculation algorithm outperforms purely convolution-based dose calculation approaches. It is demonstrated that the hybrid algorithm can efficiently calculate even complicated pencil beam and cross-firing beam geometries. The required calculation times are substantially lower than for pure Monte Carlo calculations.
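
    The kernel half of such a hybrid can be sketched as a convolution of the beam fluence with a pencil-beam dose kernel, which is valid in homogeneous media; the grid, beam widths, and kernel shape below are illustrative only, and the Monte Carlo half that corrects for inhomogeneities is omitted.

      import numpy as np
      from scipy.signal import fftconvolve

      # Minimal kernel-superposition sketch: dose = fluence convolved with a
      # pencil-beam kernel. Grid and kernel are toy values at microbeam scale.
      nx, ny, pix = 512, 512, 1.0              # nominal 1 um pixels
      fluence = np.zeros((nx, ny))
      for cx in range(56, 512, 400):           # two planar microbeams, ~50 um wide
          fluence[:, cx:cx + 50] = 1.0

      yy, xx = np.mgrid[-64:64, -64:64] * pix
      r = np.hypot(xx, yy) + 0.5               # offset avoids the r = 0 singularity
      kernel = np.exp(-r / 10.0) / r           # toy radially decaying dose kernel
      kernel /= kernel.sum()

      dose = fftconvolve(fluence, kernel, mode="same")
      print(dose.max(), dose[256, 281])        # peak dose vs valley dose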

  2. Energy conversion approaches and materials for high-efficiency photovoltaics.

    PubMed

    Green, Martin A; Bremner, Stephen P

    2016-12-20

    The past five years have seen significant cost reductions in photovoltaics and a correspondingly strong increase in uptake, with photovoltaics now positioned to provide one of the lowest-cost options for future electricity generation. What is becoming clear as the industry develops is that area-related costs, such as costs of encapsulation and field-installation, are increasingly important components of the total costs of photovoltaic electricity generation, with this trend expected to continue. Improved energy-conversion efficiency directly reduces such costs, with increased manufacturing volume likely to drive down the additional costs associated with implementing higher efficiencies. This suggests the industry will evolve beyond the standard single-junction solar cells that currently dominate commercial production, where energy-conversion efficiencies are fundamentally constrained by Shockley-Queisser limits to practical values below 30%. This Review assesses the overall prospects for a range of approaches that can potentially exceed these limits, based on ultimate efficiency prospects, material requirements and developmental outlook.

  3. Thermodynamic analysis of engineering solutions aimed at raising the efficiency of integrated gasification combined cycle

    NASA Astrophysics Data System (ADS)

    Gordeev, S. I.; Bogatova, T. F.; Ryzhkov, A. F.

    2017-11-01

    Raising the efficiency and environmental friendliness of electric power generation from coal is the aim of numerous research groups today. The traditional approach based on the steam power cycle has reached its efficiency limit, set by materials development and maneuverability requirements. The rival approach based on the combined cycle is also drawing nearer to its efficiency limit. However, there is room to increase the efficiency of the integrated gasification combined cycle (IGCC), whose energy efficiency is currently at the level of modern steam-turbine power units; the upper bound for this increase is the efficiency of an NGCC plant. One of the main problems of the IGCC is the higher cost of receiving and preparing fuel gas for the gas turbine unit (GTU). It would be reasonable to decrease the required amount of fuel gas in the power unit to minimize these costs. This effect can be achieved by raising the heating value of the fuel gas, its heat content, and the heat content of the cycle air. Using the process flowsheet of a 500 MW IGCC running on Kuznetsk bituminous coal, modeled in the Thermoflex software, the influence of the developed technical solutions on the efficiency of the power plant is examined. Raising the steam-air blast temperature to 900°C increases the conversion efficiency to 84.2%. Raising the temperature level of fuel gas clean-up to 900°C increases the gross/net IGCC efficiency by 3.42%. Cycle air heating reduces the need for fuel gas by 40% and raises the gross/net IGCC efficiency by 0.85-1.22%. The proposed solutions allow the IGCC to exceed the net efficiency of analogous plants by 1.8-2.3%.

  4. A Risk-Based Approach for Aerothermal/TPS Analysis and Testing

    DTIC Science & Technology

    2007-07-01

    A Risk-Based Approach for Aerothermal/TPS Analysis and Testing, Michael J. Wright and Jay H. Grinstead, NASA Ames... of the thermal protection system (TPS) is to protect the payload (crew, cargo, or science) from this entry heating environment. The performance of... the TPS is determined by the efficiency and reliability of this system, typically measured...

  5. Efficient expansion of global protected areas requires simultaneous planning for species and ecosystems

    PubMed Central

    Polak, Tal; Watson, James E. M.; Fuller, Richard A.; Joseph, Liana N.; Martin, Tara G.; Possingham, Hugh P.; Venter, Oscar; Carwardine, Josie

    2015-01-01

    The Convention on Biological Diversity (CBD)'s strategic plan advocates the use of environmental surrogates, such as ecosystems, as a basis for planning where new protected areas should be placed. However, the efficiency and effectiveness of this ecosystem-based planning approach to adequately capture threatened species in protected area networks is unknown. We tested the application of this approach in Australia according to the nation's CBD-inspired goals for expansion of the national protected area system. We set targets for ecosystems (10% of the extent of each ecosystem) and threatened species (variable extents based on persistence requirements for each species) and then measured the total land area required and opportunity cost of meeting those targets independently, sequentially and simultaneously. We discover that an ecosystem-based approach will not ensure the adequate representation of threatened species in protected areas. Planning simultaneously for species and ecosystem targets delivered the most efficient outcomes for both sets of targets, while planning first for ecosystems and then filling the gaps to meet species targets was the most inefficient conservation strategy. Our analysis highlights the pitfalls of pursuing goals for species and ecosystems non-cooperatively and has significant implications for nations aiming to meet their CBD mandated protected area obligations. PMID:26064645

  6. Reduced-order modeling of piezoelectric energy harvesters with nonlinear circuits under complex conditions

    NASA Astrophysics Data System (ADS)

    Xiang, Hong-Jun; Zhang, Zhi-Wei; Shi, Zhi-Fei; Li, Hong

    2018-04-01

    A fully coupled modeling approach is developed for piezoelectric energy harvesters in this work, based on the use of available robust finite element packages and efficient reduced-order modeling techniques. First, the harvester is modeled using finite element packages. The dynamic equilibrium equations of harvesters are rebuilt by extracting system matrices from the finite element model using built-in commands, without any additional tools. A Krylov subspace-based scheme is then applied to obtain a reduced-order model that improves simulation efficiency while preserving the key features of harvesters. Co-simulation of the reduced-order model with nonlinear energy harvesting circuits is achieved at the system level. Several examples, covering both harmonic response and transient response analysis, are conducted to validate the present approach. The proposed approach improves the simulation efficiency by several orders of magnitude. Moreover, the parameters used in the equivalent circuit model can be conveniently obtained by the proposed eigenvector-based model order reduction technique. More importantly, this work establishes a methodology for modeling piezoelectric energy harvesters with arbitrarily complicated mechanical geometries and nonlinear circuits; the input load may also be complex. The method can be employed by harvester designers to optimize mechanical structures or by circuit designers to develop novel energy harvesting circuits.
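
    The reduction step can be sketched with a one-sided Krylov (Arnoldi) projection that matches low-frequency moments of a linear state-space model. The toy matrices below stand in for the matrices extracted from a finite element package, and the coupling to a nonlinear circuit is omitted.

      import numpy as np

      def arnoldi_basis(A, b, r):
          """Orthonormal basis of the Krylov subspace span{b, Ab, ..., A^(r-1) b}."""
          n = len(b)
          V = np.zeros((n, r))
          V[:, 0] = b / np.linalg.norm(b)
          for j in range(1, r):
              w = A @ V[:, j - 1]
              w -= V[:, :j] @ (V[:, :j].T @ w)   # Gram-Schmidt orthogonalisation
              V[:, j] = w / np.linalg.norm(w)
          return V

      # Toy first-order system x' = A x + b u, y = c x (a harvester model would
      # supply A, b, c from FE matrices). Moment matching about s = 0 uses the
      # Krylov space of A^{-1} applied to A^{-1} b.
      rng = np.random.default_rng(0)
      n, r = 500, 12
      A = -np.eye(n) + 0.02 * rng.normal(size=(n, n))
      b = rng.normal(size=n)
      c = rng.normal(size=n)

      Ainv_b = np.linalg.solve(A, b)
      V = arnoldi_basis(np.linalg.inv(A), Ainv_b, r)
      Ar, br, cr = V.T @ A @ V, V.T @ b, V.T @ c   # r x r reduced model

      # DC gains of the full and reduced models should agree closely
      print(c @ np.linalg.solve(A, b), cr @ np.linalg.solve(Ar, br))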

  7. A novel way to establish fertilization recommendations based on agronomic efficiency and a sustainable yield index for rice crops.

    PubMed

    Liu, Chuang; Liu, Yi; Li, Zhiguo; Zhang, Guoshi; Chen, Fang

    2017-04-24

    A simpler approach for establishing fertilizer recommendations for major crops is urgently required to improve the application efficiency of commercial fertilizers in China. To address this need, we developed a method based on field data drawn from the China Program of the International Plant Nutrition Institute (IPNI) rice experiments and investigations carried out in southeastern China during 2001 to 2012. Our results show that, using agronomic efficiencies and a sustainable yield index (SYI), this new method for establishing fertilizer recommendations robustly estimated the mean rice yield (7.6 t/ha) and mean nutrient supply capacities (186, 60, and 96 kg/ha of N, P2O5, and K2O, respectively) of fertilizers in the study region. In addition, there were significant differences in rice yield response, economic cost/benefit ratio, and nutrient-use efficiencies associated with agronomic efficiencies ranked as high, medium and low. Thus, ranking agronomic efficiency could strengthen linear models relating rice yields and SYI. Our results also indicate that the new method provides better recommendations in terms of rice yield, SYI, and profitability than previous methods. Hence, we believe it is an effective approach for improving recommended applications of commercial fertilizers to rice (and potentially other crops).

  8. Constructing Adverse Outcome Pathways: a Demonstration of an Ontology-based Semantics Mapping Approach

    EPA Science Inventory

    An adverse outcome pathway (AOP) provides a conceptual framework to evaluate and integrate chemical toxicity and its effects across levels of biological organization. As such, it is essential to develop a resource-efficient and effective approach to extend molecular initiating ...

  9. A Systems Biology Approach to Toxicology Research with Small Fish Models

    EPA Science Inventory

    Increasing use of mechanistically-based molecular and biochemical endpoints and in vitro assays is being advocated as a more efficient and cost-effective approach for generating chemical hazard data. However, development of effective assays and application of the resulting data i...

  10. PoMo: An Allele Frequency-Based Approach for Species Tree Estimation

    PubMed Central

    De Maio, Nicola; Schrempf, Dominik; Kosiol, Carolin

    2015-01-01

    Incomplete lineage sorting can cause incongruencies of the overall species-level phylogenetic tree with the phylogenetic trees for individual genes or genomic segments. If these incongruencies are not accounted for, it is possible to incur several biases in species tree estimation. Here, we present a simple maximum likelihood approach that accounts for ancestral variation and incomplete lineage sorting. We use a POlymorphisms-aware phylogenetic MOdel (PoMo) that we have recently shown to efficiently estimate mutation rates and fixation biases from within and between-species variation data. We extend this model to perform efficient estimation of species trees. We test the performance of PoMo in several different scenarios of incomplete lineage sorting using simulations and compare it with existing methods both in accuracy and computational speed. In contrast to other approaches, our model does not use coalescent theory but is allele frequency based. We show that PoMo is well suited for genome-wide species tree estimation and that on such data it is more accurate than previous approaches. PMID:26209413

  11. Incorporating extrinsic noise into the stochastic simulation of biochemical reactions: A comparison of approaches

    NASA Astrophysics Data System (ADS)

    Thanh, Vo Hong; Marchetti, Luca; Reali, Federico; Priami, Corrado

    2018-02-01

    The stochastic simulation algorithm (SSA) has been widely used for simulating biochemical reaction networks. SSA is able to capture the inherently intrinsic noise of the biological system, which is due to the discreteness of species populations and the randomness of their reciprocal interactions. However, SSA does not consider other sources of heterogeneity in biochemical reaction systems, which are referred to as extrinsic noise. Here, we extend two simulation approaches, namely the integration-based method and the rejection-based method, to take extrinsic noise into account by allowing the reaction propensities to vary in a time- and state-dependent manner. For both methods, new efficient implementations are introduced and their efficiency and applicability to biological models are investigated. Our numerical results suggest that the rejection-based method performs better than the integration-based method when extrinsic noise is considered.
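
    The rejection idea for time-varying propensities can be sketched with the classic thinning construction: propose candidate jump times from a constant upper bound on the total propensity and accept with probability a(t)/a_max. The birth-death model and rates below are illustrative, not the paper's benchmark.

      import numpy as np
      rng = np.random.default_rng(0)

      # Birth-death process with extrinsic noise: birth rate modulated in time,
      # k(t) = k0 * (1 + 0.5 sin(w t)); death rate gdeg * n. Rates are toy values.
      k0, w, gdeg = 10.0, 2.0, 0.1

      def propensities(n, t):
          return np.array([k0 * (1.0 + 0.5 * np.sin(w * t)), gdeg * n])

      def ssa_thinning(n0=0, t_end=50.0):
          """Rejection (thinning) SSA: propose jumps using a constant upper
          bound on the total propensity, accept with ratio a(t)/a_max."""
          n, t, traj = n0, 0.0, [(0.0, n0)]
          while t < t_end:
              a_max = k0 * 1.5 + gdeg * n          # bound valid while state is n
              t += rng.exponential(1.0 / a_max)    # candidate jump time
              a = propensities(n, t)
              if rng.random() < a.sum() / a_max:   # accept: a reaction fires
                  n += 1 if rng.random() < a[0] / a.sum() else -1
                  traj.append((t, n))
          return traj

      print(ssa_thinning()[-1])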

  12. Total internal reflection-based planar waveguide solar concentrator with symmetric air prisms as couplers.

    PubMed

    Xie, Peng; Lin, Huichuan; Liu, Yong; Li, Baojun

    2014-10-20

    We present a waveguide coupling approach for planar waveguide solar concentrator. In this approach, total internal reflection (TIR)-based symmetric air prisms are used as couplers to increase the coupler reflectivity and to maximize the optical efficiency. The proposed concentrator consists of a line focusing cylindrical lens array over a planar waveguide. The TIR-based couplers are located at the focal line of each lens to couple the focused sunlight into the waveguide. The optical system was modeled and simulated with a commercial ray tracing software (Zemax). Results show that the system used with optimized TIR-based couplers can achieve 70% optical efficiency at 50 × geometrical concentration ratio, resulting in a flux concentration ratio of 35 without additional secondary concentrator. An acceptance angle of ± 7.5° is achieved in the x-z plane due to the use of cylindrical lens array as the primary concentrator.

  13. Weaknesses in Applying a Process Approach in Industry Enterprises

    NASA Astrophysics Data System (ADS)

    Kučerová, Marta; Mĺkva, Miroslava; Fidlerová, Helena

    2012-12-01

    The paper deals with the process approach as one of the main principles of quality management. Quality management systems based on the process approach currently represent one of the proven ways to manage an organization. The volume of sales, costs and profit levels are influenced by the quality of processes and efficient process flow. As the results of the research project showed, there are weaknesses in applying the process approach in industrial practice; in many organizations in Slovakia it has often amounted to only a formal change from functional to process management. For efficient process management it is essential that companies pay attention to how they organize their processes and seek their continuous improvement.

  14. A 25.5 percent AM0 gallium arsenide grating solar cell

    NASA Technical Reports Server (NTRS)

    Weizer, V. G.; Godlewski, M. P.

    1985-01-01

    Recent calculations have shown that significant open circuit voltage gains are possible with a dot grating junction geometry. The feasibility of applying the dot geometry to the GaAs cell was investigated. This geometry is shown to result in voltages approaching 1.120 V and efficiencies well over 25 percent (AM0) if good collection efficiency can be maintained. The latter is shown to be possible if one chooses the proper base resistivity and cell thickness. The above advances in efficiency are shown to be possible in the P-base cell with only minor improvements in existing technology.

  16. Vote Stuffing Control in IPTV-based Recommender Systems

    NASA Astrophysics Data System (ADS)

    Bhatt, Rajen

    Vote stuffing is a general problem in the functioning of content rating-based recommender systems. Currently, IPTV viewers browse content based on program ratings. In this paper, we propose a fuzzy clustering-based approach to remove the effects of vote stuffing and consider only the genuine ratings for programs across multiple genres. The approach requires only one authentic rating, which is generally available from recommendation system administrators or program broadcasters. The entire process is automated using fuzzy c-means clustering. Computational experiments performed on one real-world program rating database show that the proposed approach is very efficient at controlling vote stuffing.
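
    A minimal sketch of the pipeline: plain fuzzy c-means on rating vectors, with the single authentic rating used to pick the genuine cluster. All data, cluster counts, and the 0.5 membership cut-off are illustrative assumptions, not the paper's settings.

      import numpy as np

      def fuzzy_cmeans(X, c=2, m=2.0, iters=100, seed=0):
          """Plain fuzzy c-means: returns cluster centers and memberships U."""
          rng = np.random.default_rng(seed)
          U = rng.dirichlet(np.ones(c), size=len(X))    # random initial memberships
          for _ in range(iters):
              Um = U ** m
              centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
              d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
              p = 2.0 / (m - 1.0)
              U = 1.0 / (d ** p * np.sum(d ** (-p), axis=1, keepdims=True))
          return centers, U

      # Toy ratings for 5 programs: genuine viewers around 3-4, a stuffing bloc
      # pushing 9-10. The single authentic rating anchors the genuine cluster.
      rng = np.random.default_rng(1)
      genuine = np.clip(rng.normal(3.5, 0.5, (80, 5)), 1, 10)
      stuffed = np.clip(rng.normal(9.5, 0.3, (20, 5)), 1, 10)
      X = np.vstack([genuine, stuffed])
      authentic = np.full(5, 3.6)                   # e.g. the broadcaster's rating

      centers, U = fuzzy_cmeans(X, c=2)
      genuine_c = np.argmin(np.linalg.norm(centers - authentic, axis=1))
      keep = U[:, genuine_c] > 0.5                  # drop the stuffed votes
      print(X[keep].mean(axis=0))                   # cleaned program ratings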

  17. Enhanced DEA model with undesirable output and interval data for rice growing farmers performance assessment

    NASA Astrophysics Data System (ADS)

    Khan, Sahubar Ali Mohd. Nadhar; Ramli, Razamin; Baten, M. D. Azizul

    2015-12-01

    Agricultural production processes typically produce two types of outputs: economically desirable outputs as well as environmentally undesirable ones (such as greenhouse gas emissions, nitrate leaching, effects on humans and other organisms, and water pollution). In efficiency analysis, these undesirable outputs cannot be ignored and need to be included in order to obtain an accurate estimate of firms' efficiency. Additionally, climatic factors as well as data uncertainty can significantly affect the efficiency analysis. A number of approaches have been proposed in the DEA literature to account for undesirable outputs. Many researchers have pointed out that the directional distance function (DDF) approach is the best, as it allows for a simultaneous increase in desirable outputs and reduction of undesirable outputs. Additionally, the interval data approach has been found the most suitable for handling data uncertainty, as it is much simpler to model and needs less information regarding distributions and membership functions. In this paper, an enhanced DEA model based on the DDF approach that considers undesirable outputs as well as climatic factors and interval data is proposed. This model will be used to determine the efficiency of rice farmers who produce undesirable outputs and operate under uncertainty. It is hoped that the proposed model will provide a better estimate of rice farmers' efficiency.

  18. A simple and efficient method to visualize and quantify the efficiency of chromosomal mutations from genome editing

    PubMed Central

    Fu, Liezhen; Wen, Luan; Luu, Nga; Shi, Yun-Bo

    2016-01-01

    Genome editing with designer nucleases such as TALEN and CRISPR/Cas enzymes has broad applications. Delivery of these designer nucleases into organisms induces various genetic mutations, including deletions, insertions and nucleotide substitutions. Characterizing those mutations is critical for evaluating the efficacy and specificity of targeted genome editing. While a number of methods have been developed to identify the mutations, none other than sequencing allows the identification of the most desired mutations, i.e., out-of-frame insertions/deletions that disrupt genes. Here we report a simple and efficient method to visualize and quantify the efficiency of genomic mutations induced by genome editing. Our approach is based on the expression of a two-color fusion protein from a vector that allows the insertion of the edited region of the genome in between the two color moieties. We show that our approach not only easily identifies developing animals with desired mutations but also efficiently quantifies the mutation rate in vivo. Furthermore, by using LacZα and GFP as the color moieties, our approach can even eliminate the need for a fluorescent microscope, allowing the analysis with simple bright-field visualization. Such an approach will greatly simplify the screen for effective genome-editing enzymes and identify the desired mutant cells/animals. PMID:27748423

  19. Adopting Problem-Based Learning Model for AN Electrical Engineering Curriculum

    NASA Astrophysics Data System (ADS)

    Khan, Mohamed Khan Aftab Ahmed; Sinnadurai, Rajendran; Amudha, M.; Elamvazuthi, I.; Vasant, P.

    2010-06-01

    The shortage of highly qualified academicians in a knowledge-based economy and the potential benefits of the Problem-Based Learning (PBL) approach have necessitated the adoption of PBL in many areas of education. This paper discusses a PBL experience for an electrical engineering undergraduate course. Some preliminary experiences of implementing it are described and discussed. It was found that the PBL approach seems to be an efficient strategy not only for undergraduate engineering education but also for instilling lifelong learning.

  20. Predicting efficiency of solar cells based on transparent conducting electrodes

    NASA Astrophysics Data System (ADS)

    Kumar, Ankush

    2017-01-01

    Efficiency of a solar cell is directly correlated with the performance of its transparent conducting electrodes (TCEs), which dictate its two core processes, viz., absorption and collection efficiencies. Emerging designs of a TCE involve active networks of carbon nanotubes, silver nanowires and various template-based techniques providing diverse structures; here, voids are transparent for optical transmittance while the conducting network acts as a charge collector. However, it is still not well understood which kind of network structure leads to optimum solar cell performance; therefore, mostly an arbitrary network is chosen as a solar cell electrode. Herein, we propose a new generic approach for understanding the role of TCEs in determining the solar cell efficiency, based on an analysis of shadowing and recombination losses. A random network of wires encloses void regions of different sizes and shapes which permit light transmission; two terms, void fraction and equivalent radius, are defined to represent the TCE transmittance and wire spacings, respectively. The approach has been applied to various literature examples and their solar cell performance has been compared. To obtain high-efficiency solar cells, the optimum density of the wires and their aspect ratio, as well as the active layer thickness, are calculated. Our findings show that a TCE well suited for one solar cell may not be suitable for another. For solar cells based on high diffusion lengths, the void fraction of the network should be low, while for solar cells based on low diffusion lengths, the equivalent radius should be lower. A network with wire spacing smaller than the diffusion length behaves similarly to continuous-film-based TCEs (such as indium tin oxide). The present work will be useful for architectural as well as material engineering of transparent electrodes to improve solar cell performance.
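
    A rough illustration of the two descriptors defined above, with assumed toy geometry (not taken from the paper): random wires are rasterized onto a grid, the uncovered fraction approximates the void fraction, and the mean void size yields an equivalent radius.

      # Monte Carlo estimate of void fraction and equivalent radius of a wire network.
      import numpy as np
      from scipy import ndimage

      rng = np.random.default_rng(0)
      N, n_wires, length = 512, 120, 200          # grid size and wire geometry in pixels
      covered = np.zeros((N, N), dtype=bool)

      for _ in range(n_wires):
          x0, y0 = rng.uniform(0, N, 2)
          theta = rng.uniform(0, np.pi)
          t = np.linspace(0, length, 4 * length)
          xs = np.clip(x0 + t * np.cos(theta), 0, N - 1).astype(int)
          ys = np.clip(y0 + t * np.sin(theta), 0, N - 1).astype(int)
          covered[xs, ys] = True

      covered = ndimage.binary_dilation(covered)  # give the wires a finite width
      voids, n_voids = ndimage.label(~covered)    # connected transparent regions
      sizes = ndimage.sum(~covered, voids, index=range(1, n_voids + 1))

      void_fraction = (~covered).mean()             # proxy for optical transmittance
      equiv_radius = np.sqrt(sizes.mean() / np.pi)  # mean void area as a disc radius
      print(f"void fraction = {void_fraction:.2f}, equivalent radius = {equiv_radius:.1f} px")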

  1. Reduced-order modelling of parameter-dependent, linear and nonlinear dynamic partial differential equation models.

    PubMed

    Shah, A A; Xing, W W; Triantafyllidis, V

    2017-04-01

    In this paper, we develop reduced-order models for dynamic, parameter-dependent, linear and nonlinear partial differential equations using proper orthogonal decomposition (POD). The main challenges are to accurately and efficiently approximate the POD bases for new parameter values and, in the case of nonlinear problems, to efficiently handle the nonlinear terms. We use a Bayesian nonlinear regression approach to learn the snapshots of the solutions and the nonlinearities for new parameter values. Computational efficiency is ensured by using manifold learning to perform the emulation in a low-dimensional space. The accuracy of the method is demonstrated on a linear and a nonlinear example, with comparisons with a global basis approach.
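
    A minimal sketch of the POD step at the heart of the paper, assuming only that solution snapshots are stacked as columns of a matrix; the Bayesian regression and manifold-learning emulation of new parameter values are not reproduced here.

      # POD basis from a snapshot matrix via the SVD, truncated by energy content.
      import numpy as np

      def pod_basis(S, energy=0.999):
          """Return the leading POD modes capturing `energy` of the snapshot variance."""
          U, s, _ = np.linalg.svd(S, full_matrices=False)
          cum = np.cumsum(s**2) / np.sum(s**2)
          r = int(np.searchsorted(cum, energy)) + 1
          return U[:, :r], s[:r]

      # toy snapshots: a decaying two-mode 1D field sampled over time
      x = np.linspace(0, 1, 200)
      t = np.linspace(0, 2, 60)
      S = (np.exp(-t)[None, :] * np.sin(np.pi * x)[:, None]
           + 0.3 * np.exp(-4 * t)[None, :] * np.sin(3 * np.pi * x)[:, None])

      Phi, sv = pod_basis(S)
      a = Phi.T @ S                  # reduced coordinates of each snapshot
      print(Phi.shape[1], np.linalg.norm(S - Phi @ a) / np.linalg.norm(S))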

  2. Reduced-order modelling of parameter-dependent, linear and nonlinear dynamic partial differential equation models

    PubMed Central

    Xing, W. W.; Triantafyllidis, V.

    2017-01-01

    In this paper, we develop reduced-order models for dynamic, parameter-dependent, linear and nonlinear partial differential equations using proper orthogonal decomposition (POD). The main challenges are to accurately and efficiently approximate the POD bases for new parameter values and, in the case of nonlinear problems, to efficiently handle the nonlinear terms. We use a Bayesian nonlinear regression approach to learn the snapshots of the solutions and the nonlinearities for new parameter values. Computational efficiency is ensured by using manifold learning to perform the emulation in a low-dimensional space. The accuracy of the method is demonstrated on a linear and a nonlinear example, with comparisons with a global basis approach. PMID:28484327

  3. Partition method and experimental validation for impact dynamics of flexible multibody system

    NASA Astrophysics Data System (ADS)

    Wang, J. Y.; Liu, Z. Y.; Hong, J. Z.

    2018-06-01

    The impact problem of a flexible multibody system is a non-smooth, high-transient, and strong-nonlinear dynamic process with variable boundary. How to model the contact/impact process accurately and efficiently is one of the main difficulties in many engineering applications. The numerical approaches being used widely in impact analysis are mainly from two fields: multibody system dynamics (MBS) and computational solid mechanics (CSM). Approaches based on MBS provide a more efficient yet less accurate analysis of the contact/impact problems, while approaches based on CSM are well suited for particularly high accuracy needs, yet require very high computational effort. To bridge the gap between accuracy and efficiency in the dynamic simulation of a flexible multibody system with contacts/impacts, a partition method is presented in which the contact body is divided into two parts, an impact region and a non-impact region. The impact region is modeled using the finite element method to guarantee the local accuracy, while the non-impact region is modeled using the modal reduction approach to raise the global efficiency. A three-dimensional rod-plate impact experiment is designed and performed to validate the numerical results. The principle for how to partition the contact bodies is proposed: the maximum radius of the impact region can be estimated by an analytical method, and the modal truncation orders of the non-impact region can be estimated by the highest frequency of the signal measured. The simulation results using the presented method are in good agreement with the experimental results. It shows that this method is an effective formulation considering both accuracy and efficiency. Moreover, a more complicated multibody impact problem of a crank slider mechanism is investigated to strengthen this conclusion.

  4. Endovascular Electrodes for Electrical Stimulation of Blood Vessels for Vasoconstriction - a Finite Element Simulation Study

    NASA Astrophysics Data System (ADS)

    Kezurer, Noa; Farah, Nairouz; Mandel, Yossi

    2016-08-01

    Hemorrhagic shock accounts for 30-40 percent of trauma mortality, as bleeding may sometimes be hard to control. Application of short electrical pulses on blood vessels was recently shown to elicit robust vasoconstriction and reduction of blood loss following vascular injury. In this study we present a novel approach for vasoconstriction based on endovascular application of electrical pulses for situations where access to the vessel is limited. In addition to ease of access, we hypothesize that this novel approach will result in a localized and efficient vasoconstriction. Using computer modeling (COMSOL Multiphysics, Electric Currents Module), we studied the effect of endovascular pulsed electrical treatment on the abdominal aorta of pigs, and compared the efficiency of different electrode configurations in terms of electric field amplitude, homogeneity and locality when applied on a blood vessel wall. Results reveal that the optimal configuration is the endovascular approach where four electrodes are used, spaced 13 mm apart. Furthermore, computer-based temperature investigations (bio-heat model, COMSOL Multiphysics) show that the maximum expected temperature rise is 1.2 degrees, highlighting the safety of the four-electrode endovascular configuration. These results can aid in planning the application of endovascular pulsed electrical treatment as an efficient and safe vasoconstriction approach.

  5. Endovascular Electrodes for Electrical Stimulation of Blood Vessels for Vasoconstriction – a Finite Element Simulation Study

    PubMed Central

    Kezurer, Noa; Farah, Nairouz; Mandel, Yossi

    2016-01-01

    Hemorrhagic shock accounts for 30–40 percent of trauma mortality, as bleeding may sometimes be hard to control. Application of short electrical pulses on blood vessels was recently shown to elicit robust vasoconstriction and reduction of blood loss following vascular injury. In this study we present a novel approach for vasoconstriction based on endovascular application of electrical pulses for situations where access to the vessel is limited. In addition to ease of access, we hypothesize that this novel approach will result in a localized and efficient vasoconstriction. Using computer modeling (COMSOL Multiphysics, Electric Currents Module), we studied the effect of endovascular pulsed electrical treatment on the abdominal aorta of pigs, and compared the efficiency of different electrode configurations in terms of electric field amplitude, homogeneity and locality when applied on a blood vessel wall. Results reveal that the optimal configuration is the endovascular approach where four electrodes are used, spaced 13 mm apart. Furthermore, computer-based temperature investigations (bio-heat model, COMSOL Multiphysics) show that the maximum expected temperature rise is 1.2 degrees, highlighting the safety of the four-electrode endovascular configuration. These results can aid in planning the application of endovascular pulsed electrical treatment as an efficient and safe vasoconstriction approach. PMID:27534438

  6. Spherical Harmonic-based Random Fields Based on Real Particle 3D Data: Improved Numerical Algorithm and Quantitative Comparison to Real Particles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    X. Liu; E. Garboczi; M. Grigoriu

    Many parameters affect cyclone efficiency, and these parameters can have different effects in different flow regimes. Therefore, the maximum-efficiency cyclone length is a function of the specific geometry and operating conditions in use. In this study, we obtained a relationship describing the minimum particle diameter or maximum cyclone efficiency by using a theoretical approach based on cyclone geometry and fluid properties. We have compared the empirical predictions with corresponding literature data and observed good agreement. The results address the importance of fluid properties. Inlet and vortex finder cross-sections, cone-apex diameter, inlet Reynolds number and surface roughness are found to be the other important parameters affecting cyclone height. The surface friction coefficient, on the other hand, is difficult to employ in the calculations. We developed a theoretical approach to find the maximum-efficiency heights for cyclones with tangential inlets, and we suggested a relation for this height as a function of cyclone geometry and operating parameters. To generalize the use of this relation, two dimensionless parameters, one for geometric and one for operational variables, were defined, and the results were presented in graphical form so that one can calculate these dimensionless parameters and then find the maximum-efficiency height of a specific cyclone.

  7. Limited Efficiency of Drug Delivery to Specific Intracellular Organelles Using Subcellularly "Targeted" Drug Delivery Systems.

    PubMed

    Maity, Amit Ranjan; Stepensky, David

    2016-01-04

    Many drugs have been designed to act on intracellular targets and to affect intracellular processes inside target cells. For the desired effects to be exerted, these drugs should permeate target cells and reach specific intracellular organelles. This subcellular drug targeting approach has been proposed to enhance the accumulation of these drugs in their target organelles and thereby improve their efficiency. The approach is based on drug encapsulation in drug delivery systems (DDSs) and/or their decoration with specific targeting moieties that are intended to enhance drug/DDS accumulation in the intracellular organelle of interest. During recent years, there has been a constant increase in interest in DDSs targeted to specific intracellular organelles, and many different approaches have been proposed for attaining efficient drug delivery to specific organelles of interest. However, it appears that in many studies insufficient effort has been devoted to quantitative analysis of the major formulation parameters of DDS disposition (efficiency of DDS endocytosis and endosomal escape, intracellular trafficking, and efficiency of DDS delivery to the target organelle) and of the resulting pharmacological effects. Thus, in many cases, claims regarding efficient delivery of a drug/DDS to a specific organelle and efficient subcellular targeting appear to be exaggerated. On the basis of the available experimental data, it appears that drug/DDS decoration with specific targeting residues can affect their intracellular fate and result in preferential drug accumulation within an organelle of interest. However, it is not clear whether these approaches will be efficient in in vivo settings and translate into preclinical and clinical applications. Studies that quantitatively assess the mechanisms, barriers, and efficiencies of subcellular drug delivery and of the associated toxic effects are required to determine the therapeutic potential of subcellular DDS targeting.

  8. Integrating structure-based and ligand-based approaches for computational drug design.

    PubMed

    Wilson, Gregory L; Lill, Markus A

    2011-04-01

    Methods utilized in computer-aided drug design can be classified into two major categories: structure based and ligand based, using information on the structure of the protein or on the biological and physicochemical properties of bound ligands, respectively. In recent years there has been a trend towards integrating these two methods in order to enhance the reliability and efficiency of computer-aided drug-design approaches by combining information from both the ligand and the protein. This trend resulted in a variety of methods that include: pseudoreceptor methods, pharmacophore methods, fingerprint methods and approaches integrating docking with similarity-based methods. In this article, we will describe the concepts behind each method and selected applications.

  9. Instructional Efficiency of Tutoring in an Outreach Gene Technology Laboratory

    ERIC Educational Resources Information Center

    Scharfenberg, Franz-Josef; Bogner, Franz X.

    2013-01-01

    Our research objective focused on examining the instructional efficiency of tutoring as a form of instructional change as opposed to a non-tutoring approach in an outreach laboratory. We designed our laboratory based on cognitive load (CL) theory. Altogether, 269 twelfth-graders participated in our day-long module "Genetic Fingerprinting." In a…

  10. A convergent diffusion and social marketing approach for disseminating proven approaches to physical activity promotion.

    PubMed

    Dearing, James W; Maibach, Edward W; Buller, David B

    2006-10-01

    Approaches from diffusion of innovations and social marketing are used here to propose efficient means to promote and enhance the dissemination of evidence-based physical activity programs. While both approaches have traditionally been conceptualized as top-down, center-to-periphery, centralized efforts at social change, their operational methods have usually differed. The operational methods of diffusion theory have a strong relational emphasis, while the operational methods of social marketing have a strong transactional emphasis. Here, we argue for a convergence of diffusion of innovation and social marketing principles to stimulate the efficient dissemination of proven-effective programs. In general terms, we are encouraging a focus on societal sectors as a logical and efficient means for enhancing the impact of dissemination efforts. This requires an understanding of complex organizations and the functional roles played by different individuals in such organizations. In specific terms, ten principles are provided for working effectively within societal sectors and enhancing user involvement in the processes of adoption and implementation.

  11. Finding Regions of Interest on Toroidal Meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Kesheng; Sinha, Rishi R; Jones, Chad

    2011-02-09

    Fusion promises to provide clean and safe energy, and a considerable amount of research effort is underway to turn this aspiration into reality. This work focuses on a building block for analyzing data produced from the simulation of microturbulence in magnetic confinement fusion devices: the task of efficiently extracting regions of interest. Like many other simulations where a large amount of data are produced, the careful study of "interesting" parts of the data is critical to gain understanding. In this paper, we present an efficient approach for finding these regions of interest. Our approach takes full advantage of the underlying mesh structure in magnetic coordinates to produce a compact representation of the mesh points inside the regions and an efficient connected component labeling algorithm for constructing regions from points. This approach scales linearly with the surface area of the regions of interest instead of the volume, as shown with both computational complexity analysis and experimental measurements. Furthermore, this new approach is hundreds of times faster than a recently published method based on Cartesian coordinates.
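
    The connected component labeling step lends itself to a compact sketch; the union-find version below over thresholded mesh points is our generic illustration (a plain grid neighbourhood stands in for the toroidal mesh in magnetic coordinates).

      # Build regions of interest from "interesting" points via union-find labeling.
      def find(parent, a):
          while parent[a] != a:
              parent[a] = parent[parent[a]]      # path halving keeps trees shallow
              a = parent[a]
          return a

      def label_regions(points):
          """points: set of (i, j) indices above threshold -> {point: region id}."""
          parent = {p: p for p in points}
          for (i, j) in points:
              for q in ((i + 1, j), (i, j + 1)): # forward neighbours suffice
                  if q in parent:
                      ra, rb = find(parent, (i, j)), find(parent, q)
                      if ra != rb:
                          parent[ra] = rb        # union the two regions
          return {p: find(parent, p) for p in points}

      pts = {(0, 0), (0, 1), (1, 1), (5, 5), (5, 6)}
      labels = label_regions(pts)
      print(len(set(labels.values())), "regions")  # -> 2 regions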

  12. Efficient anharmonic vibrational spectroscopy for large molecules using local-mode coordinates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Xiaolu; Steele, Ryan P., E-mail: ryan.steele@utah.edu

    This article presents a general computational approach for efficient simulations of anharmonic vibrational spectra in chemical systems. An automated local-mode vibrational approach is presented, which borrows techniques from localized molecular orbitals in electronic structure theory. This approach generates spatially localized vibrational modes, in contrast to the delocalization exhibited by canonical normal modes. The method is rigorously tested across a series of chemical systems, ranging from small molecules to large water clusters and a protonated dipeptide. It is interfaced with exact, grid-based approaches, as well as vibrational self-consistent field methods. Most significantly, this new set of reference coordinates exhibits a well-behaved spatial decay of mode couplings, which allows for a systematic, a priori truncation of mode couplings and increased computational efficiency. Convergence can typically be reached by including modes within only about 4 Å. The local nature of this truncation suggests particular promise for the ab initio simulation of anharmonic vibrational motion in large systems, where connection to experimental spectra is currently most challenging.

  13. Efficient load rating and quantification of life-cycle damage of Indiana bridges due to overweight loads.

    DOT National Transportation Integrated Search

    2016-02-01

    In this study, a computational approach for conducting durability analysis of bridges using detailed finite element models is developed. The underlying approach adopted is based on the hypothesis that the two main factors affecting the life of a brid...

  14. Towards a Viscous Wall Model for Immersed Boundary Methods

    NASA Technical Reports Server (NTRS)

    Brehm, Christoph; Barad, Michael F.; Kiris, Cetin C.

    2016-01-01

    Immersed boundary methods are frequently employed for simulating flows at low Reynolds numbers or for applications where viscous boundary layer effects can be neglected. The primary shortcoming of Cartesian-mesh immersed boundary methods is their inability to efficiently resolve thin turbulent boundary layers in high-Reynolds-number flow applications. This inefficiency stems from the use of constant-aspect-ratio Cartesian grid cells, whereas conventional CFD approaches can efficiently resolve the large wall-normal gradients by utilizing large-aspect-ratio cells near the wall. This paper presents different approaches for immersed boundary methods to account for the interaction of the viscous boundary layer with the flow field away from the walls. Wall-modeling approaches proposed in previous research studies are addressed and compared to a new integral boundary layer based approach. In contrast to common wall-modeling approaches that usually utilize only local flow information, the integral boundary layer based approach keeps the streamwise history of the boundary layer. This allows the method to remain effective at much larger y+ values than local wall-modeling approaches. After a theoretical discussion of the different approaches, the method is applied to increasingly challenging flow fields, including fully attached, separated, and shock-induced separated (laminar and turbulent) flows.

  15. Developments in photonic and mm-wave component technology for fiber radio

    NASA Astrophysics Data System (ADS)

    Iezekiel, Stavros

    2013-01-01

    A review of photonic component technology for fiber radio applications at 60 GHz will be given. We will focus on two architectures: (i) baseband-over-fiber and (ii) RF-over-fiber. In the first approach, up-conversion to 60 GHz is performed at the picocell base stations, with data being transported over fiber, while in the second both the data and mm-wave carrier are transported over fiber. For the baseband-over-fiber scheme, we examine techniques to improve the modulation efficiency of directly-modulated fiber links. These are based on traveling-wave structures applied to series cascades of lasers. This approach combines the improvement in differential quantum efficiency with the ability to tailor impedance matching as required. In addition, we report on various base station transceiver architectures based on optically-controlled MMIC self-oscillating mixers, and their application to 60 GHz fiber radio. This approach allows low cost optoelectronic transceivers to be used for the baseband fiber link, whilst minimizing the impact of dispersion. For the RF-over-fiber scheme, we report on schemes for optical generation of 100 GHz. These use modulation of a Mach-Zehnder modulator at Vπ bias in cascade with a Mach-Zehnder driven by 1.25 Gb/s data. One of the issues in RF-over-fiber is dispersion, while reduced modulation efficiency due to the presence of the optical carrier is also problematic. We examine the use of silicon nitride micro-ring resonators for the production of optical single sideband modulation in order to combat dispersion, and for the reduction of optical carrier power in order to improve link modulation efficiency.

  16. Routing in Mobile Wireless Sensor Networks: A Leader-Based Approach.

    PubMed

    Burgos, Unai; Amozarrain, Ugaitz; Gómez-Calzado, Carlos; Lafuente, Alberto

    2017-07-07

    This paper presents a leader-based approach to routing in Mobile Wireless Sensor Networks (MWSN). Using local information from neighbour nodes, a leader election mechanism maintains a spanning tree in order to provide the necessary adaptations for efficient routing upon the connectivity changes resulting from the mobility of sensors or sink nodes. We present two protocols following the leader election approach, which have been implemented using Castalia and OMNeT++. The protocols have been evaluated alongside other reference MWSN routing protocols to analyse the impact of network size and node velocity on performance, and the results demonstrate the validity of our approach.

  17. Robust video copy detection approach based on local tangent space alignment

    NASA Astrophysics Data System (ADS)

    Nie, Xiushan; Qiao, Qianping

    2012-04-01

    We propose a robust content-based video copy detection approach based on local tangent space alignment (LTSA), an efficient dimensionality reduction algorithm. The approach is motivated by the fact that video content is becoming richer and its dimensionality higher, which leaves no natural tools for video analysis and understanding. The proposed approach reduces the dimensionality of video content using LTSA and then generates video fingerprints in the low-dimensional space for copy detection. Furthermore, a dynamic sliding window is applied to fingerprint matching. Experimental results show that the approach has good robustness and discrimination.
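
    A hedged sketch of the dimensionality-reduction step using scikit-learn's LTSA implementation; the frame-feature extraction, fingerprint design and sliding-window matching from the abstract are not reproduced, and the input here is random stand-in data.

      # Reduce high-dimensional frame features to compact fingerprints with LTSA.
      import numpy as np
      from sklearn.manifold import LocallyLinearEmbedding

      rng = np.random.default_rng(0)
      frames = rng.standard_normal((500, 256))   # stand-in for per-frame content features

      ltsa = LocallyLinearEmbedding(n_neighbors=20, n_components=8, method="ltsa")
      fingerprints = ltsa.fit_transform(frames)  # low-dimensional video fingerprints
      print(fingerprints.shape)                  # (500, 8)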

  18. Secure Multiparty Quantum Computation for Summation and Multiplication.

    PubMed

    Shi, Run-hua; Mu, Yi; Zhong, Hong; Cui, Jie; Zhang, Shun

    2016-01-21

    As a fundamental primitive, Secure Multiparty Summation and Multiplication can be used to build complex secure protocols for other multiparty computations, especially numerical computations. However, there is still a lack of systematic and efficient quantum methods to compute Secure Multiparty Summation and Multiplication. In this paper, we present a novel and efficient quantum approach to securely compute the summation and multiplication of multiparty private inputs, respectively. Compared to classical solutions, our proposed approach can ensure unconditional security and perfect privacy protection based on the physical principles of quantum mechanics.
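
    For contrast with the classical solutions the abstract mentions, a minimal classical baseline is easy to sketch: additive secret sharing over a public modulus lets parties learn only the sum. This is our own illustration, not the paper's quantum protocol, and it assumes honest-but-curious parties with pairwise private channels.

      # Classical additive secret sharing for multiparty summation (baseline sketch).
      import secrets

      P = 2**61 - 1                                  # public prime modulus

      def share(x, n):
          """Split secret x into n random shares summing to x mod P."""
          shares = [secrets.randbelow(P) for _ in range(n - 1)]
          shares.append((x - sum(shares)) % P)
          return shares

      inputs = [17, 42, 8]                           # each party's private value
      n = len(inputs)
      pooled = [share(x, n) for x in inputs]         # party i sends one share to each party j
      partial = [sum(pooled[i][j] for i in range(n)) % P for j in range(n)]
      print(sum(partial) % P)                        # -> 67; no single share reveals an input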

  19. Secure Multiparty Quantum Computation for Summation and Multiplication

    PubMed Central

    Shi, Run-hua; Mu, Yi; Zhong, Hong; Cui, Jie; Zhang, Shun

    2016-01-01

    As a fundamental primitive, Secure Multiparty Summation and Multiplication can be used to build complex secure protocols for other multiparty computations, especially numerical computations. However, there is still a lack of systematic and efficient quantum methods to compute Secure Multiparty Summation and Multiplication. In this paper, we present a novel and efficient quantum approach to securely compute the summation and multiplication of multiparty private inputs, respectively. Compared to classical solutions, our proposed approach can ensure unconditional security and perfect privacy protection based on the physical principles of quantum mechanics. PMID:26792197

  20. A splay tree-based approach for efficient resource location in P2P networks.

    PubMed

    Zhou, Wei; Tan, Zilong; Yao, Shaowen; Wang, Shipu

    2014-01-01

    Resource location in structured P2P systems has a critical influence on system performance. Existing analytical studies of the Chord protocol have shown some potential improvements in performance. In this paper, a new splay tree-based Chord structure called SChord is proposed to improve the efficiency of locating resources. We consider a novel implementation of the Chord finger table (routing table) based on the splay tree, which extends the finger table with additional routing entries. An adaptive routing algorithm is proposed for the implementation, and it can be shown that the hop count is significantly reduced without introducing any other protocol overheads. We analyze the hop count of the adaptive routing algorithm, as compared to Chord variants, and demonstrate sharp upper and lower bounds for both worst-case and average-case settings. In addition, we theoretically analyze the hop reduction in SChord and derive the fact that SChord can significantly reduce routing hops as compared to Chord. Several simulations are presented to evaluate the performance of the algorithm and support our analytical findings. The simulation results show the efficiency of SChord.
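
    The splay tree underneath SChord's extended finger table is a standard structure; below is a generic illustration of the splay operation (recursive zig/zig-zig/zig-zag rotations) in isolation. The Chord routing logic and the SChord-specific table layout are not reproduced.

      # A plain splay tree: recently accessed keys migrate toward the root.
      class Node:
          def __init__(self, key):
              self.key, self.left, self.right = key, None, None

      def rot_right(x):
          y = x.left
          x.left, y.right = y.right, x
          return y

      def rot_left(x):
          y = x.right
          x.right, y.left = y.left, x
          return y

      def splay(root, key):
          """Move the node with `key` (or the last node searched) to the root."""
          if root is None or root.key == key:
              return root
          if key < root.key:
              if root.left is None:
                  return root
              if key < root.left.key:                       # zig-zig
                  root.left.left = splay(root.left.left, key)
                  root = rot_right(root)
              elif key > root.left.key:                     # zig-zag
                  root.left.right = splay(root.left.right, key)
                  if root.left.right:
                      root.left = rot_left(root.left)
              return rot_right(root) if root.left else root
          else:
              if root.right is None:
                  return root
              if key > root.right.key:                      # zag-zag
                  root.right.right = splay(root.right.right, key)
                  root = rot_left(root)
              elif key < root.right.key:                    # zag-zig
                  root.right.left = splay(root.right.left, key)
                  if root.right.left:
                      root.right = rot_right(root.right)
              return rot_left(root) if root.right else root

      def insert(root, key):
          root = splay(root, key)
          if root is None or root.key == key:
              return root or Node(key)
          node = Node(key)
          if key < root.key:
              node.left, node.right, root.left = root.left, root, None
          else:
              node.right, node.left, root.right = root.right, root, None
          return node

      root = None
      for k in (50, 30, 70, 20, 80):
          root = insert(root, k)
      root = splay(root, 30)   # frequently routed-to entries end up near the root
      print(root.key)          # -> 30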

  1. Midfield wireless powering of subwavelength autonomous devices.

    PubMed

    Kim, Sanghoek; Ho, John S; Poon, Ada S Y

    2013-05-17

    We obtain an analytical bound on the efficiency of wireless power transfer to a weakly coupled device. The optimal source is solved for a multilayer geometry in terms of a representation based on the field equivalence principle. The theory reveals that optimal power transfer exploits the properties of the midfield to achieve efficiencies far greater than conventional coil-based designs. As a physical realization of the source, we present a slot array structure whose performance closely approaches the theoretical bound.

  2. A practical approach to the development of aircraft GTE's noise suppression system on the base of fiber optic sensors

    NASA Astrophysics Data System (ADS)

    Vinogradov, Vasiliy Yu.; Morozov, Oleg G.; Morozov, Gennady A.; Sakhabutdinov, Airat Zh.; Nureev, Ilnur I.; Kuznetsov, Artem A.; Faskhutdinov, Lenar M.; Sarvarova, Lutsia M.

    2017-04-01

    In this paper, we consider a number of different methods that form the modern approach to the development of aircraft GTE noise suppression systems under service conditions. The efficient noise suppression system presented here, based on fiber optic sensors, makes it possible to reduce pulsations at the exhaust nozzle exit and noise levels at the engine outlet section.

  3. An Efficient Deterministic Approach to Model-based Prediction Uncertainty Estimation

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Saxena, Abhinav; Goebel, Kai

    2012-01-01

    Prognostics deals with the prediction of the end of life (EOL) of a system. EOL is a random variable, due to the presence of process noise and uncertainty in the future inputs to the system. Prognostics algorithms must account for this inherent uncertainty. In addition, these algorithms never know exactly the state of the system at the desired time of prediction, or the exact model describing the future evolution of the system, accumulating additional uncertainty into the predicted EOL. Prediction algorithms that do not account for these sources of uncertainty misrepresent the EOL and can lead to poor decisions based on their results. In this paper, we explore the impact of uncertainty in the prediction problem. We develop a general model-based prediction algorithm that incorporates these sources of uncertainty, and propose a novel approach to efficiently handle uncertainty in the future input trajectories of a system by using the unscented transformation. Using this approach, we not only reduce the computational load but also estimate the bounds of uncertainty in a deterministic manner, which can be useful to consider during decision-making. Using a lithium-ion battery as a case study, we perform several simulation-based experiments to explore these issues, and validate the overall approach using experimental data from a battery testbed.
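
    The unscented transformation used for the input-trajectory uncertainty admits a compact generic sketch; the sigma-point scheme below is the textbook version with illustrative parameters and a toy nonlinearity, not the paper's battery model.

      # Propagate a Gaussian through a nonlinear function with sigma points.
      import numpy as np

      def unscented_transform(f, mean, cov, alpha=1.0, beta=2.0, kappa=0.0):
          n = mean.size
          lam = alpha**2 * (n + kappa) - n
          L = np.linalg.cholesky((n + lam) * cov)
          sigma = np.vstack([mean, mean + L.T, mean - L.T])   # 2n+1 deterministic points
          Wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
          Wc = Wm.copy()
          Wm[0] = lam / (n + lam)
          Wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
          Y = np.array([f(s) for s in sigma])
          y_mean = Wm @ Y
          d = Y - y_mean
          return y_mean, (Wc[:, None] * d).T @ d, Y           # Y also bounds the spread

      f = lambda x: np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])])
      m, C = np.array([1.0, 0.5]), np.diag([0.01, 0.04])
      ym, yC, pts = unscented_transform(f, m, C)
      print(ym, np.diag(yC))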

  4. Popularity Modeling for Mobile Apps: A Sequential Approach.

    PubMed

    Zhu, Hengshu; Liu, Chuanren; Ge, Yong; Xiong, Hui; Chen, Enhong

    2015-07-01

    The popularity information in App stores, such as chart rankings, user ratings, and user reviews, provides an unprecedented opportunity to understand user experiences with mobile Apps, learn the process of adoption of mobile Apps, and thus enables better mobile App services. While the importance of popularity information is well recognized in the literature, the use of the popularity information for mobile App services is still fragmented and under-explored. To this end, in this paper, we propose a sequential approach based on hidden Markov model (HMM) for modeling the popularity information of mobile Apps toward mobile App services. Specifically, we first propose a popularity based HMM (PHMM) to model the sequences of the heterogeneous popularity observations of mobile Apps. Then, we introduce a bipartite based method to precluster the popularity observations. This can help to learn the parameters and initial values of the PHMM efficiently. Furthermore, we demonstrate that the PHMM is a general model and can be applicable for various mobile App services, such as trend based App recommendation, rating and review spam detection, and ranking fraud detection. Finally, we validate our approach on two real-world data sets collected from the Apple Appstore. Experimental results clearly validate both the effectiveness and efficiency of the proposed popularity modeling approach.
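
    The likelihood machinery behind such an HMM is compact enough to sketch; the scaled forward pass below uses toy "popularity regime" parameters of our own choosing, not the PHMM learned in the paper.

      # Log-likelihood of a popularity observation sequence under a toy HMM.
      import numpy as np

      A = np.array([[0.8, 0.2],         # hidden regime transition probabilities
                    [0.3, 0.7]])
      E = np.array([[0.6, 0.3, 0.1],    # P(observed rank bucket | regime)
                    [0.1, 0.3, 0.6]])
      pi = np.array([0.5, 0.5])

      def forward_loglik(obs):
          alpha = pi * E[:, obs[0]]
          loglik = np.log(alpha.sum())
          alpha /= alpha.sum()                      # rescale to avoid underflow
          for o in obs[1:]:
              alpha = (alpha @ A) * E[:, o]
              loglik += np.log(alpha.sum())
              alpha /= alpha.sum()
          return loglik

      print(forward_loglik([0, 0, 1, 2, 2]))        # chart-rank buckets over time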

  5. Practical strategies for increasing efficiency and effectiveness in critical care education.

    PubMed

    Joyce, Maurice F; Berg, Sheri; Bittner, Edward A

    2017-02-04

    Technological advances and evolving demands in medical care have led to challenges in ensuring adequate training for providers of critical care. Reliance on the traditional experience-based training model alone is insufficient for ensuring quality and safety in patient care. This article provides a brief overview of the existing educational practice within the critical care environment. Challenges to education within common daily activities of critical care practice are reviewed. Some practical evidence-based educational approaches are then described which can be incorporated into the daily practice of critical care without disrupting workflow or compromising the quality of patient care. It is hoped that such approaches for improving the efficiency and efficacy of critical care education will be integrated into training programs.

  6. Alkali-templated surface nanopatterning of chalcogenide thin films: a novel approach toward solar cells with enhanced efficiency.

    PubMed

    Reinhard, Patrick; Bissig, Benjamin; Pianezzi, Fabian; Hagendorfer, Harald; Sozzi, Giovanna; Menozzi, Roberto; Gretener, Christina; Nishiwaki, Shiro; Buecheler, Stephan; Tiwari, Ayodhya N

    2015-05-13

    Concepts of localized contacts and junctions through surface passivation layers are already advantageously applied in Si wafer-based photovoltaic technologies. For Cu(In,Ga)Se2 thin film solar cells, such concepts are generally not applied, especially at the heterojunction, because of the lack of a simple method yielding features with the required size and distribution. Here, we show a novel, innovative surface nanopatterning approach to form homogeneously distributed nanostructures (<30 nm) on the faceted, rough surface of polycrystalline chalcogenide thin films. The method, based on selective dissolution of self-assembled and well-defined alkali condensates in water, opens up new research opportunities toward development of thin film solar cells with enhanced efficiency.

  7. A simulation-based approach for solving assembly line balancing problem

    NASA Astrophysics Data System (ADS)

    Wu, Xiaoyu

    2017-09-01

    Assembly line balancing is directly related to production efficiency; since the last century, the assembly line balancing problem has been discussed and many people are still studying this topic. In this paper, the assembly line problem is studied by establishing a mathematical model and by simulation. First, a model for determining the smallest production beat (cycle time) for a given number of workstations is analyzed. Based on this model, an exponential smoothing approach is applied to improve the algorithm's efficiency. After this groundwork, the balancing problem of a gas Stirling engine assembly line is discussed as a case study. Both algorithms are implemented in the Lingo programming environment, and the simulation results demonstrate the validity of the new methods.
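
    The exponential smoothing step mentioned above is elementary; a minimal sketch with made-up candidate cycle times follows.

      # Single exponential smoothing: s_t = alpha * x_t + (1 - alpha) * s_{t-1}.
      def exp_smooth(series, alpha=0.3):
          s = series[0]
          out = [s]
          for x in series[1:]:
              s = alpha * x + (1 - alpha) * s
              out.append(s)
          return out

      beats = [62, 58, 61, 55, 54, 53]   # candidate production beats per iteration
      print(exp_smooth(beats))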

  8. Gradient gravitational search: An efficient metaheuristic algorithm for global optimization.

    PubMed

    Dash, Tirtharaj; Sahu, Prabhat K

    2015-05-30

    The adaptation of novel techniques developed in the field of computational chemistry to solve problems for large and flexible molecules is taking center stage with regard to algorithmic efficiency, computational cost and accuracy. In this article, the gradient-based gravitational search (GGS) algorithm, which uses analytical gradients for fast minimization to the next local minimum, is reported. Its efficiency as a metaheuristic approach has also been compared with Gradient Tabu Search and others, including the Gravitational Search, Cuckoo Search, and Back Tracking Search algorithms, for global optimization. Moreover, the GGS approach has been applied to computational chemistry problems of finding the minimum potential energy of two-dimensional and three-dimensional off-lattice protein models. The simulation results reveal the relative stability and physical accuracy of the protein models with efficient computational cost. © 2015 Wiley Periodicals, Inc.

  9. Demonstration of an efficient cooling approach for SBIRS-Low

    NASA Astrophysics Data System (ADS)

    Nieczkoski, S. J.; Myers, E. A.

    2002-05-01

    The Space Based Infrared System-Low (SBIRS-Low) segment is a near-term Air Force program for developing and deploying a constellation of low-earth orbiting observation satellites with gimbaled optics cooled to cryogenic temperatures. The optical system design and requirements present unique challenges that make conventional cooling approaches both complicated and risky. The Cryocooler Interface System (CIS) provides a remote, efficient, and interference-free means of cooling the SBIRS-Low optics. Technology Applications Inc. (TAI), through a two-phase Small Business Innovative Research (SBIR) program with Air Force Research Laboratory (AFRL), has taken the CIS from initial concept feasibility through the design, build, and test of a prototype system. This paper presents the development and demonstration testing of the prototype CIS. Prototype system testing has demonstrated the high efficiency of this cooling approach, making it an attractive option for SBIRS-Low and other sensitive optical and detector systems that require low-impact cryogenic cooling.

  10. Efficient Construction of Discrete Adjoint Operators on Unstructured Grids by Using Complex Variables

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Kleb, William L.

    2005-01-01

    A methodology is developed and implemented to mitigate the lengthy software development cycle typically associated with constructing a discrete adjoint solver for aerodynamic simulations. The approach is based on a complex-variable formulation that enables straightforward differentiation of complicated real-valued functions. An automated scripting process is used to create the complex-variable form of the set of discrete equations. An efficient method for assembling the residual and cost function linearizations is developed. The accuracy of the implementation is verified through comparisons with a discrete direct method as well as a previously developed handcoded discrete adjoint approach. Comparisons are also shown for a large-scale configuration to establish the computational efficiency of the present scheme. To ultimately demonstrate the power of the approach, the implementation is extended to high temperature gas flows in chemical nonequilibrium. Finally, several fruitful research and development avenues enabled by the current work are suggested.
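
    The complex-variable trick at the core of this methodology is worth showing concretely: for a real-analytic f, the first derivative follows from a single complex evaluation with no subtractive cancellation, so the step size can be made vanishingly small. The test function below is the classic example from the complex-step literature; everything else is generic.

      # Complex-step derivative: f'(x) ~ Im(f(x + i*h)) / h, exact to machine precision.
      import numpy as np

      def complex_step_derivative(f, x, h=1e-30):
          return np.imag(f(x + 1j * h)) / h

      f = lambda x: np.exp(x) / np.sqrt(np.sin(x)**3 + np.cos(x)**3)
      print(complex_step_derivative(f, 1.5))   # matches the analytic derivative to ~16 digits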

  11. Efficient Construction of Discrete Adjoint Operators on Unstructured Grids Using Complex Variables

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Kleb, William L.

    2005-01-01

    A methodology is developed and implemented to mitigate the lengthy software development cycle typically associated with constructing a discrete adjoint solver for aerodynamic simulations. The approach is based on a complex-variable formulation that enables straightforward differentiation of complicated real-valued functions. An automated scripting process is used to create the complex-variable form of the set of discrete equations. An efficient method for assembling the residual and cost function linearizations is developed. The accuracy of the implementation is verified through comparisons with a discrete direct method as well as a previously developed handcoded discrete adjoint approach. Comparisons are also shown for a large-scale configuration to establish the computational efficiency of the present scheme. To ultimately demonstrate the power of the approach, the implementation is extended to high temperature gas flows in chemical nonequilibrium. Finally, several fruitful research and development avenues enabled by the current work are suggested.

  12. T-L Plane Abstraction-Based Energy-Efficient Real-Time Scheduling for Multi-Core Wireless Sensors

    PubMed Central

    Kim, Youngmin; Lee, Ki-Seong; Pham, Ngoc-Son; Lee, Sun-Ro; Lee, Chan-Gun

    2016-01-01

    Energy efficiency is considered as a critical requirement for wireless sensor networks. As more wireless sensor nodes are equipped with multi-cores, there are emerging needs for energy-efficient real-time scheduling algorithms. The T-L plane-based scheme is known to be an optimal global scheduling technique for periodic real-time tasks on multi-cores. Unfortunately, there has been a scarcity of studies on extending T-L plane-based scheduling algorithms to exploit energy-saving techniques. In this paper, we propose a new T-L plane-based algorithm enabling energy-efficient real-time scheduling on multi-core sensor nodes with dynamic power management (DPM). Our approach addresses the overhead of processor mode transitions and reduces fragmentations of the idle time, which are inherent in T-L plane-based algorithms. Our experimental results show the effectiveness of the proposed algorithm compared to other energy-aware scheduling methods on T-L plane abstraction. PMID:27399722

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vaisman, Michelle; Fan, Shizhao; Nay Yaung, Kevin

    As single-junction Si solar cells approach their practical efficiency limits, a new pathway is necessary to increase efficiency in order to realize more cost-effective photovoltaics. Integrating III-V cells onto Si in a multijunction architecture is a promising approach that can achieve high efficiency while leveraging the infrastructure already in place for Si and III-V technology. In this Letter, we demonstrate a record 15.3%-efficient 1.7 eV GaAsP top cell on GaP/Si, enabled by recent advances in material quality in conjunction with an improved device design and a high-performance antireflection coating. Furthermore, we present a separate Si bottom cell with a 1.7 eV GaAsP optical filter to absorb most of the visible light, with an efficiency of 6.3%, showing the feasibility of monolithic III-V/Si tandems with >20% efficiency. Through spectral efficiency analysis, we also compare our results to previously published GaAsP and Si devices, projecting tandem GaAsP/Si efficiencies of up to 25.6% based on current state-of-the-art individual subcells. With the aid of modeling, we further illustrate a realistic path toward 30% GaAsP/Si tandems for high-efficiency, monolithically integrated photovoltaics.

  14. Achieving high efficiency laminated polymer solar cell with interfacial modified metallic electrode and pressure induced crystallization

    NASA Astrophysics Data System (ADS)

    Yuan, Yongbo; Bi, Yu; Huang, Jinsong

    2011-02-01

    We report an efficient laminated organic photovoltaic device based on poly(3-hexylthiophene-2,5-diyl) (P3HT) and [6,6]-phenyl-C61-butyric acid methyl ester (PCBM), with an efficiency approaching that of an optimized device made by the regular method. The high efficiency is mainly attributed to: the formation of a mechanically and electrically sound polymer/metal interface through the use of electronic glue; the use of a highly conductive and flexible silver film as the anode, which reduces photovoltage loss and whose work function is modified by ultraviolet/ozone treatment for efficient hole extraction; and the pressure-induced crystallization of PCBM.

  15. A COMPREHENSIVE APPROACH FOR PHYSIOLOGICALLY BASED PHARMACOKINETIC (PBPK) MODELS USING THE EXPOSURE RELATED DOSE ESTIMATING MODEL (ERDEM) SYSTEM

    EPA Science Inventory

    The implementation of a comprehensive PBPK modeling approach resulted in ERDEM, a complex PBPK modeling system. ERDEM provides a scalable and user-friendly environment that enables researchers to focus on data input values rather than writing program code. ERDEM efficiently m...

  16. 76 FR 34076 - Agency Information Collection Activities: Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-10

    ... in use without an OMB control number; Title of Information Collection: Medicare Beneficiary and... Satisfaction flows from the proposed sampling approach. While it was feasible to conduct the 9th SOW via... not seem efficient to maintain a telephone only data collection approach. Based on recent literature...

  17. Tribological performance of ultra-low viscosity composite base fluid with bio-derived fluid

    USDA-ARS?s Scientific Manuscript database

    One obvious approach to increasing efficiency in many lubricated systems, such as internal combustion engines (ICEs) and gearboxes, is reducing the viscosity of the oil lubricant. Indeed, ultra-low viscosity engine oils are now commercially available. One approach to the development of ultra-low viscosity lubricants without compromis...

  18. Investigating the impact of design characteristics on statistical efficiency within discrete choice experiments: A systematic survey.

    PubMed

    Vanniyasingam, Thuva; Daly, Caitlin; Jin, Xuejing; Zhang, Yuan; Foster, Gary; Cunningham, Charles; Thabane, Lehana

    2018-06-01

    This study reviews simulation studies of discrete choice experiments to (i) determine how survey design features affect statistical efficiency and (ii) appraise their reporting quality. Statistical efficiency was measured using relative design (D-) efficiency, D-optimality, or D-error. For this systematic survey, we searched Journal Storage (JSTOR), Science Direct, PubMed, and OVID, which included a search within EMBASE. Searches were conducted up to year 2016 for simulation studies investigating the impact of DCE design features on statistical efficiency. Studies were screened and data were extracted independently and in duplicate. Results for each included study were summarized by design characteristic. Previously developed criteria for the reporting quality of simulation studies were also adapted and applied to each included study. Of 371 potentially relevant studies, 9 were found to be eligible, with several varying in study objectives. Statistical efficiency improved when increasing the number of choice tasks or alternatives; decreasing the number of attributes and attribute levels; using an unrestricted continuous "manipulator" attribute; using model-based approaches with covariates incorporating response behaviour; using sampling approaches that incorporate previous knowledge of response behaviour; incorporating heterogeneity in a model-based design; correctly specifying Bayesian priors; minimizing parameter prior variances; and using an appropriate method to create the DCE design for the research question. The simulation studies performed well in terms of reporting quality. Improvement is needed with regard to clearly specifying study objectives, number of failures, random number generators, starting seeds, and the software used. These results identify the best approaches to structure a DCE. An investigator can manipulate design characteristics to help reduce response burden and increase statistical efficiency. Since studies varied in their objectives, conclusions were made on several design characteristics; however, the validity of each conclusion was limited. Further research should be conducted to explore all conclusions in various design settings and scenarios. Additional reviews exploring other statistical efficiency outcomes and databases could also be performed to strengthen the conclusions identified in this review.
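
    Since the reviewed studies score designs by D-error/D-efficiency, a small sketch of one common variant may help; the linear-model approximation and the toy effects-coded design are our assumptions, not taken from any reviewed study.

      # Relative D-error of a coded design matrix X with K parameters.
      import numpy as np

      def d_error(X):
          K = X.shape[1]
          info = X.T @ X                    # information matrix (linear approximation)
          return np.linalg.det(np.linalg.inv(info)) ** (1.0 / K)

      # toy design: 4 choice tasks x 3 effects-coded two-level attributes
      X = np.array([[ 1,  1, -1],
                    [ 1, -1,  1],
                    [-1,  1,  1],
                    [-1, -1, -1]])
      print(d_error(X))                     # lower D-error = more efficient design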

  19. Microfluidic magnetic fluidized bed for DNA analysis in continuous flow mode.

    PubMed

    Hernández-Neuta, Iván; Pereiro, Iago; Ahlford, Annika; Ferraro, Davide; Zhang, Qiongdi; Viovy, Jean-Louis; Descroix, Stéphanie; Nilsson, Mats

    2018-04-15

    Magnetic solid phase substrates for biomolecule manipulation have become a valuable tool for simplification and automation of molecular biology protocols. However, the handling of magnetic particles inside microfluidic chips for miniaturized assays is often challenging due to inefficient mixing, aggregation, and the advanced instrumentation required for effective actuation. Here, we describe the use of a microfluidic magnetic fluidized bed approach that enables dynamic, highly efficient and simplified magnetic bead actuation for DNA analysis in a continuous flow platform with minimal technical requirements. We evaluate the performance of this approach by testing the efficiency of individual steps of a DNA assay based on padlock probes and rolling circle amplification. This assay comprises common nucleic acid analysis principles, such as hybridization, ligation, amplification and restriction digestion. We obtained efficiencies of up to 90% for these reactions, with high-throughput processing of up to 120 μL of DNA dilution at flow rates ranging from 1 to 5 μL/min without compromising performance. The fluidized bed was 20-50% more efficient than a commercially available solution for microfluidic manipulation of magnetic beads. Moreover, to demonstrate the potential of this approach for integration into micro-total analysis systems, we optimized the production of a low-cost polymer based microarray and tested its analytical performance for integrated single-molecule digital read-out. Finally, we provide the proof-of-concept for a single-chamber microfluidic chip that combines the fluidized bed with the polymer microarray for a highly simplified and integrated magnetic bead-based DNA analyzer, with potential applications in diagnostics. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Skeleton-Controlled pDNA Delivery of Renewable Steroid-Based Cationic Lipids, the Endocytosis Pathway Analysis and Intracellular Localization

    PubMed Central

    Wang, Zhao; Luo, Ting; Cao, Amin; Sun, Jingjing

    2018-01-01

    Using renewable and biocompatible natural-based resources to construct functional biomaterials has attracted great attention in recent years. In this work, we successfully prepared a series of steroid-based cationic lipids by integrating various steroid skeletons/hydrophobes with (l-)-arginine headgroups via a facile and efficient synthetic approach. The plasmid DNA (pDNA) binding affinity of the steroid-based cationic lipids, and the average particle sizes, surface potentials, morphologies and stability of the steroid-based cationic lipid/pDNA lipoplexes, were found to depend largely on the steroid skeletons. Cellular evaluation results revealed that the cytotoxicity and gene transfection efficiency of the steroid-based cationic lipids in H1299 and HeLa cells strongly relied on the steroid hydrophobes. Interestingly, the steroid lipid/pDNA lipoplexes inclined to enter H1299 cells mainly through caveolae- and lipid-raft-mediated endocytosis pathways, and an intracellular trafficking route of "lipid-raft-mediated endocytosis→lysosome→cell nucleic localization" was accordingly proposed. The study provided a possible approach for developing high-performance steroid-based lipid gene carriers, in which the cytotoxicity, gene transfection capability, endocytosis pathways, and intracellular trafficking/localization manners could be tuned/controlled by introducing proper steroid skeletons/hydrophobes. Notably, among these lipids, Cho-Arg showed remarkably high gene transfection efficacy, even under high serum concentration (50% fetal bovine serum), making it an efficient gene transfection agent for practical application. PMID:29373505

  1. Holographic display system for restoration of sight to the blind

    PubMed Central

    Goetz, G A; Mandel, Y; Manivanh, R; Palanker, D V; Čižmár, T

    2013-01-01

    Objective We present a holographic near-the-eye display system enabling optical approaches for sight restoration to the blind, such as photovoltaic retinal prosthesis, optogenetic and other photoactivation techniques. We compare it with conventional LCD or DLP-based displays in terms of image quality, field of view, optical efficiency and safety. Approach We detail the optical configuration of the holographic display system and its characterization using a phase-only spatial light modulator. Main results We describe approaches to controlling the zero diffraction order and speckle related issues in holographic display systems and assess the image quality of such systems. We show that holographic techniques offer significant advantages in terms of peak irradiance and power efficiency, and enable designs that are inherently safer than LCD or DLP-based systems. We demonstrate the performance of our holographic display system in the assessment of cortical response to alternating gratings projected onto the retinas of rats. Significance We address the issues associated with the design of high brightness, near-the-eye display systems and propose solutions to the efficiency and safety challenges with an optical design which could be miniaturized and mounted onto goggles. PMID:24045579

  2. Coping efficiently with now-relative medical data.

    PubMed

    Stantic, Bela; Terenziani, Paolo; Sattar, Abdul

    2008-11-06

    In Medical Informatics, there is an increasing awareness that temporal information plays a crucial role, so that suitable database approaches are needed to store and support it. Specifically, most clinical data are intrinsically temporal, and a significant portion of them is now-relative (i.e., valid at the current time). Even though previous studies indicate that the treatment of now-relative data has a crucial impact on efficiency, current approaches have several limitations. In this paper we propose a novel approach, which is based on a new representation of now and on query transformations. We also experimentally demonstrate that our approach outperforms its best competitors in the literature by a factor of more than ten, both in number of disk accesses and in CPU usage.
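
    The representational idea can be illustrated with a toy fragment (our own simplification, not the paper's actual encoding of now or its query transformations): now-relative rows carry an open end, and queries substitute the current time at evaluation.

      # Toy valid-time records where end = NOW means "still valid at the current time".
      from datetime import datetime

      NOW = None                                       # sentinel for a now-relative end

      records = [
          ("penicillin", datetime(2008, 1, 10), datetime(2008, 1, 20)),
          ("insulin",    datetime(2008, 3, 1),  NOW),  # now-relative row
      ]

      def valid_at(rec, t):
          _, start, end = rec
          return start <= t <= (datetime.now() if end is NOW else end)

      t = datetime.now()
      print([r[0] for r in records if valid_at(r, t)])  # -> ['insulin']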

  3. Promoting a Culture of Tailoring for Systems Engineering Policy Expectations

    NASA Technical Reports Server (NTRS)

    Blankenship, Van A.

    2016-01-01

    NASA's Marshall Space Flight Center (MSFC) has developed an integrated systems engineering approach to promote a culture of tailoring for program and project policy requirements. MSFC's culture encourages and supports tailoring, with an emphasis on risk-based decision making, for enhanced affordability and efficiency. MSFC's policy structure integrates the various Agency requirements into a single, streamlined implementation approach which serves as a "one-stop shop" for our programs and projects to follow. The engineers gain an enhanced understanding of policy and technical expectations, as well as lessons learned from MSFC's history of spaceflight and science missions, to enable them to make appropriate, risk-based tailoring recommendations. The tailoring approach utilizes a standard methodology to classify projects into predefined levels using selected mission and programmatic scaling factors related to risk tolerance. Policy requirements are then selectively applied and tailored, with appropriate rationale, and approved by the governing authorities, to support risk-informed decisions to achieve the desired cost and schedule efficiencies. The policy is further augmented by implementation tools and lifecycle planning aids which help promote and support the cultural shift toward more tailoring. The MSFC Customization Tool is an integrated spreadsheet that ties together everything that projects need to understand, navigate, and tailor the policy. It helps them classify their project, understand the intent of the requirements, determine their tailoring approach, and document the necessary governance approvals. It also helps them plan for and conduct technical reviews throughout the lifecycle. Policy tailoring is thus established as a normal part of project execution, with the tools provided to facilitate and enable the tailoring process. MSFC's approach to changing the culture emphasizes risk-based tailoring of policy to achieve increased flexibility, efficiency, and effectiveness in project execution, while maintaining appropriate rigor to ensure mission success.

  4. A parallel finite element procedure for contact-impact problems using edge-based smooth triangular element and GPU

    NASA Astrophysics Data System (ADS)

    Cai, Yong; Cui, Xiangyang; Li, Guangyao; Liu, Wenyang

    2018-04-01

    The edge-smooth finite element method (ES-FEM) can improve the computational accuracy of triangular shell elements and the mesh partition efficiency of complex models. In this paper, an approach is developed to perform explicit finite element simulations of contact-impact problems with a graphical processing unit (GPU) using a special edge-smooth triangular shell element based on ES-FEM. Of critical importance for this problem is achieving finer-grained parallelism to enable efficient data loading and to minimize communication between the device and host. Four kinds of parallel strategies are then developed to efficiently solve these ES-FEM based shell element formulas, and various optimization methods are adopted to ensure aligned memory access. Special focus is dedicated to developing an approach for the parallel construction of edge systems. A parallel hierarchy-territory contact-searching algorithm (HITA) and a parallel penalty function calculation method are embedded in this parallel explicit algorithm. Finally, the program flow is well designed, and a GPU-based simulation system is developed, using Nvidia's CUDA. Several numerical examples are presented to illustrate the high quality of the results obtained with the proposed methods. In addition, the GPU-based parallel computation is shown to significantly reduce the computing time.

  5. Using extreme phenotype sampling to identify the rare causal variants of quantitative traits in association studies.

    PubMed

    Li, Dalin; Lewinger, Juan Pablo; Gauderman, William J; Murcray, Cassandra Elizabeth; Conti, David

    2011-12-01

    Variants identified in recent genome-wide association studies based on the common-disease common-variant hypothesis are far from fully explaining the heritability of complex traits. Rare variants may, in part, explain some of the missing heritability. Here, we explored the advantage of extreme phenotype sampling in rare-variant analysis and refined this design framework for future large-scale association studies on quantitative traits. We first proposed a power calculation approach for a likelihood-based analysis method. We then used this approach to demonstrate the potential advantages of extreme phenotype sampling for rare variants. Next, we discussed how this design can influence future sequencing-based association studies from a cost-efficiency (with the phenotyping cost included) perspective. Moreover, we discussed the potential of a two-stage design with the extreme sample as the first stage and the remaining nonextreme subjects as the second stage. We demonstrated that this two-stage design is a cost-efficient alternative to the one-stage cross-sectional design or traditional two-stage design. We then discussed the analysis strategies for this extreme two-stage design and proposed a corresponding design optimization procedure. To address many practical concerns, for example, measurement error or phenotypic heterogeneity at the very extremes, we examined an approach in which individuals with very extreme phenotypes are discarded. We demonstrated that even with a substantial proportion of these extreme individuals discarded, extreme-based sampling can still be more efficient. Finally, we expanded the current analysis and design framework to accommodate the CMC approach, where multiple rare variants in the same gene region are analyzed jointly. © 2011 Wiley Periodicals, Inc.
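
    The core intuition of the design is that rare-variant carriers are enriched in the phenotype tails, so comparing only the tails recovers the signal at a fraction of the sequencing cost. A minimal simulation sketch of that enrichment (the effect size, tail fraction, and Fisher test here are illustrative assumptions, not the paper's likelihood-based method):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def simulate_cohort(n=20000, maf=0.005, beta=0.5):
        g = rng.binomial(2, maf, n)            # rare-variant genotype
        y = beta * g + rng.normal(size=n)      # quantitative trait
        return g, y

    def tail_enrichment_pvalue(g, y, frac=0.10):
        """Fisher test of carrier frequency in the upper vs lower tail."""
        lo, hi = np.quantile(y, [frac, 1 - frac])
        upper, lower = g[y >= hi], g[y <= lo]
        table = [[(upper > 0).sum(), (upper == 0).sum()],
                 [(lower > 0).sum(), (lower == 0).sum()]]
        return stats.fisher_exact(table)[1]

    g, y = simulate_cohort()
    print(tail_enrichment_pvalue(g, y))  # sequences only 20% of the cohort
    ```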

  6. Using Extreme Phenotype Sampling to Identify the Rare Causal Variants of Quantitative Traits in Association Studies

    PubMed Central

    Li, Dalin; Lewinger, Juan Pablo; Gauderman, William J.; Murcray, Cassandra Elizabeth; Conti, David

    2014-01-01

    Variants identified in recent genome-wide association studies based on the common-disease common-variant hypothesis are far from fully explaining the hereditability of complex traits. Rare variants may, in part, explain some of the missing hereditability. Here, we explored the advantage of the extreme phenotype sampling in rare-variant analysis and refined this design framework for future large-scale association studies on quantitative traits. We first proposed a power calculation approach for a likelihood-based analysis method. We then used this approach to demonstrate the potential advantages of extreme phenotype sampling for rare variants. Next, we discussed how this design can influence future sequencing-based association studies from a cost-efficiency (with the phenotyping cost included) perspective. Moreover, we discussed the potential of a two-stage design with the extreme sample as the first stage and the remaining nonextreme subjects as the second stage. We demonstrated that this two-stage design is a cost-efficient alternative to the one-stage cross-sectional design or traditional two-stage design. We then discussed the analysis strategies for this extreme two-stage design and proposed a corresponding design optimization procedure. To address many practical concerns, for example measurement error or phenotypic heterogeneity at the very extremes, we examined an approach in which individuals with very extreme phenotypes are discarded. We demonstrated that even with a substantial proportion of these extreme individuals discarded, an extreme-based sampling can still be more efficient. Finally, we expanded the current analysis and design framework to accommodate the CMC approach where multiple rare-variants in the same gene region are analyzed jointly. PMID:21922541

  7. Advances on Propulsion Technology for High-Speed Aircraft. Volume 2

    DTIC Science & Technology

    2007-03-01

    The thermal efficiency of either compressor- or ram-based engines can be approached as a Brayton cycle and hence its efficiency is…
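
    The excerpt breaks off here. For reference, the textbook result it alludes to: the thermal efficiency of an ideal Brayton cycle depends only on the compressor pressure ratio $r_p$ and the heat-capacity ratio $\gamma$,

    ```latex
    \eta_{\mathrm{th}} \;=\; 1 - r_p^{-(\gamma-1)/\gamma},
    \qquad r_p = \frac{p_2}{p_1}.
    ```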

  8. A retrospective likelihood approach for efficient integration of multiple omics factors in case-control association studies.

    PubMed

    Balliu, Brunilda; Tsonaka, Roula; Boehringer, Stefan; Houwing-Duistermaat, Jeanine

    2015-03-01

    Integrative omics, the joint analysis of outcome and multiple types of omics data, such as genomics, epigenomics, and transcriptomics data, constitutes a promising approach for powerful and biologically relevant association studies. These studies often employ a case-control design, and often include nonomics covariates, such as age and gender, that may modify the underlying omics risk factors. An open question is how to best integrate multiple omics and nonomics information to maximize statistical power in case-control studies that ascertain individuals based on the phenotype. Recent work on integrative omics has used prospective approaches, modeling case-control status conditional on omics and nonomics risk factors. Compared to univariate approaches, jointly analyzing multiple risk factors with a prospective approach increases power in nonascertained cohorts. However, these prospective approaches often lose power in case-control studies. In this article, we propose a novel statistical method for integrating multiple omics and nonomics factors in case-control association studies. Our method is based on a retrospective likelihood function that models the joint distribution of omics and nonomics factors conditional on case-control status. The new method provides accurate control of Type I error rate and has increased efficiency over prospective approaches in both simulated and real data. © 2015 Wiley Periodicals, Inc.
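
    The retrospective idea, modeling the risk factors given case-control status rather than the reverse, can be illustrated with a single Gaussian factor: when the factor is Gaussian within each status group with common variance, the implied prospective log-odds slope equals the standardized mean difference, and group means remain estimable under outcome-based sampling. A toy sketch of that modeling direction only (all parameters illustrative, not the paper's likelihood):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Population with a Gaussian "omics" factor whose distribution shifts
    # with disease status; implied prospective log-odds slope is
    # beta = (mu1 - mu0) / sigma^2 = 0.5 here.
    n, prevalence, mu0, mu1 = 200_000, 0.02, 0.0, 0.5
    d = rng.binomial(1, prevalence, n)
    x = rng.normal(loc=np.where(d == 1, mu1, mu0), scale=1.0)

    # Case-control ascertainment: all cases plus an equal number of controls.
    cases = np.flatnonzero(d == 1)
    controls = rng.choice(np.flatnonzero(d == 0), size=cases.size, replace=False)
    idx = np.concatenate([cases, controls])

    # Retrospective fit: model x | d within the ascertained sample.
    x_cc, d_cc = x[idx], d[idx]
    beta_hat = x_cc[d_cc == 1].mean() - x_cc[d_cc == 0].mean()  # sigma^2 = 1
    print(beta_hat)  # ~0.5: the slope survives outcome-based sampling
    ```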

  9. Highly efficient water-mediated approach to access benzazoles: metal catalyst and base-free synthesis of 2-substituted benzimidazoles, benzoxazoles, and benzothiazoles.

    PubMed

    Bala, Manju; Verma, Praveen Kumar; Sharma, Deepika; Kumar, Neeraj; Singh, Bikram

    2015-05-01

    An efficient and versatile water-mediated method has been developed for the one-step synthesis of 2-substituted benzimidazoles, benzoxazoles, and benzothiazoles. The protocol excludes the use of toxic metal catalysts, bases, and other additives, and provides excellent selectivities, good to excellent yields, and high functional group tolerance for the synthesis of 2-arylated benzimidazoles, benzoxazoles, and benzothiazoles. Benzazolones were also synthesized using a similar reaction protocol.

  10. Estimation of the Thermodynamic Efficiency of a Solid-State Cooler Based on the Multicaloric Effect

    NASA Astrophysics Data System (ADS)

    Starkov, A. S.; Pakhomov, O. V.; Rodionov, V. V.; Amirov, A. A.; Starkov, I. A.

    2018-03-01

    The thermodynamic efficiency of using the multicaloric effect (μCE) in solid-state cooler systems has been studied in comparison to single-component caloric effects. This approach is illustrated by the example of the Brayton cycle for the μCE and the magnetocaloric effect (MCE). Based on the results of experiments with a Fe48Rh52-PbZr0.53Ti0.47O3 two-layer ferroic composite, the temperature dependence of the relative efficiency is determined, and the temperature range in which the μCE is advantageous over the MCE is estimated. The proposed theory of the μCE is compared to experimental data.

  11. High efficiency tantalum-based ceramic composite structures

    NASA Technical Reports Server (NTRS)

    Stewart, David A. (Inventor); Leiser, Daniel B. (Inventor); DiFiore, Robert R. (Inventor); Katvala, Victor W. (Inventor)

    2010-01-01

    Tantalum-based ceramics are suitable for use in thermal protection systems. These composite structures have high efficiency surfaces (low catalytic efficiency and high emittance), thereby reducing heat flux to a spacecraft during planetary re-entry. These ceramics contain tantalum disilicide, molybdenum disilicide and borosilicate glass. The components are milled, along with a processing aid, then applied to a surface of a porous substrate, such as a fibrous silica or carbon substrate. Following application, the coating is then sintered on the substrate. The composite structure is substantially impervious to hot gas penetration and capable of surviving high heat fluxes at temperatures approaching 3000 °F and above.

  12. Efficient 3D porous microstructure reconstruction via Gaussian random field and hybrid optimization.

    PubMed

    Jiang, Z; Chen, W; Burkhart, C

    2013-11-01

    Obtaining an accurate three-dimensional (3D) structure of a porous microstructure is important for assessing the material properties based on finite element analysis. Whereas directly obtaining 3D images of the microstructure is impractical under many circumstances, two sets of methods have been developed in the literature to generate (reconstruct) a 3D microstructure from its 2D images: one characterizes the microstructure based on certain statistical descriptors, typically the two-point correlation function and cluster correlation function, and then performs an optimization process to build a 3D structure that matches those statistical descriptors; the other models the microstructure using stochastic models like a Gaussian random field and generates a 3D structure directly from the function. The former obtains a relatively accurate 3D microstructure, but computationally the optimization process can be very intensive, especially for problems with large image size; the latter generates a 3D microstructure quickly but sacrifices accuracy due to issues in numerical implementations. A hybrid optimization approach to modelling the 3D porous microstructure of random isotropic two-phase materials is proposed in this paper, which combines the two sets of methods and hence maintains the accuracy of the correlation-based method with improved efficiency. The proposed technique is verified for 3D reconstructions based on silica polymer composite images with different volume fractions. A comparison of the reconstructed microstructures and the optimization histories for both the original correlation-based method and our hybrid approach demonstrates the improved efficiency of the approach. © 2013 The Authors Journal of Microscopy © 2013 Royal Microscopical Society.
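
    The two-point correlation function that drives the reconstruction can itself be computed cheaply via FFT autocorrelation. A minimal sketch for a binary (two-phase) image, assuming periodic boundaries (function name and conventions are this sketch's, not the paper's):

    ```python
    import numpy as np

    def two_point_probability(img):
        """S2(r): probability that two points separated by lag r both fall
        in phase 1 of a binary image (2D or 3D), computed as a circular
        autocorrelation via FFT. S2 at zero lag equals the phase-1
        volume fraction."""
        f = np.fft.fftn(img.astype(float))
        return np.fft.ifftn(f * np.conj(f)).real / img.size
    ```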

  13. Design and Development of a Rapid Research, Design, and Development Platform for In-Situ Testing of Tools and Concepts for Trajectory-Based Operations

    NASA Technical Reports Server (NTRS)

    Underwood, Matthew C.

    2017-01-01

    To provide justification for equipping a fleet of aircraft with avionics capable of supporting trajectory-based operations, significant flight testing must be accomplished. However, equipping aircraft with these avionics and with the enabling technologies needed to communicate the clearances required for trajectory-based operations is costly using conventional avionics approaches. This paper describes an approach to minimize the costs and risks of flight testing these technologies in-situ, discusses the test-bed platform developed, and highlights results from a proof-of-concept flight test campaign that demonstrates the feasibility and efficiency of this approach.

  14. Concurrency-based approaches to parallel programming

    NASA Technical Reports Server (NTRS)

    Kale, L.V.; Chrisochoides, N.; Kohl, J.; Yelick, K.

    1995-01-01

    The inevitable transition to parallel programming can be facilitated by appropriate tools, including languages and libraries. After describing the needs of applications developers, this paper presents three specific approaches aimed at development of efficient and reusable parallel software for irregular and dynamic-structured problems. A salient feature of all three approaches is their exploitation of concurrency within a processor. Benefits of individual approaches such as these can be leveraged by an interoperability environment which permits modules written using different approaches to co-exist in single applications.

  15. “Direct cloning in Lactobacillus plantarum: Electroporation with non-methylated plasmid DNA enhances transformation efficiency and makes shuttle vectors obsolete”

    PubMed Central

    2012-01-01

    Background Lactic acid bacteria (LAB) play an important role in agricultural as well as industrial biotechnology. Development of improved LAB strains using e.g. library approaches is often limited by low transformation efficiencies; one reason could be differences in the DNA methylation patterns between the Escherichia coli intermediate host used for plasmid amplification and the final LAB host. In the present study, we examined the influence of DNA methylation on transformation efficiency in LAB and developed a direct cloning approach for Lactobacillus plantarum CD033. Therefore, we propagated plasmid pCD256 in E. coli strains with different dam/dcm-methylation properties. The obtained plasmid DNA was purified and transformed into three different L. plantarum strains and a selection of other LAB species. Results Best transformation efficiencies were obtained using the strain L. plantarum CD033 and non-methylated plasmid DNA. Thereby we achieved transformation efficiencies of ~10^9 colony-forming units/μg DNA in L. plantarum CD033, which is in the range of transformation efficiencies reached with E. coli. Based on these results, we directly transformed recombinant expression vectors received from PCR/ligation reactions into L. plantarum CD033, omitting plasmid amplification in E. coli. This approach was also successful and yielded a sufficient number of recombinant clones. Conclusions Transformation efficiency of L. plantarum CD033 was drastically increased when non-methylated plasmid DNA was used, providing the possibility to generate expression libraries in this organism. A direct cloning approach, whereby ligated PCR products were successfully transformed directly into L. plantarum CD033, obviates the construction of shuttle vectors containing E. coli-specific sequences, e.g. a ColEI origin of replication, and makes amplification of these vectors in E. coli obsolete. Thus, plasmid constructs become much smaller, and occasional structural instability or mutagenesis during E. coli propagation is excluded. The results of our study provide new genetic tools for L. plantarum which will allow fast, forward- and systems-based genetic engineering of this species. PMID:23098256

  16. Regression-Based Approach For Feature Selection In Classification Issues. Application To Breast Cancer Detection And Recurrence

    NASA Astrophysics Data System (ADS)

    Belciug, Smaranda; Serbanescu, Mircea-Sebastian

    2015-09-01

    Feature selection is considered a key factor in classification/decision problems. It is currently used in designing intelligent decision systems to choose the best features which allow the best performance. This paper proposes a regression-based approach to select the most important predictors to significantly increase the classification performance. Application to breast cancer detection and recurrence using publicly available datasets proved the efficiency of this technique.
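
    The abstract does not spell out the exact procedure; a common regression-style screen, ranking predictors by a univariate F-statistic and feeding the top k into a classifier, can be sketched on the publicly available Wisconsin breast cancer data (the dataset choice, k, and downstream model are assumptions of this sketch):

    ```python
    from sklearn.datasets import load_breast_cancer
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_breast_cancer(return_X_y=True)
    model = make_pipeline(StandardScaler(),
                          SelectKBest(f_classif, k=10),   # keep 10 best predictors
                          LogisticRegression(max_iter=1000))
    print(cross_val_score(model, X, y, cv=5).mean())      # accuracy with selection
    ```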

  17. Curriculum-Based Measurement, Program Development, Graphing Performance and Increasing Efficiency.

    ERIC Educational Resources Information Center

    Deno, Stanley L.; And Others

    1987-01-01

    Four brief articles look at aspects of curriculum based measurement (CBM) for academically handicapped students including procedures of CBM with examples, different approaches to graphing student performance, and solutions to the problem of making time to measure student progress frequently. (DB)

  18. Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.

    PubMed

    You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary

    2011-02-01

    The statistical power of cluster randomized trials depends on two sample size components, the number of clusters per group and the numbers of individuals within clusters (cluster size). Variable cluster sizes are common, and this variation alone may have significant impact on study power. Previous approaches have taken this into account by either adjusting total sample size using a designated design effect or adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using the t-test, and use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes and show that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for an unequal cluster sizes trial to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret. It connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirements with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment in mean cluster size alone or simultaneous adjustment in mean cluster size and number of clusters, and is a flexible alternative to, and useful complement of, existing methods. Comparison indicated that the relative efficiency defined here is greater than the measure in the literature under some conditions and may be smaller under others. The relative efficiency of unequal versus equal cluster sizes defined using the noncentrality parameter suggests a sample size approach that is a flexible alternative to, and a useful complement of, existing methods.
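
    A sketch of the underlying arithmetic under compound symmetry: a cluster of size m contributes an effective m / (1 + (m - 1)ρ) observations, the noncentrality parameter follows from the effective sizes of the two arms, and power comes from the noncentral t distribution. The cluster-level degrees-of-freedom convention here is an assumption, and this is not the paper's exact derivation:

    ```python
    import numpy as np
    from scipy import stats

    def n_eff(sizes, rho):
        """Effective sample size of one arm under compound symmetry."""
        sizes = np.asarray(sizes, float)
        return np.sum(sizes / (1 + (sizes - 1) * rho))

    def power(delta, sigma, rho, sizes_a, sizes_b, alpha=0.05):
        se = sigma * np.sqrt(1 / n_eff(sizes_a, rho) + 1 / n_eff(sizes_b, rho))
        ncp = delta / se                               # noncentrality parameter
        df = len(sizes_a) + len(sizes_b) - 2           # cluster-level df convention
        tcrit = stats.t.ppf(1 - alpha / 2, df)
        return 1 - stats.nct.cdf(tcrit, df, ncp) + stats.nct.cdf(-tcrit, df, ncp)

    # Relative efficiency of unequal vs equal sizes at the same total n:
    sizes = [4, 8, 12, 16, 20]                         # unequal clusters, mean 12
    equal = [12] * 5
    print(n_eff(sizes, 0.05) / n_eff(equal, 0.05))     # < 1: efficiency lost
    ```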

  19. Cervical Cancer Prevention in HIV-infected Women Using the “See and Treat” Approach in Botswana

    PubMed Central

    Ramogola-Masire, Doreen; de Klerk, Ronny; Monare, Barati; Ratshaa, Bakgaki; Friedman, Harvey M.; Zetola, Nicola M.

    2013-01-01

    Background Cervical cancer is a major public health problem in resource-limited settings, particularly among HIV-infected women. Given the challenges of cytology-based approaches, the efficiency of new screening programs needs to be assessed. Setting Community and hospital-based clinics in Gaborone, Botswana. Objective To determine the feasibility and efficiency of the "See and Treat" approach using Visual Inspection Acetic Acid (VIA) and Enhanced Digital Imaging (EDI) for cervical cancer prevention in HIV-infected women. Methods A two-tier community-based cervical cancer prevention program was implemented. HIV-infected women were screened by nurses at the community using the VIA/EDI approach. Low-grade lesions were treated with cryotherapy on the same visit. Women with complex lesions were referred to our second tier, specialized clinic for evaluation. Weekly quality control assessments were performed by a specialist in collaboration with the nurses on all pictures taken. Results From March 2009 through January 2011, 2,175 patients were screened for cervical cancer at our community-based clinic. 253 (11.6%) were found to have low-grade lesions and received same-day cryotherapy. 1,347 (61.9%) women were considered to have a normal examination and 575 (27.3%) were referred for further evaluation and treatment. Of the 1,347 women initially considered to have normal exams, 267 (19.8%) were recalled based on weekly quality control assessments. 210 (78.6%) of the 267 recalled women and 499 (86.8%) of the 575 referred women were seen at the referral clinic. Of these 709 women, 506 (71.4%) required additional treatment. Overall, 264 CIN stage 2 or 3 lesions were identified and treated, and six micro-invasive cancers identified were referred for further management. Conclusions Our "See and Treat" cervical cancer prevention program using the VIA/EDI approach is a feasible, high-output and high-efficiency program, worthy of considering as an additional cervical cancer screening method in Botswana, especially for women with limited access to the current cytology-based screening services. PMID:22134146

  20. Cervical cancer prevention in HIV-infected women using the "see and treat" approach in Botswana.

    PubMed

    Ramogola-Masire, Doreen; de Klerk, Ronny; Monare, Barati; Ratshaa, Bakgaki; Friedman, Harvey M; Zetola, Nicola M

    2012-03-01

    Cervical cancer is a major public health problem in resource-limited settings, particularly among HIV-infected women. Given the challenges of cytology-based approaches, the efficiency of new screening programs needs to be assessed. Community and hospital-based clinics in Gaborone, Botswana. To determine the feasibility and efficiency of the "see and treat" approach using visual inspection acetic acid (VIA) and enhanced digital imaging (EDI) for cervical cancer prevention in HIV-infected women. A 2-tier community-based cervical cancer prevention program was implemented. HIV-infected women were screened by nurses at the community using the VIA/EDI approach. Low-grade lesions were treated with cryotherapy on the same visit. Women with complex lesions were referred to our second tier specialized clinic for evaluation. Weekly quality control assessments were performed by a specialist in collaboration with the nurses on all pictures taken. From March 2009 through January 2011, 2175 patients were screened for cervical cancer at our community-based clinic. Two hundred fifty-three patients (11.6%) were found to have low-grade lesions and received same-day cryotherapy. One thousand three hundred forty-seven (61.9%) women were considered to have a normal examination, and 575 (27.3%) were referred for further evaluation and treatment. Of the 1347 women initially considered to have normal exams, 267 (19.8%) were recalled based on weekly quality control assessments. Two hundred ten (78.6%) of the 267 recalled women, and 499 (86.8%) of the 575 referred women were seen at the referral clinic. Of these 709 women, 506 (71.4%) required additional treatment. Overall, 264 cervical intraepithelial neoplasia stage 2 or 3 lesions were identified and treated, and 6 microinvasive cancers identified were referred for further management. Our "see and treat" cervical cancer prevention program using the VIA/EDI approach is a feasible, high-output and high-efficiency program, worthy of considering as an additional cervical cancer screening method in Botswana, especially for women with limited access to the current cytology-based screening services.

  1. Infrared Thermography Approach for Effective Shielding Area of Field Smoke Based on Background Subtraction and Transmittance Interpolation.

    PubMed

    Tang, Runze; Zhang, Tonglai; Chen, Yongpeng; Liang, Hao; Li, Bingyang; Zhou, Zunning

    2018-05-06

    Effective shielding area is a crucial indicator for the evaluation of infrared smoke-obscuring effectiveness on the battlefield. The conventional methods for assessing the shielding area of the smoke screen are time-consuming and labor intensive, in addition to lacking precision. Therefore, an efficient and convincing technique for testing the effective shielding area of the smoke screen has great potential benefits for smoke screen applications in the field trial. In this study, a thermal infrared sensor with a mid-wavelength infrared (MWIR) range of 3 to 5 μm was first used to capture the target scene images through clear air as well as obscuring smoke, at regular intervals. Background subtraction, as used in motion detection, was then applied to obtain the contour of the smoke cloud at each frame. The smoke transmittance at each pixel within the smoke contour was interpolated based on the data collected from the image. Finally, the smoke effective shielding area was calculated based on the accumulation of the effectively shielding pixel points. One advantage of this approach is that it utilizes only one thermal infrared sensor without any additional equipment in the field trial, which contributes significantly to its efficiency and convenience. Experiments have been carried out to demonstrate that this approach can determine the effective shielding area of the field infrared smoke both practically and efficiently.
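
    The final accumulation step reduces to counting pixels whose transmittance falls below a threshold and scaling by the per-pixel ground area. A minimal sketch, assuming a crude frame-to-background radiance ratio stands in for the paper's contour extraction and transmittance interpolation (threshold and names are illustrative):

    ```python
    import numpy as np

    def effective_shielding_area(frame, background, pixel_area,
                                 tau_threshold=0.5, eps=1e-6):
        """Count pixels that the smoke effectively obscures.

        frame, background: same-shape float arrays of MWIR intensities
        (smoke present vs clear scene); pixel_area: ground area per pixel.
        """
        tau = frame / (background + eps)      # crude transmittance estimate
        shielded = tau < tau_threshold        # effectively obscured pixels
        return shielded.sum() * pixel_area
    ```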

  2. Efficient engineering of chromosomal ribosome binding site libraries in mismatch repair proficient Escherichia coli.

    PubMed

    Oesterle, Sabine; Gerngross, Daniel; Schmitt, Steven; Roberts, Tania Michelle; Panke, Sven

    2017-09-26

    Multiplexed gene expression optimization via modulation of gene translation efficiency through ribosome binding site (RBS) engineering is a valuable approach for optimizing artificial properties in bacteria, ranging from genetic circuits to production pathways. Established algorithms design smart RBS-libraries based on a single partially-degenerate sequence that efficiently samples the entire space of translation initiation rates. However, the sequence space that is accessible when integrating the library by CRISPR/Cas9-based genome editing is severely restricted by DNA mismatch repair (MMR) systems. MMR efficiency depends on the type and length of the mismatch and thus effectively removes potential library members from the pool. Rather than working in MMR-deficient strains, which accumulate off-target mutations, or depending on temporary MMR inactivation, which requires additional steps, we eliminate this limitation by developing a pre-selection rule of genome-library-optimized-sequences (GLOS) that enables introducing large functional diversity into MMR-proficient strains with sequences that are no longer subject to MMR-processing. We implement several GLOS-libraries in Escherichia coli and show that GLOS-libraries indeed retain diversity during genome editing and that such libraries can be used in complex genome editing operations such as concomitant deletions. We argue that this approach allows for stable and efficient fine tuning of chromosomal functions with minimal effort.
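
    A minimal sketch of the library bookkeeping involved: expanding a partially degenerate RBS sequence into its concrete members, so that each can then be checked against a pre-selection rule. The sequence and any downstream filter are placeholders, not the paper's GLOS criteria:

    ```python
    from itertools import product

    # Standard IUPAC degenerate-base alphabet.
    IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
             "R": "AG", "Y": "CT", "S": "CG", "W": "AT",
             "K": "GT", "M": "AC", "B": "CGT", "D": "AGT",
             "H": "ACT", "V": "ACG", "N": "ACGT"}

    def expand(degenerate):
        """All concrete members encoded by a degenerate sequence."""
        return ["".join(p) for p in product(*(IUPAC[b] for b in degenerate.upper()))]

    library = expand("AGGAGGNNNATG")  # hypothetical RBS library; 64 members
    ```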

  3. Sink-oriented Dynamic Location Service Protocol for Mobile Sinks with an Energy Efficient Grid-Based Approach.

    PubMed

    Jeon, Hyeonjae; Park, Kwangjin; Hwang, Dae-Joon; Choo, Hyunseung

    2009-01-01

    Sensor nodes transmit the sensed information to the sink through wireless sensor networks (WSNs). They have limited power, computational capacities and memory. Portable wireless devices are increasing in popularity. Mechanisms that allow information to be efficiently obtained through mobile WSNs are of significant interest. However, a mobile sink introduces many challenges to data dissemination in large WSNs. For example, it is important to efficiently identify the locations of mobile sinks and disseminate information from multi-source nodes to the multi-mobile sinks. In particular, a stationary dissemination path may no longer be effective in mobile sink applications, due to sink mobility. In this paper, we propose a Sink-oriented Dynamic Location Service (SDLS) approach to handle sink mobility. In SDLS, we propose an Eight-Direction Anchor (EDA) system that acts as a location service server. EDA prevents intensive energy consumption at the border sensor nodes and thus provides energy balancing to all the sensor nodes. Then we propose a Location-based Shortest Relay (LSR) that efficiently forwards (or relays) data from a source node to a sink with minimal delay path. Our results demonstrate that SDLS not only provides an efficient and scalable location service, but also reduces the average data communication overhead in scenarios with multiple and moving sinks and sources.

  4. Identification of inelastic parameters based on deep drawing forming operations using a global-local hybrid Particle Swarm approach

    NASA Astrophysics Data System (ADS)

    Vaz, Miguel; Luersen, Marco A.; Muñoz-Rojas, Pablo A.; Trentin, Robson G.

    2016-04-01

    Application of optimization techniques to the identification of inelastic material parameters has substantially increased in recent years. The complex stress-strain paths and high nonlinearity, typical of this class of problems, require the development of robust and efficient techniques for inverse problems able to account for an irregular topography of the fitness surface. Within this framework, this work investigates the application of the gradient-based Sequential Quadratic Programming method, of the Nelder-Mead downhill simplex algorithm, of Particle Swarm Optimization (PSO), and of a global-local PSO-Nelder-Mead hybrid scheme to the identification of inelastic parameters based on a deep drawing operation. The hybrid technique has shown to be the best strategy by combining the good PSO performance to approach the global minimum basin of attraction with the efficiency demonstrated by the Nelder-Mead algorithm to obtain the minimum itself.
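
    A minimal global-local hybrid in the same spirit: a bare-bones PSO locates the basin of attraction, then scipy's Nelder-Mead polishes the minimum. The Rosenbrock function stands in for the deep-drawing identification objective, and all hyperparameters are illustrative:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)

    def pso(f, bounds, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
        lo, hi = np.array(bounds, float).T
        x = rng.uniform(lo, hi, (n_particles, lo.size))
        v = np.zeros_like(x)
        pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
        gbest = pbest[pval.argmin()]
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)                 # keep particles in bounds
            val = np.apply_along_axis(f, 1, x)
            better = val < pval
            pbest[better], pval[better] = x[better], val[better]
            gbest = pbest[pval.argmin()]
        return gbest

    f = lambda p: (p[0] - 1) ** 2 + 100 * (p[1] - p[0] ** 2) ** 2  # stand-in objective
    x0 = pso(f, bounds=[(-5, 5), (-5, 5)])          # global stage finds the basin
    res = minimize(f, x0, method="Nelder-Mead")     # local stage refines the minimum
    ```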

  5. Mechanism-Based Condition Screening for Sustainable Catalysis in Single-Electron Steps by Cyclic Voltammetry.

    PubMed

    Liedtke, Theresa; Spannring, Peter; Riccardi, Ludovico; Gansäuer, Andreas

    2018-04-23

    A cyclic-voltammetry-based screening method for Cp2TiX-catalyzed reactions is introduced. Our mechanism-based approach enables the study of the influence of various additives on the electrochemically generated active catalyst Cp2TiX, which is in equilibrium with the catalytically inactive [Cp2TiX2]−. Thioureas and ureas are most efficient in the generation of Cp2TiX in THF. Knowing the precise position of the equilibrium between Cp2TiX and [Cp2TiX2]− allowed us to identify reaction conditions for the bulk electrolysis of Cp2TiX2 complexes and for Cp2TiX-catalyzed radical arylations without having to carry out the reactions. Our time- and resource-efficient approach is of general interest for the design of catalytic reactions that proceed in single-electron steps. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. A survey on evolutionary algorithm based hybrid intelligence in bioinformatics.

    PubMed

    Li, Shan; Kang, Liying; Zhao, Xing-Ming

    2014-01-01

    With the rapid advance in genomics, proteomics, metabolomics, and other types of omics technologies during the past decades, a tremendous amount of data related to molecular biology has been produced. It is becoming a big challenge for bioinformaticians to analyze and interpret these data with conventional intelligent techniques, for example, support vector machines. Recently, hybrid intelligent methods, which integrate several standard intelligent approaches, have become more and more popular due to their robustness and efficiency. In particular, hybrid intelligent approaches based on evolutionary algorithms (EAs) are widely used in various fields due to the efficiency and robustness of EAs. In this review, we give an introduction to the applications of hybrid intelligent methods, in particular those based on evolutionary algorithms, in bioinformatics. Specifically, we focus on their applications to three common problems that arise in bioinformatics, that is, feature selection, parameter estimation, and the reconstruction of biological networks.

  7. Complementary Approaches to Existing Target Based Drug Discovery for Identifying Novel Drug Targets.

    PubMed

    Vasaikar, Suhas; Bhatia, Pooja; Bhatia, Partap G; Chu Yaiw, Koon

    2016-11-21

    In the past decade, it was observed that the relationship between the emerging New Molecular Entities and the quantum of R&D investment has not been favorable. There might be numerous reasons, but a few studies stress the introduction of the target-based drug discovery approach as one of the factors. Although a number of drugs have been developed with an emphasis on a single protein target, identification of a valid target remains complex. The approach focuses on a single in vitro target, which overlooks the complexity of the cell and makes the process of validating drug targets uncertain. Thus, it is imperative to search for alternatives rather than looking only at success stories of target-based drug discovery. It would be beneficial if drugs were developed to target multiple components. New approaches like reverse engineering and translational research need to take into account both system- and target-based approaches. This review evaluates the strengths and limitations of known drug discovery approaches and proposes alternative approaches for increasing the efficiency of drug discovery.

  8. Estimating Function Approaches for Spatial Point Processes

    NASA Astrophysics Data System (ADS)

    Deng, Chong

    Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization from a stochastic process called a spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood and Palm likelihood usually suffer from a loss of information due to ignoring the correlation among pairs. For many types of correlated data other than spatial point processes, when likelihood-based approaches are not desirable, estimating functions have been widely used for model fitting. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on the theory of asymptotically optimal estimating functions, can be used to incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives to balance the trade-off between computational complexity and estimation efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation and estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators for the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data. Second, we further explore the quasi-likelihood approach for fitting the second-order intensity function of spatial point processes. However, the original second-order quasi-likelihood is barely feasible due to the intense computation and high memory requirement needed to solve a large linear system. Motivated by the existence of geometric regular patterns in stationary point processes, we find a lower-dimensional representation of the optimal weight function and propose a reduced second-order quasi-likelihood approach. Through a simulation study, we show that the proposed method not only demonstrates superior performance in fitting the clustering parameter but also relaxes the constraint on the tuning parameter H. Third, we studied the quasi-likelihood type estimating function that is optimal in a certain class of first-order estimating functions for estimating the regression parameter in spatial point process models. Then, by using a novel spectral representation, we construct an implementation that is computationally much more efficient and can be applied to a more general setup than the original quasi-likelihood method.

  9. A Novel Addressing Scheme for PMIPv6 Based Global IP-WSNs

    PubMed Central

    Islam, Md. Motaharul; Huh, Eui-Nam

    2011-01-01

    IP based Wireless Sensor Networks (IP-WSNs) are being used in healthcare, home automation, industrial control and agricultural monitoring. In most of these applications global addressing of individual IP-WSN nodes and layer-three routing for mobility enabled IP-WSN with special attention to reliability, energy efficiency and end to end delay minimization are a few of the major issues to be addressed. Most of the routing protocols in WSN are based on layer-two approaches. For reliability and end to end communication enhancement the necessity of layer-three routing for IP-WSNs is generating significant attention among the research community, but due to the hurdle of maintaining routing state and other communication overhead, it was not possible to introduce a layer-three routing protocol for IP-WSNs. To address this issue we propose in this paper a global addressing scheme and layer-three based hierarchical routing protocol. The proposed addressing and routing approach focuses on all the above mentioned issues. Simulation results show that the proposed addressing and routing approach significantly enhances the reliability, energy efficiency and end to end delay minimization. We also present architecture, message formats and different routing scenarios in this paper. PMID:22164084

  10. A novel addressing scheme for PMIPv6 based global IP-WSNs.

    PubMed

    Islam, Md Motaharul; Huh, Eui-Nam

    2011-01-01

    IP based Wireless Sensor Networks (IP-WSNs) are being used in healthcare, home automation, industrial control and agricultural monitoring. In most of these applications global addressing of individual IP-WSN nodes and layer-three routing for mobility enabled IP-WSN with special attention to reliability, energy efficiency and end to end delay minimization are a few of the major issues to be addressed. Most of the routing protocols in WSN are based on layer-two approaches. For reliability and end to end communication enhancement the necessity of layer-three routing for IP-WSNs is generating significant attention among the research community, but due to the hurdle of maintaining routing state and other communication overhead, it was not possible to introduce a layer-three routing protocol for IP-WSNs. To address this issue we propose in this paper a global addressing scheme and layer-three based hierarchical routing protocol. The proposed addressing and routing approach focuses on all the above mentioned issues. Simulation results show that the proposed addressing and routing approach significantly enhances the reliability, energy efficiency and end to end delay minimization. We also present architecture, message formats and different routing scenarios in this paper.

  11. Nested polynomial trends for the improvement of Gaussian process-based predictors

    NASA Astrophysics Data System (ADS)

    Perrin, G.; Soize, C.; Marque-Pucheu, S.; Garnier, J.

    2017-10-01

    The role of simulation keeps increasing for the sensitivity analysis and the uncertainty quantification of complex systems. Such numerical procedures are generally based on the processing of a huge amount of code evaluations. When the computational cost associated with one particular evaluation of the code is high, such direct approaches, based on the computer code only, are not affordable. Surrogate models therefore have to be introduced to interpolate the information given by a fixed set of code evaluations to the whole input space. When confronted with deterministic mappings, Gaussian process regression (GPR), or kriging, presents a good compromise between complexity, efficiency and error control. Such a method considers the quantity of interest of the system as a particular realization of a Gaussian stochastic process, whose mean and covariance functions have to be identified from the available code evaluations. In this context, this work proposes an innovative parametrization of this mean function, which is based on the composition of two polynomials. This approach is particularly relevant for the approximation of strongly nonlinear quantities of interest from very little information. After presenting the theoretical basis of this method, this work compares its efficiency to alternative approaches on a series of examples.
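
    The paper composes two polynomials for the mean; a simpler fixed polynomial trend with GPR fitted to the residuals conveys the role the trend plays (this sketch substitutes scikit-learn's tooling and a quadratic trend for the paper's nested parametrization, and the toy simulator is illustrative):

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF
    from sklearn.linear_model import LinearRegression
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, (20, 1))             # very few code evaluations
    y = np.sin(3 * X[:, 0]) + X[:, 0] ** 2      # stand-in simulator output

    # Trend: ordinary least squares on quadratic features.
    phi = PolynomialFeatures(degree=2)
    trend = LinearRegression().fit(phi.fit_transform(X), y)

    # GPR models whatever the trend misses.
    gpr = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(
        X, y - trend.predict(phi.transform(X)))

    def predict(X_new):
        return trend.predict(phi.transform(X_new)) + gpr.predict(X_new)
    ```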

  12. Efficient mRNA-Based Genetic Engineering of Human NK Cells with High-Affinity CD16 and CCR7 Augments Rituximab-Induced ADCC against Lymphoma and Targets NK Cell Migration toward the Lymph Node-Associated Chemokine CCL19.

    PubMed

    Carlsten, Mattias; Levy, Emily; Karambelkar, Amrita; Li, Linhong; Reger, Robert; Berg, Maria; Peshwa, Madhusudan V; Childs, Richard W

    2016-01-01

    For more than a decade, investigators have pursued methods to genetically engineer natural killer (NK) cells for use in clinical therapy against cancer. Despite considerable advances in viral transduction of hematopoietic stem cells and T cells, transduction efficiencies for NK cells have remained disappointingly low. Here, we show that NK cells can be genetically reprogramed efficiently using a cGMP-compliant mRNA electroporation method that induces rapid and reproducible transgene expression in nearly all transfected cells, without negatively influencing their viability, phenotype, and cytotoxic function. To study its potential therapeutic application, we used this approach to improve key aspects involved in efficient lymphoma targeting by adoptively infused ex vivo-expanded NK cells. Electroporation of NK cells with mRNA coding for the chemokine receptor CCR7 significantly promoted migration toward the lymph node-associated chemokine CCL19. Further, introduction of mRNA coding for the high-affinity antibody-binding receptor CD16 (CD16-158V) substantially augmented NK cell cytotoxicity against rituximab-coated lymphoma cells. Based on these data, we conclude that this approach can be utilized to genetically modify multiple modalities of NK cells in a highly efficient manner with the potential to improve multiple facets of their in vivo tumor targeting, thus, opening a new arena for the development of more efficacious adoptive NK cell-based cancer immunotherapies.

  13. Efficient discovery of risk patterns in medical data.

    PubMed

    Li, Jiuyong; Fu, Ada Wai-chee; Fahey, Paul

    2009-01-01

    This paper studies the problem of efficiently discovering risk patterns in medical data. Risk patterns are defined by a statistical metric, relative risk, which has been widely used in epidemiological research. To avoid fruitless search in the complete exploration of risk patterns, we define the optimal risk pattern set to exclude superfluous patterns, i.e. complicated patterns with lower relative risk than their corresponding simpler-form patterns. We prove that mining optimal risk pattern sets conforms to an anti-monotone property that supports an efficient mining algorithm. We propose an efficient algorithm for mining optimal risk pattern sets based on this property. We also propose a hierarchical structure to present discovered patterns for easy perusal by domain experts. The proposed approach is compared with two well-known rule discovery methods, decision tree and association rule mining approaches, on benchmark data sets and applied to a real world application. The proposed method discovers more and better quality risk patterns than a decision tree approach. The decision tree method is not designed for such applications and is inadequate for pattern exploring. The proposed method does not discover a large number of uninteresting superfluous patterns as an association mining approach does. The proposed method is more efficient than an association rule mining method. A real world case study shows that the method reveals some interesting risk patterns to medical practitioners. The proposed method is an efficient approach to explore risk patterns. It quickly identifies cohorts of patients that are vulnerable to a risk outcome from a large data set. The proposed method is useful for exploratory study on large medical data to generate and refine hypotheses. The method is also useful for designing medical surveillance systems.
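
    The metric itself is simple: relative risk compares the outcome rate among records matching a pattern with the rate among those that do not. A minimal sketch of scoring one candidate pattern on a binary-outcome table (the column names, pattern encoding, and example call are hypothetical, and the mining algorithm itself is not shown):

    ```python
    import pandas as pd

    def relative_risk(df, pattern, outcome):
        """RR of `outcome` for records matching `pattern` (a dict of
        attribute -> value) versus all records that do not match.
        Assumes both groups are non-empty."""
        match = pd.Series(True, index=df.index)
        for col, val in pattern.items():
            match &= df[col] == val
        p_exposed = df.loc[match, outcome].mean()
        p_unexposed = df.loc[~match, outcome].mean()
        return p_exposed / p_unexposed

    # Hypothetical usage on a patient table with 0/1 columns:
    # rr = relative_risk(df, {"smoker": 1, "age_over_60": 1}, outcome="stroke")
    ```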

  14. Content-based VLE designs improve learning efficiency in constructivist statistics education.

    PubMed

    Wessa, Patrick; De Rycker, Antoon; Holliday, Ian Edward

    2011-01-01

    We introduced a series of computer-supported workshops in our undergraduate statistics courses, in the hope that it would help students to gain a deeper understanding of statistical concepts. This raised questions about the appropriate design of the Virtual Learning Environment (VLE) in which such an approach had to be implemented. Therefore, we investigated two competing software design models for VLEs. In the first system, all learning features were a function of the classical VLE. The second system was designed from the perspective that learning features should be a function of the course's core content (statistical analyses), which required us to develop a specific-purpose Statistical Learning Environment (SLE) based on Reproducible Computing and newly developed Peer Review (PR) technology. The main research question is whether the second VLE design improved learning efficiency as compared to the standard type of VLE design that is commonly used in education. As a secondary objective we provide empirical evidence about the usefulness of PR as a constructivist learning activity which supports non-rote learning. Finally, this paper illustrates that it is possible to introduce a constructivist learning approach in large student populations, based on adequately designed educational technology, without subsuming educational content to technological convenience. Both VLE systems were tested within a two-year quasi-experiment based on a Reliable Nonequivalent Group Design. This approach allowed us to draw valid conclusions about the treatment effect of the changed VLE design, even though the systems were implemented in successive years. The methodological aspects about the experiment's internal validity are explained extensively. The effect of the design change is shown to have substantially increased the efficiency of constructivist, computer-assisted learning activities for all cohorts of the student population under investigation. The findings demonstrate that a content-based design outperforms the traditional VLE-based design.

  15. Relay discovery and selection for large-scale P2P streaming

    PubMed Central

    Zhang, Chengwei; Wang, Angela Yunxian

    2017-01-01

    In peer-to-peer networks, application relays have been commonly used to provide various networking services. The service performance often improves significantly if a relay is selected appropriately based on its network location. In this paper, we studied the location-aware relay discovery and selection problem for large-scale P2P streaming networks. In these large-scale and dynamic overlays, it incurs significant communication and computation cost to discover a sufficiently large relay candidate set and further to select one relay with good performance. The network location can be measured directly or indirectly with the tradeoffs between timeliness, overhead and accuracy. Based on a measurement study and the associated error analysis, we demonstrate that indirect measurements, such as King and Internet Coordinate Systems (ICS), can only achieve a coarse estimation of peers’ network location and those methods based on pure indirect measurements cannot lead to a good relay selection. We also demonstrate that there exists significant error amplification of the commonly used “best-out-of-K” selection methodology using three RTT data sets publicly available. We propose a two-phase approach to achieve efficient relay discovery and accurate relay selection. Indirect measurements are used to narrow down a small number of high-quality relay candidates and the final relay selection is refined based on direct probing. This two-phase approach enjoys an efficient implementation using the Distributed-Hash-Table (DHT). When the DHT is constructed, the node keys carry the location information and they are generated scalably using indirect measurements, such as the ICS coordinates. The relay discovery is achieved efficiently utilizing the DHT-based search. We evaluated various aspects of this DHT-based approach, including the DHT indexing procedure, key generation under peer churn and message costs. PMID:28410384

  16. Relay discovery and selection for large-scale P2P streaming.

    PubMed

    Zhang, Chengwei; Wang, Angela Yunxian; Hei, Xiaojun

    2017-01-01

    In peer-to-peer networks, application relays have been commonly used to provide various networking services. The service performance often improves significantly if a relay is selected appropriately based on its network location. In this paper, we studied the location-aware relay discovery and selection problem for large-scale P2P streaming networks. In these large-scale and dynamic overlays, it incurs significant communication and computation cost to discover a sufficiently large relay candidate set and further to select one relay with good performance. The network location can be measured directly or indirectly with the tradeoffs between timeliness, overhead and accuracy. Based on a measurement study and the associated error analysis, we demonstrate that indirect measurements, such as King and Internet Coordinate Systems (ICS), can only achieve a coarse estimation of peers' network location and those methods based on pure indirect measurements cannot lead to a good relay selection. We also demonstrate that there exists significant error amplification of the commonly used "best-out-of-K" selection methodology using three RTT data sets publicly available. We propose a two-phase approach to achieve efficient relay discovery and accurate relay selection. Indirect measurements are used to narrow down a small number of high-quality relay candidates and the final relay selection is refined based on direct probing. This two-phase approach enjoys an efficient implementation using the Distributed-Hash-Table (DHT). When the DHT is constructed, the node keys carry the location information and they are generated scalably using indirect measurements, such as the ICS coordinates. The relay discovery is achieved efficiently utilizing the DHT-based search. We evaluated various aspects of this DHT-based approach, including the DHT indexing procedure, key generation under peer churn and message costs.
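
    The two-phase pattern is easy to state in code: cheap indirect coordinates shortlist a handful of candidates, and only the shortlist is probed directly. A minimal sketch, where `probe_rtt` is a placeholder for the direct measurement hook and all names are assumptions of this sketch, not the paper's protocol:

    ```python
    import numpy as np

    def select_relay(candidates, coords, my_coord, probe_rtt, shortlist=5):
        """Phase 1: rank candidates by distance in an Internet-coordinate
        space (indirect, cheap). Phase 2: direct RTT probes on the short
        list only, returning the lowest-latency relay."""
        dist = [np.linalg.norm(np.array(coords[c]) - np.array(my_coord))
                for c in candidates]
        short = [c for _, c in sorted(zip(dist, candidates))][:shortlist]
        return min(short, key=probe_rtt)    # few probes instead of |candidates|
    ```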

  17. 15.3%-Efficient GaAsP Solar Cells on GaP/Si Templates

    DOE PAGES

    Vaisman, Michelle; Fan, Shizhao; Nay Yaung, Kevin; ...

    2017-07-26

    As single-junction Si solar cells approach their practical efficiency limits, a new pathway is necessary to increase efficiency in order to realize more cost-effective photovoltaics. Integrating III-V cells onto Si in a multijunction architecture is a promising approach that can achieve high efficiency while leveraging the infrastructure already in place for Si and III-V technology. In this Letter, we demonstrate a record 15.3%-efficient 1.7 eV GaAsP top cell on GaP/Si, enabled by recent advances in material quality in conjunction with an improved device design and a high-performance antireflection coating. Furthermore, we present a separate Si bottom cell with a 1.7 eV GaAsP optical filter to absorb most of the visible light with an efficiency of 6.3%, showing the feasibility of monolithic III-V/Si tandems with >20% efficiency. Through spectral efficiency analysis, we also compare our results to previously published GaAsP and Si devices, projecting tandem GaAsP/Si efficiencies of up to 25.6% based on current state-of-the-art individual subcells. With the aid of modeling, we further illustrate a realistic path toward 30% GaAsP/Si tandems for high-efficiency, monolithically integrated photovoltaics.

  18. Semirational Approach for Ultrahigh Poly(3-hydroxybutyrate) Accumulation in Escherichia coli by Combining One-Step Library Construction and High-Throughput Screening.

    PubMed

    Li, Teng; Ye, Jianwen; Shen, Rui; Zong, Yeqing; Zhao, Xuejin; Lou, Chunbo; Chen, Guo-Qiang

    2016-11-18

    As a product of a multistep enzymatic reaction, accumulation of poly(3-hydroxybutyrate) (PHB) in Escherichia coli (E. coli) can be achieved by overexpression of the PHB synthesis pathway from a native producer involving three genes phbC, phbA, and phbB. Pathway optimization by adjusting expression levels of the three genes can influence properties of the final product. Here, we report a semirational approach for highly efficient PHB pathway optimization in E. coli based on a phbCAB operon cloned from the native producer Ralstonia eutropha (R. eutropha). Rationally designed ribosomal binding site (RBS) libraries with defined strengths for each of the three genes were constructed based on high or low copy number plasmids in a one-pot reaction by an oligo-linker mediated assembly (OLMA) method. Strains with desired properties were evaluated and selected by three different methodologies, including visual selection, high-throughput screening, and detailed in-depth analysis. Applying this approach, strains accumulating 0%-92% PHB content in cell dry weight (CDW) were achieved. PHB with various weight-average molecular weights (Mw) of 2.7-6.8 × 10^6 was also efficiently produced in relatively high contents. These results suggest that the semirational approach combining library design, construction, and proper screening is an efficient way to optimize PHB and other multienzyme pathways.

  19. An integrated data envelopment analysis-artificial neural network approach for benchmarking of bank branches

    NASA Astrophysics Data System (ADS)

    Shokrollahpour, Elsa; Hosseinzadeh Lotfi, Farhad; Zandieh, Mostafa

    2016-06-01

    Efficiency and quality of services are crucial to today's banking industry. Competition in this sector has become increasingly intense as a result of fast improvements in technology. Therefore, performance analysis of the banking sector attracts more attention these days. Even though data envelopment analysis (DEA) is a pioneering approach in the literature as an efficiency measurement tool and a means of finding benchmarks, it is unable to suggest possible future benchmarks. The drawback is that the benchmarks it provides may still be less efficient than more advanced future benchmarks. To cover for this weakness, an artificial neural network is integrated with DEA in this paper to calculate the relative efficiency and more reliable benchmarks of the branches of an Iranian commercial bank. Therefore, each branch could have a strategy to improve its efficiency and eliminate the causes of inefficiency based on a 5-year time forecast.
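
    DEA's efficiency scores come from one small linear program per branch. A sketch of the standard input-oriented CCR multiplier form with scipy (the array shapes and LP layout are this sketch's conventions; the paper's DEA-ANN integration is not shown):

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def dea_ccr(X, Y):
        """Input-oriented CCR efficiency for each DMU (multiplier form).

        X: (n, m) array of inputs, Y: (n, s) array of outputs.
        For DMU o: maximize u.y_o subject to v.x_o = 1 and
        u.y_j - v.x_j <= 0 for all j, with u, v >= 0.
        """
        n, m = X.shape
        s = Y.shape[1]
        scores = []
        for o in range(n):
            c = np.concatenate([np.zeros(m), -Y[o]])             # maximize u.y_o
            A_ub = np.hstack([-X, Y])                            # u.y_j - v.x_j <= 0
            A_eq = np.concatenate([X[o], np.zeros(s)])[None, :]  # v.x_o = 1
            res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                          A_eq=A_eq, b_eq=[1.0],
                          bounds=[(0, None)] * (m + s))
            scores.append(-res.fun)      # efficiency in (0, 1]
        return np.array(scores)
    ```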

  20. Healthcare information system approaches based on middleware concepts.

    PubMed

    Holena, M; Blobel, B

    1997-01-01

    To meet the challenges of efficiency and high-quality care, health care systems must implement the "Shared Care" paradigm of distributed co-operating systems. To this end, both newly developed and legacy applications must be fully integrated into the care process. These requirements can be fulfilled by information systems based on middleware concepts. In the paper, the middleware approaches HL7, DHE, and CORBA are described. The relevance of those approaches to the healthcare domain is documented. The description presented here is complemented by two other papers in this volume, which concentrate on the evaluation of the approaches and on their security threats and solutions.

  1. Efficient approach to the free energy of crystals via Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Navascués, G.; Velasco, E.

    2015-08-01

    We present a general approach to compute the absolute free energy of a system of particles with constrained center of mass based on the Monte Carlo thermodynamic coupling integral method. The version of the Frenkel-Ladd approach [J. Chem. Phys. 81, 3188 (1984)], 10.1063/1.448024, which uses a harmonic coupling potential, is recovered. Also, we propose a different choice, based on one-particle square-well coupling potentials, which is much simpler, more accurate, and free from some of the difficulties of the Frenkel-Ladd method. We apply our approach to hard spheres and compare with the standard harmonic method.

  2. A Systems Approach to Nitrogen Delivery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goins, Bobby

    A systems-based approach will be used to evaluate the nitrogen delivery process. This approach involves principles found in Lean, Reliability, Systems Thinking, and Requirements. This unique combination of principles and thought processes yields a very in-depth look into the system to which it is applied. By applying a systems-based approach to the nitrogen delivery process, there should be improvements in cycle time and efficiency, and a reduction in the number of personnel needed to sustain the delivery process. This will in turn reduce the demurrage charges that the site incurs. In addition, there should be less frustration associated with the delivery process.

  3. An analytical probabilistic model of the quality efficiency of a sewer tank

    NASA Astrophysics Data System (ADS)

    Balistrocchi, Matteo; Grossi, Giovanna; Bacchi, Baldassare

    2009-12-01

    The assessment of the efficiency of a storm water storage facility devoted to sewer overflow control in urban areas depends strictly on the ability to model the main features of the rainfall-runoff routing process and the related wet-weather pollution delivery. In this paper, the analytical probabilistic approach is shown to support a tank design method whose capabilities are comparable to those of continuous simulation. Water quality aspects of such devices are incorporated in the model derivation. The formulation is based on a Weibull probabilistic model of the main characteristics of the rainfall process and on a power law describing the relationship between the dimensionless storm water cumulative runoff volume and the dimensionless cumulative pollutograph. Following this approach, efficiency indexes were established. The proposed model was verified by comparing its results to those obtained by continuous simulations; satisfactory agreement is shown for the proposed efficiency indexes.
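
    To convey the flavor of such efficiency indexes, here is a minimal sketch assuming Weibull-distributed event runoff volumes and a power-law cumulative pollutograph, as in the abstract; it estimates volumetric and pollutant-mass capture efficiencies by Monte Carlo rather than the paper's closed-form derivation, and all parameter values are hypothetical.

        import numpy as np

        rng = np.random.default_rng(0)
        shape, scale = 0.8, 10.0                  # hypothetical Weibull parameters (mm)
        a = 0.7                                   # hypothetical pollutograph exponent
        v = scale * rng.weibull(shape, 100_000)   # event runoff volumes

        def efficiencies(storage):
            captured = np.minimum(v, storage)     # volume retained by the tank
            eta_volume = captured.sum() / v.sum()
            # cumulative pollutograph M(w) ~ w**a: event mass ~ v**a,
            # mass in the captured first-flush fraction ~ captured**a
            eta_mass = (captured ** a).sum() / (v ** a).sum()
            return eta_volume, eta_mass

        for S in (5.0, 10.0, 20.0):               # candidate tank volumes
            print(S, [round(e, 3) for e in efficiencies(S)])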

  4. Efficient single-mode operation of a cladding-pumped ytterbium-doped helical-core fiber laser.

    PubMed

    Wang, P; Cooper, L J; Sahu, J K; Clarkson, W A

    2006-01-15

    A novel approach to achieving robust single-spatial-mode operation of cladding-pumped fiber lasers with multimode cores is reported. The approach is based on the use of a fiber geometry in which the core has a helical trajectory within the inner cladding to suppress laser oscillation on higher-order modes. In a preliminary proof-of-principle study, efficient single-mode operation of a cladding-pumped ytterbium-doped helical-core fiber laser with a 30 μm diameter core and a numerical aperture of 0.087 has been demonstrated. The laser yielded 60.4 W of output at 1043 nm in a beam with M² < 1.4 for 92.6 W launched pump power from a diode stack at 976 nm. The slope efficiency at pump powers well above threshold was approximately 84%, which compares favorably with the slope efficiencies achievable with conventional straight-core Yb-doped double-clad fiber lasers.

  5. Toyota Prius HEV neurocontrol and diagnostics.

    PubMed

    Prokhorov, Danil V

    2008-01-01

    A neural network controller for improved fuel efficiency of the Toyota Prius hybrid electric vehicle is proposed. A new method to detect and mitigate a battery fault is also presented. The approach is based on recurrent neural networks and includes the extended Kalman filter. The proposed approach is quite general and applicable to other control systems.
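
    A minimal sketch of one ingredient of such a scheme: innovation-based fault detection with a one-state Kalman filter standing in for the extended Kalman filter mentioned above. The battery model and all constants are hypothetical, not Toyota's.

        import numpy as np

        dt, q_cap = 1.0, 3600.0            # step (s) and capacity (A s), hypothetical
        q, r = 1e-7, 1e-3                  # process / measurement noise variances
        def ocv(soc):                      # hypothetical open-circuit-voltage curve
            return 3.0 + 1.2 * soc

        def detect_faults(current, voltage, threshold=4.0):
            """Flag samples whose normalized innovation is implausibly large."""
            soc, P, H = 1.0, 1e-4, 1.2     # H = d(ocv)/d(soc) for the linear curve
            flags = []
            for amps, volts in zip(current, voltage):
                soc -= amps * dt / q_cap   # predict: coulomb counting
                P += q
                innov = volts - ocv(soc)   # measurement residual
                S = H * P * H + r
                flags.append(abs(innov) / np.sqrt(S) > threshold)  # fault test
                K = P * H / S              # Kalman gain and update
                soc += K * innov
                P *= 1.0 - K * H
            return np.array(flags)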

  6. High Efficiency, Illumination Quality OLEDs for Lighting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joseph Shiang; James Cella; Kelly Chichak

    The goal of the program was to demonstrate a 45 lumen per watt (LPW) white light device based upon the use of multiple emission colors through the use of solution processing. This performance level is a dramatic extension of the team's previous 15 LPW large area illumination device. The fundamental material system was based upon commercial polymer materials. The team was largely able to achieve these goals, and was able to deliver to DOE a 90 lumen illumination source that had an average performance of 34 LPW at 1000 cd/m² with peak performances near 40 LPW. The average color temperature is 3200 K and the calculated CRI is 85. The device operated at a brightness of approximately 1000 cd/m². The use of multiple emission colors, particularly red and blue, provided additional degrees of design flexibility in achieving white light, but also required the use of a multilayered structure to separate the different recombination zones and prevent interconversion of blue emission to red emission. The use of commercial materials had the advantage that improvements by the chemical manufacturers in charge transport efficiency, operating life and material purity could be rapidly incorporated without the expenditure of additional effort. The program was designed to take maximum advantage of the known characteristics of these materials and proceeded in seven steps: (1) identify the most promising materials, (2) assemble them into multi-layer structures to control excitation and transport within the OLED, (3) identify materials development needs that would optimize performance within multilayer structures, (4) build a prototype that demonstrates the potential entitlement of the novel multilayer OLED architecture, (5) integrate all of the developments to find the single best materials set to implement the novel multilayer architecture, (6) further optimize the best materials set, and (7) make a large area high illumination quality white OLED. A photo of the final deliverable is shown. In 2003, a large area, OLED-based illumination source was demonstrated that could provide light with a quality, quantity, and efficiency on par with what can be achieved with traditional light sources. The demonstration source was made by tiling together 16 separate 6-inch x 6-inch blue-emitting OLEDs. The efficiency, total lumen output, and lifetime of the OLED-based illumination source were the same as what would be achieved with an 80 watt incandescent bulb. The devices had an average efficacy of 15 LPW and used solution-processed OLEDs. The individual 6-inch x 6-inch devices incorporated three technology strategies developed specifically for OLED lighting: downconversion for white light generation, scattering for outcoupling efficiency enhancement, and a scalable monolithic series architecture to enable large area devices. The downconversion approach consists of optically coupling a blue-emitting OLED to a set of luminescent layers. The layers are chosen to absorb the blue OLED emission and then luminesce with high efficiency at longer wavelengths. The composition and number of layers are chosen so that the unabsorbed blue emission and the longer wavelength re-emission combine to make white light. A downconversion approach has the advantage of allowing a wide variety of colors to be made from a limited set of blue emitters. In addition, one does not have to carefully tune the emission wavelength of the individual electro-luminescent species within the OLED device in order to achieve white light.
    The downconversion architecture used to develop the 15 LPW large area light source consisted of a polymer-based blue-emitting OLED and three downconversion layers. Two of the layers utilized perylene-based dyes from BASF AG of Germany with high quantum efficiency (>98%) and one of the layers consisted of inorganic phosphor particles (Y(Gd)AG:Ce) with a quantum efficiency of ≈85%. By independently varying the optical density of the downconversion layers, the overall emission spectrum could be adjusted to maximize performance for lighting (e.g. blackbody temperature, color rendering and luminous efficacy) while keeping the properties of the underlying blue OLED constant. The success of the downconversion approach is ultimately based upon the ability to produce efficient emission in the blue. Table 1 presents a comparison of the performance of the conjugated polymer, dye-doped polymer, and dendrimer approaches to making a solution-processed blue OLED as of 2006. Also given is the published state-of-the-art performance of a vapor-deposited blue OLED. One can see that all the approaches to a blue OLED give approximately the same external quantum efficiency at 500 cd/m². However, due to its low operating voltage, the fluorescent conjugated polymer approach yields a superior power efficiency at the same brightness.

  7. Irregular large-scale computed tomography on multiple graphics processors improves energy-efficiency metrics for industrial applications

    NASA Astrophysics Data System (ADS)

    Jimenez, Edward S.; Goodman, Eric L.; Park, Ryeojin; Orr, Laurel J.; Thompson, Kyle R.

    2014-09-01

    This paper investigates energy efficiency for various real-world industrial computed-tomography reconstruction algorithms, covering both CPU- and GPU-based implementations. This work shows that the energy required for a given reconstruction depends on performance and problem size. There are many ways to describe performance and energy efficiency, thus this work investigates multiple metrics including performance-per-watt, energy-delay product, and energy consumption. This work found that irregular GPU-based approaches realized tremendous savings in energy consumption when compared to CPU implementations while also significantly improving the performance-per-watt and energy-delay product metrics. Additional energy savings and other metric improvements were realized on the GPU-based reconstructions by improving storage I/O through a parallel MIMD-like modularization of the compute and I/O tasks.
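
    For reference, the metrics compared in the paper can be computed from a run's wall-clock time and average power draw; the numbers in this sketch are made up, not the paper's measurements.

        runtime_s = 120.0        # wall-clock time of one reconstruction run
        avg_power_w = 250.0      # average power draw during the run
        energy_j = avg_power_w * runtime_s            # energy consumption (J)

        recon_voxels = 2048 ** 3                      # hypothetical problem size
        perf_per_watt = (recon_voxels / runtime_s) / avg_power_w   # voxels/s/W
        energy_delay_product = energy_j * runtime_s   # J*s, lower is better
        print(energy_j, perf_per_watt, energy_delay_product)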

  8. Geometrical eigen-subspace framework based molecular conformation representation for efficient structure recognition and comparison

    NASA Astrophysics Data System (ADS)

    Li, Xiao-Tian; Yang, Xiao-Bao; Zhao, Yu-Jun

    2017-04-01

    We have developed an extended distance matrix approach to study the molecular geometric configuration through spectral decomposition. It is shown that the positions of all atoms in the eigen-space can be specified precisely by their eigen-coordinates, while the refined atomic eigen-subspace projection array adopted in our approach is demonstrated to be a competent invariant in structure comparison. Furthermore, a visual eigen-subspace projection function (EPF) is derived to characterize the surrounding configuration of an atom naturally. A complete set of atomic EPFs constitute an intrinsic representation of molecular conformation, based on which the interatomic EPF distance and intermolecular EPF distance can be reasonably defined. Exemplified with a few cases, the intermolecular EPF distance shows exceptional rationality and efficiency in structure recognition and comparison.
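
    A hedged sketch of the eigen-coordinate idea using classical multidimensional scaling, a standard spectral decomposition of a distance matrix (not the authors' refined eigen-subspace projection array): atomic coordinates are recovered from pairwise distances, up to rotation and reflection.

        import numpy as np

        def eigen_coordinates(D):
            """D: (n, n) interatomic distance matrix -> (n, 3) coordinates."""
            n = D.shape[0]
            J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
            B = -0.5 * J @ (D ** 2) @ J               # double-centered Gram matrix
            w, V = np.linalg.eigh(B)
            idx = np.argsort(w)[::-1][:3]             # three largest eigenvalues
            return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

        # Round-trip check on random 3D "atoms": distances are reproduced.
        rng = np.random.default_rng(1)
        X = rng.normal(size=(6, 3))
        D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
        Xr = eigen_coordinates(D)
        Dr = np.linalg.norm(Xr[:, None] - Xr[None, :], axis=-1)
        print(np.allclose(D, Dr, atol=1e-8))          # True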

  9. Functional neural networks of honesty and dishonesty in children: Evidence from graph theory analysis.

    PubMed

    Ding, Xiao Pan; Wu, Si Jia; Liu, Jiangang; Fu, Genyue; Lee, Kang

    2017-09-21

    The present study examined how different brain regions interact with each other during spontaneous honest vs. dishonest communication. More specifically, we took a complex network approach based on graph theory to analyze neural response data when children were spontaneously engaged in honest or dishonest acts. Fifty-nine right-handed children between 7 and 12 years of age participated in the study. They lied or told the truth of their own volition. We found that lying decreased both the global and local efficiencies of children's functional neural network. This finding, for the first time, suggests that lying disrupts the efficiency of children's cortical network functioning. Further, it suggests that graph-theory-based network analysis is a viable approach to studying the neural development of deception.
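
    The efficiency measures reported here are standard graph-theory quantities; a minimal sketch computes them with networkx on random graphs standing in for the real functional networks (the densities are arbitrary).

        import networkx as nx

        G_truth = nx.erdos_renyi_graph(30, 0.25, seed=1)   # stand-in "honest" network
        G_lie = nx.erdos_renyi_graph(30, 0.15, seed=1)     # sparser "dishonest" network

        for label, G in [("truth", G_truth), ("lie", G_lie)]:
            print(label,
                  round(nx.global_efficiency(G), 3),       # whole-network efficiency
                  round(nx.local_efficiency(G), 3))        # mean neighborhood efficiency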

  10. A method for development of efficient 3D models for neutronic calculations of ASTRA critical facility using experimental information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balanin, A. L.; Boyarinov, V. F.; Glushkov, E. S.

    The application of experimental information on measured axial distributions of fission reaction rates to the development of 3D numerical models of the ASTRA critical facility, taking into account the azimuthal asymmetry of the assembly simulating an HTGR with an annular core, is substantiated. Owing to the presence of the bottom reflector and the absence of the top reflector, the application of 2D models based on experimentally determined buckling is impossible for calculation of critical assemblies of the ASTRA facility; therefore, an alternative approach based on the application of the extrapolated assembly height is proposed. This approach is exemplified by the numerical analysis of experiments on measurement of the efficiency of control and protection system (CPS) rod mockups.

  11. Multi-Layer Approach for the Detection of Selective Forwarding Attacks

    PubMed Central

    Alajmi, Naser; Elleithy, Khaled

    2015-01-01

    Security breaches are a major threat in wireless sensor networks (WSNs). WSNs are increasingly used due to their broad range of important applications in both military and civilian domains. WSNs are prone to several types of security attacks. Sensor nodes have limited capacities and are often deployed in dangerous locations; therefore, they are vulnerable to different types of attacks, including wormhole, sinkhole, and selective forwarding attacks. Security attacks are classified as data traffic and routing attacks. These security attacks could affect the most significant applications of WSNs, namely, military surveillance, traffic monitoring, and healthcare. Therefore, there are different approaches to detecting security attacks on the network layer in WSNs. Reliability, energy efficiency, and scalability are strong constraints on sensor nodes that affect the security of WSNs. Because sensor nodes have limited capabilities in most of these areas, selective forwarding attacks cannot be easily detected in networks. In this paper, we propose an approach to selective forwarding detection (SFD). The approach has three layers: MAC pool IDs, rule-based processing, and anomaly detection. It maintains the safety of data transmission between a source node and base station while detecting selective forwarding attacks. Furthermore, the approach is reliable, energy efficient, and scalable. PMID:26610499

  12. Multi-Layer Approach for the Detection of Selective Forwarding Attacks.

    PubMed

    Alajmi, Naser; Elleithy, Khaled

    2015-11-19

    Security breaches are a major threat in wireless sensor networks (WSNs). WSNs are increasingly used due to their broad range of important applications in both military and civilian domains. WSNs are prone to several types of security attacks. Sensor nodes have limited capacities and are often deployed in dangerous locations; therefore, they are vulnerable to different types of attacks, including wormhole, sinkhole, and selective forwarding attacks. Security attacks are classified as data traffic and routing attacks. These security attacks could affect the most significant applications of WSNs, namely, military surveillance, traffic monitoring, and healthcare. Therefore, there are different approaches to detecting security attacks on the network layer in WSNs. Reliability, energy efficiency, and scalability are strong constraints on sensor nodes that affect the security of WSNs. Because sensor nodes have limited capabilities in most of these areas, selective forwarding attacks cannot be easily detected in networks. In this paper, we propose an approach to selective forwarding detection (SFD). The approach has three layers: MAC pool IDs, rule-based processing, and anomaly detection. It maintains the safety of data transmission between a source node and base station while detecting selective forwarding attacks. Furthermore, the approach is reliable, energy efficient, and scalable.

  13. Signature Verification Based on Handwritten Text Recognition

    NASA Astrophysics Data System (ADS)

    Viriri, Serestina; Tapamo, Jules-R.

    Signatures continue to be an important biometric trait because they remain widely used for authenticating the identity of human beings. This paper presents an efficient text-based directional signature recognition algorithm which verifies signatures, even when they are composed of special unconstrained cursive characters which are superimposed and embellished. This algorithm extends the character-based signature verification technique. The experiments carried out on the GPDS signature database and an additional database created from signatures captured using the ePadInk tablet show that the approach is effective and efficient, with a positive verification rate of 94.95%.

  14. An approach for configuring space photovoltaic tandem arrays based on cell layer performance

    NASA Technical Reports Server (NTRS)

    Flora, C. S.; Dillard, P. A.

    1991-01-01

    Meeting solar array performance goals of 300 W/kg requires use of solar cells with orbital efficiencies greater than 20 percent. Only multijunction cells and cell layers operating in tandem produce this required efficiency. An approach for defining solar array design concepts that use tandem cell layers involves the following: transforming cell layer performance at standard test conditions to on-orbit performance; optimizing circuit configuration with tandem cell layers; evaluating circuit sensitivity to cell current mismatch; developing the array electrical design around the selected circuit; and predicting array orbital performance including seasonal variations.

  15. Demystifying the GMAT: Computer-Based Testing Terms

    ERIC Educational Resources Information Center

    Rudner, Lawrence M.

    2012-01-01

    Computer-based testing can be a powerful means to make all aspects of test administration not only faster and more efficient, but also more accurate and more secure. While the Graduate Management Admission Test (GMAT) exam is a computer adaptive test, there are other approaches. This installment presents a primer of computer-based testing terms.

  16. SABRE: ligand/structure-based virtual screening approach using consensus molecular-shape pattern recognition.

    PubMed

    Wei, Ning-Ning; Hamza, Adel

    2014-01-27

    We present an efficient and rational ligand/structure shape-based virtual screening approach combining our previous ligand shape-based similarity method SABRE (shape-approach-based routines enhanced) and the 3D shape of the receptor binding site. Our approach exploits the pharmacological preferences of a number of known active ligands to take advantage of their structural diversities and chemical similarities, using a linear combination of weighted molecular shape density. Furthermore, the algorithm generates a consensus molecular-shape pattern recognition that is used to filter and place the candidate structure into the binding pocket. The descriptor pool used to construct the consensus molecular-shape pattern consists of four-dimensional (4D) fingerprints generated from the distribution of conformer states available to a molecule and the 3D shapes of a set of active ligands computed using the SABRE software. The virtual screening efficiency of SABRE was validated using the Database of Useful Decoys (DUD) and the filtered version (WOMBAT) of 10 DUD targets. The ligand/structure shape-based similarity SABRE algorithm outperforms several other widely used virtual screening methods which use data fusion of multiple screening tools (2D and 3D fingerprints), and demonstrates a superior early retrieval rate of active compounds (EF(0.1%) = 69.0% and EF(1%) = 98.7%) from a large ligand database (∼95,000 structures). Therefore, our similarity approach can be of particular use for identifying active compounds that are similar to reference molecules and predicting activity against other targets (chemogenomics). An academic license of the SABRE program is available on request.
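
    For context, the early-retrieval numbers quoted above are enrichment factors; this sketch shows the standard EF computation on a synthetic ranked screening list of comparable size (the score model is made up).

        import numpy as np

        def enrichment_factor(ranked_is_active, fraction):
            """EF(x) = (hit rate in the top x fraction) / (overall hit rate)."""
            n = len(ranked_is_active)
            top = int(max(1, round(n * fraction)))
            hits_top = np.sum(ranked_is_active[:top])
            hits_all = np.sum(ranked_is_active)
            return (hits_top / top) / (hits_all / n)

        rng = np.random.default_rng(0)
        labels = rng.random(95_000) < 0.005            # ~0.5% actives
        scores = labels * rng.normal(2, 1, labels.size) + rng.normal(0, 1, labels.size)
        order = np.argsort(-scores)                    # best-scoring first
        print(enrichment_factor(labels[order], 0.001), # EF(0.1%)
              enrichment_factor(labels[order], 0.01))  # EF(1%)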

  17. Systems metabolic engineering strategies for the production of amino acids.

    PubMed

    Ma, Qian; Zhang, Quanwei; Xu, Qingyang; Zhang, Chenglin; Li, Yanjun; Fan, Xiaoguang; Xie, Xixian; Chen, Ning

    2017-06-01

    Systems metabolic engineering is a multidisciplinary area that integrates systems biology, synthetic biology and evolutionary engineering. It is an efficient approach for strain improvement and process optimization, and has been successfully applied in the microbial production of various chemicals including amino acids. In this review, systems metabolic engineering strategies including pathway-focused approaches, systems biology-based approaches, evolutionary approaches and their applications in two major amino acid producing microorganisms: Corynebacterium glutamicum and Escherichia coli, are summarized.

  18. The future of monitoring in clinical research - a holistic approach: linking risk-based monitoring with quality management principles.

    PubMed

    Ansmann, Eva B; Hecht, Arthur; Henn, Doris K; Leptien, Sabine; Stelzer, Hans Günther

    2013-01-01

    For several years, risk-based monitoring has been the new "magic bullet" for improvement in clinical research. Many authors in clinical research, from industry and academia to regulatory authorities, are keen to demonstrate better monitoring efficiency by reducing monitoring visits, on-site monitoring time, monitoring costs and so on, always arguing from risk-based monitoring principles. Mostly forgotten is the fact that the use of risk-based monitoring is only adequate if all mandatory prerequisites at the site, for the monitor, and for the sponsor are fulfilled. Based on the relevant chapter in ICH GCP (International Conference on Harmonisation of technical requirements for registration of pharmaceuticals for human use - Good Clinical Practice), this publication takes a holistic approach by identifying and describing the requirements for future monitoring and the use of risk-based monitoring. As the authors are operational managers as well as QA (Quality Assurance) experts, both aspects are represented to come up with efficient and qualitative ways of future monitoring according to ICH GCP.

  19. Competent Systems: Effective, Efficient, Deliverable.

    ERIC Educational Resources Information Center

    Abramson, Bruce

    Recent developments in artificial intelligence and decision analysis suggest reassessing the approaches commonly taken to the design of knowledge-based systems. Competent systems are based on models known as influence diagrams, which graphically capture a domain's basic objects and their interrelationships. Among the benefits offered by influence…

  20. Interoperability in Personalized Adaptive Learning

    ERIC Educational Resources Information Center

    Aroyo, Lora; Dolog, Peter; Houben, Geert-Jan; Kravcik, Milos; Naeve, Ambjorn; Nilsson, Mikael; Wild, Fridolin

    2006-01-01

    Personalized adaptive learning requires semantic-based and context-aware systems to manage the Web knowledge efficiently as well as to achieve semantic interoperability between heterogeneous information resources and services. The technological and conceptual differences can be bridged either by means of standards or via approaches based on the…

  1. [The organizational benefits of the Kaizen approach at the Centre Hospitalier Universitaire de Sherbrooke (CHUS)].

    PubMed

    Comtois, Jonathan; Paris, Yvon; Poder, Thomas G; Chaussé, Sylvain

    2013-01-01

    The purpose of this study was to calculate the cost savings associated with using the kaizen approach in our hospital. Originally developed in Japan, the kaizen approach, based on the idea of continuous improvement, has considerable support in North America, including in the Quebec health care system. This study assessed the first fifteen kaizen projects at the CHUS. Based on an economic evaluation, we showed that using the kaizen approach can result in substantial cost savings. The success of the kaizen approach requires compliance with specific prerequisites. The future of the approach will depend on our ability to comply with these prerequisites. More specifically, such compliance will determine whether the approach is merely a passing fad or a strategy for improving our management style to promote greater efficiency.

  2. Global financial crisis and weak-form efficiency of Islamic sectoral stock markets: An MF-DFA analysis

    NASA Astrophysics Data System (ADS)

    Mensi, Walid; Tiwari, Aviral Kumar; Yoon, Seong-Min

    2017-04-01

    This paper estimates the weak-form efficiency of Islamic stock markets using 10 sectoral stock indices (basic materials, consumer services, consumer goods, energy, financials, health care, industrials, technology, telecommunication, and utilities). The results based on the multifractal detrended fluctuation analysis (MF-DFA) approach show time-varying efficiency for the sectoral stock markets. Moreover, we find that they tend to show high efficiency in the long term but moderate efficiency in the short term, and that these markets become less efficient after the onset of the global financial crisis. These results have several significant implications in terms of asset allocation for investors dealing with Islamic markets.
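
    A minimal sketch of ordinary detrended fluctuation analysis, the q = 2 special case of MF-DFA: a scaling exponent near 0.5 is consistent with weak-form efficiency, while persistent deviation indicates inefficiency. This simplified monofractal version omits the full multifractal spectrum used in the paper, and the input series is synthetic.

        import numpy as np

        def dfa_exponent(x, scales=(8, 16, 32, 64, 128)):
            y = np.cumsum(x - np.mean(x))                # profile of the series
            F = []
            for s in scales:
                n = len(y) // s
                msq = []
                for i in range(n):
                    seg = y[i * s:(i + 1) * s]
                    t = np.arange(s)
                    trend = np.polyval(np.polyfit(t, seg, 1), t)   # local detrend
                    msq.append(np.mean((seg - trend) ** 2))
                F.append(np.sqrt(np.mean(msq)))          # fluctuation at scale s
            slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
            return slope                                 # scaling exponent

        rng = np.random.default_rng(0)
        print(round(dfa_exponent(rng.normal(size=4096)), 2))  # ~0.5 for white noise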

  3. Multicriteria approaches for a private equity fund

    NASA Astrophysics Data System (ADS)

    Tammer, Christiane; Tannert, Johannes

    2012-09-01

    We develop a new model for a Private Equity Fund based on stochastic differential equations. In order to find efficient strategies for the fund manager, we formulate a multicriteria optimization problem for a Private Equity Fund. Using the ε-constraint method we solve this multicriteria optimization problem. Furthermore, a genetic algorithm is applied in order to obtain an approximation of the efficient frontier.

  4. Evolution of Automotive Chopper Circuits Towards Ultra High Efficiency and Power Density

    NASA Astrophysics Data System (ADS)

    Pavlovsky, Martin; Tsuruta, Yukinori; Kawamura, Atsuo

    The automotive industry is considered to be one of the main contributors to environmental pollution and global warming. Therefore, many car manufacturers are planning to introduce hybrid electric vehicles (HEV), fuel cell electric vehicles (FCEV) and pure electric vehicles (EV) in the near future to make cars more environmentally friendly. These new vehicles require highly efficient and small power converters. In recent years, considerable improvements have been made in designing such converters. In this paper, an approach based on the so-called Snubber Assisted Zero Voltage and Zero Current Switching (SAZZ) topology is presented. This topology has evolved to be one of the leaders in the field of highly efficient converters with high power densities. The evolution and main features of this topology are briefly discussed. The capabilities of the topology are demonstrated on two case-study prototypes based on different design approaches. The prototypes are designed to be fully bi-directional for a peak power output of 30 kW. Both designs reached efficiencies close to 99% over a wide load range, and power densities over 40 kW/litre are attainable at the same time. The combination of MOSFET technology and the SAZZ topology is shown to be very beneficial for converters designed for EV applications.

  5. Comparing multiple imputation methods for systematically missing subject-level data.

    PubMed

    Kline, David; Andridge, Rebecca; Kaizar, Eloise

    2017-06-01

    When conducting research synthesis, the studies to be combined often do not measure the same set of variables, which creates missing data. When the studies to combine are longitudinal, missing data can occur on the observation-level (time-varying) or the subject-level (non-time-varying). Traditionally, the focus of missing data methods for longitudinal data has been on missing observation-level variables. In this paper, we focus on missing subject-level variables and compare two multiple imputation approaches: a joint modeling approach and a sequential conditional modeling approach. We find the joint modeling approach to be preferable to the sequential conditional approach, except when the covariance structure of the repeated outcome for each individual has homogenous variance and exchangeable correlation. Specifically, the regression coefficient estimates from an analysis incorporating imputed values based on the sequential conditional method are attenuated and less efficient than those from the joint method. Remarkably, the estimates from the sequential conditional method are often less efficient than a complete case analysis, which, in the context of research synthesis, implies that we lose efficiency by combining studies. Copyright © 2015 John Wiley & Sons, Ltd.

  6. Advanced Unstructured Grid Generation for Complex Aerodynamic Applications

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2008-01-01

    A new approach for distribution of grid points on the surface and in the volume has been developed and implemented in the NASA unstructured grid generation code VGRID. In addition to the point and line sources of prior work, the new approach utilizes surface and volume sources for automatic curvature-based grid sizing and convenient point distribution in the volume. A new exponential growth function produces smoother and more efficient grids and provides superior control over distribution of grid points in the field. All types of sources support anisotropic grid stretching which not only improves the grid economy but also provides more accurate solutions for certain aerodynamic applications. The new approach does not require a three-dimensional background grid as in the previous methods. Instead, it makes use of an efficient bounding-box auxiliary medium for storing grid parameters defined by surface sources. The new approach is less memory-intensive and more efficient computationally. The grids generated with the new method either eliminate the need for adaptive grid refinement for a certain class of problems or provide high quality initial grids that would enhance the performance of many adaptation methods.
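
    As an illustration of source-driven grid grading (a sketch of the general idea, not VGRID's actual growth function), the snippet below grows point spacing exponentially with distance from a point source along a line, capped at a far-field value.

        import numpy as np

        def graded_points(h0=0.01, growth=2.0, h_max=0.5, x_end=10.0):
            """Place points with spacing h(x) = h0 * growth**x, capped at h_max."""
            pts, x = [0.0], 0.0
            while x < x_end:
                h = min(h0 * growth ** x, h_max)   # exponential growth with distance
                x += h
                pts.append(x)
            return np.array(pts)

        pts = graded_points()
        print(len(pts), np.round(np.diff(pts)[:5], 4))  # fine near source, coarse far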

  7. Advanced Unstructured Grid Generation for Complex Aerodynamic Applications

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar

    2010-01-01

    A new approach for distribution of grid points on the surface and in the volume has been developed. In addition to the point and line sources of prior work, the new approach utilizes surface and volume sources for automatic curvature-based grid sizing and convenient point distribution in the volume. A new exponential growth function produces smoother and more efficient grids and provides superior control over distribution of grid points in the field. All types of sources support anisotropic grid stretching which not only improves the grid economy but also provides more accurate solutions for certain aerodynamic applications. The new approach does not require a three-dimensional background grid as in the previous methods. Instead, it makes use of an efficient bounding-box auxiliary medium for storing grid parameters defined by surface sources. The new approach is less memory-intensive and more efficient computationally. The grids generated with the new method either eliminate the need for adaptive grid refinement for a certain class of problems or provide high quality initial grids that would enhance the performance of many adaptation methods.

  8. Predicting ESI/MS Signal Change for Anions in Different Solvents.

    PubMed

    Kruve, Anneli; Kaupmees, Karl

    2017-05-02

    LC/ESI/MS is a technique widely used for qualitative and quantitative analysis in various fields. However, quantification is currently possible only for compounds for which standard substances are available, as the ionization efficiencies of different compounds in the ESI source differ by orders of magnitude. In this paper we present an approach for quantitative LC/ESI/MS analysis without standard substances. This approach relies on accurately predicting ionization efficiencies in the ESI source with a model based on physicochemical parameters of the analytes. Furthermore, the model has been made transferable between different mobile phases and instrument setups by using a suitable set of calibration compounds. This approach has been validated both in flow injection and chromatographic mode with gradient elution.

  9. Analysis of composite plates by using mechanics of structure genome and comparison with ANSYS

    NASA Astrophysics Data System (ADS)

    Zhao, Banghua

    Motivated by the recently introduced concept of the Structure Genome (SG), defined as the smallest mathematical building block of a structure, a new approach named Mechanics of Structure Genome (MSG) to model and analyze composite plates is introduced. MSG is implemented in a general-purpose code named SwiftComp(TM), which provides the constitutive models needed in structural analysis by homogenization and pointwise local fields by dehomogenization. To improve the user friendliness of SwiftComp(TM), a simple graphic user interface (GUI) based on the ANSYS Mechanical APDL platform, called ANSYS-SwiftComp GUI, is developed, which provides a convenient way to create common or arbitrary customized SG models in ANSYS and invoke SwiftComp(TM) to perform homogenization and dehomogenization. The global structural analysis can also be handled in ANSYS after homogenization, which can predict the global behavior and provide the inputs needed for dehomogenization. To demonstrate the accuracy and efficiency of the MSG approach, several numerical cases are studied and compared using both MSG and ANSYS. In the ANSYS approach, 3D solid element models (ANSYS 3D approach) are used as reference models and the 2D shell element models created by ANSYS Composite PrepPost (ACP approach) are compared with the MSG approach. The results of the MSG approach agree well with the ANSYS 3D approach while being as efficient as the ACP approach. Therefore, the MSG approach provides an efficient and accurate new way to model composite plates.

  10. Review of the "AS-BUILT BIM" Approaches

    NASA Astrophysics Data System (ADS)

    Hichri, N.; Stefani, C.; De Luca, L.; Veron, P.

    2013-02-01

    Today, we need 3D models of heritage buildings in order to handle projects of restoration, documentation and maintenance more efficiently. In this context, developing an effective approach, based on an initial building survey phase, is a necessary step toward building a semantically enriched digital model. For this purpose, Building Information Modeling is an efficient tool for storing and exchanging knowledge about buildings. In order to create such a model, there are three fundamental steps: acquisition, segmentation and modeling. For these reasons, it is essential to understand and analyze the entire chain that leads to a well-structured and enriched 3D digital model. This paper proposes a survey and an analysis of the existing approaches on these topics and defines a new approach to semantic structuring that takes into account the complexity of this chain.

  11. A MEMS approach to determine the biochemical oxygen demand (BOD) of wastewaters

    NASA Astrophysics Data System (ADS)

    Recoules, L.; Migaou, A.; Dollat, X.; Thouand, G.; Gue, A. M.; Boukabache, A.

    2017-07-01

    A MEMS approach to obtaining an efficient tool for the evaluation of the biochemical oxygen demand (BOD) of wastewaters is introduced. Its operating principle is based on the measurement of oxygen concentration in water samples containing organic pollutants and specific bacteria. The microsystem has been designed to perform multiple and parallel measurements in a poly-wells microfluidic device. The monitoring of the bacterial activity is ensured by optical sensors incorporated in each well of the fluidic network. By using an optode sensor, it is demonstrated that this approach can efficiently measure organic pollutants, as tested on different Luria Bertani buffer dilutions. These results also show that it is possible to reduce the duration of measurements from the 5 days (BOD5) of the standard approach to a few hours, typically 3-5 h.

  12. Efficient Allocation of Resources for Defense of Spatially Distributed Networks Using Agent-Based Simulation.

    PubMed

    Kroshl, William M; Sarkani, Shahram; Mazzuchi, Thomas A

    2015-09-01

    This article presents ongoing research that focuses on efficient allocation of defense resources to minimize the damage inflicted on a spatially distributed physical network such as a pipeline, water system, or power distribution system from an attack by an active adversary, recognizing the fundamental difference between preparing for natural disasters such as hurricanes, earthquakes, or even accidental systems failures and the problem of allocating resources to defend against an opponent who is aware of, and anticipating, the defender's efforts to mitigate the threat. Our approach is to utilize a combination of integer programming and agent-based modeling to allocate the defensive resources. We conceptualize the problem as a Stackelberg "leader follower" game where the defender first places his assets to defend key areas of the network, and the attacker then seeks to inflict the maximum damage possible within the constraints of resources and network structure. The criticality of arcs in the network is estimated by a deterministic network interdiction formulation, which then informs an evolutionary agent-based simulation. The evolutionary agent-based simulation is used to determine the allocation of resources for attackers and defenders that results in evolutionary stable strategies, where actions by either side alone cannot increase its share of victories. We demonstrate these techniques on an example network, comparing the evolutionary agent-based results to a more traditional, probabilistic risk analysis (PRA) approach. Our results show that the agent-based approach results in a greater percentage of defender victories than does the PRA-based approach. © 2015 Society for Risk Analysis.

  13. Spectral analysis based on fast Fourier transformation (FFT) of surveillance data: the case of scarlet fever in China.

    PubMed

    Zhang, T; Yang, M; Xiao, X; Feng, Z; Li, C; Zhou, Z; Ren, Q; Li, X

    2014-03-01

    Many infectious diseases exhibit repetitive or regular behaviour over time. Time-domain approaches, such as the seasonal autoregressive integrated moving average model, are often utilized to examine the cyclical behaviour of such diseases. The limitations for time-domain approaches include over-differencing and over-fitting; furthermore, the use of these approaches is inappropriate when the assumption of linearity may not hold. In this study, we implemented a simple and efficient procedure based on the fast Fourier transformation (FFT) approach to evaluate the epidemic dynamic of scarlet fever incidence (2004-2010) in China. This method demonstrated good internal and external validities and overcame some shortcomings of time-domain approaches. The procedure also elucidated the cycling behaviour in terms of environmental factors. We concluded that, under appropriate circumstances of data structure, spectral analysis based on the FFT approach may be applicable for the study of oscillating diseases.
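
    A minimal sketch of the core FFT step: locating the dominant cycle of a weekly incidence series from its periodogram. The series here is synthetic, standing in for the scarlet fever surveillance data.

        import numpy as np

        rng = np.random.default_rng(0)
        weeks = np.arange(364)                          # 7 years of weekly counts
        series = 100 + 30 * np.sin(2 * np.pi * weeks / 52) + rng.normal(0, 5, 364)

        spec = np.abs(np.fft.rfft(series - series.mean())) ** 2   # periodogram
        freqs = np.fft.rfftfreq(len(series), d=1.0)               # cycles per week
        peak = freqs[np.argmax(spec)]
        print(round(1 / peak, 1), "weeks per cycle")              # ~52.0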

  14. Hercules Single-Stage Reusable Vehicle (HSRV) Operating Base

    NASA Technical Reports Server (NTRS)

    Moon, Michael J.; McCleskey, Carey M.

    2017-01-01

    Conceptual design for the layout of lunar-planetary surface support systems remains an important area needing further master planning. This paper explores a structured approach to organize the layout of a Mars-based site equipped for routinely flying a human-scale reusable taxi system. The proposed Hercules Transportation System requires a surface support capability to sustain its routine, affordable, and dependable operation. The approach organizes a conceptual Hercules operating base through functional station sets. The station set approach will allow follow-on work to trade design approaches and consider technologies for more efficient flow of material, energy, and information at future Mars bases and settlements. The station set requirements at a Mars site point to specific capabilities needed. By drawing from specific Hercules design characteristics, the technology requirements for surface-based systems will come into greater focus. This paper begins a comprehensive process for documenting functional needs, architectural design methods, and analysis techniques necessary for follow-on concept studies.

  15. A partitioned model order reduction approach to rationalise computational expenses in nonlinear fracture mechanics

    PubMed Central

    Kerfriden, P.; Goury, O.; Rabczuk, T.; Bordas, S.P.A.

    2013-01-01

    We propose in this paper a reduced order modelling technique based on domain partitioning for parametric problems of fracture. We show that coupling domain decomposition and projection-based model order reduction makes it possible to focus the numerical effort where it is most needed: around the zones where damage propagates. No a priori knowledge of the damage pattern is required, the extraction of the corresponding spatial regions being based solely on algebra. The efficiency of the proposed approach is demonstrated numerically with an example relevant to engineering fracture. PMID:23750055
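
    A hedged sketch of the generic ingredient behind projection-based model order reduction: building a reduced basis from solution snapshots with the SVD (proper orthogonal decomposition). The snapshot matrix is synthetic, and the paper's domain partitioning is not shown.

        import numpy as np

        def pod_basis(snapshots, energy=0.9999):
            """snapshots: (n_dof, n_snap) matrix; returns (n_dof, r) reduced basis."""
            U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
            cum = np.cumsum(s ** 2) / np.sum(s ** 2)    # captured "energy" fraction
            r = int(np.searchsorted(cum, energy)) + 1
            return U[:, :r]

        rng = np.random.default_rng(0)
        snaps = rng.normal(size=(1000, 5)) @ rng.normal(size=(5, 40))  # rank-5 data
        V = pod_basis(snaps)
        print(V.shape)                         # (1000, 5): low rank recovered
        u = snaps[:, 0]
        u_red = V @ (V.T @ u)                  # projection onto the reduced space
        print(np.linalg.norm(u - u_red) < 1e-8)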

  16. A Unified Approach to Motion Control of Mobile Robots

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1994-01-01

    This paper presents a simple on-line approach for motion control of mobile robots made up of a manipulator arm mounted on a mobile base. The proposed approach is equally applicable to nonholonomic mobile robots, such as rover-mounted manipulators and to holonomic mobile robots such as tracked robots or compound manipulators. The computational efficiency of the proposed control scheme makes it particularly suitable for real-time implementation.

  17. Nontarget approach for environmental monitoring by GC × GC-HRTOFMS in the Tokyo Bay basin.

    PubMed

    Zushi, Yasuyuki; Hashimoto, Shunji; Tanabe, Kiyoshi

    2016-08-01

    In this study, we developed an approach for sequential nontarget and target screening for the rapid and efficient analysis of multiple samples in environmental monitoring, using a comprehensive two-dimensional gas chromatograph coupled to a high resolution time-of-flight mass spectrometer (GC × GC-HRTOFMS). A key feature of the approach was the construction of an accurate mass spectral database learned from the sample via nontarget screening. To enhance the detection power in the nontarget screening, a global spectral deconvolution procedure based on non-negative matrix factorization was applied. The approach was applied to the monitoring of rivers in the Tokyo Bay basin. The majority of the compounds detected by the nontarget screening were alkyl chain-based compounds (55%). In the quantitative target screening based on the output from the nontarget screening, particularly high levels of organophosphorus flame retardants (median concentrations of 31, 116 and 141 ng l⁻¹ for TDCPP, TCIPP and TBEP, respectively) were observed among the target compounds. Flame retardants used for household furniture and building materials were detected in river basins dominated by buildings and arterial traffic. The developed GC × GC-HRTOFMS approach was efficient and effective for environmental monitoring and provided valuable new information on various aspects of monitoring in the context of environmental management. Copyright © 2016 Elsevier Ltd. All rights reserved.
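
    A minimal sketch of NMF-style spectral deconvolution, the idea behind the paper's global deconvolution step: separating two co-eluting components from a few scans. The spectra are synthetic, and scikit-learn's plain NMF stands in for the authors' tailored procedure.

        import numpy as np
        from sklearn.decomposition import NMF

        rng = np.random.default_rng(0)
        pure = rng.random((2, 200))                 # two hypothetical pure spectra
        elution = np.array([[1.0, 0.2], [0.7, 0.6], [0.2, 1.0]])  # 3 scans x 2 comps
        scans = elution @ pure + 0.01 * rng.random((3, 200))      # mixed signal

        model = NMF(n_components=2, init="nndsvda", max_iter=1000, random_state=0)
        W = model.fit_transform(scans)              # recovered elution profiles
        H = model.components_                       # deconvolved spectra
        print(W.shape, H.shape)                     # (3, 2) (2, 200)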

  18. Distributed Damage Estimation for Prognostics based on Structural Model Decomposition

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew; Bregon, Anibal; Roychoudhury, Indranil

    2011-01-01

    Model-based prognostics approaches capture system knowledge in the form of physics-based models of components, and how they fail. These methods consist of a damage estimation phase, in which the health state of a component is estimated, and a prediction phase, in which the health state is projected forward in time to determine end of life. However, the damage estimation problem is often multi-dimensional and computationally intensive. We propose a model decomposition approach adapted from the diagnosis community, called possible conflicts, in order to both improve the computational efficiency of damage estimation, and formulate a damage estimation approach that is inherently distributed. Local state estimates are combined into a global state estimate from which prediction is performed. Using a centrifugal pump as a case study, we perform a number of simulation-based experiments to demonstrate the approach.

  19. Efficient Blockwise Permutation Tests Preserving Exchangeability

    PubMed Central

    Zhou, Chunxiao; Zwilling, Chris E.; Calhoun, Vince D.; Wang, Michelle Y.

    2014-01-01

    In this paper, we present a new blockwise permutation test approach based on the moments of the test statistic. The method is of importance to neuroimaging studies. In order to preserve the exchangeability condition required in permutation tests, we divide the entire set of data into certain exchangeability blocks. In addition, computationally efficient moments-based permutation tests are performed by approximating the permutation distribution of the test statistic with the Pearson distribution series. This involves the calculation of the first four moments of the permutation distribution within each block and then over the entire set of data. The accuracy and efficiency of the proposed method are demonstrated through a simulated experiment on magnetic resonance imaging (MRI) brain data, specifically a multi-site voxel-based morphometry analysis from structural MRI (sMRI). PMID:25289113
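
    A minimal sketch of the exchangeability-preserving idea: permute labels only within blocks (e.g. acquisition sites). This uses a plain empirical permutation test rather than the paper's moments/Pearson-series approximation, and the data are synthetic.

        import numpy as np

        def blockwise_perm_test(x, labels, blocks, n_perm=5000, seed=0):
            """Two-sample mean difference, permuting labels within blocks only."""
            rng = np.random.default_rng(seed)
            stat = x[labels == 1].mean() - x[labels == 0].mean()
            hits = 0
            for _ in range(n_perm):
                perm = labels.copy()
                for b in np.unique(blocks):
                    idx = np.where(blocks == b)[0]
                    perm[idx] = rng.permutation(perm[idx])  # shuffle inside block
                s = x[perm == 1].mean() - x[perm == 0].mean()
                hits += abs(s) >= abs(stat)
            return hits / n_perm                            # permutation p-value

        rng = np.random.default_rng(1)
        blocks = np.repeat([0, 1, 2], 20)          # three sites as blocks
        labels = np.tile([0, 1], 30)               # balanced groups in each site
        x = rng.normal(size=60) + 0.8 * labels     # data with a true group effect
        print(blockwise_perm_test(x, labels, blocks))   # small p-value expected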

  20. Combining module based on coherent polarization beam combining.

    PubMed

    Yang, Yan; Geng, Chao; Li, Feng; Li, Xinyang

    2017-03-01

    A multiaperture receiver with a phased array is an effective approach to overcoming the effect of random optical disturbance in coherent free-space laser communications, in which one of the key technologies is how to efficiently combine the multiple laser beams received by the phased array antenna. A combining module based on coherent polarization beam combining (CPBC), which can combine multiple laser beams into one beam with high combining efficiency and output a linearly polarized beam, is proposed in this paper. The principle of the combining module is introduced, the coherent polarization combining efficiency of CPBC is analyzed, and the performance of the combining module is evaluated. Moreover, the feasibility and the expansibility of the proposed combining module are validated in experiments of CPBC based on active phase-locking.

  1. Liborg: a lidar-based robot for efficient 3D mapping

    NASA Astrophysics Data System (ADS)

    Vlaminck, Michiel; Luong, Hiep; Philips, Wilfried

    2017-09-01

    In this work we present Liborg, a spatial mapping and localization system that is able to acquire 3D models on the fly using data originating from lidar sensors. The novelty of this work is in the highly efficient way we deal with the tremendous amount of data to guarantee fast execution times while preserving sufficiently high accuracy. The proposed solution is based on an octree-based multi-resolution technique. The paper discusses and evaluates the main benefits of our approach, including its efficiency in building and updating the map and its compactness in compressing it. In addition, the paper presents a working prototype consisting of a robot equipped with a Velodyne Lidar Puck (VLP-16) and controlled by a Raspberry Pi serving as an independent acquisition platform.

  2. Engineering high charge transfer n-doping of graphene electrodes and its application to organic electronics.

    PubMed

    Sanders, Simon; Cabrero-Vilatela, Andrea; Kidambi, Piran R; Alexander-Webber, Jack A; Weijtens, Christ; Braeuninger-Weimer, Philipp; Aria, Adrianus I; Qasim, Malik M; Wilkinson, Timothy D; Robertson, John; Hofmann, Stephan; Meyer, Jens

    2015-08-14

    Using thermally evaporated cesium carbonate (Cs2CO3) in an organic matrix, we present a novel strategy for efficient n-doping of monolayer graphene and a ∼90% reduction in its sheet resistance to ∼250 Ω sq⁻¹. Photoemission spectroscopy confirms the presence of a large interface dipole of ∼0.9 eV between graphene and the Cs2CO3/organic matrix. This leads to a strong charge-transfer-based doping of graphene with a Fermi level shift of ∼1.0 eV. Using this approach we demonstrate efficient graphene-based inverted organic light emitting diodes, compatible with standard industrial manufacturing processes, on glass and flexible substrates with efficiencies comparable to those of state-of-the-art ITO-based devices.

  3. Hybrid Mo-CT nanowires as highly efficient catalysts for direct dehydrogenation of isobutane.

    PubMed

    Mu, Jiali; Shi, Junjun; France, Liam John; Wu, Yongshan; Zeng, Qiang; Liu, Baoan; Jiang, Lilong; Long, Jinxing; Li, Xuehui

    2018-06-20

    Direct dehydrogenation of isobutane to isobutene has drawn extensive attention for synthesizing various chemicals. Mo-based catalysts hold promise as an alternative to toxic CrOx- and scarce Pt-based catalysts. However, the low activity and rapid deactivation of Mo-based catalysts greatly hinder their practical application. Herein, we demonstrate a feasible approach towards the development of efficient, non-noble-metal dehydrogenation catalysts based on Mo-CT hybrid nanowires calcined at different temperatures. In particular, the optimal Mo-C700 catalyst exhibits an isobutane consumption rate of 3.9 mmol g⁻¹ h⁻¹ and an isobutene selectivity of 73% with a production rate of 2.8 mmol g⁻¹ h⁻¹. The catalyst maintained 90% of its initial activity after 50 h of reaction. Extensive characterization reveals that this prominent performance is well correlated with the adsorption abilities of isobutane and isobutene, and with the formation of η-MoC species. By contrast, the generation of a β-Mo2C crystalline phase during long-term reaction causes a minor decline in activity. Compared to MoO2 and β-Mo2C, η-MoC more likely plays a role in suppressing the cracking reaction. This work demonstrates a feasible approach towards the development of efficient, non-noble-metal dehydrogenation catalysts.

  4. Online energy management strategy of fuel cell hybrid electric vehicles based on data fusion approach

    NASA Astrophysics Data System (ADS)

    Zhou, Daming; Al-Durra, Ahmed; Gao, Fei; Ravey, Alexandre; Matraji, Imad; Godoy Simões, Marcelo

    2017-10-01

    Energy management strategy plays a key role for Fuel Cell Hybrid Electric Vehicles (FCHEVs), it directly affects the efficiency and performance of energy storages in FCHEVs. For example, by using a suitable energy distribution controller, the fuel cell system can be maintained in a high efficiency region and thus saving hydrogen consumption. In this paper, an energy management strategy for online driving cycles is proposed based on a combination of the parameters from three offline optimized fuzzy logic controllers using data fusion approach. The fuzzy logic controllers are respectively optimized for three typical driving scenarios: highway, suburban and city in offline. To classify patterns of online driving cycles, a Probabilistic Support Vector Machine (PSVM) is used to provide probabilistic classification results. Based on the classification results of the online driving cycle, the parameters of each offline optimized fuzzy logic controllers are then fused using Dempster-Shafer (DS) evidence theory, in order to calculate the final parameters for the online fuzzy logic controller. Three experimental validations using Hardware-In-the-Loop (HIL) platform with different-sized FCHEVs have been performed. Experimental comparison results show that, the proposed PSVM-DS based online controller can achieve a relatively stable operation and a higher efficiency of fuel cell system in real driving cycles.
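
    A minimal sketch of the Dempster-Shafer fusion step: combining two mass functions over the three driving-scenario hypotheses with Dempster's rule. The mass values standing in for PSVM outputs are made up.

        def dempster_combine(m1, m2):
            """Dempster's rule for mass functions over singleton hypotheses."""
            keys = set(m1) | set(m2)
            conflict = sum(m1.get(a, 0) * m2.get(b, 0)
                           for a in m1 for b in m2 if a != b)
            return {k: m1.get(k, 0) * m2.get(k, 0) / (1 - conflict) for k in keys}

        m_psvm = {"highway": 0.6, "suburban": 0.3, "city": 0.1}  # hypothetical masses
        m_prev = {"highway": 0.5, "suburban": 0.4, "city": 0.1}
        fused = dempster_combine(m_psvm, m_prev)
        print({k: round(v, 3) for k, v in fused.items()})
        # the fused masses then weight the three offline fuzzy controllers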

  5. Component Cell-Based Restriction of Spectral Conditions and the Impact on CPV Module Power Rating

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muller, Matthew T; Steiner, Marc; Siefer, Gerald

    One approach to consider the prevailing spectral conditions when performing CPV module power ratings according to the standard IEC 62670-3 is based on spectral matching ratios (SMRs) determined by means of component cell sensors. In this work, an uncertainty analysis of the SMR approach is performed based on a dataset of spectral irradiances created with SMARTS2. Using these illumination spectra, the respective efficiencies of multijunction solar cells with different cell architectures are calculated. These efficiencies were used to analyze the influence of different component cell sensors and SMR filtering methods. The 3 main findings of this work are as follows. First, component cells based on the lattice-matched triple-junction (LM3J) cell are suitable for restricting spectral conditions and are qualified for the standardized power rating of CPV modules, even if the CPV module is using multijunction cells other than LM3J. Second, filtering all 3 SMRs to within +/-3.0% of unity results, in the worst case scenario, in an underestimation of -1.7% and an overestimation of +2.4% compared to the AM1.5d efficiency. Third, there is no benefit in matching the component cells to the module cell with respect to the measurement uncertainty.
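
    For reference, a sketch of the spectral matching ratio itself, computed from component-cell short-circuit currents relative to their AM1.5d reference values, with the +/-3% acceptance filter from the study; the current values here are hypothetical.

        def smr(isc_top, isc_mid, isc_top_ref, isc_mid_ref):
            """SMR(top/mid) = (Isc_top / Isc_mid) / (Isc_top_ref / Isc_mid_ref)."""
            return (isc_top / isc_mid) / (isc_top_ref / isc_mid_ref)

        value = smr(isc_top=0.142, isc_mid=0.139,       # measured currents (A)
                    isc_top_ref=0.140, isc_mid_ref=0.140)  # AM1.5d reference
        accept = abs(value - 1.0) <= 0.03               # the +/-3% filter
        print(round(value, 4), accept)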

  6. Designing simulator-based training: an approach integrating cognitive task analysis and four-component instructional design.

    PubMed

    Tjiam, Irene M; Schout, Barbara M A; Hendrikx, Ad J M; Scherpbier, Albert J J M; Witjes, J Alfred; van Merriënboer, Jeroen J G

    2012-01-01

    Most studies of simulator-based surgical skills training have focused on the acquisition of psychomotor skills, but surgical procedures are complex tasks requiring both psychomotor and cognitive skills. As skills training is modelled on expert performance consisting partly of unconscious automatic processes that experts are not always able to explicate, simulator developers should collaborate with educational experts and physicians in developing efficient and effective training programmes. This article presents an approach to designing simulator-based skill training comprising cognitive task analysis integrated with instructional design according to the four-component/instructional design model. This theory-driven approach is illustrated by a description of how it was used in the development of simulator-based training for the nephrostomy procedure.

  7. A tag-based approach for high-throughput analysis of CCWGG methylation.

    PubMed

    Denisova, Oksana V; Chernov, Andrei V; Koledachkina, Tatyana Y; Matvienko, Nicholas I

    2007-10-15

    Non-CpG methylation occurring in the context of CNG sequences is found in plants at a large number of genomic loci. However, there is still little information available about non-CpG methylation in mammals. Efficient methods that allow detection of sparsely localized methylated sites in small quantities of DNA are required to elucidate the biological role of non-CpG methylation in both plants and animals. In this study, we tested a new whole-genome approach to identify sites of CCWGG methylation (W is A or T), a particular case of CNG methylation, in genomic DNA. This technique is based on digestion of DNA with the methylation-sensitive restriction endonucleases EcoRII-C and AjnI. Short DNAs flanking methylated CCWGG sites (tags) are selectively purified and assembled into tandem arrays of up to nine tags. This allows high-throughput sequencing of tags, identification of flanking regions, and their exact positions in the genome. In this study, we tested the specificity and efficiency of the approach.

  8. Efficient boundary hunting via vector quantization

    NASA Astrophysics Data System (ADS)

    Diamantini, Claudia; Panti, Maurizio

    2001-03-01

    A great amount of information about a classification problem is contained in those instances falling near the decision boundary. This intuition dates back to the earliest studies in pattern recognition, and to the more recent adaptive approaches to so-called boundary hunting, such as the work of Aha et alii on Instance-Based Learning and the work of Vapnik et alii on Support Vector Machines. The latter is of particular interest, since theoretical and experimental results ensure the accuracy of boundary reconstruction. However, its optimization approach has heavy computational and memory requirements, which limits its application to huge amounts of data. In this paper we describe an alternative approach to boundary hunting based on adaptive labeled quantization architectures. The adaptation is performed by a stochastic gradient algorithm for the minimization of the error probability. Error probability minimization guarantees an accurate approximation of the optimal decision boundary, while the use of a stochastic gradient algorithm provides an efficient method to reach such an approximation. Comparisons to Support Vector Machines are also presented.
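
    A minimal sketch of the adaptive labeled-quantization idea, using classic LVQ-style stochastic updates (the authors' actual gradient of the error probability differs; names and data here are illustrative).

    ```python
    import numpy as np

    def lvq_train(X, y, protos, proto_y, lr=0.05, epochs=30, seed=0):
        """Stochastic updates: the prototype nearest to a sample moves toward it
        if their labels match and away otherwise, so prototypes settle near the
        decision boundary."""
        rng = np.random.default_rng(seed)
        P = protos.astype(float).copy()
        for _ in range(epochs):
            for i in rng.permutation(len(X)):
                k = np.argmin(((P - X[i]) ** 2).sum(axis=1))  # nearest prototype
                sign = 1.0 if proto_y[k] == y[i] else -1.0
                P[k] += sign * lr * (X[i] - P[k])
        return P

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(-1.0, 0.6, (100, 2)), rng.normal(1.0, 0.6, (100, 2))])
    y = np.repeat([0, 1], 100)
    print(lvq_train(X, y, X[::50].copy(), y[::50]).round(2))
    ```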

  9. An Efficient Numerical Approach for Nonlinear Fokker-Planck equations

    NASA Astrophysics Data System (ADS)

    Otten, Dustin; Vedula, Prakash

    2009-03-01

    Fokker-Planck equations that are nonlinear with respect to their probability densities, which occur in many nonequilibrium systems relevant to mean-field interaction models, plasmas, and classical fermions and bosons, can be challenging to solve numerically. To address some underlying challenges in obtaining numerical solutions, we propose a quadrature-based moment method for efficient and accurate determination of transient (and stationary) solutions of nonlinear Fokker-Planck equations. In this approach the distribution function is represented as a collection of Dirac delta functions with corresponding quadrature weights and locations, which are in turn determined from constraints based on the evolution of generalized moments. Properties of the distribution function can be obtained by solving transport equations for the quadrature weights and locations. We will apply this computational approach to study a wide range of problems, including the Desai-Zwanzig model (for nonlinear muscular contraction) and multivariate nonlinear Fokker-Planck equations describing classical fermions and bosons, and will also demonstrate good agreement with results obtained from Monte Carlo and other standard numerical methods.

  10. Ontology-Based Gap Analysis for Technology Selection: A Knowledge Management Framework for the Support of Equipment Purchasing Processes

    NASA Astrophysics Data System (ADS)

    Macris, Aristomenis M.; Georgakellos, Dimitrios A.

    Technology selection decisions such as equipment purchasing and supplier selection are decisions of strategic importance to companies. These decisions are usually complex and unstructured, and thus difficult to capture in a way that is efficiently reusable. Knowledge reusability is of paramount importance since it enables users to participate actively in process design/redesign activities stimulated by the changing technology selection environment. This paper addresses the technology selection problem through an ontology-based approach that captures and makes reusable the equipment purchasing process and assists in (a) identifying the specifications requested by the users' organization, (b) identifying those offered by various candidate vendors' organizations, and (c) performing a specifications gap analysis as a prerequisite for effective and efficient technology selection. This approach has practical appeal, operational simplicity, and the potential for both immediate and long-term strategic impact. An example from the iron and steel industry is also presented to illustrate the approach.

  11. Self-Recirculating Casing Treatment Concept for Enhanced Compressor Performance

    NASA Technical Reports Server (NTRS)

    Hathaway, Michael D.

    2002-01-01

    A state-of-the-art CFD code (APNASA) was employed in a computationally based investigation of the impact of casing bleed and injection on the stability and performance of a moderate speed fan rotor wherein the stalling mass flow is controlled by tip flow field breakdown. The investigation was guided by observed trends in endwall flow characteristics (e.g., increasing endwall aerodynamic blockage) as stall is approached and based on the hypothesis that application of bleed or injection can mitigate these trends. The "best" bleed and injection configurations were then combined to yield a self-recirculating casing treatment concept. The results of this investigation yielded: 1) identification of the fluid mechanisms which precipitate stall of tip critical blade rows, and 2) an approach to recirculated casing treatment which results in increased compressor stall range with minimal or no loss in efficiency. Subsequent application of this approach to a high speed transonic rotor successfully yielded significant improvements in stall range with no loss in compressor efficiency.

  12. Transient effects in π-pulse sequences in MAS solid-state NMR

    NASA Astrophysics Data System (ADS)

    Hellwagner, Johannes; Wili, Nino; Ibáñez, Luis Fábregas; Wittmann, Johannes J.; Meier, Beat H.; Ernst, Matthias

    2018-02-01

    Dipolar recoupling techniques that use isolated rotor-synchronized π pulses are commonly used in solid-state NMR spectroscopy to gain insight into the structure of biological molecules. These sequences excel through their simplicity, stability towards radio-frequency (rf) inhomogeneity, and low rf requirements. For a theoretical understanding of such sequences, we present a Floquet treatment based on an interaction-frame transformation including the chemical-shift offset dependence. This approach is applied to the homonuclear dipolar-recoupling sequence Radio-Frequency Driven Recoupling (RFDR) and the heteronuclear recoupling sequence Rotational Echo Double Resonance (REDOR). Based on the Floquet approach, we show the influence of effective fields caused by pulse transients and discuss the advantages of pulse-transient compensation. We demonstrate experimentally that the transfer efficiency for homonuclear recoupling can be doubled in some cases in model compounds as well as in simple peptides if pulse-transient compensation is applied to the π pulses. Additionally, we discuss the influence of various phase cycles on the recoupling efficiency in order to reduce the magnitude of effective fields. Based on the findings from RFDR, we are able to explain why the REDOR sequence does not suffer in the recoupling efficiency despite the presence of effective fields.

  13. A Dynamic Intrusion Detection System Based on Multivariate Hotelling's T2 Statistics Approach for Network Environments

    PubMed Central

    Avalappampatty Sivasamy, Aneetha; Sundan, Bose

    2015-01-01

    The ever expanding communication requirements in today's world demand extensive and efficient network systems with equally efficient and reliable security features integrated for safe, confident, and secured communication and data transfer. Providing effective security protocols for any network environment, therefore, assumes paramount importance. Attempts are made continuously for designing more efficient and dynamic network intrusion detection models. In this work, an approach based on Hotelling's T2 method, a multivariate statistical analysis technique, has been employed for intrusion detection, especially in network environments. Components such as preprocessing, multivariate statistical analysis, and attack detection have been incorporated in developing the multivariate Hotelling's T2 statistical model and necessary profiles have been generated based on the T-square distance metrics. With a threshold range obtained using the central limit theorem, observed traffic profiles have been classified either as normal or attack types. Performance of the model, as evaluated through validation and testing using KDD Cup'99 dataset, has shown very high detection rates for all classes with low false alarm rates. Accuracy of the model presented in this work, in comparison with the existing models, has been found to be much better. PMID:26357668

  14. A Dynamic Intrusion Detection System Based on Multivariate Hotelling's T2 Statistics Approach for Network Environments.

    PubMed

    Sivasamy, Aneetha Avalappampatty; Sundan, Bose

    2015-01-01

    The ever expanding communication requirements in today's world demand extensive and efficient network systems with equally efficient and reliable security features integrated for safe, confident, and secured communication and data transfer. Providing effective security protocols for any network environment, therefore, assumes paramount importance. Attempts are made continuously for designing more efficient and dynamic network intrusion detection models. In this work, an approach based on Hotelling's T(2) method, a multivariate statistical analysis technique, has been employed for intrusion detection, especially in network environments. Components such as preprocessing, multivariate statistical analysis, and attack detection have been incorporated in developing the multivariate Hotelling's T(2) statistical model and necessary profiles have been generated based on the T-square distance metrics. With a threshold range obtained using the central limit theorem, observed traffic profiles have been classified either as normal or attack types. Performance of the model, as evaluated through validation and testing using KDD Cup'99 dataset, has shown very high detection rates for all classes with low false alarm rates. Accuracy of the model presented in this work, in comparison with the existing models, has been found to be much better.
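
    A minimal sketch of the T-square profiling step on synthetic features; the real model uses preprocessed KDD Cup'99 traffic features and derives its threshold range from the central limit theorem, whereas this toy uses an empirical percentile.

    ```python
    import numpy as np

    def hotelling_t2(train, obs):
        """T^2 distance of each observation from the profile of normal traffic."""
        mu = train.mean(axis=0)
        s_inv = np.linalg.pinv(np.cov(train, rowvar=False))
        d = obs - mu
        return np.einsum('ij,jk,ik->i', d, s_inv, d)

    rng = np.random.default_rng(0)
    normal = rng.normal(0.0, 1.0, (500, 4))                  # training traffic profiles
    threshold = np.percentile(hotelling_t2(normal, normal), 99)
    new_obs = rng.normal(0.0, 1.0, (5, 4)) + [0, 0, 0, 4.0]  # last feature shifted
    print(hotelling_t2(normal, new_obs) > threshold)          # True -> flag as attack
    ```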

  15. Enhancing the Effectiveness of Smoking Treatment Research: Conceptual Bases and Progress

    PubMed Central

    Baker, Timothy B.; Collins, Linda M.; Mermelstein, Robin; Piper, Megan E.; Schlam, Tanya R.; Cook, Jessica W.; Bolt, Daniel M.; Smith, Stevens S.; Jorenby, Douglas E.; Fraser, David; Loh, Wei-Yin; Theobald, Wendy E.; Fiore, Michael C.

    2015-01-01

    Background and aims A chronic care strategy could potentially enhance the reach and effectiveness of smoking treatment by providing effective interventions for all smokers, including those who are initially unwilling to quit. This paper describes the conceptual bases of a National Cancer Institute-funded research program designed to develop an optimized, comprehensive, chronic care smoking treatment. Methods This research is grounded in three methodological approaches: 1) the Phase-Based Model, which guides the selection of intervention components to be experimentally evaluated for the different phases of smoking treatment (motivation, preparation, cessation, and maintenance); 2) the Multiphase Optimization Strategy (MOST), which guides the screening of intervention components via efficient experimental designs and, ultimately, the assembly of promising components into an optimized treatment package; and 3) pragmatic research methods, such as electronic health record recruitment, that facilitate the efficient translation of research findings into clinical practice. Using this foundation and working in primary care clinics, we conducted three factorial experiments (reported in three accompanying articles) to screen 15 motivation, preparation, cessation, and maintenance phase intervention components for possible inclusion in a chronic care smoking treatment program. Results This research identified intervention components with relatively strong evidence of effectiveness at particular phases of smoking treatment and it demonstrated the efficiency of the MOST approach in terms both of the number of intervention components tested and of the richness of the information yielded. Conclusions A new, synthesized research approach efficiently evaluates multiple intervention components to identify promising components for every phase of smoking treatment. Many intervention components interact with one another, supporting the use of factorial experiments in smoking treatment development. PMID:26581974

  16. Stoogiometry: A Cognitive Approach to Teaching Stoichiometry.

    ERIC Educational Resources Information Center

    Krieger, Carla R.

    1997-01-01

    Describes the use of Moe's Mall, a locational device designed to be used by learners, as a simple algorithm for solving mole-based exercises efficiently and accurately using dimensional analysis. (DDR)

  17. An innovative approach to capability-based emergency operations planning

    PubMed Central

    Keim, Mark E

    2013-01-01

    This paper describes the innovative use of information technology to assist disaster planners with an easily accessible method for writing and improving evidence-based emergency operations plans. This process is used to identify all key objectives of the emergency response according to the capabilities of the institution, community or society. The approach then uses a standardized, objective-based format, along with a consensus-based method, for drafting capability-based operational-level plans. This information is then integrated within a relational database to allow for ease of access and enhanced functionality to search, sort and filter an emergency operations plan according to user need and technological capacity. This integrated approach is offered as an effective option for integrating best practices of planning with the efficiency, scalability and flexibility of modern information and communication technology. PMID:28228987

  18. An innovative approach to capability-based emergency operations planning.

    PubMed

    Keim, Mark E

    2013-01-01

    This paper describes the innovative use of information technology to assist disaster planners with an easily accessible method for writing and improving evidence-based emergency operations plans. This process is used to identify all key objectives of the emergency response according to the capabilities of the institution, community or society. The approach then uses a standardized, objective-based format, along with a consensus-based method, for drafting capability-based operational-level plans. This information is then integrated within a relational database to allow for ease of access and enhanced functionality to search, sort and filter an emergency operations plan according to user need and technological capacity. This integrated approach is offered as an effective option for integrating best practices of planning with the efficiency, scalability and flexibility of modern information and communication technology.

  19. Team-Based Learning Enhances Performance in Introductory Biology

    ERIC Educational Resources Information Center

    Carmichael, Jeffrey

    2009-01-01

    Given the problems associated with the traditional lecture method, the constraints associated with large classes, and the effectiveness of active learning, continued development and testing of efficient student-centered learning approaches are needed. This study explores the effectiveness of team-based learning (TBL) in a large-enrollment…

  20. Budgeting-Based Organization of Internal Control

    ERIC Educational Resources Information Center

    Rogulenko, Tatiana; Ponomareva, Svetlana; Bodiaco, Anna; Mironenko, Valentina; Zelenov, Vladimir

    2016-01-01

    The article suggests methodical approaches to the budgeting-based organization of internal control, determines the tasks and subtasks of control that consist in the construction of an efficient system for the making, implementation, control, and analysis of managerial decisions. The organization of responsibility centers by means of implementing…

  1. Tailored dendritic core-multishell nanocarriers for efficient dermal drug delivery: A systematic top-down approach from synthesis to preclinical testing.

    PubMed

    Hönzke, Stefan; Gerecke, Christian; Elpelt, Anja; Zhang, Nan; Unbehauen, Michael; Kral, Vivian; Fleige, Emanuel; Paulus, Florian; Haag, Rainer; Schäfer-Korting, Monika; Kleuser, Burkhard; Hedtrich, Sarah

    2016-11-28

    Drug loaded dendritic core-multishell (CMS) nanocarriers are of especial interest for the treatment of skin diseases, owing to their striking dermal delivery efficiencies following topical applications. CMS nanocarriers are composed of a polyglycerol core, connected by amide-bonds to an inner alkyl shell and an outer methoxy poly(ethylene glycol) shell. Since topically applied nanocarriers are subjected to biodegradation, the application of conventional amide-based CMS nanocarriers (10-A-18-350) has been limited by the potential production of toxic polyglycerol amines. To circumvent this issue, three tailored ester-based CMS nanocarriers (10-E-12-350, 10-E-15-350, 10-E-18-350) of varying inner alkyl chain length were synthesized and comprehensively characterized in terms of particle size, drug loading, biodegradation and dermal drug delivery efficiency. Dexamethasone (DXM), a potent drug widely used for the treatment of inflammatory skin diseases, was chosen as a therapeutically relevant test compound for the present study. Ester- and amide-based CMS nanocarriers delivered DXM more efficiently into human skin than a commercially available DXM cream. Subsequent in vitro and in vivo toxicity studies identified CMS (10-E-15-350) as the most biocompatible carrier system. The anti-inflammatory potency of DXM-loaded CMS (10-E-15-350) nanocarriers was assessed in TNFα supplemented skin models, where a significant reduction of the pro-inflammatory cytokine IL-8 was seen, with markedly greater efficacy than commercial DXM cream. In summary, we report the rational design and characterization of tailored, biodegradable, ester-based CMS nanocarriers, and their subsequent stepwise screening for biocompatibility, dermal delivery efficiency and therapeutic efficacy in a top-down approach yielding the best carrier system for topical applications. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Efficient dense blur map estimation for automatic 2D-to-3D conversion

    NASA Astrophysics Data System (ADS)

    Vosters, L. P. J.; de Haan, G.

    2012-03-01

    Focus is an important depth cue for 2D-to-3D conversion of low depth-of-field images and video. However, focus can only be reliably estimated at edges. Therefore, Bea et al. [1] first proposed an optimization-based approach to propagate focus to non-edge image portions, for single-image focus editing. While their approach produces accurate dense blur maps, the computational complexity and memory requirements for solving the resulting sparse linear system with standard multigrid or (multilevel) preconditioning techniques are infeasible within the stringent requirements of the consumer electronics and broadcast industry. In this paper we propose a fast, efficient, low-latency, line-scanning-based focus propagation, which mitigates the need for complex multigrid or (multilevel) preconditioning techniques. In addition we propose facial blur compensation to compensate for false shading edges that cause incorrect blur estimates in people's faces. In general, shading leads to incorrect focus estimates, which may lead to unnatural 3D and visual discomfort. Since visual attention mostly tends to faces, our solution addresses the most distracting errors. A subjective assessment by paired comparison on a set of challenging low-depth-of-field images shows that the proposed approach achieves 3D image quality equal to that of optimization-based approaches, and that facial blur compensation results in a significant improvement.
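
    A minimal sketch of one way a line-scanning focus propagation can work: blur values known only at edge pixels are spread along a scan line by a forward and a backward recursive pass. This is an illustrative scheme under that assumption, not the authors' exact algorithm.

    ```python
    import numpy as np

    def propagate_blur_line(blur, edge_mask, alpha=0.9):
        """Spread edge-pixel blur estimates across one scan line using two
        exponentially decaying recursive passes, then normalize."""
        def one_pass(vals, valid):
            out = np.zeros_like(vals)
            wgt = np.zeros_like(vals)
            acc_v = acc_w = 0.0
            for i in range(len(vals)):
                acc_v *= alpha
                acc_w *= alpha
                if valid[i]:
                    acc_v += vals[i]
                    acc_w += 1.0
                out[i], wgt[i] = acc_v, acc_w
            return out, wgt

        fv, fw = one_pass(blur, edge_mask)
        bv, bw = one_pass(blur[::-1], edge_mask[::-1])
        num, den = fv + bv[::-1], fw + bw[::-1]
        return np.where(den > 0, num / np.maximum(den, 1e-12), 0.0)

    blur = np.array([0.0, 2.0, 0.0, 0.0, 0.0, 5.0, 0.0])  # blur known at edges only
    mask = np.array([0, 1, 0, 0, 0, 1, 0], dtype=bool)
    print(propagate_blur_line(blur, mask).round(2))
    ```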

  3. Model-based classification of CPT data and automated lithostratigraphic mapping for high-resolution characterization of a heterogeneous sedimentary aquifer

    PubMed Central

    Rogiers, Bart; Mallants, Dirk; Batelaan, Okke; Gedeon, Matej; Huysmans, Marijke; Dassargues, Alain

    2017-01-01

    Cone penetration testing (CPT) is one of the most efficient and versatile methods currently available for geotechnical, lithostratigraphic and hydrogeological site characterization. Currently available methods for soil behaviour type classification (SBT) of CPT data however have severe limitations, often restricting their application to a local scale. For parameterization of regional groundwater flow or geotechnical models, and delineation of regional hydro- or lithostratigraphy, regional SBT classification would be very useful. This paper investigates the use of model-based clustering for SBT classification, and the influence of different clustering approaches on the properties and spatial distribution of the obtained soil classes. We additionally propose a methodology for automated lithostratigraphic mapping of regionally occurring sedimentary units using SBT classification. The methodology is applied to a large CPT dataset, covering a groundwater basin of ~60 km2 with predominantly unconsolidated sandy sediments in northern Belgium. Results show that the model-based approach is superior in detecting the true lithological classes when compared to more frequently applied unsupervised classification approaches or literature classification diagrams. We demonstrate that automated mapping of lithostratigraphic units using advanced SBT classification techniques can provide a large gain in efficiency, compared to more time-consuming manual approaches and yields at least equally accurate results. PMID:28467468

  4. Model-based classification of CPT data and automated lithostratigraphic mapping for high-resolution characterization of a heterogeneous sedimentary aquifer.

    PubMed

    Rogiers, Bart; Mallants, Dirk; Batelaan, Okke; Gedeon, Matej; Huysmans, Marijke; Dassargues, Alain

    2017-01-01

    Cone penetration testing (CPT) is one of the most efficient and versatile methods currently available for geotechnical, lithostratigraphic and hydrogeological site characterization. Currently available methods for soil behaviour type classification (SBT) of CPT data however have severe limitations, often restricting their application to a local scale. For parameterization of regional groundwater flow or geotechnical models, and delineation of regional hydro- or lithostratigraphy, regional SBT classification would be very useful. This paper investigates the use of model-based clustering for SBT classification, and the influence of different clustering approaches on the properties and spatial distribution of the obtained soil classes. We additionally propose a methodology for automated lithostratigraphic mapping of regionally occurring sedimentary units using SBT classification. The methodology is applied to a large CPT dataset, covering a groundwater basin of ~60 km2 with predominantly unconsolidated sandy sediments in northern Belgium. Results show that the model-based approach is superior in detecting the true lithological classes when compared to more frequently applied unsupervised classification approaches or literature classification diagrams. We demonstrate that automated mapping of lithostratigraphic units using advanced SBT classification techniques can provide a large gain in efficiency, compared to more time-consuming manual approaches and yields at least equally accurate results.
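
    A minimal sketch of model-based (Gaussian mixture) clustering on synthetic CPT-like features; the two features and the class structure are illustrative assumptions, not the paper's parameterization.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # Hypothetical features per depth increment: log cone resistance, log friction ratio
    X = np.vstack([rng.normal([1.5, -0.5], 0.15, (300, 2)),   # sand-like readings
                   rng.normal([0.4, 0.6], 0.15, (300, 2))])   # clay-like readings

    gmm = GaussianMixture(n_components=2, covariance_type='full', random_state=0).fit(X)
    sbt_class = gmm.predict(X)            # model-based SBT class per measurement
    print(np.bincount(sbt_class))         # roughly 300 points per soil class
    ```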

  5. Scope and Limitations of Fmoc Chemistry SPPS-Based Approaches to the Total Synthesis of Insulin Lispro via Ester Insulin.

    PubMed

    Dhayalan, Balamurugan; Mandal, Kalyaneswar; Rege, Nischay; Weiss, Michael A; Eitel, Simon H; Meier, Thomas; Schoenleber, Ralph O; Kent, Stephen B H

    2017-01-31

    We have systematically explored three approaches based on 9-fluorenylmethoxycarbonyl (Fmoc) chemistry solid phase peptide synthesis (SPPS) for the total chemical synthesis of the key depsipeptide intermediate for the efficient total chemical synthesis of insulin. The approaches used were: stepwise Fmoc chemistry SPPS; the "hybrid method", in which maximally protected peptide segments made by Fmoc chemistry SPPS are condensed in solution; and, native chemical ligation using peptide-thioester segments generated by Fmoc chemistry SPPS. A key building block in all three approaches was a Glu[O-β-(Thr)] ester-linked dipeptide equipped with a set of orthogonal protecting groups compatible with Fmoc chemistry SPPS. The most effective method for the preparation of the 51 residue ester-linked polypeptide chain of ester insulin was the use of unprotected peptide-thioester segments, prepared from peptide-hydrazides synthesized by Fmoc chemistry SPPS, and condensed by native chemical ligation. High-resolution X-ray crystallography confirmed the disulfide pairings and three-dimensional structure of synthetic insulin lispro prepared from ester insulin lispro by this route. Further optimization of these pilot studies could yield an efficient total chemical synthesis of insulin lispro (Humalog) based on peptide synthesis by Fmoc chemistry SPPS. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Optical design of nanowire absorbers for wavelength selective photodetectors

    PubMed Central

    Mokkapati, S.; Saxena, D.; Tan, H. H.; Jagadish, C.

    2015-01-01

    We propose the optical design for the absorptive element of photodetectors to achieve wavelength selective photo response based on resonant guided modes supported in semiconductor nanowires. We show that the waveguiding properties of nanowires result in very high absorption efficiency that can be exploited to reduce the volume of active semiconductor compared to planar photodetectors, without compromising the photocurrent. We present a design based on a group of nanowires with varying diameter for multi-color photodetectors with small footprint. We discuss the effect of a dielectric shell around the nanowires on the absorption efficiency and present a simple approach to optimize the nanowire diameter-dielectric shell thickness for maximizing the absorption efficiency. PMID:26469227

  7. Efficient region-based approach for blotch detection in archived video using texture information

    NASA Astrophysics Data System (ADS)

    Yous, Hamza; Serir, Amina

    2017-03-01

    We propose a method for blotch detection in archived videos by modeling their spatiotemporal properties. We introduce an adaptive spatiotemporal segmentation to extract candidate regions that can be classified as blotches. Then, the similarity between the preselected regions and their corresponding motion-compensated regions in the adjacent frames is assessed by means of motion trajectory estimation and textural information analysis. Perceived ground truth based on just noticeable contrast is employed for the evaluation of our approach against the state-of-the-art, and the reported results show a better performance for our approach.

  8. Application of Ce3+ single-doped complexes as solar spectral downshifters for enhancing photoelectric conversion efficiencies of a-Si-based solar cells

    NASA Astrophysics Data System (ADS)

    Song, Pei; Jiang, Chun

    2013-05-01

    The effect on photoelectric conversion efficiency of an a-Si-based solar cell by applying a solar spectral downshifter of rare earth ion Ce3+ single-doped complexes including yttrium aluminum garnet Y3Al5O12 single crystals, nanostructured ceramics, microstructured ceramics and B2O3-SiO2-Gd2O3-BaO glass is studied. The photoluminescence excitation spectra in the region 360-460 nm convert effectively into photoluminescence emission spectra in the region 450-550 nm where a-Si-based solar cells exhibit a higher spectral response. When these Ce3+ single-doped complexes are placed on the top of an a-Si-based solar cell as precursors for solar spectral downshifting, theoretical relative photoelectric conversion efficiencies of nc-Si:H and a-Si:H solar cells approach 1.09-1.13 and 1.04-1.07, respectively, by means of AMPS-1D numerical modeling, potentially benefiting an a-Si-based solar cell with a photoelectric efficiency improvement.

  9. NaradaBrokering as Middleware Fabric for Grid-based Remote Visualization Services

    NASA Astrophysics Data System (ADS)

    Pallickara, S.; Erlebacher, G.; Yuen, D.; Fox, G.; Pierce, M.

    2003-12-01

    Remote Visualization Services (RVS) have tended to rely on approaches based on the client-server paradigm. The simplicity of these approaches is offset by problems such as single points of failure, scaling, and availability. Furthermore, as the complexity, scale and scope of the services hosted on this paradigm increase, this approach becomes increasingly unsuitable. We propose a scheme built on top of a distributed brokering infrastructure, NaradaBrokering, which comprises a distributed network of broker nodes. These broker nodes are organized in a cluster-based architecture that can scale to very large sizes. The broker network is resilient to broker failures and efficiently routes interactions to entities that have expressed an interest in them. In our approach to RVS, services advertise their capabilities to the broker network, which manages these service advertisements. Among the services considered within our system are those that perform graphic transformations, those that mediate access to specialized datasets and, finally, those that manage the execution of specified tasks. There could be multiple instances of each of these services, and the system ensures that load for a given service is distributed efficiently over these service instances. Among the features provided in our approach are efficient discovery of services and asynchronous interactions between services and service requestors (which could themselves be other services). Entities need not be online during the execution of the service request. The system also ensures that entities can be notified about task executions, partial results and failures that might have taken place during service execution. The system also facilitates specification of task overrides, distribution of execution results to alternate devices (which were not used to originally request service execution) and to multiple users. These RVS services could of course be either OGSA (Open Grid Services Architecture) based Grid services or traditional Web services. The brokering infrastructure will manage the service advertisements and the invocation of these services. This scheme ensures that the fundamental Grid computing concept is met - providing computing capabilities from those that are willing to provide them to those that seek them. [1] The NaradaBrokering Project: http://www.naradabrokering.org

  10. CORDIC-based digital signal processing (DSP) element for adaptive signal processing

    NASA Astrophysics Data System (ADS)

    Bolstad, Gregory D.; Neeld, Kenneth B.

    1995-04-01

    The High Performance Adaptive Weight Computation (HAWC) processing element is a CORDIC-based, application-specific DSP element that, when connected in a linear array, can perform extremely high-throughput (100s of GFLOPS) matrix arithmetic operations on linear systems of equations in real time. In particular, it very efficiently performs the numerically intense computation of optimal least squares solutions for large, over-determined linear systems. Most techniques for computing solutions to these types of problems have used either a hard-wired, non-programmable systolic array approach or, more commonly, programmable DSP or microprocessor approaches. The custom logic methods can be efficient, but are generally inflexible. Approaches using multiple programmable generic DSP devices are very flexible, but suffer from poor efficiency and high computation latencies, primarily due to the large number of DSP devices that must be utilized to achieve the necessary arithmetic throughput. The HAWC processor is implemented as a highly optimized systolic array, yet retains some of the flexibility of a programmable data-flow system, allowing efficient implementation of algorithm variations. This provides flexible matrix processing capabilities that are one to three orders of magnitude less expensive and more dense than the current state of the art, and more importantly, allows a realizable solution to matrix processing problems that were previously considered impractical to physically implement. HAWC has direct applications in RADAR, SONAR, communications, and image processing, as well as in many other types of systems.
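
    For readers unfamiliar with CORDIC, the sketch below shows the basic rotation-mode iteration (shift-and-add updates plus a constant gain correction); it illustrates the arithmetic primitive only, not the HAWC systolic array.

    ```python
    import math

    def cordic_rotate(x, y, theta, n=32):
        """Rotate (x, y) by theta (radians) using CORDIC micro-rotations."""
        gain = 1.0
        for i in range(n):
            gain /= math.sqrt(1.0 + 2.0 ** (-2 * i))  # cumulative gain correction
        for i in range(n):
            d = 1.0 if theta >= 0.0 else -1.0          # rotation direction
            x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
            theta -= d * math.atan(2.0 ** -i)
        return x * gain, y * gain

    print(cordic_rotate(1.0, 0.0, math.pi / 4))  # ~ (0.7071, 0.7071)
    ```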

  11. A flexible and accurate quantification algorithm for electron probe X-ray microanalysis based on thin-film element yields

    NASA Astrophysics Data System (ADS)

    Schalm, O.; Janssens, K.

    2003-04-01

    Quantitative analysis by means of electron probe X-ray microanalysis (EPXMA) of low-Z materials such as silicate glasses can be hampered by the fact that ice or other contaminants build up on the Si(Li) detector beryllium window or (in the case of a windowless detector) on the Si(Li) crystal itself. These layers act as an additional absorber in front of the detector crystal, decreasing the detection efficiency at low energies (<5 keV). Since the layer thickness gradually changes with time, the detector efficiency in the low-energy region is also not constant. Using the normal ZAF approach to quantification of EPXMA data is cumbersome in these conditions, because spectra from reference materials and from unknown samples must be acquired within a fairly short period of time in order to avoid the effect of the change in efficiency. To avoid this problem, an alternative approach to quantification of EPXMA data is proposed, following a philosophy often employed in quantitative analysis of X-ray fluorescence (XRF) and proton-induced X-ray emission (PIXE) data. This approach is based on the (experimental) determination of thin-film element yields, rather than starting from infinitely thick, single-element calibration standards. These thin-film sensitivity coefficients can also be interpolated to allow quantification of elements for which no suitable standards are available. The change in detector efficiency can be monitored by collecting an X-ray spectrum of one multi-element glass standard. This information is used to adapt the previously determined thin-film sensitivity coefficients to the actual detector efficiency conditions valid on the day that the experiments were carried out. The main advantage of this method is that spectra collected from the standards and from the unknown samples need not be acquired within a short period of time. This new approach is evaluated for glass and metal matrices and is compared with a standard ZAF method.

  12. Equity and efficiency in HIV-treatment in South Africa: the contribution of mathematical programming to priority setting.

    PubMed

    Cleary, Susan; Mooney, Gavin; McIntyre, Di

    2010-10-01

    The HIV-epidemic is one of the greatest public health crises to face South Africa. A health care response to the treatment needs of HIV-positive people is a prime example of the desirability of an economic, rational approach to resource allocation in the face of scarcity. Despite this, almost no input based on economic analysis is currently used in national strategic planning. While cost-utility analysis is theoretically able to establish technical efficiency, in practice this is accomplished by comparing an intervention's ICER to a threshold level representing society's maximum willingness to pay to avoid death and improve health-related quality of life. Such an approach has been criticised for a number of reasons, including that it is inconsistent with a fixed budget for health care and that equity is not taken into account. It is also impractical if no national policy on the threshold exists. As an alternative, this paper proposes a mathematical programming approach that is capable of highlighting technical efficiency, equity, the equity/efficiency trade-off and the affordability of alternative HIV-treatment interventions. Government could use this information to plan an HIV-treatment strategy that best meets equity and efficiency objectives within budget constraints.
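
    A minimal sketch of the kind of mathematical programme advocated here: maximize total health gain from HIV-treatment interventions under a fixed budget. All figures are hypothetical, and equity objectives (which the paper emphasizes) would enter as additional constraint rows.

    ```python
    from scipy.optimize import linprog

    gain = [0.8, 0.5, 0.3]           # QALYs gained per person treated, by intervention
    cost = [1200.0, 600.0, 250.0]    # cost per person treated
    need = [10000, 25000, 40000]     # eligible population per intervention
    budget = 1.5e7

    # Maximize gain -> minimize its negative, subject to the budget constraint
    res = linprog(c=[-g for g in gain],
                  A_ub=[cost], b_ub=[budget],
                  bounds=list(zip([0, 0, 0], need)))
    print(res.x.round(0), -res.fun)  # people treated per intervention, total QALYs
    ```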

  13. A Statistical Ontology-Based Approach to Ranking for Multiword Search

    ERIC Educational Resources Information Center

    Kim, Jinwoo

    2013-01-01

    Keyword search is a prominent data retrieval method for the Web, largely because the simple and efficient nature of keyword processing allows a large amount of information to be searched with fast response. However, keyword search approaches do not formally capture the clear meaning of a keyword query and fail to address the semantic relationships…

  14. Photochemical approaches to ordered polymers

    NASA Technical Reports Server (NTRS)

    Meador, Michael A.; Abdulaziz, Mahmoud; Meador, Mary Ann B.

    1990-01-01

    The photocyclization of o-benzyloxyphenyl ketone chromophores provides an efficient, high-yield route to the synthesis of 2,3-diphenylbenzofurans. The synthesis and solution photochemistry of a series of polymers containing this chromophore are described. The photocuring of these polymers is a potential new approach to the synthesis of highly conjugated polymers based upon a p-phenylene bisbenzofuran repeat unit.

  15. A Semantic-Oriented Approach for Organizing and Developing Annotation for E-Learning

    ERIC Educational Resources Information Center

    Brut, Mihaela M.; Sedes, Florence; Dumitrescu, Stefan D.

    2011-01-01

    This paper presents a solution to extend the IEEE LOM standard with ontology-based semantic annotations for efficient use of learning objects outside Learning Management Systems. The data model corresponding to this approach is first presented. The proposed indexing technique for this model development in order to acquire a better annotation of…

  16. An opportunity cost approach to sample size calculation in cost-effectiveness analysis.

    PubMed

    Gafni, A; Walter, S D; Birch, S; Sendi, P

    2008-01-01

    The inclusion of economic evaluations as part of clinical trials has led to concerns about the adequacy of trial sample size to support such analysis. The analytical tool of cost-effectiveness analysis is the incremental cost-effectiveness ratio (ICER), which is compared with a threshold value (lambda) as a method to determine the efficiency of a health-care intervention. Accordingly, many of the methods suggested for calculating the sample size requirements for the economic component of clinical trials are based on the properties of the ICER. However, use of the ICER and a threshold value as a basis for determining efficiency has been shown to be inconsistent with the economic concept of opportunity cost. As a result, the validity of the ICER-based approaches to sample size calculations can be challenged. Alternative methods for determining improvements in efficiency that do not depend upon ICER values have been presented in the literature. In this paper, we develop an opportunity cost approach to calculating sample size for economic evaluations alongside clinical trials, and illustrate the approach using a numerical example. We compare the sample size requirement of the opportunity cost method with that of the ICER threshold method. In general, either method may yield the larger required sample size. However, the opportunity cost approach, although simple to use, has additional data requirements. We believe that the additional data requirements represent a small price to pay for being able to perform an analysis consistent with both the concept of opportunity cost and the problem faced by decision makers. Copyright (c) 2007 John Wiley & Sons, Ltd.

  17. Can We Practically Bring Physics-based Modeling Into Operational Analytics Tools?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Granderson, Jessica; Bonvini, Marco; Piette, Mary Ann

    Analytics software is increasingly used to improve and maintain operational efficiency in commercial buildings. Energy managers, owners, and operators are using a diversity of commercial offerings often referred to as Energy Information Systems, Fault Detection and Diagnostic (FDD) systems, or more broadly Energy Management and Information Systems, to cost-effectively enable savings on the order of ten to twenty percent. Most of these systems use data from meters and sensors, with rule-based and/or data-driven models to characterize system and building behavior. In contrast, physics-based modeling uses first-principles and engineering models (e.g., efficiency curves) to characterize system and building behavior. Historically, these physics-based approaches have been used in the design phase of the building life cycle or in retrofit analyses. Researchers have begun exploring the benefits of integrating physics-based models with operational data analytics tools, bridging the gap between design and operations. In this paper, we detail the development and operator use of a software tool that uses hybrid data-driven and physics-based approaches to cooling plant FDD and optimization. Specifically, we describe the system architecture, models, and FDD and optimization algorithms; advantages and disadvantages with respect to purely data-driven approaches; and practical implications for scaling and replicating these techniques. Finally, we conclude with an evaluation of the future potential for such tools and future research opportunities.
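
    A minimal sketch of the hybrid idea for one cooling-plant signal: a physics-based part-load efficiency curve predicts expected power, and measurements whose residual exceeds a tolerance are flagged. The curve, tolerance, and data are illustrative assumptions, not the tool's actual models.

    ```python
    import numpy as np

    def chiller_fdd(load_kw, power_kw, eff_curve, tol=0.10):
        """Flag points where measured power exceeds the physics-based
        prediction by more than tol (fractional residual)."""
        expected = eff_curve(load_kw)               # first-principles expectation
        residual = (power_kw - expected) / expected
        return residual > tol                        # True -> potential fault

    curve = np.poly1d([2e-5, 0.12, 40.0])            # hypothetical power-vs-load curve
    load = np.array([200.0, 400.0, 600.0])           # cooling load (kW)
    meas = np.array([70.0, 95.0, 160.0])             # measured power (kW)
    print(chiller_fdd(load, meas, curve))            # [False False  True]
    ```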

  18. Increasing Flexibility in Energy Code Compliance: Performance Packages

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Philip R.; Rosenberg, Michael I.

    Energy codes and standards have provided significant increases in building efficiency over the last 38 years, since the first national energy code was published in late 1975. The most commonly used path in energy codes, the prescriptive path, appears to be reaching a point of diminishing returns. As the code matures, the prescriptive path becomes more complicated, and also more restrictive. It is likely that an approach that considers the building as an integrated system will be necessary to achieve the next real gains in building efficiency. Performance code paths are increasing in popularity; however, there remains a significant design-team overhead in following the performance path, especially for smaller buildings. This paper focuses on the development of one alternative format, prescriptive packages. A method to develop building-specific prescriptive packages is reviewed, based on multiple runs of prototypical building models that feed a parametric decision analysis to determine a set of packages with equivalent energy performance. The approach is designed to be cost-effective and flexible for the design team while achieving a desired level of energy efficiency performance. A demonstration of the approach based on mid-sized office buildings with two HVAC system types is shown, along with a discussion of potential applicability in the energy code process.

  19. Efficient Solution of Three-Dimensional Problems of Acoustic and Electromagnetic Scattering by Open Surfaces

    NASA Technical Reports Server (NTRS)

    Turc, Catalin; Anand, Akash; Bruno, Oscar; Chaubell, Julian

    2011-01-01

    We present a computational methodology (a novel Nystrom approach based on the use of a non-overlapping patch technique and Chebyshev discretizations) for the efficient solution of problems of acoustic and electromagnetic scattering by open surfaces. Our integral equation formulations (1) incorporate, as ansatz, the singular nature of open-surface integral-equation solutions, and (2) for the Electric Field Integral Equation (EFIE), use analytical regularizers that effectively reduce the number of iterations required by Krylov-subspace iterative linear-algebra solvers.

  20. CRISPR-STOP: gene silencing through base-editing-induced nonsense mutations.

    PubMed

    Kuscu, Cem; Parlak, Mahmut; Tufan, Turan; Yang, Jiekun; Szlachta, Karol; Wei, Xiaolong; Mammadov, Rashad; Adli, Mazhar

    2017-07-01

    CRISPR-Cas9-induced DNA damage may have deleterious effects at high-copy-number genomic regions. Here, we use CRISPR base editors to knock out genes by changing single nucleotides to create stop codons. We show that the CRISPR-STOP method is an efficient and less deleterious alternative to wild-type Cas9 for gene-knockout studies. Early stop codons can be introduced in ∼17,000 human genes. CRISPR-STOP-mediated targeted screening demonstrates comparable efficiency to WT Cas9, which indicates the suitability of our approach for genome-wide functional screenings.
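
    As a toy illustration of the CRISPR-STOP idea, the sketch below scans a coding sequence for codons that a single C-to-T edit converts into a stop codon (CAA->TAA, CAG->TAG, CGA->TGA; TGG can also be targeted via the antisense strand, which this sketch omits). Editing-window and guide-design constraints are ignored.

    ```python
    EDITABLE = {"CAA": "TAA", "CAG": "TAG", "CGA": "TGA"}  # C->T creates a stop codon

    def stop_edit_sites(cds):
        """Codon index and codon for every in-frame codon that one C->T edit
        would turn into a premature stop."""
        cds = cds.upper()
        return [(i // 3, cds[i:i + 3]) for i in range(0, len(cds) - 2, 3)
                if cds[i:i + 3] in EDITABLE]

    print(stop_edit_sites("ATGCAACGATTTCAGTGA"))  # [(1, 'CAA'), (2, 'CGA'), (4, 'CAG')]
    ```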

  1. Problem-Centered Supplemental Instruction in Biology: Influence on Content Recall, Content Understanding, and Problem Solving Ability

    ERIC Educational Resources Information Center

    Gardner, Joel; Belland, Brian R.

    2017-01-01

    To address the need for effective, efficient ways to apply active learning in undergraduate biology courses, in this paper, we propose a problem-centered approach that utilizes supplemental web-based instructional materials based on principles of active learning. We compared two supplementary web-based modules using active learning strategies: the…

  2. Bio-Photoelectrochemical Solar Cells Incorporating Reaction Center and Reaction Center Plus Light Harvesting Complexes

    NASA Astrophysics Data System (ADS)

    Yaghoubi, Houman

    Harvesting solar energy can potentially be a promising solution to the energy crisis now and in the future. However, material and processing costs continue to be the most important limitations for commercial devices. A key solution to these problems might lie within the development of bio-hybrid solar cells that seek to mimic photosynthesis to harvest solar energy and to take advantage of low material costs, a negative carbon footprint, and material abundance. Bio-photoelectrochemical cell technologies exploit biomimetic means of energy conversion by utilizing plant-derived photosystems, which can be inexpensive and ultimately the most sustainable alternative. Plants and photosynthetic bacteria harvest light, through special proteins called reaction centers (RCs), with high efficiency and convert it into electrochemical energy. In theory, photosynthetic RCs can be used in a device to harvest solar energy and generate 1.1 V open circuit voltage and ~1 mA cm-2 short circuit photocurrent. Considering the nearly perfect quantum yield of photo-induced charge separation, the efficiency of a protein-based solar cell might exceed 20%. In practice, the efficiency of fabricated devices has been limited mainly due to the challenges in electron transfer between the protein complex and the device electrodes, as well as limited light absorption. The overarching goal of this work is to increase the power conversion efficiency of protein-based solar cells by addressing those issues (i.e., electron transfer and light absorption). This work presents several approaches to increase the charge transfer rate between the photosynthetic RC and the underlying electrode, as well as to increase light absorption, to eventually enhance the external quantum efficiency (EQE) of bio-hybrid solar cells. The first approach is to decrease the electron transfer distance between one of the redox active sites in the RC and the underlying electrode by direct attachment of the protein complex onto Au electrodes via surface-exposed cysteine residues. This resulted in photocurrent densities as large as ~600 nA cm-2, while the incident-photon-to-generated-electron quantum efficiency was still as low as 3 x 10-4%. The second approach is to immobilize wild-type RCs of Rhodobacter sphaeroides on the surface of a Au underlying electrode using self-assembled monolayers of carboxylic acid terminated oligomers and cytochrome c charge-mediating layers, with a preferential orientation from the primary electron donor site. This approach resulted in an EQE of up to 0.06%, a 200-fold efficiency improvement compared to the first approach. In the third approach, instead of isolated protein complexes, RCs plus light harvesting (LH) complexes were employed for better photon absorption. Direct attachment of RC-LH1 complexes on Au working electrodes resulted in 0.21% EQE, a 3.5-fold efficiency improvement over the second approach (700 times higher than the first approach). The main impact of this work is the harnessing of biological RCs for efficient energy harvesting in man-made structures. Specifically, the results in this work will advance the application of RCs in devices for energy harvesting and will enable a better understanding of bio- and nanomaterial interfaces, thereby advancing the application of biological materials in electronic devices. Finally, this work offers general guidelines that can serve to improve the performance of bio-hybrid solar cells.

  3. A theoretical framework for whole-plant carbon assimilation efficiency based on metabolic scaling theory: a test case using Picea seedlings.

    PubMed

    Wang, Zhiqiang; Ji, Mingfei; Deng, Jianming; Milne, Richard I; Ran, Jinzhi; Zhang, Qiang; Fan, Zhexuan; Zhang, Xiaowei; Li, Jiangtao; Huang, Heng; Cheng, Dongliang; Niklas, Karl J

    2015-06-01

    Simultaneous and accurate measurements of whole-plant instantaneous carbon-use efficiency (ICUE) and annual total carbon-use efficiency (TCUE) are difficult to make, especially for trees. One usually estimates ICUE based on the net photosynthetic rate or the assumed proportional relationship between growth efficiency and ICUE. However, thus far, protocols for easily estimating annual TCUE remain problematic. Here, we present a theoretical framework (based on metabolic scaling theory) to predict whole-plant annual TCUE by directly measuring instantaneous net photosynthetic and respiratory rates. This framework makes four predictions, which were evaluated empirically using seedlings of nine Picea taxa: (i) the flux rates of CO(2) and energy will scale isometrically as a function of plant size, (ii) whole-plant net and gross photosynthetic rates and the net primary productivity will scale isometrically with respect to total leaf mass, (iii) these scaling relationships will be independent of ambient temperature and humidity fluctuations (as measured within an experimental chamber) regardless of the instantaneous net photosynthetic rate, dark respiratory rate, or overall growth rate, and (iv) TCUE will scale isometrically with respect to instantaneous efficiency of carbon use (i.e., the latter can be used to predict the former) across diverse species. These predictions were experimentally verified. We also found that the ranking of the nine taxa based on net photosynthetic rates differed from rankings based on either ICUE or TCUE. In addition, the absolute values of ICUE and TCUE significantly differed among the nine taxa, with both ICUE and temperature-corrected ICUE being highest for Picea abies and lowest for Picea schrenkiana. Nevertheless, the data are consistent with the predictions of our general theoretical framework, which can be used to assess annual carbon-use efficiency of different species at the level of an individual plant based on simple, direct measurements. Moreover, we believe that our approach provides a way to cope with the complexities of different ecosystems, provided that sufficient measurements are taken to calibrate our approach to that of the system being studied. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
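
    A minimal sketch of how an isometric-scaling prediction like (ii) is typically checked: fit the slope of a log-log regression and compare it with 1. The data are synthetic and the variable names illustrative.

    ```python
    import numpy as np

    def scaling_exponent(size, flux):
        """Slope of the log-log regression; isometric scaling predicts ~1."""
        slope, _ = np.polyfit(np.log(size), np.log(flux), 1)
        return slope

    rng = np.random.default_rng(2)
    leaf_mass = rng.lognormal(0.0, 0.5, 50)                     # plant size proxy
    net_photo = 3.0 * leaf_mass * rng.lognormal(0.0, 0.1, 50)   # isometric + noise
    print(round(scaling_exponent(leaf_mass, net_photo), 2))     # close to 1.0
    ```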

  4. Perspective: Toward efficient GaN-based red light emitting diodes using europium doping

    NASA Astrophysics Data System (ADS)

    Mitchell, Brandon; Dierolf, Volkmar; Gregorkiewicz, Tom; Fujiwara, Yasufumi

    2018-04-01

    While InGaN/GaN blue and green light-emitting diodes (LEDs) are commercially available, the search for an efficient red LED based on GaN is ongoing. The realization of this LED is crucial for the monolithic integration of the three primary colors and the development of nitride-based full-color high-resolution displays. In this perspective, we will address the challenges of attaining red luminescence from GaN under current injection and the methods that have been developed to circumvent them. While several approaches will be mentioned, a large emphasis will be placed on the recent developments of doping GaN with Eu3+ to achieve an efficient red GaN-based LED. Finally, we will provide an outlook to the future of this material as a candidate for small scale displays such as mobile device screens or micro-LED displays.

  5. Solid phase synthesis of phosphorothioate oligonucleotides utilizing diethyldithiocarbonate disulfide (DDD) as an efficient sulfur transfer reagent.

    PubMed

    Cheruvallath, Zacharia S; Kumar, R Krishna; Rentel, Claus; Cole, Douglas L; Ravikumar, Vasulinga T

    2003-04-01

    Diethyldithiodicarbonate (DDD), a cheap and easily prepared compound, is found to be a rapid and efficient sulfurizing reagent in solid phase synthesis of phosphorothioate oligodeoxyribonucleotides via the phosphoramidite approach. Product yield and quality based on IP-LC-MS compares well with high quality oligonucleotides synthesized using phenylacetyl disulfide (PADS) which is being used for manufacture of our antisense drugs.

  6. Organic Semiconductors for Sprayable Solar Cells: Improving Stability and Efficiency

    DTIC Science & Technology

    2008-03-25

    adopt a bulk heterojunction approach (where donor and acceptor are mixed before deposition). This decision immediately removed pentacene-based ... derivative (ADTz) was the first screened, and unfortunately did not yield any photovoltaic performance. The fullerene adduct of pentacene and C60 was ... continue). The most encouraging acceptor was the dicyano pentacene chromophore (DC_Pn). The derivatives shown above varied in efficiency from

  7. Separation of photoactive conformers based on hindered diarylethenes: efficient modulation in photocyclization quantum yields.

    PubMed

    Li, Wenlong; Jiao, Changhong; Li, Xin; Xie, Yongshu; Nakatani, Keitaro; Tian, He; Zhu, Weihong

    2014-04-25

    Endowed with both solvent independence and excellent thermal bistability, the benzobis(thiadiazole)-bridged diarylethene system provides an efficient approach to realizing extremely high photocyclization quantum yields (Φo-c, up to 90.6%), both by completely separating the pure anti-parallel conformer and by suppressing intramolecular charge transfer (ICT). © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Batched matrix computations on hardware accelerators based on GPUs

    DOE PAGES

    Haidar, Azzam; Dong, Tingxing; Luszczek, Piotr; ...

    2015-02-09

    Scientific applications require solvers that work on many small-size problems that are independent from each other. At the same time, high-end hardware evolves rapidly and becomes ever more throughput-oriented, and thus there is an increasing need for an effective approach to develop energy-efficient, high-performance codes for these small matrix problems that we call batched factorizations. The many applications that need this functionality could especially benefit from the use of GPUs, which currently are four to five times more energy efficient than multicore CPUs on important scientific workloads. This study, consequently, describes the development of the most common, one-sided factorizations, Cholesky, LU, and QR, for a set of small dense matrices. The algorithms we present together with their implementations are, by design, inherently parallel. In particular, our approach is based on representing the process as a sequence of batched BLAS routines that are executed entirely on a GPU. Importantly, this is unlike the LAPACK and the hybrid MAGMA factorization algorithms that work under drastically different assumptions of hardware design and efficiency of execution of the various computational kernels involved in the implementation. Thus, our approach is more efficient than what works for a combination of multicore CPUs and GPUs for the problem sizes of interest in the application use cases. The paradigm wherein a single chip (a GPU or a CPU) factorizes a single problem at a time is not at all efficient in our applications' context. We illustrate all of these claims through a detailed performance analysis. With the help of profiling and tracing tools, we guide our development of batched factorizations to achieve up to two-fold speedup and three-fold better energy efficiency as compared against our highly optimized batched CPU implementations based on the MKL library. Finally, on a tested system featuring two sockets of Intel Sandy Bridge CPUs, and compared against the batched LU factorization featured in the cuBLAS library for GPUs, we achieve up to a 2.5× speedup on the NVIDIA K40 GPU.
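
    To illustrate the batched idea on the host side, the sketch below factorizes many small symmetric positive-definite matrices in one vectorized call; NumPy stands in here for the batched BLAS/MAGMA GPU kernels discussed in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.normal(size=(10000, 8, 8))                 # batch of small matrices
    spd = A @ A.transpose(0, 2, 1) + 8 * np.eye(8)     # make each one SPD
    L = np.linalg.cholesky(spd)                        # batched Cholesky over axis 0
    print(np.allclose(L @ L.transpose(0, 2, 1), spd))  # True
    ```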

  9. Decoupled CFD-based optimization of efficiency and cavitation performance of a double-suction pump

    NASA Astrophysics Data System (ADS)

    Škerlavaj, A.; Morgut, M.; Jošt, D.; Nobile, E.

    2017-04-01

    In this study, the impeller geometry of a double-suction pump ensuring the best performance in terms of hydraulic efficiency and resistance to cavitation is determined using an optimization strategy driven by the modeFRONTIER optimization platform. The different impeller shapes (designs) are modified according to the optimization parameters and tested with computational fluid dynamics (CFD) software, namely ANSYS CFX. The simulations are performed using a decoupled approach, where only the impeller domain region is numerically investigated for computational convenience. The flow losses in the volute are estimated on the basis of the velocity distribution at the impeller outlet. The best designs are then validated considering the computationally more expensive full-geometry CFD model. The overall results show that the proposed approach is suitable for quick impeller shape optimization.

  10. InGaN working electrodes with assisted bias generated from GaAs solar cells for efficient water splitting.

    PubMed

    Liu, Shu-Yen; Sheu, J K; Lin, Yu-Chuan; Chen, Yu-Tong; Tu, S J; Lee, M L; Lai, W C

    2013-11-04

    Hydrogen generation through water splitting by n-InGaN working electrodes with a bias generated from a GaAs solar cell was studied. Instead of an external bias provided by a power supply, a GaAs-based solar cell was used as the driving force to increase the rate of hydrogen production. The water-splitting system was tuned using different approaches to set the operating points to the maximum power point of the GaAs solar cell. The approaches included changing the electrolytes, varying the light intensity, and introducing immersed ITO ohmic contacts on the working electrodes. As a result, the hybrid system comprising both InGaN-based working electrodes and GaAs solar cells operating under concentrated illumination could possibly facilitate efficient water splitting.

  11. On the use of reverse Brownian motion to accelerate hybrid simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bakarji, Joseph; Tartakovsky, Daniel M., E-mail: tartakovsky@stanford.edu

    Multiscale and multiphysics simulations are two rapidly developing fields of scientific computing. Efficient coupling of continuum (deterministic or stochastic) constitutive solvers with their discrete (stochastic, particle-based) counterparts is a common challenge in both kinds of simulations. We focus on interfacial, tightly coupled simulations of diffusion that combine continuum and particle-based solvers. The latter employs the reverse Brownian motion (rBm), a Monte Carlo approach that allows one to enforce inhomogeneous Dirichlet, Neumann, or Robin boundary conditions and is trivially parallelizable. We discuss numerical approaches for improving the accuracy of rBm in the presence of inhomogeneous Neumann boundary conditions and alternative strategies for coupling the rBm solver with its continuum counterpart. Numerical experiments are used to investigate the convergence, stability, and computational efficiency of the proposed hybrid algorithm.

  12. Assessing the Relative Performance of Nurses Using Data Envelopment Analysis Matrix (DEAM).

    PubMed

    Vafaee Najar, Ali; Pooya, Alireza; Alizadeh Zoeram, Ali; Emrouznejad, Ali

    2018-05-31

    Assessing employee performance is one of the most important issues in healthcare management. Because of their direct relationship with patients, nurses are among the most influential hospital staff and play a vital role in providing healthcare services. In this paper, a novel Data Envelopment Analysis Matrix (DEAM) approach is proposed for assessing the performance of nurses based on relative efficiency. The proposed model consists of five input variables (type of employment, work experience, training hours, working hours, and overtime hours) and eight output variables (the hours each nurse spends on each of eight activities: documentation, medical instructions, wound care and patient drainage, laboratory sampling, assessment and control care, follow-up and counseling, para-clinical measures, and attendance during visiting and discharge) and was tested on 30 nurses from the heart department of a hospital in Iran. After determining the relative efficiency of each nurse with the DEA model, the nurses' performance was evaluated in a DEAM format. As a result, the nurses were divided into four groups: superstars, potential stars, those who need effective training, and question marks. Finally, based on the proposed approach, we draw some recommendations for policy makers to improve and maintain the performance of each of these groups. The proposed approach provides a practical framework for hospital managers so that they can assess the relative efficiency of nurses, plan, and take steps to improve the quality of healthcare delivery.
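
    For readers unfamiliar with how DEA efficiency scores are computed, the sketch below solves the input-oriented CCR envelopment model for one unit as a small linear program with SciPy. It is a generic DEA illustration under our own variable names, not the authors' DEAM implementation.

      import numpy as np
      from scipy.optimize import linprog

      def dea_efficiency(X, Y, k):
          """Input-oriented CCR efficiency of unit k.
          X: (m inputs, n units), Y: (s outputs, n units)."""
          m, n = X.shape
          s = Y.shape[0]
          c = np.zeros(n + 1)
          c[0] = 1.0                                  # minimize theta
          # sum_j lambda_j * x_ij <= theta * x_ik  (inputs scaled down)
          A_in = np.hstack([-X[:, [k]], X])
          # sum_j lambda_j * y_rj >= y_rk  (outputs at least unit k's)
          A_out = np.hstack([np.zeros((s, 1)), -Y])
          A_ub = np.vstack([A_in, A_out])
          b_ub = np.concatenate([np.zeros(m), -Y[:, k]])
          bounds = [(0, None)] * (n + 1)              # theta >= 0, lambda >= 0
          res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
          return res.x[0]

    A score of 1 places a unit on the efficient frontier; a score below 1 gives the proportional input reduction that would make that unit efficient relative to its peers.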

  13. Web-based data collection: detailed methods of a questionnaire and data gathering tool

    PubMed Central

    Cooper, Charles J; Cooper, Sharon P; del Junco, Deborah J; Shipp, Eva M; Whitworth, Ryan; Cooper, Sara R

    2006-01-01

    There have been dramatic advances in the development of web-based data collection instruments. This paper outlines a systematic web-based approach to facilitate this process through locally developed code and describes the results of using this process after two years of data collection. We provide a detailed example of a web-based method that we developed for a study in Starr County, Texas, assessing high school students' work and health status. This web-based application includes the data instrument design, data entry and management, and the data tables needed to store the results, and attempts to maximize the advantages of this data collection method. The software also efficiently produces a coding manual, web-based statistical summary and crosstab reports, as well as input templates for use by statistical packages. Overall, web-based data entry using a dynamic approach proved to be a very efficient and effective data collection system. This data collection method expedited data processing and analysis and eliminated the need for cumbersome and expensive transfer and tracking of forms, data entry, and verification. The code has been made available for non-profit use only to the public health research community as a free download [1]. PMID:16390556

  14. A New Approach for Progressive Dense Reconstruction from Consecutive Images Based on Prior Low-Density 3D Point Clouds

    NASA Astrophysics Data System (ADS)

    Lari, Z.; El-Sheimy, N.

    2017-09-01

    In recent years, the increasing incidence of climate-related disasters has tremendously affected our environment. In order to effectively manage and reduce the dramatic impacts of such events, the development of timely disaster management plans is essential. Since these disasters are spatial phenomena, timely provision of geospatial information is crucial for effective development of response and management plans. Due to the inaccessibility of the affected areas and the limited budget of first responders, timely acquisition of the required geospatial data for these applications is usually possible only using low-cost imaging and georeferencing sensors mounted on unmanned platforms. Despite the rapid collection of the required data using these systems, available processing techniques are not yet capable of delivering geospatial information to responders and decision makers in a timely manner. To address this issue, this paper introduces a new technique for dense 3D reconstruction of the affected scenes which can deliver and improve the needed geospatial information incrementally. This approach is implemented based on prior 3D knowledge of the scene and employs computationally efficient 2D triangulation, feature descriptor, feature matching, and point verification techniques to optimize and speed up the 3D dense scene reconstruction procedure. To verify the feasibility and computational efficiency of the proposed approach, an experiment using a set of consecutive images collected onboard a UAV platform and a prior low-density airborne laser scan over the same area is conducted, and step-by-step results are provided. A comparative analysis of the proposed approach and an available image-based dense reconstruction technique is also conducted to demonstrate the computational efficiency and competency of this technique for delivering geospatial information with pre-specified accuracy.

  15. Efficient Bayesian experimental design for contaminant source identification

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Zeng, L.

    2013-12-01

    In this study, an efficient fully Bayesian approach is developed for the optimal design of sampling well locations and the identification of groundwater contaminant source parameters. An information measure, the relative entropy, is employed to quantify the information gain from indirect concentration measurements in identifying unknown source parameters such as the release time, strength, and location. In this approach, the sampling location that gives the maximum relative entropy is selected as the optimal one. Once the sampling location is determined, a Bayesian approach based on Markov chain Monte Carlo (MCMC) is used to estimate the unknown source parameters. In both the design and the estimation, the contaminant transport equation must be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on an adaptive sparse grid is utilized to construct a surrogate for the contaminant transport model. The approximated likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of our approach are demonstrated through numerical case studies. Compared with the traditional optimal design, which is based on a Gaussian linear assumption, the method developed in this study can cope with arbitrary nonlinearity. It can be used to assist in groundwater monitoring network design and in the identification of unknown contaminant sources. [Figure captions: contours of the expected information gain, with the optimal observation location at the maximum; posterior marginal probability densities of the unknown parameters for the designed location versus seven randomly chosen locations, with true values marked, showing that the unknown parameters are estimated better with the designed location.]
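
    The design criterion described here, expected relative-entropy gain, is commonly estimated with a nested Monte Carlo loop. The sketch below is a generic estimator of that quantity, not the authors' sparse-grid-accelerated code; prior_sample, simulate, and log_lik are hypothetical stand-ins for the prior, the (surrogate) transport model plus measurement noise, and the measurement likelihood.

      import numpy as np

      def expected_information_gain(prior_sample, simulate, log_lik, d, N=500, M=500, seed=0):
          """EIG(d) = E_{theta,y}[ log p(y|theta,d) - log p(y|d) ]."""
          rng = np.random.default_rng(seed)
          thetas = prior_sample(N, rng)                    # outer prior draws
          inner = prior_sample(M, rng)                     # draws for the marginal
          eig = 0.0
          for theta in thetas:
              y = simulate(theta, d, rng)                  # synthetic measurement
              ll = log_lik(y, theta, d)
              # log p(y|d) by inner Monte Carlo over the prior
              lmarg = np.logaddexp.reduce(
                  np.array([log_lik(y, t, d) for t in inner])) - np.log(M)
              eig += (ll - lmarg) / N
          return eig                                       # maximize over candidate designs d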

  16. Including quality attributes in efficiency measures consistent with net benefit: creating incentives for evidence based medicine in practice.

    PubMed

    Eckermann, Simon; Coelli, Tim

    2013-01-01

    Evidence-based medicine supports net-benefit-maximising therapies and strategies in processes of health technology assessment (HTA) for reimbursement and subsidy decisions internationally. However, translation of evidence-based medicine to practice is impeded by efficiency measures such as cost per case-mix-adjusted separation in hospitals, which ignore the health effects of care. In this paper we identify a correspondence method that allows quality variables under the control of providers to be incorporated in efficiency measures consistent with maximising net benefit. Including effects framed from a disutility-bearing (utility-reducing) perspective (e.g. mortality, morbidity or reduction in life years) as inputs and minimising quality-inclusive costs on the cost-disutility plane is shown to enable efficiency measures consistent with maximising net benefit under a one-to-one correspondence. The method combines the advantages of radial properties with the appropriate objective of maximising net benefit, overcoming the problems of inappropriate objectives implicit in alternative methods, whether these specify quality variables as utility-bearing outputs (e.g. survival, reduction in morbidity or life years), hyperbolic variables or exogenous variables. This correspondence approach is illustrated by an efficiency comparison at a clinical activity level for 45 Australian hospitals, allowing for their costs and mortality rates per admission. Explicit coverage and comparability conditions of the underlying correspondence method are also shown to provide a robust framework for preventing cost-shifting and cream-skimming incentives, with appropriate qualification of analysis and support for data linkage and risk adjustment where these conditions are not satisfied. Comparison on the cost-disutility plane has previously been shown to have distinct advantages in comparing multiple strategies in HTA, which this paper naturally extends to a robust method and framework for comparing the efficiency of health care providers in practice. Consequently, the proposed approach provides a missing link between HTA and practice, allowing active incentives for evidence-based net benefit maximisation in practice. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. Implementation and Operational Research: Cost and Efficiency of a Hybrid Mobile Multidisease Testing Approach With High HIV Testing Coverage in East Africa.

    PubMed

    Chang, Wei; Chamie, Gabriel; Mwai, Daniel; Clark, Tamara D; Thirumurthy, Harsha; Charlebois, Edwin D; Petersen, Maya; Kabami, Jane; Ssemmondo, Emmanuel; Kadede, Kevin; Kwarisiima, Dalsone; Sang, Norton; Bukusi, Elizabeth A; Cohen, Craig R; Kamya, Moses; Havlir, Diane V; Kahn, James G

    2016-11-01

    In 2013-2014, we achieved 89% adult HIV testing coverage using a hybrid testing approach in 32 communities in Uganda and Kenya (SEARCH: NCT01864603). To inform scalability, we sought to determine: (1) overall cost and efficiency of this approach; and (2) costs associated with point-of-care (POC) CD4 testing, multidisease services, and community mobilization. We applied microcosting methods to estimate costs of population-wide HIV testing in 12 SEARCH trial communities. Main intervention components of the hybrid approach are census, multidisease community health campaigns (CHC), and home-based testing for CHC nonattendees. POC CD4 tests were provided for all HIV-infected participants. Data were extracted from expenditure records, activity registers, staff interviews, and time and motion logs. The mean cost per adult tested for HIV was $20.5 (range: $17.1-$32.1) (2014 US$), including a POC CD4 test at $16 per HIV+ person identified. Cost per adult tested for HIV was $13.8 at CHC vs. $31.7 by home-based testing. The cost per HIV+ adult identified was $231 ($87-$1245), with variability due mainly to HIV prevalence among persons tested (ie, HIV positivity rate). The marginal costs of multidisease testing at CHCs were $1.16/person for hypertension and diabetes, and $0.90 for malaria. Community mobilization constituted 15.3% of total costs. The hybrid testing approach achieved very high HIV testing coverage, with POC CD4, at costs similar to previously reported mobile, home-based, or venue-based HIV testing approaches in sub-Saharan Africa. By leveraging HIV infrastructure, multidisease services were offered at low marginal costs.

  18. Efficient estimation of the attributable fraction when there are monotonicity constraints and interactions.

    PubMed

    Traskin, Mikhail; Wang, Wei; Ten Have, Thomas R; Small, Dylan S

    2013-01-01

    The population attributable fraction (PAF) for an exposure is the fraction of disease cases in a population that can be attributed to that exposure. One method of estimating the PAF involves estimating the probability of having the disease given the exposure and confounding variables. In many settings, the exposure will interact with the confounders, and the confounders will interact with each other. Also, in many settings, the probability of having the disease is thought, based on subject-matter knowledge, to be a monotone increasing function of the exposure and possibly of some of the confounders. We develop an efficient approach for estimating logistic regression models with interactions and monotonicity constraints, and apply this approach to estimating the PAF. Our approach produces substantially more accurate estimates of the PAF in some settings than the usual approach, which uses logistic regression without monotonicity constraints.
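
    One simple way to impose such monotonicity is to constrain the relevant logistic-regression coefficients to be nonnegative during maximum-likelihood fitting; a model-based PAF then follows by zeroing out the exposure. The sketch below is a minimal version of that idea (a bound-constrained MLE), not the authors' estimator; X is assumed to contain an intercept column, and any interaction columns would be built beforehand.

      import numpy as np
      from scipy.optimize import minimize

      def monotone_logistic(X, y, nonneg_idx):
          """Logistic MLE with the coefficients in nonneg_idx constrained >= 0."""
          n, p = X.shape
          def nll(w):
              z = X @ w
              return np.sum(np.logaddexp(0.0, z)) - y @ z   # negative log-likelihood
          bounds = [(0.0, None) if j in nonneg_idx else (None, None)
                    for j in range(p)]
          return minimize(nll, np.zeros(p), bounds=bounds, method="L-BFGS-B").x

      def estimate_paf(X, y, w, exposure_cols):
          """PAF = 1 - expected cases with the exposure removed / observed cases."""
          X0 = X.copy()
          X0[:, exposure_cols] = 0.0                        # counterfactual: no exposure
          p0 = 1.0 / (1.0 + np.exp(-(X0 @ w)))
          return 1.0 - p0.sum() / y.sum()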

  19. A variational eigenvalue solver on a photonic quantum processor

    PubMed Central

    Peruzzo, Alberto; McClean, Jarrod; Shadbolt, Peter; Yung, Man-Hong; Zhou, Xiao-Qi; Love, Peter J.; Aspuru-Guzik, Alán; O’Brien, Jeremy L.

    2014-01-01

    Quantum computers promise to efficiently solve important problems that are intractable on a conventional computer. For quantum systems, where the physical dimension grows exponentially, finding the eigenvalues of certain operators is one such intractable problem and remains a fundamental challenge. The quantum phase estimation algorithm efficiently finds the eigenvalue of a given eigenvector but requires fully coherent evolution. Here we present an alternative approach that greatly reduces the requirements for coherent evolution and combine this method with a new approach to state preparation based on ansätze and classical optimization. We implement the algorithm by combining a highly reconfigurable photonic quantum processor with a conventional computer. We experimentally demonstrate the feasibility of this approach with an example from quantum chemistry—calculating the ground-state molecular energy for He–H+. The proposed approach drastically reduces the coherence time requirements, enhancing the potential of quantum resources available today and in the near future. PMID:25055053
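
    The essence of the variational loop, a classical optimizer minimizing the energy of a parameterized state, can be seen in a toy classical simulation. The sketch below uses a made-up 2×2 Hermitian "Hamiltonian" and a one-parameter ansatz; the actual experiment evaluates the energy term by term on the photonic processor rather than with linear algebra.

      import numpy as np
      from scipy.optimize import minimize_scalar

      # Toy Hamiltonian; a real VQE measures its Pauli terms on quantum hardware.
      H = np.array([[1.0, 0.5],
                    [0.5, -1.0]])

      def energy(theta):
          psi = np.array([np.cos(theta), np.sin(theta)])   # variational ansatz
          return psi @ H @ psi                             # <psi|H|psi>

      res = minimize_scalar(energy, bounds=(0.0, np.pi), method="bounded")
      print(res.fun, np.linalg.eigvalsh(H)[0])             # estimate vs exact ground energy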

  20. A hierarchical approach for the design improvements of an Organocat biorefinery.

    PubMed

    Abdelaziz, Omar Y; Gadalla, Mamdouh A; El-Halwagi, Mahmoud M; Ashour, Fatma H

    2015-04-01

    Lignocellulosic biomass has emerged as a potentially attractive renewable energy source. Processing technologies of such biomass, particularly its primary separation, still lack economic justification due to intense energy requirements. Establishing an economically viable and energy efficient biorefinery scheme is a significant challenge. In this work, a systematic approach is proposed for improving basic/existing biorefinery designs. This approach is based on enhancing the efficiency of mass and energy utilization through the use of a hierarchical design approach that involves mass and energy integration. The proposed procedure is applied to a novel biorefinery called Organocat to minimize its energy and mass consumption and total annualized cost. An improved heat exchanger network with minimum energy consumption of 4.5 MJ/kg dry biomass is designed. An optimal recycle network with zero fresh water usage and minimum waste discharge is also constructed, making the process more competitive and economically attractive. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Location-Driven Image Retrieval for Images Collected by a Mobile Robot

    NASA Astrophysics Data System (ADS)

    Tanaka, Kanji; Hirayama, Mitsuru; Okada, Nobuhiro; Kondo, Eiji

    Mobile robot teleoperation is a method for a human user to interact with a mobile robot over time and distance. Successful teleoperation depends on how well the images taken by the mobile robot are visualized to the user. To enhance the efficiency and flexibility of this visualization, an image retrieval system over the robot's image database would be very useful. The main difference between the robot's image database and standard image databases is that various relevant images exist due to the variety of viewing conditions. The main contribution of this paper is an efficient retrieval approach, named the location-driven approach, that utilizes the correlation between the visual features and the real-world locations of images. Combining the location-driven approach with the conventional feature-driven approach, our goal can be viewed as finding an optimal classifier between relevant and irrelevant feature-location pairs. An active learning technique based on support vector machines is extended for this aim.

  2. Adjoint-Based Aerodynamic Design of Complex Aerospace Configurations

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.

    2016-01-01

    An overview of twenty years of adjoint-based aerodynamic design research at NASA Langley Research Center is presented. Adjoint-based algorithms provide a powerful tool for efficient sensitivity analysis of complex large-scale computational fluid dynamics (CFD) simulations. Unlike alternative approaches for which computational expense generally scales with the number of design parameters, adjoint techniques yield sensitivity derivatives of a simulation output with respect to all input parameters at the cost of a single additional simulation. With modern large-scale CFD applications often requiring millions of compute hours for a single analysis, the efficiency afforded by adjoint methods is critical in realizing a computationally tractable design optimization capability for such applications.
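
    The cost argument is easy to see on a linear model problem: if the state solves A u = b(p) and the output is J = c^T u, a single adjoint solve with A^T yields the gradient of J with respect to every parameter at once. The sketch below illustrates this with our own toy setup, not the paper's CFD machinery.

      import numpy as np

      rng = np.random.default_rng(0)
      n, m = 50, 200                        # state size, number of design parameters
      A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned "flow" operator
      B = rng.standard_normal((n, m))       # right-hand side b(p) = B p
      c = rng.standard_normal(n)            # output functional J = c^T u

      lam = np.linalg.solve(A.T, c)         # one adjoint solve: A^T lam = c
      grad = B.T @ lam                      # dJ/dp_k = lam^T (db/dp_k), all m at once

      # Finite-difference check on one parameter (doing all m this way costs m solves).
      J = lambda p: c @ np.linalg.solve(A, B @ p)
      h = 1e-6
      e0 = np.zeros(m); e0[0] = h
      assert abs((J(e0) - J(np.zeros(m))) / h - grad[0]) < 1e-4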

  3. Measuring population health: costs of alternative survey approaches in the Nouna Health and Demographic Surveillance System in rural Burkina Faso

    PubMed Central

    Lietz, Henrike; Lingani, Moustapha; Sié, Ali; Sauerborn, Rainer; Souares, Aurelia; Tozan, Yesim

    2015-01-01

    Background There are more than 40 Health and Demographic Surveillance System (HDSS) sites in 19 different countries. The running costs of HDSS sites are high. The financing of HDSS activities is of major importance, and adding external health surveys to the HDSS is challenging. To investigate the ways of improving data quality and collection efficiency in the Nouna HDSS in Burkina Faso, the stand-alone data collection activities of the HDSS and the Household Morbidity Survey (HMS) were integrated, and the paper-based questionnaires were consolidated into a single tablet-based questionnaire, the Comprehensive Disease Assessment (CDA). Objective The aims of this study are to estimate and compare the implementation costs of the two different survey approaches for measuring population health. Design All financial costs of stand-alone (HDSS and HMS) and integrated (CDA) surveys were estimated from the perspective of the implementing agency. Fixed and variable costs of survey implementation and key cost drivers were identified. The costs per household visit were calculated for both survey approaches. Results While fixed costs of survey implementation were similar for the two survey approaches, there were considerable variations in variable costs, resulting in an estimated annual cost saving of about US$45,000 under the integrated survey approach. This was primarily because the costs of data management for the tablet-based CDA survey were considerably lower than for the paper-based stand-alone surveys. The cost per household visit from the integrated survey approach was US$21 compared with US$25 from the stand-alone surveys for collecting the same amount of information from 10,000 HDSS households. Conclusions The CDA tablet-based survey method appears to be feasible and efficient for collecting health and demographic data in the Nouna HDSS in rural Burkina Faso. The possibility of using the tablet-based data collection platform to improve the quality of population health data requires further exploration. PMID:26257048

  4. The indexed time table approach for planning and acting

    NASA Technical Reports Server (NTRS)

    Ghallab, Malik; Alaoui, Amine Mounir

    1989-01-01

    A representation of symbolic temporal relations, called IxTeT, is discussed that is both powerful enough at the reasoning level for tasks such as plan generation, refinement, and modification, and efficient enough for dealing with real-time constraints in action monitoring and reactive planning. Such a representation for dealing with time is needed in a teleoperated space robot. After a brief survey of known approaches, the proposed representation is shown to be computationally efficient for managing a large database of temporal relations. Reactive planning with IxTeT is described and exemplified through the problem of mission planning and modification for a simple surveying satellite.

  5. Adjustable lossless image compression based on a natural splitting of an image into drawing, shading, and fine-grained components

    NASA Technical Reports Server (NTRS)

    Novik, Dmitry A.; Tilton, James C.

    1993-01-01

    The compression, or efficient coding, of single band or multispectral still images is becoming an increasingly important topic. While lossy compression approaches can produce reconstructions that are visually close to the original, many scientific and engineering applications require exact (lossless) reconstructions. However, the most popular and efficient lossless compression techniques do not fully exploit the two-dimensional structural links existing in the image data. We describe here a general approach to lossless data compression that effectively exploits two-dimensional structural links of any length. After describing in detail two main variants on this scheme, we discuss experimental results.

  6. Automated Measurement of Patient-Specific Tibial Slopes from MRI

    PubMed Central

    Amerinatanzi, Amirhesam; Summers, Rodney K.; Ahmadi, Kaveh; Goel, Vijay K.; Hewett, Timothy E.; Nyman, Edward

    2017-01-01

    Background: Multi-planar proximal tibial slopes may be associated with an increased likelihood of osteoarthritis and anterior cruciate ligament injury, due in part to their role in checking the anterior-posterior stability of the knee. Established methods suffer repeatability limitations and lack the computational efficiency needed for intuitive clinical adoption. The aims of this study were to develop a novel automated approach and to compare the repeatability and computational efficiency of the approach against previously established methods. Methods: Tibial slope geometries were obtained via MRI and measured using an automated Matlab-based approach. Data were compared for repeatability and evaluated for computational efficiency. Results: Mean lateral tibial slope (LTS) for females (7.2°) was greater than for males (1.66°). Mean LTS in the lateral concavity zone was greater for females (7.8° for females, 4.2° for males). Mean medial tibial slope (MTS) for females was also greater (9.3° vs. 4.6°). Along the medial concavity zone, female subjects demonstrated greater MTS. Conclusion: The automated method was more repeatable and computationally efficient than previously identified methods and may aid clinical assessment of knee injury risk, inform surgical planning, and support implant design efforts. PMID:28952547

  7. Adaptive correlation filter-based video stabilization without accumulative global motion estimation

    NASA Astrophysics Data System (ADS)

    Koh, Eunjin; Lee, Chanyong; Jeong, Dong Gil

    2014-12-01

    We present a digital video stabilization approach that provides both robustness and efficiency for practical applications. In this approach, we adopt a stabilization model that efficiently maintains spatio-temporal information from past input frames and can track the original stabilization position. Because of this stabilization model, the proposed method does not need accumulative global motion estimation and can recover the original position even if interframe motion estimation fails. It can also intelligently handle damaged or interrupted video sequences. Moreover, because the method is simple and well suited to parallel schemes, we readily implemented it on a commercial field-programmable gate array and on a graphics processing unit board using the compute unified device architecture (CUDA). Experimental results show that the proposed approach is both fast and robust.

  8. Analysis of case-only studies accounting for genotyping error.

    PubMed

    Cheng, K F

    2007-03-01

    The case-only design provides one approach to assess possible interactions between genetic and environmental factors. It has been shown that if these factors are conditionally independent, then a case-only analysis is not only valid but also very efficient. However, a drawback of the case-only approach is that its conclusions may be biased by genotyping errors. In this paper, our main aim is to propose a method for analysis of case-only studies when these errors occur. We show that the bias can be adjusted through the use of internal validation data, which are obtained by genotyping some sampled individuals twice. Our analysis is based on a simple and yet highly efficient conditional likelihood approach. Simulation studies considered in this paper confirm that the new method has acceptable performance under genotyping errors.

  9. In silico fragment-based drug design.

    PubMed

    Konteatis, Zenon D

    2010-11-01

    In silico fragment-based drug design (FBDD) is a relatively new approach inspired by the success of the biophysical fragment-based drug discovery field. Here, we review the progress made by this approach in the last decade and showcase how it complements and expands the capabilities of biophysical FBDD and structure-based drug design to generate diverse, efficient drug candidates. Advancements in several areas of research that have enabled the development of in silico FBDD, and some applications in drug discovery projects, are reviewed. The reader is introduced to the various computational methods used for in silico FBDD, the fragment library composition for this technique, special applications used to identify binding sites on the surface of proteins, and how to assess the druggability of these sites. In addition, the reader will gain insight into the proper application of this approach from examples of successful programs. In silico FBDD captures a much larger chemical space than high-throughput screening and biophysical FBDD, increasing the probability of developing more diverse, patentable and efficient molecules that can become oral drugs. The application of in silico FBDD holds great promise for historically challenging targets such as protein-protein interactions. Future advances in force fields, scoring functions and automated methods for determining synthetic accessibility will all aid in delivering more successes with in silico FBDD.

  10. Optimal selection of epitopes for TXP-immunoaffinity mass spectrometry.

    PubMed

    Planatscher, Hannes; Supper, Jochen; Poetz, Oliver; Stoll, Dieter; Joos, Thomas; Templin, Markus F; Zell, Andreas

    2010-06-25

    Mass spectrometry (MS) based protein profiling has become one of the key technologies in biomedical research and biomarker discovery. One bottleneck in MS-based protein analysis is sample preparation, in particular an efficient fractionation step to reduce the complexity of biological samples, which are too complex to be analyzed directly with MS. Sample preparation strategies that reduce the complexity of tryptic digests using immunoaffinity-based methods have been shown to lead to a substantial increase in throughput and sensitivity in proteomic mass spectrometry. The limitation of such immunoaffinity-based approaches is the availability of appropriate peptide-specific capture antibodies. Recent developments, in which subsets of peptides with short identical terminal sequences are enriched using antibodies directed against short terminal epitopes, promise a significant gain in efficiency. We show that the minimal set of terminal epitopes covering a target protein list can be found by formulating the task as a set cover problem, preceded by a filtering pipeline that excludes peptides and target epitopes with undesirable properties. For small datasets (a few hundred proteins) it is possible to solve the problem to optimality with moderate computational effort using commercial or free solvers. Larger datasets, like full proteomes, require the use of heuristics.
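
    Although the abstract mentions exact solvers, the classic heuristic for larger instances is greedy set cover, which repeatedly picks the epitope covering the most still-uncovered proteins. Below is a minimal sketch with a hypothetical data layout: a dict mapping each candidate epitope to the set of proteins its peptides cover.

      def greedy_set_cover(universe, covers):
          """covers: dict epitope -> set of proteins covered by that epitope."""
          uncovered = set(universe)
          chosen = []
          while uncovered:
              # epitope covering the most still-uncovered proteins
              best = max(covers, key=lambda e: len(covers[e] & uncovered))
              gained = covers[best] & uncovered
              if not gained:
                  break                     # remaining proteins are not coverable
              chosen.append(best)
              uncovered -= gained
          return chosen, uncovered          # selected epitopes, uncoverable leftovers

    The greedy choice carries the standard logarithmic approximation guarantee for set cover, which is what makes it practical for proteome-sized target lists.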

  11. Evaluating a Collaborative Approach to Improve Prior Authorization Efficiency in the Treatment of Hepatitis C Virus.

    PubMed

    Dunn, Emily E; Vranek, Kathryn; Hynicka, Lauren M; Gripshover, Janet; Potosky, Darryn; Mattingly, T Joseph

    A team-based approach to obtaining prior authorization approval was implemented utilizing a specialty pharmacy, a clinic-based pharmacy technician specialist, and a registered nurse to work with providers to obtain approval for medications for hepatitis C virus (HCV) infection. The objective of this study was to evaluate the time to approval for prescribed treatment of HCV infection. A retrospective observational study was conducted including patients treated for HCV infection by clinic providers who received at least 1 oral direct-acting antiviral HCV medication. Patients were divided into 2 groups, based on whether they were treated before or after the implementation of the team-based approach. Student t tests were used to compare average wait times before and after the intervention. The sample included 180 patients, 68 treated before the intervention and 112 patients who initiated therapy after. All patients sampled required prior authorization approval by a third-party payer to begin therapy. There was a statistically significant reduction (P = .02) in average wait time in the postintervention group (15.6 ± 12.1 days) once adjusted using dates of approval. Pharmacy collaboration may provide increases in efficiency in provider prior authorization practices and reduced wait time for patients to begin treatment.

  12. Evaluating a Collaborative Approach to Improve Prior Authorization Efficiency in the Treatment of Hepatitis C Virus

    PubMed Central

    Dunn, Emily E.; Vranek, Kathryn; Hynicka, Lauren M.; Gripshover, Janet; Potosky, Darryn

    2017-01-01

    Objective: A team-based approach to obtaining prior authorization approval was implemented utilizing a specialty pharmacy, a clinic-based pharmacy technician specialist, and a registered nurse to work with providers to obtain approval for medications for hepatitis C virus (HCV) infection. The objective of this study was to evaluate the time to approval for prescribed treatment of HCV infection. Methods: A retrospective observational study was conducted including patients treated for HCV infection by clinic providers who received at least 1 oral direct-acting antiviral HCV medication. Patients were divided into 2 groups, based on whether they were treated before or after the implementation of the team-based approach. Student t tests were used to compare average wait times before and after the intervention. Results: The sample included 180 patients, 68 treated before the intervention and 112 patients who initiated therapy after. All patients sampled required prior authorization approval by a third-party payer to begin therapy. There was a statistically significant reduction (P = .02) in average wait time in the postintervention group (15.6 ± 12.1 days) once adjusted using dates of approval. Conclusions: Pharmacy collaboration may provide increases in efficiency in provider prior authorization practices and reduced wait time for patients to begin treatment. PMID:28665904

  13. Real-time probabilistic covariance tracking with efficient model update.

    PubMed

    Wu, Yi; Cheng, Jian; Wang, Jinqiao; Lu, Hanqing; Wang, Jun; Ling, Haibin; Blasch, Erik; Bai, Li

    2012-05-01

    The recently proposed covariance region descriptor has been proven robust and versatile at a modest computational cost. The covariance matrix enables efficient fusion of different types of features, characterizing their spatial and statistical properties as well as their correlation. The similarity between two covariance descriptors is measured on Riemannian manifolds. Based on the same metric but within a probabilistic framework, we propose a novel tracking approach on Riemannian manifolds with a novel incremental covariance tensor learning (ICTL) scheme. To address appearance variations, ICTL incrementally learns a low-dimensional covariance tensor representation and efficiently adapts online to appearance changes of the target with only O(1) computational complexity, resulting in real-time performance. The covariance-based representation and ICTL are then combined with the particle filter framework to allow better handling of background clutter as well as temporary occlusions. We test the proposed probabilistic ICTL tracker on numerous benchmark sequences involving different types of challenges, including occlusions and variations in illumination, scale, and pose. The proposed approach demonstrates excellent real-time performance, both qualitatively and quantitatively, in comparison with several previously proposed trackers.
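
    For orientation, the region covariance descriptor itself is simple to compute: stack per-pixel features and take their covariance. The sketch below uses one common feature choice (coordinates, intensity, gradient magnitudes); the paper's exact feature set and its Riemannian tracking machinery are beyond this illustration.

      import numpy as np

      def covariance_descriptor(patch):
          """5x5 region covariance of per-pixel features [x, y, I, |Ix|, |Iy|]."""
          h, w = patch.shape
          ys, xs = np.mgrid[0:h, 0:w]
          Iy, Ix = np.gradient(patch.astype(float))        # image gradients
          F = np.stack([xs.ravel(), ys.ravel(), patch.ravel().astype(float),
                        np.abs(Ix).ravel(), np.abs(Iy).ravel()])
          return np.cov(F)                                  # features x features

    Two such descriptors are then compared not with a Euclidean norm but with a metric on symmetric positive-definite matrices, for example the log-Euclidean distance, the Frobenius norm of the difference of matrix logarithms.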

  14. GIFTed Demons: deformable image registration with local structure-preserving regularization using supervoxels for liver applications

    PubMed Central

    Gleeson, Fergus V.; Brady, Michael; Schnabel, Julia A.

    2018-01-01

    Deformable image registration, a key component of motion correction in medical imaging, needs to be efficient and to provide plausible spatial transformations that reliably approximate biological aspects of complex human organ motion. Standard approaches, such as Demons registration, mostly use Gaussian regularization for organ motion which, though computationally efficient, rules out their application to intrinsically more complex organ motions, such as sliding interfaces. We propose regularization of motion based on supervoxels, which provides an integrated discontinuity-preserving prior for motions such as sliding. More precisely, we replace Gaussian smoothing by fast, structure-preserving, guided filtering to provide efficient, locally adaptive regularization of the estimated displacement field. We illustrate the approach by applying it to estimate sliding motions at lung and liver interfaces on challenging four-dimensional computed tomography (CT) and dynamic contrast-enhanced magnetic resonance imaging datasets. The results show that guided-filter-based regularization improves the accuracy of lung and liver motion correction as compared to Gaussian smoothing. Furthermore, our framework achieves state-of-the-art results on a publicly available CT liver dataset. PMID:29662918

  15. GIFTed Demons: deformable image registration with local structure-preserving regularization using supervoxels for liver applications.

    PubMed

    Papież, Bartłomiej W; Franklin, James M; Heinrich, Mattias P; Gleeson, Fergus V; Brady, Michael; Schnabel, Julia A

    2018-04-01

    Deformable image registration, a key component of motion correction in medical imaging, needs to be efficient and to provide plausible spatial transformations that reliably approximate biological aspects of complex human organ motion. Standard approaches, such as Demons registration, mostly use Gaussian regularization for organ motion which, though computationally efficient, rules out their application to intrinsically more complex organ motions, such as sliding interfaces. We propose regularization of motion based on supervoxels, which provides an integrated discontinuity-preserving prior for motions such as sliding. More precisely, we replace Gaussian smoothing by fast, structure-preserving, guided filtering to provide efficient, locally adaptive regularization of the estimated displacement field. We illustrate the approach by applying it to estimate sliding motions at lung and liver interfaces on challenging four-dimensional computed tomography (CT) and dynamic contrast-enhanced magnetic resonance imaging datasets. The results show that guided-filter-based regularization improves the accuracy of lung and liver motion correction as compared to Gaussian smoothing. Furthermore, our framework achieves state-of-the-art results on a publicly available CT liver dataset.
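
    The regularization swap at the heart of the method, guided filtering instead of Gaussian smoothing of the displacement field, can be sketched compactly. Below is the standard guided filter of He et al. built from box filters; in a Demons-style loop one would filter each displacement component with the fixed image as the guide. This is our simplification, not the authors' supervoxel-adaptive version.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def guided_filter(guide, p, r=4, eps=1e-3):
          """Edge-preserving smoothing of p, guided by the image `guide`."""
          mean = lambda x: uniform_filter(x, size=2 * r + 1)
          mI, mp = mean(guide), mean(p)
          var_I = mean(guide * guide) - mI * mI
          cov_Ip = mean(guide * p) - mI * mp
          a = cov_Ip / (var_I + eps)          # local linear model: q = a * I + b
          b = mp - a * mI
          return mean(a) * guide + mean(b)    # averaged coefficients give smooth output

      # Demons-style use: smooth each displacement component while preserving the
      # edges (e.g. sliding interfaces) present in the fixed image:
      # u_smooth = guided_filter(fixed_image, u_component)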

  16. Fragment-based lead generation: identification of seed fragments by a highly efficient fragment screening technology

    NASA Astrophysics Data System (ADS)

    Neumann, Lars; Ritscher, Allegra; Müller, Gerhard; Hafenbradl, Doris

    2009-08-01

    For the detection of precise and unambiguous binding of fragments to a specific binding site on a target protein, we have developed a novel reporter-displacement binding assay technology. The application of this technology to fragment screening, as well as to a fragment evolution process with a specific modelling-based design strategy, is demonstrated for inhibitors of the protein kinase p38alpha. In a fragment screening approach, seed fragments were identified and then used to build compounds from the deep pocket towards the hinge-binding area of p38alpha, guided by a modelling approach. BIRB796 was used as a blueprint for the alignment of the fragments. The fragment evolution of these deep-pocket-binding fragments towards the fully optimized inhibitor BIRB796 included modulation of the residence time as well as of the affinity. The goal of our study was to evaluate the robustness and efficiency of our novel fragment screening technology at high fragment concentrations, to compare the screening data with biochemical activity data, and to demonstrate the in silico evolution of fast-kinetics hit fragments into slow-kinetics inhibitors.

  17. Characterization and effectiveness of pay-for-performance in ophthalmology: a systematic review.

    PubMed

    Herbst, Tim; Emmert, Martin

    2017-06-05

    To identify, characterize and compare existing pay-for-performance approaches and their impact on the quality of care and efficiency in ophthalmology, a systematic evidence-based review was conducted. Literature written in English, French and German and published between 2000 and 2015 was searched in the following databases: Medline (via PubMed), the NCBI web site, Scopus, Web of Knowledge, Econlit and the Cochrane Library. Empirical as well as descriptive articles were included. Controlled clinical trials, meta-analyses, randomized controlled studies and observational studies were included as empirical articles. Systematic characterization of the identified pay-for-performance approaches (P4P approaches) was conducted according to the "Model for Implementing and Monitoring Incentives for Quality" (MIMIQ). The methodological quality of empirical articles was assessed according to the Critical Appraisal Skills Programme (CASP) checklists. Overall, 13 relevant articles were included. Eleven articles were descriptive and two included empirical analyses. Based on these articles, four different pay-for-performance approaches implemented in the United States were identified. With regard to quality and incentive elements, systematic comparison showed numerous differences between P4P approaches. Empirical studies showed isolated cost or quality effects, while a simultaneous examination of these effects was missing. The results show that experience with pay-for-performance approaches in ophthalmology is limited. The identified approaches differ with regard to quality and incentive elements, restricting comparability. Two empirical studies are insufficient to draw strong conclusions about the effectiveness and efficiency of these approaches.

  18. Reliability based design optimization: Formulations and methodologies

    NASA Astrophysics Data System (ADS)

    Agarwal, Harish

    Modern products ranging from simple components to complex systems should be designed to be optimal and reliable. The challenge of modern engineering is to ensure that manufacturing costs are reduced and design cycle times are minimized while achieving requirements for performance and reliability. If the market for the product is competitive, improved quality and reliability can generate very strong competitive advantages. Simulation-based design plays an important role in designing almost any kind of automotive, aerospace, and consumer product under these competitive conditions. Single-discipline simulations used for analysis are being coupled together to create complex coupled simulation tools. This investigation focuses on the development of efficient and robust methodologies for reliability-based design optimization in a simulation-based design environment. The original contributions of this research are the development of a novel efficient and robust unilevel methodology for reliability-based design optimization, the development of an innovative decoupled reliability-based design optimization methodology, the application of homotopy techniques in the unilevel methodology, and the development of a new framework for reliability-based design optimization under epistemic uncertainty. The unilevel methodology is shown to be mathematically equivalent to the traditional nested formulation. Numerical test problems show that it can reduce computational cost by at least 50% compared to the nested approach. The decoupled methodology is an approximate technique for obtaining consistent reliable designs at lower computational expense; test problems show that it is computationally efficient compared to the nested approach. A framework for performing reliability-based design optimization under epistemic uncertainty is also developed, employing a trust-region-managed sequential approximate optimization methodology. Results from numerical test studies indicate that the methodology can be used for performing design optimization under severe uncertainty.

  19. Advances in the indirect, descriptive, and experimental approaches to the functional analysis of problem behavior.

    PubMed

    Wightman, Jade; Julio, Flávia; Virués-Ortega, Javier

    2014-05-01

    Experimental functional analysis is an assessment methodology for identifying the environmental factors that maintain problem behavior in individuals with developmental disabilities and in other populations. Functional analysis provides the basis for the development of reinforcement-based approaches to treatment. This article reviews the procedures, validity, and clinical implementation of the methodological variations of functional analysis and of function-based interventions. We present five variations of functional analysis methodology in addition to the typical functional analysis: brief functional analysis, single-function tests, latency-based functional analysis, functional analysis of precursors, and trial-based functional analysis. We also present the three general categories of function-based interventions: extinction, antecedent manipulation, and differential reinforcement. Functional analysis methodology is a valid and efficient approach to the assessment of problem behavior and the selection of treatment strategies.

  20. Performance improvement of optical CDMA networks with stochastic artificial bee colony optimization technique

    NASA Astrophysics Data System (ADS)

    Panda, Satyasen

    2018-05-01

    This paper proposes a modified artificial bee colony (ABC) optimization algorithm based on Levy-flight swarm intelligence, referred to as artificial bee colony Levy-flight stochastic walk (ABC-LFSW) optimization, for optical code division multiple access (OCDMA) networks. The ABC-LFSW algorithm is used to solve the asset assignment problem based on signal-to-noise ratio (SNR) optimization in OCDMA networks with quality-of-service constraints. The proposed optimization using the ABC-LFSW algorithm provides methods for minimizing various noises and interferences, regulating the transmitted power, and optimizing the network design to improve the power efficiency of the optical code path (OCP) from source node to destination node. In this regard, an optical system model is proposed for improving the network performance with optimized input parameters. A detailed discussion and simulation results based on transmitted power allocation and the power efficiency of OCPs are included. The experimental results prove the superiority of the proposed network in terms of power efficiency and spectral efficiency in comparison to networks without any power allocation approach.
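
    The Levy-flight ingredient that distinguishes ABC-LFSW from plain ABC is a heavy-tailed step-length distribution, commonly drawn with Mantegna's algorithm. Below is a generic sketch of such a step generator; the parameter values are typical defaults, not the paper's settings.

      import numpy as np
      from math import gamma, sin, pi

      def levy_step(beta=1.5, size=1, rng=None):
          """Heavy-tailed step lengths via Mantegna's algorithm (Levy index beta)."""
          rng = np.random.default_rng() if rng is None else rng
          sigma = (gamma(1 + beta) * sin(pi * beta / 2)
                   / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
          u = rng.normal(0.0, sigma, size)
          v = rng.normal(0.0, 1.0, size)
          return u / np.abs(v) ** (1 / beta)   # occasional long jumps escape local optima

      # A bee-colony position update using such steps might look like:
      # x_new = x_old + 0.01 * levy_step(size=x_old.shape, rng=rng) * (x_old - x_best)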

  1. Computer-based learning: interleaving whole and sectional representation of neuroanatomy.

    PubMed

    Pani, John R; Chariker, Julia H; Naaz, Farah

    2013-01-01

    The large volume of material to be learned in biomedical disciplines requires optimizing the efficiency of instruction. In prior work with computer-based instruction of neuroanatomy, it was relatively efficient for learners to master whole anatomy and then transfer to learning sectional anatomy. It may, however, be more efficient to continuously integrate learning of whole and sectional anatomy. A study of computer-based learning of neuroanatomy was conducted to compare a basic transfer paradigm for learning whole and sectional neuroanatomy with a method in which the two forms of representation were interleaved (alternated). For all experimental groups, interactive computer programs supported an approach to instruction called adaptive exploration. Each learning trial consisted of time-limited exploration of neuroanatomy, self-timed testing, and graphical feedback. The primary result of this study was that interleaved learning of whole and sectional neuroanatomy was more efficient than the basic transfer method, without cost to long-term retention or generalization of knowledge to recognizing new images (Visible Human and MRI). Copyright © 2012 American Association of Anatomists.

  2. 0D-2D Quantum Dot: Metal Dichalcogenide Nanocomposite Photocatalyst Achieves Efficient Hydrogen Generation.

    PubMed

    Liu, Xiao-Yuan; Chen, Hao; Wang, Ruili; Shang, Yuequn; Zhang, Qiong; Li, Wei; Zhang, Guozhen; Su, Juan; Dinh, Cao Thang; de Arquer, F Pelayo García; Li, Jie; Jiang, Jun; Mi, Qixi; Si, Rui; Li, Xiaopeng; Sun, Yuhan; Long, Yi-Tao; Tian, He; Sargent, Edward H; Ning, Zhijun

    2017-06-01

    Hydrogen generation via photocatalysis-driven water splitting provides a convenient approach to turning solar energy into chemical fuel. The development of photocatalytic systems that can effectively harvest visible light for hydrogen generation is an essential task in order to utilize this technology. Herein, cadmium-free Zn-Ag-In-S (ZAIS) colloidal quantum dots (CQDs) that show remarkable photocatalytic efficiency in the visible region are developed. More importantly, a nanocomposite based on the combination of 0D ZAIS CQDs and 2D MoS2 nanosheets is developed, leveraging the strong light-harvesting capability of the CQDs and the catalytic performance of MoS2 simultaneously. As a result, an excellent external quantum efficiency of 40.8% at 400 nm is achieved for this CQD-based hydrogen generation catalyst. This work presents a new platform for the development of high-efficiency photocatalysts based on 0D-2D nanocomposites. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. High-efficiency power transfer for silicon-based photonic devices

    NASA Astrophysics Data System (ADS)

    Son, Gyeongho; Yu, Kyoungsik

    2018-02-01

    We demonstrate an efficient coupling of guided light of 1550 nm from a standard single-mode optical fiber to a silicon waveguide using the finite-difference time-domain method and propose a fabrication method of tapered optical fibers for efficient power transfer to silicon-based photonic integrated circuits. Adiabatically-varying fiber core diameters with a small tapering angle can be obtained using the tube etching method with hydrofluoric acid and standard single-mode fibers covered by plastic jackets. The optical power transmission of the fundamental HE11 and TE-like modes between the fiber tapers and the inversely-tapered silicon waveguides was calculated with the finite-difference time-domain method to be more than 99% at a wavelength of 1550 nm. The proposed method for adiabatic fiber tapering can be applied in quantum optics, silicon-based photonic integrated circuits, and nanophotonics. Furthermore, efficient coupling within the telecommunication C-band is a promising approach for quantum networks in the future.

  4. Computer-Based Learning: Interleaving Whole and Sectional Representation of Neuroanatomy

    PubMed Central

    Pani, John R.; Chariker, Julia H.; Naaz, Farah

    2015-01-01

    The large volume of material to be learned in biomedical disciplines requires optimizing the efficiency of instruction. In prior work with computer-based instruction of neuroanatomy, it was relatively efficient for learners to master whole anatomy and then transfer to learning sectional anatomy. It may, however, be more efficient to continuously integrate learning of whole and sectional anatomy. A study of computer-based learning of neuroanatomy was conducted to compare a basic transfer paradigm for learning whole and sectional neuroanatomy with a method in which the two forms of representation were interleaved (alternated). For all experimental groups, interactive computer programs supported an approach to instruction called adaptive exploration. Each learning trial consisted of time-limited exploration of neuroanatomy, self-timed testing, and graphical feedback. The primary result of this study was that interleaved learning of whole and sectional neuroanatomy was more efficient than the basic transfer method, without cost to long-term retention or generalization of knowledge to recognizing new images (Visible Human and MRI). PMID:22761001

  5. Surface engineering of nanoparticles in suspension for particle based bio-sensing

    PubMed Central

    Sen, Tapas; Bruce, Ian J.

    2012-01-01

    Surface activation of nanoparticles in suspension using an amino organosilane has been carried out via strict control of the particle-surface ad-layer of water, using a simple but efficient protocol, 'Tri-phasic Reverse Emulsion' (TPRE). This approach produced thin and ordered layers of particle-surface functional groups, which allowed the efficient conjugation of biomolecules. When used in bio-sensing applications, the resultant conjugates were highly efficient in the hybrid capture of complementary oligonucleotides and in the detection of a food-borne microorganism. TPRE overcomes a number of fundamental problems associated with the surface modification of particles in aqueous suspension, viz. particle aggregation and the density and organization of the resultant surface functional groups, by controlling the surface condensation of the aminosilane. The approach has potential for application in areas as diverse as nanomedicine, food technology and industrial catalysis. PMID:22872809

  6. Efficient method to create integration-free, virus-free, Myc and Lin28-free human induced pluripotent stem cells from adherent cells.

    PubMed

    Kamath, Anant; Ternes, Sara; McGowan, Stephen; English, Anthony; Mallampalli, Rama; Moy, Alan B

    2017-08-01

    Nonviral induced pluripotent stem cell (IPSC) reprogramming is not efficient without the oncogenes Myc and Lin28. We describe a robust Myc- and Lin28-free IPSC reprogramming approach using reprogramming molecules. IPSC colony formation was compared in the presence and absence of Myc and Lin28 using a mixture of reprogramming molecules and episomal vectors. While more colonies were observed in cultures transfected with the aforementioned oncogenes, the Myc- and Lin28-free method achieved the same reprogramming efficiency as reports that used these oncogenes. Further, all colonies were fully reprogrammed based on expression of SSEA4, even in the absence of Myc and Lin28. This approach satisfies an important regulatory pathway for developing IPSC cell therapies with lower clinical risk.

  7. Efficient nanoparticle mediated sustained RNA interference in human primary endothelial cells

    NASA Astrophysics Data System (ADS)

    Mukerjee, Anindita; Shankardas, Jwalitha; Ranjan, Amalendu P.; Vishwanatha, Jamboor K.

    2011-11-01

    Endothelium forms an important target for drug and/or gene therapy since endothelial cells play critical roles in angiogenesis and vascular functions and are associated with various pathophysiological conditions. RNA mediated gene silencing presents a new therapeutic approach to overcome many such diseases, but the major challenge of such an approach is to ensure minimal toxicity and effective transfection efficiency of short hairpin RNA (shRNA) to primary endothelial cells. In the present study, we formulated shAnnexin A2 loaded poly(D,L-lactide-co-glycolide) (PLGA) nanoparticles which produced intracellular small interfering RNA (siRNA) against Annexin A2 and brought about the downregulation of Annexin A2. The per cent encapsulation of the plasmid within the nanoparticle was found to be 57.65%. We compared our nanoparticle based transfections with Lipofectamine mediated transfection, and our studies show that nanoparticle based transfection efficiency is very high (~97%) and is more sustained compared to conventional Lipofectamine mediated transfections in primary retinal microvascular endothelial cells and human cancer cell lines. Our findings also show that the shAnnexin A2 loaded PLGA nanoparticles had minimal toxicity with almost 95% of cells being viable 24 h post-transfection while Lipofectamine based transfections resulted in only 30% viable cells. Therefore, PLGA nanoparticle based transfection may be used for efficient siRNA transfection to human primary endothelial and cancer cells. This may serve as a potential adjuvant treatment option for diseases such as diabetic retinopathy, retinopathy of prematurity and age related macular degeneration besides various cancers.

  8. On the sound insulation of acoustic metasurface using a sub-structuring approach

    NASA Astrophysics Data System (ADS)

    Yu, Xiang; Lu, Zhenbo; Cheng, Li; Cui, Fangsen

    2017-08-01

    The feasibility of using an acoustic metasurface (AMS) with acoustic stop-band properties to realize sound insulation with a ventilation function is investigated. An efficient numerical approach is proposed to evaluate its sound insulation performance. The AMS is excited by a reverberant sound source and the standardized sound reduction index (SRI) is numerically investigated. To facilitate the modeling, the coupling between the AMS and the adjacent acoustic fields is formulated using a sub-structuring approach. A modal-based formulation is applied to both the source and receiving room, enabling an efficient calculation in the frequency range from 125 Hz to 2000 Hz. The sound pressures and velocities at the interface are matched by using a transfer function relation based on "patches". For illustration purposes, numerical examples are investigated using the proposed approach. The unit cell constituting the AMS is constructed in the shape of a thin acoustic chamber with tailored inner structures, whose stop-band property is numerically analyzed and experimentally demonstrated. The AMS is shown to provide effective sound insulation of over 30 dB in the stop-band frequencies from 600 to 1600 Hz. It is also shown that the proposed approach has the potential to be applied to a broad range of AMS studies and optimization problems.

  9. A high-resolution peak fractionation approach for streamlined screening of nuclear-factor-E2-related factor-2 activators in Salvia miltiorrhiza.

    PubMed

    Zhang, Hui; Luo, Li-Ping; Song, Hui-Peng; Hao, Hai-Ping; Zhou, Ping; Qi, Lian-Wen; Li, Ping; Chen, Jun

    2014-01-24

    Generation of a high-purity fraction library for efficiently screening active compounds from natural products is challenging because of their chemical diversity and complex matrices. In this work, a strategy combining high-resolution peak fractionation (HRPF) with a cell-based assay was proposed for target screening of bioactive constituents from natural products. In this approach, peak fractionation was conducted under chromatographic conditions optimized for high-resolution separation of the natural product extract. The HRPF approach was performed automatically according to the predefinition of certain peaks based on their retention times from a reference chromatographic profile. The corresponding HRPF database was collected with a parallel mass spectrometer to ensure purity and characterize the structures of compounds in the various fractions. Using this approach, a set of 75 peak fractions on the microgram scale was generated from 4 mg of the extract of Salvia miltiorrhiza. After screening by an ARE-luciferase reporter gene assay, 20 diterpene quinones were selected and identified, 16 of which were found to possess previously unreported Nrf2 activation activity. Compared with conventional fixed-time-interval fractionation, the HRPF approach could significantly improve the efficiency of bioactive compound discovery and facilitate the uncovering of minor active components.

  10. A probabilistic bridge safety evaluation against floods.

    PubMed

    Liao, Kuo-Wei; Muto, Yasunori; Chen, Wei-Lun; Wu, Bang-Ho

    2016-01-01

    To further capture the influences of uncertain factors on river bridge safety evaluation, a probabilistic approach is adopted. Because this is a systemic and nonlinear problem, most-probable-point (MPP)-based reliability analyses are not suitable. A sampling approach such as Monte Carlo simulation (MCS) or importance sampling is often adopted. To enhance the efficiency of the sampling approach, this study utilizes Bayesian least squares support vector machines to construct a response surface followed by an MCS, providing a more precise safety index. Although there are several factors impacting the flood-resistant reliability of a bridge, previous experience and studies show that the reliability of the bridge itself plays a key role. Thus, the goal of this study is to analyze the system reliability of a selected bridge that includes five limit states. The random variables considered here include the water surface elevation, water velocity, local scour depth, soil properties and wind load. Because the first three variables are deeply affected by river hydraulics, a probabilistic HEC-RAS-based simulation is performed to capture the uncertainties in those random variables. The accuracy and variation of our solutions are confirmed by a direct MCS to ensure the applicability of the proposed approach. The results of a numerical example indicate that the proposed approach can efficiently provide an accurate bridge safety evaluation with satisfactory variation.
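
    The surrogate-plus-sampling workflow in this record can be illustrated in a few lines. The sketch below is a minimal stand-in, assuming a toy two-variable limit-state function and substituting scikit-learn's SVR for the paper's Bayesian least squares support vector machine; all names and constants are hypothetical.

    ```python
    import numpy as np
    from scipy.stats import norm
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)

    # Toy limit-state function: g(x) > 0 means "safe". It stands in for the
    # bridge limit states driven by water level, velocity, scour depth, etc.
    def g(x):
        return 4.0 - x[:, 0] - 0.5 * x[:, 1] ** 2

    # Step 1: fit a response surface on a small design of experiments
    # (plain SVR here; the paper uses Bayesian least squares SVM).
    X_doe = rng.normal(size=(200, 2))
    surrogate = SVR(kernel="rbf", C=100.0).fit(X_doe, g(X_doe))

    # Step 2: run a cheap Monte Carlo simulation on the surrogate only.
    X_mc = rng.normal(size=(200_000, 2))
    pf = np.mean(surrogate.predict(X_mc) < 0.0)   # failure probability
    beta = -norm.ppf(max(pf, 1e-9))               # corresponding safety index
    print(f"pf = {pf:.4g}, beta = {beta:.2f}")
    ```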

  11. Design approaches to more energy efficient engines

    NASA Technical Reports Server (NTRS)

    Saunders, N. T.; Colladay, R. S.; Macioce, L. E.

    1978-01-01

    The status of NASA's Energy Efficient Engine Project, a cooperative government-industry effort aimed at advancing the technology base for the next generation of large turbofan engines for civil aircraft transports, is summarized. Results of recently completed studies are reviewed. These studies involved selection of engine cycles and configurations that offer potential for at least 12% lower fuel consumption than current engines and that also are economically attractive and environmentally acceptable. Emphasis is on the advancements required in component technologies and systems design concepts to permit future development of these more energy efficient engines.

  12. A comparison of DEA and SFA using micro- and macro-level perspectives: Efficiency of Chinese local banks

    NASA Astrophysics Data System (ADS)

    Silva, Thiago Christiano; Tabak, Benjamin Miranda; Cajueiro, Daniel Oliveira; Dias, Marina Villas Boas

    2017-03-01

    This study investigates to what extent results produced by a single frontier model are reliable, based on the application of data envelopment analysis (DEA) and the stochastic frontier approach (SFA) to a sample of Chinese local banks. Our findings show that the two models produce a consistent trend in global efficiency scores over the years. However, rank correlations indicate that they diverge with respect to individual performance diagnoses. Therefore, these models provide steady information on the efficiency of the banking system as a whole, but they become divergent at the individual level.
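
    As an illustration of how such a comparison can be run, the sketch below computes input-oriented CCR DEA scores by linear programming and checks rank agreement against a second set of scores. The data, the single-output setup, and the placeholder "SFA" scores are all assumptions for demonstration, not the study's dataset.

    ```python
    import numpy as np
    from scipy.optimize import linprog
    from scipy.stats import spearmanr

    def dea_ccr_input(X, Y, j0):
        """Input-oriented CCR efficiency of DMU j0.
        X: (m, n) inputs, Y: (s, n) outputs; columns are DMUs (banks)."""
        m, n = X.shape
        s = Y.shape[0]
        # Decision variables z = [theta, lambda_1..lambda_n]; minimize theta.
        c = np.r_[1.0, np.zeros(n)]
        # Inputs:  sum_j lambda_j x_ij - theta * x_i,j0 <= 0
        A_in = np.hstack([-X[:, [j0]], X])
        # Outputs: -sum_j lambda_j y_rj <= -y_r,j0
        A_out = np.hstack([np.zeros((s, 1)), -Y])
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(m), -Y[:, j0]],
                      bounds=[(None, None)] + [(0, None)] * n, method="highs")
        return res.x[0]

    rng = np.random.default_rng(1)
    X = rng.uniform(1, 10, size=(2, 20))   # 2 inputs, 20 hypothetical banks
    Y = rng.uniform(1, 10, size=(1, 20))   # 1 output
    dea = np.array([dea_ccr_input(X, Y, j) for j in range(20)])
    sfa = dea + rng.normal(0, 0.1, 20)     # placeholder for SFA scores
    rho, _ = spearmanr(dea, sfa)           # rank agreement between models
    print(f"mean efficiency {dea.mean():.2f}, rank correlation {rho:.2f}")
    ```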

  13. [The transpterygoid approach to the removal of a recurrent juvenile angiofibroma at the base of the skull without preoperative embolization].

    PubMed

    Grachev, N S; Vorozhtsov, I N

    The authors report a clinical case of successful elimination of a recurrent juvenile angiofibroma at the base of the skull (JAFBS) with the application of an optical navigation system and a cold plasma scalpel in the absence of preoperative embolization. It has been demonstrated, using the proposed transpterygoid approach to the extirpation of the tumour, that a recurrent juvenile angiofibroma at the base of the skull can be efficiently removed by means of a modern, minimally invasive and at the same time radical surgical method.

  14. Towards a semantics-based approach in the development of geographic portals

    NASA Astrophysics Data System (ADS)

    Athanasis, Nikolaos; Kalabokidis, Kostas; Vaitis, Michail; Soulakellis, Nikolaos

    2009-02-01

    As the demand for geospatial data increases, the lack of efficient ways to find suitable information becomes critical. In this paper, a new methodology for knowledge discovery in geographic portals is presented. Based on the Semantic Web, our approach exploits the Resource Description Framework (RDF) in order to describe the geoportal's information with ontology-based metadata. When users traverse from page to page in the portal, they take advantage of the metadata infrastructure to navigate easily through data of interest. New metadata descriptions are published in the geoportal according to the RDF schemas.
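
    A minimal sketch of ontology-based metadata of the kind this record describes, using Python's rdflib; the namespace, dataset URI and properties are invented for illustration and are not the authors' actual schema.

    ```python
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import DCTERMS, RDF

    # Hypothetical geoportal vocabulary and one dataset description.
    GEO = Namespace("http://example.org/geoportal#")
    g = Graph()
    ds = URIRef("http://example.org/geoportal/datasets/fire-risk-lesvos")
    g.add((ds, RDF.type, GEO.Dataset))
    g.add((ds, DCTERMS.title, Literal("Fire risk map, Lesvos")))
    g.add((ds, DCTERMS.subject, GEO.WildfireRisk))
    g.add((ds, GEO.coversRegion, GEO.NorthAegean))

    # Navigation: find every dataset about wildfire risk.
    for s in g.subjects(DCTERMS.subject, GEO.WildfireRisk):
        print(s, g.value(s, DCTERMS.title))

    # Publish the metadata description in Turtle, as an RDF schema consumer
    # (e.g. the portal's navigation layer) would read it.
    print(g.serialize(format="turtle"))
    ```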

  15. Experimental modeling of swirl flows in power plants

    NASA Astrophysics Data System (ADS)

    Shtork, S. I.; Litvinov, I. V.; Gesheva, E. S.; Tsoy, M. A.; Skripkin, S. G.

    2018-03-01

    The article presents an overview of the methods and approaches to experimental modeling of various thermal and hydropower units - furnaces of pulverized coal boilers and flow-through elements of hydro turbines. The presented modeling approaches based on a combination of experimentation and rapid prototyping of working parts may be useful in optimizing energy equipment to improve safety and efficiency of industrial energy systems.

  16. An Ecosystem-Based Approach to Valley Oak Mitigation

    Treesearch

    Marcus S. Rawlings; Daniel A. Airola

    1997-01-01

    The Contra Costa Water District’s (CCWD’s) Los Vaqueros Reservoir Project will inundate 180 acres of valley oak habitats. Instead of using replacement ratios to identify mitigation needs, we designed an approach that would efficiently replace lost ecological values. We developed a habitat quality index model to assess the value of lost wildlife habitat and...

  17. Biochar modification to enhance sorption of inorganics from water.

    PubMed

    Sizmur, Tom; Fresno, Teresa; Akgül, Gökçen; Frost, Harrison; Moreno-Jiménez, Eduardo

    2017-12-01

    Biochar can be used as a sorbent to remove inorganic pollutants from water, but the efficiency of sorption can be improved by activation or modification. This review evaluates various methods to increase the sorption efficiency of biochar, including activation with steam, acids and bases, and the production of biochar-based composites with metal oxides, carbonaceous materials, clays, organic compounds, and biofilms. We describe the approaches and explain how each modification alters the sorption capacity. Physical and chemical activation enhances the surface area or functionality of biochar, whereas modification to produce biochar-based composites uses the biochar as a scaffold to embed new materials, creating surfaces with novel properties upon which inorganic pollutants can sorb. Many of these approaches enhance the retention of a wide range of inorganic pollutants in waters, but here we provide a comparative assessment for Cd²⁺, Cu²⁺, Hg²⁺, Pb²⁺, Zn²⁺, NH₄⁺, NO₃⁻, PO₄³⁻, CrO₄²⁻ and AsO₄³⁻.

  18. On the Use of CAD and Cartesian Methods for Aerodynamic Optimization

    NASA Technical Reports Server (NTRS)

    Nemec, M.; Aftosmis, M. J.; Pulliam, T. H.

    2004-01-01

    The objective of this paper is to present the development of an optimization capability for Cart3D, a Cartesian inviscid-flow analysis package. We present the construction of a new optimization framework and focus on the following issues: 1) a component-based geometry parameterization approach using parametric-CAD models and CAPRI, with a novel geometry server that addresses the issue of parallel efficiency while only sparingly consuming CAD resources; and 2) the use of genetic and gradient-based algorithms for three-dimensional aerodynamic design problems, including a study of the influence of noise on the optimization methods. Our goal is to create a responsive and automated framework that efficiently identifies design modifications that result in substantial performance improvements. In addition, we examine the architectural issues associated with the deployment of a CAD-based approach in a heterogeneous parallel computing environment that contains both CAD workstations and dedicated compute engines. We demonstrate the effectiveness of the framework for a design problem that features topology changes and complex geometry.

  19. Construction of nested maximin designs based on successive local enumeration and modified novel global harmony search algorithm

    NASA Astrophysics Data System (ADS)

    Yi, Jin; Li, Xinyu; Xiao, Mi; Xu, Junnan; Zhang, Lin

    2017-01-01

    Engineering design often involves different types of simulation, which results in expensive computational costs. Variable-fidelity approximation-based design optimization approaches can enable effective simulation and efficient optimization of the design space using approximation models with different levels of fidelity, and they have been widely used in different fields. As the foundation of variable-fidelity approximation models, the selection of sample points, called a nested design, is essential. In this article a novel nested maximin Latin hypercube design is constructed based on successive local enumeration and a modified novel global harmony search algorithm. In the proposed nested designs, successive local enumeration is employed to select sample points for the low-fidelity model, whereas the modified novel global harmony search algorithm is employed to select sample points for the high-fidelity model. A comparative study with multiple criteria and an engineering application are employed to verify the efficiency of the proposed nested designs approach.
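
    To make the maximin design criterion concrete, here is a brute-force sketch that picks the best of many random Latin hypercube candidates by minimum pairwise distance. It is a simple stand-in for the successive-local-enumeration and harmony-search optimizers named above, with made-up sizes.

    ```python
    import numpy as np
    from scipy.spatial.distance import pdist

    def latin_hypercube(n, d, rng):
        """Random Latin hypercube: one sample per row/column stratum."""
        strata = np.column_stack([rng.permutation(n) for _ in range(d)])
        return (strata + rng.uniform(size=(n, d))) / n

    def maximin_lhs(n, d, candidates=500, seed=0):
        """Among random LHS candidates, keep the one maximizing the
        minimum pairwise distance (the maximin criterion)."""
        rng = np.random.default_rng(seed)
        best, best_score = None, -np.inf
        for _ in range(candidates):
            X = latin_hypercube(n, d, rng)
            score = pdist(X).min()
            if score > best_score:
                best, best_score = X, score
        return best

    low_fi = maximin_lhs(40, 2)   # sample plan for the low-fidelity model
    # A nested design would then choose the high-fidelity points as a
    # well-spread subset of low_fi.
    ```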

  20. Second principle approach to the analysis of unsteady flow and heat transfer in a tube with arc-shaped corrugation

    NASA Astrophysics Data System (ADS)

    Pagliarini, G.; Vocale, P.; Mocerino, A.; Rainieri, S.

    2017-01-01

    Passive convective heat transfer enhancement techniques are well-known and widespread tools for increasing the efficiency of heat transfer equipment. In spite of the ability of the first-principle approach to forecast the macroscopic effects of the passive techniques for heat transfer enhancement, namely the increase of both the overall heat exchanged and the head losses, a first-principle analysis based on energy, momentum and mass local conservation equations is hardly able to give a comprehensive explanation of how local modifications in the boundary layers contribute to the overall effect. A deeper insight into the heat transfer enhancement mechanisms can instead be obtained within a second-principle approach, through the analysis of the local exergy dissipation phenomena related to heat transfer and fluid flow. To this aim, the analysis based on the second-principle approach, implemented through a careful evaluation of the local entropy generation rate, seems the most suitable, since it allows one to identify more precisely the cause of the loss of efficiency in the heat transfer process, thus providing a useful guide in the choice of the most suitable heat transfer enhancement techniques.
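
    For reference, the local volumetric entropy generation rate commonly used in such second-principle analyses (e.g. in Bejan's formulation) splits into a heat-transfer and a fluid-friction term. The form below is the standard textbook expression, not an equation quoted from this paper:

    ```latex
    \dot{S}'''_{\mathrm{gen}}
      = \underbrace{\frac{k}{T^{2}}\,\nabla T \cdot \nabla T}_{\text{heat-transfer irreversibility}}
      + \underbrace{\frac{\mu}{T}\,\Phi}_{\text{fluid-friction irreversibility}}
    ```

    where k is the thermal conductivity, μ the dynamic viscosity, T the local absolute temperature, and Φ the viscous dissipation function.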

  1. Complex Approach to Conceptual Design of Machine Mechanically Extracting Oil from Jatropha curcas L. Seeds for Biomass-Based Fuel Production

    PubMed Central

    Mašín, Ivan

    2016-01-01

    One of important sources of biomass-based fuel is Jatropha curcas L. Great attention is paid to the biofuel produced from the oil extracted from the Jatropha curcas L. seeds. A mechanised extraction is the most efficient and feasible method for oil extraction for small-scale farmers but there is a need to extract oil in more efficient manner which would increase the labour productivity, decrease production costs, and increase benefits of small-scale farmers. On the other hand innovators should be aware that further machines development is possible only when applying the systematic approach and design methodology in all stages of engineering design. Systematic approach in this case means that designers and development engineers rigorously apply scientific knowledge, integrate different constraints and user priorities, carefully plan product and activities, and systematically solve technical problems. This paper therefore deals with the complex approach to design specification determining that can bring new innovative concepts to design of mechanical machines for oil extraction. The presented case study as the main part of the paper is focused on new concept of screw of machine mechanically extracting oil from Jatropha curcas L. seeds. PMID:27668259

  2. Computationally Efficient Adaptive Beamformer for Ultrasound Imaging Based on QR Decomposition.

    PubMed

    Park, Jongin; Wi, Seok-Min; Lee, Jin S

    2016-02-01

    Adaptive beamforming methods for ultrasound imaging have been studied to improve image resolution and contrast. The most common approach is the minimum variance (MV) beamformer, which minimizes the power of the beamformed output while maintaining the response from the direction of interest constant. The method achieves higher resolution and better contrast than the delay-and-sum (DAS) beamformer, but it suffers from high computational cost. This cost is mainly due to the computation of the spatial covariance matrix and its inverse, which requires O(L³) computations, where L denotes the subarray size. In this study, we propose a computationally efficient MV beamformer based on QR decomposition. The idea behind our approach is to transform the spatial covariance matrix into a scalar matrix σI, and we subsequently obtain the apodization weights and the beamformed output without computing the matrix inverse. To do that, the QR decomposition algorithm is used, which can be executed at low cost; therefore, the computational complexity is reduced to O(L²). In addition, our approach is mathematically equivalent to the conventional MV beamformer, thereby showing equivalent performance.
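
    A minimal numerical sketch of MV apodization weights obtained through a QR factorization rather than an explicit inverse is given below. It does not reproduce the paper's specific σI transformation; the covariance estimate, diagonal loading and presteered steering vector are toy assumptions.

    ```python
    import numpy as np
    from scipy.linalg import solve_triangular

    def mv_weights_qr(C, a, eps=1e-3):
        """MV weights w = C^{-1} a / (a^H C^{-1} a), computed via QR
        factorization of the (diagonally loaded) covariance, avoiding an
        explicit matrix inverse."""
        L = C.shape[0]
        Cl = C + eps * np.trace(C) / L * np.eye(L)   # diagonal loading
        Q, R = np.linalg.qr(Cl)
        x = solve_triangular(R, Q.conj().T @ a)      # x = Cl^{-1} a
        return x / (a.conj() @ x)

    # Toy subarray snapshot covariance; presteered steering vector of ones.
    rng = np.random.default_rng(0)
    snaps = rng.standard_normal((16, 64)) + 1j * rng.standard_normal((16, 64))
    C = snaps @ snaps.conj().T / 64
    a = np.ones(16, dtype=complex)
    w = mv_weights_qr(C, a)
    print(np.vdot(w, a))   # unity response toward the focus direction
    ```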

  3. Distributed Efficient Similarity Search Mechanism in Wireless Sensor Networks

    PubMed Central

    Ahmed, Khandakar; Gregory, Mark A.

    2015-01-01

    The Wireless Sensor Network similarity search problem has received considerable research attention due to sensor hardware imprecision and environmental parameter variations. Most state-of-the-art distributed data-centric storage (DCS) schemes lack optimization for similarity queries of events. In this paper, a DCS scheme with metric-based similarity searching (DCSMSS) is proposed. DCSMSS takes motivation from a vector distance index, called iDistance, in order to transform the issue of similarity searching into the problem of an interval search in one dimension. In addition, a sector-based distance routing algorithm is used to efficiently route messages. Extensive simulation results reveal that DCSMSS is highly efficient and significantly outperforms previous approaches in processing similarity search queries. PMID:25751081
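
    The iDistance idea referenced here is easy to sketch: each point is keyed by its distance to the nearest of a set of reference points, offset so that each partition owns a disjoint one-dimensional interval, and a range query only scans the overlapping intervals. Everything below (data, reference points, stride) is hypothetical, not the DCSMSS protocol.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    points = rng.uniform(size=(1000, 3))   # event feature vectors
    refs = rng.uniform(size=(8, 3))        # reference points O_i
    C = 10.0                               # interval stride > max distance

    d = np.linalg.norm(points[:, None] - refs[None], axis=2)  # (1000, 8)
    part = d.argmin(axis=1)                # nearest reference per point
    keys = part * C + d[np.arange(len(points)), part]
    order = np.argsort(keys)               # the one-dimensional index

    def range_query(q, r):
        """Candidates in partition i satisfy |d(p,O_i) - d(q,O_i)| <= r
        by the triangle inequality; verify each with the true distance."""
        dq = np.linalg.norm(refs - q, axis=1)
        hits = []
        for i, dqi in enumerate(dq):
            lo, hi = i * C + dqi - r, i * C + dqi + r
            lo_idx, hi_idx = np.searchsorted(keys[order], [lo, hi])
            cand = order[lo_idx:hi_idx]
            hits.extend(c for c in cand if np.linalg.norm(points[c] - q) <= r)
        return hits

    print(len(range_query(points[0], 0.2)))
    ```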

  4. Efficient, footprint-free human iPSC genome editing by consolidation of Cas9/CRISPR and piggyBac technologies.

    PubMed

    Wang, Gang; Yang, Luhan; Grishin, Dennis; Rios, Xavier; Ye, Lillian Y; Hu, Yong; Li, Kai; Zhang, Donghui; Church, George M; Pu, William T

    2017-01-01

    Genome editing of human induced pluripotent stem cells (hiPSCs) offers unprecedented opportunities for in vitro disease modeling and personalized cell replacement therapy. The introduction of Cas9-directed genome editing has expanded adoption of this approach. However, marker-free genome editing using standard protocols remains inefficient, yielding desired targeted alleles at a rate of ∼1-5%. We developed a protocol based on a doxycycline-inducible Cas9 transgene carried on a piggyBac transposon to enable robust and highly efficient Cas9-directed genome editing, so that a parental line can be expeditiously engineered to harbor many separate mutations. Treatment with doxycycline and transfection with guide RNA (gRNA), donor DNA and piggyBac transposase resulted in efficient, targeted genome editing and concurrent scarless transgene excision. Using this approach, in 7 weeks it is possible to efficiently obtain genome-edited clones with minimal off-target mutagenesis and with indel mutation frequencies of 40-50% and homology-directed repair (HDR) frequencies of 10-20%.

  5. Convolutional networks for fast, energy-efficient neuromorphic computing

    PubMed Central

    Esser, Steven K.; Merolla, Paul A.; Arthur, John V.; Cassidy, Andrew S.; Appuswamy, Rathinakumar; Andreopoulos, Alexander; Berg, David J.; McKinstry, Jeffrey L.; Melano, Timothy; Barch, Davis R.; di Nolfo, Carmelo; Datta, Pallab; Amir, Arnon; Taba, Brian; Flickner, Myron D.; Modha, Dharmendra S.

    2016-01-01

    Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware’s underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer. PMID:27651489

  6. Convolutional networks for fast, energy-efficient neuromorphic computing.

    PubMed

    Esser, Steven K; Merolla, Paul A; Arthur, John V; Cassidy, Andrew S; Appuswamy, Rathinakumar; Andreopoulos, Alexander; Berg, David J; McKinstry, Jeffrey L; Melano, Timothy; Barch, Davis R; di Nolfo, Carmelo; Datta, Pallab; Amir, Arnon; Taba, Brian; Flickner, Myron D; Modha, Dharmendra S

    2016-10-11

    Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware's underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer.

  7. Extremely high absolute internal quantum efficiency of photoluminescence in co-doped GaN:Zn,Si

    NASA Astrophysics Data System (ADS)

    Reshchikov, M. A.; Willyard, A. G.; Behrends, A.; Bakin, A.; Waag, A.

    2011-10-01

    We report on the fabrication of GaN co-doped with silicon and zinc by metalorganic vapor phase epitaxy and a detailed study of photoluminescence in this material. We observe an exceptionally high absolute internal quantum efficiency of blue photoluminescence in GaN:Zn,Si. The value of 0.93±0.04 has been obtained from several approaches based on rate equations.

  8. Scalable Energy Efficiency with Resilience for High Performance Computing Systems: A Quantitative Methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, Li; Chen, Zizhong; Song, Shuaiwen

    2016-01-18

    Energy efficiency and resilience are two crucial challenges for HPC systems to reach exascale. While energy efficiency and resilience issues have been extensively studied individually, little has been done to understand the interplay between energy efficiency and resilience for HPC systems. Decreasing the supply voltage associated with a given operating frequency for processors and other CMOS-based components can significantly reduce power consumption. However, this often raises system failure rates and consequently increases application execution time. In this work, we present an energy saving undervolting approach that leverages the mainstream resilience techniques to tolerate the increased failures caused by undervolting.
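
    The power/failure trade-off described above can be illustrated with a back-of-the-envelope model. The quadratic voltage scaling of dynamic power is standard; the failure-rate growth and recovery cost below are purely assumed constants, not measurements from this work.

    ```python
    import math

    V0, V = 1.0, 0.85            # nominal vs undervolted supply (same frequency)
    T0 = 3600.0                  # failure-free execution time, seconds
    P0 = 200.0                   # nominal power draw, watts
    lam0, k = 1e-5, 40.0         # base failure rate; assumed voltage sensitivity
    t_rec = 30.0                 # assumed cost of one error recovery, seconds

    P = P0 * (V / V0) ** 2               # dynamic power scales with V^2
    lam = lam0 * math.exp(k * (V0 - V))  # failures/s rise as voltage drops
    T = T0 * (1 + lam * t_rec)           # expected slowdown from recoveries
    print(f"energy: nominal {P0 * T0 / 1e6:.2f} MJ "
          f"vs undervolted {P * T / 1e6:.2f} MJ")
    ```

    With these made-up constants the recovery overhead is outweighed by the power reduction; the actual balance depends entirely on the real failure-rate curve, which is what the paper's methodology quantifies.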

  9. Energy efficient engine high-pressure turbine detailed design report

    NASA Technical Reports Server (NTRS)

    Thulin, R. D.; Howe, D. C.; Singer, I. D.

    1982-01-01

    The energy efficient engine high-pressure turbine is a single-stage system based on technology advancements in the areas of aerodynamics, structures and materials to achieve high performance, favorable operating economics and durability commensurate with commercial service requirements. Low-loss performance features combined with a low through-flow velocity approach result in a predicted efficiency of 88.8 percent for a flight propulsion system. Turbine airfoil durability goals are achieved through the use of advanced high-strength and high-temperature-capability single-crystal materials and effective cooling management. Overall, this design reflects a considerable extension of turbine technology that is applicable to future, energy efficient gas-turbine engines.

  10. Investigating the Interplay between Energy Efficiency and Resilience in High Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, Li; Song, Shuaiwen; Wu, Panruo

    2015-05-29

    Energy efficiency and resilience are two crucial challenges for HPC systems to reach exascale. While energy efficiency and resilience issues have been extensively studied individually, little has been done to understand the interplay between energy efficiency and resilience for HPC systems. Decreasing the supply voltage associated with a given operating frequency for processors and other CMOS-based components can significantly reduce power consumption. However, this often raises system failure rates and consequently increases application execution time. In this work, we present an energy saving undervolting approach that leverages the mainstream resilience techniques to tolerate the increased failures caused by undervolting.

  11. Scalable Energy Efficiency with Resilience for High Performance Computing Systems: A Quantitative Methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, Li; Chen, Zizhong; Song, Shuaiwen Leon

    2015-11-16

    Energy efficiency and resilience are two crucial challenges for HPC systems to reach exascale. While energy efficiency and resilience issues have been extensively studied individually, little has been done to understand the interplay between energy efficiency and resilience for HPC systems. Decreasing the supply voltage associated with a given operating frequency for processors and other CMOS-based components can significantly reduce power consumption. However, this often raises system failure rates and consequently increases application execution time. In this work, we present an energy saving undervolting approach that leverages the mainstream resilience techniques to tolerate the increased failures caused by undervolting.

  12. A Parallel Cartesian Approach for External Aerodynamics of Vehicles with Complex Geometry

    NASA Technical Reports Server (NTRS)

    Aftosmis, M. J.; Berger, M. J.; Adomavicius, G.

    2001-01-01

    This workshop paper presents the current status in the development of a new approach for the solution of the Euler equations on Cartesian meshes with embedded boundaries in three dimensions on distributed and shared memory architectures. The approach uses adaptively refined Cartesian hexahedra to fill the computational domain. Where these cells intersect the geometry, they are cut by the boundary into arbitrarily shaped polyhedra which receive special treatment by the solver. The presentation documents a newly developed multilevel upwind solver based on a flexible domain-decomposition strategy. One novel aspect of the work is its use of space-filling curves (SFC) for memory-efficient on-the-fly parallelization, dynamic re-partitioning and automatic coarse mesh generation. Within each subdomain the approach employs a variety of reordering techniques so that relevant data are on the same page in memory, permitting high performance on cache-based processors. Details of the on-the-fly SFC-based partitioning are presented, as are construction rules for the automatic coarse mesh generation. After describing the approach, the paper uses model problems and 3-D configurations to both verify and validate the solver. The model problems demonstrate that second-order accuracy is maintained despite the presence of the irregular cut-cells in the mesh. In addition, it examines both parallel efficiency and convergence behavior. These investigations demonstrate a parallel speed-up in excess of 28 on 32 processors of an SGI Origin 2000 system and confirm that mesh partitioning has no effect on convergence behavior.
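
    SFC-based partitioning of the kind described is compact enough to sketch: quantized cell coordinates are interleaved into Morton keys, the sorted key order traces the curve, and equal cuts of that order give the subdomains. The toy mesh and chunking policy below are assumptions, not the paper's implementation.

    ```python
    import itertools

    def morton3(x, y, z, bits=10):
        """Interleave the bits of quantized (x, y, z) into a Morton key,
        so sorting by key orders cells along a space-filling curve."""
        key = 0
        for b in range(bits):
            key |= (((x >> b) & 1) << (3 * b)
                    | ((y >> b) & 1) << (3 * b + 1)
                    | ((z >> b) & 1) << (3 * b + 2))
        return key

    def sfc_partition(cells, nparts, bits=10):
        """On-the-fly partitioning: sort cells by Morton key and cut the
        curve into nparts contiguous, equally sized chunks."""
        ordered = sorted(cells, key=lambda c: morton3(*c, bits=bits))
        n = len(ordered)
        return [ordered[i * n // nparts:(i + 1) * n // nparts]
                for i in range(nparts)]

    # Toy mesh: integer cell coordinates on an 8^3 grid, split for 4 ranks.
    cells = list(itertools.product(range(8), repeat=3))
    parts = sfc_partition(cells, 4, bits=5)
    print([len(p) for p in parts])   # balanced subdomains
    ```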

  13. Driving force analysis of the agricultural water footprint in China based on the LMDI method.

    PubMed

    Zhao, Chunfu; Chen, Bin

    2014-11-04

    China's water scarcity problems have become more severe because of unprecedented economic development and population growth. Considering agriculture's large share of water consumption, obtaining a clear understanding of Chinese agricultural consumptive water use plays a key role in addressing China's water resource stress and providing appropriate water mitigation policies. We account for the Chinese agricultural water footprint from 1990 to 2009 based on a bottom-up approach. Then, the underlying driving forces are decomposed into a diet structure effect, efficiency effect, economic activity effect, and population effect, and analyzed by applying a log-mean Divisia index (LMDI) model. The results reveal that the Chinese agricultural water footprint rose from 94.1 Gm³ in 1990 to 141 Gm³ in 2009. The economic activity effect is the largest positive contributor to water footprint growth, followed by the population effect and diet structure effect. Although water efficiency improvement, as a significant negative effect, has reduced the overall water footprint, the decline from water efficiency improvement cannot compensate for the huge increase from the three positive driving factors. The combination of water efficiency improvement and dietary structure adjustment is the most effective approach for controlling further growth of the Chinese agricultural water footprint.
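
    An additive LMDI decomposition of this kind is short to write down. The sketch below uses an assumed two-category identity WF_i = P · s_i · e_i (population × diet share × water intensity) and invented numbers; it demonstrates the log-mean weighting, not the paper's data or exact factor set.

    ```python
    import numpy as np

    def logmean(a, b):
        """Logarithmic mean L(a, b), the LMDI weight."""
        return np.where(np.isclose(a, b), a,
                        (a - b) / (np.log(a) - np.log(b)))

    # Rows: base year (0) and final year (T). Numbers are illustrative.
    P = np.array([1.14, 1.33])                     # population, billions
    s = np.array([[0.7, 0.3], [0.6, 0.4]])         # diet structure shares
    e = np.array([[90.0, 150.0], [80.0, 160.0]])   # water intensity per category

    wf = P[:, None] * s * e                        # category water footprints
    w = logmean(wf[1], wf[0])
    effects = {
        "population":     float((w * np.log(P[1] / P[0])).sum()),
        "diet structure": float((w * np.log(s[1] / s[0])).sum()),
        "efficiency":     float((w * np.log(e[1] / e[0])).sum()),
    }
    # Property of LMDI-I: the effects sum exactly to the total change.
    print(effects, "total change:", wf[1].sum() - wf[0].sum())
    ```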

  14. Energy-Efficient Integration of Continuous Context Sensing and Prediction into Smartwatches.

    PubMed

    Rawassizadeh, Reza; Tomitsch, Martin; Nourizadeh, Manouchehr; Momeni, Elaheh; Peery, Aaron; Ulanova, Liudmila; Pazzani, Michael

    2015-09-08

    As the availability and use of wearables increases, they are becoming a promising platform for context sensing and context analysis. Smartwatches are a particularly interesting platform for this purpose, as they offer salient advantages, such as their proximity to the human body. However, they also have limitations associated with their small form factor, such as processing power and battery life, which makes it difficult to simply transfer smartphone-based context sensing and prediction models to smartwatches. In this paper, we introduce an energy-efficient, generic, integrated framework for continuous context sensing and prediction on smartwatches. Our work extends previous approaches for context sensing and prediction on wrist-mounted wearables that perform predictive analytics outside the device. We offer a generic sensing module and a novel energy-efficient, on-device prediction module that is based on a semantic abstraction approach to convert sensor data into meaningful information objects, similar to human perception of a behavior. Through six evaluations, we analyze the energy efficiency of our framework modules, identify the optimal file structure for data access and demonstrate an increase in accuracy of prediction through our semantic abstraction method. The proposed framework is hardware independent and can serve as a reference model for implementing context sensing and prediction on small wearable devices beyond smartwatches, such as body-mounted cameras.
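
    The semantic abstraction step described above can be caricatured in a few lines: raw sensor samples are collapsed into compact, labeled context objects before storage or prediction. The windowing, threshold and labels below are invented for illustration, not the paper's pipeline.

    ```python
    from dataclasses import dataclass

    @dataclass
    class ContextObject:
        start: float   # window start, epoch seconds
        end: float
        label: str     # semantic label instead of raw samples

    def abstract_activity(samples, threshold=1.5, window=60.0):
        """Collapse a stream of (t, accel_magnitude) readings into
        'active'/'idle' objects per window, so only compact, meaningful
        records need to be stored and fed to the on-device predictor."""
        objects, bucket = [], []
        if not samples:
            return objects
        t0 = samples[0][0]
        for t, mag in samples:
            if t - t0 >= window and bucket:
                mean = sum(bucket) / len(bucket)
                objects.append(ContextObject(
                    t0, t, "active" if mean > threshold else "idle"))
                t0, bucket = t, []
            bucket.append(mag)
        return objects

    # 10 minutes of 1 Hz samples with a burst of motion in minutes 2-4.
    stream = [(i * 1.0, 2.0 if 120 <= i < 240 else 0.5) for i in range(600)]
    print(abstract_activity(stream))
    ```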

  15. Reactive granular optics for passive tracking of the sun

    NASA Astrophysics Data System (ADS)

    Frenkel, I.; Niv, A.

    2017-08-01

    The growing need for cost-effective renewable energy sources is hampered by the stagnation in solar cell technology, preventing a substantial reduction in module and energy-production prices. Lowering the energy-production cost could be achieved by using modules with higher efficiency. One possible means of increasing module efficiency is concentrated photovoltaics (CPV). CPV, however, requires complex and accurate active tracking of the sun, which reduces much of its cost-effectiveness. Here, we propose a passive tracking scheme based on a reactive optical device. The optical reaction is achieved by a new kind of light-activated mechanical force that acts on micron-sized particles. This optical force allows the formation of granular disordered optical media that can be switched from opaque to transparent based on the intensity of the light it interacts with. Such media give rise to an efficient passive tracking scheme that, when combined with an external optical cavity, forms a new solar power conversion approach. Being external to the cell itself, this approach is indifferent to the type of semiconducting material used, as well as to other aspects of the cell design. This, in turn, liberates the cell layout from its optical constraints, thus paving the way to higher efficiencies at lower module prices.

  16. Energy-Efficient Integration of Continuous Context Sensing and Prediction into Smartwatches

    PubMed Central

    Rawassizadeh, Reza; Tomitsch, Martin; Nourizadeh, Manouchehr; Momeni, Elaheh; Peery, Aaron; Ulanova, Liudmila; Pazzani, Michael

    2015-01-01

    As the availability and use of wearables increases, they are becoming a promising platform for context sensing and context analysis. Smartwatches are a particularly interesting platform for this purpose, as they offer salient advantages, such as their proximity to the human body. However, they also have limitations associated with their small form factor, such as processing power and battery life, which makes it difficult to simply transfer smartphone-based context sensing and prediction models to smartwatches. In this paper, we introduce an energy-efficient, generic, integrated framework for continuous context sensing and prediction on smartwatches. Our work extends previous approaches for context sensing and prediction on wrist-mounted wearables that perform predictive analytics outside the device. We offer a generic sensing module and a novel energy-efficient, on-device prediction module that is based on a semantic abstraction approach to convert sensor data into meaningful information objects, similar to human perception of a behavior. Through six evaluations, we analyze the energy efficiency of our framework modules, identify the optimal file structure for data access and demonstrate an increase in accuracy of prediction through our semantic abstraction method. The proposed framework is hardware independent and can serve as a reference model for implementing context sensing and prediction on small wearable devices beyond smartwatches, such as body-mounted cameras. PMID:26370997

  17. Fast Fragmentation of Networks Using Module-Based Attacks

    PubMed Central

    Requião da Cunha, Bruno; González-Avella, Juan Carlos; Gonçalves, Sebastián

    2015-01-01

    In the multidisciplinary field of Network Science, optimization of procedures for efficiently breaking complex networks is attracting much attention from a practical point of view. In this contribution, we present a module-based method to efficiently fragment complex networks. The procedure first identifies topological communities through which the network can be represented, using a well-established heuristic community-finding algorithm. Then only the nodes that participate in inter-community links are removed, in descending order of their betweenness centrality. We illustrate the method by applying it to a variety of examples in the social, infrastructure, and biological fields. It is shown that the module-based approach always outperforms targeted attacks on vertices based on node degree or betweenness centrality rankings, with gains in efficiency strongly related to the modularity of the network. Remarkably, in the US power grid case, by deleting 3% of the nodes, the proposed method breaks the original network into fragments which are twenty times smaller in size than the fragments left by a betweenness-based attack. PMID:26569610
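
    A compact sketch of this attack strategy using networkx is shown below: detect communities, collect nodes on inter-community links, and remove the highest-betweenness ones. The toy graph and 3% budget are illustrative, and the community detector used here (greedy modularity) merely stands in for whichever heuristic the paper employs.

    ```python
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    def module_based_attack(G, frac=0.03):
        """Remove only nodes incident to inter-community links, in
        descending betweenness order; return the largest surviving
        component size."""
        H = G.copy()
        comms = greedy_modularity_communities(H)
        member = {v: i for i, c in enumerate(comms) for v in c}
        bridges = {n for u, v in H.edges
                   if member[u] != member[v] for n in (u, v)}
        bc = nx.betweenness_centrality(H)
        targets = sorted(bridges, key=bc.get, reverse=True)
        budget = max(1, int(frac * H.number_of_nodes()))
        H.remove_nodes_from(targets[:budget])
        return max(len(c) for c in nx.connected_components(H))

    G = nx.connected_caveman_graph(20, 10)   # toy modular network, 200 nodes
    print("largest fragment:", module_based_attack(G), "of", len(G))
    ```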

  18. CuInSe2-Based Thin-Film Photovoltaic Technology in the Gigawatt Production Era

    NASA Astrophysics Data System (ADS)

    Kushiya, Katsumi

    2012-10-01

    The objective of this paper is to review the current status and future prospects of CuInSe2 (CIS)-based thin-film photovoltaic (PV) technology. In CIS-based thin-film PV technology, total-area cell efficiency in a small-area (i.e., smaller than 1 cm2) solar cell with top grids has exceeded 20%, while aperture-area efficiency in a large-area (i.e., larger than 800 cm2 by definition) monolithic module is approaching an 18% milestone. However, most companies with CIS-based thin-film PV technology remain at the production research stage, except Solar Frontier K.K. In July 2011, Solar Frontier joined the gigawatt (GW) group by starting up its third facility with a 0.9-GW/year production capacity. The company is best positioned to pass the 16% module-efficiency mark by transferring technologies developed in R&D and by accelerating preparation for the future based on the concept of product life-cycle management.

  19. Universal Quantification in a Constraint-Based Planner

    NASA Technical Reports Server (NTRS)

    Golden, Keith; Frank, Jeremy; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Constraints and universal quantification are both useful in planning, but handling universally quantified constraints presents some novel challenges. We present a general approach to proving the validity of universally quantified constraints. The approach essentially consists of checking that the constraint is violated by no member of the universe. We show that this approach can sometimes be applied even when variable domains are infinite, and we present some useful special cases where this can be done efficiently.
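
    The core idea, checking that a universally quantified constraint is violated by no member of the universe, reduces to a one-line test over finite domains; the example domain below is invented.

    ```python
    # Minimal sketch: a universally quantified constraint is valid iff no
    # member of the (finite) universe violates it.
    def forall_valid(universe, constraint):
        return all(constraint(x) for x in universe)

    # Example: "every package in the cargo bay weighs under 5 kg".
    packages = [("cam", 2.1), ("antenna", 4.7), ("battery", 3.9)]
    print(forall_valid(packages, lambda p: p[1] < 5.0))   # True
    ```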

  20. An approach to combining heuristic and qualitative reasoning in an expert system

    NASA Technical Reports Server (NTRS)

    Jiang, Wei-Si; Han, Chia Yung; Tsai, Lian Cheng; Wee, William G.

    1988-01-01

    An approach to combining heuristic reasoning from shallow knowledge and qualitative reasoning from deep knowledge is described. The shallow knowledge is represented in production rules and is under the direct control of the inference engine. The deep knowledge is represented in frames, which may be stored in a relational database management system. This approach takes advantage of both reasoning schemes and results in improved efficiency as well as expanded problem-solving ability.

  1. Ecological risk assessment of agricultural soils for the definition of soil screening values: A comparison between substance-based and matrix-based approaches.

    PubMed

    Pivato, Alberto; Lavagnolo, Maria Cristina; Manachini, Barbara; Vanin, Stefano; Raga, Roberto; Beggio, Giovanni

    2017-04-01

    The Italian legislation on contaminated soils does not include Ecological Risk Assessment (ERA), and this deficiency has important consequences for the sustainable management of agricultural soils. The present research compares the results of two ERA procedures applied to agriculture: (i) one based on the "substance-based" approach and (ii) a second based on the "matrix-based" approach. In the former, the soil screening values (SVs) for individual substances were derived according to institutional foreign guidelines. In the latter, the SVs characterizing the whole matrix were derived originally by the authors by means of experimental activity. The results indicate that the "matrix-based" approach can be efficiently implemented in the Italian legislation for the ERA of agricultural soils. Compared with the institutionalized "substance-based" approach, this method (i) is comparable in economic terms and in testing time, (ii) is site-specific and assesses the real effect of the investigated soil on a battery of bioassays, (iii) accounts for phenomena that may radically modify the exposure of organisms to the totality of contaminants, and (iv) can be considered sufficiently conservative.

  2. A hybrid approach for efficient anomaly detection using metaheuristic methods

    PubMed Central

    Ghanem, Tamer F.; Elkilani, Wail S.; Abdul-kader, Hatem M.

    2014-01-01

    Network intrusion detection based on anomaly detection techniques has a significant role in protecting networks and systems against harmful activities. Different metaheuristic techniques have been used for anomaly detector generation. Yet, the reported literature has not studied the use of the multi-start metaheuristic method for detector generation. This paper proposes a hybrid approach for anomaly detection in large-scale datasets using detectors generated by a multi-start metaheuristic method and genetic algorithms. The proposed approach takes some inspiration from negative-selection-based detector generation. The evaluation of this approach is performed using the NSL-KDD dataset, a modified version of the widely used KDD CUP 99 dataset. The results show its effectiveness in generating a suitable number of detectors with an accuracy of 96.1% compared with competing machine-learning algorithms. PMID:26199752

  3. A hybrid approach for efficient anomaly detection using metaheuristic methods.

    PubMed

    Ghanem, Tamer F; Elkilani, Wail S; Abdul-Kader, Hatem M

    2015-07-01

    Network intrusion detection based on anomaly detection techniques has a significant role in protecting networks and systems against harmful activities. Different metaheuristic techniques have been used for anomaly detector generation. Yet, the reported literature has not studied the use of the multi-start metaheuristic method for detector generation. This paper proposes a hybrid approach for anomaly detection in large-scale datasets using detectors generated by a multi-start metaheuristic method and genetic algorithms. The proposed approach takes some inspiration from negative-selection-based detector generation. The evaluation of this approach is performed using the NSL-KDD dataset, a modified version of the widely used KDD CUP 99 dataset. The results show its effectiveness in generating a suitable number of detectors with an accuracy of 96.1% compared with competing machine-learning algorithms.
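
    The negative-selection inspiration mentioned in this record is simple to sketch: random detectors are retained only if they match no "self" (normal) sample, and an input is flagged anomalous if a detector later matches it. The sketch below is a plain negative-selection stand-in for the paper's multi-start/GA generation, with made-up data and radii.

    ```python
    import numpy as np

    def generate_detectors(self_set, n_detectors, radius, dim, seed=0):
        """Keep random candidates that fall outside `radius` of every
        normal sample (the negative-selection censoring step)."""
        rng = np.random.default_rng(seed)
        detectors = []
        while len(detectors) < n_detectors:
            cand = rng.uniform(size=dim)
            if np.linalg.norm(self_set - cand, axis=1).min() > radius:
                detectors.append(cand)
        return np.array(detectors)

    def is_anomalous(x, detectors, radius):
        return bool(np.linalg.norm(detectors - x, axis=1).min() <= radius)

    normal = np.random.default_rng(1).uniform(0.4, 0.6, size=(300, 4))
    dets = generate_detectors(normal, 50, 0.15, 4)
    print(is_anomalous(np.full(4, 0.95), dets, 0.15))  # likely True
    print(is_anomalous(np.full(4, 0.5), dets, 0.15))   # False (self region)
    ```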

  4. Automated and assisted RNA resonance assignment using NMR chemical shift statistics

    PubMed Central

    Aeschbacher, Thomas; Schmidt, Elena; Blatter, Markus; Maris, Christophe; Duss, Olivier; Allain, Frédéric H.-T.; Güntert, Peter; Schubert, Mario

    2013-01-01

    The three-dimensional structure determination of RNAs by NMR spectroscopy relies on chemical shift assignment, which still constitutes a bottleneck. In order to develop more efficient assignment strategies, we analysed relationships between sequence and ¹H and ¹³C chemical shifts. Statistics of resonances from regularly Watson–Crick base-paired RNA revealed highly characteristic chemical shift clusters. We developed two approaches using these statistics for chemical shift assignment of double-stranded RNA (dsRNA): a manual approach that yields starting points for resonance assignment and simplifies decision trees, and an automated approach based on the recently introduced automated resonance assignment algorithm FLYA. Both strategies require only unlabeled RNAs and three 2D spectra for assigning the H2/C2, H5/C5, H6/C6, H8/C8 and H1′/C1′ chemical shifts. The manual approach proved to be efficient and robust when applied to the experimental data of RNAs with sizes between 20 nt and 42 nt. The more advanced automated assignment approach was successfully applied to four stem-loop RNAs and a 42 nt siRNA, assigning 92–100% of the resonances from dsRNA regions correctly. This is the first automated approach for chemical shift assignment of non-exchangeable protons of RNA and their corresponding ¹³C resonances, which provides an important step toward automated structure determination of RNAs. PMID:23921634
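
    A toy version of cluster-based assignment: match each observed (¹H, ¹³C) cross-peak to the nearest cluster centroid. The centroid values and weighting below are illustrative placeholders, not the published Watson–Crick statistics.

    ```python
    import numpy as np

    # Placeholder cluster centroids as (1H ppm, 13C ppm) pairs.
    clusters = {
        "H6/C6 (pyrimidine)": (7.7, 140.5),
        "H8/C8 (purine)":     (7.9, 136.5),
        "H1'/C1'":            (5.6,  92.0),
        "H5/C5":              (5.4,  97.5),
    }
    names = list(clusters)
    cent = np.array([clusters[n] for n in names])
    scale = np.array([1.0, 10.0])   # compensate the wider 13C dispersion

    def assign(peak):
        """Nearest-centroid assignment of one cross-peak."""
        d = np.linalg.norm((cent - np.asarray(peak)) / scale, axis=1)
        return names[int(d.argmin())]

    for peak in [(7.65, 139.8), (5.72, 91.1)]:
        print(peak, "->", assign(peak))
    ```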

  5. Handwashing compliance.

    PubMed

    Antoniak, Jeannie

    2004-09-01

    Undeniably, handwashing remains the single most effective and cost-efficient method for preventing and reducing the transmission of nosocomial infections. Yet the rates and outbreaks of nosocomial infections in Canadian and international healthcare institutions continue to increase. Shaikh Khalifa Medical Center developed and implemented a multidisciplinary approach to address the challenges of handwashing compliance among nurses and healthcare workers in its workplace setting. Supported by evidence-based research, the approach consisted of three components: collaboration, implementation and evaluation. The use of the alcohol-based hand rub sanitizer or "solution" was integral to the multidisciplinary approach. Ongoing education, communication and a committed leadership were essential to promote and sustain handwashing compliance.

  6. Waste Management Using Request-Based Virtual Organizations

    NASA Astrophysics Data System (ADS)

    Katriou, Stamatia Ann; Fragidis, Garyfallos; Ignatiadis, Ioannis; Tolias, Evangelos; Koumpis, Adamantios

    Waste management is at the top of the political agenda globally as a high-priority environmental issue, with billions spent on it each year. This paper proposes an approach for the disposal, transportation, recycling and reuse of waste. This approach incorporates the notion of Request-Based Virtual Organizations (RBVOs) using a Service Oriented Architecture (SOA) and an ontology that serves the definition of waste management requirements. The populated ontology is utilized by a Multi-Agent System which performs negotiations and forms RBVOs. The proposed approach could be used by governments and companies searching for a means to perform such activities in an effective and efficient manner.

  7. Drivers' safety needs, behavioural adaptations and acceptance of new driving support systems.

    PubMed

    Saad, Farida; Van Elslande, Pierre

    2012-01-01

    The aim of this paper is to discuss the contribution of two complementary approaches to designing and evaluating new driver support systems likely to improve the operation and safety of the road traffic system. The first approach is based on detailed analyses of traffic crashes, so as to estimate drivers' needs for assistance and the situational constraints that safety functions should address to be efficient. The second approach is based on in-depth analyses of behavioral adaptations induced by the use of new driver support systems in regular driving situations and on drivers' acceptance of the assistance provided by the systems.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goins, Bobby

    A systems based approach will be used to evaluate the nitrogen delivery process. This approach involves principles found in Lean, Reliability, Systems Thinking, and Requirements. This unique combination of principles and thought processes yields a very in-depth look into the system to which it is applied. By applying a systems based approach to the nitrogen delivery process, there should be improvements in cycle time and efficiency, and a reduction in the required number of personnel needed to sustain the delivery process. This will in turn reduce the amount of demurrage charges that the site incurs. In addition, there should be less frustration associated with the delivery process.

  9. Evolutions in fragment-based drug design: the deconstruction–reconstruction approach

    PubMed Central

    Chen, Haijun; Zhou, Xiaobin; Wang, Ailan; Zheng, Yunquan; Gao, Yu; Zhou, Jia

    2014-01-01

    Recent advances in the understanding of molecular recognition and protein–ligand interactions have facilitated rapid development of potent and selective ligands for therapeutically relevant targets. Over the past two decades, a variety of useful approaches and emerging techniques have been developed to promote the identification and optimization of leads that have high potential for generating new therapeutic agents. Intriguingly, the innovation of the fragment-based drug design (FBDD) approach has enabled rapid and efficient progress in drug discovery. In this critical review, we focus on the construction of fragment libraries and the advantages and disadvantages of various fragment-based screening (FBS) methods for constructing such libraries. We also highlight the deconstruction–reconstruction strategy of utilizing privileged fragments of reported ligands. PMID:25263697

  10. Efficient techniques for wave-based sound propagation in interactive applications

    NASA Astrophysics Data System (ADS)

    Mehra, Ravish

    Sound propagation techniques model the effect of the environment on sound waves and predict their behavior from the point of emission at the source to the final point of arrival at the listener. Sound is a pressure wave produced by mechanical vibration of a surface that propagates through a medium such as air or water, and the problem of sound propagation can be formulated mathematically as a second-order partial differential equation called the wave equation. Accurate techniques based on solving the wave equation, also called wave-based techniques, are too expensive computationally and memory-wise. Therefore, these techniques face many challenges in terms of their applicability in interactive applications, including sound propagation in large environments, time-varying source and listener directivity, and high simulation cost for mid-frequencies. In this dissertation, we propose a set of efficient wave-based sound propagation techniques that solve these three challenges and enable the use of wave-based sound propagation in interactive applications. Firstly, we propose a novel equivalent source technique for interactive wave-based sound propagation in large scenes spanning hundreds of meters. It is based on the equivalent source theory used for solving radiation and scattering problems in acoustics and electromagnetics. Instead of using a volumetric or surface-based approach, this technique takes an object-centric approach to sound propagation. The proposed equivalent source technique generates realistic acoustic effects and takes orders of magnitude less runtime memory compared to prior wave-based techniques. Secondly, we present an efficient framework for handling time-varying source and listener directivity for interactive wave-based sound propagation. The source directivity is represented as a linear combination of elementary spherical harmonic sources. This spherical-harmonic-based representation of source directivity can support analytical, data-driven, rotating or time-varying directivity functions at runtime. Unlike previous approaches, the listener directivity approach can be used to compute spatial audio (3D audio) for a moving, rotating listener at interactive rates. Lastly, we propose an efficient GPU-based time-domain solver for the wave equation that enables wave simulation up to the mid-frequency range in tens of minutes on a desktop computer. It is demonstrated that by carefully mapping all the components of the wave simulator to match the parallel processing capabilities of graphics processors, significant improvement in performance can be achieved compared to CPU-based simulators, while maintaining numerical accuracy. We validate these techniques with offline numerical simulations and measured data recorded in an outdoor scene. We present results of preliminary user evaluations conducted to study the impact of these techniques on users' immersion in the virtual environment. We have integrated these techniques with the Half-Life 2 game engine, Oculus Rift head-mounted display, and Xbox game controller to enable users to experience high-quality acoustic effects and spatial audio in the virtual environment.
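
    A minimal sketch of the kind of time-domain wave-equation solver discussed here: a 2-D FDTD leapfrog update on a uniform grid (CPU/numpy rather than GPU, with periodic boundaries and toy parameters chosen for brevity, not the dissertation's solver).

    ```python
    import numpy as np

    # Scalar wave equation p_tt = c^2 (p_xx + p_yy) on a periodic grid.
    nx = ny = 200
    c, dx = 343.0, 0.05                  # speed of sound (m/s), cell size (m)
    dt = 0.9 * dx / (c * np.sqrt(2))     # CFL-stable time step in 2-D
    lam = (c * dt / dx) ** 2

    p_prev = np.zeros((nx, ny))
    p = np.zeros((nx, ny))
    p[nx // 2, ny // 2] = 1.0            # impulsive point source

    for step in range(500):
        # Five-point Laplacian; np.roll makes the boundaries periodic.
        lap = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
               np.roll(p, 1, 1) + np.roll(p, -1, 1) - 4 * p)
        p_next = 2 * p - p_prev + lam * lap   # leapfrog time update
        p_prev, p = p, p_next

    print("field energy:", float((p ** 2).sum()))
    ```

    The GPU mapping the dissertation describes parallelizes exactly this stencil update, one thread per grid cell.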

  11. Electrofuels: A New Paradigm for Renewable Fuels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Conrado, Robert J.; Haynes, Chad A.; Haendler, Brenda E.

    2013-01-01

    Biofuels are by now a well-established component of the liquid fuels market and will continue to grow in importance for both economic and environmental reasons. To date, all commercial approaches to biofuels involve photosynthetic capture of solar radiation and conversion to reduced carbon; however, the low efficiency inherent to photosynthetic systems presents significant challenges to scaling. In 2009, the US Department of Energy (DOE) Advanced Research Projects Agency-Energy (ARPA-E) created the Electrofuels program to explore the potential of nonphotosynthetic autotrophic organisms for the conversion of durable forms of energy to energy-dense, infrastructure-compatible liquid fuels. The Electrofuels approach expands the boundaries of traditional biofuels and could offer dramatically higher conversion efficiencies while providing significant reductions in requirements for both arable land and water relative to photosynthetic approaches. The projects funded under the Electrofuels program tap the enormous and largely unexplored diversity of the natural world, and may offer routes to advanced biofuels that are significantly more efficient, scalable and feedstock-flexible than routes based on photosynthesis. Here, we describe the rationale for the creation of the Electrofuels program, and outline the challenges and opportunities afforded by chemolithoautotrophic approaches to liquid fuels.

  12. Greens, suits, and bureaucrats: A sociological study of dynamic organizational relationships in energy efficient appliance policy

    NASA Astrophysics Data System (ADS)

    Shwom-Evelich, Rachael Leah

    In this dissertation I develop an approach to understanding dynamic organizational relations and the processes of environmental degradation and reform. To do this, I draw on environmental and organizational sociology to inform an empirical study of interorganizational relationships in defining and promoting energy efficient appliances in the United States (US). The dissertation follows a three-paper approach which involves (a) an overall introduction to the substantive issue of appliance energy efficiency in the US; (b) three separate, stand-alone articles of publishable quality to be submitted to professional journals; and (c) an overall conclusion. The three articles are as follows: (1) a synthetic literature review identifying five lessons that organizational sociology and environmental sociology can learn from each other to advance our sociological understanding of organizations, energy issues, and climate change; (2) a qualitative case study of the changing relationships among business, government, and environmental and energy advocacy organizations around mandatory appliance efficiency standards, supporting the development of a context-dependent reading of ecological modernization and treadmill-of-production theories in environmental sociology; and (3) a network analysis of government, business, and energy efficiency advocates' interorganizational relationships and their influence on subsequent organizational behaviors in the appliance energy efficiency field. The second and third articles are based on extensive archival research on organizational negotiations of public record over defining energy efficient appliances in both regulatory and voluntary settings. Finally, I provide an overall conclusion that brings together the most significant findings of each article in anticipation of a synthetic approach to the study of organizations in environmental reform.

  13. Novel approach to an effective community-based chlamydia screening program within the routine operation of a primary healthcare service.

    PubMed

    Buhrer-Skinner, Monika; Muller, Reinhold; Menon, Arun; Gordon, Rose

    2009-03-01

    A prospective study was undertaken to develop an evidence-based outreach chlamydia screening program and to assess the viability and efficiency of this complementary approach to chlamydia testing within the routine operations of a primary healthcare service. A primary healthcare service based in Townsville, Queensland, Australia, identified high-prevalence groups for chlamydia in the community. Subsequently, a series of outreach clinics were established and conducted between August 2004 and November 2005 at a defence force unit, a university, high school leavers' festivities, a high school catering for Indigenous students, youth service programs, and backpacker accommodations. All target groups were easily accessible and yielded high participation. Chlamydia prevalence ranged between 5 and 15% for five of the six groups; high school leavers had no chlamydia. All participants were notified of their results and all positive cases were treated (median treatment interval 7 days). Five of the six assessed groups were identified as viable for screening and form the basis for the ongoing outreach chlamydia screening program. The present study developed an evidence-based outreach chlamydia screening program and demonstrated its viability as a complementary approach to chlamydia testing within the routine operations of the primary healthcare service, i.e. without the need for additional funding. It contributes to the evidence base necessary for a viable and efficient chlamydia management program. Although the presented particulars may not be directly transferable to other communities or health systems, the general two-step approach of identifying local high-risk populations and then collaborating with community groups to access these populations is transferable.

  14. A Module-Based Approach: Training Paraeducators on Evidence-Based Practices

    ERIC Educational Resources Information Center

    Da Fonte, M. Alexandra; Capizzi, Andrea M.

    2015-01-01

    Paraeducators are on the front lines in special education settings, providing support to teachers and students with significant disabilities and specific health-care needs. The important role they play demands efficient and cost-effective training in core skills. This study utilized a multiple-baseline across behaviors design to evaluate a…

  15. An Impact-Based Filtering Approach for Literature Searches

    ERIC Educational Resources Information Center

    Vista, Alvin

    2013-01-01

    This paper aims to present an alternative and simple method to improve the filtering of search results so as to increase the efficiency of literature searches, particularly for individual researchers who have limited logistical resources. The method proposed here is scope restriction using an impact-based filter, made possible by the emergence of…

  16. Team-Based Learning in Anatomy: An Efficient, Effective, and Economical Strategy

    ERIC Educational Resources Information Center

    Vasan, Nagaswami S.; DeFouw, David O.; Compton, Scott

    2011-01-01

    Team-based learning (TBL) strategy is being adopted in medical education to implement interactive small group learning. We have modified classical TBL to fit our curricular needs and approach. Anatomy lectures were replaced with TBL that required preparation of assigned content specific discussion topics (in the text referred as "discussion…

  17. Nonorthogonal orbital based N-body reduced density matrices and their applications to valence bond theory. I. Hamiltonian matrix elements between internally contracted excited valence bond wave functions

    NASA Astrophysics Data System (ADS)

    Chen, Zhenhua; Chen, Xun; Wu, Wei

    2013-04-01

    In this series, the n-body reduced density matrix (n-RDM) approach for nonorthogonal orbitals and its applications to ab initio valence bond (VB) methods are presented. As the first paper of this series, Hamiltonian matrix elements between internally contracted VB wave functions are explicitly provided by means of the nonorthogonal orbital based RDM approach. To this end, a more generalized Wick's theorem, called the enhanced Wick's theorem, is presented in both arithmetic and graphical forms, by which the deduction of expressions for the matrix elements between internally contracted VB wave functions is dramatically simplified, and the matrix elements are finally expressed in terms of tensor contractions of electronic integrals and n-RDMs of the reference VB self-consistent field wave function. A string-based algorithm is developed for the purpose of evaluating n-RDMs in an efficient way. Using the techniques presented in this paper, one is able to develop new methods and efficient algorithms for nonorthogonal orbital based many-electron theory much more easily than by use of the first-quantized formalism.

  18. Sequential time interleaved random equivalent sampling for repetitive signal.

    PubMed

    Zhao, Yijiu; Liu, Jingjing

    2016-12-01

    Compressed sensing (CS) based sampling techniques exhibit many advantages over other existing approaches for sparse signal spectrum sensing; they are also incorporated into non-uniform sampling signal reconstruction to improve the efficiency, such as random equivalent sampling (RES). However, in CS based RES, only one sample of each acquisition is considered in the signal reconstruction stage, which results in more acquisition runs and longer sampling time. In this paper, a sampling sequence is taken in each RES acquisition run, and the corresponding block measurement matrix is constructed using a Whittaker-Shannon interpolation formula. All the block matrices are combined into an equivalent measurement matrix with respect to all sampling sequences. We implemented the proposed approach with a multi-core analog-to-digital converter (ADC), whose ADC cores are time interleaved. A prototype realization of this proposed CS based sequential random equivalent sampling method has been developed. It is able to capture an analog waveform at an equivalent sampling rate of 40 GHz while physically sampling at 1 GHz. Experiments indicate that, for a sparse signal, the proposed CS based sequential random equivalent sampling exhibits high efficiency.
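
    As an illustration of the measurement-matrix construction described above, the sketch below assembles one Whittaker-Shannon (sinc) interpolation block per acquisition run and stacks the blocks into the combined matrix. Only the 40 GHz equivalent and 1 GHz physical rates come from the abstract; the grid length, run count, and sequence length are illustrative assumptions.

```python
import numpy as np

N = 1024                     # equivalent-time reconstruction grid length (assumed)
f_eq, f_phys = 40e9, 1e9     # equivalent (40 GHz) and physical (1 GHz) rates
T_eq, T_phys = 1.0 / f_eq, 1.0 / f_phys
ratio = int(f_eq // f_phys)  # grid points between consecutive physical samples
runs, seq_len = 8, 16        # acquisition runs and samples per sequence (assumed)

rng = np.random.default_rng(0)
blocks = []
for _ in range(runs):
    # random trigger offset keeps the whole sequence inside the grid
    t0 = rng.uniform(0, (N - seq_len * ratio) * T_eq)
    t = t0 + np.arange(seq_len) * T_phys          # one sampling sequence
    n = np.arange(N)
    # Whittaker-Shannon row: x(t_k) = sum_n x[n] * sinc((t_k - n*T_eq)/T_eq)
    blocks.append(np.sinc((t[:, None] - n[None, :] * T_eq) / T_eq))

Phi = np.vstack(blocks)      # combined measurement matrix over all sequences
print(Phi.shape)             # (runs * seq_len, N); feed to any CS solver
```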

  19. The relationship between hospital specialization and hospital efficiency: do different measures of specialization lead to different results?

    PubMed

    Lindlbauer, Ivonne; Schreyögg, Jonas

    2014-12-01

    This study investigated the relationship between hospital specialization and technical efficiency using different measures of specialization, including two novel approaches based on patient volumes rather than patient proportions. It was motivated by the observation that most studies to date have quantified hospital specialization using information about hospital patients grouped into different categories based on their diagnosis, and in doing so have used proportions-thus indirectly assuming that these categories are dependent on one another. In order to account for the diversification of organizations and the idea that hospitals can be specialized in terms of professional expertise or technical equipment within a given diagnosis category, we developed our two specialization measures based on patient volume in each category. Using a one-step stochastic frontier approach on randomly selected data from the annual reports of 1,239 acute care German hospitals for the years 2000 through 2010, we estimated the relationship of inefficiency to exogenous variables, such as specialization. The results show that specialization as quantified by our novel measures has effects on efficiency that are the opposite of those obtained using earlier measures of specialization. These results underscore the importance of always providing an exact definition of specialization when studying its effects. Additionally, a Monte Carlo simulation based on three scenarios is provided to facilitate the choice of a specialization measure for further analysis.
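
    The contrast between proportion-based and volume-based specialization can be made concrete with a toy comparison. The Herfindahl index below is the standard proportion-based measure in this literature; the volume-based index is a hypothetical stand-in for the authors' measures, chosen only to show that two hospitals with identical case mixes but different caseloads score identically on proportions yet differently on volumes.

```python
import numpy as np

def herfindahl(cases):
    """Proportion-based: sum of squared category shares (1 = fully specialized)."""
    p = np.asarray(cases, dtype=float)
    p = p / p.sum()
    return float(np.sum(p ** 2))

def volume_index(cases, reference_volume=500.0):
    """Hypothetical volume-based measure: mean category caseload relative to a
    reference volume, so growing every category raises the score even though
    the proportions (and hence the Herfindahl index) are unchanged."""
    v = np.asarray(cases, dtype=float)
    return float(np.mean(v) / reference_volume)

small = [100, 100, 100]      # small generalist hospital
large = [1000, 1000, 1000]   # same case mix, 10x the volume
print(herfindahl(small), herfindahl(large))      # identical: 0.333 and 0.333
print(volume_index(small), volume_index(large))  # different: 0.2 vs 2.0
```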

  20. Applicability of the polynomial chaos expansion method for personalization of a cardiovascular pulse wave propagation model.

    PubMed

    Huberts, W; Donders, W P; Delhaas, T; van de Vosse, F N

    2014-12-01

    Patient-specific modeling requires model personalization, which can be achieved in an efficient manner by parameter fixing and parameter prioritization. An efficient variance-based method for this is generalized polynomial chaos expansion (gPCE), but it has not been applied in the context of model personalization, nor has it ever been compared with standard variance-based methods for models with many parameters. In this work, we apply the gPCE method to a previously reported pulse wave propagation model and compare the conclusions for model personalization with those of a reference analysis performed with Saltelli's efficient Monte Carlo method. We furthermore differentiate two approaches for obtaining the expansion coefficients: one based on spectral projection (gPCE-P) and one based on least squares regression (gPCE-R). It was found that in general the gPCE yields similar conclusions as the reference analysis but at much lower cost, as long as the polynomial metamodel does not contain unnecessary high-order terms. Furthermore, the gPCE-R approach generally yielded better results than gPCE-P. The weak performance of the gPCE-P can be attributed to the assessment of the expansion coefficients using the Smolyak algorithm, which might be hampered by the high number of model parameters and/or by possible non-smoothness in the output space. Copyright © 2014 John Wiley & Sons, Ltd.
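
    A minimal sketch of the regression route (gPCE-R) on a toy two-parameter model with uniform inputs: fit Legendre chaos coefficients by least squares, then read main-effect Sobol indices off the coefficients. The test model, sample size, and polynomial degree are illustrative assumptions, not the pulse wave propagation setup.

```python
import numpy as np
from numpy.polynomial import legendre as L
from itertools import product

def model(x1, x2):                       # stand-in for the expensive model
    return x1 + 0.5 * x2 ** 2 + 0.2 * x1 * x2

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))    # uniform inputs on [-1, 1]
y = model(X[:, 0], X[:, 1])

# total-degree-2 Legendre basis for two inputs
alphas = [a for a in product(range(3), repeat=2) if sum(a) <= 2]
def legval1(x, n):                       # evaluates P_n(x)
    return L.legval(x, [0] * n + [1])
A = np.column_stack([legval1(X[:, 0], a0) * legval1(X[:, 1], a1)
                     for a0, a1 in alphas])
c, *_ = np.linalg.lstsq(A, y, rcond=None)  # gPCE-R: least squares fit

norm2 = {n: 1.0 / (2 * n + 1) for n in range(3)}  # E[P_n^2] under U(-1, 1)
var_terms = {a: c[i] ** 2 * norm2[a[0]] * norm2[a[1]]
             for i, a in enumerate(alphas) if a != (0, 0)}
V = sum(var_terms.values())
S1 = sum(v for a, v in var_terms.items() if a[1] == 0) / V  # main effect of x1
S2 = sum(v for a, v in var_terms.items() if a[0] == 0) / V  # main effect of x2
print(f"S1={S1:.3f}, S2={S2:.3f}")   # candidates with tiny S_i can be fixed
```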

  1. Implementation methodology for interoperable personal health devices with low-voltage low-power constraints.

    PubMed

    Martinez-Espronceda, Miguel; Martinez, Ignacio; Serrano, Luis; Led, Santiago; Trigo, Jesús Daniel; Marzo, Asier; Escayola, Javier; Garcia, José

    2011-05-01

    Traditionally, e-Health solutions were located at the point of care (PoC), while the new ubiquitous user-centered paradigm draws on standard-based personal health devices (PHDs). Such devices place strict constraints on computation and battery efficiency that encouraged the International Organization for Standardization/IEEE11073 (X73) standard for medical devices to evolve from X73PoC to X73PHD. In this context, low-voltage low-power (LV-LP) technologies meet the restrictions of X73PHD-compliant devices. Since X73PHD does not prescribe the software architecture, achieving an efficient design falls directly on the software developer. The computational and battery performance of such LV-LP-constrained devices can therefore be further improved through an efficient X73PHD implementation design. Against this background, this paper proposes a new methodology to implement X73PHD on microcontroller-based platforms with LV-LP constraints. This implementation methodology has been developed through a patterns-based approach and applied to a number of X73PHD-compliant agents (including weighing scale, blood pressure monitor, and thermometer specializations) and microprocessor architectures (8, 16, and 32 bits) as a proof of concept. As a reference, the results obtained for the weighing scale guarantee all features of X73PHD running over a microcontroller architecture based on the ARM7TDMI while requiring only 168 B of RAM and 2546 B of flash memory.

  2. Rapid Nucleic Acid Extraction and Purification Using a Miniature Ultrasonic Technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Branch, Darren W.; Vreeland, Erika C.; McClain, Jamie L.

    Miniature ultrasonic lysis for biological sample preparation is a promising technique for efficient and rapid extraction of nucleic acids and proteins from a wide variety of biological sources. Acoustic methods achieve rapid, unbiased, and efficacious disruption of cellular membranes while avoiding the use of harsh chemicals and enzymes, which interfere with detection assays. In this work, a miniature acoustic nucleic acid extraction system is presented. Using a miniature bulk acoustic wave (BAW) transducer array based on 36° Y-cut lithium niobate, acoustic waves were coupled into disposable laminate-based microfluidic cartridges. To verify the lysing effectiveness, the amount of liberated ATP and the cell viability were measured and compared to untreated samples. The relationships between input power, energy dose, flow rate, and lysing efficiency were determined. DNA was purified on-chip using three approaches implemented in the cartridges: a silica-based sol-gel silica-bead-filled microchannel, nucleic acid binding magnetic beads, and Nafion-coated electrodes. Using E. coli, the lysing dose defined as ATP released per joule was 2.2× greater, releasing 6.1× more ATP, for the miniature BAW array compared to a bench-top acoustic lysis system. An electric field-based nucleic acid purification approach using Nafion films yielded an extraction efficiency of 69.2% in 10 min for 50 µL samples.

  3. Rapid Nucleic Acid Extraction and Purification Using a Miniature Ultrasonic Technique

    DOE PAGES

    Branch, Darren W.; Vreeland, Erika C.; McClain, Jamie L.; ...

    2017-07-21

    Miniature ultrasonic lysis for biological sample preparation is a promising technique for efficient and rapid extraction of nucleic acids and proteins from a wide variety of biological sources. Acoustic methods achieve rapid, unbiased, and efficacious disruption of cellular membranes while avoiding the use of harsh chemicals and enzymes, which interfere with detection assays. In this work, a miniature acoustic nucleic acid extraction system is presented. Using a miniature bulk acoustic wave (BAW) transducer array based on 36° Y-cut lithium niobate, acoustic waves were coupled into disposable laminate-based microfluidic cartridges. To verify the lysing effectiveness, the amount of liberated ATP and the cell viability were measured and compared to untreated samples. The relationships between input power, energy dose, flow rate, and lysing efficiency were determined. DNA was purified on-chip using three approaches implemented in the cartridges: a silica-based sol-gel silica-bead-filled microchannel, nucleic acid binding magnetic beads, and Nafion-coated electrodes. Using E. coli, the lysing dose defined as ATP released per joule was 2.2× greater, releasing 6.1× more ATP, for the miniature BAW array compared to a bench-top acoustic lysis system. An electric field-based nucleic acid purification approach using Nafion films yielded an extraction efficiency of 69.2% in 10 min for 50 µL samples.

  4. Group-based variant calling leveraging next-generation supercomputing for large-scale whole-genome sequencing studies.

    PubMed

    Standish, Kristopher A; Carland, Tristan M; Lockwood, Glenn K; Pfeiffer, Wayne; Tatineni, Mahidhar; Huang, C Chris; Lamberth, Sarah; Cherkas, Yauheniya; Brodmerkel, Carrie; Jaeger, Ed; Smith, Lance; Rajagopal, Gunaretnam; Curran, Mark E; Schork, Nicholas J

    2015-09-22

    Next-generation sequencing (NGS) technologies have become much more efficient, allowing whole human genomes to be sequenced faster and cheaper than ever before. However, processing the raw sequence reads associated with NGS technologies requires care and sophistication in order to draw compelling inferences about phenotypic consequences of variation in human genomes. It has been shown that different approaches to variant calling from NGS data can lead to different conclusions. Ensuring appropriate accuracy and quality in variant calling can come at a computational cost. We describe our experience implementing and evaluating a group-based approach to calling variants on large numbers of whole human genomes. We explore the influence of many factors that may impact the accuracy and efficiency of group-based variant calling, including group size, the biogeographical backgrounds of the individuals who have been sequenced, and the computing environment used. We make efficient use of the Gordon supercomputer cluster at the San Diego Supercomputer Center by incorporating job-packing and parallelization considerations into our workflow while calling variants on 437 whole human genomes generated as part of a large association study. We ultimately find that our workflow resulted in high-quality variant calls in a computationally efficient manner. We argue that studies like ours should motivate further investigations combining hardware-oriented advances in computing systems with algorithmic developments to tackle emerging 'big data' problems in biomedical research brought on by the expansion of NGS technologies.
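
    A minimal sketch of the group-based batching idea under stated assumptions: genomes are split into fixed-size groups, each processed by one joint-calling job, with jobs packed onto a pool of workers. The group size and the call_variants stand-in are illustrative; the study's actual pipeline and scheduler are not reproduced here.

```python
from concurrent.futures import ProcessPoolExecutor

def call_variants(group):                 # stand-in for a joint-calling job
    return f"jointly called {len(group)} genomes"

genomes = [f"genome_{i:03d}" for i in range(437)]
group_size = 32                           # one of the factors the study varies
groups = [genomes[i:i + group_size] for i in range(0, len(genomes), group_size)]

if __name__ == "__main__":
    # job packing: keep a node's workers saturated with group-level jobs
    with ProcessPoolExecutor(max_workers=4) as pool:
        for result in pool.map(call_variants, groups):
            print(result)
```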

  5. Decomposed multidimensional control grid interpolation for common consumer electronic image processing applications

    NASA Astrophysics Data System (ADS)

    Zwart, Christine M.; Venkatesan, Ragav; Frakes, David H.

    2012-10-01

    Interpolation is an essential and broadly employed function of signal processing. Accordingly, considerable development has focused on advancing interpolation algorithms toward optimal accuracy. Such development has motivated a clear shift in the state of the art from classical interpolation to more intelligent and resourceful approaches, registration-based interpolation for example. As a natural result, many of the most accurate current algorithms are highly complex, specific, and computationally demanding. However, the diverse hardware destinations for interpolation algorithms present unique constraints that often preclude use of the most accurate available options. For example, while computationally demanding interpolators may be suitable for highly equipped image processing platforms (e.g., computer workstations and clusters), only more efficient interpolators may be practical for less well equipped platforms (e.g., smartphones and tablet computers). The latter examples of consumer electronics present a design tradeoff in this regard: high-accuracy interpolation benefits the consumer experience, but computing capabilities are limited. It follows that interpolators with favorable combinations of accuracy and efficiency are of great practical value to the consumer electronics industry. We address multidimensional interpolation-based image processing problems that are common to consumer electronic devices through a decomposition approach. The multidimensional problems are first broken down into multiple, independent, one-dimensional (1-D) interpolation steps that are then executed with a newly modified registration-based 1-D control grid interpolator. The proposed approach, decomposed multidimensional control grid interpolation (DMCGI), combines the accuracy of registration-based interpolation with the simplicity, flexibility, and computational efficiency of a 1-D interpolation framework. Results demonstrate that DMCGI provides improved interpolation accuracy (and other benefits) in image resizing, color sample demosaicing, and video deinterlacing applications, at a computational cost that is manageable or reduced in comparison to popular alternatives.
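
    A simplified sketch of the decomposition idea: a 2-D resize performed as two independent 1-D interpolation passes (rows, then columns). DMCGI replaces the classical 1-D interpolator used here (np.interp) with a registration-based 1-D control grid interpolator; only the decomposition structure is shown.

```python
import numpy as np

def resize_1d_passes(img, new_h, new_w):
    h, w = img.shape
    # pass 1: interpolate each row independently (x direction)
    x_old, x_new = np.arange(w), np.linspace(0, w - 1, new_w)
    tmp = np.array([np.interp(x_new, x_old, row) for row in img])
    # pass 2: interpolate each column independently (y direction)
    y_old, y_new = np.arange(h), np.linspace(0, h - 1, new_h)
    out = np.array([np.interp(y_new, y_old, col) for col in tmp.T]).T
    return out

img = np.random.rand(64, 64)
print(resize_1d_passes(img, 128, 96).shape)   # (128, 96)
```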

  6. Balancing efficiency, equity and feasibility of HIV treatment in South Africa – development of programmatic guidance

    PubMed Central

    2013-01-01

    South Africa, the country with the largest HIV epidemic worldwide, has been scaling up treatment since 2003 and is rapidly expanding its eligibility criteria. The HIV treatment programme has achieved significant results, with 1.8 million people on treatment as of 2011. Despite these achievements, it is now facing major concerns regarding (i) efficiency: alternative treatment policies may save more lives for the same budget; (ii) equity: there are large inequalities in who receives treatment; (iii) feasibility: still only 52% of the eligible population receives treatment. Hence, decisions on the design of the present HIV treatment programme in South Africa can be considered suboptimal. We argue there are two fundamental reasons for this. First, while there is a rapidly growing evidence base to guide priority-setting decisions on HIV treatment, its included studies typically consider only one criterion at a time and thus fail to capture the broad range of values that stakeholders have. Second, priority setting on HIV treatment is a highly political process, but it seems no adequate participatory processes are in place to incorporate stakeholders’ views and evidence of all kinds. We propose an alternative approach that provides a better evidence base and outlines a fair policy process to improve priority setting in HIV treatment. The approach integrates two increasingly important frameworks on health care priority setting: accountability for reasonableness (A4R) to foster procedural fairness, and multi-criteria decision analysis (MCDA) to construct an evidence base on the feasibility, efficiency, and equity of programme options including trade-offs. The approach provides programmatic guidance on the choice of treatment strategies at various decision levels based on a sound conceptual framework, and holds large potential to improve HIV priority setting in South Africa. PMID:24107435
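
    The MCDA side of the proposed approach can be sketched as a weighted scoring of programme options against the three criteria named above. The options, scores, and stakeholder weights below are purely illustrative placeholders, not values from the paper.

```python
# weighted-sum MCDA over hypothetical programme options and criteria scores
options = {
    "expand eligibility":    {"efficiency": 0.8, "equity": 0.6, "feasibility": 0.5},
    "prioritise rural care": {"efficiency": 0.6, "equity": 0.9, "feasibility": 0.6},
    "status quo":            {"efficiency": 0.5, "equity": 0.4, "feasibility": 0.9},
}
weights = {"efficiency": 0.4, "equity": 0.4, "feasibility": 0.2}  # stakeholder-elicited

ranked = sorted(options.items(),
                key=lambda kv: sum(weights[c] * s for c, s in kv[1].items()),
                reverse=True)
for name, scores in ranked:
    total = sum(weights[c] * s for c, s in scores.items())
    print(f"{name}: {total:.2f}")   # the A4R process would then review this ranking
```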

  7. Combining Model-Based and Feature-Driven Diagnosis Approaches - A Case Study on Electromechanical Actuators

    NASA Technical Reports Server (NTRS)

    Narasimhan, Sriram; Roychoudhury, Indranil; Balaban, Edward; Saxena, Abhinav

    2010-01-01

    Model-based diagnosis typically uses analytical redundancy to compare predictions from a model against observations from the system being diagnosed. However, this approach does not work well when it is not feasible to create analytic relations describing all the observed data, e.g., for vibration data, which is usually sampled at very high rates and requires very detailed finite element models to describe its behavior. In such cases, features (in time and frequency domains) that contain diagnostic information are extracted from the data. Since this is a computationally intensive process, it is not efficient to extract all the features all the time. In this paper we present an approach that combines the analytic model-based and feature-driven diagnosis approaches. The analytic approach is used to reduce the set of possible faults, and then features are chosen to best distinguish among the remaining faults. We describe an implementation of this approach on the Flyable Electro-mechanical Actuator (FLEA) test bed.
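
    A minimal sketch of the combined scheme under stated assumptions: an analytic residual check first prunes the fault candidate set, and a feature is then chosen to best separate the survivors. The fault signatures and per-fault feature statistics are hypothetical placeholders.

```python
import numpy as np

# step 1: analytic model-based pruning -- keep only faults whose residual
# signature matches the observed residual pattern
signatures = {"spall": (1, 0), "wear": (1, 0), "jam": (1, 1)}   # hypothetical
observed = (1, 0)
candidates = [f for f, s in signatures.items() if s == observed]

# step 2: feature-driven refinement -- pick the feature that best separates
# the surviving candidates, given (hypothetical) per-fault (mean, std) stats
features = {
    "rms":      {"spall": (2.0, 0.5), "wear": (2.1, 0.5)},
    "kurtosis": {"spall": (8.0, 1.0), "wear": (3.5, 1.0)},
}
def separation(stats, a, b):
    (ma, sa), (mb, sb) = stats[a], stats[b]
    return abs(ma - mb) / np.sqrt(0.5 * (sa ** 2 + sb ** 2))

best = max(features, key=lambda f: separation(features[f], *candidates))
print("remaining faults:", candidates, "-> extract only:", best)
```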

  8. SGFSC: speeding the gene functional similarity calculation based on hash tables.

    PubMed

    Tian, Zhen; Wang, Chunyu; Guo, Maozu; Liu, Xiaoyan; Teng, Zhixia

    2016-11-04

    In recent years, many measures of gene functional similarity have been proposed and widely used in all kinds of essential research. These methods are mainly divided into two categories: pairwise approaches and group-wise approaches. However, a common problem with these methods is their time consumption, especially when measuring the gene functional similarities of a large number of gene pairs. The problem of computational efficiency for pairwise approaches is even more prominent because they are dependent on the combination of semantic similarity. Therefore, the efficient measurement of gene functional similarity remains a challenging problem. To speed up current gene functional similarity calculation methods, a novel two-step computing strategy is proposed: (1) establish a hash table for each method to store essential information obtained from the Gene Ontology (GO) graph and (2) measure gene functional similarity based on the corresponding hash table. There is no need to traverse the GO graph repeatedly for each method with the help of the hash table. The analysis of time complexity shows that the computational efficiency of these methods is significantly improved. We also implement a novel Speeding Gene Functional Similarity Calculation tool, namely SGFSC, which is bundled with seven typical measures using our proposed strategy. Further experiments show the great advantage of SGFSC in measuring gene functional similarity on the whole genomic scale. The proposed strategy is successful in speeding up current gene functional similarity calculation methods. SGFSC is an efficient tool that is freely available at http://nclab.hit.edu.cn/SGFSC . The source code of SGFSC can be downloaded from http://pan.baidu.com/s/1dFFmvpZ .
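
    A minimal sketch of the two-step strategy: (1) traverse a toy GO-like DAG once and cache each term's ancestor set in a hash table; (2) answer similarity queries from the table alone. The toy DAG, the Jaccard-style term similarity, and the best-match average are illustrative stand-ins for the seven measures bundled in SGFSC.

```python
toy_go = {"t1": ["root"], "t2": ["root"], "t3": ["t1", "t2"],
          "t4": ["t1"], "root": []}          # child -> parents (toy DAG)

def build_ancestor_table(dag):
    """Step 1: one traversal, results memoized in a hash table (dict)."""
    table = {}
    def ancestors(term):
        if term not in table:
            anc = set()
            for parent in dag[term]:
                anc |= {parent} | ancestors(parent)
            table[term] = anc
        return table[term]
    for term in dag:
        ancestors(term)
    return table

TABLE = build_ancestor_table(toy_go)

def term_sim(a, b):
    """Step 2: pure table lookups, no graph traversal per query."""
    sa, sb = TABLE[a] | {a}, TABLE[b] | {b}
    return len(sa & sb) / len(sa | sb)

def gene_sim(g1_terms, g2_terms):            # group-wise best-match average
    best1 = [max(term_sim(a, b) for b in g2_terms) for a in g1_terms]
    best2 = [max(term_sim(a, b) for a in g1_terms) for b in g2_terms]
    return (sum(best1) + sum(best2)) / (len(best1) + len(best2))

print(gene_sim(["t3", "t4"], ["t2"]))
```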

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    Training programs at DOE facilities should prepare personnel to safely and efficiently operate and maintain the facilities in accordance with DOE requirements. This guide presents good practices for a systematic approach to on-the-job training (OJT) and OJT programs and should be used in conjunction with DOE Training Program Handbook: A Systematic Approach to Training, and with the DOE Handbook entitled Alternative Systematic Approaches to Training to develop performance-based OJT programs. DOE contractors may also use this guide to modify existing OJT programs that do not meet the systematic approach to training (SAT) objectives.

  10. Dynamics of Coupled Electron-Boson Systems with the Multiple Davydov D1 Ansatz and the Generalized Coherent State.

    PubMed

    Chen, Lipeng; Borrelli, Raffaele; Zhao, Yang

    2017-11-22

    The dynamics of a coupled electron-boson system is investigated by employing a multitude of Davydov D1 trial states, also known as the multi-D1 Ansatz, and a second trial state based on a superposition of time-dependent generalized coherent states (the GCS Ansatz). The two Ansätze are applied to study population dynamics in the spin-boson model and the Holstein molecular crystal model, and a detailed comparison with numerically exact results obtained by the (multilayer) multiconfiguration time-dependent Hartree method and the hierarchy equations of motion approach is drawn. It is found that the two methodologies proposed here improve significantly over the single D1 Ansatz, yielding quantitatively accurate results even in the critical cases of large energy biases and large transfer integrals. The two methodologies provide new effective tools for the accurate, efficient simulation of many-body quantum dynamics, thanks to the relatively small number of parameters that characterize the electron-nuclear wave functions. The wave-function-based approaches are capable of tracking explicitly detailed bosonic dynamics, which is absent by construction in approaches based on the reduced density matrix. The efficiency and flexibility of our methods are also advantages compared with numerically exact approaches such as QUAPI and HEOM, especially at low temperatures and in the strong coupling regime.

  11. Development and testing of an optimized method for DNA-based identification of jaguar (Panthera onca) and puma (Puma concolor) faecal samples for use in ecological and genetic studies.

    PubMed

    Haag, Taiana; Santos, Anelisie S; De Angelo, Carlos; Srbek-Araujo, Ana Carolina; Sana, Dênis A; Morato, Ronaldo G; Salzano, Francisco M; Eizirik, Eduardo

    2009-07-01

    The elusive nature and endangered status of most carnivore species imply that efficient approaches for their non-invasive sampling are required to allow for genetic and ecological studies. Faecal samples are a major potential source of information, and reliable approaches are needed to foster their application in this field, particularly in areas where few studies have been conducted. A major obstacle to the reliable use of faecal samples is their uncertain species-level identification in the field, an issue that can be addressed with DNA-based assays. In this study we describe a sequence-based approach that efficiently distinguishes jaguar versus puma scats, and that presents several desirable properties: (1) considerably high amplification and sequencing rates; (2) multiple diagnostic sites reliably differentiating the two focal species; (3) high information content that allows for future application in other carnivores; (4) no evidence of amplification of prey DNA; and (5) no evidence of amplification of a nuclear mitochondrial DNA insertion known to occur in the jaguar. We demonstrate the reliability and usefulness of this approach by evaluating 55 field-collected samples from four locations in the highly fragmented Atlantic Forest biome of Brazil and Argentina, and document the presence of one or both of these endangered felids in each of these areas.

  12. [A retrieval method of drug molecules based on graph collapsing].

    PubMed

    Qu, J W; Lv, X Q; Liu, Z M; Liao, Y; Sun, P H; Wang, B; Tang, Z

    2018-04-18

    The aim of this work was to establish a compact and efficient hypergraph representation and a graph-similarity-based retrieval method for molecules, to achieve effective and efficient medicine information retrieval. The chemical structural formula (CSF) is a primary search target, as a unique and precise identifier for each compound at the molecular level, in the research field of medicine information retrieval. To retrieve medicine information effectively and efficiently, a complete workflow of the graph-based CSF retrieval system was introduced. This system accepted the photos taken from smartphones and the sketches drawn on tablet personal computers as CSF inputs, and formalized the CSFs with the corresponding graphs. Then this paper proposed a compact and efficient hypergraph representation for molecules on the basis of analyzing factors that directly affect the efficiency of graph matching. According to the characteristics of CSFs, a hierarchical collapsing method combining graph isomorphism and frequent subgraph mining was adopted. A fundamental challenge remained, however: subgraph overlapping during the collapsing procedure, which hindered the method from establishing the correct compact hypergraph of an original CSF graph. Therefore, a graph-isomorphism-based algorithm was proposed to select dominant acyclic subgraphs on the basis of overlapping analysis. Finally, the spatial similarity among graphical CSFs was evaluated by multi-dimensional measures of similarity. To evaluate the performance of the proposed method, the proposed system was first compared with Wikipedia Chemical Structure Explorer (WCSE), the state-of-the-art system that allows CSF similarity searching within the Wikipedia molecules dataset, on retrieval accuracy. The system achieved higher values on mean average precision, discounted cumulative gain, rank-biased precision, and expected reciprocal rank than WCSE from the top-2 to the top-10 retrieved results. Specifically, for the top-10 retrieval results, the system scored 10%, 1.41, 6.42%, and 1.32% higher than WCSE on these four metrics, respectively. Moreover, several retrieval cases were presented to intuitively compare with WCSE. The results of the above comparative study demonstrated that the proposed method outperformed the existing method with regard to accuracy and effectiveness. This paper proposes a graph-similarity-based retrieval approach for medicine information. To obtain satisfactory retrieval results, an isomorphism-based algorithm is proposed for dominant subgraph selection based on the subgraph overlapping analysis, as well as an effective and efficient hypergraph representation of molecules. Experiment results demonstrate the effectiveness of the proposed approach.
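
    The collapsing idea can be sketched on a toy molecular graph: rings are detected and each is replaced by a single hypernode, yielding a smaller graph for faster matching. networkx and the toluene-like example are illustrative; the paper's method additionally applies frequent-subgraph mining and the isomorphism-based selection of dominant subgraphs to resolve overlaps.

```python
import networkx as nx

def collapse_rings(G):
    """Replace each detected ring with one hypernode wired to its neighbors."""
    H = G.copy()
    for i, ring in enumerate(nx.cycle_basis(G)):
        ring = set(ring)
        if not ring <= set(H.nodes):      # ring already absorbed earlier
            continue
        outside = {n for r in ring for n in G.neighbors(r)} - ring
        H.remove_nodes_from(ring)
        hyper = f"ring{i}"
        H.add_node(hyper)
        H.add_edges_from((hyper, n) for n in outside if n in H)
    return H

# toluene-like toy: a six-membered ring with one methyl substituent
G = nx.cycle_graph(6)
G.add_edge(0, "CH3")
H = collapse_rings(G)
print(sorted(H.nodes, key=str), list(H.edges))   # ['CH3', 'ring0'] plus one edge
```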

  13. Assessing the predictive capability of randomized tree-based ensembles in streamflow modelling

    NASA Astrophysics Data System (ADS)

    Galelli, S.; Castelletti, A.

    2013-02-01

    Combining randomization methods with ensemble prediction is emerging as an effective option to balance accuracy and computational efficiency in data-driven modeling. In this paper we investigate the prediction capability of extremely randomized trees (Extra-Trees), in terms of accuracy, explanation ability and computational efficiency, in a streamflow modeling exercise. Extra-Trees are a totally randomized tree-based ensemble method that (i) alleviates the poor generalization property and tendency to overfitting of traditional standalone decision trees (e.g. CART); (ii) is computationally very efficient; and, (iii) allows one to infer the relative importance of the input variables, which might help in the ex-post physical interpretation of the model. The Extra-Trees potential is analyzed on two real-world case studies, the Marina catchment (Singapore) and the Canning River (Western Australia), representing two different morphoclimatic contexts, and compared with other tree-based methods (CART and M5) and parametric data-driven approaches (ANNs and multiple linear regression). Results show that Extra-Trees perform comparably to the best of the benchmarks (i.e. M5) in both watersheds, while outperforming the other approaches in terms of computational requirements when adopted on large datasets. In addition, the ranking of the input variables provided can be given a physically meaningful interpretation.

  14. Assessing the predictive capability of randomized tree-based ensembles in streamflow modelling

    NASA Astrophysics Data System (ADS)

    Galelli, S.; Castelletti, A.

    2013-07-01

    Combining randomization methods with ensemble prediction is emerging as an effective option to balance accuracy and computational efficiency in data-driven modelling. In this paper, we investigate the prediction capability of extremely randomized trees (Extra-Trees), in terms of accuracy, explanation ability and computational efficiency, in a streamflow modelling exercise. Extra-Trees are a totally randomized tree-based ensemble method that (i) alleviates the poor generalisation property and tendency to overfitting of traditional standalone decision trees (e.g. CART); (ii) is computationally efficient; and, (iii) allows one to infer the relative importance of the input variables, which might help in the ex-post physical interpretation of the model. The Extra-Trees potential is analysed on two real-world case studies - Marina catchment (Singapore) and Canning River (Western Australia) - representing two different morphoclimatic contexts. The evaluation is performed against other tree-based methods (CART and M5) and parametric data-driven approaches (ANNs and multiple linear regression). Results show that Extra-Trees perform comparably to the best of the benchmarks (i.e. M5) in both watersheds, while outperforming the other approaches in terms of computational requirements when adopted on large datasets. In addition, the ranking of the input variables provided can be given a physically meaningful interpretation.
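
    A minimal sketch of the modelling setup described in the two records above: an extremely-randomized-trees regressor fit on lagged rainfall and flow predictors, with the built-in variable ranking used for ex-post interpretation. Synthetic data stands in for the catchment records.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(42)
n = 2000
rain_t = rng.gamma(2, 2, n)       # synthetic rainfall at time t
rain_t1 = rng.gamma(2, 2, n)      # rainfall at t-1
flow_t1 = rng.gamma(3, 1, n)      # streamflow at t-1
X = np.column_stack([rain_t, rain_t1, flow_t1])
y = 0.3 * rain_t + 0.2 * rain_t1 + 0.6 * flow_t1 + rng.normal(0, 0.1, n)

model = ExtraTreesRegressor(n_estimators=200, random_state=0).fit(X, y)
for name, imp in zip(["rain_t", "rain_t-1", "flow_t-1"],
                     model.feature_importances_):
    print(f"{name}: {imp:.2f}")   # relative importance of each input variable
```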

  15. a Novel Approach of Indexing and Retrieving Spatial Polygons for Efficient Spatial Region Queries

    NASA Astrophysics Data System (ADS)

    Zhao, J. H.; Wang, X. Z.; Wang, F. Y.; Shen, Z. H.; Zhou, Y. C.; Wang, Y. L.

    2017-10-01

    Spatial region queries are more and more widely used in web-based applications. Mechanisms to provide efficient query processing over geospatial data are essential. However, due to the massive geospatial data volume, heavy geometric computation, and high access concurrency, it is difficult to get responses in real time. Spatial indexes are usually used in this situation. In this paper, based on the k-d tree, we introduce a distributed KD-Tree (DKD-Tree) suitable for polygon data, and a two-step query algorithm. The spatial index construction is recursive and iterative, and the query is an in-memory process. Both the index and query methods can be processed in parallel, and are implemented based on HDFS, Spark and Redis. Experiments on a large volume of remote sensing image metadata have been carried out, and the advantages of our method are investigated by comparison with spatial region queries executed on PostgreSQL and PostGIS. Results show that our approach not only greatly improves the efficiency of spatial region queries, but also has good scalability. Moreover, the two-step spatial range query algorithm can also save cluster resources to support a large number of concurrent queries. Therefore, this method is very useful when building large geographic information systems.
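
    A minimal sketch of the two-step region query under stated assumptions: a coarse filter on a k-d tree built over polygon bounding-box centres, then an exact refinement on the survivors. scipy's cKDTree stands in for the distributed KD-tree, and axis-aligned bounding boxes stand in for full polygon geometry.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(7)
centers = rng.uniform(0, 100, size=(10000, 2))
half = rng.uniform(0.1, 1.0, size=(10000, 1))       # square half-sizes
boxes = np.hstack([centers - half, centers + half])  # (xmin, ymin, xmax, ymax)

tree = cKDTree(centers)                              # step 1: build index once
q = np.array([50.0, 50.0, 55.0, 55.0])               # query rectangle

# coarse filter: any overlapping box has its centre within the query's
# half-diagonal plus the largest box half-diagonal of the query centre
r = np.hypot(q[2] - q[0], q[3] - q[1]) / 2 + np.sqrt(2) * half.max()
cand = tree.query_ball_point((q[:2] + q[2:]) / 2, r)

# step 2: exact rectangle-overlap refinement on the candidates only
hits = [i for i in cand
        if boxes[i, 0] <= q[2] and boxes[i, 2] >= q[0]
        and boxes[i, 1] <= q[3] and boxes[i, 3] >= q[1]]
print(len(cand), "candidates ->", len(hits), "exact hits")
```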

  16. Improved interface control for high-performance graphene-based organic solar cells

    NASA Astrophysics Data System (ADS)

    Jung, Seungon; Lee, Junghyun; Choi, Yunseong; Myeon Lee, Sang; Yang, Changduk; Park, Hyesung

    2017-12-01

    The demand for high-efficiency flexible optoelectronic devices is ever-increasing, as next-generation devices for portable and wearable electronic systems are set to play an important role. Graphene has received extensive attention as a promising candidate material for transparent flexible electrode platforms owing to its outstanding electrical, optical, and physical properties. Despite these properties, the inert and hydrophobic nature of graphene surfaces renders it difficult to use in optoelectronic devices. In particular, commonly used charge transporting layer (CTL) materials for organic solar cells (OSCs) cannot uniformly coat a graphene surface, which can lead to device failure. Herein, this paper proposes an approach that enables CTL materials to completely cover a graphene electrode with the assistance of commonly accessible polar solvents. The treated electrodes are successfully applied to various configurations of OSCs, with power conversion efficiencies of 8.17% for graphene electrode-based c-OSCs (OSCs with conventional structures), 8.38% for i-OSCs (OSCs with inverted structures), and 7.53% for flexible solar cells. The proposed approach is expected to bring about significant advances in the efficiency of graphene-based optoelectronic devices and to open up new possibilities for flexible optoelectronic systems.

  17. Using the nonlinear aquifer storage-discharge relationship to simulate the base flow of glacier- and snowmelt-dominated basins in northwest China

    NASA Astrophysics Data System (ADS)

    Gan, R.; Luo, Y.

    2013-09-01

    Base flow is an important component in hydrological modeling. This process is usually modeled using a linear aquifer storage-discharge relation, although the outflow from groundwater aquifers is nonlinear. To assess the accuracy of base flow estimates in rivers dominated by snowmelt and/or glacier melt in arid and cold northwestern China, a nonlinear storage-discharge relationship for use in SWAT (Soil and Water Assessment Tool) modeling was developed and applied to the Manas River basin in the Tian Shan Mountains. Linear reservoir models and a digital filter program were used for comparison. Meanwhile, numerical analysis of recession curves from 78 river gauge stations revealed variation in the parameters of the nonlinear relationship. It was found that the nonlinear reservoir model can improve the streamflow simulation, especially for low-flow periods. Higher Nash-Sutcliffe efficiency, logarithmic efficiency, and volumetric efficiency, and lower percent bias, were obtained compared to the single linear reservoir approach. The parameter b of the aquifer storage-discharge function varied mostly between 0.0 and 0.1, which is much smaller than the suggested value of 0.5. The coefficient a of the function is related to catchment properties, primarily the basin and glacier areas.
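
    The two storage-discharge formulations can be contrasted with a short recession simulation: the linear reservoir Q = kS against the nonlinear form Q = aS^b. The parameter values are illustrative; only the finding that b lies mostly between 0.0 and 0.1 comes from the abstract.

```python
import numpy as np

def recession(S0, outflow, days, dt=1.0):
    """Drain a reservoir with daily steps: S' = S - Q*dt, Q = outflow(S)."""
    S, Q = S0, []
    for _ in range(days):
        q = outflow(S)
        S = max(S - q * dt, 0.0)
        Q.append(q)
    return np.array(Q)

a, b, k = 5.0, 0.05, 0.01      # illustrative parameters (b in the 0.0-0.1 range)
linear = recession(1000.0, lambda S: k * S, 120)       # Q = k*S
nonlinear = recession(1000.0, lambda S: a * S ** b, 120)  # Q = a*S**b
print(linear[:3], nonlinear[:3])   # small b gives a much flatter recession limb
```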

  18. Direct procurement of wound management products.

    PubMed

    Jenkins, Trevor

    2014-03-01

    This article describes a collaborative project between Bedfordshire Community Health Services and Primary Care Trusts/Clinical Commissioning Groups to improve provision of dressings to nurses for the patients they treat. Commissioners have facilitated a transformational approach and encouraged development of efficient systems of increased cost-effectiveness rather than a transactional approach based on opportunistic cost improvement plans. Reconfiguration to direct procurement from GP prescribing has reduced wastage, released nurse time from processes to spend on clinical contact time with patients, increased efficiency, and reduced prescription workload for GPs, all without adverse effects on expenditure. Establishing a wound care products formulary placed control under the nurses treating patients and facilitated decision-making based on cost-effectiveness in clinical use. Nurses now manage 60% of expenditure in the local community health economy, and this is increasing. Relationships with the dressings manufacturing industry have also changed in a positive, constructive direction.

  19. Quantification and Segmentation of Brain Tissues from MR Images: A Probabilistic Neural Network Approach

    PubMed Central

    Wang, Yue; Adalı, Tülay; Kung, Sun-Yuan; Szabo, Zsolt

    2007-01-01

    This paper presents a probabilistic neural network-based technique for unsupervised quantification and segmentation of brain tissues from magnetic resonance images. It is shown that this problem can be solved by distribution learning and relaxation labeling, resulting in an efficient method that may be particularly useful in quantifying and segmenting abnormal brain tissues, where the number of tissue types is unknown and the distributions of tissue types heavily overlap. The new technique uses suitable statistical models for both the pixel and context images and formulates the problem in terms of model-histogram fitting and global consistency labeling. The quantification is achieved by probabilistic self-organizing mixtures and the segmentation by a probabilistic constraint relaxation network. The experimental results show the efficient and robust performance of the new algorithm and that it outperforms conventional classification-based approaches. PMID:18172510
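
    A minimal sketch of the distribution-learning step: fit a finite mixture to pixel intensities and assign tissue labels per pixel. sklearn's GaussianMixture stands in for the probabilistic self-organizing mixture network, and the context/relaxation-labeling stage is omitted.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# synthetic intensities for three overlapping "tissue" classes
pixels = np.concatenate([rng.normal(60, 8, 4000),     # CSF-like
                         rng.normal(110, 10, 6000),   # grey-matter-like
                         rng.normal(160, 9, 5000)])   # white-matter-like

gmm = GaussianMixture(n_components=3, random_state=0).fit(pixels.reshape(-1, 1))
labels = gmm.predict(pixels.reshape(-1, 1))
for k in range(3):
    print(f"class {k}: mean={gmm.means_[k, 0]:.1f}, "
          f"fraction={np.mean(labels == k):.2f}")     # per-tissue quantification
```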

  20. Market-Based Coordination of Thermostatically Controlled Loads—Part II: Unknown Parameters and Case Studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Sen; Zhang, Wei; Lian, Jianming

    This two-part paper considers the coordination of a population of Thermostatically Controlled Loads (TCLs) with unknown parameters to achieve group objectives. The problem involves designing the bidding and market clearing strategy to motivate self-interested users to realize efficient energy allocation subject to a peak power constraint. The companion paper (Part I) formulates the problem and proposes a load coordination framework using the mechanism design approach. To address the unknown parameters, Part II of this paper presents a joint state and parameter estimation framework based on the expectation maximization algorithm. The overall framework is then validated using real-world weather data and price data, and is compared with other approaches in terms of aggregated power response. Simulation results indicate that our coordination framework can effectively improve the efficiency of the power grid operations and reduce power congestion at key times.
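
    The market-clearing step can be sketched as a simple allocation of energy to the highest bids until the peak power constraint binds. The bids and constraint below are illustrative; the mechanism design of Part I and the EM-based estimation of Part II are not reproduced.

```python
# serve the highest-priced bids first until the peak power constraint binds
bids = [("tcl1", 0.31, 3.0), ("tcl2", 0.27, 2.5),    # (id, $/kWh bid, kW demand)
        ("tcl3", 0.24, 4.0), ("tcl4", 0.22, 3.5)]
peak_kw = 8.0                                        # illustrative constraint

allocated, used = {}, 0.0
for tcl, price, kw in sorted(bids, key=lambda b: b[1], reverse=True):
    grant = min(kw, peak_kw - used)
    if grant <= 0:
        break
    allocated[tcl] = grant
    used += grant

clearing_price = min(price for tcl, price, kw in bids if tcl in allocated)
print(allocated, "clearing price:", clearing_price)
```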
